_______

SELF MONITORING (MEASUREMENT) OF BLOOD GLUCOSE (SMBG):

_______

_______

Prologue:

Way back in 1991, on a Sunday afternoon, a young Parsi lady from Mumbai who was holidaying in a nearby village came to me with sudden breathlessness at my nursing home at Vapi, 160 km north of Mumbai. Clinical examination was normal except for the severe breathlessness. I suspected diabetic ketoacidosis and asked about a history of diabetes. The patient and her relatives flatly denied any history of diabetes and told me that she had recently been investigated in Mumbai for weakness and had no diabetes. In those days, glucometers were not available in India; we used to measure blood sugar by the Folin-Wu method. It being a Sunday, the laboratory was closed, so I could not test blood or urine sugar. I believed the relatives' story, ignored my gut feeling of diabetes, did an ECG and a chest X-ray (both normal), gave some primary care, and sent the patient back to Mumbai in their car. She was admitted in Mumbai the same night, and they repeated the same story of not having diabetes. However, blood tests in Mumbai revealed diabetic ketoacidosis, and she died there. To this day I feel guilty about missing the diagnosis of diabetes. If a glucometer had been available, I could have clinched the diagnosis, given insulin, and perhaps saved her life. I hope her relatives forgive me.

Diabetes was originally identified by the presence of glucose in the urine. Indian physicians around 3500 years ago identified diabetes and classified it as madhumeha, or honey urine, noting that the urine would attract ants. In the 18th and 19th centuries, the sweet taste of urine was used for diagnosis before chemical methods became available to detect sugars in the urine. Tests to measure glucose in the blood were developed over 100 years ago, and hyperglycemia subsequently became the sole criterion recommended for the diagnosis of diabetes. Self-monitoring of blood glucose (SMBG) has been described as one of the most important advances in diabetes management since the discovery of insulin in 1921.
Since approximately 1980, a primary goal of the management of type 1 and type 2 diabetes mellitus has been achieving closer-to-normal blood glucose levels for as much of the time as possible, guided by SMBG several times a day. The benefits include a reduction in the occurrence and severity of long-term complications of hyperglycemia, as well as a reduction in the short-term, potentially life-threatening complications of hypoglycemia. Unlike some other diseases that rely primarily on professional medical treatment, diabetes treatment requires active participation by the person who has it. Monitoring your blood glucose level on a regular basis and analyzing the results is widely believed to be a crucial part of the treatment equation. Worldwide, the glucose monitoring devices market is expected to exceed $16 billion by the end of this year. World Diabetes Day is celebrated every year on November 14, and this article aims to promote awareness of diabetes through SMBG.

________

Abbreviations and synonyms:

DM = diabetes mellitus = diabetes (and not diabetes insipidus)

T1DM = type 1 DM

T2DM = type 2 DM

BG = blood glucose

FPG = fasting plasma glucose (venous)

PPG = post-prandial plasma glucose (venous, 2 hours after meal)

FBS = fasting blood sugar (venous)

PPBS = post prandial blood sugar (venous, 2 hours after meal)

Pre-prandial = before meal

Post-prandial = after meal (usually 2 hour)

Fasting = no calorie intake for at least 8 hours

Random = anytime other than fasting (test taken from a non-fasting subject)

Glycated Hemoglobin = Hemoglobin A1c = HbA1c = A1c = A1C

SMBG = self monitoring (measurement) of blood glucose = capillary whole blood/plasma glucose

SMUG = self monitoring (measurement) of urine glucose

CGM = continuous glucose monitoring

Hyperglycemia = high blood glucose

Hypoglycemia = low blood glucose

When you ask a laboratory for FBS/PPBS, they usually perform FPG/PPG.

ADA = American Diabetes Association

IDF = International Diabetes Federation

__________

Terminology of Blood samples:

Blood is pumped around the body by the heart. The major vessels that take blood away from the heart are called arteries. The major vessels that take blood back to the heart are called veins. Between the two networks are many tiny blood vessels called capillaries. The composition of the blood in the three types of vessel varies slightly. When a blood sample is taken by the doctor or nurse, it is taken from a vein and called a venous sample. At the laboratory, the blood may be analysed as it is, in which case it is a ‘whole blood’ measurement. Often the clear liquid part of the blood is separated from the red blood cells. This yields either serum or plasma: plasma if the sample tube contains a special reagent called an anticoagulant, serum if the blood is allowed to clot. A serum or plasma measurement of glucose will give a result which is 10–15% higher than a whole blood measurement. When a home blood glucose test is performed, blood is usually taken from a finger-prick sample, which gives capillary whole blood glucose. There is now a move towards all glucometers giving plasma-calibrated results so that readings made at home and at the laboratory can be more easily compared.
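The whole blood vs. plasma difference described above can be sketched in code. The 1.11 factor below (an 11% increase, within the 10–15% range mentioned) is the conversion factor commonly used by meter manufacturers; treat the exact value here as an assumption.

```python
# Sketch: converting a capillary whole-blood glucose reading to a
# plasma-equivalent value. The 1.11 factor is an assumption based on the
# commonly used conversion; real meters apply their own calibration.

WHOLE_BLOOD_TO_PLASMA = 1.11

def plasma_equivalent(whole_blood_mgdl: float) -> float:
    """Approximate plasma glucose (mg/dL) from a whole-blood measurement."""
    return whole_blood_mgdl * WHOLE_BLOOD_TO_PLASMA
```

So a whole-blood reading of 100 mg/dL corresponds to roughly 111 mg/dL of plasma glucose, which is why plasma-calibrated home readings compare more directly with laboratory results.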

_

Sugar vs. glucose:

Sugar is the generalized name for sweet, short-chain, soluble carbohydrates, many of which are used in food. They are carbohydrates, composed of carbon, hydrogen, and oxygen. Simple sugars are called monosaccharides and include glucose (also known as dextrose), fructose and galactose. The table or granulated sugar most customarily used as food is sucrose, a disaccharide. (In the body, sucrose hydrolyses into fructose and glucose.) Other disaccharides include maltose and lactose. In a physiological context, the term sugar is something of a misnomer: it is used to mean glucose, even though other sugars besides glucose are always present. Food contains several different types [e.g., fructose (largely from fruits/table sugar/industrial sweeteners), galactose (milk and dairy products), as well as several food additives such as sorbitol, xylose, maltose, etc.]. But because these other sugars are largely inert with regard to the metabolic control system (i.e., that controlled by insulin secretion), and since glucose is the dominant controlling signal for metabolic regulation, the term has gained currency, and is used by medical staff and lay folk alike. In this article, sugar means glucose.

_

Units of glucose measurement in blood/plasma:

A blood glucose measurement gives the concentration of glucose in your bloodstream; the result is expressed as the amount of glucose per unit volume (of whole blood/plasma/serum). The unit used for the concentration of blood or plasma glucose can be either a weight concentration (mg/dl) or a molarity (mmol/l). In some countries, millimoles are used to measure the amount, and the ‘unit volume’ of blood is taken to be one liter; ‘millimoles per liter’ is written mmol/l. In the US, and some other countries, milligrams are used to measure the amount and a deciliter is taken as the ‘unit volume’; ‘milligrams per deciliter’ is written mg/dl. A deciliter is 100 ml. Since the molecular weight of glucose (C6H12O6) is 180, the difference between the two scales is a factor of 18, so that 1 mmol/L of glucose is equivalent to 18 mg/dL. To convert from mmol/l to mg/dl, simply multiply the figure by 18; to convert from mg/dl to mmol/l, divide by 18.
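The factor-of-18 conversion described above is easy to get backwards in practice, so a small helper sketch may be useful:

```python
# Unit conversion between the two glucose scales described above.
# The factor 18 follows from glucose's molecular weight of 180 g/mol
# and the liter-to-deciliter volume change.

MGDL_PER_MMOLL = 18.0

def mmoll_to_mgdl(mmol_per_l: float) -> float:
    """Convert a glucose concentration from mmol/L to mg/dL."""
    return mmol_per_l * MGDL_PER_MMOLL

def mgdl_to_mmoll(mg_per_dl: float) -> float:
    """Convert a glucose concentration from mg/dL to mmol/L."""
    return mg_per_dl / MGDL_PER_MMOLL
```

For example, the typical mean level of 5.5 mmol/L converts to 99 mg/dL, which the text rounds to 100 mg/dL.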

________

Introduction to SMBG:

An important goal in the treatment of diabetes is to achieve and maintain blood glucose levels as close to normal as possible. That is why it is essential to train patients to effectively self-manage their diabetes, not only to improve their treatment but also to improve their quality of life. The development in the late 1970s of methods to self-monitor blood glucose levels was an indispensable prerequisite for this. Only through regular self-monitoring of blood glucose levels (SMBG) has it become possible to coordinate drug therapy, food intake, and exercise so that good metabolic control can be achieved. Furthermore, it has become easier to identify asymptomatic hypo- and hyperglycemia and blood glucose fluctuations.

_

Blood glucose monitoring is a way of testing the concentration of glucose in the blood (glycemia). Particularly important in the care of diabetes mellitus, a blood glucose test is performed by piercing the skin (typically, on the finger) to draw blood, then applying the blood to a chemically active disposable ‘test-strip’. Different manufacturers use different technology, but most systems measure an electrical characteristic, and use this to determine the glucose level in the blood. The test is usually referred to as capillary blood glucose. Blood glucose monitoring reveals individual patterns of blood glucose changes, and helps in the planning of meals, activities, and at what time of day to take medications. Also, testing allows for quick response to high blood sugar (hyperglycemia) or low blood sugar (hypoglycemia). This might include diet adjustments, exercise, and insulin (as instructed by the health care provider).

_

Self-monitoring of blood glucose (SMBG) has been accepted as an important instrument that empowers people with diabetes to achieve and maintain therapeutic goals. Nevertheless, it is underprescribed and underused by patients. On the other hand, determination of HbA1c is accepted as the gold standard for assessing glycemic control, but its limitations are not sufficiently appreciated. Patients with normal or near-normal HbA1c levels may still display postprandial hyperglycemia, putting them at risk for long-term adverse outcomes. In addition, frequent unrecognized hypoglycemia may lead to falsely low HbA1c levels, and HbA1c does not allow any estimate of glycemic variability. Immediate blood glucose control is best assessed by SMBG because it provides timely information on hyperglycemia and hypoglycemia. Thus, SMBG is a prerequisite for implementing strategies to treat optimally and to avoid out-of-range glucose values.

Healthcare professionals must be capable of making evidence-based clinical decisions regarding the use of SMBG, balancing issues such as patient abilities, costs, and clinical outcomes. As a foundation of diabetes treatment, blood glucose monitoring helps clinically determine the state of carbohydrate metabolism, formulate therapeutic measures, evaluate their effects, and achieve optimal blood glucose control. Intensive blood glucose monitoring and strict blood glucose control can significantly reduce or postpone the occurrence and progression of chronic diabetic complications. It is important to obtain accurate blood glucose readings, since concentrations fluctuate over time with factors such as daily activity, mental state, diet composition, and environmental changes. Beyond its clinical applications in diabetes, blood glucose monitoring is also used by food and nutrition researchers to investigate carbohydrate-induced glycemic responses.

______

Blood glucose basics:

Glucose is the most important carbohydrate fuel in the body. In the fed state, the majority of circulating glucose comes from the diet; in the fasting state, gluconeogenesis and glycogenolysis maintain glucose concentrations. Very little glucose is found in the diet as glucose; most is found in more complex carbohydrates that are broken down to monosaccharides through the digestive process. About half of the total carbohydrates in the diet are in the form of polysaccharides and the remainder as simpler sugars. About two-thirds of the sugar in the diet is sucrose, which is a disaccharide of glucose and fructose. Glucose is classified as a monosaccharide because it cannot be broken down further by hydrolysis. It is further classified as a hexose because of its six-carbon skeleton and as an aldose because of the presence of an aldehyde group on carbon 1. The aldehyde group condenses with a hydroxyl group so that glucose exists as a hemiacetal ring structure. This ring structure explains many of the reactions of glucose. Ordinarily the concentration of glucose in the blood is maintained at a relatively stable level of 80 to 120 mg/dl. The strong reducing properties of glucose made it relatively easy to measure, and thus the clinical estimation of circulating glucose was one of the earliest tests available to the clinician. The introduction of microglucose oxidase technology has now made it possible for patients to measure their own blood glucose concentration, and this undoubtedly makes the estimation of blood glucose the most widely used test of blood chemistry.

_

Natural blood glucose regulation:

Glucose is a simple sugar that serves as an immediate and primary source of energy for all of the cells in our body. Glucose (C6H12O6) is a carbohydrate whose most important function is to act as a source of energy for the human body, by being the essential precursor in the synthesis of ATP (adenosine triphosphate). The energy stored in ATP can then be used to drive processes requiring energy, including biosynthesis, locomotion, and transport of molecules across cell membranes. According to cellular requirements, glucose can also be used in the creation of proteins, glycogen, and lipids. The blood glucose concentration is very tightly regulated. The human body has two pancreatic hormones with opposite effects: insulin and glucagon. Insulin is produced by the beta cells of the pancreas, while glucagon is produced by the alpha cells. The release of insulin is triggered by high levels of glucose in the bloodstream, while glucagon is released when blood glucose levels are low.

This blood glucose regulation process can be explained in the following steps:

1. After glucose has been absorbed from the food eaten, it is released into the bloodstream. High blood glucose levels trigger the pancreas to produce insulin. Insulin enables the muscle cells to take up glucose as their source of energy and to form a molecule called glycogen, which works as secondary energy storage for when glucose levels are low. In the liver cells, insulin instigates the conversion of glucose into glycogen and fat. In the fat cells of the adipose tissue, insulin also promotes the uptake of glucose and its conversion into more fat.

2. The pancreas will continue to release insulin, and liver and fat cells will continue to take up glucose, until the glucose concentration drops below a threshold; at that point, glucagon is released instead of insulin.

3. When glucagon reaches the liver cells, it initiates the conversion of glycogen into glucose, and of fat into fatty acids, which many body cells can then use for energy. The cells will continue to burn fat from the adipose tissue as an energy source, and then the protein of the muscles, until glucose levels rise again with the digestion of food, which completes the cycle.
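The decision logic in the three steps above amounts to a simple threshold rule. A minimal sketch follows; the numeric thresholds are illustrative assumptions, not physiological constants:

```python
# Toy sketch of the insulin/glucagon feedback logic described in steps 1-3.
# The threshold values below are illustrative assumptions only.

LOW_THRESHOLD_MGDL = 70    # assumed trigger for glucagon release
HIGH_THRESHOLD_MGDL = 120  # assumed trigger for insulin release

def pancreatic_response(glucose_mgdl: float) -> str:
    """Return which hormone dominates at a given blood glucose level."""
    if glucose_mgdl > HIGH_THRESHOLD_MGDL:
        return "insulin"    # beta cells: promote uptake, glycogen and fat storage
    if glucose_mgdl < LOW_THRESHOLD_MGDL:
        return "glucagon"   # alpha cells: promote glycogenolysis and fat breakdown
    return "baseline"       # within the normal band, minimal net response
```

The key design point mirrored here is that the two hormones are mutually exclusive responses to opposite deviations from the normal band, which is what keeps the concentration tightly regulated.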

_

Homeostasis of glucose:

_

Essentially, blood glucose levels determine when these hormones are secreted. The glucose in blood is obtained from the food that you eat. It is absorbed by the intestines and distributed through the bloodstream to all of the cells in the body, which break it down for energy. The body tries to maintain a constant supply of glucose for your cells by maintaining a constant blood glucose concentration. The concentration of glucose in blood, expressed in mg/dl, is called glycemia.

_

Glucose metabolism in normal person:

_

The blood sugar concentration or blood glucose level is the amount of glucose (sugar) present in the blood of a human or animal. The body naturally tightly regulates blood glucose levels as a part of metabolic homeostasis. With some exceptions, glucose is the primary source of energy for the body’s cells, and blood lipids (in the form of fats and oils) are primarily a compact energy store. Glucose is transported from the intestines or liver to body cells via the bloodstream, and is made available for cell absorption via the hormone insulin, produced by the body primarily in the pancreas. Glucose levels are usually lowest in the morning, before the first meal of the day (termed “the fasting level”), and rise after meals for an hour or two by a few millimoles. Blood sugar levels outside the normal range may be an indicator of a medical condition. A persistently high level is referred to as hyperglycemia; low levels are referred to as hypoglycemia. Diabetes mellitus is characterized by persistent hyperglycemia from any of several causes, and is the most prominent disease related to failure of blood sugar regulation. Intake of alcohol causes an initial surge in blood sugar, and later tends to cause levels to fall. Also, certain drugs can increase or decrease glucose levels.

_

The figure below shows fluctuation of blood sugar (red) and the sugar-lowering hormone insulin (blue) in humans during the course of a day with three meals. One of the effects of a sugar-rich vs. a starch-rich meal is highlighted.

_

Normal values in humans:

Normal value ranges may vary slightly among different laboratories. Many factors affect a person’s blood sugar level. The body’s homeostatic mechanism, when operating normally, restores the blood sugar level to a narrow range of about 4.4 to 6.1 mmol/L (79.2 to 110 mg/dL) (as measured by a fasting blood glucose test). The normal fasting blood glucose level for non-diabetics should be between 70 and 100 milligrams per deciliter (mg/dL). The mean normal blood glucose level in humans is about 5.5 mM (5.5 mmol/L or 100 mg/dL); however, this level fluctuates throughout the day. Blood sugar levels for those without diabetes who are not fasting should be below 125 mg/dL. Despite widely variable intervals between meals or the occasional consumption of meals with a substantial carbohydrate load, human blood glucose levels tend to remain within the normal range. However, shortly after eating, the blood glucose level may rise in non-diabetics temporarily up to 7.8 mmol/L (140 mg/dL) or slightly more. The actual amount of glucose in the blood and body fluids is very small. In a healthy adult male of 75 kg with a blood volume of 5 liters, a blood glucose level of 5.5 mmol/L (100 mg/dL) amounts to 5 grams, slightly less than two typical American restaurant sugar packets for coffee or tea. Part of the reason why this amount is so small is that, to maintain an influx of glucose into cells, enzymes modify glucose by adding phosphate or other groups to it.
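The "about 5 grams" figure above follows directly from the concentration and the assumed 5 L blood volume; a quick worked check:

```python
# Worked check of the "5 grams of circulating glucose" figure above,
# computed from concentration and blood volume (5 L is the volume
# assumed in the text).

def circulating_glucose_grams(conc_mgdl: float, blood_volume_l: float) -> float:
    """Total glucose (g) in the bloodstream at a given concentration."""
    mg_per_l = conc_mgdl * 10            # 1 L = 10 dL
    total_mg = mg_per_l * blood_volume_l
    return total_mg / 1000               # mg -> g
```

At 100 mg/dL and 5 L of blood this gives exactly 5 g, matching the text.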

________

Diabetes Mellitus:

Diabetes mellitus (DM) refers to a group of common metabolic disorders that share the phenotype of hyperglycemia. Several distinct types of DM are caused by a complex interaction of genetics and environmental factors. Depending on the etiology of the DM, factors contributing to hyperglycemia include reduced insulin secretion, decreased glucose utilization, and increased glucose production. The metabolic dysregulation associated with DM causes secondary pathophysiologic changes in multiple organ systems that impose a tremendous burden on the individual with diabetes and on the health care system.

_

Diagnosis of DM:

_

Classification of DM:

_

The figure above shows the spectrum of glucose homeostasis and diabetes mellitus (DM). The spectrum from normal glucose tolerance to diabetes in type 1 DM, type 2 DM, other specific types of diabetes (type 3 DM), and gestational DM (type 4 DM) is shown from left to right. In most types of DM, the individual traverses from normal glucose tolerance to impaired glucose tolerance to overt diabetes (these should be viewed not as abrupt categories but as a spectrum). Arrows indicate that changes in glucose tolerance may be bidirectional in some types of diabetes. For example, individuals with type 2 DM may return to the impaired glucose tolerance category with weight loss; in gestational DM, diabetes may revert to impaired glucose tolerance or even normal glucose tolerance after delivery. The fasting plasma glucose (FPG), the 2-h plasma glucose (PG) after a glucose challenge, and the A1c for the different categories of glucose tolerance are shown in the lower part of the figure. These values do not apply to the diagnosis of gestational DM. The World Health Organization uses an FPG of 110–125 mg/dL for the prediabetes category. Some types of DM may or may not require insulin for survival. *Some use the term “increased risk for diabetes” (ADA) or “intermediate hyperglycemia” (WHO) rather than “prediabetes.”
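Since the figure's table of cutoff values is not reproduced in the text, the FPG bands it describes can be sketched using the widely cited ADA criteria; the specific numbers below are supplied here as an assumption, not taken from the figure:

```python
# Sketch of the FPG glucose-tolerance categories the figure describes,
# using commonly cited ADA cutoffs (assumed here; the figure's own table
# is not reproduced in the text).

def classify_fpg(fpg_mgdl: float) -> str:
    """Classify a fasting plasma glucose value (mg/dL)."""
    if fpg_mgdl >= 126:
        return "diabetes"
    if fpg_mgdl >= 100:
        return "prediabetes (IFG)"  # WHO instead uses 110-125 for this band
    return "normal"
```

As the text notes, a WHO-style variant would move the lower bound of the prediabetes band from 100 to 110 mg/dL.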

 _

DM is classified on the basis of the pathogenic process that leads to hyperglycemia, as opposed to earlier criteria such as age of onset or type of therapy. The two broad categories of DM are designated type 1 (T1DM) and type 2 (T2DM). Both types of diabetes are preceded by a phase of abnormal glucose homeostasis as the pathogenic processes progress. Type 1 DM is the result of complete or near-total insulin deficiency.

Glucose metabolism in T1DM:

 

_

Type 2 DM is a heterogeneous group of disorders characterized by variable degrees of insulin resistance, impaired insulin secretion, and increased glucose production. Distinct genetic and metabolic defects in insulin action and/or secretion give rise to the common phenotype of hyperglycemia in type 2 DM and have important potential therapeutic implications now that pharmacologic agents are available to target specific metabolic derangements. Type 2 DM is preceded by a period of abnormal glucose homeostasis classified as impaired fasting glucose (IFG) or impaired glucose tolerance (IGT).

Glucose metabolism in T2DM:

 

_

Two features of the current classification of DM diverge from previous classifications. First, the terms insulin-dependent diabetes mellitus (IDDM) and non-insulin-dependent diabetes mellitus (NIDDM) are obsolete. Since many individuals with type 2 DM eventually require insulin treatment for control of glycemia, the use of the term NIDDM generated considerable confusion. A second difference is that age is not a criterion in the classification system. Although type 1 DM most commonly develops before the age of 30, an autoimmune beta cell destructive process can develop at any age. It is estimated that between 5 and 10% of individuals who develop DM after age 30 years have type 1 DM. Although type 2 DM more typically develops with increasing age, it is now being diagnosed more frequently in children and young adults, particularly in obese adolescents.

_

Comparison of variation of plasma glucose between non-diabetic and diabetic individuals:

_

Other Types of DM (type 3 DM):

Other etiologies for DM include specific genetic defects in insulin secretion or action, metabolic abnormalities that impair insulin secretion, mitochondrial abnormalities, and a host of conditions that impair glucose tolerance. Maturity-onset diabetes of the young (MODY) is a subtype of DM characterized by autosomal dominant inheritance, early onset of hyperglycemia (usually <25 years), and impairment in insulin secretion. Mutations in the insulin receptor cause a group of rare disorders characterized by severe insulin resistance. DM can result from pancreatic exocrine disease when the majority of pancreatic islets are destroyed. Cystic fibrosis-related DM is an important consideration in this patient population. Hormones that antagonize insulin action can also lead to DM. Thus, DM is often a feature of endocrinopathies such as acromegaly and Cushing’s disease. Viral infections have been implicated in pancreatic islet destruction but are an extremely rare cause of DM. A form of acute onset of type 1 diabetes, termed fulminant diabetes, has been noted in Japan and may be related to viral infection of islets.

_

Gestational Diabetes Mellitus (GDM) (type 4 DM):

Glucose intolerance developing during pregnancy is classified as gestational diabetes. Insulin resistance is related to the metabolic changes of late pregnancy, and the increased insulin requirements may lead to IGT or diabetes. GDM occurs in 7% (range 2–10%) of pregnancies in the United States; most women revert to normal glucose tolerance postpartum but have a substantial risk (35–60%) of developing DM in the next 10–20 years. The International Diabetes and Pregnancy Study Groups now recommend that diabetes diagnosed at the initial prenatal visit should be classified as “overt” diabetes rather than gestational diabetes.

_

Note:

This article is about SMBG, not DM, so a detailed discussion of DM is beyond its scope.

_

Hyperglycemia vs. DM:

For the diagnosis of DM, persistent hyperglycemia is a must, but occasionally you may have transient hyperglycemia without DM. Stress hyperglycemia is a medical term referring to transient elevation of the blood glucose due to the stress of illness. Transient hyperglycemia occurs as part of the stress response in acute illnesses and is brought about by elevated levels of counter-regulatory hormones. It usually resolves spontaneously. Stress hyperglycemia is especially common in patients with hypertonic dehydration and those with elevated catecholamine levels (e.g., after emergency department treatment of acute asthma with epinephrine). Steroid diabetes is a specific and prolonged form of stress hyperglycemia. In some people, stress hyperglycemia may indicate a reduced insulin secretory capacity or a reduced insulin sensitivity, and is sometimes the first clue to incipient diabetes. Because of this, it is occasionally appropriate to perform diabetes screening tests after recovery from an illness in which significant stress hyperglycemia occurred. Even fear of needles or pain during blood collection may provoke transient hyperglycemia. Blood glucose can also be raised by certain drugs or by intravenous glucose.

_

Why do diabetics develop complications?

Chronic elevation of blood glucose level leads to damage of blood vessels (angiopathy). The endothelial cells lining the blood vessels take in more glucose than normal, since they do not depend on insulin. They then form more surface glycoproteins than normal, and cause the basement membrane to grow thicker and weaker. In diabetes, the resulting problems are grouped under “microvascular disease” (due to damage to small blood vessels) and “macrovascular disease” (due to damage to the arteries). The risk of chronic complications increases as a function of the duration and degree of hyperglycemia; they usually do not become apparent until the second decade of hyperglycemia. Since type 2 DM often has a long asymptomatic period of hyperglycemia, many individuals with type 2 DM have complications at the time of diagnosis. The microvascular complications of both type 1 and type 2 DM result from chronic hyperglycemia. Large, randomized clinical trials of individuals with type 1 or type 2 DM have conclusively demonstrated that a reduction in chronic hyperglycemia prevents or delays retinopathy, neuropathy, and nephropathy. Other incompletely defined factors may modulate the development of complications. For example, despite long-standing DM, some individuals never develop nephropathy or retinopathy. The fact that 40% of diabetics who carefully control their blood sugar nevertheless develop neuropathy, and that some of those with good blood sugar control still develop nephropathy, requires explanation. Many of these patients have glycemic control that is indistinguishable from those who develop microvascular complications, suggesting that there is a genetic susceptibility for developing particular complications. The familial clustering of the degree and type of diabetic complications indicates that genetics may also play a role in causing complications such as diabetic retinopathy and nephropathy. 
Non-diabetic offspring of type 2 diabetics have been found to have increased arterial stiffness and neuropathy despite normal blood glucose levels, and elevated enzyme levels associated with diabetic renal disease have been found in non-diabetic first-degree relatives of diabetics. Evidence implicating a causative role for chronic hyperglycemia in the development of macrovascular complications is less conclusive. However, coronary heart disease events and mortality rate are two to four times greater in patients with type 2 DM. These events correlate with fasting and postprandial plasma glucose levels as well as with the A1c. Other factors (dyslipidemia and hypertension) also play important roles in macrovascular complications.

Mechanisms of Complications:

Although chronic hyperglycemia is an important etiologic factor leading to complications of DM, the mechanism(s) by which it leads to such diverse cellular and organ dysfunction is unknown. At least four prominent theories, which are not mutually exclusive, have been proposed to explain how hyperglycemia might lead to the chronic complications of DM. An emerging hypothesis is that hyperglycemia leads to epigenetic changes in the affected cells. One theory is that increased intracellular glucose leads to the formation of advanced glycosylation end products (AGEs), which bind to a cell surface receptor, via the nonenzymatic glycosylation of intra- and extracellular proteins. Nonenzymatic glycosylation results from the interaction of glucose with amino groups on proteins. AGEs have been shown to cross-link proteins (e.g., collagen, extracellular matrix proteins), accelerate atherosclerosis, promote glomerular dysfunction, reduce nitric oxide synthesis, induce endothelial dysfunction, and alter extracellular matrix composition and structure. The serum level of AGEs correlates with the level of glycemia, and these products accumulate as the glomerular filtration rate (GFR) declines. A second theory is based on the observation that hyperglycemia increases glucose metabolism via the sorbitol pathway. Intracellular glucose is predominantly metabolized by phosphorylation and subsequent glycolysis, but when increased, some glucose is converted to sorbitol by the enzyme aldose reductase. Increased sorbitol concentration alters redox potential, increases cellular osmolality, generates reactive oxygen species, and likely leads to other types of cellular dysfunction. However, testing of this theory in humans, using aldose reductase inhibitors, has not demonstrated significant beneficial effects on clinical endpoints of retinopathy, neuropathy, or nephropathy. 
A third theory proposes that hyperglycemia increases the formation of diacylglycerol leading to activation of protein kinase C (PKC). Among other actions, PKC alters the transcription of genes for fibronectin, type IV collagen, contractile proteins, and extracellular matrix proteins in endothelial cells and neurons. Inhibitors of PKC are being studied in clinical trials. A fourth theory proposes that hyperglycemia increases the flux through the hexosamine pathway, which generates fructose-6-phosphate, a substrate for O-linked glycosylation and proteoglycan production. The hexosamine pathway may alter function by glycosylation of proteins such as endothelial nitric oxide synthase or by changes in gene expression of transforming growth factor β (TGF-β) or plasminogen activator inhibitor-1 (PAI-1). Growth factors appear to play an important role in some DM-related complications, and their production is increased by most of these proposed pathways. Vascular endothelial growth factor A (VEGF-A) is increased locally in diabetic proliferative retinopathy and decreases after laser photocoagulation. TGF-β is increased in diabetic nephropathy and stimulates basement membrane production of collagen and fibronectin by mesangial cells. Other growth factors, such as platelet-derived growth factor, epidermal growth factor, insulin-like growth factor I, growth hormone, basic fibroblast growth factor, and even insulin, have been suggested to play a role in DM-related complications. A possible unifying mechanism is that hyperglycemia leads to increased production of reactive oxygen species or superoxide in the mitochondria; these compounds may activate all four of the pathways described above. Although hyperglycemia serves as the initial trigger for complications of diabetes, it is still unknown whether the same pathophysiologic processes are operative in all complications or whether some pathways predominate in certain organs.

___

Blood glucose tests:

  1. fasting blood sugar (i.e., glucose) test (FBS)—i.e., fasting plasma glucose (FPG)
  2. two-hr postprandial blood sugar test (2-h PPBS)—i.e., postprandial plasma glucose (PPG)
  3. oral glucose tolerance test (OGTT)
  4. intravenous glucose tolerance test (IVGTT)
  5. glycated hemoglobin (HbA1c)
  6. self-monitoring of blood glucose (SMBG) by the patient
  7. random blood sugar (RBS)
  8. estimated average glucose (eAG), calculated from the measured HbA1c

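Item 8 deserves a numeric illustration. The widely published ADA/ADAG regression, eAG (mg/dL) = 28.7 × HbA1c (%) − 46.7, converts a laboratory HbA1c into an estimated average glucose comparable with meter readings. A minimal sketch (function names are illustrative, not from any standard library):

```python
def hba1c_to_eag_mgdl(hba1c_percent: float) -> float:
    """Estimated average glucose (mg/dL) from HbA1c (%), per the ADAG regression."""
    return 28.7 * hba1c_percent - 46.7

def hba1c_to_eag_mmol(hba1c_percent: float) -> float:
    """Same estimate in mmol/L (1 mmol/L of glucose is about 18 mg/dL)."""
    return hba1c_to_eag_mgdl(hba1c_percent) / 18.02

# An HbA1c of 7% corresponds to an eAG of roughly 154 mg/dL (about 8.6 mmol/L).
```

This is an estimate of an average, not a substitute for individual readings: two patients with the same eAG can have very different glucose excursions.
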
_

What are the Target Ranges?

Blood glucose targets are individualized based on:

  • duration of diabetes
  • age/life expectancy
  • comorbid conditions
  • known CVD or advanced microvascular complications
  • hypoglycemia unawareness
  • individual patient considerations.

____

_____

Blood glucose profile:

No matter how mild your diabetes may be, it is very unlikely that any physician can tell you how to normalize your blood sugars throughout the day without knowing what your blood glucose values are around the clock. Don’t believe anyone who tells you otherwise. The only way to know what your around-the-clock levels are is to monitor them yourself. A table of blood sugar levels, with associated events (meals, exercise, and so on), measured at least 4 times daily over a number of days, is the key element in what is called a blood glucose profile. This profile gives you and your physician or diabetes educator a glimpse of how your medication, lifestyle, and diet converge and how they affect your blood sugars. Without this information, it is impossible to come up with a treatment plan that will normalize blood sugars. If your treatment includes insulin injections before each meal, your diabetes is probably severe enough that your body cannot automatically correct small deviations from a target blood glucose range. To achieve blood sugar normalization, it may therefore be necessary for you to record blood glucose profiles every day for the rest of your life, so that you can fine-tune any out-of-range values. If you are not treated with insulin, or if you have a very mild form of insulin-treated diabetes, it may only be necessary to prepare blood glucose profiles when needed for readjustment of your diet or medication. Typically, this might be for one to two weeks prior to every routine follow-up visit to your physician, and for a few weeks while your treatment plan is being fine-tuned for the first time. After all, your physician or diabetes educator cannot tell whether a new regimen is working properly without seeing your blood glucose profiles. It is wise, however, to do a blood glucose profile for 1 day at least every other week, so you can be assured that things are continuing as planned.
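
A blood glucose profile is, at bottom, a time-stamped log of readings and associated events, from which daily summaries and out-of-range values can be pulled. A minimal sketch (all class and method names are hypothetical, and the 70–130 mg/dL default range is just the target band quoted later in this chapter):

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Reading:
    day: str            # e.g. "2024-03-01"
    time: str           # e.g. "07:30"
    glucose_mgdl: float
    event: str = ""     # meal, exercise, insulin dose, etc.

@dataclass
class GlucoseProfile:
    readings: list = field(default_factory=list)

    def add(self, reading: Reading) -> None:
        self.readings.append(reading)

    def daily_mean(self, day: str) -> float:
        """Average of all readings recorded on the given day."""
        return mean(r.glucose_mgdl for r in self.readings if r.day == day)

    def out_of_range(self, low: float = 70, high: float = 130) -> list:
        """Readings outside the target range, for review with a physician."""
        return [r for r in self.readings if not low <= r.glucose_mgdl <= high]
```

Commercial diabetes-management software does essentially this, plus trend plotting; the point of the sketch is only that four readings a day with events attached is enough structure to answer the questions a physician will ask.
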

_______

Hypo and hyperglycemia:

Levels which are significantly above or below the normal range are problematic and can in some cases be dangerous. A level of <3.9 mmol/L (<70 mg/dL) is usually described as a hypoglycemic attack (low blood sugar). Most diabetics know when they are going to “go hypo” and are usually able to eat some food or drink something sweet to raise levels. A patient who is hyperglycemic (high blood glucose) can also become temporarily hypoglycemic under certain conditions (e.g., not eating regularly, or after strenuous exercise followed by fatigue). Intensive efforts to achieve blood sugar levels close to normal have been shown to triple the risk of the most severe form of hypoglycemia, in which the patient requires assistance from bystanders in order to treat the episode. In the U.S. there were annually 48,500 hospitalizations for diabetic hypoglycemia and 13,100 for diabetic hypoglycemia resulting in coma in the period 1989 to 1991, before intensive blood sugar control was as widely recommended as it is today. One study found that hospital admissions for diabetic hypoglycemia increased by 50% from 1990-1993 to 1997-2000, as strict blood sugar control efforts became more common. Among intensively controlled type 1 diabetics, 55% of episodes of severe hypoglycemia occur during sleep, and 6% of all deaths in diabetics under the age of 40 are from nocturnal hypoglycemia in the so-called ‘dead-in-bed syndrome,’ while National Institutes of Health statistics show that 2% to 4% of all deaths in diabetics are from hypoglycemia. In children and adolescents under intensive blood sugar control, 21% of hypoglycemic episodes occurred without explanation. In addition to the deaths caused by diabetic hypoglycemia, periods of severe low blood sugar can also cause permanent brain damage. Interestingly, although diabetic nerve disease is usually associated with hyperglycemia, hypoglycemia can also initiate or worsen neuropathy in diabetics intensively struggling to reduce their hyperglycemia. 
Levels greater than 13-15 mmol/L (230–270 mg/dL) are considered high and should be monitored closely to ensure that they fall rather than remain high. The patient is advised to seek urgent medical attention as soon as possible if blood sugar levels continue to rise after 2-3 tests. High blood sugar levels are known as hyperglycemia, which is not as easy to detect as hypoglycemia and usually develops over a period of days rather than hours or minutes. If left untreated, it can result in diabetic coma and death. Prolonged and elevated levels of glucose in the blood, left unchecked and untreated, will over time result in serious diabetic complications in those susceptible, and sometimes even death. There is currently no way of testing for susceptibility to complications. Diabetics are therefore recommended to check their blood sugar levels either daily or every few days. Diabetes management software is also available from blood glucose meter manufacturers, which can display results and trends over time. Type 1 diabetics normally check more often, due to insulin therapy. A history of blood sugar level results is especially useful for the diabetic to present to their physician in the monitoring and control of the disease. Failure to maintain a strict regimen of testing can accelerate symptoms of the condition, and it is therefore imperative that any diabetic patient monitor their glucose levels regularly.
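
The two unit systems used in the thresholds above are related through the molar mass of glucose (about 180 g/mol), so 1 mmol/L ≈ 18 mg/dL. A small conversion sketch (function names are illustrative):

```python
MGDL_PER_MMOLL = 18.02  # from glucose's molar mass of ~180.2 g/mol

def mmoll_to_mgdl(mmol: float) -> float:
    """Convert a glucose concentration from mmol/L to mg/dL."""
    return mmol * MGDL_PER_MMOLL

def mgdl_to_mmoll(mgdl: float) -> float:
    """Convert a glucose concentration from mg/dL to mmol/L."""
    return mgdl / MGDL_PER_MMOLL

# 3.9 mmol/L (the usual hypoglycemia threshold) is about 70 mg/dL,
# and 250 mg/dL is about 13.9 mmol/L.
```

Mixing the two systems is a common source of charting errors; meters sold in different countries default to different units for the same measurement.
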

_

Hypoglycemia is most commonly caused by drugs used to treat diabetes mellitus or by exposure to other drugs, including alcohol. However, a number of other disorders, including critical organ failure, sepsis and inanition, hormone deficiencies, non–beta-cell tumors, insulinoma, and prior gastric surgery, may cause hypoglycemia. Hypoglycemia is most convincingly documented by Whipple’s triad: (1) symptoms consistent with hypoglycemia, (2) a low plasma glucose concentration measured with a precise method, and (3) relief of those symptoms after the plasma glucose level is raised. The lower limit of the fasting plasma glucose concentration is normally approximately 70 mg/dL (3.9 mmol/L), but substantially lower venous glucose levels occur normally, late after a meal. Glucose levels <55 mg/dL (3.0 mmol/L) with symptoms that are relieved promptly after the glucose level is raised document hypoglycemia. Hypoglycemia can cause serious morbidity; if severe and prolonged, it can be fatal. It should be considered in any patient with episodes of confusion, an altered level of consciousness, or a seizure.

______

Glycemic control:

Glycemic control is a medical term referring to the typical levels of blood sugar (glucose) in a person with diabetes mellitus. Much evidence suggests that many of the long-term complications of diabetes, especially the microvascular complications, result from many years of hyperglycemia (elevated levels of glucose in the blood). Good glycemic control, in the sense of a “target” for treatment, has become an important goal of diabetes care, although recent research suggests that the complications of diabetes may be caused by genetic factors or, in type 1 diabetics, by the continuing effects of the autoimmune disease which first caused the pancreas to lose its insulin-producing ability. Because blood sugar levels fluctuate throughout the day and glucose records are imperfect indicators of these changes, the percentage of hemoglobin which is glycosylated is used as a proxy measure of long-term glycemic control in research trials and clinical care of people with diabetes. In nondiabetic persons with normal glucose metabolism the glycosylated hemoglobin is usually 4-6% by the most common methods (normal ranges may vary by method). Measurement of glycated hemoglobin is the standard method for assessing long-term glycemic control. When plasma glucose is consistently elevated, there is an increase in nonenzymatic glycation of hemoglobin; this alteration reflects the glycemic history over the previous 2–3 months, since erythrocytes have an average life span of 120 days (glycemic level in the preceding month contributes about 50% to the A1C value). In patients achieving their glycemic goal, the ADA recommends measurement of the A1C at least twice per year. More frequent testing (every 3 months) is warranted when glycemic control is inadequate or when therapy has changed. The degree of glycation of other proteins, such as albumin, can be used as an alternative indicator of glycemic control when the A1C is inaccurate (hemolytic anemia, hemoglobinopathies). 
The fructosamine assay (measuring glycated albumin) reflects the glycemic status over the prior 2 weeks. Alternative assays of glycemic control should not be routinely used, since studies demonstrating that they accurately predict the complications of DM are lacking. “Perfect glycemic control” would mean that glucose levels were always normal (70–130 mg/dL, or 3.9-7.2 mmol/L) and indistinguishable from those of a person without diabetes. In reality, because of the imperfections of treatment measures, even “good glycemic control” describes blood glucose levels that average somewhat higher than normal much of the time. In addition, one survey of type 2 diabetics found that they rated the harm to their quality of life from intensive interventions to control their blood sugar to be just as severe as the harm resulting from intermediate levels of diabetic complications. Accepted “target levels” of glucose and glycosylated hemoglobin that are considered good control have been lowered over the last 25 years, because of improvements in the tools of diabetes care, because of increasing evidence of the value of glycemic control in avoiding complications, and because of the expectations of both patients and physicians. What is considered “good control” also varies by the age and the susceptibility of the patient to hypoglycemia. In the 1990s the American Diabetes Association conducted a publicity campaign to persuade patients and physicians to strive for average glucose and hemoglobin A1c values below 200 mg/dL (11 mmol/L) and 8%. Currently many patients and physicians attempt to do better than that. Poor glycemic control refers to persistently elevated blood glucose and glycosylated hemoglobin levels, which may range from 200–500 mg/dL (11-28 mmol/L) and 9-15% or higher over months and years before severe complications occur. Meta-analyses of large studies on the effects of tight vs. conventional (more relaxed) glycemic control in type 2 diabetics have failed to demonstrate a difference in all-cause or cardiovascular death, non-fatal stroke, or limb amputation, but have shown a 15% decrease in the risk of nonfatal heart attack. Additionally, tight glucose control decreased the risk of progression of retinopathy and nephropathy and decreased the incidence of peripheral neuropathy, but increased the risk of hypoglycemia 2.4-fold.

__

_______

DM prevalence, awareness, morbidity and mortality:

_

In 2006, the General Assembly of the United Nations unanimously adopted a resolution (61/225) which recognizes that diabetes is a global pandemic posing a serious threat to global health, acknowledging it to be a chronic, debilitating, and costly disease associated with major complications. Diabetes reduces the quality of life, can generate multi-system morbidities and premature death, and consequently increases healthcare costs. Currently, in many countries, people with diabetes have a significantly decreased life expectancy.

_

The figure below shows rising worldwide DM prevalence:

_

In our modern world, diabetes prevalence is on the rise. In 2010, statistics showed that over 25 million people in the United States, including children, had diabetes mellitus. Of that population, seven million were undiagnosed. In addition, diabetes prevalence increases with age. Between 2005 and 2008, statistics showed that 26.9% of people with type 1 or type 2 diabetes mellitus were over the age of 65 years, while 17.4% were between 20 and 64 years of age. At this rate, the number of people diagnosed with diabetes in the world is expected to increase by 114% from the year 2000 to 2030. As a result, effective diabetes management will continue to be an important consideration for patients and is key to reducing the risk of complications such as heart disease, blindness, renal disease, and unnecessary amputations.

_

WHO on diabetes 2013:

Key facts:

1. 347 million people worldwide have diabetes.

2. In 2004, an estimated 3.4 million people died from consequences of high fasting blood sugar.

3. More than 80% of diabetes deaths occur in low- and middle-income countries.

4. WHO projects that diabetes will be the 7th leading cause of death in 2030.

5. Healthy diet, regular physical activity, maintaining a normal body weight and avoiding tobacco use can prevent or delay the onset of type 2 diabetes.  

_

World Health Statistics 2012 reports data on people with raised blood glucose levels. One in 10 adults has diabetes. While the global average prevalence is around 10%, up to one third of the populations of some Pacific Island countries have this condition. Left untreated, diabetes can lead to cardiovascular disease, blindness and kidney failure. Already, diabetes exacts a high cost in health care dollars and lost productivity, threatens the financial stability of economies, and destroys lives and families.

_

The International Diabetes Federation (IDF) — the umbrella organization for 200 diabetes associations in more than 160 countries — just released its 2013 Diabetes Atlas. It cites current statistics and the rise of diabetes worldwide. If you’ve been following the trend in diabetes, it will not surprise you to know diabetes continues to rise, unabated, around the world. Type 2 diabetes, which many consider an epidemic currently, is increasing worldwide predominantly due to poor diet, sedentary lifestyle and the fact that we are living longer. The research, published by the American Heart Association’s journal Circulation, found that eating fast food two or more times a week increases the risk of developing Type 2 diabetes by 27 percent.


_

New wealth and development in the Middle East has already led to one in 10 adults having the disease. The greatest number of people with diabetes worldwide is between the ages of 40 and 59. Every six seconds someone dies from diabetes. Diabetes imposes unacceptably high human, social and economic costs on countries at all income levels. In Africa, three quarters of diabetes deaths occur in people under 60 years old, hampering Africa’s development. In 2013, the world spent $548 billion (US) on diabetes health care — 11 percent of the total spent on health care worldwide. 175 million people are currently undiagnosed and progressing toward complications unaware. The number of people with diabetes globally will increase by 55 percent by 2035. 

_

Diabetic death rates:

Diabetes causes 4.6 million deaths and costs over 465 billion US dollars in global healthcare expenditure every year. Diabetes is already the world’s most costly epidemic. By 2020, in countries such as the US, Malaysia and Indonesia, over 10% of the population will be diabetic and there will be over 300 million diabetics worldwide. Up to 5% of GDP and over 25% of many public healthcare budgets globally will typically be spent on dealing with the consequences of diabetes.  

_

Diabetic awareness:

46% of diabetics are unaware that they have diabetes.

_

The incidence of both type 1 and type 2 diabetes mellitus is increasing; the former has been attributed to an increase in environmental factors, whereas the latter is strongly associated with increasing rates of obesity. Alarmingly, during the past 10 years, type 2 diabetes has been diagnosed more frequently in patients younger than 44 years. In this context, physicians face a dual challenge: not only are there more patients with diabetes, but also the disease is being increasingly diagnosed in younger patients who will require lifelong management. Adding to this burden is the increasing complexity of caring for patients with type 1 diabetes and the expanding armamentarium of medications for patients with type 2 diabetes. The chronic hyperglycemia of diabetes is associated with both micro- and macrovascular complications, which result in significant increases in morbidity and mortality. Improving glycemic control in diabetic patients has been shown to reduce these complications. The main goal of treatment is to keep blood sugar levels in the normal or near-normal range. Checking one’s blood sugar is one of the best ways to know how well the diabetes treatment plan is working.

_

Diabetes mellitus is a condition characterized biochemically by increased blood glucose concentrations and associated with both small blood vessel complications in the eyes (retinopathy), kidneys (nephropathy), and peripheral nerves (neuropathy) and large blood vessel complications of the heart (causing heart attacks), head and neck (causing strokes), and legs (leading to gangrene and amputations). Diabetic retinopathy is the leading cause of blindness in industrialized countries in people between the ages of 20 and 74 years. Diabetic nephropathy is the leading cause of kidney failure requiring dialysis. Diabetic neuropathy underlies most cases of lower extremity amputations, much more so than the large vessel complications in the legs. There is overwhelming evidence that keeping blood glucose near normal has a marked beneficial effect in limiting (and possibly preventing) the small vessel complications. Although one recent article showed that lowering blood glucose concentrations in type 1 diabetic patients had a beneficial effect on coronary artery disease (CAD) many years later, five previous articles in type 2 diabetic patients did not.

___

What is the DCCT?

The Diabetes Control and Complications Trial (DCCT) was a major clinical study conducted from 1983 to 1993 and funded by the National Institute of Diabetes and Digestive and Kidney Diseases. The study showed that keeping blood glucose levels as close to normal as possible slows the onset and progression of the eye, kidney, and nerve damage caused by diabetes. In fact, it demonstrated that any sustained lowering of blood glucose, also called blood sugar, helps, even if the person has a history of poor control. The DCCT involved 1,441 volunteers, ages 13 to 39, with type 1 diabetes and 29 medical centers in the United States and Canada. Volunteers had to have had diabetes for at least 1 year but no longer than 15 years. They also were required to have no, or only early signs of, diabetic eye disease. The study compared the effects of standard control of blood glucose versus intensive control on the complications of diabetes. Intensive control meant keeping hemoglobin A1C levels as close as possible to the normal value of 6 percent or less. The A1C blood test reflects a person’s average blood glucose over the last 2 to 3 months. Volunteers were randomly assigned to each treatment group.

What is the EDIC?

When the DCCT ended in 1993, researchers continued to study more than 90 percent of participants. The follow-up study, called Epidemiology of Diabetes Interventions and Complications (EDIC), is assessing the incidence and predictors of cardiovascular disease events such as heart attack, stroke, or needed heart surgery, as well as diabetic complications related to the eye, kidney, and nerves. The EDIC study is also examining the impact of intensive control versus standard control on quality of life. Another objective is to look at the cost-effectiveness of intensive control.

_

DCCT Study Findings:

Intensive blood glucose control reduces risk of

  • eye disease
    76% reduced risk
  • kidney disease
    50% reduced risk
  • nerve disease
    60% reduced risk

EDIC Study Findings:

Intensive blood glucose control reduces risk of

  • any cardiovascular disease event
    42% reduced risk
  • nonfatal heart attack, stroke, or death from cardiovascular causes
    57% reduced risk

_

Large, long-term, randomized controlled trials in both type 1 diabetes (T1DM) and T2DM have shown that aggressive treatment of hyperglycemia significantly reduces the development and progression of microvascular complications. A weaker relationship is observed in most studies between hyperglycemia and the development/progression of macrovascular disease. However, in a systematic review with meta-analysis including 6 randomized controlled trials involving 27,654 patients, tight blood glucose control reduced the risk of some macrovascular and microvascular events, without effect on all-cause mortality or cardiovascular mortality. Recent RCTs have not shown a benefit of tight glucose control on macrovascular disease in people with T2DM of long duration and high cardiovascular risk. In the earlier studies, the benefits of tight control on macrovascular outcomes were seen only many years after the initial trial had ended and when levels of glycemic control in the intervention and control arms had converged. This so-called ‘metabolic memory’ or ‘legacy effect’ suggests that, while the short-term benefits of tight glycemic control for macrovascular disease have not been shown in RCTs, the longer-term benefits may be substantive, particularly when good HbA1c levels are achieved and maintained early in the course of the disease. The longer-term findings suggest that greater benefits (clinical and economic) are obtained when simultaneous control of glycemia, blood pressure and lipid levels has been achieved. Diabetes is a significant and growing worldwide concern with potentially devastating consequences. Numerous studies have demonstrated that optimal management of glycemia and other cardiovascular risk factors can reduce the risk of development and progression of both microvascular and macrovascular complications.

_

Proper glycemic control, including self-monitoring of blood glucose (SMBG), is key to managing diabetes. It has been shown that microvascular complications, such as neuropathy, nephropathy, and retinopathy, are reduced 40% for every percentage-point reduction in hemoglobin A1c values. Furthermore, a survey of 1,895 diabetic patients suggested that decreased blood glucose monitoring compliance was observed in patients who had more than two hospitalizations in a 2-year period. Yet, despite current evidence of the importance of daily SMBG, many patients who have diabetes do not regularly check their blood glucose at home. For example, up to 67% of patients do not check their blood glucose regularly, for reasons such as sore fingers, inconvenience, and fear of needles. Hence, some patients choose to avoid these unpleasant aspects by simply not checking blood sugars on a regular basis, especially since hyperglycemia is often asymptomatic in the early stages of diabetes mellitus. In addition to the difficulties posed by SMBG, maintaining proper glycemic control can be a challenge, especially for patients who are on insulin therapy. Patients who use short-acting insulin to cover meals must constantly estimate their insulin doses by counting the carbohydrate content of each meal. Since most of us do not eat the same meal every day for breakfast, lunch, and dinner, counting carbohydrates can become a cumbersome process. Furthermore, improperly estimating an insulin dose can result in undertreatment or overtreatment, which may have grave consequences. Based on discharge data from Californian hospitals, hypoglycemia accounted for approximately 1.7% of hospitalizations among diabetic patients. In today’s society, even with a better understanding of the importance of glycemic control, only 41% of people with diabetes are able to calculate an insulin dose based on carbohydrate intake and blood glucose levels. Controlling blood sugar with fast-acting insulin is difficult because it poses the risk of hypoglycemia or hyperglycemia if insulin is not administered correctly. It is difficult to estimate the amount of insulin required with varying portion sizes and fluctuating sugar levels throughout the day.
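
The carbohydrate-counting arithmetic described above is, in outline, a meal bolus (carbohydrate grams divided by an insulin-to-carbohydrate ratio) plus a correction bolus toward a target glucose. The sketch below is purely illustrative: the ratios, target, and function name are invented for the example, real values are individualized by the treating clinician, and this is not dosing advice.

```python
def estimate_bolus(carbs_g: float,
                   current_mgdl: float,
                   target_mgdl: float = 110,
                   carb_ratio: float = 10,        # grams of carbohydrate covered per unit (example value)
                   correction_factor: float = 50  # mg/dL drop per unit (example value)
                   ) -> float:
    """Illustrative meal-bolus estimate: carbohydrate coverage plus a
    correction toward target. All parameters are invented examples."""
    meal_units = carbs_g / carb_ratio
    correction_units = max(0.0, (current_mgdl - target_mgdl) / correction_factor)
    return round(meal_units + correction_units, 1)

# 60 g of carbohydrate at a reading of 180 mg/dL:
# 60/10 + (180-110)/50 = 6.0 + 1.4 = 7.4 units.
```

Even this toy version shows why the process is error-prone: the answer changes with every meal size and every pre-meal reading, which is the burden the passage above describes.
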

_

Screening for DM:

Widespread use of the FPG or the A1c as a screening test for type 2 DM is recommended by experts because (1) a large number of individuals who meet the current criteria for DM are asymptomatic and unaware that they have the disorder, (2) epidemiologic studies suggest that type 2 DM may be present for up to a decade before diagnosis, (3) some individuals with type 2 DM have one or more diabetes-specific complications at the time of their diagnosis, and (4) treatment of type 2 DM may favorably alter the natural history of DM. The ADA recommends screening all individuals >45 years every 3 years and screening individuals at an earlier age if they are overweight [body mass index (BMI) >25 kg/m2] and have one additional risk factor for diabetes. In contrast to type 2 DM, a long asymptomatic period of hyperglycemia is rare prior to the diagnosis of type 1 DM. A number of immunologic markers for type 1 DM are becoming available, but their routine use is discouraged pending the identification of clinically beneficial interventions for individuals at high risk for developing type 1 DM.
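
The ADA screening recommendation above reduces to a simple decision rule. A sketch of that rule (the function name is hypothetical; the thresholds are the ones quoted in the text):

```python
def should_screen_for_t2dm(age: int, bmi: float, extra_risk_factors: int) -> bool:
    """ADA-style screening rule from the text: screen everyone over 45 years
    (repeating every 3 years), or younger individuals who are overweight
    (BMI > 25 kg/m2) with at least one additional diabetes risk factor."""
    if age > 45:
        return True
    return bmi > 25 and extra_risk_factors >= 1

# should_screen_for_t2dm(50, 23, 0) -> True  (age alone qualifies)
# should_screen_for_t2dm(35, 27, 1) -> True  (overweight plus a risk factor)
# should_screen_for_t2dm(35, 27, 0) -> False
```

The rule is deliberately broad because, as the passage notes, type 2 DM can be asymptomatic for up to a decade before diagnosis.
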

_

Note:

I have seen many patients who have normal FPG but higher PPG and these patients ultimately develop frank type 2 DM. I therefore recommend only PPG as a screening test for T2DM (vide infra).

_________

History of SMBG:

In 1957, Kohn showed that Clinistix could also give approximate results for blood glucose. In 1965 an Ames research team under Ernie Adams went on to develop the first blood glucose test strip, the Dextrostix, a paper reagent strip which used the glucose oxidase/peroxidase reaction but with an outer semipermeable membrane which trapped red blood cells but allowed soluble glucose to pass through to react with the dry reagents. A large drop of blood (approximately 50–100 μL) was applied to the reagent pad, and after one minute the surface blood was gently washed away and the pad colour visually assessed against a colour chart to give a semiquantitative blood glucose value. However, the colours were difficult to visualise as the colour blocks were affected by ambient lighting conditions, and variation in individual visual acuity made it difficult to obtain accurate and precise readings. Although the Dextrostix was designed for use in doctors’ offices, the concept of diabetic patients undertaking the measurements had not been considered. Around the same time, the German company Boehringer Mannheim developed a competitive blood glucose strip, the Chemstrip bG. This was easier to use because the drop of blood was wiped off using a cotton wool ball, and, as it had a dual colour pad (one beige, the other blue), it was easier to visualise the colour. The visually monitored blood glucose test strips, Dextrostix (Ames) and Chemstrip bG (Boehringer Mannheim), were widely used in clinics, surgeries and hospital wards, notably intensive care units, for adults and neonates. However, colours were prone to fade and it was realised that there were highly significant visual variations in the assessment of colours across the range of glucose concentrations using Dextrostix. These limitations became the trigger to develop an automatic, electronic glucose test strip reader to improve precision and give more quantitative blood glucose results.  
The development in the 1950s of the oxygen electrode by Clark for the measurement of pO2 was the forerunner of the first biosensor electrode. The first description of a biosensor, an amperometric enzyme method for glucose measurement, was made by Clark and Lyons in 1962. This concept was incorporated in the measurement of blood glucose in the Yellow Springs 24AM ‘desktop’ analyser, which became commercially available in the mid-1970s. The first blood glucose biosensor system, the ExacTech, was launched in 1987 by MediSense. It used an enzyme electrode strip developed in the UK at Cranfield and Oxford universities. The strip contained glucose oxidase and an electron transfer mediator, ferrocene, which replaced oxygen in the original glucose oxidase reaction; the reduced mediator was reoxidised at the electrode to generate a current detected by an amperometric sensor. The meter was available in two highly original forms, a slim pen or a thin card the size of a credit card. Evaluation reports showed that accuracy, precision and error grid analysis were satisfactory. The use of electrode technology thus heralded what became designated the third-generation BGMS. In 1987, with the increased use of SMBG systems, the American Diabetes Association (ADA) lowered the preferred glucose meter deviation from laboratory reference methods to 15%. A useful statistical evaluation tool, error grid analysis, was developed by Clarke et al. and applied by Kochinsky et al., which gave an improved measure of accuracy related to clinical significance and decision making.

_

Four generations of glucometer:

The figure above shows four generations of blood glucose meter. Sample sizes vary from 30 to 0.3 μl. Test times vary from 5 seconds to 2 minutes (modern meters typically provide results in 5 seconds).

_________

Body fluid sample for glucose measurement:

_

The figure below shows an overview of body fluid glucose measurement by various techniques and from various sites:


_

Three of the major factors that influence glucose test results are the type of chemical analysis used for the test, the type of sample analyzed (whole blood versus plasma), and the source of the blood (venous, capillary, or arterial).

_________

Whole blood vs. plasma glucose:

Glucose is measured in whole blood, plasma or serum. Historically, blood glucose values were given in terms of whole blood, but most laboratories now measure and report plasma or serum glucose levels. Because red blood cells (erythrocytes) have a higher concentration of protein (e.g., hemoglobin) than serum/plasma, serum/plasma has higher water content and consequently more dissolved glucose than does whole blood. To convert from whole-blood glucose, multiplication by 1.15 has been shown to generally give the serum/plasma level. Under usual circumstances, the concentration of glucose in whole blood is about 15% lower than in plasma or serum, but the difference will be less in patients with low hematocrits. Collection of blood in clot tubes for serum chemistry analysis permits the metabolism of glucose in the sample by blood cells until separated by centrifugation. Red blood cells, for instance, do not require insulin to intake glucose from the blood. Higher than normal amounts of white or red blood cell counts can lead to excessive glycolysis in the sample, with substantial reduction of glucose level if the sample is not processed quickly. Ambient temperature at which the blood sample is kept prior to centrifuging and separation of plasma/serum also affects glucose levels. At refrigerator temperatures, glucose remains relatively stable for several hours in a blood sample. Loss of glucose can be prevented by using Fluoride tubes (i.e., gray-top) since fluoride inhibits glycolysis. However, these should only be used when blood will be transported from one hospital laboratory to another for glucose measurement. Red-top serum separator tubes also preserve glucose in samples after being centrifuged isolating the serum from cells. If you have so far been using a blood glucose monitoring system calibrated for whole blood and are now switching to one calibrated for plasma or vice versa, you may need new target values. 
You will have to adjust when interpreting the results: since the glucose concentration in plasma is approximately 10-15 per cent higher than in whole blood, the levels indicated by meters with plasma-calibrated test strips are approximately 10-15 per cent higher. Most manufacturers will probably switch to plasma calibration in the future, and in many European countries this has already happened. Diabetics can find out how their meter is calibrated from the leaflet accompanying the test strips or from the meter's operating instructions.
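As a minimal sketch of the arithmetic behind the conversion factor quoted above (the function names are illustrative, and real meters apply their own calibration rather than a user-side conversion):

```python
# Illustrative conversion between whole-blood and plasma glucose
# readings, using the ~1.15 factor quoted above (values in mg/dL).
PLASMA_FACTOR = 1.15  # plasma runs ~15% higher than whole blood

def whole_blood_to_plasma(wb_mg_dl: float) -> float:
    """Estimate the plasma-equivalent glucose from a whole-blood value."""
    return wb_mg_dl * PLASMA_FACTOR

def plasma_to_whole_blood(plasma_mg_dl: float) -> float:
    """Estimate the whole-blood glucose from a plasma value."""
    return plasma_mg_dl / PLASMA_FACTOR

print(round(whole_blood_to_plasma(100), 1))  # 115.0
print(round(plasma_to_whole_blood(115), 1))  # 100.0
```

This captures only the quoted factor itself; as discussed later in this section, applying a single fixed factor across sample systems is an oversimplification.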

_

Is plasma glucose measurement best?

Conversion of glucose concentrations determined in different sample systems by use of fixed factors is an oversimplification and probably leads to unpredictable rates of discordant disease classifications. These problems are becoming more relevant with the widespread use of point-of-care testing instruments, including blood gas analyzers with integrated glucose sensors that measure glucose in the plasma water fraction. The only solution to this dilemma is to use a single sample system. The experimental data clearly indicate that plasma should be preferred for diagnosing glucose intolerance, including diabetes. The logistic disadvantages are the need for a centrifugation step and for prevention of glycolysis. Chan showed that delays in processing blood specimens in hospital practice may lead to misclassification in up to 7% of glucose tolerance tests (GTTs). Stahl proposed storage on ice for no more than 1 h until centrifugation; however, this recommendation may not be practicable for many hospitals. The use of capillary hemolysate together with a reduced decision limit may thus be a second choice for the detection of diabetes.

__________

IV fluid and blood glucose measurement:

To prevent contamination of the sample with intravenous fluids, particular care should be given to drawing blood samples from the arm opposite the one in which an intravenous line is inserted. Alternatively, blood can be drawn from the same arm with an IV line after the IV has been turned off for at least 5 minutes, and the arm has been elevated to drain infused fluids away from the vein. Inattention can lead to large errors, since as little as 10% contamination with a 5% glucose solution (D5W) will elevate glucose in a sample by 500 mg/dL or more.
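The size of this error follows directly from the arithmetic of dilution; a small sketch (the function name is hypothetical):

```python
# D5W is 5 g of glucose per 100 mL, i.e. 5000 mg/dL.
D5W_MG_DL = 5000.0

def contaminated_reading(true_mg_dl: float, fraction_d5w: float) -> float:
    """Glucose (mg/dL) measured in a sample partly diluted with D5W."""
    return (1 - fraction_d5w) * true_mg_dl + fraction_d5w * D5W_MG_DL

# 10% contamination of a sample whose true glucose is 100 mg/dL:
print(contaminated_reading(100, 0.10))  # 590.0, an elevation of ~500 mg/dL
```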

_

60% of body weight in men and 50% in women is water. Total body water is distributed among three fluid compartments. Intracellular water (intracellular fluid, ICF) makes up about two-thirds of total body water; the remaining one-third, the extracellular water (extracellular fluid, ECF), is distributed between the intravascular (25%) and interstitial (75%) compartments. The pores between endothelial cells in capillaries allow free movement of water and solutes but do not allow proteins to pass through, so glucose readily passes from the intravascular to the interstitial compartment. Each pint (500 ml) of D5W/D5NS contains 25 gm of glucose. If each pint is given slowly over 8 hours, the body receives about 52 mg of glucose every minute. This 52 mg of glucose is distributed in the 13.8 liters of extracellular water of a 70 kg man (blood water plus interstitial water), so blood glucose will rise by about 0.38 mg/dL every minute if there is no insulin. Thus, if a glucose-containing IV drip is given to a diabetic with near-zero insulin secretion, blood glucose will rise by 0.38 mg/dL every minute when the drip runs over 8 hours; if the same pint is given over 4 hours, the rate of rise is 0.76 mg/dL every minute. However, if the same drip is given to a normal non-diabetic person, the slight increase in blood glucose will stimulate insulin secretion, and blood glucose will be reasonably maintained provided the drip rate is one pint every 4 to 8 hours. The corollary is that if you have collected blood from the arm opposite the one with the intravenous line and are still getting a high glucose level, do not blame the IV drip; the patient may be diabetic.
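The figures above can be checked with a short sketch (assuming, as in the text, a 70 kg adult with about 13.8 liters of extracellular water and no endogenous insulin or renal loss; the function name is hypothetical):

```python
GLUCOSE_G_PER_PINT = 25.0   # 5 g per 100 mL in a 500 mL pint of D5W/D5NS
ECF_LITERS = 13.8           # extracellular water in a 70 kg man

def glucose_rise_per_min(drip_hours: float) -> float:
    """Rise in blood glucose (mg/dL per minute) while the drip runs,
    assuming zero insulin secretion and no renal glucose loss."""
    mg_per_min = GLUCOSE_G_PER_PINT * 1000 / (drip_hours * 60)
    return mg_per_min / (ECF_LITERS * 10)   # ECF volume in dL

print(round(glucose_rise_per_min(8), 2))  # 0.38
```

Halving the drip duration doubles the rate, giving the roughly 0.76 mg/dL per minute quoted for a 4-hour pint.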

_

Every day the average person eats 900 to 1300 Kcal of carbohydrates in a 2000 Kcal diet. Divided equally among breakfast, lunch and dinner, that is approximately 300 to 430 Kcal of carbohydrate per meal. Even if only half of the carbohydrate is converted into glucose, every meal contains 150 to 215 Kcal from glucose (the remainder being fructose/galactose). In other words, every meal generates a minimum of 37 to 53 gm of glucose to be assimilated by the body. Yet in a normal non-diabetic person, insulin secretion does not allow the blood sugar to rise much, and the maximum blood sugar 2 hr after a meal is < 140 mg/dL. If a normal person can dispose of 50 gm of glucose in 2 hours with blood sugar < 140 mg/dL, he can surely dispose of 500 ml of IV D5W/D5NS containing 25 gm of glucose without a rise in blood sugar, provided the drip rate is not very fast. So a drip rate of 60 to 125 ml/hr would not cause hyperglycemia in a normal non-diabetic person.
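The per-meal arithmetic can likewise be sketched (assuming 4 Kcal per gram of carbohydrate and, as in the text, that half of dietary carbohydrate yields glucose; the function name is hypothetical):

```python
def glucose_g_per_meal(daily_carb_kcal: float, meals: int = 3) -> float:
    """Grams of glucose generated per meal from daily carbohydrate intake."""
    per_meal_kcal = daily_carb_kcal / meals  # equal meals
    glucose_kcal = per_meal_kcal / 2         # half of carbohydrate is glucose
    return glucose_kcal / 4                  # 4 Kcal per gram

print(round(glucose_g_per_meal(900), 1))   # 37.5
print(round(glucose_g_per_meal(1300), 1))  # 54.2
```

Rounded down, this reproduces the 37 to 53 gm per-meal range quoted above.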

_

Why have I not taken renal glucose loss into consideration in the above formula?

First, consider a normal non-diabetic person with a GFR of 125 ml/min and blood glucose of 100 mg/dL. His kidneys filter 180 gm of glucose in 24 hrs, and more than 99.9% of the filtered glucose is reabsorbed, resulting in urine glucose of < 130 mg/24 hr. Now consider an uncontrolled diabetic with a GFR of 125 ml/min and blood glucose of 450 mg/dL. He filters 810 gm of glucose in 24 hrs. Since his blood sugar is far above the renal threshold, he has massive renal glycosuria with osmotic diuresis and polyuria. I have gone through various studies on osmotic diuresis in diabetes. For a blood glucose of 450 mg/dL, when urine output is 5 liters/24 hr, the urine glucose concentration is about 8 gm/liter. Further increase in urine output reduces the urine glucose concentration, so a urine output of 12 liters/24 hr has a urine glucose concentration of about 5 gm/liter. In other words, the maximum urine glucose loss in uncontrolled diabetes is 40 to 60 gm in 24 hours. So approximately 6% of the filtered glucose is lost in the urine, and 94% is still reabsorbed, even in uncontrolled diabetes. Therefore, the rate of rise of blood glucose after IV D5W/D5NS, even in uncontrolled diabetes, will change only slightly due to renal glucose loss.
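The filtration figures can be verified in a few lines (the function name is hypothetical):

```python
def filtered_glucose_g_per_day(gfr_ml_min: float, glucose_mg_dl: float) -> float:
    """Grams of glucose filtered by the kidneys in 24 hours."""
    filtered_liters = gfr_ml_min * 1440 / 1000    # liters filtered per day
    return filtered_liters * glucose_mg_dl / 100  # mg/dL -> g/L

print(filtered_glucose_g_per_day(125, 100))  # 180.0 (normal)
print(filtered_glucose_g_per_day(125, 450))  # 810.0 (uncontrolled diabetes)

# A maximal urinary loss of ~50 g/day is only ~6% of the filtered load:
print(round(50 / 810 * 100))  # 6
```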

_

Note:

Excess water infused through routine IV drip would be excreted by kidneys and would not dilute blood glucose concentration.  

_________

Artery vs. capillary vs. vein vis-à-vis blood glucose:

_

Artery to capillary to vein:

_

Home glucose monitoring has traditionally relied on a drop of capillary blood from the finger. Blood glucose is generally measured as the venous plasma level. There is a 3–5 mg/dL difference between arterial and venous levels, with larger differences in the postprandial state. Levels are higher in arterial blood because some of the glucose diffuses from the plasma to the interstitial fluid (IF) as blood circulates through the capillary system. Arterial and capillary blood glucose have been shown to be almost identical in concentration, even though the distribution of glucose to the systemic capillaries does not occur instantaneously. Finger-prick sampling collects blood from peripheral capillaries, and its glucose concentration approximates the level of arterial blood glucose (Rasaiah, 1985). Although fasting capillary and fasting venous blood glucose differ little, postprandial venous blood glucose is lower than postprandial capillary blood glucose by about 7%, because tissues absorb glucose from the blood before the remainder returns to the veins. Accordingly, arterial or postprandial capillary blood glucose is higher than postprandial venous blood glucose.

______

Glucose test values may not match across different blood samples because glucose is being consumed by the body:

Glucose diffuses through the capillaries and is consumed by the cells, so arterial glucose concentration (the capillaries’ source) should be higher than venous glucose concentration (the capillaries’ drain) unless capillary diffusion or muscle glucose consumption has been stopped. It has been shown that in fasting subjects the glucose levels in arterial, capillary, and venous samples are practically the same (venous glucose is generally 2-5 mg/dL lower than fingerstick capillary or arterial blood glucose). It is only after meals, when glucose uptake in the periphery is rapid, that glucose levels in fingerstick capillary blood samples can exceed those in concurrently drawn venous samples. A typically quoted value is up to 80 mg/dL difference between venous and fingerstick capillary blood glucose values one hour after ingestion of 100 grams of glucose. Current literature has attempted to determine exactly how glucose levels in venous, arterial and fingerstick capillary blood vary so comparisons can be made. Venous blood is usually employed for laboratory analysis and is preferable in diabetes testing. However, because of the widespread use of SMBG instruments, fingerstick capillary blood samples have also become a standard. Fingerstick capillary blood has been shown to be predominantly arterial and so approximates the concentration of arterial blood. Somogyi compared the glucose content of blood samples simultaneously drawn from the femoral artery and the fingertip of non-diabetics one-hour after ingestion of 50 grams of glucose. The ingested glucose would produce a substantial difference between the arterial and venous glucose levels, and so should indicate whether fingerstick capillary blood was predominantly arterial, venous, or a combination of the two. The discrepancies between arterial and fingerstick capillary blood were less than 1 mg/dL for all three subjects studied and seemed to justify the substitution of fingerstick capillary for arterial blood glucose. 
Somogyi also studied the difference between fingerstick capillary and venous glucose levels in the fasting state in 100 healthy individuals (fasting for 10-14 hours). The average fasting glucose value in fingerstick capillary blood samples was 89 mg/dL (78–97 mg/dL), with the average venous blood glucose value 5 mg/dL lower (84 mg/dL). In the same study, both venous and fingerstick capillary blood glucose values were followed for 4 hours in 44 healthy individuals who had ingested a 100-gram glucose load (see figure below). A substantial increase in the fingerstick capillary to venous blood glucose difference was measured after oral administration of glucose, and this difference remained consistently higher than the initial fasting level until blood glucose returned to the fasting level. The paper also found that the larger the glucose load ingested, the higher the glucose peaks, and the greater the maximal difference between the fingerstick capillary (assumed by the paper to be arterial) blood glucose level and the venous blood glucose level.

_

The difference between capillary and venous blood in the postprandial state arises because, in the presence of adequate insulin action, muscles remove more glucose from the blood than the liver does. It has been shown that with a lack of insulin (in the de-pancreatized animal) the arterial-to-venous glucose difference is extremely small, and that injection of insulin increases this difference. Thus, glucose uptake by tissue depends on the tissue's sensitivity to insulin, the circulating insulin level, and the local blood flow. Diabetics may have various degrees of peripheral insulin resistance, various blood insulin levels, or both, so a single patient's nonfasting difference may not be seen in other patients. The nonfasting difference will depend on meal size, meal content, time of sample collection, and individual patient variability. In summary, glucose levels in arterial and fingerstick capillary blood are so closely correlated that most studies refer to arterial glucose measurements even if they measure fingerstick capillary samples. When studies are performed with the patient under fasting conditions, glucose levels in fingerstick capillary blood give reliable quantitative estimates of the venous glucose concentration as determined in the laboratory for most patients. However, when patients are under a glucose load, the venous and fingerstick capillary glucose levels diverge in a similar but unpredictable manner: the venous value may be anywhere from 2% lower during fasting to 26% lower within one hour after a glucose load. Unfortunately, empirical conversion factors have been applied to generate equivalent glucose values for different blood sample compartments without adequate data to show equivalence. One such conversion holds that fingerstick capillary blood has a glucose concentration 7-8% higher than the concurrently drawn venous concentration.
Others have presented charts showing the equivalence of venous and capillary glucose levels that differ between 0% to 13% depending on the glucose level. The validity of these conversion factors has been called into question since individual differences between capillary and venous blood glucose values are too great to allow for a meaningful transformation to be applied. It can be reasonably concluded that there is no simple conversion factor available to explain differences between glucose values in the various blood compartments.

_

Glucose test values may not match because the body is consuming oxygen:

A study by Liu measured arterial, fingerstick capillary, and venous blood samples from six healthy males for oxygen saturation and glucose. Each subject’s right hand was placed in a warm air box at 55-60 degrees C to determine if warm air would arterialize the venous blood obtained from a cannula inserted into the dorsal right-hand vein. The oxygen saturation measured in the arterial blood was 97%. The oxygen saturation measured in venous blood on a nonheated hand was 80%. The oxygen saturation measured in the heated ‘arterialized’ venous blood was 94% or approximately 3% below the average arterial value. Glucose levels also showed equilibration between the two blood compartments with heating. The difference between fasting arterial glucose levels and venous glucose levels with no heating of the hand ranged between 4-9 mg/dL (6% – 9%), and this glucose difference significantly correlated with the differences in oxygen saturation between the two blood supplies. The difference between the arterial glucose levels and ‘arterialized’ venous glucose levels obtained by heating the hand averaged less than 2 mg/dL difference, and this glucose difference had a low correlation with the differences in oxygen saturation between the two blood supplies.

_

Many analytical procedures are used to measure blood glucose, but the most common techniques are enzymatic. Enzymes commonly used in commercial test strips are glucose oxidase, glucose dehydrogenase, or hexokinase combined with glucose-6-phosphate dehydrogenase. Glucose oxidase has historically been the preferred enzyme because of its excellent specificity for glucose, good room-temperature stability, and relatively low cost. However, the reaction requires an adequate oxygen supply, and this leads to an oxygen-dependence problem in certain measurement systems. Electrochemical measurement combined with glucose oxidase involves a mediator to transfer electrons between the enzyme and the electrode; the mediator attempts to replace oxygen in the reaction sequence. This makes oxygen in the blood sample a competitor in the reaction and produces varying results with varying oxygen concentrations (oxygen dependence). Electrochemical test strips that are calibrated using fingerstick capillary blood can read up to 30% higher when tested with venous blood because of its 50-60% lower pO2 values. A similar situation exists with some optical reflectance methods. Generally, atmospheric oxygen is sufficient to meet the glucose oxidase reaction requirements, but some test strip designs can block the diffusion of oxygen to the reaction site. Commercial analyzers attempt to circumvent oxygen effects by pre-diluting the sample into an oxygenated buffer. Glucose dehydrogenase can be made oxygen-independent when it is combined with a cofactor called pyrroloquinoline quinone (PQQ). Using this enzyme combination effectively eliminates oxygen competition and enables the use of venous or arterial samples where extremes of pO2 may occur. The trade-off is reduced specificity for glucose, in that it also detects maltose, galactose, and metabolites of maltodextrins, as well as reduced operational stability compared to glucose oxidase.
Hexokinase combined with glucose-6-phosphate dehydrogenase also avoids oxygen dependence, but the test strip is inherently more sensitive to heat and moisture, and therefore special attention is paid to packaging. Glucose comparison studies between arterial, capillary, and venous blood must consider the significant differences in oxygen tension between the blood compartments when using analytical systems that are oxygen dependent. Ideally, the effect of pO2 needs to be examined by monitoring oxygen concentrations and determining if a correlation exists for glucose.  

_

Glucose test values may not match because of low blood flow in the forearm:

Glucose consumption and oxygen variation concern physiological parameters that would lead to a bias between glucose test results taken simultaneously from two different blood compartments during either fasting or the meal cycle. A third physiological parameter that would cause glucose in one blood compartment to lag or lead another is flow or circulation problems in a capillary bed. Many medical and physical conditions can affect capillary blood flow with the problem being either systemic or localized. Localized variations in blood flow associated with the capillary beds would be a major contributing factor to erroneous comparison data between two capillary blood supplies such as within the finger and forearm. A localized variation in blood flow would also be a contributing factor in glucose differences measured within capillary, arterial, and venous blood.  Lower flow in the capillaries will lead to greater exchange of nutrients and metabolites. Simplistically, a drop of blood moving slowly will have more time to lose glucose to the consuming tissue compared to a drop of blood moving quickly. In tissues like the heart, all capillaries are normally open to perfusion, but in skeletal muscle and intestine only 20% – 30% of capillaries are normally open. As an example, it is possible that only 70% of the forearm capillaries are flowing normally at any one time, and 30% have slower-moving blood that is being depleted of glucose and oxygen by diffusion into the cellular space. Lancing into such a location would produce glucose readings lower than both arterial and venous blood glucose since more glucose consumption would occur in areas with no flow. If the measurement technique were oxygen sensitive, then the measurement would also be lower because of oxygen consumption by the surrounding tissue. 
Ideally, blood collection from sites such as the forearm and thigh should target a highly perfused capillary bed, and either compensate for or be independent of temporal changes in blood flow.

_

Simultaneous measurements of arterial and venous blood samples should produce different glucose values in healthy people because of glucose utilization by peripheral tissues. Unfortunately, the magnitude of this difference cannot be predicted because of the large number of variables that affect it. Since "capillary blood" has come to refer to blood collected from the finger, forearm, ear, heel, calf, and stomach, questions have arisen as to whether each of these is predominantly arterial or venous. Published studies have justified equating arterial and fingerstick capillary glucose levels under most conditions, but no other capillary blood source has been studied as thoroughly. Local, rhythmic changes of blood flux within capillary beds play a larger role in the variation of forearm capillary blood glucose versus fingerstick capillary blood glucose than the differences between arterial and venous values. This is not to say that forearm capillary blood is more like venous blood, but rather that independent temporal changes in select capillary beds affect the venous value because the capillary bed lies upstream of it.

_

A Comparison between Venous and Finger-Prick Blood Sampling on Values of Blood Glucose: a 2012 study:

This study investigated changes in fasting and postprandial blood glucose values over 2 hours in 12 healthy volunteers, who were asked to take a 50 g glucose solution, comparing correlations and differences between two types of blood sampling: venous and finger-prick. The experimental results show that (1) there is no significant difference between the fasting venous blood glucose value (87.4±0.4 mg/dL) and the fasting capillary blood glucose value (91.6±4.4 mg/dL) at 0 min; (2) after ingestion of the glucose solution there is a significant difference between the postprandial venous and capillary blood glucose concentrations, both of which reach their maximum at 30 min (venous 122.0±1.2 mg/dL; capillary 163.8±1.3 mg/dL); (3) the mean capillary blood glucose concentration is higher than the mean venous blood glucose concentration by 35%; (4) the correlation coefficient r=0.875 (p<0.001) indicates a positive correlation between the two groups of blood glucose concentrations, and the venous blood glucose concentration is the better clinical indicator because of its higher stability and fewer interfering factors. Blood glucose is defined as venous plasma glucose in the WHO criteria for diagnosing diabetes, whereas a glucose meter measures whole-blood glucose from the peripheral capillaries. As a simple and convenient tool, the glucose meter is suitable for self-monitoring; its values are accurate enough, being proportional to venous plasma glucose values by a stable factor of 1.12, which reflects a different numerical benchmark rather than an error.

_

Comparability of Blood Glucose Concentrations Measured in Different Sample Systems for Detecting Glucose Intolerance: A study:

The interconversion of glucose values for venous and capillary blood is further complicated by the arteriovenous difference. In the fasting state, the glucose concentrations in arterial, capillary, and (forearm) venous blood are supposed to be almost indistinguishable. In contrast, arterial and venous blood glucose values may differ by 20%, or by as much as 70%, in the postprandial state. The mean arteriovenous differences are largest in lean nondiabetic individuals, smallest in diabetic individuals, and larger in deep veins than in superficial vessels. Other factors can also influence the differences in glucose concentrations among the various samples. Thus, the conversion of concentration values from one system (or sample type) to another is subject to unpredictable errors. Several authors have already rejected the practice of converting glucose concentrations and have recommended that plasma be used for all glucose determinations. In a recent editorial, glucose measurement in whole blood was considered anachronistic, yet only whole blood is used by home monitoring and near-patient monitoring devices. Many laboratories measure the glucose concentration in whole blood, especially capillary whole blood, for therapeutic monitoring and for diagnosing hypo-, normo-, and hyperglycemia. However, the applicability of whole blood for determining glucose intolerance is still a matter of debate. Many practitioners tend to use capillary blood (CB) for diagnostic purposes. The decision limits usually applied for whole blood are those recommended by the WHO and the American Diabetes Association, which are based on epidemiologic studies with venous plasma (VP). In practice, either measured values or decision limits are converted from one sample system to another.

________

Blood glucose vs. Interstitial fluid (IF) glucose:

_

Interstitial fluid (IF):

IF constitutes approximately 45% of the volume fraction of human skin, with blood vessels contributing 5% of the skin volume. IF is a relatively passive medium that has one-third of the total protein concentration of plasma, with an average albumin/globulin ratio of 1.85. The total body volume of the interstitial space is three times that of plasma; however, the IF compartments around the cells are microscopic. IF bathes the cells and feeds them with nutrients, including glucose, by providing a corridor between the capillaries and the cell. There is less IF in the subcutaneous tissue than in the dermis. Adipose tissue, just below the dermis, is richly vascularized, with capillary walls that are relatively thinner (0.03 μm vs. 0.1 μm) than those of the dermis. The basal membranes of these capillaries are in direct contact with the adipose cell cytoplasmic membrane. The size of adipocytes might affect the amount of IF in the subcutaneous tissue, suggesting that adiposity might influence IF glucose concentrations.

_

The Relationship between Plasma Glucose and Interstitial Glucose:

Plasma and IF have different characteristics and should be considered separate glucose compartments. Glucose is transferred from the capillary endothelium to the IF by simple diffusion down a concentration gradient, without the need for an active transporter. Blood flow to the area dictates the amount of glucose delivered. Interstitial glucose values are determined by the rate of glucose diffusion from plasma to the IF and the rate of glucose uptake by subcutaneous tissue cells. Thus interstitial glucose levels are influenced by the metabolic rate of the adjacent cells, other factors (like insulin) affecting glucose uptake by cells, the glucose supply from the blood vessel, blood flow to the area, and the permeability of the capillary, which can be altered by many factors, including nerve stimulation. The time required for glucose to diffuse from the capillary to the tissue plays an important role in the lag between changes in plasma and interstitial glucose levels, but the lag during rapid changes of blood glucose is likely due to the magnitude of concentration differences among various tissues at a time of rapid change. A major confounding factor in evaluating the dynamics of changes in IF glucose concentrations has been the complexity of direct sampling methods, including insertion of wicks, blister formation, lymph sampling, and ultrafiltration. Microdialysis is an indirect method of estimating IF glucose values. Lönnroth et al. were the first to use this method to show that IF glucose was almost identical to venous plasma glucose in healthy individuals during steady state. Jansson et al. demonstrated an increase in lag time between IF and plasma glucose when there is a rapid rise in the plasma glucose level. The data of Jensen et al., who extracted IF by the suction blister technique, revealed IF glucose levels lower than plasma glucose during clamp experiments. There are relatively limited data on dermal IF glucose levels.
Bantle and Thomas demonstrated no significant difference between the dermal and plasma pre- and postprandial glucose levels in subjects with type 1 diabetes. A plasma-to-dermal interstitial glucose concentration lag time of 10–20 min was reported by Stout et al. In general, IF and plasma glucose variations have been evaluated under two conditions: steady state and non–steady state. Under steady-state conditions, IF glucose generally correlates with blood glucose, with a reported lag time of 0 to 45 min and an average lag of 8–10 min. Increasing the blood flow to the interstitial glucose sampling site by applying controlled pressure has been shown to decrease the lag time between blood and interstitial glucose at times of increasing plasma glucose levels. The reported gradient between interstitial and plasma glucose concentrations has varied from 20% to 110%. During times of decreasing glucose, interstitial glucose may fall in advance of plasma glucose and reach nadir values lower than the corresponding venous glucose levels. Interstitial glucose levels have been shown to remain below plasma glucose concentrations for prolonged periods after correction of insulin-induced hypoglycemia. These findings could be explained by the push–pull phenomenon, during which glucose is pushed from the blood to the interstitial space at times of increased blood glucose and later pulled from the IF to the surrounding cells as blood glucose falls. This phenomenon has been a matter of debate for some time, in light of data failing to support it and instead reporting that enhanced glucose uptake into the IF is compensated by increased plasma glucose delivery, and that insulin lacks a glucose-removal effect in the adipose IF.
Current continuous glucose monitoring systems have the advantage of direct insertion of electrochemical sensors into the IF space rather than transporting the sampled fluid outside the body to detect glucose concentrations. Software programs have been designed to accommodate the lag in IF glucose readings.  Recent advances in glucose sensor technology for measuring interstitial glucose concentrations have challenged the dominance of glucose meters in diabetes management, while raising questions about the relationships between interstitial and blood glucose levels.

_

For glucose meter measurements, a skin-pricking device is used to access the dermal capillary plexus. Human skin consists of two layers, the epidermis and dermis, residing above the adipose and muscle tissue. The epidermis is an avascular epithelial membrane that contains enzymes capable of metabolizing glucose. Moreover, glucose is formed from the breakdown of ceramide at the stratum granulosum–corneum interface. The dermis comprises many arterioles, venules, and capillaries, including a deep vascular plexus at the interface between the dermis and the subcutaneous tissue, as seen in the figure below. Another vascular plexus, located 0.3–0.6 mm from the skin surface, is formed by feeding vessels arising from the deep vascular plexus; it supplies blood flow to the dermis and epidermis through small capillary loops branching from the superficial plexus. The blood sampled from a skin prick comes from the capillaries of the dermis, with a small amount of blood from cut arterioles and venules, providing a mixed concentration. Blood flow to the skin is controlled by many factors, including the autonomic nervous system, temperature, hormonal changes during the menstrual cycle in females, and chemical inputs.

_

The figure above shows skin layers with the magnified IF space. (a) Vasculature in different skin layers with the CGM inserted into the subcutaneous tissue. (b) Diffusion of glucose from plasma to IF is in proportion to the concentration in each compartment. IF glucose is cleared by the surrounding cell uptake. Insulin may increase cellular glucose uptake after binding its membrane receptor.  

________

Methods of monitoring glucose:   

Intermittent vs. continuous glucose monitoring:

The difference between an intermittent and a continuous monitor for monitoring blood glucose is similar to that between a regular camera and a continuous security camera for monitoring an important situation. A regular camera takes discrete, accurate snapshots; its pictures do not predict the future; it produces a small set of pictures that can all be carefully studied; and effort is required to take each picture. A continuous security camera, on the other hand, takes multiple, poorly focused frames; displays a sequential array of frames whose trend predicts the future; produces too much information for each frame to be studied carefully; and operates automatically after it is turned on. The two types of blood glucose monitors differ in much the same way: 1) an intermittent blood glucose monitor measures discrete glucose levels extremely accurately, whereas a continuous monitor provides multiple glucose levels of fair accuracy; 2) with an intermittent monitor, current blood glucose levels do not predict future glucose levels, but with a continuous monitor, trends in glucose levels do have this predictive capability; 3) with an intermittent monitor, it is easy to study every measured blood glucose value over most time periods, but with a continuous monitor, too many data are generated to study all data points; and 4) an intermittent blood glucose monitor requires effort to operate, whereas a continuous monitor does not. Returning to the camera analogy, just as the best tool for closely monitoring a situation when the outcome is important often may be a continuous security camera rather than a regular camera, likewise the best way to monitor diabetes often may be a continuous glucose monitor (CGM) rather than an intermittent monitor.

_

Various methods of glucose monitoring are available, including HbA1c measurement, blood glucose monitoring, and urine glucose testing.

_

Basic approaches for blood glucose measurement:

There are three basic approaches to the laboratory measurement of blood glucose concentration: reducing methods, condensation methods, and enzymatic methods. Reducing methods are the oldest and take advantage of the reducing properties of glucose to change the oxidation state of a metal ion while glucose is being oxidized. Reducing methods are nonspecific, and any strong reducing agent can cross-react to yield spuriously elevated values. While steps can be added to remove most cross-reacting reducing agents, this approach has largely been abandoned in the clinical laboratory. The aldehyde group of glucose can undergo condensation with aromatic compounds to yield a colored product. In the most commonly used condensation reaction, o-toluidine reacts with glucose to form a glycosylamine that has an intense green color. The color is then measured spectrophotometrically to estimate the glucose concentration. The reaction is rapid, and the intense color allows a high degree of sensitivity. Other aldoses can cross-react, but only mannose and galactose give a highly colored product. These sugars are not found in great concentrations in the blood, and their cross-reactivity is ordinarily not significant. o-Toluidine, however, has the drawback of being highly corrosive and toxic. For this reason, this method is rapidly being phased out of the clinical laboratory. More precise blood glucose measurements are performed in a medical laboratory using hexokinase, glucose oxidase, or glucose dehydrogenase enzymes.
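Regardless of the chemistry used, the spectrophotometric methods described above ultimately estimate glucose by comparing the absorbance of the sample's colored product with that of a glucose standard, per the Beer-Lambert proportionality between absorbance and concentration. A minimal sketch of that calculation follows; the function name and all numbers are illustrative, not taken from any particular assay:

```python
def glucose_from_absorbance(a_sample: float, a_standard: float,
                            c_standard: float) -> float:
    """Estimate glucose (mg/dL) by direct proportion to a standard,
    using the Beer-Lambert relation: absorbance is proportional to
    the concentration of the colored product."""
    if a_standard <= 0:
        raise ValueError("standard absorbance must be positive")
    return a_sample / a_standard * c_standard

# Illustrative values: a 100 mg/dL standard reads 0.250 absorbance units
print(round(glucose_from_absorbance(0.315, 0.250, 100.0), 1))  # 126.0
```

The same direct-proportion calculation applies whether the color comes from an o-toluidine condensation product or from an oxidized chromogen in the glucose oxidase method.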

_

The enzyme glucose oxidase reacts with glucose, water, and oxygen to form gluconic acid and hydrogen peroxide. The hydrogen peroxide can then be used to oxidize a chromogen, or the consumption of oxygen can be measured, to estimate the amount of glucose present. Glucose oxidase is specific for β-D-glucose, so cross-reaction with other sugars is not a problem. In aqueous solution, approximately 66% of D-glucose is in the β form and 34% exists as α-D-glucose. The rate of interconversion is pH and temperature dependent. Some methods add a mutarotase to the reagents to speed up the conversion to the beta anomer, but this does not seem to alter the clinical results. The measurement of generated hydrogen peroxide is not as specific as the first glucose oxidase reaction. Numerous reducing substances can potentially inhibit the oxidation of the chromogen. Although uric acid and creatinine, even in uremic patients, seem to have little effect on the results, ascorbic acid will yield spuriously low blood glucose measurements. The high concentration of uric acid found in urine will affect the result, so glucose oxidase methods are not directly applicable to urine samples. The measurement of oxygen consumption using an oxygen-specific electrode avoids the problem of interfering reducing agents. In general, the glucose oxidase method is relatively inexpensive and specific. Many glucose meters employ the oxidation of glucose to gluconolactone catalyzed by glucose oxidase (sometimes known as GOx). Others use a similar reaction catalyzed instead by another enzyme, glucose dehydrogenase (GDH). This has the advantage of sensitivity over glucose oxidase but is more susceptible to interfering reactions with other substances.

_

Enzymatic methods to measure blood glucose:

_

Amperometric and photometric techniques for measurement of blood glucose using enzymatic methods:

__

Timing of the test:

Blood sugar is measured at various points in time to give an idea of the body's blood glucose regulation. The primary test is the fasting blood glucose (FBG), measured after an overnight fast. Blood glucose is normally lowest early in the morning, after 6 to 8 hours of overnight fasting. FBG is also called fasting blood sugar (FBS); however, the correct terminology is venous fasting plasma glucose (FPG), as whole-blood sugar measurement is obsolete in most laboratories. The two-hour postprandial plasma glucose (PPG) is the next common test. After a carbohydrate-rich, full meal, two hours are allowed to elapse before blood is taken again for estimation of glucose. This test gives an estimate of glucose handling by the body. Other tests include the oral glucose tolerance test (OGTT) and the intravenous glucose tolerance test (IVGTT), wherein a fixed amount of glucose is administered orally or intravenously, respectively, and repeated blood sugar tests are performed to check the body's glucose handling. Another important test is glycosylated hemoglobin (HbA1c), which gives an idea of the fluctuations of blood glucose over the preceding three months.

_

These glucose measurements are useful for different reasons:

Fasting plasma glucose levels, taken in the morning before eating, should fall in a normal range. Normal value for FPG is 70 to 100 mg/dL. Measurement of glucose in plasma of fasting subjects is widely accepted as a diagnostic criterion for diabetes.  Advantages include inexpensive assays on automated instruments that are available in most laboratories worldwide. Nevertheless, FPG is subject to some limitations. One report that analyzed repeated measurements from 685 fasting participants without diagnosed diabetes from the Third National Health and Nutrition Examination Survey (NHANES III) revealed that only 70.4% of people with FPG <126 mg/dL on the first test had FPG <126 mg/dL when analysis was repeated ~2 weeks later. Numerous factors may contribute to this lack of reproducibility. 

Two hours after eating:

Blood sugar rises after a meal and then falls to a baseline level. By sampling blood sugar two hours after eating, you find out whether glucose is being removed from the blood in a reasonable time. The sugar level peaks in 30 to 60 minutes and then falls back to baseline. The timing and height of the peak vary with the composition of the meal and with activity levels. The normal PPG value is 100 to 140 mg/dL.

OGTT:

The OGTT evaluates the efficiency of the body to metabolize glucose and for many years has been used as the “gold standard” for diagnosis of diabetes. An increase in postprandial glucose concentration usually occurs before fasting glucose increases. Therefore, postprandial glucose is a sensitive indicator of the risk for developing diabetes and an early marker of impaired glucose homeostasis. Published evidence suggests that increased 2-h plasma glucose during an OGTT is a better predictor of both all-cause mortality and cardiovascular mortality or morbidity than the FPG. The OGTT is accepted as a diagnostic modality by the ADA, WHO/International Diabetes Federation (IDF), and other organizations. However, extensive patient preparation is necessary to perform an OGTT. Important conditions include, among others, ingestion of at least 150 g of dietary carbohydrate per day for 3 days prior to the test, a 10- to 16-h fast, and commencement of the test between 7:00 A.M. and 9:00 A.M. In addition, numerous conditions other than diabetes can influence the OGTT. Consistent with this, published evidence reveals a high degree of intraindividual variability in the OGTT, with a CV of 16.7%, which is considerably greater than the variability for FPG. These factors result in poor reproducibility of the OGTT, which has been documented in multiple studies. The lack of reproducibility, inconvenience, and cost of the OGTT led the ADA to recommend that FPG should be the preferred glucose-based diagnostic test. Note that glucose measurement in the OGTT is also subject to all the limitations described for FPG. An abbreviated screening glucose tolerance test is recommended for all women between their 24th and 28th week of pregnancy. The test consists of 50 g of oral glucose and the measurement of venous plasma glucose 1 hour later. The test may be administered at any time of day and non-fasting. 
A 1-hour plasma glucose of 140 mg/dL or greater indicates the need for a full-scale glucose tolerance test as described above.
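The gestational screening rule above is a single threshold comparison; a minimal sketch (the function name is mine, the 140 mg/dL cut-off is from the text):

```python
def needs_full_ogtt(one_hour_glucose_mgdl: float) -> bool:
    """Gestational screening rule: after a 50 g oral glucose load,
    a 1-hour plasma glucose of 140 mg/dL or greater indicates the
    need for a full-scale glucose tolerance test."""
    return one_hour_glucose_mgdl >= 140.0

print(needs_full_ogtt(132.0))  # False: screen negative
print(needs_full_ogtt(148.0))  # True: proceed to full OGTT
```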

Checking symptomatic episodes:

You measure blood sugar when you are not feeling well to find out how your symptoms correlate with the blood sugar level. High levels are associated with an intoxicated feeling: drowsiness, poor concentration, impaired judgment. Levels above 17 mmol/L (300 mg/dL) are dangerously high; you are likely to feel sleepy at this level, but the most effective way to bring the sugar down is to exercise as vigorously as you can. Capillary levels below 4.5 mmol/L may be associated with hypoglycemic symptoms: you feel strange, anxious, and irritable; a tremor develops if the blood sugar falls lower, and you become desperate to eat something. If you take a quick sugar hit (a glass of orange juice will do) and measure your sugar immediately, you can determine how low the value dropped; as you feel better, do another blood sugar check to find the value that feels normal.

___________

Diabetes and urine glucose monitoring:  

The glucose urine test measures the amount of sugar (glucose) in a urine sample. The presence of glucose in the urine is called glycosuria or glucosuria. After you provide a urine sample, it is tested right away. The health care provider uses a dipstick made with a color-sensitive pad; the color the dipstick changes to tells the provider the level of glucose in your urine. In a normal person, the proximal tubules reabsorb more than 99.9% of the glucose filtered by the glomerulus. When the blood glucose level exceeds about 160 to 180 mg/dL, the proximal tubule becomes overwhelmed and glucose begins to be excreted in the urine. This point is called the renal threshold for glucose. If the renal threshold is so low that even normal blood glucose levels produce glycosuria, the condition is referred to as renal glycosuria. Although used in the past to self-monitor diabetes control, urine glucose testing has been largely replaced by self-monitoring of blood glucose using a small, personal meter, because blood glucose monitoring reflects the blood glucose level much more accurately. However, if you have difficulty obtaining a drop of blood, or some other difficulty performing blood glucose monitoring, your doctor or diabetes educator may suggest that urine glucose monitoring is suitable for you. A urine glucose test determines whether or not glucose is present in the urine. Glucose overflows into the urine only when the blood glucose level is too high for the kidneys to stop it spilling over. In most people, blood glucose levels above 10 mmol/L will cause glucose to appear in the urine; this is the renal threshold. However, the renal threshold for glucose can be lower in some people who are otherwise healthy, during pregnancy, and in people who have a kidney disorder. In these people, glucose may be present in the urine despite a normal blood glucose, which can sometimes make urine glucose tests difficult to interpret.
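The renal-threshold behaviour described above amounts to a simple comparison; a minimal sketch (the function name is mine, and the 10 mmol/L default is the "most people" figure from the text):

```python
def expect_glycosuria(plasma_glucose_mmol: float,
                      renal_threshold_mmol: float = 10.0) -> bool:
    """Glucose spills into urine once plasma glucose exceeds the
    renal threshold (about 10 mmol/L in most people; lower in renal
    glycosuria, pregnancy, or some kidney disorders)."""
    return plasma_glucose_mmol > renal_threshold_mmol

print(expect_glycosuria(7.0))        # False: below threshold, urine negative
print(expect_glycosuria(12.0))       # True: proximal tubule overwhelmed
print(expect_glycosuria(7.0, 6.0))   # True: lowered threshold (renal glycosuria)
```

Note the converse of the second case: a negative urine test cannot distinguish a normal blood glucose from a dangerously low one, which is one of the limitations listed below.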

To perform the test:

1. Collect a small amount of urine;

2. Apply this to the test strip, usually by dipping the strip in the urine sample;

3. Read the test result at the specified time, by comparing the colour change on the test strip with the standard colour range for your brand of test strip. The reference colour chart is usually printed on the container.  

_

Advantages of urine glucose monitoring

1. Urine glucose testing is easy to do: just dip the test strip in the urine and read the result at the allocated time.

2. It is less painful than blood glucose monitoring — no finger pricks to collect blood!

3. Urine test strips are less costly than buying a blood glucose monitor and its test strips.

_

Limitations of urine glucose monitoring:

If monitoring your diabetes control by testing your urine glucose, it’s important to understand the limitations of this method.

1. A urine glucose test does not reflect your blood glucose level at the time of testing; instead, it gives an indication of your blood glucose level over the past several hours. For example, some of the urine present in your bladder may be 2 hours old, and may show glucose even though your blood glucose may have normalised since then. Compare this to a blood glucose test which gives you a reading of your current blood glucose level.

2. A urine glucose test does not give you any information about low blood glucose levels, as glucose is only found in the urine when the blood glucose level is above 10 mmol/L. That is, a negative urine glucose test may be the result of a normal blood glucose level or a dangerously low blood glucose level, with the urine glucose test unable to differentiate between the two situations.

3. The results of a urine glucose test are influenced by the volume and concentration of urine that you pass, which will vary with the amount of fluid you consume and your fluid loss due to such things as heavy sweating or vomiting.

4. Urine glucose tests designed for home use rely on interpreting a colour change to define the result. Subtle colour differences may be difficult to interpret.

5. If a urine glucose test is not read at the specified time after applying the urine to the test strip, then the result is prone to error.

6. Some medications may interfere with the results of urine glucose testing.

_

Urine ketone testing:

People with type 1 diabetes should test their urine for ketones if the blood sugar level is above 240 mg/dL (13.3 mmol/L), during periods of illness or stress, or if symptoms of ketoacidosis develop, such as nausea, vomiting, and abdominal pain. Ketones are acids that are formed when the body does not have enough insulin to get glucose into the cells, causing the body to break down fat for energy. Ketones can also develop during illness, if an inadequate amount of glucose is available (due to skipped meals or vomiting). Ketoacidosis occurs when high levels of ketones are present and can lead to serious complications such as diabetic coma. Urine ketone testing is done with a dipstick, available in pharmacies without a prescription. If you have moderate to large ketones, you should call your healthcare provider immediately to determine the best treatment. You may need to take an additional dose of insulin, or your provider may instruct you to go to the nearest emergency room.

_

Urine protein testing:

Urine is tested for microalbuminuria in diabetes. Microalbuminuria is defined as levels of albumin ranging from 30 to 300 mg in a 24-h urine collection. Overt albuminuria, macroalbuminuria, or proteinuria is defined as a urinary albumin excretion of ≥300 mg/24 h. The presence of albuminuria is a powerful predictor of renal and cardiovascular risk in patients with type 2 diabetes and hypertension. In addition, multiple studies have shown that decreasing the level of albuminuria reduces the risk of adverse renal and cardiovascular outcomes. The pathophysiology is not definitively known, but is hypothesized to be related to endothelial dysfunction, inflammation, or possibly abnormalities in the renin-angiotensin-aldosterone system.

______

Glycosylated (Glycated) Hemoglobin:

Glycated hemoglobin (hemoglobin A1c, HbA1c, A1C, or Hb1c) is a form of hemoglobin that is measured primarily to identify the average plasma glucose concentration over prolonged periods of time. It is formed in a non-enzymatic glycation pathway by hemoglobin's exposure to plasma glucose. When hemolysates of red cells are chromatographed, three or more small peaks named hemoglobin A1a, A1b, and A1c elute before the main hemoglobin A peak. These "fast" hemoglobins are formed by the irreversible attachment of glucose to hemoglobin in a two-step reaction. The percentage of hemoglobin that is glycosylated depends on the average glucose concentration the red cell is exposed to over time. Since the average life of the red cell is 120 days, the percentage of glycosylated hemoglobin gives a good indication of the degree of blood sugar control over the preceding weeks. Hemoglobin A1c is quantitatively the largest peak, so most laboratories measure it selectively, although some laboratories measure all the "fast" hemoglobins. Normal levels of glucose produce a normal amount of glycated hemoglobin. As the average amount of plasma glucose increases, the fraction of glycated hemoglobin increases in a predictable way. This serves as a marker for average blood glucose levels over the months preceding the measurement. Glycation of proteins is a frequent occurrence, but in the case of hemoglobin, a nonenzymatic reaction occurs between glucose and the N-terminus of the beta chain. This forms a Schiff base, which is then converted to 1-deoxyfructose; this rearrangement is known as the Amadori rearrangement. When blood glucose levels are high, glucose molecules attach to the hemoglobin in red blood cells. The longer hyperglycemia persists in the blood, the more glucose binds to hemoglobin in the red blood cells and the higher the glycated hemoglobin. Once a hemoglobin molecule is glycated, it remains that way.
A buildup of glycated hemoglobin within the red cell therefore reflects the average level of glucose to which the cell has been exposed during its life cycle. Measuring glycated hemoglobin assesses the effectiveness of therapy by monitoring long-term serum glucose regulation. At any moment, the glucose attached to the hemoglobin A protein reflects the level of the blood sugar over the last two to three months. The A1c test measures how much glucose is actually stuck to hemoglobin A, or more specifically, what percent of hemoglobin proteins are glycated. Thus, having a 7% A1c means that 7% of the hemoglobin proteins are glycated. A person's A1c level will not change significantly over the course of a few days, but it will shift in response to a change in overall glucose control. It is estimated that the past month accounts for about 50% of an A1c value, so the value can change within just a few weeks. Some researchers state that the major proportion of its value is weighted toward the most recent 2 to 4 weeks. This is supported by data from actual practice showing that the HbA1c level improved significantly as early as 20 days after intensification of glucose-lowering treatment. It has also been noted that at any given time a blood sample contains erythrocytes of varying ages, with different levels of exposure to hyperglycemia. Although older erythrocytes are likely to have had more exposure to hyperglycemia, younger erythrocytes are more numerous. Consequently, BG levels from the preceding 30 days have been shown to contribute approximately 50% to HbA1c, whereas those from the periods 30 to 90 days and 90 to 120 days earlier contribute approximately 40% and 10%, respectively.
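As an illustration, the 50/40/10% time-weighting just described can be expressed as a weighted mean of window-average glucose values. The function below is a rough sketch for illustration only; the function name and example numbers are mine, and this is not a validated model of glycation kinetics:

```python
def weighted_mean_glucose(g_0_30: float, g_30_90: float,
                          g_90_120: float) -> float:
    """Approximate the glucose level 'seen' by HbA1c as a weighted
    mean of window averages (mg/dL), using the approximate weights
    from the text: 50% (days 0-30), 40% (days 30-90), 10% (days 90-120)."""
    return 0.50 * g_0_30 + 0.40 * g_30_90 + 0.10 * g_90_120

# Recent improvement dominates: last month averaged 120 mg/dL
# after two earlier months averaging 180 mg/dL
print(round(weighted_mean_glucose(120, 180, 180), 1))  # 150.0
```

This is why an A1c can fall measurably within a few weeks of a treatment change: the most recent 30 days carry roughly half the weight.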

_

Glycated hemoglobin measurement is not appropriate where there has been a change in diet or treatment within the preceding 6 weeks. Likewise, the test assumes a normal red blood cell aging process and a normal mix of hemoglobin subtypes (predominantly HbA in normal adults). Hence, people with recent blood loss, hemolytic anemia, or genetic differences in the hemoglobin molecule (hemoglobinopathies) such as sickle-cell disease, as well as those who have donated blood recently, are not suitable for this test. Concentrations of hemoglobin A1 (HbA1) are increased both in diabetic patients and in patients with renal failure when measured by ion-exchange chromatography. The thiobarbituric acid method (a chemical method specific for the detection of glycation) shows that patients with renal failure have glycated hemoglobin values similar to those observed in normal subjects, suggesting that the high values in these patients result from the binding of something other than glucose to hemoglobin. In autoimmune hemolytic anemia, concentrations of hemoglobin A1 (HbA1) are undetectable; administration of prednisolone will allow the HbA1 to be detected. The alternative fructosamine test may be used in these circumstances; it reflects an average of blood glucose levels over the preceding 2 to 3 weeks.

_

A number of techniques are used to measure A1c:

•High-performance liquid chromatography (HPLC): The HbA1c result is calculated as a ratio to total hemoglobin by using a chromatogram.

•Immunoassay

•Enzymatic

•Capillary electrophoresis

•Boronate affinity chromatography

Point of care (e.g., doctor’s office) devices use:

•Immunoassay

•Boronate affinity chromatography

_

Indications and use:

Glycated hemoglobin testing is recommended both for (a) checking blood sugar control in people who may be prediabetic and (b) monitoring blood sugar control in patients with more elevated levels, i.e., diabetes mellitus. A significant proportion of people are unaware of their elevated HbA1c level before they have laboratory blood work. For a single blood sample, it provides far more revealing information on glycemic behavior than a fasting blood sugar value; however, fasting blood sugar tests are crucial in making treatment decisions. The American Diabetes Association guidelines, similar to others, advise that the glycated hemoglobin test be performed at least twice a year in patients with diabetes who are meeting treatment goals (and who have stable glycemic control) and quarterly in patients with diabetes whose therapy has changed or who are not meeting glycemic goals. In diabetes mellitus, higher amounts of glycated hemoglobin, indicating poorer control of blood glucose levels, have been associated with cardiovascular disease, nephropathy, and retinopathy. Monitoring HbA1c in type 1 diabetic patients may improve outcomes.

_

 The HbA1c level measures glycemic control during the preceding 2 to 3 months, but it does not provide information about day-to-day glucose levels, nor does it provide immediate feedback to patients about medication or lifestyle choices. For these reasons, self-monitoring of blood glucose (SMBG) levels is considered an important adjunct to HbA1c measurements for achieving and maintaining glycemic control and consequently for reducing diabetes-related complications. Self-monitoring of blood glucose represents an important adjunct to HbA1c because it can distinguish among fasting, preprandial, and postprandial hyperglycemia; detect glycemic excursions; identify hypoglycemia; and provide immediate feedback to patients about the effect of food choices, activity, and medication on glycemic control.

_

HbA1c as a measure of glycemic control: 

Numerous studies have shown that elevated HbA1c is associated with an increased risk of complications in patients with type 1 and type 2 diabetes mellitus and that lowering HbA1c reduces such risk. In the DCCT, reductions in HbA1c were accompanied by proportional reductions in the risk of complications, with clinically meaningful risk reductions observed even when HbA1c was reduced toward the normal range of less than 6%. Similar findings were observed in patients with newly diagnosed type 2 diabetes in the United Kingdom Prospective Diabetes Study, in which intensive blood glucose control yielded a 25% reduction in the risk of microvascular complications (P=0.0099) and a 16% risk reduction for myocardial infarction (P=0.05) compared to conventional therapy. Analysis of these data in terms of HbA1c levels revealed a continuous relationship between HbA1c and the risk of complications, with each 1% decrease in HbA1c resulting in statistically significant reductions of 37% for microvascular complications and 14% for myocardial infarction (P<0.0001). These findings are similar to those derived from the DCCT and indicate that the much larger number of patients with type 2 diabetes benefit from glucose lowering to the same degree as those with type 1 diabetes. Thus, HbA1c serves as a surrogate for the risk of microvascular and macrovascular complications, and these results firmly establish HbA1c as a useful measure of long-term glycemic control.

_

For someone who doesn’t have diabetes, a normal A1c level can range from 4.5 to 6 percent. Someone who’s had uncontrolled diabetes for a long time might have an A1c level above 8 percent. When the A1c test is used to diagnose diabetes, an A1c level of 6.5 percent or higher on two separate tests indicates you have diabetes. A result between 5.7 and 6.4 percent is considered prediabetes, which indicates a high risk of developing diabetes. For most people who have previously diagnosed diabetes, an A1c level of 7 percent or less is a common treatment target. Higher targets may be chosen in some individuals. If your A1c level is above your target, your doctor may recommend a change in your diabetes treatment plan. Remember, the higher your A1c level, the higher your risk of diabetes complications.  Also keep in mind that the normal range for A1c results may vary somewhat among labs. If you consult a new doctor or use a different lab, it’s important to consider this possible variation when interpreting your A1c test results.
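The cut-points above can be collected into a simple interpreter. A minimal sketch follows; the function name and band labels are mine, and, as the text notes, a diagnosis of diabetes requires an elevated result on two separate tests and lab-to-lab variation should be kept in mind:

```python
def interpret_a1c(a1c_percent: float) -> str:
    """Map an A1c value (%) to the diagnostic bands given in the text."""
    if a1c_percent >= 6.5:
        return "diabetes range (confirm on a second test)"
    if a1c_percent >= 5.7:
        return "prediabetes (5.7-6.4%)"
    return "normal range"

print(interpret_a1c(5.4))  # normal range
print(interpret_a1c(6.0))  # prediabetes (5.7-6.4%)
print(interpret_a1c(7.2))  # diabetes range (confirm on a second test)
```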

_

Convert A1c to estimated average glucose (eAG):

The A1c-Derived Average Glucose (ADAG) Study is an international study sponsored by the American Diabetes Association (ADA), the European Association for the Study of Diabetes (EASD), and the International Diabetes Federation (IDF). The objective of the ADAG Study was to define the mathematical relationship between A1c and estimated average glucose (eAG) and to determine whether A1c could be reliably reported as eAG, which would be in the same units as daily self-monitoring. The ADAG Study establishes what had long been assumed but never demonstrated: that A1c does represent average glucose over time. With that relationship demonstrated and defined, health care providers can now report A1c results to patients in the same units that patients use for self-monitoring (i.e., mg/dL), which should benefit clinical care. Reporting glucose control as "average glucose" will help health care providers and their patients better interpret the A1c value in units similar to those patients see regularly through their self-monitoring.

 _

_

New formula to convert your A1c to estimated average blood sugar in either mg/dl or mmol/L:

This new formula is based on CGMS data and is believed to be more accurate than either the DCCT formula or the Nathan formula.

Estimated average blood glucose in mmol/L = (1.583 × A1c) - 2.52

To convert mmol/L to mg/dL, multiply by 18.
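Putting the formula and the factor of 18 together, here is a small sketch; the constants are those given in the text, and the function names are mine:

```python
def eag_mmol(a1c_percent: float) -> float:
    """Estimated average glucose in mmol/L from the formula above:
    eAG = (1.583 x A1c) - 2.52."""
    return 1.583 * a1c_percent - 2.52

def eag_mgdl(a1c_percent: float) -> float:
    """Same estimate in mg/dL (1 mmol/L of glucose = 18 mg/dL)."""
    return eag_mmol(a1c_percent) * 18

# For the common 7% treatment target:
print(round(eag_mmol(7.0), 2))  # 8.56 mmol/L
print(round(eag_mgdl(7.0), 1))  # 154.1 mg/dL
```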

_

Many patients who practice SMBG already see an “average glucose” on their blood glucose meters. 

Is eAG the same thing?

No. An eAG value is unlikely to match the average glucose level shown on a person's meter. Because people with diabetes tend to test when their blood glucose levels are low (first thing in the morning and before meals), the average of the readings on their meter is likely to be lower than their eAG, which represents an average of glucose levels over the full 24 hours, including post-meal periods of higher blood glucose when people are less likely to test. One advantage of using eAG as a measure of glucose control is that it helps patients see more directly the difference between their individual meter readings and their overall glucose management. A range of factors has been postulated to influence the relationship between HbA1c and BG. In particular, the time of BG measurement (fasting, postprandial, etc.) and the frequency and timing of measurement appear to have a significant impact on this relationship. Analysis of data from one clinical study found that, among individual time points, afternoon and evening prandial glucose readings correlated more strongly with HbA1c than morning time points.

_

Estimating A1c from SMBG:

Accuracy and Robustness of Dynamical Tracking of Average Glycemia (A1c) to Provide Real-Time Estimation of Hemoglobin A1c Using Routine Self-Monitored Blood Glucose Data: 2013 study:

Laboratory hemoglobin A1c (HbA1c) assays are typically done only every few months. However, self-monitored blood glucose (SMBG) readings offer the possibility for real-time estimation of HbA1c. Researchers present a new dynamical method tracking changes in average glycemia to provide real-time estimation of A1c (eA1c).  In diabetes, the struggle for tight glycemic control results in large BG fluctuations over time. These fluctuations are the measurable result of the action of a complex dynamical system, influenced by many internal and external factors, including the timing and amount of insulin and other drug therapies, food eaten, physical activity, etc. The macro (human)-level optimization of this system depends on self-treatment behavior. Such an optimization has to be based on feedback utilizing readily available data, such as SMBG. Although HbA1c is the gold standard marker for average glycemia, HbA1c assays typically require a laboratory and are routinely done only every few months. Thus, a method to track changes in average glycemia in between laboratory assessments is needed. SMBG offers this possibility, provided that appropriate algorithms are used to retrieve SMBG data. This report describes a method for tracking changes in average glycemia, based on a conceptually new approach to the retrieval of SMBG data. The principal premise of this approach is the understanding of HbA1c fluctuation as the measurable effect of the action of an underlying dynamical system. SMBG provides occasional glimpses at the state of this system, and, using these measurements, the hidden underlying system trajectory can be reconstructed for each individual. Using compartmental modeling, researchers constructed a new two-step algorithm that includes real-time eA1c from fasting glucose readings, updated with any new incoming fasting SMBG data point, and initialization and calibration of the estimated HbA1c trace with daily SMBG profiles taken approximately every month. 
A conceptually new, clinically viable procedure has been developed for real-time tracking of average glycemia from self-monitoring data. The average glucose tracing is then converted into running estimates of A1c, which can be presented to the patient daily. This technique allows for simple parameterization of the dynamics of average glycemia and thereby HbA1c, has a robust estimation procedure capable of working on sparse readings of fasting BG and occasional seven-point SMBG profiles, and has an inherent capability for calibration of the algorithm using SMBG profiles and/or reference HbA1c readings. It should be emphasized, however, that this procedure is not intended as a substitute for laboratory assessments of HbA1c—it should be viewed as a surrogate measure that allows convenient tracing of average glucose, readily implementable in a point-of-care SMBG device.
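To make the tracking idea concrete, here is a deliberately simplified sketch: a plain exponential moving average of fasting SMBG readings, with the running average converted to an estimated A1c via the ADAG-style linear relation eAG (mg/dL) = 28.7 × A1c - 46.7. This is not the authors' two-step compartmental algorithm; the function name, smoothing constant, and initial value are all illustrative choices of mine:

```python
def track_ea1c(fasting_bg_mgdl, alpha=0.1, init_avg=154.0):
    """Toy illustration of tracking average glycemia from sparse
    fasting SMBG readings with an exponential moving average, then
    converting the running average to an estimated A1c (%) using
    the ADAG-style inverse relation A1c = (eAG + 46.7) / 28.7.
    NOT the published compartmental algorithm; a conceptual sketch."""
    avg = init_avg
    estimates = []
    for bg in fasting_bg_mgdl:
        avg += alpha * (bg - avg)              # smooth toward each new reading
        estimates.append((avg + 46.7) / 28.7)  # running eA1c in percent
    return estimates

readings = [150, 160, 145, 155, 170]
print([round(x, 2) for x in track_ea1c(readings)])
```

The real method adds initialization and calibration with monthly seven-point SMBG profiles and reference HbA1c readings, which a one-line moving average cannot capture; the sketch only shows how sparse fasting readings can drive a daily-updated estimate.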

_

Potential Alternatives to HbA1C:

Two other analytes, fructosamine and 1,5-anhydroglucitol (1,5-AG), have been evaluated as intermediate markers of glycemia. The fructosamine assay measures glycation of serum proteins, principally albumin, which have a shorter half-life than hemoglobin. Thus, fructosamine provides an index of glycemia over a shorter period (approximately 2 weeks) than HbA1c. Each 75 µmol/L change in fructosamine corresponds to a change of approximately 60 mg/dL in blood sugar or 2% in HbA1c. Unlike A1c, fructosamine is not affected by the varying length of red blood cell lifespans in different individuals. Fructosamine is especially useful in people who are anemic, or during pregnancy, when hormonal changes cause greater short-term fluctuations in blood glucose levels. The accuracy and clinical utility of fructosamine have been questioned because of interference from various substances. The 1,5-AG assay measures serum levels of a compound that competes with glucose for reabsorption at the renal tubule and was recently approved by the US Food and Drug Administration. Used in Japan for more than a decade, 1,5-AG appears to be less sensitive to small changes in glycemic control at high HbA1c levels. It cannot identify hypoglycemia, and results are influenced by impaired renal function. Future studies may support the use of 1,5-AG as a means to detect postprandial glycemic excursions.
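The fructosamine rule of thumb quoted above (each 75 µmol/L change corresponds to roughly 60 mg/dL of average glucose or 2% of HbA1c) is a simple proportion; a minimal sketch, with the function name being mine:

```python
def fructosamine_delta_effects(delta_umol_per_l: float):
    """Apply the rule of thumb from the text: every 75 umol/L change
    in fructosamine corresponds to approximately 60 mg/dL of average
    blood glucose, or approximately 2% of HbA1c."""
    steps = delta_umol_per_l / 75.0
    return steps * 60.0, steps * 2.0  # (mg/dL glucose, % HbA1c)

# A 150 umol/L fall in fructosamine (two 75 umol/L steps):
dg, da1c = fructosamine_delta_effects(150)
print(dg, da1c)  # 120.0 4.0
```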

_________

Advantages and disadvantages of FPG, OGTT and HbA1c vis-à-vis diagnosis of diabetes:

_

 FPG for the diagnosis of diabetes:

Advantages:

•Glucose assay easily automated

•Widely available

•Inexpensive

•Single sample

Disadvantages:

•Patient must fast 8 hr

•Large biological variability

•Diurnal variation

•Sample not stable

•Numerous factors alter glucose concentrations, e.g., stress, acute illness

•No harmonization of glucose testing

•Concentration varies with source of the sample (venous, capillary, or arterial blood)

•Concentration in whole blood is different from that in plasma

•Guidelines recommend plasma, but many laboratories measure serum glucose

•FPG less tightly linked to diabetes complications (than A1c)

•Reflects glucose homeostasis at a single point in time

_

OGTT for the diagnosis of diabetes:

Advantages

•Sensitive indicator of risk of developing diabetes

•Early marker of impaired glucose homeostasis

Disadvantages:

•Lacks reproducibility

•Extensive patient preparation

•Time-consuming and inconvenient for patients

•Unpalatable

•Expensive

•Influenced by numerous medications

•Subject to the same limitations as FPG, namely, sample not stable, needs to be performed in the morning, etc.

_

A1c for the diagnosis of diabetes:

Advantages

•Subject need not be fasting

•Samples may be obtained any time of the day

•Very little biological variability

•Sample stable

•Not altered by acute factors, e.g., stress, exercise

•Reflects long-term blood glucose concentration

•Assay standardized across instruments

•Accuracy of the test is monitored

•Single sample, namely whole blood

•Concentration predicts the development of microvascular complications of diabetes

•Used to guide treatment

Disadvantages:

•May be altered by factors other than glucose, e.g., change in erythrocyte life span, ethnicity

•Some conditions interfere with measurement, e.g., selected hemoglobinopathies

•May not be available in some laboratories/areas of the world

•Cost

____________

Self monitoring (measurement) of blood sugar (SMBG) definition:

SMBG is the regular checking of blood glucose levels by patients with diabetes mellitus themselves. SMBG can be performed at home, work, or elsewhere: the process involves pricking a fingertip to collect a drop of blood, absorbing the blood with a test strip, and inserting the test strip into an electronic glucose monitor (glucometer), which then displays a number on its screen. Glucose meters are widely used in hospitals, outpatient clinics, emergency rooms, and ambulatory medical care (ambulances, helicopters, cruise ships), besides home self-monitoring. Glucose meters provide fast analysis of blood glucose levels and allow management of both hypoglycemic and hyperglycemic disorders with the goal of adjusting glucose to a near-normal range, depending on the patient group. Because SMBG is invasive and painful, non-invasive techniques have been developed to determine the glucose level in body fluids. Non-invasive techniques can be used by patients themselves or by medical personnel. So if I have to define SMBG today, I would call it the measurement of glucose in blood, either directly (invasive) or indirectly through other body fluids (non-invasive), by patients themselves, their caregivers, or medical personnel outside a laboratory. Thus blood glucose measured by nurses with a glucometer or a non-invasive technique would also fall under the umbrella of SMBG, as it is not done in a laboratory.

__________

SMBG technique:

__

First drop vs. second drop of blood for SMBG:

Many insulin-treated patients have to perform SMBG for a lifetime, some of them every day. Discarding the first drop of blood and refraining from squeezing the finger makes measurements more complex and necessitates deeper and more painful punctures. International guidelines and studies about SMBG (e.g., the American Diabetes Association [ADA] and the Diabetes UK guidelines) recommend using the first drop of blood after washing the hands. Some also allow squeezing or milking the finger. The manufacturer's instructions for the meter used in the study described below include washing the hands with warm water and soap and drying them; the first drop of blood can then be used after gently squeezing the finger. In daily practice, patients cannot or do not always wash their hands before performing SMBG. International guidelines do not discuss these situations.

_

The Use of the First or the Second Drop of Blood: squeezing or not: a study:

There is no general agreement regarding the use of the first or second drop of blood for glucose monitoring. This study investigated whether capillary glucose concentrations, as measured in the first and second drops of blood, differed ≥10% from a control glucose concentration in different situations. Capillary glucose concentrations were measured in two consecutive drops of blood in 123 patients with diabetes in the following circumstances: without washing hands, after exposing the hands to fruit, after washing the fruit-exposed hands, and during application of different amounts of external pressure around the finger. The results were compared with control measurements. Not washing hands led to a difference in glucose concentration of ≥10% in the first and second drops of blood in 11% and 4% of the participants, respectively. In fruit-exposed fingers, these differences were found in 88% and 11% of the participants, respectively. Different external pressures led to ≥10% differences in glucose concentrations in 5–13% of the participants. The authors recommend washing the hands with soap and water, drying them, and using the first drop of blood for self-monitoring of blood glucose. It does not matter which finger is used for glucose measurements. If washing hands is not possible, and they are not visibly soiled or exposed to a sugar-containing product, it is acceptable to use the second drop of blood after wiping away the first drop. External pressure may lead to unreliable readings, so firm squeezing of the finger should be avoided.
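The study's ≥10% deviation criterion is simple arithmetic; a minimal sketch (function names are illustrative, not from the study):

```python
def percent_difference(sample, control):
    """Relative deviation of a capillary reading from the control value (%)."""
    return abs(sample - control) / control * 100.0

def deviant(sample, control, threshold=10.0):
    """True when a drop's reading differs by >= 10% from the control,
    the cutoff used in the study."""
    return percent_difference(sample, control) >= threshold

print(deviant(121, 110))  # True  (10% above control)
print(deviant(112, 110))  # False (~1.8% above control)
```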

_

Two other studies investigated the differences between glucose concentrations in the first and the second drops of blood. Both of these studies, however, involved volunteers without diabetes. In one study of 53 volunteers, no differences were found in the readings when the hands were clean. Glucose readings for 25 volunteers in the other study were shown to be greatly affected when the fingers were exposed to glucose (i.e., fruit); even the third drop of blood could not be used in these cases. Fruhstorfer and Quarder also investigated the influence of milking the finger in 10 volunteers without diabetes and concluded that milking the finger gives correct glucose values. In another study, the authors applied two different pressures to explore whether there would be any influence on the capillary glucose concentration: 40 mmHg, which achieves venous stasis, and 240 mmHg, which is above the systolic pressure of the participants. That study showed more deviation between the glucose concentrations at the higher pressure.

___

Lancet: Finger-pricking (lancing) device:

A finger-pricking (lancing) device holds a small, sharp needle called a lancet, which is used to get the drop of blood. The device can often be set to different depths for different people. Adjustable depth settings are particularly good for young children, who have tender skin and may not need much lancing depth. Remember to change the lancet every day. A sharp lancet helps prevent injury and infection.

_

_

How often do you recommend changing lancets?

In the early days of blood glucose self-monitoring, pricking the finger to get a “hanging drop” of blood often hurt and left a scar. This was because the procedure created a laceration, rather than a puncture. We’ve come a long way since then, with improved spring-loaded devices, strips that require less blood and lancets that are sharper and usually coated with a lubricant. Lancets are now much more comfortable to use and less likely to cause a scar. Today’s lancets are so good that they are commonly reused. The reasons to reuse lancets are obvious: It’s cheaper and quicker not to have to change them each time; it’s easier not to carry extra lancets around; and, for some users, the lancets actually seem more comfortable after being “broken in.” Since the lancet goes into the subcutaneous space and is not being used intravenously, and since blood is flowing out of the body, sterility is generally not an issue. The rate of infections and injury from lancets is extremely low. Many people, however, are not able to reuse lancets because they feel discomfort or they experience scarring if the lancet is not in optimal condition. Once a lancet has been used, its surface is rougher, the lubricant wears off and the point is duller. Any handling of the lancet, such as cleaning with alcohol, tends to worsen it. For these individuals, using a new lancet each time is well worthwhile.

_

Fingertip puncture pain:

Although the fingertip possesses well-developed capillaries that provide enough blood for the test, pain receptors concentrated in the fingertip cause significant pain when the skin is punctured. As a result, some patients avoid the self-test, which can lead to failure of glucose control. In a survey by Park et al., 55% of diabetes patients responded to the survey questions, and only 35% of responders performed the self-test; these results suggest that only about 20% of patients may perform the routine self-test to control their blood glucose levels. SMBG using capillary blood sampled from the finger is a standard technique for the management of diabetes. However, it induces pain and may cause the patient to avoid the test, thereby leading to failure to maintain appropriate glucose levels. The pain experienced during sampling is therefore considered a significant problem, and a few alternative methods have been suggested. Noninvasive, bloodless glucose measurement techniques have been evaluated; however, their accuracy and consistency have not been proven in clinical application, and the higher costs of commercializing these techniques must also be considered. Capillary blood sampling from an alternative site, such as the forearm, could minimize pain and perhaps be a practical solution to this problem. While blood sampling from the forearm induces significantly less pain than sampling from the finger, only a small amount of blood, usually less than a few microliters, can be obtained because of the low density of capillaries in the forearm. This small volume of blood is not sufficient for traditional glucometers. Fortunately, modern high-end but inexpensive glucometers can provide accurate glucose measurements within 5 seconds using less than 1 microliter of blood. Therefore, to minimize the pain during glucose self-testing, blood sampling from the forearm is a feasible and practical option.

_

Take the pain out of blood sugar checks:

Pricking fingers is a vital part of daily diabetes management. In a recent study, up to 35% of the participants stated that pain is the main reason people with diabetes refrain from regular blood glucose testing. One factor contributing to greater pain when pricking the finger is incorrect handling of the lancing device. You can test more comfortably with these seven easy steps:

1. Ensure hands are clean and dry.

2. Lance on the side of the fingertip rather than the pad.

3. Keep the skin taut by pressing the lancing device firmly against the skin.

4. Select a penetration depth that is as shallow as possible while still producing blood.

5. Alternate fingers daily and take the necessary steps to ensure good blood circulation.

6. Consider testing beyond the fingertip. If you and your healthcare professional agree that checking from other sites is right for you, you may experience less pain after a blood sugar test if you use your palm, forearm or upper arm instead of your sensitive fingertips.

7. Use a fresh lancet. Today’s lancets are so tiny that just a single use can bend or dull the tips. As a result, they can hurt more if you try to reuse them.

_

Laser lancet:

It is a laser lancing device that uses a laser beam to draw a drop of blood rather than using a steel lancet.

How does it work?

The fingertip is placed over the disposable lens cover from which the laser beam emerges. Water in the skin absorbs the energy from the laser beam, instantly vaporizing a tiny amount of tissue and drawing blood.

_______

Why alternate site?

During the past decade, several studies have clearly established the importance of frequent daily self-monitoring of blood glucose to control one’s glycemic condition and thereby reduce the onset of complications caused by diabetes. Pain associated with finger lancing is one of the major barriers to frequent daily testing. Consequently, it has been argued that skin lancing at less sensitive parts of the body would increase testing compliance. Suzuki was the first to perform such alternate-site testing. In response to the need for less painful testing, several manufacturers have now released products that are specifically designed to be used at body sites other than the fingertip.

_

Fingertip vs. alternate site:

Capillary blood glucose levels at the fingertip have been shown to correlate well with systemic arterial blood glucose levels. During times of blood glucose stability, identical glucose levels were demonstrated from alternate sites (e.g., forearm) as compared with fingertip samples. However, at times of rapid change, mainly due to blood flow variability, levels from alternate sites differ considerably. Capillary blood glucose measured from the forearm is lower than fingertip values at times of rapid increases (>2 mg/dL/min) in systemic blood glucose and higher during rapid decreases. Samples from the dorsal forearm have been shown to correspond better to fingertip values than volar forearm samples. The only exception for alternate site testing is the palm. The skin of the palm is in the same category, hairless or glabrous skin, as the fingertip, and the two share the same amount of blood flow, which is considerably more (five to 20 times) than the blood flow to most alternate sites such as the forearm. In that respect, blood flow to the upper dermal regions of the forearm and abdomen has been reported to be comparable.

_

Alternate sites:

Many children prick sites other than the fingers or toes because they may not hurt as much. The most common alternate site is the forearm. Other places to get blood include the fleshy part of the hand, the upper arm, the thigh, and the back of the calf. The lancet must be dialed to the maximum depth to get enough blood from these sites, and you need a meter that works for these testing sites. The main problem with not using the fingertips is that blood flow through the arm is slower than through the fingers. The slower blood flow means the blood sugar value from the arm lags about 10 minutes behind the fingertip value.

_

Alternate site testing: 

Several blood glucose meters are now available that use sites other than the finger to obtain blood samples in an effort to reduce the discomfort involved with finger sticks. Monitoring at alternate sites, such as the forearm, palm of the hand or thigh, may give slightly lower results than those taken at the fingertips, since they may sample venous blood rather than capillary blood. While this should not be a problem if the patient uses one or the other site exclusively, the between-test variability will increase if numerous sites (such as fingertips and forearm sites) are used. In addition, during times when the blood glucose concentration is either rising rapidly (such as immediately after food ingestion) or falling rapidly (in response to rapidly acting insulin or exercise), blood glucose results from alternate sites may give significantly delayed results compared with finger stick readings. In comparison, blood samples taken from the palm near the base of the thumb (thenar area), demonstrate a closer correlation to fingertip samples at all times of day, and during periods of rapid change in BG levels.

_

Alternate site testing should only be used when blood sugar is stable:

•Immediately before a meal

•When fasting

•Near bedtime

_

Always check from your fingertip, however, when blood sugar may be changing:

1. Following a meal, when blood sugar is rising quickly

2. After exercise

3. Whenever you think your blood sugar might be low or falling

4. When you have just taken insulin

5. When the results do not agree with the way you feel

6. When you are ill

7. When you are under stress

Also, you should never use results from an alternative sampling site to calibrate a continuous glucose monitor (CGM), or in insulin dosing calculations.

_

Forearm meter:

_

Whole-Blood Glucose Testing at Alternate Sites: 2001 study:

Glucose values and hematocrit of capillary blood drawn from fingertip and forearm:

In this cross-sectional study of 50 nonfasting subjects whose blood glucose concentration changed to various degrees during the experiment, no significant glucose difference was observed between the capillary beds of the forearm and fingertip, regardless of whether glucose was assayed with HemoCue or the Sof-Tact Blood Glucose System. On the other hand, Hb concentration and Hct were found to be significantly higher in the capillary blood of the forearm. No explanation has yet been given for the occurrence of Hb concentration differences across the integument.

_

Forearm blood glucose testing in diabetes mellitus: 2002 study:

Self monitoring of blood glucose plays a vital role in the treatment plan of children with diabetes mellitus. Regular self blood glucose monitoring enables the appropriate changes to be made in the treatment and management of the child’s diabetes to meet individual goals and needs. Barriers to frequent self monitoring include the pain and trauma associated with the finger prick necessary to obtain blood for the test. Non-compliance with blood glucose monitoring is common, especially in adolescents. Although modern blood glucose meters only require a small sample of blood, monitoring remains a problem. Using an alternate site for sampling, namely the forearm, may be beneficial to the patient and reduce the level of pain they experience. The main objective of the study was to assess the accuracy of a forearm testing device (SoftSense) in a paediatric population, in comparison to a standard reference laboratory method. Blood glucose measurements from samples taken from the forearm and the finger were compared in an outpatient setting from 52 children and adolescents with diabetes mellitus aged 6–17 years. Opinions on forearm sampling were collected by questionnaire.  Blood glucose results obtained from forearm sampling correlated well with results from the finger measured by the Yellow Springs Instrument analyser. Error grid analysis showed that 100% of measurements were clinically acceptable; 61% of children reported that forearm testing was painless and 19% that it was less painful than finger prick testing.

Conclusion: Forearm testing is an acceptable alternative to finger prick testing for blood glucose measurement in children and adolescents.

 _________

What if I can’t get a drop of blood?

If you don’t get blood from your fingertip, try washing your hands in warm water to get the blood flowing. Then dangle your hand below your heart for a minute. Prick your finger quickly and then put your hand back down below your heart. You might also try slowly squeezing the finger from the base to the tip. If your lancing device has an adjustable depth setting, increase it by one level. Use a new lancet every time you check your blood glucose.

_

Your fingertips may get sore from frequent pricking for blood sugar testing. To help prevent sore fingertips:

1. Always prick the side of your finger. Do not prick the tip of your finger. This increases the pain, and you may not get enough blood to do the test accurately. Also, do not prick your toes to get a blood sample. This can increase your risk of getting an infection in your foot.

2. Don’t squeeze the tip of your finger. If you have trouble getting a drop of blood large enough to cover the test area of the strip, hang your hand down below your waist and count to 5. Then squeeze your finger, beginning close to your hand and moving outward toward the tip of your finger.

3. Use a different finger each time. Keep track of which finger you stick so that you don’t use some fingers more than others. If a finger becomes sore, avoid using it to test your blood sugar for a few days.

4. Use a different device. If you are having trouble with sore fingers, you may want to try a meter that obtains a blood sample from sites other than the fingers, such as the palm of the hand or the forearm.

_

What supplies are needed for SMBG? 

Doing a blood test requires a method of pricking the skin to get a drop of blood as well as a method of reading the results. Results are read using test strips that are put in a blood glucose meter.

______

HOW TO PERFORM BLOOD SUGAR TESTING:

To test your blood sugar level, collect your blood glucose meter, a test strip and lancing device. The following steps include general guidelines for testing blood sugar levels; you should get specific details for your blood glucose monitors from the package insert or your healthcare provider. Never share blood glucose monitoring equipment or fingerstick lancing devices. Sharing of this equipment could result in transmission of infection, such as hepatitis B.

1. Wash hands with soap and warm water. Dry hands.

2. Prepare the lancing device by inserting a fresh lancet. Lancets that are used more than once are not as sharp as a new lancet, and can cause more pain and injury to the skin.

3. Prepare the blood glucose meter and test strip (instructions for this depend upon the type of glucose meter used).

4. Choose your spot—don’t check from the same finger all the time.

5. Use the lancing device to obtain a small drop of blood from your fingertip or alternate site (like the skin of the forearm). Alternate sites are often less painful than the fingertip. However, results from alternate sites are not as accurate as fingertip samples when the blood glucose is rising or falling rapidly. If you have difficulty getting a good drop of blood from the fingertip, try rinsing your fingers with warm water, shaking the hand below the waist, or squeezing (“milking”) the fingertip.

6. Apply the blood drop to the test strip in the blood glucose meter. The results will be displayed on the meter after several seconds.

7. View your test result and take the proper steps if your blood sugar is too high or low, based on your healthcare professionals’ recommendations.

8. Dispose of the used lancet in a puncture-resistant sharps container (not in household trash).

9. Record the results in a logbook, hold them in the meter’s memory or download to a computer so you can review and analyze them later.
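Step 9 above, recording and reviewing results, can be sketched as a hypothetical minimal electronic logbook (the function names and the 70–180 mg/dl band are illustrative only; actual targets should come from your healthcare team):

```python
from datetime import datetime

# A minimal in-memory logbook: each entry records when and what was measured.
log = []

def record(glucose_mgdl, note=""):
    log.append({"time": datetime.now(), "glucose": glucose_mgdl, "note": note})

def review(low=70, high=180):
    """Mean of logged values plus counts below/above an illustrative
    70-180 mg/dl band."""
    values = [entry["glucose"] for entry in log]
    mean = sum(values) / len(values)
    return mean, sum(v < low for v in values), sum(v > high for v in values)

record(95, "fasting")
record(210, "2 h after lunch")
record(64, "before dinner")
print(review())  # (123.0, 1, 1)
```

Meter memories and diabetes management software do essentially this kind of aggregation, which is what makes pattern review possible.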

______

What affects the Test: 

Reasons you may not be able to have the test or why the results may not be helpful include:

1. Alcohol in the drop of blood. If you clean your skin with rubbing alcohol, let the area dry completely before sticking it with the lancet.

2. Water or soap on your finger.

3. Squeezing your fingertip.

4. A drop of blood that is either too large or too small.

5. Very low (below 40 mg/dL or 2.2 mmol/L) or very high (above 400 mg/dL or 22.2 mmol/L) blood sugar levels.

6. Humidity or a wet test strip. Do not store your test strips in the washroom. When you remove a strip from the bottle, promptly secure the lid back on the bottle to prevent humidity from damaging the unused strips.

__________

Glucometer overview:

_

_

A glucose meter (or glucometer) is a medical device for determining the approximate concentration of glucose in the blood. (Glucose can also be estimated by dipping a strip of glucose test paper into the sample and comparing the resulting color against a reference chart.) It is a key element of home blood glucose monitoring by people with diabetes mellitus or hypoglycemia. A small drop of blood, obtained by pricking the skin with a lancet, is placed on a disposable test strip that the meter reads and uses to calculate the blood glucose level. The meter then displays the level in mg/dl or mmol/l. Blood glucose meters today are small, portable, and easy to use. The mark of a good meter is one that the patient will use regularly and that returns accurate and precise results. Over the past few years the trend with blood glucose meters has been to maximize patient comfort and convenience by reducing the volume of the blood sample required. The blood sample size is now small enough that alternate-site testing is possible. This eliminates the need to obtain blood from the fingers and greatly reduces the pain associated with daily testing. Accuracy and precision have been improved by using better test strips, electronics, and advanced measurement algorithms. Other conveniences include speedy results, edge-fill strips, and illuminated test strip ports, to name just a few. Although the cost of using blood glucose meters seems high, it is believed to be a cost benefit relative to the avoided medical costs of the complications of diabetes.

_

Benefits of glucometer:

There are many benefits of using a glucometer for diabetics:

1. It allows diabetics to take care of themselves without needing to visit doctors and laboratories regularly.

2. It works towards promoting well-being of the patient.

3. It helps to detect and confirm hypoglycemia.

4. These meters ensure better understanding of medications.

5. The meters help in altering medications.

6. It also helps in detecting infections. Since high blood sugars may be a sign of infection or illness, timely assistance can save many health problems.

 _

Disadvantages of home blood glucose testing:

The disadvantages are mainly seen when the patient either lacks the motivation to test or does not have sufficient education on how to interpret the results to make good use of home testing equipment. Where this is the case, the following disadvantages may outweigh the potential benefits:

1. Anxiety about one’s blood sugar control and state of health

2. The physical pain of finger pricking

3. Cost

_

There are several key characteristics of glucose meters which may differ from model to model:

•Size: Most meters are now approximately the size of the palm of the hand and are battery-powered.

•Test strips: A consumable element containing chemicals that react with glucose in the drop of blood is used for each measurement. For some models this element is a plastic test strip with a small spot impregnated with glucose oxidase and other components. Each strip is used once and then discarded. Instead of strips, some models use discs, drums, or cartridges that contain the consumable material for multiple tests.

•Coding: Since test strips may vary from batch to batch, some models require the user to manually enter a code found on the vial of test strips or on a chip that comes with them. By entering the code or inserting the chip into the glucose meter, the meter is calibrated to that batch of test strips. However, if this process is carried out incorrectly, the meter reading can be inaccurate by up to 4 mmol/L (72 mg/dL). The implications of an incorrectly coded meter can be serious for patients actively managing their diabetes, and may place them at increased risk of hypoglycemia. Alternatively, some test strips contain the code information in the strip itself; others have a microchip in the vial of strips that can be inserted into the meter. These last two methods reduce the possibility of user error. One manufacturer has standardized its test strips around a single code number, so that, once set, there is no need to further change the code in its older meters, and in some of its newer meters there is no way to change the code.

•Volume of blood sample: The size of the drop of blood needed by different models varies from 0.3 to 1 μl. (Older models required larger blood samples, usually defined as a “hanging drop” from the fingertip.) Smaller volume requirements reduce the frequency of unproductive pricks.

•Alternative site testing: Smaller drop volumes have enabled “alternate site testing” — pricking the forearms or other less sensitive areas instead of the fingertips. Although less uncomfortable, readings obtained from forearm blood lag behind fingertip blood in reflecting rapidly changing glucose levels in the rest of the body.

•Testing times: The time it takes to read a test strip ranges from 3 to 60 seconds for different models.

•Display: The glucose value in mg/dl or mmol/l is shown on a digital display. The preferred measurement unit varies by country: mg/dl is preferred in the U.S., France, Japan, Israel, and India, while mmol/l is used in Canada, Australia, China, and the UK. Germany is the only country where medical professionals routinely operate in both units of measure. (To convert mmol/l to mg/dl, multiply by 18; to convert mg/dl to mmol/l, divide by 18.) Many meters can display either unit of measure; there have been a couple of published instances in which a person with diabetes was misled into the wrong action by assuming that a reading in mmol/l was really a very low reading in mg/dl, or the converse. In general, if a value is presented with a decimal point it is in mmol/l; without a decimal, it is most likely mg/dl.
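The factor-of-18 conversion and the decimal-point heuristic described above can be expressed directly (function names are illustrative):

```python
MGDL_PER_MMOLL = 18.0  # glucose molar mass ~180 g/mol gives the factor of 18

def mmoll_to_mgdl(value):
    """Convert a glucose reading from mmol/l to mg/dl."""
    return value * MGDL_PER_MMOLL

def mgdl_to_mmoll(value):
    """Convert a glucose reading from mg/dl to mmol/l."""
    return value / MGDL_PER_MMOLL

def looks_like_mmoll(display):
    """Heuristic from the text: a displayed value with a decimal point
    is usually mmol/l; a whole number is most likely mg/dl."""
    return "." in str(display)

print(mmoll_to_mgdl(5.5))       # 99.0
print(mgdl_to_mmoll(99))        # 5.5
print(looks_like_mmoll("5.5"))  # True
```

Note how the heuristic explains the published mix-ups: a display of 5.5 read as mg/dl would wrongly suggest profound hypoglycemia.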

•Clock/memory: All meters now include a clock, set by the user for date and time, and a memory for past test results. The memory is an important aspect of diabetes care, as it enables the person with diabetes to keep a record of management and look for trends and patterns in blood glucose levels over days and weeks. Most meters can display an average of recent glucose readings. A known deficiency of all current meters is that the clock is often not set to the correct time (e.g., because of time changes or static electricity) and can therefore misrepresent the time of past test results, making pattern management difficult.

•Data transfer: Many meters now have more sophisticated data handling capabilities. Many can be downloaded by a cable or infrared to a computer that has diabetes management software to display the test results. Some meters allow entry of additional data throughout the day, such as insulin dose, amounts of carbohydrates eaten, or exercise. A number of meters have been combined with other devices, such as insulin injection devices, PDAs, cellular transmitters and Game Boys. A radio link to an insulin pump allows automatic transfer of glucose readings to a calculator that assists the wearer in deciding on an appropriate insulin dose.

_

Blood glucose vs. plasma glucose:

Glucose levels in plasma (one of the components of blood) are generally 10%–15% higher than glucose measurements in whole blood (and even more after eating). This is important because home blood glucose meters measure the glucose in whole blood while most lab tests measure the glucose in plasma. Currently, there are many meters on the market that give results as “plasma equivalent,” even though they are measuring whole blood glucose. The plasma equivalent is calculated from the whole blood glucose reading using an equation built into the glucose meter. This allows patients to easily compare their glucose measurements in a lab test and at home. It is important for patients and their health care providers to know whether the meter gives its results as “whole blood equivalent” or “plasma equivalent.”

_

Glucose meters vary in their method of analysis. Some meters take a fixed volume of patient whole blood, lyse the cells, and analyze the amount of glucose in that volume of lysate. Other meters utilize a series of absorbent pads to separate the cellular portion of a sample from the serum/plasma portion. This allows only serum/plasma to react with the enzymatic reagents. In order to harmonize glucose results, consensus recommends reporting serum/plasma-based results from glucose meters such that the value will most closely match that of a laboratory method using a serum/plasma sample. Glucose meter whole blood lysate results must therefore be corrected to serum/plasma by either applying a fixed mathematical offset to obtain a “plasma-corrected result” (assuming a normal hematocrit) or correcting the whole blood lysate result using the patient’s actual hematocrit. There are meters on the market that use both types of correction. However, it is more common for manufacturers whose meters separate the cellular portion of the sample to set the calibration of the meter against a laboratory method in order to report a “plasma-calibrated” result. The differences between these various calibration and correction functions are one source of variability among the many glucose meter models when analyzing the same specimen.
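The "fixed mathematical offset" approach described above can be sketched as follows. The 1.11 factor corresponds to the roughly 11% by which plasma glucose exceeds whole-blood glucose at a normal hematocrit; it is an approximation for illustration, not any manufacturer's actual calibration:

```python
PLASMA_FACTOR = 1.11  # plasma glucose runs ~10-15% above whole blood

def plasma_equivalent(whole_blood_glucose_mgdl):
    """Apply a fixed mathematical offset to a whole-blood lysate result,
    assuming a normal hematocrit (an approximation)."""
    return whole_blood_glucose_mgdl * PLASMA_FACTOR

print(round(plasma_equivalent(100), 1))  # 111.0
```

A meter correcting with the patient's actual hematocrit, or calibrating its strips directly against a laboratory plasma method, would replace this fixed factor, which is one reason different meter models disagree on the same specimen.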

_

Measurement of glucose content in plasma from capillary blood in diagnosis of diabetes mellitus: a 2003 study:

Overall, there is good correlation between glucose values obtained from ear capillary blood and those from peripheral venous plasma, but there are considerable individual differences. Results obtained with these two methods are generally not interchangeable, and converted values should not be used in the diagnosis of diabetes mellitus because of the risk of misclassification. The aim of this study was to investigate whether these differences might be smaller if measurements were taken on the plasma phase of capillary blood and expressed directly as capillary plasma results, and if finger capillary blood were used instead of ear capillary blood. The Hitachi 717 instrument was used to measure glucose concentrations in venous plasma, the Cobas Mira S in capillary whole blood, and the Accu-Chek Inform from Roche in capillary plasma. The conclusions drawn were:

1. Capillary ear blood glucose concentration correlates well with capillary finger blood concentration, and the two sites can be used interchangeably, yielding similar results in the individual patient.

2. Sampling variation is almost the same (approx. 0.16 mmol/L) for capillary plasma and capillary whole blood from finger and ear. Sampling variation for venous plasma measured on the Hitachi instrument was 0.13 mmol/L, not significantly better.

3. The analytical imprecision of glucose measurements on capillary plasma (Accu-Chek Inform) and capillary whole blood (haemolysate method) is almost the same (approx. 2.0%). The analytical imprecision of glucose measurements on venous plasma is 0.9% using a laboratory method and almost twice as high (2.1%) using the Accu-Chek Inform.

4. Determination of capillary plasma values in the finger did not improve the correlation with venous plasma values. Even though average values were in better concordance, individual differences did not change. For some persons, both ear and finger capillary blood measurements deviate significantly from results on venous plasma, such that they cannot be used for diagnosis of diabetes mellitus.

5. The main factor for good correlation is the sampling site: results obtained on plasma and whole blood from the same puncture correlate well.

6. Neither capillary blood nor capillary plasma correlates with the venous plasma method recommended by the American Diabetes Association.

It is concluded that physiologic differences in glucose content of capillary and venous blood prohibit the random use of these two materials in the diagnosis of diabetes.

_

Meter Types:

There are continuous and discrete (single-test) meters on the market today, and implantable and noninvasive meters are in development. Continuous meters are available by prescription only and use a subcutaneous electrochemical sensor to measure glucose at a programmed interval.

_

Electrochemical meter:

Single-test meters use electrochemical or optical reflectometry to measure the glucose level in units of mg/dL or mmol/L. The majority of blood glucose meters are electrochemical. Electrochemical test strips have electrodes where a precise bias voltage is applied with a digital-to-analog converter (DAC), and a current proportional to the glucose in the blood is measured as a result of the electrochemical reaction on the test strip. There can be one or more channels, and the current is usually converted to a voltage by a transimpedance amplifier (TIA) for measurement with an analog-to-digital converter (ADC). The full-scale current measurement of the test strip is in the range of 10µA to 50µA with a resolution of less than 10nA. Ambient temperature needs to be measured because the test strips are temperature dependent.
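
The measurement chain described above can be sketched numerically. The reference voltage, ADC resolution, and feedback-resistor value below are assumed values chosen for illustration; real meters use strip-specific calibration data.

```python
def strip_current_from_adc(adc_code, vref=2.5, adc_bits=16, r_feedback=50_000.0):
    """Recover the test-strip current from the ADC reading of a
    transimpedance amplifier (TIA) output.

    The TIA gives V_out = I_strip * R_f, so I_strip = V_out / R_f.
    The 2.5 V reference, 16-bit resolution, and 50 kOhm feedback
    resistor are illustrative assumptions.
    """
    v_out = adc_code / (2 ** adc_bits - 1) * vref       # ADC code -> volts
    return v_out / r_feedback                            # volts -> amperes

# With these values, full-scale (code 65535) maps to 50 uA, matching the
# 10-50 uA full-scale range quoted above, and one LSB is well under 10 nA.
```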

_

Optical-reflectometry meter:

Optical-reflectometry test strips use color to determine the glucose concentration in the blood. Typically, a calibrated current passes through two light-emitting diodes (LEDs) that alternately flash onto the colored test strip. A photodiode senses the reflected light intensity, which is dependent on the color of the test strip, which, in turn, is dependent on the amount of glucose in the blood. The photodiode current is usually converted to a voltage by a TIA for measurement with an ADC. The full-scale current from the photodiode ranges from 1µA to 5µA with a resolution of less than 5nA. Ambient temperature is required for this method.

_

Meter use for hypoglycemia:

Although the apparent value of immediate measurement of blood glucose might seem to be higher for hypoglycemia than hyperglycemia, meters have been less useful. The primary problems are precision and ratio of false positive and negative results. An imprecision of ±15% is less of a problem for high glucose levels than low. There is little difference in the management of a glucose of 200 mg/dl compared with 260 (i.e., a “true” glucose of 230±15%), but a ±15% error margin at a low glucose concentration brings greater ambiguity with regards to glucose management. The imprecision is compounded by the relative likelihoods of false positives and negatives in populations with diabetes and those without. People with type 1 diabetes usually have glucose levels above normal, often ranging from 40 to 500 mg/dl (2.2 to 28 mmol/l), and when a meter reading of 50 or 70 (2.8 or 3.9 mmol/l) is accompanied by their usual hypoglycemic symptoms, there is little uncertainty about the reading representing a “true positive” and little harm done if it is a “false positive.” However, the incidence of hypoglycemia unawareness, hypoglycemia-associated autonomic failure (HAAF) and faulty counterregulatory response to hypoglycemia make the need for greater reliability at low levels particularly urgent in patients with type 1 diabetes mellitus, while this is seldom an issue in the more common form of the disease, type 2 diabetes mellitus. In contrast, people who do not have diabetes may periodically have hypoglycemic symptoms but may also have a much higher rate of false positives to true, and a meter is not accurate enough to base a diagnosis of hypoglycemia upon. A meter can occasionally be useful in the monitoring of severe types of hypoglycemia (e.g., congenital hyperinsulinism) to ensure that the average glucose when fasting remains above 70 mg/dl (3.9 mmol/l).
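
The ±15% argument can be made concrete with a little arithmetic, using the example readings above:

```python
def error_band(true_glucose_mgdl, imprecision=0.15):
    """Return the (low, high) interval a meter with the given fractional
    imprecision may report for a true glucose value."""
    return (true_glucose_mgdl * (1 - imprecision),
            true_glucose_mgdl * (1 + imprecision))

# A true glucose of 230 mg/dl may read anywhere from about 196 to 264,
# spanning the 200-260 range discussed above: clinically inconsequential.
# A true glucose of 60 mg/dl may read from about 51 to 69, the difference
# between "treat now" and "borderline", which is why the same +/-15%
# matters far more at low concentrations.
```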

 _

False results:

Probably the greatest concern when using glucose meters is false results. All users should be educated about the factors contributing to false results. The Office of In Vitro Diagnostics (OIVD), an office of the FDA, evaluates glucose meters. It evaluates the long-term safety and effectiveness of the analysers and how the devices are used. The OIVD, in consultation with manufacturers and users, has produced a table of common problems encountered when using glucose meters, as seen below. Causes of false results may be patient/sample based or user/device based. Probably the most important advice for any user of a blood glucose meter is to question any result not consistent with the clinical picture. Such a result needs to be investigated and, at a minimum, the test repeated.

Common problems with glucose meter results.

Falsely low results:

• Sensor strip not fully inserted into the meter: always be sure the strip is fully inserted in the meter
• Not enough blood applied to the strip: repeat the test with a new sample
• Patient in shock: treat appropriately; a venous sample should be sent immediately to a laboratory
• Squeezing the fingertip too hard because blood is not flowing: repeat the test with a new sample from a new stick
• Polycythaemia/increased haematocrit: a venous sample should be sent to a laboratory

Falsely high results:

• Patient sample site (for example, the fingertip) contaminated with sugar: always clean the test site before sampling
• Patient is dehydrated: treat appropriately; a venous sample should be sent immediately to a laboratory
• Anemia/decreased haematocrit: a venous sample should be sent to a laboratory

Variable results:

• Test strips/controls stored at temperature extremes: store the kit according to directions
• Sites other than fingertips: results from alternative sites may not match finger-stick results
• Test strips/controls damaged: always inspect the package for cracks, leaks, etc.
• Dirty meter: even small amounts of blood, grease, or dirt on a meter’s lens can alter the reading

Error codes:

• Batteries low on power: change the batteries and repeat the sample collection
• Test will not complete: check that package details, calibration code, and expiry dates are all compatible

 _

Variables:

We make decisions based on the results of our blood glucose measurements – it is therefore important to us that the readings we obtain are true. What then, other than glucose, may affect the outcome of the test?

•System variables:

- Batch-to-batch or strip-to-strip variation of strips

- Meter-to-meter variability

•Testing variables:

- Environment – temperature, humidity, altitude

- User – technique (note hands should be dry and clean before finger-pricking), timing

•Patient variables:

- Blood sample – capillary or venous, red blood cell count

- Dehydration of the patient

- Extreme hypo- or hyperglycemia (if the blood glucose level falls outside the working range of the meter, a ‘LO’ or ‘HI’ message will usually be displayed)

_

Calibration:

Each pack of test strips usually comes with a special ‘calibration code’. This is a correction factor for the meter which is derived by comparing meter response with a standardised laboratory assay or ‘reference method’. For accurate readings it is essential that the meter is recalibrated between batches of strips.  If you choose to check the accuracy of your meter using a ‘quality control’ solution then you must use one specially formulated by the meter manufacturer. It may be easy enough to make up a standard solution with known concentration of glucose; unfortunately though, such home-made standards, usually water-based, do not behave the same way as blood on the test strip.

_

Considerations for Glucose Meter selection:

Feature: Clinical advantage

• Smaller sample size requirement: less painful; permits alternate-site testing
• Alternate-site testing: less discomfort for patients who use their fingertips regularly (e.g., for typing)
• Results in less than 15 seconds: increased convenience

 _

_

How do you choose a Glucose Meter?

There are many different types of meters available for purchase. They differ in several ways, including:

  • accuracy
  • amount of blood needed for each test
  • how easy it is to use
  • pain associated with using the product
  • testing speed
  • overall size
  • ability to store test results in memory
  • likelihood of interferences
  • ability to transmit data to a computer
  • cost of the meter
  • cost of the test strips used
  • doctor’s recommendation
  • technical support provided by the manufacturer
  • special features such as automatic timing, error codes, large display screen, or spoken instructions or results

Talk to your health care provider about the right glucose meter for you, and how to use it.

_

How can you check your meter’s performance? There are three ways to make sure your meter works properly:

1. Use liquid control solutions:

–every time you open a new container of test strips

–occasionally as you use the container of test strips

–if you drop the meter

–whenever you get unusual results

To use a liquid control solution, test a drop of the solution just as you would test a drop of your blood. The value you get should match the value printed on the test strip vial label.

2. Use electronic checks:

 Every time you turn on your meter, it does an electronic check. If it detects a problem it will give you an error code. Look in your meter’s manual to see what the error codes mean and how to fix the problem. If you are unsure if your meter is working properly, call the toll-free number in your meter’s manual, or contact your health care provider.

3. Compare your meter with a blood glucose test performed in a laboratory:

Take your meter with you to your next appointment with your health care provider. Ask your provider to watch your testing technique to make sure you are using the meter correctly. Ask your health care provider to have your blood tested with a laboratory method. If the values you obtain on your glucose meter match the laboratory values, then your meter is working well and you are using good technique.

What should you do if your meter malfunctions?

 If your meter malfunctions, you should tell your health care provider and contact the company that made your meter and strips.

_

What other supplies do you need?

All meters require test strips to operate – a small chemically treated strip that slides into the meter. After insertion, a drop of blood is placed on the opposite end of the strip that protrudes from the meter, and the meter reads the glucose level and displays the number on the screen. Some monitors use test strip drums or discs, which are self-enclosed packages of strips that automatically load without user intervention or handling. Small children and adults who have difficulties with their fine motor skills may find this type of monitor easier to use. You’ll also need a lancet (a small, fine needle) to get a blood sample for testing. Lancets are inserted into a lancet device – a spring-loaded mechanism about the size and shape of a pen. A dial allows the user to adjust the depth of the lancet stick. Typically there is a button that you push to release the lancet into a fingertip or other site to draw a blood sample. Lancets come in different gauges; the higher the gauge, the finer (i.e., thinner) the needle. Higher gauge needles are less painful, but they also may create a smaller blood sample. Your blood glucose monitor may also come with control solution (for calibrating the monitor per manufacturer’s directions for use) and a carrying case.

______

Glucometer sensor (test strips):

_

_

The sensor uses an electroenzymatic approach: it takes advantage of the oxidation of glucose catalyzed by the glucose oxidase enzyme. In the presence of glucose oxidase, glucose reacts with oxygen, causing an increase in pH, a decrease in the partial pressure of oxygen, and an increase in hydrogen peroxide as glucose is oxidized to gluconic acid. The test strip measures changes in one or several of these components to determine the concentration of glucose. A bias voltage of –0.4 V is applied at the reference electrode. When blood or a glucose solution is placed on the strip, a chemical reaction occurs inside it, generating a small electrical current proportional to the glucose concentration. This current is constantly monitored while the strip is in place, allowing the device to detect when blood has been applied. After the chemical reaction stabilizes (about 5 s), the voltage is read by the ADC and converted using a look-up table to obtain the proportional glucose value in mg/dL. This value is then sent to the host computer. When choosing test strips, make sure they work in the meter you are using. Look for strips that need only a small drop of blood and that draw the blood into the strip by capillary action.
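
The look-up-table readout described above can be sketched as follows. The calibration values and linear interpolation are invented for illustration; real strips ship with manufacturer calibration data.

```python
# Illustrative calibration table mapping measured strip current (uA) to
# glucose (mg/dL).  These values are invented for the sketch.
CAL_TABLE = [(2.0, 40), (8.0, 120), (14.0, 200), (20.0, 280), (26.0, 360)]

def glucose_from_current(current_ua):
    """Linearly interpolate the calibration look-up table, as the meter
    does once the ~5 s reaction-stabilization window has elapsed."""
    pts = sorted(CAL_TABLE)
    if current_ua <= pts[0][0]:
        return pts[0][1]                       # clamp below table range
    for (i0, g0), (i1, g1) in zip(pts, pts[1:]):
        if current_ua <= i1:
            return g0 + (g1 - g0) * (current_ua - i0) / (i1 - i0)
    return pts[-1][1]                          # clamp above table range
```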

_

Caring for Strips:

It is important to care for your strips so that you get an accurate reading. To do this, refer to the manufacturer’s instructions. These will include recommendations such as:

•Storing them in a dry place

•Replacing the cap immediately after use

•Checking that the expiry date has not passed.

__

_

Avoiding problems with meter usage:

Blood sugar meters need to be used and maintained properly. Follow these tips to ensure proper usage:

•Follow the instructions in the user manual for your device, as procedures may vary from one device to another.

•Use a blood sample size as directed in the manual because different meters require different sample sizes.

•Change batteries as recommended by the manufacturer.

•Use only test strips designed for your meter because not all devices and strips are compatible.

•Store test strips as directed.

•Don’t use expired test strips.

•Clean the device regularly as directed.

•Run quality control tests as directed.

•Check the manual for additional troubleshooting tips.

•Bring the meter with you to doctor appointments to address any questions and to demonstrate how you use your meter.

____

Recent advances in glucometer:

Recent advances include:

1. ‘Alternate site testing’, the use of blood drops from places other than the finger, usually the palm or forearm. This alternate site testing uses the same test strips and meter, is practically pain free, and gives the real estate on the finger tips a needed break if they become sore. The disadvantage of this technique is that there is usually less blood flow to alternate sites, which prevents the reading from being accurate when the blood sugar level is changing.

2. ‘No coding’ systems. Older systems required ‘coding’ of the strips to the meter. This carried a risk of ‘miscoding’, which can lead to inaccurate results. Two approaches have resulted in systems that no longer require coding. Some systems are ‘autocoded’, where technology is used to code each strip to the meter. And some are manufactured to a ‘single code’, thereby avoiding the risk of miscoding.

3. ‘Multi-test’ systems. Some systems use a cartridge or a disc containing multiple test strips. This has the advantage that the user doesn’t have to load individual strips each time, which is convenient and can enable quicker testing.

4. ‘Downloadable’ meters. Most newer systems come with software that allows the user to download meter results to a computer. This information can then be used, together with health care professional guidance, to enhance and improve diabetes management. The meters usually require a connection cable, unless they are designed to work wirelessly with an insulin pump, or are designed to plug directly into the computer.

_______

Specialized glucometer:

Hospital glucose meters:

Special glucose meters for multi-patient hospital use are now available. These provide more elaborate quality control records. Their data-handling capabilities are designed to transfer glucose results into electronic medical records and into laboratory computer systems for billing purposes.

_

ACCU-CHEK® Aviva Expert, the first and only stand-alone blood glucose meter system with a built-in insulin calculator:

Roche announced that its ACCU-CHEK® Aviva Expert system, the first and only blood glucose meter system with a built-in insulin calculator to be approved by the U.S. Food and Drug Administration (FDA), is now available by prescription. The device represents a significant advance in blood glucose meter technology for people with diabetes who take multiple daily insulin injections. The meter’s integrated bolus calculator provides easy-to-use and reliable dose recommendations based on automated calculations, eliminating the need for manual dosing calculations and estimations. A survey of ACCU-CHEK Aviva Expert users found that 79 percent reported increased confidence with insulin dose calculation, and 52 percent reported a reduced fear of hypoglycemia.

In the United States, approximately 6 million people take insulin to help manage their diabetes. Many take multiple daily injections of insulin, which requires them to calculate proper insulin doses based on their food intake and blood glucose readings. These calculations are complex, and consistent precision is critical to determine the proper insulin dose. A multicenter study found that 63 percent of manually calculated insulin doses were incorrect. Because an incorrect insulin dose can lead to serious health complications, including hypoglycemia, accurate calculations are essential. Researchers from the U.S. Centers for Disease Control and Prevention (CDC) reported nearly 100,000 emergency room (ER) visits each year between 2007 and 2011 attributed to insulin-related hypoglycemia and other errors; these visits accounted for roughly 9 percent of all ER visits due to drug reactions during this timeframe.

The availability of the ACCU-CHEK Aviva Expert system marks an important milestone in diabetes self-management by making the process of calculating insulin dosage easier and less susceptible to error. One of the biggest barriers to optimal self-management is the ability to calculate bolus doses. It is hoped that the device will become the standard of care for patients on multiple daily insulin injection therapy due to the simplicity of the built-in bolus calculator.
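
The arithmetic that such a calculator automates can be sketched with the standard textbook bolus formula (meal dose plus correction dose minus insulin on board). The parameter defaults below are illustrative assumptions; this is not Roche's proprietary algorithm.

```python
def suggest_bolus(carbs_g, glucose_mgdl, target_mgdl=120,
                  carb_ratio=10.0, correction_factor=50.0, iob_units=0.0):
    """Textbook bolus arithmetic: meal dose plus correction dose minus
    insulin on board (IOB).

    carb_ratio is grams of carbohydrate covered per unit of insulin;
    correction_factor is the mg/dL drop expected per unit.  Both defaults
    are illustrative, not clinical recommendations.
    """
    meal = carbs_g / carb_ratio
    correction = max(glucose_mgdl - target_mgdl, 0) / correction_factor
    return max(meal + correction - iob_units, 0.0)

# 60 g of carbohydrate at a glucose of 220 mg/dl with these parameters
# suggests 6 units for the meal plus 2 units of correction: 8 units total.
```

Even this simplified version shows why manual calculation is error-prone: three ratios and a subtraction must all be applied correctly, for every meal.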

_

Glucometer for blind:

If you’ve been certified as legally blind, it’s likely you’ll meet the requirements of most insurers to obtain a blood glucose monitor with speech capability, also called a talking blood glucose monitor. Be aware that talking meters fall into two categories: those with partial speech and those with full speech. Those with partial speech may only announce your blood glucose result, while meters with full speech also announce the results in memory, a low-battery warning, and audible steps to set the time and other monitor features. I wonder, though, how a blind person would use a lancet to pierce the fingertip and put a drop of blood on the test strip.

_

Meter to determine glucose plus ketones in blood:

__________

Integrated Self-Monitoring of Blood Glucose System: Handling Step Analysis: 2012:

Self-monitoring of blood glucose (SMBG) involves a number of handling steps with the meter and the lancing device. Numerous user errors can occur during SMBG, and each step adds to the complexity of use. This report compares the steps required to perform SMBG with one fully integrated (the second generation of the Accu-Chek® Mobile), three partly integrated (Accu-Chek Compact Plus, Ascensia® Breeze®2, and Accu-Chek Aviva), and six conventional (Bayer Contour®, Bayer Contour USB, BGStar™, FreeStyle Lite®, OneTouch® Ultra® 2, and OneTouch Verio™Pro) systems. The results show that the fully integrated system reduces the number of steps needed to perform SMBG; the mean decrease is approximately 70% compared with the other systems. The authors assume that a reduction in handling steps also reduces the risk of potential user errors and improves the user-friendliness of the system.

_

Individuals achieve more accurate results with meters that are codeless and employ dynamic electrochemistry: a study:

Four blood glucose monitoring systems (with or without dynamic electrochemistry algorithms, codeless or requiring coding prior to testing) were evaluated and compared with respect to their accuracy. Altogether, 108 blood glucose values were obtained for each system from 54 study participants and compared with the reference values. Analytical performance of these blood glucose meters differed significantly depending on their technologic features. Meters that utilized dynamic electrochemistry and did not require coding were more accurate than meters that used static electrochemistry or required coding.

_____

Heel-stick SMBG in neonate:

Heel stick is a minimally invasive and easily accessible way of obtaining capillary blood samples for SMBG in newborns. The development of newer, more effective, and less painful lancing devices may increase the relative utility of heel stick. Heel stick sampling can also help preserve venous access for future intravenous (IV) lines in neonates. The normal range of blood glucose is around 1.5–6 mmol/l in the first days of life, depending on the age of the baby, type of feed, assay method used, and possibly the mode of delivery. Up to 14% of healthy term babies may have blood glucose less than 2.6 mmol/l in the first three days of life. The normal blood glucose level in full-term babies is 40 mg/dL to 150 mg/dL; in premature infants, it is 30 mg/dL to 150 mg/dL. The healthy term infant experiences a brief, self-limited period of relatively low blood glucose during the first two hours of life, and infants are normally asymptomatic during this time. As this transient drop is physiologic, routine glucose screening is not recommended. The lowest concentrations are most likely on day 1. There is no reason to routinely measure blood glucose in appropriately grown term babies who are otherwise well. Screening should be directed towards infants at risk for pathologic hypoglycemia. Glucose screening is recommended for infants in the following categories, who are at increased risk for pathological hypoglycemia:

  • Born to mothers with gestational diabetes or diabetes mellitus
  • Large for gestational age (LGA) ( >3969g)
  • Small for gestational age (SGA) (<2608g)
  • Premature (<37 weeks gestation)
  • Low birth weight (<2500g)
  • Smaller twin when sizes are discordant
  • Polycythemia (hct >70%)
  • Hypothermia
  • Low Apgar scores (<5 at one minute, <6 at five minutes)
  • Stress (sepsis, respiratory distress, etc)
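
The screening criteria above can be collected into a simple checklist function. The parameter names and defaults are invented for illustration; the thresholds are the ones listed.

```python
def needs_glucose_screening(weight_g, gestation_wk, maternal_diabetes=False,
                            discordant_smaller_twin=False, hct_pct=None,
                            hypothermia=False, apgar_1min=9, apgar_5min=9,
                            stressed=False):
    """Return True if the infant meets any listed risk criterion for
    pathological hypoglycemia.  Parameter names are illustrative."""
    return any([
        maternal_diabetes,                       # gestational diabetes or DM
        weight_g > 3969,                         # large for gestational age
        weight_g < 2608,                         # small for gestational age
        gestation_wk < 37,                       # premature
        weight_g < 2500,                         # low birth weight
        discordant_smaller_twin,                 # smaller discordant twin
        hct_pct is not None and hct_pct > 70,    # polycythemia
        hypothermia,
        apgar_1min < 5 or apgar_5min < 6,        # low Apgar scores
        stressed,                                # sepsis, resp. distress, etc.
    ])
```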

________

Standardization of glucometer:

There are two ways to assess the accuracy of glucose measurement techniques: technical and clinical. Technical accuracy assesses the agreement between the measured and reference glucose values. Clinical accuracy judges how differences in the measurements affect clinical decisions. Both have clinical implications. A review by Krouwer and Cembrowski details the standards and statistical methods used to characterize the accuracy of SMBG devices and highlights the differing accuracy criteria of standards organizations and professional societies. In 1987, an American Diabetes Association (ADA) consensus statement recommended that the acceptable error for SMBG devices from all sources (user, analytical, etc.) should be less than 10% for glucose values ranging from 30 to 400 mg/dl at all times. This consensus statement also recommended that glucose measurements should not differ by more than 15% from values obtained by a laboratory reference method. The ADA decreased the maximum allowable analytical error to <5% in 1996. International Organization for Standardization (ISO) standard 15197 provided different recommendations in 2003: 95% of individual glucose measurements are required to be within ±15 mg/dl of the reference measurement for values less than or equal to 75 mg/dl, and within ±20% for glucose values greater than 75 mg/dl. This is the standard that the FDA normally uses as the goal for approval of SMBG devices. The standards set by the ADA (1987/1996), requiring all glucose measurements with SMBG devices to be within 5% of CLD values, were deemed technically unachievable by the International Federation of Clinical Chemistry and Laboratory Medicine.
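
The ISO 15197:2003 criterion described above can be expressed as a short check. The sketch assumes paired (meter, reference) readings in mg/dl.

```python
def within_iso_2003(measured, reference):
    """ISO 15197:2003 point criterion: within +/-15 mg/dl of the reference
    for reference values <= 75 mg/dl, within +/-20% otherwise."""
    if reference <= 75:
        return abs(measured - reference) <= 15
    return abs(measured - reference) <= 0.20 * reference

def meets_iso_2003(pairs):
    """True if at least 95% of (measured, reference) pairs satisfy the
    point criterion."""
    hits = sum(within_iso_2003(m, r) for m, r in pairs)
    return hits >= 0.95 * len(pairs)
```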

_

Differing glucometer standards:

Although there is no universal standard for accuracy of glucose meters, several groups have defined acceptable ranges. The U.S. Food and Drug Administration (FDA) requires glucose meters to produce self-monitoring results within 20 percent of a reference measurement but recommends results within 15 percent; the FDA has stated that future meters should achieve results within 10 percent of reference at serum glucose concentrations of 30 to 400 mg per dL (1.7 to 22.2 mmol per L). The American Diabetes Association (ADA) recommends that meters produce readings within 5 percent of laboratory values. All meters currently on the market are considered to be clinically accurate in that they at least meet the FDA standard, although it is important to remember that they are not as accurate as a standard laboratory test. Given this broad range of possible error, making treatment decisions based solely on self-monitoring of blood glucose (SMBG) is not advised. Glucose meters are most accurate when used properly. Thus, educating patients on proper use and what to do with the results is vital. Although the exact procedure for using a meter varies by product, potential pitfalls are similar. Common errors include poor maintenance (e.g., soiled meter), using expired test strips, obtaining an inadequate sample size, and failing to calibrate the meter.

_

International Organization for Standardization of SMBG devices:

In a 2012 study of 43 glucose meters, only 34 systems met ISO standards. ISO 15197:2013 specifies requirements for in vitro glucose monitoring systems that measure glucose concentrations in capillary blood samples, for specific design verification procedures, and for the validation of performance by the intended users. These systems are intended for self-measurement by lay persons in the management of diabetes mellitus. ISO 15197:2013 is applicable to manufacturers of such systems and to other organizations (e.g., regulatory authorities and conformity assessment bodies) having responsibility for assessing the performance of these systems. Based on ISO 15197:2013, a blood-glucose monitoring system shall meet both of the following minimum criteria for acceptable system accuracy:

1.  95% of measured glucose values shall fall within ±15 mg/dl of the reference method at glucose concentrations below 100 mg/dl, and within ±15% at concentrations of 100 mg/dl and above.

2.  99% of individual measured glucose values shall fall within zones A and B of the Clarke Error Grid (CEG) for type 1 diabetes. The Clarke Error Grid Analysis (EGA) was developed in 1987 to quantify the clinical accuracy of patient estimates of their current blood glucose as compared with the value obtained by their meter. The grid divides a scatter plot of readings from a reference glucometer and the evaluated glucometer into five regions: A, B, C, D, and E.
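
The first criterion can likewise be expressed as a point check. The ±15 mg/dl and ±15% thresholds with a 100 mg/dl breakpoint are the commonly cited figures for ISO 15197:2013; tighter than the 2003 version quoted earlier.

```python
def within_iso_2013(measured, reference):
    """ISO 15197:2013 first accuracy criterion, as commonly cited:
    within +/-15 mg/dl of the reference below 100 mg/dl, and within
    +/-15% at or above 100 mg/dl.  95% of points must pass."""
    if reference < 100:
        return abs(measured - reference) <= 15
    return abs(measured - reference) <= 0.15 * reference
```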

____________

Why is SMBG done?

_

_
Self-monitoring is an integral part of diabetes management because it puts you in charge. Regardless of how you manage your diabetes — through diet and exercise alone or combined with oral medicines or insulin — regular blood glucose monitoring provides immediate feedback on how your program is working. Checking your blood glucose gives you the freedom to make choices without worry, the confidence to learn from your actions, and the motivation to keep striving to do better. Monitoring tells you that what you’re doing either is working or isn’t, and it serves as motivation to keep up actions that are working or to make changes.

The important thing is to know how to interpret the numbers and take the necessary action. For example, if you take insulin and your blood glucose is high, you may need to bolus, or take more rapid-acting insulin, to bring your levels down into range. If you manage your Type 2 diabetes with diet and exercise, you might treat high blood glucose with a walk around the block. People who use insulin and certain oral diabetes drugs are also at risk of developing low blood glucose, or hypoglycemia, which needs to be treated promptly when it occurs. Regular monitoring may enable you to catch and treat it early, and any symptoms of hypoglycemia should be checked with a meter reading.

Over time, blood glucose monitoring records can be analyzed for patterns of highs or lows that may suggest that a change is needed in the treatment regimen. Regular monitoring is especially helpful for showing the positive effects of exercise. Say your readings have regularly been around 140 mg/dl, but you start taking a walk every day and begin getting more readings around 120 mg/dl. That will definitely boost your motivation.

_

Self-monitoring of blood glucose provides useful information for diabetes management.

It can help you to:

•Judge how well you’re reaching overall treatment goals

•Understand how diet, stress and exercise affect blood sugar levels

•Understand how illness affects blood sugar levels

•Monitor the effect of diabetes medications on blood sugar levels

•Identify blood sugar levels that are dangerously high or low

•Help prevent low blood sugar at night

•Reduce the risk of eye, kidney and nerve complications

•Help you make informed decisions about the amount and type of insulin to use

•Help you manage illness at home and alert you if you need to do a ketone test

_

Based upon the results of randomized trials, self-monitoring of blood glucose (SMBG) is recommended in patients who take medications that can cause hypoglycemia and that need to be adjusted based on ambient glucose levels. For example, in order to avoid hypoglycemia and achieve target glucose levels, patients with type 1 diabetes who take mealtime insulin should usually test before meals to adjust doses, based on meal size and content, anticipated activity levels, and glucose levels. Similar guidelines apply to patients with insulin-treated type 2 diabetes, although their glucose levels are characteristically more stable and they may require less frequent monitoring. Patients treated with sulfonylureas or meglitinides, which can also cause hypoglycemia, should test once or twice per day during titration of their doses; once a stable dose and glycemic targets are achieved, they may only need to test several times per week, usually in the morning or before dinner. All patients on insulin or sulfonylureas need to test more frequently before and during long car rides, during sick days, and when diet and exercise patterns change.

_

Clinical utility of SMBG:

Uses of SMBG data include identifying and treating hyper- and hypoglycemia; making decisions about food intake or medication adjustment when exercising; determining the effect of ingested food on blood glucose; and managing glucose fluctuations resulting from illness.  Although the data are somewhat conflicting, larger, better-designed trials have shown that SMBG improves glycemic control when the results are used to adjust therapy. However, the data for reducing long-term complications are more conclusive for patients on insulin therapy. In most hands, the glucose oxidase strip method is accurate and reliable. Since whole blood is used, the results tend to be slightly lower than simultaneous venous samples, but this is balanced by the fact that capillary blood has a higher glucose concentration than venous blood. Most patients can visually estimate the correct value, but a few patients consistently misread the visual charts and must use a reflectance meter. This may be due to an unexpectedly high prevalence of disturbances of color perception in diabetics. Most patients feel more comfortable with the digital readout of the reflectance meter, although it is not necessarily more accurate. The major sources of error are in failing to put a large enough drop of blood on the strip and inaccurate timing. For patients who use reflectance meters, another source of error is failure to keep the machine clean and calibrated. Once the color is developed, it is relatively stable, so patients can be instructed to bring developed strips to the physician’s office so that the accuracy can be checked.

_

Optimal use of SMBG:

Establishing a Glucose Profile:

To take full advantage of the benefits of SMBG, patients must collect data at appropriate times during the day, recognize readings that are outside their target range, and take action to improve their glycemic control. This is most easily accomplished by having patients compile a periodic glucose profile by taking a series of blood glucose measurements throughout the day, capturing information from the fasting, postprandial, and postabsorptive (or late postprandial) periods. Alternatively, by staggering SMBG measurements at different times on different days, patients can generate an accurate portrait of day-to-day glycemic excursions while avoiding the need to test many times in a single day. Regardless of the testing regimen, patients should be encouraged to collect data on glucose levels relative to meals. Studies that have used meal-based SMBG testing have demonstrated improvements in HbA1c. The ability to download memory meters with a date and time stamp greatly facilitates this process. Some meters have event markers for meal times, insulin doses, exercise, and hypoglycemia, substantially adding to the power of the analysis.

_

Pattern Recognition:

Regardless of the monitoring regimen, a key to effective use of SMBG in clinical practice is “pattern management,” a systematic approach to recognizing the glycemic patterns within SMBG data and then taking action based on those results. This approach consists of several key steps: (1) establish both premeal and postmeal blood glucose targets; (2) gather data on blood glucose levels, carbohydrate intake, insulin dose (when applicable), activity levels, schedule, and physical and emotional stress; (3) analyze data to determine whether any patterns emerge; (4) assess any influencing factors; (5) take action; and (6) regularly monitor blood glucose levels to evaluate the impact of actions taken. By using the data gathered during a specified period, the clinician and patient can review patterns of glycemic excursions and then make adjustments to meals, activities, and medications to better control glucose levels, minimize glycemic excursions, and limit hypoglycemia.
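Step (3) of this approach, spotting time slots that are consistently out of range, can be sketched in a few lines of code. The target range, the 75% threshold, and the sample readings below are illustrative assumptions, not clinical guidance:

```python
# Hypothetical sketch of SMBG pattern management: group readings by time
# slot and flag slots where most values fall outside the target range.
from collections import defaultdict

TARGET_LOW, TARGET_HIGH = 70, 130  # assumed premeal targets, mg/dl

def find_patterns(readings, threshold=0.75):
    """readings: list of (slot, value) pairs, e.g. ('pre-breakfast', 142).
    Returns slots where >= threshold of values are out of the target range."""
    by_slot = defaultdict(list)
    for slot, value in readings:
        by_slot[slot].append(value)
    patterns = {}
    for slot, values in by_slot.items():
        highs = sum(v > TARGET_HIGH for v in values)
        lows = sum(v < TARGET_LOW for v in values)
        if highs / len(values) >= threshold:
            patterns[slot] = "consistently high"
        elif lows / len(values) >= threshold:
            patterns[slot] = "consistently low"
    return patterns

# A week of illustrative readings: pre-breakfast runs high, pre-supper is fine.
week = [("pre-breakfast", v) for v in (152, 148, 161, 140)] + \
       [("pre-supper", v) for v in (95, 110, 102, 118)]
print(find_patterns(week))  # {'pre-breakfast': 'consistently high'}
```

A flagged slot then feeds steps (4) and (5): look for an influencing factor (meal size, dose timing, activity) and act on it.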

_

Inconsistent Highs & Lows:

Sometimes you may get a lower or higher blood glucose reading than usual and you may not be able to figure out the reason. When you are sick with a virus or flu, your blood glucose levels will nearly always go up and you may need to contact your doctor. There are a number of other common causes for blood glucose levels to increase or decrease. These include:

•Food – time eaten, type and amount of carbohydrate for example: bread, pasta, cereals, vegetables, fruit and milk

•Exercise or physical activity

•Illness and pain

•Diabetes medication

•Alcohol

•Emotional stress

•Other medications

•Testing techniques.

Contact your doctor or Credentialed Diabetes Educator if you notice that your blood glucose patterns change or are consistently higher or lower than usual.

_

A change in attitude:
For many people with diabetes, striving for tight control is a full-time job, and numbers outside the parameters of your goals can make you crazy. Dale, the diabetes educator from the University of Michigan, suggests a shift in perception that can help avoid knee-jerk reactions to high or low numbers: Instead of “testing” your blood glucose, “monitor” it. “When you ‘test,’” she says, “the results can be interpreted to mean that you’ve ‘passed’ or ‘failed.’ It’s emotionally charged. When you ‘monitor’ instead, you gather information and make adjustments as necessary. You just need to ask, ‘What can I learn from this? Was my serving of pasta too large? Do I need to lower my insulin dose before exercise? What can I do better to prevent this from happening in the future?’ That’s how it should be for everyone.”

___

Barriers to SMBG:

Barriers to optimal use of SMBG include limited knowledge, both by clinicians and patients, as well as perceived inconvenience or discomfort with the measurement. Motivational and behavioral issues, particularly in the adolescent subgroup, may also be a barrier. These issues, however, should never detract from the fact that failure to achieve glycemic control with SMBG is often the result of a failure to properly educate patients in how to monitor blood glucose levels and in the importance of accuracy in doing so. Thus, clinicians must be aware of these potential barriers and be prepared to address them with individual patients and other caregivers, such as families or guardians.

_

Barriers and facilitators to SMBG by people with type 2 diabetes using insulin

_

_______

Conceptual framework of factors influencing the use of SMBG:

_

_______

Rate of daily SMBG among Adults with Diabetes aged 18 Years and Older, 1997-2006: CDC Data & Statistics:

_

_

From 1997 to 2006, rates of SMBG increased overall, in all age groups examined, and in the majority of states examined. Health insurance policy changes and improvements in monitoring devices during this period might have influenced the rate increases. The Balanced Budget Act of 1997 provided Medicare coverage for blood-glucose monitors and testing strips for persons with insulin-treated or non–insulin-treated diabetes. This change in Medicare coverage and its possible influence on the policies of private insurers might have contributed to the increases in SMBG rates. Improvements in monitoring technology have made the practice more convenient, which might also have contributed to the upward trends. Consistent with findings from other studies, lower rates of SMBG were correlated with being male, having less than a high school education, having no health insurance coverage, taking no medication or oral medication only, making two or fewer doctor visits annually, and not having taken a diabetes-education course. The negative associations between SMBG and lower education or lack of health insurance coverage suggest that socioeconomic barriers might impede the practice of SMBG. The cost of blood glucose–monitoring supplies might be a barrier for patients with limited economic resources. Positive associations were observed between SMBG and number of doctor visits, insulin use, or having ever taken a diabetes-education course, which indicates that SMBG might be associated with better disease management or more intensive medical care. Access to health care is an important factor associated with SMBG. Health insurance coverage of monitoring devices and supplies is integral in encouraging self-monitoring and self-management practices. Collaborations to ensure adequate insurance coverage for blood-glucose monitors, test strips, and lancets are essential for increasing the rates and benefits of SMBG.
Recommendations from health professionals and the provision of diabetes education can influence the self-management practices of patients. Diabetes-education programs might increase the benefits of self-monitoring by teaching patients the optimal timing and frequency of self-monitoring, how to interpret the results correctly, and how to make appropriate diet, exercise, and pharmacologic-therapy adjustments in response to SMBG readings. Continued surveillance will be important for monitoring future trends in SMBG and the effectiveness of intervention strategies.

_________

 When to do SMBG?  

The more often you measure your blood sugar level, the more information you and your diabetes care provider will have for making the right decisions about your diabetes management. The most common times to do a blood sugar test include:

1. Before breakfast: This test reflects the blood sugar values during the night and is probably the most important time to test. The rapid-acting insulin dose can be adjusted based, in part, on the value of this test. The dose of Lantus or Levemir insulin is also based on this test.

2. Before lunch: This helps you decide if the morning Humalog/NovoLog/Apidra and/or Regular insulin dosage was correct.

3. Before dinner: This test reflects how well the dose of morning NPH or lunchtime rapid-acting insulin worked. It may also reflect the effect of afternoon sports activities and an afternoon snack. A test should not be done unless it has been at least 2 hours since food was eaten. If it is time for dinner and your child had an afternoon snack 1 hour earlier, it is best to wait and do a test before the bedtime snack. If this is a common occurrence, change to doing a blood sugar test before the afternoon snack.

4. Before the bedtime snack: This test lets you know if the rapid-acting insulin dose given at dinner was correct. This test is important for people who tend to have reactions during the night, children who play outside after dinner, and anyone who did not eat well at dinner. If the bedtime values are low, an extra snack should be given in addition to the usual solid protein and carbohydrate so your child’s blood sugar does not drop too low during the night. Recheck the value 15 or more minutes after the snack to make sure that it has come back up.

5. Testing after meals: Doing a blood sugar test 2 hours after eating a meal is becoming a more common practice. You should check blood sugar values 2 hours after each meal once or twice weekly. The blood sugar value goals are the same for 2 hours after a meal as they are before a meal. Testing after meals is a useful testing time for people who count carbohydrates and inject insulin just before eating based on how many carbohydrates they plan to eat.

6. Testing at night: Occasionally, you may need to do a blood test in the middle of the night to make sure the value is not getting too low. A nighttime blood sugar test is important for people who tend to have low blood sugars during the night. More than half of the severe low sugars occur during the night. It is important to test on nights when there has been extra physical activity (for example, a basketball game in the evening or playing hard outside on a nice summer evening). The best time to do a check varies with each person. For some, between midnight and 2 AM is the best. For others, the early morning hours are better.

________

Dawn phenomenon and Somogyi effect:

If your fasting readings are consistently higher than your target range, it may be because of the dawn phenomenon or a result of the Somogyi effect. In the dawn phenomenon, hormones released in the very early morning cause increased insulin resistance, resulting in higher blood glucose levels. This occurs in everyone, with diabetes or without. However, in people who don’t have diabetes, extra insulin is secreted, so the rise in blood glucose level is minimal. Common preventive treatments for high morning blood glucose caused by the dawn phenomenon include getting daily exercise, eating a carbohydrate-containing bedtime snack, or adding the drug metformin to the diabetes control regimen.

_

The Somogyi effect, which is more likely to occur in people who use insulin, is a phenomenon in which low blood glucose during the night causes the body to release hormones that raise blood glucose levels, resulting in high morning levels. While a person’s first instinct for treating high morning readings may be to increase nighttime insulin, in fact, taking less insulin and going to bed with a higher blood glucose reading may be more effective at preventing the low that leads to the morning rise in glucose. People who are experiencing high morning blood glucose levels are often encouraged to wake up at 3 AM on several occasions to check their blood glucose. High blood glucose at this time may point to the dawn phenomenon as the cause of the high morning readings, while low blood glucose at 3 AM may suggest the Somogyi effect.
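The 3 AM check described above amounts to a simple decision rule. A minimal sketch, with cutoff values that are assumptions for illustration rather than clinical thresholds:

```python
# Illustrative decision sketch for interpreting a 3 AM reading when morning
# values run high; the 70 and 130 mg/dl cutoffs are assumed for the example.
def interpret_3am(value_mg_dl, low_cutoff=70, high_cutoff=130):
    if value_mg_dl < low_cutoff:
        # A nighttime low rebounding into a morning high
        return "possible Somogyi effect"
    if value_mg_dl > high_cutoff:
        # Glucose already elevated overnight
        return "possible dawn phenomenon"
    return "inconclusive: repeat on another night"

print(interpret_3am(58))   # possible Somogyi effect
print(interpret_3am(165))  # possible dawn phenomenon
```

A single night is rarely conclusive, which is why the text suggests checking on several occasions before changing therapy.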

________

What do my blood sugar levels tell me? 

Time of test → what it can be used to adjust:

•Fasting blood sugar (FBG) and nighttime (3-4 a.m.) → adjust medicine or long-acting insulin

•Before a meal → modify meal or medicine

•1-2 hours after a meal → learn how food affects sugar values (often the highest blood sugars of the day*)

•At bedtime → adjust diet or medicine (last chance for the next 8 hours)

*Depends on the size of the meal and the amount of insulin in your medicine
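The mapping above is small enough to capture as a lookup. The wording mirrors the table; the function name and fallback string are illustrative choices:

```python
# The test-time-to-action mapping from the table, as a simple lookup.
TEST_TIME_ACTIONS = {
    "fasting / 3-4 a.m.": "adjust medicine or long-acting insulin",
    "before a meal": "modify meal or medicine",
    "1-2 hours after a meal": "learn how food affects sugar values",
    "at bedtime": "adjust diet or medicine",
}

def action_for(test_time):
    # Case-insensitive lookup with a safe fallback for unknown times.
    return TEST_TIME_ACTIONS.get(test_time.lower(), "unknown test time")

print(action_for("At bedtime"))  # adjust diet or medicine
```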

____

Frequency of SMBG:

_

_

Although the optimal frequency of monitoring is unknown, the ADA recommends SMBG three or more times a day for patients with type 1 diabetes. Patients with type 2 diabetes still benefit from at least periodic monitoring. Ultimately, the frequency and timing of SMBG should be determined by how the data will be used. SMBG can assist the patient and physician with adjusting diet and medications and maintaining appropriate glucose control. More frequent monitoring is beneficial during insulin dose adjustments. Postprandial monitoring is important to identify the effect of various foods on glucose levels and to monitor the effects of preprandial medications. Other factors, such as desire for tight control and current degree of control, will influence frequency of monitoring.

_

A major obstacle to increased SMBG utilization is the lack of clear guidelines for testing frequency. A global consensus conference was convened in 2004 to address this issue. The results of that conference were published as a supplement in the American Journal of Medicine. Table below shows a summary of the recommendations presented.

_

When you should test your blood glucose levels and how often you should test varies depending on each individual, the type of diabetes and the tablets and/or insulin being used. Your doctor or Credentialed Diabetes Educator will help you decide how many tests are needed and the levels to aim for.

Possible times to test are:

•Before breakfast (fasting)

•Before lunch/dinner

•Two hours after a meal

•Before bed

•Before rigorous exercise

•When you are feeling unwell

You may need to record all your tests. Even though your meter may have a memory, it is important to keep a record of your readings in a diary and to take this with you to all appointments with your diabetes team. Testing four times a day is usually recommended for people with type 1 diabetes. People using an insulin pump may need to test more often.

_

SMBG more often:

There will be times when you need to test more often, however you should first discuss this with your doctor or Credentialed Diabetes Educator. Examples of these times include when you are:

•Being more physically active or less physically active

•Sick or stressed

•Experiencing changes in routine or eating habits, e.g. travelling

•Changing or adjusting your insulin or medication

•Experiencing symptoms of hypoglycemia

•Experiencing symptoms of hyperglycemia

•Experiencing night sweats or morning headaches

•Planning a pregnancy or pregnant

•Pre/post minor surgical day procedures

•Post dental procedures

Your Credentialed Diabetes Educator can help you work out a testing plan especially for you.

___

SMBG vis-à-vis diabetes table:   

Basic SMBG requirements (must be met):
The person with diabetes (or a family member/caregiver) must have the knowledge and skills to use a home blood glucose monitor and to record the results in an organized fashion. The person with diabetes and/or members of the healthcare team must be willing to review and act upon the SMBG results in addition to the A1C results.

 __ 

A. REGULAR SMBG is required if the person with diabetes is:

•Using multiple daily injections of insulin (≥4 times per day), or using an insulin pump → SMBG ≥4 times per day

•Using insulin <4 times per day → SMBG at least as often as insulin is being given

•Pregnant (or planning a pregnancy), whether using insulin or not, or hospitalized or acutely ill → SMBG individualized; may involve SMBG ≥4 times per day

•Starting a new medication known to cause hyperglycemia (e.g. steroids), or experiencing an illness known to cause hyperglycemia (e.g. infection) → SMBG individualized; may involve SMBG ≥2 times per day

B. INCREASED FREQUENCY OF SMBG may be required if the person with diabetes:

•Is using drugs known to cause hypoglycemia (e.g. sulfonylureas, meglitinides) → SMBG at times when symptoms of hypoglycemia occur or at times when hypoglycemia has previously occurred

•Has an occupation that requires strict avoidance of hypoglycemia → SMBG as often as is required by the employer

•Is not meeting glycemic targets → SMBG ≥2 times per day, to assist in lifestyle and/or medication changes until glycemic targets are met

•Is newly diagnosed with diabetes (<6 months) → SMBG ≥1 time per day (at different times of day) to learn the effects of various meals, exercise and/or medications on blood glucose

•Is treated with lifestyle and oral agents and is meeting glycemic targets → some people with diabetes might benefit from very infrequent checking (SMBG once or twice per week) to ensure that glycemic targets are being met between A1C tests

C. DAILY SMBG is NOT usually required if the person with diabetes:

•Is treated only with lifestyle and is meeting glycemic targets, or

•Has pre-diabetes.

In these cases, screen for diabetes complications annually or as indicated.
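Section A of the table above can be read as a small decision function. This is a hedged illustration only; the function name and parameters are invented, and real testing frequency should always be set with the care team:

```python
# Illustrative encoding of section A of the SMBG table: map a patient's
# situation to the table's suggested testing frequency. Not clinical advice.
def suggested_smbg_frequency(insulin_injections_per_day=0, on_pump=False,
                             pregnant=False, acutely_ill=False):
    if on_pump or insulin_injections_per_day >= 4:
        return ">=4 times per day"
    if pregnant or acutely_ill:
        return "individualized; may involve >=4 times per day"
    if insulin_injections_per_day > 0:
        return "at least as often as insulin is given"
    return "individualized"

print(suggested_smbg_frequency(on_pump=True))  # >=4 times per day
```

Note the ordering of the checks: pump use and multiple daily injections take precedence, matching the table's top-down layout.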

_

_

SMBG vis-à-vis insulin table:

Suggested SMBG Patterns for Patients Using Insulin:

1. Basal insulin only – NPH or a long-acting insulin analog, typically given at bedtime. SMBG at least as often as insulin is being given (usually the before-breakfast test). Optional, less frequent SMBG can be done at other times of day to ensure glycemic stability throughout the day.
Adjustment: basal insulin ↑ if BG high, ↓ if BG low.

2. Premixed – typically given pre-breakfast and pre-supper. SMBG at least as often as insulin is being given: SMBG QID (pre-breakfast, pre-lunch, pre-supper and bedtime) until glycemic targets are met; SMBG BID (alternating times) is usually sufficient once glycemic targets are met.
Adjustments: the pre-breakfast and bedtime readings guide the pre-supper dose (↑ if BG high, ↓ if BG low); the pre-lunch and pre-supper readings guide the pre-breakfast dose (↑ if BG high, ↓ if BG low).

3. QID (basal-bolus/MDI) – typically given as a rapid-acting analog or regular insulin (bolus) before each meal, with NPH or a long-acting analog (basal) typically given at bedtime. SMBG should be QID, pre-meal and bedtime, in order to assess the previous dose and to adjust the next dose; once stable, some patients shift the focus to post-meal readings, and intensive management may use both pre- and post-meal tests plus bedtime and night checks. Some patients find that post-prandial checking can also be helpful.
Adjustments: the pre-breakfast reading guides the basal dose; the pre-lunch reading guides the pre-breakfast bolus; the pre-supper reading guides the pre-lunch bolus; the bedtime reading guides the pre-supper bolus (each ↑ if BG high, ↓ if BG low); a low night reading calls for ↓ basal insulin.

MDI = multiple daily injections
No funding sources were used by the CDA for the development or launch of this document on SMBG.
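The adjustment rule that recurs throughout the table above (raise the dose responsible for a high reading, lower it for a low one) can be sketched as a titration step. The step size and cutoffs here are invented for illustration; dose changes belong to the prescriber, not to code:

```python
# Minimal sketch of the table's "up if BG high, down if BG low" rule.
# All numbers are illustrative assumptions, not dosing guidance.
def adjust_dose(current_units, bg_mg_dl, low=70, high=130, step=1):
    if bg_mg_dl > high:
        return current_units + step       # reading high: raise the dose
    if bg_mg_dl < low:
        return max(0, current_units - step)  # reading low: lower it, never below 0
    return current_units                  # in range: leave the dose alone

print(adjust_dose(10, 180))  # 11
print(adjust_dose(10, 60))   # 9
```

In practice each reading adjusts a specific dose (e.g. the pre-breakfast value adjusts the basal or pre-supper insulin, depending on regimen), as laid out in the table.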

_____

SMBG in Type 1 diabetes: 

•Self-monitoring of blood glucose levels should be used as part of an integrated package that includes appropriate insulin regimens and education to help choice and achievement of optimal diabetes outcomes.

•Self-monitoring skills should be taught close to the time of diagnosis and initiation of insulin therapy.

•Self-monitoring results should be interpreted in the light of clinically significant life events.

•Self-monitoring should be performed using meters and strips chosen by adults with diabetes to suit their needs, and usually with low blood requirements, fast analysis times and integral memories.

•Structured assessment of self-monitoring skills, the quality and use made of the results obtained and the equipment used should be made annually. Self-monitoring skills should be reviewed as part of annual review, or more frequently according to need, and reinforced where appropriate.

•Adults with type 1 diabetes should be advised that the optimal frequency of self-monitoring will depend on:

-The characteristics of an individual’s blood glucose control.

-The insulin treatment regimen.

-Personal preference in using the results to achieve the desired lifestyle.

_

SMBG in Type 2 diabetes:

National Institute for Health and Care Excellence (NICE) recommendations for patients with type 2 diabetes:

•Offer self-monitoring of plasma glucose to a person newly diagnosed with type 2 diabetes only as an integral part of his or her self-management education.

•Discuss its purpose and agree how it should be interpreted and acted upon.

•Self-monitoring of plasma glucose should be available:

-To those on insulin treatment.

-To those on oral glucose-lowering medications to provide information on hypoglycemia.

-To assess changes in glucose control resulting from medications and lifestyle changes.

-To monitor changes during intercurrent illness.

-To ensure safety during activities, including driving.

•Assess at least annually and in a structured way:

 -Self-monitoring skills.

-The quality and appropriate frequency of testing.

-The use made of the results obtained.

-The impact on quality of life.

-The continued benefit.

-The equipment used.

•If self-monitoring is appropriate but blood glucose monitoring is unacceptable to the individual, discuss the use of urine glucose monitoring.

_

At least some studies have found that the more often people monitor their blood glucose with a conventional blood glucose meter, the better their glycosylated hemoglobin (HbA1c) levels. (The HbA1c test is a measure of blood glucose control over the previous two to three months.) Other studies have reported similar benefits for continuous monitoring, in which a sensor worn under the skin transmits glucose measurements every few minutes to a receiver. The GuardControl Trial, for example, found that participants with Type 1 diabetes who used a continuous glucose monitor for three months experienced a 1-percentage-point drop in their HbA1c levels. In a perfect world, people with Type 1 diabetes would monitor six or seven times a day. However, that’s often impractical because of time and resources. A person whose Type 1 diabetes is in stable control should monitor a minimum of four times a day. People whose Type 2 diabetes is in good control should monitor twice a day.

_

SMBG is currently recommended for all type 1 and type 2 diabetic patients being treated with insulin. The SMBG regimen should be determined individually, and be part of a total treatment plan that includes diet, exercise, weight loss, and insulin or oral medications when indicated. The optimal frequency and timing of SMBG depends on many variables, including diabetes type, level of glycemic control, management strategy, and individual patient factors. Healthcare professionals will also need to modify SMBG regimens to accommodate changes in therapy and lifestyle. For people with type 1 diabetes, SMBG is an essential component of daily diabetes management, and it has been shown that testing 3 or more times a day was associated with a statistically and clinically significant 1.0% reduction in A1C levels. Furthermore, blood glucose measurements taken post-lunch, post-dinner and at bedtime have demonstrated the highest correlation to A1C. Frequent SMBG pre- and post-meal several times a day will provide useful information for adjusting insulin and carbohydrate intake. In addition, patients with hypoglycemia unawareness may need to test more frequently, particularly prior to driving or operating any machinery, watching small children, and other activities where compromise of cognitive function may be dangerous. The results of multiple tests each day provide information that is better correlated to A1C than fasting results alone.

_

SMBG for people with type 2 diabetes who are not using insulin:

Although self-monitoring of blood glucose has been found to be effective for patients with type 1 diabetes and for patients with type 2 diabetes using insulin, evidence suggests that self-monitoring of blood glucose is of limited clinical effectiveness in improving glycemic control in people with type 2 diabetes on oral agents or diet alone.  A Cochrane review found that the overall effect of self-monitoring of blood glucose on glycemic control in patients with type 2 diabetes who are not using insulin is small up to six months after initiation and subsides after 12 months. There was no evidence that self-monitoring of blood glucose affected patient satisfaction, general well-being or general health-related quality of life.

_

_

Accumulating Evidence for Improved Glycemic Control in Type 2 Diabetes:

The use of SMBG to detect hypoglycemia and hyperglycemia and then adjusting therapy to minimize glycemic excursions is generally considered standard practice in type 1 diabetes. In patients with type 2 diabetes, especially those not taking insulin, SMBG use has been more controversial. The observational Fremantle Diabetes Study, which reported cross-sectional and longitudinal data, found no significant benefit associated with SMBG. Likewise, a small study by Davidson et al found no statistically significant improvements in HbA1c for patients randomized to SMBG vs. controls. Cross-sectional SMBG studies are incapable of demonstrating a cause-and-effect relationship between SMBG and HbA1c because the data do not evaluate changes in HbA1c over time in the presence of an intervention. In the longitudinal arm of the Fremantle Diabetes Study, the mean SMBG testing frequency of less than 1 test per day in patients treated with diet or oral agents or less than 2 tests per day in insulin-treated patients may have been suboptimal for providing actionable feedback to patients. Additionally, the study did not clearly indicate how SMBG was integrated into the diabetes management plan or whether patients were taught how to respond to out-of-target blood glucose readings. Similarly, the study by Davidson et al did not clearly report how SMBG results were used by patients or their health care professionals. The sample size, wide 95% confidence interval, less than 40% adherence to recommended SMBG frequency, and poorly educated study population may have contributed to the failure to achieve a significant difference. Recently, however, 2 meta-analyses demonstrated that including SMBG as part of a multicomponent management strategy results in a statistically significant decrease in HbA1c of approximately 0.40% in patients with type 2 diabetes who are not taking insulin. 
When extrapolated to findings from the United Kingdom Prospective Diabetes Study, this decrease would be expected to reduce the risk of microvascular complications by approximately 14%. A 4-year longitudinal study that differentiated new users of SMBG from experienced users found a proportional relationship between SMBG frequency and HbA1c reduction regardless of therapy for new users and a similar association among pharmacologically treated experienced users. Finally, a large epidemiological study of patients with type 2 diabetes that spanned 6.5 years showed that SMBG was associated with lower diabetes-related morbidity and all-cause mortality, even among patients not receiving insulin. Hence, a growing body of evidence suggests that daily SMBG has clinical value in type 2 and type 1 diabetes.  

_______

Synopsis of SMBG vis-à-vis methods of glycemic control:

_

In a nutshell, frequent SMBG is indicated for all diabetics who take daily insulin or who take oral agents that can cause severe hypoglycemia.

_______

For those with unstable diabetes:

Those suffering from brittle (unstable) Type 1 diabetes should use a CGMS (Continuous Glucose Monitoring System) to monitor their blood glucose. Unlike traditional meters that provide a one-time snapshot of one’s blood glucose levels, continuous glucose monitors (CGMSs) measure one’s glucose levels every few minutes. This system is essential for people with brittle diabetes, since they need to keep tabs on their blood glucose levels at all times.

_

Should I keep written records?

Keeping good records to look for patterns in blood sugars is essential. It is wise to keep written records even if your meter is able to store results (in case the meter breaks). Write down the time of the test, the date, how your child feels, and the blood sugar value. You may also want to note times of heavy exercise, illness, or stress. It may be helpful to record what was eaten for the bedtime snack or any evening exercise to see if these are related to morning blood sugars. Also, keep a record of when your child has low blood sugar reactions and possible causes. Bring these results to your appointments. Good record keeping and bringing the results to clinic visits allow the family and diabetes team to work together most effectively to achieve good diabetes management.  
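A written log like the one described above is just a few fields per reading. As a hypothetical sketch, the record could be kept as CSV (the field names below follow the paragraph but are my own choices):

```python
# Illustrative CSV structure for a blood sugar diary: date, time, value,
# how the child felt, and any notes (exercise, illness, snack, reactions).
import csv
import io

FIELDS = ["date", "time", "value_mg_dl", "feeling", "notes"]

def write_log(rows):
    """Render diary rows (list of dicts keyed by FIELDS) as CSV text."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

log = write_log([
    {"date": "2024-05-01", "time": "07:30", "value_mg_dl": 112,
     "feeling": "fine", "notes": "before breakfast"},
])
print(log)
```

Even with a meter memory, a plain written or typed log like this survives a broken meter and is easy to bring to clinic visits.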

_

SMBG: Accuracy of Self-Reported Data and Adherence to Recommended Regimen: a 1988 study:

Reflectance meters containing memory chips were used in a study that addressed several questions concerning routine use of self-monitoring of blood glucose (SMBG), including accuracy of patient blood glucose (BG) diaries, reliability of self-reported frequency of SMBG, and adherence to the recommended SMBG regimen. Thirty adults with insulin-dependent diabetes used memory meters and recorded test results in diaries for 2 weeks while performing their normal SMBG regimen. Analysis of glucose diaries showed that only 23% of the subjects had no diary errors and 47% had clinically accurate diaries (<10% error rate). The most common types of errors were omissions of values contained in meter memory and additions of values not contained in meter memory, with significantly more omissions than additions. Alterations of test values (e.g., changing a 300-mg/dl reading to 200 mg/dl) were extremely rare. There was no difference in the rate of errors that resulted in a more positive clinical profile (omitting unacceptable values and adding acceptable values) or a more negative clinical profile (omitting acceptable values and adding unacceptable values). Examination of the actual frequency of SMBG showed that most subjects (56.6%) measured their BG an average of two to three times each day. Self-report of SMBG frequency correlated with both actual frequency and HbA1. Although actual frequency of SMBG was not related to physicians' recommendations, the majority (64%) of subjects were self-testing as often or more often than they had been instructed.

___

Structured SMBG:

Self-monitoring should be assessed at least annually and in a structured way:

_

If self-monitoring is appropriate but blood glucose monitoring is unacceptable to the individual, discuss the use of urine glucose monitoring.

 _

SMBG education:

SMBG should be part of an educational program including the patient and close relatives (family members) where necessary. When prescribing the SMBG device, it is essential to explain the issues to the patient and to organise this self-monitoring with the patient, including the frequency, scheduling, blood glucose targets and treatment adjustments to be made by the patient or doctor based on the results. In all cases, the patient should maintain an appropriate diet and physical exercise. Glucose meters are most accurate when used properly. Thus, educating patients on proper use and what to do with the results is vital. Also, lower rates of SMBG are correlated with having less than a high school education; in my view, uneducated people are unlikely to learn the right way to use a glucometer.

_

The figure below shows impacts of SMBG as a component of education/treatment program:

____

____

Successful SMBG:

_______________

SMBG Special patient groups:  

While the overall effect of self-monitoring seems modest, there is a paucity of data on special groups, including heavy goods vehicle drivers for whom hypoglycemia may pose an unacceptable occupational risk to themselves and the public. Also, people starting or changing their oral diabetes medication may benefit from self-monitoring.

_

Barriers to self-monitoring of blood glucose among adults with diabetes in an HMO: a cross-sectional study (2003):

In addition to logistic barriers to SMBG, some recent evidence suggests that adult diabetes patients who may be at greatest risk for poor outcomes (e.g., minorities, elderly, lower SES) may be least likely to self-monitor. In a study of more than 44,000 managed care patients with type 1 (2,818) and type 2 (41,363) diabetes, Karter et al identified older age, male gender, non-white race, lower socioeconomic status, English language difficulty, higher out of pocket test strip costs, intensity of insulin therapy, greater alcohol consumption, and smoking as independent predictors of less frequent self-monitoring in diabetes patients. This study was the first to move beyond simple reporting of descriptive statistics in order to assess predictors of SMBG in managed care settings. Unfortunately, the validity of the study findings is limited by the reliance on self-reports of self-monitoring, an unreliable measure of actual behavior. The purpose of the current study was to examine the relationship between patient characteristics and SMBG in a large health maintenance organization (HMO) using objective measures of self-monitoring practice. Specifically, we tested the hypothesis that, controlling for type of drug therapy and severity of illness, diabetes patients at greatest risk for poor health outcomes (e.g., older age, multiple chronic conditions, non-white race, lower neighborhood SES) are less likely to practice SMBG. The study population included more than 4,500 adult managed care patients using insulin, oral, or a combination of the two drug therapies. Our use of objective measures of SMBG distinguishes this study from previous attempts to identify predictors of SMBG in managed care. This paper represents the first phase of a larger study to evaluate the effect of distributing free home glucose monitors to diabetes patients at this New England HMO. 
In multivariate analyses, lower neighborhood socioeconomic status, older age, fewer HbA1c tests, and fewer physician visits were associated with lower rates of self-monitoring. Obesity and fewer comorbidities were also associated with lower rates of self-monitoring among insulin-managed patients, while black race and high glycemic level (HbA1c>10) were associated with less frequent monitoring. For patients taking oral sulfonylureas, higher dose of diabetes medications was associated with initiation of self-monitoring and HbA1c lab testing was associated with more frequent testing. Managed care organizations may face the greatest challenges in changing the self-monitoring behavior of patients at greatest risk for poor health outcomes (i.e., the elderly, minorities, and people living in low socioeconomic status neighborhoods).

_____

Self-monitoring of blood glucose during pregnancy: indications and limitations:

Approximately 5 percent of all pregnancies are complicated by gestational diabetes mellitus, which increases both maternal and perinatal morbidity. In treating women with this condition, many have advocated minimizing fluctuations in blood glucose concentrations to avert maternal hyperglycemia and thus decrease the risk of fetal hyperglycemia and its consequences, fetal hyperinsulinemia and excess fetal growth. Perinatal morbidity and mortality rates, often affected by maternal diabetes, have dramatically been reduced since the discovery of insulin and its therapeutic implementation. In addition to increased availability of insulin, many important technological advances have been developed over the preceding decades. These advances culminated in a larger array of diagnostic and therapeutic capabilities that contributed to improved outcomes in high-risk pregnancies. The availability of glucose meters has represented an important positive impact in the treatment of pregnant women with any type of diabetes. Data frequently show patients who perform self-monitoring of blood glucose (SMBG) more strictly adhere to treatment programs due to increased comprehension regarding treatment and participation in the prescribed treatment regimen. Treating hyperglycemia during pregnancy reduces adverse pregnancy outcomes. The first step towards a tight glucose control in pregnancy is patient adherence to SMBG.

_

Indications for self-monitoring of blood glucose during pregnancy complicated by diabetes:

SMBG is an integral part of standard diabetes care. It allows pregnant women and their healthcare providers to determine the most effective therapeutic modality (e.g. diet, physical activity, or insulin) to control glucose levels and reduce risks of diabetes-related complications. The number of daily tests required to adequately monitor blood glucose levels is specific to the patient and based on the recommendation of the practitioner. Several characteristics unique to each pregnant woman should be considered: for example, the type of treatment (diet and/or insulin), the frequency and intensity of physical activity, and the risk of hypoglycemia. Additionally, SMBG makes patients feel more secure and comfortable using insulin since it allows early recognition of symptoms of hypoglycemia. The indications for, and frequency of, SMBG in pregnant women who are not under insulin treatment must be tailored to the individual. Patients must be trained to adjust the amount of food intake with the frequency, intensity, and timing of physical exercise. It is unclear whether SMBG alone leads to improved glycemic control in non-insulin treated subjects with type 2 diabetes. Additionally, there are no data in women with gestational diabetes mellitus (GDM). Measured glucose values need to be frequently checked to ensure both accuracy and the patient's understanding of any alterations to prescribed treatment. For the vast majority of patients using insulin, SMBG is recommended three or more times per day. A more intensive SMBG regimen is indicated for women with pre-gestational type 1 or 2 diabetes. The aim is to reach adequate HbA1c levels safely without inducing hypoglycemia.

_

When to monitor:

Strict monitoring of postprandial glucose levels is paramount during pregnancy. Many studies have shown that postprandial hyperglycemia beyond the 16th week of pregnancy is the main predictor for fetal macrosomia. Peak plasma glucose levels during pregnancy occur between 60 and 90 minutes after eating. It is recommended to perform SMBG one hour after food intake to evaluate potential adjustments in meal composition and/or in the prandial insulin dose. In special circumstances, like women with slowed gastric emptying, a high-fat meal, or women who use regular insulin for a prandial bolus, it might be more appropriate to perform SMBG two hours after meals instead of one. SMBG performed before eating is the most useful parameter to identify optimal basal insulin doses. Evaluating glycemic levels during the night is recommended to diagnose and prevent nocturnal hypoglycemia. One randomized study of 66 women with GDM observed better neonatal outcomes by aiming for 1-hour postprandial glucose levels less than 140 mg/dL as opposed to a preprandial target of 59 to 106 mg/dL. In another study, 61 women with type 1 diabetes were randomly assigned into two groups at 16 weeks gestation. Women either monitored blood glucose levels preprandially or postprandially. Postprandial capillary blood glucose monitoring significantly reduced the incidence of preeclampsia and neonatal triceps skinfold thickness compared to preprandial monitoring. These studies have been criticized for not using comparable target blood glucose levels for pre- and post-prandial monitoring. Regardless, most specialists prefer postprandial testing at least partly, for the physiologic changes discussed earlier.

_

Barriers to self-monitoring of blood glucose during pregnancy:

The first step towards a successful SMBG during pregnancy is patient education and an understanding of the importance of SMBG to reducing complications during and after pregnancy. The patient must be properly educated on all aspects of meter use. It is important she be aware of how to properly code her meter, wash her hands prior to the test, and to apply the correct amount of blood to the test strip. It is also critical to educate patients on how glucose from food can affect the test results, to use test strips before the expiration date and not longer than 90 days after the vial was opened. Lastly, it is crucial to educate patients on proper storage of strips and disposal of strips if they are subjected to extreme humidity or temperature. Other common barriers to SMBG include costs of the meters and strips, lower socio-economic status, fewer HbA1c tests, obesity and other comorbidities, poor glycemic control, stigmas of testing in public places, pain, and inconvenience.  

_

An association between diabetes in pregnancy and fetal overgrowth has long been recognized. Fetal overgrowth is associated with a number of adverse outcomes for both the mother and her baby, such as a higher rate of difficult delivery. To reduce the rate of fetal overgrowth and its associated complications, women with diabetes in pregnancy undergo a number of interventions. Among these interventions is glucose monitoring. The utility of a hemoglobin A1c value appears to be greatest when performed periconceptionally to estimate the risk of congenital anomalies, but it does not appear to confer substantial benefit for estimating the risk of fetal overgrowth or other adverse pregnancy outcomes. Recent studies suggest that CGMS may be beneficial for certain women with diabetes treated with insulin, particularly in women with diabetes that is difficult to control. However, these data require further evaluation and do not yet support the incorporation of CGMS into routine practice.

_

Postprandial versus Preprandial Blood Glucose Monitoring in Women with Gestational Diabetes Mellitus requiring Insulin Therapy: a 1995 study:

In the management of gestational diabetes, various methods of glucose monitoring have been proposed, including the measurement of fasting, preprandial, postprandial, and mean 24-hour blood glucose concentrations.  In a retrospective pilot study comparing the outcomes of pregnancy among women with gestational diabetes who were followed with preprandial or postprandial glucose measurements, authors found that there was less macrosomia (defined as a birth weight greater than 4000 g) among their infants when treatment was based on the results of postprandial measurements.  

_________

Variability of blood glucose:

Biological Variation:

Fasting glucose concentrations vary considerably both in a single person from day to day and also between different subjects. Intraindividual variation in a healthy person is reported to be 5.7% to 8.3%, whereas interindividual variation of up to 12.5% has been observed. Based on a CV (coefficient of variation) of 5.7%, FPG can range from 112 to 140 mg/dL in an individual with an FPG of 126 mg/dL. (It is important to realize that these values encompass the 95% confidence interval, and 5% of values will be outside this range.)
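The arithmetic behind the quoted 112-140 mg/dL range can be sketched as follows; this is an illustrative calculation only, using the conventional approximation that 95% of values fall within about 1.96 standard deviations of the mean:

```python
# Sketch: deriving the 95% biological-variation range for FPG from a CV.
# CV (coefficient of variation) = SD / mean, expressed as a percentage.

def glucose_95pct_range(mean_mg_dl: float, cv_percent: float) -> tuple[float, float]:
    """Approximate 95% range for repeated fasting glucose in one individual."""
    sd = mean_mg_dl * cv_percent / 100.0   # back out the SD from the CV
    half_width = 1.96 * sd                 # ~95% of values lie within +/- 1.96 SD
    return (mean_mg_dl - half_width, mean_mg_dl + half_width)

low, high = glucose_95pct_range(126, 5.7)
print(f"{low:.0f} to {high:.0f} mg/dL")   # ~112 to 140 mg/dL, as in the text
```

As the text notes, 5% of values will still fall outside this interval by construction.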

_

The concentration of glucose is highest in the arterial circulation. Laboratory determinations are usually done on venous samples. If the venous circulation is delayed, such as by leaving a tourniquet on for a prolonged period of time, the concentration falls even further. Thus, samples should be obtained after releasing the tourniquet. Studies have shown that blood glucose concentration may fall as much as 25 mg/dl when a tourniquet has been left in place for 6 minutes. The concentration of glucose in capillary samples is intermediate between venous and arterial. Warming the extremity increases the capillary flow and “arterializes” the sample, while cooling or a tourniquet decreases the flow and lowers the concentration of glucose.

_

Glycolysis:

Both red cells and leukocytes contain glycolytic enzymes. Therefore glucose will be consumed and the concentration of glucose in a sample of whole blood will decline with time. The rate of loss is generally said to be approximately 5% per hour, but may be as rapid as 40% in 3 hours. Consumption of glucose in whole blood samples can be prevented by adding sodium fluoride to the specimen to inhibit the glycolytic enzymes. This approach is the generally applied method in the clinical laboratory. It is effective except in situations where the system is overwhelmed, such as in specimens from patients with leukemia, which contain large numbers of leukocytes. Sodium fluoride has a major disadvantage in that its use makes the sample unacceptable for other determinations such as sodium and uric acid. However, while fluoride does attenuate in vitro glycolysis, it has no effect on the rate of decline in glucose concentrations in the first 1 to 2 h after blood is collected, and glycolysis continues for up to 4 h in samples containing fluoride. The delay in the glucose stabilizing effect of fluoride is most likely the result of glucose metabolism proximal to the fluoride target enolase. After 4 h, fluoride maintains a stable glucose concentration for 72 h at room temperature. A recent publication showed that acidification of the blood sample inhibits glycolysis in the first 2 h after phlebotomy, but the collection tubes used in that study are not commercially available. Placing tubes in ice water immediately after collection may be the best method to stabilize glucose initially, but this is not a practical solution in most clinical situations. Separating cells from plasma within minutes is also effective, but impractical. Rapid separation of the sample or cooling will also prevent glycolysis and will allow the sample to be used for other determinations. Unhemolyzed samples that have been separated within 30 minutes of drawing are generally considered adequate. 
Rapid cooling of the sample followed by centrifugation is even more effective in preventing glycolysis.  Blood glucose cannot be determined accurately on postmortem specimens because both glycogenolysis and glycolysis continue after death. A reasonable estimate of the antemortem blood glucose concentration can be obtained by measuring the glucose concentration of the vitreous of the eye, which does not contain glycolytic enzymes.

_

Variability of capillary blood glucose monitoring measured on home glucose monitoring devices:

Self-monitoring of blood glucose helps achieve glycemic goals. Glucometers must be accurate. Many variables affect blood glucose levels. These factors are analytical variables (intrinsic to the glucometer and glucose strips) and preanalytical variables related to patients. Analytical variables depend on factors like shelf life, amount of blood, and enzymatic reactions. Preanalytical variables include pH of blood, hypoxia, hypotension, hematocrit, etc. CGMS has the potential to revolutionise diabetes care, but its accuracy needs to be proven beyond doubt before it replaces current glucometer devices.

_______

Factors that alter blood glucose results by SMBG:

Table above outlines preanalytical, analytical, and postanalytical factors that can alter the glucose result when a SMBG device is used. The FDA accumulated over 400 medical device reports on blood glucose monitors used in hospitals over 2 years. The 4 most frequent errors reported included 2 preanalytical errors (inadequate instrument cleaning; incorrect quality control or proficiency testing procedures) and 2 analytical errors (improper technique; an incorrect match between the glucose monitor for calibration and test strip calibration when required by the manufacturer).

_______

Preanalytical Variation:

Numerous factors that occur before a sample is measured can influence results of blood tests. Examples include medications, venous stasis, posture, and sample handling. The concentration of glucose in the blood can be altered by food ingestion, prolonged fasting, or exercise. It is also important that measurements are performed in the absence of intercurrent illness, which frequently produces transient hyperglycemia. Similarly, acute stress (e.g., not being able to find parking or having to wait) can alter blood glucose concentrations. Samples for fasting glucose analysis should be drawn after an overnight fast (no caloric ingestion for at least 8 h), during which time the subject may consume water ad lib. The requirement that the subject be fasting is a considerable practical problem, as patients are usually not fasting when they visit the doctor, and it is often inconvenient to return for phlebotomy. For example, at an HMO affiliated with an academic medical center, 69% (5,752 of 8,286) of eligible participants were screened for diabetes. However, FPG was performed on only 3% (152) of these individuals. Ninety-five percent (5,452) of participants were screened by random plasma glucose measurements, a technique not consistent with ADA recommendations. In addition, FPG exhibits diurnal variation. Analysis of 12,882 participants aged 20 years or older in NHANES III who had no previously diagnosed diabetes revealed that mean FPG in the morning was considerably higher than in the afternoon. Prevalence of diabetes (FPG ≥126 mg/dL) in afternoon-examined patients was half that of participants examined in the morning. Other patient-related factors that can influence the results include food ingestion when the patient is supposed to be fasting and a hypocaloric diet for a week or more prior to testing.

_

Pre-analytical variables:

Choosing the correct blood sample:

There are several aspects concerning the blood sample that need attention. Although there are different recommendations, the first choice is to wash the hands with soap and water, dry them, and use the first drop of blood for assessment. Erroneous blood glucose levels (pseudohyperglycemia) have been recorded when patients did not wash their hands with water after peeling fruits, and such false readings were still noted when hand washing was substituted with the use of an alcohol swab. If washing hands is not possible, and they are not visibly soiled or exposed to a sugar-containing product, it is acceptable to use the second drop of blood after wiping away the first drop. Firm squeezing of the finger should be avoided.

_

Operator error:

The technique of the user or operator of the glucometer usually is responsible for more inaccuracy than the glucometer itself. Applying insufficient blood to the strip, using strips that are out of date or exposed to excess moisture or humidity, and failure to enter the proper code can compromise accuracy. Several important technologic advances that decrease operator error have been made in the last few years. These include "no wipe" strips, automatic commencement of timing when both the sample and the strip are in the meter, smaller sample volume requirements, an error signal if sample volume is inadequate, "lock out" if controls are not assayed, barcode readers, and the ability to store up to several hundred results that can subsequently be downloaded for analysis. Together these improvements have produced superior performance by newer meters. Patient education can have significant influence on the accuracy of the readings shown on the glucometer. Differences of as much as 14.5 mg/dl between lots of test strips have been reported. Most glucometers require coding to be done prior to use. A study by Raine et al. has suggested that up to 16% of patients in endocrine practice miscode their glucometers. This can lead to errors of -37% to +29% in clinical practice. The probability of giving an additional 3 units of insulin when meters are miscoded was as high as 22.5%.

_

Hematocrit:

Variation in the patient's hematocrit can result in inaccuracies in the blood glucose reading. Abnormal hematocrit concentrations can result in falsely low (hematocrit >50%) or falsely high (hematocrit <40%) glucose levels. In one study, the hematocrit effect was studied by adjusting the hematocrit of donor sodium heparin blood at glucose concentrations of 54, 247, and 483 mg/dl. At the low glucose concentration (54 mg/dl), the mean glucose difference changed by more than 10 mg/dl. At higher glucose concentrations, meters demonstrated more than a 10% change in the mean glucose percentage difference between the lowest and highest hematocrit values. Hematocrit variations may occur frequently in daily routine (e.g., due to dehydration/exercise, nicotine and alcohol abuse, pregnancy, etc.). The use of meters with stable performance is recommended under these conditions.

_

Whole blood vs. plasma vs. interstitial fluid:

Whole blood glucose levels are usually 10-15% lower than plasma glucose levels. The glucose concentration in the water that makes up plasma is equal to that in the water of erythrocytes. Plasma has greater water content than erythrocytes and, therefore, exhibits higher glucose levels than whole blood. The World Health Organization (WHO) has devised a conversion factor of 1.12, which has been mathematically derived assuming a hematocrit of 45% and a red-cell to plasma ratio of ~0.8. In a critical care setting, multiple variables affecting the blood glucose may be present at one time. Hypotension, hypoxia, pH of blood, and temperature are amongst the many variables affecting blood glucose measurement. Glucose is measured in the interstitial fluid by an electrochemical glucose oxidase method. Each measurement cycle requires 20 minutes to complete. The measured interstitial glucose value lags behind the serum glucose concentration by about 18 minutes, secondary to the time required for a change in serum glucose to equilibrate with the interstitial fluid. Sweat on the skin will dilute the collection fluid. Sweat and/or elevated body temperature will initiate a skipped measurement cycle.
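The WHO conversion factor of 1.12 described above can be applied as a simple multiplication; the sketch below is illustrative only, and assumes the stated hematocrit of 45%:

```python
# Sketch: converting between whole-blood and plasma glucose using the
# WHO factor of 1.12 (derived assuming a hematocrit of 45%).

WHO_FACTOR = 1.12

def whole_blood_to_plasma(wb_glucose_mg_dl: float) -> float:
    """Plasma-equivalent glucose from a whole-blood reading."""
    return wb_glucose_mg_dl * WHO_FACTOR

def plasma_to_whole_blood(plasma_glucose_mg_dl: float) -> float:
    """Whole-blood-equivalent glucose from a plasma reading."""
    return plasma_glucose_mg_dl / WHO_FACTOR

print(whole_blood_to_plasma(100))  # 112.0 mg/dL plasma equivalent
```

The 12% uplift implied by the factor sits within the 10-15% difference quoted above. Note that the true factor varies with the patient's actual hematocrit.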

_

Hypotension:

Hypotension results in decreased perfusion and increased glucose utilization, producing false results in capillary blood glucose. Atkin et al. assessed the validity of fingerstick glucose measurements in hypotensive critically-ill patients. They found that the fingerstick glucose values were significantly lower than the values obtained by venous reagent strips or laboratory glucose measurements. Fingerstick glucose values in the hypotensive group were 67.5% of laboratory glucose values and were significantly lower than the values obtained in the normotensive group (91.8%, P < 0.001). Juneja et al. aimed to compare the accuracy of capillary bedside glucometry with arterial samples in critically-ill patients with shock through a prospective case-control study. They studied 100 patients on vasopressor support, and the control group had 100 normotensive patients. Mean arterial and capillary sugars (mg/dl) in the study and control groups were 164.7 ± 70 and 157.4 ± 68.9, and 167.1 ± 62.2 and 167.5 ± 61, respectively. They concluded that arterial blood glucose is a better measurement than capillary blood glucose in hypotensive patients. Venous blood glucose values are also stated to be significantly better than capillary blood glucose measurements and to correlate better with laboratory measurements in a critical care setting. Also, the prandial state accentuated the difference between various measurements in the hospital setting. Capillary blood glucose levels were 20-25% higher than venous plasma glucose levels in the prandial state, whereas they were only 2-5 mg/dl higher in the fasting state.

_

pH:

As with any other enzymatic reaction, a change in pH could in principle affect the measurement reaction. However, within the range of pH 6.89 to 7.4, pH has been found to have little effect on measured blood glucose levels.

_

Alternate site:

Bina et al. studied the differences in the measurement of blood glucose at various sites (forearm, palm, and thigh) with respect to fingertip capillary blood glucose. The effects of prandial state and moderate exercise on blood glucose levels at the different sites were also studied. Significant differences in BG at alternative sites were found 60 min post-meal (P < 0.0003) and post-exercise (P < 0.037). However, no significant differences were observed between sites in either the fasting state or at 90 and 120 min post-meal. It has been observed that there is a considerable time lag in measurement at alternate sites. This can be particularly dangerous in hypoglycemic situations, and hence clinicians must be aware of it. The effect of oxygen concentration in the sample has already been discussed.

_

Drugs:

Various drugs affect capillary glucose readings. Of particular importance are acetaminophen, dopamine, mannitol, and ascorbic acid. Glucose meters use enzyme-based amperometric biosensors to measure glucose concentrations. Glucose oxidase oxidizes glucose to gluconolactone while reducing oxygen to H2O2. Other substances like ascorbic acid, uric acid, acetaminophen, and salicylic acid can falsify the results through nonspecific oxidation. Acetaminophen increased glucose readings with GDH meters but decreased readings with some, but not all, GO-based meters at therapeutic drug levels. Dopamine increased glucose values on GDH-based meters, primarily at high drug concentrations. Mannitol increased GO-based meter readings, possibly through detection by the analyzer or by a non-specific osmotic effect. At high doses, ascorbic acid increased GDH-based meter readings but decreased those that used glucose oxidase.

_

Other interfering substances:

Some naturally occurring substances in the body tend to interfere with blood glucose readings. High triglyceride levels cause falsely low blood glucose values because the lipid takes up sample volume, reducing the measured glucose. Bilirubin has also been noted to cause pseudohypoglycemia.

_

Implications:

With such vast data regarding the various variables affecting glucometer blood glucose readings, the clinician must be alert when interpreting the values while treating a patient. Patients also need to be educated regarding their glucometers, which can prevent false readings and inadvertent administration of excess insulin resulting in severe hypoglycemia.

_________

Analytical Variation:

 Glucose is measured in central laboratories almost exclusively using enzymatic methods, predominantly with glucose oxidase or hexokinase. The following terms are important for understanding measurement: accuracy indicates how close a single measurement is to the “true value” and precision (or repeatability) refers to the closeness of agreement of repeated measurements under the same conditions. Precision is usually expressed as CV (coefficient of variation); methods with low CV have high precision. Numerous improvements in glucose measurement have produced low within-laboratory imprecision (CV <2.5%). Thus, the analytical variability is considerably less than the biological variability, which is up to 8.3%. Nevertheless, accuracy of measurement remains a problem. There is no program to standardize results among different instruments and different laboratories. Bias (deviation of the result from the true value) and variation among different lots of calibrators can reduce the accuracy of glucose results. (A calibrator is a material of known concentration that is used to adjust a measurement procedure.) A comparison of serum glucose measurements (target value 98.5 mg/dL) was performed among ~6,000 laboratories using 32 different instruments. Analysis revealed statistically significant differences in bias among clinical laboratory instruments, with biases ranging from -6 to +7 mg/dL (-6 to +7%) at a glucose concentration of 100 mg/dL. These considerable differences among laboratories can result in the potential misclassification of >12% of patients. Similarly, inspection of a College of American Pathologists (CAP) survey comprising >5,000 laboratories revealed that one-third of the time the results among instruments for an individual measurement could range between 141 and 162 mg/dL. This variation of 6.9% above or below the mean reveals that one-third of the time the glucose results on a single patient sample measured in two different laboratories could differ by 14%.
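The precision and accuracy figures quoted above can be made concrete with a small worked example. This is an illustrative sketch only; the repeated-measurement values are invented for demonstration, while the 6.9% spread and 141-162 mg/dL range come from the CAP survey figures in the text:

```python
import statistics

def cv_percent(measurements: list[float]) -> float:
    """Coefficient of variation: SD as a percentage of the mean (precision)."""
    return statistics.stdev(measurements) / statistics.mean(measurements) * 100

def bias(measured_mean: float, true_value: float) -> float:
    """Deviation of the measured mean from the true value (accuracy)."""
    return measured_mean - true_value

# Hypothetical repeated measurements of one sample in one laboratory:
reps = [98.0, 99.5, 97.5, 100.0, 98.5]
print(f"within-lab CV: {cv_percent(reps):.1f}%")  # well under the 2.5% quoted

# The CAP survey: results spread 6.9% above and below the mean, so two labs
# at opposite extremes can differ by about 14% of the mean:
mean = 151.5
low, high = mean * (1 - 0.069), mean * (1 + 0.069)
print(f"{low:.0f} to {high:.0f} mg/dL")                      # ~141 to 162
print(f"spread: {(high - low) / mean * 100:.0f}% of mean")   # ~14%
```

The key distinction the code encodes: CV measures scatter of repeated results around their own mean (precision), while bias measures systematic offset from the true value (accuracy); a method can have excellent precision and still be inaccurate.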

_

Analytical variables:

Detection Method:

Glucometers predominantly use 2 principles: electrochemical (amperometry) and reflectance photometry. In electrochemical glucometers, the enzyme used (glucose oxidase) induces an electric current through the strip that is proportional to the amount of glucose. In reflectance glucometers, the strip changes color according to the amount of glucose in the sample; these glucometers quantify the color change by reflectance photometry. If the drop of blood does not cover the entire testing area, reflectance glucometers can give a falsely low value. They are also either automatic (non-wiping) or manual (wiping). Ambient temperature has been shown to affect glucose readings in reflectance meters. In one study, a manual reflectance glucometer overestimated glucose concentrations by 14% at 44°C and underestimated them by 12.7% at 25°C.

_

Enzymatic reactions:

Glucose meter strips contain one of several enzymes: glucose oxidase (GO), glucose dehydrogenase (GDH), or hexokinase. Glucose oxidase reactions require oxygen and water and are therefore susceptible to extremes of hydration or oxygenation; GO-mediated reactions generate gluconic acid and hydrogen peroxide. When capillary blood glucose was measured in mountain climbers at 13,500 ft with various glucometers, GO glucometers overestimated blood glucose by 6-15%, whereas the GDH meters were all within 5%, because glucose oxidase biosensor strips are sensitive to the oxygen concentration. However, a more recent study showed no such difference. Maltose, galactose, and xylose are misread as glucose by GDH-based methods; consequently, in patients on icodextrin peritoneal dialysis, GDH meters can give falsely high values because icodextrin is metabolized to maltose, which cross-reacts as glucose.

_

Glucostrips:

Glucostrips are another potential source of variability in blood glucose readings. Strips have a finite shelf life, usually two years under ideal storage conditions. Exposure of strips to light causes discoloration of the test area, resulting in falsely elevated glucose values. Strips kept in open-capped vials are exposed to heat and humidity, which degrades their stability by day 14.

_____________

Accuracy of glucometer:

Accuracy can be defined as the deviation from a reference value: a testing method is accurate if its measurement falls within acceptable error of the reference method. In the hypoglycemic range, if the values reported by the SMBG device are inaccurate (e.g., higher than actual values), this can lead to failure to recognize and treat life-threatening values or, more worrisome, to an inappropriate intervention (e.g., increasing an insulin infusion) that poses a serious patient safety risk. Clinically, the relevant question is whether the measured value is close enough to the actual value that the approach to therapy remains the same. The current ADA recommendations are that SMBG devices should help to: (a) achieve and maintain glycemic control, (b) prevent and detect hypoglycemia, (c) avoid severe hyperglycemia, and (d) facilitate adjustment of diabetes therapy to lifestyle changes (activity, diet, etc.). The accuracy requirements set by professional organizations are still rarely met by SMBG devices. For outpatients and noncritically ill hospitalized patients, most clinicians appear satisfied with SMBG device accuracy as long as glucose values avoid the extremes of hypoglycemia and hyperglycemia, because in the normal glucose range accuracy is typically acceptable for clinical decision-making. In critically ill patients, accuracy becomes more important because early signs of hypoglycemia and hyperglycemia may be difficult to detect owing to decreased mental status, sedation, and other patient conditions.
For optimal glucose control in high-demand states in critically ill patients, SMBG device technology has yet to provide a degree of accuracy and reliability high enough to support appropriate clinical decision-making. Continuous glucose monitoring devices based on invasive, minimally invasive, or noninvasive methodology are being developed to improve blood glucose monitoring.

_

Establishing glucose meter accuracy is challenging. Glucose meters only accept whole blood, but existing standards are serum based. Glucose as an analyte is unstable in whole blood, and the process of stabilizing glucose through glycolysis inhibitors can interfere with some glucose meters. Technical accuracy for glucose meters is defined by comparing meter results against clinical laboratory methods that use plasma/serum-based samples. There is no consensus among standards organizations and professional societies, however, for acceptable performance criteria. While technical accuracy defines meter performance, clinical accuracy establishes how treatment decisions agree between meter results and laboratory glucose results. Glucose meters should be evaluated before use, and the specific meter model selected should be based on technical and clinical performance in the intended patient population.

_

The accuracy of glucose meters is a common topic of clinical concern. Blood glucose meters must meet accuracy standards set by the International Organization for Standardization (ISO). According to ISO 15197, blood glucose meters must provide results within 20% of a laboratory standard 95% of the time for concentrations above 75 mg/dL; absolute limits are used for lower concentrations. However, a variety of factors can affect the accuracy of a test, including calibration of the meter, ambient temperature, pressure used when wiping the strip (if applicable), size and quality of the blood sample, high levels of certain substances (such as ascorbic acid) in blood, hematocrit, dirt on the meter, humidity, and aging of the test strips. Models vary in their susceptibility to these factors and in their ability to prevent or warn of inaccurate results with error messages. The Clarke Error Grid has been a common way of analyzing and displaying the accuracy of readings in terms of their management consequences; more recently an improved version, the Consensus Error Grid, has come into use. Older blood glucose meters often need to be "coded" to the lot of test strips used; otherwise, accuracy may be compromised by lack of calibration.
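The ISO 15197 acceptance rule described above can be sketched as a simple pass/fail check. The 75 mg/dL cutoff and the ±15 mg/dL absolute band below it follow the commonly cited 2003 edition of the standard; the paired readings below are hypothetical:

```python
# Sketch of an ISO 15197:2003-style acceptance check: at least 95% of
# meter readings must fall within +/-20% of the reference value at or
# above the cutoff, or within an absolute band below it. Thresholds per
# the 2003 edition; the data pairs are illustrative.

def within_iso_limits(meter, reference, pct=0.20, abs_mgdl=15.0, cutoff=75.0):
    """True if a single meter reading is within ISO-style limits of the reference."""
    if reference >= cutoff:
        return abs(meter - reference) <= pct * reference
    return abs(meter - reference) <= abs_mgdl

def meets_iso_15197(pairs):
    """True if >=95% of (meter, reference) pairs pass the per-reading check."""
    ok = sum(within_iso_limits(m, r) for m, r in pairs)
    return ok / len(pairs) >= 0.95

# Hypothetical paired (meter, laboratory reference) readings in mg/dL:
pairs = [(110, 100), (95, 100), (60, 55), (250, 240), (130, 160)]
print(meets_iso_15197(pairs))  # → True (all 5 pairs fall within limits)
```

In a real evaluation the standard calls for a large, glucose-range-stratified sample; this sketch only shows the shape of the per-reading tolerance rule.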

_

Independent accuracy testing is expensive, complicated, and rare. Diabetes Forecast, for example, does not test or recommend products because the American Diabetes Association is a nonprofit organization without a laboratory or expertise in laboratory comparisons of products. Where data do exist, in the form of manufacturers' tests, accuracy is reported in different ways. Some companies report accuracy as a "regression line," involving correlation coefficients, slopes, and intercepts; others report in a friendlier table format using percentages. These measures of accuracy are apples and oranges, so it is not possible for a consumer to directly compare how accurate one meter is relative to another. Cheaper meters may lack bells and whistles yet have better accuracy, so users have to evaluate all of the features that are important to them.

_

Factors that affect Blood Glucose Meter Accuracy:

Expired test strips:

Always check the expiry date of test strips before performing a test as expired test strips could produce a false reading.

Testing in very warm or very cold temperatures:

The accuracy of blood glucose meters can be affected by temperature; meters are designed to be most accurate at room temperature. If you need to test in very warm or very cold conditions, refer to your meter's user guide for the temperature range over which the meter remains accurate.

Testing in very humid conditions:

Humid conditions can also affect meter accuracy.  Always remember to keep the test strip pot closed after a test has been performed and, where possible, avoid testing in very humid environments.

Coding mistakes:

Some blood glucose meters require a code to be entered into the meter before testing with a new batch of test strips. If the wrong code is entered, this can affect the accuracy of the result. Many newer meters do not require coding.

Too little blood applied:

If too little blood is applied to the strip, this can cause a false reading. Many blood glucose meters will not give a reading if this happens and will instead alert the user that too little blood has been applied or display an error message.

Contamination on skin:

Dirty or wet hands can make readings very inaccurate in some cases. Ensure your hands have been washed with soap and water and dried before performing a blood test.

____

Temperature and SMBG:

In principle, SMBG values are measured with blood glucose meters that use an enzyme reaction. A temperature sensor is built into the device as a control mechanism that adjusts for the effect of ambient temperature on the enzyme reaction rate, allowing accurate values to be obtained. Differences in SMBG values due to temperature within the range of usual ambient temperatures are reported to be negligible, to the extent that clinical decisions are not affected. Conversely, one study comparing SMBG values at ambient temperatures of 25°C and 8°C reported that meters can either underestimate or overestimate blood glucose values. In addition, when the patient's skin is cool (15.5°C), lower SMBG values have been reported compared with warm skin (35°C). Particularly during the winter, SMBG values are reported to be higher than plasma glucose (PG) values.

_______

Common User’s Errors:

A tool such as SMBG can contribute substantially to improved glycemic control if it is reasonably accurate and used appropriately. What if, however, the information is incorrect, either because of technical inaccuracies or user error? Confounding issues related to blood glucose testing in the inpatient setting have been well elucidated, and in the outpatient setting common errors in SMBG have been documented in observational studies. I have already discussed technical inaccuracies, but SMBG data can also be rendered inaccurate by several user errors, including:

• failure to store glucose strips properly;

• failure to set glucose meter codes to match strip codes;

• failure to apply sufficient blood on the meter’s strip;

• failure to use control solutions;

• use of date-expired control solutions;

• use of date-expired strips; and

• failure to wash hands properly.

The frequency of user error relating to meter codes has been reported at approximately 16%. In one study, exactly half of the patients were elderly. As these patients are often challenged by cognitive and dexterity limitations and frequently have long-standing diabetes requiring insulin, therapeutic interventions based on erroneous data can be harmful.

_

The American Diabetes Association (ADA) assumed in a consensus report published in 1990 that up to 50% of self-monitored blood glucose readings deviate by more than 20% from the true values. However, more recent studies found the percentage of deviation to be lower. Alto and colleagues conducted a study of 111 patients in two family practice settings to determine the technical skill and accuracy of SMBG in an outpatient population. The patients were observed using a 13-point checklist of critical steps in the calibration and operation of their glucose monitor. Overall, 53% of patient glucose values were within 10% of the control value, 84% were within 20%, and 16% varied by 20% or more. In short, the study showed that despite multiple technical errors when using SMBG, most patients obtained clinically useful values. A study reported by Bergenstal and colleagues found that 19% of patients had inaccuracy rates of more than 15% in blood glucose monitoring. Some of the most common causes of inaccurate readings included lack of periodic meter technique evaluation, difficulty using wipe meters, incorrect use of control solutions, lack of hand washing (even when under clinical observation), and using unclean meters. These studies demonstrate the need for healthcare providers to monitor patient use of SMBG to help improve the accuracy of test results.

_

Error categories:

For the evaluation, an error classification system was developed containing a weighting and interpretation of the errors and the resulting consequences. Errors were classified into categories F1 to F5.

_

Community pharmacy-based intervention to improve self-monitoring of blood glucose in type 2 diabetic patients: a 2006 study:

The objective of this study was to record and assess the errors patients make in preparing, performing, and processing self-monitoring of blood glucose (SMBG). The study showed that the majority of individuals with type 2 diabetes (83%) make at least one mistake when measuring their blood glucose with their own device. Two kinds of errors were quite frequent: errors that falsify the measurement reading and errors that can have a negative effect on patient compliance. In the reference literature there is a consensus that individuals with diabetes make numerous mistakes in self-monitoring of blood glucose and that remedial training sessions are required. The kinds of mistakes observed across studies were very similar; in other studies, too, the main mistakes concerned hand cleaning, adjusting the settings of the meter, and problems with coding. However, there is little data on how much impact to expect from such remedial training sessions in type 2 diabetic patients. One problem is that, at the time of our study, there was no standard method for assessing errors: a validated documentation sheet was not available, and the documentation sheets used in various studies were designed to record anywhere from 13 to 45 sources of error, so the studies are not comparable. The main difference among evaluations was whether the components of SMBG were summarized or recorded in detail. Common to all, however, is the finding that diabetes management education sessions on how to carry out blood glucose self-testing are both necessary and effective, even if no general statement can be made about the extent of their success.
The study presented here showed that a one-time, standardized intervention in community pharmacies specializing in diabetes care can more than triple the number of patients who carry out self-monitoring without making any errors: 17% initially compared with 59% at the end of the study. However, selection bias in the study population cannot be fully excluded, since such offers are probably accepted more frequently by motivated than by unmotivated patients. Altogether, the pharmacy setting is well suited to carrying out such evaluations along with corresponding instruction on how to perform SMBG correctly. The intervention comprised verbal instruction as well as practical exercises and took on average about 20 to 30 minutes, including the study documentation.

_

Synopsis of SMBG variation and error:

___________ 

Efficacy of SMBG:

_

Why SMBG is helpful for Patients with Diabetes:

SMBG is helpful to patients with diabetes in four distinct ways.

1.  First, it allows patients and clinicians to detect high or low blood glucose levels, thereby facilitating therapeutic adjustments to achieve long-term A1c goals.

2. Second, SMBG helps protect patients by allowing them to immediately confirm acute hypoglycemia or hyperglycemia.

3. Third, the technology facilitates patient education about diabetes and its management by giving patients more self-care responsibilities.

4. Fourth, SMBG helps motivate people toward healthier behavior.

_

SMBG facilitates improved A1c:

Many published studies have demonstrated that regular and frequent SMBG improves glycemic control in T1DM and T2DM patients on insulin treatment. There is also very strong evidence that SMBG improves control in T2DM patients who are not on insulin therapy. Davidson and colleagues showed that there is an inverse correlation between frequency of SMBG and A1c values in T1DM patients. Patients using SMBG have lower A1c than those who do not. The authors found that the more times per day that people check their blood glucose levels, the lower their A1c. However, after reaching a frequency of 6-7 tests per day, the improvement levels off. Strowig and colleagues showed similar results, reporting a 0.25% decrease in A1c for each blood glucose test per day. Again, there was a point of diminishing returns; improvements in A1c leveled off at approximately 8 tests per day. Studies of pediatric T1DM patients have demonstrated similar findings. In a retrospective study of more than 24,000 patients, Karter and colleagues found that increased frequency of SMBG correlated strongly with improved A1c regardless of the type of diabetes or therapy used.

_

Non-Insulin-Treated T2DM patients:

There has been much debate on the impact of SMBG on A1c in T2DM patients who are not treated with insulin. Skeptics of the benefits of SMBG use in this patient group often cite small or poorly designed studies that demonstrate no A1c benefit. This perspective often overlooks the fact that many T2DM patients are not adequately trained to interpret and respond to their test results. Utilization of SMBG involves more than simply documenting test results in a logbook; patients must understand and be able to make appropriate changes in therapy or activity based upon those results. SMBG testing in T2DM patients has also been hampered by a lack of consensus on the timing and frequency with which testing should be performed. Most patients who do perform blood glucose monitoring seldom test postprandial glucose. Other factors that inhibit testing frequency include the cost, pain, and inconvenience. All of these factors work against seeing a benefit in T2DM patients. Despite these factors, there is strong evidence that SMBG is, in fact, an effective method for lowering A1c in this patient group. A meta-analysis by Sarol and colleagues found an overall A1c improvement of 0.4% in non-insulin-treated T2DM patients who use SMBG compared with those who do not monitor. To counter potential criticism of their report, the authors critiqued the studies included in their meta-analysis and found no publication bias in their selection. A second meta-analysis conducted by Welschen and colleagues found similar results: an overall 0.39% improvement in A1c in type 2 patients not on insulin. The authors concluded that SMBG lowers A1c levels. Another review of the literature by Saudek in 2006 yielded similar findings.

_

SMBG and clinical outcome:

In a recent epidemiologic, non-randomized retrospective study, Martin and colleagues looked at disease-related fatal and non-fatal events in approximately 3,200 T2DM patients. Unlike the meta-analyses cited above, this study directly assessed clinical outcomes relative to SMBG utilization. Fewer patients who used SMBG experienced fatal or non-fatal events than patients who did not monitor their glucose (7.2 versus 10.4%, p=0.002). The authors concluded that SMBG may be associated with a healthier lifestyle and/or better disease management. Significantly, this study did not simply show that SMBG correlates with improved A1c; it demonstrated that SMBG is actually linked to better clinical outcomes. Furthermore, a recent study showed that patients described as "uncontrolled diabetics" (defined in that study as HbA1c >8%) had a statistically significant decrease in HbA1c after a 90-day period of seven-point self-monitoring of blood glucose, with a reported relative risk reduction (RRR) of 0.18% (95% CI, 0.86-2.64%, p<.001).
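As a worked example of how the event rates quoted above translate into relative measures, the following uses the Martin study's 7.2% vs. 10.4% figures; the arithmetic (not the study's own analysis) is illustrative:

```python
# Relative risk (RR), relative risk reduction (RRR), and absolute risk
# reduction (ARR) from the event rates quoted in the text. Illustrative
# arithmetic only, not a reanalysis of the study.

rate_smbg = 0.072      # fatal/non-fatal event rate in SMBG users
rate_no_smbg = 0.104   # event rate in non-users

rr = rate_smbg / rate_no_smbg   # relative risk
rrr = 1 - rr                    # relative risk reduction
arr = rate_no_smbg - rate_smbg  # absolute risk reduction

print(f"RR = {rr:.2f}, RRR = {rrr:.0%}, ARR = {arr:.1%}")
# → RR = 0.69, RRR = 31%, ARR = 3.2%
```

Note the distinction: the relative reduction (about 31%) looks much larger than the absolute reduction (3.2 percentage points), which is why both are worth reporting.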

________

Pros and cons of SMBG:

In contrast to the average blood glucose concentration reflected in an A1c level, the measurement of blood glucose itself, termed self-monitoring of blood glucose (SMBG), only reflects what is going on at that moment. This has both advantages and disadvantages. One advantage is that, at least theoretically, interventions can be carried out by the patient at that moment to counter the high (or low) blood glucose concentration. Furthermore, when adjusting insulin doses, it is important to know the pattern of blood glucose values, i.e., when during the day the levels are high, in range, or low, since different parts of the insulin prescription affect glucose concentrations at various times after injection. The disadvantage is that the value reflects only one instant in time, and glucose concentrations fluctuate throughout the day and night; one value therefore does not accurately portray overall glucose levels. It is certainly not unheard of for patients to manipulate their behavior to "look good" (i.e., have a near-normal glucose concentration) when seen by their doctor: restricting their diets several days before, omitting food for 18–24 hours before the visit, taking extra insulin, etc. In that vein, it has been amply demonstrated that up to a quarter of patients will falsify their SMBG values when writing the results in their log books. In that case, they usually conveniently forget to bring in the meter, most of which contain a memory chip. Discrepancies between A1c levels and proffered SMBG values usually flush out this misguided behavior.

_

Most people would agree that treatments, especially those that have invasive components and/or are expensive, should result in improved clinical outcomes. SMBG, as part of a treatment plan, is both expensive and invasive but does have the potential to improve outcomes by helping to lower glucose concentrations and thereby decrease the small vessel complications of diabetes. In patients taking insulin, performing SMBG offers the opportunity to correct high measured values at that moment by injecting additional rapid-acting insulin. More importantly, the pattern of results over longer periods of time enables the physician (or selected patients) to make insulin dose adjustments to counteract blood glucose concentrations exceeding the desired range. Therefore, it is not surprising that in at least eight studies, A1c levels were inversely related to the frequency of SMBG measurements in insulin-requiring patients, i.e., the more frequently that patients tested, the lower their A1C levels. However, simply measuring blood glucose is ineffective. In one of the studies, increased frequency of SMBG resulted in lower A1C levels only in those who self-adjusted their insulin doses, not in the insulin-requiring patients who did not. This strongly suggests that acting on the measured values is necessary.
_
At least 19 studies have been carried out to evaluate the effect of SMBG on A1c levels in diabetic patients not receiving insulin. Only five have been positive, i.e., showing that performing SMBG in these patients was associated with statistically significant lower A1c levels than in control groups that did not carry out SMBG. However, in each one of them, factors other than SMBG were probably responsible for the positive results. These include greater attention to education and decision making in the group performing SMBG compared with the control group, self-selection or a preferential drop-out rate. In the first case, those in the SMBG group either received more intensive nutritional counseling or decisions on changing therapy were made more frequently than in their matched control group. In the second case, patients were given the choice of performing SMBG or not. Those who chose to also had better self care practices and healthy lifestyle behaviors documented by a questionnaire, thus invalidating the conclusion that SMBG per se is what led to the lower A1c levels. Finally, in one study, nearly 50% failed to finish it. If the nearly half of the SMBG group that failed to complete the study were enriched in those who were showing the least response, these results could also be due to self selection.
_
The gold standard for carrying out clinical studies is randomization and blinding. This means that the subjects are randomly chosen to be placed in the control or intervention group (which avoids self selection) and the person(s) carrying out the study are blinded so that they do not know whether the subject is in the control or intervention group (which avoids preferential treatment of one group). A nurse-directed diabetes disease management program afforded the author the opportunity to carry out such a study evaluating SMBG in type 2 diabetic patients who were taking pills but no insulin. In this program, a nurse (under the supervision of a physician) followed detailed treatment algorithms. Patients on pills were randomized to perform SMBG or not. Both groups were seen by a dietitian who taught the selected patients SMBG and provided nutritional counseling to both groups five times during a six-month period. The dietitian utilized the SMBG values (recommended before and after one meal every day but Sunday and carried out 45% of the time) in his nutritional counseling. Neither the nurse nor the physician when consulted by the nurse knew which patients were performing SMBG. Although A1c levels fell significantly in both groups, the decrease was not statistically significantly different between the groups. In other words, SMBG had no beneficial effect when patients not taking insulin received good diabetes care. There are at least three possible explanations for the lack of an effect of SMBG in patients not taking insulin. First, patients receive little or no feedback on their results. This was not the case in the randomized blinded study described above. Second, related to the first, they are not taught the self-management skills to use to lower the measured glucose values. However, there are a limited number of behaviors possible for patients not receiving insulin to counter a high SMBG value. 
If the measurement was taken before a meal, options include delaying that meal, eating less (especially carbohydrates), exercising at that point, or increasing the dose of a pill before that meal that rapidly increases insulin secretion. (That medication, however, is used by only a very small minority of patients.) Even if taught, given patients’ usual lifestyles, these self management activities are not very likely to occur. Third, in the author’s experience, the vast majority of patients measure their glucose level before meals, rather than after meals. This limits the two potential benefits of SMBG in patients not taking insulin – motivation and education. Fasting values serve neither to educate (there is no information on the effect of the meal composition or size) nor to motivate well (postprandial values are much higher). Except for early type 2 diabetes, in which the before-meal glucose values are near normal, the most important determinant of after-meal glucose concentrations is the before-meal value. Therefore, in the author’s view, if SMBG is to be recommended in patients not receiving insulin, it should be carried out before and one to two hours after a meal to maximize the educational value of how the size and composition of the meal contributes to the rise of glucose concentrations after eating (from the difference between the two SMBG values) and the motivational aspect by showing the patient how high the glucose level rises.

_
In addition to its drawbacks of invasiveness and lack of efficacy, SMBG is expensive. In the Kaiser Permanente Northern California Region, the cost for strips alone in 1998 was the fourth largest outpatient pharmacy expenditure, accounting for 2% of the entire budget. Some of these costs would, of course, be attributed to patients receiving insulin. Although it is not possible to completely isolate SMBG costs for diabetic patients not taking insulin, the Medicare B fee-for-service program run by the government affords a fairly accurate estimate of this cost. The ICD-9 code 250.00 (type 2 diabetes, uncomplicated, not uncontrolled) is the one most often used for diabetic patients on either diet alone or taking oral antidiabetes medications. The total cost in 2002 for reagent strips, lancets, lancing devices, meters, batteries, calibration solutions or calibration chips was US$465,503,576, which represented 58.8% of the total outlay of the Medicare B program for the ICD-9 code of 250.00 (personal communication, staff, Center for Medicare & Medicaid Services). To the extent that type 2 diabetic patients receiving insulin were given this ICD-9 code, this cost would be an overestimate. On the other hand, to the extent that type 2 diabetic patients not taking insulin were given another ICD-9 code, this cost would be an underestimate. However, since this cost does not include the 10% of Medicare beneficiaries enrolled in health maintenance organization (HMO) Managed Medicare, this figure is certainly an underestimate of the total cost for SMBG in type 2 diabetic Medicare patients not taking insulin. Given that this nearly half a billion dollars is only for Medicare patients, the total cost for SMBG for all type 2 diabetic patients not taking insulin is obviously much, much higher.
______

SMBG clinical trials:

Self-monitoring of blood glucose (SMBG) improves glycaemic control in patients with type 1 diabetes and possibly also in insulin-treated type 2 diabetes (T2D), especially when treated with multiple insulin injections per day. However, the value and frequency of SMBG in non-insulin-treated patients with T2D is a matter of controversy. A consensus opinion among a group of experts from the UK suggested that patients with T2D using oral antidiabetic drugs (OAD) should monitor their blood glucose at least once daily, varying the time of testing between fasting, preprandial and postprandial levels during the day. A global consensus conference on SMBG recommended eleven measurements a week in these patients, and another recent consensus conference noted that patients with T2D on OAD may use SMBG but made no specific recommendations with respect to frequency. A cross-sectional and longitudinal study of patients with T2D in Australia showed that HbA1c was not significantly different between SMBG users and nonusers, either overall or within diabetes treatment groups such as diet, OAD or insulin, with or without OAD. Although such observational data can be useful in determining the effect of an intervention, conclusive evidence of this assumption is not available from randomized controlled trials. A recent study reported on the effect of more and less intensive diabetes education combined with recommendations on the frequency of SMBG in patients with T2D. It found that more intensive education did not result in an improved HbA1c compared to standard information and care.

______

Summary of key observational studies on Self-Monitoring of Blood Glucose in Non-Insulin Treated Type 2 Diabetes:

Study: Fremantle Diabetes Study
Purpose: Assessed whether SMBG is an independent predictor of improved outcome in a community-based cohort of T2DM patients. Used longitudinal data from 1,280 T2DM participants (70% ongoing SMBG users at baseline) and a subset of 531 individuals who attended annual assessments over a 5-year period.
Findings/comments: SMBG was associated with a 48% decreased risk of cardiovascular mortality in insulin-treated patients, but a 79% increased risk in non-insulin-treated patients. Time-dependent SMBG was independently associated with a 48% reduced risk of retinopathy in the 5-year cohort. 'Inconsistent findings relating to the association of SMBG with cardiac death and retinopathy may be due to confounding, incomplete covariate adjustment or chance.'

Study: Kaiser Permanente
Purpose: Assessed the longitudinal association between SMBG and glycemic control in diabetic patients from an integrated health plan. Followed 16,091 new SMBG users and 15,347 ongoing users over a 4-year period.
Findings/comments: Greater SMBG frequency among new users was associated with a graded decrease in HbA1c (relative to non-users) regardless of diabetes therapy. Longitudinal changes in SMBG frequency were related to significant changes in glycemic control.

Study: QuED
Purpose: Assessed the impact of SMBG on metabolic control in non-insulin-treated T2DM subjects (41% ongoing SMBG users at baseline). Followed 1,896 patients over a 3-year period.
Findings/comments: Performance and frequency of SMBG did not predict better metabolic control over 3 years. Investigators could not identify any specific subgroup for whom SMBG practice was associated with lower HbA1c levels during the study.

Study: ROSSO
Purpose: Investigated the relationship of SMBG with disease-related morbidity and mortality. Followed 3,268 patients retrospectively from medical records, from diagnosis of T2DM between 1995 and 1999 until the end of 2003 (mean follow-up 6.5 years).
Findings/comments: SMBG was associated with decreased diabetes-related severe morbidity and all-cause mortality. This association was also seen in the subgroup of non-insulin-treated patients. Medical records contained data on some biochemical parameters, retinopathy and neuropathy for only a small proportion of patients.

Study: King-Drew Medical Center Trial
Purpose: Randomized, single-blind study designed to determine whether SMBG improves HbA1c in non-insulin-treated T2DM patients. Clinical management decisions were blinded to SMBG data and use; 89 non-insulin-treated T2DM patients were followed for 6 months.
Findings/comments: At 6 months, differences in the decrease in HbA1c levels were not statistically significant. The rapid upgrading of medication every two weeks if goals were not met may have obscured the potential of SMBG for supporting self-management.

Study: ESMON
Purpose: Prospective randomized controlled trial assessing the effect of SMBG vs. no monitoring on glycemic control and psychological indices in patients with newly diagnosed T2DM. Evaluated 184 non-insulin-treated patients with no previous use of SMBG over 12 months.
Findings/comments: There were no significant differences in HbA1c between groups at any time point. SMBG was associated with a 6% higher score on the depression subscale of the well-being questionnaire. The major improvement of mean HbA1c in the control group, from 8.6 to 6.9%, indicates a dominant role of medication in disease management.
ESMON Prospective randomized controlled trial assessed the effect of SMBG vs. no monitoring on glycemic control and psychological indices in patients with newly diagnosed T2DM. Evaluated 184 non-insulin-treated patients with no previous use of SMBG over 12 months There were no significant differences in HbA1c be­tween groups at any time point. SMBG was associated with a 6% higher score on the depression subscale of the well-being questionnaire. The major improvement of mean HbA1c levels in the con­trol group, from 8.6 to 6.9% indicates a dominant role of medication in disease management
DINAMIC Multicentre, randomized, parallel-group trial was designed to determine if therapeutic management programs for T2DM that included SMBG result in greater reductions in HbA1c compared with pro­grams without SMBG in non-insulin-treated patients.Followed 610 T2DM patients with early or mild dia­betes receiving an identical oral anti-diabetic therapy regimen with gliclazide for 27 weeks There was a major decrease of HbA1c which was significantly larger in the SMBG group than the con­trol group. The incidence of symptomatic hypoglycemia was lower in the SMBG group. The major improvement of HbA1c levels in the control group from 8.1 to 7.2% indicates a dominant role of medication in disease management
German-Austrian Prospective, multicenter, randomized controlled study Investigated the effect of meal-related SMBG on glycemic control and well-being in non-insulin-treated T2DM subjects. Followed 250 non-insulin-treated T2DM patients for 6 months In per-protocol analysis (n=223) use of SMBG sig­nificantly reduced HbA1c levels. SMBG use resulted in a marked improvement of general well-being with significant improvements in the subject’s depression and lack of well-being. The benefit of intense patient care is evident but the con­tribution of intense vs. SMBG cannot be assessed
DiGEM Three-arm, open, parallel group randomized trial designed to determine whether SMBG alone, or with instruction in incorporating results into self-care, is more effective than standardized usual care in improving glycaemic control in non-insulin-treated T2DM patients. Followed 453 patients with a mean HbA1c level of 7.5% for a median duration of 1 year. At 12 months the differences in HbA1c level between the three groups were not statistically significant. Investigators concluded that evidence is not convinc­ing of an effect of SMBG, with or without instruction in incorporating findings into self-care, compared with usual care in reasonably well controlled non-insulin-treated patients with type 2 diabetes.

_______

_______


_

Self-monitoring of blood glucose in patients with type 2 diabetes mellitus who are not using insulin:

A 2012 study by Cochrane Collaboration:

Self-monitoring of blood glucose (SMBG) has been found to be effective for patients with type 1 diabetes and for patients with type 2 diabetes using insulin. There is much debate on the effectiveness of SMBG as a self-management tool for patients with type 2 diabetes who are not using insulin. The objective of this review was to assess the effects of SMBG in patients with type 2 diabetes mellitus who are not using insulin. Six randomised controlled trials were added to the six trials included in the original review (Welschen 2005a), so twelve randomised controlled trials were included, evaluating outcomes in 3259 randomised patients. Intervention duration ranged from 6 months (26 weeks) to 12 months (52 weeks). Nine trials compared SMBG with usual care without monitoring, one study compared SMBG with self-monitoring of urine glucose (SMUG), one study was a three-armed trial comparing SMBG and SMUG with usual care, and one study was a three-armed trial comparing less intensive and more intensive SMBG with a control group. Seven out of 11 studies had a low risk of bias for most indicators. From this review, the authors conclude that when diabetes duration is over one year, the overall effect of self-monitoring of blood glucose on glycemic control in patients with type 2 diabetes who are not using insulin is small up to six months after initiation and subsides after 12 months. Furthermore, based on a best-evidence synthesis, there is no evidence that SMBG affects patient satisfaction, general well-being or general health-related quality of life. More research is needed to explore the psychological impact of SMBG and its impact on diabetes-specific quality of life and well-being, as well as the impact of SMBG on hypoglycemia and diabetic complications.
In non-insulin-treated type 2 diabetes patients with a diabetes duration of at least one year, the overall effect of SMBG compared to control groups over a follow-up of six months was a statistically significant 0.3% decrease in HbA1c. In contrast, the authors saw a non-significant decrease of 0.1% in HbA1c in patients in SMBG groups compared to control groups over a 12-month follow-up period. Secondly, the overall effect of SMBG compared to SMUG over a follow-up of six months showed a statistically non-significant decrease of 0.2% in HbA1c. Thirdly, it was not possible to estimate an overall effect of SMBG over a follow-up of six months for newly diagnosed non-insulin-treated type 2 diabetes patients, due to substantial inconsistency in the direction of the effect. However, the overall effect of SMBG over a follow-up of 12 months demonstrated a statistically significant decrease of 0.5% in HbA1c compared to control groups (two trials). Concerning health-related quality of life, well-being and patient satisfaction outcomes, the authors conclude, based on a best-evidence synthesis, that there was no significant evidence that SMBG had an effect on patient satisfaction (4 out of 4 trials), general well-being (4 out of 4 trials) or general health-related quality of life (3 out of 3 trials). Regarding levels of depression (WBQ-22 subscale), inconsistent findings were observed (2 out of 2 trials). Lastly, regarding the secondary outcomes, the authors conclude, based on a best-evidence synthesis, that periods of both asymptomatic and symptomatic hypoglycaemia are more frequent in patients performing SMBG (3 out of 4 trials), and that there is no statistically significant difference in fasting plasma glucose levels between SMBG and control intervention groups (3 out of 3 trials).

_

Results from studies of SMBG use in non-insulin-treated T2DM have been mixed, due to differences in study design, populations, outcome indicators, and inherent limitations of the traditional RCT models used. However, current evidence suggests that using SMBG in this population has the potential to improve glycemic control, especially when incorporated into a comprehensive and ongoing education program that promotes management adjustments according to the ensuing blood glucose values. SMBG use should be based on shared decision making between people with diabetes and their healthcare providers and linked to a clear set of instructions on actions to be taken based upon SMBG results. SMBG prescription is discouraged in the absence of relevant education and/or the ability to modify behaviour or therapy modalities. In summary, the appropriate use of SMBG by people with non-insulin-treated diabetes has the potential to optimize diabetes management through timely treatment adjustments based on SMBG results and to improve both clinical outcomes and quality of life. However, the value and utility of SMBG may evolve within a preventive care model that is based on ongoing monitoring and the ability to adjust management as the diabetes progresses over time. In the meantime, more effective patient and provider training around the use of SMBG is needed. Because skilled healthcare professionals are needed now and in the future to address the growing diabetes epidemic, it is hoped that this report will encourage the development and systematic introduction of more effective diabetes self-management education/training and value-based models of clinical decision making and care delivery.

_

A systematic review of self-monitoring of blood glucose (SMBG) in type 2 patients not taking insulin concluded: “The overall effect of SMBG was a statistically significant decrease of 0.39% in glycated hemoglobin (HbA1c) compared with the control groups. This is considered clinically relevant.” Based on the UK Prospective Diabetes Study, a decrease of 0.39% in HbA1c would be expected to reduce the risk of microvascular complications by 14%. Davidson, on the other hand, in a counterpoint to this study, reviewed several trials and concluded that SMBG fails to reduce HbA1c in type 2 patients not taking insulin and is therefore a waste of money.
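The 14% figure quoted above can be reproduced with a back-of-envelope calculation. It assumes (per UKPDS observational analyses, stated here as an assumption, not taken from this text) roughly a 37% lower microvascular risk per 1% absolute HbA1c reduction, and simple linear scaling; the true dose-response is not exactly linear, so this is only a plausibility check:

```python
# Hedged plausibility check of the UKPDS-based claim. Both constants
# below are assumptions for illustration, not data from this review.
risk_reduction_per_point = 37.0   # assumed % microvascular risk reduction per 1% HbA1c drop
hba1c_drop = 0.39                 # % absolute HbA1c decrease attributed to SMBG

estimated_reduction = hba1c_drop * risk_reduction_per_point
print(round(estimated_reduction))   # ≈ 14
```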

_

HbA1c as a function of SMBG measurements per day:

The figure above shows that HbA1c falls as the number of SMBG measurements per day increases, across all types of diabetes, suggesting that SMBG can thereby reduce diabetic complications and improve clinical outcomes.

 ________

Continuous glucose monitoring (CGM): 

_

_

A continuous glucose monitoring system (CGMS) measures blood glucose on a continuous basis (every few minutes). Two types of devices are available: newer systems that display “real time” glucose results directly on the monitor, and earlier “non-real time” devices that lack this display capability, so results are only available for retrospective viewing and analysis once the data are downloaded to a computer.

A typical “real-time” system consists of:

  • a disposable glucose sensor placed just under the skin, which is worn for a few days until replacement,
  • a link from the sensor to a non-implanted transmitter which communicates to a radio receiver,
  • an electronic receiver worn like a pager (or the insulin pump itself) that displays blood glucose levels in a practically continuous manner and monitors rising and falling trends in glycemic excursions.

Continuous monitoring allows examination of how the blood glucose level reacts to insulin, exercise, food, and other factors. The additional data can be useful for setting correct insulin dosing ratios for food intake and correction of hyperglycemia. Monitoring during periods when blood glucose levels are not typically checked (e.g., overnight) can help to identify problems in insulin dosing (such as basal levels for insulin pump users or long-acting insulin levels for patients taking injections). Monitors may also be equipped with alarms to alert patients to hyperglycemia or hypoglycemia so that they can take corrective action (after finger stick testing, if necessary) even when they do not feel symptoms of either condition. While the technology has its limitations, studies have demonstrated that patients with real-time continuous sensors experience less hyperglycemia, less hypoglycemia (including nocturnal hypoglycemia) and even improvement in A1C levels. CGM systems do not directly measure glucose levels in the blood, but measure the glucose level of interstitial fluid. This results in two disadvantages as compared to traditional blood glucose monitoring.

1. Using current technology, continuous systems must be calibrated with a traditional blood glucose measurement and therefore do not yet fully replace “finger stick” measurements.

2. Glucose levels in interstitial fluid temporally lag behind blood glucose values. The lag time has been reported to be about 5 minutes in general. This lag time is insignificant when blood sugar levels are relatively consistent. However, when blood sugar levels are changing rapidly (rising after a meal, or dropping towards hypoglycemia), the CGM may read in the normal range while in reality the patient is already experiencing symptoms of an out-of-range blood glucose value and may require treatment. For these and other reasons related to this first-generation technology, patients using CGM are typically advised to take traditional finger stick measurements at least twice a day (for calibration), to verify that the sensor readings are accurate, and whenever they wish to self-treat their diabetes. Currently available CGM systems are relatively expensive. While changing a subcutaneous cannula every 3–7 days would be less invasive than 3 or more finger sticks per day, the technology is still invasive. Moreover, the standard finger stick method must be used alongside the CGM technology to guarantee its functionality and accuracy. CGM technology is a major step in the advancement of diabetes care and has been proven effective. The ultimate goal of CGM technology is to use it in combination with subcutaneous insulin pumps, in effect creating an external “artificial pancreas”, thereby providing better overall health and improved HbA1c levels.
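The interstitial lag described above can be pictured as simple first-order dynamics between blood and sensor-side glucose. A minimal sketch (the 5-minute time constant is taken from the reported lag; the glucose traces are hypothetical, not device data):

```python
def interstitial_trace(blood_glucose, tau_min=5.0, dt_min=1.0):
    """First-order lag model: interstitial glucose relaxes toward blood
    glucose with time constant tau_min (minutes). Illustrative only."""
    ig = [blood_glucose[0]]                  # assume equilibrium at the start
    for bg in blood_glucose[1:]:
        prev = ig[-1]
        ig.append(prev + (dt_min / tau_min) * (bg - prev))
    return ig

# Hypothetical blood glucose rising 10 mg/dl per minute after a meal:
bg = [100, 110, 120, 130, 140]
ig = interstitial_trace(bg)
print(ig[-1] < bg[-1])   # True: the sensor-side value trails the true value
```

During a steady state the two traces converge, which is why the lag only matters when glucose is changing rapidly.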

_

The currently available CGMs measure blood glucose either with minimal invasiveness through continuous measurement of interstitial fluid (IF) or with the noninvasive method of applying electromagnetic radiation through the skin to blood vessels in the body. The technologies for bringing a sensor into contact with IF include inserting an indwelling sensor subcutaneously (into the abdominal wall or arm) to measure IF in situ or harvesting this fluid by various mechanisms that compromise the skin barrier and delivering the fluid to an external sensor. These IF measurement technologies are defined as minimally invasive because they compromise the skin barrier but do not puncture any blood vessels. After a warm-up period of up to 2 h and a device-specific calibration process, each device’s sensor will provide a blood glucose reading every 1–10 min for up to 72 h with the minimally invasive technology and up to 3 months with the noninvasive technology. Results are available to the patient in real time or retrospectively. Every manufacturer of a CGM produces at least one model that sounds an alarm if the glucose level falls outside of a preset euglycemic range. Invasive indwelling intravascular sensors that measure blood glucose directly are also under development for monitoring hospitalized patients. Prolonged use of such devices might cause vascular damage or infection. No articles have been published on their performance.

_

Noninvasive CGM Sensor:

A novel noninvasive method introduced by Glucon in 2007 is a CGM sensor called Aprise. This technology uses the photoacoustic properties of blood to measure glucose levels. The technique involves a sensor placed over a blood vessel that emits sound and pressure waves generated by short laser pulses absorbable by human tissue. The absorption causes a rise in temperature, and this creates an acoustic pressure impulse on the tissue surface. This impulse carries information about the properties of the structure underlying the skin. Glucose is known to affect the optical properties of blood, and thus the ultrasonic image changes of the tissue can be used to estimate the blood glucose concentration. A study of 62 inpatients showed similar results with the Aprise CGM system compared with the more invasive CGM systems. Results showed accuracy in measuring glucose in 51%, 67%, and 86% of samples among low (<150 mg/dL), mid (151–200 mg/dL), and high (>251 mg/dL) glucose ranges, respectively. The system was able to measure the blood directly instead of sampling the interstitial fluid compartment and inferring the systemic glucose levels. The results from this noninvasive CGM trial were promising; however, the Aprise is yet to be marketed to the public.

_

According to a recent report, CGM technology is not as reliable as initially anticipated. In a clinical trial following 11 patients with either type 1 or type 2 diabetes, Mazze and colleagues found that the 2 leading CGM systems, DexCom (San Diego, CA) and MiniMed Guardian RT (Northridge, CA), are less accurate than expected due to a delay in the time it takes to monitor glucose in interstitial fluid and display it to the patient. There were lag times of 21 ± 5 min for the Guardian system and 7 ± 7 min for the Dexcom. The CGM devices were also found to be less reliable than the traditional finger stick method. While both the Guardian and the Dexcom monitors should display 282 readings per day, the Dexcom averaged 204 ± 37 and the Guardian 229 ± 29. The inaccuracies did not induce episodes of acute crisis, such as diabetic coma, nor did they stop the technology from being marketed. As a result, it is recommended that finger sticks continue to be performed to confirm high and low glucose readings before relying solely on CGM. The potential for near-continuous detection of glycemic abnormalities and maintenance of overall glycemic control was demonstrated, and the technology was approved for sale. There is no clear consensus about the clinical indications for CGM in actual clinical practice. A Cochrane review found that there is limited evidence for the effectiveness of real-time CGM use in children, adults and patients with poorly controlled diabetes. However, there were indications that higher compliance in wearing the CGM device improved the glycosylated HbA1c level to a larger extent.

_

Clinical indications of CGM:

Situations that require the detailed information about blood glucose fluctuations that only continuous monitoring can provide include: adjusting therapy; quantifying the response in a trial of a diabetes therapy; assessing the impact of lifestyle modifications on glycemic control; monitoring conditions where tighter control without hypoglycemia is sought (e.g., gestational diabetes, pediatric diabetes, in the intensive care unit); diagnosing and then preventing hypoglycemia (e.g., during sleep, with hypoglycemia unawareness); and diagnosing and preventing postprandial hypoglycemia. The most important use of continuous blood glucose monitoring is to facilitate adjustments in therapy to improve control. Specific therapeutic adjustments include changing from regular to a synthetic ultrashort-acting insulin analog at mealtime, changing from NPH to a synthetic ultralong-acting insulin once or twice per day, increasing or decreasing the mealtime insulin bolus dosage, increasing or decreasing the basal insulin rate, altering the treatment of intermittent hypoglycemia or hyperglycemia, changing the insulin-to-glucose correction algorithm for premeal hyperglycemia, changing the insulin-to-carbohydrate ratio at mealtime, changing the method for counting carbohydrates, changing the carbohydrate composition of the diet, changing the reduction in short-acting insulin dosage for exercise, changing the nighttime regimen because of the dawn phenomenon, changing the target preprandial or postprandial blood glucose values, or referring a patient for psychological counseling to improve adherence to the treatment regimen. The most frequent therapy adjustment by Sabbah et al. (out of eight adjustments) was to increase the mealtime bolus dosage. The most frequent therapy adjustment by Kaufman et al. (out of nine adjustments) was to modify the type of basal long-acting insulin.

_

Accuracy of CGM:

A real-time CGM can be programmed to sound an alarm for readings below or above a target range. The most important use of an alarm is to detect unsuspected hypoglycemia (such as during sleep) so that glucose can be administered to prevent brain damage. There is a trade-off between an alarm’s sensitivity and specificity. In general, if the alarm is set to sound at a lower level than the hypoglycemic threshold, then the specificity will be good but the sensitivity may be poor. If the alarm is set to sound at a glucose level higher than the hypoglycemic threshold, then the sensitivity will be good but the specificity may be poor. The greater accuracy a continuous monitor can provide, the less of a trade-off is necessary.
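The threshold trade-off described above can be made concrete with a small sketch. The `alarm_stats` helper and all glucose values below are hypothetical, chosen only to illustrate how raising the alarm setpoint above the hypoglycemic threshold buys sensitivity at the cost of specificity:

```python
def alarm_stats(true_bg, sensor_bg, hypo_threshold=70, alarm_setpoint=70):
    """Sensitivity and specificity of a low-glucose alarm that fires when
    the sensor reading is at or below alarm_setpoint, judged against the
    true blood glucose relative to hypo_threshold. Illustrative only."""
    tp = fp = tn = fn = 0
    for truth, reading in zip(true_bg, sensor_bg):
        hypo = truth <= hypo_threshold      # true hypoglycemia
        alarm = reading <= alarm_setpoint   # alarm fires on the sensor value
        if hypo and alarm:
            tp += 1
        elif hypo:
            fn += 1      # missed hypoglycemic event
        elif alarm:
            fp += 1      # false alert
        else:
            tn += 1
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical paired values (mg/dl); the sensor reads slightly high:
true_bg   = [60, 65, 80, 90, 100, 120]
sensor_bg = [72, 70, 85, 88, 105, 118]

print(alarm_stats(true_bg, sensor_bg, alarm_setpoint=70))  # (0.5, 1.0)
print(alarm_stats(true_bg, sensor_bg, alarm_setpoint=85))  # (1.0, 0.75)
```

With the alarm set at the hypoglycemic threshold itself, one of the two true events is missed (sensitivity 0.5) but there are no false alerts; raising the setpoint to 85 mg/dl catches both events at the cost of one false alert.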

_

The Diabetes Research in Children Network (DirecNet) is a U.S. network of five clinical centers and a coordinating center dedicated to researching glucose monitoring technology in children with type 1 diabetes. The network’s investigators, the DirecNet Study Group, assessed the accuracy of the first- and second-generation CGMS and the GW2B in children with type 1 diabetes in concurrently published studies. The second-generation CGMS Gold, compared with the first-generation CGMS, had a lower median relative absolute difference (RAD) between CGMS glucose and reference serum glucose paired values (11 and 19%, respectively). For the GW2B, the median RAD between GW2B glucose and reference serum glucose paired values was 16%. Similar RAD values of 21% have been reported for the first-generation CGMS by Kubiak et al. RAD values of 12.8%  and 12.8–15.7% have been reported for the second-generation CGMS Gold system by Goldberg et al. and Guerci et al, respectively. Bode et al. evaluated the performance of the Guardian Continuous Monitoring System (Medtronic MiniMed) and whether using real-time alarms reduced hypo- and hyperglycemic excursions in a type 1 diabetic population. The mean absolute relative error between home blood glucose meter readings and sensor values was 21.3% (median 7.3%); further, on average the Guardian read 12.8 mg/dl below the concurrent home blood glucose meter readings. The hypoglycemia alert was able to distinguish glucose values ≤70 mg/dl with 67% sensitivity, 90% specificity, and 47% false alerts. The hyperglycemia alert showed a similar ability, detecting values ≥250 mg/dl with 63% sensitivity, 97% specificity, and 19% false alerts. The alerts resulted in a significant (P = 0.03) reduction in the duration of hypoglycemic excursions and a marginally significant (P = 0.07) increase in the frequency of hyperglycemic excursions. 
The International Organization for Standardization (ISO) standards for accuracy of point blood glucose tests require that a sensor blood glucose value be within 15 mg/dl of reference for a reference value ≤75 mg/dl and within 20% of reference for a reference value >75 mg/dl. Sensor accuracy by this definition is expressed as the percentage of data pairs meeting these requirements. The DirecNet group found that for hypoglycemic blood glucose levels (determined by a reference blood glucose monitor, the OneTouch Ultra), the CGMS Gold met the ISO standards in only 48% of readings and the GW2B met these standards in only 32% of readings. The percentage of data points attaining ISO accuracy standards climbed as the blood glucose level rose, topping out for the highest segment of reference blood glucose levels (i.e., blood glucose values ≥240 mg/dl). In this glycemic category, the CGMS Gold and GW2B, respectively, met ISO accuracy for 81 and 67% of data points. In a separate series of 15 healthy nondiabetic children undergoing continuous glucose monitoring over 24 h, the DirecNet Group reported that the median absolute difference in concentrations for the GW2B was 13 mg/dl and for the CGMS was 17 mg/dl. Furthermore, 30% of the values from the GW2B and 42% of the values from the CGMS deviated by >20 mg/dl from the reference value.
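The ISO point-accuracy rule stated above translates directly into a small check. The helper functions and the sample (sensor, reference) pairs below are illustrative, not actual study data:

```python
def meets_iso(sensor, reference):
    """ISO point-accuracy criterion: within 15 mg/dl of the reference when
    the reference is <= 75 mg/dl, otherwise within 20% of the reference."""
    if reference <= 75:
        return abs(sensor - reference) <= 15
    return abs(sensor - reference) <= 0.20 * reference

def pct_meeting_iso(pairs):
    """Percentage of (sensor, reference) data pairs meeting the criterion,
    which is how sensor accuracy is expressed under this definition."""
    return 100.0 * sum(meets_iso(s, r) for s, r in pairs) / len(pairs)

# Hypothetical (sensor, reference) pairs in mg/dl:
pairs = [(60, 70), (90, 70), (110, 100), (130, 100)]
print(pct_meeting_iso(pairs))   # 50.0
```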

_

Continuous glucose monitoring offers advantages over intermittent glucose monitoring when glycemic patterns are poorly understood. The information about direction, magnitude, duration, frequency, and causes of fluctuations in blood glucose levels that can be obtained by continuous glucose monitoring is simply not available with intermittent blood glucose monitoring. When retrospective patterns are needed to adjust therapy or document the state of physiology, CGMs are useful. When real-time recognition of both the absolute magnitude of glycemia as well as trend patterns are needed, then a real-time CGM provides a wealth of information. Technologies for continuous glucose monitoring require patient education for proper use. During hypoglycemia or periods of rapid fluctuation, values provided by CGMs may be inaccurate. Clinical outcome studies suggest that measures of mean glycemia and hypoglycemic burden both improve with the use of continuous glucose monitoring, but more studies are needed to convince payers to reimburse for this technology. In this data-hungry world, it appears likely that CGMs will eventually become a routine part of diabetes management, initially for patients with difficult-to-control diabetes and eventually for most patients with diabetes. Retrospective reporting will eventually give way to real-time readings, and adjunctive use requiring a confirmatory finger-stick blood test will eventually give way to primary use without the requirement of such confirmation. As methods for minimally invasive and noninvasive continuous monitoring advance, diabetic patients will use this technology more routinely. Data printouts from CGMs will increasingly provide a roadmap for effective diabetes management in the 21st century.

__

Continuous glucose monitoring systems for type 1 diabetes mellitus: Cochrane review 2012:

Type 1 diabetes is a disease in which the pancreas has lost its ability to make insulin. A deficit in insulin leads to elevated blood glucose levels, which can in turn lead to complications affecting the eyes, kidneys, nerves, heart and blood vessels. Since there is no cure for type 1 diabetes, patients need to check their blood glucose levels often by fingerprick and use these values to decide on their insulin dosages. Fingerpricks are often regarded as cumbersome and uncomfortable by patients. In addition, fingerprick measurements only provide information about a single point in time, so it is difficult to discern falling or rising trends in blood glucose levels. Continuous glucose monitoring systems (CGM) measure blood glucose levels semi-continuously. In this review 22 studies were included. These studies randomised 2883 patients with type 1 diabetes to receive a form of CGM or to use self-measurement of blood glucose (SMBG) by fingerprick. The duration of follow-up varied between 3 and 18 months; most studies reported results for six months of CGM use. This review shows that CGM helps in lowering the glycosylated haemoglobin A1c (HbA1c) value (a measure of glycemic control). In most studies the HbA1c value decreased (denoting improvement of glycemic control) in both the CGM and the SMBG users, but more in the CGM group. The difference in change in HbA1c levels between the groups was on average 0.7% for patients starting on an insulin pump with integrated CGM and 0.2% for patients starting with CGM alone. The most important adverse events, severe hypoglycemia and ketoacidosis, did not occur frequently in the studies, and absolute numbers were low (9% of the patients, measured over six months). Diabetes complications, death from any cause and costs were not measured. There are no data on pregnant women with type 1 diabetes or on patients with diabetes who are unaware of hypoglycemia.
There is limited evidence for the effectiveness of real-time continuous glucose monitoring (CGM) use in children, adults and patients with poorly controlled diabetes. The largest improvements in glycaemic control were seen for sensor-augmented insulin pump therapy in patients with poorly controlled diabetes who had not used an insulin pump before. The risk of severe hypoglycemia or ketoacidosis was not significantly increased for CGM users, but as these events occurred infrequently, the results have to be interpreted cautiously. There are indications that higher compliance in wearing the CGM device improves the glycosylated haemoglobin A1c (HbA1c) level to a larger extent.

_

Glucose sensing bio-implants:

A significant improvement of diabetes therapy might be achieved with an implantable sensor that would continuously monitor blood sugar levels within the body and transmit the measured data outside. The burden of regular blood testing would be taken from the patient, who would instead follow the course of their glucose levels on an intelligent device like a laptop or a smartphone. Glucose concentrations do not necessarily have to be measured in blood vessels, but may also be determined in the interstitial fluid, where the same levels prevail – with a time lag of a few minutes – due to its connection with the capillary system. However, the enzymatic glucose detection scheme used in single-use test strips is not directly suitable for implants. One main problem is the varying supply of oxygen, which glucose oxidase requires to convert glucose to gluconolactone and H2O2. Since the implantation of a sensor into the body is accompanied by growth of encapsulation tissue, as seen in the figure below, the diffusion of oxygen to the reaction zone is continuously diminished. This decreasing oxygen availability causes the sensor reading to drift, requiring frequent re-calibration using finger-sticks and test strips.

_

_

Other approaches replace the troublesome glucose oxidase reaction with a reversible sensing reaction, known as an affinity assay. This scheme was originally put forward by Schultz & Sims in 1978. A number of different affinity assays have been investigated, with fluorescent assays proving most common. MEMS technology has recently allowed smaller and more convenient alternatives to fluorescent detection via measurement of viscosity. Investigation of affinity-based sensors has shown that encapsulation by body tissue does not cause a drift of the sensor signal, but only a time lag of the signal compared with direct measurement in blood. Longer-term solutions to continuous monitoring, not yet available but under development, use a long-lasting bio-implant. These systems promise to ease the burden of blood glucose monitoring for their users, but at the trade-off of minor surgery to implant a sensor that lasts from one year to more than five years, depending on the product selected. Products under development include: the Senseonics Continuous Glucose Monitoring System, the Implanted Glucose Bio-sensor, the Dexcom LTS (long term system) and the Animas Glucose Sensor.

_

Function of an Implanted Tissue Glucose Sensor for more than 1 Year in Animals:

An implantable sensor capable of long-term monitoring of tissue glucose concentrations by wireless telemetry has been developed for eventual application in people with diabetes. The sensor telemetry system functioned continuously while implanted in subcutaneous tissues of two pigs for a total of 222 and 520 days, respectively, with each animal in both nondiabetic and diabetic states. The sensor detects glucose via an enzyme electrode that is based on differential electrochemical oxygen detection, which reduces the sensitivity of the sensor to encapsulation by the body, variations in local microvascular perfusion, limited availability of tissue oxygen, and inactivation of the enzymes. After an initial 2-week stabilization period, the implanted sensors maintained stability of calibration for extended periods. The lag between blood and tissue glucose concentrations was 11.8 ± 5.7 and 6.5 ± 13.3 minutes (mean ± standard deviation), respectively, for rising and falling blood glucose challenges. The lag resulted mainly from glucose mass transfer in the tissues, rather than the intrinsic response of the sensor, and showed no systematic change over implant test periods. These results represent a milestone in the translation of the sensor system to human applications.

__________

Artificial pancreas:

To overcome the limitations of current insulin therapy, researchers have long sought to link glucose monitoring and insulin delivery by developing an artificial pancreas. An artificial pancreas is a system that will mimic, as closely as possible, the way a healthy pancreas detects changes in blood glucose levels and responds automatically to secrete appropriate amounts of insulin. Although not a cure, an artificial pancreas has the potential to significantly improve diabetes care and management and to reduce the burden of monitoring and managing blood glucose.

An artificial pancreas based on mechanical devices requires at least three components:

  • a CGM system
  • an insulin delivery system
  • a computer program that “closes the loop” by adjusting insulin delivery based on changes in glucose levels

With recent technological advances, the first steps have been taken toward closing the loop. The first pairing of a CGM system with an insulin pump—the MiniMed Paradigm REAL-Time System—is not an artificial pancreas, but it does represent the first step in joining glucose monitoring and insulin delivery systems using the most advanced technology available.

_

A Breakthrough in better Glucose Control: MiniMed 530G with Enlite:

As an integrated insulin pump and CGM system, the MiniMed 530G with Enlite offers better control than multiple daily injections or conventional insulin pumps, and greater confidence in achieving it. Bayer’s Contour Next Link integrates seamlessly with the MiniMed 530G system, transmitting accurate blood glucose results wirelessly to the insulin pump. This makes it easier for you and your doctor to make therapy adjustments than with logbooks alone. CareLink is a convenient online tool that collects and organizes information from your system into personalized reports.

_

Bionic pancreas:

The device represents a solution that is as close to a cure as some feel is possible for now, and it may reach the commercial market by 2017. Developed by a collaborative biomedical team from BU and Massachusetts General Hospital, the device is worn externally and consists of two hormone pumps (insulin and glucagon), an iPhone and a continuous glucose monitor that measures blood sugar levels every five minutes. It is a marriage of biology and technology, with a powerful algorithm capable of recommending and delivering instantaneous hormone doses that balance blood sugar better than any diabetic can manage alone. It is self-correcting, constantly making small adjustments to keep blood sugars in the optimal range and learning from the patient’s own history how much insulin or glucagon to provide. All of the study participants achieved blood sugar levels that would greatly reduce, if not eliminate, the long-term risks of diabetes.

__________

ICU and glucose monitoring:

Accuracy of different methods for blood glucose measurement in critically ill patients:

Measurement of blood glucose concentration in ICUs is currently performed almost entirely intermittently, with analysis using either point-of-care glucose meters or blood gas analyzers. Although accurate data are not available, most measurements are probably made on glucose meters, and the majority of samples are capillary blood obtained by finger pricks. Both the use of glucose meters and the sampling of capillary blood have the potential to introduce errors into the measurement of blood glucose concentration.

Severe sepsis and septic shock are the main causes of death in intensive care units. More than 750,000 cases of severe sepsis occur annually in the United States, accounting for 215,000 deaths per year in that country. Impaired microcirculation plays a leading role in this setting and, unless corrected, can evolve to multiple organ dysfunction and death. Glucose homeostasis becomes modified in these patients, resulting in insulin resistance, hyperinsulinemia and consequent hyperglycemia. This set of conditions is termed stress diabetes, a physiological response that ensures glucose supply to non-insulin-dependent tissues such as hepatocytes, nerve cells and alveolar, endothelial and immune system cells.

Hyperglycemia is an independent predictor of adverse outcomes in cardiovascular disease, neurological disorders, respiratory, liver and gastrointestinal disease, malignancy and sepsis, and in surgical patients. Normoglycemia is associated with lower morbidity and mortality because of improvements in systemic inflammatory processes and in immune, endothelial and mitochondrial dysfunction. Normoglycemic patients are less susceptible to bloodstream infection, renal failure, anemia and transfusion, polyneuropathy, hyperbilirubinemia and prolonged dependence on both mechanical ventilation and intensive care therapy. Additionally, glucose control is cost-effective.
Thus, although glucose control is a priority in treating critically ill patients, glucose monitoring can be quite challenging. Considering that many intensive care patients are unable to express signs and symptoms of hypoglycemia, frequent and accurate measurements are pivotal. Given their low cost, easy sampling and prompt results, glucometers are often used to determine capillary blood glucose levels, although this method has not been validated for intensive care patients. Critically ill patients have multiple conditions that can interfere with measurements, such as abnormal pH, partial pressure of oxygen, hematocrit, low blood pressure and tissue hypoperfusion. Measurement errors may lead to inappropriate insulin dosing and increase the risk of severe or prolonged hypoglycemia and its complications, such as seizures, coma, arrhythmia and irreversible cerebral damage. The fingerstick capillary blood glucose method usually overestimates true glucose levels and gives rise to management errors in ICU patients. The use of glucometers in intensive care should therefore be avoided, particularly if noradrenaline is being used; predominantly, this method overestimates blood glucose levels, which leads to procedural errors and exposes patients to more frequent and prolonged hypoglycemic events. There is an increasing volume of evidence that maintaining glucose as close to normal as possible in hospitalized patients, especially those in intensive, surgical, and critical care units, can significantly improve outcomes, decreasing both morbidity and mortality.

_

Alternatives to the use of glucose meters are measurement in the hospital’s central laboratory or using a blood gas analyzer in the ICU. Although central laboratory measurement is much more accurate, the time delay in sending samples to the laboratory makes this an impractical solution for the ICUs in most hospitals. The frequency with which the blood glucose concentration is measured in the ICU makes venipuncture impractical, and viable alternatives are to sample from indwelling arterial or venous catheters. Sampling from indwelling vascular catheters may increase the risk of catheter-related bloodstream infection but this risk has not been quantified. Obviously, when sampling from indwelling catheters it is essential to avoid contamination from infusions of glucose-containing fluids. A more practical solution, but one that may have considerable cost implications, is to measure the blood glucose concentration in a blood gas analyzer because the majority of ICUs in the developed world will have such an analyzer in the ICU. Measurements from a properly maintained blood gas analyzer will have similar accuracy to central laboratory measurements. An additional consideration is that the blood glucose concentration varies in different vascular beds and the site from which blood is sampled can introduce further errors. The blood glucose concentration in radial arterial blood will be approximately 0.2 mmol/l higher than that in blood sampled from a peripheral vein, and 0.3 to 0.4 mmol/l higher than that in blood sampled from the superior vena cava. Sampling capillary blood in ICU patients, particularly in those who are hemodynamically unstable and being treated with vasopressors, can introduce large errors when compared with a reference method in which glucose is measured in central venous or arterial samples.
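
The mmol/l differences quoted above are easier to compare with meter readings in mg/dL after unit conversion. A minimal Python sketch (the factor of 18.016 mg/dL per mmol/L follows from glucose’s molar mass of roughly 180.16 g/mol; many texts simply round it to 18):

```python
# Glucose unit conversion: 1 mmol/L of glucose corresponds to ~18.016 mg/dL
# (glucose molar mass ~180.16 g/mol; the factor is often rounded to 18).
GLUCOSE_MG_PER_MMOL = 18.016

def mmol_to_mgdl(mmol_per_l):
    """Convert a glucose concentration from mmol/L to mg/dL."""
    return mmol_per_l * GLUCOSE_MG_PER_MMOL

def mgdl_to_mmol(mg_per_dl):
    """Convert a glucose concentration from mg/dL to mmol/L."""
    return mg_per_dl / GLUCOSE_MG_PER_MMOL

# The ~0.2 mmol/l radial-artery vs peripheral-vein difference cited above:
print(round(mmol_to_mgdl(0.2), 1))  # prints 3.6 (mg/dL)
```

By the same arithmetic, the 0.3 to 0.4 mmol/l arterial-to-superior-vena-cava difference corresponds to roughly 5 to 7 mg/dL.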

_

CGM in ICU:

Numerous techniques are available for continuous glucose monitoring in the ICU, including microdialysis and optical methods such as absorption spectroscopy, optical scattering and fluorescence. The blood glucose concentration can be measured in vivo by sensors that sit in the vascular or interstitial space, or ex vivo by drawing blood samples or a dialysate from an indwelling vascular catheter or dialysis membrane to an external sensor. Systems that intermittently draw blood to an externally based sensor may be described as automated intermittent monitors rather than continuous glucose monitors. Potential advantages of continuous glucose monitors include the ability to observe trends in blood glucose concentration and to intervene before the blood glucose concentration enters an unacceptable range, and the removal of operator error both in the timing of blood glucose measurements and in the sampling and analysis of blood. Almost all monitoring of the blood glucose concentration in critically ill patients is currently by intermittent measurement. Although intermittent measurement is standard practice, there is no agreed metric for reporting glycemic control, and many of the metrics currently reported are affected by the frequency of measurement. Current systems for continuous or automated intermittent monitoring may measure the blood glucose concentration at a frequency varying from every minute to every 15 minutes. Such monitors will not only increase the number of measurements, but will also standardize the frequency of measurements among patients monitored with each device. This may allow better reporting of glucose control metrics and, if sufficiently accurate, may offer a better understanding of the association between those metrics and outcomes.

__________

Limitations of Conventional Methods of Self-Monitoring of Blood Glucose: a 2001 study:

Children with type 1 diabetes are usually asked to perform self-monitoring of blood glucose (SMBG) before meals and at bedtime, and it is assumed that if results are in the target range, along with HbA1c measurements, then overall glycemic control is adequate. However, the brief glimpses of the 24-h glucose profile provided by SMBG may miss marked glycemic excursions. The MiniMed Continuous Glucose Monitoring System (CGMS) has provided a new method for obtaining continuous glucose profiles and an opportunity to examine the limitations of conventional monitoring. A total of 56 children with type 1 diabetes (age 2–18 years) wore the CGMS for 3 days. Patients entered four fingerstick blood samples into the monitor for calibration and kept records of food intake, exercise, and hypoglycemic symptoms. Data were downloaded, and glycemic patterns were identified. Despite satisfactory HbA1c levels (7.7 ± 1.4%) and premeal glucose levels near the target range, the CGMS revealed profound postprandial hyperglycemia: almost 90% of the peak postprandial glucose levels after every meal were >180 mg/dl (above target), and almost 50% were >300 mg/dl. Additionally, the CGMS revealed frequent and prolonged asymptomatic hypoglycemia (glucose <60 mg/dl) in almost 70% of the children. Despite excellent HbA1c levels and target preprandial glucose levels, children often experience nocturnal hypoglycemia and postprandial hyperglycemia that are not evident with routine monitoring. These observations have important clinical implications, because recent evidence suggests that postprandial hyperglycemia plays a particularly important role in the development of vascular complications of diabetes. These data also illustrate the potential usefulness of monitoring postprandial as well as preprandial glucose levels in youth with type 1 diabetes. The sensor also detected many more hypoglycemic events during the day than were appreciated clinically.
Repeated use of the CGMS may provide a means to optimize basal and bolus insulin replacement in patients with type 1 diabetes.

__________

Home vs. Hospital Testing:

Most home meters measure glucose in so-called “whole blood” (blood as it comes out of the body). Whole blood consists of a liquid, called plasma, and cells, mainly red cells; the percentage of red cells is called the hematocrit. The standard reference laboratory test measures glucose in plasma only (about half to two thirds of the volume of blood), so home meters are calibrated to report results as though they were measuring glucose in plasma (called “plasma-equivalent” results). Meter and laboratory results therefore already start from somewhat different playing fields. Second, laboratory tests eliminate virtually all variation, except for manufacturing variation, from their testing. Hospital standards are much more exacting than testing at home because hospitals have trained technicians, a controlled environment for temperature and humidity, and constant maintenance of the analyzer, with checking and refinement of its calibration several times a day, and they analyze a much larger blood sample (5 ml) for 60 seconds or more, at much greater expense. Laboratory tests generally come within about ±4% of a perfect reading.
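
The whole-blood-to-plasma-equivalent calibration described above is a simple multiplicative correction. A minimal sketch, assuming the commonly cited IFCC factor of 1.11 (individual meter manufacturers may calibrate slightly differently):

```python
# Plasma glucose runs ~11% higher than whole-blood glucose because red cells
# contain less water than plasma; the IFCC-recommended factor is 1.11.
PLASMA_CONVERSION_FACTOR = 1.11

def plasma_equivalent(whole_blood_mgdl):
    """Convert a whole-blood glucose reading (mg/dL) to a plasma-equivalent value."""
    return whole_blood_mgdl * PLASMA_CONVERSION_FACTOR

# A whole-blood reading of 100 mg/dL reports as ~111 mg/dL plasma-equivalent.
print(round(plasma_equivalent(100)))  # prints 111
```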

_______

SMBG vs. HbA1c:

Hemoglobin A1c outperforms Fasting Glucose for Risk Prediction:

Measurements of hemoglobin A1c (HbA1c) more accurately identify persons at risk for clinical outcomes than the commonly used measurement of fasting glucose, according to a study by researchers at the Johns Hopkins Bloomberg School of Public Health. HbA1c levels accurately predict future diabetes, and they better predict stroke, heart disease and all-cause mortality as well. The study appeared in the March 4, 2010, issue of the New England Journal of Medicine. As a diagnostic, “HbA1c has significant advantages over fasting glucose,” said Elizabeth Selvin, PhD, MPH, the study’s lead author. The HbA1c test has low day-to-day variability, its levels are not as affected by stress and illness, it has greater stability, and the patient is not required to fast before the test is performed. In the study, people with HbA1c levels between 5.0 and 5.5 percent were identified as being within the “normal” range; the majority of the U.S. adult population falls within this range. With each incremental HbA1c increase, the study found, the incidence of diabetes increased as well: those at a level of 6.5 percent or greater are considered diabetic, and those between 6.0 and 6.5 percent are considered at “very high risk” (9 times greater than those in the “normal” range) of developing diabetes. The revised ADA guidelines classify people with HbA1c levels in the range of 5.7 to 6.4 percent as at very high risk of developing diabetes over 5 years; the range of 5.5 to 6 percent, according to the ADA guidelines, is the appropriate level at which to initiate preventive measures. The study measured HbA1c in blood samples from more than 11,000 black and white adults who had no history of diabetes.

__

Limitations of monitoring glycemic control using only HbA1c:

HbA1c does not provide information about Glycemic Variability:

As an integrated measure of fasting, preprandial, and postprandial glucose levels, HbA1c may not completely represent the risks that patients with diabetes are exposed to on a daily basis. Although it provides a quantitative measure of mean glucose exposure over an extended period, HbA1c alone does not indicate the degree of glycemic variability (the frequency and magnitude of glucose excursions) that a patient may experience during a given day. The potential importance of glycemic variability is suggested by findings from the DCCT. Even at equivalent HbA1c levels, patients receiving intensive therapy (involving more frequent preprandial insulin injections) had a reduction in the risk of progression of retinopathy over time compared with patients receiving conventional treatment. The DCCT research group speculated that complications might be highly dependent on the extent of postprandial glycemic excursions and that conventionally treated patients were more likely to be exposed to greater glycemic excursions than those in the intensive treatment group.

_

The figure below shows that HbA1c misses glycemic excursions:

_

The figure below shows that HbA1c misses many lows of glucose:

_

HbA1C does not differentiate among Fasting, Preprandial, and Postprandial Glycemia:

Optimal diabetes management involves control of fasting, preprandial, and postprandial glucose levels. However, because HbA1c represents mean glycemic exposure over time, it cannot be used to identify whether a given patient’s abnormal glycemic levels are primarily due to high fasting plasma glucose levels or high postprandial plasma glucose (PPG) levels. Simply put, an elevated HbA1c measurement signals a need for a change in therapy, but it cannot indicate what type of change is necessary. In fact, the relative contributions of fasting plasma glucose and PPG to HbA1c vary according to HbA1c level, with PPG becoming increasingly important as HbA1c decreases toward target levels.

_

A considerable body of evidence points to the clinical importance of postprandial hyperglycemia, typically evaluated in clinical studies as postchallenge glucose levels in an oral glucose tolerance test. Although the terms postchallenge and postprandial are not synonymous because of the inherent differences between ingesting a pure glucose solution and eating a mixed meal, postchallenge hyperglycemia is often regarded as a surrogate for postprandial hyperglycemia. Even when HbA1c and fasting glucose levels are within the normal range, postchallenge hyperglycemia has been associated with a 2-fold increase in the risk of death from cardiovascular disease. In the Funagata Diabetes Study, impaired glucose tolerance (IGT), defined as a 2-hour postchallenge plasma glucose level between 140 and 198 mg/dL (7.8-11.0 mmol/L), as opposed to impaired fasting glucose (defined in that study as 110-125 mg/dL [6.1-7.0 mmol/L]) was shown to double the risk of death from cardiovascular disease.

_

The increased risk of macrovascular disease among patients with IGT but not impaired fasting glucose, combined with the greater contribution of postprandial hyperglycemia to overall glycemia when HbA1c levels are lower, may help to explain why even newly diagnosed patients have an increased cardiovascular risk. When IGT progresses to diabetes, the patient has been exposed to postprandial glycemic excursions for many years. In the Diabetes Intervention Study, elevated PPG levels after a normal breakfast were associated with significantly higher mortality during an 11-year follow-up of patients with newly diagnosed type 2 diabetes (P<.01).  In support of these findings, prospective interventions that control PPG have been shown to improve endothelial function and reduce carotid atherosclerosis in patients with type 2 diabetes. Postprandial hyperglycemia was recently linked to microvascular complications as well. In a study of 151 Japanese patients, PPG levels correlated better than HbA1c measurements with the risk of retinopathy progression.

_

Inaccuracies in HbA1c Test Results:

More than 30 different HbA1c assays are currently available. Differences among these assays, as well as variations between and within laboratories, can affect HbA1c results. In 1996, the National Glycohemoglobin Standardization Program (NGSP) was established in the United States to certify assay methods as traceable to DCCT reference values. Certification from the NGSP requires an assay method’s reference interval to be within 5% of the normal HbA1c level of 4% to 6%, with variations limited to less than 3% within a laboratory and less than 5% between laboratories. The ADA now recommends that laboratories use only NGSP-certified assays. Some medical conditions may cause inaccurate HbA1c test results. Conditions or factors that shorten red blood cell life span, such as acute blood loss, hemolytic anemia, and some medications used for human immunodeficiency virus-positive patients, will yield falsely low HbA1c values regardless of assay method. Hemoglobin variants, hemoglobinopathies, conditions that result in increased erythrocyte turnover, and blood transfusions can increase or decrease HbA1c levels depending on the condition and the HbA1c assay method used. Iron deficiency anemia, hypertriglyceridemia, hyperbilirubinemia, uremia, and high doses of acetylsalicylic acid can produce falsely high HbA1c measurements. Dietary supplements and opiate or alcohol abuse can also affect HbA1c results. Vitamins C and E may lower test results by inhibiting glycation of hemoglobin, although vitamin C has also been reported to increase HbA1c values, depending on the assay used.

_

_

Why is hemoglobin A1c unreliable?  

While this sounds good in theory, the reality is not so black and white. The main problem is that there is actually wide variation in how long red blood cells survive in different people. One study showed that red blood cells live longer than average in people with normal blood sugars: red blood cells in diabetics turned over in as few as 81 days, while in non-diabetics they lived as long as 146 days. This shows that the assumption that everyone’s red blood cells live for three months is false, and that hemoglobin A1c cannot be relied upon uncritically as a blood sugar marker. In a person with normal blood sugar, hemoglobin will be around for a lot longer, which means it will accumulate more sugar. This drives up the A1c test result, but it doesn’t mean that person had too much sugar in their blood; it just means their hemoglobin lived longer and thus accumulated more sugar. The result is that people with normal blood sugar often test with unexpectedly high A1c levels. So people with completely normal fasting and post-meal blood sugars may have A1c levels of >5.4%. In fact this is not abnormal, once we understand that people with normal blood sugar often have longer-lived red blood cells, which gives those cells time to accumulate more sugar. On the other hand, if someone is diabetic, their red blood cells live shorter lives than those of non-diabetics. This means diabetics and those with high blood sugar will test with falsely low A1c levels. And we already know that fasting blood glucose is the least sensitive marker for predicting future diabetes and heart disease. This is a serious problem, because fasting blood glucose and hemoglobin A1c are almost always the only tests doctors run to screen for diabetes and blood sugar issues. Another condition that affects hemoglobin A1c levels is anemia. People who are anemic have short-lived red blood cells, so, like diabetics, they will test with falsely low A1c levels.
In my practice, about 30-40% of my patients have some degree of anemia, so this is not an uncommon problem. 

_________

Using SMBG to Complement HbA1c:

Self-monitoring of blood glucose provides a real-time measure of blood glucose levels and consequently represents a valuable adjunct to the periodic determination of HbA1c values.  Accordingly, SMBG provides patients with instant feedback about the effects of food choices, exercise, stress, and medications on their glycemic levels. Although the optimal frequency and timing of SMBG depend on many variables including diabetes type, level of glycemic control, management strategy, and individual patient factors, SMBG allows clinicians to fine-tune therapy and thus more effectively manage their patients’ glucose levels. When used properly, most modern glucose meters demonstrate a high degree of clinical accuracy compared with laboratory instruments, and average SMBG readings generally correlate well with HbA1c values. Without regular self-testing to provide day-to-day insights, an A1c result can be misleading. Because it gives a long-term view, a person with frequent highs and lows could have an average A1c that looks quite healthy. The only way to get a complete picture of your blood sugar control is by reviewing your day-to-day self-checks along with your regular A1c tests, and working closely with your healthcare team to interpret the results.

_

SMBG can identify Hypoglycemic Episodes:

Fear of hypoglycemia often leads to a less intensive management approach by clinicians and patients, resulting in suboptimal glycemic control. Hypoglycemia is a concern to patients with type 1 diabetes and those with type 2 diabetes managed with insulin or oral agents. In a study using CGM in elderly patients with type 2 diabetes, no patients reported symptoms of hypoglycemia, yet 80% had glucose values lower than 50 mg/dL (2.8 mmol/L) on at least 1 occasion. Self-monitoring of blood glucose provides a means of identifying daily hypoglycemic events, allowing immediate treatment and modification of therapeutic regimens to allow tighter glycemic control while minimizing future hypoglycemic risk.

_

SMBG detects Glycemic Excursions:

Continuous glucose monitoring studies show that daily blood glucose values vary widely, extending into both hypoglycemic and hyperglycemic ranges. Although real-time CGM devices that measure glucose concentrations in interstitial fluid have recently entered the market, they are indicated as adjuncts to standard SMBG, and all therapy adjustments should be based on measurements obtained from a blood glucose meter. Both SMBG and CGM can identify glycemic excursions. In a study of 600 insulin-treated patients, regular SMBG (an average of 3 times per day for 3 months) revealed a wide range of daily glucose values. When mean minimal and maximal values were determined, blood glucose ranged from 40 to 449 mg/dL in patients with type 1 diabetes and from 63 to 382 mg/dL in patients with type 2 diabetes. The wide variation in glucose levels, particularly the dangerously low values observed, would not have been detected by HbA1c alone. For patients treated with insulin, an occasional SMBG reading in the middle of the night can help detect nocturnal hypoglycemia. It is important to recognize that postprandial glucose excursions may still be present in patients who have achieved their HbA1c target. A study of patients with type 2 diabetes who performed SMBG before and 2 hours after meals showed that many with HbA1c levels lower than 7.0% had postprandial glucose levels in excess of 160 mg/dL and glucose excursions of more than 40 mg/dL. In a cross-sectional analysis of the Third National Health and Nutrition Examination Survey cohort, postchallenge hyperglycemia was identified in 39% of patients with type 2 diabetes who were not using insulin and had an HbA1c level lower than 7.0%. Diabetes management software, with data uploaded from blood glucose meters, can be used to calculate the standard deviation (SD) of blood glucose values; analyzing SDs is perhaps the simplest method of quantifying the degree of glycemic fluctuation.
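
The SD analysis mentioned above is easy to reproduce outside dedicated diabetes management software. A minimal sketch with hypothetical meter readings (the values are illustrative only):

```python
from statistics import mean, stdev

def glycemic_variability(readings_mgdl):
    """Return the mean and sample standard deviation of SMBG readings (mg/dL).

    A large SD indicates wide glycemic fluctuation even when the mean
    (and hence HbA1c) looks acceptable.
    """
    return mean(readings_mgdl), stdev(readings_mgdl)

# Hypothetical readings, three per day over two days:
readings = [92, 180, 65, 110, 210, 75]
avg, sd = glycemic_variability(readings)
print(f"mean = {avg:.0f} mg/dL, SD = {sd:.0f} mg/dL")
```

Here the mean of 122 mg/dL looks unremarkable, while the SD of roughly 59 mg/dL exposes swings from the hypoglycemic to the markedly hyperglycemic range.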

_

Management of glycemia in diabetes is crucially important to the prevention of both acute and long-term complications. The two fundamental approaches to assessment, SMBG and HbA1c, provide fundamentally different but complementary information. Regular SMBG is to be encouraged, particularly in patients using insulin, although the frequency can vary widely depending particularly on the glycemic stability of the patient and the need to follow treatment changes. HbA1c, the criterion standard measure of chronic glycemic control and complication risk, should be measured every 3 to 6 months to assess the success of the treatment regimen. Changes in both approaches are ongoing, but with proper control of glycemia, diabetes can be successfully managed.

________

What blood sugar marker is most reliable?  FPG, PPG or A1c?

Testing accurately for blood sugar is like putting pieces of a puzzle together. Fasting blood glucose, A1c and post-meal blood sugar are all pieces of the puzzle. But post-meal blood glucose testing is by far the most reliable and accurate way to determine what’s happening with blood sugar, and the most sensitive way of predicting future diabetic complications and heart disease.

_

Why post-meal blood sugar is a better marker than fasting BG and A1c:

Fasting blood sugar:

According to continuous glucose monitoring studies of healthy people, a normal fasting blood sugar is 83 mg/dL or less, and many normal people have fasting blood sugar in the mid-to-high 70s. While most doctors will tell you that anything under 100 mg/dL is normal, it may not be. In one study, people with FBG levels above 95 mg/dL had more than three times the risk of developing future diabetes compared with people with FBG levels below 90 mg/dL. Another study showed progressively increasing risk of heart disease in men with FBG levels above 85 mg/dL, as compared with those with FBG levels of 81 mg/dL or lower. What is even more important to understand about FBG is that it is the least sensitive marker for predicting future diabetes and heart disease. Several studies show that a “normal” FBG level in the mid-90s predicts diabetes diagnosed a decade later. Far more important than a single fasting reading is the number of hours a day our blood sugar spends elevated above the level known to cause complications, which is roughly 140 mg/dL (7.8 mmol/L). One caveat here is that very low-carb diets will produce elevated fasting blood glucose levels. Why? Because low-carb diets induce insulin resistance. Restricting carbohydrates produces a natural drop in insulin levels, which in turn activates hormone-sensitive lipase. Fat tissue is then broken down, and non-esterified fatty acids are released into the bloodstream. These fatty acids are taken up by the muscles, which use them as fuel. And since the muscles’ fuel needs have been met, they decrease their sensitivity to insulin. So, if you eat a low-carb diet and have borderline-high FBG (i.e. 90–105 mg/dL), it may not be cause for concern. Your post-meal blood sugars and A1c levels are more important.

_

Hemoglobin A1c:

In spite of what the American Diabetes Association (ADA) tells us, a truly normal A1c is between 4.6% and 5.3%. But while A1c is a good way to measure blood sugar in large population studies, it is not as accurate for individuals. An A1c of 5.1% maps to an average blood sugar of about 100 mg/dL, but some people’s A1c results are always a little higher than their FBG and OGTT numbers would predict, and other people’s are always a little lower. This is probably because several factors can influence red blood cells (vide supra). A number of studies show that A1c levels below the diabetic range are associated with cardiovascular disease. One study showed that A1c levels lower than 5% had the lowest rates of cardiovascular disease (CVD) and that a 1% increase (to 6%) significantly increased CVD risk. Another study showed an even tighter correlation between A1c and CVD, indicating a linear increase in CVD as A1c rose above 4.6%, a level that corresponds to a fasting blood glucose of just 86 mg/dL. An earlier study showed that the risk of heart disease in people without diabetes doubles for every percentage-point increase above 4.6%. Studies also consistently show that A1c levels considered “normal” by the ADA fail to predict future diabetes. One study found that using the ADA criterion of an A1c of 6% as normal missed 70% of individuals with diabetes, 71–84% with dysglycemia, and 82–94% with pre-diabetes. How’s that for accuracy? What we have learned so far, then, is that the fasting blood glucose and A1c levels recommended by the ADA are not reliable cut-offs for predicting or preventing future diabetes and heart disease. This is problematic, to say the least, because A1c and FBG are the only glucose tests the vast majority of people get from their doctors.
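
The A1c-to-average-glucose mapping quoted above (5.1% to about 100 mg/dL) follows the widely used linear fit from the ADAG study, eAG = 28.7 × A1c − 46.7. A sketch:

```python
def estimated_average_glucose(a1c_percent):
    """Estimated average glucose (eAG, mg/dL) from HbA1c (%),
    using the ADAG study's linear regression: eAG = 28.7 * A1c - 46.7."""
    return 28.7 * a1c_percent - 46.7

print(round(estimated_average_glucose(5.1)))  # prints 100, matching the text
```

Note that this is a population-level regression; as the passage explains, an individual’s true average glucose can sit noticeably above or below the fitted line.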

_

OGTT / post-meal blood sugars:

A more realistic and convenient way to assess post-prandial glucose (PPG) than the conventional OGTT is simply to use a glucometer to test your blood sugar one and two hours after you eat a meal. This is called post-prandial (post-meal) blood sugar testing. The ADA considers a 2-hour OGTT value between 140 and 199 mg/dL to be pre-diabetic, and levels of 200 mg/dL and above to be diabetic. But once again, continuous glucose monitoring studies suggest that the ADA levels are far too high. Most people’s blood sugar drops below 120 mg/dL two hours after a meal, and many healthy people drop below 100 mg/dL or return to baseline. One study showed that even after a high-carb meal, normal people’s blood sugar rises to about 125 mg/dL only briefly, with the peak measured at 45 minutes after eating, and then drops back under 100 mg/dL by the two-hour mark. Another continuous glucose monitoring study confirmed these results: sensor glucose concentrations were between 71 and 120 mg/dL for 91% of the day, and sensor values were ≤60 or ≥140 mg/dL for only 0.2% and 0.4% of the day, respectively. On the other hand, some studies suggest that even healthy people with no known blood sugar problems can experience post-meal spikes above 140 mg/dL at one hour. If post-meal blood sugars do rise above 140 mg/dL and stay there for a significant period of time, the consequences are severe. Prolonged exposure to blood sugars above 140 mg/dL causes irreversible beta-cell loss (the beta cells produce insulin) and nerve damage. One in two “pre-diabetics” develops retinopathy, a serious diabetic complication. Cancer rates increase as post-meal blood sugars rise above 160 mg/dL. One study showed stroke risk increased by 25% for every 18 mg/dL (1 mmol/L) rise in post-meal blood sugar. Finally, 1-hour OGTT readings above 155 mg/dL correlate strongly with increased CVD risk.

_

Does targeting postprandial hyperglycemia improve overall glycemic control?

In a study of patients with type 2 diabetes with secondary failure of sulfonylurea therapy, Feinglos et al. showed that targeting postprandial hyperglycemia, using insulin lispro (Humalog) at mealtime in combination with a sulfonylurea, not only reduced 2-h postprandial glucose excursions but also reduced fasting glucose and lowered A1C from 9.0% to 7.1% (P < 0.0001). Subjects in the lispro group also benefited from significantly decreased total cholesterol levels and improved HDL cholesterol concentrations. Improvements in A1C were also reported in a study by Bastyr et al., which showed that therapy focused on lowering postprandial glucose, rather than fasting glucose, may be better for lowering glycated hemoglobin. Further, in a study of patients with gestational diabetes, De Veciana et al. demonstrated that targeting treatment to 1-h postprandial glucose levels rather than fasting glucose levels reduces glycated hemoglobin and improves neonatal outcomes. Regardless of whether postprandial glucose is a better predictor of A1C than fasting/preprandial glucose, most researchers agree that the best predictor of A1C is mean blood glucose, which is a composite of both fasting/preprandial and postprandial glucose. It is therefore reasonable to conclude that achieving near-normal postprandial glucose levels is essential to achieving overall glycemic control.

_

Is postprandial glucose control an independent contributor to diabetes outcomes?

Numerous epidemiological studies have shown elevated postprandial/post-challenge glucose to be an independent and significant risk factor for macrovascular complications and increased mortality. The Honolulu Heart Study found a strong correlation between post-challenge glucose levels and the incidence of cardiovascular mortality. The Diabetes Intervention Study, which followed newly diagnosed patients with type 2 diabetes, found moderate postprandial hyperglycemia to be more indicative of atherosclerosis than fasting glucose, and found postprandial but not fasting glucose to be an independent risk factor for cardiovascular mortality. The DECODE Study, which followed more than 25,000 subjects for a mean period of 7.3 years, showed that increased mortality risk was much more closely associated with 2-h post-glucose-load plasma levels than with fasting plasma glucose. Similarly, de Vegt et al. found that the degree of risk conferred by the 2-h postprandial glucose concentration was nearly twice that conferred by the A1C level. Further, recent studies have demonstrated that even moderate postprandial hyperglycemia (148-199 mg/dl) is not only more indicative of atherosclerosis than fasting glucose, but may also have direct adverse effects on the endothelium.

_

What is 4 point SMBG?

Fasting (before breakfast), before lunch, before dinner, and bedtime readings make up 4 point SMBG.

What is 7 point SMBG?

Fasting (before breakfast), after breakfast, before and after lunch, before and after dinner, and bedtime readings make up 7 point SMBG.

7 point SMBG takes post-meal blood glucose into consideration, and since post-meal blood glucose correlates well with vascular diabetic complications, 7 point SMBG not only improves glycemic control more than 4 point SMBG but may also yield a greater reduction in diabetic vascular complications.
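
The two schedules, and why the 7 point profile captures more of the day, can be written out explicitly. A small sketch with hypothetical readings; the mean of a profile is only a rough proxy for mean blood glucose, which, as noted above, is the best predictor of A1c:

```python
# The two SMBG schedules written out. Example readings are hypothetical.
# Note that every 4-point slot is a fasting/pre-meal or bedtime value,
# so post-meal peaks are invisible to the 4-point schedule.

FOUR_POINT = ["fasting", "before lunch", "before dinner", "bedtime"]
SEVEN_POINT = ["fasting", "after breakfast", "before lunch", "after lunch",
               "before dinner", "after dinner", "bedtime"]

def mean_glucose(profile):
    """Crude mean of a day's readings (mg/dL)."""
    return sum(profile.values()) / len(profile)

day = {"fasting": 92, "after breakfast": 138, "before lunch": 98,
       "after lunch": 144, "before dinner": 101, "after dinner": 151,
       "bedtime": 110}

seven = mean_glucose(day)
four = mean_glucose({k: day[k] for k in FOUR_POINT})
print(round(seven))  # includes the post-meal peaks
print(round(four))   # misses them, so it reads lower
```

With these made-up numbers the 7-point mean is noticeably higher than the 4-point mean, which is exactly the post-meal exposure the 4-point schedule fails to see.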

_

So in a nutshell, I would like to have all three: fasting blood glucose, A1c and post-meal glucose. But if I had to choose one, it would definitely be post-meal glucose.  

______

SMBG vs. CGM:

Glucose monitoring is an important component of type 1 diabetes treatment. Careful consideration of the advantages and disadvantages of SMBG and CGM can help providers identify the approach that best fits a patient’s lifestyle and treatment goals.

_

The advantages of SMBG are that it is relatively inexpensive, patients are easy to train, it provides an accurate measure of capillary glucose concentrations, and available glucose meters offer features such as memory, download software, no-coding strips, and small blood sample requirements. Disadvantages are the impact of user error on test accuracy, the need for multiple finger-stick blood samples each day, and the limited data available (e.g., SMBG provides a single snapshot of glucose concentration, not trending data). Efficacy studies support the use of SMBG in diabetes management and suggest that it is likely to remain the most common form of glucose monitoring practiced by patients today.

_

The main advantage of CGM is that it can provide a near-continuous read-out of interstitial glucose concentration, which adequately reflects blood glucose concentration and can help identify trends and patterns in glucose control with only a single needle stick to place the sensor. In addition, real-time CGM monitors can be programmed to alarm for either high or low glucose values, allowing patients (or the parents of young patients) to treat these abnormal values and potentially reducing fear of hypo- or hyperglycemia. Disadvantages include the cost of CGM, lack of universal insurance coverage for this technology, limited FDA approval for CGM devices, and cosmetic (e.g., an additional infusion site/monitor) and psychological concerns (e.g., frustration or helplessness if glucose control is not perceived as adequate). There is also limited evidence supporting the use of CGM in type 1 diabetes as a means of improving long-term glycemic control. One barrier to CGM use appears to be patients’ willingness to accept and use this technology for diabetes management, a problem that will likely need to be addressed before it is possible to adequately examine the efficacy of CGM use on glycemic control.

_______

Cost effectiveness of SMBG:

The worldwide epidemic of diabetes is producing unacceptable human suffering, which in turn produces economic losses from direct costs and lost production. Therapeutic endeavors must be directed at attenuating this effect. A cure is not on the horizon; the best tools available to doctors are those that reduce risks and delay or prevent disease progression. In type 2 patients, therapeutic approaches must be progressive, reflecting the gradual loss of β-cell function. SMBG is the single immediate, accurate measure available to the patient that allows therapy adjustment. With appropriate education, the patient and healthcare team can adjust therapy to approach glycemic goals. The value of testing, not simply the cost, must be appreciated by patients, doctors, and the healthcare system; prevention or delay of complications and improvement in daily symptoms and quality of life are priceless. The cost of home blood glucose monitoring is substantial, driven by the cost of the test strips. In 2006, the consumer cost of each glucose strip ranged from about $0.35 to $1.00; manufacturers often provide meters at no cost to induce use of the profitable test strips. Type 1 diabetics may test as often as 4 to 10 times a day because of the dynamics of insulin adjustment, whereas patients with type 2 diabetes typically test less frequently, especially when insulin is not part of treatment. As described earlier, frequent SMBG results in a statistically and clinically significant improvement in A1c, with reductions of up to 2.5-4.0%. To determine whether this reduction yields economic benefits, Neeser and colleagues performed a cost-effectiveness analysis of SMBG using a Markov state model of diabetes to assess the clinical impact and related costs when SMBG is provided to patients not on insulin therapy. They assumed an improvement in A1c of 0.39%.
The results of the analysis showed a slight increase in life expectancy and a reduced cost of complications, 70% of which was attributable to reductions in microvascular events. The cost per life-year gained was approximately $39,650, which is considered to be an acceptable cost-effective intervention from a health insurance perspective.

An analysis by Simon and colleagues assessed the cost-effectiveness of SMBG in type 2 subjects who participated in the DiGEM study. The average annual cost of intervention was £89 (€113; $179) for standardized usual care, £181 for less intensive self-monitoring, and £173 for more intensive self-monitoring, showing an additional cost per patient of £92 (95% confidence interval £80 to £103) in the less intensive group and £84 (£73 to £96) in the more intensive group. Given that there were no significant differences in clinical outcomes (change in HbA1c), the authors concluded that SMBG is unlikely to be cost-effective when additional to standardized usual care.

_

A 2008 BMJ study found that self-monitoring of blood glucose, with or without additional training in incorporating the results into self-care, was associated with higher costs and lower quality of life in patients with non-insulin-treated type 2 diabetes. In light of this, and with no clinically significant differences in other outcomes, self-monitoring of blood glucose is unlikely to be cost-effective as an addition to standardised usual care. A Canadian study in 2010 found that routine use of SMBG (one or more test strips per day) in patients with non-insulin-treated type 2 diabetes is associated with an incremental cost of $113,643 per QALY gained, relative to no SMBG; a reduction in the price of blood glucose test strips would improve the cost-effectiveness of SMBG. For patients with insulin-treated type 2 diabetes, SMBG testing frequencies beyond 21 test strips per week require unrealistically large estimates of effect on A1C to achieve favourable incremental cost per QALY estimates.

_

Self-monitoring of Blood Glucose in Type 2 Diabetes: Cost-effectiveness in the United States: 2008:

Compared with no SMBG, quality-adjusted life expectancy increased with SMBG frequency. Increases were 0.103 and 0.327 quality-adjusted life-years (QALYs) for SMBG at 1 and 3 times per day, respectively. Corresponding incremental cost-effectiveness ratios (ICERs) were $7856 and $6601 per QALY gained. Results indicate that SMBG at both 1 and 3 times per day in this cohort of patients with T2DM taking OADs would represent good value for money in the United States, with ICERs being most sensitive to the time horizon. Longer time horizons generally led to greater SMBG cost-effectiveness; the ICER for SMBG 3 times per day was $518 per QALY over a 10-year time horizon, indicating very good value.
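
Figures like "$7856 per QALY gained" are incremental cost-effectiveness ratios: the extra cost of an intervention divided by the extra quality-adjusted life-years it buys. A minimal sketch with made-up inputs, not the study's actual cost data:

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio:
    (extra cost) / (extra QALYs gained)."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical numbers: an intervention costing an extra $1,000 over a
# comparator while adding 0.5 QALYs works out to $2,000 per QALY gained.
print(icer(cost_new=11_000, cost_old=10_000,
           qaly_new=10.5, qaly_old=10.0))  # 2000.0
```

Whether a given ICER is "good value" then depends on the payer's willingness-to-pay threshold, which is why the same SMBG data can look cost-effective in one analysis and not in another.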

__

 Self-monitoring of blood glucose (SMBG) in patients with type 2 diabetes on oral anti-diabetes drugs: cost-effectiveness in France, Germany, Italy, and Spain: 2010:

With cost assumptions reflecting current reimbursement levels in France, Germany, Italy, and Spain, SMBG was found to be cost-effective across a 40-year time horizon, with all base-case ICERs <€16,000/QALY. This study adds to the literature on the country-specific, long-term value of SMBG for type 2 diabetes patients treated with OADs. Under current model assumptions, variations in cost-effectiveness results stemmed primarily from payer reimbursement practices for SMBG within each country.

________

Counterfeit SMBG: Bogus Diabetes Test Strips Traced to Chinese Distributor:

Batches of counterfeit test strips for some meters have been identified and shown to produce inaccurate results. A global hunt started by Johnson & Johnson has tracked to China some counterfeit versions of the test strips used by 10 million Americans to measure their blood sugar levels. Potentially dangerous copies of the OneTouch test strip sold by the company’s LifeScan unit surfaced in American and Canadian pharmacies last year. J&J, one of the world’s largest makers of consumer health products, learned of the bogus test strips from patients’ complaints. Tipped off by the company, the Food and Drug Administration issued a consumer alert without disclosing the link to China. No injuries were reported, but inaccurate test readings may lead a person with diabetes to inject the wrong amount of insulin, causing harm or death. The investigation found that a distributor in China was the source of about a million fake test strips that turned up in at least 35 states and eight countries. The trail, initiated by calls to a LifeScan hotline, led investigators to 700 pharmacies where the products were sold, then to eight American wholesalers, then to two importers; one importer, based in the United States, was found in a Las Vegas hotel room. Records seized from the importers showed that the counterfeit strips were bought from Henry Fu and his company, Halson Pharmaceuticals, based in Shanghai. Mr. Fu was arrested by Chinese authorities and remains in prison in China.

_________

Research and newer technology in SMBG:

_____

Dual-Analyte Detection:

Various clinical situations require the simultaneous monitoring of glucose and of other clinically important analytes, such as lactate or insulin. Such coupling of two sensing elements requires both analytes to be monitored independently, at very different concentration levels, and without cross-talk. Wang and Zhang developed a needle-type sensor for the simultaneous continuous monitoring of glucose and insulin. The integrated microsensor consisted of dual electrocatalytic (RuOx) and biocatalytic (GOx) modified carbon electrodes inserted into a needle, and it responded independently to nanomolar and millimolar concentrations of insulin and glucose, respectively.

_

__________

Less Invasive Technology to Monitor Blood Glucose Levels in Patients with Diabetes:

The current standard for self-monitoring blood glucose is a lancet device that pricks the finger or forearm. The droplet of blood is placed on a disposable strip that is then inserted into a glucose meter, or glucometer, giving a reading of blood glucose in milligrams per deciliter. High or low glucose levels can then be corrected using insulin or glucose, depending on whether the patient is hyper- or hypoglycemic, respectively. This gold standard of diabetes self-testing has been improved upon in recent years: many monitors require less blood and allow sites other than the fingertip and forearm to be tested. There have also been attempts to make standard glucometers less noticeable and easier to carry; one example is a cellular phone that doubles as a glucose monitor, something that could be discreetly taken anywhere. However, the invasiveness of the procedure persists, as there is still a requirement to puncture the skin to produce a blood droplet. The lancet approach is performed on average 3-4 times a day and can be uncomfortable and inconvenient. It is known that monitoring blood glucose more frequently leads to better control and better overall health, but the invasiveness of the finger-stick method causes some patients to ignore the need to monitor their blood glucose and leaves them at risk for future complications.

_

Non-invasive glucose monitoring:

Non-invasive glucose monitoring techniques can be grouped as subcutaneous, dermal, epidermal, and combined dermal and epidermal glucose measurements. Matrices under investigation other than blood include interstitial fluid, ocular fluids, and sweat. Test sites being explored include the fingertips, cuticle, finger web, forearm, and ear lobe. Subcutaneous measurements include microdialysis, wick extraction, and implanted electrochemical or competitive fluorescence sensors. Microdialysis is also an investigational dermal and epidermal glucose measurement technique. Epidermal measurements can be obtained via infrared spectroscopy as well. Combined dermal and epidermal fluid glucose measurements include fluid-extraction techniques (iontophoresis, skin suction and suction effusion) and optical techniques. The optical techniques include near-infrared spectroscopy, infrared spectroscopy, Raman spectroscopy, photoacoustic spectroscopy, and scatter and polarization changes.

 _

The Biosensor for Blood Glucose Concentration:

There are many methods available for glucose determination, the majority based on enzymatic reactions. In order of accuracy, the most common are directly measuring glucose in blood (invasive), measuring glucose in the interstitial fluid (minimally invasive), and estimating glucose using other body fluids such as oral mucosa fluid, aqueous humor of the eye, sweat, urine, saliva, and tears (noninvasive). The technologies employed may be polarimetry, electromagnetism, ultrasound, Raman spectroscopy, reverse iontophoresis, impedance spectroscopy, and so forth. Why noninvasive measurement is important is evident: the pain caused by finger pricking or invasive sensors is the main reason. It is very common for minimally invasive glucose sensors to cause irritation, infections, or even bruising. These sensors have to be renewed every 5 or 6 days and, at worst, may require recalibration at frequent intervals with a fingerstick meter. Noninvasive monitoring avoids all these disadvantages but is not as accurate as the invasive technologies. The ideal glucose sensor should be selective for glucose, with a fast, predictable response to changing glucose concentrations. It should depend on a reversible and reproducible signal to provide results, and sensor fabrication must be reproducible and cheap on a large scale. It should have a long operational lifetime under physiological conditions, but most of all it must be acceptable to the patient. Therefore, it should be noninvasive, should not require user calibration, and would ideally provide real-time continuous information about glucose.

_

Current continuous glucose monitoring systems have the advantage of direct insertion of electrochemical sensors into the IF space rather than transporting the sampled fluid outside the body to detect glucose concentrations. Software programs have been designed to accommodate the lag in IF glucose readings. Despite advances in making sensors with new and improved designs and materials, sensor insertion causes trauma to the insertion site. It can disrupt the tissue structure, provoking an inflammatory reaction that can consume glucose, followed by a repair process. The interaction of the sensor with the traumatized microenvironment necessitates a waiting period for the sensor signal to stabilize, and that period varies depending on the sensor type.

_

A variety of noninvasive blood glucose monitoring techniques are currently under evaluation, as seen in the table below; however, none of them is commercially available at this time. Noninvasive methods will permit real-time bedside glucose monitoring without the requirement of an indwelling intravenous catheter with a glucose sensor located at its tip. The availability of a device for rapidly assessing glucose concentration without skin puncture, by retinal or corneal glucose measurement or skin transillumination, would revolutionize monitoring of glucose at home and in the hospital. Products under development include the Fovioptics retinal glucose analyzer, Inlight Solutions’ NIR glucose sensor, NIR Diagnostics’ NIR glucose sensor, the Sensys Medical GTS, the Sontra Symphony ultrasonic diabetes management system, and Solianis Monitoring AG’s monitor, among others.

__

What is intriguing about these initiatives is that, in their final form, they may create a flow of useful diagnostic data reported to clinical laboratories in real time. This would create the opportunity for pathologists and lab scientists to consult with the patients’ physicians, while archiving this test result data in the laboratory information system (LIS). These glucose monitoring methods would also ensure that a complete longitudinal record of patient tests results is available to all the physicians practicing in an accountable care organization (ACO), medical home, or hospital.

________

Overview of Non-Invasive Optical Glucose Monitoring Techniques:

Non-invasive optical measurement of glucose is performed by focusing a beam of light onto the body; the light is modified by the tissue as it passes through the target area. The diffuse light that escapes the penetrated tissue carries an optical signature, or fingerprint, of the tissue content. The absorbance of light by the skin is due to its chemical components (i.e., water, hemoglobin, melanin, fat and glucose). The transmission of light at each wavelength is a function of the thickness, color and structure of the skin, bone, blood and other material through which the light passes. The glucose concentration can be determined by analyzing changes in the wavelength, polarization or intensity of the optical signal. The sample volume measured by these methods depends on the measurement site, and the correlation with blood glucose depends on what fraction of the sampled fluid is interstitial, intracellular or capillary blood. Drs. Roe and Smoller have devised the following example. The fluid viewed through the limb is 63% intracellular and 37% extracellular, of which 27% is interstitial and 10% plasma. A blood glucose value of 100 mg/dL is then equivalent to a tissue sample glucose average of 38 mg/dL, of which 26% is due to blood, 58% to interstitial fluid and 16% to intracellular fluid. What this tissue sample glucose means clinically with respect to therapy is still under investigation. The optical measurement depends not only on concentration changes in all the body compartments measured, but also on changes in the ratio of tissue fluids (as altered by activity level, diet or hormone fluctuations), which in turn affect the glucose measurement. Problems also occur due to changes in the tissue after the original calibration and the lack of transferability of calibration from one part of the body to another.
Such tissue changes include the source of the blood supply for the body fluid being measured, medications that affect the ratio of tissue fluids, day-to-day changes in the vasculature, the aging process, diseases and the person’s metabolic activity.
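
The Roe and Smoller example above is a weighted average across fluid compartments. A sketch reproducing the quoted figures; the interstitial and intracellular glucose concentrations used here are back-calculated assumptions, not values given in the text:

```python
# Weighted-average tissue glucose across fluid compartments, after the
# Roe and Smoller example. The plasma fraction and its 100 mg/dL glucose
# come from the text; the interstitial and intracellular glucose values
# are back-calculated assumptions chosen to reproduce the quoted split.

fractions = {"plasma": 0.10, "interstitial": 0.27, "intracellular": 0.63}
glucose   = {"plasma": 100.0, "interstitial": 81.6, "intracellular": 9.7}

tissue_avg = sum(fractions[c] * glucose[c] for c in fractions)
contributions = {c: fractions[c] * glucose[c] / tissue_avg for c in fractions}

print(round(tissue_avg))  # about 38 mg/dL, as in the text
print({c: round(v, 2) for c, v in contributions.items()})
```

Running this recovers the quoted breakdown: roughly 26% of the tissue signal from blood, 58% from interstitial fluid and 16% from intracellular fluid, which is why an optical "tissue glucose" is not the same number as a blood glucose.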

_

Dermal and Epidermal Fluid Glucose Measurement Techniques:

- Near Infrared Spectroscopy (NIR): Absorption or emission data in the 0.7 to 2.5 µm region of the spectrum are compared to known data for glucose.
- Raman Spectroscopy: Laser light is used to induce emission from transitions near the level excited.
- Photoacoustic Spectroscopy: Laser excitation of fluids is used to generate an acoustic response and a spectrum as the laser is tuned.
- Scatter Changes: The scattering of light can be used to indicate a change in the material being examined.
- Polarization Changes: The presence of glucose in a fluid is known to cause a polarization preference in the transmitted light.
- Mid-Infrared Spectroscopy: Absorption or emission data in the 2.5 to 25 µm region are examined and used to quantify glucose in a fluid.

The market introduction of noninvasive blood glucose measurement by spectroscopic methods in the near-infrared (NIR) range, using extracorporeal measuring devices, has so far failed because the devices measure glucose in body tissue rather than in the blood itself. To determine blood glucose, the measuring beam of infrared light, for example, has to penetrate the tissue down to the blood.

_

Optical Coherence Tomography:

Optical coherence tomography (OCT) technology is similar to that of pulsatile microcirculation, but it uses infrared light and penetrates deeper into biological tissues. Optical coherence tomography monitors a cylindrical layer of skin in 20 μm increments from the skin surface to the subcutaneous tissue. The sensor, trademarked as the GlucoLight Sentris 100 Optical Continuous Glucose Monitor, detects changes in the protein conformation of collagen and myosin that occur secondary to glucose concentration changes. In a feasibility trial, 33 patients had baseline blood glucose sampled via OCT and via capillary-derived glucose, with subsequent checks every 10-15 min for 2 hours after a 50-g carbohydrate load. The range of blood glucose detected by OCT was 98-442 mg/dL. Eighty-three percent of results were within zones A and B of a Clarke error grid analysis, with <1% of values in the clinically unacceptable zones C and D. Similarly, 83% of the readings were within 20% of reference values obtained with capillary testing. Overall, the results of the clinical trial showed that the GlucoLight was safe and effective and that its reported glucose readings correlate with capillary blood glucose results. However, the trial did not demonstrate the technology’s ability to accurately monitor blood glucose in the hypoglycemic range, and no readings ≤75 mg/dL were recorded.

_

At Israel’s Bar-Ilan University, a research team led by Zeev Zalevsky, Ph.D., has developed a non-invasive glucose measuring device that is worn like a wristwatch. The device consists of a laser that generates a wavefront of light to illuminate a patch of skin on the wrist near an artery, and a camera that measures changes over time in the light backscattered off the skin. Unlike other chemicals present in the blood, glucose exhibits a “Faraday effect”: in the presence of an external magnetic field generated by an attached magnet, the glucose molecule alters the polarization of the wavefront and thus influences the resulting speckle patterns. These changing patterns provide a direct measurement of the glucose concentration.

_

Pulsatile Microcirculation:

Measuring blood glucose using an optical signal rather than blood from the fingertip has recently been developed. In a 15-patient study, each patient placed a finger into a slot of the newly developed meter, the TangTest (TG), and waited 30 sec for results. The meter works by shining a weak light source onto the tested finger; variations in the intensity of the transmitted light are measured, and algorithms are used to quantify the amount of glucose present. Measurements were taken fasting, and 20 min and 40 min after a meal. The results showed a linear relationship between glucose measurements from the TG system and a standard glucometer, with a correlation coefficient of r = 0.81. After correcting for finger position and the pulsatile components of the TG signals, 100% of results fell within zones A and B of a Clarke error grid. Several factors might limit this technology’s commercial application. Since blood flow must be unimpeded during testing, the ambient temperature must be within a defined range; as diabetics often have circulatory problems, their test results might be affected. In addition, the body must be relaxed physically and psychologically during testing, so patients had to be warm and relaxed for 20 min before the measurement. This is especially challenging in real life, when a patient is hypoglycemic and a result is needed as soon as possible. To obtain optimal results the test finger also has to be perfectly positioned in the meter; an odd angle or improper finger position can affect the transmission of light, yielding inaccurate results. The TG meter also takes 30 sec to give a result, in contrast to about 5 sec for standard glucometers.

_

Pulse glucometry:

A new approach for noninvasive blood glucose measurement using instantaneous differential near-infrared spectrophotometry:

The authors describe a new optical method for noninvasive measurement of blood glucose level (BGL). Optical methods are confounded by the basal optical properties of tissues, especially water and other biochemical species, and by the very small glucose signal. They address these problems by using fast spectrophotometric analysis in a finger, deriving 100 transmittance spectra per second, to resolve optical spectra (900 to 1700 nm) of blood volume pulsations throughout the cardiac cycle. Difference spectra are calculated from the pulsatile signals, thereby eliminating the effects of bone, other tissues, and non-pulsatile blood. A partial least squares (PLS) model is then used with the measured spectral data to predict BGL. Using glucose tolerance tests in 27 healthy volunteers, periodic optical measurements were made simultaneously with the collection of blood samples for in vitro glucose analysis. Altogether, 603 paired data sets were obtained; two-thirds of the data sets (or of the subjects), randomly selected, were used for the PLS calibration model and the rest for prediction. Bland-Altman and error-grid analyses of the predicted and measured BGLs indicated clinically acceptable accuracy. The authors conclude that the new method, named pulse glucometry, has adequate performance for safe, noninvasive estimation of BGL.
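
The "difference spectra" step can be sketched directly: taking the log-ratio of transmitted light at two points in the cardiac cycle cancels absorbers that do not pulse (bone, skin, venous blood), leaving a spectrum dominated by the arterial blood. All transmittance values below are illustrative, not measured data:

```python
import math

# Pulse-glucometry idea in miniature: a difference absorbance spectrum
# computed from transmittance at diastole (less blood in the light path)
# and systole (more blood). Static absorbers cancel in the ratio.
# Wavelengths match the 900-1700 nm band mentioned in the text;
# transmittance values are made up for illustration.

wavelengths_nm = [900, 1100, 1300, 1500, 1700]
t_diastole = [0.080, 0.065, 0.050, 0.030, 0.020]  # more light transmitted
t_systole  = [0.076, 0.061, 0.046, 0.027, 0.018]  # extra arterial blood

# delta_A(lambda) = log10(T_diastole / T_systole): absorbance added by
# the pulsatile (arterial) blood alone.
delta_a = [math.log10(d / s) for d, s in zip(t_diastole, t_systole)]
print([round(a, 4) for a in delta_a])
```

In the actual method, spectra like `delta_a`, collected 100 times a second, are the inputs to the PLS regression that maps spectral shape to glucose concentration; a PLS fit itself is omitted here.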

_

Vital signs including blood glucose monitoring from arterial pulse:

UFIT, which uses a noninvasive, Web-enabled device that straps around a patient’s wrist, responds to the need for an easy-to-use self-monitoring system that reliably and simultaneously captures key data on heart and blood, including heart rate, blood pressure, blood oxygen and blood glucose. The system is intended to optimize the management of chronic diseases such as high blood pressure, heart disease and diabetes. The study’s findings were presented at the Institute of Electrical and Electronics Engineers (IEEE) International Workshop on Medical Measurements and Applications (MeMeA). This Biosign-sponsored study assumed that the arterial pulse, a rich source of clinically relevant information (e.g., rate, rhythm, pattern, pressure and oxygen), could also provide information on blood glucose. The study gathered glucose measurements from 120 participants with blood glucose levels ranging between 3.5 and 27.4 mmol/L. The results show a tight statistical correlation (Pearson r = 0.998) between UFIT and laboratory analysis of blood glucose, with a low (1.63 percent) mean percent difference between the UFIT measurements and the laboratory analysis. The correlation was obtained post hoc by comparing a feature extracted from the radial artery pulse with laboratory blood glucose data. The methodology resembles that used to correlate HbA1c with direct measurements of glucose in drawn blood.

 _

Pulse gluco-oxymeter: OrSense’s NBM-200G:

This new, noninvasive continuous blood glucose monitoring system, which has already been approved in Europe, measures oxygen saturation, hemoglobin, and blood glucose with very high sensitivity. NBM-200G utilizes occlusion spectroscopy technology that correlates blood glucose levels with light-absorption and scattering measurements. The device is operated by placing a ring-shaped probe around the patient’s finger, which applies gentle pressure to the finger, similar to that applied during noninvasive blood-pressure measurement, and temporarily occludes the blood flow. During the occlusion, optical elements in the sensor perform a sensitive measurement of the light transmitted through the finger. In a recent trial of the NBM-200G, 130,000 glucose-paired readings were taken from 450 patients to determine the accuracy of the device compared to invasive products. There was a strong correlation between measurements derived from the NBM-200G and those from invasive measurements. This method is painless as well as accurate in comparison to invasive devices. Therefore, it is likely to improve compliance in patients who tend to avoid fingersticks and are unable to control their diabetes with invasive products. According to OrSense, the NBM-200G is Conformité Européenne (CE) approved for noninvasive continuous monitoring in patients with a demanding need for glycemic control, such as those with brittle diabetes, nocturnal hypoglycemia, and gestational diabetes.

_

Azurite: Attempting to develop a Noninvasive Continuous Glucose Monitor using electrical properties of glucose:

Azurite’s approach is apparently quite original. Andrews and Zebrowski intend to measure blood glucose directly through an electromagnetic (EM) sensing system. Current continuous glucose monitors are invasive and rely on the measurement of some secondary characteristic. For example, Google’s contact lens, which is still in the research stage and not yet available on the market, measures the glucose content of tears, not blood. Continuous monitors currently on the market are invasive, requiring a device that attaches to the body with adhesive and a needle that must be replaced every several days. These monitors base blood glucose figures on glucose values in interstitial fluid, the liquid that surrounds our cells and tissues. Azurite’s idea is based on the fact that an electromagnetic signal, depending on its wavelength, can bounce off a surface and return to its source with a particular pattern reflective of the surface it encountered. Glucose molecules, like any material, reflect a unique electromagnetic signal based on their inherent electrical properties. So Azurite hopes to bounce an electromagnetic signal off the glucose in your blood, which would then return to a device carrying information about how much glucose it encountered along the journey. Various research groups have successfully ascertained blood glucose levels by observing the electrical properties of glucose in the blood. In an article published in 2011, researchers at the University of Mississippi demonstrated that a microstrip patch antenna could be used to determine the glucose concentration within a sample of blood by measuring its electrical properties. Drawing from this research and the work of other groups examining the electrical properties of glucose, Azurite has modeled a novel approach that they hope will lead to a device that uses EM technology to measure those electrical properties remotely.
Azurite is determined to move beyond the theoretical and make a direct impact on the lives of people with diabetes. Researchers are hopeful that this technology will lead to a product that combines the rich data of continuous sensing and the convenience and ease of a noninvasive meter.

___

Iontophoresis-Based Monitoring: GlucoWatch:

The GlucoWatch automatic glucose biographer from Cygnus (San Francisco, CA) has been introduced as a means of measuring blood glucose based on iontophoresis. This technology uses a small electric current to move substances across the skin into or out of deeper tissues and is frequently used for drug delivery. In the context of blood glucose monitoring, interstitial fluid is drawn to the skin surface and its glucose level is measured with an electrochemical enzymatic sensor worn on the skin. The wearable GlucoWatch device (available from Animas Technologies Inc.) contains both the extraction and the sensing functions along with the operating and data-storage circuitry. It provides up to three glucose readings per hour for up to 12 h (i.e., 36 readings within a 12 h period). The system has been shown to be capable of measuring the electroosmotically extracted glucose with a clinically acceptable level of accuracy. An alarm capability is included to alert the individual to very low or high glucose levels. However, the unit requires a long warm-up period and calibration against a fingerstick blood measurement, and is subject to difficulties caused by skin rash and irritation under the device, sweating, or changes in skin temperature. A similar device, called the RIGMD, has been developed in Korea. It uses the same technology as the GlucoWatch, but its enzymatic sensor measures glucose every 5 min instead of every 10 min.

_

The GlucoWatch didn’t quite live up to doctors’ and consumers’ expectations. Promises of a new, revolutionary way to continuously monitor blood sugar levels fell short. Some patients found the process very uncomfortable, even painful, and many reported skin irritation. A randomized study from researchers at University College London pointed out further shortcomings of the GlucoWatch device. Their results were published in the May 2009 edition of the journal Diabetic Medicine. Though only 6 percent were unable to tolerate wearing the device, participants noted inaccuracies in the readings of the GlucoWatch G2 Biographer. Another clinical study, from the Stanford School of Medicine, found that the GlucoWatch frequently triggered false alarms, erroneously telling users their blood sugar was too high. Of 20 alarms sounded, only 10 correctly reflected a too-high reading; the other 10 were false positives. GlucoWatch has now vanished from the diabetes care scene and its manufacturer has stopped any further development.

_

Exhaled Gases to Measure Blood Glucose:

It has previously been shown that people experiencing hyperglycemia exhale gases such as acetone and ethanol in different amounts than people who are normoglycemic. The concept of analyzing exhaled gases to measure glucose has been pursued by looking at methyl nitrate production. In one research study, 18 experiments were conducted among 10 children with Type 1 diabetes. Glucose from plasma and exhaled gases were monitored during euglycemia (normal glucose levels) and during the inducement of hyperglycemia and its correction. The study showed that methyl nitrate concentrations were around 11 ± 3 parts per trillion by volume (pptv) during euglycemia and increased to 27 ± 6 pptv during hyperglycemia. Methyl nitrate concentrations also normalized, returning to 15 ± 2 pptv after the correction of the hyperglycemic event. The study thus showed that methyl nitrate concentration correlates with blood glucose fluctuations. Although methyl nitrate was found to be the chemical most significantly associated with glucose fluctuation, more than 50 gases were detectable. This study shows the potential for the use of exhaled compounds as diagnostic markers for glycemic levels; however, it is far from ready for commercial use. The authors employed gas chromatography and mass selective detection to monitor the fluctuations in exhalation levels. This is not a method that is feasible for routine patient use in disease monitoring, and the technology must be harnessed into a device that could be marketed and easily used by diabetics in their daily lives. The study also had weaknesses: it only looked at children with Type 1 diabetes and only examined hyperglycemia. Finally, algorithms to relate the levels of exhaled gas to actual glucose levels in standard units are undeveloped. Further testing will need to be done on a larger randomized population of people across all different levels of blood glucose.
All negative aspects aside, the authors offer a promising avenue to pursue in the methods of noninvasive blood glucose testing.

_

Breathalyzer’s Nanosensor detects glucose in exhaled breath:

Glucose Breathalyzer uses Nano-films and Acetone-Sensitive Polymers:

Western New England University (WNE) researchers also announced another breathalyzer that uses nanotechnology to noninvasively detect blood-glucose levels in the breath of diabetics. The researchers unveiled this technology at the 2013 American Association of Pharmaceutical Scientists (AAPS) Annual Meeting and Exposition in San Antonio, Texas. Ronnie Priefer, Ph.D., a Professor of Medicinal Chemistry at WNE in Springfield, Massachusetts, created the multilayer technology using nanometer-thick films consisting of two polymers that react with acetone. The reaction crosslinks the polymers and alters the physicochemical nature of the film, providing quantification of acetone, and thus of glucose levels, noted the AAPS press release.

________

Glucose Sensing via the Eye:

An exciting potential avenue for less invasive glucose measurement involves ocular testing. One group created a wearable contact lens that has been analyzed in a clinical trial. The lens showed promise, as it measures increases in glucose levels by a colorimetric response to systemic glucose fluctuations. Increased glucose levels cause a fluorescent dye in the lens to emit light, and recordings were obtained with a hand-held photofluorometer. The color change of the lens was only slightly visible to the naked eye and thus would remain aesthetically acceptable to the wearer. A more recent study introduced a wearable amperometric glucose sensor to measure tear glucose levels. The sensor, connected to an external measurement system displaying results, was able to detect an increasing dose of glucose as it was manually administered to a rabbit. However, the biosensor showed a limited increase in glucose levels, from 0.16 to 0.46 mmol/L, compared with a commercial glucometer, which showed an increase from 3.7 to 7.6 mmol/L, a clear difference in sensitivity. In addition, there was a measurement delay in the sensor, on the order of tens of minutes, which is not desirable in situations of hypo- or hyperglycemia.

_

Google develops contact lens glucose monitor:

_

Google unveiled a contact lens that monitors glucose levels in tears, a potential reprieve for millions of diabetics who have to jab their fingers to draw their own blood as many as 10 times a day. The prototype, which Google says will take at least five years to reach consumers, is one of several medical devices being designed by companies to make glucose monitoring for diabetic patients more convenient and less invasive than the traditional finger pricks. The lenses use a minuscule glucose sensor and a wireless transmitter to help those among the world’s 382 million diabetics who need insulin keep a close watch on their blood sugar and adjust their dose. Held on an index finger, the device looks like a typical contact lens. On closer examination, sandwiched in the lens are two twinkling glitter-specks loaded with tens of thousands of miniaturized transistors, and it is ringed with a hair-thin antenna. The Google team built the wireless chips in clean rooms and used advanced engineering to fit integrated circuits and a glucose sensor into such a small space. Researchers also had to build in a system to pull energy from incoming radio-frequency waves to power the device enough to collect and transmit one glucose reading per second. The embedded electronics in the lens don’t obscure vision because they lie outside the eye’s pupil and iris. According to Google, the sensor can take about one reading per second, and the company is working on adding tiny LED lights to the lens to warn users when their glucose levels cross certain thresholds. The sensors are so small that they “look like bits of glitter.” Google says it is working with the FDA to turn these prototypes into real products and with experts to bring this technology to market. These partners, the company says, “will use our technology for a smart contact lens and develop apps that would make the measurements available to the wearer and their doctor.”

_

Non-invasive measurement of blood glucose using retinal imaging:

An apparatus carries out measurements of blood glucose in a repeatable, non-invasive manner by measurement of the rate of regeneration of retinal visual pigments, such as cone visual pigments. The rate of regeneration of visual pigments is dependent upon the blood glucose concentration, and by measuring the visual pigment regeneration rate, blood glucose concentration can be accurately determined. This apparatus exposes the retina to light of selected wavelengths in selected distributions and subsequently analyzes the reflection (as color or darkness) from a selected portion of the exposed region of the retina, preferably from the fovea.

_

EyeSense:

EyeSense is a noninvasive technology currently in development that measures blood glucose concentrations simply by placing a measurement device near the eye. This innovative technology utilizes a novel biochemical sensor that is inserted below the conjunctiva in a simple and painless procedure performed by an ophthalmologist on an annual basis. The technology would replace conventional fingersticking and would probably increase blood glucose monitoring compliance. The methodology hinges on a biochemical sensor embedded in a small hydrogel disk. The chemical in the disk reacts with glucose in the interstitial fluid below the conjunctiva of the eye and emits fluorescent light that is quantified by a photometer device. The photometer can be placed in front of the eye to obtain the blood glucose result in less than 20 seconds. The advantage of this noninvasive technology is that patients can measure their blood glucose as frequently as they want without having to lance their fingers. The implanted disk is invisible to the naked eye. Additionally, it is generally well tolerated and does not feel like a foreign body in the eye of the user. EyeSense is still in the advanced stages of development and its approval appears promising.

_ _______

Dario: Turning Your Smartphone into a Glucose Meter:

It’s an integrated unit about the size of a cigarette lighter that includes a basic adapter that connects into a smartphone’s audio jack. As soon as you connect the device, your smartphone switches to BG-monitoring mode. You then click open the self-contained lancing device that has disposable lancets inside and an integrated cartridge of 25 proprietary test strips, allowing you to poke your finger just like with any other meter. The reading you get is transmitted directly to the smartphone through an app that’ll be available for free for both iPhone and Android systems. The app will allow patients not only to immediately see and automatically upload BG results, but also to add food information into a database — along with easy access to carb estimations, an insulin calculator, and other features like data-sharing online. Not to mention the array of alerts and reminders that patients could program in at their choosing. Dario uses ultra-thin lancets, and you would buy the 25-strip cartridges with proprietary strips from the company (or from a supply provider who will eventually stock them). Dario is unique in that users will be able to analyze data directly in the phone app, send data to caregivers and doctors, and even have their data examined in clinical research and epidemiology studies about the distribution and patterns of diabetes management. The hope is to have Dario become compatible with electronic health records (EHRs) and other services that would offer interoperability with insulin pumps and CGMs (continuous glucose monitors), and possibly even Pharma interaction in the app in terms of learning about or ordering prescriptions if users so desire.

_

iBGStar:

As the “i” in iBGStar suggests, the glucose monitor is specifically made for the Apple iPhone or iPod touch. iBGStar is the first device that has been cleared by the FDA for use on an Apple device, and it is currently available in some European countries. The iBGStar uses its Diabetes Manager App for the iPhone to help users keep track of blood glucose levels on a daily basis, while the application allows patients to send selected data to their physicians to aid in monitoring their progress. The new monitor uses a novel patented technology called dynamic electrochemistry, which uses complex mathematical methods to calculate and adjust for interference that may be caused by changes in temperature, humidity, and hematocrit levels. The device sends out signals of different frequencies and voltages in order to compensate for interference that may cause inconsistent blood glucose readings. In patients who have abnormal hematocrit levels, which may be due to a disease state, a low hematocrit level may cause a meter to artificially overestimate actual blood glucose levels. This may pose a safety concern because it may lead the patient to use a higher insulin dose than required, possibly resulting in hypoglycemia and even hospitalization. Therefore, it is important to use a device that can measure blood glucose levels precisely under various conditions. In order to establish the accuracy of the iBGStar, a comparison study evaluated the BGStar, a device using the same dynamic electrochemistry method, against 12 other glucose monitoring systems from various manufacturers. The study specifically observed the consistency of blood glucose readings at varied hematocrit concentrations. The results showed that only four of the 13 devices — the BGStar, OneTouch Verio, Glucocard G+, and Contour — actually met the study criterion of less than 10% maximal mean percentage deviation (MMPD) from control glucose readings.
In addition, another study has supported the accuracy of the device by showing that the iBGStar has 99.5% accuracy. The iBGStar is an excellent device that will provide consistent blood glucose readings and is easier to use than conventional monitors due to portability and compatibility with smartphones, but it still requires the use of needles, which may hinder compliance to glucose monitoring for some patients.
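As a rough illustration of the acceptance criterion used in that comparison study, the sketch below computes a maximal mean percentage deviation (MMPD) from a control glucose value across hematocrit levels. The readings, control value, and hematocrit levels are invented, and the exact study protocol may differ; this only shows the arithmetic behind the "less than 10%" threshold.

```python
import numpy as np

# Hypothetical illustration of the MMPD criterion: repeated meter
# readings at several hematocrit levels are compared against a control
# glucose value; the mean percentage deviation is taken per hematocrit
# level, and the worst (maximal) mean is the MMPD. Numbers are invented.
control = 5.5  # mmol/L control glucose value (assumed)
readings_by_hct = {          # hematocrit (%) -> repeated meter readings
    20: [5.6, 5.7, 5.5],
    40: [5.4, 5.5, 5.6],
    60: [5.2, 5.3, 5.1],
}

mean_pct_dev = {
    hct: np.mean([abs(r - control) / control * 100 for r in readings])
    for hct, readings in readings_by_hct.items()
}
mmpd = max(mean_pct_dev.values())
meets_criterion = mmpd < 10  # the study's reported acceptance threshold
print(round(mmpd, 2), meets_criterion)
```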

_

Apple’s HealthKit:

HealthKit, which is still under development, is the center of a new healthcare system by Apple. Regulated medical devices, such as glucose monitors with accompanying iPhone apps, can send information to HealthKit. With a patient’s consent, Apple’s service gathers data from various health apps so that it can be viewed by doctors in one place. Stanford University Hospital doctors said they are working with Apple to let physicians track blood sugar levels for children with diabetes. In the first Stanford trial, young patients with Type 1 diabetes will be sent home with an iPod touch to monitor blood sugar levels between doctor’s visits. HealthKit makes a critical link between measuring devices, including those used at home by patients, and medical information services relied on by doctors, such as Epic Systems Corp, a partner already announced by Apple. Medical device makers are taking part in the Stanford and Duke trials. DexCom Inc, which makes blood sugar monitoring equipment, is in talks with Apple, Stanford, and the US Food and Drug Administration about integrating with HealthKit. DexCom’s device measures glucose levels through a tiny sensor inserted under the skin of the abdomen. That data is transmitted every five minutes to a hand-held receiver, which works with a blood glucose meter. The glucose measuring system then sends the information to DexCom’s mobile app, on an iPhone, for instance. Under the new system, HealthKit can scoop up the data from DexCom, as well as other app and device makers. Data can be uploaded from HealthKit into Epic’s “MyChart” application, where it can be viewed by clinicians in Epic’s electronic health record.

_

Glooko’s new device Bluetooth-enables popular glucose meters:

Glooko, which makes a cable that syncs popular glucose meters to a companion app on smartphones, has always said it plans eventually to replace that cable with a wireless Bluetooth connection. Recently, the company announced that it has finally released that product, the Bluetooth MeterSync Blue. The small box plugs into a patient’s glucose meter and sends the information wirelessly, via Bluetooth, to Android or Apple phones.

__

Telcare Glucose Meter:

Recent advances in cellular data communications technology have enabled the development of glucose meters that directly integrate cellular data transmission capability, enabling the user both to transmit glucose data to the medical caregiver and to receive direct guidance from the caregiver on the screen of the glucose meter. The first such device, from Telcare, Inc., was exhibited at the 2010 CTIA International Wireless Expo, where it won an E-Tech award. This device is currently undergoing clinical testing in the US and internationally. The Telcare Blood Glucose Meter (BGM) aims to modernize the glucose meter, with a color screen and cellular connectivity that automatically uploads your blood sugars to the cloud. While the Telcare device itself might be more on par stylistically with the BlackBerry generation, the Telcare BGM serves as an essential transitional step for glucose meters by adding cloud storage and analytics while retaining a familiarity for users of all ages and levels of tech-savviness. With its own cellular 3G antenna, the Telcare automatically uploads blood glucose recordings to its central cloud server using the Verizon network. From there, users can access their data through any web browser, or use the partner app Diabetes Pal for Android or iPhone. A key advantage of the Telcare is that the MyTelcare portal allows users to grant read-only access to others (family members, caregivers), and to grant full access to health care providers. The full access permitted for health care providers allows them to adjust feedback messages, change target ranges, and set a target number of readings per day.

_____

Non-invasive glucose meter (glucometer):

Noninvasive glucose measurement refers to the measurement of blood glucose levels (required by people with diabetes to prevent both chronic and acute complications of the disease) without drawing blood, puncturing the skin, or causing pain or trauma. The search for a successful technique began around 1975 and has continued to the present without a clinically or commercially viable product. A non-invasive glucose meter is a relatively new piece of technology that takes glucose measurements without any finger pricking or skin pricking. The thought of pricking one’s finger several times a day, or even once, makes many diabetics jittery, so a non-invasive glucose meter can be highly desirable. A 2012 study reviewed ten technologies: bioimpedance spectroscopy, electromagnetic sensing, fluorescence technology, mid-infrared spectroscopy, near-infrared spectroscopy, optical coherence tomography, optical polarimetry, Raman spectroscopy, reverse iontophoresis, and ultrasound technology, concluding with the observation that none of these had produced a commercially available, clinically reliable device and that, therefore, much work remained to be done. As of 2014, only two noninvasive glucose meters (GlucoTrack and OrSense) that have obtained CE mark approval are being marketed in a number of countries.

_

The GlucoTrack:

GlucoTrack uses ultrasonic, electromagnetic and thermal technologies to non-invasively measure glucose levels in the blood. In a perfect world, blood sugar testing would be quick and painless. The finger-prick, the blood and the coated strips can be messy, complicated to use and painful, and these issues can contribute to patient noncompliance. A goal of the medical device community has been to develop a blood glucose monitoring device that is noninvasive but still highly effective, thereby removing what are believed to be the two most significant barriers to frequent monitoring of blood glucose by diabetes patients: pain and cost. To meet this need, Integrity Applications, based in Ashkelon, Israel, has developed the GlucoTrack® model DF-F non-invasive blood glucose measurement device, which represents a key advance in this area. It is designed to help people with diabetes obtain blood glucose measurements without the pain, inconvenience, incremental cost and difficulty of conventional (invasive) spot finger-stick devices. The GlucoTrack device takes advantage of the natural physiology of the earlobe and uses an earlobe clip to deliver blood glucose readings in about a minute, thanks to a trio of technologies: ultrasonic, electromagnetic and thermal.

_

GlucoTrack is battery-operated and includes a Main Unit (MU), which contains display and control features as well as the transmitter, receiver and processor, and a Personal Ear Clip (PEC), which is clipped to the earlobe and contains sensors and calibration electronics. The device is small, light and easy to use and handle. The Main Unit can be shared by up to three users (in model DF-F), although each user requires his/her own (individually calibrated) PEC. The device includes a USB port for data downloading (enabling off-line analysis) as well as battery recharging. As a noninvasive device, GlucoTrack does not measure blood glucose levels directly; instead, it harnesses three independent technologies to measure physiological phenomena that correlate with the user’s glucose level. These measurements, which are transmitted from the PEC to the Main Unit, are subsequently analyzed using an algorithm that translates them into blood glucose readings. Significantly, GlucoTrack does not use optical technology, which, based on others’ experience, was found to be impractical for noninvasive glucose monitoring. GlucoTrack performs three independent measurements simultaneously, using thermal, ultrasound, and electromagnetic technologies. The results are weighted using a patented, unique algorithm to provide a reading, which is displayed on the device’s color touch screen in large, clear digits. The result is announced verbally as well, making the device easy for visually impaired users to use.

_

Why the earlobe and not another anatomical location?

The earlobe is a very convenient place on the body to measure one’s blood sugar levels, since doing so doesn’t interfere with one’s activities. From a physiological standpoint, there are also specific benefits to using the earlobe. For example, the earlobe contains a great number of capillary vessels, and blood within it flows relatively slowly. It also contains relatively little fat, few nerves, and no bones. All of these facts help to ensure a better reading. In addition, the earlobe is relatively stable in size in adults, which helps to keep the calibration valid for a relatively long period of time. The device also cuts down on costs for the user, as the Personal Ear Clip only needs to be replaced every six months.

_

Calibration:

Calibration must be performed prior to glucose measurements so that the influence of individual quasi-stable factors, such as tissue structure, can be minimized. The process consists of correlating invasive basal and postprandial blood glucose data, taken from finger capillary blood, with six sequential measurements made with the GlucoTrack instrument, generating a calibration curve that is exclusive to each individual. Six invasive pre- and post-prandial measurement pairs generate the individual calibration, as seen in the figure below. The first measurement pair is taken in the fasting state. The calibration procedure is easy, lasts about 1.5 hours and, more importantly, is valid for a month (a longer period is forecast in the future).
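The individual calibration step can be illustrated with a minimal sketch. GlucoTrack's actual algorithm is proprietary and weights three sensing technologies; the straight-line fit and all numbers below are assumptions made purely to show how six paired measurements can define a per-user calibration curve.

```python
import numpy as np

# Six paired (device raw signal, invasive finger-stick glucose)
# measurements, fit with a linear calibration curve. All values are
# invented for illustration; the real device uses a proprietary,
# multi-technology weighting algorithm rather than a simple line.
raw_signal = np.array([0.42, 0.55, 0.61, 0.70, 0.78, 0.90])  # device units
invasive_bg = np.array([4.8, 6.1, 6.9, 8.0, 8.9, 10.3])      # mmol/L

slope, intercept = np.polyfit(raw_signal, invasive_bg, 1)

def calibrated_glucose(signal):
    """Convert a raw device signal to a glucose estimate (mmol/L)."""
    return slope * signal + intercept

# A later non-invasive measurement is translated through the curve:
estimate = calibrated_glucose(0.65)
print(round(estimate, 1))
```

Because the fitted curve depends on these six pairs, each user gets their own slope and intercept, which is why the Personal Ear Clip is individually calibrated and must be recalibrated periodically.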

_

The figure below shows that a non-invasive glucometer saves money compared with an invasive glucometer:

 ______

Parents hack child’s glucometer:

To deal with childhood diabetes and keep on top of a disease that could turn deadly at a moment’s notice, parents have resorted to hacking medical devices. By creating solutions that help them manage their children’s disease, these innovative parents could push the medical device world in a new direction. Jason Adams, whose eight-year-old daughter has Type 1 diabetes, was concerned about monitoring her blood sugar at night. Without the ability to monitor her condition, he was forced to keep her home, which prevented her from attending sleepovers with friends. Jason’s daughter Ella uses a Dexcom Inc. glucose monitor, a device that takes blood sugar readings every five minutes, according to the WSJ. Unfortunately, however, the monitor has no provision for sharing data over a network. A little internet searching revealed to Jason a system called “NightScout,” remote-monitoring software developed by other parents of diabetic children. The developers of NightScout, who happen to be software engineers, were frustrated with the limited capabilities of current diabetes monitoring technology. According to the WSJ, the open-source software enables parents to hack the Dexcom glucose monitor and upload its information to the Internet. Two weeks after getting the software set up at home, Ella was able to attend her first sleepover. Other notable successes have occurred as well, according to the WSJ. Kristin Derichsweiler, a nurse and single mother of four, downloaded the software and started using it to help her 15-year-old son manage his diabetes. While at work, she noticed his blood sugar dropping to dangerously low levels. When he failed to answer the phone, she rushed home to find he had become unresponsive and needed juice to restore proper sugar levels. Despite the successes, there is justified concern from the FDA. Coming to rely on an untested technology could lead to a potentially deadly false sense of security.
Questions that are raised by the FDA related to NightScout center around how users may get support if they run into problems, and how to keep data confidential on the Internet. While questions are raised, the agency is making efforts to facilitate the new software and the parents who are using it.

_

The First Remote Mobile Communications Device Used for Continuous Glucose Monitoring (CGM):

Dexcom, Inc., a leader in continuous glucose monitoring (CGM) for patients with diabetes, announced recently that it has received U.S. Food and Drug Administration (FDA) approval for its CGM remote mobile communications device: Dexcom SHARE. Dexcom SHARE, an accessory to the Dexcom G4® PLATINUM Continuous Glucose Monitoring System, uses a secure wireless connection to transmit the glucose levels of a person with diabetes to the smartphones of up to five designated recipients, or “followers.” These followers can remotely monitor a patient’s glucose information and receive alert notifications from almost anywhere via their Apple® iPhone® or iPod® touch. With Dexcom SHARE, parents and personal caregivers can monitor a child’s or loved one’s glucose data from a remote location, giving them peace of mind and reassurance when they are apart. Now critical glucose data from the Dexcom G4® PLATINUM Continuous Glucose Monitoring System can be remotely monitored using a mobile device, so parents need not hack their child’s glucometer as discussed in the previous paragraph.

___________

___________

The moral of the story:

1. Diabetes mellitus (DM) is defined as a metabolic disorder characterized by hyperglycemia due to reduced insulin secretion and/or reduced insulin action and/or increased glucose production. Persistent hyperglycemia is a hallmark of diabetes, but transient hyperglycemia can occur as part of the stress response in acute illnesses, brought about by elevated levels of counter-regulatory hormones.

_

2. One in 10 adults has diabetes. Worldwide, 382 million people had diabetes in 2013. Every six seconds someone dies from diabetes.

_

3. About half of the diabetic population worldwide does not know that they have diabetes.

_

4. Diabetes is not merely a health issue but also a political issue, one which requires a whole-of-society approach. Type 2 diabetes, which many currently consider an epidemic, is increasing worldwide predominantly due to poor diet, sedentary lifestyle, new wealth and the fact that we are living longer. Eating fast food two or more times a week increases the risk of developing Type 2 diabetes by 27 percent.

_

5. Chronic hyperglycemia per se causes chronic diabetic complications by various mechanisms although there is a genetic susceptibility for developing particular complications. There is no way to check genetic susceptibility to diabetic complications.

_

6. Numerous studies have demonstrated that optimal management of glycemia along with other cardiovascular risk factors can reduce the risk of development and progression of both microvascular and macrovascular complications. It has been shown that microvascular complications, such as neuropathy, nephropathy, and retinopathy, are reduced by 40% for every percentage-point reduction in hemoglobin A1c (HbA1c or A1c) values.

_

7. Tight glucose control decreased the risk of progression of retinopathy, nephropathy, and neuropathy but increased the risk of hypoglycemia 2.4-fold. Intensive efforts to achieve blood sugar levels close to normal have been shown to triple the risk of the most severe form of hypoglycemia, in which the patient requires assistance from bystanders in order to treat the episode. Among intensively controlled type 1 diabetics, 55% of episodes of severe hypoglycemia occur during sleep, and 6% of all deaths in diabetics under the age of 40 are from nocturnal hypoglycemia.

_

8. Since hyperglycemia per se causes diabetic complications, since control of hyperglycemia reduces those complications, and since tight control of blood glucose invariably leads to hypoglycemia and its complications, diabetics are advised to check their blood glucose levels frequently to prevent and/or treat both hyperglycemia and hypoglycemia.

_

9. Researchers found that the red blood cells (RBCs), which carry hemoglobin, of diabetics turned over in as few as 81 days, while they lived as long as 146 days in non-diabetics. In a person with normal blood sugar, hemoglobin stays in circulation longer, which means it accumulates more sugar. This drives up the A1c test result, but it does not mean that person had too much sugar in their blood; it just means their hemoglobin lived longer and thus accumulated more sugar. So normal people with normal fasting plasma glucose (FPG) and postprandial plasma glucose (PPG) can have falsely elevated A1c levels. On the other hand, if someone is diabetic, their red blood cells live shorter lives than those of non-diabetics, which means diabetics will have falsely low A1c levels. So can we rely on A1c alone?
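The lifespan effect on A1c can be made concrete with a toy calculation. This is a deliberately simplified linear model; the glycation constant `K` and the 150 mg/dL mean glucose are illustrative assumptions, not clinical values, and only the ratio between the two scenarios matters:

```python
# Toy model (assumption, not a clinical formula): hemoglobin glycation
# accumulates roughly linearly with glucose exposure over a red cell's life,
# so measured A1c scales with the mean RBC age (~ lifespan / 2).
K = 0.0005  # hypothetical glycation rate per (mg/dL * day), illustrative only

def modeled_a1c(mean_glucose_mg_dl, rbc_lifespan_days):
    mean_cell_age = rbc_lifespan_days / 2  # average age across the RBC population
    return K * mean_glucose_mg_dl * mean_cell_age

# Same mean glucose (150 mg/dL), different RBC survival:
a1c_long = modeled_a1c(150, 146)    # long-lived RBCs (non-diabetic survival)
a1c_short = modeled_a1c(150, 81)    # short-lived RBCs (diabetic survival)
print(round(a1c_short / a1c_long, 2))   # 0.55 -> same glucose, ~45% lower A1c
```

With mean glucose held constant, shortening RBC survival from 146 to 81 days cuts the modeled A1c to about 55% of its former value, which is exactly the direction of bias described above.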

_

10. It is a fact that A1c is a measure of glycemic control, and the rise/fall of A1c correlates well with the rise/fall of chronic diabetic complications. Nonetheless, it does not provide information about day-to-day glucose levels, nor does it provide immediate feedback to patients about medication or lifestyle choices. The biggest limitation of A1c is that it misses wide glycemic excursions and many episodes of asymptomatic hypoglycemia. Frequent unrecognized hypoglycemia may lead to falsely low HbA1c levels. Postprandial hyperglycemia was identified in 39% of patients with type 2 diabetes who were not using insulin and had an HbA1c level lower than 7.0%. There is evidence to show that wide glycemic excursions lead to vascular complications despite a reasonable A1c. Self-monitoring of blood glucose (SMBG) complements A1c because it can distinguish among fasting, preprandial, and postprandial hyperglycemia; detect wide glycemic excursions; identify hypoglycemia; and provide immediate feedback to patients about the effect of food choices, activity, and medication on glycemic control.

_

11. SMBG is the measurement of glucose in blood (or plasma), either directly or indirectly through other body fluids, by patients themselves, their caregivers, or medical personnel, by any technique, without using a laboratory.

_

12. Nobody should use urine glucose testing as a surrogate for blood glucose testing. However, I have diagnosed diabetes in some patients on incidentally finding a positive urine glucose test when urine examination was done to evaluate fever, jaundice or hematuria, not diabetes. Also, a urine ketone test is very helpful for the patient to judge severity of illness at home when the SMBG level is above 240 mg/dL.

_

13. A blood sample must never be collected from the same arm or the same vein in which an IV drip of D5W/D5NS is infusing, because only 10% contamination with D5W/D5NS will elevate glucose in the sample by 500 mg/dL or more. However, when the blood sample is drawn from the arm opposite the one with the intravenous line, blood glucose will rise by about 0.38 mg/dL every minute in a 70 kg diabetic man with little or no insulin secretion, assuming 500 mL infused over 8 hours. If the same drip is given to a normal non-diabetic person, the slight increase in blood glucose will stimulate insulin secretion, so blood glucose will be reasonably maintained. The corollary is that if you have collected blood from the arm opposite the intravenous line, the drip rate is average (4 to 8 hours per pint), and you are getting a high blood glucose level, do not blame the IV drip (D5W or D5NS), as the patient may indeed be diabetic. This is very important because I have seen, in emergencies as well as routine hospital admissions, the IV drip started even before blood is collected for investigation, although SMBG by the nurse must be done before starting an IV drip. Also, diabetics on IV drips for various reasons ought to be monitored for glucose levels.
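The two figures quoted above can be verified with simple arithmetic. The sketch below assumes textbook approximations (D5W contains 5 g dextrose per 100 mL; glucose distributes in roughly 0.2 L/kg of body water), so it is an order-of-magnitude check, not a clinical calculation:

```python
# Back-of-envelope check of the IV-drip numbers in the text.
D5W_MG_PER_DL = 5000            # 5 g dextrose / 100 mL = 5000 mg/dL of glucose

# (a) In-vitro contamination: 10% of the sample volume being D5W
contamination = 0.10
added_mg_dl = contamination * D5W_MG_PER_DL
print(added_mg_dl)              # 500.0 -> sample glucose inflated by ~500 mg/dL

# (b) In-vivo infusion: 500 mL D5W over 8 h in a 70 kg person with no insulin
glucose_load_mg = 500 / 100 * 5 * 1000    # 25 g = 25,000 mg of dextrose
glucose_space_dl = 0.2 * 70 * 10          # ~14 L distribution volume, in dL
rise_per_min = glucose_load_mg / glucose_space_dl / (8 * 60)
print(round(rise_per_min, 2))   # 0.37 mg/dL per minute, matching the text's ~0.38
```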

_

14. Whole blood glucose is 15% lower than plasma glucose because plasma has a higher water content and consequently more dissolved glucose than whole blood. The conversion of concentration values from one system (or sample type) to another is subject to unpredictable errors. Several authors have already rejected the practice of converting glucose concentrations and have recommended that plasma be used for all glucose determinations. Blood glucose is defined as venous plasma glucose in the WHO criteria for diagnosing diabetes. Venous blood is usually employed for laboratory analysis and is preferable in diabetes testing. Laboratory plasma glucose measurement is far more accurate than the plasma-equivalent of whole blood capillary glucose measured by glucometer (SMBG), because laboratory testing virtually eliminates all variation except manufacturing variation. A standard lab glucose value is within about plus/minus 4% of a perfect reading, while SMBG is within plus/minus 15% of the lab test (current ISO standard). However, because of the widespread use of glucometers, fingerstick capillary whole blood glucose tests have also become a standard. Also, some glucometers can measure capillary plasma glucose directly by using a series of absorbent pads to separate the cellular portion of a sample from the plasma portion.
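The sample-type offset and the two accuracy bands quoted above can be sketched numerically. The 15% whole-blood/plasma offset and the ±4% (lab) versus ±15% (ISO meter) tolerances are the figures from the text; the helper names are my own:

```python
# Whole-blood-to-plasma conversion and accuracy bands, per the figures above.

def plasma_equivalent(whole_blood_mg_dl, offset=0.15):
    # Whole blood reads ~15% lower than plasma for the same sample
    return whole_blood_mg_dl / (1 - offset)

plasma = plasma_equivalent(100)
print(round(plasma, 1))              # 117.6 mg/dL plasma for a 100 mg/dL whole-blood read

def error_band(true_value, tolerance):
    # Range within which a reading may fall for a given tolerance
    lo, hi = true_value * (1 - tolerance), true_value * (1 + tolerance)
    return round(lo, 1), round(hi, 1)

print(error_band(117.6, 0.04))   # (112.9, 122.3) -> lab, +/-4%
print(error_band(117.6, 0.15))   # (100.0, 135.2) -> meter, +/-15% ISO band
```

The widths of the two bands make the text's point concrete: the permitted meter error spans roughly a 35 mg/dL window where the lab spans about 10 mg/dL.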

_

15. Plasma glucose is a biological variable and possesses intra-individual variability of 5.7% to 8.3% and inter-individual variability of up to 12.5%. An individual with a true FPG of 126 mg/dL can show FPG values from 112 to 140 mg/dL based on a coefficient of variation (CV) of 5.7%. The analytical variability is considerably less than the biological variability; even so, one-third of the time, the glucose results on a single patient sample measured in two different laboratories could differ by 14%.
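The 112–140 mg/dL range quoted above follows from the stated CV if one applies the conventional 95% interval of ±1.96 standard deviations (the 1.96 multiplier is my assumption about the statistic being used; the 126 mg/dL value and 5.7% CV are from the text):

```python
# 95% biological-variability interval around a "true" fasting plasma glucose.
fpg = 126.0      # true FPG, mg/dL (from the text)
cv = 0.057       # intra-individual coefficient of variation (from the text)

sd = fpg * cv    # one biological standard deviation, ~7.2 mg/dL
low, high = fpg - 1.96 * sd, fpg + 1.96 * sd
print(round(low), round(high))   # 112 140 -> the range stated in the text
```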

_

16. In the fasting state, the glucose concentrations in arterial, capillary (SMBG), and (forearm) venous blood are supposed to be almost indistinguishable. Fasting venous glucose is generally 2-5 mg/dL lower than fasting arterial blood glucose. Arterial blood glucose and capillary blood glucose (SMBG) have been shown to be almost identical in concentration both fasting and after meals. However, postprandial (after glucose load) venous glucose can be 7 to 35% lower than arterial glucose (equivalent to SMBG), because muscles remove more glucose from the blood than the liver in the presence of adequate insulin action. It has been shown that in the absence of insulin (in the de-pancreatized animal) the arteriovenous glucose difference is extremely small, and that injection of insulin increases this difference. The mean arteriovenous differences are largest in lean non-diabetic individuals and smallest in diabetic individuals. So in a non-diabetic individual, postprandial SMBG can overdiagnose diabetes because SMBG would be markedly higher than venous blood glucose. So if you were non-diabetic before and now want to know whether you have developed diabetes, please test fasting and postprandial blood (plasma) glucose in a laboratory. For established diabetes, SMBG is useful for monitoring fasting blood sugar (FBS) and postprandial blood sugar (PPBS). Lower postprandial venous blood glucose proves insulin action in non-diabetics as well as in type 2 diabetics (T2DM). Therefore, in T2DM, concurrent postprandial SMBG and lab venous blood glucose can show the arteriovenous blood glucose difference, and the higher the difference, the greater the residual insulin action. In other words, a large arteriovenous postprandial blood glucose difference in T2DM suggests that the pancreas is secreting some residual insulin and that the insulin is acting.
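The arteriovenous comparison in the last few sentences reduces to a one-line formula. A minimal sketch (the example readings of 180 and 140 mg/dL are invented for illustration; the 7–35% reference range is from the text):

```python
# Postprandial arteriovenous (capillary-minus-venous) glucose difference as a
# rough index of residual insulin action, per the reasoning above. The idea
# that a larger difference implies more insulin action comes from the text;
# any numeric interpretation threshold would be an assumption, so none is used.

def av_difference_pct(capillary_mg_dl, venous_mg_dl):
    """Percent by which venous glucose is lower than concurrent capillary glucose."""
    return (capillary_mg_dl - venous_mg_dl) / capillary_mg_dl * 100

# Example: postprandial SMBG 180 mg/dL with concurrent lab venous glucose 140 mg/dL
print(round(av_difference_pct(180, 140), 1))   # 22.2 -> within the 7-35% range quoted above
```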

_

17. I recommend the 2-hour postprandial plasma glucose (PPG), measured by a laboratory, as a screening test for T2DM. An increase in postprandial glucose concentration usually occurs before fasting glucose increases. Therefore, postprandial glucose is a sensitive indicator of the risk for developing diabetes and an early marker of impaired glucose homeostasis. Also, PPG is a better predictor of both all-cause mortality and cardiovascular mortality or morbidity than FPG. Even when HbA1c and fasting glucose levels are within the normal range, postprandial hyperglycemia has been associated with a 2-fold increase in the risk of death from cardiovascular disease. Elevated postprandial glucose is an independent and significant risk factor for macrovascular complications and increased mortality risk. Prospective interventions that control PPG have been shown to improve endothelial function and reduce carotid atherosclerosis in patients with type 2 diabetes. PPG levels correlated better than HbA1c measurements with the risk of retinopathy progression. We have to unlearn the notion that FPG and A1c levels are reliable cut-offs for predicting or preventing future diabetes and its complications. PPG outperforms FPG and A1c as the best marker for predicting future diabetes, preventing future diabetes, controlling present diabetes and preventing chronic diabetic complications. Many studies have shown that postprandial hyperglycemia beyond the 16th week of pregnancy is the main predictor of fetal macrosomia, and postprandial capillary blood glucose monitoring significantly reduced the incidence of preeclampsia. In order to avoid hypoglycemia and achieve target glucose levels, patients with diabetes who take mealtime insulin are advised to test SMBG before meals to adjust doses, based on meal size and content, anticipated activity levels, and glucose levels.
The biggest limitation of SMBG is that despite excellent A1c levels and target preprandial glucose levels, patients often experience nocturnal hypoglycemia and postprandial hyperglycemia that are not evident with routine SMBG. Since recent evidence suggests that postprandial hyperglycemia plays a particularly important role in the development of vascular complications of diabetes, it is imperative to monitor postprandial blood glucose by SMBG for better clinical outcomes.

_

18. Pain associated with finger lancing is one of the major barriers to SMBG. In a recent study, up to 35% of the participants stated that pain is the main reason people with diabetes refrain from regular blood glucose testing. Nowadays everybody uses computers, the internet and cell phones, all of which require fingertip use; painful fingertips can affect handling of all these devices. To reduce lancing pain, forearm testing is an acceptable alternative to finger-prick testing for blood glucose measurement, provided blood sugar is stable and not changing rapidly.

_

19. Do not share glucometer or fingerstick lancing devices. Sharing of this equipment could result in transmission of infection such as hepatitis B. However, the rate of infections from lancets is extremely low because the lancet goes into the subcutaneous space and is not being used intravenously, and the blood is flowing out of the body.

_

20. If the fingers are not soiled by sugar-containing products (biscuits, fruit, etc.), the first drop of blood can be used for SMBG after gently squeezing the finger.

_

21. Up to 16% of patients miscode their glucometers, which can lead to errors of −37% to +29% in clinical practice. Hence, no-coding (auto-coding) strips are recommended.

_

22. Factors affecting the accuracy of various glucometers include calibration of the meter, ambient temperature, pressure used to wipe off the strip (if applicable), size and quality of the blood sample, high blood levels of certain substances (such as ascorbic acid), hematocrit, dirt on the meter, humidity, and aging of test strips. However, despite multiple possible technical errors while using SMBG, most patients obtain clinically useful values. Various studies have examined the clinical accuracy of current glucose meters and concluded that the meters are sufficiently reliable for clinical decision making.

_

23. It has been amply demonstrated that up to a quarter of patients falsify the SMBG values they record in their log books in order to make their behavior ‘look good’.

_

24. For people with type 1 diabetes and type 2 diabetes taking daily insulin, SMBG is an essential component of daily diabetes management and it has been shown that testing 3 or more times a day was associated with a statistically and clinically significant 1.0% reduction in A1c levels. Furthermore, blood glucose measurements taken post-lunch, post-dinner and at bedtime have demonstrated the highest correlation to A1c. Two meta-analyses demonstrated that SMBG results in a statistically significant decrease in A1c of approximately 0.40% in patients with type 2 diabetes who are not taking insulin.  SMBG allows patients to adjust food intake, physical activity, or pharmacologic therapy in response to their blood-glucose readings and to assess whether their blood-glucose levels are under control and thereby reduce diabetic complications.

_

25. Even though SMBG is invasive and expensive, regular and frequent SMBG not only improves glycemic control in all diabetics, as reflected in A1c, but is also linked to better clinical outcomes, provided patients act on the measured values. In other words, merely doing SMBG without acting upon the results is futile. SMBG prescription is discouraged in the absence of relevant education and/or the ability to modify behaviour or therapy. Only 41% of people with diabetes have the ability to calculate an insulin dose based on carbohydrate intake and blood glucose levels. Also, glucometers are most accurate when used properly. Thus, educating patients on proper use and on what to do with the results is vital.

_

26. Lower rates of SMBG are correlated with having less than a high school education, having no health insurance coverage (poor people), taking no medication or oral medication only, making two or fewer doctor visits annually, and not having taken a diabetes-education course. 

_
27. Patients at greatest risk for diabetes complications (e.g. the elderly, minorities, those of low socioeconomic status, the obese, alcoholics and smokers) are least likely to self-monitor blood glucose.

_

28. Glucometers and test strips are an acceptable cost-effective intervention in all diabetics from a health insurance perspective.

_

29. Fingerstick blood glucose measurement by glucometer is affected by pH, partial pressure of oxygen, hematocrit, hypotension and tissue hypoperfusion, and noradrenaline (norepinephrine) infusion in the intensive care unit (ICU), potentially resulting in wrong therapy. Glucometers should not be used in the ICU; a blood gas analyzer can measure the blood glucose level of critically ill patients instead.

_

30. A continuous glucose monitor (CGM) has two distinct disadvantages compared with SMBG because it measures interstitial fluid glucose rather than blood glucose: continuous systems must be calibrated with SMBG and therefore do not yet fully replace fingerstick SMBG measurements, and glucose levels in interstitial fluid temporally lag behind blood glucose values by 5 to 20 minutes. CGM by subcutaneous sensor is classified as a minimally invasive technique. Detection of nocturnal hypoglycemia is the main advantage of CGM over SMBG. CGM technology is neither as reliable nor as accurate as initially anticipated and has limited evidence of effectiveness. The most important use of continuous glucose monitoring is to facilitate adjustments in intensive insulin therapy to improve control. The ultimate goal of CGM technology is to use it in combination with subcutaneous insulin pumps, in effect creating an external “artificial pancreas,” thereby providing better overall health and improved HbA1c results.

_

31. The two most significant barriers to frequent fingerstick SMBG, pain and cost, are overcome by non-invasive blood glucose meters such as GlucoTrack and Orsense NBM-200G.

_

32. The arterial pulse, which already provides clinically relevant information such as rate, rhythm, pressure and oxygen saturation, could also provide information on blood glucose through non-invasive measurement using the optical properties of glucose. This is pulse gluco-oximetry.

_

33. Technological advances have enabled glucometer connectivity to smartphones, so that blood glucose data are transmitted to the phone, users can analyze the data directly in a phone app, and the data can be sent to caregivers and doctors. Recent advances in cellular data communications have also enabled glucometers that directly integrate cellular data transmission, allowing the user both to transmit glucose data to the medical caregiver and to receive direct guidance from the caregiver on the screen of the glucometer. In a nutshell, glucometer data can be transmitted to a smartphone, a smartphone can function as a glucometer, and a glucometer can function as a smartphone. I would name such a device a Glucophone: a device that measures and sends blood glucose data to caregivers and doctors and receives guidance from them. I hope that the ADA and IDF accept this new terminology.

_

34. My logic on Diabetes Mellitus: 

Since it has been shown that microvascular diabetes complications are due to chronic hyperglycemia per se, I am surprised that most experts, including the ADA, IDF and WHO, recommend venous plasma glucose for the diagnosis of diabetes. All the tissues bearing the brunt of diabetic microvascular complications (e.g. kidneys, nerves, and retina) are perfused with high blood/plasma glucose from their arterial blood supply. Venous blood/plasma glucose does not enter any tissue except the liver, via the portal vein. Remember, it is the portal vein that brings glucose from food, along with insulin from the pancreas, to the liver for metabolism; both the glucose and the insulin enter the systemic circulation only after first-pass metabolism in the liver. However, the pancreas secretes insulin in response to arterial blood glucose, not venous blood glucose (including portal vein glucose). So it is arterial blood glucose that stimulates insulin secretion, and it is high arterial blood glucose that causes diabetic complications when insulin secretion and/or action is reduced. Then why rely on venous blood glucose for the diagnosis of diabetes? Since it is difficult to puncture an artery for every blood glucose test, and since arterial blood glucose is almost the same as capillary blood glucose, why not use capillary blood glucose for the diagnosis of diabetes mellitus? Of course, we would have to change the cutoff values compared with venous blood, but diagnosis would be more rational, more physiological, and correlate better with diabetic complications. Of course, there is a marked difference between capillary and venous blood glucose in a non-diabetic individual postprandially, but since capillary (surrogate arterial) blood glucose determines diabetic complications, the cutoff values for diagnosing diabetes, both fasting and postprandial, must be based on capillary blood glucose measurement (SMBG) rather than venous glucose.
I would also recommend HbA1c measurement from capillary blood rather than venous blood for the same reason. I hope that the ADA, IDF and WHO accept my logic on diabetes mellitus. In my view, capillary whole blood/plasma glucose, fasting and postprandial, by glucometer or laboratory method, must be used for the diagnosis of diabetes mellitus.

_____________

_____________

Dr. Rajiv Desai. MD.

November 1, 2014
_____________                                     

Postscript:

Many diabetics in developing countries are poor and cannot afford SMBG. Even among those who can afford it, 67% of patients do not check their blood glucose regularly, for reasons such as sore fingers, inconvenience, and fear of needles. Additionally, uneducated people are unlikely to learn the right way to use a glucometer and unlikely to act upon SMBG results. So SMBG is not a panacea for diabetes control.

Footnote:

When a normal non-diabetic individual goes for diabetes screening, postprandial SMBG can overdiagnose diabetes, as postprandial SMBG is far higher than concurrent venous blood glucose. So if you were non-diabetic before and now want to know whether you have developed diabetes, please test fasting and postprandial blood (plasma) glucose in a laboratory. However, according to my logic on diabetes, capillary blood glucose, not venous blood glucose, determines diabetic complications; so you should go for fasting and postprandial SMBG for the diagnosis of diabetes. Of course, you would need newer cutoff points.

__________

SELF MEASUREMENT OF BLOOD PRESSURE (SMBP):

________

The figure above shows the correct way to measure blood pressure at home.

_______

Prologue: 

When the heart beats, it generates a pressure in the arteries to pump blood around the body. In some people the pressure generated is too high, and this is called hypertension. Way back in 1981, Dr. R. C. Hansoti was head of the cardiology department in Nair Hospital, Mumbai, and he was taking a clinic on hypertension for a group of medical students, of which I was one. He asked everybody a question: what are the symptoms of hypertension? Some said headache, some said giddiness and some said palpitation. When my turn came, I said hypertension has no symptoms. Dr. Hansoti was satisfied with my answer. He said that there was only one wise doctor in the crowd. I felt elated; even today, I remember that incident. Most people aren’t aware that they have high blood pressure because there really are no symptoms. Death may be the first symptom of hypertension; that is why it has been dubbed the silent killer. Untreated hypertension increases the risk of heart disease and stroke, which are common causes of death worldwide. One in every three adults has high blood pressure. If you aren’t checking your blood pressure regularly, there’s no sure way to know if it’s within a healthy range. Often high blood pressure goes untreated until another medical condition arises or the individual goes in for a routine check-up. The only way to know that you have high blood pressure is to measure it clinically; there is no laboratory test or X-ray to detect hypertension. Approximately 100 years have passed since the legendary development by the Italian Riva Rocci of blood pressure measurement by an upper arm cuff with a mercury manometer, and since the first description of sound phenomena above the brachial artery by the Russian Korotkoff during upper arm compression. Blood pressure determination continues to be one of the most important measurements in all of clinical medicine and is still one of the most inaccurately performed.
For decades, doctors and nurses used to measure blood pressure. Today, I will discuss self measurement of blood pressure (SMBP) by people themselves at their home/workplace/shopping mall.

______

Abbreviations and synonyms:

HT = hypertension

BP = blood pressure

SP = Systolic pressure =SBP

DP = Diastolic pressure = DBP

PP = Pulse pressure

MP = Mean pressure

SMBP = Self measurement (monitoring) of blood pressure [by patient or relative]

OMBP = Office (clinic) measurement (monitoring) of blood pressure [by doctor or nurse]

AMBP = Ambulatory measurement (monitoring) of blood pressure [by doctor or patient] = ABPM (ambulatory BP monitoring)

SMBP is also called HBPM (home blood pressure monitoring) or HBP (home BP); but since self measurement of blood pressure can be done outside the home, I prefer SMBP over HBPM/HBP.

AOBP = automated office BP (BP taken in clinic with automated oscillometric validated device)
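Two of the derived quantities in the list above, PP and MP, follow directly from SP and DP. A small sketch of the relationships (the DP + PP/3 approximation for mean pressure is a standard textbook estimate for resting heart rates, not something defined in this list):

```python
# Pulse pressure and mean arterial pressure derived from SP and DP.

def pulse_pressure(sp, dp):
    # PP: the difference between systolic and diastolic pressure
    return sp - dp

def mean_pressure(sp, dp):
    # MP: conventionally approximated as DP + one third of pulse pressure,
    # because the heart spends roughly twice as long in diastole as in systole
    return dp + pulse_pressure(sp, dp) / 3

print(pulse_pressure(120, 80))        # 40 mmHg for a textbook 120/80 reading
print(round(mean_pressure(120, 80)))  # 93 mmHg
```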

______

Note:

Self measurement of blood pressure (SMBP) is performed by adults only; there is no self measurement of blood pressure by children. If a child indeed has high or low blood pressure, it ought to be measured by a doctor. Parents of a hypertensive child can measure the child’s blood pressure at home provided they are trained and have an appropriate cuff size. In this article, blood pressure measurement means blood pressure measured by adults for adults, and arm means upper arm.

_______

The value of blood pressure among lay public:

My 27 years of experience as a physician tell me that blood pressure is a highly overvalued physiological parameter among patients. Most lay people think that blood pressure is the cornerstone of health. Tying the cuff and watching the mercury go up and down makes them feel that the most vital parameter of their health is being investigated. The moment the doctor says that BP is normal, they feel elated, happy and satisfied. Whether a person has vertigo or terminal cancer, a normal blood pressure assures them of wellbeing and good health. We doctors know that this is not true. You may have normal blood pressure during a heart attack and die suddenly. On the other hand, your blood pressure may be elevated due to anxiety while you are absolutely healthy. Paradoxically, there are many people who have hypertension but have never had their BP measured because they have no symptoms. There are many people who know that they have hypertension but refuse treatment because they have no symptoms. And there are many people who are on treatment for hypertension whose BP was never controlled. So the lay public and BP have a love-hate relationship.

_______

Introduction to SMBP:

Self measurement of blood pressure was introduced in the 1930s. A recent UK primary care survey showed that 31% of people self-measure blood pressure, and of those, 60% self-measure at least monthly. In the USA, the use of self-BP monitoring is growing rapidly: Gallup polls suggest that the proportion of patients who report that they monitor their BP at home increased from 38% in 2000 to 55% in 2005. Because blood pressure monitors are now readily available and cheap (as little as £10; €11.8; $15), self monitoring is likely to increase; in the United States and Europe up to two thirds of people with hypertension do self-monitor. Home blood pressure monitoring is becoming increasingly important in the diagnosis and management of arterial hypertension. The rapid diffusion of this technique has been favoured by a number of factors, including technical progress and wider availability of SMBP devices, increasing awareness of the importance of regular BP monitoring, and recognition of the usefulness of SMBP by international hypertension management guidelines. Each person has roughly 100,000 single blood pressure values per day; that is why only regular measurements, taken at the same time of day over a longer period, enable a useful evaluation of blood pressure values. Approximately one in three American adults has high blood pressure, and nearly a third of adults with hypertension do not have their blood pressure under control. There is now a growing body of data showing that strategies in which anti-hypertensive therapy is titrated remotely by patients, as well as clinicians, using home blood pressure monitoring can be effective. As a result, connected blood pressure monitors could potentially have a meaningful impact on health outcomes.

_

The gold standard for clinical blood pressure measurement has always been readings taken by a trained health care provider using a mercury sphygmomanometer and the Korotkoff sound technique, but there is increasing evidence that this procedure may lead to the misclassification of large numbers of individuals as hypertensive and also to a failure to diagnose blood pressure that may be normal in the clinic setting but elevated at other times in some individuals. There are 3 main reasons for this: (1) inaccuracies in the methods, some of which are avoidable; (2) the inherent variability of blood pressure; and (3) the tendency for blood pressure to increase in the presence of a physician (the so-called white coat effect).

_

Numerous surveys have shown that physicians and other health care providers rarely follow established guidelines for blood pressure measurement; however, when they do, the readings correlate much more closely with more objective measures of blood pressure than the usual clinic readings. It is generally agreed that conventional clinic readings, when made correctly, are a surrogate marker for a patient’s true blood pressure, which is conceived as the average level over prolonged periods of time, and which is thought to be the most important component of blood pressure in determining its adverse effects. Usual clinic readings give a very poor estimate of this, not only because of poor technique but also because they typically only consist of 1 or 2 individual measurements, and the beat-to-beat blood pressure variability is such that a small number of readings can only give a crude estimate of the average level.

_

There is little point nowadays in simply classifying people as “hypertensive” or “non-hypertensive” purely on the basis of one blood pressure measurement – no matter by what means or how confidently it may have been made. For some applications (for example, in monitoring or researching the effect of antihypertensive medication on blood pressure) it is important to be confident about baselines and the changes that may occur with medication. For other applications such as assessing cardiovascular risk, additional factors are at least as important as the blood pressure measurement and choice of the means by which blood pressure is measured may be less critical.

_

There are potentially 3 measures of blood pressure that could contribute to the adverse effects of hypertension. The first is the average level, the second is the diurnal variation, and the third is the short-term variability. At the present time, the measure of blood pressure that is most clearly related to morbid events is the average level, although there is also evidence accumulating that suggests that hypertensive patients whose pressure remains high at night (nondippers) are at greater risk for cardiovascular morbidity than dippers. Less information is available for defining the clinical significance of blood pressure variability, although it has been suggested that it is a risk factor for cardiovascular morbidity.

_

The recognition of these limitations of the traditional clinic readings has led to two parallel developments: first, increasing use of measurements made out of the clinic, which avoids the unrepresentative nature of the clinic setting and also allows for increased numbers of readings to be taken; and second, the increased use of automated devices, which are being used both in and out of the office setting. This decreased reliance on traditional readings has been accelerated by the fact that mercury is being banned in many countries, although there is still uncertainty regarding what will replace it. The leading contenders are aneroid and oscillometric devices, both of which are being used with increasing frequency but have not been accepted as being as accurate as mercury.

_

High blood pressure is one of the most readily preventable causes of stroke and other cardiovascular complications. It can be easily detected, and most cases have no underlying detectable cause; the most effective way to reduce the associated risk is to reduce the blood pressure. Unlike many other common, chronic conditions, we have very effective ways of treating high blood pressure and clear evidence of the benefits of such interventions. However, despite a great deal of time and effort, hypertension is still underdiagnosed and undertreated. Furthermore, losses to follow-up are high and are responsible for avoidable vascular deaths. Blood pressure is usually measured and monitored in the healthcare system by doctors or nurses in hospital outpatient departments and, increasingly, in primary care settings. New electronic devices have been introduced and validated in the clinical setting to replace the mercury sphygmomanometer and to overcome the large variations in measurement due to variability between observers. Ambulatory blood pressure monitoring is also being used more often to assess individuals’ blood pressures outside the clinical setting. Measuring blood pressure at home is becoming increasingly popular with both doctors and patients, and some national and international guidelines recommend home monitoring in certain circumstances.

_

Hypertension is elevated blood pressure (BP) above 140 mm Hg systolic and 90 mm Hg diastolic when measured under standardized conditions. Hypertension can be a chronic medical condition in its own right, estimated to affect a quarter of the world’s adult population, as well as a risk factor in other chronic and nonchronic patient groups. Traditional high-risk patient groups include diabetics, pregnant women with gestational diabetes or preeclampsia, and kidney disease patients. For chronic hypertensive patients, persistent hypertension is one of the key risk factors for strokes, heart attacks, heart and kidney failure, other heart and circulatory diseases, and increased mortality. Preeclampsia is one of the leading causes of maternal and fetal death, and for gestational diabetes and preeclampsia patients the accurate measurement of BP during pregnancy is one of the most important aspects of prenatal care. For kidney disease patients and diabetics, blood pressure should be kept below 130 mm Hg systolic and 80 mm Hg diastolic to protect the kidneys from BP-induced damage. As there are usually no symptoms, frequent blood pressure checks are highly relevant for these high-risk groups. The level of the blood pressure is the main factor in the decision to start antihypertensive therapy and other interventions; it is thus vital that measurements are obtained in a reliable manner. Measurements can be performed either at the clinic or in the home setting. In the clinical setting, patients often exhibit elevated blood pressure, believed to be due to the anxiety some people experience during a visit to the clinic. This is known as the white coat effect and is reported to affect between 20% and 40% of all patients visiting a clinic. As a consequence, current international guidelines on BP measurement recommend following up measurements obtained in the clinic with SMBP to negate the white coat effect.

________

History of BP measurement:

In the early 1700s the English clergyman Stephen Hales demonstrated that blood was under pressure by inserting a tube into a horse’s artery and connecting it to a vertical glass tube. He observed the blood rising in the tube and concluded that it had pressure. It was not until 1847 that a human blood pressure was recorded, again by a catheter inserted directly into an artery. The blood would rise in the tube until the weight of the column of blood equaled the pressure of the blood. Unfortunately, this required a tube 5 or 6 feet tall and, to demonstrate hypertension, even 12 or 13 feet. Neither the invasive technique nor the huge column was practical. In 1881 Ritter von Basch developed a device to encircle the arm with pressure sufficient to obliterate the pulse in the artery beyond the cuff. Connected to a manometer (a pressure-measuring device), one could read how much pressure was required to shut off the pulse, and intra-arterial measurement confirmed the accuracy. This method read only the systolic pressure. In 1896 the Italian physician Scipione Riva-Rocci developed the prototype of the mercury sphygmomanometer used to this day. He reasoned that the very tall column could be greatly shortened if a heavy liquid were used. Fortunately, mercury (Hg) was available: a silvery liquid 13.6 times as heavy as water, it could shorten the column to less than a foot. He therefore connected the cuff wrapped around the arm to a glass column of mercury that showed the pressure in the cuff. The observer could then read how many millimeters of mercury were required to shut off the pulse below the cuff. The use of mercury is still the gold standard today, and millimeters of mercury (mm Hg) are still the units of pressure measurement regardless of the type of apparatus used. A column of mercury of a specific height is a certain pressure no matter how you look at it.
This design was brought to the United States by the neurosurgeon Harvey Cushing, who was traveling through Italy at the time. Nikolai Korotkoff made the final real advance in 1905, when he observed and described the sounds made by blood flowing through the partially compressed artery beneath the cuff as it was deflated. This required a stethoscope to listen, but it was the first method that allowed the diastolic pressure to be measured as well, and the measurement of both systolic and diastolic pressures was more accurate and reliable than with previous methods. It is difficult to realize, but we only began to take blood pressures about one hundred years ago. Thus the blood pressure unit of measurement today is still millimeters of mercury (mm Hg), and the sounds we observe when taking a blood pressure are still called the Korotkoff sounds. The operator needs only to deflate the cuff and observe at what pressure the Korotkoff sounds start and at what pressure they stop; these are the systolic and the diastolic pressures and are written, for example, as 120/80 or 120 over 80. Since Riva-Rocci invented indirect brachial cuff sphygmomanometry in 1896 and Korotkoff proposed the auscultatory method in 1905, the method for blood pressure (BP) measurement has remained essentially unchanged for the past 100 years. In 1969, Posey et al. identified mean BP on the basis of the cuff-oscillometric method. With subsequent theoretical and technical improvements, a newer method to determine systolic and diastolic BP was introduced to the cuff-oscillometric method. As a result, many of the automatic electronic sphygmomanometers available today have adopted this method, and techniques different from the auscultatory method have come into use in general clinical practice. Since the advent of indirect methods for sphygmomanometry, the past century has developed the practical and clinical sciences of hypertension.
However, the BP information necessary for the diagnosis and treatment of hypertension is still obtained essentially from casual measurements at the outpatient clinic (clinic BP). The reliability of clinic BP was called into question 40 years after the advent of indirect sphygmomanometry: in 1940, Ayman and Goldshine introduced the concept of self-BP measurement into clinical practice and demonstrated discrepancies between clinic BP and self-measured BP. Bevan, in the United Kingdom, first reported the results of ambulatory BP monitoring using a direct arterial BP measurement method in 1969, and showed that human BP changes markedly with time. The quantity and quality of BP information vary greatly according to the method used, and the problem of interpreting clinic BP, which is obtained specifically in a medical environment, has been an issue in the clinical practice of hypertension for the past 50 years.

__________

Prevalence, harms and awareness of hypertension:

_

According to the National Health And Nutrition Examination Survey (NHANES), at least 65 million adult Americans, or nearly one-third of the US adult population, have hypertension, defined as a systolic blood pressure ≥140 mm Hg, diastolic blood pressure ≥90 mm Hg, and/or current use of antihypertensive medication.  Another one-quarter of US adults have blood pressure in the “pre-hypertension” range, a systolic blood pressure of 120 to 139 mm Hg or diastolic blood pressure of 80 to 89 mm Hg, i.e., a level above normal yet below the hypertensive range. The prevalence of hypertension rises progressively with age, such that more than half of all Americans aged 65 years or older have hypertension.

_

The figure above shows the prevalence of hypertension among the adult population worldwide. It is estimated that one out of three adults has hypertension. Nearly 1 billion adults (more than a quarter of the world’s population) had hypertension in 2000, a prevalence rate of 26.4 percent, and this is predicted to increase to 1.56 billion by 2025, a prevalence rate of 29.2 percent. The prevalence rates in India are now almost comparable to those in the USA. While mean blood pressure has decreased in nearly all high-income countries, it has been stable or increasing in most African countries. Today, mean blood pressure remains very high in many African and some European countries. The prevalence of raised blood pressure in 2008 was highest in the WHO African Region at 36.8% (34.0–39.7).

__

Blood pressure levels, the rate of age-related increases in blood pressure, and the prevalence of hypertension vary among countries and among subpopulations within a country. Hypertension is present in all populations except for a small number of individuals living in primitive, culturally isolated societies. In industrialized societies, blood pressure increases steadily during the first two decades of life. In children and adolescents, blood pressure is associated with growth and maturation. Blood pressure “tracks” over time in children and between adolescence and young adulthood. Both environmental and genetic factors may contribute to regional and racial variations in blood pressure and hypertension prevalence. Studies of societies undergoing “acculturation” and studies of migrants from a less to a more urbanized setting indicate a profound environmental contribution to blood pressure. Obesity and weight gain are strong, independent risk factors for hypertension. It has been estimated that 60% of hypertensives are >20% overweight. Among populations, hypertension prevalence is related to dietary NaCl (salt) intake, and the age-related increase in blood pressure may be augmented by a high NaCl intake. Low dietary intakes of calcium and potassium also may contribute to the risk of hypertension. The urine sodium-to-potassium ratio is a stronger correlate of blood pressure than is either sodium or potassium alone. Alcohol consumption, psychosocial stress, and low levels of physical activity also may contribute to hypertension. Adoption, twin, and family studies document a significant heritable component to blood pressure levels and hypertension. Family studies controlling for a common environment indicate that blood pressure heritabilities are in the range 15–35%. In twin studies, heritability estimates of blood pressure are ~60% for males and 30–40% for females. 
High blood pressure before age 55 occurs 3.8 times more frequently among persons with a positive family history of hypertension. Despite improvements in the quality of health care and life expectancy, it is expected that the prevalence of hypertension will continue to rise worldwide.  

_

Hypertension awareness:

_

From the above table, one can say that one third of the adult population in the U.S. has HT. Of all hypertensives, one third are unaware that they have HT. Of all hypertensives taking treatment, only one third are controlled.

_

40% of Adult Population Worldwide Has Hypertension; 54% of Them Unaware of Hypertension:

Hypertension is truly a global epidemic, being highly prevalent in all communities worldwide, according to new data from the Prospective Urban Rural Epidemiology (PURE) study. Other findings show that awareness is very low and that once patients are aware, most are treated, but control is very poor. The prevalence of hypertension was lowest in lowest-income countries (around 30%) and highest in upper-middle-income economies (around 50%), with high-income and low-middle-income economies having an intermediate level (around 40%). Only 30% of the population had optimal blood pressure, with another 30% found to be in the pre-hypertension range. Of the 40% with hypertension, 46% of these individuals were aware of their condition, 40% were treated, but only 13% were controlled.

_

Risk and harm of hypertension:

_

The figure below shows that hypertension is the number one risk factor for death worldwide. Blood pressure is a powerful, consistent, and independent risk factor for cardiovascular disease and renal disease.

_

As per the World Health Statistics 2012, of the estimated 57 million global deaths in 2008, 36 million (63%) were due to noncommunicable diseases (NCDs). The largest proportion of NCD deaths is caused by cardiovascular diseases (48%). In terms of attributable deaths, raised blood pressure is one of the leading physiological risk factors, with 13% of global deaths attributed to it. Hypertension is reported to be the fourth contributor to premature death in developed countries and the seventh in developing countries. The World Health Organization ranks high BP as the third highest risk factor for burden of disease, highlighting the contribution of hypertension, directly and indirectly, to the development of numerous diseases. Hypertension has been identified as a major risk factor for cardiovascular disease, and is an important modifiable risk factor for coronary artery disease, stroke, peripheral vascular disease, congestive heart failure, and chronic kidney disease. The Global Burden of Disease Study 2010 reported that hypertension is the leading worldwide risk factor for cardiovascular disease, causing 9.4 million deaths annually. Hypertension is a major contributor to the global morbidity burden, with devastating downstream outcomes and a heavy financial burden on scarce health resources.

_

Raised blood pressure is a major risk factor for coronary heart disease and ischemic as well as hemorrhagic stroke. Blood pressure levels have been shown to be positively and continuously related to the risk for stroke and coronary heart disease. In some age groups, the risk of cardiovascular disease doubles for each increment of 20/10 mmHg of blood pressure, starting as low as 115/75 mmHg. In addition to coronary heart diseases and stroke, complications of raised blood pressure include heart failure, peripheral vascular disease, renal impairment, retinal hemorrhage and visual impairment. Treating systolic blood pressure and diastolic blood pressure until they are less than 140/90 mmHg is associated with a reduction in cardiovascular complications. Effective control of blood pressure has been shown to significantly improve health outcomes and reduce mortality. Control of blood pressure has been shown to decrease the incidence of stroke by 35 to 40 percent, myocardial infarction by 20 to 25 percent and heart failure by more than 50 percent. A decrease of 5 mmHg in systolic BP is estimated to result in a 14 percent reduction in mortality due to stroke, a 9 percent reduction in mortality due to heart disease, and a 7 percent reduction in all-cause mortality.
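The "risk doubles for each 20/10 mmHg increment above 115/75 mmHg" relationship cited above can be expressed as a rough rule-of-thumb calculation. The sketch below is purely illustrative arithmetic (the function name and the restriction to systolic pressure are my assumptions), not a clinical tool:

```python
def relative_cv_risk(systolic_mmhg, baseline_mmhg=115.0, doubling_mmhg=20.0):
    """Illustrative rule of thumb: cardiovascular risk roughly doubles
    for every 20 mmHg of systolic pressure above a 115 mmHg baseline.
    Returns risk relative to that baseline."""
    return 2.0 ** ((systolic_mmhg - baseline_mmhg) / doubling_mmhg)

print(relative_cv_risk(115))  # 1.0 (baseline)
print(relative_cv_risk(135))  # 2.0 (one 20 mmHg increment: risk doubles)
print(relative_cv_risk(155))  # 4.0 (two increments: risk quadruples)
```

The exponential form simply restates "doubles per increment"; real epidemiological risk curves are, of course, fitted to cohort data rather than to this idealized rule.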

_

The figure below shows correlation between HT and cardiovascular risk:

_

Data from numerous observational epidemiological studies provide persuasive evidence of the direct relationship between blood pressure and cardiovascular disease. In a recent meta-analysis that aggregated data across 61 prospective observational studies that together enrolled 958,074 adults, there were strong, direct relationships between average blood pressure and vascular mortality. These relationships were evident in middle-aged and older-aged individuals. Importantly, there was no evidence of a blood pressure threshold, that is, cardiovascular mortality increased progressively throughout the range of blood pressure, including the pre-hypertensive range. It has been estimated that ≈15% of blood pressure–related deaths from coronary heart disease occur in individuals with blood pressure in the pre-hypertensive range. Individual trials and meta-analyses of clinical trials have conclusively documented that antihypertensive drug therapy reduces the risk of cardiovascular events in hypertensive individuals. Such evidence provides strong evidence for current efforts to identify and treat individuals with hypertension and for parallel efforts to identify individuals with pre-hypertension, who are at risk for hypertension and blood pressure–related morbidity.

_______

Validity of self-reported hypertension:

Arterial hypertension is the main modifiable risk factor for coronary disease, cerebrovascular diseases, congestive cardiac insufficiency, and other cardiovascular diseases. The adequate treatment of arterial hypertension significantly reduces cardiovascular morbidity and mortality. Thus, knowledge of the distribution of hypertension in the population and the identification of vulnerable groups are of great interest to public health. Determining the prevalence of hypertension in a population is a complex task, which requires not only the measurement of arterial pressure but also verification of the use of medication for its control. Self-reported hypertension has been used in a number of health surveys, including the National Health and Nutrition Examination Survey (NHANES) in the United States and the Pesquisa Nacional por Amostras de Domicílio (National Household Sample Survey – PNAD 98) in Brazil. The sensitivity and specificity of self-reported hypertension found in various studies are about 71% and 90%, respectively. Generally speaking, these results confirm the validity of self-reported hypertension in the population. Since only 50% of hypertensives know that they have HT, SMBP by the population at home would greatly increase HT detection, and consequently the treatment and prevention of HT-related morbidity and mortality.

________

Blood pressure measurement in low resource settings:

The treatment of hypertension has been associated with an approximate 40% reduction in the risk of stroke and 20% reduction in the risk of myocardial infarction. However, in developing countries the detection of major cardiovascular risk factors, such as hypertension, is often missed. Failure to identify hypertension is largely due to the unavailability of suitable blood pressure measurement devices and the limited attention paid to the techniques and procedures necessary to obtain accurate blood pressure readings.

_______

Basics of blood pressure:

The ejection of blood from the left ventricle of the heart into the aorta produces pulsatile blood pressure in arteries. Systolic blood pressure is the maximum pulsatile pressure and diastolic pressure is the minimum pulsatile pressure in the arteries, the minimum occurring just before the next ventricular contraction. Normal systolic/diastolic values are near 120/80 mmHg. Normal mean arterial pressure is about 95 mmHg.

_

Pressure pulse wave (pulse pressure wave):

Every heartbeat generates a pressure pulse wave that is transmitted along the walls of the aorta and major arteries.

_

The figure above shows aortic pulse pressure waveform. Systolic and diastolic pressures are the peak and trough of the waveform. Augmentation pressure is the additional pressure added to the forward wave by the reflected wave. The dicrotic notch represents closure of the aortic valve and is used to calculate ejection duration. Time to reflection is calculated as the time at the onset of the ejected pulse waveform to the onset of the reflected wave.

_

Energetics of flowing blood:

Because flowing blood has mass and velocity it has kinetic energy (KE). This KE is proportional to the mean velocity squared (V²; from KE = ½mV²). Furthermore, as the blood flows inside a vessel, pressure is exerted laterally against the walls of the vessel; this pressure represents the potential or pressure energy (PE). The total energy (E) of the blood flowing within the vessel, therefore, is the sum of the kinetic and potential energies (assuming no gravitational effects). Although pressure is normally considered as the driving force for blood flow, in reality it is the total energy that drives flow between two points (e.g., longitudinally along a blood vessel or across a heart valve). Throughout most of the cardiovascular system, KE is relatively low, so for practical purposes, it is stated that the pressure energy (PE) difference drives flow. Kinetic energy and pressure energy can be interconverted so that total energy remains unchanged. This is the basis of Bernoulli’s Principle. An interesting, yet practical application of Bernoulli’s Principle is found when blood pressure measurements are made from within the ascending aorta. The instantaneous blood pressure that is measured within the aorta will be very different depending upon how the pressure is measured. As illustrated in the figure below, if a catheter has an end-port (E) sensor that is facing the flowing stream of blood, it will measure a pressure that is significantly higher than the pressure measured by a side-port (S) sensor on the same catheter. The reason for the discrepancy is that the end-port measures the total energy of the flowing blood. As the flow stream “hits” the end of the catheter, the kinetic energy (which is high) is converted to potential (or pressure) energy, and added to the potential energy to equal the total energy. The side-port will not be “hit” by the flowing stream so kinetic energy is not converted to potential energy.
The side-port sensor, therefore, only measures the potential energy, which is the lateral pressure acting on the walls of the aorta. The difference between the two types of pressure measurements can range from a few mmHg to more than 20 mmHg depending upon the peak velocity of the flowing blood within the aorta. So end pressure is higher than lateral pressure (blood pressure).  
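The size of this end-port versus side-port discrepancy follows directly from the kinetic-energy (dynamic pressure) term ½ρV². A minimal sketch, using an assumed blood density of about 1060 kg/m³ and illustrative aortic velocities (these values are my assumptions, not from the text):

```python
# Dynamic pressure of flowing blood: q = 1/2 * rho * v^2, expressed in mmHg.
RHO_BLOOD = 1060.0     # blood density, kg/m^3 (assumed typical value)
PA_PER_MMHG = 133.322  # pascals per millimeter of mercury

def dynamic_pressure_mmhg(velocity_m_s):
    """Kinetic-energy contribution an end-port sensor would register
    on top of the lateral (side-port) pressure, in mmHg."""
    return 0.5 * RHO_BLOOD * velocity_m_s ** 2 / PA_PER_MMHG

print(round(dynamic_pressure_mmhg(1.0), 1))  # 4.0 mmHg at 1 m/s
print(round(dynamic_pressure_mmhg(2.0), 1))  # 15.9 mmHg at 2 m/s
```

The numbers reproduce the range quoted above: a few mmHg at resting aortic velocities, rising past 15 mmHg when peak velocity approaches 2 m/s, since dynamic pressure grows with the square of velocity.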

_

_

________

Regulation of blood pressure:

_

_

To provide a framework for understanding the pathogenesis of and treatment options for hypertensive disorders, it is useful to understand factors involved in the regulation of both normal and elevated arterial pressure. Cardiac output and peripheral resistance are the two determinants of arterial pressure. Cardiac output is determined by stroke volume and heart rate; stroke volume is related to myocardial contractility and to the size of the vascular compartment. Peripheral resistance is determined by functional and anatomic changes in small arteries (lumen diameter 100–400 micron) and arterioles. So any condition that increases cardiac output and/or peripheral resistance would increase blood pressure.
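The "pressure = cardiac output × peripheral resistance" relationship above can be illustrated numerically. The sketch below uses the common clinical approximation MAP ≈ CO × SVR + CVP, with SVR in Wood units (mmHg·min/L); the function name and all numeric values are assumed illustrative figures, not data from the text:

```python
def mean_arterial_pressure(cardiac_output_l_min, svr_wood_units, cvp_mmhg=0.0):
    """Approximation MAP ~= CO * SVR + CVP, with SVR in Wood units
    (mmHg*min/L). Illustrates that raising either cardiac output or
    peripheral resistance raises arterial pressure."""
    return cardiac_output_l_min * svr_wood_units + cvp_mmhg

# Assumed typical resting values: CO = 5 L/min, SVR = 18 Wood units, CVP = 3 mmHg
print(mean_arterial_pressure(5.0, 18.0, 3.0))  # 93.0 mmHg
# Doubling peripheral resistance at unchanged output roughly doubles the gradient:
print(mean_arterial_pressure(5.0, 36.0, 3.0))  # 183.0 mmHg
```

This is why antihypertensive drugs target one factor or the other: diuretics and beta-blockers mainly reduce output, vasodilators mainly reduce resistance.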

_

Blood is a fluid, and fluid flows down a pressure gradient. Blood pressure in the arteries is higher than in the capillaries, and pressure in the capillaries is higher than in the veins; that is how blood flows from arteries to capillaries to veins. Blood pressure generates a pressure gradient from the heart to the tissues, and that is how tissues are perfused. When you are in shock with very low blood pressure, tissue perfusion is markedly reduced, resulting in multi-organ failure and death if not treated.

________

Blood pressure measurement means arterial blood pressure measurement:

Blood pressure measurements have been part of the basic clinical examination since the earliest days of modern medicine. The origin of blood pressure is the pumping action of the heart, and its value depends on the relationship between cardiac output and peripheral resistance. Therefore, blood pressure is considered as one of the most important physiological variables with which to assess cardiovascular hemodynamics. Venous blood pressure is determined by vascular tone, blood volume, cardiac output, and the force of contraction of the chambers of the right side of the heart. Since venous blood pressure must be obtained invasively, the term blood pressure most commonly refers to arterial blood pressure, which is the pressure exerted on the arterial walls when blood flows through the arteries. The highest value of pressure, which occurs when the heart contracts and ejects blood to the arteries, is called the systolic pressure (SP). The diastolic pressure (DP) represents the lowest value occurring between the ejections of blood from the heart. Pulse pressure (PP) is the difference between SP and DP, i.e., PP = SP – DP.

The period from the end of one heart contraction to the end of the next is called the cardiac cycle. Mean pressure (MP) is the average pressure during a cardiac cycle. Mathematically, MP can be determined by integrating the blood pressure over time. When only SP and DP are available, MP is often estimated by an empirical formula:

MP = DP + PP/3

Note that this formula can be very inaccurate in some extreme situations. Although SP and DP are most often measured in the clinical setting, MP has particular importance in some situations, because it is the driving force of peripheral perfusion. SP and DP can vary significantly throughout the arterial system whereas MP is almost uniform in normal situations.
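The two formulas above (PP = SP − DP and MP = DP + PP/3) are easy to check numerically; a minimal sketch, using the normal 120/80 mm Hg values quoted earlier:

```python
def pulse_pressure(sp, dp):
    """Pulse pressure: PP = SP - DP."""
    return sp - dp

def mean_pressure(sp, dp):
    """Empirical estimate MP = DP + PP/3.
    Can be inaccurate in extreme situations (e.g., tachycardia)."""
    return dp + (sp - dp) / 3.0

print(pulse_pressure(120, 80))           # 40 mmHg
print(round(mean_pressure(120, 80), 1))  # 93.3 mmHg
```

The result of about 93 mmHg agrees with the "normal mean arterial pressure is about 95 mmHg" figure given earlier; the DP + PP/3 weighting reflects the fact that, at resting heart rates, the heart spends roughly twice as long in diastole as in systole.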

_

Unit of blood pressure measurement:

Pressure is force per unit area; examples include pounds per square foot, newtons per square centimeter, and tons per square yard. Other units are atmospheres (atm) and pascals (Pa).

1 pascal = 1 N/m² = 10⁻⁵ bar

Atmospheric pressure is the force per unit area exerted on a surface by the weight of air above that surface in the atmosphere of Earth (or that of another planet). In most circumstances atmospheric pressure is closely approximated by the hydrostatic pressure caused by the weight of air above the measurement point.

A pressure of 1 atm can also be stated as:

= 1.01325 bar

= 101325 pascal (Pa) or 101.325 kilopascal (kPa)

= 1013.25 millibars (mbar, also mb)

= 760 torr

≈ 760.001 mm Hg (millimeters of mercury) at 0 °C

So atmospheric pressure is about 760 mm Hg at sea level.

That means the human body is subjected to about 760 mm Hg of pressure by the atmosphere.

Same units are used for blood pressure.

Blood pressure means the lateral pressure exerted by the column of blood on the wall of the blood vessel (the aorta and major arteries for arterial blood pressure). Normal blood pressure in an adult human is 120/80 mm Hg: 120 is the systolic blood pressure, when the heart is in systole (contracting forcefully), and 80 is the diastolic blood pressure, when the heart is in diastole (relaxing). It cannot be overemphasized that the atmospheric pressure of the air over our body acts on the blood column as well as the blood vessel wall, and therefore whatever blood pressure we measure is the pressure over and above atmospheric pressure. Blood pressure measurements are “relative pressure”, meaning the figures we state are above atmospheric pressure. When we say blood pressure is 100 mm Hg, that really means 100 mm Hg higher than atmospheric pressure. It is a gauge pressure, not an absolute pressure; the corresponding absolute pressure would be about 760 + 100 mm Hg. It is the atmospheric pressure that forces air into your lungs and compresses your body. That is why it is supposed that a human in the vacuum of space would have the air sucked out of the lungs: there is no external pressure to keep air in. Alternatively, when you go underwater, for every 33 feet you dive you are squeezed by an additional atmosphere of pressure. Deep-water diving can cause extreme changes in blood pressure: the pressure on the body increases dramatically due to the weight of the water over the swimmer, which can be dangerous to anyone with high blood pressure. Individuals with blood pressure problems should consult their physician prior to any deep-water diving excursion, to avoid serious risks to their health. Astronauts spend long periods of time in space, without gravity and the pressure exerted by the atmosphere.
The greater the length of time spent outside the Earth’s atmosphere, the more likely the astronaut is to experience fainting episodes upon return to Earth. It is theorized that, on return to normal gravity and atmospheric pressure, the deconditioned cardiovascular system cannot keep up with the increased demand on the heart; blood pressure falls, and fainting results.
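The unit relationships above (1 mm Hg ≈ 133.322 Pa, 1 atm = 760 mm Hg) and the gauge-versus-absolute distinction can be sketched in a few lines; the function names are illustrative:

```python
PA_PER_MMHG = 133.322   # pascals per millimeter of mercury
PA_PER_ATM = 101325.0   # pascals per standard atmosphere

def mmhg_to_kpa(mmhg):
    """Convert mm Hg to kilopascals."""
    return mmhg * PA_PER_MMHG / 1000.0

def gauge_to_absolute_mmhg(gauge_mmhg, atmospheric_mmhg=760.0):
    """Clinical BP readings are gauge pressures, i.e., relative to
    atmospheric pressure; add atmosphere to get absolute pressure."""
    return gauge_mmhg + atmospheric_mmhg

print(round(mmhg_to_kpa(120), 1))   # 16.0 kPa (a systolic pressure of 120 mm Hg)
print(gauge_to_absolute_mmhg(100))  # 860.0 mm Hg absolute, as stated above
```

A systolic pressure of 120 mm Hg is thus about 16 kPa, which is why kPa scales on some modern monitors show two-digit numbers where clinicians expect three.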

_

The gradual accumulation of mercury on the sea bed, together with the increasing availability of accurate, validated automatic sphygmomanometers that require neither mercury nor cumbersome (and frequently inaccurate) auscultation, is leading to the gradual withdrawal of mercury sphygmomanometers. If a mercury column is no longer used to measure blood pressure, should we continue to use mm Hg, or should we switch to kilopascals? Doctors feel comfortable with the conventional mm Hg rather than kilopascals or bars.
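For reference, the conversion between the two units is simple. A minimal sketch (the function name is an illustrative assumption; the factor 1 mmHg = 0.133322 kPa is the standard conversion):

```python
# Standard conversion factor: 1 mmHg = 0.133322 kPa
MMHG_TO_KPA = 0.133322

def mmhg_to_kpa(mmhg):
    """Express a pressure given in mmHg in kilopascals."""
    return mmhg * MMHG_TO_KPA

# The familiar 120/80 mmHg expressed in kilopascals:
print(round(mmhg_to_kpa(120), 1), round(mmhg_to_kpa(80), 1))  # 16.0 10.7
```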

_______

Manometer:

A ‘manometer’ is an instrument that uses a column of liquid to measure pressure, although the term is often used nowadays to mean any pressure measuring instrument.

_

Sphygmomanometer:

The word comes from the Greek sphygmos, meaning pulse, plus the scientific term manometer (pressure meter). A sphygmomanometer consists of an inflatable cuff, a measuring unit (a mercury manometer or aneroid gauge), and a mechanism for inflation, which may be a manually operated bulb and valve or an electrically operated pump. It is always used in conjunction with a means of determining the pressure at which blood flow is just starting and the pressure at which it is unimpeded. Manual sphygmomanometers are used together with a stethoscope. The usual unit of measurement of blood pressure is millimeters of mercury (mmHg), as measured directly by a manual sphygmomanometer. A stethoscope is not needed with an automated sphygmomanometer, in which cuff inflation is done by an electrically operated pump and either a microphone-filter combination detects the Korotkoff sounds or an oscillometric technique obviates the Korotkoff sounds altogether. With semi-automatic blood pressure monitors, the cuff is inflated by hand using a pumping bulb and the device deflates automatically; beyond this, the blood pressure is evaluated and calculated in the same way as by fully automatic devices.

________

Variability of blood pressure:

Blood pressure can vary widely as seen in the figure below.

The main value of self monitoring is that it can provide more precise estimates of the true underlying mean blood pressure than traditional clinic measurements. The table below shows the increased precision in mean systolic blood pressure gained from additional measurements for up to two weeks.

_

In order to obtain an accurate evaluation of the blood pressure value, the number of measurements should be increased.

Indeed, many medical studies have shown that the higher the number of blood pressure measurements, the more reliable the estimate will be. This is why ambulatory blood pressure measurement over 24 hours is now used. The other technique consists of measuring the blood pressure only a few times during the day, for a few days in a row. Expert committees have concluded that at least 15 measurements are necessary to estimate the true value of the blood pressure.

These measurements must be collected in the same conditions to have an optimal reliability.
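The gain in precision from additional readings follows from basic statistics: the standard error of the mean falls as the square root of the number of readings. A minimal sketch (the ~10 mmHg within-person standard deviation is a hypothetical illustrative value, not a figure from the text):

```python
import math

def sem(sd, n):
    """Standard error of the mean: precision of the estimated average BP."""
    return sd / math.sqrt(n)

# With a hypothetical within-person SD of 10 mmHg, the uncertainty of the
# estimated mean systolic pressure shrinks as readings accumulate:
for n in (1, 5, 15, 60):
    print(n, round(sem(10.0, n), 2))
# 1 reading  -> +/- 10.0 mmHg
# 15 readings -> +/- 2.58 mmHg
```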
_

No matter which measurement device is used, blood pressure is always a variable haemodynamic phenomenon. Modification of the factors that influence variability is not always possible, but we can minimize their effect. When optimum conditions are not possible, this should be noted with the reading. The table below shows factors that influence blood pressure variability.

_

Blood pressure is generally higher in the winter and lower in the summer. That is because low temperatures cause the blood vessels in your skin to narrow (vasoconstriction) to conserve heat, and more pressure is needed to force blood through the narrowed vessels.

_

The figure below shows typical BP fluctuations during a day:  

_____

High blood pressure vs. hypertension:    

Hypertension and high blood pressure are two terms that are used almost interchangeably, and the layman can be forgiven for assuming they are one and the same thing; in ordinary day-to-day usage they are. In the medical setting, however, the story is different. If you are in good health, your blood pressure will fluctuate during the day, depending on your stress level, how much caffeine you have had, whether you are exerting yourself, and so on. Taking your blood pressure just after you have heard that your house has been burgled, or after you have lost your job, will show a high reading, but that is not necessarily dangerous. Causes of reversible high blood pressure include pain, anxiety, agitation, hypoxia, hypercarbia and urinary bladder distension. Reversible high blood pressure is not hypertension.

When your blood pressure stays high for a long time, you have hypertension. In the strictest sense, then, there should be a clear distinction between hypertension and high blood pressure. By definition, "hypertension" is a medical condition of the cardiovascular system that is often chronic in nature and is characterized by a persistent elevation of blood pressure. The prefix "hyper" means "high", so "hypertension" is the opposite of "hypotension" (low blood pressure). In medicine we define a number above which the diagnosis applies, but the numbers actually represent risk: the higher the blood pressure, the greater the risk, and the risk begins to rise even at relatively normal blood pressure readings. So the higher you go, the worse off you are, from a blood pressure point of view.
You must also remember that certain medical conditions, such as anemia and thyrotoxicosis, can cause reversible hypertension; correct the anemia or thyrotoxicosis and the BP will come down.

_____

Defining Hypertension:

_

_

From an epidemiologic perspective, there is no obvious level of blood pressure that defines hypertension. In adults, there is a continuous, incremental risk of cardiovascular disease, stroke, and renal disease across levels of both systolic and diastolic blood pressure. The Multiple Risk Factor Intervention Trial (MRFIT), which included >350,000 male participants, demonstrated a continuous and graded influence of both systolic and diastolic blood pressure on coronary heart disease mortality, extending down to systolic blood pressures of 120 mmHg. Similarly, results of a meta-analysis involving almost 1 million participants indicate that ischemic heart disease mortality, stroke mortality, and mortality from other vascular causes are directly related to the height of the blood pressure, beginning at 115/75 mmHg, without evidence of a threshold. Cardiovascular disease risk doubles for every 20-mmHg increase in systolic and 10-mmHg increase in diastolic pressure. Among older individuals, systolic blood pressure and pulse pressure are more powerful predictors of cardiovascular disease than is diastolic blood pressure.

_

Clinically, hypertension may be defined as that level of blood pressure at which the institution of therapy reduces blood pressure–related morbidity and mortality. Current clinical criteria for defining hypertension generally are based on the average of two or more seated blood pressure readings during each of two or more outpatient visits. A recent classification recommends blood pressure criteria for defining normal blood pressure, pre-hypertension, hypertension (stages I and II), and isolated systolic hypertension, which is a common occurrence among the elderly as seen in the table below.

_

Blood pressure classification:  

Blood Pressure Classification      Systolic, mmHg        Diastolic, mmHg
Normal                             <120           and    <80
Pre-hypertension                   120–139        or     80–89
Stage 1 hypertension               140–159        or     90–99
Stage 2 hypertension               ≥160           or     ≥100
Isolated systolic hypertension     ≥140           and    <90
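As an illustration, the classification table can be expressed as a small function. A minimal sketch (the function name classify_bp is an assumption for illustration; the thresholds are those of the table above):

```python
def classify_bp(systolic, diastolic):
    """Classify a seated adult BP reading (mmHg) per the table above.

    A reading falls in the highest category reached by either number.
    """
    if systolic >= 160 or diastolic >= 100:
        return "Stage 2 hypertension"
    if systolic >= 140 or diastolic >= 90:
        return "Stage 1 hypertension"
    if systolic >= 120 or diastolic >= 80:
        return "Pre-hypertension"
    return "Normal"

print(classify_bp(118, 76))  # Normal
print(classify_bp(150, 85))  # Stage 1 hypertension (isolated systolic pattern)
```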

_

_

In children and adolescents, hypertension generally is defined as systolic and/or diastolic blood pressure consistently >95th percentile for age, sex, and height. Blood pressures between the 90th and 95th percentiles are considered pre-hypertensive and are an indication for lifestyle interventions.

_

Fetal blood pressure:

In pregnancy, it is the fetal heart and not the mother’s heart that builds up the fetal blood pressure to drive its blood through the fetal circulation. The blood pressure in the fetal aorta is approximately 30 mm Hg at 20 weeks of gestation, and increases to approximately 45 mm Hg at 40 weeks of gestation.

The average blood pressure for full-term infants:

Systolic 65–95 mm Hg

Diastolic 30–60 mm Hg

Remember that as humans age from infancy through adulthood to old age, BP steadily rises. A clinic BP of 140/90 mm Hg is the cut-off value for adults, above which antihypertensive treatment is advised; treatment may be advised at 135/85 mm Hg if the person has diabetes or chronic kidney disease.

_

The figure below shows discrepancy between office (clinic) measurement of blood pressure (OMBP) and self measurement of blood pressure (SMBP) at home:

For SMBP, cut off value is 135/85 mm Hg in contrast to OMBP cut off value 140/90 mm Hg.

_

White-Coat Hypertension (WCH) or Isolated Office (Clinic) Hypertension:

_

_

Most patients have a higher level of anxiety, and therefore higher blood pressure, in the physician's office or clinic than in their normal environment (as revealed by ambulatory monitoring or home blood pressure measurements), a phenomenon commonly called the white-coat effect. Several factors can increase this effect, such as observer-patient interaction during the measurement. The effect tends to be greatest at the initial measurement but can persist through multiple readings by the doctor or nurse during the same visit. Whether the white-coat effect is due purely to patient anxiety about an office visit or to a conditioned response has been a point of interest in clinical studies. Regardless, it may result in the misdiagnosis of hypertension or in overestimation of its severity, and may lead to overly aggressive therapy; antihypertensive treatment may be unnecessary in the absence of concurrent cardiovascular risk factors.

"White-coat hypertension" or "isolated office hypertension" is the condition in which a patient who is not on antihypertensive drug therapy has persistently elevated blood pressure in the clinic or office (>140/90 mm Hg) but normal daytime ambulatory blood pressure (<135/85 mm Hg). Since patients may have an elevated reading at a first office visit, at least several visits are required to establish the diagnosis. Multiple studies suggest that white-coat hypertension may account for 20% to 25% of the hypertensive population, particularly among older patients, mainly women. Both white-coat hypertension and the white-coat effect can be mitigated by using an automatic, programmable device that takes multiple readings after the clinician leaves the examination room; the magnitude of the effect can be reduced (but not eliminated) by stationary oscillometric devices that automatically determine and analyze a series of blood pressures over 15 to 20 minutes with the patient in a quiet environment in the office or clinic.

Other health risk factors are often present in WCH and should be treated accordingly. In some patients, WCH may progress to definite sustained hypertension, and all need to be followed up indefinitely with office and out-of-office measurements of blood pressure. Treatment with antihypertensive drugs may lower the office blood pressure but does not change the ambulatory measurement; this pattern suggests that drug treatment of WCH is less beneficial than treatment of sustained hypertension. White-coat hypertension may nonetheless be associated with an increased risk of target organ damage (e.g., left ventricular hypertrophy, carotid atherosclerosis, overall cardiovascular morbidity), although to a lesser extent than in individuals with elevated office and ambulatory readings.

_

A survey showed that 96% of primary care physicians habitually use a cuff size too small, adding to the difficulty of making an informed diagnosis. For such reasons, white-coat hypertension cannot be diagnosed in a standard clinical visit. Ambulatory blood pressure monitoring and patient self-measurement with a home blood pressure monitoring device are increasingly used to differentiate those with white-coat hypertension, or those experiencing the white-coat effect, from those with chronic hypertension. Ambulatory monitoring has been found to be the more practical and reliable method for detecting white-coat hypertension and for predicting target organ damage. Even so, the diagnosis and treatment of white-coat hypertension remain controversial.

_

Masked Hypertension or Isolated Ambulatory Hypertension:

Somewhat less frequent than WCH but more problematic to detect is the converse condition of normal blood pressure in the office and elevated blood pressures elsewhere, e.g., at work or at home. Lifestyle can contribute to this, e.g., alcohol, tobacco, caffeine consumption, and physical activity away from the clinic. Target organ damage is related to the more prolonged elevations in pressure away from the physician’s office and the presence of such when the blood pressure is normal in the office can be a clue. There is also some evidence that such patients are at increased risk.

_

So in a nutshell, adult population is divided in four groups: true hypertensive, true normotensive, white coat HT and masked HT:

_____

Pseudo-hypertension:

When the peripheral muscular arteries become very rigid from advanced (often calcified) arteriosclerosis, the cuff has to be at a higher pressure to compress them. Rarely, usually in elderly patients or those with longstanding diabetes or chronic renal failure, it may be very difficult to do so. The brachial or radial artery may be palpated distal to the fully inflated cuff in these instances (positive Osler sign). The patients may be overdosed with antihypertensive medications inadvertently, resulting in orthostatic hypotension and other side effects. When suspected, an intra-arterial radial artery blood pressure can be obtained for verification. The Osler maneuver is not a reliable screen for pseudo-hypertension. The maneuver is performed by assessing the palpability of the pulseless radial or brachial artery distal to a point of occlusion of the artery manually or by cuff pressure. It was present in 7.2% of 3387 persons older than 59 years screened for the Systolic Hypertension in the Elderly Program (SHEP) study—more common in men, those found to be hypertensive, and those with a history of stroke. However, the Osler maneuver may be positive in the absence of pseudo-hypertension in one-third of hospitalized elderly subjects.

______

Orthostatic or Postural Hypotension:

Orthostatic hypotension is defined as a reduction in systolic blood pressure of at least 20 mm Hg, or in diastolic blood pressure of at least 10 mm Hg, within 3 minutes of quiet standing. An alternative method is to detect a similar fall during head-up tilt at 60 degrees. It may be asymptomatic or accompanied by lightheadedness, faintness, dizziness, blurred vision, neck ache, and cognitive impairment. Factors affecting this response to posture include food ingestion, time of day, medications, ambient temperature, hydration, deconditioning, standing after vigorous exercise, and age. If chronic, the fall in blood pressure may be part of pure autonomic failure or multiple system atrophy (which may be associated with parkinsonism), or a complication of diabetes, multiple myeloma, and other dysautonomias. Patients with autonomic failure exhibit a disabling failure of control of many autonomic functions. The major life-limiting failure is the inability to control the level of blood pressure, especially in patients with orthostatic hypotension who concomitantly have supine hypertension. In these patients there are great and swift changes in pressure, so that they faint because of profound hypotension on standing and have very severe hypertension when supine during the night; often the heart rate is fixed as well. The supine hypertension subjects them to life-threatening target organ damage such as left ventricular hypertrophy, coronary heart disease, flash pulmonary edema, heart failure, renal failure, stroke, and sudden death (presumably caused by central apnea or cardiac arrhythmias).
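The definitional criterion above reduces to a simple check. A minimal sketch (the function name and the example readings are illustrative assumptions):

```python
def orthostatic_hypotension(supine_sbp, supine_dbp, standing_sbp, standing_dbp):
    """True if SBP falls >=20 mmHg or DBP falls >=10 mmHg on standing
    (readings taken within 3 minutes of quiet standing)."""
    return (supine_sbp - standing_sbp) >= 20 or (supine_dbp - standing_dbp) >= 10

print(orthostatic_hypotension(130, 80, 105, 75))  # True  (SBP fell 25 mmHg)
print(orthostatic_hypotension(130, 80, 125, 74))  # False (falls of 5 and 6 mmHg)
```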

_______

Measurement of blood pressure:

_

Flow chart of blood pressure measurement is depicted in the figure below:

 

_

Location where BP is measured: clinic (office), home or ambulatory:  

There are three clinical settings in which blood pressure is measured: the office (clinic), giving the office measurement of blood pressure (OMBP); the ambulatory setting, giving the ambulatory measurement of blood pressure (AMBP); and the home, giving the self-measurement of blood pressure (SMBP). SMBP can also be done outside the home, at the workplace, in a shopping mall, and so on.

_

_

The measurement of blood pressure is the commonest procedure carried out by doctors and nurses. Correct technique is crucial, particularly in patients with hypertension. Blood pressure shows marked intrinsic variability, such that even a careful observer who adheres meticulously to recommended guidelines will obtain values that differ from one moment to the next and from one occasion to another. A failure to recognize such variability may result in a patient being falsely labeled as hypertensive (or as normotensive) and consequently being treated unnecessarily or not being treated at all.

_

Although the monitoring of antihypertensive treatment is usually performed using blood pressure readings made in the physician's office, and although a blood pressure check is by far the most common reason for visiting a physician, the process is neither reliable nor efficient. Physicians' measurements are often inaccurate as a result of poor technique, often unrepresentative because of the white-coat effect, and rarely include more than three readings at any one visit. It is often not appreciated how large the variations in clinic blood pressure can be. In a study by Armitage and Rose in 10 normotensive subjects, two readings were taken on 20 occasions over a 6-week period by a single trained observer. The authors concluded that "the clinician should recognize that the patient whose diastolic pressure has fallen 25 mm from the last occasion has not necessarily changed in health at all; or, if he is receiving hypotensive therapy, that there has not necessarily been any response to treatment." In addition, blood pressure can decrease by 10 mmHg or more within the time of a single visit if the patient rests, as shown by Alam and Smirk in 1943. There is also a practical limit to the number and frequency of clinic visits a patient can make, since the patient may have to take time off work to attend. The potential utility of hypertensive patients having their blood pressure measured at home, either by self-monitoring or by a family member, was first demonstrated in 1940 by Ayman and Goldshine, who showed that home blood pressures could be 30 or 40 mmHg lower than the physicians' readings and that these differences might persist over a period of 6 months. Self-monitoring has the theoretical advantage of overcoming the two main limitations of clinic readings: the small number of readings that can be taken and the white-coat effect. It provides a simple and cost-effective means of obtaining a large number of readings that are representative of the natural environment in which patients spend a major part of their day.

_

It is not uncommon for blood pressure to be much higher in a doctor's office than in an out-of-office setting, the difference being referred to as the "white-coat effect". Furthermore, a considerable body of data indicates that out-of-office blood pressure, whether recorded by ambulatory measurement or by the patient at home, is a better predictor of outcome than pressure measured by a doctor in a clinical setting. The normal values for SMBP and AMBP are lower than for OMBP. The cut-off blood pressure levels for the three settings are as follows:

Office Blood Pressure: 140/90mm Hg

Home Blood pressure: 135/85 mm Hg

Ambulatory Blood Pressure: Mean Daytime 135/85 mm Hg
                           Mean Nighttime 120/70 mm Hg

In the clinic setting, the diagnosis of hypertension is made when repeated measurements, performed on three separate occasions over a period of two months, show a systolic blood pressure of 140 mm Hg or greater and/or a diastolic blood pressure of 90 mm Hg or greater.

_

Problem with office (clinic) BP:

The accurate measurement of blood pressure (BP) remains the most important technique for evaluating hypertension and its consequences, and there is increasing evidence that the traditional office BP measurement procedure may yield inadequate or misleading estimates of a patient's true BP status. The limitations of office BP measurement arise from at least four sources: 1) the inherent variability of BP coupled with the small number of readings typically taken in the doctor's office; 2) poor technique (e.g., terminal digit preference, rapid cuff deflation, improper cuff and bladder size); 3) the white-coat effect; and 4) the masked effect. Nearly 70 years ago it was observed that office BP can vary by as much as 25 mm Hg between visits. The solution to this dilemma is potentially two-fold: improving office BP technique (e.g., using accurate, validated automated monitors that can take multiple readings), and using out-of-office monitoring to supplement the BP values taken in the clinical environment.

_

Out-of-office monitoring takes two forms at present: self (or home) BP monitoring and ambulatory BP monitoring. While both modalities have been available for 30 years, only now are they finding their way into routine clinical practice. The use of self-BP monitoring (also referred to as home BP monitoring) as an adjunct to office BP monitoring has been recommended by several national and international guidelines for the management of hypertension, including those of the European Society of Hypertension, the American Society of Hypertension (ASH), the American Heart Association (AHA), the British Hypertension Society, the Japanese Hypertension Society, the World Health Organization – International Society of Hypertension, and the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure (JNC 7).

_

The practice and epidemiology of hypertension still depend entirely on BP information obtained in a medical environment (clinic BP/BP at a health examination), resulting in the accumulation of a great quantity of data about BP in a medical environment. For this reason, clinic BP remains the gold standard for the diagnosis and treatment of hypertension. However, data regarding AMBP or self-BP measurements at home (home BP) have also been accumulating for the past 30 years, and BP information, other than clinic BP, has been shown to have greater clinical significance than clinic BP. Many of these findings are the result of clinical and epidemiological studies. Essentially, as AMBP and home BP are accompanied by qualitative improvements and quantitative increases in information compared with clinic BP, they are considered to have greater clinical significance. For example, in AMBP by an indirect method widely used today, BP values can be obtained every 15 or 30 min on a particular day. Therefore, 50–100 BP values can be measured in the time course of one day. On the other hand, with home BP measurements, BP values are obtained at least at 2 time points in a day, that is, morning and evening, providing time-related BP information at 60 time points in a month. In addition to such definite increases in the quantity of information, BP information as a function of time leads to qualitative improvements. The application of the cuff-oscillometric method to sphygmomanometric devices associated with recent improvements in electronic technology and the clinical utilization of AMBP and home BP measurements are a paradigm shift in the history of the diagnosis and treatment of hypertension by indirect BP measurements.

_

Home blood pressure and average 24-h ambulatory blood pressure measurements are generally lower than clinic blood pressures. Because ambulatory blood pressure recordings yield multiple readings throughout the day and night, they provide a more comprehensive assessment of the vascular burden of hypertension than do a limited number of office readings. Increasing evidence suggests that home blood pressures, including 24-h blood pressure recordings, more reliably predict target organ damage than do office blood pressures. Blood pressure tends to be higher in the early morning hours, soon after waking, than at other times of day; myocardial infarction and stroke are more common in the early morning hours. Nighttime blood pressures are generally 10–20% lower than daytime blood pressures, and an attenuated nighttime blood pressure "dip" is associated with increased cardiovascular disease risk. Recommended criteria for a diagnosis of hypertension are an average awake blood pressure ≥135/85 mmHg and asleep blood pressure ≥120/70 mmHg. These levels approximate a clinic blood pressure of 140/90 mmHg.
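The nocturnal "dip" mentioned above is easy to quantify from ambulatory recordings. A minimal sketch (the function name is an illustrative assumption; a dip below roughly 10% is conventionally labeled "non-dipping"):

```python
def nocturnal_dip_percent(mean_day_sbp, mean_night_sbp):
    """Percent fall in systolic BP from mean daytime to mean nighttime.

    A dip under ~10% ("non-dipping") is associated with increased
    cardiovascular risk; a normal dip is roughly 10-20%.
    """
    return 100.0 * (mean_day_sbp - mean_night_sbp) / mean_day_sbp

print(round(nocturnal_dip_percent(135, 118), 1))  # 12.6 (a normal dipper)
```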

_

_

How should we handle the difference between home and clinic readings?

Most home measurements of blood pressure are lower than those taken by a health professional in the office: a meta-analysis found that they differed by 6.9/4.9 mm Hg, the difference varying with age and treatment. The British Hypertension Society suggests a "correction" factor of the order of 10/5 mm Hg. In one trial in which antihypertensive drugs were titrated by someone blinded to whether the blood pressure results came from home or office readings, the home-monitored group had worse blood pressure control because fewer drugs of all classes were prescribed; this may have resulted from a failure to account for the difference between home and office blood pressures. A systematic review that aimed to ascertain a diagnostic cut-off for hypertension for home measurements, defined as the equivalent of an office reading of 140/90 mm Hg, identified thresholds for self-monitored pressures of between 129/84 mm Hg and 137/89 mm Hg, depending on the method of comparison used. Recommendations from the US and Europe have settled on a threshold of 135/85 mm Hg. No studies have assessed morbidity and mortality outcomes of treating to a lower "home target," but because home blood pressure is systematically lower than office readings it seems appropriate to adopt such a strategy.

______

Technique of BP measurement: direct or indirect:

_

Indirect Blood Pressure Measurement

Indirect measurement is often called noninvasive measurement because the body is not entered in the process. The upper arm, containing the brachial artery, is the most common site for indirect measurement because of its closeness to the heart and convenience of measurement, although many other sites may be used, such as the forearm (radial artery), the finger, etc. Distal sites such as the wrist, although convenient, may give much higher systolic pressures than brachial or central sites as a result of impedance mismatch and reflected waves. An occlusive cuff is normally placed over the upper arm and inflated to a pressure greater than the systolic blood pressure. The cuff is then gradually deflated while a detector system simultaneously determines the point at which blood flow is restored to the limb. The detector system need not be a sophisticated electronic device; it may be as simple as manual palpation of the radial pulse. The most commonly used indirect methods are auscultation and oscillometry.

Auscultatory Method:

The auscultatory method most commonly employs a mercury column, an occlusive cuff, and a stethoscope. The stethoscope is placed over the blood vessel for auscultation of the Korotkoff sounds, which define both the systolic pressure (SP) and the diastolic pressure (DP). The Korotkoff sounds are generated mainly by the pulse wave propagating through the brachial artery and consist of five distinct phases. The onset of Phase I (first appearance of clear, repetitive, tapping sounds) signifies SP, and the onset of Phase V (complete disappearance of the sounds) usually defines DP. Observers may differ greatly in their interpretation of the Korotkoff sounds. Simple mechanical error can occur in the form of air leaks or obstruction in the cuff, coupling tubing, or Bourdon gauge, and mercury can leak from a column-gauge system. In spite of the errors inherent in such simple systems, more mechanically complex systems have come into use; the impetus for developing more elaborate detectors has been reproducibility from observer to observer and the convenience of automated operation. Examples of this improved instrumentation include sensors using plethysmographic principles, pulse-wave velocity sensors, and audible as well as ultrasonic microphones. Readings by auscultation do not always correspond to intra-arterial pressures, and the differences are more pronounced in certain situations such as obesity, pregnancy, arteriosclerosis, and shock. Experience with the auscultatory method has also shown that determination of DP is often more difficult and less reliable than that of SP. The situation is different for the oscillometric method, in which the oscillations caused by the pressure pulse are interpreted for SP and DP according to empirical rules.

Oscillometric Method:

In recent years, electronic pressure and pulse monitors based on oscillometry have become popular for their simplicity of use and reliability. The principle of blood pressure measurement by the oscillometric technique depends on the transmission of intra-arterial pulsation to the occluding cuff surrounding the limb. An approach using this technique starts with a cuff placed around the upper arm and rapidly inflated to about 30 mmHg above the systolic blood pressure, occluding blood flow in the brachial artery. The pressure in the cuff is measured by a sensor and is then gradually decreased, often in steps of 5 to 8 mmHg, with the oscillometric signal detected and processed at each step; the cuff can also be deflated linearly, as in the conventional auscultatory method. Arterial pressure oscillations are superimposed on the cuff pressure once the blood vessel is no longer fully occluded. The superimposed oscillations are separated from the cuff pressure by filters that extract the corresponding signals, with signal sampling carried out at a rate determined by the pulse or heart rate. The oscillation amplitudes are most often used with an empirical algorithm to estimate SP and DP. Unlike the Korotkoff sounds, the pressure oscillations are detectable throughout the whole measurement, even at cuff pressures higher than SP or lower than DP. Since many oscillometric devices use empirically fixed algorithms, the variance of measurement can be large across a wide range of blood pressures. Significantly, however, the mean pressure is determined as the cuff pressure at which the oscillations are maximal, a criterion strongly supported by many clinical validations.
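A minimal sketch of the maximum-amplitude logic described above. Everything here is an illustrative assumption: the function name, the synthetic deflation data, and the 0.55/0.85 characteristic ratios (commercial devices use proprietary, empirically tuned ratios):

```python
def estimate_bp(cuff_pressures, amplitudes, sp_ratio=0.55, dp_ratio=0.85):
    """Estimate (SP, MAP, DP) from step-deflation oscillation amplitudes.

    MAP is taken at the cuff pressure of maximum oscillation; SP and DP are
    read where the amplitude falls to empirical fractions of that maximum.
    """
    a_max = max(amplitudes)
    map_estimate = cuff_pressures[amplitudes.index(a_max)]
    # Systolic: scanning from high cuff pressure down, the first step where
    # the amplitude reaches sp_ratio * maximum.
    sp = next(p for p, a in zip(cuff_pressures, amplitudes) if a >= sp_ratio * a_max)
    # Diastolic: scanning from low cuff pressure up, the first step where
    # the amplitude is still dp_ratio * maximum.
    dp = next(p for p, a in zip(reversed(cuff_pressures), reversed(amplitudes))
              if a >= dp_ratio * a_max)
    return sp, map_estimate, dp

# Synthetic deflation from 180 to 40 mmHg in 10 mmHg steps, with a
# bell-shaped oscillation envelope peaking at a cuff pressure of 90 mmHg:
pressures = list(range(180, 30, -10))
amps = [0.05, 0.08, 0.12, 0.2, 0.3, 0.45, 0.6, 0.8, 0.95, 1.0,
        0.9, 0.7, 0.45, 0.25, 0.1]
print(estimate_bp(pressures, amps))  # (120, 90, 80)
```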

_

_

How to diagnose Blood Pressure without a Blood Pressure Cuff:

Physicians normally use a blood pressure cuff, but patients can approximate their own blood pressures without a cuff.

Step 1:

Feel for a pulse at one of the carotid arteries. These arteries run through the neck, on either side of the voice box, or larynx. A palpable carotid pulse means the individual in question has a systolic, or pumping, pressure of at least 60-70 mmHg.

Step 2:

Feel for a pulse at one of the femoral arteries. These arteries are the major vessels that deliver blood to the tissues of the leg, and they run from the abdomen through each thigh. The femoral pulse is easiest to palpate in the crease between the thigh and the abdomen, a few inches to either side of the midline. Since the femoral artery is farther from the heart than the carotid artery, blood pressure is lower in the femoral artery. A palpable femoral pulse means the patient has a systolic pressure of at least 70-80 mmHg.

Step 3:

Feel for a pulse at one of the radial arteries. These run along the underside of the forearm, near its two bones. It’s easiest to find the radial pulse by placing the fingers on the underside of the forearm just before it meets the wrist, closer to the thumb side. A palpable radial pulse indicates that the patient has a systolic pressure of more than 80 mmHg: because the radial artery is smaller than the femoral artery and higher on the body, the pressure must exceed 80 mmHg for a pulse to reach it.

Warnings:

A 2000 article in the “British Medical Journal” notes that palpation-based blood pressure assessments may overestimate blood pressure slightly. Also, feel for pulses gently; overly compressing arteries can damage tissues and may make it impossible to feel a pulse.
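As a rough illustration only, and certainly not a clinical tool, the three thresholds above can be collapsed into a lookup; the function name and the single conservative floor value returned for each pulse are assumptions drawn from the ranges quoted in the steps.

```python
def palpation_systolic_floor(carotid_pulse, femoral_pulse, radial_pulse):
    """Approximate lower bound on systolic BP (mmHg) from palpable pulses.

    Thresholds follow the text: carotid ~60-70, femoral ~70-80,
    radial >80 mmHg (the conservative end of each range is returned).
    Returns None when no pulse is palpable at these sites.
    """
    if radial_pulse:
        return 80   # radial palpable: systolic above ~80 mmHg
    if femoral_pulse:
        return 70   # femoral but not radial: systolic ~70-80 mmHg
    if carotid_pulse:
        return 60   # carotid only: systolic ~60-70 mmHg
    return None     # no palpable pulse at these sites
```

Note the warning above: such palpation-based figures may overestimate the true pressure.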

______

Direct Blood Pressure Measurement:

Direct measurement is also called invasive measurement because bodily entry is made. For direct arterial blood pressure measurement an artery is cannulated. The equipment and procedure require proper setup, calibration, operation, and maintenance. Such a system yields blood pressures dependent upon the location of the catheter tip in the vascular system. It is particularly useful for continuous determination of pressure changes at any instant in dynamic circumstances. When massive blood loss is anticipated, powerful cardiovascular medications are suddenly administered, or general anesthesia is induced, continuous monitoring of blood pressures becomes vital. The sites most commonly used for continuous observations are the brachial and radial arteries. The femoral or other sites may be used as points of entry to sample pressures at different locations inside the arterial tree, or even the left ventricle of the heart. Entry through the venous side of the circulation allows checks of pressures in the central veins close to the heart, the right atrium, the right ventricle, and the pulmonary artery. A catheter with a balloon tip carried by blood flow into smaller branches of the pulmonary artery can occlude flow in the artery from the right ventricle so that the tip of the catheter reads the pressure of the left atrium, just downstream. These procedures are very complex, and the risk of hazard must always be weighed against the benefit. Invasive access to a systemic artery involves considerable handling of a patient. The longer a catheter stays in a vessel, the more likely an associated thrombus will form. Allen’s test can be performed by pressing on one of the two main arteries at the wrist while the fist is clenched, then opening the hand to see whether persistent blanching indicates inadequate perfusion by the other artery. However, it has proved an equivocal predictor of possible ischemia.
In the newborn, when the arterial catheter is inserted through an umbilical artery, there is a particular hazard of infection and thrombosis, since a thrombus from the catheter tip in the aorta can occlude the arterial supply to vital abdominal organs. Some of the recognized contraindications and complications include poor collateral flow, severe hemorrhagic diathesis, occlusive arterial disease, arterial spasm, and hematoma formation. In spite of these well-studied potential problems, direct blood pressure measurement is generally accepted as the gold standard of arterial pressure recording and presents the only satisfactory alternative when conventional cuff techniques are not successful. It also confers the benefit of continuous access to the artery for monitoring gas tension and blood sampling for biochemical tests, and it has the advantage of assessing cyclic variations and beat-to-beat changes of pressure continuously, permitting assessment of short-term variations. Other exceptional cases where this method may be employed include those where the pressure is very high but the patient does not exhibit any symptoms. This may be a case of calcified arteries, in which case the pressure will not be recorded accurately with a sphygmomanometer and a stethoscope.

_____

Blood pressure measurements in routine clinical practice:

Repeated office blood pressure measurements are mandatory in clinical practice to characterize precisely the blood-pressure-related cardiovascular risk of individual subjects. Precise recommendations are available to ensure standardized accurate measurements (O’Brien et al. 2003, Parati et al. 2008a), which until now have been obtained in most cases through the auscultatory technique making use of mercury or aneroid sphygmomanometers. Given the fact that aneroid manometers easily lose calibration, mercury manometers have been, until now, the recommended tools for auscultatory blood pressure readings, on which the conventional management of hypertensive patients has been based over the last 60-70 years. In more recent years an increasing use of home blood pressure monitoring and 24-hour ambulatory blood pressure monitoring has been observed (both based on oscillometric blood pressure measurements), aimed at complementing the information provided by office blood pressure measurements. This is based on the evidence of a stronger prognostic value of 24-hour ambulatory and home blood pressure monitoring as compared to isolated office readings (Parati et al. 2008b, Parati et al. 2009b, Verdecchia et al. 2009). A slow progressive increase in the use of oscillometric blood pressure measuring devices at the time of the office visit has been recently observed, although auscultatory readings are still preferred by physicians in most countries.

_

Reliable measurements of blood pressure depend on attention to the details of the technique and conditions of the measurement. Proper training of observers, positioning of the patient, and selection of cuff size are essential. Owing to recent regulations preventing the use of mercury because of concerns about its potential toxicity, most office measurements are made with aneroid sphygmomanometers or with oscillometric devices in western nations. These instruments should be calibrated periodically, and their accuracy confirmed. Before the blood pressure measurement is taken, the individual should be seated quietly in a chair (not the exam table) with feet on the floor for 5 min in a private, quiet setting with a comfortable room temperature. At least two measurements should be made. The center of the cuff should be at heart level, and the width of the bladder cuff should equal at least 40% of the arm circumference; the length of the cuff bladder should be enough to encircle at least 80% of the arm circumference. It is important to pay attention to cuff placement, stethoscope placement, and the rate of deflation of the cuff (2 mmHg/s). Systolic blood pressure is the first of at least two regular “tapping” Korotkoff sounds, and diastolic blood pressure is the point at which the last regular Korotkoff sound is heard. In current practice, a diagnosis of hypertension generally is based on seated, office measurements. Currently available ambulatory monitors are fully automated, use the oscillometric technique, and typically are programmed to take readings every 15–30 min. Twenty-four-hour ambulatory blood pressure monitoring more reliably predicts cardiovascular disease risk than do office measurements. However, ambulatory monitoring is not used routinely in clinical practice and generally is reserved for patients in whom white coat hypertension is suspected. 
The Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure (JNC 7) has also recommended ambulatory monitoring for treatment resistance, symptomatic hypotension, autonomic failure, and episodic hypertension.  
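The bladder-sizing rule quoted above (width at least 40% and length at least 80% of the arm circumference) can be written as a simple check; the function name and the use of centimetres are my own illustrative choices.

```python
def cuff_bladder_fits(arm_circumference_cm, bladder_width_cm, bladder_length_cm):
    """Check the sizing rule: bladder width >= 40% and bladder length >= 80%
    of the mid-upper-arm circumference (all dimensions in cm)."""
    return (bladder_width_cm >= 0.40 * arm_circumference_cm and
            bladder_length_cm >= 0.80 * arm_circumference_cm)
```

For a 30 cm arm this requires a bladder at least 12 cm wide and 24 cm long; an undersized bladder tends to overestimate the pressure.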

_______

Factors affecting blood pressure measurement:

It is important to be aware of the factors that affect blood pressure measurement:

(1) The technical skills of the observer;

(2) The inherent variability of blood pressure;

(3) The accuracy of the device, including its limitations and applications;

(4) The difficulty in measuring blood pressure in some special groups, e.g. the elderly, patients with arrhythmias, patients with a large arm, children, pregnant women.

The most important element in using auscultatory methods is the observer. All observers need adequate training in listening for and recognising the correct sounds. The most common sources of error reported are observer-related, including poor hearing, difficulty or failure in interpreting the Korotkoff sounds, and lack of concentration. The most serious errors involve the interpretation of the Korotkoff sounds and recognising diastolic pressure. Observers may be influenced by the subjects. For example, observers tend to be reluctant to diagnose young healthy subjects as hypertensive or obese older persons as normotensive when the blood pressure is around 140/90 mmHg (systolic/diastolic blood pressure), resulting in a tendency to under-read in the former case and over-read in the latter. Observer-related issues include prejudice and bias such as threshold avoidance, terminal digit preference, fast deflation, etc. (Beevers et al. 2001).

______

Location of Measurement vis-à-vis body part—Arm, Wrist, and Finger:

The standard location for blood pressure measurement is the upper arm, with the stethoscope at the elbow crease over the brachial artery, although there are several other sites where it can be performed. Monitors that measure pressure at the wrist and fingers have become popular, but it is important to realize that the systolic and diastolic pressures vary substantially in different parts of the arterial tree. In general, the systolic pressure increases in more distal arteries, whereas the diastolic pressure decreases. Mean arterial pressure falls by only 1 to 2 mm Hg between the aorta and peripheral arteries. Moreover, during the measurement it is very important to place the site of measurement at the level of the heart, because gravity is another source of error. Indeed, if the subject measures the blood pressure upright, with the arm outstretched along the body, the pressure measured at the level of the arm could be raised by an average of 3 millimeters of mercury, whereas if it is measured at the level of the wrist this increase could be 15 millimeters of mercury, compared with the blood pressure measured in the aorta.

_

Three techniques of measuring blood pressure using a cuff: palpatory, auscultatory and oscillometric:

Blood pressure is measured noninvasively by occluding a major artery (typically the brachial artery in the arm) with an external pneumatic cuff. When the pressure in the cuff is higher than the blood pressure inside the artery, the artery collapses. As the pressure in the external cuff is slowly decreased by venting through a bleed valve, cuff pressure drops below systolic blood pressure, and blood will begin to spurt through the artery. These spurts cause the artery in the cuffed region to expand with each pulse and also produce the characteristic sounds known as Korotkoff sounds. The pressure in the cuff when blood first passes through the cuffed region of the artery is an estimate of systolic pressure. The pressure in the cuff when blood first starts to flow continuously is an estimate of diastolic pressure. There are several ways to detect pulsatile blood flow as the cuff is deflated: palpation, auscultation over the artery with a stethoscope to hear the Korotkoff sounds, and recording cuff pressure oscillations. These correspond to the three main techniques for measuring blood pressure using a cuff.

_

Palpatory method using pneumatic cuff:

The brachial artery should be palpated while the cuff is rapidly inflated to about 30 mmHg above the point at which the pulse disappears; the cuff is then slowly deflated, and the observer notes the pressure at which the pulse reappears. This is the approximate level of the systolic pressure. Palpatory estimation is important, because phase I sounds sometimes disappear as pressure is reduced and reappear at a lower level (the auscultatory gap), resulting in systolic pressure being underestimated unless already determined by palpation. The palpatory technique is useful in patients in whom auscultatory endpoints may be difficult to judge accurately: for example, pregnant women, patients in shock or those taking exercise.

_

The radial artery is often used for palpatory estimation of the systolic pressure, but by using the brachial artery the observer also establishes its location before auscultation. In the palpatory method, the appearance of a distal pulse indicates that cuff pressure has just fallen below systolic arterial pressure.  

_

Note:

The palpatory method must precede the auscultatory method whenever BP is determined manually with a manometer and the auscultatory technique.

________

Actual manual measurement of BP by auscultatory method:

_

_

The auscultatory method has been the mainstay of clinical blood pressure measurement for as long as blood pressure has been measured but is gradually being supplanted by other techniques that are more suited to automated measurement. The auscultatory method involves mercury, aneroid, and hybrid sphygmomanometers. It is surprising that nearly 100 years after it was first described, and despite the subsequent recognition of its limited accuracy, the Korotkoff technique for measuring blood pressure has continued to be used without any substantial improvement. The brachial artery is occluded by a cuff placed around the upper arm and inflated to above systolic pressure. As it is gradually deflated, pulsatile blood flow is re-established and accompanied by sounds that can be detected by a stethoscope held over the artery just below the cuff. Traditionally, the sounds have been classified as 5 phases: phase I, appearance of clear tapping sounds corresponding to the appearance of a palpable pulse; phase II, sounds become softer and longer; phase III, sounds become crisper and louder; phase IV, sounds become muffled and softer; and phase V, sounds disappear completely. The fifth phase is thus recorded as the last audible sound. The sounds are thought to originate from a combination of turbulent blood flow and oscillations of the arterial wall. There is agreement that the onset of phase I corresponds to systolic pressure but tends to underestimate the systolic pressure recorded by direct intra-arterial measurement. The disappearance of sounds (phase V) corresponds to diastolic pressure but tends to occur before diastolic pressure determined by direct intra-arterial measurement. No clinical significance has been attached to phases II and III.

_

The Korotkoff sound method tends to give values for systolic pressure that are lower than the true intra-arterial pressure, and diastolic values that are higher. The range of discrepancies is quite striking: One author commented that the difference between the 2 methods might be as much as 25 mm Hg in some individuals. There has been disagreement in the past as to whether phase IV or V of the Korotkoff sounds should be used for recording diastolic pressure, but phase IV tends to be even higher than phase V when compared against the true intra-arterial diastolic pressure and is more difficult to identify than phase V. There is now general consensus that the fifth phase should be used, except in situations in which the disappearance of sounds cannot reliably be determined because sounds are audible even after complete deflation of the cuff, for example, in pregnant women, patients with arteriovenous fistulas (e.g., for hemodialysis), and aortic insufficiency.  Most of the large-scale clinical trials that have evaluated the benefits of treating hypertension have used the fifth phase.

_

Auscultatory gap:

In older patients with a wide pulse pressure, the Korotkoff sounds may become inaudible between systolic and diastolic pressure, and reappear as cuff deflation is continued. This phenomenon is known as the auscultatory gap. In some cases, this may occur because of fluctuations of intra-arterial pressure and is most likely to occur in subjects with target organ damage. The auscultatory gap often can be eliminated by elevating the arm overhead for 30 seconds before inflating the cuff and then bringing the arm to the usual position to continue the measurement. This maneuver reduces vascular volume in the limb and improves inflow to enhance the Korotkoff sounds. You can also approximate the systolic BP by the palpatory method, then inflate the cuff to 30 mm Hg above it and deflate at a rate of 2 mmHg per second to determine the BP accurately. In this way, you can avoid underestimating systolic BP because of an auscultatory gap. The auscultatory gap is not an issue with nonauscultatory methods.

_

Measurement of BP manually by mercury sphygmomanometer:

_

Stethoscope:

A stethoscope should be of a high quality, with clean, well-fitting earpieces. The American Heart Association recommends using the bell of the stethoscope over the brachial artery, rather than placing the diaphragm over the antecubital fossa, on the basis that the bell is most suited to the auscultation of low-pitched sounds, such as the Korotkoff sounds. However, it probably does not matter much if the bell or diaphragm is used in routine blood pressure measurement, provided the stethoscope is placed over the palpated brachial artery in the antecubital fossa. As the diaphragm covers a greater area and is easier to hold than a bell, it is reasonable to recommend it for routine clinical measurement of blood pressure.

_

Position of manometer:

The observer should take care about positioning the manometer:

• The manometer should be no further than 1 meter away, so that the scale can be read easily.

• The mercury column should be vertical (some models are designed with a tilt) and at eye level. This is achieved most effectively with stand-mounted models, which can be easily adjusted to suit the height of the observer.

• The mercury manometer has a vertical scale and errors will occur unless the eye is kept close to the level of the meniscus. The aneroid scale is a composite of vertical and horizontal divisions and numbers, and must be viewed straight-on, with the eye on a line perpendicular to the centre of the face of the gauge.

_

Placing the cuff:

The cuff should be wrapped round the arm, ensuring that the bladder dimensions are accurate. If the bladder does not completely encircle the arm, its centre must be over the brachial artery. The rubber tubes from the bladder are usually placed inferiorly, often at the site of the brachial artery, but it is now recommended that they should be placed superiorly or, with completely encircling bladders, posteriorly, so that the antecubital fossa is easily accessible for auscultation. The lower edge of the cuff should be 2–3 cm above the point of brachial artery pulsation.

_

1. The patient should be relaxed and seated, preferably for several minutes (at least 5 minutes). Ideally, patients should not take caffeine-containing beverages or smoke for two hours before blood pressure is measured.

2. Ideally, patients should not exercise within half an hour of the measurement being taken (National Nutrition Survey User’s Guide).

3. Use a mercury sphygmomanometer. All other sphygmomanometers should be calibrated regularly against mercury sphygmomanometers to ensure accuracy.

4. Bladder length should be at least 80%, and width at least 40% of the circumference of the mid-upper arm. If the Velcro on the cuff is not totally attached, the cuff is probably too small.

5. Wrap cuff snugly around upper arm, with the centre of the bladder of the cuff positioned over the brachial artery and the lower border of the cuff about 2 cm above the bend of the elbow.

6. Ensure cuff is at heart level, whatever the position of the patient.

7. Palpate the brachial pulse of the arm in which the blood pressure is being measured.

8. Inflate cuff to the pressure at which the brachial pulse disappears and note this value. Deflate cuff, wait 30 seconds.

9. Place the stethoscope gently over the brachial artery at the point of maximal pulsation; a bell endpiece gives better sound reproduction, but in clinical practice a diaphragm is easier to secure with the fingers of one hand, and covers a larger area. The stethoscope should be held firmly and evenly but without excessive pressure, as too much pressure may distort the artery, producing sounds below diastolic pressure. The stethoscope endpiece should not touch the clothing, cuff or rubber tubes, to avoid friction sounds.

10. The cuff should then be inflated rapidly to about 30 mmHg above the palpated systolic pressure and deflated at a rate of 2–3 mmHg per pulse beat (or per second), during which the auscultatory phenomena described below will be heard.

11. For recording the systolic reading, use phase I Korotkoff (the first appearance of sound). For diastolic pressure, use phase V Korotkoff (disappearance of sound). 

12. When all sounds have disappeared, the cuff should be deflated rapidly and completely to prevent venous congestion of the arm before the measurement is repeated.

13. Wait for 30 seconds before repeating the procedure in the same arm. Average the readings. If the first two readings differ by more than 6 mm Hg systolic or if initial readings are high, take several readings after five minutes of quiet rest. I recommend ignoring the first reading altogether. Blood pressure should be taken at least once in both arms and the higher pressure subsequently used.

14. Leaving the cuff partially inflated for too long will fill the venous system and make the sounds difficult to hear. To avoid venous congestion, it is recommended that at least 30 seconds should elapse between readings. Conversely, if the sounds are difficult to hear initially, the veins can be emptied and the sound magnified if the patient raises the arm over the head with the cuff deflated. Milk the forearm down and inflate the cuff while the arm is still raised. Then quickly return the arm to the usual position and take the reading.

15. In the case of arrhythmias, additional readings may be required to estimate the average systolic and diastolic pressure. Isolated extra beats should be ignored. Note the rhythm and pulse rate.
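The repeat-and-average rule in steps 13 and 14 can be sketched as follows; the function name, the (systolic, diastolic) tuple format, and the choice of returning a flag for further readings are illustrative assumptions, not part of any formal protocol.

```python
def office_bp_summary(readings):
    """Summarize repeated office readings per the protocol above.

    readings: list of (systolic, diastolic) tuples in mmHg, in the order
    taken; at least three readings are expected. The first reading is
    ignored altogether, the rest are averaged, and a flag is raised when
    the first two retained readings differ by more than 6 mmHg systolic
    (per step 13, several further readings after rest are then needed).
    """
    if len(readings) < 3:
        raise ValueError("take at least three readings")
    kept = readings[1:]                      # ignore the first reading
    needs_more = abs(kept[0][0] - kept[1][0]) > 6
    avg_sys = sum(s for s, _ in kept) / len(kept)
    avg_dia = sum(d for _, d in kept) / len(kept)
    return round(avg_sys), round(avg_dia), needs_more
```

The same procedure would be repeated for the other arm, with the higher of the two averages used subsequently.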

_

The phases of sound during gradual deflation of the cuff over the brachial artery are shown in the table below; they were first described by Nikolai Korotkoff and later elaborated by Witold Ettinger.

_

Diastolic dilemma:

For many years, recommendations on blood pressure measurement have been equivocal as to the diastolic endpoint – the so-called ‘diastolic dilemma’. Phase IV (muffling) may coincide with phase V (disappearance) or be as much as 10 mmHg greater, but usually the difference is less than 5 mmHg. There has been resistance to general acceptance of the silent endpoint until recently, because the silent endpoint can be considerably below the muffling of sounds in some groups of patients, such as children, pregnant women, or anaemic or elderly patients. In some patients, sounds may even be audible when cuff pressure is deflated to zero. There is now a general consensus that disappearance of sounds (phase V) should be taken as diastolic pressure (as originally recommended by Korotkoff in 1910). When the Korotkoff sounds persist down to zero, muffling of sounds (phase IV) should be recorded for diastolic pressure, and a note made to this effect.

_

Inflation/Deflation System:

Indirect blood pressure measurement requires that occlusion of the brachial artery is produced by gradual inflation and deflation of an appropriately sized cuff. The tubing from the device to the cuff must be of sufficient length (70 cm or more) to allow for its function in the office setting. Successful inflation and deflation requires an airtight system; ongoing inspection and maintenance of the tubing for deterioration of the rubber (cracking) and the release valve are required. In my experience, air leakage from rubber tubing and bladder in cuff is the most common malfunction of manometer resulting in incorrect BP measurement.

_

Points to be noted while recording blood pressure:

The following points should be recorded with the blood pressure measurement [made to the nearest 2 mmHg without rounding-off to the nearest 5 or 10 mmHg (digit preference)]:

(i) position of the individual – lying, sitting or standing

(ii) the arm in which the measurement will be made – right or left

(iii) blood pressure in both arms on first attendance

(iv) arm circumference and inflatable bladder size

(v) phases IV and V for diastolic blood pressure

(vi) an auscultatory gap if present

(vii) state of the individual – e.g. anxious, relaxed

(viii) time of drug ingestion.

_

Effects of Body Position:  

Blood pressure measurement is most commonly made in either the sitting or the supine position, but the two positions give different measurements. It is widely accepted that diastolic pressure measured while sitting is higher than when measured supine (by ≈5 mm Hg), although there is less agreement about systolic pressure. When the arm position is meticulously adjusted so that the cuff is at the level of the right atrium in both positions, the systolic pressure has been reported to be 8 mm Hg higher in the supine than the upright position. In the supine position, the right atrium is approximately halfway between the bed and the level of the sternum; thus, if the arm is resting on the bed, it will be below heart level. For this reason, when measurements are taken in the supine position the arm should be supported with a pillow. In the sitting position, the right atrium level is the midpoint of the sternum or the fourth intercostal space. Other considerations include the position of the back and legs. If the back is not supported (as when the patient is seated on an examination table as opposed to a chair), the diastolic pressure may be increased by 6 mm Hg. Crossing the legs may raise systolic pressure by 2 to 8 mm Hg.

_

Effects of Arm Position:

The position of the arm can have a major influence when the blood pressure is measured; if the upper arm is below the level of the right atrium (as when the arm is hanging down while in the sitting position), the readings will be too high. Similarly, if the arm is above heart level, the readings will be too low. These differences can be attributed to the effects of hydrostatic pressure and may be 10 mm Hg or more, or about 2 mm Hg for every inch above or below heart level. Hydrostatic pressure is the pressure exerted by a fluid at equilibrium at a given point within the fluid, due to the force of gravity. It increases in proportion to depth measured from the surface because of the increasing weight of fluid exerting downward force from above. Gravity thus affects blood pressure via hydrostatic forces (e.g. during standing).
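The hydrostatic effect described above can be turned into a simple correction; the function name is hypothetical, and the coefficient of 0.8 mmHg per cm is simply the text's figure of 2 mm Hg per inch converted to centimetres (the wrist-device section later quotes a comparable 7 mmHg per 10 cm).

```python
def correct_for_cuff_height(measured_mmhg, cm_below_heart_level):
    """Adjust a reading for cuff height relative to the right atrium.

    A cuff below heart level reads high and one above reads low, by
    roughly 2 mmHg per inch (about 0.8 mmHg per cm, the rounded
    coefficient assumed here). Pass a negative distance for a cuff
    above heart level.
    """
    MMHG_PER_CM = 0.8  # approximation of "2 mm Hg for every inch"
    return measured_mmhg - MMHG_PER_CM * cm_below_heart_level
```

For example, a reading of 128 mmHg taken with the cuff 10 cm below heart level would be corrected to about 120 mmHg.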

_

Other physiological factors that may influence the blood pressure during the measurement process include muscle tension. If the arm is held up by the patient (as opposed to being supported by the observer), the isometric exercise will raise the pressure. BP should be measured by keeping the arm cuff at heart level, with extension of the lower arm, and relaxation of the arm by means of a supporting pillow.

_

Cuff:

A soft arm cuff is usually recommended. In subjects with standard proportions, a hard plastic cuff is also applicable. In subjects with excessively thick or thin arms, large cuffs or small cuffs, respectively, should be used.

A) Site for cuff:

The cuff oscillometric principle is applicable to any site where an arterial pulse is available. However, the standard site for BP measurements is the upper arm, and several issues arise when BP is measured at sites other than the upper arm. At present, three types of electrical devices for home BP measurements are commercially available: the arm-cuff device, the wrist-cuff device and the finger-cuff device. In 1999, 7 million of these electrical devices were produced in the Far East (including Japan, Korea and Taiwan). Of those, 35% were wrist-cuff devices. Previously, finger-cuff devices commanded a considerable portion of the market share owing to their convenience and ease of use. However, it is now known that finger BP is physiologically different from brachial BP, and issues of vasospasm in the winter season as well as hydrostatic pressure differences are inevitable. Therefore, manufacturers have decreased production of finger-cuff devices and extensively increased production of wrist-cuff devices. In Japan wrist-cuff devices have 35% of the market share, and in Germany they possess almost half of the market share. Wrist-cuff devices are much easier to handle and more portable, but have several serious shortcomings. The most important issue is the necessity for correction of the hydrostatic pressure. The reference level for BP measurements is the right atrium. When the measurement site is 10 cm below the right atrium, SP and DP are measured as 7 mm Hg higher than at the level of the right atrium, and vice versa. Therefore, instructions for the wrist-cuff device indicate that the wrist must be kept at heart level. However, it is uncertain whether general users can accurately recognize where the heart level is. For example, the apex of the heart is sometimes determined as the heart level, but it is actually 5–10 cm lower than the right atrium, resulting in a 3.5–7-mm Hg higher BP reading compared with a measurement taken at the right atrium level. 
A 10-cm difference from the right atrium level easily and frequently occurs in usual settings. This difference may have serious implications for public health policies as well as clinical practice. In this situation, when the wrist is settled on the chest at the site of the heart in the supine position, the wrist is sometimes laid at a level 5–10 cm higher than that of the heart level, leading to lower BP measurements by 3.5–7 mm Hg than BP measurements at the right atrium. This issue also applies to the arm-cuff device, and adequate instruction is necessary when home BP is measured by the arm-cuff device. Even after appropriate correction of the hydrostatic pressure in the wrist-cuff device, another issue remains concerning the anatomy of the wrist. At the wrist, the radial and ulnar arteries are surrounded by the radial bone, ulnar bone and several long tendons, including the long palmar tendon. Therefore, even a sufficient excess of cuff pressure over arterial pressure does not necessarily occlude these arteries completely. Measurements are also influenced by flexion and hyperextension of the wrist. As a result, wrist-cuff devices sometimes provide erroneous readings, especially for SP. At present, the wrist-cuff device is inappropriate as a tool for clinical decision making. Recently, a wrist-cuff device that does not work unless the device is at the heart level has been developed, but even such devices do not overcome this anatomical issue. However, the wrist-cuff device has a certain merit in terms of convenience. Arm-cuff devices also have some shortcomings, such as application to a thick arm, the relationship between the cuff and clothes and the position of the arm cuff in relation to the elbow joint. The wrist-cuff device can overcome these shortcomings. However, I recommend the use of an arm-cuff device operated under standard measurement procedures.

B) Type of Cuff:

At present, soft cuffs and hard plastic cuffs are available for automatic arm-cuff devices for home BP measurements. In individuals with thick arms, a hard plastic cuff does not necessarily fit the arm, resulting in erroneous measurements. Thus, a soft cuff is more suitable, although in certain subjects a hard plastic cuff is convenient and measures BP accurately. Among cuff-oscillometric devices, the width and length of the cuff bladder differ among manufacturers. This is permitted by the Association for the Advancement of Medical Instrumentation (AAMI) and the American National Standards Institute (ANSI) as a prerequisite, provided that the cuff pressure is transmitted to the artery and can occlude the brachial artery completely. In individuals with excessively thick or thin arms, the use of large or small cuffs, respectively, is recommended.

_

Inflation of the cuff:

Insufficient inflation leads to underestimation of the systolic (maximal) blood pressure. Self-measurement devices address this with an automatic inflation system. Many self-measurement blood pressure devices inflate the cuff to 180 mm Hg and then deflate it gradually. If this pressure is lower than the systolic blood pressure, the device inflates the cuff again until the pressure is above the systolic blood pressure. Many devices allow presetting of the maximal inflation pressure, such as 140, 170, 200 or 240 mm Hg. Thus, if the cuff is inflated to 140 mm Hg and the systolic blood pressure is 190 mm Hg, the cuff is re-inflated to 170 and then 200 mm Hg. The most sophisticated devices inflate the cuff gradually while monitoring the sounds over the artery, and stop inflation as soon as the cuff pressure exceeds the systolic blood pressure.
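The stepped re-inflation logic described above can be sketched as follows. The preset levels are those mentioned in the text, but the decision logic is an illustrative simplification of what a real device does (a real device infers "insufficient" from the absence of pulse oscillations, not from a known systolic value):

```python
# Sketch of stepped cuff re-inflation against preset pressure levels.
# Illustrative only; not the logic of any specific commercial device.

PRESETS = [140, 170, 200, 240]  # mmHg inflation targets from the text

def inflation_target(systolic_bp):
    """Return the first preset exceeding the systolic pressure,
    stepping up through the presets as a device would re-inflate."""
    for target in PRESETS:
        if target > systolic_bp:
            return target
    return PRESETS[-1]  # safety ceiling

print(inflation_target(190))  # 200: 140 and 170 were insufficient
print(inflation_target(125))  # 140 suffices on the first inflation
```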

_

Deflation of the cuff:

Deflation must be carefully controlled to avoid measurement error. If deflation is too fast, the systolic blood pressure may be underestimated and the diastolic blood pressure overestimated. The best self-measurement devices deflate at a programmed rate of 2 mm Hg per second. Other devices key the deflation rate to the heart beats, but these are valid only when the patient's heart rate is between 60 and 80 per minute.
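The reason deflation speed matters can be made concrete: Korotkoff sounds can only be detected once per heartbeat, so the worst-case error equals the pressure drop between consecutive beats. A small illustrative calculation (not taken from any device manual):

```python
# Why deflation speed matters: sounds are sampled once per heartbeat,
# so the cuff pressure falls by (rate / beats-per-second) between
# consecutive chances to hear a sound. Illustrative sketch.

def worst_case_error_mmhg(deflation_rate_mmhg_s, heart_rate_bpm):
    """Pressure drop between successive beats: the worst-case amount by
    which systolic is underestimated (or diastolic overestimated)."""
    beats_per_second = heart_rate_bpm / 60.0
    return deflation_rate_mmhg_s / beats_per_second

print(worst_case_error_mmhg(2, 60))   # 2.0 mmHg at the recommended rate
print(worst_case_error_mmhg(10, 60))  # 10.0 mmHg if deflated too fast
```

This also shows why slow heart rates demand slower deflation: at 40 beats per minute, even the recommended 2 mm Hg/s rate allows a 3 mm Hg gap between beats.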

_

Differences between the Two Arms:

Several studies have compared the blood pressure measured in both arms, mostly using the auscultatory technique. Almost all have reported finding differences, but there is no clear pattern; thus, the difference does not appear to be determined by whether the subject is right- or left-handed. One of the largest studies was conducted in 400 subjects using simultaneous measurements with oscillometric devices, which found no systematic differences between the 2 arms, but 20% of subjects had differences of >10 mm Hg.  Although these findings are disturbing, it is not clear to what extent the differences were consistent and reproducible, as opposed to being the result of inherent blood pressure variability. Nevertheless, it is recommended that blood pressure should be checked in both arms at the first examination. This may be helpful in detecting coarctation of the aorta and upper extremity arterial obstruction. When there is a consistent inter-arm difference, the arm with the higher pressure should be used. In women who have had a mastectomy, blood pressure can be measured in both arms unless there is lymphedema. Self-BP measurements at home, however, are usually performed using the non-dominant arm. When an apparent difference in BP is observed between the arms in a clinical setting, the arm showing the higher BP should be used for self-BP measurements. To provide consistent results, the same arm should always be used for self-BP measurements.

_

Cuff Size:

_

The figure below differentiates between cuff and bladder. Whenever we talk of cuff size, we actually mean bladder size.

_

Von Recklinghausen recognized in 1901 that Riva-Rocci’s device for determination of systolic blood pressure by palpation had a significant flaw: its 5-cm-wide cuff. Multiple authors have shown that the error in blood pressure measurement is larger when the cuff is too small relative to the patient’s arm circumference than when it is too large. Previous epidemiological data from Britain and Ireland had suggested that arm circumferences of >34 cm were uncommon. Data from NHANES III and NHANES 2000 have shown the opposite in the United States. From 1988 to 2000 there was a significant increase in mean arm circumference and in the frequency of arm circumferences of >33 cm, because of increasing weight in the American population. This should not be surprising, because the prevalence of obesity in the United States increased from 22.9% in NHANES III (1988 to 1994) to >30% in 2000. Similar data regarding the increased frequency of larger arm circumferences were also found in a study of a referral practice of hypertensive subjects, in which a striking 61% of 430 subjects had an arm circumference of ≥33 cm. Recognition of the increasing need for the “large adult” cuff, or even the thigh cuff, for accurate blood pressure measurement is critical, because in practice frequently only the standard adult size is available. More importantly, it has been demonstrated that the most frequent error in measuring blood pressure in the outpatient clinic is “miscuffing,” with undercuffing of large arms accounting for 84% of miscuffings.

_


The “ideal” cuff should have a bladder length that is 80% and a width that is at least 40% of arm circumference (a length-to-width ratio of 2:1). A recent study comparing intra-arterial and auscultatory blood pressure concluded that the error is minimized with a cuff width of 46% of the arm circumference. For the large adult and thigh cuffs, the ideal width ratio of 46% of arm circumference is not practical, because it would result in a width of 20 cm and 24 cm, respectively. These widths would give a cuff that would not be clinically usable for most patients, so for the larger cuffs, a less than ideal ratio of width to arm circumference must be accepted. In practice, bladder width is easily appreciated by the clinician but bladder length often is not, because the bladder is enclosed in the cuff. To further complicate the issue for clinicians, there are no standards for manufacturers of different sizes of blood pressure cuff. This has led to significant differences in which arm circumferences are accurately measured by individual manufacturers’ standard adult and large adult cuffs. 
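The 80%-length and 40%-width ratios above translate directly into bladder dimensions. A minimal sketch (real cuffs, of course, come only in discrete manufacturer sizes, which is part of the problem the text describes):

```python
# Ideal bladder dimensions from the ratios in the text: length = 80%
# and width = 40% of arm circumference (a 2:1 length-to-width ratio).
# Sketch only; not a sizing chart from any manufacturer or guideline.

def ideal_bladder_cm(arm_circumference_cm):
    """Return (length, width) in cm for the ideal 2:1 bladder."""
    length = 0.8 * arm_circumference_cm
    width = 0.4 * arm_circumference_cm
    return length, width

length, width = ideal_bladder_cm(33)  # the increasingly common large arm
print(round(length, 1), round(width, 1))  # 26.4 13.2
```

The same arithmetic shows why the ideal 46% width ratio breaks down for very large limbs: a 52-cm thigh would call for a bladder roughly 24 cm wide, which is not clinically usable.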

_


Individual cuffs should be labeled with the range of arm circumferences to which they can be correctly applied, preferably with lines that show whether the cuff size is appropriate when it is wrapped around the arm. In patients with morbid obesity, one will encounter very large arm circumferences with a short upper arm length. This geometry often cannot be correctly cuffed, even with the thigh cuff. In this circumstance, the clinician may measure blood pressure with a cuff placed on the forearm, listening for sounds over the radial artery (although this may overestimate systolic blood pressure), or use a validated wrist blood pressure monitor held at the level of the heart.

_


Cuff Placement and Stethoscope:

Cuff placement must be preceded by selection of the appropriate cuff size for the subject’s arm circumference. The observer must first palpate the brachial artery in the antecubital fossa and place the midline of the bladder of the cuff (commonly marked on the cuff by the manufacturer) over the arterial pulsation on the patient’s bare upper arm. The sleeve should not be rolled up such that it has a tourniquet effect above the blood pressure cuff. On the other hand, applying the cuff over clothes is akin to the undercuffing error and will lead to overestimation of blood pressure. The lower end of the cuff should be 2 to 3 cm above the antecubital fossa to allow room for placement of the stethoscope. However, if a cuff that leaves such space has a bladder length that does not sufficiently encircle the arm (at least 80%), a larger cuff should be used, recognizing that if the cuff touches the stethoscope, artifactual noise will be generated. The cuff is then pulled snugly around the bare upper arm. Neither the observer nor the patient should talk during the measurement. Phase 1 (systolic) and phase 5 (diastolic) Korotkoff sounds are best heard using the bell of the stethoscope over the palpated brachial artery in the antecubital fossa, although some studies have shown that there is little difference when using the bell or the diaphragm. The key to good measurement is the use of a high-quality stethoscope with short tubing, because inexpensive models may lack the good tonal transmission properties required for accurate auscultatory measurement.

_

The clinician must also interpret BP measurement entries with some caution. One study showed that as many as 20% of logbook entries were incorrect or fictitious. Patients of mine from government hospitals have told me that nurses often do not take the blood pressure at all and enter fictitious BP readings on the indoor case sheets.

__

Number of measurements at clinic:

Because of the variability of measurements of casual blood pressure, decisions based on single measurements will result in erroneous diagnosis and inappropriate management. Reliability of measurement is improved if repeated measurements are made. At least two measurements at 1 min intervals should be taken carefully at each visit, with a repeat measurement if there is uncertainty or distraction; it is best to make a few carefully taken measurements rather than taking a number of hurried measurements.

_

American Heart Association Guidelines for In-Clinic Blood Pressure Measurement:

Recommendation: Patient should be seated comfortably, with back supported, legs uncrossed, and upper arm bared.
Comment: Diastolic pressure is higher in the seated position, whereas systolic pressure is higher in the supine position. An unsupported back may increase diastolic pressure; crossing the legs may increase systolic pressure.

Recommendation: Patient’s arm should be supported at heart level.
Comment: If the upper arm is below the level of the right atrium, the readings will be too high; if the upper arm is above heart level, the readings will be too low. If the arm is unsupported and held up by the patient, pressure will be higher.

Recommendation: Cuff bladder should encircle 80 percent or more of the patient’s arm circumference.
Comment: An undersized cuff increases errors in measurement.

Recommendation: Mercury column should be deflated at 2 to 3 mm per second.
Comment: Deflation rates greater than 2 mm per second can cause the systolic pressure to appear lower and the diastolic pressure to appear higher.

Recommendation: The first and last audible sounds should be recorded as systolic and diastolic pressure, respectively.
Comment: Measurements should be given to the nearest 2 mm Hg.

Recommendation: Neither the patient nor the person taking the measurement should talk during the procedure.
Comment: Talking during the procedure may cause deviations in the measurement.

_______

Auscultatory method using microphone:

Mercury and aneroid sphygmomanometers require the use of a stethoscope to hear the sounds over the brachial artery. A microphone has sometimes been integrated into the cuff to create an automatic auscultatory device. Unfortunately, such devices are not always reliable, both because of the dexterity needed in handling them and because the precision of the microphone-cuff assembly declines with time. Nevertheless, they have not been entirely abandoned.

______

Automated device:

Very often, self-measurement devices for blood pressure are automatic, i.e., the patient just has to press a button to begin the inflation. These automated devices use an electric pump to inflate the pneumatic cuff. Many devices are even equipped with a special program that can measure the blood pressure three times in a row. Most automated devices use the oscillometric technique to determine BP, although automated devices using the auscultatory technique, with a microphone-filter system to detect Korotkoff sounds, have also been made; these are now obsolete.

_
The Oscillometric Technique:

_

_

They measure systolic and diastolic pressures by oscillometric detection, using a piezoelectric pressure sensor and electronic components including a microprocessor. They do not measure systolic and diastolic pressures directly, per se, but calculate them from the mean pressure and empirical statistical oscillometric parameters. In the oscillometric method the cuff pressure is high pass filtered to extract the small oscillations at the cardiac frequency and the envelope of these oscillations is computed, for example as the area obtained by integrating each pulse. These oscillations in cuff pressure increase in amplitude as cuff pressure falls between systolic and mean arterial pressure. The oscillations then decrease in amplitude as cuff pressure falls below mean arterial pressure. The corresponding oscillation envelope function is interpreted by computer aided analysis to extract estimates of blood pressure. The point of maximal oscillations corresponds closely to mean arterial pressure. Points on the envelope corresponding to systolic and diastolic pressure, however, are less well established. Frequently a version of the maximum amplitude algorithm is used to estimate systolic and diastolic pressure values. The point of maximal oscillations is used to divide the envelope into rising and falling phases. Then characteristic ratios or fractions of the peak amplitude are used to find points corresponding to systolic pressure on the rising phase of the envelope and to diastolic pressure on the falling phase of the envelope. Current algorithms for oscillometric blood pressure implemented in commercial devices may be quite valid but are closely held trade secrets and cannot be independently validated.
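The maximum-amplitude algorithm described above can be illustrated with a toy implementation. The characteristic ratios used here (0.55 for systolic on the rising phase, 0.85 for diastolic on the falling phase) are placeholder values: as the text notes, the real ratios and algorithms are manufacturers' trade secrets.

```python
# Toy version of the maximum-amplitude oscillometric algorithm.
# The 0.55/0.85 characteristic ratios are illustrative placeholders,
# not the (secret) values used by any commercial device.

def estimate_bp(cuff_pressures, envelope, sys_ratio=0.55, dia_ratio=0.85):
    """cuff_pressures: falling cuff pressure at each beat (mmHg);
    envelope: oscillation amplitude at each beat. Returns (SP, MAP, DP)."""
    peak_idx = max(range(len(envelope)), key=envelope.__getitem__)
    mean_ap = cuff_pressures[peak_idx]  # MAP at the point of maximal oscillation
    sys_thresh = sys_ratio * envelope[peak_idx]
    dia_thresh = dia_ratio * envelope[peak_idx]
    # Rising phase (cuff pressure above MAP): first beat whose
    # amplitude reaches the systolic threshold.
    systolic = next(p for p, a in zip(cuff_pressures, envelope)
                    if a >= sys_thresh)
    # Falling phase (cuff pressure below MAP): last beat whose
    # amplitude is still above the diastolic threshold.
    diastolic = next(p for p, a in zip(reversed(cuff_pressures), reversed(envelope))
                     if a >= dia_thresh)
    return systolic, mean_ap, diastolic

# Synthetic deflation: pressures fall beat by beat, oscillation
# amplitudes rise to a peak near MAP and fall again.
pressures = [160, 150, 140, 130, 120, 110, 100, 90, 80, 70]
amplitudes = [0.1, 0.3, 0.6, 0.9, 1.0, 0.95, 0.8, 0.5, 0.3, 0.1]
print(estimate_bp(pressures, amplitudes))  # (140, 120, 110)
```

Note that the estimated MAP (120) sits between the diastolic and systolic values at roughly DP + one third of the pulse pressure, as expected physiologically.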

_

One advantage of the method is that no transducer need be placed over the brachial artery, so that placement of the cuff is not critical. Other potential advantages of the oscillometric method for ambulatory monitoring are that it is less susceptible to external noise (but not to low-frequency mechanical vibration), and that the cuff can be removed and replaced by the patient, for example, to take a shower. The main problem with the technique is that the amplitude of the oscillations depends on several factors other than blood pressure, most importantly the stiffness of the arteries. Thus, in older people with stiff arteries and wide pulse pressures the mean arterial pressure may be significantly underestimated. The algorithms used for detecting systolic and diastolic pressures differ from one device to another and are not divulged by the manufacturers. The differences between devices have been dramatically shown by studies using simulated pressure waves, in which a systolic pressure of 120 mm Hg was registered as low as 110 and as high as 125 mm Hg by different devices. Another disadvantage is that such recorders do not work well during physical activity, when there may be considerable movement artifact. Additionally, the bladders deflate at a manufacturer-specific “bleed rate,” which assumes a regular pulse between bleed steps as part of the algorithms used to determine systolic and diastolic pressure.

_

It is a simple and effective technique, validated by many medical societies. The technique can be easily automated and can be used in self-measurement devices by a great number of patients with high blood pressure. Currently, the majority of self-measurement devices for blood pressure use this technique, and the devices are generally reliable. The oscillometric technique has been used successfully in ambulatory blood pressure monitors and home monitors. Comparisons of several different commercial models with intra-arterial and Korotkoff sound measurements have shown generally good agreement, but the results have been better with ambulatory monitors than with the cheaper devices marketed for home use. Oscillometric devices are also now available for taking multiple measurements in a clinic setting.

_

Oscillometric vs. auscultatory:

There are a number of physiological and pathological states that may influence the ability of an oscillometric device to obtain a reading equivalent to that of a mercury sphygmomanometer. Oscillometric measurements depend on movement of the arterial wall, and on changes in the amplitude of that movement, and may therefore be altered. Oscillometric measurements cannot be relied on in patients with arrhythmias or some valvular heart diseases such as aortic incompetence. Other patients with altered vascular compliance, such as diabetics or the elderly, could have less accurate blood pressure readings with oscillometric measurement. Changes in vascular compliance may also be confounded by oedema, intravascular volume, a hyperdynamic circulation and changes in cardiac output, as in pre-eclampsia, in which oscillometric readings frequently underestimate the blood pressure. Although the accuracy and reproducibility of Korotkoff sounds in these disease states are not known, listening to the Korotkoff sounds remains the technique on which current knowledge of indirect blood pressure is based, and therefore the auscultatory method of blood pressure measurement is recommended in such populations.

_

Are oscillometric measurements reliable?

Oscillometric monitoring requires the recording of pressure pulses in the cuff, which arise through volume pulses of the artery. The course of the pulse pressure curve is recorded as the so-called ‘pulse oscillogram’. From the pulse amplitudes, an envelope curve is derived. The maximum of this oscillation envelope curve corresponds to the mean arterial pressure. Both systolic and diastolic blood pressures are determined from the shape of the envelope curve by means of a micro-computer. The underlying algorithms are specific to the respective commercial instruments. They are well-guarded secrets of the manufacturers. Users generally will not be informed about changes in the algorithms used. In addition to the algorithms and the quality of the electromechanical pressure transducer, further errors can influence the measurement accuracy of oscillometric devices. The recording of the oscillation pattern depends significantly on the anatomical position, elasticity and size of the artery. In addition, the size, histo-anatomy and distribution of the surrounding tissue affect the accuracy. This is particularly true for the circumference of the measurement site. Basically, the device calibration depends on the application site (upper arm, wrist, finger). Changes of vascular wall elasticity and arteriosclerotic vascular changes also affect the course and pattern of the pulse oscillogram. Finally, the oscillations are also dependent on the size and material of the cuff and of the pressure tube connections. The impact of these physiological-anatomical and technical factors on the device-specific oscillometric measurement accuracy requires a critical review of the measurement accuracy in an adequately sized patient sample. Unfortunately, such evaluation is not mandatory for all markets.
For example, according to the European standard (EN1060 1-3) the CE (European Conformity Mark) identification does not include such a mandatory clinical evaluation of the measurement accuracy; an omission, which is not commonly known by prescribing practitioners or users of the instruments. Therefore, only a small proportion of automated devices on the market have been qualified by clinical evaluations according to generally accepted protocols of an independent institution or scientific society such as the British Hypertension Society (BHS), the Deutsche Hochdruckliga [German Hypertension Society (test seal)], the American Association for the Advancement of Medical Instrumentation (AAMI) or according to the DIN58130. Due to the unsatisfactory number of sufficiently evaluated instruments, a proposal to simplify the evaluation procedure has been made by the ESH Working Group on Blood Pressure Monitoring. Further efforts are currently under discussion to standardize the underlying clinical protocols imposing an obligatory regulation in order to carry out such evaluations (EN standard 1060, part 4). Successfully evaluated devices may not guarantee a specific monitoring accuracy for all kinds of users. Therefore, in addition to the general exclusion of patients suffering from frequent cardiac arrhythmia (in particular atrial fibrillation) a comparative monitoring including the standard Korotkoff method is urgently required to evaluate the individual monitoring accuracy of a device for each single user. Clinical evaluation studies demonstrate that the measurement accuracy in wrist type devices is significantly lower compared with upper arm monitoring devices. The wrist-type device market-share in Germany is ∼60–80% despite the fact that the evaluation according to the ‘test seal protocol’ (Deutsche Hochdruckliga) has only been passed by one of 13 tested wrist devices. 

_

Doppler ultrasound to detect brachial BP:

Doppler ultrasound is based on the Doppler phenomenon. The frequency of sound waves varies depending on the speed of the sound transmitter in relation to the sound receiver. Doppler devices transmit a sound wave that is reflected by flowing erythrocytes, and the shift in frequency is detected. Frequency shift can be detected only for blood flow greater than 6 cm/sec. Doppler ultrasound is commonly used for the measurement of blood pressure in low-flow states, evaluation of lower extremity peripheral perfusion, and assessment of fetal heart sounds after the first trimester of pregnancy. Doppler’s sensitivity allows detection of systolic blood pressure down to 30 mm Hg in the evaluation of a patient in shock.  Devices incorporating this technique use an ultrasound transmitter and receiver placed over the brachial artery under a sphygmomanometer cuff. As the cuff is deflated, the movement of the arterial wall at systolic pressure causes a Doppler phase shift in the reflected ultrasound, and diastolic pressure is recorded as the point at which diminution of arterial motion occurs. Another variation of this method detects the onset of blood flow, which has been found to be of particular value for measuring systolic pressure in infants and children. In patients with very faint Korotkoff sounds (for example those with muscular atrophy), placing a Doppler probe over the brachial artery may help to detect the systolic pressure, and the same technique can be used for measuring the ankle–arm index, in which the systolic pressures in the brachial artery and the posterior tibial artery are compared to obtain an index of peripheral arterial disease. 

___________

Shortcoming of traditional brachial artery compression:

The occlusion by the cuff, used in the majority of indirect blood-pressure meters, changes the biomechanical properties of the arteries, resulting in a change in the systolic and diastolic values. The occlusion of the brachial artery influences the local value of blood pressure; in other words, the measurement changes the parameter to be measured. The change in blood pressure is different in different parts of the body, the change caused by inflation of the cuff differs from person to person, and even the same person can react to the occlusion differently. The widely used devices determine the momentary value of blood pressure, which results in an unpredictable error. According to the BHS and AAMI, the reference blood-pressure value is also determined by using a cuff; as a result of the occlusion, the reference value can also be biased. Presently available devices also neglect the variation caused by breathing, which can be as high as 10 mm Hg in the systolic pressure. The aim of research work in this area has been to increase the accuracy and reproducibility of indirect, cuff-based blood pressure measurement with monitors using the Pulse Wave Velocity (PWV) principle. The importance of all this is that brachial BP can be unreliable, especially in young people, whose more flexible blood vessel walls can give a misleadingly high blood pressure, leading to unnecessary medical interventions. Conversely, old people with stiffer blood vessels may give a misleadingly low brachial BP reading, disguising dangerous high blood pressure, which can be a precursor to heart attack or stroke.

________

The Pulse Wave Velocity (PWV) principle:

Since the 1990s a novel family of techniques based on the so-called pulse wave velocity (PWV) principle has been developed. These techniques rely on the fact that the velocity at which an arterial pressure pulse travels along the arterial tree depends, among other factors, on the underlying blood pressure. Accordingly, after a calibration maneuver, these techniques provide indirect estimates of blood pressure by translating PWV values into blood pressure values. The main advantage of these techniques is that it is possible to measure PWV values of a subject continuously (beat-by-beat), without medical supervision and without the need to inflate a brachial cuff. PWV-based techniques are still in the research domain and are not yet adapted to clinical settings. Non-intrusive blood pressure monitoring relies on either the pulse wave velocity (PWV) or its inverse, the pulse transit time (PTT). In general, PTT refers to the time it takes a pulse wave to travel between two arterial sites. PTT varies inversely with blood pressure changes and can be used to develop cuffless and continuous blood pressure measurement. There are a number of sophisticated pulse transit time measurement techniques, such as ultrasound Doppler, arterial tonometry and the so-called “two point” PPG method (Smith et al. 1999; Kanda et al. 2000; Lykogeorgakis 2002). However, the simplest and most convenient method is to compute PTT as the temporal difference between the R wave of an electrocardiogram (ECG) and the beginning of the following pulse wave measured by photoplethysmography (Lutter et al. 2002; Kazanavicius et al. 2003).
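The simple ECG-to-PPG transit-time computation named above can be sketched as follows. R-peak and pulse-foot detection are assumed to have been performed already, and the linear calibration constants are purely hypothetical; a real system would fit them per subject against cuff readings.

```python
# Sketch of PTT computation: pair each ECG R peak with the next PPG
# pulse foot. The calibration constants in estimate_sbp are hypothetical
# illustrative values, not from any published model.

def pulse_transit_times(r_peak_times, pulse_foot_times):
    """Pair each ECG R peak (seconds) with the next PPG pulse foot
    and return the transit times in seconds."""
    ptts = []
    feet = iter(pulse_foot_times)
    foot = next(feet, None)
    for r in r_peak_times:
        while foot is not None and foot <= r:
            foot = next(feet, None)
        if foot is None:
            break
        ptts.append(foot - r)
    return ptts

def estimate_sbp(ptt_s, a=-250.0, b=180.0):
    """Toy linear calibration: BP falls as PTT rises (a < 0)."""
    return a * ptt_s + b

ptts = pulse_transit_times([0.00, 1.00, 2.00], [0.25, 1.24, 2.26])
print([round(t, 2) for t in ptts])   # [0.25, 0.24, 0.26]
print(round(estimate_sbp(0.25), 1))  # 117.5
```

The inverse relation is the key design point: a shorter transit time means a stiffer, more pressurized arterial tree, so the calibration slope is negative.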

_

PPG:

Photoplethysmography (PPG) is a simple and low-cost optical technique that can be used to detect blood volume changes in the microvascular bed of tissue. It is often used non-invasively to make measurements at the skin surface. The PPG waveform comprises a pulsatile physiological waveform, attributed to cardiac synchronous changes in the blood volume with each heart beat, superimposed on a slowly varying baseline with various lower-frequency components attributed to respiration, sympathetic nervous system activity and thermoregulation. Although the origins of the components of the PPG signal are not fully understood, it is generally accepted that they can provide valuable information about the cardiovascular system. A PPG is often obtained by using a pulse oximeter, which illuminates the skin and measures changes in light absorption. The change in volume caused by the pressure pulse is detected by illuminating the skin with light from a light-emitting diode (LED) and measuring the amount of light either transmitted or reflected to a photodiode. Each cardiac cycle appears as a peak. Measuring blood pressure from the PPG signal is one of many non-invasive blood pressure methods. Blood pressure can be estimated using a wrist cuff together with a wrist PPG signal: during deflation of the wrist cuff, the PPG pulse appears at a certain point, analogous to the first Korotkoff sound, and after the pulse appears the morphology of the PPG pulses changes into a characteristic shape. These points are used to estimate the blood pressure.

_

The Finger Cuff Method of Penaz: The Photoplethysmographic (PPG) method:

This interesting method was first developed by Penaz and works on the principle of the “unloaded arterial wall.” Arterial pulsation in a finger is detected by a photoplethysmograph under a pressure cuff. The output of the plethysmograph is used to drive a servo-loop, which rapidly changes the cuff pressure to keep the output constant, so that the artery is held in a partially opened state. The oscillations of pressure in the cuff are measured and have been found to resemble the intra-arterial pressure wave in most subjects. This method gives an accurate estimate of the changes of systolic and diastolic pressure, although both may be underestimated (or overestimated in some subjects) when compared with brachial artery pressures; the cuff can be kept inflated for up to 2 hours. It is now commercially available as the Finometer (formerly Finapres) and Portapres recorders, and has been validated in several studies against intra-arterial pressures. The Portapres enables readings to be taken over 24 hours while the subjects are ambulatory, although it is somewhat cumbersome. The PPG signal helps to measure the systolic pressure directly, unlike the oscillometric method, which measures the mean pressure and gives only an estimate of the systolic and diastolic pressures. The additional information gained by monitoring the PPG signal before and during slow inflation provides more accurate results than conventional indirect methods and ensures that the cuff pressure exceeds the systolic pressure only slightly (by less than 10 mm Hg). The PPG signal also indicates if the cuff is placed or inflated improperly. Some tests have revealed that the photoplethysmographic method is not reliable, not only because of the measuring site of the blood pressure, but also because of the poor quality of the blood pressure data collected.
Besides photoplethysmography (PPG), piezoplethysmography and volume pressure recording are also used in various noninvasive blood pressure (NIBP) techniques for measuring the BP.   

_

Ultrasound Technique:

Researchers have demonstrated that Doppler ultrasound can be used to measure aortic PWV in a reliable and reproducible way. In addition, B-mode ultrasound provides an anatomical image that can increase the precision of measurements (for example, using the carotid or femoral bifurcation as a reference). This method has the further advantages of a shorter performance time, a short learning curve and the absence of anatomical limitations, which are especially pronounced in the carotid artery. The versatility of ultrasound also permits simultaneous exploration of other pathologies, such as plaques or blockages in the carotid and femoral territories, as well as assessment of intima-media thickness.

_

Tonometry:

The principle of this technique is that when an artery is partially compressed or splinted against a bone, the pulsations are proportional to the intra-arterial pressure. This has been developed for measurement of the blood pressure at the wrist, because the radial artery lies just over the radius bone. However, the transducer needs to be situated directly over the center of the artery; hence, the signal is very position-sensitive. This has been dealt with by using an array of transducers placed across the artery. Although the technique has been developed for beat-to-beat monitoring of the wrist blood pressure, it requires calibration in each patient and is not suitable for routine clinical use.

_

Applanation tonometry for BP measurement:

Another application is applanation tonometry, in which a single transducer is held manually over the radial artery to record the pressure waveform while systolic and diastolic pressures are measured from the brachial artery. This technique has been used to estimate central aortic pressure. The rationale for this is that the arterial pressure at the level of the aortic root is different from the brachial artery pressure, and that this difference varies according to a number of physiological and pathological variables.  Central aortic pressure is a better predictor of cardiovascular outcome than peripheral pressure and peripherally obtained blood pressure does not accurately reflect central pressure because of pressure amplification. Lastly, antihypertensive medications have differing effects on central pressures despite similar reductions in brachial blood pressure. Applanation tonometry can overcome the limitations of peripheral pressure by determining the shape of the aortic waveform from the radial artery. Waveform analysis not only indicates central systolic and diastolic pressure but also determines the influence of pulse wave reflection on the central pressure waveform. It can serve as a useful adjunct to brachial blood pressure measurements in initiating and monitoring hypertensive treatment, in observing the hemodynamic effects of atherosclerotic risk factors, and in predicting cardiovascular outcomes and events. Radial artery applanation tonometry is a noninvasive, reproducible, and affordable technology that can be used in conjunction with peripherally obtained blood pressure to guide patient management. The shape of the pressure waveform in the arterial tree is determined by a combination of the incident wave and the wave reflected from the periphery. 
In hypertensive subjects and subjects with stiff arteries, the systolic pressure wave in the aorta and brachial artery is augmented by a late systolic peak, which can be attributed to wave reflection and which is not seen in more peripheral arteries such as the radial artery. Using Fourier analysis, it is possible to derive the central aortic pressure waveform from the radial artery trace. However, comparisons with directly recorded aortic pressure made during cardiac catheterization have shown considerable scatter between the estimated and true values, so the technique cannot yet be recommended for routine clinical practice.

_______

BP measurement devices: BP monitors:

_

_

Auscultatory arm devices:

1. Mercury Sphygmomanometers:

The mercury sphygmomanometer has always been regarded as the gold standard for clinical measurement of blood pressure, but this situation is likely to change in the near future. The design of mercury sphygmomanometers has changed little over the past 50 years, except that modern versions are less likely to spill mercury if dropped. In principle, there is less to go wrong with mercury sphygmomanometers than with other devices, and one of the unique features is that the simplicity of the design means that there is negligible difference in the accuracy of different brands, which certainly does not apply to any other type of manometer. However, this should not be any cause for complacency. One hospital survey found that 21% of devices had technical problems that would limit their accuracy, whereas another found >50% to be defective. The random zero sphygmomanometer was designed to eliminate observer bias but is no longer available.

_

2. Aneroid Sphygmomanometers:

In these devices, the pressure is registered by a mechanical system of metal bellows that expands as the cuff pressure increases and a series of levers that register the pressure on a circular scale. This type of system does not necessarily maintain its stability over time, particularly if handled roughly. Aneroid devices are therefore inherently less accurate than mercury sphygmomanometers and require calibration at regular intervals. Recent developments in their design may make them less susceptible to mechanical damage when dropped. Wall-mounted devices may be less susceptible to trauma and, hence, more accurate than mobile devices. The accuracy of the manometers varies greatly from one manufacturer to another: 4 surveys conducted in hospitals in the past 10 years have examined the accuracy of aneroid devices and have shown significant inaccuracies ranging from 1% to 44%. The few studies that have been conducted with aneroid devices have focused on the accuracy of the pressure-registering system rather than on the degree of observer error, which is likely to be higher with the small dials used in many of the devices.

_

3. Hybrid Sphygmomanometers:

Devices have been developed that combine features of both electronic and auscultatory devices, and are referred to as “hybrid” sphygmomanometers. The key feature is that the mercury column is replaced by an electronic pressure gauge of the type used in oscillometric devices. Blood pressure is taken in the same way as with a mercury or aneroid device, by an observer using a stethoscope and listening for the Korotkoff sounds. The cuff pressure can be displayed as a simulated mercury column, as a digital readout, or as a simulated aneroid display. In one version, the cuff is deflated in the normal way, and when the systolic and diastolic pressures are heard, a button next to the deflation knob is pressed, which freezes the digital display to show the systolic and diastolic pressures. This has the potential of minimizing terminal digit preference, which is a major source of error with mercury and aneroid devices. The hybrid sphygmomanometer has the potential to become a replacement for mercury, because it combines some of the best features of both mercury and electronic devices, at any rate until the latter become accurate enough to be used without individual validation.

_

Selection of an accurate device:

An accurate device is fundamental to all measurements of blood pressure. If the device is inaccurate, attention to the detail of measurement methods is of little relevance. The accuracy of devices for measurement of blood pressure should not be judged on the sole basis of claims from manufacturers, which can be extravagant. Instead, devices should be validated according to international protocols, with the results published in peer-reviewed journals. Understandably, many are skeptical about the accuracy of SMBP devices. The Association for the Advancement of Medical Instrumentation and the British Hypertension Society have established the 2 standard protocols for instrument accuracy. Given the multitude of products available, few have been subjected to the rigorous standards of independent testing. The European Society of Hypertension found only 5 out of 23 tested devices worthy of recommendation. The investigators noted that all the recommended devices measured BP in the upper arm. They advised against the less accurate wrist and finger devices, as these can be subject to inaccuracies from peripheral vasoconstriction and errors in positioning with respect to heart level.

_

Mercury sphygmomanometers:

The mercury-containing sphygmomanometer should not be viewed as an absolute standard. It is, however, with all its faults as an indirect blood pressure determination, the method used to establish our current knowledge. Since Riva-Rocci’s time, mercury sphygmomanometers used with the occlusion-auscultatory technique have been employed in clinical and epidemiological studies on hypertension. They represent the cornerstone of cardiovascular disease prognosis and prevention, as well as of the daily clinical management of patients with high blood pressure. As a result of this time-honoured use, blood pressure values are still quantified in mmHg both in current practice and in research, and doctors still regard the mercury column as the most faithful indicator of the blood pressure levels in their patients. A commonly perceived advantage of mercury manometers lies in the fact that, when they are well maintained, they offer “absolute” measurements of blood pressure, and represent a “gold standard” reference technique used to validate all other methods that provide information on blood pressure levels in mmHg without using a mercury column. Blood pressure measurement based on the mercury sphygmomanometer is an indirect determination, and is difficult to mimic perfectly with other techniques unrelated to auscultation of Korotkoff sounds. The high density of liquid mercury means that a conveniently short column suffices to display the pressure in the cuff; the mercury column in a sphygmomanometer thus serves as a simple, gravity-based pressure gauge. When properly maintained and serviced, and when used by knowledgeable, trained health professionals, it can give accurate indirect measurements of both systolic and diastolic pressure. Currently it is considered to be the most accurate technique (O’Brien et al. 2003). 
A complete mercury sphygmomanometer requires a cuff, bladder, tubing and a rubber bulb, and should be maintained in good condition and serviced regularly according to the manufacturers’ instructions. Mercury sphygmomanometers are easily checked and maintained, but great care should be taken when handling mercury.

_

Limitations of mercury sphygmomanometer:

Despite its widespread availability for almost a century, there can be major problems with the use of mercury sphygmomanometers in clinical practice. Reports from hospitals and family practices have suggested that many mercury sphygmomanometers are defective because of poor maintenance (Beevers and Morgan 1993, Burke et al. 1982, Feher et al. 1992, Gillespie and Curzio 1998, Hutchinson et al. 1994, Markandu et al. 2000, Wingfield et al. 1996). Moreover, several studies have shown a lack of knowledge of the technical aspects of blood pressure measurement among the doctors, nurses and other health care professionals who use mercury sphygmomanometers. The reports also suggest that the technique of blood pressure measurement is often applied poorly. Additionally, there is a lack of knowledge of the appropriate blood pressure equipment and of how to maintain the devices so that they are calibrated and in pristine condition. One should be aware that issues of maintenance are a factor for every blood pressure measurement device.

_

There are several other limitations of using the auscultatory method which affect both mercury and aneroid manometers:

1. Terminal digit preference: The tendency of the observer to round readings to a preferred digit, e.g. recording 144/96 mmHg as 140/100 mmHg or 150/90 mmHg (systolic/diastolic blood pressure). This is zero preference: the observer finds it easier to read the prominent, larger 10 mmHg markings than the smaller 2 mmHg markings.

2. Errors may occur when the manometer is not kept vertical, for example when the device is rested on the side of the bed or tilted against a pillow. This is an issue when the device is used at the patient’s bedside, not when it is used for public-health monitoring.

3. Inflation/deflation system: Another important limitation to consider is the performance of the inflation/deflation system and of the occluding bladder encased in a cuff, and proper application of auscultation with a stethoscope. Those issues apply to all blood pressure measuring devices using the auscultatory method. The inflation/deflation system consists of an inflating and deflating mechanism connected by rubber tubing to an occluding bladder. The standard mercury sphygmomanometers used in clinical practice are operated manually, with inflation being effected by means of a bulb compressed by hand and deflation by means of a release valve, which is also controlled by hand. The pump and control valve are connected to the inflatable bladder and thence to the sphygmomanometer by rubber tubing. Leaks from cracked or perished rubber make accurate measurement of blood pressure difficult because the fall of the mercury cannot be controlled. The length of tubing between the cuff and the manometer should be at least 70 cm and that between the inflation source and the cuff should be at least 30 cm. Connections should be airtight and easily disconnected.

4. Oxidation of the mercury is another very common occurrence, which can increase with time and make the column difficult to read.

5. The markings on the column also fade with time, again making it impossible to read accurately.

6. Environmental concerns regarding mercury mean that there is no long-term future for these devices. These concerns have led to the imposition of bans in some European countries and supply in the UK is now restricted to healthcare use.
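Terminal digit preference (limitation 1 above) can be screened for empirically. The sketch below, using hypothetical readings, counts how often recorded pressures end in zero; with readings taken to the nearest 2 mmHg, each even terminal digit should appear roughly 20% of the time, so a large excess of zeros suggests rounding to the nearest 10:

```python
# Screening a set of recorded BP values for terminal digit preference.
# Readings here are hypothetical; with measurements to the nearest 2 mmHg,
# terminal digits 0, 2, 4, 6 and 8 should each occur about 20% of the time.
from collections import Counter

readings = [140, 150, 138, 140, 130, 160, 144, 140, 150, 120]  # systolic, mmHg
digit_counts = Counter(r % 10 for r in readings)
zero_fraction = digit_counts[0] / len(readings)
print(f"terminal-zero fraction: {zero_fraction:.0%} (expected ~20%)")
```

An audit finding most readings ending in zero, as in this illustrative sample, would point to observers reading only the prominent 10 mmHg markings.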

_

________

Automated oscillometric devices:

• Automated (spot-check) arm device:

This includes an electronic monitor with a pressure sensor, a digital display and an upper arm cuff. An electrically-driven pump raises the pressure in the cuff. Devices may have a user-adjustable set inflation pressure, or they will automatically inflate to the appropriate level, usually about 30 mmHg above an estimated systolic reading. When started, the device automatically inflates, then deflates the cuff and displays the systolic and diastolic values. The pulse rate may also be displayed. These devices may also have a ‘memory’ facility which stores the last measurement and previous readings. They are battery powered.

• Wrist device:

This includes an electronic monitor with a pressure sensor and an electrically-driven pump attached to a wrist cuff. Function is similar to the automated (spot-check) device above. Battery powered. Wrist monitors have the advantages of being smaller than arm devices and can be used in obese people, because the wrist diameter is little affected by obesity. A potential problem with wrist monitors is the systematic error introduced by the hydrostatic effect of differences in the position of the wrist relative to the heart. This can be avoided if the wrist is always at heart level when the readings are taken, but there is no way of knowing retrospectively whether this was done when a series of readings is reviewed. Devices are now available that will only record a measurement when the monitor is held at heart level.

• Finger device:

A more recent development is the finger blood pressure monitor, which measures blood pressure almost instantly and in a non-invasive way. It is a tiny, battery-operated digital device consisting of an electronic monitor and a sensor, with a small compartment into which the index finger is placed. Within a matter of seconds, the sensor reads the blood pressure in the finger and displays the result. The device is small and portable and hence is preferred for regular monitoring, especially when the person is constantly on the go. It uses oscillometric, pulse-wave or plethysmographic methods of measurement. Finger monitors have so far been found to be inaccurate and are not recommended.

• Spot-check non-invasive blood pressure (NIBP) monitor:

This is a more sophisticated version of the automated device above and is designed for routine clinical assessment. There may be an option to measure additional vital signs, such as oxygen saturation in the finger pulse (SpO2) and body temperature. Mains and battery powered.

• Automatic-cycling non-invasive blood pressure (NIBP) monitor:

This is similar to the spot-check NIBP monitor, but with the addition of an automatic-cycling facility to record a patient’s blood pressure at set time intervals. These are designed for bed-side monitoring in a clinical environment where repetitive monitoring of patients and an alarm function is required. These devices may incorporate the ability to measure additional vital signs. The alarm limits can usually be set to alert nursing staff when one or more of the measured patient parameters exceed the pre-set limits. Mains and battery powered.

• Multi-parameter patient monitors:

These are designed for use in critical care wards and operating theatres and monitor a range of vital signs including blood pressure. They may be able to communicate with a central monitoring station via Ethernet or Wi-Fi.

• Ambulatory blood pressure monitor:

This includes an upper arm cuff and an electronic monitor with a pressure sensor and an electrically-driven pump that attaches to the patient’s belt. The unit is programmed to record the patient’s blood pressure at pre-defined intervals over a 24-hour period during normal activities and stores the data for future analysis. Battery powered. Uses electronic auscultatory and oscillometric techniques.

________

Automated non-auscultatory (oscillometric) devices:

There is an ever-increasing market for oscillometric blood pressure devices, which has also expanded home surveillance such as self-measurement and ambulatory/24-hour monitoring. Home blood pressure measurement has been shown to be more reproducible than office blood pressure measurement (Stergiou et al. 2002), more predictive of cardiovascular events (Bobrie et al. 2004, Ohkubo et al. 2004) and reliable when used by non-clinicians (Nordmann et al. 1999). Out-of-office measurements are effective at removing the white-coat effect (Parati et al. 2003), particularly when an averaging mode is used (Wilton et al. 2007). Telemonitoring enables the patient to transmit home measurements directly to the clinician’s computer for further analysis, potentially enhancing early identification, reducing hospital visits (Pare et al. 2007) and improving blood pressure control in general practice (Parati et al. 2009a).

_

Automated devices are generally intended for use on the upper arm, but finger and wrist devices are also available. Few of these latter devices have been shown to be accurate according to independent accuracy assessments; only a small minority of wrist devices assessed achieved an acceptable accuracy (five in total) (O’Brien and Atkins 2007). Wrist devices are sensitive to errors related to positioning of the wrist at heart level, and some devices have position sensors. Very few of the wrist devices have passed clinical validation after independent assessment (Altunkan et al. 2006, Nolly et al. 2004). However, even the validated wrist devices with position sensors appear to give significantly different blood pressure values than arm devices in a large proportion of hypertensive patients (Stergiou et al. 2008d), while in an earlier study no such differences were observed (Cuckson et al. 2004). The European Society of Hypertension Guidelines state the preference of arm over wrist oscillometric devices (O’Brien et al. 2003, Parati et al. 2008b). No finger device has yet achieved the established validation standards (Elvan-Taspinar et al. 2003, Schutte et al. 2004).
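The hydrostatic error to which wrist devices are prone can be estimated from first principles. The sketch below (assuming a typical blood density of about 1060 kg/m³) converts the height of the cuff below heart level into the pressure offset added to the reading, roughly 0.8 mmHg per centimetre:

```python
# Hydrostatic pressure offset for a cuff displaced from heart level.
RHO_BLOOD = 1060.0     # approximate density of blood, kg/m^3 (assumption)
G = 9.81               # gravitational acceleration, m/s^2
PA_PER_MMHG = 133.322  # pascals per mmHg

def hydrostatic_error_mmhg(cm_below_heart: float) -> float:
    """Offset added to the reading when the cuff sits this many
    centimetres below heart level (negative if above)."""
    return RHO_BLOOD * G * (cm_below_heart / 100.0) / PA_PER_MMHG

# A wrist resting in the lap, some 10 cm below the heart, inflates the
# reading by roughly 8 mmHg; held 10 cm too high, it underestimates
# by the same amount.
print(round(hydrostatic_error_mmhg(10.0), 1))
```

This is why position sensors, or strict instruction to hold the wrist at heart level, matter far more for wrist devices than for upper-arm devices.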

_

An accurate automated sphygmomanometer capable of providing printouts of systolic, diastolic and mean blood pressure, together with heart rate and the time and date of measurement, should eliminate errors of interpretation and abolish observer bias and terminal digit preference. Moreover, elaborate training of observers would no longer be necessary, although a period of instruction and assessment of proficiency in using the automated device will always be required. Another advantage of automated measurement is the ability of such devices to store data for later analysis (Parati G et al. 2008b). This development is in fact taking place, and a number of long-term outcome studies are using automated technology to measure blood pressure instead of the traditional mercury ‘gold standard’. For example, in the large Anglo–Scandinavian Cardiac Outcomes Trial, the validated Omron HEM-705CP automated monitor was used in thousands of patients followed for about five years (Dahlöf et al. 2005, Hansson et al. 1998, Yusuf et al. 2008).

_

The table below shows advantages of current automated oscillometric devices:

_

Automated blood pressure measurement will eliminate the observer errors associated with the manual auscultatory technique, such as terminal digit preference, threshold avoidance, observer prejudice and rapid deflation (Beevers et al. 2001). However, for many devices, clinically significant differences exist between measurements obtained through automation and those obtained by auscultation. Automated device accuracy is not only device dependent, but also user dependent. As these devices are more likely to be used by untrained individuals, errors related to selecting the correct cuff size, adopting the recommended arm position, avoiding movement or talking during the measurement, or allowing sufficient rest beforehand may be more pronounced than with mercury sphygmomanometers. Various guidelines have been published for the correct use of automated devices, with specific methodologies advocated (Chobanian et al. 2003, O’Brien et al. 2003, Parati et al. 2008a), but these are not as well established as training for auscultatory blood pressure measurement. Automated devices have accuracy limitations in special groups with vascular damage that influences the oscillometric signal: these include patients with diabetes, arrhythmias or pre-eclampsia, and the elderly. This is related to arterial/vascular changes in these patients, which are likely to influence the recording of pressure waves by the device.

_

__________

Which is the best BP measurement device?

_

Is the mercury sphygmomanometer still ‘the gold standard’ of blood pressure monitoring?

It is undisputed that the mercury sphygmomanometer has the highest accuracy, with a high degree of technical agreement between devices of different producers. This ensures worldwide comparability of values measured with this method. Specific advantages of mercury-based manometer devices are the simple technique and a simple baseline correction. Nevertheless, several studies have reported on insufficient maintenance and calibration of mercury sphygmomanometers used in the clinical setting and in general practice. A check of the devices in a major teaching hospital showed that only 5% of the investigated instruments had been properly serviced while an inspection in general practices of an English district found that only ∼30% of the devices had been properly maintained. Regular maintenance intervals are infrequently met. Despite the relatively simple principle of the technique, instrument inspections disclosed defects in the manometers, cuffs and tubing systems of more than 50% of the mercury manometers in use: the defects had an impact on the correctness of the readings. This means that sufficient measurement accuracy is ensured only by devices which undergo regular technical evaluation and calibration at least on a yearly basis. Restrictions of the use of mercury in medical devices have already been imposed in the Netherlands and Sweden. This was felt to be necessary to avoid occupational health hazards and environmental contamination. This raises the question of whether mercury sphygmomanometers should still be used as standard devices for measuring blood pressure.

_

Are aneroid manometers a first-choice alternative?

Aneroid sphygmomanometers are the most commonly used alternative devices for measuring blood pressure in the clinical setting and in general practice. Instead of transferring pressure to a mercury column, they are designed to transfer the detected pressure via a mechanical system and an elastic expansion chamber to a gauge needle. The devices are characterized by their handy design and portability. The mechanism is, however, highly sensitive to mechanical strain. It can easily be damaged by mechanical impact, mainly as a result of accidental falls or knocks; accuracy can also decrease over time during clinical use. This may result in both calibration errors (which are often not immediately apparent) and baseline shifts. In addition, the technical design differs widely between models from different manufacturers. Depending on how the devices are used, instrument evaluation studies have demonstrated technical defects or unacceptable measurement inaccuracy in up to 60% of the devices evaluated. Reading errors occur more frequently in the range of high blood pressure values, where aneroid manometers tend to underestimate the patient’s blood pressure. Portable instruments, in particular, show a higher technical failure rate. In general practices the percentage of regularly serviced and recalibrated instruments is sometimes below 5%. If, however, aneroid manometers receive regular technical maintenance, their measurement accuracy is identical to that of standard mercury manometers; this has been demonstrated for wall-mounted instruments. Therefore, only devices which undergo a regular (half-)yearly technical inspection, including recalibration, ensure reliable measurement accuracy. Under these circumstances they can be adopted as a potential alternative to mercury sphygmomanometers. However, as a result of the widespread lack of such checks, one must unfortunately assume that the percentage of erroneous measurements is high. 
In particular, this applies to devices in which the manometer is not integrated into the cuff, since the cuff can act as ‘shock protection’. A newly developed mechanical, gear-free sphygmomanometer (Durashock, Welch-Allyn) apparently combines the advantage of a handy design with lower susceptibility to shock and impact; its calibration stability is therefore higher than that of conventional aneroid manometers.

_

Is the automated sphygmomanometer the better alternative?

It must be conceded that electronic blood pressure measurement devices have numerous advantages. They are small, compact and relatively inexpensive. It is recommended that automated devices be subjected to independent validation for accuracy; to this end, various assessment protocols are available from the Association for the Advancement of Medical Instrumentation, the British Hypertension Society and the European Society of Hypertension. Indeed, several studies have shown that the best models perform well in comparison with their manual counterparts. They contain no mercury and hence there is no concern regarding safety. They are also simple to use and, most importantly, remove the large observer bias that exists with mercury sphygmomanometers.

_

_

Problems with automated devices:

The advent of accurate oscillometric devices, however welcome, is not without problems. First, oscillometric devices have been notorious for their inaccuracy in the past, although more accurate devices are now appearing on the market. Secondly, most of the available oscillometric devices were designed for self-measurement of blood pressure by patients, and it should not be assumed that they will be suitable for clinical use, or that they will remain accurate with use, although some are being used successfully in hospital practice. Thirdly, oscillometric techniques cannot measure blood pressure accurately in all situations, particularly in patients with pre-eclampsia or arrhythmias such as atrial fibrillation; there are also individuals in whom these devices cannot measure blood pressure at all, for reasons that are not always apparent (Stergiou et al. 2009a, Van Popele et al. 2000). All alternative blood pressure measurement devices need to be validated in clinical protocols against the current gold standard of the mercury sphygmomanometer, until an alternative reference device is developed and recognised as such. Several international protocols, such as the ISO protocol, the British Hypertension Society (BHS) protocol and the European Society of Hypertension (ESH) International Protocol, are available for such clinical validation. Lists of validated oscillometric devices are available on dedicated websites, such as that of the British Hypertension Society and those of other national learned societies.

_

Accuracy and reliability of wrist-cuff devices for self-measurement of blood pressure: a 2014 study:

Self-measurement of blood pressure (BP) can offer advantages in the diagnosis, therapeutic evaluation and management of hypertension. Recently, wrist-cuff devices for self-measurement of BP have gained more than one-third of the world market share. In this study, the authors validated wrist-cuff devices and compared the results between wrist- and arm-cuff devices; the factors affecting the accuracy of wrist-cuff devices were also studied. The research group assessing the validity of automated blood pressure measuring devices consisted of 13 institutes in Japan, which validated two wrist-cuff devices (WC-1 and WC-2) and two arm-cuff devices (AC-1 and AC-2). A crossover method was used, in which the comparison was made between auscultation, performed by two observers using a double stethoscope on one arm, and the device on the opposite arm or wrist. The results suggest that wrist-cuff devices in their present form are inadequate for self-measurement of blood pressure and, thus, for general, clinical or practical use. However, wrist-cuff devices hold considerable promise, and improvements in technology may yet deliver the required accuracy and reliability.

_

It can be seen that most wrist-cuff devices have a questionable recommendation:

_

Synopsis of BP devices: 

________

Validation of the devices and monitors for BP measurement:

To guarantee good measurement quality, a self-measurement device must be validated by independent organizations or experts in blood pressure measurement. At present, there is no worldwide regulatory control of this type of device. Thus, the quality of self-measurement devices is very uneven, and only very few are currently validated. There are two reasons for validating a device: the first is to confirm whether the type of device is clinically applicable for BP measurement in the general population, and the other is to confirm whether the device can accurately and properly measure BP in an individual. Home measurement devices should be validated before use and at regular intervals (essentially once a year) during use.

_

All monitors in clinical use should be tested for accuracy. All oscillometric automated monitors that provide read-outs of systolic and diastolic pressure should be subjected by independent investigators to formal validation protocols. The original 2 protocols that gained the widest acceptance were developed by the Association for the Advancement of Medical Instrumentation (AAMI) in 1987 and the British Hypertension Society (BHS) in 1990, with revisions to both in 1993, and to the AAMI protocol in 2002. These required testing of a device by 2 trained human observers in 85 subjects, which made validation studies difficult to perform. One consequence of this has been that there are still many devices on the market that have never been adequately validated. More recently, an international group of experts who are members of the European Society of Hypertension Working Group on Blood Pressure Monitoring has produced an International Protocol that could replace the 2 earlier versions and is easier to perform. Briefly, it requires comparison of the device readings (4 in all) alternating with 5 mercury readings taken by 2 trained observers. Devices are recommended for approval if both the systolic and diastolic device readings are within 5 mm Hg of the observer readings for at least 50% of readings. It is recommended that only those devices that have passed this or similar tests be used in practice. However, the fact that a device passed a validation test does not mean that it will provide accurate readings in all patients: there can be substantial numbers of individual subjects in whom the error is consistently >5 mm Hg with a device that has achieved a passing grade. This may be more likely to occur in elderly or diabetic patients. For this reason, it is recommended that each oscillometric monitor be validated on each patient before the readings are accepted. 
No formal protocol has yet been developed for doing this, but if sequential readings are taken with a mercury sphygmomanometer and the device, then major inaccuracies can be detected. Another problem is that manufacturers may change the model number after a device has been tested without indicating whether the measurement algorithm has also been changed. Users should also be aware that some automated non-invasive blood pressure monitors may have been validated by reference to intra-arterial measurements. There can be differences in readings between these devices and those validated by reference to non-invasive (sphygmomanometric) measurements.
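The agreement criterion quoted above (device within 5 mm Hg of the observer readings for at least 50% of readings) can be expressed as a simple check. This is a deliberately simplified sketch with hypothetical readings; the actual AAMI/BHS/ESH protocols add further tolerance bands and per-subject requirements:

```python
# Simplified version of the agreement criterion described above.
def passes_simple_criterion(device, reference, tol_mmhg=5, min_fraction=0.5):
    """True if at least min_fraction of paired readings agree within tol_mmhg."""
    within = sum(1 for d, r in zip(device, reference) if abs(d - r) <= tol_mmhg)
    return within / len(device) >= min_fraction

# Hypothetical paired systolic readings (device vs. trained observer):
device_sys   = [122, 135, 141, 128, 150, 119]
observer_sys = [120, 138, 148, 127, 151, 126]
print(passes_simple_criterion(device_sys, observer_sys))  # 4 of 6 pairs within 5 mmHg
```

Even a "passing" result like this one leaves a third of readings more than 5 mm Hg adrift, which is why per-patient verification is still advised.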

_

In the interest of continuous technologic improvement, there should be a positive and close interaction between the validation centers and the manufacturers of the devices. Protocols should not restrict such exchange. For instance, after a negative stage 1 result of a validation, the manufacturer should have the possibility to adjust the device and resubmit it within a given time span, with the overall target of improved performance of the instrumentation and a better product at the end. Only if this possibility is waived, or the modified device again fails the study criteria, should a negative result be published to document that the device has failed and is not recommended for health care purposes.

_

With manual devices, such as mercury and aneroid monitors, it is recommended that the accuracy of the pressure registration mechanism be checked. In the case of mercury sphygmomanometers, this involves checking that the upper curve of the meniscus of the mercury column is at 0 mm Hg, that the column is free of dirt, and that it rises and falls freely during cuff inflation and deflation. Aneroid devices or other nonmercury devices should be checked by connecting the manometer to a mercury column or an electronic testing device with a Y-tube. The needle should rest at the zero point before the cuff is inflated and should register a reading that is within 4 mm Hg of the mercury column when the cuff is inflated to pressures of 100 and 200 mm Hg. The needle should return to zero after deflation.
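The static check described above lends itself to a short routine. The sketch below, with illustrative values, verifies that an aneroid needle rests at zero, agrees with the reference column within 4 mm Hg at 100 and 200 mm Hg, and returns to zero after deflation:

```python
# Static calibration check of an aneroid gauge against a reference column.
# Each pair is (reference pressure, aneroid reading), both in mmHg.
def aneroid_in_calibration(pairs, tol_mmhg=4):
    """True if every aneroid reading is within tol_mmhg of the reference."""
    return all(abs(observed - ref) <= tol_mmhg for ref, observed in pairs)

# Illustrative inspection: zero point, 100 mmHg, 200 mmHg, return to zero.
checkpoints = [(0, 0), (100, 103), (200, 198), (0, 1)]
print(aneroid_in_calibration(checkpoints))  # every reading within 4 mmHg
```

A gauge failing at the 200 mm Hg point but passing at zero would illustrate the calibration errors, as opposed to baseline shifts, described earlier for aneroid devices.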

__________

Automated BP recording in clinic:

Manual BP measurement is accurate when there is strict adherence to a BP measurement protocol, but readings might still be subject to “white-coat effect” and are often higher than BP measurements taken outside of the office setting. In the real world of everyday practice, physician and patient factors such as conversation during BP readings, recording of only a single BP reading, no antecedent period of rest before BP measurement, rapid deflation of the cuff, and digit preference with rounding off of readings to 0 or 5 all adversely affect the accuracy of manual BP measurement. The net result is a reading in routine clinical practice that is on average 9/6 mm Hg higher than BP taken in accordance with standardized guidelines for BP measurement in a research setting. Consequently, routine manual office BP has come to be regarded as an inferior method for diagnosing and managing hypertension. Even when performed properly in research studies, manual BP measurement is a relatively poor predictor of cardiovascular risk related to BP status compared with methods of out-of-office BP measurement such as 24-hour ambulatory BP monitoring (AMBP) or home BP measurement.

_

There is now an alternative to manual office BP measurement—automated office BP (AOBP). Automated oscillometric devices have recently been used in large-scale clinical trials and in population studies, including the current National Health and Nutrition Examination Survey. By incorporating validated, fully automated BP recorders into clinical practice, it is possible to improve the quality and accuracy of BP measurement in the office while eliminating most, if not all, of the white coat response. AOBP involves the use of a fully automated, oscillometric sphygmomanometer to obtain multiple BP readings while the patient rests alone in a quiet room. Studies in community-based, primary care settings, in patients referred for 24-hour AMBP, and in patients referred to a hypertension specialist have all shown that AOBP can virtually eliminate the white coat response, with AOBP readings being similar to the mean awake ambulatory BP. AOBP has other advantages over manual BP measurement. Multiple readings can be taken without a health professional being present, thus saving valuable time of office personnel for other tasks. Unlike manual BP, AOBP readings are similar when taken in the office and in nontreatment settings such as an AMBP unit. Multiple AOBP readings can be taken as frequently as every 1 minute, from the start of one reading to the start of the next. Finally, the cutpoint for normal BP vs. hypertension for AOBP (135/85 mm Hg) is similar to the values for both awake ambulatory BP and home BP.

_

A recent reevaluation of the cutpoint for a normal manual BP reading in routine practice has raised further questions about the use of the mercury sphygmomanometer. The traditional value of 140/90 mm Hg for defining hypertension was derived from carefully measured BP readings taken in the context of research studies or by specially trained health professionals in population surveys and other similar research settings. However, manual BP measurement in routine office practice is usually not performed in accordance with recommended guidelines, despite intensive efforts in recent decades by organizations such as the American Heart Association to improve the quality of manual BP measurement in the community. In the “real world,” manual BP readings are of relatively poor quality and accuracy, often exhibit digit preference (rounding off to the nearest zero), have little or no correlation with target organ damage, and show a weak correlation with the awake ambulatory BP, a gold standard for determining future risk of cardiovascular events in relation to BP status. The net result is a “real world” cutpoint for manual BP/hypertension closer to 150/95 mm Hg instead of 140/90 mm Hg, with about 25% of patients exhibiting a clinically important white coat response, leading to possible overtreatment or inappropriate treatment of hypertension. Whereas intensive education of physicians and other health professionals to improve the quality of BP measurement in routine practice has met with little success, replacing manual recorders such as the mercury sphygmomanometer with AOBP is relatively inexpensive, requires minimal training, and makes accurate BP measurement much less dependent on the expertise and training of the person recording the BP.

_________

Self measurement of blood pressure (SMBP):   

Self measurement (monitoring) of blood pressure is when a person (or carer) measures their own blood pressure outside the clinic—at home, in the workplace, or elsewhere. Self monitoring allows multiple measurements and therefore provides a more precise measure of “true” blood pressure, as well as information about blood pressure variability. Many investigators have found differences between blood pressure values obtained by health care professionals in a clinic and automated, self-determined measures obtained at home, the latter being on average about 8/4 mm Hg lower. The correlation between measurements at home and in the clinic has been reported to be as low as 0.21 for diastolic blood pressure. In line with these low correlations, Padfield and colleagues reported that the sensitivity and specificity of self-determined measures in diagnosing hypertension, when compared with pressures measured in the clinic, were 73% and 86% respectively. This finding assumes that the clinic pressures constitute a gold standard, which may not be the case. This raises the issue of which readings, home or clinic, are more valid. Studies have demonstrated that blood pressure measurements obtained at home can be highly reproducible. Reproducibility of readings is essential for accuracy, and these studies are therefore reassuring. Furthermore, Gould and colleagues found that the accuracy of self-determined readings at home and of professionally taken readings at the clinic were similar, as determined by intra-arterial pressures. However, the overriding issue here is the validity of self-determined measures of blood pressure in decisions about the diagnosis of hypertension and whether treatment should be initiated.

_

Effective management of BP has been shown to dramatically decrease the incidence of stroke, heart attack, and heart failure.  However, hypertension is usually a lifelong condition, and long-term adherence to lifestyle modification (such as smoking cessation, regular exercise, and weight loss) and medication treatment remains a challenge in the management of hypertension. Thus an increasing focus has been placed on developing strategies that can improve adherence and result in satisfactory BP control with the goal of improving health outcomes for hypertensive patients. One such proposed method is self-measured blood pressure (SMBP) monitoring. SMBP refers to the regular self-measurement of a patient’s BP at home or elsewhere outside the office or clinic setting. However, while patient self-participation in chronic disease management appears promising, the sustainability and clinical impact of this strategy remain uncertain. Also its impact on health care utilization is uncertain, since it may replace office visits for BP checks but may increase overall intensity of surveillance and treatment.

_

Self-monitoring of blood pressure has been advocated as an adjunct to diagnosis, particularly for the detection of white coat hypertension (defined as pressure that is persistently high when measured at the clinic but normal when measured elsewhere). Although there have been studies of home blood pressure monitoring as part of the management of treated hypertension, there have been few studies of self-monitoring as an adjunct to diagnosis and the initiation of therapy. Unfortunately, there is little information on the distribution of self-monitored pressures in the normotensive population, and there have been no prospective studies assessing the relation between the level of self-monitored blood pressure and the incidence of major illness or death from cardiovascular disease. The evidence from less rigorous cross-sectional assessments of monitoring at home and at the clinic is conflicting. Julius and colleagues have found that patients with high readings at the clinic and lower ones on self-assessment have hypertensive target-organ findings and cardiovascular risk factors similar to those of patients with sustained borderline elevation of blood pressure both at the clinic and at home. However, other investigators have found higher correlations of electrocardiographically determined left ventricular hypertrophy with self-determined blood pressure readings than with casual office readings, and higher correlations of echocardiographically determined left ventricular mass with blood pressure readings taken at home than with those taken at the clinic.

_

Given the consequences of both false-negative and false-positive diagnoses, the inaccuracy of many devices for the self-determination of blood pressure and the potential value of additional measurements in a patient's home, the accuracy of self-monitoring should be studied further and its value in diagnosis determined for those with mild elevations in blood pressure at the clinic. If patients are asked to measure their blood pressure at home, it is important that their equipment and technique be checked by health care professionals to ensure accuracy. Mercury sphygmomanometers are the most accurate and dependable devices and can be purchased for home use, but they are more difficult to master than the semi-automated or automated devices that are widely available. Mercury devices should probably not be suggested for patients with young children at home, in view of the possibility of a mercury spill. Patients with difficulty hearing or seeing should be asked to use automated devices only if someone else in the home can assist them. Some sphygmomanometers of all types are accurate, but most nonmercury devices are not. It is important that patients use the correct cuff size for their arm circumference. Thus, the given recommendations for blood pressure determination apply to the use of automated devices if they are found to be as accurate as mercury devices.

________

Home BP:

Home BP is information obtained in a non-medical setting and essentially by self-measurement. With home BP measurements, time-related BP information can be obtained over a long period. On the basis of these characteristics, home BP provides information indispensable for the diagnosis of white-coat hypertension, masked hypertension or early-morning hypertension. The frequency of white-coat hypertension based on home BP measurements has been reported to be 38–58% in cohorts of the general population, 15% in untreated patients with hypertension and 12–19% in hypertensive patients being treated. The frequency of masked hypertension based on home BP is reported to be about 10% in cohorts of the general population and 11–33% in hypertensive patients under treatment. Also, some home BP-measuring devices provide BP information during sleep at night. Moreover, home BP measurements can average BP over a long period and thus transform essentially highly variable BP values into stable BP information in the form of averaged BP. This is applied to BP measurements for pregnant women and children. Many studies have also reported the usefulness of home BP measurements for the diagnosis and treatment of hypertension in dialysis patients and diabetic patients, in whom daily management of BP has a critical influence on outcome.

_

Home blood pressure monitoring has been shown to be feasible; acceptable to patients, nurses, and doctors in general practice; and more suitable for the screening of “white coat” hypertension than ambulatory blood pressure monitoring.  The white coat effect is important in the diagnosis and treatment of hypertension, even in a primary care setting, and is not a research artefact.  Either repeated measurements by health professionals or ambulatory or home measurements may substantially improve estimates of blood pressure and management and control of hypertension. Home blood pressure measurements are the most acceptable method to patients and are preferred to either readings in the clinic or ambulatory monitoring. They provide accurate blood pressure measurements in most patients, although some patients of low educational level may have poor reporting accuracy.  Finally, blood pressure monitoring at home might help to improve awareness and concordance, and thus overall effective management.

_

Morning hypertension, and morning and evening home BP:

Although there is no precise definition of morning hypertension, a condition with a specifically high BP after waking early in the morning may be referred to as morning hypertension. According to the absolute values of home BP or AMBP, a value of 135/85 mm Hg or greater in the morning, for example, may be regarded as morning hypertension; however, the value in the morning must be higher than that in the evening to confirm that BP is high specifically in the morning. Morning hypertension may be the result of one of two patterns of diurnal BP change. One is the morning surge, a rapid elevation in BP around awakening from a low nocturnal level. The other is high BP in the morning observed in non-dippers, who show no normal nocturnal decrease in BP, or in risers, who show nighttime elevations in BP. Both patterns are considered possible risk factors for cardiovascular disease. Those who exhibit large morning–evening differences in BP have marked target organ damage, such as left ventricular hypertrophy. However, home BP measured in the evening also has high prognostic significance.
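Since no formal definition exists, the working criteria above can only be sketched under assumptions. The check below treats a morning home BP at or above 135/85 mm Hg, combined with a morning systolic value exceeding the evening one, as morning hypertension; comparing on the systolic value is an assumption, and the function name is illustrative.

```python
def is_morning_hypertension(morning_sbp, morning_dbp, evening_sbp):
    """Flag morning hypertension per the working definition above:
    morning home BP of 135/85 mm Hg or more AND a morning value higher
    than the evening value (compared here on systolic BP -- an
    assumption, as the text notes there is no precise definition)."""
    high_morning = morning_sbp >= 135 or morning_dbp >= 85
    return high_morning and morning_sbp > evening_sbp

print(is_morning_hypertension(142, 88, 128))  # True: high, and specifically high in the morning
print(is_morning_hypertension(138, 86, 142))  # False: high, but the evening value is higher still
```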

_

Nighttime BP during SMBP:

During sleep at night, BP is usually measured by AMBP. Recently, home BP-measuring devices capable of monitoring BP during sleep at night have been developed, and their performance has been close or equal to that of AMBP. Using a home BP-monitoring device, BP during sleep is measured once or twice during the night, although the frequency of measurement can be preset freely, and is therefore able to capture the BP in relation to the quality of sleep at the time of the measurement. Recently, midnight BP and diurnal changes in BP, as well as morning BP, have become of interest because of their relationships with target organ damage and prognosis. Decreases of 10–20% in nocturnal BP compared with daytime BP are classified as a normal pattern of diurnal change (dipper), decreases of 0–10% as a no-nocturnal-dip type (non-dipper), elevations in BP during the nighttime compared with the daytime as a nocturnal elevation type (riser), and decreases of 20% or more in nocturnal BP as an excessive decrease type (extreme dipper). The prognosis has been poor in non-dippers and risers, in whom hypertensive target organ damage, such as asymptomatic lacunar infarction, left ventricular hypertrophy and microalbuminuria, is observed more frequently than in dippers. Prospective studies have shown that the risk of cardiovascular diseases is higher in non-dippers than in dippers. According to the results of the Ohasama study, the risk of cardiovascular diseases is high in non-dippers even if they are normotensive. Therefore, the clinical significance of nocturnal BP is attracting interest. The results of a large-scale intervention study and an international collaboration of observational studies indicate that low nighttime and low daytime BP improve the prognosis of patients. For the future, a wide application of home BP-measuring devices is expected to evaluate BP during sleep at night in relation to the quality of sleep and to diurnal changes in BP.
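The four diurnal patterns described above follow directly from the percentage nocturnal fall in BP relative to the daytime value, and can be sketched as follows (the function and argument names are illustrative):

```python
def classify_dipping(day_sbp, night_sbp):
    """Classify the diurnal BP pattern from mean daytime and nighttime
    systolic BP, using the percentage nocturnal fall described above."""
    fall_pct = (day_sbp - night_sbp) / day_sbp * 100
    if fall_pct >= 20:
        return "extreme dipper"   # nocturnal fall of 20% or more
    if fall_pct >= 10:
        return "dipper"           # normal fall of 10-20%
    if fall_pct >= 0:
        return "non-dipper"       # fall of less than 10%
    return "riser"                # nighttime BP above daytime BP

print(classify_dipping(130, 112))  # "dipper" (about a 14% fall)
print(classify_dipping(130, 126))  # "non-dipper" (about a 3% fall)
print(classify_dipping(130, 136))  # "riser"
```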

________

Most suitable device for SMBP:

Arm-cuff devices based on the cuff-oscillometric method validated on the basis of the auscultation method are recommended for home BP measurements.

Why?

Previously, mercury column manometers or aneroid manometers, in conjunction with the auscultation method, were used for home BP measurements. However, these manometers, especially aneroid manometers, are sometimes unreliable and inaccurate. Mercury column manometers are cumbersome and cause environmental pollution. Furthermore, the auscultation method involves a subjective decision and a complex technique, and technical instruction and training are necessary to perform an accurate auscultation. For all these reasons, earlier devices for home BP measurements were not widely accepted and, consequently, not widely distributed. In the 1960s, electrical devices based on the microphone method were introduced for home BP measurements. However, because of the mechanical properties of the microphone, these devices were costly and subject to frequent malfunctions. The microphone method also had an inherent shortcoming in determining the phase V Korotkoff sound, making determination of diastolic BP inaccurate. Thus, microphone devices for home BP measurements were not widely distributed. During this period, theoretical analysis of the cuff-oscillometric principle advanced extensively. In 1969, Posey et al. discovered that the maximum oscillation of intra-cuff pressure was nearly identical to the mean arterial BP, and the cuff-oscillometric principle was originally introduced as a method of determining mean arterial BP. Several experimental studies revealed that SBP and DBP could be estimated from the pattern of the gradual increase and decrease in cuff oscillation during cuff-pressure deflation. This basic algorithm has been improved by including procedures to correctly approximate the characteristic changes in cuff oscillation to the phase I and phase V Korotkoff sounds, and now almost all electrical devices for home BP measurements are based on the cuff-oscillometric principle.
However, the different properties of the Korotkoff sounds and cuff oscillation led to an unavoidable difference in BP values between the two methods. Nevertheless, devices based on the cuff-oscillometric principle have become the norm for home BP measurements because of their simple mechanical properties, requiring only the measurement of cuff-pressure changes. Therefore, these devices incorporate only a pressure sensor. Such a simple mechanism makes the device less troublesome and cheaper. The cuff-oscillometric device has another advantage over the microphone device, in that surrounding noise does not interfere with BP measurements. More accurate BP values in patients with atrial fibrillation or arrhythmia are also obtained by cuff-oscillometric devices than with the Korotkoff sound method, as ectopically large or small pulses are averaged by the algorithm. Such factors encourage the production and distribution of cuff-oscillometric devices for home BP measurements. Indeed, it is remarkable that sphygmomanometers used in the clinical setting have been changing from the Korotkoff sound method to cuff-oscillometric devices without much difficulty.

_

Although the mercury column sphygmomanometer with auscultation is becoming obsolete, the gold standard for clinical practice is still the Korotkoff sound method using a mercury column sphygmomanometer. Almost all epidemiological and clinical studies on hypertension have been based on casual-clinic BP measured by the Korotkoff sound method. Therefore, clinical and epidemiological information obtained using the cuff-oscillometric principle needs to be validated by the accumulation of data. Various manufacturers of devices using the cuff-oscillometric principle may use different algorithms, leading to differences among devices in BP measurements from a single subject. In practice, the accuracy of automatic devices is determined by comparison with the auscultation method, and no other standard method is currently available for this purpose. The issue here is the subjectivity and the possible inaccuracy of auscultation when the auscultation method is used as a standard. To exclude the shortcomings of the auscultation method, equipment based on objective methods should be developed for the calibration of automatic devices, in which the Korotkoff sound signal is treated with an established algorithm, and cuff-oscillometric devices are validated from this standard equipment. Objective and accurate evaluation of these automatic devices is a prerequisite for the authorization of cuff-oscillometric devices for home BP measurements. The accumulation of clinical and epidemiological data obtained by authorized cuff-oscillometric devices may finally validate such devices as tools for clinical decision making. As BP measurements in a clinical setting are now mostly obtained by cuff-oscillometric devices, the necessary data will be accumulated soon.

_

Choosing a Home Blood Pressure Monitor:

Here are some tips to follow when shopping for a blood pressure monitor.

1. Choose a validated monitor:

Make sure the monitor has been tested and validated according to a recognized protocol, such as those of the Association for the Advancement of Medical Instrumentation, the British Hypertension Society and the International Protocol for the Validation of Automated BP Measuring Devices.

2.  Ensure the monitor is suitable for your special needs:

When selecting a blood pressure monitor for the elderly, pregnant women or children, make sure it is validated for these conditions.

3. Make sure the cuff fits:

Children and adults with smaller or larger than average-sized arms may need special-sized cuffs. They are available in some pharmacies, from medical supply companies and by direct order from companies that sell blood pressure cuffs. Measure around your upper arm and choose a monitor that comes with the correct size cuff.

__

________

Why is home monitoring important?

1.  Charting provides a “time-lapse picture”:

 Your healthcare provider will want an accurate picture of the situation inside your arteries. One measurement taken at the doctor’s office is like a snapshot. It tells what your blood pressure is at that moment. Since there are no symptoms for HT and no way to sense fluctuations in blood pressure, measuring is the only way to get the facts. Readings can vary throughout the day and can be temporarily influenced by factors such as emotions, diet and medication. A record of readings taken over time can provide you and your healthcare provider a clearer picture of your blood pressure. It can be like a time-lapse picture or movie, providing information on what is happening with your blood pressure over time.

2. Charting can help eliminate false readings:

Some people experience anxiety at a doctor’s office, which leads to temporarily higher readings. This condition is known as “white-coat hypertension.” At the other extreme, some individuals have normal readings in a professional’s office but elevated readings outside the office. This condition is often referred to as “reverse white-coat hypertension” or “masked hypertension.” Such false readings can lead to over-diagnosis or misdiagnosis of HT. Self-measurement at home can reveal whether the readings taken in the doctor’s office reflect your true blood pressure.

_

Who should home monitor?

Home monitoring may be especially useful for:

1. Patients starting HT treatment to determine its effectiveness

2. Patients requiring closer monitoring than intermittent office visits provide, especially individuals with coronary heart disease, diabetes and/or kidney disease

3. Pregnant women since preeclampsia or pregnancy-induced hypertension can develop rapidly

4. People who have some high readings at the doctor’s office, to rule out white-coat hypertension and confirm true hypertension

5. Elderly patients, because the white-coat effect increases progressively with age

 6. People suspected of having masked hypertension

_

Why do I need to monitor my blood pressure at home?

Monitoring your blood pressure at home offers several benefits. It can:

1. Help make an early diagnosis of high blood pressure. If you have pre-hypertension, or another condition that could contribute to high blood pressure, such as diabetes or kidney problems, home blood pressure monitoring could help your doctor diagnose high blood pressure earlier than if you have only infrequent blood pressure readings in the doctor’s office.

2. Help track your treatment. Home blood pressure monitoring can help people of all ages, including children and teenagers who have high blood pressure, keep track of their condition. Self-monitoring provides important information between visits to your doctor. The only way to know whether your lifestyle changes or your medications are working is to check your blood pressure regularly. Keeping track of changes can help you and your health care team make decisions about your ongoing treatment strategy, such as adjusting dosages or changing medications.

3. Encourage better control. Taking your own blood pressure measurements can result in better blood pressure control. You gain a stronger sense of responsibility for your health, and you may be even more motivated to control your blood pressure with an improved diet, physical activity and proper medication use.

4. Cut your health care costs. Home monitoring may cut down on the number of visits you need to make to your doctor or clinic. This can reduce your overall health care costs, lower your travel expenses and save in lost wages.

5. Check if your blood pressure is different outside the doctor’s office. Your doctor may suspect that your blood pressure goes up due to the anxiety associated with being at the doctor’s office, but is otherwise normal — a condition called white coat hypertension. Monitoring blood pressure at home or work, where that kind of anxiety won’t cause those spikes, can help see if you have true high blood pressure or simply white coat hypertension.

6.  Home and workplace monitoring may also help when the opposite occurs — your blood pressure seems fine at the doctor’s office, but is elevated elsewhere. This kind of high blood pressure, sometimes called masked hypertension, is more common in women and those who have cardiovascular risk factors, such as obesity, high blood cholesterol and high blood sugar.

_

Diagnostic threshold for hypertension in SMBP:

For self-measured (home) BP, a value of 135/85 mm Hg or higher is generally taken as the diagnostic threshold for hypertension, corresponding to the clinic threshold of 140/90 mm Hg.

________

Schedule of SMBP:

A systematic review found little evidence to determine how many readings are appropriate, with considerable variation in recommendations in the literature. There is disagreement between guidelines for SMBP at home:

(i) The European Society of Hypertension and the 2012 Canadian Hypertension Education Program recommend duplicate SMBPs in the morning and evening;

(ii) The American Society of Hypertension recommends triplicate SMBPs;

(iii) The Japanese Society of Hypertension recommends at least one SMBP.

The 1st SMBP in a triplicate tends to be higher than the 2nd and 3rd, although the differences are quite small: on average, 3–4 mmHg for systolic BP, 1 mmHg for diastolic BP and 1–2 beats per minute for heart rate.

__

Blood pressure varies throughout the day and drugs are typically taken in the morning. This usually results in peaks and troughs during the day, so it has been recommended that blood pressure is measured in the morning and the evening. Japanese data suggest that blood pressure measured in the morning correlates best with end organ damage, but these findings may be confounded by Japanese customs such as taking hot baths in the evening. Current guidelines for SMBP recommend that in untreated patients there should be an initial 7-day measurement period with 2 readings taken in the morning and in the evening at predefined times (6 am–9 am and 6 pm–9 pm). The average of day 2 through 7 values should be taken as the reference for the follow-up period. Once treatment is initiated, SMBP should be used exactly as in the pre-treatment phase and the readings should preferably be taken at trough, i.e., before drug intake in the case of once-daily administration. When changes in treatment occur, the averages of the SMBP values measured over 2 weeks should be used to assess BP control. It follows that many BP readings must be collected, which may create problems of interpretation. For reasons of time and practicality, doctors are reluctant to calculate the average of tens or even hundreds of values and thus usually make only a cursory inspection of patients’ reports. In addition, there is experimental evidence that many patients manipulate their BP reports, excluding values that do not seem appropriate to them. Current international guidelines do not provide specific recommendations on how to solve these problems.
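The averaging rule above (seven days of duplicate morning and evening readings, day 1 discarded, days 2 through 7 averaged as the reference) can be sketched as follows; the function name and the example numbers are purely illustrative.

```python
# Sketch of the guideline averaging rule: drop day 1, average days 2-7.
def smbp_reference(days):
    """days: list of daily lists of (systolic, diastolic) readings.
    Returns the mean of all readings from day 2 onward."""
    kept = [r for day in days[1:] for r in day]   # discard day-1 readings
    mean_sbp = sum(r[0] for r in kept) / len(kept)
    mean_dbp = sum(r[1] for r in kept) / len(kept)
    return round(mean_sbp, 1), round(mean_dbp, 1)

# Illustrative week: a high first day, then six identical days of
# duplicate morning and evening readings.
week = [[(150, 95)] * 4] + [[(138, 86), (136, 84), (132, 82), (130, 80)]] * 6
print(smbp_reference(week))  # (134.0, 83.0) -- the day-1 readings are ignored
```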

_

Long term monitoring for people on stable treatment:

Data from the PROGRESS trial (Perindopril Protection against Recurrent Stroke study) indicate that true changes in blood pressure occur slowly, and that for patients on stable medication a reasonable time frame for remeasurement would be every six to 12 months (Keenan K, Hayen A, Neal BC, Irwig L, 2008). Although the PROGRESS trial looked at office measurements of blood pressure, this estimate is probably valid for patients who self monitor. However, I think we need more studies to determine how often SMBP should be performed in patients on long-term stable treatment.

_

A study was done to determine the frequency of SMBP by patients:

__________

Correct way for SMBP:

1. Measure only when relaxed. Rest for approximately two to three minutes before each measurement and sit relaxed in an upright position. Even desk work increases blood pressure by 6 mm Hg (systolic) and 5 mm Hg (diastolic) on average.

2. Empty your bladder first: a full bladder causes an increase in blood pressure of approximately 10 mm Hg.

3. Check both the cuff size and the fit of the cuff. The cuff should be at the level of the right atrium.

4. Do not talk or move during the measurement. Talking elevates your values by 17/13 mm Hg.

5. A repeated measurement should be started no earlier than one minute after the prior measurement.

6. Change therapy only after consulting your physician.

7. In some people, there is a significant difference in blood pressure between the right and left arm. Although the reason for this is unclear, guidelines recommend that blood pressure be measured in both arms at the initial consultation. If there is a significant difference between the two readings, the arm with the higher reading should be used for future monitoring.

_

Checklist for correct use of automated home blood pressure monitoring machine
1 Do not use caffeine products 30 minutes before measuring BP
2 Do not use tobacco products 30 minutes before measuring BP
3 Do not use alcohol products 30 minutes before measuring BP
4 No exercise 30 minutes before measurement of BP
5 Rest for 5 minutes before the first reading is to be taken and patient should be relaxed as measurement is taking place
6 No full bladder before measuring BP
7 Appropriate cuff size: the bladder length should be 80% of arm circumference
8 Appropriate cuff size: the bladder width should be at least 40% of arm circumference (i.e. a length-to-width ratio of 2:1)
9 Sit in a comfortable position, with legs and ankles uncrossed, and back and arm supported
10 All clothing that covers the location of cuff placement should be removed. Long sleeves should not be rolled up, to avoid a tourniquet effect
11 Wrap the correctly sized cuff smoothly and snugly around the upper part of the bare arm
12 The cuff should fit snugly, but there should be enough room to slip one fingertip under the cuff
13 The lower end of the cuff should be 2–3 cm above the antecubital fossa
14 The middle of the cuff on the upper arm should be at the level of the right atrium (the midpoint of the sternum)
15 No talking during BP measurement
16 No moving during BP measurement
17 A minimum of two readings should be taken at intervals of at least 1 minute, and the average of those readings should be used to represent the patient’s BP
18 If there is a >5 mmHg difference between the first and second readings, an additional two readings should be obtained, and then the average of these multiple readings should be used (ask patient if it is not applicable during the patient demonstration)
19 Properly record the BP reading in the log book
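The cuff-sizing rule in items 7–8 and the reading-averaging protocol in items 17–18 amount to simple arithmetic checks. The sketch below (in Python; the function names are illustrative, not from any standard or device library) shows one way a logbook application might encode them:

```python
def bladder_size_ok(arm_circumference_cm, bladder_length_cm, bladder_width_cm):
    """Items 7-8: bladder length should be at least 80% of arm circumference,
    and bladder width at least 40% (a length-to-width ratio of roughly 2:1)."""
    return (bladder_length_cm >= 0.80 * arm_circumference_cm
            and bladder_width_cm >= 0.40 * arm_circumference_cm)


def representative_bp(readings_mmhg):
    """Items 17-18: take at least two readings at intervals of >= 1 minute and
    average them; if the first two differ by more than 5 mmHg, obtain two more
    readings and average all four."""
    first, second = readings_mmhg[0], readings_mmhg[1]
    if abs(first - second) > 5:
        return sum(readings_mmhg[:4]) / 4   # average of four readings
    return (first + second) / 2             # average of the first two
```

For example, systolic readings of 120 and 130 mmHg differ by more than 5 mmHg, so two further readings would be taken and all four averaged.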

_

________

Patient education:

People should be aware of the main causes of inaccuracy in measurement, which can be divided into three broad categories—patient factors, technique and measurer factors, and device inaccuracy. Talking (increase of 17/13 mm Hg in one study) or crossing of legs (increase of 7/2 mm Hg in another study) during measurement and arm position (increase or decrease of 8 mm Hg for every 10 cm above or below heart level) can significantly alter measurements. Education regarding disclosure of results is important because studies have shown that up to 20% of readings are not divulged to healthcare professionals.

_

________

Which patients may not benefit from self monitoring?

To date, trials of self monitoring have studied people who are willing to monitor themselves, so the question remains whether self monitoring should be recommended for all.  People with an absolute contraindication for self monitoring are rare and include those in whom it is impossible to measure indirect blood pressure accurately (such as amputees). The evidence for self monitoring in pregnant women, children, and those with vascular problems such as Raynaud’s disease is sparse, and self monitoring should be undertaken with caution in these groups. Atrial fibrillation, which may affect the accuracy of oscillometric algorithms in automated monitors, may be problematic, although evidence indicates that accurate readings are possible with standard models. People with conditions that might preclude self monitoring, such as dementia or stroke, may need the help of a carer. Increased anxiety is often quoted as a problem in self measurement, and anecdotally some people seem not to cope with self monitoring.  Studies that have looked for increased anxiety resulting from self monitoring have been negative, but this may reflect the population studied.

_

 

____

When to consult doctor in SMBP:

The following table shows the blood pressure ranges (in mmHg) supplied by the World Health Organisation (WHO):

Range                              Systolic   Diastolic   Recommendation
Blood pressure too low             < 100      < 60        Consult your doctor
Blood pressure optimum             100–120    60–80       Self-check
Blood pressure normal              120–130    80–85       Self-check
Blood pressure slightly high       130–140    85–90       Consult your doctor
Blood pressure too high            140–160    90–100      Seek medical advice
Blood pressure far too high        160–180    100–110     Seek medical advice
Blood pressure dangerously high    > 180      > 110       Urgently seek medical advice!

It is recommended that you record your blood pressure values regularly and discuss them with your doctor. If your systolic values are frequently above 140 and/or your diastolic values above 90, you should consult your doctor. Blood pressure naturally fluctuates, so there is no need to worry if individual results occasionally exceed the above limits. But if your pressure is above the limits in most cases, you should consult your doctor!
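The WHO table above is in effect a lookup: each component of a reading falls into a band, and the more severe band determines the advice. A minimal Python sketch (illustrative only; how readings that fall exactly on a boundary, or mixed low/normal readings, are handled is my assumption, as the table does not specify it):

```python
WHO_BANDS = [
    # (systolic upper bound, diastolic upper bound, range label, recommendation)
    (120, 80,  "optimum",          "Self-check"),
    (130, 85,  "normal",           "Self-check"),
    (140, 90,  "slightly high",    "Consult your doctor"),
    (160, 100, "too high",         "Seek medical advice"),
    (180, 110, "far too high",     "Seek medical advice"),
    (9999, 9999, "dangerously high", "Urgently seek medical advice"),
]

def who_bp_category(systolic, diastolic):
    """Classify a reading (mmHg) per the WHO table; the more severe of the
    systolic and diastolic bands determines the recommendation."""
    if systolic < 100 and diastolic < 60:
        return ("too low", "Consult your doctor")
    def band(value, column):
        # index of the first band whose upper bound exceeds the value
        return next(i for i, b in enumerate(WHO_BANDS) if value < b[column])
    worst = max(band(systolic, 0), band(diastolic, 1))
    return (WHO_BANDS[worst][2], WHO_BANDS[worst][3])
```

For example, a reading of 150/88 mmHg is classified as "too high": the systolic band dominates even though the diastolic value alone would only be "slightly high".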

___________

What is the value of self monitoring in diagnosis and prognosis?

Faster diagnosis:

Trials have shown that morbidity and mortality are significantly lower in people whose blood pressure is reduced earlier rather than later. The British Hypertension Society recommends that hypertension be diagnosed using a series of office blood pressure readings taken over one to 12 weeks, depending on the blood pressure level. Self monitoring can provide more precise data in a much shorter time.

Improved accuracy:

Self monitoring can improve diagnostic and predictive accuracy. A large cohort study in Japan showed that self monitoring predicted the risk of stroke better than office readings. In this study, risk of stroke increased 29% (95% confidence interval 16% to 44%) for each 10 mm Hg increase in home systolic readings versus 9% (0% to 18%) for office readings. The predictive value of home measurement improved with the number of measurements, with the best predictive value being seen with 25 measurements. Another large cohort study used an upper limit for normality of 135/85 mm Hg for self monitoring and found that each 10 mm Hg increase above this was associated with a 17% increase in risk of cardiovascular disease, even when office blood pressure was normal.
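If the 29% increase per 10 mm Hg is treated as multiplicative over the blood pressure excess (the usual interpretation of a per-unit hazard ratio, though an assumption here, since the study's model is not stated), the risk compounds: a 20 mm Hg elevation in home systolic readings corresponds to roughly 1.29² ≈ 1.66, a 66% increase. A one-line sketch:

```python
def relative_risk(excess_mmhg, increase_per_10mmhg):
    """Compound a per-10 mmHg relative risk increase multiplicatively
    over a given excess in (here, systolic) blood pressure."""
    return (1 + increase_per_10mmhg) ** (excess_mmhg / 10)
```

For home readings (29% per 10 mm Hg), relative_risk(20, 0.29) gives about 1.66; for office readings (9% per 10 mm Hg), relative_risk(20, 0.09) gives about 1.19, illustrating how much steeper the risk gradient is for home measurements.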

Reduced risk:

Self monitoring avoids two situations where office readings can mislead—white coat hypertension, where out of home readings are normal but office readings are raised, and masked hypertension, where the opposite is the case. Risk of death from cardiovascular disease increases progressively from normal readings at home and in the office, to white coat hypertension, then masked hypertension, and finally increased readings at home and in the office. Furthermore, one large cohort study found that the prognosis for masked hypertension was similar to that for uncontrolled office hypertension. People with masked hypertension are rarely identified, and self monitoring may be particularly helpful for this group, especially if it is used as a screening tool for people with high-normal office readings.

______

SMBP: from measurement to control:

How does self monitoring reduce blood pressure?

Better adjustment of antihypertensive drugs:

Doctors do not always treat patients with documented raised blood pressure even though antihypertensives are known to reduce blood pressure and the risk of cardiac disease. Self monitoring of blood pressure may lead patients to discuss their blood pressure with their doctor and this may encourage appropriate prescription of antihypertensives.

Improved compliance with scheduled treatment:

Self monitoring makes patients more aware of their blood pressure level; this might increase their illness perceptions and subsequent health behaviours and therefore improve adherence to drugs. Of 11 randomised controlled trials of self monitoring that reported measures of treatment adherence, six showed a statistically significant improvement in adherence, but in five of these six trials self monitoring was part of a complex intervention. These trials must be treated cautiously because pill counting was often used to measure compliance as opposed to more reliable methods.

Improved non-pharmacological interventions:

Self monitoring may lead to improvements in health behaviours, such as diet and exercise, that help reduce blood pressure. A randomised controlled trial found significant changes in body mass index at six and 12 months in a self monitoring group compared with controls. A reduction in alcohol intake was also seen at six but not 12 months. No effect was seen on self reported physical activity or salt intake.

Habituation to measurement:

Repeated measurement of blood pressure lowers blood pressure readings. Presumably this is because people habituate to the measurement process and show less of an alarm response when the cuff is inflated. However, results of a randomised trial of self monitoring that included ambulatory monitoring as an outcome measure supported the conclusions of a previous review implying that habituation to measurement was not the reason for the lowering of blood pressure in self monitoring.

____

Self-measured home blood pressure in predicting ambulatory hypertension: 

Physicians are commonly uncertain whether a person with office blood pressure (BP) around 140/90 mm Hg actually has hypertension. This is primarily because of BP variability. One approach is to perform self-measured home BP and determine if home BP is elevated. There is a general agreement that if home BP is ≥135/85 mm Hg, then antihypertensive therapy may be commenced. However, some persons with home BP below this cut-off will have ambulatory hypertension. Researchers therefore prospectively studied the role of home BP in predicting ambulatory hypertension in persons with stage 1 and borderline hypertension. They compared, in a cross-sectional design, home and ambulatory BP in a group of 48 patients with at least two elevated office BP readings. The group was free of antihypertensive drug therapy for at least 4 weeks and performed 7 days of standardized self-BP measurements at home. They examined the relationships of the three BP methods and also defined a threshold (using receiver operating characteristic curves) for home BP that captures 80% of ambulatory hypertensives (awake BP ≥135/85 mm Hg). Office systolic BP (145 ± 13 mm Hg) was significantly higher than awake (139 ± 12 mm Hg, P = 0.013) and home (132 ± 11 mm Hg, P < 0.001) BP. Office diastolic BP (88 ± 4 mm Hg) was higher than home diastolic BP (80 ± 8 mm Hg, P < 0.001) but not different from awake diastolic BP (88 ± 8 mm Hg, P = 0.10). Home BP had a higher correlation (compared with office BP) with ambulatory BP. The home BP-based white coat effect correlated with the ambulatory BP-based white coat effect (r = 0.83, P = 0.001 for systolic BP; r = 0.68, P = 0.001 for diastolic BP). The threshold for home BP of 80% sensitivity in capturing ambulatory hypertension was 125/76 mm Hg. The preliminary data suggest that a lower self-monitored home BP threshold should be used (to exclude ambulatory hypertension) in patients with borderline office hypertension.

_______

SMBP in special circumstances and groups:

Certain groups of people merit special consideration for the measurement of blood pressure—because of age, body habitus, or disturbances of blood pressure related to hemodynamic alterations in the cardiovascular system. Home BP, measured by patients themselves over a long period, is widely used for the management of chronic diseases in which BP control has a critical role for the prognosis. The AHA/ASH/PCNA joint statement and ESH Guidelines for home BP measurements emphasize the importance of home BP measurements in the management of diabetes mellitus, pregnancy, children and renal diseases. Now I will discuss special populations vis-à-vis SMBP:    

_

Elderly Patients:

Elderly patients are more likely to have WCH, isolated systolic hypertension, and pseudohypertension. Blood pressure should be measured while seated, 2 or more times at each visit, and the readings should be averaged. Blood pressure should also be taken in the standing position routinely because the elderly may have postural hypotension. Hypotension is more common in diabetic patients. It is frequently noticed by patients on arising in the morning, after meals, and when standing up quickly. Self-measurements can be quite helpful when considering changes in dosage of antihypertensive medications. Ambulatory blood pressure monitoring, sometimes coupled with Holter recordings of ECGs, can help elucidate some symptoms such as episodic faintness and nocturnal dyspnea. A study found that elderly have a relatively poor understanding of their blood pressure readings and targets, but a subset was considerably more knowledgeable and potentially suited to be more involved in blood pressure self-management.

_

Pulseless Syndromes:

Rarely, patients present with occlusive arterial disease in the major arteries to all 4 limbs (e.g., Takayasu arteritis, giant cell arteritis, or atherosclerosis) so that a reliable blood pressure cannot be obtained from any limb. In this situation, if a carotid artery is normal, it is possible to obtain retinal artery systolic pressure and use the nomogram in reverse to estimate the brachial pressure (oculoplethysmography), but this procedure and the measurement of retinal artery pressures are not generally available. If a central intra-arterial blood pressure can be obtained, a differential in pressure from a noninvasive method can be established and used as a correction factor.

_

Arrhythmias:

When the cardiac rhythm is very irregular, the cardiac output and blood pressure vary greatly from beat to beat. There is considerable inter-observer and intra-observer error. Estimating blood pressure from Korotkoff sounds is a guess at best; there are no generally accepted guidelines. The blood pressure should be measured several times and the average value used. Automated devices are frequently inaccurate for single observations in the presence of atrial fibrillation, for example, and should be validated in each subject before use. However, prolonged (2 to 24 hours) ambulatory observations do provide data similar to those in subjects with normal cardiac rhythm. Sometimes, an intra-arterial blood pressure is necessary to get a baseline for comparison. If severe regular bradycardia is present (e.g., 40 to 50 bpm), deflation should be slower than usual to prevent underestimation of systolic and overestimation of diastolic blood pressure. Hypertension and atrial fibrillation (AF) often coexist and are strong risk factors for stroke.

_

Blood Pressure Measurement in Atrial Fibrillation: NICE Hypertension Guideline Update 2011:

Because automated devices may not measure blood pressure accurately if there is pulse irregularity (for example, due to atrial fibrillation), palpate the radial or brachial pulse before measuring blood pressure. If pulse irregularity is present, measure blood pressure manually using direct auscultation over the brachial artery.

_

Automated blood pressure measurement in atrial fibrillation: a systematic review and meta-analysis:

The measurement of blood pressure in atrial fibrillation is considered as difficult and uncertain, and current guidelines recommend the use of the auscultatory method. The accuracy of automated blood pressure monitors in atrial fibrillation remains controversial. A systematic review and meta-analysis was performed of studies comparing automated (oscillometric or automated Korotkoff) versus manual auscultatory blood pressure measurements (mercury or aneroid sphygmomanometer) in patients with sustained atrial fibrillation. Twelve validations were analyzed (566 patients; five home, three ambulatory and three office devices). The meta-analysis found that these monitors appear to be accurate in measuring SBP but not DBP. Given that atrial fibrillation is common in the elderly, in whom systolic hypertension is more common and important than diastolic hypertension, automated monitors appear to be appropriate for self-home but not for office measurement.

_

An embedded algorithm for the detection of asymptomatic AF during routine automated BP measurement with high diagnostic accuracy has been developed and appears to be a useful screening tool for elderly hypertensives.

_

Obese people:

The association between obesity and hypertension has been confirmed in many epidemiological studies. Obesity may affect the accuracy of measurement of blood pressure in children, young and elderly people, and pregnant women. The relation of arm circumference to bladder dimensions is particularly important. If the bladder is too short, blood pressure will be overestimated ("cuff hypertension"), and if it is too long, blood pressure may be underestimated. The increasing prevalence of the metabolic syndrome, of which hypertension is a major component, means that accurate measurement of blood pressure becomes increasingly important. Today, modern automatic devices can overcome the problem of miscuffing in patients with large arms through a special software algorithm that can provide accurate BP readings over a wide range of arm circumferences when coupled with a single cuff of standard dimensions. A tronco-conical-shaped cuff may be a key component of this instrumentation because it fits better on large, conical arms. In fact, the use of an inappropriately small rectangular cuff can be the source of large errors when BP is measured with the oscillometric method, in which the measured cuff pressure oscillations reflect the volume change of the entire artery under the cuff, not only of its central section. In obese people, the radial artery may be more suitable than the brachial artery for SMBP: Korotkoff sounds can be auscultated over the radial artery, a Doppler probe can be used, or an oscillometric device can be applied. Whether validated wrist BP monitors can be an appropriate solution for very obese patients should also be established. Unfortunately, there is no available evidence to show that BP measured with upper arm oscillometric devices or wrist monitors is reliable in the obese population.
Assessment of BP in obese individuals is further complicated by the fact that the discrepancies between office and out-of-office BPs are more pronounced in this group than in the nonobese segment of the population. Prospective trials designed to specifically evaluate whether BP measured with automatic devices in obese patients can predict cardiovascular events as accurately as BP measured with the traditional auscultatory technique will shed light on this controversial issue.

_

Children:

_

_

Pregnant Women:

Hypertension is the most common medical disorder of pregnancy and occurs in 10% to 12% of all pregnancies. The detection of elevated blood pressure during pregnancy is one of the major aspects of optimal antenatal care; thus, accurate measurement of blood pressure is essential. Changes in BP during pregnancy are markedly affected by the season, which is relevant to the diagnosis of hypertension during pregnancy and preeclampsia. Mercury sphygmomanometry continues to be the recommended method for blood pressure measurement during pregnancy. Blood pressure should be obtained in the seated position. Measurement of blood pressure in the left lateral recumbency, on the left arm, does not differ substantially from blood pressure that is recorded in the sitting position. Therefore, the left lateral recumbency position is a reasonable alternative, particularly during labor. If the patient’s upper arm circumference is 33 cm or greater, a large blood pressure cuff should be used. In the past, there had been some question as to whether the fourth (K4) or fifth (K5) Korotkoff sound should be used to define the diastolic blood pressure. The International Society for the Study of Hypertension in Pregnancy currently recommends using K5 for the measurement of diastolic blood pressure in pregnancy. When sounds are audible with the cuff deflated, K4 should be used. It is recognized that alternatives to mercury devices may be necessary in the future, and a small number of automated blood pressure recorders have been validated for use in pregnancy. Self-monitoring may be useful in evaluating blood pressure changes during pregnancy.

_

_

Studies have found that home BP monitoring is the optimal method for the early detection of and early preventive intervention in preeclampsia and eclampsia. White-coat hypertension has also been frequently detected by home BP measurements in pregnant women.

 _

Patients who take antihypertensive drugs:

In patients who take antihypertensive drugs, the timing of measurement may have a substantial influence on the blood pressure. The time of taking antihypertensive drugs should be noted.

_

Blood pressure in patients who are exercising:

Systolic blood pressure increases with increasing dynamic work as a result of increasing cardiac output, whereas diastolic pressure usually remains about the same or moderately lower. An exaggerated blood pressure response during exercise may predict development of future hypertension.

_

Diabetes mellitus and hypertension:

Individuals with diabetes are at great risk for cardiovascular disease. Part of this increased risk is because of hypertension. There is a very high incidence of hypertension in patients with diabetes. One survey estimated that 54.8% of Caucasians, 60.4% of African Americans, and 65.3% of Mexican Americans who had diabetes also had hypertension. Several trials have also demonstrated the importance of blood pressure–lowering in hypertensive patients with diabetes. Two of the most significant of these trials were the United Kingdom Prospective Diabetes Study (UKPDS) and the Hypertension Optimal Treatment (HOT) study. The HOT study reported a 51% reduction in cardiac events in the diabetes subpopulation (n = 1,501) who were randomized to the more intensive blood pressure arm (goal: diastolic blood pressure of 80 vs. 90 mmHg). The UKPDS reported significant reductions in its intensive blood pressure arm (mean result: 144/82 vs. 154/87 mmHg in the standard arm) in all diabetes-related endpoints, deaths, stroke, and microvascular endpoints. Currently, the American Diabetes Association (ADA) recommends a blood pressure goal of < 130/80 mmHg. The seventh report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure (JNC 7) also recommends a blood pressure goal of < 130/80 mmHg for patients with diabetes. The International Diabetes Federation has recommended the use of home BP for the management of BP in diabetic patients. The J-HOME Study reported that home BP was ≥130/80 mm Hg in 7% of diabetic patients in whom clinic BP was controlled under 130/80 mm Hg. Home BP in the morning has been reported to more accurately reflect target organ damage than clinic BP in diabetic patients. Management of patients on the basis of telemedicine in co-operation with nurses, where home BP is used as an index, has been reported to have led to a more rapid control of BP in diabetic patients.

_

Renal diseases (chronic kidney disease, dialysis):

Renal diseases are often accompanied by hypertension, and hypertension is the greatest risk factor for the progression of nephropathy. In the general population, the risk of chronic kidney disease has been reported to be high in patients with masked hypertension, as determined by home BP measurements. In patients undergoing dialysis, the greatest prognostic factor is the presence of cerebro- and cardiovascular complications, and the management of hypertension is extremely important. However, BP measured at the dialysis center fluctuates widely, and has been reported to not accurately reflect the outcome. Home BP is known to more closely reflect the usual BP of dialysis patients. In addition, home BP measurements in dialysis patients have been shown to improve the state of BP control.

_

Some unresolved issues in special populations vis-à-vis SMBP:

__________

Clinical significance and application of home BP:

1. Home BP is highly reproducible.

2. Home BP has a greater prognostic value than clinic BP.

3. Home BP is extremely effective for the evaluation of drug effects and their duration.

4. Home BP can also be used for telemedicine.

5. The introduction of home BP to the diagnosis and treatment of hypertension facilitates long-term BP control.

6. Home BP measurements improve the adherence to medications and medical consultations.

7. Home BP can detect seasonal variations and long-term changes in BP.

8. Home BP is essential for the diagnosis of white-coat hypertension and masked hypertension.

9.  Home BP measurements detect morning hypertension, and nighttime BP during sleep can also be obtained with certain devices.

10. Home BP is particularly important for the diagnosis and treatment of hypertension in diabetes mellitus, pregnancy, children and renal diseases.

11. Home BP has a great effect on the medical economy.

__________

Efficacy and utility of SMBP:

Although clinic blood pressure (BP) measurement still remains the cornerstone of hypertension management, the broad availability of electronic BP measurement devices has led to their widespread adoption. Home BP monitoring is now uniformly advocated for the evaluation and management of hypertension, because BP control among treated hypertensives remains poor and home BP monitoring is believed to improve hypertension control. This improvement may be attributable both to better adherence to antihypertensive therapy and to the detection and treatment of masked hypertension. Further, in contrast to clinic BP measurement, which is associated with a white coat effect, home BP monitoring may reduce the white coat effect and obviate unnecessary therapy. In addition to improving hypertension control, home BP is superior to clinic BP in predicting cardiovascular prognosis and end-stage renal disease.

_

SMBP and target organ damage:

_

Several studies have indicated that the correlation between echocardiographically determined LVH and blood pressure is better for home than for clinic readings, as shown in the table above. Home blood pressure has also been related to other measures of target organ damage. It has been reported to correlate more closely than clinic blood pressure with microalbuminuria and carotid artery intima-media thickness.

_

SMBP and prognosis:

_

_

The table below shows various studies that link high BP measured at home to morbidity and mortality:

_

Masked hypertension, detected only by SMBP, carries a high hazard ratio similar to that of sustained hypertension.

__________

Self-monitoring for the evaluation of antihypertensive treatment:  

When patients are having their antihypertensive medication initiated or changed, it is necessary to measure their blood pressure on repeated occasions. Self-monitoring is ideal for this purpose, because it can obviate the need for many clinic visits. It has the additional advantage of avoiding the biases inherent in clinic pressure measurements. More frequent measures might increase compliance with antihypertensive medications. The validity of using home readings for monitoring the effects of treatment on blood pressure has been well established in a number of studies that have compared the response to treatment evaluated by clinic, home, and ambulatory pressures. Despite the general parallelism between clinic and home blood pressure during treatment, there may be considerable discrepancy between the two in individual patients. Thus, in a study of 393 patients treated with trandolapril, the correlation coefficient between the clinic and home pressure response, while highly significant, was only 0.36. The slope of the line was also rather shallow, indicating that a decrease of 20 mmHg in clinic pressure is on average associated with a decrease in home pressure of only 10 mmHg. Other studies have shown that drug treatment lowers clinic blood pressure more than home blood pressure; in a study of 760 hypertensives treated with diltiazem 300 mg, the clinic blood pressure fell by 20/13 mmHg and the home blood pressure by 11/8 mmHg. In another study losartan lowered clinic blood pressure by 17/13 mmHg and home blood pressure by 7/5; trandolapril lowered clinic blood pressure by 17/13 and home blood pressure by 7/5; changes of AMBP were closer to the changes of home blood pressure. It is well recognized that drug treatment also lowers ambulatory blood pressure less than clinic blood pressure. One study has looked at the effects of exercise training on clinic and home blood pressure.
Clinic blood pressure fell by 13/8 mmHg in the experimental group and 6/1 mmHg in the controls, whereas home blood pressures fell by 6/3 and 1/–1, respectively. Home monitoring is also ideal for evaluating the time course of the treatment response.

_

Self-Measurement of Blood Pressure at Home reduces the need for Antihypertensive Drugs: A Randomized, Controlled Trial:

It is still uncertain whether one can safely base treatment decisions on self-measurement of blood pressure. In the present study, the authors investigated whether antihypertensive treatment based on self-measurement of blood pressure leads to the use of less medication without the loss of blood pressure control. They randomly assigned 430 hypertensive patients to receive treatment either on the basis of self-measured pressures (n=216) or office pressures (OPs; n=214). During 1-year follow-up, blood pressure was measured by office measurement (10 visits), ambulatory monitoring (start and end), and self-measurement (8 times, self-pressure group only). In addition, drug use, associated costs, and degree of target organ damage (echocardiography and microalbuminuria) were assessed. The self-pressure group used less medication than the OP group (1.47 versus 2.48 drug steps; P<0.001) with lower costs ($3222 versus $4420 per 100 patients per month; P<0.001) but without significant differences in systolic and diastolic OP values (1.6/1.0 mm Hg; P=0.25/0.20), in changes in left ventricular mass index (−6.5 g/m2 versus −5.6 g/m2; P=0.72), or in median urinary microalbumin concentration (−1.7 versus −1.5 mg per 24 hours; P=0.87). Nevertheless, 24-hour ambulatory blood pressure values at the end of the trial were higher in the self-pressure group than in the OP group: 125.9 versus 123.8 mm Hg (P<0.05) for systolic and 77.2 versus 76.1 mm Hg (P<0.05) for diastolic blood pressure. These data show that self-measurement leads to less medication use than office blood pressure measurement without leading to significant differences in OP values or target organ damage. Ambulatory values, however, remain slightly elevated for the self-pressure group.

_

THOP trial 2004:

The appropriateness of home BP measurement to guide antihypertensive treatment has been tested in another large-scale randomized trial: the THOP (Treatment of Hypertension Based on Home or Office Blood Pressure) trial. The THOP trial showed that adjustment of antihypertensive treatment based on home BP instead of office BP led to less intensive drug treatment and marginally lower costs but also to less BP control, with no differences in general well-being or left ventricular mass. Home BP monitoring also contributed to the identification of patients with white-coat hypertension. The authors' findings support a strategy in which both home monitoring and 24-hour ambulatory monitoring can be “complementary” to conventional office BP measurement. The findings also “highlight the need for prospective studies to establish the normal range of home BP, including the operational thresholds at which drug treatment should be instituted or can be discontinued. Until such prospective data become available,” they conclude, “management of hypertension exclusively based on home BP cannot be recommended.” Well, we have come a long way from 2004 to 2014.

_

The figure below shows advantages of SMBP over OMBP for antihypertensive treatment trials:

___________

___________

Now I will go through various clinical trials on SMBP in chronological order:

_

Blood pressure control by home monitoring: meta-analysis of randomised trials: 2004

The meta-analysis covered 18 randomised controlled clinical trials in which 1359 people with essential hypertension were allocated to home blood pressure monitoring and 1355 to a “control” group followed in the healthcare system for 2-36 months. It found that “self” blood pressure monitoring at home results in better blood pressure control and greater achievement of blood pressure targets than “usual” blood pressure monitoring in the healthcare system. The size of the difference is rather small from the clinical viewpoint: 2.2/1.9 mm Hg (when allowing for publication bias), with a 10% greater proportion on target. However, this may represent a useful adjunctive improvement in the management of hypertension, likely to contribute to a better outlook for cardiovascular events. The main inclusion criterion in the study was that participants had undertaken blood pressure monitoring at home either by themselves or with the aid of a family member. As this is the likely scenario for implementation in a population setting, the results of the meta-analysis could be applicable to the general population of people with mild to moderate essential hypertension.

Implications

What is already known on this topic:

Blood pressure is usually measured and monitored in the healthcare system by health professionals. With the introduction and validation of new electronic devices, self blood pressure monitoring at home is becoming increasingly popular. No evidence exists as to whether use of home monitoring is associated with better control of high blood pressure.

What this study adds:

Patients who monitor their blood pressure at home have a lower “clinic” blood pressure than those whose blood pressure is monitored in the healthcare system. A greater proportion of them also achieve blood pressure targets when assessed in the clinic.

The authors conclude that blood pressure monitoring by patients at home is associated with better blood pressure values and improved control of hypertension compared with usual blood pressure monitoring in the healthcare system. As home blood pressure monitoring is now feasible, acceptable to patients, and reliable for most of them, it could be considered a useful, though adjunctive, practice to involve patients more closely in the management of their own blood pressure and to help manage their hypertension more effectively.

_______

Relationship between the Frequency of Blood Pressure Self-Measurement and Blood Pressure Reduction With Antihypertensive Therapy: 2006:

OLMETEL was conducted between February and October 2003 in 27 clinical practices in Germany. Patients adhering to the instructions for SMBP (at least two measurements daily) had a higher response to antihypertensive treatment with olmesartan medoxomil than those who were not adherent to these instructions. One explanation for the observed phenomenon is that patients who meticulously follow the instructions for SMBP may equally meticulously follow their physicians’ recommendation for antihypertensive drug intake, or vice versa. This means that once the physician is dealing with an a priori compliant patient, it may not necessarily make a difference whether the patient uses SMBP to achieve the intended BP-lowering effect, since the number of SMBP recordings is just an indicator of good compliance. Similarly, other authors have concluded that physicians should recommend home BP measurement to patients being treated with antihypertensive drugs because there is the possibility that home BP measurement might improve medication compliance. On the other hand, there is strong support for the notion that self-measurement per se increases compliance with antihypertensive therapy. This has been demonstrated in the Self-Measurement for the Assessment of the Response to Trandolapril study, which was performed in general practice and enrolled 1710 patients. Furthermore, not only did SMBP increase compliance compared with usual management, it also resulted in fewer clinic visits. The assumption that self-measurement increases compliance is also supported by other studies using home telemonitoring, which showed that the mean arterial pressure reduction in the telemedical patient group was superior to that observed in the usual care group (in whom an increase in mean arterial pressure was observed).
Whether SMBP per se resulted in improved compliance with antihypertensive therapy, or whether the number of recordings was an indicator of already existing compliance, remains to be determined. Furthermore, a threshold of at least five home BP readings per week was identified as correctly predicting response to olmesartan medoxomil treatment. Non-adherence to drug intake is one of the most common causes of treatment-resistant hypertension. Patients’ non-adherence to therapy is increased by misunderstanding of the condition or treatment, denial of illness because of lack of symptoms or perception of drugs as symbols of ill health, lack of patient involvement in the care plan, or unexpected adverse effects of medications. Therefore, any means to improve patient compliance should be welcome. BP telemonitoring not only may improve compliance but has also proven to be a very useful tool in the assessment and follow-up of BP in hypertensive patients.

__________

Changes in Home Versus Clinic Blood Pressure With Antihypertensive Treatments: A Meta-Analysis 2008:
The main findings of this meta-analysis are as follows: (1) the changes produced by antihypertensive drug treatments in home BP were 20% smaller than those of clinic BP, and the changes in clinic BP were linearly related to those of home BP; (2) the difference in the BP reduction between clinic and home BP was attributable to the difference in the baseline BP levels; (3) the changes in home SBP were intermediate between the changes of clinic and ambulatory SBPs (including 24-hour SBP, daytime SBP, and nighttime SBP); and (4) the differing effects on clinic and home BP were similar for calcium channel blockers, angiotensin converting enzyme inhibitors, and angiotensin II receptor blockers, and also for placebo or control groups. The final conclusion is that the reduction of home BP produced by antihypertensive drug treatment is about 80% of the magnitude of the reduction of clinic BP.
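The roughly 80% relationship reported above can be expressed as simple arithmetic. The sketch below is illustrative only; the function name is invented here, and the fixed 0.8 ratio comes from the pooled estimate, not from any published tool:

```python
# Illustrative sketch of the pooled finding: the change in home BP produced
# by antihypertensive treatment is roughly 80% of the change in clinic BP.
# The function name and fixed ratio are for illustration only.

def expected_home_bp_change(clinic_change_mmhg: float, ratio: float = 0.8) -> float:
    """Estimate the home BP change implied by an observed clinic BP change."""
    return clinic_change_mmhg * ratio

# A 10 mm Hg fall in clinic SBP would correspond to roughly an 8 mm Hg
# fall in home SBP.
print(expected_home_bp_change(-10.0))  # → -8.0
```

Note that this is a population-level regularity from the meta-analysis, not a rule for converting an individual patient's readings.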

________

Does self-monitoring reduce blood pressure? Meta-analysis with meta-regression of randomized controlled trials: 2010:

Randomised controlled trials (RCTs) that compared self-measurement of blood pressure without professional intervention against usual care (not including patient self-monitoring) were eligible for inclusion in the review. Eligible studies had to report self-measured blood pressure and independently measured blood pressure (either systolic or diastolic office pressure, or ambulatory monitoring expressed as mean daytime ambulatory pressure). Where reported, included studies assessed automated (40%), manual (20%), digital/electronic (20%) and semi-automated (8%) measurement devices. Four studies made no adjustment for self-measured readings and six made adjustments (usually 5/5 mm Hg); the other studies did not report any information regarding adjustments. Control groups were mostly usual or routine care; three studies used drug treatment as a control. Most of the included studies reported a target office blood pressure of 140/85–95 mm Hg. The authors concluded that self-monitoring of blood pressure in adults reduced blood pressure by a small but significant amount. Evidence of significant heterogeneity could not be explained by meta-regression.

______

Home Blood Pressure Monitoring in the Diagnosis and Treatment of Hypertension: A Systematic Review: 2010:

Sixteen studies in untreated and treated subjects assessed the diagnostic ability of SMBP by taking AMBP as reference. Seven randomized studies compared SMBP vs. office measurements or AMBP for treatment adjustment, whereas many studies compared SMBP with office measurements in assessing antihypertensive drug effects. Several studies with different designs investigated the role of SMBP vs. office measurements in improving patients’ compliance with treatment and hypertension control rates. The evidence on the cost-effectiveness of SMBP is limited. The studies reviewed consistently showed moderate diagnostic agreement between SMBP and AMBP, and superiority of SMBP over office measurements in diagnosing uncontrolled hypertension, assessing antihypertensive drug effects and improving patients’ compliance and hypertension control. Preliminary evidence suggests that SMBP has the potential for cost savings. There is conclusive evidence that SMBP is useful for the initial diagnosis and the long-term follow-up of treated hypertension. These data are useful for the optimal application of SMBP, which is widely used in clinical practice. More studies on the cost-effectiveness of SMBP are needed.

______

Role of Home Blood Pressure Monitoring in Overcoming Therapeutic Inertia and Improving Hypertension Control: A Systematic Review and Meta-Analysis: 2011:

The authors conclude that a small but significant improvement in all BPs, systolic, diastolic, or mean, results when home BP monitoring is used. However, simply monitoring home BP is of little value if the patients or their physicians do not act on the results. When home BP monitoring is accompanied by specific programs to treat elevated BP, such as through titration of antihypertensive drugs, it can result in more meaningful change in BP. Compared with no program to titrate antihypertensive therapy, programs that incorporate a strategy for titrating antihypertensive therapy, such as through telemonitoring, may provide even better hypertension control. Larger studies are warranted among hemodialysis patients, for whom this strategy may be particularly beneficial.

_______

Sensitivity and specificity in the diagnosis of hypertension with different methods: 2011:

OBJECTIVE: To evaluate sensitivity and specificity of different protocols for blood pressure measurement for the diagnosis of hypertension in adults.
METHODS: Cross-sectional study conducted in a non-probabilistic sample of 250 public servants of both sexes aged 35 to 74 years in Vitória, southeastern Brazil, between 2008 and 2010. The participants had their blood pressure measured using three different methods: clinic measurement, self-measurement and 24-hour ambulatory measurement. They were all interviewed to obtain sociodemographic information and had their anthropometric data (weight, height, waist circumference) collected. Clinic measurement and self-measurement were analyzed against the gold-standard ambulatory measurement. Measures of diagnostic performance (sensitivity, specificity, accuracy and positive and negative predictive values) were calculated. The Bland & Altman method was used to evaluate agreement between ambulatory measurement (standard deviation for daytime measurements) and self-measurement (standard deviation of four measurements). A 5% significance level was used for all analyses.
RESULTS: Self-measured blood pressure showed higher sensitivity (S=84%, 95%CI 75;93) and overall accuracy (0.817, p<0.001) in the diagnosis of hypertension than clinic measurement (S=79%, 95%CI 73;86, and overall accuracy=0.815, p<0.001). Despite the strong correlation with daytime ambulatory measurement values (r=0.843, p<0.001), self-measured values did not show good agreement with daytime systolic ambulatory values (bias=5.82, 95%CI 4.49;7.15). Seven (2.8%) cases of white coat hypertension, 26 (10.4%) of masked hypertension and 46 (18.4%) of white-coat effect were identified.
CONCLUSIONS: The study shows that self-measured blood pressure has higher sensitivity than clinic measurement to identify true hypertension. The negative predictive values found confirm the superiority of self-measurement over clinic measurement in identifying truly normotensive individuals. However, clinic measurement cannot be replaced by self-measurement, as it is still the most reliable method for the diagnosis of hypertension.
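For readers less familiar with the diagnostic performance measures quoted in this abstract, the short sketch below computes them from a 2 × 2 table against a gold-standard diagnosis. The function name is invented here and the counts are hypothetical, not the study's data:

```python
# Illustrative computation of the diagnostic performance measures used in the
# study (sensitivity, specificity, accuracy, predictive values), with
# ambulatory measurement as the reference standard. Counts are hypothetical.

def diagnostic_performance(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Summarize a 2x2 confusion matrix against a gold-standard diagnosis."""
    return {
        "sensitivity": tp / (tp + fn),            # true hypertensives detected
        "specificity": tn / (tn + fp),            # true normotensives detected
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "ppv": tp / (tp + fp),                    # positive predictive value
        "npv": tn / (tn + fn),                    # negative predictive value
    }

# Hypothetical table: 84 of 100 true hypertensives and 120 of 150 true
# normotensives correctly classified by self-measurement.
perf = diagnostic_performance(tp=84, fp=30, fn=16, tn=120)
print(round(perf["sensitivity"], 2))  # → 0.84
```

A high NPV, as emphasized in the conclusion, means a negative self-measurement result makes true hypertension unlikely, which is why self-measurement is good at confirming normotension.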

________

Cardiovascular outcomes in the trial of antihypertensive therapy guided by self-measured home blood pressure: 2012:

The multicenter Hypertension Objective Treatment Based on Measurement by Electrical Devices of Blood Pressure (HOMED-BP; 2001–2010) trial involved 3518 patients (50% women; mean age 59.6 years) with an untreated systolic/diastolic HBP of 135–179/85–119 mm Hg. In a 2 × 3 design, patients were randomized to usual control (UC; 125–134/80–84 mm Hg) vs. tight control (TC; <125/<80 mm Hg) of SMBP, and to initiation of drug treatment with angiotensin converting enzyme inhibitors, angiotensin receptor blockers or calcium channel blockers.

1. In the study, 3518 hypertensive subjects were followed for up to 10 years by 300 general practitioners. This study showed that SMBP was used without difficulty and was readily accepted by practitioners and patients.

2. The assessment of nocturnal BP is of major clinical relevance because of its demonstrated prognostic value. The Ohasama study investigators developed an SMBP device that can monitor nocturnal BP during sleep. Such devices are now used in epidemiological surveys, large-scale intervention trials and clinical pharmacology studies in Japan.  

3. Although there was no difference among the groups in a 2 × 3 study design, the risk of the primary endpoint independently increased by 41% and 47% for a 1 s.d. increase in baseline and follow-up systolic HBP, respectively, in all patients combined. The 5-year risk was <1% if the on-treatment systolic HBP was 131.6 mm Hg. The HOMED-BP study proved the feasibility of adjusting antihypertensive drug treatment based on HBP and suggested that a systolic HBP level of 130 mm Hg should be an achievable and safe target.

4. More recently, the HOMED-BP study proved that adjusting antihypertensive drug treatment on the basis of blood pressure values collected through home blood pressure telemonitoring (HBPT) is feasible and effective for maintaining an optimal target blood pressure level and optimal antihypertensive medication.

_______

How hypertensive patients in the rural areas use home blood pressure monitoring and its relationship with medication adherence: A primary care survey in China: 2013:

Despite an increasing popularity of home blood pressure monitoring (HBPM) over the last few decades, little is known about HBPM use among hypertensive patients in rural areas. A cross-sectional survey including 318 hypertensive patients was conducted in a rural community in Beijing, China, in 2012. Participants were mainly recruited from a community health clinic and completed questionnaires assessing HBPM usage. Binary logistic regression models were used for the analysis of medication adherence with age, gender, level of education, marital status, perceived health status, duration of hypertension, HBPM use, and frequency of performing BP measurement. Among the total population, 78 (24.5%) reported current use of HBPM. Only 5.1% of the HBPM users cited doctor’s advice as the reason for using HBPM. Analysis of the risk factors of poor medication adherence by multivariable modeling indicated significant associations between the duration of hypertension (adjusted OR, 3.31; 95% CI, 1.91-5.72; P < 0.001), frequency of performing BP measurements (adjusted OR, 2.33; 95% CI, 1.42-3.83; P < 0.001) and medication adherence. The authors found that most use of HBPM was without the involvement of a doctor or nurse. Further study is required to understand whether HBPM is effective and what role health professionals should play in its use for improved hypertension control.

_______

Self-Measured Blood Pressure Monitoring: Comparative Effectiveness: Systematic review of 52 comparative studies in 2013:

The primary objective of this review is to evaluate whether the use of SMBP monitoring influences outcomes in adults and children with hypertension, and to what extent these changes in outcomes can be attributed to the use of self-monitoring devices alone or to the use of SMBP plus additional support or attention. The intention of this report is to inform physicians’ decision making as to whether to encourage the use of SMBP monitoring alone or along with additional support, and to assist health care policymakers and payers with decisions regarding coverage and promotion of SMBP monitoring. This review identified 52 comparative studies that examined the impact of SMBP with or without additional support in the management of hypertension. Overall, the benefit of SMBP for BP reduction appears to be modest and is not consistent across studies. The authors examined the role of additional support in combination with SMBP by setting up comparisons as: (1) SMBP alone versus usual care; (2) SMBP plus additional support versus usual care; and (3) SMBP plus additional support versus SMBP with no additional support or less intense additional support. Twenty-four trials compared SMBP alone versus usual care. Meta-analysis showed a statistically significant reduction in clinic SBP and DBP (SBP/DBP 3.1/2.0 mm Hg) at 6 months but not at 12 months. Only one RCT reported follow-up beyond 12 months; findings indicated significant reductions in SBP and DBP at 24 months in favor of SMBP. The comparison of SMBP plus additional support versus usual care was examined in 24 studies, with 11 of 21 randomized trials and 2 of 3 nonrandomized studies reporting a statistically significant benefit in BP reduction favoring SMBP plus additional support. Four studies provided results after 12 months. Twelve trials compared SMBP plus additional support (or more intense additional support) versus SMBP without additional support (or plus less intense additional support).
Only four of these trials reported a significantly greater reduction in BP in the SMBP plus additional (or more intense) support groups. Two studies provided results beyond 12 months. Both reported findings that were non-significant or of uncertain statistical significance. Tracking blood pressure at home helped patients with hypertension keep it under control, at least over the short term, a meta-analysis determined. Pooled study results pointed to 3.9/2.4 mm Hg lower blood pressure on average with self-monitoring at 6 months compared with usual care based on in-clinic monitoring alone. That impact would be clinically relevant on a population level if sustained over time. For example, a decrease of 2 or 5 mm Hg in systolic blood pressure in the population has been estimated to result in mortality reductions of 6% or 14% due to stroke, 4% or 9% due to coronary heart disease, and 3% or 7% due to all causes. While the impact of home monitoring alone diminished to a non-significant 1.5/0.8 mm Hg reduction by 12 months, additional support, such as education or counseling, kept the effect going. SMBP with or without additional support may confer a small benefit in BP control compared with usual care, but the BP effect beyond 12 months and the attendant long-term clinical consequences remain unclear. Given clinical heterogeneity and limited head-to-head comparisons, the evidence limits the authors’ ability to draw definitive conclusions about the incremental effect of any specific additional support. Future research should standardize patient inclusion criteria, BP treatment targets for home BP, and SMBP and additional support protocols to maximize the interpretability and applicability of SMBP trials. For the current report, the authors reviewed 52 published studies in which patients monitored their blood pressure with and without assistance. Such help ranged from educational materials to contact with a nurse or pharmacist or counseling over the telephone.
They found some evidence that monitoring blood pressure at home improved control at six months, but not at 12 months. When patients got help, either through educational material or direct contact with medical professionals, home monitoring improved blood pressure control at both six and 12 months. From these data, the authors concluded that home blood pressure monitoring is effective in the short term.

________

Self-Monitoring Blood Pressure lowers Cardiovascular Risk in Hypertension: Self-Monitoring of BP reduces Hypertension and Stroke Risk: 2014:
After participating in a self-management program, hypertensive patients at high risk for cardiovascular disease had lower systolic blood pressure compared to those who received standard care, according to the results of the Phase III TASMIN-SR trial published August 27, 2014, in JAMA. Researchers from the University of Oxford in the United Kingdom studied 552 patients aged at least 35 years with hypertension and a history of stroke, coronary heart disease, diabetes, or chronic kidney disease. The patients had baseline blood pressures of at least 130/80 mm Hg and were treated at 59 primary care practices across the United Kingdom between March 2011 and January 2013. Those in the intervention group were instructed to monitor their own blood pressure and adjust their own medication using an individualized self-titration algorithm, while those assigned to the control group received usual care, which included seeing their clinician for routine blood pressure measurements and receiving medication adjustments as necessary. Although the previous Phase II TASMINH2 trial deemed this method effective, the research team “wanted to develop the intervention and trial it in higher risk patients,” lead study author Richard J. McManus, PhD, FRCGP said. Baseline blood pressure was similar in the two groups: 143.1/80.5 mm Hg in the intervention group and 143.6/79.5 mm Hg in the control group. After 12 months, mean blood pressure had fallen in both groups, but the decline was greater with self-management: 128.2/73.8 mm Hg in the intervention group versus 137.8/76.3 mm Hg in the control group.
The study authors noted the results were comparable in all subgroups and no excess of adverse events was observed. “We thought that older patients with more comorbidities might not do as well as younger patients, but, in fact, we got better results: 9.2 mm Hg difference versus 5.4 mm Hg difference in systolic blood pressure in TASMIN-SR versus the TASMINH2 trial,” Dr. McManus said when asked about the study’s surprising findings. As a result, the researchers concluded self-monitoring is a viable option for the long-term treatment of hypertension in patients with high cardiovascular disease risk. “A group of high-risk individuals…are able to self-monitor and self-titrate antihypertensive treatment following a pre-specified algorithm developed with their family physician and that, in doing so, they achieved a clinically significant reduction in systolic and diastolic blood pressure without an increase in adverse events,” the study authors wrote. “This is a population with the most to gain in terms of reducing future cardiovascular events from the optimized blood pressure control.” Thus, Dr. McManus urged health care professionals to “consider self-management as an effective approach for lowering blood pressure safely” in patients with “above-target blood pressure and cardiovascular comorbidity.” Patients at risk for hypertension and stroke who self-monitor and adjust their medication from home could reduce their risk of stroke by 30% and significantly lower their blood pressure after 12 months, according to a recent study.

__________

Blood pressure self-monitoring could prevent tens of thousands of unnecessary deaths every year: A study:

Experts say that despite the availability of effective drugs, controlling high blood pressure in health centers and GP practices is poor because of infrequent monitoring and reluctance by doctors to increase medication (therapeutic inertia). Often patients do not take their drugs properly.

1. Portable system allows people to send their readings to medical staff

2. Doctors check figures and can contact the patient to discuss their health

3. Trial found significant drop in blood pressure among people using system

4. Each year there are 62,000 unnecessary deaths in the UK due to poor blood pressure control

SMBP overcomes therapeutic inertia, improves patient compliance, controls BP and saves lives.

__________

__________

Negative reports about SMBP: 

Blood Pressure Monitoring Kiosks aren’t for Everyone: FDA warning:

Convenience can come with tradeoffs. The next time you put your arm in the cuff at a kiosk that measures blood pressure, you could get an inaccurate reading unless the cuff is your size. Correct cuff size is a critical factor in measuring blood pressure. Using a too-small cuff will result in an artificially high blood pressure reading; a too-large cuff may not work at all or result in an inaccurately low blood pressure reading. The Food and Drug Administration (FDA) is advising consumers that blood pressure cuffs on public kiosks don’t fit everyone and might not be accurate for every user. These desk-like kiosks for checking blood pressure are available in many public places—pharmacies, grocery and retail stores, gyms, airports, hair salons and even cafeterias. They are easily accessible and easy to use. But it’s misleading to think that the devices are appropriate for everybody. They are not one-size-fits-all. Other factors, including how someone uses a device, might cause an inaccurate reading. The user might not have placed the cuff on his arm properly or might not be sitting properly. These things will affect accuracy. That’s why people shouldn’t overreact to any one reading from a kiosk. Hypertension isn’t diagnosed solely based on one reading. Inaccurate blood pressure measurements can lead to the misdiagnosis of hypertension or hypotension (low blood pressure), and people who need medical care might not seek it because they are misled by those inaccurate readings.

_

Blood Pressure Self-Measurement in the Obstetric Waiting Room:

The authors observed the ability of 81 pregnant women with diabetes to self-measure blood pressure correctly in the waiting room during a 4-week observational descriptive study. Specifically, they investigated the level of patient adherence to six recommendations with which patients are instructed to comply in order to obtain a reliable blood pressure reading. They found that the patients did not adhere to the given instructions when performing blood pressure self-measurement in the waiting room. None of the 81 patients adhered to all six investigated recommendations, while around a quarter adhered to five of the six recommendations. The majority followed four or fewer of the recommendations. The results indicate that unsupervised self-measurement of blood pressure is not a reliable method. Thus, there is a need for increased staff presence and patient training or, alternatively, for introducing improved technology support. This could include context-aware patient adherence aids and clinical decision support systems for automatically validating self-measured data based on e-health and telemedicine technology.

__________

Blood pressure measurements in epidemiological/observational studies:

Very comprehensive research on population blood pressure exists throughout the world. These studies are essential for defining hypertension prevalence, awareness and treatment in any geographical region/country. A change in population blood pressure of 2 mmHg in systolic blood pressure translates to a change in stroke mortality of ten percent and coronary heart disease mortality of seven percent (Lewington et al. 2002). Therefore, data on progression from normotension to prehypertension and hypertension are very important in epidemiological research. The data have documented that prehypertension carries an increased risk for cardiovascular morbidity and mortality, and a high risk for progression to sustained hypertension (Hansen et al. 2007a, Julius et al. 2006). In this respect, changes from normotension to prehypertension are as important as the observation of hypertension itself. Reliable data are heavily dependent on blood pressure measurements carried out meticulously by properly trained personnel and with precise equipment. For this, adherence to a standardised technique over time is crucial. Findings of changes in population blood pressure are only meaningful if they are ascertained to be true differences and not related to a change in methods applied. Nearly all results on population blood pressure have been obtained by the use of a standard mercury sphygmomanometer by well-trained health personnel (Cutler et al. 2008). Despite this, the readings are not without observer bias and end-digit preference. In an attempt to minimise observer bias and end-digit preference, a number of highly recognized epidemiological research institutions have used the Random Zero Mercury Sphygmomanometer, where the reader has to subtract a random chosen magnitude of mmHg (from 0 to 20 mmHg) at the very end of the measurement. 
Despite minimising observer bias, the equipment has been shown to slightly underestimate the “true” blood pressure level as obtained by the use of a standard mercury manometer (Yang et al. 2008). Another approach that has been employed is the “London School of Hygiene Sphygmomanometer” (Andersen and Jensen 2007), where the reader is blinded to the mercury column but has to tap a button when they hear the first and the last Korotkoff sounds (phase 1 and phase 5). In recent years, 24-hour ambulatory blood pressure measurements have been introduced in population studies and comprehensive databases have been constructed, e.g., the IDACO database on population studies with contributions from many parts of the world (Hansen et al. 2007b). All these studies have convincingly shown that 24-hour ambulatory blood pressure measurements determined with oscillometric devices (at approximately 80 readings over 24 hours) are superior for prediction of cardiovascular morbidity and mortality as compared to a few measurements of blood pressure performed in clinical conditions with a standard mercury sphygmomanometer. In almost all these studies, although not exclusively, the comparator has been the standard mercury sphygmomanometer (Hansen et al. 2007b). Research into normal values for home blood pressure and the prognostic implication is less comprehensive. This research has been almost exclusively carried out with automatic oscillometric devices, with measurements being compared to the mercury sphygmomanometer. Data are accumulating showing that the predictive prognostic value of a certain number of home blood pressure readings is superior to a single or a few blood pressure readings performed in a clinic using a mercury sphygmomanometer (Sega et al. 2005). The home readings reflect a more precise estimation of the actual blood pressure levels over many readings as compared to few readings in the clinical setting.
So far, comparisons of measurements obtained with mercury sphygmomanometer versus oscillometric automatic devices, obtained in the same clinical setting for determination of population blood pressure and prognostic implications, are missing. However, in the PAMELA study, three clinic readings with a mercury sphygmomanometer were compared to two home blood pressure oscillometric readings (Sega et al. 2005). As expected, the clinical readings were somewhat higher, but the prognostic implication was not that much different. In long-term outcome clinical trials, usually running for three to five years, mercury sphygmomanometers have been used as the gold standard for office blood pressure measurement. In some recent trials (the HOT study, the ASCOT study and the ONTARGET study) automatic oscillometric devices were used (Dahlöf et al. 2005, Hansson et al. 1998, Yusuf et al. 2008). In some of these studies it was shown that small differences in measured blood pressure can already have an impact on cardiovascular diseases. There is rapidly growing information on normal values and the prognostic implications of 24-hour ambulatory blood pressure measurements with oscillometric devices, while knowledge on self/home blood pressure measurements with oscillometric devices is less substantial. So far, a direct comparison between clinic blood pressure and prognostic implication based on measurements carried out with mercury sphygmomanometer and those with automatic oscillometric devices is lacking. In conclusion, the vast majority of information on population blood pressure (secular trends, progression to hypertension and prognostic implications, and also the benefits from treatment-induced blood pressure reduction in terms of cardiovascular events prevention) has so far been obtained with the use of mercury sphygmomanometers.
Reliable data on changes in population blood pressure level, incidence and prevalence of hypertension, awareness and treatment, derived from follow-up studies are dependent on the use of consistent and trustworthy methods. It can be expected that epidemiological/observational studies in the future will comprise repetitive blood pressure measurements at home carried out with well-calibrated, well-validated automatic oscillometric equipment. For the moment, mercury sphygmomanometers are essential for such validation of newly developed blood pressure measurement devices. Otherwise, the conclusions based on the results of long-term epidemiological studies on changes in population blood pressure may be seriously jeopardized.

___________

SMBP and telemonitoring:

BP measurement and monitoring are critical for BP management. Traditionally, BP measurement has been performed by physicians or nurses in office-based care settings. Office BP measurements are subject to error related to the patient’s reaction to the measurement procedure, a phenomenon known as the ‘‘white coat effect.’’ Measurement of BP at home is not impacted by this effect and can therefore provide more stable and reproducible BP measures, which can be of greater prognostic value. In addition, home BP measurements have been shown to reflect true BP more reliably than office readings and to correlate better with end-organ damage. Moreover, home BP measurement has the added value of providing clinically relevant information between office visits and, therefore, can be a more consistent source of information to help manage BP and associated risks. Therefore, hypertension management guidelines recommend home or ‘‘self’’ BP monitoring (SMBP) in the management of hypertension. SMBP can be manually measured and recorded by the patient or electronically transmitted to a healthcare provider. The technological advances in BP telemonitoring have been brought about by the availability of valid and easy-to-use BP devices that use automated oscillometric tools. Further, the technology allows automatic transmission of BP data to primary care providers. Several studies have demonstrated the feasibility, accuracy, patient compliance, and satisfaction with BP telemonitoring in managing hypertension.

_

Systematic review of 15 studies on BP telemonitoring:

Authors searched five databases (PubMed, CINAHL, PsycINFO, EMBASE, and ProQuest) from 1995 to September 2009 to collect evidence on the impact of blood pressure (BP) telemonitoring on BP control and other outcomes in telemonitoring studies targeting patients with hypertension as a primary diagnosis. Fifteen articles met their review criteria. Authors found that BP telemonitoring resulted in reduction of BP in all but two studies; systolic BP declined by 3.9 to 13.0 mm Hg and diastolic BP declined by 2.0 to 8.0 mm Hg across these studies. These magnitudes of effect are comparable to those observed in efficacy trials of some antihypertensive drugs. Although BP control was the primary outcome of these studies, some included secondary outcomes such as healthcare utilization cost. Evidence of the benefits of BP telemonitoring on these secondary outcomes is less robust. Compliance with BP telemonitoring among patients was favorable, but compliance among participating healthcare providers was not well documented. This systematic review of 15 studies concluded that home BP telemonitoring is feasible in the management of hypertension.

_

Effects of BP self-measurement and telemedicine communication on physician prescribing habits: 2012:

This was a secondary analysis of a telemedicine trial of 241 patients with uncontrolled hypertension (BP >150/90 mmHg). Patients from two large medical centers were recruited and randomized to usual care (control group C, N=121) or telemedicine with usual care (T, N=120). The T group was provided a digital sphygmomanometer and training, along with CVD risk reduction counseling. They were instructed to report their BP, HR, weight, steps/day, and tobacco use twice weekly for 6 months. All patients had baseline and 6-month follow-up visits. Monthly reports on blood pressure and treatment guidelines were provided to both the patient and physician in the T group. At the end of the study, patients’ anti-hypertensive medications were compared to their baseline therapy. Patients in the telemedicine group were more likely to be prescribed more anti-hypertensive medications during the study. This may indicate that patient involvement in self-reporting via telemedicine changes the information available to the physician in a way that leads to more appropriate and effective pharmacotherapy, better blood pressure control, and an overall reduction in cardiovascular risk.

_

Two Model Hypertension Care:

_

Telemonitoring of home BP is the most effective way to lower Blood Pressure: 2013:

Patients with uncontrolled blood pressure can significantly improve their health using a new home self-monitoring system called telemonitoring, according to a new study in the British Medical Journal (BMJ). The research showed that patients with this condition, which is usually difficult to treat with drugs alone, can greatly benefit from this portable system, which enables them to record and send their own blood pressure readings straight to doctors in real time. These figures are then checked online by doctors and nurses, who contact patients if they need to discuss their health and treatment. Over the previous 15 years, similar systems had been tested on a small scale; however, this study is the first to examine their incorporation into frontline primary care, the experts, from the University of Edinburgh, explained. Patients who used telemonitoring required more medical time and attention than those who did not, the results also showed. The patients who used telemonitoring experienced a bigger reduction in their blood pressure than those who did not. “The drop in blood pressure was helped mainly by encouraging doctors to prescribe and patients to accept more prescriptions of anti-high blood pressure drugs,” the authors pointed out. On the other hand, telemonitoring had little effect on lifestyle changes, including weight control and salt consumption. Although effective drugs are available, control of high blood pressure in health centers and GP practices is often poor: monitoring is insufficient, and clinicians are unwilling to increase treatment. Patients who do not take their medication as they should can also experience complications because their blood pressure will remain high.
Professor Brian McKinstry, of the University of Edinburgh’s Centre for Population Health Sciences, concluded:    “We found that the use of supported telemonitoring in patients who manage their high blood pressure at home produces important reductions in blood pressure. We believe that telemonitoring has the potential to be implemented in many healthcare settings. Before this happens however, we would recommend testing it out on a much larger scale so that we can see that the reduction in blood pressure over six months can be achieved in the longer term and that it is cost effective.”

_________

In a nutshell, what do all studies on SMBP find?

_

Home BP and prognosis:

The prognostic significance of home BP has been reported to be comparable to, or slightly better than, that of AMBP. The high prognostic significance of home BP is considered to be derived from the stability of BP information. Evidence has also shown that home BP reflects target organ damage with similar or higher reliability than AMBP. AMBP provides data on short-term BP variability every 15–30 min, and these values are reported to have prognostic significance. The day-to-day variability of BP detected by home BP measurements has also been reported to predict the risk of cerebrovascular and cardiovascular diseases. Heart rate measured simultaneously with home BP also has a prognostic significance.

_

Home BP and clinical pharmacology of antihypertensive drugs:

As home BP provides a stable mean value and ensures high reproducibility, it is extremely effective for the evaluation of drug effects and their duration. Home BP eliminates the placebo effect and records the responses to antihypertensive drugs more accurately than AMBP, and, as such, is considered optimal for evaluating the effects of antihypertensive drugs. Consequently, home BP reduces the number of subjects necessary for the evaluation of drug effects compared with AMBP, and markedly reduces the number necessary when compared with clinic BP. Evaluation of the duration of drug effects has been considered possible by the use of the trough/peak (T/P) ratio based on AMBP. However, as the reproducibility of AMBP is not always adequate, the reproducibility of the T/P ratio is also unsatisfactory. It has recently been reported that the morning/evening (M/E) or evening/morning (E/M) ratio obtained from home BP measurements is very effective in evaluating the duration of drug effects.
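As a concrete sketch of the two duration-of-action indices just described (the numbers and function names are illustrative, not from any trial):

```python
def trough_peak_ratio(trough_effect, peak_effect):
    """Trough/peak (T/P) ratio: the BP fall at the end of the dosing
    interval (trough) divided by the maximal BP fall (peak). A ratio
    close to 1 indicates a smooth, sustained 24-hour drug effect."""
    return trough_effect / peak_effect

def morning_evening_ratio(morning_effect, evening_effect):
    """M/E ratio from home BP: with morning dosing, the next-morning
    (pre-dose) effect approximates the trough and the evening effect
    the peak, making M/E a home-BP analogue of the T/P ratio."""
    return morning_effect / evening_effect

# Illustrative values: a drug lowering systolic BP by 15 mm Hg at peak
# but only 9 mm Hg at trough has a T/P ratio of 0.6.
print(trough_peak_ratio(9, 15))
```

Because home BP is taken at fixed times of day, the M/E (or E/M) ratio can be computed from routine self-measurements without a 24-hour recording, which is what makes it attractive for evaluating drug duration.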

_

Home BP and telemedicine:

With the advance of devices for home BP measurements, BP values have begun to be stored as electronic data. As a result, such data have been transmitted via telephone lines or the internet, and are widely used for decision making and clinical pharmacological evaluations.  Improvements in BP control by means of such telemedicine have been reported. 

_

Home BP and BP control:

The Japanese and international guidelines recognize home BP measurements as an optimal tool for long-term BP control. The introduction of home BP measurements in the diagnosis and treatment of hypertension facilitates the attainment of goal BP compared with BP management based on clinic BP alone. By implementing antihypertensive therapy according to home BP, the goal BP can be achieved sooner. BP control has been reported to be improved by combining home BP measurements with behavioral therapy. Home BP measurements also reduce the frequency of clinic consultations and increase the rate of participation in medical treatment. As home BP is measured and interpreted by the patients themselves, the possibility of self-regulation of antihypertensive medication according to home BP has become relevant in hypertension management.

_

Home BP and adherence: 

Home BP measurements require an active commitment by the patients themselves to medical care and health management, and result in a marked improvement in adherence to medication. High adherence to home BP measurements has also been reported to improve BP control. Patients with high adherence to home BP measurements have also shown high adherence to exercise or dietary interventions.

_

Home BP and seasonal changes in BP:

Unlike AMBP, home BP is effective in evaluating long-term changes in BP. For example, home BP can detect seasonal variations in BP. The monitoring of seasonal changes in home BP facilitates the titration of antihypertensive drugs.

_

Home BP and physiological & pathophysiological conditions:

Home BP can detect slight changes in BP mediated by modifications in lifestyle or by exposure to stress, as well as small changes in BP in response to antihypertensive drugs. For example, home BP can detect the depressor effect caused by the intake of fruits and vegetables in a population or by physical training, the hypertensive response to passive smoking in a population, the relationship between parental longevity and low BP in children, the relationship of combinations of hypertension candidate genes with the incidence of hypertension, and so on. In a crossover study of calcium supplementation assessed by office, home and ambulatory BPs, the small reduction in BP was significant only for home BP. Serial measurements of home BP also detected time-related biphasic changes in morning and evening BPs with alcohol consumption and restriction in hypertensive patients. Therefore, home BP measurements provide an excellent index for the evaluation of BP changes in individuals and for the comparison of BP among individuals and groups. In particular, the reliability and precision of BP as a phenotype are determinants of the results of gene-related studies, and home BP is considered to be extremely useful in such studies.

_

Home BP detects morning hypertension, a risk factor for cardiovascular events:

Patients on antihypertensive medication who have high blood pressure (SBP >145 mmHg) in the morning, as measured with home monitoring kits, are at increased risk of cardiovascular events, even if their clinic measurement is acceptable, researchers have found.  

_________

Ambulatory measurement of blood pressure (AMBP):

_

Ambulatory BP measurement:

_

Ambulatory measurement of blood pressure (AMBP) involves measuring blood pressure (BP) at regular intervals (usually every 20–30 minutes) over a 24-hour period while you carry on with normal daily activities. AMBP has the additional advantage of measuring your BP during sleep, and it is now known that night-time BP may give valuable information. Your AMBP is measured with a small monitor, worn in a pouch on a belt and connected to a cuff on your upper arm. This cuff inflates and deflates regularly, measuring the systolic (upper) and the diastolic (lower) blood pressure as well as your average blood pressure and heart rate. AMBP is safe and free of complications, apart from occasional discomfort when the cuff is inflating. Occasionally there may be slight bruising of the arm. Modern machines are light, quiet and easy to wear but can sometimes disturb sleep.

_

Upper limit of normal ambulatory blood pressure monitoring values:

Normal ambulatory blood pressure during the day is <135/85 mm Hg and <120/70 mm Hg at night. Levels above 135/85 mm Hg during the day and 125/75 mm Hg at night should be considered as abnormal.
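Applied literally, the limits above amount to a simple check, sketched below (a hypothetical helper written for illustration, not taken from any guideline software):

```python
def ambulatory_bp_flags(day_sbp, day_dbp, night_sbp, night_dbp):
    """Flag average ambulatory BP values (mm Hg) against the limits
    quoted above: daytime abnormal above 135/85, night-time abnormal
    above 125/75. Night-time averages between 120/70 (upper normal)
    and 125/75 fall in a borderline zone not flagged here."""
    flags = []
    if day_sbp > 135 or day_dbp > 85:
        flags.append("daytime abnormal")
    if night_sbp > 125 or night_dbp > 75:
        flags.append("night-time abnormal")
    return flags or ["within limits"]

print(ambulatory_bp_flags(130, 80, 118, 68))   # within limits
print(ambulatory_bp_flags(142, 88, 130, 78))   # both abnormal
```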

_

Dippers and non-dippers:

1. Blood pressure will fall at night in normotensive individuals. People who undergo this normal physiological change are described as ‘dippers’.

2. In ‘non-dippers’ the blood pressure remains high, i.e. less than 10% lower than daytime average. There is also the phenomenon of ‘reverse dippers’ whose blood pressure actually rises at night. Both these conditions have also been reported to be associated with a poor outcome.
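The classification above reduces to the percentage fall in average BP from day to night, as in this sketch (illustrative numbers; guideline definitions also name an ‘extreme dipper’ category not used here):

```python
def dipping_status(day_avg_sbp, night_avg_sbp):
    """Classify the nocturnal fall in average systolic BP (mm Hg)
    using the conventional 10% cut-off described above."""
    fall_pct = (day_avg_sbp - night_avg_sbp) / day_avg_sbp * 100
    if fall_pct < 0:
        return "reverse dipper"   # BP actually rises at night
    if fall_pct < 10:
        return "non-dipper"       # fall of less than 10%
    return "dipper"               # normal nocturnal fall of 10% or more

print(dipping_status(135, 118))   # dipper (~12.6% fall)
print(dipping_status(135, 128))   # non-dipper (~5.2% fall)
print(dipping_status(135, 140))   # reverse dipper
```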

_

The diagnosis and management of hypertension has traditionally been based on blood pressure measurements taken in the office. However, the inherent variability of blood pressure and its susceptibility to transient emotional influences in normotensive and hypertensive people undermine the ability of conventional clinical measurement to accurately reflect the usual level of blood pressure in some people. In contrast to other means of blood pressure assessment, including self-assessment, ambulatory monitoring provides information automatically and noninvasively about the effects of blood pressure load over time and under the various circumstances during which blood pressure is not usually measured (including work and sleep). Whereas self-assessments at home usually provide periodic measurements over many days and weeks, ambulatory monitoring provides numerous measurements over a period of hours, up to a day. Thus, the sampling of a person's blood pressure provided by the two means is quite different. Although the accuracy of ambulatory monitoring is less than optimum, technical errors are relatively small compared with errors in the estimate of true pressure based on a small number of clinic readings and can be minimized if a standard protocol is followed, including calibration with a mercury sphygmomanometer immediately before and after the readings are taken. It is important to note that even with excellent calibration there is substantial variability in the results of ambulatory monitoring when repeated after an interval of 2 to 8 weeks. Thus, monitoring may need to be done repeatedly to provide an average measure of a person's usual ambulatory blood pressure. The devices currently available vary in their reliability and accuracy.
Reference values for ambulatory monitoring in normotensive subjects are available from recent studies: daytime pressures range from 101/62 to 143/90 mm Hg, and a daytime average of 135/84 mm Hg corresponds to a clinic-based cut-off of 140/90 mm Hg. In view of the generally lower pressures obtained with ambulatory monitoring than at the clinic, patients with an average blood pressure of more than 135/84 mm Hg on ambulatory monitoring and without target-organ damage should be followed closely for the development of higher pressures or target-organ damage. To date, ambulatory blood pressure monitoring has been primarily a research tool and has not had an established clinical role in the diagnosis and management of hypertension. Nevertheless, some clinical problems are better elucidated by this technique than by casual blood pressure readings, and ambulatory monitoring is being used increasingly in clinical decision making. Its most important clinical application is the detection of white-coat hypertension. Estimates of the prevalence of this syndrome vary from 20% to 39%. Other clinical situations in which ambulatory monitoring might be of diagnostic value include borderline hypertension with target-organ involvement, episodic hypertension and resistant hypertension. Many studies have shown a closer correlation of target-organ involvement (particularly left ventricular hypertrophy) with pressures obtained through ambulatory monitoring than with those obtained at the clinic, and there is also evidence that left ventricular hypertrophy occurs much less frequently in patients with white-coat hypertension than in those with confirmed essential hypertension. Other studies have shown that pressures obtained from ambulatory monitoring at work and the percentage of daily blood pressure loads correlate more strongly with left ventricular hypertrophy than do pressures measured at the clinic. 
The results of ambulatory blood pressure monitoring also appear to be a more potent predictor of cardiovascular disease and death in patients with hypertension than are casual blood pressure readings. However, the evidence concerning the value of ambulatory blood pressure monitoring is not complete in some respects, and some procedural issues make its use less than straightforward. The main clinical trials of the benefits of lowering blood pressure have used measurements taken at the office or clinic to establish the diagnosis of hypertension and to gauge the effects of treatment. Ambulatory monitoring as a substitute has not been tested in studies large enough to determine whether it provides a better measure of diagnosis or of risk reduction. There are other factors to be considered: ambulatory monitoring devices are expensive (in terms of both equipment and personnel costs) in comparison with the usual sphygmomanometers, they are error-prone and need careful calibration, they are inconvenient for patients, few centers can provide them, there is enough variability in the measurements they provide for the same patient from time to time that more than one monitoring session may be needed, and the service is not approved for reimbursement by government health insurance plans in some countries. Thus, it is premature to recommend the widespread application of ambulatory monitoring for the diagnosis of patients with mild hypertension.

_

Ambulatory blood pressure monitoring has been found to be clinically useful only in the following settings: to identify non-dippers and white-coat hypertension; to evaluate drug-resistant hypertension and episodic hypertension; to evaluate antihypertensive drugs; and in individuals with hypotensive episodes while on antihypertensive medication. However, this procedure should not be used indiscriminately in the routine work-up of a hypertensive patient because of its high cost.

___

AMBP and cardiovascular outcomes:

Several studies have demonstrated the prognostic benefit of AMBP, with evidence that 24-hour daytime or nighttime average BP values correlate with subclinical organ damage more closely than office values. The Ohasama study – the first study to address the prognostic value of AMBP – reported a greater association between ambulatory BP and CV mortality than office BP. Clement et al showed that for the same clinical systolic BP, CV prognosis was worsened (incidence of CV events multiplied by two to three) when 24-hour systolic BP was >135 mmHg. In the SYST-EUR (Systolic Hypertension in Europe) study, ambulatory but not clinical BP was shown to predict CV mortality during follow-up; higher 24-hour BP was associated with total, cardiac, and cerebrovascular events in untreated hypertensives.

_

AMBP for evaluating pharmacological treatment of hypertension:

To reduce CV risk of patients with hypertension, antihypertensive agents should provide effective, sustained, and smooth BP reduction throughout the 24-hour dosing period. AMBP has drastically improved the ability to assess the efficacy of antihypertensive drugs in both clinical trials and medical practice.  Greater reproducibility, lack of placebo effect, and absence of an alerting-dependent BP response make AMBP the ideal tool to quantify the antihypertensive effect of new drugs in clinical trials, as well as drug combinations or nonpharmacological measures.  It also makes it possible to compare the ability of different drugs or doses to provide smooth and consistent reductions in BP using indices such as trough-to-peak ratio and smoothness index.

_

Relative effectiveness of clinic and home blood pressure monitoring compared with ambulatory blood pressure monitoring in diagnosis of hypertension: systematic review:

The 20 eligible studies used various thresholds for the diagnosis of hypertension, and only seven studies (clinic) and three studies (home) could be directly compared with ambulatory monitoring. Compared with ambulatory monitoring thresholds of 135/85 mm Hg, clinic measurements over 140/90 mm Hg had mean sensitivity and specificity of 74.6% (95% confidence interval 60.7% to 84.8%) and 74.6% (47.9% to 90.4%), respectively, whereas home measurements over 135/85 mm Hg had mean sensitivity and specificity of 85.7% (78.0% to 91.0%) and 62.4% (48.0% to 75.0%). Neither clinic nor home measurement had sufficient sensitivity or specificity to be recommended as a single diagnostic test. If ambulatory monitoring is taken as the reference standard, then treatment decisions based on clinic or home blood pressure alone might result in substantial overdiagnosis. Ambulatory monitoring before the start of lifelong drug treatment might lead to more appropriate targeting of treatment, particularly around the diagnostic threshold. This review has shown that neither clinic nor home measurements of blood pressure are sufficiently specific or sensitive in the diagnosis of hypertension. Authors included 20 studies with 5683 patients that compared different methods of diagnosing hypertension in diverse populations with a range of thresholds applied. In the nine studies that used similar diagnostic thresholds and were included in the meta-analysis (two comparing home with ambulatory measurement only, six comparing clinic with ambulatory measurement only, and one study comparing all three methods), neither clinic nor home measurement could be unequivocally recommended as a single diagnostic test. Clinic measurement, the current reference in most clinical work and guidelines, performed poorly in comparison with ambulatory measurement, and, given that clinic measurements are also least predictive in terms of cardiovascular outcome, this is not reassuring for daily practice. 
Home monitoring provided better sensitivity and might be suitable for ruling out hypertension given its relative ease of use and availability compared with ambulatory monitoring. In the case of clinic measurement, the removal of studies with a mean blood pressure in the normotensive range reduced specificity still further. This has profound implications for the management of hypertension, suggesting that ambulatory monitoring might lead to more appropriate targeting of treatment rather than starting patients on lifelong antihypertensive treatment on the basis of clinic measurements alone, as currently recommended. In clinical practice, this will be particularly important near the threshold for diagnosis, where most errors in categorisation will occur if ambulatory monitoring is not used.

What is already known on this topic:

Hypertension is traditionally diagnosed after measurement of blood pressure in a clinic, but ambulatory and home measurements correlate better with outcome.

What this study adds:

Compared with ambulatory monitoring, neither clinic nor home measurements have sufficient sensitivity or specificity to be recommended as a single diagnostic test.  If the prevalence of hypertension in a screened population was 30%, there would only be a 56% chance that a positive diagnosis with clinic measurement would be correct compared with using ambulatory measurement.  More widespread use of ambulatory blood pressure for the diagnosis of hypertension would result in more appropriately targeted treatment.
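The 56% figure follows from Bayes' rule applied to the review's pooled clinic-measurement sensitivity and specificity (both 74.6%) at the assumed 30% prevalence; a quick check:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive clinic-based diagnosis would be
    confirmed by the ambulatory reference standard."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Pooled figures from the review: sensitivity 74.6%, specificity 74.6%,
# with a screened prevalence assumed to be 30%.
ppv = positive_predictive_value(0.746, 0.746, 0.30)
print(f"{ppv:.0%}")   # 56%
```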

_

Cost saving ambulatory BP monitoring (ABPM):

In 2011, British doctors began using ABPM as a confirmatory test on nearly every patient suspected of having hypertension. The change occurred after Britain’s National Institute for Health and Clinical Excellence concluded that ABPM was the most accurate and cost-effective option for clinching the diagnosis. An analysis published in the Lancet projected that the new approach would save Britain’s National Health Service $15 million over the first five years after its adoption, mainly by avoiding treatment for those with white coat hypertension.

_____

Downside to ambulatory blood pressure monitoring:

1. It is not universally available although this is improving.

2. It requires specialist training.

3. Some patients find inflation of the cuff unbearable.

4. Sleep disturbance.

5. Bruising where the cuff is located.

6. Background noise may lead to interference (less with oscillometric methods).

7. Poor technique and arrhythmias may cause poor readings.

8. There is some evidence that SMBP may be better than AMBP for predicting cardiovascular risk at every level below severe hypertension (≥160/≥100 mm Hg). However, these findings need to be confirmed by larger trials.

___________

Advantages and limitations of SMBP vis-à-vis OMBP and AMBP:   

_

Blood pressure measured by any technique outside of the physician’s office tends to have lower values. In six studies comparing SMBP and OMBP, a consistently lower blood pressure by SMBP (SBP 5.4 ± 17.7 mm Hg and DBP 1.5 ± 6.3 mm Hg) was demonstrated. Three studies comparing AMBP and SMBP show similar daytime blood pressure results. While AMBP is the gold standard for the determination of WCH, SMBP is comparable to AMBP for the prevalence of WCH. A systematic review including six comparisons of SMBP and AMBP found that blood pressures over the criterion of 135/85 mmHg were obtained more frequently overall with SMBP. However, in the three studies with the largest numbers of SMBP readings (29 to 56), the average AMBP values were higher than the SMBP values, to a greater or lesser degree.

_

 

__

The advantages of using SMBP monitoring to manage hypertension are:

1. Avoiding under-treatment of hypertension — SMBP monitoring can provide more frequent BP measurements. If transmitted to the health care provider, this can permit more rapid adjustments in antihypertensive medication and more effective BP control.

2. Enhancing patient self-participation in disease management and adherence to lifestyle and pharmacological interventions — long-term adherence to lifestyle modification strategies and antihypertensive medication is a key challenge in hypertension management. SMBP monitoring may help address this challenge by enhancing patient participation in disease management.

3. Avoiding overtreatment in patients with lower BP outside the clinic than in it — SMBP may be useful in identifying individuals with white coat hypertension, orthostatic BP changes, or hypotensive episodes from medication and thereby prevent overtreatment in these individuals.

4. Checking the blood pressure when symptoms appear — self-measurement makes it possible to check the blood pressure when symptoms such as faintness, loss of consciousness (symptoms of hypotension), headache, nosebleed or neurological symptoms (confusion, agitation, etc.) appear. A high or low reading on the device can thus prompt a consultation at the physician's office or the hospital emergency service.

__

 

_

SMBP reliability:

The reliability of the patient’s recording of the blood pressure measurement is critical if this technique is to be trusted. When patients’ manual recordings are compared with a device that stored readings unbeknownst to the patient, patients consistently misreport the results of the monitor. Patient reports had mean differences in blood pressure of at least 10 mm Hg for SBP and 5 mm Hg for DBP compared to stored readings. In another study, 36% of patients underreported and 9% overreported blood pressure readings. Log books also contained phantom readings; conversely, patients failed to report measurements that were taken and stored. Similar findings were observed with other monitoring technologies, such as glucometers for diabetic patients and recording of metered dose inhaler usage in asthmatic patients. Thus, objective recording of the data is strongly advised. SMBP is a useful adjunct to OMBP measurement with properly validated monitors, can be performed by many patients, and is consistent with the goal of self-management.

_

Limitations of SMBP monitors:

Position: The cuff and the monitor should be at the same level as the heart; otherwise the reading has to be adjusted for the difference in height. This is especially true for wrist cuff blood pressure monitors.

Patient Movement: Measurements will be unreliable or cannot be performed if the patient is moving, shivering or having convulsions. These motions may interfere with the detection of the arterial pressure pulses. In addition, the measurement time will be prolonged.

Cardiac Arrhythmia: Measurements will be unreliable and may not be possible due to irregular heartbeats caused by cardiac arrhythmia.

Rapid pressure change: If the arterial pulse pressure changes rapidly during measurement, the blood pressure monitor would not be able to obtain a good reading.

Severe Shock: When the patient is in severe shock or having hypothermia, blood flow would be reduced resulting in weak pulses. The weak signal may lead to inaccurate readings.

Heart rate: If the heart beats too fast (>240 bpm) or too slow (<40 bpm), measurement would be difficult.

_

Limitations of SMBP:

_______

Similarities between SMBP and AMBP:

_

_

The most important common denominator of SMBP and AMBP is the fact that they both provide out-of-office BP values, i.e., BP values obtained in the patient’s “natural” environment. These values are thus basically devoid of the alarm reaction associated with office BP measurement, which is responsible for the white coat effect. Another important common advantage of AMBP and SMBP is that, when current recommendations are followed, they both make use of automated, validated oscillometric devices. This makes the obtained BP values operator independent, thus avoiding some common limitations affecting office measurements. Importantly, the application of these techniques is possible in the vast majority of cases, the two most relevant exceptions being significant arrhythmias, e.g., frequent extrasystoles or atrial fibrillation, where oscillometric measurements are unreliable, and obesity with extremely large arm circumference and/or conical-shaped arms, where fitting an appropriate cuff may be difficult. In the latter case the use of wrist devices for SMBP might be justified, whereas otherwise upper arm devices should always be preferred. The above advantages, together with the ability of SMBP and AMBP to provide a much larger number of values than office BP measurement, result in more stable estimates of the prevailing BP in a given subject, reflecting the actual BP burden on cardiac and vascular targets more precisely than office readings. This has not only methodological but also clinical relevance, as documented by a number of studies showing the prognostic superiority of SMBP or AMBP over isolated office BP measurement. These observations are further reinforced by the demonstration that subjects with normal office and elevated out-of-office BP, assessed by either SMBP or AMBP (masked hypertension), have a worse prognosis than subjects with normal out-of-office but elevated office BP (white coat or isolated office hypertension).

_

Differences between AMBP and SMBP:

Notwithstanding the above similarities, there are major differences between SMBP and AMBP that importantly influence their possible clinical and research applications. One of the key issues is the economic aspect of using either SMBP or AMBP. Although the price of validated AMBP devices has fallen considerably in recent years, making them more easily and widely available, the costs of the system and its maintenance remain relatively high, unquestionably higher than those of SMBP. This is of particular relevance when promoting BP monitoring in low-resource settings, where the prevalence of hypertension is increasing and the limited availability of economic resources does not allow costly equipment to be considered in a population setting. Thus, should SMBP and AMBP provide equivalent clinical information, the former would be preferable because of its potential to reduce patient management costs. Admittedly, however, AMBP has a number of clinically relevant features that are not directly available with SMBP, which makes the former approach not easily replaceable by the latter. One of the peculiar advantages of AMBP lies in its ability to provide a series of frequent, automated BP measurements throughout the 24 hours, which makes AMBP, unlike SMBP, capable of dynamically assessing BP changes over relatively short periods of time. This may have clinical implications in light of the evidence supporting the adverse prognostic relevance of specific patterns of BP variability over 24 hours, including a reduced nocturnal BP fall, increased short-term BP variability, and possibly also an excessive morning BP surge.
Nevertheless, the actual clinical usefulness of assessing these dynamic BP features remains controversial because of the lack of universally accepted normal reference values for their interpretation, lack of well-defined interventions able to counteract their adverse effects, and missing evidence that their modification by treatment may significantly reduce cardiovascular risk.

_

Final position of SMBP vis-à-vis AMBP:

The current position is that SMBP and AMBP should coexist and be used as complementary tools, providing different information on a subject’s BP status. However, SMBP may be a valid alternative to AMBP in many cases, possibly even in settings where AMBP is currently considered the method of choice, e.g., identification of isolated office hypertension and of masked hypertension, clinical evaluation of BP variability, and assessment of antihypertensive drug coverage. In fact, in clinical practice, SMBP is increasingly replacing AMBP, with use of the former being recommended in all treated hypertensive subjects by recent guidelines, a recommendation that cannot apply to AMBP. SMBP is an ideal first-line tool owing to its low cost, wide availability, and ease of application. It may also be the most reasonable option for the initial assessment of untreated subjects in whom white coat or masked hypertension is suspected, i.e., those with highly variable office BP, with office BP close to diagnostic thresholds, with isolated out-of-office BP values discrepant from office BP, or with evidence of organ damage contrasting with office BP findings. Moreover, SMBP is clearly the tool of choice for monitoring BP control in treated subjects over extended periods of time, also because it has the particular advantage of promoting better therapeutic adherence.

_

Recent studies showed that home blood pressure monitoring is as accurate as 24-hour ambulatory monitoring in determining blood pressure levels. Researchers at the University of Turku, Finland, studied 98 patients with untreated hypertension, comparing patients using a home blood pressure device with those wearing a 24-hour ambulatory monitor. Researcher Dr. Niiranen said that “home blood pressure measurement can be used effectively for guiding anti-hypertensive treatment”. Dr. Stergiou added that home tracking of blood pressure “is more convenient and also less costly than ambulatory monitoring.”

__

Schema of SMBP and AMBP:

The figure below shows a schema of how both self/home and ambulatory BP measurements may be used in clinical practice. Self-BP monitoring may be used as an initial step to evaluate the out-of-office BP, and if AMBP is available it is most helpful in cases where the self/home BP is borderline (between 125/75 mm Hg and 135/85 mm Hg). The target for self/home BP is usually 135/85 mm Hg for those whose target office BP is 140/90 mm Hg, and 125/75 to 130/80 mm Hg for those whose target office BP is 130/80 mm Hg. Equivalent values for ambulatory BP in low-risk hypertensive patients are 130/80 mm Hg for the 24-hour BP, 135/85 mm Hg for the awake BP, and 125/75 mm Hg for the sleep BP.

_

The figure above shows the practical use of self/home BP monitoring and AMBP in clinical practice.
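The decision thresholds in this schema are simple enough to express in code. The Python sketch below is purely illustrative: the function name and category labels are assumptions, while the cut-offs are the self/home values quoted above (borderline zone 125/75 to 135/85 mm Hg).

```python
# Illustrative sketch of the self/home BP decision schema described above.
# Thresholds come from the text; the function and labels are assumptions,
# not part of any guideline document.

def classify_home_bp(systolic, diastolic):
    """Classify an average self/home BP reading (mm Hg)."""
    if systolic >= 135 or diastolic >= 85:
        return "elevated"    # above the usual self/home target of 135/85
    if systolic >= 125 or diastolic >= 75:
        return "borderline"  # 125/75 to 135/85: AMBP is most helpful here
    return "normal"          # below 125/75

print(classify_home_bp(142, 88))  # elevated
print(classify_home_bp(128, 78))  # borderline -> consider AMBP
print(classify_home_bp(118, 72))  # normal
```

In this sketch a borderline result is exactly the situation where the schema suggests adding ambulatory monitoring, while clearly normal or clearly elevated home averages can be acted on directly.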

_______

Patient assessment strategy for HT vis-à-vis OMBP, SMBP and AMBP:

Patients frequently note that their SMBP is lower than the office-measured blood pressure (OMBP). A number of investigators have studied this difference and confirmed the observation. Combining a number of studies, SMBP is typically lower than OMBP by 8.1 mm Hg systolic and 5.6 mm Hg diastolic. In addition, the upper limit of normal for SMBP is 130 to 135 mm Hg systolic and 80 to 85 mm Hg diastolic; readings above these limits should be considered abnormal. Finding the proper role for SMBP in standard clinical practice has been a challenge. Numerous organizations have clinical guidelines for the diagnosis and treatment of hypertension, but these typically make only brief mention of SMBP. The Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure (JNC 7) recommends OMBP for the diagnosis and treatment of hypertension, and designates SMBP as an adjunct for the monitoring of therapy. The American Heart Association recommends OMBP with a mercury sphygmomanometer; it recognizes SMBP as an emerging force but relegates it to a supplementary role. Currently most authorities view SMBP as a supplement to OMBP. SMBP will likely gain wider clinical acceptance as more research outcomes become available. Treatment can be started without confirmation of elevated office BP in patients with high office BP and target organ damage, or a high cardiovascular risk profile. In patients with raised office BP but without target organ damage (white-coat hypertension), or with normal office BP but unexplained target organ damage (masked hypertension), ambulatory or home BP monitoring or both must be used to confirm the diagnosis. Few longitudinal studies have addressed the long-term prognostic meaning of home BP measurement. Until more prospective data become available, management of hypertension based exclusively on self-measurement of BP at home cannot be recommended.

_

_______

Cost-effectiveness of home monitoring: A study:

There is some evidence that self-monitoring may be cost-effective. In a randomized study conducted by the Kaiser Permanente Medical Care Program in San Francisco, 430 patients with mild hypertension, most of whom were taking antihypertensive medications, were randomized either to a usual-care group or to self-monitoring. Their technique was checked by clinic staff, and they were asked to measure their blood pressure twice weekly and to send in a record of their readings every month. At the end of 1 yr, the costs of care (which included physician visits, telephone calls, and laboratory tests) were 29% lower, and blood pressure control slightly better, in the self-monitoring group. The vast majority of both patients and their physicians considered the self-monitoring procedure worthwhile. The authors estimated the cost of self-monitoring at $28 per year (in 1992 dollars), which assumed depreciation of a $50 monitor over 5 yr, $10 for training (also depreciated), $1 for blood pressure reporting, and $6 for follow-up to enhance compliance. Combining this estimate with their study results led to an estimated cost saving of $20 per patient per year. Projecting these numbers on a national level, they estimated that about 15 million hypertensive patients in the United States are candidates for self-monitoring and that 20 million of the 69 million annual hypertension-related physician visits could be saved, with a cost saving of $300 million per year. These numbers seem very optimistic, but they clearly establish the potential for cost saving.

_

Effects of home BP on the medical economy:

The introduction of SMBP into the diagnosis and treatment of hypertension has been shown to have a strong effect on the medical economy. In Japan, where home BP-measuring devices are already used by most hypertensive patients, the introduction of home BP into the care of hypertension has resulted in a decrease in annual medical expenditure of about 1 trillion yen, mediated primarily by screening for white-coat hypertension and masked hypertension. Large-scale intervention studies have also reported that the introduction of home BP reduces medical expenditure through a decrease in the amount of drugs used.

______

Innovation and research in BP measuring technology:

_

Techniques for Self-Measurement of Blood Pressure (SMBP): Limitations and Needs for Future Research:

SMBP improves the overall management of hypertension provided it is implemented with methodologic care. This concerns especially the accuracy and technical requirements of blood pressure measuring devices, which should be validated according to internationally accepted protocols. The use of memory-equipped automatic home monitors is strongly recommended because they reduce observer bias, avoid patients’ misreporting, and allow fully automatic analysis by software. For current use, simple software should be developed that allows readings to be analyzed objectively. Miscuffing is also a frequent source of measurement error in obese arms when oscillometric devices are used. Modern automatic devices can overcome this problem through special software algorithms that provide accurate measurements over a wide range of arm circumferences when coupled with a single cuff of standard dimensions. Tronco-conical (tapered) cuffs are a key component of this instrumentation because they fit better on the large conical arms frequently present in obese individuals. Semi-rigid cuffs should be increasingly used because they ensure that the proper amount of tension is applied without the intervention of the user. Continuous improvement of instrumentation for SMBP can be achieved through close cooperation between manufacturers and validation centers.

_

Wireless Blood Pressure Monitor:

_

Easy and precise self-measurement of your blood pressure with your smartphone:

_

Simply slip on the cuff, turn on the Wireless Blood Pressure Monitor and the Health Mate app will automatically launch. Following a brief set of instructions, you will be ready to take your blood pressure. Because it makes more sense to track your blood pressure over time, the Health Mate app stores all your BP readings, syncs with the Withings Health Cloud and creates an easy-to-understand chart. The app gives you instant color-coded feedback based on the World Health Organization’s official standards for a quick and simple blood pressure tracking experience. The Wireless Blood Pressure Monitor’s results have scientific value: it is compliant with European medical device regulations and has received clearance from the Food and Drug Administration (FDA) in the USA. It is also medically approved in Canada, Australia and New Zealand. The Wireless Blood Pressure Monitor works with all iOS 5.0 or higher devices, and with Android 4.0 or higher smartphones and tablets, using Bluetooth connectivity or your smartphone’s cable. You can now check your blood pressure using your iPhone or iPad: download the app onto your iOS device, put on the blood pressure cuff, tap the touchscreen, and within moments you have a blood pressure reading that you can track every day. These products are quick and reasonably priced.

 _

Considerations for Future Development of Cuff and Bladders:

There is a need for devices that make use of cuffs and bladders with appropriate characteristics. Manufacturers should pay special attention to the size and shape of the bladders and to the material used for cuffs. Semi-rigid cuffs should be increasingly used for self-BP measurement because they ensure that the proper amount of tension is applied for placement of the cuff. Elderly persons in particular often have problems in wrapping the cuff correctly around the arm. With cuffs made of soft material, it is more difficult for the user to apply the optimal amount of tension, and this may result in improper wrapping. Placing a flexible compliant laminate in the cuff, with an amount of tension pre-set by the manufacturer, may provide accurate BP measurements without the intervention of the user. Devices for clinical use may have soft cuffs because the BP measurement is performed under the supervision of health care personnel. Soft cuffs also have better durability, are less bulky, and are lower in cost. However, the use of conically shaped bladders in small cuffs may be preferable if they have to be applied on large arms. The appropriate slant angle for conical cuffs should be calculated from the arm characteristics in large samples, with arm circumferences ranging from 22 cm to 50 cm. Cylindrical and conical bladders of different size and shape should be constructed and compared in the various arm size classes, studying the influence of sex, age, adiposity, and BP level. Cuffs of soft and rigid material containing the same type of bladders should be compared either under the supervision of the clinician or by the patient at home. This would allow physicians to ascertain whether semi-rigid cuffs are more reliable than soft cuffs in real-life situations.

______

Atrial Fibrillation: The WatchBP device:

Atrial fibrillation (AF) is the most common cardiac arrhythmia. It affects over one percent of the general UK population and is related to one fifth of all strokes (European Heart Rhythm et al. 2010).

The WatchBP device:

The WatchBP Home is an automated blood pressure monitor with a built-in AF detection system. When a GP or patient measures blood pressure using the WatchBP, the device automatically screens for AF without any extra effort. In simple terms, the device’s algorithm calculates an irregularity index (the standard deviation divided by the mean) of the interval times between heartbeats; if the irregularity index is above a certain threshold value, the patient is flagged as possibly having AF. If a patient performs self-measurements at home and the WatchBP Home detects AF, it gives a warning that a visit to the GP is required. The system’s accuracy has been investigated in several scientific studies, which showed high diagnostic accuracy (Stergiou et al. 2009; Wiesel et al. 2009). Although the WatchBP device has never been directly compared with pulse palpation for AF screening, results from different clinical studies consistently show a higher diagnostic accuracy for the WatchBP device (Stergiou et al. 2009; Wiesel et al. 2009) than for pulse palpation when compared against the gold standard: a 12-lead ECG assessed by a consultant (Hobbs et al., 2005; Morgan and Mant, 2002; Somerville et al., 2000; Sudlow et al., 1998). Based on the results of the SAFE study, the AF detector of the WatchBP monitor shows an even higher rate of accuracy for the detection of AF than a GP or nurse using a 12-lead ECG system (Hobbs et al. 2005), as compared to 12-lead ECG assessment by a consultant.
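The irregularity-index idea described above can be sketched in a few lines. The Python fragment below is an illustration only, not the proprietary WatchBP algorithm; the threshold value of 0.06, the function names, and the sample interval lists are all assumptions for demonstration.

```python
# Minimal sketch of irregularity-index AF screening: the coefficient of
# variation (SD / mean) of the beat-to-beat intervals is compared with a
# threshold. Threshold and data are illustrative, not the device's values.
from statistics import mean, stdev

def irregularity_index(rr_intervals_ms):
    """Coefficient of variation of inter-beat (RR) intervals."""
    return stdev(rr_intervals_ms) / mean(rr_intervals_ms)

def possible_af(rr_intervals_ms, threshold=0.06):
    """True if the rhythm is irregular enough to warrant an ECG check."""
    return irregularity_index(rr_intervals_ms) > threshold

regular   = [800, 810, 795, 805, 800, 798, 802]   # sinus-like rhythm (ms)
irregular = [620, 910, 540, 1050, 700, 880, 480]  # AF-like rhythm (ms)

print(possible_af(regular))    # False
print(possible_af(irregular))  # True
```

A regular rhythm yields an index well under 1%, whereas chaotic AF-like intervals push it more than an order of magnitude higher, which is why a single fixed cut-off can separate the two so reliably.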

_

Context Classification during Blood Pressure: Sensor Chair:

Self-Measurement using the Sensor Seat and the Audio Classification Device:

Blood pressure self-measurement requires the patient to follow a range of recommendations. Patients must remain silent during measurements, be seated correctly with back support and legs uncrossed, and must have rested for at least 5 minutes before taking the measurement. Current blood pressure devices cannot verify whether the patient has followed these recommendations, so the quality of BP measurements may be compromised. Researchers have presented a proof-of-concept prototype that uses audio context classification to detect speech during the measurement process, together with a sensor seat that measures patient posture and activity before and during the SMBP process.

_

A Wristwatch that monitors Blood Pressure without cuff:

A new wireless monitor from Hewlett-Packard and a Singapore company called Healthstats aims to make it much easier for patients and doctors to monitor blood pressure. The device, which has the size and look of a wristwatch, can monitor pressure continuously, which provides a much more accurate picture than infrequent readings in the doctor’s office. Until now, the only way to do such continuous monitoring has been with a cumbersome inflatable cuff for the arm or wrist. The new monitor comes with related software designed to keep patients and doctors informed of the wearer’s vital signs, including blood pressure. Data is transmitted from the device to the user’s cell phone, and then to the cloud, where clinicians can review it. Patients and their doctors can view 24-hour graphs of blood pressure, and the system can sound alerts when it detects abnormalities in pressure or other measures. Unlike standard equipment, the Healthstats device relies on a sensor that rests against the radial artery in the wrist and detects the shape of the pressure wave as blood flows through it. (The device is first calibrated with a standard blood pressure monitor.) Using algorithms the developers have created, the recorded waveform can be processed to derive heart rate, diastolic and systolic pressure, and other measures.

_

Healthstats CEO Dr. Ting Choon Meng with his BPro blood pressure monitor wristwatches.

_

A revolutionary method to estimate aortic blood pressure:

With regard to the variation between central aortic and brachial pressures, it is assumed that the mean arterial and diastolic pressures remain largely unchanged from the aortic root to the brachial artery, and that it is variation in amplification of the pulsatile pressure component, namely systolic pressure, that accounts for the central-to-brachial pressure differences. Thus, the focus has been on the accurate derivation of central aortic systolic pressure (CASP). For more than a hundred years, blood pressure has been measured in largely the same way. You have probably experienced it yourself: your doctor inflates a cuff around your upper arm, temporarily interrupting the flow of blood in your brachial artery. From this, they take a reading of the pressure when your heart beats (systolic pressure) and when it is between beats (diastolic pressure), which is why blood pressure (BP) is given as ‘this number over that number’. But this is not ideal, because blood pressure is amplified as it travels away from the heart. It has been known for a long time that the pressure in the large vessels close to the heart (e.g. the aorta) is lower than the corresponding pressure in the arm. This may seem surprising, but it is due to amplification of the pressure wave as it moves away from the heart to the arm. A way was needed to eliminate this amplification so that we could get back to the original central aortic pressure. To do this, mathematical modeling is used, similar to the kind of modeling undertaken to remove distortion of waves in many other applications. The process acts like a filter, filtering out the amplified portion of the pulse wave to reveal the central aortic pressure. Being able to measure blood pressure near the heart, specifically in the aorta (the ‘central aortic systolic pressure’, or CASP), is important because this is where high blood pressure can cause damage.
But obviously your aorta is much harder to reach than your upper arm, what with that whole rib cage and so on. It can be done, but only using a surgical procedure. Clearly what is needed is some way to measure CASP indirectly, using blood vessels we can actually get at. Now, if the relationship between brachial BP and CASP were constant, there would be no problem: you could just use a multiplication factor. But the ratio between the two measurements varies not only between individuals but also within each person as they get older and their artery walls become stiffer. The new approach, developed by scientists at the University of Leicester, uses technology invented by Singapore-based medical device company HealthSTATS International: a device worn on the wrist which can accurately record a patient’s pulse. Not just the pulse rate, but the actual pulse wave. In short, your pulse wave provides enough information to determine your aortic blood pressure from a measurement of your brachial blood pressure, without having to cut you open or poke anything into you. The sensor records a pulse wave at the wrist at the same time that blood pressure is measured in the arm. The data is then used to mathematically compute the CASP. The process takes only a few minutes more than conventional measurements. The non-invasive procedure uses a device which not only looks like a wristwatch and is worn like a wristwatch but, in some versions, actually is a wristwatch. A carefully positioned pad presses on the radial artery on the inside of your wrist; it is a bit tight but not uncomfortable. Wearing this device for 24 hours provides an average which flattens out pulse-raising factors such as excitement or exercise. Working with colleagues from HealthSTATS, the Leicester researchers have built up an extensive collection of patient data from which they have derived an effective algorithm for calculating CASP.
Direct comparison with traditional CASP measurements obtained using the old-fashioned, invasive method shows a 99% correlation. The results of all this research have now been published in the prestigious Journal of the American College of Cardiology. It is worth stressing that the new system is not designed to replace the old inflatable cuff that we all know and love; you need both the cuff and the wristwatch to calculate CASP (although you do not need to wear the cuff for 24 hours). What it will do is let doctors measure CASP much more easily; you could potentially have your aortic blood pressure measured by your GP. The importance of all this is that brachial BP can be unreliable, especially in young people, whose more flexible blood vessel walls can give misleadingly high blood pressure readings, leading to unnecessary medical interventions. Conversely, older people with stiffer blood vessels may give a misleadingly low reading of brachial BP, disguising dangerously high blood pressure, which can be a precursor to heart attack or stroke. It may be some time before this technology reaches the majority of patients, but the scientists hope you will see it soon, because widespread use will help them determine whether CASP really can become the standard measurement for blood pressure. And that could save lives. The device looks like a normal blood pressure monitor, with one important difference: there is an additional strap attached to the monitor that is placed around the wrist. This contains the sensor that captures the pulse wave. Once the blood pressure cuff and wrist strap are in place, a button is pressed which inflates the cuff like a normal blood pressure measurement, but also captures the pulse wave at the wrist. The device contains the program the researchers developed, which uses the blood pressure and the pulse waveform to derive the central aortic pressure.
The pulse sensor has also been incorporated into the strap of a wrist watch that allows ambulatory measurements of blood pressure to be recorded day and night. 

_

In this approach, the radial artery waveform is obtained by noninvasive tonometry. The radial waveform is usually calibrated to brachial blood pressure, measured using standard sphygmomanometry, thereby generating a calibrated radial artery pressure waveform (RAPWF). Mathematical generalized transfer functions (GTFs), in the frequency or time domain, have then been used to derive central aortic pressures and related aortic hemodynamic indices from the RAPWF. This method has, however, been criticized because of concerns that it may not be appropriate to apply a GTF generated in one cohort of patients to all patients with different disease states, at different ages, and receiving different treatments. Nevertheless, applying a GTF to the RAPWF remains the most commonly used method for the noninvasive assessment of central aortic pressure indices. More recently, an alternative approach to estimating CASP from the RAPWF has been proposed. This requires the accurate identification of an inflection point on the RAPWF that is said to correspond to the superimposition of the peak of the reflected wave onto the outgoing pressure wave. Numerous recent studies have suggested that this inflection point, so-called SBP2, corresponds to the peak CASP and provides a reasonably accurate way of noninvasively assessing CASP without the need to apply a GTF. Another simple approach to the accurate estimation of CASP in humans utilizes an n-point moving average (NPMA).
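As a rough illustration of the NPMA idea, the Python sketch below smooths a calibrated radial waveform with an n-point moving average and takes the peak of the smoothed trace as the CASP estimate. The toy waveform values and the choice of n are invented for demonstration and do not reproduce any published algorithm or patient data.

```python
# Sketch of the n-point moving average (NPMA) approach: smoothing the
# calibrated radial artery pressure waveform attenuates the amplified
# systolic spike, and the peak of the smoothed trace approximates CASP.

def n_point_moving_average(waveform, n):
    """Simple n-point moving average over a sampled pressure waveform."""
    return [sum(waveform[i:i + n]) / n for i in range(len(waveform) - n + 1)]

def estimate_casp(radial_waveform_mmhg, n):
    """Peak of the smoothed waveform approximates central aortic SBP."""
    return max(n_point_moving_average(radial_waveform_mmhg, n))

# Toy calibrated radial waveform (mm Hg) with a sharp systolic peak at 130:
radial = [80, 85, 95, 110, 125, 130, 122, 112, 100, 92, 86, 82, 80]
print(estimate_casp(radial, n=4))  # 122.25, below the radial peak of 130
```

Note how the estimate comes out lower than the 130 mm Hg radial peak: the averaging removes part of the pulse-wave amplification, mimicking the lower central pressure described in the text.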

_______

Frequently Asked Questions (FAQ) about SMBP:

1. Will I get the same reading each time I use my Blood Pressure Monitor?

No. Blood pressure is not a static value but changes with each heartbeat, even at rest. Both the upper blood pressure value (systolic BP) and the lower blood pressure value (diastolic BP) vary by 5 to 10 mmHg from beat to beat in healthy individuals. These variations may be considerably greater in certain cardiovascular disorders. Insufficient rest is the most frequent source of error in self-administered blood pressure measurement; a resting time of at least 5 minutes should therefore be allowed before commencing a measurement. Deliberate movements, muscle activity, coughing, sneezing and psychological demands such as speaking, listening and watching (e.g. TV) may lead to false readings. Measurements should therefore be carried out under conditions of complete rest and without any distraction. Cardiac rhythm disorders can cause inaccurate readings or may result in measurement failure, and these disorders may occur without the self-user being aware of them.

_

2. Why are my home readings different from my doctor’s readings?

Many factors affect blood pressure, including the anxiety of a doctor’s visit. When blood pressure is measured in a hospital, it may be 25 to 30 mmHg higher than when measured at home, because you are tense at the hospital and relaxed at home. It is important to know your stable normal blood pressure at home. If your doctor uses an automated oscillometric device and takes multiple readings, your clinic BP will be close to your home BP.

_

3. Are manual inflate monitors (semi-automated) as accurate as automatic inflate monitors?             

Yes. Both models comply with the same accuracy standards. The only difference is the way the cuff is inflated.

_

4. How often and when should I measure my blood pressure?

It is recommended that you consult with your health care professional for the time and frequency best suited for you. It is important to take your readings at similar times and under similar conditions from day to day; this allows reliable comparison of your readings. Initially, take your BP twice a day, morning and evening, with two readings at each time.

_

5. What happens if I do not place the cuff at heart level?

If the cuff is not at heart level, readings will be affected, producing either falsely high or falsely low measurements.
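The size of this effect can be estimated from the hydrostatic pressure of a blood column: each centimetre of vertical offset between cuff and heart changes the reading by roughly 0.7 to 0.8 mmHg. The coefficient, function name, and example below are illustrative approximations, not device calibration values.

```python
# Rough illustration of the heart-level effect: a cuff below heart level
# adds the hydrostatic pressure of the intervening blood column to the
# reading, and a cuff above heart level subtracts it.

MMHG_PER_CM = 0.78  # approximate hydrostatic gradient of blood (assumption)

def reading_error_mmhg(cuff_cm_below_heart):
    """Positive offset (cuff below heart) inflates the reading."""
    return cuff_cm_below_heart * MMHG_PER_CM

# A cuff resting 10 cm below heart level overestimates BP by about 8 mmHg:
print(round(reading_error_mmhg(10), 1))  # 7.8
```

An error of this magnitude is enough to shift a borderline reading across a diagnostic threshold, which is why supporting the arm at heart level matters.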

_

6. Which arm should I use to take my blood pressure?

It is suggested that you consult with your health care professional to determine which arm is best for you. For home monitoring, the non-dominant arm is generally used. Ideally, both arms should be used for the first BP measurements; the arm with the higher BP is then used for daily recordings. The same arm should be used for all future readings to ensure reliable comparisons.

_

7. Can I use my blood pressure monitor while exercising and also in moving vehicles?

The oscillometric method of blood pressure monitoring requires quiet, stable conditions. Movement, vibrations or other activity will interfere with the reading and likely cause an error or inaccurate reading. 

_

8. When the cuff deflates, you get an error message. What does this mean?

An error message (EE) can appear for various reasons: 

• Incorrect cuff placement

• Movement or talking during measurement

• Over- or under-inflation of cuff

• Not waiting long enough between subsequent measurements 

_

9. Why is the pressure close to the heart, i.e., the central aortic pressure, different from the pressure measured in the arm using a conventional blood pressure device?

It has been known for a long time that the pressure in the large vessels close to the heart (e.g. the aorta) is lower than the corresponding pressure in the arm. This may seem surprising, but it is due to amplification of the pressure wave as it moves away from the heart to the arm. If this amplification were fixed, then measuring pressure in the arm would always be a good measure of pressure in the aorta; but it is not fixed. The amplification of the pressure wave as it moves from the heart to the arm can vary with ageing, disease of the blood vessels and medication. This means that the pressure we measure routinely in the arm is not always a good predictor of the pressure in the large arteries, which we call the central aortic pressure. This is important because the central aortic pressure is the true pressure that the heart, the brain and other major organs actually see and, as such, is likely to be a better indicator of the pressure that can cause damage if it is too high. Another interesting aspect of this pressure amplification is that it is paradoxically greater in younger people with healthy arteries. This means that some people with high blood pressure when measured in the arm may actually have a completely normal central aortic pressure. The amplification effect is greatest for systolic pressure and can result in a difference between central aortic systolic pressure and systolic pressure in the arm as great as 30 mmHg. So the only way to know the central aortic pressure is to actually measure it in some way.

________

________

The moral of the story:

_

1. When we use the term blood pressure (BP) we mean arterial blood pressure, which is the lateral pressure exerted by the column of flowing blood on the walls of the arteries (the aorta and major arteries).

_

2. Hypertension (HT) is not synonymous with high blood pressure, even though the two terms are used interchangeably. Hypertension is a persistent, irreversible elevation of blood pressure, sustained over a long period, above a level at which only treatment will reduce it and at which treatment does more good than harm. A transient, reversible elevation of blood pressure is not hypertension.

_

3. About 40% of the world’s adult population has hypertension. Of all hypertensives, half do not know that they have hypertension, 40% are treated, but only 13% are controlled.

_

4. Among all risk factors for death from non-communicable diseases worldwide, hypertension is the number one risk factor, carrying a greater risk than smoking, diabetes or obesity.

_

5. Hypertension is one of the most readily preventable causes of heart disease and stroke. High blood pressure is easily detected, we have very effective treatments for it, and we have clear evidence of the benefits of such interventions. A decrease of 5 mm Hg in systolic BP is estimated to result in a 14 percent reduction in mortality due to stroke, a 9 percent reduction in mortality due to heart disease, and a 7 percent reduction in all-cause mortality.

_

6. Each person generates roughly 100,000 single blood pressure values per day, as every heartbeat produces a pressure pulse wave. Blood pressure also varies widely due to multiple factors. That is why the higher the number of blood pressure measurements, the greater the accuracy of the blood pressure value. Expert committees have shown that at least 15 measurements are necessary to determine true blood pressure.
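The averaging logic behind this point can be sketched in a few lines of Python. The readings below are invented for illustration; the 135/85 mm Hg threshold is the home (SMBP) hypertension cut-off mentioned elsewhere in this article:

```python
from statistics import mean, stdev

# Hypothetical duplicate morning/evening home readings (systolic, diastolic)
# collected over several days; all values are invented for illustration.
readings = [
    (138, 88), (134, 86), (131, 84), (136, 87),
    (129, 83), (133, 85), (135, 86), (130, 82),
    (132, 84), (137, 88), (128, 81), (134, 85),
    (131, 83), (136, 86), (133, 85),  # at least 15 readings
]

systolics = [s for s, _ in readings]
diastolics = [d for _, d in readings]

avg_sys = mean(systolics)
avg_dia = mean(diastolics)

# Home (SMBP) threshold for hypertension: 135/85 mm Hg.
hypertensive = avg_sys >= 135 or avg_dia >= 85

print(f"Mean BP: {avg_sys:.1f}/{avg_dia:.1f} mm Hg "
      f"(SD {stdev(systolics):.1f}/{stdev(diastolics):.1f})")
print("Above home threshold (135/85):", hypertensive)
```

The point of averaging 15 or more readings is that the standard deviation of the mean shrinks as the number of readings grows, so the estimate converges on the true underlying pressure despite beat-to-beat and day-to-day variability.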

_

7. Even assuming that the doctor measures blood pressure using correct technique and a validated device in the clinic (office BP), the patient frequently gets incorrect BP readings because office readings typically consist of only 1 or 2 individual measurements, because of the inherent variability of blood pressure, and because of the tendency for blood pressure to rise in the presence of a physician (the so-called white coat effect).

_

8. A survey showed that 96% of primary care physicians habitually use a cuff size too small, resulting in readings higher than the actual BP. Other poor techniques seen among doctors are terminal digit preference, threshold avoidance, observer prejudice, rapid cuff deflation, and failure to approximate systolic BP by the palpatory method. So a “real world” cut-off point for hypertension by manual clinic BP measurement (office BP) is closer to 150/95 mm Hg than 140/90 mm Hg.

_

9. Studies of the so-called gold standard for clinical blood pressure measurement, the mercury sphygmomanometer, found that about 20 to 50% of mercury sphygmomanometers have technical flaws affecting the accuracy of BP measurement. A check of the devices in a major teaching hospital showed that only 5% of the investigated instruments had been properly serviced.

_

10. Instrument evaluation studies demonstrated technical defects or unacceptable measurement inaccuracy in up to 60% of the aneroid devices evaluated.

_

11. The Korotkoff sound method tends to give values for systolic pressure that are lower than the true intra-arterial pressure, and diastolic values that are higher. The range of discrepancies is quite striking: one author commented that the difference between the two methods might be as much as 25 mm Hg in some individuals.

_

12. In my experience, air leakage from the rubber tubing and the bladder in the cuff is the most common malfunction of any sphygmomanometer, resulting in incorrect BP readings.

_

13. An appropriately sized bladder in the cuff and positioning of the cuff at mid-right-atrium level are the two most important technical points when measuring BP by the auscultatory or oscillometric technique. In the sitting position, the mid-right-atrium level is the midpoint of the sternum or the fourth intercostal space. In the supine position, the mid-right atrium is approximately halfway between the bed and the level of the sternum; so when measurements are taken in the supine position the arm should be supported with a pillow.

_

14. The sleeve should not be rolled up such that it has a tourniquet effect above the blood pressure cuff. On the other hand, applying the cuff over clothes is similar to the undercuffing error and will lead to overestimation of blood pressure.

_

15. Talking (an increase of 17/13 mm Hg) or crossing the legs (an increase of 7/2 mm Hg) during measurement, and arm position (an increase or decrease of 8 mm Hg for every 10 cm below or above mid-right-atrium level), can significantly alter BP measurements. A full urinary bladder causes an increase in blood pressure of approximately 10 mm Hg.
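The roughly 8 mm Hg per 10 cm figure for arm position is essentially the hydrostatic pressure of a column of blood. A minimal sketch of the physics, using standard physical constants (the density of blood and the pascal-to-mm Hg conversion are textbook values, not taken from this article):

```python
# Hydrostatic correction for cuff height relative to mid-right-atrium level.
# The ~8 mm Hg per 10 cm figure is approximately rho * g * h for a column
# of blood.

RHO_BLOOD = 1060.0     # density of blood, kg/m^3 (approximate)
G = 9.81               # gravitational acceleration, m/s^2
PA_PER_MMHG = 133.322  # pascals per mm Hg

def hydrostatic_offset_mmhg(height_below_heart_cm: float) -> float:
    """Apparent BP change (mm Hg) from cuff height relative to the heart.

    Positive argument = cuff below mid-right atrium (reading too high);
    negative argument = cuff above heart level (reading too low).
    """
    height_m = height_below_heart_cm / 100.0
    return RHO_BLOOD * G * height_m / PA_PER_MMHG

# A cuff 10 cm below heart level overestimates BP by roughly 8 mm Hg:
print(f"{hydrostatic_offset_mmhg(10):.1f} mm Hg")   # about 7.8
print(f"{hydrostatic_offset_mmhg(-10):.1f} mm Hg")  # about -7.8
```

This is why a dangling arm (cuff well below heart level) inflates the reading, while an arm raised above the heart deflates it.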

_

16. Even when performed properly in research studies, office measurement of blood pressure (OMBP) is a relatively poor predictor of cardiovascular risk related to BP status compared with methods of out-of-office BP measurement such as 24-hour ambulatory measurement of blood pressure (AMBP) or self measurement of blood pressure (SMBP) at home.

_

17. Automated oscillometric blood pressure measurement eliminates observer errors associated with the use of the manual auscultatory technique such as terminal digit preference, threshold avoidance, observer prejudice, rapid deflation etc.

_

18. In the United States and Europe, up to two thirds of people with hypertension do self-monitor.

_

19. SMBP offers a qualitative improvement and a quantitative increase in information compared with clinic BP, and such improved data has greater significance. Self-measured blood pressure has higher sensitivity and higher accuracy than clinic measurement for identifying true hypertension. Home blood pressure correlates better with target organ damage and adverse prognosis than clinic BP. On the other hand, the reliability of patients doing SMBP is poor: about half of patients consistently misreport monitor readings. Simply monitoring home BP is of little value if patients or their physicians do not act on the results.

_

20. AMBP has higher sensitivity and specificity for diagnosis of hypertension compared to both SMBP and OMBP. However, SMBP is an ideal first-line tool over AMBP because of its low cost, high availability, easy application, and because it has the particular advantage of promoting a better therapeutic adherence. Also, SMBP had a higher correlation (compared with OMBP) with AMBP.

_

21. SMBP can detect white coat hypertension and obviate unnecessary therapy. SMBP can also detect masked hypertension missed by doctor at clinic leading to better treatment of hypertension.

_

22. Validated automated oscillometric upper-arm devices are best for SMBP. Self-BP measurements at home are usually performed on the non-dominant arm. When an apparent difference in BP is observed between the arms in a clinical setting, the arm showing the higher BP should be used for self-BP measurements. To provide consistent results, the same arm should always be used. For the beginner, I recommend duplicate SMBPs in the morning and evening; duplicate means two readings at a 1-minute interval.

_

23. Oscillometric wrist-cuff devices for BP measurement are not recommended because: 1) the wrist is not held at mid-right-atrium level, 2) the radial and ulnar arteries are not completely occluded by sufficient cuff pressure, 3) flexion and hyperextension at the wrist influence BP, and 4) systolic pressure is overestimated. Only in obesity with an extremely large arm circumference and/or conical-shaped arms, where fitting an appropriate cuff may be difficult, might the use of wrist devices for SMBP possibly be justified.

_

24. Finger BP is physiologically different from brachial BP, and issues of vasospasm in the winter season as well as hydrostatic pressure differences are inevitable. Therefore, oscillometric finger-cuff devices are no longer recommended.

_

25. SMBP is typically lower than OMBP by 8.1 mm Hg systolic and 5.6 mm Hg diastolic.

_

26. In adults, the cut-off value for hypertension is the same for SMBP, awake AMBP, and AOBP (automated office BP): 135/85 mm Hg.

_

27. SMBP leads to faster and more accurate diagnosis of hypertension, greater control of hypertension, overcomes therapeutic inertia (reluctance by doctors to increase medication), improves patient compliance, and reduces the risks of hypertension. Blood pressure self-monitoring could save the hypertensive population from thousands of unnecessary deaths every year.

_

28. Morning hypertension and non-reduction in nocturnal BP (non-dipper) correlate highly with target organ damage and both are missed in office BP measurement. AMBP and newer SMBP devices can pick up both.

_

29. SMBP can be used in high-risk groups including children, pregnant women, the elderly, the obese, and patients with diabetes, chronic kidney disease, and even atrial fibrillation.

_

30. Most studies have shown that drug treatment for hypertension lowers clinic blood pressure more than home blood pressure and since home blood pressure is highly correlated with target organ damage and adverse prognosis, SMBP is far better than OMBP for evaluating efficacy of anti-hypertensive treatment.  

_

31. Home blood pressure monitoring can save costs in health care since it lowers the number of clinic visits compared to conventional treatment of hypertension. Home BP has also been reported to lead to a reduction in medical expenditure via a decrease in the amount of drugs used.  

_

32. Telemonitoring of home BP with physician leads to more appropriate and effective pharmacotherapy, better blood pressure control, and overall reduction in cardiovascular risk.  

_

33. The biggest drawback of traditional brachial-artery occlusion devices (auscultatory or oscillometric) is that occlusion of the brachial artery influences the local blood pressure; in other words, the measurement changes the parameter being measured. So we need a technique to measure BP without occluding the brachial artery. The pulse wave velocity (PWV) principle relies on the fact that the velocity at which an arterial pressure pulse travels along the arterial tree depends, among other things, on the underlying blood pressure. Accordingly, after a calibration maneuver, these techniques provide indirect estimates of blood pressure by translating PWV values into blood pressure values. An innovative wrist-watch device relies on a sensor that rests against the radial artery at the wrist and detects the shape and velocity of the pressure pulse wave as blood flows through it. The device is first calibrated against a standard blood pressure monitor. Together with the algorithms developed, the indices can be processed to obtain heart rate, systolic and diastolic pressure, and other measures. People can now use a wrist-watch SMBP device based on the PWV principle. Using Fourier analysis, it is possible to derive the central aortic pressure waveform from the radial artery trace. However, comparisons with directly recorded aortic pressure made during cardiac catheterization have shown considerable scatter between the estimated and true values, so this technique cannot be recommended for estimating central aortic pressure.
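The calibration step described above can be illustrated with a toy model: fit a simple relation between PWV and cuff-measured systolic pressure during calibration, then estimate pressure from PWV alone. This is not any actual device's algorithm (real devices use proprietary models and richer waveform features), and all numbers are invented:

```python
# Toy illustration of PWV-based BP estimation: fit a linear model mapping
# pulse wave velocity to systolic pressure during a calibration phase with
# a standard cuff, then estimate BP from PWV alone. All values invented.

def fit_linear(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

# Calibration pairs: (PWV in m/s, cuff systolic in mm Hg) - invented data.
pwv_cal = [6.0, 6.5, 7.0, 7.5]
sbp_cal = [110.0, 118.0, 126.0, 134.0]

a, b = fit_linear(pwv_cal, sbp_cal)

def estimate_sbp(pwv: float) -> float:
    """Estimate systolic BP from PWV using the fitted calibration."""
    return a * pwv + b

print(f"{estimate_sbp(7.2):.1f} mm Hg")
```

The design point is that PWV itself is cheap to measure continuously, while the cuff is needed only occasionally to re-anchor the calibration, which is why such devices must be recalibrated against a conventional monitor from time to time.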

_

34. Central aortic pressure is a better predictor of cardiovascular outcome than peripheral pressure, and peripherally obtained blood pressure does not accurately reflect central pressure because of pressure amplification. Antihypertensive medications also have differing effects on central aortic pressure despite similar reductions in brachial blood pressure. Brachial BP can be unreliable, especially in young people, whose more flexible blood vessel walls can give misleadingly high readings, leading to unnecessary medical intervention. Conversely, older people with stiffer blood vessels may give a misleadingly low brachial BP reading, disguising dangerously high blood pressure that can be a precursor to heart attack or stroke. In an innovative technique, a radial-artery sensor records a pulse wave at the wrist while blood pressure is simultaneously measured in the upper arm in the conventional way. The data are then used to mathematically compute the central aortic pressure. The process takes a few minutes more than conventional measurement. Direct comparison with traditional central aortic measurements obtained using the old-fashioned, invasive method shows a 99% correlation. All you need is a wrist-watch device that records the radial-artery pressure pulse wave and a conventional upper-arm BP monitor, and you can estimate your central aortic pressure at home.

_

35. The sensitivity and specificity of self-reported hypertension found by SMBP are about 71% and 90% respectively. These results confirm the validity of self-reported hypertension in the population. Since one out of three adults worldwide has hypertension, and since only half of all hypertensives are aware that they have it, these so-called healthy people (unaware of their hypertension) can measure BP randomly at home to detect hypertension and report it to their doctor, thereby saving their lives. I therefore recommend that every home have an automated oscillometric SMBP device; if this recommendation is accepted, thousands of lives among the so-called healthy population would be saved worldwide every year.

 ___________

___________

Dr. Rajiv Desai. MD.

October 2, 2014

________

Postscript:

When most doctors and nurses cannot measure BP accurately, can we expect the lay public to measure it accurately? This article was written to help people measure their own BP at home in the most accurate way and, in consultation with their doctors, adjust drug treatment accordingly to save their lives. I urge doctors to keep two devices in their clinic, a mercury sphygmomanometer (regularly serviced) and a validated automated oscillometric device, and to take the BP of every patient on both. In my view, the ideal BP measurement device is still elusive.

_

Footnote:

I never understood why we inflate the cuff, occlude the brachial artery, then slowly release the cuff air and measure systolic pressure at the first Korotkoff sound and diastolic pressure at the last Korotkoff sound. Can we do the reverse? Slowly inflate the brachial cuff, so that the first sound heard marks diastolic pressure, and keep slowly inflating until the last sound heard, which marks systolic pressure. I could not find any literature or reference on “Reverse Korotkoff Sounds”. If anybody knows of it, please drop me an email.

_

GENE THERAPY

August 31st, 2014

_______

GENE THERAPY:    

______

______

Caveat:

Medicine is an ever-changing science. As new research and clinical experience broaden our knowledge, changes in treatment and drug therapy are required. I have checked with sources believed to be reliable in their efforts to provide information that is complete and generally in accord with the standards accepted at the time of publishing this article. However, in view of the possibility of human error or changes in medical sciences, I do not assure that the information contained herein is in every respect accurate or complete, and I disclaim all responsibility for any errors or omissions or for the results obtained from use of the information contained in this work. Readers are encouraged to confirm the information contained herein with other sources. I have taken some information from articles that were published few years ago. The facts and conclusions presented may have since changed and may no longer be accurate. Questions about personal health should always be referred to a physician or other health care professional.  

______

Prologue:

“BLASPHEMY!” some cried when the concept of gene therapy first surfaced. For them, tinkering with the genetic constitution of human beings was equivalent to playing God, and this they perceived as sacrilegious! On the other side was the scientific community, abuzz with excitement at the prospect of being able to wipe certain genetic disorders entirely from the human gene pool. Although the term gene therapy was first introduced in the 1970s, the controversy about the rationality of this line of treatment still rages on. At the center of the debate lie the gene therapy pros and cons, which draw opinions from religious, ethical and, undoubtedly, political domains. The concept of genes as carriers of phenotypic information was introduced in the 19th century by Gregor Mendel, who demonstrated the properties of genetic inheritance in peas. Over the next 100 years, many significant discoveries led to the conclusions that genes encode proteins and reside on chromosomes, which are composed of DNA. These findings culminated in the central dogma of molecular biology: that proteins are translated from RNA, which is transcribed from DNA. James Watson was quoted as saying, “we used to think that our fate was in our stars, but now we know, in large measure, our fate is in our genes”. Genes, the functional units of heredity, are specific sequences of bases that encode instructions to make proteins. Although genes get a lot of attention, it is the proteins that perform most life functions. When genes are altered, the encoded proteins are unable to carry out their normal functions, resulting in genetic disorders. Gene therapy is a novel therapeutic branch of modern medicine. Its emergence is a direct consequence of the revolution heralded by the introduction of recombinant DNA methodology in the 1970s. Gene therapy is still highly experimental, but has the potential to become an important treatment regimen. 
In principle, it allows the transfer of genetic information into patient tissues and organs. Consequently, diseased genes can be eliminated or their normal functions rescued. Furthermore, the procedure allows the addition of new functions to cells, such as the production of immune system mediator proteins that help to combat cancer and other diseases. Most scientists believe the potential for gene therapy is the most exciting application of DNA science, yet undertaken.

__________

Note:

Please read my other articles ‘Stem cell therapy and human cloning’, ‘Cell death’ and ‘Genetically modified’ before reading this article.

__________

The rapid pace of technological advances has profound implications for medical applications far beyond their traditional roles to prevent, treat, and cure disease. Cloning, genetic engineering, gene therapy, human-computer interfaces, nanotechnology, and designer drugs have the potential to modify inherited predispositions to disease, select desired characteristics in embryos, augment “normal” human performance, replace failing tissues, and substantially prolong life span. As gene therapy rises in the field of medicine, scientists believe that in 20 years it may provide the cure for every genetic disease. Genes may ultimately be used as medicine, given as a simple intravenous injection of a gene transfer vehicle that will seek out target cells for stable, site-specific chromosomal integration and subsequent gene expression. And now that a draft of the human genome map is complete, research is focusing on the function of each gene and the role faulty genes play in disease. Gene therapy will ultimately play a Copernican part and change our lives forever.

_

Gene therapy, the experimental therapy as on today:

Gene therapy is an experimental technique that uses genes to treat or prevent diseases. Genes are specific sequences of bases that encode instructions on how to make proteins. When genes are altered so that the encoded proteins are unable to carry out their normal functions, genetic disorders can result. Gene therapy aims to correct defective genes responsible for disease development, and researchers may use one of several approaches to correct faulty genes. Although gene therapy is a promising treatment that may help treat and prevent various diseases, including inherited disorders, some types of cancer, and certain viral infections, it is still at an experimental stage. Gene therapy is presently being tested only for the treatment of diseases that have no other cures. Currently, the only way to receive gene therapy is to participate in a clinical trial. Clinical trials are research studies that help doctors determine whether a gene therapy approach is safe for people. They also help doctors understand the effects of gene therapy on the body. Your specific procedure will depend on the disease you have and the type of gene therapy being used. 

______

Introduction to gene therapy:

Gene therapy is a clinical strategy involving gene transfer with therapeutic purposes. It is based on the concept that an exogenous gene (transgene) is able to modify the biology and phenotype of target cells, tissues and organs. Initially designed to definitively correct monogenic disorders, such as cystic fibrosis, severe combined immunodeficiency or muscular dystrophy, gene therapy has evolved into a promising therapeutic modality for a diverse array of diseases. Targets are expanding and currently include not only genetic, but also many acquired diseases, such as cancer, tissue degeneration or infectious diseases. Depending on the duration planned for the treatment, the type and location of target cells, and whether they undergo division or are quiescent, different vectors may be used, involving nonviral methods, non-integrating viral vectors or integrating viral vectors. The first gene therapy clinical trial was carried out in 1989, in patients with advanced melanoma, using tumor-infiltrating lymphocytes modified by retroviral transduction. In the early nineties, a clinical trial in children with severe combined immunodeficiency (SCID) was also performed, by retroviral transfer of the adenosine deaminase gene to lymphocytes isolated from these patients. Since then, more than 5,000 patients have been treated in more than 1,000 clinical protocols all over the world. Despite the initial enthusiasm, however, the efficacy of gene therapy in clinical trials has not been as high as expected; a situation further complicated by ethical and safety concerns. Further studies are being developed to overcome these limitations.

_________

Historical development of gene therapy:

Chronology of development of gene therapy technology:

1970s, 1980s and earlier:

In 1972 Friedmann and Roblin authored a paper in Science titled “Gene therapy for human genetic disease?” Rogers (1970) was cited for proposing that exogenous good DNA be used to replace the defective DNA in those who suffer from genetic defects. However, these authors concluded that it was premature to begin gene therapy studies in humans because of lack of basic knowledge of genetic regulation and of genetic diseases, and for ethical reasons. They did, however, propose that studies in cell cultures and in animal models aimed at development of gene therapies be undertaken. Such studies–as well as abortive gene therapy studies in humans–had already begun as of 1972. In the 1970s and 1980s, researchers applied such technologies as recombinant DNA and development of viral vectors for transfer of genes to cells and animals to the study and development of gene therapies.

1990s:

The first approved gene therapy case in the United States took place on 14 September 1990, at the National Institutes of Health, under the direction of Professor William French Anderson. It was performed on a four-year-old girl named Ashanti DeSilva, as a treatment for a genetic defect that left her with ADA-SCID, a severe immune system deficiency. The effects were successful, but only temporary. A newer gene therapy approach repairs errors in messenger RNA derived from defective genes; this technique has the potential to treat the blood disorder thalassaemia, cystic fibrosis, and some cancers. Researchers at Case Western Reserve University and Copernicus Therapeutics were able to create tiny liposomes 25 nanometers across that can carry therapeutic DNA through pores in the nuclear membrane. Sickle-cell disease was successfully treated in mice: the mice, which have essentially the same defect that causes sickle cell disease in humans, were made, through the use of a viral vector, to express fetal hemoglobin (HbF), which normally ceases to be produced shortly after birth. In humans, the use of hydroxyurea to stimulate the production of HbF has long been shown to temporarily alleviate the symptoms of sickle cell disease. The researchers demonstrated this method of gene therapy to be a more permanent means of increasing the production of therapeutic HbF. In 1992 Doctor Claudio Bordignon, working at the Vita-Salute San Raffaele University, Milan, Italy, performed the first procedure of gene therapy using hematopoietic stem cells as vectors to deliver genes intended to correct hereditary diseases. In 2002 this work led to the publication of the first successful gene therapy treatment for adenosine deaminase deficiency (ADA-SCID). 
The success of a multi-center trial for treating children with SCID (severe combined immune deficiency or “bubble boy” disease) held between 2000 and 2002 was questioned when two of the ten children treated at the trial’s Paris center developed a leukemia-like condition. Clinical trials were halted temporarily in 2002, but resumed after regulatory review of the protocol in the United States, the United Kingdom, France, Italy, and Germany. In 1993 Andrew Gobea was born with severe combined immunodeficiency (SCID); genetic screening before birth showed that he had the disease. Blood containing stem cells was removed from Andrew’s placenta and umbilical cord immediately after birth. The allele that codes for ADA was obtained and inserted into a retrovirus. Retroviruses and stem cells were mixed, after which the viruses entered the stem cells and inserted the gene into their chromosomes. Stem cells containing the working ADA gene were injected into Andrew’s bloodstream via a vein, and injections of the ADA enzyme were also given weekly. For four years, T cells (white blood cells) produced by the stem cells made ADA enzymes using the ADA gene; after four years more treatment was needed. The 1999 death of Jesse Gelsinger in a gene therapy clinical trial resulted in a significant setback to gene therapy research in the United States. Jesse Gelsinger had ornithine transcarbamylase deficiency. In a clinical trial at the University of Pennsylvania, he was injected with an adenoviral vector carrying a corrected gene to test the safety of the procedure. He suffered a massive immune response triggered by the viral vector and died four days later. As a result, the U.S. FDA suspended several clinical trials pending re-evaluation of ethical and procedural practices in the field.

2003:

In 2003 a University of California, Los Angeles research team inserted genes into the brain using liposomes coated in a polymer called polyethylene glycol. The transfer of genes into the brain is a significant achievement because viral vectors are too big to get across the blood–brain barrier. This method has potential for treating Parkinson’s disease. RNA interference or gene silencing may be a new way to treat Huntington’s disease. Short pieces of double-stranded RNA (short, interfering RNAs or siRNAs) are used by cells to degrade RNA of a particular sequence. If a siRNA is designed to match the RNA copied from a faulty gene, then the abnormal protein product of that gene will not be produced.

2006:

In March 2006 an international group of scientists announced the successful use of gene therapy to treat two adult patients for X-linked chronic granulomatous disease, a disease which affects myeloid cells and which gives a defective immune system. The study, published in Nature Medicine, is believed to be the first to show that gene therapy can cure diseases of the myeloid system. In May 2006 a team of scientists led by Dr. Luigi Naldini and Dr. Brian Brown from the San Raffaele Telethon Institute for Gene Therapy (HSR-TIGET) in Milan, Italy reported a breakthrough for gene therapy in which they developed a way to prevent the immune system from rejecting a newly delivered gene. Similar to organ transplantation, gene therapy has been plagued by the problem of immune rejection. So far, delivery of the ‘normal’ gene has been difficult because the immune system recognizes the new gene as foreign and rejects the cells carrying it. To overcome this problem, the HSR-TIGET group utilized a newly uncovered network of genes regulated by molecules known as microRNAs. Dr. Naldini’s group reasoned that they could use this natural function of microRNA to selectively turn off the identity of their therapeutic gene in cells of the immune system and prevent the gene from being found and destroyed. The researchers injected mice with the gene containing an immune-cell microRNA target sequence, and the mice did not reject the gene, as previously occurred when vectors without the microRNA target sequence were used. This work will have important implications for the treatment of hemophilia and other genetic diseases by gene therapy. In August 2006, scientists at the National Institutes of Health (Bethesda, Maryland) successfully treated metastatic melanoma in two patients using killer T cells genetically retargeted to attack the cancer cells. This study constitutes one of the first demonstrations that gene therapy can be effective in treating cancer. 
In November 2006 Preston Nix from the University of Pennsylvania School of Medicine reported on VRX496, a gene-based immunotherapy for the treatment of human immunodeficiency virus (HIV) that uses a lentiviral vector for delivery of an antisense gene against the HIV envelope. In the Phase I trial enrolling five subjects with chronic HIV infection who had failed to respond to at least two antiretroviral regimens, a single intravenous infusion of autologous CD4 T cells genetically modified with VRX496 was safe and well tolerated. All patients had stable or decreased viral load; four of the five patients had stable or increased CD4 T cell counts. In addition, all five patients had stable or increased immune response to HIV antigens and other pathogens. This was the first evaluation of a lentiviral vector administered in U.S. Food and Drug Administration-approved human clinical trials for any disease. Data from an ongoing Phase I/II clinical trial were presented at CROI 2009.

2007:

On 1 May 2007 Moorfields Eye Hospital and University College London’s Institute of Ophthalmology announced the world’s first gene therapy trial for inherited retinal disease. The first operation was carried out on a 23 year-old British male, Robert Johnson, in early 2007. Leber’s congenital amaurosis is an inherited blinding disease caused by mutations in the RPE65 gene. The results of a small clinical trial in children were published in New England Journal of Medicine in April 2008. They researched the safety of the subretinal delivery of recombinant adeno-associated virus (AAV) carrying RPE65 gene, and found it yielded positive results, with patients having modest increase in vision, and, perhaps more importantly, no apparent side-effects.

2008:

In May 2008, two more groups, one at the University of Florida and another at the University of Pennsylvania, reported positive results in independent clinical trials using gene therapy to treat Leber’s congenital amaurosis. In all three clinical trials, patients recovered functional vision without apparent side-effects. These studies, which used adeno-associated virus, have spawned a number of new studies investigating gene therapy for human retinal disease.

2009:

In September 2009, the journal Nature reported that researchers at the University of Washington and University of Florida were able to give trichromatic vision to squirrel monkeys using gene therapy, a hopeful precursor to a treatment for color blindness in humans. In November 2009, the journal Science reported that researchers succeeded at halting a fatal genetic disorder called adrenoleukodystrophy in two children using a lentivirus vector to deliver a functioning version of ABCD1, the gene that is mutated in the disorder.

2010:

A paper by Komáromy et al. published in April 2010 deals with gene therapy for a form of achromatopsia in dogs. Achromatopsia, or complete color blindness, is presented as an ideal model for developing gene therapy directed at cone photoreceptors. Cone function and day vision were restored for at least 33 months in two young dogs with achromatopsia; however, the therapy was less efficient in older dogs. In September 2010, it was announced that an 18-year-old male patient in France with beta-thalassemia major had been successfully treated with gene therapy. Beta-thalassemia major is an inherited blood disease in which beta haemoglobin is missing and patients are dependent on regular lifelong blood transfusions. A team directed by Dr. Philippe Leboulch (of the University of Paris, Bluebird Bio and Harvard Medical School) used a lentiviral vector to transduce the human β-globin gene into purified blood and marrow cells obtained from the patient in June 2007. The patient’s haemoglobin levels were stable at 9 to 10 g/dL, about a third of the haemoglobin contained the form introduced by the viral vector, and blood transfusions had not been needed. Further clinical trials were planned. Bone marrow transplants are the only cure for thalassemia, but 75% of patients are unable to find a matching bone marrow donor.

2011:

In 2007 and 2008, a man being treated by Gero Hütter was cured of HIV by repeated haematopoietic stem cell transplantation from a donor homozygous for the CCR5-Δ32 mutation, which disables the CCR5 receptor; this cure was not completely accepted by the medical community until 2011. The cure required complete ablation of the existing bone marrow, which is very debilitating. In August 2011, two of three subjects of a pilot study were confirmed to have been cured of chronic lymphocytic leukemia (CLL). The study, carried out by researchers at the University of Pennsylvania, used genetically modified T cells to attack cells that expressed the CD19 protein in order to fight the disease. In 2013, the researchers announced that 26 of 59 patients had achieved complete remission and the original patient had remained tumor-free. Human HGF plasmid DNA therapy of cardiomyocytes is being examined as a potential treatment for coronary artery disease, as well as for the damage that occurs to the heart after myocardial infarction.

2012:

The FDA approved clinical trials of the use of gene therapy on thalassemia major patients in the US. Researchers at Memorial Sloan Kettering Cancer Center in New York began recruiting 10 participants for the study in July 2012; the study was expected to end in 2014. In July 2012, the European Medicines Agency recommended approval of a gene therapy treatment for the first time in either Europe or the United States. The treatment, alipogene tiparvovec (Glybera), compensates for lipoprotein lipase deficiency (LPLD), which can cause severe pancreatitis. People with LPLD cannot break down fat and must manage their disease with a restricted diet. However, dietary management is difficult, and a high proportion of patients suffer life-threatening pancreatitis. The recommendation was endorsed by the European Commission in November 2012, and commercial rollout was expected in late 2013. In December 2012, it was reported that 10 of 13 patients with multiple myeloma were in remission “or very close to it” three months after being injected with a treatment involving genetically engineered T cells to target the proteins NY-ESO-1 and LAGE-1, which exist only on cancerous myeloma cells.

2013:

In March 2013, researchers at the Memorial Sloan-Kettering Cancer Center in New York reported that three of five subjects who had acute lymphocytic leukemia (ALL) had been in remission for five months to two years after being treated with genetically modified T cells which attacked cells carrying the CD19 protein on their surface, i.e. all B-cells, cancerous or not. The researchers believed that the patients’ immune systems would make normal T-cells and B-cells after a couple of months; however, the patients were also given bone marrow transplants as a precaution. One patient had relapsed and died and one had died of a blood clot unrelated to the disease. Following encouraging Phase 1 trials, in April 2013, researchers in the UK and the US announced they were starting Phase 2 clinical trials (called CUPID2 and SERCA-LVAD) on 250 patients at several hospitals in the US and Europe to use gene therapy to combat heart disease. These trials were designed to increase the levels of SERCA2a protein in the heart muscles and improve the function of these muscles. The FDA granted this a Breakthrough Therapy Designation, which would speed up the trial and approval process in the USA. In July 2013, the Italian San Raffaele Telethon Institute for Gene Therapy (HSR-TIGET) reported that six children with two severe hereditary diseases had been treated with a partially deactivated lentivirus to replace a faulty gene, and after 7–32 months the results were promising. Three of the children had metachromatic leukodystrophy, which causes children to lose cognitive and motor skills. The other children had Wiskott-Aldrich syndrome, which leaves them open to infection, autoimmune diseases and cancer due to a faulty immune system.
In October 2013, the Great Ormond Street Hospital, London reported that two children born with adenosine deaminase severe combined immunodeficiency disease (ADA-SCID) had been treated with genetically engineered stem cells 18 months previously and their immune systems were showing signs of full recovery. Another three children treated since then were also making good progress. ADA-SCID children have no functioning immune system and are sometimes known as “bubble children.” In October 2013, Amit Nathwani of the Royal Free London NHS Foundation Trust reported that they had treated six people with haemophilia in early 2011 using a genetically engineered adeno-associated virus. Over two years later all six were still producing blood plasma clotting factor.

2014:

In January 2014, researchers at the University of Oxford reported that six people suffering from choroideremia had been treated with a genetically engineered adeno-associated virus carrying a copy of the REP1 gene. Over a six-month to two-year period all had improved their sight. Choroideremia is an inherited genetic eye disease for which in the past there has been no treatment, and patients eventually go blind. In March 2014, researchers at the University of Pennsylvania reported that 12 patients with HIV had been treated since 2009 in a trial with a genetically engineered virus carrying a rare mutation known to protect against HIV (CCR5 deficiency). Results were promising.

_

The three main issues for the coming decade will be public perceptions, scale-up and manufacturing, and commercial considerations. Focusing on single-gene applications, which tend to be rarer diseases, will produce successful results sooner than the current focus on the commoner, yet more complex, cancer and heart diseases.   

______

What is Gene?

A gene is the basic unit of hereditary information. It provides the code for a living organism’s traits, characteristics, function, and physical development. Each person has around 25,000 genes, located on 46 chromosomes. A gene is a segment of DNA, found on a chromosome, that codes for a particular protein. It acts as a blueprint for making the enzymes and other proteins required for every biochemical reaction and structure of the body.

What is allele?

Alleles are two or more alternative forms of a gene that can occupy a specific locus (location) on a chromosome.  

What is DNA?

Deoxyribonucleic acid (DNA) is a nucleic acid that contains the genetic information used in the development and function of all known living organisms. The main role of DNA is the long-term storage of information. DNA is often compared to a set of blueprints or a recipe or code, since it contains the instructions needed to construct other components of cells, such as proteins. The DNA segments that carry this genetic information are called genes.

What are Chromosomes?

A chromosome is a single piece of DNA containing many genes. Chromosomes also contain DNA-bound proteins, which serve to package the DNA and control its functions. Chromosomes are found inside the nucleus of cells.

What are Proteins?

Proteins are large organic compounds made of amino acids. They are involved in many processes within cells. Proteins act as building blocks, or function as enzymes and are important in “communication” among cells.

_

What are plasmids?

_

_

A plasmid is any extrachromosomal heritable determinant. Plasmids are fragments of double-stranded DNA that can replicate independently of chromosomal DNA, and usually carry genes. Although they can be found in Bacteria, Archaea and Eukaryotes, they play the most significant biological role in bacteria, where they can be passed from one bacterium to another by horizontal gene transfer, usually providing a context-dependent selective advantage, such as antibiotic resistance.

_

In the center of almost every cell in your body is a region called the nucleus. The nucleus contains your DNA, the genetic code you inherited from each of your parents. The DNA is ribbon-like in structure, but normally exists in a condensed form called chromosomes. You have 46 chromosomes (23 from each parent), which in turn contain thousands of genes. These genes encode instructions on how to make proteins. Proteins make up the majority of a cell’s structure and perform most life functions. Genes tell cells how to work, control our growth and development, and determine what we look like and how our bodies work. They also play a role in the repair of damaged cells and tissues. Each person has around 25,000 genes, which are made up of DNA. You have 2 copies of every gene, 1 inherited from your mother and 1 from your father.

_

_

DNA or deoxyribonucleic acid is the very long molecule that encodes the genetic information. A gene is a stretch of DNA required to make a functional product such as part or all of a protein. People have about 25,000 genes. During gene therapy, DNA that codes for specific genes is delivered to individual cells in the body.

_

The Human Genome:

The human genome is the entire genetic code that resides in every cell in your body (with the exception of red blood cells). The genome is divided into 23 chromosome pairs. During reproduction, two copies of the chromosomes (one from each parent) are passed onto the offspring. While most chromosomes are identical for males and females, the exceptions are the sex chromosomes (known as the X and Y chromosomes). Each chromosome contains thousands of individual genes. These genes can be further divided into sequences called exons and introns, which are in turn made up of even shorter sequences called codons. And finally, the codons are made up of base pairs, combinations of four bases: adenine, cytosine, thymine, and guanine. Or A, C, T, and G for short. The human genome is vast, containing an estimated 3.2 billion base pairs. To put that in perspective, if the genome were a book, it would be hundreds of thousands of pages long. That’s enough room for a dozen copies of the entire Encyclopaedia Britannica, and all of it fits inside a microscopic cell.
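The codon structure and the “book” arithmetic above can be sketched in a few lines of Python; the characters-per-page figure is an assumption chosen only to illustrate the scale (one letter per base, a densely printed page):

```python
def codons(seq):
    """Split a DNA sequence into non-overlapping 3-base codons."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

print(codons("ATGGCCTTTGAA"))    # a made-up 12-base stretch -> ['ATG', 'GCC', 'TTT', 'GAA']

GENOME_BP = 3_200_000_000        # ~3.2 billion base pairs (the text's estimate)
CHARS_PER_PAGE = 5_000           # assumed page density: one character per base
print(f"~{GENOME_BP // CHARS_PER_PAGE:,} pages")   # ~640,000 pages
```

With these assumptions the genome runs to roughly 640,000 pages, consistent with the “hundreds of thousands of pages” in the text.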

_

_

Our genes help make us unique. Inherited from our parents, they go far in determining our physical traits — like eye color and the color and texture of our hair. They also determine things like whether babies will be male or female, the amount of oxygen blood can carry, and the likelihood of getting certain diseases. Scientists believe that every human has about 25,000 genes per cell. A mutation, or change, in any one of these genes can result in a disease, physical disability, or shortened life span. These mutations can be passed from one generation to another, inherited just like a mother’s curly hair or a father’s brown eyes. Mutations also can occur spontaneously in some cases, without having been passed on by a parent. With gene therapy, the treatment or elimination of inherited diseases or physical conditions due to these mutations could become a reality. Gene therapy involves the manipulation of genes to fight or prevent diseases. Put simply, it introduces a “good” gene into a person who has a disease caused by a “bad” gene. Variations on genes are known as alleles. Because of changes in the genetic code caused by mutations, there is often more than one version of a gene in the gene pool. For example, there is a specific gene to determine a person’s blood type. Therefore, a person with blood type A will have a different version of that gene than a person with blood type B. Some genes work in tandem with each other.

_

Genes to protein:

Chromosomes contain long chains of DNA built with repeating subunits known as nucleotides. That means a single gene is a finite stretch of DNA with a specific sequence of nucleotides. Those nucleotides act as a blueprint for a specific protein, which gets assembled in a cell using a multistep process.

1. The first step, known as transcription, begins when a DNA molecule unzips and serves as a template to create a single strand of complementary messenger RNA.

2. The messenger RNA then travels out of the nucleus and into the cytoplasm, where it attaches to a structure called the ribosome.

3. There, the genetic code stored in the messenger RNA, which itself reflects the code in the DNA, determines a precise sequence of amino acids. This step is known as translation, and it results in a long chain of amino acids — a protein.

Proteins are the workhorses of cells. They help build the physical infrastructure, but they also control and regulate important metabolic pathways. If a gene malfunctions — if, say, its sequence of nucleotides gets scrambled — then its corresponding protein won’t be made or won’t be made correctly. Biologists call this a mutation, and mutations can lead to all sorts of problems, such as cancer and phenylketonuria. Gene therapy tries to restore or replace a defective gene, bringing back a cell’s ability to make a missing protein.  
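The three steps above can be sketched in Python. This is a minimal illustration, not a biological simulation: the codon table below contains only four of the 64 real codon assignments, and the sequences are made up:

```python
COMPLEMENT = {"A": "U", "T": "A", "G": "C", "C": "G"}   # DNA template base -> mRNA base

# A few real codon assignments, just enough for the demo:
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def transcribe(template_strand):
    """Step 1: build complementary messenger RNA from a DNA template strand."""
    return "".join(COMPLEMENT[base] for base in template_strand)

def translate(mrna):
    """Steps 2-3: read the mRNA three bases at a time into amino acids."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

mrna = transcribe("TACAAACCGATT")   # made-up template strand
print(mrna)                          # AUGUUUGGCUAA
print(translate(mrna))               # ['Met', 'Phe', 'Gly']
```

The output is a short chain of amino acids, mirroring how a mutation in the DNA template would propagate into a changed or truncated protein.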

_

Length measurements of DNA/RNA:

The following abbreviations are commonly used to describe the length of a DNA/RNA molecule:

bp = base pair(s)— one bp corresponds to approximately 3.4 Å (340 pm) of length along the strand, or to roughly 618 or 643 daltons for DNA and RNA respectively.

kb (= kbp) = kilo base pairs = 1,000 bp

Mb = mega base pairs = 1,000,000 bp

Gb = giga base pairs = 1,000,000,000 bp.

In the case of single-stranded DNA/RNA, units of nucleotides are used, abbreviated nt (or knt, Mnt, Gnt), as the bases are not paired.

Note:

Please do not confuse these terms with computer data units.

kb in molecular biology is kilobase pairs = 1000 base pairs

kB in computer data is kilobytes = 1,000 bytes (and lowercase kb there means kilobits)
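The unit conversions and per-bp mass figures above can be wrapped in small helper functions. The helper names are illustrative, and the 618/643 dalton values are the approximations quoted in the text:

```python
def bp_length(n_bp):
    """Express a double-stranded length in bp, kb, Mb and Gb."""
    return {"bp": n_bp, "kb": n_bp / 1e3, "Mb": n_bp / 1e6, "Gb": n_bp / 1e9}

def approx_mass_daltons(n_bp, nucleic_acid="DNA"):
    """Rough molecular mass of a double-stranded molecule, in daltons
    (~618 Da per bp for DNA, ~643 Da per bp for RNA)."""
    per_bp = 618 if nucleic_acid == "DNA" else 643
    return n_bp * per_bp

human_genome = 3_200_000_000
print(bp_length(human_genome))              # {'bp': 3200000000, 'kb': 3200000.0, 'Mb': 3200.0, 'Gb': 3.2}
print(approx_mass_daltons(1_000, "DNA"))    # 618000, i.e. a 1 kb DNA fragment
```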

_

Gene Mutations:  

When human DNA is replicated there is a slight possibility for an error to occur. Human DNA has a built-in error-correction mechanism, but sometimes this mechanism fails and a copying error results. These copying errors are called mutations. The vast majority of mutations occur in ‘junk DNA’ and therefore have no effect on a person’s well-being. When mutations occur in DNA that is used to code proteins, however, physiological effects can follow. Estimates put the average number of new mutations at over 100 per individual, and most of those occur in ‘junk DNA’. Only a handful, between one and four, occur in protein-coding DNA. Given the size of the protein-coding DNA (around 100 million base pairs), mutations there are fairly rare events.
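The figures in this paragraph can be cross-checked with back-of-envelope arithmetic, using only the text's own estimates:

```python
# All numbers below are the text's estimates, not precise measurements.
total_mutations = 100            # new mutations per individual
genome_bp = 3_200_000_000        # ~3.2 Gb total genome
coding_bp = 100_000_000          # ~100 Mb protein-coding DNA

# If mutations land uniformly, the expected number in coding DNA is:
expected_coding = total_mutations * coding_bp / genome_bp
print(round(expected_coding, 1))  # 3.1, consistent with "between one and four"
```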

_

Defective genes:

Each human being carries normal as well as some defective genes. Each of us carries about half a dozen defective genes. We remain blissfully unaware of this fact unless we, or one of our close relatives, are amongst the many millions who suffer from a genetic disease. About one in ten people has, or will develop at some later stage, an inherited genetic disorder, and approximately 2,800 specific conditions are known to be caused by defects (mutations) in just one of the patient’s genes. Some single gene disorders are quite common – cystic fibrosis is found in one out of every 2,500 babies born in the Western World – and in total, diseases that can be traced to single gene defects account for about 5% of all admissions to children’s hospitals. Although genes are responsible for predisposition to disease, the environment, diet, and lifestyle can affect the onset of the illness.   

_

Genetic Disorders:

A genetic disorder is a disease caused in whole or in part by a change in the DNA sequence away from the normal sequence. Genetic disorders can be caused by a mutation in one gene (monogenic disorder), by mutations in multiple genes (multifactorial inheritance disorder), by a combination of gene mutations and environmental factors, or by damage to chromosomes (changes in the number or structure of entire chromosomes, the structures that carry genes). Genetic disorders affect millions of people world-wide. Scientists have currently identified more than 4000 different genetic disorders.

There are four main types of genetic disorders. These include:

  • single-gene
  • multifactorial
  • chromosomal
  • mitochondrial

Single-gene disorders are caused by a defect in a single gene. Examples include Huntington’s disease, cystic fibrosis, and sickle cell anemia. Multifactorial disorders are caused by a combination of multiple genes, often together with environmental factors; Alzheimer’s disease, heart disease and even many cancers have a multifactorial basis. Chromosomal disorders, such as Down syndrome, are caused by changes in the structure or number of entire chromosomes. Finally, there are mitochondrial disorders, in which the DNA of the mitochondria, tiny organelles involved in cell metabolism, is affected.

_

Genetic disorders affect about one in every ten people. Some, like cystic fibrosis, can have consequences early in a child’s life, while others, like Huntington’s disease, don’t show up until later in life. Preventing genetic disorders can be difficult. Unlike diseases that result from external factors, genetic diseases are caused by our very own DNA. When the genetic code in a gene is altered, the gene can become defective. Most genetic disorders are hereditary; however, spontaneous mutations can occur without being inherited from a parent. When a defective gene is passed on to an offspring, there is a risk that the offspring will develop that genetic disorder. Some genetic disorders are caused by dominant genes, requiring only a single copy of the gene for the disease to develop. Others are caused by recessive genes, which require two copies of the defective gene, one from each parent, to cause the disease.

_

Multifaceted diseases:

One of the major consequences of widespread belief in biological determinism is the underlying assumption that if a trait or condition is genetic, it cannot be changed. However, the relationship between genotype (the actual genes an individual inherits) and phenotype (what traits are observable) is complex. For example, cystic fibrosis (CF) is a multifaceted disease that is present in about 1 in every 2,000 live births of individuals of European ancestry. The disease is recessive, meaning that in order for it to show up phenotypically, the individual must inherit the defective gene, known as CFTR, from both parents. More than 1,000 mutation sites have been identified in CFTR, and most have been related to different manifestations of the disease. However, individuals with the same genotype can show remarkably different phenotypes. Some will show early onset, others later onset; in some the pancreas is most afflicted, whereas in others it is the lungs. In some individuals with the most common mutation the effects are severe, whereas in others they are mild to nonexistent. Although the reasons for those differences are not understood, their existence suggests that both genetic background and environmental factors (such as diet) play important roles. In other words, genes are not destiny, particularly when the genetic basis of a condition is unclear or circumstantial but also even in cases where the genetic basis of a disability can be well understood, such as in cystic fibrosis. With modern genomics (the science of understanding complex genetic interactions at the molecular and biochemical levels), unique opportunities have emerged concerning the treatment of genetically based disabilities, such as type 1 diabetes, cystic fibrosis, and sickle-cell anemia.
Those opportunities have centered primarily on gene therapy, in which a functional gene is introduced into the genome to repair the defect, and pharmacological intervention, involving drugs that can carry out the normal biochemical function of the defective gene.

_

Inheritance of genetic disorders:

Most of us do not suffer any harmful effects from our defective genes because we carry two copies of nearly all genes, one derived from our mother and the other from our father. The only exceptions to this rule are the genes found on the male sex chromosomes. Males have one X and one Y chromosome, the former from the mother and the latter from the father, so each cell has only one copy of the genes on these chromosomes. In the majority of cases, one normal gene is sufficient to avoid all the symptoms of disease. If the potentially harmful gene is recessive, then its normal counterpart will carry out all the tasks assigned to both. Only if we inherit from our parents two copies of the same recessive gene will a disease develop. On the other hand, if the gene is dominant, it alone can produce the disease, even if its counterpart is normal. Clearly only the children of a parent with the disease can be affected, and then on average only half the children will be affected. Huntington’s chorea, a severe disease of the nervous system, which becomes apparent only in adulthood, is an example of a dominant genetic disease. Finally, there are the X chromosome-linked genetic diseases. As males have only one copy of the genes from this chromosome, there are no others available to fulfill the defective gene’s function. Examples of such diseases are Duchenne muscular dystrophy and, perhaps most well known of all, hemophilia.

_

Autosomal recessive, autosomal dominant and X-linked:

These terms are used to describe the common modes of inheritance for genetic disorders.

1. Autosomal recessive – where a genetic disorder requires both copies of a gene to be abnormal to cause the disease. Both parents of the affected individual are carriers, i.e., carry one abnormal copy but also have a normal copy so they themselves are not affected.

2. Autosomal dominant – some genetic disorders only need one copy of the gene to be abnormal, i.e., having one normal copy is just not enough. One of the parents is usually affected.

3. X-linked – is where the gene is on the X (sex) chromosome. The mother is usually a carrier with only the male children being at risk of having the disorder.

Homozygous/heterozygous:

Terminology used in a number of different contexts. One context is: homozygous, where a mistake is present in both copies of a gene; versus heterozygous, where the mistake is present in only one of the two gene copies.
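The inheritance patterns above can be sketched as a simple Punnett-square count in Python. The allele labels are illustrative ('A' a normal copy, 'a' an abnormal copy):

```python
from itertools import product

def offspring_genotypes(parent1, parent2):
    """All equally likely child genotypes from two parents' allele pairs."""
    return ["".join(sorted(pair)) for pair in product(parent1, parent2)]

# Autosomal recessive: two carrier (heterozygous) parents.
children = offspring_genotypes("Aa", "Aa")
print(children)                              # ['AA', 'Aa', 'Aa', 'aa']
print(children.count("aa") / len(children))  # 0.25 affected (homozygous)
print(children.count("Aa") / len(children))  # 0.5 carriers (heterozygous)

# Autosomal dominant: one affected heterozygous parent, one unaffected parent.
# Here 'a' stands for the dominant abnormal copy; half the children inherit it.
children = offspring_genotypes("Aa", "AA")
print(children.count("Aa") / len(children))  # 0.5
```

The counts reproduce the classic risks: a 1-in-4 chance of an affected child for two recessive carriers, and a 1-in-2 chance for a dominant disorder with one affected parent.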

_______

What is genetic testing? 

Genetic testing involves analyzing a person’s DNA to determine whether they carry alleles that cause genetic disorders. In cases like Huntington’s disease, a person can have advance warning of the onset of the disease. In other cases, parents who each carry a defective recessive gene will know whether their offspring has the potential to develop a genetic disorder. Testing can be done at any stage in a person’s life. But there are limits to the testing, and the subject raises a number of ethical issues.

There are several types of genetic test, including testing for medical research:

Antenatal testing:

This is used to analyze an individual’s DNA or chromosomes before they are born. At the moment, it cannot detect all inherited disorders. Prenatal testing is offered to couples who may have an increased risk of producing a baby with an inherited disorder. Prenatal testing for Down’s syndrome, which is caused by an extra copy of chromosome 21, is offered to all pregnant women.

Neonatal testing:

Neonatal testing involves analyzing a sample of blood taken by pricking the baby’s heel. This is used just after a baby has been born. It is designed to detect genetic disorders that can be treated early. In the UK, all babies are screened for phenylketonuria, congenital hypothyroidism and cystic fibrosis. Babies born to families that are at risk of sickle cell disease are tested for this disorder.

Carrier testing:

This is used to identify people who carry a recessive allele, such as the allele for cystic fibrosis. It is offered to individuals who have a family history of a genetic disorder. Carrier testing is particularly useful if both parents are tested, because if both are carriers there is an increased risk of producing a baby with a genetic disorder.

Predictive testing:

This is used to detect genetic disorders where the symptoms develop later in life, such as Huntington’s disorder. Predictive testing can be valuable to people who have no symptoms but have a family member with a genetic disorder. The results can help to inform decisions about possible medical care.

_

Limits of genetic testing:

Genetic tests are not available for every possible inherited disorder. And they are not completely reliable. They may produce false positive or false negative results. These can have serious consequences.

False positives:

A false positive occurs when a genetic test has wrongly detected a certain allele or faulty chromosome. The individual or family could believe something is wrong when it is not. This may lead them to decide not to start a family, or to choose an abortion, in order to avoid having a baby with a genetic disorder.

False negatives:

A false negative happens when a genetic test has failed to detect a certain allele or faulty chromosome. The individual or family would be wrongly reassured. This may lead them to decide to start a family or continue with a pregnancy.

_

The technologies that make genetic testing possible range from chemical tests for gene products in the blood, through examining chromosomes from whole cells, to identification of the presence or absence of specific, defined DNA sequences, such as the presence of mutations within a gene sequence. The last of these is becoming much more common in the wake of the Human Genome Project. The technical details of particular tests are changing fast and they are becoming much more accurate. But the important point is that it is possible to test for more genes, and more variants of those genes, using very small samples of material. For an adult, a cheek scraping these days provides ample cells for most DNA testing. Before treatment for a genetic disease can begin, an accurate diagnosis of the genetic defect needs to be made. It is here that biotechnology is also likely to have a great impact in the near future. Genetic engineering research has produced a powerful tool for pinpointing specific diseases rapidly and accurately. There are different techniques to accomplish gene testing.  Short pieces of DNA called DNA probes can be designed to stick very specifically to certain other pieces of DNA. The technique relies upon the fact that complementary pieces of DNA stick together. DNA probes are more specific and have the potential to be more sensitive than conventional diagnostic methods, and it should be possible in the near future to distinguish between defective genes and their normal counterparts, an important development. Another technique involves a side-by-side comparison of more than one person’s DNA. Genes within a person can be compared with healthy copies of those genes to determine if the person’s genes are, in fact, defective.

_

All these different kinds of test can bring benefits. But all three, i.e. pre-natal diagnosis, childhood testing and adult testing, have also been noted as requiring careful management because of ethical problems that can arise from the kind of information they provide. We are confronted with moral choices here, for example, who gets that information and under what circumstances, what they do with it, and who decides what to do with it, are all important issues. Even finding out what people would like to know is not necessarily straightforward. (Is telling someone they can have a test for Huntington’s disease, say, the same as telling them they may be at risk of the disease?) Here we are not primarily concerned with the technologies for testing, but with the ethical context within which testing takes place; a context framed by issues such as informed consent, individual decision-making and confidentiality of genetic information.  

_

At this stage, we should distinguish genetic testing from genetic screening. Genetic testing is used with individuals who, because of their family history, think they are at risk of carrying the gene for a particular genetic disease. Screening covers wide-scale testing of populations, to discover who may be at risk of genetic disease.

_

Genetic Screening: 

Genetic screening may be indicated in populations at risk of a particular genetic disorder. The usual criteria for genetic screening are

1. Genetic inheritance patterns are known.

2.  Effective therapy is available.

3.  Screening tests are sufficiently valid, reliable, sensitive and specific, noninvasive, and safe.

4. Prevalence in a defined population must be high enough to justify the cost of screening.

One aim of prenatal genetic screening is to identify asymptomatic parental heterozygotes carrying a gene for a recessive disorder. For example, Ashkenazi Jews are screened for Tay-Sachs disease, people of African ancestry are screened for sickle cell anemia, and several ethnic groups are screened for thalassemia. If a heterozygote’s mate is also a heterozygote, the couple is at risk of having an affected child. If the risk is high enough, prenatal diagnosis can be pursued (e.g., with amniocentesis, chorionic villus sampling, umbilical cord blood sampling, maternal blood sampling or fetal imaging). In some cases, genetic disorders diagnosed prenatally can be treated, preventing complications. For instance, a special diet or replacement therapy can minimize or eliminate the effects of phenylketonuria, galactosemia, and hypothyroidism. Corticosteroids given to the mother before birth may decrease the severity of congenital virilizing adrenal hyperplasia. Screening may be appropriate for people with a family history of a dominantly inherited disorder that manifests later in life, such as Huntington disease or cancers associated with abnormalities of the BRCA1 and BRCA2 genes. Screening clarifies the risk of developing the condition for that person, who can then make appropriate plans, such as for more frequent screening or preventive therapy. Screening may also be indicated when a family member is diagnosed with a genetic disorder. A person who is identified as a carrier can make informed decisions about reproduction. In a nutshell, genetic screening is justified only if disease prevalence is high enough, treatment is feasible, and tests are accurate enough.
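Criterion 4 above (prevalence) can be illustrated with Bayes’ rule: even an accurate test produces mostly false positives when the disorder is rare in the screened population. The sensitivity and specificity values below are assumed purely for illustration:

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(disease | positive test), by Bayes' rule."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# The same test (99% sensitive, 99% specific) at three prevalence levels:
for prev in (0.0001, 0.01, 0.1):
    ppv = positive_predictive_value(prev, sensitivity=0.99, specificity=0.99)
    print(f"prevalence {prev:>7}: PPV = {ppv:.1%}")
```

At a prevalence of 1 in 10,000 only about 1% of positive results are true positives, whereas at 10% prevalence over 90% are, which is why screening is targeted at populations where the disorder is common.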

_______

Genetic engineering vis-à-vis gene therapy vis-à-vis genetic enhancement:

Genetic engineering, also called genetic modification, is the direct manipulation of an organism’s genome using biotechnology. New DNA may be inserted in the host genome by first isolating and copying the genetic material of interest using molecular cloning methods to generate a DNA sequence, or by synthesizing the DNA, and then inserting this construct into the host organism. Genes may be removed, or “knocked out”, using a nuclease. Gene targeting is a different technique that uses homologous recombination to change an endogenous gene, and can be used to delete a gene, remove exons, add a gene, or introduce point mutations. An organism that is generated through genetic engineering is considered to be a genetically modified organism (GMO). The first GMOs were bacteria in 1973 and GM mice were generated in 1974. Insulin-producing bacteria were commercialized in 1982 and genetically modified food has been sold since 1994. Genetic engineering does not normally include traditional animal and plant breeding, in vitro fertilisation, induction of polyploidy, mutagenesis and cell fusion techniques that do not use recombinant nucleic acids or a genetically modified organism in the process. However, the European Commission has also defined genetic engineering broadly as including selective breeding and other means of artificial selection. Cloning and stem cell research, although not considered genetic engineering, are closely related, and genetic engineering can be used within them. Synthetic biology is an emerging discipline that takes genetic engineering a step further by introducing artificially synthesized genetic material from raw materials into an organism. If genetic material from another species is added to the host, the resulting organism is called transgenic. If genetic material from the same species or a species that can naturally breed with the host is used, the resulting organism is called cisgenic.
In medicine, genetic engineering has been used to mass-produce insulin, human growth hormones, follistim (for treating infertility), human albumin, monoclonal antibodies, antihemophilic factors, vaccines and many other drugs. Vaccination generally involves injecting weak, live, killed or inactivated forms of viruses or their toxins into the person being immunized. Genetically engineered viruses are being developed that can still confer immunity, but lack the infectious sequences. Mouse hybridomas, cells fused together to create monoclonal antibodies, have been humanised through genetic engineering to create human monoclonal antibodies. Genetic engineering has shown promise for treating certain forms of cancer.

_

Gene therapy is the genetic engineering of humans by replacing defective human genes with functional copies. Genetic enhancement refers to the use of genetic engineering to modify a person’s nonpathological human traits. In contrast, gene therapy involves using genetic engineering to alter defective genes or insert corrected genes into the body in order to treat a disease.  However, there is no clear distinction between genetic enhancement and gene therapy. One approach to distinguishing between the two is to classify any improvement beyond that which is “natural” as an enhancement. “Enhancement” would then include preventive measures such as vaccines, which strengthen one’s immune system to a point beyond that which would be achieved “naturally.” Another approach is to consider gene therapy as encompassing any process aimed at preserving or restoring “normal” functions, while anything that improves a function beyond that which is “normal” would be considered a genetic enhancement. This, however, would require “normal” to be defined, which only frustrates the clarification of enhancement versus therapy. Yet another way to distinguish between therapy and enhancement might rely on the goal of the genetic alteration. But the classification of the goal will necessarily depend on how “disease” or “normal” is defined.

_

Human genetic engineering is divided into four types. The first, which is being practiced today, is somatic cell gene therapy. Somatic cells are the cells in our bodies other than the egg and sperm cells. Therefore, if a patient were to suffer from melanoma, for instance, somatic gene therapy could cure the skin cancer, but the cure would not extend to his posterity. Germ-line gene therapy, however, involves correcting the genetic defect in the reproductive cells (egg and sperm) of the patient so that his progeny will also be cured. The third is enhancement genetic engineering, in which a gene is inserted to enhance a specific characteristic. For example, a gene coding for a growth hormone could be inserted to increase a person’s height. The last type is eugenic genetic engineering. It involves the insertion of genes to alter complex human traits that depend on a large number of genes as well as extensive environmental influences. This last type is the most ambitious because it aims at altering a person’s intelligence and personality. So far, only somatic cell gene therapy is being performed. The other types involve serious moral and social issues that prevent their being pursued at this time.

_

A genetically modified organism (GMO) is an organism (plant, animal, microorganism, etc.) whose genetic material (DNA) has been altered using genetic engineering techniques, by either adding a gene from a different species or over-expressing or silencing a preexisting native gene. Genetic material can be artificially inserted either by physically injecting the extra DNA into the nucleus of the intended host with a very fine syringe or a gene gun, or by exploiting the ability of Agrobacterium (a bacterium) to transfer genetic material to plants and the ability of lentiviruses to transfer genes to animal cells. Such bacteria and viruses are then called vectors. Genetically modified (GM) foods are foods derived from genetically modified organisms (GMO). These GM foods could be derived from either the plant kingdom (e.g. tomatoes) or the animal kingdom (e.g. salmon). Genetic material in an organism can also be altered without genetic engineering techniques; such methods include mutation breeding, where an organism is exposed to radiation or chemicals to create a non-specific but stable change, selective breeding (plant breeding and animal breeding), hybridizing and somaclonal variation. However, these organisms are not labeled as GMO. In strict medical terminology, any individual who has received gene therapy necessarily becomes a GMO.

_

Transgenic animal:

A “transgenic animal” is defined as an animal which is altered by the introduction of recombinant DNA through human intervention. This includes two classes of animals: those with heritable germline DNA alterations, and those with somatic non-heritable alterations. Examples of the first class include animals with germline DNA altered through methods requiring ex vivo manipulation of gametes, early embryonic stages, or embryonic stem cell lines. Examples of the second class include animals with somatic cell DNA alterations achieved through gene therapy approaches such as direct plasmid DNA injection or virally mediated gene transfer. “Transgene” refers to a segment of recombinant DNA which is either: 1) introduced into somatic cells, or 2) integrated stably into the germline of its animal host strain, and is transmissible to subsequent generations.

_

Transgenesis:

Transgenesis is the process of introducing an exogenous gene (a transgene) into a living organism so that the organism exhibits a new property and, in the case of germline transgenesis, transmits that property to its offspring.

_

Is insertion of the insulin gene in E. coli an example of gene therapy?

No, but it is a good example of genetic engineering; to be more specific, it is an example of recombinant DNA technology. Gene therapy, genetic enhancement, recombinant DNA technology, transgenesis, etc. are thus different kinds of genetic engineering.

_

Recombinant proteins and genetically engineered vaccines:

Here the therapy is to deliver proteins or vaccines which have been produced by genetic engineering instead of traditional methods. Methods involve:

1. Expression cloning of normal gene products — cloned genes are expressed in microorganisms or transgenic livestock in order to make large amounts of a medically valuable gene product;

2. Production of genetically engineered antibodies — antibody genes are manipulated so as to make novel antibodies, including partially or fully humanized antibodies, for use as therapeutic agents;

3. Production of genetically engineered vaccines — includes novel cancer vaccines and vaccines against infectious agents.

_

______

Gene therapy vs. cell therapy:

Gene therapy is the introduction or alteration of genetic material within a cell or organism with the intention of curing or treating disease. Cell therapy is the transfer of cells into a patient with the goal of improving a disease. Gene therapy can be defined as the use of genetic material (usually deoxyribonucleic acid, DNA) to manipulate a patient’s cells for the treatment of an inherited or acquired disease. Cell therapy can be defined as the infusion or transplantation of whole cells into a patient for the treatment of an inherited or acquired disease. Cell therapy involves either differentiated cells (e.g. lymphocytes) or stem cells (e.g. hematopoietic stem cells, HSC). Stem cell research is about growing new organs and body parts out of basic cells, whereas gene therapy is about replacing or treating parts of the human genome.

_

Cell therapy: 

Cell therapy is the transfer of cells into a patient or animal to help lessen or cure a disease. Cell therapy could be stem cell therapy or non-stem cell therapy; either could be autologous (from the patient) or allogeneic (from a different individual). The origin of the cells depends on the treatment. The transplanted cells are often a type of adult stem cell which has the ability to divide and self-renew as well as provide cells that mature into the relevant specialized cells of the tissue. Transfusion of whole blood or of red blood cells, white blood cells and platelets is a form of cell therapy that is very well accepted. Another common cell therapy is bone marrow transplantation, which has been performed for over 40 years. The term somatic cell therapy refers to the administration to humans of autologous, allogeneic, or xenogeneic living non-germline cells, other than transfusable blood products, for therapeutic, diagnostic, or preventive purposes. Examples of somatic cell therapies include implantation of cells as an in vivo source of a molecular species such as an enzyme, cytokine or coagulation factor; infusion of activated lymphoid cells such as lymphokine-activated killer cells and tumor-infiltrating lymphocytes; and implantation of manipulated cell populations, such as hepatocytes, myoblasts, or pancreatic islet cells, intended to perform a complex biological function.

_

Example of gene therapy and cell therapy:

A classic example of gene therapy is the effort to correct hemophilia. Hemophilia A and hemophilia B are caused by deficiencies of the clotting factors VIII (FVIII) and IX (FIX) respectively. FVIII and FIX are made in the liver and secreted into the blood, where they have critical roles in the formation of clots at sites of vessel injury. Mutations in the FVIII or FIX genes prevent clot formation, and patients with hemophilia are at severe risk of bleeding to death. Using disabled virus carriers, researchers have been able to introduce normal FVIII and FIX genes into the muscle and liver of animal models of hemophilia and, in the case of FIX, human patients. Currently the most common cell therapy (other than blood transfusion) is bone marrow transplantation. Bone marrow transplantation is the treatment of choice for many kinds of leukemia and lymphoma, and is used to treat many inherited disorders ranging from the relatively common thalassemias (deficiencies of alpha-globin or beta-globin, the components of hemoglobin) to rarer disorders like Severe Combined Immune Deficiency (SCID, the “bubble boy” disease). The key to bone marrow transplantation is the identification of a well immunologically matched donor. The patient’s bone marrow cells are then destroyed by chemotherapy or radiation, and cells from the matched donor are infused. The most primitive bone marrow cells, called stem cells, then find their way to the bone marrow, where they replicate to increase their number (self-renewal) and also proliferate and mature, producing normal numbers of donor-derived blood cells in the circulation of the patient within a few weeks. Unfortunately, not all patients have a good immunological match. In addition, up to a third of bone marrow grafts (depending on several factors including the disease) fail to fully repopulate the patient, and the destruction of the host bone marrow can be lethal, particularly in very ill patients. 
These factors combine to hold back the obvious potential of bone marrow transplantation.

_

How are gene therapy and cell therapy related?

Both approaches have the potential to alleviate the underlying cause of genetic diseases and acquired diseases by replacing the missing protein(s) or cells causing the disease symptoms, suppressing expression of proteins which are toxic to cells, or eliminating cancerous cells. 

_

Combining Cell Therapy with Gene Therapy:

Gene therapy and cell therapy are overlapping fields of biomedical research with similar therapeutic goals. Some protocols utilize both gene therapy and cell therapy: stem cells are isolated from the patient, genetically modified in tissue culture to express a new gene, typically using a viral vector, expanded to sufficient numbers, and returned to the patient. Several investigative cell therapy protocols involve the transfer of adult T lymphocytes which are genetically modified to increase their immune potency and can self-renew and kill the disease-causing cells. Stem cells from umbilical cord blood and other tissues are being developed to treat many genetic diseases and some acquired diseases.

_

Classical example of combining cell therapy and gene therapy:

Hematopoietic Stem cell transplantation and gene therapy:

Hematopoietic stem cell transplantation (HSCT) represents the mainstay of treatment for several severe forms of primary immunodeficiency diseases. Progress in cell manipulation, donor selection, the use of chemotherapeutic agents, and prevention and management of transplant-related complications has resulted in significant improvement in survival and quality of life after HSCT. The primary immunodeficiency diseases for which HSCT is most commonly performed include Severe Combined Immune Deficiency (SCID), Wiskott-Aldrich Syndrome (WAS), IPEX Syndrome, Hemophagocytic Lymphohistiocytosis (HLH) and X-linked Lymphoproliferative Disease (XLP). It can also be used in the treatment of Chronic Granulomatous Disease (CGD) and many other severe primary immunodeficiency diseases. The transplantation of HSCs from a “normal” individual to an individual with a primary immunodeficiency disease has the potential to replace the deficient immune system of the patient with a normal immune system and, thereby, effect a cure. There are two potential obstacles that must be overcome for HSCT to be successful. The first obstacle is that the patient (known as the recipient or host) may have enough immune function remaining to recognize the transplanted stem cells as something foreign. The immune system is programmed to react against things perceived as foreign and tries to reject them. This is called graft rejection. In order to prevent rejection, most patients require chemotherapy and/or radiation therapy to weaken their own residual immune system enough to prevent it from rejecting the transplanted HSCs. This is called “conditioning” before transplantation. Many patients with SCID have so little immune function that they are incapable of rejecting a graft and do not require conditioning before HSCT. The second obstacle that must be overcome for the transplant to be successful is Graft versus Host Disease (GVHD). 
This occurs when mature T-cells from the donor, or those which develop after the transplant, perceive the host’s tissues as foreign and attack these tissues. To prevent GVHD, medications to suppress inflammation and T-cell activation are used. These medications may include steroids, cyclosporine and other drugs. In some forms of severe primary immunodeficiency diseases, gene therapy may represent a valid alternative for patients who lack acceptable stem cell donors. To perform gene therapy, the patient’s HSCs are first isolated from the bone marrow or from peripheral blood, and they are then cultured in the laboratory with the virus containing the gene of interest. Various growth factors are added to the culture to make the HSCs proliferate and to facilitate infection with the virus. After two to four days, the cultured cells are washed to remove any free virus, and then they are transfused into the patient. The cells that have incorporated the gene of interest into their chromosomes will pass it to all cells that are generated when these cells divide. Because the gene has been inserted into HSCs, the normal copy of the gene will be passed to all blood cell types, but not to other cells of the body. Because primary immunodeficiency diseases are caused by gene defects that affect blood cells, this can be sufficient to cure the disease. Gene therapy represents a life-saving alternative for those patients with severe forms of primary immunodeficiency diseases who do not have a matched sibling donor. In these cases, performing an HSCT from a haploidentical parent or even from a matched unrelated donor (MUD) would carry significant risks of GVHD. In contrast, GVHD is not a problem after gene therapy, because in this case the normal copy of the gene is inserted into the patient’s own HSCs, negating the need for an HSC donor. Until now, gene therapy has been used to treat patients with SCID secondary to adenosine deaminase (ADA) deficiency, X-linked SCID, CGD and WAS.

_

Another example of Cell and Gene Therapy overlapping is in the use of T-lymphocytes to treat cancer:

Many tumors are recognized as foreign by the patient’s T-cells, but these T-cells do not expand their numbers fast enough to kill the tumor. T-cells found in the tumor can be grown outside the body to very high numbers and then infused into the patient, often causing a dramatic reduction in the size of the tumor. This treatment is especially effective for tumors that have spread, as the tumor-specific lymphocytes will track them down wherever they are. The addition of a gene to the T-cells can create specific T-cells that are more effective tumor killers, and a second gene can be added that allows the expanded T-cells to be killed off after they have done their job.

____________

The technique of genetic manipulation of organisms:

The technique of genetic manipulation, or genetic modification, of organisms relies on restriction enzymes to cut large molecules of DNA in order to isolate the gene or genes of interest from human DNA, which has been extracted from cells. After the gene has been isolated, it is inserted into bacterial cells and cloned. This process enables large amounts of identical copies of the human DNA to be extracted for further experiments. Once inside the bacterial cells, if the human gene is active or ‘switched on’, the bacteria behave like ‘living factories’, manufacturing large amounts of the human protein encoded by the gene, as seen in the figure below. This can be extracted and purified from the bacterial cultures, ready for use by humans. Genetic manipulation has enabled unlimited quantities of certain human proteins to be produced more easily and less expensively than was previously possible. Problems exist with this approach, however, as proteins must fold into very specific structures to have a biological effect, and often this doesn’t happen very effectively in bacteria. In order to overcome this problem, the cloned human DNA has been introduced into sheep. In this case, the human protein is secreted into the milk, allowing for a continuous process of production, as seen in the figure below. Alternatively, the cloned human DNA can be used for gene therapy by direct intervention in the individual’s DNA.
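The first step described above, cutting DNA at a restriction-enzyme recognition site, can be mimicked in a few lines of code. The sketch below is a toy illustration, not a laboratory protocol: it locates the recognition site of one real enzyme, EcoRI (GAATTC, which cleaves after the first G), in a made-up DNA string.

```python
# Toy illustration of the restriction-enzyme step of genetic manipulation:
# locate every EcoRI recognition site (GAATTC) in a DNA string and cut the
# string where the enzyme would cleave (between the G and the AATTC).

def ecori_fragments(dna: str) -> list[str]:
    """Cut a DNA string at every EcoRI site, cleaving after the first G."""
    site = "GAATTC"
    fragments = []
    start = 0
    pos = dna.find(site)
    while pos != -1:
        # EcoRI cleaves after the first G of the recognition site
        fragments.append(dna[start:pos + 1])
        start = pos + 1
        pos = dna.find(site, start)
    fragments.append(dna[start:])
    return fragments

# A made-up sequence containing one EcoRI site:
seq = "ATCGGAATTCTTAA"
print(ecori_fragments(seq))  # ['ATCGG', 'AATTCTTAA']
```

In the real procedure, the overhanging single-stranded “sticky ends” left by such staggered cuts are what allow the isolated gene to be pasted into a bacterial plasmid cut with the same enzyme.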

_

 

_

Human clotting factor VIII, the protein used to treat hemophilia, can be made by splicing the human gene into bacteria. Insulin, which is used to treat diabetes, can be produced by sheep in their milk. The missing gene product can then be supplied to the patient like any other medicine.

_

The figure below shows that a copy of a human gene cloned in bacteria can be used for gene therapy:

__________

Two fundamental gene therapy approaches:

Two approaches to gene therapy exist: correcting genes involved in causing illness; and using genes to treat disorders. Most of the public debate has been about the former meaning, i.e. correcting or repairing genes, but early applications have focused on the latter meaning. These applications involve using ‘designer’ DNA to tackle diseases that are not inherited, for example by using altered viruses designed specifically to attack, say, cancer cells. Here, the DNA is working more or less like a drug. In fact, many ‘gene therapy’ trials approved so far have been attempts to treat a variety of cancers. 

_________

Fundamentals of gene therapy:

_

What is Gene Therapy?

Gene therapy can broadly be considered any treatment that changes gene function. However, gene therapy is often considered specifically the insertion of normal genes into the cells of a person who lacks such normal genes because of a specific genetic disorder. The normal genes can be manufactured, using PCR, from normal DNA donated by another person. Because most genetic disorders are recessive, usually a dominant normal gene is inserted. Currently, such insertion gene therapy is most likely to be effective in the prevention or cure of single-gene defects, such as cystic fibrosis. Gene therapy is thus the intracellular delivery of genes to generate a therapeutic effect by correcting an existing abnormality. The Human Genome Project provides information that can be used to help replace genes that are defective or missing in people with genetic diseases.  

_

_

The figure below shows that mutated gene produces defective protein:

_

The figure below shows that corrected gene replaces defective gene:

Gene therapy is the transfer of genetic material into a host (human or animal) with the intention of alleviating a disease state. Gene therapy uses genetic material to change the expression of a protein(s) critical to the development and/or progression of the disease. In gene replacement therapy, typically used for diseases involving loss of protein function (inherited in an autosomal recessive manner), scientists first identify a gene that is strongly associated with the onset of disease or its progression. They show that correcting its information content, or replacing it with expression of a normal gene counterpart, corrects the defect in cultured cells and improves the disease in animal models, and is not associated with adverse outcomes. Scientists and clinicians then develop strategies to replace the gene or provide its function by administering genetic material into the patient. The relevant genetic material or gene usually is engineered into a “gene cassette” and prepared for introduction into humans according to stringent guidelines for clinical use. The cassette can be delivered directly as DNA, engineered into a disabled viral vector, packaged into a type of membrane vesicle (termed a liposome) so it is efficiently taken up by the appropriate cells of the body, or used to genetically modify cells for implantation into patients. Other types of gene therapy include delivery of RNA or DNA sequences (oligonucleotide therapy), which can be used to depress the function of an unwanted gene, such as one responsible for a mutant protein which acts in a negative way to reduce normal protein function (usually inherited in an autosomal dominant manner), to try to correct a defective gene through stimulation of DNA repair within cells, or to suppress an oncogene which acts as a driver in a cancer cell. 
In other strategies for diseases and cancer, the gene/RNA/DNA delivered is a novel agent intended to change the metabolic state of the cells, for example to make cancer cells more susceptible to drug treatment, to keep dying cells alive by delivery of growth factors, to suppress or activate formation of new blood vessels, or to increase production of a critical metabolite, such as a neurotransmitter critical to brain function. Vectors and cells can also be used to promote an immune response to tumor cells and pathogens by expressing these antigens in immune-responsive cells in combination with factors which enhance the immune response.
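The idea that a mutated gene produces a defective protein can be made concrete with a minimal translation sketch. The sequences below are invented and the codon table is deliberately tiny, but the codon assignments are from the standard genetic code, and the Glu-to-Val change mirrors the well-known sickle-cell mutation in beta-globin (GAG to GTG).

```python
# Toy sketch of why a single-base mutation can yield a defective protein.
# Only a small fragment of the standard genetic code is included here.

CODONS = {"ATG": "Met", "GTG": "Val", "CTG": "Leu", "GAG": "Glu",
          "CCT": "Pro", "ACT": "Thr", "TAA": "Stop"}

def translate(dna: str) -> list[str]:
    """Translate a coding DNA string, codon by codon, until a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODONS[dna[i:i + 3]]
        if aa == "Stop":
            break
        protein.append(aa)
    return protein

normal = "ATGGAGCTGTAA"   # codons ATG-GAG-CTG-TAA: Met-Glu-Leu
mutant = "ATGGTGCTGTAA"   # one A→T substitution in the second codon
print(translate(normal), translate(mutant))
```

A single substituted base changes one amino acid (Glu to Val) in the protein chain; gene replacement therapy supplies a correct copy of the gene so that the normal protein is made again.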

_

Gene therapy (the use of genes as medicines) basically aims to correct defective genes responsible for genetic disorders by one of the following approaches:

• A normal gene could be inserted into a nonspecific location within the genome to replace the nonfunctional gene (most common)

• An abnormal gene could be swapped for a normal gene through homologous recombination

• An abnormal gene could be repaired through selective reverse mutation

• Regulation (degree to which a gene is turned on or off) of a particular gene could be altered

_

Other approaches:

In the most straightforward cases, gene therapy adds a functional copy of a gene to cells that have only non-functional copies. But there are times when simply adding a working copy of the gene won’t solve the problem. In these cases, scientists have had to think outside the box to come up with other approaches.

Dominant negative:
Some mutations in genes lead to the production of a dominant-negative protein. A dominant-negative protein may block a normal protein from doing its job (for an example, see Pachyonychia congenita). In this case, adding a functional copy of the gene won’t help, because the dominant-negative protein will still be there causing problems.

Gain-of-function:
A gain-of-function mutation makes a protein that acts abnormally, causing problems all on its own. For example, let’s say a signal activates protein X, which then tells the cell to start growing and dividing. A gain-of-function mutation may make protein X activate cell growth even when there’s no signal, leading to cancer.

Improper regulation:
Sometimes a disorder can involve a protein that is functioning as it should—but there’s a problem with where, when, or how much protein is being made. These are problems of gene regulation: genes need to be turned “on” in the right place, at the right time, and to the right level. To address the above situations, you could prevent the cell from making the protein the gene encodes, repair the gene, or find a work-around aimed at blocking or eliminating the protein.

_

Gene therapy is the treatment of human disease by gene transfer. Many, or maybe most, diseases have a genetic component — asthma, cancer, Alzheimer’s disease, for example. However, most diseases are polygenic, i.e. a subtle interplay of many genes determines the likelihood of developing a disease condition, whereas, so far, gene therapy can only be contemplated for monogenic diseases, in which there is a single gene defect. Even in these cases only treatment of recessive diseases can be considered, where the correct gene is added in the continued presence of the faulty one. Dominant mutations cannot be approached in this way, as it would be necessary to knock out the existing faulty genes in the cells where they are expressed (i.e. where their presence shows an effect), as well as adding the correct genetic information. Gene therapy for recessive monogenic diseases involves introducing correct genetic material into the patient.

_

The term gene therapy describes any procedure intended to treat or alleviate disease by genetically modifying the cells of a patient. It encompasses many different strategies and the material transferred into patient cells may be genes, gene segments or oligonucleotides. The genetic material may be transferred directly into cells within a patient (in vivo gene therapy), or cells may be removed from the patient and the genetic material inserted into them in vitro, prior to transplanting the modified cells back into the patient (ex vivo gene therapy). Because the molecular basis of diseases can vary widely, some gene therapy strategies are particularly suited to certain types of disorder, and some to others. Major disease classes include:

1. Infectious diseases (as a result of infection by a virus or bacterial pathogen);

2. Cancers (inappropriate continuation of cell division and cell proliferation as a result of activation of an oncogene or inactivation of a tumor suppressor gene or an apoptosis gene);

3. Inherited disorders (genetic deficiency of an individual gene product or genetically determined inappropriate expression of a gene);

4. Immune system disorders (includes allergies, inflammations and also autoimmune diseases, in which body cells are inappropriately destroyed by immune system cells).

A major motivation for gene therapy has been the need to develop novel treatments for diseases for which there is no effective conventional treatment. Gene therapy has the potential to treat all of the above classes of disorder. Depending on the basis of pathogenesis, different gene therapy strategies can be considered.

_

_

Diseases that can be treated by gene therapy are categorized as either genetic or acquired. Genetic diseases are those which are typically caused by the mutation or deletion of a single gene. The expression of a single gene, delivered directly to the cells by a gene delivery system, can potentially eliminate such a disease. Prior to gene therapy studies, there was no treatment directed at the cause of many genetic disorders. Today, it is possible to correct genetic mutations with gene therapy. Conversely, acquired diseases cannot be traced to a single defective gene. Although gene therapy was initially used to treat genetic disorders only, it is now used to treat a wide range of diseases such as cancer, peripheral vascular diseases, arthritis, neurodegenerative disorders and AIDS.

_

Humans possess two copies of most of their genes. In a recessive genetic disease, both copies of a given gene are defective. Many such illnesses are called loss-of-function genetic diseases, and they represent the most straightforward application of gene therapy: If a functional copy of the defective gene can be delivered to the correct tissue and if it makes (“expresses”) its normal protein there, the patient could be cured. Other patients suffer from dominant genetic diseases. In this case, the patient has one defective copy and one normal copy of a given gene. Some of these disorders are called gain-of-function diseases because the defective gene actively disrupts the normal functioning of their cells and tissues (some recessive diseases are also gain-of-function diseases). This defective copy would have to be removed or inactivated in order to cure these patients. Gene therapy may also be effective in treating cancer or viral infections such as HIV-AIDS. It can even be used to modify the body’s responses to injury. These approaches could be used to reduce scarring after surgery or to reduce restenosis, which is the reclosure of coronary arteries after balloon angioplasty.

_

Gene therapy has become an increasingly important topic in science-related news. The basic concept of gene therapy is to introduce a gene with the capacity to cure or prevent the progression of a disease. Gene therapy introduces a normal, functional copy of a gene into a cell in which that gene is defective. Cells, tissues, or even whole individuals (when germ-line cell therapy becomes available) modified by gene therapy are considered to be transgenic or genetically modified. Gene therapy could eventually target the correction of genetic defects, eliminate cancerous cells, prevent cardiovascular diseases, block neurological disorders, and even eliminate infectious pathogens. However, gene therapy should be distinguished from the use of genomics to discover new drugs and diagnostic techniques, although the two are related in some respects.

_

Gene therapy is a fascinating and growing research field of translational medicine. Basic biological understanding of tissue function, cellular events, metabolic processes and stem cell function is all linked to the genetic code and to the genetic material in all species. In mammals, as in simpler creatures, every phenotype’s structural characteristics, function and probably behavior depend on the specific nature and timing of genetic material and events. By altering the genetic material of somatic cells, gene therapy may correct the underlying specific disease pathophysiology. In some instances, it may offer the potential of a one-time cure for devastating inherited disorders. In principle, gene therapy should be applicable to many diseases for which current therapeutic approaches are ineffective or where the prospects for effective treatment appear exceedingly low.

______

Uses of gene therapy:

Gene therapy is being used in many ways. For example, to:

1. Replace missing or defective genes;

2. Deliver genes that speed the destruction of cancer cells;

3. Supply genes that cause cancer cells to revert back to normal cells;

4. Deliver bacterial or viral genes as a form of vaccination;

5. Provide genes that promote or impede the growth of new tissue; and

6. Deliver genes that stimulate the healing of damaged tissue.

_

A large variety of genes are now being tested for use in gene therapy. Examples include: a gene for the treatment of cystic fibrosis (a gene called CFTR that regulates chloride transport); genes for factors VIII and IX, deficiencies of which are responsible for classic hemophilia (hemophilia A) and another form of hemophilia (hemophilia B), respectively; genes called E1A and P53 that cause cancer cells to undergo cell death or revert to normal; the AC6 gene, which increases the ability of the heart to contract and may help in heart failure; and VEGF, a gene that induces the growth of new blood vessels (angiogenesis), of use in blood vessel disease. A short synthetic piece of DNA (called an oligonucleotide) is being used by researchers to “pre-treat” veins used as grafts for heart bypass surgery. The piece of DNA seems to switch off certain genes in the grafted veins to prevent their cells from dividing and thereby prevent atherosclerosis.
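An oligonucleotide of the kind mentioned above can silence a gene by base-pairing with its target sequence, which means its sequence is the reverse complement of the target. The sketch below is a toy illustration (the target fragment is invented, and real oligonucleotide design also weighs length, chemistry and off-target matches):

```python
# Toy sketch of the base-pairing logic behind an antisense oligonucleotide:
# the oligo is the reverse complement of its target DNA sequence, so the two
# strands can pair A-T and G-C in antiparallel orientation.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(dna: str) -> str:
    """Return the reverse complement of a DNA string."""
    return "".join(COMPLEMENT[base] for base in reversed(dna))

target = "ATGGCCTTC"               # hypothetical target fragment
print(reverse_complement(target))  # GAAGGCCAT
```

Applying the function twice returns the original sequence, which is just the statement that complementary strands define each other.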

_______
How does gene therapy work?

Scientists focus on identifying genes that affect the progression of diseases. Depending on the disease, the identified gene may be mutated so that it does not work. The mutation may shorten the protein, lengthen it, or cause it to fold into an abnormal shape; it may also change how much protein is made (its expression level). After identification of the relevant gene(s), scientists and clinicians choose the best current strategy to return cells to a normal state or, in the case of cancer cells, to eliminate them. Thus, one aim of gene therapy can be to provide a correct copy of the gene so that its protein is produced in sufficient quantity for the patient’s disease to improve or disappear. Five main strategies are used in gene therapy for different diseases and cancer: gene addition, gene correction, gene silencing, reprogramming, and cell elimination. In some common diseases, such as Parkinson’s disease and Alzheimer’s disease, different genes and non-genetic causes can underlie the condition. In these cases, gene/cell therapy can be directed at the symptoms rather than the cause, for example by providing growth factors or neutralizing toxic proteins.

_

1. Gene addition:

Gene addition involves inserting a new copy of the relevant gene into the nucleus of appropriate cells. The new gene has its own control signals, including start and stop signals. The new gene with its control signals is usually packaged into either viral vectors or non-viral vectors. The gene-carrying vector may be administered into the affected tissue directly, into a surrogate tissue, or into the blood stream or intraperitoneal cavity. Alternatively, the gene-carrying vector can be used in tissue culture to alter some of the patient’s cells, which are then re-administered into the patient. Gene therapy agents based on gene addition are being developed to treat many diseases, including adenosine deaminase severe combined immunodeficiency (ADA-SCID), alpha-1 antitrypsin deficiency, Batten’s disease, congenital blindness, cystic fibrosis, Gaucher’s disease, hemophilia, HIV infection, Leber’s congenital amaurosis, lysosomal storage diseases, muscular dystrophy, type I diabetes, X-linked chronic granulomatous disease, and many others.

_

2. Gene correction:

Gene correction involves delivering a corrected portion of the gene, with or without supplemental recombination machinery, that efficiently recombines with the defective gene in the chromosome and corrects the mutation in the genome of targeted cells. This can also be carried out by providing DNA/RNA sequences that allow the mutated portion of the messenger RNA to be spliced out and replaced with a corrected sequence or, when one is available in the genome, by increasing expression of a normal counterpart of the defective gene that can replace its function.

_

3. Gene silencing:

Gene silencing is a technique with which geneticists can deactivate an existing gene; by turning a defective gene off, its harmful effects can be prevented. One way to accomplish this is by binding a specific strand of RNA to the gene’s messenger RNA (mRNA). Ordinarily, the mRNA transcribed from a gene is translated into protein; when a complementary RNA strand is bound to the mRNA, translation is blocked and no protein is made from that gene. Specific genes can therefore be targeted and silenced. Viral infections such as hepatitis and HIV/AIDS are among the conditions being explored as targets for gene-silencing techniques. Gene-silencing approaches to gene therapy can target a gene’s DNA directly, or they can target the mRNA transcripts made from the gene. Triple-helix-forming oligonucleotide gene therapy targets the DNA sequence of a mutated gene to prevent its transcription. This technique delivers short, single-stranded pieces of DNA, called oligonucleotides, that bind specifically in the groove between a gene’s two DNA strands. This binding makes a triple-helix structure that blocks the DNA from being transcribed into mRNA.

_

RNA interference takes advantage of the cell’s natural virus-killing machinery, which recognizes and destroys double-stranded RNA. This technique introduces a short piece of RNA with a nucleotide sequence that is complementary to a portion of a gene’s mRNA transcript. The short piece of RNA will find and attach to its complementary sequence, forming a double-stranded RNA molecule, which the cell then destroys.
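The complementary base-pairing logic behind RNA interference can be sketched with a toy example in Python. All sequences below are invented for illustration; real siRNA design involves many additional biochemical constraints:

```python
# Toy illustration of RNA interference base-pairing: a short RNA whose
# sequence is complementary to part of an mRNA transcript will locate and
# pair with that region. Sequences here are invented for illustration only.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    """Return the reverse complement of an RNA sequence (5'->3')."""
    return "".join(COMPLEMENT[base] for base in reversed(rna))

def find_binding_site(mrna: str, short_rna: str) -> int:
    """Find where a short RNA would pair with the mRNA.

    The short RNA binds the region of the mRNA whose sequence is its
    reverse complement; returns the 0-based start index, or -1 if absent.
    """
    return mrna.find(reverse_complement(short_rna))

mrna = "AUGGCUUACGGAUCCGUAACUGA"            # hypothetical transcript
short_rna = reverse_complement(mrna[6:14])  # designed against one region
print(find_binding_site(mrna, short_rna))   # -> 6
```

Once the short RNA has paired with its complementary region, the resulting double-stranded stretch is what the cell's machinery recognizes and destroys.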

_

 Ribozyme gene therapy targets the mRNA transcripts copied from the gene. Ribozymes are RNA molecules that act as enzymes. Most often, they act as molecular scissors that cut RNA. In ribozyme gene therapy, ribozymes are designed to find and destroy mRNA encoded by the mutated gene so that no protein can be made from it.

_

MicroRNAs constitute a recently discovered class of non-coding RNAs that play key roles in the regulation of gene expression. Acting at the post-transcriptional level, these fascinating molecules may fine-tune the expression of as much as 30% of all mammalian protein-encoding genes. By changing levels of specific microRNAs in cells, one can also achieve downregulation of gene expression.  

_

Short Interfering RNA:

Double-stranded RNA, homologous to the gene targeted for suppression, is introduced into cells, where it is cleaved into small fragments of double-stranded RNA named short interfering RNAs (siRNAs). These siRNAs guide the enzymatic destruction of the homologous, endogenous RNA, preventing translation into active protein. They also prime RNA polymerase to synthesize more siRNA, perpetuating the process and resulting in persistent gene suppression. Short interfering RNAs reduce protein production from the corresponding faulty gene. For example, too much tumor necrosis factor (TNF) alpha is often expressed in the afflicted joints of rheumatoid arthritis patients. Since the protein is needed in small amounts in the rest of the body, gene silencing aims to reduce TNF alpha only in the afflicted tissue. Another example is oncoproteins, such as c-myc or EGFR, that are upregulated or amplified in some cancers. Lowering expression of these oncoproteins in cancer cells can inhibit tumor growth.

_

Antisense therapy: a type of gene silencing:

Antisense therapy is a form of treatment for genetic disorders or infections. When the genetic sequence of a particular gene is known to be causative of a particular disease, it is possible to synthesize a strand of nucleic acid (DNA, RNA or a chemical analogue) that will bind to the messenger RNA (mRNA) produced by that gene and inactivate it, effectively turning that gene “off”. This is because mRNA has to be single-stranded for it to be translated. Alternatively, the strand might be targeted to bind a splicing site on pre-mRNA and modify the exon content of an mRNA. This synthesized nucleic acid is termed an “antisense” oligonucleotide because its base sequence is complementary to the gene’s messenger RNA (mRNA), which is called the “sense” sequence (so that a sense mRNA segment 5′-AAGGUC-3′ would be blocked by the antisense segment 3′-UUCCAG-5′). As of 2012, some 40 antisense oligonucleotides and siRNAs were in clinical trials, including over 20 in advanced clinical trials (Phase II or III). Antisense drugs are being researched to treat a variety of diseases such as cancers (including lung cancer, colorectal carcinoma, pancreatic carcinoma, malignant glioma and malignant melanoma), diabetes, amyotrophic lateral sclerosis (ALS), Duchenne muscular dystrophy, and diseases with an inflammatory component such as asthma, arthritis and pouchitis.
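The sense/antisense pairing just described can be checked with a few lines of Python. Because the two strands pair antiparallel, the position-by-position complement of the 5′→3′ sense sequence, read back 3′→5′, gives the antisense sequence:

```python
# Verifying the antisense example from the text: the sense mRNA segment
# 5'-AAGGUC-3' is blocked by the antisense segment 3'-UUCCAG-5'.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def antisense(sense: str) -> str:
    """Position-wise complement of a 5'->3' sense sequence, read 3'->5'."""
    return "".join(COMPLEMENT[base] for base in sense)

sense = "AAGGUC"           # 5' -> 3'
print(antisense(sense))    # -> UUCCAG  (3' -> 5'), matching the text
```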

_

Example of antisense therapy:

Rather than replace the gene, the approach used by Ryszard Kole and colleagues at the University of North Carolina repairs the dysfunctional messenger RNA produced by the defective genes. The technique has also shown promise in treating other genetic diseases such as haemophilia A, cystic fibrosis and some cancers. Kole’s work focused on tricking the red blood cell manufacturing machinery of thalassaemic patients into producing normal haemoglobin from their mutated genes. In normal cells, DNA is transcribed into messenger RNA (mRNA), which is then translated to produce proteins such as haemoglobin. Normal copies of the beta haemoglobin gene contain three coding regions (exons) interspersed with two non-coding sequences, known as introns. These introns have to be removed from the mRNA before it can be translated to produce a fully functioning haemoglobin molecule. Short regions bordering the introns – known as splice sites – tell the cell where to cut and paste the mRNA. Some mutations create additional splice sites. This results in the inclusion of extra, non-coding sequences in the mRNA, which, when translated, produce malfunctioning haemoglobin molecules. Kole and colleagues set out to block these additional splice sites using antisense RNA. This “mirror image” sequence of RNA sticks to the aberrant splice sites. With these sites blocked, the splicing machinery focuses on the original – and correct – splice sites to produce the normal sequence of mRNA. In the team’s latest experiments, the bone marrow cells of two patients were genetically modified in vitro to produce the antisense RNA. The antisense genes were inserted into the cells’ nuclei by a modified lentivirus that had been crippled to ensure it was incapable of reproducing. In the test tube, the bone marrow cells produced about 20 to 30 per cent of a healthy person’s level of normal haemoglobin.
This level is comparable to that achieved with the best available conventional treatments, bone marrow transplants or regular blood transfusions. Kole will soon seek regulatory approval to carry out human trials.

_

Short Hairpin RNA interference: another type of gene silencing:

To effectively silence specific genes in mammalian cells, Elbashir et al designed short hairpin RNA (shRNA). These sequences, which can be cloned into expression vectors and transferred to cells, result in the transcription of a double-stranded RNA brought together by a hairpin loop structure. These shRNAs effectively mimic siRNAs and result in specific and persistent gene suppression in mammalian cells. Multiple groups have incorporated shRNA coding sequences into AAV and lentiviral vectors and demonstrated specific gene suppression in mammalian cells.

_

4. Reprogramming:

Reprogramming involves the addition of one or more genes into cells of the same tissue, causing the altered cells to take on a new set of desired characteristics. For example, type I diabetes occurs because many of the islet cells of the pancreas are damaged, but the exocrine cells of the pancreas are not. Several groups are deciphering which genes to add to some of the exocrine cells of the pancreas to change them into islet cells, so that these modified exocrine cells make insulin and help treat type I diabetes. This is also the strategy in the use of induced pluripotent stem cells (iPS cells), where skin cells or bone marrow cells are removed from the patient and reprogrammed by transitory expression of transcription factors that turn on developmentally programmed genes, thereby steering the cells to become the specific cell types needed for cell replacement in the affected tissue.

_

5. Chimeraplasty: 

It is a non-viral method that is still being researched for its potential in gene therapy. Chimeraplasty is done by changing DNA sequences in a person’s genome using a synthetic strand composed of RNA and DNA, known as a chimeraplast. The chimeraplast enters a cell and attaches itself to the target gene. The DNA of the chimeraplast and that of the cell complement each other except in the middle of the strand, where the chimeraplast’s sequence differs from that of the cell. The cell’s DNA repair enzymes then replace the cell’s DNA with that of the chimeraplast. This leaves the chimeraplast’s new sequence in the cell’s DNA, and the replaced DNA sequence then degrades.

_

6. Cell elimination:

Cell elimination strategies are typically used for cancer (malignant tumors) but can also be used for overgrowth of certain cell types (benign tumors). Typical strategies involve suicide genes, anti-angiogenesis agents, oncolytic viruses, toxic proteins, or mounting an immune response against the unwanted cells. Suicide gene therapy involves expression of a new gene, for example an enzyme that can convert a pro-drug (a non-harmful drug precursor) into an active chemotherapeutic drug. Expression of this suicide gene in the target cancer cells causes their death only upon administration of the pro-drug, and since the active drug is generated within the tumor, its concentration is higher there than in normal tissues, reducing toxicity to the rest of the body. Since tumors depend on new blood vessels to supply their ever-increasing volume, both oligonucleotides and genes aimed at suppressing angiogenesis have been developed. In another approach, a number of different types of viruses have been harnessed through mutations such that they can selectively grow in and kill tumor cells (oncolysis), releasing new virus on site while sparing normal cells. In some cases toxic proteins, such as those that produce apoptosis (programmed cell death), are delivered to tumor cells, typically under a promoter that limits expression to the tumor cells. Other approaches involve vaccination against tumor antigens using genetically modified cells that express the tumor antigens, activation of immune cells, or facilitation of the ability of immune cells to home to tumors. Cancer gene therapy has been limited to some extent by the difficulty of efficiently delivering the therapeutic genes or oligonucleotides to sufficient numbers of tumor cells, which can be distributed throughout tissues and within the body.
To compensate for this insufficient delivery, killing mechanisms are sought which have a “bystander effect” such that the genetically modified cells release factors that can kill non-modified tumor cells in their vicinity. Recent studies have found that certain cell types, such as neuroprecursor cells and mesenchymal cells, are naturally attracted to tumor cells, in part due to factors released by the tumor cells. These delivery cells can then be armed with latent oncolytic viruses or therapeutic genes which they can carry over substantial distances to the tumor cells.

________

Why and how gene therapy just got easier:

Some diseases, such as haemophilia and cystic fibrosis, are caused by broken genes. Doctors have long dreamed of treating them by adding working copies of these genes to cells in the relevant tissue (bone marrow and the epithelium of the lung respectively, in these two cases). This has proved hard. There have been a handful of qualified successes over the years, most recently involving attempts to restore vision to people with gene-related blindness. But this sort of gene therapy is likely to remain experimental and bespoke for a long time, as it is hard to get enough genes into enough cells in solid tissue to have a meaningful effect. Recently, though, new approaches have been devised. Some involve editing cells’ genes rather than trying to substitute them. Others create and insert novel genes—ones that do not exist in nature—and stick those into patients. Both of these techniques are being applied to cells from the immune system, which need merely to be injected into a patient’s bloodstream to work. They therefore look susceptible to being scaled up in a way that, say, inserting genes into retinal cells is not.

_

1. Gene editing:

Gene editing can be done in at least three ways.

A. One gene-editing technology is the CRISPR system, named for the “Clustered Regularly Interspaced Short Palindromic Repeats” that make its action possible. As the name suggests, these repeats are short, nearly palindromic DNA sequences; they are a defining characteristic of bacterial genomes, where they form part of an evolved defense that recognises and destroys unwanted viral DNA. To edit a gene, scientists deliver DNA sequences that code for the bacterial cutting enzyme, along with the healthy version of the gene of interest and a short RNA for targeting. The enzyme cuts the genome at the targeted site in or around the diseased gene, and from there the cell handles the repair on its own, replacing the defective sequence with the supplied healthy version. The whole process plays out using the cell’s own machinery. CRISPR-Cas9 editing employs modified versions of this natural antiviral defense found in bacteria, which recognises and cuts specific sequences of DNA bases (the “letters” of the genetic code). A paper published in Nature under lead author Josiane Garneau demonstrated how CRISPR functions as a defense mechanism against bacteriophages – the viruses that attack bacteria. CRISPR was first noticed as a peculiar pattern in bacterial DNA in the 1980s. A CRISPR sequence consists of a stretch of 20 to 50 non-coding base pairs that are nearly palindromic – reading the same forward and backward – followed by a “spacer” sequence of around 30 base pairs, followed by the same non-coding palindrome again, followed by a different spacer, and so on many times over. Researchers in the field of bacterial immunology realized that the spacers were in fact short sequences taken from the DNA of bacteriophages, and that bacteria can add new spacers when infected with new viruses, gaining immunity from those viral strains.
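The repeat-spacer-repeat structure of a CRISPR array can be modeled with a short Python sketch. The repeat and spacer sequences below are invented for illustration, not real CRISPR sequences:

```python
# Toy model of a CRISPR array: an identical short repeat alternates with
# unique "spacer" sequences captured from phage DNA. Splitting the array
# on the repeat recovers the spacers, the bacterium's record of past
# infections. Sequences here are invented for illustration only.

def extract_spacers(array: str, repeat: str) -> list[str]:
    """Split a CRISPR array (repeat + spacer + repeat + ... + repeat)
    into its spacer sequences."""
    return [part for part in array.split(repeat) if part]

repeat = "GTTTTAGAGC"                       # hypothetical repeat unit
spacers = ["ACGTACGTAC", "TTGCATGCAA"]      # "memories" of two past phages
array = repeat + spacers[0] + repeat + spacers[1] + repeat

print(extract_spacers(array, repeat))       # -> ['ACGTACGTAC', 'TTGCATGCAA']
```

Each recovered spacer is what the cell transcribes into a short RNA guide, the mechanism described next.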
What Garneau and her colleagues showed was the mechanism that made the system work: the spacers are transcribed into short RNA sequences, which a protein called Cas9 uses to find the same sequences in invading viruses and cut the viral DNA at the targeted site. That was a pretty interesting paper, because it showed that Cas9 will cut DNA, and that Cas9 uses short RNA sequences to find where to cut. Immediately, the system suggested a new method of gene editing: CRISPR-Cas9 complexes could be paired with RNA sequences that target any sites researchers were interested in cutting. In the fall of 2012, a team including Jennifer Doudna and Emmanuelle Charpentier went on to show that CRISPR’s natural guiding system, which features two distinct types of RNA, could be replaced with a single sequence of artificially produced guide RNA, or gRNA, without compromising its effectiveness. This opened up the possibility of rapid engineering, where only the gRNA sequence would have to be modified to target CRISPR to different areas of the genome. Finally, in January 2013, Zhang’s lab published a paper in Science that hit