Dr Rajiv Desai

An Educational Blog

IMITATION SCIENCE

_

IMITATION SCIENCE:

_

The figure below shows imitation jewelry that resembles real jewelry but costs only Rs. 788.

In the same way, imitation science resembles science but fares poorly on scientific methods:

There is no evidence that high-dose vitamins prevent cancer or cardiovascular disease or maintain good health in healthy (non-pregnant) adults. There is no evidence that calcium supplements prevent fractures in normal adults, including post-menopausal women. Yet billions of people take vitamin and calcium pills recommended by doctors and/or the media, driven by a pharmaceutical industry that displays purported scientific studies (imitation science) showing the benefits of vitamin and calcium supplements.

_______

Prologue:

There is a Calvin and Hobbes cartoon in which Calvin poses a question to Dad: “Why does ice float?” Dad responds: “Because it’s cold. Ice wants to get warm, so it goes to the top of liquids in order to be nearer to the Sun.” Bad science is commonly used to describe well-intentioned but incorrect, obsolete, incomplete, or over-simplified expositions of scientific ideas. An example is the statement that electrons revolve in orbits around the atomic nucleus, a picture that was discredited in the 1920s but is so much more vivid and easily grasped than the one that supplanted it that it shows no sign of dying out. I am coining a new term, “Imitation Science”. Imitation science is akin to imitation jewelry. Just as imitation jewelry looks like real jewelry but costs much less, imitation science looks like real science but fares poorly on scientific methods, gives fake explanations and results, and at best misleads people and at worst fools them, with numerous disastrous consequences.

Bad science is still science, but it is poorly done. Pseudoscience isn’t science at all, but it pretends to be. We all need to know how to differentiate between science, bad science, and pseudoscience. Astronomy is science; astrology is pseudoscience. Evolutionary biology is science; creationism is pseudoscience. How about cultural anthropology, abstract economics, string theory, and evolutionary psychology – science or pseudoscience? Is pseudoscience just politically incorrect science? Or is there an objective difference? Sometimes bad science is a product of mistakes; at other times it is a product of ideologues purposefully manipulating data. Bad science “starts” when bogus findings manage to break out of the laboratory via mass media and quickly become accepted as fact by the masses. How does this happen?

Most people agree that the Fukushima Daiichi nuclear power plant meltdown was a disaster. It is not illogical to assume that it has the potential to have a very negative impact on the environment and on the health of living organisms. Al Jazeera published an article titled “Fukushima: It’s much worse than you think”, claiming that scientific experts believed Japan’s nuclear disaster to be far worse than governments were revealing to the public. Naturally, people assumed that this article was reporting truth. It was tweeted 9,878 times and liked on Facebook 49,000 times. As a result, people were led to believe that infant mortality rates in the northwest of the United States had increased by 35% and that the government was covering it up. It wasn’t true. Michael Moyer at Scientific American easily and clearly demonstrated how these so-called scientists had used selective and manipulated data to conclude falsely that the Fukushima disaster was causing babies to die at alarming rates in the United States. But the damage was already done. There are likely more people who read and believed this bogus science reported by Al Jazeera and other media outlets, and who used it to validate and confirm their preexisting beliefs, than people who read Michael Moyer’s debunking article. Whether or not these beliefs are grounded in reality has now become a moot point. The point is that they are being reinforced by falsehood. This often causes an ideological stalemate: no side of a debate can make a legitimate claim to truth because no side has purged its argument of the bogus science used to support its claims.

Thus, elections, policy decisions and the like become less a contest of truth versus falsehood and more a contest of who can get their bad science to reach the widest audience and garner the largest following. Both sides of the debate become tainted, and truth becomes nearly impossible to find. The global implications are dangerous. Several major debates are raging right now in the political landscape: global warming, HIV control, stem-cell research, and Iran’s uranium enrichment program, to name a few. All of these debates have very definite science to back them up – and science is used on both sides of each debate. Much of that science is bad science, and it can be found on both sides. Somewhere out there, the truth is hidden. And to find the truth, I have to show the world what science is, what bad science is and what pseudoscience is. To overcome the jugglery of words like bad science, pseudoscience, junk science, fringe science, fake science and pathological science, I coin the new term “Imitation Science”, which includes all these terms.

_

_

Let me begin the discussion on imitation science by giving two examples:

Example 1:

Color of car and its path experiment:

Let us stand on the balcony of our flat overlooking a road where hundreds of cars pass every day. At the end of the road there is a traffic signal where some cars turn left, some turn right and some go straight. Let us observe a sample of 10,000 cars (a large sample). At the end of the study you find that 30 % of cars turned left, 30 % turned right and 40 % went straight. Of the 30 % that turned left, 70 % were red. Of the 30 % that turned right, 75 % were white, and of the 40 % that went straight, 80 % were black. We conclude that if you have a red car, you are more likely to turn left; if you have a white car, you are more likely to turn right; and if you have a black car, you will go straight. This is imitation science.

Why imitation science? I will explain.

First, there is no logic or reasoning to assume that the color of a car has anything to do with its path. However, if young people more commonly drove red cars and there happened to be a dance bar down the left turn, red cars would indeed be more likely to turn left; but such data are missing.

Second, the basic claim is easily falsified. Drive your own red car and turn right, and do it a hundred times; a hundred times it will turn right at your will.

Third, let this experiment be conducted by your friend on another day. The results will be remarkably different; the experiment cannot be replicated.

Fourth, out of 10,000 cars, 3,000 (30 %) turned left, and of these 3,000, 2,100 (70 %) were red. That means that out of 10,000 cars, 2,100 red cars turned left. We do not know how many red cars did not turn left. Let us assume that the number of red cars that did not turn left is A. Then the total number of red cars would be 2100 + A, and the chance (in per cent) of a red car turning left would be:

chance of turning left = [2100 / (2100 + A)] × 100

For example, if A = 900, the chance of a red car turning left is 2100/3000 = 70 %.

But since we do not know A (the number of red cars that did not turn left), our earlier conclusion was false: it was based on bad statistics. A missing variable coupled with bad statistics gives erroneous conclusions.
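To make the fallacy concrete, here is a minimal sketch in Python. The 2,100 figure comes from the example above; the values of A are hypothetical. It shows that the very same observation – 70 % of left-turning cars were red – is consistent with red cars being no likelier than any other car to turn left:

```python
# Hypothetical sketch of the car experiment: P(red | left) is known,
# but P(left | red) depends on A, the unknown number of red cars
# that did NOT turn left.

red_turned_left = 2100                 # red cars seen turning left
for A in (900, 2900, 4900):            # assumed values of the unknown A
    total_red = red_turned_left + A
    p_left_given_red = 100.0 * red_turned_left / total_red
    print(f"A = {A}: {p_left_given_red:.0f}% of red cars turned left")

# A = 900:  70% of red cars turned left
# A = 2900: 42% of red cars turned left
# A = 4900: 30% of red cars turned left (no better than the 30% base rate)
```

Without A, the 70 % figure tells us nothing about what a red car is likely to do.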

_

Example-2

Lung cancer and smoking:

Let us take a sample of two people – Mr. A, a smoker, and Mr. B, a non-smoker – both 40 years old and healthy, and let us follow them up for 10 years. After 10 years you find that A is healthy despite smoking daily, while B has lung cancer despite never smoking. Your conclusion would be that smoking has nothing to do with lung cancer and that, in fact, smokers live healthy lives. This is imitation science. Your facts are correct, but your methodology is wrong: the sample is too small. If you had done the same study with 100 people, 50 % smokers and 50 % non-smokers, the results would have been different. If you had done the same study with 1 million people, the results would have convincingly shown that smoking does cause lung cancer. Not all smokers develop lung cancer, and not all lung-cancer patients are smokers. But if you smoke, your chance of getting lung cancer is significantly higher than a non-smoker’s. This is the truth.
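A minimal simulation makes the sample-size point vivid. The risk figures below are hypothetical, chosen only for illustration:

```python
import random

# Hypothetical risks: 15% of smokers and 1% of non-smokers develop
# lung cancer over the follow-up. With tiny samples the observed
# result often contradicts the truth; with large samples it cannot.
random.seed(1)

def count_cancers(n, risk):
    return sum(random.random() < risk for _ in range(n))

for n in (1, 50, 500_000):                    # people per group
    smokers = count_cancers(n, 0.15)
    nonsmokers = count_cancers(n, 0.01)
    print(f"n = {n:>7}: {smokers} smokers vs {nonsmokers} non-smokers got cancer")
```

With n = 1 per group, outcomes like the Mr. A / Mr. B story above are common; with n = 500,000 per group, the excess cancers among smokers are unmistakable.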

_

If you understand and learn these two examples in letter and spirit, you know imitation science, and you also know how you are misled daily by imitation science masquerading as good science on TV, in newspapers, magazines, celebrity talks and advertisements, at social gatherings; and also by your doctors, dieticians, nutritionists, gymnasium peers, colleagues at the workplace and astrologers.

_____________

The growth of science:

We live in an age of science and technology. Science’s achievements touch all our lives through technologies like computers, jet planes, cell phones, the Internet and modern medicine. Our intellectual world has been transformed through an immense expansion of scientific knowledge, down into the most microscopic particles of matter and out into the vastness of space, with hundreds of billions of galaxies in an ever-expanding universe. The industrial applications of technological developments that have resulted from scientific research have been startling, to say the least. The statistics used to represent the tangible perquisites of this most powerful system stagger the imagination.

The historian of science Derek J. de Solla Price, in his book Little Science, Big Science, observed that “using any reasonable definition of a scientist, we can say that 80 to 90 percent of all the scientists that have ever lived are alive now. Alternatively, any young scientist, starting now and looking back at the end of his career upon a normal life span, will find that 80 to 90 percent of all scientific work achieved by the end of the period will have taken place before his very eyes, and that only 10 to 20 percent will antedate his experience” (1963, pp. 1-2). De Solla Price’s conclusions are well supported by evidence. There are now, for example, well over 100,000 scientific journals, producing over six million articles each year to be digested—clearly an impossible task. The Dewey Decimal Classification now lists well over 1,000 different classifications under the title of “Pure Science,” within each of which are dozens of specialty journals. As the number of individuals working in the field grows, so too does the amount of knowledge, creating more jobs, attracting more people, and so on. The membership growth curves of the American Mathematical Society (founded in 1888) and the Mathematical Association of America (founded in 1915) are dramatic demonstrations of this phenomenon. Regarding the accelerating rate at which individuals enter the sciences, in 1965 the Junior Minister of Science and Education in Great Britain observed: “For more than 200 years scientists everywhere were a significant minority of the population. In Britain today they outnumber the clergy and the officers of the armed forces.”

The rate of increase in transportation speed has also shown geometric progression, most of the change occurring in the last one percent of human history. Fernand Braudel tells us, for example, that “Napoleon moved no faster than Julius Caesar” (1979, p. 429). But in the last century the growth in the speed of transportation has been astronomical (figuratively and literally). One final example of technological progress based on scientific research will serve to drive the point home: timing devices in various forms—dials, watches, and clocks—have improved in their efficiency, and the decrease in error can be graphed over time. In virtually every field of human achievement associated with science and technology, the rate of progress matches that of the examples above. Reflecting on this rate of change, economist Kenneth Boulding observed (Hardison, 1988, p. 14): “As far as many statistical series related to activities of mankind are concerned, the date that divides human history into two equal parts is well within living memory. The world of today is as different from the world in which I was born as that world was from Julius Caesar’s. I was born in the middle of human history.”

________

Demarcation between science and imitation science [vide infra]:

Demarcations of science from imitation science can be made for both theoretical and practical reasons. From a theoretical point of view, the demarcation issue is an illuminating perspective that contributes to the philosophy of science in the same way that the study of fallacies contributes to the study of informal logic and rational argumentation. From a practical point of view, the distinction is important for decision guidance in both private and public life. Since science is our most reliable source of knowledge in a wide variety of areas, we need to distinguish scientific knowledge from its look-alikes. Due to the high status of science in present-day society, attempts to exaggerate the scientific status of various claims, teachings, and products are common enough to make the demarcation issue pressing in many areas.

_

Climate deniers are accused of practicing pseudoscience, as are intelligent design creationists, astrologers, UFOlogists, parapsychologists, practitioners of alternative medicine, and often anyone who strays far from the scientific mainstream. The boundary problem between science and pseudoscience is, in fact, notoriously fraught with definitional disagreements, because the categories are too broad and fuzzy on the edges, and the term “pseudoscience” is subject to adjectival abuse against any claim one happens to dislike for any reason. In his 2010 book Nonsense on Stilts, philosopher of science Massimo Pigliucci concedes that there is “no litmus test,” because “the boundaries separating science, nonscience, and pseudoscience are much fuzzier and more permeable than Popper (or, for that matter, most scientists) would have us believe.” I call creationism “pseudoscience” not because its proponents are doing bad science—they are not doing science at all—but because they threaten science education in America, they breach the wall separating church and state, and they confuse the public about the nature of evolutionary theory and how science is conducted. We can demarcate science from pseudoscience less by what science is and more by what scientists do. Science is a set of methods aimed at testing hypotheses and building theories. If a community of scientists actively adopts a new idea, and if that idea then spreads through the field and is incorporated into research that produces useful knowledge reflected in presentations, publications, and especially new lines of inquiry and research, chances are it is science. This demarcation criterion of usefulness has the advantage of being bottom-up instead of top-down, egalitarian instead of elitist, nondiscriminatory instead of prejudicial.

_

J. Robert Oppenheimer said it best when he wrote: “The scientist is free, and must be free to ask any question, to doubt any assertion, to seek any evidence, to correct any errors.” In the second century AD, Claudius Ptolemy provided evidence that the Earth was the center of our universe, with the Sun orbiting around it. That conclusion seemed valid because of the arguments supporting it; besides, no one needed to dispute an idea that seemed self-evident. Some 1,400 years later, Nicolaus Copernicus discovered that Ptolemy had to be wrong. He put together his evidence but held the results until late in his life, because they would not be easily accepted: he knew his data would challenge everything known about the planets, stars, and our place in the universe. Both Ptolemy and Copernicus were good scientists, because both were working with the information and the tools they had at the time and drew conclusions from the data available. Still, Ptolemy was wrong and Copernicus was right. Science is a rigorous discipline that uses peer review to ensure the quality of research. Anyone can make a scientific claim, but it is the other scientists in that discipline or field who will verify or reject the results. It is the process of research followed by peer review that eventually weeds out researchers who may (1) have had incorrect or misinterpreted results, (2) be incompetent, or (3) attempt to subvert science with fraudulent data for their own agenda. Unfortunately, it can take years for the review process to reveal whether a scientist’s research is valid.

_

Pseudoscience, fringe science, junk science and bad science:

An area of study or speculation that masquerades as science in an attempt to claim a legitimacy that it would not otherwise be able to achieve is sometimes referred to as pseudoscience, fringe science, or “alternative science”. Another term, junk science, is often used to describe scientific hypotheses or conclusions which, while perhaps legitimate in themselves, are believed to be used to support a position that is not legitimately justified by the totality of evidence. Physicist Richard Feynman coined the term “cargo cult science” for pursuits that have the formal trappings of science but lack “a principle of scientific thought that corresponds to a kind of utter honesty” that allows their results to be rigorously evaluated. Various types of commercial advertising, ranging from hype to fraud, may fall into these categories. There can also be an element of political or ideological bias on all sides of such debates. Sometimes, research may be characterized as “bad science”: research that is well-intentioned but incorrect, obsolete, incomplete, or an over-simplified exposition of scientific ideas.

Bad science and pseudoscience should not be confused with each other, however. While pseudoscience may also be bad science, most bad science is not generally considered pseudoscience. In fact, bad science is normal. Pseudoscience, on the other hand, is defined precisely by deviating from the norm of science. While that norm is certainly defined in part by methodological standards, it is also defined by social, cultural, and historical factors. “Normal science” is what “normal” scientists do (in “normal” laboratories, at “normal” universities, and backed by “normal” means of finance). Pseudoscience is what the cranks do, often in their spare time and with the backing of questionable coteries of interests. Bad science, being normal, has the legitimacy conferred by association with respected institutions. For this reason, bad science ought to be a much graver concern than pseudoscience. Especially to people who care about the state of science, and about the welfare of a modern society that is increasingly dependent on reliable information on crucial topics, bad science is without doubt the more dangerous of the two. How we as a society solve the energy crisis, stop global warming, cure cancer and Alzheimer’s, and feed 10 billion people will eventually be decided by the readers of Nature, not the Fortean Times. It is therefore supremely important that scientific journals are reliable and bad science is kept at bay. Among scientific skeptics and professional debunkers, too much time and effort has been wasted on pointing out the obvious holes, gaps, and inconsistencies in pseudoscience. Pseudoscience, then, is appropriately defined as research that does not avail itself of the scientific method; “bad” science is research that follows that method but, for whatever reason, follows it poorly, or that lies about or fabricates data.

_________

The philosophy is simple. For every intellectual activity, there is an intellectual counter-culture. For every good science, there is an imitation science.

_

_____________

Let me start with the definition of science and then define all the other ‘imitation sciences’:

_

Science:

Science (from Latin scientia, meaning “knowledge”) is best defined as a careful, disciplined, logical search for knowledge about any and all aspects of the universe, obtained by examination of the best available evidence and always subject to correction and improvement upon discovery of better evidence. Science is a search for basic truths about the Universe, a search which develops statements that appear to describe how the Universe works, but which are subject to correction, revision, adjustment, or even outright rejection, upon the presentation of better or conflicting evidence. Science is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe. In an older and closely related meaning, “science” also refers to a body of knowledge itself, of the type that can be rationally explained and reliably applied. A practitioner of science is known as a scientist. In the 17th and 18th centuries, scientists increasingly sought to formulate knowledge in terms of laws of nature such as Newton’s laws of motion. And over the course of the 19th century, the word “science” became increasingly associated with the scientific method itself, as a disciplined way to study the natural world, including physics, chemistry, geology and biology. Science is self-correcting, or so the old cliché goes. The gist is that one scientist’s error will eventually be righted by those who follow, building on the work. The process of self-correction can be slow – even decades long – but eventually the scientific method will right the canon.

_

Anecdotal report:

Anecdotal means based on personal accounts or casual observations rather than rigorous scientific analysis, and therefore not necessarily true or reliable. Every student of science must understand that an anecdotal report, information or study is not science. Anecdotal information propagated by the media is neither science nor truth. For example, you read in a newspaper that people who drink tea are less likely to suffer from heart attacks. This is not a lie; it is an anecdotal report. Some researcher in a study found that people who drink tea daily have a lower incidence of heart attack, as tea contains anti-oxidants, and anti-oxidants supposedly reduce inflammation of atheroma. It ought to be confirmed by double-blind, placebo-controlled, randomized trials on a much larger scale; then it will become science. You cannot start drinking tea on a regular basis on the strength of one anecdotal report. Newspapers publish such reports to increase their sales. You may still drink tea if you like it, but not to save your life.
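A small simulation hints at why a single observational report proves so little. All numbers here are hypothetical, chosen only for illustration: suppose tea has no effect at all, and everyone faces the same 10 % risk of a heart attack over the study period.

```python
import random

# Null-effect sketch: tea drinkers and non-drinkers share an identical
# 10% risk. In single studies of 200 people per group, chance alone
# frequently makes the tea group look healthier.
random.seed(7)

def heart_attacks(n, risk=0.10):
    return sum(random.random() < risk for _ in range(n))

trials = 10_000
looks_better = sum(heart_attacks(200) < heart_attacks(200) for _ in range(trials))
print(f"Tea group looked healthier in {100 * looks_better / trials:.0f}% of null studies")
```

Roughly four or five studies in ten will show tea drinkers doing “better” even though tea does nothing, which is exactly why large randomized trials, not one report, are needed before the claim becomes science.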

_

Non-Science:

Simply put, non-science refers to inquiry, academic work or disciplines that do not involve the process of empirical verification, or the scientific process, to generate their products or knowledge base. These are disciplines that do not adopt a systematic methodology based on evidence. Many of the greatest human achievements represent non-science. They are activities that do not purport to be scientific and are easily identified as non-scientific in their nature, including the arts, religion, and even philosophy. They usually involve creative processes and the acceptance of subjective knowledge, belief, revelation or faith in establishing their epistemological basis. The rejection of the principles of scientific inquiry is also frequently acknowledged in much postmodern thinking, which can likewise be categorized as non-science. Remember, imitation science and non-science are different: imitation science resembles or masquerades as science, while non-science has nothing to do with science, not even the pretense.

_

Faith:

Faith is non-science. Faith may be defined briefly as an illogical belief in the occurrence of the improbable. I would add irrational and highly delusional to the mix when faith requires one to accept magical violations of the well-known, well-tested or easily demonstrated laws of Nature. Science is progress and the future; faith is regression to the Dark Ages.

_

Anti-science:

Anti-science is a position that rejects science and the scientific method. People holding antiscientific views do not accept that science is an objective method, as it purports to be, or that it generates universal knowledge. They also contend that scientific reductionism in particular is an inherently limited means to reach understanding of the complex world we live in. Anti-science is a rejection of “the scientific model [or paradigm]… with its strong implication that only that which was quantifiable, or at any rate, measurable… was real”.   In this sense, it comprises a “critical attack upon the total claim of the new scientific method to dominate the entire field of human knowledge”. The term “anti-science” refers to persons or organizations that promote their ideology over scientifically-verified evidence, either by denying said evidence and/or inventing their own. Anti-science positions are maintained and promoted especially in areas of conflict between the politically- or religiously-motivated pseudo-scientific position and actual science. In addition, anti-science positions are normally couched in reassuring code words, such as “intelligent design”, in order to appear less distortive of science.   Anti-science proponents also criticize what they perceive as the unquestioned privilege, power and influence science seems to wield in society, industry and politics; they object to what they regard as an arrogant or closed-minded attitude amongst scientists. Common anti-science targets include evolution, global warming, and various forms of medicine, although other sciences that conflict with the anti-science ideology are often targeted as well. Religious anti-science philosophy considers science as an anti-spiritual and materialistic force that undermines traditional values, ethnic identity and accumulated historical wisdom in favor of reason and cosmopolitanism. The modern usage of the term should not be confused with the anti-science movement in the 1960s and 1970s, which was largely concerned with the possible dehumanizing aspects of uncontrolled scientific and technological advancement.  

_

Bad Science:

Bad science is simply scientific work that is carried out poorly or reaches erroneous results owing to fallacies in reasoning, in hypothesis generation and testing, or in the methods involved. Science, like any endeavor, can be carried out badly, and often with the noblest motives. Occasionally scientists deliberately mislead for personal gain, but in most cases bad science results from errors in the scientific process. Sometimes these errors are the result of poor practices, or of researchers unconsciously imposing their beliefs (looking for the answer they believe in). Science is an extraordinary process but, as we have seen, it is not perfect; it has one notable flaw: it is carried out by people, who are themselves unavoidably influenced by their own beliefs, who are sometimes insufficiently trained, and who make mistakes.

_

Quasi-science:

Quasi-science is a term sometimes encountered, and it is difficult to separate from pseudoscience. We may consider that quasi-science resembles science, having some of the form but not all of the features of scientific inquiry. Quasi-science involves an attempt to use a scientific approach where the development of a scientific theoretical basis or the application of scientific methodologies is insufficient for the work to count as an established science. Differentiating quasi-science from pseudoscience or bad science is complex and there is certainly some overlap, as some quasi-science falls into the realm of pseudoscience. But generally, quasi-science can be considered work involving commonly held beliefs in popular science that do not meet the rigorous criteria of scientific work. This is often seen with “pop” science that may blur the divide between science and pseudoscience among the general public, and may also be seen in much science fiction. For example, ideas about time travel, immortality, aliens and sentient machines are frequently discussed in the media, although there is insufficient empirical basis for much of this to be regarded as scientific knowledge at this time. Quasi-science does not normally reject science or purport to be a new or alternative science, and it may be developed, with the application of rigorous scientific methods, into scientific work.

_

Pseudoscience:

Pseudoscience is a belief or process which masquerades as science in an attempt to claim a legitimacy which it would not otherwise be able to achieve on its own terms. The most important of its defects is usually the lack of the carefully controlled and thoughtfully interpreted experiments which provide the foundation of the natural sciences and which contribute to their advancement. Michael Shermer defined pseudoscience as “claims presented so that they appear [to be] scientific even though they lack supporting evidence and plausibility”. In contrast, science is “a set of methods designed to describe and interpret observed and inferred phenomena, past or present, and aimed at building a testable body of knowledge open to rejection or confirmation” (Shermer 1997, p. 17). The Merriam-Webster Dictionary defines pseudoscience as a system of theories, assumptions, and methods erroneously regarded as scientific.

_

Parascience:

Parascience (Spirituality, New Age, Astrology & Self-help / Alternative Belief Systems) is the study of subjects that are outside the scope of traditional science because they cannot be explained by accepted scientific theory or tested by conventional scientific methods. Pseudoscience is research/theorizing that denies at least one of ‘normal’ science’s axioms or supports the existence of ‘things’ that ‘normal’ science denies, if that research is conducted outside of ‘normal’ channels. It’s preferable to see pseudoscience and science as an epistemological spectrum. At one end, there is an ideal of trying to only gain or lose confidence in an idea under the influence of values that tend to inform a scientific methodology (appropriate use of logic, falsification, blinding, rigour, repetition etc.). At the other there are influences of culture, tribalism, resources, esteem and so forth, that increase or decrease that confidence in an idea for irrational reasons.

_

Fringe science:

There are differing definitions of fringe science. By one definition it is valid but not mainstream science, whilst by another, broader definition it is generally viewed negatively as being non-scientific. Fringe science is scientific inquiry in an established field of study that departs significantly from mainstream or orthodox theories, and it is classified in the “fringes” of a credible mainstream academic discipline. Fringe science covers everything from novel hypotheses that can be tested via the scientific method to wild ad hoc theories and “New Age mumbo jumbo”, with the dominance of the latter resulting in the tendency to dismiss all fringe science as the domain of pseudoscientists, hobbyists, or quacks. Other terms used for the portions of fringe science that lack scientific integrity are pathological science, voodoo science, and cargo cult science. Michael W. Friedlander suggests some guidelines for responding to fringe science, which he argues is a more difficult problem to handle, “at least procedurally,” than scientific misconduct. His suggested methods include impeccable accuracy, checking cited sources, not overstating orthodox science, thorough understanding of the Wegener continental drift example, examples of orthodox science investigating radical proposals, and prepared examples of errors from fringe scientists. A particular concept that was once accepted by the mainstream scientific community can become fringe science because of a later evaluation of previously supportive research. For example, the idea that focal infections of the tonsils or teeth were a primary cause of systemic disease was once considered medical fact but is now dismissed for lack of evidence. Conversely, fringe science can include novel proposals and interpretations that initially have only a few supporters and much opposition. Some theories that were developed on the fringes (for example, continental drift, the existence of Troy, heliocentrism, the Norse colonization of the Americas, and the Big Bang theory) have become mainstream because of the discovery of supportive evidence.

_

Junk science:

Junk science is a term typically used in the political arena to describe ideas for which proponents erroneously, dubiously or even fraudulently claim scientific backing, often for political reasons. In the United States, junk science is any scientific data, research, or analysis considered to be spurious or fraudulent. The concept is often invoked in political and legal contexts where facts and scientific results carry great weight in making a determination. It usually conveys the pejorative connotation that the research has been untowardly driven by political, ideological, financial, or otherwise unscientific motives. Junk science is the promotion of a finding as “scientific” or “unscientific” based mainly upon whether its conclusions support the answers (or views) favored by its promoters. Junk science consists of giving poorly done scientific work the same authority as work that conforms to the scientific method. It is akin to politicized science, i.e., the selective use of scientific evidence to reach predetermined conclusions and support extra-scientific political goals. The concept was first invoked in relation to expert testimony in civil litigation. More recently, invoking the concept has been a tactic to criticize research on the harmful environmental or public health effects of corporate activities, and occasionally a response to such criticism. In these contexts, junk science is counterposed to the “sound science” or “solid science” that favors one’s own point of view. In some cases, junk science may result from a misinterpretation of previous sound scientific studies.

_

Pathological science:

Pathological science refers to science involving barely detectable phenomena that are nevertheless reported as being carefully studied – interestingly, a somewhat circular definition. Irving Langmuir coined the term. While he won the 1932 Nobel Prize in Chemistry and probably ran into a good deal of bad science, he is also a darling of the pseudoskeptics. Langmuir’s symptoms of pathological science are:

  1. The maximum effect that is observed is produced by a causative agent of barely detectable intensity, and the magnitude of the effect is substantially independent of the intensity of the cause.
  2. The effect is of a magnitude that remains close to the limit of detectability; or, many measurements are necessary because of the very low statistical significance of the results.
  3. Claims of great accuracy.
  4. Fantastic theories contrary to experience.
  5. Criticisms are met by ad hoc excuses thought up on the spur of the moment.
  6. Ratio of supporters to critics rises up to somewhere near 50% and then falls gradually to oblivion.

_

N-rays as pathological science:

Langmuir discussed N-rays as an example of pathological science, one that is universally regarded as pathological. The discoverer, René-Prosper Blondlot, was working on X-rays (as were many physicists of the era) and noticed a new visible radiation that could penetrate aluminium. He devised experiments in which a barely visible object was illuminated by these N-rays and thus became considerably “more visible”. After a time another physicist, Robert W. Wood, decided to visit Blondlot’s lab, where Blondlot had since moved on to the physical characterization of N-rays. The experiment passed the rays from a 2 mm slit through an aluminium prism, from which Blondlot was measuring the index of refraction to a precision that required measurements accurate to within 0.01 mm. Wood asked how it was possible to measure something to 0.01 mm from a 2 mm source, a physical impossibility in the propagation of any kind of wave. Blondlot replied, “That’s one of the fascinating things about the N-rays. They don’t follow the ordinary laws of science that you ordinarily think of.” Wood then asked to see the experiments being run as usual, which took place in a room required to be very dark so the target was barely visible. Blondlot repeated his most recent experiments and got the same results—despite the fact that Wood had reached over and covertly removed the prism.

_

Pseudomathematics:

Pseudomathematics is a form of mathematics-like activity undertaken by many non-mathematicians – and occasionally by mathematicians themselves. The efforts of pseudomathematicians divide into three categories:

  • attempting apparently simple classical problems long proved impossible by mainstream mathematics; trying metaphorically or (quite often) literally to square the circle
  • generating whole new theories of mathematics or logic from scratch
  • attempting hard problems in mathematics (the Goldbach conjecture comes to mind) using only high-school mathematical knowledge  

____________

Scientific literacy:

Scientific literacy is defined as knowing basic facts and concepts about science and having an understanding of how science works. It is important to have some knowledge of basic scientific facts, concepts, and vocabulary. Those who possess such knowledge are able to follow science news and participate in public discourse on science-related issues. Having an appreciation for the scientific process may be even more important. Knowing how science works, i.e., understanding how ideas are investigated and either accepted or rejected, is valuable not only in keeping up with important science-related issues and participating meaningfully in the political process, but also in evaluating and assessing the validity of the various types of claims people encounter on a daily basis (including those that are pseudoscientific) (Maienschein 1999). Surveys conducted in the United States and other countries reveal that most citizens do not have a firm grasp of basic scientific facts and concepts, nor do they have an understanding of the scientific process. In addition, belief in pseudoscience seems to be widespread, not only in the United States but in other countries as well. A substantial number of people throughout the world appear to be unable to answer simple, science-related questions; many did not know the correct answers to several (mostly) true/false questions designed to test their basic knowledge of science.

_

A recent study of 20 years of survey data collected by NSF concluded that “many Americans accept pseudoscientific beliefs,” such as astrology, lucky numbers, the existence of unidentified flying objects (UFOs), extrasensory perception (ESP), and magnetic therapy (Losh et al. 2003). Such beliefs indicate a lack of understanding of how science works and how evidence is investigated and subsequently determined to be either valid or not. Scientists, educators, and others are concerned that people have not acquired the critical thinking skills they need to distinguish fact from fiction. The science community and those whose job it is to communicate information about science to the public have been particularly concerned about the public’s susceptibility to unproven claims that could adversely affect their health, safety, and pocketbooks (NIST 2002).

_

In 1998, Rüdiger C. Laugksch, in his essay Scientific Literacy, incorporated a broad definition of scientific literacy consisting of several dimensions:

  1. The scientifically literate person understands the nature of scientific knowledge.
  2. The scientifically literate person accurately applies appropriate science concepts, principles, laws and theories in interacting with his universe.
  3. The scientifically literate person uses processes of science in solving problems, making decisions and furthering his own understanding of the universe.
  4. The scientifically literate person interacts with the various aspects of his universe in a way that is consistent with the values that underlie science.
  5. The scientifically literate person understands and appreciates the joint enterprises of science and technology and the interrelationship of these with each other and with other aspects of society.
  6. The scientifically literate person has developed a richer, more satisfying, and more exciting view of the universe.

_

Scientific literacy can be defined in several ways. One way includes the following elements: 

1. The ability to think critically; 

2. The ability to use evidential reasoning to draw conclusions; and

3. The ability to evaluate scientific authority.

It does not necessarily mean the accumulation of scientific facts, although a certain basic factual knowledge seems imperative. Critical thinking can be learned by almost anyone. Although it is important in scientific reasoning, it is even more important in our daily lives, for a clear understanding of our surroundings and problems makes life enjoyable and safer. All people make assumptions, all hold biases based on previous experience, and all have emotions. These interfere with our interpretations of the world around us and with how we solve our problems, and they must be set aside in order to evaluate information and problems. As a matter of self-defense against those who would victimize us with hoaxes, frauds and flaky schemes, or do us physical harm, critical thinking is essential.

_

The Organization for Economic Co-operation and Development’s (OECD) Programme for International Student Assessment (PISA) defines scientific literacy as “the capacity to use scientific knowledge, to identify questions and to draw evidence-based conclusions in order to understand and help make decisions about the natural world and the changes made to it through human activity.”

_

Ninety-five percent of Americans are scientifically illiterate, according to a worried Carl Sagan (1996). That means that about 197 million people in the United States cannot understand how science works, what the process of evidential reasoning is, or whose opinions to trust. Only about 10 million people can; most of these are professional scientists, engineers or technicians. Scientific illiteracy contributes to an anti-science mentality that threatens our very existence (Ehrlich and Ehrlich, 1996). This “prescription for disaster” may well have everlasting consequences for the world. Science is ubiquitous in the world, and many of our decisions, public and personal, depend on an understanding of it. We cannot tolerate widespread scientific illiteracy. Scientific illiteracy is a world-wide problem. The data are not available for world-wide scientific illiteracy, but if the American rate of about 95% is taken as a minimum for the rest of the world, then we have billions of people unable to understand a large portion of what affects them. If science is important to America, it is just as important, if not more so, to much of the rest of the world. Population growth, environmental deterioration, biodiversity decline, greenhouse effects, health problems, food and air quality, natural hazards, and a multitude of other scientific problems face these countries in larger measure than the United States. The consequences for them are significantly greater than for America. Pesticides, toxic chemicals, tobacco, and false health remedies are foisted on the rest of the world in larger amounts than on the U.S. because of less stringent rules and regulations and less understanding by their populaces. With a knowledgeable population, countries around the world could deal with these problems more effectively. Many of the problems naturally cross political boundaries and so threaten even countries with some scientific literacy, if not the entire world. The table below shows the number of people (in millions) over the age of 15 in the world who will be scientifically illiterate or literate, based on mid-year population projections (McDevitt, 1996) and assuming an illiteracy rate identical to that estimated for the United States. This percentage is undoubtedly too low and may increase through time, because the increasing populations of most countries will tax their educational systems even more than they do now.

Year | Total world population (millions) | Scientifically illiterate (95%) over age 15 (millions) | Scientifically literate (5%) over age 15 (millions)
1998 | 5,771 | 3,700+ | 199+
2000 | 6,090 | 4,000+ | 213+
2010 | 6,861 | 4,700+ | 250+
2020 | 7,599 | 5,400+ | 285+
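A short sketch of the arithmetic behind the table. The 0.70 over-age-15 share is my own assumption for illustration; the published table presumably uses year-specific age structures, so its figures differ slightly:

```python
# Reproduce the table's arithmetic: 95% / 5% of the over-15 population,
# from McDevitt (1996) total-population projections in millions.
populations = {1998: 5771, 2000: 6090, 2010: 6861, 2020: 7599}
OVER_15_SHARE = 0.70   # assumed fraction of people over age 15

for year, total in populations.items():
    over_15 = total * OVER_15_SHARE
    illiterate = 0.95 * over_15
    literate = 0.05 * over_15
    print(f"{year}: illiterate ~{illiterate:,.0f} M, literate ~{literate:,.0f} M")
```

For 2000, this gives roughly 4,050 million illiterate and 213 million literate, matching the table’s 4,000+ and 213+ entries.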

_

Students who are scientifically literate:

• Know and understand the scientific concepts and processes required for participation in society

• Ask, find, or determine answers to questions derived from curiosity about their world

• Describe, explain, and predict natural phenomena

• Read with understanding science articles in the popular press and engage in social conversation about the validity of the conclusions

• Identify scientific issues underlying national and local decisions

• Express positions that are scientifically and technologically informed

• Evaluate the quality of scientific information on the basis of its source and the methods used to generate it

• Pose and evaluate arguments based on evidence and apply conclusions from such arguments appropriately

_

The extent to which students acquire a range of social and cognitive thinking skills related to the proper use of science and technology determines whether they are scientifically literate. Education in the sciences takes on new dimensions with the changing landscape of science and technology, a fast-changing culture, and a knowledge-driven era. The school science curriculum must be reinvented to prepare students to contend with science’s changing influence on human welfare. Scientific literacy, which allows a person to distinguish science from pseudoscience such as astrology, is among the attributes that enable students to adapt to the changing world. Its characteristics are embedded in a curriculum where students are engaged in resolving problems, conducting investigations, or developing projects.

_

At a symposium on “Science Literacy and Pseudoscience” it was revealed that people in the U.S. know more about basic science today than they did two decades ago – good news that researchers say is tempered by an unsettling growth in belief in pseudoscience such as astrology and visits by extraterrestrial aliens. So science literacy is clearly increasing (from 10 to 28% according to one measure), but at the same time pseudoscientific beliefs are also increasing. It strikes me that this may be a problem for educators: they might be teaching students (and thus the public) scientific facts but not teaching them how to think scientifically.

_

The relationship between child’s play and scientific exploration:

Laura Schulz finds that babies learning about the world have much in common with scientists. Schulz, an associate professor of brain and cognitive sciences at MIT, has always been interested in learning and education, and she has devoted her academic career to investigating how learning takes place during early childhood. Starting in infancy, children are quickly able to learn a great deal about how the world works, based on a very limited amount of evidence. Schulz’s research, much of which she does at a “Play Lab” at Boston Children’s Museum, reveals that children, and even babies, inherently use many of the same strategies employed in the scientific method — a systematic process of forming hypotheses and testing them based on observed evidence. “All of these abilities that we think of as scientific abilities emerged because of the hardest problem of early childhood learning, which is how to get accurate abstract representations from sparse, noisy data,” she says.

Schulz became interested not only in how children learn from observed evidence, but also in how they generate evidence through exploration. She has found that many of the components of the scientific method — isolating variables, recognizing when evidence is confounded, positing unobserved variables to explain novel events — are in fact core to children’s early cognition. In one recent study, she investigated infants’ abilities to determine, from very sparse evidence, the properties of sets of objects. In the study, babies watched as an experimenter pulled a series of three balls, all blue, from a box of balls. Each ball squeaked when the experimenter squeezed it. The babies were then handed a yellow ball from the box. When most of the balls in the box were blue, the babies squeezed the yellow ball, suggesting that they generalized the squeaking to the entire box of balls. However, if the balls in the box were mostly yellow, the babies were much less likely to try to squeeze them, showing they believed that the blue, squeaking balls were a rare exception.

Another recent experiment explored the value of offering lessons versus allowing children to explore on their own. In that study, Schulz found that children who were shown how to make a toy squeak were less likely to discover the toy’s other features than children who were simply given the toy with no instruction. “There’s a tradeoff of instruction versus exploration,” she says. “If I instruct you more, you will explore less, because you assume that if other things were true, I would have demonstrated them.” Although Schulz hopes that someday her work will lead to the development of new education strategies, that is more of a long-term goal.

_

If every child employs the scientific method for learning, why is scientific literacy so poor in the population? Obviously, the natural scientific instinct is suppressed by parents, schools, religion, culture, peers, media and neighborhood. Schools coerce children to memorize rather than understand; religion coerces children to follow faith without questioning; parents coerce children to follow their viewpoint of life without questioning; and so on and so forth.

_________

Coincidence, miracle and science:

I quote from my article ‘The Coincidence’, posted on my Facebook page in 2010:

The English dictionary defines a coincidence as a sequence of events that actually occurs accidentally but seems to occur as planned or arranged. I define a coincidence as the occurrence of two or more events together when there is no logical reasoning for their togetherness. For example, you are speeding your car toward a traffic signal when suddenly the green light becomes red and you apply the brake. Your car’s movement and the change in the signal are a coincidence. Coincidence is the basis of all miracles and of the development of real science. I will explain. Various god-men and god-women perform miracles; for example, a sick devotee becomes healthy after the touch of a god-man. Actually, the devotee’s illness may be psychosomatic or self-limiting, but the credit goes to the god-man. It is a coincidence and not a miracle. Isaac Newton saw an apple falling from a tree and enunciated the law of gravity. People had seen apples falling from trees for thousands of years and considered it a coincidence. Newton asked why the apple falls down rather than going up to the sky. When a coincidence becomes repeated and logical, it becomes a scientific law. The distinction between correlation and causality will be discussed later on.

_________

Good science:

“Good science” is usually described as dependent upon qualities such as falsifiable hypotheses, replication, verification, peer-review and publication, general acceptance, consensus, collectivism, universalism, organized skepticism, neutrality, experiment/ empiricism, objectivity, dispassionate observation, naturalistic explanation, and use of the scientific method. A good example of this tendency is provided by Loevinger: “While there are innumerable specialized fields in science today, and while knowledge in one field does not necessarily transfer to another field, there are, nevertheless, general standards applicable to all fields of science that distinguish genuine science from pseudo-science and quack science.” Scientists are in possession of a simple, identifiable, universal scientific method which guides activity and can be employed in practical contexts to distinguish “good science” from “junk science.”   

_

The norms of science are seen as prescribing that scientists should be detached, uncommitted, impersonal, self-critical, and open-minded in their attempts to gather and interpret objective evidence about the natural world. It is assumed that considerable conformity to these norms is maintained; and the institutionalization of these norms is seen as accounting for that rapid accumulation of reliable knowledge which has been the unique achievement of the modern scientific community.

_

James Lett (in Ruscio) described six characteristics of scientific reasoning. These are falsifiability, logic, comprehensiveness, honesty, replicability, and sufficiency. Falsifiability is the ability to disprove a hypothesis. Logic dictates that the premise must be sound and that the conclusion must follow validly from the premise. Comprehensiveness must account for all the pertinent data, not just some of it. Honesty means that any and all claims must be truthful and not be deceptive. Replicability is the idea that similar results can be obtained by other researchers in other labs using similar methods. For this to have any meaning, the methods must also be transparent. In other words, the method used to obtain the results must be described in detail. Finally, sufficiency means that all claims must be backed by sufficient evidence. Any study that does not meet all of these criteria is not scientifically sound. 

_

Real science is hard. You don’t get to be an expert in biochemistry, astrophysics, immunology, multivariate mathematics, or any other natural science just by surfing the internet for a few hours or days. Real science is sitting in dozens of classes over years, absorbing and understanding the decades of research that preceded you. It means learning how to be critical, not for criticism’s sake, but to find a new idea that might blossom into the next big thing in science. It means spending years of your life studying a small idea. It means being smarter than almost anyone else you know. It means writing better than any of your friends or pals. It means late nights and early mornings. It means standing up to the criticism of your peers and of the leaders in your field. And if you put in the really hard work that it takes to be a scientific expert, then your ideas will be given due weight. And maybe you’ll change things.

___________

Scientific method:

The real purpose of the scientific method is to make sure nature hasn’t misled you into thinking you know something you actually don’t know.

Robert Pirsig

 _

The scientific method originated from advances in the sixteenth century. Early thinkers such as Copernicus, Paracelsus and Vesalius departed from dogma and led us to the Renaissance and the Scientific Revolution. Sir Francis Bacon in 1620 popularized and promoted what later became the scientific method. A hypothesis that survives repeated testing may be elevated to a theory; consistently confirmed regularities may be codified as laws. Under the scientific method, the burden of proof is on the party advancing the hypothesis, not on the reader. In fact, “proving the negative” is a logical impossibility.

_

The scientific method is a body of techniques for investigating phenomena, acquiring new knowledge, or correcting and integrating previous knowledge. To be termed scientific, a method of inquiry must be based on empirical and measurable evidence subject to specific principles of reasoning. Science is basically the organized inquiry into the nature of reality. In its simplest form, it is observation of nature leading to hypotheses describing what is observed. This in turn leads to predictions about the behavior of what has been observed. For science to be practiced, these predictions must be testable, and the results of those tests must be used to modify the hypotheses so that they can better describe what was observed in nature.

Science is all about raising questions and doing experiments to figure out the answers; it is not just information and definitions. Following the scientific method, we observe, raise questions, and make hypotheses about how and why things are the way they are observed to be. We make theories to explain the world around us. We make predictions based on our theories. We design experiments to check the validity of these hypotheses. After carrying out the experiments we analyze the results and draw conclusions. In the process of doing the experiments, new questions and problems arise. We may have to modify our hypotheses and theories. We then design new experiments in the continuing process of scientific enquiry. It is this process of discovery that makes science exciting and meaningful.

The scientific method is not just a highly sophisticated process that can only be undertaken after years of formal study, foreign degrees, or post-doctoral experience. Look at the way young children learn and you can see that they often follow a scientific method, although usually without being fully aware of the process they are following. It does not matter to the scientific method what the answer turns out to be. You cannot dictate your preferred answers to the scientific method; that is dishonest and manipulative. You must start with the question, make a hypothesis and then end up with a convincing answer. You cannot start with the answer first! Nor can you object to the facts and observations that the scientific method started from.
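The observe-predict-test-modify loop just described can be captured in a few lines of code. The sketch below is illustrative only; the function names, tolerance and free-fall data are my own assumptions, not any standard API:

```python
# Toy version of the scientific-method loop: a hypothesis is retained
# only while its predictions keep matching new observations.

def evaluate(predict, observations, tolerance):
    """Return a verdict on a hypothesis given (input, observed) pairs."""
    for x, observed in observations:
        if abs(predict(x) - observed) > tolerance:
            return "prediction failed: modify or abandon the hypothesis"
    return "predictions confirmed: retain (conditionally correct)"

# Hypothesis: distance fallen in t seconds is 4.9 * t**2 (constant gravity).
fall = lambda t: 4.9 * t * t
data = [(1.0, 4.88), (2.0, 19.65), (3.0, 44.05)]   # (seconds, measured meters)

print(evaluate(fall, data, tolerance=0.2))                  # retained
print(evaluate(fall, data + [(4.0, 60.0)], tolerance=0.2))  # fails on new data
```

A hypothesis that made no testable predictions could never enter this loop at all, which is the sense in which it would be useless.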

_

The chief characteristic which distinguishes the scientific method from other methods of acquiring knowledge is that scientists seek to let reality speak for itself, supporting a theory when its predictions are confirmed and challenging it when those predictions prove false. Although procedures vary from one field of inquiry to another, identifiable features distinguish scientific inquiry from other methods of obtaining knowledge. Scientific researchers propose hypotheses as explanations of phenomena, and design experimental studies to test these hypotheses via predictions which can be derived from them. These steps must be repeatable, to guard against mistake or confusion in any particular experimenter. Theories that encompass wider domains of inquiry may bind many independently derived hypotheses together in a coherent, supportive structure. Theories, in turn, may help form new hypotheses or place groups of hypotheses into context. Scientific inquiry is generally intended to be as objective as possible in order to reduce biased interpretations of results. Another basic expectation is to document, archive and share all data and methodology so they are available for careful scrutiny by other scientists, giving them the opportunity to verify results by attempting to reproduce them. This practice, called full disclosure, also allows statistical measures of the reliability of these data to be established (when data are sampled or compared to chance).

_

Science begins with observation. A hypothesis is devised to explain observations. The usefulness of a hypothesis is directly related to its ability to make testable predictions. If a hypothesis is consistent with existing observations and makes useful predictions that are confirmed by additional observation, it may be considered conditionally correct. If it does not match observations, then it must be either modified or abandoned. If it makes no useful predictions, then it must be abandoned as well, because it is useless and has no explanatory power. This is of course a description of science in an ideal world. In the real world, science is done by human beings. Humans often have ulterior motives, be it a theological agenda, a political agenda, or just plain old-fashioned egotism. These agendas can often skew scientific observations and hypotheses. Observations may also be faulty due to simple incompetence. This is why science requires a second basic foundation, in addition to observation. This foundation is called peer review. Any scientific observation or hypothesis that has not been subjected to peer review should be considered incomplete. Of course, even with peer review, there is often a lot of room for debate on many questions. This is particularly true when the subject is complex and the available observational evidence limited. In the case of global warming, for instance, the fact that the trend in Earth’s climate has been upward for the last century or so is pretty much beyond dispute. The question of why is still open to a lot of legitimate debate. Climate is a very complicated subject, and distinguishing between a significant trend and normal cycles is not easy given that climate cycles may extend considerably farther in time than reliable observations. Such debates can leave those of us not directly involved in science (or even those involved in science when the question relates to a field outside their area of expertise) scratching our heads with no clue as to who is correct. There is no certain cure for this problem but there are some things you can do to ameliorate the situation. First, educate yourself as much as possible on those subjects that interest you. The more you know, the better you will be able to sort the wheat from the chaff. Second, carefully examine the motives of proponents of any non-mainstream hypothesis. Are they trying to explain observations not adequately explained by existing theories, or are they pushing an unrelated agenda? Next, apply the “evidence” test. Do they have evidence to support their theory, or is their evidence primarily just an attack on evidence used to support another theory? Remember, disproof of an alternative theory is not equivalent to proof of your own theory. As an exercise, visit any young earth creationist web site. How much is evidence for a young earth and how much is just attacks on evolution and other branches of mainstream theory? Be sure to check for unverifiable assertions (Darwin renounced evolution on his death bed) and logical fallacies: ad hominems (evolutionists are godless atheists), slippery slope arguments (evolution theory leads to Marxism and Nazism), arguments from authority (Isaac Newton believed in Creation), etc. If it passes the “motive” test and the “evidence” test, then it is time to apply the “usefulness” test. Does it make testable predictions? Can it be falsified?
As an exercise, ask Dunash what testable predictions Geocentrism makes and what possible observations would convince him that Geocentrism is wrong. Finally, when the “new” or “alternative” theory challenges the mainstream theory, remember that the burden of proof always lies with the upstart. Mainstream theories become mainstream because they survive years or even centuries of close scrutiny and testing. They have proven their worth. In science, all theories are provisional, but some are more provisional than others.   

_

Scientific Method Steps in a nutshell:

Science follows certain rules and guidelines based upon the scientific method which says that you must:

Observe the phenomenon

Ask a question

Do some background research

Construct a hypothesis

Test your hypothesis by doing an experiment

Analyze the data from the experiment and draw a conclusion regarding the hypothesis that you tested

If your experiment shows that your hypothesis is false, think some more, revise your hypothesis, and do further research.

If your experiment shows that your hypothesis is true, communicate your results (a small code sketch of this loop follows below).
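To make the loop concrete, here is a minimal, runnable Python sketch (my own illustration, not part of any formal statement of the method). The “experiment” is a deliberately trivial stand-in: candidate hypotheses about a hidden rule are checked against observations, a falsified hypothesis is sent back for further research, and a surviving one is communicated.

# A toy sketch of the observe-hypothesize-test-iterate loop above.
# The hypotheses, observations and "experiment" are invented placeholders.
def run_experiment(hypothesis, observations):
    """The experiment passes if every observation matches the hypothesis."""
    return all(hypothesis(x) == outcome for x, outcome in observations)

# Observations: pairs of (number, whether the hidden rule accepts it).
observations = [(2, True), (4, True), (7, False)]

candidate_hypotheses = [
    ("all numbers greater than 1", lambda n: n > 1),
    ("all even numbers", lambda n: n % 2 == 0),
]

for name, hypothesis in candidate_hypotheses:
    if run_experiment(hypothesis, observations):
        print(f"'{name}' survives testing: communicate the result.")
        break
    print(f"'{name}' is falsified: back to further research.")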

_

There is no one way to “do” science; different sources describe the steps of the scientific method in different ways. Fundamentally, however, they incorporate the same concepts and principles.  

Step 1: Make an observation:

Almost all scientific inquiry begins with an observation that piques curiosity or raises a question. For example, when Charles Darwin (1809-1882) visited the Galapagos Islands (located in the Pacific Ocean, 950 kilometers west of Ecuador), he observed several species of finches, each uniquely adapted to a very specific habitat. In particular, the beaks of the finches were quite variable and seemed to play important roles in how the birds obtained food. These birds captivated Darwin. He wanted to understand the forces that allowed so many different varieties of finch to coexist successfully in such a small geographic area. His observations caused him to wonder, and his wonderment led him to ask a question that could be tested.

Step 2: Ask a question:

The purpose of the question is to narrow the focus of the inquiry, to identify the problem in specific terms. The question Darwin might have asked after seeing so many different finches was something like this: What caused the diversification of finches on the Galapagos Islands?

Here are some other scientific questions:

What causes the roots of a plant to grow downward and the stem to grow upward?

What brand of mouthwash kills the most germs?

Which car body shape reduces air resistance most effectively?

What causes coral bleaching?

Does green tea reduce the effects of oxidation?

What type of building material absorbs the most sound?

Coming up with scientific questions isn’t difficult and doesn’t require training as a scientist. If you’ve ever been curious about something, if you’ve ever wanted to know what caused something to happen, then you’ve probably already asked a question that could launch a scientific investigation.

Step 3: Formulate a hypothesis:

The great thing about a question is that it yearns for an answer, and the next step in the scientific method is to suggest a possible answer in the form of a hypothesis. A hypothesis is often defined as an educated guess because it is almost always informed by what you already know about a topic. For example, if you wanted to study the air-resistance problem stated above, you might already have an intuitive sense that a car shaped like a bird would reduce air resistance more effectively than a car shaped like a box. You could use that intuition to help formulate your hypothesis. Generally, a hypothesis is stated as an “if … then” statement. In making such a statement, scientists engage in deductive reasoning, which is the opposite of inductive reasoning. Deduction requires movement in logic from the general to the specific. Here’s an example: If a car’s body profile is related to the amount of air resistance it produces (general statement), then a car designed like the body of a bird will be more aerodynamic and reduce air resistance more than a car designed like a box (specific statement). Notice that there are two important qualities about a hypothesis expressed as an “if … then” statement. First, it is testable; an experiment could be set up to test the validity of the statement. Second, it is falsifiable; an experiment could be devised that might reveal that such an idea is not true. If these two qualities are not met, then the question being asked cannot be addressed using the scientific method.

_

According to Schick and Vaughn, researchers weighing up alternative hypotheses may take into consideration:

Testability — compare falsifiability

Parsimony — as in the application of “Occam’s razor”, discouraging the postulation of excessive numbers of entities

Scope — the apparent application of the hypothesis to multiple cases of phenomena

Fruitfulness — the prospect that a hypothesis may explain further phenomena in the future

Conservatism — the degree of “fit” with existing recognized knowledge-systems.

_

Step 4: Conduct an experiment:

Many people think of an experiment as something that takes place in a lab. While this can be true, experiments don’t have to involve laboratory workbenches, Bunsen burners or test tubes. They do, however, have to be set up to test a specific hypothesis and they must be controlled. Controlling an experiment means controlling all of the variables so that only a single variable is studied. The independent variable is the one that’s controlled and manipulated by the experimenter, whereas the dependent variable is not. As the independent variable is manipulated, the dependent variable is measured for variation. In our car example, the independent variable is the shape of the car’s body. The dependent variable — what we measure as the effect of the car’s profile — could be speed, gas mileage or a direct measure of the amount of air pressure exerted on the car.  Controlling an experiment also means setting it up so it has a control group and an experimental group. The control group allows the experimenter to compare his test results against a baseline measurement so he can feel confident that those results are not due to chance.  Now consider our air-resistance example. If we wanted to run this experiment, we would need at least two cars — one with a streamlined, birdlike shape and another shaped like a box. The former would be the experimental group, the latter the control. All other variables — the weight of the cars, the tires, even the paint on the cars — should be identical. Even the track and the conditions on the track should be controlled as much as possible.
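The following Python sketch simulates the controlled car experiment just described; the drag values and the noise level are assumptions invented purely for illustration, not real automotive data. By construction, only the independent variable (body shape) differs between the two groups; everything else is identical.

# A simulated controlled experiment. BOX_DRAG, BIRD_DRAG and the noise
# level are hypothetical values assumed for illustration only.
import random

random.seed(1)

BOX_DRAG = 0.80    # assumed true drag of the box-shaped (control) car
BIRD_DRAG = 0.45   # assumed true drag of the bird-shaped (experimental) car

def measure_drag(true_drag):
    """One trial: the measured value is the true value plus random noise."""
    return true_drag + random.gauss(0, 0.02)

control = [measure_drag(BOX_DRAG) for _ in range(30)]        # control group
experimental = [measure_drag(BIRD_DRAG) for _ in range(30)]  # experimental group

print(f"control (box) mean drag:       {sum(control) / len(control):.3f}")
print(f"experimental (bird) mean drag: {sum(experimental) / len(experimental):.3f}")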

Step 5: Analyze data and draw a conclusion:

During an experiment, scientists collect both quantitative and qualitative data. Buried in that information, hopefully, is evidence to support or reject the hypothesis. The amount of analysis required to come to a satisfactory conclusion can vary tremendously. Sometimes the data can be interpreted at a glance; other times, sophisticated statistical tools have to be used to analyze them. Either way, the ultimate goal is to support or refute the hypothesis and, in doing so, answer the original question.
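One simple statistical tool the analysis step might use is a permutation test, sketched below in Python on assumed drag measurements (the numbers are invented). It asks how often a difference in group means at least as large as the observed one would arise if body shape made no difference at all; a tiny p-value supports rejecting that “no effect” explanation.

# A permutation test on assumed (invented) drag measurements.
import random

random.seed(2)

control = [0.81, 0.79, 0.83, 0.78, 0.80, 0.82]       # box-shaped trials
experimental = [0.46, 0.44, 0.47, 0.45, 0.43, 0.46]  # bird-shaped trials

def mean(values):
    return sum(values) / len(values)

observed_diff = mean(control) - mean(experimental)

# Shuffle the pooled data many times and count how often a random split
# produces a difference at least as large as the one actually observed.
pooled = control + experimental
extreme = 0
trials = 10000
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[:6]) - mean(pooled[6:]) >= observed_diff:
        extreme += 1

print(f"observed difference in means: {observed_diff:.3f}")
print(f"estimated p-value: {extreme / trials:.4f}")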

_

Other components of scientific method:

The scientific method also includes other components required even when all the iterations of the steps above have been completed:

Replication:

If an experiment cannot be repeated to produce the same results, this implies that the original results may have been in error. As a result, it is common for a single experiment to be performed multiple times, especially when there are uncontrolled variables or other indications of experimental error. For significant or surprising results, other scientists may also attempt to replicate the results for themselves, especially if those results would be important to their own work.

External review: Peer review evaluation:

The process of peer review involves evaluation of the experiment by experts, who give their opinions anonymously to allow them to give unbiased criticism. It does not certify correctness of the results, only that the experiments themselves were sound (based on the description supplied by the experimenter). If the work passes peer review, which may require new experiments requested by the reviewers, it will be published in a peer-reviewed scientific journal. The specific journal that publishes the results indicates the perceived quality of the work. Scientific journals use a process of peer review, in which scientists’ manuscripts are submitted by editors of scientific journals to (usually one to three) fellow (usually anonymous) scientists familiar with the field for evaluation. The referees may recommend publication as submitted, publication with suggested modifications, publication in another journal, or outright rejection. This serves to keep the scientific literature free of unscientific or pseudoscientific work, to help cut down on obvious errors, and generally otherwise to improve the quality of the material. The peer review process can have limitations when considering research outside the conventional scientific paradigm: problems of “groupthink” can interfere with open and fair deliberation of some new research.

Data recording and sharing:

Scientists must record all data very precisely in order to reduce their own bias and aid in replication by others, a requirement first promoted by Ludwik Fleck (1896–1961) and others. They must supply this data to other scientists who wish to replicate any results, extending to the sharing of any experimental samples that may be difficult to obtain.

__________

You ought to have a logical explanation for evidence:

 

_

There could be alternative explanations for evidence:

 

___________

Hypothesis and theory:

A hypothesis (plural hypotheses) is a proposed explanation for a phenomenon. For a hypothesis to be a scientific hypothesis, the scientific method requires that one can test it. Scientists generally base scientific hypotheses on previous observations that cannot satisfactorily be explained with the available scientific theories. A working hypothesis is a provisionally accepted hypothesis proposed for further research. Even though the words “hypothesis” and “theory” are often used synonymously, a scientific hypothesis is not the same as a scientific theory. A scientific hypothesis is a proposed explanation of a phenomenon which still has to be rigorously tested. In contrast, a scientific theory has undergone extensive testing and is generally accepted to be the accurate explanation behind an observation. A hypothesis requires more work by the researcher in order to either confirm or disprove it. In due course, a confirmed hypothesis may become part of a theory or occasionally may grow to become a theory itself. Any useful hypothesis will enable predictions by reasoning (including deductive reasoning). It might predict the outcome of an experiment in a laboratory setting or the observation of a phenomenon in nature. The prediction may also invoke statistics and only talk about probabilities.

_

A hypothesis is a working assumption. Typically, a scientist devises a hypothesis and then sees if it “holds water” by testing it against available data (obtained from previous experiments and observations). If the hypothesis does hold water, it is retained as a provisional working assumption; when a hypothesis proves unsatisfactory, it is either modified or discarded. If the hypothesis survives testing, it may become adopted into the framework of a scientific theory. This is a logically reasoned, self-consistent model or framework for describing the behavior of certain natural phenomena. A theory typically describes the behavior of much broader sets of phenomena than a hypothesis; commonly, a large number of hypotheses can be logically bound together by a single theory. Thus a theory is a hypothesis explaining various other hypotheses. In that vein, theories are formulated according to most of the same scientific principles as hypotheses. In addition to testing hypotheses, scientists may also generate a model based on observed phenomena. This is an attempt to describe or depict the phenomenon in terms of a logical, physical or mathematical representation and to generate new hypotheses that can be tested. In popular usage, a theory is just a vague and fuzzy sort of fact and a hypothesis is often used as a fancy synonym for “guess”. But to a scientist a theory is a conceptual framework that explains existing observations and predicts new ones. For instance, suppose you see the Sun rise. This is an existing observation which is explained by the theory of gravity proposed by Newton. This theory, in addition to explaining why we see the Sun move across the sky, also explains many other phenomena such as the path followed by the Sun as it moves (as seen from Earth) across the sky, the phases of the Moon, the phases of Venus, the tides, just to mention a few. You can today make a calculation and predict the position of the Sun, the phases of the Moon and Venus, the hour of maximal tide, all 200 years from now. The same theory is used to guide spacecraft all over the Solar System.

_

The basic elements of the scientific method are illustrated by the following examples:

Example-1:

Structure of DNA: 

 Question: Previous investigation of DNA had determined its chemical composition (the four nucleotides), the structure of each individual nucleotide, and other properties. It had been identified as the carrier of genetic information by the Avery–MacLeod–McCarty experiment in 1944, but the mechanism of how genetic information was stored in DNA was unclear.

Hypothesis: Francis Crick and James D. Watson hypothesized that DNA had a helical structure.

Prediction: If DNA had a helical structure, its X-ray diffraction pattern would be X-shaped. This prediction was determined using the mathematics of the helix transform, which had been derived by Cochran, Crick and Vand (and independently by Stokes).

Experiment: Rosalind Franklin crystallized pure DNA and performed X-ray diffraction to produce a photograph (the famous Photo 51). The results showed an X-shape.

Analysis: When Watson saw the detailed diffraction pattern, he immediately recognized it as a helix. He and Crick then produced their model, using this information along with the previously known information about DNA’s composition and about molecular interactions such as hydrogen bonds.

_

Example-2:

Discovery of expanding universe with galaxies moving away from each other:

In 1919, when Edwin Hubble (of Hubble Space Telescope fame) arrived on California’s Mount Wilson to use the 100-inch Hooker Telescope, then the world’s largest, astronomers generally believed that the entire universe consisted of a single galaxy — the Milky Way. But as Hubble began making observations with the Hooker Telescope, he noticed that objects known as “nebulae,” thought to be components of the Milky Way, were located far beyond its boundaries. At the same time, he observed that these “nebulae” were moving rapidly away from the Milky Way. Hubble used these observations to make a groundbreaking generalization in 1925: the universe wasn’t made up of one galaxy, but millions of them. Not only that, Hubble argued by 1929, all galaxies were moving away from each other due to a uniform expansion of the universe. Science makes predictions and tests those predictions using experiments. Generalizations are powerful tools because they enable scientists to make predictions. For example, once Hubble asserted that the universe extended far beyond the Milky Way, it followed that astronomers should be able to observe other galaxies. And as telescopes improved, they did discover galaxies — thousands and thousands of them, in all different shapes and sizes. Today, astronomers believe that there are about 125 billion galaxies in the universe. They’ve also been able to conduct numerous observations over the years to support Hubble’s notion that the universe is expanding. One classic test is based on the Doppler Effect. Most people know the Doppler Effect as a phenomenon that occurs with sound. For example, as an ambulance passes us on the street, the sound of its siren seems to change pitch. As the ambulance approaches, the pitch is higher; as it passes and recedes, the pitch drops. This happens because the ambulance is either moving toward the sound waves it is creating (which decreases the distance between wave crests and increases pitch) or moving away from them (which increases the distance between wave crests and decreases pitch). Astronomers hypothesized that light waves created by celestial objects would behave the same way. They made the following educated guesses: if a distant galaxy is rushing toward our galaxy, it will move closer to the light waves it is producing (which decreases the distance between wave crests and shifts its color to the blue end of the spectrum). If a distant galaxy is rushing away from our galaxy, it will move away from the light waves it is creating (which increases the distance between wave crests and shifts its color to the red end of the spectrum). To test the hypothesis, astronomers used an instrument known as a spectrograph to view the spectra, or bands of colored light, produced by various celestial objects. They recorded the wavelengths of the spectral lines, and their intensities, collecting data that strongly supported the hypothesis.
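The arithmetic behind that spectrograph test is simple enough to sketch in code. In the Python illustration below the observed wavelengths are invented, while the rest wavelength of the hydrogen-alpha line (about 656.3 nanometers) is a standard laboratory value; the formulas used are the textbook definitions z = (lambda_observed - lambda_rest) / lambda_rest and, for small z, recession velocity v ≈ cz.

# Classifying galaxies as redshifted or blueshifted from their spectra.
# The observed wavelengths below are invented for illustration.
C_KM_PER_S = 299792.458    # speed of light, km/s
LAMBDA_REST_NM = 656.3     # hydrogen-alpha rest wavelength, nanometers

def redshift(lambda_observed_nm):
    """z > 0: wavelengths stretched (receding); z < 0: compressed (approaching)."""
    return (lambda_observed_nm - LAMBDA_REST_NM) / LAMBDA_REST_NM

for name, lam in [("galaxy A", 659.6), ("galaxy B", 655.1)]:
    z = redshift(lam)
    label = "redshifted, receding" if z > 0 else "blueshifted, approaching"
    print(f"{name}: z = {z:+.5f} ({label}), v ~ {C_KM_PER_S * z:+.0f} km/s")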

_

Old theory and new theory:

Experiments sometimes produce results which cannot be explained with existing theories. In this case it is the job of scientists to produce new theories which replace the old ones. The new theories should explain all the observations and experiments the old theory did and, in addition, the new set of facts which led to their development. One can say that new theories devour and assimilate old ones. Scientists continually test existing theories in order to probe how far they can be applied. When an old theory cannot explain new observations it will be (eventually) replaced by a new theory. This does not mean that the old ones are “wrong” or “untrue”, it only means that the old theory had limited applicability and could not explain all current data. The only certain thing about currently accepted theories is that they explain all available data, which, of course, does not imply that they will explain all future experiments! In some cases new theories provide not only extensions of old ones, but a completely new insight into the workings of nature. Thus when going from Newton’s theory of gravitation to Einstein’s, our understanding of the nature of space and time was revolutionized. Nonetheless, no matter how beautiful and simple a new theory might be, it must explain the same phenomena the old one did. Even the most beautiful theory can be annihilated by a single ugly fact. Scientific theories have various degrees of reliability and one can think of them as being on a scale of certainty. Up near the top end we have our theory of gravitation based on a staggering amount of evidence; down at the bottom we have the theory that the Earth is flat. In the middle we have our theory of the origin of the moons of Uranus. Some scientific theories are nearer the top than others, but none of them ever actually reach it. An extraordinary claim is one that contradicts a fact that is close to the top of the certainty scale and will give rise to a lot of skepticism. So if you are trying to contradict such a fact, you had better have facts available that are even higher up the certainty scale: extraordinary evidence is needed for an extraordinary claim.

_

Scientific theory is empirical, and is always open to falsification if new evidence is presented. That is, no theory is ever considered strictly certain as science accepts the concept of fallibilism. The philosopher of science Karl Popper sharply distinguishes truth from certainty. He writes that scientific knowledge “consists in the search for truth”, but it “is not the search for certainty … All human knowledge is fallible and therefore uncertain.” 

_

In ordinary conversation, the word “theory” connotes an opinion, a conjecture, or a supposition. But in science, the term has a much more limited meaning. A scientific theory is an attempt to explain some aspect of the natural world in terms of empirical evidence and observation. It commonly draws upon established principles and knowledge with the aim of extending them in a logical and consistent way that enables one to make useful predictions. All scientific theories are tentative and subject to being tested and modified. As theories become more mature, they grow into more organized bodies of knowledge that enable us to understand and predict a wider range of phenomena. Examples of such theories are quantum theory, Einstein’s theories of relativity, and evolution.

Scientific theories fall into two categories:

1. Theories that have been shown to be incorrect, usually because they are not consistent with new observations;

2. All other theories

In other words:

Theories cannot be proven to be correct; there is always the possibility that further observations will disprove the theory. A theory that cannot be refuted or falsified is not a scientific theory. For example, the theories that underlie astrology (the doctrine that the positions of the stars can influence one’s life) are not falsifiable because they, and the predictions that follow from them, are so vaguely stated that the failure of these predictions can always be “explained away” by assuming that various other influences were not taken into account. It is similarly impossible to falsify so-called “creation science” or “intelligent design” because one can simply invoke “then a miracle occurs” at any desired stage.

_

If scientific theories keep changing, where is the Truth?

In 1666 Isaac Newton proposed his theory of gravitation. This was one of the greatest intellectual feats of all time. The theory explained all the observed facts, and made predictions that were later tested and found to be correct within the accuracy of the instruments being used. As far as anyone could see, Newton’s theory was ‘the Truth’. During the nineteenth century, more accurate instruments were used to test Newton’s theory; these observations uncovered some slight discrepancies. Albert Einstein proposed his theories of Relativity, which explained the newly observed facts and made more predictions. Those predictions have now been tested and found to be correct within the accuracy of the instruments being used. As far as anyone can see, Einstein’s theory is ‘the Truth’. So how can the Truth change? Well, the answer is that it hasn’t. The Universe is still the same as it ever was. When a theory is said to be “true” it means that it agrees with all known experimental evidence. But even the best of theories have, time and again, been shown to be incomplete: though they might explain a lot of phenomena using a few basic principles, and even predict many new and exciting results, eventually new experiments (or more precise ones) show a discrepancy between the workings of nature and the predictions of the theory. In the strict sense this means that the theory was not “true” after all; but the fact remains that it is a very good approximation to the truth, at least where a certain class of phenomena is concerned. When an accepted theory cannot explain some new data (which has been confirmed), the researchers working in that field strive to construct a new theory. This task gets increasingly more difficult as our knowledge increases, for the new theory should not only explain the new data, but also all the old data: a new theory has, as its first duty, to devour and assimilate its predecessors.

_________

What is Ockham’s Razor? 

Occam’s razor, also written as Ockham’s razor, is a principle of parsimony, economy, or succinctness used in logic and problem-solving. It states that among competing hypotheses, the hypothesis with the fewest assumptions should be selected. Ockham’s Razor is the principle proposed by William of Ockham in the fourteenth century: ‘Pluralitas non est ponenda sine necessitate’, which translates as “entities should not be multiplied unnecessarily”. The application of the principle often shifts the burden of proof in a discussion. The razor states that one should proceed to simpler theories until simplicity can be traded for greater explanatory power.

_

When a new set of facts requires the creation of a new theory, the process is far from the orderly picture often presented in books. Many hypotheses are proposed, studied, rejected. Researchers discuss their validity (sometimes quite heatedly), proposing experiments which will determine the validity of one or the other, exposing flaws in their least favorite ones, etc. Yet, even when the unfit hypotheses are discarded, several options may remain, in some cases making the exact same predictions, but having very different underlying assumptions. In order to choose among these possible theories, a very useful tool is what is called Ockham’s razor.

 

In many cases this is interpreted as “keep it simple”, but in reality the Razor has a more subtle and interesting meaning. Suppose that you have two competing theories which describe the same system. If these theories have different predictions, then it is a relatively simple matter to find which one is better: one does experiments with the required sensitivity and determines which one gives the most accurate predictions. For example, in Copernicus’ theory of the solar system the planets move in circles around the sun; in Kepler’s theory they move in ellipses. By measuring carefully the path of the planets it was determined that they move on ellipses, and Copernicus’ theory was then replaced by Kepler’s. But there are theories which have the very same predictions, and it is here that the Razor is useful. Consider, for example, the following two theories aimed at describing the motion of the planets around the sun:

• The planets move around the sun in ellipses because there is a force between any of them and the sun which decreases as the square of the distance.

• The planets move around the sun in ellipses because there is a force between any of them and the sun which decreases as the square of the distance. This force is generated by the will of some powerful aliens.

Since the force between the planets and the sun determines the motion of the former, and both theories posit the same type of force, the predicted motion of the planets will be identical for both theories. The second theory, however, has additional baggage (the will of the aliens) which is unnecessary for the description of the system. If one accepts the second theory solely on the basis that it predicts correctly the motion of the planets, one has also accepted the existence of aliens whose will affects the behavior of things, despite the fact that the presence or absence of such beings is irrelevant to planetary motion (the only relevant item is the type of force). In this instance Ockham’s Razor would unequivocally reject the second theory. Rejecting this type of additional irrelevant hypothesis guards against the use of solid scientific results (such as the prediction of planetary motion) to justify unrelated statements (such as the existence of the aliens) which may have dramatic consequences. In this case the consequence is that the way planets move, the reason we fall to the ground when we trip, etc. is due to some powerful alien intellect, that this intellect permeates our whole solar system, it is with us even now… and from here an infinite number of paranoid derivations. For all we know the solar system is permeated by an alien intellect, but the motion of the planets, which can be explained by the simple idea that there is a force between them and the sun, provides no evidence of the aliens’ presence nor proves their absence. A more straightforward application of the Razor is when we are faced with two theories which have the same predictions and the available data cannot distinguish between them. In this case the Razor directs us to study in depth the simplest of the theories. It does not guarantee that the simplest theory will be correct; it merely establishes priorities. A related rule, which can be used to slice open conspiracy theories, is Hanlon’s Razor: “Never attribute to malice that which can be adequately explained by stupidity”.
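Modern statistics gives the Razor a quantitative edge. The Python sketch below (my own illustration, using the NumPy library and synthetic data, not anything from the original essay) applies the Akaike Information Criterion, which rewards goodness of fit but charges a penalty for every extra parameter; fitted to noisy data from a truly linear process, a straight line should typically beat higher-degree polynomials carrying unnecessary baggage.

# Ockham's razor operationalized: the Akaike Information Criterion (AIC)
# penalizes extra parameters. The data are synthetic and linear by design.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 5.0, 60)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, x.size)   # linear truth plus noise

def aic(degree):
    """AIC (up to a constant) for a least-squares polynomial fit."""
    coefficients = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coefficients, x)
    rss = float(np.sum(residuals ** 2))
    n, k = x.size, degree + 1                      # k = number of parameters
    return n * np.log(rss / n) + 2 * k             # lower is better

for degree in (1, 2, 5):
    print(f"polynomial degree {degree}: AIC = {aic(degree):.2f}")
# The higher degrees mostly fit the noise, so the parameter penalty
# typically exposes that extra baggage and the degree-1 model scores best.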

_

Limitations of Occam’s Razor:

Occam’s razor would demand that scientists accept the simplest possible theoretical explanation for existing data. However, science has shown repeatedly that future data often supports more complex theories than existing data. Science prefers the simplest explanation that is consistent with the data available at a given time, but the simplest explanation may be ruled out as new data become available. That is, science is open to the possibility that future experiments might support more complex theories than demanded by current data and is more interested in designing experiments to discriminate between competing theories than favoring one theory over another based merely on philosophical principles. When scientists use the idea of parsimony, it only has meaning in a very specific context of inquiry. A number of background assumptions are required for parsimony to connect with plausibility in a particular research problem. The reasonableness of parsimony in one research context may have nothing to do with its reasonableness in another. It is a mistake to think that there is a single global principle that spans diverse subject matter. It has been suggested that Occam’s razor is a widely accepted example of extraevidential consideration, even though it is entirely a metaphysical assumption. There is little empirical evidence that the world is actually simple or that simple accounts are more likely than complex ones to be true. Most of the time, Occam’s razor is a conservative tool, cutting out crazy, complicated constructions and assuring that hypotheses are grounded in the science of the day, thus yielding “normal” science: models of explanation and prediction. There are examples where Occam’s razor would have picked the wrong theory given the available data. Simplicity principles are useful philosophical preferences for choosing a more likely theory from among several possibilities that are each consistent with available data. A single instance of Occam’s razor picking a wrong theory falsifies the razor as a general principle.  Michael Lee and others provide cases where a parsimonious approach does not guarantee a correct conclusion and, if based on incorrect working hypotheses or interpretations of incomplete data, may even strongly support a false conclusion. He states, “When parsimony ceases to be a guideline and is instead elevated to an ex cathedra pronouncement, parsimony analysis ceases to be science.”  

_

Karl Popper’s doctrine of falsification as an important ingredient in understanding scientific method:

Popper described the demarcation problem between science and pseudoscience as the “key to most of the fundamental problems in the philosophy of science”. He rejected verifiability as a criterion for a scientific theory or hypothesis to be scientific, rather than pseudoscientific or metaphysical. Instead he proposed as a criterion that the theory be falsifiable, or more precisely that “statements or systems of statements, in order to be ranked as scientific, must be capable of conflicting with possible, or conceivable observations”. A central part of Karl Popper’s project is figuring out how to draw the line between science and pseudo-science. He seems committed to the idea that scientific methodology is well-suited — perhaps uniquely so — for building reliable knowledge and for avoiding false beliefs. Indeed, under the assumption that science has this kind of power, one of the problems with pseudo-science is that it gets an unfair credibility boost by so cleverly mimicking the surface appearance of science. The big difference Popper identifies between science and pseudo-science is a difference in attitude. While a pseudo-science is set up to look for evidence that supports its claims, Popper says, a science is set up to challenge its claims and look for evidence that might prove them false. In other words, pseudo-science seeks confirmations and science seeks falsifications. There is a corresponding difference that Popper sees in the form of the claims made by sciences and pseudo-sciences: scientific claims are falsifiable — that is, they are claims where you could set out what observable outcomes would be impossible if the claim were true — while pseudo-scientific claims fit with any imaginable set of observable outcomes. What this means is that you could do a test that shows a scientific claim to be false, but no conceivable test could show a pseudo-scientific claim to be false. Sciences are testable, pseudo-sciences are not. So, Popper has this picture of the scientific attitude that involves taking risks: making bold claims, then gathering all the evidence you can think of that might knock them down. If they stand up to your attempts to falsify them, the claims are still in play. But, you keep that hard-headed attitude and keep your eyes open for further evidence that could falsify the claims. If you decide not to watch for such evidence — deciding, in effect, that because the claim hasn’t been falsified in however many attempts you’ve made to falsify it, it must be true — you’ve crossed the line to pseudo-science. This sets up the central asymmetry in Popper’s picture of what we can know. We can find evidence to establish with certainty that a claim is false. However, we can never (owing to the problem of induction) find evidence to establish with certainty that a claim is true. So scientists realize that their best hypotheses and theories are always tentative — some piece of future evidence could conceivably show them false — while the pseudo-scientists are as sure as can be that their theories have been proven true. (Of course, they haven’t been — problem of induction again.) So, why does this difference between science and pseudo-science matter? As Popper notes, the difference is not a matter of scientific theories always being true and pseudo-scientific theories always being false. The important difference seems to be in which approach gives better logical justification for knowledge claims.
A pseudo-science may make you feel like you’ve got a good picture of how the world works, but you could well be wrong about it. If a scientific picture of the world is wrong, that hard-headed scientific attitude means the chances are good that we’ll find out we’re wrong — one of those tests of our hypotheses will turn up the data that falsifies them — and switch to a different picture. Another note on “falsifiability”: the fact that many attempts to falsify a claim have failed does not mean that the claim is unfalsifiable. Nor, for that matter, would the fact that the claim is true make it unfalsifiable. A claim is falsifiable if there are certain observations we could make that would tell us the claim is false — certain observable ways the world could not be if the claim were true. So, the claim that Mars moves in an elliptical orbit around the sun could be falsified by observations of Mars moving in an orbit that deviated at all from an elliptical shape. Since the scientist’s acceptance of a theory is always tentative (and this is one piece of Popper’s account that most scientists whole-heartedly endorse), even the theory with the best evidential support is still a theory. Indeed, even if a theory happened to be completely true, it would still be a theory! (Why? You could never be absolutely certain that some future observation might not falsify the theory. In other words, on the basis of the evidence, you can’t be 100% sure that the theory is true.) So, for example, dismissing Darwin’s theory as “just a theory” as if that were a strike against it is misunderstanding what science is up to. Of course there is some uncertainty; there is with all scientific theories. Of course there are certain claims the theory makes that might turn out to be false; but the fact that there is evidence we could conceivably get to demonstrate these claims are false is a scientific virtue, not a sign that the theory is unscientific. By contrast, “Creation Science” and “Intelligent Design Theory” don’t make falsifiable claims (at least, this is what many people think; Larry Laudan disputes this but points out different reasons these theories don’t count as scientific). There’s no conceivable evidence we could locate that could demonstrate the claims of these theories are false. Thus, these theories just aren’t scientific. Certainly, their proponents point to all sorts of evidence that fits well with these theories, but they never make any serious efforts to look for evidence that could prove the theories false. Their acceptance of these theories isn’t a matter of having proof that the theories are true, or even a matter of these theories having successfully withstood many serious attempts to falsify them. Rather, it’s a matter of faith. None of this means Darwin’s theory is necessarily true and “Creation Science” is necessarily false. But it does mean (in the Popperian view that most scientists endorse) that Darwin’s theory is scientific and “Creation Science” is not.
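Popper’s asymmetry between falsification and verification is easy to state in code. Here is a toy Python sketch (my illustration, with invented data): a single counterexample refutes a universal claim, while any number of passed tests leaves it only tentatively corroborated, never proven.

# One black swan refutes "all swans are white"; a thousand white swans
# never prove it. The observation lists are invented for illustration.
def test_universal_claim(claim_holds_for, observations):
    """Return 'falsified' on the first counterexample, else 'corroborated'."""
    for observation in observations:
        if not claim_holds_for(observation):
            return "falsified"      # certain: the claim is false
    return "corroborated"           # tentative: not proven, merely not yet falsified

def all_swans_are_white(swan_color):
    return swan_color == "white"

print(test_universal_claim(all_swans_are_white, ["white"] * 1000))
print(test_universal_claim(all_swans_are_white, ["white"] * 999 + ["black"]))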

_

Continuing from the preceding paragraphs on falsifiability, we are led to classify science as follows:

1. Science: Basic assumptions directly falsifiable.

2. Quasi-Science: Basic assumptions not directly falsifiable, but certain consequences of the basic assumptions falsifiable.

3. Pseudo-Science: Basic assumptions not directly falsifiable, nor any consequence of the basic assumptions.

Examples:

1. Solid Mechanics based on Navier’s equations expressing equilibrium of forces and a constitutive relation such as Hooke’s law, which are directly falsifiable.

2. Statistical Mechanics with basic assumption of molecular chaos, which is not directly falsifiable.

3. Astrology and Marxism with basic assumptions unknown and thus beyond falsification.

_

Reading Popper’s Logic of Scientific Discovery shows that Popper is deeply troubled by the presence of statistics in modern physics, represented by statistical mechanics and by quantum mechanics with its statistical interpretation formulated by Born and forcefully propagated as the Copenhagen interpretation by Bohr. Popper is troubled because the basic assumptions are not falsifiable, which threatens to put modern physics into the camp of quasi-science.

It is instructive to compare with a legal case:

If A has a shotgun, then A has the ability to shoot B. Now, suppose that B is found shot dead. Can we then conclude that A is likely to have shot B?

No, of course not, unless the basic assumption that A has a shotgun can be verified or at least be made likely. So observing a consequence of an assumption does not validate the assumption; to argue otherwise is to affirm the consequent.

Suppose the basic assumption that A has a shotgun cannot be falsified; then A may be in trouble without having had anything to do with the death of B. This argument hopefully exhibits the problematic aspect of quasi-science, with basic assumptions impossible to falsify: if the assumption that A has a shotgun cannot be directly falsified, then A may be arrested even though A is fully innocent (until you find C, who actually shot B).

_

Criticism of Popper’s demarcation criterion of falsification: 

Philosophers criticize falsification because to falsify a theory conclusively we must rely upon observations which are, as Popper noted, fallible and open to revision. A further critique emphasizes Popper’s extremely naïve view of testing, which fails to account for the uncertainties and complexities of real world test situations. Thus, while it may appear that a theory is being disproved by a negative test result or failed prediction, it is always possible that part of the test situation itself might be the source of problems and not the theory. Popper’s demarcation criterion has been criticized both for excluding legitimate science (Hansson 2006) and for giving some pseudosciences the status of being scientific (Agassi 1991; Mahner 2007, 518–519). Strictly speaking, his criterion excludes the possibility that there can be a pseudoscientific claim that is refutable. According to Larry Laudan (1983, 121), it “has the untoward consequence of countenancing as ‘scientific’ every crank claim which makes ascertainably false assertions”. Astrology, rightly taken by Popper as an unusually clear example of a pseudoscience, has in fact been tested and thoroughly refuted (Culver and Ianna 1988; Carlson 1985). Similarly, the major threats to the scientific status of psychoanalysis, another of his major targets, do not come from claims that it is untestable but from claims that it has been tested and failed the tests. Other philosophers of science have rejected the criterion of falsifiability or supplemented it with other criteria, such as verifiability (e.g., verificationism) or coherence (e.g., confirmation holism). A scientific method involves experiment, to test the ability of some hypothesis to adequately answer the question under investigation. In contrast, unfettered observation is not as likely to raise unexplained issues or open questions in science as would the formulation of a crucial experiment to test the hypothesis. A thought experiment might also be used to test the hypothesis.

_

Is Popper’s falsifiability criterion the solution to the problem of demarcating science from pseudoscience?  No.

As Imre Lakatos argued, Popper’s criterion ignores the remarkable tenacity of scientific theories. Scientists have thick skins. They do not abandon a theory [merely] because facts contradict it. They normally either invent some rescue hypothesis to explain what they then call a mere anomaly or, if they cannot explain the anomaly, they ignore it and direct their attention to other problems. Note that scientists talk about anomalies, [recalcitrant instances,] and not refutations. History of science, of course, is full of accounts of how crucial experiments allegedly killed theories. But all such accounts are fabricated long after the theory has been abandoned. [Had Popper ever asked a Newtonian scientist under what experimental conditions he would abandon Newtonian theory, some Newtonian scientists would have been exactly as nonplussed as are some Marxists.]

_

Multi-criterial approaches to identify pseudoscience:

Popper’s method of demarcation consists essentially of the single criterion of falsifiability, although some authors have wanted to combine it with the additional criteria that tests are actually performed and their outcomes respected. Most authors have instead proposed lists of demarcation criteria. A large number of lists have been published that consist of (usually 5–10) criteria that can be used in combination to identify a pseudoscience or pseudoscientific practice. This includes lists by Langmuir ([1953] 1989), Gruenberger (1964), Dutch (1982), Bunge (1982), Radner and Radner (1982), Kitcher (1982, 30–54), Hansson (1983), Grove (1985), Thagard (1988), Glymour and Stalker (1990), Derksen (1993, 2001), Vollmer (1993), Ruse (1996, 300–306) and Mahner (2007). One such list reads as follows:

1. Belief in authority: It is contended that some person or persons have a special ability to determine what is true or false. Others have to accept their judgments.

2. Nonrepeatable experiments: Reliance is put on experiments that cannot be repeated by others with the same outcome.

3. Handpicked examples: Handpicked examples are used although they are not representative of the general category that the investigation refers to.

4. Unwillingness to test: A theory is not tested although it is possible to test it.

5. Disregard of refuting information: Observations or experiments that conflict with a theory are neglected.

6. Built-in subterfuge: The testing of a theory is so arranged that the theory can only be confirmed, never disconfirmed, by the outcome.

7. Explanations abandoned without replacement: Tenable explanations are given up without being replaced, so that the new theory leaves much more unexplained than the previous one. (Hansson 1983)

__________

Rules for evidential reasoning (from Lett, 1990):  A guide to intelligent living and the scientific method.  All claims should be subjected to these rules. 

Falsifiability: Conceive of all evidence that would prove the claim false.

Logic: Arguments must be sound.

Comprehensiveness: Use all the available evidence.

Honesty: Evaluate evidence without self-deception.

Replicability: Evidence must be repeatable.

Sufficiency: (1) The burden of proof rests on the claimant. (2) Extraordinary claims require extraordinary evidence. (3) Authority and/or testimony is always inadequate.

_

What is true scientific method?

True scientific method says that you cannot “prove” anything. All you can do is disprove alternative hypotheses — all you can do is show that alternative answers are incorrect. One thing to be very wary of is studies which set out to prove something — whose purpose is to support some hypothesis. Unless they are undertaken under strict controls, they will most likely suffer from the old adage that if you look hard enough, you will find the data necessary to support any agenda. You can imagine the pressures on a research organization to deliver the proper findings when a corporation is paying it to do so. Which company would hire a testing company again if that testing company determined that the company’s product really was dangerous? Much like a reporter who has already written the title of an article before he looks for the supporting evidence, many studies suffer from the same warped point of view. When you report on correlations, explain that they do not imply a cause. Push your scientific sources to give multiple probable causes, and push them to elaborate on the alternative hypotheses that were tested and disproved.
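The warning about correlations deserves a demonstration. The short Python sketch below (simulated data only, my own illustration) generates pairs of completely independent random walks and counts how many happen to show a sizable correlation anyway; such spurious correlations are exactly why correlation alone does not imply a cause.

# Independent random walks share no common cause, yet can correlate strongly.
import random

random.seed(7)

def random_walk(steps):
    position, walk = 0.0, []
    for _ in range(steps):
        position += random.gauss(0.0, 1.0)
        walk.append(position)
    return walk

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

pairs = 200
strong = sum(abs(pearson_r(random_walk(100), random_walk(100))) > 0.5
             for _ in range(pairs))
print(f"{strong} of {pairs} unrelated pairs show |r| > 0.5 purely by chance")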

_

Limitations of the Scientific Method:

Clearly, the scientific method is a powerful tool, but it does have its limitations. These limitations are based on the fact that a hypothesis must be testable and falsifiable and that experiments and observations be repeatable. This places certain topics beyond the reach of the scientific method. Science cannot prove or refute the existence of God or any other supernatural entity. Science is meant to give us a better understanding of the mysteries of the natural world, by refuting previous hypotheses, and the existence of supernatural beings lies outside of science altogether. Sometimes, scientific principles are used to try to lend credibility to certain nonscientific ideas, such as intelligent design. Intelligent design is the assertion that certain aspects of the origin of the universe and life can be explained only in the context of an intelligent, divine power. Proponents of intelligent design try to pass this concept off as a scientific theory to make it more palatable to developers of public school curriculums. But intelligent design is not science because the existence of a divine being cannot be tested with an experiment. Science is also incapable of making value judgments about whether certain scientific phenomena are “good” or “bad”. For example, the scientific method cannot alone say that global warming is bad or harmful to the world, as it can only study its objective causes and consequences. It can study the causes and effects of global warming and report on those results, but it cannot assert that driving SUVs is wrong or that people who haven’t replaced their regular light bulbs with compact fluorescent bulbs are irresponsible. Furthermore, science cannot answer questions about morality, as scientific results lie outside the scope of cultural, religious and social influences. Anyone who tries to draw moral lessons from the laws of nature is on very dangerous ground. Evolution in particular seems to suffer from this. At one time or another it seems to have been used to justify Nazism, Communism, and every other -ism in between. These justifications are all completely bogus. Similarly, anyone who says “evolution theory is evil because it is used to support Communism” (or any other -ism) has also strayed from the path of Logic. Occasionally, certain organizations use scientific data to advance their causes. This blurs the line between science and morality, and encourages the creation of “pseudo-science,” which tries to legitimize a product or idea with a claim that has not been subjected to rigorous testing. And yet, used properly, the scientific method is one of the most valuable tools humans have ever created. It helps us solve everyday problems around the house and, at the same time, helps us understand profound questions about the world and universe in which we live.

___________

Skepticism:

Let’s talk about skepticism. Let’s be clear what it is, because the term is abused by so many. Real scientific skepticism is the noble art of constantly questioning and doubting claims and assertions, and holding that the accumulation of evidence is of fundamental importance. It forms part of the scientific method, which requires relentless testing and reviewing of claimed facts and theories. In other words, skepticism is not just the questioning or doubting of claims; it is gathering all of the evidence and weighing it properly. A real skeptic, for example, gives more weight to evidence provided in meta-review articles (which roll up all of the evidence for and against a particular hypothesis, removing obviously biased or poorly done studies) than to anecdotal commentary from an anonymous website. A real skeptic will value evidence that comes from people who are experts in the particular field even over those who claim an expertise based on education or research in an unrelated field. An immunologist knows more about immunology than a biochemist, because it takes so long to understand a field of science.

On the other hand, there are those who claim to be skeptics, but are, in fact, pseudoskeptics. A pseudoskeptic, in the proper use of the term, is someone who will claim to be a skeptic of a concept, but in reality would not be convinced by any evidence that might be presented. They are closed-minded to any evidence, or their standards of evidence approach the Nirvana Fallacy: they want evidence that’s perfect; otherwise, it’s worthless. Sometimes a pseudoskeptic will call a real skeptic a “pseudoskeptic”, but that’s just a misuse of the term, a misdirection to confuse the argument. Pseudoskepticism is clearly a synonym of denialism, as there is usually a vast amount of real evidence which is simply willfully ignored by these pseudoskeptics. For example, there is an individual on Facebook, Vaccine Skeptic Society, who is a proven vaccine denier and a pseudoskeptic. Real skeptics are always prepared to change their positions based on new evidence, relying on the scientific method. Vaccine Denier Society (to use proper terminology) has been shown to ignore all evidence except that which supports their position, which isn’t skepticism. It’s just blindly using confirmation bias and ignoring evidence, including the quality of that evidence. Don’t fall for the trap that you should be “open-minded” or neutral to anti-science or pseudoscience. Open-mindedness and neutrality are expectations that you will balance real scientific evidence, not treat rhetoric as if it has equal weight to the scientific method.

__________

Are mistakes in good science bad science?  No……

A hypothesis can simply be mistaken, though plausible given available knowledge at some point in time. Even the very best scientists can make such mistakes, especially regarding phenomena that were not amenable to proper research at the time. The concept of a “luminiferous aether”, prior to the Michelson-Morley experiment, is a suitable example. Even Einstein made this kind of mistake, for instance in believing that the universe was static rather than expanding, before Edwin Hubble demonstrated otherwise. But “bad science” is something worse than mistaken: it is an idea or hypothesis that is based on errors in methodology, reasoning or understanding that a competent scientist simply should not make, given the knowledge available at the time. “Not even wrong science” is worse still. It is usually the product of an earnest, well-meaning individual who has an enthusiasm for some branch of science but hardly any actual training or understanding of the subject. Examples of this sort of thing include cosmological theories involving “dynamic energy vortices”, “proofs” that the theory of special relativity is obviously wrong, and so on. Good science, by contrast, consists of reasonable, plausible hypotheses that meet the necessary conditions and have some evidence in their favor, even if they have not yet been fully “proven”.

Examples of good science where mistakes were made:

1. Black holes. At first almost everyone, even Einstein, thought this was a crazy idea. Eddington nearly destroyed Chandrasekhar’s career over this issue, though the evidence for black holes has lately been pretty darn good. But there are still doubters, and very recently a proposal has been made that could give an alternative account of black holes, and of the controversial theories of dark matter and dark energy as well.

2. Dark energy/cosmological constant. Although a small cosmological constant used in the Friedmann equation gives a very good fit with universe expansion data from supernovae, other (still quite controversial) observations of very distant gamma-ray bursts do not fit. And there are alternative accounts. Further, there is no decent theoretical explanation of a small cosmological constant.

3. Dark matter. The evidence for this is very good, and of many kinds. Yet many people keep trying to come up with alternatives. A very reputable physicist, Jacob Bekenstein, has recently claimed to have reconciled MOND (Modified Newtonian Dynamics) with relativity so as to provide another viable alternative.

4. Inflationary cosmology and hypotheses about its antecedents. There are sound mathematical theories for such things, but little (for inflation) or nothing (for antecedents) in the way of physical evidence. The “standard” hypothesis is the Big Bang, but there are many variants and alternatives, associated with big names like Hawking, Steinhardt, Linde, Turok, etc. Unlike the previous examples, there’s scant evidence, in spite of much elegant theory.

5. Quantum mechanics and determinism. Some quite reputable physicists keep trying to find some sort of determinism underlying quantum mechanics.

6. The Alvarez asteroid theory to explain the Cretaceous-Tertiary mass extinction. Since the discovery of the Chicxulub impact crater, people have become pretty convinced of this theory, despite heavy early skepticism. Yet evidence keeps turning up that the impact crater was formed long (ca. 300,000 years) before the extinctions began.  

_

Pseudoscience differs from erroneous good science. Science thrives on errors, cutting them away one by one. False conclusions are drawn all the time, but they are drawn tentatively. Hypotheses are framed so they are capable of being disproved. A succession of alternative hypotheses is confronted by experiment and observation. Science gropes and staggers toward improved understanding. Proprietary feelings are of course offended when a scientific hypothesis is disproved, but such disproofs are recognized as central to the scientific enterprise. Pseudoscience is just the opposite. Hypotheses are often framed precisely so they are invulnerable to any experiment that offers a prospect of disproof, so even in principle they cannot be invalidated. Practitioners are defensive and wary. Skeptical scrutiny is opposed. When the pseudoscientific hypothesis fails to catch fire with scientists, conspiracies to suppress it are deduced. 

___________

Study, Research and Science:

All three of these terms refer to the acquisition of information about something; they differ in the degree of exactitude and applied discipline they imply. The problem is that an academically trained scientist will assign meaning to the terms based on how they are used in relation to the methodologies of mainstream science, while a person working in the outer reaches of a frontier subject will use the terms more in the sense of “this is what I am trying to do.” However, pseudoskeptics, the general public, governments, universities and funding agencies all take their lead from academically trained scientists. In that context, Research and Science can only be conducted by someone holding an academic doctorate. A person may say that he or she is “researching a subject” as one might research a vacation, but it is not acceptable to use the term in any context that might imply research as part of Science. Something like “I am researching ghosts” is simply unacceptable terminology from the academic viewpoint. It does not matter whether you have a college degree or, if you do, what the subject of that degree is. If you are learning about a subject, perhaps acting as a practitioner and trying to see what background sound works or what video noise is best, then you are studying the subject. You are not doing research and you are not conducting science; at best, you are a naturalist observing nature. If you have devised a hypothesis and a protocol to test that hypothesis, then you are conducting research. If you put the results of that research in a coherent report with documented data that might be reviewed by others, and at least intend to publish it in some kind of vetted journal, then you can reasonably call your work Science. The catch here is that you may still not be conducting good science, or appropriate science. It is also important to formalize your hypothesis and protocol by writing them down in advance, in case you plan to publish later.

_

Problems with research:

The following table identifies a number of fallacies, problems, biases, and effects that scholars have, over the centuries, recognized as confounding the conduct of good research. Note that some of these “methodological potholes” remain contentious among some scholars. (Based on Huron 2000)  

Problem, description, and remedy/advice
ad hominem argument Criticizing the person rather than criticizing the argument. Focus on the quality of the argument. Be prepared to learn from people you don’t like. Avoid construing a debate as a personal or social fight.
discovery fallacy Criticizing an idea because of its origin (for example, an idea given in a religious text). Criticize the justifications offered in support of an idea rather than how the idea originated.
ipse dixit Appealing to an authority figure in support of an argument. Cite published research rather than identifying authority figures. Provide references so others can judge the quality of the supporting research for themselves. Recognize that experts are fallible.
ad baculum argument Appealing to physical or psychological threat. Do not threaten.
egocentric bias The tendency to assume that other people experience things the same way we do. Don’t rely exclusively on introspection. Listen carefully to what others report. Carry out a survey or run an experiment in order to observe the behaviors of others. Be wary when generalizing from your own experiences.
cultural bias The inappropriate application of a concept to people from another culture. Talk with culturally knowledgeable people. Carry out cross-cultural experiments. Listen carefully in post-experiment debriefings.
cultural ignorance The failure to make a distinction that people in another culture readily make. Talk with culturally knowledgeable people. Listen carefully in post-experiment debriefings.
over-generalization The tendency to assume that a research result generalizes to a wide variety of real-world situations. Be circumspect when describing results. Look for converging evidence. Ask other people’s opinions. Analyze additional works. Run further experiments.
inertia fallacy The idea that research consistent with a particular conclusion will “grow” in the future. A subtle fallacy that is evident in such statements as “Research is increasingly showing that …”. Future research is just as likely to overturn a current theory as to confirm it. Research results do not have inertia. Talk about research results in the past tense (“Research has shown …” rather than “Research is showing …”). Avoid “growth” or “band-wagon” metaphors when describing the evidence pertaining to some theory.
relativist fallacy The belief that no idea, hypothesis, theory or belief is better than another. Avoid “absolute” relativism; the world appears to be “relatively relative.” Don’t mistake relativism for pluralism.
universalist phobia A prejudice against the possibility of cross-cultural universals. Familiarize yourself with the music from a variety of cultures. Investigate notions of similarity and difference. Use cross-cultural surveys or experiments where appropriate.
problem of induction The problem (identified by Hume) that no number of particular observations can establish the truth of some general conclusion. Avoid claiming you know the truth. Present your research results as “consistent” or “inconsistent” with a particular theory, hypothesis or interpretation.
positivist fallacy The problem arising when a phenomenon is deemed not to exist because no evidence is available: “Absence of evidence is interpreted as evidence of absence.” Recognize that not all phenomena leave obvious evidence of their existence.
confirmation bias The tendency to see events as conforming to some interpretation, hypothesis, or theory while viewing falsifying events as “exceptions”. Be systematic in your observations. When counting examples that either confirm or contradict your theory, do not change the counting criteria as you go in order to exclude some contradicting instances. Establish the qualifying criteria before you start counting.
hindsight bias The ease with which people confidently interpret or explain any set of existing data. Whenever possible, attempt to predict observations in advance. Aim to test ideas rather than to look for confirmation.
unfalsifiable hypothesis The formulation of a theory, hypothesis or interpretation which cannot be, in principle, falsified. Whenever possible, formulate theories, hypotheses or interpretations so they are, in principle, falsifiable. Identify the sorts of observations that would be inconsistent with your views.
post-hoc hypothesis Following data collection, the formulation and testing of additional hypotheses not envisaged before the data were collected. Limit. Beware of hindsight bias and multiple tests. Collect new data; analyze additional works.
smorgasbord thinking Having enough hypotheses to explain all possibilities. Don’t deceive yourself that you have only one prediction. Write your prediction down before you analyze any data. Ask yourself whether you have an explanation should the data show a reverse trend; if so, recognize that this is bad.
ad-hoc hypothesis The proposing of a supplementary hypothesis that is intended to explain why a favorite theory or interpretation failed an experimental test. Open to grave abuse. Try to avoid. Test the ad hoc hypothesis in a separate follow-up study.
sensitivity syndrome The tendency to try to interpret every perturbation in a data set; a failure to recognize that data always contains some “noise”. Use test-retest and other techniques to estimate the margin of error for any collected data. Report chance levels, p values, effect sizes. Beware of hindsight bias.
positive results bias The tendency of scholarly journals to publish only studies that demonstrate positive results (i.e., where data and theory agree). Seek replications for suspect phenomena. Be aware of the possible “bottom-drawer effect”.
bottom-drawer effect Unawareness of unpublished negative results of earlier studies. A consequence of positive results bias. Maintain contact and communicate within a scholarly community. Ask other scholars whether they have carried out a given analysis, survey or experiment. Widely report negative results through informal channels.
head-in-the-sand syndrome The failure to test important theories, assumptions, or hypotheses that are readily testable. Be willing to test ideas that everyone presumes are true. Ignore criticisms that you are merely confirming the obvious. Collect pertinent data. Carry out analyses. Do a survey. Run an experiment.
data neglect The tendency to ignore readily available information when assessing theories, assumptions or hypotheses. Don’t ignore existing resources. Test your hypotheses using other available data sets.
research hoarding The failure to make the fruits of your scholarship available for the benefit of others. Publish often. Prefer to write short research articles rather than books. Make your data available to others.
double-use data The use of a single data set both to formulate a theory and to “independently” test the theory. Avoid. Collect new data.
skills neglect The human disposition to resist learning new scholarly methods that may be pertinent to a research problem. Resist scholarly laziness. Engage in continuing education. Learn things your peers and teachers don’t know. Don’t assume you received a thorough education.
control failure The failure to contrast an experimental group with a control group. Add a control group.
third variable problem The presumption that two correlated variables are causally linked; such links may arise through an unknown third variable. Avoid interpreting correlation as causality. Carry out an experiment in which manipulating variables can test notions of probable causality.
reification Falsely concretizing an abstract concept (e.g. regarding spatial representations of pitch structure as mental representations). Take care with terminology.
validity problem When an operational definition of a variable fails to accurately reflect the true theoretical meaning of the variable. Think carefully when forming operational definitions. Use more than one operational definition. Seek converging evidence.
anti-operationalizing problem The tendency to raise perpetual objections to all operational definitions. Propose better operational definitions. Seek converging evidence using several alternative operational definitions.
problem of ecological validity The problem of generalizing results from controlled experiments to real-world contexts. Seek converging evidence between controlled experiments and experiments in real-world settings.
naturalist fallacy The belief that what IS is what OUGHT to be. Imagine desirable alternatives.
presumptive representation The practice of representing others to themselves. Exercise care when portraying or summarizing the views of others — especially when your portrayal causes a disadvantaged group to lose power.
exclusion problem The tendency to prematurely exclude competing views. Remember that “no theory is ever truly dead.”
contradiction blindness The failure to take contradictions seriously. Attend to possible contradictions.
multiple tests If statistical tests rely on a 0.05 significance level, then, on average, one spuriously significant result will occur for every 20 tests performed. Avoid excessive numbers of tests for a given data set. Use statistical techniques to compensate for multiple tests. Split large data sets into one or more “reserved sets” or “training sets”. Prefer hypothesis testing over open-ended chasing after significance. Report the actual number of tests performed in a study. (A numerical sketch of this problem follows the table.)
overfitting Excessive fine-tuning of one’s hypotheses or theories to one particular data set or group of observations. The theories that arise from overfitting describe noise rather than real phenomena. See also sensitivity syndrome. Recognize that samples or observations typically differ in detail. In forming theories, continue to collect new data sets or observations to avoid overfitting.
magnitude blindness The tendency to become preoccupied with statistically significant results that nevertheless have a small magnitude of effect. Aim to uncover the most important factors influencing a phenomenon first.
regression artifacts The tendency to interpret regression toward the mean as an experimental phenomenon. Don’t use extreme values as a sampling criterion. Use a control group (such as scrambling orders) to compare with the experimental group.
range restriction effect Failure to vary an independent variable over a sufficient range of values — with the consequence that the effect size looks small. Decide what range of a variable or what effect size is of interest. Run a pilot study.
ceiling effect When a task is so easy that the experimental manipulation shows little/no effect. Make the task more difficult. Run a pilot study.
floor effect When a task is so difficult that the experimental manipulation shows little/no effect. Make the task easier. Run a pilot study.
sampling bias Any confound that causes the sample to not be representative of the pertinent population. Use random sampling. If there are identifiable sub-groups use a stratified random sample. Where possible, avoid “convenience” or haphazard sampling.
homogeneity bias Failure to recognize that sub-groups within a sample respond differently. For example, where responses diverge between males and females, or between Germans and Dutch. Use descriptive methods and data exploration methods to examine the experimental results. Use cluster analysis methods where appropriate.
cohort bias or cohort effect Differences between age groups in a cross-sectional study that are due to generational differences rather than due to the experimental manipulation. Use a narrower range of ages. Use a longitudinal design instead of a cross-sectional design.
expectancy effect Any unconscious or conscious cues that convey to the subject how the experimenter wants them to respond. Expecting someone to behave in a particular way has been shown to promote the expected behavior. Use standardized interactions with subjects. Use automated data-gathering methods. Use double-blind protocol.
placebo effect The positive or negative response arising from the subject’s belief about the efficacy of some manipulation. Use a placebo control group.
demand characteristics Any aspect of an experiment that might inform subjects of the purpose of the study. Control demand characteristics by: (1) using deception (for example, by adding “filler” questions that make it more difficult for subjects to infer the experimental question), (2) debriefing subjects at the end of the experiment, (3) using field observation, (4) avoiding within-subjects designs where all subjects are aware of all the experimental conditions, and (5) asking subjects not to discuss the experiment with future participants.
reactivity problem When the act of measuring something changes the measurement itself. Use clandestine measurement methods.
history effect Any change between a pretest measure and posttest measure that is not attributable to the experimental manipulation. Isolate subjects from external information. Use post-experiment debriefing to identify possible confounds.
maturation confounds Any changes in responses due to changes in the subject not related to the experimental manipulation. Examples of maturation changes include increasing boredom, becoming hungry, and (for longer experiments) reduced reaction times, fading beauty, becoming wiser, etc. Prefer short experiments. Provide breaks. Run a pilot study.
testing effect In a pretest-posttest design, where a pre-test causes subjects to behave differently. Use clandestine measurement methods. Use a control group with no manipulation between pre- and post-test.
carry-over effect When the effects of one treatment are still present when the next treatment is given. Leave lots of time between treatments. Use between-subjects design.
order effect In a repeated measures design, the effect that the order of introducing treatment has on the dependent variable. Randomize or counter-balance treatment order. Use between-subjects design.
mortality problem In a longitudinal study, the bias introduced by some subjects disappearing from the sample. Convince subjects to continue; investigate possible differences between continuing and non-continuing subjects.
premature reduction The tendency to rush into an experiment without first familiarizing yourself with a complex phenomenon. Use descriptive and qualitative methods to explore a complex phenomenon. Use explorative information to help form testable hypotheses and to identify plausible confounds that need to be controlled.
spelunking “Exploring” a phenomenon without ever testing a proper hypothesis or theory. Don’t just describe. Look for underlying patterns that might lead to “generalized” knowledge. Formulate and test hypotheses.
shifting population problem The tendency to reconceive of a sample as representing a different population than originally thought. Write down in advance what you think the population is.
instrument decay Changes of measurement over time due to fatigue, increased observational skill, or changes of observational standards. Use a pilot study to establish observational standards and develop skill.
reliability problem When various measures or judgments are inconsistent. Solutions: (1) careful training of the experimenter, (2) careful attention to instrumentation, (3) measure reliability, and avoid interpreting effects smaller than the error bars.
hypocrisy Holding others to a higher methodological standard than oneself. Employ higher standards than others. Be as thorough as possible in your self-criticism. Follow your own advice.
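To make the “multiple tests” entry above concrete, here is a minimal numerical sketch in Python (my illustration, not part of Huron’s table). It assumes independent tests at the conventional 0.05 significance level and shows both the inflation of false positives and the simple Bonferroni remedy:

# Chance of at least one spurious "significant" result when every null
# hypothesis is actually true and the tests are independent.
alpha = 0.05
for n_tests in (1, 5, 20, 100):
    p_any_false_positive = 1 - (1 - alpha) ** n_tests
    bonferroni_alpha = alpha / n_tests  # a simple, conservative correction
    print(f"{n_tests:3d} tests: P(>=1 false positive) = {p_any_false_positive:.2f}; "
          f"Bonferroni threshold = {bonferroni_alpha:.4f}")

With 20 independent tests the chance of at least one spurious hit is already about 64 percent, which is why the table advises compensating for multiple tests and reporting how many were performed.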

 __

How can you differentiate the Good Research from the Bad?

The major criteria used to evaluate scientific research are grouped under the headings “reliability” and “validity.” Reliability is the degree to which a measure is free of measurement error. It is typically assessed by looking at the consistency of a measure. For example, a reliable weight scale will produce a consistent weight estimate for the same object over time. A reliable measure of playground aggression will produce consistent aggression estimates for the same child across several observers. There are several types of validity, and each refers to the confidence you can have in making certain conclusions.
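As a rough sketch of how reliability is typically quantified (my illustration; the weight readings below are invented), test-retest reliability can be computed as the correlation between two measurement occasions:

# Test-retest reliability as the Pearson correlation between repeated
# measurements of the same objects (requires Python 3.10+ for
# statistics.correlation).
from statistics import correlation

day1 = [70.2, 81.5, 65.0, 92.3, 77.8]  # kg, first weighing
day2 = [70.4, 81.3, 65.2, 92.0, 78.1]  # kg, same objects re-weighed

r = correlation(day1, day2)
print(f"test-retest reliability (Pearson r) = {r:.3f}")  # close to 1 for a reliable scale

A measure dominated by random error would yield readings that barely correlate across occasions, pushing r towards zero.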

_

Bullshit and research:

In his best-selling book, On Bullshit (Princeton University Press, 2005), philosopher Harry G. Frankfurt argues that bullshit (manipulative misrepresentation) is worse than an actual lie because it denies the value of truth. A bullshitter’s fakery consists not in misrepresenting a state of affairs but in concealing his own indifference to the truth of what he says. The liar, by contrast, is concerned with the truth, in a perverse sort of fashion: he wants to lead us away from it. Truth-tellers and liars are playing opposite sides of a game, but bullshitters take pride in ignoring the rules of the game altogether, which is more dangerous because it denies the value of truth and the harm resulting from dishonesty. People sometimes try to justify their bullshit by citing relativism, a philosophy which suggests that objective truth does not exist (Nietzsche stated, “There are no facts, only interpretations”). An issue can certainly be viewed from multiple perspectives, but anybody who claims that this justifies misrepresenting information, or who denies the value of truth and objective analysis, is really bullshitting.

_

Why most published research findings are false:

There was an interesting essay by John Ioannidis in the journal PLoS Medicine about how most brand-new research findings will turn out to be false. There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; when there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. The following corollaries were derived from the essay:

_

Corollary 1: The smaller the studies conducted in a scientific field, the less likely the research findings are to be true. A small sample size means lower power, and the post-study probability that a finding is true (PPV) decreases as power decreases towards 1 − β = 0.05. Thus, other factors being equal, research findings are more likely to be true in scientific fields that undertake large studies, such as randomized controlled trials in cardiology (several thousand subjects randomized), than in scientific fields with small studies, such as most research on molecular predictors (sample sizes 100-fold smaller).

Note: The positive predictive value (PPV) is the proportion of positive (claimed) findings that are actually true; in this context it is the post-study probability that a research claim is true, analogous to the post-test probability for an individual in diagnostic testing.
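The essay’s central quantity can be written down in a few lines. The sketch below is my paraphrase of Ioannidis’s framework, not code from the essay; the function name and the illustrative inputs are mine. R is the pre-study odds that a probed relationship is true, 1 − β is the power, α is the significance level, and u is the bias term from the same essay:

# Post-study probability that a claimed positive finding is true.
def ppv(R, power, alpha=0.05, u=0.0):
    beta = 1 - power
    true_positives = power * R + u * beta * R       # true findings, plus biased "rescues"
    false_positives = alpha + u * (1 - alpha)       # chance hits, plus biased non-findings
    return true_positives / (true_positives + false_positives)

# Corollaries 1 and 2 in numbers:
print(ppv(R=1.0, power=0.80))           # ~0.94: plausible hypothesis, adequate power
print(ppv(R=0.01, power=0.20))          # ~0.04: speculative hypothesis, low power
print(ppv(R=0.01, power=0.20, u=0.2))   # ~0.01: the same field once bias creeps in

For the underpowered, exploratory setting, a claimed finding is far more likely to be false than true, which is exactly the essay’s point.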

_

Corollary 2: The smaller the effect sizes in a scientific field, the less likely the research findings are to be true. Power is also related to the effect size. Thus research findings are more likely to be true in scientific fields with large effects, such as the impact of smoking on cancer or cardiovascular disease (relative risks 3–20), than in scientific fields where postulated effects are small, such as genetic risk factors for multigenetic diseases (relative risks 1.1–1.5). Modern epidemiology is increasingly obliged to target smaller effect sizes. Consequently, the proportion of true research findings is expected to decrease. In the same line of thinking, if the true effect sizes are very small in a scientific field, the field is likely to be plagued by almost ubiquitous false-positive claims. For example, if the majority of true genetic or nutritional determinants of complex diseases confer relative risks less than 1.05, genetic or nutritional epidemiology would be a largely utopian endeavor.
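To see why small effects drag down power (and hence, via the PPV sketch above, the credibility of findings), here is a back-of-envelope normal approximation for a two-group comparison; it is my illustration, not a calculation from the essay:

# Approximate power of a two-sample z-test for standardized effect size d,
# using only the standard library.
from statistics import NormalDist

def approx_power(d, n_per_group, alpha=0.05):
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    z_effect = d * (n_per_group / 2) ** 0.5  # non-centrality of the test statistic
    return NormalDist().cdf(z_effect - z_crit)

print(approx_power(d=0.8, n_per_group=50))  # large effect: power ~0.98
print(approx_power(d=0.1, n_per_group=50))  # small effect: power ~0.07

Detecting the small effect with 80% power would require roughly 1,600 subjects per group, which is why fields chasing tiny effects so often end up running underpowered studies.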

_

Corollary 3: The greater the number, and the lesser the preselection, of tested relationships in a scientific field, the less likely the research findings are to be true. As shown above, the post-study probability that a finding is true (PPV) depends a great deal on the pre-study odds (R). Thus, research findings are more likely to be true in confirmatory designs, such as large phase III randomized controlled trials, or meta-analyses thereof, than in hypothesis-generating experiments. Fields considered highly informative and creative given the wealth of assembled and tested information, such as microarrays and other high-throughput discovery-oriented research, should have extremely low PPV.

_

Corollary 4: The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true. Flexibility increases the potential for transforming what would be “negative” results into “positive” results, i.e., bias. For several research designs, e.g., randomized controlled trials or meta-analyses, there have been efforts to standardize their conduct and reporting. Adherence to common standards is likely to increase the proportion of true findings. The same applies to outcomes. True findings may be more common when outcomes are unequivocal and universally agreed (e.g., death) rather than when multifarious outcomes are devised (e.g., scales for schizophrenia outcomes). Similarly, fields that use commonly agreed, stereotyped analytical methods (e.g., Kaplan-Meier plots and the log-rank test)  may yield a larger proportion of true findings than fields where analytical methods are still under experimentation (e.g., artificial intelligence methods) and only “best” results are reported. Regardless, even in the most stringent research designs, bias seems to be a major problem. For example, there is strong evidence that selective outcome reporting, with manipulation of the outcomes and analyses reported, is a common problem even for randomized trials. Simply abolishing selective publication would not make this problem go away.

_

Corollary 5: The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true. Conflicts of interest and prejudice may increase bias. Conflicts of interest are very common in biomedical research, and typically they are inadequately and sparsely reported. Prejudice may not necessarily have financial roots. Scientists in a given field may be prejudiced purely because of their belief in a scientific theory or commitment to their own findings. Many otherwise seemingly independent, university-based studies may be conducted for no other reason than to give physicians and researchers qualifications for promotion or tenure. Such nonfinancial conflicts may also lead to distorted reported results and interpretations. Prestigious investigators may suppress via the peer review process the appearance and dissemination of findings that refute their findings, thus condemning their field to perpetuate false dogma. Empirical evidence on expert opinion shows that it is extremely unreliable.

_

Corollary 6: The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true. This seemingly paradoxical corollary follows because, as stated above, the PPV of isolated findings decreases when many teams of investigators are involved in the same field. This may explain why we occasionally see major excitement followed rapidly by severe disappointment in fields that draw wide attention. With many teams working on the same field and massive experimental data being produced, timing is of the essence in beating the competition. Thus, each team may prioritize pursuing and disseminating its most impressive “positive” results. “Negative” results may become attractive for dissemination only if some other team has found a “positive” association on the same question; in that case, it may be attractive to refute a claim made in some prestigious journal. The term Proteus phenomenon has been coined to describe this pattern of rapidly alternating extreme research claims and extremely opposite refutations. Empirical evidence suggests that this sequence of extreme opposites is very common in molecular genetics.

_

These corollaries consider each factor separately, but these factors often influence each other. For example, investigators working in fields where true effect sizes are perceived to be small may be more likely to perform large studies than investigators working in fields where true effect sizes are perceived to be large. Or prejudice may prevail in a hot scientific field, further undermining the predictive value of its research findings. Highly prejudiced stakeholders may even create a barrier that aborts efforts at obtaining and disseminating opposing results. Conversely, the fact that a field is hot or has strong invested interests may sometimes promote larger studies and improved standards of research, enhancing the predictive value of its research findings. Or massive discovery-oriented testing may result in such a large yield of significant relationships that investigators have enough to report and search further, and thus refrain from data dredging and manipulation. The essay predictably generated a small flurry of ecstatic pieces from humanities graduates in the media, along the lines of “science is made-up, self-aggrandising, hegemony-maintaining, transient fad nonsense”. But that reaction misses the point: scientists know how to read a paper. That’s what they do for a living: read papers, pick them apart, pull out what’s good and bad.

_______

How do you overcome the relative risk reduction fallacy?

Number needed to treat (NNT):

A figure which is often quoted in medical research is the number needed to treat (NNT). This is the number of people who need to take a treatment for one person to benefit from it; it is basically another way to express the absolute risk reduction. For example, say a pharmaceutical company reported that medicine X reduced the relative risk of developing a certain disease by 25%. If the absolute risk of developing the disease was 4 in 100, then this 25% reduction in relative risk would reduce the absolute risk to 3 in 100. But this can be looked at another way. If 100 people do not take the medicine, then 4 of those 100 people will get the disease. If 100 people do take the medicine, then only 3 of those 100 people will get the disease. Therefore, 100 people need to take the treatment for one person to benefit and not get the disease. So, in this example, the NNT is 100. A quick way of obtaining the NNT for a treatment is to divide 100 by the absolute reduction in risk, expressed in percentage points. Say the absolute risk of developing complications from a certain disease is 4 in 20, and a medicine reduces the relative risk of getting these complications by 50%. This reduces the absolute risk from 4 in 20 to 2 in 20. In percentage terms, 4 in 20 is 20% and 2 in 20 is 10%; the reduction in absolute risk from taking this medicine is therefore 10 percentage points, and the NNT is 100 divided by 10. That is, 10 people would need to take the medicine for one to benefit. The NNT concept has been gaining in popularity because it is simple to compute and easy to interpret. NNT data are especially useful for comparing the results of multiple clinical trials, in which they make the relative effectiveness of the treatments readily apparent. For example, the NNT to prevent stroke by treating patients with very high blood pressure (DBP 115-129) is only 3, but it rises to 128 for patients with less severe hypertension (DBP 90-109).
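The arithmetic above is easy to mechanize. A minimal sketch (reproducing the worked examples from this paragraph; the function name is mine) computes NNT as the reciprocal of the absolute risk reduction:

# NNT = 1 / absolute risk reduction.
def nnt(baseline_risk, relative_risk_reduction):
    treated_risk = baseline_risk * (1 - relative_risk_reduction)
    absolute_risk_reduction = baseline_risk - treated_risk
    return 1 / absolute_risk_reduction

# 4 in 100 baseline risk, 25% relative reduction -> NNT 100.
print(nnt(baseline_risk=0.04, relative_risk_reduction=0.25))
# 4 in 20 baseline risk, 50% relative reduction -> NNT 10.
print(nnt(baseline_risk=0.20, relative_risk_reduction=0.50))

Note that the same relative reduction yields a very different NNT when the baseline risk differs, which is the whole point of quoting absolute rather than relative figures.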

________

There is a beautifully clear explanation of John Ioannidis’s statistical argument that a large proportion of published research findings are likely to be false, summarized in a widely shared graphic (not reproduced here).

________

Acceptability of fake research by journals:

What hurts science – rejection of good or acceptance of bad?

Science published a story by John Bohannon about the acceptance of a fake, deeply flawed paper by open-access journals despite peer review. Disturbingly, 157 journals accepted the bogus article and only 98 rejected it. The experiments that his paper detailed, he writes, “are so hopelessly flawed that the results are meaningless.” Any upstanding publication ought to have rejected the paper after its editor or a reviewer looked it over, he writes, but that didn’t always happen: more than half of the publications that responded accepted the research, and then asked him to pay for it to be published. Scientists and some journalists swiftly pointed out the grave problems with this attack on open access: the sting operation highlights problems with traditional peer review, but it says very little about open access, because the same experiment was not performed on subscription journals. The bigger question is: how much damage is there from publication of poor science? Does anyone really read the journals that accepted this? Another journalist took the 157 journals that accepted Bohannon’s fake paper and asked how many articles from them appear in the libraries of PubChase users. The answer is that out of over 75,000 articles saved by their users, only 5 are from this set (all five from the single journal Bioinformation). In contrast, their users have 1,631 articles from 12 of the 98 journals that rejected the paper. The real problem in science is not that bad papers get published; that has always been and will continue to be the case. The real problem is that good and important papers are rejected and delayed from publication by journals such as Science. These delays hurt the progress of science, and they demoralize researchers and ruin careers. Finally, when it comes to publishing bad research, Science is not the journal that should be pointing fingers. The 2011 editorial “Retracted Science and the Retraction Index” showed unambiguously that the higher a journal’s impact factor, the higher its retraction rate. Not surprisingly, Science had the second worst retraction rate of all the journals considered in that editorial.

_

A figure accompanying Bohannon’s story (not reproduced here) showed that the highest density of acceptances during his fake-research sting operation came from journals based in India, where academics are under intense pressure to publish in order to earn promotions and bonuses.

_______________

Bad science:

Bad science is a category that covers a whole continuum: from poor research methods and sloppy reasoning, through unchecked biases, to tinkering with data and conclusions to fit the desired outcome (or the outcome more likely to attract further funding), with outright fraud at the extreme end. Bad science is not science; it is an income system wearing the trappings of science.

_

I quote from my article on ‘Science against racism’:

“The drawings from Josiah C. Nott and George Gliddon’s Indigenous Races of the Earth (1857) suggested that black people ranked between white people and chimpanzees in terms of intelligence. That so many great scientific minds, anthropologists and philosophers from the 18th to the 20th century supported racism on the basis of science, reason and logic is shocking. These names include Robert Boyle, Carl Linnaeus, Georges Cuvier, Blumenbach and Buffon, John Hunter, Christoph Meiners, Voltaire, John Mitchell, Samuel Stanhope Smith, Benjamin Rush, Immanuel Kant, G.W.F. Hegel, Arthur Schopenhauer, Charles White, Franz Ignaz Pruner etc. I am amazed that so many scholars in various disciplines could be so wrong. The world of science ought to feel guilty for propagating scientific racism in the 18th and 19th centuries, which supported racism on a purportedly scientific basis, albeit erroneously, and which ultimately led to Nazism, Fascism, slavery in America and apartheid in South Africa. Scientists of those times cannot absolve themselves of the crime of causing suffering, slavery, discrimination, and the deaths of hundreds of thousands of innocent blacks, Jews and other disadvantaged groups. I apologize to the world on behalf of those scientists.”

Basically scientific racism was bad science.

_

In general, bad science can be blamed on

1. The ego of a small subset of scientists

2. Unethical or unreliable research practices

3. Improper statistical sampling (not using a large enough “n” for a trial).

_

The following questions that Belluz encourages us to ask are simple, and yet powerful enough to weed out a lot of bad science:

1. Is the sample representative?
2. How would this study square with the real world?
3. Who funded the research?
4. Was the report based on an experiment or observational science?
5. How big is the study?
6. What about the other evidence?

_

So how do we sort through the rubbish to find the gems of knowledge? We should look for these things:

1. Large population studies (the one that showed that statin drugs were useless followed 64,000 patients).

2. Multivariate studies like the ones in The China Study.

3. A plausible biochemical mechanism (we can’t see how telephone poles cause heart disease).

4. Lab studies, perhaps with animals that show at a cellular level the proposed biochemical mechanism in action.

5. Clinical tests that show the expected outcomes based on prescribing the proposed beneficial course of action.

6. The more significant the results, the more dependable the science.

_

How to avoid bad science: 

Examples of truly Bad Science are everywhere. So, what can one do to avoid ambush by the Bad Scientists? Four small philosophical exercises come to mind.

The first is a methodological battle plan called “Ockham’s Razor,” named after the 14th century philosopher William of Ockham. In philosophy, it says that a problem should be stated in its basic and simplest terms. In science, according to Ockham’s Razor, the theory that fits the facts of a problem with the fewest number of assumptions is the one that should be selected.[vide supra]

The second tactic is termed “reductio ad absurdum,” which is the disproof of a proposition (or stupid experiment) by showing the absurdity to which it leads when carried out to its logical conclusion. A good example of such a situation is the case of the prosecutor who argued that seven fingerprints identified by two print examiners make a total of fourteen little traces of the burglar defendant. The reductio ad absurdum of that case is the notion that a third print examiner would up the ante to twenty-one clues, or that a dozen examiners identifying a single fingerprint would make for twelve traces of the suspect. The clues multiply like bunny rabbits. The mind boggles. Think of where the Bad Scientist is trying to lead you, and look to the dark at the end of the tunnel.

The third fallback is to common sense, the bane of Bad Scientists the world over. It was Thomas Huxley who said, “Science is simply common sense at its best–that is, rigidly accurate in observation and merciless to fallacy in logic.” This is where juries tread on the best-laid plans of eloquent attorneys: they step back for a moment and resort to instinct, to common sense. Lawyers, especially those True Believers who do the prosecuting, are notoriously bad at feigning common sense; they are better at reductio ad absurdum. Cops, on the other hand, are excellent at instinct and common sense, but poor at seeing the absurdity of a proposition’s logical conclusion.

The fourth is to stand one’s ground. This means more than just declining to testify to methods beyond your expertise, declining to selectively ignore evidence to the contrary, and declining to overstate your qualifications. Standing your ground means you have to get in the face of anyone who even hints at being a Bad Scientist. You’ll need to gently redirect the novice Bad Scientist at times, showing him the light and letting him know where you stand.

_

How bad science is generated:

The following information is intended to show a few common examples of bad science and/or problems in research; it is not meant to be an exhaustive list, nor is it meant to point a wagging finger at scientists. In many if not most cases, problems with studies are a result of interpretation and reporting rather than of the study itself.

Overgeneralization and Extrapolation of Results:

This problem typically occurs when the results of a study of a specific sample are extrapolated to what is believed to be a similar group. An example would be research where a new cholesterol drug was tested on females aged 30-50. Can we, or should we, make assumptions about what the drug might do for males, or for 65-year-old women? Absolutely not. Or what about a research study evaluating an after-school reading program in New York City? Would the results of this study be applicable in Des Moines, Iowa? Perhaps, but we cannot and should not assume that the results would be the same.

Conflict of Interest[vide infra]:

You should always look at the conflict of interest statement at the end of a research study as part of your evaluation of potential bias in both the study design and the data. For example, a recent analysis compared 1,534 cancer research studies. Studies that had industry funding focused on treatment 62 percent of the time, compared to 36 percent for studies not funded by industry. And the studies funded by industry focused on epidemiology, prevention, risk factors, screening or diagnostic methods only 20 percent of the time, versus 47 percent for studies that had declared no industry funding.

Absolute vs. Relative Percentages [vide supra & infra]:

Suppose that there was a medical problem that caused 2 people in 1,000,000 to have a stroke, and suppose there was a treatment that would reduce the problem to only 1 person per 1,000,000. This is an improvement of only 0.0001 percentage points in an absolute sense. However, had the researcher reported the results using relative percentages, he could have stated: “New medical treatment yields a 50% reduction in risk of stroke.” This would obviously be quite misleading, but it is a common practice.
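Running the stroke example through the same arithmetic makes the gulf between the two framings obvious (a minimal sketch of the numbers given above):

baseline = 2 / 1_000_000   # risk without treatment
treated = 1 / 1_000_000    # risk with treatment

relative_reduction = (baseline - treated) / baseline  # 0.50 -> "50% reduction!"
absolute_reduction = baseline - treated               # 0.000001, i.e., 0.0001 percentage points
print(f"relative: {relative_reduction:.0%}")
print(f"absolute: {absolute_reduction * 100:.4f} percentage points")
print(f"NNT: {1 / absolute_reduction:,.0f}")          # 1,000,000 treated for one to benefit

Both statements are arithmetically true; only one of them tells a patient what the treatment is actually worth.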

Unpublished Clinical Trials [vide infra]:

A study by the Yale School of Medicine found that only about 50% of clinical trials funded by the National Institutes of Health (NIH) had published their research findings within 30 months of study completion. The problem is also extremely common in research funded by pharmaceutical companies, where unpublished studies may have been withheld to prevent a medical intervention from being shown in a bad light. The problem of unpublished results is also common in studies with small sample sizes. And lastly, it is common for researchers to report only results that are statistically significant and thereby leave out data with negative findings. Such data would be especially helpful for follow-on research during the literature review process.

Selective Observation:

Selective observation is when a researcher is drawn to a particular conclusion by an existing bias or belief. For example, a researcher who is studying obesity may believe that obese people lack willpower and may construct an experiment that involves a plate of doughnuts in a conference room at work. If that researcher records data only about obese subjects and not about non-obese subjects, the experiment will be biased.

Scientific fundamentalists:

The spirit of free inquiry is being repressed within the scientific community by fear-based conformity. Institutional science is being crippled by dogmas and taboos. Increasingly expensive research is yielding diminishing returns. Bad religion is arrogant, self-righteous, dogmatic and intolerant. And so is bad science. But unlike religious fundamentalists, scientific fundamentalists do not realize that their opinions are based on faith. They think they know the truth. They believe that science has already solved the fundamental questions. The details still need working out, but in principle the answers are known. Science at its best is an open-minded method of inquiry, not a belief system. But the “scientific worldview,” based on the materialist philosophy, is enormously prestigious because science has been so successful.

_________

Cognitive Bias in research:

Cognitive bias addresses the question of why people should rely on science for decisions about medicine rather than just on their personal judgment. A cognitive bias is a pattern of deviation in judgment whereby inferences about other people and situations may be drawn in an illogical fashion. Individuals create their own “subjective social reality” from their perception of the input, and an individual’s construction of social reality, not the objective input, may dictate their behaviour in the social world. Thus, cognitive biases may sometimes lead to perceptual distortion, inaccurate judgment, illogical interpretation, or what is broadly called irrationality. Some cognitive biases are presumably adaptive: they may lead to more effective actions in a given context, and they enable faster decisions when timeliness is more valuable than accuracy, as illustrated in heuristics.

_

Bias arises from various processes that are sometimes difficult to distinguish. These include

Information-processing shortcuts (heuristics)

Mental noise

The mind’s limited information processing capacity

Emotional and moral motivations

Social influence

_

Types of biases:

Name Description
Fundamental Attribution Error (FAE) Also known as the correspondence bias (Baumeister & Bushman, 2010), this is the tendency for people to over-emphasize personality-based explanations for behaviours observed in others while under-emphasizing the role and power of situational influences on the same behaviour. Jones and Harris’ (1967) classic study illustrates the FAE: despite being made aware that the target’s speech direction (pro-Castro/anti-Castro) was assigned to the writer, participants ignored the situational pressures and attributed pro-Castro attitudes to the writer when the speech represented such attitudes.
Confirmation bias The tendency to search for or interpret information in a way that confirms one’s preconceptions. In addition, individuals may discredit information that does not support their views. Confirmation bias is related to the concept of cognitive dissonance, whereby individuals may reduce inconsistency by searching for information that re-confirms their views.
Self-serving bias The tendency to claim more responsibility for successes than failures. It may also manifest itself as a tendency for people to evaluate ambiguous information in a way beneficial to their interests.
Belief bias When one’s evaluation of the logical strength of an argument is biased by one’s belief in the truth or falsity of the conclusion.
Framing Using a too-narrow approach and description of the situation or issue.
Hindsight bias Sometimes called the “I-knew-it-all-along” effect, this is the inclination to see past events as having been predictable.

_

Confirmation bias in science makes it bad science:

Confirmation bias (also called confirmatory bias) is a tendency of people to favor information that confirms their beliefs or hypotheses. People display this bias when they gather or remember information selectively, or when they interpret it in a biased way. The effect is stronger for emotionally charged issues and for deeply entrenched beliefs. People also tend to interpret ambiguous evidence as supporting their existing position. A series of experiments in the 1960s suggested that people are biased toward confirming their existing beliefs. Confirmation biases contribute to overconfidence in personal beliefs and can maintain or strengthen beliefs in the face of contrary evidence. Poor decisions due to these biases have been found in political and organizational contexts. A distinguishing feature of scientific thinking is the search for falsifying as well as confirming evidence. However, many times in the history of science, scientists have resisted new discoveries by selectively interpreting or ignoring unfavorable data. Previous research has shown that the assessment of the quality of scientific studies seems to be particularly vulnerable to confirmation bias: it has been found several times that scientists rate studies reporting findings consistent with their prior beliefs more favorably than studies reporting findings inconsistent with them. However, assuming that the research question is relevant, the experimental design adequate, and the data clearly and comprehensively described, the results found should be of importance to the scientific community and should not be viewed prejudicially, regardless of whether they conform to current theoretical predictions. Confirmation bias may thus be especially harmful to objective evaluations of nonconforming results, since biased individuals may regard opposing evidence as weak in principle and give little serious thought to revising their beliefs. Scientific innovators often meet with resistance from the scientific community, and research presenting controversial results frequently receives harsh peer review. In the context of scientific research, confirmation biases can sustain theories or research programs in the face of inadequate or even contradictory evidence; the field of parapsychology has been particularly affected. An experimenter’s confirmation bias can also affect which data are reported: data that conflict with the experimenter’s expectations may be more readily discarded as unreliable, producing the so-called file drawer effect. To combat this tendency, scientific training teaches ways to prevent bias. For example, the experimental design of randomized controlled trials (coupled with their systematic review) aims to minimize sources of bias. The social process of peer review is thought to mitigate the effect of individual scientists’ biases, even though the peer review process itself may be susceptible to such biases.
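The file drawer effect described above can be simulated in a few lines. In this sketch (all parameters invented for illustration) the true effect is exactly zero in every study, yet the “published” record, filtered at p < 0.05, reports a healthy-looking average effect:

# Simulate 1,000 small studies of a null effect; "publish" only the
# statistically significant ones.
import random
from statistics import NormalDist, mean

random.seed(1)
norm = NormalDist()
published = []

for study in range(1000):
    n = 20
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]  # true effect is zero
    effect = mean(sample)
    z = effect / (1 / n ** 0.5)     # standard error of the mean is 1/sqrt(n)
    p = 2 * (1 - norm.cdf(abs(z)))
    if p < 0.05:                    # the rest stay in the file drawer
        published.append(abs(effect))

print(f"{len(published)} of 1000 null studies were 'published'")
print(f"average published |effect|: {mean(published):.2f} (true effect: 0)")

Roughly 5 percent of the null studies clear the significance bar by chance alone, and every one of them overstates the (nonexistent) effect.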

_

While performing experiments to test hypotheses, scientists may have a preference for one outcome over another, and so it is important to ensure that science as a whole can eliminate this bias. This can be achieved by careful experimental design, transparency, and a thorough peer review process of the experimental results as well as any conclusions. After the results of an experiment are announced or published, it is normal practice for independent researchers to double-check how the research was performed, and to follow up by performing similar experiments to determine how dependable the results might be. Taken in its entirety, the scientific method allows for highly creative problem solving while minimizing any effects of subjective bias on the part of its users (namely the confirmation bias).

_

Availability heuristic:

The availability heuristic is a mental shortcut that occurs when people judge the probability of events by how easy it is to think of examples. The availability heuristic operates on the notion that if something can be recalled, it must be important. The availability of consequences associated with an action is positively related to perceptions of the magnitude of those consequences: in other words, the easier it is to recall the consequences of something, the greater we perceive those consequences to be. Sometimes this heuristic is beneficial, but the frequency with which events come to mind is usually not an accurate reflection of their actual probability in real life. For example, if a student were asked whether their college had more students from Colorado or more from California, their answer would probably be based on the personal examples they were able to recall. Likewise, many people think that the likelihood of dying from a shark attack is greater than that of dying from being hit by falling airplane parts, when more people actually die from falling airplane parts; shark-attack deaths are widely reported in the media, whereas deaths from falling airplane parts rarely are. In a 2010 study exploring how vivid television portrayals are used when forming social reality judgments, people watching vivid violent media gave higher estimates of the prevalence of crime and police immorality in the real world than those not exposed to vivid television. These results suggest that television violence has a direct causal impact on participants’ social reality beliefs: repeated exposure to vivid violence leads to an increase in people’s risk estimates about the prevalence of crime and violence in the real world. Researchers have also examined the role of cognitive heuristics in the AIDS risk-assessment process: 331 physicians reported their worry about on-the-job HIV exposure and their experience with patients who have HIV, and by analyzing the answers to the questionnaires handed out, the researchers concluded that the availability of AIDS information did not relate strongly to perceived risk. Participants in a 1992 study read case descriptions of hypothetical patients who varied in their sex and sexual preference. These hypothetical patients showed symptoms of two different diseases, and participants were instructed to indicate which disease they thought each patient had, and then to rate patient responsibility and interactional desirability. Consistent with the availability heuristic, either the more common (influenza) or the more publicized (AIDS) disease was chosen.

_

Illusory superiority:

Illusory superiority is a cognitive bias that causes people to overestimate their positive qualities and abilities and to underestimate their negative qualities, relative to others. This is evident in a variety of areas including intelligence, performance on tasks or tests, and the possession of desirable characteristics or personality traits. It is one of many positive illusions relating to the self, and is a phenomenon studied in social psychology. Illusory superiority is often referred to as the above average effect. Other terms include superiority bias, leniency error, sense of relative superiority, and the primus inter pares effect.

_

Clustering illusion:

The clustering illusion is the tendency to erroneously perceive small samples from random distributions to have significant “streaks” or “clusters”, caused by a human tendency to underpredict the amount of variability likely to appear in a small sample of random or semi-random data due to chance.
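To see how readily pure chance produces convincing-looking "clusters", consider a minimal simulation; this is a hedged sketch with arbitrary, illustrative parameters (20 flips, a streak threshold of 4), not a reproduction of any published experiment:

```python
# Sketch: how often do purely random coin flips contain "streaks"?
# Sample size and threshold are arbitrary illustrative choices.
import random

random.seed(1)

def longest_streak(flips):
    """Length of the longest run of identical outcomes."""
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

# Simulate many small samples of 20 fair coin flips.
trials = 10_000
streaks = [longest_streak([random.randint(0, 1) for _ in range(20)])
           for _ in range(trials)]

# Most short random sequences contain a streak of 4 or more, which
# observers readily misread as a meaningful "cluster".
prop = sum(s >= 4 for s in streaks) / trials
print(f"Sequences of 20 flips containing a streak of 4+: {prop:.0%}")
```

Run as-is, a large majority of these sequences (typically around three-quarters) contain a streak of four or more identical flips, even though every flip is independent and fair; that is the variability our intuition underpredicts.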

_

Asch conformity:

In psychology, the Asch conformity experiments or the Asch Paradigm were a series of laboratory experiments directed by Solomon Asch in the 1950s that demonstrated the degree to which an individual’s own opinions are influenced by those of a majority group. The methodology developed by Asch has been utilized by many researchers, and the paradigm remains in use in present-day social psychology. It has been used to investigate the relationship between conformity and task importance, age, gender, and culture.

_

Compliance bias:

A still more subtle component of healthy-user bias has to be confronted. This is the compliance or adherer effect. Quite simply, people who comply with their doctors’ orders when given a prescription are different and healthier than people who don’t. This difference may be ultimately unquantifiable. The compliance effect is another plausible explanation for many of the beneficial associations that epidemiologists commonly report, which means this alone is a reason to wonder if much of what we hear about what constitutes a healthful diet and lifestyle is misconceived. The lesson comes from an ambitious clinical trial called the Coronary Drug Project that set out in the 1970s to test whether any of five different drugs might prevent heart attacks. The subjects were some 8,500 middle-aged men with established heart problems. Two-thirds of them were randomly assigned to take one of the five drugs and the other third a placebo. Because one of the drugs, clofibrate, lowered cholesterol levels, the researchers had high hopes that it would ward off heart disease. But when the results were tabulated after five years, clofibrate showed no beneficial effect. The researchers then considered the possibility that clofibrate appeared to fail only because the subjects failed to faithfully take their prescriptions. As it turned out, those men who said they took more than 80 percent of the pills prescribed fared substantially better than those who didn’t. Only 15 percent of these faithful “adherers” died, compared with almost 25 percent of what the project researchers called “poor adherers.” This might have been taken as reason to believe that clofibrate actually did cut heart-disease deaths almost by half, but then the researchers looked at those men who faithfully took their placebos. And those men, too, seemed to benefit from adhering closely to their prescription: only 15 percent of them died compared with 28 percent who were less conscientious. “So faithfully taking the placebo cuts the death rate by a factor of two,” says David Freedman, a professor of statistics at the University of California, Berkeley. “How can this be? Well, people who take their placebo regularly are just different than the others. The rest is a little speculative. Maybe they take better care of themselves in general. But this compliance effect is quite a big effect.” The moral of the story, says Freedman, is that whenever epidemiologists compare people who faithfully engage in some activity with those who don’t — whether taking prescription pills or vitamins or exercising regularly or eating what they consider a healthful diet — the researchers need to account for this compliance effect or they will most likely infer the wrong answer. They’ll conclude that this behavior, whatever it is, prevents disease and saves lives, when all they’re really doing is comparing two different types of people who are, in effect, incomparable. It’s this compliance effect that makes these observational studies the equivalent of conventional wisdom-confirmation machines.
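The adherer effect is easy to reproduce in a toy simulation. In the hedged sketch below (Python, with invented numbers; this does not model the actual Coronary Drug Project data), everyone receives an inert pill, yet "good adherers" end up with markedly lower mortality because a hidden self-care trait drives both adherence and survival:

```python
# Sketch of the compliance (adherer) effect: the pill does nothing,
# but a hidden trait drives both adherence and survival.
# All probabilities below are invented for illustration.
import random

random.seed(42)

deaths = {"adherers": [0, 0], "poor adherers": [0, 0]}  # [deaths, total]
for _ in range(100_000):
    care = random.random()                        # hidden self-care trait
    adheres = random.random() < 0.3 + 0.6 * care  # careful people adhere more
    dies = random.random() < 0.30 - 0.20 * care   # and die less, pill or not
    group = "adherers" if adheres else "poor adherers"
    deaths[group][0] += dies
    deaths[group][1] += 1

for group, (d, n) in deaths.items():
    print(f"{group}: mortality {d / n:.1%}")
# Adherers show clearly lower mortality although the pill is inert,
# mirroring the placebo-arm result of the trial described above.
```

The comparison of adherers with non-adherers is, exactly as Freedman says, a comparison of two different kinds of people; no property of the pill is needed to produce the gap.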

_______

Errors:

Conventional methods assume all errors are random and that any modeling assumptions (such as homogeneity) are correct. With these assumptions, all uncertainty about the impact of errors on estimates is subsumed within conventional standard deviations for the estimates (standard errors), and any discrepancy between an observed association and the target effect may be attributed to chance alone. When the assumptions are incorrect, however, the logical foundation for conventional statistical methods is absent, and those methods may yield highly misleading inferences. Epidemiologists recognize the possibility of incorrect assumptions in conventional analyses when they talk of residual confounding (from nonrandom exposure assignment), selection bias (from nonrandom subject selection), and information bias (from imperfect measurement). These biases rarely receive quantitative analysis, a situation that is understandable given that the analysis requires specifying values (such as amount of selection bias) for which little or no data may be available. An unfortunate consequence of this lack of quantification is the switch in focus to those aspects of error that are more readily quantified, namely the random components. Systematic errors can be and often are larger than random errors, and failure to appreciate their impact is potentially disastrous. The problem is magnified in large studies and pooling projects, for in those studies the large size reduces the amount of random error, and as a result the random error may be but a small component of total error. In such studies, a focus on “statistical significance” or even on confidence limits may amount to nothing more than a decision to focus on artifacts of systematic error as if they reflected a real causal effect.
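The point can be made concrete with a small simulation; it is a sketch under invented assumptions (the true value is zero and every measurement carries a fixed systematic error of +0.2). As the sample grows, the confidence interval tightens around the biased value rather than around the truth:

```python
# Sketch: random error shrinks with sample size, systematic error does not.
# The true value and bias below are illustrative assumptions.
import math
import random

random.seed(7)
TRUE_VALUE = 0.0
BIAS = 0.2

def study(n):
    """Mean and 95% confidence interval from n biased, noisy measurements."""
    xs = [TRUE_VALUE + BIAS + random.gauss(0, 1) for _ in range(n)]
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    half = 1.96 * sd / math.sqrt(n)
    return mean, mean - half, mean + half

for n in (100, 10_000, 1_000_000):
    mean, lo, hi = study(n)
    print(f"n={n:>9,}: estimate {mean:+.3f}, 95% CI ({lo:+.3f}, {hi:+.3f})")
# As n grows, the interval tightens around +0.2, not around the true
# value 0: the "significant" result is an artifact of systematic error.
```

This is precisely why a huge pooled study can deliver an exquisitely precise estimate of the wrong quantity: statistical significance measures only the random component of error.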

_

Intentional mistakes missed:

In one classic study, the editor of a prominent medical journal sent out an article containing eight intentional mistakes to 200 of the journal’s regular reviewers. No one spotted them all. Some reviewers didn’t find any. In 2012, researchers at the biotech firm Amgen reported that they could reproduce the results of just six of 53 cancer research studies considered to be breakthroughs. Even if work is ultimately retracted, research has shown it still manages to make its way across the Internet, sometimes with no retraction notice attached, and influence future work.

_________

Conflict of Interest in Biomedical Research making it bad science:

Traditionally, academic biomedical research institutions and for-profit companies have had different missions. Academic institutions have focused on teaching, research, and public service, whereas companies have focused on generating revenue through commercial activities. But the distinction between their missions is becoming blurred now that academic institutions and their employees have opportunities to make significant amounts of money from research contracts, equity holdings, patents, and other relationships with industry, particularly pharmaceutical and biotechnology companies. These opportunities have been facilitated over the past quarter century by the Bayh-Dole Act of 1980 and significant public and private investment in biomedical research. Some of these new financial interests have raised concern about conflicts of interest.

_

The potential conflicts are between, on the one hand, the obligation of biomedical researchers to conduct, oversee, and assess studies according to scientific and ethical principles and, on the other hand, the desire for financial gain. The risk is that these conflicts could adversely affect the quality of research, possibly harming human subjects and anyone who relies on the research, including patients. It is difficult to prove that financial interests have caused researchers or their institutions to waver in their commitment to producing quality studies, and there is considerable disagreement over which financial interests might inappropriately influence whom and under what circumstances. But studies of academic biomedical researchers have found troubling correlations between financial relationships with industry and problems with research, including a tendency to produce pro-sponsor results, increased secrecy, and poor study design. Even in the absence of evidence that research quality has dramatically suffered, conflicts of interest can create the appearance of impropriety. The idea that money threatens impartial judgment has strong intuitive appeal. When researchers and research institutions take money from industry or have a financial stake in their own research, they risk undermining trust in the results of that research, as well as in individual researchers, their institutions, and the whole biomedical research system. Because of the complex nature of biomedical research, it is no exaggeration to say that trust is essential to its continued success.

_

Prevalence of Conflicts of Interest:

Studies on the extent and impact of financial interests in biomedical research have fueled concern. They have found that financial interests between academic researchers and industry are common, and are correlated with both results that favor sponsors and increased secrecy: scientists refusing to share data with colleagues, withholding negative data from publication, and delaying publication of research results.

•A 2007 survey in the Journal of the American Medical Association of medical school department heads found that nearly 60% of respondents had personal relationships with industry.

•A 2003 review article, also in the Journal of the American Medical Association, found studies suggesting that between 23% and 28% of academic investigators received research funding from industry, over 40% received research-related gifts, and about 33% had personal financial ties with industry sponsors.

•The same review also found strong and consistent evidence that industry-sponsored research tends to draw conclusions favoring industry, often uses an inactive control, and sometimes administers a higher dose of the sponsor’s drug than of the comparison drugs or uses comparison drugs that are poorly absorbed. Industry sponsorship of research, as well as involvement with start-up companies and other commercial relationships, was significantly associated with delaying publication or withholding data.

•A 1999 survey conducted by the Association of University Technology Managers, a group that promotes academic technology transfer, found that 68% of academic research institutions held equity in companies that in turn sponsored research there.

_

Benefits along with Risks:

Despite the risks, financial relationships with industry can have a number of benefits. They help bring new drugs and medical devices to market and economic growth to surrounding regions and the nation as a whole. They also boost research budgets, whether directly through research funding or indirectly through gifts, sponsorships, royalty payments, dividends, and proceeds from the sale of start-up companies.  Researchers and students can also derive benefits from collaboration with industry, including the opportunity to access data, equipment, and materials, and the satisfaction of seeing basic research translated into commercial products. In addition, because average academic salaries have barely improved in real value for over 30 years and are lower than salaries of nonacademic scientists, health professionals, and engineers, opportunities for additional compensation can assist research institutions with recruitment and retention. Academic researchers may also have a strong reluctance to give their time, expertise, or resources including inventions to industry without being compensated, even if compensation risks creating a conflict of interest. This reluctance may stem from what one commentator writing in the New England Journal of Medicine in 2005 called ‘the no conflict, no interest principle’, according to which a financial stake increases an individual’s commitment to a project and, therefore, its chances of success. This attitude may also reflect a belief that it is unfair to prevent individuals from profiting from their effort and that restrictions are intrusions on privacy and freedom of association. Finally, the Bayh-Dole Act explicitly encourages commercialization activity by federally funded institutions and mandates that institutions share royalties with individual inventors.

_

Some examples of conflict of interest:

•More than half of the scientists involved in testing Rezulin (Troglitazone), a type-II diabetes drug, had received funding or other compensation from Parke-Davis/Warner-Lambert, its manufacturer. The drug was fast-tracked through Food and Drug Administration approval in 1997 on the basis of their research but was withdrawn from the market three years later when it was shown to have caused liver failure in at least 90 patients. Newspaper reports and academic commentaries expressed concern that the financially conflicted scientists may have concluded that the drug was safer and more effective than the evidence warranted. 

•Potential conflicts of interest extend to government advisors. In 2005, an FDA advisory panel voted to allow the painkillers Celebrex (celecoxib), Bextra (valdecoxib), and Vioxx (rofecoxib) to remain on the market, despite data showing that they increased the risk of heart attacks. A week later, the Center for Science in the Public Interest reported that 10 of the 32 panel members had recently provided consultations to the manufacturers of the drugs, leading to speculation that if these conflicted researchers had been left off the panel, the drugs would have been withdrawn from the market.  

_

Setting Effective and Ethical Policies:

An important barrier to improving conflict of interest policies is surely the mixed message sent to the biomedical research community on the propriety of financial interests. On the one hand, an outcry often accompanies revelations of financial interest because of a strong suspicion that money can cause bias. On the other hand, technology transfer and receipt of industry research funds are encouraged and expected, and carry significant benefits. As long as this mixed message persists, cultural change may be extremely difficult. There is no simple solution to this problem. Institutions, policymakers, and professional organizations will need to acknowledge the benefits and risks of these financial relationships and the care required to navigate them. A sympathetic response to the bind some institutions and individuals feel themselves to be in, together with tools for avoiding or managing financial conflicts of interest, will surely prove more useful than condemnation or cavalier disregard. Finally, continued discussion about the relationships among incentives in research, funding, and financial conflicts of interest is important. As much as possible, this discussion should reach outside the biomedical community to include policymakers, advocacy and professional organizations, and the media. Otherwise, the management of financial conflicts of interest runs the risk of being seen simply as window dressing: a way to make research-industry financial relationships appear innocuous without assuring that they really are.

___________

Correlation becomes Causality in bad Science/ pseudoscience:

Correlation is defined as a statistical relation between two or more variables such that systematic changes in the value of one variable are accompanied by systematic changes in the other. But that does not necessarily mean that the change in the second variable is caused by the change in the first variable, or vice versa. A common mistake of bad research is to assume or imply that association (two things tend to occur together) proves causation (one thing causes or influences another). Two of the most misused and misunderstood terms in evaluating scientific evidence are correlation and causation, two powerful analytical tools that are critical to evidence-based medicine. Correlation is the grouping of variables that may occur together. For example, smoking correlates with lung cancer in that those who smoke tend to develop lung cancer at a statistically significant rate. It is important to note that correlation does not prove causation. However, once numerous well-designed studies correlate smoking with lung cancer, and biological and physiological models support the correlation, we can arrive at a consensus that smoking is not only correlated with lung cancer, it causes it.

_

When something A is correlated with something else B in a study, it is easy to jump to the false conclusion that A is causing B. There is no way to know just from an observational study: A may be causing B; B may be causing A; or a third variable C may be causing both A and B. This third variable C is what is called a confounding variable. An example of this is that ice cream sales in Florida are correlated with shark attacks. Does this mean that eating ice cream causes shark attacks? Maybe the sharks like the ice cream dripping down your chin, so they are more likely to attack. Of course this is silly. When it is hot, people eat more ice cream and they also go swimming more, which leads to more shark attacks. Observational studies are full of confounding variables.
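The ice-cream-and-sharks example can be simulated directly. In the hedged sketch below (all coefficients invented for illustration), temperature is the confounder C driving both ice cream sales (A) and shark attacks (B); the two correlate strongly overall, yet the association largely disappears once we compare only days of similar temperature:

```python
# Sketch of confounding: temperature (C) causes both ice cream sales (A)
# and shark attacks (B), producing a spurious A-B correlation.
# All coefficients are invented for illustration.
import random

random.seed(3)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

days = 1000
temp = [random.uniform(10, 35) for _ in range(days)]     # confounder C
ice_cream = [2 * t + random.gauss(0, 5) for t in temp]   # A: caused by C
sharks = [0.3 * t + random.gauss(0, 2) for t in temp]    # B: caused by C

print(f"corr(ice cream, sharks), all days = {pearson(ice_cream, sharks):+.2f}")

# Holding the confounder roughly fixed (only days near 30 degrees)
# makes the spurious association largely disappear.
hot = [(a, b) for t, a, b in zip(temp, ice_cream, sharks) if 29 <= t <= 31]
a_vals, b_vals = (list(v) for v in zip(*hot))
print(f"corr within 29-31 degrees         = {pearson(a_vals, b_vals):+.2f}")
```

Stratifying on the confounder, as the last two lines do, is the simplest form of the adjustment that careful observational studies must perform; a study that skips it will keep "discovering" the shark-repelling hazards of ice cream.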

_

Scientists use correlation studies as one type of tool to explore the relationship between events. And under the right circumstances, we can draw causality inferences from these studies. For example, if we make a set of observations of days with dark clouds and days with rain, we might find a high correlation and we might infer that dark clouds cause rain. However, correlation does not always mean causation. For example, we could also observe a correlation between the number of people carrying umbrellas and days with rain and erroneously conclude that people carrying umbrellas cause rain. There may be yet a third thing that causes both. In this example, the dark clouds cause the rain and the dark clouds cause people to carry umbrellas, so raining and umbrellas correlate but one doesn’t cause the other.

_

We observe correlations every day, but they are subjective observations for which we cannot state a causal relationship without substantial research. The anti-vaccination movement is rife with such observations, which it uses to “prove” a cause. An anti-vaccine conspiracy website claims that pregnant women are miscarrying babies after getting the vaccine shot. The fact is that there is a statistical chance that a woman will miscarry during any pregnancy. This is random variability, not a cause. In fact, based on the background rate of miscarriage, we could expect that thousands of women would miscarry within 24 hours of getting the H1N1 flu shot. But that is coincidence, not correlation, unless significant studies show a statistical relationship. Likewise, thousands of people broke a bone or had a desire to eat a burger after getting the vaccine shot, simply because in a large enough population you can find literally millions of different events following a vaccine shot. So the miscarriage rate after receiving the swine flu shot is not correlated with the vaccine; it is just a random observation, and there is no biological cause that could be described. Nevertheless, although the “flu vaccine causes miscarriage” conspiracy has been thoroughly debunked by research, the internet meme continues. Pseudoscience sometimes uses the same methodology (or lack of methodology) to make positive assertions. Homeopaths will claim that their dilutions cure whatever disease, yet they have no scientific evidence supporting them, while there is plenty of evidence debunking what they practice. Is there any physical, chemical or biological mechanism that would allow the quack procedure to work? If you cannot imagine one without violating some of the basic laws of science, then we should stand by Occam’s razor, which holds that the simplest explanation is usually the best. So, if there is no evidence of vaccination being correlated, let alone causal, to autism, then that remains the simplest conclusion. To explain a possible tie without any evidence would require us to suspend what we know of most biological processes.

 _

Some examples:

1. Many people die in hospitals, and there are occasional examples of patients harmed during visits (due to medical care errors or hospital-based infections), so a bad researcher could “prove” that hospitals are dangerous. However, this confuses correlation with causation (people often go to hospitals precisely because they are at risk of dying), and provides no base case (what would happen to those people had they not gone to a hospital) for comparison. It is likely that hospitals significantly reduce death rates compared with what would otherwise occur, despite many examples to the contrary.

_

2. Many dense urban neighborhoods have higher crime and mental illness rates than lower-density suburbs, so people sometimes assume that density causes social problems. But these problems actually reflect poverty and isolation. There is no evidence that for a given demographic group, shifting from lower- to higher-density housing causes social problems (1000 Friends, 1999). It would be more appropriate to conclude that urban social problems are caused by middle-class flight and suburban communities’ exclusionary policies that cause disadvantaged people to concentrate in city neighborhoods.

_

3. Breast-fed babies may grow up to be smarter adults:

The study says: the longer babies are breast-fed (A), the smarter they will become (B). But maybe women who can afford to stay at home to breast-feed their children also have time to read to them and better prepare them for the critical first couple of years of school. Also, I have shown in my article ‘science of love’ that the contact comfort a mother gives her infant is far more important than mere nursing; breast feeding does give contact comfort and a better perception of maternal love, resulting in good development of the baby’s prefrontal cortex.

_

So when confronted with a publication that claims a relationship between food and disease, we must be wary. For example, if a study shows a small correlation between blueberry consumption and a reduction in cancer, the blueberry growers will immediately go to the press claiming the life-saving properties of blueberries. Then you might find a bunch of people who eat the standard American diet adding blueberries and expecting their cancer risk to fall. They will then be surprised when the cancer isn’t reduced. What is going on here? There is a correlation between blueberry consumption and eating a pattern of only whole plant-based foods (because blueberries are one of the things such eaters eat), and there is also a large correlation between eating only whole plant-based foods and a reduction in cancer. But people who continue the same eating pattern that promotes cancer (i.e., animal products, oil, sugar) while adding some blueberries are just not going to see a measurable benefit. Can you see how correlation studies can be misinterpreted?

_

Can correlation studies lead to false conclusions, even in the presence of well-meaning researchers?

Yes.

How?

By conducting plenty of small studies rather than one large study.

1. Statistical variability is higher with smaller sample sizes. For example, with a carefully chosen sample size of two, we can find a person who smokes heavily but lived to be 100 and another who didn’t smoke and died at 50. If we calculate a correlation on this very small sample set, we see a high correlation of smoking with long life. Therefore smoking promotes long life. Right? Wrong. As sample sizes get bigger, smoking becomes overwhelmingly correlated with shorter life. So we can be fooled by too small a sample set, as the simulation following this list illustrates.

2. With a larger number of studies, there is a better chance that one of them will have a statistical variation in the direction you want. Then, if you can just suppress the studies that don’t show what you want and promote the ones that do, you are golden. Even without nefarious intent, this effect can happen naturally. For example, if 99 scientists put out a statement showing that smoking was bad for you, how many reporters would pick this up and write about it? None, because it’s not news. However, if one scientist put out a statement that smoking was good for you, that would be news, and reporters would pick it up. So what you hear about in news, books, and other sources may be affected by this newsworthiness bias, which can dramatically distort the reality of the situation.

3. Health is multivariate – that is, it depends on many factors. If we are measuring just one factor – say smoking – we are hoping that all the other factors that affect health and longevity (e.g. diet, exercise) will average out, so that our correlation of the single factor will be meaningful. But if we can find a sample set where negative cross-correlations exist, we can show almost anything we want. For example, if we can find a sample set of people in which the ones who smoke more also eat better and exercise more, and if the diet and exercise are more powerful health promoters than smoking is a health reducer, then the correlation in this sample set of smoking with health will indicate that smoking is good for you. It’s not true of course, but that’s what it looks like with a carefully contrived data set. To avoid this problem the better researchers try to incorporate as many factors as possible into their data collection and then do multivariate analysis on the data set, which reliably pulls out the correlations of each of the individual factors. This process is what makes the data discussed in The China Study such a powerful tool.
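The simulation promised above illustrates points 1 and 2 with synthetic data: smoking truly shortens life by construction, yet correlations from tiny samples scatter so widely that the luckiest of fifty small studies can point the other way, and publishing only that study "proves" whatever one likes. All numbers are invented for illustration.

```python
# Sketch of points 1 and 2: unstable small-sample correlations, plus
# cherry-picking the most favorable study. The effect size, noise,
# and study counts are arbitrary illustrative assumptions.
import random

random.seed(11)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def one_study(n):
    """Correlation between smoking and lifespan in a sample of size n.
    By construction, smoking truly shortens life."""
    smoking = [random.uniform(0, 10) for _ in range(n)]
    lifespan = [80 - 0.5 * s + random.gauss(0, 10) for s in smoking]
    return pearson(smoking, lifespan)

# Point 1: tiny studies scatter widely around the true (negative) value.
small = [one_study(10) for _ in range(50)]
print(f"50 studies of n=10: min {min(small):+.2f}, max {max(small):+.2f}")

# Point 2: report only the most favorable small study; file-drawer the rest.
print(f"'Published' result: {max(small):+.2f} "
      f"(a single large study gives ~{one_study(100_000):+.2f})")
```

The single large study lands near the true, mildly negative correlation; the best of fifty tiny studies will often show the opposite sign, which is exactly the result a motivated sponsor would promote.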

_

The big companies making food and pharmaceuticals are very aware of the points we discussed above. They use these methods routinely to “prove” that their products are good for you when in fact they are not.

__________

Scientific misconduct:

The term “scientific misconduct” refers to situations such as where researchers have intentionally misrepresented their published data or have purposely given credit for a discovery to the wrong person. Scientific misconduct is the violation of the standard codes of scholarly conduct and ethical behavior in professional scientific research. A Lancet review on Handling of Scientific Misconduct in Scandinavian countries provides the following sample definitions:

Danish definition: “Intention or gross negligence leading to fabrication of the scientific message or a false credit or emphasis given to a scientist”

Swedish definition: “Intentional distortion of the research process by fabrication of data, text, hypothesis, or methods from another researcher’s manuscript form or publication; or distortion of the research process in other ways.”

The consequences of scientific misconduct can be damaging for both perpetrators and any individual who exposes it. In addition there are public health implications attached to the promotion of medical or other interventions based on dubious research findings. The U.S. National Science Foundation defines three types of research misconduct: fabrication, falsification, and plagiarism. The consequences of scientific fraud vary based on the severity of the fraud, the level of notice it receives, and how long it goes undetected. For cases of fabricated evidence, the consequences can be wide ranging, with others working to confirm (or refute) the false finding, or with research agendas being distorted to address the fraudulent evidence.

_

Charles Babbage (1792-1871), professor of mathematics at Cambridge University, described three forms of outright scientific dishonesty with regard to observation:

(1) Trimming: the smoothing of irregularities to make the data look extremely accurate and precise.

(2) Cooking: retaining only those results that fit the theory while discarding others that do not.

(3) Forging: inventing some or all of the research data that are reported, and even reporting experiments or procedures to obtain those data that were never performed.

_

Kenneth Feder listed six basic motives for scientific fraud:

(1) Financial gain. Books and television programs proposing outlandish theories earn millions each year.

(2) The pursuit of fame. The desire to find the first, the oldest, the long-lost, and the thought-to-have-been-mythological provides incentive for the invention, alteration, or exaggeration of data.

(3) Nationalistic or racial pride. Many attempt through deception to glorify their ancestors, and by extension themselves, by attributing to them grand and important achievements that are in reality undeserved.

(4) Religious interests. Adherents to particular religions sometimes succumb to the temptation to falsely prove through archaeology the validity of their beliefs.

(5) The desire for a more “romantic” past. There are those who reject what they view as “mundane” theories in favor of those that are more exciting, such as the proposal of lost continents, ancient astronauts, and advanced technologies.

(6) Mental instability. Some unsound claims are the fruits of unsound minds.

_

According to David Goodstein of Caltech, there are motivators for scientists to commit misconduct, which are briefly summarized here:

Career pressure:

Science is still a very strongly career-driven discipline. Scientists depend on a good reputation to receive ongoing support and funding, and a good reputation relies largely on the publication of high-profile scientific papers. Hence, there is a strong imperative to “publish or perish”. Clearly, this may motivate desperate (or fame-hungry) scientists to fabricate results.

Ease of fabrication:

In many scientific fields, results are often difficult to reproduce accurately, being obscured by noise, artifacts, and other extraneous data. That means that even if a scientist does falsify data, they can expect to get away with it – or at least claim innocence if their results conflict with others in the same field. There are no “scientific police” who are trained to fight scientific crimes; all investigations are made by experts in science but amateurs in dealing with criminals. It is relatively easy to cheat although difficult to know exactly how many scientists fabricate data. 

_

A 2009 PLoS ONE study by Dr. Daniele Fanelli, a researcher at the University of Edinburgh who studies bias and misconduct in science, found that two percent of scientists, on average, admitted to at least one incident of serious misconduct, such as fabrication, falsification or modification of data—all of which distort scientific knowledge. When talking about their colleagues’ behavior, 14 percent said they had observed serious scientific misconduct. In this climate, there are several ways that science journalists can enhance their reporting of misconduct and, consequently, public understanding of it. The first step is to make this problematic aspect of the scientific culture explicit, as Carl Zimmer did in his detailed examination of how pressures to achieve and maintain success have fueled an increasing rate of scientific retractions. Another way is for reporters to cover retractions, the public admissions by journals that studies they printed should never have been published, most often because of deliberate deceit or honest mistakes. This is routine practice at Reuters Health, for example, according to Ivan Oransky, its executive editor. Moreover, he said that if a reporter had covered a paper that was later retracted, they would update their earlier report. “When we do it on a five or six year old study, we can look a little silly,” he said. “But we’ll take a little looking silly, if it corrects the record.”

_

Science Journal Retractions of bad science:

Earlier I discussed scientific misconduct in general giving rise to bad science; now I will discuss journal retractions of bad science. In science, a retraction of a published scientific article indicates that the original article should not have been published and that its data and conclusions should not be used as part of the foundation for future research. The common reasons for the retraction of articles are scientific misconduct and serious error. Each year hundreds of peer-reviewed scientific articles are retracted. Most involve no blatant malfeasance; the authors themselves often detect errors and retract the paper. Some retractions, however, as documented on the blog Retraction Watch, entail plagiarism, false authorship or cooked data. No journal is safe from retractions, from the mighty “single-word-title” journals such as Nature, Science and Cell, to the myriad minor, esoteric ones. Bad science papers can have lasting effects. Consider the 1998 paper in the journal the Lancet that linked autism to the MMR vaccine for measles, mumps and rubella. That paper was fully retracted in 2010 upon evidence that senior author Andrew Wakefield had manipulated data and breached several ethical codes of conduct. Nevertheless the erroneous paper continues to undermine public confidence in vaccines. After the Lancet article, MMR vaccination rates dipped sharply and haven’t fully rebounded. This decline in MMR vaccination has been tied to a rise in measles cases resulting in permanent injury and death.

_

Some examples of Science Journal Retractions of Bad Science:

1: Chronic fatigue syndrome is caused by a virus:

Chronic fatigue syndrome (CFS) is a disorder of unknown origin. Some researchers, in fact, consider this a psychological disorder largely confined to wealthier countries, affecting women more than men. Then came a study published in Science in October 2009 by researchers from the Whittemore Peterson Institute in Reno, Nevada. The researchers associated CFS with something called xenotropic murine leukemia virus-related virus (XMRV), which they said they found in blood samples of patients with CFS. CFS advocates were elated. At last there was proof that their disease was real, they said. Retrovirus experts, on the other hand, were skeptical. Maybe the blood samples were contaminated. It turns out that the paper is likely wrong. No other lab could reproduce the results. Science issued an “Editorial Expression of Concern” in July after the authors themselves refused to retract their paper. The Science editorial states bluntly that the study purported “to show that … XMRV was present in the blood of 67 percent of patients with chronic fatigue syndrome compared with 3.7 percent of healthy controls. Since then, at least 10 studies conducted by other investigators and published elsewhere have reported a failure to detect XMRV in independent populations of CFS patients.” The authors finally issued a partial retraction, removing data now known to be from contaminated samples. Science followed with a full retraction. Meanwhile, in a disturbing twist, senior author Judy Mikovits was fired from the Whittemore Peterson Institute and arrested in California on charges of possession of stolen property and unlawful taking of computer data, equipment and supplies. Science is investigating whether the data were manipulated.

_

2: Litter breeds crime and discrimination:

It sounded so reasonable: Graffiti and litter in urban settings can trigger changes in the brain that can lead to crime, hatred and discrimination. Alas, the senior author of this April 2011 paper in Science, Dutch social psychologist Diederik Stapel, might have fabricated much of the data. The journal Science retracted the paper in November upon realization that Stapel, a media darling whose name frequented the New York Times, may have faked data in at least 30 papers, according to a report from Stapel’s university, Tilburg University in the Netherlands. Stapel has since been suspended from Tilburg pending further investigation. The objective reader must now question other pet theories from Stapel. These include his “findings” that beauty-advertising works because it makes women feel worse about themselves, and that conservative politics leads to hypocrisy.

_

3: Butterfly meets worm, falls in love, and has caterpillars:

The Proceedings of the National Academy of Sciences (PNAS) published a fantastic claim in 2009 by zoologist Donald Williamson, which was delightfully reported in the science news media. Williamson claimed that ancestors of modern butterflies mistakenly fertilized their eggs with sperm from velvet worms. The result was the necessity for the caterpillar stage of the butterfly life cycle. The PNAS paper got a few laughs among evolutionary scientists, but it hasn’t yet been retracted. Williamson’s follow-up 2011 paper in the journal Symbiosis, however, has been retracted. Researchers Michael Hart and Richard Grosberg at the University of Texas, Austin, systematically refuted all of Williamson’s claims in the pages of PNAS by the end of 2009. They based their arguments entirely on well-known concepts of both basic evolution and the genetics of modern worms and butterflies. When Symbiosis published its butterfly-meets-worm article in January 2011, Hart raised questions with the editor.

_

4: Treat appendicitis with antibiotics, not surgery:

The Journal of Gastrointestinal Surgery published an article in 2009 by Indian researchers titled “Conservative management of acute appendicitis.” The gist was that antibiotics might be a safe alternative to an appendectomy, the surgical removal of the appendix. Well, maybe not. The journal retracted the paper in October. Italian surgeons had raised a red flag with the study in a lengthy letter published in 2010 in the same journal, politely citing a multitude of problems with the study’s methodology. The Indian researchers responded a month later with their own two-paragraph letter defending the methodology and calling for a larger study to establish the superiority of antibiotic treatment over surgery. There’s no word whether that larger study is pending, but the journal’s editors retracted the original article for reasons of alleged plagiarism, stating that “significant portions of the article were published earlier” by other researchers in 2000 and 1995.  

_

Scientific enterprise is not just a quest for knowledge and truth; it is also a fairly good reflection of the whole spectrum of human behaviour: from genius, passion and jealousy, to mistakes and misconduct. Although new scientific advances and insights are always exciting, the reaction of many scientists to mistakes and misconduct—and to the accompanying retraction of articles—reflects the collapse of a profound belief in the truth-seeking nature of the ideal scientist: one who is devoid of ordinary human flaws. Recently, there have been several highly publicized retractions in high-profile journals, which creates a feeling that the integrity of science is in decline. It also raises the question of whether retraction rates for scientific articles are higher than in the past. Researchers used the Medline database to calculate both the number of published articles and the number of retractions since 1950: more than 17 million articles have been recorded in Medline and, as of 21 October 2007, 871 of these have been retracted.

_

The figure above shows the number of articles and the percentage of articles retracted since 1950 as recorded in Medline. It shows that retraction rates for scientific articles are higher than in the past.

_

The increasing rate of retracted scientific articles is a disturbing trend. Although correction of the scientific record is laudable per se, erroneous or fraudulent research can cause enormous harm, diverting other scientists to unproductive lines of investigation, leading to the unfair distribution of scientific resources, and in the worst cases, even resulting in inappropriate medical treatment of patients. Furthermore, retractions can erode public confidence in science. Any retraction represents a tremendous waste of scientific resources that are often supported with public funding, and the retraction of published work can undermine the faith of the public in science and their willingness to provide continued support. The corrosive impact of retracted science is disproportionate to the relatively small number of retracted articles. The scientific process is heavily dependent on trust. To the extent that misconduct erodes scientists’ confidence in the literature and in each other, it seriously damages science itself. As Arst has noted, “All honest scientists are victims of scientists who commit misconduct”. Yet, retractions also have tremendous value. They signify that science corrects its mistakes.

_

Many studies have been conducted to understand trends in the retraction rate and how it has risen over the decades. A study published in PLOS ONE, aiming to understand the scope and characteristics of retracted articles across the full spectrum of scholarly disciplines, surveyed 42 of the largest bibliographic databases for major scholarly fields, plus publisher websites, to identify retracted articles. The authors looked at 4,449 scholarly publications retracted from 1928 to 2011. The number of articles retracted per year increased from 2001 to 2010. Further, retractions due to alleged publishing misconduct such as plagiarism and author-initiated duplicate publication (47%) outnumbered those due to alleged research misconduct or data fabrication (20%) or questionable data/interpretations (42%). According to the study, most retracted articles did not contain flawed data, and the authors of most retracted articles have not been accused of research misconduct. Another study, published recently in the Proceedings of the National Academy of Sciences (PNAS), found that two-thirds of retractions are because of some form of misconduct. This study involved 2,047 retractions of biomedical and life science research articles in PubMed from 1973 to May 3, 2012. Only 21.3% of retractions were attributable to error; in contrast, 67.4% were attributable to misconduct, including fraud or suspected fraud (43.4%), duplicate publication (14.2%), and plagiarism (9.8%).

_

Researchers recently showed in a mathematical model that articles published in high-impact and highly visible journals receive significantly greater scrutiny; consequently, there is a greater chance that flawed articles will be identified in these journals (Cokol et al, 2007).
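A toy model makes the mechanism plausible; this is a hedged sketch of the general idea only, not the actual model of Cokol et al (2007). If every journal publishes flawed papers at the same rate, but each reader has a small independent chance of catching a flaw, then journals with more readers retract far more papers:

```python
# Sketch: equal flaw rates, unequal scrutiny. All rates are invented
# illustrative assumptions, not estimates from the literature.
import random

random.seed(5)
FLAW_RATE = 0.01  # assumed fraction of flawed papers, same at every journal

def retraction_rate(readers, papers=50_000, p_catch_per_reader=0.001):
    """Fraction of papers retracted when each reader independently has
    a small chance of spotting a flaw."""
    retracted = 0
    for _ in range(papers):
        flawed = random.random() < FLAW_RATE
        caught = flawed and random.random() < 1 - (1 - p_catch_per_reader) ** readers
        retracted += caught
    return retracted / papers

for readers in (10, 100, 1000):
    print(f"readers per paper {readers:>5}: retraction rate "
          f"{retraction_rate(readers):.4%}")
# The underlying flaw rate is identical everywhere, but the highly
# visible journal retracts far more: retraction counts partly measure
# detection effort, not just error.
```

On this view, a higher retraction rate at Nature or Science need not mean worse science there; it may simply mean more eyes per paper.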

_

Scientific Misconduct in India:

In November 2010, Steen caused a stir by asserting that “American scientists are significantly more prone to engage in data fabrication or falsification than scientists from other countries.” His assertions were contested strongly by O’Hara, who reanalyzed Steen’s data to show that, when scaled by overall scientific output, the fraud rate for US scientists — at 4.5 per 100,000 papers — is quite close to the world average of about 4 papers per 100,000. More importantly, O’Hara also showed that the fraud rate is much higher, at 18, for Indian scientists; it is this result that prompted another study in which researchers focused on 69 papers by Indian authors published during the 10-year period 2001-2010, only to be retracted later. First, plagiarism is overwhelmingly the primary mode of misconduct in India. Of the 69 retracted papers, the retraction of 45 could be traced to some form of scientific misconduct — plagiarism (of both text and data) and self-plagiarism accounted for 26 and 18, respectively. Only one paper was retracted due to what might amount to falsification. Second, at 44 per 100,000 papers, India’s misconduct-related retraction rate is far higher than the world average for all retractions (due to misconduct as well as genuine errors) of about 17. And, third, this rate could be said to have accelerated during the decade — while it was 34 during the first half, it rose to 48 in the second half. In this study, 9 retractions were due to genuine errors — including three on the editorial side! For the remaining 15 retractions, a reason could not be assigned.

__________

How results were manipulated to show Einstein’s theory of relativity valid:

Arthur Eddington was so convinced of the theory of general relativity that he altered his data to support it. Eddington set out to put Einstein to the test by carefully measuring how light was bent during a solar eclipse. But apparently the examiner went soft. When the results were in, Eddington threw out 16 photographic plates that didn’t support Einstein’s theory. Even worse, he then published his research without those 16 plates and showed how Einstein’s theory accurately predicted the resulting data. It was this experiment that helped launch the public acceptability of relativity. Strangely enough, the hoax still has legs. You can still find the experiment listed in current textbooks as “proof” of Einstein’s theory.

 _

How it was manipulated to show that homosexuality is common:

Alfred Kinsey’s landmark studies of the late 1940s and early 1950s, known as the Kinsey Reports, had a major influence on late-20th-century views of human sexuality. The reported incidence of homosexuality, bisexuality, adultery, and childhood sexual behavior was higher than previously thought, which helped lead to different views of adult and childhood sexual behavior. According to Judith Reisman, however, Kinsey’s research was fraught with very bad scientific method and possibly fraud. He obtained much of his data by interviewing prisoners, his interviewing technique was biased, and he used reports from pedophiles to hypothesize about childhood sexual behavior. Kinsey’s estimates of the extent of homosexual behavior (38.7% in males ages 36 – 40) have not been validated in subsequent studies. In contrast, a Battelle report found that 2.3% of men reported having sex with another man. Nonetheless, Kinsey’s landmark study still remains one of the primary sources for current sexuality discussions.

_

How it was manipulated to show that primitive pre-historic tribes exist:

One of the most startling anthropological discoveries of the 20th century was the discovery of a primitive, cave-dwelling society in the Philippines in 1971. The Tasadays, as they were called, were a find of enormous proportions because they appeared to have lived undisturbed by outside society for hundreds of years. And to many an academic’s delight, anthropologists could now directly observe how people lived in such societies. The Tasadays even used stone tools. If you’re thinking it’s impossible that such an isolated group could exist in the Philippines as late as the 1970s, you’re right. It turns out that their “discoverer,” PANAMIN (Private Association for National Minorities) secretary Manuel Elizalde Jr., paid local farmers to live in the caves, take off their clothes, and appear Stone Age. In return he gave them money and security from counterinsurgency and tribal fighting. The fact that the Tasaday were a hoax was not confirmed until the fall of Marcos in 1986, invalidating, no doubt, many PhD dissertations that had been written in the interim.

_

Cold fusion: where bad science turns into pseudoscience:  

Cold fusion is the claim of nuclear reactions at relatively low temperatures, rather than at millions of degrees. The most commonly-touted system is an electrolytic cell with a palladium cathode electrolyzing deuterium oxide (heavy water). The term was popularized by the work of Stanley Pons and Martin Fleischmann, which gained tremendous publicity but was irreproducible. It was then taken up with enthusiasm by cranks, thinking they could be the ones to bring cheap energy to the world. The most outrageous claims include blatant fraud such as the Energy Catalyzer. Cranks also believe there is a massive conspiracy to suppress information about cold fusion. There are credible scientists working on fusion at less than millions of degrees, but they tend to avoid the term “cold fusion.”  Fleischmann and Pons were electrochemists who had worked together for many years. As they told the story, in 1983 they saw something anomalous and thought, “That’s funny …“; refining their experiments over the next six years, they finally decided that all chemical explanations had been ruled out and that a nuclear reaction was the only remaining possibility. Their paper, “Electrochemically induced nuclear fusion of deuterium“, was accepted by the Journal of Electroanalytical Chemistry on March 22, 1989, but was publicized by press conference the next day. They caught the world’s attention, but were unable to replicate their process publicly after the press releases. Pons’ and Fleischmann’s work was quickly discredited by scientists, due to multiple, repeated failures of replication and a lack of reports of expected neutron flux from the fusion reaction. Deuterium fusion often produces helium-3 as well as substantial amounts of neutrons; the reaction rate necessary for the reported heat would have produced fatal levels of highly-penetrating neutron radiation.  At the present time, with our current understanding of physics and electrochemistry, cold fusion as Pons and Fleischmann described it is impossible. This is because the energy required to kick-start a fusion reaction is very high, and doing it with a mere electrochemical cell is an astounding result — one that would have to have extraordinary evidence to back it up.        

_______

Examples of bad science in the Bible are described below but bad science exists in all religions:

 * The bat is a bird (Lev. 11:19, Deut. 14:11, 18);

 * Some fowls are four-footed (Lev. 11:20-21);

 * Some creeping insects have four legs. (Lev. 11:22-23);

 * Hares chew the cud (Lev. 11:6);

 * Conies chew the cud (Lev. 11:5);

 * Camels don’t divide the hoof (Lev. 11:4);

 * The earth was formed out of and by means of water (2 Peter 3:5 RSV);

 * The earth rests on pillars (1 Sam. 2:8);

 * The earth won’t be moved (1 Chron. 16:30);

 * A hare does not divide the hoof (Deut. 14:7);

 * The rainbow is not as old as rain and sunshine (Gen. 9:13);

 * A mustard seed is the smallest of all seeds and grows into the greatest of all shrubs (Matt. 13:31-32 RSV);

 * Turtles have voices (Song of Sol. 2:12);

 * The earth has ends or edges (Job 37:3);

 * The earth has four corners (Isa. 11:12, Rev. 7:1);

 * Some 4-legged animals fly (Lev. 11:21);

 * The world’s language didn’t evolve but appeared suddenly (Gen. 11:6-9)

 * A fetus can understand speech (Luke 1:44).

 * The moon is a light source like the sun (Gen 1:16)

____________

Pseudoscience:

Professor Steven Dutch provides a straightforward explanation (Dutch, 2010):

•Pseudoscience is demonstrably faulty observations or theories, or elaborate speculation without an adequate basis.

•Pseudoscience is usually supported by logical fallacies. The only way it’s possible to accept faulty data is through faulty reasoning.

•Pseudoscience is in open defiance of scientific consensus.

_

Pseudoscience is a claim, belief, or practice which is presented as scientific but does not adhere to a valid scientific method, lacks supporting evidence or plausibility, cannot be reliably tested, or otherwise lacks scientific status. Pseudoscience is often characterized by the use of vague, contradictory, exaggerated or unprovable claims, an over-reliance on confirmation rather than rigorous attempts at refutation, a lack of openness to evaluation by other experts, and a general absence of systematic processes to rationally develop theories. The demarcation problem between science and pseudoscience has ethical and political implications, as well as philosophical and scientific ones. Differentiating science from pseudoscience has practical implications in the case of health care, expert testimony, environmental policies, and science education. Distinguishing scientific facts and theories from pseudoscientific beliefs, such as those found in astrology, medical quackery, and occult beliefs combined with scientific concepts, is part of science education and scientific literacy. Robert T. Carroll stated: “Pseudoscientists claim to base their theories on empirical evidence, and they may even use some scientific methods, though often their understanding of a controlled experiment is inadequate. Many pseudoscientists relish being able to point out the consistency of their ideas with known facts or with predicted consequences, but they do not recognize that such consistency is not proof of anything. It is a necessary condition but not a sufficient condition that a good scientific theory be consistent with the facts.”

_

Examples of pseudoscience concepts, proposed as scientific when they are not scientific, include: acupuncture, alchemy, ancient astronauts, applied kinesiology, astrology, Ayurvedic medicine, biorhythms, cellular memory, cold fusion, craniometry, creation science, Scientology founder L. Ron Hubbard’s engram theory, enneagrams, eugenics, extrasensory perception (ESP), facilitated communication, graphology, homeopathy, intelligent design, iridology, kundalini, Lysenkoism, metoposcopy, N-rays, naturopathy, orgone energy, Oyagaku, paranormal plant perception, phrenology, physiognomy, qi, New Age psychotherapies (e.g., rebirthing therapy), reflexology, remote viewing, neuro-linguistic programming (NLP), reiki, Rolfing, therapeutic touch, and the revised history of the solar system proposed by Immanuel Velikovsky.

_

Four groups of pseudoscience:

1. Obvious pseudoscience: Theories which, while purporting to be scientific, are obviously bogus may be so labeled and categorized as pseudoscience without further justification.

2. Generally considered pseudoscience: Theories which have a following, but which are generally considered pseudoscience by the scientific community, such as astrology, may properly contain that information and may be categorized as pseudoscience.

3. Questionable science: Theories which have a substantial following, but which some critics allege to be pseudoscience, such as psychoanalysis, may contain information to that effect, but generally should not be so characterized.

4. Alternative theoretical formulations: Alternative theoretical formulations which have a following within the scientific community are not pseudoscience, but part of the scientific process.

_

How widespread is belief in Pseudoscience?

Belief in pseudoscience is relatively widespread. Various polls show the following:

•In the U.S., more than 25 percent of the public believes in astrology, that is, that the position of the stars and planets can affect people’s lives. In one recent poll, 28 percent of respondents said that they believed in astrology; 52 percent said that they did not believe in it; and 18 percent said that they were not sure (Newport and Strausberg 2001). Nine percent of those queried in the 2001 NSF survey said that astrology was “very scientific” and 32 percent answered “sort of scientific”; 56 percent said that it was not at all scientific. A minority of respondents (15 percent) said that they read their horoscope every day or “quite often”; 30 percent answered “just occasionally.” As far as India is concerned, some 90 percent of people believe in astrology.

•At least half of the public believes in the existence of extrasensory perception (ESP). The statistic was 50 percent in the latest Gallup poll and higher in the 2001 NSF survey, in which 60 percent of respondents agreed that “some people possess psychic powers or ESP.”

•A sizable minority of the public believes in UFOs and that aliens have landed on Earth. In 2001, 30 percent of NSF survey respondents agreed that “some of the unidentified flying objects that have been reported are really space vehicles from other civilizations”, and one-third of respondents to the Gallup poll reported that they believed that “extraterrestrial beings have visited earth at some time in the past.”

•Polls also show that one quarter to more than half of the public believes in haunted houses and ghosts, faith healing, communication with the dead and lucky numbers.

_

Warning Signs of Pseudoscience:

Philosophers have identified distinguishing features that can serve as warning signs that a claim may be pseudoscience (Lilienfeld, 2013). The warning signs are as follows:

•Pseudoscience has the tendency to invoke ad hoc hypotheses, which can be thought of as “escape hatches” or loopholes, as a means of immunizing claims from falsification.

•Pseudoscience has an absence of self-correction and an accompanying intellectual stagnation.

•Pseudoscience has an emphasis on confirmation rather than refutation.

•Pseudoscience has a tendency to place the burden of proof on skeptics, not proponents, of claims.

•Pseudoscience has excessive reliance on anecdotal and testimonial evidence to substantiate claims.

•Pseudoscience has evasion of the scrutiny afforded by peer review.

•Pseudoscience has an absence of “connectivity”, that is, a failure to build on existing scientific knowledge.

•Pseudoscience uses impressive-sounding jargon whose primary purpose is to lend claims a facade of scientific respectability.

_

_

What causes pseudoscience to exist in the first place?

In a report, Singer and Benassi (1981) wrote that pseudoscientific beliefs originate from at least four sources:

•Common cognitive errors arising from personal experience

•Erroneous, sensationalistic mass-media coverage

•Sociocultural factors

•Poor or erroneous science education

_

Features of pseudoscience include:

• Dogmatic; ignores contradicting facts

• Subject to confirmation-bias by selectively reporting evidence and research results

• Hypotheses cannot be tested 

• No evolution in understanding or theory

• An appeal to recognized authority is used to support claims

• Metaphorical/analogy driven thinking

• Anecdotes as evidence

• Lack of explicit mechanisms

• Special pleading (elusive evidence)

• Conspiracy theory

• Concept is described for the public rather than scientists

_

Implications of pseudoscience:

Political implications:

The demarcation problem between science and pseudoscience brings up debate in the realms of science, philosophy and politics. Imre Lakatos, for instance, points out that the Communist Party of the Soviet Union at one point declared Mendelian genetics pseudoscientific and had its advocates, including well-established scientists such as Nikolai Vavilov, sent to the Gulag; and that the “liberal Establishment of the West” denies freedom of speech to topics it regards as pseudoscience, particularly where they run up against social mores. In India, Hindu nationalists have consistently supported astrology for political gain under the pretext of supporting ancient Indian science. Pseudoscience is invoked recurrently in political and policy-making discourse, in allegations of distortion or fabrication of scientific findings to support a political position. The Prince of Wales has accused climate change skeptics of using pseudoscience and persuasion to hinder the world from adopting precautionary principles to avert the negative effects of global warming; people have paid attention to the climate skeptics and tried to understand the kind of pseudoscience they are canvassing. But he insisted that evidence of “environmental collapse” already exists, not only in climbing temperatures but also in the toll on particular species such as honey bees.

_

Health and education implications:

Distinguishing science from pseudoscience has practical implications in the case of health care, expert testimony, environmental policies, and science education. Treatments with a patina of scientific authority which have not actually been subjected to scientific testing may be ineffective, expensive, and dangerous to patients, and may confuse health providers, insurers, government decision makers, and the public as to what treatments are appropriate. Claims advanced by pseudoscience may result in government officials and educators making poor decisions in selecting curricula; for example, creation science may replace evolution in the study of biology.

_

Pseudoscience as panacea to solve all your problems:  

In one of the best review articles ever written about pseudoscience, “Investigating the Paranormal,” by David F. Marks (Nature, 13 March 1986), Marks summarizes psychological studies of believers in pseudoscientific concepts and concludes, “Belief in the paranormal is metaphysical and therefore not subject to the constraints of empirically based science…. Pseudoscience is a… system of untestable beliefs steeped in illusion, error and fraud. …Pseudosciences are remarkably stable…; their presence on the edges of science can be expected indefinitely.”  The popularity of pseudoscience is assured, because it invariably tells us things that are reassuring far past the point of being too good to be true. You are grieving over your beloved lost pet dog?  Well, this psychic lady can tell you precisely where to find it, all she has to do is touch its photo!  You are 75 years old and in poor health, but this hippy-looking professor says he’s right on the verge of discovering how people can live for 5,000 years, even you! Wow, where do we send our money?!? You’re 100 pounds overweight and have never been able to slim down? Well, here’s a new miracle diet: eat as much as you want of anything you want and still lose weight, by taking this mystical special wonder herb! Only $100 for a 2-week supply!  And so on and so forth.    

___________

Junk science is faulty scientific data and analysis used to advance special interests and hidden agendas:

Examples of special interests include:

• The media may use junk science to produce sensational headlines and programming, the purpose of which is to generate increased readership and viewership. More readers and viewers mean more revenues from advertisement. The media may also use junk science to advance personal or organizational social and political agendas.

•Personal injury lawyers, sometimes referred to simply as trial lawyers (as in the Association of Trial Lawyers of America or ATLA), may use junk science to extort settlements from deep-pocketed businesses or to bamboozle juries into awarding huge verdicts.

• Social and political activists may use junk science to achieve social and political change.

•Government regulators may use junk science to expand their regulatory authority, increase their budgets or advance the political agenda of elected officials.

•Businesses may use junk science to bad-mouth competitors’ products, make bogus claims about their own products, or to promote political or social change that would increase sales and profits.

•Politicians may use junk science to curry favor with special interest groups, to be politically correct or to advance their own personal political beliefs.

•Individual scientists may use junk science to achieve fame and fortune.

•Individuals who are ill (real or imagined) may use junk science to blame others for causing their illness. Individuals may also use junk science to seek fame and fortune.

_

Pragmatic Applications of junk science:

  • Dismissal of dangers of smoking by tobacco companies
  • Dismissal of dangers of marijuana by legalization advocates
  • Denial of energy shortages
  • Denial of environmental problems
  • Perpetual motion machines
  • Diet fads
  • Quack medical cures

_

How do you know whether the information you get is legitimate science or junk science:

It can be a challenge to determine whether information is junk science. In order to find out whether what is being reported is legitimate information or unfounded, skewed junk science, readers and viewers of mainstream media must develop strategies to tell fact from fiction.

Some things to look for when determining if information is junk science are:

•Careful analysis of the quality and sources of the information

•The resources the source used to obtain the information

•Level of known bias of the source of the information

•Snopes.com – a website dedicated to uncovering incorrect information and providing legitimate support for its claims

•Quackwatch – a website dedicated to debunking myths regarding medical or health-related information

•Climate Skeptic – a website whose purpose is to debunk myths about aspects of climate change information

•JunkScience.com – a website created by Steven Milloy who gathers information to support or debunk the latest information in the media

•Junkfood Science – this is a blog by Sandy Szwarc intended to cover issues presented by the media regarding nutrition

•Mythbusters – a television show on which the hosts create experiments and activities to determine whether well-known scientific claims are fact or fiction

•John Stossel – a media reporter who questions information regularly

•Local news reporters who do investigative work

_

Some examples of Junk science:

1. An example of this is the misinterpretation of the research on eggs and cholesterol. Studies had found that eggs contained cholesterol and that cholesterol in blood contributed to heart disease. So as a result of these studies, people were advised against eating eggs. What was missing, however, was a study showing that the cholesterol in blood is a direct result of cholesterol intake in diet. We now know that that is not the case, that saturated fat intake and overall body fat are the primary contributors to cholesterol production in blood.

_

2. For example, during the early 1990s a massive cholera outbreak in Latin America caused 10,000 deaths as a result of countries refusing to use chlorine to disinfect water supplies because of the labeling of chlorine as a carcinogen by the United States Environmental Protection Agency (EPA).

_

3. Perhaps the most tragic example of junk science was the cancer scare fraud perpetrated by Rachel Carson about the insecticide DDT in her 1962 book, Silent Spring. As a result, in 1972 the EPA banned DDT. The EPA administrator at that time was a member of the radical Environmental Defense Fund and based the ban solely on politics, not on science. The judge who had presided over seven months of hearings concluded that DDT was not a carcinogen to man and did not harm fresh water fish, birds or other wildlife.  Yet today, millions of people worldwide die every year because DDT cannot be used against the mosquitoes that cause malaria.  

_____________

Pseudo-pseudoscience: what is not pseudoscience despite resembling pseudoscience: 

Errors Made in Good Faith:

Polywater was the claim by some chemists in the 1960s that they had created a form of water consisting of long chains of water molecules. It eventually turned out that the observations were due to contamination of microscopic quantities of water by impurities. The original claimants eventually admitted their mistake.

Informed Speculation:

 Attempting to estimate the number of habitable planets and the possibility of life in the universe may or may not bear fruit, but it’s based on real data and is not pseudoscience. Speculating on the history, culture, and geography of those planets, in the total absence of data, is pseudoscience.

_

Allegation against mainstream science by imitation science: 

One of the commonest allegations against mainstream science is that its practitioners only see what they expect to see. Scientists often refuse to test fringe ideas because science tells them that this will be a waste of time and effort; hence they miss ideas which could be very valuable. This is the “blinkers” argument, by analogy with the leather shields placed over horses’ eyes so that they only see the road ahead. It is often put forward by proponents of new-age beliefs and alternative health. It is certainly true that ideas from outside the mainstream of science can have a hard time getting established. But on the other hand, the opportunity to create a scientific revolution is a very tempting one: wealth, fame and Nobel prizes tend to follow from such work. So there will always be one or two scientists who are willing to look at anything new.

If you have such an idea, remember that the burden of proof is on you. The new theory should explain the existing data, provide new predictions and be testable; remember that all scientific theories are falsifiable. Read the articles and improve your theory in the light of your new knowledge. Starting a scientific revolution is a long, hard slog. Don’t expect it to be easy; if it were, we would have them every week.

People putting forward extraordinary claims often refer to Galileo as an example of a great genius being persecuted by the establishment for heretical theories. They claim that the scientific establishment is afraid of being proved wrong, and hence is trying to suppress the truth. This is a classic conspiracy theory: the “conspirators” are all those scientists who have bothered to point out flaws in the claims put forward by the researchers. The usual rejoinder to someone who says “They laughed at Columbus, they laughed at Galileo” is to say “But they also laughed at Bozo the Clown”. Many pseudo- and fringe-scientists react to the failure of science to confirm their prized beliefs not by gracefully accepting the possibility that they were wrong, but by arguing that science is defective. If one chooses to defend a favored idea under the banner of science, then one cannot reject science when it does not support the idea. It is completely contradictory to pay lip-service to non-arbitrary science, and then arbitrarily minimize science in favor of “other ways of knowing”.

_

From Pseudoscience to Science:

There have been incidents where what was once considered pseudoscience became a respectable theory. In 1911, the German astronomer and meteorologist Alfred Wegener first began developing the idea of continental drift. The observation that the coastlines of Africa and South America seemed to fit together was not new: scientists simply couldn’t believe that the continents could have drifted far enough to cross the 5,000-mile Atlantic Ocean. At the time, a common theory held that a land bridge had once existed between Africa and Brazil. However, one day in the library Wegener read a study about a certain species that could not have crossed the ocean, yet whose fossils appeared on both sides of the supposed land bridge. This piece of evidence led Wegener to believe that our world had once been one piece and had since drifted apart.

However, Wegener’s theory encountered much hostility and disbelief. In those times it was the norm for scientists to stay within the scope of their own fields: biologists did not study physics, chemists did not study oceanography, and meteorologists/astronomers like Wegener did not study geology. Thus Wegener’s theory faced much criticism simply because he was not a geologist. Also, Wegener could not explain why the continents moved, just that they did. This lack of a mechanism led to more skepticism, and all these factors combined led to continental drift being viewed as pseudoscience. Today, however, much evidence shows that continental drift is a perfectly acceptable scientific theory. The modern theory of plate tectonics explains continental drift, presenting the idea that the earth’s surface is made up of several large plates that often move up to a few inches every year. Also, the development of paleomagnetism, which allows us to determine the orientation of the earth’s magnetic field at the time a rock formed, suggests that the earth’s magnetic poles have changed many times in the last 175 million years and that at one time South America and Africa were connected.

_
Scientific superstition may lead to pseudoscience:

I quote from my article ‘Superstition’:

Superstition is an innate instinct of associating two or more random events/perceptions, defying logic (reason) and/or knowledge. All animals are instinctively superstitious in the sense that their brains keep on associating random events of the environment perceived by their senses for survival, and this trait is evolutionarily hardwired into the genes of their brain cells. It is an evolutionary design not to think about reasons but just to repeat what seemed to work last time. Humans are no exception, whether they are religious people or atheists. However, instincts derived in an earlier evolutionary time may have had important survival value then, but in a later era may have purely detrimental effects on survival. Science can destroy most superstitions through the scientific method, but science has its own superstitions.

The French scientist Claude Bernard, widely considered the father of physiology, recommended that researchers treat all theories with skepticism. Bernard is not saying that scientists should be extreme skeptics, nor that theories are mere guesses. What he is saying is that theories are attempts to connect the factual dots within a logically consistent model. Theories can help lead us to new discoveries, but we must not let them blind us to facts that don’t fit the model. We must trust our observations or our theories only after experimental verification. If we trust too much, the mind becomes bound and cramped by the results of its own reasoning; it no longer has freedom of action, and so lacks the power to break away from that blind faith in theories which is only scientific superstition. It has often been said that, to make discoveries, one must be ignorant. This opinion, mistaken in itself, nevertheless conceals a truth: it is better to know nothing than to keep in mind fixed ideas based on theories whose confirmation we constantly seek, neglecting meanwhile everything that fails to agree with them. Nothing could be worse than this state of mind; it is the very opposite of inventiveness. In scientific education, it is very important to differentiate between determinism, which is the absolute principle of science, and theories, which are only relative principles to which we should assign but temporary value in the search for truth. In short, we must not teach theories as dogmas or articles of faith. By exaggerated belief in theories, we often give a false idea of science; we overload and enslave the mind, taking away its freedom, smothering its originality and infecting it with the taste for systems. Finally, Bernard points out that not only religious beliefs but scientific beliefs can obstruct the search for truth: two things must be considered in experimental science, method and idea. The object of method is to direct the idea which arises in the interpretation of natural phenomena and in the search for truth. The idea must always remain independent, and we must no more chain it with scientific beliefs than with philosophic or religious beliefs; we must be bold and free in setting forth our ideas, must follow our feeling, and must on no account linger too long in childish fear of contradicting theories…

________

Science versus pseudoscience:

_

_

Scientific Theory vs. Pseudoscientific Theory

The key difference between genuine science and pseudoscience is that pseudoscience theories lack the substance of science, but present themselves as scientific.

•A scientific theory makes claims that are testable. The claims it makes prohibit particular events or occurrences from happening. That is to say, these claims are conceivably refutable.

•A pseudo-scientific theory makes claims that are not testable. Its claims prohibit nothing. There is nothing that could count as disconfirming evidence against such claims.

_

Difference between scientific methods and pseudoscientific methods:

Science:

Although there’s no hard-and-fast formal version, the scientific method goes thus:

  • Observe – look at the world and find a result that seems curious.
  • Hypothesize – come up with a reasonable explanation.
  • Predict – the most important part of a hypothesis or theory is its ability to make predictions. These predictions should be falsifiable and specific.
  • Experiment – compare the predictions with empirical evidence (usually experimental evidence, often supported by mathematics). This step is the reason why a hypothesis or theory has to be falsifiable, if you can’t prove it wrong, you can’t really prove it right. Information from these predictions can lead to a refinement of the hypothesis.
  • Reproduce – ensure the result is a true reflection of reality by verifying it with others.
  • Then repeat as needed. (A toy code sketch of this loop follows the pseudoscientific version below.)

Pseudoscience:

Again, there’s no formal version but from experience the pseudoscientific method goes more like this:

  • Observe, ideally without the use of expensive (if any) equipment.
  • Fantasize an explanation.
  • Extrapolate beyond reason and develop your Galileo gambit.
  • Confirm using experiments designed only to support your fantasy and exclude any possibility of any other result.
  • Rinse and repeat, because in the world of tooth-fairy science there’s no doubt that fifty identically biased studies in zero-impact journals are more persuasive than one.
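To make the contrast concrete, here is a minimal, hypothetical Python sketch of the scientific loop above (my own illustration, not a formal definition of the method): the falsifiable prediction “this coin is fair” is tested against experimental data and then re-tested for reproducibility. The function names and the 1.96 cut-off (a conventional 95% criterion) are assumptions made for the example.

```python
import random

def experiment(n_flips, p_heads):
    """Run one experiment: flip a coin with true bias p_heads, n_flips times."""
    return sum(random.random() < p_heads for _ in range(n_flips))  # observed heads

def prediction_survives(observed, n, p0=0.5, z_crit=1.96):
    """Test the falsifiable prediction 'the coin is fair (p = p0)'.

    Normal approximation: the hypothesis is refuted when the observed count
    sits more than z_crit standard deviations from the expected count.
    """
    expected = n * p0
    sd = (n * p0 * (1.0 - p0)) ** 0.5
    return abs((observed - expected) / sd) <= z_crit

# Observe -> hypothesize -> predict -> experiment -> reproduce:
random.seed(1)
trials = [prediction_survives(experiment(1000, 0.55), 1000) for _ in range(5)]
print(trials)  # a genuinely biased coin should refute the 'fair coin' claim repeatedly
```

The key property is that the prediction can fail, and the independent repetitions (the “reproduce” step) guard against a fluke result; the pseudoscientific method above has neither safeguard.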

_

_

_

Good scientist:

The good scientist is also their own greatest critic. The noted American physicist Richard Feynman (1918-1988) said: “…if you’re doing an experiment, you should report everything that you think might make it invalid — not only what you think is right about it; other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked — to make sure the other fellow can tell they have been eliminated. Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can — if you know anything at all wrong, or possibly wrong — to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition”.

_

Characteristics of pseudo-scientist:

1. The pseudo-scientist considers himself a genius.

2. He regards other researchers as stupid, dishonest or both.

3. He believes there is a campaign against his ideas, a campaign comparable to the persecution of Galileo or Pasteur. He may attribute his ‘persecution’ to a conspiracy by a scientific ‘masonry’ who are unwilling to admit anyone to their inner sanctum without appropriate initiation.

4. Instead of side-stepping the mainstream, the pseudo-scientist attacks it head-on: The most revered scientist is Einstein so Einstein is the most likely establishment figure to be attacked.

5. He has a tendency to use complex jargon, often making up words and phrases. You can compare this to the way that schizophrenics talk in what psychiatrists call ‘neologisms’, “words which have meaning to the patient, but sound like Jabberwocky to everyone else.”

_

_

With regard to a litmus test to identify science versus pseudoscience, no line of demarcation is perfectly straight and strong. The logical positivists contributed to the methodology of science with their litmus test, but Kurt Gödel’s proofs required the positivists to use a priori truths to explain away a priori truths. Their attempt was certainly admirable for science, but their demarcation was too rigid and untenable. You could argue that something should be classified as pseudoscience if it contradicts any of the criteria discussed above, but there is a difference between being inconsistent or incongruent and being in outright contradiction. A theory may have anomalies or as-yet-unknown routes to falsification, but if the theory is utterly incapable of being falsified at any point, it is not scientific. A theory can be scientific if it is based on little evidence or observation, but if there is none at all then, by Popperian standards, it is pure speculation. Also, as an example, if a theory uses illogical reasoning then it should not count as scientific (it can have controversial reasoning, which is not preferred, but it cannot be fallacious).

_

Criteria to distinguish Science from Pseudoscience:

To discern between science and pseudoscience, it is necessary to use criteria to form a basis of comparison. Dr. Massimo Pigliucci of the University of Tennessee proposes a useful list of evaluating criteria to distinguish science from pseudoscience:

1. Anachronistic thinking – If an argument is based on the wisdom of the ancients or on the use of outmoded scientific terminology, there is good reason to be suspicious.

2. Seeking mysteries – Pseudoscience tends to emphasize the existence and supposed unsolvability of mysteries, while science’s objective is to solve mysteries.

3. Appeals to myths – Humans tend to use pseudoscience in an attempt to formulate an explanation for things that they cannot easily understand.

4. Casual approach to evidence – Real science has evidence to back up its claims! Solid, verifiable evidence is the cornerstone that sets science apart from pseudoscience. Hearsay is not considered solid evidence.

5. Spurious similarities – This is the trap of human thinking that attempts to draw parallels between concepts or phenomena that seem reasonable. These presumptions require an in-depth analysis to be verified. Similarities in phenomena can sometimes yield viable insights; however, they require a high standard for verification beyond what appears to be a correlation.

6. Explanation by scenario – This occurs when the pseudoscience proponent tries to explain phenomena by “storytelling”: inventing a scenario to fit the presumed outcome.

7. Research by literary interpretation – This fallacy occurs when the proponents of a pseudoscience position claim that statements made by scientists are open to alternative interpretations that should be regarded as equally valid, in spite of a lack of evidence to support the alternative.

8. Refusal to revise – One of the hallmarks of pseudoscience is the refusal on the part of the pseudoscience proponent to revise their position in the face of new evidence. Pseudoscience proponents will repeat the same arguments over and over, in spite of evidence to the contrary. Real science holds that claims about a phenomenon evolve as new research emerges; a scientific claim is not a stationary, fixed idea closed to revision as new evidence is discovered.

9. Shift the burden of proof to the other side – When a pseudoscience proponent proposes an alternative theory that is contrary to the established, scientifically backed theory, the burden of proof for the pseudoscience claim is on the pseudoscience proponent! Pseudoscientists love to play this game of trying to shift the burden of proof. Furthermore, it is faulty thinking by the pseudoscience proponent to claim that their theory is just as plausible as the established theory because their claim has not been disproven. More often than not, if a claim can’t generate real interest from scientists in the research community, it is most likely because that theory is not plausible, and therefore not worth the time and money necessary for further exploration.

10. Novelty as legitimacy – The claim that a theory is legitimate simply because it is new, alternative, or daring.

_

The table below summarizes differences between science and pseudoscience:

SCIENCE | PSEUDOSCIENCE
Science never proves anything. | Pseudoscience aims to prove an idea.
Self-correcting methodology which involves critical thinking. | Starts with a conclusion and gives easy answers to complex problems.
An ongoing process to develop a better understanding of the physical world by testing all possible hypotheses; the primary goal is a more unified and more complete understanding of the physical world. | Often driven by social, cultural, religious, political or commercial goals.
Involves a continual expansion of knowledge due to intense research. | The field has not evolved much since its beginning; if any research is done, it is done to justify the claims rather than to expand them.
Scientists constantly attempt to refute other scientists’ works. | An attempt to disprove the beliefs is considered hostile and unacceptable.
When results or observations are not consistent with a scientific understanding, intense research follows. | Results or observations that are not consistent with current beliefs are ignored.
Remains questionable at any time: there are two types of theories, those that have been proven wrong by experimentation and data, and every other theory; thus no theory can be proven correct, and every theory is subject to being refuted. | Beliefs usually cannot be tested empirically, so they will likely never be proven wrong; pseudoscientists believe they are right simply because no one can prove them wrong.
Concepts are based on previous understandings or knowledge. | Pseudoscientists are often not in touch with mainstream science and are often driven by ego; famous names and testimonials are used for support rather than scientific evidence.
Findings must be stated in unambiguous, clear language. | Often uses very vague yet seemingly technical terms.
Findings are expressed primarily through peer-reviewed scientific journals that maintain rigorous standards for honesty and accuracy. | The literature is aimed at the general public: no review, no standards, no pre-publication verification, no demand for accuracy and precision.
Reproducible results are demanded; experiments must be precisely described so that they can be duplicated exactly or improved upon. | Results cannot be reproduced or verified; studies, if any, are so vaguely described that one cannot figure out what was done or how.
Failures are searched for and studied closely, because incorrect theories can often make correct predictions by accident, but no correct theory will make incorrect predictions. | Failures are ignored, excused, hidden, lied about, discounted, explained away, rationalized, forgotten, and avoided at all costs.
As time goes on, more and more is learned about the physical processes under study. | No physical phenomena or processes are ever found or studied; no progress is made; nothing concrete is learned.
Convinces by appeal to the evidence, by arguments based on logical and/or mathematical reasoning, by making the best case the data permit; when new evidence contradicts old ideas, they are abandoned. | Convinces by appeal to faith and belief; has a strong quasi-religious element: it tries to convert, not to convince; you are to believe in spite of the facts, not because of them; the original idea is never abandoned, whatever the evidence.
The scientist does not advocate or market unproven practices or products. | The pseudoscientist generally earns some or all of his living by selling questionable products (such as books, courses, and dietary supplements) and/or pseudoscientific services (such as horoscopes, character readings, spirit messages, and predictions).
Uses careful observation and experimentation to confirm or reject a hypothesis; evidence against theories and laws is searched for and studied closely. | Starts with a hypothesis and looks only for evidence to support it; little or no experimentation; conflicting evidence is ignored, excused, or hidden.
Based on well-established, repeating patterns and regularities in nature. | Focuses, without skepticism, on alleged exceptions, errors, anomalies, and strange events.
Reproducible results are required of experiments; in case of failure, no excuses are acceptable. | Results cannot be reproduced or verified; excuses are freely invented to explain the failure of any scientific test.
Personal stories or testimonials are not accepted as evidence. | Personal stories or testimonials are relied upon as evidence.
Consistent and interconnected; one part cannot be changed without affecting the whole. | Inconsistent and not interconnected; any part can be arbitrarily changed in any way without affecting other parts.
Argues from scientific knowledge and from the results of experiments. | Argues from ignorance; the lack of a scientific explanation is used to support ideas.
Uses vocabulary that is well defined and in wide usage among co-workers. | Uses specially invented terms that are vague and applied only to one specific area.
Convinces by appeal to evidence, by arguments based on logical and/or mathematical reasoning. | Attempts to persuade by appeal to emotions, faith, sentiment, or distrust of established fact.
Peer review; literature written for fellow scientists who are specialists and experts. | No peer review; literature written for the general public without checks or verification.
Progresses: as time goes on, more and more is learned. | No progress: nothing new is learned as time passes; there is only a succession of fads.
Always undergoing revision; has a no-holds-barred, let-the-facts-fall-where-they-may attitude. | Clings emotionally to pet ideas long after they have been shown to be wrong.
Limits claims to matters that can be supported reasonably well with good evidence. | Makes sensational and exaggerated claims that go far beyond the evidence.
Actively seeks out comments and criticism from well-informed colleagues before publishing, and shares research data. | Avoids informed criticism before publication; often keeps the details of the work obscured and secretive.
Claims are first published in professional journals that use peer review to ensure the work meets minimal standards of competence and accuracy. | Goes straight to the public; claims are presented in commercial books, magazines, and other venues whose publishers make no independent effort to verify accuracy.
Frames claims in such a way that, if wrong, they can be proven wrong. | Frames claims in such a way that they cannot be proven wrong; constantly shifts the grounds for substantiation; makes vague and ambiguous statements that cannot be tested.
Understands that the burden of proof is on the investigator making the claim; a hypothesis is not considered valid until it has stood up to many tests. | Places the burden of disproof on the critics; holds that the claim is true unless others “disprove” it.
The more a claim contradicts previously demonstrated evidence, the greater the new evidence must be before it is accepted. | Thrives on sensationalism: the more outrageous a claim, the more publicity it receives, and thus the larger its public following.
Realizes all information is imperfect; attempts to assess the amount of error attached to all measurements and the degree of reliability associated with all claims. | Often presents claims as infallibly true; does not distinguish between the varying quality of evidence (if any) used to support claims; claims are often personality-driven, not evidence-oriented.
When shown to be wrong, science acknowledges the fact and modifies the work accordingly. | Sees any criticism as a sign of the closed minds and ignorance of scientists; quick to don the role of martyr and appeal to public sympathies.
Scientists build on other scientific work; they familiarize themselves with previous relevant work before attempting to extend or modify it. | Pseudoscientists often ignore previous studies altogether, especially work that conflicts with their pet theories.
Science is an error-correcting activity. | Pseudoscience is an error-promulgating activity.
Extraordinary claims need extraordinary evidence. | Extraordinary claims are made without extraordinary, unambiguous, reproducible evidence.
Does not move the goalposts. | Predefined endpoints of research are changed when the evidence is not favorable.
No selection bias. | Selects for study only subjects that are likely to agree with or confirm the idea.
Correlation does not mean causation (see the sketch after this table). | Assumes correlation means causation.
Does not demonize skeptics. | When confronted with contradictory evidence, characterizes the skeptic as evil.
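The “correlation does not mean causation” row deserves a concrete illustration. The sketch below is my own toy example (hypothetical variable names, Python standard library only): two quantities driven by a hidden common cause correlate strongly, yet forcing one of them to new values leaves the other untouched.

```python
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
# Hidden common cause z (say, hot weather) drives both x (ice-cream sales)
# and y (drownings); neither causes the other.
z = [random.gauss(0, 1) for _ in range(10_000)]
x = [zi + random.gauss(0, 0.5) for zi in z]
y = [zi + random.gauss(0, 0.5) for zi in z]
print(round(pearson(x, y), 2))         # about 0.8: strong correlation

# 'Intervene' on x: set it at random, independently of z.
x_forced = [random.gauss(0, 1) for _ in range(10_000)]
print(round(pearson(x_forced, y), 2))  # about 0.0: the correlation was never causal
```

A pseudoscientist reads the first number as proof of causation; a scientist asks what the second number, the intervention, shows.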

_

______

Science versus bad science:

Marks of good science:

  • It makes claims that can be tested and verified.
  • It has been published in a peer reviewed journal (but beware … there are some dodgy journals out there that seem credible, but aren’t).
  • It is based on theories that are discussed by and argued for by many experts in the field.
  • It is backed up by experiments that have generated enough data to convince other experts of its legitimacy.
  • Its proponents are secure enough to accept areas of doubt and need for further investigation.
  • It does not fly in the face of the broad existing body of scientific knowledge.
  • Its proponent works for a university and/or has a PhD or other bona fide high-level scientific qualification.

 

Marks of bad science:

  • Has failed to convince many mainstream scientists of its truth.
  • Is not based on experiments that can be reproduced by others.
  • Contains experimental flaws or is based on data that does not convincingly corroborate the experimenter’s theoretical claims.
  • Comes from overconfident fringe experts.
  • Uses over-simplified interpretations of legitimate studies and may combine with imprecise, spiritual or new age vocabulary, to form new, completely untested theories.
  • Speaks dismissively of mainstream science.

__________

Bad science versus pseudoscience:

The fundamental difference between bad science and pseudoscience can be explained with the following hypothetical examples (Hansson 1996).

Case 1: A biochemist performs an experiment that she interprets as showing that a particular protein has an essential role in muscle contraction. There is a consensus among her colleagues that the result is a mere artifact, due to experimental error.

Case 2: A biochemist goes on performing one sloppy experiment after another. She consistently interprets them as showing that a particular protein has a role in muscle contraction, a conclusion not accepted by other scientists.

Case 3: A biochemist performs various sloppy experiments in different areas. One is the experiment referred to in case 1. Much of her work is of the same quality. She does not propagate any particular unorthodox theory.

According to common usage, 1 and 3 are regarded as cases of bad science and only 2 as a case of pseudoscience. What is present in case 2, but absent in the other two, is a deviant doctrine. Isolated breaches of the requirements of science are not commonly regarded as pseudoscientific. Pseudoscience, as it is commonly conceived, involves a sustained effort to promote teachings different from those that have scientific legitimacy at the time. This explains why fraud in science is not usually regarded as pseudoscientific: such practices are not in general associated with a deviant or unorthodox doctrine. On the contrary, the fraudulent scientist is anxious that her results be in conformity with the predictions of established scientific theories; deviations from these would lead to a much higher risk of disclosure.

_

Bad science vs. pseudoscience vis-à-vis rape:

Bad science and pseudoscience produce similar results, but are two different things. Bad science follows the scientific method but uses outdated methods, sloppy designs and procedures; it may draw erroneous conclusions, fail to explore alternate explanations of results, and so on. It is science that contains errors, omissions, and falsehoods or is incomplete. It might also be based on false assumptions, use faulty reasoning, or employ poor logic. Two examples of bad science that are frequently bandied about as “factual” statistics are Mary Koss’ study from the mid-1980s, asserting that one in four women will be the victim of a rape or an attempt between adolescence and the completion of college, and the Eugene Kanin study, indicating that 41% of rape reports are false. The Koss study had a number of problems, but the worst was that Koss ignored statements by her subjects indicating they had not been victims of sexual assault (or attempts); this places the study dangerously close to the category of pseudoscience. Kanin, on the other hand, used a definition of false allegation that may have been far-fetched and overly broad in some respects.

Another feature of bad science is researcher bias buried in the report. This can be illustrated by the World Economic Forum’s Global Gender Gap Report. The authors of this study incorporated bias into their design in a way that precluded finding anything other than what they were looking for: the analysis described any gender imbalance that favored women as an area of “equality” while describing any imbalance that favored men as contributing to the disadvantage of women and as evidence of discrimination. The report also failed to include categories in which women might be more likely to hold an advantage. This report, even more so than Koss’s, might qualify as pseudoscience or outright fraud.

_

Pseudoscience is fake science. It might appear to look like real science, but it does not follow the scientific method, ignores contradictory evidence, and isn’t falsifiable (can’t be disproven). There is usually an indifference to facts: facts that don’t fit are discarded or ignored, and the “facts” that are presented have generally not been proven in any scientific way. The research is often sloppy and may include hearsay, news reports, ancient myths, anecdotes, rumor, personal history, or case examples rather than scientific study. Pseudoscientific research usually begins with a spectacular or implausible hypothesis and searches for evidence that will support it instead of designing experiments that may disprove the hypothesis. It often confuses correlation with causation, and pseudoscientists rarely test their theories. When they do test the theories, failures are explained away as anomalies (the spirits just weren’t willing, or there was a nonbeliever present). The “science” itself rarely progresses. Some new technology may be incorporated as a way to increase the mystery, but the pseudoscientific theory remains unchanged.

Ignorance and fallacy are used in place of fact: the lack of proof to the contrary “proves” the pseudoscientific theory; if it can’t be disproven, it must be true. Science hasn’t been able to prove that widespread underreporting does not exist; therefore, the argument runs, widespread underreporting does exist. Pseudoscientists also appeal to authority or emotion: “Believe the woman.” “We must take action to protect women.” “We must protect our daughters.” This is similar to the claim made by Susan Brownmiller that “Rape is a conscious process of intimidation by which all men keep all women in a state of fear.” It is an appeal to emotion, a deliberate attempt to play on the insecurities of women by using their natural fear of being raped, and it relies upon a complete redefinition of the word: rape is no longer a sexual act; it has become a “process of intimidation.” In the same book she also made an appeal to authority, stating that only 2% of all rape allegations are false. This was supposedly based on statements made by a New York City sex crimes investigator; it was not a reference to any scientifically conducted study and was eventually discredited.

Another characteristic of pseudoscience is that there is typically a profit to be made. Often, those who make extraordinary claims are selling something, or are attempting to secure funding, or even to advance a political agenda. Rape research is a huge industry, especially on college campuses, where federal and state funds are used to support research, prevention programs, and counseling centers. This, of course, is where most of the research is conducted, and it is conducted by researchers whose jobs depend upon findings consistent with a high prevalence of rape and other forms of sexual assault. Koss received considerable support and assistance from Ms. Magazine, a radical feminist publication that pushes a political agenda, in order to conduct her research; Koss credits the magazine with helping to secure her funding and with providing office space and other assistance. Studies funded, conducted, or supported by stakeholders who have a financial or political interest in the outcome should be regarded as highly questionable.

_

The table below summarizes differences between bad science and pseudoscience:  

ASPECT | BAD SCIENCE | PSEUDOSCIENCE
Scientific method | Follows it, but uses outdated methods, sloppy designs and procedures | Does not follow it
Position vis-à-vis science | Bad science is still science, but it is poorly done | Can be a type of bad science, but is not science
Can become good science | Yes | Occasionally
Harmful to society | Yes | Yes
Propagated by | Scientists | Quacks, religious leaders
Dogma | Willing to give up dogma in light of new evidence | Refuses to give up dogma in light of new evidence
Associated with faith & religion | No | Yes
Purpose/intention | Genuine mistake; occasionally commercial interest or fame | Propagate faith, make money, dominate a population, give false hope
Classical example | “Vaccines are harmful and therefore the public must avoid them” | Homeopathy, astrology
Relationship with good science | Can be the beginning of good science, or can oppose good science | No relationship with good science
Conclusions | May draw erroneous conclusions; fails to explore alternative explanations of results | All conclusions are held correct despite evidence to the contrary
Based on | False assumptions, faulty reasoning, or poor logic | Assumptions treated as valid, truthful and non-falsifiable despite evidence to the contrary
Bias | May have confirmation bias or compliance bias | Bias is the basis of pseudoscience; contradicting facts are ignored
Errors | Present | Denies any errors
Conflict of interest | May be present | Always present
Correlation vs. causality | Correlation may wrongly be promoted to causality | Correlation means causality
Retraction in journals | Possible | Never retracts despite evidence to the contrary
Anecdotal information | Can be used as evidence | Anecdotal information is the evidence
Conspiracy theory | Absent | Present
Concept | Described for scientists | Described for lay people
Deviant doctrine | Absent | Present
Fraud in science | Bad science can be fraud in science | Cannot be considered fraud in science because it has no relationship with scientific methods
Falsifiability of basic assumptions | Basic assumptions falsifiable | Basic assumptions not falsifiable

__________

Fringe science vs. mainstream science:

_

The above table shows that there is no qualitative difference between fringe and mainstream science under these criteria; both appear to be science. There is, however, a significant quantitative difference in the extent to which the criteria are met, and it is this quantitative difference that separates fringe science from mainstream science. Statistical comparisons reveal significant quantitative differences between fringe and mainstream journals, as seen in the table above.

____________

The Spectrum of Scientific Probability:

The chart below combines a numerical scale proposed by Arthur Strahler with a zone description by James Trefil; possible examples are listed at each odds level.

CENTER:
10,000:1 in favor - Heliocentric astronomy; quantum mechanics
1,000:1 in favor - Evolution; quarks

FRONTIER:
100:1 in favor
10:1 in favor - Impact-caused extinction
1:1 - Extraterrestrial intelligence
10:1 against - Paleolinguistics
100:1 against - Loch Ness; Bigfoot

FRINGE:
1,000:1 against - UFOs; psychic phenomena
10,000:1 against - Velikovsky; creationism
The Frontier is the most interesting area to scientists. It’s hard to identify topics in the 10-to-one or 100-to-one range, for or against, because these areas are under active exploration, and ideas at these levels rapidly move into the center or out to the fringe. Ideas don’t stay in the Frontier long. Anything more than 100 to one against is probably too iffy to interest most scientists, but there are always a few high rollers who consider the gamble worth the potential payoff.
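As an aside (this arithmetic is implied by the chart but not spelled out in it), the odds levels convert directly into rough probabilities, which makes the scale easier to read. A minimal Python sketch:

```python
def odds_to_probability(ratio, in_favor):
    """Convert odds of the form ratio:1 into a probability that the idea is correct.

    ratio:1 in favor  -> p = ratio / (ratio + 1)
    ratio:1 against   -> p = 1 / (ratio + 1)
    """
    return ratio / (ratio + 1) if in_favor else 1 / (ratio + 1)

print(odds_to_probability(10_000, True))   # ~0.9999 (e.g., heliocentric astronomy)
print(odds_to_probability(1, True))        # 0.5     (e.g., extraterrestrial intelligence)
print(odds_to_probability(10_000, False))  # ~0.0001 (e.g., creationism)
```

On this reading, the Frontier is roughly the band where the probability of an idea being right sits between about 1% and 99%.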

__________

The crucial difference between science and imitation science is admission of being wrong:

Physicists get excited about the possibility that their theory might be wrong and race to double-check pretty much every experiment and calculation of the theory that has been done so far. Just to make it crystal clear: physicists are excited that they may be wrong! And herein lies the difference between science and pseudoscience: you will never, ever find a proponent of pseudoscience who will admit that they might be wrong, never mind actually be excited about it. Never. They will always find ways to explain away the parts of their theories that are shown to be wrong, can’t be proved, or make no logical sense and have no evidence to support them. They will jump through hoops, crawl through holes, do somersaults, whatever it takes to avoid admitting that anything they say or have said is wrong.

Why is this important? Because being willing to admit that you are wrong, to go back to the drawing board, to start over if needed, based on experimental evidence, is crucial to our basic understanding of the world. It is how all scientific and technological progress is made. Scientists are willing to change their minds and to admit that they were wrong if that is where the evidence leads. You won’t get the same from the promoters of pseudoscience, just more contortions and more bullshit. The promoters of pseudoscience will cling to their beliefs to their last dying breath, or at least until they are completely, totally and publicly humiliated. Good scientists will always accept the evidence, even if it means discarding a lifetime of research and work. And this is why science will trump pseudoscience every time.

____________

Why demarcation between science and pseudoscience is important:

A demarcation of science from pseudoscience has many practical applications such as the following:

1. Healthcare: Medical science develops and evaluates treatments according to evidence of their effectiveness. Pseudoscientific activities in this area give rise to ineffective and sometimes dangerous interventions. Healthcare providers, insurers, government authorities and – most importantly – patients need guidance on how to distinguish between medical science and medical pseudoscience.

2. Expert testimony: It is essential for the rule of law that courts get the facts right. The reliability of different types of evidence must be correctly determined, and expert testimony must be based on the best available knowledge. Sometimes it is in the interest of litigants to present non-scientific claims as solid science. Therefore courts must be able to distinguish between science and pseudoscience.

3. Environmental policies: In order to be on the safe side against potential disasters it may be legitimate to take preventive measures when there is valid but yet insufficient evidence of an environmental hazard. This must be distinguished from taking measures against an alleged hazard for which there is no valid evidence at all. Therefore, decision-makers in environmental policy must be able to distinguish between scientific and pseudoscientific claims.

4. Science education: The promoters of some pseudosciences (notably creationism) try to introduce their teachings into school curricula. Teachers and school authorities need clear criteria of inclusion that protect students against unreliable and disproved teachings.

__

Harms of imitation science:

Pseudoscience often strikes educated, rational people as too nonsensical and preposterous to be dangerous and as a source of amusement rather than fear. Unfortunately, this is not a wise attitude. Pseudoscience can be extremely dangerous.

  • Penetrating political systems, it justifies atrocities in the name of racial purity;
  • Penetrating the educational system, it can drive out science and sensibility;
  • In the field of health, it dooms thousands to unnecessary death or suffering;
  • Penetrating religion, it generates fanaticism, intolerance, and holy war;
  • Penetrating the communications media, it can make it difficult for voters to obtain factual information on important public issues.

_

_________

Do scientists aspire to become celebrities? 

There is another false notion commonly held by the layman: that major scientific discoveries are often the products of amateur minds, and therefore that the authority of the scientist is sometimes to be critically suspected. The philosophical assumption behind this is that the discovery of some new fact or idea is usually a matter of accident, and therefore that discovery in science is essentially no different from, say, the finding of a buried treasure; anyone might stumble onto a chest of doubloons without having any education. While there was a time when big discoveries were made by gifted individuals (think of Alexander Fleming and penicillin, or Marconi and radio), most developments are now brought about by organized teams or committees; think of the transistor and of lunar exploration.

Of course, as we have often seen, a few trained scientists are simply charlatans, and a larger percentage are honestly self-deluded. For any scientist to assume that because he is highly educated he cannot be deluded or deceived is a grave error. The layman has much greater difficulty differentiating between the real scientists and the scientists who are simply (innocently) wrong and have chosen to take up residence in that fabled, and increasingly crowded, Ivory Tower. While a scientist in a free society has the same right as any other citizen to speak out on any topic he wishes, many reputable scientists choose to speak or write publicly on subjects outside their established fields of accomplishment or expertise. When a scientist purports to speak authoritatively outside his field of knowledge, he may be exploiting his reputation and playing on it to extend his authority to a possibly unrelated field. An academic who has achieved credibility in the field of statistics cannot legitimately claim that he therefore speaks authoritatively on politics, or that he is able to detect trickery.

In today’s society we are accustomed to seeing celebrities, all too often people in science, endorsing products and services that have no relationship whatsoever to their professional lives, and motion picture stars sell soap and mortgage plans freely without arousing much wonder from the public about why they are found on our TV screens and in our magazines performing this task. We are easily blinded by glamour and reputation, which often do not lend any validation whatsoever to such endorsements. This applies both to movie stars and to Ph.D.s.

_

Celebrities promote bad science:

_

_

Some celebrities abuse their fame to promote dangerous or otherwise harmful misinformation.

Larry King:

Larry King’s job as a professional interviewer is to bring on a huge number of people from all backgrounds and let them speak their minds, and this is a good thing. We hear from people doing good, people doing bad, people we agree with, and people we disagree with. But Larry’s show is supposed to be better than all the other interview shows. Only Larry gets to talk to heads of state, U.S. Presidents, the top movers and shakers. He hits them hard, asks them the tough questions, puts them on the spot. Unless — and that’s a very big unless — they are on the show to promote some pseudoscience or paranormal claim. Of these guests, Larry asks no tough questions. He gives them an unchallenged platform to promote their harmful claim. He gives their web addresses and shows their books and DVDs. He acts as their top salesman for the hour. Larry King gave every indication that CNN fully endorses celebrity psychics, conspiracy theorists, ghost hunters, UFO advocates, and promoters of non-scientific alternatives to healthcare.

Bill Maher:

While we love Bill Maher’s movie Religulous and appreciate that his is one of the very few public voices opposing the 9/11 conspiracy myths, we can’t deny that he has a darker side. Bill Maher is a board member of PETA — one of the people actually approving their payments to people like convicted arsonist Rod Coronado — but his ongoing act that’s most harmful to the world is his outspoken denial of evidence-based medicine. Yes, Bill is correct that a good diet and exercise are good for you, but he seems to think that doctors deny this. Not any doctor I’ve ever spoken to. Bill made it clear in a four-minute speech on his show that he believes government and Big Pharma conspire to keep everyone sick by prescribing drugs. If even a single person takes Bill’s claims to heart and avoids needed medical treatment as a result, Bill Maher is guilty of a terrible moral crime. Considering the huge size of his audience, this seems all too likely.

Prince Charles:

What’s even worse than a comedian denying modern medicine is when the future King of England does the same thing. This is the kind of medieval superstition we expect from witch doctors like South Africa’s former health minister Manto Tshabalala-Msimang, not from the royal family of one of the world’s most advanced nations (well, it would be, except that royal families are kind of a medieval thing too). Through The Prince’s Foundation for Integrated Health, Prince Charles attempts to legitimize and promote the use of untested, unapproved, and implausible alternative therapies of all sorts instead of using modern evidence-based medicine. He has a “collaborative agreement” with Bravewell, the United States’ largest fundraising organization dedicated to the promotion of non-scientific alternatives to healthcare. As perhaps the most influential man in the United Kingdom, Prince Charles displays gross irresponsibility that directly results in untreated disease and death.

Jenny McCarthy:

The most outspoken anti-vaccine advocate is, by definition, the person responsible for the most disease and suffering in our future generation. Jenny McCarthy’s activism has been directly blamed for the current rise in measles. She also blames vaccines for autism, against all the well-established evidence that shows autism is genetic, and she spreads this misinformation tirelessly. She believes autism can be treated with a special diet, and that her own son has been “healed” of his autism through her efforts. Since one of the things we do know about autism is that it’s incurable, it seems likely that her son never had autism in the first place. So Jenny now promotes the claim that her son is an “Indigo child” — a child with a blue aura who represents the next stage in human evolution. If you take your family’s medical advice from Jenny McCarthy, this is the kind of foolishness you’re in for. Instead, get your medical advice from someone with a plausible likelihood of knowing something about it, like say, oh, a doctor — and not a doctor who belongs to the anti-vaccine Autism Research Institute or its Defeat Autism Now! Project. Go to StopJenny.com for more information.

Oprah Winfrey:

The only person who can sit at the top of this pyramid is the one widely considered the most influential woman in the world and who promotes every pseudoscience: Oprah Winfrey. To her estimated total audience of 100 million, many of whom uncritically accept every word the world’s wealthiest celebrity says, she promotes the paranormal, psychic powers, new age spiritualism, conspiracy theories, quack celebrity diets, past life regression, angels, ghosts, alternative therapies like acupuncture and homeopathy, anti-vaccination, detoxification, vitamin megadosing, and virtually everything that will distract a human being from making useful progress and informed decisions in life. Although much of what she promotes is not directly harmful, she offers no distinction between the harmless and the harmful, leaving the gullible public increasingly and incrementally injured with virtually every episode.

___________

Is animal research and experiment bad science?

_

_

Around 50 to 100 million vertebrate animals a year are used worldwide in animal testing. They are either killed during experiments or euthanized. Many millions of animals are bred for experiments but destroyed as surplus. The animals caught up in this massive industry include rodents of all kinds, such as mice, rats, gerbils, hamsters, and guinea pigs, as well as pigs, dogs, monkeys, rabbits, cats, horses, cows, sheep, fish, amphibians, and reptiles, not forgetting birds and insects – you name it! Experiments resembling acts of sadism are inflicted on animals: they are subjected to painful and lethal diseases; deprived, isolated, starved, burned, blinded, poisoned, irradiated; they are used to test chemicals like cleaning products. In many experiments pain or distress is prevented through anesthesia; in many others, however, pain or distress is not relieved. Often anesthesia is not used because it is regarded as interfering with results. Thousands of other experiments are designed to deliberately inflict mental stress.

_

According to the Humane Society, registration of a single pesticide requires more than 50 experiments and the use of as many as 12,000 animals.

Several cosmetic tests commonly performed on mice, rats, rabbits, and guinea pigs include:

a) Skin and eye irritation tests where chemicals are rubbed on shaved skin or dripped into the eyes without any pain relief.

b) Repeated force-feeding studies that last weeks or months, to look for signs of general illness or specific health hazards.

c) Widely condemned “lethal dose” tests, where animals are forced to swallow large amounts of a test chemical to determine what dose causes death.

In tests of potential carcinogens, subjects are given a substance every day for two years. Other tests involve killing pregnant animals and testing their fetuses.

_

Most people who support animal research do so on the basis that they believe it can benefit the human species and cure diseases in some form; however, below I have summarized the reality:

Differences in genes, biology and physiology between animals and humans:

The prime reason animal test results cannot be applied to humans is the difference between species in genetic make-up, physiology, and biology. We simply cannot take the results from one species and apply them to another. There is an overwhelming practical case against vivisection: the results of animal experiments cannot be generalized to human beings because we have a vastly different physiology from other animal species. Strychnine, for example, kills people but not monkeys, and belladonna is deadly to humans yet harmless to rabbits. While morphine calms and anaesthetizes people, it causes dangerously manic excitement in cats and mice. One typical medical consequence of these biological differences between us and other animal species is that the use of hugely beneficial digitalis for cardiac patients was delayed for many years because it was first tested on dogs and resulted in dangerously high canine blood pressure. Beagle dogs, being carnivores adapted to a high intake of meat, do not develop heart disease, yet heart disease is rife in the human population. Lemon juice, commonly consumed by people, can kill cats, just as grapes can kill dogs. One study conducted by the pharmaceutical company Pfizer came to the conclusion that one would be better off tossing a coin than relying on animal experiments to answer the question of which substances are carcinogenic. Only 5–25% of the substances harmful to humans also have adverse effects on the experimental animals. Tossing a coin delivers better results.

_

Research based on animal experimentation repeatedly fails all along the line. 92% of potential pharmaceutical drugs that are shown by animal testing to be effective and safe do not pass clinical trials, either because of insufficient effectiveness or undesired side effects. Of the 8% of substances that are approved, half are later taken off the market because grave, often even lethal, side effects in humans become evident. For instance, the invention of the cancer mouse was believed to be the long-sought key to combating malignant tumours. In the mid-eighties, researchers at Harvard University succeeded in inserting a human cancer gene into the genome of mice, so that the rodents prematurely developed tumours. This genetically engineered mouse was even the first mammal to be patented, in the USA in 1988 and in Europe in 1992. Since then, tens of thousands of cancer mice have been cured, but all the treatments that were successful in rodents failed in humans.

_

Wasteful:

Animal research is very wasteful of money and animal lives. For example, just one study to see if a chemical might cause cancer takes 5 years from start to finish, uses 860 animals and costs between £1–2 million. Out of 100 million animals used worldwide every year, only approximately 20 brand-new drugs a year are approved for human use by the main drugs regulator in the USA (the FDA) – not only does this show that animals are not always used for vital medical research, but also that those animal tests are not very productive!

_

Unreliable:

Because animals do not get many of the diseases we do, such as heart disease, many types of cancer, HIV, Parkinson’s disease, schizophrenia, and so on, these conditions have to be artificially induced. Not only can this involve some very cruel practices, such as brain damage, surgery, or injection with toxic chemicals or infected tissue from other animals, but the diseases the animals get are not the same as the human disease. Animal-based HIV research is not only ethically indefensible, it is also bad science. For example, primates used in HIV/AIDS research are injected with the primate version of the virus – SIV – which causes illness much more quickly than HIV does in humans. Primates used in Parkinson’s research have a toxic chemical injected into their brains which causes similar symptoms, but these can be reversed; patients with Parkinson’s sadly do not get better.

_

Morality:

Animal rights activists generally view moral arguments as the paramount reason for opposing animal experimentation. However, unwilling to rely solely on claims that advance animals’ interests directly, activists also employ a variety of “practical” arguments. When publicly arguing against animal experimentation, the most common strategy criticizes the scientific validity of the experiments (known as “bad science” arguments). Activists claim that animal experimentation is wasteful, redundant, inapplicable, and often harmful to humans. That an animal rights group would make these arguments seems natural: they simply want to see an end to animal experimentation–how that is achieved is irrelevant.

_

Francione made the following series of arguments against the necessity of animal experimentation:

  1. It is logically impossible to point to a causal role of animal experiments in medical discoveries because such experiments are used as a matter of course. Any posited causal link is slight, given that it requires extensive extrapolation from nonhumans to humans.
  2. Animal experimentation is not always the most effective way to solve human health problems.
  3. At least some animal experimentation has been counterproductive and misleading.
  4. Many animal experiments are trivial and not linked to human health.
  5. Nonhuman animal pain and suffering is not reduced to the extent claimed by researchers.

_

Some of the main limitations of animal research are summarized below:

  • Animal studies do not reliably predict human outcomes.
  • Nine out of ten drugs that appear promising in animal studies go on to fail in human clinical trials.
  • Reliance on animal experimentation can impede and delay discovery.
  • Animal studies are flawed by design.

_

False positive and false negative of animal experiments:

(1) The numerous pharmaceutical drugs that were considered safe based on animal experiments, but caused serious or even lethal adverse effects in humans, are proof that the results of animal experiments cannot be transferred to humans with the necessary reliability. Lipobay®, Vioxx®, Trasylol®, Acomplia® and TGN1412 are just the tip of the iceberg. In Germany alone, as many as 58,000 deaths are estimated to be the result of drug side effects.

(2) On the other hand, no one knows how many beneficial pharmaceutical drugs are never released because they are prematurely abandoned on the basis of misleading animal experiments. Many drugs that are highly beneficial nowadays, such as aspirin, ibuprofen, insulin, penicillin or phenobarbital, would not be available if one had relied on animal testing in earlier days, because these substances induce grave damage in certain animal species due to differing metabolic processes. They would have failed outright if subjected to the present-day procedures applied in the development of active ingredients.

_

In an English meta-study, the results of different treatment methods on experimental animals and on patients were compared on the basis of the relevant scientific publications. Only three of the six disorders investigated delivered correlations; the remaining half did not. In a further comparative study, a British research team determined that the results of studies conducted on both animals and humans often differ quite considerably. According to the study, the inexact results of animal experiments can endanger patients and are also a waste of research funding. In a German study, 51 applications for animal experiments that were approved in Bavaria were analyzed with regard to their clinical implementation. The research team discovered that even ten years later not one single project had been demonstrably implemented in human medicine. Animal experimentation is not only useless, it is even harmful: it implies a security that does not exist, and the false results it delivers only impede medical progress.

_

In 1989, researchers at the pharmaceutical giant Merck, Sharp & Dohme (MSD) were working on a promising protease drug. Development was going well until the scientists decided to test the new therapy on dogs and rats. They all died. According to MSD’s former vice-president of worldwide basic research, Bennett M Shapiro, the company “stopped development” of its most promising protease inhibitor. MSD presumed the drug would have the same lethal effect on humans. The result? Research on a potentially life-saving treatment was halted in 1989, and clinical trials of a new protease drug, Crixivan, did not start until 1993. As we now know, protease inhibitors do not have the same fatal consequences for humans. On the contrary, they have dramatically improved the lives of people with HIV. These setbacks in the development of anti-HIV treatments highlight the scientific flaws of animal-based medical research. The resulting four-year delay in the clinical trials of protease-inhibitor treatments may have contributed to the needless deaths of tens of thousands of people worldwide.

_

Most animal experiments are not relevant to human health, they do not contribute meaningfully to medical advances, and many are undertaken simply out of curiosity and do not even pretend to hold promise for curing illnesses. The only reason people are under the misconception that animal experiments help humans is that the media, experimenters, universities and lobbying groups exaggerate the potential of animal experiments to lead to new cures and the role they have played in past medical advances. Researchers from the Yale School of Medicine and several British universities published a paper in the British Medical Journal titled “Where Is the Evidence That Animal Research Benefits Humans?” The researchers systematically examined animal studies and concluded that little evidence exists to support the idea that animal experimentation has benefited humans. In fact, many of the most important advances in health are attributable to human studies, including the discovery of the relationships between cholesterol and heart disease and between smoking and cancer, the development of X-rays, and the isolation of the AIDS virus. Between 1900 and 2000, life expectancy in the United States increased from 47 to 77 years. Although animal experimenters take credit for the greatly improved life expectancy, medical historians report that improved nutrition, sanitation, and other behavioral and environmental factors—rather than anything learned from animal experiments—are responsible for the fact that people are living longer lives.

_

Taking a healthy being from a completely different species, artificially inducing a condition that he or she would never normally contract, keeping him or her in an unnatural and distressful environment, and trying to apply the results to naturally occurring diseases in human beings is dubious at best. Physiological reactions to drugs vary enormously from species to species. Penicillin kills guinea pigs but is inactive in rabbits; aspirin kills cats and causes birth defects in rats, mice, guinea pigs, dogs, and monkeys; and morphine, a depressant in humans, stimulates goats, cats, and horses. Further, animals in laboratories typically display behavior indicating extreme psychological distress, and experimenters acknowledge that the use of these stressed-out animals jeopardizes the validity of the data produced. Sir Alexander Fleming, who discovered penicillin, remarked, “How fortunate we didn’t have these animal tests in the 1940s, for penicillin would probably have never been granted a license, and probably the whole field of antibiotics might never have been realized.” Modern non-animal research methods are faster, cheaper and more relevant to humans than cruel and irrelevant animal tests. Animal experiments persist not because they are the best science, but because of experimenters’ personal biases and archaic traditions. 

_

Views that support animal experiment:

The main argument against animal experiments is this: testing drugs on animals can (and has) produced both “false positives” and “false negatives” with respect to their effects on humans. Further, the argument goes, because animals are unsuited to predict drug effects, they are therefore unsuited for all other scientific endeavors, and animal studies should be abolished forthwith. In the case of using animals to test drugs, a “false positive” would be a case in which the drug produced no adverse effects during animal tests, but did produce adverse effects in humans (safe in animals, unsafe in humans). A “false negative” would be a case in which a drug produced dangerous side effects during animal tests, but did not when used by humans (unsafe in animals, safe in humans). The concept of false positives and false negatives is a fact of life for scientists and physicians – the terms are used all the time in science and medicine (one of the main goals of science and medicine is to correctly distinguish between true and false positives, and between true and false negatives).

_

Drugs are not introduced en masse to the market exclusively on the basis of having been proved safe in animals. They are introduced to the market gradually, over several years, as they pass through 3 phases of clinical trials. In other words, in spite of the fact that a given drug was used safely in animal tests, it is assumed unsafe for widespread dissemination until it has successfully passed clinical trials.  Only after a drug has passed the clinical trials, and its safe dosage, side effects and efficacy have been established to the satisfaction of a governmental authority (yet another couple of years), is it released for general use. And then, it is still monitored (post-release monitoring is sometimes considered the 4th phase of clinical trials). Bear in mind that clinical trials are carefully designed and the patients very carefully monitored (this means objective as well as subjective evaluation – blood tests, urine tests, rash, cardiac output, blood pressure, etc., as well as subjective patient complaints of nausea, shortness of breath, taste abnormalities, fatigue, etc.).

Here’s the system:

Preclinical trials – animal trials – of 3–4 years duration, perhaps several hundred animals used.

Phase I: 20 – 80 volunteers, to determine safety and dosage – of 1 year duration

Phase II: 100 – 300 volunteers, to evaluate effectiveness and identify obvious side effects – of 2 years duration

Phase III: 1000 – 3000 volunteers, to further characterize effectiveness, and identify and monitor adverse effects – of 3 years duration

If the drug passes these three phases, and it satisfies a governmental review – 2 years – it is released.

Phase IV: Post release monitoring of the entire patient population receiving the drug.

_

Typically, drugs that reach the market will have been tested on fewer than a combined total of 4,500 animals and human volunteers (think of this number as a rule-of-thumb cost/benefit tipping point). So, for a drug that has an actual adverse side effect in only 1:100,000 cases, you are unlikely to identify it during the preclinical and clinical trials, even if the response of your animal and human cohorts perfectly mirrors the general population. For novel drugs (and many are not novel) it can be very hard to predict the frequency of side effects, or what they might be, until the drug has been on the market. Once a drug is released, if it is prescribed for a rare condition, it may take years before you see the adverse reaction (it’s the numbers – how long will it take to find that 1 person in 100,000 who will have an adverse reaction? And then, how many people, over what length of time, in what size patient population, should react badly, and how badly, before you issue warnings, restrict use of the drug or withdraw it?). On the other hand, if the drug is intended for a fairly common disease – like rheumatoid arthritis (as in Opren [benoxaprofen]) – and it is widely prescribed to large numbers of patients (say 3,000,000 over 3 years), and the incidence of adverse side effects is 1:100,000, the odds are you’re going to see any unsuspected adverse reaction sooner rather than later.
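
To see why the numbers play out this way, here is a back-of-the-envelope sketch in Python (illustrative only; the 1-in-100,000 incidence, 4,500 trial subjects and 3,000,000 post-release patients are the figures used above):

p = 1 / 100_000          # assumed true incidence of the rare side effect
n_trial = 4_500          # combined animals + volunteers before release
n_market = 3_000_000     # patients exposed once the drug is widely prescribed

def p_detect(n, p):
    # probability of observing at least one event among n subjects
    return 1 - (1 - p) ** n

print(f"during trials: {p_detect(n_trial, p):.1%}")    # roughly 4%
print(f"after release: {p_detect(n_market, p):.1%}")   # effectively 100%

With only a few thousand subjects, a 1-in-100,000 reaction will most likely never be seen before approval, which is exactly why post-release (Phase IV) monitoring matters.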

_

Benoxaprofen was discovered by a team of Lilly chemists at its British laboratory, which had been assigned to explore new anti-arthritic compounds in 1966. Lilly applied for patents on benoxaprofen seven years later and also filed for permission from the FDA to start testing the drug on humans. It had to undergo the three-step clinical testing procedure required by the Federal Government. Lilly began Phase I of the process by testing a handful of healthy human volunteers. These tests had to prove that the drug posed no clear and immediate safety hazards. In Phase II a larger number of human subjects, including some with minor illnesses, were tested; the drug’s effectiveness and safety were the major targets of these tests. Phase III was the largest test and began in 1976. More than 2,000 arthritis patients were administered the drug by more than 100 doctors, who reported the results to the Lilly Company. When the company formally applied to the FDA in January 1980 to begin marketing the drug, the application consisted of more than 100,000 pages of test results and patients’ records. The British Medical Journal reported in May 1982 that physicians in the UK believed that the drug was responsible for at least 12 deaths, mainly caused by kidney and liver failure, and a petition was filed to have benoxaprofen removed from the market. On the fourth of August 1982 the British government temporarily suspended sales of the drug in the UK ‘on grounds of safety’. The British Committee on the Safety of Medicines declared, in a telegram to the FDA, that it had received reports of more than 3,500 adverse side-effects among patients who had used benoxaprofen. There were also 61 deaths, most of which were of elderly people. Almost simultaneously, the FDA said it had reports of 11 deaths in the USA among benoxaprofen users, most of which were caused by kidney and liver damage. The Eli Lilly Company suspended sales of benoxaprofen that afternoon.

_

There really isn’t a way around the numbers. Drugs can appear perfectly fine throughout the clinical trials, only to show their darker side when a very large population is exposed to them. But that’s one cost of modern medicine: the drug may benefit hundreds of thousands or even millions of people, but it may also make some people sick or even kill some of them. In the real world, it’s impossible for this not to be the case. Apparently, people against animal experiments would have us ignore the animal results, on the theory that animals and humans are too different for the animal results to be valid. But if we’re to follow their logic, we should go back and test every agent that proved dangerous or lethal to animals on human beings! Is that really what they are advocating? Are they prepared to accept full responsibility – legal, moral and financial – for doing so, or would they prefer for drug company executives and scientists to do that? The fact of the matter is that animals are a very good, though imperfect, indicator of how drugs will affect humans. If you doubt that, consider that all drugs that reach the market and are proven safe in humans were found to be safe in animals. And agents that prove unsafe for humans far more often than not will prove unsafe for animals, passionate assertions to the contrary notwithstanding. To ignore these facts is to ignore a lot of “objective-reality” biological history in favor of a faith-based ideology. Finally, animals are used for far more than just testing drugs: they are used for understanding basic biological mechanisms. Understanding these mechanisms in health is the basis for understanding them in disease, and understanding them in disease is the basis for developing pharmacological agents to treat them.

_

What is the alternative of animal experimentation?

Human clinical and epidemiological studies, human tissue- and cell-based research methods, cadavers, sophisticated high-fidelity human patient simulators and computational models are more reliable, more precise, less expensive, and more humane than animal experiments. Progressive scientists have used human brain cells to develop a model “microbrain,” which can be used to study tumors, as well as artificial skin and bone marrow. We can now test irritancy on protein membranes, produce and test vaccines using human tissues, and perform pregnancy tests using blood samples instead of killing rabbits. 

_____________

Application of Statistics in science:

Statistical hypothesis testing:

When a possible correlation or similar relation between phenomena is investigated (for example, whether a proposed remedy is effective in treating a disease, at least to some extent and for some patients), the hypothesis that a relation exists cannot be examined the same way one might examine a proposed new law of nature: in such an investigation a few cases in which the tested remedy shows no effect do not falsify the hypothesis. Instead, statistical tests are used to determine how likely it is that the overall effect would be observed if no real relation as hypothesized exists. If that likelihood is sufficiently small (e.g., less than 1%), the existence of a relation may be assumed. Otherwise, any observed effect may as well be due to pure chance. A p value below 0.05 means that, if no real relation existed, a result at least as extreme as the one observed would occur by chance less than 5% of the time; it does not mean the relationship is 95% certain to be real. In statistical hypothesis testing two hypotheses are compared, which are called the null hypothesis and the alternative hypothesis. The null hypothesis is the hypothesis that states that there is no relation between the phenomena whose relation is under investigation, or at least not of the form given by the alternative hypothesis. The alternative hypothesis, as the name suggests, is the alternative to the null hypothesis: it states that there is some kind of relation. The alternative hypothesis may take several forms, depending on the nature of the hypothesized relation; in particular, it can be two-sided (for example: there is some effect, in a yet unknown direction) or one-sided (the direction of the hypothesized relation, positive or negative, is fixed in advance). Conventional significance levels for testing the hypotheses are 0.10, 0.05, and 0.01. The criteria for rejecting the null hypothesis and accepting the alternative must be determined in advance, before the observations are collected or inspected; if these criteria are determined later, when the data to be tested are already known, the test is invalid. It is important to mention that the above procedure depends on the number of participants (units or sample size) included in the study. For instance, the sample size may be too small to reject a null hypothesis; it is therefore recommended to specify the sample size from the beginning. It is advisable to define a small, medium and large effect size for each of a number of the important statistical tests which are used to test the hypotheses.
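
As a concrete illustration of the procedure just described, here is a minimal sketch in Python using invented data (the effect size and sample sizes are assumptions, not taken from any real trial); it tests the null hypothesis of "no difference between treated and untreated patients" with a two-sided two-sample t-test:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=50.0, scale=10.0, size=100)   # untreated patients
treated = rng.normal(loc=54.0, scale=10.0, size=100)   # remedy shifts the mean

# Null hypothesis: no difference in means. Two-sided alternative.
t_stat, p_value = stats.ttest_ind(treated, control)

alpha = 0.05   # significance level fixed before inspecting the data
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject the null: a result this extreme would arise by chance <5% of the time.")
else:
    print("Fail to reject the null.")

Note that alpha is fixed before the data are inspected, exactly as the text requires; deciding the threshold after seeing the results would invalidate the test.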

_

Meta-analysis:

In statistics, a meta-analysis refers to methods focused on contrasting and combining results from different studies, in the hope of identifying patterns among study results, sources of disagreement among those results, or other interesting relationships that may come to light in the context of multiple studies. In its simplest form, this is normally done by identifying a common measure of effect size, of which a weighted average might be the output of a meta-analysis. The weighting might be related to sample sizes within the individual studies. More generally there are other differences between the studies that need to be allowed for, but the general aim of a meta-analysis is to more powerfully estimate the true effect size as opposed to a less precise effect size derived in a single study under a given single set of assumptions and conditions. Meta-analyses are often, but not always, important components of a systematic review procedure. For instance, a meta-analysis may be conducted on several clinical trials of a medical treatment, in an effort to obtain a better understanding of how well the treatment works. Here it is convenient to follow the terminology used by the Cochrane Collaboration, and use “meta-analysis” to refer to statistical methods of combining evidence, leaving other aspects of ‘research synthesis’ or ‘evidence synthesis’, such as combining information from qualitative studies, for the more general context of systematic reviews. Meta-analysis forms part of a framework called estimation statistics which relies on effect sizes, confidence intervals and precision planning to guide data analysis, and is an alternative to null hypothesis significance testing.
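
A minimal sketch of the weighted-average idea described above, using the common inverse-variance weighting (all numbers are invented; weighting by sample size is a closely related special case, since variance shrinks as sample size grows):

import math

effects  = [0.30, 0.10, 0.25, 0.40]   # per-study effect estimates
std_errs = [0.15, 0.08, 0.20, 0.25]   # per-study standard errors

weights = [1 / se**2 for se in std_errs]   # larger, more precise studies count more
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled effect
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.3f} (95% CI {lo:.3f} to {hi:.3f})")

The pooled confidence interval is narrower than any single study's, which is the sense in which combining studies "more powerfully" estimates the true effect.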

_

Advantages of meta-analysis:

Conceptually, a meta-analysis uses a statistical approach to combine the results from multiple studies. Its advantages can therefore be interpreted as follows:

1. Results can be generalized to a larger population.

2. The precision and accuracy of estimates can be improved as more data is used. This, in turn, may increase the statistical power to detect an effect.

3. Inconsistency of results across studies can be quantified and analyzed. For instance, does inconsistency arise from sampling error, or are study results (partially) influenced by between-study heterogeneity?

4. Hypothesis testing can be applied on summary estimates.

5. Moderators can be included to explain variation between studies.

6. The presence of publication bias can be investigated.

_

Pitfalls of meta-analyses:

A meta-analysis of several small studies does not predict the results of a single large study. Some have argued that a weakness of the method is that sources of bias are not controlled by the method: a good meta-analysis of badly designed studies will still result in bad statistics. This would mean that only methodologically sound studies should be included in a meta-analysis, a practice called ‘best evidence synthesis’.  Other meta-analysts would include weaker studies, and add a study-level predictor variable that reflects the methodological quality of the studies to examine the effect of study quality on the effect size. However, others have argued that a better approach is to preserve information about the variance in the study sample, casting as wide a net as possible, and that methodological selection criteria introduce unwanted subjectivity, defeating the purpose of the approach.   

_

Publication bias: the file drawer problem:

Another potential pitfall is the reliance on the available corpus of published studies, which may create exaggerated outcomes due to publication bias: studies which show negative or insignificant results are less likely to be published. For any given research area, one cannot know how many studies have gone unreported. The file drawer problem results in a distribution of effect sizes that is biased, skewed or completely cut off, creating a serious base rate fallacy in which the significance of the published studies is overestimated, because other studies were either not submitted for publication or were rejected. This should be seriously considered when interpreting the outcomes of a meta-analysis. It can be visualized with a funnel plot, a scatter plot of sample size against effect size. For a given effect level, the smaller the study, the higher the probability of finding that effect by chance; at the same time, the higher the effect level, the lower the probability that a larger study can produce that positive result by chance. If many negative studies go unpublished, the remaining positive studies give rise to a funnel plot in which effect size is inversely proportional to sample size; in other words, an important part of the apparent effect is due to chance that is not balanced out in the plot, because the unpublished negative data are absent, as shown in the figure below. In contrast, when most studies are published, the effect shown has no reason to be biased by study size, so a symmetric funnel plot results. Thus, if no publication bias is present, one would expect no relation between sample size and effect size. A negative relation between sample size and effect size would imply that studies that found significant effects were more likely to be published and/or submitted for publication. Several procedures are available that attempt to correct for the file drawer problem once it is identified, such as guessing at the cut-off part of the distribution of study effects. Methods for detecting publication bias have been controversial, as they typically have low power for detecting bias and may also create false positives under some circumstances. For instance, small study effects, wherein methodological differences between smaller and larger studies exist, may cause differences in effect sizes between studies that resemble publication bias. However, small study effects may be just as problematic for the interpretation of meta-analyses, and the imperative is on meta-analytic authors to investigate potential sources of bias. A Tandem Method for analyzing publication bias has been suggested to cut down on false-positive errors; applying it suggests that 25% of meta-analyses in the psychological sciences may have publication bias. However, low power problems likely remain at issue, and estimates of publication bias may remain lower than the true amount.
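
The funnel-plot logic can be made concrete with a toy simulation in Python (all numbers invented): a remedy with zero true effect is "studied" many times, but only positive, statistically significant results escape the file drawer. The published record then shows the asymmetry described above, with smaller studies (larger standard errors) reporting larger effects:

import numpy as np

rng = np.random.default_rng(1)
true_effect = 0.0                      # the remedy actually does nothing

published_effects, published_ses = [], []
for _ in range(2000):
    n = rng.integers(10, 400)          # study sample size
    se = 1 / np.sqrt(n)                # rough standard error of the estimate
    est = rng.normal(true_effect, se)  # the study's observed effect
    if est / se > 1.96:                # only "positive, significant" results
        published_effects.append(est)  # make it out of the file drawer
        published_ses.append(se)

# In an unbiased literature this correlation would be near zero; with the
# file drawer in operation it is strongly positive (small study = big effect).
r = np.corrcoef(published_ses, published_effects)[0, 1]
print(f"published studies: {len(published_effects)}, corr(SE, effect) = {r:.2f}")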

_

Agenda-driven bias:

The most severe fault in meta-analysis often occurs when the person or persons doing the meta-analysis have an economic, social, or political agenda such as the passage or defeat of legislation. People with these types of agendas may be more likely to abuse meta-analysis due to personal bias. For example, researchers favorable to the author’s agenda are likely to have their studies cherry-picked while those not favorable will be ignored or labeled as “not credible”. In addition, the favored authors may themselves be biased or paid to produce results that support their overall political, social, or economic goals in ways such as selecting small favorable data sets and not incorporating larger unfavorable data sets. The influence of such biases on the results of a meta-analysis is possible because the methodology of meta-analysis is highly malleable. A 2011 study done to disclose possible conflicts of interest in underlying research studies used for medical meta-analyses reviewed 29 meta-analyses and found that conflicts of interest in the studies underlying the meta-analyses were rarely disclosed. The 29 meta-analyses included 11 from general medicine journals, 15 from specialty medicine journals, and three from the Cochrane Database of Systematic Reviews. The 29 meta-analyses reviewed a total of 509 randomized controlled trials (RCTs). Of these, 318 RCTs reported funding sources, with 219 (69%) receiving funding from industry. Of the 509 RCTs, 132 reported author conflict of interest disclosures, with 91 studies (69%) disclosing one or more authors having industry financial ties. The information was, however, seldom reflected in the meta-analyses. Only two (7%) reported RCT funding sources and none reported RCT author-industry ties. The authors concluded “without acknowledgment of conflict of interest due to industry funding or author industry financial ties from RCTs included in meta-analyses, readers’ understanding and appraisal of the evidence from the meta-analysis may be compromised.”

_

Applications of meta-analysis in modern science:

Modern statistical meta-analysis does more than just combine the effect sizes of a set of studies. It can test if the outcomes of studies show more variation than the variation that is expected because of sampling different research participants. If that is the case, study characteristics such as measurement instrument used, population sampled, or aspects of the studies’ design are coded. These characteristics are then used as predictor variables to analyze the excess variation in the effect sizes. Some methodological weaknesses in studies can be corrected statistically. For example, it is possible to correct effect sizes or correlations for the downward bias due to measurement error or restriction on score ranges. Another example is the development of clinical prediction models, where meta-analysis may be used to combine data from different research centers, or even to aggregate existing prediction models. Meta-analysis can be done with single-subject designs as well as group research designs. This is important because much of the research on low-incidence populations has been done with single-subject research designs. Considerable dispute exists over the most appropriate meta-analytic technique for single-subject research. Meta-analysis leads to a shift of emphasis from single studies to multiple studies. It emphasizes the practical importance of the effect size instead of the statistical significance of individual studies. This shift in thinking has been termed “meta-analytic thinking”. The results of a meta-analysis are often shown in a forest plot. Results from studies are combined using different approaches. One approach frequently used in meta-analysis in health care research is termed the ‘inverse variance method’. The average effect size across all studies is computed as a weighted mean, whereby the weights are equal to the inverse variance of each study’s effect estimator. Larger studies and studies with less random variation are given greater weight than smaller studies. Other common approaches include the Mantel–Haenszel method and the Peto method. A recent approach to studying the influence that weighting schemes can have on results has been proposed through the construct of gravity, which is a special case of combinatorial meta-analysis. Signed differential mapping is a statistical technique for meta-analyzing studies on differences in brain activity or structure which use neuroimaging techniques such as fMRI, VBM or PET. Different high-throughput techniques such as microarrays have been used to understand gene expression. MicroRNA expression profiles have been used to identify differentially expressed microRNAs in a particular cell or tissue type or disease condition, or to check the effect of a treatment. A meta-analysis of such expression profiles was performed to derive novel conclusions and to validate the known findings.
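
As a sketch of the heterogeneity test mentioned above (does outcome variation exceed what sampling error alone predicts?), the following Python snippet computes Cochran's Q and the I² statistic from invented study effects and standard errors:

from scipy import stats

effects  = [0.30, 0.10, 0.55, -0.05]
std_errs = [0.12, 0.10, 0.15, 0.11]

weights = [1 / se**2 for se in std_errs]          # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted squared deviations of study effects from the pooled effect
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1
p = stats.chi2.sf(q, df)                          # P(Q at least this large by chance)
i2 = max(0.0, (q - df) / q) * 100                 # % of variation beyond sampling error

print(f"Q = {q:.2f} (df={df}, p = {p:.3f}), I^2 = {i2:.0f}%")

A large Q (small p) and a high I² suggest real between-study heterogeneity, which is the cue to start coding study characteristics as predictor variables, as described above.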

___________

Imitation science in medical practice:

_

The figure above summarizes the effects of modern medicine on world health. Nonetheless, medical science is frequently contaminated by bad science, as you will see in the following paragraphs.

_

How to read scientific studies to know whether they are indeed science and not imitation science:

Many scientific studies are published every day that make sweeping claims based on questionable data. When the corrections come in, these don’t get nearly as much publicity. So what’s a nonscientist to do? How can a layperson tell the difference between a well-designed scientific study and a poorly designed one that reveals little? While each study has to be taken in context and the methodology examined on an individual basis, there are some traits that good scientific studies and articles generally share. While many perfectly good scientific studies will not possess all of these traits, a study that possesses none of them should be read with caution. Here are the traits of well-designed scientific studies:

Randomization:

Subjects are randomly divided into control and study groups, and subjects are selected randomly as opposed to self-selected or selected as part of a group already being studied. For example, a study of people with heart disease is better if patients are picked at random than if patients already involved with a particular hospital are allowed to self-select.
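
A minimal sketch of what random allocation looks like in practice (Python; the patient IDs are placeholders):

import random

subjects = [f"patient-{i:03d}" for i in range(1, 21)]  # 20 recruited subjects

random.seed(42)            # fixed seed only so the example is reproducible
random.shuffle(subjects)   # every subject equally likely to land in each arm

half = len(subjects) // 2
treatment, control = subjects[:half], subjects[half:]
print("treatment:", treatment)
print("control  :", control)

The point of the shuffle is that no characteristic of a subject (health, motivation, hospital affiliation) can influence which arm they end up in.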

Double Blind:

Double Blind studies are studies in which neither the experimenter nor the experimental subject knows whether they are in the control or study group. For example, a heart patient would not know whether or not she was being given a placebo, and the physician administering the placebo would not know either. Single blind studies are less reliable. These are studies where the experimenter knows what group the subject is in but the subject does not. Studies that are not blind at all are highly unreliable and should not be used as meaningful sources of data until follow up double-blind studies are conducted.

Representative Sample:

A good scientific study uses a very large sample size. A study of twenty patients, for example, does not have enough participants to represent a cross section of the population. The subjects should also be representative of the larger population, meaning that they come from various race, gender, class, etc. backgrounds. This helps to account for variables like environment, income, racial discrimination and gender that may serve to alter the experimental results. Studies with small samples cannot definitively tell us anything; they can only tell us that a larger study is necessary.
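
To put a number on "large enough", here is a hedged sketch of a standard power calculation using statsmodels (the assumed effect size of 0.3 is an illustration, not a figure from the text):

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.3,  # smallish effect (Cohen's d)
                                   alpha=0.05,       # significance level
                                   power=0.8)        # 80% chance of detection
print(f"needed per group: {n_per_group:.0f}")        # roughly 175 subjects per group

Under these assumptions, roughly 175 subjects per group are needed, far more than the twenty-patient study criticized above could provide.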

Multiple Studies:

It can be exciting to hear that a miracle drug worked in one study. But if the results aren’t repeated, then it doesn’t matter that the drug worked in that one study. It is easy to find a correlation between just about anything in a single study. Good scientific studies are repeated many times and yield the same results. If a different researcher repeats the experiment and does not obtain the same results as the original experimenter, something is wrong with the study.

Multi-Institutional Studies:

Even the best, most honorable scientists can be biased by their own life experiences or the research institutes for which they work. For example, a researcher who works for a chiropractor has a vested interest in showing that chiropractic works. A pharmaceutical company that sells St. John’s Wort has a vested interest in showing that it cures depression. Thus it is important that research be conducted across several institutions, or that after one institution has conducted a study, a different institution with a different researcher repeat the study and see if the same results are obtained.

Does not make Sweeping Claims:

A scientific article claimed that scorpions forced sex on other scorpions. From this claim, a researcher concluded that humans must carry a rape gene and that rape must be natural. You can’t make this stuff up. With massive pressure on scientists to publish or lose their jobs, it’s easy to understand why a researcher may claim that their study proves more than it actually does. Moreover, scientific journals and newspapers alike favor exciting headlines. “Sometimes scorpions look like they’re forcing sex on other scorpions” is a lot less exciting than “Rape is natural!” Beware of studies that make broad, sweeping, exciting claims. They may be showing something completely different from what they claim to be showing. Similarly, be wary of studies that purport to show that human nature will always be a particular way. Science should be falsifiable, and studies that make future claims about human societies cannot be falsified with experimental data, and thus are not true scientific studies!

_

Randomized Controlled Trials (RCT) vs. Observational Studies:

When most of us think of a scientific study, we’re thinking of a randomized controlled trial (RCT). The participants are divided randomly into two groups, matched as well as possible for age, sex, health history, smoking status, and any other factor that might affect the outcome. One group is given the treatment; the other is given no treatment. In the more rigorous forms of RCT, the “no treatment” group is given a sham treatment (known as “placebo”) so that the subjects don’t know whether they’ve received treatment or not. This is sometimes called a “single-blinded” trial. Since the simple act of being attended to has a positive and significant effect on health (the “placebo effect”), unblinded trials (also known as “open label”) are usually not taken very seriously. To add additional rigor, some trials are structured so that the clinicians administering the treatment don’t know who’s receiving the real treatment or not. This is sometimes called a “double-blinded” trial. And if the clinicians assessing outcomes don’t know who received the real treatment, it’s sometimes called “triple-blinded”. (These terms are now being discouraged in favor of simply calling a study “blinded” and specifying which groups have been blinded.) Double-blinded, randomized controlled trials are the gold standard of research, because they’re the only type of trial that can prove the statement “X causes Y”.

_

What is an Observational Study? Cohort Studies and Cross-Sectional Studies:

In an observational study, the investigators don’t attempt to control the behavior of the subjects: they simply collect data about what the subjects are doing on their own. There are two main types of observational studies: cohort studies identify a specific group and track it over a period of time, whereas cross-sectional (population) studies measure characteristics of an entire population at a single point in time. Cohort studies can be further divided into prospective cohort studies, in which the groups and study criteria are defined before the study begins, and retrospective cohort studies, in which existing data is “mined” after the fact for possible associations. It’s easy to see that looking for statistical associations in data that already exists is far easier and cheaper than performing a randomized clinical trial. Unfortunately, there are several problems with observational studies. The first, and most damning, is that observational studies cannot prove that anything is the cause of anything else! They can only show an association between two or more factors—and that association may not mean what we think it means. In fact, it may not mean anything at all! “Selection bias” occurs because, unlike an RCT in which the participants are randomly assigned to groups that are matched as well as possible, the people in an observational study choose their own behavior.

Most women will be familiar with the classic story of selection bias: the saga of hormone replacement therapy (HRT):

1991: “Every woman should get on HRT immediately, because it prevents heart attacks!”

2002: “Every woman should get off HRT immediately, because it causes heart attacks!”

How did a 50% reduction in CAD (coronary artery disease) turn into a 30% increase in CAD?

It’s because the initial data from 1991 came from the Nurses’ Health Study, an associative cohort study which could only answer the question “What are the health characteristics of nurses who choose to undergo HRT versus nurses who don’t?” The follow-up data from 2002 came from a randomized clinical trial, which answered the much more relevant question “What happens to two matched groups of women when one undergoes HRT and the other doesn’t?” It turns out that the effect of selection bias—women voluntarily choosing to be early adopters of a then-experimental procedure—completely overwhelmed the actual health effects of HRT. In other words, nurses who were willing to undergo cutting-edge medical treatment were far healthier than nurses who weren’t.

_

Limitations of RCT: RCTs can give the wrong answer:

RCTs can be very helpful when they disprove claims that a drug works. This can rein in the unscrupulous huckster and the credulous physician, and protect the vulnerable patient. But RCTs can also be unhelpful. For many conditions they simply do not produce a meaningful outcome. For instance, where superficially similar problems can be caused by both drug and illness, such as suicidality on antidepressants, RCTs may perversely show that drugs that clearly cause a problem seemingly don’t cause it. Within the mental health domain it is debatable whether RCTs can show anything “works”. Using the clinical trial procedures used to bring Prozac (fluoxetine) to market, alcohol could have been shown to be as effective an antidepressant as Prozac, and safer. Rather than being used as initially intended – to first do no harm – trials have given rise to an efficacy fetish. As a result we are now all put on an increasing number of drugs that have been approved by regulators, and there seems to be no way to limit the number of drugs we end up on, because these drugs that “work” have not been shown to save lives. The result is that drug-induced death is now the third leading cause of death. This is a tragedy of mythic proportions. RCTs tell us almost nothing about cause and effect. They discover nothing. They likely block the discovery of many treatments. But for the fact that Barry Marshall in 1980 took a laboratory rather than a clinical-trials approach to showing that ulcers were caused by H. pylori and could be cured by antibiotics, Glaxo would have used RCTs to bury the evidence that antibiotics cure ulcers in order to defend the very first blockbuster drug, their H2 antagonist for ulcers, Zantac (ranitidine). More generally, in a 400-page book about treating illnesses and drugs, Bad Pharma has only 3 pages (101–104) that come close to dealing with biology, even though biology is the primary driver of the superficial associations that controlled trials throw up.

___________

Evidence based medicine (EBM):

Evidence-based medicine (EBM) is “the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients.” Trisha Greenhalgh and Anna Donald define it more specifically as “the use of mathematical estimates of the risk of benefit and harm, derived from high-quality research on population samples, to inform clinical decision-making in the diagnosis, investigation or management of individual patients.”  EBM seeks to assess the strength of the evidence of risks and benefits of treatments (including lack of treatment) and diagnostic tests. This helps clinicians predict whether a treatment will do more good than harm. Evidence quality can be assessed based on the source type (from meta-analyses and systematic reviews of triple-blind randomized clinical trials with concealment of allocation and no attrition at the top end, down to conventional wisdom at the bottom), as well as other factors including statistical validity, clinical relevance, currency, and peer-review acceptance. EBM recognizes that many aspects of health care depend on individual factors such as quality- and value-of-life judgments, which are only partially subject to quantitative scientific methods. Application of EBM data therefore depends on patient circumstances and preferences, and medical treatment remains subject to input from personal, political, philosophical, religious, ethical, economic, and aesthetic values.

_

Current evidence based medical practice:

From the state of current knowledge, around 13% of all treatments have good evidence and a further 21% are likely to be beneficial. By any yardstick this is abysmally low.

_

EBM by U.S. Preventive Services Task Force (USPSTF):

Systems to stratify evidence by quality have been developed, such as this one by the United States Preventive Services Task Force for ranking evidence about the effectiveness of treatments or screening:

Level I: Evidence obtained from at least one properly designed randomized controlled trial.

Level II-1: Evidence obtained from well-designed controlled trials without randomization.

Level II-2: Evidence obtained from well-designed cohort or case-control analytic studies, preferably from more than one center or research group.

Level II-3: Evidence obtained from multiple time series with or without the intervention. Dramatic results in uncontrolled trials might also be regarded as this type of evidence.

Level III: Opinions of respected authorities, based on clinical experience, descriptive studies, or reports of expert committees.

_

Categories of recommendations:

In guidelines and other publications, recommendation for a clinical service is classified by the balance of risk versus benefit of the service and the level of evidence on which this information is based. The U.S. Preventive Services Task Force uses:

Grade A—Strongly Recommended: The USPSTF strongly recommends that clinicians provide [the service] to eligible patients. The USPSTF found good evidence that [the service] improves important health outcomes and concludes that benefits substantially outweigh harms.

Grade B—Recommended: The USPSTF recommends that clinicians provide [the service] to eligible patients. The USPSTF found at least fair evidence that [the service] improves important health outcomes and concludes that benefits outweigh harms.

Grade C—No Recommendation: The USPSTF makes no recommendation for or against routine provision of [the service]. The USPSTF found at least fair evidence that [the service] can improve health outcomes but concludes that the balance of benefits and harms is too close to justify a general recommendation.

Grade D—Not Recommended: The USPSTF recommends against routinely providing [the service] to asymptomatic patients. The USPSTF found at least fair evidence that [the service] is ineffective or that harms outweigh benefits.

Grade I—Insufficient Evidence to Make a Recommendation: The USPSTF concludes that the evidence is insufficient to recommend for or against routinely providing [the service]. Evidence that the [service] is effective is lacking, of poor quality, or conflicting and the balance of benefits and harms cannot be determined.

_________

Systematic reviews:

A systematic review is a literature review focused on a research question that tries to identify, appraise, select and synthesize all high-quality research evidence relevant to that question. Systematic reviews of high-quality randomized controlled trials are crucial to evidence-based medicine. An understanding of systematic reviews and how to implement them in practice is becoming mandatory for all professionals involved in the delivery of health care. Besides health interventions, systematic reviews may concern clinical tests, public health interventions, social interventions, adverse effects, and economic evaluations. Systematic reviews are not limited to medicine and are quite common in other sciences where data are collected, published in the literature, and an assessment of methodological quality for a precisely defined subject would be helpful. Other fields where systematic reviews are used include psychology, nursing, dentistry, public health, occupational therapy, speech therapy, physical therapy, educational research, sociology, business management, environmental management and conservation biology. Systematic reviews often, but not always, use statistical techniques (meta-analysis) to combine results of the eligible studies, or at least use scoring of the levels of evidence depending on the methodology used. While systematic reviews are regarded as the strongest form of medical evidence, a review of 300 studies found that not all systematic reviews were equally reliable, and that their reporting could be improved by a universally agreed-upon set of standards and guidelines. A further study by the same group found that of 100 systematic reviews monitored, 7% needed updating at the time of publication, another 4% within a year, and another 11% within 2 years; this figure was higher in rapidly changing fields of medicine, especially cardiovascular medicine.

_

Cochrane Collaboration and EBM:

The Cochrane Collaboration is an independent nonprofit organization consisting of a group of more than 31,000 volunteers in more than 120 countries who systematically review randomized trials of the effects of prevention, treatments and rehabilitation as well as health systems interventions. The collaboration was formed to organize medical research information in a systematic way in the interests of evidence-based medicine.  The group conducts systematic reviews of randomized controlled trials of health-care interventions, which it publishes in the Cochrane Library. When appropriate, they also include the results of other types of research. Cochrane Reviews are published in The Cochrane Database of Systematic Reviews section of The Cochrane Library. A few reviews (in fields such as occupational health) have also studied the results of non-randomized, observational studies. The collaboration formed an official relationship in January 2011 with the World Health Organization as a partner NGO, with a seat on the World Health Assembly to provide input into WHO resolutions. The Cochrane Collaboration provides a handbook for systematic reviewers of interventions which “provides guidance to authors for the preparation of Cochrane Intervention reviews.” The Cochrane Handbook outlines eight general steps for preparing a systematic review:

1. Defining the review question and developing criteria for including studies

2. Searching for studies

3. Selecting studies and collecting data

4. Assessing risk of bias in included studies

5. Analyzing data and undertaking meta-analyses

6. Addressing reporting biases

7. Presenting results and “summary of findings” tables

8. Interpreting results and drawing conclusions  

___________

Placebo effect:

A placebo is a medicine or procedure prescribed for the psychological benefit to the patient rather than for any physiological effect. Consider a trial of fish oil pills, where the claim was that fish oil pills improve school performance and behavior in children. The researchers said, “We’ve done a trial. All the previous trials were positive, and we know this one’s going to be too.” That should always ring alarm bells, because if you already know the answer to your trial, you shouldn’t be doing one. Either you’ve rigged it by design, or you’ve got enough data so there’s no need to randomize people anymore. They were taking 3,000 children, giving them all these huge fish oil pills, six a day, and then a year later measuring their school exam performance and comparing it against what they had predicted it would have been without the pills. The kids got the pills, and their performance improved. What else could it possibly be if it wasn’t the pills? It’s the placebo effect. The placebo effect is one of the most fascinating things in the whole of medicine. It’s not just about taking a pill and your performance or your pain getting better. It’s about our beliefs and expectations. It’s about the cultural meaning of a treatment. And this has been demonstrated in a whole raft of fascinating studies comparing one kind of placebo against another. We know, for example, that two sugar pills a day are a more effective treatment for getting rid of gastric ulcers than one sugar pill a day. That is an outrageous and ridiculous finding, but it’s true. So we know that our beliefs and expectations can be manipulated, which is why we do trials where one half of the people get the real treatment and the other half get placebo: the placebo-controlled randomized trial.

_

Moerman examined 117 studies of ulcer drugs conducted between 1975 and 1994, and found that their results shifted over time in a way you would never expect pharmacodynamically, but might expect culturally. Cimetidine was the first ulcer drug, and it eradicated 80% of ulcers in various clinical trials. As time passed, the success rate of cimetidine deteriorated to 50%. This deterioration occurred after the introduction of ranitidine; the same drug became less effective after a new drug was brought in. It could be due to changes in research protocols. But a highly compelling possibility is that older drugs become less effective after new ones are introduced because medical belief in them deteriorates.

_

Placebo as pain reliever:  

Doesn’t acupuncture stimulate the release of endorphins and other natural pain-killers? Isn’t that why it works? Perhaps. However, Antonella Pollo et al. demonstrated that placebos can help people with serious pain (Bausell 2007: pp. 139 ff). Other researchers, such as Donald Price, have shown that placebos work to reduce pain only when the subject believes that the therapy is capable of reducing pain. “This belief can be instilled through classical conditioning or simply by the suggestion of a respected individual that this intervention (or therapy) can reduce pain” (ibid., p. 141). Martina Amanzio et al. demonstrated that “at least part of the physiological basis for the placebo effect is opioid in nature” (ibid., p. 160). That is, we can be conditioned to release such chemical substances as endorphins, catecholamines, cortisol, and adrenaline. One reason, therefore, that people report pain relief from both acupuncture and sham acupuncture may be that both are placebos that stimulate the opioid system, the body’s natural pharmacy. When one considers the difficulty, if not impossibility, of doing a double-blinded study on acupuncture, and when one considers the dropout rates of acupuncture studies, one should conclude that the evidence from all the best studies supports the hypothesis that acupuncture works by the placebo effect, not by balancing yin and yang energies, whatever that might mean. Is it possible that acupuncture and homeopathy and other alternative therapies that have lasted hundreds or thousands of years are just placebos? Yes; as long as a treatment is not harming people right and left, it could be having a beneficial effect due to people’s beliefs, expectations, and conditioning. This is true of both scientific and alternative treatments.

__________

Complementary and alternative medicine (CAM) as imitation science:

Too many alternative therapists remain uninterested in determining the safety and efficacy of their interventions. These practitioners also fail to see the importance of rigorous clinical trials in establishing proper evidence for or against their treatments, and where evidence already exists that treatments are ineffective or unsafe, alternative therapists carry on regardless, with their hands firmly over their ears. Despite this disturbing situation, the market for alternative treatments is booming and the public is being misled over and over again, often by misguided therapists, sometimes by exploitative charlatans. It is now time for the tricks to stop and for the real treatments to take priority. In the name of honesty, progress and good healthcare, it is imperative that scientific standards, evaluation and regulation be applied to all types of medicine, so that patients can be confident that they are receiving treatments that demonstrably generate more good than harm. If such standards are not applied to the alternative medicine sector, then homeopaths, acupuncturists, chiropractors, herbalists and other alternative therapists will continue to prey on the most desperate and vulnerable in society, raiding their wallets, offering false hope, and endangering their health.

Acupuncture:

The traditional principles are deeply flawed, as there is no evidence at all to demonstrate the existence of Ch’i or meridians. Various reviews show that it is no more effective than placebo for any condition except possibly for some types of pain and nausea.

Homeopathy:

There is a mountain of evidence to suggest that homeopathic remedies simply do not work, which is not surprising as they typically do not contain a single molecule of any active ingredient. Any observable effects are due to the placebo effect.

For example, a 2002 systematic review – a rigorous analysis of all available evidence – concluded that the best available evidence “does not warrant positive recommendations for its use in clinical practice.” A 2010 review of the “best evidence” concluded that homeopathic remedies have no “effects beyond placebo.” Even the U.S. National Institutes of Health National Center for Complementary and Alternative Medicine, an entity that has a specific mission to be open-minded about unconventional treatments, concluded, “there is little evidence to support homeopathy as an effective treatment for any specific condition.”  Homeopathy has been around for hundreds of years. The basic philosophy behind the practice is the idea of “like cures like.” A homeopathic remedy consists of a natural substance – a bit of herb, root, mineral, you get the idea – that “corresponds” to the ailment you wish to treat. The “active” agent is placed in water and then diluted to the point where it no longer exists in any physical sense. In fact, practitioners of homeopathy believe that the more diluted a remedy is, the more powerful it is. So, if you subscribe to this particular worldview, ironically, you want your active agents to be not just non-existent, but super non-existent. For those of us who reside in the material world, where the laws of physics have relevance, a homeopathic remedy is either nothing but water or, if in capsule form, a sugar pill. Of course, “like cures like” and super dilution have absolutely no foundation in science. There is no evidence to support the idea that the active agents – the herb, root, mineral – correspond in any biologically meaningful way to the particular ailments that the homeopathic treatments are meant to treat. Of course, the idea that a super-diluted solution could have some measurable impact on our bodies conflicts with the known laws of physics and chemistry. If a homeopathic solution contains no true ingredients, how can it have a physical impact on the body?

Chiropractic therapy:

The scientific evidence suggests that it is only worth seeing a chiropractor if you have a back problem. Try conventional treatments before turning to a chiropractor for back pain. They are likely to be cheaper and just as effective. Check the reputation of your chiropractor. They have been shown to be markedly more likely to be involved in malpractice, fraud and sexual transgressions than are medical doctors.

Herbal medicine:

By contrast with acupuncture, homeopathy and chiropractic manipulation, there is no doubt that some herbal medicine is effective. Much of modern pharmacology has evolved from the herbal tradition; note that the word drug comes from the Swedish word druug, meaning dried plant. What has to be considered here is the difference between the scientific development of herbal extracts and what might be called ‘alternative herbal medicine’. Alternative herbal therapists continue to believe that Mother Nature knows best and that the whole plant provides the ideal medicine, whereas scientists believe that nature is just a starting point and that the most potent medicines are derived from identifying, and sometimes manipulating, key components of a plant. Most alternative herbal medicines have not been tested for efficacy or safety to the same level as conventional drugs. We have to ask: which herbal remedies work, and are they safe? It should be noted that for many diseases and conditions there are no effective herbal remedies, including cancer, diabetes, multiple sclerosis, osteoporosis, asthma, hangover and hepatitis. For about 25% of herbal remedies there is some evidence of efficacy; for the remaining 75%, the amount or quality of the evidence is poor to medium. In many cases there is a possibility of interaction with other drugs. If you are on any medication, consult a medical doctor before using herbal medicines.

_

Placebo effect and complementary & alternative medicine (CAM):

1. The placebo effect is real and is capable of exerting at least a temporary pain reduction effect. It occurs only in the presence of the belief that an intervention (or therapy) is capable of exerting this effect. This belief can be instilled through classical conditioning, or simply by the suggestion of a respected individual that this intervention (or therapy) can reduce pain.

2. The placebo effect has a plausible, biochemical mechanism or action (at least for pain reduction), and that mechanism of action is the body’s endogenous opioid system.

3. There is no compelling credible scientific evidence to suggest that many CAM therapies benefit medical conditions or reduce medical symptoms (pain or otherwise) better than a placebo.

4. Many CAM therapies have no scientifically plausible biochemical mechanism of action over and above those proposed for the placebo effect.

_

I quote from my article on ‘The Pain’:

About 35% of people report marked pain relief after receiving a saline injection they believe to have been morphine. This is not only a placebo effect (of which CAM practitioners take advantage to claim efficacy); it also makes clear that pain is not merely a sensation or perception but a basic emotion, just like lust or anger.

_

Acupuncture and lung cancer pain:

“Acupuncture Reduces Pain in Lung Cancer Patients – New Findings”. The article was posted on a credulous site that promotes acupuncture, a practice that has never been proven to yield any benefit over the placebo effect. The focus of the study was the evaluation of acupuncture and its effect on lung cancer related symptoms such as nausea, anxiety, pain, depression and a sense of not feeling well. A total of 33 lung cancer patients received 45-minute acupuncture treatments 1–2 times per week, for at least 4 treatments. The reported results were that acupuncture reduced pain levels in over 60% of patients, and that 30% of patients noted an improved sense of well-being after at least 4 acupuncture treatments, jumping to 70% when patients received 6 or more treatments. However, the study has a set of major scientific flaws:

1. There is no placebo control group in the study.

2. There is no group that receives the best known and possible medical pain intervention for lung cancer patients.

The placebo effect is a powerful, built-in effect wherein belief about the outcome affects the outcome. If a patient receives a sham, non-medical treatment, but the treatment provider believes it’s effective and the patient believes it’s effective, then the patient can get better even though no medical intervention actually occurs. It must be controlled for before declaring a treatment successful, since degree of belief determines outcome independently of the activity of the treatment. Although it is difficult to provide a sham control procedure for acupuncture, sham acupuncture (where the needles press against, but never puncture, the skin) exists and has been used in properly controlled studies of acupuncture. These independent studies have shown no effect of acupuncture over placebo. At best, a patient is paying for the privilege of being deceived. This study is bad science. Without a control group, the most accurate and scientifically honest interpretation is that the patients experienced the placebo effect. Studies of placebo have shown that “more sham intervention equals faster positive outcome,” because the belief is that more treatment means more healing. Hence the results: more acupuncture led to more reports of pain relief because acupuncture behaves as a placebo, and more placebo speeds the placebo effect.

_

Homeopathic Nosodes and Nostrums:

Bad Science Watch is an independent non-profit watchdog and advocate for the enforcement and strengthening of consumer protection regulation. Bad Science Watch launched a new website to support its campaign to stop the sale of nosodes—ineffective homeopathic preparations marketed as “vaccine alternatives” by some homeopaths and naturopaths. The website, www.StopNosodes.org, features information for the public about nosodes and the danger they pose, steps that concerned citizens and health professionals can take to help the campaign, and an open letter to Health Canada. There is no scientific evidence that nosodes can prevent or treat any disease. Despite this, the Natural Health Products Directorate has licensed at least 179 nosode products (82 of which are used as vaccine alternatives), assuring the public that they are safe and effective. As a result, Canadians choosing nosodes to prevent dangerous diseases like measles, whooping cough, and polio are acting on false assurances and are given a dangerous, undue sense of security. Additionally, they decrease the herd immunity of their communities, exposing themselves and others to further unnecessary risk. Since nosodes provide no protection or benefit and contribute to falling vaccination rates, Bad Science Watch is calling on Health Canada to cease issuing licenses for nosodes and to revoke the licenses of all existing products. “By licensing nosodes Health Canada undermines its own policies and is working against its own efforts to promote vaccination,” said Michael Kruse, campaign director and co-founder of Bad Science Watch. “We must stop putting Canadian families at unnecessary risk and ban these products.”

_

The following points are critical to the growing popularity of alternative methods, despite their being imitation science:

1. Faith of the patient in the art, method or skill of the healer.

2. No expensive referral to specialists, and no “passing of the buck” between doctors.

3. No expensive investigations — the method is the art and is complete.

4. No need for medical insurance.

5. Time spent in the process and method of healing.

6. Listening and communication skills.

7. No prospect of hospitalization.

8. Limited medication costs.

_

I quote from my article on ‘complementary and alternative medicine’:

It would be ideal to have a medicine that is both efficacious and safe. However, if a medicine is not efficacious, then the safety issue is a non-starter. Most CAM therapies are not efficacious in acute and emergency illnesses, and therefore their claim of safety is irrelevant. Also, if a medicine is not safe but is life-saving in serious illness, it can be used judiciously. For example, chemotherapy given to a patient suffering from cancer causes hair loss and low blood counts but controls cancerous growth and prolongs life. A longer, more stringent review process (including larger clinical trials) will reduce the probability that unsafe drugs enter the market, but at the cost of delaying patient access to potentially useful therapies. Also, because patients who use drugs often present with other co-morbidities which may lead to death or serious injury, it is often difficult to establish that an unsafe drug was the direct cause of injury. So if you have to choose between efficacy and safety, choose efficacy first; safety can be dealt with later on. I will give another example. Suppose you have medicine A with guaranteed efficacy but doubtful safety, and another medicine B with guaranteed safety but doubtful efficacy for your illness. What will you do? You have gone to a practitioner to relieve your illness. If you take medicine B, since its efficacy is doubtful, you may worsen depending on the type of illness. If you take medicine A, you are sure that your illness will be relieved, but you may get side effects depending on the type of medicine. So logically, you may worsen either way, but worsening due to the illness is worse than worsening due to the medicine, because you cannot choose your illness but you can certainly choose your medicine on the basis of double-blinded clinical trials. So the dictum goes that ‘in the event of a clash between the efficacy and the safety of any medicine, efficacy is the primary concern and safety the secondary concern’.

________

The figure below depicts two ways to practice medicine, science and pseudoscience:

Most CAM therapists practice pseudoscience, while most mainstream (allopathic) medical practitioners practice science; but there are many areas where even mainstream medical science is adulterated with bad science, as discussed below.

_

Cancer research in crisis: Are the drugs we count on based on bad science? 

Two thought-provoking and disturbing studies are out which raise major questions about the conduct of the “War on Cancer.” One examines the quality of basic research and the other concludes that half of current cancer deaths could be prevented. Almost 90 percent of early-stage cancer research looking for improved treatments is wrong, according to scientists at biotechnology giant Amgen and the MD Anderson Cancer Center. The researchers describe their findings as “shocking.” The allegations about questionable research in the quest for treatments appear in the prestigious journal Nature. C. Glenn Begley, the former head of cancer research at Amgen, and surgical oncologist Lee M. Ellis of MD Anderson Cancer Center in Houston describe how scientists at the Thousand Oaks, Calif.-based Amgen tried to replicate the results of 53 landmark cancer research papers. By landmark, they mean papers cited by others as significant progress. All were so-called “pre-clinical,” meaning they were studies in rodents or with cells in petri dishes. Only 6 of the 53 landmark cancer studies could be replicated, a dismal success rate of 11%! The Amgen researchers had deliberately chosen highly innovative cancer research papers, hoping that these would form the scientific basis for future cancer therapies they could develop. It should not come as a surprise that progress in developing new cancer treatments is so sluggish. New clinical treatments are often based on innovative scientific concepts derived from pre-clinical laboratory research; if the pre-clinical experiments cannot be replicated, it would be folly to expect clinical treatments based on those questionable concepts to succeed. Reproducibility of research findings is the cornerstone of science. Peer-reviewed scientific journals generally require that scientists conduct multiple repeat experiments and report the variability of their findings before publishing them. However, it is not uncommon for researchers to successfully repeat experiments and publish a paper, only to learn that colleagues at other institutions can’t replicate the findings. In science, replication is proof; if a study can’t be reproduced reliably, it is wrong. Most of the papers in question describe gene mutations or other changes in cancer cells that could be potential targets for new cancer treatments. Such research is obviously critical for companies like Amgen when deciding how to spend hundreds of millions testing potential drugs in humans. The findings at Amgen do not differ greatly from those of a team at Bayer HealthCare in Germany, which reported earlier that it could replicate only about 25 per cent of published studies. This does not necessarily indicate foul play. These investigators were all competent, well-meaning scientists who truly wanted to make advances in cancer research. The reasons for lack of reproducibility include intentional fraud and misconduct, yes, but more often negligence, inadvertent errors, imperfectly designed experiments, the subliminal biases of the researchers, or other uncontrollable variables. Clinical studies of new drugs, for example, are often plagued by the biological variability found in study participants; a group of patients in one trial may respond differently to a new medication than patients enrolled in similar trials at other locations.
In addition to genetic differences between patient populations, factors like differences in socioeconomic status, diet, access to healthcare, criteria used by referring physicians, standards of data analysis by researchers or the subjective nature of certain clinical outcomes – as well as many other uncharted variables – might all contribute to different results. The practice involves many parties in the research process — not just the scientists — who turn a blind eye to questionable actions. As Begley and Ellis detail it, “To obtain funding, a job, promotion or tenure, researchers need a strong publication record…Journal editors, reviewers, and grant review committees [and also add journalists] often look for a scientific finding that is simple, clear and complete—a ‘perfect’ story. It is therefore tempting for investigators to submit selected data sets for publication, or even to manipulate data. Whatever the motivation, the results are all too often wrong.”

_

Is there a link between breast cancer and antiperspirant?

This scare started in 1999 with a circular email claiming that toxins were “purged” through perspiration, and that when the armpit sweat glands were blocked, toxins built up in the lymph nodes behind them, causing cancer in the upper outer quadrant of the breast. While it’s true that an excess of cancers occurs in this quadrant of breast tissue – the one closest to the lymph nodes – it is also by far the largest area of breast tissue. Furthermore, sweating is certainly not the primary means by which the body rids itself of toxins. But the email caused so much concern at the time that the American Cancer Society and the National Cancer Institute issued statements to reassure people, and scientists began to study the issue. If you really want to get to the bottom of whether an environmental factor is causing an illness, it’s good to be able to compare one group who were exposed to your toxin – antiperspirant, for example – with a group who weren’t. In 2002 a large study of 1,600 women was published, and it found no link between antiperspirant use and breast cancer. It was a testament to the influence of the original “hoax” email (as Harvard medics called it) that the researchers also went out of their way to study its specific claim that shaving before using antiperspirant increased the chances of it causing cancer, because toxic chemicals could get in more easily; again they found no link. However, the story rose again when Dr Philippa Darbre, a molecular biologist from Reading University, published a speculative hypothesis paper, in a small but respected academic journal, describing the effects of some of the ingredients of antiperspirants on non-living tissue in the laboratory: specifically, that aluminum has been reported at least to bind to DNA, and to alter gene expression in some situations, and that parabens (preservatives used in some cosmetic creams) have been reported to act like estrogen in some experiments. This has certainly generated more media interest than any of her other, more eminent papers on, for example, IGF-II receptors and breast cancer. Science is all about testing interesting hypotheses. In this case we have an interesting but half-finished laboratory-based story set against a convincingly large study of real people that showed antiperspirants to be safe. Unfortunately, it looks like the largest risk factors for breast cancer at the moment are the ones we can’t control, such as age, sex, and family history, or the ones we might not want to control, such as smoking and age at childbirth.

__

Breast Cancer treatment by bone marrow transplant: bad science at its best:

For more than a decade, physicians convinced breast cancer patients that bone marrow transplants were their best hope of salvation. But the insurance companies who resisted paying for the procedures were right all along: it was experimental medicine, and most women were a lot better off without it. How could so many oncologists ignore basic principles of science? A four-inch-long needle was the standard tool used to extract bone marrow from the hips of breast cancer patients — a bruising procedure. The marrow was then reinjected after high doses of chemotherapy had destroyed the patients’ natural supply. The treatment was the brainchild of Emil Frei, director of Dana-Farber at the time and a towering figure in the field of cancer research. Like other physicians, Frei had been frustrated by limited success treating advanced breast cancer with chemotherapy. The tumors would respond — and sometimes even disappear — only to come creeping back once the treatment ended. Frei knew that a single dose high enough to kill every last cell might also kill the patient. But he thought that a combination of drugs might be tolerable. Each would have bad side effects — carmustine can harm the lungs; cisplatin can damage the kidneys; both drugs, along with a third, cyclophosphamide, can wipe out the bone marrow — but together they might spread the collateral damage around. “Intuitively, it made sense,” says Craig Henderson, a former colleague of Frei and of William Peters, who championed the treatment, at Dana-Farber. “If a little chemo is good, more is better.” Combination chemotherapy with a bone marrow transplant, as the technique is called, had cured leukemias and lymphomas, but no one knew if it would work on breast cancer. “We knew the treatment was going to kill some people along the way, to be quite honest, but we knew the disease killed everybody,” Peters says. So the possibility of a cure seemed worth the risk. “We were going to swing and go for the ring,” he says. “That drove us. You had to believe you were going to pull off something that was going to change history.” The medical chart of a breast cancer patient is an extraordinarily detailed and painful document, listing page after page of treatment and response. When oncologists audited a clinical trial in South Africa and found only the skimpiest of charts, they began to suspect fraud. Most oncologists now agree that high-dose chemotherapy is simply the wrong treatment for breast cancer. Chemotherapy works by killing cells when they’re dividing, so the faster cancer cells divide, the harder they’re hit. Blood cancers — some leukemias and lymphomas — grow rapidly and so are acutely vulnerable. But breast cancer cells reproduce more slowly and tend to be far more resistant to cancer drugs. No matter how many breast cancer cells are killed, at least a few survive, and they keep dividing after the chemotherapy ends.

______________

Imitation science in Pharmaceuticals: The truth about Drug Companies:

The combined profits for the ten drug companies in the Fortune 500 ($35.9 billion) were more than the profits for all the other 490 businesses put together ($33.7 billion) [in 2002]. Over the past two decades the pharmaceutical industry has moved very far from its original high purpose of discovering and producing useful new drugs. Now primarily a marketing machine to sell drugs of dubious benefit, this industry uses its wealth and power to co-opt every institution that might stand in its way, including the US Congress, the FDA, academic medical centers, and the medical profession itself. Corruption in the pharmaceutical sector occurs throughout all stages of the medicine chain, from research and development to dispensing and promotion. The medicine chain refers to each step involved in getting drugs into the hands of patients, including drug creation, regulation, management and consumption. The WHO notes that corruption is so widespread in part because medicines pass through a large number of intermediaries before they reach the patients who need them. Each extra step provides an opportunity for corruption to take place, ultimately driving up the cost of the medicine or diverting it toward the wrong recipients.

_

Trials against placebo vs. trials against current good treatment: 

Everybody thinks they know that a trial should be a comparison of your new drug against placebo. But actually, in a lot of situations that’s wrong, because often we already have a very good treatment currently available, so we don’t want to know that your new treatment is merely better than nothing; we want to know that it’s better than the best treatment we already have. And yet, repeatedly, you see people still doing trials against placebo. And you can get a license to bring your drug to market with only data showing that it’s better than nothing, which is useless for a doctor trying to make a decision. But that’s not the only way you can rig your data. You can also rig your data by making the thing you compare your new drug against really rubbish. You can give the competing drug in too low a dose, so that people aren’t properly treated. You can give the competing drug in too high a dose, so that people get side effects. And this is exactly what happened with antipsychotic medication for schizophrenia. Twenty years ago, new generations of antipsychotic drugs were brought in, and the promise was that they would have fewer side effects. So people set about doing trials of these new drugs against the old drugs, but they gave the old drugs in ridiculously high doses: 20 milligrams a day of haloperidol. And it’s a foregone conclusion that if you give a drug at that high a dose, it will have more side effects and your new drug (risperidone) will look better. In 1996 in Kano, Nigeria, the drug company Pfizer tested a new antibiotic on 200 children during a meningitis outbreak, comparing it against a competing antibiotic that was known to be effective, but given at a dose lower than the effective one. Eleven children died, divided almost equally between the two groups, and the participants were not told that the competing antibiotic, at the effective dose, was available from Médecins Sans Frontières in the building next door. Pfizer failed to get proper consent for the trial, and the experimental drug trovafloxacin was eventually nixed for meningitis because it carried a risk of complications.

_

Bad Pharma is a book by British physician and academic Ben Goldacre about the pharmaceutical industry, its relationship with the medical profession, and the extent to which it controls academic research into its own products. The book was first published in September 2012 in the UK by the Fourth Estate imprint of HarperCollins. It was published in the United States in February 2013 by Faber and Faber. Goldacre argues in the book that “the whole edifice of medicine is broken,” because the evidence on which it is based is systematically distorted by the pharmaceutical industry. He writes that the industry finances most of the clinical trials into its products, that it routinely withholds negative data, that trials are often conducted on small groups of unrepresentative subjects, that it funds much of doctors’ continuing education, and that apparently independent academic papers may be planned and even ghostwritten by pharmaceutical companies or their contractors, without disclosure.  Goldacre calls the situation a “murderous disaster,” and makes a number of suggestions for action by patients’ groups, physicians, academics and the industry itself. Goldacre writes in the introduction that the book aims to defend the following paragraph:

_

Doctors generally want to do the best for their patients, but they can’t know what that is if half of the data on clinical trials of drugs is missing and some of the rest is distorted. The editors of medical journals want to publish good research but know that when companies test their shiny new drug against other treatments, they don’t always play fair. The vital comparison may be made against a placebo (Goldacre gives a harrowing account of how such a trial led to children in India dying when there was a perfectly good drug to treat them) or against unusually low or abnormally high doses of the drug – to ensure suitable conclusions as to efficacy and the severity of side-effects. It’s no surprise that most published trials funded by drug companies show positive results.

_

Missing Data: Withholding data:

_

Classical example of withholding data:

A Mr. X is accused of murdering Mr. Y in Mumbai on December 12, 2009. The case is in court. The prosecution brought a witness who saw Mr. X coming out of Mr. Y’s home with a knife in his hand and blood-stained clothes. Mr. X is convicted of the crime. The prosecution did not bring three other witnesses who saw Mr. X in Delhi on December 12, 2009. The prosecution withheld data that did not suit its hypothesis and got an innocent man convicted. In a legal case, there is a defense attorney who can bring these three witnesses to save his client; but in medical research by the pharmaceutical industry, there are no defense lawyers for the innocent population, and so when negative data are withheld, a bogus drug is marketed with the hope of curing an illness at the cost of the lives of innocent people.

_

The bias towards positive evidence:

It is the peculiar and perpetual error of the human understanding to be more moved and excited by affirmatives than by negatives.

Francis Bacon

I have nothing to add except to say that such bias suppresses studies with negative outcomes, and as a result, bogus drugs are marketed with little efficacy and a host of side effects.

_

It’s no surprise that, overall, industry-funded trials are four times more likely to give a positive result than independently sponsored trials. The negative data go missing in action; they are withheld from doctors and patients. And this is the most important aspect of the whole story. It sits at the top of the pyramid of evidence: we need all of the data on a particular treatment to know whether or not it really is effective. Take a drug called reboxetine. If you read the published trials on this drug, they were all positive and all well conducted. Unfortunately, it turned out that many of the trials were withheld; in fact, 76 percent of all of the trials done on this drug were withheld from doctors and patients. And this is not an isolated story. Around half of all of the trial data on antidepressants has been withheld, but it goes way beyond that. The Cochrane Group tried to get hold of those data to bring them all together, but the companies withheld the data from them, and so did the European Medicines Agency for three years. This is a problem that currently lacks a solution. And to show how far it goes, consider a drug called Tamiflu (oseltamivir), on which governments around the world have spent billions and billions of dollars. They spent that money on the promise that this is a drug which will reduce the rate of complications of H1N1 flu. We already have data showing that it reduces the duration of your flu by a few hours, but governments don’t care about that. We prescribe these drugs, we stockpile them for emergencies, on the understanding that they will reduce the number of complications, which means pneumonia, and which means death. The infectious diseases Cochrane Group, which is based in Italy, has been trying to get the full data in a usable form out of the drug companies, so that they can make a full decision about whether this drug is effective or not, and they have not been able to get that information.

_

The clinical trials undertaken by drug companies routinely reach conclusions favourable to the company. For example, in 2007 researchers studied every published trial on statins, drugs prescribed to reduce cholesterol levels. In the 192 trials that were studied, industry-funded trials were 20 times more likely to produce results that favoured the drug. These positive results are achieved in a number of ways. Sometimes the industry-sponsored studies are flawed by design (for example by comparing the new drug to an existing drug at an inadequate dose), and sometimes the patients are selected to make a positive result more likely. In addition, the data is studied as the trial progresses, and if the trial seems to be producing negative data it is stopped prematurely and the results are not published, or if it is producing positive data it may be stopped early so that longer-term effects are not examined. This publication bias, where negative results remain unpublished, is endemic within medicine and academia.  As a consequence, doctors may have no idea what the effects are of the drugs they prescribe.  
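To see how publication bias alone can manufacture a favourable literature, here is a small simulation sketch; all numbers are invented, and the drug being simulated has no effect whatsoever:

```python
import random
random.seed(1)

# Simulate 1,000 small trials of a drug with NO true effect.
# Each trial's estimated effect is just noise around zero.
n_trials, se = 1000, 1.0
estimates = [random.gauss(0.0, se) for _ in range(n_trials)]

# Suppose only trials with a "positive, significant" result get published
# (estimate more than 1.96 standard errors above zero).
published = [e for e in estimates if e > 1.96 * se]

print("true effect:               0.0")
print(f"mean of ALL trials:        {sum(estimates)/len(estimates):+.2f}")
print(f"mean of PUBLISHED trials:  {sum(published)/len(published):+.2f}")
print(f"fraction published:        {len(published)/n_trials:.1%}")
```

With this setup roughly one trial in forty is published, and the published trials average an effect of more than two standard errors above zero, for a drug that does nothing at all.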

_

Manipulating volunteers:

New drugs move from animal testing through phase 1 (first-in-man), phase 2, and phase 3 clinical trials. Phase 1 participants are referred to as volunteers, but in the US they are paid $200–$400 per day, and because studies can last several weeks and subjects may volunteer several times a year, earning potential becomes the main reason for participation. Participants are usually drawn from the poorest groups in society, and outsourcing increasingly means that trials may be conducted by contract research organisations (CROs) in countries with highly competitive wages. The rate of growth for clinical trials is 20 percent a year in India, 27 percent in Argentina, and 47 percent in China, while trials in the UK have fallen by 10 percent a year and in the US by six percent. The shift to outsourcing raises issues about data integrity, regulatory oversight, language difficulties, the meaning of informed consent among a much poorer population, the standards of clinical care, the extent to which corruption may be regarded as routine in certain countries, and the ethical problem of raising a population’s expectations for drugs that most of that population cannot afford. It also raises the interesting question of whether the results of clinical trials using one population can invariably be applied elsewhere. There are both social and physical differences: patients diagnosed with depression in China may not really be the same as patients diagnosed with depression in California, and people of East Asian descent can metabolize some drugs differently from Westerners. There have also been cases of available treatment being withheld during clinical trials.

_

Manipulating regulators:

A regulator – such as the Medicines and Healthcare products Regulatory Agency (MHRA) in the UK, or the Food and Drug Administration (FDA) in the United States – ends up advancing the interests of the drug companies rather than the interests of the public. This happens for a number of reasons, including the revolving door of employees between the regulator and the companies, and the fact that friendships develop between regulator and company employees simply because they have knowledge and interests in common. There are also issues of surrogate outcomes, accelerated approval, and the difficulty of having ineffective drugs removed from the market once they have been approved. Regulators do not require that new drugs offer an improvement over what is already available, or even that they be particularly effective.

_

Manipulating trials:

Clinical trials can be flawed both by design and by analysis, with the effect of maximizing a drug’s apparent benefits and minimizing its apparent harms. There have been instances of fraud, though these are rare. More common are the “wily tricks, close calls, and elegant mischief at the margins of acceptability.” These include testing drugs on unrepresentative, “freakishly ideal” patients; comparing new drugs to something known to be ineffective, or effective at a different dose or if used differently; conducting trials that are too short or too small; and stopping trials early or late. They also include measuring uninformative outcomes; packaging the data so that they mislead; ignoring patients who drop out (using per-protocol analysis rather than intention-to-treat analysis); changing the main outcome of the trial once it has finished; producing subgroup analyses that show apparently positive outcomes for certain tightly defined groups (such as Chinese men between the ages of 56 and 71) in order to hide an overall negative outcome; and conducting “seeding trials,” where the objective is to persuade physicians to use the drug.
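The subgroup-analysis trick mentioned above is easy to demonstrate. In this sketch (all data simulated, nothing real), a trial of a completely useless drug shows essentially no overall effect, yet fishing through ten arbitrary age bands still turns up one that looks impressive purely by chance:

```python
import random, statistics
random.seed(7)

# One simulated trial of a useless drug: the outcome is pure noise,
# completely unrelated to treatment.
patients = [{"treated": random.random() < 0.5,
             "age_band": random.randint(0, 9),   # ten arbitrary age bands
             "outcome": random.gauss(0, 1)}
            for _ in range(2000)]

def mean_diff(group):
    treated = [p["outcome"] for p in group if p["treated"]]
    control = [p["outcome"] for p in group if not p["treated"]]
    return statistics.mean(treated) - statistics.mean(control)

# The honest, overall answer: essentially zero.
print("overall difference:", round(mean_diff(patients), 3))

# The fishing expedition: test each age band separately, report the "best".
diffs = {band: mean_diff([p for p in patients if p["age_band"] == band])
         for band in range(10)}
best = max(diffs, key=lambda b: abs(diffs[b]))
print("most 'impressive' subgroup:", best, "difference:", round(diffs[best], 3))
```

The smaller the subgroups and the more of them you test, the larger the most extreme difference will be, which is exactly why a pre-specified analysis plan matters.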

_

Pharmaceutical companies may be financing drug studies in order to influence their outcomes, say researchers writing in Deutsches Ärzteblatt International. The findings confirm the conclusions of two previous reviews, published in 2003, which looked at the pharmaceutical industry’s influence on research. The researchers studied 57 publications obtained from a systematic Medline search covering November 1, 2002 to December 16, 2009; selected studies were evaluated by two of the authors, and the 57 papers were supplemented by studies found in their reference sections. They say pharmaceutical companies exploit a wide variety of possibilities for manipulating study results. Apart from financing the study itself, financial links to the authors, such as payments for lectures, tend to make the results of the study more favorable for the company. Not only the results themselves, but also their interpretation, are significantly more often in accordance with the wishes of the sponsor. In some publications, the authors detected evidence that sponsors from the pharmaceutical industry had influenced study protocols; for example, placebos were used more frequently in industry-financed drug studies than in independently financed studies, and some favorable effects were linked to financial support from the pharmaceutical industry. They note that 25% of researchers and two-thirds of academic institutions have industry ties, meaning such ties are quite normal, and that the methodological quality of studies with industrial support tended to be better than that of government-funded drug studies. It may be a matter of nuance: the researchers said that 23 of 26 studies (out of the 57 publications analyzed) seemed to show results favorable to the sponsor, but in only 4 cases was this obviously apparent (versus 3 cases where it obviously was not).

_

How drug companies manipulate drug trials:

•Conduct a trial of your drug against a treatment known to be inferior.

•Trial your drugs against too low a dose of a competitor drug.

•Conduct a trial of your drug against too high a dose of a competitor drug (making your drug seem less toxic).

•Conduct trials that are too small to show differences from competitor drugs.

•Use multiple endpoints in the trial and select for publication those that give favourable results.

•Do multicentre trials and select for publication results from centers that are favourable.

•Conduct subgroup analyses and select for publication those that are favourable.

•Present results that are most likely to impress—for example, reduction in relative rather than absolute risk.

_

Statistical manipulation:

Relative Risk promoted and Absolute Risk hidden:

Let’s say that drug A adds an average of 2 days to the lifespan of trial subjects, while another drug adds an average of 1 day. Neither is an impressive result, especially when, as is so often the case, that extra day is bought at enormous financial cost and with adverse effects. Pharmaceutical researchers get around that by claiming that drug A doubles the survival benefit! They base this on the fact that 2 days is twice as long as 1 day. It’s a lie by statistics. Though Relative Risk is appropriate in some instances, it is never a valid measure of efficacy or safety when comparing a drug against a placebo or another drug. Outcomes presented in terms of relative risk reduction exaggerate the apparent benefits of the treatment. For example, if four people out of 1,000 will have a heart attack within the year, but on statins only two will, that is a 50 percent reduction if expressed as relative risk reduction; expressed as absolute risk reduction, it is a reduction of just 0.2 percent. Relative Risk is a useful tool in epidemiological studies that is misused in drug trials. Applying epidemiological tools to drug trials is a common trick.
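Using the statin numbers from the paragraph above, the gap between the two framings takes only a few lines of arithmetic to check; the number needed to treat is included here as the standard companion statistic, though it is not mentioned in the text:

```python
# From the paragraph above: 4 of 1,000 untreated people have a heart
# attack within a year, versus 2 of 1,000 on the drug.
control_risk = 4 / 1000     # 0.4%
treated_risk = 2 / 1000     # 0.2%

rrr = (control_risk - treated_risk) / control_risk   # relative risk reduction
arr = control_risk - treated_risk                    # absolute risk reduction
nnt = 1 / arr                                        # number needed to treat

print(f"Relative risk reduction: {rrr:.0%}")   # 50% -- the headline number
print(f"Absolute risk reduction: {arr:.1%}")   # 0.2% -- the honest number
print(f"Number needed to treat:  {nnt:.0f}")   # 500 people for one year
```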

_

P-Value:

The P-Value is often presented as a rating of how likely a study’s result was to arise by chance. The lower the number, the better, because it implies that the probability of getting the observed result by chance alone is low. Strictly, the p-value is the probability of obtaining a result at least as extreme as the one observed if the treatment actually had no effect. For example, in a study of fever treated by paracetamol (acetaminophen), a p-value of 0.05 means that if paracetamol did nothing, a difference as large as the one observed would turn up by chance in about 5 of 100 such trials, a 1-in-20 chance. That sounds pretty good. What you don’t know, though, is how often similar trials have been done with negative results. What if you were to learn that 20 such trials had been unable to attain the same result? That puts the p-value in a different focus. When the odds are 1 in 20, then achieving the result once out of 20 trials is hardly surprising. Why is this relevant? Because the vast majority of trials are never published! It isn’t unrealistic to assume that a large number of failed trials have been produced and hidden away. P-values have little meaning unless you’re also privy to the full range of studies performed on the same topic. What pharmaceutical companies do is show the study with the low p-value and conceal the many studies with high p-values.
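The “20 trials” arithmetic above is short enough to verify directly; a sketch:

```python
# If a useless drug is tested in 20 independent trials, the chance that at
# least one of them reaches p < 0.05 purely by chance:
alpha, n_trials = 0.05, 20
p_at_least_one = 1 - (1 - alpha) ** n_trials
print(f"{p_at_least_one:.0%}")   # roughly 64%

# So publishing the one "significant" trial and shelving the other nineteen
# manufactures evidence out of pure noise.
```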

_

Statistical Undermining of Medical Research:

This is a sampling of some common statistical and study-design tricks brought to bear in medical pseudoscience. All are used, and not rarely; the study that doesn’t use at least one of these statistical tricks is probably the exception. The fact that the “gold standard” of randomized double-blind placebo-controlled trials has come to rely on statistical manipulation clarifies the fraudulent nature of the concept. Close examination of the gold standard behind evidence-based medicine shows that the so-called evidence doesn’t exist. The gold standard of medicine is actually tinsel.

_

Marketing manipulations:

Doctors are persuaded to prescribe brand-name drugs that are no more effective than significantly cheaper off-patent or generic ones. Then there is the medicalization of certain conditions, whereby pharmaceutical companies “widen the boundaries of diagnosis” before offering solutions. Female sexual dysfunction was highlighted in 1999 by a study in the Journal of the American Medical Association, which alleged that 43 percent of women were suffering from it; later, two of the three authors declared that they had undertaken consultancy work for Pfizer, which at the time was preparing to launch female Viagra. Then there is celebrity endorsement of certain drugs, advertising claims aimed at doctors, and direct-to-consumer advertising. Then there is the influence of drug reps, the ghostwriters employed by drug companies to write papers for academics to publish, the question of how independent the academic journals really are, the drug companies’ financing of doctors’ continuing education, and the patients’ groups that are often funded by industry.

_

How marketing manipulations were used to phenomenally increase the diagnosis of conditions for which an SSRI prescription ensued:

Marketing drugs is highly complicated because there’s no direct pipeline from the manufacturer to the end-user. Lots of folks control and manipulate the pathway that delivers the drug eventually to the mouth or body of the patient–most notably the physician who has to write the prescription, but also insurance companies, government regulators, managers of hospital formularies, and numerous other players. The key to selling drugs successfully today is first, to be able to control the channel: Pharmaceutical manufacturers, like other marketing-driven enterprises, have realized that it is less in the product, the brand or even the patent where their fortunes lie, but in the stream, the marketing channel. Once you control the channel, you can insert any product you like into it, no matter how useless or dangerous. But, if Rule One is control the channel, Rule Two is not to appear obviously to do so. Modern marketing lingo is full of words about “synergy” and so forth. The basic idea is to figure out ways to bend other parties to do what you want them to, by appealing in each case to their own interests, so that you never appear to be using power over them but always with and alongside them. It is in pursuit of this goal that pharmaceutical companies patiently and painstakingly unleash so many different measures at so many different levels, at such a high expense, as part of their strategy to market any given drug. Applbaum summarizes all the elements, for example, that went into marketing the SSRI antidepressants: “combined marketing and R&D divisions created and publicized research to demonstrate the efficacy of the drug; obtained academic ‘key opinion leader’ (KOL) endorsements for professional audiences (people whose careers and pocketbooks improved simultaneously); aired celebrity spokespeople and advertising to educate the lay public about the disease; lavishly funded antistigma campaigns; promoted among family doctors the use of abridged depression questionnaires and educated, and thus empowered, these doctors (and eventually their non-MD assistants) to look for telltale signs of depression and treat it; enrolled (in some cases, also bankrolled) the support of patient advocacy groups and solicited testimonials from among them; generated certified guidelines formulated and endorsed by psychiatrists in the employ of industry, to be adopted by hospital formularies and public insurance programs; took a lead role in determining the curriculum and scientific programs at continuing medical education programs and professional congresses; designed Web sites with diagnostic self-tests encouraging consumers along the path from self-diagnosis to the request of medication at the doctor’s office…. The result was a phenomenal increase in the diagnosis of conditions for which a prescription of an SSRI ensued.” The way the company conducts this complex orchestra “results in the ability to pull all those disparate strings so that it looks like there is a consensus firming up from multiple locations, and hence it is not manipulated, cannot possibly be manipulated. One of the reasons the manipulation is obscure to us is that it seems as if they are not supervising the process at every step, because their hands are off the process so much of the time–And how, anyway, could they control the entire system?”   

_

Changing goal posts:

One of the most common tricks for giving an impression of a treatment’s benefit is to move the goal posts. Heart disease is a prime example. The goal of the individual is to maintain or regain health: the patient wants to keep breathing freely, to be without pain, and to avoid heart attacks, strokes, and their negative effects. The goal of a pharmaceutical corporation is different; it is to sell drugs. However, it’s much harder to demonstrate that a drug can prevent a heart attack than to show it can change some marker that may be associated with an increased risk of heart attack, even though changing that marker may have no benefit for the real goal of preventing heart attacks. Cholesterol is often used as a stand-in for heart attacks in drug studies. Any drug that reduces cholesterol slightly is marketed as a drug that reduces heart attacks and thereby saves lives. There are hundreds of drugs on the market that change some biochemical marker in the blood or alter some physiological process and are promoted as drugs that reduce heart attacks, strokes, cancers, memory loss, fractures etc., where double-blind RCTs by independent researchers found no evidence for such claims.

_

Changing Endpoints:

The problem of moving goalposts is similar, but goalposts are moved before a study starts; Changing Endpoints is applied after it’s found that the results of a study are not to the researchers’ (or pharmaceutical company’s) liking, so they change the endpoint to one the data make look good. The problem with this technique is that the study wasn’t designed to investigate the redefined endpoint, so the result may not be meaningful. Changing Endpoints is, however, a common technique when the original results aren’t what the researchers or pharmaceutical firm wanted to see—it ensures that we don’t get to see the undesirable result.

_

Changing Treatment Duration:

The Changing Treatment Duration trick is becoming more common. It’s used to hide adverse effects or loss of efficacy, and is generally implemented by shortening a trial’s length. When study researchers suddenly stop a trial, it’s usually with the claim that the results were so good it would be unethical to continue the trial and deprive the placebo group of treatment benefits. This game was used to great effect in gaining approval and popularity for hormone replacement therapy. Ultimately, the stunt resulted in immense harm to thousands, possibly millions, of women. A major trial was stopped while benefits seemed apparent, but just before the harm of HRT would show up.
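The hazard of stopping early can be simulated directly. This sketch uses invented numbers, not the HRT trial itself: a drug with no effect is tested while the researchers “peek” at the accumulating data every 25 patients per arm and stop the moment the result crosses nominal significance:

```python
import math, random
random.seed(42)

# A trial of a drug with NO effect: each step adds one treated and one
# control patient, with outcomes that are pure noise.
def trial_stops_early(max_n=200, look_every=25):
    diff_sum = 0.0
    for n in range(1, max_n + 1):
        diff_sum += random.gauss(0, 1) - random.gauss(0, 1)
        if n % look_every == 0:                  # "peek" at the interim data
            z = diff_sum / math.sqrt(2 * n)      # z-statistic under the null
            if abs(z) > 1.96:                    # looks like p < 0.05...
                return True                      # ...so stop, declare success
    return False

sims = 10_000
rate = sum(trial_stops_early() for _ in range(sims)) / sims
print(f"false positives with peeking: {rate:.1%}")   # well above the nominal 5%
```

With eight peeks, the false-positive rate lands well above the nominal 5 percent, which is why legitimate interim analyses require pre-specified, corrected stopping rules.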

_

Short Term Trials:

Trials that are short when the drug or treatment is likely to be used long term, as is the goal in most cases, are obviously inadequate. They are, though, the rule. Even when the use of a drug is meant to be short term, the long-term effects still need to be addressed: short-term beneficial effects must be balanced against the potential for long-term harm.

_

Drug trial results are reported in “Person-Time”:

The concept of Person-Time—person-days, person-weeks, person-months, or person-years—was developed in epidemiological research, where it’s useful in measuring incidence in a population. In an epidemiological context, viewing incidence in terms of person-years, rather than in terms of the number of people, can be of value, because it deals with the problem of people moving in and out of a study population; a rate calculated this way is called an incidence rate (as distinct from cumulative incidence, or incidence proportion, which is based on the number of people). In drug trials, the number of subjects is, or should be, fixed. If people are moving in and out of such trials, the reasons are often related to the condition or drug being studied, and the use of Person-Time can hide those reasons. What we, as consumers, need to know—and what doctors, as providers of medical treatments, need to know—is how many people are likely to benefit from a treatment, not the benefit per person-day, -month, or -year. Yet, more and more, drug study results are given in terms of Person-Time. It makes no sense, unless the purpose is to mislead.
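
A hypothetical two-arm comparison (all numbers invented) shows how person-time reporting can make very different trials look identical:

```python
# Two hypothetical trial arms, each contributing 1,000 person-years and 20
# events, so both report the same rate of 0.02 events per person-year:
#   Arm A: 100 people, all followed for the full 10 years.
#   Arm B: 500 people -- 400 dropped out after 1 year (side effects? lack of
#          benefit?) and only 100 lasted 6 years.
arms = {
    "A": dict(events=20, people=100, person_years=100 * 10),
    "B": dict(events=20, people=500, person_years=400 * 1 + 100 * 6),
}
for name, a in arms.items():
    rate = a["events"] / a["person_years"]
    print(f"Arm {name}: {rate:.3f} events/person-year "
          f"({a['people']} people, {a['person_years']} person-years)")
# Identical rates -- yet Arm B churned through five times as many people,
# and person-time reporting hides why 80% of them left.
```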

_

Intention to Treat in drug trials:

Like Person-Time reporting, Intention to Treat comes from epidemiology. It’s an analysis based on the treatment initially intended, not on what was finally done. It should seem obvious that Intention to Treat has absolutely no place within drug trials. Let’s take, for example, a trial of the imaginary drug, Rojiv. The intent is to give all non-placebo subjects a certain dose of Rojiv. If, however, the treatment is changed for some subjects for any reason at all, the results tell us nothing about the efficacy of Rojiv in those who actually took it. Intention to Treat reporting would nonetheless give the results as if everyone had received Rojiv. Hard as it is to believe, some drug trials game the system by using this obviously bogus method of reporting results.
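
To see this complaint in numbers, here is a hedged toy calculation (all figures invented, reusing the imaginary drug Rojiv from the paragraph above). Suppose Rojiv, when actually taken, halves event risk, but 40% of the Rojiv arm is switched to something else mid-trial:

```python
assigned = 1_000                      # subjects assigned to the Rojiv arm
took_drug, switched = 600, 400        # 400 never completed Rojiv treatment
risk_on_drug, risk_off_drug = 0.05, 0.10

events = int(took_drug * risk_on_drug) + int(switched * risk_off_drug)  # 30 + 40

itt_rate = events / assigned                                 # analysed "as assigned"
as_treated_rate = int(took_drug * risk_on_drug) / took_drug  # "as actually taken"

print(f"Intention-to-treat event rate: {itt_rate:.1%}")         # 7.0%
print(f"As-treated event rate:         {as_treated_rate:.1%}")  # 5.0%
# ITT answers "what happens to people we *intend* to give Rojiv?", not
# "what does Rojiv do to people who actually take it?"
```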

_

Publication Control:

The fact is that trials controlled or run by pharmaceutical corporations are almost never published unless the results are to the company’s liking. The FDA can’t get its hands on them when the pharma company refuses. They justify this on the basis of proprietary rights, by saying that the information is protected as a trade secret. It’s unconscionable, but it’s routine practice.

Publication Control consists of a range of tricks, including:

  • Not publishing negative results.
  • Withholding trial data while publishing claimed results, making it impossible to verify the legitimacy of claimed results.
  • Using ghost writers to produce the articles for publication, thus turning the researchers into cogs in the research mill by removing their names from the process.
  • Using famous names as authors, even though they had nothing to do with the project, thus giving an illegitimate impression of authority and helping assure publicity.

________

Doctors are also at fault:

Poor research and bias cannot be laid simply at the door of drug companies. The BMJ revealed earlier this year that half of publicly funded research in America wasn’t published within the required time period. Doctors are often resistant to the notion that they could ever be influenced by ads and sponsorship, even though the evidence to the contrary is overwhelming. They also rely on education paid for by drug companies because (unusually among professionals) they are loath to pay for it themselves. The BMJ is revising its declaration-of-interest forms to say that it will seek to work with doctors who have not received financial hand-outs from drug companies (funding for research is different). But pharmaceutical companies are, after all, not charities. They exist to make and sell drugs, some of which work well, and to make a profit for their shareholders. They may talk as if they want to improve healthcare, and sometimes they mean it, but only proper regulation by external agencies will make any difference. There is evidence that companies spend much more on marketing than they do on research and development (in America 24.4% of the sales dollar is spent on promotion versus 13.4% on research and development). They also inflate the cost of developing new drugs: companies claim that it costs £550m to bring a new drug to market, but others put the figure at a quarter of that. In India, ciprofloxacin was manufactured at half a rupee per tablet and sold at 20 rupees per tablet at the time of launch, because the company hyperinflated the cost of manufacturing.

_

According to a new database created by ProPublica, seven drug companies paid $282 million to more than 17,000 doctors during 2009-2010.

The drug companies included in the database are:

  1. Cephalon
  2. Johnson & Johnson
  3. Merck
  4. Pfizer
  5. GlaxoSmithKline
  6. AstraZeneca
  7. Eli Lilly

_

Can the source of funding for Medical Research affect the results?

Many clinical research studies are funded by pharmaceutical companies, and there is a general perception that such industry funding could potentially skew the results in favor of a new medication or device. The rationale underlying this perception is fairly straightforward: pharmaceutical companies or device manufacturers need to increase the sales of newly developed drugs or devices in order to generate adequate profits, and it would be in their best interest to support research that favors their corporate goals. Even though this rationale makes intuitive sense, it does not by itself prove that industry funding influences the results of trials. However, there is also data showing that the funding source does correlate with the outcomes of clinical trials. One such study was conducted by Paul Ridker and Jose Torres and published in 2006 in JAMA (the Journal of the American Medical Association). Ridker and Torres analyzed randomized cardiovascular trials published in leading, peer-reviewed medical journals (JAMA, The Lancet, and the New England Journal of Medicine) during the five-year period 2000-2005 in which one treatment strategy was directly compared to a competing treatment. They found that 67.2% of studies funded exclusively by for-profit organizations favored the newer treatment, whereas only 49.0% of studies funded by non-profit organizations (such as non-profit foundations and state or federal government agencies) showed results in favor of the newer treatment. This contrast was even more pronounced for pharmaceutical drugs, where 65.5% of the industry-sponsored studies showed benefits of the newer treatment, while only 39.5% of non-profit-funded studies favored the new treatment. One argument that is repeatedly mentioned in defense of the high prevalence of positive findings in industry-funded studies is the publication bias of journals: editors and peer reviewers may give preference to articles that show positive findings with new therapies. However, the analysis by Ridker and Torres demonstrated that these journals did publish a substantial number of “negative studies”, in which the new therapy was not superior to the established standard of care. Studies such as the one by Ridker and Torres, the recognition that pharmaceutical companies provide various kinds of incentives for physicians to promote newer therapies, and the realization that industry sponsors may perform selective analyses of clinical trial data to exaggerate the benefits of certain drugs have all contributed to the perception that industry funding could skew the results in favor of a drug or device made by the sponsor. This is precisely why most leading medical journals now require an exact description of the funding sources and of any potential financial interests the authors of a research article may have. The disclosures are usually described in depth towards the end of the full-length article, but some journals even indicate funding sources in the brief abstract. This allows the readership to consider the funding source and the authors’ potential financial interests when evaluating the results and conclusions of a clinical trial.
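
The gap Ridker and Torres found is starker when expressed as odds ratios, which can be computed directly from the percentages quoted above (a back-of-the-envelope sketch, not a re-analysis of their data):

```python
def odds(p):
    """Convert a proportion to odds."""
    return p / (1 - p)

comparisons = {
    "All trials (for-profit vs. non-profit)":  (0.672, 0.490),
    "Drug trials (for-profit vs. non-profit)": (0.655, 0.395),
}
for label, (industry, nonprofit) in comparisons.items():
    print(f"{label}: odds ratio = {odds(industry) / odds(nonprofit):.2f}")
# Industry-funded trials were roughly two to three times as likely, in odds
# terms, to favour the newer treatment.
```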

_

How corrupted Drug Companies deceive and manipulate Doctors through manipulating journals:

Game 1: Unwanted results are not published:

There are simply thousands of scientific studies out there that have never been seen by you or your physician because they have been screened out by editors and reviewers who are being paid to uphold an industry agenda. Published studies overwhelmingly favor the funding company’s drug. Whichever drug is manufactured by the study sponsor is the drug that comes out on top, 90 percent of the time!  Given this, how can medical journals be considered unbiased?

Game 2: Bad results are submitted as good:

When a scientific study has findings that cast doubt on the efficacy of a drug, oftentimes the negative findings are morphed into positive ones. For example, in 2008, FDA officials analyzed a registry of 74 antidepressant trials, which included trials that were published and those that were not. The FDA’s findings were then written up in an article in the New England Journal of Medicine. This is what they found:

• 38 of the trials reported positive results, and 37 of the 38 were published.

• 36 trials had negative or questionable findings. Of the 36, 22 were not published at all, and 11 were published in a way that conveyed the results as though they were positive.

So, if you just went to the published literature, it would look like 94 percent of the studies were positive, when in reality only about 50 percent were positive … equivalent to a coin toss.
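
The 94-percent figure follows directly from the counts above; the arithmetic is worth spelling out (the 3 trials published as negative are inferred from the totals):

```python
positive_trials, positive_published = 38, 37
negative_trials, negative_unpublished, negative_spun = 36, 22, 11
negative_published_as_negative = negative_trials - negative_unpublished - negative_spun  # 3

published = positive_published + negative_spun + negative_published_as_negative  # 51
looks_positive = positive_published + negative_spun                              # 48

print(f"Published trials that look positive: {looks_positive / published:.0%}")   # 94%
print(f"All trials that actually were positive: "
      f"{positive_trials / (positive_trials + negative_trials):.0%}")             # 51%
```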

For statins, the odds that the funding company’s drug will come out on top are staggering:

•The odds that the funding company’s statin drug will come out looking better than anyone else’s statin in the “results” section of the article are 20:1.

•The odds that the funding company’s statin will come out on top in the “conclusions” part of the article are 35:1.

So, even if they can’t make the results look good, they can often find a way to twist the conclusions so that their drug appears favorable.

Game 3: A Favorable study is submitted Multiple Times:

When a study yields positive results, it is often submitted multiple times in such a way that the reader doesn’t realize it’s the same study, obscured by different author lists and different details. Analysts have had to look very carefully to determine which studies are actually duplicates, because they are so cleverly disguised.

Game 4: Follow-Up reviews done by biased Experts:

The editorials that follow a study, submitted by so-called unbiased experts and published in reputable journals, are often written by non-neutral parties who have a financial tie to the drug maker.

Game 5: Ghostwriting:

Many of the articles that appear in medical journals purportedly written by well-known academics are actually written by unacknowledged ghostwriters on drug-company payrolls.

Game 6: Journal bias:

Medical journals are generally considered by medical practitioners to be a source of reliable information. But medical journals are also businesses. Three editors, who agreed to discuss finances only if they remained anonymous, said a few journals that previously measured annual profits in the tens of thousands of dollars now make millions annually. The truth is that drug companies have become quite adept at manipulating and brainwashing practitioners of conventional medicine. They influence the very heart and center of the most respected medical journals, creating dogma and beliefs that support the drug paradigm because it is blessed by the pinnacle of scientific integrity: the prestigious peer-reviewed medical journal. Peer-reviewed medical journals contain advertisements that are almost exclusively for drugs, amidst articles that are biased toward promoting those drugs. If you have looked through a medical journal lately, you’ll have seen full-page Pharma glossies, cover to cover. Pharmaceutical companies spend almost twice as much on marketing as they spend on research! In 2003, drug companies spent $448 million on advertising in medical journals. It has been calculated that the return on investment on medical journal ads is between $2.22 and $6.86 for every dollar spent, with larger and older brands at the higher end. Long-term returns may be even higher when you consider that one ad viewed by a physician could result in hundreds or even thousands of drug purchases, based on the prescriptions he or she writes. The term “peer review” has come to imply scientific credibility. But the fact is that many peer reviewers are on drug companies’ payrolls, and those who are not are unlikely to detect flawed research or outright fraud. Medical journals are the number one source of medical information for physicians: nearly 80 percent of physicians use medical journals for their education, which exceeds information from any other source. Do you really want to blindly take the advice of a physician whose only source of medical information is a medical journal engaged in such profound conflicts of interest?

Game 7: Drug Companies masquerading as educators:

The education of medical students and residents also comes through the filter of the drug industry, which seeks to groom them before they even finish medical school. According to Dr. Golomb’s data, drug companies now spend $18.5 billion per year promoting their drugs to physicians. That amounts to $30,000 per year for every physician in the U.S.! And drug companies are allowed to develop their own education curriculum for medical students and residents, lavishing them with gifts and indirectly paying them to attend meetings and events where the company’s products are promoted.

_

So we see journals published by professional societies (e.g. the British Medical Association), where the pharmaceutical companies provide the funds to obtain the results they want, and the academics and medical professionals provide the writing, reviewing, and promotion of those results to doctors, students, patients, and the media.

_

Fraud in medical journals:

___________

Statin’s status:

Statins (LDL-reducing medicines) are the best-selling medicines in the history of modern pharmaceuticals. The market for statins was $26 billion in 2005, and sales of Lipitor alone reached $14 billion in 2006. Merck and Bristol-Myers Squibb are actively seeking “over-the-counter” (OTC) status for their statin drugs. It is a billion-dollar drug range, and these companies will do anything to keep up the myth that cholesterol is bad for us and to link it to disease. Statins are prescribed to men and women, children and the elderly, people with heart disease and people without heart disease. In fact, these drugs have a reputation for being so safe and effective that one UK physician, John Reckless, has suggested that we put statins in the water supply. But recent research, much of it coming from independent researchers, is telling us that this is little more than a scam on a massive scale. Still, one cannot help but pity patients given the confusing advice that not only is “bad” cholesterol actually bad, because “new insights” tell us so, but that some “good” cholesterol is actually “bad”, or can go “bad”; we are even told that statins which reduce “bad” cholesterol also reduce the risk of certain cancers. What methods do these companies adopt to get the results they want from clinical trials?

_

What independent research suggests is as follows:

Statins don’t increase survival in healthy people:

Statins have never been shown to be effective in reducing the risk of death in people with no history of heart disease. No study of statins on this “primary prevention population” has ever shown reduced mortality in healthy men and women with only an elevated serum cholesterol level and no known coronary heart disease. (CMAJ. 2005 Nov 8;173(10):1207; author reply 1210.) Statins do reduce the risk of heart attacks in this population, but the reduction is relatively modest.

_

Statins don’t increase survival in women:

Despite the fact that around half of the millions of statin prescriptions written each year are handed to female patients, these drugs show no overall mortality benefit regardless of whether they are used for primary prevention (women with no history of heart disease) or secondary prevention (women with pre-existing heart disease). In women without coronary heart disease (CHD), statins fail to lower both CHD and overall mortality, while in women with CHD, statins do lower CHD mortality but increase the risk of death from other causes, leaving overall mortality unchanged. (JAMA study)

_

Statins don’t increase survival in the elderly:

The only statin study dealing exclusively with seniors, the PROSPER trial, found that pravastatin did reduce the incidence of coronary mortality (death from heart disease). However, this decrease was almost entirely negated by a corresponding increase in cancer deaths. As a result, overall mortality in the pravastatin and placebo groups after 3.2 years was nearly identical. This is a highly significant finding, since the rate of heart disease in 65-year-old men is ten times higher than in 45-year-old men. The vast majority of people who die from heart disease are over 65, and there is no evidence that statins are effective in this population.

_

Do statins work for anyone?

Among people with CHD or considered to be at high risk for CHD, the effect of statins on the incidence of CHD mortality ranges from virtually none (in the ALLHAT trial) to forty-six percent (the LIPS trial). The reduction in total mortality from all causes ranges from none (the ALLHAT trial) to twenty-nine percent (the 4S trial). However, the use of statins in this population is not without considerable risk. Statins frequently produce muscle weakness, lethargy, liver dysfunction and cognitive disturbances ranging from confusion to transient amnesia. They have produced severe rhabdomyolysis that can lead to life-threatening kidney failure.

_

Aspirin just as effective as statins (and much cheaper!):

Perhaps the final nail in the coffin for statins is that a recent study in the British Medical Journal showed that aspirin is just as effective as statins for treating heart disease in secondary prevention populations – and 20 times more cost effective! Aspirin is also far safer than statins are, with fewer adverse effects, risks and complications.

_

The bottom line on statins:

  • The only population in which statins extend life is men under 80 years of age with pre-existing heart disease.
  • In men under 80 without pre-existing heart disease, men over 80 with or without heart disease, and women of any age with or without heart disease, statins have not been shown to extend lifespan.
  • Statins do reduce the risk of cardiovascular events in all populations. A heart attack or stroke can have a significant, negative impact on quality of life—particularly in the elderly—so this benefit should not be discounted.
  • However, the reductions in cardiovascular events are often more modest than most assume: 60 people with high cholesterol but no heart disease would need to be treated for 5 years to prevent a single heart attack, and 268 people would need to be treated for 5 years to prevent a single stroke (the absolute risk reductions these figures imply are sketched just after this list).
  • Statins have been shown to cause a number of side effects, such as muscle pain and cognitive problems, and they are probably more common than currently estimated due to under-reporting.
  • Aspirin works just as well as statins do for preventing heart disease, and is 20 times more cost effective.
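
The number-needed-to-treat figures in the list above translate into absolute risk reductions with one line of arithmetic, since NNT = 1 / absolute risk reduction (ARR). A quick sketch:

```python
# Working backwards from the figures quoted above:
nnt_heart_attack = 60    # treat 60 people for 5 years to prevent 1 heart attack
nnt_stroke = 268         # treat 268 people for 5 years to prevent 1 stroke

print(f"Absolute risk reduction, heart attack (5 yr): {1 / nnt_heart_attack:.2%}")  # ~1.67%
print(f"Absolute risk reduction, stroke (5 yr):       {1 / nnt_stroke:.2%}")        # ~0.37%
# Put differently: 59 of every 60 people treated for five years get no
# heart-attack benefit -- far less impressive than a relative-risk headline.
```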

_

Having low cholesterol is as bad and unhealthy as having high cholesterol. Low cholesterol is associated with health risks like cancer, depression, anxiety and premature labor. Your total cholesterol should be between 150 and 200 mg/dL. However, I have seen many patients on statins who have total cholesterol less than 150 mg/dL. 

_

Why change of heart by Cochrane researchers over statins?

The ‘Cochrane Collaboration’ is an international collective of researchers whose self-proclaimed role is to provide accurate and robust assessments of health interventions. The group specialises in ‘meta-analyses’: the grouping together of several similar studies on interventions including drug therapies. In 2011, Cochrane researchers assessed the evidence relating to statin use in individuals at low risk of cardiovascular disease (defined as a less than 20 per cent risk over 10 years), and concluded that there was limited evidence of overall benefit. Earlier this year, the same Cochrane group updated their data and concluded that overall risk of death and cardiovascular events (e.g. heart attack or stroke) were reduced by statins in low-risk individuals, without increasing the risk of adverse events (including muscle, liver and kidney damage). It seems the Cochrane reviewers had had quite some change of heart. A paper published in the BMJ on 22 October 2013 questions the evidence on which this U-turn appears to have been made. The authors of the BMJ piece note that although the 2013 meta-analysis included four additional trials, these trials did not substantially change the findings. The change in advice was actually based on another meta-analysis, published in 2012, conducted by a group known as the Cholesterol Treatment Trialists’ (CTT) collaboration. Among other things, the CTT authors concluded that, in low-risk individuals, for each 1.0 mmol/l (39 mg/dl) reduction in LDL-cholesterol, statins reduce overall risk of death and heart attack by about 9 per cent and 20 per cent respectively. The conclusion was that statins have significant benefits in low-risk individuals that greatly exceed the known risks of treatment. However, the CTT authors took the odd step of calculating the benefits of statins according to a theoretical reduction in LDL-cholesterol levels. A much more realistic appraisal would be simply to calculate whether, compared to placebo, statins actually reduce the risk of health outcomes. The BMJ authors used the data from the CTT meta-analysis and found that the risk of death was not reduced by statins at all. So the CTT authors had extrapolated the data in a way that showed a benefit that does not exist in reality. And what of the claim that statins reduce the risk of cardiovascular events such as heart attack or stroke? The data show that about 150 low-risk individuals would need to be treated for five years to prevent one such event (i.e. only about one in 750 individuals will benefit per year). They also draw our attention to the impact of statin treatment on ‘serious adverse events’. This outcome can be improved by statins as a result of, say, a reduced number of heart attacks, but worsened through side effects such as muscle or liver damage. The BMJ authors note that the CTT review did not consider serious adverse events (a major omission). Without knowing more about this, we simply cannot make a judgment regarding the overall effect of statins, and whether the net effect is beneficial or not. Interestingly, of three major trials included in the CTT review that assessed overall serious adverse effects, none found overall benefit from statin treatment. So, while the CTT authors seem to have over-hyped the benefits of statins, they seem at the same time to have been quite keen to steer clear of talk of their very real risks and the absence of evidence for overall benefit.
The BMJ authors draw our attention to the fact that every single trial included in the CTT review was industry funded. Such trials are well known to report results more favourably, and perhaps to downplay risks more, than independently funded research. So the Cochrane Collaboration was misled by the industry-funded CTT review into promoting the sale of statins.
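
The BMJ authors’ “one in 750 per year” follows directly from the 150-for-five-years figure, and the same arithmetic shows why relative-risk framing flatters a drug in low-risk people. In the sketch below, the baseline risk is an assumed illustrative value, chosen only to be consistent with the NNT quoted above:

```python
# The BMJ authors' arithmetic:
people, years = 150, 5
print(f"Beneficiaries per person-year of treatment: 1 in {people * years}")  # 1 in 750

# Why a relative reduction sounds better than the absolute one
# (baseline_risk is an assumed, illustrative figure):
baseline_risk = 0.033          # assumed ~3.3% 5-year event risk in low-risk people
relative_reduction = 0.20      # the kind of headline figure CTT quoted
arr = baseline_risk * relative_reduction
print(f"Absolute risk reduction over 5 years: {arr:.2%}")   # ~0.66%
print(f"Implied NNT over 5 years: {1 / arr:.0f}")           # about 150
```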

_

Side effects of statins reported by independent researchers (not funded by drug companies):

Cholesterol is crucial for energy, immunity, fat metabolism, leptin and thyroid hormone activity, liver-related synthesis, stress tolerance, adrenal function, sex hormone synthesis and brain function. When prescribing HMG-CoA reductase inhibitors (statins), one needs to be cognizant of the fact that the body may have increased its cholesterol as a compensatory mechanism, and investigate accordingly. We seem to have fallen into the marketing trap and ignored the niggling side effects of the HMG-CoA reductase inhibitors. The only statin benefit that has actually been demonstrated is in middle-aged men with coronary heart disease. For normal healthy individuals eager to achieve primary prevention, researchers discovered that for every 10,000 people taking a statin, there were 307 extra patients with cataracts, 23 additional patients with acute kidney failure and 74 extra patients with liver dysfunction. Furthermore, statin therapy increased muscle fatigability by 30%, with an 11.3% incidence of rhabdomyolysis at high doses. What’s more, it induces inflammatory myopathy, including necrotizing autoimmune myopathy with immunosuppression, and statin-related myopathy can last for 12 months. An additional side effect of statin therapy is erectile dysfunction, which is 10 times more common in young men taking the lowest dose of statin; when statins were discontinued, over 50 percent had full recovery within 6 months. Further still, the FDA adverse-event reporting system database showed that for every 10,000 reports of a statin-associated adverse event, approximately 40 reports were for statin-induced interstitial lung disease. There is increased risk of diabetes mellitus, cataract formation and erectile dysfunction in young statin users, all of which is alarming. Furthermore, there is a significant increase in the risk of cancer and neurodegenerative disorders in the elderly, plus an enhanced risk of a myriad of infectious diseases. All side effects are dose dependent and persist during treatment. Primary prevention trial results raise the possibility not only of a lack of primary cardiovascular protection from statin therapy but of augmented cardiovascular risk in women, patients with diabetes mellitus and the young. Statins are associated with triple the risk of coronary artery and aortic calcification. These findings on statins’ major adverse effects have been under-reported, and the way in which they were withheld from the public, and even concealed, is a scientific farce.

_

Is saturated fat and LDL real culprit for heart attack?

A passel of recent research suggests that the “obsession” with lowering a patient’s total cholesterol with statins, and a public health message that has made all sources of saturated fat verboten to the health-conscious, have failed to reduce heart disease. Indeed, they have set off market forces that have put people at greater risk. After the Framingham Heart Study showed a correlation between total cholesterol and risk for coronary artery disease in the early 1970s, patients at risk for heart disease were urged to swear off red meat, school lunchrooms shifted to fat-free and low-fat milk, and a food industry eager to please consumers cutting their fat intake rushed to boost the flavor of their new fat-free offerings with added sugar (and, of course, with trans fats). The result is a rate of obesity that has “rocketed” upward. And, despite a generation of patients taking statins (and enduring their common side effects), the trends in cardiovascular disease have not demonstrably budged. A 2009 UCLA study showed that three-quarters of patients admitted to the hospital with acute myocardial infarction do not have high total cholesterol; what they do have, at a rate of 66%, is metabolic syndrome — a cluster of worrying signs including hypertension, high fasting blood sugar, abdominal obesity, high triglycerides and low HDL (“good” cholesterol). Meanwhile, research has shown that when people with high LDL cholesterol (the “bad” kind) purge their diet of saturated fats, they lower one kind of LDL (the large, buoyant particles called “Type A” LDL), but not the small, dense particles (“Type B” LDL) that are linked to high carbohydrate intake and are implicated in heart disease. Recent research has also shown that Mediterranean diets — admittedly skimpy on red meat but hardly light on saturated fats — have outpaced both statins and low-fat diets as a means of preventing repeat heart attacks. Other research suggests that the saturated fat in dairy foods may protect against hypertension, inflammation and a host of other dysfunctions increasingly linked to heart attacks. It is time to bust the myth of the role of saturated fat in heart disease and wind back the harms of dietary advice that has contributed to obesity. Saturated fat got a bad name when it got mixed up with the high levels of sugar added to processed food in the second half of the 20th century. On the question of which is worse — saturated fat or added sugar — the American Heart Assn. has weighed in: the sugar, many times over. Real food includes saturated fat, and real food lives up to the principle that food should confer wellness, not illness. Instead of lowering serum cholesterol with statins, which is dubious at best, how about serving up some real food?

_

What about trans fat?

Trans fats are a natural component of animal products such as milk and meat; the FDA notes that they are synthesized in the guts of grazing animals. Artificial trans fats – also known as trans fatty acids – are made by adding hydrogen to liquid oil, which turns it into a solid, like margarine or Crisco. This makes it a handy ingredient for processed-food manufacturers, since it improves texture, stability and shelf life. It’s also inexpensive. Today, it’s often used in foods including microwave popcorn, coffee creamers, packaged cookies, cans of frosting and frozen pizza, among others. It’s not just bad, it’s doubly bad: it increases blood levels of the “bad” cholesterol, low-density lipoprotein (LDL), and, to make matters worse, researchers believe that trans fats also reduce blood levels of high-density lipoprotein (HDL), the “good” cholesterol. One fried fish plate, with hush puppies and onion rings, was found to contain 33 grams of trans fat. The American Heart Assn. recommends that people consume no more than about two grams of trans fat per day — roughly the amount found naturally in milk and meat. The FDA’s announcement that it is moving to eliminate added trans fat from processed food means that microwave popcorn, frozen pizza, refrigerated dough, cookies and ready-to-use frostings are deemed too much of a health risk. Yes, even that coffee creamer is trying to kill you. FDA officials say this move could prevent as many as 7,000 deaths and 20,000 heart attacks a year.

_

Bestselling health authors Jonny Bowden, Ph.D., and Stephen Sinatra, M.D. give readers a 4-part strategy based on the latest studies and clinical findings for effectively preventing, managing, and reversing heart disease, focusing on diet, exercise, supplements, and stress and anger management. The Great Cholesterol Myth reveals the real culprits of heart disease, including:

– Inflammation

– Fibrinogen

– Triglycerides

– Homocysteine

– Belly fat

– Triglyceride to HDL ratios

– High glycemic levels

_

Inflammation worsens danger of heart disease:

Inflammation is the body’s response to noxious substances. Those substances can be foreign, like bacteria, or found within our body, as in autoimmune diseases like rheumatoid arthritis. In the case of heart disease, inflammatory reactions within atherosclerotic plaques can induce clot formation. When the lining of the artery is damaged, white blood cells flock to the site, resulting in inflammation. Inflammation not only further damages the artery walls, leaving them stiffer and more prone to plaque buildup, but it also makes any plaque that’s already there more fragile and more likely to burst. Oxidative damage is a natural process of energy production and storage in the body. Oxidation produces free radicals, which are molecules missing an electron in their outer shell. Highly unstable and reactive, these molecules “attack” other molecules attempting to “steal” electrons from their outer shells in order to gain stability. Free radicals damage other cells and DNA, creating more free radicals in the process and a chain reaction of oxidative damage. Normally oxidation is kept in check, but when oxidative stress is high or the body’s level of antioxidants is low, oxidative damage occurs. Oxidative damage is strongly correlated to heart disease. Studies have shown that oxidized LDL cholesterol is 8 times stronger a risk factor for heart disease than normal LDL. The researchers found that inflammation leads to a reduction of mature collagen in atherosclerotic plaques, leading to thinner caps that are more likely to rupture. This is important because other studies have shown that it is not atherosclerosis alone, but the rupture of the atherosclerotic plaques, that causes heart attacks and strokes. It follows, then, that if we want to prevent heart disease we need to do everything we can to minimize inflammation and oxidative damage. C-reactive protein (CRP) is an inflammatory marker — a substance that the liver releases in response to inflammation somewhere in the body. Studies indicate that men with high levels of CRP have triple the risk of heart attack and double the risk of stroke compared to men with lower CRP levels. In women, studies have shown that elevated levels of CRP may increase the risk of a heart attack by as much as seven times. The statins reduce levels of CRP. This may be more significant in accounting for the ability of these drugs to statistically lower the incidence of heart disease than the role these drugs play in lowering cholesterol levels.

Top four causes of oxidative damage & inflammation

  1. Stress
  2. Smoking
  3. Poor nutrition
  4. Physical inactivity

By focusing on reducing or completely eliminating, when possible, the factors in our life that contribute to oxidative stress and inflammation, we can drastically lower our risk for heart disease.

_

People who are plagued by statin propaganda must read the following facts:

Myth–High cholesterol is the cause of heart disease.
Fact–Cholesterol is only a minor player in the cascade of inflammation which is a cause of heart disease.

Myth–High cholesterol is a predictor of heart attack.
Fact–There is no correlation between cholesterol and heart attack.

Myth–Lowering cholesterol with statin drugs will prolong your life.
Fact–There is no data to show that statins have a significant impact on longevity.

Myth–Statin drugs are safe.
Fact–Statin drugs can be extremely toxic including causing death.

Myth–Statin drugs are useful in men, women and the elderly.
Fact–Statin drugs do the best job in middle-aged men with coronary disease.

Myth–Statin drugs are useful in middle-aged men with coronary artery disease because of their impact on cholesterol.
Fact–Statin drugs reduce inflammation and improve blood viscosity (thinning blood). Statins are extremely helpful in men with low HDL and coronary artery disease.

Myth–Saturated fat is dangerous.
Fact–Saturated fats are not dangerous. The killer fats are the trans fats from partially hydrogenated oils.

Myth–The higher the cholesterol, the shorter the lifespan.
Fact–Higher cholesterol protects you from gastrointestinal disease, pulmonary disease and hemorrhagic stroke. 

Myth–A high carbohydrate diet protects you from heart disease.
Fact–Simple processed carbs and sugars predispose you to heart disease.

Myth–Fat is bad for your health.
Fact–Monounsaturated and saturated fats protect you from metabolic syndrome. Sugar is the foe in cardiovascular disease.

Myth–There is good (HDL) cholesterol and bad (LDL) cholesterol.
Fact–This is over-simplistic. You must fractionate LDL and HDL to assess the components.

Myth–Cholesterol causes heart disease.
Fact–Cholesterol is only a theory in heart disease, and only the small components, Lipoprotein(a) and type-B LDL, are susceptible to oxidation and inflammation.

__________

Zyprexa (olanzapine) story:  

Applbaum claims that Eli Lilly managed to turn aside the threat to Zyprexa sales essentially by “a strategy of creating a shadow science to drown out noncompany-sponsored (and competitors’) research reports on the side effects of the drug.” The way in which this strategy is most at odds with the drug industry’s self-image, and the propaganda image presented to the medical profession, of its partnership in the pursuit of hard scientific evidence, is that, “the company treated the medical concerns associated with their drug as a relative and fungible truth–in short, as a brand truth that they had the right and the resources to control. The Zyprexa documents show how Lilly sought to deceive physicians in the United States about the severity of the side effects, using the physicians’ own incomplete knowledge against them.” Thus, “An initial estimate is that about three quarters of the 5506 pages of Zyprexa documents are devoted to what Sergio Sismondo (2007) calls the ‘ghost management’ of science, and much of that to the side effects issue. Research is manipulated into sales fodder. Each claim in the doctor’s office should be supported by research, and research is ‘ordered,’ as though from a shop, accordingly.”

_

The thalidomide scandal:

Thalidomide had been discovered by accident in 1954 by a small German company called Chemie Grunenthal. Grunenthal had sold the drug all over the world, aggressively promoting it as an anti-morning-sickness pill for pregnant women and emphasizing its absolute safety – it would harm neither the mother nor the child in the womb. The latter guarantee turned out to be tragically wrong. The drug was eventually withdrawn in 1961. But it was too late to prevent some 8,000 babies around the world being born with thalidomide deformities. In Britain, the drug was marketed by the giant liquor company Distillers. Though there had been a terrible tragedy, governments declared that since the testing and marketing of the drug had met all the legal requirements of the time, what had happened was not the companies’ responsibility. However, the claim by Distillers that they had followed the best practices of the time because no one then tested drugs on pregnant animals was simply untrue – other drug companies did.

_

Neurontin (Gabapentin) story:

How Pfizer manipulated studies:

The drug maker Pfizer earlier this decade manipulated the publication of scientific studies to bolster the use of its epilepsy drug Neurontin for other disorders, while suppressing research that did not support those uses, according to experts who reviewed thousands of company documents for plaintiffs in a lawsuit against the company. Pfizer’s tactics included delaying the publication of studies that had found no evidence the drug worked for some other disorders, “spinning” negative data to place it in a more positive light, and bundling negative findings with positive studies to neutralize the results, according to written reports by the experts, who analyzed the documents at the request of the plaintiffs’ lawyers.  One of the experts who reviewed the documents, Dr. Kay Dickersin of the Johns Hopkins Bloomberg School of Public Health, concluded that the Pfizer documents spell out “a publication strategy meant to convince physicians of Neurontin’s effectiveness and misrepresent or suppress negative findings.” In the case of Pfizer’s Neurontin, the negative studies would have increased doubts about the drug’s value for several unapproved uses — treating bipolar disorder, controlling certain types of pain and preventing migraine headaches, according to the expert opinions.  So-called off-label use of Neurontin for those conditions helped propel its sales to nearly $3 billion a year before it lost patent protection in 2004.  In one example, the experts concluded that Pfizer had deliberately delayed release of a study that showed the drug had little effect against pain that is a complication of long-term diabetes, even as the outside researcher who was a lead investigator for the study, Dr. John Reckless of Bath, England, pushed to publish the unflattering findings on his own.  

_

Aropax, Paxil or Seroxat (paroxetine) story: 

How SmithKline Beecham (SKB, subsequently GlaxoSmithKline) manipulates research evidence: a case study:

Between 1993 and 1998, SmithKline Beecham (SKB, subsequently GlaxoSmithKline) provided $5 million to various academic institutions to fund research into paroxetine (also known as Aropax, Paxil (GSK) or Seroxat), led by Martin Keller of Brown University, who received $800,000 for participating in the project. The results were published in 2001 by Keller et al. in the journal article “Efficacy of paroxetine in the treatment of adolescent major depression: a randomized, controlled trial”, in the Journal of the American Academy of Child & Adolescent Psychiatry (JAACAP). The article concluded that “paroxetine is generally well tolerated and effective for major depression in adolescents”. This was a serious misrepresentation of both the effectiveness and safety of the drug. In fact, when SKB set out the methodology for their proposed study protocol, they had specified two primary and six secondary outcome measures. All eight proved negative; that is, on none of those measures did children on paroxetine do better than those on placebo. The published article misrepresented one of the primary outcomes so that it appeared positive, and deleted all six pre-specified secondary outcomes, replacing them with more favourable measures. SKB papers also revealed that at least eight adolescents in the paroxetine group had self-harmed or reported emergent suicidal ideas, compared to only one in the placebo group. But these adverse events were not properly reported in the published paper: some were described as “emotional lability” while others were left out altogether. Although published in Keller’s name, the article was ghostwritten by agents of SKB, and the company maintained tight control of the article’s content throughout its development. GlaxoSmithKline’s internal documents, disclosed in litigation, show that company staff were aware that the study didn’t support the claim of efficacy but decided it would be “unacceptable commercially” to reveal that. According to a company position paper, the data were selectively reported in Keller et al.’s article in order to “effectively manage the dissemination of these data in order to minimize any potential negative commercial impact”. As it turns out, the Keller et al. article was used by GlaxoSmithKline to ward off potential damage to the profile of paroxetine and to promote off-label prescribing to children and adolescents. The Canadian Medical Association Journal described a leaked document indicating that GlaxoSmithKline had deliberately hidden two studies from regulators showing that its antidepressant Paxil (paroxetine) could increase the risk of suicide in children. The company has paid nearly a billion dollars in legal settlements over Paxil, including $390 million for suicides and attempted suicides related to the drug. Evidence of manipulation has also emerged in the recent Senate investigation into GlaxoSmithKline’s diabetes drug Avandia (rosiglitazone).

_

Vioxx (rofecoxib) story:

Merck and Co. manipulated data on its drug Vioxx (rofecoxib):

Two new studies suggest drug-maker Merck and Co. manipulated data on its drug Vioxx, was slow to disclose adverse events associated with the painkiller, and used academic researchers to enhance the credibility of scientific studies largely written by Merck employees. One Canadian researcher, Dr. Claire Bombardier of Toronto’s Mount Sinai Hospital, denied that she was involved in manipulating data. The articles were based on company documents made public as a result of thousands of lawsuits leveled against Merck and Co. after it withdrew the former blockbuster medication from worldwide markets in September 2004, because studies revealed that people who used it were at higher risk of heart attacks and strokes. They were published in the Journal of the American Medical Association, which also ran a damning editorial calling on everyone in the business of clinical research to pull up their socks, starting with doctors. “The profession of medicine in every aspect – clinical, education and research – has been inundated with profound influence from the pharmaceutical and medical device industries,” editor Dr. Catherine DeAngelis and deputy editor Dr. Phil Fontanarosa wrote. “This has occurred because physicians have allowed it to happen and it is time to stop.” DeAngelis and Fontanarosa said the Merck actions are not unique in the industry.

__

I am so hurt, angry and disappointed with the pharmaceutical industry that my blood boils when I see a medical rep promoting a drug. The first question that comes to my mind is: as a doctor, should I be a part of a criminal conspiracy? This is no longer science versus imitation science; we have reached the threshold of criminal behavior.

____________

Imitation science in nutrition:

_

Why Nutrition Science is so bad:

When we look into nutritional science, we know how badly it is done. To be fair, nutrition science is very difficult: there are so many lifestyle and nutrition choices that can have a positive or negative influence on health that it is hard to isolate the effect of any one food or group of foods. There are two main types of studies: Observational (Epidemiological) studies and Controlled Experiments. The gold standard of science is the controlled experiment (e.g. the RCT). In a controlled experiment, you hold all variables constant except for the one variable you are studying. For example, if you wanted to study the effect of a fertilizer on growing plants, you would plant two groups of the same seeds. You would use the same soil and the same size of pot; put the pots in the same area, where the temperature and light exposure are the same; and give them the same amount of water. You would put fertilizer in one but not the other. If the plants in the fertilized pot grow faster and larger, this would support your hypothesis that fertilizer helps plants grow. It is very hard to do this in a nutritional study. The only way to be sure of the quality and quantity of food consumed would be to lock people in a metabolic ward where food could be controlled. You would have to weigh and measure all the food served, as well as all the food that wasn’t eaten. Since most people are interested in being healthy for the rest of their life, and not just for the next 6 months, these studies would have to last 10 to 20 years to be really meaningful. Not too many people would volunteer to be locked in a metabolic ward for 20 years! Even if you could do a study like this, it might not be relevant to the real world. In a metabolic ward you have to stick to the diet; there are no other food choices. In the real world, there are food choices on almost every street corner. People have to be able to stay on a nutritional lifestyle long term for it to be useful. Very few controlled studies are done in nutritional science. Almost all the studies we hear about are observational studies. In these studies, information is gathered on what people eat, usually with food frequency questionnaires. The people are followed for a number of years and their health outcomes are observed. Correlations are made between the food people ate and their health outcomes. These types of studies have many limitations, the biggest being that they cannot provide any information on cause and effect (vide supra). Observational studies are useful for coming up with a new hypothesis, but the hypothesis then has to be tested in a Controlled Experiment.

_

If we were to do a study of the BMI of marathon runners compared to the BMI of the average person, we might find that running marathons is correlated with a lower BMI. The conclusion of the study could say that running marathons was linked, or correlated, or associated with a lower BMI. It could not say that running marathons causes you to become lean and have a lower BMI, even though most people hearing about this study would think that running marathons does cause people to have a lower BMI. It conforms to our preconceived notions about exercise and weight, so we assume that causation is proved by the study. Another hypothetical study illustrates why this is not the case. If we did a similar study using professional basketball players and height, we would find that playing basketball is correlated with being taller than the average person. Does that mean that playing basketball causes you to grow taller? If you are 5’6” and want to be 6’ tall, can you play basketball for a few years and expect to grow? Of course not! Playing basketball doesn’t cause you to grow taller; it’s just that if you are tall you are more likely to play professional basketball. The same logic applies to the other study: running marathons might not make you lean; it’s just that lean people are more likely to run marathons.
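
The marathon example can even be simulated. In the hypothetical world below (a sketch with invented numbers), a latent trait, leanness, makes people both lower in BMI and more likely to run marathons, while running itself has no effect on BMI at all. An observational comparison still "finds" the association:

```python
import random

random.seed(7)
population = []
for _ in range(100_000):
    lean = random.random() < 0.3           # latent trait
    # Leanness *causes* both a lower BMI and a taste for marathons; running
    # a marathon has no causal effect on BMI in this toy world.
    bmi = random.gauss(22 if lean else 28, 2)
    runs = random.random() < (0.20 if lean else 0.02)
    population.append((bmi, runs))

runners = [bmi for bmi, r in population if r]
rest = [bmi for bmi, r in population if not r]
print(f"Mean BMI, marathon runners: {sum(runners) / len(runners):.1f}")  # ~23
print(f"Mean BMI, everyone else:    {sum(rest) / len(rest):.1f}")        # ~26
# The observational gap is real, but concluding 'marathons cause leanness'
# would be wrong: we built this world, and they don't.
```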

_

Observational studies are full of confounding variables as discussed earlier in correlation vs. causality:

There are two main groups of people that influence the outcomes of nutritional studies: people who are very health conscious and do whatever they can to be healthy, and those who do not care about their health and make food and lifestyle choices purely on the basis of pleasure. People who are health conscious tend to have better health than people who are not. They follow the health advice that has generally been given over the last 40 years: they smoke less, drink less alcohol, exercise more, take vitamins, eat fewer calories, eat less sugar, eat fewer refined processed foods, and eat more vegetables. Any one of these variables could be contributing to their good health, but from an observational study you can’t tell which one; you would need a controlled experiment to do that. Another problem with observational nutritional studies is that the data collected are not very reliable. They are usually collected using food frequency questionnaires, in which people are asked to recall what they ate in the last day, month, year, or even four years. Most people can’t remember what they ate last Tuesday, let alone what they ate three years ago. People who see themselves as healthy tend to overestimate foods they consider healthy and underestimate foods they consider unhealthy. If the data that a study is based on are not accurate, then how useful is the study? Some scientists have such a strong belief in what the outcome of their study will be that they become biased. Since there are so many confounding variables, you can make the outcome of a study say pretty much whatever you want it to say. Most studies will try to account for these variables, but it is almost impossible to know exactly how much of a part each one played in someone’s health. Some researchers will use a third variable to link two items when there is no direct link. A good example of this is saturated fat, cholesterol and heart disease. Many studies (such as Dr. Jolliffe’s Anti-Coronary Club experiment) will claim to show that saturated fat causes heart disease even if their own data show that people who ate more saturated fat had a lower incidence of heart disease. They do this by saying that saturated fat intake was associated with higher cholesterol, which is claimed to be a marker for heart disease (even though it has never been proven that high cholesterol causes heart disease). This is why we get so many mixed messages from the so-called “nutrition experts”. The way observational studies are reported gives the impression that they determine cause and effect when they really only show correlation. In one group of people a certain food may be correlated with high cancer rates; in another group it may be correlated with low cancer rates. The truth may be that the food has no impact on cancer rates. As long as we remember the limitations of the study we won’t get sucked into these false assumptions. So the next time you hear about the latest study telling you to stay away from a certain food, or that another food is a miracle cure, remember that 99% of these studies don’t actually prove anything. It is time to stop funding these types of observational studies. How many more studies do we need that give sensational headlines but do not add to our knowledge? We have enough hypotheses about health and nutrition. We need to start doing controlled experiments to find out which hypotheses about diet and nutrition result in healthy outcomes.
However, RCTs are expensive—especially nutrition studies, which require feeding large groups over extended periods, and to be completely rigorous, isolating the subjects so they can’t consume foods that aren’t part of the experiment. (These are usually called “metabolic ward studies”.)  As a result, RCTs are infrequently done, especially in the nutrition field. Since nutrition RCTs are so rare, almost all nutrition headlines are based on observational studies. The overwhelming majority of nutrition headlines are from cohort studies, in which health data has been collected for years (or decades) from a fixed group of people, often with no specific goal in mind.

_

Let’s watch the same group of people for decades, measure some things every once in a while, and see what happens to them. Then we can go back through the data and see if the people with a specific health issue had anything else in common. It’s easy to see that looking for statistical associations in data that already exists is far easier and cheaper than performing a randomized clinical trial. But a huge pitfall of observational studies is often neglected: in large cohort studies, data is often self-reported, and self-reported data is often wildly inaccurate. “Correlation does not imply causation” means “Just because two things vary in a similar way over time doesn’t mean one is causing the other.” Since observational studies can only prove correlation, not causation, almost every nutrition article which claims “X causes Y” is factually wrong. The only statements we can make from an observational study are “X is associated with Y” or “X is linked with Y”. There are also cases in which sampling bias and selection bias skew the results, cases in which the data is simply inaccurate, and the possibility that the truth is being stretched or broken: that the data is being misrepresented.

_

Some examples of nutritional observational studies:

“Red meat is blamed for one in 10 early deaths” (The Daily Telegraph)

Since Pan et al. is an observational study, we can’t assign blame. Also, the people who admitted to eating the most red meat had, by far, the lowest cholesterol levels. Wait, what? Aren’t saturated fat and cholesterol supposed to cause heart disease? This is another clue that the story and the data aren’t quite as advertised.

“Eating steak increases the risk of early death by 12%.”

Another false statement: associational studies cannot prove causation. The study found that “cutting the amount of red meat in people’s diets to 1.5 ounces (42 grams) a day, equivalent to one large steak a week, could prevent almost one in 10 early deaths in men and one in 13 in women.” Note the weasel words “could prevent”. Just as playing basketball “could” make you taller, but won’t. And just as HRT “could” have prevented heart attacks: instead, it caused them.

“Replacing red meat with poultry, fish or vegetables, whole grains and other healthy foods cut the risk of dying by up to one fifth, the study found.”

No, it didn’t. The risk of dying was merely associated with self-reported intake of red meat and “healthy foods”.

_

If only the science of diet and health were that simple. Scientists, alas, must struggle with a number of vexing questions about such studies:

  • Does a finding of statistical significance necessarily imply clinical or biological significance?
  • Do statistical findings based on populations necessarily count for individuals?
  • Do statistical associations provide guidelines for behavior?
  • Are the methods used in statistical studies adequate to draw conclusions about behavior?

We are talking here about a huge meta-analysis of 97 studies of obesity and mortality carried out by Katherine Flegal and her colleagues at the National Center for Health Statistics. The figures in the paper were difficult to follow, so they are summarized in the table below:

Relationship of weight category to the risk of mortality:

WEIGHT CATEGORY | BMI RANGE | MORTALITY RISK (relative to normal; 95% CI)
Normal | 18.5 – 25 | 1.00 (reference)
Overweight | 25 – 30 | 0.94 (0.91 – 0.96)
Obesity, Grade 1 | 30 – 35 | 0.95 (0.88 – 1.01)
Obesity, Grades 2 and 3 | >35 | 1.29 (1.18 – 1.41)

Compared to people with BMIs in the normal range, those with BMIs considered overweight or mildly obese display no increased risk of mortality. Indeed, the overweight group shows a slightly reduced risk that is statistically significant (its confidence interval lies entirely below 1.00), while grade 1 obesity shows no significant difference either way. The study finds an increased risk of mortality (a 29% increase) only when the BMI exceeds 35.
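
For readers unused to confidence intervals, the table can be read mechanically: a relative risk is statistically significant only when its 95% CI excludes 1.00. A short Python sketch of that rule applied to the figures above:

```python
# Reading the Flegal table programmatically: a relative mortality risk is
# statistically significant only if its 95% confidence interval excludes 1.00.
rows = [
    ("Overweight (BMI 25-30)",       0.94, (0.91, 0.96)),
    ("Obesity grade 1 (BMI 30-35)",  0.95, (0.88, 1.01)),
    ("Obesity grades 2-3 (BMI >35)", 1.29, (1.18, 1.41)),
]
for label, rr, (lo, hi) in rows:
    if hi < 1.0:
        verdict = "significantly LOWER risk than normal BMI"
    elif lo > 1.0:
        verdict = "significantly HIGHER risk than normal BMI"
    else:
        verdict = "no significant difference (CI crosses 1.00)"
    print(f"{label}: RR = {rr} -> {verdict}")
```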

I quote from my article on ‘obesity’:

The result suggests that being overweight makes you live longer. There are varying reasons why the researchers might have used these broad groups, including the fact that many studies are too small to present statistically reliable results according to finer gradations in BMI. Also, low body weight often results from chronic disease, rather than being a cause of chronic disease. The weight loss may have been unintentional, as a result of the underlying disease process; or it may have been intentional, because patients with serious conditions often become motivated for the first time to lose weight. Regardless, because of this phenomenon, people with a BMI below 25 are a mix of healthy individuals and those who are ill and have lost weight due to their disease. Leaner people are also more likely to smoke than their heavier counterparts. These factors will artificially inflate mortality rates among lean people, thus diminishing the apparent harmful impact of overweight and obesity. Nonetheless, the fundamental point is that the same data can give contradictory and differing results depending on how the data are processed. All medical researchers must understand this fact about the fallibility of scientific studies on obesity.

_________

Nutritional supplements (food supplements):

These are supplements to increase intake of vitamins, minerals, fatty acids, amino acids etc., to maintain or improve health, fitness or wellbeing. Nutritionists promote the idea that to maintain good health it is essential to have a good diet, which is achieved by taking regular dietary supplements; the supplements are also promoted as a form of alternative therapy for many (nearly all) medical conditions. This is known as Nutritional Therapy.

Nutritionists and Nutritional Therapists are different from dieticians and those working in the science of nutrition. They often have no science qualifications, but instead have false or unrelated qualifications, or qualifications from their own 'institutes'. For example, Patrick Holford has a BSc in psychology; his 'qualification' in nutrition or related subjects is a diploma awarded by the Institute for Optimum Nutrition, which he founded, and where many other nutritionists were trained. Another well-known nutritionist is Gillian McKeith, whose 'PhD' was bought from a non-accredited US university. Most well-known nutritionists (e.g. Patrick Holford) have their own companies selling nutritional supplements as essential for general health. Nutritional claims can be advertised, whether tested or not, but medical claims cannot be printed on the pill bottles (as there is no medical evidence, that would be illegal). The supplements are promoted by the websites and books of the nutritionists, and by uncritical journalists in newspapers or on TV. There is very little evidence for any therapeutic or preventative function of the supplements they sell. In this way they are similar to other alternative medicine systems.

Of course there is a problem that good scientific controlled experiments are often difficult in nutritional studies. But there have been excellent studies of some important nutritionists' claims: for example, the use of vitamin C to prevent colds, fish oil to improve children's intelligence, and antioxidants to prevent cancer. The books and websites of nutritionists contain a lot of sound advice, but this is available from all good books on diet and from government websites. We do not need a nutritionist to tell us that we should eat plenty of fruit and vegetables, that we should not overcook them, or that we should take plenty of exercise.

_

Does vitamin C prevent colds? No!

The Cochrane plain language summary: "Regular ingestion of vitamin C had no effect on common cold incidence in the ordinary population. However, it had a modest but consistent effect in reducing the duration and severity of common cold symptoms. In five trials with participants exposed to short periods of extreme physical stress (including marathon runners and skiers) vitamin C halved the common cold risk. Trials of high doses of vitamin C administered therapeutically, starting after the onset of symptoms, showed no consistent effect on either duration or severity of common cold symptoms. However, only a few therapeutic trials have been carried out, and none have examined children, although the effect of prophylactic vitamin C has been greater in children. One large trial with adults reported equivocal benefit from an 8 g therapeutic dose at the onset of symptoms, and two trials using five-day supplementation reported benefit. More trials are necessary to settle the possible role of therapeutic vitamin C, meaning administration immediately after the onset of symptoms". It is important to remember the difference between prevention, where the Cochrane review found no evidence of benefit, and treatment, where it showed minor benefits at very high doses: a modest 13.6% reduction in cold duration for children taking high-dose vitamin C. In other words, children who had the highest number of colds might expect a reduction of about 4 days per year; no significant benefit by any yardstick.
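
The arithmetic behind that "4 days per year" figure is worth making explicit. The roughly 29-day baseline below is inferred by working backwards from the numbers in the text; it is not stated in the Cochrane review itself:

```python
# Back-of-envelope check of the "4 days per year" claim.
reduction = 0.136                       # 13.6% shorter colds in children (Cochrane)
days_saved = 4                          # claimed annual saving for the worst-affected
baseline_days = days_saved / reduction  # implied days of cold symptoms per year
print(f"implied baseline: {baseline_days:.0f} cold-days per year")  # ~29
# Even a child sick for about a month each year gains only ~4 days,
# consistent with "no significant benefit by any yardstick".
```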

_

Chondroitin and glucosamine for arthritis:

Sales of chondroitin and glucosamine are worth billions of dollars, but the evidence that they work has never been good. A new meta-analysis of clinical trials now shows that the effect of chondroitin on the symptoms of osteoarthritis is "minimal or nonexistent" (Reichenbach S and others. Meta-analysis: Chondroitin for osteoarthritis of the knee or hip. Annals of Internal Medicine 146:580-590, 2007). It is the same old story. Early trials were small and badly designed. They seemed to show some effect, which was wildly exaggerated by the supplement hucksters to push sales. Eventually somebody does the trials properly, and it is found that there is little or no benefit. Glucosamine shows a similar trend. Glucosamine is a synthetic chemical, but it is not a licensed medicine in the UK. It is marketed as a "food supplement", not as a drug, and it is not approved for prescription on the NHS. The latest Cochrane review does not entirely rule out some benefit, but again the effects seem to get smaller as the trials get better.

_

Omega-3 PUFA for prevention of cardiovascular disease:

The Cochrane review shows that it is not clear whether dietary or supplemental omega-3 fats (found in oily fish and some vegetable oils) alter total deaths, cardiovascular events (such as heart attacks and strokes) or cancers in the general population, or in people at risk of, or with, cardiovascular disease. When the analysis was limited to fish-based or plant-based, dietary or supplemental omega-3 fats, there was still no evidence of a reduction in deaths or cardiovascular events in any group.

_

Antioxidant supplements for prevention of mortality in healthy participants and patients with various diseases:

This is a good example of a thorough Cochrane review. Its conclusion is: The current evidence does not support the use of antioxidant supplements in the general population or in patients with various diseases.

 Read the full summary of conclusions to illustrate the thoroughness of the Cochrane reviews:

"The present systematic review included 78 randomized clinical trials. In total, 296,707 participants were randomized to antioxidant supplements (beta-carotene, vitamin A, vitamin C, vitamin E, and selenium) versus placebo or no intervention. Twenty-six trials included 215,900 healthy participants. Fifty-two trials included 80,807 participants with various diseases in a stable phase (including gastrointestinal, cardiovascular, neurological, ocular, dermatological, rheumatoid, renal, endocrinological, or unspecified diseases). A total of 21,484 of 183,749 participants (11.7%) randomized to antioxidant supplements and 11,479 of 112,958 participants (10.2%) randomized to placebo or no intervention died. The trials appeared to have enough statistical similarity that they could be combined. When all of the trials were combined, antioxidants may or may not have increased mortality depending on which statistical combination method was employed; the analysis that is typically used when similarity is present demonstrated that antioxidant use did slightly increase mortality (that is, the patients consuming the antioxidants were 1.03 times as likely to die as were the controls). When analyses were done to identify factors that were associated with this finding, the two factors identified were better methodology to prevent bias from being a factor in the trial (trials with 'low risk of bias') and the use of vitamin A. In fact, when the trials with low risks of bias were considered separately, the increased mortality was even more pronounced (1.04 times as likely to die as were the controls). The potential damage from vitamin A disappeared when only the low risks of bias trials were considered. The increased risk of mortality was associated with beta-carotene and possibly vitamin E and vitamin A, but was not associated with the use of vitamin C or selenium. The current evidence does not support the use of antioxidant supplements in the general population or in patients with various diseases".
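
It is instructive to recompute the raw figures quoted above. The crude death-rate ratio from the pooled counts (about 1.15) is larger than the review's reported estimates (1.03 to 1.04) because a meta-analysis weights results trial by trial rather than lumping all participants into two big groups; the sketch below shows only the crude arithmetic:

```python
# Recomputing the raw numbers quoted from the Cochrane antioxidant review.
deaths_antiox, n_antiox = 21_484, 183_749
deaths_ctrl, n_ctrl = 11_479, 112_958

risk_antiox = deaths_antiox / n_antiox  # the 11.7% quoted in the text
risk_ctrl = deaths_ctrl / n_ctrl        # the 10.2% quoted in the text
print(f"antioxidant arm:  {risk_antiox:.1%}")
print(f"control arm:      {risk_ctrl:.1%}")
print(f"crude risk ratio: {risk_antiox / risk_ctrl:.2f}")  # ~1.15
# The review's pooled estimate (1.03-1.04) is smaller because trials are
# weighted individually; the direction of the finding is the same.
```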

_

Multivitamins – don't believe the hype:

A major study published in the Journal of the American Medical Association found that taking a standard multivitamin pill lowered the risk of developing cancer. This large-scale, randomized, double-blind, placebo-controlled trial of 14,641 male physicians in the US aged 50 years or older found that a daily multivitamin supplement reduced the risk of developing cancer. Inevitably, this has been accompanied by triumphant headlines lauding the benefits of multivitamins (and you can just imagine the supplement industry rubbing its hands in glee). The first thing to point out is that the size of the benefit in this study was modest, amounting to an 8% reduction in cancer incidence in those taking the daily multivitamin (and we should note that there was actually no significant difference in the risk of cancer mortality). But still, 8% is 8%, and in the war against cancer most of us would welcome nudging the odds in our favour. But don't get too swept away with the idea that multivitamins are some sort of panacea. Relying on multivitamins to quell our cancer risk is misguided. Multivitamins are a poor relation to real food. Unlike their synthetic counterparts, real food contains a mind-boggling diversity of not just vitamins and minerals but hundreds, indeed thousands, of bio-active plant compounds known as phytonutrients. Prize candidates include lycopene (cooked/processed tomatoes), catechins (green tea), glucosinolates (cruciferous vegetables such as broccoli), quercetin (onions), sulphur compounds (onions and garlic), anthocyanins (berries), luteolin (celery), chlorogenic acid (coffee), flavanols (cocoa), lupeol (mango), resistant starch (legumes), isoflavones (soya), and the list could go on and on. In essence, what we find is a complexity and synergism of bio-active nutrients that mere multivitamin pills can't come close to replicating.

_

Can daily vitamin pills increase your risk of disease? Are synthetic antioxidants harmful?

For decades the message has been clear: supplements deliver vital nutrients often missing from our diets, particularly antioxidants such as vitamins A, C and E, which help fight the damaging action of free radicals. These molecules are derived from oxygen and are produced by factors as varied as pollution and breathing. Worryingly, free radicals have been linked to a host of serious ailments, including cardiovascular disease, degenerative diseases such as Alzheimer's, autoimmune conditions, diabetes and cancer. So the thinking has been: free radicals bad, antioxidant pills good. But increasingly scientists are questioning the benefits of antioxidant pills, and even suggesting that some might actually cause us serious harm. Most recently, a study published last month by the University of California found no good evidence that they reduce the risk of cancer in healthy people. More alarmingly, the researchers, who looked at numerous studies assessing the impact of antioxidants (as well as folic acid, calcium and vitamin D), suggested that large doses of some could help promote cancer. These were beta carotene (a precursor of vitamin A) and vitamins C and E. And these were not isolated findings. A worrying body of research now shows that the antioxidant pills you're taking to protect your health may, in fact, be increasing your risk of disease, and even premature death.

_

One early study from 1994 found regularly taking beta carotene supplements (a 20 mg pill) increased the risk of death from lung cancer by 8 per cent. A 2002 study found large doses of vitamin C (1g) and E (800 iu, the unit by which some vitamins are measured) almost trebled the risk of premature death among postmenopausal women. In 2010, scientists found that taking antioxidant supplements (vitamins A, C, E, beta carotene) could increase bladder cancer risk by 50 per cent.  And a U.S. study last year found vitamin E supplements (dose of 150 iu) increased the risk of prostate cancer by 17 per cent, with the risk of death increasing as the dose got larger. And yet, despite such findings, the sale of supplements generally continues to rise, with the biggest boost seen in individual supplements, for example, vitamin C capsules.

_

Concerns about antioxidant supplements are highlighted in a new book, The Health Delusion, written by Aidan Goggins, a pharmacist, and Glen Matten, both of whom have masters degrees in nutritional medicine:

As the authors explain: 'Millions of people are misled into ritualistically ingesting these substances in the belief that they are enhancing their general health and well-being.' In fact, they say, these pills can be positively unhealthy. They are particularly critical of the manufacturers: 'Maybe it's a genuine lack of comprehension of the science, or a stubbornness to expunge former beliefs, or worse, a blatant attempt to cash in while there's still money to be made. Whatever it is, [the supplement manufacturers] are putting your health in jeopardy and it's high time it stopped. It is clear that it is no longer science but market forces that are driving the macabre antioxidant industry.'

It's a controversial view, but Goggins and Matten point out that supplements are based on a flawed understanding of how antioxidants work. They say the original studies which switched the world on to the health-giving properties of antioxidants were based on diets rich in these compounds in their natural state, i.e. as found in fruit and vegetables. The antioxidant theory was first mooted by U.S. scientist Denham Harman in the 1950s. He suggested that the ageing process and its related diseases were the consequence of free-radical activity, and showed that free-radical inhibitors (antioxidants) were able to extend the lifespan of mice. Over subsequent decades, these findings were backed up by mounting evidence from laboratory studies showing that diets containing antioxidants stopped free radicals in their tracks, reducing the incidence of heart disease, strokes and cancers. 'Free radicals quickly became public enemy number one, and antioxidants our saviours,' write Goggins and Matten.

By the late 1970s, antioxidant supplements were flying off the shelves, with manufacturers packing larger and larger doses into each pill. Vitamins swiftly became a global mega-business, worth an estimated £43 billion today. The industry backed studies which supported a growing belief that vitamin pills could be just as effective as vitamins ingested in their natural form. But as Goggins and Matten point out, the studies extolling the virtues of vitamin pills were largely observational: they reported what appeared to happen to groups of people taking vitamins. Subsequent intervention studies (that is, more rigorous clinical trials involving placebo groups) have failed to show such dramatic results. 'Not only did the intervention studies show no positive effects from antioxidant supplementation, but also a worrying trend of increased harmful effects was emerging,' they say. 'The omens weren't good. Cancer, heart disease and mortality — the very things antioxidants were supposed to protect us against — were increased in those who supplemented their diet.'

Meanwhile, scientists began to realize that free radicals could actually be important to our health. They perform a host of vital functions in the body, including helping the immune system fight infection. Significantly, studies now show they actually stop the growth, and cause the death, of cancer cells. The emerging science indicates that free radicals only turn 'bad' when the body's coping abilities are overwhelmed, a state known as 'oxidative stress'. We are left with a delicate balancing act, explain Goggins and Matten. Both too many and too few free radicals spell trouble. And, it seems, large doses of vitamin pills can upset that delicate balance.
When vitamin companies started to put large doses in their capsules, the implication was that you could use supplements as you might a drug, in other words like a preventative medicine. 'We thought we could become masters of this dynamic, complex, finely tuned, self-regulating system simply by consuming large doses of antioxidants in the form of a pill,' say Goggins and Matten. But high-dose supplements are very different from the levels of antioxidants found in fruits and vegetables. 'By taking high-dose antioxidant pills, we end up overwhelming our body and putting this fragile balance out of whack.'

The recommended daily allowance (RDA) for vitamin E, for instance, is 22 iu, but your average vitamin E pill contains 18 times that. Similarly, a diet rich in fruit and vegetables provides around 200 mg of vitamin C per day, yet supplement doses of 1,000 mg (1 g) are routinely taken. At best, this could be money wasted. A meta-analysis of trials published in 2008 found that dietary vitamin C (from food such as oranges and red peppers) can offer protection against heart disease, and even reduce the risk of breast cancer in women with a family history of the disease. But the same trials found these reductions in risk did not exist in those taking vitamin C supplements, reported the European Journal of Cardiovascular Prevention and Rehabilitation. (And while many people think that if you take too much vitamin C any excess is simply excreted by the body, in high doses some of the excess will still be absorbed.)

Furthermore, an excessive intake of some nutrients (in pill form) can actually diminish the effect of other nutrients, causing real health problems. For instance, vitamin E is found in eight different forms in the body, but most supplements contain only one (alpha tocopherol). Studies show that when we ingest high levels of one type of vitamin E, our bodies kick out the other types to make room for it. This upsets a delicate balance, negating any potential disease-fighting properties and rendering the body more vulnerable to disease at a cellular level, write Goggins and Matten. Alpha tocopherol may be associated with a reduced risk of prostate cancer, but only when levels of another form of vitamin E are also high, as they would be in food. 'Taking a high dose of one nutrient without regard to the others is a bit like playing Russian roulette with your health,' say Goggins and Matten. 'You should still strive to get antioxidants, but they should come the way nature intended — via food.'
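
The dose arithmetic quoted above is easy to verify. The pill size for vitamin E below is inferred from the "18 times the 22 iu RDA" figure rather than stated directly; real products vary:

```python
# Comparing typical supplement doses with the dietary intakes quoted in the text.
doses = {
    # vitamin: (typical pill dose, dietary reference, unit)
    "vitamin E": (22 * 18, 22, "iu"),   # "18 times" the 22 iu RDA
    "vitamin C": (1000, 200, "mg"),     # 1 g pills vs ~200 mg from a good diet
}
for vit, (pill, diet, unit) in doses.items():
    print(f"{vit}: pill {pill} {unit} vs diet {diet} {unit} "
          f"= {pill / diet:.0f}x the food-based intake")
```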

_

But do the same concerns apply to ordinary multivitamin and mineral supplements?

The authors say that a low-dose capsule which provides the recommended daily amounts of nutrients is unlikely to be harmful. In fact, they accept that for women who are pregnant or breastfeeding, or for those following a strict vegetarian diet, there is a very legitimate need for a broader range of nutrients to meet the additional needs of the body. But for the rest of us, it's just money wasted. 'If your diet is terrible, then a multivitamin may be of some benefit,' says Matten. 'But we cannot over-emphasize how much of a "poor man's alternative" it is to an optimal diet. The notion that we can replace the synergy of literally hundreds of nutrients found in food with isolated nutrients in a pill form is absurd.'

_

USPSTF: No evidence that vitamins prevent cancer or cardiovascular disease:

The U.S. Preventive Services Task Force has determined that there is not enough evidence to recommend for or against most vitamin and mineral supplements — alone or in combination — for preventing cardiovascular disease and cancer. The statement is available for public comment on their website. A review of the evidence appears in the Annals of Internal Medicine. The group did, however, recommend against taking beta-carotene or vitamin E supplements for disease prevention, writing that vitamin E conferred no benefit and beta-carotene increases lung cancer risk in smokers. The group says that a healthy diet “may play a role in the prevention of cancer or cardiovascular disease.”  The recommendations do not apply to children, women who are pregnant or may become pregnant, and people who are hospitalized, have chronic illness, or have a nutritional deficiency.  

_

Food Vitamins are Superior to Non-Food Vitamins:

Although many mainstream health professionals believe, “The body cannot tell whether a vitamin in the bloodstream came from an organically grown cantaloupe or from a chemist’s laboratory”, this belief is quite misleading for several reasons.

Firstly, it seems to assume that the process of getting the amount of the vitamin into the bloodstream is the same (which is frequently not the case).

Secondly, scientists understand that particle size is an important factor in nutrient absorption even though particle size is not detected by chemical assessment.

Thirdly, scientists also understand that, “The food factors that influence the absorption of nutrients relate not only to the nature of the nutrients themselves, but also their interaction with each other and with the nonabsorbable components of food”.

Fourthly, “the physiochemical form of a nutrient is a major factor in bioavailability” (and food and non-food vitamins are not normally in the same form).

Fifthly, most non-food vitamins are crystalline in structure.

Published scientific research has concluded, “natural food vitamins are nutritionally superior to synthetic ones”.

Food vitamins are in the physiochemical forms which the body recognizes, generally are not crystalline in structure, contain food factors that affect bioavailability, and appear to have smaller particle sizes. This does not mean that non-food vitamins do not have any value (they clearly do), but it is important to understand that natural food complex vitamins have actually been shown to be better than isolated, non-food, vitamins.

 Comparison of Certain Biological Effects of Food and Non-Food Vitamins

VITAMIN | FOOD VITAMIN COMPARED TO USP/'NATURAL'/SYNTHETIC (NON-FOOD) VITAMIN
Vitamin A | More complete, as scientists teach that vitamin A is not an isolate
Vitamin B complex | More effective in maintaining good health and liver function
Vitamin B-9 | More utilizable above 266 mcg (Recommended Daily Intake is 400 mcg)
Vitamin C | Over 15.6 times the antioxidant effect
Vitamin D | Over 10 times the antirachitic effect
Vitamin E | Up to 4.0 times the free-radical scavenging strength
Vitamin H (biotin) | Up to 100 times the biotin effect
Vitamin K | Safer for children

The difference is more than quantitative. Most vitamins sold are not food; they are synthetically processed petroleum and/or hydrogenated sugar extracts, even if they say "natural" on the label. They are not in the same chemical or structural form as real vitamins in foods; thus they are not natural for the human body. True natural food vitamins are superior to synthetic ones. Food vitamins are functionally superior to non-food vitamins, as they tend to be preferentially absorbed and/or retained by the body. Isolated, non-food vitamins, even when not chemically different, are only fractionated nutrients.

___________

Vitamin B-12 and vitamin D myth:

A cross-sectional and interventional study was carried out to assess the prevalence of vitamin B12 and vitamin D deficiency in male office executives in the tropical city of Mumbai, India. A total of 75 senior executives were surveyed and subjected to analysis of blood levels of vitamin D (25-hydroxycholecalciferol) by the RIA method and vitamin B12 by the CLIA method. The analyses were performed in a reputed analytical laboratory with NABL accreditation. History of smoking, exposure to sunlight, exercise, dietary habits, consumption of vitamin supplements, medication etc. was obtained. The results revealed 65% of executives with vitamin B12 deficiency (less than 193 pg/mL) and 28% with vitamin D deficiency (less than 7.6 ng/mL). In the second phase of the survey, 58 executives with low B-12/D values were given vitamin B-12/D3 oral supplements for a period of three months, along with counseling for lifestyle modification. A modified questionnaire was then circulated and the subjects analyzed for B-12/D3 values. Significant improvements in serum B-12 and D3 values were seen after the oral therapy, sun exposure and dietary modifications.

_

Many researchers agree that blood levels of vitamin D under 20 ng/mL are associated with poor health outcomes, including increased mortality from all causes and increased risk of developing osteoporosis, diabetes, autoimmune diseases, heart disease, depression and several types of cancer. In fact, the lower the vitamin D level falls below 20, the greater the risk. Almost all researchers agree that when blood levels of vitamin D are low, supplements ought to be given to raise them to 20 ng/mL. Yet despite the health potential of vitamin D, as many as half of all adults and children are said to have less than optimum levels, and as many as 10 percent of children are highly deficient, according to a 2008 report in The American Journal of Clinical Nutrition.

_

Although numerous studies have been promising, there are scant data from randomized clinical trials. Little is known about what the ideal level of vitamin D really is, whether raising it can improve health, and what potential side effects are caused by high doses. And since most of the data on vitamin D comes from observational research, it may be that high doses of the nutrient don’t really make people healthier, but that healthy people simply do the sorts of things that happen to raise vitamin D. “Correlation does not necessarily mean a cause-and-effect relationship,” said Dr. JoAnn E. Manson, a Harvard professor who is chief of preventive medicine at Brigham and Women’s Hospital in Boston.  “People may have high vitamin D levels because they exercise a lot and are getting ultraviolet-light exposure from exercising outdoors,” Dr. Manson said. “Or they may have high vitamin D because they are health-conscious and take supplements. But they also have a healthy diet, don’t smoke and do a lot of the other things that keep you healthy.”

_

What were the complaints of the sample in the Mumbai study which showed 65% of executives with vitamin B-12 deficiency and 28% with vitamin D deficiency?

Vague complaints of unexplained pain in upper and lower limbs were encountered among senior executives.

Do these symptoms suggest vitamin B-12 or D deficiency?

It is true that these executives worked long hours in air-conditioned offices and, in spite of living in a city with a tropical climate, were barely exposed to sunlight. However, they all belong to a high socio-economic class and are fortunate enough to eat wholesome, nutritious diets containing vitamin B-12 and D. Also, people of this class routinely take vitamin tablets, or their doctors have prescribed them in the past for other illnesses. How, then, do they have vitamin B-12 or D deficiency, and why are their complaints so vague and non-specific?

_

Role of vitamin D beyond bone health:

Of course, there are some placebo-controlled clinical trials which showed that high doses of vitamin D can be beneficial in conditions unrelated to the physiological role of vitamin D.

  • A Japanese study found that children taking 1,200 IU/day had a decrease in influenza during the winter. 
  • A New Zealand study found that 4,000 IU/day reduced insulin resistance (a precursor of diabetes) in Asian women, and the effect only occurred if the blood level rose to 32 ng/mL.
  • A Norwegian study found that 20,000 to 40,000 IU/week given for one year reduced depression scores in overweight individuals.
  • A clinical trial done at Creighton University in Nebraska found a 75 percent reduction in the incidence of cancer among post-menopausal women given 1,100 IU of vitamin D per day for four years. The effect required blood levels greater than 32 ng/mL.

Vitamin D is a star nutrient these days, as research links it to numerous health benefits. Studies suggest vitamin D may go beyond its well-established role in bone health and reduce the risk of cancer, heart disease, stroke, diabetes, autoimmune diseases, and more. However, Patsy Brannon, PhD, RD, a Cornell University professor of nutritional sciences and a member of the IOM committee, spoke about health benefits of vitamin D beyond bone health at the American Dietetic Association’s 2011 annual meeting in San Diego. “The committee of 14 scientists reviewed more than 1,000 publications and determined that the evidence was inconsistent and inconclusive to include any other health benefits in the new recommendations,” Brannon said. “The committee is not dismissing the role of vitamin D in other areas, we need more clinical trials, consistent evidence, and evidence that supports causality.” 

_

My view:

First, what constitutes a normal level of vitamin B-12 or vitamin D is itself debatable. In fact, from the above discussion, I am tempted to believe that the prevalence of vitamin B-12 and vitamin D deficiency is highly overestimated.

Second, should we treat a mere biochemical abnormality if symptoms and signs do not match?

Third, should we trust studies which show benefits of high doses of vitamin D in conditions which have no relevance to the physiological action of vitamin D? The sole role of vitamin D is to help the absorption of dietary calcium and phosphorus in the intestines, nothing more and nothing less, and all studies which claim a greater role for vitamin D must be taken with a pinch of salt.

Vitamin D deficiency results in an increase in PTH levels, which in turn increases osteoclastic activity, resulting in the removal of matrix and mineral from the skeleton. As a result, vitamin D deficiency in adults reduces bone mineral content, leading to osteopenia and osteoporosis. In addition, the secondary hyperparathyroidism results in phosphorus loss in the kidneys, resulting in a normal serum calcium with a low-normal serum phosphorus level. This causes a low calcium-phosphorus product in the blood, which results in the inability of the collagen matrix to be mineralized, leading to osteomalacia. Therefore, for adults, vitamin D deficiency is associated with normal serum calcium, low-normal or normal serum phosphorus, elevated PTH, and normal or elevated alkaline phosphatase. Review of many studies of vitamin D status in both young and old subjects shows a blunted PTH response in the presence of hypovitaminosis D. Sahota has reported that 50% of patients with hypovitaminosis D (<30 nmol/L) fail to develop hyperparathyroidism, and as a result have lower 1,25-dihydroxyvitamin D and a lower serum calcium as a result of less calcium absorption. So a low vitamin D level ought to be accompanied either by a high PTH with normal calcium, or by a normal PTH with low calcium. You cannot have vitamin D deficiency with normal calcium and normal PTH levels. In the absence of these criteria, how can you justify a diagnosis of vitamin D deficiency merely on a lower serum level of vitamin D?
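
The argument above can be condensed into a diagnostic rule: a low 25-hydroxyvitamin D reading should be called "deficiency" only when the expected biochemical picture accompanies it. The sketch below encodes that rule; the 20 ng/mL cut-off and the boolean inputs are illustrative assumptions, not clinical guidance:

```python
# Sketch of the author's criteria: low 25-OH-D alone is not deficiency unless
# accompanied by secondary hyperparathyroidism (high PTH, normal calcium) or
# a blunted PTH response (normal PTH, low calcium).
def consistent_with_vit_d_deficiency(d25_ng_ml: float,
                                     pth_elevated: bool,
                                     calcium_low: bool) -> bool:
    if d25_ng_ml >= 20:  # assumed cut-off for a "low" 25-OH-D level
        return False
    if pth_elevated and not calcium_low:   # secondary hyperparathyroidism
        return True
    if not pth_elevated and calcium_low:   # blunted PTH response
        return True
    # Low 25-OH-D with normal PTH AND normal calcium: by the author's argument,
    # a mere biochemical number, not a deficiency state.
    return False

print(consistent_with_vit_d_deficiency(12, pth_elevated=True, calcium_low=False))   # True
print(consistent_with_vit_d_deficiency(12, pth_elevated=False, calcium_low=False))  # False
```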

It would be equally speculative to diagnose vitamin B-12 deficiency in the absence of megaloblastic anemia or neurological symptoms and signs. At the least, a high MCV value, which is a traditional criterion for vitamin B-12 deficiency, must be looked for (except where concomitant iron deficiency or hemoglobinopathies cause normal or even low MCV values). Nevertheless, to make a diagnosis of vitamin B-12 deficiency merely on the basis of the serum B-12 level is a mockery of medical science.

_

Does low level of 25-hydroxyvitamin D in blood indicate deficiency of vitamin D?

Right now, doctors use a blood test that measures a person's total 25-hydroxyvitamin D level. And if you consider just that total level, up to 90 percent of black Americans would be labeled vitamin D deficient. Among the nearly 1,200 black adults in a study, the average total vitamin D level was just shy of 16 nanograms per milliliter (ng/mL), versus almost 26 ng/mL among 900 white adults. In general, levels below 20 ng/mL are considered a vitamin D deficiency. The apparently high prevalence of vitamin D deficiency among blacks may be an artifact of which form is measured clinically, according to a New England Journal of Medicine study. The researchers looked at study participants' levels of vitamin D-binding protein, which basically locks up the vitamin away from body cells' use. It turned out that blacks also had lower levels of vitamin D-binding protein. So, on balance, black and white adults had similar levels of "bioavailable" vitamin D, the kind that their bodies can actually use; the blacks also had higher bone mineral densities than the whites. The explanation may lie with variation in a gene associated with vitamin D-binding protein: the variant more prevalent among blacks is associated with lower levels of the binding protein (whites with the variant also showed lower binding levels). Low levels of vitamin D-binding protein in blacks may provide protection against the manifestations of vitamin D deficiency despite low levels of total 25-hydroxyvitamin D. The authors conclude that low levels of 25-hydroxyvitamin D don't necessarily indicate a deficiency.

_

Does low level of vitamin B-12 in blood suggest deficiency of vitamin B-12?

Vitamin B12 deficiency causes particular changes to the metabolism of two clinically relevant substances in humans:

1. Homocysteine (the conversion of homocysteine to methionine, catalysed by methionine synthase, is blocked), leading to hyperhomocysteinemia;

2. Methylmalonic acid (the conversion of methylmalonyl-CoA to succinyl-CoA is blocked; methylmalonyl-CoA is made from methylmalonic acid in a preceding reaction), leading to methylmalonic acid accumulation.

Methionine is activated to S-adenosyl methionine, which aids in purine and thymidine synthesis, myelin production, protein/neurotransmitter/fatty acid/phospholipid production and DNA methylation. 5-Methyl tetrahydrofolate provides a methyl group, which is released in the reaction with homocysteine, resulting in methionine. This reaction requires cobalamin as a cofactor. The creation of 5-methyl tetrahydrofolate is an irreversible reaction. If B12 is absent, the forward reaction of homocysteine to methionine does not occur, and the replenishment of tetrahydrofolate stops. Because B12 and folate are both involved in the metabolism of homocysteine, hyperhomocysteinemia is a non-specific marker of deficiency. Methylmalonic acid is used as a more specific test of B12 deficiency.

Some researchers propose that the current standard norms of vitamin B12 levels are too low. In Japan, the lowest acceptable level for vitamin B12 in blood has been raised from about 200 pg/mL to 550 pg/mL.

Serum homocysteine and methylmalonic acid levels are considered more reliable indicators of B12 deficiency than the concentration of B12 in blood. The levels of these substances are high in B12 deficiency and can be helpful if the diagnosis is unclear. Approximately 10% of patients with vitamin B12 levels between 200 and 400 pg/mL will have a vitamin B12 deficiency on the basis of elevated levels of homocysteine and methylmalonic acid. From 20% to 40% of elderly people with low serum B12 levels have normal homocysteine and methylmalonic acid levels and should not be considered deficient. Falsely low values have been associated with multiple myeloma, oral contraceptives, folate deficiency, and pregnancy. Sometimes, a true cobalamin deficiency will not be detected by the serum vitamin B12 test; falsely normal serum cobalamin results might be seen with (but are not limited to) liver disease, myeloproliferative disorders, and renal insufficiency. If a patient has clinical evidence of vitamin B12 deficiency and a normal serum B12 level, it is important to evaluate further. Accepted lower limits of serum B12 levels in adults range between 170 and 250 pg/mL; however, higher levels (but less than 350 pg/mL) have been recorded in 15% of ostensibly healthy elderly patients with other findings suggestive of a deficiency state, most notably increased levels of serum methylmalonic acid. The true lower limit of normal serum B12 would therefore appear to be somewhat poorly defined.
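
Taken together, the work-up described above amounts to a simple decision procedure. The sketch below uses the cut-offs quoted in the text; what counts as "elevated" MMA or homocysteine is left to the laboratory's reference ranges and is assumed here as boolean inputs:

```python
# Sketch of B-12 interpretation: serum B-12 alone is unreliable in the
# 200-400 pg/mL grey zone, where elevated methylmalonic acid (MMA) and
# homocysteine are the more specific tie-breakers.
def interpret_b12(serum_b12_pg_ml: float,
                  mma_elevated: bool,
                  homocysteine_elevated: bool) -> str:
    if serum_b12_pg_ml < 200:
        if mma_elevated or homocysteine_elevated:
            return "deficiency likely"
        return "low serum B-12 but normal metabolites: may not be deficient"
    if serum_b12_pg_ml <= 400:
        if mma_elevated and homocysteine_elevated:
            return "functional deficiency despite grey-zone serum level (~10% of cases)"
        return "indeterminate: deficiency unlikely but not excluded"
    return "deficiency unlikely (beware falsely normal values, e.g. liver disease)"

print(interpret_b12(250, mma_elevated=True, homocysteine_elevated=True))
```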

_

Vitamin B-12 deficiency in India:

The Times of India dated Sept 7, 2012 reports that vitamin B-12 eludes 8 out of 10 Indians. The Times of India dated Sept 23, 2013 reports that in India approximately 60-70% of the population is believed to have low vitamin B-12 levels, with nearly 80% of the urban middle class having this problem. Vegetarians have a 4.4 times higher risk of low vitamin B12 concentrations. It has been repeatedly preached by journals, media and pharmaceutical companies that since a large segment of the Indian population is vegetarian, they must take vitamin B-12 pills, because the only sources of vitamin B-12 are animal products like meat and eggs. I have been a pure vegetarian since childhood and I have never developed vitamin B-12 deficiency. Of course, I drink milk and take dairy products, all of which contain sufficient B-12. In fact, the entire Indian vegetarian population is lacto-vegetarian, routinely consuming milk and milk products.

_

The daily requirement of vitamin B-12 in food is 2 micrograms per day. The B-12 store in the liver can prevent B-12 deficiency for 3 years even if the diet contains no vitamin B-12. B-12 is synthesized in nature by micro-organisms. Animal-derived foods are a primary source because animals eat other animal foods, produce B-12 internally via their intestinal bacteria, and eat food contaminated with bacteria. Vitamin B12 is also produced by micro-organisms in the soil. In the past, root vegetables contained adequate amounts of B-12; today root vegetables are cleaned so well that all traces of B-12 are removed. The surface bacteria on lightly washed organic vegetables do provide B-12. Fermented products such as tempeh and miso (obtained from fermented soya beans), shiitake mushrooms and algae (spirulina and nori) contain substances which are chemically similar to vitamin B12. Lacto-ovo vegetarians receive B12 through eggs and dairy products. Obtaining enough B12 through a pure vegan diet is more challenging. Confirmed vegan sources include fortified soy milks, other fortified foods, vitamin pills, and Red Star nutritional yeast (T6635), which is available at many natural food stores. Long-term studies of vegans have detected a very low rate of B12 deficiency. In fact, more meat-eaters than vegans suffer from this deficiency, due to problems absorbing B-12.

_

Vitamin B12 synthesis by human small intestinal bacteria:

In man, physiological amounts of vitamin B-12 (cyanocobalamin) are absorbed by the intrinsic factor mediated mechanism exclusively in the ileum. Human faeces contain appreciable quantities of vitamin B-12 or vitamin B-12-like material, presumably produced by bacteria in the colon, but this is unavailable to the non-coprophagic individual. This is reinforced by the fact that many species of totally or primarily vegetarian animals eat their feces; eating feces allows them to obtain B-12 on their diets of plant foods. However, the human small intestine also often harbours a considerable microflora which may synthesize significant amounts of vitamin B-12. Researchers have shown that at least two groups of organisms in the small bowel, Pseudomonas and Klebsiella sp., may synthesize significant amounts of the vitamin. It is possible that some vegans can ward off overt vitamin B-12 deficiency, and even mild B-12 deficiency, through B-12 production by bacteria in the small intestine.

_

Besides, there are other reasons why the vegetarian Indian population does not develop B-12 deficiency:

  • In India, water is contaminated with various bacteria, including those from human and animal feces.
  • The practice of defecating in open fields and the lack of proper sewage systems.
  • The mode of toilet hygiene where hand washing with water is used instead of toilet paper.

_

All of the above discussion shows that most vegetarian Indians do not need vitamin B-12 supplements. The so-called studies showing low levels of vitamin B-12 in blood among the Indian population gave too much emphasis to one biomarker without any clinical correlation, without measuring methylmalonic acid in blood, and ignoring the many other variables in the B-12 saga. In fact, like Japan, India must raise the lowest acceptable level for vitamin B12 in blood from about 200 pg/mL to 550 pg/mL. The present Indian saga of B-12 deficiency is imitation science which benefits pharmaceutical companies far more than the population.

__________

Osteoporosis, fractures and vitamin D:

Osteoporosis is defined by the World Health Organization (WHO) as a bone mineral density of 2.5 standard deviations or more below the mean peak bone mass (the average of young, healthy adults) as measured by dual-energy X-ray absorptiometry; the term "established osteoporosis" includes the presence of a fragility fracture. The disease may be classified as primary type 1, primary type 2, or secondary. The form of osteoporosis most common in women after menopause is referred to as primary type 1 or postmenopausal osteoporosis. Primary type 2 osteoporosis or senile osteoporosis occurs after age 75 and is seen in both females and males at a ratio of 2:1. Secondary osteoporosis may arise at any age and affects men and women equally. This form results from chronic predisposing medical problems or disease, or from prolonged use of medications such as glucocorticoids, when the disease is called steroid- or glucocorticoid-induced osteoporosis. The prevalence of osteoporosis increases with age because of the progressive loss of bone. Osteoporosis is considered a woman's disease, but the prevalence in men also increases exponentially with age. One in 4 women and one in 8 men older than 50 years are believed to have osteoporosis. A fracture is considered to be osteoporotic (a fragility fracture) if it is caused by relatively low trauma, such as a fall from standing height or less; a force which in a young healthy adult would not be expected to cause a fracture. Overwhelming evidence has shown that the incidence of fracture in specific settings is closely linked to the prevalence of osteoporosis or low bone mass. In a prospective study of 8,134 women older than 65 years, Cummings et al showed that women with femoral-neck BMD (bone mineral density) in the lowest quartile have an 8.5-fold greater risk of sustaining a hip fracture than those in the highest quartile. Each 1 standard deviation decrease in femoral-neck BMD increases the age-adjusted risk of hip fracture 2.6-fold. Thus, a strong correlation exists between BMD and fracture risk.
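
The Cummings gradient lends itself to a one-line formula: if each 1 SD decrease in femoral-neck BMD multiplies age-adjusted hip-fracture risk by 2.6, the relative risk is roughly 2.6 raised to the number of SDs below the reference. A sketch (relative risk only; absolute risk would require a baseline fracture rate):

```python
# Relative hip-fracture risk implied by the 2.6-fold-per-SD gradient
# reported by Cummings et al. (an extrapolation for illustration).
def relative_hip_fracture_risk(bmd_z_score: float) -> float:
    """Risk relative to a person at the reference BMD (z = 0)."""
    return 2.6 ** (-bmd_z_score)  # z = -1 (1 SD below) -> 2.6x risk

for z in (0.0, -1.0, -2.0, -2.5):
    print(f"BMD {z:+.1f} SD: {relative_hip_fracture_risk(z):.1f}x relative risk")
# At -2.5 SD (the WHO osteoporosis threshold) the gradient alone implies
# roughly an 11x relative risk versus the reference.
```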

_

The myth of Calcium and vitamin D supplementation in post-menopausal women:

What should a young woman who is actively thinking about the quality of her post-menopausal years do?

As of now, calcium continues to be included in many women's daily vitamin regimens. While the results of the most recent study of the effects of calcium on overall health were disappointing, most doctors seem to believe that it still plays an important role in long-term bone health. A study found that calcium does little to prevent broken bones and fractures in women over 50. Researchers also discovered that calcium supplements are not only less effective than originally thought, but that they can also cause kidney stones: among the women reviewed, there were an additional five cases of kidney stones per 10,000 women per year. The study's results refute an established medical theory, that calcium and vitamin D maintain healthy bones and that, since women's bones become more fragile after menopause, they should start taking supplements as early as possible. It also jeopardizes the $993-million-a-year calcium supplement industry. $993 million! Calcium and vitamin D may not be living up to their hype, but they don't really need to: according to Gina Kolata's piece in the New York Times, many women over 50 already get an adequate amount of the two from sunlight and diet.
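
The kidney-stone finding can be restated as a number needed to harm (NNH), a quick sanity check on the size of the absolute risk:

```python
# NNH from the quoted figure: 5 extra kidney-stone cases per 10,000
# supplemented women per year.
extra_cases = 5
per_women = 10_000
nnh = per_women / extra_cases
print(f"NNH = {nnh:.0f} women supplemented for one year per extra kidney stone")
# A small absolute harm, but one paired with little or no fracture benefit,
# which is the author's point.
```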

_

Taking Vitamin D supplements does not prevent Osteoporosis:

Taking vitamin D supplements does not prevent osteoporosis, according to a study of more than 4,000 healthy adults published in The Lancet. With close to half of adults aged 50 and older using vitamin D supplements, the authors conclude that continuing widespread use of these supplements to prevent osteoporosis in healthy adults is needless. Reid and colleagues from the University of Auckland conducted a systematic review and meta-analysis of all randomized trials examining the effects of vitamin D supplementation on bone mineral density in healthy adults up to July 2012. Analysis of data from 23 studies involving 4,082 healthy adults (average age 59 years) did not identify any effects in people who took vitamin D for an average period of 2 years, apart from a small but statistically significant increase in bone density (0.8%) at the femoral neck. According to the authors, such a localized effect is unlikely to be clinically significant. The authors conclude, "This systematic review provides very little evidence of an overall benefit of vitamin D supplementation on bone density…Continuing widespread use of vitamin D for osteoporosis prevention in community-dwelling adults without specific risk factors for vitamin D deficiency seems to be inappropriate." In other words, for people with normal bones and an adequate calcium intake, there is little or no need for vitamin D supplementation; supplementation to prevent osteoporosis in healthy adults is not warranted.

_

Vitamin D and Calcium supplements may not prevent Fractures:

The U.S. Preventive Services Task Force (USPSTF), an independent group of health experts, reviewed research on the role of vitamin D and calcium supplements in preventing fractures, and found that adding 400 IU of vitamin D and 1000 mg of calcium to a healthy diet does not lower risk of fractures in post-menopausal women, and that for younger women and for men, the studies are too inconclusive to support regular use of the supplements. “It’s important to keep in mind that the presumption is that the people we are talking about here do not have known bone disease, they don’t have osteoporosis and they are not vitamin D deficient,” says the task force chair Dr. Virginia Moyer, a pediatrics professor at Baylor College of Medicine. “This is supplemental, so this is above and beyond getting what the expert consensus is for what you should be getting every day.”

_
Does Vitamin D prevent Fractures in the elderly?

The review, published in the New England Journal of Medicine, looked at data from 11 trials that included a combined total of more than 31,000 people. These trials compared vitamin D to placebo in people who were at least 65 years old and monitored participants for fractures over time. Vitamin D was given daily, weekly, or every four months, and in some of the trials, vitamin D was combined with calcium. The people in the trials were monitored for hip and non-vertebral (that is, not involving the spine) fractures.  After analyzing the combined data from all of the studies, the reviewers found the following:

  • Pretrial vitamin D levels were generally low, with 30% of participants having clear deficiencies and 88% having levels considered to be suboptimal.
  • People whose actual intake of vitamin D was 792 to 2,000 IU per day had a significant, 30% reduction in hip fractures, and a 14% reduction in all non-vertebral fractures.
  • There was no reduction in any fracture risk in people whose actual intake was less than 792 IU per day.
  • Among people with the highest actual vitamin D intake, those who also received 1,000 mg per day or more of calcium had a higher risk of fracture than people who took less than 1,000 mg per day.

The findings suggest that only a high intake of vitamin D leads to a significant reduction in the risk of fracture. Based on the findings of this review, older people can protect their bones by taking 800 to 2,000 IU per day of vitamin D. The reviewers noted that taking vitamin D appeared to be just as effective when given weekly as when given daily. Aging is associated with decreased sun exposure, decreased oral intake, reduced skin activation of vitamin D, and reduced vitamin D absorption. All of these factors may contribute to insufficiency of vitamin D, which is required for calcium absorption and bone mineralization.
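
The review's dose-response pattern reduces to a simple lookup over actual daily intake. The sketch below merely paraphrases the bullet points above; it is not a clinical recommendation:

```python
# Dose-response pattern from the NEJM pooled analysis (adults 65+),
# keyed on ACTUAL daily vitamin D intake rather than assigned dose.
def fracture_effect(actual_iu_per_day: float) -> str:
    if actual_iu_per_day < 792:
        return "no reduction in fracture risk observed"
    if actual_iu_per_day <= 2000:
        return "30% fewer hip fractures, 14% fewer non-vertebral fractures"
    return "above the intake range covered by the pooled trials"

for dose in (400, 800, 1000, 2500):
    print(f"{dose:>5} IU/day: {fracture_effect(dose)}")
```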

_

Calcium supplements are unnecessary at best and potentially harmful at worst:

Calcium has become extremely popular to supplement with, especially amongst older women, in the hope that it will prevent osteoporosis. We’ve all seen the products on the market aimed at the “worried-well”, such as Viactiv and Caltrate, suggesting that supplementing with calcium can help maintain bone health and prevent osteoporosis, a serious condition affecting at least 10% of American women.

(1) Yet the evidence that calcium supplementation strengthens the bones and teeth was never strong to begin with, and has grown weaker with new research published in the past few years. A 2012 analysis of NHANES data found that consuming a high intake of calcium beyond the recommended dietary allowance, typically from supplementation, provided no benefit for hip or lumbar vertebral bone mineral density in older adults.

(2) And a 2007 study published in the American Journal of Clinical Nutrition found that calcium supplements don’t reduce fracture rates in older women, and may even increase the rate of hip fractures.

 (3) Beyond being ineffective for bone health, calcium supplements are associated with some pretty serious health risks. Studies on the relationship between calcium and cardiovascular disease (CVD) suggest that dietary intake of calcium protects against heart disease, but supplemental calcium may increase the risk. A large study of 24,000 men and women aged 35–64 years published in the British Medical Journal (BMJ) in 2012 found that those who used calcium supplements had a 139% greater risk of heart attack during the 11-year study period, while intake of calcium from food did not increase the risk.

(4) A meta-analysis of studies involving more than 12,000 participants also published in BMJ found that calcium supplementation increases the risk of heart attack by 31%, stroke by 20% and death from all causes by 9%.

 (5) An analysis involving 12,000 men published in JAMA Internal Medicine found that intakes of over 1,000 mg of supplemental calcium per day (from multivitamins or individual supplements) were associated with a 20% increase in the risk of death from CVD.

(6) Researchers suspect that the large burst of calcium in the blood that occurs after supplementation may facilitate the calcification of arteries, whereas calcium obtained from food is absorbed at slower rates and in smaller quantities than from supplements.

(7) It is also suspected that extra calcium intake above one’s requirements is not absorbed by bones, but rather excreted in the urine, increasing the risk of calcium kidney stones, or circulated in the blood, where it might attach to atherosclerotic plaques in arteries or heart valves.

(8) The Office of Dietary Supplements at the National Institutes of Health has compiled a comprehensive review of the health risks associated with excess calcium, particularly from supplementation. For example, daily supplementation of calcium at 1000 milligrams is associated with increased prostate cancer risk and an increase in kidney stones.

(9) Additionally, a recent Swedish study reported a 40% higher risk of death among women with high calcium intakes (1400 mg and above), and a 157% higher risk of death if those women were taking a 500 mg calcium supplement daily, compared to women with moderate daily calcium intakes (600-1000 mg).

(10) A Consumer Lab analysis found that many of the calcium supplements they analyzed failed quality testing, including lead contamination and mislabeled contents.

_

If you're concerned about maintaining healthy bones, you're better off ensuring adequate calcium intake from foods like dairy products, sardines, salmon, dark leafy greens and bone broth. Around 600 milligrams per day from food (approximately two servings of dairy products or bone-in fish) is plenty to maintain adequate levels of calcium in the body. Healthy bone formation also depends on vitamin D and vitamin K2, both of which regulate calcium metabolism, and on other minerals besides calcium, such as silica and magnesium. If you have adequate levels of these nutrients and regularly perform weight-bearing exercise, there is no need for calcium supplementation, which will likely do more harm than good.

_

Do calcium supplements really keep bones healthy, and even build them?

Robert Thompson, M.D., doesn't think so. His book The Calcium Lie asserts that our bones are in fact comprised of a dozen minerals, calcium being just one. Exclusively supplementing just one of these minerals, Thompson says, can actually decrease your bone density and increase your risk of osteoporosis! Adequate calcium intake is needed for healthy bones, but consuming supplemental calcium at doses such as 1,500 mg per day may in fact inhibit its absorption into your body. Our bones need calcium, but they also need many other minerals. If you flood your body with calcium but deny it the so-called "trace" minerals that it needs, you might be setting your skeleton up for a breakdown.

Unprocessed salt: Dr. Thompson recommends using unprocessed salt such as Himalayan salt to provide your body with the trace minerals that your bones need.

Omega-3: This nutrient not only prevents cognitive decline associated with aging, but recent studies in the British Journal of Nutrition show that these essential fatty acids also enhance bone mineral content.

Vitamin K: Found in broccoli, Brussels sprouts and leafy greens like spinach, kale and collard greens, vitamin K helps bones retain calcium and develop the right structure. Crucial for infants, vitamin K can only be absorbed with fat – another reason a no-fat diet is not the best idea.

Exercise: Strength training helps to counteract the effects of aging on bones, which become less dense and more brittle as time goes by. Weight-bearing exercise is perhaps one of the best preventative measures one can take against osteoporosis, as the pressure on bones makes them create new bone material.

Sunshine: A cheap and easy treatment that improves your physical and mental health, sunshine provides the vitamin D that your body needs. If you live in an overcast climate and do not get the recommended 15-20 minutes of sunshine a day, consider taking a Vitamin D supplement to ward off deficiency.  

__________

Milk, sugar and acne:

For over 50 years dermatologists have been denying the link between diet and acne. They insist on prescribing gut-destroying pharmaceuticals with real long-term adverse side effects, and laugh at the notion that diet has anything to do with this teenage nightmare. In a 1969 study, researchers fed 65 teens and young adults either a chocolate bar or a placebo bar (with no chocolate liquor) for one month. They concluded that there was no difference between the two groups. Only later did other scientists point out that the placebo, which was intended to be a healthy control, was actually loaded with artificial trans fats. Then, in a 1971 study, 27 medical students with acne ate chocolate, peanuts, milk or cola for just one week. The researcher concluded that eating these foods had no effect on their skin condition. The study is now recognized as poorly designed and too small to have any scientific relevance. Nevertheless, the two studies have been cited repeatedly as gospel, not only for the proposition that chocolate has no effect on acne, but that food in general has no effect. Recently some integrative dermatologists and dietitians have been revisiting the link between diet and acne, and the role nutrition can play in treatment. Finally, a new study published in the Journal of the Academy of Nutrition and Dietetics has revealed increasing evidence of the connection between what you eat and how healthy your skin is. Researchers have concluded that diet and acne are in fact connected, and that the biggest culprits are a high glycemic load diet and dairy products. Nutrition is finally being recognized as an important player in acne treatment. Although the research from studies over the last 10 years did not demonstrate that diet causes acne, the researchers found that it may influence or aggravate it. They urged the medical community to adopt diet therapy as an adjunct treatment for acne and to consider dietary counseling in their treatment plans.

_________

Bad science about cooking oil:

Myth: Vegetable oil contains zero cholesterol and is therefore safe for the heart.

This is a very misleading statement on food labels. No vegetable oil contains cholesterol, so “zero cholesterol” is trivially true. Yet a food labelled zero cholesterol might still contain trans fats and saturated fats that raise cholesterol levels. Also, vegetable oils consumed in excess increase total fat intake, and too much fat in the diet raises blood cholesterol.

_

Myth: Vegetable Oil helps reduce Cholesterol:

All vegetables and veggie-based products contain chemicals called phytosterols, which block the intestine from absorbing cholesterol, thereby reducing LDL cholesterol in your blood. Large amounts of phytosterols have a dramatic effect on cholesterol, as many studies have shown, and olive oil, canola oil and corn oil all contain them. But while it’s true that plant sterols are found in everything from vegetable oils and grains to fruits and vegetables, you would need to eat approximately 100 pounds of fruits and vegetables daily to reach the total daily intake of 0.8 grams needed for plant sterols to lower your cholesterol. Since vegetable oil supplies only a small amount of phytosterols at realistic intakes, it will not reduce cholesterol.

_

Myth: Unrefined vegetable oil is better than refined oil.   

Unrefined oil contains “impurities”: some of them good for you, like nutrients or plant proteins, and some bad, like plant carcinogens or rat feces. Good and bad, these “adulterants” may taste bad, burn, or spoil easily. In general, unrefined oil will burn at a lower temperature and spoil faster than refined oil; conversely, refined oil can withstand higher heat and keeps longer. Crude vegetable oil contains variable amounts of non-glyceride impurities, such as free fatty acids, non-fatty materials generally classified as “gums” or phosphatides, color pigments, moisture, and dirt. Most of these materials are detrimental to the finished product’s color, flavor, and smoking ability, and must be removed by a purification step. However, the refining process also removes phytosterols, iron, calcium, and magnesium, among other things, and introduces a small amount of trans fat. On balance, refined oil is better than unrefined crude vegetable oil.

________

Who Needs Vitamin and Mineral Supplements?

Anyone whose diet lacks the 40-plus nutrients needed for good health may benefit from vitamin and mineral supplements. In general, the following groups can be helped, but they should consult their doctor or a registered dietitian when deciding if they need a supplement or choosing one:

•           Pregnant and lactating women

•           Vegans and some people on vegetarian diets

•           Anyone on a low-calorie diet (intentional or unintentional)

•           People with certain disease states (including a history of cancer)

•           People who suffer from food allergies or intolerances

•           Picky eaters who limit food groups, or have limited variety within food groups

•           Anyone with a poor diet

•           People taking certain medications

__________

Bad science in adding salt to water when cooking foods at home:

A common chemistry misconception, related to boiling point elevation, is the reason often given for adding salt to water when boiling foods. It is often stated that this is to increase the temperature of the boiling water and thus speed the rate of cooking. It is certainly true that a small increase in cooking temperature can significantly increase the rate of cooking; cooking times will typically be only half as long if the water temperature is raised by 10–20 °C. However, even if we make the cooking water as salty as sea water (which requires adding twelve tablespoons of table salt per gallon of water!), the boiling point will only increase by about 0.6 °C, which will only decrease the cooking time by a few percent. If you feel compelled to save even this small amount of cooking time, then the last thing you need is to risk increasing your blood pressure further by consuming so much sodium!
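The 0.6 °C figure follows from the standard boiling-point-elevation relation, ΔT = i × Kb × m. Here is a minimal sketch of that arithmetic in Python, assuming sea-water-strength brine of roughly 35 g of NaCl per kg of water and ideal dissociation; the constants and function name are illustrative, not from any particular source:

```python
# Boiling-point elevation: delta-T = i * Kb * m (assumed ideal behaviour).
MOLAR_MASS_NACL = 58.44   # g/mol
KB_WATER = 0.512          # ebullioscopic constant of water, degC*kg/mol
VANT_HOFF_NACL = 2        # NaCl dissociates into two ions, Na+ and Cl-

def boiling_point_elevation(grams_salt_per_kg_water: float) -> float:
    """Return the boiling-point rise (degC) for a NaCl solution."""
    molality = grams_salt_per_kg_water / MOLAR_MASS_NACL  # mol per kg of water
    return VANT_HOFF_NACL * KB_WATER * molality

print(f"{boiling_point_elevation(35):.2f} degC")  # ~0.61 degC: a trivial gain
```

Even at sea-water salinity the pot boils barely half a degree hotter, which is why the time saved is negligible.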

__________

Meteorology and imitation science:          

Is global warming science or imitation science?

How do we know Global Warming is real and human caused:

Converging Lines of Evidence:

1. Carbon Dioxide Increase

2. Melting Polar Ice Caps

3. Melting Glaciers

4. Sea Level Rise

Climate Deniers’ Arguments and Scientists’ Rebuttals:

Despite the overwhelming evidence there are many people who remain skeptical. One reason is that they have been fed lies, distortions, and misstatements by the global warming denialists who want to cloud or confuse the issue:

1. It’s just natural climatic variability.

2. It’s just another warming episode, like the Mediaeval Warm Period, or the Holocene Climatic Optimum or the end of the Little Ice Age

3. It’s just the sun, or cosmic rays, or volcanic activity or methane.

4. The climate records since 1995 (or 1998) show cooling.

5. We had record snows in the winters of 2009–2010, and in 2010–2011.

6. Carbon dioxide is good for plants, so the world will be better off.

_

Why do people deny Climate Change?

Thanks to all the noise and confusion over the debate, the general public has only a vague idea of what the debate is really about, and only about half of Americans think global warming is real or that we are to blame. As in the debate over evolution and creationism, the scientific community is virtually unanimous on what the data demonstrate about anthropogenic global warming. This has been true for over a decade. When science historian Naomi Oreskes surveyed all peer-reviewed papers on climate change published between 1993 and 2003 in the world’s leading scientific journal, Science, she found that there were 980 supporting the idea of human-induced global warming and none opposing it. In 2009, Doran and Kendall Zimmerman surveyed all the climate scientists who were familiar with the data. They found that 95–99% agreed that global warming is real and that humans are the reason. In 2010, the prestigious Proceedings of the National Academy of Sciences published a study that showed that 98% of the scientists who actually do research in climate change are in agreement with anthropogenic global warming. Every major scientific organization in the world has endorsed the conclusion of anthropogenic climate change as well. This is a rare degree of agreement within such an independent and cantankerous group as the world’s top scientists. This is the same degree of scientific consensus that scientists have achieved over most major ideas, including gravity, evolution, and relativity. These and only a few other topics in science can claim this degree of agreement among nearly all the world’s leading scientists, especially among everyone who is close to the scientific data and knows the problem intimately. If it were not such a controversial topic politically, there would be almost no interest in debating it, since the evidence is so clear-cut.

_

Pillars of science denial:

_

The first pillar of climate change denial — that climate change is bad science — attacks various aspects of the scientific consensus about climate change. The scientific community’s consensus, based on over a century of research, is that the Earth’s climate is changing significantly, that human activity is significantly responsible for the change, that the change will have a significant effect on the world and our society, and that humans are able to take significant actions to reduce and mitigate its impact.

Accordingly, there are climate change deniers:

•who deny that significant climate change is occurring

•who acknowledge that significant climate change is occurring, but deny that human activity is significantly responsible

•who acknowledge that significant climate change is occurring and that human activity is significantly responsible, but deny the scientific evidence about its significant effects on the world and our society

•who acknowledge that significant climate change is occurring, that human activity is significantly responsible, and that it will have a significant effect on the world and our society, but who deny that humans can take significant actions to reduce or mitigate its impact.

Of these varieties of climate change denials, the most visible are the first and the second: denial that significant climate change is occurring and denial that human activity is significantly responsible. But all at least partly contradict the scientific community’s consensus on the answers to the central questions of climate change. Because the scientific community’s consensus is the best standard available for judging what good science is, the primary way to counter the first pillar is to refer to the consensus — as displayed, for example, in authoritative statements from scientific organizations or systematic reviews of the scientific research literature.

_

Coal industry targeted scientist supporting global warming and called it bad science:

In the midst of the long, stifling summer of 1988, NASA climate specialist Jim Hansen sat before a congressional committee and delivered a wake-up call. He said the months-long drought that was cooking the Midwest might well be caused by global warming and be indicative of what could happen with increasing frequency in the future. Hansen then went one bold step further, saying, “The greenhouse effect has been detected and it is changing our climate now.” Many of Hansen’s scientific colleagues winced at his assertion. Although most knew that what he said was probably true, a public declaration seemed premature, even dangerous. Most scientists, especially those who specialized in climate studies, wanted a lot more evidence before declaring to the world that pollution was warming, and endangering, the planet. Such a declaration from the scientific community, they knew, would be problematic because global warming promises not only droughts and searing hot spells, but also floods, loss of species, deaths of forests, rising sea levels, shifts in agricultural belts and the spread of infectious disease. Global warming is, in the understated parlance of science, a “nontrivial problem.” One fear was that Hansen’s premature declaration would provide fuel to the industry skeptics anxious to debunk anything that might cost them money. Sure enough, after Hansen’s testimony, the howl from the coal industry was loud and long. The reason? The greenhouse effect is caused primarily by dumping carbon dioxide into the atmosphere by burning coal and other fossil fuels. The carbon dioxide allows the sun’s heat to reach the surface of the Earth and then traps it there by preventing the planet from radiating the heat back into space. It’s not rocket science to decide that the best solution to slow global warming is to burn less coal and other fossil fuels. That, of course, requires a more efficient use of energy, much stricter pollution regulations and potentially lower profits for the power industry. Industry’s reaction to the scientific warnings of global warming was not to admit the problem and dedicate itself to finding solutions. Instead, it launched a mocking attack on the messengers, the scientists. In the eight years since Hansen told Congress global warming was underway, scientists have piled up a mountain of evidence showing that we are indeed heating up the planet, and the scientific community declared in a U.N.-sponsored report that “the balance of evidence suggests that there has been a discernible human influence on global climate.” In other words, the planet is warming and we’ve probably caused it. When the coal industry tells us that global warming is a “myth” or, worse yet, a “hoax,” and then points to the many unanswered questions in the thousands of scientific studies on the subject, we lay people need to be aware that these problems have often been raised by the very scientists who are being attacked. That’s the scientific process. One scientist develops a theory as to how carbon dioxide is trapping heat in the atmosphere, then another scientist challenges the theory with a string of objections. More scientists join in the fray and test and retest the idea, then decide if it is a good one or should be discarded. This process has gone on countless times during the years of research into global warming and will continue for years to come.
But the research has reached a point where the scientists have enough solid evidence to say that there is absolutely no doubt that human activities are increasing the atmospheric concentrations of the greenhouse gases, which tend to warm the atmosphere.

_

Is weather forecast imitation science?

How good are the Weather Channel’s predictions?

A new study in the American Meteorological Society’s Monthly Weather Review attempts to answer that question – for one US cable network at least. A team from Texas A&M University at College Station studied The Weather Channel’s predictions over a 14-month period from 2004 to 2006, and compared them with real data on the actual weather at 50 locations across the US. The team was particularly interested in the channel’s figures that claimed to predict the probability of rain within a 12-hour time slot. To everybody’s surprise, the forecasts actually turned out to be pretty good – at least for short-term predictions. Precipitation probabilities between 40% and 90% are pretty much spot on, the team concluded, although predictions outside this range did tend to be unreliable. As you might expect, the predictions got worse the further in advance they were made, and you wouldn’t bother taking much notice of weekly forecasts, with the actual weather varying wildly from the predictions. Interestingly, the forecasts seem to err on the safe side, consistently predicting more rain than actually fell – quite the opposite of the British storm saga.
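The verification idea behind such a study is straightforward to sketch: group forecasts by their stated rain probability and check how often rain actually fell in each group (a so-called reliability check). Below is a toy illustration in Python; the data and function name are invented purely for the example, not taken from the study:

```python
# Reliability check: does "70% chance of rain" rain about 70% of the time?
from collections import defaultdict

def calibration(stated_probs, it_rained):
    """Map each stated probability to the observed frequency of rain."""
    buckets = defaultdict(list)
    for p, rained in zip(stated_probs, it_rained):
        buckets[p].append(rained)
    return {p: sum(obs) / len(obs) for p, obs in sorted(buckets.items())}

print(calibration([0.7, 0.7, 0.7, 0.7, 0.3, 0.3], [1, 1, 1, 0, 0, 1]))
# {0.3: 0.5, 0.7: 0.75} -- a well-calibrated forecaster's buckets match
```

A forecaster whose observed frequencies track the stated probabilities is well calibrated, which is what the Texas A&M team found for the 40–90% range.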

_

New technology allows better extreme weather forecasts:

New technology that increases the warning time for tornadoes and hurricanes could potentially save hundreds of lives every year. Stronger or more frequent weather extremes will likely occur under climate change, such as more intense downpours and stronger hurricane winds. Improved weather prediction, therefore, will be vital to giving communities more time to prepare for dangerous storms, saving lives and minimizing damage to infrastructure. New radar technology will allow forecasters to better “see” extreme weather, as will potential improvements to satellite technology, as well as computer models that run on more powerful supercomputers. Longer warning time is only effective when paired with better understanding of how to get people to respond to the warnings, all part of an effort to build a “weather-ready nation.”

_

On May 22, 2011, winds exceeding 200 miles per hour tore a devastating path three quarters of a mile wide for six miles through Joplin, Mo., destroying schools, a hospital, businesses and homes and claiming roughly 160 lives. The Joplin tornado was only one of many twister tragedies in the spring of 2011. A month earlier a record-breaking swarm of tornadoes devastated parts of the South, killing more than 300 people. April was the busiest month ever recorded, with about 750 tornadoes. At 550 fatalities, 2011 was the fourth-deadliest tornado year in U.S. history. The stormy year was also costly. Fourteen extreme weather and climate events in 2011—from the Joplin tornado to hurricane flooding and blizzards—each caused more than $1 billion in damages. The intensity continued early in 2012; on March 2, twisters killed more than 40 people across 11 Midwestern and Southern states. Tools for forecasting extreme weather have advanced in recent decades, but researchers and engineers at the National Oceanic and Atmospheric Administration are working to enhance radars, satellites and supercomputers to further lengthen warning times for tornadoes and thunderstorms and to better determine hurricane intensity and forecast floods. If the efforts succeed, a decade from now residents will get an hour’s warning about a severe tornado, for example, giving them plenty of time to absorb the news, gather family and take shelter.

_

The Power of Radar:
Meteorologist Doug Forsyth is heading up efforts to improve radar, which plays a role in forecasting most weather. Forsyth, who is chief of the Radar Research and Development division at NOAA’s National Severe Storms Laboratory in Norman, Okla., is most concerned about improving warning times for tornadoes because deadly twisters form quickly and radar is the forecaster’s primary tool for sensing a nascent tornado. Radar works by sending out radio waves that reflect off particles in the atmosphere, such as raindrops or ice or even insects and dust. By measuring the strength of the waves that return to the radar and how long the round-trip takes, forecasters can see the location and intensity of precipitation. The Doppler radar currently used by the National Weather Service also measures the frequency change in returning waves, which provides the direction and speed at which the precipitation is moving. This key information allows forecasters to see rotation occurring inside thunderstorms before tornadoes form. In 1973 NOAA meteorologists Rodger Brown, Les Lemon and Don Burgess discovered this information’s predictive power as they analyzed data from a tornado that struck Union City, Okla. They noted very strong outbound velocities right next to very strong inbound velocities in the radar data. The visual appearance of those data was so extraordinary that the researchers initially did not know what it meant. After matching the data to the location of the tornado, however, they named the data “Tornadic Vortex Signature.” The TVS is now the most important and widely recognized metric indicating a high probability of either an ongoing tornado or the potential for one in the very near future. These data enabled longer lead times for tornado warnings, increasing from a national average of 3.5 minutes in 1987 to 14 minutes today.
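The two measurements described here rest on textbook relations: range follows from the pulse’s round-trip time (r = ct/2, since the wave travels out and back), and radial velocity follows from the Doppler frequency shift (v = fλ/2 for a monostatic radar). A minimal sketch, assuming an S-band wavelength of about 10 cm, which is typical of the WSR-88D network but not a figure quoted above:

```python
# Two textbook radar relations: range from round-trip time, velocity
# from the Doppler shift. The wavelength is an assumed S-band value.
C = 3.0e8           # speed of light, m/s
WAVELENGTH = 0.10   # assumed radar wavelength, m

def target_range_km(round_trip_s: float) -> float:
    """Distance to the echo: the pulse travels out and back, r = c*t/2."""
    return C * round_trip_s / 2 / 1000

def radial_velocity(doppler_shift_hz: float) -> float:
    """Speed toward (+) or away from (-) the radar, v = f * lambda / 2."""
    return doppler_shift_hz * WAVELENGTH / 2

print(f"{target_range_km(400e-6):.0f} km")  # a 400-microsecond echo -> 60 km
print(f"{radial_velocity(1000):+.0f} m/s")  # a 1 kHz shift -> +50 m/s
```

It is exactly this velocity field that revealed the strong inbound and outbound couplet of the Tornadic Vortex Signature.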

_

Doppler radar improvements:

Although Doppler radar has been transformative, it is not perfect. It leaves meteorologists blind to the shape of a given particle, which can distinguish, say, a rainstorm from a dust storm.

Dual polarization:

One critical upgrade is called dual polarization. This technology allows forecasters to differentiate more confidently between types and amounts of precipitation. Although raindrops and hailstones may sometimes have the same horizontal width—and therefore appear the same in Doppler radar images—raindrops are flatter. Knowing the difference in particle shape reduces the guesswork required by a forecaster to identify features in the radar scans. That understanding helps to produce more accurate forecasts, so residents know they should prepare for hail and not rain, for example. Information about particle size and shape also helps to distinguish airborne bits of debris lofted by tornadoes and severe thunderstorms, so meteorologists can identify an ongoing damaging storm. Particle data are especially important when trackers are dealing with a tornado that is invisible to the human eye. If a tornado is cloaked in heavy rainfall or is occurring at night, dual polarization can still detect the airborne debris.
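Dual-polarization radars quantify this shape difference with the differential reflectivity, ZDR = 10·log10(Zh/Zv), which compares the power returned from horizontally and vertically polarized pulses. A toy sketch of the idea follows; the power values are invented for illustration, not real radar data:

```python
# Differential reflectivity: flattened raindrops return more power
# horizontally (ZDR > 0 dB); tumbling hail looks roughly spherical on
# average, so its ZDR sits near 0 dB. Values below are illustrative.
import math

def zdr_db(power_horizontal: float, power_vertical: float) -> float:
    """ZDR = 10 * log10(Zh / Zv), in decibels."""
    return 10 * math.log10(power_horizontal / power_vertical)

print(f"rain: {zdr_db(2.0, 1.0):+.1f} dB")  # oblate drops  -> +3.0 dB
print(f"hail: {zdr_db(1.0, 1.0):+.1f} dB")  # quasi-spheres -> +0.0 dB
```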

Phased-array radar:

Current Doppler radars scan at one elevation angle at a time, with a parabolic dish that is mechanically turned. Once the dish completes a full 360-degree slice, it tilts up to sample another small sector of the atmosphere. After sampling from lowest to highest elevation, which during severe weather equates to 14 individual slices, the radar returns to the lowest angle and begins the process all over again. Scanning the entire atmosphere during severe weather takes Doppler radar four to six minutes. In contrast, phased-array radar sends out multiple beams simultaneously, eliminating the need to tilt the antennas, decreasing the time between scans of storms to less than a minute. The improvement will allow meteorologists to “see” rapidly evolving changes in thunderstorm circulations and, ultimately, to more quickly detect the changes that cause tornadoes. Heinselman and her team have demonstrated that phased-array radar can also gather storm information not currently available, such as fast changes in wind fields, which can precede rapid changes in storm intensity.

_

Eyes in the Sky: The satellites:

Of course, even the best radars cannot see over mountains or out into the oceans, where hurricanes form. Forecasters rely on satellites for these situations and also rely on them to provide broader data that supplement the localized information from a given radar. Without more detailed satellite observations, extending the range of accurate weather forecasts—especially for such extreme events as hurricanes—would be severely restricted. Monitoring weather requires two types of satellites: geostationary and polar-orbiting. Geostationary satellites, which stay fixed in one spot at an altitude of about 22,000 miles, transmit near-continuous views of the earth’s surface. Using loops of pictures taken at 15-minute intervals, forecasters can monitor rapidly growing storms or detect changes in hurricanes (but not tornadoes). Geostationary satellites will improve, too. Advanced instruments that will image the earth every five minutes in both visible and infrared wavelengths will be onboard the GOES-R series of satellites to be launched in 2015. They will increase observations from every 15 minutes to every five minutes or less, allowing scientists to monitor the rapid intensification of severe storms. The GOES-R satellites will also provide the world’s first space view of where lightning is occurring in the Western Hemisphere. The lightning mapper will help forecasters detect jumps in the frequency of in-cloud and cloud-to-ground lightning flashes. Research suggests that these jumps occur up to 20 minutes or more before hail, severe winds and even tornadoes. Polar satellites, which orbit the earth from pole to pole at an altitude of approximately 515 miles, give closer, more detailed observations of the temperature and humidity of different layers of the atmosphere. A worldwide set of these low Earth orbit (LEO) satellites covers the entire globe every 12 hours. Their data will be used in computer models to improve weather forecasts, including hurricane tracks and intensities, severe thunderstorms and floods. The suite of advanced microwave and infrared sensors will relay much improved three-dimensional information on the atmosphere’s temperature, pressure and moisture, because rapid changes in temperature and moisture, combined with low pressure, signify a strong storm. Infrared sensors provide these measurements in cloud-free areas, and microwave sensors can “see through clouds” to the earth’s surface.
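Those two altitudes are not arbitrary. Kepler’s third law, T = 2π√(a³/μ), ties a satellite’s period to its orbital radius, and a satellite only appears to hover over one spot if its period matches Earth’s rotation. A quick sketch checking both quoted altitudes (constants rounded; the function name is illustrative):

```python
# Orbital period from altitude via Kepler's third law, T = 2*pi*sqrt(a^3/mu).
import math

MU_EARTH = 3.986e14      # Earth's gravitational parameter, m^3/s^2
EARTH_RADIUS_KM = 6371
MILES_TO_KM = 1.609

def orbital_period_hours(altitude_miles: float) -> float:
    a_m = (EARTH_RADIUS_KM + altitude_miles * MILES_TO_KM) * 1000  # radius, m
    return 2 * math.pi * math.sqrt(a_m**3 / MU_EARTH) / 3600

print(f"{orbital_period_hours(22000):.1f} h")  # ~23.6 h with the rounded figure
print(f"{orbital_period_hours(515):.1f} h")    # ~1.7 h: about 14 orbits a day
```

The rounded 22,000-mile altitude gives roughly a 23.6-hour period; the exact geostationary altitude of about 22,236 miles yields the full 23.9-hour sidereal day, so the satellite stays fixed over one longitude, while the 515-mile polar orbiter circles Earth about 14 times a day.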

_

Internal Waves may get Meteorologists off the hook for Bad Predictions:

While it may be easy to blame weathermen when their predictions go horribly, horribly wrong, one US researcher says that the meteorologist may not be to blame — that there are underlying scientific forces at work that can sabotage even the most well-researched forecasts. Those forces are known as internal waves, and according to Brigham Young University (BYU) mechanical engineering professor Julie Crockett, they impact our daily weather and long-term climate without us even realizing it. Internal waves occur both within the ocean and the atmosphere, she explained. We can’t see these “highly influential” waves, and neither can the models used by meteorologists, Crockett said, which is why their predictions can sometimes miss the mark. “Atmospheric internal waves are waves that propagate between layers of low-density and high-density air,” the Provo, Utah-based university said in a statement. “Although hard to describe, almost everyone has seen or felt these waves. Cloud patterns made up of repeating lines are the result of internal waves, and airplane turbulence happens when internal waves run into each other and break.” “Internal waves are difficult to capture and quantify as they propagate, deposit energy and move energy around,” Crockett added. “When forecasters don’t account for them on a small scale, then the large scale picture becomes a little bit off, and sometimes being just a bit off is enough to be completely wrong about the weather.” One example of this phenomenon likely occurred in BYU’s home state of Utah in 2011. Meteorologists there had predicted that a tremendous winter storm was set to strike statewide just prior to Thanksgiving, the university said. As a result, schools cancelled classes and people were sent home early in order to avoid the inclement weather — only the storm never actually happened. “Though it’s impossible to say for sure, internal waves may have been driving stronger circulations, breaking up the storm and causing it to never materialize,” BYU explained. “When internal waves deposit their energy it can force the wind faster or slow the wind down such that it can enhance large scale weather patterns or extreme kinds of events,” added Crockett, who is attempting to model internal waves and discover how they interact with one another or with other, related phenomena. “We are trying to get a better feel for where that wave energy is going.” Oceanic internal waves exist between layers of low-density water and high-density water, and can affect the water’s circulation as well as weather-related phenomena such as the Jet Stream and the Gulf Stream, the researchers said. Both types of internal waves contain enough energy to cause changes to the climate. “Crockett’s latest wave research, which appears in a recent issue of the International Journal of Geophysics, details how the relationship between large-scale and small-scale internal waves influences the altitude where wave energy is ultimately deposited,” BYU representatives explained. “To track wave energy, Crockett and her students generate waves in a tank in her lab and study every aspect of their behavior. She and her colleagues are trying to pinpoint exactly how climate changes affect waves and how those waves then affect weather,” they added. “Based on this, Crockett can then develop a better linear wave model with both 3D and 2D modeling that will allow forecasters to improve their weather forecasting.”

_

So, in a nutshell, weather forecasting is good science, but wrong predictions do occur because many variables are not yet known and further technological improvement is needed. Wind speed alone is not an accurate way of predicting storm intensity; why this is so is discussed later on.

_________

Imitation science and media:

The problems in science, though serious, are nothing new. And the alternative approaches to understanding and changing the world, including journalism, are much worse. In fact, some of the worst problems in science are the direct result, in my opinion, of the poor quality of science journalism. One of the key reasons that leading scientific journals publish bad papers is that both the authors and the editors are looking for media buzz, and can usually count on the media to oblige. I am outraged by the pseudoscience, antiscience, and plain lack of common sense that I see on television, read in books, magazines and tabloids, and hear on the radio. All of these distort, confuse, and plainly misinform about science, how science is done, and who practices science. Much of the media, especially television, preys on the superstitions and fears of unknowledgeable citizens (who live in a civilization acutely dependent on science and on scientific reasoning), simply to sell their products. Scientists understand the processes of science and its importance to them. One thing is clear to them: they have pathetically little help from the mass media, television in particular. In fact, TV works strongly against achieving a scientifically literate public. Why? To make a buck. Tony Tavares, President of Disney’s Anaheim Sports, was quoted in Time Magazine (August 4, 1997): “Our main goal is to get people to spend their disposable income with properties associated with the company, whether they’re our theme parks, videos, movies or our sports teams. If you’ve got a dollar, we want it.” I recognize that the mass media are controlled by people as ignorant of science as people in general. Hence, it is the responsibility of scientists to try to inform people about this problem and to suggest alternatives by working with the mass media. But the mass media are also the problem, because they don’t know enough about science, and so they enthusiastically embrace junk science, being unwilling to risk losing their audiences with real science they don’t comprehend.

_

We need proper science, proper evidence. So: “Red wine can help prevent breast cancer.” This is a headline from the Daily Telegraph in the UK: “A glass of red wine a day could help prevent breast cancer.” So you go and find this paper, and what you find is that it is a real piece of science. It is a description of the changes in the behavior of one enzyme when you drip a chemical extracted from some red grape skin onto some cancer cells in a dish on a bench in a laboratory somewhere. And that’s a really useful thing to describe in a scientific paper, but on the question of your own personal risk of getting breast cancer if you drink red wine, it tells you absolutely bugger all. So “red wine can help prevent breast cancer” is imitation science created by the media out of a good science paper.

_

The mass media have a good deal of trouble with science, yet their influence is enormous. The mass media assail us daily with both good and bad information. How do they do with science? In general, those responsible in the media are as uninformed about science and its processes as the general public. Indeed, newspaper editors in general have the same understanding of science processes and topics (like whether humans lived with dinosaurs) as the public at large. Screen writers, although they would like to do intelligent stories about science, simply do not have the basic knowledge to do so (Steve Allen, personal communication, 1997). As a result, the mass media serve science very poorly. The table below shows the inferred relative influence on the general public and the scientific content of the various media.

Medium        Influence    Science
Television    ++++++       -----+
Newspapers    ++++         --++
Tabloids      +++          -----
Movies        ++           ----+
Magazines
Internet      ++           -+
Radio         ++           ---+
Books         +            --++

As you can see, the most influential medium is television, which has very little science content.

_

A fundamental misunderstanding everywhere in the world is that material in the mass media presented in a scientific manner is real science. Unfortunately, it seldom is. It is largely pseudoscience, antiscience, superstition, and dogma. This deluge contributes hugely to scientific illiteracy by confusing fact with fiction, scientific theory with belief, and scientists with non-scientists. Why people are fascinated with and will pay good money for pseudoscientific or antiscientific claims is a deep problem, but it involves poor education, personal and mass delusion, indoctrination, hopelessness, fear of other people, apprehension about the world around them, dread of the unfamiliar, and a multitude of other factors (Eve and Harrold, 1991; Miller, 1987; Shermer, 1997). The answer here is education by all means possible–schools, individuals, political bodies, corporations, and the mass media, especially television. Pseudoscience, antiscience and weird beliefs are all around us–in movies, books, TV, radio, newspapers, street corners, pulpits, meetings, school boards, political bodies, and even universities and colleges–and real science is obscure by comparison (the Mars Pathfinder landing in 1997 is a nice exception). Antiscience simply ignores scientific reasoning altogether in making its claims. Pseudoscience makes claims that sound scientific but are based on selected or inadequate evidence, false authority and unsupported beliefs, and it disallows proper tests of the claims. The results can often be humorous and entertaining, but sometimes tragic and costly. For example, billions of dollars are spent each year by Americans on pseudoscientific solutions to health problems alone. Science, on the other hand, uses logic, critical thinking and appropriate evidence, subjects all authority to scrutiny, and allows testing of its claims. Almost anybody can learn these basic ways of science. While scientists tout the “scientific method” as the way we do science, it is mostly a reordering of the actual activities that scientists go through so it makes sense. Unfortunately, the scientific method is taught as the way science is done, and it appears dull and agonizing. This formalization of the scientific thinking process makes people fear science. Few scientists actually work that way. Instead, they get excited, hopeful, interested, intrigued, and puzzled. Their ideas come to them in the shower, on the freeway, while playing baseball with their kids, as well as in the laboratory or library. Science is creative and exciting. It certainly can be just as fun and entertaining as pseudoscience. In most cases, it is more so. And it provides a challenge and reward to get it right. You do not need to formalize your thinking in this way to understand how science is done or to practice it on a personal basis.

_

A typical reporter asked to write an article on astrology thinks he has done a thorough job if he interviews six astrologers and one astronomer. The astronomer says it’s all total bunk; the six astrologers say it’s great stuff and really works, and for $50 they’ll be glad to cast anyone’s horoscope. (No doubt!) To the reporter, and apparently to the editor and readers, this confirms astrology six to one! Yet if the reporter had had the very small degree of sense and intelligence required to realize he should have interviewed seven astronomers (all of whom are presumably knowledgeable about the planets and their interactions, but all of whom are also disinterested in astrology, and therefore able to be both knowledgeable and objective), he would have gotten the correct result: seven informed judgments that astrology is nonsense. Everything in pseudoscience seems to generate something for sale; look for courses in how to remember past lives, how to do remote viewing, how to improve your ESP ability, how to hunt for ghosts, how to become a prophet, how to heal yourself of any disease mentally, how to get the angels on your side, how to… you name it, you got it… but pay up first. And I believe part of that payment goes into journalists’ pockets, directly or through the sale of newspapers or an increase in TRP (television rating points); and so they broadcast imitation science.

_

Bad reporting on Bad Science:

Americans are constantly bombarded by the “science of things that aren’t so,” a phrase coined by Nobel Prize-winning chemist Irving Langmuir. Whether it is studies examining chemicals, sugary drinks or any number of other ordinary things, the number and frequency of news reports about relatively benign activities and substances that appear under alarmist headlines are rapidly increasing. There are two primary causes for this phenomenon. First and foremost is the attitude of the news media. Desperate for higher ratings or more readers in an ever more competitive 24-hour news cycle, most media organizations still cling to the old newspaperman’s credo, “If it bleeds, it leads.” Every editor knows that a headline that elicits panic attracts more readers than one explaining that everything is just fine, so explication too often takes a back seat to sensation. The other cause is found within certain researchers themselves. They promote the results of studies of questionable design as evidence of harm when in fact, such studies do nothing more than, at most, provide a small, suggestive but inconclusive piece in a much larger puzzle. Combine this with the zeal to obtain future research grants and the result is a perverse cycle of poorly conceived research used as a premise for more money to conduct more poorly conceived research. The common denominator here is confusion between research that demonstrates a simple association between two things and evidence that an activity, product or substance actually causes harm. There is a critical difference between association and causation. A common source of misapprehension in the reporting of scientific studies is the concept of “linkage.” For example, a study that finds a link between fire engines and fires might conclude that the former cause the latter. It’s preposterous, of course, but it illustrates at a simplistic level the difficulty that one can experience in interpreting an association between two events. Scientific research is complex. But complexity can also be used as a means of misinforming and misleading consumers, and we’ve seen that repeatedly in ideologically-driven studies of various chemicals, including BPA and the widely used herbicides atrazine and glyphosate. Such studies and the lurid headlines spawned by them do nothing to promote understanding but sow confusion and misinformation about science in general and certain chemicals in particular. There is plenty of pseudo-controversy swirling around BPA, both in consumer and scientific circles. After more than 5,000 studies of the chemical over several decades, none of which has ever shown any human harm from BPA in normal consumer use, it’s easy to make a case that continued research on BPA is a waste of time and increasingly scarce research funding. Yet the almost evangelical fervor with which some scientists and activists attack this compound continues to generate more junk science, more bad reporting and more unwarranted fear among consumers.

_

Why most science news is false:

I report a study by François Gonon et al., “Why Most Biomedical Findings Echoed by Newspapers Turn Out to be False: The Case of Attention Deficit Hyperactivity Disorder”, published in PLoS ONE:

Researchers focused on attention deficit hyperactivity disorder (ADHD). Using the Factiva and PubMed databases, they identified 47 scientific publications on ADHD published in the 1990s and soon echoed by 347 newspaper articles. They selected the ten most echoed publications and collected all their relevant subsequent studies until 2011. They checked whether findings reported in each “top 10” publication were consistent with previous and subsequent observations. They also compared the newspaper coverage of the “top 10” publications to that of their related scientific studies. They found that seven of the “top 10” publications were initial studies and the conclusions in six of them were either refuted or strongly attenuated subsequently. The seventh was not confirmed or refuted, but its main conclusion appears unlikely. Among the three “top 10” that were not initial studies, two were confirmed subsequently and the third was attenuated. The newspaper coverage of the “top 10” publications (223 articles) was much larger than that of the 67 related studies (57 articles). Moreover, only one of the latter newspaper articles reported that the corresponding “top 10” finding had been attenuated. The average impact factor of the scientific journals publishing studies echoed by newspapers (17.1, n = 56) was higher (p < 0.0001) than that of related publications that were not echoed (6.4, n = 56). This will not be a surprise to any honest working scientist, nor to members of the public who have observed the fate of science and technology in the media ecosystem over the years. Gonon et al. focused not only on the role of publication bias and sensationalism at top scientific journals, but also on that of the popular media, who have their own motivations, which often lead to credulous trumpeting of “results” that were never published in the technical or scientific literature at all, much less featured in a high-impact-factor journal.

_

Bad science in the headlines: Who takes responsibility when science is distorted in the mass media?

We have seen an intensification of the debate on how scientific stories with societal relevance are reported in the media. One cause was the Korean stem-cell scandal, in which Hwang Woo-Suk and colleagues claimed to have cloned human embryos. Lambasting the media for their part in distorting or sensationalizing scientific findings is not new, but recent scandals—and some not so recent—highlight the role played by elements of the scientific community itself. Ultimately, each side must take an appropriate share of responsibility for the impact that controversial or dubious research has on the public. When it comes to distributing blame for misleading the public, most scientists are quick to point to the media. In an analysis of the public reporting of science in the UK, published earlier this year by the Social Market Foundation (SMF; London, UK), the press in particular comes under fire for negative sensationalism that is often to the detriment of the public good (SMF, 2006). The report repeatedly cites the media frenzy that began in 1998 when Andrew Wakefield, a UK-based doctor, suggested a link between the administration of the combined measles, mumps and rubella (MMR) vaccine and the development of autism in children. The news reached the media through press releases and a press conference. Relentless negative press coverage over the next five years eventually led to a significant fall in MMR vaccinations in the UK and elsewhere. Many newspapers conveniently ignored expert advice on the safety of the vaccine in favour of attacking the UK government, which defended the vaccine’s safety. “This mistrust of the government and its motives coupled with a sensationalist tendency in some parts of the press … lie at the heart of the problem,” the SMF report states. “Particular difficulties arise when a scientific story becomes a political one and balanced scientific reporting is left behind in the face of a ‘good’ political story.” It seems too easy to blame the press, given that the scientific community might have had little chance to alter the course of this runaway train. But a closer look at the scientific beginnings of this and other media disasters, such as genetically modified (GM) organisms, cloning and stem-cell research, tells a different story.

_

The threat of bad science broadcast by media: A false vaccine-autism link caused great harm to people:

No parent wants to make his or her own child sick. So when Andrew Wakefield’s 1998 study indicated that the combined measles, mumps and rubella vaccine (MMR) could cause children to develop autism, an entire industry developed to prevent the vaccine from being given. Along with scares that Thimerosal — a mercury preservative previously used in childhood vaccines — was a culprit in autism rates, anti-vaccine fury spread throughout the country. Media coverage was widespread. Grass-roots organizations sprouted. School systems revised vaccine regulations. All this for a study of 12 children that has now been proven false. And all occurring in spite of the consistent recommendations of physicians’ groups urging parents to vaccinate their children. The Wakefield study had been repudiated for years with dozens of follow-up studies that never found a link between the vaccine and autism. The comprehensive new report from the editors of the British Medical Journal flatly accused Dr. Wakefield of using fraudulent data to “prove” his theory. Yet health decisions were based on research conducted with this tiny sample, which by research standards is a clearly flawed practice. Regardless of the strength of the findings or their original publication in a prestigious medical journal, there is no way one can control all of the variables in just 12 subjects. The unfortunate result of the fear generated by this study has been that millions of parents avoided the vaccine. Large numbers of children were no longer given the MMR vaccine, and some of these unvaccinated children came down with these previously rare childhood illnesses, which carry the risk of serious complications and, in some cases, death.

_

How findings such as those in Dr. Wakefield’s study are able to enjoy wide public acceptance must be examined so that we can avoid following the next baseless medical trend:

First, the influence of media coverage — along with the appearances of celebrity proponents espousing one scientist’s point of view — should not be underestimated. Actress Jenny McCarthy, a parent of a child with autism, has received significant media attention for her anti-vaccine views and advocacy of unproven methods for “curing” autism. Quack cures and medical advice are bad enough when found in paid ads in the middle of the night. They should not be the content of legitimate news programs.

Second, many people, especially minorities, are understandably suspicious of the scientific community. The abusive and unethical syphilis research conducted in Tuskegee, Ala. from 1932 to 1972 on 400 African-Americans has lingered for decades as a stain on scientific research. The unearthing of numerous other high-profile cases has only increased the fear that legitimate researchers are often unethical.

Finally, the general public lacks scientific knowledge; many people also sense that scientists, including physicians, cling to prevailing scientific dogma. Without the full incorporation of teaching research ethics in graduate and professional schools, and the adoption of routine standards for the responsible conduct of research, the public’s trust is diminished. Misconduct happens rarely, but its impact can be extremely high. From fraudulent stem cell research to the abuse of research subjects, the need for strict standards is evident. In the Wakefield case, it took a journalist, not a research team, to decipher the data and demonstrate the fraud. If the scientific community were more attuned to misconduct, a routine investigation of Dr. Wakefield’s study could have been conducted, and more children would have been spared the serious consequences of these childhood illnesses.

 _

Science by press conference:

In an increasingly competitive environment, scientists look to media coverage, and through it public attention, as an additional way to attract funding. Some also use the media to spread their views against the consensus of the scientific community. Research institutions and universities also use the media to bring themselves to the attention of the public. Issuing press releases and holding press conferences on the publication of a ‘sexy’ paper are commonplace. The science that enters the public consciousness in this way does so without a thorough examination by the scientific community, and has earned the fitting nickname ‘science by press conference’. Famous examples are the claims by Stanley Pons and Martin Fleischmann to have discovered cold fusion in 1989, and Clonaid’s claim to have cloned a human baby in 2002.

_

Science is increasingly ‘sold’ in a competitive market, which is why many high-profile scientific journals want to attract ‘sexy’ research to ensure that their scoop will be covered by the media within hours of publication. Harvesting the pearls of research, journals exert tight control over what can be made public and when; “Journals control when the public learns about findings from taxpayer-supported research by setting dates when the research can be published,” wrote New York Times staff writer Lawrence Altman (2006). “Increasingly, journals and authors’ institutions also send out news releases ahead of time … so that reports from news organizations coincide with a journal’s date of issue.” According to Altman, it is no surprise that scientific findings are distorted by the press: “…often the news release is sent without the full paper, so reports may be based only on the spin created by a journal or an institution.” Media reporting leaves the definitive mark because the general public reads newspapers, not scientific journals. Bad science has a devastating effect on scientific communities and, if it is reported in the media, it can have a devastating effect on the whole of society. Scientists who behave unprofessionally, or use the media to push a premature minority view or fraudulent research, have generally found themselves ex institutio quite rapidly, as have Wakefield, Hwang, Pusztai and countless others: the scientific community has little mercy with its own kind. The same is not necessarily true of the world of journalism: Dacre has not resigned, nor—on the whole—have other editors or correspondents who have distorted a scientific story. Media resignations do sometimes occur, but mainly for legally punishable misconduct such as libel.

_

Many observers of the media–science wars still believe that the public reporting of scientific stories would improve if only more scientifically trained journalists went into the media (SMF, 2006). What this assertion misses is the fact that the media primarily want journalists who can write interesting stories of relevance to the general public. Many of the most respected science journalists have no scientific background. Tim Radford, former science editor of The Guardian and one of the most respected science writers in the business, started his career as a general reporter with the New Zealand Herald at the age of 16, and does not have a university degree. John Noble Wilford of the New York Times, a doyen of science writing in the USA, started as a general assignment journalist with The Wall Street Journal. It was another non-scientifically trained journalist, Brian Deer, who methodically exposed the truth behind the Wakefield case for the Sunday Times and Channel 4 television in the UK, doing what the biomedical research community could not, or did not, do.

_

Arsenic-eating bacteria: how the media broadcast a story and then buried it when the claim was proved false:

Take the case of the infamous arsenic-eating bacteria. In December 2010 no less an organization than NASA claimed to have found bacteria living in an isolated lake in California that appeared to feed on arsenic and may actually have incorporated it into their DNA. While not necessarily taking in all the biological subtleties of such a claim, most people would know that arsenic is lethal to all living things. So news of a creature that not only eats it, but possibly incorporates it into its ultimate fabric, must be big news. The story was reported online through the prestigious journal Science and was picked up by news media all around the world. NASA had found a radically different form of life living right here on planet Earth. Or had it? Almost immediately some microbiologists spotted errors in the methodology and data handling of the original paper. The presence of arsenic was more likely to be a form of contamination of the sample. So they went out and did the decent scientific thing: they tried to replicate the experiments and failed to find the mysterious arsenic-fuelled DNA in the organism now known as GFAJ-1. When these replicate test results were released, they did not get the same level of media coverage as the original erroneous claims. It just wasn’t sexy enough to say that someone had made a mistake. It was the scientific equivalent of burying a correction at the bottom of page 26 of a newspaper.

_

Poor studies are more likely to get picked up by the media if they reflect our prejudices and play on our insecurities. A recent study concluded that men under stress preferred larger women than their less stressed brethren. This touches several societal nerves: sexual attraction, women’s body image, questions of appropriate weight and obesity, men’s sexual selection criteria and how they could be manipulated. Add to this some quasi-scientific explanations based in our evolutionary history and you have a story the media just couldn’t leave alone. More’s the pity, because the study was deeply flawed in several ways and the conclusions it drew went well beyond the limits of the data collected. The study was conducted on a small group of white male university students, some of whom were subjected to a ‘stressful’ simulated job interview while the others were not. The difference in mean size of the preferred women between the two groups was pretty small, and the ranges of sizes preferred by the two groups overlapped to a large degree. Add to this the complexities of human sexual selection from cultural and other influences and there was nothing meaningful that the study could say on the matter. Yet it still made prominent positions in the media across the globe. Similarly, scientists have recently had a strong backlash against the media overlaying anthropomorphic interpretations on natural behaviour in other species. Female Laysan Albatrosses are not lesbians just because they co-operate in nest building, egg sitting and rearing of their young, but that didn’t stop the headlines to the contrary. This says more about our societal angst around issues of homosexuality than anything about the life history of these innocent birds. And this is not just a recent problem with the media. It was recently revealed that the biologist on Robert Falcon Scott’s Antarctic expedition was so shocked at strange sexual practices among Adelie Penguins (including necrophilia, homosexual couplings and pack rapes of immature females) that he wrote his notes in Greek, so that few could read them, and he never published that part of his journals.

_

How the scientific community can contribute to better media coverage of science:

A common complaint from scientists is that the media get their stories horribly wrong. But there is a co-conspirator in this problem within the ranks of the scientists themselves. All too often bad science makes it to press in the popular media, and most of that bad science concerns the sensational results on which the media feed. So how do we spot the sensationally bad science stories, or the sensational misreporting of good science? Believing that one knows who is to blame or who behaves better can result in an embarrassing corrective lesson. Both the media and scientists have misled the public on some important matters. And when it comes to uncovering fraud and freedom of the press, an editor backs an investigative reporter, considering him superior to the mechanisms of the scientific community, whose members sometimes do not have the freedom and protection that they need in order to speak out. Foundations, institutes and media observers might present the failings of journalism and ponder possible remedies, but there are three sure ways for the research community itself to contribute to better media coverage of science:

1. Deal with its own bad science before it gets into the news,

2. Rein back media-hungry journals, and

3. Be more proactive in feeding good science into the news.

 _

This is what one scientist said about getting public support through media:

On the one hand, as scientists, we are ethically bound to the scientific method, in effect promising to tell the truth, the whole truth, and nothing but . . . On the other hand, we are not just scientists, but human beings as well. And like most people we’d like to see the world a better place . . . To do that we need to get some broad-based support, to capture the public’s imagination. That, of course, entails getting loads of media coverage. So we have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we might have . . . Each of us has to decide what the right balance is between being effective and being honest. I hope that means being both.    

_

Fox TV and the Apollo Moon Hoax:

On Thursday, February 15th 2001 (and replayed on March 19), the Fox TV network aired a program called "Conspiracy Theory: Did We Land on the Moon?", hosted by X-Files actor Mitch Pileggi. The program was an hour long, and featured interviews with a series of people who believe that NASA faked the Apollo Moon landings in the 1960s and 1970s. The biggest voice in this is Bill Kaysing, who claims to have all sorts of hoax evidence, including pictures taken by the astronauts, engineering details, discussions of physics and even some testimony by astronauts themselves. The program's conclusion was that the whole thing was faked in the Nevada desert (in Area 51, of course!). According to them, NASA did not have the technical capability of going to the Moon, but pressure due to the Cold War with the Soviet Union forced them to fake it. I do not want to go into the details of the scientific evidence for the Moon landing and the fact that astronauts did indeed walk on the Moon. What I want to emphasize is the role the media plays in distorting good science into imitation science under the pretext of freedom of expression, with the sole intention of raising TRPs and making more money.

_

Media manipulates relative risk reduction to create a scare:

Headline in media: CT scans in childhood can triple the chance of developing brain cancer:

The fact is that relative risk tells you nothing about actual risk:

The size of the initial absolute risk is what’s really important here. If the initial risk is very small, even a huge increase may not make much absolute difference. But for a risk that is quite large already, smaller increases can still have a big impact. The headline finding of a study published in June 2012 was that having several CT scans as a child ‘could make you three times as likely to develop leukaemia or brain cancer as an adult’. This sounds quite worrying (and we could make it sound even scarier still – another way of saying three times as likely is ‘a 200 per cent increased’ chance of cancer). But digging a little deeper into the research, as much of the coverage sensibly did, showed that because the chances of developing these cancers are so small (0.4 per 10,000 children aged 0-9 develop brain tumours and 0.6 per 10,000 children aged 0-9 develop leukaemia), the increased risk would mean one additional case of brain cancer and one of leukaemia for every 10,000 children given the scans. 
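To make the arithmetic explicit (a back-of-envelope check using the baseline risks quoted above):

$$ \text{brain tumours: } 3 \times 0.4 - 0.4 = 0.8 \text{ extra cases per } 10{,}000 \text{ children} $$

$$ \text{leukaemia: } 3 \times 0.6 - 0.6 = 1.2 \text{ extra cases per } 10{,}000 \text{ children} $$

That is roughly one additional case of each per 10,000 children scanned: a 'tripled' relative risk, but a tiny absolute one.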

_

How to read articles about health and healthcare in the media:

If you’ve just read a health-related headline that’s caused you to spit out your morning tea (“Tea causes cancer”), I request you keep calm and carry on. On reading further you’ll often find the headline has left out many important points. The most important rule to remember: “Don’t automatically believe the headline”. It is there to draw you into buying the paper and reading the story. Before spraying your newspaper with tea in the future, you need to interrogate the article to see what it says about the research it is reporting on.

Does the article support its claims with scientific research?

If an article touts a treatment or a lifestyle factor that is supposed to prevent or cause a disease, but doesn’t give any information about the scientific research behind it, or refers to research that has yet to be published, then treat it with caution.

Is the article based on a conference abstract?

Another area for caution: news articles based on conference abstracts. Research presented at conferences is often at a preliminary stage and usually hasn't been scrutinized by experts in the field. Also, conference abstracts rarely provide full details about methods, making it difficult to judge how well the research was conducted. For these reasons, treat articles based on conference abstracts with caution rather than alarm.

Was the research in humans?

Quite often the “miracle cure” in the headline turns out to have only been tested on cells in the laboratory or on animals. These stories are often accompanied by pictures of humans, creating the illusion that the “miracle cure” came from human studies. Studies in cells and animals are crucial first steps and should not be undervalued. However, many drugs that show promising results in cells in laboratories don’t work in animals, and many drugs that show promising results in animals don’t work in humans. If you read a headline about a drug or food “curing” rats, there is a chance it might cure humans in the future, but unfortunately a larger chance that it won’t. So no need to start eating large amounts of the “wonder food” featured in the article.

How many people did the research study include?

In general, the larger a study, the more you can trust its results. Small studies may miss important differences because they lack statistical "power", and small studies are more susceptible to finding things (including things that are wrong) purely by chance. You can visualize this by thinking about tossing a coin. We know that if we toss a coin, the chance of getting a head is the same as that of getting a tail – 50/50. However, if we didn't know this and we tossed a coin four times and got three heads and one tail, we might conclude that getting heads was more likely than tails. But this chance finding would be wrong. If we tossed the coin 500 times – gave the experiment more "power" – we'd be much more likely to get roughly equal numbers of heads and tails, giving us a better idea of the true odds. When it comes to sample sizes, bigger is usually better. So when you see a study conducted in a handful of people, proceed with caution.
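A minimal simulation (a Python sketch of the coin-tossing argument above, not from the original article) makes the point concrete: small samples often look "biased" purely by chance, while large samples rarely do.

```python
import random

def looks_biased(n_tosses, trials=10_000, threshold=0.75):
    """Fraction of experiments in which a fair coin shows >= threshold heads."""
    count = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(n_tosses))
        if heads / n_tosses >= threshold:
            count += 1
    return count / trials

print(looks_biased(4))    # ~0.31: with 4 tosses, 3 or more heads happens ~31% of the time
print(looks_biased(500))  # ~0.0: with 500 tosses, 375 or more heads essentially never happens
```

With only four tosses, a fair coin looks "biased" in nearly a third of experiments; with 500 tosses, practically never. That is what statistical "power" buys.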

Did the study have a control group?

There are many different types of studies, and they are appropriate for answering different types of questions. If the question being asked is about whether a treatment or exposure has an effect or not, then the study needs to have a control group. A control group allows the researchers to compare what happens to people who have the treatment/exposure with what happens to people who don't. If the study doesn't have a control group, then it's difficult to attribute results to the treatment or exposure with any level of certainty. Also, it's important that the control group is as similar to the treated/exposed group as possible. The best way to achieve this is to randomly assign some people to be in the treated/exposed group and some people to be in the control group. This is what happens in a randomized controlled trial (RCT), which is why RCTs are considered the "gold standard" way of testing the effects of treatments and exposures. So when reading about a drug, food or treatment that is supposed to have an effect, you want to look for evidence of a control group, and ideally evidence that the study was an RCT. Without either, retain some healthy skepticism. In fact, most of the studies reported in newspapers are observational studies, which can show associations but cannot by themselves prove cause and effect.
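As an illustration of why randomization matters, here is a minimal Python sketch (the subject labels and group sizes are hypothetical): random assignment balances measured and unmeasured characteristics across the two groups on average.

```python
import random

# Hypothetical cohort of 20 subjects; randomization tends to balance
# both known and unknown characteristics across the two groups.
participants = [f"subject_{i:02d}" for i in range(1, 21)]
random.shuffle(participants)                              # random order
treated, control = participants[:10], participants[10:]  # 50/50 split
print("treated:", treated)
print("control:", control)
```

Any systematic difference in outcomes between the two groups can then be attributed, with known statistical uncertainty, to the treatment rather than to pre-existing differences.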

Did the study actually assess what’s in the headline?

This one is a bit tricky to explain without going into a lot of detail about "proxy outcomes". To avoid doing that, here is the key thought: the research study needs to have examined what is being talked about in the headline and article. (Somewhat alarmingly, this isn't always the case.) For example, you might read a headline that claims "Tomatoes reduce the risk of heart attacks". What you need to look for is evidence that the study actually looked at heart attacks. You might instead see that the study found that tomatoes reduce blood pressure. This means that someone has extrapolated that tomatoes must also impact heart attacks, as high blood pressure is a risk factor for heart attacks. Sometimes these extrapolations will prove to be true, but other times they won't. So if a news story is focusing on a health outcome that was not examined by the research, take it with a grain of salt.

Who paid for and conducted the study?

This is a somewhat cynical point, but one that's worth making. The majority of trials today are funded by manufacturers of the product being tested – be it a drug, vitamin cream or foodstuff. This means they have a vested interest in the results of the trial, which can affect what the researchers find and report in all sorts of conscious and unconscious ways. This does not automatically make all manufacturer-sponsored trials unreliable. Some are very good. But it's worth looking to see who funded the study, to sniff out a potential conflict of interest for yourself.

Should you “shoot the messenger”?

Sometimes journalists take a piece of research and misrepresent it, making claims the scientists themselves never made. Other times the scientists or their institutions over-extrapolate, making claims their research can’t support. These claims are then repeated by the journalists. Given erroneous claims can come from a variety of places, don’t automatically ‘shoot the messenger’ by blaming the journalist.

__________

Entertainment and bad science:

_

The table below shows bad science in various movies:

_

Scientific inaccuracies abound in movies and TV, all for the sake of good entertainment. This is fine. Although such films do inspire students to become scientists, there are also unintended consequences. After watching The Core, it's not likely that someone out there is going to try to get to the Earth's core by building some supposedly indestructible vessel, but the film is likely to leave the idea in impressionable minds that the Earth is something it is not. This is where we need to identify these misconceptions and turn them into learning opportunities. On the other hand, 'Breaking Bad' is a TV series that promotes scientific accuracy in the entertainment industry.

_

Advertisement and bad science:

_

Olay’s Regenerist product claims to be formulated with an exclusive Olay amino-peptide + B3 complex. Regenerist incorporates other proven anti-aging ingredients such as vitamin E, pro-vitamin B5, green tea extract, allantoin and glycerin.

_

Pseudoscience is the shaky foundation of practices, often medically related, that lack a basis in evidence. It's "fake" science dressed up, sometimes quite carefully, to look like the real thing. If you're alive, you've encountered it, whether it was the guy at the mall trying to sell you Power Balance bracelets, the shampoo commercial promising you that "amino acids" will make your hair shiny, or the peddlers of "natural remedies" or fad diet plans, who, in a classic expansion of a basic tenet of advertising, make you think you have a problem so they can sell you something to solve it. Pseudosciences are usually pretty easily identified by their emphasis on confirmation over refutation, on physically impossible claims, and on terms charged with emotion or false sciencey-ness. Sometimes, what peddlers of pseudoscience say may have a kernel of real truth that makes it seem plausible. But even that kernel is typically at most a half-truth, and often it's the other half they're leaving out that makes what they're selling pointless and ineffectual. In the UK, a television advertisement for Olay Regenerist Face Cream was banned for using bogus science. Claims in the ad that "pentapeptides" could reduce the appearance of visible lines and could be used as a substitute for cosmetic surgery were deemed misleading. In 2007, a consumer organization in the UK found skin care companies were "blinding consumers with science" using terms like nanoparticles, pentapeptides, lipopeptides and hyaluronic acid. These are all legitimate scientific words, and whilst some have been associated with skin repair, others are just there to sound sciencey. For example, hyaluronic acid is a component of the extracellular matrix, the "scaffolding" which supports the cells, and has been used to assist in the repair of burns and wound healing. Hyaluronic acid is indeed an acid, but there is no evidence that it plumps the skin when applied topically, only when it is injected, à la Botox. "Nanoparticles", on the other hand, included in some products, may actually be harmful and should be avoided, dermatologists suggest. Adding scientific jargon to a tiny bottle of cream is just another way manufacturers can get away with charging you an arm and a leg for a tub of sorbolene and water with some nice perfume added.

_

99.44% pure: It floats! This description of Ivory Soap is a classic example of junk science from the 19th century. Not only is the term "pure" meaningless when applied to an undefined mixture such as hand soap, but the implication that its ability to float is evidence of this purity is deceptive. The low density is achieved by beating air bubbles into it, actually reducing the "purity" of the product and, in a sense, cheating the consumer.

_________

The importance of questioning academic research — especially when it has a corporate tie:

If you want to influence a mass audience, you can try to do what the Pentagon does and subtly bake slanted information into entertainment products such as movies and television shows. If, on the other hand, you are looking to influence a slightly higher-brow audience, you can embed disinformation in newspapers' news and opinion pages. And if you are looking to brainwash politicians, think tanks, columnists and the rest of the political elite in order to rig an esoteric debate over public policy, you can attempt to shroud your agitprop in the veneer of science. While these are all diabolically effective methods of manipulating political discourse, the latter, which involves corporate funding of academic research, is the most insidious of all. At the national level, media organizations frothed with news about Stanford University researchers supposedly determining that organic food is no more healthy than conventionally produced food. In the rush to generate audience-grabbing headlines, most of these news outlets simply regurgitated the Stanford press release, which deliberately stressed that researchers did not find strong evidence that organic foods are more nutritious or carry fewer health risks than conventional alternatives. The word "deliberately" is important here: as watchdog groups soon noted, Stanford is a recipient of corporate largesse from agribusinesses such as Cargill, which have an obvious vested financial interest in denigrating organics. I simply raise questions about whether corporate money improperly tilts that study and those criticisms, and whether, at minimum, the public has a right to know whether corporate money is flowing to an allegedly independent academic voice. This gets to why scrutinizing science and its funding sources is so important: it forces more questions and fosters a more vigorous public debate. Of course, questions, when deployed malevolently, can also be used to momentarily obscure the truth. As climate change deniers show time and time again, tiny, inane queries or conspiracy theories, no matter how disconnected from data, can be used to undermine decades of scientific research and well-established facts. Whether they concern food policy, telecom policy, climate policy or any other contentious issue at the national or local level, questions, even uncomfortable ones about paymasters, are good and necessary. They prevent us from submitting to corporate subterfuge and do not allow controversial findings to become assumed fact simply because they come from a venerable source.

_________

Science, politics and policy making: 

Does bad science make bad laws? The BPA hoax:

Let me discuss the plastics additive bisphenol A (BPA), a chemical used to harden plastic that has been in wide consumer use for more than half a century. It is used to improve the safety and reliability of everything from DVDs and consumer electronics to sports safety equipment and shatterproof bottles. BPA is among the most tested chemicals in history, and not a single study has ever shown any harm to humans under normal consumer use and exposure. Yet junk scientists have generated enough false or misleading data to prompt lawmakers and regulators to propose and pass bad laws based on this bad science. This issue has been raised in Europe as well, and scientists on the European Food Safety Authority (EFSA) evaluated more than 800 studies on the topic, including one that was the basis for a partial BPA ban in Denmark. But after reviewing the science, the EFSA concluded that there is no evidence of harm from BPA and no evidence of neurobehavioral effects, even for infants and small children. Such studies are normally conducted on laboratory rats, including a large independent study by the U.S. Environmental Protection Agency. Dr. Earl Gray at the EPA laboratory in North Carolina applied his 30 years of experience to the question, and in doing so he and his colleagues failed to find any discernible adverse effects in the rats or their offspring, despite claims that BPA disrupts the endocrine system in humans. Rats in these studies were unaffected even when they were fed doses several thousand times higher than the maximum exposure that could be experienced by humans. The methodology in such experiments has been around a very long time and is well established. Opponents of such studies — all of which show no demonstrable effect — usually fall back on claiming that the rats used were "insensitive" to BPA. That is rubbish. The bogus BPA fears have also been debunked by Professor Richard Sharpe, a senior scientist at the Center for Reproductive Biology in Edinburgh, Scotland. He published an opinion dismissing the oft-heard propagandistic claims that "the science is clear and the findings are not just scary, but horrific", and that feeding an infant from a shatterproof baby bottle is like giving a baby a birth control pill. "This statement," wrote Prof. Sharpe, "is not only wrong, it is a complete misrepresentation of what the scientific facts show." Addressing the difference between the female sex hormone ethinyl oestradiol and BPA, Sharpe wrote, "You need at least 10,000 times as much bisphenol A as you do ethinyl oestradiol to have a similar oestrogenic potency." "There is accurate data on how much bisphenol A we ingest every day and it is at least 50,000 times less than the level needed to make this the equivalent of taking a contraceptive pill," wrote Sharpe. "In fact, it is less still, because not much of the bisphenol A we ingest ever gets into the blood to cause any effect in the body."

_

Matthias Rath, AIDS, South Africa, multivitamins and politics:

The researchers (from the Harvard School of Public Health) enrolled 1,078 HIV-positive pregnant women and randomly assigned them to receive either a vitamin supplement or placebo. Notice once again, if you will, that this is another large, well-conducted, publicly funded trial of vitamins, conducted by mainstream scientists, contrary to the claims of nutritionists that such studies do not exist. The women were followed up for several years, and at the end of the study, 25 per cent of those on vitamins were severely ill or dead, compared with 31 per cent of those on placebo. There was also a statistically significant benefit in CD4 cell count (a measure of HIV activity) and viral load. These results were in no sense dramatic – and they cannot be compared to the demonstrable life-saving benefits of anti-retrovirals – but they did show that improved diet, or cheap generic vitamin pills, could represent a simple and relatively inexpensive way to marginally delay the need to start HIV medication in some patients. In the hands of Rath, this study became evidence that vitamin pills are superior to medication in the treatment of HIV/AIDS, that anti-retroviral therapies 'severely damage all cells in the body including white blood cells', and worse, that they were 'thereby not improving but rather worsening immune deficiencies and expanding the AIDS epidemic'. 'The answer to the AIDS epidemic is here,' he proclaimed. Anti-retroviral drugs were poisonous, and a conspiracy to kill patients and make money. 'Stop AIDS Genocide by the Drugs Cartel' said one headline. 'Why should South Africans continue to be poisoned with AZT? There is a natural answer to AIDS.' The answer came in the form of vitamin pills. 'Multivitamin treatment is more effective than any toxic AIDS drug.' 'Multivitamins cut the risk of developing AIDS in half.' Tragically, Matthias Rath had taken these ideas to exactly the right place. Thabo Mbeki, the President of South Africa at the time, was well known as an 'AIDS dissident', and to international horror, while people died at the rate of one every two minutes in his country, he gave credence and support to the claims of a small band of campaigners who variously claimed that AIDS does not exist, that it is not caused by HIV, that anti-retroviral medication does more harm than good, and so on. President Mbeki sent a letter to world leaders comparing the struggle of the 'AIDS dissidents' to the struggle against apartheid. The researchers from the Harvard School of Public Health were so horrified that they put together a press release setting out their support for medication, and stating starkly, with unambiguous clarity, that Matthias Rath had misrepresented their findings. Media regulators failed to act. To outsiders the story is baffling and terrifying. The United Nations has condemned Rath's adverts as 'wrong and misleading'. 'This guy is killing people by luring them with unrecognized treatment without any scientific evidence,' said Eric Goemaere, head of Médecins sans Frontières SA, a man who pioneered anti-retroviral therapy in South Africa. Rath sued him.

_

In 2000, the now infamous International AIDS Conference took place in Durban. Mbeki’s presidential advisory panel beforehand was packed with ‘AIDS dissidents’, including Peter Duesberg and David Rasnick. On the first day, Rasnick suggested that all HIV testing should be banned on principle, and that South Africa should stop screening supplies of blood for HIV. ‘If I had the power to outlaw the HIV antibody test,’ he said, ‘I would do it across the board.’ At various times during the peak of the AIDS epidemic in South Africa their government argued that HIV is not the cause of AIDS, and that anti-retroviral drugs are not useful for patients. They refused to roll out proper treatment programs, they refused to accept free donations of drugs, and they refused to accept grant money from the Global Fund to buy drugs. One study estimates that if the South African national government had used anti-retroviral drugs for prevention and treatment at the same rate as the Western Cape province (which defied national policy on the issue), around 171,000 new HIV infections and 343,000 deaths could have been prevented between 1999 and 2007. Another study estimates that between 2000 and 2005 there were 330,000 unnecessary deaths, 2.2 million person years lost, and 35,000 babies unnecessarily born with HIV because of the failure to implement a cheap and simple mother-to-child-transmission prevention program. Between one and three doses of an ARV drug can reduce transmission dramatically. The cost is negligible. It was not available.

_

The idea that science can be manipulated in the service of a greater good is deeply dangerous to society and rational public policy. Science is not Democratic or Republican. Scientific integrity, logic, reason, and the scientific method are core to the strength of our world. We may disagree among ourselves about matters of opinion and policy, but we (and our elected representatives) must not misuse, hide, or misrepresent science and fact in service of our political wars. But there are some important distinctions that should be made. First, those who manipulate, misrepresent, or misuse the science of climate change or evolution do so in fields where a vast amount of compelling and conclusive science is available, and where the degrees of freedom for disagreement are small — far smaller than the disagreements that are pushed by deniers and creationists. If you argue that the climate isn't changing due to human activities, or that the changes aren't going to have costly consequences, or that smoking doesn't cause cancer, or that the Earth is 6,000 years old, you can do so only by ignoring or misrepresenting a massive and persuasive body of science. The science is more complicated, and the room for legitimate disagreement far larger, in areas such as GMOs, nuclear power, and hydraulic fracturing (fracking). As a result, there is less accord among scientists or environmentalists or policymakers about appropriate policy in these areas. Sometimes this is the result of incomplete science or gaps in our knowledge: we often simply don't know enough. Proponents of fracking, for example, argue that there is no evidence of adverse impacts on groundwater quality. While this isn't entirely true, the greater problem is that we just haven't looked very hard. And when we do look, we find impacts. Sometimes this is the result of unavoidable uncertainties in the science. While there is some degree of uncertainty (in the sense of "a range of possible outcomes") in just about all science, that degree varies from field to field. And not everything is equally uncertain — beware those who point to natural uncertainties to discredit entire fields. Sometimes this is the result of subjective perceptions or ethical judgments about how to interpret or weight the science in the context of broader social values. When this is the case, improvements in science and understanding may have no effect whatsoever on perceptions and political positions.

_

Take GMOs. The debate over GMOs suffers from all three factors: more and better science is needed; there are significant uncertainties in what we think we already know; and some of the positions held by both supporters and opponents of GMOs are actually based on issues completely unrelated to the science. The arguments being highlighted by anti-GMO activists focus on the risk to public health from eating foods containing GMOs. There is little or weak scientific evidence for this risk and some campaigners in America are indeed exaggerating or misrepresenting it in order to sway voters.  But absence of evidence of a health risk is not the same as evidence of absence. Reading the literature, I can say that the evidence for health problems from eating GMOs is extremely limited and unconfirmed, but that is not the same as saying we know them to be safe. Much more, and better, scientific research is needed on these risks. Perhaps more importantly, there are other complicated and potentially serious risks and objections to GMOs — not all of them purely scientific — including the risk of gene pollution, misuse of agricultural chemicals, financial and economic questions about market dominance, concerns among farmers about monopolistic and predatory practices of a few companies, and interference in farming practices. These risks deserve more research and analysis, and they are certainly legitimate grounds for disagreement among environmentalists and others, or for arguing that some precaution and transparency in labeling are justified.

_

Nuclear power is another example of an issue where there is considerable hyperbole, but also problems sufficiently complex and nuanced to permit disagreement, even using good science. The "environmental" and scientific communities are not monolithic. Some "environmentalists" have long supported nuclear power because they give greater weight to some of its advantages compared to either its disadvantages or its alternatives. Do you believe that climate change and greenhouse gas emissions are a greater threat than the health risks of the nuclear fuel cycle, and that renewables will not fill the gap quickly enough? How do you balance voluntary versus non-voluntary risks? Or high-probability/low-consequence risks against low-probability/high-consequence events? Can the risks of nuclear proliferation be eliminated in all countries with "non-military" nuclear programs? These are complex issues only partly informed by "science", and therefore strong and principled opposition to nuclear power is not necessarily "anti-scientific."

_

Anti-vaccine quackery, anti-GMO pseudoscience, and climate change denialism: Is there a connection?

What drives anti-science views like anti-vaccinationism, opposition to genetically modified organisms (GMOs), and anthropogenic global warming (AGW) denialism? A new study in PLoS ONE by University of Bristol psychologist Stephan Lewandowsky and his colleagues concludes that it is not so much politics as belief in various conspiracy theories that correlates most strongly with anti-vaccine views and various other anti-science views. Basically, Lewandowsky carried out a survey in which he attempted to examine correlations between conspiratorial thinking on the one hand and three major forms of antiscientific belief systems on the other: anti-vaccinationism, anti-GMO views, and AGW denialism. The three potential predictors of these anti-science views that they chose to examine were endorsement of the free market, conservatism-liberalism, and conspiracist ideation. The new study has some fascinating implications for the longstanding battle over who is worse when it comes to distorting science: the left, or the right. Addressing this issue was a key motivation behind the research, and the basic upshot is that left-wing science denial was nowhere to be found — at least not in the sense that left-wingers reject established science more frequently than right-wingers on issues like GMOs or vaccines. "I chose GM foods and vaccinations based on the intuition in the media that this is a left wing thing," Lewandowsky explains. "And as it turns out, I didn't find a lot of evidence for that." When it comes to GM foods, Lewandowsky found no association between left-right political orientation and distrust of these foods' safety in his American sample. When it comes to vaccines, meanwhile, the study found that two separate political factors seemed to be involved in vaccine resistance, leading to a complex stew. "There's some evidence that progressives are rejecting vaccinations, but equally there is an association between libertarianism and the rejection of vaccinations," Lewandowsky explains. Lefties presumably do it because they're anti-corporate, and Big Pharma is involved in the vaccine business; libertarians presumably do it because they're anti-government, and the anti-vaccine movement has long levied dubious charges that the government (the CDC in particular) has been hiding the truth on this matter.

___________

The biggest challenges of the 21st century come from HIV & tobacco, and imitation science helps these evils:

How bad science spreads AIDS and kills:

Why the “circumcision solution” to the AIDS epidemic in Africa will increase transmission of HIV:

A handful of circumcision advocates have recently begun haranguing the global health community to adopt widespread foreskin-removal as a way to fight AIDS. Their recommendations follow the publication of three randomized controlled trials (RCTs) conducted in Africa between 2005 and 2007. These studies have generated a lot of media attention. In part this is because they supposedly show that circumcision reduces HIV transmission by a whopping 60%, a figure that wins the prize for “most misleading possible statistic” as we’ll see in a minute. Yet as one editorial concluded: “The proven efficacy of MC [male circumcision] and its high cost-effectiveness in the face of a persistent heterosexual HIV epidemic argues overwhelmingly for its immediate and rapid adoption.” The “randomized controlled clinical trials” upon which these recommendations are based represent bad science at its most dangerous: we are talking about poorly conducted experiments with dubious results presented in an outrageously misleading fashion. These data are then harnessed to support public health recommendations on a massive scale whose implementation would almost certainly have the opposite of the claimed effect, with fatal consequences. As Gregory Boyle and George Hill explain in their exhaustive analysis of the RCTs: While the “gold standard” for medical trials is the randomized, double-blind, placebo-controlled trial, the African trials suffered [a number of serious problems] including problematic randomization and selection bias, inadequate blinding, lack of placebo-control (male circumcision could not be concealed), inadequate equipoise, experimenter bias, attrition (673 drop-outs in female-to-male trials), not investigating male circumcision as a vector for HIV transmission, not investigating non-sexual HIV transmission, as well as lead-time bias, supportive bias (circumcised men received additional counseling sessions), participant expectation bias, and time-out discrepancy (restraint from sexual activity only by circumcised men).  

_

Beginning in 2008, 13 countries in southern and eastern Africa at the heart of the HIV/AIDS epidemic have been on a mission to circumcise 80 percent of their men by 2015, in an effort to cut the rate of sexual transmission of the disease in half from 2011 levels. The estimated price tag for all 13 countries (Botswana, Kenya, Lesotho, Malawi, Mozambique, Namibia, Rwanda, South Africa, Swaziland, Tanzania, Uganda, Zambia and Zimbabwe) to reach the 80 percent male circumcision rate by 2015 would be somewhere on the order of $1.5 billion, the authors of one of the papers suggest. To keep that saturation constant for another 10 years would cost a further $500 million. These 20.3 million circumcisions, however, could prevent some 3.4 million new HIV infections in both men and women, according to the new findings. From 2016 to 2025, after accounting for the initial expenditures, the programs would save some $16.5 billion. So an attempt was made to show that mass male circumcision to curb the HIV epidemic is also cost-effective.
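A back-of-envelope check of these figures (my arithmetic, not taken from the cited papers):

$$ \frac{\$1.5\ \text{billion} + \$0.5\ \text{billion}}{20.3\ \text{million circumcisions}} \approx \$99\ \text{per circumcision}, \qquad \frac{\$2\ \text{billion}}{3.4\ \text{million infections}} \approx \$590\ \text{per infection averted} $$

Note that these are upfront costs only; the claimed $16.5 billion in downstream savings depends entirely on the projected infections actually being averted.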

_

The basic fault in the RCTs:

The basic fault in the RCTs is this: the circumcised men knew they were in the treatment group in the first place, had less sex over the duration of the study (because they had bandaged, wounded penises for much of it), and had safer sex when they did have it (because they received free condoms and special counseling from the doctors), thereby reducing their overall exposure to HIV compared with the control group by a wide margin.

_

What does the frequently cited “60% reduction” in HIV infections actually mean?

Across all three female-to-male trials, of the 5,411 men subjected to male circumcision, 64 (1.18%) became HIV-positive. Among the 5,497 controls, 137 (2.49%) became HIV-positive, so the absolute decrease in HIV infection was only 1.31%. That’s right:  60% is the relative reduction in infection rates, comparing two vanishingly small percentages: a clever bit of arithmetic that generates a big-seeming number, yet one which wildly misrepresents the results of the study. The absolute decrease in HIV infection between the treatment and control groups in these experiments was a mere 1.31%, which can hardly be considered clinically significant, especially given the numerous confounds that the studies failed to rule out.
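Spelling out the arithmetic from the figures above (my calculation; the widely quoted 60% comes from the trials' own time-to-event analyses, but the raw proportions tell the same story):

$$ \mathrm{ARR} = 2.49\% - 1.18\% = 1.31\% $$

$$ \mathrm{RRR} = \frac{2.49 - 1.18}{2.49} \approx 53\% $$

A relative reduction of 50-60 per cent can thus coexist with an absolute reduction of barely more than one percentage point, because both infection rates are tiny to begin with.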

_

80% of adult American males are circumcised, and have been for decades, but this hasn't stopped HIV/AIDS here. U.S. cemeteries are full of circumcised men who died from AIDS. Did circumcision help any of the nearly 1 million mostly circumcised Americans who have died of AIDS? Why is it that New Zealand stopped routine circumcision, yet has one of the lowest HIV infection rates in the world? Why does the US continue the practice, and have one of the highest? The French and Danes don't circumcise, and their infection rates are one-tenth that of the US. So the logic that male circumcision leads to reduced HIV transmission is flawed.

_

The Kampala Monitor reported men as saying, “I have heard that if you get circumcised, you cannot catch HIV/AIDS. I don’t have to use a condom.” Commenting on this problem, a Brazilian Health Ministry official stated: “The WHO [World Health Organization] and UN HIV/AIDS program … gives a message of false protection because men might think that being circumcised means that they can have sex without condoms without any risk, which is untrue.” Circumcision has been promoted as a natural condom, and African men have reported having undergone circumcision in order not to have to continually use condoms. Such a message has been adopted by public health researchers. A recent South African study assessing determinants of demand for circumcision listed “It means that men don’t have to use a condom” as a circumcision advantage in the materials they presented to the men they surveyed.

_

How rational is it to tell men that they must be circumcised to prevent HIV, but that after circumcision they still need to use a condom to be protected from sexually transmitted HIV? Condoms provide near-complete protection, so why would additional protection be needed? It is not hard to see that circumcision is either inadequate (otherwise there would be no need for the continued use of condoms) or redundant (as condoms provide nearly complete protection).

_

The studies we’ve looked at, claiming to show a benefit of circumcision in reducing transmission of HIV, are paragons of bad design and poor execution; and any real-world roll-out of their procedures would be very difficult to achieve safely and effectively. The likeliest outcome is that HIV infections would actually increase—both through the circumcision surgeries themselves performed in unsanitary conditions, and through the mechanism of risk compensation and other complicating factors of real life. The “circumcision solution” is no solution at all. It is a waste of resources and a potentially fatal threat to public health. If circumcision results in lower condom use, the number of HIV infections will increase.

_

What should we conclude?

Green et al. get it right: "Before circumcising millions of men in regions with high prevalence of HIV infection, it is important to consider alternatives. A comparison of male circumcision to condom use concluded that supplying free condoms is 95 times more cost effective." And not only more cost-effective, but also more effective, period, in slowing the spread of HIV. Condoms are cheap, easy to distribute, do not require the surgical removal of healthy genital tissue, and, yes, are much, much, much, much more effective at preventing infections. Comparison shows that condoms produce at least an 80% reduction in HIV infection, while circumcision produced a clinically insignificant absolute reduction, according to the most optimistic presentation of data from three deeply flawed studies. There is no contest.

_____

Tobacco industry manipulation of research:

The focus here is on the strategies used by the tobacco industry to deny, downplay, distort and dismiss the growing evidence that, like active smoking, environmental tobacco smoke (ETS) causes lung cancer and other effects in non-smokers. It does not address the history of scientific knowledge about tobacco and how it was used or not used to reduce lung cancer and other harmful effects of tobacco smoke.

The primary motivation of the tobacco industry has been to generate controversy about the health risks of its products. The industry has used several strategies including:

1. Funding and publishing research that supports its position;

2. Suppressing and criticizing research that does not support its position;

3. Changing the standards for scientific research;

4. Disseminating interest group data or interpretation of risks via the lay (non‑academic) press and directly to policymakers.

Funding research serves multiple purposes for the tobacco industry. Research directly related to tobacco has been used to refute scientific findings suggesting that the product is harmful and to sustain controversy about adverse effects. Tobacco industry-supported research has been used to prepare the industry for litigation or legislative challenges. The industry may also have funded research not directly related to tobacco in order to generate good publicity, enhance industry credibility and distract from tobacco products as a health problem.

_

Bad science in concluding that passive smoking is not harmful to health:

In the sample of 106 review articles, the only factor associated with the conclusion that passive smoking is not harmful was whether the author of the review article was affiliated with the tobacco industry (Barnes and Bero, 1998). As shown in the table below, review articles concluding that passive smoking is not harmful were about 90 times more likely to be funded by the tobacco industry than those concluding that second-hand smoke is harmful. Methodological quality, peer-review status, outcomes studied in the reviews, and year of publication were not associated with the conclusions of the articles. Thus, sponsorship of review articles by the tobacco industry appears to influence the conclusions of these articles, independent of methodological quality.

_

____________

_____________

Before I end the discussion on 'Imitation Science', let me discuss the reverse, i.e. how good science was labeled as imitation science by Indian leaders, Indian media and Indian scientists/doctors, which might have inadvertently caused preventable deaths, or led to claims of saving lives when such claims were not warranted.

_

Mathematical formula of Pi:

The above mathematical formula was discovered by me in the year 1980. Indian leaders and media were aware of this discovery at that time. They consulted Indian scientists of the time, who opined that it is a theoretical formula which cannot be of any utility in practical science (informally, useless). However, I used the same formula in my theory of duality of existence to delineate the relationship between straightness and curvedness. This theory of duality of existence is now accepted by the majority of scientists all over the world. My formula of Pi was labeled as imitation science by Indian leaders, Indian media and Indian scientists, but now the world recognizes it as good science.

_

Scientific treatment of Dengue fever:

I discovered in 2010 that intravenous infusion of normal saline to patients with dengue fever, at the right time and in the right dose, can prevent and/or treat dengue shock syndrome and save lives. It is published on my website as well as my Facebook page. I have also stated that hundreds of dengue patients have died in India due to undue emphasis on low platelet count and incorrect administration of intravenous infusions. Indian leaders and media stated that I was making exaggerated claims, and that the only advantage of normal saline is that it is cheap compared to other intravenous fluids (colloid or crystalloid). Neither the Indian medical council nor the Indian medical association took cognizance of my discovery, and even today dengue patients are dying in India due to delayed diagnosis and incorrect treatment. Even today, unnecessary platelet transfusions are given to dengue patients who have no bleeding and whose platelet count is more than 20,000 per microliter of blood, in the hope of saving lives. Even today, the emphasis is on platelet count rather than on maintenance of intravascular volume.

 _

Strength (intensity) of cyclone:

How do you judge the strength or intensity of a cyclone?

Traditional meteorological science says that wind speed determines the intensity of a cyclone and the extent of damage it causes. The higher the wind speed, the greater the damage. Media constantly broadcast wind speed to warn people.

My theory regarding strength of cyclone: [from my article on ‘The Cyclone’ published on my website in May 2011]

Let me start with an example. The earth spins on its axis with a period of rotation of about 24 hours. That means any point on earth, except the two poles, rotates around the axis of rotation in 24 hours. Now, if you are on the equator, you will rotate at a speed of about 1,700 km/h, but as you shift to higher latitudes, the speed of rotation will decrease. Nonetheless, the period of rotation will remain the same, i.e. 24 hours. In the same way, an average tropical cyclone is a disc about 10 km high and 600 km wide. This disc spins on its axis, which passes through the center of the storm (the center of the eye). The period of rotation is the time taken by any point on this disc (except its axis) to rotate through a full circle, and it remains the same for all points on the disc. That means if a wind A is 50 km away from the axis and another wind B is 100 km away from the axis, the period of rotation for both winds will be the same. However, wind B will move faster than wind A, as wind B is further away from the axis. The strength of a tropical cyclone is inversely proportional to the period of rotation: the shorter the period, the faster the rotation and the more energy required. Also, the larger the disc, the greater the energy required to produce the rotation of the cyclone. So my formula for cyclone strength or intensity is as follows.

_
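(The formula itself appears as an image in the original post and cannot be reproduced here. From the proportionalities stated in the surrounding text, a minimal reconstruction, assuming no additional constants or exponents, would be:)

$$ I \propto \frac{D}{T} $$

with $D$ the diameter of the cyclone in kilometers and $T$ the period of rotation in hours; the original image may include a different constant or exponent.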

where the diameter of the cyclone is in kilometers, the period of rotation is in hours, and intensity means the strength of the cyclone.

_

This is my formula for the intensity of a cyclone; it does not consider wind speed, but rather the size of the cyclone and the period of rotation. The larger the cyclone, the greater the intensity. The longer the period of rotation, the lower the intensity. Many times the largest cyclone is not the strongest, if its period of rotation is long. On the other hand, a medium-sized cyclone can be very intense if its period of rotation is short. A tornado can be deadly because, even though its size is small, its period of rotation is very short (it rotates rapidly). The size of a cyclone can be determined from satellite pictures, and the period of rotation is determined by Doppler radar, by measuring the period of rotation of the eye of the cyclone. If a modest cyclone is strengthening, either its period of rotation will shorten and/or its size will increase, which can help forecast the intensity of a cyclone.
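The geometry behind the rotating-disc argument above is simply v = 2πr/T: at a fixed period of rotation, speed grows in proportion to distance from the axis. A minimal Python sketch (the 6-hour period is a hypothetical value chosen purely for illustration):

```python
import math

def tangential_speed_kmh(radius_km: float, period_h: float) -> float:
    """Speed of a point moving in a circle of radius_km with rotation period period_h."""
    return 2 * math.pi * radius_km / period_h

T = 6.0  # hypothetical period of rotation of the cyclone disc, in hours
for r_km in (50, 100):  # winds A and B from the example above
    print(f"wind {r_km} km from the axis: {tangential_speed_kmh(r_km, T):.0f} km/h")
# Same period, twice the radius -> twice the speed, exactly as the text argues.
```

The same relation reproduces the earlier Earth example: a point on the equator (radius about 6,371 km, period 24 hours) moves at about 1,670 km/h, consistent with the "about 1,700 km/h" quoted above.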

_

A classic example of the fallibility of wind speed as a measure of cyclone intensity: Cyclone Phailin:

Cyclone Phailin made landfall on 11 October 2013 near Gopalpur on the Odisha coast of India, at around 2130 IST (1600 UTC). It was labeled a Very Severe Cyclonic Storm by the India Meteorological Department (IMD). It recorded a highest wind speed of 205 km/h (125 mph) 3-minute sustained and 260 km/h (160 mph) 1-minute sustained. As the storm moved inland, wind speeds picked up from 100 km/h (62 mph) to 200 km/h (120 mph) within 30 minutes. Brahmapur, the closest city to the point of landfall, suffered devastation triggered by gale winds, with fallen trees, uprooted electric poles and broken walls in various places of the city. Officials from Odisha's state government said that around 12 million people might be affected. As part of the preparations, 600 buildings were identified as cyclone shelters, and people were evacuated from areas near the coast, including the Ganjam, Puri, Khordha and Jagatsinghapur districts of Odisha. The cyclone prompted India's biggest evacuation in 23 years, with more than 900,000 people moved from the coastline in Odisha and Andhra Pradesh to safer places.

_

However, there were very few reports of damage to property or life due to cyclone Phailin. As of 14th October 2013, 36 people had been reported dead in Odisha. In 1999, a similar storm, the Odisha cyclone, devastated almost the same region of India, leading to over 10,000 casualties. So the sense of relief is strong in the state of Odisha, where Phailin made landfall. That cyclone of 1999, the strongest ever recorded in the Bay of Bengal, carried winds of 155 mph at landfall. Phailin arrived with winds of 140 mph. Can a difference in landfall wind speed between 155 mph and 140 mph reduce deaths from 10,000 to 36?

_

There are several factors which, together, help explain why disastrous consequences from Phailin were avoided.

1) Effective storm warnings: The India Meteorological Department, for several days, provided credible information about Phailin, which helped motivate the preparation and response effort. The Indian central government claimed that its efforts saved the lives of thousands of people, as it had given funds to upgrade technology at the IMD to predict the cyclone accurately in timing, location and intensity.

2) Evacuations: India conducted its largest storm evacuation ever, relocating more than 900,000 people from the coast to shelters in schools and government offices. The state government claimed that its evacuation efforts saved lives.

3) Location of Phailin's landfall and its geography: Phailin washed ashore near Brahmapur, about 100 miles farther south than the 1999 Odisha cyclone. In this region the continental shelf is steeper, meaning there was less low-lying terrain vulnerable to flooding from storm surge (the wall of water pushed ashore by the storm's winds).

4) The storm substantially weakened prior to and during landfall: At landfall, Phailin's maximum sustained winds were around 125-140 mph, whereas they may have been 160 mph or even higher in the preceding 24 hours. The storm surge peaked at around 13 feet, not the 20+ feet that was feared.

5) The storm's intensity may have been overestimated: While the Joint Typhoon Warning Center and other U.S. forecasters estimated that the storm's peak intensity reached category 5 levels, the India Meteorological Department did not. While the IMD predicted a serious storm (and its predictions motivated the massive preparation efforts), its forecasts were not as dire as some others. Assessing the intensity of a tropical cyclone in the Indian Ocean (and the Bay of Bengal) differs from doing so in other ocean basins, and the regional expertise of the IMD may have proven superior.

_

So what is the real reason for such a low death toll in cyclone Phailin of 2013, as compared to the Odisha cyclone of 1999 in the same region?

Not to undervalue the evacuation efforts, not to belittle the accuracy of the IMD, and not to incense credit-taking politicians, I humbly state that the intensity of the cyclone was predicted using bad science. If the IMD had used my formula, the intensity of the cyclone would have been better predicted. The strength (intensity) of a tropical cyclone is directly proportional to the size of the cyclone and inversely proportional to its period of rotation. Undue emphasis on wind speed gives rise to overestimation or underestimation of the true intensity of a cyclone. My formula of storm intensity is good science, but it was labeled as imitation science in India, resulting in unwarranted claims of having saved the lives of thousands of people.

_____________

The moral of the story:

1. Science is defined as a careful, disciplined, logical search for knowledge about all aspects of the universe, obtained by systematic study of the structure and behavior of the universe based on facts learned through observation, questioning, explanation and experimentation, and always subject to correction and improvement upon discovery of better evidence. The primary goal of science is to achieve a more unified and more complete understanding of the natural world and the universe.

_

2. A hypothesis is a proposed explanation for a phenomenon/observation, which ought to be tested by experiments, and the experiments ought to confirm its predictions. A theory involves a large number of hypotheses which have undergone extensive testing and is generally accepted as the accurate explanation behind a much broader set of phenomena. The theory of gravity proposed by Newton explains why we see the sun move across the sky, the phases of the Moon, the phases of Venus, the tides etc., and is also used to guide spacecraft all over the Solar System. Incorrect theories can often make correct predictions by chance, but no correct theory will make incorrect predictions.

_

3. Researchers must treat all theories with skepticism, because exaggerated belief or trust in theories enslaves the mind, taking away its freedom and smothering its originality, coercing us to constantly seek confirmation of the theory and to neglect everything that fails to agree with it. Open-mindedness and neutrality are the expectations that you will weigh real scientific evidence fairly.

_

4. Imitation science resembles good science and thereby masquerades as science in an attempt to claim a legitimacy that it would not otherwise be able to achieve. It fares poorly on scientific methods, gives fake explanations and results, and misleads people with not infrequent disastrous consequences. It includes bad science, pseudoscience, junk science, fringe science, fake science, pathological science and all other look-alike sciences. Everybody makes mistakes, and an error of judgment in an otherwise good scientific study with a noble motive is not imitation science.

_

5. The true scientific method says that you cannot prove anything; all you can do is disprove alternative hypotheses. A true scientific theory cannot be proven correct, because there is always a possibility that further observations will disprove it, and therefore the existing theory may need to be improved, modified or replaced in the future. The scientist's acceptance of a theory is always tentative, because one can never be absolutely certain that some future observation will not falsify the theory. Science is self-correcting, or so the old cliché goes. The gist is that one scientist's error will eventually be righted by those who follow, building on the work. This is the fundamental difference between science on one hand and imitation science & faith on the other. True scientists are willing to change their minds and to admit that they were wrong if that is where the evidence leads, while imitation scientists and followers of faith will never admit that they might be wrong.

_

6. Pseudoscience is research that does not avail itself of the scientific method, while bad science does follow the scientific method, but poorly, with or without fabricated data. The hallmark of pseudoscience is that the original idea is never abandoned, whatever the evidence. While pseudoscience may also be bad science, most bad science is not generally considered pseudoscience. Pseudoscience seeks confirmations; science seeks falsifications. Basic assumptions and the consequences of those assumptions are falsifiable in science, while they are not falsifiable in pseudoscience.

_

7. The logic used by imitation science is that if A is associated with B, then A caused B; no matter that A and B are associated merely by chance, no matter that there is no plausible explanation for the association, no matter that other variables are linked to A or B, no matter that A failed to be associated with B in replication experiments, and no matter that there is evidence to the contrary.

 _

8. Faith is defined as an illogical belief in the occurrence of the improbable, accepting violations of the well-known, well-tested or easily demonstrated laws of Nature. Faith is non-science, and not even pseudoscience. Since all religions are fundamentally based on faith, it is unlikely that religion and science will reconcile with each other, despite the penchant to do so.

_

9. Science is ubiquitous in the world, and many of our decisions, public and personal, depend on an understanding of it. Scientific literacy is defined as knowing basic facts and concepts about science and having an understanding of how science works. All people make assumptions, all hold biases, and all have emotions. These interfere with their interpretations of the world around them and how they solve problems. Lack of scientific literacy results in an inability to solve problems like population growth, environmental deterioration, biodiversity decline, greenhouse effects, health problems, food and air quality, natural hazards, the HIV pandemic, hoaxes, frauds & flaky schemes, tobacco hazards, pesticides, fake health remedies etc. A lot of people accept pseudoscientific beliefs such as astrology, lucky numbers, the existence of unidentified flying objects (UFOs), extrasensory perception (ESP), homeopathy and magnetic therapy due to lack of scientific literacy.

_

10. People are misled daily by imitation science masquerading as good science on TV, in newspapers, magazines, celebrity talks and advertisements, at social gatherings; and also by their doctors, dieticians, nutritionists, gymnasium peers, workplace colleagues and astrologers. The most important reason why imitation science is so widespread and so popular is that the majority of the world's population lacks scientific literacy, and that is why people believe things that are reassuring far past the point of being too good to be true.

_

11. Take a look at the way young children learn and you can see that they follow a scientific method, although usually without being fully aware of the process they are following. Even babies inherently use many of the strategies employed in the scientific method: a systematic process of forming hypotheses and testing them based on observed evidence. If every child employs the scientific method for learning, why is scientific literacy so poor in the population? Evidently, the natural scientific instinct is suppressed by parents, schools, religion, culture, peers, media and neighborhood. Schools coerce children to memorize rather than understand; religions coerce children to follow the faith without questioning; parents coerce children to adopt their viewpoint of life without questioning; and so on and so forth.

_

12. One cannot overemphasize the false positive and false negative results of animal experiments & research, because false positives and false negatives are part of science, and one of the goals of medical science is to distinguish between true positives & false positives, and between true negatives & false negatives. It is also true that many advances in health are attributable to human studies without animal experiments; but animal experiments are always followed by human studies, and therefore the issue of animal experimentation cannot be viewed in isolation. Animals are good though imperfect indicators of how drugs will affect humans, and therefore animal experimentation & research is good science; notwithstanding the morality argument against killing animals, as the majority of the world's population eats meat, which necessitates killing millions of animals daily.

_

13. Anecdotal means based on personal accounts or casual observations rather than rigorous or scientific analysis, and therefore not necessarily true or reliable. An anecdotal report, piece of information or study, no matter how impressive, is imitation science; it must be replicated repeatedly to acquire the status of science.

_

14. Even though double-blind randomized controlled trials (RCTs) are the gold standard for determining the efficacy of drugs in various illnesses, these RCTs are more helpful in supporting the claim that a drug does not work than the claim that it does. In fact, an RCT against the best currently available treatment is better than an RCT against placebo. The fact that the gold standard of double-blind RCTs has come to rely on manipulation by drug companies shows how fraudulent the practice can become. Plenty of drugs approved by regulators apparently work as per RCTs but do not save lives; the net result is that drug-induced death is now the third leading cause of death. RCTs therefore lie between good science and imitation science.

_

15. Presenting outcomes as relative risk reduction rather than absolute risk reduction is imitation science. Three RCTs showed that circumcision reduces HIV transmission by a whopping 60%, which is the relative reduction in infection rates; but the absolute decrease in HIV infection between the treatment and control groups in these RCTs was a mere 1.31%. The NNT (number needed to treat) for circumcision is 77, i.e. 77 people need to undergo circumcision to prevent one HIV transmission. Obviously the role of circumcision is overplayed by researchers & media when they use relative risk reduction; relative risk tells you nothing about actual risk. The arithmetic is sketched below.
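
A minimal sketch of this arithmetic (Python), using hypothetical event rates chosen only to approximate the trial figures quoted above; the rates themselves are illustrative, not taken from the trial reports:

def risk_metrics(control_rate, treatment_rate):
    """Return absolute risk reduction, relative risk reduction and NNT."""
    arr = control_rate - treatment_rate   # absolute risk reduction
    rrr = arr / control_rate              # relative risk reduction
    nnt = 1 / arr                         # number needed to treat
    return arr, rrr, nnt

# Hypothetical rates: ~2.2% infections in controls vs ~0.9% in the treated arm.
arr, rrr, nnt = risk_metrics(0.022, 0.009)
print(f"ARR = {arr:.2%}, RRR = {rrr:.0%}, NNT = {nnt:.0f}")
# ARR = 1.30%, RRR = 59%, NNT = 77

The same trial can thus be headlined as a "60% reduction" or as a 1.3 percentage-point reduction; both are arithmetically true, but only the latter conveys the actual risk.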

_

16. An observational study only helps in generating a hypothesis; it can neither prove nor disprove the hypothesis. 99% of nutritional studies are observational studies, which actually don't prove anything; virtually every nutritional study you read about or see in the media is observational. When observational studies are portrayed as fact or truth, they become imitation science.

_

17. Meta-analyses of studies can be misleading due to publication bias (studies showing negative results going unreported) and due to agenda-driven (economic, social, or political) meta-analysis. So meta-analysis is not infrequently associated with imitation science, even though the intention of meta-analysis is to promote good science.

_

18. The essence of science is to discover truth, and in that endeavor, when studies or data are hidden or suppressed, especially studies and trials with negative results or outcomes, truth becomes the casualty. When negative data are withheld, a bogus drug is marketed in the guise of curing an illness, at the cost of the lives of innocent people. Pharmaceutical companies routinely withhold negative data from the public, doctors & regulators.

_

19. The profession of medicine in every aspect (clinical, education and research) has been inundated with profound influence from the pharmaceutical and medical device industries. The issue of research integrity is the major crisis facing modern medicine. People believe that modern medicine is evidence-based and that this is what separates it from the alternatives; but once people realize how skewed so much of the evidence is when research studies are funded by drug companies, we have a crisis of confidence of catastrophic proportions. The majority of complementary & alternative medicine (CAM) is imitation science, but when modern medicine is also significantly based on imitation science, how can we criticize acupuncture and homeopathy?

_

20. Pharmaceutical companies spend almost twice as much on marketing as they spend on research: in America, 24.4% of the sales dollar goes to promotion versus 13.4% to research and development, indicating that health care improvement is not the industry's goal. In 2003, drug companies spent $448 million on advertising in medical journals; it has been calculated that the return on investment for medical journal ads is between $2.22 and $6.86 for every dollar spent. The pharmaceutical industry has moved very far from its original purpose of discovering and producing useful new drugs, becoming instead a marketing machine selling drugs of dubious benefit. Pharmaceutical industry-funded trials are four times more likely to give a positive result than independently sponsored trials. The majority of research carried out by big drug companies is imitation science.

_

21. There is overwhelming evidence that leading pharmaceutical corporations manipulate research to sell drugs at the cost of the lives of billions of people. Drug research & publication are manipulated through the 'ghost management' of science, imitation science at its worst. The distinction between imitation science and criminal behavior is getting blurred day by day.

_

22. The biggest story of 21st-century imitation science is the story of statins. A multi-billion dollar scandal involving big drug companies, researchers, authors, journals, doctors and media is unfolding. Industry-funded trials were 20 times more likely to produce results favoring statins than independently sponsored trials. Except for middle-aged men with coronary artery disease, all other uses of statins are purely speculative. As far as primary prevention of heart attack among low-risk groups is concerned, 750 people need to take statins for one year to prevent a heart attack in one individual. However, out of these 750 people, 23 may develop cataract, 1.7 may develop acute kidney failure, 5.5 may develop liver dysfunction and 250 may have increased muscle fatigability; the harm arithmetic is recast below.
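
Recasting the counts quoted above as numbers needed to harm (NNH) makes the trade-off explicit; this minimal sketch (Python) simply re-derives NNH from the figures in the text:

# Adverse events expected among 750 low-risk people treated for one year
# to prevent a single heart attack (counts as quoted above).
n_treated = 750
harms = {
    "cataract": 23,
    "acute kidney failure": 1.7,
    "liver dysfunction": 5.5,
    "increased muscle fatigability": 250,
}
for event, count in harms.items():
    print(f"{event}: NNH = {n_treated / count:.0f}")
# cataract: 33, acute kidney failure: 441,
# liver dysfunction: 136, increased muscle fatigability: 3

In other words, 750 people must be treated for one year to prevent one heart attack, but only about 3 need be treated for one person to suffer increased muscle fatigability.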

_

23. When researchers and research institutions take money from industry or have a financial stake in their own research, money threatens impartial judgment, resulting in biased or motivated research.

_

24. Review articles concluding that passive smoking is not harmful were about 90 times more likely to be funded by the tobacco industry than those concluding that passive smoking is harmful. Tobacco industry-funded research showing that passive smoking is not harmful to health is a classic example of imitation science; it raises concern about the quality of research tied to the corporate world, and it would therefore be prudent to question research funded by corporations.

_

25. Well over 100,000 scientific journals are published each year, producing over six million articles, and most published research findings are false, indicating the grip of imitation science over the world. A research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser pre-selection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; when more teams are involved in a scientific field in chase of statistical significance; and when there is bias in the researchers or in the methodology they adopted. A lot of cancer research is imitation science: only 11% of cancer studies could be replicated. A sketch of the underlying arithmetic follows.
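
A minimal sketch (Python) of why this happens, assuming the simplified model, without bias terms, from John Ioannidis's 2005 essay "Why Most Published Research Findings Are False": the probability that a statistically significant finding is actually true depends on study power, the significance threshold, and the prior odds that the tested relationship is real.

def ppv(power, alpha, prior_odds):
    """Post-study probability that a 'significant' finding is true."""
    return (power * prior_odds) / (power * prior_odds + alpha)

# Small, underpowered study of a long-shot hypothesis:
print(f"{ppv(power=0.20, alpha=0.05, prior_odds=0.1):.0%}")  # ~29% true
# Well-powered study of a plausible hypothesis:
print(f"{ppv(power=0.80, alpha=0.05, prior_odds=1.0):.0%}")  # ~94% true

With low power and low prior odds, fewer than a third of "significant" results are true even before any bias is added, which is consistent with the dismal replication rates quoted above.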

_

26. One study found that 2% of scientists admit that they themselves are guilty of scientific misconduct, while 14% of scientists say that their colleagues are guilty of scientific misconduct. Another study found that 1 out of 3 scientists admits to using questionable research practices.

_

27. In science, a retraction of a published scientific article indicates that the original article should not have been published and that its data and conclusions should not be used as part of the foundation for future research. The common reasons for retraction are scientific misconduct and serious error. At 44 per 100,000 papers, India's misconduct-related retraction rate is far higher than the world average of about 17 per 100,000 papers for all retractions (due to misconduct as well as genuine errors). Although correction of the scientific record is laudable per se, erroneous or fraudulent research can cause enormous harm: diverting other scientists to unproductive lines of investigation, leading to the unfair distribution of scientific resources, and in the worst cases even resulting in inappropriate medical treatment of patients. Furthermore, retractions can erode public confidence in science. Corruption is ubiquitous in India, involving all spheres of life; scientists and doctors are no exceptions.

_

28. The nutritional (food) supplement industry rests on imitation science: vitamin C does not prevent colds, chondroitin and glucosamine are useless for arthritis, omega-3 fatty acid supplements do not prevent cardiovascular disease, and antioxidants do not maintain health or help in various diseases. In fact, there is an increased risk of mortality associated with supplements containing large doses of beta-carotene, and possibly of vitamin E and vitamin A. The $50 billion food supplement pill industry's products are ineffective or even harmful. It is clear that market forces, not science, are driving the antioxidant industry. Unwarranted calcium supplements may be harmful to health. Only a low-dose vitamin capsule providing the recommended daily amounts of nutrients is unlikely to be harmful.

 _

29. Vitamin D supplements do not prevent osteoporosis and fractures in normal people, including post-menopausal women, who have an adequate diet and exposure to the sun (approximately 5–30 minutes of sun exposure between 10 AM and 3 PM at least twice a week). Surfaces like snow, sand, pavement, and water reflect much of the UV radiation that reaches them; because of this reflection, UV intensity can be deceptively high even in shaded areas. I want to emphasize that indirect UV radiation reflected off buildings (particularly those painted white) and off light-colored concrete surfaces, such as sidewalks, patios, and driveways, also drives vitamin D synthesis in the skin. UVB radiation does not penetrate glass, so exposure to sunshine indoors through a glass window does not produce vitamin D. Elderly populations have prevalent vitamin D deficiency due to decreased sun exposure, decreased oral intake and decreased skin activation of vitamin D, along with decreased vitamin D absorption; the elderly therefore do benefit from vitamin D supplementation.

_  

30. There is overwhelming evidence that the vitamins in food are superior to non-food vitamins, whether synthetic or deceptively marketed as 'natural'. The only legitimate use of the non-food vitamins available in the market is to correct a genuine vitamin deficiency in the body, whatever its cause. To diagnose vitamin B-12 deficiency or vitamin D deficiency merely on the basis of a low blood level is imitation science; people should be wary of such diagnoses made by doctors where the sole beneficiary is the pharmaceutical industry.

_

31. Scientific racism, supported by many scientists and scholars of the 18th & 19th centuries, was imitation science that was used to justify Nazism, Fascism, slavery in America and apartheid in South Africa.

_

32. Imitation science caused the deaths of 10,000 people from cholera in Latin America during the early 1990s, and the deaths of millions of people from malaria worldwide, after chlorine (used to disinfect water supplies) and DDT (used to kill mosquitoes) were wrongly labeled as carcinogens by the United States Environmental Protection Agency (EPA).

_

33. The idea that science can be manipulated in the service of a greater good is deeply dangerous to society and to rational public policy. The South African government, for political reasons, argued that HIV is not the cause of AIDS and that anti-retroviral drugs are not useful for patients. This policy decision caused 330,000 deaths between 2000 and 2005, all thanks to imitation science used for political purposes.

_

34. Male circumcision as a strategy to reduce transmission of HIV in areas of high HIV prevalence is imitation science; the promotion and supply of free condoms is good science instead.

_

35. Climate change & global warming deniers practice imitation science in the service of their political and/or economic agendas.

_

36. The majority of editors, anchors and journalists in the mass media are themselves scientifically illiterate, and coupled with commercial and/or political interests, their sole intention is to promote imitation science. On top of that, even good scientific work is distorted into imitation science by the mass media for commercial and/or political gain. In fact, most science news is false.

_

37. My mathematical formula for Pi, my scientific treatment of Dengue fever and my formula for the intensity (strength) of cyclones are good science, but they were labeled as imitation science in India, which may have inadvertently caused preventable deaths or unwarranted claims of saving lives. No wonder John Bohannon's fake-research sting operation showed that the highest density of acceptances of fake research came from journals based in India, where academics are always under intense pressure to publish in order to get promotions & bonuses; consequently, real research is a casualty.

___________

Dr. Rajiv Desai. MD.

December 1, 2013

 ___________

Postscript:

I am coining a new term, 'imitation science', for that which resembles good science but fails at everything good science stands for. I hope that the scientific community accepts this new term into its vocabulary. The only way to restrict imitation science is to improve the scientific literacy of the population, with parents, schools, individuals, political bodies, corporations and the mass media (especially television and the internet) encouraging the natural scientific instinct of children.

_____________

Footnote-1:

When I was in Saudi Arabia, a Saudi medical student was learning medicine from me. She was in II MBBS. Just to check her scientific literacy, I asked her: "When you look in the sky, you see the sun. How does the sun emit light?" She was speechless; she did not know, despite learning science in school and college. All I wanted was: "hydrogen is converted into helium at high temperature by nuclear fusion, releasing energy in the form of light." Now you may ask how that is related to medicine. Well, all basic sciences are inter-related; you cannot become a good doctor if your fundamental scientific concepts are poor.

Footnote-2:

How does indirect UV radiation help the synthesis of vitamin D?

The "minimal erythemal dose" (MED) is the amount of sun most people need to lightly pinken the skin, i.e. exposure to natural sunlight for 5 to 15 minutes a day. Approximately 25 μg (1000 IU) of vitamin D can be synthesized when 6% of the body is exposed to one MED for 5 minutes two or three times per week (Holick 1999). This is based on the observation that a healthy individual whose whole body is exposed to one MED of simulated sunlight will have circulating vitamin D concentrations comparable to those from ingesting 250 μg to 625 μg (10,000 IU to 25,000 IU) of vitamin D, the equivalent of drinking ¼ cup of cod liver oil or eating 2 pounds of sun-dried shiitake mushrooms. As you can see, sunlight is very efficient at generating vitamin D; to obtain the same amount from diet, we would have to eat a lot of food. However, there is no consensus on a dosage of sunlight exposure that would make vitamin D without encouraging cancer; the opinion of one researcher is 25% of the MED twice a week.

Everybody thinks that sunlight exposure for vitamin D synthesis means exposure to direct sunlight. However, there is a lot of exposure to indirect sunlight when we sit in the shade. It is not visible light but ultraviolet rays of type B (UVB) that stimulate vitamin D synthesis. Total UV radiation at a given location = direct UV + indirect UV. Shade forms a barrier between you and the sun, which protects you from direct UV radiation, but you may still get enough exposure to cause sunburn from indirect UV radiation; it is quite possible to get sunburned while sitting in the shade or on a cloudy day. Like blue light, ultraviolet scatters off the atmosphere and can reach you even if you are in the shade; a general rule of thumb is that if you can see a lot of sky, UV radiation can reach you. If reflected UVB rays can cause sunburn, why not vitamin D synthesis? Indeed, because the same wavelengths of UV radiation produce both vitamin D and sunburn, you cannot separate the wanted from the unwanted effects. However, clothing generally blocks UV radiation; so to get vitamin D from reflected UVB rays, one has to expose the legs, arms, back, face etc. to indirect UVB radiation. According to the Vitamin D Expert Panel Meeting, October 11–12, 2001, Atlanta, Georgia: sunlight reflected from sand and light cement produces some synthesis of vitamin D, but there is no vitamin D synthesis when an individual is in total shade, and indirect light reflected from asphalt does not produce vitamin D. The dose arithmetic quoted above is sketched below.
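
A minimal sketch (Python) of the dose arithmetic above, assuming only the standard conversion of 1 μg of vitamin D to 40 IU and the Holick (1999) figures quoted in this footnote:

UG_TO_IU = 40  # standard conversion: 1 microgram of vitamin D = 40 IU

# 6% of the body exposed to one MED, two or three times per week:
print(25 * UG_TO_IU)                   # 1000 IU
# Whole-body exposure to one MED, dietary-equivalent range:
print(250 * UG_TO_IU, 625 * UG_TO_IU)  # 10000 IU, 25000 IU

Whole-body exposure thus yields roughly 10 to 25 times the vitamin D of the 6% partial exposure, which matches the quoted figures.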

__________________________________________________________ 

I wish everybody a merry Christmas and hope that the year 2014 will bring peace, prosperity and security to the world.

_________________________________________________________________
