June 15th, 2015




I am reminded of Abraham Lincoln’s remark: “The world has never had a good definition of the word liberty. We all declare for liberty; but in using the same word we do not all mean the same thing.” Substitute ‘net neutrality’ for ‘liberty’, and that is where we are today. The Internet has unleashed innovation, enabled growth, and inspired freedom more rapidly and extensively than any other technological advance in human history. Its independence is its power. Net neutrality means that internet service providers (ISPs) should treat all data on the internet equally. ISPs have the structural capacity to determine how information is transmitted over the internet and the speed at which it is delivered. The present network operators, principally large telephone and cable companies, also have an economic incentive to leverage their control over the physical infrastructure of the internet into control over internet content. If they went about it in the wrong way, these companies could institute changes that limit the free flow of information over the internet in a number of troubling ways. Network operators could prioritize the transmission of some content (their own, for example) over material produced by competitors. If this were allowed, web companies would lose revenues that they could otherwise devote to improving old products and innovating new ones. Worse yet, smaller content providers, who can now capitalize on the two-way nature of the internet (whether online stores or forums for democratic discourse), might be unable to secure quality service online. An entrepreneur’s fledgling company should have the same chance to succeed as an established corporation, and access to a high school student’s blog shouldn’t be unfairly slowed down to make way for advertisers with more money.
At the core of the principle of net neutrality is thus the idea that all content on the internet should be accessible in a fully equitable way, and that once an internet user has accessed that content, he should be able to engage with it in the same way that he would engage with any other content on the internet. Allowing broadband carriers to control what people see and do online would fundamentally undermine the principles that have made the internet such a success. On the other hand, to be honest, there is no absolute neutrality. The world is neither neutral nor equal. Umpires in cricket were once perceived to be biased, and so we now have neutral umpires from countries not playing in the match at hand. Humans are subjective; they have their own positions, opinions and priorities. So net neutrality cannot be seen in isolation from the entire gamut of human behaviour, but must be approached by combining different views and opinions.


Internet terminology, abbreviations and synonyms:

Internet Backbone:

The collection of cables and data centers that make up the core of the internet. It is operated not by a single company but by many independent companies spread across the globe.

Internet Service Provider (ISP):

A company, such as Comcast, Verizon, Airtel or Tata Docomo, that plugs into the backbone and then provides internet connections to homes and businesses. An ISP is also known as a TSP (telecom service provider), telco, broadband carrier, network operator, internet access provider or platform operator. An ISP provides internet services to users via cable or wireless connections.

Access ISP = last mile ISP = eyeball ISP = ISP that provides internet access to user.

Content Provider:

Companies such as Google, Facebook, and Netflix that provide the webpages, videos, and other content that moves across the internet. My website is also a content provider: a content provider is anyone who has a website that delivers content to internet users. Content and service providers (CSPs) offer a wide range of applications and content to the mass of potential consumers.


Peering:

Where one internet operator connects directly to another so that they can trade traffic. This could be a connection between an ISP such as Comcast and an internet backbone provider such as Level 3. But it could also be a direct connection between an ISP and a content provider such as Google.

Content Delivery Network (CDN):

A network of computer servers set up inside an ISP that delivers popular photos, videos, and other content. These servers can deliver content to home users faster because they are physically closer to those users. Companies such as Akamai and Cloudflare run CDNs that anyone can use. But content providers such as Google and Netflix now run their own, private CDNs as well.


Regulator:

The FCC (Federal Communications Commission of the U.S.) and TRAI (Telecom Regulatory Authority of India) are examples of regulators that regulate ISPs.


ISP = internet service provider

CSP = content & service provider

IU = internet user = user = consumer

NN = net neutrality

IP = internet protocol

TCP = transmission control protocol

VoIP = voice over internet protocol

Kbps = kilobits per second

Mbps = megabits per second = 1000 Kbps

Gbps = gigabits per second = 1000 Mbps

QoS = quality of service

CDN = content delivery network

P2P = peer-to-peer file sharing

SMS = short message service

MMS = multimedia messaging service

OTT = over-the-top services

BE = best effort

LAN = local area network

WAN = wide area network

WLAN = wireless local area network

DSL = digital subscriber line

Packets = datagrams

IPTV = internet protocol television


Who’s an Internet user?

A user is a pretty broad term to describe someone who uses the Internet, so let’s take a closer look at what “user” means. A user can be a person; a small business; a local city, state or national government agency; or a large organization such as the U.S. Government, AT&T, Google, Microsoft, or Facebook. As this wide range of Internet users shows, an organization that makes laws, sets tariffs, owns portions of the cables that make up the Internet, or has the money to buy faster speeds and pay for larger amounts of data could obtain an advantage over a smaller organization or user. In addition to size, the governments of certain countries restrict both who is allowed to use the Internet and what users can do when using it. Some countries tightly control the Internet within their borders, and net neutrality is sometimes used more broadly to include the freedom to send and receive data without government restrictions.


The figure below depicts how the internet works today. To understand net neutrality and how ISPs can interfere to circumvent it, it helps to keep this figure in mind:


A lot of emotional terms are used to describe the melting pot of the neutrality debate. For example, ‘censorship’ or ‘black-holing’ (where ‘route filtering’, ‘firewalling’ and ‘port blocking’ might describe what is happening in a less emotive way); ‘free-riding’ is often bandied about to describe the business of making money on the net (rather than ‘overlay service provision’); and ‘monopolistic tendencies’, instead of the natural inclination of an organisation that has sunk capital into a lot of kit to want to make revenue from it!


Growth of internet:

As the flood of data across the internet continues to increase, there are those who say it will sometime soon collapse under its own weight. Back in the early 90s, those of us who were online were just sending text e-mails of a few bytes each; traffic across the main US data lines was estimated at a few terabytes a month, steadily doubling every year. But the mid 90s saw the arrival of picture-rich websites and the invention of the MP3. Suddenly each net user wanted megabytes of pictures and music, and the monthly traffic figure exploded. For the next few years we saw steadier growth, with traffic again roughly doubling every year. But since 2003, we have seen another change in the way we use the net. The YouTube generation wants to stream video and download gigabytes of data in one go. In one day, YouTube sends data equivalent to 75 billion e-mails, so it’s clearly very different. The network is growing up and is starting to get more capacity than it ever had, but it is a challenge. Video is real-time; it must not have mistakes or errors. E-mail can be a little slow: you wouldn’t notice if it took 11 seconds rather than 10, but you would notice that on a video.



Introduction to net neutrality:

The Internet owes much of its success to the fact that it is open and easily accessible, provided the user has an Internet connection. Any content provider can put its content on the internet and test its ideas and their relative value in the marketplace. The required investment, such as buying a domain name, renting space on a server and implementing an application or software, has been relatively low. As a result, new services have been made available to consumers: browsing, mailing, peer-to-peer (P2P) file sharing, instant messaging, Internet telephony (Voice over Internet Protocol, ‘VoIP’), videoconferencing, online gaming, video streaming, etc. This development has taken place mainly on a commercial basis, without any regulatory intervention.


Net neutrality is the principle that all data on the internet is equal and must be treated equally, with no discrimination on the basis of content, user or design by governments and Internet Service Providers (ISPs). It is the principle that all data on the internet is transported using best effort, without discriminating by origin or service. Under this principle, consumers can make their own choices about what applications and services to use, and are free to decide what lawful content they want to access, create, or share with others. Once you’re online, you don’t have to ask permission or pay tolls to broadband providers to reach others on the network. If you develop an innovative new website, you don’t have to get permission to share it with the world. For example, suppose the Times is a widely popular online newspaper, while the Mirror has comparatively fewer visitors to its website. Right now, if the Mirror wanted to boost its page views, it would have to write more engaging stories and find ways to share its content so that more people read it. It is not allowed to make deals with ISPs to charge customers less money if they visit the Mirror’s website. Net neutrality means that Internet Service Providers should bill you for the amount of bandwidth you have consumed, not for which websites you visited. Net neutrality is the principle that all packets of data over the internet should be transmitted equally, without discrimination. So, for example, net neutrality ensures that my blog can be accessed just as quickly as, say, the BBC website. Essentially, it prevents ISPs from discriminating between sites and organisations, whereby those with the deepest pockets pay to get in the fast lane while the rest have to contend with the slow lane. Instead, every website is treated equally, preventing the big names from delivering their data faster than a small independent online service.
This ensures that no one organisation can deliver its data any quicker than anyone else, enabling a fair and open playing field that encourages innovation and diversity in the range of material online. The principles of net neutrality are effectively the reason why we have a (reasonably) diverse online space that enables anyone to create a website and reach a large audience. Network neutrality is the idea that Internet service providers must allow customers equal access to content and applications, regardless of the source or nature of the content. Presently the Internet is indeed neutral: all Internet traffic is treated equally, on a first-come, first-served basis, by Internet backbone owners. The Internet is neutral because it was built on phone lines, which are subject to ‘common carriage’ laws. These laws require phone companies to treat all calls and customers equally. They cannot offer extra benefits to customers willing to pay higher premiums for faster or clearer calls, a model known as tiered service.


Net neutrality is not a new concept relative to the age of the Internet; its roots go back to the Internet’s founders. Net neutrality refers to a guiding principle that preserves the free and open Internet, with no discrimination. It means that an Internet Service Provider (ISP) cannot discriminate in the speed of the connection – or lack thereof – offered to one content provider versus another (Eudes 2008). When the Internet was first invented, its founders wanted to be sure that it provided a safe haven for the transportation of information without any biases. They wanted to ensure that all people had a consistent way to use the Internet, regardless of their connection and social status (Margulius, 2003). Net neutrality has two polarizing factions: those who are in favor, and those who are not; on this topic there is no middle ground. Those in favor of net neutrality include organizations like Microsoft, Google, and other content providers. Those against net neutrality are generally telecommunication network organizations and/or ISPs (Owen 2007). Network neutrality, or open inter-working, means that in accessing the World Wide Web, one is in full control over how to go online, where to go and what to do, as long as these are lawful. So firms that provide Internet services should treat all lawful Internet content in a neutral manner. It also requires that such companies not charge users, content, platforms, sites, applications or modes of communication differentially. These are also the founding principles of the Internet, and they are what has made it the largest and most diverse platform for expression in recent history.


Net neutrality is when an ISP treats all content on the internet neutrally and does not prioritize one kind over another. ISPs want to charge content companies because money makes their shareholders happy, and because they believe they have the right to do so when a certain content provider (e.g. Netflix) takes up the majority of the bandwidth flowing through their networks. Content companies are concerned because it would give ISPs free rein to simply slow down any content they please and demand money to bring it back to normal speed. Whether you’re accessing How-To Geek, Google, or a tiny website running on shared hosting somewhere, your Internet service provider treats these connections equally and forwards the data along without prioritizing any one party. Without net neutrality, your Internet service provider could prioritize data from Google, charging them for the privilege. They could throttle Netflix while providing you with unlimited bandwidth to stream videos from their own video-streaming service. They could restrict the bandwidth available to VoIP applications and encourage you to keep paying for a phone line. They could throttle connections to websites run by startups and individuals that haven’t signed a contract with the Internet service provider to pay for priority access. These actions would all be violations of net neutrality. However, by and large, Internet service providers don’t violate net neutrality in this way. They just forward packets along; that’s the way the Internet has worked, and it has given us the Internet we have today.


The figure below shows how ISPs would like the internet to be without net neutrality:


One percent of the world’s population controls almost 50 percent of the world’s wealth, according to the poverty eradication nonprofit Oxfam. Advocates of net neutrality worry that loosening the rules for ISPs will result in a one-percent version of the Internet. Here’s how it could happen. In 2004, Internet traffic was more or less equally distributed across thousands of Web companies. Just 10 years later, half of all Internet traffic originated from only 30 companies. The top three websites by daily unique visitors and page views are Google, Facebook and YouTube. In terms of data, Netflix and YouTube hog more than half of all downstream traffic in North America. That means one out of every two bytes of data traveling across the Internet is streaming video from Netflix or YouTube. If the distribution of Internet traffic is so out of whack now, imagine what it would be like if ISPs were given the green light to give further preferential treatment to the biggest players. Would there be any bandwidth left for the 99 percent — independent video producers, upstart social media sites, bloggers and podcasters? This is a really important reason why you should care about net neutrality. The Internet, as it exists today, is an open forum for free speech and freedom of expression. Websites publishing both popular and unpopular viewpoints are treated equally in terms of how their data gets from servers to screens. If the FCC allows Internet service providers (ISPs) to charge extra money for access to Internet last-mile fast lanes, the playing field of free speech is no longer equal. Those with the money to pay for special treatment could broadcast their opinions more quickly and more smoothly than their opponents. Those without as many resources — activists, artists and political outsiders — could be relegated to the Internet slow lane.


If you’re lucky enough to live in a country that doesn’t regulate the information you access online, you probably take net neutrality for granted. You search the Web unrestricted by government censors, free to choose what information to believe or discard, and what websites and online services to patronize. In mainland China, citizens of the highly restrictive communist regime enjoy no such freedoms. This is what a heavily censored and closely monitored Internet looks like:

1. Chinese internet service providers (ISPs) block access to a long list of sites banned by the government.

2. Specific search terms are red-flagged; type them into Google and you’ll be blocked from the search engine for 90 seconds.

3. Chinese ISPs are given lists of problematic keywords and ordered to take down pages that include those words.

4. The government and private companies employ 100,000 people to police the Internet and snitch on dissenters.

5. The government also pays people to post pro-government messages on social networks, blogs and message boards.


The unequal Web:

The figure above shows that richer countries rank highest for net access, freedom and openness. The web is becoming less free and more unequal, according to a report from the World Wide Web Foundation. Its annual web index suggests web users are at increasing risk of government surveillance, with laws preventing mass snooping weak or non-existent in over 84% of countries. It also indicates that online censorship is on the rise. The report led web inventor Sir Tim Berners-Lee to call for net access to be recognised as a human right. That means guaranteeing affordable access for all, ensuring internet packets are delivered without commercial or political discrimination, and protecting the privacy and freedom of web users regardless of where they live.


Net neutrality worldwide:

This map shows data from Glasnost, one of the Measurement Lab tools for examining your internet connection. The authors map, worldwide, the percentage of tests in which violations of net neutrality were discovered. The data covers the period from 2012-12-26 00:02:11 to 2013-12-22 23:59:19.



Outline of computer, internet, bits, bytes, speed, packets and internet protocol:

A computer is defined as a programmable machine that computes (stores, processes and retrieves) information (data) according to a set of instructions (a program). A computer processes data in numerical form, and its digital electronic circuits perform mathematical operations using the binary system. The binary system uses only two digits for arithmetic processing, namely 0 and 1, known as bits (binary digits).

0 means absence of current/voltage in electronic circuit = off

1 means presence of current/voltage in electronic circuit = on

A series of 8 consecutive bits is known as a byte which permits 256 different on/off combinations.


Computers see everything in terms of binary. In binary systems, everything is described using two values or states: on or off, true or false, yes or no, 1 or 0. A light switch could be regarded as a binary system, since it is always either on or off. As complex as they may seem, on a conceptual level computers are nothing more than boxes full of millions of “light switches.” Each of the switches in a computer is called a bit, short for binary digit. A computer can turn each bit either on or off. Your computer likes to describe on as 1 and off as 0. By itself, a single bit is kind of useless, as it can only represent one of two things. By arranging bits in groups, the computer is able to describe more complex ideas than just on or off. The most common arrangement of bits in a group is called a byte, which is a group of eight bits.
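The on/off arithmetic above can be sketched in a few lines of Python. The snippet is purely illustrative; the character ‘A’ and its code 65 are a standard ASCII example, not something from this article:

```python
# A byte is 8 bits, so it can represent 2**8 = 256 distinct values.
n_combinations = 2 ** 8
print(n_combinations)  # 256

# The character 'A' is stored as the byte 01000001 (decimal 65):
print(format(ord("A"), "08b"))  # 01000001

# Eight "switches", each on (1) or off (0), read back as one number:
bits = [0, 1, 0, 0, 0, 0, 0, 1]
value = int("".join(str(b) for b in bits), 2)
print(value)  # 65
```

Arranging the same eight switches differently gives each of the 256 possible byte values, which is exactly why a byte can describe more than just on or off.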


The internet is defined as a global communication system of data connectivity between computers, using the transmission control protocol (TCP) and internet protocol (IP) to serve billions of users around the world. The internet is the greatest invention in communication, breaking barriers of age, distance, language, religion, race and region, and making the world a better place to live in. If you do not have internet access in the 21st century, you are illiterate. The internet scores over traditional media due to its openness and neutrality. Every school must teach students the basics of computers and the internet.

Data transfer rate (speed) of internet is usually in bits per second.

1000 bits per second = 1 kilobit per second (Kbps)

1000000 bits per second = 1 megabit per second (Mbps) = 1000 Kbps

Broadband means a download speed of more than 4 Mbps and an upload speed of more than 1 Mbps. Newer technology with fiber-optic cables can deliver internet speeds of 100 Mbps.

Data travelling from computer to computer wirelessly (through the air) moves at the speed of radio waves, i.e. the speed of light: 300,000 kilometres per second. Data travelling through a wired network moves at the speed of electrical signals, which is also near the speed of light. Please do not confuse the speed of data travel (near the speed of light) with internet speed (the data transfer rate in Kbps or Mbps): the transfer rate refers to how fast digital data can be converted into radio waves or electrical signals, not to how fast that data moves through the air or wires. Data transfer rate and data travel rate are different. The term latency refers to the amount of time taken by packets to travel from source to destination. Since the speed of light is constant and cannot be beaten, latency mostly depends on the time packets spend queuing in routers and passing through other hardware and software.
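The difference between data travel rate and data transfer rate can be made concrete with a small sketch. The distance and link speed below are assumed, illustrative numbers, using the article’s units (speed of light = 300,000 km/s, broadband = 4 Mbps):

```python
SPEED_OF_LIGHT_KM_S = 300_000  # propagation speed, km per second

def propagation_delay_ms(distance_km):
    """Time for the signal itself to cover the distance, ignoring routers and queues."""
    return distance_km / SPEED_OF_LIGHT_KM_S * 1000

def transfer_time_s(file_megabits, link_mbps):
    """Time to push a file through a link at a given data transfer rate."""
    return file_megabits / link_mbps

# Over a 5,000 km path, the signal arrives in about 17 milliseconds...
print(round(propagation_delay_ms(5000), 1))  # 16.7

# ...but an 80-megabit (10 megabyte) file on a 4 Mbps link still takes 20 seconds,
# because the bottleneck is the transfer rate, not the travel speed.
print(transfer_time_s(80, 4))  # 20.0
```

The signal is always near light speed; what you buy from an ISP is the rate at which bits can be pushed onto the wire.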


IP address:

The picture below illustrates two computers connected to the Internet: your computer with its IP address and another computer with its IP address. The Internet is represented as an abstract object in between.


An Internet Protocol address (IP address) is a numerical label assigned to each device (e.g., computer, printer) participating in a computer network that uses the Internet Protocol for communication. An IP address serves two principal functions: host or network interface identification, and location addressing. A name indicates what we seek. An address indicates where it is. A route indicates how to get there. The designers of the Internet Protocol defined an IP address as a 32-bit number, and this system, known as Internet Protocol Version 4 (IPv4), is still in use today. However, because of the growth of the Internet and the predicted depletion of available addresses, a new version of IP (IPv6), using 128 bits for the address, was developed in 1995. IP addresses are usually written and displayed in human-readable notations, such as (IPv4) and 2001:db8:0:1234:0:567:8:1 (IPv6). Each version defines an IP address differently. Because of its prevalence, the generic term IP address typically still refers to the addresses defined by IPv4. IPv4 addresses are canonically represented in dot-decimal notation, which consists of four decimal numbers, each ranging from 0 to 255, separated by dots. Each part represents a group of 8 bits (an octet) of the address. In some technical writing, IPv4 addresses may also be presented in hexadecimal, octal, or binary representations. There are about 4.3 billion IPv4 addresses in total. The class-based, legacy addressing scheme places heavy restrictions on the distribution of these addresses. TCP/IP networks are inherently router-based, and it takes much less overhead to keep track of a few networks than millions of them. The rapid exhaustion of IPv4 address space, despite conservation techniques, prompted the Internet Engineering Task Force (IETF) to explore new technologies to expand the addressing capability of the Internet. The permanent solution was deemed to be a redesign of the Internet Protocol itself.
This new generation of the Internet Protocol, intended to replace IPv4 on the Internet, was eventually named Internet Protocol Version 6 (IPv6) in 1995. The address size was increased from 32 bits to 128 bits (16 octets). This, even with a generous assignment of network blocks, is deemed sufficient for the foreseeable future. Mathematically, the new address space provides a maximum of 2^128, or about 3.403×10^38, addresses. The Domain Name System (DNS) converts domain names to IP addresses, so that users only need to specify a domain name to access a computer on the Internet instead of typing its numeric IP address. DNS servers maintain a database of domain names mapped to their corresponding IP addresses.


IP address assignment:

Internet Protocol addresses are assigned to a host either anew at the time of booting, or permanently by fixed configuration of its hardware or software. Persistent configuration is also known as using a static IP address. In contrast, in situations when the computer’s IP address is assigned newly each time, this is known as using a dynamic IP address. An Internet Service Provider (ISP) will generally assign either a static IP address (always the same) or a dynamic address (changes every time one logs on). If you connect to the Internet from a local area network (LAN) your computer might have a permanent IP address or it might obtain a temporary one from a DHCP (Dynamic Host Configuration Protocol) server. In any case, if you are connected to the Internet, your computer has a unique IP address.


Packets and protocols:

When a file is sent from one computer to another, it is broken into small pieces called packets. A typical packet contains perhaps 1,000 or 1,500 bytes. It turns out that everything you do on the Internet involves packets. For example, every Web page that you receive comes as a series of packets, and every e-mail you send leaves as a series of packets. The packets are labelled individually with their origin, destination and place in the original file, and are sent sequentially over the network. Each packet carries the information that will help it get to its destination: the sender’s IP address, the intended receiver’s IP address, something that tells the network how many packets this e-mail message has been broken into, and the number of this particular packet. When a packet arrives at a router, the router looks at the packet to see where it needs to go. Routers determine where to send information from one computer to another; they are specialized computers that send your messages, and those of every other Internet user, speeding to their destinations along thousands of pathways. The packets carry the data using the protocols of the Internet: Transmission Control Protocol/Internet Protocol (TCP/IP). Using “pure” IP, a computer first breaks down the message to be sent into small packets, each labelled with the address of the destination machine; the computer then passes those packets along to the next connected Internet machine (a router), which looks at the destination address and passes them along to the next connected machine, which looks at the destination address and passes them along, and so forth, until the packets (we hope) reach the destination machine. IP is thus a “best efforts” communication service: it does its best to deliver the sender’s packets to the intended destination, but it cannot make any guarantees.
If, for some reason, one of the intermediate computers “drops” (i.e., deletes) some of the packets, the dropped packets will not reach the destination and the sending computer will not know whether or why they were dropped. By itself, IP can’t ensure that the packets arrived in the correct order, or even that they arrived at all. That’s the job of another protocol: TCP (Transmission Control Protocol). TCP sits “on top” of IP and ensures that all the packets sent from one machine to another are received and assembled in the correct order. Should any of the packets get dropped during transmission, the destination machine uses TCP to request that the sending machine resend the lost packets, and to acknowledge them when they arrive. TCP’s job is to make sure that transmissions get received in full, and to notify the sender that everything arrived OK. Each packet is sent off to its destination by the best available route — a route that might be taken by all the other packets in the message or by none of the other packets in the message. This makes the network more efficient. First, the network can balance the load across various pieces of equipment on a millisecond-by-millisecond basis. Second, if there is a problem with one piece of equipment in the network while a message is being transferred, packets can be routed around the problem, ensuring the delivery of the entire message. Packets don’t necessarily all take the same path — they’ll generally travel the path of least resistance. That’s an important feature. Because packets can travel multiple paths to get to their destination, it’s possible for information to route around congested areas on the Internet. In fact, as long as some connections remain, entire sections of the Internet could go down and information could still travel from one section to another — though it might take longer than normal. When the packets get to you, your device arranges them according to the rules of the protocols. 
It’s kind of like putting together a jigsaw puzzle. When you send an e-mail, it gets broken into packets before zooming across the Internet. Phone calls over the Internet also convert conversations into packets using the Voice over Internet protocol (VoIP).
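The packetize-then-reassemble idea above can be sketched as toy code. This is an illustration of the concept, not a real TCP stack; the 10-byte packet size is deliberately tiny (real packets are around 1,000-1,500 bytes) so the split is visible:

```python
import random

PACKET_SIZE = 10  # bytes per packet; deliberately tiny for this sketch

def packetize(message: bytes):
    """Split a message into (sequence_number, chunk) packets."""
    return [(i, message[i:i + PACKET_SIZE])
            for i in range(0, len(message), PACKET_SIZE)]

def reassemble(packets):
    """Put packets back in order by sequence number and join them (TCP's job)."""
    return b"".join(chunk for _, chunk in sorted(packets))

message = b"Net neutrality means all packets are treated equally."
packets = packetize(message)
random.shuffle(packets)  # IP may deliver packets in any order
assert reassemble(packets) == message
print(len(packets))      # 6 packets for this 53-byte message
```

Real TCP additionally acknowledges each packet and asks the sender to retransmit any that were dropped, but the sequence-number-and-sort step is the heart of the jigsaw-puzzle reassembly described above.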


Many things can happen to packets as they travel from origin to destination, resulting in the following problems as seen from the point of view of the sender and receiver:

Low throughput:

Due to varying load from disparate users sharing the same network resources, the bit rate (the maximum throughput) that can be provided to a certain data stream may be too low for real-time multimedia services if all data streams get the same scheduling priority.

Dropped packets:

The routers might fail to deliver (drop) some packets if their data is corrupted, or if the packets arrive when the router’s buffers are already full. The receiving application may ask for this information to be retransmitted, possibly causing severe delays in the overall transmission.


Errors:

Sometimes packets are corrupted due to bit errors caused by noise and interference, especially in wireless communications and long copper wires. The receiver has to detect this and, just as if the packet were dropped, may ask for the information to be retransmitted.


Latency:

Latency is defined as the time it takes for a source to send a packet of data to a receiver, and is typically measured in milliseconds. The lower the latency (the fewer the milliseconds), the better the network performance. A packet might take a long time to reach its destination because it gets held up in long queues, or because it takes a less direct route to avoid congestion. This is different from low throughput, as the delay can build up over time even if the throughput is almost normal. In some cases, excessive latency can render an application such as VoIP or online gaming unusable. Ideally, latency is as close to zero as possible.


Jitter:

Packets from the source will reach the destination with different delays. A packet’s delay varies with its position in the queues of the routers along the path between source and destination, and this position can vary unpredictably. This variation in delay is known as jitter and can seriously affect the quality of streaming audio and/or video.

Out-of-order delivery:

When a collection of related packets is routed through a network, different packets may take different routes, each resulting in a different delay. The result is that the packets arrive in a different order than they were sent. This problem requires special additional protocols responsible for rearranging out-of-order packets to an isochronous state once they reach their destination. This is especially important for video and VoIP streams where quality is dramatically affected by both latency and lack of sequence.
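The latency and jitter notions above can be made concrete with a small Python sketch that computes both from a list of observed per-packet delays. The delay values are invented sample data, and jitter is taken here simply as the average absolute change between consecutive delays (real measurements, e.g. in RTP, use a smoothed estimator).

```python
# Per-packet one-way delays (in ms) as observed at a receiver; the
# values are invented sample data for the illustration.
delays = [40.0, 42.5, 39.0, 80.0, 41.0]

mean_latency = sum(delays) / len(delays)

# Jitter taken here as the average absolute change between consecutive delays.
jitter = sum(abs(b - a) for a, b in zip(delays, delays[1:])) / (len(delays) - 1)

print(f"mean latency: {mean_latency:.1f} ms, jitter: {jitter:.2f} ms")
```

Note how the single 80 ms outlier barely moves the mean latency but dominates the jitter, which is exactly why streaming audio and video care about both numbers.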



At their most basic level, protocols establish the rules for how information passes through the Internet. Protocols are to computers what language is to humans. Since this article is in English, to understand it you must be able to read English. Similarly, for two devices on a network to successfully communicate, they must both understand the same protocols. Without these rules, you would need direct connections to other computers to access the information they hold. You’d also need both your computer and the target computer to understand a common language. When you want to send a message or retrieve information from another computer, the TCP/IP protocols are what make the transmission possible. You’ve probably heard of several protocols on the Internet. For example, Hypertext Transfer Protocol (HTTP) is what we use to view Web sites through a browser — that’s what the http at the front of any Web address stands for. If you’ve ever used an FTP server, you relied on the File Transfer Protocol. Protocols like these and dozens more create the framework within which all devices must operate to be part of the Internet.
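As a small illustration of a protocol being a shared set of rules, the Python sketch below builds a minimal HTTP/1.1 GET request by hand. The host name example.com is just a placeholder, and real clients send additional headers; the point is only that both sides must agree on this exact format.

```python
def http_get_request(host: str, path: str = "/") -> bytes:
    """Build a minimal HTTP/1.1 GET request; browser and server must
    agree on exactly this layout for the exchange to work."""
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n"
            "\r\n").encode("ascii")

print(http_get_request("example.com").decode("ascii"))
```

Any HTTP-speaking server can parse these bytes, which is precisely what makes HTTP a protocol rather than a private convention.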


Protocol Stacks:

So your computer is connected to the Internet and has a unique address. How does it ‘talk’ to other computers connected to the Internet? An example should serve here: let’s say you want to send the message “Hello computer!” to another computer on the Internet. Obviously, the message must be transmitted over whatever kind of wire connects your computer to the Internet. Let’s say you’ve dialled into your ISP from home and the message must be transmitted over the phone line. Therefore the message must be translated from alphabetic text into electronic signals, transmitted over the Internet, then translated back into alphabetic text. How is this accomplished? Through the use of a protocol stack. Every computer needs one to communicate on the Internet, and it is usually built into the computer’s operating system (e.g. Windows, Unix, etc.). The protocol stack used on the Internet is referred to as the TCP/IP protocol stack, after the two major communication protocols used. The TCP/IP stack looks like this:

Application Protocols Layer: protocols specific to applications such as WWW, e-mail, FTP, etc.
Transmission Control Protocol Layer: TCP directs packets to a specific application on a computer using a port number.
Internet Protocol Layer: IP directs packets to a specific computer using an IP address.
Hardware Layer: converts binary packet data to network signals and back (e.g. an Ethernet network card, a modem for phone lines, etc.).

If we were to follow the path that the message “Hello computer!” takes from our computer to the destination computer, the message would start at the top of the protocol stack on your computer and work its way downwards, with each layer adding the information (such as port numbers and IP addresses) needed to deliver it, before travelling across the Internet and back up the stack on the receiving computer.


Internet layers/protocol layers:

The internet layer is a group of internetworking methods, protocols, and specifications in the Internet protocol suite that are used to transport datagrams (packets) from the originating host across network boundaries to the destination host specified by a network address (IP address), which is defined for this purpose by the Internet Protocol (IP). A common design aspect in the internet layer is the robustness principle: ‘Be liberal in what you accept, and conservative in what you send’, since a misbehaving host can deny Internet service to many other users. The internet layer of the TCP/IP model is often compared directly with the network layer (layer 3) in the Open Systems Interconnection (OSI) protocol stack. OSI’s network layer is a catch-all layer for all protocols that facilitate network functionality. The internet layer, on the other hand, is specifically a suite of protocols that facilitate internetworking using the Internet Protocol. Protocol layers exist to reduce design complexity and improve portability and support for change. Networks are organised as a series of layers or levels, each built on the one below. The purpose of each layer is to offer services required by higher levels and to shield higher layers from the implementation details of lower layers.


OSI Reference Model:


OSI consists of 7 layers of protocols, i.e., of 7 different areas in which the protocols operate. In principle, the areas are distinct and of increasing generality; in practice, the boundaries between the layers are not always sharp. The model draws a clear distinction between a service, something that an application program or a higher-level protocol uses, and the protocols themselves, which are sets of rules for providing services.



The OSI Model was developed to help provide a better understanding of how a network operates; the better you understand the model, the better you will understand networking. It is composed of seven layers, each of which is unique and supports the creation and control of data packets. The layers start with the Physical layer and end with the Application layer. The first three layers relate to network equipment: for example, switches are layer 2 devices and routers are layer 3 devices.

1. The first layer is the Physical layer and is where the data is either put onto the media or taken off the media. The media could be a network cable or a wireless link. The data is in the form of bits, and the PDU (protocol data unit) at this layer is called bits. These bits can be voltage levels that represent binary 1s and 0s. They could also be light pulses traveling on a fiber optic cable, or radio wave pulses for a wireless network.

2. The second layer is the Data Link layer and is where framing of the data takes place. The Frame is the PDU name at this layer.  The MAC (media access control) physical address is added or removed depending on which direction the data is traveling.  The MAC address is used by switches to switch the data to the appropriate computer or node that it is intended for in a LAN (local area network).

3. The third layer is the Network layer and is where the IP (internet protocol) address is added or removed and the PDU at this layer is called a Packet.  Routers operate at this level and use the IP (logical address) to route the data to the appropriate network. Network locations are found by the routers using routing tables to locate the appropriate networks.

4. The fourth layer is the Transport layer and is where the data is segmented (broken into pieces); the TCP protocol uses this layer to ensure accurate and reliable data transfer. The data segments are numbered so that proper sequencing can be determined on the receiving side in order to rebuild accurate files. The PDU name at this layer is Segment.

5. The fifth layer is the Session layer and is where the session is created, maintained, and torn down when finished.

6. The sixth layer is the Presentation layer and is where the data is formatted or decrypted into files that the user can understand.

7. The seventh layer is the Application layer and is the user interface to the network where that data is either being generated or received.
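The layered hand-off described in the list above can be sketched as encapsulation on the way down and decapsulation on the way up. This is a toy model: the bracketed header strings stand in for the real binary headers (segment, packet, frame) that each layer adds.

```python
# Layers that add a header below the application, top to bottom.
LAYERS = ["Transport", "Network", "Data Link"]

def send_down(payload: str) -> str:
    """Encapsulation: each layer wraps the data it gets from above."""
    for layer in LAYERS:
        payload = f"[{layer}]{payload}"
    return payload

def receive_up(frame: str) -> str:
    """Decapsulation: each layer strips the header addressed to it."""
    for layer in reversed(LAYERS):
        prefix = f"[{layer}]"
        assert frame.startswith(prefix), f"malformed {layer} header"
        frame = frame[len(prefix):]
    return frame

frame = send_down("Hello computer!")
print(frame)   # [Data Link][Network][Transport]Hello computer!
assert receive_up(frame) == "Hello computer!"
```

Each layer only reads and removes its own header and treats the rest as an opaque payload, which is the point made later about network cards not needing to know they are carrying e-mail.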



The Internet, like any other computer network, is defined in terms of layers; these are the often-referenced “OSI layers”. This division into layers is a logical (rather than physical) one; the data traversing the network is eventually one long series of bits, 0s and 1s. These “layers” are how we address the representation of those many bits: their grouping into clusters of bits that have meaning. The different network layers are different levels of interpretation of this large set of bits moving along the wire. Understanding the same raw traffic at different layers allows us to bridge the semantic gap between a bunch of 0s and 1s and an e-mail being sent or a web site being browsed. After all, all e-mails and browsing sessions end up as 0s and 1s on a wire. Processing those sequences of bits at different layers of abstraction is what makes the network as versatile as it is and technically manageable. In a nutshell, Internet traffic is interpreted at seven layers, where each layer introduces meaningful data objects and uses the underlying layer to transfer these objects. Each of the many components of the Internet (applications sending and receiving data, routers, modems, and wires) knows how to process data at its own layer and need not be aware of what the data represents at higher layers or of how data is processed by the lower layers.


Understanding the layered architecture of the Internet allows us to define net neutrality:

Network neutrality is the adherence to the paradigm that operation at a certain layer, by a network component (or provider) that is chartered for operating at that layer, is not influenced by interpretation of the processed data at higher layers. So network neutrality is an intended feature of the Internet. A component operating at a certain layer is not required to understand the data it processes at higher layers. The network card operating at Layer 2 does not need to know that it is sending an e-mail message (Layer 7). It only needs to know that it is sending a frame (Layer 2) with a certain opaque payload. Net neutrality is thus built into the Internet. When expanding the notion of net neutrality from the purely technical domain to the service domain, we can define network neutrality as the adherence to the paradigm that operation of a service at a certain layer is not influenced by any data other than the data interpreted at that layer, and in accordance with the protocol specification for that layer. Therefore, a service provider is said to operate in net neutrality if it provides the service in a way that is strictly “by the book”, where “the book” is the specification of the network protocol it implements as its service. Its operation is network-neutral if it is not influenced by any logic other than that of implementing the network-layer protocol it is chartered to implement.



So how do packets find their way across the Internet? Does every computer connected to the Internet know where the other computers are? Do packets simply get ‘broadcast’ to every computer on the Internet? The answer to both of the preceding questions is ‘no’. No computer knows where any of the other computers are, and packets do not get sent to every computer. The information used to get packets to their destinations is contained in routing tables kept by each router connected to the Internet. Routers are packet switches. A router is usually connected between networks to route packets between them. Each router knows about its sub-networks and which IP addresses they use. The router usually doesn’t know what IP addresses are ‘above’ it. When a packet arrives at a router, the router examines the IP address put there by the IP protocol layer on the originating computer. The router checks its routing table. If the network containing the IP address is found, the packet is sent to that network. If the network containing the IP address is not found, then the router sends the packet on a default route, usually up the backbone hierarchy to the next router. Hopefully the next router will know where to send the packet. If it does not, again the packet is routed upwards until it reaches an NSP (network service provider) backbone. The routers connected to the NSP backbones hold the largest routing tables and here the packet will be routed to the correct backbone, where it will begin its journey ‘downward’ through smaller and smaller networks until it finds its destination.
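The routing-table lookup just described can be sketched with Python’s standard ipaddress module. The table entries, next-hop names and the default route are all invented for the example; real routers use longest-prefix matching over far larger tables, which is what the `max(..., key=prefixlen)` line imitates.

```python
import ipaddress

# A toy routing table: network prefix -> next hop. All values invented.
ROUTES = {
    ipaddress.ip_network("10.1.0.0/16"): "router-A",
    ipaddress.ip_network("10.1.2.0/24"): "router-B",   # more specific subnet
}
DEFAULT_ROUTE = "upstream-backbone"    # 'send it up the hierarchy'

def next_hop(dest: str) -> str:
    """Pick the most specific matching route, or fall back to the default."""
    addr = ipaddress.ip_address(dest)
    matches = [net for net in ROUTES if addr in net]
    if not matches:
        return DEFAULT_ROUTE
    # Longest-prefix match: the most specific network wins.
    return ROUTES[max(matches, key=lambda net: net.prefixlen)]

assert next_hop("10.1.2.7") == "router-B"        # /24 beats /16
assert next_hop("10.1.9.9") == "router-A"
assert next_hop("8.8.8.8") == DEFAULT_ROUTE      # unknown: go upwards
```

The default route is exactly the “send it up the backbone hierarchy and hope the next router knows” behaviour from the paragraph above.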


Modem vs. router:

A router is a device that forwards data packets along networks. A router is connected to at least two networks, commonly two LANs or WANs or a LAN and its ISP’s network. Routers are located at gateways, the places where two or more networks connect. While connecting to a router provides access to a local area network (LAN), it does not necessarily provide access to the Internet. In order for devices on the network to connect to the Internet, the router must be connected to a modem. While the router and modem are usually separate entities, in some cases, the modem and router may be combined into a single device. This type of hybrid device is sometimes offered by ISPs to simplify the setup process.



A modem (modulator-demodulator) is a device that modulates signals to encode digital information and demodulates signals to decode the transmitted information. The goal is to produce a signal that can be transmitted easily and decoded to reproduce the original digital data. A modem is the device that provides access to the Internet: the modem connects to your ISP. Modems can be used with any means of transmitting analog signals, from light-emitting diodes to radio. A common type of modem is one that turns the digital data of a computer into a modulated electrical signal for transmission over telephone lines, which is demodulated by another modem at the receiver side to recover the digital data.

Modems which use a mobile telephone system (GPRS, UMTS, HSPA, EVDO, WiMax, etc.) are known as mobile broadband modems (sometimes also called wireless modems). Wireless modems can be embedded inside a laptop or appliance, or be external to it. External wireless modems include connect cards, USB modems for mobile broadband and cellular routers. A connect card is a PC Card or ExpressCard which slides into a PCMCIA/PC Card/ExpressCard slot on a computer. USB wireless modems use a USB port on the laptop instead of a PC Card or ExpressCard slot. A USB modem used for mobile broadband Internet is also sometimes referred to as a dongle. A cellular router may have an external datacard (AirCard) that slides into it. Most cellular routers allow such datacards or USB modems. Cellular routers may not be modems by definition, but they contain modems or allow modems to be slid into them. The difference between a cellular router and a wireless modem is that a cellular router normally allows multiple people to connect to it (since it can route data or support multipoint-to-multipoint connections), while a modem is designed for one connection.
By connecting your modem to your router (instead of directly to a computer), all devices connected to the router can access the modem, and therefore, the Internet. The router provides a local IP address to each connected device, but they will all have the same external IP address, which is assigned by your ISP.
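The address sharing described above can be sketched as a toy network address translation (NAT) table. All addresses and port numbers here are invented (203.0.113.5 is from the documentation range), and real NAT also tracks the remote endpoint and connection state; this only shows the one-external-address idea.

```python
PUBLIC_IP = "203.0.113.5"      # the single external address from the ISP

nat_table = {}                 # (private_ip, private_port) -> public port
next_port = 40000              # starting public port, an arbitrary choice

def translate(private_ip: str, private_port: int):
    """Map a private address/port to the shared public IP plus a unique port."""
    global next_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next_port
        next_port += 1
    return (PUBLIC_IP, nat_table[key])

print(translate("192.168.1.10", 51000))   # both devices appear as 203.0.113.5
print(translate("192.168.1.11", 51000))   # same public IP, different port
```

The distinct public ports are how the router knows which local device each returning packet belongs to, even though the outside world sees only one IP address.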


The figure below shows the request path and return path of an Internet connection utilizing a modem, a router and a DNS server:


In order to retrieve this article, your computer had to connect with the Web server containing the article’s file. We’ll use that as an example of how data travels across the Internet. First, you open your Web browser and connect to our Web site. When you do this, your computer sends an electronic request over your Internet connection to your Internet service provider (ISP). The ISP routes the request to a server further up the chain on the Internet. Eventually, the request will hit a domain name server (DNS). This server will look for a match for the domain name you’ve typed in. If it finds a match, it will direct your request to the proper server’s IP address. If it doesn’t find a match, it will send the request further up the chain to a server that has more information. The request will eventually come to our Web server. Our server will respond by sending the requested file in a series of packets.
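The “ask further up the chain” behaviour of DNS can be sketched as follows. This is a toy resolver: all host names and addresses are invented, and real DNS involves recursive and authoritative servers, caching and TTLs.

```python
# Each toy 'server' knows some names; unknown names are passed up the chain.
LOCAL_DNS = {"mylocal.example": "10.0.0.5"}
UPSTREAM_DNS = {"howstuffworks.example": "198.51.100.7"}

def resolve(name: str, servers=(LOCAL_DNS, UPSTREAM_DNS)) -> str:
    """Ask each server in turn, from nearest to furthest up the chain."""
    for server in servers:
        if name in server:
            return server[name]
    raise LookupError(f"no server knows {name}")

print(resolve("howstuffworks.example"))   # found by the upstream server
```

Only once the name has been turned into an IP address can the request be routed to the right Web server.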


Peer to Peer file sharing:



Peer-to-peer file sharing is different from traditional file downloading. In peer-to-peer sharing, you use a software program (rather than your Web browser) to locate computers that have the file you want. Because these are ordinary computers like yours, as opposed to servers, they are called peers. The process works like this:

•You run peer-to-peer file-sharing software (for example, a Gnutella program) on your computer and send out a request for the file you want to download.

•To locate the file, the software queries other computers that are connected to the Internet and running the file-sharing software.

•When the software finds a computer that has the file you want on its hard drive, the download begins.

•Others using the file-sharing software can obtain files they want from your computer’s hard drive.

The file-transfer load is distributed between the computers exchanging files, but file searches and transfers from your computer to others can cause bottlenecks. Some people download files and immediately disconnect without allowing others to obtain files from their system, which is called leeching. This limits the number of computers the software can search for the requested file.

Unlike other P2P download methods, BitTorrent maximizes transfer speed by gathering pieces of the file you want and downloading these pieces simultaneously from people who already have them. This process makes popular and very large files, such as videos and television programs, download much faster than is possible with other protocols.

As Peer-to-Peer (P2P) file exchange applications gain popularity, Internet service providers are faced with new challenges and opportunities to sustain and increase profitability from the broadband IP network. Due to the unique and aggressive usage of network resources by Peer-to-Peer technologies, network usage patterns are changing and provisioned capacity is no longer sufficient. Extensive use of Peer-to-Peer file exchange causes network congestion and performance deterioration, and ultimately leads to customer dissatisfaction and churn.
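The BitTorrent idea of gathering numbered pieces from whoever already has them can be sketched in Python. The peers, piece numbering and file contents are invented; real BitTorrent also verifies each piece against a hash from the torrent metadata.

```python
# Two invented peers, each holding different numbered pieces of one file.
peers = {
    "peer-1": {0: "The quick ", 2: "jumps over "},
    "peer-2": {1: "brown fox ", 3: "the lazy dog."},
}

def download(n_pieces: int) -> str:
    """Collect each piece from whichever peer has it, then join in order."""
    pieces = {}
    for have in peers.values():
        for idx, data in have.items():
            pieces.setdefault(idx, data)
    if len(pieces) != n_pieces:
        raise RuntimeError("file incomplete: a piece is missing")
    return "".join(pieces[i] for i in range(n_pieces))

print(download(4))
```

Because the pieces carry index numbers, they can be fetched from many peers at once and still be joined back into the original file.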



Please do not confuse peering with peer-to-peer file transfer. Peering is a direct connection between an ISP and a content provider (e.g. Google) that bypasses the internet backbone, while peer-to-peer is sharing files between client computers rather than downloading a file from a content provider. With peering, you get a file from the content provider at a faster speed; with P2P, you get a file from another user’s computer at a faster speed. Peering is a violation of net neutrality by the ISP, while P2P is a violation of net neutrality by consumers.


End-to-end principle:

The principle states that, whenever possible, communications protocol operations should be defined to occur at the end-points of a communications system, or as close as possible to the resource being controlled. This leads to the model of a minimal dumb network with smart terminals, a completely different model from the previous paradigm of the smart network with dumb terminals. All of the intelligence is held by producers and users, not the networks that connect them. End-to-end design of the network entails that the intelligence would be exclusively located at the edges of the Internet (i.e. with end users), and not at the core (i.e. with networks).  If the hosts need a mechanism to provide some functionality, then the network should not interfere or participate in that mechanism unless it absolutely has to. Or, more simply put, the network should mind its own business. If a network function can be implemented correctly and completely using the functionalities available on the end-hosts, that function should be implemented on the end-hosts without delegating any task to the network (i.e., intermediary nodes in between the end-hosts). Because the end-to-end principle is one of the central design principles of the Internet, and because the practical means for implementing data discrimination violate the end-to-end principle, the principle often enters discussions about net neutrality (NN). The end-to-end principle is closely related, and sometimes seen as a direct precursor to the principle of net neutrality.


The Internet is a global, interconnected and decentralised autonomous computer network. We can access the Internet via connections provided by Internet access providers (ISPs). These access providers transmit the information that we send over the Internet in so-called data “packets”. The way in which data is sent and received on the Internet can be compared to sending the pages of a book by post in lots of different envelopes. The post office can send the pages by different routes and, when they are received, the envelopes can be removed and the pages put back together in the right order. When we connect to the Internet, each one of us becomes an endpoint in this global network, with the freedom to connect to any other endpoint, whether this is another person’s computer (“peer-to-peer”), a website, an e-mail system, a video stream or whatever.

The success of the Internet is based on two simple but crucial components of its architecture:

1. Every connected device can connect to every other connected device.

2. All services use the “Internet Protocol,” which is sufficiently flexible and simple to carry all types of content (video, e-mail, messaging etc.) unlike networks that are designed for just one purpose, such as the voice telephony system.


Technical internet:

Internet is the abbreviation of the term internetwork, which describes the connection between computer networks all around the world on the basis of the same set of communication protocols. At its start in the 1960s, the Internet was a closed research network between just a few universities, intended to transmit text messages. The architectural design of the Internet was guided by two fundamental design principles: messages are fragmented into data packets that are routed through the network autonomously (end-to-end principle) and as fast as possible (best-effort principle [BE]). This entails that intermediate nodes, so-called routers, do not differentiate packets based on their content or source. Rather, routers maintain routing tables in which they store the next node that lies on the supposedly shortest path to the packet’s destination address. However, as each router acts autonomously when deciding the path along which it sends a packet, no router has end-to-end control over the path a packet takes from sender to receiver. Moreover, it is possible, even likely, that packets from the same message flow may take different routes through the network. Packets are stored in a router’s queue if they arrive at a faster rate than the rate at which the router can send out packets. If the router’s queue is full, the packet is deleted (dropped) and must be resent by the source node. Full router queues are the main reason for congestion on the Internet. However, no matter how important a data packet may be, routers always process their queue according to the first-in-first-out principle. These fundamental principles always were (and remain, in the context of the NN debate) key elements of the open Internet spirit. Essentially, they establish that all data packets sent to the network are treated equally and that no intermediate node can exercise control over the network as a whole.
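The first-in-first-out, drop-when-full behaviour described above is often called a drop-tail queue, and can be sketched directly. The capacity of three packets is an invented figure for the illustration.

```python
from collections import deque

class RouterQueue:
    """Toy drop-tail router queue: FIFO service, packets arriving at a
    full queue are dropped."""

    def __init__(self, capacity: int = 3):
        self.q = deque()
        self.capacity = capacity
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.q) >= self.capacity:
            self.dropped += 1          # queue full: drop the packet
        else:
            self.q.append(packet)

    def forward(self):
        return self.q.popleft()        # strict first-in-first-out

r = RouterQueue()
for p in ["p1", "p2", "p3", "p4", "p5"]:
    r.enqueue(p)
assert r.forward() == "p1"             # served in arrival order
assert r.dropped == 2                  # p4 and p5 arrived at a full queue
```

Note that the queue never looks inside a packet: p4 and p5 are dropped purely because they arrived late, regardless of what they contain, which is the equal-treatment point made above.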
This historic and romantic view of the Internet neglects that Quality of Service (QoS) has always been an issue for the network of networks. Over and beyond the sending of mere text messages, there is a desire for reliable transmission of information that is time critical (low latency), or for which it is desired that data packets are received at a steady rate and in a particular order (low jitter). Voice communication, for example, requires both low latency and low jitter. This desire for QoS was manifested in the architecture of the Internet as early as January 1, 1983, when the Internet was switched over to the Transmission Control Protocol / Internet Protocol (TCP/IP). In particular, the Internet protocol version 4 (IPv4), which has constituted the nuts and bolts of the Internet ever since, already contains a type of service (TOS) field in its header by which routers could prioritize packets in their queues and thereby establish QoS. However, a general agreement on how to handle data with different TOS entries was never reached, and thus the TOS field was not used accordingly. Consequently, in telecommunications engineering, research on new protocols and mechanisms to enable QoS in the Internet has flourished ever since, long before the NN debate came to life. In addition, data packets can even be differentiated solely based on what type of data they are carrying, without the need for an explicit marking in the protocol header. This is possible by means of so-called Deep Packet Inspection (DPI). All of these features are currently deployed in the Internet as we know it, and many of them have been deployed for decades. The NN debate, however, sometimes questions the existence and use of QoS mechanisms in the Internet and argues that the success of the Internet was only possible due to the BE principle. While the vision of an Internet that is based purely on the BE principle is certainly not true, some of these claims nevertheless deserve credit.
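To contrast with the FIFO behaviour, a TOS-style priority queue can be sketched with Python’s heapq. The priority values and packet names are invented; this only illustrates how a router honouring a priority field would serve its queue differently from first-in-first-out.

```python
import heapq

# Invented packets tagged with a TOS-style priority (lower value = more urgent).
arrivals = [(2, "email"), (0, "voice"), (1, "video")]

queue = []
for seq, (tos, name) in enumerate(arrivals):
    # seq breaks ties, preserving FIFO order within one priority class.
    heapq.heappush(queue, (tos, seq, name))

order = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(order)   # ['voice', 'video', 'email']
```

Under pure best effort the packets would be served as email, voice, video (arrival order); with prioritisation the latency-sensitive voice packet jumps the queue, which is precisely the behaviour the NN debate argues about.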


Commercial internet:

Another far-reaching event was the steady commercialization of the Internet in the 1990s. At about the same time, the disruptive innovation of content visualization and linkage via the Hyper Text Markup Language (HTML), the so-called World Wide Web (WWW), made the Internet a global success. Private firms began to heavily invest in backbone infrastructure and commercial ISPs provided access to the Internet, at first predominately by dial-up connections. The average data traffic per household increased sharply with the availability of broadband and rich media content (Bauer et al., 2009). According to the Minnesota Internet Traffic Studies (Odlyzko et al., 2012), Internet traffic in the US is growing annually by about 50 percent. The increase in network traffic is the consequence of the on-going transition of the Internet to a fundamental universal access technology. Media consumption using traditional platforms such as broadcasting and cable is declining and content is instead consumed via the Internet. Today the commercial Internet ecosystem consists of several players. Internet users (IUs) are connected to the network by their local access provider (ISP), while content and service providers (CSPs) offer a wide range of applications and content to the mass of potential consumers. All of these actors are spread around the world and interconnect with each other over the Internet’s backbone, which is under the control of an oligopoly of big network providers (Economides, 2005). The Internet has become a trillion dollar industry (Pélissié du Rausas et al., 2011) and has emerged from a mere network of networks to the market of markets. Much of the NN debate is devoted to the question of whether the market for Internet access should be a free market, or whether it should be regulated in the sense that some feasible revenue flows are to be prohibited.


The principal Internet services:

• E-mail: person-to-person messaging; document sharing.

• Newsgroups: discussion groups on electronic bulletin boards.

• Chatting and instant messaging: interactive conversations.

• Telnet: logging on to one computer system and doing work on another.

• File Transfer Protocol (FTP): transferring files from computer to computer.

• World Wide Web: retrieving, formatting, and displaying information (including text, audio, graphics, and video) using hypertext links.


The modern Internet was invented to be a free and open network that allows anyone with a Web connection to communicate directly with any individual or computer on that network. Over the past 25 years, the Internet has transformed the way we do just about everything. Think about the conveniences and services that wouldn’t exist without the Internet:

• instant access to information about everything

• email

• online shopping

• online social networks

• independent global news sources

• streaming movies, TV shows and music

• online banking

• video calls and videoconferencing

The Internet has evolved so quickly and works so well precisely because the technology behind the Internet is neutral. In other words, the physical cables, routers, switches, servers and software that run the Internet treat every byte of data equally. A streaming movie from Netflix shares the same crowded fiber optic cable as the pictures from your niece’s birthday. The Internet doesn’t pick favourites. That, at its core, is what net neutrality means. And that’s one of the most important reasons why you should care about it: to keep the Internet as free, open and fair as possible, just as it was designed to be.



Networking allows one computer to send information to and receive information from another. We may not always be aware of the numerous times we access information on computer networks. Certainly the Internet is the most conspicuous example of computer networking, linking millions of computers around the world, but smaller networks play a role in information access on a daily basis. We can classify network technologies as belonging to one of two basic groups. Local area network (LAN) technologies connect many devices that are relatively close to each other. Wide area network (WAN) technologies connect a smaller number of devices that can be many kilometers apart. Ethernet is a wired LAN technology, while Wi-Fi is a wireless LAN technology. WAN technologies are used to transmit data over long distances, and between different LANs and other localised computer networking architectures. Network nodes can be connected using any given technology, from circuit-switched telephone lines (DSL) through radio waves (wireless broadband/mobile broadband) to optic fibre.


Broadband network:

The ideal telecommunication network has the following characteristics: broadband, multi-media, multi-point, multi-rate and economical implementation for a diversity of services (multi-services). The Broadband Integrated Services Digital Network (B-ISDN) was intended to provide these characteristics. Asynchronous Transfer Mode (ATM) was promoted as a target technology for meeting these requirements.



A multi-media call may communicate audio, data, still images, or full-motion video, or any combination of these media. Each medium has different demands for communication quality, such as:

1. bandwidth requirement,

2. signal latency within the network, and

3. signal fidelity upon delivery by the network.

The information content of each medium may affect the information generated by other media. For example, voice could be transcribed into data via voice recognition, and data commands may control the way voice and video are presented. These interactions most often occur at the communication terminals, but may also occur within the network.


Internet access:

Internet access connects individual computer terminals, computers, mobile devices, and computer networks to the Internet, enabling users to access Internet services, such as email and the World Wide Web. Internet service providers (ISPs) offer Internet access through various technologies that offer a wide range of data signalling rates (speeds). Consumer use of the Internet first became popular through dial-up Internet access in the 1990s. By the first decade of the 21st century, many consumers in developed nations used faster, broadband Internet access technologies. As of 2014, broadband access was widespread around the world, with a global average connection speed exceeding 4 Mbit/s.


Choosing an Internet service:

It all depends on where you live and how much speed you need. Internet service providers (ISPs) usually offer different levels of speed based on your needs. If you’re mainly using the Internet for email and social networking, a slower connection might be all you need. However, if you want to download a lot of music or watch streaming movies, you’ll want a faster connection. You’ll need to do some research to find out what the options are in your area. Here are some common types of Internet service.


Dial-up:

Dial-up is generally the slowest type of Internet connection, and you should probably avoid it unless it is the only service available in your area. Like a phone call, a dial-up modem will connect you to the Internet by dialling a number, and it will disconnect when you are done surfing the Web. Unless you have multiple phone lines, you will not be able to use your land line and the Internet at the same time with a dial-up connection.

DSL (digital subscriber line):

DSL service uses a broadband connection, which makes it much faster than dial-up. Like cable Internet, DSL is a high-speed service: it provides high-speed networking over ordinary phone lines using broadband modem technology, allowing Internet and telephone service to work over the same phone line without requiring customers to disconnect either their voice or Internet connections. DSL technology theoretically supports data rates of 8.448 Mbps, although typical rates are 1.544 Mbps or lower. DSL Internet services are used primarily in homes and small businesses. Because DSL only works over a limited physical distance, it remains unavailable in many areas where the local telephone infrastructure does not support the technology, so you'll need to contact your local ISP for information about your area. DSL connects to the Internet via phone line but does not require you to have a land line at home. Unlike dial-up, it will always be on once it's set up, and you'll be able to use the Internet and your phone line simultaneously.


Cable:

Cable service connects to the Internet via cable TV, although you do not necessarily need to have cable TV in order to get it. It uses a broadband connection and can be faster than both dial-up and DSL service; however, it is only available in places where cable TV is available.


Satellite:

A satellite connection uses broadband but does not require cable or phone lines; it connects to the Internet through satellites orbiting the Earth. As a result, it can be used almost anywhere in the world, but the connection may be affected by weather patterns. A satellite connection also relays data on a delay, so it is not the best option for people who use real-time applications, like gaming or video conferencing.

3G and 4G:

3G and 4G service is most commonly used with mobile phones and tablet computers, and it connects wirelessly through your ISP’s network. If you have a device that’s 3G or 4G enabled, you’ll be able to use it to access the Internet away from home, even when there is no Wi-Fi connection. However, you may have to pay per device to use a 3G or 4G connection, and it may not be as fast as DSL or cable.

Wireless hotspots:

If you’re out and about with an internet device like a laptop, tablet or smartphone, you might want to connect at a wireless hotspot. Wireless ‘hotspots’ are places like libraries and cafés, which offer you free access to their broadband connection (Wi-Fi). You may need to be a member of the library or a customer at a café to get the password for the wireless connection.


Wired vs. wireless internet access:

A wired network connects devices to the Internet or other network using cables. The most common wired networks use cables connected to Ethernet ports on the network router on one end and to a computer or other device on the cable's opposite end. A wireless local-area network (LAN) uses radio waves to connect devices such as laptops to the Internet and to your business network and its applications. When you connect a laptop to a Wi-Fi hotspot at a cafe, hotel, airport lounge, or other public place, you're connecting to that business's wireless network. Almost all of the discussion surrounding net neutrality has been confined to wired (that is, cable, DSL and fibre) broadband in the U.S., while in India most internet access is wireless mobile broadband. India has an unusually high mobile-to-fixed broadband ratio of 4:1 and only 15.2 million wired broadband connections in a country of 1.25 billion, with a fixed broadband penetration of 1.2 per 100 against the world average of 9.4 per 100. The Open Internet Order by the FCC adopted definitions for "fixed" and "mobile" Internet access service. It defined "fixed broadband Internet access service" to expressly include "broadband Internet access service that serves end users primarily at fixed endpoints using stationary equipment … fixed wireless services (including fixed unlicensed wireless services), and fixed satellite services." It defined "mobile broadband Internet access service" as "a broadband Internet access service that serves end users primarily using mobile stations." So fixed internet access includes wired and wireless technology, while mobile internet access is always wireless. The transparency rule applies equally to both fixed and mobile broadband Internet access service, but the no-blocking rule applied a different standard to mobile broadband Internet access services, and mobile Internet access service was excluded from the unreasonable discrimination rule.


Wired network | Wireless network
Consumers use cable (cable TV), copper wire (DSL) or fibre-optic lines to connect to the internet | Consumers use radio waves to connect to the internet via a 3G/4G data card containing a modem (mobile broadband) or through Wi-Fi using a LAN
Large capacity of data transmission; volume uncapped | Requires the use of spectrum, a scarce public resource; limited capacity of data transmission; restrictive volume caps
Multiple simultaneous users do not significantly affect speed | Multiple simultaneous users significantly reduce speed
The majority of the American population uses wired networks | The majority of the Indian population uses wireless networks
The net neutrality debate mainly involves wired transmission in America | The net neutrality debate mainly involves wireless transmission in India
Wired connection speed is near maximum throughput | Wireless connection speed will be less than the maximum throughput due to various factors reducing signal strength
Wired connections generally have faster internet speeds | Wireless connections generally have slower internet speeds
You have to access the internet at a fixed point | You can move around with the device within the network coverage area
Voice and video quality are not significantly affected by network congestion | Voice and video quality are significantly affected by network congestion


What is spectrum?

Spectrum in wireless telephone/internet transmission is the radio frequency spectrum that ranges from very low frequency radio waves at around 10 kHz (30 kilometres wavelength) up to 100 GHz (3 millimetres wavelength). The radio spectrum is divided into frequency bands reserved for a single use or a range of compatible uses. Within each band, individual transmitters often use separate frequencies, or channels, so they do not interfere with each other. Because there are so many competing uses for wireless communication, strict rules are necessary to prevent one type of transmission from interfering with the next. And because spectrum is limited (there are only so many frequency bands), governments must oversee appropriate licensing of this valuable resource to facilitate use in all bands. Governments spend a considerable amount of time allocating particular frequencies for particular services, so that one service does not interfere with another. These allocations are agreed internationally, so that interference across borders, as well as between services, is minimised. Not all radio frequencies are equal. In general, lower frequencies can reach further beyond the visible horizon and are better at penetrating physical obstacles such as rain or buildings. Higher frequencies have greater data-carrying capacity, but less range and ability to pass through obstacles. For example, mobile broadband uses spectrum from 225 MHz to 3700 MHz, while Wi-Fi uses the 2.4 and 5 GHz bands. Capacity is also dependent on the amount of spectrum a service uses, i.e. the channel bandwidth. For many wireless applications, the best trade-off of these factors occurs in the frequency range of roughly 400 MHz to 4 GHz, and there is great demand for this portion of the radio spectrum.
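The frequency-wavelength figures quoted above (10 kHz at about 30 km, 100 GHz at about 3 mm) follow from the relation wavelength = c / frequency. A minimal sketch in Python (the function name is mine):

```python
# Wavelength of a radio wave from its frequency: lambda = c / f
C = 299_792_458  # speed of light in metres per second

def wavelength_m(frequency_hz):
    """Return the wavelength in metres for a given frequency in hertz."""
    return C / frequency_hz

# The two extremes of the radio spectrum mentioned above:
print(wavelength_m(10e3))    # 10 kHz -> roughly 30 km
print(wavelength_m(100e9))   # 100 GHz -> roughly 3 mm
```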


All communication devices that use digital radio transmissions operate in a similar way. A transmitter generates a signal that contains encoded voice, video or data at a specific radio frequency, and this is radiated into the environment by an antenna (also known as an aerial). This signal spreads out in the environment, of which a very small portion is captured by the antenna of the receiving device, which then decodes the information. The received signal is incredibly weak — often only one part in a trillion of what was transmitted. In the case of a mobile phone call, a caller’s voice is converted by the handset into digital data, transmitted via radio to the network operator’s nearest tower or base station, transferred to another base station serving the recipient’s location, and then transmitted again to the recipient’s phone, which converts the signal back into audio through the earpiece. There are a number of standards for mobile phones and base stations, such as GSM, WCDMA and LTE, which use different methods for coding and decoding, and ensure that users can only receive voice calls and data that are intended for them.


The bandwidth of a radio signal is the difference between the upper and lower frequencies of the signal. For example, in the case of a voice signal with a minimum frequency of 200 hertz (Hz) and a maximum frequency of 3,000 Hz, the bandwidth is 2,800 Hz (roughly 3 kHz). The amount of bandwidth needed for 3G services can be as much as 15-20 MHz, whereas 2G services use a bandwidth of 30-200 kHz; hence 3G requires far more bandwidth. Do not confuse the bandwidth of 2G/3G spectrum with the bandwidth of internet transmission, i.e. internet speed.
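The voice-signal example works out as follows (a trivial sketch; the function name is mine):

```python
def signal_bandwidth_hz(f_min_hz, f_max_hz):
    """Bandwidth of a signal: the difference between its upper and lower frequencies."""
    return f_max_hz - f_min_hz

# The voice signal from the example: 200 Hz to 3,000 Hz
print(signal_bandwidth_hz(200, 3_000))  # 2800 Hz, i.e. roughly 3 kHz
```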


What is Broadband?

Broadband is a technology that transmits data at high speed along cables, ISDN/DSL (digital subscriber) lines and mobile phone networks. The most common type of broadband is ADSL (carried along phone lines), though cable (using new fibre-optic cables) and mobile broadband (using 3G and 4G mobile reception) are hot contenders to topple ADSL's dominance. ADSL broadband comes from your local telephone exchange, through a fixed-line access network made of copper wires. These are the telephone lines that you see in the street. The lines in the street connect to the wiring inside your house and provide you with an internet and phone connection through the socket on the wall. Unlike the copper wires of an ADSL connection, cables are partially made of fibre-optic material, which allows for much faster broadband speeds and increased reliability. The other advantage of cable is that it also allows for the transmission of audio and visual signals, which means you can get both landline and digital TV services from your cable broadband provider. Mobile broadband uses 3G and 4G mobile phone technology, made possible by two complementary technologies, HSDPA and HSUPA (high-speed download and upload packet access, respectively).


Broadband provides improved access to Internet services such as:

1. Faster World Wide Web browsing

2. Faster downloading of documents, photographs, videos, and other large files

3. Telephony, radio, television, and videoconferencing

4. Virtual private networks and remote system administration

5. Online gaming, especially massively multiplayer online role-playing games which are interaction-intensive


Broadband technologies supply considerably higher bit rates than dial-up, generally without disrupting regular telephone use. Various minimum data rates and maximum latencies have been used in definitions of broadband, ranging from 64 kbit/s up to 4.0 Mbit/s. In 1988 the CCITT standards body defined “broadband service” as requiring transmission channels capable of supporting bit rates greater than the primary rate which ranged from about 1.5 to 2 Mbit/s.  A 2006 Organization for Economic Co-operation and Development (OECD) report defined broadband as having download data transfer rates equal to or faster than 256 kbit/s.  And in 2015 the U.S. Federal Communications Commission (FCC) defined “Basic Broadband” as data transmission speeds of at least 25 Mbit/s downstream (from the Internet to the user’s computer) and 3 Mbit/s upstream (from the user’s computer to the Internet). The trend is to raise the threshold of the broadband definition as higher data rate services become available.
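The FCC's 2015 "Basic Broadband" thresholds (at least 25 Mbit/s downstream and 3 Mbit/s upstream) can be expressed as a simple check. This is an illustrative sketch of the definition, not an official tool:

```python
FCC_2015_DOWN_MBPS = 25.0  # minimum downstream rate per the 2015 definition
FCC_2015_UP_MBPS = 3.0     # minimum upstream rate per the 2015 definition

def is_basic_broadband(down_mbps, up_mbps):
    """True if a connection meets the FCC's 2015 'Basic Broadband' definition."""
    return down_mbps >= FCC_2015_DOWN_MBPS and up_mbps >= FCC_2015_UP_MBPS

print(is_basic_broadband(25, 3))   # True: meets both thresholds exactly
print(is_basic_broadband(10, 1))   # False: a typical DSL line falls short
```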


Broadband infrastructure:

Proponents of net neutrality regulations say network operators have continued to under-invest in infrastructure. However, according to Copenhagen Economics, US investment in telecom infrastructure is 50 percent higher than that of the European Union. As a share of GDP, US broadband investment trails only the UK and South Korea, and only slightly, while exceeding Japan, Canada, Italy, Germany, and France sizably. On broadband speed, Akamai reported that the US trails only South Korea and Japan among its major trading partners, and trails only Japan in the G-7 in both average peak connection speed and the percentage of the population connecting at 10 Mbit/s or higher, but is substantially ahead of most of its other major trading partners. The White House reported in June 2013 that U.S. connection speeds are the fastest compared to other countries with either a similar population or land mass. Broadband speeds in the United States, both wired and wireless, are significantly faster than those in Europe; broadband investment in the United States is several multiples that of Europe; and broadband's reach is much wider in the United States, despite its much lower population density. In other words, broadband speed closely tracks investment in broadband infrastructure. I live in the small town of Daman, where the maximum internet download speed I get from any ISP is 2.5 Mbps. This is because of poor broadband infrastructure in India.


Bandwidth (internet speed):

In computer networks, bandwidth is used as a synonym for data transfer rate (internet speed), the amount of data that can be carried from one point to another in a given time period (usually a second). Network bandwidth is usually expressed in bits per second (bps); modern networks typically have speeds measured in the millions of bits per second (megabits per second, or Mbps) or billions of bits per second (gigabits per second, or Gbps). How fast your internet is depends on three factors: download speed (how fast you can retrieve something from the internet), upload speed (sending something to a remote location on the internet), and latency (lag time between each point during information transfer). Download speed is what you experience the most, and a slow one can leave you tapping your fingers for what seems like minutes before a web page shows up on your screen. If you're streaming movies from Netflix, download speed is important. The higher the number for download speed, the quicker the movie will get from the Netflix website to your computer. A movie downloaded at 15 Mbps should take one-tenth as long as on a 1.5 Mbps connection. Note that bandwidth is not the only factor that affects network performance: there are also packet loss, latency and jitter, all of which degrade network throughput and make a link perform like one with lower bandwidth. A network path usually consists of a succession of links, each with its own bandwidth, so the end-to-end bandwidth is limited to the bandwidth of the lowest-speed link (the bottleneck). Different applications require different bandwidths. This is important because some sites use much more bandwidth than others depending on their content and media. Video is one of the main ways to use a lot of bandwidth. For example, sites like Netflix and YouTube use almost half of North America's internet bandwidth during peak hours of the day (according to CNET).
An instant messaging conversation might take less than 1,000 bits per second (bps); a voice over IP (VoIP) conversation requires 56 kilobits per second (Kbps) to sound smooth and clear. Standard-definition video (480p) works at 1 megabit per second (Mbps), but HD video (720p) needs around 4 Mbps, and HDX (1080p) more than 7 Mbps. Effective bandwidth, the highest reliable transmission rate a path can provide, is measured with a bandwidth test. This rate can be determined by repeatedly measuring the time required for a specific file to leave its point of origin and successfully download at its destination.
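The download-time arithmetic above, and the timed-file-transfer method of inferring effective bandwidth, can be sketched as follows (the function names and the file sizes are mine, chosen for illustration):

```python
def download_time_s(size_megabits, rate_mbps):
    """Seconds to transfer a payload of the given size at the given rate."""
    return size_megabits / rate_mbps

def effective_bandwidth_mbps(size_megabits, elapsed_s):
    """Effective rate inferred from a timed file transfer."""
    return size_megabits / elapsed_s

# A hypothetical 1,500-megabit movie downloads ten times faster at 15 Mbps
# than at 1.5 Mbps:
fast = download_time_s(1_500, 15)    # 100 seconds
slow = download_time_s(1_500, 1.5)   # 1,000 seconds
print(slow / fast)

# An 80-megabit file that took 20 seconds implies a 4 Mbps effective rate:
print(effective_bandwidth_mbps(80, 20))
```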


Speed vs. latency:

There is more to an Internet connection's speed than just its bandwidth. This is especially true with satellite Internet connections, which can offer speeds of up to 15 Mbps but will still feel slow. Latency is defined as the time it takes for a source to send a packet of data to a receiver, and is typically measured in milliseconds. Latency is independent of internet speed. Consider the analogy of a car travelling at 100 mph from A to B. This might sound fast, but it gives no indication of whether the car has driven the most direct route: if direct, fine; if from A to C to D to B, the journey is going to take longer. So it is with network traffic: you might have a fast Internet connection, but if the route between the user's computer and the server being accessed is indirect, response times will be slower. Latency is a true indicator of whether network traffic has taken the shortest possible route. The lower the latency (the fewer the milliseconds), the better the network performance. Together, latency and bandwidth define the speed and capacity of a network. Network latency is the term used to indicate any kind of delay that happens in data communication over a network. Network connections in which small delays occur are called low-latency networks, whereas network connections which suffer from long delays are called high-latency networks. High latency creates bottlenecks in any network communication. It prevents the data from taking full advantage of the network pipe and effectively decreases the communication bandwidth. The impact of latency on network bandwidth can be temporary or persistent based on the source of the delays. On DSL or cable Internet connections, latencies of less than 100 milliseconds (ms) are typical and less than 25 ms desired. Satellite Internet connections, on the other hand, average 500 ms or higher latency. Wireless mobile broadband latency varies from around 80 ms (LTE) to 125 ms (HSPA).
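Latency and bandwidth combine into the total time a transfer takes: roughly the latency plus the payload size divided by the rate. The sketch below, using the typical figures quoted above, shows why a 15 Mbps satellite link can still feel slower than a 5 Mbps DSL line for small requests (the function name and the 100 kB request size are mine):

```python
def transfer_time_s(size_megabits, rate_mbps, latency_ms):
    """Rough end-to-end time for one transfer: propagation delay plus serialization."""
    return latency_ms / 1000.0 + size_megabits / rate_mbps

# A small web request of about 100 kB (0.8 megabits):
satellite = transfer_time_s(0.8, 15, 500)  # high bandwidth, high latency
dsl = transfer_time_s(0.8, 5, 25)          # lower bandwidth, low latency
print(round(satellite, 3), round(dsl, 3))  # satellite takes longer despite more bandwidth
```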

How to measure latency:

Ping Command:

One of the first things to try when your connection doesn't seem to be working properly is the ping command. Open a Command Prompt window from your Start menu and run ping followed by a hostname or IP address. This command sends several packets to the address you specify, and the remote host responds to each packet it receives. In the output, you can see the percentage of packet loss and the time each packet takes. Ping cannot perform accurate measurements, principally because it uses the ICMP protocol, which is intended only for diagnostic or control purposes and differs from real communication protocols such as TCP. Furthermore, routers and ISPs might apply different traffic-shaping policies to different protocols. For more accurate measurements it is better to use specific software (for example: lft, paketto, hping, superping.d, NetPerf, IPerf).
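Because ping relies on ICMP, one common alternative is to time a TCP connection instead: the three-way handshake takes one round trip, and TCP traffic is shaped like real application traffic. A minimal sketch in Python (the function name is mine; this measures TCP connection setup time, not ICMP echo time):

```python
import socket
import time

def measure_tcp_latency(host, port, timeout=2.0):
    """Approximate round-trip latency in milliseconds via the TCP handshake.

    Unlike ping, this uses TCP, so it is subject to the same traffic-shaping
    policies as ordinary application traffic.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; we only wanted the handshake timing
    return (time.perf_counter() - start) * 1000.0

# Example (requires network access):
# print(measure_tcp_latency("example.com", 80))
```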


A very good example of when bandwidth directly correlates with speed is downloading a file across the network or Internet. Greater bandwidth means that more of the file is being transferred at any given time, so the file downloads faster. This also applies when you are browsing the Internet, as greater bandwidth results in web pages loading faster and video streaming more smoothly. But in certain cases, speed and bandwidth do not mean the same thing. This is true of real-time applications like VoIP or online gaming. In these cases, latency or response time is more important than having more bandwidth. Even if you have a lot of bandwidth, you may experience choppy voice transmission or response lag if your latency is too high. Upgrading your bandwidth would probably not help, since the extra capacity would go unused. Latency cannot be improved as easily: it requires minimizing noise as well as the time it takes for packets to move from source to destination and back. To obtain the best possible speed for your network or Internet connection, it is not enough to have a high-bandwidth connection; it is also important that your latency is low, so that information reaches you quickly enough. This only matters, though, if you have enough bandwidth, as low latency without enough bandwidth would still result in a very slow connection.


The speed at which websites download from the Internet is dependent on the following factors:

1. Web design:

•Design of the web page – number of graphics and their size, use of frames and tables

•Size of the web page – overall length of page. Note: having valid, compliant HTML/CSS coding on your website will allow your browser to render the page much more quickly

2. Your browsing history:

•Whether or not you have ever accessed the site before. If you have accessed it recently, the files may be in your cache and the site will load more quickly the second and subsequent times.

•How full your web browser cache is – you may need to clear your cache if you’ve set it to only reserve a small amount of space.

3. Your computer configuration and settings:

•How much memory you have in your computer – the more RAM the better.

•The size of your network buffer – most overlooked setting, have your IT staff review the settings.

•How fragmented the data on your hard drive is – you may need to run a defragment program.

•The number of programs you have running simultaneously while downloading. Running multiple programs hogs valuable RAM space.

•Cookies should be cleared regularly (bi-weekly or monthly) to help reduce the load on your browser, which would otherwise slow down your performance.

4. The network used to access the site:

•Speed of your connection to the Internet – your modem/cable/DSL/wireless speed.

•Quality of your telephone/broadband line – bad connections mean slower transmissions.

•Access speed on the server where the site is hosted – if the site is hosted on a busy server, it may slow down access speed.

•How much traffic there is to the site at the same time you are trying to access it.

•The load on the overall network at your ISP – how busy it is.

Any or all of the above can slow download time. Web designers only have control over the first two items!

5. Limitations of your computer:

There are also ways you can address the limitations of your computer and improve your speeds; here are a few:

• Update to the latest Web browser and Operating System versions.

•Clear out your cache: Old information retained by your web browser may be making it perform slower than it could.

•Reformatting your hard drive: Although technical in nature, by reloading your Operating System, you will be able to get rid of unnecessary files that linger around your computer.

•Change your ISP: As drastic as this may sound, some providers oversell their services. As a result, they simply cannot supply their users with the speeds needed by modern-day web activities. Before you take the plunge to get a new ISP, make sure you read some reviews about them and ask your friends for recommendations.

•It may be time for an upgrade! Get a new computer! Modern software programs take up more and more resources and it is quite possible that your current hardware simply cannot keep up to date with the current standards.


There are other factors involved in internet speed:

1. End-User Hardware Issues: If you have an old router that just can’t keep up with modern speeds or a poorly configured Wi-Fi connection that’s being slowed down by interference, you won’t actually experience the connection speeds you’re paying for — and that’s not the Internet service provider’s fault.

2. Distance from ISP: The further you are away from your Internet service provider’s hardware, the weaker your signal can become. If you’re in a city, you’re likely to have a faster connection than you would in the middle of the countryside.

3. Congestion: You’re sharing an Internet connection line with many other customers from your Internet service provider, so congestion can result as all these people compete for the Internet connection. This is particularly true if all your neighbours are using BitTorrent 24/7 or using other demanding applications.

4. Time of Day: Because more people are probably using the shared connection line during peak hours — around 6pm to midnight for residential connections — you may experience slower speeds at these times.

5. Throttling: Your Internet service provider may slow down (or “throttle”) certain types of traffic, such as peer-to-peer traffic. Even if they advertise “unlimited” usage, they may slow down your connection for the rest of the month after you hit a certain amount of data downloaded. Throttling is a process by which the amount of bandwidth you use through your Internet provider is limited in some way, usually in the form of slower upload or download speeds. This is done to allow others to more effectively connect to the Internet Service Provider’s servers. Net neutrality supporters are concerned with throttling because they believe current legislation is leaving the doors open to allow ISPs to throttle based on their own discretion of what sites you visit. For example, if you plan to watch “House of Cards” in UltraHD on Netflix, but your ISP decides it’s going to “throttle” access to Netflix, you may have to settle for some grainy 720p or worse, 480p on your new giant, curved Ultra HDTV.

6. Server-Side Issues: Your download speeds don’t just depend on your Internet service provider’s advertised speeds. They also depend on the speeds of the servers you’re downloading from and the routers in between. For example, if you’re in the US and experience slowness when downloading something from a website in Europe, it may not be your Internet service provider’s fault at all — it may be because the website in Europe has a slow connection or the data is being slowed down at one of the routers in between you and the European servers.

Many factors can impact Internet connection speed, and it’s hard to know which is the precise problem. Nevertheless, in real-life usage, you’ll generally experience slower speeds than your Internet service provider advertises — if only because it’s so dependent on other people’s Internet connections.


Can your download/upload speed be affected by the number of simultaneous users on a network, wired or wireless?

Simply yes!

The more users on any network, wired or wireless, the less bandwidth available to each of them. The type of activity also has a huge impact on performance.  If everyone is only checking e-mail it’s not likely to cause slowdowns. But if you have someone trying to stream a Netflix movie and someone else running a Skype video chat you can probably forget about playing an online game as well.


When multiple connected computers or devices, such as a mobile phone and wireless router, connect simultaneously to the same network, Internet access speed can be reduced as a result of having to share the available bandwidth. Wireless connection throughput is subject to conditions such as the local radio environment, the number of devices sharing the same wireless network, the range of the wireless coverage, interference, physical obstacles and the capability of the receiving end. As a result, actual wireless connection speed will be less than the maximum throughput. In practice very few wireless networks can ever achieve their full quoted data rate; it is strongly dependent on signal strength. There are then various overheads, for TCP, IP and the wireless transport layer, including traffic that manages the connection even if you are not actively using it. These overheads include acknowledgments that need to be sent when data is received (and vice versa). Each website is served by a server connected to the network, and network bandwidth is distributed according to a website’s usage. So when the number of users is low, your connection speed will be faster. In contrast, when the number of simultaneous users is high, the network server you connect to will be congested, causing the connection speed to drop, especially for an overseas server with many simultaneous users. In summary, end-to-end data throughput also depends on the bandwidth of the connection from the web or network server to the internet. The speed of the flow depends not only on bandwidth and the number of users but also on the routers and network conditions between the two devices involved in the flow.
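Fair sharing of a link can be approximated as a simple division of its capacity among users. Real schedulers and congestion control are far more complex, but the sketch below (function names and figures are mine) shows the basic effect of simultaneous users:

```python
def per_user_mbps(link_mbps, n_users):
    """Naive equal share of a link's capacity among simultaneous users."""
    return link_mbps / n_users

def download_time_s(size_megabits, rate_mbps):
    """Seconds to transfer a payload of the given size at the given rate."""
    return size_megabits / rate_mbps

# A 100-megabit file on a 50 Mbps link:
alone = download_time_s(100, per_user_mbps(50, 1))    # 2 seconds with the link to yourself
shared = download_time_s(100, per_user_mbps(50, 10))  # 20 seconds when ten users share it
print(alone, shared)
```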


Upstream and Downstream Bandwidth:

When a device uses the Internet, information flows in two ways: to the device and from the device. When data flows to the device, the movement of information is downstream. When data flows from the device, the movement is upstream. Typical Internet processes involve more downstream usage than upstream usage; information flows to the device more than it flows from it. As a result, most Internet connections prioritize downstream bandwidth. However, for large data transfers, remote access, video chats and voice over IP calls, more upstream bandwidth is required. Many Internet routers have Quality of Service, or QoS, settings that can prioritize bandwidth usage in the case of increased upstream flow.


Multiple Users of a Single Connection:

When multiple people use a single connection, more devices consume the finite bandwidth of the connection. Therefore, each device is allocated a smaller portion of the available bandwidth. As a result, all devices may experience a slower data transfer. Some router QoS settings allow you to prioritize device bandwidth use so that certain devices have increased access to the bandwidth.



Now let me discuss two human factors, besides the obvious cost and quality factors, that drive users to choose one site over another:

1. Human intolerance for slow-loading sites

2. Human audio-visual perception


Consumer intolerance to slow-loading sites:

Video Stream Quality impacts viewer behaviour:

The Internet is radically transforming all aspects of human society by enabling a wide range of applications for business, commerce, entertainment, news and social networking. Perhaps no industry has been transformed more radically than the media and entertainment segment of the economy. As media such as television and movies migrate to the Internet, content providers, whose ranks include major media companies (e.g., NBC, CBS), news outlets (e.g., CNN), sports organizations (e.g., NFL, MLB), and video subscription services (e.g., Netflix, Hulu), face twin challenges. The first major challenge for content providers is providing a high-quality streaming experience for their viewers, where videos are available without failure, start up quickly, and stream without interruptions. A major technological innovation of the past decade that allows content providers to deliver higher-quality video streams to a global audience of viewers is the content delivery network (or CDN for short). CDNs are large distributed systems that consist of hundreds of thousands of servers placed in thousands of ISPs close to end users. CDNs employ several techniques for transporting media content from the content provider’s origin to servers at the “edges” of the Internet, where it is cached and served with higher quality to the end user. The second major challenge for a content provider is to actually monetize their video content through ad-based or subscription-based models. Content providers track key metrics of viewer behavior that lead to better monetization. Primary among these are viewer abandonment, engagement, and repeat viewership. Content providers know that reducing the abandonment rate, increasing the play time of each video watched, and enhancing the rate at which viewers return to their site increase opportunities for advertising and upselling, leading to greater revenues.
The key question is whether and by how much increased stream quality can cause changes in viewer behavior that are conducive to improved monetization. Relatively little is known from a scientific standpoint about the all-important causal link between video stream quality and viewer behavior for online media. While understanding the link between stream quality and viewer behavior is of paramount importance to the content provider, it also has profound implications for how a CDN must be architected. An architect is often faced with trade-offs concerning which quality metrics the CDN should optimize. A scientific study of which quality metrics have the most impact on viewer behavior can guide these choices. As an example of viewer behavior impacting CDN architecture, the authors performed small-scale controlled experiments on viewer behavior a decade ago that established the relative importance of having the video start up quickly and play without interruptions. These behavioral studies motivated an architectural feature called prebursting, deployed on Akamai's live streaming network, that enabled the CDN to deliver streams to a media player at higher than the encoded rate for short periods of time to fill the media player's buffer with more data more quickly, resulting in the stream starting up faster and playing with fewer interruptions. It is notable that the folklore on the importance of startup time and rebuffering was confirmed in two recent important large-scale scientific studies. The current work sheds further light on the important nexus between stream quality and viewer behavior and, importantly, provides the first evidence of a causal impact of quality on behavior.
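The prebursting arithmetic is simple: delivering at a multiple of the encoded bitrate shrinks the time needed to fill the player's startup buffer proportionally. A minimal sketch (function and parameter names are assumptions for illustration; real players adapt these values dynamically):

```python
def startup_delay(buffer_target_s, burst_factor):
    """Time to fill a player buffer holding `buffer_target_s` seconds of video
    when the CDN delivers at `burst_factor` times the encoded bitrate.
    Toy arithmetic only; real players adapt both numbers on the fly."""
    assert burst_factor >= 1.0
    return buffer_target_s / burst_factor

# Without prebursting (delivery at the encoded rate) a 4-second buffer
# takes 4 s to fill; bursting at twice the encoded rate halves that.
print(startup_delay(4.0, 1.0), startup_delay(4.0, 2.0))
```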


The authors study the impact of video stream quality on viewer behavior in a scientific, data-driven manner by using extensive traces from Akamai's streaming network that include 23 million views from 6.7 million unique viewers. They show that viewers start to abandon a video if it takes more than 2 seconds to start up, with each incremental delay of 1 second resulting in a 5.8% increase in the abandonment rate. Further, they show that a moderate amount of interruption can decrease the average play time of a viewer by a significant amount. A viewer who experiences a rebuffer delay equal to 1% of the video duration plays 5% less of the video in comparison to a similar viewer who experienced no rebuffering. Finally, the authors show that a viewer who experienced a failure is 2.32% less likely to revisit the same site within a week than a similar viewer who did not experience a failure. On average, YouTube streams 4 billion hours of video per month. That's a lot of video, but it's only a fraction of the larger online-streaming ecosystem. For video-streaming services, making sure clips always load properly is extremely challenging, and this study reveals that it's important to video providers, too. Maybe this has happened to you: you're showing a friend some hilarious video that you found online, and right before you get to the punch line, a little loading dial pops up in the middle of the screen. Buffering kills comedic timing, and according to this study it kills attention spans, too. People are pretty patient for up to two seconds. Start with, say, 100 viewers: if the video hasn't started in five seconds, about one-quarter of them are gone, and if it hasn't started in 10 seconds, almost half are gone. If a video doesn't load in time, people get frustrated and click away. This may not come as a shock, but until now it hadn't come as an empirically supported fact, either.
This is really the first large-scale study of its kind that tries to relate video-streaming quality to viewer behavior.
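A rough way to picture the reported abandonment numbers is a piecewise-linear model: no abandonment up to the 2-second threshold, then 5.8 percentage points per additional second of startup delay. This is only an illustrative fit to the quoted figures, not the study's actual model:

```python
def abandonment_rate(startup_seconds):
    """Illustrative piecewise-linear model of viewer abandonment (%):
    no abandonment up to 2 s of startup delay, then +5.8 percentage points
    per extra second, clamped to 100%. A rough fit to the quoted figures,
    not the study's fitted curve."""
    if startup_seconds <= 2.0:
        return 0.0
    return min(100.0, (startup_seconds - 2.0) * 5.8)

# Under this model a 10-second startup loses roughly 46% of viewers,
# consistent with the "almost half" figure quoted above.
print(abandonment_rate(10.0))
```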


User intolerance for slow-loading sites:

The figure above shows the abandonment rate of online video users for different types of Internet connectivity. Users with faster Internet connectivity (e.g., fiber) abandon a slow-loading video at a faster rate than users with slower Internet connectivity (e.g., cable or mobile). A "fast lane" in the Internet can irrevocably decrease the user's tolerance for the relative slowness of the "slow lane".


Voice, video and human perception:

Voice and video signals must arrive fast and in a specific sequence. Conversations become difficult if words or syllables go missing or are delayed by more than a couple of tenths of a second. Our eyes can tolerate a bit more variation in video than our ears can tolerate in voice; on the other hand, video needs much more bandwidth. The human hearing system does not tolerate these flaws well because of its acute sense of timing. Twenty milliseconds of sudden silence can disturb a conversation. Voice and video can be converted into a series of packets coded to identify their contents as requiring transmission at a regular rate. For telephony, the packet priority codes are designed to keep the conversation flowing without annoying jitter, that is, variations in when the packets are received. Similar codes help keep video packets flowing at the proper rate. In practice, these flow controls are not crucial in today's fixed broadband networks, which generally have enough capacity to transmit voice and video. But mobile networks are a different story. The Internet discards packets that arrive after a maximum delay, and it can request retransmission of missing packets. That's okay for Web pages and downloads, but real-time conversations can't wait. Software may skip a missing packet or fill the gap by repeating the previous packet. That's tolerable for vowels, which are long, even sounds, so a packet lost from the middle of "zoom" would go unnoticed. But consonants are short and sharp, so losing a packet at the end of "can't" turns it into "can." Severe congestion can cause whole sentences to vanish and make conversation impossible. Such congestion is most serious on wireless networks, but it already affects fixed broadband and backbone networks as well. Consumers frustrated by long video-buffering delays sometimes blame cable companies for intentionally throttling streaming video from companies like Netflix.
But in 2014 the Measurement Lab consortium reported that the real bottlenecks are at interconnections between Internet access providers and backbone networks.
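The gap-filling trick described above, repeating the previous packet when one goes missing, can be sketched as follows (names are illustrative; real voice codecs use far more sophisticated loss concealment):

```python
def conceal_losses(packets):
    """Replace lost packets (represented as None) by repeating the previous
    packet. This works tolerably for long, even sounds such as vowels, but
    fails for short, sharp consonants, as the text explains."""
    out, last = [], None
    for p in packets:
        if p is None:
            p = last  # repeat the previous packet; a loss at the start stays a gap
        out.append(p)
        last = p
    return out

print(conceal_losses(["zo", "o", None, "m"]))  # the lost middle of "zoom" goes unnoticed
```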


Is the Internet a common carrier?

In common law countries, common carrier is a legal classification for a person or company which transports goods and is legally prohibited from discriminating or refusing service based on the customer or the nature of the goods. The common carrier framework is often used to classify public utilities, such as electricity or water, and public transport. In the United States, there has been intense debate between some advocates of net neutrality, who believe Internet providers should be legally designated common carriers, and some Internet service providers, who believe the common carrier designation would be a heavy regulatory burden. You expect your home Internet connection to "just work" like water and electricity. But what if the electric company provided inadequate power to your Whirlpool refrigerator because Whirlpool hadn't paid a fee? And what if the water company completely cut off the flow from your Kohler faucet because it owned a stake in another faucet company? Unlike public utilities, your Internet service provider (ISP) can abuse its power to influence which Internet businesses win and lose by slowing down or even blocking sites and services. The idea that the Internet should be operated like a public "road", carrying all traffic with no discrimination against any traveller, no matter what size, shape or type, seems to many a bedrock principle. But should the Internet be regulated like other public utilities, such as water or electricity? Under FCC policy, Internet service providers such as Verizon and Comcast had to treat all content equally, including news sites, Facebook and Twitter, cloud-based business activities, role-playing games, Netflix videos, peer-to-peer music file sharing, photos on Flickr, even gambling activity and pornography. Citizens can run all manner of applications and devices, and no content provider is given preferential treatment or a faster "lane" than anyone else.
No content can be blocked by Internet service providers or charged differential rates. But it also meant that ISPs could not sell faster services to businesses willing to pay, a form of market regulation that, critics say, stifles innovation and legitimate commercial activity.


Is Internet an Information Service or a Telecommunications Service?

Another major issue in the Net Neutrality kerfuffle is whether the Internet is classified as an information service or a more regulated telecommunications service. The Internet's reclassification as an information service by the FCC in 2002 led to Verizon's successful challenge of Net Neutrality rules. Net Neutrality proponents obviously want the Internet reclassified as a telecommunications service. They feel this extra regulation will allow the principles of Net Neutrality to once again guide the concept of a free Internet. Considering that many of you have only one or two options when choosing a local ISP, regulation may ultimately be necessary to prevent monopoly abuse. If telecommunications companies are successful in instituting an Internet fast lane for video traffic, expect your Netflix subscription to increase by $5 to $10 per month, especially with Ultra HD becoming more popular. The spectre of ISPs blocking content from competing entities is another issue that may have to be solved separately from the Internet "fast lane" issue.



ISP (internet service/access provider):

Should ISPs be allowed to selectively prioritize communications between their customers and specific destinations on the internet, or should the transmission of data be done in a neutral way that does not consider the destination of a communication? Can ISPs arbitrarily assign preference to business partners or their own content? Can they charge additional fees to content providers for "priority" connections? Could they even arbitrarily block or severely degrade communications by their users to competitors such as competing Internet telephone (VoIP) companies, search engines, and online stores? For all the promise of the Internet, there is a serious threat to its potential for revitalizing democracy. The danger arises because there is, in most markets, a very small number of broadband network operators, and this may not change in the near future.


To understand what the ISPs are implying here, consider the figure above. From an economic point of view, ISPs are the operators of a two-sided market platform that connects the suppliers of content and services (CSPs) with the consumers (IUs) that demand these services. In a two-sided market, each side prefers to have many partners on the other side of the market. Thus, CSPs prefer to have access to many IUs, because these create advertisement revenues. Likewise, IUs prefer the variety that is created by many CSPs. Suppose for a minute that there were only one ISP in the world connecting CSPs with IUs. This ISP would consider these cross-side externalities and select a payment scheme for each side that maximizes its revenues. Instead of demanding the same payment from both sides, the classic result is that the platform operator chooses a lower fee for the side that is valued the most. In this vein, entry is stimulated and the added valuation can be monetized. Several real-world examples demonstrate this practice: credit card companies levy fees on merchants, not customers; dating platforms offer free subscriptions to women, not men. Sometimes even a zero payment is not enough to stimulate entry by the side that is valued the most; then the platform operator may consider paying for entry (e.g., offering free drinks to women in a club). Such two-sided pricing is currently not employed in the Internet. One of the reasons is that CSPs and IUs are usually not connected to the same ISP, as depicted in the figure above. The core of the Internet comprises several ISPs that perform different roles. More precisely, the network can be separated into (i) the customer access network: the physical connection to each household; (ii) the backhaul network, which aggregates the traffic from all connected households of a single ISP; and (iii) the backbone network: the network that delivers the aggregated traffic from and to different ISPs.
IUs are connected to a so-called access ISP which provides them with general access to the Internet. In most cases, IUs are subscribed to only one access ISP (known as single-homing) and cannot switch ISPs arbitrarily, either because they are bound by a long-term contract, or because they simply do not have a choice of ISPs in the region where they live. Conversely, CSPs are usually subscribed to more than one backbone ISP (known as multi-homing), and sometimes, as in the case of Google, even maintain their own backbone network. This severely limits the market power that each backbone ISP can exercise over the connected CSPs (Economides, 2005). The important message is that currently CSPs and IUs only pay the ISP through which they connect to the Internet. Interconnection between the backbone and access ISPs is warranted by a set of mutual agreements that are either based on bill-and-keep arrangements (peering) or volume-based tariffs (transit). In the case of transit, the access ISP has to pay the backbone ISP, and not the other way around. Consequently, IUs' subscription fees are currently the main revenue source for access ISPs. Moreover, in many countries customers predominantly pay flat fees for their access to the Internet, and thus they are not sensitive to how much traffic they generate. In addition, due to competition and fixed-mobile substitution, prices for Internet access have dropped over the years. Currently, it seems unlikely that access ISPs can escape this flat-rate trap. For example, in 2010 the big Canadian ISPs tried to return to a metered pricing scheme by imposing usage-based billing on their wholesale products. As a consequence, smaller ISPs that rely on resale and wholesale products of the big Canadian ISPs would not have been able to offer real flat rates anymore.
With the whole country at risk of losing unlimited Internet access, tremendous public protest arose, and regulators finally decided to stop the larger telecommunications providers from pursuing such plans (2011). At the same time, Internet traffic has increased, a trend often driven by a growing number of quality-demanding services. One prominent example of this development is the company Netflix. Netflix offers video-on-demand streaming of many TV shows and movies for a monthly subscription fee. According to Sandvine (2010, p. 14), already 20.6 percent of all peak-period bytes downloaded on fixed access networks in North America are due to Netflix. In total, approximately 45 percent of downstream traffic on North American fixed and mobile access networks is attributable to real-time entertainment (Sandvine, 2010, p. 12). In an effort to prepare for this flood of traffic, ISPs were and are forced to invest heavily in their networks. Such investments are always lumpy and thus periodically cause an overprovisioning of bandwidth, which, however, is soon filled up again by new content. This is the vicious circle that network operators are trying to escape. It is important to emphasize, however, that transport network equipment providers like Cisco, Alcatel-Lucent and Huawei are constantly improving the efficiency of their products (e.g., by making use of new sophisticated multiplexing methods), such that the costs per unit of bandwidth are decreasing. This partially offsets the costs that ISPs worry about. In summary, ISPs claim that their investments in the network are hardly counterbalanced by new revenues from IUs. In turn, CSPs benefit from the increased bandwidth of the customer access networks, which enables them to offer even more bandwidth-demanding services, which in turn leads to a recongestion of the network and a new need for infrastructure investments.
In the absence of additional profit prospects on the user side, access ISPs could generate extra revenue from CSPs, who are in part causing the necessity for infrastructure investments, by exercising their market power on the installed subscriber base in the sense of a two-sided market. CSPs have a high valuation for customers, consequently, the terminating access ISP demands an extra fee (over and beyond the access fee to the backbone ISP they are connected to) from the CSP for delivering its data to the IUs. This new revenue stream (the black arrows in the figure above) would clearly be considered as a violation of net neutrality.
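To make the two-sided pricing logic concrete, here is a toy linear model in Python. All numbers, the demand functions, and the cross-side weights are invented for illustration; this is not a calibrated model of the ISP market. Because CSPs here value IU reach far more than IUs value CSP variety, the revenue-maximizing platform charges IUs a low fee and CSPs a high one, exactly the asymmetry the text describes:

```python
def participation(f_iu, f_csp, iters=200):
    """Fixed point of a toy linear two-sided demand system: each side's
    entry rises with the size of the other side and falls with its own fee.
    The base demand of 100 and the 0.2/0.8 cross-side weights are invented."""
    n_iu = n_csp = 0.0
    for _ in range(iters):
        n_iu = max(0.0, 100 - f_iu + 0.2 * n_csp)   # IUs value CSP variety a little
        n_csp = max(0.0, 100 - f_csp + 0.8 * n_iu)  # CSPs value IU reach a lot
    return n_iu, n_csp

def best_fees():
    """Grid-search the revenue-maximizing fee pair for the platform."""
    return max(
        ((f_iu, f_csp) for f_iu in range(0, 101, 5) for f_csp in range(0, 101, 5)),
        key=lambda fees: sum(f * n for f, n in zip(fees, participation(*fees))),
    )

print(best_fees())  # the side whose presence is valued most (IUs) gets the lower fee
```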


The Internet contains three classes of ISPs:

1. Eyeball ISPs, such as Time Warner Cable and Comcast, specialize in delivery to hundreds of thousands of residential users, i.e., supporting last-mile connectivity.

2. Content ISPs specialize in providing hosting and network access for end users and commercial companies that offer content, such as Google and Yahoo. Typical examples are content distribution networks (CDNs).

3. Transit ISPs, such as the Tier-1 carriers Level 3, Qwest, and Global Crossing, provide transit services for other ISPs and naturally form a full-mesh topology to provide the universal accessibility of the Internet.


Evolution of the commercial internet with the all-powerful last-mile ISP:



In the early Internet, the flow of traffic (mainly emails and files) was roughly symmetrical. A packet originating at ISP A and handed off to ISP B for delivery would be balanced by a packet moving in the opposite direction. ISPs often entered into no-cost agreements to carry one another's traffic, each figuring that the amount of traffic it carried for another ISP would be matched by that other ISP carrying its own traffic. Network neutrality prevailed naturally since ISPs, compensated for bandwidth used, did not differentiate one packet from another. The more packets of any kind, the more profits for all ISPs, an economic situation that aligned nicely with customers' interest in having an expanding supply of bandwidth. Internet economics have changed considerably in recent years with the rise of behemoth for-profit content providers such as Facebook, Google, Amazon, and Netflix. These for-profit content providers did two things: they changed the Internet from a symmetric network to an asymmetric one where the vast preponderance of traffic flows from content providers to customers, and they introduced a new revenue stream, one outside the Internet and generated by advertising, online merchandizing, or payments for gaming, streaming video, and financial and other services. The Internet has evolved from a simple, symmetric network where light email and web traffic flowed between academics and researchers and the only revenues were from selling bandwidth, to an asymmetric one where traffic flows from content providers to consumers, generating massive revenues for content providers. The ISPs themselves were changing and becoming more specialized. Where eyeball ISPs serve people, transit ISPs serve the content providers and earn revenue by delivering content to consumers on behalf of the content providers. Since transit ISPs don't have direct access to consumers, they arrange with the eyeball ISPs for the last-mile delivery of content to customers.
With an imbalance in the direction of traffic and no mechanism for appropriate compensation, the previous no-cost (or zero-dollar) bilateral arrangements broke down and were replaced by paid-peering arrangements where ISPs pay one another to carry one another’s traffic. Each ISP adopts its pricing policies to maximize profit, and these pricing policies play a role in how ISPs cooperate with one another, or don’t cooperate. Profit-seeking and cost-reduction objectives often induce selfish behaviors in routing—ISPs will avoid links considered too expensive for example—thus contributing to Internet inefficiencies. Paid-peering is one ISP strategy to gain profits. For the eyeball ISPs that control access to consumers, there is another way. Charge higher prices by creating a premium class of service with faster speeds. The eyeball ISPs, however, are in a power position because, unlike transit ISPs, eyeball ISPs have essentially no competition. Content providers like Netflix are in a much weaker position than the eyeball ISPs. Content providers need ISPs much more than the ISPs need them. If Netflix were to disappear, other streaming services would rush to fill in the gap. For the ISPs, it matters little whether it’s Amazon, Hulu, or another service (and worryingly, services run by the eyeball ISPs themselves) providing streaming services. If ISPs don’t need Netflix, neither do customers. Customers unhappy with Netflix’s service can simply choose another such service. They can’t, however, normally choose a different ISP. The monopolistic power of the eyeball ISPs may soon be made stronger. Occupying a position of power and knowing customers are stuck, the eyeball ISPs can and do play hardball with content providers. This was effectively illustrated in the recent Netflix-vs-Comcast standoff when Comcast demanded Netflix pay additional charges (above what Netflix was already paying for bandwidth). 
When Netflix initially refused, Comcast customers with Netflix service started reporting download speeds so slow that some customers quit Netflix. These speed problems seemed to resolve themselves right around the time Netflix agreed to Comcast’s demands. It would have been relatively inexpensive for Comcast to add capacity. But why should it? Monopolies such as Comcast have no real incentive to upgrade their networks. There is in fact an incentive to not upgrade since a limited commodity commands a higher price than a bountiful one. By limiting bandwidth, Comcast can force Netflix and other providers to pay more or opt into the premium class. Besides charging more for fast Internet lanes, ISPs have other ways to extract revenues from content providers. What Netflix paid for in its deal with Comcast was not a fast lane in the Internet, but a special arrangement whereby Comcast connects directly to Netflix’s servers to speed up content delivery. It is important to note that this arrangement is not currently covered under conventional net neutrality, which bans fast lanes over the Internet backbone. In the Netflix-Comcast deal, Netflix’s content is being moved along a private connection and never reaches the global Internet.


Smart broadband pipes lead to more revenue for the last-mile ISP:


Choosing an Internet service provider (ISP):

Once you have decided which type of Internet access you're interested in, you can determine which ISPs in your area offer it. Then you'll need to purchase Internet service from one of the available ISPs. Talk to friends, family members, and neighbors to see which ISPs they use. Below are some things to consider as you research ISPs:



•Ease of installation

•Service record

•Technical support

•Contract terms

Although dial-up has traditionally been the least expensive option, many ISPs have raised dial-up prices to be the same as broadband. This is intended to encourage people to switch to broadband.


Bandwidth cost to ISP:

The Internet is witnessing explosive growth in demand for bulk content. Examples of bulk content transfers include downloads of music and movie files, distribution of large software packages and games, online backups of personal and commercial data, and sharing of huge scientific data repositories. Recent studies of Internet traffic in commercial backbones as well as academic and residential access networks show that such bulk transfers account for a large and rapidly growing fraction of bytes transferred across the Internet. The bandwidth costs of delivering bulk data are substantial. A recent study reported that average monthly wholesale prices for bandwidth vary from $30,000 per Gbps in Europe and North America to $90,000 in certain parts of Asia and Latin America. The high cost of wide-area network traffic means that increasingly economic rather than physical constraints limit the performance of many Internet paths. Because charging is based on peak bandwidth utilization (typically the 95th percentile over some time period), ISPs are incentivized to keep their bandwidth usage on inter-AS links much lower than the actual physical capacity. To control their bandwidth costs, ISPs are deploying a variety of ad hoc traffic shaping policies today. These policies specifically target bulk transfers, because they consume the vast majority of bytes. However, these shaping policies are often blunt and arbitrary. For example, some ISPs limit the aggregate bandwidth consumed by bulk flows to a fixed value, independent of the current level of link utilization. A few ISPs even resort to blocking entire applications. So far, these policies are not supported by an understanding of their economic benefits relative to their negative impact on the performance of bulk transfers, and thus on customer satisfaction.
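The 95th-percentile billing rule mentioned above can be sketched in a few lines. The usual practice is to sample utilization periodically (commonly every 5 minutes), discard the top 5% of samples, and bill on the highest remaining sample; exact rounding and sampling conventions vary between ISPs, and the function name here is illustrative:

```python
import math

def billable_mbps(samples):
    """95th-percentile billing: sort the periodic utilization samples,
    discard the top 5%, and bill on the highest remaining sample.
    Conventions (sampling interval, rounding) vary by ISP."""
    ordered = sorted(samples)
    idx = math.ceil(0.95 * len(ordered)) - 1
    return ordered[idx]

# A month's worth of samples where a short burst dominates:
samples = [100] * 95 + [900] * 5   # only 5% of the samples are at 900 Mbps
print(billable_mbps(samples))      # the burst falls in the free 5%: bills at 100
```

This is why ISPs tolerate short bursts but shape sustained bulk transfers: a flow that stays high for more than 5% of the samples moves the billable percentile.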


Data caps:

Internet data caps are monthly limits on the amount of data you can use over your Internet connection. When an Internet user hits that limit, different network operators take different actions, including slowing down data speeds, charging overage fees, and even disconnecting a subscriber. These caps come into play when a user either uploads or downloads data. Caps are most restrictive for wireless Internet access, but wired Internet access providers are also imposing them. Whatever the variation, all data caps have the same effect: they discourage the use of the Internet and the innovative applications it spawns. Think of the effect data caps have on visual artists, for example. Films, photographs, images of paintings, and other works of art are often data-rich, requiring significant bandwidth. These artists rely on the ability of new audiences to easily discover their work, but in a world with data caps, people may be less inclined to explore new things because of concerns about exceeding their cap. Data caps also make it impossible to do all the important things 4G LTE supposedly lets you do. Recently, T-Mobile released evidence showing that users with capped or throttled broadband use 20x-30x less broadband than users with uncapped broadband, and that 37% of subscribers don't use streaming media because they fear going over their data caps. This hurts not only the ability of consumers to use broadband to its fullest potential, but it also has serious implications for net neutrality.
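As a toy illustration of how overage fees work under a data cap (the prices, the cap, and the per-started-GB rounding are invented for the example, not any carrier's actual tariff):

```python
import math

def monthly_bill(base_fee, cap_gb, used_gb, overage_per_gb):
    """Toy model of a capped data plan: the base fee covers usage up to the
    cap, then a per-GB overage charge applies, billed per started GB.
    That rounding convention is common but not universal."""
    over = max(0.0, used_gb - cap_gb)
    return base_fee + math.ceil(over) * overage_per_gb

# Streaming one extra HD movie (~4.5 GB, an assumed size) past a 300 GB cap
# at an assumed $10 per started GB of overage doubles this month's bill:
print(monthly_bill(50, 300, 304.5, 10))
```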


Network congestion:

Users' appetite for services and applications which require continuous data exchange keeps growing. Mirroring the market evolution, the traffic conveyed on networks has been increasing continuously. Overall IP traffic is estimated by Cisco to almost quadruple by 2016 and reach 110.2 exabytes per month. One of the main objectives behind the use of traffic management is the reduction of network congestion resulting from this outstanding growth in data traffic. ISPs commonly apply differential treatment of traffic, in particular during certain times of the day, to ensure that the end user's experience is not disrupted by network congestion. Users may share access over a common network infrastructure. Since most users do not use their full connection capacity all of the time, this aggregation strategy (known as contended service) usually works well, and users can burst to their full data rate at least for brief periods. However, peer-to-peer (P2P) file sharing and high-quality streaming video can require high data rates for extended periods, which violates these assumptions and can cause a service to become oversubscribed, resulting in congestion and poor performance. The TCP protocol includes congestion-control mechanisms that automatically throttle back the bandwidth being used during periods of network congestion. This is fair in the sense that all users experiencing congestion receive less bandwidth, but it can be frustrating for customers and a major problem for ISPs. In some cases the amount of bandwidth actually available may fall below the threshold required to support a particular service such as video conferencing or live video streaming, effectively making the service unavailable. When traffic is particularly heavy, an ISP can deliberately throttle back the bandwidth available to classes of users or for particular services.
This is known as traffic shaping, and careful use can ensure a better quality of service for time-critical services even on extremely busy networks. However, overuse can lead to concerns about fairness and network neutrality, or even charges of censorship when some types of traffic are severely or completely blocked.
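Traffic shaping of this kind is commonly implemented with a token bucket; the sketch below is a minimal illustration of the idea, not any ISP's actual policy. Tokens accrue at the sustained rate up to a burst capacity, so short bursts pass untouched while sustained overuse is shaped:

```python
class TokenBucket:
    """Minimal token-bucket shaper: tokens accrue at `rate` bytes/second up to
    `capacity`; a packet is forwarded only if enough tokens are available,
    otherwise it is dropped (a real shaper would typically queue it)."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now, size):
        # Refill tokens for the elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

bucket = TokenBucket(rate=1_000, capacity=5_000)       # 1 kB/s sustained, 5 kB bursts
burst = [bucket.allow(0.0, 1_000) for _ in range(6)]   # six 1 kB packets arrive at once
print(burst)  # the stored tokens absorb the first five packets; the sixth is shaped
```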


Bandwidth hogs:

Net neutrality is shorthand for the concept that all Internet traffic should be treated equally, irrespective of the nature of the traffic. So the bytes that make up a 10KB email should be shuttled about cyberspace in the same unbiased way as the bytes that make up a 10GB HD movie. Broadband providers generally do not like the concept of net neutrality. Streaming a 10GB movie uses up a lot more bandwidth than a 10KB email. While vast, there is still a limit on the total amount of bandwidth available at any given point in time. Also, broadband providers charge end users for access. At least until recently, a user streaming a 10GB movie represented the same revenue as an individual sending a 10KB email, while using a million times the bandwidth. Get enough of those high-use consumers on your system and you will crowd out the other paying customers, who then cannot send their 10KB emails. Broadband providers have chosen several ways of dealing with the bandwidth hogs. Providers can charge end users more if they use more bandwidth. They can also slow or impede the delivery of large files, or entire classes of files, to ensure capacity is never constrained. This slowdown could frustrate the high-use consumer, who might switch to a more reliable service. The proponents of net neutrality believe that broadband providers should not be the gatekeepers for the type of content any particular individual seeks. The Internet is a great free market of ideas and commerce, and these should flow with as little regulation as possible. From a cultural and philosophical perspective, it's hard to argue with the proponents of net neutrality. From an economic standpoint, it seems fairly clear that net neutrality promotes inefficiency. Bandwidth hogs are a form of free riders. Normally, consumers pay an amount that is correlated to what they consume.
In the early days of the Internet, the technical structure of the Internet generally allowed consumers to consume as much as they wanted for a single price. If a resource has no capacity constraints, then one individual's consumption of the resource will not affect another's. If the resource has a capacity constraint, however, there may be a point at which a single user's consumption will negatively affect another's. Larry Lessig of Stanford University recognized this potential problem with the Internet. He sees the Internet as a great "commons." A commons is a public resource, consumption of which is free to the members of the community. A classic commons would be a natural area owned by the government where any farmer could take their livestock to graze. A problem can occur because consumers are not required to pay directly for their consumption. (They may pay indirectly through taxes.) Since there is no immediate cost associated with consumption, they could take as much as they want with impunity. This is the tragedy of the commons: collectively, the members of the community benefit from the collective ownership and stewardship of the commons; but individually, each is incentivized to consume as much of the commons as possible. The economic inefficiency occurs when an overconsuming consumer consumes more of the commons than he needs to obtain his optimal output. It can also occur where a disadvantaged consumer cannot consume enough to produce an economically optimal amount. The cure for the tragedy of the commons is regulating consumption or charging the user for access. Overconsumption of a depleting asset reduces the amount of the asset available to others who may need it to reach their economically optimal output. If the two parties are competitors, the overconsumption can, theoretically, harm competition.
Strategic overconsumption of a depleting asset can be a form of foreclosure: it can deprive a competitor of the ability to produce an economically optimal amount, and so serves as an artificial capacity constraint that reduces the total output of the market. To the extent a competitor must then seek higher-priced inputs to achieve the same output, the competitor’s costs have been raised. Capacity on the Internet is a vast but ultimately depletable resource at any given point in time. (The depletion is transient: once a file is downloaded, full capacity is restored.) To prevent overcrowding, broadband providers impede the flow of high-bandwidth content. Doing so discourages the consumers who use more of the bandwidth than anyone else, and allows low-volume (and therefore, under the one-price-for-all model, high-value) customers the access they need to remain on the service. By slowing these bandwidth hogs down, the broadband providers are in fact enforcing, however inartfully, an efficient allocation of bandwidth. Indeed, if the broadband providers charge the content providers, the customers, or both, the content provider best suited to provide content will bid the most for the most bandwidth, and the consumers to whom the added bandwidth has the most value will pay more, ensuring an efficient outcome.


ISP and conflict of interest:

The conflict is between internet service providers like Verizon and content providers like Netflix. Netflix wants to deliver its video content to Verizon’s customers with flawless quality, but to do that it needs a lot of bandwidth: Netflix alone currently accounts for around 30% of U.S. internet traffic, and that share is growing. Verizon and other providers see this as unsustainable and have demanded that Netflix pay for peering arrangements if it wants its traffic delivered to customers. But it’s not that simple. Verizon has its own video streaming service; in the ISP’s ideal world, Netflix wouldn’t exist and the ISP would be the provider of this type of service. So it’s difficult to trust the ISPs when they say that this is purely to ensure a stable network. There is also evidence that the disruptions are artificial and not a consequence of a network filled to capacity. It’s not clear to users whose fault this is: when users see their Netflix stream fail, they perceive a failure of Netflix, no matter where the failure actually occurred. So any network disruption destroys Netflix’s credibility as a dependable service.


ISPs bundle bandwidth with other services:

Bundling broadband with other services gives ISPs an unfair advantage over new competitive Internet services. The insidious part of the broadband business that is not being discussed in the net neutrality debate is that bundling enables the ISPs to blend pricing among broadband, TV, telephone and, in some cases, mobile services. The ISPs know that everyone needs and will subscribe to a broadband service, so they can charge whatever they can get. In fact, U.S. high-speed broadband service costs nearly three times as much as in the UK and France, and more than five times as much as in South Korea. Since broadband is the ISP’s highest-margin offering, exceeding 90% gross profit, in a bundled offering they can afford to cut the prices of other services to inhibit competition. This allows ISPs to keep raising broadband prices with impunity, while pricing their bundled TV service based on TV competition. If a new over-the-top (OTT) video service attempts to offer a competitive TV service, the ISP can simply lower the price of its own video service to break-even, or even to a small loss. OTT services cannot compete because they have no other services across which to spread their margins. As an analogy, imagine if your electric utility bundled your electric service with your television service: you couldn’t receive one without the other without large increases in price. Maintaining an open and competitive Internet is paramount not just for net neutrality (to ensure equal and open Internet access) but also to prevent ISPs from bundling the bandwidth itself, and those magnificent profit margins, with the other services they carry. New OTT video companies trying to compete with established ISP Internet services will not be able to succeed. And this is in addition to the data caps being imposed by some ISPs.


Instead of increasing their capacity, ISPs deliberately keep it scarce:

Perhaps most damaging of all, network operators would have a powerful incentive to continue to under-invest in infrastructure: they would be allowed to charge for preferential access to a resource they could deliberately keep artificially scarce.


Some examples that illustrate how an ISP could violate net neutrality principles:

•Blocking – some users could be prevented from visiting specific websites or accessing specific services, such as those of a competitor to the ISP;

•Throttling – different treatment could be given to specific sites or services, such as slower speeds for Netflix;

•Re-direction – users could be automatically redirected from one website to a competing website;

•Cross-subsidization – users of a service could be offered free or discounted access to other services;

•Paid prioritization – companies might buy priority access to an ISP’s customers (e.g., Google or Facebook could, in theory, pay ISPs to provide faster, more reliable access to their websites than to potential competitors).


From the ISP’s perspective, net neutrality places restrictions on potentially revenue-generating functionality. It may also affect how private networks co-exist with shared public networks, and its enforcement can be an important governance issue. Net neutrality is good for end users, as it ensures that all traffic is handled equitably. It is bad for ISPs who want to leverage their position as network providers to give their own services special treatment and thereby make more profit. The real debate here is whether or not ISPs should have the legal protections afforded to ‘common carriers’. In other industries (e.g., transportation, telephony), carriers are not responsible for the content of their networks: they simply provide a service, and if the traffic on their network is not legal, it’s not their problem, as they will carry anything for anyone. ISPs who want to retain common carrier status should welcome net neutrality; those willing to forgo it should then be held liable for their traffic content.


The figure below depicts internet classification:


What is an OTT?

OTT, or over-the-top, refers to applications and services which are accessible over the internet and ride on operators’ networks that offer internet access services. The best known examples of OTT are Skype, Viber, WhatsApp, e-commerce sites, Ola, and Facebook Messenger. OTTs are not bound by any regulations. An OTT provider can be defined as a service provider offering ICT (Information and Communication Technology) services that neither operates a network nor leases network capacity from a network operator. Instead, OTT providers rely on the global internet and on access network speeds (ranging from 256 kbps for messaging to 0.5–3 Mbps for video streaming) to reach the user, hence going “over the top” of a telecom service provider’s (TSP’s) network. Services provided under the OTT umbrella typically relate to media and communications and are generally free or lower in cost compared to traditional methods of delivery.


Today, users can directly access these OTT applications online from any place, at any time, using a variety of internet-connected consumer devices. The characteristics of OTT services are such that ISPs realise revenues solely from the increased data usage of internet-connected customers across various applications (henceforth, apps). The ISPs realise no other revenues, be it for carriage or bandwidth, and they are not involved in planning, selling, or enabling OTT apps. OTT providers, on the other hand, use the ISPs’ infrastructure to reach their customers and offer products/services that not only make money for them but also compete with the traditional services offered by ISPs. ISPs aside, these apps also compete with brick-and-mortar rivals (e-commerce with physical retail, app-based banking with branch banking, and so on). OTTs can impact the revenue of all three real-time application verticals – video, voice and messaging. The various non-real-time applications include e-payments, e-banking, entertainment apps, mobile location-based services and digital advertising. The table below provides a bird’s-eye view of how OTTs can potentially have an adverse impact on incumbent ISPs or other business entities.



Is the growth of OTT impacting the traditional revenue stream of ISPs?

Should OTT players pay for use of ISPs network over and above data charges paid by consumers?

The availability of Voice over Internet Protocol (“VoIP”) services offering flat-rated long distance telephone service on a monthly subscription, or per-call rates of a few pennies a minute, shows how software applications riding on top of a basic transmission link can devastate an existing business plan that anticipates ongoing, large profit margins for core services. VoIP and wireless services have adversely impacted wireline local exchange revenues as consumers migrate to a triple-play bundle of services from cable television companies offering local and long distance telephone service and Internet access coupled with their core video programming services. To retain subscribers, the incumbent telephone companies have created their own triple-play bundles at prices that generate lower margins for the voice telephony portion of the package deal. The apparent inability of ISPs to raise subscription rates and to receive payment from content providers has frustrated senior managers and motivated them to utter provocative claims that heavy users of their networks, such as Google, have become free riders: “Now what they would like to do is use my pipes free, but I ain’t going to let them do that because we have spent this capital and we have to have a return on it. So there’s going to have to be some mechanism for these people who use these pipes to pay for the portion they’re using. Why should they be allowed to use my pipes? The Internet can’t be free in that sense, because we and the cable companies have made an investment and for a Google or Yahoo! or Vonage or anybody to expect to use these pipes [for] free is nuts!” On the other hand, Airtel India CEO Gopal Vittal said during the company’s earnings conference call earlier this year that there is no evidence of VoIP cannibalisation of voice services.
Last year, Idea Cellular MD Himanshu Kapania had also said that OTT apps like Viber have had some impact on their international calling business, but on regular voice calls there was no impact. A study jointly done by A.T. Kearney and Google estimated that telecom companies will earn an additional $8 billion in revenues by 2017 due to the proliferation of data and data-based services. Charging users extra for specific apps or services will overburden them, which in turn will lead to them not using the services at all. It is also akin to breaking the Internet into pieces, which is fundamentally against what net neutrality stands for. The Internet depends on interconnectivity and on users having a seamless experience; differential pricing will destroy these basic tenets of the Internet.


ISP testing software:

Test whether your ISP is respecting net neutrality:

At a minimum, consumers deserve a complete description of what they are getting when they buy “unlimited Internet access” from an ISP. Only if they know what is going on, and who is to blame for deliberate interference, can consumers make informed choices about which ISP to prefer (to the extent they have choices among residential broadband providers) or what counter-measures they might employ. Policy-makers, too, need to understand what ISPs are actually doing in order to pierce the evasive and ambiguous rhetoric some ISPs use to describe their interference activities. Accordingly, the Electronic Frontier Foundation (EFF) is developing information and software tools intended to help subscribers test their own broadband connections.


Switzerland Network Testing Tool:

Is your ISP interfering with your BitTorrent connections? Cutting off your VoIP calls? Undermining the principles of network neutrality? To answer those questions, concerned Internet users need tools to test their Internet connections and gather evidence about ISP interference practices. After all, if it weren’t for the testing efforts of Rob Topolski, the Associated Press, and EFF, Comcast would still be stonewalling about its now-infamous BitTorrent blocking. Developed by the Electronic Frontier Foundation, Switzerland is an open source software tool for testing the integrity of data communications over networks, ISPs and firewalls. It will spot IP packets which are forged or modified between clients, inform you, and give you copies of the modified packets.


Known ISP Testing Software:

Tool | Active/Passive | Participants per Test | Platform | Protocols | Notes
Gemini | Active(?) | Bilateral | Bootable CD | ? | Uses pcapdiff
Glasnost | Active | 1.5-sided | Java applet | BitTorrent |
ICSI Netalyzr | Active | 1.5-sided | Java applet + some JavaScript | Firewall characteristics, HTTP proxies, DNS environment |
ICSI IDS | Passive | 0-sided (on the network) | IDS | Forged RSTs | Not code users can run
Google/New America MeasurementLab | Active | 2-sided | PlanetLab (server), any (client) | Any | A server platform for others’ active testing software
NDT | Active | 1.5-sided | Java applet / native app | TCP performance | A sophisticated speed test
Network Neutrality Check | Active | 1.5-sided | Java applet | No real tests yet | Real tests forthcoming
NNMA | Passive | Unilateral (currently) | Windows app | Any |
pcapdiff / tpcat | Either | Bilateral | Python app | Any | Makes manual tests easier; EFF no longer works on pcapdiff, development continues with the tpcat project
Switzerland | Passive | Multilateral | Portable Python app | Any | Sneak-preview release just spots forged/modified packets
Web Tripwires | Passive | 1.5-sided | JavaScript embed | HTTP | Must be deployed by webmasters


Find out if your ISP is slowing down your connection:

Take the Internet Health Test to check if ISPs are throttling websites. The tool checks for degradation of the internet connection and provides you with details. It is supported by three well-known open internet groups: Demand Progress, Fight for the Future and the Free Press Action Fund. They note: “This test measures whether interconnection points are experiencing problems. It runs speed measurements from your (the test user’s) ISP, across multiple interconnection points, thus detecting degraded performance.” A number of different processes are launched once you start the test. First, it checks your connection by sending data to different locations throughout the Internet. This helps you to know whether there are choke points between various connections that are slowing you down. To perform the Internet Health Test, just visit the site and click on the green “Start the test” button. This opens a new window where the test checks for signs of degradation, and reports your internet speed as well as latency. The supporting open internet organizations want as many people as possible to take the test so that everyone can know which ISPs are throttling connections. So go ahead and take the test; it takes only a minute or two, and then you’ll know whether your provider is really giving you an internet “fast lane” or not.



Quality of Service (QoS):

Internet routers forward packets according to the diverse peering and transport agreements that exist between network operators. Many networks using Internet protocols now employ quality of service (QoS). QoS is the measure of transmission quality and service availability of a network (or internetworks). Service availability is a crucial foundation element of QoS. The network infrastructure must be designed to be highly available before you can successfully implement QoS. The target for High Availability is 99.999 % uptime, with only five minutes of downtime permitted per year. The transmission quality of the network is determined by the following factors:

1. Loss—A relative measure of the number of packets that were not received compared to the total number of packets transmitted. Loss is typically a function of availability. If the network is Highly Available, then loss during periods of non-congestion would be essentially zero. During periods of congestion, however, QoS mechanisms can determine which packets are more suitable to be selectively dropped to alleviate the congestion.

2. Delay—The finite amount of time it takes a packet to reach the receiving endpoint after being transmitted from the sending endpoint. In the case of voice, this is the amount of time it takes for a sound to travel from the speaker’s mouth to a listener’s ear.

3. Delay variation (Jitter)—The difference in the end-to-end delay between packets. For example, if one packet requires 100 ms to traverse the network from the source endpoint to the destination endpoint and the following packet requires 125 ms to make the same trip, then the delay variation is 25 ms.
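To make the delay and jitter definitions above concrete, here is a minimal sketch that derives per-packet delay and delay variation from send and receive timestamps; all the timestamps are invented for illustration:

```python
# Compute per-packet one-way delay and delay variation (jitter)
# from hypothetical send/receive timestamps, in milliseconds.

send_times = [0, 20, 40, 60]       # ms at which each packet was sent
recv_times = [100, 125, 148, 172]  # ms at which each packet arrived

delays = [r - s for s, r in zip(send_times, recv_times)]
# Jitter is the difference in end-to-end delay between consecutive packets.
jitter = [abs(b - a) for a, b in zip(delays, delays[1:])]

print(delays)  # [100, 105, 108, 112]
print(jitter)  # [5, 3, 4]
```

With perfectly even pacing every jitter value would be zero; the non-zero values here are what a jitter buffer has to absorb.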


Each end station in a Voice over IP (VoIP) call uses a jitter buffer to smooth out changes in the arrival times of voice data packets. Although jitter buffers are dynamic and adaptive, they may not be able to compensate for instantaneous changes in the arrival times of packets. This can lead to jitter buffer over-runs and under-runs, both of which result in an audible degradation of call quality.


QoS technologies refer to the set of tools and techniques to manage network resources and are considered the key enabling technology for network convergence. The objective of QoS technologies is to make voice, video and data convergence appear transparent to end users. QoS technologies allow different types of traffic to contend inequitably for network resources. Voice, video, and critical data applications may be granted priority or preferential services from network devices so that the quality of these strategic applications does not degrade to the point of being unusable. Therefore, QoS is a critical, intrinsic element for successful network convergence. QoS tools are not only useful in protecting desirable traffic, but also in providing deferential services to undesirable traffic such as the exponential propagation of worms.


QoS toolset:


Implementing QoS involves combining a set of technologies defined by the Internet Engineering Task Force (IETF) and the Institute of Electrical and Electronic Engineers (IEEE). These technologies are designed to alleviate the problems caused by shared network resources and finite bandwidth. Although the concept of QoS encompasses a variety of standards and mechanisms, QoS for Windows Server 2003 IP-based networks is centered on traffic control, which includes mechanisms for prioritization and traffic shaping (the smoothing of traffic bursts). QoS can be used in any network environment in which bandwidth, latency, jitter, and data loss must be controlled for mission-critical software, such as Enterprise Resource Planning (ERP) applications, or for latency-sensitive software, such as video conferencing, IP telephony, or other multimedia applications. QoS also can be used to improve the throughput of traffic that crosses a slow link, such as a dial-up connection.


Advocates of net neutrality have proposed several methods to implement a net neutral Internet that includes a notion of quality-of-service:

1. An approach offered by Tim Berners-Lee allows discrimination between different tiers, while enforcing strict neutrality of data sent at each tier: “If I pay to connect to the Net with a given quality of service, and you pay to connect to the net with the same or higher quality of service, then you and I can communicate across the net, with that quality and quantity of service. [We] each pay to connect to the Net, but no one can pay for exclusive access to me.”

2. United States lawmakers have introduced bills that would allow quality-of-service discrimination for certain services as long as no special fee is charged for higher-quality service.


Traffic shaping:

Traffic shaping (also known as “packet shaping”) is a computer network traffic management technique which delays some or all datagrams to bring them into compliance with a desired traffic profile. Traffic shaping is used to optimise or guarantee performance, improve latency, and/or increase usable bandwidth for some kinds of packets by delaying other kinds. It is often confused with traffic policing, the distinct but related practice of packet dropping and packet marking. The most common type of traffic shaping is application-based traffic shaping. In application-based traffic shaping, fingerprinting tools are first used to identify applications of interest, which are then subject to shaping policies. Some controversial cases of application-based traffic shaping include P2P bandwidth throttling. Many application protocols use encryption to circumvent application-based traffic shaping. Another type of traffic shaping is route-based traffic shaping. Route-based traffic shaping is conducted based on previous-hop or next-hop information. If a link becomes saturated to the point where there is a significant level of contention (either upstream or downstream) latency can rise substantially. Traffic shaping can be used to prevent this from occurring and keep latency in check. Traffic shaping provides a means to control the volume of traffic being sent into a network in a specified period (bandwidth throttling), or the maximum rate at which the traffic is sent (rate limiting), or more complex criteria such as GCRA (generic cell rate algorithm). This control can be accomplished in many ways and for many reasons; however traffic shaping is always achieved by delaying packets. Traffic shaping is commonly applied at the network edges to control traffic entering the network, but can also be applied by the traffic source (for example, computer or network card) or by an element in the network. 
Traffic shaping is sometimes applied by traffic sources to ensure the traffic they send complies with a contract which may be enforced in the network by a policer. It is widely used for network traffic engineering, and appears in domestic ISPs’ networks as one of several Internet Traffic Management Practices (ITMPs). Some ISPs may use traffic shaping against peer-to-peer file-sharing networks, such as BitTorrent.
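The token-bucket mechanism behind shapers such as Linux's Token Bucket Filter can be sketched in a few lines. The `shape` helper, the rates and the packet sizes below are illustrative assumptions, not any particular implementation:

```python
# Sketch of a token-bucket traffic shaper. Sizes in bytes, times in
# seconds, rate in bytes/second; all numbers are illustrative.

def shape(packets, rate, burst):
    """Return the departure time of each packet.

    packets: list of (arrival_time, size) tuples, in arrival order
    rate:    sustained rate the shaper enforces (bytes/second)
    burst:   bucket depth, i.e. how many bytes may pass back-to-back
    """
    tokens = burst     # bucket starts full
    clock = 0.0        # time of the last departure / token refill
    departures = []
    for arrival, size in packets:
        t = max(arrival, clock)                           # cannot leave early
        tokens = min(burst, tokens + (t - clock) * rate)  # refill allowance
        if tokens < size:                  # not enough allowance yet:
            t += (size - tokens) / rate    # delay until tokens accrue
            tokens = size
        tokens -= size
        clock = t
        departures.append(t)
    return departures

# Three 1500-byte packets arriving at once against a 1000 B/s contract:
print(shape([(0.0, 1500), (0.0, 1500), (0.0, 1500)], rate=1000, burst=1500))
# -> [0.0, 1.5, 3.0]
```

The first packet passes immediately on the stored burst; the others are delayed, never dropped, which is exactly what distinguishes shaping from policing.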


Traffic Policing vs. Traffic Shaping:

The following diagram illustrates the key difference.

Traffic policing propagates bursts. When the traffic rate reaches the configured maximum rate, excess traffic is dropped (or remarked). The result is an output rate that appears as a saw-tooth with crests and troughs. In contrast to policing, traffic shaping retains excess packets in a queue and then schedules the excess for later transmission over increments of time. The result of traffic shaping is a smoothed packet output rate. Shaping implies the existence of a queue and of sufficient memory to buffer delayed packets, while policing does not. Queueing is an outbound concept; packets going out an interface get queued and can be shaped. Only policing can be applied to inbound traffic on an interface. Ensure that you have sufficient memory when enabling shaping. In addition, shaping requires a scheduling function for later transmission of any delayed packets. This scheduling function allows you to organize the shaping queue into different queues.
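The saw-tooth versus smoothed contrast can be shown with a toy per-time-slot model; the `police` and `smooth` helpers and the traffic numbers are invented for illustration:

```python
# Toy per-time-slot model of the policing vs. shaping difference.
# Arrivals are arbitrary "units of traffic" per slot; RATE is the contract.

RATE = 10

def police(arrivals):
    """Policing: forward up to RATE per slot and drop the excess,
    so the output follows the bursty arrival pattern (saw-tooth)."""
    return [min(a, RATE) for a in arrivals]

def smooth(arrivals):
    """Shaping: forward up to RATE per slot and queue the excess
    for later transmission, producing a smoothed output."""
    out, backlog = [], 0
    for a in arrivals:
        backlog += a
        sent = min(backlog, RATE)
        out.append(sent)
        backlog -= sent
    return out

bursty = [25, 0, 0, 5, 0]
print(police(bursty))  # [10, 0, 0, 5, 0]   excess traffic is lost
print(smooth(bursty))  # [10, 10, 5, 5, 0]  excess is buffered and sent later
```

Note that shaping needs the `backlog` state (the queue and its memory), while policing is stateless per slot, mirroring the buffering requirement described above.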


Queue management:

To queue something is to store it, in order, while it awaits processing. An Internet router typically maintains a set of queues, one per interface, that hold packets scheduled to go out on that interface. Historically, such queues use a drop-tail discipline: a packet is put onto the queue if the queue is shorter than its maximum size (measured in packets or in bytes), and dropped otherwise. Active queue disciplines drop or mark packets before the queue is full; typically, they maintain one or more drop/mark probabilities and probabilistically drop or mark packets even when the queue is short. A FIFO (first-in, first-out) queue works like the line-up at a supermarket checkout: the first item into the queue is the first processed, and newly arriving packets are added to the end of the queue. If the queue becomes full (and here the analogy with the supermarket stops), newly arriving packets are dropped. This is known as tail-drop. Besides FIFO queueing, there are Class-Based Queueing and Priority Queueing.
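The drop-tail discipline and its active-queue counterpart can be sketched as follows; the class names and the linear drop-probability curve are illustrative choices, not any specific router's algorithm:

```python
# Sketch of drop-tail FIFO queueing and a RED-like active variant.
import random
from collections import deque

class DropTailQueue:
    """Classic discipline: accept until full, then drop new arrivals."""
    def __init__(self, max_len):
        self.q = deque()
        self.max_len = max_len

    def enqueue(self, pkt):
        if len(self.q) >= self.max_len:
            return False              # queue full: tail-drop
        self.q.append(pkt)
        return True

    def dequeue(self):
        return self.q.popleft() if self.q else None

class EarlyDropQueue(DropTailQueue):
    """Active queue management: drop probability grows with occupancy,
    so packets may be dropped (or marked) before the queue is full."""
    def enqueue(self, pkt):
        if random.random() < len(self.q) / self.max_len:
            return False              # probabilistic early drop
        return super().enqueue(pkt)
```

An empty `EarlyDropQueue` always accepts (drop probability zero), while a nearly full one rejects most arrivals, signalling senders to slow down before hard tail-drop sets in.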


Queuing delay:

In telecommunication and computer engineering, the queuing delay (or queueing delay) is the time a job waits in a queue until it can be executed. It is a key component of network delay. In a packet-switched network, queueing delay is the sum of the delays encountered by a packet between the time of insertion into the network and the time of delivery to the addressee.  This term is most often used in reference to routers. When packets arrive at a router, they have to be processed and transmitted. A router can only process one packet at a time. If packets arrive faster than the router can process them (such as in a burst transmission) the router puts them into the queue (also called the buffer) until it can get around to transmitting them. Delay can also vary from packet to packet so averages and statistics are usually generated when measuring and evaluating queuing delay.  As a queue begins to fill up due to traffic arriving faster than it can be processed, the amount of delay a packet experiences going through the queue increases. The speed at which the contents of a queue can be processed is a function of the transmission rate of the facility. This leads to the classic delay curve. The average delay any given packet is likely to experience is given by the formula 1/(μ-λ) where μ is the number of packets per second the facility can sustain and λ is the average rate at which packets are arriving to be serviced. This formula can be used when no packets are dropped from the queue. The maximum queuing delay is proportional to buffer size. The longer the line of packets waiting to be transmitted, the longer the average waiting time is. The router queue of packets waiting to be sent also introduces a potential cause of packet loss. Since the router has a finite amount of buffer memory to hold the queue, a router which receives packets at too high a rate may experience a full queue. In this case, the router has no other option than to simply discard excess packets.
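Plugging numbers into the 1/(μ−λ) formula shows how sharply delay grows as the arrival rate approaches the service rate; the rates below are invented for illustration:

```python
# Worked example of the average-delay formula 1 / (mu - lambda).

def avg_delay(mu, lam):
    """Average time a packet spends in the system, in seconds.
    mu:  packets/second the facility can sustain
    lam: packets/second arriving to be serviced (must stay below mu)"""
    if lam >= mu:
        raise ValueError("arrival rate must stay below the service rate")
    return 1.0 / (mu - lam)

# A router that can forward 1000 packets/s:
print(avg_delay(1000, 900))  # 0.01  -> 10 ms at 90% utilisation
print(avg_delay(1000, 990))  # 0.1   -> 100 ms at 99%: the classic delay curve
```

As the text notes, the formula holds only while no packets are dropped; once λ reaches μ the queue grows without bound, which is why the function refuses that input.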


The Process of Buffering (queueing):

Routers store packets in a few different locations depending on the congestion level:

As a packet enters a router, the packet is stored inside of the ingress buffer waiting to be processed. As you can see, VoIP gets priority over other packets.


Prioritization of Network Traffic:

Prioritization of network traffic is simple in concept: give important network traffic precedence over unimportant network traffic. That leads to some interesting questions. What traffic should be prioritized? Who defines priorities? Do people pay for priority, or do they get it based on traffic type (e.g., delay-sensitive traffic such as real-time voice)? For Internet traffic, where are priorities set (at the ingress, based on customer-preassigned tags in packets, or by service-provider policies defined by service-level agreements)? Prioritization is also called CoS (class of service), since traffic is classed into categories such as high, medium, and low (gold, silver, and bronze); the lower the priority, the more “drop eligible” a packet is. E-mail and Web traffic is often placed in the lowest categories. When the network gets busy, packets from the lowest categories are dropped first. Prioritization/CoS should not be confused with QoS; it is a subset of QoS. A package-delivery service provides an analogy. You can request priority delivery for a package, and the delivery service has different levels of priority (next day, two-day, and so on). However, prioritization does not guarantee the package will get there on time; it may only mean that the delivery service handles that package before handling others. To provide guaranteed delivery, various procedures, schedules, and delivery mechanisms must be in place. The problem with network priority schemes is that lower-priority traffic may be held up indefinitely when traffic is heavy, unless there is sufficient bandwidth to handle the highest load levels. Even high-priority traffic may be held up under extreme traffic loads. One solution is to overprovision network bandwidth, which is a reasonable option given the relatively low cost of networking gear today. As traffic loads increase, router buffers begin to fill, which adds to delay. If the buffers overflow, packets are dropped.
When buffers start to fill, prioritization schemes can help by forwarding high-priority and delay-sensitive traffic before other traffic. This requires that traffic be classed (CoS) and moved into queues with the appropriate service level. One can imagine an input port that classifies traffic or reads existing tags in packets to determine class, and then moves packets into a stack of queues with the top of the stack having the highest priority. As traffic loads increase, packets at the top of the stack are serviced first.
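The stack-of-queues idea above can be sketched as a strict-priority scheduler; the class names and packet labels are illustrative:

```python
# Strict-priority (CoS) scheduler sketch.
from collections import deque

CLASSES = ["gold", "silver", "bronze"]   # highest priority first

class PriorityScheduler:
    """Always drains the highest non-empty class first. Note the caveat
    from the text: bronze can starve under sustained gold/silver load."""
    def __init__(self):
        self.queues = {c: deque() for c in CLASSES}

    def enqueue(self, cls, pkt):
        self.queues[cls].append(pkt)

    def dequeue(self):
        for cls in CLASSES:               # top of the stack of queues first
            if self.queues[cls]:
                return cls, self.queues[cls].popleft()
        return None                       # nothing queued

s = PriorityScheduler()
s.enqueue("bronze", "email")
s.enqueue("gold", "voip-frame")
print(s.dequeue())  # ('gold', 'voip-frame'): voice jumps ahead of email
```

Production schedulers usually temper strict priority with weighted or deficit round-robin precisely to avoid the starvation this sketch exhibits.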


Prioritize Packets to improve Quality:

Voice traffic competes for available bandwidth on your broadband connection; if there is not enough bandwidth, packets get dropped. VoIP media streams require a constant, uninterrupted data flow, composed of UDP packets that each carry between 10 and 30 milliseconds of sound. Ideally, each packet in a media stream is evenly spaced and of the same size; in a perfect world, a packet never arrives out of sequence or gets dropped. VoIP media packets are framed in a highly precise, performance-sensitive way. Dropped packets and packet jitter (variation in packet arrival timing) cause problems, big problems, for an ongoing call: they can cause the voices on the call to sound robotic, to cut in and out, or to go silent altogether. Most of the packet-drop problems you’ll encounter while VoIPing will be the fault of your bandwidth-limited ISP connection, the link from the ISP’s network to your broadband router.
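The framing just described fixes the packet rate and on-the-wire bandwidth of a call. As a worked example, assuming a 64 kbps codec (such as G.711) packetised at 20 ms of sound per packet over IPv4:

```python
# Bandwidth of a single VoIP media stream, assuming a 64 kbps codec
# (e.g. G.711) packetised at 20 ms of sound per packet, over IPv4.

codec_bps = 64_000                  # raw voice bit rate
frame_ms = 20                       # milliseconds of sound per packet
pps = 1000 // frame_ms              # packets per second
payload = codec_bps // 8 // pps     # voice bytes per packet
headers = 12 + 8 + 20               # RTP + UDP + IPv4 header bytes
stream_bps = (payload + headers) * 8 * pps

print(pps)         # 50 packets/s
print(payload)     # 160 bytes of sound per packet
print(stream_bps)  # 80000 -> 80 kbps on the wire, before link-layer overhead
```

Smaller frames (10 ms) double the packet rate and the header overhead; larger frames (30 ms) cut overhead but make each lost packet more audible.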


Traffic control and management:



In the Internet world, everything is packets. Managing a network means managing packets: how they are generated, routed, transmitted, reordered, fragmented, and so on. Traffic control works on packets leaving the system. Its primary objective is not to manipulate packets entering the system (although you could do that, if you really wanted to slow the rate at which you receive packets). The traffic control code operates between the IP layer and the hardware driver that transmits data on the network; in other words, it works on the lower layers of the kernel’s network stack. In fact, the traffic control code is the very code in charge of constantly furnishing packets to send to the device driver. This means that the TC module, the packet scheduler, is permanently active in the kernel. Even when you do not explicitly want to use it, it is there scheduling packets for transmission. By default, this scheduler maintains a basic queue (similar to a FIFO) in which the first packet to arrive is the first to be transmitted. At the core, TC is composed of queuing disciplines, or qdiscs, that represent the scheduling policies applied to a queue. Several types of qdisc exist: FIFO (first in, first out), FIFO with multiple queues, FIFO with hashing and round robin (SFQ), and the Token Bucket Filter (TBF), which assigns tokens to a qdisc to limit its flow rate.
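The SFQ idea mentioned above, per-flow queues serviced round-robin, can be sketched as follows; a real SFQ hashes the flow's address/port 5-tuple, whereas this toy version takes a numeric flow id directly:

```python
# SFQ-like sketch: per-flow buckets serviced round-robin.
from collections import deque

class RoundRobinQdisc:
    def __init__(self, n_buckets=4):
        self.buckets = [deque() for _ in range(n_buckets)]
        self.turn = 0

    def enqueue(self, flow_id, pkt):
        # A real SFQ hashes the 5-tuple; here flow_id picks the bucket.
        self.buckets[flow_id % len(self.buckets)].append(pkt)

    def dequeue(self):
        # Visit each bucket at most once, resuming where we left off,
        # so no single flow can monopolise the outgoing link.
        for _ in range(len(self.buckets)):
            b = self.buckets[self.turn]
            self.turn = (self.turn + 1) % len(self.buckets)
            if b:
                return b.popleft()
        return None

q = RoundRobinQdisc()
q.enqueue(0, "big-flow-1")
q.enqueue(0, "big-flow-2")
q.enqueue(1, "small-flow-1")
print(q.dequeue(), q.dequeue(), q.dequeue())
# big-flow-1 small-flow-1 big-flow-2 : the small flow is not starved
```

Even though flow 0 queued twice as much, flow 1's packet is served second, which is the fairness property SFQ aims for.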


Traffic control is a collection of mechanisms that segregate traffic into the appropriate service types and regulate its delivery to the network. Traffic control involves classifying, shaping, scheduling, and marking traffic. The fundamental technical challenge is getting the Net to carry traffic that it was never meant to handle. Internet packet switching was designed for digital file transfers between computers, and it was later adapted for e-mail and Web pages. For these purposes the digital data does not have to be delivered at a specific rate or even in a specific order, so it can be chopped into packets that are routed over separate paths to be reassembled, in leisurely fashion, at their destinations. By contrast, voice and video signals must come fast and in a specific sequence.

Classifying Traffic:

During classification, packets are separated into distinct data flows and then directed to the appropriate queue on the forwarding interface. Queues are based on service type. The algorithm that services a queue determines the rate at which traffic is forwarded from the queue.

Traffic control in Windows Server 2003 supports the following service types:

Best effort:

Best-effort is the standard service level in many IP-based networks. It is a connectionless model of delivery that provides no guarantees for reliability, delay, or other performance characteristics.

Controlled load:

Controlled load data flows are treated similarly to best-effort data flows in unloaded (uncongested) conditions. This means that a very high percentage of transmitted packets will be delivered successfully to the receiving end node, and the delay experienced by a very high percentage of packets will not greatly exceed the minimum transit delay. Controlled load service provides high-quality delivery without guaranteeing minimum latency.

Guaranteed service:

Guaranteed service provides high-quality delivery with guaranteed minimum latency. The impact of guaranteed traffic on the network is heavy, so guaranteed service is typically used only for traffic that does not adapt easily to change.

Network control:

Network control, the highest service level, is designed for network management traffic.

Qualitative service:

Qualitative service is designed for applications that require prioritized traffic handling but cannot quantify their QoS traffic requirements. These applications typically send traffic that is intermittent or burst-like in nature. For this service type, the network determines how the data flow is treated.


Engineers decided that the best way to manage traffic flow was to label each packet with codes based on the time sensitivity of the data, so routers could use them to schedule transmission. Everyone called them priority codes, but the name wasn’t meant to imply that some packets were more important than others, only that they were more perishable. It’s like the difference between a shipment of fresh fruit and one of preserves. Here’s a set of such codes that the IEEE P802.1p task force defined in 1998 for local area networks. The highest priority values are for the most time-sensitive services, with the top two slots going to network management, followed by slots for voice packets, then video packets and other traffic.


PCP   Priority      Acronym   Traffic types
1     0 (lowest)    BK        Background
0     1             BE        Best Effort
2     2             EE        Excellent Effort
3     3             CA        Critical Applications
4     4             VI        Video, < 100 ms latency and jitter
5     5             VO        Voice, < 10 ms latency and jitter
6     6             IC        Internetwork Control
7     7 (highest)   NC        Network Control
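The scheduling effect of these codes can be sketched as a strict-priority queue in which a higher-priority packet always leaves before a lower-priority one. The mapping below mirrors the table above; the sample packets are invented:

```python
import heapq

# 802.1p priority for each PCP value (PCP 1 is the lowest, PCP 7 the highest).
PCP_PRIORITY = {1: 0, 0: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 6, 7: 7}

class StrictPriorityScheduler:
    """Dequeue the highest-priority packet first; FIFO within a class."""
    def __init__(self):
        self.heap, self.seq = [], 0

    def enqueue(self, pcp, packet):
        # Negate the priority so heapq's min-heap pops the highest first;
        # the sequence number keeps arrival order within a class.
        heapq.heappush(self.heap, (-PCP_PRIORITY[pcp], self.seq, packet))
        self.seq += 1

    def dequeue(self):
        return heapq.heappop(self.heap)[2]

sched = StrictPriorityScheduler()
sched.enqueue(0, "web page")     # Best Effort
sched.enqueue(5, "voice frame")  # Voice
sched.enqueue(4, "video frame")  # Video
print(sched.dequeue())  # → voice frame
```

This captures why the codes mark perishability rather than importance: the voice frame goes first only because it spoils fastest.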

Although these codes have been accepted as potentially useful, they haven’t been widely used for wire line, fiber broadband, or the backbone Internet. Those systems generally have adequate internal capacity.


Reasons for traffic management:

The primary reason that is given by ISPs for traffic management is to prevent a small number of their customers from clogging up access to the Internet by using a disproportionate share of the available bandwidth. In this way, proponents of traffic management say that ISPs are justified in controlling the flow of data because it is necessary to maintain the quality of service that is required to ensure all users have an enjoyable browsing experience.


Traffic management techniques:

1. Data caps: A wide variety of data caps and “fair use” policies may be used by operators to implement a specific business model. In general, a data cap will be imposed to support the operator’s pricing strategy, so that the price of traffic is based on volume. Data caps are a technical measure that requires monitoring traffic volume and throttling data or charging for extra volume once a pre-defined data cap is reached. Data caps provide a price signal to end users in relation to the cost of their bandwidth consumption.

2.  Application-agnostic congestion management: To respond to network congestion, an ISP can react to daily fluctuations or unexpected network environment changes by implementing “congestion controls” at the edge of the network, where the source of the traffic (e.g. computers) slows down the transmission rate when packet loss is occurring.

3. Prioritization: An ISP might prioritize transmission of certain types of data over others (most often used to prioritize time-sensitive traffic, such as VoIP and IPTV). ISPs may be required to prioritize emergency services, and this is generally not a concern from a net neutrality perspective.

4. Differentiated throttling: The capacity available for a particular type of content (most often peer-to-peer traffic, particularly in peak times) may be restricted, which preserves capacity for the unrestricted content. Unlike application-agnostic congestion management, this technique is aimed at a specific type of content; generally a type that is bandwidth-hungry and non-time-critical.

5. Access-tiering: An ISP may prioritize a specific application or content – for a price to be paid by a content provider. By selling access to a “lane”, access providers can generate greater revenue to fund the network investments necessary to handle increasingly bandwidth-hungry services. This can be distinguished from prioritization in that access-tiering is typically open to all service providers (that can afford to pay for it) and that it is generally used to promote a particular service provider, rather than a type of content. Access-tiering has been criticized for its potential harms to innovation, particularly to start-ups unable to afford the fee. It is also commercially possible that a service prioritization arrangement could be made on an exclusive-by-service basis, to prevent competitors of the preferred content provider from purchasing a similar level of priority.

6. Blocking: End users may be prevented from using or accessing a particular website or a type of content (e.g., the blocking of VoIP traffic on a mobile data network). Blocking may be implemented to:

- inhibit competition, particularly if the access provider offers a service that competes with the service being blocked;

- manage costs, particularly where the cost of carrying a particular service or type of service places a disproportionate burden on the access provider’s network; and

- block unlawful or undesirable content, such as child abuse, viruses or spam. This may be necessary to comply with government or court orders, or done at the request of the end user. The blocking of unlawful and undesirable content generally raises few net neutrality concerns. Lawful interception measures, while not constituting “blocking”, are similarly non-controversial from a net neutrality perspective.

Specific restrictions may be applied discriminately or indiscriminately between users and they may be permanent or implemented over certain periods (e.g. peak time). The nature of the restriction will often be contractually disclosed by the ISP, so that the user is made aware that their access to a particular service will be restricted in certain circumstances.
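Technique 1 above, the data cap, reduces to metering cumulative volume and switching policy once the cap is crossed. A minimal sketch, with an invented plan (50 GB cap, 100 Mbit/s full rate, 1 Mbit/s once throttled):

```python
class DataCapMeter:
    """Track a subscriber's monthly volume; once the pre-defined cap is
    reached, report a throttled rate instead of the full rate."""
    def __init__(self, cap_bytes, full_rate, throttled_rate):
        self.cap = cap_bytes
        self.full_rate = full_rate            # Mbit/s before the cap
        self.throttled_rate = throttled_rate  # Mbit/s after the cap
        self.used = 0

    def record(self, nbytes):
        self.used += nbytes

    def allowed_rate(self):
        return self.full_rate if self.used < self.cap else self.throttled_rate

meter = DataCapMeter(cap_bytes=50 * 10**9, full_rate=100, throttled_rate=1)
meter.record(49 * 10**9)
print(meter.allowed_rate())  # → 100 (still under the cap)
meter.record(2 * 10**9)
print(meter.allowed_rate())  # → 1 (cap exceeded: throttled)
```

An operator charging for extra volume instead of throttling would swap the rate switch for a billing entry; the metering is the same.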


As critics point out, there is a fine line between correctly applying traffic management to ensure a high quality of service and wrongly interfering with Internet traffic to limit applications that threaten the ISP’s own lines of business. For example, the VoIP application Skype uses peer-to-peer technology to provide free phone calls, which compete directly with the phone services offered by many ISPs. It would be easy at a technical level for an ISP to use its traffic management equipment to limit a customer’s Skype experience in an effort to protect its own fixed or mobile telephony services.


The figure below shows an Internet plan charging different prices for different sites at different speeds:



The figure below shows the spectrum of traffic management practices:



If the core of a network has more bandwidth than is permitted to enter at the edges, then good QoS can be obtained without policing. An alternative to complex QoS control mechanisms is to provide high-quality communication by generously over-provisioning a network, so that capacity is based on peak traffic load estimates. This approach is simple for networks with predictable peak loads, and the performance is reasonable for many applications, including demanding ones that can compensate for variations in bandwidth and delay with large receive buffers, as is often possible in video streaming, for example. Over-provisioning can be of limited use, however, in the face of transport protocols (such as TCP) that keep increasing the amount of data placed on the network until all available bandwidth is consumed and packets are dropped. Such greedy protocols tend to increase latency and packet loss for all users. Commercial VoIP services are often competitive with traditional telephone service in terms of call quality, even though QoS mechanisms are usually not in use on the user’s connection to their ISP or on the VoIP provider’s connection to a different ISP. Under high-load conditions, however, VoIP may degrade to cell-phone quality or worse. The mathematics of packet traffic indicates that a network requires just 60% more raw capacity under conservative assumptions. The amount of over-provisioning in interior links required to replace QoS depends on the number of users and their traffic demands, which limits the usability of over-provisioning: newer, more bandwidth-intensive applications and the addition of more users erode the headroom of over-provisioned networks. Restoring that headroom then requires a physical upgrade of the relevant network links, which is an expensive process. Thus over-provisioning cannot be blindly assumed on the Internet.
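The greedy TCP behaviour described above follows additive increase, multiplicative decrease: the sender grows its congestion window until packets are dropped, then halves it, so any spare capacity is eventually consumed. A toy model with an invented link capacity:

```python
def aimd(capacity, rounds, window=1.0):
    """Toy AIMD dynamics: add 1 to the congestion window each round
    without loss; when the window exceeds link capacity (a stand-in
    for packet loss), halve it."""
    history = []
    for _ in range(rounds):
        if window > capacity:   # loss detected: multiplicative decrease
            window /= 2
        else:                   # no loss: additive increase
            window += 1
        history.append(window)
    return history

h = aimd(capacity=10, rounds=30)
print(max(h))  # the window repeatedly probes just past the link capacity
```

The sawtooth this produces is why over-provisioned headroom gets eaten: every flow probes upward until the link is full again.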


Data Discrimination on Internet:

The extent to which network operators should be allowed to discriminate among Internet packets, whether to block selectively or to adjust price or quality of service, is one of the most fundamental issues in the network neutrality debate. Networks favor some traffic or packet streams over others by using a variety of data differentiation techniques or algorithms. There are various methods by which ISPs are able to discriminate by determining which types of packets are in the network. The first is flow classification: ISPs can determine the nature of a packet by examining the amount of time since the packet stream began, the amount of time between consecutive packets, and the sizes of packets in a stream. Information about every packet stream going through the network can be maintained by using the second method, called deep packet inspection. It can categorize traffic based not just on what it can learn from the packet it is currently handling but also on the combined content of many consecutive packets. Instead of looking only at the information needed to get the packet to its destination, a device using deep packet inspection is aware of information at the application layer, as illustrated in the table below:


Examples of header data showing which information is stored in which data field:

Data field                           Information
MAC address                          Manufacturer of the device attached to the network.
IP address                           Identity and location of sender and recipient.
Transport protocol                   Type of application.
Traffic class (IPv4 / IPv6)          Type of application; priority desired by sender.
Packet length                        Type of application.
Source port and destination port     Type of application.
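A flow classifier reads these fields straight out of the packet bytes. A minimal sketch that extracts the protocol number, addresses, and ports from a fabricated IPv4-plus-TCP header:

```python
import struct

def classify(packet: bytes):
    """Pull the fields a flow classifier keys on out of an IPv4 header:
    protocol number, source/destination addresses, and ports."""
    ihl = (packet[0] & 0x0F) * 4                 # header length in bytes
    proto = packet[9]                            # 6 = TCP, 17 = UDP
    src = ".".join(str(b) for b in packet[12:16])
    dst = ".".join(str(b) for b in packet[16:20])
    sport, dport = struct.unpack("!HH", packet[ihl:ihl + 4])
    return proto, src, dst, sport, dport

# Fabricated 20-byte IPv4 header followed by the first 4 bytes (the
# port fields) of a TCP header; only the fields we read are realistic.
pkt = bytes([0x45, 0, 0, 40, 0, 0, 0, 0, 64, 6, 0, 0,
             10, 0, 0, 1,           # source address 10.0.0.1
             93, 184, 216, 34])     # destination address 93.184.216.34
pkt += struct.pack("!HH", 51000, 443)
print(classify(pkt))  # → (6, '10.0.0.1', '93.184.216.34', 51000, 443)
```

Everything above stops at the transport header; deep packet inspection is what continues past these fields into the payload itself.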


Types of discrimination:

1. Discrimination by protocol:

Discrimination by protocol is the favoring or blocking of information based on aspects of the communications protocol that the computers are using to communicate. In 2008, Comcast was found to have deliberately prevented some subscribers from using the peer-to-peer file-sharing service BitTorrent to download large files.

2. Discrimination by IP address:

a.) During the early decades of the Internet, creating a non-neutral Internet was technically infeasible. In 2003 the Internet security company NetScreen Technologies released network firewalls with so-called deep packet inspection, a technique originally developed to filter malware. Deep packet inspection helped make real-time discrimination between different kinds of data possible, and is often used for Internet censorship.

b.) In a practice called zero-rating, companies exempt data use from certain addresses from subscribers’ data allowances, favoring use of those services. Examples include Facebook Zero and Google Free Zone; the practice is especially common in the developing world.

c.) Sometimes ISPs will charge some companies, but not others, for the traffic they cause on the ISP’s network. French telecoms operator Orange, complaining that traffic from YouTube and other Google sites accounts for roughly 50% of total traffic on the Orange network, reached a deal with Google in which it charges Google for the traffic incurred on the Orange network. Some also thought that Orange’s rival ISP Free throttled YouTube traffic. However, an investigation by the French telecommunications regulatory body revealed that the network was simply congested during peak hours.

3. Peering discrimination:

There is some disagreement about whether peering is a net neutrality issue. In the first quarter of 2014, streaming website Netflix reached an arrangement with ISP Comcast to improve the quality of its service to Netflix clients. This arrangement was made in response to increasingly slow connection speeds through Comcast over the course of 2013, during which average speeds dropped more than 25% from their values a year earlier to an all-time low. After the deal was struck in January 2014, the Netflix speed index recorded a 66% increase in connection speed.


The Benefits of Discrimination:

The benefits of discrimination range from security to quality-of-service control. One of the most important benefits of discrimination at the network level is security. A network operator can determine whether a packet stream is carrying a virus or a dangerous piece of spyware by using deep packet inspection. A network neutrality policy that prohibited networks from dropping dangerous traffic of this kind would do serious damage to network security. By ensuring that only authorized devices are attached to the network, the operator can also prevent customers from using equipment that would hinder their neighbours’ traffic. Such devices might access adult-only material contrary to the customer’s stated wishes, or consume more of the shared resources than is allowed. Different applications have different QoS needs, so discrimination with respect to QoS is also important: it is not necessary to give identical treatment to all services. Pricing, too, plays an important role in congestion control, through price discrimination. Price discrimination has quantifiable advantages over more traditional technical approaches: by adjusting prices dynamically based on congestion levels, operators can convince some users to delay their transmissions. Internet traffic is increasing at a tremendous rate, and this flow of traffic can suffer from congestion at a number of points on the Internet; the increasing use of multimedia technology is worsening that congestion. An Internet user attempting to retrieve a file from a repository in another country will generally be unable to tell whether the dominant cause of congestion is the hardware at the repository or the various network links between the repository and the user. The effect on Internet users is generally the same, although the distinction between hardware and traffic congestion matters to Internet providers.
Thus, by discriminating, ISPs can provide better service to the majority of their customers.
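The congestion-pricing idea above, adjusting prices dynamically with load so that price-sensitive users delay their transmissions, can be sketched with a toy pricing function; the function and its figures are purely illustrative:

```python
def congestion_price(base_price: float, utilization: float) -> float:
    """Scale the per-unit price with link utilization: cheap when the
    link is idle, steeply more expensive as it approaches saturation."""
    assert 0.0 <= utilization < 1.0
    return base_price / (1.0 - utilization)

# Price per GB at a base price of 1.0, for rising link utilization:
for u in (0.1, 0.5, 0.9):
    print(f"utilization {u:.0%}: price {congestion_price(1.0, u):.2f}")
```

A user facing the 90%-utilization price has a clear incentive to defer a bulk download to an off-peak hour, which is exactly the congestion-relieving behaviour the operator wants.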


The Risks of Discrimination:

One of the serious risks of discrimination is that it may be used to protect legacy services from competition. In the current ISP market, cable and telephone companies are the dominant broadband providers. Without net neutrality, these ISPs can block traffic or degrade the QoS of rival services. For example, a telephone company could degrade VoIP services, forcing customers to use traditional telephone services; cable companies could similarly degrade streaming video. Discrimination may also lead to charging oligopoly rents for broadband. In this scenario, ISPs may be able to maximize their profit depending on customers’ willingness to pay for particular services.


Jon Peha from Carnegie Mellon University believes it is important to create policies that protect users from harmful traffic discrimination, while allowing beneficial discrimination. Peha discusses the technologies that enable traffic discrimination, examples of different types of discrimination, and potential impacts of regulation. Google Chairman Eric Schmidt aligns Google’s views on data discrimination with Verizon’s: “I want to be clear what we mean by Net neutrality: What we mean is if you have one data type like video, you don’t discriminate against one person’s video in favor of another. But it’s okay to discriminate across different types. So you could prioritize voice over video. And there is general agreement with Verizon and Google on that issue.” Echoing similar comments by Schmidt, Google’s Chief Internet Evangelist and “father of the internet”, Vint Cerf, says that “it’s entirely possible that some applications needs far more latency, like games. Other applications need broadband streaming capability in order to deliver real-time video. Others don’t really care as long as they can get the bits there, like e-mail or file transfers and things like that. But it should not be the case that the supplier of the access to the network mediates this on a competitive basis, but you may still have different kinds of service depending on what the requirements are for the different applications.”


Much of the net neutrality debate centres around the management of Internet traffic by Internet Service Providers (ISPs) and what constitutes reasonable traffic management. Traffic management is the tool used by ISPs to effectively protect the security and integrity of networks, to restrict the transmission to consumers of unsolicited communication (e.g. spam) or to give effect to a legislative provision or court order. It is also essential for the delivery of certain time-sensitive services (such as real-time IPTV and video conferencing) that may require a prioritisation of traffic to ensure a predefined higher quality of service. However, there is a fragile balance between ensuring the openness of the Internet and the reasonable and responsible use of traffic management by ISPs. Drawing the line between legitimate and unjustified traffic management is challenging.


Types of net control:

Bandwidth throttling:

Bandwidth throttling is the intentional slowing of Internet service by an Internet service provider. It is a reactive measure employed in communication networks in an apparent attempt to regulate network traffic and minimize bandwidth congestion. Bandwidth throttling can occur at different locations on the network. On a local area network (LAN), a sysadmin may employ bandwidth throttling to help limit network congestion and server crashes. On a broader level, the Internet service provider may use bandwidth throttling to help reduce a user’s usage of bandwidth that is supplied to the local network. Throttling can be used to limit a user’s upload and download rates actively on programs such as video streaming, BitTorrent protocols and other file sharing applications to even out the usage of the total bandwidth supplied across all users on the network. Bandwidth throttling is also often used in Internet applications, in order to spread a load over a wider network to reduce local network congestion, or over a number of servers to avoid overloading individual ones, and so reduce their risk of crashing, and gain additional revenue by compelling users to use more expensive pricing schemes where bandwidth is not throttled.


Typically an ISP will allocate a certain portion of bandwidth to a neighbourhood, which is then sold to residents within the neighbourhood. It is common practice for ISP companies to oversell the amount of bandwidth as typically most customers will only use a fraction of what they’re allotted.  By overselling, ISP companies can lower the price of service to their customers per gigabit allotted. On some ISPs, however, when one or a few customers use a larger amount than expected, the ISP company will purposely reduce the speed of that customer’s service for certain protocols, thus throttling their bandwidth. This is done through a method called Deep Packet Inspection (DPI), which allows an ISP to detect the type of traffic being sent and throttle it if it is not high priority and using a large fraction of the bandwidth. Bandwidth throttling of certain types of traffic (i.e. peer-to-peer file sharing) can be scheduled during specific times of the day to avoid congestion at peak usage hours. As a result, customers should all have equal Internet speeds. Encrypted data may be throttled or filtered causing major problems for businesses that use Virtual Private Networks (VPN) and other applications that send and receive encrypted data.


Throttling vs. capping:

The difference is that bandwidth throttling regulates a bandwidth-intensive device (such as a server) by limiting how much data that device can send or receive per unit of time. Bandwidth capping, on the other hand, limits the total transfer capacity, upstream or downstream, of data over a medium.


Deep packet inspection (DPI):

The “deep” in deep packet inspection (DPI) refers to the fact that these boxes don’t simply look at the header information as packets pass through them. Rather, they move beyond the IP and TCP header information to look at the payload of the packet. The goal is to identify the applications being used on the network, but some of these devices can go much further. Imagine a device that sits inline in a major ISP’s network and can throttle P2P traffic at differing levels depending on the time of day. Imagine a device that allows one user access only to e-mail and the Web while allowing a higher-paying user to use VoIP and BitTorrent. Imagine a device that protects against distributed denial of service (DDoS) attacks, scans for viruses passing across the network, and siphons off requested traffic for law enforcement analysis. Imagine all of this being done in real time, for 900,000 simultaneous users, and you get a sense of the power of deep packet inspection (DPI) network appliances. Ellacoya, which recently completed a study of broadband usage, says that 20 percent of all web traffic is really just YouTube video streams. This is information an ISP wants to know; at peak hours, traffic shaping hardware might downgrade the priority of all streaming video content from YouTube, giving other web requests and e-mails a higher priority without making YouTube inaccessible. This only works if the packet inspection is “deep.” In terms of the OSI layer model, this means looking at information from layers 4 through 7, drilling down as necessary until the nature of the packet can be determined. For many packets, this requires a full layer 7 analysis, opening up the payload and attempting to determine which application generated it (DPI gear is generally built as a layer 2 device that is transparent to the rest of the network). Procera, for instance, claims to detect more than 300 application protocol signatures, including BitTorrent, HTTP, FTP, SMTP, and SSH. 
Ellacoya claims that their boxes can look deeper than the protocol, identifying particular HTTP traffic generated by YouTube and Flickr, for instance. Of course, the identification of these protocols can be used to generate traffic shaping rules or restrictions. DPI can also be used to root out viruses passing through the network. While it won’t cleanse affected machines, it can stop packets that contain proscribed byte sequences. It can also identify floods of information characteristic of denial of service attacks and can then apply rules to those packets.
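Protocol signature matching of the kind described above can be sketched as prefix tests on the payload. The BitTorrent handshake genuinely begins with the byte 0x13 followed by the string "BitTorrent protocol"; the remaining patterns here are simplified illustrations rather than complete protocol signatures:

```python
# Simplified layer-7 signatures: byte patterns at the start of a payload.
SIGNATURES = {
    b"\x13BitTorrent protocol": "bittorrent",
    b"GET ":  "http",
    b"POST ": "http",
    b"SSH-":  "ssh",
    b"EHLO":  "smtp",
}

def identify(payload: bytes) -> str:
    """Match the payload prefix against known application signatures."""
    for pattern, proto in SIGNATURES.items():
        if payload.startswith(pattern):
            return proto
    return "unknown"

print(identify(b"GET /index.html HTTP/1.1\r\n"))           # → http
print(identify(b"\x13BitTorrent protocol" + b"\x00" * 8))  # → bittorrent
```

Real DPI appliances go much further, tracking state across packets and drilling into reassembled streams, but the principle is the same: the classification key lives in the payload, not the header.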


Privacy issues:

There are two main categories of inspection techniques used by ISPs, which are more or less intrusive:

• one based on the Internet Protocol header information, which enables ISPs to identify the subscriber and apply specific policies according to what he or she has subscribed to, e.g. routing the packet through a slower or faster link;

• one based on a deeper inspection (called DPI, Deep Packet Inspection), which enables ISPs to access the data payload, which may contain personal information.


How do you prevent DPI from reading your text sent over the Internet?

By encrypting data.

Encryption is basically the method of turning plaintext information into an unintelligible format (a cipher) using different algorithms. This way, even if unauthorized parties manage to access the encrypted data, all they find is streams of unintelligible alphanumeric characters. Encryption is widely used to protect data in numerous areas, such as e-commerce, online banking, cloud storage, online communication and so forth. A simple example of a cipher is, for instance, replacing each letter in a message with the one one place forward in the alphabet. So if your original message read “Meet you at the cafe tonight”, the encrypted message reads as follows: “Nffu zpv bu uif dbgf upojhiu”. Of course, advanced encryption software can generate extremely complicated algorithms to achieve complex ciphers. DPI at an ISP cannot read truly encrypted packets in any way. They may be in bits and pieces as they are downloaded, but they are still like pieces of a scrambled puzzle that can only be put back together with the decryption key. Of course, an ISP can still throttle an encrypted message without knowing what the message is.
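The shift-by-one cipher from the example above is easy to express in code (real encryption relies on algorithms such as AES rather than alphabet shifts):

```python
def shift_cipher(text: str, shift: int = 1) -> str:
    """Replace each letter with the one `shift` places forward,
    wrapping around the alphabet; other characters pass through."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("a") if ch.islower() else ord("A")
            out.append(chr(base + (ord(ch) - base + shift) % 26))
        else:
            out.append(ch)
    return "".join(out)

print(shift_cipher("Meet you at the cafe tonight"))
# → Nffu zpv bu uif dbgf upojhiu
```

Calling the function with `shift=-1` reverses the transformation, which is the toy equivalent of holding the decryption key.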


Is your ISP throttling your Bandwidth?

1.) What is the contention ratio in your neighbourhood?

At the core of all Internet service is a balancing act between the number of people who are sharing a resource and how much of that resource is available. For example, a typical provider starts out with a big pipe of Internet access that is shared via exchange points with other large providers. They then subdivide this access out to their customers in ever smaller chunks — perhaps starting with a gigabit exchange point and then narrowing down to a 10 megabit local pipe that is shared with customers across a subdivision or area of town. The speed you, the customer, can attain is limited to how many people might be sharing that 10 megabit local pipe at any one time. If you are promised one megabit service, it is likely that your provider would have you share your trunk with more than 10 subscribers and take advantage of the natural usage behavior, which assumes that not all users are active at one time. The exact contention ratio will vary widely from area to area, but from experience, your provider will want to maximize the number of subscribers who can share the pipe, while minimizing service complaints due to a slow network. In some cases, there are as many as 1,000 subscribers sharing 10 megabits. This is a bit extreme, but even with a ratio as high as this, subscribers will average much faster speeds when compared to dial-up.
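The arithmetic behind contention ratios is plain division. A sketch in which the pipe size, subscriber counts, and the assumption that 10% of users are active at once are all illustrative:

```python
def worst_case_speed(pipe_mbit: float, subscribers: int, active_fraction: float) -> float:
    """Per-user speed on a shared pipe when `active_fraction`
    of the subscribers are transmitting at the same time."""
    active = max(1, round(subscribers * active_fraction))
    return pipe_mbit / active

# A 10 Mbit pipe shared by 100 subscribers, 10% of whom are active:
print(worst_case_speed(10, 100, 0.10))   # → 1.0 Mbit/s each
# The extreme case from the text, 1,000 subscribers on the same pipe:
print(worst_case_speed(10, 1000, 0.10))  # → 0.1 Mbit/s each
```

The provider's bet is on `active_fraction` staying small; when a neighbourhood's usage habits change, the same pipe suddenly delivers a fraction of the advertised speed.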

2.) Does your ISP’s exchange point with other providers get saturated?

Even if your neighbourhood link remains clear, your provider’s connection can become saturated at its exchange point. The Internet is made up of different provider networks and backbones. If you send an e-mail to a friend who receives service from a company other than your provider, then your ISP must send that data on to another network at an exchange point. The speed of an exchange point is not infinite, but is dictated by the type of switching equipment. If the exchange point traffic exceeds the capacity of the switch or receiving carrier, then traffic will slow.

3.) Does your provider give preferential treatment to speed test sites?

It is possible for an ISP to give preferential treatment to individual speed test sites. Providers have all sorts of tools at their disposal to allow and disallow certain kinds of traffic.

4.) Are file-sharing queries confined to your provider network?

Another common tactic to save resources at the exchange points of a provider is to re-route file-sharing requests to stay within their network. For example, if you were using a common file-sharing application such as BitTorrent, and you were looking for some non-copyrighted material, it would be in your best interest to contact resources all over the world to ensure the fastest download. However, if your provider can keep you on their network, they can avoid clogging their exchange points. Since companies keep tabs on how much traffic they exchange in a balance sheet, making up for surpluses with cash, it is in their interest to keep traffic confined to their network, if possible.

5.) Does your provider perform any usage-based throttling?

The ability to increase bandwidth for a short period of time and then slow you down if you persist at downloading is another trick ISPs can use. Sometimes they call this burst speed, which can mean speeds being increased up to five megabits, and they make this sort of behavior look like a consumer benefit. Perhaps Internet usage will seem a bit faster, but it is really a marketing tool that allows ISPs to advertise higher connection speeds – even though these speeds can be sporadic and short-lived. For example, you may only be able to attain five megabits at 12:00 a.m. on Tuesdays, or some other random unknown times. Your provider is likely just letting users have access to higher speeds at times of low usage. On the other hand, during busier times of day, it is rare that these higher speeds will be available.


IP blocking:

IP blocking is an ISP purposely preventing its Internet service customers from accessing a specific website or IP address. Certain ISPs have been found to block certain websites. While some blocking (e.g., of child pornography sites) is considered acceptable or even required, and is stated in an ISP’s acceptable Internet use policy, ISPs have absolute control over the content transmitted over their wires and can block it without adequately informing service subscribers.


Unfair traffic management practices:

1. The blocking and throttling (i.e. intentionally slowing down the speed) of Peer-to-Peer (P2P) services (such as file sharing and media streaming) and Voice over Internet Protocol (VoIP) services (i.e. Internet telephony) are the most common examples. Other – less prevalent – instances are the restricted access to specific applications such as gaming, streaming, e-mails or instant messaging services.

2. Weakening the competition

This practice can stem from the desire to weaken the competition, the most prominent example of this is limiting access to VoIP services, as revealed by the traffic management investigation carried out by the Body of European Regulators (BEREC). Indeed, while ISPs provide voice calls through the traditional fixed or mobile networks, cheaper (or even free) VoIP substitutes can be found over the Internet.

3. The decrease of innovation

Developers of content and applications are likely to reconsider their investments in new applications if there is a risk that ISPs might discriminate against them. Moreover, excessive restrictions on competing applications might remove the incentive for ISPs to improve and innovate their own products that are challenged by those applications.

4. The potential degradation of quality of service

BEREC has identified two main types of degradation of quality of service: the Internet access service as a whole (e.g. caused by congestion on a regular basis), and individual applications using Internet access service (e.g. VoIP blocking and P2P throttling).


Lack of transparency:

1. Regarding traffic management practices

ISPs tend not to publicise information about their traffic management practices openly. Such information can most frequently be found only in the detailed terms and conditions of the ISPs' offers, if at all. A recent report from the UK consumer organisation Consumer Focus found that consumers have very limited awareness of the term 'traffic management'.

2. On actual quality of service

In some cases, consumers are not even aware of the level of quality they can expect from their Internet service, for example possible discrepancies between advertised speeds and actual broadband speeds.



Wireless networks and net neutrality:

So far I have discussed traffic management vis-à-vis net neutrality in wired networks. There is considerable debate over whether and how net neutrality should apply to wireless networks. The issue is whether differences between wired and wireless network technology merit different treatment with respect to net neutrality. The primary focus is on applications and traffic management, rather than device attachment. Wireless networks differ substantially from wired networks at the network layer and below, but despite these differences in traffic management, similar net neutrality concerns apply. Since the differences lie in the lower layers, net neutrality in both wired and wireless networks can be effectively accomplished by requiring an open interface between the network and transport layers.


The network neutrality debate has focused almost exclusively on Internet access via wireline carriers. Recently the issue of wireless Internet access has surfaced in light of the growing importance of wireless services and consumer frustration with carrier tactics that disable handset functions and block access to competing services. While wireless handsets generally can access Internet services, most carriers attempt to favor content they provide or secure from third parties under what critics deem a "walled garden" strategy: deliberate efforts to lock consumers into accessing and paying for favored content and services. Just about every nation in the world has established policies that mandate the right of consumers to own their own telephone and to use any device to access any carrier, service or function, provided it does not cause technical harm to the telecommunications network. Once regulators unbundled telecommunications service from the devices that access network services, a robustly competitive market evolved for both devices and services. Remarkably, wireless carriers in many nations, including the United States, have managed to avoid having to comply with this open network concept. Even though consumers own their wireless handsets, carriers provide service only through specific types of handsets programmed to work with a single carrier's network. Carriers justify this lock-in, and high fees for early termination of service, on the grounds that they sell wireless handsets at subsidized rates, sometimes "free", based on a two-year subscription term. Of course, the value of a two-year lock-in period offsets the handset subsidy, particularly in light of next generation wireless networks that will offer many services in addition to voice communications. In the United States, wireless carriers and their "big box" retail store partners sell more than 60% of all wireless handsets, typically when a subscriber commences service or renews a subscription.
No market for used handsets has evolved, because wireless carriers do not offer lower service rates for subscribers who do not need or want a subsidized handset. Wireless network neutrality would require carriers to stop blocking the use of non-carrier-affiliated handsets and locking handsets so that they work only on a single carrier network. More broadly, wireless network neutrality would prevent wireless carriers from stopping subscribers from using their handsets to access the content, services, and software applications of ventures unaffiliated with the carrier. It also would require carriers to support an open interface so that handset manufacturers and content providers can develop equipment and services that do not have any potential for harming wireless carrier networks. Opponents of wireless network neutrality consider the initiative unnecessary government intrusion in a robustly competitive marketplace. They claim that imposing such requirements would risk causing technical harm to wireless networks and generate such regulatory uncertainty that carriers might refrain from investing in next generation network enhancements. Opponents claim that separating equipment from service constituted an appropriate remedy when a single wireline carrier dominated, but that such compulsory unbundling should not occur when consumers have a variety of carrier options.


India is currently debating the merits of net neutrality. However, the Indian population’s access to and use of the Internet provides unique parameters to the discussion. For starters, although India has the third largest number of Internet users, only 19 percent of the Indian population currently has Internet access.  In comparison, 87 percent of the U.S. population can access the Internet.  On the losing end of India’s digital divide is India’s poor and often rural class, where Internet access is limited, or if available, too expensive for marginal customers. India also lacks the large scale infrastructure necessary for broad fixed-line Internet access. For this reason, mobile platforms are the easiest way to bring Internet access to the population, particularly to the less affluent and rural areas of the country where residents suffer not only from poor broadband infrastructure, but also from the lack of basic access to the electricity needed to power fixed Internet lines. To a large extent, India’s net neutrality debate has paralleled the recent debate in the United States, and Indian net neutrality proponents have adopted their U.S. counterparts’ arguments when criticizing zero-rating projects. However, the disparity between mobile and fixed-line Internet access marks an important difference between the net neutrality debate in India and in the U.S.  Due to widespread Internet access in the United States, the domestic net neutrality debate was able to focus largely on the quality of Internet access.  In the U.S., the discussion centered on regulations that would ensure equal access to all legal digital content, and promote commercial and non-commercial innovations, particularly by start-ups and small businesses.  In India where Internet access is beyond the reach of so many, the calculus may be very different.


Wireless networks have their own internal congestion, which results from sharing a limited radio spectrum among many users. In 2G and 3G wireless systems, data and voice traffic are kept apart: carriers shunt the data over the Internet and the voice over a circuit-switched network linked to the backbone. The first 4G LTE (long term evolution) phones sent data over the new LTE network but used the old 3G network for voice. Now carriers are phasing in a new generation of 4G LTE phones that use a protocol called Voice over LTE (VoLTE), which converts voice directly to packets for transmission on 4G networks along with data. VoLTE phones have an audio bandwidth of 50 to 7,000 Hz, twice that of conventional phones, supplied by a service called HD Voice. VoLTE phones also use network management tools to manage the flow of time-sensitive packets. The packet coding built into LTE and VoLTE is a different matter, because that traffic goes over wireless networks, which do have limited internal capacity. The LTE packet coding standard reflects the mobile environment and the introduction of new services. It assigns a special priority code to real-time gaming traffic, which requires very fast transit times to keep competition even. It also divides video into two classes with distinct requirements: real-time "conversational" services such as conferencing and videophone are similar to voice telephony in that delays degrade their usability, while buffered streaming video can better tolerate packet delays because it is not interactive.


Here are the LTE codes:

QCI | Resource Type | Priority    | Packet Delay Budget | Packet Error Loss Rate | Example Services
1   | GBR           | 2           | 100 ms              | 10^-2                  | Conversational voice
2   | GBR           | 4           | 150 ms              | 10^-3                  | Conversational video (live streaming)
3   | GBR           | 3           | 50 ms               | 10^-3                  | Real-time gaming
4   | GBR           | 5           | 300 ms              | 10^-6                  | Non-conversational video (buffered streaming)
5   | Non-GBR       | 1 (highest) | 100 ms              | 10^-6                  | IMS signalling
6   | Non-GBR       | 6           | 300 ms              | 10^-6                  | Video (buffered streaming); TCP-based services (e.g. web, e-mail, chat, FTP, point-to-point file sharing, progressive video)
7   | Non-GBR       | 7           | 100 ms              | 10^-3                  | Voice; video (live streaming); interactive gaming
8   | Non-GBR       | 8           | 300 ms              | 10^-6                  | Video (buffered streaming); TCP-based services (as for QCI 6)
9   | Non-GBR       | 9 (lowest)  | 300 ms              | 10^-6                  | Video (buffered streaming); TCP-based services (as for QCI 6)

QCI = QoS Class Identifier

GBR = Guaranteed Bit Rate

A GBR bearer guarantees a minimum bit rate requested by an application for optimal functioning. LTE provides both GBR bearers and non-GBR bearers. GBR bearers are typically used for applications like Voice over Internet Protocol (VoIP), with an associated guaranteed rate; higher bit rates can be allowed if resources are available. Non-GBR bearers do not guarantee any particular bit rate, and are typically used for applications such as web browsing.
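The GBR/non-GBR split can be pictured as a two-stage allocation: guaranteed bearers are served up to their GBR first, and whatever capacity remains is shared among best-effort bearers. A toy Python model of this idea follows; it is a sketch under simplifying assumptions (even sharing, static load), not the actual LTE scheduler, and the numbers are made up:

```python
# Toy allocation of cell capacity between GBR and non-GBR bearers:
# guaranteed bearers get their GBR first; leftover capacity is split
# evenly among best-effort bearers.

def allocate(capacity_mbps, gbr_bearers, n_non_gbr):
    """gbr_bearers: list of guaranteed rates (Mbps).
    Returns (per-GBR allocations, per-non-GBR share)."""
    gbr_total = sum(gbr_bearers)
    if gbr_total > capacity_mbps:
        # In a real network, admission control would reject new GBR
        # bearers before this point; here we just scale down.
        scale = capacity_mbps / gbr_total
        return [r * scale for r in gbr_bearers], 0.0
    leftover = capacity_mbps - gbr_total
    share = leftover / n_non_gbr if n_non_gbr else 0.0
    return list(gbr_bearers), share

gbr, share = allocate(20.0, [1.0, 1.0, 4.0], n_non_gbr=7)
print(gbr)    # GBR bearers keep their guaranteed rates: [1.0, 1.0, 4.0]
print(share)  # each non-GBR bearer gets an equal slice of the rest: 2.0
```

The model makes the trade-off visible: every extra GBR bearer admitted shrinks the pool left over for web browsing and other best-effort traffic.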


These new Net management tools allowed carriers to improve their existing services and offer new ones. Carriers now boast of the good voice quality of VoLTE phones, after years of ignoring the poor sound of 2G and 3G phones. Premium-price services could follow, such as special channels for remote real-time control of Internet of Things devices. Yet the differential treatment of packets worries advocates of Net neutrality, who fear that carriers could misuse those technologies to limit customer access to sites and services.



Net neutrality dilemma:

Net neutrality means different things to different people. Some want equal treatment for all bits; others merely want equal treatment for all information providers, which would then be free to assign priorities to their own services. Still others say that carriers should be able to charge extra for premium services, but not to block or throttle access. Each approach has different implications for network management. Treating all bits equally has become a popular mantra. It says just what it means, giving it a charming simplicity that leaves little wiggle room for companies trying to game the system. Championed by the nonprofit Electronic Frontier Foundation (EFF), the purists' position seems to be gaining advocates. Yet its philosophical clarity could come at the cost of telephone clarity. LTE uses expedited forwarding and packet priority to reduce jitter, which would otherwise degrade voice quality. But that involves giving some bits priority over others. Some observers doubt that Net neutrality purists mean what they say. Yet Jeremy Gillula, a technologist for EFF, says "network operators shouldn't be doing any sort of discrimination when it comes to managing their networks." One reason is that EFF advocates the encryption of Internet traffic, and as Gillula points out, encrypted data can't be examined to see whether it should get priority. Moreover, he adds, "by allowing some packets to be treated better than others, we're closing off a universe of new ways of using the Internet that we haven't even discovered yet, and resigning ourselves to accepting only what already exists." Other advocacy groups take a less restrictive approach. "We realize that the network needs management to provide the desired services," says Danielle Kehl, a policy analyst for the New America Foundation's Open Technology Institute.
“The key is to make sure network management is not an excuse to violate Net neutrality.” Thus they would allow carriers to schedule conversational video packets differently than those carrying streaming video, which is less time sensitive. But they would not allow carriers to differentiate between streaming video packets from two different companies. A key argument for this approach is the 2003 observation by Tim Wu, now a Columbia University law professor, that packet switching inherently discriminates against time-sensitive applications. That is, packet switching without Net management can’t prevent degradation of time-sensitive services on a busy network. President Obama largely followed this lead in his November 2014 speech advocating Net neutrality. He did not say that all bits should be treated equally but specified four rules: no blocking, no throttling, no special treatment at interconnections, and no paid prioritization to speed content transmission. The industry’s view of Net neutrality has another key difference—it should allow companies to offer premium-priced services. A Nokia policy paper says that users should be able to “communicate with any other individual or business and access the lawful content of their choice free from any blocking or throttling, except in the case of reasonable network management needs, which are applied to all traffic in a consistent manner.” But the paper adds that “fee-based differentiation” should be allowed for specialized services, as long as it is transparent. Carriers like this approach because adding premium services would give them a financial incentive to improve their networks. Critics counter that offering an express lane to premium customers could relegate other users to the slow lane, particularly in busy wireless networks. A crucial issue to be resolved is who pays for premium service. 
The big technology question in the debate over Net neutrality is which approach to packet management would give the best performance now and in the future. Cisco’s Baker says that equal treatment for all packets “would be setting the industry back 20 years.” That’s particularly true of wireless networks, where high demand and limited bandwidth make network management crucial. Take away priority coding and you break VoLTE, the first technology to offer major improvements in cellular voice quality. And without VoLTE or a similar packet-management scheme, there’s no obvious way to move wire-line telephony onto the Internet without degrading voice quality to cellphone level. Other proposed services also depend on priority coding. “If the Internet of Things develops, a lot of applications will require accurate real-time data to work well,” says Jeff Campbell, vice president of global policy and government affairs at Cisco. Telemedicine, teleoperation of remote devices, and real-time interaction among autonomous vehicles could be problematic if data packets could get stalled at peak congestion times. Some analysts argue that packet scheduling could throttle other traffic by limiting the unscheduled bandwidth. But others counter that this should not be a problem in a well-designed network, one with adequate capacity and interconnections. As undemocratic as packet scheduling may be, it seems the best technology available for delivering a mixture of time-sensitive and -insensitive services. “Some Net neutrality advocates are convinced that any kind of management will create bad results, but they’re not willing to accept that having no management will also have bad results,” says a senior Nokia engineer. So, Internet purists take heed: Traffic management is as vital on the Internet as it is on streets and highways.


Net neutrality definition:


Net neutrality (also network neutrality, Internet neutrality, or net equality) is the principle that Internet service providers and governments should treat all data on the Internet equally, not discriminating or charging differentially by user, content, site, platform, application, type of attached equipment, or mode of communication. The term was coined by Columbia University law professor Tim Wu in 2003, as an extension of the longstanding concept of a common carrier. According to Wu, the best way to explain network neutrality is as a network design principle: a public information network will end up being most useful if all content, sites, and platforms are treated equally. A more detailed proposed definition of technical and service network neutrality suggests that service network neutrality is adherence to the paradigm that the operation of a service at a certain layer is not influenced by any data other than the data interpreted at that layer, in accordance with the protocol specification for that layer. Net neutrality prohibits Internet service providers from speeding up, slowing down or blocking Internet traffic based on its source, ownership or destination. It usually means that broadband service providers charge consumers only once for Internet access, do not favor one content provider over another, and do not charge content providers for sending information over broadband lines to end users. An example of a violation of net neutrality principles was the Internet service provider Comcast intentionally slowing uploads from peer-to-peer file sharing applications. And in 2007, Plusnet was found to be using deep packet inspection to implement limits and differential charges for peer-to-peer, file transfer protocol, and online game traffic.


Network neutrality is best defined as a network design principle. The idea is that a maximally useful public information network aspires to treat all content, sites and platforms equally. This allows the network to carry every form of information and support every kind of application. Other net neutrality proponents argue that net neutrality means ensuring that all services are provided to all parties over the same quality of Internet pipe, with no degradation based on the service chosen by the end user and at the same cost. This definition is based on the assumption that data is transmitted on a “best efforts” basis, with limited exceptions.


Net Neutrality is the principle that every point on the network can connect to any other point on the network, without discrimination on the basis of origin, destination or type of data. This principle is the central reason for the success of the Internet. Net Neutrality is crucial for innovation, competition and for the free flow of information. Most importantly, Net Neutrality gives the Internet its ability to generate new means of exercising civil rights such as the freedom of expression and the right to receive and impart information. Advocates of net neutrality have raised concerns about the ability of broadband providers to use their last mile infrastructure to block Internet applications and content (e.g. websites, services, and protocols), and even to block out competitors. Opponents claim net neutrality regulations would deter investment into improving broadband infrastructure and try to fix something that isn’t broken.


Let’s say you want to watch a video online: you connect to the Internet, open your browser and navigate to the video service of your choice. This is possible because the access provider does not seek to restrict your options. Without Net Neutrality you might instead find that your connection to video service A is being slowed down by your access provider in a way that makes it impossible for you to watch the video. At the same time, you would still be able to connect rapidly to video service B and perhaps watch exactly the same content. Why would your access provider do such a thing? There are many reasons: for example, the internet access provider might a) have signed an exclusive agreement with video platform B, or b) provide its own video services and therefore want to encourage you to use those instead of the service that you initially preferred. This is just one of the many reasons for violations of Net Neutrality.


Net neutrality is the principle that every website (of the same class) should be treated equally and not given any preferential treatment with respect to other websites. In other words, if you click on Google and on Yahoo, your internet service provider (ISP) will use the fastest possible routes to deliver each website to you. It doesn’t have special routes or other preferences for one site versus another. Net Neutrality doesn’t prevent variations in overall service — in other words, it may be that you pay twice as much as your neighbour in order to have more bandwidth, which could lead to Yahoo loading faster on your computer than it does on hers, even if you both clicked on Yahoo at the same time. Service providers can and should provide various tiers of overall service depending on your needs, but once you subscribe to a given tier of service, there shouldn’t be additional fees levied on you or on the sites you access.


The openness of the Internet is closely linked to the application of the principle of network neutrality or net neutrality. The Electronic Communications’ Framework (ECF) defines it as the ability for consumers to “access and distribute information or run applications and services of their choice.”

The revised Framework supports the following aspects of network neutrality:

1. Choice

2. Transparency

3. Quality of Service

4. E-privacy


For a thoughtful definition, consider the one given by Daniel Weitzner, who cofounded the Center for Democracy & Technology, teaches at MIT, and works for the W3C. He lays out four points that neutral networks should adhere to:

1. Non-discriminatory routing of packets

2. User control and choice over service levels

3. Ability to create and use new services and protocols without prior approval of network operators

4. Nondiscriminatory peering of backbone networks.


Level playing field:

A level playing field is a concept about fairness: not that each player has an equal chance to succeed, but that they all play by the same set of rules. A metaphorical playing field is said to be level if no external interference affects the ability of the players to compete fairly. Government regulations tend to provide such fairness, since all participants must abide by the same rules. The internet is now a level playing field: anybody can start up a website, stream music or use social media with the same amount of data that they have purchased from a particular ISP. The Internet has had net neutrality since its inception, which has levelled the playing field for all participants. Net neutrality here refers to the absence of restrictions or priorities placed on the type of content carried over the Internet by the carriers and ISPs that run the major backbones. It requires that all traffic be treated equally: packets are delivered on a first-come, first-served basis regardless of where they originated or where they are destined. Net neutrality became an issue as major search engines such as Google and Yahoo! increasingly generated massive amounts of traffic compared with other sites. It also became an issue because some carriers that offered subscription-based VoIP services were also transporting their competitors’ VoIP traffic. Although it might seem reasonable to charge sites that disseminate huge amounts of content, ISPs may have conflicts of interest. For example, if an ISP also streams on-demand movies, it can block access to its competitors or demand fees to lift the blockade. The implications down the road are even more alarming: if net neutrality were abandoned entirely, at some point the owners of all Web sites might have to pay carriers’ fees to prevent their content from bogging down in a low-priority delivery queue. In the absence of neutrality, your ISP might favour certain websites over others, for which you might have to pay extra.
Website A might load at a faster speed than Website B because your ISP has a deal with Website A that Website B cannot afford. It’s like your electricity company charging you extra for using the washing machine, television and microwave oven above and beyond what you are already paying.


Net neutrality vs. open internet:

The idea of an open Internet is the idea that the full resources of the Internet and means to operate on it are easily accessible to all individuals and companies. This often includes ideas such as net neutrality, open standards, transparency, lack of Internet censorship, and low barriers to entry. The concept of the open Internet is sometimes expressed as an expectation of decentralized technological power, and is seen by some as closely related to open-source software.  Proponents often see net neutrality as an important component of an open Internet, where policies such as equal treatment of data and open web standards allow those on the Internet to easily communicate and conduct business without interference from a third party. A closed Internet refers to the opposite situation, in which established persons, corporations or governments favor certain uses. A closed Internet may have restricted access to necessary web standards, artificially degrade some services, or explicitly filter out content. Tim Wu, who is credited with crafting the term and concept, took great pains to distinguish “Net Neutrality” from “Open Access” in his original paper that introduced the topic. Open Access is about opening essential infrastructure to competition. Net Neutrality accepts that there is no Open Access, and begins to regulate the Internet rather than just essential facilities used to access the Internet.


In 2015, the FCC defined the “Open Internet” as consisting of three fundamental building blocks.

1. No Blocking:

Broadband providers may not block access to legal content, applications, services, or non-harmful devices.

2. No Throttling:

Broadband providers may not impair or degrade lawful Internet traffic on the basis of content, applications, services, or non-harmful devices.

3. No Paid Prioritization:

Broadband providers may not favour some lawful Internet traffic over other lawful traffic in exchange for consideration — in other words, no “fast lanes.” This rule also bans ISPs from prioritizing content and services of their affiliates.


Consumers’ rights:

• Broadband Internet access consumers should have access to their choice of legal Internet content within the bandwidth limits and quality of service of their service plan.

• Broadband Internet access consumers should be able to run applications of their choice, within the bandwidth limits and quality of service of their service plans, as long as they do not harm the provider’s network.

• Consumers should be permitted to attach any devices they choose to their broadband Internet access connection at their premises, so long as there is no harm to the network.

• Consumers should receive meaningful information regarding their broadband Internet access service plans in order to make informed decisions in the marketplace.


Net neutrality is based on two general technical principles that are inherent in today’s Internet standards:

1. Best efforts delivery – the network attempts to deliver every packet to its destination equally, with no discrimination, and provides no guarantee of quality or performance;

2. End-to-end principle – in a general purpose network, application-specific functions should only be implemented at the endpoints of the network, not in intermediate nodes.
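The best-efforts principle above can be sketched as a plain FIFO queue: packets leave in arrival order regardless of who sent them, and when the queue is full they are simply dropped, so no delivery guarantee is made. A minimal illustration, with hypothetical packet tuples of (source, sequence number):

```python
# A neutral "best efforts" router in miniature: strict first-come,
# first-served forwarding with tail drop when the buffer is full.
from collections import deque

class BestEffortQueue:
    def __init__(self, capacity=3):
        self.q = deque()
        self.capacity = capacity

    def enqueue(self, packet):
        if len(self.q) >= self.capacity:
            return False          # tail drop: no guarantee of delivery
        self.q.append(packet)
        return True

    def dequeue(self):
        return self.q.popleft() if self.q else None

q = BestEffortQueue()
for pkt in [("big-video-site", 1), ("tiny-blog", 2), ("voip-app", 3), ("big-video-site", 4)]:
    q.enqueue(pkt)

# Packets exit in exactly the order they arrived, whoever sent them;
# the packet that arrived while the queue was full was dropped.
print([q.dequeue() for _ in range(3)])
```

Note that the queue never inspects the source field: discrimination would require exactly the kind of per-source logic this sketch deliberately omits.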


The foundation of net neutrality is ensuring that consumer choice is not influenced by differential ease or cost of access for Internet services. It means equal business opportunity for all Internet businesses, based on the premise that the ISP or telecom operator doesn’t create artificial distinctions between them on the basis of commercial relationships between them and some websites.

Three basic points of neutrality:

1. All sites must be equally accessible.

ISPs and telecom operators shouldn’t block certain sites or apps just because those sites don’t pay them. They should also not create gateways which influence the discovery of sites, giving preference to some sites over others.

2. All sites must be accessible at the same speed.

This means no speeding up of certain sites because of business deals and more importantly, it means no slowing down some sites.

3. The cost of access must be the same for all sites (per Kb/Mb or as per data plan).

That means no zero rating. In countries like India, Net Neutrality is more about cost of access than speed of access, because they don’t have fast and slow lanes: given the paucity of 3G spectrum and a very poor, sparse wireline network, they only have slow lanes. In India, the proposal of an internet access provider to charge for usage of a free communication platform that needs only a Wi-Fi connection and circumvents the need for a mobile communication platform has sparked off a raging controversy over whether this violates net neutrality. If this were offensive to customers of that service provider, they could always shift to another provider that does not impose such charges, retaining the same mobile number. If multiple service providers formed a cartel to impose such charges, the competition regulator could step in and impose crippling penalties on members of the cartel.


Network neutrality advocates seek to require ISPs to maintain the Internet as a “network of networks” seamlessly interconnecting facilities without favouring any category of content provider or consumer. Network neutrality in application would require ISPs to continue routing traffic on a best efforts basis, ostensibly to foreclose the potential for the Internet to fragment and balkanize into various types of superior access arrangements, available at a premium, alongside a public Internet increasingly prone to real or induced congestion. Opponents of compulsory network neutrality seek to differentiate service in terms of quality, price and features to accommodate increasingly diverse user requirements. For example, online game players, IPTV viewers and VoIP subscribers may need prioritization of their traffic streams so that their bits arrive on time, even if this outcome requires the ISPs to identify and favour these traffic streams. ISPs want the flexibility to offer different options for consumer access to the Internet and for how content providers reach consumers. Consumer tiering could differentiate service in terms of bit rate speeds, the amount of permissible traffic carried per month and how an ISP would handle specific types of traffic, including “mission critical” content that might require special treatment, particularly when network congestion is likely to occur. While consumer tiering addresses quality of service and price discrimination at the first and last kilometer, access tiering could differentiate how ISPs handle content upstream in the Internet cloud that links content providers and end users. Network neutrality advocates have expressed concern that the potential exists for ISPs to use diversifying service requirements as cover for a deliberate strategy to favour their own content and to extort additional payments from users and content providers threatened with intentionally degraded service.
Many network neutrality advocates speak and write in apocalyptic terms about the impact of price and service discrimination and how it will eviscerate the Internet and enable carriers to delay or shut out competitors and ventures unwilling or unable to pay surcharges. The head of one consumer group claims that incumbent telephone and cable companies can reshape the nation’s digital destiny by branding the Internet and foreclosing much of its societal and cultural benefits. On the other hand, opponents of network neutrality categorically reject as commercially infeasible any instance of unreasonable discrimination or service degradation. They also note that ISPs typically qualify for a regulatory “safe harbor” that largely insulates them from regulation, because they operate as value-added information service providers and not as telecommunications service providers. While the latter group incurs traditional common carrier, public utility responsibilities, including the duty not to discriminate, the former enjoys quite limited government oversight in most nations. Opponents of network neutrality see no actual or potential problems resulting from ISPs having the freedom to discriminate and diversify service. Without such flexibility, they question whether ISPs will continue to risk investing the billions of dollars needed to construct next-generation network infrastructure.


There are four broad issues with reference to net neutrality.


The case in which all bits are accorded the same priority but are priced differently is a hybrid case of net neutrality: it satisfies neutrality with respect to priority but not with respect to price. Zero rating is indicative of this, where the bits of selected applications are priced at zero for consumers on the plan while being given neither higher nor lower priority than others. Zero rating is, however, an extreme form of pricing. We can envision this leading to large OTTs tying up with large TSPs/ISPs to provide zero-rating schemes; smaller and start-up OTTs will be left out of the equation by the economics of subsidy. The other case is when the ISP charges the same for each bit but prioritizes certain OTT content. This involves the ISP implementing technologies such as advanced cache management and Deep Packet Inspection, among others. From the consumer’s point of view it provides better Quality of Experience (QoE) at no additional price, and hence can possibly increase consumer surplus. It may also involve close cooperation and agreement between select content and service providers, and it might decrease the quality of experience of content services that are not in the scheme.
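As a toy illustration of the zero-rating economics described above, the sketch below (with invented app names and a hypothetical tariff, not any real operator's billing logic) shows how a consumer's metered bill comes to depend on which app the bytes came from:

```python
# Hypothetical zero-rating plan: traffic to apps in the scheme is excluded
# from the metered total; everything else is billed per megabyte.

ZERO_RATED_APPS = {"bigchat", "bigvideo"}   # invented apps inside the scheme
PRICE_PER_MB = 0.25                          # hypothetical tariff, Rs per MB

def monthly_bill(usage_mb_by_app):
    """Charge only for traffic to apps outside the zero-rated list."""
    metered = sum(mb for app, mb in usage_mb_by_app.items()
                  if app not in ZERO_RATED_APPS)
    return metered * PRICE_PER_MB

# Same total traffic, very different bills depending on which video app is used:
usage = {"bigvideo": 800, "startup_video": 800, "email": 50}
print(monthly_bill(usage))  # -> 212.5 (only 850 of 1650 MB are metered)
```

The subsidy asymmetry is visible immediately: the incumbent's 800 MB cost the user nothing, while the start-up's identical 800 MB dominate the bill.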


The internet access providers claim that service providers like Netflix and Google are getting a “free ride” on their networks, since those services are popular with their users, and they’d like to get those (very successful) companies to pay. Wait, so internet companies don’t pay for bandwidth? They absolutely do. And here’s the tricky part of this whole thing: everyone already pays for their own bandwidth. You pay your access provider, and the big internet companies pay for their bandwidth as well. What you pay for is the ability to reach all those sites on the internet. What the internet access providers are trying to do is get everyone to pay twice. That is, you pay for your bandwidth, and then they want, say, Netflix, to pay again for the bandwidth you already paid for, so that Netflix can reach you. This rests on the false belief that when you buy internet service from your access provider, you haven’t bought with it the ability to reach sites on the internet. The big telcos and cable companies want to pretend you’ve only bought access to the edge of their network, and that internet sites should have to pay extra to become available to you. In fact, they’ve been rather explicit about this. Back in 2006, AT&T’s Ed Whitacre stated it clearly: “I think the content providers should be paying for the use of the network – obviously not the piece for the customer to the network, which has already been paid for by the customer in internet access fees, but for accessing the so-called internet cloud.” In short, the broadband players would like to believe that when you pay for your bandwidth, you’re only paying for the stretch from your access point to their router. Proponents say net neutrality is fundamental to advancing the Internet: if deep-pocketed digital media brands are allowed to pay for faster broadband connections, then relatively young, small ones will be at a competitive disadvantage, they argue.
The anti-net neutrality argument is that ISPs should be able to allocate their resources and establish business partnerships however they deem fit, and allowing the FCC to regulate how they do business would actually stifle innovation.


High Internet use and how it affects Net Neutrality:

An example is the use of a movie streaming service such as Netflix. When sending a small text message such as an email, only a small amount of data needs to move over the Internet. A full-length motion picture in high definition is a dramatically larger piece of data, and it takes up a lot more of the pipe to get from one point to another. It’s estimated that Netflix, during peak movie-watching times such as Saturday night, accounts for as much as a third of all the data moving on the Internet. If every user had to get to the same place on the Internet to start watching a Netflix show, the connections to Netflix would become congested, which in fact can happen. In addition to Netflix, individual Internet users watch YouTube videos, search Google, download files, and listen to music streaming services such as Pandora and Spotify. A common way for individuals to connect to the Internet is to pay an Internet Service Provider (ISP) a fee for a connection; an ISP provides access to everything on the Internet for you as a consumer. However, moving large things like movies and music requires larger, faster and more expensive Internet “pipes” than moving emails and simpler web pages. So who pays for the pipe that delivers the data to you has become one of the hot issues for Net Neutrality. The issue of Net Neutrality for bandwidth comes down to some people having deep pockets and others not. Could someone pay the company that delivers the Internet to consumers a fee to get to people’s homes faster? If they could, should they have to pay? And if so, what happens to the Internet users (like small businesses, schools, or individual websites) who don’t pay?
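The “pipes” arithmetic above can be sketched with rough, assumed sizes (an email of about 50 KB, a two-hour HD movie of about 4 GB) over an assumed 8 Mbit/s link; the numbers are illustrative, not measurements:

```python
# Back-of-the-envelope comparison of how much "pipe" different payloads need.
EMAIL_BYTES = 50 * 1024          # ~50 KB text email (assumed)
HD_MOVIE_BYTES = 4 * 1024**3     # ~4 GB HD movie (assumed)
LINK_BITS_PER_SEC = 8_000_000    # 8 Mbit/s broadband link (assumed)

def transfer_seconds(size_bytes, link_bps=LINK_BITS_PER_SEC):
    """Ideal transfer time, ignoring protocol overhead and congestion."""
    return size_bytes * 8 / link_bps

print(round(transfer_seconds(EMAIL_BYTES), 3))   # -> 0.051 (a blink)
print(round(transfer_seconds(HD_MOVIE_BYTES)))   # -> 4295 (over an hour)
print(HD_MOVIE_BYTES // EMAIL_BYTES)             # -> 83886 emails per movie
```

One movie occupies the link for as long as tens of thousands of emails, which is why streaming, not email, drives the congestion and cost arguments in this debate.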


The following are the major concerns of network neutrality:

1. Non-Discrimination: Internet services should be provided all over the world without any discrimination. Anyone can post or develop their own blogs or website comments. Users can search for anything and search engines will show all available matches without any discrimination.

2. Content Diversity: A service provider cannot change the contents of a website according to its requirements.

3. Commercial Use: Network neutrality governs the rules and principles that are suitable for every business owner. There are no specific boundaries for commercial website and e-business owners.

4. IP Telephones: The IP telephone, which uses Voice over Internet Protocol (VoIP), allows anyone to make a call using a computer connected to the Internet. Voice chats, Skype and other chat services are the best example of VoIP. These should not be restricted.


Why do we want network neutrality in the first place?

1. A free and open internet is the single greatest technology of our time, and control over it should not be at the mercy of corporations.

2. A free and open internet stimulates ISP competition.

3. A free and open internet helps prevent unfair pricing practices.

4. A free and open internet promotes innovation.

5. A free and open internet promotes the spread of ideas.

6. A free and open internet drives entrepreneurship.

7. A free and open internet protects freedom of speech.


An Internet user should be able to connect to any other legal endpoint without interference from the service provider.  This is analogous to ensuring every telephone can call every other telephone, anywhere, without restrictions on connectivity or quality.

From the user’s perspective, net neutrality eliminates:

• Connection, admission and access discrimination;

• Paid prioritization or scheduling of access and/or transport;

• Controls and limitations on applications and content.


Freedom and Net Neutrality:

Freedom is the value that people can do what they want, make their own decisions, and express their own opinions. From the content providers’ perspective, the Internet is a platform that gives tremendous freedom to individual users and innovators. They argue the remarkable success of the Internet is based on “a few simple network principles – end-to-end design, layered architecture, and open standards – which together give consumers choice and control over their online activities”. Academics and interest groups also invoke freedom to support Net neutrality legislation. However, freedom is also invoked by opponents of Net neutrality. For example, one anti-Net neutrality academic argues that “the best broadband policy for the United States would result in lots of choice, innovation, and low prices”. An anti-Net neutrality service provider downplays the extent to which differentiation among users is a hindrance to consumer choice and emphasizes that “what would be a threat to consumers and to free speech is the elimination of competition”. Thus, freedom is used to argue both sides of the debate. We must keep the Internet free and open. If you care about any of the following freedoms, then you should care about preserving net neutrality:

Freedom from monopolies

Freedom to start a business and compete on a level playing field

Freedom of online speech

Freedom to visit any Website you want at the fastest browsing speed


Net neutrality, free speech and media:

According to the Pew Research Center, half of all Americans cite the Internet as their main source for national and international news. For young people, that number is 71 percent. I do not mean to imply that we have reached a point where newspapers are becoming obsolete or that broadcast television is a relic of the past. Much of the news online still comes from broadcast and print outlets, either on their own websites or on other sites that “aggregate” and repeat their content. But the Internet is undoubtedly shaping how we distribute and consume the news today. The future of journalism is inextricably linked with the future of the Internet. That is why Net Neutrality matters and why publishers, journalists and everyone who seeks to influence or contribute to our shared culture should be worried. Verizon could strike a deal with CNN and hinder its subscribers’ abilities to access alternative news sources. Or, once its merger conditions expire, Comcast could slow access to Al Jazeera because it wants to promote its NBC news offerings. Computer scientists at Microsoft have shown that people will visit a website less often if it’s slower than a rival site by more than 250 milliseconds. That’s a blink of an eye. The absence of Net Neutrality means that Internet service providers will have the power to silence anyone who cannot or will not pay their tolls. And that is why, in 2010, Senator Al Franken called Net Neutrality the First Amendment issue of our time. No journalist or creator should be subject to the commercial or political whims of an ISP. True, many of the biggest media companies may be able to afford to pay for prioritization. They may even like the idea because their deep pockets can ensure their content continues to be seen. So it’s not the big guys who would suffer in the absence of net neutrality. 
It is the independent journalists, rising stars and diverse voices who have grown up with and thrived on the open Web who would suffer in the absence of net neutrality.


Broadband providers could take away your most basic rights in the absence of net neutrality:

1. Freedom of the Press

If you wanted to start an international media company 20 years ago, you couldn’t do it on your own. The barriers to entry — getting access to a printing press and developing the complex infrastructure to distribute your work — were huge. With the open Internet, however, anyone can start a news site and publish articles or videos without worrying about whether people can read them. Without net neutrality, ISPs can block or slow down news sites for any reason, commercial or ideological. For example, Comcast could block an online newspaper or slow it to a crawl because it published an op-ed in favor of net neutrality, or even because it reported something negative about the company.

2. Free and Fair Elections

The large ISPs already have a major influence on politics, with Verizon alone spending $53 million on campaign donations and lobbying since 2010. Without net neutrality, however, there’s nothing to stop the ISPs from influencing elections even more directly. Your broadband provider could block the website of one candidate while speeding up that of another. It could even censor the sites of political action committees that support a viewpoint or candidate it opposes. Back in 2007, Verizon initially refused to send out text messages from a pro-abortion-rights group, but backed down under pressure. What if AT&T decided one day that it opposed capital punishment so much that it blocked the sites of any gubernatorial candidates who supported the practice? Activists of any stripe should be concerned about their right to publish content that an ISP might disagree with. In 2005, the Canadian ISP Telus blocked the site of a labor group that encouraged its workers to strike. Even more insidiously, ISPs could selectively block government websites that provide voter information like polling locations and registration forms; if they succeeded in lowering voter turnout in certain areas, that could change the course of an election.

3. Freedom of Association

You always talk to your mom on Skype, but then your ISP signs an exclusive deal to make Google Hangouts its only allowed chat service. Meanwhile, Mom’s ISP on the other side of the country serves Hangouts at unusable speeds, but gives Skype its fast lane. This scenario may sound crazy, but without any legal constraint, your ISP has every incentive to swing priority access deals with some messaging services while blocking others. There’s already a precedent for blocking messaging clients in the world of wireless broadband. Back in 2009, AT&T blocked the iPhone from making Skype calls on its mobile network, but relented under pressure from the FCC. In 2012, the company also blocked Apple FaceTime on the iPhone. Of course, you and your mom can always talk on old-fashioned landline phones if you both still have them. Unlike broadband providers, wired phone services are defined as common carriers and are legally obligated to accept calls from anyone. VoIP services such as Vonage are exempt, as are cell phone carriers.

4. Freedom to Start a Business

If you opened a restaurant in town and a street gang demanded money not to destroy your place, you’d call the police. But if your business lives on the Internet, you could have as many as a dozen different ISPs in the U.S. shaking you down and, without net neutrality, no legal recourse against them. The most important question raised by net neutrality is not “should the government regulate the Internet?” but “should a dozen ISPs be allowed to control thousands of other companies?” Whether you’re trying to start the next Netflix or you’re a mommy blogger eking out a living on ad revenue, you could be forced to pay broadband providers in order to reach their customers. If you can’t pay, providers could slow your site or service down to the point that nobody wants to use it. The smart money is already abandoning many Internet startups.

5. Freedom of Choice

You like to do all your shoe shopping at Zappos, but your ISP has an exclusive clothing deal with Walmart, so it slows Zappos down so badly that each page takes a minute to load and you get timeout messages when submitting your credit card information. You might be determined enough to keep visiting your favourite online shoe store despite these roadblocks, but most people won’t be. You’ve been using Gmail as your primary email address for years, but your ISP decides to slow down that service and speed up Microsoft’s instead. How long will you stick with the slow email over the fast one? In a world where ISPs can slow down or outright block whatever services they like, your freedom to choose everything from your email client to your online university could disappear.

6. Freedom of expression

Proponents of net neutrality believe that an open internet, where users can connect to any site or use any application, is the best guarantee of freedom of expression. They fear that traffic-control techniques like DPI represent a step toward censorship, whereby governments could censor (or pressure commercial companies to censor) opposing points of view. By blocking or slowing down certain sites, or even just excluding certain services from specialised offers, network operators could make it harder for citizens to access sites expressing certain points of view. Opponents of net neutrality regulation suggest that guidelines could indicate what kinds of traffic management techniques are permitted and under what circumstances (e.g. judicial supervision). One legal scholar has argued that private organisations (most ISPs are private) performing reasonable traffic management, including prioritising traffic, would likely not be acting contrary to the European Convention on Human Rights (though practices clearly aimed at restricting competition or media plurality would be). On the other hand, perhaps surprisingly, a very strict codification of net neutrality principles might by the same measure be held to restrict unfairly the freedom of ISPs to offer different levels of service (like different classes on airlines) and manage their businesses as they saw fit.

7. Privacy

When your ISP is using DPI to read your data, it is violating your privacy.

8. Equality

Equality is the state of being equal, especially in having the same rights, status and opportunities. The value of equality is invoked in this case to refer to network players and consumers having the same rights and opportunities. Proponents of Net neutrality claim that service providers “should not discriminate among content or application providers”. To assure equal competition, Net neutrality regulation is thus viewed as necessary by these advocates. Service providers, not surprisingly, view equality differently. They argue that discrimination does not exist in the reality of competition between service providers, and that it is inappropriate to rely excessively on equality. For example, in the words of one service provider, “Unfortunately, because network neutrality seems like such a sensible idea and has so much momentum, various parties have sought to extend the definition beyond this basic principle — in ways that favor their own interests and which are, ironically, non-neutral”. Thus, the two sides of the Net neutrality debate hold very different and contrasting views on equality.

9. Creativity

Creativity is the ability to create new ideas or things through uniqueness and imagination. Both proponents and opponents of Net neutrality agree on the need for innovation. As one content provider explains, “It is innovation, not legislation that created our service and brought this competition to consumers”. He further urges, “The Internet remains an open and competitive foundation for innovation”. Service providers also see the importance of investment in innovation, noting that “we need to ensure that government policy encourages vigorous investment in continually upgrading network capacity”. Thus, Net neutrality supporters and opponents agree that creativity is an important value in this debate.

10. Social justice

Social justice is related to correcting injustice and caring for the weak. Net neutrality proponents say that net neutrality provides a level playing field to start-ups, dissidents, the underprivileged, the oppressed and small entrepreneurs. Net neutrality opponents frequently invoke social justice to support the notion that “those who cause the costs should be charged”, in the words of one academic. As an interest group representative explains, “businesses that seek to profit on the use of next-generation networks should not be free of all costs associated with the increased capacity that is required for delivery of the advanced services and applications they seek to market”. Thus, Net neutrality opponents also frame Net neutrality as a social justice issue.


There are two ways to undermine net neutrality. One is to segregate the premium market from the rest by allowing telcos to charge premium prices for high-quality bandwidth. Another is to segregate the low end from the rest through initiatives such as Facebook’s The result will be identical – a segregated market that goes against the concept of the Internet as a utility where users pay fees that match long-term average costs.

The Reliance and Airtel playbook is simple:

1. Restrict access under the guise of a public good.

2. Charge companies like Facebook, WhatsApp and Skype for access or start charging users additional fees on top of data plans.

3. Anti-competitive: give preferential treatment to in-house content. For example, if startup X creates a disruptive new product, the telco conglomerate can copy it and use the guise of or Airtel Zero to gain distribution.


There are many reasons why Net Neutrality is not respected, among the most frequent ones are:

1. Access providers violate Net Neutrality to optimise profits

Some Internet access providers demand the right to block or slow down Internet traffic for their own commercial benefit. Internet access providers are not only in control of Internet connections; they also increasingly provide content, services and applications, and are looking for the power to become the “gatekeepers” of the Internet. For example, the Dutch telecoms provider KPN tried to make its customers use KPN’s own text-messaging service instead of web-based chat services by blocking those free services. Another notable example of discrimination is T-Mobile’s blocking of Internet telephony (Voice over IP) services, provided for example by Skype, in order to give priority to its own and its business partners’ services.

2. Access providers violate Net Neutrality for privatised censorship

In the UK, blocking measures by access providers have frequently been misused to block unwanted content. For instance, on 4 May 2012, the website of the anti-violence advocates Conciliation Resources was accidentally blocked by child protection filters on UK mobile networks. Another example is Virgin Media. The company provides access to the Internet and increasingly uses Deep Packet Inspection; it is now using this same privacy-invasive technology to police its network in an attempt to protect its own music business. In all of these cases, private companies police their users’ connections to censor what they guess may be unwanted content.

3. Access providers violate Net Neutrality to comply with the law

Governments are increasingly asking access and service providers to restrict certain types of traffic, and to filter and monitor the Internet to enforce the law. A decade ago, only four countries worldwide filtered and censored the Internet; today, over forty do. In Europe, website blocking has been introduced, for instance, in Belgium, France, Italy, the UK and Ireland. This is done for reasons as varied as protecting national gambling monopolies and implementing demonstrably ineffective efforts to protect copyright.


India and Net Neutrality:

There’s a big debate on this going on in the United States, but why in India?

India has a billion people without internet access, and it is imperative in a democracy to have an open and free internet where users are free to choose the services they want to access—instead of a telecom operator deciding what information they can access. Internet apps and services are expected to contribute 5% to India’s GDP by 2020. That will only happen if entrepreneurs, big and small, have a level playing field that encourages innovation and non-preferential treatment—something that net neutrality ensures. Without net neutrality, only the big players would be able to strike deals with telcos while smaller players remained inaccessible. The problem began when Indian telecom players like Airtel, Vodafone and Reliance realised that users were replacing traditional texting with WhatsApp or Viber and traditional network calling with apps such as Skype. They now want the right to charge what they want, when they want and how they want. In effect, if Airtel doesn’t like YouTube but wants to push its own video app Wynk, it wants the right to offer Wynk for free while charging you a bomb to access YouTube. Reliance already has a Facebook-driven scheme called, where you can access Bing for free but have to pay to access Google, and you have access to BabaJob for free while you have to pay for rival job portals.


Net neutrality protest in India:


Although Indian telecom companies’ argument that they have invested a lot in buying spectrum and building infrastructure is not without merit, equal access to the internet can’t be compromised, for two basic reasons. The first is that if a data provider enters into a tie-up with a giant like Facebook to provide free access to it while charging money from its rivals – most of them very small players – it kills entrepreneurship and innovation. The second clinching argument in support of net neutrality is that once the telecom companies have charged for the data, they have no right to tell the user where to use it. What you do with the data you pay for — watch a YouTube video, send a WhatsApp message or make a Skype call — is entirely your prerogative.


On December 25, 2014, Airtel, the country’s largest mobile operator with over 200 million active subscribers, dropped a bombshell: it wanted to charge customers extra for using services like Skype, Viber and Google Hangouts even though they had already paid for Internet access. If customers wanted to use a service that used Internet data to make voice calls — something known as VoIP — they would need to subscribe to an additional VoIP pack, the company said. Airtel was double-dipping, and customers were furious. The tweets flew thick and fast, and in less than four days Airtel backtracked on its plans. It’s important to remember that it’s not just telecom companies that are interested in a non-neutral Internet in India. According to the TRAI consultation paper, 83 percent of India’s Internet users access the Internet from their mobile phones. This massive audience is crucial for multi-billion dollar corporations like Twitter, Facebook and Google. In February 2015, Reliance Communications and Facebook partnered to launch in India, a service whose ostensible aim was to bring the Internet to the next billion people. In reality, grossly violated net neutrality by offering free access to a handpicked list of websites and social networks while making users pay for others; Google bundled free data with its Android One phones; and WhatsApp tied up with multiple providers across the country to provide “WhatsApp Packs.” But if things are bad for consumers, they’re worse for businesses and startups that rely on an open Internet to reach customers. Telecom operators should be seeking to maximise revenues by making us use more of the Internet; instead, they’re slicing the pie rather than growing it.


Services like WhatsApp that have been adopted by the Indian public are themselves a result of innovation that the telcos did not do on their own. WhatsApp succeeded in India because the Multimedia Messaging Service (MMS) provided by telcos was prohibitively expensive and really hard to use. So instead of making money per message as they intended, telcos now make money out of running the pipes, something they have the license for. By creating their own walled gardens in this free, equal and open internet, they are trying to impose their own distribution channels, which would prevent more disruptions like WhatsApp in the future. And in the case of a service like SMS the user was charged only at one end, whereas with WhatsApp both the sender and the receiver are billed for the data they consume.


Telcos say there is evidence of OTT communication services cannibalizing their revenues. Messaging revenues have already declined from 7-10% to 3%. Further, VoIP services like Skype and Viber have begun to erode voice telephony revenues, a decline at present far more evident in the international calling segment. The revenue earned by telecom operators for one minute of traditional voice is Re. 0.50 on average, compared with around Re. 0.04 of data revenue for one minute of VoIP usage, which is 12.5 times lower. This indicates that the substitution of voice with data is bound to adversely impact the operators’ revenues, and consequently both their infrastructure spending and the prices consumers pay.
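The substitution arithmetic above can be checked directly; the per-minute rates are the figures quoted in the text, treated as illustrative averages, and the 100-minute subscriber is an invented example:

```python
# Per-minute revenue figures cited in the text (Rs per minute, averages).
VOICE_REV_PER_MIN = 0.50       # traditional voice call
VOIP_DATA_REV_PER_MIN = 0.04   # data revenue for one minute of VoIP

# Each minute shifted from voice to VoIP earns the operator far less:
ratio = VOICE_REV_PER_MIN / VOIP_DATA_REV_PER_MIN
print(ratio)  # -> 12.5

# Illustrative: revenue lost if a subscriber moves 100 voice minutes
# per month to VoIP services like Skype or Viber.
lost = 100 * (VOICE_REV_PER_MIN - VOIP_DATA_REV_PER_MIN)
print(round(lost, 2))  # -> 46.0 (Rs per subscriber per month)
```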


What will happen if there is no net neutrality?



If there is no net neutrality, ISPs will have the power (and the inclination) to shape internet traffic so that they can derive extra benefit from it. For example, several ISPs believe that they should be allowed to charge companies for services like YouTube and Netflix because these services consume more bandwidth than a normal website; basically, these ISPs want a share of the money that YouTube or Netflix make. Without net neutrality, the internet as we know it will not exist. Instead of free access, there could be “package plans” for consumers. For example, if you pay Rs 500, you will only be able to access websites based in India; to access international websites, you may have to pay more. Or there could be different connection speeds for different types of content, depending on how much you are paying for the service and what “add-on package” you have bought. Lack of net neutrality will also spell doom for innovation on the web. It is possible that ISPs will charge web companies to enable faster access to their websites, and those who don’t pay may find that their websites open slowly. This means bigger companies like Google will be able to pay to make access to YouTube or Google+ faster for web users, while a startup that wants to create a different and better video hosting site may not be able to do so. With the loss of net neutrality, small businesses, including those in the digital creative industries, would become strained or simply unable to compete with larger, more established businesses purely because of their inability to pay cable companies for fast lanes. Irving [2014] writes, “under the current proposal, most small business websites in America could be relegated to the slow lane – transformed into second-class players overnight.” The inability of smaller companies within the digital creative industries to promote themselves and do business over the internet would mean the end for most of them.
And thus, an internet without net neutrality could look very different from the internet we’re all familiar with today. Of course, the larger, more familiar websites like Google (and YouTube), Facebook, Twitter, Netflix, Reddit, WordPress etc. would all look and function in much the same way. After all, despite their protests against cable companies, these large corporations are the ones with the resources and funds to pay for the fast lanes and stay in business.


What will happen to your Website if Net Neutrality is lost?

1. Slower internet

One of the most immediate and obvious effects of a tiered internet would be slower service for lower-paying customers. Slow upload and download speeds are among the top pet peeves for customers, and an added site load time of even one second can decrease conversion by seven percent. If a significant proportion of your customers are on a lower-tiered plan, that could mean a huge bounce rate for your site.
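To make the “one second costs seven percent of conversions” figure concrete, here is a small illustrative Python sketch. The visitor count and baseline conversion rate are invented assumptions for the example, not figures from any study:

```python
# Hypothetical illustration of the "1 second of delay ≈ 7% conversion loss"
# figure cited above. Visitor and rate numbers are invented assumptions.

def projected_conversions(visitors, base_rate, delay_seconds, loss_per_second=0.07):
    """Scale a baseline conversion rate down by 7% per second of added delay."""
    effective_rate = base_rate * (1 - loss_per_second) ** delay_seconds
    return visitors * effective_rate

# A store with 10,000 visitors and a 2% baseline conversion rate:
baseline = projected_conversions(10_000, 0.02, delay_seconds=0)
throttled = projected_conversions(10_000, 0.02, delay_seconds=3)
print(round(baseline), round(throttled))  # 200 vs. roughly 161
```

Three seconds of throttling-induced delay would, under this rough model, cost such a site about a fifth of its conversions.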

2. SEO (search engine optimization):

A significant portion of the population being subjected to internet throttling could also have strong and limiting implications for SEO.

There are two major ways that search results could be affected:

•Search results are limited to only the sites that certain subscriber levels are able to access

•Search results remain mostly the same, and users have to guess blindly to find a result that is covered by their subscription or one that won’t choke under narrow bandwidth
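The first scenario can be sketched in a few lines of Python. The plan names, domains, and whitelists below are invented for illustration; the point is only that tier-filtered search silently hides smaller sites from lower-tier subscribers:

```python
# Sketch of tier-filtered search results. Plans and domains are invented.

PLAN_WHITELIST = {
    "basic":   {"bigstore.example", "video.example"},
    "premium": {"bigstore.example", "video.example", "smallshop.example"},
}

def visible_results(results, plan):
    """Drop any result whose domain falls outside the subscriber's tier."""
    allowed = PLAN_WHITELIST[plan]
    return [r for r in results if r["domain"] in allowed]

results = [{"domain": "smallshop.example", "rank": 1},
           {"domain": "bigstore.example",  "rank": 2}]

# A "basic" subscriber never sees the small shop, however relevant it is:
print(visible_results(results, "basic"))
```

Note that the small shop ranked first on relevance, yet it vanishes entirely for the lower tier.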

3. Simpler content

With throttled bandwidth, sites may be pushed toward simpler content. Loading a GIF is essentially the equivalent of loading 10-50 still images – which is why GIFs often load more slowly than JPEGs and even YouTube videos – and on a slow-lane connection such heavy formats quickly become impractical.
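The arithmetic behind this is simple transfer time: payload size divided by bandwidth. The payload sizes and tier speeds in this Python sketch are illustrative assumptions, not measurements:

```python
# Rough transfer-time arithmetic for a photo vs. an animated GIF under
# different bandwidth tiers. All sizes and speeds are invented examples.

def load_time_seconds(payload_kb, bandwidth_mbps):
    # kilobytes -> kilobits (x8); megabits/s -> kilobits/s (x1000)
    return (payload_kb * 8) / (bandwidth_mbps * 1000)

jpeg_kb, gif_kb = 100, 3000  # one photo vs. a short animated GIF
for tier, mbps in [("fast lane", 25), ("slow lane", 1)]:
    print(tier,
          round(load_time_seconds(jpeg_kb, mbps), 2), "s (JPEG)",
          round(load_time_seconds(gif_kb, mbps), 2), "s (GIF)")
```

At 25 Mbps the GIF loads in under a second; at 1 Mbps it takes 24 seconds, which is exactly the kind of gap that forces publishers toward simpler content.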

4. Effect on entertainment & education sites

Slower web speeds mean that rich-content sites like Buzzfeed, Udacity, and other sites that rely on images and video may suffer decreased performance, and may be forced to simplify if they wish to keep appealing to a wide audience.

5. Effect on e-commerce sites

This is not restricted to entertainment sites – many e-commerce sites use rich content such as interactive visuals and product videos to explain their product details. Watching product videos can be an essential part of the online shopping experience. Testing across thousands of websites shows that rich content on e-commerce sites is highly valued by users, and is often requested when it is not already present. Can you imagine shopping on Apple’s site without all those demos and details?


Net neutrality affects the poor the most:

School, public, and college libraries rely upon the public availability of open, affordable internet access for school homework assignments, distance learning classes, e-government services, licensed databases, job-training videos, medical and scientific research, and many other essential services. We must ensure the same quality of access to online educational content as to entertainment and other commercial offerings. Without net neutrality, we are in danger of prioritizing Mickey Mouse and Jennifer Lawrence over William Shakespeare and Teddy Roosevelt. This may maximize profits for large content providers, but it minimizes education for all. And with education comes innovation: while we tend to glorify industrial-park incubators and think-tanks, many of the innovative services we use today were created by entrepreneurs who had a fair chance to compete for web traffic. By enabling internet service providers to limit that access, we are essentially saying that only the privileged can continue to innovate. Meanwhile, small content creators, such as bloggers and grassroots educators, would face challenges from ISPs placing restrictions on information traveling over their networks.

Protecting net neutrality and considering its effect on libraries isn’t just a feel-good sentiment about education and innovation, however. Network neutrality is an issue of economic access, because those who can’t afford to pay more for internet services will be relegated to the “slow lane” of the information highway. Many institutions and organizations will not want to be disadvantaged by slow load times, but they will not be able to afford the ISPs’ fees, so they will charge consumers. Want to get the news, your health report, or your homework done in a reasonable period of time? Pay extra!

Who will be most hurt by the end of net neutrality?

-Not big corporations. They will pay up and probably pass the cost to the consumer.

-Not the 1%. They will pay what they need to pay to get fast internet.

The people who will be hurt the most are those who need the internet most:

-Dissident, radical, but also innovative and entrepreneurial voices — people with new and different ideas.

-Small business owners, especially owners of businesses that currently use the “cloud” to store data and to connect employees.

-Educators and librarians who will be stuck on the back roads of the internet.

-Moderate- to low-income people who will undergo frustrating waits to get information because they can’t pay for the fast lane.




Zero rating:

Zero-rating (also called toll-free data or sponsored data) is the practice whereby mobile network operators (MNOs) and mobile virtual network operators (MVNOs) do not charge end customers for data used by specific applications or internet services on limited or metered data plans. It allows customers to use data services like video streaming without worrying about bill shock, which could otherwise occur if the same data were charged under their data plans. Internet services like Facebook, Wikipedia and Google have built special programs that use zero-rating as a means to provide their services more broadly in developing markets. The benefit for these new customers, most of whom rely on mobile networks to connect to the Internet, is subsidised access to the services of these providers. The results of these efforts have been mixed, with adoption in a number of markets, sometimes overestimated expectations, and a perceived lack of benefits for mobile network operators. In Chile, the Subsecretaría de Telecomunicaciones ruled that this practice violated net neutrality laws and had to end by June 1, 2014.

Zero-rating is essentially the practice of providing consumers with free access through sponsored data plans, arising out of the nexus between telecom companies and well-funded portals, websites and apps. While many may find nothing wrong with this practice, it is important to understand that, from the perspective of users, it will fragment the internet into a free part, much akin to a walled garden, and a non-free part. This goes against the very DNA of the internet and its egalitarian nature, which is about universal access to all sites without limitations placed by the telecom companies or by content providers trying to act as gate-keepers.
Zero-rated mobile traffic is blunt, anti-competitive price discrimination designed to favor telcos’ own or their partners’ apps while placing competing apps at a disadvantage. A zero-rated app is an offer consumers can’t refuse: if consumers choose a third-party app, they will either need to use it only over Wi-Fi or pay telcos hundreds of dollars to use data over 3G networks on their smartphones or tablets.
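The billing mechanics behind zero-rating are simple to sketch: on a metered plan, traffic from sponsored apps is excluded from the usage counted against the cap. The app names and traffic figures below are hypothetical:

```python
# Minimal sketch of zero-rated billing on a metered plan. App names and
# session volumes are invented for illustration.

ZERO_RATED = {"sponsored_social", "sponsored_video"}

def billable_mb(sessions):
    """Sum only the traffic that actually counts against the subscriber's cap."""
    return sum(mb for app, mb in sessions if app not in ZERO_RATED)

usage = [("sponsored_video", 500), ("rival_video", 500), ("web", 100)]
print(billable_mb(usage))  # 600: the sponsored 500 MB is free
```

The competitive distortion is visible in the example: the rival video service consumed exactly as much data as the sponsored one, yet only the rival’s 500 MB shows up on the subscriber’s bill.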


A problem worsened by volume caps:

Zero-rating isn’t new. Telcos have been zero-rating their fixed broadband IPTV offerings from day one. The difference is that the overwhelming majority of fixed broadband connections were, and still are, volume-uncapped. Unlike fixed lines, internet over smartphones and tablets comes with very restrictive volume caps in most markets. Against such caps, zero-rated mobile traffic doesn’t need to be delivered at higher speeds or with a higher quality of service, nor does it need to be prioritized, to enjoy an advantage.


Airtel’s defence of zero rating:

If the application developer is on the platform they pay for the data and their customer does not. If the developer is not on the platform the customer pays for data as they do now. Companies are free to choose whether they want to be on the platform or not. This does not change access to the content in any way whatsoever. Customers are free to choose which web site they want to visit, whether it is toll free or not. If they visit a toll free site they are not charged for data. If they visit any other site normal data charges apply. Finally every web site, content or application will always be given the same treatment on our network whether they are on the toll free platform or not. As a company we do not ever block, throttle or provide any differential speeds to any web site. We have never done it and will never do it. We believe customers are the reason we are in business. As a result we will always do what is right for our customers.
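Airtel’s description reduces to a simple rule: if the site is on the toll-free platform, the developer pays for the data; otherwise the customer pays as usual. A hypothetical sketch (the domain names are invented):

```python
# Sketch of the toll-free-platform rule described above. Domains invented.

TOLL_FREE_PLATFORM = {"partner_app.example"}

def who_pays_for_data(site):
    """Developer pays on the toll-free platform; the customer pays elsewhere."""
    return "developer" if site in TOLL_FREE_PLATFORM else "customer"

print(who_pays_for_data("partner_app.example"))  # developer
print(who_pays_for_data("startup.example"))      # customer
```

The rule never blocks or slows any site, as Airtel stresses; the net-neutrality objection is about who can afford to be in `TOLL_FREE_PLATFORM` at all.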


Digital divide, net neutrality and zero rating:

A digital divide is an economic and social inequality according to categories of persons in a given population in their access to, use of, or knowledge of information and communication technologies. The divide within countries (such as the digital divide in the United States) may refer to inequalities between individuals, households, businesses, or geographic areas, usually at different socioeconomic levels or other demographic categories. The divide between differing countries or regions of the world is referred to as the global digital divide, examining this technological gap between developing and developed countries on an international scale.


Is net neutrality more important than internet access through zero rating to reduce the digital divide?

Millions more people were using the Internet in Africa because of Facebook’s free-access platform, and job search was among the most popular activities. That’s amazing when you think about it: by offering a portion of the Internet for free (which definitely goes against net neutrality), millions of people – too poor or previously unwilling to pay for the net – were online and searching for better livelihoods within a few months.

The problem with the Internet in India today – the one that exists now, where most bits are charged the same – is that it’s far too expensive for most folks. According to TRAI, only 20% of India is online – and that includes all those people who have turned on 2G just once on their phones. In short, the Internet is simply not relevant enough to most Indians. It’s largely in English, it’s expensive, and it’s full of content that’s not particularly relevant for most women, the elderly, the poor, the under-educated and the rural. In many other markets – TV being a great example – content was made free and then supported by advertising: a limited number of broadcasters chose the TV shows that aired and paid for it all with ads. This made TV accessible to everyone, assuming they could afford the price of entry of a TV set. You could argue that a broadcaster choosing a TV show went against the concept of “TV-neutrality”, but in exchange everyone got free content, aka TV shows. In Facebook’s case, they included sites like Babajob that work on low-end phones, made their sites available in local languages, and offered users something useful (like the chance to find a better job or get health information).

In markets like the US, where Internet penetration is much higher, the debate about net neutrality is much more relevant. But if we hope to bring most of the Indian population online, something’s got to give. Either the government needs to stop charging a bundle for bandwidth licenses (witness the recent $18 billion 3G and LTE spectrum auction – who do you think will ultimately pay for that? Internet users), or those in power need to stop taking bribes to put up cell towers and fiber across the country (again, who ultimately pays? Mobile and internet users), or new business models must be developed so that net users pay less or nothing (e.g. people watch ads, sponsored platforms cover their fees, companies that can afford it pay for their users’ bandwidth, richer users subsidise internet access for the poor, etc.).

Arguments about net neutrality shouldn’t be used to prevent the most disadvantaged people in society from gaining access, or to deprive people of opportunity. Eliminating programs that bring more people online won’t increase social inclusion or close the digital divide; it will only deprive all of us of the ideas and contributions of the two thirds of the world who are not connected. We must also realize that this is India, not the US, not China. Over 60 per cent of the Indian population lives on less than US$2 per day, and Indian market dynamics are very different from the rest of the world. We must give room to both sides of the industry to experiment, to bring the cost of the internet down significantly for the billion-plus population of India without blocking, throttling or discriminating against any service – with the basic principles of net neutrality intact. Because at the end of the day that’s what matters: bringing over a billion Indians online.

Technically, the platform is open for any website or app to join, but as Zuckerberg notes, it would be impossible to give the entire Internet away for free. “Mobile operators spend tens of billions of dollars to support all of Internet traffic,” he writes. “If it was all free they’d go out of business.” That means most services must necessarily be left out if the platform is to be financially viable for carriers.
This creates a system of fundamentally unequal access, both for the companies trying to reach these users and for the users themselves. Facebook founder Mark Zuckerberg has said that ‘some access to the Internet is better than none at all and unequal access is better than no access’. But let’s start with the fact that only Reliance customers are eligible for free, selective Internet access through Facebook’s platform. Would any telecom operator that has been crying foul over losses in revenue due to people using cheaper communication options like WhatsApp offer the web at no charge unless there was profit in it? By tempting users with free Internet, network operators can ensure they reach a wider audience.

Facebook’s free-access initiative, which provides select apps and services to mobile phone users in emerging markets for free, recently passed 800,000 subscribers in India. Working with local telecom companies there, the service provides access to around 30 websites and services without charging the user for the mobile data necessary to use them; these include Facebook and Wikipedia. 20% of users currently on the platform did not previously access mobile data. Only 7% of the data used by subscribers came through the initiative’s free, zero-rated offerings; other, paid services accounted for the remaining 93%. This suggests that zero rating mainly gives customers an initial on-ramp to the internet; later on, it largely becomes a paid service. Studies have shown that internet access reduces poverty and creates jobs.
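The quoted figures can be turned into back-of-the-envelope numbers. The total monthly traffic volume below is an invented round number; only the 800,000-subscriber, 20% and 7%/93% figures come from the text:

```python
# Back-of-the-envelope arithmetic on the figures quoted above. The total
# traffic volume is a hypothetical round number for illustration.

subscribers = 800_000
new_to_mobile_data = int(subscribers * 0.20)   # 160,000 first-time data users

total_traffic_tb = 100                         # assumed monthly total
free_tb = total_traffic_tb * 0.07              # zero-rated share
paid_tb = total_traffic_tb * 0.93              # paid share

print(new_to_mobile_data, free_tb, paid_tb)
```

Whatever the true traffic total, the 7/93 split means paid usage dwarfs the free on-ramp by more than thirteen to one.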


Are there better ways to bridge the digital divide than zero-rating platforms like Airtel Zero?

The strongest argument in favour of zero-rating is that it helps broaden access and bring the hitherto excluded population onto the internet. In India, this means 80% of the population, which underscores the huge digital divide that we need to bridge. While this is a noble goal, what needs to be understood is that the scope for abuse of market power through such zero-rated services is tremendous. It is ironic that the very websites, once startups themselves, which benefited from the level playing field implicit in the principles of net neutrality are now engaged in a bid to expand their reach in a way that damages the internet as we know it and skews the balance against current startups. This practice is akin to the eventually disallowed practice of Microsoft bundling its own browser, Internet Explorer, with its operating system. The issue of access and the digital divide can just as easily and cost-effectively be addressed through other transparent, competition-enhancing methods, such as direct cash transfers to poor people by the government.


Neither Facebook’s platform, Airtel Zero, nor any other major zero-rating platform gives the choice to the consumer. Instead, the decisions are made by big telcos working in partnership with large Internet companies. Smaller firms are forced to lobby and sign up commercially to prevent their competitors from locking them out and crushing them. This reduces entrepreneurship and local Internet innovation by placing firms in a situation where their local consumers are all locked in to a limited platform under the control of a few giants. This is why regulators in Chile, the Netherlands, Slovenia and Canada have prohibited zero-rating, while their counterparts in Germany, Austria and Norway have publicly stated that zero-rating violates network neutrality. At times, this is a battle between access and neutrality. Facebook and Wikipedia are pitching free Internet access as a means of bringing the Internet to more people who can’t afford it. Facebook is using this to become the gateway to the Internet on mobile, so that access to the web runs through it. Remember that Google won its dominance by creating a great search product; it too is a gateway to the Internet, but mostly via desktops and laptops.


Even if it were the case that some zero-rating programs create barriers to market entry for new start-ups, as net neutrality supporters argue, India may need to consider that not all zero-rating programs are likely to create such barriers. Further, India may need to balance any potential loss against the immediate benefit that zero-rating programs can provide by expanding access to Internet services. These platforms could provide rural areas with mobile access to basic search engines, social platforms, and e-commerce sites. That access could help small business owners and farmers tap into a larger market for their goods, and could bring basic education and information to rural areas. Even outside of the zero-rating context, policymakers in India are crafting new telecom regulations to achieve a greater balance between the benefits of net neutrality and the opportunities for more widespread Internet access.



Search neutrality:


Search neutrality can be seen as one component of the broader principle of net neutrality:


The neutrality of search engines is called search neutrality. If ISPs should be subjected to “net neutrality,” should companies like Google be subjected to “search neutrality”? Search neutrality is the principle that search engines should have no editorial policies other than that their results be comprehensive, impartial and based solely on relevance. This means that when a user queries a search engine, the engine should return the most relevant results found in the provider’s domain (those sites which the engine has knowledge of), without manipulating the order of the results (except to rank them by relevance), excluding results, or in any other way skewing the results towards a certain bias. Search neutrality should be understood as the remedy to conduct that involves any manipulation or shaping of search results, conduct commonly known as “search bias”. Here, search neutrality should be understood in its broadest sense: the idea that search results should be free of political, financial or social pressures, and that their ranking is determined by relevance, not by the interests or opinions of the search engines’ owners.

The importance attributed to search neutrality and search bias in recent years is closely linked to the role that search engines play in our information society. Search engines are currently the “gatekeepers” of considerable amounts of information scattered over the World Wide Web. Many users consider search engines to be the most important intermediaries in their quest for information, and believe that search engines are reliable without realising that they have the power to hide and to show democratically sensitive information. Search neutrality is related to network neutrality in that both aim to keep any one organization from limiting or altering a user’s access to services on the Internet.
Search neutrality aims to keep the organic search results (results returned because of their relevance to the search terms, as opposed to results sponsored by advertising) of a search engine free from any manipulation, while network neutrality aims to keep those who provide and govern access to the Internet from limiting the availability of resources to access any given content. Google is in the uncomfortable position of trying to stave off a corollary principle of search neutrality. Search neutrality has not yet coalesced into a generally understood principle, but at its heart is some idea that Internet search engines ought not to prefer their own content on adjacent websites in search results but should instead employ “neutral” search algorithms that determine search result rankings based on some “objective” metric of relevance.  Whatever the merits of the net neutrality argument, a general principle of search neutrality would pose a serious threat to the organic growth of Internet search. Although there may be a limited case for antitrust liability on a fact-specific basis for acts of naked exclusion against rival websites, the case for a more general neutrality principle is weak. Particularly as Internet search transitions from the ten blue links model of just a few years ago to a model where search engines increasingly provide end information and interface with website information, a neutrality principle becomes incoherent.
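The idea of ranking “based solely on relevance” can be illustrated with a toy scorer. Real engines use far richer signals; the scoring function, page texts, and domain names below are all invented for the example:

```python
# Toy illustration of "neutral" ranking: pages ordered purely by a simple
# relevance score (query-term frequency), with no preference for the
# engine's own properties. All data and names are invented.

def relevance(query, page_text):
    """Count how often the query's terms appear in the page text."""
    terms = query.lower().split()
    words = page_text.lower().split()
    return sum(words.count(t) for t in terms)

pages = {
    "own-service.example":   "price comparison deals",
    "rival-service.example": "compare prices and compare deals",
}
query = "compare prices"
ranked = sorted(pages, key=lambda p: relevance(query, pages[p]), reverse=True)
print(ranked)  # the rival outranks the engine's own page on relevance alone
```

Under such a neutral scorer the rival page wins on merit; the conduct critics call “search bias” would be any extra step that reorders `ranked` to put the engine’s own property first regardless of score.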


Search engines produce immense value by identifying, organizing, and presenting the Internet’s information in response to users’ queries. Search engines efficiently provide better and faster answers to users’ questions than alternatives. Recently, critics have taken issue with the various methods search engines use to identify relevant content and rank search results for users. Google, in particular, has been the subject of much of this criticism on the grounds that its organic search results—those generated algorithmically—favor its own products and services at the expense of those of its rivals. Almost four years have now passed since the European Commission started to investigate Google’s behaviour for abuse of dominant position in the Internet search market. During the investigation Google was accused of favourably ranking its own vertical search services to the detriment of its rivals. Competitors and other stakeholders argued that Google should be regulated through a “search neutrality principle”. Similar claims were expressed during the US Federal Trade Commission investigation relating to the same abusive conduct of Google. An independent analysis finds that own-content bias is a relatively infrequent phenomenon. Google references its own content more favorably than rival search engines for only a small fraction of terms, whereas Bing is far more likely to do so.


It is widely understood that search engines’ algorithms for ranking various web pages naturally differ. Likewise, there is widespread recognition that competition among search engines is vigorous, and that differentiation between engines’ ranking functions is not only desirable, but a natural byproduct of competition, necessary to survival, and beneficial to consumers. Rather than focus upon competition among search engines in how results are identified and presented to users, critics and complainants craft their arguments around alleged search engine “discrimination” or “bias.” While a broad search neutrality principle is neither feasible nor desirable, this does not mean that dominant search engines should never be liable for intentionally interfering with their rivals’ hits in search results. Any such liability should be narrow, carefully tailored, and predictable. Search neutrality may thus have a future, not as a general principle, but as the ill-fitting tag line on fact-specific findings of egregious abuses by dominant search engines.


Search engines are attention lenses; they bring the online world into focus. They can redirect, reveal, magnify, and distort. They have immense power to help and to hide, and we use them, to some extent, always at our own peril. Of the many ways that search engines can cause harm, the thorniest problems stem from their ranking decisions. The need for search neutrality is particularly pressing because so much market power lies in the hands of one company: Google. With 71 percent of the United States search market (and 90 percent in Britain), Google’s dominance of both search and search advertising gives it overwhelming control. Google’s revenues exceeded $21 billion last year, but this pales next to the hundreds of billions of dollars of other companies’ revenues that Google controls indirectly through its search results and sponsored links. One way that Google exploits this control is by imposing covert “penalties” that can strike legitimate and useful Web sites, removing them entirely from its search results or placing them so far down the rankings that they will in all likelihood never be found. Consider an example. The U.K. technology company Foundem offers “vertical search” – it helps users compare prices for electronics, books, and other goods. That makes it a Google competitor. But in June 2006, Google applied a penalty to Foundem’s website, causing all of its pages to drop dramatically in Google’s rankings; its business dropped off dramatically as a result. The experience led Foundem’s co-founder, Adam Raff, to become an outspoken advocate: creating an advocacy site, filing comments with the Federal Communications Commission (FCC), and taking his story to the op-ed pages of The New York Times, calling for legal protection for the Foundems of the world. Another way that Google exploits its control is through preferential placement.
With the introduction in 2007 of what it calls “universal search,” Google began promoting its own services at or near the top of its search results, bypassing the algorithms it uses to rank the services of others. Google now favors its own price-comparison results for product queries, its own map results for geographic queries, its own news results for topical queries, and its own YouTube results for video queries. And Google’s stated plans for universal search make it clear that this is only the beginning. Because of its domination of the global search market and ability to penalize competitors while placing its own services at the top of its search results, Google has a virtually unassailable competitive advantage. And Google can deploy this advantage well beyond the confines of search to any service it chooses. Wherever it does so, incumbents are toppled, new entrants are suppressed and innovation is imperiled. Without search neutrality rules to constrain Google’s competitive advantage, we may be heading toward a bleakly uniform world of Google Everything — Google Travel, Google Finance, Google Insurance, Google Real Estate, Google Telecoms and, of course, Google Books. Some will argue that Google is itself so innovative that we needn’t worry. But the company isn’t as innovative as it is regularly given credit for. Google Maps, Google Earth, Google Groups, Google Docs, Google Analytics, Android and many other Google products are all based on technology that Google has acquired rather than invented. Even AdWords and AdSense, the phenomenally efficient economic engines behind Google’s meteoric success, are essentially borrowed inventions: Google acquired AdSense by purchasing Applied Semantics in 2003; and AdWords, though developed by Google, is used under license from its inventors, Overture.


Google was quick to recognize the threat to openness and innovation posed by the market power of Internet service providers, and has long been a leading proponent of net neutrality. But it now faces a difficult choice. Will it embrace search neutrality as the logical extension of net neutrality that truly protects equal access to the Internet? Or will it argue that discriminatory market power is somehow dangerous in the hands of a cable or telecommunications company but harmless in the hands of an overwhelmingly dominant search engine? Google is dominant because customers recognise that it is the best service, not because they are locked in. This success has been built through major investments in software and hardware, especially huge data centres, and its continued search activity over the years has reinforced its position, creating a kind of “information barrier” for potential competitors.


Although search neutrality is a part of net neutrality, there are fundamental differences:

Internet search can never be completely neutral. Search tools and criteria are never completely objective, since they are designed, in a way, to meet the profile of users; if this is done well, the search engine will be successful, and consumers will recognise it. While for ISPs lock-in is a fundamental barrier to changing provider, in the search engine market lock-in does not work: if there is an alternative search engine, a simple click is enough. One difference between net neutrality and search neutrality is that search engines are already suppressing and biasing our access to net information. All the major engines maintain “banning” departments that routinely block or suppress access by their users to individually hand-picked web sites, without notice and for arbitrary and undisclosed reasons. Search engines maintain that they are publishers and therefore have editorial free-speech rights to delete, bias, edit, or otherwise manipulate organic (non-sponsored) search results in nearly any manner. Yet most users think of major search “engines” as automated, mechanical connection services rather than editorial entities, and specifically want such a service; if edited information is desired, there are much better sources. As with net neutrality, the lack of search neutrality is especially injurious to small business, and political bias in search could allow a tiny group of people to significantly alter our “democratic discourse.” Another functional difference is that while ISPs are local, major search engines are global and can substantially control user access worldwide; Google’s total impact on Internet information access is much larger than that of any single ISP. ISPs claim they need additional fees (beyond the existing Internet access fees at both ends of a communication) to improve their broadband networks and therefore should be allowed to set up “tiered” access with different levels of priority.
Search engines claim they need the ability to block or suppress access by their users to particular, hand-picked web sites for arbitrary and undisclosed reasons in order to improve the quality of search results they deliver to their customers and that each deleted site has violated some unspecified content rule.  Neither claim is really credible, especially in light of the massive self-interest in both cases.


Search engines are essential to our ability to connect to information on the Internet.  Search engines also have the structural capacity to interfere with access by their users to specific web information.  Search engines also have an economic incentive to control access by their users in order to leverage their own or a partner’s Internet content. There are only three major search engines; together Google, Yahoo, and Microsoft control more than 90 percent of U.S. web searches. Search users are not given the option of seeing editorially deleted sites, even if their search produces no results. Users are not even told that hand-picked sites are being deleted.  If search engines provide a connection service, then they should follow rules similar to those applied to telcos and other information carriers. Solving the neutrality issue needs regulation or legislation that constrains search engines as well as ISPs.



Search engine optimization (SEO) is the process of affecting the visibility of a website or a web page in a search engine's unpaid results – often referred to as "natural," "organic," or "earned" results. In general, the earlier (i.e., the higher ranked on the results page) and the more frequently a site appears in the search results list, the more visitors it will receive from the search engine's users. SEO may target different kinds of search, including image search, local search, video search, academic search, news search and industry-specific vertical search engines. Whenever you enter a query in a search engine and hit 'enter', you get a list of web results containing that term, and users tend to visit the websites at the top of the list because they perceive them as more relevant to the query. If you have ever wondered why some websites rank better than others, the answer is usually SEO: a powerful web marketing technique that helps search engines find and rank your site higher than the millions of competing sites in response to a search query, and thus helps you draw traffic from search engines.
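Ranking logic of this kind can be sketched as a toy scoring function. Everything below is invented for illustration – real engines combine hundreds of signals, not the two shown here:

```python
# Toy search-ranking sketch: score pages for a one-word query by keyword
# frequency plus inbound-link count. Purely illustrative -- not any real
# engine's algorithm; the pages and weights are made up.

def rank_pages(query, pages):
    def score(page):
        tf = page["text"].lower().split().count(query.lower())  # term frequency
        return tf * 100 + page["inbound_links"]  # relevance weighted over popularity
    return sorted(pages, key=score, reverse=True)

pages = [
    {"url": "a.example", "text": "cheap shoes shoes shoes", "inbound_links": 2},
    {"url": "b.example", "text": "shoes review", "inbound_links": 50},
    {"url": "c.example", "text": "gardening tips", "inbound_links": 90},
]

ranked = [p["url"] for p in rank_pages("shoes", pages)]
print(ranked)
```

In these terms, SEO amounts to moving a page up this list by influencing exactly the inputs the scorer reads: adding relevant keywords and earning inbound links.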


Here are the eight possible bases for search-neutrality regulation:

•Equality: Search engines shouldn’t differentiate at all among websites.

•Objectivity: There are correct search results and incorrect ones, so search engines should return only the correct ones.

•Bias: Search engines should not distort the information landscape.

•Traffic: Websites that depend on a flow of visitors shouldn’t be cut off by search engines.

•Relevance: Search engines should maximize users’ satisfaction with search results.

•Self-interest: Search engines shouldn’t trade on their own account.

•Transparency: Search engines should disclose the algorithms they use to rank webpages.

•Manipulation: Search engines should rank sites only according to general rules, rather than promoting and demoting sites on an individual basis.



How do I circumvent biased search results?

1. I always search multiple search engines for the same information, e.g. I query Google, Yahoo and Bing in sequence.

2. I never trust the first page or the top-ranked websites as the best sources of information.
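The first habit can even be mechanized: merge the ranked lists from several engines so that no single engine's bias dominates. A minimal sketch using a Borda count over invented result lists:

```python
# Borda-count merge of result lists from several engines (lists are invented).
# Each URL earns points by its position in each list, so a site one engine
# suppresses merely loses that engine's points instead of vanishing entirely.

def merge_rankings(rankings):
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for pos, url in enumerate(ranking):
            scores[url] = scores.get(url, 0) + (n - pos)
    return sorted(scores, key=scores.get, reverse=True)

engine_a = ["a.example", "b.example", "c.example"]
engine_b = ["b.example", "a.example", "c.example"]
engine_c = ["b.example", "c.example", "a.example"]

merged = merge_rankings([engine_a, engine_b, engine_c])
print(merged)
```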


Cloud computing neutrality:

Does neutrality apply equally to cloud computing or are they completely different issues?  If net neutrality applies to (public) Internet services, then would cloud neutrality relate to public information processing and storage (i.e., SaaS) services?

The three rules of net neutrality also apply to cloud computing:

•No service blocking – SaaS providers should not arbitrarily restrict or block access to computing and storage services;

•No service throttling – SaaS providers should not favour one customer over another in areas such as capacity, elasticity, accessibility, resilience or responsiveness;

•No paid priority services – SaaS providers should not selectively offer (or provide) better services to selected customers at the expense of others.

For example, the following might hypothetically be possible:

•A SaaS provider could favour one search engine over another by preventing or slowing down search scanning;

•A SaaS provider could degrade response times for certain companies (such as a re-seller or broker) or users.

It would seem that the whole question of neutrality – the fair and open availability of public IT services – is more complicated than it would be for other utilities such as water, roads or electricity. Net neutrality needs to be looked at holistically: technically as well as commercially and politically.



Is net already non-neutral?  Do we already have fast lanes?

It turns out that our layman’s understanding of how the Internet works — a worldwide Web of computers linked on a free, open network — is a bit of a fairy tale. The truth is that those fast lanes demonized by net neutrality advocates already exist. Highly successful, high-traffic Web companies like Google, Facebook and Netflix already pay for direct access — inside access, in some cases — to Internet service providers like Comcast, AT&T and Verizon. They do so by bypassing the internet backbone:



There are three types of fast lanes that exist today:

1. Peering: – Most Web companies need to send their data across the broader Internet backbone (the cables and data centers operated by companies around the world) before it arrives at an ISP and is served to individual users. Wealthier companies can pay ISPs for a direct connection called peering that bypasses the Internet backbone and speeds data transfers. This is called paid peering.

2. Content Delivery Network: – Ever wonder how Google can serve up search results so quickly? The search giant pays for the privilege to set up its own servers inside the bowels of ISPs so it can deliver the most popular searches and images even faster.

3. Paid prioritization: – Paid prioritization is a financial agreement in which a company that provides content, services, and applications over the Internet (an “edge provider”) pays a broadband provider to essentially jump the queue at congested nodes. These fast lanes actually work like toll booths, where paying companies get to go through the gate first when traffic is congested. Paid prioritization also covers the cases of broadband providers prioritizing their own content or that of an affiliate over the data from a competing edge provider (also called “vertical prioritization”).
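The queue-jumping in paid prioritization can be sketched as a strict-priority scheduler at a congested node. The packet names below are invented, and real routers use traffic classes and more elaborate schedulers, but the effect on drain order is the same:

```python
import heapq

# Strict-priority scheduler sketch: paid packets (priority 0) drain before
# best-effort packets (priority 1) when the node is congested.
class CongestedNode:
    def __init__(self):
        self._q = []
        self._seq = 0  # tie-breaker preserves arrival order within a class

    def enqueue(self, packet, paid):
        heapq.heappush(self._q, (0 if paid else 1, self._seq, packet))
        self._seq += 1

    def drain(self):
        order = []
        while self._q:
            order.append(heapq.heappop(self._q)[2])
        return order

node = CongestedNode()
node.enqueue("netflix-frame", paid=True)
node.enqueue("blog-page", paid=False)
node.enqueue("netflix-frame-2", paid=True)

order = node.drain()
print(order)  # paid traffic goes through the gate first
```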


Today, privileged companies—including Google, Facebook, and Netflix—already benefit from what are essentially internet fast lanes, and this has been the case for years. Such web giants—and others—now have direct connections to big ISPs like Comcast and Verizon, and they run dedicated computer servers deep inside these ISPs. In technical lingo, these are known as “peering connections” and “content delivery servers,” and they’re a vital part of the way the internet works. The real issue is that the Comcasts and Verizons are becoming too big and too powerful. Because every web company has no choice but to go through these ISPs, the Comcasts and the Verizons may eventually have too much freedom to decide how much companies must pay for fast speeds. Net isn’t neutral now. What we should really be doing is looking for ways we can increase competition among ISPs—ways we can prevent the Comcasts and the AT&Ts from gaining so much power that they can completely control the market for internet bandwidth.


Google is already running internet fast lanes:


Starting in 2012, Comcast got in a fight with Netflix over the amount of bandwidth the streaming video site required from Comcast-owned networks. Comcast refused to upgrade its equipment to handle the increased traffic unless Netflix paid up. The battle raged on for two years, during which Netflix service for millions of Comcast subscribers slowed to a crawl. Since Comcast essentially owns the last-mile connection to 22 million homes, Netflix had no choice but to pay for a direct peering arrangement. Verizon pulled a similar strong-arm tactic to get more money from Netflix in an earlier backroom deal.


By 2009, half of all internet traffic originated with fewer than 150 large content and content-distribution companies; today, half of the internet’s traffic comes from just 30 outfits, including Google, Facebook, and Netflix. Because these companies move so much traffic on their own, they have been forced to make special arrangements with the country’s internet service providers to facilitate the delivery of their sites and applications. Basically, they are bypassing the internet backbone and plugging straight into the ISPs.

Today, a typical webpage request can involve dozens of back-and-forth communications between the browser and the web server, and even though internet packets move at the speed of light, all of that chatter can noticeably slow things down. By getting inside the ISPs, the big web companies can significantly cut back on the delay. Over the last six years, they have essentially rewired the internet.

Google was the first. As it expanded its online operation to a network of private data centers across the globe, the web giant also set up routers inside many of the same data centers used by big-name ISPs, so that traffic could move more directly from Google’s data centers to web surfers. This type of direct connection is called “peering.” The company also set up servers inside many ISPs so that it could more quickly deliver popular YouTube videos, webpages, and images; this is called a “content delivery network,” or CDN. “Transit network providers” such as Level 3 already provide direct peering connections that anyone can use, and companies such as Akamai and Cloudflare have long operated CDNs that are available to anyone. But Google made such arrangements just for its own stuff, and others are following suit: Netflix and Facebook have built their own CDNs, and according to reports, Apple is building one too.
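The chatter-versus-distance point yields to back-of-envelope arithmetic: page delay is roughly the number of request/response round trips multiplied by the round-trip time (RTT), so shrinking the RTT via peering multiplies through every exchange. The figures below are illustrative, not measurements:

```python
# Rough model: total delay ~= round trips x RTT. All numbers are illustrative.
def load_delay_ms(round_trips, rtt_ms):
    return round_trips * rtt_ms

via_backbone = load_delay_ms(30, 80)  # 30 exchanges across the backbone
via_peering = load_delay_ms(30, 10)   # same exchanges over a direct peering link
print(via_backbone, via_peering)
```

The same thirty exchanges cost 2.4 seconds across the backbone but only 0.3 seconds over the shorter path, which is why the big companies pay to plug in directly.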


CDN (content delivery networks):

Let’s take a real example: suppose this website is hosted on a web server located somewhere in America. A visitor from Singapore will see a relatively high page loading time because of the geographic distance between Singapore and America; had there been a mirror server in India or Australia, the page would have loaded much faster for that visitor. A content delivery network has servers across the world and automatically determines the fastest (or shortest) route between the server hosting the site and the end user, so the page is served from a server in Australia for a visitor in Singapore and from America for a visitor in the UK. There are other advantages too, but this example should give you a good idea of why we need a Content Delivery Network. The use of a CDN is imperative for content providers who wish to improve the availability of their content to end users: apart from increasing the speed of access, it also increases content availability.
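The server-selection step can be sketched as picking the edge node with the lowest measured round-trip time to the user, a practical stand-in for geographic distance. The hostnames and latencies below are invented:

```python
# CDN edge selection sketch: serve each user from the edge server with the
# lowest round-trip time (RTT). Hostnames and RTT values are invented.
def pick_edge(rtt_ms_by_server):
    return min(rtt_ms_by_server, key=rtt_ms_by_server.get)

singapore_user = {"us-east.cdn.example": 250,
                  "sydney.cdn.example": 90,
                  "mumbai.cdn.example": 60}
uk_user = {"us-east.cdn.example": 80,
           "sydney.cdn.example": 300,
           "mumbai.cdn.example": 130}

print(pick_edge(singapore_user))  # nearby Asian edge
print(pick_edge(uk_user))         # transatlantic edge
```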


If Web companies can already pay ISPs for preferential treatment, then why are net neutrality advocates making such a stink about fast lanes? Net neutrality loses its meaning and becomes irrelevant when ISPs and content providers arrange private pathways that avoid the global links: technically you restrict fast lanes on the public highway, but you freely allow private highways for paying content providers, and the effect is much the same. In the face of the monopolistic power of ISPs, their stiff resistance to regulation (which they are increasingly able to avoid in any case) and their perverse incentives not to increase bandwidth, the hope of maintaining a free and open Internet would seem to be a lost cause.


Do we already practice non-net neutrality?



Non-NN scenarios can be categorized along two dimensions: the network regime and the pricing regime. The pricing regime denotes whether an access ISP employs one-sided pricing (as is traditionally the case) or two-sided pricing; we already have two-sided pricing. The network regime refers to the quality-of-service (QoS) mechanisms and corresponding business models that are in place. Under strict net neutrality, which prohibits any prioritization or degradation of data flows, only capacity-based differentiation is allowed: content service providers (CSPs) or internet users may acquire connections with different bandwidth, but all data packets sent over these connections are handled according to the best-effort (BE) principle, so if the network becomes congested, they are all equally worse off. In a managed network, QoS mechanisms are employed for preferential treatment of certain data packets; we already have voice/video priority over email. So in terms of both the network regime and the pricing regime, there is already no net neutrality.


In developing countries, Google and Facebook already defy Net Neutrality:

In much of the world, the concept of “net neutrality” generates less public debate, given that there is no affordable Net in the first place. The next billion Internet users will arrive mostly in the developing world, on low-end smartphones. In the emerging economies, that is pretty much how things already work, thanks to a growing number of deals Google and Facebook have struck with mobile phone carriers from the Philippines to Kenya. In essence, these deals give people free access to text-only versions of things like Facebook news feeds, Gmail, and the first page of search results under plans like Facebook Zero or Google Free Zone. Only when users click links in e-mails or news feeds, go beyond the first page of search results, or visit websites by other means do they incur data charges. For people who have no Internet in the first place, the idea of net neutrality is not exactly top of mind; getting online cheaply is a greater concern, and the American companies are often enabling that to happen. Internet access is expensive in developing countries—exorbitantly so for the vast majority of people. In Kenya the top four websites are Google, Facebook, YouTube (which is owned by Google), and the Kenyan version of Google, a pattern fairly typical of Web usage in dozens of developing nations. And free services like Facebook Zero and Google Free Zone don’t have many critics among users.

But the existence of a free and dominant chat, e-mail, search, and social-networking service makes it awfully hard for any competitor to arise. Susan Crawford, visiting professor of law at Harvard University and a co-director of Harvard’s Berkman Center for Internet & Society, calls it “a big concern” that Google and Facebook are the ones becoming the portal to Web content for many newcomers. “For poorer people, Internet access will equal Facebook. That’s not the Internet—that’s being fodder for someone else’s ad-targeting business,” she says. “That’s entrenching and amplifying existing inequalities and contributing to poverty of imagination—a crucial limitation on human life.”

Google struck a deal with the major Indian mobile network Airtel to offer Free Zone, in this case giving people up to one gigabit per month of free access to Gmail, Google+, and Google search, and some critics have called this unfair treatment that disadvantages competitors. Google and Facebook are doing more than just providing various forms of free data access: those two companies and others, like Microsoft, are increasingly in the business of expanding infrastructure and related data-efficiency technologies that will, inevitably, be deployed in ways that benefit themselves. And because most of the smartphones that convey the Internet to users will be low-end Android phones, Google and Facebook are also battling to develop dominant apps for those phones. Some Internet service providers in the developing world talk about trying to charge companies like Google to carry their traffic, but that is unlikely to happen: they recognize that free versions of popular sites like Google and Facebook draw people into greater data usage, producing revenue.



Economics of net neutrality:

Since the controversial term “net neutrality” was coined by Professor Tim Wu of Columbia Law School in 2003, much of the debate has revolved around the potential consequences of network owners exercising additional control over the data traffic in their networks. At present the obvious villains in the show are the Telecom Service Providers (TSPs) and the Internet Service Providers (ISPs), as they provide the last-mile bandwidth that carries content and applications to end users. Net neutrality is a specific approach to the economic regulation of the Internet and needs context from the wider literature on the economic regulation of two-sided markets: a platform provider (a TSP or ISP) connects end consumers and over-the-top (OTT) content providers, as shown in the diagram below.



As per the theory of two-sided markets, the provider is justified in charging a toll to avoid congestion in the network. The toll can be raised from the end user, the OTT, or both. The consumer is usually more price-sensitive than the OTT, so the tendency of the ISP is to raise the toll from the OTT. The market power of the provider would result in a total levy (the sum of the levies on the OTT and the end user) that is too high from the point of view of efficiency; even if the levy falls within the efficient range, it may tend to be in the higher parts of that range. This tendency is checked by ‘cross-group externalities’: the value enhancement obtained by end users from the presence of OTTs and vice versa. Cross-group externalities soften the impact of the provider’s market power. Nevertheless, the fact of market power cannot be denied, given the low likelihood that an ordinary OTT can hurt a provider’s business by refusing to connect through that provider.

The principles of static efficiency outlined above do not suggest that over-the-top content cannot be charged, only that the market power of the ISP needs regulation. However, the principle of dynamic efficiency, i.e. the optimization of the rate of technological progress in the industry, suggests that OTTs, especially startups, need extra support. Indeed, the rapid growth of the Internet is a result of its low barriers to entry. When the considerations of dynamic efficiency outweigh those of static efficiency, there may be justification for reversing the natural trend of charging OTTs and, instead, charging consumers; this has been the practice so far. It must however be noted that innovation is also needed in the ISP layer.

The situation becomes more complex when there is vertical integration between an OTT and a TSP/ISP. This vertical integration can take several forms:

1. App store of a Content Service Provider (CSP) bundled (preferred bundling) by the TSP/ISP;

2.  Arrangements between ISP and CSP;

3. ISPs providing content caching services, becoming content distribution networks, or even content providers.

Examples of such vertical integration, small and big, are many: recent examples include Google’s announcements that it will provide Internet access through balloons and store its data on the servers of ISPs for better access speeds. One view of vertical integration is that it allows complementarities to be tapped, and that it is undertaken when the gains outweigh the known restrictions in choice faced by the consumer. For this view to hold, all linkages should be made known to the consumer, who must be deemed aware enough to understand the consequences. The other view is that vertical integration inhibits competition, since potential competitors have to enter both markets in order to compete. Further, when a provider offers communications services of its own, there is a conflict of interest with over-the-top communications services, for example between the voice services provided by a telco and Skype. As we contemplate moving away from the traditional regime of the Internet, we must therefore be prepared to countenance the curbing of dynamic efficiency and the limitation of competition due to vertical integration and conflicts of interest between the provider and the OTT.
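The two-sided pricing argument can be given a numeric flavor with a toy fixed-point model. All coefficients below are invented for illustration and come from no regulatory source: participation on each side falls with its own toll and rises with participation on the other side (the cross-group externality), and because consumers are modeled as more price-sensitive, levying the same toll on OTTs retains more total participation.

```python
# Toy two-sided market: a participation index on each side responds negatively
# to its own price and positively to the other side's participation.
# Every coefficient here is invented for illustration.

def participation(price, other_side, sensitivity, externality):
    return max(0.0, 1.0 - sensitivity * price + externality * other_side)

def equilibrium(price_user, price_ott, rounds=100):
    users, otts = 0.5, 0.5
    for _ in range(rounds):  # iterate to the fixed point
        users = participation(price_user, otts, sensitivity=0.8, externality=0.3)
        otts = participation(price_ott, users, sensitivity=0.4, externality=0.3)
    return users, otts

toll_on_users = sum(equilibrium(price_user=0.5, price_ott=0.0))
toll_on_otts = sum(equilibrium(price_user=0.0, price_ott=0.5))
print(toll_on_users, toll_on_otts)  # the OTT-side toll preserves more participation
```

With the consumer-side sensitivity (0.8) set higher than the OTT-side sensitivity (0.4), shifting the same toll onto the OTTs leaves higher total participation, mirroring the ISP's incentive described above.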


Internet Pricing Structure:


Above is a simple diagram representing the structure of the Internet. On the right side are content providers, who upload their applications and websites onto the Web, usually via an Internet Service Provider, though it could be any of a variety of companies that sell access to the Internet; this is typically the only fee that content providers pay to access the Internet and Internet subscribers. ISPs connect their private networks to the Internet in the center of the figure. Broadband subscribers in homes and businesses across the country pay an ISP, such as a phone or cable company, for online access. The pipes between this ISP’s Internet access point and its subscribers’ computers constitute a privately owned and operated subnetwork. This last stretch of wires and pipes is often referred to as the “last mile” of the Internet—the part that connects the network to individuals (depicted on the left side of the figure). The last mile is the heartland of the net neutrality debate. The cost of building a last-mile network is extremely high and is often borne entirely by the ISP that constructs it, since building this type of network requires physical or wireless connections between an ISP’s Internet access point and each subscriber’s household or business. This last-mile network is the ISP’s most valuable asset. Some say that content providers profit from the last mile but do not compensate the ISPs for their investment in the infrastructure that enables that profit.


ISP vs. OTT:

Vodafone has said that the government should tax over-the-top (OTT) players like WhatsApp, Viber, Hike and Facebook, as they are getting a “free ride” on telecom networks without paying for spectrum or any other fee: the operators have to pay taxes and licence fees and have to share revenue with the government, while the other guys get a complete free ride. WhatsApp has a user base of nearly 60 million in India. Services like WhatsApp have nearly eliminated revenues from SMS, while free calling on Skype and Viber (especially from international markets) is hitting voice revenues. Popular Internet companies such as Google, Yahoo! and Facebook should start sharing revenues with telecom companies, according to Bharti Airtel, which said that the telecom regulator should impose interconnection charges for data services just as it does for voice calls: today, Google, Yahoo! and the others are prospering at the cost of the network operators, with ISPs investing in the data pipes and OTTs making the money. Amid the raging debate over Net Neutrality, mobile operators said that if they are not offered a level playing field with Net-based services such as Skype and WhatsApp, their businesses would be viable only by raising data prices by up to six times; such high rates would become unaffordable for a large number of people, denying them access to the Internet.

Nasscom discounted any notion of revenue loss from OTT players to TSPs. The apps created have made the internet more useful, opening up avenues not just for service providers but also for greater convenience, transparency and newer services for consumers; this is driving data revenues for telecom companies, and the loss-of-revenue arguments from TSPs are not evident in some of the recent quarterly results. In the long run it is likely to be a win-win situation for both ISPs and OTT players: the growth of OTT will spur demand for data, which in turn generates additional revenues for TSPs, leading to a synergistic ecosystem. Nasscom felt it would be better if the government and the telecom industry worked together to create a balanced environment for ISPs to invest in network infrastructure, rather than targeting the fledgling internet-based product and service providers.


Pricing models:

Broadband Internet access has most often been sold to users based on Excess Information Rate (EIR), i.e. maximum available bandwidth. If Internet service providers (ISPs) can provide varying levels of service to websites at various prices, this may be a way to manage the costs of unused capacity by selling surplus bandwidth (or to “leverage price discrimination to recoup costs of ‘consumer surplus’”). However, purchasers of connectivity on the basis of Committed Information Rate (CIR), i.e. guaranteed bandwidth capacity, expect to receive the capacity they purchase in order to meet their communications requirements. Various studies have sought to provide network providers with formulas for adequately pricing such a tiered service for their customer base. But while network neutrality is primarily focused on protocol-based provisioning, most pricing models are based on bandwidth restrictions.
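A CIR/EIR tariff of the kind described can be sketched as a two-part bill: the committed (guaranteed) rate is billed at a premium, while any burst above it is billed at a cheaper best-effort rate. The per-Mbps prices below are hypothetical:

```python
# Two-part tariff sketch: the Committed Information Rate (CIR) is guaranteed
# and billed at a premium; the Excess Information Rate (EIR) burst above the
# CIR is billed cheaper because it is only best-effort. Rates are hypothetical.

def monthly_charge(cir_mbps, peak_mbps, cir_rate=10.0, eir_rate=2.0):
    excess = max(0, peak_mbps - cir_mbps)
    return cir_mbps * cir_rate + excess * eir_rate

bill = monthly_charge(cir_mbps=50, peak_mbps=80)
print(bill)  # 50*10 + 30*2 = 560.0
```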


It’s all about the money:


Internet users currently pay a flat rate for their service, whether they simply use it for checking emails or performing more data-heavy tasks including streaming movies and television programs.


It’s true that without net neutrality rules, ISPs could theoretically block or throttle access to some sites to extort fees, promote their own services, and so on. But in practice, such abuses of market power have been extremely rare. In reality, the net neutrality debate is about how costs will be shared to improve the Internet for everyone. Peak-hour Internet traffic surged 32% in 2013, according to Cisco, and to meet consumers’ appetites for more bandwidth, ISPs are spending hundreds of billions of dollars on network expansion and fiber deployments.

In the U.S., Netflix is a key driver of traffic growth. Not only is the company adding subscribers, it is also encouraging existing subscribers to spend more time watching Netflix, allowing families to stream multiple shows at once, and promoting higher-quality video streaming, including 4K TV (also known as Ultra-HD). Netflix encourages all of these behaviors because they will lead to a larger, more loyal subscriber base, which will boost the company’s profit in the long run. Yet they all require hugely expensive Internet capacity that does not exist today.

ISPs like AT&T want data-heavy services like Netflix, the primary beneficiaries of faster Internet service, to chip in for investments in faster broadband. Netflix believes the ISPs should shoulder the full costs, which would ultimately be spread among all Internet users, whether or not they subscribe to Netflix. Understandably, most Americans don’t want to pay more for Internet service. But the possibility of tougher regulation on broadband service has spooked ISPs, which don’t want to invest tens or hundreds of billions of dollars unless they can be sure of recouping their costs; AT&T recently froze its plans for a massive investment to expand its high-speed fiber network.
This underscores the point that people should not worry so much about ISPs artificially slowing their service or blocking some websites to make way for priority users. The real concern is that ISPs won’t invest enough money to keep pace with the extraordinary growth in Internet traffic, especially for peak periods. Unfortunately, until ISPs, content providers, and the government agree on who will share in funding the hundreds of billions of dollars of investment needed to drive a step-change in U.S. broadband speeds, more and more people will find themselves stuck in “slow lanes.”


How Net Neutrality changes could Impact Your Business:

The following are three examples of how these changes could affect your small business.

1. Higher Costs:

Without net neutrality, Internet Service Providers are able to create their own payment options for individuals and businesses. Although nothing is official, these companies could charge higher fees for higher speeds. For example, as the leading streaming video provider on the Internet, Netflix may have to pay more to ISPs in order to provide customers with fast content; according to USA Today, “Netflix may face an incremental $75 million to $100 million in annual content delivery costs.” This additional expense would be incurred just to provide the same service levels consumers already expect from Netflix. Companies that can’t afford the more expensive fees, possibly small businesses like yours, would be subject to slower websites than their larger competitors, effectively squeezing smaller companies out of the marketplace.

2. No Longer an Even Playing Field:

Net neutrality ensures small businesses are able to compete with larger companies: with the same access to the Internet, both have the same opportunities. If net neutrality is eliminated, small businesses may not be able to afford to share content and would therefore be unable to compete with their larger competitors.

3. Changes to Video Marketing:

A lot of time and effort is spent creating videos that feature and promote products. Small businesses that rely on video and YouTube as part of their marketing strategy could see changes if net neutrality is eliminated. If we can’t afford to pay Internet providers to share our content, our potential customers may not be able to view as many product videos and may not be enticed to purchase our products. Furthermore, the investment made to produce and optimize these videos will result in a monetary loss.


On the other hand, there are reasons why business should oppose Net Neutrality:

Up until now, the debate over net neutrality has largely focused on how broadband consumers would be affected by net neutrality. But for at least two reasons, businesses — even those outside of the communications sector — have a dog in this fight too.

1. First, businesses need ISPs to continue investing in their broadband networks. It is well established that price regulation often truncates the returns on an investment in a regulated industry, and thereby decreases investment. According to the Columbia Institute of Tele-Information, ISPs are set to invest $30 billion annually over the next five years to blanket the country with next-generation broadband networks, nearly half of which ($14 billion) will support wireless networks. It is difficult to estimate with precision what portion of the $30 billion would be neutered in the presence of net neutrality rules, but the direction of the impact — negative — is clear. Noted telecom analyst Craig Moffett of Bernstein Research opined that, with the imposition of net neutrality rules, Verizon FiOS “would be stopped in its tracks,” and AT&T’s U-Verse “deployments would slow.” Outcomes like these clearly would not serve the interests of the business community.

2. Second, businesses need the opportunity to innovate. The ability to purchase priority delivery from ISPs would spur innovation among businesses, large and small. Priority delivery would enable certain real-time applications to operate free of jitter and generally perform at higher levels. Absent net neutrality restrictions, entrepreneurs in their garages would devote significant energies to trying to topple Google with the next killer application. But if real-time applications are not permitted to run as they were intended, these creative energies will flow elsewhere. The concept of premium services and upgrades should be second nature to businesses. From next-day delivery of packages to airport lounges, businesses value the option of upgrading when necessary. That one customer chooses to purchase the upgrade while the next opts out would never be considered “discriminatory.”


Competition and consumer protection:

An efficiently operating market for broadband internet access could avoid many of the concerns raised by potential blocking of, or discrimination against, specific internet content or services. Though numbers vary between Member States (MS), a 2012 study showed there were nearly 250 fixed-line and over 100 mobile operators in the EU, with no MS reporting less than three in either category except Cyprus (only two mobile operators). Informed consumers could make a choice among offers from different providers and choose the price, quality of service and range of applications and content that suited their particular needs. Given that 85% of fixed-line operators and 76% of mobile operators offer at least one unrestricted plan, consumers could punish any supplier who blocked or throttled an innovative new service by changing to another supplier, provided that contracts made switching quick and easy. This free-market philosophy makes sense to experts who feel it is only normal that people have to pay higher prices to access applications that require a higher quality of service. A consumer association in the UK found that traffic management concepts were poorly understood by consumers. In some cases, actual rates of delivery were much lower than those that had been promised and it may be difficult for consumers to detect whether access providers throttle certain kinds of services, such as P2P services or VoIP. Even if consumers identify problems such as insufficient speed or blocked applications, switching may not be easy: access contracts may be bundled with other services (e.g. telephone or television) or with subsidised or leased equipment that makes it harder to switch. Moreover if a particular service is blocked not by the consumer’s ISP but by a network operator in another MS, consumers will still not get access to that service even if they change internet-access supplier at their end. 
Even more critically, if high-quality specialised services take up a large chunk of existing bandwidth, network operators may downgrade the ‘standard’ open internet service, leading to poorer service for those who cannot afford to pay more. This may encourage a ‘multi-lane’ or ‘multi-tier’ internet that could lead to less competition and greater social exclusion. However, to some extent a multi-lane internet already exists. Large content providers like YouTube have built, or have contracted for, Content Delivery Networks (CDNs) that use private networks to deliver their content to servers located at the edge of the internet, in close geographic proximity to their customers. Their content has less distance to travel over the public internet to reach the end user, and thus can arrive faster and more reliably than the content of smaller competitors who cannot afford a CDN. By 2017, it is estimated that more than half of the world’s internet traffic will pass through a CDN. As for the risk that standard internet service will be degraded because specialised services take up too much bandwidth, National Regulatory Authorities (NRAs) already have the power to impose a minimum level of service if public internet access becomes too degraded.
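The CDN advantage described above ultimately comes down to a routing decision: serve each client from the edge node with the lowest measured latency. A minimal sketch, with invented node names and round-trip times (real CDNs also weigh load, cost and DNS geolocation):

```python
# Hypothetical edge nodes and measured round-trip times (ms) to client cities.
EDGE_NODES = {
    "frankfurt": {"berlin": 9,   "mumbai": 110, "sao_paulo": 195},
    "mumbai":    {"berlin": 115, "mumbai": 4,   "sao_paulo": 210},
    "sao_paulo": {"berlin": 190, "mumbai": 205, "sao_paulo": 6},
}

def pick_edge(client_city: str) -> str:
    """Return the edge node with the lowest round-trip time to the client."""
    return min(EDGE_NODES, key=lambda node: EDGE_NODES[node][client_city])

print(pick_edge("berlin"))     # nearest node serves the content
print(pick_edge("sao_paulo"))
```

A large provider can afford to place such nodes on every continent; a smaller competitor without a CDN serves everyone from one origin, which is the structural “fast lane” the paragraph notes already exists.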


Net Neutrality vis-à-vis innovation:

Abandoning network neutrality will certainly alter innovation through the threats of exclusion and extraction. It is perhaps safe to say that the best innovations are produced in open, uncontrolled surroundings, when the mind is allowed to operate freely without constraints. An online giant worth mentioning here is Google. Google lets its employees spend twenty percent of their working time on whatever projects they please, and in turn the resulting innovations belong to the company; Gmail is one product of this incentive. However, now that Google is a dominant force in various domains, imposing regulation on the Internet would perhaps not slow this Internet giant down. If regulation is imposed on content providers, large organizations like Google will be able to continue to dominate the Internet, while organizations like Yahoo! could face the threat of exclusion. On the other hand, with no such regulation, both companies could continue to innovate and offer users more efficient ways to use the Internet. The lesson is that, at times, the absence of constraints can reward every Internet user in the world. An open and free Internet has been the foundation of innovation, and it can certainly continue to benefit users and contribute to innovation.


Net neutrality and education:

In today’s environment, it is impossible to imagine education without the internet, and most educational content is free for the student. The absence of Net Neutrality, by destroying the level playing field, might create an environment that favours big money and disadvantages everyone else, particularly non-profit educational institutions. The central issue with “paid prioritization” (where a content provider pays for a ‘fast lane’) is that those with the greatest financial resources will be best able to speed their content to users. This would hurt small startups and public or non-profit content providers, such as educational institutions, that cannot afford to buy a ‘fast lane’ for their educational, research, or other digital collections. Educational content, given its media-rich format, requires better bandwidth and consumes more data than e-commerce or other forms of internet usage, so it is essential that it receive equal priority on the internet; otherwise online education may become unviable. Such a scenario would force quality-focused not-for-profit institutions onto paid-prioritization or fast-lane platforms like Airtel Zero. The increased cost would add to the problems of institutions already facing a financial crunch, making it difficult to absorb, and they would be forced to pass it on to students as fee hikes. In a country like India, where online education is the only economical way of reaching the masses, the cost of online education would grow. Further, technological innovations are making education affordable, and the absence of Net Neutrality would hamper EdTech startups focused on innovative ways to keep it so.
Creating preferential access to further social causes and service penetration is one thing; using it to create commercial monopolies, as is the fear now, is quite another. Preferential treatment, applied sensitively, may help accelerate penetration and improve quality of service. Without the right set of regulations, however, it can lead to monopolies and cartels and emerge as a threat to the larger objective of delivering education to the masses.



Legal aspects of net neutrality:

Net neutrality law refers to laws and regulations which enforce the principle of net neutrality. Opponents of net neutrality enforcement claim regulation is unnecessary, because broadband service providers have no plans to block content or degrade network performance. Opponents of net neutrality regulation also argue that the best solution to discrimination by broadband providers is to encourage greater competition among such providers, which is currently limited in many areas.


Without strong legislation protecting Net Neutrality, the following examples will become the norm:

1. In 2004, North Carolina ISP Madison River blocked their DSL customers from using any rival web-based phone service (like Vonage, Skype, etc.).

2. In 2005, Canada’s telephone giant Telus blocked customers from visiting a website sympathetic to the Telecommunications Workers Union during a labor dispute.

3. Shaw, a big Canadian cable TV company, charged subscribers an extra $10 a month to “enhance” competing Internet telephone services.

4. Time Warner’s AOL blocked all emails that mentioned, an advocacy campaign opposing the company’s pay-to-send email plan.


Why is the legal enforcement of net neutrality so challenging?

It did not take much to define net neutrality in the technical and service domains, but loose ends remain that prevent this definition from serving as a normative regulation; that is, quite apart from lobbies and politics. To exercise network neutrality, the service provider must avoid exploiting any data beyond what the networking protocol specifies in providing its service. However, this is not realistically achievable to the fullest extent. The ISP has to carry out some business-oriented packet shaping to prevent one user from absorbing all the bandwidth and leaving nothing for others. Obviously, some business logic in the preference of packets is acceptable: if you pay for a certain bandwidth, some network neutrality is violated merely by enforcing that deal. So where is the line drawn? Why is restricting a user to the bandwidth he pays for acceptable, while preferring traffic based on payments from service providers is not? Usually, when we encounter situations in which we cannot formulate sustainable rules, one approach is to fall back on demanding transparency. The ISP can do whatever it wishes, but it must openly disclose its operations, letting the market decide what the public accepts and penalizing ISPs that fall below the norm. This could work. We could allow any ISP to do whatever it wishes with its traffic—prioritize, block sites at will, and so on—as long as it openly publishes its practices to users, who may elect to take their business elsewhere. The reason this approach is not favoured is that network neutrality has too much significance for the economy and for democracy to be left to user preferences.
There are too many potential “market failures” here: users may not understand the trade-offs well enough; ISPs may form cartels that let them all offer the same terms of service; or in some cases there is simply not enough choice between ISPs in the first place. Transparency is a good requirement, but it is not enough. We need to protect network neutrality by law. Even if we cannot get it a hundred percent right at first, we need to make a firm start.
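The “acceptable” packet shaping discussed above, enforcing only the bandwidth a user has paid for, is conventionally implemented as a token bucket. The sketch below uses illustrative numbers, not any real ISP’s policy: tokens (bytes) accrue at the paid rate, and a packet is admitted only if enough tokens are available.

```python
class TokenBucket:
    """Per-subscriber rate limiter: enforces a paid rate plus a burst allowance."""
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps          # tokens (bytes) added per second
        self.capacity = burst_bytes   # maximum burst size
        self.tokens = burst_bytes     # bucket starts full
        self.last = 0.0               # timestamp of the previous packet

    def allow(self, packet_bytes: int, now: float) -> bool:
        """Admit the packet if enough tokens have accrued; otherwise drop/queue it."""
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False

bucket = TokenBucket(rate_bps=1000, burst_bytes=1500)  # a hypothetical ~1 kB/s plan
print(bucket.allow(1500, now=0.0))  # True: the full burst is available
print(bucket.allow(500,  now=0.0))  # False: the bucket is drained
print(bucket.allow(500,  now=1.0))  # True: one second refills 1000 tokens
```

Note what this mechanism does not look at: the packet’s source, destination or application. That blindness is what makes per-subscriber shaping neutral, and is precisely what payment-based traffic preference abandons.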


Potential for banning legitimate activity:

Poorly conceived legislation could make it difficult for Internet Service Providers to legally perform necessary and generally useful packet filtering such as combating denial of service attacks, filtering E-Mail spam, and preventing the spread of computer viruses. Quoting Bram Cohen, the creator of BitTorrent, “I most definitely do not want the Internet to become like television where there’s actual censorship…however it is very difficult to actually create network neutrality laws which don’t result in an absurdity like making it so that ISPs can’t drop spam or stop…attacks”. Some pieces of legislation, like The Internet Freedom Preservation Act of 2009, attempt to mitigate these concerns by excluding reasonable network management from regulation.


The figure below shows net neutrality laws in various countries:


Legal enforcement of net neutrality principles takes a variety of forms, from provisions that outlaw anti-competitive blocking and throttling of Internet services, all the way to rules that prevent companies from subsidizing Internet use on particular sites. Contrary to popular rhetoric on both sides of the ongoing academic debate, research suggests that no single policy instrument (such as a no-blocking rule or a quality-of-service tiering rule) can achieve the range of valued political and economic objectives central to the debate. As Bauer and Obar suggest, “safeguarding multiple goals requires a combination of instruments that will likely involve government and nongovernment measures. Furthermore, promoting goals such as the freedom of speech, political participation, investment, and innovation calls for complementary policies.” Here we look at some countries that have already adopted net neutrality:



Chile:

Chile was the first country to enact a net neutrality law, in 2010. Interestingly, the law was the culmination of a citizens’ movement, in particular the efforts of the citizen group Neutralidad Si. In 2014, the Chilean telecommunications regulator Subtel banned mobile operators from zero-rating, whereby internet companies strike deals with mobile telecom operators to offer consumers free internet usage.



The Netherlands:

The Netherlands was the first country in Europe to pass a net neutrality law, in 2011. Even zero-rating deals between internet companies and mobile operators are banned under the law.



Brazil:

In 2014, Brazil passed legislation bringing into effect an ‘Internet Law’, which introduced the principle of Net Neutrality. Brazil’s principle of Net Neutrality means “that all data transmissions (i.e. online traffic) must be treated equally by network operators regardless of its content, origin, destination, service, terminal or application.” The aim of this provision is to prevent operators from charging higher rates for access to content that uses greater bandwidth, such as video streaming or voice communication services.



India:

As of 2015, India had no laws governing net neutrality, and there had been violations of net neutrality principles by some service providers. While the Telecom Regulatory Authority of India (TRAI) guidelines for the Unified Access Service license promote net neutrality, they are not enforced. The Information Technology Act, 2000 does not prohibit companies from throttling their services in accordance with their business interests. In March 2015, TRAI released a formal consultation paper on a Regulatory Framework for Over-the-top (OTT) services, seeking comments from the public. The consultation paper was criticised for being one-sided and containing confusing statements, and was condemned by various politicians and internet users. By 24 April 2015, over a million emails had been sent to TRAI demanding net neutrality.



The United States:

On 26 February 2015, the U.S. Federal Communications Commission (FCC) ruled in favor of net neutrality by reclassifying broadband access as a telecommunications service, thereby applying Title II (common carrier) of the Communications Act of 1934 to Internet service providers. On 12 March 2015, the FCC released the specific details of its new net neutrality rule, and on 13 April 2015 it published the final rule. The rules prevent Internet providers such as Comcast and Verizon from slowing or blocking Web traffic and from creating Internet fast lanes that content providers such as Netflix could pay to use. However, the FCC is facing several lawsuits that challenge its open Internet order.


Europe’s Current Policy:

The EU’s dealings with net neutrality have been something of an intricate dance — or you might define it as more of a roller coaster. Shifting policies and the task of weighing consumer welfare against economic welfare have resulted in Europe’s current policy. Basically, their approach is that ISPs should be reasonable in how they manage their networks, considering both their own interests and those of Internet users. As Financier Worldwide explains it, the current policy “advocates that an approach be taken which sits somewhere between a light-touch approach, at one extreme, to one which seeks to eliminate market power, promote consumer awareness, increase transparency, and to lower switching costs for end-users, at the other.”  Is that really a viable approach? Perhaps officials think that if an ISP blocks certain websites or delivers some content slower than others, unhappy consumers can always switch to a different ISP, so there is no need for tighter regulations. That line of thought seems like a slippery slope and puts a lot of trust in big businesses.


Anti-trust laws:

Competition law promotes or seeks to maintain market competition by regulating anti-competitive conduct by companies, and is implemented through public and private enforcement. It is known as antitrust law in the United States and the European Union, and as anti-monopoly law in China and Russia. Antitrust laws apply to virtually all industries and to every level of business, including manufacturing, transportation, distribution, and marketing. They prohibit a variety of practices that restrain trade: examples of illegal practices are price-fixing conspiracies, corporate mergers likely to reduce the competitive vigor of particular markets, and predatory acts designed to achieve or maintain monopoly power. Microsoft, AT&T, and John D. Rockefeller’s Standard Oil are among the companies found to have violated antitrust laws.


Can antitrust laws prevent discrimination on the internet?

At first blush, the broadband providers and content providers don’t compete. One sells content, the other passes that content to customers. But a good number of broadband providers are also in the content delivery business. Comcast, for example, not only provides broadband services, but it also delivers movies through its cable channels, on-demand services, as well as applications that allow users to stream video to their tablets, handhelds and computers. The disadvantaged content provider may be able to show that its content was being delayed deliberately by the vertically-integrated broadband/content provider, and that this delay had a material effect on the disadvantaged content provider’s ability to provide services in the relevant market that includes its content. Netflix would first have to show that it competes with Comcast. They also would have to show consumers dropping Netflix for Comcast’s services. (That showing would be complex in a market where consumers can access content in multiple formats which themselves can vary in price, scope and availability over time.) They would then have to show causality—that they lost sales to Comcast because of the discrimination and that those lost sales resulted in users paying Comcast higher prices or that Netflix lost revenue or otherwise was harmed and perhaps even had to exit the market. Showing the effect on price is particularly complex given the byzantine pricing structure that exists in the cable markets and low marginal cost of the products being delivered. In response to such an argument, the broadband/content provider would likely re-characterize the disadvantaged content provider’s antitrust claim as being a “refusal to deal” and that, absent an actual refusal to deal on any terms, the disadvantaged content provider’s claim fails. Competitors are generally not required to deal with other competitors. 
So long as consumers can download content even if it is at maddeningly slow rates, the monopolist broadband provider will likely not have violated the antitrust laws. Also, antitrust provides no solution at all if the disadvantaged provider does not compete with the broadband provider. A cable company may slow VPN traffic because it uses too much bandwidth. If the broadband provider doesn’t sell VPN functionality, then the discrimination does not harm competition. An antitrust solution to traffic discrimination would ultimately only address situations where a broadband provider is impeding traffic to gain an advantage in a market in which it competes and has in fact done very well in that market in terms of market share. Antitrust is therefore inadequate to obtain universal traffic neutrality. Antitrust may play a role at the fringe of net neutrality. It is by no means a complete answer.



Pros and Cons of net neutrality:

There has been extensive debate about whether net neutrality should be required by law in the United States. Advocates of net neutrality have raised concerns about the ability of broadband providers to use their last mile infrastructure to block Internet applications and content (e.g. websites, services, and protocols), and even to block out competitors. Opponents claim net neutrality regulations would deter investment into improving broadband infrastructure and try to fix something that isn’t broken.


A non-neutral Internet would allow telecom companies to load certain websites and applications faster or slower than others, or to restrict access to them altogether. For example, a subscriber to network X might be forced to use Bing as their search engine because Google partners with network Y: network X would either take longer to load Google (compared with Bing) or might refuse access to Google altogether. Similarly, telecom companies could discriminate between consumers, allowing richer consumers access to a greater range of websites and applications for higher fees while forcing poorer consumers to opt for schemes that include only certain websites or applications. Thus a farmer in rural India, for instance, might be able to access his Facebook profile cheaply but may have to pay much more to get reliable weather updates or track vegetable trading prices. Critics of net neutrality counter that Internet data usage is not uniform. Basic services such as sending e-mail or reading news are insensitive to delays or signal distortion; services such as Skype, however, require a minimum quality of service in order to be effective and thus justify higher fees. The TRAI paper argues that Internet-based communications applications such as Skype and WhatsApp are cannibalizing services from which telecom operators traditionally profited, as traditional caller plans and SMS services become increasingly redundant. If operators are to continue investing in better Internet technology, they must have the incentive of earning greater returns on that investment. Critics also argue that zero-rating and tiered services enable greater Internet penetration by making cheaper plans available to poorer citizens; as a trade-off, cheaper plans entail slower Internet speeds or restrict access to only certain applications and websites.
But as Facebook CEO Mark Zuckerberg said, “For people who are not on the Internet, having some connectivity and some ability to share is always much better than having no ability to connect and share at all.”


Arguments for net neutrality:

Proponents of net neutrality argue that a neutral Internet encourages everyone to innovate without permission from the phone and cable companies or other authorities, and that a level playing field spawns countless new businesses. Allowing unrestricted information flow becomes essential to free markets and democracy as commerce and society increasingly move online. Without net neutrality, heavy users of network bandwidth would pay higher prices without necessarily experiencing better service, and even lighter users could run into the same situation. Proponents also invoke the psychological process of adaptation: once people get used to something better, they never want to go back to something worse. In the context of the Internet, a user who becomes accustomed to the “fast lane” would find the “slow lane” intolerable in comparison, greatly disadvantaging any provider unable to pay for the fast lane.


Proponents of net neutrality include consumer advocates, human rights organizations, online companies and some technology companies, and many major Internet application companies are advocates of neutrality. Yahoo!, Vonage, eBay, Amazon, IAC/InterActiveCorp, Microsoft, Twitter, Tumblr, Etsy, Daily Kos and Greenpeace, along with many other companies and organizations, have taken a stance in support of net neutrality. Cogent Communications, an international Internet service provider, has made an announcement in favor of certain net neutrality policies. In 2008, Google published a statement speaking out against letting broadband providers abuse their market power to affect access to competing applications or content, equating the situation to that of the telephony market, where telephone companies are not allowed to control whom their customers call or what those customers are allowed to say. However, Google’s support of net neutrality was called into question in 2014. Several civil rights groups, such as the ACLU, the Electronic Frontier Foundation, Free Press, and Fight for the Future, support net neutrality, as do individuals including Tim Berners-Lee, Vinton Cerf, Lawrence Lessig, Robert W. McChesney, Steve Wozniak, Susan P. Crawford, Marvin Ammori, Ben Scott, David Reed, and U.S. President Barack Obama.


Reasons for being in favor of network neutrality:

A key reason people favor net neutrality is to prevent a monopoly from forming within the last mile of connection. The last mile is the final leg of delivering connectivity from a communications provider to a customer. Before reaching that point, a packet passes through many service providers’ equipment. The worry is that a provider who owns the physical cable lines in a given area could charge one content provider more than another to deliver its services, raising the content provider’s cost of doing business. Beyond preventing a last-mile monopoly, content providers want to keep uncertainty down, since uncertainty is a source of risk. If a content provider relies on a last-mile provider to distribute its services, it has to pay for that service; but the last-mile provider could unexpectedly raise its rates beyond what was accounted for in the content provider’s budget. This could disrupt the content provider’s business and lead to incorrect projections of profitability.


Control of data:

Supporters of network neutrality want to designate cable companies as common carriers, which would require them to allow Internet service providers (ISPs) free access to cable lines, the model used for dial-up Internet. They want to ensure that cable companies cannot screen, interrupt or filter Internet content without a court order. Common carrier status would give the FCC the power to enforce net neutrality rules. The advocacy coalition accuses cable and telecommunications companies of seeking the role of gatekeepers, able to control which websites load quickly, load slowly, or do not load at all. According to, these companies want to charge content providers who require guaranteed speedy data delivery, to create advantages for their own search engines, Internet phone services and streaming video services while slowing or blocking access to those of competitors. Vinton Cerf, a co-inventor of the Internet Protocol, argues that the Internet was designed without any authorities controlling access to new content or new services, and concludes that the principles responsible for making the Internet such a success would be fundamentally undermined were broadband carriers given the ability to affect what people see and do online.


Digital rights and freedoms:

Lawrence Lessig and Robert W. McChesney argue that net neutrality ensures that the Internet remains a free and open technology, fostering democratic communication, and that monopolization of the Internet would stifle the diversity of independent news sources and the generation of innovative and novel web content. Network neutrality also protects the right to freedom of speech, because it restricts ISPs from blocking or prioritizing content on the Internet. Countries that have not implemented the principle of network neutrality in their legislation often control or suppress the publishing or accessing of information on the Internet. In China, for example, the government uses a filtering system that prevents residents from accessing certain online content: if an Internet user searches Google or another search engine for phrases such as “Tibetan independence” or “democracy movements,” he or she will be redirected to a blank page stating “page cannot be displayed.”


Competition and innovation:

Net neutrality advocates argue that allowing cable companies to demand a toll for guaranteed quality or premium delivery would create an exploitative business model based on the ISPs’ position as gatekeepers. Advocates warn that by charging websites for access, network owners may be able to block competitors’ websites and services, as well as refuse access to those unable to pay. According to Tim Wu, cable companies plan to reserve bandwidth for their own television services and charge companies a toll for priority service. Proponents of net neutrality argue that allowing preferential treatment of Internet traffic, or tiered service, would put newer online companies at a disadvantage and slow innovation in online services. Tim Wu argues that, without network neutrality, the Internet will be transformed from a market ruled by innovation to one ruled by deal-making. argues that net neutrality puts everyone on equal terms, which helps drive innovation; it claims neutrality preserves the way the internet has always operated, where the quality of websites and services determined whether they succeeded or failed, rather than deals with ISPs. A failure to enact Net Neutrality protections would undermine content and application providers’ freedom to do business. A non-neutral regime would hinder innovation in content, as start-ups and smaller companies would suddenly face barriers to entering the market, and uncertainty about what new barriers might be created. The innovators’ freedom to impart information is therefore limited, as is their freedom to do business. Lawrence Lessig and Robert W. McChesney argue that eliminating net neutrality would lead to the Internet resembling the world of cable TV, with access to and distribution of content managed by a handful of massive companies that would control both what is seen and how much it costs to see it.
Speedy and secure Internet use for industries such as health care, finance, retailing, and gambling could be subject to large fees charged by these companies. They further explain that a majority of the great innovators in the history of the Internet started with little capital in their garages, inspired by great ideas. This was possible because the protections of net neutrality ensured limited control by owners of the networks, maximal competition in this space, and network access for innovators from outside. Internet content was guaranteed a free and highly competitive space by the existence of net neutrality. The involvement of ISPs in determining what content or services reach consumers will stifle innovators. For instance, if Google can pay ISPs to deliver YouTube videos faster than other sources of Internet video, any startup offering a better service than YouTube will have tremendous difficulty entering the online video market. Network neutrality does not allow ISPs to restrict content or services provided by their competitors, and restrictions of competition may lead to increased prices of services and goods. For example, in 2009, Deutsche Telekom announced plans to prohibit the use of Skype over iPhones; such a prohibition would harm the interests of consumers who could otherwise save money on calls by using Skype.


Preserving Internet standards:

Network neutrality advocates have sponsored legislation claiming that authorizing incumbent network providers to override the separation of the transport and application layers on the Internet would signal the decline of fundamental Internet standards and international consensus authority. Further, the legislation asserts that bit-shaping the transport of application data would undermine the transport layer’s designed flexibility. Network neutrality preserves existing Internet standards: at present, the Internet runs on technical standards created by a variety of organizations, such as the Internet Engineering Task Force (IETF). By using these shared standards, computers, services, and software created by different companies can work together. Without network neutrality, the Internet would be regulated by ISPs under standards chosen by them.


Preventing pseudo-services:

Alok Bhardwaj, founder of Epic Privacy Browser, argues that any violations to network neutrality, realistically speaking, will not involve genuine investment but rather payoffs for unnecessary and dubious services. He believes that it is unlikely that new investment will be made to lay special networks for particular websites to reach end-users faster. Rather, he believes that non-net neutrality will involve leveraging quality of service to extract remuneration from websites that want to avoid being slowed down.


End-to-end principle:

Some advocates say network neutrality is needed in order to maintain the end-to-end principle. Network neutrality maintains the end-to-end principle. It allows nodes of the network to send packets to all other nodes of the network, without requiring intermediate network elements to maintain status information about the transmission. The principle allows people using the Internet to innovate free of any central control. According to Lawrence Lessig and Robert W. McChesney, all content must be treated the same and must move at the same speed in order for net neutrality to be true. They say that it is this simple but brilliant end-to-end aspect that has allowed the Internet to act as a powerful force for economic and social good. Under this principle, a neutral network is a dumb network, merely passing packets regardless of the applications they support. This point of view was expressed by David S. Isenberg in his paper, “The Rise of the Stupid Network”. He states that the vision of an intelligent network is being replaced by a new network philosophy and architecture in which the network is designed for always-on use, not intermittence and scarcity. Rather than intelligence being designed into the network itself, the intelligence would be pushed out to the end-user’s device; and the network would be designed simply to deliver bits without fancy network routing or smart number translation. The data would be in control, telling the network where it should be sent. End-user devices would then be allowed to behave flexibly, as bits would essentially be free and there would be no assumption that the data is of a single data rate or data type. Contrary to this idea, the research paper titled End-to-end arguments in system design by Saltzer, Reed, and Clark argues that network intelligence doesn’t relieve end systems of the requirement to check inbound data for errors and to rate-limit the sender, nor for a wholesale removal of intelligence from the network core.
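The “stupid network” idea described above can be made concrete: a neutral router consults only the destination header and treats the payload as opaque, never asking which application or sender it carries. The addresses, topology and routing table below are invented for illustration.

```python
class DumbRouter:
    """End-to-end-principle router: routes on the header, blind to the payload."""
    def __init__(self, routing_table):
        self.table = routing_table  # destination prefix -> next hop

    def forward(self, packet):
        """Pick the next hop from the destination address alone."""
        dest = packet["dst"]
        for prefix, next_hop in self.table.items():
            if dest.startswith(prefix):
                return next_hop
        return "default-gateway"

router = DumbRouter({"10.0.": "link-A", "10.1.": "link-B"})
pkt_video = {"dst": "10.0.7.2", "payload": "<video bytes>"}
pkt_mail  = {"dst": "10.1.3.9", "payload": "<smtp bytes>"}
print(router.forward(pkt_video), router.forward(pkt_mail))
```

Nothing in `forward` can discriminate between the video frame and the email, because nothing in it reads the payload; all intelligence (encoding, retransmission, error checking) lives in the end hosts, exactly as the end-to-end principle prescribes.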


Regulation vs. Competition issue:

Some of the “pipe” owners argue that net neutrality is unnecessary regulation that will stifle competition and slow deployment of broadband technologies. But the truth is there is already little competition between broadband providers. In most parts of the U.S., there are at most two companies that provide a broadband pipe to your home: a telephone company and a cable company. Both of these industries are already regulated because they are natural monopolies: once a cable is laid to your house, there really is no rational, non-wasteful reason to lay another cable to your house, since you only need one at a time; therefore, most communities only allow one cable or telephone company to provide service to an area, and then regulate that company so as to prevent abuse of the state-granted monopoly. Thus, we don’t allow phone companies to charge exorbitant amounts for local service; nor do we permit a cable company to avoid providing service to poor neighborhoods. Contrast the quasi-monopoly on broadband pipes with the intensely competitive market of web content and services. There are millions of websites out there and countless hours of video and audio, all competing for your time, and sometimes your money. With the advent of broadband connections, the telecom and cable companies have found a new way to exploit their state-granted monopoly: leverage it into a market advantage in Internet services and content. This would harm competition in the dynamic, innovative content and services industry without solving the lack of real competition in the broadband access market. In contrast, net neutrality will encourage competition in online content and services to stay strong. By keeping broadband providers from raising artificial price barriers to competition, net neutrality will preserve the egalitarian bit-blind principles that have made the Internet the most competitive market in history.


ISPs are trying to ‘Double Dip’!

ISPs argue that they should be incentivized to invest in infrastructure that results in a faster internet. This argument ignores that they are already charging consumers for their infrastructure and are now trying to ‘double dip’ by charging content providers too. To make matters worse, ISPs effectively have a monopoly in most markets – inhabitants in large cities have just a few cable/internet options and small markets often have one.



Arguments against net neutrality:

Network owners believe regulation like the bills proposed by net neutrality advocates will impede U.S. competitiveness by stifling innovation and will hurt customers who would benefit from ‘discriminatory’ network practices. U.S. Internet service already lags behind other nations in overall speed, cost, and quality of service, adding credibility to the providers’ arguments. Obviously, by increasing the cost to heavy users of network bandwidth, telecommunications and cable companies and Internet service providers stand to increase their profit margins. Those who oppose network neutrality include telecommunications and cable companies who want to be able to charge differentiated prices based on the amount of bandwidth consumed by content being delivered over the Internet. Some companies report that 5 percent of their customers use about half the capacity on local lines without paying any more than low‐usage customers. They state that metered pricing is “the fairest way” to finance necessary investments in network infrastructure. Internet service providers point to the upsurge in piracy of copyrighted materials over the Internet as a reason to oppose network neutrality. Comcast reported that illegal file sharing of copyrighted material was consuming 50 percent of its network capacity. The company posits that if network transmission rates were slower for this type of content, users would be less likely to download or access it. Those who oppose network neutrality argue that it removes the incentive for network providers to innovate, provide new capabilities, and upgrade to new technology.


Reasons for not being in favor of network neutrality:

The world knows that Internet Service Providers (ISPs) are not in favor of net neutrality. One specific reason they would like to be sure net neutrality does not exist is so that they can gain the ability to offer tiered services. They want to offer tiered services because they believe that a user should be able to pay for the quality of service provided to them (in terms of throughput). Instead of offering a flat level of service for all customers, both content providers and end users, the ISPs would like to offer different tiers of service. They want to provide a content provider with a guaranteed level of service based on the tier for which that provider is willing to pay. A top-tier service would allow the content provider to deliver its content to users at a fast rate. It would also let content providers who do not want to pay for an extremely high level of throughput save money by not paying for one. People for net neutrality argue that this would put content providers who cannot afford top-tier services at a disadvantage. However, ISPs say that this form of increased service already exists whenever a content provider has thousands of servers strategically placed throughout the world: such a provider can deliver a consistently high level of service because of its physical proximity to users. ISPs think that because this already exists, offering tiered services should not be a problem. Another reason ISPs are against net neutrality is that they believe that by offering tiered services, they will be able to offer a higher level of service to their subscribers through tiered filtration. By offering different tiers that users can opt in and out of, users will be signing up for the service that they are happy with, and the ISP can better manage its bandwidth because people will be in different tiers.
If a user subscribes to a low level of throughput, they will pay for a low level of throughput and be satisfied with receiving it, because that is what they paid for. Likewise, if a user subscribes to a high level of throughput, they will pay a higher fee and be content with the high level of throughput.
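The tiered-throughput scheme described above is, mechanically, just rate limiting at different rates. A minimal sketch, assuming a simple token-bucket limiter (the tier names and rates below are made up for illustration; real ISP traffic shapers are far more elaborate):

```python
class TokenBucket:
    """Hypothetical sketch of enforcing a paid throughput tier:
    tokens (in bits) refill at the tier's rate; each byte sent
    consumes eight tokens."""
    def __init__(self, rate_bps, capacity_bits):
        self.rate = rate_bps
        self.capacity = capacity_bits
        self.tokens = capacity_bits

    def refill(self, seconds):
        self.tokens = min(self.capacity, self.tokens + self.rate * seconds)

    def try_send(self, nbytes):
        bits = nbytes * 8
        if self.tokens >= bits:
            self.tokens -= bits
            return True
        return False  # over the tier's rate: drop or queue the data

# Two illustrative tiers (assumed, not real ISP plans):
basic = TokenBucket(rate_bps=5_000_000, capacity_bits=5_000_000)      # ~5 Mbps
premium = TokenBucket(rate_bps=50_000_000, capacity_bits=50_000_000)  # ~50 Mbps

basic.refill(1.0)
premium.refill(1.0)
# A 1 MB burst (8,000,000 bits) fits within the premium tier's bucket
# but exceeds the basic tier's.
assert premium.try_send(1_000_000)
assert not basic.try_send(1_000_000)
```

The design point the sketch captures: the same traffic succeeds or fails purely according to which tier was purchased, which is exactly what net neutrality proponents object to and ISPs defend.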


It was reported that Netflix consumed approximately 35% of all broadband traffic in the U.S. and Canada. In fact, Netflix and YouTube combined take up half of the Internet’s bandwidth. Half! So wait…these companies shouldn’t pay more? In a world of net neutrality, this would all be okay. They would not be charged any more for faster lanes or special access. Our largest internet service providers, like Comcast and Verizon, would be required to let them consume as much as they want for the same price that you and I pay. And so the argument for net neutrality weakens. Net neutrality will curtail our Internet access, speed and performance. You love to watch movies, but what about your neighbor who’s not a Netflix subscriber? Should she be punished with slower Internet speeds caused by bottlenecks because she’s battling for half of what’s left, just because those around her chose to subscribe to Netflix and stream movies and she did not? And who’s to say that in a few years other services like Netflix won’t appear that will consume even more bandwidth? Or, let’s suppose you’re staying in a hotel (or you’re on a plane) where everyone pays the same for Internet access, except there’s one guy in room 866 who’s hogging 50% of the bandwidth watching God knows what. With net neutrality, he would have the right to the same bandwidth as you and would pay the same. Except he’s abusing his right. And you’re suffering with slower speeds and less productivity. Net neutrality will increase our costs. The Internet cannot yet be treated as a utility because it’s not billed as a utility. If it were billed as a utility, you and your business would be paying for usage/downloads/uploads instead of a flat monthly fee. Far richer companies like Netflix, YouTube and others on the horizon would be allowed to consume as much of it as they want and pay the same fees you and I are paying. This is not equal. This is not neutral.
And companies compete wherever space is valuable, whether it’s real estate, market share or Internet bandwidth. This is why there are $8 million studio apartments in New York City and why a 30-second advertisement during the Super Bowl costs $4 million. Opponents of net neutrality regulations include AT&T, Verizon, IBM, Intel, Cisco, Nokia, Qualcomm, Broadcom, Juniper, D-Link, Wintel, Alcatel-Lucent, Corning, Panasonic, Ericsson, and others. Notable technologists who oppose net neutrality include Marc Andreessen, Scott McNealy, Peter Thiel, David Farber, Nicholas Negroponte, Rajeev Suri, Jeff Pulver, John Perry Barlow, and Bob Kahn. Nobel Prize-winning economist Gary Becker’s paper, “Net Neutrality and Consumer Welfare”, published in the Journal of Competition Law & Economics, alleges that claims by net neutrality proponents “do not provide a compelling rationale for regulation” because there is “significant and growing competition” among broadband access providers. Google Chairman Eric Schmidt states that, while Google holds that similar data types should not be discriminated against, it is okay to discriminate across different data types—a position that both Google and Verizon generally agree on, according to Schmidt. The supporters of net neutrality regulation believe that more rules are necessary. In their view, without greater regulation, service providers might parcel out bandwidth or services, creating a bifurcated world in which the wealthy enjoy first-class Internet access, while everyone else is left with slow connections and degraded content. That scenario, however, is a false paradigm. Such an all-or-nothing world doesn’t exist today, nor will it exist in the future. Without additional regulation, service providers are likely to continue doing what they are doing. They will continue to offer a variety of broadband service plans at a variety of price points to suit every type of consumer.
Computer scientist Bob Kahn has said net neutrality is a slogan that would freeze innovation in the core of the Internet. Farber has written and spoken strongly in favor of continued research and development on core Internet protocols. He joined academic colleagues Michael Katz, Christopher Yoo, and Gerald Faulhaber in an op-ed for the Washington Post strongly critical of network neutrality, essentially stating that while the Internet is in need of remodelling, congressional action aimed at protecting the best parts of the current Internet could interfere with efforts to build a replacement.


Reduction in innovation and investments:

According to a letter to key Congressional and FCC leaders sent by 60 major ISP technology suppliers including IBM, Intel, Qualcomm, and Cisco, Title II regulation of the internet means that instead of billions of broadband investment driving other sectors of the economy forward, any reduction in this spending will stifle growth across the entire economy. This is not idle speculation or fear mongering…Title II is going to lead to a slowdown, if not a hold, in broadband build out, because if you don’t know that you can recover on your investment, you won’t make it.  Opponents of net neutrality argue that prioritization of bandwidth is necessary for future innovation on the Internet. The prioritization of bandwidth stimulates innovation because the ISPs can use the money paid for preferential treatment of Internet traffic to pay for the building of network infrastructure that would increase broadband access to more consumers. Telecommunications providers such as telephone and cable companies, and some technology companies that supply networking gear, argue telecom providers should have the ability to provide preferential treatment in the form of tiered services, for example by giving online companies willing to pay the ability to transfer their data packets faster than other Internet traffic. The added revenue from such services could be used to pay for the building of increased broadband access to more consumers. Marc Andreessen states that “a pure net neutrality view is difficult to sustain if you also want to have continued investment in broadband networks. If you’re a large telco right now, you spend on the order of $20 billion a year on capex. You need to know how you’re going to get a return on that investment. If you have these pure net neutrality rules where you can never charge a company like Netflix anything, you’re not ever going to get a return on continued network investment — which means you’ll stop investing in the network. 
And I would not want to be sitting here 10 or 20 years from now with the same broadband speeds we’re getting today.”


Net neutrality rules could hamper the development of new technologies and prevent ISPs and wireless companies from offering special deals and incentives:

You shouldn’t regulate data packets:

Treating all Internet traffic equally would actually make it harder to keep the data flowing smoothly, handicap cloud computing services like voice recognition and even muck up phone calls. That’s because the Internet isn’t just for downloading and streaming; it’s increasingly used for real-time interactions among computers, servers, cellphones and other connected gadgets — where every millisecond really does matter. For these types of applications, prioritizing some packets over others could make a difference. Carriers are not looking to build a tollbooth. They are looking for ways to build a special-purpose network. These special purposes include voice and video calls. If the video data packets get priority in the data queue over a snippet of email, the call would run a lot better, and the email would still get through in time. But an explicit ban on prioritization would make that difficult or impossible. Prioritizing data will be important for a new generation of wireless service — voice over LTE, or VoLTE. Here, bits of conversations are mixed into the same wash of digits that carries your emails, Facebook messages, Spotify streams and selfie posts, none of which are as sensitive to delays as a phone call (or video call) is. And latency — the delay for a packet to get where it’s going — is worse with bandwidth-strapped wireless networks. Prioritization is the only way to do voice over data. Latency could also kneecap new services that require split-millisecond connections to massive computers far away. Voice-recognition apps don’t live on your phone or TV. Rather, they live on servers that record your voice, figure out what it really means and tell the app back on your device how to respond — all in an instant. The Electronic Frontier Foundation, an organization solidly on the side of net neutrality regulation, is skeptical of these arguments for prioritization.
Jeremy Gillula, the EFF’s staff technologist, said that prioritization doesn’t work once data leaves the ISP and goes on to the larger Internet. “Most transit providers and interconnections today completely ignore packet prioritization codes,” Gillula said. He added that data encryption, which is becoming increasingly common, would obscure any labels that, say, distinguish a voice packet from a piece of a Web page. Gillula also argued that even well-intentioned prioritization could be unfair to users. “If I use my connection primarily for VoIP, but my neighbor uses hers primarily for gaming [and we have the same ISP], why should one person’s traffic be prioritized over another based on the type of traffic?” he asked.
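The prioritization the carriers describe boils down to a priority queue in the scheduler: latency-sensitive packets jump ahead of delay-tolerant ones. A minimal sketch, assuming four made-up traffic classes (real networks mark packets with DSCP code points and use far more sophisticated queueing disciplines):

```python
import heapq
from itertools import count

# Illustrative priority classes (lower value = dequeued sooner).
# These names and values are assumptions, not any real standard.
PRIORITY = {"voice": 0, "video": 1, "web": 2, "bulk": 3}

class PriorityScheduler:
    """Sketch of the prioritization carriers argue for: voice and
    video-call packets leave the queue ahead of email or file-sharing
    traffic, which can tolerate delay."""
    def __init__(self):
        self._heap = []
        self._seq = count()  # FIFO tie-break within a class

    def enqueue(self, traffic_class, packet):
        heapq.heappush(self._heap, (PRIORITY[traffic_class], next(self._seq), packet))

    def dequeue(self):
        _, _, packet = heapq.heappop(self._heap)
        return packet

q = PriorityScheduler()
q.enqueue("bulk", "email chunk")
q.enqueue("voice", "VoLTE frame")
q.enqueue("web", "page request")
# The voice frame leaves first even though it arrived after the email.
assert q.dequeue() == "VoLTE frame"
assert q.dequeue() == "page request"
assert q.dequeue() == "email chunk"
```

Gillula's counterpoint maps directly onto this sketch: once traffic crosses onto a transit network that ignores the class markings, or is encrypted so the class can't be inferred, every packet effectively lands in the same queue.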

Regulations quash deals for consumers:

Shopping is full of special offers. But regulating wireless providers and ISPs as utilities would require uniform pricing and prohibit the offering of deals.


Counterweight to server-side non-neutrality:

Those in favor of forms of non-neutral tiered Internet access argue that the Internet is already not a level playing field: large companies achieve a performance advantage over smaller competitors by replicating servers and buying high-bandwidth services. Should prices drop for lower levels of access, or access to only certain protocols, for instance, a change of this type would make Internet usage more neutral, with respect to the needs of those individuals and corporations specifically seeking differentiated tiers of service. Network expert Richard Bennett has written, “A richly funded Web site, which delivers data faster than its competitors to the front porches of the Internet service providers, wants it delivered the rest of the way on an equal basis. This system, which Google calls broadband neutrality, actually preserves a more fundamental inequality.”


Consumer fees:

Network neutrality decreases the revenues earned by the ISPs. Those decreased revenues reduce employment and GDP, and they prevent ISPs from deploying and maintaining networks, and improving them over time. In order to recoup the lost revenue, ISPs may charge their customers increased fees. 142 wireless ISPs (WISPs) said that the FCC’s new “regulatory intrusion into our businesses…would likely force us to raise prices, delay deployment expansion, or both.”


Significant and growing competition:

A 2010 paper on net neutrality by Nobel Prize-winning economist Gary Becker and his colleagues stated that “there is significant and growing competition among broadband access providers and that few significant competitive problems have been observed to date, suggesting that there is no compelling competitive rationale for such regulation.” Becker and fellow economists Dennis Carlton and Hal Sider found that “Between mid-2002 and mid-2008, the number of high-speed broadband access lines in the United States grew from 16 million to nearly 133 million, and the number of residential broadband lines grew from 14 million to nearly 80 million. Internet traffic roughly tripled between 2007 and 2009. At the same time, prices for broadband Internet access services have fallen sharply.” The Progressive Policy Institute (PPI) reports that the profit margins of U.S. broadband providers are generally one-sixth to one-eighth of those of companies that use broadband (such as Apple or Google), contradicting the idea of monopolistic price-gouging by providers.


Broadband choice:

A report by the Progressive Policy Institute in June 2014 argues that nearly every American can choose from at least 5-6 broadband internet service providers, despite claims that there are only a ‘small number’ of broadband providers. Citing research from the FCC, the Institute wrote that 90 percent of American households have access to at least one wired and one wireless broadband provider at speeds of at least 4 Mbps downstream and 1 Mbps upstream, and that nearly 88 percent of Americans can choose from at least two wired broadband providers regardless of speed (typically choosing between a cable and a telco offering).


Potentially increased taxes:

The ruling issued by the FCC to impose Title II regulations explicitly opens the door to billions of dollars in new fees and taxes on broadband by subjecting it to telephone-style taxes under the Universal Service Fund. Net neutrality proponent Free Press argues that, “the average potential increase in taxes and fees per household would be far less” than the estimate given by net neutrality opponents, and that if there were to be additional taxes, the tax figure may be around $4 billion. Under favorable circumstances, “the increase would be exactly zero.” Meanwhile, the Progressive Policy Institute claims that Title II could trigger taxes and fees up to $11 billion a year. Financial website NerdWallet did its own assessment and settled on a possible $6.25 billion tax impact, estimating that the average American household may see its tax bill increase $67 annually. FCC spokesperson Kim Hart said that the ruling does not raise taxes or fees.


Prevent overuse of bandwidth:

Since the early 1990s, Internet traffic has increased steadily. The arrival of picture-rich websites and MP3s led to a sharp increase in the mid-1990s, followed by a subsequent sharp increase from 2003 as video streaming and peer-to-peer file sharing became more common. At one point YouTube streamed 75 petabytes of data in three months, as much as the world’s radio, cable and broadcast television channels did in one year. Networks are not remotely prepared to handle the amount of data required to run these sites. Global Internet video traffic was 57 percent of all consumer traffic in 2012 and is projected to reach 69 percent of all consumer Internet traffic in 2017. This statistic does not include video exchanged through peer-to-peer (P2P) file sharing. The sum of all forms of video traffic, including P2P, will be in the range of 80 to 90 percent of global consumer traffic by 2017. In order to deal with the increased bandwidth requirements, ISPs will need to build more infrastructure. On this view, net neutrality would prevent broadband networks from being built, which would limit available bandwidth and thus endanger innovation.


High costs to entry for cable broadband:

According to a Wired magazine article by TechFreedom’s Berin Szoka, Matthew Starr, and Jon Henke, local governments and public utilities impose the most significant barriers to entry for more cable broadband competition: “While popular arguments focus on supposed ‘monopolists’ such as big cable companies, it’s government that’s really to blame.” The authors state that local governments and their public utilities charge ISPs far more than access actually costs and have the final say on whether an ISP can build a network. The public officials determine what hoops an ISP must jump through to get approval for access to publicly owned “rights of way” (which lets them place their wires), thus reducing the number of potential competitors who can profitably deploy internet service—such as AT&T’s U-Verse, Google Fiber, and Verizon FiOS. Kickbacks may include municipal requirements for ISPs such as building out service where it isn’t demanded, donating equipment, and delivering free broadband to government buildings.


Unnecessary regulations:

According to PayPal founder and Facebook investor Peter Thiel, “Net neutrality has not been necessary to date. I don’t see any reason why it’s suddenly become important, when the Internet has functioned quite well for the past 15 years without it…. Government attempts to regulate technology have been extraordinarily counterproductive in the past.” Max Levchin, the other co-founder of PayPal, echoed similar statements, telling CNBC, “The Internet is not broken, and it got here without government regulation and probably in part because of lack of government regulation.” FCC Commissioner Ajit Pai, who was one of the two commissioners who opposed the net neutrality proposal, criticized the FCC’s ruling on internet neutrality, stating that the perceived threats from ISPs to deceive consumers, degrade content, or disfavor the content that they don’t like are non-existent: “The evidence of these continuing threats? There is none; it’s all anecdote, hypothesis, and hysteria. A small ISP in North Carolina allegedly blocked VoIP calls a decade ago. Comcast capped BitTorrent traffic to ease upload congestion eight years ago. Apple introduced FaceTime over Wi-Fi first, cellular networks later. Examples this picayune and stale aren’t enough to tell a coherent story about net neutrality.”


Increasing Governmental Influence:

Net neutrality proponents want government to enact laws or use governmental agencies like the FCC or TRAI to enforce net neutrality with strong rules. However, phone companies and ISPs have a much greater influence on the Federal Government than individuals do. This influence is primarily made manifest in the money large companies spend on lobbying the FCC and the campaign contributions these companies make to politicians on the committees that make the decisions about net neutrality. If net neutrality supporters want government intervention to strengthen net neutrality, then they are making a mistake, because governments and large corporations are always hand in glove with each other, and so far the internet has worked well without government meddling.


Market Demand should control the priority of content on the internet!

One can make a ‘collective good’ argument that popular content deserves higher serving priority (regardless of whether the ISP can charge for it). It’s great that a blogger with one reader has the same chance to distribute on the internet as the creators of Game of Thrones, but do millions of GoT watchers collectively have a greater right to their content than the hundred or so viewers of a small-time video blogger? Many consumers argue that without net neutrality, ISPs can give preferential treatment to the content they profit from, but the market dictates that popular content will be the most profitable, so isn’t that a good thing?


Potential disadvantages of net neutrality are:

1. Users will have to pay more for internet services, as ISPs will pass on the cost of the additional bandwidth they must purchase to avoid being stretched.

2. Slower internet access speeds if the ISPs are unable to add enough bandwidth to handle the increased load.

3. Higher latency and jitter due to insufficient bandwidth, which would make Voice over IP perform poorly.
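The jitter mentioned in the third point has a standard measure: RTP's running interarrival-jitter estimator (RFC 3550), a smoothed average of how much successive packet transit times differ. A small sketch with made-up transit times, showing why congested, bursty delivery translates into choppy calls:

```python
def interarrival_jitter(transit_times_ms):
    """RFC 3550-style running jitter estimate: for each pair of
    consecutive packets, take the absolute difference of their
    transit times and fold 1/16 of it into the running estimate."""
    jitter = 0.0
    for prev, cur in zip(transit_times_ms, transit_times_ms[1:]):
        d = abs(cur - prev)
        jitter += (d - jitter) / 16.0
    return jitter

# Hypothetical transit times in milliseconds (illustrative numbers):
# an uncongested link delivers packets with steady delay, while a
# congested link delivers them in erratic bursts.
steady = interarrival_jitter([20, 20, 20, 20, 20])
bursty = interarrival_jitter([20, 80, 15, 120, 20])
assert steady == 0.0
assert bursty > steady  # high jitter is what makes VoIP audio choppy
```

VoIP endpoints use exactly this kind of statistic to size their playout buffers; when jitter grows, buffers must grow too, adding delay, or audio drops out.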




There are four basic Internet freedoms that everyone should agree on: the freedom to access lawful content of one’s choice, the freedom to use applications that don’t harm the network, the freedom to attach devices to the network, and the freedom to get information about your service plan. Everybody, or virtually everybody, agrees on that. A free and open Internet stimulates ISP competition, helps prevent unfair pricing practices, drives entrepreneurship and, most importantly, protects freedom of speech. Advocates for net neutrality say that cable companies should not be able to screen, interrupt or filter Internet content without a court order, and that rules should ensure the internet remains a free and open technology and create an even playing field for competition and innovation. The question is how we operationalize that. The government is a pretty poor arbiter of what is reasonable and what is not, and it has an exceptionally poor track record of promoting innovation and investment in broadband networks. That’s something the private sector has done a remarkable job of on its own. The Internet has speedily evolved from a collaborative project among governments and universities to a promising commercial medium operated primarily by private ventures. The next-generation World Wide Web will not appear as a standard, “one size fits all” medium, primarily because consumers expect more and different features, and service providers need to find ways to recoup the cost of frequent network upgrades to accommodate ever-increasing throughput requirements. For example, Internet service providers offer online game players, Voice over Internet Protocol (VoIP) and Internet Protocol Television (IPTV) users “better than best efforts” routing of bits to promote timely delivery with higher quality of service. Similarly, content providers can use caching and premium traffic routing and management services to secure more reliable service than that available from standard “best efforts” routing.
Service diversification can result in many reasonable and lawful types of discrimination between Internet users, notwithstanding a heritage in the first two generations of nondiscrimination and best efforts routing of traffic. ISPs increasingly have the ability to examine individual traffic streams and prioritize them, creating a dichotomy between plain vanilla, best efforts routing and more expensive, superior traffic management services. However, the potential exists for carriers operating the major networks used to switch and route bit-streams to exploit network management capabilities to achieve anticompetitive and consumer-harming outcomes. Some internet service providers are trying to fundamentally alter the way the internet works by collecting money from companies like Netflix and Facebook to guarantee their data can continue to reach users unimpeded. This is called paid prioritisation; like paid news in the print medium, it is ethically objectionable and should not be allowed at all. Advocates for the principle of network neutrality claim the potential exists for ISPs to engineer a fragmented and “balkanized” next-generation Internet through unreasonable degradation of traffic even when congestion does not exist. The worst-case scenario envisioned by network neutrality advocates sees a reduction in innovation, efficiency, consumer benefits and national productivity occasioned by a divided Internet: one medium prone to congestion and declining reliability, and one offering superior performance and potential competitive advantages to users able and willing to pay, or affiliated with the ISP operating the bit-stream transmission network. Opponents of network neutrality mandates scoff at the possibility of the worst-case scenario and view government intervention as anathema. Proponents of net neutrality are worried that corporations will buy influence with ISPs to disrupt access to competitors, or smother online speech that’s critical of a company or its products.


On balance, internet neutrality is desirable.

1. Without net neutrality, large companies will interfere with online communication between users.

If control of the Internet and its contents is given to large companies, they can easily interfere with communication between users that was previously taken for granted. Comcast, for example, limited user access to BitTorrent, a peer-to-peer exchange. Proponents of network neutrality imagine that if unrestrained, internet service providers would block large portions of the Internet and make other parts accessible only behind a high paywall. While this is possible in theory, robust competition among service providers ensures companies will be punished for providing such egregious service… If any company adopted the measures network neutrality supporters envision, customers would jump ship to an ISP that gives better service.

2. Net neutrality ensures innovation and contributions from a variety of smaller users.

Part of what makes the Internet so unique is that anybody can contribute content, creating a wealth of information. However, the loss of net neutrality would mean that Internet providers would be able to create exclusive deals with existing companies, effectively shutting out smaller companies. More than 60 percent of Web content is created by regular people, not corporations. How will this innovation and production thrive if creators must seek permission from a cartel of network owners? Net neutrality promotes innovation by letting ideas be tested on a large number of consumers. Also, the per-user cost of internet provision is reduced greatly because of economies of scale. Currently millions of companies profit from the internet; ISPs are just being greedy. That is why they say net neutrality interferes with innovation: it prevents them from charging users more to access more content, giving them less profit and interfering with their ability to innovate.

3. Net neutrality preserves choice on the Internet and the idea that a website’s success is determined by its quality.

The Internet is special in that anybody can contribute content, and the actual success of websites is determined by the users themselves. If a website is unpopular, it will ultimately fail because not enough people are visiting that website and using it. Net neutrality ensures that this system stays in place because any user will be able to access any website. However, without net neutrality, the idea that the best and most popular websites will succeed is no longer true, as competition will be distorted by larger companies making deals and preventing access to certain websites.


Key points of concerns vis-à-vis net neutrality requirements:

1. Transparency requirement:

A person engaged in the provision of broadband Internet access service shall publicly disclose accurate information regarding the network management practices, performance, and commercial terms of its broadband Internet access services sufficient for consumers to make informed choices regarding use of such services and for content, application, service, and device providers to develop, market, and maintain Internet offerings.


While transparency and informed choice are absolutely important for consumers, we should also expect, if not demand, that internet access providers undertake several network management practices to protect our safety, privacy and security that they do not make public. For example, ISPs today have mechanisms in place to identify images of child sexual exploitation, and making public the ways in which they manage this on their networks would seriously undermine that vital work. Additionally, there are many aspects of network management and performance whose disclosure would be a boon to those interested in hacking, infecting or harming networks to advance financial or political goals.

The transparency rule therefore hinges on the concept of ‘sufficient’ information for consumers to make informed choices, which is left undefined, while the overall directive demands a degree of transparency that may not serve individuals, companies, or national security well.


2. No Blocking requirement:

“A person engaged in the provision of fixed broadband Internet access service … shall not block lawful content, applications, services, or non-harmful devices … [or] consumers from accessing lawful websites, subject to reasonable network management; nor shall such person block applications that compete with the provider’s voice or video telephony services, subject to reasonable network management.” This point carries the caveat “No Unreasonable Discrimination,” defined as follows: “… [Access providers] shall not unreasonably discriminate in transmitting lawful network traffic over a consumer’s broadband Internet access service. Reasonable network management shall not constitute unreasonable discrimination.”


How the word “block” or the phrase “reasonable network management” is defined raises safety concerns. While blocking legal content may be undesirable, slowing some content streams in favor of other content types will be important for consumers’ overall experience, and safety. For example, according to Cisco’s Visual Networking Index Forecast, Internet traffic will more than quadruple by 2014, with some form of video content accounting for more than 90% of all content transmitted over the internet. While some of that video streaming will serve critical purposes like remote medical assistance, most will be for entertainment. Should these two types of content be given equal priority? Should video streaming be given the same priority as phone calls (VoIP)? While a 5-second delay in a video download means your video isn’t ready quite as fast as it might be, the same delay in a phone call is intolerable; if that call is to 911, it is clearly a safety concern. Again, there is a clear need to prioritize content types from a safety perspective, particularly given the exponential growth in bandwidth use and the faltering economic model for bandwidth development.


3. Reasonable network management:

It is defined by the FCC as follows: A network management practice is reasonable if it is appropriate and tailored to achieving a legitimate network management purpose, taking into account the particular network architecture and technology of the broadband Internet access service. Legitimate network management purposes include: ensuring network security and integrity, including by addressing traffic that is harmful to the network; addressing traffic that is unwanted by users (including by premise operators), such as by providing services or capabilities consistent with a user’s choices regarding parental controls or security capabilities; and by reducing or mitigating the effects of congestion on the network.


This looks at three aspects of network management in narrowly defined categories: 1) technical management of a service, including security defences; 2) providing consumers with safety tools to manage their own content access; and 3) managing network congestion. The future may show that several additional categories are needed, and that there is more overlap between categories than suspected. At a time when new threats emerge daily, and entirely new categories of exploits continue to appear, this definition has the potential to hamper proactive measures of defence in new and unforeseen areas. It also risks stifling healthy competition between service providers in areas of consumer safety, and discouraging innovation of new or hybrid safety, security and privacy solutions that would look beyond these narrow confines. Our personal safety, as well as the safety of the internet as a whole, depends on ISPs taking strong protective measures on our behalf. We need to push for greater safety measures and create an environment that encourages and rewards service providers for doing so. An ‘open’ internet is an illusion if we do not have a secure environment in which consumers can safely embrace the web. Otherwise it’s only open to the crooks, scammers, and cyber-thugs.


The essential argument is that ISPs provide better service by being allowed to actively manage their network. Some examples of this better service would be:

1. Protecting the average user from the power user: Users who download gigabytes of data may unfairly hog bandwidth resources from those who don’t. By throttling certain users or types of data, ISPs can be sure that every user has an optimal experience.

2. Preventing illegal activity: ISPs generally want to prevent illegal file swapping over their networks, both due to the legal issues and for basically the same bandwidth reasons as above.

3. Privileging special services: Certain important Internet services require heavy and uninterrupted bandwidth, such as medical services or VoIP. ISPs want to give preference to these unique services, which benefit from, and possibly could not exist without, preferential treatment. This is one of the key arguments in the Verizon/Google proposal of 2010.
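The throttling described in point 1 is often implemented with something like a token-bucket limiter. Below is a minimal sketch; the rate and burst figures are hypothetical, not any real ISP's policy:

```python
import time

class TokenBucket:
    """Token-bucket throttle sketch: cap a power user's sustained rate
    while still allowing short bursts (hypothetical parameters)."""
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s     # sustained rate allowed
        self.capacity = burst_bytes      # burst allowance
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # refill tokens according to elapsed time, capped at the burst size
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True          # forward the packet
        return False             # drop or delay: user exceeded their share

bucket = TokenBucket(rate_bytes_per_s=125_000, burst_bytes=250_000)  # ~1 Mbps
print(bucket.allow(100_000))  # within the burst allowance -> True
```

A packet larger than the remaining tokens is simply held back, so every user's long-run average stays near the provisioned rate.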


Is net neutrality technically possible?

Building a fully net-neutral network is not technologically possible; it is a utopian idea with no basis in how networks actually work. No telecom engineer will say that strict network neutrality is feasible. The notion that every packet is treated equally does not hold. The Internet inherently prioritises traffic on a 0-7 priority scale: network architecture gives the highest priority to network management, followed by online gaming, speech and videos, then still images and music files, and last file transfers and emails. These cannot all be on the same footing.
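The 0-7 scale mentioned here resembles the priority code points of IEEE 802.1p/802.1Q. A minimal sketch of such class-based ordering, with the class-to-priority assignments taken from the essay's ordering rather than from the standard's exact mapping:

```python
# Illustrative priority map on a 0-7 scale (7 = highest), loosely modeled
# on IEEE 802.1p priority code points. The assignments follow the text's
# ordering, not the exact standard mapping.
PRIORITY = {
    "network_management": 7,
    "online_gaming": 6,
    "voice": 5,
    "video": 4,
    "still_images": 3,
    "music_files": 2,
    "file_transfer": 1,
    "email": 0,
}

def transmission_order(traffic_classes):
    """Sort queued traffic so higher-priority classes go first."""
    return sorted(traffic_classes, key=lambda c: PRIORITY[c], reverse=True)

queue = ["email", "voice", "video", "file_transfer"]
print(transmission_order(queue))  # voice, then video, then file_transfer, then email
```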


The debate over bandwidth utilization:

When the BTIG Research firm began covering the Internet pipe operator Cogent Communications, its report contained an amusing insight. Cogent’s last-mile business customers buy a service that offers 100 megabits per second. The average use by these customers, though, is only about 12 Mbps, and barely “one or two dozen of their customers have ever reached 50% utilization of the 100 MB pipe,” says BTIG. So the existing infrastructure meets the requirements of the overwhelming majority of customers, and only a small minority require more. The implication is that we don’t need network neutrality, because the users are not using what is there! However, the conclusions are misleading for a variety of reasons:

First, there’s a difference between sustained bandwidth utilization and bandwidth spikes in demand.

For sustained bandwidth utilization, while network operators may differ, a user should in general not exceed 50% to 70% utilization of a 100 Mbps pipe (the provisioned bandwidth provided by the ISP). Periodic spikes in demand will push the user to 80% to 90% utilization for short bursts. When those spikes occur, bandwidth is available. However, if sustained utilization were consistently 90% of the pipe, random spikes in demand would exceed the available bandwidth, quickly creating an under-provisioned network; the user would urgently need an upgrade. Bandwidth is not a static, monolithic phenomenon; it is dynamic and ever changing.
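These rules of thumb can be expressed as a small check. A sketch, assuming the 50-70% sustained and 80-90% spike figures quoted above (the thresholds are the essay's, not an industry standard):

```python
def provisioning_status(pipe_mbps, sustained_mbps, spike_mbps):
    """Classify a link using the rough rules of thumb above: sustained use
    should stay under ~70% of the pipe so that short spikes still fit;
    sustained use near 90% leaves no headroom for spikes."""
    sustained_util = sustained_mbps / pipe_mbps
    spike_util = spike_mbps / pipe_mbps
    if sustained_util <= 0.70 and spike_util <= 1.0:
        return "well provisioned"
    if sustained_util < 0.90:
        return "watch closely"
    return "under-provisioned: upgrade needed"

print(provisioning_status(100, 12, 45))   # typical Cogent customer: well provisioned
print(provisioning_status(100, 90, 105))  # sustained 90%: upgrade needed
```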

Let me give an example from India.

It is well known within the telecommunications industry that companies do not make money off the last-mile connection. The initial investment in the last mile is expensive, and the fees they can charge customers are held down by competition. As a result, technology lags in the last mile, because operators cannot profit from it. This is what makes a non-net-neutral environment attractive to ISPs: it presents a way to make money off the last-mile connection. Whenever a new ISP arrives, it offers high bandwidth to each customer, and people do get fast internet for a few months. For example, suppose a 3G mobile broadband tower delivers 100 Mbps to a small town. If the town has 100 customers, each gets 1 Mbps, and if 50% are not online at a given moment, each active user gets 2 Mbps. After three months the number of consumers grows to 1,000; now each gets 0.1 Mbps, and with 50% of consumers online at a time, each active user gets 0.2 Mbps. This happens because the ISP is not upgrading its infrastructure to provide more bandwidth; it prefers that consumers keep using the present infrastructure and keep paying the monthly fee. With traffic engineering, the incumbent carriers could deliver higher capacity (bandwidth) to a select group of customers and charge them more. However, the network neutrality ruling precludes them from doing so.
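The arithmetic in that example is simple capacity sharing, and can be sketched as:

```python
def per_user_mbps(tower_mbps, subscribers, active_fraction):
    """Effective bandwidth per active user when a tower's capacity is
    shared among the subscribers actually online (figures from the text)."""
    active_users = subscribers * active_fraction
    return tower_mbps / active_users

print(per_user_mbps(100, 100, 0.5))    # 100 subscribers, half online -> 2.0 Mbps
print(per_user_mbps(100, 1000, 0.5))   # 1000 subscribers, half online -> 0.2 Mbps
```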

Secondly, the term “bandwidth” has never been well defined in the industry and remains largely ambiguous. There is no agreement on which bits are counted as part of bandwidth: for example, do Ethernet header bits or CRC bits count? Certainly the carriers continue to obfuscate the terms by giving their service offerings names like “100 Ultra” that some users interpret as a bidirectional 100 Mbps connection. Keeping the user confused seems to be the goal. At least with network neutrality we can open the door to demanding a clear definition of how bandwidth, and bandwidth utilization, are tested and measured.

Third, the carrier networks are the equivalent of one-lane dirt roads with potholes. Is it any wonder that no one makes high-quality, high-performance luxury automobiles to travel on a one-lane dirt road with potholes? This is a classic chicken-and-egg problem. Great products and services requiring a high-speed superhighway are possible, but if the product creators only see a low-capacity, inadequate Internet, why bother creating those products? Thus, the products are never created. Would network neutrality help get the superhighway built? Maybe. We’ve seen the difference between HD videos from Netflix and YouTube delivered over satellite (beautiful quality) compared with delivery over the cable network (poor quality), and of course the worst quality over mobile broadband. Maybe that’s acceptable to a large class of users, but since they have not seen any alternatives, how would they know?


Balanced Network Neutrality Policy:

The debate over network neutrality has two very different points of view. Network neutrality advocates worry about ISPs discriminating against internet traffic, while opponents argue that enforcing network neutrality would be difficult and error-prone (Felten, 2006). The solution is a balanced policy that limits the harmful uses of discrimination while allowing its beneficial uses, because a wrong decision on network neutrality regulation can hamper the Internet’s development (Peha, 2007). To protect beneficial discrimination, a policy might allow the following: Network operators could provide different QoS to different classes of traffic, using explicit prioritization or other techniques; stricter QoS requirements for traffic sent using a higher-priced service can be favored this way (Alleven, 2009). Network operators could charge different prices to the senders and recipients of data depending on the class of traffic (Peha, 2007); a higher price could be charged for traffic that consumes more of a limited resource, requires superior quality of service, or adversely affects neighbors’ traffic (Felten, 2006). Traffic that poses a threat to network security, or is otherwise harmful to the network, could be blocked by the network operator, whether for not following certain protocols or by defined algorithms (Crowcroft, 2007). Network operators could also benefit by offering unique services or proprietary content to their customers (Peha, 2007).

If, and only if, the broadband market is not highly competitive, a policy designed to limit harmful uses of discrimination would not allow the following (Peha, 2007): A network operator could not charge more for a 50 kbps VoIP stream than for a 50 kbps gaming application where the QoS requirements are the same (Felten, 2006). One user could not be charged more than another for a comparable information transfer or monthly service, whether the user is a service provider, content provider or consumer, and whether the user is the sender or the receiver (Peha, 2007). Unless a network operator has a reasonable belief that certain traffic poses a security threat, it could not block traffic based on content or application alone (Crowcroft, 2007). QoS could not be degraded on the basis of content alone (Crowcroft, 2007) (Alleven, 2009). And a network operator could not offer different QoS and price for traffic that competes with a legacy circuit-switched service (Peha, 2007).


To simplify, the Internet marketplace can be analytically split into three categories: content providers (Google, Netflix, porn sites, blogs), ISPs (Comcast, Verizon, CenturyLink), and end-users (you and me). The end-users are consumers, whose consumption preferences ultimately determine the value of content. ISPs interact directly with consumers by selling the high-speed connections that allow their customers to access content, and with content providers by managing the networks over which information flows. ISPs are thus resource owners, because they own the networks; but they are also entrepreneurs, insofar as they strive to maintain the profitability of their networks under rapidly evolving market conditions. To be successful, ISPs must serve consumer demand in a cost-effective manner. FCC regulation of the Internet is rooted in the belief that a “virtuous circle” of broadband investment is ultimately driven by content providers. The more good content providers make available, the more consumers will demand access to sites and apps, and the more ISPs will invest in the infrastructure to facilitate delivery. Minimize the financial and transaction costs imposed by ISPs on content providers, and content will flourish and drive the engine. That’s the theory, anyway. But in practice, there’s no good evidence that myopically favoring content providers over infrastructure owners is beneficial even to content providers themselves, let alone to consumers. Rather, the two markets are symbiotic; gains for one inevitably produce gains for the other. Without an assessment of actual competitive effects, it is impossible to say that consumers are best served by policies that systematically favor one over the other. Somehow, even absent net neutrality regulation, ISPs have invested heavily in infrastructure and broadband. End-users have benefitted immensely, with 94 percent of U.S. households having access to at least two providers offering fixed broadband connections of at least 10 megabits per second, not to mention the near-ubiquitous coverage of wireless carriers offering 3G and LTE service at comparable speeds. Broadband networks are expensive to build and, particularly for mobile networks, increasingly prone to congestion as snowballing consumer use outpaces construction and upgrades. In order to earn revenue, economize the scarce resource of network capacity, and provide benefits to consumers, ISPs may engage in various price-discrimination and cross-subsidization schemes, i.e., the much-maligned “paid prioritization” motivating net neutrality activists. The non-Internet economy is replete with countless business models that use similar forms of discrimination or exclusion to consumers’ benefit. From Priority Mail to highway toll lanes to variable airline-ticket pricing, discriminatory or exclusionary arrangements can improve service, finance investment, and expand consumer choices. The real question is why we would view these practices any differently when they happen on the Internet.


Comcast’s policy toward the peer-to-peer data packets made economic sense:

A small minority of its customers was consuming much of its bandwidth by downloading large movie files with BitTorrent’s technology, thereby reducing data transfer rates for the majority of customers who used Comcast’s service primarily for Web surfing and email. By identifying peer-to-peer data packets and slowing, or “de-prioritizing,” their passage through its network, Comcast made more capacity available to the majority of its customers and avoided raising its rates to foot the cost of the infrastructure improvements that would be required to accommodate peer-to-peer file transfers as they grew in popularity. Given that these transfers were being made on its property, Comcast had the right to do so. According to the FCC, however, Comcast’s actions violated the principles of net neutrality because they unfairly “discriminated” against the BitTorrent data packets.


It’s really about costs, and there isn’t a clear-cut answer. Network capacity costs money to build and maintain. There are tons and tons of over-the-top (OTT) services that cause broadband subscribers to use more capacity than they otherwise would; this is why, during US prime time, about a third of the total capacity of the US portion of the Internet is consumed by Netflix. From the outside looking in, most consumers respond with a shrug: after all, they bought a package that advertises X amount of bandwidth, so the service provider should be able to provide that to all customers on that package. The problem for the service provider is twofold. One, all networks are designed around oversubscription, and no residential broadband network in the world is designed to handle all its users running at full rate all the time. Two, in many cases customers choosing OTT video (and telephone) services are reducing or eliminating their video subscription from their service provider. This is key to understanding the issue, because virtually all broadband networks were built around a multi-purpose model. This includes DSL (phone, data, and sometimes video), DOCSIS cable (video, data, and often phone), and fiber (FTTx), which is almost always a triple play (FiOS and U-verse work this way). It matters because these networks were built on the assumption that the operator would get revenue from most subscribers for two or all three services. When many subscribers eliminate one or more of those services and increase their usage of the third (data) to make up for it, the service provider faces a double whammy. We cannot think of a single large facilities-based operator (one that owns the gear) that built around simply offering data, and data plans are historically less expensive because much of the cost of the network is shared with the other service(s).
The reason that operators would like flexibility is to find a way to indirectly monetize the OTT services that are consuming resources on their infrastructure.


Bandwidth and net neutrality:

The resurgent issue of net neutrality, i.e. whether all net traffic has an equal opportunity to travel at the same speed, is very much related to bandwidth, with Netflix using 32.25% of the total web bandwidth of North American home users nightly, followed by BitTorrent, YouTube streaming, pirate sites and porn. With more streaming services providing movies or music emerging every day, one can see that the pipeline is being hogged by certain businesses while others make do with what space is left. With a neutral net, everyone has an equal right to that same pipeline, so if lots of folks all stream Netflix or other services’ movies tonight, the music file upload that I’m trying to send to a friend will go slower and slower; in fact everything slows down equally.


High bandwidth used by Netflix can lead to traffic jams and slower speeds for other users, because video needs a lot of bandwidth and priority to maintain quality (low latency), which in turn slows the internet for email and other web traffic. Is that not a violation of net neutrality? Why should a consumer who is not using Netflix suffer because of a customer who is? Net neutrality also means that as more folks use “what’s left,” Netflix movies begin to buffer, jitter and eventually deliver pixelated images to compensate. The pipeline isn’t infinite. So we have a proposal to create a separate higher-speed pipeline that companies like Netflix pay for (and no doubt pass those costs on to their customers). Netflix has already paid Comcast and Verizon to get into the high-speed lane. That also violates net neutrality.


Only allow discrimination based on type of data:

Columbia University Law School professor Tim Wu has observed that the Internet is not neutral in its impact on applications with different requirements: it serves data applications better than applications that require low latency and low jitter, such as voice and real-time video. Looking at the full spectrum of applications, including both those that are sensitive to network latency and those that are not, the IP suite isn’t actually neutral. He has therefore proposed regulations on Internet access networks that define net neutrality as equal treatment among similar applications, rather than neutral transmission regardless of application. Broadband operators would be allowed to make reasonable trade-offs between the requirements of different applications, while regulators carefully scrutinize network operator behavior where local networks interconnect. However, it is important that these trade-offs among applications be made transparently, so that the public has input on important policy decisions. This is especially important because broadband operators often provide competing services, such as cable TV and telephony, that might differentially benefit when the need to manage applications could be invoked to disadvantage other competitors.

The proposal of Google and Verizon would allow discrimination based on the type of data, but would prohibit ISPs from targeting individual organizations or websites. Google CEO Eric Schmidt explains Google’s definition of net neutrality as follows: if the data in question is video, for example, then there is no discrimination between one purveyor’s data and that of another. However, discrimination between different types of data is allowed, so that voice data could be given higher priority than video data. On this, both Verizon and Google are agreed.
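The type-based (rather than provider-based) discrimination described here can be sketched as a scheduler that orders packets by traffic class and never consults the sender. The class names and priorities below are illustrative assumptions, not taken from the Google/Verizon proposal:

```python
# Hypothetical class priorities: voice above video above bulk data.
CLASS_PRIORITY = {"voice": 0, "video": 1, "bulk": 2}  # lower = served first

def schedule(packets):
    """Order packets by traffic class only; the sender is never consulted,
    so two video providers receive identical treatment."""
    indexed = [(CLASS_PRIORITY[kind], i, sender)
               for i, (kind, sender) in enumerate(packets)]
    return [sender for _, _, sender in sorted(indexed)]

packets = [("video", "Netflix"), ("voice", "Skype"),
           ("video", "YouTube"), ("bulk", "backup")]
print(schedule(packets))  # voice first; the two video senders keep arrival order
```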


Individual prioritization without throttling or blocking:

Some opponents of net neutrality argue that under ISP market competition, paid prioritization of bandwidth can produce optimal user welfare. Although net neutrality might protect user welfare when the market lacks competition, they argue that a better alternative would be to introduce a neutral public option to incentivize competition, rather than forcing existing ISPs to be neutral. Some ISPs, such as Comcast, oppose blocking or throttling but have argued that they should be allowed to charge websites for faster data delivery. AT&T has made a broad commitment to net neutrality, but has also argued for its right to offer websites paid prioritization and in favor of its current sponsored-data agreements.


What if the costs were the consumers’ decision?

What if there were a meter on your Internet connection: when your bandwidth exceeded a threshold, a warning would pop up and you could decide whether or not to pay more for the massive amounts of data you were streaming or downloading. This would not be a restriction of the Internet, but a realistic reflection of usage. Every Internet service, new, old or yet to come, would have an equal opportunity to offer its content or data. This would not stifle innovation or prevent small emerging companies from competing with Amazon, Apple and Netflix. There would be no discrimination, but it would be up to the consumer to decide whether the surcharge to their Internet service was worth adding to the cost of that Netflix movie.
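This metered-billing idea can be sketched as follows; the base fee, threshold and surcharge figures are invented purely for illustration:

```python
def monthly_bill(base_fee, included_gb, used_gb, surcharge_per_gb):
    """Metered-billing sketch of the proposal above: a flat fee covers a
    usage threshold, and the consumer knowingly pays for overage.
    All figures are hypothetical."""
    overage_gb = max(0.0, used_gb - included_gb)
    warned = overage_gb > 0  # the 'meter warning' the consumer would see
    return base_fee + overage_gb * surcharge_per_gb, warned

bill, warned = monthly_bill(base_fee=40.0, included_gb=300,
                            used_gb=340, surcharge_per_gb=0.50)
print(bill, warned)  # 60.0 True: 40 GB over the threshold at $0.50/GB
```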


Research on protocol defined by users rather than network:

In their paper, ‘Putting Home Users in Charge of their Network’, the research team discuss why users should be the ones making the decisions. The researchers explain: “The user should define which traffic gets what type of service, and when this happens; while the ISP figures out how and where in the network provisioning is implemented.”

The researchers’ reasons are:

• Users expect the Internet to be fast, always on, reliable, and responsive.

• Users do not want the network to stand in the way of the application.

• ISPs struggle with how to share available bandwidth among users’ applications.

The research team then made the point that the current “one size fits all” approach is not working, and that each individual user should be able to choose the priority of their applications, indicate that preference to the ISP, and have the ISP implement the required changes. The researchers also feel this is entirely doable:  “We could use existing methods, such as Resource ReSerVation Protocol (RSVP), but we can go one step further and exploit recent trends in networking that make it even easier for ISPs to have more programmatic control over their networks, therefore making it easier for the ISP to implement the user’s desire.”
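The "user decides, ISP implements" split the researchers describe can be sketched as a simple hand-off: the user ranks applications, and the ISP converts that ranking into per-application bandwidth shares. This is a hypothetical illustration of the division of labour, not the RSVP-based mechanism the paper itself discusses:

```python
def allocate(user_ranking, total_mbps):
    """Give each application a bandwidth share proportional to its
    user-assigned weight (higher weight = more important to this household).
    The weighting scheme is an assumption for illustration."""
    total_weight = sum(user_ranking.values())
    return {app: round(total_mbps * weight / total_weight, 2)
            for app, weight in user_ranking.items()}

# One household prioritizes video calls over streaming and downloads.
print(allocate({"video_call": 5, "streaming": 3, "downloads": 2}, total_mbps=50))
```

The same ISP-side code serves every household; only the user-supplied ranking differs, which is exactly the point the researchers make.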


Fast internet to save lives:

Take a rural basic-service hospital which, after a serious accident, may have to serve as the operating room, with a senior surgeon at a university clinic performing the operation via telemedicine. If this digital, electronic surgery is to be possible, it can only work with perfect internet connection quality and the capacity to transmit the instructions of the senior surgeon working on the patient’s organs (lungs, heart or cardiovascular vessels). We have got to be willing to pay a price for this, and you simply cannot talk about perfect equality there.


Tiered services:

Tiered service structures allow users to select from a small set of tiers at progressively increasing price points to receive the product or products best suited to their needs. Such systems are frequently seen in telecommunications, specifically in wireless service, digital and cable television options, and broadband internet access. When a wireless company, for example, charges customers different amounts based on the number of voice minutes, text messages, and other features they desire, the company is utilizing the principle of tiered service. The same is seen in charging different prices for the speed of one’s internet connection or the number of cable television channels one has access to. Tiered pricing allows customers access to services they could not otherwise afford, ultimately reflecting the diversity of consumer needs and resources. Tiered service helps maintain quality-of-service standards for high-profile applications like streaming video or VoIP, at the cost of higher prices for better service levels. Major players in the net neutrality debate have proposed a tiered internet in which content providers who pay more to service providers get better-quality service. The way ISPs tier services for content and application providers is through “access-tiering”: a network operator grants bandwidth priority to those willing to pay for quality service. “Consumer-tiering” is where different speeds are marketed to consumers and prices are based on consumers’ willingness to pay. A tiered internet gives priority to packets sent and received by end-users who pay a premium for service. Network operators do this to simplify things such as network management and equipment configuration, traffic engineering, service-level agreements, billing, and customer support.
Initial reasoning against tiered service was that ISPs would use it to block content on the internet. Internet service providers could use this to prioritize affiliated partners instead of unaffiliated ones. Many argue that one fast network is much more efficient than deliberately throttling traffic to create a tiered internet.
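Consumer-tiering, as described above, amounts to choosing the cheapest tier that meets one's needs. A sketch with invented tier names, speeds and prices:

```python
# Hypothetical consumer-tiering menu: the speeds and prices are invented
# to illustrate the tier-selection principle, not any real ISP's offering.
TIERS = [
    {"name": "basic",   "mbps": 10,  "price": 20},
    {"name": "plus",    "mbps": 50,  "price": 40},
    {"name": "premium", "mbps": 100, "price": 70},
]

def cheapest_tier(needed_mbps):
    """Pick the least expensive tier that still meets the consumer's need,
    or None if no tier is fast enough."""
    eligible = [t for t in TIERS if t["mbps"] >= needed_mbps]
    return min(eligible, key=lambda t: t["price"]) if eligible else None

print(cheapest_tier(25)["name"])  # "plus": the cheapest tier offering >= 25 Mbps
```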



My proposal of ‘two lane’ internet:


Let me start with a few examples.

1. You want to travel from Mumbai to Delhi by rail. The Rajdhani Express takes you there in 17 hours with air-conditioning; the Firozpur Janata Express takes 36 hours without air-conditioning. The origin, the destination and the railways are the same. What differs is the speed and comfort that come with money. There is no railway neutrality: more money buys faster speed and better quality.


2. You want to send a letter to your wife. You can send it by ordinary mail or by speed post. Again, the origin, the destination and the postal service are the same. What differs is the speed, driven by the extra money spent. We have no neutrality in the postal service.


3. You want bail from the high court. If you are a celebrity, you get bail within 48 hours of conviction. If you are a common man, you wait months for bail. What differs is the speed, driven by the celebrity’s top lawyers, who charge more than a common man can afford.


4. You visit the Tirumala Tirupati temple to offer worship to God. If you pay Rs. 300 per person, you get fast access. If you want free access, you wait for hours. There is multi-tiering even at temples.


5. Water and electricity: The tap water you get at home costs everybody the same no matter where you use it, but if you want to drink bottled pure water, you pay extra. I drink bottled water every day to prevent waterborne disease; under the socialism of net neutrality, I would have to drink tap water. And electricity does not cost the same for every unit: as per the bill I receive, the first 50 units are charged at Rs. 1.20 per unit, while units above 400 are charged at Rs. 2.55 per unit. Where is the neutrality in these so-called common carriers?
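The telescopic electricity tariff works as a slab calculation. A sketch: the first and last rates come from the bill quoted above, while the middle slab rate is a hypothetical filler, since the bill's intermediate slabs are not given:

```python
# Slab (telescopic) electricity billing. The Rs. 1.20 and Rs. 2.55 rates are
# from the bill quoted above; the middle slab rate is hypothetical.
SLABS = [
    (50,   1.20),   # first 50 units
    (350,  1.80),   # next 350 units (hypothetical rate)
    (None, 2.55),   # everything over 400 units
]

def electricity_bill(units):
    total, remaining = 0.0, units
    for slab_size, rate in SLABS:
        used = remaining if slab_size is None else min(remaining, slab_size)
        total += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return round(total, 2)

print(electricity_bill(45))   # all in the first slab: 45 * 1.20 = 54.0
print(electricity_bill(500))  # spans all three slabs
```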


Are we doing injustice at railways, posts, courts, temples, water and electricity?




We are humans, and we have evolved ways to access anything depending on the amount of money we spend. We have also evolved habits of doing things fast or slow. The same logic applies to the internet. There are so many people who have money but no time; why should they suffer under the socialism of net neutrality? Why are we so hypocritical when we talk about the internet? I am a busy doctor, busy teacher and busy blogger. I hardly get 20 minutes in 24 hours to read the comments posted on my website, and there are thousands of them. If the internet is slow, it takes a lot of time to access the comments and approve them. If I pay money to my ISP so that I get faster access to my own website, how does that violate net neutrality? There are so many people in the world who have money but no time; why do them injustice under the pretext of net neutrality? Then there are net habits, and you cannot change net habits. Some people want to download videos all the time, reducing the internet speed of others who do not watch videos.


Technically and commercially, net neutrality does not exist even today:

The Institute of Electrical and Electronics Engineers (IEEE) decided that the best way to manage traffic flow was to label each packet with codes based on the time sensitivity of the data, so routers could use them to schedule transmission. The highest priority values are for the most time-sensitive services, with the top two slots going to network management, followed by slots for voice packets, then video packets and other traffic. That is why, when you are downloading a YouTube video, your neighbour connected to the same ISP will find his email slowing down, as most of the bandwidth is used by the video, which gets prioritized transmission. Net neutrality is violated. We already have two-sided pricing: ISPs collect revenues from consumers as well as from content/service/application providers, and this two-sided pricing is often linked to QoS. So we already have a non-net-neutrality regime. Paid peering, paid prioritization and the use of CDNs by large companies like Google, Netflix and Facebook get them faster and better internet access than my website. Again net neutrality is violated. While wireless handsets generally can access Internet services, most ISPs favour content they provide or secure from third parties under a “walled garden” strategy: deliberate efforts to lock consumers into accessing and paying for favoured content and services. Net neutrality is already violated on mobile phones using mobile broadband. We are living in a world where the net is not neutral anyway, technically and commercially.
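The priority-tagged scheduling described above can be sketched in a few lines of Python. The priority values and the `schedule` helper below are illustrative assumptions, not the actual standard's bit-level encoding; the point is only that higher-priority classes drain first while equal-priority packets keep their arrival order:

```python
import heapq

# Illustrative priority values (higher = more time-sensitive); real networks
# encode similar classes in a small field on each frame or packet.
PRIORITY = {"network-management": 7, "voice": 6, "video": 5, "best-effort": 0}

def schedule(packets):
    """Return packets in transmission order: highest priority first,
    FIFO among packets of equal priority."""
    heap = [(-PRIORITY[kind], seq, kind) for seq, kind in enumerate(packets)]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, kind = heapq.heappop(heap)
        order.append(kind)
    return order

arrivals = ["best-effort", "video", "voice", "best-effort", "network-management"]
print(schedule(arrivals))
# ['network-management', 'voice', 'video', 'best-effort', 'best-effort']
```

Note how the email-like best-effort packets, although they arrived first, are transmitted last: this is exactly the (perfectly sensible) non-neutrality the paragraph describes.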


All bits and all packets are not the same, as we need to differentiate between time-sensitive (VoIP) and time-insensitive services (email), and between bandwidth-hogging (video) and bandwidth-sparing services (a simple web page). Interestingly, many bandwidth-hogging services are also time-sensitive, and many bandwidth-sparing services are also time-insensitive. Under a strict net neutrality principle, all of these would be transmitted at the same speed and the same priority. That would reduce the quality of the internet experience.


What does any user or consumer want from the internet anyway?

1. Fast speed

2. Cheap monthly fees

3. Access to all legal content, applications and services of his/her choice

4. No blocking or slowing of any lawful website/application/service

5. Transparency by ISPs about their networking policies

6. Privacy maintained

7. Good quality of service


The key Internet services that place greater demands on connectivity ought to be treated differently from other traffic. Telemedicine, teleoperation of remote devices, and real-time interaction among autonomous vehicles (driver-less cars) could become problematic if their data packets get stalled at peak congestion times. The Internet as a service should be split in two, with one “lane” providing equal and unfettered access to websites, and another “lane” for special services with greater demands, such as telemedicine, Netflix or HD IP television. Both lanes would run over the same Internet infrastructure. An innovation-friendly Internet means guaranteed reliability for special services, which can only develop when predictable quality standards are available. Fast special lanes are necessary for the development of new, advanced uses of the internet, like telemedicine or driver-less cars; without guaranteed, fast-access internet connections, such innovations won't come to market. The current “one size fits all” approach is not working: each individual user should be able to choose the priority of their applications, indicate that preference to the ISP, and have the ISP implement the required changes. In other words, the special lane can be used by any user or any content/service/application provider, provided they pay the required charges to the ISP.


The internet should evolve into a two-lane internet running on the same infrastructure. The two-lane internet will be a high-speed, low-latency network:

1. The first lane is the common internet

2. The second lane is the special internet


Common internet:

The common internet is for common people, treating all data on the Internet equally, without discriminating or charging differentially by user, content, site, platform, application, type of attached equipment, or mode of communication. More importantly, the common internet would have a baseline internet speed of at least 4 Mbps in developed nations and 1 Mbps in developing nations, with average network latency below 125 milliseconds, for all users, most of the time, for all websites, services and applications, without any discrimination. Consumers would be charged a flat rate for any type of data use depending on the quantum of data used. When you are on the common internet, the download speed of Google, Facebook, Netflix, YouTube, my website or a student's website is the same: no prioritization, no slowing, no blocking, with transparent network traffic management policies using best efforts. No technical prioritization or bandwidth throttling, and no commercial paid prioritization, paid peering, CDN use or slowing of any data/voice/video. All bits, bytes and packets are equal at the network level and at the pricing level. The more bandwidth you consume, the more you pay, irrespective of whether you use YouTube, email, Skype or VoIP, or visit my website. For the common internet to succeed, ISPs have to upgrade their infrastructure.


ISPs can have the second-lane special internet only if they fulfil the criteria of the common internet. If an ISP cannot give an internet speed of 4 Mbps in developed nations and 1 Mbps in developing nations to all consumers most of the time, it cannot have the second lane. The job of regulatory bodies like the FCC/TRAI is to check that the prescribed speed of the common lane is maintained, and otherwise to cancel the ISP's licence for the second lane.
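The regulator's gatekeeping rule could be sketched as a simple check. The function name, arguments and logic below are hypothetical; only the speed and latency thresholds come from the proposal itself:

```python
def eligible_for_special_lane(measured_mbps, latency_ms, developed_nation):
    """Hypothetical regulator check: an ISP may run the special lane only
    if its common lane delivers the prescribed baseline speed (4 Mbps in
    developed nations, 1 Mbps in developing nations) at under 125 ms
    average latency, per the two-lane proposal."""
    min_mbps = 4.0 if developed_nation else 1.0
    return measured_mbps >= min_mbps and latency_ms < 125

# An Indian ISP delivering 1.5 Mbps at 110 ms qualifies...
print(eligible_for_special_lane(1.5, 110, developed_nation=False))  # True
# ...but a US ISP delivering only 3 Mbps does not.
print(eligible_for_special_lane(3.0, 110, developed_nation=True))   # False
```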


Special internet:

This is a specialized, fast, reliable and secure internet for special services like telemedicine, teleoperation of remote devices, and driver-less cars. The internet speed is very fast, at least 10 Mbps, with average network latency below 100 milliseconds, anytime, to any customer or content provider, with selective network prioritization of data/video/voice at the router level and transparent use of peering, P2P and CDNs. Both users/customers and content/service/application providers can use this fast, prioritized data by paying more than for the common internet. Netflix, YouTube, Google, Facebook, Skype, WhatsApp, BitTorrent or any website/application/service can use the special internet on any ISP by paying more, provided that the ISP fulfils the conditions of the common internet. In other words, the special internet is built on the common internet. If the common internet has slow speed or discrimination, the special internet is legally disallowed.


Medical use of broadband:

While broadband alone cannot substitute for doctors, nurses and health care workers, the benefits of Internet applications in healthcare are potentially large. Appropriate mobile solutions can improve the quality of life for patients, increase efficiency of healthcare delivery models, and reduce costs for healthcare providers. It has been estimated that the use of telemedicine delivered by broadband could achieve cost savings of between 10% and 20%.


As you can see in the figure above, tele-medicine, tele-surgery and tele-imaging need time-sensitive, large-bandwidth transmission that is possible only on the special internet.


The availability of content is a factor that stimulates broadband investment. Revenues from broadband and mobile access depend on demand for web-based content and applications. This has been empirically shown by the PLuM study, which found that “the ability of consumers to access Internet content, applications and services is the reason consumers are willing to pay Internet access providers. Access providers are dependent on this demand to monetise their substantial investments.”  The special internet will ensure that time-sensitive content/applications/services are never delayed and that data packets of special services like telemedicine and driver-less cars never get stalled at peak congestion times. In return, ISPs would get ample profit from both consumers and content/service/application providers as a return on their investment and for further innovation.


The figure below is an overview of ‘Two Lane Internet’:


The job of the ISP is to provide the two-lane internet and charge differentially depending on which lane you use, not to force a choice on users. Consumers would be informed by ISPs about traffic management practices and the level of quality they can expect from their Internet service. A consumer may use both lanes: the common internet for common surfing and the special internet for videos or VoIP. A consumer may also use the common internet for all uses, including downloading videos, or the special internet for all uses. Let the consumer be the master of his/her destiny, rather than have that destiny scripted by ISPs or CSPs. However, special services like telemedicine, teleoperation of remote devices, and driver-less cars would work only on the special internet and would always get priority transmission over any other data on the special internet.



The moral of the story:


1. Net neutrality is the principle that Internet service providers (ISPs) should treat all data on the Internet equally, not discriminating or charging differentially by user, content, site, platform, application, type of attached equipment, or mode of communication. An ISP is said to operate under net neutrality if it provides its service strictly “by the book”: all packets of data on the internet are transported equally using best effort, without discrimination on the basis of content, user or design. In other words, net neutrality means the internet is free, open and fair. Violation of net neutrality is not synonymous with internet censorship. Internet censorship is the suppression or deletion of any data on the internet that may be considered objectionable, harmful or sensitive as determined by the censor; usually the censor is a government or a court of law.


2. Net neutrality does not mean there can be no discrimination at all among customers – a customer who is willing to pay for higher broadband speed gets that even today. Killing discrimination absolutely would mean killing competition among service providers. Net neutrality means that discrimination should not be unreasonable and arbitrary.


3. The net neutrality debate predominantly involves wired transmission (cable/fiber/DSL) in America, while in India it predominantly involves wireless transmission (3G/4G mobile broadband). Net neutrality rules affect wired and wireless transmission differently. A wired network has a large data-transmission capacity, while a wireless network has limited capacity due to scarce spectrum; and a wired connection's speed is near its maximum throughput, while a wireless connection's speed is much less than its maximum throughput due to various factors that reduce signal strength. Wired broadband networks have enough capacity to transmit voice and video packets uninterrupted, while the limited capacity of wireless broadband networks can cause packet loss, high latency and jitter in voice and video transmission, making voice conversation difficult and video quality poor, especially during network congestion. Because 87% of the population has Internet access in the United States, the domestic net neutrality debate was able to focus largely on the quality of Internet access. Because only 19% of the population has internet access in India, 83% of whom get it solely on mobile phones, the priority in India is internet access for a billion people rather than the quality of the internet.


4. Search neutrality is an indispensable part of net neutrality. You can circumvent biased search results by searching multiple engines sequentially and by not giving undue importance to the first page and top results of any search engine. I have been doing this for years to acquire information on the internet.


5. The increase in network traffic is a consequence of the ongoing transition of the Internet into a fundamental universal-access technology. The Internet has become a trillion-dollar industry and has grown from a mere network of networks into the market of markets. Much of the net neutrality debate is devoted to the question of whether the market for Internet access should be free or regulated.


6. The Internet without net neutrality would adversely affect start-ups, dissidents, the underprivileged, the oppressed, activists, small entrepreneurs, small companies, educators and poor people.


7. Eliminating net neutrality would lead to the Internet resembling the world of cable/satellite TV, so that access to and distribution of content would be managed by a handful of big companies. These companies would then control what is seen as well as how much it costs to see it.


8. The quality of websites and services, rather than deals with ISPs, should determine whether they succeed or fail. The majority of the great innovators in the history of the Internet started with little capital in their garages, inspired by great ideas. This was possible because the protections of net neutrality ensured limited control by the owners of the networks. Without net neutrality, the Internet will undergo a transformation from a market ruled by innovation to one ruled by deal-making.


9. To obtain the best possible speed for your Internet connection, it is not enough to have a high-bandwidth connection; it is also important that your latency is low, so that information reaches you quickly enough. This is especially true of satellite Internet connections, which can offer speeds of up to 15 Mbps but still feel slow due to high latency of around 500 milliseconds. On the other hand, you also need enough bandwidth, as low latency without enough bandwidth would still result in a very slow connection. Latency and bandwidth are independent of each other. The best internet connection ought to have high bandwidth and low latency.
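The interplay of latency and bandwidth can be made concrete with a back-of-the-envelope calculation. This is a deliberately rough model (one round trip of latency plus the raw transfer time, ignoring TCP slow start and parallel connections); the satellite figures are the ones quoted above, the cable figures are invented for contrast:

```python
def fetch_time_s(size_mb, bandwidth_mbps, latency_ms):
    """Rough model of fetching one object: a round trip of latency
    plus raw transfer time (size in megabytes, bandwidth in megabits/s)."""
    return latency_ms / 1000 + (size_mb * 8) / bandwidth_mbps

# A 0.5 MB web page: satellite (15 Mbps, 500 ms) vs cable (10 Mbps, 30 ms)
print(round(fetch_time_s(0.5, 15, 500), 2))  # 0.77 s -- latency dominates
print(round(fetch_time_s(0.5, 10, 30), 2))   # 0.43 s -- faster despite lower bandwidth
```

For small objects the lower-bandwidth, low-latency link wins; only for very large downloads does the satellite link's extra bandwidth pay off.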


10. Two human factors are responsible for provoking humans to choose one site over another besides obvious cost & quality factors:

a) Consumers are intolerant of slow-loading sites. Viewers start to abandon a video if it takes more than 2 seconds to start; if the video hasn't started in five seconds, about one-quarter of those viewers are gone, and if it hasn't started in 10 seconds, almost half are gone. Also, users with faster Internet connectivity (e.g., fiber-optic) abandon a slow-loading video at a faster rate than users with slower connectivity (e.g., cable or mobile).

b) Human audio-visual perception is another important factor. Conversations become difficult if words or syllables go missing or are delayed by more than a couple of tenths of a second; even twenty milliseconds of sudden silence can disturb a conversation. Human eyes can tolerate a bit more variation in video than ears can tolerate in voice. Voice and video packets must flow at the proper rate and in the proper sequence. The internet discards packets that arrive after a maximum delay, and it can request retransmission of missing packets. That's okay for web pages and downloads, but real-time conversations can't wait. Consonants are short and sharp, so losing a packet at the end of “can't” turns it into “can.” Severe congestion can cause whole sentences to vanish and make conversation impossible. Wired broadband networks generally have enough capacity to transmit voice and video and are therefore less affected than wireless mobile broadband.

ISPs have been using these two human factors to provoke consumers into changing sites. ISPs affect consumers' choice by reducing internet speed for a specific site, provoking them to view a competing site instead. ISPs also increase latency to make voice conversation over VoIP difficult, provoking consumers to use another mode of conversation.


11. Assuming all other factors are the same, broadband internet speed is directly proportional to investment in broadband infrastructure and inversely proportional to the number of users. That is why in India, whenever a new ISP is set up, people get fast speed; but after 3 months, speed falls as the number of users increases and the broadband infrastructure cannot cope with so many users.
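Point 11's proportionality amounts to a one-line model. Equal sharing of a fixed link is a deliberate simplification (real networks multiplex statistically, so not every user is active at once), and the numbers are invented for illustration:

```python
def per_user_mbps(total_capacity_mbps, active_users):
    """Naive equal-sharing model: per-user speed is link capacity
    divided by the number of simultaneously active users."""
    return total_capacity_mbps / active_users

# A new ISP with a 1 Gbps link and 100 users feels fast...
print(per_user_mbps(1000, 100))    # 10.0 Mbps each
# ...three months later, 2,000 users share the same unexpanded link.
print(per_user_mbps(1000, 2000))   # 0.5 Mbps each
```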


12. It is known within the telecommunications industry that companies do not make any money off the last-mile connection, because the initial investment in the last mile is so expensive and the fees they can charge their customers are so competitive. As a result, technology lags in the last mile, since ISPs cannot make money off it, and the incumbent ISPs simply do not want to invest in upgrading their network infrastructure. The counter view is that ISPs deliberately create physical limits: instead of increasing capacity, they deliberately keep it scarce by under-investing in broadband infrastructure so they can charge for preferential access to a scarce resource.


13. The availability of good content is a factor that stimulates broadband investment. The more good content that content providers make available, the more consumers will demand access to sites and apps, and the more ISPs will invest in the infrastructure to facilitate delivery.


14. The average bandwidth cost to an ISP varies from $30,000 per Gbps per month in Europe and North America to $90,000 in certain parts of Asia and Latin America. Therefore, to control their bandwidth costs, ISPs are deploying a variety of ad-hoc traffic-shaping policies that specifically target bulk transfers, because these consume the vast majority of bytes. Examples of bulk content transfers include downloads of music and movie files, distribution of large software and games, online backups of personal and commercial data, and sharing of huge scientific data repositories. Increasingly, economic rather than physical constraints limit the performance of many Internet paths.
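Traffic shaping of the kind mentioned here is commonly implemented with a token bucket: tokens refill at a sustained rate, short bursts pass untouched, and long bulk flows are held back once the burst allowance is spent. The sketch below is a minimal single-flow version with invented rates; real shapers run per-flow in the kernel or on router hardware:

```python
class TokenBucket:
    """Minimal token-bucket shaper. Tokens (in kilobits) refill at
    `rate_kbps`; a packet is forwarded only if enough tokens remain,
    so short bursts pass while sustained bulk transfers are limited
    to the configured rate."""
    def __init__(self, rate_kbps, burst_kb):
        self.rate = rate_kbps
        self.capacity = burst_kb
        self.tokens = burst_kb   # start with a full burst allowance
        self.last = 0.0

    def allow(self, packet_kb, now_s):
        # refill tokens for the time elapsed since the last packet
        self.tokens = min(self.capacity,
                          self.tokens + (now_s - self.last) * self.rate)
        self.last = now_s
        if self.tokens >= packet_kb:
            self.tokens -= packet_kb
            return True
        return False

bucket = TokenBucket(rate_kbps=100, burst_kb=50)
print(bucket.allow(40, 0.0))   # True: the burst allowance absorbs it
print(bucket.allow(40, 0.1))   # False: only ~20 kb of tokens have refilled
```

A shaper like this explains the asymmetry in the text: email and web pages almost never hit the limit, while bulk downloads hit it constantly.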


15. The hardware and software that run the Internet treat every byte of data equally. All internet transmissions are fragmented into data packets that are routed through the network autonomously (the end-to-end principle) and as fast as possible (the best-effort principle). Internet packets generally travel the path of least resistance while travelling from one computer to another. However, there is a desire for reliable transmission of information that is time-critical (low latency), or for which data packets should be received at a steady rate and in a particular order (low jitter). Voice communication, for example, requires both low latency and low jitter. So we have quality of service (QoS) at the router level, where voice transmission is prioritized over other data: voice, video and critical data applications are granted priority or preferential service from network devices so that the quality of these strategic applications does not degrade to the point of being unusable. This QoS traffic-management technology is implemented at the router level. There is a fine line between correctly applying traffic management to ensure a high quality of service and wrongly interfering with Internet traffic to limit applications that threaten the ISP's own lines of business. An alternative to complex QoS control mechanisms is to provide high-quality communication by generously over-provisioning a network, so that capacity is based on peak traffic-load estimates. Remember: the greater the broadband infrastructure and capacity, the lesser the need for traffic control and management.
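The trade-off at the end of point 15, between QoS prioritization and over-provisioning, can be sketched as a capacity check. The function name and the 1.25 headroom factor are assumptions for illustration, not standard engineering values:

```python
def needs_qos(peak_load_mbps, capacity_mbps, headroom=1.25):
    """Sketch of the over-provisioning trade-off: if capacity comfortably
    exceeds the estimated peak load (times an assumed safety headroom),
    plain best-effort delivery is fine; otherwise the router must
    prioritize voice/video over other traffic."""
    return capacity_mbps < peak_load_mbps * headroom

print(needs_qos(peak_load_mbps=800, capacity_mbps=1000))  # False: over-provisioned
print(needs_qos(peak_load_mbps=900, capacity_mbps=1000))  # True: must manage traffic
```

This is the point of the closing remark: the larger the capacity relative to peak load, the less often prioritization has to decide whose packets wait.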


16. Fast lanes (paid peering, CDNs and paid prioritization), slow lanes, increased latency, zero rating, blocking, redirection, degrading quality of service, weakening competition, and unwillingness to upgrade the network (overselling service) are ways by which last-mile ISPs generate profits, and all of them are against net neutrality. It's all about money and greed. Net neutrality places restrictions on potentially revenue-generating functionality of the ISP. ISPs can do all these things because of their monopoly over the last-mile connection. End-users can be left in a restricted, low-quality slow lane, or a fast lane with fewer destinations to reach, without even knowing it, as there is an absolute lack of transparency by ISPs. Most consumers do not know anything about traffic management practices or the level of quality they can expect from their Internet service. ISPs also give preferential treatment to individual speed-test sites, so when you test your internet speed, it will read higher than the actual speed.


17. Moving large data like movies and music videos requires larger and faster Internet “pipes” (more expensive pipes) than moving emails and simpler web pages. On top of that, this large data is time-sensitive, needs low latency and is therefore prioritized. As opposed to text files, video streaming requires more resources and can potentially slow down the process for everyone else. There is merit in the argument that all data is not the same. Additionally, different types of data are obtained at different prices and therefore cannot all be sold at the same rates. Peer-to-peer (P2P) file sharing and high-quality streaming video require high bandwidth for extended periods and can cause a service to become oversubscribed, resulting in congestion and poor performance. If a resource has a capacity constraint, there may be a point at which a single user's consumption negatively affects another's. This is what happens when one consumer uses too much bandwidth to download video or use a P2P service: it affects other paying customers who then cannot even send their 10 KB emails. One out of every two bytes of data traveling across the Internet is streaming video from Netflix or YouTube. If ISPs start giving further preferential treatment to the biggest players, would there be any bandwidth left for independent video producers, upstart social media sites, bloggers and podcasters? A higher price should be charged for traffic that consumes more of a limited resource, or that requires superior quality of service and adversely affects a neighbour's traffic.


18. Please do not confuse peering with peer-to-peer (P2P) file transfer. Peering is a direct connection between an ISP and a content provider (e.g. Google) bypassing the internet backbone, while peer-to-peer is the sharing of files between client computers rather than downloading a file from the content provider. With peering, you get the file from the content provider at faster speed; with P2P, you get the file from another user's computer at faster speed. Peering is a violation of net neutrality by the ISP, while P2P is a violation of net neutrality by consumers.


19. Arguments about net neutrality shouldn't be used to prevent the most disadvantaged people in society from gaining access to the internet in India. Eliminating zero-rating programs that bring more people online won't increase social inclusion or close the digital divide. Only 7% of the data used by subscribers came through the initiative's free, zero-rated offerings; other paid services accounted for the remaining 93%. This shows that zero rating only provides initial internet access to customers, and later on it almost becomes a paid service. Studies have shown that internet access reduces poverty and creates jobs. Even if some zero-rating programs might create some barriers to market entry for new start-ups, the access could help small business owners and farmers tap into a larger market for their goods, and can bring basic education and information to rural areas. On the other hand, for poor people using zero rating, the internet means Google and Facebook, making it awfully hard for any competitor to arise. Creating preferential access to further social causes and service penetration is one thing; using it to create commercial monopolies and business cartels is quite another.


20. Almost all ISPs are built on a multipurpose model, earning revenue from most customers for all three services: phone plus SMS, data, and video. When many customers eliminate one of those services (phone plus SMS) and increase their usage of data by using OTT services like WhatsApp and Skype to make up for it, the service provider faces a double whammy. The revenue earned by telecom operators for one minute of traditional voice call is 0.50 rupees on average, compared to data revenue of around 0.04 rupees for one minute of VoIP call usage, which is 12.5 times less than a traditional voice call. This clearly indicates that the substitution of voice with data is bound to adversely impact the revenues of telecom operators, and consequently both their infrastructure spending and the prices consumers pay. Since OTTs consume resources on ISPs' infrastructure and also hurt ISPs' business interests, it would be unfair to invoke net neutrality by saying all data are the same. The so-called free riders counter that users already pay for content and applications, which allows ISPs to profit from their investment in networks; but this argument appears hollow, as the profit margins of U.S. broadband providers are generally one-sixth to one-eighth of those of companies that use broadband (such as Apple or Google).
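The revenue arithmetic above can be checked directly. The per-minute figures are the ones quoted in the text; the 20%-migration scenario at the end is an invented illustration of the "double whammy":

```python
voice_rev_per_min = 0.50      # rupees per minute, traditional voice (from the text)
voip_data_rev_per_min = 0.04  # rupees per minute, data revenue for VoIP (from the text)

ratio = voice_rev_per_min / voip_data_rev_per_min
print(ratio)  # 12.5 -- each minute shifted to VoIP earns the operator 12.5x less

# Hypothetical: if 200 of every 1,000 voice minutes migrate to VoIP,
# operator revenue per 1,000 minutes falls sharply.
before = 1000 * voice_rev_per_min
after = 800 * voice_rev_per_min + 200 * voip_data_rev_per_min
print(before, after)  # 500.0 408.0
```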


21. I am of the view that we should keep government miles away from net neutrality, because the internet has worked well without government meddling, and governments and corporations are always hand in glove with each other.


22. In my view, the best way to maintain net neutrality is to increase the number of ISPs, to increase competition among them, with each one having large capacity to carry internet traffic. Informed consumers could then choose among offers from different providers, picking the price, quality of service, and range of applications and content that suit their particular needs.


23. Technically and commercially, net neutrality does not exist even today, as discussed above: routers prioritize packets by time sensitivity (network management, then voice, then video, then other traffic); two-sided pricing linked to QoS is already in place, with ISPs collecting revenues from consumers as well as content/service/application providers; paid peering, paid prioritization and CDNs give large companies like Google, Netflix and Facebook faster and better access than my website; and mobile ISPs lock consumers into “walled gardens” of favoured content. We are living in a world where the net is not neutral anyway, technically and commercially.


24. I propose that the internet evolve into a two-lane internet running on the same infrastructure: a first-lane common internet and a second-lane special internet, both high-speed and low-latency. The common internet is for common people, treating all data equally, without discriminating or charging differentially by user, content, site, platform, application, type of attached equipment, or mode of communication; it would guarantee a baseline speed of at least 4 Mbps in developed nations and 1 Mbps in developing nations for all users, with average network latency below 125 milliseconds, and consumers would pay a flat rate depending on the quantum of data used. ISPs can run the second-lane special internet only if they fulfil the criteria of the common internet. The special internet is a specialized, fast, reliable and secure internet for special services like telemedicine, teleoperation of remote devices, and driver-less cars: at least 10 Mbps with average network latency below 100 milliseconds, anytime, for any consumer or content provider, with selective prioritization of data/video/voice at the router level and transparent use of peering, P2P and CDNs. Both consumers and content/service/application providers can use this fast, prioritized lane by paying more than for the common internet. The job of the ISP is to provide the two-lane internet and charge differentially depending on which lane you use, not to force a choice on users; consumers would be informed about traffic management practices and the level of quality they can expect. A consumer may use both lanes, or either lane for all uses. Let the consumer be the master of his/her destiny, rather than have that destiny scripted by ISPs or CSPs. However, special services like telemedicine, teleoperation of remote devices, and driver-less cars would work only on the special internet and would always get priority transmission over any other data on that lane.



Dr. Rajiv Desai. MD.

June 15, 2015



I am grateful to the internet for my survival, as governments and media have done everything to degrade me. Whether it is the ISPs, the content providers Google, Facebook and Netflix, the OTT service providers, or the internet users, we all belong to the internet family. The net neutrality issue ought to be resolved within the family, without meddling by governments and courts.



May 8th, 2015



The figure above shows a victim of smallpox.



“You let a doctor take a dainty, helpless baby, and put that stuff from a cow, which has been scratched and had dirt rubbed into her wound, into that child. Even, the Jennerians now admit that infant vaccination spreads disease among children. More mites die from vaccination than from the disease they are supposed to be inoculated against.” –George Bernard Shaw, 1929. The world has come a long way since George Bernard Shaw fulminated against vaccination in the 1920s. Smallpox was declared eradicated from the world in 1980, largely due to the smallpox vaccine. In 2008, Barack Obama called the science on vaccines “inconclusive”; by 2015, the same Barack Obama called it “indisputable”. Vaccination was voted by readers of the British Medical Journal in 2007 as one of the four most important developments in medicine of the past 150 years, alongside sanitation, antibiotics and anaesthesia. Vaccination currently saves an estimated three million lives per year throughout the world, and so topped the list in terms of lives saved, making it one of the most cost-effective health interventions available. Vaccines are widely recognized as one of the greatest public health successes of the last century, significantly reducing morbidity and mortality from a variety of bacteria and viruses. Diseases that once caused many outbreaks and were common causes of loss of health and life are now rarely seen, because they have been prevented by vaccines. However, vaccines can, in rare cases, themselves cause illness. A rare potential for harm can loom large when people no longer experience or fear the targeted disease; in this regard, public opinion of vaccines can be a victim of their success. The fact that vaccines are administered to healthy people to prevent diseases which have become rare, largely thanks to vaccination, contributes to concerns about vaccine safety.
Because the devastating effects of the diseases are no longer so prominent, public attention focuses on side effects of vaccination, and this influences how a person weighs up the risks and benefits of vaccination. Vaccine opponents have questioned the effectiveness, safety and necessity of all recommended vaccines. Most of the arguments against vaccination appeal to parents' understandable, deep-seated concerns for the health of their children, particularly very young babies. These arguments have reduced vaccination rates in certain communities, resulting in outbreaks of preventable and fatal childhood illnesses. Is vaccine really safe? Is vaccine really effective? What would happen if I don't vaccinate my child? I attempt to answer these questions by analysing both sides of the vaccine story.



This article is about the scientific rationale for vaccination amid the anti-vaccine movement, not about individual vaccines; a detailed discussion of the production, administration, efficacy and safety of individual vaccines is therefore beyond its scope. However, individual vaccines are discussed wherever necessary.


Abbreviations and synonyms:

DT= diphtheria toxoid

GBS = Guillain-Barré syndrome

HPV = human papillomavirus

MMR = measles, mumps, and rubella

TD = tetanus and diphtheria toxoids = Td

TDaP = tetanus, diphtheria toxoids and acellular pertussis  = DTaP

TDwP = tetanus, diphtheria toxoids and whole cell pertussis = DTwP

TT = tetanus toxoid

Hib = Haemophilus influenzae type b

HepB = hepatitis B

IPV = inactivated polio vaccine

OPV = oral polio vaccine

AEFI = adverse event following immunization

MS = multiple sclerosis

PCV = pneumococcal conjugate vaccine

PPV = pneumococcal polysaccharide vaccine

WHO = World Health Organization

UNICEF = United Nations Children’s Fund

CDC = Centers for Disease Control and Prevention (U.S.)

GAVI = Global Alliance for Vaccines and Immunization

GIVS = Global Immunization Vision and Strategy

GVAP = Global Vaccine Action Plan

CD = cluster of differentiation

APC = antigen presenting cell

DC = dendritic cell


Edward Jenner and history of vaccination:

As long ago as 429 BC, the Greek historian Thucydides observed that those who survived the plague in Athens did not become re-infected with the disease. Based on this observation, the authorities in Athens used survivors of previous epidemics to nurse sufferers when the same diseases re-emerged. The Chinese were the first to discover and use a primitive form of vaccination, called variolation. It was carried out as early as the 10th century, and particularly between the 14th and 17th centuries. The aim was to prevent smallpox by exposing healthy people to tissue from the scabs caused by the disease, either by putting it under the skin or, more often, by inserting powdered scabs from smallpox pustules up the nose. These crude initial attempts at immunization led to further experimentation by Lady Mary Wortley Montagu in 1718 and Edward Jenner in 1798.



The word “vaccine” comes from the Latin word vaccinus, which means “pertaining to cows.” Vacca is Latin for cow. What do cows have to do with vaccines? The first vaccine was based on the relatively mild cowpox virus, which infected cows as well as people. This vaccine protected people against the related, but much more dangerous, smallpox virus.  More than 200 years ago, Edward Jenner, a country physician practicing in England, noticed that milkmaids rarely suffered from smallpox. The milkmaids often did get cowpox, a related but far less serious disease, and those who did never became ill with smallpox. In an experiment that laid the foundation for modern vaccines, Jenner took a few drops of fluid from a skin sore of a woman who had cowpox and injected the fluid into the arm of a healthy young boy who had never had cowpox or smallpox. Six weeks later, Jenner injected the boy with fluid from a smallpox sore, but the boy remained free of smallpox.  Dr. Jenner had discovered one of the fundamental principles of immunization. He had used a relatively harmless foreign substance to evoke an immune response that protected someone from an infectious disease. His discovery would ease the suffering of people around the world and eventually lead to the elimination of smallpox, a disease that killed a million people, mostly children, each year in Europe. These early endeavors have led to the plethora of vaccines that are available today. Although these attempts were successful in providing immunity, the underlying processes required to produce this immunity were unknown. By the beginning of the 20th century, vaccines were in use for diseases that had nothing to do with cows—rabies, diphtheria, typhoid fever, and plague—but the name stuck.


Louis Pasteur further developed the technique during the 19th century, extending its use to killed agents protecting against anthrax and rabies. The method Pasteur used entailed treating the agents for those diseases so they lost the ability to infect, whereas inoculation was the hopeful selection of a less virulent form of the disease, and Jenner’s vaccination entailed the substitution of a different and less dangerous disease for the one protected against. Pasteur adopted the name vaccine as a generic term in honor of Jenner’s discovery. Louis Pasteur’s experiments spearheaded the development of a live attenuated cholera vaccine and an inactivated anthrax vaccine in humans (1897 and 1904, respectively). A plague vaccine was also invented in the late 19th century. Between 1890 and 1950, bacterial vaccine development proliferated, including the Bacille Calmette-Guérin (BCG) vaccine, which is still in use today. In 1923, Alexander Glenny perfected a method to inactivate tetanus toxin with formaldehyde. The same method was used to develop a vaccine against diphtheria in 1926. Pertussis (1914), diphtheria (1926), and tetanus (1938) vaccines were combined in 1948 and given as the DTP vaccine. Viral tissue culture methods developed from 1950 to 1985 and led to the advent of the Salk (inactivated) polio vaccine and the Sabin (live attenuated oral) polio vaccine. Mass polio immunization has now eradicated the disease from many regions around the world. In 1963 the measles vaccine was developed, and by the late 1960s vaccines were also available to protect against mumps (1967) and rubella (1969). These three vaccines were combined into the MMR vaccine in 1971. Maurice Hilleman was the most prolific vaccine inventor, developing successful vaccines for measles, mumps, hepatitis A, hepatitis B, chickenpox, meningitis, pneumonia and Haemophilus influenzae. In modern times, the first vaccine-preventable disease targeted for eradication was smallpox. 
The World Health Organization (WHO) coordinated this global eradication effort. The last naturally occurring case of smallpox occurred in Somalia in 1977. The disease has since been eliminated from natural occurrence in the world, so the vaccine is no longer given. In 1988, the governing body of the WHO targeted polio for eradication by 2000. Although the target was missed, eradication is very close. The next disease likely to be targeted for eradication is measles, which has declined since the introduction of measles vaccination in 1963. In 2000, the Global Alliance for Vaccines and Immunization (GAVI) was established to strengthen routine vaccinations and introduce new and under-used vaccines in countries with a per capita GDP of under US$1000. GAVI is now entering its second phase of funding, which extends through 2015. The past two decades have seen molecular genetics, and the insights it has yielded in immunology, microbiology and genomics, applied to vaccinology. Current successes include the development of recombinant hepatitis B vaccines, the less reactogenic acellular pertussis vaccine, and new techniques for seasonal influenza vaccine manufacture. Molecular genetics sets the scene for a bright future for vaccinology, including the development of new vaccine delivery systems (e.g. DNA vaccines, viral vectors, plant vaccines and topical formulations), new adjuvants, more effective tuberculosis vaccines, and vaccines against cytomegalovirus (CMV), herpes simplex virus (HSV), respiratory syncytial virus (RSV), staphylococcal disease, streptococcal disease, pandemic influenza, shigella, HIV, malaria and schistosomiasis, among others. Therapeutic vaccines may also soon be available for cancer, allergies, autoimmune diseases and addictions.




Vaccine definition:

A vaccine is an antigenic substance prepared from the causative agent of a disease, or a synthetic substitute, used to provide immunity against one or several diseases. More precisely, a vaccine is a biological preparation that provides active acquired immunity to a particular disease. A vaccine typically contains an agent that resembles a disease-causing microorganism and is often made from weakened or killed forms of the microbe, its toxins or one of its surface proteins. The agent stimulates the body’s immune system to recognize it as a threat, destroy it, and keep a record of it, so that the immune system can more easily recognize and destroy any of these microorganisms it later encounters. The administration of vaccines is called vaccination. The effectiveness of vaccination has been widely studied and verified; examples include the polio, HPV, and chickenpox vaccines. Vaccination is the most effective method of preventing infectious diseases; widespread immunity due to vaccination is largely responsible for the worldwide eradication of smallpox and the elimination of diseases such as polio, measles, and tetanus from much of the world. The World Health Organization (WHO) reports that licensed vaccines are currently available to prevent, or contribute to the prevention and control of, twenty-five infections. Vaccines can be prophylactic (preventing or ameliorating the effects of a future infection by a natural or “wild” pathogen) or therapeutic (vaccines against cancer, for example, are being investigated). Many believe vaccines are among the greatest achievements of modern medicine: in industrial nations they have eliminated naturally occurring cases of smallpox and nearly eliminated polio, while other diseases, such as typhus, rotavirus, and hepatitis A and B, are well controlled. 
Conventional vaccines, however, cover only a small number of diseases, and infections that lack effective vaccines, notably AIDS, hepatitis C and malaria, kill millions of people every year.

List of Vaccine-Preventable Diseases (2009):

Vaccines are available for all of the following vaccine-preventable diseases (unless otherwise noted):


•Cervical Cancer (Human Papillomavirus)

•Hepatitis A

•Hepatitis B

•Haemophilus influenzae type b (Hib)

•Human Papillomavirus (HPV)

•Influenza (Flu)

•Japanese Encephalitis (JE)

•Lyme Disease

•Monkeypox (there is no monkeypox vaccine; the smallpox vaccine is used for this disease)

•Pneumococcal

•Shingles (Herpes Zoster)

•Tuberculosis (TB)

•Varicella (Chickenpox)

•Yellow Fever



Global Considerations:

Protecting health is a major priority of society, families, and individual parents. Over the past 100 years there has been a revolution in the ability to protect health in the developed world, where there are resources to enable this to happen. In 1900, among every 1,000 babies born in the United States, 100 would die before their first birthday, and five before 5 years of age. By 2007, fewer than seven were expected to die before their first birthday, and only 0.29 per 1,000 before 5 years of age. Diseases severe enough to kill children and adults can also leave survivors disabled in some way, and as mortality has fallen, so has the chance of severe disability from these diseases. Among the dangers for children and adults that have greatly diminished over the past century are infectious diseases. For bacterial diseases, antibiotics have been developed to treat infections before permanent harm can occur. For many viral and bacterial diseases, vaccines now exist. In the early 20th century, smallpox (which has 30 percent mortality and a very high rate of disfigurement, along with less common sequelae including blindness and encephalopathy) and rabies (virtually 100 percent fatal) could be prevented with immunization. With the fast-growing understanding of microbes and immunity from 1920 onward, the development of immunizations became a race to “conquer” infectious disease. Beginning with the combined diphtheria, pertussis, and tetanus immunization during World War II, and most recently with immunization to prevent cervical cancer (the human papillomavirus vaccine), immunizations have changed our expectations for child and adult health. Infections are less of a terror, and children are expected to survive to adulthood.


Immunization is a proven tool for controlling and even eradicating disease. An immunization campaign carried out by the World Health Organization (WHO) from 1967 to 1977 eradicated smallpox. Eradication of poliomyelitis is within reach: since the Global Polio Eradication Initiative began in 1988, infections have fallen by 99%, and some five million people have escaped paralysis. Although international agencies such as the WHO, the United Nations Children’s Fund (UNICEF) and now the Global Alliance for Vaccines and Immunization (GAVI) provide extensive support for immunization activities, the success of an immunization program in any country depends more upon local realities and national policies. A successful immunization strategy for a country goes beyond vaccine coverage: self-reliance in vaccine production, an epidemiological database for infectious diseases and a disease surveillance system are also integral parts of the system. The WHO created the Expanded Program on Immunization (EPI) in 1974 as a means to continue the great success that had been achieved earlier with the eradication of smallpox. At that time fewer than 5 percent of the world’s children in the developing world were receiving immunizations. The six diseases chosen to be tackled under this new initiative were tuberculosis, diphtheria, tetanus, pertussis, polio, and measles. It was not until 1988 that the WHO recommended that yellow fever vaccine be added to the national immunization programs of countries with endemic disease (WHO and UNICEF 1996). Later, in 1992, the World Health Assembly recommended hepatitis B vaccination for all infants. Most recently, the WHO has recommended that Haemophilus influenzae type b (Hib) conjugate vaccines be implemented in national immunization programs unless epidemiological evidence exists of low disease burden, lack of benefit, or overwhelming obstacles to implementation (WHO 2006).


The World Health Organization (WHO) estimates that vaccination averts 2 to 3 million deaths per year (in all age groups), yet up to 1.5 million children still die each year from diseases which could have been prevented by vaccination; an estimated 29% of deaths of children under five years old in 2013 were vaccine-preventable. Global vaccination coverage—the proportion of the world’s children who receive recommended vaccines—has remained steady for the past few years. During 2013, about 84% (112 million) of infants worldwide received 3 doses of diphtheria-tetanus-pertussis (DTP) vaccine, protecting them against infectious diseases that can cause serious illness and disability or be fatal. By 2013, 129 countries had reached at least 90% coverage of DTP vaccine. In 2013, an estimated 21.8 million infants worldwide were not reached with routine immunization services, of whom nearly half live in 3 countries: India, Nigeria and Pakistan. Priority needs to be given to strengthening routine vaccination globally, especially in the countries that are home to the highest numbers of unvaccinated children. Particular efforts are needed to reach the underserved, especially those in remote areas, in deprived urban settings, in fragile states and in strife-torn regions. The American Red Cross, the WHO, the United Nations Foundation, the United Nations Children’s Fund (UNICEF), and the Centers for Disease Control and Prevention (CDC) are partners in the Measles Initiative, which targeted a 90% reduction in worldwide measles deaths from 2000 to 2010. During 2000–2008, global measles mortality declined by 78%—from an estimated 733,000 deaths in 2000 to 164,000 deaths in 2008. Rotary International, UNICEF, the CDC, and the WHO are leading partners in the global eradication of polio, an endeavor that reduced the annual number of paralytic polio cases from 350,000 in 1988 to <2000 in 2009. 
The GAVI Alliance and the Bill and Melinda Gates Foundation have brought substantial momentum to global efforts to reduce vaccine-preventable diseases, expanding on earlier efforts by the WHO, UNICEF, and governments in developed and developing countries.
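The headline reductions quoted above can be checked with simple arithmetic. A minimal sketch (the figures are the ones given in the text; the helper function is purely illustrative):

```python
# Sanity-check the percentage declines quoted in the text.

def percent_decline(before: float, after: float) -> float:
    """Percentage decline from `before` to `after`."""
    return (before - after) / before * 100

# Global measles deaths: ~733,000 (2000) -> ~164,000 (2008)
print(f"Measles mortality decline: {percent_decline(733_000, 164_000):.0f}%")  # -> 78%

# Annual paralytic polio cases: ~350,000 (1988) -> <2,000 (2009)
print(f"Polio case decline: {percent_decline(350_000, 2_000):.1f}%")  # -> 99.4%
```

Both results agree with the 78% and ~99% reductions stated above.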



In 2006, the World Health Organization and UNICEF created the Global Immunization Vision and Strategy (GIVS), a ten-year strategy with four main goals:

•to immunize more people against more diseases

•to introduce a range of newly available vaccines and technologies

•to integrate other critical health interventions with immunization

•to manage vaccination programs within the context of global interdependence

The Global Vaccine Action Plan (GVAP) was created by the World Health Organization and endorsed by the World Health Assembly in 2012. The plan, which runs from 2011 to 2020, is intended to “strengthen routine immunization to meet vaccination coverage targets; accelerate control of vaccine-preventable diseases with polio eradication as the first milestone; introduce new and improved vaccines and spur research and development for the next generation of vaccines and technologies”. These global actions have advanced vaccination. In a highly connected, globalized world, vaccine-preventable diseases have become part of a larger public health movement aimed at global herd immunity. The task forces and political campaigns erected to spread the availability and knowledge of vaccination are modern attempts to protect the world from vaccine-preventable diseases. The plan was the result of a global collaboration involving governments and elected officials, health professionals, academic institutions, vaccine manufacturers, nongovernmental organizations, and civil society organizations. If the global community meets the plan’s objectives, childhood mortality around the world will be reduced below the targets set by the United Nations Millennium Development Goals.
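The herd immunity mentioned above has a standard quantitative form that this article does not spell out: if a pathogen's basic reproduction number is R0, transmission dies out once more than 1 − 1/R0 of the population is immune. A minimal sketch, using commonly cited textbook R0 values as assumptions (they are not from this article):

```python
# Herd immunity threshold: the immune fraction above which, on average,
# each case infects fewer than one new person.  Standard formula: 1 - 1/R0.

def herd_immunity_threshold(r0: float) -> float:
    return 1 - 1 / r0

# R0 values below are commonly cited textbook estimates (assumptions).
assumed_r0 = {"measles": 15, "pertussis": 14, "polio": 6, "smallpox": 5}

for disease, r0 in assumed_r0.items():
    pct = herd_immunity_threshold(r0) * 100
    print(f"{disease:9s} R0~{r0:2d} -> ~{pct:.0f}% of the population must be immune")
```

This is why highly contagious diseases such as measles demand very high vaccination coverage (on the order of 93–95%), whereas smallpox eradication was achievable at lower coverage.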


World Immunization Week:

The last week of April each year is marked by WHO and partners as World Immunization Week. It aims to raise public awareness of how immunization saves lives, encouraging people everywhere to vaccinate themselves and their children against deadly diseases. In 2014, under the global slogan “Are you up-to-date?”, more than 180 countries, territories and areas marked the week with activities including vaccination campaigns, training workshops, round-table discussions and public information campaigns. This year’s campaign focuses on closing the immunization gap and reaching equity in immunization levels as outlined in the Global Vaccine Action Plan, which is a framework to prevent millions of deaths by 2020 through universal access to vaccines for people in all communities.



Immunology and vaccinology:

The basic concepts of immunology are an essential component of the foundations of modern vaccinology. To understand the immunology of vaccines, it is important first to examine the key players of the immune system and to understand how they are produced, activated and regulated. Immunology is the study of the structure and function of the immune system. Vaccinology is the science of vaccine development and how the immune system responds to vaccines, but also includes ongoing evaluation of immunization programs and vaccine safety and effectiveness, as well as surveillance of the epidemiology of vaccine-preventable diseases.


Human immune system:

The human immune system has evolved over millions of years from both invertebrate and vertebrate organisms to develop sophisticated defense mechanisms to protect the host from microbes and their virulence factors. The normal immune system has three key properties: a highly diverse repertoire of antigen receptors that enables recognition of a nearly infinite range of pathogens; immune memory to mount rapid recall immune responses; and immunologic tolerance to avoid immune damage to normal self-tissues. From invertebrates, humans have inherited the innate immune system, an ancient defense system that uses germ line–encoded proteins to recognize pathogens. Cells of the innate immune system, such as macrophages, dendritic cells, and natural killer (NK) lymphocytes, recognize pathogen-associated molecular patterns (PAMPs) that are highly conserved among many microbes and use a diverse set of pattern recognition receptor molecules (PRRs). Important components of the recognition of microbes by the innate immune system include (1) recognition by germ line–encoded host molecules, (2) recognition of key microbe virulence factors but not recognition of self-molecules, and (3) nonrecognition of benign foreign molecules or microbes. Upon contact with pathogens, macrophages and NK cells may kill pathogens directly or, in concert with dendritic cells, may activate a series of events that both slow the infection and recruit the more recently evolved arm of the human immune system, the adaptive immune system. Adaptive immunity is found only in vertebrates and is based on the generation of antigen receptors on T and B lymphocytes by gene rearrangements, such that individual T or B cells express unique antigen receptors on their surface capable of specifically recognizing diverse antigens of the myriad infectious agents in the environment. 
Coupled with finely tuned specific recognition mechanisms that maintain tolerance (nonreactivity) to self-antigens, T and B lymphocytes bring both specificity and immune memory to vertebrate host defenses.


The immune system can be divided into two main subsystems, the innate/general resistance system and the adaptive system. Both the innate system and the adaptive system continually interact with each other to provide an effective immune response.


The figure above shows key players of the immune system. The innate and adaptive immune systems are populated by many different cells that vary in their roles and responsibilities.


Innate and adaptive immunity:

All organisms have some form of innate protection against the outside world, which may be as simple as a cell wall or waxy coating. The innate immune system acts as a first line of defense which comprises both cellular and non-cellular effectors. This system provides early containment and defense during the lag time before adaptive immune effectors are available. Innate immunity comprises both soluble (e.g. complement, lysozyme) and cellular effectors (e.g. natural killer [NK] cells, macrophages and dendritic cells [DCs]). The innate and adaptive immune systems are principally bridged by the action of specialised APCs (antigen presenting cells), which translate and transfer information from the body tissues and innate immune system to the adaptive immune system, allowing a systemic response to a localised threat. The innate immune system therefore drives and shapes the development of adaptive immune responses via chemical and molecular signals delivered by APCs to induce the most appropriate type of adaptive response. The adaptive immune system forms the second, antigen-specific line of defense, which is activated and expanded in response to these signals.


The innate immune system:

Major Components of the Innate Immune System:

Pattern recognition receptors (PRRs): C-type lectins, leucine-rich proteins, scavenger receptors, pentraxins, lipid transferases, integrins, inflammasome proteins

Antimicrobial peptides: α-defensins, β-defensins, cathelin, protegrin, granulysin, histatin, secretory leukoprotease inhibitor, and probiotics

Cells: Macrophages, dendritic cells, NK cells, NK-T cells, neutrophils, eosinophils, mast cells, basophils, and epithelial cells

Complement components: Classic and alternative complement pathways, and proteins that bind complement components

Cytokines: Autocrine, paracrine, and endocrine cytokines that mediate host defense and inflammation, as well as recruit, direct, and regulate adaptive immune responses


Cells of innate immune system:

Cells of the innate immune system are produced in the bone marrow and then migrate to different anatomical locations. The innate immune cell repertoire includes tissue-resident cells such as macrophages and immature DCs, and cells which circulate via blood and the lymphatic system, such as monocytes, neutrophils, eosinophils, NK cells and innate T cells. Non-immune system cells at vulnerable locations, including keratinocytes and other epithelial and mucus-producing cells, fibroblasts and endothelial cells, can also exhibit innate defensive behaviours.


The innate immune system, or general resistance, includes a variety of protective measures which function continually and provide a first line of defense against pathogenic agents. However, these responses are not specific to a particular pathogenic agent. Instead, the innate immune cells are specific for conserved molecular patterns found on all microorganisms. This prevents the innate immune system from inadvertently recognizing host cells and attacking them. It also means, however, that innate immune responses do not improve with repeated exposure to the same pathogenic agent; in other words, the innate immune system does not have memory. The protective defenses of the innate immune system begin with anatomic barriers such as intact skin and mucous membranes, which prevent the entrance of many microorganisms and toxic agents. The skin also has an acidic environment of pH 3-5, which retards the growth of microorganisms. In addition, the normal microorganisms, or flora, which inhabit the skin and mucous membranes compete with other microorganisms for nutrients and attachment sites. Further, the mucus and cilia on the mucous membranes aid in trapping microorganisms and propelling them out of the body. Next, the innate immune system includes such physiologic barriers as the normal body temperature, fever, gastric acidity, lysozyme, interferon, and collectins. The normal body temperature range inhibits a variety of microorganisms, and the development of a fever can further inhibit many of these pathogenic organisms. The gastric acidity of the stomach is also quite effective in eliminating many ingested microorganisms. Lysozyme, a hydrolytic enzyme found in tears and mucous secretions, can cleave the peptidoglycan layer of the bacterial cell wall, thus lysing the microorganism. 
Interferons, a group of proteins produced by virally infected cells, can bind to noninfected cells and induce a generalized antiviral state. Collectins are surfactant proteins present in serum, lung secretions, and on mucosal surfaces. They can kill certain pathogenic microorganisms directly, by disrupting their lipid membranes, or indirectly, by clumping microorganisms to enhance their susceptibility to phagocytosis.


Innate immunity:

◦does not depend upon previous exposure to the pathogen

◦does not produce immunologic memory

◦does not improve with repeated exposure to the pathogen.


The complement pathways are also part of the defensive measures of the innate immune system. The complement system consists of approximately 25 proteins that work together to ‘complement’ the action of the adaptive immune response in destroying bacteria. Complement proteins circulate in the blood in an inactive form. Once activated, complement components serve several effector roles, including the recruitment of phagocytes, the opsonization of pathogens to promote phagocytosis, the removal of antibody–antigen complexes and the lysis of antibody-coated cells. There are three complement pathways. The classical pathway is triggered when IgM antibodies or certain IgG antibody subclasses bind surface markers/antigens on microorganisms. The alternative, or properdin, pathway is triggered by the deposition of the complement protein C3b onto microbial surfaces and does not require antibodies for activation. The third pathway, the lectin pathway, is triggered by the attachment of plasma mannose-binding lectin (MBL) to microbes and likewise does not require antibodies for activation. These three pathways merge into a common pathway which leads to the formation of the membrane attack complex, which can form pores in the membrane of targeted cells. The complement pathways are also integral to the opsonization (increased susceptibility to phagocytosis) of particulate antigens and to triggering a localized inflammatory response.


The inflammatory response is another essential part of the innate immune response. The inflammatory response is the body’s reaction to invasion by an infectious agent, antigenic challenge, or any type of physical damage. The inflammatory response allows products of immune system into area of infection or damage and is characterized by the cardinal signs of redness, heat, pain, swelling, and loss of function.


In addition to the anatomic and physiologic mechanisms, pattern recognition receptors, or PRRs, also contribute to the innate immune response. Pattern recognition receptors are not specific for any given pathogen or antigen, but can provide a rapid response to antigens. PRRs may be membrane-bound (such as the toll-like receptors), cytosolic (such as NOD proteins) or secreted (such as MBL and C-reactive protein), and they are expressed by the cells of the innate immune system. Although there are several hundred varieties, all the genes of the PRRs are encoded in the germline, ensuring limited variability in their molecular structures. Examples of PRRs include MBL, pulmonary surfactant protein, C-reactive protein, toll-like receptors (TLRs), C-type lectins, NOD, and MX. The PRRs recognize PAMPs, or pathogen-associated molecular patterns, which can trigger cytokine release. Examples of PAMPs include LPS (endotoxin), peptidoglycan (cell walls), lipoproteins (bacterial capsules), hypomethylated CpG DNA (found in bacteria and parasites), double-stranded RNA (viruses), and flagellin (bacterial flagella). These molecules are produced by microbial cells and not by human cells. Recognition of PAMPs by PRRs leads to complement activation, opsonization, cytokine release, and phagocyte activation. Finally, the mononuclear phagocytes and granulocytic cells are also important to the innate response and help link it to the adaptive immune response. Mononuclear phagocytes include monocytes, which circulate in the blood, and macrophages, which reside in the tissues. Monocytes and macrophages are highly important in antigen presentation, phagocytosis, cytokine production, and antimicrobial and cytotoxic activities. Once mature, monocytes circulate in the blood for approximately 8 h, then migrate into the tissues and differentiate into specific tissue macrophages or into dendritic cells. 
There are several types of dendritic cells which are involved in different aspects of immune functions. Many dendritic cells are important in presenting antigen to T-helper cells. However, follicular dendritic cells are found only in lymph follicles and are involved in the binding of antigen–antibody complexes in lymph nodes. Granulocytic cells include neutrophils, eosinophils, and basophils/mast cells. Neutrophils are highly active phagocytic cells and generally arrive first at a site of inflammation. Eosinophils are also phagocytic cells; however, they are more important in resistance to parasites. Basophils in the blood and mast cells in the tissues release histamine and other substances and are important in the development of allergies.


Effectors of the innate response:

Under some circumstances, pathogen clearance may be achieved by innate immune effectors without activation of an adaptive immune response. Activated innate cells act as phagocytes, engulfing and destroying the pathogen within intracellular vesicles containing digestive enzymes. To be efficient, this response requires both the recruitment and activation of phagocytes at the site of infection, a process often referred to as the inflammatory response. Cells residing in proximity to the infection site are activated upon recognition of PAMPs, and secrete a large array of soluble mediators, including chemokines and cytokines. Chemokines behave as chemoattractants, favouring the recruitment of innate immune cells to the site of infection, while cytokines (including tumour necrosis factor and interferons) act by increasing the phagocytic activity of cells. Innate immune cells also produce a series of soluble chemical factors (such as peptides) that are able to directly target the invading microbes. Additionally, antigens are taken up by innate cells, with immature DCs the most specialised among them. The antigen is subsequently processed and the DC differentiates into an APC. Antigen-carrying APCs then migrate to the draining lymph node and provide the link between the innate and adaptive immune responses.


The innate system may be able to eradicate the pathogenic agent without further assistance from the adaptive system; or, the innate system may stimulate the adaptive immune system to become involved in eradicating the pathogenic agent.


Adaptive immune system:

In contrast to the innate immune system, the actions of the adaptive immune system are specific to the particular pathogenic agent. This response takes longer to occur than the innate response. However, the adaptive immune system has memory, which means that it will respond more rapidly to a particular pathogen with each successive exposure. The adaptive immune response is composed of B-cells/antibodies and T-cells; these are the two arms of the adaptive immune system. The B-cells and antibodies compose humoral immunity, or antibody-mediated immunity, and the T-cells compose cell-mediated immunity. As a note, natural killer cells are from the same lymphocyte lineage as B-cells and T-cells; however, natural killer cells are only involved in innate immune responses.


Antigen and antibody:

An antigen is a substance that the body recognizes as foreign and that triggers immune responses. The terms immunogen and antigen are often used interchangeably. Antibodies are proteins that are produced in response to antigens introduced into the body.

Antibodies protect the body from disease by:

•binding to the surface of the antigen to block its biological activity (neutralization)

•binding or coating (opsonisation) of the antigen to make it more susceptible to destruction and clearance by phagocytes (phagocytosis)

•binding to special receptors on various immune cells, allowing them to recognise and respond to the antigen

•activation of the complement system to cause disintegration (lysis) of the pathogen and to enhance phagocytosis.


The first arm of the adaptive immune system, humoral immunity, functions against extracellular pathogenic agents and toxins. B-cells are produced in the bone marrow and then travel to the lymph nodes. Within the lymph nodes, naïve B-cells continue to mature and are exposed to pathogenic agents caught in that particular lymph node. Unlike T-cells, B-cells can recognize antigens in their native form, meaning that B-cells can recognize antigens without requiring that the antigen be processed by an antigen-presenting cell and then presented with T-helper cell involvement. Such antigens are called T-independent antigens because T-cell activation is not required to activate the B-cells. Examples of T-independent antigens include lipopolysaccharide, dextran, and bacterial polymeric flagellin. These antigens are typically large polymeric molecules with repeating antigenic determinants. They can also induce numerous B-cells to activate; however, both the immune response and the induction of memory are weaker than with T-helper cell activation. In contrast, activation of B-cells with T-helper cell involvement results in a much better immune response and more effective memory. This long-term, effective immune response is the type of reaction that is the goal of immunization. With the binding of the antigen to the Fab region of the B-cell receptor and secondary signaling from cytokines released by T-helper cells, B-cells begin somatic hypermutation at the Fab region, which further improves the fit between the Fab region and the antigen. This process then stimulates the B-cells to mature into plasma cells, which begin production of the particular antibody with the best fit to the antigen. From these stimulated B-cells, clones of B-cells with specificity for the particular antigen will arise.
These cells may become plasma cells, which produce antibodies, or memory cells, which remain in the lymph nodes to stimulate a new immune response to that particular antigen. This occurs during the primary immune response, when the immune system is first exposed to a particular antigen. This process of clonal selection and expansion takes several days and primarily involves the production of IgM, the first antibody produced during a primary immune response. As the immune response progresses, the activated plasma cells begin producing IgG specific to the particular antigen. Although IgM is produced first and is a much larger antibody, IgG is a better neutralizing antibody: IgG binds more effectively to the antigen and aids in opsonization. As a note, other antibody classes can also be produced by plasma cells, namely IgD, IgA, and IgE. IgD is primarily found as a receptor bound to the surfaces of mature B-cells; IgA is the antibody found in secretions such as mucus, saliva, tears, and breast milk; and IgE is the antibody involved in allergic reactions and parasitic infections. However, the most important antibody for vaccines is IgG. With the memory cells produced during the primary immune response, any subsequent exposure to the antigen results in a more rapid and effective secondary immune response, which is quicker, larger, and primarily composed of IgG.


As for the other arm of adaptive immunity, cell-mediated immunity functions primarily against intracellular pathogens. T-cells mature in the thymus and are then released into the bloodstream. There are two main types of T-cells: CD4 cells and CD8 cells. CD4 cells, or T-helper cells, have the CD4 co-receptor and only recognize the major histocompatibility complex (MHC) II protein. The MHC II protein is found on antigen-presenting immune cells and acts as a marker of immune cells. CD4 cells are essential for antibody-mediated immunity and for helping B-cells control extracellular pathogens. There are two subsets of CD4 cells: Th1 and Th2. Upon activation by cytokines, B cells differentiate into memory B cells (long-lived antigen-specific B cells) or plasma cells (effector B cells that secrete large quantities of antibodies). Most antigens activate B cells with the help of activated T helper (Th) cells, primarily Th1 and Th2 cells. Th1 cells secrete IFN-γ, which activates macrophages and induces the production of opsonizing antibodies by B cells. The Th1 response leads mainly to cell-mediated immunity (a cellular response), which protects against intracellular pathogens (invasive bacteria, protozoa and viruses). The Th1 response activates cytotoxic T lymphocytes (CTLs), a subgroup of T cells, which induce death of cells infected with viruses and other intracellular pathogens. Natural killer (NK) cells are also activated by the Th1 response; these cells play a major role in the induction of apoptosis in tumors and in cells infected by viruses. Th2 cells secrete cytokines, including IL-4, which induces B cells to make neutralizing antibodies. Th2 cells generally induce a humoral (antibody) response, critical in the defense against extracellular pathogens (helminths, extracellular microbes and toxins). CD8 cells, or T-cytotoxic cells, have the CD8 co-receptor and only recognize the major histocompatibility complex (MHC) I protein.
The MHC I protein is found on all nucleated body cells (except mature erythrocytes) and acts as a marker of body cells. CD8 cells are essential for cell-mediated immunity and for helping control intracellular pathogens. Unlike B-cells, T-cells can only recognize antigen that has been processed and presented by antigen-presenting cells. There are two types of antigen processing. The first involves attaching intracellular antigens, along with MHC I proteins, to the surface of antigen-presenting cells; this occurs with viral antigens and tumor cells. The other involves attaching extracellular antigens, along with MHC II proteins, to the surface of antigen-presenting cells; this occurs with bacterial and parasitic antigens. Once the T-cell has been activated by the antigen-presenting cell, it begins to carry out its functions, depending on whether it is a CD4 cell or a CD8 cell. As with B-cells, activated T-cells also undergo clonal expansion, which produces additional effector T-cells for the current infection and memory T-cells for future infections with the same antigen.


Adaptive immunity is the body’s second level of defense, which develops as a result of infection with a pathogen or following immunization. It defends against a specific pathogen and takes several days to become protective. Adaptive immunity:

◦has the capacity for immunologic memory

◦provides long-term immunity which may persist for a lifetime but may wane over time

◦increases in strength and effectiveness each time it encounters a specific pathogen or antigen.


The figure above shows organs and tissues of the immune system. The innate immune system is formed from a combination of physical barriers (skin, mucus), chemical defenses (acids, antimicrobial peptides) and specialised cells capable of responding to pathogens without needing to recognise specific antigens (A). The adaptive immune system consists of a network of primary and secondary organs, where immune cells are either produced or reside until they become activated (B). The primary lymphoid organs (bone marrow and thymus) are where lymphocytes are generated, and the secondary lymphoid organs (peripheral lymph nodes, spleen, tonsils, Peyer’s patches) are where immune responses are initiated and regulated.


Summary of differences between the innate and adaptive immune systems:



An antigen-presenting cell (APC) is a cell that displays foreign antigens complexed with major histocompatibility complex (MHC) molecules on its surface; this process is known as antigen presentation. T-cells may recognize these complexes using their T-cell receptors (TCRs). APCs process antigens and present them to T-cells. T cells cannot recognize, and therefore cannot respond to, ‘free’ antigen; they can only ‘see’ an antigen that has been processed and presented by cells via carrier molecules such as MHC and CD1 molecules. Most cells in the body can present antigen to CD8+ T cells via MHC class I molecules and thus act as “APCs”; however, the term is often limited to specialized cells that can prime T cells (i.e., activate a T cell that has not previously been exposed to antigen, termed a naïve T cell). These cells, in general, express MHC class II as well as MHC class I molecules, and can stimulate CD4+ (“helper”) T cells as well as CD8+ (“cytotoxic”) T cells, respectively. APCs include dendritic cells (DCs), macrophages, and certain B cells and epithelial cells.


Innate and adaptive immune responses are bridged by the actions of APCs:

The innate immune system provides an essential link between the first encounter with a pathogen at the site of infection and the eventual establishment of immune memory. Innate cells (such as macrophages and DCs) are strategically located at body sites with a high risk of infection (such as epithelia and mucosal surfaces). These types of cells act as both a first line of defense against danger and as key messengers that are able to alert the adaptive immune system. Since naïve T and B cells are not pre-positioned in most organs and tissues of the body, they rely on the innate immune system to sense an infectious event. Among tissue-resident cells, the most efficient APCs are DCs. Immature DCs which have captured antigen become activated and mature into functional APCs, while migrating to the regional draining lymph node or submucosal lymphoid tissue. Mature DCs express high levels of antigen/MHC complexes at the cell surface and undergo morphological changes, which render them highly specialised, to activate naïve T cells. When they arrive in the lymph node, DCs present processed antigen and express co-stimulatory signals. The signals provided by DCs promote T-cell differentiation and proliferation, initiating the adaptive T cell-mediated immune response. APC activation is therefore a necessary prerequisite for an efficient adaptive immune response. DCs not only provide antigen and co-stimulation to naïve T cells, but also contribute to the initial commitment of naïve T helper cells into Th1, Th2 or other subsets. This directs the efficient induction of T helper cells with appropriate cytokine profiles early during infections, without the need for direct contact between antigen-specific T cells and pathogens. Undigested pathogen-derived antigens are also drained by the lymph and transported to the B cell-rich area of the lymph node, where they are exposed to BCR-expressing cells. 
An adaptive immune response is therefore initiated in a draining lymph node by the concerted action of innate immune cells and free antigens. These activate T and B lymphocytes, respectively, to proliferate and differentiate into effector and memory cells.


The table below shows cells of the Innate Immune System and their major roles in triggering Adaptive Immunity:

Cell Type | Major Role in Innate Immunity | Major Role in Adaptive Immunity
Macrophages | Phagocytose and kill bacteria; produce antimicrobial peptides; bind LPS; produce inflammatory cytokines | Produce IL-1 and TNF-α to upregulate lymphocyte adhesion molecules and chemokines to attract antigen-specific lymphocytes. Produce IL-12 to recruit TH1 T helper cell responses; upregulate co-stimulatory and MHC molecules to facilitate T and B lymphocyte recognition and activation. Macrophages and dendritic cells, after LPS signaling, upregulate the co-stimulatory molecules B7-1 (CD80) and B7-2 (CD86) that are required for activation of antigen-specific antipathogen T cells. There are also Toll-like proteins on B cells and dendritic cells that, after LPS ligation, induce CD80 and CD86 on these cells for T cell antigen presentation.
Plasmacytoid dendritic cells (DCs) of lymphoid lineage | Produce large amounts of interferon-α (IFN-α), which has antitumor and antiviral activity; are found in T cell zones of lymphoid organs and circulate in blood | IFN-α is a potent activator of macrophages and mature DCs to phagocytose invading pathogens and present pathogen antigens to T and B cells
Myeloid dendritic cells (two types: interstitial and Langerhans-derived) | Interstitial DCs are strong producers of IL-12 and IL-10; are located in T cell zones of lymphoid organs, circulate in blood, and are present in the interstices of the lung, heart, and kidney. Langerhans DCs are strong producers of IL-12; are located in T cell zones of lymph nodes, skin epithelia, and the thymic medulla; and circulate in blood | Interstitial DCs are potent activators of macrophages and mature DCs to phagocytose invading pathogens and present pathogen antigens to T and B cells
Natural killer (NK) cells | Kill foreign and host cells that have low levels of MHC + self-peptides. Express NK receptors that inhibit NK function in the presence of high expression of self-MHC | Produce TNF-α and IFN-γ, which recruit TH1 helper T cell responses
NK-T cells | Lymphocytes with both T cell and NK surface markers that recognize lipid antigens of intracellular bacteria such as Mycobacterium tuberculosis presented by CD1 molecules, and kill host cells infected with intracellular bacteria | Produce IL-4 to recruit TH2 helper T cell responses and IgG1 and IgE production
Neutrophils | Phagocytose and kill bacteria; produce antimicrobial peptides | Produce nitric oxide synthase and nitric oxide, which inhibit apoptosis in lymphocytes and can prolong adaptive immune responses
Eosinophils | Kill invading parasites | Produce IL-5, which recruits Ig-specific antibody responses
Mast cells and basophils | Release TNF-α, IL-6, and IFN-γ in response to a variety of bacterial PAMPs | Produce IL-4, which recruits TH2 helper T cell responses and recruits IgG1- and IgE-specific antibody responses
Epithelial cells | Produce antimicrobial peptides; tissue-specific epithelia produce mediators of local innate immunity (e.g., lung epithelial cells produce surfactant proteins, members of the collectin family, that bind and promote clearance of lung-invading microbes) | Produce TGF-β, which triggers IgA-specific antibody responses

LPS, lipopolysaccharide; PAMP, pathogen-associated molecular pattern; TNF-α, tumor necrosis factor-alpha; IFN-α and IFN-γ, interferon-alpha and interferon-gamma; TGF-β, transforming growth factor-beta; IL-4, IL-5, IL-6, IL-10, and IL-12, interleukins 4, 5, 6, 10, and 12, respectively.


Immunological memory:

Immunologic memory is the immune system’s ability to remember its experience with an infectious agent, leading to an effective and rapid immune response upon subsequent exposure to the same or similar infectious agents. Development of immunologic memory requires the participation of both B and T cells; memory B cell development is dependent on the presentation of antigens by T cells. Irrespective of the type of immune response required for protection, long-lasting protection (memory) is a desirable objective for almost all vaccines. However, while this is easy to state, it is less certain how it should be achieved, although a great deal has been learnt about immunological memory over the last two decades. During a primary immune response, lymphocytes proliferate and change their phenotype. Memory populations of cells are, therefore, both quantitatively and qualitatively different from those that have not yet encountered antigen. Thus, memory consists of expanded clones of lymphocytes with altered function. Among thymus-derived (T) lymphocytes, this is reflected in rapid production of effector cytokines such as IFN-γ or interleukins. Primed cells express higher levels of several adhesion molecules, such as ICAM-1 and integrins, as well as homing molecules such as CD44, CD62L and the cutaneous lymphocyte antigen (CLA). Among B-cells, the hallmark of immunological memory is the production of isotype-switched, somatically mutated, high-affinity immunoglobulin. It is also clear that memory is a dynamic state. In both humans and experimental animals, phenotypically defined memory cells have been shown to divide more rapidly than naïve cells. This appears to be an inherent property of memory cells, since division continues in the absence of antigen.


Constraints on the duration of memory:

In vitro at least, human T lymphocyte clones can undergo only a finite number of cell divisions and, as they approach senescence, no longer express the co-stimulatory molecule CD28, can no longer up-regulate telomerase on activation, and show progressive shortening of telomeres. These mechanisms may limit the duration of memory in the absence of re-exposure to antigen, which would recruit new clones. In addition to these constraints on the survival of individual clones, there is also the constraint of space in the memory pool. Although lymphocyte numbers may increase greatly during an acute infection, in the longer term the numbers of cells with naïve and memory phenotypes change only slowly. Thus, every time a new antigen is encountered and a new set of clones undergoes expansion and enters the memory pool, other cells must die to provide space. The factors that favour one cell or clone over another in this competition for survival are not known. However, experimental evidence suggests that memory persists longer if the initial clonal expansion is large. Alternatively, persistence of antigen may favour clonal survival, as occurs in chronic infections such as EBV or CMV. It is now clear that there is considerable heterogeneity among antigen-specific T-cell populations detected by binding to MHC–peptide tetramers, and it is thought that some memory cells may revert to a more slowly dividing state. This suggests two alternative strategies for ensuring persistence of memory: either vaccines should be designed to ensure maximal clonal expansion, by providing an optimal dose of antigen and an appropriate adjuvant, or vectors should be chosen to ensure long persistence of antigen.


The figure above shows the kinetics of primary and recall (memory) immune responses. On first exposure to a pathogen or antigen (referred to as ‘priming’ in vaccination), the innate immune system must detect, process and translate the threat into a form that can be understood by the adaptive immune system. This occurs via the bridging actions of APCs and takes days to weeks. Following resolution of the challenge, a specialised ‘memory’ cell population remains. The cells within this population are maintained for a long time (months to years) and may remain for the rest of the host’s life. On subsequent exposure to the same antigen (referred to as ‘boosting’ in vaccination), the innate immune response is triggered as before, but now the memory cell populations are able to mount a greater and more rapid response, as they do not need to undergo the same activation process as naïve cells. The adaptive response on secondary exposure leads to a rapid expansion and differentiation of memory T and B cells into effector cells, and the production of high levels of antibodies. Memory antibody responses are characterised by a higher proportion of IgG and other isotypes relative to IgM.
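The qualitative kinetics described here (shorter lag, larger starting pool, higher peak on recall) can be caricatured with a toy growth model. The parameters below (pool sizes, lag days, doubling times, saturation cap) are invented purely for illustration and carry no physiological meaning; the sketch only shows why a memory population responds faster and larger than a naïve one.

```python
def response_curve(days, start_pool, lag_days, doubling_days, cap):
    """Toy effector level: flat during the activation lag, then
    exponential doubling up to a saturation cap."""
    levels = []
    level = start_pool
    for d in range(days):
        if d >= lag_days:
            level = min(level * 2 ** (1 / doubling_days), cap)
        levels.append(level)
    return levels

# Hypothetical parameters: the recall (secondary) response starts from
# more precursor cells and has a shorter lag than the primary response.
primary   = response_curve(days=30, start_pool=1,  lag_days=7, doubling_days=1, cap=10_000)
secondary = response_curve(days=30, start_pool=50, lag_days=2, doubling_days=1, cap=10_000)

# At any early time point the recall response is both faster and larger.
print(primary[10], secondary[10])
```

The model omits everything real (cell death, affinity maturation, IgM-to-IgG switching), but it reproduces the shape of the curves in the figure: a delayed, modest primary response and a prompt, much larger secondary one.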


By definition, all effective vaccines lead to the development of immune memory, by mimicking the threat of an infection and providing antigens derived from the specific pathogen. The ability to generate immune memory is the key attribute of the adaptive immune system, which is crucial for the long-term protection of individuals and populations. Generating immune memory depends on a high degree of interaction among many different cell types, which maintains higher numbers of T and B cells that were selected as the most useful in the primary immune response. However, while the relative contribution of clonal memory cells to protection can be inferred from the molecules they express and their functional behaviour, the presence of memory cells per se is not indicative of absolute protection against disease.


Immune response to vaccine:


The figure above shows the flow of information following intramuscular vaccination. An antigen delivered by a vaccine is taken up by macrophages and immature APCs (1). APCs migrate to the lymph node draining the site of vaccination (2). The adaptive immune response is now initiated and effectors, such as CD4 effector T cells, cytotoxic T cells and soluble antibodies (3), are produced which travel throughout the bloodstream and back to the site of vaccination.


Vaccines function by stimulating the immune system, prompting a primary immune response to an infecting pathogen or to molecules derived from a particular pathogen. The immune response elicited by this primary exposure to the vaccine antigen creates immunological memory: a pool of immune cells is generated that will recognize the pathogen and mount a more robust, secondary response upon subsequent exposure to the virus or bacterium. In successful immunization, the secondary immune response is sufficient to prevent disease in the infected individual, as well as to prevent transmission of the pathogen to others. For communicable diseases, immunization therefore protects not only the individual who receives it, but also others with whom he or she has contact. High levels of vaccination in a community increase the number of people who are less susceptible or resistant to illness and prevent propagation of the infectious agent. Unvaccinated individuals, or those who have not developed immunity to the pathogen, are afforded an indirect measure of protection because those with immunity reduce the spread of the pathogen throughout the population. The larger the proportion of people with immunity, the greater the protection of those without immunity. This effect is called “herd immunity.” [Vide infra] Herd immunity is an important phenomenon because immunization programs rarely achieve 100 percent coverage in a population, and in some cases previously vaccinated persons may not exhibit effective immunity, so disease may result from exposure to the pathogen. For protection, it is therefore important to immunize not only ourselves but also our neighbors.
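The herd-immunity effect can be quantified with the standard epidemiological threshold formula, which is not stated in this text but is well established: if a pathogen has a basic reproduction number R0, transmission can no longer sustain itself once the immune fraction of the population exceeds 1 − 1/R0. The R0 values below are common textbook-order estimates used only for illustration.

```python
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune so that the
    effective reproduction number, R = R0 * (1 - immune fraction),
    falls below 1."""
    if r0 <= 1:
        return 0.0  # the outbreak dies out even with no immunity
    return 1.0 - 1.0 / r0

# Illustrative R0 values (assumed for this sketch, not from this text):
for disease, r0 in [("influenza", 2.0), ("polio", 6.0), ("measles", 15.0)]:
    print(f"{disease:10s} R0 = {r0:4.1f} -> threshold = {herd_immunity_threshold(r0):.0%}")
```

The formula makes the text’s point concrete: the more transmissible the pathogen (larger R0), the larger the proportion of the population that must be immune before the unvaccinated gain meaningful indirect protection.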


As with any challenge to the immune system, the body must first detect the threat, whether it is a pathogenic agent or an immunization. This initial detection is typically done by the innate immune system, although B-cells may also perform this function. The detection process begins when the immune system recognizes epitopes on antigens. Epitopes are small subregions on the antigens that stimulate immune recognition. Multiple components of the innate immune system will then respond to this challenge. These components of innate immunity will opsonize or bind to the agent and aid in its engulfment by antigen-presenting cells such as macrophages or monocytes. These antigen-presenting cells will then process the antigens from the pathogenic agent and insert the processed antigen, along with the MHC protein, onto the surface of the antigen-presenting cell. If it is a viral antigen, the antigen will be bound with MHC I protein and presented by the antigen-presenting cell to a CD8 cell, which will likely trigger cell-mediated immunity. If it is a bacterial or parasitic antigen, the antigen will be bound with MHC II protein and presented by the antigen-presenting cell to a CD4 cell, which will likely trigger antibody-mediated immunity.


Vaccine-induced immune effectors (see table below) are essentially antibodies, produced by B lymphocytes, that are capable of binding specifically to a toxin or a pathogen. Other potential effectors are cytotoxic CD8+ T lymphocytes (CTLs), which may limit the spread of infectious agents by recognizing and killing infected cells or secreting specific antiviral cytokines. The generation and maintenance of both B and CD8+ T cell responses is supported by growth factors and signals provided by CD4+ T helper (Th) lymphocytes, which are commonly subdivided into T helper 1 (Th1) and T helper 2 (Th2) subtypes. These effectors are controlled by regulatory T cells (Treg) that are involved in maintaining immune tolerance. Most antigens and vaccines trigger both B and T cell responses, so there is no rationale for opposing antibody production (‘humoral immunity’) to T cell responses (‘cellular immunity’). In addition, CD4+ T cells are required for most antibody responses, while antibodies exert significant influences on T cell responses to intracellular pathogens.



How type of vaccine affect immune response:

The nature of the vaccine exerts a direct influence on the type of immune effectors that are predominantly elicited and mediate protective efficacy (see table below). Capsular polysaccharides (PS) elicit B cell responses in what is classically reported as a T-independent manner (e.g. PPV), although increasing evidence supports a role for CD4+ T cells in such responses. Conjugation of the PS to a carrier protein (e.g. glycoconjugate vaccines such as PCV) provides foreign peptide antigens that are presented to the immune system and thus recruits antigen-specific CD4+ Th cells in what is referred to as a T-dependent antibody response. A hallmark of T-dependent responses, which are also elicited by toxoid, protein, inactivated or live attenuated viral vaccines, is the induction of both higher-affinity antibodies and immune memory. In addition, live attenuated vaccines usually generate CD8+ cytotoxic T cells. The use of live vaccines/vectors or of specific novel delivery systems (e.g. DNA vaccines) appears necessary for the induction of strong CD8+ T cell responses. Most current vaccines mediate their protective efficacy through the induction of vaccine-specific antibodies, whereas BCG-induced T cells produce cytokines that contribute to macrophage activation and control of M. tuberculosis.


Correlates of vaccine induced immunity:







How route of vaccine administration affect immune response:

Following injection, vaccine antigens attract local and systemic dendritic cells, monocytes and neutrophils. Innate immune responses activate these cells, which change their surface receptors and migrate along lymphatic vessels to the draining lymph nodes, where the activation of T and B lymphocytes takes place. In the case of killed vaccines there is only local and unilateral lymph node activation; conversely, for live vaccines there is multifocal lymph node activation due to microbial replication and dissemination. Consequently, the immunogenicity of killed vaccines is lower than that of live vaccines; killed vaccines require adjuvants, which improve the immune response by producing local inflammation and recruiting a higher number of dendritic cells/monocytes to the injection site. Secondly, the site of administration of killed vaccines is important: the intramuscular route, which is well vascularised and has a large number of patrolling dendritic cells, is preferred over the subcutaneous route. The intradermal route recruits the abundant dendritic cells in the skin and offers the advantages of antigen sparing and early and effective protection, but the GMTs (geometric mean [antibody] titres) are lower than those achieved with the intramuscular route and may wane faster. The site of administration is usually of little significance for live vaccines. Finally, due to focal lymph node activation, multiple killed vaccines may be administered at different sites with little immunologic interference. Immunologic interference may occur with multiple live vaccines unless they are given on the same day, at least 4 weeks apart, or by different routes. Immunological (immune) interference is defined as a reduction in the immunogenicity of a vaccine antigen when it is administered as a component of a vaccine that includes multiple vaccine antigens, or in the immunogenicity of a vaccine when it is administered separately but concurrently with another vaccine. [see also vaccine interference]
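The GMT referred to here is the geometric, not arithmetic, mean of individual antibody titres; the geometric mean is conventional because titres come from serial (typically two-fold) dilution assays and are roughly log-normally distributed. A minimal sketch of the calculation, using invented titre values for illustration:

```python
import math

def geometric_mean_titre(titres):
    """GMT: the exponential of the mean of the log-transformed titres."""
    if not titres or any(t <= 0 for t in titres):
        raise ValueError("titres must be a non-empty list of positive values")
    return math.exp(sum(math.log(t) for t in titres) / len(titres))

# Hypothetical reciprocal titres from a two-fold dilution assay:
titres = [10, 20, 40, 40, 80, 160]
print(f"GMT = {geometric_mean_titre(titres):.1f}")
```

Note that the geometric mean damps the influence of single very high responders, which would dominate an arithmetic mean of the same data.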


Immunological requirement of a vaccine:

1. Identification and selection of the most appropriate antigen:

Vaccines aim to prevent the disease symptoms that are the consequences of a pathogenic infection. In most cases, this does not occur by completely preventing infection but by limiting the consequences of the infection. In other words, a vaccine’s direct effect is to prevent disease rather than infection; through its indirect herd effect, however, it also prevents infection and infectiousness. An understanding of the disease pathogenesis and of natural immune control is, therefore, very useful when selecting appropriate antigens upon which to base a vaccine. Vaccines developed from pathogens can vary in the complexity of the pathogen-derived material they contain. Our understanding of fundamental immunology, as well as the selection techniques used, has resulted in new vaccines that are better characterised than ever before, and has also initiated a more rational approach to vaccine design.

2. Induction of innate immune responses:

The immune system is triggered by a combination of events and stimuli, as described previously. The requirement for more than the presence of a ‘foreign’ antigen to elicit an immune response must therefore always be considered in vaccine design, particularly when using highly purified or refined antigens. Highly refined subunit antigen formulations, and some inactivated whole pathogens, do not contain many of the molecular features and defensive triggers that are required to alert the innate immune system. These types of antigen are designed to minimize excessive inflammatory responses but, as a result, may be suboptimally immunogenic. Under these circumstances, the addition of adjuvants can mimic the missing innate triggers, restoring the balance between necessary defensive responses and acceptable tolerability.

3. Induction of CD4 T cell help:

The induction of CD4 T cells is essentially controlled by the nature of this initial inflammatory response. Therefore, vaccine adjuvants can play a role in guiding how CD4 T cells are induced and how they further differentiate and influence the quality and quantity of the adaptive immune response.

4. Selection and targeting of effector cells:

It is important to recognise that the dominant immune response to a given pathogen or antigen may not necessarily be the optimum response for inducing protection; indeed, through evolution some pathogens have developed strategies to evade or subvert the immune response, as is the case with Neisseria gonorrhoeae, where the dominant antibody response actually facilitates infection by preventing complement-dependent bactericidal activity. Antibody titers are often considered to represent adequate indicators of immune protection but, as discussed above, may not reflect the actual mechanism by which optimal protection is achieved. Useful specific immune correlates of immunity/protection may be unknown or incompletely characterised. Therefore, modern vaccine design still looks to clinical trials to provide information about clinical efficacy and, if possible, the immunological profiles of protected individuals. Immunogenicity is assessed by laboratory measurement of immune effectors, typically antibodies. Increasingly, however, specific T-cell activation is included in the parameters assessed, as adequate T-cell immunity may be essential for recovery from some infections, and improved assay techniques have allowed these evaluations to become more standardised and to offer more robust data. This can then open the door to understanding observed clinical efficacy (or the lack thereof) and to defining at least some of the features of vaccine-induced protection. By preferentially targeting the best immunological effectors, vaccines can then hope to mimic or improve on nature's own response to infection.


Booster dose:

In medicine, a booster dose is an extra administration of a vaccine after an earlier dose. After initial immunization, a booster injection or booster dose is a re-exposure to the immunizing antigen. It is intended to increase immunity against that antigen back to protective levels after it has been shown to have decreased or after a specified period. For example, tetanus shot boosters are often recommended every 10 years. If a patient receives a booster dose but already has a high level of antibody, then a reaction called an Arthus reaction could develop, a localized form of Type III hypersensitivity, induced by fixation of complement by preformed circulating antibodies. In severe cases, the degree of complement fixation can be so substantial that it induces local tissue necrosis.


Both the innate and adaptive immune subsystems are necessary to provide an effective immune response whether to an actual pathogenic agent or to an immunization. Further, effective immunizations must induce long-term stimulation of both the humoral and cell-mediated arms of the adaptive system by the production of effector cells for the current infection and memory cells for future infections with the pathogenic agent. At least seven different types of vaccines are currently in use or in development that produce this effective immunity and have contributed greatly to the prevention of infectious disease around the world.



Immunization is the process whereby a person is made immune or resistant to an infectious disease, typically by the administration of a vaccine. Vaccines stimulate the body’s own immune system to protect the person against subsequent infection or disease.  Immunization is a proven tool for controlling and eliminating life-threatening infectious diseases and is estimated to avert between 2 and 3 million deaths each year. It is one of the most cost-effective health investments, with proven strategies that make it accessible to even the most hard-to-reach and vulnerable populations. It has clearly defined target groups; it can be delivered effectively through outreach activities; and vaccination does not require any major lifestyle change. The overwhelming safety and effectiveness of vaccines in current use in preventing serious disease has allowed them to gain their preeminent role in the routine protection of health. Before an immunization is introduced for population-wide use, it is tested for efficacy and safety. However, immunization is not without risks. For example, it is well established that the oral polio vaccine on rare occasion causes paralytic polio and that vaccines sometimes lead to anaphylactic shock. Given the widespread use of vaccines; state mandates requiring vaccination of children for entry into school, college, or day care; and the importance of ensuring that trust in immunization programs is justified, it is essential that safety concerns receive assiduous attention.


Immunization is the process by which an individual’s immune system becomes fortified against an agent (known as the immunogen). When this system is exposed to molecules that are foreign to the body, called non-self, it will orchestrate an immune response, and it will also develop the ability to quickly respond to a subsequent encounter because of immunological memory. This is a function of the adaptive immune system. Therefore, by exposing an animal to an immunogen in a controlled way, its body can learn to protect itself: this is called active immunization. The most important elements of the immune system that are improved by immunization are the T cells, B cells, and the antibodies B cells produce. Memory B cells and memory T cells are responsible for a swift response to a second encounter with a foreign molecule. Passive immunization is when these elements are introduced directly into the body, instead of when the body itself has to make these elements. Immunization is done through various techniques, most commonly vaccination. Vaccines against microorganisms that cause diseases can prepare the body’s immune system, thus helping to fight or prevent an infection. The fact that mutations can cause cancer cells to produce proteins or other molecules that are unknown to the body forms the theoretical basis for therapeutic cancer vaccines. Other molecules can be used for immunization as well, for example in experimental vaccines against nicotine (NicVAX) or the hormone ghrelin in experiments to create an obesity vaccine. Before the introduction of vaccines, the only way people became immune to an infectious disease was by actually getting the disease and surviving it. Smallpox (variola) was prevented in this way by inoculation, which produced a milder effect than the natural disease.


Inherited immunity:

Mothers can pass on immunity to their babies across the placenta during the final months of pregnancy. The amount of inherited immunity varies by disease and is an important factor in deciding when a child should be immunized. The neonate is protected against disease by maternal immunoglobulins (Ig). Maternal IgG is transported across the placenta before birth and maternal secretory IgA is present in breast milk and colostrum. These passively acquired antibodies provide protection against pathogens to which the mother was immune. However, protection provided by passively transferred antibodies is short-lived. Passively acquired maternal IgG declines during the first few months of life, and most infants are not breastfed beyond several months of age. More importantly, maternal antibodies offer limited immunologic protection when compared with protection afforded by an infant’s active immune response. A mother’s antibodies may protect a child from measles for 6 to 12 months. But, in the case of diseases such as pertussis, immunity may last only for a few weeks. Tetanus is one example where inherited immunity is critical and the mother must be immunized to offer protection to her newborn.


Types of immunization: active and passive immunization:

Immunization can be achieved by either passive or active means, from either natural or artificial sources. Natural sources are due to exposure to the environment, humans, and animals. In contrast, artificial sources are due to medical interventions. Passive immunization occurs with the transfer of preformed antibodies to an unimmunized individual. This individual then develops a temporary immunity to a particular organism or toxin due to the presence of these preformed antibodies. Once these preformed antibodies have been destroyed, the individual no longer has immunity to this microorganism or toxin. Passive immunization can occur either naturally or artificially. Excellent examples of natural passive immunization are the passage of maternal antibodies through the placenta to the fetus and the passage of these maternal antibodies to the infant through the colostrum and milk. Excellent examples of artificial passive immunization include the administration of pooled human immune gamma globulin and antivenin. These gamma globulins and antivenins provide temporary immunity to either a particular illness or venom. Passive immunity refers to the process of providing IgG antibodies to protect against infection; it gives immediate, but short-lived protection (several weeks to 3 or 4 months at most). Concurrently with this temporary immunity from the preformed antibodies, the individual’s own body is likely to be in the early stages of developing its own active immune response. Active immunization occurs with the exposure of an unimmunized individual to a pathogenic agent. The immune system of this individual then begins the process of developing immunity to this agent. In contrast to passive immunization, active immunization typically produces long-term immunity due to the stimulation of the individual’s immune system. Active immunization can occur either naturally or artificially.
An excellent example of natural active immunization is exposure to influenza. The body then begins the process of developing long-term immunity to the influenza virus. Excellent examples of artificial active immunization include the different types of vaccines. These immunizations mimic the stimulation necessary for immune development yet do not produce active disease. Wild infection for example with hepatitis A virus (HAV) and subsequent recovery gives rise to a natural active immune response usually leading to lifelong protection. In a similar manner, administration of two doses of hepatitis A vaccine generates an acquired active immune response leading to long-lasting (possibly lifelong) protection. Hepatitis A vaccine has only been licensed since the late 1980s so that follow-up studies of duration of protection are limited to <25 years—hence, the preceding caveat about duration of protection.


Immunizing Agents:

Immunizing agents are classified as active or passive, depending on the process by which they confer immunity; prevention of disease through the use of immunizing agents is called immunoprophylaxis. Active immunization is the production of antibodies against a specific agent after exposure to the antigen through vaccination. Active immunizing agents are typically referred to as vaccines. Passive immunization involves the transfer of pre-formed antibodies, generally from one person to another or from an animal product, to provide temporary protection, since transferred antibody degrades over time. It can occur by transplacental transfer of maternal antibodies to the developing foetus, or it can be provided by administration of a passive immunizing agent prepared from the serum of immune individuals or animals.


Active immunizing agents – vaccines:

Vaccines are complex biologic products designed to induce a protective immune response effectively and safely. An ideal vaccine is safe with minimal adverse effects, and effective in providing lifelong protection against disease after a single dose that can be administered at birth. Also ideally, it would be inexpensive, stable during shipment and storage, and easy to administer. Some vaccines come closer to fulfilling these criteria than others. Although each vaccine has its own benefits and risks, and indications and contraindications, all vaccines offer protection against the disease for which they were created. In addition to the active component (the antigen), which induces the immune response, vaccines may contain additional ingredients such as preservatives, additives, adjuvants and traces of other substances necessary in the production of the vaccine. Vaccine antigens include: inactivated (killed) or attenuated (weakened) live organisms; products secreted by organisms that are modified to remove their pathogenic effects (e.g., tetanus toxoid); and components of the organism, some of which are made in the laboratory through recombinant technology.


Passive immunizing agents – immune globulins:

Passive immunization with immune globulins provides protection when vaccines for active immunization are unavailable or contraindicated, or in certain instances when unimmunized individuals have been exposed to the infectious agent and rapid protection is required (post-exposure immunoprophylaxis), since the immune response to a vaccine takes time and the incubation period of the disease is short. Passive immunization also has a role in the management of immunocompromised people who may not be able to respond fully to vaccines or for whom live vaccines may be contraindicated. The duration of the beneficial effects provided by passive immunizing agents is relatively short, and protection may be incomplete.

The four most commonly used immunoglobulin preparations are as follows.

(i) Hepatitis B Immunoglobulin

(ii) Rabies Immunoglobulin

(iii) Tetanus Immunoglobulin

(iv) Varicella-Zoster Immunoglobulin


Monoclonal Antibodies:

Increasingly, technology is being used to generate monoclonal antibodies (MAbs)– “mono” meaning that they are a pure, single type of antibody targeted at a single site on a pathogen, and “clonal” because they are produced from a single parent cell. These antibodies have wide-ranging potential applications to infectious disease and other types of diseases. To date, only one MAb treatment is commercially available for the prevention of an infectious disease. This is a MAb preparation for the prevention of severe disease caused by RSV in high-risk infants. Physicians are also increasingly using MAbs to combat noninfectious diseases, such as certain types of cancer, multiple sclerosis, rheumatoid arthritis, Crohn’s disease, and cardiovascular disease. Scientists are researching other new technologies for producing antibodies in the laboratory, such as recombinant systems using yeast cells or viruses and systems combining human cells and mouse cells, or human DNA and mouse DNA.

Bioterror threats:

In the event of the deliberate release of an infectious biological agent, biosecurity experts have suggested that passive immunization could play a role in emergency response. The advantage of using antibodies rather than vaccines to respond to a bioterror event is that antibodies provide immediate protection, whereas a protective response generated by a vaccine is not immediate and in some cases may depend on a booster dose given at a later date. Candidates for this potential application of passive immunization include botulinum toxin, tularemia, anthrax, and plague. For most of these targets, only animal studies have been conducted, and so the use of passive immunization in potential bioterror events is still in experimental stages.


Advantages and Disadvantages of Passive Immunization:

Vaccines typically need time (weeks or months) to produce protective immunity in an individual and may require several doses over a certain period of time to achieve optimum protection. Passive immunization, however, has the advantage of acting quickly, providing protection within hours or days, much faster than a vaccine. Additionally, passive immunization can compensate for a deficient immune system, which is especially helpful in someone who does not respond to immunization. Antibodies, however, have certain disadvantages. First, antibodies can be difficult and costly to produce. Although new techniques can help produce antibodies in the laboratory, in most cases antibodies to infectious diseases must be harvested from the blood of hundreds or thousands of human donors, or obtained from the blood of immune animals (as with antibodies that neutralize snake venoms). In the case of antibodies harvested from animals, serious allergic reactions can develop in the recipient. Another disadvantage is that many antibody treatments must be given by intravenous injection, which is a more time-consuming and potentially complicated procedure than the injection of a vaccine. Finally, the immunity conferred by passive immunization is short lived: it does not lead to the formation of long-lasting memory immune cells. In certain cases, passive and active immunity may be used together. For example, a person bitten by a rabid animal might receive rabies antibodies (passive immunization to create an immediate response) and rabies vaccine (active immunization to elicit a long-lasting response to this slowly reproducing virus).


What is the difference between antiserum and vaccine?

Simplistically, a vaccine primes your immune system to prevent a future infectious disease. An antiserum either neutralizes the present infection or helps your immune system to attack the present infection. Vaccines are generally prophylactic, whereas antiserums are generally a form of treatment. A vaccine stimulates your immune system to prevent disease, essentially giving you immunity. It generally imparts a long-term immunity that can last years. Specific examples include the polio vaccine, which provides immunity to poliovirus, and the tetanus vaccine, which stimulates your immune system to quickly identify the toxins produced by the Clostridium tetani bacterium and to produce the necessary antibodies. An antiserum (sometimes called a serum; plural antisera) contains preformed specific antibodies (immunoglobulin) to neutralize infection. This provides a temporary immunity, but long-term immunity generally still requires the use of a vaccine. An example is the tetanus antitoxin, which contains antibodies from previously infected animals. If you are infected with Clostridium tetani and you are not immunized, you would treat the infection with an antiserum. You may also administer a vaccine at the same time (or shortly after) to create immunity and prevent future infections.


Vaccination vs. immunization vs. inoculation:

Understanding the difference between vaccines, vaccinations, and immunizations can be tricky. Below is an easy guide that explains how these terms are used:

• A vaccine is a product that produces immunity from a disease and can be administered through needle injections, by mouth, or by aerosol. The administration of vaccines is called vaccination.

• An immunization is the process by which a person or animal becomes protected from a disease. Vaccines cause immunization, and there are also some diseases that cause immunization after an individual recovers from the disease.


Many people believe that receiving a shot or vaccine means you are immunized, and that once you are vaccinated you are completely protected. That belief is wrong. The word “immunization” is used everywhere in place of “vaccination”, and news outlets in particular tell the public that immunization is the same as vaccination; however, there is a large difference between the two. No vaccine is 100% effective in preventing disease. Most routine childhood vaccines are effective for 85% to 95% of recipients. Since no vaccine is 100% effective, vaccination does not automatically mean the person is immunized against the disease. Immunization means to make someone immune to something. Vaccination, by contrast, just means to inject a suspension of attenuated or killed microorganisms…administered for prevention of infectious disease. Vaccination does not guarantee immunity. Everyone’s immune system reacts differently, and for reasons related to the individual, some will not develop immunity. Also, immunization refers not only to the use of all vaccines but also extends to the use of antitoxin, which contains preformed antibody to, e.g., diphtheria or tetanus exotoxins. So vaccination is not synonymous with immunization, although both terms are used interchangeably in this article. Natural immunity develops only after one recovers from the actual disease.


The terms inoculation, vaccination, immunization and injection are often used synonymously to refer to artificial induction of immunity against various infectious diseases. This is supported by some dictionaries.  However, there are some important historical and current differences. In English medicine inoculation referred only to the prevention of smallpox until the very early 1800s. When Edward Jenner introduced smallpox vaccine in 1798 this was initially called cowpox inoculation or vaccine inoculation. Soon, to avoid confusion, smallpox inoculation was referred to as variolation (from variola = smallpox) and cowpox inoculation was referred to as vaccination (from Jenner’s use of Variolae vaccinae = smallpox of the cow). Then, in 1891 Louis Pasteur proposed that the terms vaccine/vaccination should be extended to include the new protective procedures being developed. Inoculation is now more or less synonymous in nontechnical usage with injection etc., and the question e.g. ‘Have you had your flu injection/vaccination/inoculation/immunization?’ should not cause confusion. The focus is on what is being given and why, not the literal meaning of the technique used. Inoculation also has a specific meaning for procedures done in vitro. These include the transfer of microorganisms into and from laboratory apparatus such as test tubes and petri dishes in research and diagnostic laboratories, and also in commercial applications such as brewing, baking and the production of antibiotics. In almost all cases the material inoculated is called the inoculum, or less commonly the inoculant, although the term culture is also used for work done in vitro.


Active Immunity in neonates:

Neonates are capable of generating both humoral and cellular immune responses to pathogens at the time of birth. Active immunity in the newborn includes the full range of B-cell responses including the production of IgM, IgG, and secretory and monomeric IgA, as well as the development of helper T-cell (Th) and cytotoxic T-cell responses. In addition, neonates can produce specific Th-cell subsets, including Th1-type cells that participate in cell-mediated immune responses and Th2-type cells that are primarily involved in promoting B-cell responses. The development of active humoral and cellular immune responses in the newborn is necessary to meet the tremendous number of environmental challenges encountered from the moment of birth. When children are born, they emerge from the relatively sterile environment of the uterus into a world teeming with bacteria and other microorganisms. Beginning with the birth process, the newborn is exposed to microbes from the mother’s cervix and birth canal, then the surrounding environment. Within a matter of hours, the gastrointestinal tract of the newborn, initially relatively free of microbes, is heavily colonized with bacteria. The most common of these colonizing bacteria include facultative anaerobic bacteria, such as Escherichia coli and streptococci, and strict anaerobic bacteria, such as Bacteroides and Clostridium. Specific secretory IgA responses directed against these potentially harmful bacteria are produced by the neonate’s intestinal lymphocytes within the first week of life.


Functional Differences between Infant and Adult Immune Responses:

Although infants can generate all functional T-cells (i.e., Th1, Th2, and cytotoxic T-cells), infant B-cell responses are deficient when compared with older children and adults. Infants respond well to antigens (such as proteins) that require T-cell help for development. However, until about 2 years of age, the B-cell response to T-cell-independent antigens (such as polysaccharides) is considerably less than that found in adults. For this reason, infants are uniquely susceptible to bacteria that are coated with polysaccharides (such as Haemophilus influenzae type b [Hib] and Streptococcus pneumoniae).


Immune response to vaccines by neonates:

The neonate is capable of mounting a protective immune response to vaccines within hours of birth. For example, neonates born to mothers with hepatitis B virus infection mount an excellent protective immune response to hepatitis B vaccine given at birth, even without additional use of hepatitis B virus-specific immunoglobulin.  In addition, BCG vaccine given at birth induces circulating T-cells that protect against bacteremia and subsequent development of miliary tuberculosis and tuberculous meningitis.


Immune response to vaccines by infants:

The young infant is fully capable of generating protective humoral and cellular immune responses to multiple vaccines simultaneously. Approximately 90% of infants develop active protective immune responses to the primary series of diphtheria-tetanus-acellular-pertussis, hepatitis B, pneumococcus, Hib, and inactivated polio vaccines given between 2 months and 6 months of age. To circumvent the infant’s inability to mount T-cell-independent B-cell responses, polysaccharide vaccines (Hib and S pneumoniae) are linked to proteins (i.e., diphtheria toxoid, diphtheria toxin mutant protein, tetanus toxoid, or meningococcal group B outer-membrane protein) that engage the infant’s Th-cells. By converting a T-cell-independent immune response to a T-cell-dependent response, conjugate vaccines can be recognized by the infant’s B-cells. Conjugate vaccines, therefore, induce protective immune responses in infants that are often greater than those found after natural infection. Bacterial polysaccharide-protein conjugate vaccines (Haemophilus influenzae type b [Hib], pneumococcal and meningococcal conjugates) have revolutionized pediatric vaccination strategies. The widely used carrier proteins are tetanus toxoid (TT), diphtheria toxoid (DT) and diphtheria toxoid variant CRM197 protein.


Immune response to vaccines by children with immunodeficiency:

Severely immunocompromised children (specifically, those with T-cell defects) who receive live viral vaccines (e.g., measles or varicella vaccines) or live bacterial vaccines (e.g., BCG vaccine) may develop disseminated infections with these attenuated pathogens. However, the only live vaccine that was routinely given in the United States in the first year of life, the oral polio vaccine (OPV), has now been replaced with inactivated polio vaccine. Therefore, children do not receive their first live viral vaccines until about 12 to 15 months of age. Most children with severe T-cell deficiencies (e.g., severe combined immunodeficiency syndrome) will have been identified by 6 to 8 months of age. However, many children with immunodeficiencies respond well to live viral vaccines. Because the risk of severe infection is greater after natural infection with wild-type viruses than immunization with highly attenuated viruses, the Advisory Committee on Immunization Practices and American Academy of Pediatrics recommend that certain immunocompromised children should receive live viral vaccines. For example, children with human immunodeficiency virus (HIV) infection without severe T-cell deficiencies (Centers for Disease Control and Prevention class N1 or A1 and age-specific percentage of CD4+ lymphocytes greater than 25%) should receive the measles-mumps-rubella (MMR), and varicella vaccines. Immunizations are well-tolerated by this subset of HIV-infected children and confer protective immunity. Immunization with live viral vaccines has also been demonstrated to be safe and effective in certain children with malignancies and in children following bone marrow transplantation.


Immune response to vaccine by children with mild, moderate or severe illnesses:

Some parents may be concerned that children with acute illnesses are, in a sense, immunocompromised, and that they are less likely to respond to vaccines or more likely to develop adverse reactions to vaccines than healthy children. Alternatively, parents may believe that children who are ill should not further burden an immune system already committed to fighting an infection. However, vaccine-specific antibody responses and rates of vaccine-associated adverse reactions of children with mild or moderate illnesses are comparable to those of healthy children. For example, the presence of upper respiratory tract infections, otitis media, fever, skin infections, or diarrhea does not affect the level of protective antibodies induced by immunization. Data on the capacity of vaccines to induce protective immune responses in children with severe infections (such as those with bacterial pneumonia or meningitis) are lacking. Although a delay in vaccines is recommended for children with severe illnesses until the symptoms of illness resolve, this recommendation is not based on the likelihood that the child will have an inadequate immune response to the vaccine. Rather, the reason for deferring immunization is to avoid superimposing a reaction to the vaccine on the underlying illness or to mistakenly attribute a manifestation of the underlying illness to the vaccine.



Do vaccines overwhelm the immune system of a child?

1. Infants have the capacity to respond to an enormous number of antigens:

Studies on the diversity of antigen receptors indicate that the immune system has the capacity to respond to extremely large numbers of antigens. Current data suggest that the theoretical capacity determined by the diversity of antibody variable gene regions would allow for as many as 10^9 to 10^11 different antibody specificities. But this prediction is limited by the number of circulating B cells and the likely redundancy of antibodies generated by an individual. A more practical way to determine the diversity of the immune response would be to estimate the number of vaccines to which a child could respond at one time. If we assume that (1) approximately 10 ng/mL of antibody is likely to be an effective concentration of antibody per epitope (an immunologically distinct region of a protein or polysaccharide), (2) generation of 10 ng/mL requires approximately 10^3 B cells per mL, (3) a single B-cell clone takes about 1 week to reach the 10^3 progeny B cells required to secrete 10 ng/mL of antibody (therefore, vaccine-epitope-specific immune responses found about 1 week after immunization can be generated initially from a single B-cell clone per mL), (4) each vaccine contains approximately 100 antigens and 10 epitopes per antigen (i.e., 10^3 epitopes), and (5) approximately 10^7 B cells are present per mL of circulating blood, then each infant would have the theoretical capacity to respond to about 10,000 vaccines at any one time (obtained by dividing 10^7 B cells per mL by 10^3 epitopes per vaccine). Of course, most vaccines contain far fewer than 100 antigens (for example, the hepatitis B, diphtheria, and tetanus vaccines each contain 1 antigen), so the estimated number of vaccines to which a child could respond is conservative.
But using this estimate, we would predict that if 11 vaccines were given to infants at one time, then about 0.1% of the immune system would be “used up.”  However, because naive B- and T-cells are constantly replenished, a vaccine never really “uses up” a fraction of the immune system. For example, studies of T-cell population dynamics in HIV-infected patients indicate that the human T-cell compartment is highly productive. Specifically, the immune system has the ability to replenish about 2 billion CD4+ T lymphocytes each day. Although this replacement activity is most likely much higher than needed for the normal (and as yet unknown) CD4+ T-cell turnover rate, it illustrates the enormous capacity of the immune system to generate lymphocytes as needed.
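The arithmetic behind these estimates is easy to reproduce. The sketch below simply restates the assumptions listed above in code; every constant comes from the text, not from additional data.

```python
# Reproducing the estimate above: how many vaccines could an infant
# respond to at once, given the stated assumptions?
B_CELLS_PER_ML = 10**7              # circulating B cells per mL of blood
ANTIGENS_PER_VACCINE = 100          # generous upper bound from the text
EPITOPES_PER_ANTIGEN = 10

epitopes_per_vaccine = ANTIGENS_PER_VACCINE * EPITOPES_PER_ANTIGEN  # 10^3

# One B-cell clone per mL per epitope is assumed sufficient one week
# after immunization, so:
vaccines_at_once = B_CELLS_PER_ML // epitopes_per_vaccine
print(vaccines_at_once)             # 10000

# Fraction of this capacity "used up" by 11 simultaneous vaccines:
fraction_used = 11 / vaccines_at_once
print(f"{fraction_used:.1%}")       # 0.1%
```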


2. Children are exposed to fewer antigens in vaccines today than in the past:

Parents who are worried about the increasing number of recommended vaccines may take comfort in knowing that children are exposed to fewer antigens (proteins and polysaccharides) in vaccines today than in the past.  Although we now give children more vaccines, the actual number of antigens they receive has declined.


Number of Vaccines and Possible Number of Injections over the Past 100 Years

Year | Number of Vaccines | Possible Injections by 2 Years of Age | Possible Injections at a Single Visit
1900 |  1 |  1 | 1
1960 |  5 |  8 | 2
1980 |  7 |  5 | 2
2000 | 11 | 20 | 5


Whereas previously 1 vaccine, smallpox, contained about 200 proteins, now the 11 routinely recommended vaccines contain fewer than 130 proteins in total. Two factors account for this decline: first, the worldwide eradication of smallpox obviated the need for that vaccine, and second, advances in protein chemistry have resulted in vaccines containing fewer antigens (e.g., replacement of whole-cell with acellular pertussis vaccine).


3. Researchers have discovered that:

•In each cubic meter of air, there are between 1.6 million and 40 million viruses.

•In each cubic meter of air, there are between 860,000 and 11 million bacteria.

A child inhales about 5 liters of air per minute (or about 0.005 cubic meters), so a few hundred thousand viruses and bacteria are inhaled every minute of every day of the year. The researchers also discovered that many of these were previously unknown species of viruses and bacteria, so the immune system must adapt to them with each breath. Thus, the 25-30 antigens from vaccines (depending on the age of the child and the number of different flu vaccines they’ve received) are insignificant compared to the millions upon millions of viral and bacterial antigens that enter a child’s lungs every day. The tiny number of antigens introduced by vaccines barely registers against the immune system’s massive and robust capacity to deal with antigens.
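The figures above imply the following rough arithmetic; this sketch uses only the numbers quoted in the text:

```python
# Microbes inhaled per minute, from the concentration ranges quoted above.
LITERS_PER_MINUTE = 5
CUBIC_METERS_PER_LITER = 0.001
air_per_minute = LITERS_PER_MINUTE * CUBIC_METERS_PER_LITER  # 0.005 m^3

viruses_per_m3 = (1_600_000, 40_000_000)    # low and high estimates
bacteria_per_m3 = (860_000, 11_000_000)

viruses_inhaled = tuple(round(v * air_per_minute) for v in viruses_per_m3)
bacteria_inhaled = tuple(round(b * air_per_minute) for b in bacteria_per_m3)
print(viruses_inhaled)   # (8000, 200000)
print(bacteria_inhaled)  # (4300, 55000)
```

So even the low-end estimates put tens of thousands of inhaled viruses and bacteria per minute, dwarfing the antigen content of the vaccine schedule.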


4. Children respond to Multiple Vaccines given at the same time in a manner similar to Individual Vaccines:

If vaccines overwhelmed or weakened the immune system, then one would expect lesser immune responses when vaccines are given at the same time as compared with when they are given at different times.  However, the following vaccines induce similar humoral immune responses when given at the same or different times: 1) MMR and varicella, 2) MMR, diphtheria-tetanus-pertussis (DTP), and OPV, 3) hepatitis B, diphtheria-tetanus, and OPV, 4) influenza and pneumococcus, 5) MMR, DTP-Hib, and varicella,  6) MMR and Hib, and 7) DTP and Hib.  Achieving similar immune responses by giving vaccines at the same time at different sites may be more easily accomplished than by combining vaccines in the same syringe. Challenges to giving many vaccines in a single injection are based partly on incompatibilities of agents used to buffer or stabilize individual vaccines.



Do Vaccines weaken the immune system? Do Vaccines increase the risk of other infections?

Vaccines may cause temporary suppression of delayed-type hypersensitivity skin reactions or alter certain lymphocyte function tests in vitro. However, the short-lived immunosuppression caused by certain vaccines does not result in an increased risk of infections with other pathogens soon after vaccination. Vaccinated children are not at greater risk of subsequent infections with other pathogens than unvaccinated children. On the contrary, in Germany, a study of 496 vaccinated and unvaccinated children found that children who received immunizations against diphtheria, pertussis, tetanus, Hib, and polio within the first 3 months of life had fewer infections with vaccine-related and -unrelated pathogens than the nonvaccinated group. Bacterial and viral infections, on the other hand, often predispose children and adults to severe, invasive infections with other pathogens. For example, patients with pneumococcal pneumonia are more likely to have had a recent influenza infection than matched controls. Similarly, varicella infection increases susceptibility to group A β-hemolytic streptococcal infections such as necrotizing fasciitis, toxic shock syndrome, and bacteremia.


Immunological impediments to effective vaccination: tolerance, interference and neutralization:

Other important considerations in vaccine immunology include the phenomena of immune tolerance and immunological/antigenic interference, which can suppress or prevent the development of adequate immune responses following vaccination. Immune tolerance refers to the induction of immunological non-responsiveness by repeated exposure to similar antigens, such as polysaccharide antigens; this effect is dose-dependent and may be limited in time, as increasing the interval between subsequent doses can partially restore responsiveness. Immunological/antigenic interference occurs when previous or concomitant exposure to another antigen, for example through previous or concurrent vaccinations, prevents the development of adequate responses to the vaccine antigen. Another potential cause of reduced vaccine efficacy is passively acquired immunity, e.g. antibody transferred from mother to foetus, where the vaccine antigen is neutralised by pre-existing maternally derived antibody without triggering a host-derived immune response in the infant. These phenomena can be avoided, however, by taking them into account when designing immunisation schedules.


Gut microbiome and vaccines:

Your body contains about 100 trillion bacteria, and bacteriophages (viruses that infect bacteria) outnumber the bacteria by 10 to one. All of these bacteria, viruses, and other microorganisms make up your body’s microbiome. While we commonly view all viruses as “bad,” this is really not the case. Some viruses, the bacteriophages, appear to promote health by infecting and killing bacteria that might otherwise cause disease. There is broad and compelling scientific evidence that a healthy human immune system is the most powerful way to resist infectious diseases or heal after infection, and the efficient functioning of your immune system depends on your gut flora; about 80 percent of your immune system resides in your gut. Understanding how gut microbes modulate vaccine responses could therefore help improve the effectiveness of certain vaccinations. Clinical trials testing the efficacy of oral vaccines against polio, rotavirus, and cholera have shown lower immunogenicity of these vaccines in individuals from developing countries compared with individuals from the developed world. Clinical trials of a killed oral cholera vaccine in Swedish and Nicaraguan children have also shown blunted antibody responses in Nicaraguan children compared with Swedish children. In a study testing a live oral cholera vaccine, Lagos and colleagues demonstrated that excessive bacterial growth in the small intestine of children in less developed countries might contribute to the low antibody response to the vaccine. Different vaccine strains of Shigella flexneri also showed differential protection in individuals from developing countries: in a study of Bangladeshi adults and children, no significant immune response to this vaccine was mounted, although the same antigen was reactogenic in North American individuals. A recent review article concluded that the composition of your gut microbiome can influence whether a vaccine has an effect in your body.
Unhealthy gut microbiome composition (or “dysbiosis”) can lead to inflammation. And that means more bacterial cells pass through the damaged lining of the gut, which stimulates further immune system responses. This is called “leaky gut.” Vaccines may not be as effective because the immune system is already busy dealing with these bacterial cells “leaking” through the gut. On the other hand, having a diverse and “healthy” gut microbiome, and thus no gut inflammation and “leakiness,” might allow a person’s immune system to focus on responding to the vaccine effectively. It is also possible that the gut microbiota of individuals with increased exposure to microorganisms (and therefore antigens) make them more tolerant to vaccination, being unable to mount a proper response compared to individuals living in better socioeconomic conditions. Recent research has also found that the effectiveness of the seasonal flu shot could be enhanced by intestinal bacteria. The immune system detects specific proteins from the bacteria, and this detection seems to increase the immune system’s response to the flu vaccine. Then your body has an easier time mounting an immune response if you are exposed to the real flu virus.


Ideal vaccine:

Properties of an ideal vaccine

• Should give life-long immunity

• Should be broadly protective against all variants of an organism

• Should prevent disease transmission, e.g. by preventing shedding

• Should induce effective immunity rapidly

• Should be effective in all vaccinated subjects, including infants and the elderly

• Should transmit maternal protection to the fetus

• Should require few (ideally one) immunisations to induce protection

• Should not need to be administered by injection

• Should be cheap, stable (no requirement for cold chain), and safe

An ideal vaccine is relatively easy to define, and most of its properties are obvious, but few real vaccines approach the ideal, and no vaccines exist for many organisms for which a vaccine is the only realistic protective strategy in the foreseeable future. It is worth considering why this is so. Many difficulties account for the failure to produce these vaccines. All micro-organisms deploy evasion mechanisms that interfere with effective immune responses and, for many organisms, it is not clear which immune responses provide effective protection.

First, it is notable that most successful vaccines are against relatively small organisms. There are excellent vaccines against several viruses and some against bacteria, although several of these protect not against infection but against the toxic effects of infection. As yet there are no satisfactory vaccines against parasites. Generally, therefore, successful vaccines are against organisms with smaller genomes, although there are of course exceptions to this general rule; for example, we do not yet have an effective vaccine against HIV or hepatitis C.

Without prior immunisation, most organisms gain a foothold in their host but, from very early on in the infectious process, must deploy mechanisms to interfere with the host immune response. Even those organisms that rely on rapid multiplication and spread to new hosts must combat innate (non-specific) immune mechanisms. Organisms with a lifestyle involving co-existence with their host over long periods must also combat the adaptive (specific) immune response. Thus all micro-organisms have evolved complex defence mechanisms that interfere with every stage of the immune response. Organisms with large genomes have sufficient genetic capacity to carry multiple genes capable of affecting the immune response.
The sheer magnitude of the enterprise involved in working out all these mechanisms means that more complete information is available for smaller organisms. A number of viruses have been well studied, and numerous viral gene products that interfere with immune function have been described. These include a large variety of molecules that mimic important regulatory molecules of the immune system, such as interferons, interleukins and chemokines and their receptors. Interference with antigen processing is common, and viruses may also prevent apoptosis. Genes dedicated to viral escape may represent at least 10% of a viral genome, which indicates the potential magnitude of the task involved in understanding how a complex organism such as a bacterium avoids elimination by the immune system, since 10% of a bacterial genome might be 200–400 genes! Smaller organisms do not have the luxury of devoting tens or hundreds of genes to combating the immune system and must adopt other strategies, one of which is rapid change. Many viruses use this method, including influenza, HIV and hepatitis C; larger organisms, such as the malaria parasite, employ it too. Most often, the variation takes place after infection of the host. Of course, if the organism has a secondary host, change may take place during infection of this species, as is thought to occur in the case of influenza virus. Pre-existing immunity can remove the opportunity for multiplication and development of escape variants, as has been well described for HIV. Thus immunisation against an epidemic strain of influenza virus can provide very effective preventive immunity against spread of that strain, but not against future variants. The ability of micro-organisms to deploy escape mechanisms even early in immune responses suggests that, for organisms so far insusceptible to vaccines, we need to decide what the vaccine is intended to do.
Do we wish to prevent infection completely, or simply suppress replication of the organisms to an extent compatible with a normal life-span? Is prevention of transmission to other and perhaps more susceptible individuals (for example infants) the objective, or is the aim not the prevention of infection but pathology? Recent understanding of the complex interactions of micro-organisms with their hosts suggests that if we are to make progress in containing many infectious diseases caused by complex organisms, we should better define our objective and tailor our vaccine strategies accordingly. Better understanding of the crucial events in immune responses will help in doing this and may lead to development of new vaccines capable of combating infections in very different ways.


Bacterial pathogen genomics and vaccines:

Infectious diseases remain a major cause of death and disability in the world, and the majority are caused by bacteria. Although immunisation is the most cost-effective and efficient means to control microbial diseases, vaccines are not yet available to prevent many major bacterial infections. Examples include dysentery (shigellosis), gonorrhoea, trachoma, and gastric ulcers and cancer (Helicobacter pylori). Improved vaccines are needed to combat some diseases for which current vaccines are inadequate. Tuberculosis, for example, remains rampant throughout most countries in the world and represents a global emergency heightened by the pandemic of HIV. The availability of complete genome sequences has dramatically changed the opportunities for developing novel and improved vaccines and has increased the efficiency and rapidity of their development. Complete genomic databases provide an inclusive catalogue of all potential vaccine candidates for any bacterial pathogen. In conjunction with adjunct technologies, including bioinformatics, random mutagenesis, microarrays, and proteomics, a systematic and comprehensive approach to vaccine candidate discovery can be undertaken. Genomics must be used in conjunction with population biology to ensure that a vaccine can target all pathogenic strains of a species. Proof of principle of the utility of genomics is provided by the recent exploitation of the complete genome sequence of Neisseria meningitidis group B.


In a nutshell:

The human immune system consists of two connected compartments, the innate and the adaptive immune systems which function via the actions of secreted and cellular effectors. The innate and the adaptive immune systems work sequentially to identify invaders and formulate the most appropriate response; this interaction is crucially bridged by specialised antigen-presenting cells (APCs). The innate response, via the action of APCs, sets the scene for the subsequent adaptive response by providing information about the nature of the threat.  Primary exposure to a pathogen or antigen induces the production of a population of adaptive immune cells with antigen specificity that are retained for long periods and provide a rapid response upon subsequent exposure. The vaccine concept is based on stimulating the body’s defense mechanisms against a specific pathogen to establish this immunological memory.  Current vaccine strategies take advantage of immunological mechanisms, and often target the innate immune system and APCs to induce the desired specific adaptive immune response.



How Vaccines Work:

To understand how vaccines work you need to understand the story of two 5-year-old children, John and Robert:

John plays with a child in his class who has measles. Ten days later, John develops high fever, runny nose, “pink eye” and a rash. The rash consists of red bumps that start on his face and work their way down to the rest of his body. After two more days, John starts to have trouble breathing; his breaths are short and rapid. John’s mother takes him to the doctor, where a chest X-ray shows that John has pneumonia (a common complication of measles infection). John is admitted to the hospital, where he stays for five days and finally recovers. After having fought off his measles infection, John will never get measles again. Or, said another way, John has immunity to measles. John is immune because he has cells in his body that can make antibodies to measles virus. These cells, called “memory B cells,” developed during the infection and will remain for the rest of John’s life.

Robert also plays with the child who has measles. However, Robert never develops symptoms of measles: no fever, rash or pneumonia. Robert was infected with measles virus but didn’t get any of the symptoms of measles. This is called an “asymptomatic infection.” Because Robert, like John, also develops memory B cells, he too is immune to measles for the rest of his life. Whereas John had to pay a high price for his immunity, Robert didn’t. Robert was lucky. Although some children don’t get severe infections when they are exposed to measles, most do. Before a measles vaccine was developed in 1963, measles would infect 3 to 4 million children in the US each year and kill about 500.

Vaccines take the luck out of it:

By causing “asymptomatic infections,” vaccines mimic what happened to Robert. This allows children to benefit from the natural immunity that comes with infection without having to suffer the severe, and occasionally fatal, consequences of natural infection. Vaccines remove the element of luck by controlling:

•The potential severity of the pathogen

•The dose of the exposure (smallest amount needed)

•The timing of exposure (before the period of highest risk)


Vaccines work at an individual level to protect the immunized person against the specific disease, as well as at a population level to reduce the incidence of the disease in the population, thereby reducing exposure of susceptible persons and consequent illness. Although the primary measure of effectiveness occurs at an individual level, there is also interest in decreasing or even eliminating disease at a population level.


How vaccines work at the individual level:

The administration of a vaccine antigen triggers an inflammatory reaction that is initially mediated by the innate immune system and subsequently expands to involve the adaptive immune system through the activation of T and B cells. While the majority of vaccines provide protection through the induction of humoral immunity (primarily through B cells), some vaccines such as Bacille Calmette-Guerin (BCG) and herpes zoster act principally by inducing cell-mediated immunity (primarily through T cells). Long-term immunity requires the persistence of antibodies and/or the creation and maintenance of antigen-specific memory cells (priming) that can rapidly reactivate to produce an effective immune response upon subsequent exposure to the same or a similar antigen.


Immunogenicity and markers of protection induced by vaccination:

Immunogenicity means the vaccine’s ability to induce an immune response. Vaccine-induced seroconversion is the development of detectable antigen-specific antibodies in the serum as a result of vaccination; seroprotection is a predetermined antibody level as a result of vaccination, above which the probability of infection is low. The seroprotective antibody level differs depending on the vaccine. A correlate of protection is a specific immune response that is responsible for and statistically linked to protection against infection or disease. Following administration of most vaccines, prevention of infection has been shown to correlate predominantly with the production of antigen-specific antibodies. Serologic markers can be measured using enzyme-linked immunosorbent assays (ELISA), functional antibody activity such as the opsonophagocytic assay (OPA), or both. A surrogate of protection is a substitute immune marker, which may not be linked to protection against infection or disease. For example, serum antibodies may be produced for mucosal vaccines against rotavirus. Although serum antibodies against rotavirus serve as surrogates of protection, they are not necessarily directly protective against infection as this may require mucosal antibodies.


How vaccines work at the population level:

Vaccine efficacy:

Vaccine efficacy is defined as the reduction in the incidence of a disease among people who have received a vaccine compared with the incidence in unvaccinated people. Vaccine efficacy refers to the vaccine’s ability to prevent illness in people vaccinated in controlled studies. Vaccine effectiveness refers to the vaccine’s ability to prevent illness in people vaccinated in broader settings (i.e., the “real world”).
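Efficacy in a controlled trial is conventionally computed from attack rates as VE = (ARU − ARV) / ARU, where ARU and ARV are the attack rates in the unvaccinated and vaccinated groups. A minimal sketch, using entirely hypothetical trial numbers:

```python
# Vaccine efficacy from a controlled trial:
# VE = (ARU - ARV) / ARU, with ARU/ARV the unvaccinated/vaccinated attack rates.
def vaccine_efficacy(cases_vax, n_vax, cases_unvax, n_unvax):
    ar_vax = cases_vax / n_vax
    ar_unvax = cases_unvax / n_unvax
    return (ar_unvax - ar_vax) / ar_unvax

# Hypothetical trial: 10 cases among 10,000 vaccinated participants,
# 100 cases among 10,000 unvaccinated participants.
ve = vaccine_efficacy(10, 10_000, 100, 10_000)
print(f"{ve:.0%}")  # 90%
```

The same formula applied to observational data gives effectiveness, but as the next section explains, retrospective designs then require careful risk adjustment.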



Measuring protection: efficacy versus effectiveness:

In the clinical development of a vaccine, an efficacy study asks the question “Does the vaccine work?” In contrast, an effectiveness study asks the question “Does vaccination help people?” In general, vaccine development proceeds from a study of immunogenicity to a randomized controlled trial that determines vaccine efficacy under ideal conditions. Efficacy studies, however, have several limitations. In an immunogenicity study, when a vaccine is given according to different schedules, the object of the study is not the vaccine itself but the schedules; i.e., what is important is not the “relative immunogenicity” of the vaccine, but which schedule is more protective given the occurrence of the disease that is to be prevented. Furthermore, a clinical trial of vaccine efficacy is unable to predict accurately the level of protection that will be achieved in public health practice. Vaccination effectiveness can be evaluated in a prospective clinical trial, although few such studies have been undertaken. Effectiveness is usually assessed retrospectively, sometimes using a screening test, but more often in a case-control or cohort study. In these studies, rigorous risk adjustment is necessary to ensure the comparability of study populations. Retrospective studies also provide a means for assessing serious but rare vaccine-associated adverse events, an undertaking often needed to maintain public confidence in vaccination programs.


Immunization and Herd Immunity:

R0 is the average number of secondary cases produced by a primary case in a wholly susceptible population. Clearly, an infection cannot maintain itself or spread if R0 is less than 1. The larger the value of R0, the harder it is to eradicate the infection from the community in question. A rough estimate of the level of immunization coverage required can be obtained in the following manner: eradication will be achieved if the proportion immunized exceeds a critical value Vc = 1 − 1/R0. For example, measles has an estimated R0 of 15; therefore, at least 93% (1 − 1/15 ≈ 93%) of the population needs to be immune to prevent transmission of measles. Thus the larger the R0, the higher the coverage required to eliminate the infection. The global eradication of measles, with its R0 of 10 to 20 or more, is therefore almost sure to be more difficult than that of smallpox, with its estimated R0 of 4 to 5. Another example is rubella and measles immunization in the US: rubella has an R0 roughly half that of measles, and indeed rubella has been effectively eliminated in the US while the incidence of measles has declined more slowly. Immunization (vaccine) coverage refers to the proportion of the population (either overall or for particular risk groups) that has been immunized against a disease. High immunization coverage is especially required for diseases with a high reproduction number (R0) to prevent further transmission.
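The critical-coverage formula is easy to apply. A minimal sketch using the R0 values discussed above (measles 15, smallpox 5; rubella's R0 is taken as 7, roughly half that of measles, as an illustrative assumption). Note that 1 − 1/15 works out to about 93%:

```python
# Critical immunization coverage Vc = 1 - 1/R0.
def critical_coverage(r0):
    """Proportion of the population that must be immune to block transmission."""
    return 1 - 1 / r0

# Measles R0 from the text; smallpox ~5; rubella ~7 is an assumed value.
for disease, r0 in [("measles", 15), ("rubella", 7), ("smallpox", 5)]:
    print(f"{disease}: R0 = {r0}, Vc = {critical_coverage(r0):.0%}")
# measles: R0 = 15, Vc = 93%
# rubella: R0 = 7, Vc = 86%
# smallpox: R0 = 5, Vc = 80%
```

The much lower threshold for rubella is consistent with its elimination in the US at coverage levels that have not sufficed to interrupt measles transmission.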


Herd immunity:

Herd immunity or herd effect, also called community immunity, population immunity, or social immunity, describes a form of indirect immunity that occurs when large percentages of a population have become immune to an infectious disease, thereby providing a measure of protection for individuals who are not immune. In a population in which a large number of individuals are immune, chains of infection are likely to be disrupted, stopping or slowing the spread of disease. The greater the proportion of individuals in a community who are immune, the smaller the probability that those who are not immune will come into contact with an infectious individual. An individual’s immunity can be gained through recovering from a natural infection or through artificial means such as vaccination. Some individuals cannot become immune due to medical reasons, so it is important to develop herd immunity to protect these individuals. Once a certain threshold has been reached, herd immunity will gradually eliminate a disease from a population. This elimination, if achieved worldwide, results in the eradication of the disease.

Herd immunity does not apply to all diseases, just those that are contagious, meaning that they can be transmitted from one individual to another. Tetanus, for example, is infectious but not contagious, so herd immunity does not apply to it.

The term “herd immunity” is widely used but carries a variety of meanings. Some authors use it to describe the proportion immune among individuals in a population. Others use it with reference to a particular threshold proportion of immune individuals that should lead to a decline in incidence of infection. Still others use it to refer to a pattern of immunity that should protect a population from invasion of a new infection.
A common implication of the term is that the risk of infection among susceptible individuals in a population is reduced by the presence and proximity of immune individuals (this is sometimes referred to as “indirect protection” or a “herd effect”).


The top box shows an outbreak in a community in which a few people are ill (shown in red) and the rest are healthy but unimmunized (shown in blue); the illness spreads freely through the population. The middle box shows the same population where a small number have been immunized (shown in yellow); those immunized are unaffected by the illness, but others are not. In the bottom box, a critical proportion of the population has been immunized; this prevents the illness from spreading significantly, even to unimmunized people.


Mathematical model of herd immunity:

An important milestone was the recognition by Smith in 1970 and Dietz in 1975 of a simple threshold theorem: if immunity (i.e., successful vaccination) were delivered at random, and if members of a population mixed at random such that on average each individual contacted R0 individuals in a manner sufficient to transmit the infection, then incidence of the infection would decline if the proportion immune exceeded (R0 − 1)/R0, or 1 − 1/R0. This is illustrated in the two figures below.


Definitions of Terms:

Term | Symbol | Definition
Basic reproduction number | R0 | Number of secondary cases generated by a typical infectious individual when the rest of the population is susceptible (i.e., at the start of a novel outbreak)
Critical vaccination level | Vc | Proportion of the population that must be vaccinated to achieve the herd immunity threshold, assuming that vaccination takes place at random
Vaccine effectiveness against transmission | E | Reduction in transmission of infection to and from vaccinated individuals compared with control individuals in the same population (analogous to conventional vaccine efficacy, but measuring protection against transmission rather than against disease)


The diagram above illustrates transmission of an infection with a basic reproduction number R0 = 4. A: transmission over 3 generations after introduction into a totally susceptible population (1 case leads to 4 cases and then to 16 cases). B: expected transmissions if (R0 − 1)/R0 = 1 − 1/R0 = 3/4 of the population is immune. Under this circumstance, all but one of the contacts of each case is immune, so each case leads to only one successful transmission of the infection. This implies constant incidence over time. If a greater proportion is immune, then incidence will decline. On this basis, (R0 − 1)/R0 is known as the “herd immunity threshold.” When a critical proportion of the population becomes immune, called the herd immunity threshold (HIT) or herd immunity level (HIL), the disease may no longer persist in the population, ceasing to be endemic.
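The diagram's arithmetic can be checked with a tiny generational model: each case produces, on average, R0 × (1 − immune fraction) new cases. The function below is a sketch of that calculation, not an epidemiological simulation:

```python
# Expected cases per generation: each case infects, on average,
# R0 * (1 - immune_fraction) susceptible contacts.
def cases_by_generation(r0, immune_fraction, generations, initial_cases=1):
    cases = [initial_cases]
    for _ in range(generations):
        cases.append(cases[-1] * r0 * (1 - immune_fraction))
    return cases

print(cases_by_generation(4, 0.0, 3))   # [1, 4.0, 16.0, 64.0]: fully susceptible
print(cases_by_generation(4, 0.75, 3))  # [1, 1.0, 1.0, 1.0]: at the threshold
print(cases_by_generation(4, 0.80, 3))  # incidence shrinks each generation
```

At exactly 3/4 immune, each case yields one new case (constant incidence); any higher immune fraction drives incidence toward zero.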


Much of the early theoretical work on herd immunity assumed that vaccines induce solid immunity against infection and that populations mix at random, consistent with the simple herd immunity threshold for random vaccination of Vc = 1 − 1/R0, using the symbol Vc for the critical minimum proportion to be vaccinated (assuming 100% vaccine effectiveness). More recent research has addressed the complexities of imperfect immunity, heterogeneous populations, nonrandom vaccination, and “freeloaders”. Assuming a vaccine is 100% effective, the equation for the herd immunity threshold also gives the vaccination level needed to eliminate a disease, Vc. Vaccines are usually imperfect, however, so the effectiveness E of the vaccine against infection transmission in the field must be accounted for:

Vc = (1 − 1/R0) / E


From this equation, it can be observed that if E is less than (1 – 1/R0), then it will be impossible to eliminate a disease even if the entire population is vaccinated. Similarly, waning vaccine-induced immunity, as occurs with acellular pertussis vaccines, requires higher levels of booster vaccination in order to sustain herd immunity. Important among illustrations of this principle are the shifts to multiple doses (up to 20) and to monovalent vaccines in the effort to eliminate polio in India, where the standard trivalent oral polio vaccines and regimens produce low levels of protection.  If a disease has ceased to be endemic to a population, then natural infections will no longer contribute to a reduction in the fraction of the population that is susceptible; only vaccination will contribute to this reduction.
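The consequence of imperfect effectiveness is easy to see numerically. The sketch below applies Vc = (1 − 1/R0) / E with a measles-like R0 of 15 and purely illustrative effectiveness values (97% and 90%):

```python
# Required coverage with an imperfect vaccine: Vc = (1 - 1/R0) / E.
def critical_vaccination(r0, effectiveness):
    return (1 - 1 / r0) / effectiveness

# Measles-like R0 = 15; effectiveness values are illustrative only.
print(f"{critical_vaccination(15, 0.97):.1%}")  # 96.2% coverage needed
print(critical_vaccination(15, 0.90) > 1)       # True: E = 90% is below
                                                # 1 - 1/15 ~ 93%, so Vc > 1
```

Whenever E falls below 1 − 1/R0, the required coverage exceeds 100%, so elimination by vaccination alone is impossible, which is the situation described above.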


Herd immunity through vaccination:

The primary way to boost herd immunity is through the use of vaccines. Their use originated with the observation that milkmaids exposed to cowpox were immune to smallpox, so the practice of inoculating people with the cowpox virus began as a way to prevent smallpox. Well-developed vaccines provide this protection in a far safer way than natural infections, as vaccines generally do not cause the diseases they protect against, and severe adverse effects are significantly less common than complications from natural infections. The immune system does not distinguish between natural infections and vaccines, forming an active response to both, so immunity induced via vaccination is similar to what would have occurred from contracting and recovering from the disease. To achieve herd immunity through vaccination, vaccine manufacturers aim to produce vaccines with low failure rates, and policy makers aim to encourage their use. After the successful introduction and widespread use of a vaccine, sharp declines in the incidence of the diseases it protects against can be observed, in turn decreasing the number of hospitalizations and deaths caused by such diseases.


Herd immunity through passive immunization:

The transfer of maternal antibodies, primarily IgG, across the placenta helps protect fetuses and newborns from disease. After birth, newborns can also acquire these antibodies from colostrum. Since these antibodies provide some degree of protection, newborns provide a slight boost to herd immunity. This boost, however, is temporary, being gradually lost as the presence of maternal antibodies wanes during the first few months of life. The presence of maternal antibodies in a newborn's body often, but not always, adversely affects vaccine effectiveness, so additional doses are recommended for some vaccines while others are not first administered until such antibodies are no longer present in the infant's body. For some diseases that are particularly severe for fetuses and newborns, such as influenza and tetanus, pregnant women may be immunized in order to transfer antibodies to the child. In contrast to natural passive immunity, acquired passive immunity refers to the process of obtaining serum or plasma from immune individuals, extracting the antibodies, and injecting them to protect certain susceptible persons. High-risk groups, such as certain newborns, pregnant women, organ transplantation recipients, and the immunocompromised, including HIV-seropositive individuals, may receive antibody preparations to prevent infections or to reduce the severity of symptoms. As with natural passive immunity, protection is immediate but wanes over time. Because antibody preparations are capable of producing a certain degree of herd immunity, they have been used to control disease outbreaks.


Many examples of herd immunity have been described, illustrating the importance of indirect protection for predicting the short- and long-term impact of vaccination programs, for justifying them economically, and for understanding the nature of the immunity induced by various vaccines. Among the classic examples was the recognition that periodic epidemics of ubiquitous childhood infections such as measles, mumps, rubella, pertussis, chickenpox, and polio arose because of the accrual of a critical number of susceptible individuals in populations, and that epidemics could be delayed or averted by keeping the number of susceptible individuals below this critical density (i.e., by maintaining the proportion immune above some threshold). Impressive examples of indirect protection have been observed after the introduction of conjugate vaccines against pneumococcal and Haemophilus infections. Reductions in disease incidence among cohorts too old to have been vaccinated have accounted for one-third to two-thirds of the total disease reduction attributable to these vaccines in some populations. These indirect effects arise from the ability of conjugate vaccines to protect vaccinees not only against disease but also against nasal carriage, and hence against infectiousness.


Selective vaccination of groups that are important in transmission can slow transmission in general populations or reduce incidence among population segments that may be at risk of severe consequences of infection. Schools play an important role in community transmission of influenza viruses, and thus there has been discussion of slowing transmission either by closing schools or by vaccinating schoolchildren. Selective vaccination of schoolchildren against influenza was policy in Japan during the 1990s and was shown to have reduced morbidity and mortality among the elderly. Analogous issues relate to vaccination against rubella and human papillomavirus (HPV) in males; for each of these examples the consequences of infection in males are relatively minor, so the policy issue becomes whether vaccination of males is warranted to protect females, and many societies have decided in favor of vaccinating males against rubella but not against HPV. A particularly interesting example of using vaccines to reduce transmission is the potential for "transmission-blocking vaccines" for malaria. These vaccines would not protect the individual recipient against infection or disease, but would produce antibodies that block life cycle stages of the malaria parasite in the mosquito. Recent work has shown the biologic feasibility of such vaccines, and models have shown their potential contribution to reducing overall transmission in malaria-endemic communities. They would thus provide the first example of a vaccine that in theory would provide no direct benefit to the recipient. Finally, we may refer to eradication programs based on vaccines, globally successful in the case of smallpox and rinderpest, and at least regionally successful to date in the case of wild poliovirus. The Americas have been free of wild poliovirus circulation for almost 20 years, though the thresholds for herd immunity have proved more elusive in parts of Asia and Africa.
Each of these programs has used a combination of routine vaccination, itself successful in some populations, supplemented by campaigns in high-risk regions and populations in order to stop the final chains of transmission. Such examples illustrate how the direct effect of immunity (i.e., successful vaccination) in reducing infection or infectiousness in certain individuals can decrease the risk of infection among those who remain susceptible in the population. Importantly, it is a vaccine's effect on transmission that is responsible for the indirect effect. If the only effect of a vaccine were to prevent disease but not to alter either the risk of infection or infectiousness, there would be no indirect effect, and no herd immunity. It was once argued, for example, that inactivated polio vaccines protected only against paralysis and not against infection. We now know this is wrong: inactivated polio vaccines can decrease both infection risk and infectiousness, as demonstrated in several countries that interrupted wild poliovirus transmission using only these vaccines.


Life cycle of virus/bacteria vis-à-vis herd immunity:

The entire concept of herd immunity fails to acknowledge that viruses and bacteria have life cycles of their own, and that what turns them on and off may have nothing to do with the percentage of people who have been infected. Consider the SARS outbreak: that virus did not infect 70 or 80 percent of the population and thereby impart herd immunity on the remaining 20 or 30 percent. The virus had a life cycle of its own, and so it came and went without any percentage of the population being protected. There was no herd immunity, and yet the virus died out on its own. We fail to recognize that viruses have life cycles and exist in relationship to other organisms and to us. Something activates them and something stops them, and it need not have anything to do with the percentage of people who have had the illness or who have been vaccinated.


Evolutionary pressure:

Herd immunity itself acts as an evolutionary pressure on certain viruses, influencing viral evolution by encouraging the production of novel strains, referred to in this case as escape mutants, that are able to "escape" from herd immunity and replicate more easily. At the molecular level, viruses escape from herd immunity through antigenic drift, in which mutations accumulate in the portion of the viral genome that encodes the virus's surface antigen, typically a protein of the virus capsid, producing a change in the viral epitope. Alternatively, the reassortment of separate viral genome segments, or antigenic shift, which is more common when there are more strains in circulation, can also produce new serotypes. When either of these occurs, memory T cells no longer recognize the virus, the virus becomes resistant to certain existing antiviral drugs, and herd immunity ceases to be relevant to the dominant circulating strain. For both influenza and norovirus, epidemics temporarily induce herd immunity until a new dominant strain emerges, causing successive waves of epidemics. As this evolution poses a challenge to herd immunity, broadly neutralizing antibodies and "universal" vaccines that can provide protection beyond a specific serotype are in development.


Why is the emergence of vaccine-resistant pathogen strains rare?

Disease control exerts evolutionary pressures that can lead to the evolution of resistance. This has been seen in spectacular fashion in the evolution of resistance to antibiotics, antivirals, and antiparasitics. Despite intense (and often successful) attempts to control infectious diseases through vaccination, there is still rather little evidence of the emergence of vaccine-resistant pathogen strains. If vaccine-induced immunity is less cross-reactive than naturally acquired immunity, there may be a level of vaccine coverage above which a vaccine-resistant strain will emerge as a result of the vaccination campaign. This situation is illustrated in the following example. Vaccination begins at time 3 years. There follows a period of very low incidence (the honeymoon period) before epidemics of the wild-type strain restart. Note that vaccine efficacy remains at 80% during these post-honeymoon epidemics. The post-honeymoon epidemic that starts at time 15 years is a result of the slow accumulation of unvaccinated susceptibles. A small number of those who have been vaccinated are also infected because of the incomplete protection conferred by the vaccine. Several decades later, a much larger epidemic occurs, and at the same time vaccine efficacy plummets. The vaccine-resistant strain has achieved competitive dominance as a result of the growing number of vaccinated individuals. These vaccinated people are well protected against the wild-type strain but have only minimal protection against the vaccine-resistant strain. The vaccinated reproductive rate for the vaccine-resistant strain is larger than that for the wild-type strain. It takes several decades of accumulation of vaccinated people before this shift in competitive advantage manifests itself in epidemics of the vaccine-resistant strain. It is not, however, an unavoidable consequence of vaccination.
Highly cross-reactive and immunogenic vaccines can lead to the eradication of both strains at coverage levels below those at which the vaccine-resistant strain gains the competitive advantage; a vaccine with greater cross-reactivity will not face these problems. Alternatively, low levels of vaccination leave the wild-type strain the competitive superior: if vaccination coverage had been much lower, the vaccine-resistant strain would never have gained the competitive advantage. Thus, there are three possible explanations for why we have not seen outbreaks of vaccine resistance in response to the major vaccination campaigns against childhood infectious diseases. The first is that several decades have not yet passed since vaccination began. The second is that vaccine coverage is too low to give the competitive advantage to resistant strains. The third is that current vaccines give enough cross-immunity that resistant strains will never emerge.
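The shift in competitive advantage described above can be sketched with effective reproduction numbers. Under coverage p, a strain spreads roughly with R0 × (1 − p·E), where E is the vaccine's protection against transmission of that strain. All parameter values below are assumptions for illustration; the resistant strain is given a fitness cost (a lower intrinsic R0), which is why the wild-type strain dominates at low coverage.

```python
def effective_r(r0: float, coverage: float, protection: float) -> float:
    """Approximate reproduction number of a strain when a fraction
    `coverage` is vaccinated with per-strain protection `protection`."""
    return r0 * (1.0 - coverage * protection)

R0_WILD, R0_RESISTANT = 5.0, 4.0   # assumed: resistant strain pays a fitness cost
E_WILD, E_RESISTANT = 0.80, 0.10   # assumed: vaccine barely covers the resistant strain

for p in (0.2, 0.5, 0.9):
    rw = effective_r(R0_WILD, p, E_WILD)
    rr = effective_r(R0_RESISTANT, p, E_RESISTANT)
    winner = "resistant" if rr > rw else "wild-type"
    print(f"coverage {p:.0%}: wild-type R={rw:.2f}, resistant R={rr:.2f} ({winner} favored)")
```

At 20% coverage the wild-type strain is still favored; by 50% coverage the poorly covered resistant strain overtakes it, mirroring the crossover in the example above.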


Free riding:

Herd immunity is a public good because it is non-excludable, meaning that there is no way to exclude people from using it, and non-rivalrous, meaning that one person's use of herd immunity does not restrict others' use of it. Like other public goods, herd immunity is vulnerable to the free-rider problem. Individuals who lack immunity, primarily those who choose not to vaccinate, free ride off the herd immunity created by those who are immune, benefiting from it without contributing to it. Not all free riders are adamantly opposed to vaccination; some may just be hesitant to vaccinate. As the number of free riders in a population increases, outbreaks of preventable diseases become more common. Individuals may choose to free ride for a variety of reasons, including bandwagoning or groupthink, social norms or peer pressure, religious beliefs, the perceived effectiveness of a vaccine, mistrust of vaccines or public health officials, and flawed assessment of infection and vaccine risks. Most important, individuals are more likely to free ride if vaccination rates are high enough to convince them that they may not need to be immune because a sufficient number of others already are. This makes vaccination itself a social dilemma: individuals can benefit from being selfish by choosing not to vaccinate, but if everyone behaves in this manner, the entire community suffers. As a major goal of public health officials is to control the spread of infectious diseases, it is necessary to deal with free riders in a responsible manner. The availability of philosophical and personal-belief exemptions from vaccination significantly increases the number of free riders over time, jeopardizing herd immunity in certain communities, so efforts should be made either to prevent their use or to make it more difficult.
Some free riders can be encouraged to become immune by emphasizing to them the educational, social, and economic benefits of vaccination, such as improved school attendance, decreased health care expenditures, and increased life expectancy. Likewise, encouraging altruism and social responsibility may shift some individuals from being self-interested to doing what is best for the entire community. Many nonvaccinators lack a general understanding of, or are unsure about, vaccines and the diseases they protect against, so education campaigns have the potential to positively influence these individuals' vaccination decisions. Punishing nonvaccinators could undermine trust between public health officials and the community, so incentives and rewards for becoming immune should be created instead. Some people cannot become immune for medical reasons; ideally, only these individuals should be permitted to free ride.
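The free-rider incentive can be sketched with a crude homogeneous-mixing model (an illustration of my own, not from the source): the risk to an unvaccinated person falls toward zero as coverage approaches the herd threshold, which is exactly what makes free riding tempting. The final-size relation z = 1 − exp(−R_eff·z) is solved by fixed-point iteration; R0 = 4 and a perfect vaccine are assumed.

```python
import math

def attack_rate(r_eff: float, iters: int = 200) -> float:
    """Solve the final-size equation z = 1 - exp(-r_eff * z) for the
    attack rate among susceptibles by fixed-point iteration."""
    z = 0.5
    for _ in range(iters):
        z = 1.0 - math.exp(-r_eff * z)
    return z

R0 = 4.0  # assumed, with a perfect vaccine: R_eff = R0 * (1 - coverage)
for coverage in (0.0, 0.5, 0.9):
    risk = attack_rate(R0 * (1.0 - coverage))
    print(f"coverage {coverage:.0%}: risk to an unvaccinated person ~ {risk:.2f}")
```

Once coverage exceeds the 75% threshold for R0 = 4, the epidemic cannot take off and an unvaccinated individual's risk is essentially zero, so the selfish choice and the community's interest diverge.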


My view on infectious disease eradication by vaccines:

Let me start with an example.

Suppose you have a vaccine which is 100% effective, it is given to 100% of the population, and no extra-human reservoir for the microorganism exists. Logic says that in this case the disease will be eradicated, as all people receive the vaccine and all are protected. Eradication of a disease is a more demanding goal than control, usually requiring the reduction to zero of cases in a defined geographic area; eradication is achieved when elimination can be sustained without ongoing interventions. Is it possible? No. There is no vaccine that is 100% effective. No vaccine is given to 100% of the population, but the larger the vaccinated population, the greater the likelihood of disease eradication. Herd immunity says that once a threshold proportion of the population is vaccinated, disease outbreaks will be prevented and unvaccinated individuals will be protected. If the basic reproduction number R0 is less than one, the disease will die out in the long run and be spontaneously eradicated without any vaccine. As R0 increases, the disease becomes more transmissible, and the likelihood of eradication falls accordingly. R0 is the average number of secondary cases produced by a primary case in a wholly susceptible population. The larger the R0, the higher the vaccine coverage required to eliminate the infection. If vaccination does not confer solid immunity against infection to all recipients, the threshold level of vaccination required to protect a population increases. In other words, if a vaccine is not 100% effective, more people need to be vaccinated to confer herd immunity; and below a critical level of effectiveness, no disease eradication can be achieved even if 100% of the population is vaccinated. So diseases with a moderate R0 and an effective vaccine can be eradicated (e.g., smallpox). Tuberculosis cannot be eradicated because BCG is not effective enough. Measles cannot be eradicated despite an effective vaccine because its R0 is very high.
In other words, the possibility of disease eradication is directly proportional to vaccine efficacy and the vaccinated population but inversely proportional to R0.


When vaccine efficacy is 100%, the vaccinated population is 100%, and no extra-human reservoir for the microorganism exists, the disease will be eradicated irrespective of R0.

When R0 is less than 1, the disease will be eradicated irrespective of vaccination.

We are discussing diseases between these two extremes, where this relationship becomes relevant.
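The in-between regime can be summarized by one inequality: elimination requires the effective reproduction number R0(1 − p·E) to fall below 1, where p is coverage and E efficacy. A minimal sketch follows; the R0 and efficacy figures are approximate, illustrative assumptions.

```python
def eradication_possible(r0: float, efficacy: float, coverage: float) -> bool:
    """Elimination requires R0 * (1 - coverage * efficacy) < 1."""
    return r0 * (1.0 - coverage * efficacy) < 1.0

# Smallpox-like parameters (R0 ~ 5, E ~ 95%) at 90% coverage:
print(eradication_possible(5, 0.95, 0.9))    # True: effective R ~ 0.73
# Measles-like parameters (R0 ~ 15, E ~ 95%) at the same coverage:
print(eradication_possible(15, 0.95, 0.9))   # False: effective R ~ 2.2
```

The same vaccine efficacy and coverage that comfortably eliminate a smallpox-like disease leave a measles-like disease well above threshold, which is the point of the proportionality statement above.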


Let us discuss smallpox eradication. The R0 of smallpox is between 3.5 and 6 and vaccine efficacy is 95%, but WHO did not need to vaccinate the majority of the world's population to eradicate smallpox because of the disease's unique features. Smallpox was totally eradicated by a lengthy and painstaking process, which identified all cases and their contacts and ensured that they were all vaccinated. Smallpox eradication was accomplished with a combination of focused surveillance, quickly identifying new smallpox cases, and ring vaccination. "Ring vaccination" meant that anyone who could have been exposed to a smallpox patient was tracked down and vaccinated as quickly as possible, effectively corralling the disease and preventing its further spread. Smallpox was a good candidate for eradication for several reasons. First, the disease is highly visible: smallpox patients develop a rash that is easily recognized. In addition, the time from exposure to the initial appearance of symptoms is fairly short, so the disease usually cannot spread very far before it is noticed. Workers from the World Health Organization found smallpox patients in outlying areas by displaying pictures of people with the smallpox rash and asking if anyone nearby had a similar rash. Second, only humans can transmit and catch smallpox. Some diseases have an animal reservoir, meaning they can infect other species besides humans. Yellow fever, for example, infects humans but can also infect monkeys. If a mosquito capable of spreading yellow fever bites an infected monkey, the mosquito can then give the disease to humans. So even if the entire population of the planet could somehow be vaccinated against yellow fever, its eradication could not be guaranteed. The disease could still be circulating among monkeys, and it could re-emerge if human immunity ever waned. (The discovery of an animal reservoir for yellow fever was in fact what derailed a yellow fever eradication effort in the early 1900s.)
Smallpox, however, can infect only humans. In effect, aside from the human population, it has nowhere to hide. WHO trained vaccinators quickly, and they could immunize large groups of people in a short time. These unique features of smallpox are missing in polio and measles, which is why neither has yet been eradicated despite worldwide immunization and effective vaccines. Diseases that are transmissible during the incubation period (e.g., measles) do not allow contacts to be traced and vaccinated fast enough. Diseases in which most infections are invisible (e.g., about 90% of poliovirus infections are asymptomatic) hinder detection of new cases and thus vaccination efforts based on the location of new cases and potential exposure of other individuals. Measles has an R0 of 14 to 18; therefore, despite an effective vaccine and worldwide immunization, measles eradication remains a dream.


Vaccine and virus mutation:

The purpose of vaccines is not to prevent a virus from mutating; the purpose of a vaccine is to prevent infection or disease caused by a virus. However, with certain viruses, like smallpox, effective vaccination programs can eliminate the virus before it has a chance to mutate. This works best with viruses that are slow to mutate (e.g., DNA viruses), infect only humans, and cannot survive for long outside the host. The faster a virus mutates, the more likely a person is to be reinfected (e.g., a person can catch the flu every year or potentially even multiple times within the same season) and the more often a vaccine must be reformulated and readministered (e.g., the flu vaccine requires reformulation and readministration every year). If someone is infected, the immune system will try to stop the infection by killing the viruses. Some viruses may survive long enough to spread to another host; those viruses can carry slight mutations that allowed them to survive long enough to reproduce. So the only real way to stop a virus from mutating is to kill every last one before it can reproduce and spread to a new host. Thus far, only vaccines have allowed us to accomplish this feat (e.g., smallpox). During natural infection, it takes time for the immune system to kill the virus, and that time allows virus multiplication, subsequent mutation, and spread of the mutated virus; during infection of an immunized individual, the immune response is quick and robust, killing viruses before they have a chance to mutate. In other words, vaccination has a greater ability to prevent virus mutation than natural infection does.


Determinants of vaccine response in individuals:

The strength and duration of the immune system's response to a vaccine are determined by a number of factors, as outlined in the table below.

Vaccine type
The type of vaccine antigen and its immunogenicity directly influence the nature of the immune response that is induced to provide protection:

  • Live, attenuated vaccines generally induce a significantly stronger and more sustained antibody response.
  • Inactivated vaccines often require adjuvants to enhance antibody response, usually require multiple doses to generate high and sustained antibody responses, and induce vaccine antibodies that decline over time to below protective thresholds unless repeat exposure to the antigen reactivates immune memory. Pure polysaccharide vaccines induce limited immune response and do not induce immunologic memory.
Vaccine adjuvants and carrier proteins
  • The addition of adjuvants to inactivated vaccines enhances immune response and extends the duration of B and T cell activation.
  • Conjugating (linking) a polysaccharide with a carrier protein (protein that is easily recognized by the immune system such as diphtheria or tetanus) leads to a significantly higher immune response.
Optimal dose of antigen
  • Higher doses of inactivated antigens up to a certain threshold elicit higher antibody responses.
Interval between doses
  • The recommended interval between doses allows the development of successive waves of antigen-specific immune responses without interference, as well as the maturation of memory cells.
Age of vaccine recipient
  • In early life, the immune system is immature, resulting in limited immune responses to vaccines. For example, children less than 2 years of age do not respond to polysaccharide vaccines.
  • In general, antibody responses to vaccines received early in life decline rapidly for most, but not all (e.g., hepatitis B) vaccines.
  • In older age, immune responses decline (immunosenescence) and can result in an increased incidence and severity of infectious diseases and a reduction in the strength and persistence of antibody responses to vaccines.
Pre-existent antibodies
  • The immune response to vaccines received early in life may be influenced by the presence of maternal antibodies transferred across the placenta.
  • The immune response to live vaccines will be influenced by passively transferred antibodies, such as after blood product transfusion and immune globulins.
Status of the immune system
  • Immune response to vaccines will be modified by the status of vaccine recipient’s immune system.


Vaccine-induced seropositivity:

Vaccine-induced seropositivity (VISP) is the phenomenon wherein a person who has received a vaccine against a disease thereafter gives a positive or reactive test result for that disease, despite not actually having it. This happens because many vaccines encourage the body to produce antibodies against a particular pathogen, and blood tests often determine only whether a person has those antibodies, regardless of whether they came from infection or vaccination. VISP is a particular concern in HIV vaccine research, because people who test positive for HIV, even if the result reflects vaccination rather than infection, may face discrimination associated with HIV infection.


Influenza vaccine:

What does vaccine “match” and “mismatch” mean?

Vaccines against seasonal influenza must be frequently updated and the process for selecting the viruses and manufacturing the influenza vaccines starts several months before the influenza season begins. Detailed, timely data on viruses that are circulating and infecting humans globally are gathered, shared among countries and scientists, and are eventually used to formulate the upcoming seasonal influenza vaccines. Influenza viruses are constantly changing, including during the time between vaccine virus selection and the influenza season. If these changes lead to antigenic differences between the circulating seasonal influenza viruses and those viruses that are included in the seasonal influenza vaccine, then the vaccine and circulating viruses may not be closely related. The degree of similarity or difference between the circulating viruses and the viruses in the vaccines is often referred to as “vaccine match” or “vaccine mismatch”.


Can I get vaccinated and still get influenza?


It is possible to get influenza-like illness even if you have been vaccinated. This is possible for the following reasons:

• Antibodies that protect against infection take approximately two weeks to develop after vaccination. You may be exposed to an influenza virus shortly before or after getting vaccinated. This exposure may result in your becoming ill before the vaccine begins to protect you.

• The influenza vaccine is made to protect against viruses that were identified in the previous influenza season as likely to become widespread. You may be exposed to a virus not included in the vaccine and develop illness.

• The effectiveness of influenza vaccines can vary widely. Moreover, a person’s susceptibility to infection and response to vaccination are influenced by numerous factors. In general, the influenza vaccine works best among healthy younger adults and older children. Influenza vaccination is not a perfect tool, but it is the best available way to protect against influenza infection.

• Respiratory pathogens that are not related to influenza viruses can cause “flu-like” symptoms. The influenza vaccine does not protect you against these pathogens. You won’t know for sure that you are infected with influenza virus unless you are tested.


What is the vaccine effectiveness of seasonal influenza vaccines?

The vaccine effectiveness of seasonal influenza vaccines is a measure of how well the seasonal influenza vaccine prevents influenza virus infection in the general population during a given influenza season. If vaccine effectiveness is high, individuals who have received the seasonal influenza vaccine are less likely to have an influenza illness. If vaccine effectiveness is low, the vaccine is less likely to prevent influenza illness in the vaccinated population. It is important to remember that even with low vaccine effectiveness, substantial numbers of influenza-related illnesses can still be prevented. Each season, studies are conducted in some countries to measure the effectiveness of the influenza vaccine. During seasons when most circulating influenza viruses are similar to the viruses in the influenza vaccine, the vaccine can reduce the risk of illness caused by influenza virus infection by about 50-60% in the overall population. The efficacy of the influenza vaccine in 2014 was only 30%.
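As a rough illustration of how such effectiveness figures arise, a cohort-style estimate compares attack rates in vaccinated and unvaccinated groups, VE = 1 − ARV/ARU (the relative risk reduction). The attack rates below are invented for the sketch, not data from the source.

```python
def vaccine_effectiveness(ar_vaccinated: float, ar_unvaccinated: float) -> float:
    """Cohort-style estimate: VE = 1 - ARV/ARU (relative risk reduction)."""
    return 1.0 - ar_vaccinated / ar_unvaccinated

# Illustrative: 2% of vaccinated vs 5% of unvaccinated people fell ill.
ve = vaccine_effectiveness(0.02, 0.05)
print(f"estimated VE ~ {ve:.0%}")  # about 60%
```

A VE of 60% means vaccinated people had a 60% lower risk of influenza illness than unvaccinated people in that season, which is how figures like the "50-60%" above are interpreted.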


M2e-based Universal Influenza Vaccines:

The successful isolation of a human influenza virus in 1933 was soon followed by the first attempts to develop an influenza vaccine. Nowadays, vaccination is still the most effective method to prevent human influenza disease. However, licensed influenza vaccines offer protection only against antigenically matching viruses, and the composition of these vaccines needs to be updated nearly every year. Vaccines that target conserved epitopes of influenza viruses would in principle not require such updating and would probably have a considerable positive impact on global human health in case of a pandemic outbreak. The extracellular domain of the Matrix 2 (M2e) protein is an evolutionarily conserved region in influenza A viruses and a promising epitope for designing a universal influenza vaccine. Such a universal influenza vaccine could be used to prevent seasonal influenza, provided that it proves to be non-inferior to the existing seasonal influenza vaccines, which mainly rely on the induction of strain-specific virus-neutralizing antibodies. M2e is a highly conserved target for universal influenza A vaccine development. Different types of M2e-based vaccines, such as DNA vaccines, protein vaccines, VLP vaccines, and vectored vaccines, are all able to provide a certain level of broad-spectrum protection in animal models. M2e-specific antibodies, mainly IgG, are the main actors in immune protection and act by engaging Fc receptor-expressing immune cells such as alveolar macrophages. It is also well documented that mucosal immunization with M2e-based vaccines offers better protection in mouse models than parenteral immunization strategies; this improved protection may be attributable to the induction of M2e-specific IgA. The infection-permissive character of M2e-based vaccines can be considered an advantage when vaccinating immunologically naïve individuals.
Because M2e-immunity does not neutralize the virus, the limited virus replication still induces cross-reactive T cell responses against other conserved viral antigens such as NP and M1. However, M2e will likely not be a complete substitute for the currently licensed influenza vaccines that are able to confer much stronger protection, be it against a very narrow antigenic range of viruses. In the future, with many other universal influenza vaccine candidates on the horizon, M2e-conjugate vaccines will likely find a place as part of a vaccine that is a blend of different conserved epitopes that together may offer strong, long lasting, and foremost broad immune protection. Whether such a vaccine will perform better clinically than a fully antigenic matched seasonal vaccine remains to be seen. However, such universal vaccines would prove their value in the case of a pandemic.


Polio vaccine:

Two polio vaccines are used throughout the world to combat poliomyelitis (or polio). The first was developed by Jonas Salk through the use of HeLa cells and first tested in 1952. Announced to the world by Dr Thomas Francis Jr. on 12 April 1955, it consists of an injected dose of inactivated (dead) poliovirus. An oral vaccine was developed by Albert Sabin using attenuated poliovirus. Human trials of Sabin’s vaccine began in 1957, and it was licensed in 1962. There is no long-term carrier state for poliovirus in immunocompetent individuals, polioviruses have no non-primate reservoir in nature (although infections have been induced in transgenic mice), and survival of the virus in the environment for an extended period of time appears to be remote. Therefore, interruption of person-to-person transmission of the virus by vaccination is the critical step in global polio eradication. The two vaccines have eradicated polio from most countries in the world and reduced the worldwide incidence from an estimated 350,000 cases in 1988 to just 223 cases in 2012. This represents a 99.9% reduction, but recently there has been an alarming resurgence of cases in some countries. In November 2013, the World Health Organization announced a polio outbreak in Syria. In May 2014, the WHO declared a global health emergency due to the spread of polio, for only the second time since the regulations permitting it to do so were adopted in 2007. As per the WHO, Pakistan, Syria and Cameroon have recently allowed the virus to spread: to Afghanistan, Iraq and Equatorial Guinea, respectively.
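As a quick sanity check on the figures above, the drop from an estimated 350,000 cases in 1988 to 223 cases in 2012 can be verified with a few lines of Python (the numbers are taken directly from the text):

```python
# Reported worldwide polio incidence (figures from the text)
cases_1988 = 350_000
cases_2012 = 223

# Fractional reduction between the two years
reduction = (cases_1988 - cases_2012) / cases_1988
print(f"Reduction: {reduction:.1%}")  # prints "Reduction: 99.9%"
```

Rounded to one decimal place, this reproduces the 99.9% reduction quoted above.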




HIV vaccine:

An HIV vaccine is a vaccine that would either protect individuals who do not have HIV from contracting the virus, or have a therapeutic effect for persons who have or later contract HIV/AIDS. Currently, there is no effective HIV vaccine, but many research projects running clinical trials seek to create one. There is evidence that a vaccine may be possible. Work with monoclonal antibodies (MAbs) has shown that the human body can defend itself against HIV, and certain individuals remain asymptomatic for decades after HIV infection. Potential antibody candidates and early-stage results from clinical trials have been announced. One HIV vaccine candidate that showed some efficacy was studied in RV 144, a trial in Thailand that began in 2003 and first reported a positive result in 2009. Many trials have shown no efficacy, including the STEP study and the HVTN 505 trial. The urgency of the search for a vaccine against HIV stems from the AIDS-related death toll of over 25 million people since 1981. Indeed, in 2002, AIDS became the primary cause of mortality due to an infectious agent in Africa.


There are a number of factors that cause development of an HIV vaccine to differ from the development of other classic vaccines:

•Classic vaccines mimic natural immunity against reinfection generally seen in individuals recovered from infection; there are almost no recovered AIDS patients.

•Most vaccines protect against disease, not against infection; HIV infection may remain latent for long periods before causing AIDS.

•Most effective vaccines are whole-killed or live-attenuated organisms; killed HIV-1 does not retain antigenicity and the use of a live retrovirus vaccine raises safety issues.


HIV structure:

The epitopes of the HIV viral envelope are more variable than those of many other viruses. Furthermore, the functionally important epitopes of the gp120 protein are masked by glycosylation, trimerisation and receptor-induced conformational changes making it difficult to block with neutralising antibodies.

The ineffectiveness of previously developed vaccines primarily stems from two related factors.

•First, HIV is highly mutable. Because of the virus’s ability to rapidly respond to selective pressures imposed by the immune system, the population of virus in an infected individual typically evolves so that it can evade the two major arms of the adaptive immune system: humoral (antibody-mediated) and cellular (T-cell-mediated) immunity.

•Second, HIV isolates are themselves highly variable. HIV can be categorized into multiple clades and subtypes with a high degree of genetic divergence. Therefore, the immune responses raised by any vaccine need to be broad enough to account for this variability. Any vaccine that lacks this breadth is unlikely to be effective.

The difficulties in stimulating a reliable antibody response have led to the attempts to develop a vaccine that stimulates a response by cytotoxic T-lymphocytes. Another response to the challenge has been to create a single peptide that contains the least variable components of all the known HIV strains.


According to Gary J. Nabel of the Vaccine Research Center, NIH, in Bethesda, Maryland, several hurdles must be overcome before scientific research will culminate in a definitive AIDS vaccine. First, greater translation between animal models and human trials must be established. Second, new, more effective, and more easily produced vectors must be identified. Finally, and most importantly, there must arise a robust understanding of the immune response to potential vaccine candidates. Emerging technologies that enable the identification of T-cell-receptor specificities and cytokine profiles will prove valuable in hastening this process. In July 2012, one scientific group speculated that an effective vaccine for HIV would be completed by 2019.

A killed whole-HIV vaccine, SAV001, completed a successful US FDA phase 1 human clinical trial in September 2013. This HIV vaccine uses a “dead” version of HIV-1 for the first time. The outcome of the phase 1 trial was that the vaccine showed no serious adverse effects while boosting HIV-1-specific antibodies. According to Dr. Chil-Yong Kang of Western University’s Schulich School of Medicine & Dentistry in Canada, the developer of this vaccine, antibodies against the gp120 surface antigen and the p24 capsid antigen increased 8-fold and 64-fold, respectively, after vaccination.

There have been reports that HIV patients coinfected with GBV-C can survive longer than those without GBV-C, but the patients may differ in other ways. There is currently active research into the virus’s effects on the immune system in patients coinfected with GBV-C and HIV.

A promising new approach to a live attenuated HIV-1 vaccine is being pursued by scientists using a genetically modified form of the HIV virus. The new method involves manipulating the virus’s codons (the three-nucleotide sequences that make up the genetic code) so that proper protein translation, and hence replication, depends on an unnatural amino acid. Because this amino acid is foreign to the human body, the virus cannot continue to reproduce.
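Antibody rises such as the 8-fold and 64-fold figures quoted above are conventionally measured as two-fold serial-dilution titre steps. A short, purely illustrative Python sketch (using only the fold-rises stated in the text) converts between the two representations:

```python
import math

# Fold-rises reported for the SAV001 trial (figures from the text)
gp120_fold_rise = 8   # anti-gp120 antibody
p24_fold_rise = 64    # anti-p24 antibody

# Each two-fold serial dilution is one titre step, so the number of
# steps is the base-2 logarithm of the fold-rise
gp120_steps = math.log2(gp120_fold_rise)  # 3.0 dilution steps
p24_steps = math.log2(p24_fold_rise)      # 6.0 dilution steps
print(gp120_steps, p24_steps)
```

An 8-fold rise thus corresponds to three dilution steps and a 64-fold rise to six, which is how such results are usually read off a titration plate.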


Types of HIV vaccine:

To date, over 40 different HIV vaccines have been tested in several thousand volunteers. Most of this research has consisted of early safety and efficacy studies of recombinant proteins, produced in a variety of different systems. Despite some encouraging evidence of immune responses in people, it is unclear whether many of these would prevent HIV infection. Typically, vaccines are administered to large numbers of people at high risk of infection. After a certain time, the vaccinated participants’ experiences are compared to those of people who received a placebo. As described in what an HIV vaccine would have to do, this may involve assessing the antibodies present in their blood, or the response of their CD8 T-cells to HIV in the test tube, or looking for HIV seroconversions in the trial participants. Researchers have explored a number of strategies that they hope will produce protective immune responses.

These include:

•Live attenuated vaccines.

•Inactivated vaccines.

•Recombinant sub-unit vaccines.

•Modified envelope vaccines.

•Peptide vaccines.

•DNA vaccines.

•Recombinant vectored vaccines.

•Other vectors.


•Vaccines against viral toxins.

More often than not, studies use a combination of the above vaccine types in ‘prime and boost’ regimens, in which two or more different vaccines are used to try to broaden or intensify immune responses. Examples include a vector virus to prime a T-cell response with a subunit (peptide) booster or a DNA vaccine to produce antibodies, or two different vector viruses expressing the same gene sequence.



One of the most frustrating quests has been for a malaria vaccine. The most common parasites responsible for malaria (plasmodia) have demonstrated an impressive ability to circumvent eradication efforts by becoming drug-resistant. The fact that the WHO recently announced that it was exceedingly pleased with a new vaccine that protects just 30 percent of those immunized indicates the immense difficulty of producing a malaria vaccine. Although this percentage is very low compared with other vaccines, given the severity of malaria worldwide and the fact that it kills more than one million and infects more than 300 million children a year, even such limited coverage could save thousands if not millions of lives in the hardest-hit areas of the globe.
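To put the 30 percent figure in perspective, a back-of-the-envelope Python calculation (a deliberately crude upper-bound sketch that assumes universal coverage and uses only the round numbers quoted in the text) shows why even modest efficacy matters at this scale:

```python
# Round figures quoted in the text
annual_malaria_deaths = 1_000_000   # "more than one million ... a year"
vaccine_efficacy = 0.30             # protects ~30 percent of those immunized

# Crude upper bound: deaths averted if everyone at risk were vaccinated
deaths_averted = int(annual_malaria_deaths * vaccine_efficacy)
print(f"Up to ~{deaths_averted:,} deaths averted per year")
```

Even under far less generous coverage assumptions, a vaccine of such limited efficacy could still avert deaths on the order of tens to hundreds of thousands per year in the hardest-hit regions.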


Types of malaria vaccines:

Early malaria vaccine development efforts focused on the parasite’s pre-erythrocytic stage—the period during which the organism, in the form of a sporozoite, enters a person’s blood stream and heads for the liver, where it matures and begins a prolific multiplication process. Today, vaccine developers are trying to develop three types of vaccines:

•Pre-erythrocytic vaccine candidates

•Blood-stage vaccine candidates

•Transmission-blocking vaccine candidates


1. Pre-erythrocytic vaccine candidates:

Pre-erythrocytic vaccine candidates aim to protect against the early stage of malaria infection—the stage at which the parasite enters or matures in an infected person’s liver cells. These vaccines would elicit an immune response that would either prevent infection or attack the infected liver cell if infection does occur. These candidates include:

•Recombinant or genetically engineered proteins or antigens from the surface of the parasite or from the infected liver cell.

•DNA vaccines that contain the genetic information for producing the vaccine antigen in the vaccine recipient.

•Live, attenuated vaccines that consist of a weakened form of the whole parasite (the sporozoite) as the vaccine’s main component.

2. Blood-stage vaccine candidates:

Blood-stage vaccine candidates target the malaria parasite at its most destructive stage—the rapid replication of the organism in human red blood cells. Blood-stage vaccines do not aim to block all infection. They are expected to decrease the number of parasites in the blood, and in so doing, reduce the severity of disease. Evidence suggests that people who have survived regular exposure to malaria develop natural immunity over time. The goal of a vaccine that contains antigens or proteins from the surface of the blood-stage parasite (the merozoite) would be to allow the body to develop that natural immunity with much less risk of getting ill.

3. Transmission-blocking vaccine candidates:

Transmission-blocking vaccine candidates seek to interrupt the life cycle of the parasite by inducing antibodies that prevent the parasite from maturing in the mosquito after it takes a blood meal from a vaccinated person. These vaccines would not prevent a person from getting malaria, nor would they lessen the symptoms of disease. They would, however, limit the spread of infection by preventing mosquitoes that fed on an infected person from spreading malaria to new hosts. A successful transmission-blocking vaccine would be expected to reduce deaths and illness related to malaria in at-risk communities.


Ebola vaccine:

Ebola vaccines currently in trials include:

•A DNA-based plasmid vaccine that primes host cells with some of the Ebola proteins.

•A vaccine based on a replication incompetent chimpanzee respiratory virus engineered to express a key Ebola protein.

•A live attenuated virus from the same family of viruses that causes rabies, also engineered to express a critical Ebola protein.

•A vaccine based on a vaccinia virus and engineered to express a critical Ebola protein.

Each of those strategies has drawbacks in terms of safety and delivery.

Whole-virus vaccines have long been used to successfully prevent serious human diseases, including polio, influenza, hepatitis and human papillomavirus-mediated cervical cancer. The advantage conferred by inactivated whole-virus vaccines such as the one devised by Halfmann, Kawaoka and their colleagues is that they present the complete range of proteins and genetic material to the host immune system, which is then more likely to trigger a broader and more robust immune response. An Ebola whole-virus vaccine, constructed using a novel experimental platform, has been shown to effectively protect monkeys exposed to the often fatal virus. The vaccine, described in the journal Science (March 2015), was developed by a group led by Yoshihiro Kawaoka, a University of Wisconsin-Madison expert on avian influenza, Ebola and other viruses of medical importance. It differs from other Ebola vaccines because, as an inactivated whole-virus vaccine, it primes the host immune system with the full complement of Ebola viral proteins and genes, potentially conferring greater protection. In terms of efficacy, this affords excellent protection. It is also a very safe vaccine.

The vaccine was constructed on an experimental platform first devised in 2008 by Peter Halfmann, a research scientist in Kawaoka’s lab. The system allows researchers to safely work with the virus thanks to the deletion of a key gene known as VP30, which the Ebola virus uses to make a protein required for it to reproduce in host cells. Ebola virus has only seven genes and, like most viruses, depends on the molecular machinery of host cells to grow and become infectious. By engineering monkey kidney cells to express the VP30 protein, the virus can be safely studied in the lab and used as a basis for devising countermeasures such as a whole-virus vaccine. The vaccine reported by Kawaoka and his colleagues was additionally inactivated chemically with hydrogen peroxide, according to the new Science report.


Dengue vaccine:

While no licensed dengue vaccine is available, several vaccine candidates are currently being evaluated in clinical studies.

In light of the increasing rate of dengue infections throughout the world despite vector-control measures, several dengue vaccine candidates are in development. The candidate currently at the most advanced clinical development stage, a live-attenuated tetravalent vaccine based on chimeric yellow fever-dengue virus (CYD-TDV), has progressed to phase III efficacy studies. Potential dengue vaccines must be tetravalent to induce an immune response against each of dengue’s four serotypes. Results from a phase III multicenter efficacy study in Latin America were published in November 2014: Sanofi Pasteur, the vaccines division of Sanofi, announced the publication of the detailed results of this final landmark phase III clinical efficacy study in The New England Journal of Medicine. Overall efficacy against any symptomatic dengue disease was 60.8 percent in children and adolescents 9-16 years old who received three doses of the vaccine. Analyses show 95.5 percent protection against severe dengue and an 80.3 percent reduction in the risk of hospitalization during the study. The results of this second phase III efficacy study confirm the high efficacy against severe dengue and the reduction in hospitalization observed during the 25-month active surveillance period of the first phase III efficacy study conducted in Asia, highlighting the consistency of the results across the world. However, efficacy questions linger over the results from the Asian phase III trials. The three-dose regimen was least effective against Dengue-2, the most prevalent strain in Asia, and efficacy increased with the child’s age. Because of the different epidemiologic profiles for dengue in Asia and Latin America, Sanofi is proposing a vaccine roll-out tailored to each region. 
For example, dengue infection among children is not as common in Latin America as it is in Asia—so, it would make little sense to include a dengue vaccine in the Expanded Program on Immunization (EPI) in Latin America. Sanofi hopes to reduce dengue mortality by 50% and morbidity by 25% before 2020. Other pharmaceutical companies such as Novartis, Merck, and GlaxoSmithKline also have dengue candidates in development, although none have progressed to phase III clinical trials to date.  The Sanofi Dengue vaccine is expected to be licensed sometime this year.
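Efficacy percentages such as the 60.8 percent headline figure above are computed from attack rates in the vaccinated and control arms of a trial, using the standard formula VE = 1 - ARV/ARU. The Python sketch below illustrates the calculation with hypothetical attack rates chosen only to reproduce the 60.8 percent result; the trial's actual attack rates are not given in the text:

```python
def vaccine_efficacy(attack_rate_vaccinated: float, attack_rate_control: float) -> float:
    """Standard vaccine-efficacy formula: VE = 1 - ARV/ARU."""
    return 1 - attack_rate_vaccinated / attack_rate_control

# Hypothetical attack rates (illustrative only) that reproduce the
# 60.8% overall efficacy reported for the Latin American trial
ve = vaccine_efficacy(0.0196, 0.05)
print(f"VE = {ve:.1%}")  # prints "VE = 60.8%"
```

The same formula underlies the 95.5 percent (severe dengue) and 80.3 percent (hospitalization) figures, each computed against its own endpoint.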



Classification of vaccines: types of vaccine:

There are two basic types of vaccines: live attenuated and inactivated. The characteristics of live and inactivated vaccines are different, and these characteristics determine how each vaccine is used. Live attenuated vaccines are produced by modifying a disease-producing (“wild”) virus or bacterium in a laboratory. The resulting vaccine organism retains the ability to replicate (grow) and produce immunity, but usually does not cause illness. Live attenuated vaccines include live viruses and live bacteria.

Inactivated vaccines can be composed of either whole viruses or bacteria, or fractions of either:

• Fractional vaccines are either protein-based or polysaccharide-based.

• Protein-based vaccines include toxoids (inactivated bacterial toxin), and subunit or subvirion products.

• Most polysaccharide-based vaccines are composed of pure cell-wall polysaccharide from bacteria.

• Conjugate polysaccharide vaccines are those in which the polysaccharide is chemically linked to a protein. This linkage makes the polysaccharide a more potent vaccine.


However, recent advances in molecular biology have provided alternative methods for producing vaccines.

Listed below are the possibilities:

1. Subunit vaccines – purified or recombinant viral antigen

2. Recombinant virus vaccines

3. Anti-idiotype antibodies

4. DNA vaccines


Why are some vaccines live and some dead?

The bottom line is that the decision is entirely driven by the science. If scientists can make a killed vaccine that is effective, that is what they will do. It’s all about trial and error. Most viral diseases require live-attenuated vaccines, but the vast majority of bacterial illnesses are prevented with inactivated vaccines. There are some exceptions to this rule, though. For example:

•Some travelers to less-developed countries get the vaccine to prevent typhoid fever. There are live and killed forms of this vaccine.

•Rabies is a viral infection that is 100 percent fatal once it has progressed. The virus is simply too dangerous to administer, even in a weakened state. Fortunately, science allowed the development of an inactivated rabies vaccine.


There are a variety of vaccine types that are either currently in use or in development for the prevention of infectious diseases. Under ideal conditions, vaccines should trigger the innate immune system and both arms of the adaptive immune system. However, each vaccine type has both advantages and disadvantages, which can affect the stimulation of the immune system and thus limit the usefulness of the vaccine type.

First, live, attenuated vaccines, as exemplified by the vaccines against measles, mumps, and chickenpox, contain laboratory-weakened versions of the original pathogenic agent. These vaccines therefore produce strong cellular and antibody responses and typically confer long-term immunity with only one to two doses. It is usually less difficult to create live, attenuated vaccines from viruses than from bacteria, because viruses have fewer genes and their characteristics are therefore easier to control. However, because these vaccines contain living microorganisms, refrigeration is required to preserve potency, and there is the possibility of reversion to the original virulent form of the pathogenic agent. In addition, live vaccines cannot be given to individuals with weakened immune systems, because in such individuals the vaccine organism can cause actual disease.

Inactivated vaccines, as exemplified by the inactivated influenza vaccine, are produced by destroying a pathogenic agent with chemicals, heat, or radiation. This inactivation of the microorganism makes the vaccine more stable. These vaccines do not require refrigeration and can be freeze-dried for transport. However, they produce weaker immune responses; therefore, additional booster shots are required to maintain immunity. In experiments with mice by Raz et al., a vaccine made from irradiated Listeria monocytogenes bacteria, rather than heat-killed bacteria, showed protection against a challenge with live Listeria. 
The irradiated vaccine also stimulated a protective response from T cells, which previously had only been shown to occur with vaccines made from live, weakened Listeria bacteria.

Subunit vaccines, as exemplified by the recombinant hepatitis B vaccine, include only epitopes (the specific parts of antigens that antibodies or T cells recognize and bind) that most readily stimulate the immune system. Because these vaccines use only a few specific antigens, the likelihood of adverse reactions is reduced; however, this specificity increases the difficulty of determining which antigens should be included in the vaccine. Toxoid vaccines, as exemplified by the diphtheria and tetanus vaccines, are produced by inactivating bacterial toxins with formalin. These toxoids stimulate an immune response against the bacterial toxins. Conjugate vaccines, as exemplified by the Haemophilus influenzae type b (Hib) vaccine, are a special type of subunit vaccine. In a conjugate vaccine, antigens or toxoids from a microbe are linked to polysaccharides from the outer coating of that microbe to stimulate immunity (especially in infants).

Naked DNA vaccines are still in the experimental stages of development. These vaccines would use DNA encoding microbial antigens to stimulate immunity. The DNA would be administered by injection, after which body cells would take it up, begin producing the antigen and display it on their surfaces, thereby stimulating the immune system. Such vaccines would produce both a strong antibody response to the free antigen and a strong cellular response to the microbial antigens displayed on the cell surfaces. They are also considered relatively easy and inexpensive to create and produce. Naked DNA vaccines for influenza and herpes are still in the developmental stages.


Vaccine type                 Childhood (ages 0-6) immunization schedule
Live, attenuated             Measles, mumps, rubella (MMR combined vaccine); varicella (chickenpox); influenza (nasal spray)
Inactivated/killed           Polio (IPV); hepatitis A
Toxoid (inactivated toxin)   Diphtheria, tetanus (part of DTaP combined immunization)
Subunit/conjugate            Hepatitis B; influenza (injection); Haemophilus influenzae type b (Hib); pertussis (part of DTaP combined immunization)


Vaccine type         Other available vaccines
Live, attenuated     Zoster (shingles); yellow fever
Inactivated/killed   Rabies
Subunit/conjugate    Human papillomavirus (HPV)


Live attenuated vaccine (LAV):

Some vaccines contain live, attenuated microorganisms. Many of these are active viruses that have been cultivated under conditions that disable their virulent properties, or that use closely related but less dangerous organisms to produce a broad immune response. Although most attenuated vaccines are viral, some are bacterial in nature. Examples include the viral diseases yellow fever, measles, rubella, and mumps, and the bacterial disease typhoid. The live tuberculosis vaccine developed by Calmette and Guérin is not made from a contagious strain but contains a modified strain of reduced virulence, called “BCG”, used to elicit an immune response to the vaccine. A live attenuated vaccine containing the strain Yersinia pestis EV is used for plague immunization. Attenuated vaccines have some advantages and disadvantages. They typically provoke more durable immunological responses and are the preferred type for healthy adults. But they may not be safe for use in immunocompromised individuals, and may on rare occasions mutate to a virulent form and cause disease.


Viruses are simple microbes containing a small number of genes, and scientists can therefore more readily control their characteristics. Viruses often are attenuated through a method of growing generations of them in cells in which they do not reproduce very well. This hostile environment takes the fight out of viruses: As they evolve to adapt to the new environment, they become weaker with respect to their natural host, human beings.  Fully potent viruses (known as natural or ‘wild-type’ viruses) cause disease by reproducing themselves many thousands or millions of times in the body’s cells. However, vaccine viruses usually reproduce fewer than 20 times. Vaccine viruses replicate just well enough to cause the immune system to produce protective antibodies and to make very long-lived ‘memory B cells’ that remember the infection and produce more antibodies if the natural infectious virus is encountered in the future. Live, attenuated vaccines are more difficult to create for bacteria. Bacteria have hundreds of genes and thus are much harder to control. Scientists working on a live vaccine for a bacterium, however, might be able to use recombinant DNA technology to remove several key genes. This approach has been used to create a vaccine against the bacterium that causes cholera, Vibrio cholerae, although the live cholera vaccine has not been licensed in the United States.


Safety and stability:

Since LAVs contain living organisms, there is a degree of unpredictability, which raises some safety and stability concerns.

•Attenuated pathogens have the very rare potential to revert to a pathogenic form and cause disease in vaccinees or their contacts. Examples for this are the very rare, serious adverse events of:

◦vaccine-associated paralytic poliomyelitis (VAPP) and

◦disease-causing vaccine-derived poliovirus (VDPV) associated with oral polio vaccine (OPV).

•Functional immune systems eliminate attenuated pathogens in their immune response. Individuals with compromised immune systems, such as HIV-infected patients may not be able to respond adequately to the attenuated antigens.

•Sustained infection, for example tuberculosis (BCG) vaccination can result in local lymphadenitis or a disseminated infection.

•If the vaccine is grown in a contaminated tissue culture it can be contaminated by other viruses (e.g. retro viruses with measles vaccine).

•As a precaution, LAVs tend not to be administered during pregnancy. However, the actual potential for fetal damage remains theoretical. For example, numerous studies have demonstrated that accidental rubella vaccination during pregnancy did not result in an increased risk of birth defects.

•LAVs can have increased potential for immunization errors:

◦Some LAVs come in lyophilized (powder) form. They must be reconstituted with a specific diluent before administration, which carries the potential for programmatic errors if the wrong diluent or a drug is used.

◦Many LAVs require strict attention to the cold chain for the vaccine to be active and are subject to program failure when this is not adhered to.


Live vs. dead (inactivated) vaccine:

Feature                  Live          Dead (inactivated)
Dose                     Low           High
Number of doses          Single        Multiple
Need for adjuvant        No            Yes
Duration of immunity     Many years    Shorter
Antibody response        IgA and IgG   Mainly IgG
Cell-mediated immunity   Good          Poor
Reversion to virulence   Possible      Impossible


Inactivated vaccine:

Some vaccines contain inactivated, but previously virulent, micro-organisms that have been destroyed with chemicals, heat, radiation, or antibiotics. Examples are influenza, cholera, bubonic plague, polio, hepatitis A, and rabies. The term killed generally refers to bacterial vaccines, whereas inactivated relates to viral vaccines. Typhoid was one of the first killed vaccines to be produced and was used among the British troops at the end of the 19th century. Polio and hepatitis A are currently the principal inactivated vaccines used, and in many countries whole-cell pertussis vaccine continues to be the most widely used killed vaccine.

The adaptive immune response to a killed/inactivated vaccine is very similar to that to a toxoid vaccine, with the exception that the antibody response generated is directed against a much broader range of antigens. Thus, following injection, the whole organism is phagocytosed by immature dendritic cells; digestion within the phagolysosome produces a number of different antigenic fragments, which are presented on the cell surface as separate MHC II:antigenic fragment complexes. Within the draining lymph node, a number of TH2 cells, each with a TCR for a separate antigenic fragment, will be activated through presentation by the activated mature dendritic cells. B cells, each with a BCR for a separate antigenic fragment, will bind antigens that drain along lymph channels. Release by the TH2 cells of IL2, IL4, IL5 and IL6 induces B-cell activation, differentiation and proliferation with subsequent isotype switching (IgM to IgG) and memory-cell formation. This process takes a minimum of 10–14 days, but on subsequent exposure to the organism a secondary response is induced through activation of the various memory B cells, leading to high levels of the different IgG molecules within 24–48 h.

Hepatitis A is an example of an inactivated vaccine that might be used by occupational health practitioners. 
It is a formalin-inactivated, cell-culture-adapted strain of HAV; vaccination generates neutralizing antibodies, and protective efficacy is in excess of 90%. Vaccination should be considered for laboratory workers working with HAV and for sanitation workers in contact with sewage. Additionally, staff working with children who are not toilet trained, or in residential situations where hygiene standards are poor, may also be offered vaccination. Primary immunization with a booster between 6 and 12 months after the first dose should provide a minimum of 25 years’ protection.


Such vaccines are more stable and safer than live vaccines: The dead microbes can’t mutate back to their disease-causing state. Inactivated vaccines usually don’t require refrigeration, and they can be easily stored and transported in a freeze-dried form, which makes them accessible to people in developing countries. Killed/inactivated vaccines have a number of disadvantages. They usually require several doses because the microbes are unable to multiply in the host and so one dose does not give a strong signal to the adaptive immune system; approaches to overcome this include the use of several doses and giving the vaccine with an adjuvant. This could be a drawback in areas where people don’t have regular access to health care and can’t get booster shots on time. Local reactions at the vaccine site are more common—this is often due to the adjuvant. Using killed microbes for vaccines is inefficient because some of the antibodies will be produced against parts of the pathogen that play no role in causing disease. Some of the antigens contained within the vaccine, particularly proteins on the surface, may actually down-regulate the body’s adaptive response—presumably, their presence is an evolutionary development that helps the pathogen overcome the body’s defenses. And finally, killed/inactivated vaccines do not give rise to cytotoxic T cells which can be important for stopping infections by intracellular pathogens, particularly viruses.


Toxoid Vaccines:

For bacteria that secrete toxins, or harmful chemicals, a toxoid vaccine might be the answer. These vaccines are used when a bacterial toxin is the main cause of illness. Scientists have found that they can inactivate toxins by treating them with formalin, a solution of formaldehyde and sterilized water. Such “detoxified” toxins, called toxoids, are safe for use in vaccines. When the immune system receives a vaccine containing a harmless toxoid, it learns how to fight off the natural toxin. The immune system produces antibodies that lock onto and block the toxin. Vaccines against diphtheria and tetanus are examples of toxoid vaccines.

Tetanus toxoid vaccine is manufactured by growing a highly toxigenic strain of Clostridium tetani in a semi-synthetic medium: bacterial growth and subsequent lysis release the toxin into the supernatant, and formaldehyde treatment converts the toxin to a toxoid by altering particular amino acids and inducing minor conformational changes in the molecule. Ultrafiltration then removes unnecessary proteins left as residue from the manufacturing process to produce the final product. The toxoid is physico-chemically similar to the native toxin, thus inducing cross-reacting antibodies, but the changes induced by formaldehyde treatment render it non-toxigenic.

Following deep subcutaneous/intramuscular (sc/im) administration of tetanus vaccine, the toxoid molecules are taken up at the vaccination site by immature dendritic cells: within these cells, they are processed through the endosomal pathway (involving the phagolysosome), where they are bound to major histocompatibility complex type II (MHC II) molecules; the MHC II:toxoid complex then migrates to the cell surface. While this process is happening within the cell, the now activated mature dendritic cell migrates along lymph channels to the draining lymph node, where it encounters naive T helper type 2 (TH2) cells, each with its own unique T-cell receptor (TCR). 
Recognition and binding of the MHC II:toxoid complex by the specific TH2 receptor then activates the naive T cell, causing it to proliferate. Simultaneously, toxoid molecules not taken up by dendritic cells pass along lymph channels to the same draining lymph nodes, where they come into contact with B cells, each with their own unique B-cell receptor (BCR). Binding to the B cell through the specific immunoglobulin receptor that recognizes tetanus toxoid results in the internalization of toxoid, processing through the endosomal pathway and presentation on the cell surface as an MHC II:toxoid complex, as happens in the dendritic cell. These two processes occur in the same part of the lymph node, with the result that the B cell with the MHC II:toxoid complex on its surface now comes into contact with the activated TH2 cell whose receptors are specific for this complex. This process, termed linked recognition, results in the TH2 cell activating the B cell to become a plasma cell, producing IgM initially and then, following an isotype switch, IgG; in addition, a subset of B cells becomes memory cells. The above mechanism describes the adaptive immune response to a protein antigen such as tetanus toxoid; such antigens are termed T-dependent vaccines, since the involvement of T helper cells is essential for the immune response generated. Polysaccharide antigens, in contrast, generate a somewhat different response, as will be described in the section on subunit vaccines. The rationale for tetanus vaccination is thus based on generating antibodies that bind the toxin more avidly than the toxin's receptor binding sites on nerve cells do; in the event of exposure to C. tetani, the large toxin:antibody complex is unable to bind to the receptor, thereby neutralizing the toxin and preventing disease development.
Diphtheria toxoid and pertussis toxoid (in acellular pertussis vaccines) are two other commercially available toxoid vaccines, against which antibodies are produced in an exactly analogous manner to that described above. Toxoid vaccines tend not to be highly immunogenic unless large amounts or multiple doses are used; one problem with using larger doses is that tolerance can be induced to the antigen. Therefore, to ensure that the adaptive immune response is sufficiently effective to provide long-lasting immunity, an adjuvant is included in the vaccine. For diphtheria, tetanus and acellular pertussis vaccines, an aluminium salt (either the hydroxide or phosphate) is used; this works by forming a depot at the injection site, resulting in sustained release of antigen over a longer period of time and activating cells involved in the adaptive immune response. Aluminium adjuvants are also readily taken up by immature dendritic cells and facilitate antigen processing in the spleen/lymph nodes, where the necessary cell–cell interactions take place that lead to the development of high-affinity clones of antibody-producing B cells.


There are three principal advantages of toxoid vaccines. First, they are safe because they cannot cause the disease they prevent and there is no possibility of reversion to virulence. Second, because the vaccine antigens are not actively multiplying, they cannot spread to unimmunized individuals. Third, they are usually stable and long lasting, as they are less susceptible to the changes in temperature, humidity and light that can occur when vaccines are used out in the community. Toxoid vaccines have two disadvantages. First, they usually need an adjuvant and require several doses, for the reasons discussed above. Second, local reactions at the vaccine site are more common—this may be due to the adjuvant or a type III (Arthus) reaction; the latter generally starts as redness and induration at the injection site several hours after vaccination and usually resolves within 48–72 h. The reaction results from excess antibody at the site complexing with toxoid molecules and activating complement by the classical pathway, causing an acute local inflammatory reaction.


Subunit Vaccines:

Subunit vaccines are a development of the killed vaccine approach: instead of generating antibodies against all the antigens in the pathogen, a particular antigen (or antigens) is used such that when the antibody produced by a B cell binds to it, infection is prevented; the key to an effective subunit vaccine is therefore to identify that particular antigen or combination of antigens. Instead of the entire microbe, subunit vaccines include only the antigens that best stimulate the immune system. In some cases, these vaccines use epitopes—the very specific parts of the antigen that antibodies or T cells recognize and bind to. Because subunit vaccines contain only the essential antigens and not all the other molecules that make up the microbe, the chances of adverse reactions to the vaccine are lower. Subunit vaccines can contain anywhere from 1 to 20 or more antigens. Of course, identifying which antigens best stimulate the immune system is a tricky, time-consuming process. Once scientists do that, however, they can make subunit vaccines in one of two ways:

1. They can grow the microbe in the laboratory and then use chemicals to break it apart and gather the important antigens.

2. They can manufacture the antigen molecules from the microbe using recombinant DNA technology. Vaccines produced this way are called “recombinant subunit vaccines.”  A recombinant subunit vaccine has been made for the hepatitis B virus. Scientists inserted hepatitis B genes that code for important antigens into common baker’s yeast. The yeast then produced the antigens, which the scientists collected and purified for use in the vaccine. Research is continuing on a recombinant subunit vaccine against hepatitis C virus.


Subunit vaccines can be further categorized into:

1. Protein-based vaccines

2. Polysaccharide vaccines

3. Conjugate vaccines


Hepatitis B and Haemophilus influenzae b (Hib) are examples of subunit vaccines that use only one antigen; influenza is an example of a subunit vaccine with two antigens (haemagglutinin and neuraminidase). The adaptive immune response to a subunit vaccine varies according to whether the vaccine antigen is a protein or a polysaccharide—subunit vaccines based on protein antigens, for example hepatitis B and influenza, are T-dependent vaccines like toxoid vaccines (as previously discussed), whereas polysaccharides generate a T-independent response. An example of a T-independent subunit vaccine that might be administered in the occupational setting is PPV 23, which uses as its vaccine antigen the capsular polysaccharide from 23 common pneumococcal serotypes. The vaccine is administered into the deep subcutaneous tissue or intramuscularly. At the injection site, some polysaccharide molecules are phagocytosed by immature dendritic cells (and macrophages), which subsequently migrate to the local lymph nodes where they encounter naive TH2 cells. However, the TCR recognizes only protein molecules, so even though the antigen is presented by a mature dendritic cell and displayed on MHC II molecules, the TH2 cell is not activated. Simultaneously, non-phagocytosed polysaccharide molecules pass along lymph channels to the same draining lymph nodes where they encounter B cells, each with their own unique BCR. Because the vaccine antigen consists of linear repeats of the same high molecular weight capsular polysaccharide, it binds with high avidity to multiple receptors on a B cell with the appropriate specificity. Such multivalent binding is able to activate the B cell without the need for TH2 involvement, leading to the production of IgM. Because the TH2 cell is not involved, however, there is only limited isotype switching, so only small amounts of IgG are produced and few memory B cells are formed.
In an adequately immunized individual, when Streptococcus pneumoniae crosses mucosal barriers, specific IgM antibody in serum will bind to the pathogen's capsular polysaccharide, facilitating complement-mediated lysis. IgM is highly effective at activating complement; it is significantly less able to act as a neutralizing or opsonizing antibody. PPV 23 should be offered to workers with chronic respiratory, heart, renal and liver disease, asplenia or hyposplenia, immunosuppression or the potential for a CSF leak; for those individuals with chronic renal disease and splenic dysfunction, where attenuation of the immune response may be expected, further doses every 5 years are recommended. T-independent vaccines can be converted to efficient T-dependent vaccines by covalently binding them (a process termed conjugation) to a protein molecule. Following phagocytosis by immature dendritic cells, the conjugated protein and polysaccharide molecules are presented both as MHC II:protein and MHC II:polysaccharide complexes at the cell surface. Migration to the draining lymph node brings this activated mature dendritic cell into the T-cell-rich area and leads to activation of a TH2 cell with high specificity for the carrier protein. Simultaneous passage of vaccine antigen along draining lymph channels to the B-cell-rich area of the draining lymph nodes results in binding between the polysaccharide:protein conjugate and a B cell whose BCR has a high specificity for the polysaccharide. The polysaccharide:protein complex is internalized and processed, and the protein is expressed as a cell surface complex with MHC II. There is then linked recognition between this B cell and the activated TH2 cell with high specificity for the carrier protein. TH2 involvement leads to co-stimulation and cytokine release, resulting in the production of IgM and then IgG, and the generation of memory cells.
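The contrast described above between T-dependent responses (protein or conjugate antigens) and T-independent responses (plain polysaccharide antigens) can be summarized in a small sketch. This is purely a mnemonic model of the text, not a simulation; the function name and return fields are assumptions made for illustration:

```python
# Illustrative sketch: expected adaptive-response features for protein,
# plain polysaccharide, and conjugated polysaccharide vaccine antigens,
# as described in the text. Not a real immunology library.

def adaptive_response(antigen_type, conjugated=False):
    """Return the expected response features for a vaccine antigen."""
    if antigen_type == "protein" or (antigen_type == "polysaccharide" and conjugated):
        # TH2 help via linked recognition: full isotype switch and B-cell memory
        return {"t_help": True, "isotypes": ["IgM", "IgG"], "memory_b_cells": True}
    if antigen_type == "polysaccharide":
        # Multivalent BCR cross-linking activates B cells without TH2 help;
        # limited isotype switching (little IgG) and few memory B cells
        return {"t_help": False, "isotypes": ["IgM"], "memory_b_cells": False}
    raise ValueError("unknown antigen type")

# Plain PPV 23-style polysaccharide vs a conjugate vaccine such as Hib:
print(adaptive_response("polysaccharide"))
print(adaptive_response("polysaccharide", conjugated=True))
```

Conjugation simply moves an antigen from the second branch to the first, which is the whole point of conjugate vaccines.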


The advantages of subunit vaccines are the same as those of toxoid vaccines, with the added benefit that one can distinguish vaccinated people from infected people—for example, with hepatitis B vaccination only an adaptive immune response to the surface antigen is possible, whereas with infection, responses to the core and e antigens also occur. Subunit vaccines share the same disadvantages as toxoid vaccines, namely the need for an adjuvant (and often multiple doses), together with the frequent occurrence of local reactions at the injection site.


Vaccine Type | Diseases | Advantages | Disadvantages
Live, weakened vaccines | Measles, mumps, rubella (German measles), polio (Sabin vaccine) and chicken pox | Produce a strong immune response, so can provide life-long immunity with 1–2 doses | Not safe for people with compromised immune systems; need refrigeration to stay potent
Inactivated or "killed" vaccines | Cholera, flu, hepatitis A, rabies, polio (Salk vaccine) | Safe for people with compromised immune systems; easily stored and transported; do not require refrigeration | Usually require booster shots every few years to remain effective
Subunit vaccines | Hepatitis B | Lower chance of adverse reaction | Research can be time-consuming and difficult
Conjugate vaccines | Haemophilus influenzae b (Hib) and pneumococcal vaccine | Safe for people with compromised immune systems | Usually require booster shots every few years to remain effective
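The trade-offs in the comparison table above can also be expressed as a small lookup structure. The sketch below is illustrative only; the dictionary layout and helper function are assumptions, with the content taken from the table:

```python
# Minimal lookup sketch of the vaccine-type comparison table (illustrative).

VACCINE_TYPES = {
    "live attenuated": {
        "examples": ["measles", "mumps", "rubella", "polio (Sabin)", "chicken pox"],
        "advantage": "Strong response; life-long immunity with 1-2 doses",
        "disadvantage": "Unsafe for the immunocompromised; needs refrigeration",
    },
    "inactivated": {
        "examples": ["cholera", "flu", "hepatitis A", "rabies", "polio (Salk)"],
        "advantage": "Safe for the immunocompromised; easy to store and transport",
        "disadvantage": "Usually needs booster doses",
    },
    "subunit": {
        "examples": ["hepatitis B"],
        "advantage": "Lower chance of adverse reaction",
        "disadvantage": "Research can be time-consuming and difficult",
    },
    "conjugate": {
        "examples": ["Hib", "pneumococcal"],
        "advantage": "Safe for the immunocompromised",
        "disadvantage": "Usually needs booster doses",
    },
}

def trade_off(vaccine_type):
    """Summarize the advantage/disadvantage pair for a vaccine type."""
    entry = VACCINE_TYPES[vaccine_type]
    return f"{entry['advantage']} / {entry['disadvantage']}"

print(trade_off("subunit"))
```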


While most vaccines are created using inactivated or attenuated compounds from micro-organisms, synthetic vaccines are composed mainly or wholly of synthetic peptides, carbohydrates, or antigens.


Synthetic Peptides:

The development of synthetic peptides that might be useful as vaccines depends on the identification of immunogenic sites. Several methods have been used. The best known example is foot and mouth disease, where protection was achieved by immunizing animals with a linear sequence of 20 amino acids. Synthetic peptide vaccines would have many advantages. Their antigens are precisely defined and free from unnecessary components which may be associated with side effects. They are stable and relatively cheap to manufacture. Furthermore, less quality assurance is required. Changes due to natural variation of the virus can be readily accommodated, which would be a great advantage for unstable viruses such as influenza. However, synthetic peptides do not readily stimulate T cells. It was generally assumed that, because of their small size, peptides would behave like haptens and would therefore require coupling to a protein carrier which is recognized by T cells. It is now known that synthetic peptides can be highly immunogenic in their free form provided they contain, in addition to the B-cell epitope, T-cell epitopes recognized by T-helper cells. Such T-cell epitopes can be provided by carrier protein molecules, foreign antigens or the synthetic peptide molecule itself. Synthetic peptides are not applicable to all viruses. This approach did not work in the case of polioviruses, because the important antigenic sites were made up of two or more different viral capsid proteins held in a precise 3-D conformation.


Anti-idiotype antibodies:

The ability of anti-idiotype antibodies to mimic foreign antigens has led to their development as vaccines to induce immunity against viruses, bacteria and protozoa in experimental animals. Anti-idiotypes have many potential uses as viral vaccines, particularly when the antigen is difficult to grow or hazardous. They have been used to induce immunity against a wide range of viruses, including HBV, rabies, Newcastle disease virus and FeLV, reoviruses and polioviruses.


Another classification of vaccines:


Vaccines may be monovalent (also called univalent) or multivalent (also called polyvalent). A monovalent vaccine is designed to immunize against a single antigen or single microorganism. A multivalent or polyvalent vaccine is designed to immunize against two or more strains of the same microorganism, or against two or more microorganisms. The valency of a multivalent vaccine may be denoted with a Greek or Latin prefix (e.g., tetravalent or quadrivalent). In certain cases a monovalent vaccine may be preferable for rapidly developing a strong immune response. A monovalent vaccine contains a single strain of a single antigen (e.g. Measles vaccine), whereas polyvalent vaccine contains two or more strains/serotypes of the same antigen (e.g. OPV).
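The valency naming convention described above lends itself to a short sketch. The function and prefix table below are hypothetical, purely for illustration:

```python
# Illustrative sketch: labelling vaccine valency by the number of
# strains/serotypes covered, per the convention described in the text.

PREFIXES = {1: "monovalent", 2: "bivalent", 3: "trivalent", 4: "quadrivalent"}

def valency(n_strains):
    """Return the valency label for a vaccine covering n_strains."""
    if n_strains < 1:
        raise ValueError("a vaccine must cover at least one strain")
    return PREFIXES.get(n_strains, f"{n_strains}-valent (polyvalent)")

# Measles vaccine covers a single strain; trivalent OPV covers three
# poliovirus serotypes; PPV 23 covers 23 pneumococcal serotypes.
print(valency(1))   # monovalent
print(valency(3))   # trivalent
print(valency(23))  # 23-valent (polyvalent)
```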


Combination vaccines:

Combination vaccines consist of two or more antigens in the same preparation. This approach has been used for over 50 years in many vaccines such as DTaP and MMR. Combination vaccines can be useful to overcome the logistic constraints of multiple injections and to accommodate children's fear of needles and pain. Combination products simplify vaccine administration and allow for the introduction of new vaccines without requiring additional health clinic visits and injections. It is very important, however, that combination vaccines are carefully tested before introduction. For instance, an adjuvant is a pharmacological agent (e.g., an aluminium salt or an oil-in-water emulsion) that modifies the effect of other agents, such as a drug or vaccine, while having few if any direct effects when given by itself. Adjuvants are often included in vaccines to enhance the recipient's immune response to a supplied antigen, while keeping the injected foreign material to a minimum. In a combination vaccine, an adjuvant could reduce the activity of one antigen and excessively increase the reactivity of another. There could also be interactions with other vaccine components such as buffers. Buffers are substances that minimize changes in the acidity of a solution when an acid or base is added, and are used in the manufacturing process of some vaccines. Stabilizers are compounds used to help a vaccine maintain its effectiveness during storage. Vaccine stability is essential, particularly where the cold chain is unreliable; factors affecting stability include temperature, pH and preservatives. With all combinations, manufacturers must therefore evaluate potency, a measure of the strength or immunogenicity of a vaccine. The potency of each antigenic component, the effectiveness of the components in combination at inducing immunity, the risk of possible reversion to toxicity, and reactions with other vaccine components must all be evaluated.
Licensed combination vaccines undergo extensive testing before approval by national regulatory authorities to assure that the products are safe, effective, and of acceptable quality.


Heterotypic vaccines:

Also known as heterologous or "Jennerian" vaccines, these are vaccines based on pathogens of other animals that either do not cause disease or cause only mild disease in the organism being treated. The classic example is Jenner's use of cowpox to protect against smallpox. A current example is the use of BCG vaccine, made from Mycobacterium bovis, to protect against human tuberculosis.


A number of innovative vaccines are also in development and in use:

•Dendritic cell vaccines combine dendritic cells with antigens in order to present the antigens to the body’s white blood cells, thus stimulating an immune reaction. These vaccines have shown some positive preliminary results for treating brain tumors.

•Recombinant Vector – by combining the physiology of one micro-organism and the DNA of the other, immunity can be created against diseases that have complex infection processes (vide infra).

•DNA vaccination – an alternative, experimental approach to vaccination called DNA vaccination, created from an infectious agent’s DNA, is under development (vide infra).

•T-cell receptor peptide vaccines are under development for several diseases using models of Valley Fever, stomatitis, and atopic dermatitis. These peptides have been shown to modulate cytokine production and improve cell mediated immunity.

•Targeting of identified bacterial proteins that are involved in complement inhibition would neutralize the key bacterial virulence mechanism.


Recombinant (antigen) vaccines:

Vaccine antigens may also be produced by genetic engineering technology. These products are sometimes referred to as recombinant vaccines. Four genetically engineered vaccines are currently available:

• Hepatitis B vaccines are produced by insertion of a segment of the hepatitis B virus gene into the gene of a yeast cell. The modified yeast cell produces pure hepatitis B surface antigen when it grows.

• Human papillomavirus vaccines are produced by inserting genes for a viral coat protein into either yeast (as the hepatitis B vaccines) or into insect cell lines. Viral-like particles are produced and these induce a protective immune response.

• Live typhoid vaccine (Ty21a) is a strain of Salmonella typhi that has been genetically modified so that it does not cause illness.

• Live attenuated influenza vaccine (LAIV) has been engineered to replicate effectively in the mucosa of the nasopharynx but not in the lungs.


Recombinant Vector Vaccines:

Recombinant vector vaccines are experimental vaccines that use an attenuated virus or bacterium to introduce microbial DNA into body cells. "Vector" refers to the virus or bacterium used as the carrier. In nature, viruses latch on to cells and inject their genetic material into them. In the lab, scientists have taken advantage of this process: they have figured out how to take the roomy genomes of certain harmless or attenuated viruses and insert portions of the genetic material from other microbes into them. The carrier viruses then ferry that microbial DNA to cells. These viral vaccines readily mimic a natural infection, thus stimulating the immune system. Attenuated bacteria can likewise have genetic material for antigens from a pathogenic microbe inserted; those antigens are then displayed on the harmless microbe, mimicking the pathogen and stimulating the immune system. Because recombinant vector vaccines closely mimic a natural infection, they do a good job of stimulating the immune system. Both bacterial- and viral-based recombinant vector vaccines for HIV, rabies and measles are in the experimental stages.


In this approach, a gene encoding a major viral antigen (that is, a target for neutralizing antibody) is inserted (cloned) into another, non-virulent viral vector so that the cloned gene is expressed and the protein produced during viral infection. Animals are then infected with the recombinant virus and mount an immune response (both humoral and cellular) against the introduced antigen. This approach is somewhat analogous to an attenuated virus, except that the vector virus can be unrelated to the original virus except for the introduced genes. In addition to eliciting both humoral and cellular immune responses, recombinant viruses can also induce secretory immunity if administered via an appropriate route. Examples of vectors being tried are adenovirus and vaccinia virus. It may even be possible to introduce genes from more than one pathogenic virus into the same vector (poxviruses can accommodate up to 25 kb of DNA) in order to make a vaccine strain that will protect against several pathogens. Like attenuated viruses, a potential problem with recombinant viruses is that they could cause disease in immunocompromised animals.


Hybrid virus vaccine:

An alternative application of recombinant DNA technology is the production of hybrid virus vaccines. The best known example is vaccinia: the DNA sequence coding for the foreign gene is inserted into the plasmid vector along with a vaccinia virus promoter and vaccinia thymidine kinase sequences. The resultant recombinant vector is then introduced into cells infected with vaccinia virus to generate a virus that expresses the foreign gene. The recombinant virus vaccine can then multiply in infected cells and produce the antigens of a wide range of viruses. The genes of several viruses can be inserted, so the potential exists for producing polyvalent live vaccines. HBsAg, rabies, HSV and other viruses have been expressed in vaccinia. Hybrid virus vaccines are stable and stimulate both cellular and humoral immunity. They are relatively cheap and simple to produce, and, being live vaccines, smaller quantities are required for immunization. As yet, there are no accepted laboratory markers of attenuation or virulence of vaccinia virus for humans. Alterations in the genome of vaccinia virus during the selection of recombinants may alter the virulence of the virus. The use of vaccinia also carries the risk of adverse reactions associated with the vaccine, and the virus may spread to susceptible contacts. At present, efforts are being made to attenuate vaccinia virus further, and the possibility of using other recombinant vectors, such as attenuated poliovirus and adenovirus, is being explored.


Vaccine generations:

From whole organism vaccine (live or dead) to DNA vaccine:

First generation vaccines are whole-organism vaccines – either live and weakened, or killed forms. Live, attenuated vaccines, such as smallpox and polio vaccines, are able to induce killer T-cell (TC or CTL) responses, helper T-cell (TH) responses and antibody immunity. However, there is a small risk that attenuated forms of a pathogen can revert to a dangerous form, and may still be able to cause disease in immunocompromised vaccine recipients (such as those with AIDS). While killed vaccines do not have this risk, they cannot generate specific killer T cell responses, and may not work at all for some diseases.  In order to minimise these risks, so-called second generation vaccines were developed. These are subunit vaccines, consisting of defined protein antigens (such as tetanus or diphtheria toxoid) or recombinant protein components (such as the hepatitis B surface antigen). These, too, are able to generate TH and antibody responses, but not killer T cell responses. DNA vaccines are third generation vaccines, and are made up of a small, circular piece of bacterial DNA (called a plasmid) that has been genetically engineered to produce one or two specific proteins (antigens) from a pathogen. The vaccine DNA is injected into the cells of the body, where the “inner machinery” of the host cells “reads” the DNA and uses it to synthesize the pathogen’s proteins. Because these proteins are recognised as foreign, when they are processed by the host cells and displayed on their surface, the immune system is alerted, which then triggers a range of immune responses. These DNA vaccines were developed from “failed” gene therapy experiments. The first demonstration of a plasmid-induced immune response was when mice inoculated with a plasmid expressing human growth hormone elicited antibodies instead of altering growth.


DNA Vaccines:

In this approach, genes (DNA) encoding specific viral proteins are injected into an animal (either in muscle or skin). The DNA is then taken up by cells, where it is transcribed into mRNA, which is then translated to give rise to the viral protein. This protein is expressed on the surface of cells, either alone or in association with MHC molecules. It is recognized as a foreign molecule by the immune system and elicits an immune response. It was discovered almost 20 years ago that plasmid DNA, when injected into the skin or muscle of mice, could induce immune responses to encoded antigens. Since that time, there has been much progress in understanding the basic biology behind this deceptively simple vaccine platform and much technological advancement to enhance immune potency. Among these advancements are improved formulations and improved physical methods of delivery, which increase the uptake of vaccine plasmids by cells; optimization of vaccine vectors and encoded antigens; and the development of novel formulations and adjuvants to augment and direct the host immune response. The ability of the current, or second-generation, DNA vaccines to induce more-potent cellular and humoral responses opens up this platform to be examined in both preventative and therapeutic arenas. So-called naked DNA vaccines consist of DNA that is administered directly into the body. These vaccines can be administered with a needle and syringe or with a needle-less device that uses high-pressure gas to shoot microscopic gold particles coated with DNA directly into cells. Sometimes, the DNA is mixed with molecules that facilitate its uptake by the body's cells. Naked DNA vaccines being tested in humans include those against the viruses that cause influenza and herpes. As of 2015, DNA vaccination is still experimental and is not approved for human use. This approach has several advantages, including the following:

1) Since no infectious agent (attenuated or inactivated) is involved, there is no chance of producing an infection, even in immunocompromised animals.

2) Expression of the protein is relatively long-term (the gene may integrate into the cell DNA and be stably expressed), so long-term immunity may be elicited.

3) Since the antigen is endogenously synthesized inside cells, it elicits a strong cellular immune response.

4) Once a gene is cloned, DNA is inexpensive to make and is stable, so the vaccines should be inexpensive.

5) No adjuvant is required.


DNA vaccines for non-infectious diseases offer new treatments for tumour and allergy. Vaccines against allergies need to suppress or alter an unwanted immune response, while a cancer DNA vaccine has to overcome tolerance and/or immune suppression and initiate a powerful immune response.


Delivering antibody genes:

Another method that can provide lasting antibodies is gene transfer. This method involves using DNA or a viral vector to deliver a gene for the monoclonal antibody into a person's cells. The DNA or vector carries instructions for making the antibody inside a person's cells, allowing them to make the HIV-specific antibodies on their own, rather than getting injections of them. This method is similar to that used with DNA vaccines and viral vector vaccines; the major difference is that here a copy of an antibody gene is delivered, whereas for vaccines a copy of an HIV gene is delivered. Studies with broadly neutralizing antibodies may lead directly to a strategy to prevent HIV. They could also tell us which antibodies work and what amounts are needed to prevent HIV infection. From there, scientists can work to develop future vaccines to reproduce this response.


Genetically modified (GM) vaccine:

Today we have several different types of GM vaccines in production, development or research phases, such as:

1. DNA vaccines & Naked DNA vaccines

2. Recombinant Vector vaccines

3. Recombinant (antigen) vaccine

GM vaccines are already in use and being administered to people; these include vaccines for hepatitis B, rotavirus and HPV, among others. There are experimental GM vaccines being developed that use tumorigenic cancer cells and cells from humans, dogs, monkeys, cows, pigs, rodents, birds and insects. Use of foreign DNA in various forms has the potential to cause a great deal of trouble: not only could it recombine with our own DNA, it could also turn DNA "switches," the epigenetic parts of the DNA, on and off.


Humans and animals receiving certain live virus-vectored vaccines will shed and transmit genetically modified vaccine strains that may pose unpredictable risks to the vaccinated, their close contacts and the environment. For example, vaccine developers creating an experimental AIDS vaccine by genetically engineering the live-attenuated measles virus to express a fusion protein containing HIV-1 antigens face challenges in trying to limit shedding and transmission of infectious virus by the recently vaccinated. These very real risks should be thoroughly quantified before licensure and widespread use of GM vaccines, because of the ability of vaccine strain viruses to recombine with wild-type viruses and produce new hybrid viruses with potentially serious side effects.


3D vaccine:

Cancer cells are generally ignored by the immune system. This is because—for the most part—they more closely resemble cells that belong in the body than pathogens such as bacterial cells or viruses. The goal of cancer vaccines is to provoke the immune system to recognize cancer cells as foreign and attack them. One way to do this is by manipulating dendritic cells, the coordinators of immune system behavior. Dendritic cells constantly patrol the body, sampling bits of protein, called antigens, found on the surface of cells or viruses. When a dendritic cell comes in contact with an antigen that it deems foreign, it carries it to the lymph nodes, where it instructs the rest of the immune system to attack anything in the body displaying that antigen. Though similar to healthy cells, cancer cells often display unique antigens on their surface, which can be exploited to develop cancer immunotherapies. For example, in dendritic cell therapy, white blood cells are removed from a patient's blood, stimulated in the lab to turn into dendritic cells, and then incubated with an antigen that is specific to a patient's tumor, along with other compounds to activate and mature the dendritic cells. These "programmed" cells are then injected back into the bloodstream with the hope that they will travel to the lymph nodes and present the tumor antigen to the rest of the immune system cells. While this approach has had some clinical success, in most cases the immune response resulting from dendritic cell vaccines is short-lived and not robust enough to keep tumors at bay over the long run. In addition, cell therapies such as this, which require removing cells from patients and manipulating them in the lab, are costly and not easily regulated. To overcome these limitations, Mooney's lab has been experimenting with a newer approach that involves reprogramming immune cells from inside the body using implantable biomaterials.



The idea is to introduce a biodegradable scaffold under the skin that temporarily creates an “infection-mimicking microenvironment”, capable of attracting, housing, and reprogramming millions of dendritic cells over a period of several weeks. In a 2009 paper published in Nature Materials, Mooney demonstrated that this could be achieved by loading a porous scaffold—about the size of a dime—with tumor antigen as well as a combination of biological and chemical components meant to attract and activate dendritic cells. Once implanted, the scaffold’s contents slowly diffused outward, recruiting a steady stream of dendritic cells, which temporarily took up residence inside the scaffold while being simultaneously exposed to tumor antigen and activating factors. When the scaffold was implanted in mice, it achieved a 90% survival rate in animals that would otherwise die from cancer within 25 days. Now, Mooney and his team have taken this approach a step further, creating an injectable scaffold that can spontaneously assemble once inside the body. This second-generation vaccine would spare patients the surgery needed to implant the scaffold and would also be easier for clinicians to administer. The new 3D vaccine is made up of many micro-sized, porous silica rods dispersed in liquid. When injected under the skin, the liquid quickly diffuses, leaving the rods behind to form a randomly assembled three-dimensional structure resembling a haystack. The spaces in between the rods are large enough to house dendritic cells and other immune cells, and the rods have nano-sized pores that can be loaded with a combination of antigens and drugs. When injected into mice that were then given a subsequent injection of lymphoma cells, the 3D vaccine generated a potent immune response and delayed tumor growth.
Compared to a bolus injection containing the same drugs and antigens (but no scaffold), the 3D vaccine was more effective at preventing tumor growth, with 90% of mice receiving the 3D vaccine still alive at 30 days compared with only 60% of mice given the bolus injection. While the 3D injectable scaffold is being tested in mice as a potential cancer vaccine, any combination of different antigens and drugs could be loaded into the scaffold, meaning it could also be used to treat infectious diseases that may be resistant to conventional treatments. Mooney says that in addition to continuing to develop the cancer vaccine, he also plans to explore how the injectable scaffold can be used to both treat and prevent infectious diseases. More broadly, Mooney predicts that spontaneously assembling particles will be adopted by many fields in the future.


Cancer vaccine:

Cancer vaccines are medicines that belong to a class of substances known as biological response modifiers. Biological response modifiers work by stimulating or restoring the immune system’s ability to fight infections and disease. Cancer vaccines are designed to boost the body’s natural ability to protect itself, through the immune system, from dangers posed by damaged or abnormal cells such as cancer cells. There are two broad types of cancer vaccines:

1. Preventive (or prophylactic) vaccines, which are intended to prevent cancer from developing in healthy people; and

2. Treatment (or therapeutic) vaccines, which are intended to treat an existing cancer by strengthening the body’s natural defenses against the cancer.

The U.S. Food and Drug Administration (FDA) has approved two types of vaccines to prevent cancer: vaccines against the hepatitis B virus, which can cause liver cancer, and vaccines against human papillomavirus types 16 and 18, which are responsible for about 70 percent of cervical cancer cases. Cancer cells can carry both self antigens and cancer-associated antigens. The cancer-associated antigens mark the cancer cells as abnormal, or foreign, and can cause B cells and killer T cells to mount an attack against them. Cancer cells may also make much larger amounts of certain self antigens than normal cells. Because of their high abundance, these self antigens may be viewed by the immune system as being foreign and, therefore, may trigger an immune response against the cancer cells. The FDA has approved one cancer treatment vaccine for certain men with metastatic prostate cancer. Researchers are developing treatment vaccines against many types of cancer and testing them in clinical trials. Several studies have suggested that cancer treatment vaccines may be most effective when given in combination with other forms of cancer therapy. In addition, in some clinical trials, cancer treatment vaccines have appeared to increase the effectiveness of other cancer therapies. The most commonly reported side effect of cancer vaccines is inflammation at the site of injection, including redness, pain, swelling, warming of the skin, itchiness, and occasionally a rash. People sometimes experience flu-like symptoms after receiving a cancer vaccine, including fever, chills, weakness, dizziness, nausea or vomiting, muscle ache, fatigue, headache, and occasional breathing difficulties. Blood pressure may also be affected.


Antigens are often not strong enough inducers of the immune response to make effective cancer treatment vaccines. Researchers often add extra ingredients, known as adjuvants, to treatment vaccines. These substances serve to boost immune responses that have been set in motion by exposure to antigens or other means. Patients undergoing experimental treatment with a cancer vaccine sometimes receive adjuvants separately from the vaccine itself. Adjuvants used for cancer vaccines come from many different sources. Some microbes, such as the bacterium Bacillus Calmette-Guérin (BCG), originally used as a vaccine against tuberculosis, can serve as adjuvants. Substances produced by bacteria, such as Detox B, are also frequently used. Biological products derived from nonmicrobial organisms can be used as adjuvants, too. One example is keyhole limpet hemocyanin (KLH), which is a large protein produced by a sea animal. Attaching antigens to KLH has been shown to increase their ability to stimulate immune responses. Even some nonbiological substances, such as an emulsified oil known as montanide ISA-51, can be used as adjuvants. Natural or synthetic cytokines can also be used as adjuvants. Cytokines are substances that are naturally produced by white blood cells to regulate and fine-tune immune responses. Some cytokines increase the activity of B cells and killer T cells, whereas other cytokines suppress the activities of these cells. Cytokines frequently used in cancer treatment vaccines or given together with them include interleukin 2 (IL-2), interferon alpha (IFN-α), and GM-CSF, also known as sargramostim.


Although researchers have identified many cancer-associated antigens, these molecules vary widely in their capacity to stimulate a strong anticancer immune response. Two major areas of research aimed at developing better cancer treatment vaccines involve the identification of novel cancer-associated antigens that may prove more effective in stimulating immune responses than the already known antigens and the development of methods to enhance the ability of cancer-associated antigens to stimulate the immune system. Research is also under way to determine how to combine multiple antigens within a single cancer treatment vaccine to produce optimal anticancer immune responses. Perhaps the most promising avenue of cancer vaccine research is aimed at better understanding the basic biology underlying how immune system cells and cancer cells interact. New technologies are being created as part of this effort. For example, a new type of imaging technology allows researchers to observe killer T cells and cancer cells interacting inside the body. Researchers are also trying to identify the mechanisms by which cancer cells evade or suppress anticancer immune responses. A better understanding of how cancer cells manipulate the immune system could lead to the development of new drugs that block those processes and thereby improve the effectiveness of cancer treatment vaccines. For example, some cancer cells produce chemical signals that attract white blood cells known as regulatory T cells, or Tregs, to a tumor site. Tregs often release cytokines that suppress the activity of nearby killer T cells. The combination of a cancer treatment vaccine with a drug that would block the negative effects of one or more of these suppressive cytokines on killer T cells might improve the vaccine’s effectiveness in generating potent killer T cell antitumor responses.


Limitations of cancer treatment vaccines:

Developing successful cancer treatment vaccines is difficult. Some limitations of these vaccines are:

• Cancer cells suppress the immune system; this is how the cancer is able to grow and develop in the first place. Researchers are using adjuvants in vaccines to try to overcome this problem.

• Because cancer cells develop from a person’s own healthy cells, they may not “look” harmful to the immune system. Therefore, instead of being identified as harmful to the body and eliminated, the cancer cells are ignored.

• Larger or more advanced tumors are hard to eliminate using only a vaccine. This is one reason why cancer vaccines are usually given in addition to other treatments.

• The immune systems of people who are sick or older may not be able to produce a strong immune response following vaccination, limiting the vaccine’s effectiveness. Also, some cancer treatments may damage a person’s immune system, limiting its ability to respond to a vaccine.

For these reasons, some researchers think that a cancer treatment vaccine may be more effective for patients with smaller tumors or early-stage cancers.


Autologous Vaccine delays progression following surgery for Ovarian Cancer:

Treatment with the immunotherapy Vigil, an autologous tumor cell vaccine, delayed time to progression in patients with stage III/IV ovarian cancer compared with those who were not treated, according to an open-label phase II trial. In the 31-patient trial, presented at the 2015 SGO Annual Meeting, patients were randomized to receive the vaccine or no treatment following surgery. Among the 20 patients who received the vaccine, median time to progression had not yet been reached, compared with a median of 14.5 months in those who were not treated. Additionally, the Vigil vaccine, composed of granulocyte macrophage colony-stimulating factor (GM-CSF) bi-shRNAi furin vector-transfected autologous tumor cells, demonstrated an acceptable safety profile, and participants showed a high rate of immune response via T-cell activation.


Measles Vaccine cures woman of Cancer:

Mayo Clinic researchers employing “virotherapy”—or virus-based treatment—completely eradicated a 49-year-old woman’s blood cancer using an extremely heavy dose of the measles vaccine (enough to vaccinate 100 million people), according to a newly released report in the journal Mayo Clinic Proceedings. The study team injected two cancer patients with “the highest possible dose” of an engineered measles virus. (Past research had shown the virus was capable of killing myeloma-infected plasma cells while sparing normal tissue.) Both patients responded to the treatment and showed reductions in bone marrow cancer and myeloma protein. One of the patients, Stacy Erholtz, experienced complete remission and has been cancer-free for 6 months. This is the first study to show that this type of virotherapy may be effective against some types of cancer. Viruses naturally destroy tissues, and the measles virus appears to cause cancer cells to group together and “explode”, which not only destroys them but also helps alert the patient’s immune system to their presence. While the second myeloma patient did not experience such a dramatic recovery, the virotherapy was still effective in targeting and treating sites of her tumor growth, the Mayo researchers say. The two women included in the study were chosen because their cancer had failed to respond to other treatments, so they were out of options, the study authors say. Also, neither of the women had much previous exposure to measles, which means they had few antibodies to the virus. While a lot more work has to be done to develop the treatment for other cancer sufferers, study leader Stephen Russell says the ultimate goal for this therapy is “a single-shot cure for cancer.”


US scientists report promising new melanoma vaccines:

Experimental tailor-made vaccines targeting melanoma patients’ individual genetic mutations have given encouraging preliminary results, researchers have said. The clinical test on three patients with this form of aggressive skin cancer in an advanced stage is unprecedented in the United States. The vaccines appear to boost the number and diversity of T-cells, which are key to the human immune system and attack tumors, researchers said in a report published in the journal Science. Melanoma accounts for around five percent of all new cancer cases diagnosed in the United States, and that proportion is rising. Last year 76,000 Americans were diagnosed with melanoma and nearly 10,000 died of it, according to the National Cancer Institute. The vaccines were developed by sequencing the genomes of the three patients’ tumors and comparing them to samples of healthy tissue to identify proteins that had mutated. These are known as neoantigens, and are unique to cancer cells. The researchers then used computer programs and laboratory trials to predict and test the neoantigens most likely to trigger a strong immune response, and thus be added to the vaccine. The vaccine was administered to patients whose tumors had been removed but whose cancer cells had already spread to the lymph nodes, an indication that the melanoma is likely to reappear. The initial clinical results have been good enough to start a phase I clinical trial, approved by the US Food and Drug Administration, on six patients. If this broader test proves the vaccines work, it would pave the way for immunotherapy that prevents melanoma from resurfacing in patients. The study was led by Gerald Linette, an oncologist at Washington University in St. Louis, Missouri. Although the test was preliminary, the breadth and diversity of the T-cell response suggest these vaccines are promising as a therapy, he said.
But the researchers cautioned that it is too early to say whether these vaccines would continue to work long-term. None of the three patients tested so far has suffered major negative side effects. Immunotherapy, already used with success against melanoma, is a promising new strategy against very aggressive cancer cells for which there is currently no effective treatment.


Cervical cancer vaccine:

Worldwide, it is estimated that 274,000 women died from cervical cancer in 2002. The cause of cervical cancer is almost 100 percent attributable to genital infection with the human papillomavirus (HPV). HPV infection is also the cause of other anogenital cancers, recurrent respiratory papillomatosis, and genital warts in both men and women. Given that HPV is sexually transmitted, the prevalence of HPV infection in the population peaks among persons in their late teens or early twenties, during the years following sexual debut. Up to 70 percent of women will acquire genital HPV infection sometime during their lifetime. Most women clear the infection; however, some experience persistent infections that can lead to cervical cancer. The progression from persistent infection to cervical cancer typically evolves slowly, often over a period of 20 years or longer. During this time, the disease develops through a precancerous stage (i.e., cervical intraepithelial neoplasia (CIN)) that can be detected through regular cytologic screening of the cervix with a Papanicolaou test. If screening confirms an abnormality (i.e., dysplasia), then additional testing and treatment can usually eliminate the disease. Countries that have adopted organized cervical cancer screening programs have significantly reduced the morbidity and mortality associated with cervical cancer in the population. Recently, a number of landmark clinical studies have demonstrated that a prophylactic HPV vaccine can prevent HPV infection and disease.


Human papillomavirus (HPV) vaccines:

Gardasil 9 (made by Merck & Co., Inc.) protects against HPV types 16, 18, 6, 11, 31, 33, 45, 52, and 58, which cause several types of cancer and genital warts.

Gardasil (made by Merck & Co., Inc.) protects against HPV types 16, 18, 6, and 11, which cause several types of cancer and genital warts.

Cervarix (made by GlaxoSmithKline plc) protects against HPV types 16 and 18, which cause several types of cancer.

Who should get the vaccine?*
  • Routine vaccination for girls and boys aged 11 or 12 years
  • Catch-up vaccination for teen girls and young women through age 26
  • Catch-up vaccination for teen boys and young men through age 21
  • Gay, bisexual, and other men who have sex with men through age 26
  • People with compromised immune systems (including people living with HIV/AIDS) through age 26

When should the vaccine be given?
  • There are three shots in the HPV vaccine series
  • The second dose is given 1 to 2 months after the first dose
  • The third dose is administered 6 months after the first dose
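The three-dose schedule above is simple date arithmetic; as a sketch (the first-dose date and the `add_months` helper are hypothetical, and the 1-to-2-month window for dose two is taken at its upper bound):

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (keeps the day of month;
    assumes the target month actually has that day)."""
    y, m = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + y, month=m + 1)

first_dose = date(2015, 6, 15)           # hypothetical first-dose date
second_dose = add_months(first_dose, 2)  # 1 to 2 months after the first dose
third_dose = add_months(first_dose, 6)   # 6 months after the first dose
print(second_dose, third_dose)  # 2015-08-15 2015-12-15
```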

CDC has carefully studied the risks and benefits of HPV vaccination. HPV vaccination is recommended because the benefits, such as prevention of cancer, far outweigh the risks of possible side effects. The cervical cancer vaccine is most effective in women who have never been exposed to HPV types 16 and 18. Current data show that it is 100% effective in preventing the precancerous changes (CIN) caused by these two types. However, HPV 16 and 18 account for only about 70% of cervical cancers, so vaccinated women can still be infected with, or develop CIN from, other HPV types. Because the vaccines cannot protect against infection by other high-risk types of HPV, nor clear the virus in those who are already infected, they cannot eliminate the need for cervical screening.
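The screening caveat above is worth making explicit with the paragraph's own numbers; a minimal sketch (the function name is mine; the 100% efficacy and ~70% attribution figures come from the text):

```python
def residual_cancer_fraction(efficacy: float, fraction_from_covered_types: float) -> float:
    """Fraction of cervical cancers NOT prevented by vaccination:
    all cancers caused by HPV types the vaccine does not cover,
    plus any vaccine failures against the covered types."""
    prevented = efficacy * fraction_from_covered_types
    return 1.0 - prevented

# 100% efficacy against HPV 16/18, which cause ~70% of cervical cancers:
print(round(residual_cancer_fraction(1.0, 0.70), 2))  # 0.3 -> screening still needed
```

Even perfect protection against the covered types leaves roughly 30% of cervical cancer risk untouched, which is why screening programs remain necessary.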


Therapeutic vaccine for HPV-associated cancer:

Inovio was given accolades by its industry peers for “Best Therapeutic Vaccine” for its DNA-based immunotherapy, VGX-3100, which was designed to treat HPV-associated precancers and cancers. In a large, controlled phase II efficacy trial, Inovio reported top-line data demonstrating regression of disease and clearance of the underlying cause of the condition, HPV. Inovio expects to publish the complete data set in a peer-reviewed journal this year, is advancing this product into a phase III trial early next year, and has expanded studies of this immunotherapy to include cervical and head and neck cancer.


Therapeutic vaccine:

Vaccines have classically been developed to prevent infectious diseases. Thanks to a technological revolution in molecular biology, immunology and other vaccine-related techniques, along with an enhanced understanding of disease mechanisms, it is possible to consider immune-related approaches to the treatment of various diseases rather than limiting immune approaches to the prevention of infectious diseases. In theory, multiple immunotherapy products can now be envisaged for every major therapeutic category, from anti-infectives to autoimmune disorders, oncology, cardiovascular or neurological conditions. Therapeutic cancer vaccines were discussed above; here I discuss therapeutic vaccines for chronic diseases. The next great frontier for vaccine development will be vaccines against chronic diseases such as peptic ulcer disease, atherosclerotic heart disease, type I and II diabetes, and Alzheimer’s disease, to mention a few. In some instances the feasibility of vaccination is based on the discovery that infection with a specific pathogen is (or is likely) responsible for the chronic disease. The association between hepatitis B virus and hepatocellular carcinoma has long been known; pathogens more recently associated with chronic diseases include human papillomavirus (cervical cancer), Helicobacter pylori (peptic ulcer disease and gastric carcinoma), and Chlamydia (atherosclerotic heart disease, and perhaps cervical cancer). In other instances, vaccine development is based on immunisation with chemical moieties that play a role in the pathogenesis of the chronic disease. Thus, immunisation against certain lipids may be an approach to prevent atherosclerotic heart disease, and vaccination with β-amyloid protein may thwart the progression of Alzheimer’s dementia.


Vaccines for non-infectious illness could help developing nations tackle the growing burden of chronic disease. Chronic, non-communicable diseases are responsible for almost 60 per cent of all deaths annually worldwide, with half of these from cardiovascular disease. Non-communicable diseases of old age and poor lifestyles, such as heart disease, cancer and diabetes, are the biggest killers in the developing world. Certainly, vaccines have long been a mainstay in the arsenal against infectious diseases. But recently there have been murmurs about a new type of vaccine, designed for non-infectious diseases. These therapeutic vaccines still use the immune system to attack the disease, but, as the name suggests, they’re designed to treat rather than prevent illness. The idea isn’t as outlandish as it may sound. In 1999, the US-based Institute of Medicine ranked chronic illnesses such as type 1 diabetes and melanoma as promising vaccine candidates, labeling the development of these vaccines a matter of public health urgency. Biotechnology companies took the hint, and since then, several vaccines targeting cancers, cardiovascular disease and hypertension have made it to phase I and II clinical trials. Two companies, Switzerland-based Cytos and UK-based Protherics, are testing hypertension vaccines in phase II trials. The vaccine is designed to prompt the immune system to produce antibodies against the hormone angiotensin, which constricts blood vessels and raises blood pressure. The vaccine will need to be administered once or twice a year. A vaccine against atherosclerosis, the build-up of fatty plaques of cholesterol on artery walls that can lead to heart attack or stroke, is in phase I trials.
Developed by Swedish company Bioinvent, in conjunction with US company Genentech, the vaccine is made up of the human antibody BI-204 which, when injected into the body, is designed to recognise as foreign a type of cholesterol (low-density lipoprotein, or LDL) that forms plaques, and to attack it. The company hopes the vaccine will prevent heart attacks in patients with acute coronary artery disease. These heart disease vaccines are still in the early stages of development.


Diabetes vaccine:

There are also a number of vaccines in progress for type 1 diabetes. Type 1 diabetes is an autoimmune disease that usually strikes in adolescence or young adulthood. The cause is unknown, but the process can be detected years before overt diabetes by measuring antibodies forming against the islet cells. In time the islet cells are destroyed, and blood sugar rises as insulin is no longer produced in adequate amounts, or at all. In Australia, scientists have been developing a nasal vaccine which desensitizes the immune response to insulin. Phase 1 studies are complete and later-stage trials in Australia, New Zealand and Germany are underway. One vaccine from Selecta Biosciences, given nasally to young people with antibodies but not overt disease, showed a depressed immune response. The investigators believe that this may be an approach to multiple autoimmune diseases. The idea is to develop an antigen-specific tolerogenic vaccine that does not damage the normal immune response to pathogens and other foreign invaders. Theoretically, if a person recently diagnosed with type 1 diabetes were vaccinated, there might be hope for preserving whatever islet cells still existed and perhaps for some regrowth. Or, should stem cell therapies prove effective, the vaccine would prevent the autoimmune mechanism from destroying the newly implanted stem cells and their daughter cells. A study in 80 patients, published in the journal Science Translational Medicine in 2013, showed a vaccine could retrain their immune systems. Experts described the results as a “significant step”. In patients with type 1 diabetes, the immune system destroys beta cells in the pancreas. This means the body is unable to produce enough insulin, and regular injections of the hormone are needed throughout life. The vaccine was targeted to the specific white blood cells which attack beta cells. After patients were given weekly injections for three months, the levels of those white blood cells fell.
Blood tests also suggested that beta cell function was better in patients given the vaccine than in those treated only with insulin. The research is at an early stage and trials in larger groups of people, which measure the long-term effect of the vaccine, are still needed.


Life-style vaccine:

Life-style vaccines are defined as vaccines to manage chronic conditions in healthy individuals. Three major examples of such candidate vaccines are discussed: contraceptive vaccines, vaccines to treat drug addiction, and anti-caries vaccines.

1. Contraceptive vaccines:

Current approaches to contraception are essentially based on hormonal control, condoms and surgery. Vaccination against hormones controlling reproduction is a promising immunological approach to contraception. It may rely on hormones that control the production of gametes or are involved in the survival of the fertilized egg. On the other hand, contraceptive vaccines could also induce antibodies against surface proteins of the gametes in order to block fertilization of ova by sperm.

2. Vaccination and drug addiction:

Two examples of drug addiction will be described here: addiction to cocaine and nicotine. Cocaine addiction and nicotine dependence are major health concerns, and new strategies for the treatment of drug abuse are urgently needed. Addiction usually depends on activation of receptors expressed by cells of the central nervous system. Vaccination is expected to induce antibodies against systemic drug molecules and thereby block their further uptake into the brain. However, cocaine and nicotine are molecules too small to be immunogenic; they can be considered as haptens and need to be linked to a carrier. Moreover, a vaccine has to be formulated with appropriate adjuvants in order to induce a high and long-lasting antibody response to neutralize these drugs.


TA-CD is an active vaccine developed by the Xenova Group which is used to negate the effects of cocaine, making it suitable for use in the treatment of addiction. It is created by combining norcocaine with inactivated cholera toxin. It works in much the same way as a regular vaccine: the large carrier protein attaches to cocaine and stimulates a response from antibodies, which destroy the molecule. This also prevents the cocaine from crossing the blood–brain barrier, negating the euphoric high and rewarding effect of cocaine caused by stimulation of dopamine release in the mesolimbic reward pathway. The vaccine does not affect the user’s “desire” for cocaine, only the physical effects of the drug.

3. Vaccination and dental caries:

Dental caries is the most common infectious disease affecting humans. The main causative agents are a group of streptococcal species collectively referred to as the mutans streptococci. Streptococcus mutans has been identified as the major etiological agent of human dental caries. The first step in the initiation of infection by this pathogenic bacterium is its attachment to a suitable receptor. Two groups of proteins from mutans streptococci represent primary candidates for a human caries vaccine: (i) glucosyltransferase enzymes, which synthesize adhesive glucans and allow microbial accumulation; and (ii) cell-surface fibrillar proteins that mediate adherence to the salivary pellicle. It is hypothesized that a mucosal vaccine against a combination of S. mutans surface proteins would protect against dental caries by inducing specific salivary immunoglobulin A (IgA) antibodies. These IgAs may reduce bacterial pathogenesis and adhesion to the tooth surface by affecting several adhesins simultaneously.



Vaccine production, adjuvants and preservatives:


Vaccine production:

There are only about 30 different vaccine types (but many more product formulations), compared with approximately 20,000 drugs. Accordingly, there are relatively few vaccine manufacturers and a limited number of countries where vaccines are produced. Most countries use vaccines that are imported from elsewhere. To support countries with limited national regulatory authority (NRA) capacity, WHO provides a system of vaccine prequalification that has been adopted as a standard for procurement by United Nations agencies and some countries. Alternatively, countries can procure their vaccines directly on the domestic or international market.


Vaccines are made using the disease-causing virus or bacterium, but in a form that will not harm your child. Instead, the weakened, killed, or partial virus or bacterium prompts your child’s immune system to develop antibodies, or defenders, against the disease. Once it is determined how the virus or bacterium will be modified, vaccines are created through a general three-step process:

1. The antigen is generated. Viruses are grown either in primary cells (e.g., chicken eggs for the influenza vaccine) or on continuous cell lines (e.g., cultured human cells for the hepatitis B vaccine); bacteria are grown in bioreactors (e.g., for the Hib vaccine).

2. The antigen is isolated from the cells used to create it.

3. The vaccine is made by adding adjuvants, stabilizers and preservatives. Adjuvants increase the immune response to the antigen; stabilizers increase the vaccine’s storage life; and preservatives allow for the use of multi-dose vials.

It is important to remember that vaccines undergo rigorous safety testing prior to FDA approval and are continually monitored for safety afterwards. The vaccine production process involves several manufacturer-funded testing phases over many years to ensure that a vaccine is safe to administer. Vaccines are also studied in combination, to confirm that they can be administered together and still protect your child.


Three ways to make a vaccine:

While processes may differ slightly from company to company, vaccines are generally made in one of three ways: by weakening (attenuating) the live pathogen, by inactivating (killing) it, or by using only a purified part (subunit) of it.


Combination vaccines are harder to develop and produce, because of potential incompatibilities and interactions among the antigens and other ingredients involved. Vaccine production techniques are evolving. Cultured mammalian cells are expected to become increasingly important, compared to conventional options such as chicken eggs, due to greater productivity and a lower incidence of contamination problems. Recombinant technology that produces genetically detoxified vaccines is expected to grow in popularity for the production of bacterial vaccines that use toxoids. Combination vaccines are expected to reduce the quantities of antigens they contain, and thereby decrease undesirable interactions, by using pathogen-associated molecular patterns. In 2010, India produced 60 percent of the world’s vaccines, worth about $900 million (€670 million).


How influenza vaccine is produced:

Influenza vaccine is the best available protection against the disease. Among all vaccines, however, the process of making influenza vaccines is considered uniquely complicated and difficult. One reason is that the constantly evolving nature of influenza viruses requires continuous global monitoring and frequent reformulation of the vaccine strains. Another reason is that the rapid spread of these viruses during seasonal epidemics, as well as the occasional pandemic, means that each step in the vaccine process must be completed within very tight time frames if vaccine is to be manufactured and delivered in time. In response to the realities imposed by influenza, a highly functional process has evolved over decades in which the public and private sectors work together to develop and produce influenza vaccine.


Manufacturing methods of flu vaccine:

1. For the inactivated vaccines, the virus is grown by injecting it, along with some antibiotics, into fertilized chicken eggs. About one to two eggs are needed to make each dose of vaccine. The virus replicates within the allantois of the embryo, which is the equivalent of the placenta in mammals. The fluid in this structure is removed, and the virus is purified from it by methods such as filtration or centrifugation. The purified viruses are then inactivated (“killed”) with a small amount of a disinfectant. The inactivated virus is treated with detergent to break it up into particles, and the broken capsid segments and released proteins are concentrated by centrifugation. The final preparation is suspended in sterile phosphate-buffered saline ready for injection. This vaccine mainly contains the killed virus but might also contain tiny amounts of egg protein and the antibiotics, disinfectant and detergent used in the manufacturing process. In multi-dose versions of the vaccine, the preservative thimerosal is added to prevent growth of bacteria. In some versions of the vaccine used in Europe and Canada, an adjuvant is also added; this contains a fish oil called squalene, vitamin E and an emulsifier called polysorbate 80.

2. For the live vaccine, the virus is first adapted to grow at 25 °C (77 °F) and then grown at this temperature until it loses the ability to cause illness in humans, since causing illness would require the virus to grow at our normal body temperature of 37 °C (99 °F). Multiple mutations are needed for the virus to grow at cold temperatures, so this process is effectively irreversible: once the virus has lost virulence (become “attenuated”), it will not regain the ability to infect people. To make the vaccine, the attenuated virus is grown in chicken eggs as before. The virus-containing fluid is harvested and the virus purified by filtration; this step also removes any contaminating bacteria. The filtered preparation is then diluted into a solution that stabilizes the virus. This solution contains monosodium glutamate, potassium phosphate, gelatin, the antibiotic gentamicin, and sugar. A newer method of producing influenza virus is used for the Novartis vaccine Optaflu. In this vaccine the virus is grown in cell culture instead of in eggs. This method is faster than the classic egg-based system and produces a purer final product. Importantly, there are no traces of egg proteins in the final product, so the vaccine is safe for people with egg allergies.


What viruses are recommended by WHO to be included in influenza vaccines for use in the 2015-16 northern hemisphere influenza season?

WHO recommends that influenza vaccines for use in the 2015-16 northern hemisphere influenza season contain the following viruses:

– an A/California/7/2009 (H1N1)pdm09-like virus

– an A/Switzerland/9715293/2013 (H3N2)-like virus

– a B/Phuket/3073/2013-like virus.

It is recommended that quadrivalent vaccines containing two influenza B viruses contain the above three viruses and a B/Brisbane/60/2008-like virus.


Vaccine antigen production in transgenic plants: strategies, gene constructs and perspectives:

Stable integration of a gene into the plant nuclear or chloroplast genome can transform higher plants (e.g. tobacco, potato, tomato, banana) into bioreactors for the production of subunit vaccines for oral or parenteral administration. This can also be achieved by using recombinant plant viruses as transient expression vectors in infected plants. The use of plant-derived vaccines may overcome some of the major problems encountered with traditional vaccination against infectious diseases, autoimmune diseases and tumours. They also offer a convenient tool against the threat of bio-terrorism.


Vaccine components:

Any vaccine consists of two parts: the active ingredient and excipients. The active ingredient is the immunogen (antigen) that stimulates immunity; an excipient is any vaccine component besides the immunogen that helps improve the vaccine’s efficacy, safety or shelf life.


Vaccine Excipients (components of vaccine besides immunogen):

Beside the active vaccine itself, the following excipients are commonly present in vaccine preparations:

Type of Ingredient | Examples | Purpose
Preservatives | Thimerosal (only in multi-dose vials of flu vaccine) | To prevent contamination
Adjuvants | Aluminum salts | To help stimulate the body’s response to the antigens
Stabilizers | Sugars, gelatin | To keep the vaccine potent during transportation and storage
Residual cell culture materials | Egg protein | To grow enough of the virus or bacteria to make the vaccine
Residual inactivating ingredients | Formaldehyde | To kill viruses or inactivate toxins during the manufacturing process
Residual antibiotics | Neomycin | To prevent contamination by bacteria during the vaccine manufacturing process


Vaccines contain live viruses, killed viruses, purified viral proteins, inactivated bacterial toxins, or bacterial polysaccharides. In addition to these immunogens, vaccines often contain other substances. For example, vaccines may contain preservatives that prevent bacterial or fungal contamination (e.g., thimerosal); adjuvants that enhance antigen-specific immune responses (e.g., aluminum salts); or additives that stabilize live, attenuated viruses (e.g., gelatin, human serum albumin). Furthermore, vaccines may contain residual quantities of substances used during the manufacturing process (e.g., formaldehyde, antibiotics, egg proteins, yeast proteins). Researchers reviewed data on thimerosal, aluminum, gelatin, human serum albumin, formaldehyde, antibiotics, egg proteins, and yeast proteins. Both gelatin and egg proteins are contained in vaccines in quantities sufficient to induce rare instances of severe, immediate-type hypersensitivity reactions. However, quantities of mercury, aluminum, formaldehyde, human serum albumin, antibiotics, and yeast proteins in vaccines have not been found to be harmful in humans or experimental animals. Parents should be reassured that quantities of mercury, aluminum, and formaldehyde contained in vaccines are likely to be harmless on the basis of exposure studies in humans or experimental studies in animals. Although severe anaphylactic reactions may occur rarely after receipt of vaccines that contain sufficient quantities of egg proteins (e.g., influenza, yellow fever) or gelatin (e.g., MMR), children who are at risk for severe infection with influenza can be desensitized to influenza vaccine, and gelatin-specific allergies are very rare. Immediate-type hypersensitivity reactions to neomycin or yeast proteins have not been clearly documented and remain theoretical.



Preservatives may be defined as compounds that kill or prevent the growth of microorganisms, particularly bacteria and fungi. They are used in vaccines to prevent microbial growth in the event that the vaccine is accidentally contaminated, as might occur with repeated puncture of multi-dose vials. In some cases, preservatives are added during manufacture to prevent microbial growth; with changes in manufacturing technology, however, the need to add preservatives during the manufacturing process has decreased markedly. The United States Code of Federal Regulations (the CFR) requires, in general, the addition of a preservative to multi-dose vials of vaccines; indeed, worldwide, preservatives are routinely added to multi-dose vials of vaccine. Tragic consequences have followed the use of multi-dose vials that did not contain a preservative and have served as the impetus for this requirement.

One particularly telling incident from Australia is described by Sir Graham S. Wilson in his classic book, The Hazards of Immunization. In January 1928, in the early stages of an immunization campaign against diphtheria, Dr. Ewing George Thomson, Medical Officer of Health of Bundaberg, began the injection of children with toxin-antitoxin mixture. The material was taken from an India-rubber-capped bottle containing 10 mL of TAM. On the 17th, 20th, 21st, and 24th of January, Dr. Thomson injected a total of 21 children subcutaneously without ill effect. On the 27th a further 21 children were injected. Of these children, eleven died on the 28th and one on the 29th. This disaster was investigated by a Royal Commission, and the final sentence in the summary of its findings reads as follows: “The consideration of all possible evidence concerning the deaths at Bundaberg points to the injection of living staphylococci as the cause of the fatalities.”

From this experience, the Royal Commission recommended that biological products in which the growth of a pathogenic organism is possible should not be issued in containers for repeated use unless there is a sufficient concentration of antiseptic (preservative) to inhibit bacterial growth.


Several preservatives are available, including thiomersal, phenoxyethanol, and formaldehyde. Thiomersal is more effective against bacteria, has a better shelf-life, and improves vaccine stability, potency, and safety; but in the U.S., the European Union, and a few other affluent countries, it is no longer used as a preservative in childhood vaccines, as a precautionary measure due to its mercury content. Although controversial claims have been made that thiomersal contributes to autism, no convincing scientific evidence supports these claims. Over the past several years, because of an increasing awareness of the theoretical potential for neurotoxicity of even low levels of organomercurials, and because of the increased number of thimerosal-containing vaccines that had been added to the infant immunization schedule, concerns about the use of thimerosal in vaccines and other products have been raised. Indeed, because of these concerns, the Food and Drug Administration has worked with, and continues to work with, vaccine manufacturers to reduce or eliminate thimerosal from vaccines. Thimerosal has been removed from or reduced to trace amounts in all vaccines routinely recommended for children 6 years of age and younger, with the exception of inactivated influenza vaccine. A preservative-free version of the inactivated influenza vaccine (containing only trace amounts of thimerosal) is available in limited supply at this time for use in infants, children and pregnant women. Some vaccines, such as Td, which is indicated for older children (≥ 7 years of age) and adults, are also now available in formulations that are free of thimerosal or contain only trace amounts. Vaccines with trace amounts of thimerosal contain 1 microgram or less of mercury per dose.


Thimerosal (thiomersal):

•Thimerosal is a preservative that is used in the manufacturing process of some vaccines and other medicines to prevent the growth of bacteria and fungi, which could otherwise cause illness or injury.

•It metabolizes into ethylmercury, not methylmercury, a mistake commonly made by anti-vaxxers who claim that the amount of mercury that used to be in vaccine exceeded EPA exposure guidelines of 0.1mcg/kg/day. Those guidelines are for methylmercury, a compound that has a half-life in the body of several weeks to months and is often found in fish or other environmental exposures. Ethylmercury, on the other hand, has a half-life of a few days to about a week, meaning that it is not in the body long enough for it to build up to toxic levels from vaccination to vaccination.

•Concern has been raised about ethylmercury in thimerosal causing damage to the brain. However, this compound does not readily cross the blood-brain barrier. Some may counter this by talking about inorganic mercury, but this form of mercury, also, does not readily cross the barrier; only prolonged (regular) exposure to mercury leads to accumulation in the central nervous system. Moreover, total mercury levels that do accumulate in the brain are cleared much more rapidly after ethylmercury exposure than after methylmercury exposure (though limitations of the linked study are that it was done in animals and overall dosing may not accurately reflect dosing in humans).

•Using the EPA guidelines for methylmercury, a 3.2 kg newborn could be exposed to 0.32mcg of methylmercury every day without adverse health effects.  This amounts to 116.8mcg of methylmercury in the course of a year, assuming an exposure of 0.1mcg/kg every single day.  This also assumes that the child does not gain any weight over the course of that year, which would drive the adverse effect-free exposure limit higher.  Keeping in mind that ethylmercury is eliminated significantly faster than methylmercury, the maximum 25mcg/dose of ethylmercury in a thimerosal-containing flu shot is much lower than the EPA one-year exposure. Therefore, unless the child is regularly exposed to other sources of mercury, it is highly unlikely that the minute amounts in a flu vaccine will cause any adverse developmental effects.  But, for those who are still concerned about thimerosal, thimerosal-free versions of the flu vaccine are available.
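As a sanity check, the exposure arithmetic above can be reproduced in a few lines of Python. This is illustrative bookkeeping only (the variable names are ours; the figures are the ones quoted in the text):

```python
# Check the EPA methylmercury exposure arithmetic quoted above.
EPA_RFD_MCG_PER_KG_DAY = 0.1   # EPA reference dose for methylmercury
NEWBORN_KG = 3.2               # newborn weight used in the text

daily_limit_mcg = EPA_RFD_MCG_PER_KG_DAY * NEWBORN_KG   # 0.32 mcg/day
yearly_limit_mcg = daily_limit_mcg * 365                # 116.8 mcg/year

# Maximum ethylmercury in one thimerosal-containing flu shot.
max_flu_shot_dose_mcg = 25

print(round(daily_limit_mcg, 2))    # 0.32
print(round(yearly_limit_mcg, 1))   # 116.8
print(max_flu_shot_dose_mcg < yearly_limit_mcg)  # True
```

Note that this comparison is conservative in the vaccine's favor, since the EPA figure is for the more persistent methylmercury and assumes no weight gain over the year.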

•Some people say that you get much less mercury when you eat it than when you inject it.  Looking at the most common form of ingested mercury (methylmercury), which is found in varying amounts in nearly all seafood, we will see that there is actually greater exposure from eating 6 oz. of white tuna, for example, than from receiving one flu shot, the only recommended vaccine that has greater than trace amounts (though thimerosal-free versions are available).  According to the DHHS Agency for Toxic Substances and Disease Registry, roughly 95% of ingested methylmercury is absorbed via the gastrointestinal tract (stomach and intestines), from which it can then spread to other body organs.  White albacore tuna contains about 0.407 ppm (mcg/g) methylmercury.  A 6 oz. (170 g) can of white tuna would then contain on average about 69.19 mcg of mercury (170 g X 0.407 mcg/g = 69.19 mcg).  Eating the full 6 oz. can, then, would mean that you are absorbing 65.73 mcg of methylmercury.  That’s over two and a half times the amount of mercury from a thimerosal-containing flu vaccine (which tops out at 25 mcg/dose).  And remember, the methylmercury from the tuna sticks around much longer than the ethylmercury from the vaccine.
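The tuna comparison works out as follows; a minimal Python sketch using only the figures quoted in the bullet above (variable names are ours):

```python
# Compare methylmercury absorbed from one can of tuna with the
# maximum ethylmercury in one thimerosal-containing flu shot.
TUNA_MCG_PER_G = 0.407   # average methylmercury in white albacore tuna
CAN_G = 170              # a 6 oz. can
GI_ABSORPTION = 0.95     # fraction absorbed via the GI tract (ATSDR figure)
FLU_SHOT_MCG = 25        # maximum ethylmercury per flu-shot dose

mercury_in_can_mcg = TUNA_MCG_PER_G * CAN_G        # total in the can
absorbed_mcg = mercury_in_can_mcg * GI_ABSORPTION  # amount absorbed
ratio = absorbed_mcg / FLU_SHOT_MCG                # tuna vs. flu shot

print(round(mercury_in_can_mcg, 2))  # 69.19
print(round(absorbed_mcg, 2))        # 65.73
print(round(ratio, 1))               # 2.6
```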

•It was removed from the final product of nearly all U.S. vaccines around 2001/2002. This was a political move, due in large part to public pressure, rather than one based on sound science, and it was a recommendation rather than a regulatory requirement. A handful of inconclusive studies suggesting problems with thimerosal prompted a “better safe than sorry” approach from the FDA while the issue was investigated by the FDA, CDC and others. No follow-up studies have found any health risks beyond local hypersensitivity.

•Some vaccines still use it during the manufacturing process, but remove it from the final product, leaving, at most, trace amounts. The influenza vaccine still uses thimerosal, though thimerosal-free versions are available.

•Despite the removal of thimerosal from vaccines, resulting in exposure levels lower than anytime in the past, autism rates have not declined, suggesting that there is no connection between thimerosal and autism.

•To date, no properly controlled study has shown a causal link between thimerosal and autism.


Thimerosal was removed from all childhood vaccines in 2001 with the exception of inactivated flu vaccine in multi-dose vials. However, thimerosal has been removed from all single-dose preparations of flu vaccine for children and adults. There has never been thimerosal in live attenuated flu vaccine or recombinant flu vaccine. No acceptable alternative preservative has yet been identified for multi-dose flu vaccine vials.


Thiomersal is a toxic compound; there is no denying that. But let’s get back to the math. The toxicity of compounds is measured through an analysis called the dose-response relationship, which describes the change in effect on an organism caused by differing doses of a compound after a certain exposure time. Table salt is tasty and safe in small amounts, but could kill you if taken in huge amounts. The dose-response relationship provides a graph that mathematically establishes what amounts of a compound cause what effects. This would seem to be a logical and easily understood concept, but for many individuals, a bad substance is always bad.

First of all, the half-life of thiomersal in blood is around 2.2 days. That might seem long, but it means half is gone in a couple of days, cleared out by the kidneys. It does not accumulate. But the math is even more telling. The flu vaccine, given once a year, has a maximum dose of 25 micrograms of mercury (not elemental mercury). According to the thiomersal Material Safety Data Sheet (MSDS), the LD50, that is, the approximate dose at which 50% of organisms will die (in this case mice), is 5011 mg/kg body weight. Suppose a 20 kg child gets 25 micrograms of non-elemental mercury in one injection once a year. The theoretical LD50 dose for that same child would be around 100 grams of thiomersal, or about 4 million times the amount of thiomersal in one vaccine dose—that is, you would have to give your child roughly 4 million injections at once to approach a potentially lethal dose. And since dose-response relationships are not linear, a dose that far below the threshold carries essentially no risk of death. All of this assumes pediatric vaccines contain thiomersal in the first place, which they don’t, so the argument is moot for childhood immunization.
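For readers who want to verify the LD50 comparison, here is a small Python sketch of the arithmetic (figures as quoted above; this is illustrative bookkeeping, not a toxicological model):

```python
# Reproduce the LD50 comparison from the paragraph above.
LD50_MG_PER_KG = 5011   # mouse LD50 for thiomersal (MSDS figure)
CHILD_KG = 20           # child weight used in the text
DOSE_MCG = 25           # maximum thiomersal-derived mercury per flu dose

ld50_dose_mg = LD50_MG_PER_KG * CHILD_KG   # theoretical lethal dose, mg
dose_mg = DOSE_MCG / 1000                  # one vaccine dose, mg
ratio = ld50_dose_mg / dose_mg             # how many doses reach the LD50

print(ld50_dose_mg)   # 100220  (~100 grams)
print(round(ratio))   # 4008800 (~4 million doses)
```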
But more than all that, we have solid scientific data that show us that thiomersal is totally unrelated to autism, and is completely safe in vaccines. This illogical removal of thiomersal from vaccines makes it nearly impossible to have multi-use vials, so every vaccine has to be in a single-use prefilled syringe, which has rapidly driven up the costs of vaccines.


A recent study published in the Lancet medical journal showed the blood mercury levels of infants who received vaccines that contained thimerosal were well below all the safety levels set by government agencies. The Lancet study looked at 61 infants, most having blood-mercury levels below 2 nanograms per milliliter after vaccination; the highest safety limit, set by the Environmental Protection Agency, is 5.8 nanograms. Some critics counter that the study was too small and that delays in testing some of the infants may have missed the peak blood-mercury levels. In an effort to address some of these issues, another study of 200 children is going on in Argentina.



An adjuvant (from Latin adiuvare: to aid) is a pharmacological and/or immunological agent that modifies the effect of other agents. Adjuvants may be added to a vaccine to boost the immune response, yielding higher antibody levels and longer-lasting protection while minimizing the amount of injected foreign material. Adjuvants may also be used to steer the immune response toward particular cell types of the immune system, for example by activating T cells instead of antibody-secreting B cells, depending on the type of vaccine. Adjuvants are also used in the production of antibodies from immunized animals. Adjuvants fall into two classes: delivery systems (such as cationic microparticles) and immune potentiators (such as cytokines or PRR ligands). Delivery systems can be used to concentrate and display antigens in repetitive patterns, to help localize antigens and immune potentiators, and to target the vaccine’s antigens to antigen-presenting cells, while immune potentiators activate the innate immune system directly. There are different classes of adjuvants that can push the immune response in different directions, but the most commonly used adjuvants include aluminum hydroxide and paraffin oil.


Immunologic adjuvants are added to vaccines to stimulate the immune system’s response to the target antigen, but do not in themselves confer immunity. Adjuvants can act in various ways in presenting an antigen to the immune system. Adjuvants can act as a depot for the antigen, presenting the antigen over a long period of time, thus maximizing the immune response before the body clears the antigen. Examples of depot type adjuvants are oil emulsions. Adjuvants can also act as an irritant which causes the body to recruit and amplify its immune response. A tetanus, diphtheria, and pertussis vaccine, for example, contains minute quantities of toxins produced by each of the target bacteria, but also contains some aluminium hydroxide. Such aluminium salts are common adjuvants in vaccines and have been used in vaccines for over 80 years. The body’s immune system develops an antitoxin to the bacteria’s toxins, not to the aluminium, but would not respond enough without the help of the aluminium adjuvant.


Types of adjuvants:

•Inorganic compounds: alum, aluminum hydroxide, aluminum phosphate, calcium phosphate hydroxide

•Mineral oil: paraffin oil

•Bacterial products: killed bacteria Bordetella pertussis, Mycobacterium bovis, toxoids

•Nonbacterial organics: squalene, thimerosal

•Delivery systems: detergents (Quil A)

•Cytokines: IL-1, IL-2, IL-12

•Combination: Freund’s complete adjuvant, Freund’s incomplete adjuvant


Alum as an adjuvant:

Alum is the most commonly used adjuvant in human vaccination. It is found in numerous vaccines, including diphtheria-tetanus-pertussis, human papillomavirus, and hepatitis vaccines. For almost 80 years, aluminium salts (referred to as ‘alum’) have been the only adjuvant in use in human vaccines. Only in the last two decades, have novel adjuvants (MF59®, AS04) been introduced in the formulation of new licensed vaccines. As our understanding of the mechanisms of ‘immunogenicity’ and ‘adjuvancy’ increases, new adjuvants and adjuvant formulations are being developed.


Mechanisms of adjuvant action:

Adjuvants may exert their effects through different mechanisms. Some adjuvants, such as alum and emulsions (e.g. MF59®), function as delivery systems by generating depots that trap antigens at the injection site, providing slow release in order to continue the stimulation of the immune system. These adjuvants enhance antigen persistence at the injection site and increase recruitment and activation of antigen-presenting cells (APCs). Particulate adjuvants (e.g. alum) have the capability to bind antigens to form multi-molecular aggregates, which encourages uptake by APCs. Some adjuvants are also capable of directing antigen presentation by the major histocompatibility complexes (MHC). Other adjuvants, essentially ligands for pattern recognition receptors (PRR), act by inducing innate immunity, predominantly targeting the APCs and consequently influencing the adaptive immune response. Adjuvants accomplish this task by mimicking specific sets of evolutionarily conserved molecules, so-called PAMPs, which include liposomes, lipopolysaccharide (LPS), molecular cages for antigen, components of bacterial cell walls, and endocytosed nucleic acids such as double-stranded RNA (dsRNA), single-stranded DNA (ssDNA), and unmethylated CpG dinucleotide-containing DNA. Because immune systems have evolved to recognize these specific antigenic moieties, the presence of an adjuvant in conjunction with the vaccine can greatly increase the innate immune response to the antigen by augmenting the activities of dendritic cells (DCs), lymphocytes, and macrophages, mimicking a natural infection. Alum provokes a strong Th2 response, but is rather ineffective against pathogens that require Th1 cell-mediated immunity. Alum induces the immune response by a depot effect and activation of APCs.

Recently, the NLRP3 inflammasome has been linked to the immunostimulatory properties of alum, although its role in adjuvant-induced antibody responses remains controversial. Emulsions (either oil-in-water or water-in-oil), such as Incomplete Freund’s Adjuvant (IFA) and MF59®, can trigger depot generation and induction of MHC responses. IFA induces a predominantly Th2-biased response with some Th1 cellular response. MF59® is a potent stimulator of both cellular (Th1) and humoral (Th2) immune responses. However, the precise mode of action of emulsion-based adjuvants is still unclear. A complication with emulsion-based adjuvants is their potential to induce autoimmunity.


Aluminum salts include aluminum hydroxide, aluminum phosphate, and potassium aluminum sulfate (alum). Aluminum-containing vaccines are prepared by adsorption of antigens onto aluminum hydroxide or aluminum phosphate gels or by precipitation of antigens in a solution of alum. Aluminum salts were found initially to enhance immune responses after immunization with diphtheria and tetanus toxoids in studies performed in the 1930s, 1940s, and 1950s. The safety of aluminum has been established by experience during the past 80 years, with hundreds of millions of people inoculated with aluminum-containing vaccines. Adverse reactions including erythema, subcutaneous nodules, contact hypersensitivity, and granulomatous inflammation have been observed rarely. Aluminum-containing vaccines are not the only source of aluminum exposure for infants. Because aluminum is one of the most abundant elements in the earth’s crust and is present in air, food, and water, all infants are exposed to aluminum in the environment. For example, breast milk contains approximately 40 μg of aluminum per liter, and infant formulas contain an average of approximately 225 μg of aluminum per liter. Vaccines contain quantities of aluminum similar to those contained in infant formulas. However, because large quantities of aluminum can cause serious neurologic effects in humans, safety guidelines have been established.


For determining the quantity of aluminum below which safety is likely, data were generated in mice that were inoculated orally with various quantities of aluminum lactate. No adverse reactions were observed when mice were fed quantities of aluminum as high as 62 mg/kg/day. By applying uncertainty factors of 3 (for extrapolation to humans) and 10 (for human variability), the ATSDR concluded that the minimum risk level for exposure to aluminum was 2 mg/kg/day. The half-life of elimination of aluminum from the body is approximately 24 hours. Therefore, the burden of aluminum to which infants are exposed in food and vaccines is clearly less than the guideline established by the ATSDR and far less than that found to be safe in experimental animals.
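The ATSDR derivation above reduces to one division; a minimal Python sketch, using the uncertainty factors quoted in the text (variable names are ours):

```python
# Derive the ATSDR minimal risk level for aluminum from the mouse data.
NOAEL_MG_PER_KG_DAY = 62   # highest dose with no adverse effects in mice
UF_ANIMAL_TO_HUMAN = 3     # uncertainty factor: extrapolation to humans
UF_HUMAN_VARIABILITY = 10  # uncertainty factor: human variability

mrl = NOAEL_MG_PER_KG_DAY / (UF_ANIMAL_TO_HUMAN * UF_HUMAN_VARIABILITY)
print(round(mrl, 1))   # 2.1, which the ATSDR rounds to 2 mg/kg/day
```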


Vaccines contain aluminum in a salt form. Anti-vaxers claim this is toxic, and some will cite that 4ppm will cause blood to coagulate. However, individuals are not exposed to such amounts of aluminum in a single vaccination visit. Below are the vaccines containing aluminum, with the corresponding parts per million (ppm) for an infant (~251 mL of blood in the body) and an 80lb. child (~4000 mL of blood); note the two numbers for DTaP represent extreme ranges of aluminum content:

Vaccine | ppm in infant | ppm in child | Age received (in months)
DTaP (170 mcg) | 0.677 | 0.043 | 2, 4, 6, w/ final ~4-6 yrs
DTaP (625 mcg) | 2.490 | 0.156 | (same schedule)
Hep A | 0.996 | 0.063 | 12, w/ final ~6 mo. later
Hep B | 0.996 | 0.063 | birth, 1 or 2, final at 6+
HiB | 0.896 | 0.056 | 2, 4
HPV | 0.896 | 0.056 | 11 or 12 yrs., then 2, 6 mo. later
Pediatrix | 3.386 | 0.213 | 2, 4, 6 (in lieu of DTaP, IPV and Hep B)
Pentacel | 1.315 | 0.083 | 2, 4, 6, 15-18 (in lieu of DTaP, IPV and HiB)
Pneumococcus | 0.498 | 0.031 | 2, 4, 6, 12-15


CFR 610.15 lists the maximum amount of aluminum per dose in vaccines, depending on the method of calculation.  This ranges from 0.85mg (that’s milligrams) to 1.25 mg.  HepB vaccine contains 250 mcg (that’s micrograms) per dose, or 0.25mg.
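The ppm figures in the table above follow from dividing a dose's aluminum content by the blood volume, treating 1 mL of blood as weighing about 1 g (an assumption of this back-of-the-envelope method; the blood volumes are the ones used in the table). A short Python sketch, which also checks one dose against the CFR 610.15 limit:

```python
# Reproduce the ppm figures from the aluminum table and check the
# Hep B dose against the CFR 610.15 per-dose limit.
INFANT_BLOOD_ML = 251   # approximate infant blood volume
CHILD_BLOOD_ML = 4000   # approximate blood volume of an 80 lb. child

def ppm(aluminum_mcg, blood_ml):
    """mcg of aluminum per mL (~g) of blood, i.e. parts per million."""
    return round(aluminum_mcg / blood_ml, 3)

print(ppm(170, INFANT_BLOOD_ML))  # 0.677  (DTaP, low end)
print(ppm(625, INFANT_BLOOD_ML))  # 2.49   (DTaP, high end)
print(ppm(250, INFANT_BLOOD_ML))  # 0.996  (Hep B)
print(ppm(170, CHILD_BLOOD_ML))   # DTaP, low end, in the older child

# CFR 610.15 caps aluminum at 0.85-1.25 mg per dose, depending on
# the method of calculation; Hep B contains 250 mcg = 0.25 mg.
hep_b_dose_mg = 250 / 1000
print(hep_b_dose_mg <= 0.85)      # True
```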


This table lists main categories of adjuvants and formulations evaluated in humans:

Adjuvant/formulations Pathogen (antigen) Trial results
Aluminium salts (hydroxide, phosphate, alum) Numerous antigens Licensed for human use. Induction of strong antibody responses
Calcium phosphate DT Was found to be better than Al(OH)3 in a booster trial
SBAS-4/ASO4 (alum + MPL) HBV (HBs antigen), HSV (gD) Increased antibody titres and lymphoproliferative responses when compared with alum, increased seroconversion rate after 2 immunizations
MF59 (stabilized squalene/sater) Flu (split trivalent) Component of a licensed influenza vaccine. Increase vaccine immunogenicity in young adults and in elderly (HAI titre). Safe (only mild local reactions), even after repeated injections in elderly
HBV(rPreS2-S) More immunogenic than alum-adsorbed licensed hepatitis B vaccine
HSV-2 (rgB + rgD) Prophylactic vaccination: humoral and cellular immunity after 3 injections is superior to natural immunity after HSV-2 infection. A therapeutic vaccination trial in patients with recurrent genital herpes showed no improvement in rate of recurrence but both severity and duration of 1st outbreak were reduced
HIV1 (gp120), CMV (rgB) Improved immunogenicity over alum
MF59 + MTP-PE Flu (trivalent split), HIV1 (env) MTP-PE increases reactogenicity, with no overall improvement in terms of immunogenicity (equivalent to MF59)
QS21 (purified saponin from Quillaja saponaria) Malaria (SPf), HIV (gp120), melanoma, pneumo conj Some local reactions. Enhanced antibody responses. Limited cellular responses in humans, despite good results obtained in animal models. QS21 enhances by 2-fold the booster effect (antibody response) of second dose of conjugate polysaccharide vaccine against Neisseria pneumoniae
SBAS-2/ASO2 (squalene/water + MPL + QS21) Malaria (RTS,S) High anti-CSP titres (better than with squalene/water or with MPL + alum) after 3 immunizations. Short-lived protection (less than 6 months) of 7 out of 8 naive individuals against challenge (infected mosquito bites). RTS,S-specific lymphoproliferative and antibody responses but no induction of CD8+ CTLs
HIV-1 (rgp120) Increased seroconversion rate in seronegative subjects after single immunization (superior to MPL + QS21 or alum). Strong cell-mediated immunity (T-cell proliferation; superior to MPL + QS21), but no CD8+ CTLs. No detectable neutralizing antibodies against primary isolates
Incomplete Freund adjuvant (IFA, stabilized water/Drakeol) gp120-depleted inactivated HIV-1 REMUNE vaccine. Increased anti-p24 titres and DTH responses. In seropositive subjects: increased lymphoproliferation and β-chemokine (Rantes, MIP-1α, MIP-1β) production following p24 stimulation
Melanoma (gp100) Induction of T-cell responses (evaluated by ELISPOT/IFN production) against gp100 HLA A2 restricted epitopes
Montanide ISA51 (stabilized water/Drakeol) HIV-1 (Tat toxoid) Well tolerated. Increased anti-Tat antibody titres in 100% of the subjects. DTH response and lymphoproliferation to Tat in 50% of the subjects
Montanide ISA720 (stabilized water/squalene) Malaria (MSP1, MSP2, Resa AMA1) Well tolerated (minor local effects – tenderness, swelling and discomfort of use). Low antibody responses (equivalent to alum, despite superior antibody responses observed in animals). Strong lymphoproliferation
Monophosphoryl lipid A (MPL) Various antigens Well tolerated in humans when administered in association with bacterial antigens or TAAs. Limited increase of cellular responses
Detox (stabilized squalene/water + MPL + CWS) Malaria (R32NS18) Some side-effects in malaria naive individuals (tenderness, induration, oedema + malaise and fever). Induction of anti-CSP antibodies after 3 immunizations (better than alum). Protection of 2/11 naive individuals against challenge with infected mosquitoes
Melanoma cell lysates Induction of cellular and humoral responses against melanoma associated antigens. Increase in survival in patients with metastatic melanoma. Vaccine (Melacine) has been registered for this indication in Canada
RC-529 (synthetic MPL-like acylated monosaccharide) HBV (HBs) Th1 and mucosal adjuvant in mice. Found to enhance, in association with alum, antibody responses against HBs antigen in humans (faster and stronger seroconversion)
OM-174 (lipid A derivative, E. coli), OM triacyl Malaria (CSP), cancer OM-174 was found to be safe in a phase I study in cancer patients (i.m. route). OM triacyl adjuvants are synthetic analogues based on a common triacyl motif, which induce maturation of human dendritic cells in vitro
Holotoxins (CT, PT, LT) Various antigens Utilization of detoxified bacterial toxins (mutated toxins or B subunits) devoid of ADP-ribosyltransferase activity. Enhancement of serum and mucosal IgA production. On-going evaluation of CT and LT as adjuvants in patch-based transcutaneous immunization. A flu vaccine with LT mutants is about to be tested intranasally in humans
CpG oligonucleotides Hepatitis B (HBs) Act as potent Th1 adjuvants in mice, chimpanzees and orangutans. Two phase I trials conducted in humans (in association with alum) have shown enhanced antibody responses against the HBs antigen. CTL responses not documented. Based on the motif and chemical backbone, three classes of oligonucleotides are now defined with respect to their distinct capacity to activate either human B-, NK- or dendritic cells in vitro
Cytokines (IL-2, IL-12, GM-CSF) TAAs, malaria (CSP, MSP1), hepatitis A and B Utilization of cytokines as recombinant proteins, with limitations including short biological half-life and some severe toxicity (vascular leak syndrome, hepatotoxicity for IL-2 and IL-12, respectively). Enhancement of antibody responses with GM-CSF. More recently, utilization of recombinant vectors expressing locally (intratumourally) immunostimulatory cytokines (e.g. poxviruses)
Accessory molecules (B7.1) Colorectal cancer (CEA) The accessory molecule (B7.1), which provides co-stimulatory signals to T lymphocytes, has been included in association with the CEA antigen within the canarypox vector ALVAC, thereby enhancing cellular responses
Liposomes (DMPC/Chol) Flu (monovalent split) Well tolerated. No increase in antibody titres (equivalent to vaccine alone). Slight increase in CD8+ CTL response
DC Chol H. pylori (urease) Despite enhanced antibody and Th2/Th1 responses in animal models, no significant enhancement of cellular immune responses in humans
Virosomes Hepatitis A, flu Well tolerated. Rapid seroconversion leading to protective anti-hepatitis A or anti-influenza virus antibodies
ISCOMS (structured complex of saponins and lipids) Flu (trivalent split), HPV16 (E6/E7) Increase of influenza-specific CD8+ CTL response (when compared with flu vaccine alone)
PLGA TT PLGA particles were shown to elicit Th1 (presentation of CTL epitopes) and Th2 responses in mice. On-going trial with the tetanus toxoid: one difficulty is preparing GMP-grade PLGA particles under aseptic conditions

CSP, P. falciparum circumsporozoite; CWS, cell wall skeleton from Mycobacterium phlei; DT, diphtheria toxoid; MTP-PE, muramyl tripeptide dipalmitoyl phosphatidyl ethanolamine; PLGA, poly-(D,L)-lactide-co-glycolic acid; TAAs, tumour associated antigens; TT, tetanus toxoid.


Antigen particulate formulations:

Apart from simply admixing the antigen with the adjuvant, formulation strategies may aim to facilitate the capture and entry of the antigen into antigen presenting cells. For example, formulating T-cell antigens, expressed as peptides, proteins, plasmid DNA or even RNA, into cationic liposomes appears to increase CTL responses in vivo in animal models. Liposomes are artificial, spherical, closed vesicles consisting of one or more lipid bilayers. Liposome-encapsulated antigens are delivered more efficiently to the cytoplasm of APCs, presumably as a result of membrane fusion. Usually, liposomes are made from ester phospholipids. More recently, polar phospholipids from archaebacteria have also been used, leading to so-called ‘archaeosomes’. The latter are based on regularly branched phytanyl chains, 20 or 40 carbons in length. Archaeosomes demonstrate better stability against high temperature, alkaline pH and serum proteins than conventional liposomes. Other formulations being explored include spherulites (multilamellar vesicles made of biocompatible amphiphiles) and transfersomes (highly deformable vesicles which can deliver small molecules non-invasively through the skin). One liposome-based approach has proven successful in humans: antigens derived from the hepatitis A or influenza virus have been incorporated into a mixture of natural and synthetic phospholipids, called virosomes. Such vaccines were shown to be well tolerated and to induce both a 100% seroconversion rate and high antibody titres within 2 weeks.


Immunostimulating complexes (ISCOMS):

1. An alternative vaccine vehicle

2. The antigen is presented in an accessible, multimeric, physically well defined complex

3. Composed of adjuvant (Quil A) and antigen held in a cage-like structure

4. Adjuvant is held to the antigen by lipids

5. Can stimulate CMI

6. Mean diameter 35 nm

In the most successful procedure, a mixture of the plant glycoside saponin, cholesterol and phosphatidylcholine provides a vehicle for presentation of several copies of the protein on a cage-like structure. Such a multimeric presentation mimics the natural situation of antigens on microorganisms. These immunostimulating complexes have activities equivalent to those of the virus particles from which the proteins are derived, thus holding out great promise for the presentation of genetically engineered proteins. Similar considerations apply to the presentation of peptides. It has been shown that building the peptide into a framework of lysine residues, so that eight copies are present instead of one, induces an immune response of much greater magnitude. A novel approach involves the presentation of the peptide in a polymeric form combined with T-cell epitopes. The sequence coding for the foot-and-mouth disease virus peptide was expressed as part of a fusion protein with the gene coding for the hepatitis B core protein. The hybrid protein, which forms spherical particles 22 nm in diameter, elicited levels of neutralizing antibodies against foot-and-mouth disease virus at least a hundred times greater than those produced by the monomeric peptide.



Additives are used to prevent antigens from adhering to the sides of glass vials with a resultant loss in immunogenicity. The types of additives used in vaccines include sugars (e.g., sucrose, lactose), amino acids (e.g., glycine, monosodium salt of glutamic acid), and proteins (e.g., gelatin or human serum albumin). Three issues surround the use of protein additives in vaccines: 1) the observation that immediate-type hypersensitivity reactions are a rare consequence of receiving gelatin-containing vaccines, 2) the theoretical concern that human serum albumin might contain infectious agents, and 3) the theoretical concern that bovine-derived materials used in vaccines might contain the agent associated with bovine spongiform encephalopathy (“mad-cow” disease).



Stabilizers are used to help the vaccine maintain its effectiveness during storage. Vaccine stability is essential, particularly where the cold chain is unreliable. Instability can cause loss of antigenicity and decreased infectivity of live attenuated vaccines (LAV). Factors affecting stability are temperature and the acidity or alkalinity (pH) of the vaccine. Bacterial vaccines can become unstable due to hydrolysis and aggregation of protein and carbohydrate molecules. Stabilizing agents include MgCl2 (for OPV), MgSO4 (for measles), lactose-sorbitol and sorbitol-gelatine.



A diluent is a liquid used to dilute a vaccine to the proper concentration. In vaccines, this is usually sterile saline or water.


Manufacturing residuals:

Residuals are substances used in the production of a vaccine that remain in residual quantities in the final product. They include inactivating agents (e.g., formaldehyde), antibiotics, and cellular residuals (e.g., egg and yeast proteins).


Inactivating Agents:

Inactivating agents separate a pathogen’s immunogenicity from its virulence by eliminating the harmful effects of bacterial toxins or ablating the capacity of infectious viruses to replicate. Examples of inactivating agents include formaldehyde, which is used to inactivate influenza virus, poliovirus, and diphtheria and tetanus toxins; β-propiolactone, which is used to inactivate rabies virus; and glutaraldehyde, which is used to inactivate toxins contained in acellular pertussis vaccines. Formaldehyde deserves special consideration.



Concerns about the safety of formaldehyde have centered on the observation that high concentrations of formaldehyde can damage DNA and cause cancerous changes in cells in vitro. Although formaldehyde is diluted during the manufacturing process, residual quantities may be found in several current vaccines. Fortunately, formaldehyde does not seem to cause cancer in humans, and animals exposed to large quantities of formaldehyde (a single dose of 25 mg/kg or chronic exposure at doses of 80–100 mg/kg/day) do not develop malignancies. The quantity of formaldehyde contained in individual vaccines does not exceed 0.1 mg. This quantity is considered safe for two reasons. First, formaldehyde is an essential intermediate in human metabolism and is required for the synthesis of thymidine, purines, and amino acids; therefore, all humans have detectable quantities of formaldehyde in their circulation (approximately 2.5 μg of formaldehyde/mL of blood). Assuming an average weight of a 2-month-old of 5 kg and an average blood volume of 85 mL/kg, the total quantity of formaldehyde found naturally in an infant’s circulation would be approximately 1.1 mg, a value at least 10-fold greater than that contained in any individual vaccine. Second, quantities of formaldehyde at least 600-fold greater than those contained in vaccines have been given safely to animals.
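The circulating-formaldehyde figure above is simple arithmetic. A minimal sketch, using only the weight, blood-volume and concentration values quoted in the text:

```python
# Back-of-envelope check of the endogenous formaldehyde estimate above.
# All input values are taken from the text; the vaccine limit is the
# stated 0.1 mg maximum per dose.
weight_kg = 5.0                 # average 2-month-old
blood_ml_per_kg = 85.0          # average blood volume
formaldehyde_ug_per_ml = 2.5    # circulating formaldehyde concentration

blood_volume_ml = weight_kg * blood_ml_per_kg                       # 425 mL
circulating_mg = blood_volume_ml * formaldehyde_ug_per_ml / 1000.0  # ~1.06 mg

max_vaccine_dose_mg = 0.1
fold_difference = circulating_mg / max_vaccine_dose_mg              # ~10.6x

print(f"Endogenous formaldehyde: {circulating_mg:.2f} mg "
      f"({fold_difference:.1f}-fold more than any single vaccine dose)")
```

This yields approximately 1.06 mg, consistent with the ~1.1 mg and "at least 10-fold" figures quoted above.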


We are exposed to formaldehyde anyway:

Our primary route of exposure is breathing it, indoors or outdoors. Much of this inhaled formaldehyde comes from car exhaust, tobacco smoke, power plants, forest fires and wood stoves. Outdoors, we are exposed to anywhere from 0 to 100 parts per billion (ppb) every day. Indoors, it can be as much as 500 to 2,000 ppb (temporary housing such as that used after hurricane Katrina measured from 3-590 ppb). To a smaller degree, we ingest it in our food and water (the average American diet contains about 10-20 mg of formaldehyde from things like apples, carrots, pears, milk, etc.), as well as through some exposure via cosmetics. According to the U.S. Environmental Protection Agency, humans can consume 0.2 mg of formaldehyde per kilogram of body weight every day without seeing any adverse effects. When setting this level, the EPA applies a safety buffer of about 10-100 times, meaning that the level at which adverse effects would actually be expected is likely around 2-20 mg/kg per day.


Looking at the recommended schedule of vaccines from the CDC, let’s pick the vaccines from that list that a child might receive in their first 6 years of life (picking the highest amounts, just for illustration):

•HepB – Recombivax – 3 doses (birth, 1-2 mos. and 6-18 mos.) – 7.5μg/dose

•DTaP – Infanrix – 5 doses (2 mos., 4 mos., 6 mos., 15-18 mos. and 4-6 yrs.) – 100μg/dose

•Hib – ActHIB – 3 doses (2 mos., 4 mos. and 12-15 mos.) – 0.5μg/dose

•IPV – IPOL – 4 doses (2 mos., 4 mos., 6-18 mos. and 4-6 yrs.) – 100μg/dose

•Influenza – Fluzone – 7 doses (6 mos., 12 mos. and yearly 2-6 yrs.) – 100μg/dose

•HepA – Havrix – 2 doses (12 mos. and 6-18 mos. after first dose) – 100μg/dose

That’s all of the vaccines on the recommended schedule for 0-6 years that contain formaldehyde. If a child got all of those doses at once (which they never would), they would get a total of 1,824 μg, or 1.824 mg, of formaldehyde. A 3.2 kg (~7 lb) newborn with an average blood volume of 83.3 mL/kg would naturally have, at any given time, about 575-862 μg of formaldehyde circulating in their blood. By the time they are 6 years old (~46 lb or 21 kg), they’ll naturally have 3,562-5,342 μg of formaldehyde in their blood. Bear in mind that the formaldehyde from each shot will not build up in the body from shot to shot, as it is very rapidly (within hours) metabolized and eliminated as formate in the urine or breathed out as CO2. So what’s the most a child might get in a single office visit? That would probably be at the 6-month visit (when they are, on average, 16.5 lb or 7.5 kg), with HepB, DTaP, IPV and flu, for a total of 307.5 μg. That is about 160 times less than the total amount their body naturally produces every single day. Compare that to the 428.4-1,516.4 μg of formaldehyde in a single apple.
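The totals in this paragraph can be checked directly. A short sketch, with the dose counts and per-dose formaldehyde contents taken from the list above:

```python
# Cumulative formaldehyde from the formaldehyde-containing vaccines on the
# 0-6 year schedule, as listed above: (number of doses, ug per dose).
schedule = {
    "HepB (Recombivax)":   (3, 7.5),
    "DTaP (Infanrix)":     (5, 100.0),
    "Hib (ActHIB)":        (3, 0.5),
    "IPV (IPOL)":          (4, 100.0),
    "Influenza (Fluzone)": (7, 100.0),
    "HepA (Havrix)":       (2, 100.0),
}

total_ug = sum(doses * ug_per_dose for doses, ug_per_dose in schedule.values())
print(f"All doses combined: {total_ug:.0f} ug = {total_ug / 1000:.3f} mg")

# Largest single visit (6 months): HepB + DTaP + IPV + influenza.
six_month_visit_ug = 7.5 + 100.0 + 100.0 + 100.0
print(f"6-month visit: {six_month_visit_ug} ug")
```

This reproduces the 1,824 μg cumulative total and the 307.5 μg single-visit figure quoted above.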



Antibiotics are present in some vaccines to prevent bacterial contamination during the manufacturing process. Because antibiotics can cause immediate-type hypersensitivity reactions in children, some parents are concerned that antibiotics contained in vaccines might be harmful. However, the antibiotics most likely to cause immediate-type hypersensitivity reactions (e.g., penicillins, cephalosporins, sulfonamides) are not contained in vaccines. Antibiotics used during vaccine manufacture include neomycin, streptomycin, polymyxin B, chlortetracycline, and amphotericin B. Only neomycin is contained in vaccines in detectable quantities. However, immediate-type hypersensitivity reactions to the small quantities of neomycin contained in vaccines have not been clearly documented. Although neomycin-containing products have been found to cause delayed-type hypersensitivity reactions, these reactions are not a contraindication to receiving vaccines.


Cellular Residuals:

Egg Proteins:

Egg allergies occur in approximately 0.5% of the population and in approximately 5% of atopic children. Because influenza and yellow fever vaccines are both propagated in the allantoic sacs of chick embryos (eggs), egg proteins (primarily ovalbumin) are present in the final product. Residual quantities of egg proteins found in the influenza vaccine (approximately 0.02–1.0 μg/dose) are sufficient to induce severe and rarely fatal hypersensitivity reactions in children with egg allergies. Unfortunately, children with egg allergies often also have other conditions (e.g., asthma) that are associated with a high risk of severe and occasionally fatal influenza infection. For this reason, children who have egg allergies and are at high risk of severe influenza infection should be given influenza vaccine via a strict protocol. In contrast to influenza vaccine, measles and mumps vaccines are propagated in chick embryo fibroblast cells in culture. The quantity of residual egg protein found in measles- and mumps-containing vaccines is approximately 40 pg, a quantity at least 500-fold less than that found in influenza vaccines. This quantity is not sufficient to induce immediate-type hypersensitivity reactions, and children with severe egg allergies can receive these vaccines safely.
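The 500-fold figure follows directly from the two residual-protein quantities given above; a quick unit-conversion check:

```python
# Residual egg protein (ovalbumin) per dose, from the text.
influenza_min_ug = 0.02   # lower bound for influenza vaccine, ug/dose
measles_mumps_pg = 40.0   # measles/mumps vaccines, pg/dose

# Convert ug to pg (1 ug = 1e6 pg) and take the ratio.
fold_less = (influenza_min_ug * 1e6) / measles_mumps_pg
print(f"Measles/mumps vaccines contain {fold_less:.0f}-fold less egg protein")
```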

Yeast Proteins:

Hepatitis B vaccines are made by transfecting cells of Saccharomyces cerevisiae (baker’s yeast) with the gene that encodes hepatitis B surface antigen, and residual quantities of yeast proteins are contained in the final product. Engerix-B (GlaxoSmithKline) contains no more than 5 mg/mL and Recombivax HB (Merck and Co) contains no more than 1 mg/mL yeast proteins. Immediate-type hypersensitivity reactions have been observed rarely after receipt of hepatitis B vaccine (approximately 1 case per 600,000 doses). However, yeast-specific IgE has not been detected in patients with immediate-type hypersensitivity or in nonallergic patients after receipt of hepatitis B vaccine. Therefore, the risk of anaphylaxis after receipt of hepatitis B vaccine as a result of allergy to baker’s yeast is theoretical.


The table below shows various excipients (components besides immunogen) contained in various vaccines:


Animal experiments and clinical trials for vaccine safety and efficacy:


Development of New Vaccines:

The general stages of the development cycle of a vaccine are:

•Exploratory stage

•Pre-clinical stage

•Clinical development

•Regulatory review and approval


•Quality control


Nonclinical evaluation of vaccines:

Vaccines are administered to healthy humans, often in the first year of life. The demands for safety and efficacy are therefore very high. Nonclinical testing is a prerequisite to moving a candidate vaccine from the laboratory to the clinic and includes all aspects of testing (product characterization, proof-of-concept/immunogenicity studies and safety testing in animals) conducted prior to clinical testing of the product in humans. The nonclinical evaluation of vaccines includes the initial testing of candidate formulations in animal models. In vivo and in vitro toxicity studies conducted before the start of clinical trials (preclinical studies) identify potential safety concerns and serve to avoid possible harm to human subjects. Potential concerns include toxicity due to the active ingredients or excipients, reactions to trace impurities such as production substrates, and interactions between components of other vaccines administered simultaneously. Studies designed to determine the right dose to induce an immune response in appropriate animal models can provide valuable information on the immune response that can be expected in humans, and guide the determination of whether the candidate vaccine will be beneficial both to the human study participant and to the wider population once marketed. But it must be recognized that animal testing has limitations: susceptibility to infection by viruses, bacteria and other microorganisms is often highly species-specific, and the immune responses in an animal model, particularly at the elevated doses used for nonclinical testing, may not be predictive of what will ultimately occur in humans. Nevertheless, few people would accept the administration of a candidate medicinal product without some level of assurance of its acceptability in a living animal.
Therefore, nonclinical testing continues to be a balance between the desire to reduce the use of animals for testing purposes and the right of humans to be administered safe and effective vaccines. International harmonization of testing requirements is thus an essential tool for establishing uniform approaches to the determination of the safety and efficacy of medicinal products, as well as for restricting animal testing to those critical areas where it cannot be replaced by alternative means.


Nonclinical evaluation of vaccine adjuvants and adjuvanted vaccines:

Over the past decades, strategies and approaches for the development and delivery of vaccine antigens have been expanded. Some of these antigens are weakly immunogenic and require the presence of adjuvants for the induction or enhancement of an adequate immune response. Vaccines with aluminium-based adjuvants have been used extensively in immunization programs worldwide and a significant body of safety information has accumulated for them. As the knowledge of immunology and the mechanisms of vaccine adjuvant action have developed, the number of vaccines containing novel adjuvants being evaluated in clinical trials has increased. Vaccines containing adjuvants other than aluminium-containing compounds have been authorized for use in many countries (e.g., human papillomavirus and hepatitis B vaccines), and a number of vaccines with novel adjuvants are currently under development, including, but not limited to, vaccines against human immunodeficiency virus (HIV), malaria and tuberculosis, as well as new-generation vaccines against influenza and other diseases.


The vaccine development paradigm:

Considerable attention has been focused in recent years on the versatile advances in modern biotechnology that are giving rise to the exciting new candidates that fill the upstream portion of the vaccine development pipeline. On the other hand, less notice has generally been paid to the series of sophisticated clinical vaccine studies that must be properly executed to advance a vaccine candidate, incrementally, towards ultimate licensure, based on proof of the vaccine’s safety, immunogenicity and efficacy in target populations.


Prelicensure Evaluations of Vaccine Safety:

Before vaccines are licensed by the FDA (or any other national regulatory agency), they are evaluated in clinical trials with volunteers. These trials are conducted in three progressive phases:

Phase I trials:

Phase I trials preliminarily examine the candidate’s safety and immunogenicity in small numbers of healthy adults. Such early dose/response tests detect common adverse reactions and provide an initial glimpse of whether relevant immune responses are generated.

Phase II trials:

In Phase II, the clinical study is expanded and the vaccine is given to people who have characteristics (such as age and physical health) similar to those for whom the new vaccine is intended. Phase II trials assess the vaccine in increasingly larger numbers of subjects and are typically placebo-controlled, to better measure the rate of adverse reactions against background rates of complaints. The level of shedding of a live viral or bacterial vaccine or of a recombinant strain is often intensively examined in phase II trials, as is its propensity to be transmitted to household contacts and to survive in the environment. For vaccines that will ultimately be used in infants and children, phase I and II trials must be undertaken in progressively younger subjects. Particularly demanding is the design of phase II clinical trials to evaluate the reactogenicity and immunogenicity of new multivalent combination vaccines in infants. The ultimate objective of combining vaccine antigens into a single inoculation is worthy, but experience has shown that interactions may occur that depress the immune response to some antigens or that enhance the overall reactogenicity. Thus, phase II clinical trials must rigorously demonstrate that acceptable immune responses to all antigens can indeed be stimulated without undue reactogenicity. Phase I and II trials of certain candidate vaccines (e.g. vaccines against RSV and group A Streptococcus pyogenes) require special considerations because of safety concerns.

Experimental challenge studies:

In some instances, as with candidate vaccines to prevent influenza, Shigella dysentery, cholera or Plasmodium falciparum malaria, preliminary assessments of vaccine efficacy can be obtained through carefully performed experimental challenge studies with wild-type organisms in fully informed, consenting, adult community volunteers.

Phase III trials:

Large-scale, randomized, controlled phase III field trials remain the gold standard for demonstrating the efficacy of a vaccine. Such trials tend to be expensive, require several years to complete and are subject to the vagaries of year-to-year variation in disease incidence. Moreover, in prelicensure efficacy trials, the protective activity of a vaccine is measured under idealized conditions where extra personnel participate in the vaccination and only fully vaccinated subjects are included in calculations of efficacy; therefore, the practicality of programmatic use of the vaccine is not readily estimated. Epidemiological methods to estimate vaccine ‘efficacy’ (i.e. effectiveness) after licensure and large-scale use have also been developed.

Phase IV trials:

Many vaccines undergo formal, ongoing Phase IV studies after the vaccine is approved and licensed. Phase IV trials are optional studies that drug companies may conduct after a vaccine is released. The manufacturer may continue to test the vaccine for safety, efficacy, and other potential uses. Most phase IV assessments involve case/control studies, which are relatively inexpensive and simple to perform but have inherent limitations that can distort the estimation of ‘efficacy’. Nevertheless, a few controlled phase IV post-licensure selective vaccination trials have been performed that directly measure the effectiveness of a vaccine used under real-life, programmatic conditions.



A long journey fraught with many potential pitfalls and considerable attrition awaits any vaccine candidate as it attempts to run the gauntlet from inventive concept to licensed product and public health tool. Few of the vaccines that enter phase I trials reach the point of a phase III efficacy trial, and only a handful of vaccine candidates ultimately become licensed by regulatory agencies. Moreover, the step-wise paradigm by which vaccine candidates are advanced from initial phase I dose response safety/immunogenicity trials to phase II reactogenicity/immunogenicity trials in larger numbers of subjects, and finally to large-scale phase III efficacy trials is becoming increasingly complex and expensive. In particular, the cost of generating clinical trial data while strictly adhering to the rules and regulations of Good Clinical Practice and of performing quality assurance and monitoring to verify the validity of such data has greatly escalated during the past decade.



Postlicensure Monitoring of Vaccine Safety:

After licensure, a vaccine’s safety is assessed by several mechanisms. The National Childhood Vaccine Injury Act (NCVIA) of 1986 requires health care providers to report certain adverse events that follow vaccination of children. As a mechanism for that reporting, the Vaccine Adverse Event Reporting System (VAERS) was established in 1990 and is jointly managed by the CDC and the FDA. This safety surveillance system collects reports of adverse events associated with vaccines currently licensed in the United States. Adverse events are defined as health effects that occur after immunization and that may or may not be related to the vaccine. While VAERS was established in response to the NCVIA, any adverse event following vaccination, whether in a child or an adult and whether or not it is believed to have been caused by vaccination, may be reported through VAERS. In 2008, VAERS received >25,000 reports of adverse events following vaccination. Of those, 9.5% were reportedly serious, causing disability, hospitalization, life-threatening illness, or death. VAERS-related issues are discussed later on. Enhanced post-licensure epidemiological surveillance has proven its value by demonstrating herd immunity effects (as with H. influenzae type b and meningococcal C conjugate vaccines), non-target consequences of vaccine use (e.g. the rare occurrence of vaccine-associated paralytic poliomyelitis in household contacts of infants who have received Sabin live oral polio vaccine) and rare vaccine-associated adverse events.


Vaccine policy:

The following questions should be asked when a vaccination policy against a particular virus is being developed.

  1. What proportion of the population should be immunized to achieve eradication?
  2. What is the best age to immunize?
  3. How is this affected by birth rates and other factors?
  4. How does immunization affect the age distribution of susceptible individuals, particularly those in age-classes most at risk of serious disease?
  5. How significant are genetic, social, or spatial heterogeneities in susceptibility to infection?
  6. How does this affect herd immunity?
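Question 1 above is usually approached with the standard epidemic-theory result that, under homogeneous mixing, eradication requires vaccinating a proportion p_c = (1 − 1/R0)/E of the population, where R0 is the basic reproduction number and E is vaccine efficacy. A minimal sketch (the R0 values are commonly cited illustrative figures, not definitive estimates):

```python
def critical_coverage(r0: float, efficacy: float = 1.0) -> float:
    """Critical vaccination proportion p_c = (1 - 1/R0) / E, assuming
    homogeneous mixing and lifelong vaccine-induced immunity."""
    return (1.0 - 1.0 / r0) / efficacy

# Illustrative basic reproduction numbers (approximate, setting-dependent).
for disease, r0 in [("measles", 15.0), ("polio", 6.0), ("influenza", 2.0)]:
    print(f"{disease:9s} R0={r0:4.1f} -> {critical_coverage(r0):.0%} coverage needed")
```

The formula makes the policy trade-off concrete: highly transmissible diseases such as measles demand very high coverage, and the homogeneous-mixing assumption is exactly what questions 4-6 above probe.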



Vaccine schedule for children, adolescents and adults:

Scheduling vaccine:

A vaccination schedule is a series of vaccinations, including the timing of all doses, which may be either recommended or compulsory, depending on the country of residence. A vaccine is an antigenic preparation used to produce active immunity to a disease, in order to prevent or reduce the effects of infection by any natural or “wild” pathogen. Many vaccines require multiple doses for maximum effectiveness, either to produce a sufficient initial immune response or to boost a response that fades over time. For example, tetanus vaccine boosters are often recommended every 10 years. Vaccine schedules are developed by governmental agencies or physicians’ groups to achieve maximum effectiveness using required and recommended vaccines for a locality while minimizing the number of health care system interactions. Over the past two decades, the recommended vaccination schedule has grown rapidly and become more complicated as many new vaccines have been developed. Some vaccines are recommended only in certain areas (countries, subnational areas, or at-risk populations) where a disease is common. For instance, yellow fever vaccination is on the routine vaccine schedule of French Guiana and is recommended in certain regions of Brazil, but in the United States it is given only to travelers heading to countries with a history of the disease. In developing countries, vaccine recommendations also take into account the level of health care access, the cost of vaccines, and issues with vaccine availability and storage. Sample vaccination schedules discussed by the World Health Organization show a developed country using a schedule which extends over the first five years of a child’s life and uses vaccines costing over $700 including administration costs, while a developing country uses a schedule providing vaccines in the first 9 months of life and costing only $25.
This difference is due to the lower cost of health care, the lower cost of many vaccines provided to developing nations, and that more expensive vaccines, often for less common diseases, are not utilized.


Until recently, most vaccines were aimed at babies and children alone. Now more and more vaccines are developed for use among the elderly, pregnant women, adolescents, travelers and other adults in a population. In addition, vaccines are increasingly being administered in the form of combinations of more than one component. Vaccination of animals is used both to prevent the animals contracting diseases and to prevent transmission of disease to humans. In 1900, the smallpox vaccine was the only one administered to children. By the early 1950s, children routinely received three vaccines, for protection against diphtheria, pertussis, tetanus and smallpox, and as many as five shots by two years of age. Since the mid-1980s, many vaccines have been added to the schedule. As of 2009, the US Centers for Disease Control and Prevention (CDC) recommends vaccination against at least fourteen diseases. By two years of age, U.S. children receive as many as 24 vaccine injections, and might receive up to five shots during one visit to the doctor. The use of combination vaccine products means that, as of 2013, the United Kingdom’s immunization program consists of 10 injections by the age of two, rather than the 25 that would be needed if vaccination for each disease was given as a separate injection.


The main objectives of scheduling vaccines are to achieve maximum effectiveness using the recommended vaccines for a country while minimizing the number of health care system interactions. Epidemiological, immunological and programmatic aspects are taken into account when scheduling vaccines. In the past two decades many new vaccines have been developed, so the vaccination schedule is undergoing rapid change and has become more complex. Traditionally, the public sector in developing countries is slower than the private sector to incorporate newer vaccines after they are licensed for use. Cost effectiveness, safety and effectiveness for a given region are important issues for the introduction of newer vaccines. As a result, the public-sector vaccination schedule contains fewer vaccines than schedules developed by the private sector. Which schedule is best is often a matter of debate, but knowledge of the principles behind each schedule will help pediatricians to form an informed opinion.


The figure below shows the U.S. vaccine schedule for immunization up to 18 years of age (2015):

BCG is not generally recommended for use in the United States because of the low risk of infection with Mycobacterium tuberculosis, the variable effectiveness of the vaccine against adult pulmonary TB, and the vaccine’s potential interference with tuberculin skin test reactivity.


The vaccination schedule recommended by the Indian Academy of Pediatrics (IAP) in 2013 includes BCG:

Birth – 15 days BCG + OPV (zero dose) +HepB1
6 weeks - 8 weeks OPV1 +IPV1 + DPT1* + HepB2 + Hib1 + Rotavirus1 + PCV1
10 weeks- 12 weeks OPV2 + IPV2 + DPT2* + Hib2 + Rotavirus2 + PCV2
14 weeks - 16 weeks OPV3 + IPV3 + DPT3* + Hib3 + Rotavirus3# + PCV3
6 months HepB3 + OPV1
9 months (completed) Measles vaccine + OPV2
12 months Hepatitis A1
15 months MMR1 + Varicella + PCV booster
18 months OPV4 + IPV booster1 + DPT*booster1 + Hib booster1 + Hepatitis A2
2 years Typhoid1 (give repeat shots every 3 years)
5 years OPV5 + DPT* booster2 +MMR2^ + Varicella2$$
10 – 12 years Tdap/Td (then give Td every 10 years) + HPV**

*DPT: It is given either as DPaT or DPwT
**HPV is given only in females (3 doses at 0,1-2 months and 6 months interval)
#Rotavirus 3rd dose may be required only with one brand
^ MMR 2nd dose can be given at any time 4-8 weeks after the first dose
$$ Varicella 2nd dose can be given anytime 3 months from the first dose
PCV = pneumococcal conjugate vaccine, IPV = inactivated (injectable) polio vaccine, Td = tetanus toxoid + reduced-dose diphtheria toxoid, HPV = human papillomavirus
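The IAP schedule above is essentially a lookup table keyed by age. As a minimal illustrative sketch (not clinical software), the table can be encoded as a simple data structure; `IAP_2013_SCHEDULE` and `doses_due` are hypothetical names, ages are converted to completed weeks, and the typhoid repeats and 10-yearly Td boosters are omitted for brevity:

```python
# Illustrative sketch of the IAP 2013 table above as a lookup keyed by
# age in completed weeks. Names follow the table; repeat/booster cycles
# (typhoid every 3 years, Td every 10 years) are omitted for brevity.
IAP_2013_SCHEDULE = [
    (0,   ["BCG", "OPV0", "HepB1"]),                 # birth - 15 days
    (6,   ["OPV1", "IPV1", "DPT1", "HepB2", "Hib1", "Rotavirus1", "PCV1"]),
    (10,  ["OPV2", "IPV2", "DPT2", "Hib2", "Rotavirus2", "PCV2"]),
    (14,  ["OPV3", "IPV3", "DPT3", "Hib3", "Rotavirus3", "PCV3"]),
    (26,  ["HepB3"]),                                # 6 months
    (39,  ["Measles"]),                              # 9 months (completed)
    (52,  ["HepA1"]),                                # 12 months
    (65,  ["MMR1", "Varicella1", "PCV booster"]),    # 15 months
    (78,  ["OPV4", "IPV booster1", "DPT booster1",   # 18 months
           "Hib booster1", "HepA2"]),
]

def doses_due(age_weeks):
    """Return every dose whose scheduled age has been reached."""
    due = []
    for start_week, doses in IAP_2013_SCHEDULE:
        if age_weeks >= start_week:
            due.extend(doses)
    return due
```

For example, `doses_due(6)` would list the birth doses plus the 6-week doses, while the MMR1 dose only appears once 65 completed weeks (15 months) are reached.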


Vaccinations and booster shots are recommended for pre-teens and adolescents. Teenagers are more vulnerable than younger children to exposure to diseases such as HPV and meningitis. If a teen is behind on his or her immunizations, there is a catch-up schedule that can be followed to protect him or her from the diseases that can still do harm.

•Young adults planning to live in a dormitory situation should be vaccinated against meningococcal disease.

•Tetanus, diphtheria toxoids and acellular pertussis vaccine (Tdap) should be given to 11-12 year olds who have completed the childhood series; 13-18 year olds who missed the 11-12 year old Tdap dose, or who received Td instead of Tdap, should be given a dose of Tdap five years after the last Td or DTaP dose.

•HPV (Human Papillomavirus Vaccine) is important for females at ages 13-18 to reduce the risk of contracting HPV which can increase the risk of cervical cancer later in life.

•Influenza vaccine should be received yearly, prior to flu season, to protect against the anticipated flu viruses in circulation. It is impossible to get the flu from a flu shot. The flu shot takes approximately two weeks to become effective, so it is important to get it as early as possible.

•Hepatitis B is a disease that many adults don’t know they have contracted. While often thought of as strictly a sexually transmitted disease, it has been shown that it can be transmitted by the exchange of saliva, as in situations where teenagers share food and drink or kiss. The vaccine is very effective at preventing this disease, which can lead to liver cancer in later life.

•Inactivated Polio is a vaccine that an adolescent should get if he or she failed to get this immunization as a child. Polio still exists in some parts of the world and this vaccine will protect the adolescent from it being brought back into the country by a traveler.

•Measles, mumps and rubella vaccine is very important if the adolescent has not previously had this vaccine. Mumps can sterilize an adolescent male. Measles hospitalizes one out of five people who contract the disease. Rubella can spread to a pregnant woman and cause fetal damage. This is a very important combination for an adolescent to receive if he or she did not get it as a child.


Administration of Vaccines:


Routes of administration:


The route of administration is the path by which a vaccine (or drug) is brought into contact with the body, and it is a critical factor in the success of immunization. A substance must be transported from the site of entry to the part of the body where its action is desired to take place, and using the body’s transport mechanisms for this purpose is not trivial. Intramuscular (IM) injection administers the vaccine into the muscle mass; vaccines containing adjuvants should be injected IM to reduce adverse local effects. Subcutaneous (SC) injection administers the vaccine into the subcutaneous layer above the muscle and below the skin. Intradermal (ID) injection administers the vaccine into the topmost layer of the skin; BCG is the only vaccine with this route of administration, and intradermal injection of BCG vaccine reduces the risk of neurovascular injury. Health workers say that BCG is the most difficult vaccine to administer because of the small size of newborns’ arms. A short, narrow needle (15 mm, 26 gauge) is needed for BCG vaccine; all other vaccines are given with a longer, wider needle (commonly 25 mm, 23 gauge), either SC or IM. Oral administration makes immunization easier by eliminating the need for a needle and syringe. Intranasal spray application of a flu vaccine offers a needle-free approach through the nasal mucosa of the vaccinee.


The routes of administration used by various vaccines:


General instructions on immunization:

1. Vaccination at birth means as early as possible, within 24 to 72 hours after birth, and in any case not later than one week after birth.

2. Whenever multiple vaccinations are to be given, they should be administered simultaneously; if simultaneous administration is not feasible for some reason, they should be given within 24 hours.

3. The recommended age in weeks/months/years means completed weeks/months/years.

4. Any dose not administered at the recommended age should be administered at a subsequent visit, when indicated and feasible.

5. The use of a combination vaccine generally is preferred over separate injections of its equivalent component vaccines.

6. When two or more live parenteral/intranasal vaccines are not administered on the same day, they should be given at least 28 days (4 weeks) apart; this rule does not apply to live oral vaccines. If given less than 4 weeks apart, the vaccine given second should be repeated.

7. The minimum interval between 2 doses of inactivated vaccines is usually 4 weeks (exception rabies).

8. Vaccine doses administered up to 4 days before the minimum interval or age can be counted as valid (exception: rabies). If a dose is administered 5 or more days before the minimum interval, it is counted as an invalid dose.

9. Any number of antigens can be given on the same day.

10. Changing needles between drawing vaccine into the syringe and injecting it into the child is not necessary.

11. Different vaccines should not be mixed in the same syringe unless specifically licensed and labeled for such use.

12. Patients should be observed for an allergic reaction for 15 to 20 minutes after receiving immunization(s).

13. When necessary, 2 vaccines can be given in the same limb at a single visit.

14. The anterolateral aspect of the thigh is the preferred site for 2 simultaneous IM injections because of its greater muscle mass.

15. The distance separating the 2 injections is arbitrary but should be at least 1 inch so that local reactions are unlikely to overlap.

16. Although most experts recommend “aspiration” by gently pulling back on the syringe before the injection is given, there are no data to document the necessity for this procedure. If blood appears after negative pressure, the needle should be withdrawn and another site should be selected using a new needle.

17. A previous immunization with a dose that was less than the standard dose or one administered by a nonstandard route should not be counted, and the person should be reimmunized as appropriate for age.
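The minimum-interval and "grace period" rules in the list above amount to a simple validity check. The following is a minimal sketch of that logic; `dose_is_valid` is a hypothetical helper, with the usual 4-week minimum interval and 4-day grace period as defaults:

```python
def dose_is_valid(interval_days, minimum_interval_days=28, grace_days=4):
    """Validity check sketched from the rules above: a dose given up to
    4 days before the minimum interval still counts as valid, while one
    given 5 or more days early is invalid and must be repeated.
    (Rabies vaccine is an exception: pass grace_days=0.)"""
    return interval_days >= minimum_interval_days - grace_days
```

With the defaults, a second dose given 24 days after the first is still counted as valid (28 − 4 = 24), whereas one given at 23 days would have to be repeated.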


Aspiration is the process of pulling back on the plunger of the syringe after skin penetration but prior to injection to ensure that the contents of the syringe are not injected into a blood vessel. Although this practice is advocated by some experts, aspiration is not required because of the lack of large blood vessels at the recommended vaccine injection sites. Multiple vaccines can be administered at the same visit; indeed, administration of all needed vaccines at one visit is encouraged. Studies have shown that vaccines are as effective when administered simultaneously as they are individually, and simultaneous administration of multiple vaccines is not associated with an increased risk of adverse effects. If more than one vaccine must be administered in the same limb, the injection sites should be separated by 1–2 inches so that any local reactions can be differentiated. If a vaccine and an immune globulin preparation are administered simultaneously (e.g., Td vaccine and tetanus immune globulin), a separate anatomic site should be used for each injection.


For certain vaccines (e.g., HPV vaccine and hepatitis B vaccine), multiple doses are required for an adequate and persistent antibody response. The recommended vaccination schedule specifies the interval between doses. Many adults who receive the first dose in a multiple-dose vaccine series do not complete the series or do not receive subsequent doses within the recommended interval; in these circumstances, vaccine efficacy and/or the duration of protection may be compromised. Providers should implement recall systems that will prompt patients to return for subsequent doses in a vaccination series at the appropriate intervals. With the exception of oral typhoid vaccination, an interruption in the schedule does not require restarting of the entire series or the addition of extra doses.


The desirability of administering vaccines by non-parenteral routes:

The Sabin oral polio vaccine set a precedent among vaccines for practicality and ease of administration to subjects of any age. There is great interest in identifying ways to administer other vaccines by non-parenteral routes, e.g. orally, nasally or transcutaneously. Certain live vector vaccines, antigen delivery systems and powerful adjuvants offer promise as strategies for successfully administering vaccines via mucosal and transcutaneous surfaces. There already exists considerable experience with several other oral and intranasal vaccines, including: Ty21a live oral typhoid vaccine; a live oral cholera vaccine (CVD 103-HgR) and a non-living oral cholera vaccine (whole vibrio cells plus B subunit); and a live (cold-adapted) and a non-living (virosomes plus LT adjuvant) intranasal influenza vaccine. From this considerable experience, several observations have been made:

•In most populations, oral or intranasal vaccines are preferred over parenteral vaccines, thereby increasing compliance.

•Mucosal immunisation precludes problems of injection safety found in some non-industrialised countries where the sporadic use of non-sterile needles and syringes can result in the inadvertent spread of hepatitis B, hepatitis C and HIV.

•Specialized microfold cells overlying mucosa-associated lymphoid tissues found both along the intestine and in the nose constitute competent portals of entry to inductive sites for immune responses.

•Because they elicit secretory IgA (usually in addition to systemic immune responses), mucosal vaccines are particularly attractive for pathogens that primarily cause mucosal infection of the gastrointestinal, respiratory or genito-urinary tracts or that invade via the mucosa lining those tracts.

•Properly formulated, mucosally administered vaccines can be adapted to stimulate any relevant type of immune response, in addition to secretory IgA, including serum IgG neutralizing antibodies (against toxins and viruses) and a variety of cell-mediated responses including lymphocyte proliferation accompanied by release of cytokines, and classical MHC I-restricted CD8+ lymphocytes.

•Some mucosal vaccines (e.g. Ty21a) have stimulated long-term protection enduring for up to 7 years.

•Mucosal immunisation is not a panacea. Problems that require research include the observation that several oral vaccines are less immunogenic in subjects living in under-privileged conditions in non-industrialised countries and whether oral immunisation with certain vaccines (such as live rotavirus strains) increases the risk of intussusception during a short period of time immediately following vaccination.


Why do children receive so many vaccinations?

Vaccines are our best defense against many diseases, which often result in serious complications such as pneumonia, meningitis (swelling of the lining of the brain), liver cancer, bloodstream infections, and even death. Vaccination is recommended to protect children against many infectious diseases, including measles, mumps, rubella (German measles), varicella (chickenpox), hepatitis B, diphtheria, tetanus, pertussis (whooping cough), Haemophilus influenzae type b (Hib), polio, influenza (flu), and pneumococcal disease.

Why are these vaccines given at such a young age? Wouldn’t it be safer to wait?

Children are given vaccines at a young age because this is when they are most vulnerable to certain diseases. Newborn babies are immune to some diseases because they have antibodies given to them from their mothers. However, this immunity only lasts a few months. Further, most young children do not have maternal immunity to diphtheria, whooping cough, polio, tetanus, hepatitis B, or Hib. If a child is not vaccinated and is exposed to a disease, the child’s body may not be strong enough to fight the disease.  An infant’s immune system is more than ready to respond to the very small number of weakened and killed infectious agents (antigens) in vaccines. From the time they are born, babies are exposed to thousands of germs and other antigens in the environment and their immune systems are readily able to respond to these large numbers of antigenic stimuli.

Why vaccines are administered in combination or simultaneously?

A combination vaccine consists of two or more different vaccines that have been combined into a single shot. Combination vaccines have been in use in the United States since the mid-1940s. Examples of combination vaccines in current use are: DTaP (diphtheria-tetanus-pertussis), trivalent IPV (three strains of inactivated polio vaccine), MMR (measles-mumps-rubella), DTaP-Hib, and Hib-Hep B (hepatitis B). Simultaneous vaccination is when more than one vaccine shot is administered during the same doctor’s visit, usually in separate limbs (e.g. one in each arm). An example of simultaneous vaccination might be administering DTaP in one arm or leg and IPV in another arm or leg during the same visit. Giving a child several vaccinations during the same visit offers two practical advantages. First, we want to immunize children as quickly as possible to give them protection during the vulnerable early months of their lives. Second, giving several vaccinations at the same time means fewer office visits. This saves parents both time and money, and may be less traumatic for the child.

Is simultaneous vaccination with multiple vaccinations safe? Wouldn’t it be safer to separate vaccines and spread them out, vaccinating against just one disease at a time?

The available scientific data show that simultaneous vaccination with multiple vaccines has no adverse effect on the normal childhood immune system. A number of studies have been conducted to examine the effects of giving various combinations of vaccines simultaneously. These studies have shown that the recommended vaccines are as effective in combination as they are individually, and that such combinations carry no greater risk for adverse side effects. Consequently, both the Advisory Committee on Immunization Practices and the American Academy of Pediatrics recommend simultaneous administration of all routine childhood vaccines when appropriate. Research is underway to find methods to combine more antigens in a single vaccine injection (for example, MMR and chickenpox).

Can so many vaccines, given so early in life, overwhelm a child’s immune system, suppressing it so it does not function correctly?

No evidence suggests that the recommended childhood vaccines can “overload” the immune system. On the contrary, from the moment babies are born, they are exposed to numerous bacteria and viruses on a daily basis. Eating food introduces new bacteria into the body; numerous bacteria live in the mouth and nose; and an infant places his or her hands or other objects in his or her mouth hundreds of times every hour, exposing the immune system to still more antigens. A child with a cold is exposed to at least 4 to 10 antigens, and a case of “strep throat” involves exposure to about 25 to 50 antigens. In the face of these normal events, it seems unlikely that the number of separate antigens contained in childhood vaccines would represent an appreciable added burden on the immune system, or one that would be immunosuppressive.

Why do children need so many doses of certain vaccines?

Most vaccines require at least 2 doses. With inactivated (killed) vaccines, each dose of vaccine contains a fixed amount of disease antigen (virus or bacteria). Immunity is built in phases with each dose boosting immunity to a protective level. Live vaccines are different, in that the antigen in the vaccine reproduces and spreads throughout the body. One dose produces satisfactory immunity in most children. But a second dose is recommended, because not all children respond to the first one.

Can a child get a disease even after being vaccinated?

It isn’t very common, but it can happen. About 1% to 5% of the time, depending on the vaccine, a child who is vaccinated fails to develop immunity. If these children are exposed to that disease they could get sick. Sometimes giving an additional vaccine dose will stimulate an immune response in a child who didn’t respond to one dose. For example, a single dose of measles vaccine protects about 95% of children, but after two doses almost 100% are immune. Sometimes a child is exposed to a disease just prior to being vaccinated, and gets sick before the vaccine has time to work. Sometimes a child gets sick with something that is similar to a disease they have been vaccinated against. This often happens with flu. Many viruses cause symptoms that look like flu, and people even call some of them flu, even though they are really something else. Flu vaccine doesn’t protect from these viruses.

Can a child actually get the disease from a vaccine?

Almost never. With inactivated (killed) vaccines, it isn’t possible. A dead virus or bacteria, or part of a virus or bacteria, can’t cause disease. With live vaccines, some children get what appears to be a mild case of disease (for example what looks like a measles or chickenpox rash but with only a few spots). This isn’t harmful, and can actually show that the vaccine is working. A vaccine causing full-blown disease would be extremely unlikely. One exception was the live oral polio vaccine, which could very rarely mutate and actually cause a case of polio. This was a rare but tragic side effect of this otherwise effective vaccine. Oral polio vaccine is no longer used in the U.S.

Why does the government require children to be vaccinated to attend school in the U.S.?

School immunization laws are not imposed by the federal government, but by the individual states. But that doesn’t answer the question, which is often asked by people who see this as a violation of their individual rights. The mission of a public health system, as its name implies, is to protect the health of the public — that is, everybody. Remember that vaccines protect not only the person being vaccinated but also people around them. Immunization laws exist not only to protect individual children, but to protect all children. If vaccines were not mandatory, fewer people would get their children vaccinated — they would forget; they would put it off; they would feel they couldn’t afford it; they wouldn’t have time. This would lead to levels of immunity dropping below what are needed for herd immunity, which would lead in turn to outbreaks of disease. So mandatory vaccination, while it might not be a perfect solution, is at least a practical solution to a difficult problem. In a sense, school immunization laws are like traffic laws. We’re not allowed to drive as fast as we want on crowded streets or to disobey traffic signals. This could be seen as an imposition on individual rights too. However, these laws are not so much to prevent drivers from harming themselves, which you could argue is their right, but to prevent them from harming others, which is not.


Why aren’t vaccines available for all diseases?

Developing a vaccine takes many years and a great deal of money, often hundreds of millions of dollars. Because of this, vaccines are prioritized in this order:

•Vaccines that fight diseases that cause the most deaths and damage, like meningitis

•Vaccines that prevent severe diseases like measles and influenza

•Vaccines, like the one for rotavirus, that prevent significant suffering

Additionally, vaccines are studied and produced by companies, so the return on investment must be significant enough to justify the large expense. Vaccines are currently in development to prevent malaria; the malaria vaccine was neglected in the past because the financial return was not worth the investment the industry had to make. Another reason that vaccines can be difficult to produce is that some viruses mutate so quickly that traditional vaccines are ineffective; a prime example is HIV. Despite these hurdles, there is currently a tremendous effort to develop a vaccine against HIV/AIDS.


Regular, Alternative, Selective Vaccine Schedules:

The regular vaccine schedule for children aged 0-6 years is approved by the CDC, the American Academy of Pediatrics (AAP), and the American Academy of Family Physicians in the United States. It recommends 25 shots in the first 15 months of life. The shots immunize against whooping cough (pertussis), diphtheria, tetanus, mumps, measles, rubella, rotavirus, polio, hepatitis B, and other diseases. The alternative and selective vaccination schedules aren’t reviewed or approved by the CDC or any other public health group; they come solely from Dr. Robert Sears. Sears’ alternative vaccine schedule spreads the shots out over a longer period of time, up to age 5-6 years; for instance, he recommends not giving children more than two vaccines at a time. It also changes the order of vaccines, prioritizing what Sears believes are the most crucial vaccines to get, based on how common and severe the diseases are. Many countries have a regular vaccine schedule approved by their academy of pediatrics or by the WHO; any deviation from the regular schedule is an alternative schedule.


Delaying Vaccines increases risks—with no added Benefits:

Some parents delay vaccines out of a misinformed belief that it’s safer, but that decision actually increases the risk of a seizure after vaccination and leaves children at risk for disease longer. Children receiving delayed vaccinations tend to fall into one of two groups: those whose parents intentionally delay vaccines and those whose families have difficulty getting vaccines on time.

No benefit to waiting to vaccinate: two studies:

2010 study:

No evidence to date reveals any benefits to delaying vaccines. A study in 2010 showed that children who received delayed vaccinations performed no better at ages seven to 10 on behavioral and cognitive assessments than children who received their vaccines on time. There was not a single outcome for which the delayed group did better. Authors note that delaying vaccines leaves children at risk for disease longer, and that many parents have little firsthand experience with those diseases. In this context, any potential side effect—real or perceived—may be enough to convince a parent that it’s safe to defer vaccines. However, that is not a safe choice, especially as vaccine-preventable diseases like measles are making a comeback.

2014 study:

The study, published in Pediatrics, found that administering the MMR shot, or the less frequently used MMRV (which also includes the varicella, or chickenpox, vaccine), later, between 16 and 23 months, doubles the child’s risk of developing a fever-caused (febrile) seizure as a reaction to the vaccine. The risk of a febrile seizure following MMR is approximately one case in 3,000 doses for children aged 12 to 15 months, but one case in 1,500 doses for children aged 16 to 23 months. This study adds to the evidence that the best way to prevent disease and minimize side effects from vaccines is to vaccinate on the recommended schedule; an undervaccinated child is left at risk of infectious disease for a longer period. Delaying also means more visits to the doctor’s office, along with the time and hassle involved and the risk of exposure to other infectious diseases there. It is not clear why the MMR and MMRV vaccines increase febrile seizure risk in older children, but it may simply be that they receive the vaccines when they are already more susceptible to seizures.


Parenteral administration in adults:

Parenteral vaccines recommended for routine administration to adults are given by either the IM or the SC route. Most parenteral vaccines are given to adults by the IM route. Vaccines given by the SC route include live-virus vaccines such as varicella, zoster, and MMR vaccines as well as the inactivated meningococcal polysaccharide vaccine. The 23-valent pneumococcal polysaccharide vaccine may be given by either of these routes, but IM administration is preferred because it is associated with a lower risk of injection-site reactions. Vaccines given to adults by the SC route are administered with a 5/8-inch needle into the upper outer-triceps area as seen in the figure below. Vaccines administered to adults by the IM route are injected into the deltoid muscle with a needle whose length should be selected on the basis of the recipient’s sex and weight to ensure adequate penetration into the muscle. Current guidelines indicate that, for men and women weighing <130 lbs (<60 kg), a 5/8-inch needle is sufficient; for women weighing 130–200 lbs (60–90 kg) and men weighing 130–260 lbs (60–118 kg), a 1- to 1.5-inch needle is needed; and for women weighing >200 lbs (>90 kg) and men weighing >260 lbs (>118 kg), a 1.5-inch needle is required.
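The needle-length guidance quoted above is a small decision table keyed by sex and weight. As a minimal sketch of that rule (hypothetical helper name, weights in pounds as in the guideline):

```python
def im_needle_length(sex, weight_lbs):
    """Sketch of the adult IM needle-length guidance quoted above.

    sex: "M" or "F"; weight_lbs: body weight in pounds.
    Returns the recommended needle length as a string.
    """
    if weight_lbs < 130:                  # < 60 kg, either sex
        return "5/8 inch"
    # Mid-range band: women 130-200 lbs, men 130-260 lbs.
    upper = 260 if sex == "M" else 200
    if weight_lbs <= upper:
        return "1 to 1.5 inch"
    return "1.5 inch"                     # women > 200 lbs, men > 260 lbs
```

For example, a 150-lb woman falls in the middle band (1- to 1.5-inch needle), while a 270-lb man needs the 1.5-inch needle.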


Enhancing Immunization in Adults:

Although immunization has become a centerpiece of routine pediatric medical visits, it has not been as well integrated into routine health care visits for adults. Accumulating evidence suggests that immunization coverage can be increased through efforts directed at consumer-, provider-, institution-, and system-level factors. The literature suggests that the application of multiple strategies is more effective at raising coverage rates than is the use of any single strategy.


The figure below shows immunization schedule from age 19 years onwards till elderly:


Some recommended adult immunization schedules:

Influenza vaccination:

Annual vaccination against influenza is recommended for all persons aged 6 months and older, including all adults. Healthy, nonpregnant adults aged less than 50 years without high-risk medical conditions can receive either intranasally administered live, attenuated influenza vaccine (FluMist), or inactivated vaccine. Other persons should receive the inactivated vaccine. Adults aged 65 years and older can receive the standard influenza vaccine or the high-dose (Fluzone) influenza vaccine.  The US Food and Drug Administration (FDA) recently approved several new flu vaccines, including trivalent (three strain) and quadrivalent (four strain) vaccines. The available choices this year will include:

•Standard three-strain flu vaccine; this year’s version includes influenza strains H1N1 and H3N2, and an influenza B virus

•Egg-free vaccine (FluBlok), in which the influenza viruses were grown in caterpillar cells instead of chicken eggs

•Quadrivalent, or four-strain, vaccine, which includes two viruses from the A class and two from the B class, which tends to cause illness primarily in young children

•High-dose vaccines, promoted for seniors aged 65 and over

•Nasal spray, called FluMist; this year it will contain four strains as opposed to three, matching the quadrivalent injection

•Intradermal vaccine, promoted for those afraid of needles; the vaccine is delivered through a panel of micro-needles rather than a single needle


Tetanus, diphtheria, and acellular pertussis (Td/Tdap) vaccination:

Administer a one-time dose of Tdap to adults aged less than 65 years who have not received Tdap previously or for whom vaccine status is unknown to replace one of the 10-year Td boosters, and as soon as feasible to all 1) postpartum women, 2) close contacts of infants younger than age 12 months (e.g., grandparents and child-care providers), and 3) healthcare personnel with direct patient contact. Adults aged 65 years and older who have not previously received Tdap and who have close contact with an infant aged less than 12 months also should be vaccinated. Other adults aged 65 years and older may receive Tdap.


Pneumococcal polysaccharide (PPV 23) vaccination:

Vaccinate all persons with the following indications:

Medical: Chronic lung disease (including asthma); chronic cardiovascular diseases; diabetes mellitus; chronic liver diseases; cirrhosis; chronic alcoholism; functional or anatomic asplenia (e.g., sickle cell disease or splenectomy [if elective splenectomy is planned, vaccinate at least 2 weeks before surgery]); immunocompromising conditions (including chronic renal failure or nephrotic syndrome); and cochlear implants and cerebrospinal fluid leaks. Vaccinate as close to HIV diagnosis as possible.

Other: Residents of nursing homes or long-term care facilities and persons who smoke cigarettes. Routine use of PPSV is not recommended for American Indians/Alaska Natives or persons aged less than 65 years unless they have underlying medical conditions that are PPSV indications. However, public health authorities may consider recommending PPSV for American Indians/Alaska Natives and persons aged 50 through 64 years who are living in areas where the risk for invasive pneumococcal disease is increased.

Revaccination with PPV 23 (booster PPV 23):

One-time revaccination after 5 years is recommended for persons aged 19 through 64 years with chronic renal failure or nephrotic syndrome; functional or anatomic asplenia (e.g., sickle cell disease or splenectomy); and for persons with immunocompromising conditions. For persons aged 65 years and older, one-time revaccination is recommended if they were vaccinated 5 or more years previously and were aged less than 65 years at the time of primary vaccination.


PPV 23 is made up of capsular polysaccharide from 23 common pneumococcal serotypes, using the capsular polysaccharide as the vaccine antigen. PPV 23 is a T-cell-independent vaccine; its efficacy can be increased, producing an efficient T-cell-dependent vaccine, by covalently binding the polysaccharides (a process termed conjugation) to a protein molecule. The 13-valent pneumococcal conjugate vaccine (PCV 13) is made in this way. PPV23 is licensed only for individuals aged >2 years, as T-independent immunity is absent below the age of 2 years. According to the ACIP recommendations published in September 2014, both pneumococcal conjugate vaccine (PCV13) and pneumococcal polysaccharide vaccine (PPV23) should be administered routinely in a series to all adults aged 65 years and older. The two vaccines should not be given at the same visit, and PCV13 is recommended to be given first because of the better immune response when the vaccines are given in this sequence: an evaluation of immune response after a second pneumococcal vaccination administered 1 year after an initial dose showed that subjects who received PPSV23 as the initial dose had lower antibody responses after subsequent administration of PCV13 than those who received PCV13 as the initial dose followed by a dose of PPSV23.


Vaccine Information Statements (VIS):

Vaccine Information Statements (VISs) are information sheets produced by the Centers for Disease Control and Prevention (CDC). VISs explain both the benefits and risks of a vaccine to adult vaccine recipients and the parents or legal representatives of vaccinees who are children and adolescents. Federal law requires that VISs be handed out whenever certain vaccinations are given (before each dose). As required under the National Childhood Vaccine Injury Act, all health care providers in the United States who administer, to any child or adult, any of the following vaccines – diphtheria, tetanus, pertussis, measles, mumps, rubella, polio, hepatitis A, hepatitis B, Haemophilus influenzae type b (Hib), trivalent influenza, pneumococcal conjugate, meningococcal, rotavirus, human papillomavirus (HPV), or varicella (chickenpox) – shall, prior to administration of each dose of the vaccine, provide a copy of the relevant VIS. If there is not a single VIS for a combination vaccine, use the VISs for all component vaccines. VISs should be supplemented with visual presentations or oral explanations as appropriate.


Vaccination in pregnancy:

Immunization during pregnancy, that is, the administration of a vaccine to a pregnant woman, is not a routine event, as it is generally preferred to administer vaccines either prior to conception or in the postpartum period. When widespread vaccination is used, the risk for an unvaccinated pregnant patient of being exposed to a related infection is low, allowing, in general, postponement of routine vaccinations to the postpartum period. Nevertheless, immunization during pregnancy may occur either inadvertently or be indicated in a special situation, when it appears prudent to reduce the risk of a specific disease for a potentially exposed pregnant woman or her fetus. As a rule of thumb, vaccination with live viral or bacterial vaccines is contraindicated in pregnancy, since live organisms can adversely affect the developing fetus. In addition, the immune system is downregulated during pregnancy to permit fetal growth, and live organisms could disseminate.




Tetanus toxoids appear safe during pregnancy and are administered in many countries of the world to prevent neonatal tetanus. The World Health Organization (WHO) states that more than 180,000 newborns and over 30,000 women die each year from tetanus. The American Congress of Obstetricians and Gynecologists (ACOG) recommends the following schedule for pregnant women to receive the vaccine:

•Schedule if never immunized: three doses at 0, 4, and 6–12 months

•Schedule if unknown immunization status: at least two doses in the late second or third trimester. The National Business Group on Health (NBGH) states that in an analysis of pregnant women who received at least two doses, the tetanus vaccine was 98% effective (NBGH, 2011).

One of the doses during pregnancy should be the Tdap (ACOG, 2012).

Immune globulins:

Immune globulins are used for post-exposure prophylaxis and are not associated with reports of harm to the fetus. Such agents are considered in pregnant women exposed to hepatitis B, rabies, tetanus, varicella, and hepatitis A.


The following vaccines are considered safe to give to women who may be at risk of infection:

• Hepatitis B: Pregnant women who are at high risk for this disease and have tested negative for the virus can receive this vaccine. It is used to protect the mother and baby against infection both before and after delivery. A series of three doses is required for immunity; the 2nd and 3rd doses are given 1 and 6 months after the first dose.

• Influenza (Inactivated):  This vaccine can prevent serious illness in the mother during pregnancy. All women who will be pregnant (any trimester) during the flu season should be offered this vaccine. Talk to your doctor to see if this applies to you.

• Tetanus/Diphtheria/Pertussis (Tdap):  Tdap is recommended during pregnancy, preferably between 27 and 36 weeks’ gestation, to protect the baby from whooping cough. If not administered during pregnancy, Tdap should be administered immediately after the birth of your baby.
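Multi-dose schedules like the hepatitis B series above (doses at 0, 1, and 6 months) are simple calendar arithmetic. As an illustrative sketch (the helper names are ours, not from any guideline), the due dates could be computed as:

```python
import calendar
from datetime import date

def add_months(d, months):
    """Shift a date forward by whole calendar months, clamping the day
    so that e.g. 31 January + 1 month gives the last day of February."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

def hepatitis_b_schedule(first_dose):
    """Doses at 0, 1, and 6 months after the first dose, per the text."""
    return [first_dose, add_months(first_dose, 1), add_months(first_dose, 6)]
```

For a first dose on 15 January 2015, `hepatitis_b_schedule(date(2015, 1, 15))` yields follow-up doses on 15 February and 15 July 2015.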


What vaccinations are not recommended during pregnancy?

Don’t get these vaccines during pregnancy:

•BCG (tuberculosis)



•Nasal spray flu vaccine (called LAIV) (Pregnant women can get the flu shot, which is made with killed viruses.)



Wait at least 1 month after getting any of these vaccinations before you try to get pregnant.


Flu vaccines safe for Pregnant Moms:

A review of data from the 2009 flu season showed that the use of flu vaccines can help prevent fetal death, a major concern for pregnant mothers. For years, pregnant women have been unsure about whether getting the flu shot could harm their unborn child. The report, published in the New England Journal of Medicine, also confirmed the safety of flu vaccinations for women in the later stages of pregnancy.


Vaccination to health care workers, travelers and people suffering from various medical disorders:

Vaccine to health care worker:

Is there anything different that health-care workers need to do compared with non-health-care workers?

Health-care workers are treated a little differently than other adults for two reasons. First, a health-care worker is more likely to be exposed to certain risks of infection (such as hepatitis B) than the normal population. Second, if a health-care worker becomes infected, they may transmit those infections to their patients (chickenpox, pertussis).

Special recommendations:

•Tetanus/diphtheria/pertussis (Td/Tdap):

◦It is recommended that any health-care worker who may have patient contact receive a Tdap shot if they have not received one as an adolescent (as long as it has been two years since their last Td shot). This helps prevent the spread of pertussis.

•Hepatitis B:

◦Health-care workers who have not been vaccinated should receive the three-dose series and obtain anti-hepatitis B serology testing one to two months after their third dose.

•Measles/mumps/rubella (MMR):

◦If there is no serologic evidence of immunity, the health-care worker should receive two doses of MMR separated by 28 days or more.


•Varicella:

◦All health-care workers must have a history of varicella disease (chickenpox), prior vaccination, or serologic evidence of immunity. If not, the worker should receive two doses of vaccine 28 days apart.


•Influenza:

◦Health-care workers should receive one dose of either the flu shot or the nasal flu vaccine annually.


Vaccine to foreign traveler:


Traveler’s vaccination:


According to the World Tourism Organization, international tourist arrivals grew exponentially from 25 million in 1950 to >900 million in 2008. Not only are more people traveling; travelers are seeking more exotic and remote destinations. Travel from industrialized to developing regions has been increasing, with Asia and the Pacific, Africa, and the Middle East now emerging destinations. Studies show that 50–75% of short-term travelers to the tropics or subtropics report some health impairment. Most of these health problems are minor: only 5% require medical attention, and <1% require hospitalization. Although infectious agents contribute substantially to morbidity among travelers, these pathogens account for only 1% of deaths in this population. Immunizations for travel fall into three broad categories: routine (childhood/adult boosters that are necessary regardless of travel) as listed in the figure above, required (immunizations that are mandated by international regulations for entry into certain areas or for border crossings), and recommended (immunizations that are desirable because of travel-related risks). Required and recommended vaccines commonly given to travelers are listed in Table below:


Vaccines Commonly Used for Travel:

Vaccine | Primary Series | Booster Interval
Cholera, live oral (CVD 103–HgR) | 1 dose | 6 months
Hepatitis A | 2 doses, 6–12 months apart, IM | None required
Hepatitis A/B combined (Twinrix) | 3 doses at 0, 1, and 6–12 months, or 0, 7, and 21 days plus booster at 1 year, IM | None required, except 12 months (once only) for the accelerated schedule
Hepatitis B (Engerix B), accelerated schedule | 3 doses at 0, 1, and 2 months, or 0, 7, and 21 days plus booster at 1 year, IM | 12 months, once only
Hepatitis B (Engerix B or Recombivax), standard schedule | 3 doses at 0, 1, and 6 months, IM | None required
Immune globulin (hepatitis A prevention) | 1 dose IM | Intervals of 3–5 months, depending on initial dose
Japanese encephalitis (JE-VAX) | 3 doses, 1 week apart, SC | 12–18 months (first booster), then 4 years
Japanese encephalitis (Ixiaro) | 2 doses, 1 month apart, SC | Optimal booster schedule not yet determined
Meningococcal, quadrivalent [Menomune (polysaccharide), Menactra, Menveo (conjugate)] | 1 dose SC | >3 years (optimal booster schedule not yet determined)
Rabies (HDCV), rabies vaccine adsorbed (RVA), or purified chick embryo cell vaccine (PCEC) | 3 doses at 0, 7, and 21 or 28 days, IM | None required except with exposure
Typhoid Ty21a, oral live attenuated (Vivotif) | 1 capsule every other day × 4 doses | 5 years
Typhoid Vi capsular polysaccharide, injectable (Typhim Vi) | 1 dose IM | 2 years
Yellow fever | 1 dose SC | 10 years



The figure below shows vaccine schedule for adults suffering from various medical disorders:



Vaccine storage, handling, cold chain and delivery system:

Storage and Handling:

Injectable vaccines are packaged in multidose vials, single-dose vials, or manufacturer-filled single-dose syringes. The live attenuated nasal-spray influenza vaccine is packaged in single-dose sprayers. Oral typhoid vaccine is packaged in capsules. Some vaccines, such as MMR, varicella, zoster, and meningococcal polysaccharide vaccines, come as lyophilized (freeze-dried) powders that must be reconstituted (i.e., mixed with a liquid diluent) before use. The lyophilized powder and the diluent come in separate vials. Diluents are not interchangeable but rather are specifically formulated for each type of vaccine; only the specific diluent provided by the manufacturer for each type of vaccine should be used. Once lyophilized vaccines have been reconstituted, their shelf-life is limited and they must be stored under appropriate temperature and light conditions. For example, varicella and zoster vaccines must be protected from light and administered within 30 minutes of reconstitution; MMR vaccine likewise must be protected from light but can be used up to 8 h after reconstitution. Single-dose vials of meningococcal polysaccharide vaccine must be used within 30 minutes of reconstitution, while multidose vials must be used within 35 days.


Vaccines are stored either at refrigerator temperature (2–8°C) or at freezer temperature (–15°C or colder). In general, inactivated vaccines (e.g., inactivated influenza, pneumococcal polysaccharide, and meningococcal conjugate vaccines) are stored at refrigerator temperature, while vials of lyophilized-powder live-virus vaccines (e.g., varicella, zoster, and MMR vaccines) are stored at freezer temperature. Diluents for lyophilized vaccines may be stored at refrigerator or room temperature. Live attenuated influenza vaccine—a live-virus liquid formulation administered by nasal spray—is stored at refrigerator temperature. To avoid temperature fluctuations, vaccines should be placed in the body of a refrigerator and not in the door, in vegetable bins, on the floor, next to the wall, or next to the freezer—locations where temperatures may differ significantly. Frequent opening of a refrigerator door to retrieve food items can adversely affect the internal temperature of the unit and damage vaccines; thus food and drink should not be stored in the same refrigerator as vaccines. Frozen vaccines must be stored in the body (not the door) of a freezer that has its own external door separate from the refrigerator. They should not be stored in small “dormitory-style” refrigerators. The temperature of refrigerators and freezers used for vaccine storage must be monitored and the temperature recorded at least twice a day. Ideally, continuous thermometers are used that measure and record temperature all day and all night.
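The storage rules above lend themselves to a simple automated check alongside the twice-daily temperature log. The following is a minimal sketch assuming only the ranges stated in the text (2–8°C for refrigerators, −15°C or colder for freezers); the function and unit names are our own, not from any monitoring standard:

```python
REFRIGERATOR_RANGE = (2.0, 8.0)   # degrees C, per the text
FREEZER_MAX = -15.0               # degrees C ("-15 C or colder")

def check_reading(unit, temp_c):
    """Return None if the reading is in range, else a warning string
    flagging the excursion for follow-up. Illustrative helper only."""
    if unit == "refrigerator":
        lo, hi = REFRIGERATOR_RANGE
        if lo <= temp_c <= hi:
            return None
        return f"refrigerator excursion: {temp_c} C outside {lo}-{hi} C"
    if unit == "freezer":
        if temp_c <= FREEZER_MAX:
            return None
        return f"freezer excursion: {temp_c} C above {FREEZER_MAX} C"
    raise ValueError(f"unknown unit: {unit}")
```

Note that a reading of +1°C in a refrigerator is flagged just as a too-warm one is; as discussed below, too-cold storage is actually the more common failure.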


What is the Cold Chain?

“Cold chain” refers to the process used to maintain optimal conditions during the transport, storage, and handling of vaccines, starting at the manufacturer and ending with the administration of the vaccine to the client. The optimum temperature for refrigerated vaccines is between +2°C and +8°C (e.g. OPV, HepB vaccine). For frozen vaccines the optimum temperature is -15°C or lower. In addition, protection from light is a necessary condition for some vaccines.


An estimated 17% to 37% of providers expose vaccines to improper storage temperatures, and refrigerator temperatures are more commonly kept too cold than too warm. One study involving site visits showed that 15% of refrigeration units had temperatures of +1°C or lower. Freezing temperatures can irreversibly reduce the potency of vaccines required to be stored at 35°F to 46°F (2°C to 8°C). Certain freeze-sensitive vaccines contain an aluminum adjuvant that precipitates when exposed to freezing temperatures, resulting in loss of the adjuvant effect and of vaccine potency. Physical changes are not always apparent after exposure to freezing temperatures, and potency can be lost without visible signs of freezing. Although the potency of the majority of vaccines can be affected adversely by storage temperatures that are too warm, these effects are usually more gradual, predictable, and smaller in magnitude than losses from temperatures that are too cold. In contrast, varicella vaccine and LAIV are required to be stored continuously frozen and lose potency when stored above the recommended temperature range.


Importance of maintaining the Cold Chain:

Vaccines are sensitive biological products which may become less effective, or even be destroyed, when exposed to temperatures outside the recommended range. As India is witnessing wastage of at least 50 per cent of vaccines stocked by various healthcare agencies owing to heat exposure, the Central Drugs Standard Control Organisation (CDSCO) has decided to explore the idea of developing thermostable vaccines. Cold-sensitive vaccines experience an immediate loss of potency following freezing. Vaccines exposed to temperatures above the recommended range experience some loss of potency with each episode of exposure, and repeated heat exposure results in a cumulative loss of potency that is not reversible. However, information on vaccine degradation is sparse and multipoint stability studies on vaccines are difficult to perform. In addition, information from manufacturers is not always available, so it can be difficult to assess the potency of a mishandled vaccine.

Maintaining the potency of vaccines is important for several reasons.

1. There is a need to ensure that an effective product is being used. Vaccine failures caused by administration of compromised vaccine may result in the re-emergence or occurrence of vaccine preventable disease.

2. Careful management of resources is important. Vaccines are expensive and can be in short supply. Loss of vaccines may result in the cancellation of immunization clinics resulting in lost opportunities to immunize.

3. Revaccination of people who have received an ineffective vaccine is professionally uncomfortable and may cause a loss of public confidence in vaccines and/or the health care system.


Vaccine vial monitor:

A vaccine vial monitor (VVM) is a thermochromic label placed on vaccine vials that gives a visual indication of whether the vaccine has been kept at temperatures that preserve its potency. The labels were designed in response to the problem of delivering vaccines to developing countries, where the cold chain is difficult to maintain and where vaccines were formerly rendered inactive, and administered ineffectively, after being denatured by exposure to ambient temperature. The VVM contains a heat-sensitive material that registers cumulative heat exposure over time. Among all time–temperature indicators, the VVM is the only tool available at every point in the distribution process, including the moment a vaccine is administered, showing whether the vaccine has been exposed to a damaging combination of time and temperature and whether it is likely to have been damaged. It thus clearly indicates to health workers whether a vaccine can be used. The combined effects of time and temperature cause the inner square of the VVM to darken gradually and irreversibly. A direct relationship exists between the rate of colour change and temperature:

-The lower the temperature, the slower the colour change.

-The higher the temperature, the faster the colour change.
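The behaviour described above, cumulative, irreversible darkening that proceeds faster at higher temperature, can be illustrated with a toy Arrhenius-style model. This is purely illustrative: the rate parameters are invented for the sketch, not manufacturer or WHO data, and the function is hypothetical:

```python
import math

def vvm_darkening(exposures, a=1e-3, ea_over_r=8000.0):
    """Toy cumulative-exposure model of VVM darkening.
    `exposures` is a list of (temp_celsius, hours) episodes.
    The pre-factor `a` and activation term `ea_over_r` are illustrative
    assumptions only."""
    total = 0.0
    for temp_c, hours in exposures:
        t_kelvin = temp_c + 273.15
        # Arrhenius-style rate: higher temperature -> faster darkening
        rate = a * math.exp(-ea_over_r / t_kelvin)
        # darkening accumulates and never reverses on cooling
        total += rate * hours
    return total
```

The model captures the two bullet points above (slower change when cold, faster when hot) and the fact that repeated heat episodes add up: two 5-hour exposures at 25°C darken the label as much as one 10-hour exposure.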



The World Health Organization has described VVMs as crucial to the spread of polio vaccination programs. Critically, VVMs have been shown to save lives in a cost-effective way. A joint study by WHO and PATH from 2006 identified more than 23 million doses of vaccine that had been overexposed to heat and therefore were not administered to children. This is critical, as those children could then be targeted for a future vaccine intervention to ensure they received an effective vaccine, rather than being marked in the data as vaccinated when the vaccine could in fact have been ineffective. In this way, VVMs help ascertain a clearer picture of the true vaccination coverage in a given population. The same study identified 31 million doses that had been exposed to potentially damaging heat but were still usable, thus avoiding substantial vaccine wastage. WHO estimates that over the next ten years, VVMs will enable the delivery of an additional 140 million doses of vaccine, saving 140,000 lives and decreasing the morbidity rates for countless others.


Pitfalls of VVM:

Studies have shown that health workers without proper training sometimes do not understand what a VVM is or how it works. A 2007 study in urban areas of Valsad in India showed that vaccine administrators were unaware of the purpose of the monitors. A study was done in the context of the temperatures in the states of Uttar Pradesh and Bihar in India where polio has been difficult to control and where summer temperatures rise to 45°C routinely and sometimes go as high as 50°C.  Its findings suggest that the VVMs are not reliable when exposed to high environmental temperatures. Previous studies have shown deterioration in virus levels resulting from thaw-freeze cycles which are not indicated by the VVMs. This makes the practice of returning vials exposed to ambient temperatures, to the freezer for storage at night and reuse later, particularly risky.


Comparable technology:

Electronic time–temperature indicators can detect all temperature changes, including issues of freezing vaccines which heat-detecting VVMs would not detect.


Public Health Reporting and Outbreak Detection & Control:

Outbreak Detection & Control:

Clusters of cases of a vaccine-preventable disease detected in an institution, a medical practice, or a community may signal important changes in the pathogen, vaccine, or environment. Several factors can give rise to increases in vaccine-preventable disease, including (1) low rates of immunization that result in an accumulation of susceptible people (e.g., measles resurgence among vaccination abstainers); (2) changes in the infectious agent that permit it to escape vaccine-induced protection (e.g., nonvaccine-type pneumococci); (3) waning of vaccine-induced immunity (e.g., pertussis among adolescents and adults vaccinated in early childhood); and (4) point-source introductions of large inocula (e.g., food-borne exposure to hepatitis A virus). Reporting episodes of outbreak-prone diseases to public health authorities can facilitate recognition of clusters that require further interventions.


Public Health Reporting:

Recognition of suspected cases of diseases targeted for elimination or eradication—along with other diseases that require urgent public health interventions, such as contact tracing, administration of chemo- or immunoprophylaxis, or epidemiologic investigation for common-source exposure)—is typically associated with special reporting requirements. Clinicians and laboratory staff have a responsibility to report some vaccine-preventable disease (notifiable diseases) occurrences to local or state public health authorities according to specific case-definition criteria. All providers should be aware of state or city disease-reporting requirements and the best ways to contact public health authorities. A prompt response to vaccine-preventable disease outbreaks can greatly enhance the effectiveness of control measures.



Vaccine benefits, effectiveness and impact:


Effectiveness of vaccine:

Vaccines have historically been the most effective means to fight and eradicate infectious diseases. Limitations to their effectiveness, nevertheless, exist. Sometimes, protection fails because the host’s immune system simply does not respond adequately or at all. Lack of response commonly results from clinical factors such as diabetes, steroid use, HIV infection or age. However, vaccination might also fail for genetic reasons, if the host’s immune system includes no strains of B cells that can generate antibodies suited to reacting effectively and binding to the antigens associated with the pathogen. Even if the host does develop antibodies, protection might not be adequate: immunity might develop too slowly to be effective in time, the antibodies might not disable the pathogen completely, or there might be multiple strains of the pathogen, not all of which are equally susceptible to the immune reaction. However, even a partial, late, or weak immunity, such as one resulting from cross-immunity to a strain other than the target strain, may mitigate an infection, resulting in a lower mortality rate, lower morbidity and faster recovery. If a vaccinated individual does develop the disease vaccinated against, the disease is likely to be less virulent than in unvaccinated victims. The following are important considerations in the effectiveness of a vaccination program:

1. careful modeling to anticipate the impact that an immunization campaign will have on the epidemiology of the disease in the medium to long term

2. ongoing surveillance for the relevant disease following introduction of a new vaccine

3. maintenance of high immunization rates, even when a disease has become rare.


The efficacy or performance of the vaccine is dependent on a number of factors:

•the disease itself (for some diseases vaccination performs better than for others)

•the strain of vaccine (some vaccines are specific to, or at least most effective against, particular strains of the disease)

•whether the vaccination schedule has been properly observed

•idiosyncratic response to vaccination; some individuals are “non-responders” to certain vaccines, meaning that they do not generate antibodies even after being vaccinated correctly

•assorted factors such as ethnicity, age, or genetic predisposition


Control, Elimination, and Eradication of Vaccine-Preventable Diseases:

Immunization programs are associated with the goals of controlling, eliminating, or eradicating a disease. Control of a vaccine-preventable disease reduces illness outcomes and often limits the disruptive impacts associated with outbreaks of disease in communities, schools, and institutions. Control programs can also reduce absences from work for ill persons and for parents caring for sick children, decrease absences from school, and limit health care utilization associated with treatment visits. Elimination of a disease is a more demanding goal than control, usually requiring the reduction to zero of cases in a defined geographic area but sometimes defined as reduction in the indigenous sustained transmission of an infection in a geographic area. As of 2010, the United States had eliminated indigenous transmission of measles, rubella, poliomyelitis, and diphtheria. Importation of pathogens from other parts of the world continues to be important, and public health efforts are intended to react promptly to such cases and to limit forward spread of the infectious agent. Eradication of a disease is achieved when its elimination can be sustained without ongoing interventions. The only vaccine-preventable disease that has been globally eradicated thus far is smallpox. Although smallpox vaccine is no longer given routinely, the disease has not naturally reemerged because all chains of human transmission were interrupted through earlier vaccination efforts and humans were the only natural reservoir of the virus. Currently, a major health initiative is targeting the global eradication of polio. Sustained transmission of polio has been eliminated from most nations but has never been interrupted in four countries: Afghanistan, India, Nigeria, and Pakistan. Detection of a case of a disease targeted for eradication or elimination is considered a sentinel event, since the infectious agent could become reestablished in the community or region. Hence, such episodes must be promptly reported to public health authorities.


Vaccine-preventable diseases and vaccine-preventable deaths:

A vaccine-preventable disease is an infectious disease for which an effective preventive vaccine exists. If a person acquires a vaccine-preventable disease and dies from it, the death is considered a vaccine-preventable death. The most common and serious vaccine-preventable diseases tracked by the World Health Organization (WHO) are: diphtheria, Haemophilus influenzae serotype b infection, hepatitis B, measles, meningitis, mumps, pertussis, poliomyelitis, rubella, tetanus, tuberculosis, and yellow fever. The WHO reports licensed vaccines being available to prevent, or contribute to the prevention and control of, 25 vaccine-preventable infections.  Vaccine-preventable deaths are usually caused by a failure to obtain the vaccine in a timely manner. This may be due to financial constraints or to lack of access to the vaccine.


Progress against diseases for which vaccines already exist and deaths from diseases for which vaccines might be developed:

Disease | Annual deaths (all ages) if no vaccine | Deaths prevented | Deaths occurring | % prevented
Smallpox | 5.0 million | 5.0 million | 0 | 100
Diphtheria | 260,000 | 223,000 | 37,000 | 86
Whooping cough | 990,000 | 630,000 | 360,000 | 64
Measles | 2.7 million | 1.6 million | 1.1 million | 60
Neonatal tetanus | 1.2 million | 0.7 million | 0.5 million | 58
Hepatitis B | 1.2 million | 0.4 million | 0.8 million | 33
Tuberculosis | 3.2 million | 0.2 million | 3.0 million | 6
Polio (cases of lifelong paralysis) | 640,000 | 550,000 | 90,000 | 86
Malaria/other parasitic infections | 2.2 million | 0 | 2.2 million | 0
HIV/sexually transmitted diseases | 1.3 million | 0 | 1.3 million | 0
Diarrhoea/enteric fevers | 3.0 million | 0 | 3.0 million | 0
Acute respiratory infections | 3.7 million | 0 | 3.7 million | 0

SOURCE Estimates supplied by Children’s Vaccine Initiative, Geneva, February 1996.


Decline in death rates due to reduction in vaccine preventable diseases in the U.S.:


Reduction in mortality after introduction of immunization:


Routine vaccines save lives, says science. A study from CDC researchers led by Anne Schuchat analyzed what happened to disease rates as childhood vaccination rates increased starting in the early 1990s. The researchers used these findings to model the resulting effect over the kids’ lifetimes. In the analysis, the researchers factored in most routine vaccines recommended for children below age 6 (among them the MMR and whooping cough vaccines). Their findings: Routine childhood vaccinations given between 1994 and 2013 would save 732,000 lives and prevent 322 million cases of illness and 21 million hospitalizations over the course of the children’s lifetimes.


Other vaccine successes:

•In North America, diphtheria vaccines reduced diphtheria-related deaths by more than 99%.

•Before the varicella-zoster virus (VZV) vaccine was introduced, almost 350,000 cases of chickenpox occurred annually in Canada. There were 53 deaths due to chickenpox between 1987 and 1996. A vaccine is now available.

•Thanks to vaccines, there has not been a single case of smallpox in the world since 1977.

•The discovery and use of polio vaccines has all but eliminated polio in the Americas. In 1960, there were 2,525 cases of paralytic polio in the United States. By 1965, there were 61. Between 1980 and 1990, cases averaged 8 per year, and most of those were induced by vaccination! There has not been a single case of polio caused by the wild virus since 1979, with a rare case reported each year from persons coming into the country carrying the virus. In 1994, polio was declared eradicated in all of the Americas.


About 1.5 million children die each year from vaccine-preventable diseases. More than 70 percent of the world’s unvaccinated children live in 10 countries with large populations and weak immunization systems. Vaccines save millions of lives each year and are among the most cost-effective health interventions ever developed. Immunization has led to the eradication of smallpox, a 74 percent reduction in childhood deaths from measles over the past decade, and the near-eradication of polio.  Despite these great strides, there remains an urgent need to reach all children with life-saving vaccines. One in five children worldwide are not fully protected with even the most basic vaccines. As a result, an estimated 1.5 million children die each year—one every 20 seconds—from vaccine-preventable diseases such as diarrhea and pneumonia. Tens of thousands of other children suffer from severe or permanently disabling illnesses. Vaccines are often expensive for the world’s poorest countries, and supply shortages and a lack of trained health workers are challenges as well. Unreliable transportation systems and storage facilities also make it difficult to preserve high-quality vaccines that require refrigeration.


Deaths due to vaccine-preventable diseases:

The total number of children who die each year from diseases preventable by vaccines currently recommended by WHO is 1.5 million:

Hib: 199 000

Pertussis: 195 000

Measles: 118 000

Neonatal tetanus: 59 000

Tetanus (non-neonatal): 2 000

Pneumococcal disease: 476 000

Rotavirus: 453 000


Benefits of immunization:

Immunization is one of the most important advances in public health and is estimated to have saved more lives in the world over the past 50 years than any other health intervention. Before vaccines became available, many children died from diseases such as diphtheria, measles and polio that are now preventable by immunization. Immunization programs are responsible for the elimination, containment or control of infectious diseases that were once common; however, the viruses and bacteria that cause vaccine preventable diseases still exist globally and can be transmitted to people who are not protected by immunization. If immunization programs were reduced or stopped, diseases that are now rarely seen because they are controlled through immunization would re-appear, resulting in epidemics of diseases causing sickness and death. This phenomenon has been seen in many countries; for example, large epidemics of diphtheria and measles have occurred in Europe in recent decades after immunization rates declined. Immunization is important in all stages of life. Infants and young children are particularly susceptible to vaccine preventable diseases because their immune systems are not mature enough to fight infection; as a result, they require timely immunization. Older children and adults also require immunization to restore waning immunity and to build new immunity against diseases that are more common in adults. Immunization directly protects individuals who receive vaccines. Through herd immunity, immunization against many diseases also prevents the spread of infection in the community and indirectly protects:

•infants who are too young to be vaccinated,

•people who cannot be vaccinated for medical reasons (e.g., certain immune-suppressed people who cannot receive live vaccines),

•people who may not adequately respond to immunization (e.g. the elderly).


Without doubt, vaccines are among the most efficient tools for promoting individual and public health and deserve better press:

Disease control benefits:


Eradication of disease:

Unless an environmental reservoir exists, an eradicated pathogen cannot re-emerge except through accidental or malevolent reintroduction by humans, allowing vaccination or other preventive measures to be discontinued. While eradication may be an ideal goal for an immunization program, to date only smallpox has been eradicated, allowing discontinuation of routine smallpox immunization globally. Potentially, other infectious diseases with no extrahuman reservoir can be eradicated provided an effective vaccine and specific diagnostic tests are available. Eradication requires high levels of population immunity in all regions of the world over a prolonged period, with adequate surveillance in place. The next disease targeted for eradication is polio, which remains a global challenge. Although high coverage with oral polio vaccine (OPV) has eliminated type 2 poliovirus globally, transmission of types 1 and 3 continues in limited areas of a few countries. Paralytic disease caused by OPV, either directly or through reversion to virulence, and persistent vaccine-virus excretion in immunodeficient individuals are problems yet to be solved. Global use of monovalent type 1 and type 3 OPV and inactivated polio vaccine (IPV) may eventually be required.


Elimination of disease:

Diseases can be eliminated locally without global eradication of the causative microorganism. In four of six WHO regions, substantial progress has been made in measles elimination; transmission no longer occurs indigenously and importation does not result in sustained spread of the virus. Key to this achievement is more than 95% population immunity through a two-dose vaccination regimen. Combined measles, mumps and rubella (MMR) vaccine could also eliminate and eventually eradicate rubella and mumps. Increasing measles immunization levels in Africa, where coverage averaged only 67% in 2004, is essential for eradication of this disease. Already, elimination of measles from the Americas, and of measles, mumps and rubella from Finland, has been achieved, providing proof in principle of the feasibility of their ultimate global eradication. It may also be possible to eliminate Haemophilus influenzae type b (Hib) disease through well implemented national programs, as experience in the West has shown. Local elimination does not remove the danger of reintroduction, as seen in Botswana, polio-free since 1991, which experienced importation of type 1 poliovirus from Nigeria in 2004, and in the United States of America (USA), where measles was reintroduced to Indiana in 2005 by a traveler from Romania. For diseases with an environmental reservoir such as tetanus, or animal reservoirs such as Japanese encephalitis and rabies, eradication may not be possible, but global disease elimination is a feasible objective if vaccination of humans (and animals for rabies) is maintained at high levels.


Control of mortality, morbidity and complications:

For the individual:

Efficacious vaccines protect individuals if administered before exposure. Pre-exposure vaccination of infants with several antigens is the cornerstone of successful immunization programs against a cluster of childhood diseases. Vaccine efficacy against invasive Hib disease of more than 90% was demonstrated in European, Native American, Chilean and African children in large clinical studies in the 1990s. In the United Kingdom, no infant given three doses developed Hib disease in the short-term (boosters may be required for long-term protection), and recent postmarketing studies have confirmed the high effectiveness of vaccination of infants against Hib in Germany and pertussis in Sweden. Many vaccines can also protect when administered after exposure – examples are rabies, hepatitis B, hepatitis A, measles and varicella.

For society:

Ehreth estimates that vaccines annually prevent almost 6 million deaths worldwide. In the USA, there has been a 99% decrease in incidence for the nine diseases for which vaccines have been recommended for decades, accompanied by a similar decline in mortality and disease sequelae. Complications such as congenital rubella syndrome, liver cirrhosis and cancer caused by chronic hepatitis B infection or neurological lesions secondary to measles or mumps can have a greater long-term impact than the acute disease. Up to 40% of children who survive meningitis due to Hib may have life-long neurological defects. In field trials, mortality and morbidity reductions were seen for pneumococcal disease in sub-Saharan Africa and rotavirus in Latin America.  Specific vaccines have also been used to protect those in greatest need of protection against infectious diseases, such as pregnant women, cancer patients and the immunocompromised.


Mitigation of disease severity:

Disease may occur in previously vaccinated individuals. Such breakthroughs are either primary – due to vaccine failure – or secondary. In such cases, the disease is usually milder than in the non-vaccinated. In a German efficacy study of an acellular pertussis vaccine, vaccinated individuals who developed whooping cough had a significantly shorter duration of chronic cough than controls. Such findings were confirmed in Senegal. Varicella breakthroughs exhibit little fever, fewer skin lesions and fewer complications than unvaccinated cases. Milder disease in vaccinees was also reported for rotavirus vaccine.


Prevention of infection:

Many vaccines are primarily intended to prevent disease and do not necessarily protect against infection. Some vaccines protect against infection as well. Hepatitis A vaccine has been shown to be equally efficacious (over 90% protection) against symptomatic disease and asymptomatic infections. Complete prevention of persistent vaccine-type infection has been demonstrated for human papillomavirus (HPV) vaccine. Such protection is referred to as “sterilizing immunity”. Sterilizing immunity may wane in the long term, but protection against disease usually persists because immune memory minimizes the consequences of infection.


Protection of the unvaccinated population:

Herd protection:

Efficacious vaccines not only protect the immunized, but can also reduce disease among unimmunized individuals in the community through “indirect effects” or “herd protection”. Hib vaccine coverage of less than 70% in the Gambia was sufficient to eliminate Hib disease, with similar findings seen in Navajo populations.  Another example of herd protection is a measles outbreak among preschool-age children in the USA in which the attack rate decreased faster than coverage increased.  Herd protection may also be conferred by vaccines against diarrhoeal diseases, as has been demonstrated for oral cholera vaccines. “Herd protection” of the unvaccinated occurs when a sufficient proportion of the group is immune. The decline of disease incidence is greater than the proportion of individuals immunized because vaccination reduces the spread of an infectious agent by reducing the amount and/or duration of pathogen shedding by vaccinees, retarding transmission. Herd protection as observed with OPV involves the additional mechanism of “contact immunization” – vaccine viruses infect more individuals than those administered vaccine. The coverage rate necessary to stop transmission depends on the basic reproduction number (R0), defined as the average number of transmissions expected from a single primary case introduced into a totally susceptible population. Diseases with high R0 (e.g. measles) require higher coverage to attain herd protection than a disease with a lower R0 (e.g. rubella, polio and Hib). Because of herd protection, some diseases can be eliminated without 100% immunization coverage.
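
The relationship between R0 and the coverage needed for herd protection can be sketched numerically. A minimal illustration, using the standard homogeneous-mixing approximation (the R0 values below are rough textbook figures, not drawn from a cited source):

```python
def herd_immunity_threshold(r0):
    """Minimum immune fraction needed to interrupt sustained transmission,
    under a simple homogeneous-mixing model: 1 - 1/R0."""
    return 1 - 1 / r0

# Illustrative R0 values: measles spreads far more readily than rubella,
# so it demands much higher coverage before herd protection takes hold.
for disease, r0 in [("measles", 15), ("rubella", 6)]:
    print(f"{disease}: R0 = {r0}, threshold = {herd_immunity_threshold(r0):.0%}")
```

This makes the point in the text concrete: a disease with R0 around 15 needs roughly 93% population immunity, whereas one with R0 around 6 needs only about 83%, which is why some diseases can be eliminated well short of 100% coverage.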

Source drying:

Source drying is a related concept to herd protection. If a particular subgroup is identified as the reservoir of infection, targeted vaccination will decrease disease in the whole population. In North Queensland, Australia, there was a high incidence of hepatitis A in the indigenous population. Vaccination of indigenous toddlers, with catch-up up to the sixth birthday, had a rapid and dramatic impact in eliminating the disease in the indigenous population and in the much larger non-indigenous population (who were not vaccinated) across the whole of Queensland. Similar approaches have been very successfully applied in several other larger settings, including Israel and the USA. The success of source drying justifies vaccination of special occupational groups, such as food handlers, to control typhoid and hepatitis A. Pertussis vaccine boosters for close contacts (such as parents, grandparents, nannies, siblings and baby unit nurses), who are the most common sources of transmission to infants, protect those too young to be given primary vaccination with a surrounding “pertussis-free cocoon”.


Prevention of related diseases and cancer:

Protection against related diseases:

Vaccines can also protect against diseases related to the targeted disease. For example, in Finland, the USA and elsewhere, influenza vaccination has been found to protect against acute otitis media in children, with a vaccine efficacy of more than 30%. Measles vaccination protects against complications such as dysentery, bacterial pneumonia, keratomalacia and malnutrition. An enterotoxigenic Escherichia coli vaccine demonstrated protection against diarrhoea due to Salmonella enterica.

Cancer prevention:

Infective agents cause several cancers. Chronic hepatitis B infection leads to liver cancer. Vaccination against such pathogens should prevent the associated cancer as already observed for hepatocellular carcinoma in Taiwan, China. These results could be replicated in Africa. Reduction of the incidence of cervical cancer is expected with the use of HPV vaccines against serotypes 16 and 18, responsible for over 70% of the global cervical cancer burden, as reduction in precancerous lesions has been demonstrated in vaccinees.


Societal and other benefits:

Health-care and other savings for society:

Immunization programs require funding for infrastructure (e.g. cold-chain maintenance), purchase of vaccines and adequate staffing. However, the mortality and morbidity prevented translate into long-term cost savings and potential economic growth. Ehreth estimated in 2003 that vaccines yield direct savings on the order of tens of billions of US dollars globally. Malaria (for which several promising vaccines are currently in development) costs sub-Saharan Africa US$ 100 billion in lost annual gross domestic product (GDP). Savings are enhanced if several antigens are delivered in a single vaccine. Combination vaccines bring the added benefits of better compliance, coverage and injection safety. Introduction of a new antigen is facilitated with combination vaccines, ensuring early high coverage by maintaining previous immunization schedules, without compromising (and sometimes improving) immunogenicity and reactogenicity. When indirect costs are taken into account, savings are higher for common diseases with lower mortality and morbidity (such as varicella) than for more severe diseases (such as polio). Indirect costs, such as lost productivity (as well as direct medical costs), have been emphasized by eminent health economists in assessing the full value of vaccination. Immunization programs, compared to other common public health interventions such as wearing seat-belts and chlorination of drinking water, are a good investment and more cost effective than, for example, advice on smoking cessation. Cost savings will be achieved with the new live-attenuated rotavirus and conjugated pneumococcal vaccines, as well as wider use of hepatitis B and Hib vaccines.


Preventing development of antibiotic resistance:

By reducing the need for antibiotics, vaccines may reduce the prevalence and hinder the development of resistant strains. Introduction of a conjugate pneumococcal vaccine for infants in the USA in 2000 saw a 57% decline in invasive disease caused by penicillin-resistant strains and a 59% decline in strains resistant to multiple antibiotics by 2004 across a broad age spectrum: 81% among children under 2 years of age and 49% among persons aged 65 years and older. Vaccines against typhoid can prevent primary infection and the spread of antibiotic-sensitive as well as multidrug-resistant strains. The development of new vaccines against infectious pathogens where antibiotic resistance is a global threat (e.g. Staphylococcus aureus) is viewed as a better long-term option to control the problem of increasing resistance.


Extending life expectancy:

Vaccines can increase life expectancy, including by protecting against conditions one might not expect them to benefit. Elderly individuals given influenza vaccine in the USA had approximately 20% less chance of suffering cardiovascular and cerebrovascular disease and a 50% lower risk of mortality from all causes compared with their unvaccinated counterparts. In Sweden, administration of polysaccharide pneumococcal vaccine and inactivated influenza vaccine significantly reduced the risk of in-hospital mortality from pneumonia and cardiac failure among elderly persons, with an additive effect when both vaccines had been administered.


Safe travel and mobility:

With global air travel rising, there is an increased risk of exposure to infectious diseases abroad. Travelers transmit and disseminate disease, as has been observed in the case of polio and in the dispersal of meningococcal strains by returning pilgrims from Saudi Arabia. In the case of the Muslim Hajj (the largest annual human gathering in the world), local authorities require meningococcal vaccination and recommend various other vaccinations, such as influenza and hepatitis B, for pilgrims. The most common vaccine-preventable diseases among travelers are influenza and hepatitis A. Other vaccines to consider for travel include rabies, hepatitis B, typhoid, cholera, yellow fever, Japanese encephalitis and measles.  Many vaccines can be given by flexible accelerated schedules to ensure early protection. Thus the traveler seeking health advice, even within a few weeks of departure, can travel overseas without vaccine-preventable health risks to themselves and others.


Other public health benefits:

In developing countries, vaccination programs are cornerstones of primary health-care services. The infrastructure and personnel required for an effective and sustainable immunization program give opportunities for better primary health-care services, particularly in the critical perinatal and early infancy period.


Empowerment of women:

With improvements in infant and child mortality, women tend to opt for fewer children as the need to have many children to ensure that some will reach adulthood is reduced. This has significant health, educational, social and economic benefits.


Protection against bioterrorism:

The current concern about the potential use of smallpox virus in bioterror is due to the cessation of vaccination (and of vaccine manufacture) following the monumental achievement of smallpox eradication. The potential of vaccines to protect populations from bioterrorism threats such as smallpox and anthrax has led many governments to ensure an adequate supply of the necessary vaccines in preparation against such an attack. Surveillance and response systems for vaccine-preventable and other diseases play a critical role in identification, characterization and response to biological weapons.


Promoting economic growth:

Poor health has been shown to stunt economic growth while good health can promote social development and economic growth. Health is fundamental to economic growth for developing countries and vaccinations form the bedrock of their public health programmes. The annual return on investment in vaccination has been calculated to be in the range of 12% to 18%, but the economic benefits of improved health continue to be largely underestimated.


Enhancing equity:

The burden of infectious, including vaccine-preventable, diseases falls disproportionately on the disadvantaged. Vaccines have clear benefits for the disadvantaged. Pneumococcal immunization programs in the USA have at least temporarily removed racial and socioeconomic disparities in invasive pneumococcal disease incidence, while in Bangladesh, measles vaccination has enhanced equity between high- and low-socioeconomic groups.


Promoting peace:

There were at least seven United Nations Children’s Fund (UNICEF) vaccine-mediated ceasefires during civil conflicts. These conflicts were in diverse parts of the world, from Liberia to Afghanistan, where even warring factions see the benefit of immunization programs. During protracted conflict it is possible to ensure that vaccination coverage remains high. This is seen in Sri Lanka, where despite unrest for the last two decades coverage in 2005 for both three doses of diphtheria–tetanus–pertussis vaccine and one dose of measles vaccine was 99%. The high cost-effectiveness and multiple benefits of relatively modest resource investments in immunization contrast starkly with profligate global military expenditures, currently over US$ 1 trillion annually.


In a nutshell:

The benefits of vaccination extend beyond prevention of specific diseases in individuals. They enable a rich, multifaceted harvest for societies and nations. Vaccination makes good economic sense, and meets the need to care for the weakest members of societies. Reducing global child mortality by facilitating universal access to safe vaccines of proven efficacy is a moral obligation for the international community as it is a human right for every individual to have the opportunity to live a healthier and fuller life. Achievement of the Millennium Development Goal 4 (two-thirds reduction in 1990 under-5 child mortality by 2015) will be greatly advanced by, and unlikely to be achieved without, expanded and timely global access to key life-saving immunizations such as measles, Hib, rotavirus and pneumococcal vaccines. So a comprehensive vaccination program is a cornerstone of good public health and will reduce inequities and poverty.


The accidental advantages of vaccines:

The Bacille Calmette-Guérin vaccination was given to protect you from tuberculosis. What we are only just realising is that, in common with several other vaccines, it may have done far more than that. There is growing evidence that vaccines have a wider-ranging influence on the immune system than we thought. In Africa, for instance, studies have shown that measles vaccine cuts deaths from all other infections combined by a third, mainly by protecting against pneumonia, sepsis and diarrhoea. Even in the West, where it is far less common for children to die from infectious illnesses, there are still surprising benefits: some vaccines seem to reduce our susceptibility to eczema and asthma.

Exactly what causes these “non-specific effects”, as they are termed, is a mystery. But some scientists are arguing that, despite the uncertainties, it is time to start harnessing them more effectively. The World Health Organization, which is the main provider of vaccines in developing countries, has asked a group of vaccine experts to get to the bottom of it. “This could have huge implications for global healthcare,” says Christine Benn, a senior researcher at the Statens Serum Institute in Denmark and a member of the WHO committee. “Vaccines have been a fantastic success, but we can probably do much better by taking non-specific effects into account. An examination of these issues is long overdue.”

Considering vaccines have been used since the 1800s and are the central plank of our public health system, it may seem hard to believe that such profound effects could have gone ignored all this time. In fact, an early 20th century Swedish physician called Carl Näslund did notice something was up after the BCG vaccine was introduced in his country. Vaccinated children had a much higher chance of reaching their first birthday – even though TB normally kills older children.
In the 1940s and 50s, trials in the US and UK suggested that BCG-vaccinated children had a 25 per cent lower death rate from diseases other than TB. But no one took much notice until 30 years ago, when a Danish anthropologist called Peter Aaby began working in the West African state of Guinea-Bissau. In 1979 he witnessed a severe measles outbreak that killed 1 in 4 infants affected. Aaby arranged for measles vaccination to be introduced, but was surprised to see that even after the epidemic abated, immunised children were more likely to survive childhood. Aaby began digging, and discovered studies from elsewhere in Africa, as well as Bangladesh and Haiti, that also suggested measles vaccine gives a wider kind of protection. “We are collecting more and more data consistent with non-specific effects being very important,” says Aaby. What could the explanation be? Several lines of evidence suggest that our immune systems can be affected by many factors, including past encounters with microbes. Those microbes can be in the environment or a vaccine syringe. “If infections can alter the immunological milieu, it is not a major leap to suggest that vaccines might also do so,” said Andrew Pollard, head of the Oxford Vaccine Centre at the University of Oxford, in an editorial about Aaby’s work. According to the old view of vaccines, they work by priming what is known as our adaptive immune system. This consists of various defense cells circulating in the blood, which make antibodies and other molecules that recognise and latch on to specific foreign proteins on bacteria, viruses or other germs. It is this lock-and-key specificity that is responsible for our immune memory. On our first encounter with the measles virus, say, the immune cells that make potent antibodies to it reproduce, giving rise to successive generations of daughter cells that make progressively more powerful antibodies. 
The end product is highly proficient measles-killing machines that linger in our bodies for years. That’s why, if we re-encounter the virus, it is defeated so quickly we don’t even notice. But that may not be the whole story. Another, evolutionarily older, branch of our defences known as the innate immune system might also be playing a role. These cells are programmed to react to anything unfamiliar or untoward, such as the chemicals released when tissues are damaged, attacking any molecules or microorganisms that might pose a threat. Last year, surprising evidence emerged that BCG stimulates the innate immune system as well as the adaptive one. In people who received the shot, certain kinds of innate immune cells responded more strongly to bacterial and fungal pathogens completely unrelated to the TB bug. This is the first indication that the innate immune system reacts to vaccines, and the researchers suggested it could explain some of the general immune-boosting effects of BCG. “It’s quite preliminary data, but it’s very important,” says Nigel Curtis, head of infectious diseases at the Royal Children’s Hospital Melbourne and the University of Melbourne, Australia, who studies BCG. The discovery may be only one part of the explanation for BCG’s mysterious powers, though. For starters, it emerged recently that even memory cells of the adaptive immune system can target unrelated microbes, if there is sufficient cross-reactivity with a germ we have previously vanquished.

Tipping the balance:

But the theory that probably has most evidence behind it concerns two competing arms of the adaptive immune system, known as type 1 and type 2 helper T-cells. Broadly, type 1 cells promote immune reactions against bacteria and viruses, while type 2 cells are geared towards fighting off parasitic worms in the gut. Both BCG and the measles vaccine seem to tip the balance to type 1, according to studies of the antibodies released into the bloodstream after vaccination. Whatever the explanation is, we might be able to maximise the benefits, either by designing new vaccines or augmenting the effects of existing ones. But the WHO committee has another line of inquiry: there are suggestions that one vaccine could have harmful non-specific effects. The vaccine under suspicion is DTP, which prevents diphtheria, tetanus and pertussis, otherwise known as whooping cough. It was Aaby, again, who first drew attention to this. These days, he works as a vaccine researcher for the Danish Statens Serum Institute, but he is still based mainly in Guinea-Bissau. For several months in 2001 and 2002, health centers in the capital city, Bissau, ran out of DTP, and some infants never got their shot. Aaby noted that, among children who had been admitted to hospital for some reason, those who had had the shot were over twice as likely to die during their hospital stay. Further studies showed that the effect was particularly pronounced for girls. What no one knows is why DTP might have such an effect. One possible explanation is that the pertussis component is made from killed whooping cough bacteria. There are other ways to make vaccines, including using live but weakened bacteria or viruses, with both BCG and the measles shot being this type. Killed vaccines, on the other hand, seem to tip the type 1/type 2 balance away from the bacteria and virus-fighting type 1 arm. 
Animal studies show that, for unknown reasons, females have a naturally stronger type 2 bias, which could explain the sex difference in mortality seen in Guinea-Bissau. No one is suggesting we stop giving the DTP vaccine. Its protection against diphtheria, tetanus and whooping cough is hugely beneficial – especially in the West.



Vaccine failure, interference and spread of disease:


There are two main reasons for failure of immunizations:

(1) Failure of the vaccine delivery system to provide potent vaccines properly to persons in need; and

(2) Failure of the immune response, whether due to inadequacies of the vaccine or factors inherent in the host.


Vaccine failure:

Vaccine failure occurs when disease develops in a person despite their having been vaccinated against it. It is of two types:

1. Primary vaccine failure: This is when a person fails to produce antibodies at detectable levels, or does not produce enough antibodies to be considered protected against the disease.

2. Secondary vaccine failure: This is when a person does produce antibodies in response to vaccination, but the levels wane and decline faster than normally expected. Because antibodies to almost all vaccines decline over time, even after booster shots, secondary vaccine failure is frequent in outbreaks of disease among the vaccinated.
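
The distinction between the two failure modes can be illustrated with a toy antibody-decay model. The protective threshold, peak titers and half-lives below are arbitrary illustrative numbers, not clinical values:

```python
PROTECTIVE_TITER = 10.0  # hypothetical protective threshold, arbitrary units

def titer_at(t_years, peak_titer, half_life_years):
    """Simple exponential-decay sketch of post-vaccination antibody titer."""
    return peak_titer * 0.5 ** (t_years / half_life_years)

# Primary failure: the response never reaches a protective level.
primary_peak = titer_at(0, peak_titer=5.0, half_life_years=8.0)

# Secondary failure: protected at first, but the titer wanes below threshold.
early = titer_at(0, peak_titer=80.0, half_life_years=2.0)
late = titer_at(10, peak_titer=80.0, half_life_years=2.0)

print(primary_peak < PROTECTIVE_TITER)  # primary failure: never protected
print(early >= PROTECTIVE_TITER)        # initially protected
print(late < PROTECTIVE_TITER)          # waned below threshold: secondary failure
```

All three checks print True in this sketch: the first vaccinee never mounts a protective response, while the second is protected initially but loses protection as the titer halves every two years.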


Vaccine failure stems from imperfect vaccine efficacy. No vaccine is 100% efficacious, meaning that some people who receive a vaccine can still get the disease. In the past three flu seasons, the CDC rated the influenza vaccine’s overall effectiveness at between 47 and 62 percent, meaning vaccination reduced recipients’ risk of influenza by only about half relative to the unvaccinated. A study involving nearly 9,000 high school students found that by the age of 15, about 15 percent of teens who received the full series of hepatitis B shots as infants tested positive for hepatitis B surface antigen (HBsAg), as vaccinees may have lost their immunological memory against HBsAg.
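
Vaccine effectiveness figures like those above are relative risk reductions, not the fraction of vaccinees who fall ill. A small sketch (the attack rates here are made up for illustration, not taken from the CDC figures):

```python
def vaccine_effectiveness(attack_rate_vaccinated, attack_rate_unvaccinated):
    """VE = 1 - relative risk: the proportional reduction in disease risk
    among the vaccinated compared with the unvaccinated."""
    return 1 - attack_rate_vaccinated / attack_rate_unvaccinated

# Hypothetical season: 4% of vaccinated and 10% of unvaccinated people get flu.
ve = vaccine_effectiveness(0.04, 0.10)
print(f"VE = {ve:.0%}")  # VE = 60%, yet only 4% of vaccinees fell ill
```

So a 50% effectiveness figure does not mean half of vaccinated people get influenza; it means their risk is half that of comparable unvaccinated people.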


Vaccine failure of Tdap vaccine:

An analysis of Washington state’s 2012 pertussis epidemic, the worst in the state since 1942, found that the effectiveness of the vaccine used to prevent the disease waned quickly and sharply in adolescents who were fully inoculated, likely contributing to a surge of cases among those who had already had their shots. Effectiveness of the Tdap vaccine — tetanus, diphtheria and acellular pertussis — was only about 64 percent overall, and it dropped to about 34 percent within two to four years after it was given, according to a study led by Dr. Anna Acosta, an epidemiologist with the Centers for Disease Control and Prevention (CDC). That helps explain why even children who received all the CDC-recommended doses by age 11 were part of a spike in cases during the epidemic. The study confirms what others had suggested: the switch from whole-cell pertussis vaccine to acellular types in 1997 took a toll on vaccine efficacy. The change was made because there was an “unacceptably high” level of reactions to the whole-cell shots, including febrile seizures.


Vaccine interference:

Vaccine interference may be intra- or inter-vaccine in nature. Intra-vaccine interference is determined by the nature and dose of the individual vaccine valences, the nature and quality of any additives, and the pharmaceutical formulation of the product. Vaccinee factors, including pre-existing immunity, the stage of immunological maturation, and genetic and environmental background, may also determine interference, and the vaccine schedule and mode of delivery are further contributory factors. In practice, the phenomenon of vaccine interference argues that individual vaccines should not be combined or associated in the absence of specific data-sheet recommendations to do so.

Live-attenuated vaccines replicate at low concentrations and elicit protective immunity without causing disease. This strategy has proven successful when the vaccine targets one pathogen, as with the vaccines against yellow fever and Japanese encephalitis viruses. Translating this straightforward idea to dengue has proven frustrating, because dengue is a complex flaviviral disease caused by not one but four antigenically distinct dengue viruses (DENV-1, 2, 3 and 4); in a tetravalent dengue vaccine, the DENV-3 serotype was found to predominate and suppress the response to the DENV-1, -2 and -4 serotypes. When two or more vaccines are mixed in the same formulation, the components can interfere. This occurs most frequently with live attenuated vaccines, where one component is more robust than the others and suppresses the growth of, and immune response to, the other components. This phenomenon was first noted in the trivalent Sabin polio vaccine, where the amount of serotype 2 virus in the vaccine had to be reduced to stop it from interfering with the “take” of the serotype 1 and 3 viruses.


Does the flu vaccine make you more susceptible to influenza? A 2015 study:

While a subject may be stimulating their immune system to build up specific antibodies, they are doing so only for a specific strain of virus. This gives other strains of influenza more influence when the subject comes into contact with them. The flu shot also encourages viruses to mutate faster to survive, making flu-shot recipients more susceptible to more powerful strains in the future. The Canadian research involved four studies and about 2,000 people, and the observations were telling: people who had received seasonal flu vaccines in the past were the most likely to come down with the H1N1 virus later. By focusing on one strain of virus, flu vaccines may subject the body to future danger, facilitating the entry of other virus strains into the body. Dengue fever is another virus that commonly takes advantage of the seasonally vaccinated. These findings have become prominent enough that some health authorities in Quebec have considered canceling their recommendations for seasonal flu shots for healthy individuals.


Microbial adaptation following vaccination:

The widespread use of vaccinations may trigger bacterial adaptations leading to antibiotic-resistant bacterial diseases and vaccine-resistant viral diseases.

• This can happen through several mechanisms, including mutation (hepatitis B vaccine), reversion to virulence (oral polio vaccine) and strain replacement (PCV7). Strain replacement, in the case of PCV7, meant that following widespread vaccination with PCV7, other pneumococcal strains that were not included in the vaccine became much more likely to cause pneumococcal infections. One of these, serotype 19A, was known to be multi-antibiotic resistant.

• Haemophilus influenzae cases also increased, filling the niche that had been created by PCV, and about 40% of these infections are multi-drug resistant.

• When PCV, a vaccine used against pneumonia, meningitis and bloodstream infections, was first introduced, it protected against seven pneumococcal strains, but it was soon linked to an increase in rates of antibiotic-resistant infections due to serotype 19A pneumococci and Haemophilus influenzae. The vaccine was then modified to include 13 pneumococcal strains to combat the 19A problem. However, the new vaccine will not improve the increase in Haemophilus infections, and may make that problem worse.

• In the United States, ear infections, sinus infections, bronchitis, pneumonia and meningitis, which are often caused by pneumococcal bacteria or Haemophilus, have become much harder and more expensive to treat because of increasing resistance to antibiotics. This is due in part to the widespread use of the PCV vaccine.

• Vaccines have also been implicated in causing new, vaccine-resistant strains of whooping cough, hepatitis and polio.


Other cases noted in the literature include:

•Whooping cough: In Australia, dangerous new strains of whooping cough bacteria were reported in March 2012, and the vaccine, researchers said, was responsible. This is because, while whooping cough is primarily attributed to Bordetella pertussis infection, it is also caused by a closely related pathogen called B. parapertussis, against which the vaccine does not protect. Two years earlier, scientists at Penn State had already reported that receiving the pertussis vaccine significantly enhanced nasal colonization of B. parapertussis, thereby promoting vaccine-resistant whooping cough outbreaks.

•Hepatitis B: In 2007, immunologists discovered that mutated, vaccine-resistant viruses were causing disease.


Polio by polio vaccine:

The oral polio vaccine, which is still used in many third-world countries, is made from three live polio viruses and carries a risk of causing polio. The viruses in the vaccine can also mutate or recombine into a deadlier version, igniting new outbreaks. The U.S. Centers for Disease Control and Prevention (CDC) admits that 154 cases of polio in the US between 1980 and 1999 were vaccine-associated, or on average 8 cases per year. According to Nature, poliovirus reverts to virulence in 2 to 4 babies per million vaccinated. And according to an article in Clinical Infectious Diseases, the risk of vaccine-associated polio ranged from 0 to 9 per million persons vaccinated for each of the three Sabin strains. The World Health Organization (WHO) acknowledges: “In very rare cases, the administration of OPV [oral polio vaccine] results in vaccine-associated paralysis associated with a reversion of the vaccine strains to the more neurovirulent profile of wild poliovirus. In a few instances, such vaccine strains have become both neurovirulent and transmissible and have resulted in infectious poliomyelitis.” This problem is so significant that oral polio vaccines are no longer used in the developed world. (They were stopped in the U.S. in 2000 and replaced by injected vaccines that are not live.) However, because they are cheaper to produce than injected vaccines, they are still used in the “less developed” world.


Recently Vaccinated individuals found to spread virus:

You can shed live virus in body fluids whether you have a viral infection or have received a live attenuated viral vaccine. The Johns Hopkins Patient Guide for immunocompromised patients used to advise avoiding “contact with children who are recently vaccinated,” though as of March 2015 the guide has been revised and this language removed. You can be an asymptomatic carrier of a viral infection (acquired naturally or via vaccination), so while you may show no symptoms, you may still be able to transmit the virus to others. Live attenuated viral vaccines (LAV) try, in essence, to fool your immune system into believing that you have come into contact with a real virus, thereby stimulating the antibody response that will protect you. When you get these live viral vaccines, you shed live virus in your body fluids, just as you do when you get a viral infection; that is how viral infections are transmitted, because viruses, unlike bacteria, need a living host… in order to multiply. Scientific evidence demonstrates that individuals vaccinated with live virus vaccines such as MMR (measles, mumps and rubella), rotavirus, chicken pox, shingles and influenza can shed the virus for many weeks or months afterwards and infect the vaccinated and unvaccinated alike. However, shedding of vaccine viruses typically occurs in lower amounts than shedding of wild-type viruses: weakened viruses in live attenuated vaccines can shed, but in weakened amounts, and because they cause mild or no disease, shed weakened viruses also cause mild or no disease. Furthermore, vaccine recipients can carry disease in the back of the throat and infect others while displaying no symptoms.


Vaccinated Individuals can be asymptomatic carriers of disease:

One of the dangers of any viral disease outbreak, which people often fail to realize, is that you can be an asymptomatic carrier of a viral infection; so while you show no symptoms or only mild symptoms, you may still be able to transmit the virus to others. Even fewer people understand that this is also true of live-virus vaccines. In an animal study, whole-cell DPT- and acellular-pertussis-vaccinated baboons did not develop serious clinical disease symptoms, such as loss of appetite and cough, when exposed to the B. pertussis bacteria, but they still carried B. pertussis in their throats and were capable of transmitting the infection to other baboons. The study’s lead author, Tod Merkel, explained that when exposed to B. pertussis after recently being vaccinated, you could be an asymptomatic carrier and infect others, saying: “When you’re newly vaccinated, you are an asymptomatic carrier, which is good for you, but not for the population.”



Vaccine safety, adverse events and autism:

So far I have discussed vaccine efficacy, benefits and impact, as well as vaccine failure. Now I will discuss the most contentious issue of vaccines: vaccine safety. It is contentious because a vaccine is given to a healthy child who is at the mercy of his or her parents, and parents certainly do not want to do anything that harms the child. The adverse effects of a vaccine receive far more importance and attention than the adverse effects of any drug, because a vaccine is administered to a healthy child while a drug is given to a sick child.


What is a vaccine safety crisis?

You may not be able to define it, but you certainly know when you are in one! Crises in vaccine safety are characterized by an unexpected series of events that initially seem to be out of control. The outcome is usually uncertain when the crisis is first identified, and there is a threat to the success of a vaccine or immunization program. A crisis may have a “real” basis arising from genuine vaccine reactions or immunization errors, or it may have no foundation in reality and be triggered entirely by mistaken rumours. Often a crisis in vaccine safety originates in the identification of AEFIs, but is aggravated by negative rumours. Whether a rumour triggers a series of events that build into a crisis depends on the nature of the rumour, how fast it spreads and whether prompt and effective action is taken to address it. When approaching a crisis, keep in mind that this may not only be a challenge, but also an opportunity to improve the communication on immunization issues. You have the opportunity to dispel negative rumours, to take action to upgrade policies and procedures if required, and to correct any errors or lapses in best practice.


Some examples of vaccine crisis:

17 children die after receiving hepatitis B vaccine in China in 2013:

Over a period of two months, eight infants in China died within hours, and in some cases minutes, of receiving hepatitis B vaccines. Nine other deaths among Chinese children aged 5 and younger were also reported following hepatitis B vaccination. Six of the deaths occurred in infants who had received the vaccine made by Shenzhen Kangtai Biological Products, while two occurred after a hepatitis B vaccine produced by drug maker Beijing Tiantan Biological Product. Health authorities in China have since launched an investigation and have suspended the use of millions of doses of hepatitis B vaccine made by Shenzhen Kangtai. Serious questions regarding effectiveness, low transmission rates among babies and the steep risk of side effects make the hepatitis B vaccine’s use very hard to justify for healthy newborns. It is interesting to note that US pharmaceutical giant Merck actually helped the Chinese build Shenzhen Kangtai in the 1990s. Merck also granted the company the biological technology to produce a hepatitis B vaccine royalty-free, in what the New York Times described as an “unusual joint venture aimed at improving health standards in China.”


Young tribal girls died following involvement in an HPV vaccine trial in India in 2009:

In 2009, trials had been carried out on 16,000 tribal schoolchildren in Andhra Pradesh, India, using the human papillomavirus (HPV) vaccine Gardasil. According to the report, within a month of receiving the vaccine many of the children fell ill, and by 2010 five of them had died. A further two children were reported to have died in Vadodara, Gujarat, where an estimated 14,000 tribal children were vaccinated with another brand of HPV vaccine, Cervarix, manufactured by GlaxoSmithKline (GSK). According to the report, a total of 120 girls had been taken ill, suffering from a variety of symptoms including “epileptic seizures, severe stomach aches, headaches and mood swings.” The report said it was disturbed to find that “all the seven deaths were summarily dismissed as unrelated to vaccinations without in-depth investigations …”; the speculative causes offered were suicide, accidental drowning in a well (why not suicide?), malaria, viral infections, subarachnoid haemorrhage (without autopsy), etc.


Adverse effects following meningococcal A vaccine trial in Africa 2012:

In December 2012, in the small village of Gouro, Chad, Africa, situated on the edge of the Sahara Desert, five hundred children were locked into their school and threatened that if they did not agree to be force-vaccinated with a meningitis A vaccine, they would receive no further education. These children were vaccinated without their parents’ knowledge. The vaccine was an unlicensed product still going through the third and fourth phases of testing. Within hours, one hundred and six children began to suffer from headaches, vomiting, severe uncontrollable convulsions and paralysis. The children then had to wait a full week for a doctor to arrive, while the team of vaccinators proceeded to vaccinate others in the village. When the doctor finally came, he could do nothing for the children. The team of vaccinators, upon seeing what had happened, fled the village in fear. Forty children were finally transferred to a hospital in Faya and later taken by plane to two hospitals in N’Djamena, the capital city of Chad. After being shuttled around like cattle, many of these sick, weak children were finally dumped back in their village without a diagnosis, and each family was given an unconfirmed sum of £1000 by the government. No forms were signed and no documentation was seen. They were informed that their children had not suffered a vaccine injury. However, if this were true, why would their government award each family £1000 in what has been described as hush money?


Why Japan banned MMR vaccine:

Japan stopped using the MMR vaccine seven years ago – virtually the only developed nation to turn its back on the jab. Government health chiefs claim a four-year experiment with it had serious financial and human costs. Of the 3,969 medical compensation claims relating to vaccines in the last 30 years, a quarter were made by those badly affected by the combined measles, mumps and rubella vaccine, they say. The triple jab was banned in Japan in 1993 after 1.8 million children had been given two types of MMR and a record number developed non-viral meningitis and other adverse reactions. Official figures show there were three deaths, while eight children were left with permanent handicaps ranging from damaged hearing and blindness to loss of control of limbs.
The government reconsidered using MMR in 1999 but decided it was safer to keep the ban and continue using individual vaccines for measles, mumps and rubella. The British Department of Health said Japan had used a type of MMR which included a strain of mumps vaccine that had particular problems and was discontinued in the UK because of safety concerns.
The Japanese government realised there was a problem with MMR soon after its introduction in April 1989, when vaccination was compulsory; parents who refused had to pay a small fine. An analysis of vaccinations over a three-month period showed one in every 900 children was experiencing problems: over 2,000 times higher than the expected rate of one child in every 100,000 to 200,000. The ministry switched to another MMR vaccine in October 1991, but the incidence was still high, with one in 1,755 children affected. No separate record has been kept of claims involving autism. Tests on the spinal fluid of 125 affected children were carried out to see if the vaccine had got into the children’s nervous systems; they found one confirmed case and two further suspected cases.
In 1993, after a public outcry fuelled by worries over the flu vaccine, the government dropped the requirement for children to be vaccinated against measles or rubella.  Dr Hiroki Nakatani, director of the Infectious Disease Division at Japan’s Ministry of Health and Welfare said that giving individual vaccines cost twice as much as MMR ‘but we believe it is worth it’.  In some areas parents have to pay, while in others health authorities foot the bill.  However, he admitted the MMR scare has left its mark. With vaccination rates low, there have been measles outbreaks which have claimed 94 lives in the last five years.


Japan no longer recommends HPV vaccine:

Japan issued the first suspension of the government’s recommendation to get vaccinated against HPV in June 2013. The Japanese Government and Health authorities then organized a symposium on HPV vaccines, which occurred in February 2014. Important testimony was delivered by one doctor who had treated over 20 cases of multiple sclerosis (MS) after Gardasil vaccination. Pharmaceutical representatives were trying to say that such side effects are psychogenic, but how can a psychogenic disorder cause MS lesions in a person’s brain—and in a girl who was perfectly healthy prior to vaccination? They didn’t have an answer to that. All these problems started in temporal association with the vaccine. Just out of precautionary principle, you would think that they would have the common sense to at least halt the use of the vaccine until more research is done. But no, they just want to force it, and they parrot that it is safe. They do not have any proof of safety other than manipulated research. This symposium was followed by a large press conference, attended by Dr. Tomljenovic and research colleagues from France and the US. Since then, attempts by the makers of HPV vaccines to reinstate active recommendation of HPV vaccination by the Japanese Government have all failed, and Merck—which manufactures Gardasil—warned investors that Japan’s decision would have “a significant negative impact” on sales. GlaxoSmithKline’s HPV vaccine Cervarix also saw a downturn in sales immediately following the original suspension.


All above examples of vaccine harms are scary. I do not know the truth. Let us examine vaccine adverse reaction in detail.


Vaccine adverse effects: All Vaccines carry risks:

Vaccination given during childhood is generally safe, and adverse effects, if any, are generally mild. Most reactions to vaccines are mild and self-limited, subsiding within 24–48 hours. The most common reactions involve inflammation at the site of vaccination – redness, swelling or pain – as well as fever. These vary by vaccine but are common, affecting no more than 10% of vaccinees. Fortunately the reactions are usually mild and self-limited, lasting only a few days. Uncommon (0.1 to <1% of vaccinees) and rare (<1 in 1,000 doses) reactions may also occur; it is best to get specific information for each vaccine. As with any medicine, there is always a small chance that someone might have an allergic reaction. True anaphylactic reactions are rare (1 in 100,000 to 1 in a million doses) but are reversible with proper treatment. These reactions usually begin shortly after immunization. For this reason, a nurse or doctor will need to watch your child for 15–30 minutes (depending on the vaccine) after a vaccine is given. It is important that you stay in the clinic for that period of time and watch for signs of an allergic reaction, such as breathing problems or severe swelling and blotchy skin on your child’s body or around the mouth. If you see any of these symptoms or are concerned about your child’s status, talk to your doctor or nurse immediately. It is important to understand that all vaccines carry a risk of provoking an immediate acute adverse reaction, such as anaphylactic shock, fainting, or a seizure. Further, vaccines can impair and alter immune system responses and can also cause brain inflammation (encephalopathy) that may lead to permanent brain damage.
In addition, as Institute of Medicine Committees have pointed out in published reports, some individuals are more susceptible to suffering harm from vaccines because of biological, genetic, and environmental risk factors but, most of the time, doctors cannot predict who will be harmed because there are few scientific studies that have evaluated vaccine risks for individuals. Here are just some of the ways vaccines can impair or alter immune responses and brain function:

•Some components in vaccines are neurotoxic, including heavy metals such as mercury-based preservatives and aluminum adjuvants; residual toxins like endotoxin and bioactive pertussis toxin; and chemicals like formaldehyde and phenoxyethanol.

•The lab-altered and genetically engineered viruses and bacteria in vaccines may impair immune responses and do not stimulate the same kind of immunity that occurs when the body responds to an infectious disease.

•Foreign DNA/RNA from human, animal and insect cell substrates used to produce vaccines may trigger serious health problems for some people.

•Vaccines may alter your T-cell function and lead to chronic illness.

•Vaccines can trigger allergies by introducing large foreign protein molecules into your body that have not been properly broken down by your digestive tract (since they are injected). Your body can have an allergic reaction to these foreign particles.


Vaccination induces immunity by causing the recipient’s immune system to react to antigens contained in the vaccine. Local and systemic reactions such as pain or fever can occur as part of the immune response. In addition, other vaccine components such as adjuvants, stabilizers and preservatives may also contribute to adverse events. A successful vaccine keeps even minor reactions to a minimum while producing the best possible immune response. There is low public tolerance of vaccine adverse reactions. Vaccines are therefore only licensed when the frequency of severe reactions is very rare and when only minor, self-limiting reactions are reported.


Can we compare vaccine risk with other daily risks of life?

Vaccines prevent six million deaths worldwide every year, CNN’s Dr. Sanjay Gupta writes, and there’s basically no reason not to get them. Only one in a million children has a serious adverse reaction. Those are great odds: you’re 100 times more likely to be struck by lightning than to have an allergic reaction to a vaccine, Gupta says. Taking aspirin, for example, is much more likely to cause bleeding in the brain.



Risk of disease is far greater than risk of vaccine:



Not all reported vaccine adverse events are indeed caused by vaccine:


Adverse event following immunization (AEFI):

Although vaccines are proven to be extremely safe, there is a potential risk of an adverse reaction, as with any other drug or medication. An Adverse Event Following Immunization (AEFI) is defined as “a medical incident that takes place after immunization, causes concern and is believed to be caused by the immunization”. Any untoward medical occurrence which follows immunization, and which does not necessarily have a causal relationship with the use of the vaccine, is grouped under AEFI. The adverse event may be any unfavourable or unintended sign, an abnormal laboratory finding, a symptom or a disease. The risk of AEFI with vaccination is always weighed against the risk of not immunizing a child; only when the benefit outweighs the risk is a vaccine considered safe. However, even at a relatively low rate, because of the high absolute number of beneficiaries, there is a risk of a few serious adverse events in vaccinated children. These events may be recognized during clinical trials or during post-marketing surveillance, e.g. intussusception following rotavirus vaccine. Tolerance of vaccine-associated adverse events is generally lower because vaccines are administered to healthy children, unlike other pharmaceutical products used in morbid populations. Vaccine-associated adverse events are more likely to be noticed and communicated, and can often significantly impact immunization programs, as seen with MMR and pertussis vaccines. Vaccines are foreign to the human body and are given to healthy infants and children. In the natural process of developing immunity, a vaccine may cause fever, erythema, local pain, etc. Besides, there is a slight risk of foreign-body reaction to the components in the vaccines. These factors are likely to cause concern in caregivers and parents. Whatever the cause, an AEFI may upset people to the extent that they refuse further vaccination for their children.
This may leave children much more likely to get a vaccine-preventable disease, become seriously ill, be disabled, or even die. AEFI surveillance therefore helps to preserve public confidence in the immunization program. Though the majority of AEFIs are mild, settle without treatment and have no long-term consequences, very rarely a serious adverse reaction can occur. Vaccination programs work in a ‘paradox’: as vaccination coverage increases and the disease burden reduces drastically, cases of AEFI attract more of people’s attention than the disease in the community does. The figure below depicts how AEFI impacts an ongoing immunization program.


As the rate of adverse events peaks, vaccine coverage falls and disease resurges, followed by increased vaccine coverage:


Classification of AEFI:

For programmatic purposes, AEFIs are classified into five broad categories. The table below provides a brief description of each type of reaction.



Vaccine product-related reaction (vaccine reaction):

An AEFI that is caused or precipitated by a vaccine due to one or more of the inherent properties of the vaccine product, e.g. extensive limb swelling following DTP vaccination. A vaccine reaction is an untoward event caused or precipitated by the vaccine when given correctly, arising from a constituent of the vaccine. In some cases this will be the vaccine antigen (the substance that generates immunity), making the reaction a side effect of the immunological process of generating immunity. In other cases it will be caused by other vaccine constituents (e.g. preservatives, stabilisers, antibiotics or residual substances from the manufacturing process) or by the adjuvant that is added to boost the vaccine’s immunogenicity. Vaccine reactions can be categorised into two types:

• Common, usually minor and self-limiting

• Rare and more serious


Minor reactions:

1. Usually occur within a few hours of injection.

2. Resolve after a short period of time and pose little danger.

3. Local (or localized): Restricted or limited to a specific body part or region and includes pain, swelling or redness at the site of injection.

4. Systemic:  Relating to a system, or affecting the entire body or an entire organism and includes fever, malaise, muscle pain, headache or loss of appetite.


Local reaction: swelling/redness at the site of injection.


Serious event:

An AEFI will be considered serious if it:

•results in death,

•is life-threatening,

•requires in-patient hospitalization or prolongation of existing hospitalization,

•results in persistent or significant disability/incapacity,

•is a congenital anomaly/birth defect, or

•requires intervention to prevent permanent impairment or damage.
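The seriousness criteria above amount to an "any of these applies" check. A minimal sketch in Python, where the `event` record and its field names are hypothetical illustrations, not from any standard reporting schema:

```python
def is_serious_aefi(event):
    """Return True if any of the listed seriousness criteria apply.

    `event` is a dict of booleans; field names are illustrative only.
    """
    criteria = (
        "death",                    # results in death
        "life_threatening",         # is life-threatening
        "hospitalization",          # requires or prolongs hospitalization
        "persistent_disability",    # persistent/significant disability
        "congenital_anomaly",       # congenital anomaly / birth defect
        "intervention_required",    # intervention to prevent permanent damage
    )
    return any(event.get(c, False) for c in criteria)

print(is_serious_aefi({"hospitalization": True}))  # → True
print(is_serious_aefi({"local_swelling": True}))   # → False (minor reaction)
```

Any single criterion suffices; an event with only minor, self-limiting features is not classified as serious.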


Severe vaccine reactions, onset interval, and rates associated with selected childhood vaccines:

Vaccine | Reaction | Onset interval | Frequency per doses given
BCG | Fatal dissemination of BCG infection | 1–12 months | 0.19–1.56 per 1,000,000
OPV | Vaccine-associated paralytic poliomyelitis | 4–30 days | 2–4 per 1,000,000
DTwP | Prolonged crying and seizures | 0–24 hours | <1/100
DTwP | HHE (hypotonic-hyporesponsive episode) | 0–24 hours | <1/1,000–2/1,000
Measles | Febrile seizures | 6–12 days | 1/3,000
Measles | Thrombocytopenia | 15–35 days | 1/30,000
Measles | Anaphylaxis | 1 hour | 1/100,000




Vaccine quality defect-related reaction:

An AEFI that is caused or precipitated by a vaccine that is due to one or more quality defects of the vaccine product including its administration device as provided by the manufacturer.

Example: Failure by the manufacturer to completely inactivate polio virus in inactivated polio vaccine (IPV) leading to cases of paralytic polio.


Immunization error-related reaction:

An AEFI that is caused by inappropriate vaccine handling, storage, prescribing, preparation or administration. Such reactions are by their nature preventable and often constitute the greatest proportion of AEFIs. Example: transmission of infection via a contaminated multi-dose vial. Because they are preventable, they detract from the overall benefit of the immunization program, and the identification and correction of these incorrect immunization practices are of great importance.

Examples of immunization errors and possible AEFIs:

Immunization error → Possible AEFI

Non-sterile injection (reuse of a disposable syringe or needle, leading to contamination of the vial, especially multi-dose vials; improperly sterilized syringe or needle; contaminated vaccine or diluent):
  • Local injection-site reactions (e.g. abscess, swelling, cellulitis, induration)
  • Sepsis
  • Toxic shock syndrome
  • Death

Reconstitution error (inadequate shaking of vaccine; reconstitution with incorrect diluent; drug substituted for vaccine or diluent; reuse of reconstituted vaccine):
  • Local abscess
  • Effect of the substituted drug
  • Toxic shock syndrome
  • Death

Injection at incorrect site (BCG given subcutaneously; injection into the buttocks):
  • Local reaction or abscess
  • Sciatic nerve injury: vaccination of infants and children in the buttock is not recommended because of concern about potential injury to the sciatic nerve, which is well documented after injection into the buttock.

Vaccine transported/stored incorrectly:
  • Increased local reaction from frozen vaccine


Immunization anxiety-related reaction:

An AEFI arising from anxiety about the immunization. Individuals can react in anticipation of, and as a result of, an injection of any kind. These reactions are not related to the vaccine but to fear of the injection, and may include syncope, vomiting, hyperventilation or, rarely, even convulsions.


Coincidental event:

Coincidental events occur after a vaccination has been given but are not caused by the vaccine or its administration.

Vaccinations are normally scheduled in infancy and early childhood, when illnesses are common and congenital or early neurological conditions become apparent. Coincidental events are inevitable when vaccinating children in these age groups, especially during a mass campaign. Applying the normal incidence of disease and death in these age groups along with the coverage and timing of immunizations allows estimation of the expected numbers of coincidental events after immunization.


Knowing the background rate of events that coincidentally follow vaccination is key when responding to AEFI reports:

Expected coincidental deaths following DTP vaccination in selected countries:

Country | IMR per 1,000 live births | Births per year (N) | Deaths in month after | Deaths in week after | Deaths in day after immunization
(month = (IMR×N/12)×nv×ppv; week = (IMR×N/52)×nv×ppv; day = (IMR×N/365)×nv×ppv)
Australia | 5 | 267,000 | 300 | 69 | 10
Cambodia | 69 | 361,000 | 5,605 | 1,293 | 185
China | 18 | 18,134,000 | 73,443 | 16,948 | 2,421
Japan | 3 | 1,034,000 | 698 | 161 | 23
Laos | 48 | 170,000 | 1,836 | 424 | 61
New Zealand | 5 | 58,000 | 65 | 15 | 2
Philippines | 26 | 2,236,000 | 13,081 | 3,019 | 431

Note: Assumes uniform distribution of deaths and that children who are near death will still be immunized.

nv = number of immunization doses: assumed here to be a three-dose schedule; 3.

ppv= proportion of population vaccinated: assumed here to be 90% for each dose; 0.9.


Based on the data in the table above, about 2,421 infant deaths are expected to occur coincidentally (i.e. not linked to the vaccine) in China on the day after immunization with DTP. In other words, only if substantially more infants than this died on the day after DTP would the excess deaths suggest the vaccine as a possible cause worth investigating.
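The table's arithmetic can be reproduced directly from the stated formula. A minimal sketch (the function name and structure are illustrative; small differences from the table's figures come down to rounding conventions):

```python
def expected_coincidental_deaths(imr_per_1000, births_per_year,
                                 window_fraction, n_doses=3, coverage=0.9):
    """Expected infant deaths that coincidentally fall in a post-vaccination
    window, assuming deaths are uniformly distributed across the year and
    that children near death are still immunized.

    window_fraction: 1/12 for a month, 1/52 for a week, 1/365 for a day.
    """
    annual_infant_deaths = imr_per_1000 / 1000 * births_per_year
    return annual_infant_deaths * window_fraction * n_doses * coverage

# China: IMR 18, ~18,134,000 births/year, three DTP doses at 90% coverage
print(round(expected_coincidental_deaths(18, 18_134_000, 1 / 12)))
# → 73443, matching the table's month-after figure
print(round(expected_coincidental_deaths(18, 18_134_000, 1 / 365)))
# → 2415 (the table reports ~2,421; the small gap is rounding convention)
```

The point of the calculation is a baseline: any AEFI death count below this expected number is indistinguishable from background infant mortality.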


One of the main challenges in surveillance of AEFIs is to differentiate coincidental events from events that are caused by a reaction to a vaccine or its components. Observing the rate of an adverse event in the vaccinated population and comparing it with the rate of this event among the unvaccinated population can help to distinguish genuine vaccine reactions. The following graphic shows how comparing the background rate with the observed rate of an event can help to determine the vaccine reaction rate (i.e. the rate of events that are actually caused by the vaccine).


Example: fever following vaccination. In the graph above, 5 children per 1,000 vaccinated got fever after vaccination, but only 2 per 1,000 were due to the vaccine itself; the remaining 3 per 1,000 reflect the background fever rate.


Terminology | How is this measured | Example
Background rate | Determined in a population prior to the introduction of a new vaccine, or simultaneously in non-vaccinated people. | If we measured the temperatures of a population of 1,000 unvaccinated children during one week, some children would present a fever (defined as >38°C) during the observation period (e.g. from infections): say, a rate of 3 cases of fever per 1,000 children per week.
Observed (reported) rate | Measured in pre-licensure clinical trials or post-licensure studies. | If we now vaccinate all 1,000 children and measure their temperatures daily, there will be a greater rate of fever: the rate may increase to 5/1,000 children per week, with the increase concentrated in the 72 hours that follow vaccination.
Vaccine reaction rate (attributable rate) | Randomised placebo-controlled clinical trials; post-licensure studies; passive surveillance. | The vaccine-attributable rate of fever will be 2/1,000 vaccinated children (the observed rate minus the background rate).


Imagine that rumours begin to circulate about a vaccine when cases of convulsions following immunization occur among vaccinated infants. The background rate of convulsions in this population is 1 per 1,000 infants. The observed rate in vaccinated infants is 1.2 per 1,000. The vaccine-attributable rate derived from these figures is 2 additional cases in every 10,000 vaccinations, compared with the background rate.
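The arithmetic in both examples is the same simple subtraction; a minimal sketch (function name is illustrative):

```python
# Vaccine-attributable rate = observed rate minus background rate.
def attributable_rate(observed_per_1000, background_per_1000):
    return observed_per_1000 - background_per_1000

# Fever example: 5/1,000 observed vs 3/1,000 background.
print(attributable_rate(5, 3))  # 2 per 1,000 vaccinated children

# Convulsion rumour example: 1.2/1,000 observed vs 1.0/1,000 background,
# i.e. 0.2 per 1,000, or 2 additional cases per 10,000 vaccinations.
print(round(attributable_rate(1.2, 1.0) * 10, 1))  # 2.0 per 10,000
```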


Comparing observed with “expected” rates of adverse events:

If the background rate of a particular adverse event is not known in a community (as is often the case), you will need to compare the observed rate in your population with the ‘expected rate’ published by the vaccine regulatory authorities. For example, the information from WHO shows the expected rates of AEFIs following some childhood vaccines:

Vaccine Estimated rate of severe reactions
BCG 1 in 1,000 to 1 in 50,000 doses
OPV (oral polio vaccine) 1 in 2–3 million doses (or 1 in 750,000 doses for the first dose)
Measles 1 in 1 million doses
DTP 1 in 750,000 doses


Other factors to consider when comparing rates of AEFIs: confounding variables:

Keep in mind the other confounding factors that may influence the comparison of rates of adverse events. A confounding variable (or factor) is a third variable that distorts the association being studied between two other variables because it is strongly related to both of them. A confounding variable can adversely affect the apparent relation between the independent variable (cause) and the dependent variable (outcome/effect). This may lead the researcher to analyze the results incorrectly: the results may show a false correlation between the dependent and independent variables, leading to an incorrect rejection of the null hypothesis. Here are some factors to consider when comparing one observed AEFI rate with another:


Vaccine composition:

Although a vaccine may have the same antigens, different manufacturers may produce vaccines (or ‘lots’ of the same vaccine) that differ substantially in their composition, including the presence of an adjuvant or other components. These variations result in vaccines with different reactogenicity (the ability to cause vaccine reactions), which in turn affects the comparison of their vaccine-attributable rates.


Age of recipients:

The same vaccine given to different age groups may result in different vaccine-attributable rates. For example, MMR vaccine given to infants may cause febrile convulsions; this symptom does not, however, occur in adolescents who are given the same vaccine.

Vaccine doses:

The same vaccine given as a ‘primary dose’ may have a different reactogenicity profile than when it is given as a ‘booster dose’. For example, the DTaP vaccine given as a primary dose is less likely to result in extensive limb swelling when compared with this same vaccine given as a booster dose.

Case definitions:

Adverse events may be defined differently in research studies that do not adhere to the same case definition. Not using standardized case definitions may consequently affect the estimation of the AEFI rate.

Surveillance methods:

The way that surveillance data is collected may alter the rate. For example, surveillance data may be collected actively or passively, using pre- or post-licensure clinical trials, with or without randomization and placebo controls.


Did the vaccine indeed cause the AEFI?

The figure below shows five factors to be considered when establishing causality:


1. Consistency:

The association of a purported AEFI with the administration of vaccine should be consistent. The findings should be replicable in different localities, by different investigators not unduly influencing one another, and by different methods of investigation, all leading to the same conclusion.

2. Temporal relation:

There should be a temporal relationship between the vaccine and the adverse event: administration of the vaccine should precede the earliest manifestation of the event.

3. Biological plausibility:

The association should be coherent, plausible and explicable according to known facts in the natural history and biology of the disease.

4. Specificity:

The association should be distinctive. The adverse event should be linked specifically or uniquely with the vaccine concerned rather than occurring frequently, spontaneously or commonly in association with other external stimuli or conditions.

5. Strength of association:

The association between AEFI and the vaccine should be strong in terms of magnitude and also in dose-response relationship of the vaccine with the adverse event.




Anaphylactic Hypersensitivity to Egg and Egg-Related Antigens:

Egg allergy is one of the most common food allergies of childhood, with a prevalence of 1% to 3% in children under 3 years of age. It is often associated with eczema in infants and asthma in young children. As most children outgrow their egg allergy, the prevalence in adulthood is much lower and is estimated at 0.1%. The most common egg allergy is to egg white. Cross-sensitivity with egg yolk and chicken protein has been described.  Vaccines that contain small quantities of egg protein can cause hypersensitivity reactions in some people with allergies to eggs. There are several vaccines manufactured by processes involving hens’ eggs or their derivatives, such as chick cell cultures. This manufacturing process may result in the following vaccines containing trace amounts of residual egg and chicken protein:

•measles-mumps-rubella (MMR) vaccines

•measles-mumps-rubella-varicella (MMRV) vaccine

•influenza vaccines

•tick-borne encephalitis (TBE) vaccine

•RabAvert®rabies vaccine

•yellow fever (YF) vaccine

The frequency of hypersensitivity reactions following receipt of these vaccines varies considerably with the amount of residual egg and chicken protein in the vaccine. Anaphylaxis after vaccination is rare. It may occur in people with anaphylactic hypersensitivity to eggs and in those with no history of egg allergy, due to other components in the vaccine. Because of this lack of predictability, immunization should always be performed by personnel with the capability and facilities to manage anaphylaxis post-vaccination. Individuals should be asked about allergies to egg or chicken prior to vaccination with influenza, TBE, YF, or RabAvert® rabies vaccines. Prior egg ingestion is not a prerequisite for immunization with an egg protein-containing vaccine. It should be noted that any vaccine is contraindicated in people who have had an anaphylactic reaction to a previous dose of that vaccine. Referral to an allergy specialist is recommended. Atopic diseases are not a contraindication to immunization with egg protein-containing vaccines.


Can people with severe egg allergies still get an annual influenza vaccination?

The new vaccine, recombinant hemagglutinin influenza vaccine (RIV), is not made using eggs. This vaccine is safe for patients with egg allergy. Nasal-spray flu vaccines appear to be safe for children age 2 or older who have egg allergies or asthma, according to English researchers.


Vaccine-derived polioviruses (VDPV):

Vaccine-derived polioviruses (VDPVs) are rare strains of poliovirus that have genetically mutated from the strain contained in the oral polio vaccine. The oral polio vaccine contains a live, attenuated (weakened) vaccine-virus. When a child is vaccinated, the weakened vaccine-virus replicates in the intestine and enters the bloodstream, triggering a protective immune response in the child. As with wild poliovirus, the child excretes the vaccine-virus for a period of six to eight weeks. Importantly, some of the excreted vaccine-virus may no longer be the same as the original vaccine-virus, because it has genetically changed during replication. This is called a vaccine-derived poliovirus. Very rarely, vaccine-derived poliovirus can cause paralysis. Vaccine-associated paralytic poliomyelitis (VAPP) occurs in an estimated 1 in 2.7 million children receiving their first dose of oral polio vaccine. All cases of acute flaccid paralysis (AFP) among children under fifteen years of age are reported and tested for wild poliovirus or vaccine-derived polioviruses within 48 hours of onset. In 2009, the Global Polio Laboratory Network started using a new method to routinely screen for vaccine-derived polioviruses. The method is based on real-time reverse transcription-polymerase chain reaction (rRT-PCR), which targets nucleotide substitutions that occur early in the emergence of the virus. Circulating vaccine-derived polioviruses must be managed in the same way as wild poliovirus outbreaks. The solution is the same for all polio outbreaks: immunize every child several times with the oral vaccine to stop polio transmission, regardless of whether the virus is wild or vaccine-derived. Vaccine-derived polioviruses appear to be less transmissible than wild poliovirus, and outbreaks are usually self-limiting or rapidly stopped with 2–3 rounds of high-quality supplementary immunization activities.
Once wild poliovirus transmission has been stopped globally, the vaccine-viruses will be the only source of live polioviruses in the community and could potentially lead to the re-emergence of polio. Use of the oral polio vaccine in routine immunization programs will therefore be phased out to eliminate the rare risks posed by vaccine-derived polioviruses.


Eczema Vaccinatum:

Eczema vaccinatum is a serious complication that occurs when people with eczema or atopic dermatitis receive smallpox (vaccinia) vaccination. The lesions spread to skin that is currently affected or has recently been affected by eczema. This complication can occur even if the eczema or atopic dermatitis is not active at the time, and it requires immediate medical attention. Prior to 1960, eczema vaccinatum occurred in 10 to 39 people per 1 million vaccinated. Vaccinia Immune Globulin (VIG) is considered a useful treatment for eczema vaccinatum.


Bell palsy following intranasal vaccination:

Results from a case–control study and a case-series analysis indicate a significantly increased risk of Bell palsy developing following intranasal immunization with a new vaccine. This inactivated influenza vaccine, composed of influenza antigens in a virosomal formulation with E. coli derived LT adjuvant, was licensed in Switzerland in October 2000. Following spontaneous reports of Bell palsy, the company decided not to market the vaccine during the following season. In general, the etiology and pathogenesis of Bell palsy remain inadequately understood. The greater risk of Bell palsy following immunization with this vaccine may be due to specific vaccine components such as LT toxin, influenza antigens or virosomes, or simply to use of the intranasal administration route. It is thus possible that such complications of vaccine administration may also apply to other nasal vaccines. GACVS therefore recommends that any novel vaccine for nasal administration should be tested on a sufficiently large number of subjects before licensing and submitted to active post-marketing surveillance studies. Since the average time to onset of Bell palsy following intranasal immunization with this new vaccine was as much as 60–90 days, GACVS recommends that the follow-up period in the context of clinical trials should be routinely extended to 3 months following administration of a new intranasal vaccine.


Vaccine and mad cow disease:

Because vaccines are biological products, their manufacture sometimes requires the use of animal cells. This process is strictly controlled so that it does not pose a risk to people. No brain cells are used in manufacturing vaccines. During the manufacturing process, the vaccines are purified and all animal cells are removed, and each batch of vaccine is tested to ensure that it is free from infectious agents. For some vaccines, material derived from cows (for example, gelatin and lactose) has been used in the manufacturing process, and this has raised the question of whether vaccines can transmit “mad cow disease” to humans. Scientists in several countries have studied this risk and estimated that, in theory, there could be a risk of one person in 40 billion being exposed to the disease through a vaccine. Even though the risk is extremely small, vaccine manufacturers are working to find alternatives to these components.


OPV AIDS hypothesis:

The oral polio vaccine (OPV) AIDS hypothesis suggests that the AIDS pandemic originated from live polio vaccines prepared in rhesus macaque tissue cultures and then administered to up to one million Africans between 1957 and 1960 in experimental mass vaccination campaigns. Data analyses in molecular biology and phylogenetic studies contradict the OPV AIDS hypothesis; consequently, scientific consensus regards the hypothesis as disproven. The journal Nature has described the hypothesis as “refuted”.



Vaccine and autism:

Autism Spectrum Disorder (ASD) is really a collection of several disorders that have three abnormal areas in common: social skills, communication skills, and repetitive or obsessive traits. In the 1980s, one in 10,000 kids was diagnosed with autism. Today, one in 150 American 8-year-olds has some form of autism, and boys outnumber girls four to one. The United States is not the only country seeing this trend; autism is increasingly diagnosed worldwide. For starters, is it really an epidemic? Or are more people simply being diagnosed? Many children who were diagnosed with mental retardation 30 years ago are children who are diagnosed with classic autism today. And mildly disabled ASD kids today are children who never would have had a diagnosis 30 years ago. Those verbal, but socially awkward, children account for the majority of new ASD cases.


Although child vaccination rates remain high, some parental concern persists that vaccines might cause autism. Three specific hypotheses have been proposed: (1) the combination measles-mumps-rubella vaccine causes autism by damaging the intestinal lining, which allows the entrance of encephalopathic proteins; (2) thimerosal, an ethylmercury-containing preservative in some vaccines, is toxic to the central nervous system; and (3) the simultaneous administration of multiple vaccines overwhelms or weakens the immune system. A worldwide increase in the rate of autism diagnoses—likely driven by broadened diagnostic criteria and increased awareness—has fueled concerns that an environmental exposure like vaccines might cause autism. Theories for this putative association have centered on the measles-mumps-rubella (MMR) vaccine, thimerosal, and the large number of vaccines currently administered. However, both epidemiological and biological studies fail to support these claims.


The MMR vaccine controversy started with the 1998 publication of a fraudulent research paper in the medical journal The Lancet that lent support to the later discredited claim that colitis and autism spectrum disorders are linked to the combined measles, mumps and rubella (MMR) vaccine. The media have been criticized for their naïve reporting and for lending undue credibility to the architect of the fraud, Andrew Wakefield. Wakefield, the author of the original research paper, had multiple undeclared conflicts of interest, had manipulated evidence, and had broken other ethical codes. The Lancet paper was partially retracted in 2004 and fully retracted in 2010, when The Lancet’s editor-in-chief Richard Horton described it as “utterly false” and said that the journal had been “deceived.” Wakefield was found guilty by the General Medical Council of serious professional misconduct in May 2010 and was struck off the Medical Register, meaning he could no longer practice as a doctor in the UK. In 2011, the investigative journalist Brian Deer provided further information on Wakefield’s improper research practices to the British medical journal BMJ, which in a signed editorial described the original paper as fraudulent. The BMJ editors concluded that Wakefield deliberately faked the study. “Is it possible that he was wrong but not dishonest: that he was so incompetent that he was unable to fairly describe the project or to report even one of the 12 children’s cases correctly?” they ask. “No. A great deal of thought and effort must have gone into drafting the paper to achieve the results he wanted.” The scientific consensus is that no evidence links the MMR vaccine to the development of autism, and that this vaccine’s benefits greatly outweigh its risks. Following the initial claims in 1998, multiple large epidemiological studies were undertaken.
Reviews of the evidence by the Centers for Disease Control and Prevention, the American Academy of Pediatrics, the Institute of Medicine of the US National Academy of Sciences, the UK National Health Service, and the Cochrane Library all found no link between the MMR vaccine and autism. While the Cochrane review expressed a need for improved design and reporting of safety outcomes in MMR vaccine studies, it concluded that the evidence of the safety and effectiveness of MMR in the prevention of diseases that still carry a heavy burden of morbidity and mortality justifies its global use, and that the lack of confidence in the vaccine has damaged public health.


The medical community has relied mainly on epidemiology – the statistical study of large populations. These studies have overwhelmingly found no link between autism and MMR. Opponents claim that some of these studies might have flaws, but there are over a dozen epidemiological studies in different countries that use different techniques that have reached the same conclusion. At the very least, these studies show that the large increases in rates of autism that have been reported in many countries around the world cannot be due to MMR. However, the opponents of the vaccine point out that epidemiology can’t rule out an increased risk to a small number of children – a vulnerable subset.  But perhaps the most crucial question is whether the measles virus really is persisting in the bodies of autistic children; and now that question too has been investigated. A new, unpublished study has examined blood samples from a group of 100 autistic children and 200 children without autism. These samples have been examined using the most sensitive methods available. They found 99% of the samples contained no trace of the measles virus, and the samples that did contain the virus were just as likely to be from non-autistic children. The study therefore found no evidence of any link between MMR and autism.


Studies that fail to support an association between measles-mumps-rubella vaccine and autism.


A large and growing body of scientific evidence has shown no connection between vaccines and autism. Parents can be confident that the medical and public health communities – including the prestigious Institute of Medicine (IOM), American Academy of Pediatrics (AAP), American Medical Association (AMA), World Health Organization (WHO), National Institutes of Health (NIH), Food and Drug Administration (FDA) and Centers for Disease Control and Prevention (CDC) – strongly support the safety and benefits of immunizations.


New evidence clears measles vaccine of autism link: 2015 study:

Receipt of the measles, mumps, and rubella (MMR) vaccine is not associated with increased autism risk even among high-risk children, a JAMA study finds. Researchers retrospectively studied over 95,000 children who were continuously enrolled in a large U.S. health plan from birth until at least age 5 years who also had older siblings enrolled in the health plan. Some 2% of the children had an older sibling with autism spectrum disorder (ASD). Overall, 1% of the children were diagnosed with ASD during follow-up. Children who received the MMR vaccine were no more likely to be diagnosed with ASD than unvaccinated children — a finding that held true even among children whose older siblings had autism.  An editorialist notes: “Taken together, some dozen studies have now shown that the age of onset of ASD does not differ between vaccinated and unvaccinated children, the severity or course of ASD does not differ between vaccinated and unvaccinated children, and now the risk of ASD recurrence in families does not differ between vaccinated and unvaccinated children.”


If the MMR vaccine doesn’t cause autism, why is the diagnosis made around the same time as the vaccination?

One of the criteria used to make a diagnosis of autism is a language delay. Because children do not have significant expressive language under a year of age, doctors have to wait until 15 to 18 months to confirm a language delay and make the diagnosis. That’s about the same time as the MMR vaccination, which leads some parents to wonder about autism and vaccination.


Does this have anything to do with thimerosal or mercury in vaccines?

No. Thimerosal is a mercury-based preservative. It cannot be used in live-virus vaccines such as the MMR. There has never been thimerosal in MMR vaccines. Research shows that thimerosal does not cause ASD. In fact, a 2004 scientific review by the IOM concluded that “the evidence favors rejection of a causal relationship between thimerosal–containing vaccines and autism.” Since 2003, there have been nine CDC-funded or conducted studies that have found no link between thimerosal-containing vaccines and ASD, as well as no link between the measles, mumps, and rubella (MMR) vaccine and ASD in children. Between 1999 and 2001, thimerosal was removed or reduced to trace amounts in all childhood vaccines except for some flu vaccines. This was done as part of a broader national effort to reduce all types of mercury exposure in children before studies were conducted that determined that thimerosal was not harmful.  It was done as a precaution. Currently, the only childhood vaccines that contain thimerosal are flu vaccines packaged in multidose vials. Thimerosal-free alternatives are also available for flu vaccine.


Italian court rules mercury and aluminum in vaccines cause autism:

An Italian court in Milan awarded compensation to the family of a young boy who, the court found, developed autism from a six-in-one hexavalent vaccine manufactured by British drug giant GlaxoSmithKline. On September 24, 2014, Italy’s version of the National Vaccine Injury Compensation Program agreed that GSK’s “INFANRIX Hexa” vaccine for polio, diphtheria, tetanus, hepatitis B, pertussis and Haemophilus influenzae type b induced permanent autism and brain damage in the previously healthy child, whose name has been kept private for safety. The family alleged that the vaccine, which contains multiple antigens, thimerosal (mercury), multiple forms of aluminum, formaldehyde, recombinant (genetically modified) viral components and various chemical preservatives, caused the young boy to regress into autism shortly after he received all three doses, and they petitioned the case before Italy’s Ministry of Health. When the Ministry rejected the petition, the family filed a lawsuit. After listening to expert medical testimony, the Italian court concluded that the boy suffered permanent harm as a result of the vaccine, and particularly its mercury and aluminum components. Also presented as evidence was a 1,271-page confidential GSK report that, the claimants argued, showed the company knew from human clinical trials that INFANRIX Hexa could cause autism but released the vaccine anyway; five reported cases of autism following the vaccine are listed on page 626 of the report. At the conclusion of the report, GSK acknowledges that INFANRIX Hexa has been associated with a range of serious adverse events but maintains that its risk-benefit profile “continues to be favourable.”


An anecdotal report is not the same as scientific evidence, and legal culpability is different from medical culpability. Studies that fail to support an association between thimerosal in vaccines and autism are listed in the table below:


The table below shows the differences between autism and mercury poisoning:


Too many vaccines and autism:

The notion that children might be receiving too many vaccines too soon and that these vaccines either overwhelm an immature immune system or generate a pathologic, autism-inducing autoimmune response is flawed for several reasons:

1. Vaccines do not overwhelm the immune system. Although the infant immune system is relatively naive, it is immediately capable of generating a vast array of protective responses; even conservative estimates predict the capacity to respond to thousands of vaccines simultaneously. Consistent with this theoretical exercise, combinations of vaccines induce immune responses comparable to those given individually. Also, although the number of recommended childhood vaccines has increased during the past 30 years, with advances in protein chemistry and recombinant DNA technology, the immunologic load has actually decreased. The 14 vaccines given today contain <200 bacterial and viral proteins or polysaccharides, compared with >3000 of these immunological components in the 7 vaccines administered in 1980. Further, vaccines represent a minute fraction of what a child’s immune system routinely navigates; the average child is infected with 4–6 viruses per year. The immune response elicited from the vast antigen exposure of unattenuated viral replication supersedes that of even multiple, simultaneous vaccines.

2. Multiple vaccinations do not weaken the immune system. Vaccinated and unvaccinated children do not differ in their susceptibility to infections not prevented by vaccines. In other words, vaccination does not suppress the immune system in a clinically relevant manner. However, infections with some vaccine-preventable diseases predispose children to severe, invasive infections with other pathogens. Therefore, the available data suggest that vaccines do not weaken the immune system.

3. Autism is not an immune-mediated disease. Unlike autoimmune diseases such as multiple sclerosis, there is no evidence of immune activation or inflammatory lesions in the CNS of people with autism. In fact, current data suggest that genetic variation in neuronal circuitry that affects synaptic development might in part account for autistic behavior. Thus, speculation that an exaggerated or inappropriate immune response to vaccination precipitates autism is at variance with current scientific data that address the pathogenesis of autism.

4. No studies have compared the incidence of autism in vaccinated, unvaccinated, or alternatively vaccinated children (i.e., schedules that spread out vaccines, avoid combination vaccines, or include only select vaccines). These studies would be difficult to perform because of the likely differences among these 3 groups in health care seeking behavior and the ethics of experimentally studying children who have not received vaccines.


From science to law:

Do childhood vaccines cause autism? This scientific question has now become a legal one — perhaps inevitable in our society. Some families with autistic children are pursuing legal channels in an effort to prove that vaccines are responsible for their children’s condition. Most of them allege that the cause is the mercury-containing preservative thimerosal, which was formerly used in many vaccines in the United States and elsewhere. Others argue that the culprit is the measles, mumps, and rubella (MMR) vaccine itself or perhaps the vaccine in combination with thimerosal. Although most experts have concluded that there is no proof of a causal tie between autism and thimerosal or the MMR vaccine, some doctors and scientists, some groups representing families with autistic children, and many parents fervently believe there is a connection. Claimants not only want to prove that the federal government, the Institute of Medicine, vaccine makers, and mainstream science are wrong; they also want money. A child with autism is likely to require extraordinarily expensive services — and to have very limited employment prospects in adulthood. Besides, many parents of autistic children may feel better psychologically if they can blame profit-seeking drug companies for their children’s problems. More than 5000 such families have filed claims with the federal Vaccine Injury Compensation Program (VICP).


Vaccine and SIDS:

What about reports that vaccines are linked to chronic diseases or problems such as sudden infant death syndrome (SIDS)?

Vaccines do not cause SIDS. Fortunately, we have learned that other factors, such as sleeping position and second-hand smoke, are linked with SIDS, and successful public education campaigns about these factors have helped to reduce the rate of SIDS. Vaccines are sometimes blamed for conditions that are poorly understood. A child’s first year of life is a time of tremendous growth and development, and it is a time when serious problems may start to appear. It is also the time when most vaccines are given, but this does not mean that vaccines cause these problems. Many of our vaccines have been in use for decades with no evidence of long-term adverse effects. Still, research to ensure the safety of vaccines continues. Anti-vaccine books and web sites claim that vaccines cause autism, seizure disorders, multiple sclerosis (MS) or Crohn’s disease, among other health problems. These connections have never held up to scientific scrutiny. Recent research using the best scientific methods and reviews of studies from around the world provide very strong evidence that

•MMR vaccine does not cause autism or inflammatory bowel disease;

•Hepatitis B vaccine does not cause multiple sclerosis or relapses of pre-existing MS;

•Pertussis vaccine does not cause brain damage;

•Childhood vaccines do not increase the risk of asthma.



Vaccine Safety Monitoring and Adverse Event Reporting:

Historical Perspective:

Fortunately, the scientific community takes even the slightest suggestion that a vaccine causes harm seriously. Whenever a vaccine is suspected of causing a side effect or of harming recipients, the scientific community goes into full research mode. When a relationship between a side effect and a vaccine is found, the scientific community is alerted while the vaccine’s safety is reviewed, and the vaccine may be temporarily or permanently suspended from use. For example, RotaShield® was a rotavirus vaccine that was licensed by the Food and Drug Administration (FDA) in August 1998 and recommended for use in the United States by the Advisory Committee on Immunization Practices (ACIP). By July 1999, with almost 1 million children having been immunized, an increase was noticed in the number of children who developed a serious bowel condition called “intussusception”. The common thread was the RotaShield® vaccine, and so the Centers for Disease Control and Prevention (CDC) recommended that use of the vaccine be suspended. Scientific investigation estimated that the risk of intussusception attributable to the vaccine was about one per 10,000 (or less) among vaccinated infants, significantly higher than among children who had not received the vaccine. Action was quickly taken, and the vaccine was voluntarily withdrawn from the market by the manufacturer in October 1999. Further investigation showed that those who received the RotaShield® vaccine in 1998 and 1999 were not at continuing risk of developing intussusception. This shows that the surveillance systems put in place by both the government and the scientific community work, and that the continuous monitoring of vaccines and the diligence of the scientific community provide us with the safest vaccines possible today. Additionally, rigorous questioning of the safety of vaccines leads researchers to find new ways to develop and manufacture vaccines.


Vaccine Adverse Event Reporting System (VAERS) in the U.S.:

Vaccines are developed with the highest standards of safety. However, as with any medical procedure, vaccination has some risks. Individuals react differently to vaccines, and there is no way to predict how individuals will react to a particular vaccine. The National Childhood Vaccine Injury Act (NCVIA) requires health care providers to report adverse events (possible side effects) that occur following vaccination, so the Food and Drug Administration (FDA) and Centers for Disease Control and Prevention (CDC) established the Vaccine Adverse Events Reporting System (VAERS) in 1990. VAERS is a national passive reporting system that accepts reports from the public on adverse events associated with vaccines licensed in the United States.

VAERS data are monitored to:

•Detect new, unusual, or rare vaccine adverse events

•Monitor increases in known adverse events

•Identify potential patient risk factors for particular types of adverse events

•Identify vaccine lots with increased numbers or types of reported adverse events

•Assess the safety of newly licensed vaccines

Approximately 30,000 VAERS reports are filed annually, with 10-15% classified as serious (resulting in permanent disability, hospitalization, life-threatening illnesses or death). Anyone can file a VAERS report, including health care providers, manufacturers, and vaccine recipients or their parents or guardians. The VAERS form requests the following information:

•The type of vaccine received

•The timing of the vaccination

•The onset of the adverse event

•Current illnesses or medication

•Past history of adverse events following vaccination

•Demographic information about the recipient
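Taking the annual figures above at face value (approximately 30,000 reports per year, 10–15% classified as serious), a quick illustrative calculation of the implied number of serious reports:

```python
# Rough range of "serious" VAERS reports implied by the figures above
# (~30,000 reports/year, 10-15% classified as serious). Illustrative only;
# VAERS is a passive system, so these are report counts, not confirmed injuries.

annual_reports = 30_000
serious_low, serious_high = 0.10, 0.15

low = annual_reports * serious_low
high = annual_reports * serious_high
print(f"Serious reports per year: roughly {low:.0f} to {high:.0f}")
# → roughly 3000 to 4500
```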


While the VAERS provides useful information on vaccine safety, this passive reporting system has important limitations. One is that it only collects information about events following vaccination; it does not assess whether a given type of event occurs more often than expected after vaccination. A second is that event reporting is incomplete and is biased toward events that are believed to be more likely to be due to vaccination and that occur relatively soon after vaccination. To obtain more systematic information on adverse events occurring in both vaccinated and unvaccinated persons, the Vaccine Safety Datalink project was initiated in 1991. Directed by the CDC, this project includes eight managed-care organizations in the United States; member databases include information on immunizations, medical conditions, demographics, laboratory results, and medication prescriptions. The Department of Defense oversees a similar system monitoring the safety of immunizations among active-duty military personnel. In addition, postlicensure evaluations of vaccine safety may be conducted by the vaccine manufacturer. In fact, such evaluations are often required by the FDA as a condition of vaccine licensure.


Institute of Medicine (IOM) report:

The report was released in 2011 by the Institute of Medicine (IOM), which is part of the National Academy of Sciences.

Over a period of three years, the IOM reviewed over 1,000 studies on vaccines. Notably, it excluded studies funded by the pharmaceutical industry, although some of the studies it reviewed were independently funded by government agencies.

The review focused on eight vaccines:

•Hepatitis A and hepatitis B

•Measles, mumps, and rubella (MMR)

•Meningococcal

•Pneumococcal

•Diphtheria, tetanus, and acellular pertussis (DTaP or Tdap)

•Varicella zoster (chickenpox)

•HPV

•Influenza


The review examined whether these vaccines were linked to a range of serious health problems, including:

•Multiple sclerosis

•Lupus

•Encephalitis (brain inflammation)

•Rheumatoid arthritis

•Autism

•Encephalopathy, involving permanent brain damage


Perhaps the most important thing IOM did in this review is that they looked at two categories of science:

1. Epidemiological research (large studies comparing different groups of people against each other)

2. Bench science (research into the biological mechanisms at work within cells and molecules)

This is very important because many of the studies the CDC relies on as evidence that vaccines don’t cause problems are epidemiological. This report is significant because it looked at both kinds of science. Its most shocking conclusion is that for more than a hundred adverse health outcomes that have been reported after these eight vaccines were given, the IOM could not determine whether the vaccines did or did not cause them.

Individual Susceptibility was discussed as a Co-Factor:

The IOM report also discussed individual susceptibility: the fact that some people are, for biological reasons including genetic ones, more susceptible to having an adverse event after vaccination. According to the report, both epidemiologic and mechanistic research suggests that most individuals who experience an adverse reaction to vaccines have a preexisting susceptibility. However, the report also states that in most cases we don’t know what those individual susceptibilities are.

Potential predispositions suggested in the report include:

•Genetic variation


•Coinciding illness

•Environmental factors

Every physician who gives a vaccine should read this 600-page report. It is their responsibility to do so, because it is the latest report on the science of vaccination and on what is in the published literature.


Safety of Vaccines Used for Routine Immunization of US Children: A Systematic Review 2014:

Concerns about vaccine safety have led some parents to decline recommended vaccination of their children, leading to the resurgence of vaccine-preventable diseases. Reassurance of vaccine safety remains critical for population health. This study systematically reviewed the literature on the safety of routine vaccines recommended for children in the United States. Strength of evidence was high for the measles/mumps/rubella (MMR) vaccine and febrile seizures; the varicella vaccine was associated with complications in immunodeficient individuals. There is strong evidence that the MMR vaccine is not associated with autism. There is moderate evidence that rotavirus vaccines are associated with intussusception. Limitations of the study include that the majority of studies did not investigate or identify risk factors for adverse events following immunization (AEFIs), and that the severity of AEFIs was inconsistently reported. The authors found evidence that some vaccines are associated with serious AEFIs; however, these events are extremely rare and must be weighed against the protective benefits that vaccines provide.


Vaccine Regulation in the U.S.:

Because vaccines are given to healthy individuals, they undergo a more rigorous approval process than drugs, which are given to treat people who are already sick. Licensing of a vaccine typically takes 15 years and an average of $800 million of the manufacturer’s money. The Food and Drug Administration (FDA) ensures the safety, purity, potency and effectiveness of vaccines.

But it doesn’t stop there:

•Post-licensing monitoring is conducted – tracking any side-effects from the vaccine.

•Samples of every lot of medicine must be submitted to the FDA before it is sold. This ensures that each batch is as safe and effective as the last.

•Since 1990, the Vaccine Safety Datalink (VSD) has collected statistics from more than 7 million people in health plans who have received vaccines.

•In 1990, the Centers for Disease Control and Prevention (CDC) and the FDA established the Vaccine Adverse Event Reporting System (VAERS), which gathers information about any side effects patients have experienced. VAERS accepts any reported information without determining a cause-and-effect relationship.

•Clinical Immunization Safety Assessment (CISA) Centers were started in 2001. They conduct clinical research about vaccine adverse events (VAE) and the role of individual variation, provide clinicians with evidence-based counsel, and empower them to make informed immunization decisions.


How is a vaccine or a batch recalled?

Vaccine recalls or withdrawals are almost always voluntary by the manufacturer. Only in rare cases will the Food and Drug Administration (FDA) request a recall. But in every case, FDA’s role is to oversee a manufacturer’s strategy and assess the adequacy of the recall.

Why would a vaccine or batch of vaccine be withdrawn or recalled?

There have been only a few vaccine recalls or withdrawals, most due to concerns about the vaccine’s effectiveness, not its safety. When the strength of a vaccine lot has been reduced, those vaccines may not produce an immune response that is strong enough to protect against disease. Although those vaccines may not be effective, they are still safe. Vaccines are tested carefully and monitored continuously before and after they are licensed for use. If a vaccine lot is found to be unsafe, the FDA recalls it immediately.


The Vaccine Injury Compensation Program (VICP) in the U.S.:

This legislation was adopted by Congress in 1988 in response to a somewhat similar scare over the pertussis portion of the diphtheria–pertussis–tetanus (DPT) vaccine. Alerted to a possible link by British researchers, many observers feared that the vaccine was causing some children grave neurologic harm — claims that were later generally discredited. Yet the alarm was so great that droves of British families refused the pertussis vaccine, substantial numbers of children became ill with whooping cough, and some 70 children died. In the United States, several parents sued the manufacturers of DPT vaccines. Even though most public health officials believed that the claims of side effects were unfounded, some families won substantial awards from sympathetic juries who were convinced otherwise. As a result, most companies making the DPT vaccine ceased production, and the remaining major manufacturer threatened to do so. Health officials feared the loss of herd immunity, and Congress responded by creating the VICP. This program provides compensation to children who have serious adverse effects from any childhood vaccine. The compensation covers medical and related expenses, lost future income, and up to $250,000 for pain and suffering. The funding for paying successful claims regarding vaccines administered before 1988 came from the U.S. Treasury. For claims regarding later vaccinations, funding comes from a patient fee of 75 cents per vaccination. The VICP trust fund currently contains more than $2 billion. About 7000 claims have been filed for adverse effects other than autism, and so far about 2000 have resulted in compensation, in amounts averaging about $850,000. Approximately 700 claims remain unresolved, since the VICP frequently takes more than 2 years to process a petition. To win a VICP award, the claimant does not need to prove everything that is required to hold a vaccine maker liable in a product liability lawsuit. But a causal connection must be shown. 
If medical records show that a child had one of several listed adverse effects within a short period after vaccination, the VICP presumes that it was caused by the vaccine (although the government can seek to prove otherwise). An advisory committee helps to amend the list of adverse effects as the consensus view changes with the availability of new studies. If families claim that a vaccine caused an adverse effect that is not on the list, the burden of proof rests with them. Autism is not on the list for any vaccine, and the VICP has rejected about 300 such claims outright. In 2011, the US Supreme Court ruled that vaccines are “unavoidably unsafe” and that the federal Vaccine Injury Compensation Program (VICP) should be the sole remedy for all vaccine injury claims. Most claims are now filed by adults suffering vaccine injury after receiving a flu vaccine.
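A quick sanity check on the compensation figures quoted above (about 2,000 compensated claims averaging roughly $850,000 each), which can be set against the more than $2 billion currently in the trust fund. Illustrative arithmetic only:

```python
# Sanity check on the VICP figures in the text: ~2,000 compensated claims
# at an average award of about $850,000. Illustrative arithmetic only.

compensated_claims = 2_000
average_award = 850_000

total_paid = compensated_claims * average_award
print(f"Approximate total compensation paid: ${total_paid:,}")
# → $1,700,000,000 (about $1.7 billion)
```

This rough total is consistent with the text’s observation that the fund, fed by a 75-cent-per-vaccination fee, still holds more than $2 billion.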



Vaccine contraindications, concerns and exemptions:

Contraindications and precautions:

Before vaccination, all patients should be screened for contraindications and precautions. A contraindication is a condition that is believed to substantially increase the risk of a serious adverse reaction to vaccination; it is a situation in which a vaccine should not be used because the risk outweighs any potential therapeutic benefit. A vaccine should not be administered when a contraindication is documented. For example, a history of an anaphylactic reaction to a dose of vaccine or to a vaccine component is a contraindication for further doses. A precaution is a condition that may increase the risk of an adverse reaction following immunization or that may compromise the ability of the vaccine to produce immunity. In general, vaccines are deferred when a precaution is present. However, there may be circumstances when the benefits of giving the vaccine outweigh the potential harm, or when reduced vaccine immunogenicity may still result in significant benefit to a susceptible, immunocompromised host. In some cases, contraindications and precautions are temporary and may lead to mere deferral of vaccination until a later time. For example, moderate or severe febrile illnesses are generally considered transient precautions to vaccination and result in postponement of vaccine administration until the acute phase has resolved; thus the superimposition of adverse effects of vaccination on the underlying illness, and the mistaken attribution of a manifestation of the underlying illness to the vaccine, are avoided. It is important to recognize conditions that are not contraindications so as not to miss opportunities for vaccination. For example, in most cases, mild acute illness (with or without low-grade fever), a history of a mild to moderate local reaction to a previous dose of the vaccine, and breast-feeding are not contraindications to vaccination.


There are two types of contraindications (reasons not to give a vaccine): permanent and temporary.

•The following are permanent contraindications to vaccination:

1. Severe allergic reaction to a vaccine component (animal proteins [eggs], antibiotic, stabilizer, or preservative) or following a previous dose of the vaccine;

2. Encephalopathy within seven days of a pertussis vaccination (not from another identifiable cause).

•The following are precautions/temporary contraindications to vaccination:

1. Pregnancy: Although the risk of vaccination during pregnancy is mostly theoretical, caution is advised. Therefore, women who are known to be pregnant should not receive any of the live vaccines. Inactivated vaccines are considered generally safe during pregnancy and should be used when indicated.

2. Immunosuppression: People with active cancer, leukemia, or lymphoma (or people taking high doses of steroids) should not receive live vaccines but can receive inactivated vaccines.

◦Human immunodeficiency virus (HIV): Vaccination depends on the severity of the illness. In asymptomatic (without symptoms) individuals, many vaccines are considered safe. In general, the inactivated vaccines are safe for both symptomatic and asymptomatic individuals infected with HIV.

◦Moderate to severe illness: If someone is ill with more than a simple cold, earache, diarrhea, or other minor illness, vaccination should be postponed until the illness is over.


Vaccine Formulation Contraindications and Precautions
All vaccines Contraindication:

Severe allergic reacti