I am reminded of Abraham Lincoln’s remark: “The world has never had a good definition of the word liberty. We all declare for liberty, but in using the same word we do not mean the same thing.” Substitute ‘net neutrality’ for ‘liberty’, and that’s where we are today. The Internet has unleashed innovation, enabled growth, and inspired freedom more rapidly and extensively than any other technological advance in human history. Its independence is its power. Net neutrality means internet service providers (ISPs) should treat all data on the internet equally. ISPs have the structural capacity to determine the way in which information is transmitted over the internet and the speed at which it is delivered. And the present network operators, principally large telephone and cable companies, have an economic incentive to extend their control over the physical infrastructure of the internet into control over internet content. If they went about it in the wrong way, these companies could institute changes that limit the free flow of information over the internet in a number of troubling ways. Network operators could prioritize the transmission of some content, their own for example, over material produced by competitors. If this were allowed, web companies would lose revenues that they could otherwise devote to improvements in old products and innovations in new ones. Worse yet, the smaller content providers who can now capitalize on the two-way nature of the internet, whether online stores or forums for democratic discourse, might be unable to secure quality service online. An entrepreneur’s fledgling company should have the same chance to succeed as established corporations, and access to a high school student’s blog shouldn’t be unfairly slowed down to make way for advertisers with more money.
At the core of the principle of net neutrality is thus the idea that all content on the internet should be accessible in a fully equitable way, and that once an internet user has accessed that content, he should be able to engage with it in the same way he would engage with any other content on the internet. Allowing broadband carriers to control what people see and do online would fundamentally undermine the principles that have made the internet such a success. On the other hand, to be honest, there is no absolute neutrality. The world is neither neutral nor equal. Umpires in a game of cricket were once perceived to be biased, and so we now have neutral umpires from countries not playing in the match. Humans are subjective; they have their own positions, opinions and priorities. So net neutrality cannot be seen in isolation from the entire gamut of human behaviour; it must be approached by combining different views and opinions.
Internet terminology, abbreviations and synonyms:
Internet backbone:
The collection of cables and data centers that make up the core of the internet. It is operated not by a single operator but by many independent companies spread across the globe.
Internet Service Provider (ISP):
A company, such as Comcast, Verizon, Airtel or Tata Docomo, that plugs into the backbone and then provides internet connections to homes and businesses. An ISP is also known as a TSP (telecom service provider), telco, broadband carrier, network operator, internet access provider or platform operator. An ISP provides internet services to users via cable or wireless connections.
Access ISP = last mile ISP = eyeball ISP = ISP that provides internet access to user.
Content provider:
Companies such as Google, Facebook, and Netflix that provide the webpages, videos, and other content that moves across the internet. My website www.drrajivdesaimd.com is also a content provider. A content provider is anyone who has a website that delivers content to internet users. Content and service providers (CSPs) offer a wide range of applications and content to the mass of potential consumers.
Peering:
Where one internet operation connects directly to another so that they can trade traffic. This could be a connection between an ISP such as Comcast and an internet backbone provider such as Level 3. But it could also be a direct connection between an ISP and a content provider such as Google.
Content Delivery Network (CDN):
A network of computer servers set up inside an ISP that delivers popular photos, videos, and other content. These servers can deliver this content faster to home users because they’re closer to home users. Companies such as Akamai and Cloudflare run CDNs that anyone can use. But content providers such as Google and Netflix now run their own, private CDNs as well.
FCC (Federal Communications Commission of the U.S.) and TRAI (Telecom Regulatory Authority of India) are examples of regulators that regulate ISPs.
ISP = internet service provider
CSP = content & service provider
IU = internet user = user = consumer
NN = net neutrality
IP = internet protocol
TCP = transmission control protocol
VoIP = voice over internet protocol
Kbps = kilobits per second
Mbps = megabits per second = 1000 Kbps
Gbps = gigabits per second = 1000 Mbps
QoS = quality of service
CDN = content delivery network
P2P = peer-to-peer file sharing
SMS = short message service
MMS = multimedia message service
OTT = over the top services
BE = best effort
LAN = local area network
WAN = wide area network
WLAN = wireless local area network
DSL = digital subscriber line
Packets = datagrams
IPTV = internet protocol television
Who’s an Internet user?
A user is a pretty broad term to describe someone who uses the Internet, so let’s take a closer look at what “user” means. A user can be a person, small business, local city, state or national government agency, or a large organization, such as the U.S. Government, AT&T, Google, Microsoft, or Facebook. As you can see from this wide range of Internet users, an organization that makes laws, sets tariffs, owns portions of the cables that make up the Internet, or has the money to buy faster speeds and pay for larger amounts of data could obtain an advantage over a smaller organization or user. In addition to size, governments of certain countries restrict both who is allowed to use the Internet and what users can do when using the Internet. Some countries have tightly controlled Internets within their borders, and net neutrality is sometimes used more broadly to include the freedom to send and receive data without government restrictions.
The figure below depicts how the internet works today. To understand net neutrality, and how ISPs can interfere with it, keep this figure in mind:
A lot of emotionally loaded terms are used in the net neutrality debate. For example, ‘censorship’ or ‘black-holing’ (where route filtering, firewalling and port blocking would describe what is actually happening less emotively); ‘free-riding’ is often bandied about to describe the business of making money on the net (rather than overlay service provision); and ‘monopolistic tendencies’, instead of the natural inclination of an organisation that has sunk capital into a lot of equipment to want to make revenue from it!
Growth of internet:
As the flood of data across the internet continues to increase, there are those who say that sometime soon it is going to collapse under its own weight. Back in the early 90s, those of us who were online were just sending text e-mails of a few bytes each; traffic across the main US data lines was estimated at a few terabytes a month, steadily doubling every year. But the mid-90s saw the arrival of picture-rich websites and the invention of the MP3. Suddenly each net user wanted megabytes of pictures and music, and the monthly traffic figure exploded. For the next few years we saw more steady growth, with traffic again roughly doubling every year. But since 2003, we have seen another change in the way we use the net. The YouTube generation wants to stream video and download gigabytes of data in one go. In one day, YouTube sends data equivalent to 75 billion e-mails, so it’s clearly very different. The network is growing up and starting to get more capacity than it ever had, but it is a challenge. Video is real-time; it cannot tolerate mistakes or errors. E-mail can be a little slow: you wouldn’t notice if it took 11 seconds rather than 10, but you would notice that on a video.
Introduction to net neutrality:
The Internet owes much of its success to the fact that it is open and easily accessible, provided that the user has an Internet connection. Any content provider can put its content on the internet and test its ideas and their relative value in the marketplace. The required investment, such as buying a domain name, renting space on a server and implementing an application or software, has been relatively low. As a result, new services have been made available to consumers: browsing, mailing, peer-to-peer (P2P) file sharing, instant messaging, Internet telephony (Voice over Internet Protocol, ‘VoIP’), videoconferencing, online gaming, video streaming, etc. This development has taken place mainly on a commercial basis, without any regulatory intervention.
Net neutrality is the principle that all data on the internet is equal and must be treated equally, with no discrimination on the basis of content, user or design by governments and Internet Service Providers (ISPs). Net neutrality is the principle that all data on the internet is transported using best effort, without discrimination by origin or service. Under this principle, consumers can make their own choices about what applications and services to use and are free to decide what lawful content they want to access, create, or share with others… Once you’re online, you don’t have to ask permission or pay tolls to broadband providers to reach others on the network. If you develop an innovative new website, you don’t have to get permission to share it with the world. For example, the Times is a widely popular online newspaper, while the Mirror has comparatively fewer visitors to its website. Right now, if the Mirror wanted to boost its page views, it would have to write more engaging stories and find ways to share its content so that more people read it. It is not allowed to make deals with ISPs to charge customers less money if they visit the Mirror’s website. Net neutrality means that Internet Service Providers should bill you on the amount of bandwidth you have consumed, not on which websites you visited. Net neutrality is the principle that all packets of data over the internet should be transmitted equally, without discrimination. So, for example, net neutrality ensures that my blog can be accessed just as quickly as, say, the BBC website. Essentially, it prevents ISPs from discriminating between sites, organisations etc., whereby those with the deepest pockets can pay to get into the fast lane whilst the rest have to make do with the slow lane. Instead, every website is treated equally, preventing the big names from delivering their data faster than a small independent online service.
This ensures that no one organisation can deliver its data any quicker than anyone else, enabling a fair and open playing field that encourages innovation and diversity in the range of information and material online. The principles of net neutrality are effectively the reason why we have a (reasonably) diverse online space that enables anyone to create a website and reach a large volume of people. Network neutrality is the idea that Internet service providers must allow customers equal access to content and applications regardless of the source or nature of the content. Presently the Internet is indeed neutral: all Internet traffic is treated equally on a first-come, first-served basis by Internet backbone owners. The Internet is neutral because it was built on phone lines, which are subject to ‘common carriage’ laws. These laws require phone companies to treat all calls and customers equally. They cannot offer extra benefits to customers willing to pay higher premiums for faster or clearer calls, a model known as tiered service.
Net neutrality is not a new concept relative to the age of the Internet; its roots go back to the Internet’s founders. Net neutrality refers to a guiding principle that preserves the free and open Internet with no discrimination. It means that an Internet Service Provider (ISP) cannot discriminate in the speed of the connection, or lack thereof, that it provides to one content provider versus another (Eudes 2008). When the Internet was first invented, its founders wanted to be sure that it would provide a safe haven for the transportation of information without any biases. They wanted to ensure that all people had a consistent way to use the Internet, regardless of their connection and social status (Margulius, 2003). Net neutrality has two polarizing factions: those who are in favor, and those who are not. On this topic there is no middle ground. Those in favor of net neutrality include organizations like Microsoft, Google, and other content providers. Those against net neutrality are generally telecommunication network organizations and/or ISPs (Owen 2007). Network neutrality, or open inter-working, means that in accessing the World Wide Web, one is in full control over how to go online, where to go and what to do, as long as these are lawful. So firms that provide Internet services should treat all lawful Internet content in a neutral manner. It also requires such companies not to charge differentially by user, content, platform, site, application or mode of communication. These are also the founding principles of the Internet and what has made it the largest and most diverse platform for expression in recent history.
Net neutrality is when an ISP treats all content on the internet neutrally and does not prioritize one source over another. ISPs want to charge content companies because money makes their shareholders happy, and because they believe they have the right to do so when a certain content provider (e.g. Netflix) takes up the majority of the bandwidth flowing through their networks. Companies are concerned because it would give ISPs free rein to slow down any content they please and demand money to bring it back to normal speed. Whether you’re accessing How-To Geek, Google, or a tiny website running on shared hosting somewhere, your Internet service provider treats these connections equally and forwards the data along without prioritizing any one party. Your Internet service provider could prioritize data from Google, charging them for the privilege. They could throttle Netflix while providing you with unlimited bandwidth to stream videos from their own video-streaming service. They could restrict the bandwidth available to VoIP applications and encourage you to keep paying for a phone line. They could throttle connections to websites run by startups and other individuals that haven’t signed a contract with the Internet service provider to pay for priority access. These actions would all be violations of net neutrality. However, by and large, Internet service providers don’t violate net neutrality in this way. They just forward packets along: that’s the way the Internet has worked, and it has given us the Internet we have today.
The figure below shows how ISPs would like the internet to be without net neutrality:
One percent of the world’s population controls almost 50 percent of the world’s wealth, according to the poverty eradication nonprofit Oxfam. Advocates of net neutrality worry that loosening the rules for ISPs will result in a one-percent version of the Internet. Here’s how it could happen. In 2004, Internet traffic was more or less equally distributed across thousands of Web companies. Just 10 years later, half of all Internet traffic originated from only 30 companies. The top three websites by daily unique visitors and page views are Google, Facebook and YouTube. In terms of data, Netflix and YouTube hog more than half of all downstream traffic in North America. That means one out of every two bytes of data traveling across the Internet is streaming video from Netflix or YouTube. If the distribution of Internet traffic is so out of whack now, imagine what it would be like if ISPs were given the green light to give further preferential treatment to the biggest players. Would there be any bandwidth left for the 99 percent — independent video producers, upstart social media sites, bloggers and podcasters? This is a really important reason why you should care about net neutrality. The Internet, as it exists today, is an open forum for free speech and freedom of expression. Websites publishing both popular and unpopular viewpoints are treated equally in terms of how their data gets from servers to screens. If the FCC allows Internet service providers (ISPs) to charge extra money for access to Internet last-mile fast lanes, the playing field of free speech is no longer equal. Those with the money to pay for special treatment could broadcast their opinions more quickly and more smoothly than their opponents. Those without as many resources — activists, artists and political outsiders — could be relegated to the Internet slow lane.
If you’re lucky enough to live in a country that doesn’t regulate the information you access online, you probably take net neutrality for granted. You search the Web unrestricted by government censors, free to choose what information to believe or discard, and what websites and online services to patronize. In mainland China, citizens of the highly restrictive communist regime enjoy no such freedoms. This is what a heavily censored and closely monitored Internet looks like:
1. Chinese internet service providers (ISPs) block access to a long list of sites banned by the government.
2. Specific search terms are red flagged; type them into Google and you’ll be blocked from the search engine for 90 seconds.
3. Chinese ISPs are given lists of problematic keywords and ordered to take down pages that include those words.
4. The government and private companies employ 100,000 people to police the Internet and snitch on dissenters.
5. The government also pays people to post pro-government messages on social networks, blogs and message boards.
The unequal Web:
The figure above shows that richer countries rank highest for net access, freedom and openness. The web is becoming less free and more unequal, according to a report from the World Wide Web Foundation. Its annual web index suggests web users are at increasing risk of government surveillance, with laws preventing mass snooping weak or non-existent in over 84% of countries. It also indicates that online censorship is on the rise. The report led web inventor Sir Tim Berners-Lee to call for net access to be recognised as a human right. That means guaranteeing affordable access for all, ensuring internet packets are delivered without commercial or political discrimination, and protecting the privacy and freedom of web users regardless of where they live.
Net neutrality worldwide:
This map shows data from Glasnost, one of the Measurement Lab tools for examining your internet connection. The authors map the percentage of tests in which violations of net neutrality were discovered worldwide. The data covers the period from 2012-12-26 00:02:11 to 2013-12-22 23:59:19.
Outline of computer, internet, bits, bytes, speed, packets and internet protocol:
A computer is defined as a programmable machine that computes (stores, processes and retrieves) information (data) according to a set of instructions (a program). A computer processes data in numerical form, and its digital electronic circuits perform mathematical operations using the binary system. Binary means using only two digits for arithmetic processing, namely 0 and 1, known as bits (binary digits).
0 means absence of current/voltage in electronic circuit = off
1 means presence of current/voltage in electronic circuit = on
A series of 8 consecutive bits is known as a byte, which permits 256 different on/off combinations.
Computers see everything in terms of binary. In binary systems, everything is described using two values or states: on or off, true or false, yes or no, 1 or 0. A light switch could be regarded as a binary system, since it is always either on or off. As complex as they may seem, on a conceptual level computers are nothing more than boxes full of millions of “light switches.” Each of the switches in a computer is called a bit, short for binary digit. A computer can turn each bit either on or off. Your computer likes to describe on as 1 and off as 0. By itself, a single bit is kind of useless, as it can only represent one of two things. By arranging bits in groups, the computer is able to describe more complex ideas than just on or off. The most common arrangement of bits in a group is called a byte, which is a group of eight bits.
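The byte arithmetic above is easy to verify in a few lines of Python (the particular bit pattern chosen here is just an illustrative example):

```python
# A byte is 8 bits, so it can represent 2**8 = 256 distinct on/off patterns.
combinations = 2 ** 8
print(combinations)  # 256

# Interpreting one such pattern as a binary number:
pattern = "01000001"       # eight bits: off, on, off, off, off, off, off, on
value = int(pattern, 2)    # read the string as a base-2 number
print(value)               # 65
print(chr(value))          # 'A' -- 65 happens to be the character code for 'A'
```

This is exactly the sense in which a computer is “a box full of light switches”: each switch contributes one binary digit to a number.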
The internet is defined as a global communication system of data connectivity between computers using the Transmission Control Protocol (TCP) and Internet Protocol (IP) to serve billions of users in the world. The internet is the greatest invention in communication, breaking barriers of age, distance, language, religion, race and region, and making the world a better place to live in. If you do not have internet access in the 21st century, you are illiterate. The internet scores over traditional media due to its openness and neutrality. Every school must teach the basics of computers and the internet to students.
The data transfer rate (speed) of an internet connection is usually measured in bits per second.
1000 bits per second = 1 kilobit per second (Kbps)
1000000 bits per second = 1 megabit per second (Mbps) = 1000 Kbps
Broadband means a download speed of more than 4 Mbps and an upload speed of more than 1 Mbps. Newer technology with fiber-optic cables can give internet speeds of 100 Mbps.
The speed at which data travels from computer to computer through wireless technology (air) is the speed of radio waves (the speed of light), which is 300,000 kilometers per second. The speed at which data travels from computer to computer through a wired network is the speed of electrical signals, which is also near the speed of light. Please do not confuse the speed of data travel (near the speed of light) with internet speed, i.e. the data transfer rate in Kbps or Mbps, which refers to how quickly digital data is converted into radio waves or electrical signals, not the speed of the data while traveling through the air or wires. Data transfer rate and data travel rate are different. The term latency refers to the amount of time taken by packets to travel from source to destination. Since the speed of light is constant and nothing travels faster, latency depends on the time packets spend queuing in routers and passing through other hardware/software along the way.
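The difference between data transfer rate and travel speed becomes concrete with a small calculation. The sketch below (the file size and link speeds are illustrative, and latency and protocol overhead are ignored) shows why the same file takes very different times on different links even though the bits travel at near light speed in both cases:

```python
def transfer_time_seconds(file_size_megabytes, link_speed_mbps):
    """Time to push a file through a link, ignoring latency and overhead."""
    file_size_megabits = file_size_megabytes * 8  # 1 byte = 8 bits
    return file_size_megabits / link_speed_mbps

# A 50 MB video over a 4 Mbps broadband link:
print(transfer_time_seconds(50, 4))    # 100.0 seconds
# The same file over a 100 Mbps fiber link:
print(transfer_time_seconds(50, 100))  # 4.0 seconds
```

Note the byte-to-bit conversion: file sizes are quoted in bytes, but link speeds in bits per second, which is a common source of confusion.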
The picture below illustrates two computers connected to the Internet; your computer with IP address 184.108.40.206 and another computer with IP address 220.127.116.11. The Internet is represented as an abstract object in-between.
An Internet Protocol address (IP address) is a numerical label assigned to each device (e.g., computer, printer) participating in a computer network that uses the Internet Protocol for communication. An IP address serves two principal functions: host or network interface identification and location addressing. A name indicates what we seek. An address indicates where it is. A route indicates how to get there. The designers of the Internet Protocol defined an IP address as a 32-bit number and this system, known as Internet Protocol Version 4 (IPv4), is still in use today. However, because of the growth of the Internet and the predicted depletion of available addresses, a new version of IP (IPv6), using 128 bits for the address, was developed in 1995. IP addresses are usually written and displayed in human-readable notations, such as 172.16.254.1 (IPv4), and 2001:db8:0:1234:0:567:8:1 (IPv6). Each version defines an IP address differently. Because of its prevalence, the generic term IP address typically still refers to the addresses defined by IPv4. IPv4 addresses are canonically represented in dot-decimal notation, which consists of four decimal numbers, each ranging from 0 to 255, separated by dots, e.g., 172.16.254.1. Each part represents a group of 8 bits (octet) of the address. In some cases of technical writing, IPv4 addresses may be presented in various hexadecimal, octal, or binary representations. There are about 4.3 billion IP addresses. The class-based, legacy addressing scheme places heavy restrictions on the distribution of these addresses. TCP/IP networks are inherently router-based, and it takes much less overhead to keep track of a few networks than millions of them. The rapid exhaustion of IPv4 address space, despite conservation techniques, prompted the Internet Engineering Task Force (IETF) to explore new technologies to expand the addressing capability in the Internet. 
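Python’s standard-library ipaddress module can demonstrate the dotted-decimal bookkeeping described above, using the same 172.16.254.1 example:

```python
import ipaddress

addr = ipaddress.IPv4Address("172.16.254.1")

# The four decimal numbers are a human-friendly view of one 32-bit number.
print(int(addr))     # 2886794753
print(addr.packed)   # the four raw octets: b'\xac\x10\xfe\x01'

# The mapping works in reverse, too:
print(ipaddress.IPv4Address(2886794753))  # 172.16.254.1
```

Each of the four octets (0xac, 0x10, 0xfe, 0x01) is one of the dotted groups, 172, 16, 254 and 1.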
The permanent solution was deemed to be a redesign of the Internet Protocol itself. This new generation of the Internet Protocol, intended to replace IPv4 on the Internet, was eventually named Internet Protocol Version 6 (IPv6) in 1995. The address size was increased from 32 to 128 bits, or 16 octets. This, even with a generous assignment of network blocks, is deemed sufficient for the foreseeable future. Mathematically, the new address space provides the potential for a maximum of 2^128, or about 3.403×10^38, addresses. The Domain Name System (DNS) converts domain names to IP addresses so that users only need to specify a domain name to access a computer on the Internet instead of typing the numeric IP address. DNS servers maintain a database containing IP addresses mapped to their corresponding domain names.
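The address-space arithmetic, and DNS’s name-to-address mapping, can be sketched as follows. The domain names and addresses in the table are made-up examples drawn from the reserved documentation ranges, not real DNS records:

```python
import ipaddress

# IPv4 uses 32 bits; IPv6 uses 128.
print(2 ** 32)   # 4294967296 -- the "about 4.3 billion" IPv4 addresses
print(2 ** 128)  # roughly 3.403e38 IPv6 addresses

# DNS in miniature: a lookup table from domain names to IP addresses.
dns_table = {
    "example.org": ipaddress.IPv4Address("192.0.2.10"),
    "myblog.example": ipaddress.IPv4Address("203.0.113.7"),
}

def resolve(domain):
    """Return the IP address recorded for a domain name."""
    return dns_table[domain]

print(resolve("myblog.example"))  # 203.0.113.7
```

A real resolver does the same lookup, but against a distributed, hierarchical database of name servers rather than one local dictionary.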
IP address assignment:
Internet Protocol addresses are assigned to a host either anew at the time of booting, or permanently by fixed configuration of its hardware or software. Persistent configuration is also known as using a static IP address. In contrast, in situations when the computer’s IP address is assigned newly each time, this is known as using a dynamic IP address. An Internet Service Provider (ISP) will generally assign either a static IP address (always the same) or a dynamic address (changes every time one logs on). If you connect to the Internet from a local area network (LAN) your computer might have a permanent IP address or it might obtain a temporary one from a DHCP (Dynamic Host Configuration Protocol) server. In any case, if you are connected to the Internet, your computer has a unique IP address.
Packets and protocols:
When a file is sent from one computer to another, it is broken into small pieces called packets. A typical packet contains perhaps 1,000 or 1,500 bytes. It turns out that everything you do on the Internet involves packets. For example, every Web page that you receive comes as a series of packets, and every e-mail you send leaves as a series of packets. The packets are labelled individually with origin, destination and place in the original file, and are sent sequentially over the network. Each packet carries the information that will help it get to its destination: the sender’s IP address, the intended receiver’s IP address, something that tells the network how many packets this e-mail message has been broken into, and the number of this particular packet. When a packet gets to a router, the router looks at the packet to see where it needs to go. The routers determine where to send information from one computer to another. Routers are specialized computers that send your messages, and those of every other Internet user, speeding to their destinations along thousands of pathways. The packets carry the data in the protocols that the Internet uses: Transmission Control Protocol/Internet Protocol (TCP/IP). Using “pure” IP, a computer first breaks down the message to be sent into small packets, each labelled with the address of the destination machine; the computer then passes those packets along to the next connected Internet machine (router), which looks at the destination address and then passes them along to the next connected internet machine, which looks at the destination address and passes them along, and so forth, until the packets (we hope) reach the destination machine. IP is thus a “best efforts” communication service, meaning that it does its best to deliver the sender’s packets to the intended destination, but it cannot make any guarantees.
If, for some reason, one of the intermediate computers “drops” (i.e., deletes) some of the packets, the dropped packets will not reach the destination and the sending computer will not know whether or why they were dropped. By itself, IP can’t ensure that the packets arrived in the correct order, or even that they arrived at all. That’s the job of another protocol: TCP (Transmission Control Protocol). TCP sits “on top” of IP and ensures that all the packets sent from one machine to another are received and assembled in the correct order. Should any of the packets get dropped during transmission, the destination machine uses TCP to request that the sending machine resend the lost packets, and to acknowledge them when they arrive. TCP’s job is to make sure that transmissions get received in full, and to notify the sender that everything arrived OK. Each packet is sent off to its destination by the best available route — a route that might be taken by all the other packets in the message or by none of the other packets in the message. This makes the network more efficient. First, the network can balance the load across various pieces of equipment on a millisecond-by-millisecond basis. Second, if there is a problem with one piece of equipment in the network while a message is being transferred, packets can be routed around the problem, ensuring the delivery of the entire message. Packets don’t necessarily all take the same path — they’ll generally travel the path of least resistance. That’s an important feature. Because packets can travel multiple paths to get to their destination, it’s possible for information to route around congested areas on the Internet. In fact, as long as some connections remain, entire sections of the Internet could go down and information could still travel from one section to another — though it might take longer than normal. When the packets get to you, your device arranges them according to the rules of the protocols. 
It’s kind of like putting together a jigsaw puzzle. When you send an e-mail, it gets broken into packets before zooming across the Internet. Phone calls over the Internet also convert conversations into packets using the Voice over Internet protocol (VoIP).
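A minimal sketch of this packetisation and reassembly follows. The IP addresses are made-up examples from the documentation range, and the 10-byte packet size is artificially small for illustration (real packets carry around 1,000 to 1,500 bytes):

```python
import random

def packetize(message, src, dst, size=10):
    """Break a message into packets labelled with origin, destination and sequence number."""
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [{"src": src, "dst": dst, "seq": n, "total": len(chunks), "data": chunk}
            for n, chunk in enumerate(chunks)]

def reassemble(packets):
    """Sort packets by sequence number, as TCP does, and join the payloads."""
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

message = "Hello computer, this is a longer message!"
packets = packetize(message, "192.0.2.1", "192.0.2.2")
random.shuffle(packets)                # packets may arrive in any order
print(reassemble(packets) == message)  # True
```

The shuffle stands in for the network delivering packets out of order; the sequence numbers in each packet header are what make reassembly possible.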
Many things can happen to packets as they travel from origin to destination, resulting in the following problems as seen from the point of view of the sender and receiver:
Low throughput: Due to varying load from disparate users sharing the same network resources, the bit rate (the maximum throughput) that can be provided to a certain data stream may be too low for real-time multimedia services if all data streams get the same scheduling priority.
Dropped packets: The routers might fail to deliver (drop) some packets if their data is corrupted, or if the packets arrive when the router buffers are already full. The receiving application may ask for this information to be retransmitted, possibly causing severe delays in the overall transmission.
Errors: Sometimes packets are corrupted due to bit errors caused by noise and interference, especially in wireless communications and long copper wires. The receiver has to detect this and, just as if the packet were dropped, may ask for the information to be retransmitted.
Latency: Latency is defined as the time it takes for a source to send a packet of data to a receiver, and is typically measured in milliseconds. The lower the latency (the fewer the milliseconds), the better the network performance. It might take a long time for each packet to reach its destination, because it gets held up in long queues or takes a less direct route to avoid congestion. This is different from throughput, as the delay can build up over time even if the throughput is almost normal. In some cases, excessive latency can render an application such as VoIP or online gaming unusable. Ideally, latency is as close to zero as possible.
Packets from the source will reach the destination with different delays. A packet’s delay varies with its position in the queues of the routers along the path between source and destination and this position can vary unpredictably. This variation in delay is known as jitter and can seriously affect the quality of streaming audio and/or video.
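Latency and jitter can both be computed from a list of per-packet delays. A minimal sketch, with made-up delay values in milliseconds:

```python
def average_latency(delays_ms: list[float]) -> float:
    """Mean one-way delay across all packets."""
    return sum(delays_ms) / len(delays_ms)

def jitter(delays_ms: list[float]) -> float:
    """Mean absolute difference between consecutive packet delays,
    a simplified version of how RTP receivers estimate jitter."""
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return sum(diffs) / len(diffs)

delays = [20.0, 22.0, 21.0, 35.0, 20.0]  # one packet hit a long queue
print(average_latency(delays))  # 23.6 ms
print(jitter(delays))           # 8.0 ms
```

Note that the single 35 ms outlier barely moves the average latency but dominates the jitter, which is why streaming audio can sound bad even on a "fast" connection.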
When a collection of related packets is routed through a network, different packets may take different routes, each resulting in a different delay. The result is that the packets arrive in a different order than they were sent. This problem requires special additional protocols to rearrange out-of-order packets back into their original sequence once they reach their destination. This is especially important for video and VoIP streams, where quality is dramatically affected by both latency and lack of sequence.
At their most basic level, protocols establish the rules for how information passes through the Internet. Protocols are to computers what language is to humans. Since this article is in English, to understand it you must be able to read English. Similarly, for two devices on a network to successfully communicate, they must both understand the same protocols. Without these rules, you would need direct connections to other computers to access the information they hold. You’d also need both your computer and the target computer to understand a common language. When you want to send a message or retrieve information from another computer, the TCP/IP protocols are what make the transmission possible. You’ve probably heard of several protocols on the Internet. For example, hypertext transfer protocol (HTTP) is what we use to view Web sites through a browser — that’s what the http at the front of any Web address stands for. If you’ve ever used an FTP server, you relied on the file transfer protocol. Protocols like these and dozens more create the framework within which all devices must operate to be part of the Internet.
So your computer is connected to the Internet and has a unique address. How does it ‘talk’ to other computers connected to the Internet? An example should serve here: Let’s say your IP address is 1.2.3.4 and you want to send a message to the computer at 5.6.7.8. The message you want to send is “Hello computer 5.6.7.8!” Obviously, the message must be transmitted over whatever kind of wire connects your computer to the Internet. Let’s say you’ve dialled into your ISP from home and the message must be transmitted over the phone line. Therefore the message must be translated from alphabetic text into electronic signals, transmitted over the Internet, then translated back into alphabetic text. How is this accomplished? Through the use of a protocol stack. Every computer needs one to communicate on the Internet and it is usually built into the computer’s operating system (i.e. Windows, Unix, etc.). The protocol stack used on the Internet is referred to as the TCP/IP protocol stack because of the two major communication protocols used. The TCP/IP stack looks like this:
• Application Protocols Layer: Protocols specific to applications such as WWW, e-mail, FTP, etc.
• Transmission Control Protocol Layer: TCP directs packets to a specific application on a computer using a port number.
• Internet Protocol Layer: IP directs packets to a specific computer using an IP address.
• Hardware Layer: Converts binary packet data to network signals and back (e.g. Ethernet network card, modem for phone lines, etc.).
If we were to follow the path that the message “Hello computer 5.6.7.8!” took from our computer to the computer with IP address 5.6.7.8, it would happen something like this: the message would start at the top of the protocol stack on your computer and work its way downward; TCP would divide it into packets and add a port number; IP would add the destination address; and the hardware layer would convert the packets into electronic signals and send them over the phone line to your ISP, whose routers would forward them toward the destination. There, the packets would climb back up the stack, with each layer stripping off its own header, until TCP reassembled the message in order and handed it to the receiving application.
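That trip down and back up the protocol stack can be sketched as successive wrapping and unwrapping of the message. The header fields here are illustrative dictionaries, not real TCP/IP formats.

```python
# Hypothetical sketch of encapsulation: each layer wraps the data with
# its own header on the way down, and strips it on the way back up.
message = "Hello computer!"

# Sending side: down the stack
tcp_segment = {"port": 80, "data": message}                  # TCP layer
ip_packet = {"dst_ip": "5.6.7.8", "data": tcp_segment}       # IP layer
frame = {"dst_mac": "aa:bb:cc:dd:ee:ff", "data": ip_packet}  # hardware layer

# ... the frame crosses the wire as raw electronic signals ...

# Receiving side: up the stack
received_packet = frame["data"]             # hardware layer strips the frame
received_segment = received_packet["data"]  # IP layer strips its header
received_message = received_segment["data"] # TCP hands data to the application
assert received_message == "Hello computer!"
```

Notice that each layer only reads its own header: the hardware layer never looks at the port number, and IP never looks at the message text.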
Internet layers/protocol layers:
The internet layer is a group of internetworking methods, protocols, and specifications in the Internet protocol suite that are used to transport datagrams (packets) from the originating host across network boundaries to the destination host specified by a network address (IP address) which is defined for this purpose by the Internet Protocol (IP). A common design aspect in the internet layer is the robustness principle: ‘Be liberal in what you accept, and conservative in what you send’, as a misbehaving host can deny Internet service to many other users. The internet layer of the TCP/IP model is often compared directly with the network layer (layer 3) in the Open Systems Interconnection (OSI) protocol stack. OSI’s network layer is a catch-all layer for all protocols that facilitate network functionality. The internet layer, on the other hand, is specifically a suite of protocols that facilitate internetworking using the Internet Protocol. Protocol layers exist to reduce design complexity and improve portability and support for change. Networks are organised as a series of layers or levels, each built on the one below. The purpose of each layer is to offer services required by higher levels and to shield higher layers from the implementation details of lower layers.
OSI Reference Model:
OSI consists of 7 layers of protocols, i.e., of 7 different areas in which the protocols operate. In principle, the areas are distinct and of increasing generality; in practice, the boundaries between the layers are not always sharp. The model draws a clear distinction between a service, something that an application program or a higher-level protocol uses, and the protocols themselves, which are sets of rules for providing services.
The OSI Model was developed to help provide a better understanding of how a network operates. The better you understand the model, the better you will understand networking. It is composed of seven layers. Each layer is unique and supports the creation and control of data packets. The layers start with the Physical layer and end with the Application layer. The first three layers relate to network equipment. For example, switches are layer 2 devices and routers are layer 3 devices.
1. The first layer is the Physical layer and is where the data is either put onto the media or taken off the media. The media could be the network cable or wireless. The data is in the form of bits and is called Bits as the PDU (protocol data unit). These bits can be voltage levels that represent binary numbers of 1 or 0. They could also be light pulses traveling on a fiber optic cable or radio wave pulses for a wireless network.
2. The second layer is the Data Link layer and is where framing of the data takes place. The Frame is the PDU name at this layer. The MAC (media access control) physical address is added or removed depending on which direction the data is traveling. The MAC address is used by switches to switch the data to the appropriate computer or node that it is intended for in a LAN (local area network).
3. The third layer is the Network layer and is where the IP (internet protocol) address is added or removed and the PDU at this layer is called a Packet. Routers operate at this level and use the IP (logical address) to route the data to the appropriate network. Network locations are found by the routers using routing tables to locate the appropriate networks.
4. The fourth layer is the Transport layer and is where the data is segmented (broken into pieces) and used by the TCP protocol to ensure accurate and reliable data transfer. The data segments are numbered so that proper sequencing can be determined on the receiving side in order to rebuild accurate files. The PDU name at this layer is Segment.
5. The fifth layer is the Session layer and is where the session is created, maintained, and torn-down when finished.
6. The sixth layer is the Presentation layer and is where the data is formatted or decrypted into files that the user can understand.
7. The seventh layer is the Application layer and is the user interface to the network where that data is either being generated or received.
The Internet, like any other computer network, is defined in terms of layers; these are the often-referenced “OSI layers”. This division into layers is a logical (rather than physical) one; the data traversing the network is eventually one long series of bits: 0s and 1s. Such “layers” are how we address the representation of those many bits; their grouping into clusters of bits that have meaning. The different network layers are different levels of interpretation of this large set of bits moving along the wire. Understanding the same raw traffic at different layers allows us to bridge the semantic gap between a bunch of 0s and 1s and an e-mail being sent or a web site being browsed. After all, all e-mails and browsing sessions end up as 0s and 1s on a wire. Processing those sequences of bits at different layers of abstraction is what makes the network as versatile as it is and technically manageable. In a nutshell, Internet traffic is interpreted at seven layers, where each layer introduces meaningful data objects and uses the underlying layer to transfer these objects. Each of the many components of the Internet (applications sending and receiving data, routers, modems, and wires) knows how to process data at its own layer and need not be aware of what the data represents at higher layers or of how data is processed by the lower layers.
Understanding the layered architecture of the Internet allows us to define net neutrality:
Network neutrality is the adherence to the paradigm that operation at a certain layer, by a network component (or provider) that is chartered for operating at that layer, is not influenced by interpretation of the processed data at higher layers. So network neutrality is an intended feature of the Internet. A component operating at a certain layer is not required to understand the data it processes at higher layers. The network card operating at Layer 2 does not need to know that it is sending an e-mail message (Layer 7). It only needs to know that it is sending a frame (Layer 2) with a certain opaque payload. Net neutrality is thus built into the Internet. When expanding the notion of net neutrality from the purely technical domain to the service domain, we can define network neutrality as the adherence to the paradigm that operation of a service at a certain layer is not influenced by any data other than the data interpreted at that layer, and in accordance with the protocol specification for that layer. Therefore, a service provider is said to operate in net neutrality if it provides the service in a way that is strictly “by the book”, where “the book” is the specification of the network protocol it implements as its service. Its operation is network-neutral if it is not impacted by any logic other than that of implementing the network layer protocol it is chartered to implement.
So how do packets find their way across the Internet? Does every computer connected to the Internet know where the other computers are? Do packets simply get ‘broadcast’ to every computer on the Internet? The answer to both of the preceding questions is ‘no’. No computer knows where any of the other computers are, and packets do not get sent to every computer. The information used to get packets to their destinations is contained in routing tables kept by each router connected to the Internet. Routers are packet switches. A router is usually connected between networks to route packets between them. Each router knows about its sub-networks and which IP addresses they use. The router usually doesn’t know what IP addresses are ‘above’ it. When a packet arrives at a router, the router examines the IP address put there by the IP protocol layer on the originating computer. The router checks its routing table. If the network containing the IP address is found, the packet is sent to that network. If the network containing the IP address is not found, then the router sends the packet on a default route, usually up the backbone hierarchy to the next router. Hopefully the next router will know where to send the packet. If it does not, again the packet is routed upwards until it reaches an NSP (network service provider) backbone. The routers connected to the NSP backbones hold the largest routing tables and here the packet will be routed to the correct backbone, where it will begin its journey ‘downward’ through smaller and smaller networks until it finds its destination.
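A router's forwarding decision, including the default route "up the hierarchy", can be sketched as a longest-prefix match over a routing table. The addresses and next-hop names below are invented for illustration.

```python
import ipaddress

# Hypothetical routing table: known sub-networks plus a default route.
ROUTING_TABLE = {
    "203.0.113.0/24": "local-subnet-A",
    "198.51.100.0/24": "local-subnet-B",
    "0.0.0.0/0": "upstream-router",   # default route: send the packet upward
}

def next_hop(dst: str) -> str:
    """Pick the most specific (longest-prefix) matching route."""
    networks = [(ipaddress.ip_network(net), hop)
                for net, hop in ROUTING_TABLE.items()]
    matches = [(net, hop) for net, hop in networks
               if ipaddress.ip_address(dst) in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("203.0.113.42"))  # local-subnet-A (a known sub-network)
print(next_hop("8.8.8.8"))       # upstream-router (default route)
```

The `0.0.0.0/0` entry matches every address, so a packet for an unknown network always has somewhere to go: up the backbone hierarchy, just as described above.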
Modem vs. router:
A router is a device that forwards data packets along networks. A router is connected to at least two networks, commonly two LANs or WANs or a LAN and its ISP’s network. Routers are located at gateways, the places where two or more networks connect. While connecting to a router provides access to a local area network (LAN), it does not necessarily provide access to the Internet. In order for devices on the network to connect to the Internet, the router must be connected to a modem. While the router and modem are usually separate entities, in some cases, the modem and router may be combined into a single device. This type of hybrid device is sometimes offered by ISPs to simplify the setup process.
A modem (modulator-demodulator) is a device that modulates signals to encode digital information and demodulates signals to decode the transmitted information. The goal is to produce a signal that can be transmitted easily and decoded to reproduce the original digital data. A modem is a device that provides access to the Internet. The modem connects to your ISP. Modems can be used with any means of transmitting analog signals, from light emitting diodes to radio. A common type of modem is one that turns the digital data of a computer into a modulated electrical signal for transmission over telephone lines; another modem at the receiver side demodulates the signal to recover the digital data. Modems which use a mobile telephone system (GPRS, UMTS, HSPA, EVDO, WiMax, etc.) are known as mobile broadband modems (sometimes also called wireless modems). Wireless modems can be embedded inside a laptop or appliance, or be external to it. External wireless modems include connect cards, USB modems for mobile broadband and cellular routers. A connect card is a PC Card or ExpressCard which slides into a PCMCIA/PC card/ExpressCard slot on a computer. USB wireless modems use a USB port on the laptop instead of a PC card or ExpressCard slot. A USB modem used for mobile broadband Internet is also sometimes referred to as a dongle. A cellular router may have an external datacard (AirCard) that slides into it. Most cellular routers do allow such datacards or USB modems. Cellular routers may not be modems by definition, but they contain modems or allow modems to be slid into them. The difference between a cellular router and a wireless modem is that a cellular router normally allows multiple people to connect to it (since it can route data or support multipoint to multipoint connections), while a modem is designed for one connection.
By connecting your modem to your router (instead of directly to a computer), all devices connected to the router can access the modem, and therefore, the Internet. The router provides a local IP address to each connected device, but they will all have the same external IP address, which is assigned by your ISP.
The figure below shows the request path and return path of an internet request utilizing a modem, router and DNS server:
In order to retrieve this article, your computer had to connect with the Web server containing the article’s file. We’ll use that as an example of how data travels across the Internet. First, you open your Web browser and connect to our Web site. When you do this, your computer sends an electronic request over your Internet connection to your Internet service provider (ISP). The ISP routes the request to a server further up the chain on the Internet. Eventually, the request will hit a domain name server (DNS). This server will look for a match for the domain name you’ve typed in (www.drrajivdesaimd.com). If it finds a match, it will direct your request to the proper server’s IP address. If it doesn’t find a match, it will send the request further up the chain to a server that has more information. The request will eventually come to our Web server. Our server will respond by sending the requested file in a series of packets.
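The lookup chain described above can be sketched as a series of DNS tables consulted in turn, each "further up the chain" than the last. The names and addresses below are invented for illustration.

```python
# Hypothetical DNS tables at three levels of the chain.
LOCAL_DNS = {"printer.local": "192.168.1.50"}
ISP_DNS = {"www.example.net": "198.51.100.7"}
UPSTREAM_DNS = {"www.drrajivdesaimd.com": "203.0.113.10"}

def resolve(domain: str) -> str:
    """Ask each server in turn; forward the query up the chain on a miss."""
    for server in (LOCAL_DNS, ISP_DNS, UPSTREAM_DNS):
        if domain in server:
            return server[domain]
    raise LookupError(f"no DNS record for {domain}")

print(resolve("www.example.net"))  # answered by the ISP's DNS server
```

Once `resolve` returns an IP address, the request can be directed to the proper server, which responds with the requested file as a series of packets.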
Peer to Peer file sharing:
Peer-to-peer file sharing is different from traditional file downloading. In peer-to-peer sharing, you use a software program (rather than your Web browser) to locate computers that have the file you want. Because these are ordinary computers like yours, as opposed to servers, they are called peers. The process works like this:
•You run peer-to-peer file-sharing software (for example, a Gnutella program) on your computer and send out a request for the file you want to download.
•To locate the file, the software queries other computers that are connected to the Internet and running the file-sharing software.
•When the software finds a computer that has the file you want on its hard drive, the download begins.
•Others using the file-sharing software can obtain files they want from your computer’s hard drive.
The file-transfer load is distributed between the computers exchanging files, but file searches and transfers from your computer to others can cause bottlenecks. Some people download files and immediately disconnect without allowing others to obtain files from their system, which is called leeching. This limits the number of computers the software can search for the requested file. As Peer-to-Peer (P2P) file exchange applications gain popularity, Internet service providers are faced with new challenges and opportunities to sustain and increase profitability from the broadband IP network. Unlike other P2P download methods, BitTorrent maximizes transfer speed by gathering pieces of the file you want and downloading these pieces simultaneously from people who already have them. This process makes popular and very large files, such as videos and television programs, download much faster than is possible with other protocols. Due to the unique and aggressive usage of network resources by Peer-to-Peer technologies, network usage patterns are changing and provisioned capacity is no longer sufficient. Extensive use of Peer-to-Peer file exchange causes network congestion and performance deterioration, and ultimately leads to customer dissatisfaction and churn.
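BitTorrent's piece-gathering can be sketched as collecting numbered pieces from whichever peers already hold them. The peers and pieces below are invented for illustration, and a real client fetches pieces concurrently rather than in a simple loop.

```python
# Hypothetical swarm: each peer holds only some pieces of the file.
PEERS = {
    "peer-1": {0: "The ", 2: "has pieces "},
    "peer-2": {1: "swarm ", 3: "everywhere."},
}

def download(total_pieces: int) -> str:
    """Gather every numbered piece from the swarm, then stitch them together."""
    collected: dict[int, str] = {}
    for pieces in PEERS.values():        # ask every peer in the swarm
        for index, data in pieces.items():
            collected.setdefault(index, data)
    if len(collected) < total_pieces:
        raise RuntimeError("piece missing from the swarm")
    return "".join(collected[i] for i in range(total_pieces))

print(download(4))  # The swarm has pieces everywhere.
```

Because no single peer needs the whole file, popular content spreads quickly; but the same many-to-many traffic pattern is what strains ISP capacity, as described above.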
Please do not confuse peering with peer-to-peer file transfer. Peering is a direct connection between an ISP and a content provider (e.g. Google) that bypasses the internet backbone, while peer-to-peer is the sharing of files between client computers rather than downloading a file from the content provider. With peering, you are getting a file from the content provider at faster speed; with P2P, you are getting a file from another user’s computer at faster speed. Peering is often seen as a violation of net neutrality by ISPs, while P2P is often seen as a violation of net neutrality by consumers.
The end-to-end principle states that, whenever possible, communications protocol operations should be defined to occur at the end-points of a communications system, or as close as possible to the resource being controlled. This leads to the model of a minimal dumb network with smart terminals, a completely different model from the previous paradigm of the smart network with dumb terminals. All of the intelligence is held by producers and users, not the networks that connect them. End-to-end design of the network entails that the intelligence would be exclusively located at the edges of the Internet (i.e. with end users), and not at the core (i.e. with networks). If the hosts need a mechanism to provide some functionality, then the network should not interfere or participate in that mechanism unless it absolutely has to. Or, more simply put, the network should mind its own business. If a network function can be implemented correctly and completely using the functionalities available on the end-hosts, that function should be implemented on the end-hosts without delegating any task to the network (i.e., intermediary nodes in between the end-hosts). Because the end-to-end principle is one of the central design principles of the Internet, and because the practical means for implementing data discrimination violate the end-to-end principle, the principle often enters discussions about net neutrality (NN). The end-to-end principle is closely related, and sometimes seen as a direct precursor, to the principle of net neutrality.
The Internet is a global, interconnected and decentralised autonomous computer network. We can access the Internet via connections provided by Internet access providers (ISP). These access providers transmit the information that we send over the Internet in so-called data “packets”. The way in which data is sent and received on the Internet can be compared to sending the pages of a book by post in lots of different envelopes. The post office can send the pages by different routes and, when they are received, the envelopes can be removed and the pages put back together in the right order. When we connect to the Internet, each one of us becomes an endpoint in this global network, with the freedom to connect to any other endpoint, whether this is another person’s computer (“peer-to-peer”), a website, an e-mail system, a video stream or whatever.
The success of the Internet is based on two simple but crucial components of its architecture:
1. Every connected device can connect to every other connected device.
2. All services use the “Internet Protocol,” which is sufficiently flexible and simple to carry all types of content (video, e-mail, messaging etc.) unlike networks that are designed for just one purpose, such as the voice telephony system.
Internet is the abbreviation of the term internetwork, which describes the connection between computer networks all around the world on the basis of the same set of communication protocols. At its start in the 1960s, the Internet was a closed research network between just a few universities, intended to transmit text messages. The architectural design of the Internet was guided by two fundamental design principles: Messages are fragmented into data packets that are routed through the network autonomously (end-to-end principle) and as fast as possible (best-effort principle [BE]). This entails that intermediate nodes, so-called routers, do not differentiate packets based on their content or source. Rather, routers maintain routing tables in which they store the next node that lies on the supposedly shortest path to the packet’s destination address. However, as each router acts autonomously when deciding the path along which it sends a packet, no router has end-to-end control over the path a packet takes from sender to receiver. Moreover, it is possible, even likely, that packets from the same message flow may take different routes through the network. Packets are stored in a router’s queue if they arrive at a faster rate than the rate at which the router can send out packets. If the router’s queue is full, the packet is deleted (dropped) and must be resent by the source node. Full router queues are the main reason for congestion on the Internet. However, no matter how important a data packet may be, routers always process their queue according to the first-in-first-out principle. These fundamental principles always were (and remain in the context of the NN debate) key elements of the open Internet spirit. Essentially, they establish that all data packets sent to the network are treated equally and that no intermediate node can exercise control over the network as a whole.
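The first-in-first-out router queue with its drop-on-full behaviour can be sketched as follows. The queue capacity is invented for illustration; real router buffers hold thousands of packets.

```python
from collections import deque

class RouterQueue:
    """Drop-tail FIFO queue: arrivals beyond capacity are simply dropped."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.queue: deque[str] = deque()
        self.dropped = 0

    def enqueue(self, packet: str) -> None:
        if len(self.queue) >= self.capacity:
            self.dropped += 1          # full buffer: packet is dropped
        else:
            self.queue.append(packet)

    def dequeue(self) -> str:
        return self.queue.popleft()    # strictly first-in, first-out

router = RouterQueue(capacity=3)
for packet in ["p1", "p2", "p3", "p4", "p5"]:
    router.enqueue(packet)
print(router.dropped)    # 2 packets lost; the source must resend them
print(router.dequeue())  # p1 leaves first, no matter how "important" p5 was
```

This is the equal treatment at the heart of the BE principle: the queue has no notion of priority, only of arrival order.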
This historic and romantic view of the Internet neglects that Quality of Service (QoS) has always been an issue for the network of networks. Over and beyond the sending of mere text messages, there is a desire for reliable transmission of information that is time critical (low latency), or for which it is desired that data packets are received at a steady rate and in a particular order (low jitter). Voice communication, for example, requires both low latency and low jitter. This desire for QoS was manifested in the architecture of the Internet as early as January 1, 1983, when the Internet was switched over to the Transmission Control Protocol / Internet Protocol (TCP/IP). In particular, the Internet protocol version 4 (IPv4), which has constituted the nuts and bolts of the Internet ever since, already contains a type of service (TOS) field in its header by which routers could prioritize packets in their queues and thereby establish QoS. However, a general agreement on how to handle data with different TOS entries was never reached and thus the TOS field was not used accordingly. Consequently, in telecommunications engineering, research on new protocols and mechanisms to enable QoS in the Internet has flourished ever since, long before the NN debate came to life. In addition, data packets can even be differentiated solely based on what type of data they are carrying, without the need for an explicit marking in the protocol header. This is possible by means of so-called Deep Packet Inspection (DPI). All of these features are currently deployed in the Internet as we know it, and many of them have been deployed for decades. The NN debate, however, sometimes questions the existence and use of QoS mechanisms in the Internet and argues that the success of the Internet was only possible due to the BE principle. While the vision of an Internet based purely on the BE principle is certainly not accurate, some of these claims nevertheless deserve credit.
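The TOS byte really is the second byte of every IPv4 header. A sketch using Python's struct module to build a minimal 20-byte header and read the field back out (the addresses and field values are invented for illustration):

```python
import struct

def build_ipv4_header(tos: int) -> bytes:
    """Pack a minimal 20-byte IPv4 header with the given TOS byte."""
    version_ihl = (4 << 4) | 5            # IPv4, 5 x 32-bit header words
    return struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, tos, 20,             # version/IHL, TOS, total length
        0, 0,                             # identification, flags/fragment
        64, 6, 0,                         # TTL, protocol (TCP), checksum
        bytes([192, 0, 2, 1]),            # source address
        bytes([198, 51, 100, 7]),         # destination address
    )

header = build_ipv4_header(tos=0b10111000)  # e.g. a high-priority marking
assert header[1] == 0b10111000              # routers could read this byte
```

A router that honoured this field could move the packet ahead in its queue; the point above is that, absent an agreed convention, most routers never did.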
Another far-reaching event was the steady commercialization of the Internet in the 1990s. At about the same time, the disruptive innovation of content visualization and linkage via the Hyper Text Markup Language (HTML), the so-called World Wide Web (WWW), made the Internet a global success. Private firms began to invest heavily in backbone infrastructure and commercial ISPs provided access to the Internet, at first predominantly by dial-up connections. The average data traffic per household increased sharply with the availability of broadband and rich media content (Bauer et al., 2009). According to the Minnesota Internet Traffic Studies (Odlyzko et al., 2012), Internet traffic in the US is growing annually by about 50 percent. The increase in network traffic is the consequence of the on-going transition of the Internet to a fundamental universal access technology. Media consumption using traditional platforms such as broadcasting and cable is declining and content is instead consumed via the Internet. Today the commercial Internet ecosystem consists of several players. Internet users (IUs) are connected to the network by their local access provider (ISP), while content and service providers (CSPs) offer a wide range of applications and content to the mass of potential consumers. All of these actors are spread around the world and interconnect with each other over the Internet’s backbone, which is under the control of an oligopoly of big network providers (Economides, 2005). The Internet has become a trillion dollar industry (Pélissié du Rausas et al., 2011) and has emerged from a mere network of networks to the market of markets. Much of the NN debate is devoted to the question whether the market for Internet access should be a free market, or whether it should be regulated in the sense that some feasible revenue flows are to be prohibited.
The principal Internet services:
• E-mail: person-to-person messaging; document sharing.
• Newsgroups: discussion groups on electronic bulletin boards.
• Chatting and instant messaging: interactive conversations.
• Telnet: logging on to one computer system and doing work on another.
• File Transfer Protocol (FTP): transferring files from computer to computer.
• World Wide Web: retrieving, formatting, and displaying information (including text, audio, graphics, and video) using hypertext links.
The modern Internet was invented to be a free and open network that allows anyone with a Web connection to communicate directly with any individual or computer on that network. Over the past 25 years, the Internet has transformed the way we do just about everything. Think about the conveniences and services that wouldn’t exist without the Internet:
• instant access to information about everything
• email
• online shopping
• online social networks
• independent global news sources
• streaming movies, TV shows and music
• online banking
• video calls and videoconferencing
The Internet has evolved so quickly and works so well precisely because the technology behind the Internet is neutral. In other words, the physical cables, routers, switches, servers and software that run the Internet treat every byte of data equally. A streaming movie from Netflix shares the same crowded fiber optic cable as the pictures from your niece’s birthday. The Internet doesn’t pick favourites. That, at its core, is what net neutrality means. And that’s one of the most important reasons why you should care about it: to keep the Internet as free, open and fair as possible, just as it was designed to be.
Networking allows one computer to send information to and receive information from another. We may not always be aware of the numerous times we access information on computer networks. Certainly the Internet is the most conspicuous example of computer networking, linking millions of computers around the world, but smaller networks play a role in information access on a daily basis. We can classify network technologies as belonging to one of two basic groups. Local area network (LAN) technologies connect many devices that are relatively close to each other. Wide area network (WAN) technologies connect a smaller number of devices that can be many kilometers apart. Ethernet is a wired LAN technology, while Wi-Fi is a wireless LAN technology. WAN technologies are used to transmit data over long distances, and between different LANs and other localised computer networking architectures. Network nodes can be connected using any given technology, from circuit switched telephone lines (DSL) through radio waves (wireless broadband/mobile broadband) through optic fibre.
The ideal telecommunication network has the following characteristics: broadband, multi-media, multi-point, multi-rate and economical implementation for a diversity of services (multi-services). The Broadband Integrated Services Digital Network (B-ISDN) was intended to provide these characteristics. Asynchronous Transfer Mode (ATM) was promoted as a target technology for meeting these requirements.
A multi-media call may communicate audio, data, still images, or full-motion video, or any combination of these media. Each medium has different demands for communication quality, such as:
1. bandwidth requirement,
2. signal latency within the network, and
3. signal fidelity upon delivery by the network.
The information content of each medium may affect the information generated by other media. For example, voice could be transcribed into data via voice recognition, and data commands may control the way voice and video are presented. These interactions most often occur at the communication terminals, but may also occur within the network.
Internet access connects individual computer terminals, computers, mobile devices, and computer networks to the Internet, enabling users to access Internet services, such as email and the World Wide Web. Internet service providers (ISPs) offer Internet access through various technologies that offer a wide range of data signalling rates (speeds). Consumer use of the Internet first became popular through dial-up Internet access in the 1990s. By the first decade of the 21st century, many consumers in developed nations used faster, broadband Internet access technologies. As of 2014, broadband was ubiquitous around the world, with a global average connection speed exceeding 4 Mbit/s.
Choosing an Internet service:
It all depends on where you live and how much speed you need. Internet service providers (ISPs) usually offer different levels of speed based on your needs. If you’re mainly using the Internet for email and social networking, a slower connection might be all you need. However, if you want to download a lot of music or watch streaming movies, you’ll want a faster connection. You’ll need to do some research to find out what the options are in your area. Here are some common types of Internet service.
Dial-up is generally the slowest type of Internet connection, and you should probably avoid it unless it is the only service available in your area. Like a phone call, a dial-up modem will connect you to the Internet by dialling a number, and it will disconnect when you are done surfing the Web. Unless you have multiple phone lines, you will not be able to use your land line and the Internet at the same time with a dial-up connection.
DSL (digital subscriber line):
DSL service uses a broadband connection, which makes it much faster than dial-up. Like cable Internet, DSL is a high-speed Internet service; it provides high-speed networking over ordinary phone lines using broadband modem technology. DSL allows Internet and telephone service to work over the same phone line without requiring customers to disconnect either their voice or Internet connections. DSL technology theoretically supports data rates of 8.448 Mbps, although typical rates are 1.544 Mbps or lower. DSL Internet services are used primarily in homes and small businesses. DSL only works over a limited physical distance and remains unavailable in many areas where the local telephone infrastructure does not support the technology, so you'll need to contact your local ISP for information about your area. DSL connects to the Internet via a phone line but does not require you to have an active land-line voice service at home. Unlike dial-up, it will always be on once it's set up, and you'll be able to use the Internet and your phone line simultaneously.
Cable service connects to the Internet via cable TV, although you do not necessarily need to have cable TV in order to get it. It uses a broadband connection and can be faster than both dial-up and DSL service; however, it is only available in places where cable TV is available.
A satellite connection uses broadband but does not require cable or phone lines; it connects to the Internet through satellites orbiting the Earth. As a result, it can be used almost anywhere in the world, but the connection may be affected by weather patterns. A satellite connection also relays data on a delay, so it is not the best option for people who use real-time applications, like gaming or video conferencing.
3G and 4G:
3G and 4G service is most commonly used with mobile phones and tablet computers, and it connects wirelessly through your ISP’s network. If you have a device that’s 3G or 4G enabled, you’ll be able to use it to access the Internet away from home, even when there is no Wi-Fi connection. However, you may have to pay per device to use a 3G or 4G connection, and it may not be as fast as DSL or cable.
If you’re out and about with an internet device like a laptop, tablet or smartphone, you might want to connect at a wireless hotspot. Wireless ‘hotspots’ are places like libraries and cafés, which offer you free access to their broadband connection (Wi-Fi). You may need to be a member of the library or a customer at a café to get the password for the wireless connection.
Wired vs. wireless internet access:
A wired network connects devices to the Internet or other network using cables. The most common wired networks use cables connected to Ethernet ports on the network router on one end and to a computer or other device on the cable’s opposite end. A wireless local-area network (LAN) uses radio waves to connect devices such as laptops to the Internet and to your business network and its applications. When you connect a laptop to a Wi-Fi hotspot at a cafe, hotel, airport lounge, or other public place, you’re connecting to that business’s wireless network. Almost all of the discussion surrounding net neutrality has been confined to wired (that is, cable, DSL and fiber) broadband in the U.S., while in India most internet access is wireless mobile broadband. India has an abnormally high mobile-to-fixed broadband ratio of 4:1 and only 15.2 million wired broadband connections in a country of 1.25 billion; its fixed broadband penetration ratio is 1.2 per 100, as against the world average of 9.4 per 100. The Open Internet Order by the FCC adopted definitions for “fixed” and “mobile” Internet access service. It defined “fixed broadband Internet access service” to expressly include “broadband Internet access service that serves end users primarily at fixed endpoints using stationary equipment … fixed wireless services (including fixed unlicensed wireless services), and fixed satellite services.” It defined “mobile broadband Internet access service” as “a broadband Internet access service that serves end users primarily using mobile stations.” So fixed internet access includes both wired and wireless technology, while mobile internet access is always wireless. The transparency rule applies equally to both fixed and mobile broadband Internet access service, but the no-blocking rule applied a different standard to mobile broadband Internet access services, and mobile Internet access service was excluded from the unreasonable discrimination rule.
| Wired network | Wireless network |
| --- | --- |
| Consumers use cable (cable TV), copper wire (DSL) or fibre-optic to connect to the internet | Consumers use radio waves to connect to the internet via a 3G/4G data card containing a modem (mobile broadband) or through Wi-Fi using a LAN |
| Large capacity of data transmission, volume uncapped | Requires the use of spectrum, a scarce public resource; limited capacity of data transmission; restrictive volume caps |
| Multiple simultaneous users do not significantly affect speed | Multiple simultaneous users significantly reduce speed |
| Majority of the American population uses a wired network | Majority of the Indian population uses a wireless network |
| Net neutrality debate mainly involves wired transmission in America | Net neutrality debate mainly involves wireless transmission in India |
| Wired connection speed is near maximum throughput | Wireless connection speed will be less than the maximum throughput due to various factors reducing signal strength |
| Wired connections generally have faster internet speed | Wireless connections generally have slower internet speed |
| You have to access the internet at a fixed point | You can move around with the device within the network coverage area for internet access |
| Voice and video quality not significantly affected by network congestion | Voice and video quality significantly affected by network congestion |
What is spectrum?
Spectrum in wireless telephone/internet transmission is the radio frequency spectrum that ranges from very low frequency radio waves at around 10 kHz (30 kilometres wavelength) up to 100 GHz (3 millimetres wavelength). The radio spectrum is divided into frequency bands reserved for a single use or a range of compatible uses. Within each band, individual transmitters often use separate frequencies, or channels, so they do not interfere with each other. Because there are so many competing uses for wireless communication, strict rules are necessary to prevent one type of transmission from interfering with the next. And because spectrum is limited — there are only so many frequency bands — governments must oversee appropriate licensing of this valuable resource to facilitate use in all bands. Governments spend a considerable amount of time allocating particular frequencies for particular services, so that one service does not interfere with another. These allocations are agreed internationally, so that interference across borders, as well as between services, is minimised. Not all radio frequencies are equal. In general, lower frequencies can reach further beyond the visible horizon and are better at penetrating physical obstacles such as rain or buildings. Higher frequencies have greater data-carrying capacity, but less range and ability to pass through obstacles. For example, mobile broadband uses spectrum from 225 MHz to 3700 MHz, while Wi-Fi uses the 2.4 and 5 GHz frequencies. Capacity is also dependent on the amount of spectrum a service uses — the channel bandwidth. For many wireless applications, the best trade-off of these factors occurs in the frequency range of roughly 400 MHz to 4 GHz, and there is great demand for this portion of the radio spectrum.
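The relationship between frequency and wavelength quoted above can be checked with a short calculation. The snippet below is my own illustration, using the standard formula wavelength = c / f:

```python
# Illustrative helper (not from the text): convert a radio frequency to its
# wavelength, using wavelength = c / f with c the speed of light.
C = 299_792_458  # speed of light in metres per second

def wavelength_m(frequency_hz: float) -> float:
    """Return the wavelength in metres for a given frequency in hertz."""
    return C / frequency_hz

# The figures quoted above check out, approximately:
print(round(wavelength_m(10e3)))   # 10 kHz -> ~29979 m, i.e. roughly 30 km
print(wavelength_m(100e9) * 1000)  # 100 GHz -> ~3 mm
```

The same arithmetic confirms the general rule stated above: as frequency rises, wavelength (and hence range and penetration) falls.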
All communication devices that use digital radio transmissions operate in a similar way. A transmitter generates a signal that contains encoded voice, video or data at a specific radio frequency, and this is radiated into the environment by an antenna (also known as an aerial). This signal spreads out in the environment, of which a very small portion is captured by the antenna of the receiving device, which then decodes the information. The received signal is incredibly weak — often only one part in a trillion of what was transmitted. In the case of a mobile phone call, a caller’s voice is converted by the handset into digital data, transmitted via radio to the network operator’s nearest tower or base station, transferred to another base station serving the recipient’s location, and then transmitted again to the recipient’s phone, which converts the signal back into audio through the earpiece. There are a number of standards for mobile phones and base stations, such as GSM, WCDMA and LTE, which use different methods for coding and decoding, and ensure that users can only receive voice calls and data that are intended for them.
The bandwidth of a radio signal is the difference between the upper and lower frequencies of the signal. For example, in the case of a voice signal having a minimum frequency of 200 hertz (Hz) and a maximum frequency of 3,000 Hz, the bandwidth is 2,800 Hz (2.8 kHz). The amount of bandwidth needed for 3G services could be as much as 15-20 MHz, whereas 2G services use a bandwidth of 30-200 kHz. Hence, 3G requires far more bandwidth. Do not confuse the bandwidth of 2G/3G spectrum with the bandwidth of internet transmission, i.e. internet speed.
What is Broadband?
Broadband is a technology that transmits data at high speed along cables, ISDN / DSLs (Digital Subscriber Lines) and mobile phone networks. The most common type of broadband is ADSL (carried along phone lines), though cable (using new fibre-optic cables) and mobile broadband (using 3G and 4G mobile reception) are hot contenders to topple ADSL’s dominance. ADSL broadband comes from your local telephone exchange, through a Fixed Line Access Network made out of copper wires. These are the telephone lines that you see in the street. The lines in the street connect to the wiring inside your house and provide you with an internet and phone connection through the socket on the wall. Unlike the copper wires of an ADSL connection, cables are partially made of fibre-optic material, which allows for much faster broadband speeds and increased reliability. The other advantage of cable is that it also allows for the transmission of audio and visual signals, which means you can get both landline and digital TV services from your cable broadband provider. Mobile broadband uses 3G and 4G mobile phone technology. These are made possible by two complementary technologies, HSDPA and HSUPA (high speed download and upload packet access, respectively).
Broadband provides improved access to Internet services such as:
1. Faster World Wide Web browsing
2. Faster downloading of documents, photographs, videos, and other large files
3. Telephony, radio, television, and videoconferencing
4. Virtual private networks and remote system administration
5. Online gaming, especially massively multiplayer online role-playing games which are interaction-intensive
Broadband technologies supply considerably higher bit rates than dial-up, generally without disrupting regular telephone use. Various minimum data rates and maximum latencies have been used in definitions of broadband, ranging from 64 kbit/s up to 4.0 Mbit/s. In 1988 the CCITT standards body defined “broadband service” as requiring transmission channels capable of supporting bit rates greater than the primary rate which ranged from about 1.5 to 2 Mbit/s. A 2006 Organization for Economic Co-operation and Development (OECD) report defined broadband as having download data transfer rates equal to or faster than 256 kbit/s. And in 2015 the U.S. Federal Communications Commission (FCC) defined “Basic Broadband” as data transmission speeds of at least 25 Mbit/s downstream (from the Internet to the user’s computer) and 3 Mbit/s upstream (from the user’s computer to the Internet). The trend is to raise the threshold of the broadband definition as higher data rate services become available.
Proponents of net neutrality regulations say network operators have continued to under-invest in infrastructure. However, according to Copenhagen Economics, US investment in telecom infrastructure is 50 percent higher than that of the European Union. As a share of GDP, US broadband investment trails only the UK and South Korea, and only slightly, while exceeding Japan, Canada, Italy, Germany, and France sizably. On broadband speed, Akamai reported that the US trails only South Korea and Japan among its major trading partners, and trails only Japan in the G-7 in both average peak connection speed and the percentage of the population connected at 10 Mbit/s or higher, but is substantially ahead of most of its other major trading partners. The White House reported in June 2013 that U.S. connection speeds are the fastest compared to other countries with either a similar population or land mass. Broadband speeds in the United States, both wired and wireless, are significantly faster than those in Europe. Broadband investment in the United States is several multiples that of Europe. And broadband’s reach is much wider in the United States, despite its much lower population density. In other words, broadband speed closely tracks investment in broadband infrastructure. I live in the small town of Daman, where the maximum internet download speed I have got from any ISP is 2.5 Mbps. This is because of poor broadband infrastructure in India.
Bandwidth (internet speed):
In computer networks, bandwidth is used as a synonym for data transfer rate (internet speed), the amount of data that can be carried from one point to another in a given time period (usually a second). Network bandwidth is usually expressed in bits per second (bps); modern networks typically have speeds measured in the millions of bits per second (megabits per second, or Mbps) or billions of bits per second (gigabits per second, or Gbps). How fast your internet is depends on three factors: download speed (how fast you can retrieve something from the internet), upload speed (sending something to a remote location on the internet), and latency (lag time between each point during information transfer). Download speed is what you experience the most, and a slow one can leave you tapping your fingers for what seems like minutes before a web page shows up on your screen. If you’re streaming movies from Netflix, download speed is important. The higher the number for download speed, the quicker the movie will get from the Netflix website to your computer. A movie downloaded at 15 Mbps should take one-tenth as long as on a 1.5 Mbps connection. Note that bandwidth is not the only factor that affects network performance: there is also packet loss, latency and jitter, all of which degrade network throughput and make a link perform like one with lower bandwidth. A network path usually consists of a succession of links, each with its own bandwidth, so the end-to-end bandwidth is limited to the bandwidth of the lowest-speed link (the bottleneck). Different applications require different bandwidths. This is important because some sites use much more bandwidth than others depending on their content and media. Video is one of the main ways to use a lot of bandwidth. For example, sites like Netflix and YouTube use almost half of North America’s internet bandwidth during peak hours of the day (according to CNET).
An instant messaging conversation might take less than 1,000 bits per second (bps); a voice over IP (VoIP) conversation requires 56 kilobits per second (Kbps) to sound smooth and clear. Standard definition video (480p) works at 1 megabit per second (Mbps), but HD video (720p) needs around 4 Mbps, and HDX (1080p) more than 7 Mbps. Effective bandwidth (the highest reliable transmission rate a path can provide) is measured with a bandwidth test. This rate can be determined by repeatedly measuring the time required for a specific file to leave its point of origin and successfully download at its destination.
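The bandwidth test described above boils down to simple arithmetic. Here is a minimal sketch (the function name is my own) that derives the effective rate from a timed file transfer:

```python
# A minimal sketch of a bandwidth test: time how long a file of known size
# takes to transfer, then derive the effective rate from size / duration.

def effective_bandwidth_mbps(file_size_bytes: int, seconds: float) -> float:
    """Effective transfer rate in megabits per second (1 Mbit = 1e6 bits)."""
    bits = file_size_bytes * 8
    return bits / seconds / 1e6

# A 15 MB file that arrives in 8 seconds implies a 15 Mbps effective rate:
print(effective_bandwidth_mbps(15_000_000, 8.0))  # 15.0
```

In practice the test is repeated several times and averaged, since any single transfer is affected by the packet loss, latency and jitter mentioned above.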
Speed vs. latency:
There is more to an Internet connection’s speed than just its bandwidth. This is especially true with satellite Internet connections, which can offer speeds of up to 15 Mbps – but will still feel slow. Latency is defined as the time it takes for a source to send a packet of data to a receiver. Latency is typically measured in milliseconds. Latency is independent of internet speed. Consider the analogy of a car travelling at 100 mph from A to B. This might sound fast but gives no indication of whether the car has driven the most direct route; if direct, fine; if from A to C to D to B the journey is going to take longer. So with network traffic; you might have a fast Internet connection, but if the route between the user’s computer and the server being accessed is indirect, response times will be slower. Latency is a true indicator of whether network traffic has taken the shortest possible route. The lower the latency (the fewer the milliseconds), the better the network performance. Together, latency and bandwidth define the speed and capacity of a network. Network latency is the term used to indicate any kind of delay that happens in data communication over a network. Network connections in which small delays occur are called low-latency networks whereas network connections which suffer from long delays are called high-latency networks. High latency creates bottlenecks in any network communication. It prevents the data from taking full advantage of the network pipe and effectively decreases the communication bandwidth. The impact of latency on network bandwidth can be temporary or persistent based on the source of the delays. On DSL or cable Internet connections, latencies of less than 100 milliseconds (ms) are typical and less than 25 ms desired. Satellite Internet connections, on the other hand, average 500 ms or higher latency. Wireless mobile broadband latency varies from 80 ms (LTE) to 125 ms (HSPA).
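The interplay of latency and bandwidth can be illustrated with a toy model, entirely my own simplification, in which total transfer time is latency plus payload size divided by bandwidth. Real protocols such as TCP add further delays (slow start, retransmissions), so treat this as a rough sketch:

```python
# Toy model: total time = latency + payload size / bandwidth.
# Illustrates why a "fast" satellite link still feels slow for small requests.

def transfer_time_ms(size_bits: float, bandwidth_bps: float, latency_ms: float) -> float:
    """Transfer time in milliseconds under the simple latency + serialization model."""
    return latency_ms + (size_bits / bandwidth_bps) * 1000

# A small 10 kbit request over a 15 Mbps satellite link (500 ms latency)
# still takes far longer than over a 5 Mbps cable link (25 ms latency):
satellite = transfer_time_ms(10_000, 15e6, 500)  # ~500.7 ms
cable = transfer_time_ms(10_000, 5e6, 25)        # ~27.0 ms
print(satellite > cable)  # True
```

For small payloads the latency term dominates completely, which is exactly why the satellite connection in the text "will still feel slow" despite its headline speed.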
How to measure latency:
One of the first things to try when your connection doesn’t seem to be working properly is the ping command. Open a Command Prompt window from your Start menu and run a command like ping google.com or ping howtogeek.com. This command sends several packets to the address you specify. The web server responds to each packet it receives. The output reports the percentage of packet loss and the time each packet takes. Ping cannot perform accurate measurements, principally because it uses the ICMP protocol, which is used only for diagnostic or control purposes and differs from real communication protocols such as TCP. Furthermore, routers and ISPs might apply different traffic-shaping policies to different protocols. For more accurate measurements it is better to use specific software (for example: lft, paketto, hping, superping.d, NetPerf, IPerf).
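The summary line that ping prints (minimum, average and maximum round-trip time) is easy to reproduce for a list of measured samples. The helper below is a hypothetical illustration; the sample values are made up:

```python
# Ping-style summary statistics over a list of round-trip times in ms.
# Reproduces the min/avg/max summary line ping prints at the end of a run.

def rtt_summary(samples_ms):
    """Return (min, avg, max) round-trip time, like ping's closing summary."""
    return (min(samples_ms),
            sum(samples_ms) / len(samples_ms),
            max(samples_ms))

# Four hypothetical replies from a nearby server:
print(rtt_summary([24.1, 25.3, 23.8, 26.0]))  # min 23.8, avg ~24.8, max 26.0
```

A wide spread between the minimum and maximum is the jitter referred to earlier, and it matters as much as the average for real-time traffic.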
A very good example of when bandwidth directly correlates to speed is when you are downloading a file across the network or Internet. Greater bandwidth means that more of the file is being transferred at any given time. The file would therefore be downloaded faster. This is also applicable when you are browsing the Internet, as greater bandwidth results in web pages loading faster and smoother video streaming. But in certain cases, speed and bandwidth do not literally mean the same thing. This is true when you talk about real-time applications like VoIP or online gaming. In these cases, latency or response time is more important than having more bandwidth. Even if you have a lot of bandwidth, you may experience choppy voice transmission or response lag if your latency is too high. Upgrading your bandwidth would probably not help, since bandwidth is no longer the limiting factor. Latency can’t be upgraded easily, as it requires minimizing any noise as well as the time it takes for packets to move from source to destination and vice versa. To obtain the best possible speed for your network or Internet connection, it is not enough to have a high-bandwidth connection. It is also important that your latency is low, to ensure that the information reaches you quickly enough. This only matters, though, if you have enough bandwidth, as low latency without enough bandwidth would still result in a very slow connection.
The speed at which websites download from the Internet is dependent on the following factors:
1. Web design:
•Design of the web page – number of graphics and their size, use of frames and tables
•Size of the web page – overall length of page. Note: having valid, compliant html/css coding on your website will allow your browser to render the page much more quickly
2. Your browsing history:
•Whether or not you have ever accessed the site before. If you have accessed it recently, the files may be in your cache and the site will load more quickly the second and subsequent times.
•How full your web browser cache is – you may need to clear your cache if you’ve set it to only reserve a small amount of space.
3. Your computer configuration and settings:
•How much memory you have in your computer – the more RAM the better.
•The size of your network buffer – most overlooked setting, have your IT staff review the settings.
•How fragmented the data on your hard drive is – you may need to run a defragment program.
•The number of programs you have running simultaneously while downloading. Running multiple programs hogs valuable RAM space.
•Cookies should be cleared regularly (bi-weekly or monthly) to help reduce the load on your browser, which would otherwise slow down your performance.
4. The network used to access the site:
•Speed of your connection to the Internet – your modem/cable/DSL/wireless speed.
•Quality of your telephone/broadband line – bad connections mean slower transmissions.
•Access speed on the server where the site is hosted – if the site is hosted on a busy server, it may slow down access speed.
•How much traffic there is to the site at the same time you are trying to access it.
•The load on the overall network at your ISP – how busy it is.
Any or all of the above can slow download time. Web designers only have control over the first two items!
5. Limitations of your computer:
There are also other ways you can improve your speeds; here are a few:
•Update to the latest Web browser and Operating System versions.
•Clear out your cache: Old information retained by your web browser may be making it perform slower than it could.
•Reformatting your hard drive: Although technical in nature, by reloading your Operating System, you will be able to get rid of unnecessary files that linger around your computer.
•Change your ISP: As drastic as this may sound, some providers oversell their services. As a result, they simply cannot supply their users with speeds needed by modern day web activities. Before you take the plunge to get a new ISP, make sure you read some reviews about them and ask your friends for recommendation.
•It may be time for an upgrade! Get a new computer! Modern software programs take up more and more resources and it is quite possible that your current hardware simply cannot keep up to date with the current standards.
There are other factors involved in internet speed:
1. End-User Hardware Issues: If you have an old router that just can’t keep up with modern speeds or a poorly configured Wi-Fi connection that’s being slowed down by interference, you won’t actually experience the connection speeds you’re paying for — and that’s not the Internet service provider’s fault.
2. Distance from ISP: The further you are away from your Internet service provider’s hardware, the weaker your signal can become. If you’re in a city, you’re likely to have a faster connection than you would in the middle of the countryside.
3. Congestion: You’re sharing an Internet connection line with many other customers from your Internet service provider, so congestion can result as all these people compete for the Internet connection. This is particularly true if all your neighbours are using BitTorrent 24/7 or using other demanding applications.
4. Time of Day: Because more people are probably using the shared connection line during peak hours — around 6pm to midnight for residential connections — you may experience slower speeds at these times.
5. Throttling: Your Internet service provider may slow down (or “throttle”) certain types of traffic, such as peer-to-peer traffic. Even if they advertise “unlimited” usage, they may slow down your connection for the rest of the month after you hit a certain amount of data downloaded. Throttling is a process by which the amount of bandwidth you use through your Internet provider is limited in some way, usually in the form of slower upload or download speeds. This is done to allow others to more effectively connect to the Internet Service Provider’s servers. Net neutrality supporters are concerned with throttling because they believe current legislation is leaving the doors open to allow ISPs to throttle based on their own discretion of what sites you visit. For example, if you plan to watch “House of Cards” in UltraHD on Netflix, but your ISP decides it’s going to “throttle” access to Netflix, you may have to settle for some grainy 720p or worse, 480p on your new giant, curved Ultra HDTV.
6. Server-Side Issues: Your download speeds don’t just depend on your Internet service provider’s advertised speeds. They also depend on the speeds of the servers you’re downloading from and the routers in between. For example, if you’re in the US and experience slowness when downloading something from a website in Europe, it may not be your Internet service provider’s fault at all — it may be because the website in Europe has a slow connection or the data is being slowed down at one of the routers in between you and the European servers.
Many factors can impact Internet connection speed, and it’s hard to know which is the precise problem. Nevertheless, in real-life usage, you’ll generally experience slower speeds than your Internet service provider advertises — if only because it’s so dependent on other people’s Internet connections.
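The throttling described in point 5 is often implemented with something like a token bucket: tokens accrue at the permitted rate, and a packet may only pass when enough tokens are available to cover it. The sketch below is a generic illustration of the mechanism, not any particular ISP's implementation:

```python
# Generic token-bucket rate limiter, a common way to enforce a bandwidth cap.
# Tokens (in bytes) accrue at the allowed rate; a packet passes only if the
# bucket holds enough tokens to cover it.

class TokenBucket:
    def __init__(self, rate_bytes_per_s: float, capacity_bytes: float):
        self.rate = rate_bytes_per_s
        self.capacity = capacity_bytes
        self.tokens = capacity_bytes  # bucket starts full

    def elapse(self, seconds: float) -> None:
        """Accrue tokens for elapsed time, capped at the bucket's capacity."""
        self.tokens = min(self.capacity, self.tokens + self.rate * seconds)

    def try_send(self, packet_bytes: int) -> bool:
        """Send the packet if enough tokens are available, else refuse it."""
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False

bucket = TokenBucket(rate_bytes_per_s=1000, capacity_bytes=1500)
print(bucket.try_send(1500))  # True: the bucket starts full
print(bucket.try_send(500))   # False: the bucket is now empty
bucket.elapse(0.5)            # half a second accrues 500 tokens
print(bucket.try_send(500))   # True
```

The capacity allows short bursts above the average rate, which is why throttled connections often feel fast for the first few seconds of a download and then slow to the capped rate.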
Can your download/upload speed be affected by the number of simultaneous users on any network, wired or wireless?
The more users on any network, wired or wireless, the less bandwidth available to each of them. The type of activity also has a huge impact on performance. If everyone is only checking e-mail it’s not likely to cause slowdowns. But if you have someone trying to stream a Netflix movie and someone else running a Skype video chat you can probably forget about playing an online game as well.
When multiple connected computers or devices, such as a mobile phone and a wireless router, connect simultaneously to the same network, Internet access speed can be reduced as a result of having to share the available bandwidth. Wireless connection throughput is subject to conditions such as the local radio environment, the number of devices sharing the same wireless network, the range of the wireless coverage, interference, physical obstacles and the capability of the receiving end. As a result, actual wireless connection speed will be less than the maximum throughput. In practice very few wireless networks can ever achieve their full quoted data rate; throughput is strongly dependent on signal strength. There are then various overheads, for TCP, IP and the wireless transport layer, including traffic that manages the connection even if you are not actively using it currently. These overheads include acknowledgments that need to be sent when data is received (and vice versa). Each website is served by a server connected to the network, and network bandwidth is distributed according to a website’s usage. So when the number of users is low, your connection speed will be faster. In contrast, when the number of simultaneous users is high, the network server you connect to will be congested, causing the connection speed to drop, especially for an overseas network server with a large number of users. In summary, the end-to-end data throughput is also dependent on the bandwidth of the connection from the web or network server to the internet. The speed of the flow depends not only on bandwidth and the number of users but also on the routers and network conditions between the two devices involved in the flow.
Upstream and Downstream Bandwidth:
When a device uses the Internet, information flows in two ways: to the device and from the device. When data flows to the device, the movement of information is downstream. When data flows from the device, the movement is upstream. Typical Internet processes involve more downstream usage than upstream usage; information flows to the device more than it flows from it. As a result, most Internet connections prioritize downstream bandwidth. However, for large data transfers, remote access, video chats and voice over IP calls, more upstream bandwidth is required. Many Internet routers have Quality of Service, or QoS, settings that can prioritize bandwidth usage in the case of increased upstream flow.
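The QoS prioritisation described above can be pictured as a priority queue: when upstream bandwidth is tight, higher-priority packets (say, VoIP frames) are sent before bulk traffic. A minimal sketch, with made-up packet names:

```python
# Illustrative QoS-style scheduler: queued packets drain in priority order
# (lower number = higher priority), so delay-sensitive traffic goes first.
import heapq

def send_order(packets):
    """packets: list of (priority, name) tuples; returns names in send order."""
    heap = list(packets)
    heapq.heapify(heap)
    return [name for _, name in (heapq.heappop(heap) for _ in range(len(heap)))]

queue = [(2, "bulk upload"), (0, "VoIP frame"), (1, "video chat")]
print(send_order(queue))  # ['VoIP frame', 'video chat', 'bulk upload']
```

Real router QoS is more elaborate (weighted queues, per-device rules), but the principle is the same: the scarce upstream link serves latency-sensitive packets first.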
Multiple Users of a Single Connection:
When multiple people use a single connection, more devices consume the finite bandwidth of the connection. Therefore, each device is allocated a smaller portion of the available bandwidth. As a result, all devices may experience a slower data transfer. Some router QoS settings allow you to prioritize device bandwidth use so that certain devices have increased access to the bandwidth.
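In its simplest form, the sharing effect reduces to dividing the connection's capacity by the number of active devices. Real networks are messier (unequal demand, protocol overheads), so the function below is only a back-of-the-envelope model of my own:

```python
# Back-of-the-envelope model: each active device gets an equal slice of the
# connection's capacity. Real allocation varies with demand and overheads.

def per_device_share_mbps(capacity_mbps: float, active_devices: int) -> float:
    """Equal-share bandwidth per device on a shared connection."""
    return capacity_mbps / active_devices

# A 20 Mbps line shared by 4 streaming devices leaves 5 Mbps each, below
# the roughly 7 Mbps quoted earlier for 1080p HDX video:
print(per_device_share_mbps(20, 4))  # 5.0
```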
Now let me discuss two human factors that influence which site people choose, besides the obvious cost and quality factors:
1. Human intolerance for slow-loading sites
2. Human audio-visual perception
Consumer intolerance to slow-loading sites:
Video Stream Quality impacts viewer behaviour:
The Internet is radically transforming all aspects of human society by enabling a wide range of applications for business, commerce, entertainment, news and social networking. Perhaps no industry has been transformed more radically than the media and entertainment segment of the economy. As media such as television and movies migrate to the Internet, content providers, whose ranks include major media companies (e.g., NBC, CBS), news outlets (e.g., CNN), sports organizations (e.g., NFL, MLB), and video subscription services (e.g., Netflix, Hulu), face twin challenges. The first major challenge for content providers is providing a high-quality streaming experience for their viewers, where videos are available without failure, start up quickly, and stream without interruptions. A major technological innovation of the past decade that allows content providers to deliver higher-quality video streams to a global audience of viewers is the content delivery network (or CDN for short). CDNs are large distributed systems that consist of hundreds of thousands of servers placed in thousands of ISPs close to end users. CDNs employ several techniques for transporting media content from the content provider’s origin to servers at the “edges” of the Internet, where it is cached and served with higher quality to the end user. The second major challenge of a content provider is to actually monetize their video content through ad-based or subscription-based models. Content providers track key metrics of viewer behavior that lead to better monetization, primary among them viewer abandonment, engagement, and repeat viewership. Content providers know that reducing the abandonment rate, increasing the play time of each video watched, and enhancing the rate at which viewers return to their site increase opportunities for advertising and upselling, leading to greater revenues.
The key question is whether, and by how much, increased stream quality can cause changes in viewer behavior that are conducive to improved monetization. Relatively little is known from a scientific standpoint about the all-important causal link between video stream quality and viewer behavior for online media. While understanding the link between stream quality and viewer behavior is of paramount importance to the content provider, it also has profound implications for how a CDN must be architected. An architect is often faced with trade-offs on which quality metrics the CDN should optimize, and a scientific study of which quality metrics have the most impact on viewer behavior can guide these choices. As an example of viewer behavior impacting CDN architecture, the authors performed small-scale controlled experiments on viewer behavior a decade ago that established the relative importance of a video starting up quickly and playing without interruptions. These behavioral studies motivated an architectural feature called prebursting, deployed on Akamai’s live streaming network, which enabled the CDN to deliver streams to a media player at higher than the encoded rate for short periods of time in order to fill the media player’s buffer more quickly, resulting in the stream starting up faster and playing with fewer interruptions. It is notable that the folklore on the importance of startup time and rebuffering was confirmed in two recent important large-scale scientific studies. The current work sheds further light on the important nexus between stream quality and viewer behavior and, importantly, provides the first evidence of a causal impact of quality on behavior.
The authors study the impact of video stream quality on viewer behavior in a scientific, data-driven manner, using extensive traces from Akamai’s streaming network that include 23 million views from 6.7 million unique viewers. They show that viewers start to abandon a video if it takes more than 2 seconds to start up, with each incremental delay of 1 second resulting in a 5.8% increase in the abandonment rate. Further, they show that a moderate amount of interruption can decrease the average play time of a viewer by a significant amount: a viewer who experiences a rebuffer delay equal to 1% of the video duration plays 5% less of the video than a similar viewer who experienced no rebuffering. Finally, the authors show that a viewer who experienced a failure is 2.32% less likely to revisit the same site within a week than a similar viewer who did not. On average, YouTube streams 4 billion hours of video per month. That’s a lot of video, but it’s only a fraction of the larger online-streaming ecosystem. For video-streaming services, making sure clips always load properly is extremely challenging, and this study reveals that it matters to video providers, too. Maybe this has happened to you: you’re showing a friend some hilarious video that you found online, and right before you get to the punch line, a little loading dial pops up in the middle of the screen. Buffering kills comedic timing, and according to this study it kills attention spans, too. People are pretty patient for up to two seconds. If you start out with, say, 100 viewers and the video hasn’t started in five seconds, about one-quarter of those viewers are gone; if it doesn’t start in 10 seconds, almost half of them are gone. If a video doesn’t load in time, people get frustrated and click away. This may not come as a shock, but until now it hadn’t come as an empirically supported fact, either.
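The reported rule of thumb (abandonment begins after 2 seconds and grows by roughly 5.8% per additional second of startup delay) can be turned into a back-of-the-envelope model. This is a crude linear sketch of the finding, not the paper’s actual statistical model; the function name, the linear extrapolation, and the cap at 100% are my own assumptions.

```python
def abandonment_pct(startup_delay_s, threshold_s=2.0, rate_per_s=5.8):
    """Estimated share of viewers (in percent) who abandon a video that
    takes `startup_delay_s` seconds to start, under a linear extrapolation
    of the reported 5.8%-per-second figure beyond a 2-second threshold."""
    extra_delay = max(0.0, startup_delay_s - threshold_s)
    return min(100.0, extra_delay * rate_per_s)

for delay in (2, 5, 10):
    print(f"{delay:>2}s startup -> ~{abandonment_pct(delay):.1f}% of viewers gone")
```

At 10 seconds this linear sketch predicts about 46% abandonment, consistent with the “almost half” figure above; at 5 seconds it predicts about 17%, somewhat below the quoted one-quarter, a reminder that the real abandonment curve is not exactly linear.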
This is really the first large-scale study of its kind that tries to relate video-streaming quality to viewer behavior.
User intolerance for slow-loading sites:
The figure above shows the abandonment rate of online video users for different kinds of Internet connectivity. Users with faster Internet connectivity (e.g., fiber) abandon a slow-loading video at a faster rate than users with slower connectivity (e.g., cable or mobile). A “fast lane” in the Internet can irrevocably decrease the user’s tolerance for the relative slowness of the “slow lane”.
Voice, video and human perception:
Voice and video signals must arrive quickly and in a specific sequence. Conversations become difficult if words or syllables go missing or are delayed by more than a couple of tenths of a second. Our eyes can tolerate a bit more variation in video than our ears can tolerate in voice; on the other hand, video needs much more bandwidth. The human hearing system does not tolerate these flaws well because of its acute sense of timing: twenty milliseconds of sudden silence can disturb a conversation. Voice and video can be converted into a series of packets coded to identify their contents as requiring transmission at a regular rate. For telephony, the packet priority codes are designed to keep the conversation flowing without annoying jitter (variations in when the packets are received). Similar codes help keep video packets flowing at the proper rate. In practice, these flow controls are not crucial in today’s fixed broadband networks, which generally have enough capacity to transmit voice and video, but mobile apps are a different story. The Internet discards packets that arrive after a maximum delay, and it can request retransmission of missing packets. That’s okay for Web pages and downloads, but real-time conversations can’t wait. Software may skip a missing packet or fill the gap by repeating the previous packet. That’s tolerable for vowels, which are long, even sounds, so a packet lost from the middle of “zoom” would go unnoticed. But consonants are short and sharp, so losing a packet at the end of “can’t” turns it into “can.” Severe congestion can cause whole sentences to vanish and make conversation impossible. Such congestion is most serious on wireless networks, but it already affects fixed broadband and backbone networks as well. Consumers frustrated by long video-buffering delays sometimes blame cable companies for intentionally throttling streaming video from companies like Netflix.
But in 2014 the Measurement Lab consortium reported that the real bottlenecks are at interconnections between Internet access providers and backbone networks.
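The concealment strategy described above (skip a missing packet or repeat the previous one) is simple enough to sketch. This is an illustrative toy, not any real VoIP codec’s algorithm; the frame size and silence fill are assumed values.

```python
def conceal_losses(frames, frame_bytes=160):
    """Fill gaps left by lost packets (None) by repeating the previous
    audio frame; before anything has arrived, substitute silence.
    Repeating a frame is tolerable for long, even vowels, but it garbles
    short, sharp consonants ("can't" becomes "can")."""
    silence = b"\x00" * frame_bytes  # assumed 20 ms frame at 8 kHz, 8-bit
    out, last = [], silence
    for frame in frames:
        if frame is None:      # lost, or arrived after the playout deadline
            out.append(last)
        else:
            out.append(frame)
            last = frame
    return out
```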
Is the Internet a common carrier?
In common law countries, a common carrier is a legal classification for a person or company that transports goods and is legally prohibited from discriminating or refusing service based on the customer or the nature of the goods. The common carrier framework is often used to classify public utilities, such as electricity or water, and public transport. In the United States, there has been intense debate between some advocates of net neutrality, who believe Internet providers should be legally designated common carriers, and some Internet service providers, who believe the common carrier designation would be a heavy regulatory burden. You expect your home Internet connection to “just work” like water and electricity. But what if the electric company provided inadequate power to your Whirlpool refrigerator because Whirlpool hadn’t paid a fee? And what if the water company completely cut off the flow from your Kohler faucet because it owned a stake in another faucet company? Unlike public utilities, your Internet service provider (ISP) can abuse its power to influence which Internet businesses win and lose by slowing down or even blocking sites and services. The idea that the Internet should be operated like a public “road”, carrying all traffic with no discrimination against any traveller, no matter what size, shape or type, seems to many a bedrock principle. But should the Internet be regulated like other public utilities, such as water or electricity? Under FCC policy, Internet service providers (ISPs) such as Verizon and Comcast had to treat all content equally, including news sites, Facebook and Twitter, cloud-based business activities, role-playing games, Netflix videos, peer-to-peer music file sharing, photos on Flickr, even gambling activity and pornography. Citizens can run all manner of applications and devices, and no content provider is given preferential treatment or a faster “lane” than anyone else.
No content can be blocked by Internet service providers or charged differential rates. But it also meant that ISPs could not sell faster services to businesses willing to pay, a form of market regulation that, critics say, stifles innovation and legitimate commercial activity.
Is the Internet an Information Service or a Telecommunications Service?
Another major issue in the Net Neutrality kerfuffle is whether the Internet is classified as an information service or a more regulated telecommunications service. The FCC’s 2002 reclassification of the Internet as an information service led to Verizon’s successful challenge of Net Neutrality rules. Net Neutrality proponents therefore want the Internet reclassified as a telecommunications service; they feel this extra regulation will allow the principles of Net Neutrality to once again guide the concept of a free Internet. Considering that many of you have only one or two options when choosing a local ISP, regulation may ultimately be necessary to prevent monopoly abuse. If telecommunications companies are successful in instituting an Internet fast lane for video traffic, expect your Netflix subscription to increase by $5–10 per month, especially with Ultra HD becoming more popular. The spectre of ISPs blocking content from competing entities is another issue that may have to be solved separately from the Internet “fast lane” issue.
ISP (internet service/access provider):
Should ISPs be allowed to selectively prioritize communications between their customers and specific destinations on the internet, or should the transmission of data be done in a neutral way that does not consider the destination of a communication? Can ISPs arbitrarily assign preference to business partners or their own content? Can they charge additional fees to content providers for “priority” connections? Could they even arbitrarily block or severely degrade communications by their users to competitors such as competing Internet telephone (VoIP) companies, search engines, and online stores? For all the promise of the Internet, there is a serious threat to its potential for revitalizing democracy. The danger arises because there is, in most markets, a very small number of broadband network operators, and this may not change in the near future.
To understand what the ISPs are implying here, consider the figure above. From an economic point of view, ISPs are the operators of a two-sided market platform that connects the suppliers of content and services (CSPs) with the consumers (IUs) that demand these services. In a two-sided market, each side prefers to have many partners on the other side of the market. Thus, CSPs prefer to have access to many IUs, because these create advertisement revenues. Likewise, IUs prefer the variety that is created by many CSPs. Suppose for a minute that there were only one ISP in the world connecting CSPs with IUs. This ISP would consider these cross-side externalities and select a payment scheme for each side that maximizes its revenues. Instead of demanding the same payment from both sides, the classic result is that the platform operator chooses a lower fee for the side that is valued the most. In this vein, entry is stimulated and the added valuation can be monetized. There are several real-world examples that demonstrate this practice: credit card companies levy fees on merchants, not customers; dating platforms offer free subscriptions to women, not men. Sometimes even a zero payment seems not enough to stimulate entry by the side that is valued the most. Then the platform operator may consider paying for entry (e.g., offering free drinks to women in a club). Such two-sided pricing is currently not employed in the Internet. One of the reasons is that CSPs and IUs are usually not connected to the same ISP, as depicted in the figure above. The core of the Internet comprises several ISPs that perform different roles. More precisely, the core can be separated into (i) the customer access network: the physical connection to each household; (ii) the backhaul network, which aggregates the traffic from all connected households of a single ISP; and (iii) the backbone network: the network that delivers the aggregated traffic from and to different ISPs.
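The “charge less to the side you value most” result can be illustrated numerically. The linear participation model, the parameter values, and the grid search below are all illustrative assumptions I chose for the sketch, not taken from the economics literature cited here.

```python
def equilibrium(fee_csp, fee_iu, base=10.0, v_csp=0.6, v_iu=0.2):
    """Fixed-point participation on each side of the platform. v_csp > v_iu
    encodes that CSPs value extra IUs (ad revenue) more than IUs value
    extra CSPs. All parameters are made up for illustration."""
    n_csp = n_iu = base
    for _ in range(200):  # iterate to the stable fixed point
        n_csp = max(0.0, base - fee_csp + v_csp * n_iu)
        n_iu = max(0.0, base - fee_iu + v_iu * n_csp)
    return n_csp, n_iu

def platform_profit(fee_csp, fee_iu):
    n_csp, n_iu = equilibrium(fee_csp, fee_iu)
    return fee_csp * n_csp + fee_iu * n_iu

# Grid-search the fee pair; a negative fee would mean paying for entry,
# like the free drinks in the club example.
fees = [(fc, fu) for fc in range(-5, 16) for fu in range(-5, 16)]
best_csp_fee, best_iu_fee = max(fees, key=lambda f: platform_profit(*f))
```

With these parameters the profit-maximizing platform charges the CSP side more than the IU side, mirroring the merchant/cardholder and dating-platform examples above.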
IUs are connected to a so-called access ISP which provides them with general access to the Internet. In most cases, IUs are subscribed to only one access ISP (known as single-homing) and cannot switch ISPs arbitrarily, either because they are bound by a long-term contract, or because they simply do not have a choice of ISPs in the region where they live. Conversely, CSPs are usually subscribed to more than one backbone ISP (known as multi-homing) and sometimes, as in the case of Google, even maintain their own backbone network. This severely limits the extent of market power that each backbone ISP can exercise over the connected CSPs (Economides, 2005). The important message is that currently CSPs and IUs only pay the ISP through which they connect to the Internet. Interconnection between the backbone and access ISPs is warranted by a set of mutual agreements that are either based on bill-and-keep arrangements (peering) or volume-based tariffs (transit). In the case of transit, the access ISP has to pay the backbone ISP, and not the other way around. Consequently, the IUs’ subscription fees are currently the main revenue source for access ISPs. Moreover, in many countries customers predominantly pay flat fees for their access to the Internet, and thus they are not sensitive to how much traffic they generate. Furthermore, due to competition and fixed-mobile substitution, prices for Internet access have dropped over the years. Currently, it seems unlikely that access ISPs can escape from this flat-rate trap. For example, in 2010 the big Canadian ISPs tried to return to a metered pricing scheme by imposing usage-based billing on their wholesale products. As a consequence, smaller ISPs that rely on resale and wholesale products of the big Canadian ISPs would not have been able to offer real flat rates anymore.
With the whole country at risk of losing unlimited Internet access, tremendous public protest arose, and regulators finally decided to stop the larger telecommunications providers from pursuing such plans (Openmedia.ca, 2011). At the same time, Internet traffic has increased, a trend often driven by a growing number of quality-demanding services. One prominent example of this development is the company Netflix, which offers video-on-demand streaming of many TV shows and movies for a monthly subscription fee. According to Sandvine (2010, p.14), already 20.6 percent of all peak-period bytes downloaded on fixed access networks in North America are due to Netflix. In total, approximately 45 percent of downstream traffic on North American fixed and mobile access networks is attributable to real-time entertainment (Sandvine, 2010, p.12). To prepare for this flood of traffic, ISPs were and are forced to invest heavily in their networks. Such investments are always lumpy and thus periodically cause an overprovisioning of bandwidth, which, however, is soon filled up again by new content. This is the vicious circle that network operators are trying to escape. However, it is important to emphasize that network equipment providers like Cisco, Alcatel-Lucent and Huawei are constantly improving the efficiency of their products (e.g., by making use of new, sophisticated multiplexing methods) such that the cost per unit of bandwidth is decreasing. This partially offsets the costs that ISPs worry about. In summary, ISPs claim that their investments in the network are hardly counterbalanced by new revenues from IUs. Meanwhile, CSPs benefit from the increased bandwidth of the customer access networks, which enables them to offer even more bandwidth-demanding services, which in turn leads to a recongestion of the network and a new need for infrastructure investments.
In the absence of additional profit prospects on the user side, access ISPs could generate extra revenue from CSPs, who are in part causing the necessity for infrastructure investments, by exercising their market power over the installed subscriber base in the sense of a two-sided market. CSPs have a high valuation for customers; consequently, the terminating access ISP demands an extra fee (over and above the access fee to the backbone ISP they are connected to) from the CSP for delivering its data to the IUs. This new revenue stream (the black arrows in the figure above) would clearly be considered a violation of net neutrality.
The Internet contains three classes of ISPs:
1. Eyeball ISPs, such as Time Warner Cable and Comcast specialize in delivery to hundreds of thousands of residential users, i.e., supporting the last-mile connectivity.
2. Content ISPs specialize in providing hosting and network access for commercial companies that offer content to end users, such as Google and Yahoo. Typical examples are content distribution networks (CDNs).
3. Transit ISPs model the Tier-1 ISPs, such as Level 3, Qwest, and Global Crossing, which provide transit services for other ISPs and naturally form a full-mesh topology to provide the universal accessibility of the Internet.
Evolution of the commercial internet with the all-powerful last-mile ISP:
In the early Internet, the flow of traffic (mainly emails and files) was roughly symmetrical. A packet originating at ISP A and handed off to ISP B for delivery would be balanced by a packet moving in the opposite direction. ISPs often entered into no-cost agreements to carry one another’s traffic, each figuring that the amount of traffic it carried for another ISP would be matched by that other ISP carrying its own. Network neutrality prevailed naturally since ISPs, compensated for bandwidth used, did not differentiate one packet from another. The more packets of any kind, the more profits for all ISPs, an economic situation that aligned nicely with customers’ interest in having an expanding supply of bandwidth. Internet economics have changed considerably in recent years with the rise of behemoth for-profit content providers such as Facebook, Google, Amazon, and Netflix. These for-profit content providers did two things: they changed the Internet from a symmetric network to an asymmetric one where the vast preponderance of traffic flows from content providers to customers, and they introduced a new revenue stream, one outside the Internet and generated by advertising, online merchandizing, or payments for gaming, streaming video, and financial and other services. The Internet has evolved from a simple, symmetric network where light email and web traffic flowed between academics and researchers and the only revenues came from selling bandwidth, to an asymmetric one where traffic flows from content providers to consumers, generating massive revenues for content providers. The ISPs themselves were changing and becoming more specialized. Where eyeball ISPs serve people, transit ISPs serve the content providers and earn revenue by delivering content to consumers on their behalf. Since transit ISPs don’t have direct access to consumers, they arrange with the eyeball ISPs for the last-mile delivery of content to customers.
With an imbalance in the direction of traffic and no mechanism for appropriate compensation, the previous no-cost (or zero-dollar) bilateral arrangements broke down and were replaced by paid-peering arrangements where ISPs pay one another to carry one another’s traffic. Each ISP adopts its pricing policies to maximize profit, and these pricing policies play a role in how ISPs cooperate with one another, or don’t cooperate. Profit-seeking and cost-reduction objectives often induce selfish behaviors in routing—ISPs will avoid links considered too expensive for example—thus contributing to Internet inefficiencies. Paid-peering is one ISP strategy to gain profits. For the eyeball ISPs that control access to consumers, there is another way. Charge higher prices by creating a premium class of service with faster speeds. The eyeball ISPs, however, are in a power position because, unlike transit ISPs, eyeball ISPs have essentially no competition. Content providers like Netflix are in a much weaker position than the eyeball ISPs. Content providers need ISPs much more than the ISPs need them. If Netflix were to disappear, other streaming services would rush to fill in the gap. For the ISPs, it matters little whether it’s Amazon, Hulu, or another service (and worryingly, services run by the eyeball ISPs themselves) providing streaming services. If ISPs don’t need Netflix, neither do customers. Customers unhappy with Netflix’s service can simply choose another such service. They can’t, however, normally choose a different ISP. The monopolistic power of the eyeball ISPs may soon be made stronger. Occupying a position of power and knowing customers are stuck, the eyeball ISPs can and do play hardball with content providers. This was effectively illustrated in the recent Netflix-vs-Comcast standoff when Comcast demanded Netflix pay additional charges (above what Netflix was already paying for bandwidth). 
When Netflix initially refused, Comcast customers with Netflix service started reporting download speeds so slow that some customers quit Netflix. These speed problems seemed to resolve themselves right around the time Netflix agreed to Comcast’s demands. It would have been relatively inexpensive for Comcast to add capacity. But why should it? Monopolies such as Comcast have no real incentive to upgrade their networks. There is in fact an incentive to not upgrade since a limited commodity commands a higher price than a bountiful one. By limiting bandwidth, Comcast can force Netflix and other providers to pay more or opt into the premium class. Besides charging more for fast Internet lanes, ISPs have other ways to extract revenues from content providers. What Netflix paid for in its deal with Comcast was not a fast lane in the Internet, but a special arrangement whereby Comcast connects directly to Netflix’s servers to speed up content delivery. It is important to note that this arrangement is not currently covered under conventional net neutrality, which bans fast lanes over the Internet backbone. In the Netflix-Comcast deal, Netflix’s content is being moved along a private connection and never reaches the global Internet.
Smart broadband pipes lead to more revenue for the last-mile ISP:
Choosing an Internet service provider (ISP):
Once you have decided which type of Internet access you’re interested in, you can determine which ISPs are available in your area that offer the type of Internet access you want. Then you’ll need to purchase Internet service from one of the available ISPs. Talk to friends, family members, and neighbors to see which ISPs they use. Below are some things to consider as you research ISPs:
• Ease of installation
Although dial-up has traditionally been the least expensive option, many ISPs have raised dial-up prices to be the same as broadband. This is intended to encourage people to switch to broadband.
Bandwidth cost to ISP:
The Internet is witnessing explosive growth in demand for bulk content. Examples of bulk content transfers include downloads of music and movie files, distribution of large software packages and games, online backups of personal and commercial data, and sharing of huge scientific data repositories. Recent studies of Internet traffic in commercial backbones as well as academic and residential access networks show that such bulk transfers account for a large and rapidly growing fraction of bytes transferred across the Internet. The bandwidth costs of delivering bulk data are substantial. A recent study reported that average monthly wholesale prices for bandwidth vary from $30,000 per Gbps in Europe and North America to $90,000 in certain parts of Asia and Latin America. The high cost of wide-area network traffic means that, increasingly, economic rather than physical constraints limit the performance of many Internet paths. As charging is based on peak bandwidth utilization (typically the 95th percentile over some time period), ISPs are incentivized to keep their bandwidth usage on inter-AS links much lower than the actual physical capacity. To control their bandwidth costs, ISPs are deploying a variety of ad hoc traffic shaping policies today. These policies specifically target bulk transfers, because they consume the vast majority of bytes. However, these shaping policies are often blunt and arbitrary. For example, some ISPs limit the aggregate bandwidth consumed by bulk flows to a fixed value, independently of the current level of link utilization. A few ISPs even resort to blocking entire applications. So far, these policies are not supported by an understanding of their economic benefits relative to their negative impact on the performance of bulk transfers, and thus on customer satisfaction.
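The 95th-percentile (“burstable”) billing mentioned above is easy to compute. The sketch below uses one common convention (rank the samples and discard the top 5%); real contracts differ on the sampling interval and rounding, and the traffic numbers are made up for illustration.

```python
import math

def billable_rate_mbps(samples_mbps):
    """95th-percentile billing: rank the (typically 5-minute) utilization
    samples, discard the top 5%, and bill at the highest remaining sample.
    This is one common convention; contracts vary in the details."""
    ranked = sorted(samples_mbps)
    idx = math.ceil(0.95 * len(ranked)) - 1
    return ranked[max(idx, 0)]

# One day of 5-minute samples: a steady 100 Mbps with a one-hour burst to 900.
day = [100.0] * 276 + [900.0] * 12
print(billable_rate_mbps(day))  # the short burst falls inside the free top 5%
```

Because roughly 72 minutes per day of peak usage is effectively free, an ISP has a strong incentive to shape traffic so that bursts stay short, which is exactly the economics driving the bulk-transfer policies described above.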
Internet data caps are monthly limits on the amount of data you can use over your Internet connection. When a user hits that limit, different network operators take different actions, including slowing down data speeds, charging overage fees, and even disconnecting the subscriber. These caps come into play when a user either uploads or downloads data. Caps are most restrictive for wireless Internet access, but wired Internet access providers are also imposing them. Whatever the variation of data cap, they all have the same effect: they discourage the use of the Internet and the innovative applications it spawns. Think of the effect data caps have on visual artists, for example. Films, photographs, images of paintings, and other works of art are often data-rich, requiring significant bandwidth. These artists rely on the ability of new audiences to easily discover their work, but in a world with data caps, people may be less inclined to explore new things because of concerns about exceeding their cap. Data caps also make it impossible to do all the important things 4G LTE supposedly lets you do. Recently, T-Mobile released evidence showing that users with capped or throttled broadband use 20 to 30 times less broadband than users with uncapped broadband, and that 37% of subscribers don’t use streaming media because they fear going over their data caps. This hurts not only the ability of consumers to use broadband to its fullest potential, but it has serious implications for net neutrality.
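The operator responses listed above (throttling, overage fees, disconnection) can be summarized in a toy policy function. The policy names and the fee are invented for illustration and are not any carrier’s actual terms.

```python
def cap_action(used_gb, cap_gb, policy="throttle", fee_per_gb=10.0):
    """What a hypothetical operator does once a subscriber passes the cap."""
    if used_gb <= cap_gb:
        return "full speed"
    over_gb = used_gb - cap_gb              # amount of data over the cap
    if policy == "throttle":
        return "speed reduced"              # slow the connection down
    if policy == "overage":
        return f"billed ${over_gb * fee_per_gb:.2f} extra"
    return "disconnected"                   # the harshest variant
```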
Users’ appetite for services and applications which require continuous data exchange keeps growing. Mirroring the market evolution, the traffic conveyed on networks has been increasing continuously; overall IP traffic is estimated by Cisco to almost quadruple by 2016 and reach 110.2 exabytes per month. One of the main objectives behind the use of traffic management is the reduction of network congestion resulting from this outstanding growth in data traffic. ISPs commonly apply differential treatment of traffic, in particular during certain times of the day, to ensure that the end user’s experience is not disrupted by network congestion. Users may share access over a common network infrastructure. Since most users do not use their full connection capacity all of the time, this aggregation strategy (known as contended service) usually works well, and users can burst to their full data rate at least for brief periods. However, peer-to-peer (P2P) file sharing and high-quality streaming video can require high data rates for extended periods, which violates these assumptions and can cause a service to become oversubscribed, resulting in congestion and poor performance. TCP includes congestion-control mechanisms that automatically throttle back the bandwidth being used during periods of network congestion. This is fair in the sense that all users experiencing congestion receive less bandwidth, but it can be frustrating for customers and a major problem for ISPs. In some cases the amount of bandwidth actually available may fall below the threshold required to support a particular service such as video conferencing or streaming live video, effectively making the service unavailable. When traffic is particularly heavy, an ISP can deliberately throttle back the bandwidth available to classes of users or for particular services.
This is known as traffic shaping and careful use can ensure a better quality of service for time critical services even on extremely busy networks. However, overuse can lead to concerns about fairness and network neutrality or even charges of censorship, when some types of traffic are severely or completely blocked.
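A common building block for the traffic shaping described above is the token bucket, which lets a traffic class burst briefly while holding it to a target average rate. The sketch below is a minimal single-class version (the class name and parameters are my own), not any vendor’s implementation.

```python
import time

class TokenBucket:
    """Minimal token-bucket shaper of the kind an ISP might apply per
    traffic class: packets are forwarded only while tokens (bytes of
    credit) remain, so the class can burst up to `burst_bytes` but is
    held to `rate_bps` on average."""
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0       # refill rate in bytes per second
        self.capacity = burst_bytes
        self.tokens = burst_bytes        # start with a full burst allowance
        self.stamp = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()           # refill credit for elapsed time
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.stamp) * self.rate)
        self.stamp = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True                  # forward the packet
        return False                     # shape it: queue or drop
```

Careful tuning of `rate_bps` per class is what separates quality-of-service management from the blunt throttling and blocking that raises neutrality and censorship concerns.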
Net neutrality is shorthand for the concept that all Internet traffic should be treated equally, irrespective of its nature. So the bytes that make up a 10KB email should be shuttled about cyberspace in the same unbiased way as the bytes that make up a 10GB HD movie. Broadband providers generally do not like the concept of net neutrality. Streaming a 10GB movie uses up a lot more bandwidth than a 10KB email, and while vast, there is still a limit on the total amount of bandwidth available at any given point in time. Also, broadband providers charge end users for access. At least until recently, a user who streams a 10GB movie represented the same revenue as an individual who sends a 10KB email but uses one millionth of the bandwidth. Get enough of those high-use consumers on your system and you will crowd out the other paying customers, who then cannot send their 10KB emails. Broadband providers have chosen several ways of dealing with the bandwidth hogs. Providers can charge end users more if they use more bandwidth. They can also slow or impede the delivery of large files, or entire classes of files, to ensure capacity is never constrained. This slowdown could frustrate the high-use consumer, who might switch to a more reliable service. The proponents of net neutrality believe that broadband providers should not be the gatekeepers for the type of content any particular individual seeks. The Internet is a great free market of ideas and commerce, and these should flow with as little regulation as possible. From a cultural and philosophical perspective, it’s hard to argue with the proponents of net neutrality. From an economic standpoint, it seems fairly clear that net neutrality promotes inefficiency. Bandwidth hogs are a form of free riders. Normally, consumers pay an amount that is correlated to what they consume.
In the early days of the Internet, its technical structure generally allowed consumers to consume as much as they wanted for a single price. If a resource has no capacity constraints, then one individual’s consumption of the resource will not affect another’s. If the resource has a capacity constraint, however, there may be a point at which a single user’s consumption negatively affects another’s. Larry Lessig of Stanford University recognized this potential problem with the Internet. He sees the Internet as a great “commons.” A commons is a public resource whose consumption is free to the members of the community. A classic commons would be a natural area owned by the government where any farmer could take their livestock to graze. A problem can occur because consumers are not required to pay directly for their consumption (they may pay indirectly through taxes). Since there is no immediate cost associated with consumption, they could take as much as they want with impunity. This is the tragedy of the commons: collectively, the members of the community benefit from the collective ownership and stewardship of the commons; but individually, each is incentivized to consume as much of the commons as possible. The economic inefficiency occurs when an overconsuming consumer consumes more of the commons than he needs to obtain his optimal output. It can also occur where a disadvantaged consumer cannot consume enough to produce an economically optimal amount. The cure for the tragedy of the commons is regulating consumption or charging the user for access. Overconsumption of a depleting asset reduces the amount of the asset available to others who may need it to reach their economically optimal output. If the two parties are competitors, the overconsumption can, theoretically, harm competition.
Strategic overconsumption of a depleting asset can be a form of foreclosure: it can deprive a competitor of the ability to produce an economically optimal amount, and so serves as an artificial capacity constraint that reduces the total output of the market. To the extent the competitor must then seek higher-priced inputs to achieve the same output, its costs have been raised. Capacity on the Internet is a vast but ultimately depletable resource at any given point in time. (The depletion is transient: once a file is downloaded, full capacity is restored.) To prevent overcrowding, broadband providers impede the flow of high-bandwidth content. Doing so discourages the consumers who use more bandwidth than anyone else, and preserves access for the low-volume, and therefore high-value (under the one-price-for-all model), customers who would otherwise leave the service. By slowing bandwidth hogs down, the broadband providers are in fact enforcing, however inartfully, an efficient allocation of bandwidth. Indeed, if the broadband providers charge the content providers, the customers, or both, the content provider best suited to provide content will bid the most for the most bandwidth, and the consumers to whom the added bandwidth has the most value will pay more, ensuring an efficient outcome.
ISP and conflict of interest:
The conflict is between internet service providers like Verizon and content providers like Netflix. Netflix wants to deliver its video content to Verizon’s customers with flawless quality, but to do that it needs a lot of bandwidth: Netflix alone currently accounts for around 30% of U.S. internet traffic, and that share is growing. Verizon and other providers see this as unsustainable and have demanded that Netflix pay for peering arrangements if it wants its traffic delivered to customers. But it’s not that simple. Verizon has its own video streaming service; in the ISP’s ideal world, Netflix wouldn’t exist and the ISP would be the provider of this type of service. So it’s difficult to trust the ISPs when they say this is purely about ensuring a stable network. There is also evidence that the disruptions are artificial and not a consequence of a network filled to capacity. Nor is it clear to users whose fault a disruption is: when a user sees their Netflix stream fail, they perceive a failure of Netflix, no matter where the failure actually occurred. So any network disruption destroys Netflix’s credibility as a dependable service.
ISPs bundle bandwidth with other services:
Bundling broadband with other services gives ISPs an unfair advantage over new competitive Internet services. The insidious part of the broadband business that is not being discussed in the net neutrality debate is that bundling enables ISPs to blend pricing among broadband, TV, telephone and, in some cases, mobile services. The ISPs know that everyone needs and will subscribe to a broadband service, so they can charge whatever they can get. In fact, U.S. high-speed broadband service costs nearly three times as much as in the UK and France, and more than five times as much as in South Korea. Since broadband service is the ISP’s highest-margin offering, exceeding 90% gross profit, in a bundled offering they can afford to cut the prices of other services to inhibit competition. This allows ISPs to keep increasing broadband prices with impunity while pricing their bundled TV service based on TV competition. If a new over-the-top (OTT) video service attempts to offer a competitive TV service, the ISP can simply lower the price of its own video service to break-even or even to a small loss. OTT services will not be able to compete because they cannot spread profit margins across other services. As an analogy, imagine if your electric utility bundled your electric service with your television service: you couldn’t receive one without the other without large increases in price. Maintaining an open and competitive Internet is therefore paramount not just to net neutrality, ensuring equal and open Internet access, but also to preventing ISPs from bundling the bandwidth itself (and those magnificent profit margins) with the other services they carry. Otherwise, new OTT video companies trying to compete with established ISP Internet services will not be able to succeed. And this is in addition to the data caps being imposed by some ISPs.
Instead of increasing their capacity, ISPs deliberately keep it scarce:
Perhaps most damaging of all, network operators would have a powerful incentive to continue to under-invest in infrastructure. They would be allowed to charge for preferential access to a resource they could manage to ensure remained artificially scarce.
Some examples that illustrate how an ISP could violate net neutrality principles:
• Blocking – some users could be prevented from visiting specific websites or accessing specific services, such as those of a competitor to the ISP;
• Throttling – different treatment could be given to specific sites or services, such as slower speeds for Netflix;
• Re-direction – users could be automatically redirected from one website to a competing website;
• Cross-subsidization – users of a service could be offered free or discounted access to other services;
• Paid prioritization – companies might buy priority access to an ISP’s customers (e.g., Google or Facebook could, in theory, pay ISPs to provide faster, more reliable access to their websites than to potential competitors).
From the ISP’s perspective, net neutrality places restrictions on potentially revenue-generating functionality. It may also affect how private networks co-exist with shared public networks, and its enforcement can be an important governance issue. Net neutrality is good for end users, as it ensures that all traffic is handled equitably. It is bad for ISPs who want to leverage their position as network providers to give their own services special treatment and thereby make more profit. The real debate here is whether ISPs should have the legal protections afforded to ‘common carriers’. In other industries (e.g., transportation, telephony), carriers are not responsible for the content they carry: they simply provide a service, and if the traffic on their network is not legal, it is not their problem, as they will carry anything for anyone. ISPs who want to retain common carrier status should welcome net neutrality. Those willing to forgo common carrier status should then be held liable for the content of their traffic.
The figure below depicts internet classification:
What is an OTT?
OTT, or over-the-top, refers to applications and services which are accessible over the internet and ride on operators’ networks that offer internet access services. The best-known examples are Skype, Viber, WhatsApp, e-commerce sites, Ola and Facebook Messenger. OTTs are not bound by any regulations. An OTT provider can be defined as a service provider offering ICT (Information and Communication Technology) services that neither operates a network nor leases network capacity from a network operator. Instead, OTT providers rely on the global internet and on access-network speeds (ranging from 256 kbit/s for messaging to 0.5–3 Mbit/s for video streaming) to reach the user, hence going “over the top” of a telecom service provider’s (TSP’s) network. Services provided under the OTT umbrella typically relate to media and communications and are generally free or lower in cost compared to traditional methods of delivery.
Today, users can directly access these OTT applications online from any place, at any time, using a variety of internet-connected consumer devices. The characteristics of OTT services are such that ISPs realise revenue solely from the increased data usage of internet-connected customers across various applications (henceforth, apps); they realise no other revenue, whether for carriage or bandwidth, and are not involved in planning, selling, or enabling OTT apps. OTT providers, on the other hand, use the ISPs’ infrastructure to reach their customers and offer products and services that not only make money for them but also compete with the traditional services offered by ISPs. ISPs aside, these apps also compete with brick-and-mortar rivals, e.g. e-commerce sites and banking. OTTs can affect revenue in all three real-time application verticals – video, voice and messaging – while the various non-real-time applications include e-payments, e-banking, entertainment apps, mobile location-based services and digital advertising. The table below provides a bird’s-eye view of how OTTs can potentially have an adverse impact on incumbent ISPs or other business entities.
Is the growth of OTT impacting the traditional revenue stream of ISPs?
Should OTT players pay for use of ISPs network over and above data charges paid by consumers?
The availability of Voice over Internet Protocol (“VoIP”) services, offering flat-rate long-distance telephone service for a monthly subscription or per-call rates of a few pennies a minute, shows how software applications riding on top of a basic transmission link can devastate an existing business plan that anticipates ongoing, large profit margins for core services. VoIP and wireless services have adversely impacted wireline local exchange revenues as consumers migrate to a triple-play bundle of services from cable television companies offering local and long-distance telephone service and Internet access coupled with their core video programming services. To retain subscribers, the incumbent telephone companies have created their own triple-play bundles at prices that generate lower margins for the voice telephony portion of the package deal. The apparent inability of ISPs to raise subscription rates and to receive payment from content providers has frustrated senior managers and motivated them to utter provocative claims that heavy users of their networks, such as Google, have become free riders: “Now what they would like to do is use my pipes free, but I ain’t going to let them do that because we have spent this capital and we have to have a return on it. So there’s going to have to be some mechanism for these people who use these pipes to pay for the portion they’re using. Why should they be allowed to use my pipes? The Internet can’t be free in that sense, because we and the cable companies have made an investment, and for a Google or Yahoo! or Vonage or anybody to expect to use these pipes [for] free is nuts!” On the other hand, Airtel India CEO Gopal Vittal said during the company’s earnings conference call earlier this year that there is no evidence of VoIP cannibalisation of voice services.
Last year, Idea Cellular MD Himanshu Kapania also said that OTT apps like Viber have had some impact on their international calling business, but no impact on regular voice calls. A study jointly done by AT Kearney and Google estimated that telecom companies will earn an additional $8 billion in revenue by 2017 due to the proliferation of data and data-based services. Charging users extra for specific apps or services will overburden them, which in turn will lead to them not using the services at all. It is also akin to breaking the Internet into pieces, which is fundamentally against what net neutrality stands for. The Internet also depends on interconnectivity and on users having a seamless experience – differential pricing will destroy these basic tenets of the Internet.
ISP testing software:
Test whether your ISP is respecting net neutrality:
At a minimum, consumers deserve a complete description of what they are getting when they buy “unlimited Internet access” from an ISP. Only if they know what is going on, and who is to blame for deliberate interference, can consumers make informed choices about which ISP to prefer (to the extent they have choices among residential broadband providers) or what counter-measures they might employ. Policy-makers, too, need to understand what ISPs are actually doing in order to pierce the evasive and ambiguous rhetoric some ISPs employ to describe their interference activities. Accordingly, the Electronic Frontier Foundation (EFF) is developing information and software tools intended to help subscribers test their own broadband connections.
Switzerland Network Testing Tool:
Is your ISP interfering with your BitTorrent connections? Cutting off your VoIP calls? Undermining the principles of network neutrality? To answer those questions, concerned Internet users need tools to test their Internet connections and gather evidence about ISP interference practices. After all, if it weren’t for the testing efforts of Rob Topolski, the Associated Press, and EFF, Comcast would still be stonewalling about its now-infamous BitTorrent blocking. Developed by the Electronic Frontier Foundation, Switzerland is an open source software tool for testing the integrity of data communications over networks, ISPs and firewalls. It will spot IP packets which are forged or modified between clients, inform you, and give you copies of the modified packets.
Known ISP Testing Software:
| Tool | Active / Passive | # Participants per Test | Platform | Protocols | Notes |
|---|---|---|---|---|---|
| Gemini | Active (?) | Bilateral | Bootable CD | ? | Uses pcapdiff |
| Glasnost | Active | 1.5-sided | Java applet | BitTorrent | |
| ICSI IDS | Passive | 0-sided (on the network) | IDS | Forged RSTs | Not code users can run |
| Google/New America Measurement Lab | Active | 2-sided | PlanetLab (server), Any (client) | Any | A server platform for others’ active testing software |
| NDT | Active | 1.5-sided | Java applet / native app | TCP performance | A sophisticated speed test |
| Network Neutrality Check | Active | 1.5-sided | Java applet | No real tests yet | Real tests forthcoming |
| NNMA | Passive | Unilateral | (currently) Windows app | Any | |
| pcapdiff / tpcat | Either | Bilateral | Python app | Any | A tool to make manual tests easier. EFF is no longer working on pcapdiff, but development continues with the tpcat project. |
| Switzerland | Passive | Multilateral | Portable Python app | Any | Sneak preview release just spots forged/modified packets |
Find out if your ISP is slowing down your connection:
Take the Internet Health Test to check whether ISPs are throttling websites. The tool checks for degradation of your internet connection and provides you with the details. It is supported by three well-known open internet groups: Demand Progress, Fight for the Future and the Free Press Action Fund. They note: “This test measures whether interconnection points are experiencing problems. It runs speed measurements from your (the test user’s) ISP, across multiple interconnection points, thus detecting degraded performance.” A number of different processes are launched once you start the test. First, it checks your connection by sending data to different locations throughout the Internet; this helps you learn whether there are choke points between various connections that are slowing you down. To perform the test, visit http://internethealthtest.org/ and click the green “Start the test” button. This opens a new window where the test checks for signs of degradation, and reports your internet speed as well as latency. The supporting open internet organizations want as many people as possible to take the test, so that everyone can know which ISPs are throttling connections. So go ahead and take the test; it takes only a minute or two, and then you’ll know whether your provider is really giving you an internet “fast lane” or not.
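A minimal sketch of the kind of measurement such tools automate is timing TCP connection setup to a chosen endpoint. This is not the Internet Health Test’s actual methodology; the host and port here are whatever endpoint you decide to probe:

```python
import socket
import time

def tcp_connect_latency(host: str, port: int, samples: int = 3) -> float:
    """Return the median TCP connect time to host:port, in milliseconds."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        # Time only the three-way handshake; close immediately afterwards.
        with socket.create_connection((host, port), timeout=5):
            pass
        times.append((time.perf_counter() - start) * 1000.0)
    return sorted(times)[len(times) // 2]
```

Comparing the latency to endpoints reached through different interconnection points can hint at selective degradation, though a real test must also control for server load and routing differences.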
Quality of Service (QoS):
Internet routers forward packets according to the diverse peering and transport agreements that exist between network operators. Many networks using Internet protocols now employ quality of service (QoS), the measure of transmission quality and service availability of a network (or internetworks). Service availability is a crucial foundation of QoS: the network infrastructure must be designed to be highly available before QoS can be implemented successfully. The target for high availability is 99.999% uptime, with only five minutes of downtime permitted per year. The transmission quality of the network is determined by the following factors:
1. Loss—A relative measure of the number of packets that were not received compared to the total number of packets transmitted. Loss is typically a function of availability. If the network is Highly Available, then loss during periods of non-congestion would be essentially zero. During periods of congestion, however, QoS mechanisms can determine which packets are more suitable to be selectively dropped to alleviate the congestion.
2. Delay—The finite amount of time it takes a packet to reach the receiving endpoint after being transmitted from the sending endpoint. In the case of voice, this is the amount of time it takes for a sound to travel from the speaker’s mouth to a listener’s ear.
3. Delay variation (Jitter)—The difference in the end-to-end delay between packets. For example, if one packet requires 100 ms to traverse the network from the source endpoint to the destination endpoint and the following packet requires 125 ms to make the same trip, then the delay variation is 25 ms.
Each end station in a Voice over IP (VoIP) call uses a jitter buffer to smooth out changes in the arrival times of voice data packets. Although jitter buffers are dynamic and adaptive, they may not be able to compensate for instantaneous changes in packet arrival times. This can lead to jitter buffer over-runs and under-runs, both of which result in an audible degradation of call quality.
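The three factors above can be computed directly from per-packet send and receive timestamps. A small illustrative sketch with made-up numbers (echoing the 100 ms / 125 ms example earlier); here jitter is taken as the mean absolute difference between consecutive end-to-end delays:

```python
def link_metrics(sent, received):
    """Compute loss %, average one-way delay, and mean jitter (all ms).

    sent:     {seq: send_time_ms}
    received: {seq: receive_time_ms}; a seq missing here means the packet
              was lost in transit.
    """
    loss_pct = 100.0 * (len(sent) - len(received)) / len(sent)
    # End-to-end delay per delivered packet, in sequence order.
    delays = [received[s] - sent[s] for s in sorted(received)]
    avg_delay = sum(delays) / len(delays)
    # Jitter: mean absolute difference between consecutive delays.
    diffs = [abs(b - a) for a, b in zip(delays, delays[1:])]
    jitter = sum(diffs) / len(diffs) if diffs else 0.0
    return loss_pct, avg_delay, jitter

sent = {1: 0.0, 2: 20.0, 3: 40.0, 4: 60.0}
received = {1: 100.0, 2: 145.0, 3: 140.0}   # packet 4 was lost
# Delays are 100, 125, 100 ms: 25% loss, ~108.3 ms average delay, 25 ms jitter.
print(link_metrics(sent, received))
```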
QoS technologies refer to the set of tools and techniques to manage network resources and are considered the key enabling technology for network convergence. The objective of QoS technologies is to make voice, video and data convergence appear transparent to end users. QoS technologies allow different types of traffic to contend inequitably for network resources. Voice, video, and critical data applications may be granted priority or preferential services from network devices so that the quality of these strategic applications does not degrade to the point of being unusable. Therefore, QoS is a critical, intrinsic element for successful network convergence. QoS tools are not only useful in protecting desirable traffic, but also in providing deferential services to undesirable traffic such as the exponential propagation of worms.
Implementing QoS involves combining a set of technologies defined by the Internet Engineering Task Force (IETF) and the Institute of Electrical and Electronic Engineers (IEEE). These technologies are designed to alleviate the problems caused by shared network resources and finite bandwidth. Although the concept of QoS encompasses a variety of standards and mechanisms, QoS for Windows Server 2003 IP-based networks is centered on traffic control, which includes mechanisms for prioritization and traffic shaping (the smoothing of traffic bursts). QoS can be used in any network environment in which bandwidth, latency, jitter, and data loss must be controlled for mission-critical software, such as Enterprise Resource Planning (ERP) applications, or for latency-sensitive software, such as video conferencing, IP telephony, or other multimedia applications. QoS also can be used to improve the throughput of traffic that crosses a slow link, such as a dial-up connection.
Advocates of net neutrality have proposed several methods to implement a net neutral Internet that includes a notion of quality-of-service:
1. An approach offered by Tim Berners-Lee allows discrimination between different tiers, while enforcing strict neutrality of data sent at each tier: “If I pay to connect to the Net with a given quality of service, and you pay to connect to the net with the same or higher quality of service, then you and I can communicate across the net, with that quality and quantity of service.” “[We] each pay to connect to the Net, but no one can pay for exclusive access to me.”
2. United States lawmakers have introduced bills that would allow quality-of-service discrimination for certain services, as long as no special fee is charged for higher-quality service.
Traffic shaping (also known as “packet shaping”) is a computer network traffic management technique which delays some or all datagrams to bring them into compliance with a desired traffic profile. Traffic shaping is used to optimise or guarantee performance, improve latency, and/or increase usable bandwidth for some kinds of packets by delaying other kinds. It is often confused with traffic policing, the distinct but related practice of packet dropping and packet marking.

The most common type of traffic shaping is application-based traffic shaping, in which fingerprinting tools are first used to identify applications of interest, which are then subject to shaping policies. Some controversial cases of application-based traffic shaping include P2P bandwidth throttling; many application protocols use encryption to circumvent it. Another type is route-based traffic shaping, conducted on the basis of previous-hop or next-hop information.

If a link becomes saturated to the point where there is a significant level of contention (either upstream or downstream), latency can rise substantially; traffic shaping can be used to prevent this from occurring and keep latency in check. Traffic shaping provides a means to control the volume of traffic being sent into a network in a specified period (bandwidth throttling), or the maximum rate at which the traffic is sent (rate limiting), or more complex criteria such as the generic cell rate algorithm (GCRA). This control can be accomplished in many ways and for many reasons; however, traffic shaping is always achieved by delaying packets. It is commonly applied at the network edges to control traffic entering the network, but can also be applied by the traffic source (for example, a computer or network card) or by an element within the network.
Traffic shaping is sometimes applied by traffic sources to ensure the traffic they send complies with a contract which may be enforced in the network by a policer. It is widely used for network traffic engineering, and appears in domestic ISPs’ networks as one of several Internet Traffic Management Practices (ITMPs). Some ISPs may use traffic shaping against peer-to-peer file-sharing networks, such as BitTorrent.
Traffic Policing vs. Traffic Shaping:
The following diagram illustrates the key difference.
Traffic policing propagates bursts. When the traffic rate reaches the configured maximum rate, excess traffic is dropped (or remarked). The result is an output rate that appears as a saw-tooth with crests and troughs. In contrast to policing, traffic shaping retains excess packets in a queue and then schedules the excess for later transmission over increments of time. The result of traffic shaping is a smoothed packet output rate. Shaping implies the existence of a queue and of sufficient memory to buffer delayed packets, while policing does not. Queueing is an outbound concept; packets going out an interface get queued and can be shaped. Only policing can be applied to inbound traffic on an interface. Ensure that you have sufficient memory when enabling shaping. In addition, shaping requires a scheduling function for later transmission of any delayed packets. This scheduling function allows you to organize the shaping queue into different queues.
To queue something is to store it, in order, while it awaits processing. An Internet router typically maintains a set of queues, one per interface, that hold packets scheduled to go out on that interface. Historically, such queues use a drop-tail discipline: a packet is put onto the queue if the queue is shorter than its maximum size (measured in packets or in bytes), and dropped otherwise. Active queue disciplines drop or mark packets before the queue is full; typically, they operate by maintaining one or more drop/mark probabilities, and probabilistically dropping or marking packets even when the queue is short. A FIFO (first-in, first-out) queue works like the line-up at a supermarket checkout: the first item into the queue is the first processed. As new packets arrive they are added to the end of the queue, and if the queue becomes full (here the analogy with the supermarket stops), newly arriving packets are dropped. This is known as tail-drop. Besides FIFO queueing, there is also class-based queueing and priority queueing.
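A drop-tail FIFO is easy to sketch (illustrative, with queue length measured in packets rather than bytes):

```python
from collections import deque

class TailDropQueue:
    """Drop-tail FIFO: enqueue until full, then drop newly arriving packets."""
    def __init__(self, max_size):
        self.q = deque()
        self.max_size = max_size
        self.drops = 0

    def enqueue(self, pkt):
        if len(self.q) >= self.max_size:
            self.drops += 1       # tail-drop: the newest packet is discarded
            return False
        self.q.append(pkt)
        return True

    def dequeue(self):
        return self.q.popleft() if self.q else None

q = TailDropQueue(max_size=3)
for pkt in ["a", "b", "c", "d"]:
    q.enqueue(pkt)
print(q.dequeue(), q.drops)   # a 1  (first in, first out; "d" was dropped)
```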
In telecommunication and computer engineering, the queuing delay (or queueing delay) is the time a job waits in a queue until it can be executed. It is a key component of network delay. In a packet-switched network, queueing delay is the sum of the delays encountered by a packet between the time of insertion into the network and the time of delivery to the addressee. The term is most often used in reference to routers: when packets arrive at a router, they have to be processed and transmitted, and a router can only process one packet at a time. If packets arrive faster than the router can process them (such as in a burst transmission), the router puts them into the queue (also called the buffer) until it can get around to transmitting them. Delay can also vary from packet to packet, so averages and statistics are usually generated when measuring and evaluating queuing delay.

As a queue begins to fill up, due to traffic arriving faster than it can be processed, the amount of delay a packet experiences going through the queue increases. The speed at which the contents of a queue can be processed is a function of the transmission rate of the facility; this leads to the classic delay curve. The average delay any given packet is likely to experience is given by the formula 1/(μ−λ), where μ is the number of packets per second the facility can sustain and λ is the average rate at which packets are arriving to be serviced. This formula applies when no packets are dropped from the queue.

The maximum queuing delay is proportional to buffer size: the longer the line of packets waiting to be transmitted, the longer the average waiting time. The router queue of packets waiting to be sent also introduces a potential cause of packet loss. Since the router has a finite amount of buffer memory to hold the queue, a router which receives packets at too high a rate may experience a full queue; in this case, the router has no option but to simply discard excess packets.
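The 1/(μ−λ) formula can be checked with a quick calculation (the rates below are illustrative):

```python
def avg_queueing_delay(mu, lam):
    """Average delay in seconds, per the 1/(mu - lambda) formula.

    mu:  packets per second the facility can sustain
    lam: average packet arrival rate

    Valid only while lam < mu; as lam approaches mu the delay grows without
    bound, which is the classic delay curve described in the text.
    """
    if lam >= mu:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    return 1.0 / (mu - lam)

# A link that can sustain 1000 packets/s, loaded at 800 packets/s:
print(avg_queueing_delay(1000, 800) * 1000, "ms")   # 5.0 ms
```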
The Process of Buffering (queueing):
Routers store packets in a few different locations depending on the congestion level:
As a packet enters a router, it is stored in the ingress buffer while it waits to be processed. With prioritization in place, VoIP packets are serviced ahead of other traffic.
Prioritization of Network Traffic:
Prioritization of network traffic is simple in concept: give important network traffic precedence over unimportant network traffic. That leads to some interesting questions. What traffic should be prioritized? Who defines priorities? Do people pay for priority, or do they get it based on traffic type (e.g., delay-sensitive traffic such as real-time voice)? For Internet traffic, where are priorities set: at the ingress, based on customer-preassigned tags in packets, or by service provider policies defined in service-level agreements?

Prioritization is also called CoS (class of service), since traffic is classed into categories such as high, medium, and low (gold, silver, and bronze); the lower the priority, the more “drop-eligible” a packet is. E-mail and Web traffic is often placed in the lowest categories, so when the network gets busy, packets from those categories are dropped first. Prioritization/CoS should not be confused with QoS; it is a subset of QoS. A package-delivery service provides an analogy. You can request priority delivery for a package, and the delivery service has different levels of priority (next day, two-day, and so on). However, prioritization does not guarantee the package will arrive on time; it may only mean that the delivery service handles that package before handling others. To provide guaranteed delivery, various procedures, schedules, and delivery mechanisms must be in place.

The problem with network priority schemes is that lower-priority traffic may be held up indefinitely when traffic is heavy, unless there is sufficient bandwidth to handle the highest load levels; even high-priority traffic may be held up under extreme loads. One solution is to overprovision network bandwidth, a reasonable option given the relatively low cost of networking gear today. As traffic loads increase, router buffers begin to fill, which adds delay, and if the buffers overflow, packets are dropped.
When buffers start to fill, prioritization schemes can help by forwarding high-priority and delay-sensitive traffic before other traffic. This requires that traffic be classed (CoS) and moved into queues with the appropriate service level. One can imagine an input port that classifies traffic or reads existing tags in packets to determine class, and then moves packets into a stack of queues with the top of the stack having the highest priority. As traffic loads increase, packets at the top of the stack are serviced first.
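The stack-of-queues idea can be sketched as a strict-priority scheduler (the class names and the type-to-class mapping here are illustrative, not a standard):

```python
import heapq

# CoS classes: lower number = higher priority (gold=0, silver=1, bronze=2).
CLASS_OF = {"voice": 0, "video": 1, "email": 2, "web": 2}

class PriorityScheduler:
    """Strict-priority scheduler: always service the highest class first."""
    def __init__(self):
        self.heap = []
        self.seq = 0   # tie-breaker preserves FIFO order within a class

    def enqueue(self, pkt, traffic_type):
        heapq.heappush(self.heap, (CLASS_OF[traffic_type], self.seq, pkt))
        self.seq += 1

    def dequeue(self):
        return heapq.heappop(self.heap)[2] if self.heap else None

s = PriorityScheduler()
for pkt, kind in [("mail1", "email"), ("rtp1", "voice"),
                  ("web1", "web"), ("rtp2", "voice")]:
    s.enqueue(pkt, kind)
print([s.dequeue() for _ in range(4)])   # ['rtp1', 'rtp2', 'mail1', 'web1']
```

Note that this sketch exhibits exactly the starvation problem described above: as long as voice packets keep arriving, email and web packets are never serviced.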
Prioritize Packets to improve Quality:
Voice traffic competes for available bandwidth on your broadband connection; if there is not enough bandwidth, packets get dropped. VoIP media streams require a constant, uninterrupted data flow composed of UDP packets that each carry between 10 and 30 milliseconds of sound information. Ideally, each packet in a media stream is evenly spaced and of the same size, and in a perfect world a packet never arrives out of sequence or gets dropped. VoIP media packets are framed in a highly precise, performance-sensitive way, so dropped packets and packet jitter (variation in packet arrival times) cause problems, big problems, for an ongoing call: the voices can sound robotic, cut in and out, or go silent altogether. Most of the packet-drop problems you’ll encounter while VoIPing will be the fault of your bandwidth-limited ISP connection, the link from the ISP’s network to your broadband router.
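The 10–30 ms framing translates directly into packet rate and per-call bandwidth. A sketch assuming the G.711 codec (64 kbit/s) and uncompressed IP/UDP/RTP headers; both are assumptions, since the text does not name a codec:

```python
def voip_bandwidth_kbps(codec_kbps=64, frame_ms=20, header_bytes=40):
    """Per-call IP bandwidth for a VoIP media stream, in kbit/s.

    codec_kbps:   payload bit rate (G.711 is 64 kbit/s)
    frame_ms:     milliseconds of audio carried per packet (10-30 ms)
    header_bytes: IP (20) + UDP (8) + RTP (12) overhead per packet
    """
    packets_per_sec = 1000 / frame_ms                       # 20 ms -> 50 pps
    payload_bytes = codec_kbps * 1000 / 8 * frame_ms / 1000  # 160 B per packet
    total_bits = (payload_bytes + header_bytes) * 8 * packets_per_sec
    return total_bits / 1000

print(voip_bandwidth_kbps())   # 80.0 -- headers add 25% over the codec rate
```

Shrinking the frame to 10 ms doubles the packet rate, so header overhead grows and the same call costs 96 kbit/s; this is why packetization interval matters for capacity planning.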
Traffic control and management:
In the Internet world, everything is packets, and managing a network means managing packets: how they are generated, routed, transmitted, reordered, fragmented, and so on. Traffic control works on packets leaving the system; its objective is not to manipulate packets entering the system (although you could do that, if you really wanted to slow down the rate at which you receive packets). The traffic control code operates between the IP layer and the hardware driver that transmits data on the network, i.e. in the lower layers of the kernel’s network stack. In fact, the traffic control code is the very code in charge of constantly furnishing packets to the device driver: the TC module, the packet scheduler, is permanently active in the kernel. Even when you do not explicitly use it, it is there scheduling packets for transmission. By default, this scheduler maintains a basic FIFO-like queue in which the first packet to arrive is the first to be transmitted. At its core, TC is composed of queueing disciplines, or qdiscs, that represent the scheduling policies applied to a queue. Several types of qdisc exist: FIFO (first in, first out), FIFO with multiple queues, FIFO with hash and round robin (SFQ), and the Token Bucket Filter (TBF), which assigns tokens to a qdisc to limit its flow rate.
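The SFQ idea, hashing flows into queues and serving the queues round-robin, can be sketched as follows (a toy model: the kernel’s real SFQ also perturbs the hash periodically and accounts in bytes, not packets):

```python
from collections import deque

class StochasticFairQueue:
    """Toy SFQ: hash each flow into a bucket, serve buckets round-robin,
    so one heavy flow cannot monopolize the outgoing interface."""
    def __init__(self, n_buckets=4):
        self.buckets = [deque() for _ in range(n_buckets)]
        self.turn = 0

    def enqueue(self, flow_id, pkt):
        # All packets of a flow land in the same bucket (FIFO within a flow).
        self.buckets[hash(flow_id) % len(self.buckets)].append(pkt)

    def dequeue(self):
        # Visit buckets in rotation; return the first non-empty one's head.
        for _ in range(len(self.buckets)):
            b = self.buckets[self.turn]
            self.turn = (self.turn + 1) % len(self.buckets)
            if b:
                return b.popleft()
        return None
```

Two flows that hash to different buckets alternate on the wire even if one of them enqueues far more packets, which is the fairness property the text attributes to SFQ.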
Traffic control is a collection of mechanisms that segregate traffic into the appropriate service types and regulate its delivery to the network. Traffic control involves classifying, shaping, scheduling, and marking traffic. The fundamental technical challenge is getting the Net to carry traffic that it was never meant to handle. Internet packet switching was designed for digital file transfers between computers, and it was later adapted for e-mail and Web pages. For these purposes the digital data does not have to be delivered at a specific rate or even in a specific order, so it can be chopped into packets that are routed over separate paths to be reassembled, in leisurely fashion, at their destinations. By contrast, voice and video signals must come fast and in a specific sequence.
During classification, packets are separated into distinct data flows and then directed to the appropriate queue on the forwarding interface. Queues are based on service type. The algorithm that services a queue determines the rate at which traffic is forwarded from the queue.
Traffic control in Windows Server 2003 supports the following service types:
Best-effort is the standard service level in many IP-based networks. It is a connectionless model of delivery that provides no guarantees for reliability, delay, or other performance characteristics.
Controlled load data flows are treated similarly to best-effort data flows in unloaded (uncongested) conditions. This means that a very high percentage of transmitted packets will be delivered successfully to the receiving end node, and a very high percentage of packets will experience a delay not greatly exceeding the minimum transit delay. Controlled load service provides high-quality delivery without guaranteeing minimum latency.
Guaranteed service provides high-quality delivery with guaranteed minimum latency. The impact of guaranteed traffic on the network is heavy, so guaranteed service is typically used only for traffic that does not adapt easily to change.
Network control, the highest service level, is designed for network management traffic.
Qualitative service is designed for applications that require prioritized traffic handling but cannot quantify their QoS traffic requirements. These applications typically send traffic that is intermittent or burst-like in nature. For this service type, the network determines how the data flow is treated.
Engineers decided that the best way to manage traffic flow was to label each packet with codes based on the time sensitivity of the data, so routers could use them to schedule transmission. Everyone called them priority codes, but the name wasn’t meant to imply that some packets were more important than others, only that they were more perishable. It’s like the difference between a shipment of fresh fruit and one of preserves. Here’s a set of such codes that the IEEE P802.1p task force defined in 1998 for local area networks. The highest priority values are for the most time-sensitive services, with the top two slots going to network management, followed by slots for voice packets, then video packets and other traffic.
| Priority | Acronym | Traffic type |
|---|---|---|
| 4 | VI | Video, < 100 ms latency and jitter |
| 5 | VO | Voice, < 10 ms latency and jitter |
| 7 (highest) | NC | Network control |
Although these codes have been accepted as potentially useful, they haven’t been widely used for wire line, fiber broadband, or the backbone Internet. Those systems generally have adequate internal capacity.
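A strict-priority scheduler over those eight code points might be sketched as follows. This is illustrative only; real equipment usually tempers strict priority with weighted fairness so low-priority queues are not starved:

```python
# Illustrative strict-priority scheduler: the highest 802.1p priority
# queue with waiting packets is always served first.

from collections import deque

class PriorityScheduler:
    def __init__(self):
        self.queues = {p: deque() for p in range(8)}  # 802.1p values 0-7

    def enqueue(self, priority, packet):
        self.queues[priority].append(packet)

    def dequeue(self):
        for p in range(7, -1, -1):          # 7 = Network Control first
            if self.queues[p]:
                return self.queues[p].popleft()
        return None                          # all queues empty

sched = PriorityScheduler()
sched.enqueue(0, "bulk data")
sched.enqueue(5, "voice frame")   # VO class
sched.enqueue(4, "video frame")   # VI class
print(sched.dequeue())  # voice frame
print(sched.dequeue())  # video frame
print(sched.dequeue())  # bulk data
```

The point of the codes is visible here: the voice frame leaves first not because it is more important, but because it is more perishable.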
Reasons for traffic management:
The primary reason that is given by ISPs for traffic management is to prevent a small number of their customers from clogging up access to the Internet by using a disproportionate share of the available bandwidth. In this way, proponents of traffic management say that ISPs are justified in controlling the flow of data because it is necessary to maintain the quality of service that is required to ensure all users have an enjoyable browsing experience.
Traffic management techniques:
1. Data caps: A wide variety of data caps and “fair use” policies may be used by operators to implement a specific business model. In general, a data cap will be imposed to support the operator’s pricing strategy, so that the price of traffic is based on volume. Data caps are a technical measure that requires monitoring traffic volume and throttling data or charging for extra volume once a pre-defined data cap is reached. Data caps provide a price signal to end users in relation to the cost of their bandwidth consumption.
2. Application-agnostic congestion management: To respond to network congestion, an ISP can react to daily fluctuations or unexpected network environment changes by implementing “congestion controls” at the edge of the network, where the source of the traffic (e.g. computers) slows down the transmission rate when packet loss is occurring.
3. Prioritization: An ISP might prioritize transmission of certain types of data over others (most often used to prioritize time-sensitive traffic, such as VoIP and IPTV). ISPs may be required to prioritize emergency services, and this is generally not a concern from a net neutrality perspective.
4. Differentiated throttling: The capacity available for a particular type of content (most often peer-to-peer traffic, particularly in peak times) may be restricted, which preserves capacity for the unrestricted content. Unlike application-agnostic congestion management, this technique is aimed at a specific type of content; generally a type that is bandwidth-hungry and non-time-critical.
5. Access-tiering: An ISP may prioritize a specific application or content – for a price to be paid by a content provider. By selling access to a “lane”, access providers can generate greater revenue to fund the network investments necessary to handle increasingly bandwidth-hungry services. This can be distinguished from prioritization in that access-tiering is typically open to all service providers (that can afford to pay for it) and that it is generally used to promote a particular service provider, rather than a type of content. Access-tiering has been criticized for its potential harm to innovation, particularly to start-ups unable to afford the fee. It is also commercially possible that a service prioritization arrangement could be made on an exclusive-by-service basis, to prevent competitors of the preferred content provider from purchasing a similar level of priority.
6. Blocking: End users may be prevented from using or accessing a particular website or a type of content (e.g., the blocking of VoIP traffic on a mobile data network). Blocking may be implemented to:
- inhibit competition, particularly if the access provider offers a service that competes with the service being blocked;
- manage costs, particularly where the cost of carrying a particular service or type of service places a disproportionate burden on the access provider’s network; and
- block unlawful or undesirable content, such as child abuse, viruses or spam. This may be necessary to comply with government or court orders, or done at the request of the end user. The blocking of unlawful and undesirable content generally raises few net neutrality concerns. Lawful interception measures, while not constituting “blocking”, are similarly non-controversial from a net neutrality perspective.
Specific restrictions may be applied discriminately or indiscriminately between users and they may be permanent or implemented over certain periods (e.g. peak time). The nature of the restriction will often be contractually disclosed by the ISP, so that the user is made aware that their access to a particular service will be restricted in certain circumstances.
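The volume accounting behind a data cap (technique 1 above) can be sketched as a simple counter. The cap, the rates, and the class name are all invented for illustration:

```python
# Sketch of the accounting behind a data cap: once a subscriber crosses
# the cap, traffic is throttled (or billed per extra gigabyte).

class DataCap:
    def __init__(self, cap_gb, full_rate_mbps, throttled_rate_mbps):
        self.cap_bytes = cap_gb * 10**9   # cap expressed in bytes
        self.used = 0                     # volume consumed so far
        self.full = full_rate_mbps
        self.throttled = throttled_rate_mbps

    def record(self, nbytes):
        self.used += nbytes

    def current_rate_mbps(self):
        return self.full if self.used < self.cap_bytes else self.throttled

plan = DataCap(cap_gb=100, full_rate_mbps=50, throttled_rate_mbps=1)
plan.record(99 * 10**9)
print(plan.current_rate_mbps())  # 50
plan.record(2 * 10**9)           # crosses the 100 GB cap
print(plan.current_rate_mbps())  # 1
```

The price signal to the user is precisely this discontinuity: consumption is cheap until the threshold, and visibly expensive (in speed or money) after it.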
As critics point out, there is a fine line between correctly applying traffic management to ensure a high quality of service and wrongly interfering with Internet traffic to limit applications that threaten the ISP’s own lines of business. For example, the VoIP application Skype uses peer-to-peer technology to provide free phone calls, which compete directly with the phone services offered by many ISPs. It would be easy at a technical level for an ISP to use its traffic management equipment to limit a customer’s Skype experience in an effort to protect its own fixed or mobile telephony services.
The figure below shows an Internet plan charging different prices for different sites at different speeds:
The figure below shows the spectrum of traffic management conducts:
If the core of a network has more bandwidth than is permitted to enter at the edges, then good QoS can be obtained without policing. An alternative to complex QoS control mechanisms is therefore to provide high-quality communication by generously over-provisioning a network, so that capacity is based on peak traffic load estimates. This approach is simple for networks with predictable peak loads, and the performance is reasonable for many applications, including demanding applications that can compensate for variations in bandwidth and delay with large receive buffers, as is often possible in video streaming. Over-provisioning is of limited use, however, in the face of transport protocols (such as TCP) that over time increase the amount of data placed on the network until all available bandwidth is consumed and packets are dropped; such greedy protocols tend to increase latency and packet loss for all users. Commercial VoIP services are often competitive with traditional telephone service in terms of call quality, even though QoS mechanisms are usually not in use on the user’s connection to their ISP or on the VoIP provider’s connection to a different ISP. Under high-load conditions, however, VoIP may degrade to cell-phone quality or worse. The mathematics of packet traffic indicates that a network requires just 60% more raw capacity under conservative assumptions. The amount of over-provisioning in interior links required to replace QoS depends on the number of users and their traffic demands, which limits its usability: newer, more bandwidth-intensive applications and the addition of more users erode the headroom of over-provisioned networks, which then requires a physical upgrade of the relevant network links – an expensive process. Thus over-provisioning cannot be blindly assumed on the Internet.
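The 60% figure suggests a one-line sizing rule, sketched here under the document's own conservative assumption; the margin is that figure, not a universal constant:

```python
# Sizing rule implied by the "60% more raw capacity" estimate:
# provision interior links at (1 + margin) times the estimated peak load.
# The 0.6 margin is the conservative assumption quoted in the text.

def overprovisioned_capacity(peak_load_gbps, margin=0.6):
    return peak_load_gbps * (1 + margin)

print(overprovisioned_capacity(10))  # 16.0 Gbit/s for a 10 Gbit/s peak
```

The catch described above is that `peak_load_gbps` is a moving target: new applications and new users keep pushing it upward, so the rule must be re-applied with expensive physical upgrades.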
Data Discrimination on Internet:
The extent to which network operators should be allowed to discriminate among Internet packets – to block selectively, or to adjust price or quality of service – is one of the most fundamental issues in the network neutrality debate. Networks favor some traffic or packet streams over others by using a variety of data differentiation techniques or algorithms. There are various methods by which ISPs are able to discriminate by determining which types of packets are in the network. The first is flow classification: ISPs are able to determine the nature of a packet stream by examining the amount of time since the stream began, the amount of time between consecutive packets, and the sizes of packets in the stream. Information about every packet stream going through the network can be maintained using the second method, called deep packet inspection. It can categorize traffic based not just on what it can learn from the packet it is currently handling but also on the combined content of many consecutive packets. Instead of looking only at the information needed to get the packet to its destination, a device using deep packet inspection is aware of information at the application layer, as illustrated in the table below:
Examples of header data showing which information is stored in which data field:
| Data field | Information revealed |
|---|---|
| MAC address | Manufacturer of device that is attached to network. |
| IP address | Identity of sender and recipient, location of sender and recipient. |
| Transport protocol | Type of application. |
| Traffic class in IP version 4 / IP version 6 | Type of application, priority desired by sender. |
| Packet length | Type of application. |
| Source port and destination port | Type of application. |
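The "shallow" header inspection summarized in the table can be illustrated with Python's `struct` module on a raw IPv4 header. This reads only addressing and protocol metadata, which is exactly where deep packet inspection goes further:

```python
# Reading header fields (per the table above) out of a raw IPv4 packet.
# This is the shallow inspection that DPI goes beyond: addressing and
# protocol metadata only, never the application payload.

import struct
import socket

def parse_ipv4_header(raw):
    ver_ihl, tos, length, _, _, ttl, proto, _, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "traffic_class": tos,                       # priority desired by sender
        "packet_length": length,                    # hints at application type
        "protocol": {6: "TCP", 17: "UDP"}.get(proto, proto),
        "src": socket.inet_ntoa(src),               # identity of sender
        "dst": socket.inet_ntoa(dst),               # identity of recipient
    }

# A hand-built 20-byte header: a TCP packet from 10.0.0.1 to 93.184.216.34.
hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 0, 0, 64, 6, 0,
                  socket.inet_aton("10.0.0.1"),
                  socket.inet_aton("93.184.216.34"))
print(parse_ipv4_header(hdr)["protocol"])  # TCP
```

Everything returned here is visible to every router on the path; flow classification works from this metadata plus timing, while DPI opens the payload as well.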
Types of discrimination:
1. Discrimination by protocol:
Discrimination by protocol is the favoring or blocking of information based on aspects of the communications protocol that the computers are using to communicate. In 2008, Comcast deliberately prevented some subscribers from using the peer-to-peer file-sharing protocol BitTorrent to download large files.
2. Discrimination by IP address:
a.) During the early decades of the Internet, creating a non-neutral Internet was technically infeasible. In 2003, the Internet security company NetScreen Technologies released network firewalls with so-called deep packet inspection, originally developed to filter malware. Deep packet inspection helped make real-time discrimination between different kinds of data possible, and is often used for Internet censorship.
b.) In a practice called zero-rating, companies will reimburse data use from certain addresses, favoring use of those services. Examples include Facebook Zero and Google Free Zone; such arrangements are especially common in the developing world.
c.) Sometimes ISPs will charge some companies, but not others, for the traffic they cause on the ISP’s network. French telecoms operator Orange, complaining that traffic from YouTube and other Google sites accounts for roughly 50% of total traffic on the Orange network, reached a deal with Google under which it charges Google for the traffic incurred on the Orange network. Some also thought that Orange’s rival ISP Free throttled YouTube traffic. However, an investigation by the French telecommunications regulator revealed that the network was simply congested during peak hours.
3. Peering discrimination:
There is some disagreement about whether peering is a net neutrality issue. In the first quarter of 2014, streaming website Netflix reached an arrangement with ISP Comcast to improve the quality of its service to Netflix clients. This arrangement was made in response to increasingly slow connection speeds through Comcast over the course of 2013, during which average speeds dropped by over 25% from their values a year before to an all-time low. After the deal was struck in January 2014, the Netflix speed index recorded a 66% increase in connection speed.
The Benefits of Discrimination:
There are several benefits of discrimination, ranging from security to quality-of-service control. One of the most important benefits of discrimination at the network level is security: a network operator can determine whether a packet stream is carrying a virus or a dangerous piece of spyware by using deep packet inspection. A network neutrality policy that prohibited networks from dropping dangerous traffic of this kind would do real damage to network security. By ensuring that only authorized devices are attached to the network, the operator can also prevent customers from using equipment that would hinder their neighbors’ traffic – for example, devices that access adult-only material contrary to the customer’s stated wishes, or that consume more of the shared resources than is allowed. Different applications have different QoS needs, so discrimination with respect to QoS is also important; it is not necessary to give identical treatment to all services. Pricing also plays an important role in congestion control, through price discrimination. Price discrimination has quantifiable advantages over more traditional technical approaches: by adjusting prices dynamically based on congestion levels, an operator can convince some users to delay their transmissions. Internet traffic is increasing at a tremendous rate, and this traffic can suffer from congestion at a number of points on the Internet; the increasing use of multimedia technology is worsening that congestion. An Internet user attempting to retrieve a file from a repository in another country will generally be unable to tell whether the dominant cause of congestion is the hardware at the repository or the various network links between the repository and the user. The effect on Internet users is generally the same, although the distinction between hardware and traffic congestion is important to Internet providers.
Thus, by discriminating, ISPs can provide better service to the majority of their customers.
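The dynamic congestion pricing mentioned above might look like the following toy model; the curve and all numbers are invented for illustration:

```python
# Toy congestion pricing: the per-gigabyte price rises with link
# utilization, nudging price-sensitive users to delay transfers.
# The 70% knee and the slope are invented, not industry figures.

def congestion_price(base_price, utilization):
    # Below 70% utilization the base price applies; above it, the
    # price climbs steeply as the link approaches saturation.
    if utilization <= 0.7:
        return base_price
    return base_price * (1 + 5 * (utilization - 0.7))

print(congestion_price(1.0, 0.5))            # 1.0 (uncongested)
print(round(congestion_price(1.0, 0.9), 2))  # 2.0 (heavily loaded)
```

Users who can wait (bulk backups, software updates) respond to the higher price by deferring traffic, flattening the peak without any packet-level interference.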
The Risks of Discrimination:
One of the serious risks of discrimination is that it may be used to protect legacy services from competition. In the current ISP market, cable and telephone companies are the dominant broadband providers. Without network neutrality, these ISPs can block traffic or degrade the QoS of rival services. For example, a telephone company could degrade VoIP services, forcing customers to use traditional telephone service, and a cable company could similarly degrade streaming video. Discrimination may also lead to the charging of oligopoly rents in the broadband market: ISPs may be able to maximize their profit based on customers’ willingness to pay for particular services.
Jon Peha from Carnegie Mellon University believes it is important to create policies that protect users from harmful traffic discrimination, while allowing beneficial discrimination. Peha discusses the technologies that enable traffic discrimination, examples of different types of discrimination, and potential impacts of regulation. Google Chairman Eric Schmidt aligns Google’s views on data discrimination with Verizon’s: “I want to be clear what we mean by Net neutrality: What we mean is if you have one data type like video, you don’t discriminate against one person’s video in favor of another. But it’s okay to discriminate across different types. So you could prioritize voice over video. And there is general agreement with Verizon and Google on that issue.” Echoing similar comments by Schmidt, Google’s Chief Internet Evangelist and “father of the internet”, Vint Cerf, says that “it’s entirely possible that some applications need far more latency, like games. Other applications need broadband streaming capability in order to deliver real-time video. Others don’t really care as long as they can get the bits there, like e-mail or file transfers and things like that. But it should not be the case that the supplier of the access to the network mediates this on a competitive basis, but you may still have different kinds of service depending on what the requirements are for the different applications.”
Much of the net neutrality debate centres around the management of Internet traffic by Internet Service Providers (ISPs) and what constitutes reasonable traffic management. Traffic management is the tool used by ISPs to protect the security and integrity of networks, to restrict the transmission to consumers of unsolicited communication (e.g. spam), or to give effect to a legislative provision or court order. It is also essential for the delivery of certain time-sensitive services (such as real-time IPTV and video conferencing) that may require a prioritisation of traffic to ensure a predefined higher quality of service. However, there is a fragile balance between ensuring the openness of the Internet and the reasonable and responsible use of traffic management by ISPs. Drawing the line between legitimate and unjustified traffic management is challenging.
Types of net control:
Bandwidth throttling is the intentional slowing of Internet service by an Internet service provider. It is a reactive measure employed in communication networks in an apparent attempt to regulate network traffic and minimize bandwidth congestion. Bandwidth throttling can occur at different locations on the network. On a local area network (LAN), a sysadmin may employ bandwidth throttling to help limit network congestion and server crashes. On a broader level, the Internet service provider may use bandwidth throttling to help reduce a user’s usage of bandwidth that is supplied to the local network. Throttling can be used to limit a user’s upload and download rates actively on programs such as video streaming, BitTorrent protocols and other file sharing applications to even out the usage of the total bandwidth supplied across all users on the network. Bandwidth throttling is also often used in Internet applications, in order to spread a load over a wider network to reduce local network congestion, or over a number of servers to avoid overloading individual ones, and so reduce their risk of crashing, and gain additional revenue by compelling users to use more expensive pricing schemes where bandwidth is not throttled.
Typically an ISP will allocate a certain portion of bandwidth to a neighbourhood, which is then sold to residents within the neighbourhood. It is common practice for ISPs to oversell the amount of bandwidth, as typically most customers will only use a fraction of what they’re allotted. By overselling, ISPs can lower the price of service to their customers per gigabit allotted. On some ISPs, however, when one or a few customers use a larger amount than expected, the ISP will purposely reduce the speed of that customer’s service for certain protocols, thus throttling their bandwidth. This is done through a method called Deep Packet Inspection (DPI), which allows an ISP to detect the type of traffic being sent and throttle it if it is not high priority and is using a large fraction of the bandwidth. Bandwidth throttling of certain types of traffic (e.g. peer-to-peer file sharing) can be scheduled during specific times of the day to avoid congestion at peak usage hours. As a result, customers should all have equal Internet speeds. Encrypted data may be throttled or filtered, causing major problems for businesses that use Virtual Private Networks (VPNs) and other applications that send and receive encrypted data.
Throttling vs. capping:
The difference is that bandwidth throttling regulates a bandwidth-intensive device (such as a server) by limiting how much data that device can send or receive per unit of time. Bandwidth capping, on the other hand, limits the total transfer capacity, upstream or downstream, of data over a medium.
Deep packet inspection (DPI):
The “deep” in deep packet inspection (DPI) refers to the fact that these boxes don’t simply look at the header information as packets pass through them. Rather, they move beyond the IP and TCP header information to look at the payload of the packet. The goal is to identify the applications being used on the network, but some of these devices can go much further. Imagine a device that sits inline in a major ISP’s network and can throttle P2P traffic at differing levels depending on the time of day. Imagine a device that allows one user access only to e-mail and the Web while allowing a higher-paying user to use VoIP and BitTorrent. Imagine a device that protects against distributed denial of service (DDoS) attacks, scans for viruses passing across the network, and siphons off requested traffic for law enforcement analysis. Imagine all of this being done in real time, for 900,000 simultaneous users, and you get a sense of the power of deep packet inspection (DPI) network appliances. Ellacoya, which recently completed a study of broadband usage, says that 20 percent of all web traffic is really just YouTube video streams. This is information an ISP wants to know; at peak hours, traffic shaping hardware might downgrade the priority of all streaming video content from YouTube, giving other web requests and e-mails a higher priority without making YouTube inaccessible. This only works if the packet inspection is “deep.” In terms of the OSI layer model, this means looking at information from layers 4 through 7, drilling down as necessary until the nature of the packet can be determined. For many packets, this requires a full layer 7 analysis, opening up the payload and attempting to determine which application generated it (DPI gear is generally built as a layer 2 device that is transparent to the rest of the network). Procera, for instance, claims to detect more than 300 application protocol signatures, including BitTorrent, HTTP, FTP, SMTP, and SSH. 
Ellacoya claims that their boxes can look deeper than the protocol, identifying particular HTTP traffic generated by YouTube and Flickr, for instance. Of course, the identification of these protocols can be used to generate traffic shaping rules or restrictions. DPI can also be used to root out viruses passing through the network. While it won’t cleanse affected machines, it can stop packets that contain proscribed byte sequences. It can also identify floods of information characteristic of denial of service attacks and can then apply rules to those packets.
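A toy version of the payload-signature matching such appliances perform is sketched below. The handshake and banner prefixes are well-known protocol markers, but real products match hundreds of signatures and reassemble flows before classifying:

```python
# Toy payload-signature matcher in the spirit of DPI appliances:
# classify a packet by byte patterns in its payload rather than by
# port numbers alone.

SIGNATURES = [
    (b"\x13BitTorrent protocol", "BitTorrent"),  # BT handshake prefix
    (b"GET ", "HTTP"),                           # HTTP request line
    (b"SSH-2.0", "SSH"),                         # SSH version banner
]

def classify(payload):
    for pattern, proto in SIGNATURES:
        if payload.startswith(pattern):
            return proto
    return "unknown"

print(classify(b"GET /index.html HTTP/1.1\r\n"))         # HTTP
print(classify(b"\x13BitTorrent protocol" + b"\x00" * 8))  # BitTorrent
```

Once a flow is labeled this way, the appliance can apply a shaping rule to it (for example, deprioritizing BitTorrent at peak hours) regardless of which port the application chose.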
There are two main categories of inspection techniques used by ISPs, which are more or less intrusive:
- one based on the Internet Protocol header information, which enables ISPs to identify the subscriber and apply specific policies according to what he or she has subscribed to, e.g. routing the packet through a slower or faster link;
- one based on a deeper inspection (called DPI, Deep Packet Inspection), which enables ISPs to access the data payload, which may contain personal information.
How do you prevent DPI from reading your text sent over the Internet?
By encrypting data.
Encryption is basically the method of turning plaintext information into an unintelligible format (ciphertext) using an algorithm. This way, even if unauthorized parties manage to access the encrypted data, all they find is a stream of unintelligible characters. Encryption is widely used to protect data in numerous areas, such as e-commerce, online banking, cloud storage and online communication. A simple example of a cipher is replacing each letter in a message with the one one position forward in the alphabet. So if your original message read “Meet you at the cafe tonight”, the encrypted message reads as follows: “Nffu zpv bu uif dbgf upojhiu”. Of course, advanced encryption software can generate extremely complicated algorithms to achieve far stronger ciphers. DPI at an ISP cannot read truly encrypted packets in any way. They may arrive in bits and pieces as they are downloaded, but they are still like pieces of a scrambled puzzle that can only be put back together with the decryption key. Of course, an ISP can still throttle an encrypted stream without knowing what the message contains.
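The "one forward in the alphabet" example above is a Caesar shift of 1, which can be written directly. Real encryption (e.g. AES as used by TLS or a VPN) is incomparably stronger, but the principle of reversible scrambling is the same:

```python
# The "one forward in the alphabet" cipher from the example above:
# a Caesar shift of 1. Non-letters (spaces, punctuation) pass through.

def shift_cipher(text, shift=1):
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr(base + (ord(ch) - base + shift) % 26))
        else:
            out.append(ch)
    return "".join(out)

print(shift_cipher("Meet you at the cafe tonight"))
# Nffu zpv bu uif dbgf upojhiu
print(shift_cipher("Nffu zpv bu uif dbgf upojhiu", -1))
# Meet you at the cafe tonight
```

A DPI box sees only the scrambled bytes; only a party holding the key (here, the shift value) can reverse the transformation.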
Is your ISP throttling your Bandwidth?
1.) What is the contention ratio in your neighbourhood?
At the core of all Internet service is a balancing act between the number of people who are sharing a resource and how much of that resource is available. For example, a typical provider starts out with a big pipe of Internet access that is shared via exchange points with other large providers. It then subdivides this access out to its customers in ever smaller chunks – perhaps starting with a gigabit exchange point and then narrowing down to a 10 megabit local pipe that is shared by customers across a subdivision or area of town. The speed you, the customer, can attain is limited by how many people might be sharing that 10 megabit local pipe at any one time. If you are promised one megabit service, it is likely that your provider has you share your trunk with more than 10 subscribers, taking advantage of natural usage behavior, which assumes that not all users are active at one time. The exact contention ratio will vary widely from area to area, but from experience, your provider will want to maximize the number of subscribers who can share the pipe while minimizing service complaints due to a slow network. In some cases, there are as many as 1,000 subscribers sharing 10 megabits. This is a bit extreme, but even with a ratio as high as this, subscribers will average much faster speeds than dial-up.
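The sharing arithmetic above reduces to dividing the pipe by the number of simultaneously active subscribers. The activity fraction below is an assumption for illustration, not an industry constant:

```python
# Per-subscriber speed under a shared pipe: capacity divided by the
# number of users active at the same time. The 10% activity fraction
# is an illustrative assumption.

def per_user_mbps(pipe_mbps, subscribers, active_fraction=0.1):
    active = max(1, int(subscribers * active_fraction))
    return pipe_mbps / active

print(per_user_mbps(10, 100))   # 1.0 Mbit/s if 10 of 100 users are active
print(per_user_mbps(10, 1000))  # 0.1 Mbit/s at the extreme ratio cited
```

The provider's bet is that `active_fraction` stays low; sustained simultaneous use (e.g. widespread video streaming) is what breaks the contention model.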
2.) Does your ISP’s exchange point with other providers get saturated?
Even if your neighbourhood link remains clear, your provider’s connection can become saturated at its exchange point. The Internet is made up of different provider networks and backbones. If you send an e-mail to a friend who receives service from a company other than your provider, then your ISP must send that data on to another network at an exchange point. The speed of an exchange point is not infinite, but is dictated by the type of switching equipment. If the exchange point traffic exceeds the capacity of the switch or receiving carrier, then traffic will slow.
3.) Does your provider give preferential treatment to speed test sites?
It is possible for an ISP to give preferential treatment to individual speed test sites. Providers have all sorts of tools at their disposal to allow and disallow certain kinds of traffic.
4.) Are file-sharing queries confined to your provider network?
Another common tactic to save resources at the exchange points of a provider is to re-route file-sharing requests to stay within its network. For example, if you were using a common file-sharing application such as BitTorrent, and you were looking for some non-copyrighted material, it would be in your best interest to contact resources all over the world to ensure the fastest download. However, if your provider can keep you on their network, they can avoid clogging their exchange points. Since companies keep tabs on how much traffic they exchange in a balance sheet, making up for surpluses with cash, it is in their interest to keep traffic confined to their own network if possible.
5.) Does your provider perform any usage-based throttling?
The ability to increase bandwidth for a short period of time and then slow you down if you persist at downloading is another trick ISPs can use. Sometimes they call this burst speed, which can mean speeds being increased up to five megabits, and they make this sort of behavior look like a consumer benefit. Perhaps Internet usage will seem a bit faster, but it is really a marketing tool that allows ISPs to advertise higher connection speeds – even though these speeds can be sporadic and short-lived. For example, you may only be able to attain five megabits at 12:00 a.m. on Tuesdays, or some other random unknown times. Your provider is likely just letting users have access to higher speeds at times of low usage. On the other hand, during busier times of day, it is rare that these higher speeds will be available.
IP blocking is an ISP’s purposeful prevention of its customers’ access to a specific website or IP address. Certain ISPs have been found to block certain websites. While some blocking (e.g., of child pornography sites) is considered acceptable or even required, and is stated in an ISP’s acceptable Internet use policy, ISPs otherwise have absolute control over the content transmitted over their wires and have exercised it without adequately informing service subscribers.
Unfair traffic management practices:
1. Blocking and throttling of services
The blocking and throttling (i.e. intentionally slowing down the speed) of Peer-to-Peer (P2P) services (such as file sharing and media streaming) and Voice over Internet Protocol (VoIP) services (i.e. Internet telephony) are the most common examples. Other – less prevalent – instances are restricted access to specific applications such as gaming, streaming, e-mail or instant messaging services.
2. Weakening the competition
This practice can stem from the desire to weaken the competition; the most prominent example is limiting access to VoIP services, as revealed by the traffic management investigation carried out by the Body of European Regulators for Electronic Communications (BEREC). Indeed, while ISPs provide voice calls through traditional fixed or mobile networks, cheaper (or even free) VoIP substitutes can be found over the Internet.
3. The decrease of innovation
Developers of content and applications are likely to reconsider their investments into new applications if there is a risk ISPs might discriminate against them. Moreover, excessive restrictions on competing applications might remove the incentive for ISPs to improve and innovate their own products which are challenged by those applications.
4. The potential degradation of quality of service
BEREC has identified two main types of degradation of quality of service: the Internet access service as a whole (e.g. caused by congestion on a regular basis), and individual applications using Internet access service (e.g. VoIP blocking and P2P throttling).
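Throttling of the kind described above is typically implemented with a rate limiter such as a token bucket. A minimal Python sketch follows; the class, the 100 kB/s cap, and the traffic-class labels are illustrative assumptions, not any particular ISP’s implementation:

```python
class TokenBucket:
    """Minimal token-bucket rate limiter: at most `capacity` tokens (bytes),
    refilled at `rate` tokens per second."""
    def __init__(self, rate, capacity):
        self.rate = rate          # bytes added back per second
        self.capacity = capacity  # maximum burst size in bytes
        self.tokens = capacity
        self.last = 0.0           # timestamp of the previous check

    def allow(self, nbytes, now):
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True   # forwarded at full speed
        return False      # delayed or dropped: the user experiences throttling

# Hypothetical policy: throttle P2P to 100 kB/s, leave everything else alone.
p2p_limiter = TokenBucket(rate=100_000, capacity=100_000)

def forward(traffic_class, nbytes, now):
    if traffic_class == "p2p":
        return p2p_limiter.allow(nbytes, now)
    return True  # all other traffic is unthrottled
```

The same mechanism, viewed charitably, is also what produces the “burst speed” behaviour described earlier: a full bucket permits a short fast burst, after which sustained transfers fall back to the refill rate.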
Lack of transparency:
1. Regarding traffic management practices
ISPs tend not to openly publicise information regarding traffic management practices. Such information can most frequently be found only when looking at the detailed terms and conditions of the ISPs’ offers, if at all. A recent report from the UK consumer organisation – Consumer Focus – has found that consumers have very limited awareness of the term ‘traffic management’.
2. On actual quality of service
In some cases, consumers are not even aware of the level of quality they can expect from their Internet service, for example possible discrepancies between advertised speeds and actual broadband speeds.
Wireless networks and net neutrality:
So far I have discussed traffic management vis-à-vis net neutrality in wired networks. There is considerable debate over whether and how net neutrality should apply to wireless networks. The issue is whether differences between wired and wireless network technology merit different treatment with respect to net neutrality. The primary focus is on applications and traffic management, rather than device attachment. Wireless networks differ substantially from wired networks at the network layer and below, but despite differences in traffic management, similar net neutrality concerns apply. Since the differences lie in the lower layers, net neutrality in both wired and wireless networks can be effectively accomplished by requiring an open interface between the network and transport layers.
The network neutrality debate has focused almost exclusively on Internet access via wireline carriers. Recently the issue of wireless Internet access has surfaced in light of the growing importance of wireless services and consumer frustration with carrier tactics that disable handset functions and block access to competing services. While wireless handsets generally can access Internet services, most carriers attempt to favor content they provide or secure from third parties under what critics deem a “walled garden” strategy: deliberate efforts to lock consumers into accessing and paying for favored content and services. Just about every nation in the world has established policies that mandate the right of consumers to own their own telephone and to use any device to access any carrier, service or function, provided it does not cause technical harm to the telecommunications network. Once regulators unbundled telecommunications service from the devices that access network services, a robustly competitive market evolved for both devices and services. Remarkably, wireless carriers in many nations, including the United States, have managed to avoid having to comply with this open network concept. Even though consumers own their wireless handsets, the carrier’s service will work only with specific handsets programmed to operate on that carrier’s network alone. Carriers justify this lock-in, and the high fees for early termination of service, on the grounds that they sell wireless handsets at subsidized rates (sometimes “free”) based on a two-year subscription term. Of course, the value of a two-year lock-in period offsets the handset subsidy, particularly in light of next-generation wireless networks that will offer many services in addition to voice communications. In the United States, wireless carriers and their “big box” retail store partners sell more than 60% of all wireless handsets, typically when a subscriber commences service or renews a subscription.
No market for used handsets has evolved, because wireless carriers do not offer lower service rates for subscribers who do not need or want a subsidized handset. Wireless network neutrality would require carriers to stop blocking the use of non-carrier-affiliated handsets and to stop locking handsets so that they work only on a single carrier’s network. More broadly, wireless network neutrality would prevent carriers from stopping subscribers from using their handsets to access the content, services, and software applications of ventures unaffiliated with the carrier. It would also require carriers to support an open interface so that handset manufacturers and content providers can develop equipment and services that have no potential for harming wireless carrier networks. Opponents of wireless network neutrality consider the initiative unnecessary government intrusion into a robustly competitive marketplace. They claim that imposing such requirements would risk technical harm to wireless networks and generate such regulatory uncertainty that carriers might refrain from investing in next-generation network enhancements. Opponents also claim that separating equipment from service constituted an appropriate remedy when a single wireline carrier dominated, but that such compulsory unbundling should not occur when consumers have a variety of carrier options.
India is currently debating the merits of net neutrality. However, the Indian population’s access to and use of the Internet provides unique parameters to the discussion. For starters, although India has the third largest number of Internet users, only 19 percent of the Indian population currently has Internet access. In comparison, 87 percent of the U.S. population can access the Internet. On the losing end of India’s digital divide is India’s poor and often rural class, where Internet access is limited, or if available, too expensive for marginal customers. India also lacks the large scale infrastructure necessary for broad fixed-line Internet access. For this reason, mobile platforms are the easiest way to bring Internet access to the population, particularly to the less affluent and rural areas of the country where residents suffer not only from poor broadband infrastructure, but also from the lack of basic access to the electricity needed to power fixed Internet lines. To a large extent, India’s net neutrality debate has paralleled the recent debate in the United States, and Indian net neutrality proponents have adopted their U.S. counterparts’ arguments when criticizing zero-rating projects. However, the disparity between mobile and fixed-line Internet access marks an important difference between the net neutrality debate in India and in the U.S. Due to widespread Internet access in the United States, the domestic net neutrality debate was able to focus largely on the quality of Internet access. In the U.S., the discussion centered on regulations that would ensure equal access to all legal digital content, and promote commercial and non-commercial innovations, particularly by start-ups and small businesses. In India where Internet access is beyond the reach of so many, the calculus may be very different.
Wireless networks have their own internal congestion, which results from sharing a limited radio spectrum among many users. In 2G and 3G wireless systems, data and voice traffic are kept apart; they shunt the data over the Internet and the voice over a circuit-switched network linked to the backbone. The first 4G LTE (long term evolution) phones sent data over the new LTE network but used the old 3G network for voice. Now carriers are phasing in a new generation of 4G LTE phones that use a protocol called Voice over LTE (VoLTE) that converts voice directly to packets for transmission on 4G networks along with data. VoLTE phones have an audio bandwidth of 50 to 7,000 Hz, twice that of conventional phones, which is supplied by a service called HD voice. VoLTE phones also use network management tools to manage the flow of time-sensitive packets. The packet coding built into LTE and VoLTE is a different matter because that traffic goes over wireless networks, which do have limited internal capacity. The LTE packet coding standard reflects the mobile environment and the introduction of new services. It assigns a special priority code to real-time gaming traffic, which requires very fast transit times to keep competition even. It also divides video into two classes with distinct requirements. Real-time “conversational” services such as conferencing and videophone are similar to voice telephony in that delays degrade their usability. Buffered streaming video can better tolerate packet delays because it is not interactive.
Here are the LTE codes:
| QCI | Resource Type | Priority | Packet Delay Budget | Packet Error Loss Rate | Example Services |
|-----|---------------|----------|---------------------|------------------------|------------------|
| 1 | GBR | 2 | 100 ms | 10⁻² | Conversational Voice |
| 2 | GBR | 4 | 150 ms | 10⁻³ | Conversational Video (live streaming) |
| 3 | GBR | 3 | 50 ms | 10⁻³ | Real-Time Gaming |
| 4 | GBR | 5 | 300 ms | 10⁻⁶ | Non-Conversational Video (buffered streaming) |
| 5 | Non-GBR | 1 (highest) | 100 ms | 10⁻⁶ | IMS Signalling |
QCI = QoS Class Identifier
GBR = Guaranteed Bit Rate
An application requests a minimum bit rate for optimal functioning. In LTE, both GBR bearers and non-GBR bearers may be provided. GBR bearers are typically used for applications like Voice over Internet Protocol (VoIP); each has an associated guaranteed bit rate, and higher bit rates can be allowed if resources are available. Non-GBR bearers do not guarantee any particular bit rate and are typically used for applications such as web browsing.
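The QCI table above can be read as a strict-priority scheduler: under congestion, the queued packet whose QCI maps to the lowest priority number is transmitted first. A toy Python sketch of that behaviour (the scheduler class is hypothetical; the priority values come from the table):

```python
import heapq

# Priority values from the LTE QCI table (lower number = higher priority).
QCI_PRIORITY = {
    1: 2,  # Conversational Voice (GBR)
    2: 4,  # Conversational Video (GBR)
    3: 3,  # Real-Time Gaming (GBR)
    4: 5,  # Non-Conversational Video (GBR)
    5: 1,  # IMS Signalling (non-GBR, highest priority)
}

class QCIScheduler:
    """Strict-priority packet scheduler keyed on QCI."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # arrival counter: FIFO tie-break within a priority

    def enqueue(self, qci, payload):
        heapq.heappush(self._heap, (QCI_PRIORITY[qci], self._seq, payload))
        self._seq += 1

    def dequeue(self):
        # Pop the highest-priority (lowest number), oldest packet.
        return heapq.heappop(self._heap)[2]

sched = QCIScheduler()
sched.enqueue(4, "buffered video chunk")
sched.enqueue(1, "voice frame")
sched.enqueue(5, "IMS signalling msg")
# Under congestion, signalling and voice leave the queue before buffered video.
```

This is exactly the kind of differential treatment that keeps VoLTE voice frames within their 100 ms delay budget, and exactly what a strict all-bits-are-equal rule would forbid.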
These new Net management tools allowed carriers to improve their existing services and offer new ones. Carriers now boast of the good voice quality of VoLTE phones, after years of ignoring the poor sound of 2G and 3G phones. Premium-price services could follow, such as special channels for remote real-time control of Internet of Things devices. Yet the differential treatment of packets worries advocates of Net neutrality, who fear that carriers could misuse those technologies to limit customer access to sites and services.
Net neutrality dilemma:
Net neutrality means different things to different people. Some want equal treatment for all bits; others merely want equal treatment for all information providers, which would then be free to assign priorities to their own services. Still others say that carriers should be able to charge extra for premium services, but not to block or throttle access. Each approach has different implications for network management. Treating all bits equally has become a popular mantra. It says just what it means, giving it a charming simplicity that leaves little wiggle room for companies trying to game the system. Championed by the nonprofit Electronic Frontier Foundation (EFF), the purists’ position seems to be gaining advocates. Yet its philosophical clarity could come at the cost of voice-call clarity: LTE uses expedited forwarding and packet priority to reduce jitter, which would otherwise degrade voice quality. But that involves giving some bits priority over others. Some observers doubt that Net neutrality purists mean what they say. Yet Jeremy Gillula, a technologist for EFF, says “network operators shouldn’t be doing any sort of discrimination when it comes to managing their networks.” One reason is that EFF advocates the encryption of Internet traffic, and as Gillula points out, encrypted data can’t be examined to see whether it should get priority. Moreover, he adds, “by allowing some packets to be treated better than others, we’re closing off a universe of new ways of using the Internet that we haven’t even discovered yet, and resigning ourselves to accepting only what already exists.” Other advocacy groups take a less restrictive approach. “We realize that the network needs management to provide the desired services,” says Danielle Kehl, a policy analyst for the New America Foundation’s Open Technology Institute.
“The key is to make sure network management is not an excuse to violate Net neutrality.” Thus they would allow carriers to schedule conversational video packets differently than those carrying streaming video, which is less time sensitive. But they would not allow carriers to differentiate between streaming video packets from two different companies. A key argument for this approach is the 2003 observation by Tim Wu, now a Columbia University law professor, that packet switching inherently discriminates against time-sensitive applications. That is, packet switching without Net management can’t prevent degradation of time-sensitive services on a busy network. President Obama largely followed this lead in his November 2014 speech advocating Net neutrality. He did not say that all bits should be treated equally but specified four rules: no blocking, no throttling, no special treatment at interconnections, and no paid prioritization to speed content transmission. The industry’s view of Net neutrality has another key difference—it should allow companies to offer premium-priced services. A Nokia policy paper says that users should be able to “communicate with any other individual or business and access the lawful content of their choice free from any blocking or throttling, except in the case of reasonable network management needs, which are applied to all traffic in a consistent manner.” But the paper adds that “fee-based differentiation” should be allowed for specialized services, as long as it is transparent. Carriers like this approach because adding premium services would give them a financial incentive to improve their networks. Critics counter that offering an express lane to premium customers could relegate other users to the slow lane, particularly in busy wireless networks. A crucial issue to be resolved is who pays for premium service. 
The big technology question in the debate over Net neutrality is which approach to packet management would give the best performance now and in the future. Cisco’s Baker says that equal treatment for all packets “would be setting the industry back 20 years.” That’s particularly true of wireless networks, where high demand and limited bandwidth make network management crucial. Take away priority coding and you break VoLTE, the first technology to offer major improvements in cellular voice quality. And without VoLTE or a similar packet-management scheme, there’s no obvious way to move wire-line telephony onto the Internet without degrading voice quality to cellphone level. Other proposed services also depend on priority coding. “If the Internet of Things develops, a lot of applications will require accurate real-time data to work well,” says Jeff Campbell, vice president of global policy and government affairs at Cisco. Telemedicine, teleoperation of remote devices, and real-time interaction among autonomous vehicles could be problematic if data packets could get stalled at peak congestion times. Some analysts argue that packet scheduling could throttle other traffic by limiting the unscheduled bandwidth. But others counter that this should not be a problem in a well-designed network, one with adequate capacity and interconnections. As undemocratic as packet scheduling may be, it seems the best technology available for delivering a mixture of time-sensitive and -insensitive services. “Some Net neutrality advocates are convinced that any kind of management will create bad results, but they’re not willing to accept that having no management will also have bad results,” says a senior Nokia engineer. So, Internet purists take heed: Traffic management is as vital on the Internet as it is on streets and highways.
Net neutrality definition:
Net neutrality (also network neutrality, Internet neutrality, or net equality) is the principle that Internet service providers and governments should treat all data on the Internet equally, not discriminating or charging differentially by user, content, site, platform, application, type of attached equipment, or mode of communication. The term was coined by Columbia University law professor Tim Wu in 2003, as an extension of the longstanding concept of a common carrier. Network neutrality is the principle that all Internet traffic should be treated equally. According to Wu, the best way to explain network neutrality is as a design principle: a public information network will end up being most useful if all content, sites, and platforms are treated equally. A more detailed proposed definition of technical and service network neutrality suggests that service network neutrality is adherence to the paradigm that operation of a service at a certain layer is not influenced by any data other than the data interpreted at that layer, in accordance with the protocol specification for that layer. Net neutrality prohibits Internet service providers from speeding up, slowing down, or blocking Internet traffic based on its source, ownership, or destination. Net neutrality usually means that broadband service providers charge consumers only once for Internet access, do not favor one content provider over another, and do not charge content providers for sending information over broadband lines to end users. An example of a violation of net neutrality principles was the Internet service provider Comcast intentionally slowing uploads from peer-to-peer file-sharing applications. And in 2007, Plusnet was using deep packet inspection to implement limits and differential charges for peer-to-peer, file transfer protocol, and online game traffic.
Network neutrality is best defined as a network design principle. The idea is that a maximally useful public information network aspires to treat all content, sites and platforms equally. This allows the network to carry every form of information and support every kind of application. Other net neutrality proponents argue that net neutrality means ensuring that all services are provided to all parties over the same quality of Internet pipe, with no degradation based on the service chosen by the end user and at the same cost. This definition is based on the assumption that data is transmitted on a “best efforts” basis, with limited exceptions.
Net Neutrality is the principle that every point on the network can connect to any other point on the network, without discrimination on the basis of origin, destination or type of data. This principle is the central reason for the success of the Internet. Net Neutrality is crucial for innovation, competition and for the free flow of information. Most importantly, Net Neutrality gives the Internet its ability to generate new means of exercising civil rights such as the freedom of expression and the right to receive and impart information. Advocates of net neutrality have raised concerns about the ability of broadband providers to use their last mile infrastructure to block Internet applications and content (e.g. websites, services, and protocols), and even to block out competitors. Opponents claim net neutrality regulations would deter investment into improving broadband infrastructure and try to fix something that isn’t broken.
Let’s say you want to watch a video online: you connect to the Internet, open your browser, and navigate to the video service of your choice. This is possible because the access provider does not seek to restrict your options. Without Net Neutrality you might instead find that your connection to video service A is being slowed down by your access provider in a way that makes it impossible for you to watch the video, while at the same time you can still connect rapidly to video service B and perhaps watch exactly the same content. Why would your access provider do such a thing? There are many reasons: the provider might, for example, a) have signed an exclusive agreement with the second video platform, or b) provide its own video services and therefore want to encourage you to use those instead of the service that you initially preferred. This is just one of the many possible violations of Net Neutrality.
Net neutrality is the principle that every website (of the same class) should be treated equally and not given any preferential treatment with respect to other websites. In other words, if you click on Google and on Yahoo, your internet service provider (ISP) will use the fastest possible routes to deliver each website to you; it doesn’t have special routes or other preferences for one site versus another. Net Neutrality doesn’t prevent variations in overall service: it may be that you pay twice as much as your neighbour in order to have more bandwidth, which could lead to Yahoo loading faster on your computer than it does on hers, even if you both clicked on Yahoo at the same time. Service providers can and should provide various tiers of overall service depending on your needs, but once you subscribe to a given tier of service, there shouldn’t be additional fees levied on you or on the sites you access.
The openness of the Internet is closely linked to the application of the principle of network neutrality or net neutrality. The Electronic Communications’ Framework (ECF) defines it as the ability for consumers to “access and distribute information or run applications and services of their choice.”
The revised Framework supports the following aspects of network neutrality:
3. Quality of Service
For a thoughtful definition, consider the one given by Daniel Weitzner, who cofounded the Center for Democracy & Technology, teaches at MIT, and works for the W3C. He lays out four points that neutral networks should adhere to:
1. Non-discriminatory routing of packets
2. User control and choice over service levels
3. Ability to create and use new services and protocols without prior approval of network operators
4. Nondiscriminatory peering of backbone networks.
Level playing field:
A level playing field is a concept about fairness: not that each player has an equal chance to succeed, but that they all play by the same set of rules. A metaphorical playing field is said to be level if no external interference affects the ability of the players to compete fairly. Government regulations tend to provide such fairness, since all participants must abide by the same rules. The internet is now a level playing field. Anybody can start up a website, stream music or use social media with the same amount of data that they have purchased from a particular ISP. The Internet has had net neutrality since its inception, which has levelled the playing field for all participants. Net neutrality refers to the absence of restrictions or priorities placed on the type of content carried over the Internet by the carriers and ISPs that run the major backbones. It states that all traffic be treated equally: packets are delivered on a first-come, first-served basis regardless of where they originated or where they are destined. Net neutrality became an issue as major search engines such as Google and Yahoo! increasingly generated massive amounts of traffic compared with other sites. It also became an issue because some carriers that offered subscription-based VoIP services were also transporting their competitors’ VoIP traffic. Although it might seem reasonable to charge sites that disseminate huge amounts of content, ISPs may have conflicts of interest. For example, if an ISP also streams on-demand movies, it can block access to its competitors or demand fees to lift the blockade. The implications down the road are even more alarming. If net neutrality were abandoned entirely, at some point the owners of all Web sites might have to pay the carriers’ fees to prevent their content from bogging down in a low-priority delivery queue. In the absence of neutrality, your ISP might favour certain websites over others, for which you might have to pay extra.
Website A might load at a faster speed than Website B because your ISP has a deal with Website A that Website B cannot afford. It’s like your electricity company charging you extra for using the washing machine, television and microwave oven above and beyond what you are already paying.
Net neutrality vs. open internet:
The idea of an open Internet is the idea that the full resources of the Internet and means to operate on it are easily accessible to all individuals and companies. This often includes ideas such as net neutrality, open standards, transparency, lack of Internet censorship, and low barriers to entry. The concept of the open Internet is sometimes expressed as an expectation of decentralized technological power, and is seen by some as closely related to open-source software. Proponents often see net neutrality as an important component of an open Internet, where policies such as equal treatment of data and open web standards allow those on the Internet to easily communicate and conduct business without interference from a third party. A closed Internet refers to the opposite situation, in which established persons, corporations or governments favor certain uses. A closed Internet may have restricted access to necessary web standards, artificially degrade some services, or explicitly filter out content. Tim Wu, who is credited with crafting the term and concept, took great pains to distinguish “Net Neutrality” from “Open Access” in his original paper that introduced the topic. Open Access is about opening essential infrastructure to competition. Net Neutrality accepts that there is no Open Access, and begins to regulate the Internet rather than just essential facilities used to access the Internet.
In 2015, the FCC defined the “Open Internet” as consisting of three fundamental building blocks.
1. No Blocking:
Broadband providers may not block access to legal content, applications, services, or non-harmful devices.
2. No Throttling:
Broadband providers may not impair or degrade lawful Internet traffic on the basis of content, applications, services, or non-harmful devices.
3. No Paid Prioritization:
Broadband providers may not favour some lawful Internet traffic over other lawful traffic in exchange for consideration — in other words, no “fast lanes.” This rule also bans ISPs from prioritizing content and services of their affiliates.
• Broadband Internet access consumers should have access to their choice of legal Internet content within the bandwidth limits and quality of service of their service plan.
• Broadband Internet access consumers should be able to run applications of their choice, within the bandwidth limits and quality of service of their service plans, as long as they do not harm the provider’s network.
• Consumers should be permitted to attach any devices they choose to their broadband Internet access connection at their premises, so long as there is no harm to the network.
• Consumers should receive meaningful information regarding their broadband Internet access service plans in order to make informed decisions in the marketplace.
Net neutrality is based on two general technical principles that are inherent in today’s Internet standards:
1. Best efforts delivery – the network attempts to deliver every packet to its destination equally, with no discrimination and provides no guarantee of quality or performance;
2. End-to-end principle – in a general purpose network, application-specific functions should only be implemented at the endpoints of the network, not in intermediate nodes.
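A toy model of these two principles: a best-efforts router is essentially a FIFO queue that never inspects who sent a packet, and under congestion it drops packets without regard to their source. The class and the (source, payload) packet format below are illustrative assumptions:

```python
from collections import deque

class BestEffortRouter:
    """Best-efforts delivery: strictly first-come, first-served, and the
    router never inspects source, destination, or payload. Per the
    end-to-end principle, application-specific logic stays at the
    endpoints; the middle of the network just queues and forwards."""
    def __init__(self, capacity):
        self.queue = deque()
        self.capacity = capacity  # buffer size in packets

    def receive(self, packet):
        if len(self.queue) >= self.capacity:
            return False  # congestion: drop the newest arrival, whoever sent it
        self.queue.append(packet)
        return True

    def forward(self):
        # Deliver in arrival order; no guarantee of quality or performance.
        return self.queue.popleft() if self.queue else None

r = BestEffortRouter(capacity=2)
r.receive(("netflix.example", "video chunk"))
r.receive(("startup.example", "web page"))
dropped = not r.receive(("search.example", "results"))  # buffer full: dropped
```

The point of the sketch is what is absent: there is no lookup table of favoured sources and no priority field, which is precisely the neutrality property the two principles encode.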
The foundation of net neutrality is ensuring that consumer choice is not influenced by differential ease or cost of access for Internet services. It means equal business opportunity for all Internet businesses, based on the premise that the ISP or telecom operator doesn’t create artificial distinctions between them on the basis of commercial relationships between them and some websites.
Three basic points of neutrality:
1. All sites must be equally accessible.
ISPs and telecom operators shouldn’t block certain sites or apps just because those sites don’t pay them. They should also not create gateways which influence the discovery of sites, giving preference to some sites over others.
2. All sites must be accessible at the same speed.
This means no speeding up of certain sites because of business deals and more importantly, it means no slowing down some sites.
3. The cost of access must be the same for all sites (per Kb/Mb or as per data plan).
That means no zero rating. In countries like India, Net Neutrality is more about cost of access than speed of access, because there are no fast and slow lanes: given the paucity of 3G spectrum and a very poor, sparse wireline network, there are only slow lanes. In India, the proposal of an internet access provider to charge for usage of a free communication platform that needs only a Wi-Fi connection, and circumvents the need for a mobile communication platform, has sparked off a raging controversy over whether this violates net neutrality. If this were offensive to customers of that service provider, they could always shift to another provider who does not impose such charges, retaining the same mobile number. If multiple service providers got into a cartel to impose such charges, the competition regulator could step in and impose crippling penalties on members of the cartel.
Network neutrality advocates seek to require ISPs to maintain the Internet as a “network of networks” seamlessly interconnecting facilities without favouring any category of content provider or consumer. Network neutrality in application would require ISPs to continue routing traffic on a best-efforts basis, ostensibly to foreclose the potential for the Internet to fragment and balkanize into various types of superior access arrangements, available at a premium, and a public Internet increasingly prone to real or induced congestion. Opponents of compulsory network neutrality seek to differentiate service in terms of quality, price and features to accommodate increasingly diverse user requirements. For example, online game players, IPTV viewers and VoIP subscribers may need prioritization of their traffic streams so that their bits arrive on time, even if this outcome requires ISPs to identify and favour those traffic streams. ISPs want the flexibility to offer different options for how consumers access the Internet and how content providers reach consumers. Consumer tiering could differentiate service in terms of bit-rate speeds, the amount of permissible traffic carried per month, and how an ISP would handle specific types of traffic, including “mission critical” content that might require special treatment, particularly when network congestion is likely. While consumer tiering addresses quality of service and price discrimination at the first and last kilometer, access tiering could differentiate how ISPs handle content upstream in the Internet cloud that links content providers and end users. Network neutrality advocates have expressed concern that ISPs could use diversifying service requirements as cover for a deliberate strategy to favour their own content and to extort additional payments from users and content providers threatened with intentionally degraded service.
Many network neutrality advocates speak and write in apocalyptic terms about the impact of price and service discrimination and how it will eviscerate the Internet and enable carriers to delay or shut out competitors and ventures unwilling or unable to pay surcharges. The head of one consumer group claims that incumbent telephone and cable companies can reshape the nation’s digital destiny by branding the Internet and foreclosing much of its societal and cultural benefit. On the other hand, opponents of network neutrality categorically reject as commercially infeasible any instance of unreasonable discrimination or service degradation. Network neutrality opponents also note that ISPs typically qualify for a regulatory “safe harbor” that largely insulates them from regulation, because they operate as value-added information service providers and not telecommunications service providers. While the latter group incurs traditional common carrier, public utility responsibilities, including the duty not to discriminate, the former group enjoys quite limited government oversight in most nations. Opponents of network neutrality see no actual or potential problems resulting from ISPs having the freedom to discriminate and diversify service. Without such flexibility, opponents of network neutrality question whether ISPs will continue to risk investing the billions of dollars needed for construction of next-generation network infrastructure.
There are four broad issues with reference to net neutrality.
The case in which all bits are accorded the same priority but are priced differently is a hybrid case of net neutrality: it satisfies net neutrality with respect to priority but not with respect to price. Zero rating is indicative of this, where the bits of selected applications that fall under the plan are priced at zero for the consumer, while they are given neither higher nor lower priority compared to others. Zero rating is, however, an extreme form of pricing. We can envision a situation in which this leads to large OTTs tying up with large TSPs/ISPs to provide zero-rating schemes, while smaller and start-up OTTs are left out of the equation by the economics of subsidy. The other case is when the ISP charges the same for each bit but prioritizes certain OTT content. This case involves the ISP implementing technologies such as advanced cache management and Deep Packet Inspection, among others. From the consumer’s point of view, it provides better Quality of Experience (QoE) without additional price, and hence can possibly increase consumer surplus. It may also involve close cooperation and agreement between select content and service providers, and it might decrease the quality of experience of other content services that are not in the scheme.
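The billing mechanics of zero rating described above can be sketched in a few lines of Python. This is a minimal model with purely hypothetical app names and tariff figures: bits from apps inside the zero-rated bundle are billed at zero, while everything else is metered as usual.

```python
# A minimal sketch (hypothetical tariffs and app names) of how zero rating
# changes what a consumer pays: all bits get the same network priority,
# but bits from apps in the zero-rated bundle are billed at zero.

ZERO_RATED_APPS = {"bigsocial", "bigsearch"}  # hypothetical large OTTs in the bundle
PRICE_PER_MB = 0.25                           # hypothetical tariff, Rs per MB

def monthly_bill(usage_mb_by_app, zero_rated=ZERO_RATED_APPS, rate=PRICE_PER_MB):
    """Bill only the traffic that falls outside the zero-rated bundle."""
    billable = sum(mb for app, mb in usage_mb_by_app.items() if app not in zero_rated)
    return billable * rate

# Identical traffic volumes, very different bills for the consumer:
usage = {"bigsocial": 800, "startup_video": 800}
print(monthly_bill(usage))                       # only the startup's 800 MB is billed
print(monthly_bill(usage, zero_rated=set()))     # a neutral plan bills both
```

The point of the sketch is the asymmetry in the last two lines: under zero rating, using the bundled app is effectively half-price for this consumer, which is exactly the distribution advantage a start-up outside the bundle cannot buy.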
The internet access providers claim that service providers, like Netflix and Google, are getting a “free ride” on their network, since those services are popular with their users, and they’d like to get those (very successful) companies to pay. Wait, so internet companies don’t pay for bandwidth? They absolutely do. And here’s the tricky part of this whole thing: everyone already pays for their own bandwidth. You pay your access provider, and the big internet companies pay for their bandwidth as well. And what you pay for is the ability to reach all those sites on the internet. What the internet access providers are trying to do is to get everyone to pay twice. That is, you pay for your bandwidth, and then they want, say, Netflix, to pay again for the bandwidth you already paid for, so that Netflix can reach you. This rests on the false belief that when you buy internet service from your internet access provider, you haven’t bought with it the ability to reach sites on the internet. The big telcos and cable companies want to pretend you’ve only bought access to the edge of their network, and that internet sites should have to pay extra to become available to you. In fact, they’ve been rather explicit about this. Back in 2006, AT&T’s Ed Whitacre stated it clearly: “I think the content providers should be paying for the use of the network – obviously not the piece for the customer to the network, which has already been paid for by the customer in internet access fees, but for accessing the so-called internet cloud.” In short, the broadband players would like to believe that when you pay for your bandwidth, you’re only paying for the stretch from your access point to their router. Proponents say net neutrality is fundamental to advancing the Internet. If deep-pocketed digital media brands are allowed to pay for faster broadband connections, then relatively young, small ones will be at a competitive disadvantage, they argue.
The anti-net neutrality argument is that ISPs should be able to allocate their resources and establish business partnerships however they deem fit, and allowing the FCC to regulate how they do business would actually stifle innovation.
High Internet use and how it affects Net Neutrality:
An example is the use of a movie streaming service such as Netflix. When sending a small text message such as an email, only a small amount of data needs to move over the Internet. A full-length motion picture in high definition is a dramatically larger piece of data, and it takes up a lot more of the pipe to get from one point to another. It’s estimated that Netflix, during peak movie-watching times such as Saturday night, accounts for as much as one-third of all the data moving on the Internet. If every user had to get to the same place on the Internet to start watching a Netflix show, the connections to Netflix would become congested, which in fact can happen. In addition to Netflix, individual Internet users watch YouTube videos, search Google, download files, and listen to music streaming services such as Pandora and Spotify. A common way for individuals to connect to the Internet is to pay an Internet Service Provider (ISP) a fee for an Internet connection. An ISP provides access to everything on the Internet for you as a consumer. However, moving large things like movies and music is much more expensive, and requires larger and faster Internet “pipes,” than moving emails and simpler web pages. So who pays for the pipe that delivers the data to you has become one of the hot issues for Net Neutrality. The issue of Net Neutrality for bandwidth comes down to some people having deep pockets and others not. Could someone pay the company that delivers the Internet to consumers a fee to get to people’s homes faster, and if they could, should they have to pay, and if so, what happens to the Internet users (like small businesses, schools, or individual websites) who don’t pay?
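The scale difference between an email and a movie can be made concrete with some back-of-the-envelope arithmetic. The sizes below are assumptions for illustration (roughly 50 KB for an email, roughly 4 GB for an HD film), not figures from the text:

```python
# Back-of-the-envelope arithmetic (assumed sizes) for why video strains the
# "pipes" so much more than email: compare transfer times on the same link.

EMAIL_MB = 0.05      # a typical email with headers, assumed ~50 KB
HD_MOVIE_MB = 4000   # an HD feature film, assumed ~4 GB

def transfer_seconds(size_mb, link_mbps):
    """Seconds to move size_mb megabytes over a link of link_mbps megabits/s."""
    return size_mb * 8 / link_mbps  # 8 bits per byte

link = 10  # a 10 Mbit/s consumer connection
print(transfer_seconds(EMAIL_MB, link))     # a small fraction of a second
print(transfer_seconds(HD_MOVIE_MB, link))  # 3200 seconds, i.e. ~53 minutes
```

On these assumptions the movie occupies the pipe for tens of thousands of times longer than the email, which is the entire economic argument about who should pay for the bigger pipes.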
The following are the major concerns of network neutrality:
1. Non-Discrimination: Internet services should be provided all over the world without any discrimination. Anyone can post comments or develop their own blogs or websites. Users can search for anything and search engines will show all available matches without any discrimination.
2. Content Diversity: A service provider cannot change the contents of a website according to its requirements.
3. Commercial Use: Network neutrality governs the rules and principles that are suitable for every business owner. There are no specific boundaries for commercial website and e-business owners.
4. IP Telephones: The IP telephone, which uses Voice over Internet Protocol (VoIP), allows anyone to make a call using a computer connected to the Internet. Voice chats, Skype and other chat services are the best examples of VoIP. These should not be restricted.
Why do we want network neutrality in the first place?
1. A free and open internet is the single greatest technology of our time, and control should not be at the mercy of corporations.
2. A free and open internet stimulates ISP competition.
3. A free and open internet helps prevent unfair pricing practices.
4. A free and open internet promotes innovation.
5. A free and open internet promotes the spread of ideas.
6. A free and open internet drives entrepreneurship.
7. A free and open internet protects freedom of speech.
An Internet user should be able to connect to any other legal endpoint without interference from the service provider. This is analogous to ensuring every telephone can call every other telephone, anywhere, without restrictions on connectivity or quality.
From the user’s perspective, net neutrality eliminates:
•Connection, admission and access discrimination;
•Paid prioritization or scheduling of access and/or transport;
•Controls and limitations on applications and contents.
Freedom and Net Neutrality:
Freedom is the value that people can do what they want, make their own decisions, and express their own opinions. From the content providers’ perspective, the Internet is the platform that gives tremendous freedom to individual users and innovators. They argue the remarkable success of the Internet is based on “a few simple network principles – end-to-end design, layered architecture, and open standards – which together give consumers choice and control over their online activities”. Academics and interest groups also invoke freedom to support Net neutrality legislation. However, freedom is also invoked by opponents of Net neutrality. For example, one anti-Net neutrality academic argues that “the best broadband policy for the United States would result in lots of choice, innovation, and low prices”. An anti-Net neutrality service provider downplays the extent to which differentiation among users is a hindrance to consumer choice and emphasizes that, “what would be a threat to consumers and to free speech is the elimination of competition”. Thus, freedom was used to argue both sides of the debate. We must keep the Internet free and open. If you care about any of the following freedoms, then you should care about preserving net neutrality:
Freedom from monopolies
Freedom to start a business and compete on a level playing field
Freedom of online speech
Freedom to visit any Website you want at the fastest browsing speed
Net neutrality, free speech and media:
According to the Pew Research Center, half of all Americans cite the Internet as their main source for national and international news. For young people, that number is 71 percent. I do not mean to imply that we have reached a point where newspapers are becoming obsolete or that broadcast television is a relic of the past. Much of the news online still comes from broadcast and print outlets, either on their own websites or on other sites that “aggregate” and repeat their content. But the Internet is undoubtedly shaping how we distribute and consume the news today. The future of journalism is inextricably linked with the future of the Internet. That is why Net Neutrality matters and why publishers, journalists and everyone who seeks to influence or contribute to our shared culture should be worried. Verizon could strike a deal with CNN and hinder its subscribers’ ability to access alternative news sources. Or, once its merger conditions expire, Comcast could slow access to Al Jazeera because it wants to promote its NBC news offerings. Computer scientists at Microsoft have shown that people will visit a website less often if it’s slower than a rival site by more than 250 milliseconds. That’s a blink of an eye. The absence of Net Neutrality means that Internet service providers will have the power to silence anyone who cannot or will not pay their tolls. And that is why, in 2010, Senator Al Franken called Net Neutrality the First Amendment issue of our time. No journalist or creator should be subject to the commercial or political whims of an ISP. True, many of the biggest media companies may be able to afford to pay for prioritization. They may even like the idea because their deep pockets can ensure their content continues to be seen. So it’s not the big guys who would suffer in the absence of net neutrality.
It is the independent journalists, rising stars and diverse voices who have grown up with and thrived on the open Web who would suffer in the absence of net neutrality.
Broadband providers could take away your most basic rights in the absence of net neutrality:
1. Freedom of the Press
If you wanted to start an international media company 20 years ago, you couldn’t do it on your own. The barriers to entry — getting access to a printing press and developing the complex infrastructure to distribute your work — were huge. With the open Internet, however, anyone can start a news site and publish articles or videos, without worrying about whether people can read them. Without net neutrality, ISPs can block or slow down news sites for any reason, be it commercial or ideological. For example, Comcast could block NYTimes.com or slow it to a crawl because the online newspaper published an op-ed in favor of net neutrality or even reported something negative about the company.
2. Free and Fair Elections
The large ISPs already have a major influence on politics, with Verizon alone spending $53 million on campaign donations and lobbying since 2010. However, without net neutrality, there’s nothing to stop the ISPs from influencing elections even more directly. Your broadband provider could block the website of one candidate while speeding up that of another. It could even censor the sites for political action committees that support a viewpoint or candidate it opposes. Back in 2007, Verizon initially refused to send out text messages from a pro-abortion rights group, but backed down under pressure. What if AT&T decides one day that it opposes capital punishment so much that it blocks prodeathpenalty.com and the site of any gubernatorial candidates that support the practice? Activists of any stripe should be concerned about their right to publish content that an ISP might disagree with. In 2005, Canadian ISP Telus blocked the site of a labor group that encouraged its workers to strike. Even more insidiously, ISPs can selectively block government websites that provide voter information like polling locations and registration forms. If they succeed in lowering voter turnout in certain areas, that could change the course of an election.
3. Freedom of Association
You always talk to your mom on Skype, but then your ISP signs an exclusive deal to make Google Hangouts its only allowed chat service. Meanwhile, Mom’s ISP on the other side of the country serves Hangouts at unusable speeds, but gives Skype its fast lane. This scenario may sound crazy, but without any legal constraint, your ISP has every incentive to swing priority access deals with some messaging services while blocking others. There’s already a precedent for blocking messaging clients in the world of wireless broadband. Back in 2009, AT&T blocked the iPhone from making Skype calls on its mobile network, but relented under pressure from the FCC. In 2012, the company also blocked Apple FaceTime on the iPhone. Of course, you and your mom can always talk on old-fashioned landline phones if you both still have them. Unlike broadband providers, wired phone services are defined as common carriers and are legally obligated to accept calls from anyone. VoIP services such as Vonage are exempt, as are cell phone carriers.
4. Freedom to Start a Business
If you decide to open up a restaurant in town and a street gang demands money not to destroy your place, you’d call the police. But if your business lives on the Internet, you could have as many as a dozen different ISPs in the U.S. shaking you down and, without net neutrality, no legal recourse against them. The most important question raised by net neutrality is not “should the government regulate the Internet” but “should a dozen ISPs be allowed to control thousands of other companies?” Whether you’re trying to start the next Netflix or you’re a mommy blogger eking out a living on ad revenue, you could be forced to pay broadband providers in order to reach their customers. If you can’t pay, providers could slow your site or service down to the point that nobody wants to use it. The smart money is already abandoning many Internet startups.
5. Freedom of Choice
You like to do all your shoe shopping at Zappos, but your ISP has an exclusive clothing deal with Walmart, so it slows down Zappos.com so badly that each page takes a minute to load and you get timeout messages when submitting your credit card information. You might be determined enough to keep visiting your favourite online shoe store despite these roadblocks, but most people won’t. You’ve been using Gmail as your primary email address for years, but your ISP decides to slow down that service and speed up Microsoft’s Outlook.com instead. How long will you stick with the slow email over the fast one? In a world where ISPs can slow down or outright block whatever services they like, your freedom to choose everything from your email client to your online university could disappear.
6. Freedom of expression
Proponents of net neutrality believe that an open internet, where users can connect to any site or use any application, is the best guarantee of freedom of expression. They fear that traffic-control techniques like DPI represent a step toward censorship, whereby governments could censor (or pressure commercial companies to censor) opposing points of view. By blocking or slowing down certain sites, or even just excluding certain services from specialised offers, network operators could make it harder for citizens to access sites expressing certain points of view. Opponents of net neutrality regulation suggest that guidelines could indicate what kinds of traffic management techniques are permitted and under what circumstances (e.g. judicial supervision). One legal scholar has argued that private organisations (most ISPs are private) performing reasonable traffic management (including prioritising traffic) would likely not be acting contrary to the European Convention on Human Rights (though practices clearly aimed at restricting competition or media plurality would be). On the other hand (perhaps surprisingly), a very strict codification of net neutrality principles might be held by the same measure to restrict unfairly the freedom of ISPs to offer different levels of service (like different classes on airlines) and manage their businesses as they saw fit.
When your ISP is using DPI to read your data, it is violating your privacy.
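The core idea behind the Deep Packet Inspection mentioned above is simple: instead of looking only at packet headers (addresses and ports), a middlebox reads into the payload and matches byte patterns to guess which application generated the traffic. The following is a toy illustration of that idea, not a real DPI engine; the signature list is drastically simplified:

```python
# A toy illustration (not a real DPI engine) of payload-based traffic
# classification: match the first bytes of a packet's payload against
# known application signatures instead of inspecting only the headers.

SIGNATURES = {
    b"GET ":           "http",        # start of a plain HTTP request
    b"\x13BitTorrent": "bittorrent",  # BitTorrent handshake: length byte 19 + protocol name
    b"\x16\x03":       "tls",         # TLS handshake record header
}

def classify(payload: bytes) -> str:
    """Return a guessed application label for a packet payload."""
    for prefix, label in SIGNATURES.items():
        if payload.startswith(prefix):
            return label
    return "unknown"

print(classify(b"GET /index.html HTTP/1.1\r\n"))  # http
print(classify(b"\x16\x03\x01\x00\x05"))          # tls
```

Once traffic can be labelled this way, an operator can throttle, block, or bill by application, which is precisely the capability the net neutrality debate is about; it is also why encryption, which hides the payload, complicates such classification.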
Equality is the state of being equal, especially in having the same rights, status, and opportunity for all people. The value of equality is invoked in this case to refer to network players and consumers having the same rights and opportunities. Proponents of Net neutrality claim that service providers “should not discriminate among content or application providers”. To assure equal competition, Net neutrality regulation is thus viewed as necessary by these Net neutrality advocates. Service providers, not surprisingly, view equality differently from Net neutrality advocates. Service providers argue that discrimination does not exist in the reality of competition between service providers. They argue that it is inappropriate to excessively rely on equality. For example, in the words of one service provider, “Unfortunately, because network neutrality seems like such a sensible idea and has so much momentum, various parties have sought to extend the definition beyond this basic principle — in ways that favor their own interests and which are, ironically, non-neutral”. Thus, the opponents in the Net neutrality debate have very different and contrasting views on equality.
Creativity is the ability to create new ideas or things involving uniqueness and imagination. Both proponents and opponents of Net neutrality agree on the need for innovation. As one content provider explains, “It is innovation, not legislation that created our service and brought this competition to consumers”. He further urges, “The Internet remains an open and competitive foundation for innovation”. Service providers also see the importance of investment in innovation, noting that “we need to ensure that government policy encourages vigorous investment in continually upgrading network capacity”. Thus, Net neutrality supporters and opponents agree that creativity is an important value in this debate.
Social justice:
Social justice is related to correcting injustice and caring for the weak. Net neutrality proponents say that net neutrality allows level playing field to start-ups, dissidents, underprivileged, oppressed and small entrepreneurs. Net neutrality opponents frequently invoke social justice to support the notion that “those who cause the costs should be charged” in the words of one academic. As an interest group representative explains, “businesses that seek to profit on the use of next-generation networks should not be free of all costs associated with the increased capacity that is required for delivery of the advanced services and applications they seek to market”. Thus, Net neutrality opponents also place emphasis on Net neutrality as a social justice issue.
There are two ways to undermine net neutrality. One is to segregate the premium market from the rest by allowing telcos to charge premium prices for high quality bandwidth. Another is to segregate the low end from the rest through initiatives such as Internet.org. The result will be identical – a segregated market that goes against the concept of the Internet as a utility where users pay fees that match long-term average costs.
The Reliance and Airtel playbook is simple:
1. Restrict access under the guise of a public good.
2. Charge companies like Facebook, WhatsApp and Skype for access or start charging users additional fees on top of data plans.
3. Give preferential treatment to in-house content. For example, if startup X creates a disruptive new product, the telco conglomerate can copy it and use the guise of Internet.org or Airtel Zero to gain distribution.
There are many reasons why Net Neutrality is not respected, among the most frequent ones are:
1. Access providers violate Net Neutrality to optimise profits
Some Internet access providers demand the right to block or slow down Internet traffic for their own commercial benefit. Internet access providers are not only in control of Internet connections, they also increasingly start to provide content, services and applications. They are increasingly looking for the power to become the “gatekeepers” of the Internet. For example, the Dutch telecoms access provider KPN tried to make their customers use KPN’s own text-messaging service instead of web-based chat services by blocking these free services. Another notable example of discrimination is T-Mobile’s blocking of Internet telephony services (Voice over IP), provided for example by Skype, in order to give priority to their own and their business partners’ services.
2. Access providers violate Net Neutrality for privatised censorship
In the UK, blocking measures by access providers have frequently been misused to block unwanted content. For instance, on 4 May 2012, the website of anti-violence advocates “Conciliation Resources” was accidentally blocked by child protection filters on UK mobile networks. Another example is Virgin Media. The company provides access to the Internet and increasingly uses Deep Packet Inspection. Virgin is now using this same privacy-invasive technology to police its network in an attempt to protect its own music business. In all of these cases, private companies police their users’ connections to censor what they guess may be unwanted content.
3. Access providers violate Net Neutrality to comply with the law
Governments are increasingly asking access and service providers to restrict certain types of traffic, and to filter and monitor the Internet to enforce the law. A decade ago, only four countries worldwide filtered and censored the Internet; today, there are over forty. In Europe, website blocking has been introduced, for instance, in Belgium, France, Italy, the UK and Ireland. This is done for reasons as varied as protecting national gambling monopolies and implementing demonstrably ineffective efforts to protect copyright.
India and Net Neutrality:
There’s a big debate on this going on in the United States, but why in India?
India has 1 billion people without internet access, and it is imperative in a democracy to have an open and free internet where users are free to choose the services they want to access—instead of a telecom operator deciding what information they can access. Internet apps and services are expected to contribute 5% to India’s GDP by 2020. That will only happen if entrepreneurs, big and small, have a level playing field that encourages innovation and non-preferential treatment—something that net neutrality ensures. Without net neutrality, only the big players will be able to strike deals with telcos, while the smaller players remain inaccessible. The problem began with Indian telecom players like Airtel, Vodafone and Reliance, who realised that users were replacing traditional texting with WhatsApp or Viber and traditional network calling with apps such as Skype. They now want the right to charge what they want, when they want and how they want. In effect, if Airtel doesn’t like YouTube but wants to push its own video app Wynk, it wants the right to offer Wynk for free while charging you a bomb to access YouTube. Reliance already has a Facebook-driven scheme called Internet.org, where you can access Bing for free but have to pay to access Google, and you have access to BabaJob for free while you have to pay for Naukri.com.
Net neutrality protest in India:
Although Indian telecom companies’ argument that they have invested a lot in buying spectrum and building infrastructure is not without merit, equal access to the internet can’t be compromised, for two basic reasons. The first is that if a data provider enters into a tie-up with a giant like Facebook to provide free access to it while charging money from its rivals – most of them very small players – for the same, it amounts to killing entrepreneurship and innovation. The second clinching argument in support of net neutrality is that once the telecom companies have charged for the data, they have no right to tell the user where to use that data. What you do with the data you pay for — watch a YouTube video, send a WhatsApp message or make a Skype call — is entirely your prerogative.
On December 25, 2014, Airtel, the country’s largest mobile operator with over 200 million active subscribers, dropped a bombshell: it wanted to charge customers extra for using services like Skype, Viber and Google Hangouts even though they had already paid for Internet access. If customers wanted to use a service that used Internet data to make voice calls — something known as VoIP — they would need to subscribe to an additional VoIP pack, the company said. Airtel was double-dipping and customers were furious. The tweets flew thick and fast. In less than four days, Airtel backtracked on its plans. It’s important to remember that it’s not just telecom companies that are interested in a non-neutral Internet in India. According to the TRAI consultation paper, 83 percent of India’s Internet users access the Internet from their mobile phones. This massive audience is crucial for multi-billion dollar corporations like Twitter, Facebook and Google. In February 2015, Reliance Communications and Facebook partnered to launch Internet.org in India, a service whose ostensible aim was to bring the Internet to the next billion people. In reality, Internet.org grossly violated net neutrality by offering free access to a handpicked list of websites and social networks while making users pay for others; Google bundled free data with its Android One phones; and WhatsApp tied up with multiple providers across the country to provide “WhatsApp Packs.” But if things are bad for consumers, they’re worse for businesses and startups that rely on an open Internet to reach customers. Telecom operators should be seeking to maximise revenues by making us use more of the Internet. They’re slicing the pie instead of growing the pie.
Services like WhatsApp that the Indian public has adopted are themselves a result of innovation that the telcos did not produce on their own. WhatsApp succeeded in India because the Multimedia Messaging Service (MMS) provided by telcos was prohibitively expensive and really hard to use. So instead of making money per message as the telcos intended, they are now making money out of running pipes, something they hold the licence for. By creating their own walled gardens within this free, equal and open internet structure, they are now trying to impose their own distribution channels, which would prevent more disruptions like WhatsApp in the future. In the case of a service like SMS, the user was charged only at one end, but in a service like WhatsApp both the sender and the receiver are billed for the data they consume.
Telcos say there is evidence of OTT communication services cannibalizing the revenues of the ISPs. Messaging revenues have already declined from 7-10% to 3%. Further, VoIP services like Skype, Viber, etc. have already begun to erode voice telephony revenues, a decline at present far more evident in the international calling segment. The revenue earned by telecom operators for one minute of traditional voice is Re 0.50 on average, compared to data revenue of around Re 0.04 for one minute of VoIP usage, which is 12.5 times less than traditional voice. This clearly indicates that the substitution of voice with data is bound to adversely impact the revenues of the telecom operators and consequently affect both their infrastructure-related spends and the prices consumers pay.
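The per-minute figures above can be checked with a few lines of arithmetic. Working in paise (1 rupee = 100 paise) keeps the numbers exact; the one-million-minute shift at the end is an illustrative assumption, not a figure from the text:

```python
# Checking the per-minute revenue arithmetic quoted above: Re 0.50/min for
# traditional voice vs around Re 0.04/min of data revenue for VoIP.
# Working in paise (1 rupee = 100 paise) keeps the arithmetic exact.

voice_paise_per_min = 50  # Re 0.50
voip_paise_per_min = 4    # Re 0.04

ratio = voice_paise_per_min / voip_paise_per_min
print(ratio)  # 12.5 — each minute that shifts to VoIP earns 1/12.5th the revenue

# Telco revenue forgone per (assumed) million minutes shifting from voice
# to VoIP, converted back to rupees:
loss_rupees = (voice_paise_per_min - voip_paise_per_min) * 1_000_000 / 100
print(loss_rupees)  # 460000.0
```

The 12.5x ratio confirms the figure quoted in the paragraph, and the second calculation shows why even a modest shift in minutes translates into substantial revenue for the operators to protect.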
What will happen if there is no net neutrality?
If there is no net neutrality, ISPs will have the power (and inclination) to shape internet traffic so that they can derive extra benefit from it. For example, several ISPs believe that they should be allowed to charge companies for services like YouTube and Netflix because these services consume more bandwidth compared to a normal website. Basically, these ISPs want a share in the money that YouTube or Netflix make. Without net neutrality, the internet as we know it will not exist. Instead of free access, there could be “package plans” for consumers. For example, if you pay Rs 500, you will only be able to access websites based in India. To access international websites, you may have to pay more. Or there could be different connection speeds for different types of content, depending on how much you are paying for the service and what “add-on package” you have bought. Lack of net neutrality will also spell doom for innovation on the web. It is possible that ISPs will charge web companies to enable faster access to their websites. Those who don’t pay may find that their websites open slowly. This means bigger companies like Google will be able to pay more to make access to YouTube or Google+ faster for web users, but a startup that wants to create a different and better video hosting site may not be able to do that. With the loss of net neutrality, small businesses, including those in the digital creative industries, would become strained or simply unable to compete with larger, more established businesses purely because of their inability to pay cable companies for fast lanes. Irving writes, “under the current proposal, most small business websites in America could be relegated to the slow lane – transformed into second-class players overnight.” The inability of smaller companies and businesses within the digital creative industries to promote themselves and do business over the internet would mean the end for most of those businesses.
And thus, an internet without net neutrality could look very different from the internet we’re all familiar with today. Of course, the larger, more familiar websites like Google (and YouTube), Facebook, Twitter, Netflix, Reddit, WordPress etc. would all look and function in much the same way. After all, despite their protests against cable companies, these large corporations are the ones with the resources and funds to pay for the fast lanes in order to stay in business.
What will happen to your Website if Net Neutrality is lost?
1. Slower internet
One of the most immediate and obvious effects that tiered internet would have is slower internet for lower-paying customers. As you probably know, slow upload and download speed is one of the top pet peeves for customers, and a site load time of even one second can decrease conversion by seven percent. If a significant proportion of your customers are on a lower-tiered plan, that could mean a huge bounce rate for your site.
2. SEO (search engine optimization):
A significant portion of the population being subjected to internet throttling could also have strong and limiting implications for SEO.
There are two major ways that search results could be affected:
•Search results are limited to only the sites that certain subscriber levels are able to access
•Search results remain mostly the same, and users have to guess blindly to find a result that is covered by their subscription or one that won’t choke under narrow bandwidth
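The first scenario above can be sketched as a simple filter. A minimal sketch, with hypothetical plan bundles and domain names:

```python
def visible_results(results, allowed_domains):
    """Keep only results whose domain is included in the subscriber's
    plan - the first scenario above, where search is limited by tier."""
    return [r for r in results if r["domain"] in allowed_domains]

results = [
    {"title": "Big retailer", "domain": "bigshop.example"},
    {"title": "Small startup", "domain": "startup.example"},
]
basic_plan = {"bigshop.example"}  # the startup's site is not in the bundle
print(visible_results(results, basic_plan))  # the startup never appears
```

Under such a scheme, a site excluded from the popular bundles becomes effectively invisible, regardless of how relevant it is to the query.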
3. Simpler content
With bandwidth at a premium, sites could be pushed towards simpler, lighter content. Loading a GIF is essentially the equivalent of loading 10-50 images – which is why GIFs often load more slowly than JPEGs and even YouTube videos.
4. Effect on entertainment & education sites
Slower web speeds mean that rich-content sites like BuzzFeed, Udacity and other sites that rely on images and video may suffer decreased performance, and may be forced to simplify if they wish to keep appealing to a wide audience.
5. Effect on e-commerce sites
This is not restricted to entertainment sites – many e-commerce sites use rich content such as interactive visuals and product videos to explain their products. Watching product videos can be an essential part of the online shopping experience. Testing of thousands of websites shows that rich content on e-commerce sites is highly valued by users, and is often requested where it is not already present. Can you imagine shopping on Apple’s site without all those demos and details?
Loss of net neutrality affects the poor the most:
School, public, and college libraries rely upon the public availability of open, affordable internet access for school homework assignments, distance-learning classes, e-government services, licensed databases, job-training videos, medical and scientific research, and many other essential services. We must ensure the same quality of access to online educational content as to entertainment and other commercial offerings. But without net neutrality, we are in danger of prioritizing Mickey Mouse and Jennifer Lawrence over William Shakespeare and Teddy Roosevelt. This may maximize profits for large content providers, but it minimizes education for all.
And with education comes innovation. While we tend to glorify industrial-park incubators and think-tanks, the fact is that many of the innovative services we use today were created by entrepreneurs who had a fair chance to compete for web traffic. By enabling internet service providers to limit that access, we are essentially saying that only the privileged can continue to innovate. Meanwhile, small content creators, such as bloggers and grassroots educators, would face challenges from ISPs placing restrictions on information traveling over their networks.
Protecting net neutrality and considering its effect on libraries isn’t just a feel-good sentiment about education and innovation, however. Network neutrality is actually an issue of economic access, because those who can’t afford to pay more for internet services will be relegated to the “slow lane” of the information highway. Many institutions and organizations will not want to be disadvantaged by slow load times, but they will struggle to afford the ISPs’ fees, and so the cost will be passed on to consumers. Want to get the news, your health report, or your homework done in a reasonable period of time? Pay extra!
Who will be most hurt by the end of net neutrality?
-Not big corporations. They will pay up and probably pass the cost to the consumer.
-Not the 1%. They will pay what they need to pay to get fast internet.
The people who will be hurt the most are those who need the internet most:
-Dissident, radical, but also innovative and entrepreneurial voices — people with new and different ideas.
-Small business owners, especially owners of businesses that currently use the “cloud” to store data and to connect employees.
-Educators and librarians who will be stuck on the back roads of the internet.
-Moderate- to low-income people who will undergo frustrating waits to get information because they can’t pay for the fast lane.
Zero-rating (also called toll-free data or sponsored data) is the practice of mobile network operators (MNOs) and mobile virtual network operators (MVNOs) not charging end customers for data used by specific applications or internet services on the MNO’s mobile network, under limited or metered data plans. It allows customers to use data services like video streaming without worrying about the bill shock that could otherwise occur if the same data were charged normally under their data plans. Internet services like Facebook, Wikipedia and Google have built special programs that use zero-rating as a means to provide their services more broadly in developing markets. The benefit for these new customers, most of whom have to rely on mobile networks to connect to the Internet, would be subsidised access to services from these providers. The results of these efforts have been mixed, with adoption in a number of markets, sometimes overestimated expectations, and a perceived lack of benefits for mobile network operators. In Chile, the Subsecretaría de Telecomunicaciones ruled that the practice violated net neutrality laws and had to end by June 1, 2014.
Zero-rating is essentially the practice of providing consumers with free access through sponsored data plans, arising out of the nexus between telecom companies and well-funded portals, websites and apps. While many may find nothing wrong with this practice, it is important to understand that, from the user’s perspective, it will fragment the internet into a free part, much akin to a walled garden, and a non-free part. This goes against the very DNA of the internet and its egalitarian nature, which is about universal access to all sites without any limitations placed by the telecom companies, or by content providers trying to act as gatekeepers.
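The billing mechanics described above can be sketched as follows. App names and traffic volumes are hypothetical; the point is simply that zero-rated apps do not count against the metered cap:

```python
def billable_mb(usage_by_app, zero_rated):
    """Sum only the traffic that counts against a metered data plan.
    usage_by_app: dict of app name -> MB used this month;
    zero_rated: set of app names the operator has agreed not to charge for."""
    return sum(mb for app, mb in usage_by_app.items() if app not in zero_rated)

# Hypothetical month: heavy video use on a partner app is invisible to the bill,
# while the same traffic on a rival app would be fully charged.
usage = {"partner_video": 3000, "rival_video": 500, "web": 200}
print(billable_mb(usage, zero_rated={"partner_video"}))  # 700 MB billed, not 3700
```

The asymmetry is stark: a consumer on a 1 GB cap can watch the partner's videos freely, while the rival's identical service eats the entire allowance.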
Zero-rated mobile traffic is blunt, anti-competitive price discrimination designed to favor telcos’ own or their partners’ apps while placing competing apps at a disadvantage. A zero-rated app is an offer consumers can’t refuse. If consumers choose a third-party app, they will either need to use it only over Wi-Fi or pay telcos hundreds of dollars to use data over 3G networks on their smartphones or tablets.
A problem worsened by volume caps:
Zero-rating isn’t new. Telcos have been zero-rating their fixed broadband IPTV offerings from day one. The difference is that the overwhelming majority of fixed broadband connections were, and still are, volume-uncapped. Unlike fixed lines, internet over smartphones and tablets comes with very restrictive volume caps in most markets. Yet zero-rated mobile traffic doesn’t need to be delivered at higher speeds or with a higher quality of service, nor does it need to be prioritized.
Airtel’s defence of zero-rating:
If the application developer is on the platform, they pay for the data and their customer does not. If the developer is not on the platform, the customer pays for data as they do now. Companies are free to choose whether they want to be on the platform or not. This does not change access to the content in any way whatsoever. Customers are free to choose which website they want to visit, whether it is toll-free or not. If they visit a toll-free site, they are not charged for data; if they visit any other site, normal data charges apply. Finally, every website, content or application will always be given the same treatment on our network, whether it is on the toll-free platform or not. As a company we do not ever block, throttle or provide differential speeds to any website. We have never done it and will never do it. We believe customers are the reason we are in business. As a result we will always do what is right for our customers.
Digital divide, net neutrality and zero rating:
A digital divide is an economic and social inequality according to categories of persons in a given population in their access to, use of, or knowledge of information and communication technologies. The divide within countries (such as the digital divide in the United States) may refer to inequalities between individuals, households, businesses, or geographic areas, usually at different socioeconomic levels or other demographic categories. The divide between differing countries or regions of the world is referred to as the global digital divide, examining this technological gap between developing and developed countries on an international scale.
Is Net Neutrality more important than Internet Access through zero rating to reduce digital divide?
Millions more people were using the Internet in Africa because of Internet.org, and job search was among the most popular activities. That’s amazing when you think about it – by offering a portion of the Internet for free (which definitely goes against net neutrality), millions of people – too poor or previously unwilling to pay for the net – were online and searching for better livelihoods within a few months. The problem with the Internet in India today – the one that exists now, where most bits are charged the same – is that it is far too expensive for most people. According to TRAI, only 20% of India is online – and that includes everyone who has turned on 2G even once on their phone. In short, the Internet is simply not relevant enough to most Indians. It is largely in English, it is expensive, and it is full of content that is not particularly relevant to most women, the elderly, the poor, the under-educated and the rural.
In many other markets – TV being a great example – content was made free and then supported by advertising: a limited number of broadcasters got to choose the TV shows that aired and paid for it all with ads. This made TV accessible to everyone, assuming they could afford the price of entry of a TV set. You could argue that a broadcaster choosing a TV show went against the concept of “TV neutrality”, but in exchange everyone got free content, i.e. TV shows. In Facebook’s Internet.org case, they included sites like Babajob that work on low-end phones, made their sites available in local languages and offered users something useful (like the chance to find a better job or get health information). In markets like the US, where Internet penetration is much higher, the debate about net neutrality is much more relevant. But if we hope to bring most of the Indian population online, something has to give. Either the government needs to stop charging a bundle for bandwidth licenses (witness the recent $18 billion 3G and LTE spectrum auction – who do you think will ultimately pay for that? Internet users), or those in power need to stop taking bribes to put up cell towers and fiber across the country (again, who ultimately pays? Mobile and internet users), or new business models must be developed so that net users pay less or nothing (e.g. people watch ads, programs like Internet.org cover their fees, companies that can afford it pay for their users’ bandwidth, richer users subsidise internet access for the poor, etc.).
Arguments about net neutrality shouldn’t be used to prevent the most disadvantaged people in society from gaining access, or to deprive people of opportunity. Eliminating programs that bring more people online won’t increase social inclusion or close the digital divide. It will only deprive all of us of the ideas and contributions of the two-thirds of the world who are not connected. We must also realize that this is India, not the US, not China. Over 60 per cent of the Indian population lives on less than US$2 per day. Indian market dynamics are very different from the rest of the world. We must give room to both sides of the industry to experiment, to bring the cost of the internet down significantly for the billion-plus population of India without blocking, throttling or discriminating against any service – with the basic principles of net neutrality intact. Because at the end of the day, that’s what matters: bringing over a billion Indians online.
Technically, Internet.org is an open platform that any website or app can join, but as Zuckerberg notes, it would be impossible to give the entire Internet away for free. “Mobile operators spend tens of billions of dollars to support all of Internet traffic,” he writes. “If it was all free they’d go out of business.” That means most services must necessarily be left out if Internet.org is to be financially viable for carriers.
This creates a system of fundamentally unequal access for the companies trying to reach these users and for users themselves. Facebook founder Mark Zuckerberg said ‘some access to the Internet is better than none at all and unequal access is better than no access’. But let’s start with the fact that only Reliance customers are eligible for free and selective Internet access through Facebook’s platform. Would any telecom operator who has been crying foul over losses in revenue due to people using cheaper communications options like WhatsApp, offer the web at no charge unless there was profit in it? By tempting users through free Internet, network operators can ensure they reach a wider audience.
Internet.org, the Facebook-led initiative to provide select apps and services to mobile phone users in emerging markets for free, recently passed 800,000 subscribers in India. Together with local telecom companies there, the service provides access to around 30 websites and services, including Facebook and Wikipedia, without charging the user for the mobile data necessary to use them. 20% of the Internet.org users currently on the platform did not previously use mobile data. Yet only 7% of the data used by Internet.org subscribers came through the initiative’s free, zero-rated offerings; other, paid services accounted for the remaining 93%. This suggests that zero-rating mainly provides customers with initial internet access; thereafter the service they use is almost entirely paid. And studies have shown that internet access reduces poverty and creates jobs.
Are there better ways to bridge the digital divide than Internet.org or Airtel Zero?
The strongest argument in favour of zero-rating is that it helps to broaden access and get the hitherto excluded population onto the internet. In India, this translates to 80% of the population, which underscores the huge digital divide that we need to bridge. While this is a noble goal, what needs to be understood is that the scope for abuse of market power through such zero-rated services is tremendous. It is ironic that the very websites that, as startups, benefited from the level playing field implicit in the principles of net neutrality are now engaged in a bid to expand their reach, damaging the internet as we know it in the process and skewing the balance against today’s startups. This practice is akin to the eventually disallowed practice of Microsoft bundling its own browser, Internet Explorer, with its operating system. The issue of access and bridging the digital divide can just as easily and cost-effectively be addressed through other transparent, competition-enhancing methods, such as government cash transfers to poor people.
Neither Internet.org by Facebook, Airtel Zero nor any other major zero-rating platform gives the choice to the consumer. Instead, the decisions are made by big telcos working in partnership with large Internet companies. Smaller firms are forced to lobby and sign up commercially in order to prevent their competitors from boxing them out and crushing them. This reduces entrepreneurship and local Internet innovation by placing firms in a situation where their local consumers are all locked into a limited platform under the control of a few giants. This is why regulators in Chile, the Netherlands, Slovenia and Canada have prohibited zero-rating, while their counterparts in Germany, Austria and Norway have publicly stated that zero-rating violates network neutrality. At times, this is a battle between access and neutrality. Facebook and Wikipedia are pitching free Internet access as a means of bringing the Internet to more people who can’t afford it. Facebook is using this to become the gateway to the Internet on mobile, so that access to the web runs through it. Remember that Google won its dominance by creating a great search product… it, too, is a gateway to the Internet, but mostly via desktops and laptops.
Even if it were the case that some zero-rating programs might create barriers to market entry for new start-ups, as net neutrality supporters argue, India may need to consider that not all zero-rating programs are likely to create such barriers. Further, India may need to balance any potential loss against the immediate benefit that zero-rating programs can provide by expanding access to Internet services. These platforms could provide rural areas with mobile access to basic search engines, social platforms, and e-commerce sites. That access could help small business owners and farmers tap into a larger market for their goods, and could bring basic education and information to rural areas. Even outside of the zero-rating context, policymakers in India are crafting new telecom regulations to strike a better balance between the benefits of net neutrality and the opportunities for more widespread Internet access.
The figure below shows that search neutrality is part of net neutrality:
Neutrality of search engines is called search neutrality. If ISPs should be subjected to “net neutrality”, should companies like Google be subjected to “search neutrality”? Search neutrality is the principle that search engines should have no editorial policies other than that their results be comprehensive, impartial and based solely on relevance. This means that when a user queries a search engine, the engine should return the most relevant results found in the provider’s domain (those sites of which the engine has knowledge), without manipulating the order of the results (except to rank them by relevance), excluding results, or in any other way biasing the results. Search neutrality should be understood as the remedy to conduct that involves any manipulation or shaping of search results; such conduct is commonly known as “search bias”. In this work, search neutrality should be understood in its broadest sense: the idea that search results should be free of political, financial or social pressures, and that their ranking is determined by relevance, not by the interests or opinions of the search engines’ owners. The importance attributed to search neutrality and search bias in recent years is closely linked to the role that search engines play in our information society. Indeed, search engines are currently the “gatekeepers” of considerable amounts of information scattered over the World Wide Web. Many users consider search engines the most important intermediaries in their quest for information. Users also believe that search engines are reliable, without realising that they have the power to hide or to surface democratically sensitive information. Search neutrality is related to network neutrality in that both aim to keep any one organization from limiting or altering a user’s access to services on the Internet.
Search neutrality aims to keep the organic search results (results returned because of their relevance to the search terms, as opposed to results sponsored by advertising) of a search engine free from any manipulation, while network neutrality aims to keep those who provide and govern access to the Internet from limiting the availability of resources to access any given content. Google is in the uncomfortable position of trying to stave off a corollary principle of search neutrality. Search neutrality has not yet coalesced into a generally understood principle, but at its heart is some idea that Internet search engines ought not to prefer their own content on adjacent websites in search results but should instead employ “neutral” search algorithms that determine search result rankings based on some “objective” metric of relevance. Whatever the merits of the net neutrality argument, a general principle of search neutrality would pose a serious threat to the organic growth of Internet search. Although there may be a limited case for antitrust liability on a fact-specific basis for acts of naked exclusion against rival websites, the case for a more general neutrality principle is weak. Particularly as Internet search transitions from the ten blue links model of just a few years ago to a model where search engines increasingly provide end information and interface with website information, a neutrality principle becomes incoherent.
Search engines produce immense value by identifying, organizing, and presenting the Internet’s information in response to users’ queries. Search engines efficiently provide better and faster answers to users’ questions than the alternatives. Recently, critics have taken issue with the various methods search engines use to identify relevant content and rank search results for users. Google, in particular, has been the subject of much of this criticism on the grounds that its organic search results—those generated algorithmically—favor its own products and services at the expense of those of its rivals. Almost four years have now passed since the European Commission started to investigate Google’s behaviour for abuse of dominant position in the Internet search market. During the investigation Google was accused of favourably ranking its own vertical search services to the detriment of its rivals. Competitors and other stakeholders argued that Google should be regulated through a “search neutrality principle”. Similar claims were expressed during the US Federal Trade Commission investigation relating to the same abusive conduct of Google. An independent analysis finds that own-content bias is a relatively infrequent phenomenon. Google references its own content more favorably than rival search engines for only a small fraction of terms, whereas Bing is far more likely to do so.
It is widely understood that search engines’ algorithms for ranking various web pages naturally differ. Likewise, there is widespread recognition that competition among search engines is vigorous, and that differentiation between engines’ ranking functions is not only desirable but a natural byproduct of competition, necessary to survival, and beneficial to consumers. Rather than focus upon competition among search engines in how results are identified and presented to users, critics and complainants craft their arguments around alleged search engine “discrimination” or “bias”. While a broad search neutrality principle is neither feasible nor desirable, this does not mean that dominant search engines should never be liable for intentionally interfering with their rivals’ hits in search results. Any such liability should be narrow, carefully tailored, and predictable. Search neutrality may thus have a future, not as a general principle, but as the misfitting tag line on fact-specific findings of egregious abuses by dominant search engines.
Search engines are attention lenses; they bring the online world into focus. They can redirect, reveal, magnify, and distort. They have immense power to help and to hide. We use them, to some extent, always at our own peril. And of the many ways that search engines can cause harm, the thorniest problems of all stem from their ranking decisions. The need for search neutrality is particularly pressing because so much market power lies in the hands of one company: Google. With 71 percent of the United States search market (and 90 percent in Britain), Google’s dominance of both search and search advertising gives it overwhelming control. Google’s revenues exceeded $21 billion last year, but this pales next to the hundreds of billions of dollars of other companies’ revenues that Google controls indirectly through its search results and sponsored links. One way that Google exploits this control is by imposing covert “penalties” that can strike legitimate and useful websites, removing them entirely from its search results or placing them so far down the rankings that they will in all likelihood never be found. Consider an example. The U.K. technology company Foundem offers “vertical search” – it helps users compare prices for electronics, books, and other goods. That makes it a Google competitor. But in June 2006, Google applied a penalty to Foundem’s website, causing all of its pages to drop dramatically in Google’s rankings; its business dropped off sharply as a result. The experience led Foundem’s co-founder, Adam Raff, to become an outspoken advocate: creating the site searchneutrality.org, filing comments with the Federal Communications Commission (FCC), and taking his story to the op-ed pages of The New York Times, calling for legal protection for the Foundems of the world. Another way that Google exploits its control is through preferential placement.
With the introduction in 2007 of what it calls “universal search,” Google began promoting its own services at or near the top of its search results, bypassing the algorithms it uses to rank the services of others. Google now favors its own price-comparison results for product queries, its own map results for geographic queries, its own news results for topical queries, and its own YouTube results for video queries. And Google’s stated plans for universal search make it clear that this is only the beginning. Because of its domination of the global search market and ability to penalize competitors while placing its own services at the top of its search results, Google has a virtually unassailable competitive advantage. And Google can deploy this advantage well beyond the confines of search to any service it chooses. Wherever it does so, incumbents are toppled, new entrants are suppressed and innovation is imperiled. Without search neutrality rules to constrain Google’s competitive advantage, we may be heading toward a bleakly uniform world of Google Everything — Google Travel, Google Finance, Google Insurance, Google Real Estate, Google Telecoms and, of course, Google Books. Some will argue that Google is itself so innovative that we needn’t worry. But the company isn’t as innovative as it is regularly given credit for. Google Maps, Google Earth, Google Groups, Google Docs, Google Analytics, Android and many other Google products are all based on technology that Google has acquired rather than invented. Even AdWords and AdSense, the phenomenally efficient economic engines behind Google’s meteoric success, are essentially borrowed inventions: Google acquired AdSense by purchasing Applied Semantics in 2003; and AdWords, though developed by Google, is used under license from its inventors, Overture.
Google was quick to recognize the threat to openness and innovation posed by the market power of Internet service providers, and has long been a leading proponent of net neutrality. But it now faces a difficult choice. Will it embrace search neutrality as the logical extension of net neutrality that truly protects equal access to the Internet? Or will it try to argue that discriminatory market power is somehow dangerous in the hands of a cable or telecommunications company but harmless in the hands of an overwhelmingly dominant search engine? Google is dominant because customers recognise that it is the best service, not because they are locked in. This success has been built through major investments in software and hardware, especially huge data centres. Its continued search activity over the years has reinforced its position, creating a kind of “information barrier” for potential competitors.
Although search neutrality is a part of net neutrality, there are fundamental differences:
Internet search can never completely be neutral. Search tools and criteria are never completely objective, since they are designed, in a way, to meet the profile of users. If this is done well, the search engine will be successful, and consumers will recognise it. While for ISPs the lock-in is a fundamental barrier for changing provider, in the search engine market the lock-in does not work. If there is an alternative search engine, a simple “click” is enough. One difference between net neutrality and search neutrality is that the search engines are already suppressing and biasing our access to net information. All the majors maintain “banning” departments that routinely block or suppress access by their users to individually hand-picked web sites without notice and for arbitrary and undisclosed reasons. Search engines maintain that they are publishers and therefore have editorial free-speech rights to delete, bias, edit, or otherwise manipulate organic (non-sponsored) search results in nearly any manner. Most users think of major search “engines” as automated mechanical connection services as opposed to editorial entities and specifically want such a service; if edited information is desired there are much better sources. As with net neutrality, the lack of search neutrality is especially injurious to small business. Political bias in search could allow a tiny group of people to significantly alter our “democratic discourse.” Another functional difference is that while ISPs are local, major search engines are global and can substantially control user access worldwide. Google’s total impact on Internet information access is much larger than that of any single ISP. ISPs claim they need the additional fees (beyond the existing Internet access fees at both ends of a communication) for improving their broadband networks and therefore should be allowed to set up “tiered” access with different levels of priority. 
Search engines claim they need the ability to block or suppress access by their users to particular, hand-picked web sites for arbitrary and undisclosed reasons in order to improve the quality of search results they deliver to their customers and that each deleted site has violated some unspecified content rule. Neither claim is really credible, especially in light of the massive self-interest in both cases.
Search engines are essential to our ability to connect to information on the Internet. Search engines also have the structural capacity to interfere with access by their users to specific web information. Search engines also have an economic incentive to control access by their users in order to leverage their own or a partner’s Internet content. There are only three major search engines; together Google, Yahoo, and Microsoft control more than 90 percent of U.S. web searches. Search users are not given the option of seeing editorially deleted sites, even if their search produces no results. Users are not even told that hand-picked sites are being deleted. If search engines provide a connection service, then they should follow rules similar to those applied to telcos and other information carriers. Solving the neutrality issue needs regulation or legislation that constrains search engines as well as ISPs.
Search engine optimization (SEO) is the process of affecting the visibility of a website or a web page in a search engine’s unpaid results – often referred to as “natural”, “organic” or “earned” results. In general, the earlier (i.e. higher-ranked on the search results page) and the more frequently a site appears in the search results list, the more visitors it will receive from the search engine’s users. SEO may target different kinds of search, including image search, local search, video search, academic search, news search and industry-specific vertical search engines. Whenever you enter a query in a search engine and hit ‘enter’, you get a list of web results that contain that query term. Users normally tend to visit websites that are at the top of this list, as they perceive those to be more relevant to the query. If you have ever wondered why some of these websites rank better than others, it is because of a powerful web marketing technique called search engine optimization. SEO is a set of techniques that helps search engines find your site and rank it higher than the millions of other sites in response to a search query. SEO thus helps you get traffic from search engines.
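As a toy illustration of ranking purely by relevance (the baseline that both SEO and the “search bias” debate take for granted), here is a minimal sketch that scores pages by how often the query terms appear in them. Real engines use hundreds of additional signals; the page names and texts are invented for the example:

```python
def rank_by_relevance(query, pages):
    """Rank pages by a naive relevance score: the total count of
    query-term occurrences in the page text. pages: dict of url -> text."""
    terms = query.lower().split()

    def score(text):
        words = text.lower().split()
        return sum(words.count(t) for t in terms)

    return sorted(pages, key=lambda url: score(pages[url]), reverse=True)

pages = {
    "site-a": "net neutrality and zero rating",
    "site-b": "net neutrality net neutrality explained",
}
print(rank_by_relevance("net neutrality", pages))  # → ['site-b', 'site-a']
```

A “search-neutral” engine would rank only by a relevance function like this; “search bias” is any deliberate adjustment of the ordering beyond it.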
Here are eight possible bases for search-neutrality regulation:
•Equality: Search engines shouldn’t differentiate at all among websites.
•Objectivity: There are correct search results and incorrect ones, so search engines should return only the correct ones.
•Bias: Search engines should not distort the information landscape.
•Traffic: Websites that depend on a flow of visitors shouldn’t be cut off by search engines.
•Relevance: Search engines should maximize users’ satisfaction with search results.
•Self-interest: Search engines shouldn’t trade on their own account.
•Transparency: Search engines should disclose the algorithms they use to rank webpages.
•Manipulation: Search engines should rank sites only according to general rules, rather than promoting and demoting sites on an individual basis.
How do I circumvent biased search results?
1. I always search multiple search engines for information, e.g. I query Google, Yahoo and Bing sequentially.
2. I never trust the first page and top-ranked websites as the best sources of information.
Cloud computing neutrality:
Does neutrality apply equally to cloud computing or are they completely different issues? If net neutrality applies to (public) Internet services, then would cloud neutrality relate to public information processing and storage (i.e., SaaS) services?
The three rules of net neutrality also apply to cloud computing:
•No service blocking – SaaS providers should not arbitrarily restrict or block access to computing and storage services;
•No service throttling – SaaS providers should not favour one customer over another in areas such as capacity, elasticity, accessibility, resilience or responsiveness;
•No paid priority services – SaaS providers should not selectively offer (or provide) better services to selected customers at the expense of others.
For example, the following might hypothetically be possible:
•A SaaS provider could favour one search engine over another by preventing or slowing down search scanning;
•A SaaS provider could degrade response times for certain companies (such as a re-seller or broker) or users.
It would seem that the whole question of neutrality – the fair and open availability of public IT services – is more complicated than it would be for other utilities such as water, roads or electricity. There is a need to look at net neutrality both holistically and technically as well as commercially and politically.
Is net already non-neutral? Do we already have fast lanes?
It turns out that our layman's understanding of how the Internet works — a worldwide Web of computers linked on a free, open network — is a bit of a fairy tale. The truth is that those fast lanes demonized by net neutrality advocates already exist. Highly successful, high-traffic Web companies like Google, Facebook and Netflix already pay for direct access — inside access, in some cases — to Internet service providers like Comcast, AT&T and Verizon. They do so by bypassing the internet backbone:
There are three types of fast lanes that exist today:
1. Peering: – Most Web companies need to send their data across the broader Internet backbone (the cables and data centers operated by companies around the world) before it arrives at an ISP and is served to individual users. Wealthier companies can pay ISPs for a direct connection called peering that bypasses the Internet backbone and speeds data transfers. This is called paid peering.
2. Content Delivery Network: – Ever wonder how Google can serve up search results so quickly? The search giant pays for the privilege to set up its own servers inside the bowels of ISPs so it can deliver the most popular searches and images even faster.
3. Paid prioritization: – Paid prioritization is a financial agreement in which a company that provides content, services, and applications over the Internet (an “edge provider”) pays a broadband provider to essentially jump the queue at congested nodes. These fast lanes actually work like toll booths, where paying companies get to go through the gate first when traffic is congested. Paid prioritization also covers the cases of broadband providers prioritizing their own content or that of an affiliate over the data from a competing edge provider (also called “vertical prioritization”).
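The "toll booth" behaviour of paid prioritization can be sketched as a strict priority queue at a congested node. This is a hypothetical simulation, not any ISP's actual scheduler; the packet names are invented:

```python
import heapq

# Packets from paying edge providers are served before best-effort
# traffic, like cars waved through a toll gate first.

def drain(queue):
    """Serve packets in priority order; lower class number = higher priority."""
    served = []
    while queue:
        _, _, name = heapq.heappop(queue)
        served.append(name)
    return served

PAID, BEST_EFFORT = 0, 1
arrivals = [("video-paid", PAID), ("blog", BEST_EFFORT),
            ("shop", BEST_EFFORT), ("stream-paid", PAID)]

queue = []
for seq, (name, cls) in enumerate(arrivals):
    # seq preserves arrival order within the same class (FIFO per class)
    heapq.heappush(queue, (cls, seq, name))

print(drain(queue))  # paid packets exit first despite arriving interleaved
```

Note that prioritization only matters when the node is congested; with spare capacity, every packet is served promptly regardless of class.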
Today, privileged companies—including Google, Facebook, and Netflix—already benefit from what are essentially internet fast lanes, and this has been the case for years. Such web giants—and others—now have direct connections to big ISPs like Comcast and Verizon, and they run dedicated computer servers deep inside these ISPs. In technical lingo, these are known as “peering connections” and “content delivery servers,” and they’re a vital part of the way the internet works. The real issue is that the Comcasts and Verizons are becoming too big and too powerful. Because every web company has no choice but to go through these ISPs, the Comcasts and the Verizons may eventually have too much freedom to decide how much companies must pay for fast speeds. Net isn’t neutral now. What we should really be doing is looking for ways we can increase competition among ISPs—ways we can prevent the Comcasts and the AT&Ts from gaining so much power that they can completely control the market for internet bandwidth.
Google is already running internet fast lanes:
Starting in 2012, Comcast got into a fight with Netflix over the amount of bandwidth the streaming video site required from Comcast-owned networks. Comcast refused to upgrade its equipment to handle the increased traffic unless Netflix paid up. The battle raged on for two years, during which Netflix service for millions of Comcast subscribers slowed to a crawl. Since Comcast essentially owns the last-mile connection to 22 million homes, Netflix had no choice but to pay for a direct peering arrangement. Verizon pulled a similar strong-arm tactic to get more money from Netflix in an earlier backroom deal.
By 2009, half of all internet traffic originated in less than 150 large content and content-distribution companies, and today, half of the internet’s traffic comes from just 30 outfits, including Google, Facebook, and Netflix. Because these companies are moving so much traffic on their own, they’ve been forced to make special arrangements with the country’s internet service providers that can facilitate the delivery of their sites and applications. Basically, they’re bypassing the internet backbone, plugging straight into the ISPs. Today, a typical webpage request can involve dozens of back-and-forth communications between the browser and the web server, and even though internet packets move at the speed of light, all of that chatter can noticeably slow things down. But by getting inside the ISPs, the big web companies can significantly cut back on the delay. Over the last six years, they’ve essentially rewired the internet. Google was the first. As it expanded its online operation to a network of private data centers across the globe, the web giant also set up routers inside many of the same data centers used by big-name ISPs so that traffic could move more directly from Google’s data centers to web surfers. This type of direct connection is called “peering.” Plus, the company set up servers inside many ISPs so that it could more quickly deliver popular YouTube videos, webpages, and images. This is called a “content delivery network,” or CDN. “Transit network providers” such as Level 3 already provide direct peering connections that anyone can use. And companies such as Akamai and Cloudflare have long operated CDNs that are available to anyone. But Google made such arrangements just for its own stuff, and others are following suit. Netflix and Facebook have built their own CDNs, and according to reports, Apple is building one too.
CDN (content delivery networks):
Let's take a real example: this website is hosted on a web server located in some part of America. If we have a visitor from Singapore, the page loading time for him will be relatively high because of the geographic distance between Singapore and America. Had there been a mirror server in either India or Australia, the page would load much faster for that visitor from Singapore. A content delivery network has servers across the world, and it automatically determines the fastest (or the shortest) route between the server hosting the site and the end user. So your page will be served from the server in Australia to a visitor in Singapore, and from America for a visitor in the UK. Of course there are other advantages, but this example should give you a good idea of why we need a Content Delivery Network. The use of a CDN is imperative for content providers who wish to improve the availability of their content to their end users. Apart from increasing the speed of access to websites, a CDN also increases content availability.
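A minimal sketch of the routing decision a CDN makes: serve each user from the edge server with the lowest latency to them. The server locations and latency figures below are invented for illustration, not real measurements:

```python
# Hypothetical edge servers and round-trip latencies (in ms) to user cities.
edge_servers = {
    "us-east":   {"Singapore": 230, "London": 80,  "New York": 10},
    "sydney":    {"Singapore": 90,  "London": 280, "New York": 210},
    "frankfurt": {"Singapore": 160, "London": 15,  "New York": 85},
}

def pick_server(user_city: str) -> str:
    # choose the server with the smallest latency to this user
    return min(edge_servers, key=lambda s: edge_servers[s][user_city])

for city in ("Singapore", "London", "New York"):
    print(city, "->", pick_server(city))
```

Real CDNs make this choice with DNS tricks and anycast routing rather than a lookup table, but the effect is the same: content travels a shorter distance, so it arrives faster and more reliably.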
If Web companies can already pay ISPs for preferential treatment, then why are net neutrality advocates making such a stink about fast lanes? Net neutrality loses its meaning and becomes irrelevant when ISPs and content providers arrange private pathways that avoid the global links. Technically you restrict fast lanes on a public highway, but you are freely allowing private highways for paying content providers. The effect is much the same. In the face of the monopolistic power of ISPs and their stiff resistance to regulation (which they are increasingly able to avoid in any case) and with perverse incentives to not increase bandwidth, the hope for maintaining a free and open Internet would seem to be a lost cause.
Do we indeed practice non net neutrality?
Non-NN scenarios can be categorized along two dimensions: the network regime and the pricing regime. The pricing regime denotes whether an access ISP employs one-sided pricing (as is traditionally the case) or two-sided pricing. We already have two-sided pricing. The network regime refers to the QoS mechanisms and corresponding business models that are in place. Under strict net neutrality, which prohibits any prioritization or degradation of data flows, only capacity-based differentiation is allowed. This means that content service providers (CSPs) or internet users (IUs) may acquire Internet connections with different bandwidths; however, all data packets sent over these connections are handled according to the best-effort (BE) principle, and thus, if the network becomes congested, they are all equally worse off. In a managed network, QoS mechanisms are employed as a preferential treatment of certain data packets. We already have voice/video priority over emails. So in terms of both the network regime and the pricing regime, there is already no net neutrality.
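The difference between the two network regimes can be sketched with a toy bandwidth allocator (all demands and capacities invented): under best effort, congestion scales every flow down equally; in a managed network, prioritized flows are served in full first and only the rest are squeezed.

```python
def allocate(capacity, demands, priority=()):
    """Serve priority flows in full first, then scale the rest to fit."""
    alloc = {}
    for name in priority:
        alloc[name] = min(demands[name], capacity)
        capacity -= alloc[name]
    rest = {n: d for n, d in demands.items() if n not in priority}
    total = sum(rest.values())
    scale = min(1.0, capacity / total) if total else 0.0
    for name, d in rest.items():
        alloc[name] = d * scale  # under congestion, all equally worse off
    return alloc

demands = {"voice": 10, "video": 60, "email": 30}  # Mbps, total 100 > capacity 80
print(allocate(80, demands))                       # best effort: all scaled by 0.8
print(allocate(80, demands, priority=("voice",)))  # managed: voice whole, rest squeezed
```

The second call is what "voice/video priority over emails" means in practice: the prioritized flow is untouched while best-effort flows absorb the entire congestion penalty.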
In developing countries, Google and Facebook already defy net neutrality:
In much of the world, the concept of “net neutrality” generates less public debate, given there’s no affordable Net in the first place. The next billion Internet users will be arriving mostly in the developing world, on low-end smartphones. In the emerging economies of the world, that’s pretty much how things already work, thanks to a growing number of deals Google and Facebook have struck with mobile phone carriers from the Philippines to Kenya. In essence, these deals give people free access to text-only version of things like Facebook news feeds, Gmail, and the first page of search results under plans like Facebook Zero or Google Free Zone. Only when users click links in e-mails or news feeds, go beyond the first page of search results, or visit websites by other means do they incur data charges. For people who have no Internet in the first place, the idea of net neutrality is not exactly top of mind. Getting online cheaply in the first place is a greater concern, and the American companies are often enabling that to happen. Internet access is expensive in developing countries—exorbitantly so for the vast majority of people. In Kenya the top four websites are Google, Facebook, YouTube (which is owned by Google), and the Kenyan version of Google. That pattern is fairly typical of Web usage in dozens of developing nations. And free services like Facebook Zero and Google Free Zone don’t have many critics among users. But the existence of a free and dominant chat, e-mail, search, and social-networking service makes it awfully hard for any competitor to arise. And Susan Crawford, visiting professor of law at Harvard University and a co-director of Harvard’s Berkman Center for Internet & Society, calls it “a big concern” that Google and Facebook are the ones becoming the portal to Web content for many newcomers. “For poorer people, Internet access will equal Facebook. That’s not the Internet—that’s being fodder for someone else’s ad-targeting business,” she says. 
"That's entrenching and amplifying existing inequalities and contributing to poverty of imagination—a crucial limitation on human life." Google had struck a deal with the major Indian mobile network Airtel to offer Free Zone, in this case giving people up to one gigabyte per month of free access to Gmail, Google+, and Google search. Some critics have called this unfair treatment that disadvantages competitors. Google and Facebook are doing more than just providing various forms of free data access. Those two companies and others, like Microsoft, are increasingly in the business of trying to expand infrastructure and related data-efficiency technologies that will, inevitably, be deployed in ways that benefit themselves. And because most of the smartphones that convey the Internet to users will be low-end Android phones, Google and Facebook are also battling to develop dominant apps for those phones. Some Internet service providers in the developing world talk about trying to charge companies like Google to carry their traffic, but that is unlikely to happen: they recognize that free versions of popular sites like Google and Facebook draw people into greater data usage, producing revenue.
Economics of net neutrality:
Since the controversial term "net neutrality" was coined by Professor Tim Wu of Columbia Law School in 2003, much of the debate on net neutrality has revolved around the potential consequences of network owners exercising additional control over the data traffic in their networks. Presently the obvious villains in the show are the Telecom Service Providers (TSPs) and the Internet Service Providers (ISPs), as they provide the last-mile bandwidth that carries content and applications to end users. Net neutrality is a specific approach to the economic regulation of the Internet and requires context in the wider literature on the economic regulation of two-sided markets: a platform provider (the TSP/ISP) connects end consumers on one side and over-the-top (OTT) content providers on the other.
As per the theory of two-sided markets, the provider is justified in charging a toll to avoid congestion in the network. The toll can be raised from the end user, the OTT, or both. The consumer is usually more price-sensitive than the OTT, so the tendency of the ISP is to raise the toll from the OTT. The market power of the provider would result in a total levy (the sum of the levies on the OTT and the end user) that is too high from the point of view of efficiency. Even if the levy falls within the efficient range, it may tend to be in the higher parts of that range. This tendency is checked by 'cross-group externalities' – the value enhancement obtained by the end users from the presence of OTTs and vice versa. Cross-group externalities soften the impact of the market power of the provider. Nevertheless, the fact of market power cannot be denied, given the low likelihood that an ordinary OTT can hurt a provider's business by refusing to connect through that provider. The principles of static efficiency outlined above do not suggest that over-the-top content providers cannot be charged, only that the market power of the ISP needs regulation. However, the principle of dynamic efficiency, i.e. the optimization of the rate of technological progress in the industry, suggests that OTTs, especially startups, need extra support. Indeed, the rapid growth of the Internet is a result of its low barriers to entry. When the considerations of dynamic efficiency outweigh the considerations of static efficiency, there may be justification in reversing the natural trend of charging OTTs and, instead, charging consumers. This has been the practice so far. It must, however, be noted that innovation is also needed in the ISP layer. The situation becomes more complex when there is vertical integration between an OTT and a TSP/ISP. This vertical integration can take several forms:
1. App store of a Content Service Provider (CSP) bundled (preferred bundling) by the TSP/ISP;
2. Arrangements between ISP and CSP;
3. ISPs providing content caching services, becoming content distribution networks, or even content providers.
Examples of such vertical integration, small and big, are many; recent ones include Google's announcements that it will provide Internet access through balloons and that it stores its data on the servers of ISPs for better access speeds. One view on vertical integration is that it allows complementarities to be tapped and is undertaken when the gains outweigh the known restrictions in choice faced by the consumer. For this view to hold, all linkages should be made known to the consumer, who must be deemed aware enough to understand the consequences. The other view is that vertical integration inhibits competition, as potential competitors have to enter both markets in order to compete. Further, when a provider provides communications services on its own, there is a conflict of interest with over-the-top communications services, for example between voice services provided by a telco and Skype. As we contemplate moving away from the traditional regime of the Internet, we must therefore be prepared to countenance the curbing of dynamic efficiency and the limitation of competition due to vertical integration and conflicts of interest between the provider and the OTT.
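The two-sided pricing logic described above can be made concrete with a toy numerical model (all parameters invented for illustration): the platform chooses a user fee and an OTT fee, each side's participation rises with the other side's size (cross-group externalities) and falls with its own fee, and because users are the more price-sensitive side, the profit-maximizing platform loads more of the toll onto the OTTs.

```python
import itertools

def participation(p_u, p_o):
    # Linear demand with cross-group externalities (illustrative numbers):
    #   n_u = 100 - 3*p_u + 0.5*n_o   (users value OTT variety)
    #   n_o =  50 - 1*p_o + 0.2*n_u   (OTTs value the user base)
    # Solve the two equations jointly for the fixed point.
    n_u = (100 - 3 * p_u + 0.5 * (50 - p_o)) / (1 - 0.5 * 0.2)
    n_o = 50 - p_o + 0.2 * n_u
    return max(n_u, 0), max(n_o, 0)

def profit(fees):
    p_u, p_o = fees
    n_u, n_o = participation(p_u, p_o)
    return p_u * n_u + p_o * n_o

# Grid-search the platform's profit-maximizing fee pair.
best = max(itertools.product(range(0, 41), repeat=2), key=profit)
print("profit-maximizing fees (user, OTT):", best)
```

In this sketch the optimal OTT fee comes out higher than the user fee, matching the argument that the price-insensitive side of a two-sided market bears more of the levy; the numbers themselves carry no empirical weight.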
Internet Pricing Structure:
The structure of the Internet can be pictured simply. On one side are content providers who upload their applications and websites onto the Web, usually via an Internet Service Provider, though it could be any of a variety of types of companies that sell access to the Internet. This is typically the only fee that content providers pay to access the Internet and Internet subscribers. ISPs connect their private networks to the Internet at its core. On the other side, broadband subscribers in homes and businesses across the country pay an ISP like a phone or cable company for online access. The pipes between this ISP's Internet access point and its subscribers' computers constitute a privately owned and operated subnetwork. This last stretch of wires and pipes is often referred to as the "last mile" of the Internet – the part that connects the network to individuals. The last mile is the heartland of the net neutrality debate. The cost of building a last-mile network is extremely high and is often borne entirely by the ISP that constructs the network. Building this type of network requires physical or wireless connections to be built between an ISP's Internet access point and each subscriber's household or business. This last-mile network is the ISP's most valuable asset. Some say that content providers profit from the last mile but do not compensate the ISP companies for their investment in the infrastructure that enables that profit.
ISP vs. OTT:
Vodafone has said that the government should tax over-the-top (OTT) players like WhatsApp, Viber, Hike and Facebook, as they are getting a "free ride" on telecom networks without paying for spectrum or any other fee. The operators have to pay taxes and license fees and have to share revenue with the government; the other guys have a complete free ride. WhatsApp has a user base of nearly 60 million in India. Services like WhatsApp have nearly eliminated revenues from SMS, while free calling on Skype and Viber (especially from international markets) is hitting operators' voice revenues. Popular Internet companies such as Google, Yahoo! and Facebook should start sharing revenues with telecom companies, according to Bharti Airtel. The company said that the telecom regulator should impose interconnection charges for data services just as it does for voice calls. Today, Google, Yahoo! and others are enjoying themselves at the cost of the network operators: ISPs are the ones investing in setting up data pipes while OTTs make the money. Amid the raging debate over net neutrality, mobile operators said that if they are not offered a level playing field with Net-based services such as Skype and WhatsApp, their businesses would be viable only by raising data prices by up to six times. Such high rates would become unaffordable for a large number of people, denying them access to the Internet. Nasscom discounted any notion of revenue loss from OTT players to TSPs. The apps created have made the internet more useful, and have opened up avenues not just for service providers but have also increased convenience and transparency and enabled newer services for consumers. This is driving data revenues for telecom companies. Loss-of-revenue arguments from TSPs are not evident in some of the recent quarterly results announced. In the long run it is likely to result in a win-win situation for both ISPs and OTT players.
The growth of OTT will spur demand for data that in turn generates additional revenues for TSPs, leading to a synergistic ecosystem. Nasscom felt it would be better if the government and the telecom industry work together to create a balanced environment for ISPs to invest in network infrastructure, rather than targeting the fledgling internet-based product and service providers.
Broadband Internet access has most often been sold to users based on Excess Information Rate or maximum available bandwidth. If Internet service providers (ISPs) can provide varying levels of service to websites at various prices, this may be a way to manage the costs of unused capacity by selling surplus bandwidth (or "leverage price discrimination to recoup the costs of 'consumer surplus'"). However, purchasers of connectivity on the basis of Committed Information Rate or guaranteed bandwidth capacity must expect to receive the capacity they purchase in order to meet their communications requirements. Various studies have sought to provide network providers the necessary formulas for adequately pricing such a tiered service for their customer base. But while network neutrality is primarily focused on protocol-based provisioning, most of the pricing models are based on bandwidth restrictions.
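A small sketch of how billing might work under such a tariff, using a made-up price schedule (a CIR of 100 Mbps covered by the flat fee, an EIR allowing up to 50 Mbps of billable burst, and anything beyond dropped rather than billed):

```python
# Illustrative CIR/EIR billing; all prices and rates are invented.
def monthly_bill(peak_mbps, cir=100, eir=50, flat_fee=500.0, excess_rate=4.0):
    # usage up to the CIR is covered by the flat fee;
    # bursts are billed per Mbps but only up to the EIR ceiling
    billable_excess = min(max(peak_mbps - cir, 0), eir)
    return flat_fee + billable_excess * excess_rate

print(monthly_bill(80))   # under CIR: flat fee only -> 500.0
print(monthly_bill(130))  # 30 Mbps of excess -> 500 + 30*4 = 620.0
print(monthly_bill(200))  # excess capped at the EIR -> 500 + 50*4 = 700.0
```

The point of the committed rate is the guarantee: the customer pays the flat fee precisely because the first 100 Mbps must always be deliverable, which is why CIR-based contracts sit uneasily with best-effort congestion.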
It’s all about the money:
Internet users currently pay a flat rate for their service, whether they simply use it for checking emails or performing more data-heavy tasks including streaming movies and television programs.
It's true that without net neutrality rules, ISPs could theoretically block or throttle access to some sites to extort fees, promote their own services, and so on. But in practice, such abuses of market power have been extremely rare. In reality, the net neutrality debate is about how costs will be shared to improve the Internet for everyone. Peak-hour Internet traffic surged 32% in 2013, according to Cisco. To meet consumers' appetites for more bandwidth, ISPs are spending hundreds of billions of dollars on network expansion and fiber deployments. In the U.S., Netflix is a key driver of traffic growth. Not only is the company adding subscribers, it is also encouraging existing subscribers to spend more time watching Netflix, allowing families to stream multiple shows at once, and promoting higher-quality video streaming, including 4K TV (also known as Ultra-HD). Netflix encourages all of these behaviors because they will lead to a larger, more loyal subscriber base, which will boost the company's profit in the long run. Yet they all require hugely expensive Internet capacity that does not exist today. ISPs like AT&T want data-heavy services like Netflix, the primary beneficiaries of faster Internet service, to chip in for these upgrades to faster broadband. Netflix believes the ISPs should shoulder the full costs, which would ultimately be spread among all Internet users, whether or not they subscribe to Netflix. Understandably, most Americans don't want to pay more for Internet service. But the possibility of tougher regulation on broadband service has spooked ISPs, which don't want to invest tens or hundreds of billions of dollars unless they can be sure of recouping their costs. AT&T recently froze its plans for a massive investment to expand its high-speed fiber network.
This underscores the point that people should not worry so much about ISPs artificially slowing their service or blocking some websites to make way for priority users. The real concern is that ISPs won’t invest enough money to keep pace with the extraordinary growth in Internet traffic, especially for peak periods. Unfortunately, until ISPs, content providers, and the government agree on who will share in funding the hundreds of billions of dollars of investment needed to drive a step-change in U.S. broadband speeds, more and more people will find themselves stuck in “slow lanes.”
How net neutrality changes could impact your business:
The following are three examples of how these changes could affect your small business.
1. Higher Costs:
Without net neutrality, Internet Service Providers are able to create their own payment options for individuals and businesses. Although nothing is official, these Internet companies could charge higher fees for higher speeds. For example, with Netflix being the leading streaming video provider on the Internet, it may have to pay more to ISPs in order to provide customers with fast content. According to USA Today, "Netflix may face an incremental $75 million to $100 million in annual content delivery costs." This additional expense will be incurred to provide the same service levels consumers already expect from Netflix. Companies that can't afford the more expensive fees, possibly small businesses like yours, would be stuck with slower websites than their larger competitors, effectively squeezing smaller companies out of the marketplace.
2. No Longer an Even Playing Field:
Net neutrality ensures small businesses are able to compete with larger companies. With both having the same access to the Internet, they are able to have the same opportunities for their businesses. If net neutrality is eliminated, small businesses may not be able to afford to share content and therefore, unable to compete with their larger competitors.
3. Changes to Video Marketing:
A lot of time and effort is spent creating videos that feature and promote products. Small businesses that rely on video and YouTube as part of their marketing strategy could see changes if net neutrality is eliminated. If we can't afford to pay Internet providers to share our content, our potential customers may not be able to view as many product videos and may not be enticed to purchase our products. Furthermore, the investment to produce and optimize these videos will result in a monetary loss.
On the other hand, there are reasons why business should oppose Net Neutrality:
Up until now, the debate over net neutrality has largely focused on how broadband consumers would be affected by net neutrality. But for at least two reasons, businesses — even those outside of the communications sector — have a dog in this fight too.
1. First, businesses need ISPs to continue investing in their broadband networks. It is well established that price regulation often truncates the returns on an investment in a regulated industry, and thereby decreases investment. According to the Columbia Institute of Tele-Information, ISPs are set to invest $30 billion annually over the next five years to blanket the country with next-generation broadband networks, nearly half of which ($14 billion) will support wireless networks. It is difficult to estimate with precision what portion of the $30 billion would be neutered in the presence of net neutrality rules, but the direction of the impact — negative — is clear. Noted telecom analyst Craig Moffett of Bernstein Research opined that, with the imposition of net neutrality rules, Verizon FiOS “would be stopped in its tracks,” and AT&T’s U-Verse “deployments would slow.” Outcomes like these clearly would not serve the interests of the business community.
2. Second, businesses need the opportunity to innovate. The ability to purchase priority delivery from ISPs would spur innovation among businesses, large and small. Priority delivery would enable certain real-time applications to operate free of jitter and generally perform at higher levels. Absent net neutrality restrictions, entrepreneurs in their garages would devote significant energies trying to topple Google with the next killer application. But if real-time applications are not permitted to run as they were intended, these creative energies will flow elsewhere. The concept of premium services and upgrades should be second-nature to businesses. From next-day delivery of packages to airport lounges, businesses value the option of upgrading when necessary. That one customer chooses to purchase the upgrade while the next opts out would never be considered “discriminatory.”
Competition and consumer protection:
An efficiently operating market for broadband internet access could avoid many of the concerns raised by potential blocking of, or discrimination against, specific internet content or services. Though numbers vary between Member States (MS), a 2012 study showed there were nearly 250 fixed-line and over 100 mobile operators in the EU, with no MS reporting less than three in either category except Cyprus (only two mobile operators). Informed consumers could make a choice among offers from different providers and choose the price, quality of service and range of applications and content that suited their particular needs. Given that 85% of fixed-line operators and 76% of mobile operators offer at least one unrestricted plan, consumers could punish any supplier who blocked or throttled an innovative new service by changing to another supplier, provided that contracts made switching quick and easy. This free-market philosophy makes sense to experts who feel it is only normal that people have to pay higher prices to access applications that require a higher quality of service. A consumer association in the UK found that traffic management concepts were poorly understood by consumers. In some cases, actual rates of delivery were much lower than those that had been promised and it may be difficult for consumers to detect whether access providers throttle certain kinds of services, such as P2P services or VoIP. Even if consumers identify problems such as insufficient speed or blocked applications, switching may not be easy: access contracts may be bundled with other services (e.g. telephone or television) or with subsidised or leased equipment that makes it harder to switch. Moreover if a particular service is blocked not by the consumer’s ISP but by a network operator in another MS, consumers will still not get access to that service even if they change internet-access supplier at their end. 
Even more critically, if high-quality specialised services take up a large chunk of existing bandwidth, network operators may downgrade the 'standard' open internet service, leading to poorer service for those who cannot afford to pay more. This may encourage a 'multi-lane' or 'multi-tier' internet that could lead to less competition and greater social exclusion. However, to some extent a multi-lane internet already exists. Large content providers like YouTube have built, or have contracted for, Content Delivery Networks (CDNs) that use private networks to deliver their content to servers located at various places on the edge of the internet in close geographic proximity to their customers. Their content has less distance to travel over the internet to reach the end user, and thus can arrive faster and more reliably than the content of smaller competitors who cannot afford a CDN. By 2017, it is estimated that more than half of the world's internet traffic will pass through a CDN. As for the risk that standard internet service will be degraded because specialised services take up too much bandwidth, national regulatory authorities (NRAs) already have the power to impose a minimum level of service if public internet access becomes too degraded.
Net Neutrality vis-à-vis innovation:
Abandoning network neutrality will certainly alter innovation through threats of exclusion and extraction. It is perhaps safe to say that the best innovations are produced in open and uncontrolled surroundings, when the mind is allowed to operate freely without constraints. An online giant worth mentioning here is Google. Google allows its employees to work freely on whatever they please for twenty percent of their time, and in turn the resulting innovations belong to the company. Gmail is one product of this incentive. However, now that Google is a dominant force in various aspects of the Internet, imposing regulation on the Internet would perhaps not slow this giant down. If regulation is imposed on content providers, larger organizations like Google will be able to continue to dominate the Internet, while organizations like Yahoo! could face the threat of exclusion. On the other hand, should no regulation be imposed on the Internet, both companies could continue to innovate and provide users with more efficient ways of using it. When no constraints or regulations are put into place, the results can at times be rewarding to every Internet user in the world. An open and free Internet has been the foundation of innovation, and it can certainly continue to benefit users and contribute to innovation.
Net neutrality and education:
In today’s environment, it is impossible to imagine education without the internet, and most educational content is free for the student. The absence of net neutrality, by destroying the level playing field, might create an environment that favors big money and disadvantages everyone else, particularly non-profit educational institutions. The central issue with “paid prioritization”—where one content provider pays for a ‘fast lane’—is that those with the greatest financial resources will be best able to speed their content to all who use that provider. This would hurt small startups and public or non-profit content providers (like educational institutions) that can’t afford to buy a ‘fast lane’ for educational, research, or other digital collections. Educational content, because of its media-rich format, requires better bandwidth and consumes more data than e-commerce or other forms of internet usage, so it is necessary that educational content gets equal priority on the internet; without it, online education may become unviable. Such a scenario would force quality-focused not-for-profit educational institutions to join paid-prioritization or fast-lane platforms like Airtel Zero or internet.org. This increased cost will add to the problems of educational institutions that are already facing a financial crunch, and it will be difficult for them to absorb. They will be forced to pass the additional cost on to students as fee hikes, and in a country like India, where online education is the only hope for economically reaching out to the masses, the cost of online education will grow. Further, technological innovations are making education affordable; the absence of net neutrality will hamper EdTech startups focusing on innovative ways to make education affordable.
Creating preferential access to further social causes and service penetration is one thing; using it to create commercial monopolies, as is the fear now, is quite another. Preferential treatment, if given sensitively, may help accelerate penetration and improve quality of service by ISPs. However, the absence of the right set of regulations can lead to monopolies and cartels and emerge as a threat to the larger objective of delivering education to the masses.
Legal aspects of net neutrality:
Net neutrality law refers to laws and regulations which enforce the principle of net neutrality. Opponents of net neutrality enforcement claim regulation is unnecessary, because broadband service providers have no plans to block content or degrade network performance. Opponents of net neutrality regulation also argue that the best solution to discrimination by broadband providers is to encourage greater competition among such providers, which is currently limited in many areas.
Without strong legislation protecting net neutrality, abuses like the following examples could become the norm:
1. In 2004, North Carolina ISP Madison River blocked their DSL customers from using any rival web-based phone service (like Vonage, Skype, etc.).
2. In 2005, Canada’s telephone giant Telus blocked customers from visiting a website sympathetic to the Telecommunications Workers Union during a labor dispute.
3. Shaw, a big Canadian cable TV company, is charging $10 extra a month to subscribers in order to “enhance” competing Internet telephone services.
4. Time Warner’s AOL blocked all emails that mentioned www.dearaol.com – an advocacy campaign opposing the company’s pay-to-send email plan.
Why is the legal enforcement of net neutrality so challenging?
It did not take much to define net neutrality in the technical and service domains, but there are still some loose ends that prevent this definition from being applicable as a normative regulation; that is, beyond lobbies and politics. To exercise network neutrality, the service provider has to avoid exploiting any data in providing its service other than the data specified by the networking protocol. However, this is not realistically achievable to the fullest extent. The ISP has to carry out some business-oriented packet shaping to prevent one user from absorbing all the bandwidth and leaving nothing for other users. Obviously, there is some business logic involved in the preference of packets which is acceptable: if you pay for a certain bandwidth, some network neutrality is violated merely by enforcing that deal. So where is the line drawn? Why is restricting a user to the bandwidth he pays for okay, while preferring traffic based on payment by service providers is not? Usually, when we encounter situations in which we cannot make sustainable rules, one approach is to revert to demanding transparency. The ISP can do whatever it wishes, but it must openly disclose its operations, letting the market decide what is acceptable to the public and penalizing the ISPs that fall below the norm. This could work. We could allow any ISP to do whatever it wishes with its traffic—prioritize, block sites at will, and so on—as long as it openly publishes its practices to its users, who may elect to take their business elsewhere. The reason this approach is not favorable is that network neutrality has too much significance for the economy and for democracy to be left to user preferences.
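The kind of shaping described above that is usually considered acceptable—capping each subscriber at the bandwidth they pay for, without inspecting what the traffic is or who sent it—is commonly implemented with a token bucket. The sketch below is illustrative only (rates and names are assumptions, not a real ISP implementation):

```python
import time

class TokenBucket:
    """Caps a subscriber at a purchased rate without looking at packet
    contents -- content-blind shaping of the kind discussed above.
    Illustrative sketch only, not a production rate limiter."""

    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec   # refill rate = the paid-for bandwidth
        self.capacity = burst_bytes      # short bursts above the rate allowed
        self.tokens = burst_bytes        # bucket starts full
        self.last = time.monotonic()

    def allow(self, packet_len):
        """Return True if this packet fits within the subscriber's rate."""
        now = time.monotonic()
        # Refill tokens for the elapsed time, up to the burst capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_len <= self.tokens:
            self.tokens -= packet_len
            return True
        return False  # over the paid rate: the ISP queues or drops it

# A hypothetical 1 MB/s plan with a 100 KB burst allowance.
bucket = TokenBucket(1_000_000, 100_000)
print(bucket.allow(50_000))   # fits within the burst
print(bucket.allow(200_000))  # exceeds the remaining tokens
```

Note that the decision depends only on packet size and timing, never on the packet’s origin or content; payment-based prioritization of particular senders is precisely what this mechanism does not do.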
There are too many potential “market failures” here: users may not understand the trade-offs well enough; the ISPs may form cartels that allow them all to offer the same terms of service in this respect; or, in some cases, there is simply not enough choice between ISPs in the first place. Transparency is a good requirement, but it is not enough. We need to protect network neutrality by law. Even if we cannot get it a hundred percent right at first, we need to make a firm start.
Potential for banning legitimate activity:
Poorly conceived legislation could make it difficult for Internet Service Providers to legally perform necessary and generally useful packet filtering such as combating denial of service attacks, filtering E-Mail spam, and preventing the spread of computer viruses. Quoting Bram Cohen, the creator of BitTorrent, “I most definitely do not want the Internet to become like television where there’s actual censorship…however it is very difficult to actually create network neutrality laws which don’t result in an absurdity like making it so that ISPs can’t drop spam or stop…attacks”. Some pieces of legislation, like The Internet Freedom Preservation Act of 2009, attempt to mitigate these concerns by excluding reasonable network management from regulation.
The figure below shows net neutrality laws in various countries:
Legal enforcement of net neutrality principles takes a variety of forms, from provisions that outlaw anti-competitive blocking and throttling of Internet services, all the way to legal enforcement that prevents companies from subsidizing Internet use on particular sites. Contrary to popular rhetoric in the ongoing academic debate, research suggests that a single policy instrument (such as a no-blocking policy or a quality-of-service tiering policy) cannot achieve the range of valued political and economic objectives central to the debate. As Bauer and Obar suggest, “safeguarding multiple goals requires a combination of instruments that will likely involve government and nongovernment measures. Furthermore, promoting goals such as the freedom of speech, political participation, investment, and innovation calls for complementary policies.” Here we look at some countries that have already adopted net neutrality:
Chile:
Chile was the first country to enact a net neutrality law, in 2010. Interestingly, the law was the culmination of a citizens’ movement, in particular the efforts of the citizen group Neutralidad Sí. In 2014, the Chilean telecommunications regulator Subtel banned mobile operators from zero-rating, whereby internet companies strike deals with mobile telecom operators to offer consumers free internet usage.
The Netherlands:
The Netherlands was the first country in Europe to pass a law on net neutrality, in 2011. Even zero-rating deals between internet companies and mobile operators have been banned under the new law.
Brazil:
In 2014, Brazil passed legislation bringing into effect an ‘Internet Law’, which introduced the principle of net neutrality. Brazil’s principle of net neutrality means “that all data transmissions (i.e. online traffic) must be treated equally by network operators regardless of its content, origin, destination, service, terminal or application.” The aim of this provision is to prevent operators from charging higher rates for accessing content that uses greater bandwidth, like video streaming or voice communication services.
India:
As of 2015, India had no laws governing net neutrality, and there have been violations of net neutrality principles by some service providers. While the Telecom Regulatory Authority of India (TRAI) guidelines for the Unified Access Service license promote net neutrality, they are not enforced. The Information Technology Act, 2000 does not prohibit companies from throttling their services in accordance with their business interests. In March 2015, TRAI released a formal consultation paper on a Regulatory Framework for Over-the-top (OTT) services, seeking comments from the public. The consultation paper was criticised for being one-sided and containing confusing statements, and it was condemned by various politicians and internet users. By 24 April 2015, over a million emails had been sent to TRAI demanding net neutrality.
United States:
On 26 February 2015, the U.S. Federal Communications Commission (FCC) ruled in favor of net neutrality by reclassifying broadband access as a telecommunications service, thus applying Title II (common carrier) of the Communications Act of 1934 to Internet service providers. On 12 March 2015, the FCC released the specific details of its new net neutrality rule, and on 13 April 2015 it published the final rule on its new regulations. The rules prevent Internet providers such as Comcast and Verizon from slowing or blocking Web traffic, or from creating Internet fast lanes that content providers such as Netflix could pay to use. However, the FCC is facing several lawsuits challenging its open Internet order.
Europe’s Current Policy:
The EU’s dealings with net neutrality have been something of an intricate dance — or you might define it as more of a roller coaster. Shifting policies and the task of weighing consumer welfare against economic welfare have resulted in Europe’s current policy. Basically, their approach is that ISPs should be reasonable in how they manage their networks, considering both their own interests and those of Internet users. As Financier Worldwide explains it, the current policy “advocates that an approach be taken which sits somewhere between a light-touch approach, at one extreme, to one which seeks to eliminate market power, promote consumer awareness, increase transparency, and to lower switching costs for end-users, at the other.” Is that really a viable approach? Perhaps officials think that if an ISP blocks certain websites or delivers some content slower than others, unhappy consumers can always switch to a different ISP, so there is no need for tighter regulations. That line of thought seems like a slippery slope and puts a lot of trust in big businesses.
Competition law promotes or seeks to maintain market competition by regulating anti-competitive conduct by companies; it is implemented through public and private enforcement. Competition law is known as antitrust law in the United States and the European Union, and as anti-monopoly law in China and Russia. Antitrust laws apply to virtually all industries and to every level of business, including manufacturing, transportation, distribution, and marketing. They prohibit a variety of practices that restrain trade. Examples of illegal practices are price-fixing conspiracies, corporate mergers likely to reduce the competitive vigor of particular markets, and predatory acts designed to achieve or maintain monopoly power. Microsoft, AT&T, and John D. Rockefeller’s Standard Oil are among the companies that have been found to have violated antitrust laws.
Can antitrust laws prevent discrimination on the internet?
At first blush, broadband providers and content providers don’t compete: one sells content, the other passes that content to customers. But a good number of broadband providers are also in the content-delivery business. Comcast, for example, not only provides broadband services but also delivers movies through its cable channels, on-demand services, and applications that allow users to stream video to their tablets, handhelds and computers. A disadvantaged content provider may be able to show that its content was being delayed deliberately by the vertically-integrated broadband/content provider, and that this delay had a material effect on its ability to provide services in the relevant market that includes its content. Netflix, for example, would first have to show that it competes with Comcast. It would also have to show consumers dropping Netflix for Comcast’s services. (That showing would be complex in a market where consumers can access content in multiple formats which themselves can vary in price, scope and availability over time.) It would then have to show causality—that it lost sales to Comcast because of the discrimination, and that those lost sales resulted in users paying Comcast higher prices, or that Netflix lost revenue or was otherwise harmed and perhaps even had to exit the market. Showing the effect on price is particularly complex given the byzantine pricing structure of the cable markets and the low marginal cost of the products being delivered. In response to such an argument, the broadband/content provider would likely re-characterize the disadvantaged content provider’s antitrust claim as a “refusal to deal” and argue that, absent an actual refusal to deal on any terms, the claim fails. Competitors are generally not required to deal with other competitors.
So long as consumers can download content even if it is at maddeningly slow rates, the monopolist broadband provider will likely not have violated the antitrust laws. Also, antitrust provides no solution at all if the disadvantaged provider does not compete with the broadband provider. A cable company may slow VPN traffic because it uses too much bandwidth. If the broadband provider doesn’t sell VPN functionality, then the discrimination does not harm competition. An antitrust solution to traffic discrimination would ultimately only address situations where a broadband provider is impeding traffic to gain an advantage in a market in which it competes and has in fact done very well in that market in terms of market share. Antitrust is therefore inadequate to obtain universal traffic neutrality. Antitrust may play a role at the fringe of net neutrality. It is by no means a complete answer.
Pros and Cons of net neutrality:
There has been extensive debate about whether net neutrality should be required by law in the United States. Advocates of net neutrality have raised concerns about the ability of broadband providers to use their last mile infrastructure to block Internet applications and content (e.g. websites, services, and protocols), and even to block out competitors. Opponents claim net neutrality regulations would deter investment into improving broadband infrastructure and try to fix something that isn’t broken.
A non-neutral Internet would allow telecom companies to load certain websites and applications faster or slower than others, or to restrict access to them altogether. For example, a subscriber to network X might be forced to use Bing as their search engine because Google partners with network Y: network X would either take longer to load Google (compared with Bing) or might refuse access to Google altogether. Similarly, telecom companies could also discriminate between consumers, allowing richer consumers access to a greater range of websites and applications for higher fees and forcing poorer consumers to opt for schemes that include only certain websites or applications. Thus a farmer in rural India, for instance, might be able to access his Facebook profile cheaply but may have to pay much more to get reliable weather updates or track vegetable trading prices. Critics of net neutrality counter that Internet data usage is not uniform. Basic Internet services such as sending e-mails or reading news are insensitive to delays or signal distortion; services such as Skype, however, require a minimum quality of service in order to be effective and thus justify higher fees. The TRAI paper argues that Internet-based communications applications such as Skype and WhatsApp are cannibalizing services from which telecom operators traditionally profited, as traditional caller plans and SMS services become increasingly redundant. If operators are to continue investing in better Internet technology, they must have the incentive to do so by earning greater returns on that investment. Critics also argue that zero-rating and tiered services enable greater Internet penetration by making cheaper plans available for poorer citizens. As a trade-off, cheaper plans entail slower Internet speeds or restrict access to only certain applications and websites.
But as Facebook CEO Mark Zuckerberg said, “For people who are not on the Internet, having some connectivity and some ability to share is always much better than having no ability to connect and share at all.”
Arguments for net neutrality:
Proponents of net neutrality argue that a neutral Internet encourages everyone to innovate without permission from the phone and cable companies or other authorities, and that a more level playing field spawns countless new businesses. Allowing unrestricted information flow becomes essential to free markets and democracy as commerce and society increasingly move online. Heavy users of network bandwidth would pay higher prices without necessarily experiencing better service, and even those who use less bandwidth could run into the same situation. Proponents also invoke the human psychological process of adaptation, whereby once people get used to something better, they do not want to go back to something worse. In the context of the Internet, they argue that a user who gets used to the “fast lane” would find the “slow lane” intolerable in comparison, greatly disadvantaging any provider unable to pay for the “fast lane”.
Proponents of net neutrality include consumer advocates, human rights organizations, online companies and some technology companies. Many major Internet application companies are advocates of neutrality: Yahoo!, Vonage, eBay, Amazon, IAC/InterActiveCorp, Microsoft, Twitter, Tumblr, Etsy, Daily Kos and Greenpeace, along with many other companies and organizations, have taken a stance in support of net neutrality. Cogent Communications, an international Internet service provider, has announced support for certain net neutrality policies. In 2008, Google published a statement speaking out against letting broadband providers abuse their market power to affect access to competing applications or content, equating the situation to the telephony market, where telephone companies are not allowed to control whom their customers call or what those customers are allowed to say. However, Google’s support of net neutrality was called into question in 2014. Several civil rights groups, such as the ACLU, the Electronic Frontier Foundation, Free Press, and Fight for the Future, support net neutrality. Individuals who support net neutrality include Tim Berners-Lee, Vinton Cerf, Lawrence Lessig, Robert W. McChesney, Steve Wozniak, Susan P. Crawford, Marvin Ammori, Ben Scott, David Reed, and U.S. President Barack Obama.
Reasons for being in favor of network neutrality:
One reason people favor net neutrality is to prevent a monopoly from forming within the last mile of the connection. The last mile is the final leg of delivering connectivity from a communications provider to a customer; a packet in transit passes through many service providers’ equipment before reaching it. The worry is that a provider who owns the physical cable lines in a given area could charge one content provider more than another to deliver its services, driving up that content provider’s cost of doing business. Beyond preventing a last-mile monopoly, content providers also want to minimize uncertainty, since uncertainty is inherent in risk. If a content provider relies on a last-mile provider to distribute its services, it has to pay for that service; the last-mile provider could then unexpectedly raise its rates in a way not accounted for in the content provider’s budget. This could throw off the content provider’s business and lead to incorrect projections of profitability.
Control of data:
Supporters of network neutrality want to designate cable companies as common carriers, which would require them to allow Internet service providers (ISPs) free access to cable lines, the model used for dial-up Internet. They want to ensure that cable companies cannot screen, interrupt or filter Internet content without a court order. Common carrier status would give the FCC the power to enforce net neutrality rules. SaveTheInternet.com accuses cable and telecommunications companies of wanting the role of gatekeepers, able to control which websites load quickly, load slowly, or don’t load at all. According to SaveTheInternet.com, these companies want to charge content providers who require guaranteed speedy data delivery… to create advantages for their own search engines, Internet phone services, and streaming video services, while slowing or blocking access to those of competitors. Vinton Cerf, a co-inventor of the Internet Protocol, argues that the Internet was designed without any authorities controlling access to new content or new services. He concludes that the principles responsible for making the Internet such a success would be fundamentally undermined were broadband carriers given the ability to affect what people see and do online.
Digital rights and freedoms:
Lawrence Lessig and Robert W. McChesney argue that net neutrality ensures that the Internet remains a free and open technology, fostering democratic communication. Lessig and McChesney go on to argue that the monopolization of the Internet would stifle the diversity of independent news sources and the generation of innovative and novel web content. Network neutrality protects the right of freedom of speech, because it restricts ISPs from blocking or prioritizing content on the Internet. Countries that have not implemented the principle of network neutrality in their legislation often control or suppress the publishing or accessing of information on the Internet. For example, in China, the government uses a system that does not allow residents to access certain online content. As a result, if an Internet user searches in Google or other search engines for terms such as “Tibetan independence”, “democracy movements”, or other blacklisted words, he or she will be redirected to a blank page stating “page cannot be displayed.”
Competition and innovation:
Net neutrality advocates argue that allowing cable companies the right to demand a toll to guarantee quality or premium delivery would create an exploitative business model based on the ISPs’ position as gatekeepers. Advocates warn that by charging websites for access, network owners may be able to block competitor websites and services, as well as refuse access to those unable to pay. According to Tim Wu, cable companies plan to reserve bandwidth for their own television services and charge companies a toll for priority service. Proponents of net neutrality argue that allowing for preferential treatment of Internet traffic, or tiered service, would put newer online companies at a disadvantage and slow innovation in online services. Tim Wu argues that, without network neutrality, the Internet will undergo a transformation from a market ruled by innovation to one ruled by deal-making. SaveTheInternet.com argues that net neutrality puts everyone on equal terms, which helps drive innovation; they claim it preserves the way the internet has always operated, where the quality of websites and services, rather than deals with ISPs, determined whether they succeeded or failed. A failure to enact net neutrality protections would undermine content and application providers’ freedom to do business. A non-neutral regime would hinder innovation in content, as start-ups and smaller companies would suddenly be faced with barriers to entering the market, and with uncertainty about what new barriers may be created. The innovators’ freedom to impart information is therefore limited, as is their freedom to do business. Lawrence Lessig and Robert W. McChesney argue that eliminating net neutrality would lead to the Internet resembling the world of cable TV, where access to and distribution of content would be managed by a handful of massive companies. These companies would then control what is seen as well as how much it costs to see it.
Speedy and secure Internet use for such industries as health care, finance, retailing, and gambling could be subject to large fees charged by these companies. They further explain that a majority of the great innovators in the history of the Internet started with little capital in their garages, inspired by great ideas. This was possible because the protections of net neutrality ensured limited control by owners of the networks, maximal competition in this space, and access to the network for innovators from outside. Internet content was guaranteed a free and highly competitive space by the existence of net neutrality. The involvement of ISPs in determining what content or services reach consumers will stifle innovators. For instance, if Google can pay ISPs to deliver YouTube videos faster than other sources of Internet video, any startup offering a better service than YouTube will have tremendous difficulty entering the online video market. Network neutrality does not allow ISPs to restrict content and/or services provided by their competitors, and restrictions on competition may lead to increased prices of services and/or goods. For example, in 2009, Deutsche Telekom announced plans to prohibit the use of Skype over iPhones. Such a prohibition harms the interests of consumers, who could otherwise save money on calls by using Skype.
Preserving Internet standards:
Network neutrality advocates have sponsored legislation claiming that authorizing incumbent network providers to override transport and application layer separation on the Internet would signal the decline of fundamental Internet standards and international consensus authority. Further, the legislation asserts that bit-shaping the transport of application data will undermine the transport layer’s designed flexibility. Network neutrality preserves the existing Internet standards: at present, the Internet runs on technical standards created by a variety of organizations, such as the Internet Engineering Task Force (IETF). By using these existing standards, computers, services, and software created by different companies can be integrated together. Without network neutrality, the Internet would be regulated by ISPs under standards chosen by them.
Alok Bhardwaj, founder of Epic Privacy Browser, argues that any violations to network neutrality, realistically speaking, will not involve genuine investment but rather payoffs for unnecessary and dubious services. He believes that it is unlikely that new investment will be made to lay special networks for particular websites to reach end-users faster. Rather, he believes that non-net neutrality will involve leveraging quality of service to extract remuneration from websites that want to avoid being slowed down.
Some advocates say network neutrality is needed in order to maintain the end-to-end principle. Network neutrality maintains the end-to-end principle: it allows nodes of the network to send packets to all other nodes of the network, without requiring intermediate network elements to maintain status information about the transmission. The principle allows people using the Internet to innovate free of any central control. According to Lawrence Lessig and Robert W. McChesney, all content must be treated the same and must move at the same speed in order for net neutrality to be true. They say that it is this simple but brilliant end-to-end aspect that has allowed the Internet to act as a powerful force for economic and social good. Under this principle, a neutral network is a dumb network, merely passing packets regardless of the applications they support. This point of view was expressed by David S. Isenberg in his paper, “The Rise of the Stupid Network”. He states that the vision of an intelligent network is being replaced by a new network philosophy and architecture in which the network is designed for always-on use, not intermittence and scarcity. Rather than intelligence being designed into the network itself, the intelligence would be pushed out to the end-user’s device, and the network would be designed simply to deliver bits without fancy network routing or smart number translation. The data would be in control, telling the network where it should be sent. End-user devices would then be allowed to behave flexibly, as bits would essentially be free and there would be no assumption that the data is of a single data rate or data type. Contrary to this idea, the research paper “End-to-End Arguments in System Design” by Saltzer, Reed, and Clark argues that network intelligence doesn’t relieve end systems of the requirement to check inbound data for errors and to rate-limit the sender, nor does it call for a wholesale removal of intelligence from the network core.
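The contrast between such a “dumb”, application-blind network and a paid fast lane can be made concrete with a toy scheduler. The sender names below are hypothetical and the code is a minimal sketch, not how real routers are built:

```python
import heapq
from collections import deque

def neutral_schedule(arrivals):
    """A neutral, 'dumb' router: forward packets strictly in arrival
    order (FIFO), blind to who sent them."""
    queue = deque(arrivals)
    return [queue.popleft() for _ in range(len(queue))]

def fast_lane_schedule(arrivals, paying_senders):
    """A non-neutral router: packets from paying senders always jump
    ahead of everyone else's traffic."""
    heap = []
    for seq, sender in enumerate(arrivals):
        tier = 0 if sender in paying_senders else 1  # 0 = paid fast lane
        heapq.heappush(heap, (tier, seq, sender))
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

# Hypothetical packet arrivals, alternating between two senders.
arrivals = ["startup", "incumbent", "startup", "incumbent"]
print(neutral_schedule(arrivals))                   # arrival order preserved
print(fast_lane_schedule(arrivals, {"incumbent"}))  # payer served first
```

In the neutral case the startup’s packets go out exactly when they arrived; in the paid case they always wait behind the incumbent’s traffic, which is the structural disadvantage the advocates above describe.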
Regulation vs. Competition issue:
Some of the “pipe” owners argue that net neutrality is unnecessary regulation that will stifle competition and slow deployment of broadband technologies. But the truth is there is already very little competition between broadband providers. In most parts of the U.S., there are at most two companies that provide a broadband pipe to your home: a telephone company and a cable company. Both of these industries are already regulated because they are natural monopolies: once a cable is laid to your house, there is no rational, non-wasteful reason to lay another, since you only need one at a time; therefore, most communities allow only one cable or telephone company to provide service to an area, and then regulate that company so as to prevent abuse of the state-granted monopoly. Thus, we don’t allow phone companies to charge exorbitant amounts for local service; nor do we permit a cable company to avoid providing service to poor neighborhoods. Contrast the quasi-monopoly on broadband pipes with the intensely competitive market of web content and services. There are millions of websites out there and countless hours of video and audio, all competing for your time, and sometimes your money. With the advent of broadband connections, the telecom and cable companies have found a new way to exploit their state-granted monopoly: leverage it into a market advantage in Internet services and content. This would harm competition in the dynamic, innovative content and services industry without solving the lack of real competition in the broadband access market. In contrast, net neutrality will keep competition in online content and services strong. By keeping broadband providers from raising artificial price barriers to competition, net neutrality will preserve the egalitarian, bit-blind principles that have made the Internet the most competitive market in history.
ISPs are trying to ‘Double Dip’!
ISPs argue that they should be incentivized to invest in infrastructure that results in a faster internet. This argument ignores that they are already charging consumers for their infrastructure and are now trying to ‘double dip’ by charging content providers too. To make matters worse, ISPs effectively hold a monopoly in most markets: inhabitants of large cities have just a few cable/internet options, and smaller markets often have only one.
Arguments against net neutrality:
Network owners believe regulation like the bills proposed by net neutrality advocates will impede U.S. competitiveness by stifling innovation, and will hurt customers who would otherwise benefit from ‘discriminatory’ network practices. U.S. Internet service already lags behind other nations in overall speed, cost, and quality of service, adding credibility to the providers’ arguments. Obviously, by increasing the cost to heavy users of network bandwidth, telecommunication and cable companies and Internet service providers stand to increase their profit margins. Those who oppose network neutrality include telecommunications and cable companies who want to be able to charge differentiated prices based on the amount of bandwidth consumed by content being delivered over the Internet. Some companies report that 5 percent of their customers use about half the capacity on local lines without paying any more than low‐usage customers. They state that metered pricing is “the fairest way” to finance necessary investments in network infrastructure. Internet service providers also point to the upsurge in piracy of copyrighted materials over the Internet as a reason to oppose network neutrality. Comcast reported that illegal file sharing of copyrighted material was consuming 50 percent of its network capacity. The company posits that if network transmission rates were slower for this type of content, users would be less likely to download or access it. Finally, those who oppose network neutrality argue that it removes the incentive for network providers to innovate, provide new capabilities, and upgrade to new technology.
Reasons for not being in favor of network neutrality:
The world knows that Internet Service Providers (ISPs) are not in favor of net neutrality. One specific reason they would like net neutrality not to exist is so that they can gain the ability to offer tiered services. They want to offer tiered services because they believe that a user should be able to pay for the quality of service provided to them (in terms of throughput). Instead of offering a flat level of service for all customers, both content providers and end users, the ISPs would like to offer different tiers of service. They want to provide a content provider with a guaranteed level of service based on the tier for which it is willing to pay. A top-tier service would allow that content provider to deliver its content to users at a fast rate. It would also let content providers who do not want to pay for an extremely high level of throughput save money by choosing a lower tier. People for net neutrality argue that this would put content providers who cannot afford top-tier services at a disadvantage. However, ISPs point out that this kind of elevated service already exists: a content provider with thousands of servers strategically placed throughout the world can deliver a consistently high level of service to its users thanks to physical proximity to them. ISPs reason that because this advantage already exists, offering tiered services should not be a problem. Another reason ISPs are against net neutrality is that they believe that by offering tiered services, they can provide a higher level of service to their subscribers through tiered filtration. With different tiers that users can opt in and out of, users will be signing up for the service they are happy with, and the ISP can better manage its bandwidth as subscribers sort themselves into different tiers.
If a user subscribes to a low level of throughput, they will pay for a low level of throughput and will be satisfied with receiving it, because that is what they paid for. Likewise, if a user subscribes to a high level of throughput, they will pay a higher fee and will be content with the high level of throughput.
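The tiered-throughput model described above is commonly enforced with a token-bucket rate limiter: tokens accumulate at the purchased rate, and a packet may be sent only if enough tokens are available. The sketch below is illustrative; the tier rates and burst sizes are invented, not any real ISP's plan.

```python
class TokenBucket:
    """Enforce a purchased throughput tier: tokens refill at `rate` bytes/sec,
    and a packet may be sent only if enough tokens are available."""
    def __init__(self, rate: float, burst: float):
        self.rate = rate          # bytes per second granted to this tier
        self.burst = burst        # maximum bucket size (allowed burst)
        self.tokens = burst
        self.last = 0.0

    def allow(self, nbytes: int, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False              # packet must wait: this tier is exhausted

# Hypothetical tiers: the premium subscriber pays for 10x the basic rate.
basic = TokenBucket(rate=1_000_000, burst=100_000)        # ~1 MB/s
premium = TokenBucket(rate=10_000_000, burst=1_000_000)   # ~10 MB/s

# A 500 KB burst at t=0 fits within the premium tier but not the basic one.
assert premium.allow(500_000, now=0.0)
assert not basic.allow(500_000, now=0.0)
```

The same mechanism serves both sides of the argument in the text: it is how an ISP would deliver a guaranteed tier to a paying content provider, and how a subscriber who paid for a low throughput level would be held to exactly that level.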
It was reported that Netflix consumed approximately 35% of all broadband traffic in the U.S. and Canada. In fact, both Netflix and YouTube combined take up half of the Internet’s bandwidth. Half! So wait…these companies shouldn’t pay more? In a world of net neutrality, this would all be okay. They would not be charged any more for faster lanes or special access. Our largest internet service providers, like Comcast and Verizon, would be required to let them consume as much as they want for the same price that you and I pay. And so the argument for net neutrality weakens. Net neutrality will curtail our Internet access, speed and performance. You love to watch movie, but what about your neighbor who’s not a Netflix subscriber? Should she be punished with slower Internet speeds caused by bottlenecks because she’s battling for half of what’s left? And just because those around her choose to subscribe to Netflix and stream movies and she did not? And who’s to say that in a few years other services like Netflix won’t appear that will consume even more bandwidth. Or, let’s suppose you’re staying in a hotel (or you’re on a plane) where everyone pays the same for Internet access, except there’s one guy in room 866 who’s hogging up 50% of the bandwidth watching God knows what. With net neutrality, he would have the right to the same bandwidth as you do and would pay the same. Except he’s abusing his right. And you’re suffering with slower speeds and less productivity. Net neutrality will increase our costs. The Internet cannot yet be treated as a utility because it’s not billed as a utility. If it were billed as a utility, you and your business would be paying for usage/downloads/uploads instead of a flat monthly fee. Far richer companies like Netflix, YouTube and others on the horizon would be allowed to consume as much of it as they want and pay the same fees you and I are paying. This is not equal. This is not neutral. 
And companies are competing everywhere where space, whether it’s real estate, market share or Internet bandwidth is valuable. This is why there are $8 million studio apartments in New York City and why a 30 second advertisement on the Super Bowl costs $4 million. Opponents of net neutrality regulations include AT&T, Verizon, IBM, Intel, Cisco, Nokia, Qualcomm, Broadcom, Juniper, dLink, Wintel, Alcatel-Lucent, Corning, Panasonic, Ericsson, and others. Notable technologists who oppose net neutrality include Marc Andreessen, Scott McNealy, Peter Thiel, David Farber, Nicholas Negroponte, Rajeev Suri, Jeff Pulver, John Perry Barlow, and Bob Kahn. Nobel Prize-winning economist Gary Becker’s paper titled, “Net Neutrality and Consumer Welfare”, published by the Journal of Competition Law & Economics, alleges that claims by net neutrality proponents “do not provide a compelling rationale for regulation” because there is “significant and growing competition” among broadband access providers. Google Chairman Eric Schmidt states that, while Google views that similar data types should not be discriminated against, it is okay to discriminate across different data types—a position that both Google and Verizon generally agree on, according to Schmidt. The supporters of net neutrality regulation believe that more rules are necessary. In their view, without greater regulation, service providers might parcel out bandwidth or services, creating a bifurcated world in which the wealthy enjoy first-class Internet access, while everyone else is left with slow connections and degraded content. That scenario, however, is a false paradigm. Such an all-or-nothing world doesn’t exist today, nor will it exist in the future. Without additional regulation, service providers are likely to continue doing what they are doing. They will continue to offer a variety of broadband service plans at a variety of price points to suit every type of consumer. 
Computer scientist Bob Kahn has said net neutrality is a slogan that would freeze innovation in the core of the Internet. Farber has written and spoken strongly in favor of continued research and development on core Internet protocols. He joined academic colleagues Michael Katz, Christopher Yoo, and Gerald Faulhaber in an op-ed for the Washington Post strongly critical of network neutrality, essentially stating that while the Internet is in need of remodelling, congressional action aimed at protecting the best parts of the current Internet could interfere with efforts to build a replacement.
Reduction in innovation and investments:
According to a letter to key Congressional and FCC leaders sent by 60 major ISP technology suppliers including IBM, Intel, Qualcomm, and Cisco, Title II regulation of the internet means that instead of billions in broadband investment driving other sectors of the economy forward, any reduction in this spending will stifle growth across the entire economy. In their words, this is not idle speculation or fear mongering: “Title II is going to lead to a slowdown, if not a hold, in broadband build out, because if you don’t know that you can recover on your investment, you won’t make it.” Opponents of net neutrality argue that prioritization of bandwidth is necessary for future innovation on the Internet: ISPs can use the money paid for preferential treatment of Internet traffic to fund the network infrastructure that extends broadband access to more consumers. Telecommunications providers such as telephone and cable companies, and some technology companies that supply networking gear, argue telecom providers should have the ability to provide preferential treatment in the form of tiered services, for example by giving online companies willing to pay the ability to transfer their data packets faster than other Internet traffic. The added revenue from such services could be used to pay for building out broadband access to more consumers. Marc Andreessen states that “a pure net neutrality view is difficult to sustain if you also want to have continued investment in broadband networks. If you’re a large telco right now, you spend on the order of $20 billion a year on capex. You need to know how you’re going to get a return on that investment. If you have these pure net neutrality rules where you can never charge a company like Netflix anything, you’re not ever going to get a return on continued network investment — which means you’ll stop investing in the network. 
And I would not want to be sitting here 10 or 20 years from now with the same broadband speeds we’re getting today.”
Net neutrality rules could hamper the development of new technologies and prevent ISPs and wireless companies from offering special deals and incentives:
You shouldn’t regulate data packets:
Treating all Internet traffic equally would actually make it harder to keep the data flowing smoothly, handicap cloud computing services like voice recognition and even muck up phone calls. That’s because the Internet isn’t just for downloading and streaming; it’s increasingly used for real-time interactions among computers, servers, cellphones and other connected gadgets — where every millisecond really does matter. For these types of applications, prioritizing some packets over others could make a difference. Carriers are not looking to build a tollbooth; they are looking for ways to build a special-purpose network. These special purposes, carriers say, include voice and video calls. If the video data packets get priority in the queue over a snippet of email, the call runs a lot better, and the email still gets through in time. But an explicit ban on prioritization would make that difficult or impossible. Prioritizing data will be important for a new generation of wireless service — voice over LTE, or VoLTE. Here, bits of conversations are mixed into the same wash of digits that carries your emails, Facebook messages, Spotify streams and selfie posts, none of which are as sensitive to delays as a phone call (or video call) is. And latency — the delay for a packet to get where it’s going — is worse on bandwidth-strapped wireless networks. Prioritization is the only way to do voice over data. Latency could also kneecap new services that require split-millisecond connections to massive computers far away. Voice-recognition apps don’t live on your phone or TV. Rather, they live on servers that record your voice, figure out what it really means and tell the app back on your device how to respond — all in an instant. The Electronic Frontier Foundation, an organization solidly on the side of net neutrality regulation, is skeptical of these arguments for prioritization. 
Jeremy Gillula, the EFF’s staff technologist, said that prioritization doesn’t work once data leaves the ISP and goes on to the larger Internet. “Most transit providers and interconnections today completely ignore packet prioritization codes,” Gillula said. He added that data encryption, which is becoming increasingly common, would obscure any labels that, say, distinguish a voice packet from a piece of a Web page. Gillula also argued that even well-intentioned prioritization could be unfair to users. “If I use my connection primarily for VoIP, but my neighbor uses hers primarily for gaming [and we have the same ISP], why should one person’s traffic be prioritized over another based on the type of traffic?” he asked.
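The prioritization the carriers describe is, mechanically, a priority queue at a congested link: latency-sensitive packets (voice) are transmitted before delay-tolerant ones (email), even if they arrived later. A minimal sketch, with an invented class name and priority mapping:

```python
import heapq

# Lower number = higher priority; this mapping is an illustrative assumption.
PRIORITY = {"voice": 0, "video": 1, "web": 2, "email": 3}

class PriorityLink:
    """A congested link that drains its queue strictly by traffic class."""
    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker: preserves FIFO order within a class

    def enqueue(self, kind: str, packet: str) -> None:
        heapq.heappush(self._queue, (PRIORITY[kind], self._seq, packet))
        self._seq += 1

    def transmit(self) -> str:
        """Send the highest-priority waiting packet."""
        return heapq.heappop(self._queue)[2]

link = PriorityLink()
link.enqueue("email", "newsletter chunk")
link.enqueue("voice", "20 ms of speech")
# The voice packet, though queued second, is transmitted first:
assert link.transmit() == "20 ms of speech"
assert link.transmit() == "newsletter chunk"
```

Note that this sketch also illustrates Gillula's objection: the scheme only works where the `PRIORITY` labels are honored, and a transit provider that ignores them (or cannot read them because the traffic is encrypted) degrades back to plain FIFO.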
Regulations quash deals for consumers:
Shopping is full of special offers. But regulating wireless providers and ISPs as utilities would require uniform pricing and prohibit the offering of deals.
Counterweight to server-side non-neutrality:
Those in favor of forms of non-neutral tiered Internet access argue that the Internet is already not a level playing field: large companies achieve a performance advantage over smaller competitors by replicating servers and buying high-bandwidth services. If prices dropped for lower levels of access, or for access to only certain protocols, such a change would arguably make Internet usage more neutral with respect to the needs of the individuals and corporations specifically seeking differentiated tiers of service. Network expert Richard Bennett has written, “A richly funded Web site, which delivers data faster than its competitors to the front porches of the Internet service providers, wants it delivered the rest of the way on an equal basis. This system, which Google calls broadband neutrality, actually preserves a more fundamental inequality.”
Network neutrality decreases the revenues earned by the ISPs. These decreased revenues reduce employment and GDP, and they also prevent ISPs from deploying and maintaining networks and improving them over time. To recoup the lost revenues, ISPs may charge their customers increased fees. 142 wireless ISPs (WISPs) said that the FCC’s new “regulatory intrusion into our businesses…would likely force us to raise prices, delay deployment expansion, or both.”
Significant and growing competition:
A 2010 paper on net neutrality by Nobel Prize-winning economist Gary Becker and his colleagues stated that “there is significant and growing competition among broadband access providers and that few significant competitive problems have been observed to date, suggesting that there is no compelling competitive rationale for such regulation.” Becker and fellow economists Dennis Carlton and Hal Sider found that “Between mid-2002 and mid-2008, the number of high-speed broadband access lines in the United States grew from 16 million to nearly 133 million, and the number of residential broadband lines grew from 14 million to nearly 80 million. Internet traffic roughly tripled between 2007 and 2009. At the same time, prices for broadband Internet access services have fallen sharply.” The Progressive Policy Institute (PPI) reports that the profit margins of U.S. broadband providers are generally one-sixth to one-eighth of those of companies that use broadband (such as Apple or Google), contradicting the idea of monopolistic price-gouging by providers.
A report by the Progressive Policy Institute in June 2014 argues that nearly every American can choose from at least 5-6 broadband internet service providers, despite claims that there are only a ‘small number’ of broadband providers. Citing research from the FCC, the Institute wrote that 90 percent of American households have access to at least one wired and one wireless broadband provider at speeds of at least 4 Mbps downstream and 1 Mbps upstream, and that nearly 88 percent of Americans can choose from at least two wired providers of broadband disregarding speed (typically choosing between a cable and a telco offering).
Potentially increased taxes:
The ruling issued by the FCC to impose Title II regulations explicitly opens the door to billions of dollars in new fees and taxes on broadband by subjecting it to telephone-style taxes under the Universal Service Fund. Net neutrality proponent Free Press argues that “the average potential increase in taxes and fees per household would be far less” than the estimate given by net neutrality opponents, and that if there were additional taxes, the figure might be around $4 billion; under favorable circumstances, “the increase would be exactly zero.” Meanwhile, the Progressive Policy Institute claims that Title II could trigger taxes and fees of up to $11 billion a year. Financial website NerdWallet did its own assessment and settled on a possible $6.25 billion tax impact, estimating that the average American household may see its tax bill increase $67 annually. FCC spokesperson Kim Hart said that the ruling does not raise taxes or fees.
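The per-household figure above is easy to sanity-check: dividing the aggregate estimate by a household count reproduces a bill in the tens of dollars per year. The household count below is an assumption chosen for illustration (the text does not state what base NerdWallet used), so treat this as arithmetic, not a citation.

```python
total_tax_impact = 6.25e9     # NerdWallet's estimated annual impact, in dollars
households = 93_000_000       # assumed count of affected U.S. broadband households

per_household = total_tax_impact / households
# With these assumed inputs, the result lands near the ~$67/year figure quoted.
assert round(per_household) == 67
```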
Prevent overuse of bandwidth:
Since the early 1990s, Internet traffic has increased steadily. The arrival of picture-rich websites and MP3s led to a sharp increase in the mid-1990s, followed by a further sharp increase from 2003 as video streaming and peer-to-peer file sharing became more common. YouTube streamed as much data in three months as the world’s radio, cable and broadcast television channels did in one year: 75 petabytes. Networks are not remotely prepared to handle the amount of data required to run these sites. Global Internet video traffic was 57 percent of all consumer traffic in 2012 and was projected to reach 69 percent of all consumer Internet traffic by 2017. This statistic does not include video exchanged through peer-to-peer (P2P) file sharing; the sum of all forms of video traffic, including P2P, was projected to be in the range of 80 to 90 percent of global consumer traffic by 2017. To deal with the increased bandwidth requirements, ISPs will need to build more infrastructure. Opponents argue that net neutrality would discourage that build-out, which would limit available bandwidth and thus endanger innovation.
High costs to entry for cable broadband:
According to a Wired magazine article by TechFreedom’s Berin Szoka, Matthew Starr, and Jon Henke, local governments and public utilities impose the most significant barriers to entry for more cable broadband competition: “While popular arguments focus on supposed ‘monopolists’ such as big cable companies, it’s government that’s really to blame.” The authors state that local governments and their public utilities charge ISPs far more than access actually costs them and have the final say on whether an ISP can build a network. Public officials determine what hoops an ISP must jump through to get approval for access to publicly owned “rights of way” (which let them place their wires), thus reducing the number of potential competitors who can profitably deploy internet service—such as AT&T’s U-verse, Google Fiber, and Verizon FiOS. Kickbacks may include municipal requirements for ISPs such as building out service where it isn’t demanded, donating equipment, and delivering free broadband to government buildings.
According to PayPal founder and Facebook investor Peter Thiel, “Net neutrality has not been necessary to date. I don’t see any reason why it’s suddenly become important, when the Internet has functioned quite well for the past 15 years without it…. Government attempts to regulate technology have been extraordinarily counterproductive in the past.” Max Levchin, the other co-founder of PayPal, echoed similar statements, telling CNBC, “The Internet is not broken, and it got here without government regulation and probably in part because of lack of government regulation.” FCC Commissioner Ajit Pai, one of the two commissioners who opposed the net neutrality proposal, criticized the FCC’s ruling on internet neutrality, stating that the perceived threats from ISPs to deceive consumers, degrade content, or disfavor the content that they don’t like are non-existent: “The evidence of these continuing threats? There is none; it’s all anecdote, hypothesis, and hysteria. A small ISP in North Carolina allegedly blocked VoIP calls a decade ago. Comcast capped BitTorrent traffic to ease upload congestion eight years ago. Apple introduced FaceTime over Wi-Fi first, cellular networks later. Examples this picayune and stale aren’t enough to tell a coherent story about net neutrality.”
Increasing Governmental Influence:
Net neutrality proponents want government to enact laws or use governmental agencies like the FCC/TRAI to enforce net neutrality with strong rules. However, phone companies and ISPs have a much greater influence on the federal government than individuals do. This influence is primarily made manifest in the money large companies spend on lobbying the FCC and in the campaign contributions these companies make to politicians on the committees that decide net neutrality policy. If net neutrality supporters want government intervention to strengthen net neutrality, they may be making a mistake: governments and large corporations often work hand in glove, and so far the internet has worked well without government meddling.
Market Demand should control the priority of content on the internet!
One can make a ‘collective good’ argument that popular content deserves higher serving priority (regardless of whether the ISP can charge for it). It’s great that a blogger with one reader has the same chance to distribute on the internet as the creators of Game of Thrones, but do millions of GoT watchers collectively have a greater claim to their content than the hundred or so viewers of a small-time video blogger? Many consumers argue that without net neutrality, ISPs can give preferential treatment to the content they profit from; but the market dictates that popular content will be the most profitable, so isn’t that a good thing?
Potential disadvantages of net neutrality are:
1. Users will have to pay more for internet services, as ISPs will pass on the cost of the additional bandwidth they must purchase to ensure they are not stretched.
2. Slower internet access speeds if ISPs are unable to add bandwidth to handle the increased load.
3. Higher latency and jitter due to insufficient bandwidth, which would make Voice over IP perform poorly.
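Latency and jitter, mentioned in point 3, are simple statistics over per-packet delays: latency is the mean delay, and jitter (in one common working definition) is the mean absolute difference between consecutive delays. The delay samples below are made up to show why a congested link degrades VoIP:

```python
def latency_and_jitter(delays_ms):
    """Mean delay, and mean absolute difference between consecutive delays
    (one common working definition of jitter)."""
    latency = sum(delays_ms) / len(delays_ms)
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    jitter = sum(diffs) / len(diffs)
    return latency, jitter

# Made-up samples: a congested link shows both high delay and high variation.
smooth = [20, 21, 20, 22, 21]        # ms -- fine for VoIP
congested = [20, 180, 35, 250, 60]   # ms -- VoIP would perform poorly

lat_s, jit_s = latency_and_jitter(smooth)
lat_c, jit_c = latency_and_jitter(congested)
assert lat_c > lat_s and jit_c > jit_s
```

High jitter is the more damaging of the two for voice, because a jitter buffer must either wait for late packets (adding delay) or drop them (garbling audio).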
There are four basic Internet freedoms that everyone should agree with: the freedom to access lawful content of one’s choice, the freedom to access applications that don’t harm the network, the freedom to attach devices to the network, and the freedom to get information about your service plan. Everybody, or virtually everybody, agrees on that. A free and open Internet stimulates ISP competition, helps prevent unfair pricing practices, drives entrepreneurship and, most importantly, protects freedom of speech. Advocates for net neutrality say that cable companies should not be able to screen, interrupt or filter Internet content without a court order; that the internet should remain a free and open technology; and that this creates an even playing field for competition and innovation. The question is how we operationalize that. The government is a pretty poor arbiter of what is reasonable and what is not, and it has an exceptionally poor track record of promoting innovation and investment in broadband networks. That’s something the private sector has done a remarkable job of on its own. The Internet has speedily evolved from a collaborative project among governments and universities to a promising commercial medium operated primarily by private ventures. The next-generation World Wide Web will not appear as a standard, “one size fits all” medium, primarily because consumers expect more and different features, and service providers need to find ways to recoup the cost of frequent network upgrades made to accommodate ever-increasing throughput requirements. For example, Internet Service Providers offer online game players, Voice over Internet Protocol (VoIP) and Internet Protocol Television (IPTV) subscribers “better than best efforts” routing of bits to promote timely delivery with higher quality of service. Similarly, content providers can use caching and premium traffic routing and management services to secure more reliable service than that available from standard “best efforts” routing. 
Service diversification can result in many reasonable and lawful types of discrimination between Internet users, notwithstanding a heritage of nondiscrimination and best-efforts routing of traffic in the first two generations of the Internet. ISPs increasingly have the ability to examine individual traffic streams and prioritize them, creating a dichotomy between plain-vanilla, best-efforts routing and more expensive, superior traffic management services. However, the potential exists for carriers operating the major networks used to switch and route bit-streams to exploit network management capabilities to achieve anticompetitive and consumer-harming outcomes. Some internet service providers are trying to fundamentally alter the way the internet works by collecting money from companies like Netflix and Facebook to guarantee their data can continue to reach users unimpeded. This is called paid prioritisation, which is as ethically dubious as paid news in the print medium and should not be allowed at all. Advocates for the principle of network neutrality claim the potential exists for ISPs to engineer a fragmented and “balkanized” next-generation Internet through unreasonable degradation of traffic even when congestion does not exist. The worst-case scenario envisioned by network neutrality advocates sees a reduction in innovation, efficiency, consumer benefits and national productivity occasioned by a divided Internet: one medium prone to congestion and declining reliability, and one offering superior performance and potential competitive advantages to users able and willing to pay, or affiliated with the ISP operating the bit-stream transmission network. Opponents of network neutrality mandates scoff at the possibility of the worst-case scenario and view government intervention as anathema. Proponents of net neutrality are worried that corporations will buy influence with ISPs to disrupt access to competitors, or smother online speech that’s critical of a company or its products.
On balance, internet neutrality is desirable.
1. Without net neutrality, large companies will interfere with online communication between users.
If control of the Internet and its contents is given to large companies, they can easily interfere with communication between users that was previously taken for granted; Comcast, for example, limited user access to BitTorrent, a peer-to-peer exchange. Proponents of network neutrality imagine that, if unrestrained, internet service providers would block large portions of the Internet and make other parts accessible only behind a high pay-wall. Opponents counter that while this is possible in theory, robust competition among service providers ensures companies will be punished for providing such egregious service: if any company adopted the measures network neutrality supporters envision, customers would jump ship to an ISP that gives better service.
2. Net neutrality ensures innovation and contributions from a variety of smaller users.
Part of what makes the Internet so unique is that anybody can contribute content, creating a wealth of information. However, the loss of net neutrality would mean that Internet providers would be able to create exclusive deals with existing companies, effectively shutting out smaller companies. More than 60 percent of Web content is created by regular people, not corporations. How will this innovation and production thrive if creators must seek permission from a cartel of network owners? Net neutrality promotes innovation by letting ideas be tested on a large number of consumers, and the per-user cost of internet provision is greatly reduced by economies of scale. Currently millions of companies profit from the internet; ISPs are simply being greedy. That is why they say net neutrality interferes with innovation: it prevents them from charging users more to access more content, which they claim reduces their profits and their ability to innovate.
3. Net neutrality preserves choice on the Internet and the idea that a website’s success is determined by its quality.
The Internet is special in that anybody can contribute content, and the actual success of websites is determined by the users themselves. If a website is unpopular, it will ultimately fail because not enough people are visiting that website and using it. Net neutrality ensures that this system stays in place because any user will be able to access any website. However, without net neutrality, the idea that the best and most popular websites will succeed is no longer true, as competition will be distorted by larger companies making deals and preventing access to certain websites.
Key points of concern vis-à-vis net neutrality requirements:
1. Transparency requirement:
A person engaged in the provision of broadband Internet access service shall publicly disclose accurate information regarding the network management practices, performance, and commercial terms of its broadband Internet access services sufficient for consumers to make informed choices regarding use of such services and for content, application, service, and device providers to develop, market, and maintain Internet offerings.
While transparency and informed choice are absolutely important for consumers, we should also expect, if not demand, that internet access providers undertake several network management practices to protect our safety, privacy and security that they do not make public. For example, ISPs today have several mechanisms in place to identify images of child sexual exploitation, and it would seriously undermine this vital work to make public the ways in which they manage this on their networks. Additionally, there are many aspects of network management and performance that would be a boon to those interested in hacking, infecting or harming the networks to advance their financial or political goals.
The transparency rule therefore hinges on the concept of ‘sufficient’ information for consumers to make informed choices, which is left undefined, while the overall directive demands a transparency that may not serve individuals, companies, or national security well.
2. No Blocking requirement:
“A person engaged in the provision of fixed broadband Internet access service … shall not block lawful content, applications, services, or non-harmful devices … [or] consumers from accessing lawful websites, subject to reasonable network management; nor shall such person block applications that compete with the provider’s voice or video telephony services, subject to reasonable network management.” This point carries the caveat “No Unreasonable Discrimination,” defined as follows: “… [Access providers] shall not unreasonably discriminate in transmitting lawful network traffic over a consumer’s broadband Internet access service. Reasonable network management shall not constitute unreasonable discrimination.”
How the word “block” or the phrase “reasonable network management” is defined raises safety concerns. While blocking legal content may be undesirable, slowing some content streams in favor of other content types will be important for consumers’ overall experience, and for safety. For example, according to Cisco’s Networking Index Forecast, Internet traffic will more than quadruple by 2014, with some form of video content accounting for more than 90% of all content transmitted over the internet. While some of that video streaming will be for critical purposes like remote medical assistance, most will be for entertainment. Should these two types of content be given equal priority? Should video streaming be given the same priority as phone calls (VoIP)? While a 5-second delay in a video download means your video isn’t ready quite as fast as it might be, the same delay in a phone call is intolerable, and if that call is to 911, it is clearly a safety concern. Again, there is a clear need to prioritize content types from a safety perspective, particularly given the exponential growth in bandwidth use and the faltering economic model for bandwidth development.
3. Reasonable network management:
It is defined by the FCC as follows: A network management practice is reasonable if it is appropriate and tailored to achieving a legitimate network management purpose, taking into account the particular network architecture and technology of the broadband Internet access service. Legitimate network management purposes include: ensuring network security and integrity, including by addressing traffic that is harmful to the network; addressing traffic that is unwanted by users (including by premise operators), such as by providing services or capabilities consistent with a user’s choices regarding parental controls or security capabilities; and by reducing or mitigating the effects of congestion on the network.
This looks at three aspects of network management in narrowly defined categories: 1) technical management of a service, including security defences 2) providing consumers with safety tools to manage their own content access, and 3) managing network congestion. The future may show that several additional categories are needed, and that there is more overlap between categories than suspected. At a time when new threats emerge on a daily basis, and where entirely new categories of exploits continue to emerge, this definition has the potential to hamper proactive measures of defence in new and unforeseen areas. It also risks stifling healthy competition between service providers in areas of consumer safety, and discouraging innovation of new – or hybrid – safety, security and privacy solutions that would look beyond these narrow confines. Our personal safety as well as the safety of the internet as a whole depends on ISPs taking strong protective measures on our behalf. We need to be pushing for greater safety measures, and creating an environment that encourages and rewards service providers for doing so. An ‘open’ internet is an illusion if we do not have a secure environment in which consumers can safely embrace the web. Otherwise it’s only open to the crooks, scammers, and cyber-thugs.
The essential argument is that ISPs provide better service by being allowed to actively manage their network. Some examples of this better service would be:
1. Protecting the average user from the power user: Users who download gigabytes of data may unfairly hog bandwidth resources from those who don’t. By throttling certain users or types of data, ISPs can be sure that every user has an optimal experience.
2. Preventing illegal activity: ISPs generally want to prevent illegal file swapping over their networks, both due to the legal issues and for basically the same bandwidth reasons as above.
3. Privilege Special Services: Certain important Internet services require heavy and uninterrupted bandwidth use, such as medical services or VoIP. ISPs want to give special preference to these unique services that could benefit from special treatment, and possibly could not exist without this preferential treatment. This is one of the key arguments in the Verizon/Google Proposal of 2010.
Is net neutrality technically possible?
Building a net-neutral network is technologically not possible to implement. It is a utopian idea with no basis in technology; no telecom engineer will say that network neutrality is feasible. The concept that all data is treated equally does not hold good, because you cannot design a network that way. The Internet inherently prioritises traffic on a 0-7 priority scale: network architecture gives highest priority to network management, followed by online gaming, speech, videos, and then still images, music files, and last file transfers and emails. These cannot be on the same footing.
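The 0-7 priority scheme described above can be sketched as a simple priority scheduler. This is a minimal sketch: the class-to-priority mapping below is an illustrative assumption for demonstration, not the normative IEEE table.

```python
import heapq

# Illustrative 802.1p-style priority classes (0 = lowest, 7 = highest).
# These exact values are assumptions for demonstration purposes.
PRIORITY = {
    "network_management": 7,
    "voice": 6,
    "video": 5,
    "gaming": 4,
    "images": 3,
    "music": 2,
    "file_transfer": 1,
    "email": 0,
}

def schedule(packets):
    """Return packets in transmission order: highest priority class first,
    FIFO within a class (the sequence number breaks ties)."""
    heap = [(-PRIORITY[kind], seq, kind) for seq, kind in enumerate(packets)]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, kind = heapq.heappop(heap)
        order.append(kind)
    return order

queue = ["email", "voice", "video", "file_transfer", "network_management"]
print(schedule(queue))
# network management goes out first, email last
```

Even though every packet eventually gets through, the email packet always waits behind everything else, which is exactly why “all data is treated equally” does not describe real networks.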
The debate over bandwidth utilization:
When the BTIG Research firm began covering the Internet pipe operator Cogent Communications, its report contained an amusing insight. Cogent’s last-mile business customers buy a service that offers 100 megabits per second. The average use by these customers, though, is only about 12 Mbps, and barely “one or two dozen of their customers have ever reached 50% utilization of the 100 MB pipe,” says BTIG. So the existing infrastructure meets the requirements of the overwhelming majority of customers, and only a small minority require more. The implication is that we don’t need network neutrality, because the users are not using what is there! However, the conclusions are misleading for a variety of reasons:
First, there’s a difference between sustained bandwidth utilization and bandwidth spikes in demand.
For sustained bandwidth utilization, while network operators may differ, in general a user should not exceed 50% to 70% utilization of a 100 Mbps pipe (the provisioned bandwidth provided by the ISP). Periodic spikes in demand will put the user in the 80% to 90% range for short bursts of time. When those demand spikes occur, bandwidth is available. However, if sustained utilization were consistently 90% of the 100 Mbps pipe, then random spikes in demand would exceed the available bandwidth, quickly creating an under-provisioned network; the user would urgently need an upgrade. Bandwidth is not a static, monolithic phenomenon; it is dynamic and ever-changing.
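The provisioning rule of thumb above can be expressed as a small check. The 50-70% sustained ceiling comes from the text; the function shape, parameter names, and the 70% default are illustrative assumptions.

```python
def needs_upgrade(sustained_mbps, spike_mbps, pipe_mbps=100,
                  sustained_limit=0.7):
    """Flag a link as under-provisioned when sustained use exceeds the
    rule-of-thumb ceiling, or when demand spikes exceed the pipe itself.
    (The 70% default is the upper end of the 50-70% guideline above.)"""
    if sustained_mbps > sustained_limit * pipe_mbps:
        return True   # no headroom left for random spikes
    if spike_mbps > pipe_mbps:
        return True   # bursts already overflow the provisioned bandwidth
    return False

print(needs_upgrade(sustained_mbps=60, spike_mbps=90))   # healthy headroom
print(needs_upgrade(sustained_mbps=90, spike_mbps=110))  # under-provisioned
```

The point of the sketch is that both numbers matter: a link can look fine on average yet still be under-provisioned once bursty demand is taken into account.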
Let me give an example from India.
It is well known within the telecommunications industry that companies do not make money off the last-mile connection. The initial investment in the last mile is expensive, and the fees they can charge customers are highly competitive. As a result, technology lags in the last mile, because operators cannot recoup their investment there. This is what makes a non-net-neutral environment attractive to ISPs: it offers a way to finally monetize the last-mile connection. Whenever a new ISP arrives, it offers high bandwidth to each customer, and people do get fast internet speeds for a few months. For example, suppose a 3G mobile broadband tower emits 100 Mbps to a small town. If the town has 100 customers, each gets 1 Mbps, and if 50% are not using the internet at any moment, each active user gets 2 Mbps. After three months the number of consumers grows to 1,000; now each gets 0.1 Mbps, and even with only 50% of consumers online at a time, each gets just 0.2 Mbps. This happens because the ISP is not upgrading infrastructure to provide more bandwidth; it prefers to have consumers continue using the present infrastructure and paying the monthly fee. With traffic engineering, the incumbent carriers could deliver higher capacity (bandwidth) to a select group of customers and charge them more. However, the network neutrality ruling precludes them from doing so.
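The shared-tower arithmetic in the example above can be sketched in a few lines. The function name and the active-fraction model are illustrative assumptions; the numbers are the ones used in the example.

```python
def per_user_mbps(tower_mbps, subscribers, active_fraction):
    """Effective bandwidth per *active* user on a shared last-mile link.

    The tower's capacity is divided only among users online at a given
    moment, so a 50% active fraction doubles each active user's share.
    """
    active = max(1, int(subscribers * active_fraction))
    return tower_mbps / active

# The 100 Mbps tower example from the text:
print(per_user_mbps(100, 100, 0.5))    # 100 subscribers, half active
print(per_user_mbps(100, 1000, 0.5))   # after growth to 1,000 subscribers
```

The ten-fold growth in subscribers cuts each active user's share ten-fold, from 2 Mbps to 0.2 Mbps, without the ISP spending a rupee on the tower.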
Secondly, the term “bandwidth” has never been well defined in the industry and remains largely ambiguous. There’s no agreement on what bits are counted as part of bandwidth. For example, do Ethernet header bits or CRC bits count? Certainly the carriers continue to obfuscate the terms by giving their service offerings names like “100 Ultra” that some users interpret as a bidirectional 100 MB connection. Keeping the user confused seems to be the goal. At least with network neutrality we can open the door for demanding a clear definition for how bandwidth is tested and measured, and how bandwidth utilization is tested and measured.
Third, the carrier networks are the equivalent of one-lane dirt roads with potholes. Is it any wonder that no one makes high-quality, high-performance luxury automobiles to travel on a one-lane dirt road with potholes? This is a classic chicken-and-egg problem. Great products and services requiring a high-speed superhighway are possible, but if product creators only see a low-capacity, inadequate Internet, why bother creating those products? Thus, the products are never created. Would network neutrality help get the superhighway built? Maybe. We’ve seen the difference between HD videos from Netflix and YouTube delivered over satellite (beautiful quality) as compared to delivery over the cable network (poor quality), and of course the worst quality over mobile broadband. Maybe that’s acceptable to a large class of users, but since they have not seen any alternatives, how would they know?
Balanced Network Neutrality Policy:
The debate over network neutrality has two very different points of view. Network neutrality advocates worry about ISPs discriminating among internet traffic, while opponents argue that enforcing network neutrality would be difficult and error-prone (Felten, 2006). The solution is a balanced policy that limits the harmful uses of discrimination while allowing its beneficial uses, because making the wrong decision on network neutrality regulation can hamper the Internet’s development (Peha, 2007). To protect beneficial discrimination, a policy might allow the following: Network operators could provide different QoS to different classes of traffic, using explicit prioritization or other techniques; these techniques can favor stricter QoS requirements for traffic sent using a higher-priced service (Alleven, 2009). Network operators could charge different prices to both the senders and recipients of data depending on the class of traffic (Peha, 2007); a higher price could be charged for traffic that consumes more of a limited resource, requires superior quality of service, or has an adverse effect on neighboring traffic (Felten, 2006). Traffic that poses a threat to network security or is otherwise harmful could be blocked by the network operator, whether by not following certain protocols or by defined algorithms (Crowcroft, 2007). Network operators could also benefit by offering unique services or proprietary content to their customers (Peha, 2007). If, and only if, the broadband market is not highly competitive, a policy designed to limit harmful uses of discrimination would disallow the following (Peha, 2007): A network operator could not charge more for a 50 kbps VoIP stream than for a 50 kbps gaming application where the QoS requirements are the same (Felten, 2006).
A network operator could not charge one user more than another for a comparable information transfer or monthly service, whether the user is a service provider, content provider, or consumer, and whether the user is the sender or the receiver (Peha, 2007). Unless a network operator has a reasonable belief that certain traffic poses a security threat to the network, it could not block traffic based on content or application alone (Crowcroft, 2007). Network operators could not degrade QoS on the basis of content alone (Crowcroft, 2007) (Alleven, 2009). A network operator could not offer different QoS and prices for traffic that competes with a legacy circuit-switched service (Peha, 2007).
To simplify, the Internet marketplace can be analytically split into three categories: content providers (Google, Netflix, porn sites, blogs), ISPs (Comcast, Verizon, CenturyLink), and end-users (you and me). The end-users are consumers, whose consumption preferences ultimately determine the value of content. ISPs interact directly with consumers by selling the high-speed connections that allow their customers to access content. ISPs interact with content providers by managing the networks over which information flows. Thus ISPs are resource owners, because they own the networks, but they are also entrepreneurs, insofar as they strive to maintain the profitability of their networks under rapidly evolving market conditions. To be successful, ISPs must serve consumer demand in a cost-effective manner. FCC regulation of the Internet is rooted in the belief that a “virtuous circle” of broadband investment is ultimately driven by content providers. The more good content that providers make available, the more consumers will demand access to sites and apps, and the more ISPs will invest in the infrastructure to facilitate delivery. Minimize the financial and transaction costs imposed by ISPs on content providers, and content will flourish and drive the engine. That’s the theory, anyway. But in practice, there’s no good evidence that myopically favoring content providers over infrastructure owners is beneficial even to content providers themselves, let alone to consumers. Rather, the two markets are symbiotic; gains for one inevitably produce gains for the other. Without an assessment of actual competitive effects, it is impossible to say that consumers are best served by policies that systematically favor one over the other. Yet even absent net neutrality regulation, ISPs have invested heavily in infrastructure and broadband. End-users have benefitted immensely, with 94 percent of U.S.
households having access to at least two providers offering fixed broadband connections of at least 10 megabits per second, not to mention the near-ubiquitous coverage of wireless carriers offering 3G and LTE service at comparable speeds. Broadband networks are expensive to build and, particularly for mobile networks, increasingly prone to congestion as snowballing consumer use outpaces construction and upgrades. In order to earn revenue, economize the scarce resource of network capacity, and provide benefits to consumers, ISPs may engage in various price-discrimination and cross-subsidization schemes—i.e., the much-maligned “paid prioritization” motivating net neutrality activists. The non-Internet economy is replete with countless business models that use similar forms of discrimination or exclusion to consumers’ benefit. From Priority Mail to highway toll lanes to variable airline-ticket pricing, discriminatory or exclusionary arrangements can improve service, finance investment, and expand consumer choices. The real question is why we would view these practices any differently when they happen on the Internet.
Comcast’s policy toward the peer-to-peer data packets made economic sense:
A small minority of its customers was consuming much of its bandwidth by downloading large movie files with BitTorrent’s technology, thereby reducing data transfer rates for the majority of customers who used Comcast’s service primarily for Web surfing and email. By identifying peer-to-peer data packets and slowing, or “de-prioritizing,” their passage through its network, Comcast made available more capacity for the majority of its customers and avoided raising its rates in order to foot the cost of the infrastructure improvements that would be required to accommodate peer-to-peer file transfers as they grew in popularity. Given that these peer-to-peer file transfers were being made on its property, Comcast had the right to do so. According to the FCC, Comcast’s actions violated the principles of net neutrality because they unfairly “discriminated” against the BitTorrent data packets.
It’s really about costs, and there isn’t a clear-cut answer. Network capacity costs money to build and maintain. There are many over-the-top (OTT) services that cause broadband subscribers to use more capacity than they otherwise would. This is why, during US prime time, about one third of the total capacity of the US portion of the Internet is consumed by Netflix. From the outside looking in, most consumers respond with a shrug: after all, they bought a package that advertises X amount of bandwidth, so the service provider should be able to provide that to all customers on that package. The problem for the service provider is twofold. First, all networks are designed around oversubscription; no residential broadband network in the world is designed to handle all its users running at full rate all the time. Second, in many cases customers choosing OTT video (and telephone) services are reducing or eliminating their video subscription from their service provider. This is key to understanding the issue, because virtually all broadband networks were built around a multi-purpose model. This includes DSL (phone, data, and sometimes video), DOCSIS cable (video, data, and often phone), and fiber (FTTx), which is almost always a triple play (FiOS and U-verse work this way). This matters because these networks were built on the assumption that the operator would get revenue from most subscribers for two or all three services. When many subscribers eliminate one or more of those services and increase their usage of the third (data) to make up for it, the service provider faces a double whammy. It is hard to think of a single large facilities-based operator (one that owns the gear) that was built around simply offering data, and data plans have historically been less expensive because much of the cost of the network is shared with the other service(s).
The reason that operators would like flexibility is to find a way to indirectly monetize the OTT services that are consuming resources on their infrastructure.
Bandwidth and net neutrality:
The resurgent issue of net neutrality (whether all net traffic has an equal opportunity to travel at the same speed) is very much related to bandwidth, with Netflix using 32.25% of the total web bandwidth of North American home users nightly, followed by BitTorrent, YouTube streaming, pirate sites and porn. With more streaming services that provide movies or music emerging every day, one can see that the pipeline is being hogged by certain businesses, while others deal with what space is left. With a neutral net, everyone has an equal right to that same pipeline, so if lots of folks all stream Netflix or other services’ movies tonight, the music file upload that I’m trying to get to a friend will go slower and slower; in fact, everything slows down equally.
High bandwidth used by Netflix can lead to a traffic jam and slower speeds for other users, because video needs a lot of bandwidth and priority to maintain quality (low latency), which in turn slows the internet for email and other web use. Is that not a violation of net neutrality? Why should a consumer who is not using Netflix suffer because of a customer who is? Net neutrality means that as more folks use “what’s left,” Netflix movies begin to buffer, jitter, and eventually deliver pixelated images to compensate. The pipeline isn’t infinite. So there is a proposal to create a separate higher-speed pipeline that companies like Netflix pay for (and pass those costs on to their customers, no doubt). Netflix has already paid Comcast and Verizon to get into the high-speed lane. That also violates net neutrality.
Only allow discrimination based on type of data:
Columbia University Law School professor Tim Wu observed that the Internet is not neutral in its impact on applications with different requirements: it is more beneficial for data applications than for applications that require low latency and low jitter, such as voice and real-time video. He explains that, looking at the full spectrum of applications, including both those that are sensitive to network latency and those that are not, the IP suite isn’t actually neutral. He has proposed regulations on Internet access networks that define net neutrality as equal treatment among similar applications, rather than neutral transmission regardless of application. He proposes allowing broadband operators to make reasonable trade-offs between the requirements of different applications, while regulators carefully scrutinize network operator behavior where local networks interconnect. However, it is important to ensure that these trade-offs among different applications are made transparently, so that the public will have input on important policy decisions. This is especially important as broadband operators often provide competing services, such as cable TV and telephony, that might differentially benefit when the need to manage applications could be invoked to disadvantage other competitors. The proposal of Google and Verizon would allow discrimination based on the type of data, but would prohibit ISPs from targeting individual organizations or websites. Google CEO Eric Schmidt explains Google’s definition of net neutrality as follows: if the data in question is video, for example, then there is no discrimination between one purveyor’s data and that of another. However, discrimination between different types of data is allowed, so that voice data could be given higher priority than video data. On this, both Verizon and Google are agreed.
Individual prioritization without throttling or blocking:
Some opponents of net neutrality argue that under the ISP market competition, paid-prioritization of bandwidth can induce optimal user welfare. Although net neutrality might protect user welfare when the market lacks competition, they argue that a better alternative could be to introduce a neutral public option to incentivize competition, rather than enforcing existing ISPs to be neutral. Some ISPs, such as Comcast, oppose blocking or throttling, but have argued that they are allowed to charge websites for faster data delivery. AT&T has made a broad commitment to net neutrality, but has also argued for their right to offer websites paid prioritization and in favor of its current sponsored data agreements.
What if the costs were the consumers’ decision?
What if there was a meter on your Internet connection: when your bandwidth exceeded a threshold, a warning popped up and you could decide whether or not to pay more for the massive amounts of data you were streaming or downloading. This would not be a restriction of the Internet, but a realistic reflection of usage. Every Internet service—new, old or yet to come—would have an equal opportunity to offer their content or data. This would not stifle innovation or prevent small emerging companies from competing with Amazon, Apple and Netflix. There would be no discrimination, but it would be up to the consumer to decide if the surcharge to their Internet service was worth adding to the cost of that Netflix movie.
Research on protocol defined by users rather than network:
In their paper, ‘Putting Home Users in Charge of their Network’, the research team discuss why users should be the ones making the decisions. The researchers explain: “The user should define which traffic gets what type of service, and when this happens; while the ISP figures out how and where in the network, provisioning is implemented.”
The researchers’ reasons are:
• Users expect the Internet to be fast, always on, reliable, and responsive.
• Users do not want the network to stand in the way of the application.
• ISPs struggle with how to share available bandwidth among users’ applications.
The research team then made the point that the current “one size fits all” approach is not working, and that each individual user should be able to choose the priority of their applications, indicate that preference to the ISP, and have the ISP implement the required changes. The researchers also feel this is entirely doable: “We could use existing methods, such as Resource ReSerVation Protocol (RSVP), but we can go one step further and exploit recent trends in networking that make it even easier for ISPs to have more programmatic control over their networks, therefore making it easier for the ISP to implement the user’s desire.”
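The division of labour the researchers describe, where the user states which traffic gets what service and the ISP decides how to provision it, could be sketched as a preference table the home router hands to the ISP. The field names and the queue-weight mapping below are assumptions for illustration; a real deployment would use something like RSVP or programmatic network control, as the paper suggests.

```python
from dataclasses import dataclass

@dataclass
class TrafficPreference:
    app: str          # e.g. "voip", "video", "bulk_download"
    priority: int     # user's ranking, 1 = most important
    schedule: str     # when the preference applies, e.g. "always", "evening"

# User side: declares *what* service each application should get, and when.
user_prefs = [
    TrafficPreference("voip", 1, "always"),
    TrafficPreference("video", 2, "evening"),
    TrafficPreference("bulk_download", 3, "overnight"),
]

def to_queue_weights(prefs):
    """ISP side: turn the user's ranking into relative queue weights.
    (This linear mapping is an illustrative assumption, not a real
    provisioning API.)"""
    n = len(prefs)
    return {p.app: n - p.priority + 1
            for p in sorted(prefs, key=lambda p: p.priority)}

print(to_queue_weights(user_prefs))
```

The separation keeps the interface simple: the user never needs to know which router queues or reservation protocols the ISP uses to honour the table.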
Fast internet to save lives:
Take a rural basic-service hospital which, after a serious accident, may have to serve as the operating room, while a senior surgeon at a university clinic performs the operation via telemedicine. If this digital, electronic surgery is to be possible, it can only work with perfect internet connection quality and the capacity to transmit the instructions of the senior surgeon working on the organs (lungs, heart, or cardiovascular vessels) of the patient. We have got to be willing to pay a price for this, and you just cannot talk about perfect equality there.
Tiered service structures allow users to select from a small set of tiers at progressively increasing price points to receive the product or products best suited to their needs. Such systems are frequently seen in the telecommunications field, specifically in wireless service, digital and cable television options, and broadband internet access. When a wireless company, for example, charges customers different amounts based on the number of voice minutes, text messages, and other features they desire, the company is utilizing the principle of tiered service. The same is seen in charging different prices for services such as the speed of one’s internet connection and the number of cable television channels one has access to. Tiered pricing allows customers access to services which they might not otherwise afford, ultimately reflecting the diversity of consumer needs and resources. Tiered service helps to keep quality-of-service standards for high-profile applications like streaming video or VoIP, at the cost of higher prices for better service levels. Major players in the net neutrality debate have proposed a tiered internet in which content providers who pay more to service providers get better-quality service. The way ISPs tier services for content providers and application providers is through “access-tiering”: a network operator grants bandwidth priority to those willing to pay for quality service. “Consumer-tiering” is where different speeds are marketed to consumers and prices are based on the consumers’ willingness to pay. A tiered internet gives priority to packets sent and received by end users who pay a premium for service. Network operators do this to simplify network management and equipment configuration, traffic engineering, service level agreements, billing, and customer support. The initial reasoning against tiered service was that ISPs would use it to block content on the internet.
Internet service providers could use this to prioritize affiliated partners instead of unaffiliated ones. Many argue that one fast network is much more efficient than deliberately throttling traffic to create a tiered internet.
My proposal of ‘two lane’ internet:
Let me start with a few examples.
1. You want to travel from Mumbai to Delhi by rail. You have the Rajdhani Express taking you there in 17 hours with air-conditioning. You have the Firozpur Janata Express taking you there in 36 hours without air-conditioning. The origin, the destination and the railways are the same. What is different is the speed and comfort that come with money. There is no railway neutrality: more money gives you greater speed and better quality.
2. You want to send a letter to your wife. You can send it by ordinary mail or by speed post. Again, the origin, the destination and the postal service are the same. What is different is the speed, driven by the extra money spent. We have no neutrality in the postal service.
3. You want bail in the high court. If you are a celebrity, you get bail within 48 hours of conviction. If you are a common man, you wait for months to get bail. What is different is the speed, driven by the celebrity’s top lawyers, who charge so much that a common man cannot afford them.
4. You visit the Tirumala Tirupati temple to offer worship to God. If you pay Rs.300 per person, you get fast access to God. You have to wait for hours if you want free access. There is multi-tiering even at temples.
5. Water and electricity: The tap water you get in your home costs everybody the same no matter where you use it, but if you want to drink bottled pure water, you have to pay extra. I drink bottled pure water every day to prevent waterborne disease; as per the socialism of net neutrality, I must drink tap water. And electricity does not cost the same for every unit: for the first 50 units the rate is 1.2 rupees per unit, and for over 400 units the rate is 2.55 rupees per unit, as per the electricity bill I receive. Where is the neutrality in these so-called common carriers?
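The slab-based electricity tariff in the last example is itself a tiered pricing scheme, and easy to compute. A sketch using the two rates quoted from my bill; the text gives no rate for the 51-400 unit range, so an assumed middle rate of Rs 1.80 per unit fills that gap for illustration.

```python
def electricity_bill(units):
    """Slab tariff from the text: Rs 1.20/unit for the first 50 units and
    Rs 2.55/unit above 400 units. The Rs 1.80/unit middle slab (51-400) is
    an assumed filler rate, since the text does not quote it."""
    bill = 0.0
    bill += min(units, 50) * 1.20               # first slab
    bill += min(max(units - 50, 0), 350) * 1.80  # assumed middle slab
    bill += max(units - 400, 0) * 2.55           # top slab
    return round(bill, 2)

print(electricity_bill(50))    # all units in the cheapest slab
print(electricity_bill(450))   # spans all three slabs
```

Note that the marginal price rises with consumption, which is the mirror image of the "heavy users pay more" logic that net neutrality forbids ISPs from applying to bandwidth.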
Are we doing injustice at railways, posts, courts, temples, water and electricity?
The non-Internet economy is replete with countless business models that use discrimination or exclusion to consumers’ benefit. From priority mail to highway toll lanes to variable airline-ticket pricing, discriminatory or exclusionary arrangements can improve service, finance investment, and expand consumer choices. The real question is why we would view these practices any differently when they happen on the Internet.
We are humans, and we have evolved ways to access anything depending on the quantum of money we spend. We have also evolved habits of doing things fast or slow. The same logic applies to the internet. There are so many people who have money but no time; why should they suffer under the socialism of net neutrality? Why are we so hypocritical when we talk about the internet? I am a busy doctor, busy teacher and busy blogger. I hardly get 20 minutes in 24 hours to read the comments posted by people on my website, and there are thousands of comments. If the internet is slow, it will take a lot of time to access the comments and then approve them. If I pay money to my ISP so that I get faster access to my website, how does that violate net neutrality? Why do injustice to people with money but no time under the pretext of net neutrality? Then there are net habits, and you cannot change net habits. Some people want to download videos all the time, reducing internet speed for others who don’t watch videos.
Technically and commercially, net neutrality does not exist even today:
The Institute of Electrical and Electronics Engineers (IEEE) decided that the best way to manage traffic flow was to label each packet with a code based on the time sensitivity of its data, so that routers could use these codes to schedule transmission. The highest priority values are reserved for the most time-sensitive services, with the top two slots going to network management, followed by slots for voice packets, then video packets, then other traffic. That is why, when you are downloading a YouTube video, your neighbour connected to the same ISP will find his email going slowly: most of the bandwidth is used by the video, which gets prioritized transmission. Net neutrality is violated. We already have two-sided pricing: ISPs collect revenues from consumers as well as from content/service/application providers, and this two-sided pricing is often linked to QoS. So we already live under a non-net-neutral regime. Paid peering, paid prioritization and the use of CDNs get large companies like Google, Netflix and Facebook faster and better internet access than my website. Again, net neutrality is violated. While wireless handsets can generally access internet services, most ISPs favour content they provide or secure from third parties under a "walled garden" strategy: deliberate efforts to lock consumers into accessing and paying for favoured content and services. Net neutrality is already violated on mobile phones using mobile broadband. We are living in a world where the net is not neutral anyway, technically and commercially.
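The priority labelling described above can be sketched as a toy scheduler. The class ordering (network management, then voice, then video, then other traffic) follows the text; the numeric codes and class names are illustrative, not the actual IEEE field values.

```python
import heapq

# Toy model of priority scheduling: each packet carries a class code
# and the router transmits higher-priority classes first. Ordering
# follows the text (network management > voice > video > other);
# the numeric codes are illustrative, not real IEEE field values.
PRIORITY = {"network_mgmt": 0, "voice": 1, "video": 2, "other": 3}

def transmit_order(packets):
    """Return payloads in the order a priority-aware router would send them."""
    heap = [(PRIORITY[cls], i, payload)  # index i keeps FIFO order within a class
            for i, (cls, payload) in enumerate(packets)]
    heapq.heapify(heap)
    order = []
    while heap:
        order.append(heapq.heappop(heap)[2])
    return order

queue = [("other", "email"), ("video", "youtube"),
         ("voice", "voip"), ("network_mgmt", "ping")]
# transmit_order(queue) → ['ping', 'voip', 'youtube', 'email']
```

The email packet arrives first in the queue but is sent last, which is exactly the "your neighbour's email goes slowly" effect the text describes.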
All bits and all packets are not the same: we need to differentiate between time-sensitive (VoIP) and time-insensitive (email) services, and between bandwidth-hogging (video) and bandwidth-sparing (a simple web page) services. Interestingly, many bandwidth-hogging services are also time sensitive, and many bandwidth-sparing services are also time insensitive. Under a strict net neutrality principle, all of these would be transmitted at the same speed and the same priority, which would reduce the quality of the internet experience.
What does any user or consumer want from the internet anyway?
1. Fast speed
2. Cheap monthly fees
3. Access to all legal content, applications and services of his/her choice
4. No blocking or slowing of any lawful website/application/service
5. Transparency by ISPs about their networking policies
6. Privacy maintained
7. Good quality of service
Key internet services that place greater demands on connectivity ought to be treated differently from other traffic. Telemedicine, teleoperation of remote devices, and real-time interaction among autonomous (driverless) cars could be problematic if data packets get stalled at peak congestion times. The internet as a service should be split in two, with one "lane" providing equal and unfettered access to websites, and another "lane" for special services with greater demands, such as telemedicine, Netflix or HD IP television. Both services would run over the same internet infrastructure. An innovation-friendly internet means guaranteed reliability for special services, which can only develop when predictable quality standards are available. Fast special lanes are necessary for the development of new, advanced uses of the internet, like telemedicine or driverless cars; without guaranteed, fast-access internet connections, such innovations will not come to market. The current "one size fits all" approach is not working. Each individual user should be able to choose the priority of their applications, indicate that preference to the ISP, and have the ISP implement the required changes. In other words, the special lane can be used by any user or any content/service/application provider, provided they pay the required charges to the ISP.
The internet should be evolved into a two-lane internet sharing the same infrastructure. The two-lane internet will be a high-speed, low-latency network:
1. The first lane is the common internet
2. The second lane is the special internet
The common internet is for common people, treating all data on the internet equally, not discriminating or charging differentially by user, content, site, platform, application, type of attached equipment, or mode of communication. More importantly, the common internet would have a basal internet speed of at least 4 Mbps in developed nations and 1 Mbps in developing nations, with average network latency of less than 125 milliseconds, for all users, most of the time, for all websites, services and applications, without any discrimination. Consumers would be charged a flat rate for any type of data use, depending on the quantum of data used. On the common internet, the download speed of Google, Facebook, Netflix, YouTube, my website or a student's website is the same: no prioritization, no slowing, no blocking, with transparent, best-effort network traffic management policies. There is no technical prioritization or bandwidth throttling, and no commercial paid prioritization, paid peering, CDN use or slowing of any data/voice/video. All bits, bytes and packets are equal at the network level and at the pricing level. The more bandwidth you consume, the more you pay, irrespective of whether you use YouTube, email, Skype or VoIP, or visit my website. For the common internet to succeed, ISPs have to upgrade their infrastructure.
ISPs can have the second-lane special internet only if they fulfil the criteria of the common internet. If an ISP cannot give an internet speed of 4 Mbps in developed nations and 1 Mbps in developing nations to all consumers most of the time, it cannot have a second lane. The job of regulatory bodies like the FCC/TRAI is to check that the prescribed speed of the common lane is maintained, and otherwise to cancel the ISP's licence for the second lane.
The special internet is a specialized, fast, reliable and secure internet for special services like telemedicine, teleoperation of remote devices, and driverless cars. Its speed is very fast, at least 10 Mbps, with average network latency of less than 100 milliseconds, available anytime to any customer or content provider, with selective network prioritization of data/video/voice at the router level and transparent use of peering, P2P and CDNs. Both users/customers and content/service/application providers can use this fast, prioritized service by paying more than for the common internet. Netflix, YouTube, Google, Facebook, Skype, WhatsApp, BitTorrent or any website/application/service can use the special internet on any ISP by paying more, provided that the ISP fulfils the conditions of the common internet. In other words, the special internet is built on the common internet; if the common internet has slow speeds or discrimination, the special internet is legally disallowed.
Medical use of broadband:
While broadband alone cannot substitute for doctors, nurses and health care workers, the benefits of internet applications in healthcare are potentially large. Appropriate mobile solutions can improve quality of life for patients, increase the efficiency of healthcare delivery models, and reduce costs for healthcare providers. It has been estimated that telemedicine delivered over broadband could achieve cost savings of between 10% and 20%.
As you can see in the figure above, tele-medicine, tele-surgery and tele-imaging need time-sensitive, large-bandwidth transmission that is possible only on the special internet.
The availability of content is a factor that stimulates broadband investment. Revenues from broadband and mobile access are dependent on demand for web-based content and applications. This has been empirically shown by the PLuM study, which found that "the ability of consumers to access Internet content, applications and services is the reason consumers are willing to pay Internet access providers. Access providers are dependent on this demand to monetise their substantial investments." The special internet would ensure that time-sensitive content/applications/services are never delayed, and that data packets of special services like telemedicine and driverless cars never get stalled at peak congestion times. In return, ISPs would get ample profit from both consumers and content/service/application providers, as a return on their investment and for further innovation.
The figure below gives an overview of the "Two Lane Internet":
The job of the ISP is to provide the two-lane internet and charge differentially depending on which lane you use, not to enforce a choice on users. ISPs would inform consumers about their traffic management practices and the level of quality they can expect from their internet service. A consumer may use both lanes: the common internet for common surfing and the special internet for videos or VoIP. A consumer may instead use the common internet for all uses, including downloading videos, or the special internet for all uses. Let the consumer be master of his/her destiny, rather than have a destiny scripted by ISPs or CSPs. However, special services like telemedicine, teleoperation of remote devices, and driverless cars would work only on the special internet and would always get priority transmission over any other data on the special internet.
The moral of the story:
1. Net neutrality is the principle that internet service providers (ISPs) should treat all data on the internet equally, not discriminating or charging differentially by user, content, site, platform, application, type of attached equipment, or mode of communication. An ISP is said to operate under net neutrality if it provides its service strictly "by the book": all packets of data on the internet are transported equally, using best effort, without discrimination on the basis of content, user or design. In other words, net neutrality means the internet is free, open and fair. Violation of net neutrality is not synonymous with internet censorship. Internet censorship is the suppression or deletion of any data on the internet which may be considered objectionable, harmful or sensitive as determined by the censor; usually the censor is a government or a court of law.
2. Net neutrality does not mean there can be no discrimination at all among customers: a customer who is willing to pay for higher broadband speed gets it even today. Killing discrimination absolutely would mean killing competition among service providers. Net neutrality means that discrimination should not be unreasonable and arbitrary.
3. The net neutrality debate predominantly involves wired transmission (cable/fiber/DSL) in America, while it predominantly involves wireless transmission (3G/4G mobile broadband) in India. Net neutrality rules affect wired and wireless transmission differently: a wired network has a large data transmission capacity, while a wireless network has limited capacity due to the scarcity of spectrum; and wired connection speed is near maximum throughput, while wireless connection speed is much less than maximum throughput due to various factors that reduce signal strength. Wired broadband networks have enough capacity to transmit voice and video packets uninterrupted, while the limited capacity of wireless broadband networks can produce packet loss, high latency and jitter in voice and video transmission, making voice conversation difficult and video quality poor, especially during network congestion. Because 87% of the population has internet access in the United States, the domestic net neutrality debate there has been able to focus largely on the quality of internet access. Because only 19% of the population has internet access in India, of which 83% get internet access solely on mobile phones, the priority in India is internet access for a billion people rather than the quality of the internet.
4. Search neutrality is an indispensable part of net neutrality. You can circumvent biased search results by searching multiple engines sequentially and by not giving undue importance to the first page and top results of any search engine. I have been doing this for years to acquire information on the internet.
5. The increase in network traffic is a consequence of the ongoing transition of the internet into a fundamental universal-access technology. The internet has become a trillion-dollar industry and has grown from a mere network of networks into the market of markets. Much of the net neutrality debate is devoted to the question of whether the market for internet access should be a free market or regulated.
6. Internet without net neutrality would adversely affect start-ups, dissidents, underprivileged, oppressed, activists, small entrepreneurs, small companies, educators and poor people.
7. Eliminating net neutrality would lead to the Internet resembling the world of cable/satellite TV, so that access to and distribution of content would be managed by a handful of big companies. These companies would then control what is seen as well as how much it costs to see it.
8. The quality of websites and services, rather than deals with ISPs, should determine whether they succeed or fail. The majority of the great innovators in the history of the internet started with little capital in their garages, inspired by great ideas. This was possible because the protections of net neutrality ensured limited control by owners of the networks. Without net neutrality, the internet will transform from a market ruled by innovation to one ruled by deal-making.
9. To obtain the best possible speed from your internet connection, it is not enough to have a high-bandwidth connection; it is also important that your latency is low, to ensure that information reaches you quickly enough. This is especially true of satellite internet connections, which can offer speeds of up to 15 Mbps but still feel slow due to high latency of around 500 milliseconds. On the other hand, you also need enough bandwidth, as low latency without enough bandwidth would still result in a very slow connection. Latency and bandwidth are independent of each other; the best internet connection has high bandwidth and low latency.
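The interplay of latency and bandwidth in point 9 can be made concrete with a rough transfer-time model: total time is roughly latency plus payload size divided by bandwidth. The 15 Mbps / 500 ms satellite figures come from the text; the 1-megabit page size and the terrestrial link figures are illustrative assumptions.

```python
# Rough transfer-time model for point 9: total time is network latency
# plus payload size divided by bandwidth. The 15 Mbps / 500 ms satellite
# figures come from the text; the 1-megabit page and the 5 Mbps / 20 ms
# terrestrial link are illustrative assumptions.
def transfer_time_s(size_megabits, bandwidth_mbps, latency_ms):
    return latency_ms / 1000.0 + size_megabits / bandwidth_mbps

satellite = transfer_time_s(1, 15, 500)   # ~0.57 s: latency dominates
terrestrial = transfer_time_s(1, 5, 20)   # ~0.22 s despite lower bandwidth
```

For small payloads the high-latency satellite link loses to a slower terrestrial link, which is why both quantities matter independently.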
10. Besides the obvious cost and quality factors, two human factors provoke people to choose one site over another:
a) Consumers are intolerant of slow-loading sites. Viewers start to abandon a video if it takes more than 2 seconds to start; if the video hasn't started in five seconds, about one-quarter of those viewers are gone, and if it hasn't started in 10 seconds, almost half are gone. Moreover, users with faster internet connectivity (e.g., fiber-optic) abandon a slow-loading video at a faster rate than users with slower connectivity (e.g., cable or mobile).
b) Human audio-visual perception is another important factor. Conversations become difficult if words or syllables go missing or are delayed by more than a couple of tenths of a second; even twenty milliseconds of sudden silence can disturb a conversation. Human eyes can tolerate a bit more variation in video than ears can in voice. Voice and video packets must flow at the proper rate and in the proper sequence. The internet discards packets that arrive after a maximum delay and can request retransmission of missing packets; that is fine for web pages and downloads, but real-time conversations cannot wait. Consonants are short and sharp, so losing a packet at the end of "can't" turns it into "can". Severe congestion can cause whole sentences to vanish and make conversation impossible. Wired broadband networks generally have enough capacity to transmit voice and video and are therefore less affected than wireless mobile broadband.
ISPs have been using these two human factors to provoke consumers to change sites. An ISP can affect a consumer's choice by reducing internet speed for a specific site, provoking the consumer to view a competing site, or by increasing latency to make voice conversation over VoIP difficult, provoking the consumer to use another mode of conversation.
11. Assuming all other factors are the same, broadband internet speed is directly proportional to investment in broadband infrastructure and inversely proportional to the number of users. That is why in India, whenever a new ISP is set up, people get fast speeds at first, but after about 3 months the speed falls as the number of users increases and the broadband infrastructure cannot cope with so many users.
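The proportionality claimed in point 11 can be written as a one-line model; all numbers below are hypothetical illustrations, not measured figures.

```python
# Illustrative model of point 11: per-user speed rises with installed
# infrastructure capacity and falls as subscribers pile on.
# All figures here are hypothetical.
def per_user_speed_mbps(total_capacity_mbps, users):
    return total_capacity_mbps / users

at_launch = per_user_speed_mbps(10_000, 1_000)           # 10.0 Mbps
three_months_later = per_user_speed_mbps(10_000, 5_000)  # 2.0 Mbps
```

With capacity fixed, a five-fold growth in subscribers cuts each user's share of bandwidth five-fold, matching the "fast at launch, slow after 3 months" pattern described above.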
12. It is known within the telecommunications industry that companies do not make money off the last-mile connection, because the initial investment in the last mile is so expensive and the fees they can charge their customers are so competitive. As a result, technology lags in the last mile, and incumbent ISPs simply do not want to invest in upgrading their network infrastructure. The counter view is that ISPs deliberately create physical limits: instead of increasing capacity, an ISP keeps it scarce by under-investing in broadband infrastructure so that it can charge for preferential access to the scarce resource.
13. The availability of good content is a factor that stimulates broadband investment. The more good content that content providers make available, the more consumers will demand access to sites and apps, and the more ISPs will invest in the infrastructure to facilitate delivery.
14. The average bandwidth cost to an ISP varies from $30,000 per Gbps per month in Europe and North America to $90,000 in certain parts of Asia and Latin America. Therefore, to control their bandwidth costs, ISPs are deploying a variety of ad-hoc traffic shaping policies that specifically target bulk transfers, because these consume the vast majority of bytes. Examples of bulk content transfers include downloads of music and movie files, distribution of large software and games, online backups of personal and commercial data, and sharing of huge scientific data repositories. Increasingly, economic rather than physical constraints limit the performance of many internet paths.
15. The hardware and software that run the internet treat every byte of data equally. All internet transmissions are fragmented into data packets that are routed through the network autonomously (the end-to-end principle) and as fast as possible (the best-effort principle); packets generally travel the path of least resistance from one computer to another. However, there is a desire for reliable transmission of information that is time-critical (requiring low latency), or for which data packets should be received at a steady rate and in a particular order (requiring low jitter). Voice communication, for example, requires both low latency and low jitter. So we have quality of service (QoS) at the router level, where voice transmission is prioritized over other data. Voice, video, and critical data applications are granted priority or preferential service by network devices so that the quality of these strategic applications does not degrade to the point of being unusable. This QoS traffic management technology is implemented at the router level. There is a fine line between correctly applying traffic management to ensure a high quality of service and wrongly interfering with internet traffic to limit applications that threaten the ISP's own lines of business. An alternative to complex QoS control mechanisms is to provide high-quality communication by generously over-provisioning a network, so that capacity is based on peak traffic load estimates. Remember: the greater the broadband infrastructure and capacity, the lesser the need for traffic control and management.
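One concrete traffic-management mechanism of the router-level kind point 15 describes is the token bucket, sketched below. This is a minimal model of rate limiting under stated assumptions, not the text's own example or any specific router's implementation.

```python
# Minimal token-bucket rate limiter, a common router-level
# traffic-management mechanism. A flow may send a packet only when
# enough tokens have accumulated, capping its average rate while
# permitting short bursts. Illustrative sketch only.
class TokenBucket:
    def __init__(self, rate_tokens_per_s, capacity):
        self.rate = rate_tokens_per_s   # refill rate
        self.capacity = capacity        # maximum burst size
        self.tokens = capacity          # start full
        self.last = 0.0                 # time of last refill

    def allow(self, now, packet_cost=1.0):
        """Refill for elapsed time, then admit the packet if affordable."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_cost:
            self.tokens -= packet_cost
            return True
        return False

bucket = TokenBucket(rate_tokens_per_s=2, capacity=2)
burst = [bucket.allow(0.0) for _ in range(3)]  # [True, True, False]
later = bucket.allow(0.5)                      # True: one token has refilled
```

Applied to a bulk flow, such a shaper smooths congestion without inspecting content; applied selectively to a competitor's traffic, the very same mechanism becomes the "wrong interference" the text warns about.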
16. Fast lanes (paid peering, CDN and paid prioritization), slow lanes, increased latency, zero rating, blocking, redirection, degraded quality of service, weakened competition, and unwillingness to upgrade networks (overselling service) are ways in which last-mile ISPs generate profits, and all of them are against net neutrality. It is all about money and greed: net neutrality places restrictions on potentially revenue-generating functionality of the ISP. ISPs can do all these things because of their monopoly over the last-mile connection. End users can be left in a restricted, low-quality slow lane, or in a fast lane with fewer destinations to reach, without even knowing about it, given the absolute lack of transparency by ISPs. Most consumers know nothing about traffic management practices or the level of quality they can expect from their internet service. ISPs also give preferential treatment to individual speed-test sites, so when you test your internet speed, it appears higher than it actually is.
17. Moving large data like movies and music videos requires larger, faster and more expensive internet "pipes" than moving emails and simple web pages. On top of that, such large data is time-sensitive, needs low latency, and is therefore prioritized. As opposed to text files, video streaming requires more resources and can slow down the process for everyone else. There is merit in the argument that all data is not the same. Additionally, different types of data are obtained at different prices, so they cannot all be sold at the same rates. Peer-to-peer (P2P) file sharing and high-quality streaming video require high bandwidth for extended periods and can cause a service to become oversubscribed, resulting in congestion and poor performance. If a resource has a capacity constraint, there may be a point at which a single user's consumption negatively affects another's: when one consumer uses too much bandwidth to download video or use a P2P service, it affects other paying customers who then cannot send their 10 KB emails. One out of every two bytes of data traveling across the internet is streaming video from Netflix or YouTube. If ISPs start giving further preferential treatment to the biggest players, will there be any bandwidth left for independent video producers, upstart social media sites, bloggers and podcasters? A higher price should be charged for traffic which consumes more of a limited resource, or which requires superior quality of service and adversely affects a neighbour's traffic.
18. Please do not confuse peering with peer-to-peer (P2P) file transfer. Peering is a direct connection between an ISP and a content provider (e.g. Google) bypassing the internet backbone, while peer-to-peer is the sharing of files between client computers rather than downloading a file from a content provider. With peering, you get a file from the content provider at faster speed; with P2P, you get a file from another user's computer at faster speed. Peering is a violation of net neutrality by the ISP, while P2P is a violation of net neutrality by consumers.
19. Arguments about net neutrality shouldn't be used to prevent the most disadvantaged people in society from gaining access to the internet in India. Eliminating zero-rating programs that bring more people online won't increase social inclusion or close the digital divide. Only 7% of the data used by Internet.org subscribers came through the initiative's free, zero-rated offerings; other, paid services accounted for the remaining 93%. This shows that zero rating only provides initial internet access to customers; later it almost becomes a paid service. Studies have shown that internet access reduces poverty and creates jobs. Even if some zero-rating programs might create barriers to market entry for new start-ups, the access could help small business owners and farmers tap into a larger market for their goods, and can bring basic education and information to rural areas. On the other hand, for poor people using zero rating, the internet means Google and Facebook, making it awfully hard for any competitor to arise. Creating preferential access to further social causes and service penetration is one thing; using it to create commercial monopolies and business cartels is quite another.
20. Almost all ISPs are built on a multipurpose model, earning revenue from most customers for all three services: phone plus SMS, data, and video. When many customers eliminate one of those services (phone plus SMS) and increase their data usage by using OTT services like WhatsApp and Skype to make up for it, the service provider faces a double whammy. The revenue earned by telecom operators for one minute of traditional voice call is, on average, 0.50 rupees, compared to data revenue of around 0.04 rupees for one minute of VoIP call usage, which is 12.5 times less. This clearly indicates that the substitution of voice with data is bound to adversely impact the revenues of telecom operators, and consequently both their infrastructure spending and the prices consumers pay. Since OTT services consume resources on ISP infrastructure and also hurt the ISPs' business interests, it would be unfair to invoke net neutrality by saying all data are the same. The accused free riders counter that users already pay for content and applications, which allows ISPs to profit from their investment in networks; but this argument appears hollow, as the profit margins of U.S. broadband providers are generally one-sixth to one-eighth of those of companies that use broadband (such as Apple or Google).
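The revenue comparison in point 20 is simple arithmetic, using the per-minute figures quoted in the text:

```python
# Per-minute revenue figures from the text (in rupees).
voice_revenue_per_min = 0.50      # traditional voice call
voip_data_revenue_per_min = 0.04  # data revenue for a minute of VoIP

# A voice minute earns roughly 12.5 times more than the equivalent
# minute of VoIP data.
ratio = voice_revenue_per_min / voip_data_revenue_per_min  # ~12.5
```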
21. I am of the view that we should keep government miles away from net neutrality, because the internet has worked well without government meddling, and governments and corporates are always hand in glove with each other.
22. In my view, the best way to maintain net neutrality is to increase the number of ISPs so as to increase competition among them, with each one having large capacity to carry internet traffic. Informed consumers could then choose among offers from different providers, selecting the price, quality of service and range of applications and content that suits their particular needs.
23. Technically and commercially, net neutrality does not exist even today. Router-level prioritization of time-sensitive packets (the IEEE labelling scheme that puts network management and voice ahead of video and other traffic), two-sided pricing linked to QoS, the paid peering, paid prioritization and CDNs used by large companies like Google, Netflix and Facebook, and the "walled garden" strategies of mobile ISPs all mean that we are already living in a world where the net is not neutral, technically or commercially.
24. I propose that the internet be evolved into a two-lane internet sharing the same infrastructure: a first-lane common internet and a second-lane special internet, together forming a high-speed, low-latency network. The common internet is for common people, treating all data on the internet equally, not discriminating or charging differentially by user, content, site, platform, application, type of attached equipment, or mode of communication; it would have a basal internet speed of at least 4 Mbps in developed nations and 1 Mbps in developing nations for all users, with average network latency of less than 125 milliseconds, and consumers would pay a flat rate depending on the quantum of data used. An ISP can offer the second-lane special internet only if it fulfils the criteria of the common internet. The special internet is a specialized, fast, reliable and secure internet for special services like telemedicine, teleoperation of remote devices, and driverless cars, with speeds of at least 10 Mbps, average network latency of less than 100 milliseconds, selective network prioritization of data/video/voice at the router level, and transparent use of peering, P2P and CDNs; both consumers and content/service/application providers can use this fast, prioritized service by paying more than for the common internet. The job of the ISP is to provide the two-lane internet and charge differentially depending on which lane you use, not to enforce a choice on users, and to inform consumers about traffic management practices and the level of quality they can expect. A consumer may use both lanes (the common internet for common surfing and the special internet for videos or VoIP), the common internet alone for all uses including downloading videos, or the special internet alone for all uses. Let the consumer be master of his/her destiny rather than have a destiny scripted by ISPs or CSPs. However, special services like telemedicine, teleoperation of remote devices, and driverless cars would work only on the special internet and would always get priority transmission over any other data on the special internet.
Dr. Rajiv Desai. MD.
June 15, 2015
I am grateful to the internet for my survival, as governments and the media have done everything to degrade me. Whether it is the ISPs, the content providers like Google, Facebook and Netflix, the OTT service providers, or the internet users, we all belong to the internet family. The net neutrality issue ought to be resolved within the family, without meddling by governments and courts.