Dr Rajiv Desai

An Educational Blog

PRIVACY

Privacy: 

_____

 

_____

Section-1   

Prologue:   

Many people consciously respect other people’s property yet sometimes fail to respect something even more important to a person: his or her privacy. Every individual values his or her privacy. Private time with no one around gives you room to reflect on the most important issues in your life, while private time with another helps to build a close personal relationship with the one you choose to draw close to. To intrude on this is like stealing from someone or trespassing on the person’s domain. The right to privacy could refer to your right to be left alone or to your right not to share every detail with someone.

The most common retort against privacy advocates — by those in favour of ID checks, cameras, databases, data mining and other wholesale surveillance measures — is this line: “If you aren’t doing anything wrong, what do you have to hide?”  Some clever answers: “If I’m not doing anything wrong, then you have no cause to watch me.” “Because the government gets to define what’s wrong, and they keep changing the definition.” “Because you might do something wrong with my information.” The basic problem with quips like these — as right as they are — is that they accept the premise that privacy is about hiding a wrong. It is not. Privacy is an inherent human right, and a requirement for maintaining the human condition with dignity and respect.

Cardinal Richelieu understood the value of surveillance when he famously said, “If one would give me six lines written by the hand of the most honest man, I would find something in them to have him hanged.” Watch someone long enough, and you’ll find something to arrest — or just blackmail — with. Privacy is important because without it, surveillance information will be abused: to peep, to sell to marketers and to spy on political enemies — whoever they happen to be at the time. Privacy protects us from abuses by those in power, even if we’re doing nothing wrong at the time of surveillance. A future in which privacy would face constant assault was so alien to the framers of the constitutions of many nations that it never occurred to them to call out privacy as an explicit right. Privacy was inherent to the nobility of their being and their cause.

Of course, being watched in your own home was unreasonable. Watching at all was an act so unseemly as to be inconceivable among gentlemen in their day. You watched convicted criminals, not free citizens. You ruled your own home. It’s intrinsic to the concept of liberty. For if we are observed in all matters, we are constantly under threat of correction, judgment, criticism, even plagiarism of our own uniqueness. We become children, fettered under watchful eyes, constantly fearful that — either now or in the uncertain future — patterns we leave behind will be brought back to implicate us, by whatever authority that has now become focused upon our once-private and innocent acts. We lose our individuality, because everything we do is observable and recordable. Many people wrongly characterize the debate as “security versus privacy.” The real choice is liberty versus control. Tyranny, whether it arises under threat of foreign physical attack or under constant domestic authoritative scrutiny, is still tyranny. Liberty requires security without intrusion, security plus privacy.

Traditional concepts of privacy — our right to be left alone — and the basic principle that the content of our communications should remain confidential — are being challenged and eroded with advancements in digital technology. Similarly, the fundamental principle that individuals should be able to control when their personal data is collected by third parties and how it is used is nearly impossible to implement in a world where personal data is collected, created, used, processed, analysed, shared, transferred, copied, and stored in unprecedented ways and at an extraordinary speed and volume – without your consent! Many of our activities leave a trail of data. This includes phone records, credit card transactions, GPS in cars tracking our positions, mobile phones, smart wearable devices, smart toys, connected cars, drones, personal assistants like the Amazon Echo, instant messaging, watching videos and browsing websites. In fact, online, almost all activities leave a trail of data. There will be no opting out of this data-intensive world. Technology and sharing personal information have become indispensable to participation in modern society. Internet access and use of new digital technologies are necessary for employment, education, access to benefits, and full participation in economic and civic life. So, what happens to our personal data, identity, reputation, and privacy in this digital connected world? Unclear. Our privacy laws are based on antiquated notions of notice and choice, and are completely inadequate to address this rapid evolution in technology, computer science, and artificial intelligence.

_____

Abbreviations and synonyms:

PII = Personally Identifiable Information

ISP= Internet Service Providers

DPIA = Data Protection Impact Assessment

PIA = Privacy Impact Assessment

GDPR = General Data Protection Regulation

CCPA = California Consumer Privacy Act

HIPAA = Health Insurance Portability and Accountability Act

IP = internet protocol (responsible for establishing communications in most of our networks)

IP = intellectual property (protected in law by patents, copyright and trademarks)

TOR = The Onion Router 

VPN = Virtual Private Network 

RTI = Right to Information

GSMA = GSM Association (originally Groupe Spécial Mobile); GSM = Global System for Mobile Communications

VCC = Virtual Credit Card

NSA = National Security Agency  

Right to be left alone = right to be let alone

_____

_____

Section-2

Introduction to privacy:  

What is Privacy?  

According to Parent (2012), “privacy is the condition of not having undocumented personal knowledge about one possessed by others.”

According to Warren and Brandeis (1890), “privacy consists of being let alone” (p. 205).

According to Sandel (1989), “privacy is the right to engage in certain conduct without government restraint, the traditional version is the right to keep certain personal facts from public view” (p. 524).

According to Garfinkel (2000), “privacy is about self-possession, autonomy, and integrity” (p. 4).

_

Privacy is a fundamental right, essential to autonomy and the protection of human dignity, serving as the foundation upon which many other human rights are built. Privacy enables us to create barriers and manage boundaries to protect ourselves from unwarranted interference in our lives, which allows us to negotiate who we are and how we want to interact with the world around us. Privacy helps us establish boundaries to limit who has access to our bodies, places and things, as well as our communications and our information. The rules that protect privacy give us the ability to assert our rights in the face of significant power imbalances. As a result, privacy is an essential way we seek to protect ourselves and society against arbitrary and unjustified use of power, by reducing what can be known about us and done to us, while protecting us from others who may wish to exert control. Privacy is essential to who we are as human beings, and we make decisions about it every single day. It gives us a space to be ourselves without judgement, allows us to think freely without discrimination, and is an important element of giving us control over who knows what about us.

Privacy is the freedom of an individual to be in solitude, to withhold his/her private information and to express himself/herself selectively. The boundaries and content of what is considered private or public differ among cultures and individuals, but still share common elements and constructs. When something is private to a person, it usually means that it is inherently special or sensitive to him/her. The domain of privacy partially overlaps security, which can include the concepts of appropriate use as well as protection of information. Privacy may also take the form of bodily integrity and social dignity. Most cultures recognize the ability of individuals to withhold certain parts of their personal information from wider society, such as by closing the door to one’s home.

Privacy as per Article 12 of the Universal Declaration of Human Rights (UDHR, 1948):

‘No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.’

Thus, we find that privacy is anything which, if lost, can result in ruining a person’s reputation, honour, relationships, intellectual possessions and the like. Life is not merely eating, sleeping and sex; it is much more, and it also demands non-material things such as environment, love, acceptance, happiness and privacy.

____

The term “privacy” is used frequently in ordinary language as well as in philosophical, political and legal discussions, yet there is no single definition or analysis or meaning of the term. The concept of privacy has broad historical roots in sociological and anthropological discussions about how extensively it is valued and preserved in various cultures. Moreover, the concept has historical origins in well-known philosophical discussions, most notably Aristotle’s distinction between the public sphere of political activity and the private sphere associated with family and domestic life. Yet historical use of the term is not uniform, and there remains confusion over the meaning, value and scope of the concept of privacy.  

Early treatises on privacy appeared with the development of privacy protection in American law from the 1890s onward, and privacy protection was justified largely on moral grounds. This literature helps distinguish descriptive accounts of privacy, describing what is in fact protected as private, from normative accounts of privacy defending its value and the extent to which it should be protected. In these discussions some treat privacy as an interest with moral value, while others refer to it as a moral or legal right that ought to be protected by society or the law. Clearly one can be insensitive to another’s privacy interests without violating any right to privacy, if there is one.

There are several sceptical and critical accounts of privacy. According to one well known argument there is no right to privacy and there is nothing special about privacy, because any interest protected as private can be equally well explained and protected by other interests or rights, most notably rights to property and bodily security (Thomson, 1975). Other critiques argue that privacy interests are not distinctive because the personal interests they protect are economically inefficient (Posner, 1981) or that they are not grounded in any adequate legal doctrine (Bork, 1990). Finally, there is the feminist critique of privacy, that granting special status to privacy is detrimental to women and others because it is used as a shield to dominate and control them, silence them, and cover up abuse (MacKinnon, 1989).

Nevertheless, most theorists take the view that privacy is a meaningful and valuable concept. Philosophical debates concerning definitions of privacy became prominent in the second half of the twentieth century, and are deeply affected by the development of privacy protection in the law. Some defend privacy as focusing on control over information about oneself (Parent, 1983), while others defend it as a broader concept required for human dignity (Bloustein, 1964), or crucial for intimacy (Gerstein, 1978; Inness, 1992). Other commentators defend privacy as necessary for the development of varied and meaningful interpersonal relationships (Fried, 1970, Rachels, 1975), or as the value that accords us the ability to control the access others have to us (Gavison, 1980; Allen, 1988; Moore, 2003), or as a set of norms necessary not only to control access but also to enhance personal expression and choice (Schoeman, 1992), or some combination of these (DeCew, 1997). Discussion of the concept is complicated by the fact that privacy appears to be something we value to provide a sphere within which we can be free from interference by others, and yet it also appears to function negatively, as the cloak under which one can hide domination, degradation, or physical harm to women and others.

_____

Privacy is a fundamental human right. It protects human dignity and other values such as freedom of association and freedom of speech. It has become one of the most important human rights of the modern age. Privacy is recognized around the world in different regions and cultures. It is protected in the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights, and many other international and regional treaties, and nearly every country in the world recognizes a right of privacy in its constitution. At a minimum, these provisions include rights of inviolability of the home and secrecy of communications. In many of the countries where privacy is not specifically recognized in the constitution, the courts have found that right in other provisions. In the United States, the concept of privacy has evolved since it was first articulated by future Justice Louis Brandeis in 1890. His definition of privacy – “the right to be let alone” (Warren and Brandeis, 1890) – has been influential for more than a century. In the 1960s, 1970s, and 1980s, the proliferation of information technology (and concurrent developments in the law of reproductive and sexual liberties) prompted further and more sophisticated legal inquiry into the meaning of privacy. Brandeis’s vision of being “let alone” no longer suffices to define the concept of privacy in today’s digital environment, where personal information can be transported and distributed around the world in seconds. With the growth and development of new technological advancements, society and government also recognized its importance. The surveillance potential of powerful computer systems prompted demands for specific rules governing the collection and handling of personal information.

The genesis of modern legislation in this area can be traced to the first data protection law in the world, enacted in the German Land of Hesse in 1970. This was followed by national laws in Sweden (1973), the United States (1974), Germany (1977), and France (1978). By the turn of the millennium, ideas about privacy had become more complex, reflecting the rapid and remarkable advances in computing that made possible the storage, manipulation, and sharing of data at an unprecedented rate.

_____

Many philosophers have examined the moral foundations of privacy interests. Some hold that the obligation to protect privacy is ultimately based on other, more fundamental moral principles such as the right to liberty or autonomy or the duty to not harm others. For example, breaching medical privacy can be regarded as unethical because it can cause harm such as loss of employment, discrimination, legal liability, or embarrassment to the person. Breaching privacy may be unethical even if it does not cause any harm because it violates a person’s right to control the disclosure of private information. Watching someone undress without permission invades physical privacy; even if it does not cause harm to the person, it violates his or her right to control access to his or her body. Others hold that violations of privacy are wrong because they undermine intimacy, which is necessary for the formation of meaningful human relationships. People develop close relationships by sharing private information, secret dreams and desires, and physical space. Because people cannot form these close relationships unless they have some expectation that their privacy will be protected, society needs laws and ethical rules to protect privacy.

______

Privacy is hitting the headlines more than ever. As computer users are asked to change their passwords again and again in the wake of exploits like Heartbleed and Shellshock, they’re becoming aware of the vulnerability of their online data — a vulnerability recently demonstrated when scores of celebrities had their most intimate photographs stolen.

Any of us could have our privacy violated at any time… but what does that mean exactly?

What privacy means varies between Europe and the US, between libertarians and public figures, between the developed world and developing countries, between women and men.

However, before you can define privacy, you first have to define its opposite number: what’s public. The first definition of “public” at merriam-webster.com includes two different descriptions that are somewhat contradictory. The first is “exposed to general view: open” and the second is “well-known; prominent”.

These definitions are at odds with each other because it’s easy for something to be exposed to view without it being prominent. In fact, that defines a lot of our everyday life. If you have a conversation with a friend in a restaurant or if you do something in your living room with the curtains open or if you write a message for friends on Facebook, you presume that what you’re doing will not be well-known, but it certainly could be open to general view.

So, is it public or is it private?

It turns out that this is a very old question. When talking about recent celebrity photo thefts, Kyle Chayka highlighted the early 20th-century case of Gabrielle Darley Melvin, an ex-prostitute who had been acquitted of murder. After settling quietly into marriage, Melvin found herself the subject of an unauthorized movie that ripped apart the fabric of her new life. Melvin sued the makers of the film but lost. She was told in the 1931 decision Melvin v. Reid: “When the incidents of a life are so public as to be spread upon a public record they come within the knowledge and into the possession of the public and cease to be private.” So a California court had one answer for what was public and what was private.

Melvin’s case was one where public events had clearly entered the public consciousness. However today, more and more of what people once thought of as private is also escaping into the public sphere — and we’re often surprised by it. That’s because we think that private means secret, and it doesn’t; it just means something that isn’t public.

Anil Dash recently discussed this on Medium and he highlighted a few reasons that privacy is rapidly sliding down this slippery slope of publication.

First, companies have financial incentives for making things “well-known” or “prominent”. The 24-hour news cycle is forcing media to report everything, so they’re mobbing celebrities and publishing conversations from Facebook or Twitter that people consider private. Meanwhile, an increasing number of tech companies are mining deep ores of data and selling them to the highest bidder; the more information that they can find, the more they can sell.

Second, technology is making it ridiculously easy to take those situations that are considered “private” despite being “exposed to general view” and making them public. It’s easier than ever to record a conversation, or steal data, or photograph or film through a window, or overhear a discussion. Some of these methods are legal, some aren’t, but they’re all happening — and they seem to be becoming more frequent.

The problem is big enough that governments are passing laws on the topic. In May 2014, the European Court of Justice decreed that people had a “Right to Be Forgotten”: they should be able to remove information about themselves from the public sphere if it’s no longer relevant, making it private once more. Whether this was a good decision remains to be seen, as it’s already resulted in the removal of information that clearly is relevant to the public sphere. In addition, it’s very much at odds with the laws and rights of the United States, especially the free speech clause of the First Amendment. As Jeffrey Toobin said in a New Yorker article: “In Europe, the right to privacy trumps freedom of speech; the reverse is true in the United States.” However, the right to be forgotten is distinct from the right to privacy.

All of this means that the line between public and private remains as fuzzy as ever.

We have a deep need for the public world: both to be a part of it and to share ourselves with it. However, we also have a deep need for privacy: to keep our information, our households, our activities, and our intimate connections free from general view. In the modern world, drawing the line between these two poles is something that every single person has to consider and manage.

That’s why it’s important that each individual define their own privacy needs — so that they can fight for the kinds of privacy that are important to them.

______

______

Definitions of privacy:

Of all the human rights in the international catalogue, privacy is perhaps the most difficult to define and circumscribe. Privacy has roots deep in history. The Bible has numerous references to privacy. There was also substantive protection of privacy in early Hebrew culture, Classical Greece and ancient China. These protections mostly focused on the right to solitude. Definitions of privacy vary widely according to context and environment. In many countries, the concept has been fused with Data Protection, which interprets privacy in terms of management of personal information. Outside this rather strict context, privacy protection is frequently seen as a way of drawing the line at how far society can intrude into a person’s affairs.  

_

Privacy has deep historical roots (reviewed by Pritts, 2008; Westin, 1967), but because of its complexity, privacy has proven difficult to define and has been the subject of extensive, and often heated, debate by philosophers, sociologists, and legal scholars. The term “privacy” is used frequently, yet there is no universally accepted definition of the term, and confusion persists over the meaning, value, and scope of the concept of privacy. At its core, privacy is experienced on a personal level and often means different things to different people (reviewed by Lowrance, 1997; Pritts, 2008). In modern society, the term is used to denote different, but overlapping, concepts such as the right to bodily integrity or to be free from intrusive searches or surveillance. The concept of privacy is also context specific, and acquires a different meaning depending on the stated reasons for the information being gathered, the intentions of the parties involved, as well as the politics, convention and cultural expectations (Nissenbaum, 2004; NRC, 2007b).

_

In the 1890s, future United States Supreme Court Justice Louis Brandeis articulated a concept of privacy that urged that it was the individual’s “right to be let alone.” Brandeis argued that privacy was the most cherished of freedoms in a democracy, and he was concerned that it should be reflected in the Constitution.

Robert Ellis Smith, editor of the Privacy Journal, defined privacy as “the desire by each of us for physical space where we can be free of interruption, intrusion, embarrassment, or accountability and the attempt to control the time and manner of disclosures of personal information about ourselves.”

According to Edward Bloustein, privacy is an interest of the human personality. It protects the inviolate personality, the individual’s independence, dignity and integrity.

According to Ruth Gavison, there are three elements in privacy: secrecy, anonymity and solitude. It is a state which can be lost, whether through the choice of the person in that state or through the action of another person.

The Calcutt Committee in the United Kingdom said that, “nowhere have we found a wholly satisfactory statutory definition of privacy.” But the committee was satisfied that it would be possible to define it legally and adopted this definition in its first report on privacy: “The right of the individual to be protected against intrusion into his personal life or affairs, or those of his family, by direct physical means or by publication of information.” The Preamble to the Australian Privacy Charter provides that, “A free and democratic society requires respect for the autonomy of individuals, and limits on the power of both state and private organizations to intrude on that autonomy…Privacy is a key value which underpins human dignity and other key values such as freedom of association and freedom of speech…Privacy is a basic human right and the reasonable expectation of every person.” Autonomy means the ability to make our own life decisions free from any force. Some authors refer to privacy as “the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others.”

_

The Common Criteria for Information Technology Security Evaluation (referred to as Common Criteria or CC) is an international standard (ISO/IEC 15408) for computer security certification. According to the CC standard, privacy involves “user protection against discovery and misuse of identity by other users”. In addition, the CC standard defines the following requirements in order to guarantee privacy:

  • Anonymity
  • Pseudonymity
  • Unlinkability
  • Unobservability

Thus, privacy can be understood as a framework of requirements that prevents the discovery and misuse of the user’s identity.

Anonymity:

Anonymity is intrinsically present in the concept of privacy. Nevertheless, anonymity refers exclusively to matters related to identity. The CC standard states: “Anonymity ensures that a user may use a resource or service without disclosing the user’s identity. The requirements for anonymity provide protection of the user identity. Anonymity is not intended to protect the subject identity. […] Anonymity requires that other users or subjects are unable to determine the identity of a user bound to a subject or operation.”

Privacy is when nobody is aware of what you are doing but potentially they know your identity. Privacy relates to content.  If you send an encrypted email to a friend so only the two of you can open it, this is private. It is not public. Anonymity is when nobody knows who you are but potentially they know what you are doing.  
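
To make the distinction concrete, here is a minimal sketch in Python. It assumes the third-party “cryptography” package is available (an illustrative choice, not something prescribed by any standard), and the addresses and message are invented: the message body is unreadable to outsiders (privacy of content), yet the envelope still shows who is talking to whom (no anonymity).

# Minimal illustration: the content is private, the identities are not.
# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()       # secret shared between the two friends
cipher = Fernet(key)

token = cipher.encrypt(b"Meet me at 6 pm.")   # only key holders can read this

# An eavesdropper still sees the (hypothetical) envelope, i.e. who talks to whom:
envelope = {"from": "alice@example.com", "to": "bob@example.com", "body": token}

print(cipher.decrypt(envelope["body"]).decode())   # the recipient recovers the text

A tool such as TOR addresses the opposite problem: it hides who is communicating with whom, while the content itself may still be readable unless it is separately encrypted.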

Pseudonymity:

The use of anonymity techniques can protect the user from revealing his or her real identity. Most of the time, however, some form of identity is technically required in order to interact with another entity. The CC standard states that pseudonymity ensures that a user may use a resource or service without disclosing his or her user identity, while still remaining accountable for that use.

Unlinkability:

In order to guarantee protection of the user’s identity, there is a need for unlinkability of the user’s activities within a particular context. This means there is not enough information to determine whether the activities performed by the user are related to one another.

Unobservability:

The CC standard states that unobservability “requires that users and/or subjects cannot determine whether an operation is being performed.” Other authors argue that unobservability should be distinguished from undetectability, the reasoning being that something can be unobservable but still detectable.

______

In spite of the several attempts that have been made to define privacy, no universal definition has emerged. Although the claim to privacy is universal, its concrete form differs according to the prevailing societal characteristics and the economic and cultural environment. This means that privacy must be reinterpreted in the light of the current era and examined in its current context.

There are several factors that affect what people consider private. There are huge differences between particular societies and cultures, and scientific development can also create a new, urgent need to ensure the protection of privacy. It also depends on the concrete situation, on the context: sharing the same information in different situations might be considered private to different degrees. American law professor Alan Westin established three levels that affect privacy norms: the political, the socio-cultural and the personal level. The individual also plays a central role: privacy can be understood as a quasi “aura” around the individual, which constitutes the limit between him/her and the outside world. The limits of this aura change from context to context and from individual to individual, so from all this individualized and changing context an average standard must be found, and this standard can be legally protected. Against this ever-changing background, numerous attempts to define privacy have been made during the last 120 years. However, there is a problem with all these definitions, which Daniel Solove explained in one of his articles: their scope is either too narrow or too broad. He emphasizes that this does not mean that these concepts lack merit; the problem is that these authors use a traditional method of conceptualizing privacy, and as a result their definitions either highlight only some aspects of privacy, or they are too broad and do not give an exact view of the elements of privacy. He created six categories for these definitions, according to which privacy is (1) the right to be let alone, (2) limited access to the self, (3) secrecy, (4) control of personal information, (5) personhood and (6) intimacy.

As already presented, Warren and Brandeis defined privacy as “the right to be let alone”. According to Israeli law professor Ruth Gavison, “our interest in privacy […] is related to our concern over our accessibility to others: the extent to which we are known to others, the extent to which others have physical access to us, and the extent to which we are the subject of others’ attention.” American jurist and economist Richard Posner avoids giving a definition but states “that one aspect of privacy is the withholding or concealment of information.” From among the authors who consider privacy as control over personal information, Alan Westin and American professor Charles Fried must be mentioned. Westin defined privacy as “the claim of an individual to determine what information about himself or herself should be known to others”, while Fried stated that “privacy […] is the control we have over information about ourselves.” The American legal scholar Edward Bloustein argued that intrusion into privacy has a close connection with personhood, individuality and human dignity. American professor Tom Gerety understands privacy as “the control over or the autonomy of the intimacies of personal identity”. Máté Dániel Szabó, a Hungarian jurist, argued that “privacy is the right of the individual to decide about himself/herself”. As all these definitions state something very important about what we should consider private, it is an extremely hard task to attempt to create a uniform definition of privacy.

_______

_______

What is personally identifiable information (PII)?

Personally identifiable information (PII) is any data that can be used to identify a specific individual. Social Security numbers, mailing or email addresses, and phone numbers have most commonly been considered PII, but technology has expanded the scope of PII considerably. It can include an IP address, login IDs, social media posts, or digital images. Geolocation, biometric, and behavioral data can also be classified as PII.
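
As a rough illustration of how organisations often reduce the sensitivity of PII before storing or analysing it, the Python sketch below replaces direct identifiers with salted one-way hashes (pseudonymization). The field names, salt and record are invented for the example; a real system would protect the secret salt or key far more carefully, and pseudonymized data may still be treated as personal data under laws such as the GDPR.

# Illustrative pseudonymization of PII fields using salted SHA-256 hashes.
# Field names and the salt are hypothetical, used only to make the idea concrete.
import hashlib

SALT = b"replace-with-a-secret-random-salt"
PII_FIELDS = {"email", "phone", "ssn", "ip_address"}

def pseudonymize(record: dict) -> dict:
    out = {}
    for field, value in record.items():
        if field in PII_FIELDS:
            digest = hashlib.sha256(SALT + str(value).encode()).hexdigest()
            out[field] = digest[:16]   # stable pseudonym, not reversible without the salt
        else:
            out[field] = value
    return out

record = {"email": "jane@example.com", "phone": "+1-555-0100", "city": "Pune"}
print(pseudonymize(record))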

Internet privacy involves the right or mandate of personal privacy concerning the storing, repurposing, provision to third parties, and displaying of information pertaining to oneself via the Internet. Internet privacy is a subset of data privacy. Privacy concerns have been articulated from the beginnings of large-scale computer sharing. It has been suggested that the “appeal of online services is to broadcast personal information on purpose.”

_

_

With technology invading every sort of life activity, information privacy is becoming more complex by the minute as more and more data is collected, transferred, exchanged and analysed for positive and negative reasons. As technology gets more powerful and invasive, privacy becomes an ever more sensitive issue, because technology blurs the line between private and public. Even the businesses in this field face an uphill task in keeping the personal information of their customers safe. As a result, privacy has become the most delicate consumer protection issue of the information age, even though it is, first and foremost, a human right.

The former US Homeland Security secretary, Michael Chertoff, has been in the news of late, discussing cybersecurity and his new book ‘Exploding Data’. His outlook gets fairly scary as he intimates that most of our personal and corporate data is out there already, and we have no idea who already has it or what they intend to do with it. Data has become the “new domain of warfare,” or at least part of the toolbox for waging war. So, is stealing data or spying considered warfare? Chertoff says no, “but if you destroy things and kill people with it, that’s warfare.” He further points to the public and business fascination with, and reliance upon, social media as an obvious data giveaway. Chertoff warns of our cell phone usage as an immediate and direct funnel of personal, and private, information. We’re opening ourselves up and freely giving away our data, even “Locational Data…our Digital Exhaust,” as he puts it. Loyalty cards at the grocery store, credit cards, iWallets, ride services and the like are other examples, and Chertoff explains that we give it away easily in the name of convenience and consumerism.

______

Control over information:

Control over one’s personal information is the concept that “privacy is the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others.” Generally, a person who has consensually formed an interpersonal relationship with another person is not considered “protected” by privacy rights with respect to the person they are in the relationship with. Charles Fried said that privacy is not simply an absence of information about us in the minds of others; rather, it is the control we have over information about ourselves. Nevertheless, in the era of big data, control over information is under pressure.

______

Surveillance:

Surveillance is the systematic investigation or monitoring of the actions or communications of one or more persons. The primary purpose of surveillance is generally to collect information about the individuals concerned, their activities, or their associates. There may be a secondary intention to deter a whole population from undertaking some kinds of activity.

Two separate classes of surveillance are usefully identified:

-1. Personal Surveillance is the surveillance of an identified person. In general, a specific reason exists for the investigation or monitoring. It may also, however, be applied as a means of deterrence against particular actions by the person, or repression of the person’s behaviour.

-2. Mass Surveillance is the surveillance of groups of people, usually large groups. In general, the reason for investigation or monitoring is to identify individuals who belong to some particular class of interest to the surveillance organization. It may also, however, be used for its deterrent effects.  

The basic form, physical surveillance, comprises watching (visual surveillance) and listening (aural surveillance). Monitoring may be undertaken remotely in space, with the aid of image-amplification devices like field glasses, infrared binoculars, light amplifiers, and satellite cameras, and sound-amplification devices like directional microphones; and remotely in time, with the aid of image- and sound-recording devices.

In addition to physical surveillance, several kinds of communications surveillance are practised, including mail covers and telephone interception.

The popular term electronic surveillance refers to both augmentations to physical surveillance (such as directional microphones and audio bugs) and to communications surveillance, particularly telephone taps.

These forms of direct surveillance are commonly augmented by the collection of data from interviews with informants (such as neighbours, employers, workmates, and bank managers). As the volume of information collected and maintained has increased, the record collections of organizations have become an increasingly important source. These are often referred to as ‘personal data systems’. This has given rise to an additional form of surveillance:

Data Surveillance (or Dataveillance) is the systematic use of personal data systems in the investigation or monitoring of the actions or communications of one or more persons. Dataveillance is significantly less expensive than physical and electronic surveillance, because it can be automated. As a result, the economic constraints on surveillance are diminished, and more individuals, and larger populations, are capable of being monitored.

Like surveillance more generally, dataveillance is of two kinds:

-1. Personal Dataveillance is the systematic use of personal data systems in the investigation or monitoring of the actions or communications of an identified person. In general, a specific reason exists for the investigation or monitoring of an identified individual. It may also, however, be applied as a means of deterrence against particular actions by the person, or repression of the person’s behaviour.

-2. Mass Dataveillance is the systematic use of personal data systems in the investigation or monitoring of the actions or communications of groups of people. In general, the reason for investigation or monitoring is to identify individuals who belong to some particular class of interest to the surveillance organization. It may also, however, be used for its deterrent effects.  

Dataveillance comprises a wide range of techniques, which include the following (a brief illustrative sketch follows the list):

-1. Front-End Verification. This is the cross-checking of data in an application form against data from other personal data systems, in order to facilitate the processing of a transaction.

-2.  Computer Matching. This is the expropriation of data maintained by two or more personal data systems, in order to merge previously separate data about large numbers of individuals.

-3. Profiling. This is a technique whereby a set of characteristics of a particular class of person is inferred from past experience, and data-holdings are then searched for individuals with a close fit to that set of characteristics.

-4. Data Trail. This is a succession of identified transactions which reflect real-world events in which a person has participated.
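
A toy sketch of two of these techniques, computer matching and profiling, may help make the mechanics concrete. Everything in it (the two “personal data systems”, the field names and the flagging rule) is invented purely for illustration.

# Toy illustration of computer matching and profiling across two hypothetical
# personal data systems. All records, fields and thresholds are invented.
tax_records = [
    {"id": "A12", "name": "R. Shah", "declared_income": 30000},
    {"id": "B07", "name": "M. Rao",  "declared_income": 90000},
]
bank_records = [
    {"id": "A12", "annual_deposits": 120000},
    {"id": "B07", "annual_deposits": 95000},
]

# Computer matching: merge previously separate data about the same individuals.
merged = {r["id"]: dict(r) for r in tax_records}
for b in bank_records:
    merged[b["id"]].update(b)

# Profiling: search the merged data for individuals fitting a class of interest,
# here a made-up rule that flags deposits far above declared income.
profile_hits = [r for r in merged.values()
                if r["annual_deposits"] > 3 * r["declared_income"]]
print(profile_hits)   # flags the first (fictional) individual only

Even this toy example shows why dataveillance scales so cheaply: once records from separate systems share an identifier, merging and profiling them takes a few lines of code that can be run over millions of records.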

______

How do surveillance and privacy relate?

They are both about the control of information—in one case as discovery, in the other as protection. At the most basic level, surveillance is a way of accessing data. Surveillance implies an agent who accesses (whether through discovery tools, rules or physical/logistical settings) personal data. Privacy, in contrast, involves a subject who restricts access to personal data through the same means. In popular and academic dialogue surveillance is often wrongly seen to be the opposite of privacy and, in simplistic dramaturgy, the former is seen as bad and the latter good. For example, social psychologist Kelvin (Kelvin 1973) emphasized privacy as a nullification mechanism for surveillance. But Kelvin’s assertion needs to be seen as only one of four basic empirical connections between privacy and surveillance. Surveillance is not necessarily the dark side of the social dimension of privacy.

-Yes, privacy (or better, actions taken to restrict access to all, or only to insiders) may serve to nullify surveillance. Familiar examples are encryption, whispering, and disguises.

But

-Surveillance may serve to protect privacy. Examples include biometric identification and audit trails.

-Privacy may serve to protect surveillance. Consider undercover police who use various fronts and false identification to protect their real identity and activities.

-Surveillance may serve to nullify privacy, as Kelvin claims. (Big data, night-vision video cameras, drug tests.)

_

Government mass surveillance:

Over the past decade, rapid advancements in communications technology and new understandings of global security and cybersecurity have motivated governments to eschew the traditional limitations of lawfulness, necessity and proportionality on surveillance. It is perceived that reviewing every communication is necessary to understand all aspects of potential threats to national security, as each message could be the proverbial needle in the digital haystack. This has led to the development of mass surveillance, also known as bulk collection or bulk interception, which captures extensive amounts of data from the undersea fibre-optic cables that carry most of the world’s data.

Mass surveillance programmes form a key part of many national security apparatuses and can be purchased on the private market, including by oppressive regimes. Mass surveillance is often neither lawful nor acknowledged by national authorities, although a recent wave of legal reform aims to grant the practice greater legitimacy. It may be conducted without the knowledge and assistance of ICT companies, who typically own and manage the channels by which information is communicated, although telecommunications companies and Internet service providers are in many instances informed of, asked to cooperate with, or compelled to facilitate government programmes. 

Mass surveillance not only compromises the very essence of privacy, but also jeopardizes the enjoyment of other human rights such as freedom of expression and freedom of assembly and association. This can undermine democratic movements, impede innovation, and leave citizens vulnerable to the abuse of power.

_

In both academic and popular discussion privacy is too often justified as a value because of what it is presumed to do for individuals. But it can also be a positive social value because of what it does for the group. An additional point (neglected by some of privacy’s more strident supporters) is that it can also be an anti-social value tied to private property and modernization.

In contrast, surveillance is too often criticized for what it is presumed to do for more powerful groups (whether government or corporations) relative to the individual. But it can also be a pro-social value. Just as privacy can support the dignity and freedom to act of the person, surveillance can protect the integrity and independence of groups vital to a pluralistic democratic society and it can offer protection to individuals, whether for the dependent such as children and the sick, or to those who like clean water and industrial safety and do not want their precious liberties destroyed by enemies. Surveillance, like privacy, can be good for the individual and for the society, but like privacy it can also have negative consequences for both.

______

Scope of privacy:

Professor Roger Clarke suggests that the importance of privacy has psychological, sociological, economic and political dimensions.

-1. Psychologically, people need private space. This applies in public as well as behind closed doors and drawn curtains …

-2. Sociologically, people need to be free to behave, and to associate with others, subject to broad social mores, but without the continual threat of being observed …

-3. Economically, people need to be free to innovate …

-4. Politically, people need to be free to think, and argue, and act. Surveillance chills behaviour and speech, and threatens democracy.

In general, privacy is a right to be free from secret surveillance and to determine whether, when, how, and to whom, one’s personal or organizational information is to be revealed.

______

States of privacy:

Alan Westin defined four states—or experiences—of privacy: solitude, intimacy, anonymity, and reserve. Solitude is a physical separation from others. Intimacy is a “close, relaxed, and frank relationship between two or more individuals” that results from the seclusion of a pair or small group of individuals. Anonymity is the “desire of individuals for times of ‘public privacy.'” Lastly, reserve is the “creation of a psychological barrier against unwanted intrusion”; this creation of a psychological barrier requires others to respect an individual’s need or desire to restrict communication of information concerning himself or herself.

In addition to the psychological barrier of reserve, Kirsty Hughes identified three more kinds of privacy barriers: physical, behavioral, and normative. Physical barriers, such as walls and doors, prevent others from accessing and experiencing the individual. Behavioral barriers communicate to others—verbally, through language, or non-verbally, through personal space, body language, or clothing—that an individual does not want them to access or experience him or her. Lastly, normative barriers, such as laws and social norms, restrain others from attempting to access or experience an individual.

______   

Privacy is a fundamental human right recognized in the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights and many other international and regional treaties. Privacy underpins human dignity and other key values such as freedom of association and freedom of speech. It has become one of the most important human rights issues of the modern age. Many countries around the world explicitly recognize a right of privacy in their constitutions. At a minimum, these provisions include rights of inviolability of the home and secrecy of communications. Most recently written constitutions, such as South Africa’s and Hungary’s, include specific rights to access and control one’s personal information. In many of the countries where privacy is not explicitly recognized in the constitution, such as the United States, Ireland and India, the courts have found that right in other provisions. In many countries, international agreements that recognize privacy rights, such as the International Covenant on Civil and Political Rights or the European Convention on Human Rights, have been adopted into law.

Reasons for Adopting Comprehensive Privacy Laws:  

There are three major reasons for the movement towards comprehensive privacy and data protection laws. Many countries are adopting these laws for one or more reasons.

  • To remedy past injustices. Many countries, especially in Central Europe, South America and South Africa, are adopting laws to remedy privacy violations that occurred under previous authoritarian regimes.
  • To promote electronic commerce. Many countries, especially in Asia, but also Canada, have developed or are currently developing laws in an effort to promote electronic commerce. These countries recognize consumers are uneasy with their personal information being sent worldwide. Privacy laws are being introduced as part of a package of laws intended to facilitate electronic commerce by setting up uniform rules.
  • To ensure laws are consistent with Pan-European laws. Most countries in Central and Eastern Europe are adopting new laws based on the Council of Europe Convention and the European Union Data Protection Directive. Many of these countries hope to join the European Union in the near future. Countries in other regions, such as Canada, are adopting new laws to ensure that trade will not be affected by the requirements of the EU Directive.

Continuing Problems:

Even with the adoption of legal and other protections, violations of privacy remain a concern. In many countries, laws have not kept up with the technology, leaving significant gaps in protections. In other countries, law enforcement and intelligence agencies have been given significant exemptions. Finally, in the absence of adequate oversight and enforcement, the mere presence of a law may not provide adequate protection. 

There are widespread violations of laws relating to surveillance of communications, even in the most democratic of countries. The U.S. State Department’s annual review of human rights violations finds that over 90 countries engage in illegal monitoring of the communications of political opponents, human rights workers, journalists and labor organizers. In France, a government commission estimated in 1996 that there were over 100,000 wiretaps conducted by private parties, many on behalf of government agencies. In Japan, police were recently fined 2.5 million yen for illegally wiretapping members of the Communist party. Police services, even in countries with strong privacy laws, still maintain extensive files on citizens not accused or even suspected of any crime. There have been investigations into such police files in Sweden and Norway, two countries with among the longest histories of privacy protection. Companies regularly flout the laws, collecting and disseminating personal information. In the United States, even with the long-standing existence of a law on consumer credit information, companies still make extensive use of such information for marketing purposes.

Threats to privacy:

The increasing sophistication of information technology with its capacity to collect, analyze and disseminate information on individuals has introduced a sense of urgency to the demand for legislation. Furthermore, new developments in medical research and care, telecommunications, advanced transportation systems and financial transfers have dramatically increased the level of information generated by each individual. Computers linked together by high speed networks with advanced processing systems can create comprehensive dossiers on any person without the need for a single central computer system. New technologies developed by the defense industry are spreading into law enforcement, civilian agencies, and private companies.

According to opinion polls, concern over privacy violations is now greater than at any time in recent history.  Uniformly, populations throughout the world express fears about encroachment on privacy, prompting an unprecedented number of nations to pass laws which specifically protect the privacy of their citizens. Human rights groups are concerned that much of this technology is being exported to developing countries which lack adequate protections. Currently, there are few barriers to the trade in surveillance technologies. 

It is now common wisdom that the power, capacity and speed of information technology are accelerating rapidly. The extent of privacy invasion — or certainly the potential to invade privacy — increases correspondingly.

Beyond these obvious aspects of capacity and cost, there are a number of important trends that contribute to privacy invasion:

-1. Globalization removes geographical limitations to the flow of data. The development of the Internet is perhaps the best-known example of a global technology.

-2. Convergence is leading to the elimination of technological barriers between systems. Modern information systems are increasingly interoperable with other systems, and can mutually exchange and process different forms of data.

-3. Multi-media fuses many forms of transmission and expression of data and images so that information gathered in a certain form can be easily translated into other forms. 

The macro-trends outlined above have definite effect on surveillance in various nations.

_______

_______

Various kinds of privacy:

Different authors have classified privacy differently. Here are some examples.

_

It has been suggested that privacy can be divided into a number of separate, but related, concepts:

-1. Information privacy, which involves the establishment of rules governing the collection and handling of personal data such as credit information, and medical and government records. It is also known as ‘data protection’;

-2. Bodily privacy, which concerns the protection of people’s physical selves against invasive procedures such as genetic tests, drug testing and cavity searches;

-3. Privacy of communications, which covers the security and privacy of mail, telephones, e-mail and other forms of communication; and

-4. Territorial privacy, which concerns the setting of limits on intrusion into the domestic and other environments such as the workplace or public space. This includes searches, video surveillance and ID checks.

_

Clarke’s four categories of privacy: 

Roger Clarke’s human-centred approach to defining categories of privacy does assist in outlining what specific elements of privacy are important and must be protected.

Clarke’s four categories of privacy, outlined in 1997, include privacy of the person, privacy of personal data, privacy of personal behaviour and privacy of personal communication. 

Privacy of the person has also been referred to as “bodily privacy” and is specifically related to the integrity of a person’s body. It would include protections against physical intrusions, including torture, medical treatment, the “compulsory provision of samples of body fluids and body tissue” and imperatives to submit to biometric measurement. For Clarke, privacy of the person is threaded through many medical and surveillance technologies and practices. Privacy of personal behaviour includes protection against the disclosure of sensitive personal matters such as religious practices, sexual practices or political activities. Clarke notes that there is a spatial element within privacy of personal behaviour, whereby people have a right to private space to carry out particular activities, as well as a right to be free from systematic monitoring in public space. Privacy of personal communication refers to a restriction on the monitoring of telephone, e-mail and virtual communications, as well as of face-to-face communications through hidden microphones. Finally, privacy of personal data refers to data protection issues. Clarke adds that, with the close coupling that has occurred between computing and communications, particularly since the 1980s, the last two aspects have become closely linked and are commonly referred to as “information privacy”.

Four to Seven categories of privacy:

Despite the utility of these four categories, recent technological advances have meant that they are no longer adequate to capture the range of potential privacy issues which must be addressed. Specifically, technologies such as whole body imaging scanners, RFID-enabled travel documents, unmanned aerial vehicles, second-generation DNA sequencing technologies, human enhancement technologies and second-generation biometrics raise additional privacy issues, which necessitate an expansion of Clarke’s four categories. These new and emerging technologies argue for an expansion to seven different types of privacy, including privacy of the person, privacy of behaviour and action, privacy of personal communication, privacy of data and image, privacy of thoughts and feelings, privacy of location and space and privacy of association (including group privacy). 

_______

_______

Another way to classify privacy is privacy of space, body, information and choice.

Privacy of space: 

Privacy of space protects private spaces or zones where one has a reasonable expectation of privacy from outside interference or intrusion.

The right to privacy initially focused on protecting “private” spaces, such as the home, from state interference. This drew from the belief that “a person’s home is their castle”. However, this idea of privacy is not limited simply to a person’s home. Privacy rests in ‘persons’ and not in ‘places’. Therefore, even outside one’s home, other spaces can also acquire the character of private spaces, and even public spaces can afford a degree of privacy.

_

Privacy of body:

Privacy of body protects bodily integrity, and acts against physical and psychological intrusions into our bodies and bodily spaces.

Privacy of body is fundamental to our understanding of our bodies as private. The understanding of where bodily privacy extends is contextual, and our boundaries may start at our skin, at the point where we can feel breath, or even at the other side of the room. It is the point at which we feel touched and physically affected by another person. Similarly, intrusions into our psychological space without consent and control violate our bodily privacy.

_

Privacy of information: 

Privacy of information is our right to meaningful control over the sharing & use of information about ourselves without coercion or compulsion.

In the age of Big Data, the collection and analysis of personal data has tremendous economic value. However, these economic interests should not be pursued at the expense of personal privacy. Similarly, modern technology provides excessive opportunities to governments to monitor and surveil the lives of citizens. Informed Consent and meaningful choice while sharing information about ourselves is central to the idea of informational privacy.

_

Privacy of choice:

Privacy of choice means our right to make choices about our own lives, including what we eat and wear, and our gender identities.

The understanding of privacy has expanded to protect intimate relationships, such as family and marriage, and to include autonomous decision making. Spatial privacy presumes access to private spaces, but this may not always be the case due to economic inability or social mores. Understanding privacy as choice allows greater protection to private acts of individuals, even if they are not protected by spatial privacy. The right to privacy gives us a choice of preferences on various facets of life.

_______

_______

While the right to privacy is now well-established in international law, understandings of privacy have continued to differ significantly across cultures, societies, ethnic traditions and time.  Even the United Nations Special Rapporteur on the right to privacy has remarked on the lack of a universally agreed definition, despite the fact that “the concept of privacy is known in all human societies and cultures at all stages of development and throughout all of the known history of humankind”.  Privacy is at the heart of the most basic understandings of human dignity, and the absence of an agreed definition does not prevent the development of broad understandings of privacy and its importance in a democratic society. As set out below, current conceptions of the right to privacy draw together four related aspects: decisional privacy, informational privacy, physical privacy and dispositional privacy.

-1. Decisional privacy: A comprehensive view of privacy looks to individuals’ ability to make autonomous life choices without outside interference or intimidation, including the social, political and technological conditions that make this ‘decisional privacy’ possible.  This makes privacy a social value as well as a public good, and offers protection against outside intrusion into peoples’ homes, communications, opinions, beliefs and identities. 

-2. Informational privacy: Privacy has more recently evolved to encapsulate a right to ‘informational privacy’, also known as data protection. The right to informational privacy is increasingly central to modern policy and legal processes, and in practice means that individuals should be able to control who possesses data about them and what decisions are made on the basis of that data.

-3. Physical privacy: A third and more straightforward conception of privacy is that of ‘physical privacy’, the right of an individual to a private space and to bodily integrity. Among other things, the right to physical privacy has underpinned jurisprudence supporting autonomy with respect to sexual and reproductive choices.

-4. Dispositional privacy: A restriction on attempts to know an individual’s state of mind.

______

Dimensions of privacy:

People often think of privacy as some kind of right. Unfortunately, the concept of a ‘right’ is a problematical way to start, because a right seems to be some kind of absolute standard. What’s worse, it’s very easy to get confused between legal rights, on the one hand, and natural or moral rights, on the other. It turns out to be much more useful to think about privacy as one kind of thing (among many kinds of things) that people like to have lots of.  Privacy is the interest that individuals have in sustaining ‘personal space’, free from interference by other people and organisations.

Drilling down to a deeper level, privacy turns out not to be a single interest, but rather has multiple dimensions:

  • privacy of the person, sometimes referred to as ‘bodily privacy’. This is concerned with the integrity of the individual’s body. Issues include compulsory immunisation, blood transfusion without consent, compulsory provision of samples of body fluids and body tissue, and compulsory sterilisation;
  • privacy of personal behaviour. This relates to all aspects of behaviour, but especially to sensitive matters, such as sexual preferences and habits, political activities and religious practices, both in private and in public places. It includes what is sometimes referred to as ‘media privacy’;
  • privacy of personal communications. Individuals claim an interest in being able to communicate among themselves, using various media, without routine monitoring of their communications by other persons or organisations. This includes what is sometimes referred to as ‘interception privacy’; and
  • privacy of personal data. Individuals claim that data about themselves should not be automatically available to other individuals and organisations, and that, even where data is possessed by another party, the individual must be able to exercise a substantial degree of control over that data and its use. This is sometimes referred to as ‘data privacy’ and ‘information privacy’.

With the close coupling that has occurred between computing and communications, particularly since the 1980s, the last two aspects have become closely linked. It is useful to use the term ‘information privacy’ to refer to the combination of communications privacy and data privacy.

During the period since about 2005, a further disturbing trend has emerged, which gives rise to a fifth dimension that wasn’t apparent earlier:

  • privacy of personal experience. Individuals gather experience through buying books and newspapers and reading the text and images in them, buying or renting recorded video, conducting conversations with other individuals both in person and on the telephone, meeting people in small groups, and attending live and cinema events with larger numbers of people. Until very recently, all of these were ephemeral, none of them generated records, and hence each individual’s small-scale experiences, and their consolidated large-scale experience, were not visible to others. During the first decade of the 21st century, reading and viewing activities migrated to screens, are performed under the control of corporations, and are recorded; most conversations have become ‘stored electronic communications’, each event is recorded, and both ‘call records’ and content may be retained; many individuals’ locations are tracked, and correlations are performed to find out who is co-located with whom and how often; and event tickets are paid for using identified payment instruments. This massive consolidation of individuals’ personal experience is available for exploitation, and is exploited.
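
To make this kind of consolidation concrete, here is a minimal sketch, using entirely made-up data and hypothetical names, of the co-location correlation mentioned above: given timestamped location observations, it counts how often pairs of individuals turn up at the same place at the same time.

from collections import Counter
from itertools import combinations

# (person, place, hour) observations; purely hypothetical data
observations = [
    ("alice", "cafe", 9), ("bob", "cafe", 9),
    ("alice", "gym", 18), ("carol", "gym", 18),
    ("alice", "cafe", 9),  # duplicate sighting in the same hour
]

# group the people seen at the same (place, hour)
seen = {}
for person, place, hour in observations:
    seen.setdefault((place, hour), set()).add(person)

# count how often each pair of people was co-located
co_location = Counter()
for people in seen.values():
    for pair in combinations(sorted(people), 2):
        co_location[pair] += 1

print(co_location.most_common())
# e.g. [(('alice', 'bob'), 1), (('alice', 'carol'), 1)]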

With the patterns becoming more complex, a list may no longer be adequate, and a diagram may help in understanding the dimensions of privacy:

________

________

Christopher Allen classifies Four Kinds of Privacy:  

-1. The First Kind: Defensive Privacy

The first type of privacy is defensive privacy, which protects against transient financial loss resulting from information collection or theft. This is the territory of phishers, conmen, blackmailers, identity thieves, and organized crime. It could also be the purview of governments that seize assets from people or businesses. An important characteristic of defensive privacy is that any loss is ultimately transitory. A phisher might temporarily access a victim’s bank accounts, or an identity thief might cause problems by taking out new credit in the victim’s name, or a government might confiscate a victim’s assets. However, once a victim’s finances have been lost, and they’ve spent some time clearing the problem up, they can then get back on their feet. The losses also might be recoverable — or if not, at least insurable.

This type of privacy is nonetheless important because assaults against it are very common and the losses can still be very damaging. A Bureau of Justice Statistics report says that 16.6 million Americans were affected in 2012 by identity theft alone, resulting in $24.7 billion being stolen, or about $1,500 per victim. Though most victims were able to clear up the problems in less than a day, 10% had to spend more than a month.

-2. The Second Kind: Human Rights Privacy

The second type of privacy is human rights privacy, which protects against existential threats resulting from information collection or theft. This is the territory of stalkers and other felonious criminals, as well as authoritarian governments and other persons intent on harming someone personally for his or her beliefs or political views. An important characteristic of human rights privacy is that violations usually result in more long-lasting losses than is the case with defensive privacy. Most obviously, authoritarian governments and hard-line theocracies might imprison or kill victims, while criminals might murder them. However, political groups could also ostracize or blacklist a victim.

Though governments are the biggest actors on the stage of human rights breaches, individuals can also attack this sort of privacy — with cyberbullies being among the prime culprits. Though a bully’s harassment may only involve words, the attackers frequently release personal information about the person they’re harassing, which can cause the bullying to snowball. For example, Jessica Logan, Hope Sitwell, and Amanda Todd were bullied after their nude pictures were broadcast, while Tyler Clementi was bullied after a hall-mate streamed video of Clementi kissing another young man. In each of these cases the cyberbullying ended in suicide, showing the existential dangers of these privacy breaches.

-3. The Third Kind: Personal Privacy

The third type of privacy is personal privacy, which protects persons against observation and intrusion; it’s what Judge Thomas Cooley called “the right to be let alone”, as cited by future Supreme Court Justice Louis Brandeis in “The Right to Privacy”, which he co-authored for the Harvard Law Review of December 15, 1890. Brandeis’ championing of this sort of privacy would result in a new, uniquely American right that has at times been attributed to the First Amendment (giving freedom of speech within one’s house), the Fourth Amendment (protecting one’s house from search & seizure by the government), and the Fifth Amendment (protecting one’s private house from public use). This right can also be found in state Constitutions, such as the Constitution of California, which promises “safety, happiness, and privacy”.

When personal privacy is breached, we can lose our right to be ourselves. Without Brandeis’ protection, we could easily come under Panoptic observation, where we could be forced to act unlike ourselves even in our personal lives. Unfortunately, this isn’t just a theory: a recent PEN America survey shows that 1 in 6 authors already self-censor due to NSA surveillance. Worse, the loss is damaging in itself: another report shows that unselfconscious time is restorative and that low self-esteem, depression, and anxiety can result from its lack.

Though Brandeis almost single-handedly created personal privacy, it didn’t come easily. The Supreme Court refused to protect it in a 1928 wire-tapping case called Olmstead v. United States; in response, Brandeis wrote a famous dissenting opinion that brought his ideas about privacy into the official record of the Court. Despite this initial loss, personal privacy gained advocates in the Supreme Court over the years, who often referred to Brandeis’ 1890 article. By the 1960s, Brandeis’ ideas were in the mainstream and in the 1967 Supreme Court case Katz v. United States, Olmstead was finally overturned. Though the Court’s opinion said that personal privacy was “left largely to the law of the individual States”, it had clearly become a proper expectation.

The ideas that Brandeis championed about personal privacy are shockingly modern. They focused on the Victorian equivalent of the paparazzi, as Brandeis made clear when he said: “Instantaneous photographs and newspaper enterprise have invaded the sacred precincts of private and domestic life.” He also carefully threaded the interface between public and private, saying, “The right to privacy does not prohibit any publication of matter which is of public or general interest.”

Today, personal privacy is the special concern of the more Libertarian-oriented founders of the Internet, such as BitTorrent founder Bram Cohen, who demanded the basic human rights “to be free of intruders” and “to privacy”. Personal privacy is more of an issue than ever for celebrities and other public figures, but attacks upon it also touch the personal life of average citizens. It’s the focus of the “do not call” registry and other telemarketing laws and of ordinances outlawing soliciting and trespass. It’s the right at the heart of doing what we please in our own homes — whether it be eating dinner in peace, discussing controversial politics & religion with our close friends, or playing sexual games with our partners.

Though personal privacy has grown in power in America since the 1960s, it’s still under constant attack from the media, telemarketing interests, and the government. Meanwhile, it’s not an absolute across the globe: some cultures, such as those in China and parts of Europe, actively preach against it — advocating that community and sharing trump personal privacy.

-4. The Fourth Kind: Contextual Privacy

The fourth type of privacy is contextual privacy, which protects persons against unwanted intimacy. Failure to defend contextual privacy can result in the loss of relationships with others. No one presents the same persona in the office as they do when spending time with their kids or even when meeting with other parents. They speak different languages to each of these different “tribes”, and their connection to each tribe could be at risk if they misspoke by putting on the wrong persona or speaking the wrong language at the wrong time. Worse, the harm from this sort of privacy breach may also be increasing in the age of data mining, as bad actors can increasingly put together information from different contexts and twist it into a “complete” picture that might be simultaneously damning and completely false.

Though it’s easy to understand what can be lost with a breach of contextual privacy, the concept can still be confusing because contextual privacy overlaps with other sorts of privacy; it could involve the theft of information or an unwelcome intrusion, but the result is different. Where theft of information might make you feel disadvantaged (if a conman stole your financial information) or endangered (if the government discovered you were whistle blowing), and where an intrusion might make you feel annoyed (if a telemarketer called during dinner), a violation of contextual privacy instead makes you feel emotionally uncomfortable and exposed — or as boyd said, “vulnerable”.

You probably won’t find any case law about contextual privacy, because it’s a fairly new concept and because there’s less obvious harm. However social networks from Facebook and LinkedIn to Twitter and LiveJournal are all facing contextual privacy problems as they each try to become the home for all sorts of social interaction. This causes people — in particular women and members of the LGBT communities — to try and keep multiple profiles, only to find “real name” policies working against them.

Meanwhile, some social networks make things worse by creating artificial social pressures to reveal information, as occurs when someone tags you in a Facebook picture or in a status update. To date, Google+ is one of the few networks to attempt a solution by creating “circles”, which can be used to precisely specify who in your social network should get each piece of information that you share. However, it’s unclear how many people use this feature. A related feature on Facebook called “Lists” is rarely used.
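
As a rough illustration of the idea behind circles-style audience controls, the following is a toy Python sketch; it is not Google+’s or Facebook’s actual implementation, and every name and data structure in it is invented for the example. Each post is shared with one named circle, and only members of that circle can see it.

# Toy model of audience "circles": a post is visible only to members
# of the circle it was shared with. All names are illustrative.
circles = {
    "family":    {"mum", "dad"},
    "coworkers": {"priya", "tom"},
}

posts = [
    {"author": "alex", "text": "Beach photos!", "audience": "family"},
    {"author": "alex", "text": "Quarterly numbers look rough.", "audience": "coworkers"},
]

def visible_posts(viewer: str):
    """Return only the posts whose audience circle contains the viewer."""
    return [p for p in posts if viewer in circles.get(p["audience"], set())]

print([p["text"] for p in visible_posts("mum")])    # ['Beach photos!']
print([p["text"] for p in visible_posts("priya")])  # ['Quarterly numbers look rough.']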

If a lack of personal privacy causes you to “not be yourself”, a loss of contextual privacy allows others to “not see you as yourself”. You risk being perceived as “other” when your actions are seen out of context.

_______

_______

Motivations for privacy breach:

Historically, control of communications and of the flow of information has been essential for any entity that aims to gain a degree of control over society. There are multiple entities with such interests: governments, companies, independent individuals, etc. Most of the research available on the topic claims that the main originators of the threats against privacy and anonymity are governmental institutions and big corporations. The motivations behind these threats are varied. Nevertheless, they can be classified under four categories: social, political, technological and economic. Despite the relation between them, the four categories have different backgrounds.

-1. Social & political motivations:

The core of human interaction is communication in any form. The Internet has deeply changed how social interaction is conducted these days. The popularity and facilities that the Internet offers make it a fundamental asset for society. Currently, it is estimated that there are more than 4.8 billion users of the Internet in the world.

Any entity that gains any level of control over this massive exchange of information implicitly obtains two main advantages: the capability to observe social interaction without being noticed (and hence to act with a degree of foresight), and the possibility to influence it. Privacy and anonymity are the core values that stand against these actions. Nevertheless, several authoritarian regimes have implemented diverse mechanisms to dismantle both privacy and anonymity.

Many of the authors highlight that these threats are most often ideologically motivated; thus, in countries in which freedom of speech or political freedom is limited, privacy and anonymity are considered enemies of the state. In addition, national defense and social “morality” are also some of the main arguments used to justify actions against privacy and anonymity.

According to the OpenNet Initiative, there are at least 61 countries that have implemented some kind of mechanism that negatively affects privacy and anonymity. Examples of these countries are well-known worldwide: China, Iran, North Korea and Syria, among others. In addition, recent media revelations have shown that several mechanisms that are negatively affecting privacy and anonymity have been implemented in regions such as the U.S. and Europe.

-2. Technological issues:

There are some cases in which threats against privacy and anonymity arise from the lack of proper technology, and sometimes these threats occur unintentionally. One example is software bugs that go undiscovered and somehow reveal information about users’ identities or data. Another is misconfigured Internet services that do not use proper encryption and identity mechanisms when interacting with their users. Certain techniques used by ISPs can also lead to situations in which users’ data and identities are compromised, even if the ISPs’ intentions are focused on bandwidth optimization. Finally, technologically uneducated users can be a threat to themselves by voluntarily leaking their identity and data without being aware of the repercussions (e.g. through the use of social networks, forums, chats, etc.).

-3. Economical motivations:

The impact and penetration of the Internet in modern society affects almost every aspect of it, with a primary use being commercial and industrial purposes. There are multiple economic interests that relate directly to the privacy and anonymity of users. Several companies with an Internet presence take advantage of users’ identities in order to build more successful products or to target a more receptive audience. Lately, the commercialization of users’ data has proven to be a profitable business for those entities that have the capability of collecting more information about users’ behavior. In addition, due to the popularity of the Internet for banking purposes, the privacy and anonymity of users are a common target for malicious attackers seeking to gain control over users’ financial assets.

______

______

Privacy Risk Assessments: DPIAs and PIAs:

Once an organization has an initial understanding of its data collection, usage and sharing, the next step is to conduct Privacy Risk Assessments to understand the current and future privacy risks that those practices pose to individual consumers and to the organization. Organizations can engage in any number of individual or combined reviews in order to evaluate the implications of their business processes for privacy. There are many names for these Privacy Risk or Impact Assessments, but they are frequently referred to as either a Data Protection Impact Assessment (DPIA) or a Privacy Impact Assessment (PIA).

DPIA is the name used for impact assessments in the European Union’s General Data Protection Regulation (GDPR). The DPIA requirement is covered in GDPR Article 35 and is required where processing is likely to result in a high risk to the rights and freedoms of natural persons. This includes cases of automated processing, large-scale processing of special categories of data, and systematic, large-scale monitoring of a public area. The law sets forth the minimum of what must be contained in a DPIA. PIA is the term used by the Federal Trade Commission and other government agencies for an analysis of how personally identifiable information is collected, used, shared and maintained by the U.S. Government. It arose from requirements in the E-Government Act of 2002, enacted by Congress to improve the management of Federal electronic government services and processes. Although the law applies to Federal government agencies, the name is also used by industry for its own similar processes in the United States.

The purpose of a Privacy Risk Assessment is to provide an early warning system to detect privacy problems, enhance the information available internally to facilitate informed decision-making, avoid costly or embarrassing mistakes in privacy compliance, and provide evidence that an organization is attempting to minimize its privacy risks and problems.
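
As a purely illustrative sketch (not a legal tool or an official checklist), the Python snippet below shows how an organization might screen a processing activity against the three GDPR Article 35 triggers mentioned above before deciding whether a full DPIA is needed; all class and field names here are assumptions made for the example.

from dataclasses import dataclass

@dataclass
class ProcessingActivity:
    name: str
    automated_decision_making: bool   # automated processing with significant effects
    large_scale_special_data: bool    # large-scale processing of special categories of data
    public_area_monitoring: bool      # systematic, large-scale monitoring of a public area

def dpia_recommended(activity: ProcessingActivity) -> bool:
    """Return True if any high-risk trigger suggests conducting a DPIA."""
    return any([
        activity.automated_decision_making,
        activity.large_scale_special_data,
        activity.public_area_monitoring,
    ])

cctv = ProcessingActivity("city-centre CCTV", False, False, True)
print(dpia_recommended(cctv))  # True -> conduct a DPIA before processing begins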

_______

_______

Some privacy statistics:  

Pew Research Center, May 2013

Teens, Social Media, and Privacy:

Youth are sharing more personal information on their profiles than in the past. They choose private settings for Facebook, but share with large networks of friends. Most teen social media users say they aren’t very concerned about third-party access to their data. 

This is a report based on a survey of 802 teens that examines teens’ privacy management on social media sites:

  • The typical (median) teen Facebook user has 300 friends, while the typical teen Twitter user has 79 followers.
  • Focus group discussions with teens show that they have waning enthusiasm for Facebook, disliking the increasing adult presence, people sharing excessively, and stressful “drama,” but they keep using it because participation is an important part of overall teenage socializing.
  • 60% of teen Facebook users keep their profiles private, and most report high levels of confidence in their ability to manage their settings.
  • Teens take other steps to shape their reputation, manage their networks, and mask information they don’t want others to know; 74% of teen social media users have deleted people from their network or friends list.
  • Teen social media users do not express a high level of concern about third-party access to their data; just 9% say they are “very” concerned.

_

July 2015 data collected for National Telecommunications and Information Administration (NTIA) shows that 45% of online households reported that privacy or security concerns stopped them from:

  • Conducting financial transactions;
  • Buying goods or services;
  • Posting on social networks; or
  • Expressing opinions on controversial or political issues via the Internet.

_

Pew Research Center survey, November 2019:

-1. A majority of Americans believe their online and offline activities are being tracked and monitored by companies and the government with some regularity. It is such a common condition of modern life that roughly six-in-ten U.S. adults say they do not think it is possible to go through daily life without having data collected about them by companies or the government.

-2. Data-driven products and services are often marketed with the potential to save users time and money or even lead to better health and well-being. Still, large shares of U.S. adults are not convinced they benefit from this system of widespread data gathering. Some 81% of the public say that the potential risks they face because of data collection by companies outweigh the benefits, and 66% say the same about government data collection. At the same time, a majority of Americans report being concerned about the way their data is being used by companies (79%) or the government (64%). Most also feel they have little or no control over how these entities use their personal information.

-3. Americans’ concerns about digital privacy extend to those who collect, store and use their personal information. Additionally, majorities of the public are not confident that corporations are good stewards of the data they collect. For example, 79% of Americans say they are not too or not at all confident that companies will admit mistakes and take responsibility if they misuse or compromise personal information, and 69% report having this same lack of confidence that firms will use their personal information in ways they will be comfortable with.

-4. There is also a collective sentiment that data security is more elusive today than in the past. When asked whether they think their personal data is less secure, more secure or about the same as it was five years ago, 70% of adults say their personal data is less secure. Only 6% report that they believe their data is more secure today than it was in the past.

-5. Although the public expresses worry about various aspects of their digital privacy, many Americans acknowledge that they are not always diligent about paying attention to the privacy policies and terms of service they regularly encounter. Fully 97% of Americans say they are ever asked to approve privacy policies, yet only about one-in-five adults overall say they always (9%) or often (13%) read a company’s privacy policy before agreeing to it. Some 38% of all adults maintain they sometimes read such policies, but 36% say they never read a company’s privacy policy before agreeing to it.

-6. Roughly three-in-ten Americans (28%) say they have suffered at least one of three kinds of major identity theft problems in the previous 12 months at the time of the survey: 21% have had someone put fraudulent charges on their credit or debit card; 8% have had someone take over their social media or email accounts without their permission; and 6% have had someone try to open a credit line or get a loan using their name.

These findings point to an overall wariness about the state of privacy these days, but there are some circumstances where the public sees value in this type of data-driven environment. For example, pluralities of adults say it is acceptable for poorly performing schools to share data about their students with a non-profit group seeking to help improve educational outcomes or for the government to collect data about all Americans to assess who might be a potential terrorist.

These findings come from a survey of 4,272 U.S. adults conducted on Pew Research Center’s American Trends Panel between June 3-17, 2019.

_

Facebook and TikTok users have Privacy Concerns, October 2020:

The online world is in the midst of a privacy crisis. Companies track users in order to serve them with personalized ads, but people are sick of this activity. Last year, one in five people had deleted an app over privacy issues. Now, it’s become a widespread belief that companies are spying on their private conversations.

According to a recent American survey conducted by WhistleOut, 85 percent of respondents believe that at least one tech company is currently spying on them. At the head of the list are Facebook (68 percent) and TikTok (53 percent), which many believe to be breaking privacy laws. TikTok especially has seen a recent tide of hate as a Chinese company in possession of American user data. Instagram had 43 percent of respondents name the app as a concern, so its association with Facebook isn’t helping make users feel safer. Other big tech companies including Google (45 percent) and Amazon (38 percent) were among those at the top of the list. Meanwhile, Apple’s efforts to secure user data seem to have earned it some goodwill among respondents, as it landed much further down the list.

Many believe it goes even further than simple ad tracking, though. An overwhelming 80 percent of respondents believe that these companies are actually listening to their phone conversations. Again, Facebook (55 percent) and TikTok (40 percent) ranked more highly as concerns than companies with embedded voice assistants, such as Google, Amazon, and Apple. This speaks more to how untrustworthy people find Facebook and TikTok than to the trust they have in Alexa, Siri, and Google Assistant.

Despite respondents believing these companies are spying on them, 57 percent aren’t even sure what the companies are doing with the information they are collecting. While only 24 percent believe these companies spy in order to tailor their ads and content to users, two thirds of participants claim that they have seen or heard an advertisement or a specific product on a big-tech company’s app or website after merely talking about that product but never searching for that product online.

When asked about what they plan to do to protect their privacy from these apps, 40 percent of participants indicated that they had either deleted or stopped using TikTok. Another 18 percent said they had abandoned the Facebook app due to privacy concerns.

While TikTok continues to combat an impending ban from the US government, 57 percent of respondents felt that at least one major tech company should be banned for violating user privacy. When asked about specific apps, 37 percent of participants indicated that TikTok should be banned, while 20 percent said Facebook, and another 13 percent said Instagram.

People want Facebook to stop tracking their internet activity and prevent Google from tracking their location.

_______

_______

Privacy vs secrecy vs confidentiality vs security:

_

Privacy is sometimes defined as an option to have secrecy. Richard Posner said that privacy is the right of people to “conceal information about themselves that others might use to their disadvantage”. In various legal contexts, when privacy is described as secrecy, the conclusion drawn is that if privacy is secrecy, then rights to privacy do not apply to any information which has already been publicly disclosed. When privacy-as-secrecy is discussed, it is usually imagined to be a selective kind of secrecy in which individuals keep some information secret and private while they choose to make other information public and not private.

_

Privacy is neither secrecy nor security:

Secrecy is keeping hidden something that society does not allow one to keep private, whereas privacy is keeping to oneself something that society allows one to keep private. While secrecy damages one’s reputation, privacy is necessary for it. Privacy is not security, but it is closely related to it, since security is threatened whenever privacy is lost. Security focuses more on protecting data from malicious attacks and on preventing the exploitation of stolen data for profit, as done by some companies operating in technology-driven businesses. While security is necessary for protecting data, it is not sufficient for addressing privacy.

_

The difference between privacy and secrecy:

In an excellent thread about the topic on reddit, one user clearly illustrates the difference between privacy and secrecy with this analogy:

“When you go to the bathroom you close the door even if everybody knows what you are doing.” What you do in the bathroom is private. It’s not something you’d like other people to sit and watch. But it’s usually not a secret. Everyone has a pretty good idea of what goes on when you go into the bathroom and close the door, even if they can’t see you do it. However, if you went in to the bathroom and, say, did some illicit drugs and your friends wouldn’t approve, that would probably be a secret.

Privacy and secrecy are two things that we all have a right to. But “secrets” carry an extra weight of shame, guilt, or “something to hide.” Privacy is simply the right to not have people observing you while you do things you’d prefer to do alone. Secrets, on the other hand, are often associated with doing something bad — which isn’t to say that every secret is a bad one. But it is how we commonly think of them.

_

In the context of personal information, concepts of privacy are closely intertwined with those of confidentiality and security. However, although privacy is often used interchangeably with the terms “confidentiality” and “security,” they have distinct meanings. Privacy addresses the question of who has access to personal information and under what conditions. Privacy is concerned with the collection, storage, and use of personal information, and examines whether data can be collected in the first place, as well as the justifications, if any, under which data collected for one purpose can be used for another (secondary) purpose. An important issue in privacy analysis is whether the individual has authorized particular uses of his or her personal information (Westin, 1967).

Confidentiality safeguards information that is gathered in the context of an intimate relationship. It addresses the issue of how to keep information exchanged in that relationship from being disclosed to third parties (Westin, 1976). Confidentiality, for example, prevents physicians from disclosing information shared with them by a patient in the course of a physician–patient relationship. Unauthorized or inadvertent disclosures of data gained as part of an intimate relationship are breaches of confidentiality (Gostin and Hodge, 2002; NBAC, 2001).

Security can be defined as “the procedural and technical measures required (a) to prevent unauthorized access, modification, use, and dissemination of data stored or processed in a computer system, (b) to prevent any deliberate denial of service, and (c) to protect the system in its entirety from physical harm” (Turn and Ware, 1976). Security helps keep health records safe from unauthorized use. When someone hacks into a computer system, there is a breach of security (and also potentially, a breach of confidentiality). No security measure, however, can prevent invasion of privacy by those who have authority to access the record (Gostin, 1995).

_______

Privacy vs. Confidentiality:

Confidentiality refers to personal information shared with an attorney, physician, therapist, or other individuals that generally cannot be divulged to third parties without the express consent of the client. On the other hand, privacy refers to the freedom from intrusion into one’s personal matters, and personal information. We often use the terms “confidentiality” and “privacy” interchangeably in our everyday lives. However, they mean distinctly different things from a legal standpoint. While confidentiality is an ethical duty, privacy is a right rooted in the common law. Understanding the difference between confidentiality and privacy can spare you a lot of confusion when signing contracts, establishing a client-attorney relationship, and generally knowing your rights in a given situation.

_

Privacy and confidentiality are related but different concepts. Privacy refers to an individual’s interest in controlling access to himself or herself. There are two basic types of privacy: informational privacy and physical privacy. Informational privacy refers to the interest in controlling access to private information about one’s self, such as data pertaining to medical conditions, sexual practices, income, or social security number. Physical privacy refers to the interest in controlling access to one’s body, biological specimens, or personal space. Observing a person undressed without permission would be an invasion of that person’s physical privacy but not necessarily his or her informational privacy. Illegally accessing a person’s medical records would be an invasion of informational privacy but not necessarily physical privacy. Confidentiality refers to measures that are taken to protect an individual’s informational privacy, such as limiting access to medical or research records, data encryption, and secure data storage. Confidentiality is concerned with informational privacy, not physical privacy.

_

Privacy concerns people, whereas confidentiality concerns data.

Privacy Applies to the Person:

  • The way potential participants are identified and contacted
  • The setting in which potential participants will interact with the research team and who is present during research procedures
  • The methods used to collect information about participants
  • The type of information being collected
  • Access to the minimum amount of information necessary to conduct the research

Confidentiality Applies to the Data:

  • An extension of privacy
  • Pertains to identifiable data
  • An agreement about maintenance and who has access to identifiable data
  • What procedures will be put in place to ensure that only authorized individuals will have access to the information, and
  • Limitations (if any) to these confidentiality procedures
  • With regard to HIPAA, protection of patients from inappropriate disclosures of Protected Health Information (PHI)

_

Key Differences Between Privacy and Confidentiality:

The following are the major differences between privacy and confidentiality:

-1. Privacy is a situation when a person is free from public interference. Confidentiality is a situation when information is kept secret from the reach of any other person.

-2. Privacy talks about a person, but Confidentiality is about information.

-3. Privacy restricts the public from accessing the personal details of a person, whereas Confidentiality protects the information from the reach of unauthorised persons.

-4. In privacy, everyone is disallowed from interfering in the personal matters of a person. Conversely, in confidentiality some specified and trustworthy people are allowed to have access to the information.

-5. Privacy is voluntary; it is the choice of a person. In contrast, confidentiality is compulsory if the relationship between the parties is fiduciary.

-6. Privacy is a right. However, Confidentiality is an agreement.

______

Privacy vs security:

To properly protect data and comply with data protection laws, you need both Data Privacy and Data Security. And even though these two terms can look similar, their distinctions are clearer once you start to dissect them.

Data Security refers to the means of protection that an organization puts in place to prevent any third party from gaining unauthorized access to digital data. It focuses on the protection of data from malicious attacks and prevents the exploitation of stolen data (data breach or cyber-attack). Data Security protects data from compromise by external attackers and malicious insiders. It includes access control, encryption, network security, etc.

Data Privacy governs how data is collected, shared and used. Data Privacy focuses on the rights of individuals, the purpose of data collection and processing, privacy preferences, and the way organizations govern personal data of data subjects. It focuses on how to collect, process, share, archive, and delete the data in accordance with the law.

Imagine that your company introduces elaborate data security methods, using all the necessary means and available measures to protect data, but has failed to collect that data on a valid lawful basis. No matter how well you secure your data, this would be a violation of data privacy. This example shows us that data security can exist without data privacy, but not the other way around.
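
The point can be illustrated with a small, purely hypothetical Python sketch that keeps the two questions separate: whether a record is technically protected (security) and whether a given use has consent or another lawful basis behind it (privacy). The field names and purposes are invented for the example.

# Illustrative sketch only: a "security" check vs. a "privacy" check.
records = {
    "user-42": {
        "data": "<encrypted blob>",        # security: stored encrypted, access-controlled
        "consented_purposes": {"billing"},  # privacy: purposes the user actually agreed to
    }
}

def is_secured(record: dict) -> bool:
    # crude stand-in for real controls (encryption at rest, access control, etc.)
    return record["data"].startswith("<encrypted")

def is_lawful_use(record: dict, purpose: str) -> bool:
    # privacy question: was the data collected and consented to for this purpose?
    return purpose in record["consented_purposes"]

rec = records["user-42"]
print(is_secured(rec))                     # True  -> data security is in place
print(is_lawful_use(rec, "ad-targeting"))  # False -> using it for ads would breach privacy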

_

Privacy and security are related. Privacy relates to any rights you have to control your personal information and how it’s used. Think about those privacy policies you’re asked to read and agree to when you download new smartphone apps.

Security, on the other hand, refers to how your personal information is protected. Your data — different details about you — may live in a lot of places. That can challenge both your privacy and your security.

Some people regard privacy and security as pretty much the same thing. That’s because the two sometimes overlap in a connected world. But they aren’t the same, and knowing how they differ may help you to protect yourself in an increasingly connected world.

What’s the difference between privacy and security?

Here’s an example.

You might share personal information with your bank when you open a checking account. What happens after that? Here are three possible outcomes, all related to your personal information (not to the money you may have deposited in the checking account).

-1. Your privacy and security are maintained. The bank uses your information to open your account and provide you with products and services. They go on to protect that data.

-2. Your privacy is compromised, and your security is maintained. The bank sells some of your information to a marketer. Note: You may have agreed to this in the bank’s privacy disclosure. The result? Your personal information is in more hands than you may have wanted.

-3. Both your privacy and security are compromised. The bank gets hit by a data breach. Cybercriminals penetrate a bank database, a security breach. Your information is exposed and could be sold on the dark web. Your privacy is gone. You could become the victim of cyber fraud and identity theft.

Security protects your data while privacy protects your identity. However, one rarely exists without the other. A company might have strict privacy regulations, but if they don’t have robust security in place, your data can be easily stolen by hackers. If they have strong security, but lenient privacy policies, then your data might be guarded against hackers, but that doesn’t guarantee that your data won’t be shared with third parties or abused by the company itself.

_

Why you need data privacy and data security:

Many companies probably already have some of your personal information – your name, address, national insurance number, date of birth, and facial image. All of this is valuable to companies, governments, third parties, and hackers. Not taking privacy and security seriously could lead to someone getting hold of your data without your permission and using it to:

-1. Sell it to third parties or on the dark web;

-2. Hack into your accounts to steal money or more information;

-3. Perform social engineering attacks;

-4. Impersonate you or steal your identity;

-5. Open up accounts in your name or take out loans, and more.

Therefore, it’s important to take care of your data and apply the best privacy and security practices where possible. Fortunately, not everything depends on organizations that collect your data. You can take action too.

_

Security vs. Privacy: Comparison Chart:

While security and privacy are interdependent, security can be achieved without privacy but privacy cannot be achieved without security. Despite recent advances in data privacy legislation and practice, consumers’ privacy is regularly invaded or compromised by companies and governments. That has led some to argue that consumers have already lost the privacy war. While you can have data protection without data privacy, you cannot have data privacy without data protection. Security protects the confidentiality, integrity and availability of information, whereas privacy is more granular, concerning privacy rights with respect to personal information. Privacy prevails when it comes to processing personal data, while security means protecting information assets from unauthorized access. Personal data may refer to any information concerning an individual, such as names, addresses, credentials, financial account information, social security numbers, etc.

_

Information Security and Privacy will never merge completely; they are just too big and too different, but the lines are indeed blurring.

_

Consider a window in your home. It provides various functions for you. It allows you to look outside. It lets sunlight into your home. A window keeps weather outside. You can open a window to let in fresh air. In an emergency, you can use a window as an exit.

A window is also vulnerable. Just as you can use it as an egress, others can use it as an entrance. To protect against unwanted visitors, you can put bars or a grate in front of the window. This still allows you to keep all of the desired functionality the window provides. This is security.

Just as you can look out a window, others can look in. Preventing unwanted eyes from looking in can be addressed by putting a drape, a curtain, or a shade inside of the window. This is privacy. Obscuring the view inside of your home also provides a little security as intruders may not be able to tell when you are home or see the things you own. 

Privacy assures that personal information (and sometimes corporate confidential information as well) is collected, processed (used), protected and destroyed legally and fairly. Just as the drapes on a window may be considered a security safeguard that also protects privacy, an information security program provides the controls to protect personal information. Security controls limit access to personal information and protect against its unauthorized use and acquisition. It is impossible to implement a successful privacy program without the support of a security program. You can have security without privacy, but you can’t have privacy without security.

_

7 countries urge tech companies to prioritise security over privacy:

Seven countries, whose populations represent a fifth of Facebook’s users across the world, recently published an international statement on the impact of end-to-end encryption policies which erode the public’s safety online. A year after Britain’s Home Secretary wrote an open letter to Facebook’s Mark Zuckerberg requesting the company halts its end-to-end encryption plans unless they can address child safety fears, the UK, alongside the United States, Australia, New Zealand, Canada, India and Japan, have called on all tech companies to ensure they do not blind themselves to illegal activity on their platforms, including child abuse images.

The 7 signatories of the international statement say they have made it clear that when end-to-end encryption is applied with no access to content, it severely undermines the ability of companies to take action against illegal activity on their own platforms. It also prevents law enforcement from investigating and prosecuting the most serious crimes being committed on these services, such as online child sexual abuse, grooming and terrorist content. This international intervention calls on tech companies to ensure there is no reduction in user safety when designing their encrypted services; to enable law enforcement access to content where it is necessary and proportionate; and to work with governments to facilitate this. The UK and its international partners say they are clear that they support strong encryption, but not where it is applied in a way that precludes all legal access to content, thereby putting the public at significant risk of harm. “We owe it to all of our citizens, especially our children, to ensure their safety by continuing to unmask sexual predators and terrorists operating online,” the UK’s Home Secretary Priti Patel said.

The debate between privacy and security has been framed incorrectly as a zero-sum game in which we are forced to choose between one value and the other. But protecting privacy isn’t fatal to security measures; it merely involves adequate supervision and regulation.

“We must be willing to give up some privacy if it makes us more secure.”

“If you’ve got nothing to hide, you shouldn’t worry about government surveillance.”

“We shouldn’t second-guess security officials.”

“In national emergencies, rights must be cut back, but they’ll be restored later on.”

We hear these arguments all the time. We hear them in the conversations we have each day with our family, friends, and colleagues. We hear them in the media, which is buzzing with stories about government information gathering, such as the Total Information Awareness program, the airline passenger screening program, and the surveillance of people’s phone calls conducted by the secretive National Security Agency. We hear them made by politicians and security officials. And we hear them made by judges deciding how to balance security measures with people’s constitutional rights.

These arguments are part of the debate between privacy and security. The consequences of the debate are enormous, for both privacy and security are essential interests, and the balance we strike between them affects the very foundations of our freedom and democracy. In contemporary times—especially after the terrorist attacks on September 11, 2001—the balance has shifted toward the security side of the scale. The government has been gathering more information about people and engaging in more surveillance. Technology is giving the government unprecedented tools for watching people and amassing information about them—video surveillance, location tracking, data mining, wiretapping, bugging, thermal sensors, spy satellites, X-ray devices, and more. It’s nearly impossible to live today without generating thousands of records about what we watch, read, buy, and do—and the government has easy access to them.

The privacy-security debate profoundly influences how these government activities are regulated. But there’s a major problem with the debate: Privacy often loses out to security when it shouldn’t. Security interests are readily understood, for life and limb are at stake, while privacy rights remain more abstract and vague. Many people believe they must trade privacy in order to be more secure. And those on the security side of the debate are making powerful arguments to encourage people to accept this trade-off.

These arguments, however, are based on mistaken views about what it means to protect privacy and the costs and benefits of doing so. The debate between privacy and security has been framed incorrectly, with the trade-off between these values understood as an all- or-nothing proposition. But protecting privacy need not be fatal to security measures; it merely demands supervision and regulation. We can’t progress in the debate between privacy and security because the debate itself is flawed.

_

In his recent TechCrunch article, “Personal privacy vs. public security: fight!”, Jon Evans asks us to consider “the constant demands for ‘golden key’ back doors so that governments can access encrypted phones which are ‘going dark.’ Its opponents focus on the fact that such a system will inevitably be vulnerable to bad actors — hackers, stalkers, ‘evil maids’. Few dare suggest that, even if a perfect magical golden key with no vulnerabilities existed, one which could only be used by government officials within their official remit, the question of whether it should be implemented would still be morally complex.”

He summarized three problems:

-1. Loss of “Private Spaces” inhibits growth, experimentation, research and technological/cultural advancement:

“Private spaces are the experimental petri dishes for societies. If you know your every move can be watched, and your every communication can be monitored, so private spaces effectively don’t exist, you’re much less likely to experiment with anything edgy or controversial; and in this era of cameras everywhere, facial recognition, gait recognition, license plate readers, Stingrays, etc., your every move can be watched.”

-2. Loss of mass privacy, and exempting “the rich” from it, helps to perpetuate status-quo laws / standards / establishments, and encourages parasitism, corruption, and crony capitalism:

“Cardinal Richelieu famously said, ‘If one would give me six lines written by the hand of the most honest man, I would find something in them to have him hanged.’ Imagine how much easier it gets if the establishment has access to everything any dissident has ever said and done, while maintaining their own privacy. How long before ‘anti-terrorism’ privacy eradication becomes ‘selective enforcement of unjust laws’ becomes de facto ‘oppo research’ unleashed on anyone who challenges the status quo?”

-3. Advancing technology can manipulate the public based on their private data.

“Do you think ads are bad now? Once AI starts optimizing the advertising → behavior → data feedback loop, you may well like the ads you see, probably on a primal, mammalian, limbic level. Proponents argue that this is obviously better than disliking them. But the propaganda → behavior → data loop is no different from advertising → behavior → data, and no less subject to ‘optimization’.”

_______

_______

Section-3

History of privacy:

The origin of the concept of privacy may be traced to the natural instincts of man, who seeks to preserve a private realm of his own. The observation of Justice Cobb appears to be correct when he said in the Pavesich case that the right to privacy is derived from natural law. He observed: “The right to privacy has its foundations in the instincts of nature…”. There is recognition of privacy in the holy Qur’an and in the sayings of the Prophet Mohammed. The Holy Bible has numerous references to privacy. Jewish law has long recognized the concept of being free from being watched. There were also protections in classical Greece and ancient China. Legal protections have existed in Western countries for hundreds of years. In 1361, the Justices of the Peace Act in England provided for the arrest of peeping toms and eavesdroppers. In 1765, British Lord Camden, striking down a warrant to enter a house and seize papers, wrote, “We can safely say there is no law in this country to justify the defendants in what they have done; if there was, it would destroy all the comforts of society, for papers are often the dearest property any man can have”. Parliamentarian William Pitt declared, “The poorest man may in his cottage bid defiance to all the force of the Crown. It may be frail; its roof may shake; the wind may blow through it; the storms may enter; the rain may enter – but the King of England cannot enter; all his forces dare not cross the threshold of the ruined tenement”.

_

Aristotle’s distinction between the public sphere of politics and political activity, the polis, and the private or domestic sphere of the family, the oikos, as two distinct spheres of life, is a classic reference to a private domain. The public/private distinction is also sometimes taken to refer to the appropriate realm of governmental authority as opposed to the realm reserved for self-regulation, along the lines described by John Stuart Mill in his essay, On Liberty. Furthermore, the distinction arises again in Locke’s discussion of property in his Second Treatise on Government. In the state of nature all the world’s bounty is held in common and is in that sense public. But one possesses oneself and one’s own body, and one can also acquire property by mixing one’s labor with it, and in these cases it is one’s private property. Margaret Mead and other anthropologists have demonstrated the ways various cultures protect privacy through concealment, seclusion or by restricting access to secret ceremonies (Mead, 1949). Alan Westin (1967) has surveyed studies of animals demonstrating that a desire for privacy is not restricted to humans. However, what is termed private in these multiple contexts varies. Privacy can refer to a sphere separate from government, a domain inappropriate for governmental interference, forbidden views and knowledge, solitude, or restricted access, to list just a few.

_

In ancient societies people had relatively limited scope for self-determination, as their (private) lives were strongly influenced by the state. Plato illustrates this in his dialogue the Laws, where the complete life of the individual is determined by the state and its aims, leaving no place for individual freedom and autonomy. Although the book describes an extreme state which was never fully realised, some elements of it did come true in ancient societies, and the life of the individual was strongly shaped by public interests. In the Medieval Age there was no privacy as a societal value in today’s sense: the individual existed as a member of a community, and his/her private life was subject to constant “monitoring” by its other members. The appearance of “real” privacy is linked to the transformation of these small communities, namely the appearance of cities. During the 19th century, changes in the economy and in society transformed the way people lived, and these changes had consequences for privacy too, as physical and mental privacy separated and began to evolve in two different ways. Due to urbanization the population of cities grew, which led to a physical loss of privacy as city dwellers had to live in crowded places. On the other hand, citizens could experience a new “type” of privacy, as they ceased to live under the ever-watchful eyes of their village neighbours and the constant moral control those neighbours imposed. Another very important change was the appearance and growth of (tabloid) newspapers, which were fertile ground for gossip and photojournalism. It was Samuel D. Warren and Louis D. Brandeis who first recognized the threats to privacy posed by these technological and societal developments, in their famous 1890 article The Right to Privacy.

_

Various countries developed specific protections for privacy. In 1776, the Swedish Parliament enacted the Access to Public Records Act, which required that all government-held information be used for legitimate purposes. France prohibited the publication of private facts and set stiff fines for violators in 1858. The Norwegian Criminal Code prohibited the publication of information relating to “personal or domestic affairs” in 1889. Warren and Brandeis’ The Right to Privacy (published in the Harvard Law Review in 1890) became famous among legal scholars: an “unquestioned classic” and, for some, the “most influential law review article of all”. In their study the authors argued that as political, social and economic changes occur in society, the law has to evolve and create new rights in order to “meet the demand of society” and ensure the full protection of the person and of property. They recognized two phenomena that posed a threat to privacy: technological development (namely instantaneous photography) and gossip, which had become a trade in newspapers. Considering these changes, they were the first to demand recognition of the right to privacy (which they defined as “the right to be let alone”) as a separate and general right, a right which ensured protection not against the violation of property rights but against mere emotional suffering. As a stepping stone to the right to be let alone, Warren and Brandeis drew on an already existing common law right: the right to determine to what extent one’s thoughts, sentiments and emotions shall be communicated to others. The principle underlying this right was the “inviolate personality”. The right to be let alone basically ensured protection against the unwanted disclosure of private facts, thoughts, emotions, etc. The Right to Privacy influenced the law especially in the US, where the article is regarded as the origin of the four privacy torts that emerged from US case law. This huge success can also be attributed to the societal and technological changes that turned public opinion in favour of accepting the idea of privacy. The article also influenced jurisprudence, as numerous attempts to define privacy followed. Europe began to examine the right to privacy after the US, and created a different kind of protection.

_

The modern privacy benchmark at an international level can be found in the 1948 Universal Declaration of Human Rights, which specifically protects territorial and communications privacy. Article 12 states:

“No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.” Numerous international human rights treaties specifically recognize privacy as a right. The International Covenant on Civil and Political Rights (ICCPR), Article 17, the United Nations Convention on Migrant Workers, Article 14, and the UN Convention on the Rights of the Child, Article 16, adopt much the same language.

_

Discussion of the concept is complicated by the fact that privacy appears to be something we value to provide a sphere within which we can be free from interference by others, and yet it also appears to function negatively, as the cloak under which one can hide domination, degradation, or physical harm to women and others. Today, when we talk about privacy, we are often talking about personal autonomy as it relates to information about an individual. Privacy entails an individual’s right to control the collection and use of his or her personal information, even after it has been disclosed to others. When individuals provide information to a doctor, a merchant, or a bank, they expect that those professionals or companies will collect only the information they need to deliver a service and use it for that sole purpose. Individuals expect that they have the right to object to any further use. Implementation of the principles of fair information practices – notice, choice, access, security, and enforcement – is key to preserving this autonomy by ensuring that an individual’s privacy interests in his or her personal information are protected.

_

Privacy today also refers to protection from government surveillance. The Fourth Amendment of the US Constitution, originally intended to protect citizens from physical searches and seizures, establishes an expectation of privacy in communications as well. New technologies that enhance the ability of law enforcement to monitor communications and compile an array of information about an individual test the limits of Fourth Amendment protections and require that we revisit and redefine our established ideas about this constitutional protection. Fourth Amendment protection against search and seizure was likewise extended later in the twentieth century to cover telephone wiretaps and electronic surveillance. The earliest arguments by Warren and Brandeis for explicit recognition of privacy protection in law were in large part motivated by expanding communication technology. It is now clear that many people still view privacy as a valuable interest, and feel that it is more under threat than ever due to technological advances. There are massive databases and Internet records of information about individual financial and credit history, medical records, purchases and telephone calls, for example, and most people do not know what information is stored about them or who has access to it.

_

Data privacy day:

Data Privacy Day (known in Europe as Data Protection Day) is an international event that occurs every year on 28th January. The purpose of Data Privacy Day is to raise awareness and promote privacy and data protection best practices. It is currently observed in the United States, Canada, Israel and 47 European countries.

Data Privacy Day’s educational initiative originally focused on raising awareness among businesses as well as users about the importance of protecting the privacy of their personal information online, particularly in the context of social networking. The educational focus has expanded over the past four years to include families, consumers and businesses. In addition to its educational initiative, Data Privacy Day promotes events and activities that stimulate the development of technology tools that promote individual control over personally identifiable information; encourage compliance with privacy laws and regulations; and create dialogues among stakeholders interested in advancing data protection and privacy. The international celebration offers many opportunities for collaboration among governments, industry, academia, nonprofits, privacy professionals and educators.

______

______

Section-4

Privacy and technology:

As technology has advanced, the way in which privacy is protected and violated has changed with it. In the case of some technologies, such as the printing press or the Internet, the increased ability to share information can lead to new ways in which privacy can be breached. It is generally agreed that the first publication advocating privacy in the United States was the article by Samuel Warren and Louis Brandeis, “The Right to Privacy”, written largely in response to the increase in newspapers and photographs made possible by printing technologies. New technologies can also create new ways to gather private information. For example, in the United States it was long assumed that using heat sensors to find marijuana-growing operations was acceptable. However, in Kyllo v. United States (533 U.S. 27, 2001) the Supreme Court held that the warrantless use of a thermal imaging device to reveal previously unknown information about the interior of a home does indeed constitute a violation of privacy.

_

Telephone and privacy:

The right to hold a telephone conversation in the privacy of one’s home or office without interference can certainly be claimed as part of the “right to privacy”. Conversations on the telephone are often of an intimate and confidential character. The telephone conversation is a part of modern life, considered so important that more and more people carry mobile telephones in their pockets. Telephone conversation is an important facet of a person’s private life, and the right to privacy would certainly include conversing on the telephone in the privacy of one’s home or office. Telephone tapping without due procedure established by law would therefore be a violation of privacy. The evil incident to invasion of the privacy of the telephone is far greater than that involved in tampering with the mails: whenever a telephone line is tapped, the privacy of the persons at both ends of the line is invaded, and all conversations between them upon any subject, however proper, confidential and privileged, may be overheard. Moreover, the tapping of one man’s telephone line involves the tapping of the telephone of every other person whom he may call or who may call him. As a means of espionage, writs of assistance and general warrants are but puny instruments of tyranny and oppression when compared with wiretapping.

_

A great debate that has started to heat up in both the press and the legal community is how technology and privacy have come into conflict, and what rights and expectations consumers should have when the two clash. How do you protect against far-reaching technology that could be invading your privacy with or without your knowledge? One of the biggest issues is that our legal system simply can’t keep up with the pace and innovation of technology. So, for the foreseeable future, the burden of protecting yourself is going to remain with you. The reality is that this is becoming a bigger challenge due to the rapid growth of online applications, as more and more companies use web-based processes as their primary interface with customers. On the social media front, it is definitely something you should control and understand: there are risks in oversharing and in not protecting general information such as names, workplaces, and birthdates. The privacy and technology debate will impact the way you use the internet in both your personal and professional life, which is why understanding the risks and rewards is so important.

_

Human beings value their privacy and the protection of their personal sphere of life. They value some control over who knows what about them. They certainly do not want their personal information to be accessible to just anyone at any time. But recent advances in information technology threaten privacy, have reduced the amount of control we have over personal data, and open up the possibility of a range of negative consequences resulting from access to personal data. In the second half of the 20th century, data protection regimes were put in place as a response to increasing levels of processing of personal data. The 21st century has become the century of big data and advanced information technology (e.g. forms of deep learning), of the rise of big tech companies and of the platform economy, which comes with the storage and processing of exabytes of data.

The revelations of Edward Snowden, and more recently the Cambridge Analytica case, have demonstrated that worries about negative consequences are real. The technical capabilities to collect, store and search large quantities of data concerning telephone conversations, internet searches and electronic payments are now in place and are routinely used by government agencies and corporate actors alike. The rise of China and the large-scale use and spread of advanced digital technologies for surveillance and control have only added to the concern of many. For business firms, personal data about customers and potential customers are now also a key asset. The scope and purpose of the personal-data-centred business models of Big Tech (Google, Amazon, Facebook, Microsoft, Apple) have been described by various researchers.

At the same time, the meaning and value of privacy remain the subject of considerable controversy. The combination of the increasing power of new technology and the declining clarity of, and agreement on, privacy gives rise to problems concerning law, policy and ethics. Many of these conceptual debates and issues are situated in the context of interpretation and analysis of the General Data Protection Regulation (GDPR), which became applicable across the EU in spring 2018 as the successor of the 1995 Data Protection Directive, and which applies far beyond the borders of the European Union.

_

The way in which the internet allows data to be produced, collected, combined, shared, stored, and analysed is constantly changing, re-defining personal data and what type of protections personal data deserves and can be given. For example, seemingly harmless data such as an IP address, key words used in searches, and websites visited can now be combined and analysed to identify individuals and learn personal information about them. From information shared on social media sites, to cookies collecting user browser history, to individuals transacting online, to mobile phones registering location data – information about an individual is generated through each use of the internet. In some cases the individual is aware that they are generating information and that it is being collected, but in many cases the individual is unaware of the information trail they are leaving online, does not know who is accessing the information, and does not have control over how their information is being handled and for what purposes it is being used. For example, law enforcement routinely trawls social media sites for information that might be useful in an investigation.

The borderless nature of information flows over the Internet complicates online privacy, as an individual’s data is subject to different levels of protection depending on the jurisdiction in which it resides. Thus, for example, an Indian using Gmail will be subject to the laws of the United States. On one hand this could be seen as a positive, if one country has stronger privacy protections than another, but it could also be damaging to privacy in the reverse situation – where the other country has lower privacy standards and safeguards. In addition to the dilemma of different levels of protection applying to data as it flows through different jurisdictions, two further complications arise: access by law enforcement to data stored in a different jurisdiction, and data from one country becoming accessible to foreign law enforcement because it is processed in their jurisdiction. These complications are nowhere more evident than in the case of the NSA leaks. Because Indian data was residing on US servers, the US government could access and use the data with no obligation to the individual. In response to the NSA leaks, the government of India stated that all facts needed to be known before any action was taken, while citizens initially sought to hold accountable the companies, such as Google and Facebook, that disclosed the data to US security agencies. However, because the companies were acting within the legal limits of the United States, where they were incorporated, they could not be held liable. In response to the dilemma, many actors in India, including government and industry, are calling for the establishment of ‘domestic servers’.

_

The impact of the use of technology on the privacy of people manifests itself in a variety of areas. These areas include, among other things, the following:

-1. The electronic monitoring of people in the workplace. This is done by so-called electronic eyes. The justification given by companies for the use of such technology is increased productivity. Stair (1992, p. 655), however, in discussing this practice, clearly points out the ethical problem pertaining to the use of these technologies: according to him, people’s privacy in the workplace is threatened by these devices. It can also lead to a feeling of fear and of always being watched – the so-called panopticon phenomenon.

-2. The interception and reading of E-mail messages. This poses an ethical problem relating to the private communication of an individual. It is technically possible to intercept E-mail messages, and companies normally justify reading them on the grounds that, firstly, the technology infrastructure (E-mail) is a resource belonging to the company and not the individual, and secondly, messages are intercepted to check whether people use the facility for private reasons or to do their job.

-3. The merging of databases which contain personal information. This is also known as databanking (Frocht & Thomas, 1994, p. 24). By this is meant the integration of personal information from a variety of databases into one central database. The problem here does not arise in the first place from the integration of the information as such. The main problems are that the individual is not aware of personal information being integrated into a central database, does not know the purpose/s for which the integration is effected, or by whom or for whose benefit the new database is constructed, and cannot verify whether the information is accurate.

-4. Closely related to the merging of files is the increasing use of buying cards (“frequent-shopper cards”) by retail stores. Buried inside such a card is a computer chip that records every item purchased along with a variety of personal information about the buyer. The information obtained from the card enables marketing companies to do targeted marketing to specific individuals, because the buying habits as well as other personal information of those people are known.

-5. Another major threat to privacy is the rise of so-called hackers and crackers who break into computer systems. This coincides with a shift in ethical values and the emergence of the cyberpunk culture with its motto of “information wants to be free”.

-6. The development of software that makes the decoding of digital information (which can be private information) virtually impossible also poses serious legal as well as ethical questions, because it can protect criminals. A good example is Pretty Good Privacy (PGP), developed by Phil Zimmermann in 1991; according to an article in the IT Review (1996, p. 22), its algorithm makes the decoding of the encrypted information virtually impossible (see the sketch below for what strong encryption looks like in practice).
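To make the point concrete, the sketch below shows what strong symmetric encryption looks like in practice. This is not PGP itself; it is a minimal illustration using the Fernet recipe from the widely used Python cryptography package, and the message is invented. Without the key, recovering the plaintext from the token is computationally infeasible, which is exactly the property that worries law enforcement and reassures privacy advocates.

```python
# Minimal illustration of strong symmetric encryption (not PGP itself).
# Requires the third-party "cryptography" package: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # secret key, known only to the communicating parties
cipher = Fernet(key)

token = cipher.encrypt(b"a perfectly private message")
print(token)                     # ciphertext: unintelligible without the key

print(cipher.decrypt(token))     # b'a perfectly private message'
```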

_

Various information technologies and its impact on privacy:

The debates about privacy almost always revolve around new technology, ranging from genetics and the extensive study of bio-markers, brain imaging, drones, wearable sensors and sensor networks, social media, smart phones and closed circuit television, to government cybersecurity programs, direct marketing, surveillance, RFID tags, big data, head-mounted displays and search engines. The impact of some of these new technologies, with a particular focus on information technology, is discussed here.

-1. Developments in information technology

“Information technology” refers to automated systems for storing, processing, and distributing information. Typically, this involves the use of computers and communication networks. The amount of information that can be stored or processed in an information system depends on the technology used. The capacity of the technology has increased rapidly over the past decades, in accordance with Moore’s law. This holds for storage capacity, processing capacity, and communication bandwidth. We are now capable of storing and processing data on the exabyte level. For illustration, to store 100 exabytes of data on 720 MB CD-ROM discs would require a stack of them that would almost reach the moon.
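A rough order-of-magnitude check of that illustration is easy to do. The figures below (decimal exabytes and roughly 1.2 mm per disc) are assumptions of this sketch rather than numbers from the text, and they put the stack in the hundreds of thousands of kilometres, the same order of magnitude as the distance to the moon.

```python
# Back-of-the-envelope check: height of a stack of 720 MB CD-ROMs holding 100 exabytes.
# Assumptions: decimal units (1 EB = 10**18 bytes) and about 1.2 mm per disc.
data_bytes = 100 * 10**18        # 100 exabytes
disc_bytes = 720 * 10**6         # 720 MB per CD-ROM
disc_thickness_m = 1.2e-3        # ~1.2 mm per disc

discs = data_bytes / disc_bytes
stack_km = discs * disc_thickness_m / 1000

print(f"{discs:.2e} discs, stack about {stack_km:,.0f} km high")
# -> roughly 1.4e+11 discs and a stack on the order of 10**5 km;
#    for comparison, the moon is about 384,000 km away.
```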

These developments have fundamentally changed our practices of information provisioning. The rapid changes have increased the need for careful consideration of the desirability of effects. Some even speak of a digital revolution as a technological leap similar to the industrial revolution, or a digital revolution as a revolution in understanding human nature and the world, similar to the revolutions of Copernicus, Darwin and Freud (Floridi 2008). In both the technical and the epistemic sense, emphasis has been put on connectivity and interaction. Physical space has become less important, information is ubiquitous, and social relations have adapted as well.

As we have described privacy in terms of moral reasons for imposing constraints on access to and/or use of personal information, the increased connectivity imposed by information technology poses many questions. In a descriptive sense, access has increased, which, in a normative sense, requires consideration of the desirability of this development, and evaluation of the potential for regulation by technology (Lessig 1999), institutions, and/or law.

As connectivity increases access to information, it also increases the possibility for agents to act based on the new sources of information. When these sources contain personal information, risks of harm, inequality, discrimination, and loss of autonomy easily emerge. For example, your enemies may have less difficulty finding out where you are, users may be tempted to give up privacy for perceived benefits in online environments, and employers may use online information to avoid hiring certain groups of people. Furthermore, systems rather than users may decide which information is displayed, thus confronting users only with news that matches their profiles.

Although the technology operates on a device level, information technology consists of a complex system of socio-technical practices, and its context of use forms the basis for discussing its role in changing possibilities for accessing information, and thereby impacting privacy.

-2. Internet

The Internet, originally conceived in the 1960s and developed in the 1980s as a scientific network for exchanging information, was not designed for the purpose of separating information flows (Michener 1999). The World Wide Web of today was not foreseen, and neither was the possibility of misuse of the Internet. Social network sites emerged for use within a community of people who knew each other in real life – at first, mostly in academic settings – rather than being developed for a worldwide community of users (Ellison 2007). It was assumed that sharing with close friends would not cause any harm, and privacy and security only appeared on the agenda when the network grew larger. This means that privacy concerns often had to be dealt with as add-ons rather than by design.

A major theme in the discussion of Internet privacy revolves around the use of cookies (Palmer 2005). Cookies are small pieces of data that web sites store on the user’s computer, in order to enable personalization of the site. However, some cookies can be used to track the user across multiple web sites (tracking cookies), enabling for example advertisements for a product the user has recently viewed on a totally different site. Again, it is not always clear what the generated information is used for. Laws requiring user consent for the use of cookies are not always successful in terms of increasing the level of control, as the consent requests interfere with task flows, and the user may simply click away any requests for consent (Leenes & Kosta 2015). Similarly, features of social network sites embedded in other sites (e.g. “like”-button) may allow the social network site to identify the sites visited by the user (Krishnamurthy & Wills 2009).
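To illustrate the mechanics, here is a toy model of a third-party tracking cookie. It is a sketch only: the ad server is reduced to a single function, the cookie name, site URLs and one-year lifetime are invented, and a real tracker runs as a web service rather than as local calls. The point it demonstrates is that a single identifier issued by one embedded ad server links a browser’s visits across every site that embeds that server’s content.

```python
# Toy model of a third-party tracking cookie shared across embedding sites.
import uuid
from http.cookies import SimpleCookie

profiles = {}   # uid -> list of pages on which this browser has been seen

def handle_ad_request(cookie_header: str, referer: str) -> str:
    """Simulate one request to the ad server; returns the Set-Cookie value."""
    jar = SimpleCookie(cookie_header)
    uid = jar["uid"].value if "uid" in jar else str(uuid.uuid4())
    profiles.setdefault(uid, []).append(referer)       # build the cross-site profile
    return f"uid={uid}; Max-Age=31536000; SameSite=None; Secure"

# The same browser loading ads embedded on two unrelated sites:
set_cookie = handle_ad_request("", "https://news.example/article")
stored = set_cookie.split(";")[0]                      # the browser keeps "uid=..."
handle_ad_request(stored, "https://shop.example/shoes")

print(profiles)   # one identifier now linked to browsing on both sites
```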

The recent development of cloud computing adds to the many privacy concerns (Ruiter & Warnier 2011). Previously, while information would be available from the web, user data and programs would still be stored locally, preventing program vendors from having access to the data and usage statistics. In cloud computing, both data and programs are online (in the cloud), and it is not always clear what the user-generated and system-generated data are used for. Moreover, as data are located elsewhere in the world, it is not even always obvious which law is applicable, and which authorities can demand access to the data. Data gathered by online services and apps such as search engines and games are of particular concern here. Which data are used and communicated by applications (browsing history, contact lists, etc.) is not always clear, and even when it is, the only choice available to the user may be not to use the application.

-3. Social media

Social media pose additional challenges. The question is not merely about the moral reasons for limiting access to information; it is also about the moral reasons for limiting the invitations to users to submit all kinds of personal information. Social network sites invite the user to generate more data, to increase the value of the site (“your profile is …% complete”). Users are tempted to exchange their personal data for the benefits of using services, and provide both this data and their attention as payment for the services. In addition, users may not even be aware of what information they are tempted to provide, as in the aforementioned case of the “like”-button on other sites. Merely limiting the access to personal information does not do justice to the issues here, and the more fundamental question lies in steering the users’ behaviour of sharing. When the service is free, the data is needed as a form of payment.

One way of limiting the temptation of users to share is requiring default privacy settings to be strict. Even then, this limits access for other users (“friends of friends”), but it does not limit access for the service provider. Also, such restrictions limit the value and usability of the social network sites themselves, and may reduce positive effects of such services. A particular example of privacy-friendly defaults is the opt-in as opposed to the opt-out approach. When the user has to take an explicit action to share data or to subscribe to a service or mailing list, the resulting effects may be more acceptable to the user. However, much still depends on how the choice is framed (Bellman, Johnson, & Lohse 2001).

-4. Big data

Users generate loads of data when online. This is not only data explicitly entered by the user, but also numerous statistics on user behavior: sites visited, links clicked, search terms entered, etc. Data mining can be employed to extract patterns from such data, which can then be used to make decisions about the user. These may only affect the online experience (advertisements shown), but, depending on which parties have access to the information, they may also impact the user in completely different contexts.

In particular, big data may be used in profiling the user (Hildebrandt 2008), creating patterns of typical combinations of user properties, which can then be used to predict interests and behavior. An innocent application is “you may also like …”, but, depending on the available data, more sensitive derivations may be made, such as most probable religion or sexual preference. These derivations could then in turn lead to unequal treatment or discrimination. When a user can be assigned to a particular group, even only probabilistically, this may influence the actions taken by others (Taylor, Floridi, & Van der Sloot 2017). For example, profiling could lead to refusal of insurance or a credit card, in which case profit is the main reason for discrimination. When such decisions are based on profiling, it may be difficult to challenge them or even find out the explanations behind them. Profiling could also be used by organizations or possible future governments that have discrimination of particular groups on their political agenda, in order to find their targets and deny them access to services, or worse.
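A toy sketch of how such profiling works is shown below. All records, purchase categories and group labels are invented, and real profiling uses statistical or machine-learning models over far richer data, but the principle is the same: from nothing more than which categories of items a person buys, the code assigns them a probable group, which could then drive advertising, pricing or insurance decisions.

```python
# Toy profiling: inferring a probable group from "innocent" purchase categories.
from collections import defaultdict

# Hypothetical training data: (basket of purchase categories, known group label).
history = [
    ({"baby food", "nappies"}, "young_parent"),
    ({"baby food", "toys"}, "young_parent"),
    ({"vitamins", "large-print books"}, "senior"),
    ({"vitamins", "garden tools"}, "senior"),
]

counts = defaultdict(lambda: defaultdict(int))   # item -> group -> co-occurrences
totals = defaultdict(int)                        # item -> total occurrences
for basket, label in history:
    for item in basket:
        counts[item][label] += 1
        totals[item] += 1

def guess(basket):
    """Score each group by the fraction of past buyers of these items in it."""
    scores = defaultdict(float)
    for item in basket:
        for label, n in counts[item].items():
            scores[label] += n / totals[item]
    return max(scores, key=scores.get) if scores else None

# A new shopper is assigned a probable group from nothing but a shopping basket.
print(guess({"baby food"}))   # -> 'young_parent'
```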

Big data does not only emerge from Internet transactions. Similarly, data may be collected when shopping, when being recorded by surveillance cameras in public or private spaces, or when using smartcard-based public transport payment systems. All these data could be used to profile citizens, and to base decisions upon such profiles. For example, shopping data could be used to send information about healthy food habits to particular individuals, but again also for decisions on insurance. According to EU data protection law, permission is needed for processing personal data, and they can only be processed for the purpose for which they were obtained. Specific challenges, therefore, are (a) how to obtain permission when the user does not explicitly engage in a transaction (as in the case of surveillance), and (b) how to prevent “function creep”, i.e. data being used for different purposes after they are collected, as may happen for example with DNA databases (Dahl & Sætnan 2009). One particular concern could emerge from genetics and genomic data (Tavani 2004, Bruynseels & van den Hoven, 2015). Like other data, genomic data can be used to make predictions, and in particular could predict risks of diseases. Apart from others having access to detailed user profiles, a fundamental question here is whether the individual should know what is known about her. In general, users could be said to have a right to access any information stored about them, but in this case, there may also be a right not to know, in particular when knowledge of the data (e.g. risks of diseases) would reduce well-being – by causing fear, for instance – without enabling treatment. With respect to previous examples, one may not want to know the patterns in one’s own shopping behavior either.

-5. Mobile devices 

As users increasingly own networked devices such as smart phones, these mobile devices collect and send more and more data. These devices typically contain a range of data-generating sensors, including GPS (location), movement sensors, and cameras, and may transmit the resulting data via the Internet or other networks. One particular example concerns location data. Many mobile devices have a GPS sensor that registers the user’s location, but even without a GPS sensor, approximate locations can be derived, for example by monitoring the available wireless networks. As location data links the online world to the user’s physical environment, with the potential of physical harm (stalking, burglary during holidays, etc.), such data are often considered particularly sensitive. Many of these devices also contain cameras which, when applications have access, can be used to take pictures. These can be considered sensors as well, and the data they generate may be particularly private. For sensors like cameras, it is assumed that the user is aware when they are activated, and privacy depends on such knowledge. For webcams, a light typically indicates whether the camera is on, but this light may be manipulated by malicious software. In general, “reconfigurable technology” (Dechesne, Warnier, & van den Hoven 2011) that handles personal data raises the question of user knowledge of the configuration.
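As a toy illustration of why a location trail is so sensitive (the coordinates and hours below are invented), simply rounding repeated GPS fixes to roughly street-level precision and counting where the phone sits at night is enough to single out a probable home location:

```python
# Toy inference of a probable home location from a handful of GPS fixes.
from collections import Counter

# Invented (hour_of_day, latitude, longitude) fixes from one phone.
fixes = [
    (23, 52.3702, 4.8952), (2, 52.3701, 4.8949), (3, 52.3703, 4.8951),   # night
    (10, 52.3560, 4.9550), (14, 52.3561, 4.9552),                        # daytime
    (23, 52.3702, 4.8950),
]

# Round to ~3 decimal places (roughly 100 m) and count night-time fixes only.
night = Counter(
    (round(lat, 3), round(lon, 3))
    for hour, lat, lon in fixes
    if hour >= 22 or hour <= 5
)
print(night.most_common(1))   # -> [((52.37, 4.895), 4)]  (the probable home)
```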

-6. The Internet of Things

Devices connected to the Internet are not limited to user-owned computing devices like smartphones. Many devices contain chips and/or are connected in the so-called Internet of Things. RFID (radio frequency identification) chips can be read from a limited distance, such that you can hold them in front of a reader rather than inserting them. EU and US passports have RFID chips with protected biometric data, but information like the user’s nationality may easily leak when attempting to read such devices. “Smart” RFIDs are also embedded in public transport payment systems. “Dumb” RFIDs, basically only containing a number, appear in many kinds of products as a replacement of the barcode, and for use in logistics. Still, such chips could be used to trace a person once it is known that he carries an item containing a chip.

In the home, there are smart meters for automatically reading and sending electricity and water consumption, and thermostats and other devices that can be remotely controlled by the owner. Such devices again generate statistics, and these can be used for mining and profiling. In the future, more and more household appliances will be connected, each generating its own information. Ambient intelligence (Brey 2005), and ubiquitous computing, along with the Internet of Things (Friedewald & Raabe 2011), also enable automatic adaptation of the environment to the user, based on explicit preferences and implicit observations, and user autonomy is a central theme in considering the privacy implications of such devices. In general, the move towards a service-oriented provisioning of goods, with suppliers being informed about how the products are used through IT and associated connectivity, requires consideration of the associated privacy and transparency concerns (Pieters 2013). For example, users will need to be informed when connected devices contain a microphone and how and when it is used.

-7. E-Government

Government and public administration have undergone radical transformations as a result of the availability of advanced IT systems as well. Examples of these changes are biometric passports, online e-government services, voting systems, a variety of online citizen participation tools and platforms or online access to recordings of sessions of parliament and government committee meetings.

Consider the case of voting in elections. Information technology may play a role in different phases in the voting process, which may have different impact on voter privacy. Most countries have a requirement that elections are to be held by secret ballot, to prevent vote buying and coercion. In this case, the voter is supposed to keep her vote private, even if she would want to reveal it. For information technology used for casting votes, this is defined as the requirement of receipt-freeness or coercion-resistance (Delaune, Kremer & Ryan 2006). In polling stations, the authorities see to it that the voter keeps the vote private, but such surveillance is not possible when voting by mail or online, and it cannot even be enforced by technological means, as someone can always watch while the voter votes. In this case, privacy is not only a right but also a duty, and information technology developments play an important role in the possibilities of the voter to fulfil this duty, as well as the possibilities of the authorities to verify this. In a broader sense, e-democracy initiatives may change the way privacy is viewed in the political process.

More generally, privacy is important in democracy to prevent undue influence. While lack of privacy in the voting process could enable vote buying and coercion, there are more subtle ways of influencing the democratic process, for example through targeted (mis)information campaigns. Online (political) activities of citizens on for example social media facilitate such attempts because of the possibility of targeting through behavioural profiling. Compared to offline political activities, it is more difficult to hide preferences and activities, breaches of confidentiality are more likely, and attempts to influence opinions become more scalable.

-8. Surveillance

Information technology is used for all kinds of surveillance tasks. It can be used to augment and extend traditional surveillance systems such as CCTV and other camera systems, for example to identify specific individuals in crowds, using face recognition techniques, or to monitor specific places for unwanted behaviour. Such approaches become even more powerful when combined with other techniques, such as monitoring of Internet-of-Things devices (Motlagh et al. 2017).

Besides augmenting existing surveillance systems, ICT techniques are nowadays mainly used in the digital domain, typically grouped together under the term “surveillance capitalism” (Zuboff 2019). Social media and other online systems are used to gather large amounts of data about individuals – either “voluntary”, because users subscribe to a specific service (Google, Facebook), or involuntary by gathering all kinds of user related data in a less transparent manner. Data analysis and machine learning techniques are then used to generate prediction models of individual users that can be used, for example, for targeted advertisement, but also for more malicious intents such as fraud or micro-targeting to influence elections.

In addition to the private sector surveillance industry, governments form another traditional group that uses surveillance techniques at a large scale, either by intelligence services or law enforcement. These types of surveillance systems are typically justified with an appeal to the “greater good” and protecting citizens, but their use is also controversial. For such systems, one would typically like to ensure that any negative effects on privacy are proportional to the benefits achieved by the technology. Especially since these systems are typically shrouded in secrecy, it is difficult for outsiders to see if such systems are used proportionally, or indeed useful for their tasks (Lawner 2002). This is particularly pressing when governments use private sector data or services for surveillance purposes.

The almost universal use of strong encryption in communication systems also makes it harder to gather effective surveillance information, leading to more and more calls for “back doors” in communication systems that can be used exclusively by government. From a privacy standpoint this could be evaluated as unwanted, not only because it gives governments access to private conversations, but also because it lowers the overall security of communication systems that employ this technique (Abelson et al. 2015).

______

Measures to protect the privacy of personal information on the Internet offer substantial individual, economic, and societal benefits that must be vigorously protected.  First, individuals benefit from the protection of privacy in cyberspace by enjoying the right to be left alone.  A 1996 poll conducted by Equifax and privacy scholar Alan Westin indicated that “89% of those polled in the United States were either very or somewhat concerned about privacy.”  In addition to the overwhelming concern for protecting personal information collected on the Internet, lack of privacy may actually stagnate the e-commerce economy. Today, direct marketing is big business. Without the ability to choose (or even know) how much privacy will be maintained on a trip into cyberspace, many surfers may be deterred from visiting the Web.  In fact, polls reveal that the privacy concern is the top reason why consumers avoid using the Internet.  Finally, intrusive data collection, coupled with the lack of any meaningful choice regarding protection, could lead to avoidance of the Internet as a free-flowing medium of free speech. People may lie to protect their personal data, refuse to answer questions fearing that their answers will become a record in a marketer’s database, or avoid the Internet altogether. Privacy protections, therefore, may also protect against untruthful data and self-censorship.   

_______

_______

Section-5

Internet privacy and data privacy:   

The word internet here simply means the ability to connect to the web; data is what you use when you are on the internet, whether you are on Facebook or Twitter, Googling some information or using the phone to check the giffgaff forum (or any other purpose, for that matter). Internet privacy is a subset of data privacy. At the highest level, privacy is the right of a citizen to be left alone, or freedom from interference or intrusion. Data privacy describes the practices which ensure that the data shared by customers is only used for its intended purpose; it is the right of a citizen to have control over how personal information is collected and used. Information privacy is also known as data privacy or data protection. Data privacy is, in turn, a subset of privacy: protecting user data and sensitive information is a first step to keeping user data private. Data privacy is challenging since it attempts to make use of data while protecting an individual’s privacy preferences and personally identifiable information.

_

Information privacy may be applied in numerous ways, including encryption, authentication and data masking – each attempting to ensure that information is available only to those with authorized access. These protective measures are geared toward preventing data mining and the unauthorized use of personal information, which are illegal in many parts of the world.
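As a small illustration of one of these measures, data masking, the sketch below redacts personal fields before they are logged or handed to a third party. The field formats and function names are invented for this example; production systems typically use dedicated masking or tokenization tools.

```python
# Minimal illustration of data masking: display-safe redaction of personal fields.
def mask_email(email: str) -> str:
    local, _, domain = email.partition("@")
    return (local[0] + "***@" + domain) if local and domain else "***"

def mask_card(number: str) -> str:
    digits = "".join(ch for ch in number if ch.isdigit())
    return "*" * (len(digits) - 4) + digits[-4:]

print(mask_email("alice@example.com"))    # a***@example.com
print(mask_card("4111 1111 1111 1234"))   # ************1234
```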

Information privacy relates to different data types, including:

-1. Internet privacy (online privacy): All personal data shared over the Internet is subject to privacy issues. Most websites publish a privacy policy detailing the website’s intended use of the online and/or offline data it collects.

-2. Financial privacy: Financial information is particularly sensitive, as it may easily be used to commit online and/or offline fraud.

-3. Medical privacy: All medical records are subject to stringent laws that address user access privileges. By law, security and authentication systems are often required for individuals that process and store medical records.

_

Data privacy or Information privacy is concerned with proper handling, processing, storage and usage of personal information. It is all about the rights of individuals with respect to their personal information.

The most common concerns regarding data privacy are:

-1. managing contracts or policies,

-2. applying governing regulation or law (like General Data Protection Regulation or GDPR),

-3. third-party management.

It is a broad term, but essentially data privacy is a part of the data protection area that deals with the proper handling of data. This includes how data should be collected, stored, and shared with any third parties, as well as compliance with the applicable privacy laws (such as CCPA or GDPR).

However, we have to add that data privacy is not only about the proper handling of data but also about the public expectation of privacy. Organizations need to learn how to process personal data while protecting privacy preferences of individuals. This is what individuals expect from organizations. This is their vision of privacy.

_

Companies across all industries are using technology to collect data on you. Think about it:

  • Your smartphone continuously tracks your whereabouts. Beyond that, your service provider can access all the information you have stored on your device, including your contacts’ information and how often you receive or send a text.
  • Smart grids track your energy usage and collect details about your life — from your daily routines to the appliances you use.
  • Insurance providers will offer you a discount if you let them track your every move while driving.
  • The healthcare industry is pushing for devices that would allow doctors to collect data on you outside of the doctor’s office. For example, Apple recently launched its HealthKit platform, which allows doctors to collect real-time data from iPhones and Apple devices.
  • Companies, using browser fingerprinting, cookies, authenticated tracking, cross-device tracking and other methods, are monitoring your online activity and using that data to send you targeted ads (a simplified fingerprinting sketch follows after this list).

Those are just a handful of ways you are being tracked. So unless you’re willing to live completely off-the-grid, someone, somewhere is compiling a profile of you.
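The browser fingerprinting mentioned in the list above can be surprisingly simple in principle. The following is a much-simplified sketch: the attribute values are invented, and real fingerprinting combines many more signals (canvas rendering, installed fonts, audio stack and so on), but it shows how hashing a handful of attributes the browser volunteers on every visit yields a stable identifier without storing any cookie at all.

```python
# Much-simplified browser fingerprint: hash a few attributes the browser reveals.
import hashlib

browser_attributes = {                      # invented example values
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
    "accept_language": "en-US,en;q=0.9",
    "screen": "1920x1080x24",
    "timezone_offset": "-300",
    "plugins": "pdf-viewer;widevine",
}

fingerprint = hashlib.sha256(
    "|".join(f"{k}={v}" for k, v in sorted(browser_attributes.items())).encode()
).hexdigest()

# The same attributes yield the same identifier on every site that computes it.
print(fingerprint[:16])
```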

_

Data privacy laws such as the United States’ Health Insurance Portability and Accountability Act (HIPAA) govern specific types of data. Other examples, like the Electronic Communications Privacy Act (ECPA), extend government restrictions on wiretaps to include transmissions of electronic data; the Children’s Online Privacy Protection Act (COPPA) gives parents control over what information websites can collect from their kids; and the EU’s General Data Protection Regulation (GDPR) gives citizens new control over their data and their interactions with companies. Compliance officers within an organization are responsible for designing a data privacy policy, so understanding data privacy regulations like these is a key element of the role.

_

Internet privacy is something that more and more people are becoming concerned with, and for good reason. As our use of websites and apps skyrockets, so does the chance of someone violating our privacy. This is because the incentives are all messed up: it’s common practice for businesses to try to acquire as much personal information about you as possible, because that data is more powerful than ever. When a business has a ton of data about you, it can be more effective with its marketing and deliver highly targeted advertisements.

More data = more money.

That means when push comes to shove a corporation will err on the side of being more intrusive, not less. The problem is, a lot of people don’t know what to do. They’re either unaware that this is happening to them, or they think 100% of the blame should rest on these businesses.

And that’s a massive problem.

There’s a lack of understanding and ownership being taken by consumers when it comes to internet privacy, and it’s only digging us into a deeper hole. Each year that the public stays “concerned” but not engaged, it becomes harder to turn the ship around.

But that’s not all.

Aside from the issues with internet privacy that exist from the collection of your data, there are security risks as well. When so many businesses have massive databases of personal info, the chance of a data breach goes through the roof. This means that a business like Facebook could collect massive amounts of data on its users and end up with a third party gaining access to it, even if they weren’t supposed to. Having your info included in a data breach is an immediate problem.

It can lead to:

  • Spam
  • Hacked accounts
  • Stolen credit card information
  • Unapproved purchases
  • Identity theft

These are things you don’t want to experience.

_

Internet-based companies and publications like to position themselves as fancy and forward-thinking, but most of them are reliant on one of the oldest business models out there.

Selling ads.

Their reliance on this model is where we start to bump into some major issues with internet privacy.

Since it’s difficult to charge up front, they need to do something to keep the lights on. And lucky for them, there’s never been a better time to sell advertising to other companies. This is why businesses are so interested in tapping into the massive amount of data that’s out there. With just a handful of details about you, a company like Facebook or Google can sell ads for double the cost. But they’re not after a handful of details, they’re after hundreds. When more user data equals more profit, it’s not hard to predict the direction the ship will go.

Are we proposing that we should suddenly pay for every site we use? Absolutely not.

Part of what makes the internet so impactful to the world is the fact that it can deliver information to those who wouldn’t have had it before. Throwing up a paywall on the whole thing would do more harm than good, and likely create more inequality than we already have. So right now we’re in limbo. More of us are starting to understand that internet privacy is important and large companies are infringing on rights in creative new ways. But until they find a more reliable way to make money they’re going to continue.

_

What does Internet Privacy mean?

Internet privacy is also known as online privacy. Internet privacy refers to the vast range of technologies, protocols and concepts related to giving individual users or other parties more privacy protections in their use of the global Internet. Internet privacy takes many forms, including mandatory privacy statements on websites, data sharing controls, data transparency initiatives and more. Internet privacy and anonymity are paramount to users, especially as e-commerce continues to gain traction. Privacy violations and threat risks are standard considerations for any website under development.

Internet privacy is the right to keep private the sensitive data and information produced as a result of using the web. Collecting this data and displaying it, selling it, or providing it to third parties are all common practices that can jeopardize internet privacy. Privacy is a major concern for all Internet users, but it is becoming more difficult to maintain a reasonable expectation of privacy online. One of the problems with Internet privacy is that many users assume that they have control over their information. This is often not the case, particularly when they engage in activities such as social networking, which is essentially based upon the sharing of personal information. There are entire industries devoted to piercing the veil of privacy, with entire zombie armies at their disposal, for both commercial and nefarious reasons.

As internet privacy continues to erode, public interest in protecting rights online is growing. This has led various companies to create products and services designed to help: DuckDuckGo is growing its private search engine, VPN companies are shielding your traffic from your ISP, and some companies offer personal-information removal services to protect your privacy online.

_

Dissection of an Internet Transaction:

Without burrowing too deeply into the technological nuances of Internet architecture, it is important to understand the mechanics involved in a typical Internet transaction in order to understand how one’s privacy can be so easily surrendered in cyberspace.  Basically, Internet activities are composed of electronic requests for information and subsequent electronic fulfilment of those requests. In other words, a surfer’s mouse “click” initiates a submission of an electronic request to view data on a Web site, the site’s computer receives the electronic request, and finally, the site sends the requested data to the specific computer making the request.  In order to send the information to the correct computer among the millions logged onto the Internet, the Web site must be able to distinguish the computer requesting data from all other online computers. An Internet protocol (IP) address, which is basically a specific machine address assigned by the Web surfer’s Internet service provider (ISP) to a user’s computer, accomplishes this task.  Hence, every time a transaction requesting or sending data occurs on the Web this unique IP address accompanies the data. Furthermore, both the ISP and the Web site typically log these transactions. To the detriment of users’ personal privacy and anonymity on the Web, however, the uniqueness of the IP address may allow someone in possession of another user’s IP address to find detailed personal facts about the user, such as the user’s name, address, birth date, social security number, and e-mail address, within minutes.
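A minimal sketch of the logging half of such a transaction is shown below, using only the Python standard library; the host, port and response text are placeholders. Every request that reaches a server arrives with the client’s IP address, which the server is free to record alongside what was requested, and real web servers write exactly this kind of access log by default.

```python
# Minimal web server that logs the IP address and path of every request it serves.
import datetime
from http.server import BaseHTTPRequestHandler, HTTPServer

class LoggingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ip, _port = self.client_address
        # A real site would append this line to a persistent access log.
        print(f"{datetime.datetime.now().isoformat()} {ip} requested {self.path}")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello\n")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), LoggingHandler).serve_forever()
```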

_

The Internet is at once a new communications medium and a new locus for social organization on a global basis. Because of its decentralized, open, and interactive nature, the Internet is the first electronic medium to allow every user to “publish” and engage in commerce. Users can reach and create communities of interest despite geographic, social, and political barriers. The Internet is an unprecedented mechanism for providing invaluable information to government, social organizations, health care, and educational institutions. As the World Wide Web has grown to fully support voice, data, and video, it has become a virtual “face-to-face” social and political medium.

However, it remains an open question whether the Internet’s democratic potential will be achieved. The Internet exists within social, political, and technological contexts that can impede its democratic potential. Governments propagate the Internet, but worry about its threat to their traditional authority. The private sector sees the economic potential of the Internet, but anticompetitive impulses are also part of the scene. Users bring not only their social aspirations to the Internet, but also their potential for antisocial behavior.

Protection of privacy is one of the critical issues that must be resolved. Will the “Digital Age” be one in which individuals maintain, lose, or gain control over information about themselves? In the midst of this uncertainty, there are reasons for hopefulness. Of course, individuals operating on the Internet can use new tools for protecting their privacy. From anonymous remailers and web browsers that allow individuals to interact anonymously, to encryption programs that protect e-mail messages as they pass through the network, individuals can harness the technology to promote their privacy. Equally important is the newfound voice of individuals. Using e-mail, Web sites, list servers, and newsgroups, individuals on the Internet are able to respond quickly to perceived threats to privacy.

But it is not just individuals’ self-interest leading us toward increased privacy protection. Faced with numerous surveys documenting that the lack of privacy protections is a major barrier to consumer participation in electronic commerce, businesses are beginning to take privacy protection more seriously. A growing number of companies, under public and regulatory scrutiny, have begun incorporating privacy into their management process and actually marketing their “privacy sensitivity” to the public. The collective efforts pose difficult questions about how to ensure the adoption and enforcement of rules in this global, decentralized medium. Governments are also struggling to identify their appropriate role in this new environment.

While expectations of privacy are under serious challenge, the self-interest of the various constituencies that make up the Internet—i.e. users, advocates, industry, and government—are all pushing toward the adoption of technologies and rules that provide individuals with greater control over their information and their privacy.

Concerns about privacy in cyberspace are an issue of international debate. As reading and writing, health care and shopping, and sex and gossip increasingly take place in cyberspace, citizens around the world are concerned that the most intimate details of their daily lives are being monitored, searched, recorded, stored, and often misinterpreted when taken out of context. For many, the greatest threats to privacy come not from state agents but from the architecture of e-commerce itself, which is based, in unprecedented ways, on the recording and exchange of intimate personal information.

_

Public postings, the opposite of privacy:

When something is posted on any website publicly, everyone has access to it. Search engines make the information even more accessible, and anyone (including internet ‘bots’) can copy the information and store it indefinitely. The web has become so complex that knowing and controlling the privacy settings of all of the websites a person uses has become nearly impossible. Internet privacy settings are seemingly ever-changing.

Steps to protect internet privacy:

The first rule for protecting privacy on the internet is ‘think before you post’.

The second rule is ‘check your privacy settings’. Check your privacy settings on Facebook, LinkedIn, Twitter, and any other websites you use. Learn about your rights and learn about your settings. Privacy settings are controls available on many social networking and other websites that allow users to limit who can access their profile and what information visitors can see.

Rule three: Ask friends to understand their privacy settings and let your friends know you care about your privacy. Remember, if a friend hasn’t set their privacy settings properly and they share a picture of you at that college party, all of the privacy measures you have taken won’t matter. ‘Friends’ are often the biggest privacy leaks out there.

IP addresses identify where you are:

Your privacy extends to your IP address. Every time you visit a website your IP address is logged. Your IP address tells the website (and the people that run it) approximately where you are. Have you ever gone to a shopping site and noticed a big message at the top that says “We ship to (your city or state)!”? They know your city or state based on your IP address. Almost anyone can find your IP address, and with it they can work out roughly where you are. To mask your IP address you can use an anonymity network such as Tor or another proxy service.
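To make the mechanics concrete, here is a minimal sketch, assuming a Flask web server and an invented route, of how a site sees your IP address the moment you request a page; a real site would typically pass that address to a GeoIP database to derive your approximate city or region.

```python
# Minimal sketch (assumed Flask setup, hypothetical route): how a web server
# learns a visitor's IP address from every incoming request.
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    # The client IP arrives with every request; no consent step is involved.
    client_ip = request.headers.get("X-Forwarded-For", request.remote_addr)
    app.logger.info("visit from %s for %s", client_ip, request.path)
    # A real site would typically look this address up in a GeoIP database
    # to derive an approximate city or region ("We ship to <your city>!").
    return f"Your IP address, as seen by this server: {client_ip}"

if __name__ == "__main__":
    app.run(port=8080)
```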

Cookies follow you around:

A web cookie is just a text file placed on your computer, usually by a website. Sometimes cookies are ‘session’ based, meaning they only work while you are on a site. Other cookies are ‘persistent’, meaning they continue to exist long after you have left a site. Normally, a cookie tracks your visits so the website knows you are a returning visitor, or what ads to show you. Internet cookies are necessary for the web to function the way we’ve come to expect it to, but they are also viewable by third parties and have an impact on internet privacy.

A cookie file contains a unique identification number which allows a Web site to recognize and distinguish the user in subsequent visits to the site.  Cookies also typically store information such as user preferences, the type of browser software or operating system used, installed plug-ins, and password or login information which allow for easier Web site browsing by the user in future visits.  However, cookies have a dual-personality potential because they can abrogate an individual’s privacy in cyberspace by collecting information regarding the user and his or her behavior.
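As a rough illustration, the sketch below (again assuming Flask, with an invented cookie name) shows how a site can assign each browser a persistent unique identifier in a cookie and recognise it on every return visit.

```python
# Minimal sketch (assumed Flask setup, hypothetical cookie name): assigning a
# browser a persistent unique identifier and recognising it on later visits.
import uuid
from flask import Flask, request, make_response

app = Flask(__name__)

@app.route("/")
def index():
    visitor_id = request.cookies.get("visitor_id")
    if visitor_id:
        resp = make_response(f"Welcome back, visitor {visitor_id}")
    else:
        visitor_id = uuid.uuid4().hex          # unique ID for this browser
        resp = make_response("First visit: a tracking cookie has been set")
        # 'max_age' makes this a persistent cookie that outlives the session.
        resp.set_cookie("visitor_id", visitor_id, max_age=60 * 60 * 24 * 365)
    return resp

if __name__ == "__main__":
    app.run(port=8080)
```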

Cookies accomplish their darker-sided agenda in several ways. 

First, a Web site can retrieve cookies at a future time. When the Web site does this, the cookie can disclose a detailed list of all Web sites that a specific computer visited within a particular time frame. Embedded within these cookie files may be tell-tale information that can identify a user personally, such as a user’s name, password, e-mail address, and other personal information. Cookies cannot read your hard drive to find out information about you; however, any personal information that you give to a Web site, including credit card information, will most likely be stored in a cookie unless you have turned off the cookie feature in your browser. In this way, cookies can become a threat to privacy; the cookie will only contain information that you freely provide to a Web site. In the past, only the Web site that placed the cookie could read the file; however, the use of cookie sharing between sites, or of placement ads served by the same ad agency, now allows cookies from multiple Web sites to be aggregated to create a comprehensive personal profile of an individual user.

Second, some cookies have the capability to record the Web site from which a user came, the links accessed at the site, and any personal information entered at the site.  A Web site may also use these types of cookies in concert with a more efficient, and yet more intrusive, technique for gathering personal data known as “clickstreams.”  A clickstream is basically a recording of all Web sites a user visits during the same session or connection.  Clickstream collections not only gather a list of sites visited, but also the duration spent on each site, purchases made, advertisements viewed, and data entered. Internet service providers usually perform clickstream monitoring, because users have essentially rented a line from the provider to connect to the Internet.   Lastly, some cookies may be able to identify the IP address of the computer, which could lead to the ultimate disclosure of the location of the computer used to access the site.  
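The toy sketch below, using entirely synthetic events, shows how a raw clickstream of (timestamp, URL) records can be aggregated into a per-site profile of visit counts and dwell time; the same basic aggregation, at vastly larger scale, is what makes clickstream monitoring valuable to ISPs and advertisers.

```python
# Toy sketch (synthetic data): aggregating a clickstream log of
# (timestamp, URL) events into a per-site profile of visits and dwell time.
from datetime import datetime
from urllib.parse import urlparse
from collections import defaultdict

clickstream = [  # hypothetical events recorded during one session
    ("2024-05-01 09:00:00", "https://news.example.com/politics"),
    ("2024-05-01 09:04:30", "https://shop.example.com/cart"),
    ("2024-05-01 09:06:10", "https://clinic.example.org/appointments"),
    ("2024-05-01 09:15:00", "https://news.example.com/sports"),
]

events = [(datetime.fromisoformat(ts), urlparse(url).netloc) for ts, url in clickstream]
profile = defaultdict(lambda: {"visits": 0, "seconds": 0.0})

# Dwell time for each event is approximated by the gap until the next event.
for (t, site), (t_next, _) in zip(events, events[1:]):
    profile[site]["visits"] += 1
    profile[site]["seconds"] += (t_next - t).total_seconds()

for site, stats in profile.items():
    print(site, stats)
```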

Don’t reply all without reading the addresses:

One of the easiest ways for people with dark purposes to begin to learn about your existence is through chain emails. Almost everyone has a friend who passes along jokes to an entire list of contacts, which is one reason some people maintain different online identities, or ‘personas’, for different contexts. The trouble is that Reply All emails often end up getting forwarded to hundreds of other people, and the message gets longer and longer. Each iteration adds more clearly visible email addresses. Eventually a spammer, web robot, or other internet opportunist receives the email. They now have the email addresses of what is essentially your personal social network. They can hijack your email address, send emails to your friends pretending to be you, and work all kinds of mischief.

_

Once you’ve given away your data to one company, it’s very hard to limit its use. While you might feel comfortable with Amazon or Facebook using your personal information, you just can’t be sure that other “third-party” companies and even government agencies won’t also find ways of accessing it.

There are three key ways you can lose control of your data privacy:

-1. Third party apps.

2018’s Cambridge Analytica scandal highlighted the threat to privacy posed by third-party apps. Applications often allow users to log in via Facebook or other social media sites, instead of creating an account. It’s a feature employed by everything from Tinder to Uber Eats. In practice, this means giving the application access to some of your social media data. It’s hard to know just how much information you’re giving up, though – in the case of Cambridge Analytica, the company leveraged this mechanism to illegally gain data on more than 50 million Facebook users.
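For illustration, the hedged sketch below (hypothetical client ID, redirect URI and provider URLs) shows the OAuth 2.0 pattern behind “log in with a social account”: the third-party app asks the identity provider for a list of scopes, and each scope it requests is a slice of your profile data it will be able to read.

```python
# Hedged sketch (hypothetical client ID and placeholder provider URLs): the
# "log in with a social account" pattern is an OAuth 2.0 flow in which an app
# requests scopes, i.e. slices of your profile data, from the identity provider.
from requests_oauthlib import OAuth2Session

CLIENT_ID = "your-app-client-id"                                    # hypothetical
REDIRECT_URI = "https://yourapp.example/callback"                   # hypothetical
AUTHORIZE_URL = "https://social-provider.example/oauth/authorize"   # placeholder

# Each extra scope widens the window into the user's account.
scopes = ["public_profile", "email", "user_friends", "user_likes"]

oauth = OAuth2Session(CLIENT_ID, redirect_uri=REDIRECT_URI, scope=scopes)
authorization_url, state = oauth.authorization_url(AUTHORIZE_URL)

# The user who clicks through this URL grants the app every scope listed above.
print("Send the user to:", authorization_url)
```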

-2. Government surveillance.

Facebook was just one of many companies that participated in Prism, a US surveillance program. Although Facebook denied any knowledge of the program, the social media giant was named, along with Google, as a participant in leaked government documents. According to these leaks, Prism collected data directly from the servers of the communications companies that joined the program, without users knowing.

-3. Data breaches.

Another issue is that sites like Facebook and Google will often store user data, even after someone has deleted their account. If you’ve used Facebook’s payment systems, your card details will still be in their logs. One major problem with this is that, long after you stop using an application, a data breach can still put your personal information in the hands of hackers and criminals. Many internet users report feeling helpless in the face of these tech giants, but that doesn’t have to be the case. There’s plenty you can do to empower yourself and level the playing field.

_

If a stranger came up to you on the street, would you give him your name, Social Security number and e-mail address?

Probably not.

Yet people often dole out all kinds of personal information on the Internet that allows such identifying data to be deduced. Services like Facebook, Twitter and Flickr are oceans of personal minutiae — birthday greetings sent and received, school and work gossip, photos of family vacations, and movies watched. Computer scientists and policy experts say that such seemingly innocuous bits of self-revelation can increasingly be collected and reassembled by computers to help create a picture of a person’s identity, sometimes down to the Social Security number. Technology has rendered the conventional definition of personally identifiable information obsolete. You can find out who an individual is without it.

In a class project at the Massachusetts Institute of Technology that received some attention, Carter Jernigan and Behram Mistree analyzed more than 4,000 Facebook profiles of students, including links to friends who said they were gay. The pair was able to predict, with 78 percent accuracy, whether a profile belonged to a gay male. So far, this type of powerful data mining, which relies on sophisticated statistical correlations, is mostly in the realm of university researchers, not identity thieves and marketers.

In social networks, people can increase their defenses against identification by adopting tight privacy controls on information in personal profiles. Yet an individual’s actions, researchers say, are rarely enough to protect privacy in the interconnected world of the Internet. You may not disclose personal information, but your online friends and colleagues may do it for you, referring to your school or employer, gender, location and interests. Patterns of social communication, researchers say, are revealing. Personal privacy is no longer an individual thing. In today’s online world, what your mother told you is true, only more so: people really can judge you by your friends. Collected together, the pool of information about each individual can form a distinctive “social signature,” researchers say.

The power of computers to identify people from social patterns alone was demonstrated in a study. By examining correlations between various online accounts, the scientists showed that they could identify more than 30 percent of the users of both Twitter, the microblogging service, and Flickr, an online photo-sharing service, even though the accounts had been stripped of identifying information like account names and e-mail addresses. When you link these large data sets together, a small slice of our behavior and the structure of our social networks can be identifying.
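A toy example of the same linking idea, using fabricated rows and the pandas library, shows how two “anonymised” data sets can be joined on quasi-identifiers such as birth date, ZIP code and gender to re-attach names to supposedly anonymous records.

```python
# Toy sketch (fabricated rows): linking two "anonymised" data sets on
# quasi-identifiers to re-identify people, the same basic idea as linking
# stripped-down accounts across services.
import pandas as pd

# "Anonymous" survey: no names, but quasi-identifiers remain.
survey = pd.DataFrame({
    "birth_date": ["1987-03-12", "1990-07-01", "1987-03-12"],
    "zip":        ["02139",      "94105",      "10001"],
    "gender":     ["F",          "M",          "F"],
    "diagnosis":  ["asthma",     "diabetes",   "none"],
})

# Public roll: names plus the same quasi-identifiers.
voters = pd.DataFrame({
    "name":       ["Alice Smith", "Bob Jones"],
    "birth_date": ["1987-03-12",  "1990-07-01"],
    "zip":        ["02139",       "94105"],
    "gender":     ["F",           "M"],
})

# An inner join on the shared attributes re-attaches names to "anonymous" rows.
linked = survey.merge(voters, on=["birth_date", "zip", "gender"])
print(linked[["name", "diagnosis"]])
```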

Even more unnerving to privacy advocates is the work of two researchers from Carnegie Mellon University. In a published paper, Alessandro Acquisti and Ralph Gross reported that they could accurately predict the full, nine-digit Social Security numbers for 8.5 percent of the people born in the United States between 1989 and 2003 — nearly five million individuals. Social Security numbers are prized by identity thieves because they are used both as identifiers and to authenticate banking, credit card and other transactions. The Carnegie Mellon researchers used publicly available information from many sources, including profiles on social networks, to narrow their search for two pieces of data crucial to identifying people — birthdates and city or state of birth. That helped them figure out the first three digits of each Social Security number, which the government had assigned by location. The remaining six digits had been assigned through methods the government didn’t disclose, although they were related to when the person applied for the number. The researchers used projections about those applications as well as other public data, like the Social Security numbers of dead people, and then ran repeated cycles of statistical correlation and inference to partly re-engineer the government’s number-assignment system. To be sure, the work by Mr. Acquisti and Mr. Gross suggests a potential, not actual, risk. But unpublished research by them explores how criminals could use similar techniques for large-scale identity-theft schemes.

More generally, privacy advocates worry that the new frontiers of data collection, brokering and mining, are largely unregulated. They fear “online redlining,” where products and services are offered to some consumers and not others based on statistical inferences and predictions about individuals and their behavior.

Take home point:

When you’re doing stuff online, you should behave as if you’re doing it in public — because increasingly, it is.

_

Predicting personal information:

A growing proportion of human activities, such as social interactions, entertainment, shopping, and gathering information, are now mediated by digital services and devices. Such digitally mediated behaviors can easily be recorded and analysed, fuelling the emergence of computational social science and new services such as personalized search engines, recommender systems, and targeted online marketing. However, the widespread availability of extensive records of individual behavior, together with the desire to learn more about customers and citizens, presents serious challenges related to privacy and data ownership.

We have to distinguish between data that are actually recorded and information that can be statistically predicted from such records. People may choose not to reveal certain pieces of information about their lives, such as their sexual orientation or age, and yet this information might be predicted in a statistical sense from other aspects of their lives that they do reveal. For example, a major US retail network used customer shopping records to predict pregnancies of its female customers and send them well-timed and well-targeted offers. In some contexts, an unexpected flood of vouchers for prenatal vitamins and maternity clothing may be welcome, but it could also lead to a tragic outcome, e.g., by revealing (or incorrectly suggesting) a pregnancy of an unmarried woman to her family in a culture where this is unacceptable. As this example shows, predicting personal information to improve products, services, and targeting can also lead to dangerous invasions of privacy.

Predicting individual traits and attributes based on various cues, such as samples of written text, answers to a psychometric test, or the appearance of spaces people inhabit, has a long history. The migration of human activity to the digital environment makes it possible to base such predictions on digital records of behavior. It has been shown that age, gender, occupation, education level, and even personality can be predicted from people’s Web site browsing logs. Similarly, it has been shown that personality can be predicted from the contents of personal Web sites, music collections, properties of Facebook or Twitter profiles such as the number of friends or the density of friendship networks, or the language used by their users. Furthermore, location within a friendship network on Facebook has been shown to be predictive of sexual orientation.
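As a minimal sketch of this kind of prediction, the example below trains a logistic regression on purely synthetic “behavioural” features (think counts of liked pages); the feature names and data are invented, but the statistical approach mirrors the studies described above.

```python
# Minimal sketch (synthetic data): predicting an undisclosed trait from
# behavioural features such as counts of page "likes". All data is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_features = 1000, 50            # e.g. 50 categories of liked pages
X = rng.poisson(2.0, size=(n_users, n_features))

# Hidden trait correlated with a handful of features (purely synthetic).
logits = 0.8 * X[:, 0] + 0.6 * X[:, 3] - 0.7 * X[:, 7] - 1.0
y = (rng.random(n_users) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```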

_

Search engine privacy:

Search engine privacy is a subset of internet privacy that deals with user data being collected by search engines. Both types of privacy fall under the umbrella of information privacy. Privacy concerns regarding search engines can take many forms, such as the ability for search engines to log individual search queries, browsing history, IP addresses, and cookies of users, and conducting user profiling in general. The collection of personally identifiable information of users by search engines is referred to as “tracking”.

This is controversial because search engines often claim to collect a user’s data in order to tailor results to that specific user and provide a better searching experience. However, search engines can also abuse and compromise their users’ privacy by selling their data to advertisers for profit. In the absence of regulation, users must decide what matters more to their search engine experience, the relevance and speed of results or their privacy, and choose a search engine accordingly.

Search engines generally publish privacy policies to inform users about what data of theirs may be collected and what purposes it may be used for. While these policies may be an attempt at transparency by search engines, many people never read them and are therefore unaware of how much of their private information, like passwords and saved files, is collected via cookies and may be logged and kept by the search engine. This ties in with the phenomenon of notice and consent, which is how many privacy policies are structured.

Notice and consent policies essentially consist of a site showing the user a privacy policy and having them click to agree. This is intended to let the user freely decide whether or not to go ahead and use the website. This decision, however, may not actually be made so freely, because the costs of opting out can be very high. Another big issue with putting the privacy policy in front of users and having them accept quickly is that such policies are often very hard to understand, even in the unlikely case that a user decides to read them. Privacy-minded search engines, such as DuckDuckGo, state in their privacy policies that they collect much less data than search engines such as Google or Yahoo, and may not collect any.

_

Location privacy:

Locational privacy (also known as “location privacy”) is the ability of an individual to move in public space with the expectation that under normal circumstances their location will not be systematically and secretly recorded for later use.

Of course, when you leave your home you sacrifice some privacy. Someone might see you enter the clinic on Market Street, or notice that you and your secretary left the Hilton Gardens Inn together. Furthermore, in the world of ten years ago, all of this information could be obtained by people who didn’t like you or didn’t trust you. But obtaining this information used to be expensive. Your enemies could hire a guy in a trench coat to follow you around, but they had to pay him. Moreover, it was hard to keep the surveillance secret — you had a good chance of noticing your tail ducking into an alley. In the world of today and tomorrow, this information is quietly collected by ubiquitous devices and applications, and available for analysis to many parties who can query, buy or subpoena it. Or pay a hacker to steal a copy of everyone’s location history. It is this transformation to a regime in which information about your location is collected pervasively, silently, and cheaply that we’re worried about.

Some threats to locational privacy are overt: it’s evident how cameras backed by face-recognition software could be misused to track people and record their movements. Here we’re primarily concerned with threats to locational privacy that arise as a hidden side-effect of clearly useful location-based services. We can’t stop the cascade of new location-based digital services. Nor would we want to — the benefits they offer are impressive. What urgently needs to change is that these systems need to be built with privacy as part of their original design. We can’t afford to have pervasive surveillance technology built into our electronic civic infrastructure by accident. We have the opportunity now to ensure that these dangers are averted. The contention is that the easiest and best solution to the locational privacy problem is to build systems which don’t collect the data in the first place. Modern cryptography actually allows civic data processing systems to be designed with a whole spectrum of privacy policies: ranging from complete anonymity to limited anonymity to support law enforcement. Modern cryptography offers some really clever ways to deploy road tolls and transit tickets and location searches and all the other mobile services we want, without creating a record of where you are. This isn’t at all intuitive, but it’s really important that policymakers and engineers working with location systems know about it.
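A deliberately simplified sketch of the “don’t collect the data in the first place” design, with invented toll zones and prices: the vehicle keeps its own location log, computes the toll locally, and reports only the total owed; a production system would add cryptographic proofs or random spot checks rather than trusting the device outright.

```python
# Minimal sketch (invented zones and tariff): the vehicle computes its own
# toll from a private, on-device trip log and reports only the total owed,
# never the route it actually drove.
TOLL_PER_ZONE = {"bridge_a": 2.50, "tunnel_b": 4.00, "downtown": 1.75}  # hypothetical tariff

# Location history stays on the device; it is never uploaded.
local_trip_log = ["bridge_a", "downtown", "downtown", "tunnel_b"]

def toll_due(trip_log, tariff):
    """Compute the total toll on-device from the private trip log."""
    return sum(tariff.get(zone, 0.0) for zone in trip_log)

# Only this single number leaves the vehicle; in a real deployment it would be
# accompanied by a cryptographic proof or random spot checks for honesty.
print("amount reported to the toll authority:", toll_due(local_trip_log, TOLL_PER_ZONE))
```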

_

Internet Service Providers (ISPs):

Probably the most overlooked culprits in the internet privacy landscape are the internet service providers themselves. An internet service provider does exactly what the name hints at: it provides you with access to the internet. Some examples are Comcast Xfinity, AT&T, Verizon Fios, and CenturyLink. The fact that they’re overlooked doesn’t make a lot of sense, because an ISP is the party most likely to be shady. They have a history of doing what’s best for their bottom line at the expense of consumers’ internet privacy.

There also aren’t very many large ISPs out there, which leaves them with a weak incentive to respect their customers. If you want to switch your ISP there typically aren’t many to choose from in your area, and the experience of dealing with customer support when trying to switch is miserable, so most people just stick with their provider. This has produced large corporations that don’t need to worry about upsetting their customers too much, and as a result they have been testing the limits of what they can get away with. A typical practice that compromises your internet privacy is the collecting and selling of browser history. This has been going on in some capacity for a while now, and it’s expected to get worse.

The core issue is that ISPs are the first layer of interaction in your use of the internet. You can be extremely diligent with your use of social media and search engines, but your ISP will still be able to track your activity, because you have to go through the ISP to reach those sites in the first place. If you value your internet privacy and want to safeguard against these practices, the best way around them is a virtual private network (VPN for short). A VPN helps you reclaim some of your internet privacy by tunnelling your online activity through a private network, which prevents ISPs from tracking what you do online and selling that information to other companies.

_

Personal privacy and Internet marketing: An impossible conflict or a marriage made in heaven? 2011 paper:

With the decline of print media and network television, marketing strategy is changing. As these advertising vehicles slow down and more individuals turn to the Internet for daily functioning, marketers are following in kind. Technology offers businesses and marketing specialists the ability to collect immense amounts of private data about individuals’ interests or characteristics as they surf the Internet and input personal information. Data collection falls into one of two categories: a user’s voluntary sharing of such information, or involuntary/uninformed collection by other parties. The threat posed by invasion of personal privacy is real. At the same time, Internet users can benefit in several ways from the sharing and collection of personal information. For example, much online content is funded by advertising and would otherwise only be available to consumers for a fee. Additionally, valuable information regarding trends and happenings (e.g., flu outbreaks) is detected through aggregated Internet tracking. Finally, many Internet users value and enjoy targeted advertising geared to their particular interests or needs. Laws regarding this matter are currently limited, but are developing in order to protect individuals from unscrupulous data collection, especially involving children. Fortunately, there are ways marketers can legally and ethically collect and use personal information. Ultimately, regulation needs to be developed, and the marketing profession can aid itself by expanding self-regulation and policing in order to stave off additional—and potentially onerous—regulation.

______

______

Privacy in Cloud Computing:

Just a few years ago, people used to carry their documents around on disks. Then, more recently, many people switched to memory sticks. Cloud computing refers to the ability to access and manipulate information stored on remote servers, using any Internet-enabled platform, including smartphones. Computing facilities and applications will increasingly be delivered as a service, over the Internet. We are already making use of cloud computing when, for example, we use applications such as Google Mail, Microsoft Office365 or Google Docs. In the future, governments, companies and individuals will increasingly turn to the cloud.

The cloud computing paradigm changes the way in which information is managed, especially where personal data processing is concerned. End-users can access cloud services without the need for any expert knowledge of the underlying technology. This is a key characteristic of cloud computing, which offers the advantage of reducing cost through the sharing of computing and storage resources, combined with an on-demand provisioning mechanism based on a pay-per-use business model. These new features have a direct impact on the IT budget and cost of ownership, but also bring up issues of traditional security, trust and privacy mechanisms.

Privacy is the right of individuals to ‘know what is known about them’, be aware of stored information about them, control how that information is communicated and prevent its abuse. In other words, it refers to more than just confidentiality of information. Protection of personal information (or data protection) derives from the right to privacy via the associated right to self-determination. Every individual has the right to control his or her own data, whether private, public or professional.

Without knowledge of the physical location of the server or of how the processing of personal data is configured, end-users consume cloud services without any information about the processes involved. Data in the cloud are easier to manipulate, but also easier to lose control of. For instance, storing personal data on a server somewhere in cyberspace could pose a major threat to individual privacy. Cloud computing thus raises a number of privacy and security questions.

Cloud computing has raised eyebrows among IT managers, especially when it comes to data security in the cloud. Data security and privacy protection are the two major concerns, and they are becoming more important for the future development of cloud computing technology in business, industry, and government. Addressing this fear, Google has claimed that data stored in the cloud are much safer.

_

Best practice for organisations using cloud computing:

-It’s of utmost importance that the service provider you choose can guarantee the highest level of uptime. Of course, cloud computing and access to your data and software are totally dependent on a working internet connection, so have an alternative plan in place in case that connection fails.

-Be knowledgeable about where your data is being stored at all times and enquire about the laws in that jurisdiction – at the end of the day your organisation is held responsible.

-ALWAYS encrypt your data. Data should be encrypted when moving through the network as well as at rest, stored in the cloud. It is ultimately the organisation’s responsibility to ensure that the data is secure. Encrypting the data strengthens the security already provided by the cloud service provider, and you can never have too much security when it comes to your data. Through encryption you also have a means of ensuring your data can be destroyed if necessary (a minimal client-side encryption sketch follows this list).

-It’s important to have an agreement in place with the cloud service provider regarding issues such as deletion of data and portability of data and software. Organisations do not want to find themselves stuck with a particular vendor because they are unable to export their data to an alternative vendor if they wish.

-When looking for a suitable cloud service provider ensure you choose one with good levels of security in place. Make sure your network is secured at all times and your local security is in place. ENCRYPT your data. A combination of all of these offers you the best chance at effective data security. Seeking an independent security audit of the provider is beneficial.

-Be aware of the third parties the cloud service provider uses and the policies around the access they may have to your data.

-Take extra care during updates ensuring access privileges remain unchanged.

-Make sure there is a system in place for monitoring data.

-Make sure that your organisation follows good password practice; password policies should be managed so that they are maintained across the organisation at all times.
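As promised above, here is a minimal client-side encryption sketch, assuming the Python ‘cryptography’ package and deliberately simplified key management: data is encrypted on your own machine before upload, so what rests in the cloud is ciphertext, and destroying the key is one practical way to render stored copies useless.

```python
# Minimal sketch (assumes the 'cryptography' package): encrypt data on your
# own machine before it reaches the cloud, so data at rest there is ciphertext.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # keep this key outside the cloud provider
fernet = Fernet(key)

document = b"quarterly payroll figures"          # data to be stored remotely
ciphertext = fernet.encrypt(document)            # what actually gets uploaded

# Later, after downloading the ciphertext back from the cloud:
assert fernet.decrypt(ciphertext) == document

# Destroying every copy of the key renders the stored ciphertext useless,
# which is one practical way of ensuring the data can be "destroyed".
```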

_______

_______

Consequences of non-compliance of data privacy:

With the development of technology, there are more and more intrusive ways to collect and process personal information. Very soon, it will become incredibly risky for companies to navigate data privacy laws unprepared. Companies will be at risk of fines and lawsuits, not to mention damage to company reputation and customer loyalty. Facebook has already set aside $3 billion to $5 billion for ongoing inquiries regarding multiple data breaches and mishandling of data; not every company can afford such a budget for non-compliance. The important thing is to take proactive steps and measures, like implementing appropriate data safeguards or deploying data protection software that will help you guide your privacy program, automate processes and navigate applicable data protection laws. GDPR requires you to implement proper technical and organizational measures to ensure a level of security appropriate to the risk (Article 32 GDPR – Security of processing). A potential data breach can cost your company more than you think. According to the Cost of a Data Breach Report 2020, conducted by the Ponemon Institute, the average total cost of a data breach is USD 3.86 million:

Figure below shows cost of a data breach reported in 2020: 

_____

Worldwide trends in Data Privacy:

A long list of data privacy law initiatives indicates an accelerating change in the way companies and individuals recognize the value and importance of protecting users’ data. Thriving businesses have already started to form their future data privacy and data protection strategies. The Big Four have each had their own struggles with positioning themselves as trustworthy companies, but they have one thing in common: they have recognized the importance of data privacy. Apple’s CEO, Tim Cook, has repeatedly given passionate speeches on data privacy, calling for a comprehensive U.S. data-privacy law focused on minimizing data collection, securing data, and informing users. Whatever the motives of these companies, one thing cannot be overlooked: International Association of Privacy Professionals (IAPP) research indicates that by 2022, half of the planet’s population will have its personal information covered under local privacy regulations in line with the GDPR.

Figure below shows data privacy regulations and enforcements worldwide:

________ 

________

Section-6

Right to privacy:   

_

Fundamental rights:

Fundamental rights are a group of rights that have been recognized as requiring a high degree of protection from encroachment. These rights are specifically identified in a constitution, or have been found under due process of law. Some universally recognised rights that are seen as fundamental, i.e., contained in the United Nations Universal Declaration of Human Rights, the U.N. International Covenant on Civil and Political Rights, or the U.N. International Covenant on Economic, Social and Cultural Rights, include the following:

Right to self-determination

Right to liberty

Right to due process of law

Right to freedom of movement

Right to privacy

Right to freedom of thought

Right to freedom of religion

Right to freedom of expression

Right to peaceful assembly

Right to freedom of association

_

The right to privacy is a basic human right, primordial right, natural right, inalienable right, and a fundamental right that differentiates mere animal existence of a human being from a dignified and meaningful one. Through its manifestations in personal privacy, informational privacy, territorial privacy, and communicational privacy through one lens, repose, sanctuary and intimate decision making through another, and physical privacy, psychological privacy and social privacy through a different lens, among others, privacy has been given a very broad connotation, and the scope and extent of the right seems extremely broad and all encompassing. The right to privacy extends to all possible extensions of the person from his body and mind to social behavior and political choices. Expression, personality, identity, data, information, knowledge, and many other manifestations of an individual form part of the right of privacy. This un-waivable, fundamental right allows a person to prohibit, regulate, and take other actions against any actual and/or foreseeable intrusions into his privacy, and this fundamental, constitutional right will trump any statutory right or limitation unless the high standards for making exceptions are met.

_

Privacy is a broad concept relating to the protection of individual autonomy and the relationship between an individual and society (including governments, companies, and other individuals). Privacy is considered essential in protecting an individual’s ability to develop ideas and personal relationships. Although it is often summarized as “the right to be left alone,” it encompasses a wide range of rights—including protection from intrusions into family and home life, control of sexual and reproductive rights, and communications secrecy. It is commonly recognized as a core right that underpins human dignity and such other values as freedom of association and freedom of speech.

The definitions of privacy and what is sensitive personal information vary among countries and individuals on the basis of past experiences and cultural understandings. Some cultures focus on community rights over individual rights; others, such as countries in Europe, are sensitive to privacy rights because of abuses going back to World War II. In matters relating to modern information and communications technologies, there is more agreement about the importance of privacy and the control of information.   

The legal right to privacy is recognized in nearly every national constitution and in most international human rights treaties, including the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights, the European Convention on Human Rights, the American Declaration of the Rights and Duties of Man, and the American Convention on Human Rights. International bodies, including the European Court of Human Rights and the United Nations (UN) Human Rights Committee, also have ruled on the right to privacy.

In the information age, the right to privacy has evolved to address issues relating to the collection, use, and dissemination of personal data in information systems. New technologies have driven the collection of personal information by governments and private bodies into databases of unprecedented breadth and depth. Governments and private organizations that collect information related to government services and obligations (including tax, medical, employment, criminal, and citizenship records) and identification technologies (including identity card systems, fingerprints, and DNA mapping) have quickly evolved and expanded. New communications technologies create and collect substantial records about individuals in the process of providing communications. Services run by governments and private operators collect information about individuals, including emails, records of persons communicated with, lists of Web sites visited, and mobile locations. And, of course, people share information through social networking sites. All of these have led to concerns about abuses, including misuse of information for unlawful purposes and identity theft.

Since the 1960s, principles governing the collection and handling of this information (known as “fair information practices”) have been developed and adopted by national governments and international bodies.

_

The terms privacy and right to privacy cannot be easily conceptualized; they have been understood in different ways in different situations. Tom Gerety said the right to privacy is bound to include bodily inviolability and integrity and the intimacy of personal identity, including marital privacy. Judge Cooley explained the law of privacy by asserting that privacy is synonymous with ‘the right to be let alone’. Edward Shils explained privacy as a ‘zero relationship between two or more persons in the sense that there is no interaction or communication between them, if they so choose’. Warren and Brandeis very eloquently explained that ‘once a civilization has made a distinction between the “outer” and “inner” man, between the life of the soul and the life of the body…. the idea of a private sphere in which man may become and remain himself’ follows. In modern society privacy has been recognized both in the eyes of the law and in common parlance, but it varies across legal systems as they emphasize different aspects. Privacy is a neutral relationship between persons or groups, or between groups and persons. Privacy is a value, a cultural state or condition directed towards individual or collective self-realization, varying from society to society.

_____

Definitions of Right to privacy:

In recent years there have been only a few attempts to clearly and precisely define the “right to privacy”. In 2005, students of the Haifa Center for Law & Technology asserted that the right to privacy “should not be defined as a separate legal right” at all; by their reasoning, existing laws relating to privacy in general should be sufficient. Other experts, such as William Prosser, have attempted, but failed, to find a “common ground” between the leading kinds of privacy cases in the court system, at least to formulate a definition. One law school treatise from Israel, however, on the subject of “privacy in the digital environment,” suggests that the “right to privacy should be seen as an independent right that deserves legal protection in itself.” It has therefore proposed a working definition for a “right to privacy”:

‘The right to privacy is our right to keep a domain around us, which includes all those things that are part of us, such as our body, home, property, thoughts, feelings, secrets, and identity. The right to privacy gives us the ability to choose which parts in this domain can be accessed by others and to control the extent, manner, and timing of the use of those parts we choose to disclose.’

An individual right:

Alan Westin believes that new technologies alter the balance between privacy and disclosure and that privacy rights may limit government surveillance to protect democratic processes. Westin defines privacy as “the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others”. Westin describes four states of privacy: solitude, intimacy, anonymity, reserve. These states must balance participation against norms:

Each individual is continually engaged in a personal adjustment process in which he balances the desire for privacy with the desire for disclosure and communication of himself to others, in light of the environmental conditions and social norms set by the society in which he lives.

— Alan Westin, Privacy and Freedom, 1967

Under liberal democratic systems, privacy creates a space separate from political life, and allows personal autonomy, while ensuring democratic freedoms of association and expression.

David Flaherty believes networked computer databases pose threats to privacy. He develops ‘data protection’ as an aspect of privacy, which involves “the collection, use, and dissemination of personal information”. This concept forms the foundation for fair information practices used by governments globally. Flaherty forwards an idea of privacy as information control, “individuals want to be left alone and to exercise some control over how information about them is used”.

Marc Rotenberg has described the modern right to privacy as Fair Information Practices: “the rights and responsibilities associated with the collection and use of personal information.” Rotenberg emphasizes that rights are allocated to the data subject and responsibilities are assigned to the data collectors because of the transfer of the data and the asymmetry of information concerning data practices.

Richard Posner and Lawrence Lessig focus on the economic aspects of personal information control. Posner criticizes privacy for concealing information, which reduces market efficiency. For Posner, employment is selling oneself in the labor market, which he believes is like selling a product. Any ‘defect’ in the ‘product’ that is not reported is fraud. For Lessig, privacy breaches online can be regulated through code and law. Lessig claims “the protection of privacy would be stronger if people conceived of the right as a property right”, and that “individuals should be able to control information about themselves”. Economic approaches to privacy make communal conceptions of privacy difficult to maintain.

A collective value and a human right:

There have been attempts to reframe privacy as a fundamental human right, whose social value is an essential component in the functioning of democratic societies. Amitai Etzioni suggests a communitarian approach to privacy. This requires a shared moral culture for establishing social order. Etzioni believes that “privacy is merely one good among many others”, and that technological effects depend on community accountability and oversight. He claims that privacy laws only increase government surveillance.

Priscilla Regan believes that individual concepts of privacy have failed philosophically and in policy. She supports a social value of privacy with three dimensions: shared perceptions, public values, and collective components. Shared ideas about privacy allow freedom of conscience and diversity in thought. Public values guarantee democratic participation, including freedoms of speech and association, and limits government power. Collective elements describe privacy as a collective good that cannot be divided. Regan’s goal is to strengthen privacy claims in policy making: “if we did recognize the collective or public-good value of privacy, as well as the common and public value of privacy, those advocating privacy protections would have a stronger basis upon which to argue for its protection”.

Leslie Regan Shade argues that the human right to privacy is necessary for meaningful democratic participation, and ensures human dignity and autonomy. Privacy depends on norms for how information is distributed, and if this is appropriate. Violations of privacy depend on context. The human right to privacy has precedent in the United Nations Declaration of Human Rights. Shade believes that privacy must be approached from a people-centered perspective, and not through the marketplace.

_

Privacy, as defined in Black’s Law Dictionary, is the right of a person and the person’s property to be free from unwarranted public scrutiny and exposure. Privacy as a right has changed by leaps and bounds in recent times. One of the most basic liberties of the individual, after the right to life, is the right to privacy, which has been incorporated into legal systems through legislative measures or through judicial pronouncements in various jurisdictions. The right to privacy holds a high pedestal because privacy helps to create barriers and manage boundaries that defend us from unwarranted interference in our personal lives, and allows us to negotiate who we are and how we desire to engage with the outside world. Privacy essentially limits access to domains related to us, for example by limiting who has access to our personal details, communications and information. It is also significant to bear in mind that a slew of international conventions and charters exist to reinforce the norm that the right to privacy is an essential component of human life, making life more than mere animal existence. Some of the international conventions and charters which uphold the right to privacy are as follows:

United Nations Declaration of Human Rights (UDHR) 1948, Article 12: “No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.”

International Covenant on Civil and Political Rights (ICCPR) 1966, Article 17: “1. No one shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence, nor to unlawful attacks on his honour or reputation. 2. Everyone has the right to the protection of the law against such interference or attacks.”

The right to privacy is also included in:

Article 14 of the United Nations Convention on Migrant Workers;

Article 16 of the UN Convention on the Rights of the Child;

Article 10 of the African Charter on the Rights and Welfare of the Child;

Article 4 of the African Union Principles on Freedom of Expression (the right of access to information);

Article 11 of the American Convention on Human Rights;

Article 5 of the American Declaration of the Rights and Duties of Man,

Articles 16 and 21 of the Arab Charter on Human Rights;

Article 21 of the ASEAN Human Rights Declaration; and

Article 8 of the European Convention on Human Rights.

Over 130 countries have constitutional statements regarding the protection of privacy, in every region of the world.

An important element of the right to privacy is the right to protection of personal data. While the right to data protection can be inferred from the general right to privacy, some international and regional instruments also stipulate a more specific right to protection of personal data, including:

-the OECD’s Guidelines on the Protection of Privacy and Transborder Flows of Personal Data,

-the Council of Europe Convention 108 for the Protection of Individuals with Regard to the Automatic Processing of Personal Data,

-a number of European Union Directives and its pending Regulation, and the European Union Charter of Fundamental Rights,

-the Asia-Pacific Economic Cooperation (APEC) Privacy Framework 2004, and

-the Economic Community of West African States’ Supplementary Act on data protection from 2010.

Over 100 countries now have some form of privacy and data protection law.

However, it is all too common that surveillance is implemented without regard to these protections. That’s one of the reasons why Privacy International is around — to make sure that the powerful institutions such as governments and corporations don’t abuse laws and loopholes to invade your privacy.

_____

The right to privacy is an element of various legal traditions to restrain governmental and private actions that threaten the privacy of individuals.  Over 150 national constitutions mention the right to privacy. Since the global surveillance disclosures of 2013, initiated by ex-NSA employee Edward Snowden, the inalienable human right to privacy has been a subject of international debate. Government agencies, such as the NSA, CIA, R&AW and GCHQ, have engaged in mass, global surveillance. Some current debates around the right to privacy include whether privacy can co-exist with the current capabilities of intelligence agencies to access and analyze many details of an individual’s life; whether or not the right to privacy is forfeited as part of the social contract to bolster defense against supposed terrorist threats; and whether threats of terrorism are a valid excuse to spy on the general population.

Private sector actors can also threaten the right to privacy—particularly technology companies, such as Amazon, Apple, Facebook, Google, and Yahoo, that collect and use personal data. These concerns have been strengthened by scandals, including the Facebook–Cambridge Analytica data scandal, which centered on the psychographic consulting company Cambridge Analytica’s use of personal data from Facebook to influence large groups of people.

_____

Right to privacy: American perspective:  

The right to privacy refers to the concept that one’s personal information is protected from public scrutiny. U.S. Justice Louis Brandeis called it “the right to be left alone.” While not explicitly stated in the U.S. Constitution, some amendments provide some protections. The right to privacy most often is protected by statutory law. For example, the Health Insurance Portability and Accountability Act (HIPAA) protects a person’s health information, and the Federal Trade Commission (FTC) enforces the right to privacy in various privacy policies and privacy statements. The right to privacy often must be balanced against the state’s compelling interests, including the promotion of public safety and improving the quality of life. Seat-belt laws and motorcycle helmet requirements are examples of such laws.

Constitutional rights:

The right to privacy often means the right to personal autonomy, or the right to choose whether or not to engage in certain acts or have certain experiences. Several amendments to the U.S. Constitution have been used, with varying degrees of success, in determining a right to personal autonomy:

The First Amendment protects the privacy of beliefs

The Third Amendment protects the privacy of the home against the use of it for housing soldiers

The Fourth Amendment protects privacy against unreasonable searches

The Fifth Amendment protects against self-incrimination, which in turn protects the privacy of personal information

The Ninth Amendment says that the “enumeration in the Constitution of certain rights shall not be construed to deny or disparage other rights retained by the people.” This has been interpreted as justification for broadly reading the Bill of Rights to protect privacy in ways not specifically provided in the first eight amendments.

The right to privacy is most often cited in the Due Process Clause of the 14th Amendment, which states:

No state shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any state deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws.

However, the protections have been narrowly defined and usually only pertain to family, marriage, motherhood, procreation and child rearing.

For example, the Supreme Court first recognized that the various Bill of Rights guarantees create a “zone of privacy” in Griswold v. Connecticut, a 1965 ruling that upheld marital privacy and struck down bans on contraception.

The court ruled in 1969 that the right to privacy protected a person’s right to possess and view pornography in his own home. Justice Thurgood Marshall wrote in Stanley v. Georgia that, “If the First Amendment means anything, it means that a State has no business telling a man, sitting alone in his own house, what books he may read or what films he may watch.”

The controversial case Roe v. Wade in 1973 firmly established the right to privacy as fundamental, and required that any governmental infringement of that right be justified by a compelling state interest. In Roe, the court ruled that the state’s compelling interest in preventing abortion and protecting the life of the mother outweighs a mother’s personal autonomy only after viability. Before viability, the mother’s right to privacy limits state interference due to the lack of a compelling state interest.

In 2003, the court, in Lawrence v. Texas, overturned an earlier ruling and found that Texas had violated the rights of two gay men when it enforced a law prohibiting sodomy. Justice Anthony Kennedy wrote, “The petitioners are entitled to respect for their private lives. The State cannot demean their existence or control their destiny by making their private sexual conduct a crime. Their right to liberty under the Due Process Clause gives them the full right to engage in their conduct without intervention of the government.”

______

______

Indian Supreme Court Declares Right to Privacy A Fundamental Right:

On 24 August 2017, the Supreme Court of India in a historic judgement declared the right to privacy a fundamental right protected under the Indian Constitution. In declaring that this right stems from the fundamental right to life and liberty, the Court’s decision has far-reaching consequences. A nine-judge bench of the Supreme Court in the case of Puttaswamy v. Union of India declared that the right to privacy is a fundamental right protected under Part III of the Constitution of India. While primarily focused on the individual’s right against the State for violations of their privacy, this landmark judgement will have repercussions across both State and non-State actors and will likely result in the enactment of a comprehensive law on privacy.

The judgement was pronounced in response to a reference made in connection with the legal challenge to India’s national identity project – Aadhaar – during which the Attorney General of India argued that the Indian Constitution does not include within it a fundamental right to privacy. His arguments were based on two cases decided by the Supreme Court – one, MP Sharma v. Satish Chandra, decided by an eight-judge bench in 1954, and the other, Kharak Singh v. State of Uttar Pradesh, by six judges in 1962. Both cases had held, in different circumstances, that the Constitution of India does not specifically protect the right to privacy. In the 55 years that had passed since these cases were decided, no larger bench of the Supreme Court had considered this issue, and therefore these judgements were still binding.

The key points of the judgement are summarized below:

-(a) Right to Privacy – A Fundamental Right

The Supreme Court confirmed that the right to privacy is a fundamental right that does not need to be separately articulated but can be derived from Articles 14, 19 and 21 of the Constitution of India. It is a natural right that subsists as an integral part of the right to life and liberty. It is a fundamental and inalienable right that attaches to the person, covering all information about that person and the choices that he/she makes. It protects an individual from the scrutiny of the State in their home, of their movements and over their reproductive choices, choice of partners, food habits, etc. Therefore, any action by the State that results in an infringement of the right to privacy is subject to judicial review.

-(b) Not an Absolute Right – Subject to Reasonable Restrictions

The Supreme Court was at pains to clarify that the fundamental right to privacy is not absolute and will always be subject to reasonable restrictions. It held that the State can impose restrictions on the right to privacy to protect legitimate State interests but it can only do so by following the three-pronged test summarized below:

-1. Existence of a law that justifies an encroachment on privacy;

-2. A legitimate State aim or need that ensures that the nature or the content of this law falls within the zone of reasonableness and operates to guard against arbitrary State action; and

-3. The means adopted by the State are proportional to the objects and needs sought to be fulfilled by the law.

Consequently, all State action that could have an impact on privacy will now have to be measured against this three-fold test. This is likely to have an impact on several ongoing projects including most importantly, the Aadhaar identity project.

-(c) Other Incidental Implications

There are several additional implications of this judgement on matters incidental to the principal issue decided by the Court:

-1. By expressly recognising an individual’s right to privacy regarding his sexual choices, the judgement is likely to have an impact on the petition pending before the Supreme Court on the de-criminalisation of homosexuality in India.

-2. To the extent that the judgement has stated that the State cannot interfere in the food choices of an individual it will have an impact on the various cases protesting the ban on beef imposed by certain States.

-3. The judgement has also made several observations on the complex relationship between personal privacy and big data, particularly in the context of how the judicious use of these technologies can result in the State achieving its legitimate interests with greater efficiencies.

-4.  It has also recognized the impact that non-State actors can have on personal privacy particularly in the context of informational privacy on the Internet. While fundamental rights are ordinarily only enforced against actions of the State, given the broad language of the judgement and the extent to which informational privacy has been referred to in the judgement, there is concern amongst certain experts that these principles will extend to the private sector as well.

_

There is a lot of confusion about what the Fundamental Right to Privacy means and what it doesn’t. The following 10 points from the SC verdict will make it clear:

-1. Privacy is a constitutionally protected right which emerges primarily from the guarantee of life and personal liberty in Article 21 of the Constitution. Elements of privacy also arise in varying contexts from the other facets of freedom and dignity recognised and guaranteed by the fundamental rights contained in Part III.

-2. The SC judgement recognising the existence of a constitutional right of privacy was not an exercise in the nature of amending the Constitution, nor had the Court embarked on a constitutional function of that nature, which is entrusted to Parliament.

-3. According to the SC verdict, privacy is the constitutional core of human dignity. It has both a normative and a descriptive function. At a normative level, privacy sub-serves those eternal values upon which the guarantees of life, liberty and freedom are founded. At a descriptive level, privacy postulates a bundle of entitlements and interests which lie at the foundation of ordered liberty.

-4. What privacy includes

  • Preservation of personal intimacies, the sanctity of family life, marriage, procreation, home and sexual orientation.
  • A right to be left alone.
  • Safeguards individual autonomy, recognises the ability of the individual to control vital aspects of his or her life.
  • Personal choices governing a way of life are intrinsic to privacy.
  • Protection of heterogeneity and recognition of plurality and diversity of our culture.

-5. Privacy is not surrendered when a person is in a public place

  • Being in a public place does not mean an individual has surrendered privacy, even as the legitimate expectation of privacy may vary from the intimate zone to the private zone and from the private to the public arena.
  • Privacy attaches to the person since it is an essential facet of the dignity of the human being.

-6. What Right to Privacy doesn’t mean

Not an absolute right: Like other rights which form part of the fundamental freedoms protected by Part III, including the right to life and personal liberty under Article 21, privacy is not an absolute right.

-7. Can law/state encroach upon privacy?

  • According to SC, a law which encroaches upon privacy will have to withstand the touchstone of permissible restrictions on fundamental rights.
  • In the context of Article 21, an invasion of privacy must be justified on the basis of a law which stipulates a procedure which is fair, just and reasonable.
  • The law must also be valid with reference to the encroachment on life and personal liberty under Article 21.
  • An invasion of life or personal liberty must meet the three-fold requirement of (i) legality, which postulates the existence of law; (ii) need, defined in terms of a legitimate state aim; and (iii) proportionality which ensures a rational nexus between the objects and the means adopted to achieve them. 

-8. Privacy has both positive and negative content.

  • The negative content of privacy restrains the state from committing an intrusion upon the life and personal liberty of a citizen.
  • The positive content of the right to privacy imposes an obligation on the state to take all necessary measures to protect the privacy of the individual.

-9. Robust regime needed for data protection

Informational privacy is a facet of the right to privacy. The court observed that dangers to privacy in an age of information can originate not only from the state but from non-state actors as well. The Union Government, hence, needs to examine and put into place a robust regime for data protection. The creation of such a regime would require a careful and sensitive balance between individual interests and legitimate concerns of the state.

-10. For what reasons state can encroach upon individual’s privacy?

According to the judgement, the legitimate aims of the state should be “protecting national security, preventing and investigating crime, encouraging innovation and the spread of knowledge, and preventing the dissipation of social welfare benefits.” These matters should be considered by the Union government while designing the regime for the protection of the data.

_______

_______

Section-7

Why do we need privacy?   

_

Privacy is important for a number of reasons. Some have to do with the consequences of not having privacy. People can be harmed or debilitated if there is no restriction on the public’s access to and use of personal information. Other reasons are more fundamental, touching the essence of human personhood. Reverence for the human person as an end in itself and as an autonomous being requires respect for personal privacy. To lose control of one’s personal information is in some measure to lose control of one’s life and one’s dignity. Therefore, even if privacy is not in itself a fundamental right, it is necessary to protect other fundamental rights. 

What happens when privacy is violated?

-1. The more widely sensitive information is disseminated, the greater the danger of error, misunderstanding, discrimination, prejudice and other abuses.

-2. The lack of privacy can inhibit personal development, and freedom of thought and expression.

-3. It makes it more difficult for individuals to form and manage appropriate relationships.

-4. It restricts individuals’ autonomy by giving them less control over their lives and in particular less control over the access others have to their lives.

-5.  It is an affront to the dignity of the person.

-6.  It leaves individuals more vulnerable to the power of government and other large institutions.

_

The most important arguments in favour of privacy:

-1. Protection from the Misuse of Personal Information

There are many ways a person can be harmed by the revelation of sensitive personal information. Medical records, psychological tests and interviews, court records, financial records–whether from banks, credit bureaus or the IRS–welfare records, sites visited on the Internet and a variety of other sources hold many intimate details of a person’s life. The revelation of such information can leave the subjects vulnerable to many abuses.

-2. Privacy and Relationship

Privacy is also needed in the ordinary conduct of human affairs, to facilitate social interchange. James Rachels, for example, argues that privacy is an essential prerequisite for forming relationships. The degree of intimacy in a relationship is determined in part by how much personal information is revealed. One reveals things to a friend that one would not disclose to a casual acquaintance. What one tells one’s spouse is quite different from what one would discuss with one’s employer. This is true of more functional relationships as well. People tell things to their doctors or therapists that they do not want anyone else to know, for example. These privileged relationships, whether personal or functional, require a special level of openness and trust that is only possible if there is an assurance that what is revealed will be kept private. As Rachels points out, a husband and wife will behave differently in the presence of a third party than when they are alone. If they were always under observation, they could not enjoy the degree of intimacy that a marriage should have. Charles Fried puts it more broadly. Privacy, he writes, is “necessarily related to ends and relations of the most fundamental sort: respect, love, friendship and trust… without privacy they are simply inconceivable.”

-3. Autonomy

The analysis of Rachels and Fried suggests a deeper and more fundamental issue: personal freedom. As Deborah Johnson has observed, “To recognize an individual as an autonomous being, an end in himself, entails letting that individual live his life as he chooses. Of course, there are limits to this, but one of the critical ways that an individual controls his life is by choosing with whom he will have relationships and what kind of relationships these will be…. Information mediates relationships. Thus when one cannot control who has information about one, one loses considerable autonomy.”

-4. Human Dignity

Autonomy is part of the broader issue of human dignity, that is, the obligation to treat people not merely as means, to be bought and sold and used, but as valuable and worthy of respect in themselves. As the foregoing has made clear, personal information is an extension of the person. To have access to that information is to have access to the person in a particularly intimate way. When some personal information is taken and sold or distributed, especially against the person’s will, whether it is a diary or personal letters, a record of buying habits, grades in school, a list of friends and associates or a psychological history, it is as if some part of the person has been alienated and turned into a commodity. In that way the person is treated merely as a thing, a means to be used for some other end.

-5. Privacy and Power

Privacy is even more necessary as a safeguard of freedom in the relationships between individuals and groups. As Alan Westin has pointed out, surveillance and publicity are powerful instruments of social control.  If individuals know that their actions and dispositions are constantly being observed, commented on and criticized, they find it much harder to do anything that deviates from accepted social behavior. There does not even have to be an explicit threat of retaliation. “Visibility itself provides a powerful method of enforcing norms.”  Most people are afraid to stand apart, to be different, if it means being subject to piercing scrutiny. The “deliberate penetration of the individual’s protective shell, his psychological armour, would leave him naked to ridicule and shame and would put him under the control of those who know his secrets.”  Under these circumstances they find it better simply to conform. This is the situation characterized in George Orwell’s 1984 where the pervasive surveillance of “Big Brother” was enough to keep most citizens under rigid control.

Therefore privacy, as protection from excessive scrutiny, is necessary if individuals are to be free to be themselves. Everyone needs some room to break social norms, to engage in small “permissible deviations” that help define a person’s individuality. People need to be able to think outrageous thoughts, make scandalous statements and pick their noses once in a while. They need to be able to behave in ways that are not dictated to them by the surrounding society. If every appearance, action, word and thought of theirs is captured and posted on a social network visible to the rest of the world, they lose that freedom to be themselves. This ability to develop one’s unique individuality is especially important in a democracy, which values and depends on creativity, nonconformism and the free interchange of diverse ideas. That is where a democracy gets its vitality. Governments do need information, including personal information, to govern effectively and to protect the security of their citizens. But citizens also need protection from the overzealous or malicious use of that information, especially by governments that, in this age, have enormous bureaucratic and technological power to gather and use the information.

Privacy is a limit on government power, as well as the power of private sector companies. The more someone knows about us, the more power they can have over us. Personal data is used to make very important decisions in our lives. Personal data can be used to affect our reputations; and it can be used to influence our decisions and shape our behavior. It can be used as a tool to exercise control over us. And in the wrong hands, personal data can be used to cause us great harm.

-6. Reputation Management

Privacy enables people to manage their reputations. How we are judged by others affects our opportunities, friendships, and overall well-being. Although we can’t have complete control over our reputations, we must have some ability to protect our reputations from being unfairly harmed. Protecting reputation depends on protecting against not only falsehoods but also certain truths. Knowing private details about people’s lives doesn’t necessarily lead to more accurate judgment about people. People judge badly, they judge in haste, they judge out of context, they judge without hearing the whole story, and they judge with hypocrisy. Privacy helps people protect themselves from these troublesome judgments.

-7. Maintaining Appropriate Social Boundaries

People establish boundaries from others in society. These boundaries are both physical and informational. We need places of solitude to retreat to, places where we are free of the gaze of others in order to relax and feel at ease. We also establish informational boundaries, and we have an elaborate set of these boundaries for the many different relationships we have. Privacy helps people manage these boundaries. Breaches of these boundaries can create awkward social situations and damage our relationships. Privacy is also helpful to reduce the social friction we encounter in life. Most people don’t want everybody to know everything about them – hence the phrase “none of your business.” And sometimes we don’t want to know everything about other people — hence the phrase “too much information.”

-8. Control Over One’s Life

Personal data is essential to so many decisions made about us, from whether we get a loan, a license or a job to our personal and professional reputations. Personal data is used to determine whether we are investigated by the government, or searched at the airport, or denied the ability to fly. Indeed, personal data affects nearly everything, including what messages and content we see on the Internet. Without knowing what data is being used and how it is being used, and without the ability to correct and amend it, we are virtually helpless in today’s world. Moreover, we are helpless without the ability to have a say in how our data is used or the ability to object and have legitimate grievances be heard when data uses can harm us. One of the hallmarks of freedom is having autonomy and control over our lives, and we can’t have that if so many important decisions about us are being made in secret without our awareness or participation.

-9. Freedom of Thought, Speech and Movement

Privacy is key to freedom of thought. A watchful eye over everything we read or watch can chill us from exploring ideas outside the mainstream. Privacy is also key to protecting the ability to speak unpopular messages. And privacy doesn’t just protect fringe activities. We may want to criticize people we know to others yet not share that criticism with the world. A person might want to explore ideas that their family or friends or colleagues dislike. When privacy is curtailed through digital spying on places and persons, it also amounts to curtailing one’s freedom of movement. Nowadays we find unnecessary surveillance in buildings, including educational and health institutions, which makes people feel insecure.

-10. Freedom of Social and Political Activities

Privacy helps protect our ability to associate with other people and engage in political activity. A key component of freedom of political association is the ability to do so with privacy if one chooses. We protect privacy at the ballot because of the concern that failing to do so would chill people from voting their true conscience. Privacy of the associations and activities that lead up to going to the voting booth matters as well, because this is how we form and discuss our political beliefs. The watchful eye can disrupt and unduly influence these activities.

-11. Ability to Change and Have Second Chances

Many people are not static; they change and grow throughout their lives. There is a great value in the ability to have a second chance, to be able to move beyond a mistake, to be able to reinvent oneself. Privacy nurtures this ability. It allows people to grow and mature without being shackled with all the foolish things they might have done in the past. Certainly, not all misdeeds should be shielded, but some should be, because we want to encourage and facilitate growth and improvement.

-12. Not Having to Explain or Justify Oneself

An important reason why privacy matters is not having to explain or justify oneself. We may do a lot of things which, if judged from afar by others lacking complete knowledge or understanding, may seem odd or embarrassing or worse. It can be a heavy burden if we constantly have to wonder how everything we do will be perceived by others and have to be at the ready to explain.

________

Why is privacy important in business?

Consumer Privacy:

Consumer privacy measures are those taken by commercial organizations to ensure that confidential customer data is not stolen or abused. Since most such organizations have a strong competitive incentive to retain exclusive access to these data, and since customer trust is usually a high priority, most companies take some security engineering measures to protect consumer privacy. However, these vary in effectiveness, and would not typically meet the much higher standards of client confidentiality applied by ethical or legal codes in banking or law, nor patient privacy measures in medicine, nor rigorous “national security” measures in military and intelligence organizations.

Consumer privacy laws and regulations seek to protect any individual from loss of privacy due to failures or limitations of corporate consumer privacy measures. They recognize that the damage done by privacy loss is typically not measurable, nor can it be undone, and that commercial organizations have little or no interest in taking unprofitable measures to drastically increase privacy of customers – indeed, their motivation is very often quite the opposite, to share data for commercial advantage, and to fail to officially recognize it as sensitive, so as to avoid legal liability for lapses of security that may occur.

Consumer privacy concerns date back to the first commercial couriers and bankers, who in every culture took strong measures to protect consumer privacy, but also in every culture tended to be subject to very harsh punitive measures for failures to keep a customer’s information private.

The Hippocratic Oath includes a requirement for doctors to avoid mentioning ills of patients to others, not only to protect them, but to protect their families – the same basic idea as modern consumer privacy law and regulation, which recognizes that innocent third parties can be harmed by the loss of control of sensitive information, and that therefore there is a responsibility beyond that to the ‘customer’ or ‘client’. Today the ethical codes of most professions very clearly specify privacy measures beyond that for the ‘consumer’ of an arbitrary service.

_

Gaining and maintaining the trust of your consumers is crucial if you ever want to grow your business. And that’s a tough sell, especially these days, where it seems information is being bought and traded and sold left and right. Most individuals will share their personal information with you if they trust your company, but to lose that trust can bring everything down upon you. Customer privacy has always been important, but now, more than ever. It’s something that has the potential to impact your brand, disrupt the customer experience, and potentially damage your reputation.

Consumers are more connected these days. They’re spending more time online and sharing more information than before. They’re researching, taking advantage of online services, and purchasing items online through computers, phones, and tablets. This information is collected by their mobile operators, internet providers, device manufacturers, and the apps they use, either for these companies’ own purposes or to sell to other businesses. Consumers are also well-connected socially, sharing their trials and triumphs along with pictures and locations all over Facebook and Instagram. While they’re happy to share minor details, more personal information is kept a little closer to the chest. They’re concerned about businesses collecting and selling personal information without permission.

A recent series of high-profile data breaches has brought the shortcomings of data protection to light. They signify what can happen when corporations fail to protect consumer data from internet hacking. The offending companies include Equifax, Yahoo, Target, Uber, and Home Depot. But these breaches aren’t just problems that plague giants like Facebook and Google. Many smaller companies have also lost customer trust or have been sued over privacy mishaps in recent years. They’re likely to face even more problems as digital data files grow not only in size, but in importance to modern business.

-1. Competitive Advantage:

While basic risk management may seem like the obvious answer as to why protecting your customers’ privacy is so important, it’s a little more complex than that. Maintaining impeccable privacy and security could put you at a competitive advantage over your competition. Done right, privacy could be a cornerstone of building your brand and corporate reputation.

Protecting user privacy can enable you to drive more revenue and gain more customers. A little more than two-thirds of consumers believe that privacy practices are related to a company’s trustworthiness, outranked only by a company’s dependability and pricing practices, and only by a small margin.

-2. Privacy matters to your customers:

Consumers have become increasingly connected and are constantly sharing information online. They are researching, purchasing and using online products and services, via any number of connected devices. They are also opting in to share their preferences as part of interactions on social media and search sites. All of this customer data is being collected by device manufacturers, desktop and mobile apps, internet providers and mobile operators for their own purposes or to sell to other businesses.

In many cases, consumers are happy to share information like photos, opinions and locations over Facebook, Snapchat and Twitter. But, when it comes to other, often highly personal aspects of their life — health, wealth and family — they are more protective.

According to a recent survey conducted by AnchorFree, a staggering majority of Americans — 95 percent — are concerned about businesses collecting and selling personal information without permission. Additionally, over 80 percent are more concerned about their online privacy and security today than a year ago. This means that your customers are thinking about privacy when they visit your website, use your app, and purchase your products and services.

-3. To build customer loyalty:

In 2017, research firm Baringa Partners conducted a survey about consumer attitudes toward data protection. Here’s a portion of their findings:

“Our results reveal companies risk losing up to 55% of customers if they suffer a significant personal data leak. We looked at consumer attitudes towards companies in the banking, insurance, energy, and TV, phone and internet sectors. We found that, in the event of a data breach, 30% of people would switch provider immediately and a further 25% would wait to see a media response or what others say and do before switching to another provider.”

And it isn’t just large corporations at risk. All sizes of organizations will lose customers following a breach. In the USA, a recent Bank of America research report revealed that “nearly 40 percent of consumers have had their credit or debit card, bank account or other personal financial information stolen. And 20 percent of those consumers who have had their information stolen said they would not shop with a small business that has experienced a data breach.”

-4. To support innovation:

Too many people claim that building security and privacy controls into new technologies, products and services stifles innovation. That is complete hogwash! Actually, when privacy is purposefully addressed within new innovations, it expands and improves innovations. It does not inhibit them. The public is demanding that privacy be protected. Privacy should be viewed as not just a differentiator or something to be done if legally required, but a standard requirement for any new technology or service involving personal data. It takes more innovation to create secure privacy-protecting devices that mitigate privacy risk than it does to simply leave out such controls.

-5. To meet compliance requirements:

This is the benefit that is most often touted. Organizations that do not implement privacy protection face huge fines in the tens of millions of dollars (now the possibilities are even higher under the EU GDPR) and up to 20-year penalties for non-compliance with laws, regulations, standards and their own published privacy and security notices. Organizations also risk losing valuable business relationships by not complying with their contractual requirements for privacy protections.

______

Why privacy at work is important:

-1. Privacy is important because it is required for ideas to gain traction, especially if they are new (or even subversive). Every new idea requires a quiet discussion between trusted colleagues before it goes before a larger group. The conversation on the down-low is what allows the idea to be developed and tested. The riskier the idea or the more it pushes the envelope of the current system, the more it requires confidential conversations to process and incubate.

-2. Privacy is also required for creativity. Far from a process that always emphasizes brainstorming in big groups, creativity must also include quiet moments for reflection and focus. It is a process that flows between groups and individuals and between convergent and divergent thinking.

-3. Privacy facilitates focus. In his book Deep Work, Cal Newport argues we need more focus. In a world marked by superficially scanning and skimming from one topic to the next, we must make room for profound thinking, reflection, attention, and concentration. These are best done with some privacy.

-4. Privacy is related to engagement. When people have greater amounts of choice in their workplace, they tend to be more engaged. Choice implies various levels of privacy as well as spaces that offer differing levels of buzz, postures, and connection.

Whether privacy is facilitating innovation and the creative process, focus, or engagement, it translates into effectiveness. People need some level of privacy to be at their best.

________

________

Privacy, personal space and stress:

Personal space is a private, intimate, and exclusive territory which nobody can invade or claim. The term does not refer only to physical space; it also covers the invasion of this area by other stimuli: noise, emotions transmitted by others, an overload of information, and the constant interruption of our moments of solitude and privacy. People need safe personal space to feel protected, reduce stress, and remain focused. Invasion of privacy in personal space generates high levels of stress and discomfort. Personal space refers not only to the area in which we can tolerate the physical presence of others but also to the voices, breathing, or body heat of others that can make us feel uncomfortable and even threatened. Personal space is also a bubble which any kind of psycho-sensory stimulation can pop. In other words, things such as furniture, decoration, lack of illumination, or the smell of a particular environment can also be a source of stress. Likewise, not being able to take a break, or being constantly supervised or controlled, are clear invasions of our privacy as well.

A couple become parents and feel overwhelmed. The stress they experience has nothing to do with their baby, but with their environment: family, friends, and co-workers. Ever since they were in the hospital, their personal space has been continuously invaded by their loved ones: people full of excitement and good intentions who took turns meeting the newborn, picking him up, and giving the parents a thousand parenting tips. This small example shows how the environment can sometimes invade the personal bubble which we need to preserve for ourselves. You don’t need to enter an elevator full of people to experience discomfort. The most serious “aggressions” often come from those closest to us.

Take care of yourself and protect your personal space:

Ralph Adolphs and Daniel P. Kennedy, neuroscientists at the California Institute of Technology (Caltech) in the United States, discovered that there is a structure in our brain which is responsible for telling us where the limits of our personal space lie. This structure is the amygdala, a small region associated with fear and the survival instinct. This discovery reveals something essential: the brain measures the personal limits of each individual. It acts like a personal alarm which tells us when something or someone is bothering us, when something is invading our privacy or violating our integrity to the point of becoming a threat to our well-being. These limits are different for each person. Some people feel overwhelmed and get easily stressed by minimal stimuli, while others have much greater tolerance.

Proxemics is the science that studies how we use space and the effect that proximity to others has on us. It reminds us that one of our greatest sources of anxiety is witnessing how we feel more “crowded” every day in every way. Not only do we have smaller physical spaces for everything; on top of that, we receive so many stimuli, so much pressure, and so many interactions around us that we don’t get to set any filters. We let everything come our way, and we allow ourselves to be caught and surrounded.

Solution:

We must be capable of managing our personal limits. This means learning how to place both physical and psychological distance between ourselves and the external dynamics which attack our privacy and act as powerful sources of stress. Sometimes our colleagues invade our space; sometimes very noisy, excessively colorful, tiny, or oppressive environments are the problem. In other cases, the problem is our own inability to say no and to make clear what we can and cannot tolerate. Being explicit when indicating where our personal boundaries lie will help us interact much better with others, because only by doing so will we be able to shape a more respectful, productive and, above all, healthy social environment.

_______

_______

Section-8

‘Nothing to hide’ as counter-argument to privacy:

In the past, thieves, private detectives, or law enforcement agencies were the ones to be feared with regard to privacy. Today, the surveillance cameras of public authorities and the data centers of advertising companies are the big bogeymen, because they create profiles of all people who move around the Internet. It is all about the big picture. Over the years, we have often heard a flawed counter-argument to privacy: “Why should I care? I have nothing to hide.”

Well. Just because you’re not doing something wrong doesn’t mean you shouldn’t be allowed privacy.

_

Nothing to hide argument:

The nothing to hide argument states that government surveillance programs do not threaten privacy unless they uncover illegal activities, and that if they do uncover illegal activities, the person committing these activities does not have the right to keep them private. A person who favors this argument may state “I’ve got nothing to hide” and therefore does not express opposition to government surveillance. An individual using this argument may say that a person should not worry about government surveillance if they have “nothing to hide.” The motto “If you’ve got nothing to hide, you’ve got nothing to fear” has been used to promote the United Kingdom’s closed-circuit television programme.

This argument is commonly used in discussions regarding privacy. Geoffrey Stone, a legal scholar, said that the use of the argument is “all-too-common”. Bruce Schneier, a data security expert and cryptographer, described it as the “most common retort against privacy advocates.” Colin J. Bennett, author of The Privacy Advocates, said that an advocate of privacy often “has to constantly refute” the argument. Bennett explained that most people “go through their daily lives believing that surveillance processes are not directed at them, but at the miscreants and wrongdoers” and that “the dominant orientation is that mechanisms of surveillance are directed at others” despite “evidence that the monitoring of individual behavior has become routine and everyday”.

______

In favor of the argument:

When discussing the MAINWAY program, former U.S. Senate majority leader Trent Lott stated “What are people worried about? What is the problem? Are you doing something you’re not supposed to?” Johann Hari, a British writer, argued that the “nothing to hide” argument is irrelevant to the placement of CCTV cameras in public places in the United Kingdom because the cameras are in public areas where one is observed by many unfamiliar people, not in “places where you hide”. In November 2015, British Conservative Party MP Richard Graham was accused of quoting Joseph Goebbels in defending a new surveillance bill with the words “if you’ve nothing to hide you have nothing to fear”. Former Conservative Foreign Secretary William Hague also used the same phrase in 2013.

During the right to privacy hearings before the nine-judge bench of the Indian Supreme Court, it was argued on behalf of the state of Gujarat that privacy claims are only made by those who have done something wrong. Unfortunately, arguments such as these, namely the “I have got nothing to hide” argument, represent a common misconception of the meaning and value of the right to privacy. Under this view, only people with something to hide, or those who have done something wrong, are concerned about the loss of privacy. If you have nothing to hide, then information about you cannot really be used against you. Thus, the argument proceeds, no harm should be caused to you by the breach of your privacy.

____

Against the argument:

Some harm is caused to us when our privacy is breached. That is why we draw curtains at our homes or keep private diaries. The right to one’s privacy, family, home or correspondence, has long been recognized internationally. We do not want our neighbours, or the state, to know what happens inside our homes or inside our heads unless we choose to share that information with them. We cherish private spaces to do and be as we like, free from the gaze of others, and not because something immoral or illegal is transpiring inside our homes. The “nothing to hide” argument makes an incorrect moral judgement about the kinds of information people want to hide.

It also wrongfully equates privacy with secrecy, even though they are distinct concepts. Privacy is about exercising the choice to withhold information, which others have no need to know. Secrecy, on the other hand, is about withholding information that people may have a right to know. As historian Jill Lepore explains, “Secrecy is what is known, but not to everyone. Privacy is what allows us to keep what we know to ourselves.” The “nothing-to-hide” paradigm evaluates any breach of privacy only from the perspective of disclosure of unwanted information. Nevertheless, privacy is a much richer concept than secrecy. The right to privacy includes a bundle of rights such as the privacy of beliefs, thoughts, personal information, home, and property. In fact, as far back as 1890, privacy was understood as the “right to be let alone”, a fact missed by the “nothing to hide” paradigm. Today, privacy is regarded as central to our identity, dignity, ability to have intimacy, and meaningful inter-personal relations. It determines our interaction with our peers, society, and the state.

Edward Snowden: “Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.”  “When you say, ‘I have nothing to hide,’ you’re saying, ‘I don’t care about this right.’ You’re saying, ‘I don’t have this right, because I’ve got to the point where I have to justify it.’ The way rights work is, the government has to justify its intrusion into your rights.”

Daniel J. Solove stated that he opposes the argument; he stated that a government can leak information about a person and cause damage to that person, or use information about a person to deny access to services even if a person did not actually engage in wrongdoing, and that a government can cause damage to one’s personal life through making errors. Solove wrote “When engaged directly, the nothing-to-hide argument can ensnare, for it forces the debate to focus on its narrow understanding of privacy. But when confronted with the plurality of privacy problems implicated by government data collection and use beyond surveillance and disclosure, the nothing-to-hide argument, in the end, has nothing to say.”

Danah Boyd, a social media researcher, opposes the argument, stating that even though “people often feel immune from state surveillance because they’ve done nothing wrong,” an entity or group can distort a person’s image and harm one’s reputation, or guilt by association can be used to defame a person.

Adam D. Moore, author of Privacy Rights: Moral and Legal Foundations, argued, “it is the view that rights are resistant to cost/benefit or consequentialist sort of arguments. Here we are rejecting the view that privacy interests are the sorts of things that can be traded for security.” He also stated that surveillance can disproportionately affect certain groups in society based on appearance, ethnicity, and religion. Moore maintains that there are at least three other problems with the “nothing to hide” argument. First, if individuals have privacy rights, then invoking “nothing to hide” is irrelevant. Privacy, understood as a right to control access to and use of spaces, locations, and personal information, means that it is the right holder who determines access. To drive this point home Moore offers the following case: “Imagine upon exiting your house one day you find a person searching through your trash painstakingly putting the shredded notes and documents back together. In response to your stunned silence he proclaims ‘you don’t have anything to worry about – there is no reason to hide, is there?'” Second, individuals may wish to hide embarrassing behavior or conduct not accepted by the dominant culture. “Consider someone’s sexual or medical history. Imagine someone visiting a library to learn about alternative lifestyles not accepted by the majority.” Finally, Moore argues that “nothing to hide,” if taken seriously, could be used against government agents, politicians, and CEOs. This is to turn the “nothing to hide” argument on its head. Moore argues that the NSA agent, politician, police chief, and CEO have nothing to hide, so they should embrace total transparency like the rest of us. “But they don’t, and when given the technological tools to watch, the politician, police chief, or CEO are almost always convinced that watching others is a good thing.”

Bruce Schneier, a computer security expert and cryptographer, expressed opposition, citing Cardinal Richelieu’s statement “If one would give me six lines written by the hand of the most honest man, I would find something in them to have him hanged”, referring to how a state government can find aspects in a person’s life in order to prosecute or blackmail that individual.  Schneier also argued “Too many wrongly characterize the debate as ‘security versus privacy.’ The real choice is liberty versus control.”

____

Do governments have a right to monitor their citizens’ actions?

Mass surveillance is an unprecedented intrusion into the privacy of ordinary people. At no point in history have we accepted that governments should be able to monitor everything we do to keep us safe. Imagine if we were told they wanted to install cameras in our living rooms, or microphones under tables in coffee shops, to ensure they could catch criminals. This is the physical-world equivalent of online mass surveillance. It’s a huge overreach of government power and we consent to it every time we say we have “nothing to hide”. Instead, we should say to governments: “I have nothing to hide and my private business is none of yours”.

Privacy should be a right unless something is done that arouses legitimate suspicion. Usually, governments conduct targeted surveillance, when they monitor a person or group for specific, legitimate reasons. For this, they’ll need to get permission from a judge, for example to monitor the internet use of someone they suspect of criminal activities. If surveillance is indiscriminate, our communications are being monitored without any reasonable suspicion that we might be doing something dodgy. Governments are treating us all like criminal suspects, and every detail of our personal lives as suspicious. And there are few laws to control what they’re doing.

It all amounts to: Nothing to hide – as long as you agree 100% with the outlook and policies of your government!

Much like the right to protest, our privacy is something we notice more when it’s taken away. Throughout history, seemingly innocent information about people has been used to persecute them during moments of crisis. You may trust your current government to look for criminals and not do anything dishonest with your data. But what if it changed and shifted dramatically to the left or the right? In these situations, authorities could gather data to find and crack down on groups they disagree with. They could use the information to target journalists, persecute activists and discriminate against minorities.

Saying we have nothing to hide means accepting the premise that the ones behind the cameras have the best interests of the people at heart. You may think you have done nothing wrong, but that puts blind faith in the people looking at your data to think the same way. As NSA whistle-blower Edward Snowden told us: “These people are looking for criminals. You could be the most innocent person in the world, but if somebody programmed to see patterns of criminality looks at your data, they’re not going to find you – they’re going to find a criminal.”

_

If you have “nothing to hide”, then you should not worry about government surveillance: this is a fallacious argument from many angles.

First, such an argument justifying mass surveillance upends the long-standing principle of presumption of innocence.

Second, it fundamentally misunderstands the consequences of the perceived loss of privacy and the ensuing chilling effects on speech and behaviour. The fear that who we meet, what we say, and which websites we visit could be subject to scrutiny may result in an unconscious change in (even lawful) behaviour. When we believe we are being observed, we are more likely to behave according to socially accepted norms. The change in behaviour thus has less to do with the content of our actions and more to do with the knowledge of being watched. Such a modification of behaviour is also evident in the arena of free speech and expression. A person critical of the ruling government may be more likely to self-censor her views if she believes her communications are being monitored. The reduction in diversity of views only undermines the democratic process.

Third, surveillance programmes are problematic even when there is no “undesirable” information that people want to keep hidden. Law professor Daniel Solove explains this beautifully by using the example of Kafka’s The Trial, where the problem is not prohibited behaviour. Rather, it is the protagonist’s exclusion from the judicial process, in terms of both knowledge and participation, and the attendant suffocating powerlessness and vulnerability created by the system’s use of his personal data. The novel centers on a man who is arrested but not informed why. He desperately tries to find out what triggered his arrest and what’s in store for him. He finds out that a mysterious court system has a dossier on him and is investigating him, but he’s unable to learn much more. The Trial depicts a bureaucracy with inscrutable purposes that uses people’s information to make important decisions about them, yet denies the people the ability to participate in how their information is used.

Finally, justifying the invasion of privacy because “I have nothing to hide” takes a short-term view of privacy and data collection. Data once collected can be used, misused, shared, and stored in perpetuity. Worse, it can be combined with other individually inconsequential data points to reveal extremely significant information about an individual. For example, mere knowledge that an unmarried woman went to a gynaecologist does not tell us much. But if we combine this information with a visit to an abortion clinic later, we suddenly know much more about her, and more than she may want to reveal publicly.

It is true that both the private sector and the state can know this information. But in the hands of the state, which has the monopoly on coercion and violence, it is far more potent.

______

______

You should value Privacy, even if you have Nothing to Hide:

Privacy underpins a healthy democracy, and ensures our freedoms of expression, association, and assembly. The erosion of privacy is something that affects all people, even those who have nothing to hide.

-1. A little data goes a long way

The argument that a lack of privacy isn’t a problem if you’re not doing anything problematic usually builds on the assumption that our individual data is useless. And while data collected in one moment may be trivial on its own, privacy researchers reveal that everything you do online is logged in obscene detail.

All these data points are aggregated to build a digital profile, which follows you across the web and dictates your digital experiences, customizing everything you see online to optimize engagement, clicks, and purchases. This process of segmenting people based on personal details is known as microtargeting.

While microtargeting curates our online experience according to our interests, which may seem like a good thing, it is a powerful tool that comes with a high social cost. The same technology is also used by companies to distort our mood, modify our behavior, and alter purchasing patterns with a flood of targeted ads, videos, and stories—mostly without our consent.
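To make the mechanics described above more concrete, here is a minimal, hypothetical Python sketch of how individually trivial data points can be merged into a single profile and then matched against advertiser-defined segments. The event fields, segment names and matching rules are invented purely for illustration; they do not describe any real platform’s systems.

```python
from typing import Dict, List


def merge_profile(events: List[Dict]) -> Dict:
    """Fold individually trivial events into one cumulative profile."""
    profile: Dict = {"interests": set(), "locations": set()}
    for event in events:
        profile["interests"].update(event.get("interests", []))
        profile["locations"].update(event.get("locations", []))
        # Copy any remaining scalar attributes (age, device, etc.) as-is.
        profile.update({k: v for k, v in event.items()
                        if k not in ("interests", "locations")})
    return profile


def match_segments(profile: Dict) -> List[str]:
    """Assign the profile to advertiser-defined segments (made-up rules)."""
    segments = []
    if "running" in profile["interests"] and profile.get("age", 99) < 35:
        segments.append("young_fitness_buyers")
    if "loan_calculator" in profile["interests"]:
        segments.append("credit_seekers")
    return segments


# Each event looks harmless in isolation...
events = [
    {"interests": ["running"], "locations": ["gym"]},
    {"age": 29, "device": "android"},
    {"interests": ["loan_calculator"], "locations": ["bank_branch"]},
]

profile = merge_profile(events)
print(profile)
# ...but combined they place the user in narrowly targetable segments.
print(match_segments(profile))
```

Real systems fold thousands of such signals, gathered across sites and devices, into the same kind of cumulative profile, which is what gives microtargeting its reach.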

-2. Mass surveillance hasn’t made us safer

Often the need for citizens to forfeit personal privacy is framed in the context of curbing organized crime, but a lack of privacy has a much larger effect on how everyday people go about their lives than it does on criminals or terrorists. According to attorney and educator Jennifer Granick, “almost every major terrorist attack on Western soil in the past fifteen years was committed by someone already on the government’s radar for one or another reason.”

There is little evidence that our governments’ indiscriminate bulk data collection through legislation such as the Patriot Act has thwarted any terrorist attacks. There’s also no evidence to show that the inconvenient and invasive experiences we endure at airports lead to greater security. The Department of Homeland Security was able to smuggle guns and bombs past Transportation Security Administration (TSA) airport officials at a staggering 95% success rate. What are we truly gaining in exchange for our privacy?

-3. Privacy isn’t about hiding

The nothing-to-hide argument frames privacy as something only criminals and other bad actors would demand, but nothing could be further from the truth. Privacy is about the freedom to make choices without fear: how you want to live, what you believe in, who you are friends with, and what you want to share with whom. A lack of privacy leads to uniformity and self-censorship, which pushes our opinions to the edges and erodes our ability to engage in healthy debate.

Ultimately, privacy also protects us from the unknown. Circumstances change. Something that can be harmlessly shared today may someday be worth concealing, whether it’s your political beliefs or your ethnic or religious background. Privacy is the ultimate insurance against a rapidly changing corporate and political climate. Once we lose our privacy, we won’t get it back.

-4. The Society Argument

Privacy may not be critically important to you, but it is crucial to consider how it impacts all of us collectively. Rights that we take for granted can be dismantled if privacy is lost, and that would lead to a worse world to live in. Doctor-patient confidentiality, attorney-client privilege, the ability to maintain important business secrets, the ability of reporters to protect confidential sources and of whistle-blowers to report wrongdoing at their workplaces, and the freedom simply to go about our everyday lives all depend on privacy.

Specific examples involving the Society Argument: the Chinese government censors the news and monitors internet traffic to identify potential dissidents and send them to “re-education camps.” It is also looking into a national social credit score, where doing things like speaking out against the government, or talking to someone who does, can inhibit your ability to travel, get a job, and live your life. In Bangladesh, bloggers who speak out against Islam have been murdered when their online identities are tied to their real ones. The US has had a number of recent incidents in which whistle-blowers within the administration have been hunted.

-5. Lack of privacy creates significant harms that everyone wants to avoid.

You need privacy to avoid unfortunately common threats like identity theft, manipulation through ads, discrimination based on your personal information, harassment, the filter bubble, and many other real harms that arise from invasions of privacy.

In addition, what many people don’t realize is that several small pieces of your personal data can be put together to reveal much more about you than you would think is possible. For example, an analysis conducted by MIT researchers found that “just four fairly vague pieces of information — the dates and locations of four purchases — are enough to identify 90 percent of the people in a data set recording three months of credit-card transactions by 1.1 million users.”
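The finding is easier to appreciate with a toy re-identification experiment. The Python sketch below uses entirely synthetic data to mimic the idea: an observer who knows only four coarse (day, shop) purchase pairs checks how many users in a large transaction log are consistent with all four. It illustrates the principle only; it is not a reproduction of the MIT study’s data or method.

```python
import random
from collections import defaultdict

random.seed(0)

NUM_USERS = 50_000
SHOPS = [f"shop_{i}" for i in range(500)]   # coarse purchase locations
DAYS = range(1, 91)                         # three months of dates

# Synthetic transaction log: each user makes ~30 purchases at random shops/days.
log = defaultdict(set)                      # user_id -> {(day, shop), ...}
for user in range(NUM_USERS):
    for _ in range(30):
        log[user].add((random.choice(DAYS), random.choice(SHOPS)))


def matching_users(known_purchases):
    """Return every user whose history contains all of the known purchases."""
    return [u for u, purchases in log.items() if known_purchases <= purchases]


# The "observer" knows only four of the target's (day, shop) pairs.
target = 12345
known = set(random.sample(sorted(log[target]), 4))

matches = matching_users(known)
print(f"Users consistent with the four known purchases: {len(matches)}")
print(f"Target uniquely identified: {matches == [target]}")
```

Even with tens of thousands of users and very coarse attributes, four observations almost always narrow the candidate set to a single person, which is the intuition behind the 90 percent figure quoted above.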

It’s critical to remember that privacy isn’t just about protecting a single and seemingly insignificant piece of personal data, which is often what people think about when they say, “I have nothing to hide.” For example, some may say they don’t mind if a company knows their email address while others might say they don’t care if a company knows where they shop online.

However, these small pieces of personal data are increasingly aggregated by advertising platforms like Google and Facebook to form a more complete picture of who you are, what you do, where you go, and with whom you spend time. And those large data profiles can then lead much more easily to significant privacy harms. If that feels creepy, it’s because it is. We can’t stress enough that your privacy shouldn’t be taken for granted. The ‘I have nothing to hide’ response does just that, implying that government and corporate surveillance should be acceptable as the default. Privacy should be the default. We are setting a new standard of trust online and believe getting the privacy you want online should be as easy as closing the blinds.

-6. Do you really think you have nothing to hide?

Would you be willing to hand over your phone and give me your PIN?

Would you be willing to let your postman open your mail and make a copy of it?

Do you still think you have nothing to hide? Be honest: handing these over would be quite embarrassing even if you really ‘have nothing to hide’, simply because it is your right to have your own space and freedom.

Strangely, if you replied, “I do have things to hide”, most people would stare at you and reply “are you a terrorist?”. Does it mean only terrorists have things to hide? Of course not. Everyone needs protection and privacy, no matter what the subject is about.

We are dealing with three different threat scenarios:

-State actors who build surveillance systems using the argument of fighting crime.

-For-profit companies that create profiles from the available data in order to deliver personalized advertising.

-Criminals who gain access to online identities and payment information through fraudulent activities in order to enrich themselves personally.

Here are some examples:

Users of the fitness app Strava have tracked their routes while jogging, to measure and compare running times and calorie consumption. A harmless hobby, one would think. But it is possible (even without any hacking skills) to compare routes and view them on a map. So it happened that the locations of secret military bases became public, because soldiers jogged around them to keep fit. Probably none of them intended to betray their location to the enemy. And yet that is exactly what happened.

A financial loss, for example, can arise if you handle your Amazon access data carelessly. If you log into your Amazon account on a stranger’s computer, or with a weak password, you risk having your account hijacked. Unwanted orders can run up a large bill on your credit card within hours. Even if you manage to cancel such orders, you will have to deal with your bank and the police. This is avoidable stress.

Health data is very sensitive information. We expect treating physicians to adhere to confidentiality and data protection rules – in other words, to do everything in their power to protect patient information. But then why do we, for example, freely give away our menstruation cycle data, including preferred sex positions and the desire to have children, to an app on our smartphone, data which ends up directly in the databases of advertising networks? How is it possible that we voluntarily make our daily routines, including our resting pulse, available to insurance companies? And why does Facebook find out about every medical emergency that we look up in a diagnosis app?

When you think about all the data that is transmitted from your smartphone and your apps alone, which is collected and used to create a profile, it would be legitimate to get nervous. Your information could easily fall into the wrong hands. While you may feel like you don’t have anything to hide from government agencies conducting surveillance for ‘security’ purposes, you might, however, be alarmed to know that your information could fall into the hands of hackers, blackmailers, data aggregators and others who may be hell-bent on exploiting this data. Remember the Equifax, Ashley Madison and Yahoo breaches? Being blasé about your privacy could mean courting trouble. There is uncertainty about how the information you share now will evolve in the future. Your private communication and information, exposed to the world in an out-of-context manner, can also be used against you. While you currently may not have a lot to risk if your data is shared, this can change in the future, where your personal data can be misrepresented to the detriment of your career or social life. Politicians constantly find their personal information exploited, their speeches manipulated, and their pictures taken out of context.

-7. How much is your data worth?

When you put tons of personal information online using Twitter, Facebook, or Instagram, you get an instant benefit: it’s easy to use and free. So it’s pretty tricky to weigh that against something that might be dangerous in a hard-to-predict future. But those services aren’t entirely free; we’re paying with our data, after all. Do you wonder what tech companies and telecommunication giants gain from allowing you to transfer or store unlimited data on their servers for free? Are they philanthropists? No, they are not; they are among the wealthiest companies on earth. Google is not a search company; it’s a data company. The lack of privacy enriches corporations.

‘If you have something you don’t want anyone to know, maybe you shouldn’t be doing it’

– Eric Schmidt, Google CEO

This is because if you’re not paying for it, you become the product. Or rather, your ‘private’ information is. It is sold to interested parties such as advertising companies without your say-so, which makes these companies rich and gives them little incentive to protect privacy. You should, therefore, be very concerned!

If you say you have nothing to hide, try pulling your saved Google queries of the last 10 years from the search engine, printing them out and distributing them around the office. The same applies to Facebook likes and private messages (including those you never sent), which the social network also stores indefinitely. Does that feel strange? It should. Then why do we allow internet companies to view, store and analyse this information about us?

______

______

Section-9 

Right to be forgotten:

In 1998, a Spanish national, Mario Costeja Gonzalez, ran into financial trouble and put his property up for auction. The details were covered in the news, inadvertently putting them online for everyone to see. Sixteen years later, Gonzalez had long since overcome his financial difficulties, but a simple web search for his name would still throw up the news of the auction. Gonzalez argued that the news continued to damage his reputation and demanded that it be removed from Google search results. In May 2014, the European Court of Justice ruled against Google.

_

The right to be forgotten is the right to have private information about a person removed from Internet searches and other directories under some circumstances. The concept has been discussed and put into practice in both the European Union (EU) and Argentina since 2006. The issue arises from individuals' desire to determine the development of their lives autonomously, without being perpetually or periodically stigmatized as a consequence of a specific action performed in the past. The right to be forgotten is distinct from the right to privacy. The right to privacy concerns information that is not publicly known, whereas the right to be forgotten involves removing information that was publicly known at a certain time and not allowing third parties to access it.

There has been controversy about the practicality of establishing a right to be forgotten (in respect to access of information) as an international human right. This is partly due to the vagueness of current rulings attempting to implement such a right. Furthermore, there are concerns about its impact on the right to freedom of expression, its interaction with the right to privacy, and whether creating a right to be forgotten would decrease the quality of the Internet through censorship and the rewriting of history. Those in favor of the right to be forgotten cite its necessity due to issues such as revenge porn sites appearing in search engine listings for a person’s name, as well as instances of these results referencing petty crimes individuals may have committed in the past. The central concern lies in the potentially undue influence that such results may exert upon a person’s online reputation almost indefinitely if not removed. It’s worth keeping in mind that this right is not an absolute right, meaning that other rights, such as the freedom of expression and scientific research, are also safeguarded.

_

The Right to be Forgotten has been put into practice by the European Union and was upheld by the Court of Justice of the EU in 2014.

-1. The Right to be Forgotten allows certain people to ask search engines to remove certain results that include their name and other information.

-2. This means that search engines like Google and Bing will have to remove certain search results from their pages.

-3. Some experts argue that asking search engines to remove certain content is a form of internet censorship, and note that it is of limited effect in any case, since the internet (almost) never forgets. It is also argued that censoring certain information through search engines violates freedom of expression and of the media.

-4. Google reported that it received over 50,000 requests for articles to be removed from search results. Not all 50,000 requests may have come from separate individuals, but the number clearly shows that there is great demand from people who want to be forgotten.

-5. The information will only be removed if the impact on the individual’s privacy is greater than the public’s right to find it.

_

The right to be forgotten empowers individuals to ask organisations to delete their personal data. It is provided by the EU's General Data Protection Regulation (GDPR), a law adopted by the 28-member bloc that has applied since 2018. More recently, the European Court of Justice (ECJ) ruled in favour of the search engine giant Google, which was contesting a French regulatory authority's order to have web addresses removed from its global database. The European Union's highest court ruled that the online privacy rule known as the 'right to be forgotten' under European law does not apply beyond the borders of EU member states. The judgement is an important victory for Google, since the rule cannot now be used to regulate the internet in countries outside the European Union, such as India.

_

Criticism of right to be forgotten:

Major criticisms stem from the idea that the right to be forgotten would restrict the right to freedom of speech. Many nations, and the United States in particular (with the First Amendment to the United States Constitution), have very strong domestic freedom of speech laws, which would be difficult to reconcile with the right to be forgotten. Some academics argue that only a limited form of the right to be forgotten would be reconcilable with US constitutional law: the right of an individual to delete data that he or she has personally submitted. In this limited form of the right, individuals could not demand the removal of material uploaded by others, since such demands could constitute censorship and a reduction in freedom of expression in many countries.

______

______

Section-10

Privacy paradox:

The emergence of the Semantic Web has brought numerous opportunities with it, including almost unlimited access to information, round-the-clock social networking connectivity and large-scale data aggregation. It has grown to the extent that it now plays a part in the everyday lives of billions of people around the world. Simultaneously, the advent of big data and digital technologies has raised serious privacy and security issues. Smith and Kollars (2015) called these digital developments 'the uncontrolled electronic panopticism' (p. 160). The fact is that the information transmitted between electronic devices amounts to a form of unwitting user observation. When considering mobile applications and data 'leakage' in particular, recent literature argues that the consumer's choice to use mobile technologies is primarily driven by considerations of popularity, usability and the price of a given technology (Kelley et al., 2013; Kim et al., 2008), despite the potential risk of data misuse. At the same time, however, research indicates that consumers are concerned about their privacy, including the ambiguous distribution of data and its use by third parties (Smith et al., 2011). This discrepancy between expressed concern and actual behavior is a phenomenon known as the privacy paradox: users claim to be very concerned about their privacy but do very little to protect their personal data. There are currently multiple theories explaining the privacy paradox. Some explain this paradoxical behavior from a rational perspective, arguing that users weigh the cost-benefit ratio of online information disclosure consciously and rationally (Simon, 1955). Others question this rational view, arguing that individuals are bound in their rational decision-making by several cognitive biases, resulting in a distorted cost-benefit calculation (Simon, 1982). Interestingly, both perspectives result in a risk-benefit calculation that ultimately chooses benefits over risks. In addition, an unbalanced decision-making process serves as the basis for a third perspective, in which decision-making is driven by the prevailing benefits and, as a result, no or negligible risk assessment takes place.
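The rational cost-benefit view described above (often called the 'privacy calculus' in this literature) can be reduced to a toy model: disclose when the perceived benefit exceeds the expected harm, i.e. the probability of harm multiplied by its cost. The Python sketch below only illustrates that reasoning step; the numbers are invented and are not taken from any of the studies cited.

def discloses(benefit: float, harm_probability: float, harm_cost: float) -> bool:
    """Toy privacy calculus: share data only if benefit outweighs expected harm."""
    expected_harm = harm_probability * harm_cost
    return benefit > expected_harm

# A user who perceives the chance of harm as tiny will disclose...
print(discloses(benefit=5.0, harm_probability=0.01, harm_cost=100.0))   # True
# ...while a more realistic estimate of the same harm tips the balance.
print(discloses(benefit=5.0, harm_probability=0.10, harm_cost=100.0))   # False

The paradox, on the competing views above, is that real users rarely perform even this simple calculation: biases and the design of the moment push the perceived risk towards zero.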

_

The privacy paradox is a phenomenon in which online users state that they are concerned about their privacy but behave as if they were not. While the term was coined as early as 1998, it wasn't used in its current popular sense until the year 2000. The privacy paradox has been studied and documented in different research settings. Although several studies have shown this inconsistency between privacy attitudes and behavior among online users, the reason for the paradox remains unclear. A main explanation is that users lack awareness of the risks and of the degree of protection available. Users may underestimate the harm of disclosing information online. Other researchers argue the privacy paradox stems from a lack of technology literacy and from the design of sites. For example, users may not know how to change their default settings even though they care about their privacy. Psychologists, in particular, point out that the privacy paradox occurs because users must trade off their privacy concerns against impression management.

_

Some researchers believe that decision-making takes place on an irrational level, especially when it comes to mobile computing. Mobile applications are built so that decision-making is fast. Restricting one's profile on social networks is the easiest way to protect against privacy threats and security intrusions, but comparable protection measures are not easily accessible while downloading and installing apps. Even where mechanisms to protect privacy exist, most users lack the knowledge or experience to use them. Consumers of mobile applications also have very little knowledge of how their personal data are used, and they do not rely on the information provided by application vendors about the collection and use of personal data when deciding which application to download. Users claim that permissions are important when downloading an app, but research shows that they do not treat privacy- and security-related aspects as important when downloading and installing one: they rate cost, functionality, design, ratings, reviews and download counts as more important than the requested permissions.

_______

The economic valuation of privacy:

The willingness to incur a privacy risk is driven by a complex array of factors, including risk attitudes, self-reported value of private information, and general attitudes to privacy (derived from surveys). Experiments aiming to determine the monetary value of several types of personal information indicate low valuations of personal information. On the other hand, it appears that consumers are willing to pay a premium for privacy, albeit a small one. Users do not always act in accordance with their professed privacy concerns and are sometimes willing to trade private information for convenience, functionality, or financial gain, even when the gains are very small. One study suggests that people think their browser history is worth the equivalent of a cheap meal. Attitudes to privacy risk do not appear to depend on whether the information is already under threat or not: people neither become discouraged about protecting their information nor come to value it more when it is under threat.

Concrete solutions for resolving this paradoxical behavior do not yet exist. Many efforts focus on the decision-making process, for example restricting data access permissions during application installation, but none of these closes the gap between user intention and behavior. Susanne Barth and Menno D.T. de Jong believe that, for users to make more conscious decisions on privacy matters, design needs to be more user-oriented: data-related risks will be better perceived if users psychologically regard the data as 'mine' rather than 'not mine'.

There are many opinions on the privacy paradox. Some suggest it should no longer be considered a paradox at all; it may be more of a privacy dilemma, because people would like to do more to protect their data but also want to use services that would not exist without data sharing. It has also been suggested that people do understand they are paying with personal data, but believe they are getting a fair deal.

____

Selfie culture:

Selfies are popular today. A search for photos with the hashtag #selfie retrieves over 23 million results on Instagram and "a whopping 51 million with the hashtag #me". However, in an era of corporate and governmental surveillance, this may pose a risk to privacy. In a study with a sample of 3,763 social media users, researchers found that, for selfies, females generally have greater concerns than males, and that greater concern inversely predicts selfie behavior and activity.

_______

The ‘Privacy Paradox’ in the Social Web: The Impact of Privacy Concerns, Individual Characteristics, and the Perceived Social Relevance on Different Forms of Self-Disclosure: a 2014 study:

Given the diffusion of the Social Web and increased disclosure of personal information online, the 'privacy paradox' suggests that while Internet users are concerned about privacy, their behaviors do not mirror those concerns. This study investigates the potential influence of privacy concerns, psychological traits, attitudes to the Social Web and age on self-disclosure. Using an online survey of a representative sample of German Internet users (n = 2,739), the variety and quality of self-disclosure, as well as access to it, were measured. The findings indicate that privacy concerns hardly impact self-disclosure, but that different variables moderate this relation. Perceived social relevance and the number of applications used proved important. Users' general willingness to disclose is most important when providing sensitive information.

_____

The privacy paradox – Investigating discrepancies between expressed privacy concerns and actual online behavior – A systematic literature review, 2017:

Recent research on online behavior has revealed discrepancies between users' attitudes and their actual behavior, a phenomenon known as the privacy paradox. More specifically: while users claim to be very concerned about their privacy, they nevertheless do very little to protect their personal data. This systematic literature review explores the different theories on the privacy paradox. Drawing on a sample of 32 full papers that explore 35 theories in total, the authors determined that a user's decision-making process regarding the willingness to divulge private information is generally driven by one of two considerations: (1) a risk-benefit evaluation, or (2) a risk assessment deemed to be none or negligible. By classifying the theories according to these two considerations, they compiled a comprehensive model using all the variables mentioned in the papers discussed.

The purpose of the article was to review prior research on the phenomenon of the privacy paradox. The authors strongly question whether rational decision-making processes are the only suitable explanation for the discrepancies between privacy concerns and privacy behavior, especially as applied to mobile computing, since decision-making in a mobile environment is subject to environmental and circumstantial factors different from those encountered during desktop computing. When analyzing the design of mobile applications, the authors favor a mixed approach (rational and irrational decision-making), with design solutions adapted to different cognitive styles. Implementing cues into the design (back end and interface) is a necessary requirement for empowering the user if data protection is to become more rational. However, attempts to theoretically explain and practically solve the problem of the privacy paradox are still scarce, and the authors feel the subject deserves far more research attention.

_____

Putting the privacy paradox to the test: Online privacy and security behaviors among users with technical knowledge, privacy awareness, and financial resources: a 2019 study:

Research shows that people’s use of computers and mobile phones is often characterized by a privacy paradox: Their self-reported concerns about their online privacy appear to be in contradiction with their often careless online behaviors. Earlier research into the privacy paradox has a number of caveats. Most studies focus on intentions rather than behavior and the influence of technical knowledge, privacy awareness, and financial resources is not systematically ruled out. This study therefore tests the privacy paradox under extreme circumstances, focusing on actual behavior and eliminating the effects of a lack of technical knowledge, privacy awareness, and financial resources. Authors designed an experiment on the downloading and usage of a mobile phone app among technically savvy students, giving them sufficient money to buy a paid-for app. Results suggest that neither technical knowledge and privacy awareness nor financial considerations affect the paradoxical behavior observed in users in general. Technically-skilled and financially independent users risked potential privacy intrusions despite their awareness of potential risks. In their considerations for selecting and downloading an app, privacy aspects did not play a significant role; functionality, app design, and costs appeared to outweigh privacy concerns.

_______

_______

Section-11

Criticism of privacy:

-1. Thomson’s Reductionism

Probably the most famous reductionist view of privacy is one from Judith Jarvis Thomson (1975). Noting that there is little agreement on what privacy is, Thomson examines a number of cases that have been thought to be violations of the right to privacy. On closer inspection, however, Thomson believes all those cases can be adequately and equally well explained in terms of violations of property rights or rights over the person, such as a right not to be listened to. Ultimately the right to privacy, on Thomson’s view, is merely a cluster of rights. Those rights in the cluster are always overlapped by, and can be fully explained by, property rights or rights to bodily security. The right to privacy, on her view, is “derivative” in the sense that there is no need to find what is common in the cluster of privacy rights. Privacy is derivative in its importance and justification, according to Thomson, as any privacy violation is better understood as the violation of a more basic right. Numerous commentators provide strong arguments against Thomson’s critique (Scanlon, 1975; Inness, 1992).

-2. Posner’s Economic Critique

Richard Posner (1981) also presents a critical account of privacy, arguing that the kinds of interests protected under privacy are not distinctive. Moreover, his account is unique because he argues that privacy is protected in ways that are economically inefficient. With respect to information, on Posner's view privacy should only be protected when access to the information would reduce its value (e.g. allowing students access to their letters of recommendation makes those letters less reliable and thus less valuable, and hence they should remain confidential or private). Focusing on privacy as control over information about oneself, Posner argues that concealment or selective disclosure of information is usually meant to mislead or manipulate others, or to secure private economic gain, and thus protection of individual privacy is less defensible than others have thought because it does not maximize wealth. In sum, Posner defends organizational or corporate privacy as more important than personal privacy, because the former is likely to enhance the economy.

-3. Bork’s View

Another strong critic of privacy is Robert Bork (1990), whose criticism is aimed at the constitutional right to privacy established by the Supreme Court in 1965. Bork views the Griswold v. Connecticut decision as an attempt by the Supreme Court to take a side on a social and cultural issue, and as an example of bad constitutional law. Bork’s attack is focused on Justice William O. Douglas and his majority opinion in Griswold. Bork’s major point is that Douglas did not derive the right to privacy from some pre-existing right or from natural law, but merely created a new right to privacy with no foundation in the Constitution or Bill of Rights. Bork is correct that the word “privacy” never appears in those documents. Douglas had argued, however, that the right to privacy could be seen to be based on guarantees from the First, Third, Fourth, Fifth, and Ninth Amendments. Taken together, the protections afforded by these Amendments showed that a basic zone of privacy was protected for citizens, and that it covered their ability to make personal decisions about their home and family life. In contrast, Bork argues i) that none of the Amendments cited covered the case before the Court, ii) that the Supreme Court never articulated or clarified what the right to privacy was or how far it extended, and he charges iii) that the privacy right merely protected what a majority of justices personally wanted it to cover. In sum, he accuses Douglas and the Court majority of inventing a new right, and thus overstepping their bounds as judges by making new law, not interpreting the law. Bork’s views continue to be defended by others, in politics and in the popular press.

Theorists including William Parent (1983) and Judith Thomson (1975) argue that the constitutional right to privacy is not really a privacy right, but is more aptly described as a right to liberty. Other commentators believe, to the contrary, that even if Douglas’ opinion is flawed in its defense, using vague language about a penumbral privacy right emanating from the Constitution and its Amendments, there is nevertheless a historically and conceptually coherent notion of privacy, distinct from liberty, carved out by the constitutional privacy cases (Inness, 1992; Schoeman, 1992; Johnson, 1994; DeCew, 1997).

In response to Bork’s complaint that constitutional privacy protection is not at all about privacy but only concerns liberty or autonomy, it has been successfully argued that while we have multiple individual liberties such as freedom of expression, many do not seem to be about anything particularly personal or related to the types of concerns we might be willing and able to see as privacy issues. If so, then liberty is a broader concept than privacy and privacy issues and claims are a subset of claims to liberty. In support of this view, philosophical and legal commentators have urged that privacy protects liberty, and that privacy protection gains for us the freedom to define ourselves and our relations to others (Allen, 2011; DeCew, 1997; Reiman, 1976, 2004; Schoeman, 1984, 1992).

-4. The Feminist Critique of Privacy

There is no single version of the feminist critique of privacy, yet it can be said in general that many feminists worry about the darker side of privacy, and the use of privacy as a shield to cover up domination, degradation and abuse of women and others. Many tend to focus on the private as opposed to the public, rather than merely informational or constitutional privacy. If distinguishing public and private realms leaves the private domain free from any scrutiny, then these feminists such as Catharine MacKinnon (1989) are correct that privacy can be dangerous for women when it is used to cover up repression and physical harm to them by perpetuating the subjection of women in the domestic sphere and encouraging non-intervention by the state. Jean Bethke Elshtain (1981, 1995) and others suggest that it appears feminists such as MacKinnon are for this reason rejecting the public/private split, and are, moreover, recommending that feminists and others jettison or abandon privacy altogether. But, Elshtain points out, this alternative seems too extreme.

A more reasonable view, according to Anita Allen (1988), is to recognize that while privacy can be a shield for abuse, it is unacceptable to reject privacy completely based on harm done in private. A total rejection of privacy makes everything public and leaves the domestic sphere open to complete scrutiny and intrusion by the state. Yet women surely have an interest in privacy that can protect them, for instance, from state-imposed sterilization programs or from government-imposed drug tests for pregnant women whose results are sent to the police, and that can support reasonable regulations such as rights against marital rape. Thus collapsing the public/private dichotomy into a single public realm is inadequate. What puzzles feminists is how to make sense of an important and valuable notion of privacy that provides them a realm free from scrutiny and intervention by the state, without reverting to the traditional public/private dichotomy that has in the past relegated women to the private and domestic sphere where they are victims of abuse and subjection. The challenge is to find a way for the state to take very seriously the domestic abuse that used to be allowed in the name of privacy, while also preventing the state from insinuating itself into all the most intimate parts of women's lives. This means drawing new boundaries for justified state intervention and thus understanding the public/private distinction in new ways.

-5. Right to privacy is not absolute

In the US, the right to privacy is largely subject to a rule of reason, reflecting whether society would view any given alleged infringement as highly offensive to a reasonable person. In Europe, the fundamental rights of privacy and data protection are also concededly not absolute. Indeed, they are expressly subject to the principle of proportionality, and must be balanced against the other rights and freedoms specified in the EU’s Charter of Fundamental Rights (such as, for example, free speech, due process, property and business rights).

Only in exceptional circumstances, however, can an individual's right to privacy be superseded to protect the national interest. The Indian Supreme Court, in Justice K.S. Puttaswamy v. Union of India (2017), ruled that privacy is a fundamental right, but this right is not unbridled or absolute. Further, the Court expressly recognised "protecting national security, preventing and investigating crime, encouraging innovation and the spread of knowledge, and preventing the dissipation of social welfare benefits" as legitimate aims of the State. The Central government, under Section 69 of the Information Technology (IT) Act, 2000, has the power to impose reasonable restrictions on this right and to intercept, decrypt or monitor Internet traffic or electronic data whenever there is a threat to national security, national integrity, security of the state, or friendly relations with other countries, or in the interest of public order and decency, or to prevent incitement to the commission of an offence. Indian laws on search and seizure, such as section 132 of the Income Tax Act, 1961, and sections 91, 165 and 166 of the Criminal Procedure Code, 1973, have been extensively considered by the courts in India and held to be valid. Similarly, the second part of clause (j) of Section 8(1) of the Right to Information (RTI) Act deals with the scope of a defence founded on an individual's right of privacy: the clause seeks to address the tussle between the right of privacy of an individual and the right of others to seek information that may impinge on that right.

-6. Privacy comes at a cost.

It is a cost that civilised and democratic societies are generally prepared to pay. But governments must – or at least ought to – be concerned by just how much cost is at stake, and to what end. Regulating without taking into account cost-benefit analysis is simply foolish. Moreover, acting on privacy without consideration of all relevant consequences is not required by any constitution, charter, law or moral code. Of course, the relevant “costs” to be considered are not just financial expenses in support of compliance, or lost profits due to privacy restrictions, but also the cost of imposing unnecessary expenditures to be borne by a country’s consumers or the lost opportunities for a country’s technological advancement.

Governments have an obligation to protect the interests of their citizens in data privacy, but they should make an effort to protect their citizens from real privacy threats, and not from illusory ones – or from harms that are merely assumed rather than demonstrated.  

Policy makers must also identify and quantify what society will lose by diverting resources from more productive purposes to privacy compliance-intensive practices that do not yield societal benefits commensurate with their cost. There is no genuine benefit in protecting individuals from risks they do not really worry about, and which may not actually be harmful at all. The tendency of some national privacy rules to treat all personal information as sensitive or easily susceptible to abuse – or to expand the definition of personal data unreasonably – is not costless to society.

______

The promise (and risks) of data access and sharing:

Data access and sharing is more important today than ever before. The effective use of data can help boost productivity and improve or foster new products, processes, organisational methods and markets. This is particularly evident in data-rich sectors such as health care, transportation and public administration, as well as in new production platforms in manufacturing and services. With the increasing adoption of artificial intelligence and the Internet of Things across economies, the supply of, and demand for, data will grow even in traditionally less data-intensive fields. One self-driving car, for example, can generate up to 5 terabytes of data per hour, but requires access to additional third party data to operate securely in different traffic, weather and street conditions. 
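To put that figure in perspective, the small back-of-the-envelope calculation below extrapolates the 5 TB-per-hour figure quoted above to a day and a year of driving; the assumed two hours of driving per day is purely illustrative and not a statistic from the source.

tb_per_hour = 5                 # figure quoted above for one self-driving car
hours_per_day = 2               # assumed average driving time (illustrative)
days_per_year = 365

per_day = tb_per_hour * hours_per_day     # 10 TB per car per day
per_year = per_day * days_per_year        # 3,650 TB (roughly 3.6 PB) per car per year
print(f"{per_day} TB/day, {per_year} TB/year for a single vehicle")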

Access to data is therefore crucial for competition and innovation in the digital economy – not only for businesses, but also for governments and individuals. Overall, data access and sharing is estimated to generate social and economic benefits worth between 0.1% and 1.5% of gross domestic product (GDP), in the case of public-sector data, and between 1% and 2.5% of GDP when also including private-sector data. 

Yet there are also risks associated with data access and sharing. Data breaches, most notably, may violate the privacy of individuals when their personal data is involved and harm the commercial or non-commercial interests of organisations (e.g. through the infringement of intellectual property rights). Other risk factors include potential violations of agreed terms and expectations around data re-use, loss of control of individuals and organisations over their data and related uncertainties about “data ownership”. These risks may make individuals, businesses, and governments more reluctant to share data, thereby compounding barriers to accessing data. For example, some individuals may object to their health-related data being re-used for research purposes due to confidentiality concerns, even if they are aware of the social benefits that such re-use could deliver. 

______   

______ 

Section-12

Right to privacy vis-à-vis right to information:  

In the words of Michel Gentot during his term as president of the French National Data Processing and Liberties Commission, freedom of information and data protection are “two forms of protection against the Leviathan state that have the aim of restoring the balance between the citizen and the state”. On first inspection, it would appear that the right of access to information and the right to protection of personal privacy are irreconcilable.  Right to information (RTI) laws provide a fundamental right for any person to access information held by government bodies. At the same time, right to privacy laws grant individuals a fundamental right to control the collection of, access to, and use of personal information about them that is held by governments and private bodies. However, the reality is more complex. Privacy and RTI are often described as “two sides of the same coin”—mainly acting as complementary rights that promote individuals’ rights to protect themselves and to promote government accountability.

The relationship between privacy and RTI laws is currently the subject of considerable debate around the globe as countries are increasingly adopting these types of legislation. To date, more than 50 countries have adopted both laws.

Privacy is increasingly being challenged by new technologies and practices. The technologies facilitate the growing collection and sharing of personal information. Sensitive personal data (including biometrics and DNA makeup) are now collected and used routinely. Public records are being disclosed over the Internet. In response to this set of circumstances, more than 60 countries have adopted comprehensive laws that give individuals some control over the collection and use of these data by public and private bodies. Several major international conventions have long been in place in Europe, and new ones are emerging in Africa and Asia.

At the same time, the public’s right to information is becoming widely accepted. RTI laws are now common around the world, with legislation adopted in almost 90 countries. Access to information is being facilitated through new information and communications technologies, and Web sites containing searchable government records are becoming even more widely available. International bodies are developing conventions, and relevant decisions are being issued by international courts.  

Availability, legislation, and judicial decisions have led to many debates about rules governing access to personal information that is held by public bodies. As equal human rights, neither privacy nor access takes precedence over the other. Thus it is necessary to consider how to adopt and implement the two rights and the laws that govern them in a manner that respects both rights. There is no easy way to do this, and both rights must be considered in a manner that is equal and balanced.

_

Complements and Conflicts in RTI and Privacy Laws:

Right to information (RTI) and privacy laws can both complement and conflict with each other, depending on the situation. In most cases the two rights play different roles; only in a small number of cases do they overlap and lead to potential conflict.

_

Complementary Roles of RTI and Privacy: 

RTI and privacy often play complementary roles. Both are focused on ensuring the accountability of powerful institutions to individuals in the information age. The Council of Europe stated in a 1986 recommendation that the roles are “not mutually distinct but form part of the overall information policy in society” (Council of Europe 1986). The U.K. data protection registrar noted, “Data protection and freedom of information can be seen as complementary rights, with the potential to be mutually supportive in practice.” László Majtényi (2002), the first parliamentary commissioner for data protection and freedom of information in Hungary, says that the common purpose of the two rights is “to continue maintaining the non-transparency of citizens in a world that has undergone the information revolution while rendering transparent the state.”

In many countries, the two rights are intertwined constitutionally. Under the concept of habeas data—a constitutional right that permits individuals to demand access to their own information and to control its use—countries in Latin America have adopted both types of laws. Santiago Canton (the first Organization of American States special rapporteur for freedom of expression and the executive secretary of the Inter-American Commission on Human Rights) said, “The action of habeas data, or the right to obtain personal information contained in public or private databases, has been very important in many countries in exacting accountability for human rights abuses and helping countries scarred by human rights abuses reconcile and move forward, which can only be accomplished by exposing the truth and punishing the guilty.”

In many cases, the two rights overlap in a complementary manner. Both rights provide an individual access to his or her own personal information from government bodies, and privacy laws allow for access to personal information held by private entities. They also mutually enhance each other: privacy laws are used to obtain policy information in the absence of an RTI law, and RTI laws are used to enhance privacy by revealing abuses.

_

The most obvious commonality between the two types of laws is the right of individuals to obtain information about themselves that is held by government bodies. This access is an important safeguard to ensure that individuals are being treated fairly by government bodies and that the information kept is accurate.

When a country has both laws, the general approach is to apply the data protection act to individuals’ requests for personal information; requests for information that contains personal data about other parties are handled under the right to information act. In some jurisdictions, such as Bulgaria and Ireland, applications by people for their own personal information can be made under both acts. In these cases, it is possible that slightly different outcomes may result because of the differences in exemptions and oversight bodies. Often, data protection laws give greater rights for access to personal information because there is a stronger right of access. In Ireland, the official policy guidance notes, “one’s own personal information will very often be released under FOI [freedom of information], while under the Data Protection Act there is a presumption in favour of access to one’s own personal data” (Government of Ireland 2006). In cases where there is a request for information about the individual and other persons, both acts will be considered.

In some countries, the RTI act is the primary legislation used by individuals to access their own personal information held by government departments. In Australia, all requests under the Privacy Act are filtered through the Freedom of Information Act (FOIA), resulting in more than 80 percent of all FOIA requests being from people seeking their own information (Law Reform Commission 2010). In Ireland, where both laws allow for individuals’ access, even with the presumption above, the FOIA is still the act most people use: approximately 70 percent of all requests are made by individuals for their own information.

In countries such as India and South Africa, where there is no general privacy law giving individuals a right of access to their own records, the RTI laws are the only means to access personal records. In India, RTI laws are regularly used by advocates for the poor to obtain records on distribution of food subsidies to show that individuals’ names have been forged and records have been falsified.

Some RTI acts also provide for privacy protections where there is no general privacy law. In South Africa, section 88 of the Promotion of Access to Information Act provides that, in the absence of other legislation (currently under consideration), public and private bodies must make reasonable efforts to establish internal measures to correct personal information held by the relevant bodies.

_

The right to privacy and the right to information are both essential human rights in the modern information society. For the most part, these two rights complement each other in holding governments accountable to individuals. But there is a potential conflict between these rights when there is a demand for access to personal information held by government bodies. Where the two rights overlap, states need to develop mechanisms for identifying core issues to limit conflicts and for balancing the rights.

Access to information and protection of privacy are both rights intended to help the individual in making government accountable. Most of the time, the two rights complement each other. However, there are conflicts—for example, privacy laws often are improperly invoked by governments. And there are cases where the conflicts are legitimate.

There is no simple solution to balancing the two rights, but most issues can be mitigated through the enactment of clear definitions in legislation, guidelines, techniques, and oversight systems.

Of key importance is that governments take care when writing the laws to ensure that the access to information and data protection laws have compatible definitions of personal information. They should adopt appropriate public interest tests that allow for careful balancing of the two rights. Finally, they should create appropriate institutional structures that can balance these rights and ensure that data protection and right to information officials work together, even if they represent different bodies.

______

______

Section-13

Gender and privacy:  

The Internet and the Web were largely the inventions of men as well as government and private institutions managed by men. However, today both men and women are designing cyberspace, and both men and women are using it. Like men, women use cyberspace variously to build and enhance careers or businesses, to purchase consumer goods and services for themselves and their families, to magnify and challenge their political voices, to educate themselves and the general public, and to enhance their social lives.

Moreover, both men and women are vulnerable to unwelcome privacy invasions in cyberspace. Indeed, in major respects, men and women sail through cyberspace in the same leaky boat. We can analogize cyberspace to a vast sea into which spills the private data of those who navigate its swelling waters. For neither men nor women can assume complete privacy in their travels from Web site to Web site, in their “anonymous” chat room conversations and bulletin board postings, or in the personal and financial data they disclose to companies with whom they do business online. Neither men nor women have access to the encryption tools some experts say they need to ensure the security of personal communications.

_

Too little privacy in cyberspace is something of a problem for anyone who wants privacy, whether male or female. But too much privacy in cyberspace can be a problem, too. Cyberspace privacy (including anonymity, confidentiality, secrecy, and encryption) can obscure the sources of tortious misconduct, criminality, incivility, surveillance, and threats to public health and safety. Since too little or too much privacy can be a problem for both men and women and their common communities, why focus on gender in cyberspace?

A woman-centered perspective on privacy in cyberspace is vital because only with such a perspective can we begin to evaluate how the advent of the personal computer and global networking, conjoined with increased opportunity for women, has affected the privacy predicament that once typified many women's lives. Women often had too much privacy in the senses of imposed modesty, chastity, and domestic isolation, and not enough privacy in the sense of adequate opportunities for individual modes of privacy and private choice. Women are particularly vulnerable to privacy problems because they are perceived as inferiors, ancillaries, and safe targets, and because their privacy is sometimes probed by others who implicitly assume that daughters, pregnant women, mothers, and wives are more accountable for their private conduct than their male counterparts.

_

Women’s overall standing as equal participants in the family and in the economic and political life of our society has improved in recent decades. In this new environment, many women have the privacy that they want. They have experienced success in “overcoming inequitable social and economic patterns that substitute confinement to the private sphere for meaningful privacy.” They have learned to “exploit individual privacy without sacrificing worthy ideals of affiliation and benevolent caretaking to self-centeredness.” These egalitarian achievements in the final decades of the twentieth century could mean that women in the lately developed realm of cyberspace quite naturally enjoy the same privacy benefits that men enjoy and only suffer the privacy indignities that men also suffer.

However, women in cyberspace do not enjoy the same level and types of desirable privacy that men do. Women face special privacy problems in cyberspace because there, too, they are perceived as inferiors, ancillaries, and safe targets and held more accountable for their private conduct. In short, the complex gendered social norms of accessibility and inaccessibility found in the real world are also found in the cyberworld. That privacy may be a special problem for women in cyberspace is an especially disturbing possibility since “women may be more concerned than men about information gathering and their privacy on-line”.

_

While women have greater concerns than men about digital privacy, research has found that they also have less awareness of the potential threats posed by technology, data, and interface design. This isn't so surprising, given that the technology sector is still geared toward, made up of, and controlled by men. Only 23% of those working at Facebook, Apple, and Google are women, while decision-making processes in technology and digital policy are still dominated by men. Additionally, work in tech companies is highly sex-segregated, with men dominating product engineering and women heavily represented in legal, trust and safety, and customer care roles.

Given that women disproportionately face fetishization, harassment, and threats of violence online, this lack of representation in the technology sector and relevant decision-making processes is significant. It is only by involving women in policy discussions that sexism can be tackled online. Both marginalization and sex segregation in the industry reduce the input women have on the design of technology and the policies that affect our online lives, meaning risks encountered primarily by women are often left unconsidered. The dearth of women with a career in technology also perpetuates the stereotype of tech as a men’s subject, leading to female disengagement. This disengagement, furthered by the pressures of external bias from peers, friends, and family, creates a vicious cycle whereby women become less digitally literate and aware of the risks they encounter online. This impairs women’s ability to take adequate measures or demand that companies improve their products.

Take VPN usage, for example. VPNs are a key tool for improving privacy and security online, as they encrypt a user’s data, making it unreadable to their ISP and anyone spying on their network. In 2018, Global Web Index reported that only 32% of VPN users were women. Forbes also reported in 2016 that, while women are more likely to make changes in behavior to protect their privacy, men are more likely to use encrypted email (10% vs. 7%), password managers (20% vs. 17%), privacy-enhancing browsers (18% vs. 13%), and two-factor authentication (15% vs. 12%).
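For readers who have never seen what encryption actually does, the minimal Python sketch below (using the third-party cryptography package) shows symmetric encryption turning a readable message into ciphertext that is useless without the key. A VPN or an encrypted email client does something conceptually similar, although with very different protocols and key management; this is an illustration of the idea, not of any particular product.

from cryptography.fernet import Fernet    # pip install cryptography

key = Fernet.generate_key()               # secret known only to the two endpoints
cipher = Fernet(key)

message = b"search query: symptoms of migraine"
ciphertext = cipher.encrypt(message)

print(ciphertext)                         # what an ISP or eavesdropper would see
print(cipher.decrypt(ciphertext))         # readable again only with the key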

_

There is a puzzle about privacy. Consider two Supreme Court cases of great importance to feminists: Griswold v. Connecticut and Roe v. Wade. Both were decided on privacy grounds. In Griswold, the U.S. Supreme Court said that the privacy right protects married couples in their use of contraceptives. In Roe, the Court said that the privacy right protects a woman's decision to terminate her pregnancy. Without Griswold and Roe, the lives of American women might be unimaginably different. And yet many feminists have objected to the privacy right, arguing that the concept is somehow in league with male dominance. The right to privacy implies the right not merely to prevent the incorrect portrayal of private life but the right to prevent it being depicted at all. Even a woman of easy virtue is entitled to privacy, and no one can invade her privacy as and when he likes. Yet there remains the feminist critique of privacy: that granting special status to privacy is detrimental to women because it is used as a shield to dominate and control them, silence them, and cover up domination, degradation and abuse (MacKinnon, 1989). Privacy has not always been on the side of women. When the right to privacy has been defined as protecting the sanctity of homes and the right to be left alone inside our homes, it was not really protecting the privacy of women. With 137 women reportedly killed every day across the world by a partner or family member, and 35 per cent of women worldwide having experienced physical and/or sexual violence from a partner, the phrase "the right to be let alone" takes a dark turn.

______

______

Section-14

Children’s right to privacy in cyberspace:    

As more children around the world spend more time on the Internet, in more ways, it becomes more essential to appreciate what children’s rights mean in a digital world. While there is now a widely accepted public imperative to protect children from harm, abuse and violence online, there has been comparatively little consideration of how to empower children as active digital rights-holders. At the same time, the rapidly expanding power and reach of the ICT sector have thrust communications and technology companies into key policy debates around the risks and opportunities children encounter online.

Privacy and the Internet have a complex relationship. On the one hand, technology has enhanced privacy by offering more accessible means to communicate and access information. For example, activities that once required in-person visits to banks, post offices, libraries, shops and doctors’ offices can now be carried out alone from the sanctity of home. Accompanying advances in encryption have made many online transactions and interactions increasingly secure, with users enjoying greater protection of their messages from prying eyes.

At the same time, new and varied threats to privacy have emerged with the growth of the digital universe. Government surveillance is exponentially easier and cheaper, painting a detailed picture of individuals’ communications, movements and browsing habits. Sophisticated identity thieves, cybercriminals and hackers have exploited vulnerabilities in online banking and e-commerce platforms for financial gain. Online retailers, search engines and email providers track users’ behaviour, collating and selling information to advertisers and marketers.

If the relationship between privacy and the Internet is complex for adults, it is doubly so for children. On one hand, the Internet offers children a way to connect and learn away from the physical oversight of adult authority figures. Communications that previously required the clandestine passing of notes behind teachers’ backs can now take place on social networks, and information that could formerly be accessed only under the watchful eye of a librarian is now available in a free and unbridled way. The Internet has undoubtedly enhanced children’s autonomy and independence, key aspects of the right to privacy.

On the other hand, children experience more serious threats to their privacy from a greater range of actors than any other group. Children’s privacy online is placed at serious risk by those who seek to exploit and abuse them, using the Internet as a means to contact and groom children for abuse or share child sexual abuse material. Yet children’s privacy is also at risk from the very measures that have been put in place to protect them from these threats. Laws designed to facilitate the prevention and detection of crimes against children online often mandate Internet monitoring and surveillance, incentivize intermediaries to generate and retain personal information, and provide government authorities with access to privately-held data. Meanwhile, at home, popular parental control mechanisms to monitor and restrict Internet access promise to expose every last detail of children’s online activity.

Against this backdrop, and driven by the value and power of children as a consumer demographic, companies have likewise acquired seemingly unfettered access to extensive information on children. Children’s personal data is now collected almost from birth, with wearable trackers being introduced in the bassinet and infant photographs adorning parents’ online profiles. Increasingly, individual children are intimately known and understood by commercial forces long before they make their first purchase.

It is fair to say that children’s rights to privacy and the protection of personal information and reputation must be considered, even attenuated, in the context of the need to protect children from harm and abuse and to preserve the role of parents as a source of guidance and support in the exercise of children’s rights. However, these rights must not be neglected as children’s privacy enjoys equal, albeit qualified, protection under international human rights law.

_

Children’s right to privacy:

The Convention on the Rights of the Child (CRC) makes clear that children have a specific right to privacy. Tracking the language of the UDHR and ICCPR, Article 16 of the CRC states that "no child shall be subjected to arbitrary or unlawful interference with his or her privacy, family, or correspondence, nor to unlawful attacks on his or her honour and reputation," and reaffirms that "the child has the right to the protection of the law against such interference or attacks." Taken together, these standards imply that children should be given the same level of protection for their right to privacy as adults. When children's right to privacy is contextualized within the full range of their other rights, best interests and evolving capacities, however, it becomes evident that children's privacy differs from adults' privacy in both scope and application. 

A differentiated approach to children’s privacy does not necessarily mean that children should enjoy less protection of this right. In fact, with respect to informational privacy, there is a strong argument that children should be offered even more robust protection. Especially given that informational privacy protections are often circumvented by asking users to consent to lengthy terms and conditions for the collection and processing of the personal information, children’s more limited levels of literacy and comprehension would demand heightened scrutiny and vigilance.

Equally, though, there are arguments for interfering with children’s right to privacy in light of their ongoing physical and mental development. For example, necessary health assessments and medical care can require invasions of young children’s physical privacy that they may not fully appreciate. By the same token, while preventing children from engaging with the world without supervision can curtail their freedom, it can also create safe spaces for them to play, learn and communicate in ways that are central to their growth and empowerment.

In practice, respecting children’s privacy is often a difficult balancing act. Some interferences with children’s privacy are clearly justifiable; until children have the capacity to make fully informed decisions, giving them unbridled autonomy and independence is not in their best interests. In these circumstances, it can be appropriate and sensible to rely on parents and guardians to manage their children’s privacy.

Even so, some argue that parents have been given too much authority over their children’s privacy online. Requiring parental involvement and consent for the use of widely-available online services, for instance, can impede children’s freedom of expression, access to information and development of digital literacy. Parental controls can similarly threaten children’s free and confident use of technology, and applications installed to track children online may generate even more data about children’s Internet use. Perhaps most concerning, parents who threaten their children’s safety may use their power to cut off digital lifelines for seeking outside assistance.

Albeit unintentionally, many parents also take actions that adversely impact their children’s reputation online. While it is now commonplace for parents to share information about their children online, most children are not in a position to either scrutinize the information or object to its posting. As there is frequently no way for children to request that offending content be removed, even when they reach adulthood, parents may inadvertently be compromising their children’s privacy far into the future.

_

Threats to children’s privacy:

As technologies advance, the threats to children’s privacy, personal information and reputation grow and the consequences of unjustified interferences multiply. Illicit government monitoring and unlawful corporate data collection are not only violations of children’s privacy in and of themselves, but may also chill children’s free expression online and increase their exposure to identity theft. Moreover, such impacts are becoming increasingly complex and interrelated. For example, recording websites visited by Internet users could also facilitate government surveillance of browsing activity and create a honeypot of data subject to attack by cybercriminals. As summarized below, a number of threats to children’s privacy and reputation directly engage, result from or are affected by the actions of private sector actors and Internet intermediaries.

  1. Corporate data collection, analysis and sale of children’s browsing data:

Children are of incredible interest to businesses. They are the largest and most powerful consumer group; they are more susceptible to advertising and marketing techniques; and their preferences and behaviours are more open to influence and manipulation. In many ways, they are the ideal audience for the new digital economic paradigm, in which companies possess tremendous amounts of information about individuals’ digital behaviour that can be used to shape their online activities. In the words of a Chief Data Scientist for a major technology company, “the goal of everything we do is to change people’s actual behaviour at scale. When people use our app, we can capture their behaviours, identify good and bad behaviours, and develop ways to reward the good and punish the bad. We can test how actionable our cues are for them and how profitable for us.”  Given children’s young age, tremendous quantities of personal information will be amassed before they reach the age of majority, much of it without their knowledge or awareness.

Even where children are given an opportunity to grant permission for this data to be collected, combined and resold, they are not likely to fully appreciate the many ways in which this may impact their long-term privacy. In addition to selling data, companies sell advertising space based on a quantified understanding of customer behaviour, purchasing patterns and browsing history. Essentially, companies collect information on users’ browsing habits, generate profiles of products or services that would interest these users, and then sell advertising space to entities that offer these products or services. With children’s greater susceptibility to advertising and marketing messages, and as measures designed to track behaviour segue into measures to influence behaviour, risks to children’s privacy from behavioural targeting appear likely to become more entrenched. In addition, above and beyond generally available online services that are used by children, behavioural tracking and targeting are also deployed in products designed for and marketed to children. For example, school computers and online educational services can be set to automatically collect information on students’ Internet activity.
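
To make the mechanics of behavioural targeting concrete, here is a minimal Python sketch. It is illustrative only: the domain names, topic mapping and ad inventory are invented for this example, and real ad networks operate at vastly greater scale. The point is simply how a handful of browsing events can be folded into an interest profile that then determines which advertisement is served.

from collections import Counter

# Hypothetical mapping from visited sites to interest categories,
# of the kind an ad network might maintain.
PAGE_TOPICS = {
    "toystore.example": "toys",
    "gamesite.example": "games",
    "sneakershop.example": "fashion",
}

def build_profile(visited_domains):
    # Fold raw browsing events into a simple interest profile.
    return Counter(PAGE_TOPICS[d] for d in visited_domains if d in PAGE_TOPICS)

def pick_ad(profile, ads_by_topic):
    # Serve the ad matching the user's strongest inferred interest.
    top_topic, _ = profile.most_common(1)[0]
    return ads_by_topic.get(top_topic, "generic ad")

history = ["toystore.example", "gamesite.example", "gamesite.example"]
print(pick_ad(build_profile(history), {"games": "ad for a new video game"}))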

  2. Use of biometrics:

Biometric data is unique and intimate, making it more sensitive than other types of personal information. Yet for these same reasons, biometric technology is an increasingly appealing way to identify individuals. Facial recognition technology has already been deployed by social networks, online photo-sharing services and mobile applications to tag and organize pictures, including photos of children. Some Internet of Things-enabled devices and toys already have voice recognition features that record and communicate with children. Reportedly, recent advancements will soon make it possible to recognize people through other aspects of their appearance, such as their hair, clothes and body shape, and companies are now poised to use biometric data for identity authentication, geolocation and other purposes. 

While the risks to children’s privacy posed by biometric data are evident, it may also offer new opportunities to protect children’s rights. For example, facial recognition technology has been promoted as an effective means to detect and analyse child sex abuse images.  Equally, this may enable law enforcement authorities and technology companies to work together in identifying potential victims from child sex abuse images. 
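
As a rough illustration of how automated face matching works, the sketch below uses the open-source face_recognition Python package (an assumed dependency, not a tool named in this text; the file names are hypothetical). It checks whether a known face appears in a newly uploaded photo, which is the core operation behind both automatic photo tagging and victim identification.

import face_recognition  # open-source library; an assumed dependency

# File names are hypothetical.
known_image = face_recognition.load_image_file("profile_photo.jpg")
new_upload = face_recognition.load_image_file("group_photo.jpg")

# Encode the known face (assumes at least one face is present in the profile photo).
known_encoding = face_recognition.face_encodings(known_image)[0]

# Encode every face detected in the new upload and compare each one to the known face.
upload_encodings = face_recognition.face_encodings(new_upload)
matches = [face_recognition.compare_faces([known_encoding], enc)[0] for enc in upload_encodings]

print("Person appears in upload:", any(matches))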

_

How to protect children online?

-1. Age verification and mandatory use of identity:

To protect children from inappropriate services and products, such as pornography, many ICT companies employ age verification protocols. These might involve the use of peer-based models, semantic analysis, credit/debit cards, publicly available data, social security numbers, electronic ID cards or offline confirmation such as a phone call to a parent. Age verification has also become a business in and of itself, with a range of services now commercially available, and some countries have explored digital certificates for children that verify their age and sex. 

Governments have promoted age verification technology as part of a broader initiative that emphasizes online identity verification as a way to prevent and detect cybercrime, including child sexual abuse and exploitation. Some governments now require that individuals prove their identities to purchase SIM cards, to open Internet service accounts, or even to go online at cybercafés or libraries.  Increasingly, Internet companies also verify and mandate the use of real names and identities online before their services can be accessed.   

While the stated goals of age and identity verification practices are laudable, they prevent individuals from being anonymous online and can therefore undermine the right to privacy. For children, the ability to communicate and search anonymously online provides immense protection to their identity, privacy and personal information. This can prevent children from being targeted online by savvy cybercriminals or inundated with invasive commercial messages. On the other hand, anonymity also facilitates the very criminal activity that places children in danger online. Anonymization can make it more difficult for law enforcement authorities to detect and prevent crime, and enables the existence of online marketplaces for child sex abuse material.

-2. Encryption and device security:

The use of encryption is on the rise. More hardware manufacturers offer device encryption, more messaging applications have introduced end-to-end encryption, and more websites now facilitate transport encryption. Some also believe that recent Internet-of-things-related security breaches, including the hacking of online cameras to publish video footage from baby monitors, foreshadow the greater availability of encryption in connected products. Encryption can secure communications, web browsing and online transactions against outside monitoring and interference in ways that protect human rights, but it can also frustrate legitimate government surveillance and the apprehension of cybercriminals. By the same token, while encryption can protect children’s data from illegitimate external monitoring and unauthorized access, encryption can equally be used to evade detection by those who wish to do them harm. Law enforcement authorities have in particular noted the challenges that encryption poses to investigating and preventing cases of child sexual exploitation. 
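
For readers unfamiliar with what encryption actually does, here is a minimal Python sketch using the widely available cryptography package (an assumed dependency). It shows only symmetric encryption of a single message; real end-to-end messaging layers key exchange and authentication on top of this idea, so treat it as an illustration rather than a protocol.

from cryptography.fernet import Fernet

# In an end-to-end system, only the communicating devices would hold this key.
key = Fernet.generate_key()
cipher = Fernet(key)

token = cipher.encrypt(b"private message about a family problem")
print(token)                  # what a server or interceptor would see: ciphertext only
print(cipher.decrypt(token))  # only a key holder can recover the plaintext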

-3. Mandatory data retention:

Much like mass surveillance, mandatory data retention and bulk data acquisition programmes raise serious privacy concerns and pose the same clear threats to freedom of expression and freedom of assembly and association. These programmes also threaten freedom of the press and the right to information, as police forces can now request and access previously confidential data about journalistic sources.

Protecting children from harm is the most frequently cited justification for requiring companies to retain and disclose data, and anecdotal evidence suggests that user data is regularly sought to help locate missing or suicidal children. While combatting violence against children and providing children with social and psychological assistance remain important objectives, it must also be recognized that laws requiring the blanket retention of communications data represent a significant interference with all users’ privacy. This interference is particularly acute for children, many of whom begin using mobile devices even before they have celebrated their first birthday and will generate nearly a full lifetime of metadata. 

-4. Use of parental controls:

Parental controls can provide a powerful means to help children exercise their rights online. By allowing parents to designate content that is suitable for their children, to monitor their children’s Internet searches and browsing behaviour, and to track the timing and duration of their children’s Internet use, parental controls promise to preserve the benefits of digital education, information-sharing and connectedness all the while avoiding the web of online pitfalls and dangers.

Parental controls are now widely available in the commercial marketplace and have been readily adopted. In some countries, more than half of teenagers’ parents have installed controls or other means of blocking, filtering or monitoring their child’s online activities.  While the motivation to protect children from harmful content, sexual exploitation and disclosing personal information is undoubtedly legitimate, parental controls also present a clear interference with children’s privacy.

They raise serious questions about the extent and nature of children’s right to privacy in the home, the development of children into responsible digital citizens who can think critically and act independently online, and the support necessary for children to build trust, curiosity and creativity.

The tension between parental controls and children’s right to privacy can best be viewed through the lens of children’s evolving capacities. While parental controls may be appropriate for young children who are less able to direct and moderate their behaviour online, such controls are more difficult to justify for adolescents wishing to explore issues like sexuality, politics and religion. Furthermore, children’s privacy may inadvertently be threatened by data collection and security concerns inherent in parental control software; investigations have uncovered instances where children’s personal data was disclosed to third-party marketers without parents’ consent, and have in some cases revealed grossly inadequate security protections.  Importantly, parental controls may also hamper children’s ability to seek outside help or advice with problems at home.

-5. Managing reputation online:

The protection of reputation online is an increasingly contentious legal and political question, and the Internet has transformed the concept of managing reputation by dramatically increasing the scale, scope and reach of information. For instance, inaccurate or revealing news items that would traditionally have been rectified with a retraction are now duplicated innumerable times and effectively stored in perpetuity. Similarly, as Internet users publish personal information about themselves and others at progressively greater rates, antisocial attacks on reputation have proliferated and been memorialized in the public domain. Concerns about reputation online are particularly challenging for children, especially with a view to the long-term impact of damaging information.  Issues of specific importance to children include:

  • Unauthorized use of children’s images
  • Bullying and harassment
  • The permanence of information published by or about children on the Internet

____

Children’s rights and ICT sector:

While there has been much international attention on the ICT sector’s responsibility to respect human rights, children’s rights have rarely featured in these discussions. When children are mentioned, it has been almost exclusively in the context of sexual abuse, exploitation and harmful content, without recognition of children’s full range of rights. Given the many threats to these rights detailed above, the dialogue on digital rights must now be expanded to consider the ICT sector’s impacts on children’s rights to privacy, protection of personal information and reputation. The Guidelines for Industry on Child Online Protection published by UNICEF and the International Telecommunication Union provide a useful framework to consider these rights, and highlight five key activities that ICT companies can undertake to respect and promote children’s rights in a digital world:

-1. Integrating child rights considerations into all appropriate corporate policies and management processes;

-2. Developing standard processes to handle child sexual abuse material;

-3. Creating a safe and age-appropriate online environment;

-4. Educating children, parents and teachers about children’s safety and their responsible use of ICTs; and

-5. Promoting digital technology as a mode for increasing civic engagement. 

_

Role of states in protecting children’s rights:

Governments have an obligation to ensure that businesses respect children’s rights, and should take appropriate steps to prevent and redress abuses of children’s rights to privacy, the protection of personal information and reputation online. For example, governments can prohibit police harassment and the misuse of personal information; set strict parameters for the collection, use and analysis of children’s data; and support and encourage the development of anti-bullying and privacy-friendly policies across the ICT sector. At the same time, governments must also respect children’s rights in their own activities, and should bear children’s right to privacy in mind when requesting, collecting, retaining or sharing data as part of surveillance programmes, law enforcement operations and the maintenance of public records.

________

________

Section-15

Privacy in social media:    

Social media is here to stay, and with each passing day, it plays a greater role in our lives. That’s why privacy on social media has never been more important. The way you use Twitter, Facebook, LinkedIn, and the other social networks can have major impacts on your life, good or bad. With a little bit of knowledge and a small dose of caution, however, you can enjoy all the benefits of social media with few of the risks.

_

While research has been conducted on social media, few comparisons have been made with regard to the privacy issues that exist within the most common social media networks, such as Facebook, Google Plus, and Twitter. Most research has concentrated on technical issues with the networks and on the effects of social media in fields such as medicine, law, and science. Although the effects on these fields benefit the people involved in them, few studies have shown how everyday users are affected by the use of social media. Social media networks affect the privacy of users because the networks control what happens to user contact information, posts, and other sensitive disclosures that users make on those networks. Social media networks can also sync with phone and tablet applications. Because the use of these applications requires additional contact information from users, social media networks are entrusted with keeping user information secure.

Social media networks need to be held responsible for the privacy issues associated with their use. One of the most significant issues is the content of online social networks’ (OSNs’) privacy policies. Facebook, Google Plus, and Twitter all have different policies, but they generally address similar topics such as advertising and data collection. Users should not assume that all privacy policies are identical, because they vary considerably. While Facebook has a lengthy policy, it has many sections that do not describe the policies in detail, such as when advertising policies are discussed. Google Plus does an adequate job of explaining policies, but does not offer users the ability to decline the policies of specific services, such as Gmail and YouTube. Therefore, Google Plus users must accept the privacy policy for all of Google’s services regardless of whether or not they access those services. Twitter has a satisfactory privacy policy, but does not provide users with many options for protection. Twitter’s privacy policy is standard and does not go the extra mile to provide users with specifics or options that could help improve user privacy, such as explanations of why user content cannot be made completely private. 

_

Data footprints in the realm of social media are significant, thanks to internet users now spending an average of 2 hours and 24 minutes per day on social networks and messaging apps, and sharing personal content and information whilst there. But in this documented era of social media distrust, GlobalWebIndex carried out bespoke research among UK and U.S. internet users to check in on how consumers feel about privacy on social media in 2020, and the actions they’re taking in a bid to protect it.

Here’s what they found out.

Almost two-thirds say they’re concerned about how their personal data is being used by social media companies, with 3 in 10 saying they are very concerned. And many aren’t comfortable sharing social media account credentials with other services, websites or apps either, although 35% said they would do so if they trust the third party. Almost all know they are in control of their own privacy settings, with two-thirds choosing to actively assess them. And many are likely to have done so off the back of being reminded by the platforms; two-thirds say they can recall being prompted to review their settings at some point in the past – with 44% having then done so immediately.

But consumers aren’t unanimous in how they deal with privacy. 1 in 4 social media users take the most conservative approach possible – having all of their social media updates and posts set to private and only visible to a select group of followers. This figure jumps to 29% among those most concerned about how social media companies use their data. Conversely, 36% say all their profiles are public and visible to anyone. This segment includes these concerned consumers too, showing that not all of those who are concerned are proactively changing their behaviour. That said, the majority with public profiles say they don’t tend to post any sensitive information anyway, suggesting that it’s sensitive information in particular that is the concern.

Almost 1 in 5 say they have different privacy settings across social media platforms, and a similar proportion says they do so because they trust some platforms more than others. This ultimately reveals that for many, the platform it’s shared on is just as important as the data itself. Another factor that feeds into this is how increasingly distrustful people are of what is real or fake on the internet. Some social media platforms will no doubt continue to suffer from association with past privacy scandals. And while they might be fighting back to recover credibility and reputation, and consumer opinion is largely open to change, there will always be some who remain sceptical.

So, we know that many have privacy concerns and that many are changing their settings, but what other actions have users taken on social media in the last six months to protect their privacy?

There’s no shortage of discussion around the decline of sharing on social media – GlobalWebIndex’s data continues to reveal a pattern of social media users gradually growing less likely to share their own photos or publicise their opinions on these platforms. And when consumers were asked the above question, this trend emerged at the top of the list, with 38% saying they’ve reduced personal sharing to try to protect their privacy. When consumers have something to share, they’re now more likely to head to dark social channels to do so rather than open social media platforms, and this shift to intimate spaces, at least in part, will be due to privacy concerns; a third of social media users say they think their data is safest in private messaging apps rather than on social feeds. This is great news for brands trying to utilise the space, as less scepticism from consumers should mean more trust for genuine conversations inside those private walls. 

Social media users are also taking other proactive measures to protect their data privacy, and 9% have taken the most drastic action of stopping using a social media platform altogether. Almost 1 in 10 might sound like a lot, but this figure fades into insignificance when the latest trends suggest that more than half of the world’s total population will be in the social media sphere by the middle of this year.

The actions many consumers are taking show they understand that data privacy isn’t a luxury or a single-faceted issue, but something they can increasingly carve out themselves to create the kind of online experience they want.

_

Causes of invasion of privacy on social media:

There are several causes that contribute to the invasion of privacy throughout social networking platforms. It has been recognized that “by design, social media technologies contest mechanisms for control and access to personal information, as the sharing of user-generated content is central to their function.” This shows that social networking companies need private information to become public so their sites can operate; they require people to share and connect with each other. This may not necessarily be a bad thing; however, one must be aware of the privacy concerns. Even with privacy settings, posts on the internet can still be shared with people beyond a user’s followers or friends. One reason for this is that “law is currently incapable of protecting those who share on social media from having their information disseminated further than they intend.” Information always has the chance to be unintentionally spread online. Once something is posted on the internet, it becomes public and is no longer private. Users can turn privacy settings on for their accounts; however, that does not guarantee that information will not go beyond its intended audience. Pictures and posts can be saved, and posts may never really get deleted. In 2013, the Pew Research Center found that “60% of teenage Facebook users have private profiles,” which shows that privacy is still something people actively seek.

A person’s life becomes much more public because of social networking. Social media sites allow people to connect with many more people than in-person interactions alone would permit. People can connect with users from all across the world whom they may never have the chance to meet in person. This can have positive effects; however, it also raises many concerns about privacy. Information can be posted about a person that they do not want getting out. In the book It’s Complicated, danah boyd explains that some people “believe that a willingness to share in public spaces—and, most certainly, any act of exhibitionism and publicity—is incompatible with a desire for personal privacy.” Once something is posted on the internet, it becomes accessible to multiple people and can be shared beyond just assumed friends or followers. Many employers now look at a person’s social media before hiring them for a job or position. Social media has become a tool people use to find out about a person’s life; someone can learn a lot about a person based on what they post before ever meeting them in person. Achieving privacy is a never-ending process. Boyd describes that “achieving privacy requires the ability to control the social situation by navigating complex contextual cues, technical affordances, and social dynamics.” Society is constantly changing; therefore, the way people read social situations in order to maintain privacy must constantly adapt.

_______  

User awareness in social networking sites:

Users are often the targets as well as the source of information in social networking. Users leave digital imprints while browsing social networking sites or services. Several online studies have found that users tend to trust websites and social networking sites. Trust is defined by Mayer, Davis, and Schoorman (1995) as the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party. A survey conducted at Carnegie Mellon University found that the majority of users provided their city of residence and phone numbers, among other personal information, while remaining largely unaware of the consequences of sharing it. Moreover, social networking users come from diverse cities, remote villages, towns, cultures, traditions, religions, backgrounds, economic classes, education levels and time zones, which widens this gap in awareness. The survey results showed that the interaction of trust and privacy concern in social networking sites is not yet understood to a sufficient degree to allow accurate modelling of behaviour and activity. The results of the study encourage further research into the development of relationships in the online social environment and the reasons for differences in behaviour on different sites.

The same survey conducted among social networking users at Carnegie Mellon University identified the following reasons for the lack of user awareness:

-1) People’s disregard of privacy risks due to trust in privacy and protection offered on social networking sites.

-2) Availability of users’ personal details to third-party tools/applications.

-3) APIs and frameworks also enable any user with a fair amount of knowledge to extract other users’ data.

-4) Cross-site forgery and other possible website threats.

There is hence a dire need to improve user awareness swiftly in order to address the growing security and privacy concerns that stem from this lack of awareness. Social networking sites themselves can take responsibility and foster such awareness through participatory, online methods.

______

Privacy violations – the dark side of social media:

Have you noticed how some of the ads on the sites you visit seem to be a perfect match for your interests? Think that’s a coincidence? On the web it certainly isn’t, as advertisers will do just about anything the online environment allows them to do – even if it means violating your online privacy – to develop new ways to promote products. And the easiest way for them to find out your likes and habits is to keep a close eye on your social media behaviour. There are several ways advertisers can invade your social media privacy, take advantage of your data and make you a target for their ads, and several ways for third parties to access user information. Flickr, for example, is a social media website that hosts geotagged photos, which allow viewers to see the exact location where a photo was taken and hence where a person is visiting or staying. Geotagged photos make it easy for third parties to see where an individual is located or travelling to. There is also growing use of phishing, which extracts sensitive information through deceptive links and downloads delivered via email, messages, and other communications. Social media has opened up an entirely new realm for hackers to glean information from ordinary posts and messages. Here are the most common ways privacy is violated on social media:

-1. Data Scraping:

It involves tracking people’s activities online and harvesting personal data and conversations from social media, job websites and online forums. Usually, research companies are the harvesters, and they sell the compiled data to other companies. These, in turn, use the details to design targeted ad campaigns for their products. While one might argue that people are knowingly sharing personal details on social media and that the data is thus free for everyone’s use, data harvesters do not ask for the owner’s consent. And this raises an ethical as well as an online privacy problem.

-2. Sharing data with third parties:

Nearly all of the most popular applications on Facebook—including FarmVille, Causes, and Quiz Planet—have been found sharing users’ information with advertising and tracking companies. Even though Facebook’s privacy policy says it may provide advertisers only with “any of the non-personally identifiable attributes we have collected,” this policy has been violated in practice. If a user clicks a specific ad on a page, Facebook may send the address of that page to advertisers, and that address can lead directly to the user’s profile page, making it easy to identify the user’s name.

It has been reported several times that certain Facebook apps are leaking identifying information about those who use them to advertising and Internet tracking companies, without the users having a clue. During the app’s installation process, you are prompted to accept certain terms, and once you click “Allow”, the application receives an “access token”. Some Facebook apps have been leaking these access tokens to advertisers, granting them access to personal-profile data such as chat logs and photos. However, no disclaimer is shown telling you that your data is being transferred to third parties. Thus your online privacy and safety are put at risk. Examples of apps that have been found to leak identifying information include FarmVille and Family Tree.

-3. Online social tracking:

We all use the “Like”, “Tweet”, “+1”, and other buttons to share content with our friends. But these social widgets are also valuable tracking tools for social media websites. They work with cookies – small files stored on a computer that enable tracking of the user across different sites – which social websites place in your browser when you create an account or log in; together they allow the social websites to recognize you on any site that uses these widgets. Thus, your interests and online shopping behaviour can be easily tracked, and your internet privacy rudely invaded. And things get worse: other websites allow companies to place cookies and beacons – pieces of software that can track you and gather information about what you are doing on a page – within their ads.
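
The sketch below is a toy simulation of the mechanism just described (no real network code, and the site names are hypothetical): because every page that embeds a social widget triggers a request back to the widget’s host, and that request carries the same cookie each time, the host can stitch together a user’s browsing trail across otherwise unrelated sites.

from collections import defaultdict

class WidgetServer:
    """Stands in for the social network that serves the 'Like' button."""
    def __init__(self):
        self.visits_by_cookie = defaultdict(list)

    def handle_widget_request(self, cookie, referring_page):
        # The referring page tells the widget host what the user is reading;
        # the cookie identifies the same browser on every site.
        self.visits_by_cookie[cookie].append(referring_page)

server = WidgetServer()
for page in ["news-site.example/article", "shop.example/shoes", "health-forum.example/thread"]:
    server.handle_widget_request(cookie="browser-1234", referring_page=page)

print(server.visits_by_cookie["browser-1234"])  # one user's browsing trail across unrelated sites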

-4. Default privacy settings can expose your personal information:

Social media privacy settings are rarely straightforward. The majority of social media platforms have default settings you may not be comfortable with. These default settings, also known as opt-out settings, rely on your passive consent to access your personal information. Instead of asking you to check a box that says you consent to having your information shared, opt-out settings require you to seek out and change a default setting.

Other common opt-out privacy settings include agreeing to:

  • Allow your profile to show up in public searches
  • Display alerts when you’re online
  • Have your cookies saved
  • Have your interests tracked

According to a 2018 report from the Pew Research Center, 74% of adult Facebook users in the United States said they didn’t know Facebook kept a list of their personal interests, and over half of those respondents said they weren’t comfortable with the company having such sensitive details.

What’s more, the privacy settings on most social media platforms don’t offer full protection. On Facebook, for example, even if you limit who sees your posts, your personal information can still get to third parties through friends who like or comment on your posts. Plus, most social media companies have a massive page of privacy settings to scroll through, some of which can be confusing.

-5. Your friends can compromise your social media privacy:

As social media use becomes more widespread, individual social media privacy is diminishing. According to a 2018 survey from the Pew Research Center, 69% of American adults use at least one social media platform (not including YouTube), which means the majority of your social circle will use one or more of the same platforms you do.

Even if you tighten your privacy controls, you’re still not in control of your friends’ privacy settings. Social media companies, advertisers, and third-party entities can still access your personal information through your friends’ profiles and posts.

Your friends might let Facebook access their contacts list, for example, which can then expose your phone number and email address to trackers. Likes, comments, and the general exchange of information between you and your friends get encoded on each other’s profiles.

As a result, advertisers can use your friend group to learn more about you. In fact, researchers found that you can profile someone with 95% accuracy using the information from just eight or nine of their friends’ social media accounts, according to a 2019 study published in Nature Human Behaviour. These companies can then use your information — as well as the information of your friends — to analyze your online behavior, create a digital persona, send you targeted ads, and tailor your newsfeeds.

-6. Account hacking and impersonation:

Increasingly, spammers, hackers, and other online criminals are targeting social networks. A compromised social media account makes for an appealing target: if they can get into your Facebook or Twitter account, they can impersonate you.

Why are they interested in your social media accounts? Because it’s a much more effective way to spread viruses, malware, and scams than more traditional email spam. People tend to trust messages they get from their social media friends. They are more likely to click links without thinking twice, which can then infect their computers.

Even worse than malware is when cybercriminals use social media for identity theft. Our private social profiles contain a wealth of personal information, which can be leveraged to open credit card accounts in your name or otherwise abuse your digital identity.

-7. Stalking and harassment:

Not all social media privacy threats come from strangers. Sometimes, people in your life turn out to be less than friendly. Online stalking and cyberbullying have become very well-known threats, and social media makes them very easy to perpetrate.

In one recent incident, a woman who broke up with her boyfriend was horrified to discover some time afterward that he had broken into her Instagram account and posted transcripts of private messages about their relationship and other personal information. He also changed the account password so she couldn’t log back in, shared the information on other social networks, and then accused her of spreading it herself. By the time she was able to access her accounts, thousands of friends, acquaintances, and professional contacts had seen her private information. It was a privacy nightmare on multiple levels. She had never given out her password to the ex, so he gained access by hacking her accounts or guessing her password.

-8. Syncing apps spreads your personal information across the web:

Using your social media account to sign into other online accounts may make internet logins easier, but it also limits your privacy. When you sync two apps or accounts, you give both apps permission to access the data on the corresponding platform. If you sign into your favourite retail store using your Facebook profile, not only does Facebook get access to your account information in the online store, but the store also has access to your Facebook profile.

This information swap spreads your personal details far beyond the two apps you’re using. That’s because most social media and advertising companies reserve the right to sell or share your information with third parties. In fact, a 2018 study from the International Computer Science Institute said that eight of the top 10 global advertising and tracking companies “reserve the right to sell or share data with other organizations, while all [10 companies] reserve the right to share data with their subsidiaries.”

Not only does this make you more vulnerable to data breaches, it also puts you at risk of receiving unwanted targeted ads.

-9. API (application programming interface):

An API allows software to “speak with” other software. Furthermore, an API can collect and provide information that is not publicly accessible. This is extremely enticing for researchers because it opens up many more possible avenues of research. The use of an API for data collection can be a focal point of the privacy conversation, because while the data can be anonymous, the difficulty lies in understanding when its collection becomes an invasion of privacy. Personal information can be collected en masse, but the debate over whether this breaches personal privacy hinges on the claimed inability to match the information with specific people. There have, however, been concerns about APIs because of the scandal involving Facebook and the political consulting firm Cambridge Analytica.
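
As a hedged illustration of how API-based collection scales, the Python sketch below paginates through a hypothetical public-posts endpoint; the URL, parameters and field names are invented for this example and do not belong to any real platform. Each individual call looks innocuous, but the loop quickly assembles a large dataset without the posters’ knowledge.

import requests

API_URL = "https://api.social-platform.example/v1/public_posts"  # hypothetical endpoint
API_KEY = "YOUR_KEY"                                             # placeholder credential

def collect_posts(topic, pages=5):
    """Paginate through a public-posts endpoint and accumulate results."""
    posts, cursor = [], None
    for _ in range(pages):
        params = {"q": topic, "cursor": cursor, "key": API_KEY}
        data = requests.get(API_URL, params=params, timeout=10).json()
        posts.extend(data.get("posts", []))
        cursor = data.get("next_cursor")
        if not cursor:
            break
    return posts

# Each call looks harmless, but repeated calls build a large profile dataset:
# posts = collect_posts("school holidays")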

-10. Search engines:

Search engines are an easy way to find information without scanning every site yourself. Keywords that are typed into a search box will lead to the results. So it is necessary to make sure that the keywords typed are precise and correct. There are many such search engines, some of which may lead the user to fake sites which may obtain personal information or are laden with viruses.

-11. Location Data:

On most social media websites, a user’s geographical location can be gathered either by users themselves (through voluntary check-in applications like Foursquare and Facebook Places) or by applications (through technologies like IP address geolocation, cellphone network triangulation, RFID and GPS). The approach used matters less than the result: the content produced is coupled with the geographical location where the user produced it. Additionally, many applications attach other forms of information, such as OS language, device type and capture time. The result is that by posting, tweeting or taking pictures, users produce and share an enormous amount of personal information.
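
To show how readily a geotag can be recovered from a shared image, the following sketch (assuming the Pillow imaging library; the file name is hypothetical) reads the GPS block of a photo’s EXIF metadata. Anyone who downloads an original, unstripped photo can do the same.

from PIL import Image            # Pillow imaging library; an assumed dependency
from PIL.ExifTags import GPSTAGS

GPS_IFD_TAG = 0x8825  # standard EXIF pointer to the GPS information block

def extract_geotag(path):
    exif = Image.open(path).getexif()
    gps_ifd = exif.get_ifd(GPS_IFD_TAG)  # empty if the photo carries no geotag
    return {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

print(extract_geotag("holiday_photo.jpg"))  # hypothetical file name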

________ 

Cambridge Analytica scandal:

Academic Aleksandr Kogan and his company Global Science Research created an app called “thisisyourdigitallife” in 2014. Users were paid to take a psychological test and the app collected the data. It also gathered data on a person’s Facebook friends, according to the reports. In this way, 50 million Facebook profiles were mined for data. Kogan then shared this with Cambridge Analytica, which allowed the firm to build a software solution to help influence choices in elections, according to whistleblower Christopher Wylie, who revealed the alleged practices to media. Wylie claimed the data sold to Cambridge Analytica was then used to develop “psychographic” profiles of people and deliver pro-Trump material to them online. Cambridge Analytica has denied any of this data was used in connection to the Trump campaign. Facebook has said that while the data was obtained by Cambridge Analytica legitimately, it claimed that Kogan “lied” to the social media platform and violated its policies in transferring the data. Facebook banned Kogan’s app in 2015 and ordered all parties he had given data to, including the consultancy, to destroy it. Recent reports surfaced suggesting that this data was not destroyed. Nonetheless, Cambridge Analytica argues it did delete the data when it was told to.

The saga is significant because of the way the harvested data might have been used. It was allegedly utilized to direct messages for political campaigns supported by Cambridge Analytica, most notably Trump’s election victory and the Brexit vote. Nonetheless, the role that such marketing — or Cambridge Analytica itself — actually played in either of those political outcomes is not known. Cambridge Analytica has since denied the allegations made in media reports.

Up to 87 million Facebook users’ personal data may have been improperly shared with Cambridge Analytica—nearly 37 million more users than previously reported.

_______

Edward Snowden’s Privacy Tips: “Get Rid of Dropbox,” Avoid Facebook And Google:

According to Edward Snowden, people who care about their privacy should stay away from popular consumer Internet services like Dropbox, Facebook, and Google. Snowden conducted a remote interview as part of the New Yorker Festival, where he was asked a couple of variants on the question of what we can do to protect our privacy. His first answer called for a reform of government policies. Some people take the position that they “don’t have anything to hide,” but he argued that when you say that, “You’re inverting the model of responsibility for how rights work”: When you say, ‘I have nothing to hide,’ you’re saying, ‘I don’t care about this right.’ You’re saying, ‘I don’t have this right, because I’ve got to the point where I have to justify it.’ The way rights work is, the government has to justify its intrusion into your rights.

He added that on an individual level, people should seek out encrypted tools and stop using services that are “hostile to privacy.” For one thing, he said you should “get rid of Dropbox,” because it doesn’t support encryption, and you should consider alternatives like SpiderOak. The difference between Dropbox and SpiderOak is that SpiderOak encrypts the data while it’s on your computer, as opposed to only encrypting it “in transit” and on the company’s servers.

He also suggested that while Facebook and Google have improved their security, they remain “dangerous services” that people should avoid. His final piece of advice on this front: Don’t send unencrypted text messages, but instead use services like RedPhone and Silent Circle.

_______

Social Media Privacy Basics:

Understanding social media privacy:

Social media sites like Facebook, Instagram, and Twitter have made it easier than ever to share things online. But sharing something on social media is a bit different from other types of online communication. Unlike email or instant messaging, which are relatively private, the things you share on social media are more public, which means they’ll usually be seen by lots of other people.

Think before you share:

While social media sites offer privacy tools to help you limit who can see the things you share, it’s important to realize that they are fundamentally more open and social than traditional communication tools. Whether or not you realize it, the things you share online also can affect how you’re perceived by others. That’s why you’ll always want to think carefully about what you share over social media.

Review your privacy settings:

All social networking sites have privacy settings that allow you to control who you share with. For example, whenever you share something on Facebook, you can choose to share with just a few people, all of your Facebook friends, or publicly with everyone on Facebook. That’s why it’s so important to understand how your privacy settings work and how to control them.

_______

Ways to Protect your Privacy on Social Media:

Our social lives have experienced a complete upheaval in the last decade. Social media and online networking are entwined with our everyday lives. These accounts can provide us with great ways to keep in touch with friends and family, especially if you’re separated by vast physical distances. However, social media also opens up major privacy concerns, since we often reach broader audiences than we intend to. Online identities can prove problematic as people apply for jobs, build relationships, or even try to avoid cyber stalkers. Social media is a great way to connect with family and friends, but the apps can sometimes go too far with sharing your location, letting people know if you’re online, and showing whether you have read a message. Here are some ways to tighten your social media privacy settings and take control over what people see.

-1. Hide Activity Status

On many social apps your friends have the ability to see if you’re active, offline, or when you were last online. Many social apps allow you to disable your activity status if you don’t want it to be seen by all your friends. Remember though, hiding your activity also stops you from seeing someone else’s activity.

-2. Disable Read Receipts

To further protect your privacy on social media apps, turn off the read receipts. Read receipts are notifications that let your friends know that you’ve read their message. Twitter allows you to disable read receipts in your direct messages, but disabling read receipts will prevent you from seeing when other people have read your message.

-3. Stay Off-The-Grid

Social media apps use your current location for many reasons, such as delivering appropriate ads and notifying you about nearby events. They can also share your location, which you might want to turn off to keep it private. Once it is off, others can’t see where you are, but you can still see where they are.

-4. Be Selective with The Audience

There are a few ways that you can limit your audience on social media networks. First, you can take advantage of using direct messages to select contacts rather than updating a status or post for all your followers to see. If you don’t want to use direct messaging, Facebook provides a different option. Facebook allows you to choose the audience for your posts and updates.

-5. Protecting Your Tweets

If your account on Twitter is public, then each tweet can potentially reach an unlimited audience. The keywords and hashtags in your posts will be searchable by the public. If you don’t need to communicate with the public at large, then you might want to consider switching over to a protected Twitter account. Protected posts are only visible to followers that have your approval. This can be an ideal way to network with your close friends, family, and audience members. It gives you an intimate space to share updates with a select group of people. Also, protected Tweets won’t be indexed by search engines, so no one will be able to view your Twitter updates when they Google you.

-6. Turning Off LinkedIn Activity Broadcasts

So maybe you’re looking for a job and you start following several companies on LinkedIn. The only problem is that these interactions are broadcast on your activity feed. This can alert your current employer that you’re searching for new work. Do your connections really need to know every time you make a change to your profile, follow companies, or write recommendations? If not, dig into your Activity Broadcasts setting and uncheck this feature.

-7. Limiting Future and Past Facebook Posts

Think about the nature of your Facebook posts. Unless you’re trying to promote products or services to the public, then it’s a good idea to keep your personal posts private. Seemingly innocuous public posts can become risks in the future. For example, you might not believe that publicly posting about your vacation is a major concern. However, this information could be used by criminals hoping to target unattended homes. You can restrict the audience of your past and future Facebook posts by visiting the “Privacy Settings and Tools” section and changing the settings under “Who can see my stuff?”

-8. Changing Facebook Friend Request Settings

Spammers and cybercriminals will sometimes target users with public Facebook profiles, attempting to phish information by sending out random messages and friend requests. You can reduce risks to your online identity by restricting friend requests to “Friends of Friends” in the “Who can contact me?” section of Facebook’s privacy settings.

-9. Preventing Search Engines from Indexing Your Facebook

Do you want anyone to find your Facebook posts when they type your name into a search engine? How about prospective employers? You can quickly turn off search engine indexing by unchecking the “Let other search engines link to your timeline” box in Facebook’s privacy settings.

-10. Preventing Facebook Email and Phone Lookup

If you want to prevent members of the public from looking up your Facebook account using your phone number or email address, then visit Facebook’s privacy settings, navigate to the “Who can look me up?” section, and change the drop-down menu option to “Friends” or “Friends of Friends.”

-11. Not Referring to Other Social Media Accounts

Many social media platforms allow you to fill in a profile field linking over to your other social networking accounts. However, it can be a good idea to maintain a separation between accounts, especially if they involve different personal and professional identities. For example, you might not want LinkedIn audiences to find your Facebook account. Avoid connecting these accounts to increase the privacy and security of your digital identities. Turn off Syncing apps.

-12. Forcing Facebook Tag Reviews

Let’s say you enjoy a fun night out, drinking with friends at a bar. One of your friends wants to post and tag a particularly embarrassing photo of you shotgunning beer. You can prevent some awkward conversations by requiring tag request approval before your name is linked to a post or photo. This prevents others from attaching your name to content without your consent. Change these settings by visiting Facebook’s “Timeline and Tagging” section.

_______

_______

Private traits and attributes are predictable from digital records of human behavior, a 2013 study:

Authors show that easily accessible digital records of behavior, Facebook Likes, can be used to automatically and accurately predict a range of highly sensitive personal attributes including: sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender. The analysis presented is based on a dataset of over 58,000 volunteers who provided their Facebook Likes, detailed demographic profiles, and the results of several psychometric tests. The proposed model uses dimensionality reduction for preprocessing the Likes data, which are then entered into logistic/linear regression to predict individual psychodemographic profiles from Likes. The model correctly discriminates between homosexual and heterosexual men in 88% of cases, African Americans and Caucasian Americans in 95% of cases, and between Democrat and Republican in 85% of cases. For the personality trait “Openness,” prediction accuracy is close to the test–retest accuracy of a standard personality test. Authors give examples of associations between attributes and Likes and discuss implications for online personalization and privacy.
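
The modelling pipeline described above can be sketched in a few lines of Python with scikit-learn. The sketch below uses purely synthetic data, so its output says nothing about the study’s actual accuracy figures; it only shows the structure: a sparse user-by-Like matrix is reduced with SVD and the resulting components are fed into logistic regression to predict a binary attribute.

import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
likes = sparse_random(5000, 2000, density=0.01, format="csr", random_state=0)  # users x Likes matrix (synthetic)
attribute = rng.integers(0, 2, size=5000)                                      # a binary trait label (synthetic)

# Dimensionality reduction of the Likes matrix, then logistic regression on the components.
components = TruncatedSVD(n_components=100, random_state=0).fit_transform(likes)
X_train, X_test, y_train, y_test = train_test_split(components, attribute, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("AUC on held-out users:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))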

Similarity between Facebook Likes and other widespread kinds of digital records, such as browsing histories, search queries, or purchase histories suggests that the potential to reveal users’ attributes is unlikely to be limited to Likes. Moreover, the wide variety of attributes predicted in this study indicates that, given appropriate training data, it may be possible to reveal other attributes as well.

Predicting users’ individual attributes and preferences can be used to improve numerous products and services. For instance, digital systems and devices (such as online stores or cars) could be designed to adjust their behavior to best fit each user’s inferred profile. Also, the relevance of marketing and product recommendations could be improved by adding psychological dimensions to current user models. For example, online insurance advertisements might emphasize security when facing emotionally unstable (neurotic) users but stress potential threats when dealing with emotionally stable ones. Moreover, digital records of behavior may provide a convenient and reliable way to measure psychological traits. Automated assessment based on large samples of behavior may not only be more accurate and less prone to cheating and misrepresentation but may also permit assessment across time to detect trends. Moreover, inference based on observations of digitally recorded behavior may open new doors for research in human psychology.

On the other hand, the predictability of individual attributes from digital records of behavior may have considerable negative implications, because it can easily be applied to large numbers of people without obtaining their individual consent and without them noticing. Commercial companies, governmental institutions, or even one’s Facebook friends could use software to infer attributes such as intelligence, sexual orientation, or political views that an individual may not have intended to share. One can imagine situations in which such predictions, even if incorrect, could pose a threat to an individual’s well-being, freedom, or even life. Importantly, given the ever-increasing amount of digital traces people leave behind, it becomes difficult for individuals to control which of their attributes are being revealed. For example, merely avoiding explicitly homosexual content may be insufficient to prevent others from discovering one’s sexual orientation.

There is a risk that the growing awareness of digital exposure may negatively affect people’s experience of digital technologies, decrease their trust in online services, or even completely deter them from using digital technology. It is the authors’ hope, however, that the trust and goodwill among parties interacting in the digital environment can be maintained by providing users with transparency and control over their information, leading to an individually controlled balance between the promises and perils of the Digital Age.

______

______

Section-16

Privacy and mainstream media:

The freedom of the press is not expressly mentioned in Art. 19 of the Constitution of India, but it has been interpreted to be implied under it. The Constitution exhaustively enumerates the permissible grounds of restriction on the freedom of expression in Art. 19(2). So, a female who is the victim of sexual assault, kidnapping, abduction or a like offence should not further be subjected to the indignity of having her name and the incident published in the press. A newspaper, a journalist or anybody else has a duty to assist the State in the detection of crime and in bringing criminals to justice; withholding such information cannot be justified by the right to privacy. In Destruction of Public and Private Properties v. State of A.P., the Indian Supreme Court said that media reporting should be based upon the principles of impartiality and objectivity, ensuring neutrality; responsible reporting of sensitive issues, especially crime, violence, agitations and protests; sensitivity in reporting on women, children and matters relating to national security; and respect for privacy.

Investigative journalism may raise a number of eyebrows. But there is absolutely nothing wrong with it (no matter how intrusive) if its object is to bring to light corruption or any issue that borders on public morality, public safety, the economic wellbeing of the nation, the protection of health or morals, the prevention of crime or disorder, or any of the constitutional exceptions to the right to privacy. 

______

Privacy violations by Indian Media:

-1. Safeguarding Identity of Rape Victims

Section 228A of the Indian Penal Code makes disclosure of the identity of a rape victim punishable. In the recent Aarushi Talwar murder case and the case of the rape of an international student studying at the Tata Institute of Social Sciences (TISS), the media frenzy compromised the privacy of the TISS victim and besmirched the character of the deceased. In the TISS case, the media did not reveal the name of the girl, but revealed the name of the university and the course she was pursuing, which is in violation of the PCI norms. In addition to prohibiting the revelation of individuals’ names, the PCI norms expressly state that visual representations of moments of personal grief should be avoided. In the Aarushi murder case, the media repeatedly violated this norm.

-2. Trial by Media & Victimisation by Media 

The PCI norms lay down guidelines for reporting cases and avoiding trial by media. The PCI warns journalists not to give excessive publicity to victims, witnesses, suspects and the accused, as that amounts to an invasion of privacy. Similarly, identification of witnesses may endanger their lives and force them to turn hostile. Zaheera Sheikh, a key witness in the Gujarat Best Bakery case, was a victim of excessive media coverage and sympathy. Her turning hostile invited an equal amount of media speculation and wrath, and her excessive media exposure possibly endangered her life. Instead of focusing on the lack of a witness protection programme in the country, the media focused on the twists and turns of the case and the 19-year-old’s conflicting statements. The right of the suspect or the accused to privacy is recognised by the PCI to guard against trial by media.

Swati Deshpande, a Senior Assistant Editor (Law) at the Times of India, Mumbai, observes, “As a good journalist one will always have more information than required, but whether you publish that information or exercise restraint is up to you.” In her 11 years of court reporting, she says, there have been instances when she has exercised the option of not reporting certain information that could be defamatory and could not be attributed. If an allegation is made in a courtroom but is not supported by evidence or facts, then it is advisable that it be dropped from the report. In her view, the publication of the Tata–Radia conversations did, to a certain degree, violate Tata’s privacy. “Media needs to question itself prior to printing on how the information is of public interest. Of course, as a journalist you do not want to lose out on a good story, but there needs to be gate keeping, which is mostly absent in most of the media today.”

In the Bofors pay-off case the High Court of Delhi observed, “The fairness of trial is of paramount importance as without such protection there would be trial by media which no civilised society can and should tolerate. The functions of the court in the civilised society cannot be usurped by any other authority.” It further criticised the trend of the police or the CBI holding press conferences for the media while investigation of a crime is still ongoing. The court agreed that media coverage creates awareness of the crime, but held that the right to fair trial is as valuable as the right to information and freedom of communication.

If a suspect’s pictures are shown in the media, identification parades of the accused conducted under the Code of Criminal Procedure would be prejudiced. Under the Contempt of Courts Act, publications that interfere with the administration of justice amount to contempt. Further, the principles of natural justice emphasise fair trial and the presumption of innocence until proven guilty. The rights of an accused are protected under Article 21 of the Constitution, which guarantees the right to fair trial. This protects the accused from the over-zealous media glare which can prejudice the case. In recent times, however, the media has failed to observe restraint in covering high-profile murder cases, and much of this coverage has been hailed as the media’s success in ensuring justice for the common man.

The Apex Court observed that the freedom of speech has to be carefully and cautiously used to avoid interference in the administration of justice. If trial by media hampers fair investigation and prejudices the right of defence of the accused, it would amount to a travesty of justice. The Court remarked that the media should not act as an agency of the court. The Court commented, “Presumption of innocence of an accused is a legal presumption and should not be destroyed at the very threshold through the process of media trial and that too when the investigation is pending.”

-3. Sting Operations

On 30 August 2007, Live India, a news channel, conducted a sting operation on a Delhi government school teacher who was allegedly forcing a girl student into prostitution. Following the media exposé, the teacher, Uma Khurana, was attacked by a mob and was suspended by the Directorate of Education, Government of Delhi. Later investigation and reports by the media exposed that there was no truth to the sting operation: the girl student who was allegedly being forced into prostitution was in fact a journalist, and the sting operation was stage-managed. The police found no evidence against the teacher to support the allegations of child prostitution made in the sting operation. In this case, the High Court of Delhi charged the journalist with impersonation, criminal conspiracy and creating false evidence. The Ministry of Information and Broadcasting sent a show-cause notice to TV-Live India, alleging that the channel’s telecast of the sting operation was “defamatory, deliberate, containing false and suggestive innuendos and half-truths.”

The court observed that false and fabricated sting operations violate a person’s right to privacy. It further observed, “Giving inducement to a person to commit an offence, which he is otherwise not likely and inclined to commit, so as to make the same part of the sting operation is deplorable and must be deprecated by all concerned including the media.” It commented that while “…sting operations showing acts and facts as they are truly and actually happening may be necessary in public interest and as a tool for justice, but a hidden camera cannot be allowed to depict something which is not true, correct and is not happening but has happened because of inducement by entrapping a person.” The court criticised the role of the media in creating situations of entrapment and using the ‘inducement test’, remarking that such inducement tests infringe upon the individual’s right to privacy. It directed news channels to take steps to prohibit reporters from producing or airing any programme that is based on entrapment or that is fabricated, intrusive and sensitive.

In a nutshell:

Unwarranted invasion of privacy by the media is widespread in India. In the UK, Sweden, France and the Netherlands, for instance, photographing a person without consent or retouching any picture is prohibited, unlike in India, where press photographers do not expressly seek the consent of the person being photographed if he/she is in a public space. In France, not only is the publication of private information prohibited on account of the right to privacy, but the method by which the information is procured also falls within the purview of the right to privacy and can itself be violative. This includes information or photographs taken in both public and private spaces. Privacy within public spaces is recognised, especially “where there is reasonable expectation of privacy.” The Indian norms or code of ethics in journalism fail to make such a distinction between public and private space. Nor do the guidelines impose any restriction on photographing an individual without seeking the individual’s express consent.

The Indian media violates privacy in day-to-day reporting, for instance by overlooking the issue of privacy to satisfy morbid curiosity. The PCI norms prohibit such reporting unless it is outweighed by ‘genuine overriding public interest’. Almost all of the above countries prohibit publication of details that would hurt the feelings of the victim or his/her family. Unlike in the UK, where the PCC can pass desist orders, in India the family and/or relatives of victims are hounded by the media.

In India, the right to privacy is not a positive right; it comes into effect only in the event of a violation. The law on privacy in India has evolved primarily through judicial intervention, and it has failed to keep pace with technological advancement and the burgeoning of 24/7 news channels. The prevalent right to privacy is easily compromised for competing claims of ‘public good’, ‘public interest’ and ‘State security’, and much of what constitutes public interest or what is private is left to the discretion of the media.

_______

_______

Safeguarding Privacy in the Media:

Freedom of expression, the role of the media and responsible journalism:

-1. Freedom of expression

The right to freedom of expression is a right guaranteed to everyone. It includes the right to hold opinions and to receive and disseminate information and ideas without interference by public authorities. However, states do have the right to require licensing for broadcasting, television or cinema enterprises. This right is an essential element of a democratic society and a basic condition for its progress and for each individual’s self-fulfilment. Going beyond information and ideas that are favourably received or regarded as inoffensive or as a matter of indifference, the right to freedom of expression extends to information that could offend, shock or even disturb. Pluralism of the media is an important aspect of the right to freedom of expression. In a democratic society, pluralism of opinions in the media must not only be tolerated but actively promoted and facilitated. The different voices and opinions present in a society must be included and reflected in the media. In this way, tolerance and broadmindedness are built.

-2. Media as public watchdogs with rights and responsibilities

Members of the media are considered to be public watchdogs with a vital role in a democratic society. They have a duty to disseminate information and inform the public regarding all matters of public interest, which the public also has a right to receive.

Nonetheless, a journalist’s right to freedom of expression is not absolute. Journalists have rights and responsibilities. In this regard, the term “rights” is construed as journalists’ prerogative to exercise their profession and report on matters of public interest, whereas the term “responsibilities” means that they should act in good faith and provide accurate and reliable information in accordance with the ethics of journalism.

Journalists are required to verify facts before they publish them, though the same requirement does not apply when they report and disseminate value judgments (opinions). However, even opinions must have some factual grounds. In Bodrožić v. Serbia, the Court found it acceptable for a journalist to criticise a historian by calling him “idiot” and “fascist” because his opinion was published in response to an appearance by the historian on a television show where he had discussed ethnic and national tensions in the Balkans. The offensive words were thus not to be interpreted as statements of facts, but as an opinion in reaction to the historian’s own intolerance towards national minorities.

In special circumstances, it is justified for journalists not to verify factual statements. For example, when journalists report on the content of official reports or information from the government or public records, they are not required to conduct additional independent research to verify those facts.

It is up to journalists to decide how a press article is presented, and they are allowed to use a certain degree of exaggeration or even provocation. They may therefore enhance articles and attempt to present them attractively, provided they do not misrepresent or mislead readers.

In reporting matters of privacy, journalists might be limited by court injunctions because prior restraint on publications is not prohibited. However, it is important to know that judicial authorities are required to carefully assess injunctions concerning the press, because news is a perishable commodity and to delay its publication, even for a short period, may well deprive it of all its value and interest.

Whenever possible and practical, journalists should ask for comments from the subjects of their reports, though they are not bound to inform them before the publication or broadcast. In Mosley v. UK, the person concerned was photographed and filmed participating in sado-masochistic activities with prostitutes. He successfully sued a newspaper for a violation of his right to privacy, but also sought to legally force the media into notifying persons concerned in advance of their intention to publish material. The Court, however, decided that it was not necessary for the media outlets to give prior notice of intended publications to those whom they feature in them.

-3. Responsible journalism v. tabloid journalism

Responsible journalism means that journalists exercise their profession by acting in good faith and collecting and disseminating information in line with journalistic ethics. They make sure the reporting is balanced, for instance by repeatedly attempting to contact the persons concerned for comment before publication.

Tabloid journalism means that personal information (especially photos) is collected in a climate of continual harassment, which can instil a very strong sense of intrusion into private life or even of persecution of the persons concerned.

Whether personal information is published by a journalist adhering to the principles of responsible journalism or by tabloids publishing such information solely to satisfy the curiosity of the public is an important element of the legal assessment. Journalists practising responsible journalism enjoy stronger protection of their right to freedom of expression. It is however not for the national authorities to decide what reporting techniques should be adopted by journalists.

Journalists should be mindful that the public does not need to know the whereabouts of a well-known person or how he or she behaves privately even when that person appears in places that could not always be described as entirely private.

Publishing photos with accompanying commentaries relating exclusively to details of private life, without consent and especially when the photos are taken secretly from a distance, is likely to infringe the right to privacy of public figures. Such photos are not necessarily considered to contribute to a debate of general interest. This standard is even stricter in cases concerning private individuals.

Journalists are in principle obliged to respect the law and ethical codes while reporting news and should exercise utmost caution in situations which may amount to a violation of applicable laws. Law-breaking can only be justified in situations where the interest in the public’s being informed outweighs the duty to obey ordinary (criminal) law. For example, a journalist must comply with a police order to leave the scene during public demonstrations or risk being detained by law enforcement officers. Likewise, journalists who opt, for instance, to illegally buy firearms to prove that weapons are easily accessible cannot expect to be exempt from prosecution.

_

Private life and conditions for publishing private matters:

-1. Private life

The right to private life is guaranteed to everyone.

The notion of private life is a broad term with no strict definition, covering but not limited to the physical and psychological integrity of a person and multiple aspects of a person’s identity such as gender identification and sexual orientation, name or elements relating to a person’s right to their image. A person’s reputation is also part of the right to private life.

Private life extends to the right to freely establish and develop relationships (including romantic ones) with other human beings. In addition, information relating to medical conditions, home addresses, fathering a child out of wedlock and sexual activities are considered to fall within the sphere of private life.

The right to privacy means that everyone, i.e. both private and public figures, has the right to live privately, away from unwanted attention (subject to some exceptions).

As a principle, a publication concerning strictly private matters infringes the right to respect for private life, unless consent of the concerned person is obtained or such publication is considered in the public interest. As such, decisions on what is considered to be a private matter and what has entered into a public sphere need to be taken by journalists themselves on a case-by-case basis.

The more intimate the aspect of private life being disclosed, the more serious the justification must be.

-2. Consent

As a general rule, personal information should not be made public without the consent of the concerned person. Consent is an important element in determining whether a publication of a detail from private life interferes with the right to privacy.

That being said, information about individuals can also be published without consent if there is an overriding public interest, i.e. if the disclosure of information is justified by a general interest or concern, which is considered to prevail over the considerations of the concerned individual’s privacy. The concept of public interest may therefore constitute an “alternative justification” for a publication.

Alleging a violation of her right to privacy, Princess Caroline von Hannover complained several times about the publication of photographs from her private sphere in German magazines. The Court also examined the manner in which the photographs were obtained, stressing the importance of obtaining the consent of the persons involved. In Von Hannover (no. 2) v. Germany, for example, a picture of the Princess on a skiing holiday was published alongside an article about her father’s illness, which was found to contribute to a debate of general interest. Therefore, even in the absence of her consent, the publication was found to be justified.

However, in any publication without consent the rule is: the more private the matter, the greater the call for caution. For example, a person’s romantic relationships are in principle a strictly private matter. Accordingly, details concerning an individual’s sex life or intimate relations are only permitted to become public without consent in exceptional circumstances. This was the case in Couderc and Hachette Filipacchi Associés v. France.

In practice, information and images published with the consent of the persons involved generally do not pose problems. Judicial proceedings are mostly initiated in cases where no such consent was obtained.

-3. Public interest

Generally speaking, public interest relates to matters affecting the public to such an extent that it may rightfully (legitimately) take an interest in them, attracting its attention or concerning the public significantly.

Areas considered to be of public interest include yet are not limited to misuse of public office, improper use of public money, protection of public health, safety and environment, protection of national security, crime and social behaviour and similar political and socioeconomic topics.

Journalists may publish ordinarily personal information when it serves a greater value and is used to discuss a matter in the public interest (published personal information should serve some important purpose). The greater the information value for the public, the more the interest of a person in being protected against the publication has to yield, and vice versa.

Along these lines, journalists may republish personal information already made public by the concerned person. In Krone Verlag GmbH & Co. KG v. Austria, a journalist took and used a picture of a politician from the website of the Parliament to accompany an article revealing that he had allegedly received unlawful salaries. Journalists can also republish information and photographs of private individuals which were originally published with their consent, insofar as the information is a matter of legitimate public interest (Eerikainen and others v. Finland).

News reports need not entirely be devoted to a debate of public interest to contribute to that debate, as it may suffice for the article to be concerned with the debate or contain one or several elements thereof.

It is difficult to define public interest clearly because of the risk of excluding some matters or of proposing an overly narrow definition. The decision on whether to publish personal information about a public figure or private person will always depend on the circumstances of the case. Journalists are thus expected to apply the public interest test and balance the strength of considerations in favour and against disclosure on a case-by-case basis.

In determining public interest, what should matter to journalists is whether the news report is capable of contributing to a debate of general interest, not whether it will fully achieve that objective. In Erla Hlynsdottir v. Iceland, a journalist reported that the director of a Christian rehabilitation centre and his wife had been involved in sex games with the patients of the centre. Although the wife was not ultimately convicted, reporting on the allegations, which involved private sexual activities, contributed to the public interest. Public interest applies, among other things, to matters which are capable of giving rise to considerable controversy, or involve a problem that the public would have an interest in being informed about. It cannot, however, be reduced to the public’s thirst for information about the private life of others or to the reader’s wish for sensationalism or even voyeurism, as was the case with the publication of details of Max Mosley’s sexual activities in the Court case discussed above. If the sole aim of an article is to satisfy the curiosity of the readership regarding details of a person’s private life, it cannot be deemed to contribute to any debate of general interest to society.

-4. Public figures

Public figures are persons holding public office and/or using public resources. More broadly speaking, public figures include anyone with a role in public life, regardless of whether the domain is politics, the economy, the arts, the social sphere, sports or other.

People’s private lives have become a highly lucrative commodity for some media outlets. The targets are mostly public figures, since details of their private lives serve as an impetus for sales. Yet public figures should know that the position they hold in society – in many cases by choice – automatically entails increased pressure on their privacy.

In determining whether a person is a public figure, it is of little importance for journalists whether a certain person is actually known to the public. Journalists cannot be limited by the claims of concerned persons that they are not actually known to the public. What matters is whether the person has entered the public arena by participating in a public debate or by being active in a field of public concern.

Public figures inevitably and knowingly lay themselves open to close scrutiny of their every word by both journalists and the public at large. Their right to keep their private life protected from the eyes of the public is therefore more restricted. Freedom of expression in the sphere of politics would receive a fatal blow if public figures could censor the press and public debate in the name of personality rights.

When reporting matters involving private aspects of life, journalists should pay special attention to the role or function of the concerned person and the nature of the activities subject of the news report. Depending on whether or not he or she is vested with official functions, an individual will enjoy a more or less restricted right to his or her intimacy. For example, Princess Caroline von Hannover is considered to be a public figure, but does not exercise any official functions, which allows her the right to enjoy a higher degree of privacy than that enjoyed by a person holding a public office.

Public figures with the lowest expectation of privacy are politicians. The exercise of a public function or an aspiration to political office necessarily exposes an individual (even after death) to the attention of the public, including in many areas that come within one’s private life. In Editions Plon v. France, a journalist and the former private physician to the late French President Mitterrand wrote a book describing the state of his health during his term of office. The president’s heirs successfully sued to prohibit further dissemination of the book, alleging that it invaded the former president’s privacy and interfered with the personal life and feelings of his widow and children. The Court, however, ruled in favour of the journalist and the physician, finding that it was in the public interest to discuss the history of a president who had served two terms in office.

Certain private actions of public figures cannot be regarded as private on account of their potential impact, viewed from the perspective of the role played by those figures in political or social spheres, and of the public’s resulting interest in being informed thereof. For example, an arrest of a well-known television actor (who might be considered as a role model for young people) for possession and use of illegal drugs is likely to be considered a matter of public interest worth reporting.

Journalists should respect the legitimate expectations of public figures to privacy when they engage in purely private activities such as participating in sports, walking, leaving a restaurant or when on holiday or in intimate relationships (marital problems, extramarital affairs), if the reporting does not contribute to a matter of public interest.

-5. Private individuals

Private individuals, who have not entered the public domain, in principle enjoy greater protection of their right to privacy. However, their actions may take them into the public sphere, which is why there is no absolute ban on journalists reporting about them, even without their consent.

In certain cases, journalists may report on and even name private individuals. In Standard Verlags GmbH v. Austria, a newspaper reported on speculation losses incurred by a bank and the ensuing criminal investigation. In its reporting, the newspaper named the banker under investigation. The Court found that, although the banker could not be considered a public figure merely because he was a senior employee of the bank and the son of a politician, the journalist was nevertheless justified in publishing his name because he had headed the bank’s treasury at the time the losses were incurred.

Private individuals voluntarily involved in controversial undertakings cannot expect absolute privacy. For example, journalists would be allowed to name persons doing business with prostitutes (there are on-going discussions in many countries as to whether strip clubs should have more stringent regulations or be banned altogether). In this regard, by choosing to engage in a highly controversial business, these private individuals have entered the public domain and thus opened themselves to scrutiny from journalists.

Journalists should pay particular attention to the wider implications that the publication of personal information may entail, such as possible exclusion from the local community. In Armonienė v. Lithuania the Court addressed the severe moral and psychological trauma suffered by an entire family, which drove them to move from their village after a journalist disclosed that a member of that family was infected with HIV.

_

Specific issues of private life:

-1. Family, home, property

Family members, relatives and friends of public figures, not being public figures themselves, enjoy a higher degree of privacy, though there are cases in which journalists are allowed to report about them. In Flinkkilä and Others v. Finland, publishing the name, age, picture, workplace and family relationship details of the partner of a public figure was not considered to be in violation of privacy because she was involved in a domestic incident which had resulted in public disorder charges (both being criminally charged, fined and convicted).

Articles about children of public figures regularly appear in newspapers. If such publications are made only to trigger gossip, journalists do not enjoy strong protection of their right to freedom of expression. In Zvagulis v. Lithuania, a newspaper reporting that a prominent pop star had a child born out of wedlock violated his right to privacy, since the newspaper was unable to link this information to the pop star’s professional activity. The Court considered that the child’s existence did not go beyond the private sphere and that publication was stressful for the public figure and harmful for the psychological integrity of the child.

The right to privacy includes not only the right to an actual physical area but also the quiet enjoyment of that area. A person’s home address is personal data; hence it is protected and in principle should not be made available to the public by journalists. In Alkaya v. Turkey, a journalist reporting on the burglary of the home of a famous actress violated her right to privacy by disclosing her home address. The Court found that, even assuming a public interest in reporting that she was burgled, there was no such interest in publishing the exact details of her home address. The location of other places related to private spheres of life may also be problematic, as in the case of the treatment centre frequented by N. Campbell.

-2. Physical and moral integrity

-Medical information:

Journalists should pay particular attention to medical information because it is of fundamental importance to a person’s enjoyment of his or her right to respect for privacy. It is crucial to not only respect the sense of privacy of a patient but also preserve his or her confidence in the medical profession and health services in general. Otherwise, the impact could be so negative that those in need of medical assistance may hesitate to disclose such information to receive the appropriate treatment.

In Fürst-Pfeifer v. Austria, an article about a registered psychological expert for court proceedings was published in December 2008 on a regional news website. The article stated in particular that the psychological expert suffered from psychological problems such as mood swings and panic attacks but had been working as a court-appointed expert for many years. According to the Court, a serious debate on the mental health status of a psychological expert, evoked by reasoned suspicions, has to be seen as a debate of general interest, as an expert in court proceedings is required to meet standards of physical and psychological fitness.

In Armonienė v. Lithuania, the largest national daily newspaper published details about the medical condition of a private person who was suffering from HIV. After the person concerned died, his wife continued the legal proceedings. The Court found that publicly disclosing the husband’s state of health and indicating his full name, surname and residence was not in the public interest. By confirming information on the husband’s illness, the employees at the AIDS centre could have negatively affected the willingness of others to be voluntarily screened for HIV.

In Mitkus v. Latvia, the newspaper violated a prisoner’s privacy when it reported that he was infected with HIV. The article included a picture, although the national judicial authorities had prohibited its publication. The Court found that since the prisoner’s features were clearly visible (and his first name, the first letter of his surname, details of his criminal record and his place of imprisonment were mentioned), it was perfectly possible that his fellow prisoners and other persons could identify him and behave differently towards him based on his state of health.

-Moral integrity:

In principle, it will be difficult for a journalist to justify reporting about private, especially intimate relationships of public figures if they do not contribute to a debate of general interest. In Standard Verlags GmbH v. Austria, a newspaper violated the privacy of the persons concerned when it published an article commenting on rumours that the wife of the then Austrian President sought to divorce him and was maintaining close contacts with another politician. According to the Court, journalists can report information concerning politicians’ state of health, which might prevent them from exercising their duties, but the same freedom does not apply to pointless gossip about their marriages.

-3. The right to one’s image

A person’s image constitutes one of the principal attributes of his or her personality, as it reveals the person’s unique characteristics and distinguishes the person from others. It is an essential component of personal development and everyone has the right to control the use of his or her own image. In this light, the publication of a photograph in general constitutes a more substantial interference with the right to privacy than the mere communication of a person’s name. Individuals have the right to refuse publication of their image and to object to the recording, conservation and reproduction of the image by another person.  Journalists should, in principle, secure the consent of the person concerned at the time the picture is taken and not simply if and when it is published. Otherwise an essential attribute of personality (the image) is dependent on third parties and the person concerned has no control over it. Images taken without consent of the persons concerned or secretly without their knowledge will result in a violation of the right to privacy, unless they are considered to contribute to a debate of public interest.

In MGN Limited v. the United Kingdom, a newspaper published an article about the supermodel Naomi Campbell. The title on its front page read “Naomi: I am a drug addict” and a longer article inside the newspaper elaborated on Campbell’s addiction treatment. The articles were accompanied by photos taken secretly near the Narcotics Anonymous centre she was attending at the time. National courts concluded that the publication of the information was justified as a matter of public interest, given that Campbell had previously publicly denied drug use and the articles disclosed that she was being treated for drug addiction. However, although the publication of that information was justified, the Court found that the additional publication of the photographs was offensive and distressing for her, and infringed on her right to respect for private life.

In Müller v. Germany, the applicants were first informed about their son’s presumed (and later confirmed) suicide from a newspaper article featuring their son’s photograph. While the publication of the photograph without their consent was considered a violation of the applicants’ privacy, the accompanying article was accurate and in no way defamatory, and the photograph itself bore no particularities. In addition, the applicants could have sought injunction to prevent further publication of unconfirmed information. The combined effect of those factors lessened the gravity of the violation of privacy, so the applicants were not awarded any damages. 

-4. Specific cases of photographing and filming

Images of violent or traumatic events:

As part of their responsibilities, journalists should be sensitive when publishing information concerning people who are affected by tragedy or grief, since publication of such information might result in a violation of the right to privacy of those affected. In Hachette Filipacchi Associés v. France, a weekly magazine published an article illustrated by a photograph of a murdered high official’s body lying on the road, facing the camera. Family members successfully sued the magazine for violation of privacy.

CCTV:

Journalists should refrain from publishing footage taken by closed-circuit television (CCTV) featuring private persons without masking the pictures, unless that information contributes to a debate of general interest. In Peck v. United Kingdom, a private individual (who was suffering from depression yet was not accused of any criminal offence) was recorded while walking in the street with a kitchen knife in his hand and subsequently attempted to slit his wrists. Publication of this footage by the local council and the media was considered to be in violation of his right to privacy.

Hidden cameras:

Investigative journalists are allowed to use hidden cameras to record interviews with non-public figures only under certain conditions. Using hidden cameras is allowed when a) the matter contributes to the public debate, b) the reporting does not focus on the person personally but on one of his or her professional aspects, c) the person’s face and voice are disguised, and d) the interview is not conducted at the person’s usual business premises.

In Haldimann and Others v. Switzerland, four journalists were involved in recording and broadcasting a documentary on the sale of life insurance products against a background of public discontent with the practices used by insurance brokers. The documentary contained sequences of an interview recorded by hidden camera to highlight an insurance broker’s malpractice. The Court held that interference in the private life of the broker, who had decided against expressing an opinion on the interview, had not been serious enough to override the public interest in receiving information on the alleged malpractice in the field of insurance brokerage.

However, an individual’s celebrity or functions cannot under any circumstances justify hounding by the media or the publication of photographs [or information] secured through fraudulent or clandestine operations, or disclosures portraying details of an individual’s private life and representing an intrusion into their intimacy.

Taking pictures at the weddings of well-known figures:

Reporting about weddings of well-known figures and publishing pictures of the ceremonies are in principle allowed, under certain conditions even without consent, because such events have a public side. In Sihler-Jauch and Jauch v. Germany, a weekly magazine published an article, illustrated by several photographs, about the wedding of a well-known TV presenter. It was decided that the journalist did not violate the couple’s right to privacy because the presenter was well known and had a strong influence in shaping public opinion. Furthermore, the guest list included prominent names, including the mayor of Berlin, and the couple was not portrayed in a negative light.

Likewise, in Lillo-Stenberg and Saether v. Norway, a well-known musician and actress complained about the press invading their privacy during their wedding party. A magazine published a two-page article about the wedding accompanied by six photographs without the couple’s consent. The Court deemed that their privacy was not violated because the event was held in an open and accessible place, they were not portrayed in a negative light, and their wedding party was a less private affair than a marriage ceremony would have been.

Children:

Journalists should avoid publishing pictures of the children of public figures if such information does not contribute to a debate of public interest. In Kahn v. Germany, pictures of two children of Oliver Kahn, a former goalkeeper of the German national football team, and his wife were featured in a magazine. The journalists were fined because they had violated the family’s right to privacy. All the photos showed the children in the company of their parents or on holiday, though the subject of the reports had not been the children themselves, but rather their parents’ relationship and Oliver Kahn’s career. In Reklos and Davourlis v. Greece, taking pictures of a new-born baby without the consent of his parents (in the intensive unit to which only hospital staff should have had access) was considered to be a violation of the right to privacy even though the pictures were not published.

_

Crime reporting:

When reporting about crimes, journalists should pay particular attention to whether the person concerned is known to the public. The mere fact that a person is subject to criminal investigation, even for a very serious offence, does not justify treating him or her in the same manner as a public figure who is more exposed to publicity.

General principles:

The public has a legitimate interest in being informed about crimes, investigation proceedings and trials. While the aim of crime reporting is to inform the public, journalists should nevertheless report in good faith and refrain from publishing groundless and unverified accusations.

In particular, journalists should not present a person as guilty until a conviction has been pronounced by a court. A clear distinction should be made between suspicion and conviction. As a matter of good practice, the media could specify whether a person has pleaded guilty or not, taking into consideration that a confession of guilt should not be presented as proven guilt.

The right of victims (minors) to protect their identity:

In Krone Verlag GmbH & Co KG and Krone Multimedia GmbH & Co KG v. Austria, a newspaper revealed the identity of a minor victim of sexual abuse by publishing her photograph on its website. Although the issue was a matter of public concern, given that neither the offenders nor the victim were public figures or had previously entered the public sphere, knowledge of their identity was not necessary to understand the particulars of the case. The child was not a public figure, and the Court did not consider that she had entered the public scene by becoming the victim of a criminal offence which attracted considerable public attention.

The right to privacy of a presumed paedophile:

In Y v. Switzerland, a journalist was found to have violated the right to privacy of a person prosecuted for paedophilia, who was eventually released. The article contained a considerable amount of detailed information and extracts from the complainant’s statements to the police, which was deemed to be in violation of his right to privacy and did not contribute to a public debate.

Revealing the identity of an investigated police officer:

In Wirtschafts-Trend Zeitschriften-Verlagsgesellschaft v. Austria, a news magazine published an article with excerpts of the minutes of preliminary investigations in criminal proceedings against three foreign police officers who were on a deportation flight. The deportee they were escorting had died under unclear circumstances. The Court ruled that the disclosure of the identity of one officer by the news magazine had negatively affected his private and social life and particular care had to be taken to protect him against a condemnation by the media.

Suspected persons:

Journalists are in principle allowed to publish pictures of public figures under investigation, e.g. on the suspicion of large-scale tax evasion. In Verlagsgruppe News GmbH v. Austria, the newspaper published an article about pending investigations on suspicion of large scale tax evasion against the managing director of a well-known pistol manufacturer. Such reporting was not considered to violate the right to privacy of the managing director.

Journalists should be much more careful when lesser-known persons are in question. In Khuzhin and Others v. Russia, publishing (during a talk show) pictures of the passports of persons charged with kidnapping and torture, a few days before their trial, resulted in a violation of their right to privacy.

Publishing banal aspects of accused persons:

In Bedat v. Switzerland, a journalist was considered to have violated the right to privacy of a private person accused of causing the deaths of three people in a car accident. The Court deemed that publishing records of interviews, statements made by the accused’s wife and doctor, and letters sent by the accused to the investigating judge concerning banal aspects of his everyday life in detention did not contribute to a public debate. In addition, the Court stated that the journalist had painted a highly negative picture of the accused, adopting a quasi-mocking tone, with large close-up photographs of the accused accompanying the text, as proof that the journalist sought to create a sensationalist article.

Persons in custody:

In Toma v. Romania, after the police had taken a person into custody for possession of drugs, some police officers contacted journalists and invited them to record pictures of the person concerned at the police headquarters. The Court found that this person’s right to privacy had been violated.

In another case, Khmel v. Russia, the police had invited journalists to the police station to film a member of the regional legislature who had been arrested on suspicion of drunk driving and unruly conduct. Some of the footage was broadcast on television, and this was considered to be in violation of his right to privacy.

Convicted persons in emotional situations:

In Egeland and Hanseid v. Norway, two newspapers had published, albeit without consent, photographs of an individual about to be taken away to serve a long prison term to which she had just been sentenced. Although the photographs had concerned a public event and had been taken in a public place at a time when her identity was already well known to the public, the Court found that the newspapers’ portrayal of her had been particularly intrusive as she was in tears and in great distress. She had just been arrested inside a courthouse after having been notified of a verdict convicting her of triple murder entailing the most severe sentence.

Convicted persons released on parole:

It is often the case that public authorities, especially law enforcement bodies, release pictures of wanted, arrested or released-on-parole persons. In principle, journalists are allowed to republish such pictures. In Österreichischer Rundfunk v. Austria, it was acceptable to broadcast the picture of the head of a neo-Nazi organisation, who had been released on parole. According to the Court, his interest not to have his physical appearance disclosed was not more important than the fact that he was a notorious person who had committed crimes of a political nature.

_______

_______ 

Do privacy laws only serve the rich and powerful?

It is certainly the case that the rich and powerful make greater use of privacy laws than the rest of us. But that is largely because the press and other media publish much more about their private lives, particularly their sexual affairs, than they do about the lives of ordinary citizens, which are generally of no interest at all to most readers and viewers. Further, only the wealthy can afford to bring actions in the courts. That is itself no objection to privacy laws as such, any more than their prohibitive cost is an argument against the existence of the Ritz or Dorchester hotels. The solution is to reduce the cost of legal actions, or, more realistically, to ensure that members of the public – whose private lives do sometimes attract the attention of the media – have access to other, inexpensive tribunals to protect their privacy.

_______

_______

Section-17

Mobile privacy:  

No one doubts that the rapid growth of mobile technologies provides enormous value to both businesses and consumers. Mobile devices are revolutionizing how consumers interact, communicate, and carry out everyday activities. In a typical day a consumer may use a mobile device to read the latest news, email, text, pay bills, place and receive phone calls, post status updates on a social networking site, download and launch an app to find nearby movie theatres and buy tickets to the latest release, and even pay for a cup of coffee. At the same time, mobile technology presents unique privacy challenges. 

First, more than other types of technology, mobile devices are typically personal to an individual, almost always on, and with the user. This can facilitate unprecedented amounts of data collection. The data collected can reveal sensitive information, such as communications with contacts, search queries about health conditions, political interests, and other affiliations, as well as other highly personal information. This data also may be shared with third parties, for example, to send consumers behaviourally targeted advertisements.

Second, in the complicated mobile ecosystem, a single mobile device can facilitate data collection and sharing among many entities, including wireless providers, mobile operating system providers, handset manufacturers, application developers, analytics companies, and advertisers to a degree unprecedented in the desktop environment.  This can leave consumers wondering where they should turn if they have questions about their privacy.

Third, mobile devices can reveal precise information about a user’s location that could be used to build detailed profiles of consumer movements over time and in ways not anticipated by consumers. Indeed, companies can use a mobile device to collect data over time and “reveal the habits and patterns that mark the distinction between a day in the life and a way of life.”  Even if a company does not intend to use data in this way, if the data falls in the wrong hands, the data can be misused and subject consumers to harms such as stalking or identity theft.  

In recent studies, consumers have expressed concern about their privacy on mobile devices.  For example, a nationwide survey indicated that 57% of all app users have either uninstalled an app over concerns about having to share their personal information, or declined to install an app in the first place for similar reasons.   Similarly, in a 2011 survey of U.S. smartphone users, less than one-third of survey respondents reported feeling in control of their personal information on their mobile devices.  Lack of attention to these concerns could lead to an erosion of trust in the mobile marketplace, which could be detrimental to both consumers and industry.

Finally, with many devices possessing screens of just a few inches, there are practical challenges in terms of how critical information – such as data collection, sharing of information, and use of geolocation data – is conveyed to consumers.

_

One of the major challenges faced by the growth of the mobile internet is that the security and privacy of consumers’ personal information is regulated by a patchwork of geographically bound privacy regulations, while the mobile internet is, by definition, international. In addition, important categories of information such as location or traffic data are often only subject to privacy rules when processed by a mobile operator but not when processed by an internet content provider. This inconsistent applicability of rules is likely to be exacerbated as more devices and sensors are interconnected through the ‘Internet of Things’.

This misalignment between national or market-sector privacy laws and global data flows makes it impossible for consumers’ privacy expectations to be met in a consistent way by all the parties accessing their data. Equally, the misalignment distorts the market for data, causing legal uncertainty for operators, which can deter investment and innovation.

The wide range of services available through mobile devices offers varying degrees of privacy protection. To give consumers confidence that their personal data is being properly protected, irrespective of service or device, a consistent level of protection must be provided. The necessary safeguards should derive from a combination of internationally agreed approaches, national legislation and industry action.

_

The privacy of mobile users is impacted by a number of factors, often controlled by multiple stakeholders. In many cases, a user’s privacy will be primarily impacted by the collection, use or disclosure of their personal information. This will often be undertaken by the person or organisation providing the relevant service or application. But other factors may be involved, such as the default settings or controls provided within an application, the prompts a user receives when installing applications or using certain features, and the way data about that user is made available to other applications or services. Different stakeholders, such as the relevant service or application provider, the mobile operator, the handset manufacturer and the operating system or other software provider, will often control these factors. Each of these industry stakeholders should bear some responsibility for achieving the desired privacy outcomes for mobile users.

Governments should ensure legislation is technology neutral and that its rules are applied consistently to all players in the internet ecosystem. Because of the high level of innovation in mobile services, legislation should focus on the overall risk to an individual’s privacy, rather than attempting to legislate for specific types of data. For example, legislation must deal with the risk to an individual arising from a range of different data types and contexts, rather than focusing on individual data types.

_

The GSMA Mobile Privacy Principles:

These principles apply to applications and services that may impact a mobile user’s privacy. This includes applications or services that seek to access, collect and otherwise use personal information and other private data about users which may be held on a mobile handset or generated by their use of a mobile application or service. The principles also apply to activities that impact user privacy in other ways, such as through intrusion, unwarranted contact or real-time monitoring.

____

Different avenues of Data Collection that violate mobile privacy:

By its very definition, privacy demands that sensitive or confidential information should remain solely in the possession and control of the person or organization it belongs to. However, the very act of using mobile devices in the course of our everyday affairs makes this practically and logistically impossible.

Legitimate apps and mobile operating systems need to access and distribute information about the user, simply to perform their stated function. Maps and navigation tools need to establish where you are. Account and personal credentials need to be traded back and forth between devices and websites to log users in, and so on.

But beyond the functional requirements, there are other avenues of data collection that may not be quite so obvious, straightforward, or acceptable – even though strictly speaking they may actually be legal. Apps, some websites, and third-party operators in the cloud may at any time be engaged in data gathering from your mobile devices via the following methods:

  • Network service providers monitoring incoming and outgoing calls, text messages, and emails
  • Network carriers keeping a record of how often you access the internet
  • Geo-location tools establishing your location and tracking your movements
  • Geo-tagging features on smartphone cameras and certain websites (e.g. social media platforms) marking your location when you take a picture or shoot a video clip (see the sketch after this list)
  • Websites, social media, and eCommerce platforms keeping a record of your personal and account data
  • Browser cookies being deposited to note your login credentials, viewing habits, and movement between sites
  • Email addresses, contact information, browsing activity, and other data logged by mobile apps and shared with third-party advertising or marketing networks
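
To make the geo-tagging item above concrete, here is a minimal, hypothetical Python sketch. It assumes the Pillow imaging library (a recent version that offers the Exif.get_ifd method) and an illustrative file named photo.jpg that actually contains EXIF GPS tags; the file name and values are invented for the example. It simply shows how little effort is needed to read back the coordinates a camera app embeds in a photo.

# Minimal sketch: reading the GPS coordinates a smartphone camera embeds
# in a photo's EXIF data. Assumes the Pillow library and a geo-tagged
# JPEG called "photo.jpg" (hypothetical example file).
from PIL import Image
from PIL.ExifTags import GPSTAGS

GPS_IFD_TAG = 0x8825  # standard EXIF pointer to the GPS sub-directory

def to_decimal(dms, ref):
    # Convert EXIF (degrees, minutes, seconds) rationals to decimal degrees.
    degrees, minutes, seconds = (float(x) for x in dms)
    value = degrees + minutes / 60 + seconds / 3600
    return -value if ref in ("S", "W") else value

exif = Image.open("photo.jpg").getexif()
gps = {GPSTAGS.get(tag, tag): val for tag, val in exif.get_ifd(GPS_IFD_TAG).items()}

if "GPSLatitude" in gps and "GPSLongitude" in gps:
    lat = to_decimal(gps["GPSLatitude"], gps.get("GPSLatitudeRef", "N"))
    lon = to_decimal(gps["GPSLongitude"], gps.get("GPSLongitudeRef", "E"))
    print(f"This photo was taken at roughly {lat:.5f}, {lon:.5f}")
else:
    print("No GPS tags found in this photo.")

Anyone the photo is shared with, or any service it is uploaded to without metadata stripping, can run the same few lines; this is why many platforms remove EXIF data on upload and why camera settings offer a ‘remove location data’ option.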

_____

Mobile Apps as a Threat to Privacy:

People have become heavily reliant on mobile devices, especially smartphones, to run their social, entertainment and professional lives. These phones work through mobile applications, commonly referred to as apps. An app is a piece of application software designed to run on a mobile device such as a smartphone or tablet. The primary aim of apps is to give users an experience similar to that of a PC. Most apps provide limited, specific functionality such as web browsing, games or weather.

These apps are made to run on different operating systems such as Android, iOS, Windows phone, and the most common being Android and iOS. Users are able to download apps for free or for a fee from their respective app stores.

Smartphones are handheld computers and are therefore prone to malware attacks. When downloading an app, a user may be asked, before installation, for permission to access certain data on the device. Some apps will consequently be able to access phone and email contacts, authentication credentials and so on. Every decision the user makes to relax security restrictions for an app is a potential security loophole. Some apps access only the data they need to function effectively, whereas others access data that is unrelated to the app’s functionality.

Apps with unrestricted permissions can also access data from other connected devices. Ad networks, with the help of such apps, collect the personal data or information that one provides while making online purchases. They are therefore able to send users targeted ads designed to attract them.

To limit this, Apple is very strict about app installation on iOS. Apps are barred from accessing data belonging to other apps, which prevents sensitive data from leaking between them. Apple's App Store is also strict about which apps it allows to be hosted for download, which weeds out most questionable app developers. Apple also opposes third-party app stores and expects users to rely on its own.

Downloadable applications can present many types of security threats to mobile devices. Malicious apps may look fine on a download site but are actually designed to commit fraud. Smartphone apps can do more than provide entertainment, information or useful services – they can also invade your privacy. Even some legitimate software can be exploited for fraudulent purposes. Apps can track your web habits, look into your contact list, make phone calls without your knowledge, track your location, examine your files and more, which is not at all acceptable. They can also automatically send information such as location data to mobile ad networks. Apps can collect the phone number and the unique ID of each handset, and the personal details they gather about you can be tied to these IDs. That means ad networks can easily combine data collected by multiple apps, build a sophisticated profile about you – and then lawfully sell that data to other marketing companies.
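To illustrate the last point, here is a small, purely hypothetical Python sketch of how an ad network could stitch together records reported by two unrelated apps into a single profile, simply because both apps report the same device identifier. The data and field names are invented for illustration.

```python
# Illustrative only: invented data showing how records from two unrelated apps
# can be joined on a shared device identifier to form one profile.
from collections import defaultdict

reports_from_weather_app = [
    {"device_id": "A1B2-C3D4", "location": "Mumbai", "age_bracket": "30-40"},
]
reports_from_game_app = [
    {"device_id": "A1B2-C3D4", "interests": ["cricket", "online shopping"]},
]

profiles = defaultdict(dict)
for record in reports_from_weather_app + reports_from_game_app:
    device = record["device_id"]
    # Everything keyed to the same device ID lands in the same profile
    profiles[device].update({k: v for k, v in record.items() if k != "device_id"})

print(profiles["A1B2-C3D4"])
# {'location': 'Mumbai', 'age_bracket': '30-40', 'interests': ['cricket', 'online shopping']}
```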

Privacy threats may also be caused by applications that are not necessarily malicious, but that collect or use more sensitive information – such as location, contact lists and personally identifiable information – than is needed to perform their function.

Researchers report that a surprisingly high percentage of smartphone apps may threaten your privacy. One study found that 50 percent of the Android apps analysed sent geographic information to remote ad servers without the user's knowledge. Many of these apps also sent the unique phone identifier; in some cases, the actual phone number and serial number were sent to app vendors. This enables app vendors and advertisers to build comprehensive profiles of your likes and dislikes, the places you visit when you carry your phone, your web surfing habits and more. They can then use those profiles however they want or sell them to others.

Meanwhile, some Android apps allowed third parties to gain access to private or sensitive information. In addition, 5% of the apps studied could make phone calls on their own without user intervention, and 2% could send an SMS text message to a premium, for-pay number – again without the user's involvement. A few apps also sell additional information to ad networks, including users' location, age, gender, political views and so on.

Google recently booted 20 apps from Android phones and its app store. Those apps could record with the microphone, monitor a phone's location, take photos and then extract the data. And they could do all of this without a user's knowledge! Stolen photos and sound bites pose obvious privacy invasions. But even seemingly innocent sensor data might broadcast sensitive information. A smartphone's motions might reveal what a user is typing. Or it might disclose someone's location. Even barometer readings could be misused. These readings subtly shift with increased altitude. That could give away which floor of a building you're on. Such sneaky intrusions may not be happening in real life — yet.

_____

Mobile Privacy – What You Can Do:

The smartphone is one of the most invasive devices ever invented. It’s easy to forget that because we are so familiar with them, and they are so useful. But while you might value your smartphone for the convenience it gives you, tech companies value it for an entirely different reason: it is collecting data on everything you do. 

If you believe that privacy is a human right, Android is something of a nightmare. Most people who use Google services are aware the company is tracking their location, checking which websites they go to, recording their voice, and reading their emails. What a lot of people forget is that Android was developed by Google, and is one of the most important tools for this data collection.

Threats to privacy may, of course, originate from outside any official frameworks or policies, and simply derive from the efforts of hackers and cyber-criminals to steal data, monitor their targets, or steal identities. In all cases, there are measures you can take to safeguard your privacy.

  • Read the fine print. This applies to consent forms for privacy policies, sign-up conditions for web services and accounts, and the Terms & Conditions associated with your mobile software. Port these documents over to a larger screen if possible, as this will make them much less painful to read.
  • Check those app permissions. Unreasonable requests for access to your location, contacts, camera, storage media or personal data (i.e., clearly unrelated to an app’s stated function) should disqualify that software from download or installation.
  • Take physical measures to protect your device. Passwords, PIN, encryption, lockscreens, and remote wiping capabilities (if it’s lost or stolen) come into play here.
  • Keep your software up-to-date.
  • Disable automatic log-ins and check-ins. This would include “AutoFill” features for online forms, and automatic geo-location permissions on certain sites.
  • Turn off all the connectivity you do not need. This goes for whatever smartphone, and whichever operating system, you have. Don’t let your phone connect to unknown Wi-Fi networks because they may be a source of malware. Don’t leave your Bluetooth on because there are plenty of Bluetooth security vulnerabilities. Don’t connect your phone to your computer (if you can avoid it), because smartphones can also act as a reservoir of malware, and your phone can be infected without you realizing it.
  • Use a VPN. A virtual private network (VPN) encrypts all of the data passing between your phone (or computer, or tablet) and the wider Internet. There are plenty of VPN providers out there, but you should be careful about which one you choose. In general, VPN providers often are not transparent about who operates them or how they may or may not use your data. In addition, be wary of VPN providers that are based in the EU or (even worse) the US, because they may be required to share data with foreign intelligence agencies.
  • Use the information that’s out there. Consumer rights advocates, privacy groups, and the online resources of your local representative may yield valuable information on your rights, and measures laid down to enforce them.

_____

_____

Section-18

Covid-19 and privacy:  

The novel coronavirus pandemic is not only causing tens of thousands of deaths worldwide, damaging economies and disrupting normal lifestyles of people, but it is also posing several legal challenges. One such challenge is the right to privacy vs. the right to public health. Aggressive and intrusive contact-tracing mechanisms of governments worldwide during this acute public health emergency are causing certain privacy concerns among civil liberties defenders. It has been argued that we simply can’t defend privacy while public health is tailspinning, but privacy and public health do not have to be incommensurable goals. Assertive containment strategies do not require governments to be completely ignorant of other rights, nor do they require any kind of epidemiological inaction for the sake of informational and associational privacy of individuals.

_

A greater understanding of how populations move and interact, in combination with artificial intelligence, is a valuable resource in predicting how the pandemic will continue to spread. The collection and sharing of biomedical data is a sine qua non for containing infectious diseases, as it helps identify points of contact. The pandemic has further escalated the need for collecting and sharing data, as well as for tracking people's movements. From Israel to South Korea to China, governments around the world are using technology to track the coronavirus outbreak as they race to stem its spread. In China, government-installed CCTV cameras point at the apartment doors of those under a 14-day quarantine to ensure they don't leave. Drones tell people to wear their masks. Digital barcodes on mobile apps highlight the health status of individuals. In Singapore, the government rolled out an app called TraceTogether. It uses Bluetooth signals between cellphones to see if potential carriers of the coronavirus have been in close contact with other people. Over in Hong Kong, some residents were made to wear a wristband which linked to a smartphone app and could alert authorities if a person left their place of quarantine. In South Korea, the government used records such as credit card transactions, smartphone location data and CCTV video, as well as conversations with people, to create a system in which confirmed cases were tracked. The result was a map that could tell people whether they had gone near a coronavirus carrier. Meanwhile, Israel's security agency Shin Bet is using citizens' cell phone location data to track where they've been so it can enforce quarantine controls and monitor the movements of those infected. Some parts of India were stamping the hands of people arriving at airports to tell them how long they had to be quarantined, and reservation data from airlines and trains was being monitored to make sure those people didn't travel. In the south Indian state of Kerala, authorities have been using a mixture of telephone call records, surveillance camera footage and phone location data to track down people who may have been in contact with coronavirus patients. In the U.S., the government is talking to Facebook, Google and other tech companies about the possibility of using location and movement data from Americans' smartphones to combat coronavirus. This aggressive digital surveillance has been justified by governments worldwide in the name of the public health crisis caused by the pandemic. European countries, where data is protected by the General Data Protection Regulation, have sought to suspend some regulations, relying on Article 9(2)(i) of the GDPR, which allows data processing when necessary in the public interest, such as in response to serious cross-border health threats.
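The Bluetooth-based approach used by apps such as TraceTogether can be sketched roughly as follows: each phone broadcasts short-lived pseudonymous identifiers derived from a device secret, records the identifiers it hears nearby, and only later checks whether any of them belong to a confirmed case. The Python sketch below is a loose conceptual model of this idea, not the actual TraceTogether/BlueTrace protocol; all names and parameters are illustrative.

```python
# Conceptual sketch of rotating-identifier contact tracing (not the real BlueTrace spec).
import hmac, hashlib, os, time

class Phone:
    def __init__(self):
        self.secret_key = os.urandom(32)    # never leaves the device
        self.heard = []                     # identifiers observed nearby

    def current_identifier(self, interval):
        # Derive a short-lived pseudonym from the secret key and the time interval
        return hmac.new(self.secret_key, str(interval).encode(), hashlib.sha256).hexdigest()[:16]

    def hear(self, identifier, interval):
        self.heard.append((identifier, interval))

def exposed(phone, infected_key, recent_intervals):
    # After a positive test, the infected user's key (or its derived identifiers)
    # is shared; other phones check it against what they heard.
    infected_ids = {
        hmac.new(infected_key, str(i).encode(), hashlib.sha256).hexdigest()[:16]
        for i in recent_intervals
    }
    return any(ident in infected_ids for ident, _ in phone.heard)

alice, bob = Phone(), Phone()
interval = int(time.time()) // 900                        # identifiers rotate every 15 minutes
bob.hear(alice.current_identifier(interval), interval)    # Bob's phone logs Alice nearby
print(exposed(bob, alice.secret_key, [interval]))         # True: Bob was near Alice
```

The privacy properties of such a scheme depend entirely on who holds the keys and how long contact logs are retained, which is precisely where the concerns discussed below arise.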

_

However, despite such exceptions, strict regulations for data protection should not be blatantly suspended or ignored. There ought to be balance, especially when privacy rights have received such strong national and international recognition. This becomes even more important when the institutionalization of biosurveillance technologies leads to biomedical data, which is sensitive data, being factored into routine government screening and monitoring. Tech companies and certain private players can chase this untapped resource if it is not properly regulated. There is also no clarity about what happens to the data and the whole surveillance mechanism once the emergency ends. The way that privacy rights are dealt with will have a lasting impact on the way that privacy is perceived in the future. Thus, a robust regime governing such surveillance is necessary, with stringent procedures to keep this information safe and to delete it when it is no longer in use. A government's transparency and an engaged society are both very important in fighting this pandemic. Data collection during the Covid-19 pandemic must be done in such a way that it does not create future privacy risks. But privacy experts have raised concerns about how governments were using the data, how it was being stored and the potential for authorities to maintain heightened levels of surveillance — even after the coronavirus pandemic is over. According to a database maintained by MIT Technology Review, Aarogya Setu, India's contact-tracing app to combat COVID-19, poses significant risks to the privacy of the user compared to similar apps in other countries. Though there is little clarity on the design and security of the app, it has been made mandatory in some places. The apps in China and Turkey pose even greater risks to user privacy than Aarogya Setu. A fear of misuse or of a privacy breach is not peculiar to this pandemic. China has been building a digital authoritarian surveillance state since pre-pandemic times, and China and the U.S. have been doing the same on the international stage, working to determine global standards and to shape key network infrastructure, exporting 5G technology and Orwellian facial recognition systems abroad. Recently, Microsoft too planned to launch 20 data sharing groups by 2022 and give away some of its digital information, including data collected on COVID-19.

_

One of the big problems, according to the Electronic Frontier Foundation (EFF), a nonprofit digital privacy advocacy group, is that the collection of certain data, like phone location, hasn't been proven to be effective in tracking the spread of the virus. The organization argued that even the Global Positioning System, or GPS, on smartphones is only accurate to a 16-foot radius. Yet the Centers for Disease Control and Prevention (CDC) said the virus can spread between people who are in close contact with one another, which the body estimates means within 6 feet. "These and other technologies like Bluetooth can be combined for better accuracy, but there's no guarantee that a given phone can be located with six-foot precision at a given time," the EFF said.

______

______

Section-19

Privacy in health care (health privacy):   

In medical practice, privacy has been an important issue since the Hippocratic Oath, which has underpinned medical ethics for centuries, originated in the 4th century BC (Moskop et al., 2005). Privacy is a basic human need, and it is central to psychological well-being (Altman, 1976). Medical records can include some of the most intimate details about a person's life. They document a patient's physical and mental health, and can include information on social behaviors, personal relationships, and financial status (Gostin and Hodge, 2002). Accordingly, surveys show that medical privacy is a major concern for many people. In a 1999 survey of consumer attitudes toward health privacy, three out of four people reported that they had significant concerns about the privacy and confidentiality of their medical records (Forrester Research, 1999). In another survey, conducted in 2005 after the implementation of the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule, 67 percent of respondents still said they were concerned about the privacy of their medical records, suggesting that the Privacy Rule had not effectively alleviated public concern about health privacy. CynergisTek, a leading cybersecurity firm helping more than 1,000 hospitals navigate emerging security and privacy issues, recently released survey results highlighting growing privacy concerns among Americans due to COVID-19, with nearly 70 percent saying they would likely sever ties with a healthcare provider if they found that their personal health data was not being properly protected.

__

Protecting the security of health data is important because it contains large amounts of personally identifiable health information, much of which may be sensitive and potentially embarrassing. If security is breached, the individuals whose health information was inappropriately accessed face a number of potential harms. The disclosure of personal information may cause intrinsic harm simply because that private information is known by others (Saver, 2006). Another potential danger is economic harm. Individuals could lose their job, health insurance, or housing if the wrong type of information becomes public knowledge. Individuals could also experience social or psychological harm. For example, the disclosure that an individual is infected with HIV or another type of sexually transmitted infection can cause social isolation and/or other psychologically harmful results (Gostin, 2008). Finally, security breaches could put individuals in danger of identity theft (Pritts, 2008).

Privacy violations in the healthcare sector that stem from gaps in policy formulation and implementation include the disclosure of personal health information to third parties without consent; unlimited or unnecessary collection of personal health data; use of personal health data, given for research or commercial purposes, without de-identification; and improper security standards, storage and disposal. Such violations may result in discrimination or stigmatization.

_

Health privacy on social media:

The growing number of online health communities, patient blogs, and patient portals shows that many people are active in social media as patients. The sharing of personal health information, such as information about diagnosis and treatment, has demonstrated benefits, but also presents risks; for example, such disclosure may negatively affect relationships, job opportunities, and insurance options. At the same time, there is a well-recognized contradiction between people's attitudes toward privacy and their online privacy behavior, the so-called privacy paradox. Studies of teenage users of social media have found that they simultaneously seem to care about their privacy yet do not act on that concern, revealing personal information that can be used and disclosed by governments, marketers, and predators.

__

Privacy concerns surface as India pushes digital health plan:

India’s government is using the coronavirus pandemic to push its plan to digitise the health records and data of its 1.3 billion people, despite concerns about privacy and increased surveillance. Prime Minister Narendra Modi announced the launch of the National Digital Health Mission (NDHM), under which unique Health IDs will be created to hold digital health records of individuals. “Whether it is making a doctor’s appointment, depositing money or running around for documents in the hospital, the Mission will help remove all such challenges,” Modi said. Praveen Gedam of the National Health Authority (NHA) said that the pandemic had highlighted the need to “move swiftly to improve medical infrastructure”.

But without a data protection law or an independent data protection authority, there are few safeguards and no recourse if rights are violated, said Raman Jit Singh Chima, Asia policy director at Access Now, a digital rights non-profit. “It will possibly be the largest centralised health ID and data storage system in the world, and it is being done in the absence of a data protection law and data protection authority,” he told the Thomson Reuters Foundation.

The new Health ID, which will be "entirely voluntary" with an opt-out option, will hold information such as tests, prescriptions, treatments and medical records. However, human rights campaigners say the Health ID has the potential to be made mandatory and to deny services to those who opt out, while also posing the risk of data abuse. And at a time when Indians are struggling for hospital beds and oxygen, the government is focusing on digitisation.

_______

_______

Section-20

Infringement of privacy: 

Other Names:

Invasion of privacy

Denial of right to privacy

Arbitrary interference with privacy

Unlawful interference with privacy

_

Infringement of privacy is a complex and difficult issue involving diverse meanings in different cultures. At the simplest level, an unwanted intrusion on one's immediate situation of being alone or with family is a violation of privacy. The intrusion may be the physical presence of others or noises one does not wish to hear. Degrees of privacy depend on the proximity of the intruder, which is largely culturally defined. In some activities an important aspect of privacy is freedom from observation, such as when praying, defecating or having sexual relations. In this vein privacy may be violated by being compelled to observe things which are offensive. While not every intrusion is a violation of privacy, the line is difficult to determine at any given moment because of shifting social expectations and the increasing interaction between people of different cultures.

A second dimension of infringements of privacy is the dispersion of private information about persons. Observations and physical intrusions may also be means of obtaining information that the individual wants to keep secret. When information about a person is obtained against his or her will, either by coercion or by force, their right to privacy has been violated. When another person divulges information to a broader audience, or their information has been taken, privacy has been violated. The right to privacy in some societies has been extended to include the freedom from inaccurate or misleading information being spread about an individual.

A third dimension of the infringement of privacy is a lack of autonomy in making private decisions. While this is perhaps the most debatable dimension, it is, for example, accepted by many legal and social systems that a married couple may choose whether or not to use contraceptives. A woman's right to choose to terminate a pregnancy is likewise argued as a right to make private choices autonomously.

Different legal systems emphasize different aspects of the right to privacy; many claims to privacy are hard to distinguish from claims to respect for personal integrity, to personality, and to freedom from interference from government and other external agents. Litigation concerning violated rights of privacy may arise between celebrities or public figures and the media, particularly sensationalist tabloids. Under some countries' laws, public figures such as heads of state or royal families appear to have fewer rights to privacy than other people, as everything concerning them may be deemed legitimate news.

_

The right against unsanctioned invasion of privacy by the government, corporations or individuals is part of many countries' privacy laws, and in some cases, constitutions. Almost all countries have laws which in some way limit privacy; an example of this would be laws concerning taxation, which normally require the sharing of information about personal income or earnings. In some countries individual privacy may conflict with freedom of speech laws, and some laws may require public disclosure of information which would be considered private in other countries and cultures. Privacy may be voluntarily sacrificed, normally in exchange for perceived benefits, and very often with specific dangers and losses, although this is a very strategic view of human relationships. Academics – economists, evolutionary theorists, and research psychologists – describe revealing private information as a 'voluntary sacrifice' where sweepstakes or competitions are involved. In the business world, a person may hand over personal details, often used for advertising purposes, in order to enter a draw to win a prize. Information which is voluntarily shared and is later stolen or misused can lead to identity theft. Different people, cultures, and nations have a wide variety of expectations about how much privacy a person is entitled to or what constitutes an invasion of privacy.

_______

Invasion of privacy:

Invasion of privacy is a legal concept dealing with intrusion into an individual’s private life. It is a tort that allows the person whose privacy was invaded to file a lawsuit against the person intruding upon his or her privacy. Laws governing the right to privacy do not treat all people the same, however, as public figures, such as politicians, are commonly not afforded the same rights of privacy as laypeople.

An invasion of privacy occurs when your reasonable expectation of privacy is violated.  For example, if you are having a private telephone conversation containing sensitive information in your home or office and someone is listening without your knowledge, consent, or an applicable exception, then this act constitutes an invasion of privacy.  However, unintentionally leaving sensitive or confidential information in a public area and another person picking it up does not constitute an invasion of privacy. 

The following are examples of invasion of privacy against which legal action CAN be taken:

-Illegally intercepting calls;

-Snooping through someone’s private records;

-Taking photos or videos of someone inside their home or a private place without their knowledge or consent;

-Incessant unwanted phone calls;

-Publicly disclosing private information about someone that has caused damage or injury; and

-Publicizing a matter regarding another’s private life.

The following are examples of what does NOT constitute an invasion of privacy:

-Hearing a phone call take place in a public setting;

-Reading a document that someone left in a public setting;

-Taking photos of a person in public; and

-Calling a person once or twice.

_____

The four main types of invasion of privacy claims are:

-1. Intrusion of Solitude

-2. Appropriation of Name or Likeness

-3. Public Disclosure of Private Facts

-4. False Light

The following information explores these types of claims and the basics of invasion of privacy law in general.

-1. Intrusion of Solitude

Intruding upon another’s solitude or private affairs is subject to liability if the intrusion is considered highly offensive to a reasonable person. This tort is often associated with “peeping Toms,” someone illegally intercepting private phone calls, or snooping through someone’s private records. Taking photographs of someone in public would not be invasion of privacy; however, using a long-range camera to take photos of someone inside their home would qualify. Making a few unsolicited telephone calls may not constitute a privacy invasion, but calling repeatedly after being asked to stop would.

Example:

A man with binoculars regularly climbs a tree in his yard and watches a woman across the street undress through her bathroom window.

-2. Appropriation of Name or Likeness

Plaintiffs may make a claim for damages if an individual (or company) uses their name or likeness for benefit without their permission. Usually this involves a business using a celebrity’s name or likeness in an advertisement. Some states even limit this type of privacy tort to commercial uses. This is not always the case. For example, a private detective who impersonates someone else to obtain confidential information has invaded that person’s privacy. The recognition of this tort is like a property right; in other words, a person’s name and likeness is treated as that person’s property. For celebrities, this is often referred to as “right of publicity”.

Example:

An advertising agency approached musician Tom Waits to participate in a campaign for a new automobile. Waits, who has a distinctive and easily recognizable voice, declined. The advertisers hired someone who sounds like him to do the soundtrack, prompting Waits to sue the automaker for appropriating his likeness.

-3. Public Disclosure of Private Facts

This type of invasion of privacy claim must be weighed against the First Amendment’s protection of free speech. Unlike defamation (libel or slander), truth of the disclosed information isn’t a defense. If an individual publicly reveals truthful information that is not of public concern and which a reasonable person would find offensive if made public, they could be liable for damages. For example, a woman about to deliver a baby via caesarean section agrees to allow the operation to be filmed for educational purposes only, but instead it’s shown to the public in a commercial theatre. This is an invasion of her privacy. However, publishing an article about a politician known for his family values who is having an affair with a staffer is of public concern and therefore not an invasion of his privacy. Some states including New York don’t recognize this type of claim.

Example:

It probably seemed like a great idea when a well-regarded suburban school district decided to loan its students laptop computers for the entire school year, even permitting the students to take the laptops home. The students, however, were unaware that the laptops were armed with internal anti-theft protection that allowed school district personnel to activate the laptops’ webcams anytime without the consent or knowledge of the user. The school district used this anti-theft function to take thousands of pictures of its students studying, speaking to family members, and even sleeping. The so-called “Webcamgate” scandal resulted in a Pennsylvania school district paying a six-figure sum to settle the invasion of privacy lawsuit against it. While Webcamgate would have seemed far-fetched in the 1980s and 1990s, today and in the future, we can expect technology to continue to challenge our right to privacy, making understanding this right essential. 

-4. False Light

A false light claim is similar to a defamation claim in that it allows an individual to sue for the public disclosure of information that is misleading (or puts that person in a “false light”), but not technically false. The key difference is that defamation claims apply only to the public broadcasting of false information, and, as with defamation, First Amendment protections sometimes prevail.

Generally, a false light claim must contain the following elements: (1) the defendant made a publication about the plaintiff; (2) it was done with reckless disregard; (3) it placed the plaintiff in a false light; and (4) it would be highly offensive or embarrassing to a reasonable person.

Example:

A 96-year-old woman sued an Arkansas newspaper for printing her picture next to the headline, “Special Delivery: World’s oldest newspaper carrier, 101, quits because she’s pregnant!” The woman, who was not pregnant, was awarded damages of $1.5 million.

_______

Invasion of privacy as a crime:

The simple act of invading someone’s privacy is not a criminal offense, though certain methods of such an invasion may be considered criminal. In most cases, invasion of privacy is considered a civil rights violation, and is therefore addressed in civil court proceedings. In a civil lawsuit for invasion of privacy, a victim may seek a monetary award to be paid by the perpetrator.

In some cases, a perpetrator may be subject to both criminal charges and civil sanctions. This might occur when a perpetrator breaks the law in his quest to gain or publish private information about someone else.

For example:

Amber’s ex-husband, Mark, is obsessed with all aspects of her life, and apparently will stop at nothing in his quest to get back at her. Mark follows Amber to work, and as she goes out with her friends, frequently snapping photos of her without her knowledge or consent. Mark tapes Amber’s phone conversations, again without her consent, then publishes both the photos and information gleaned from the phone conversations to social media accounts online. In such a case, Amber may sue Mark in civil court for invasion of privacy, and possibly other civil torts. In addition, Amber may be able to get a restraining order against her ex-husband, and Mark may be prosecuted for criminal trespassing, stalking, and a host of other violations of the law.

-Any violation of privacy resulting in sexual harassment or coercion or blackmail will invite criminal prosecution.

-Any violation of privacy resulting in suicide of victim will invite criminal prosecution. 

_______

_______

Top Privacy Risks:

-1. Web Application Vulnerabilities

-2. Operator-sided Data Leakage

-3. Insufficient Data Breach Response

-4. Insufficient Deletion of personal data

-5. Non-transparent Policies, Terms and Conditions

-6. Collection of data not required for the primary purpose

-7. Sharing of data with third party

-8. Outdated personal data

-9. Missing or Insufficient Session Expiration

-10. Insecure Data Transfer

______

Technologies of privacy invasion:

There are a number of technologies causing new concerns about the protection of privacy. Many of these technologies are being adopted and implemented outside legal protections.

-1. Identity (ID) cards

-2. Biometrics including DNA identification

-3. Surveillance of Communications including wiretapping capability over telephone, fax and telex

-4. Internet and email interception

-5. National Security and the ECHELON system

-6. Video Surveillance including Closed Circuit Television, or CCTV

-7. Workplace surveillance

______

Threat categories to an organization’s privacy:

Privacy risk arises from various sources. Threats can be broadly classified as internally or externally generated. The figure below illustrates the threat categories to an organization's privacy, including the intention underlying the threat, and provides examples of each type.

______

______

Your biggest Privacy Threats come from the ones You Love: 

As technology has become more ubiquitous in people’s everyday lives, a new class of privacy threats has emerged in family, romantic, friendship, and caregiving relationships. These “intimate threats” are the thorny risks that are intertwined with issues around location tracking, always-on monitoring or recording, online surveillance, and the control over technology accounts or devices.

The use of technology in intimate relationships can quickly turn dark, with very little recourse for the victim, because the product was never designed to account for abuse cases. Facebook, for a while, had an account-recovery system that showed you pictures and asked you to click on the ones showing your friends, on the assumption that you would know who they are but other people would not. It is a clever system, but it fails in the intimate context: your partner and your parents know all of that too. It fails when your boyfriend takes over your account. There are many illustrative examples of technology being abused, involving not only covert monitoring but also control over financial accounts or devices. For example, shared household smart devices and thermostats are increasingly being used by abusers and vengeful exes to wreak havoc on intimate partners' daily lives.

The difficulty is that there are a lot of complicated dynamics – both ethical and technical – to account for when mitigating intimate threats. It is complicated because the motivations in the intimate context are often ambiguous and not necessarily nefarious. For example, a teenager might be told by his or her parents that they can have a later curfew only if the parent can track where their phone is. Or a parent may set up a webcam, with the nanny's full knowledge, to be able to get a happy glimpse of their little one during a break in an otherwise hectic workday. There are lots of ways in which we use this technology in perfectly beneficent and socially negotiated ways to take care of one another. That means the line between what is okay and what is not okay is going to be highly variable, both within and across relationships.

So when you are designing, assume that the person buying the product is not necessarily your only user, and think of the people who are subject to being monitored as potentially your users too. The idea is to get the technology-design community thinking critically and remembering that these relationship dynamics are always going to be at play within the user base. When you are designing systems, you really have to take these relationships into account.

______

Invasion of Privacy can be a form of Sexual Harassment:

Invasion of privacy is one of the most damaging forms of sexual harassment because it damages a person's reputation and personal relationships; it can include anything from leaking important and private information in order to coerce you into a sexual relationship to secretly recording you in places that are clearly private — such as the restroom.

Other types of actions that are considered sexual harassment through invasion of privacy include:

-Physically invading your personal space, such as at your work station or desk

-Recording you in the restroom, dressing room or other private area

-Taking your picture without your knowledge and releasing it on the Internet via social media

-Threatening to release confidential information or video/photos of a sexual relationship

-Blackmailing you to conduct sexual acts or engage in a sexual relationship

_______

Your Internet Behaviors that Impact Your Privacy:

Most of us practice bad Internet hygiene and don't even realize it, so make sure you avoid doing the following:

-1. Using the Same Credentials for Multiple Accounts

Sure, it's easy to remember your login details and get things done online when you use the same credentials across your accounts. But if a cybercriminal is able to gain access to one of your accounts, they'll most likely get into the others as well. One simple remedy is sketched below.
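If you are not ready to adopt a password manager, a minimal alternative is to generate a different random password for each account instead of reusing one. A small Python sketch (the account names are placeholders):

```python
# Minimal sketch: generate a distinct, strong random password per account.
import secrets, string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def new_password(length=16):
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

for account in ["email", "bank", "social-media"]:   # placeholder account names
    print(account, "->", new_password())            # each account gets its own credential
```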

-2. Staying Logged into Websites

Not logging out of websites and having them remember your credentials is indeed convenient. However, it also leaves your online accounts and personal information vulnerable to anyone who uses or hacks into your device.

-3. Using Services without Reading their Terms & Conditions

Never click "agree" until you understand what you're getting yourself into. You wouldn't want to legally grant companies and service providers access to all kinds of data which they can then sell to the highest bidder!

-4. Opening Suspicious Attachments or Downloading Malicious Files

You should be careful when opening attachments in emails or on social media, as they could contain malware and viruses. Similarly, only download files from trusted sources, because files from untrusted sources may result in a virus infection.

_______

_______

Is Google a threat to privacy?

Google uses a potent technique called browser fingerprinting to identify a particular device, in conjunction with a service called Google Analytics, which is hugely popular with websites all over the world. Anytime you visit any website that uses Google Analytics, your browser is fingerprinted and a note is made that you visited it, including a log of every page you browsed. If it’s a website that requires you to have an account, Google will then be able to associate your browser’s fingerprint with your account information. Later, when you browse some seemingly unrelated site, Google will again fingerprint your browser and it will know who you are even though you never logged into that other site. About half the websites in the entire world use Google Analytics.
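Fingerprinting works by combining many individually unremarkable attributes – user agent, screen size, timezone, language, installed fonts, rendering quirks and so on – into a value that is close to unique for a given browser. The Python sketch below illustrates only the idea; real fingerprinting runs as JavaScript inside the page and draws on far more signals than shown here.

```python
# Conceptual illustration of fingerprinting: many weak signals hashed into one stable ID.
import hashlib

def fingerprint(attributes):
    # Order the attributes deterministically, then hash them together
    canonical = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

visitor = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "screen": "1920x1080",
    "timezone": "Asia/Kolkata",
    "language": "en-IN",
    "fonts": "Arial,Calibri,Mangal",
}
print(fingerprint(visitor))   # the same device tends to produce the same value on every site
```

Because the fingerprint is derived from the device itself rather than from a cookie, clearing cookies does not change it, which is what makes it so useful for cross-site tracking.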

If you use Gmail, the content of all your email is scanned, including the content of all the files you attach to a message, including pictures and videos, documents, spreadsheets, presentations, and everything else. It is all opened and scanned. The core purpose of this is to detect keywords (such as “vacation”) and phrases (such as “my mom’s sick”) that will assist them to target very specific ads from advertisers at you. There are secondary reasons they do this, including to scan for illegal content. Even if you aren’t a Gmail user, if you email someone who is, Google also does this with whatever you send them. If you use Google’s Chrome browser, a myriad of information is sent back to Google, including a browser fingerprint, your browsing history, detailed geo-location information and details about your device. This information is used to establish your identity on any site that uses Google Analytics (about half the internet, as mentioned above). If you use Chrome you are directly plugged into Google’s vast data collection ecosystem. If you use Google Maps or Google Earth, the places you view are all logged by Google, and associated with your device, and thereby you. If you use Google Drive, your data is very safe from external threats, but completely open to examination by Google. And then there’s the Android operating system, with more hooks back to Google than a fishing boat.

On the other hand, Google has typically never had any major data breaches, and most data breaches involve far less personal information than Facebook holds. Sure, Google tracks your searches. But it hardly knows your contacts aside from who's in your Gmail. Facebook, on the other hand, knows what you say, what you like, who your friends and family are. It knows who you are, and where you've been. All this while selling your information in bundles and suffering major security breaches. The breaches may not occur on Facebook's own servers, but Facebook does not hold its data-purchasing clients to the same standards, so breaches often occur at servers where your Facebook information sits behind a client's sub-par security wall. Google, by contrast, never sells its information; it merely provides keywords and demographics at a broader level to advertisers. Google's security record is excellent, better than Yahoo's and Hotmail's. Google scanning your email for ad targeting is simply not the biggest risk involved with email.

______ 

Is government a threat to privacy?

Individuals enjoy privacy when they have the power to control information about themselves and when they have exercised that power consistent with their interests and values. But while most of the debate about privacy has been focused on privacy with regard to private companies, government poses a much greater threat to privacy. In terms of privacy, the public sector and the private sector are worlds apart.

Because of governments’ unique powers, the issue of privacy from government is of a much more critical nature than privacy from companies. Governments can invade privacy by taking and using personal information against the will of individuals. Private companies cannot get information from people who refuse to share it. Moving beyond privacy, governments can knock down doors, audit people’s finances, break up families, and throw people in jail.

Governments take and use information by force of law. They must be hemmed in by rules aimed at protecting privacy (and related interests) because they lack the incentives to protect it on their own. In the marketplace, on the other hand, good information practices are good business. Companies are in the business of pleasing their customers. Consumer dollars pressure companies toward privacy protection on the terms consumers want.

When a federal agency like the Internal Revenue Service wants personal information, it has an easy option. It demands the information from taxpayers and businesses under penalty of law. Annual income tax forms and various information collections throughout the year are a treasure trove of information for the Internal Revenue Service. Governments take information by law, giving citizens no right to opt out. And their information demands are substantial. Not only tax forms, but applications for licenses, permits, and benefits of all kinds come laden with requirements to hand over information. Employers, banks, and investment houses have been conscripted to collect information about people and turn it over to the government too, as required by law.

Governments can change how information may be used. When information is collected by governments with promises of confidentiality, those promises are not a contract but a naked assertion about an unpredictable future. Governments can make new uses of data they hold if a new law or regulation is passed—regardless of what they have promised. In many cases, U.S. federal agencies can make new uses and new disclosures of data merely by stating in the Federal Register that they are doing so.

When a government agency violates the rules about information, the penalties are minimal. An agency may suffer bad press if lax security leads to a privacy debacle, but its funding continues—or even increases to fix the problem. When U.S. federal agencies have tripped over the extremely low hurdles of the Privacy Act and suffered lawsuits, no capital has been at risk.

Not surprisingly, government agencies make many demands for personal information, have notoriously lax security, and are constantly building, growing, and combining databases of personal information. Governments have used information abusively both historically and in the recent past.

In a nutshell:

Between government and the private sector, government is the clearest threat to privacy. Governments have the power to take information from people and use it in ways that are objectionable or harmful. This is a power that no business has: People can always turn away from businesses that do not satisfy their demands for privacy.

_______

_______

Privacy merchants:

Major cell phone and mobile technology companies offer services that allow lovers, ex-spouses, lawyers, or anyone else to find out where a person is—and track their movements—by using the GPS capabilities of their cell phones.  A German politician who inquired about location storage information discovered that over a six-month period, his longitude and latitude had been recorded over 35,000 times. 

There are two kinds of corporations that keep track of what Internet users buy, read, visit, and drink, and who they call, e-mail, date, and much else. Some merely track users’ activity on their sites as part of their regular business; recording purchases and viewed products helps them increase sales. This is true for nearly every major online retailer. Other corporations make shadowing Internet users—and keeping very detailed dossiers on them—their main line of business. One can call these the “Privacy Merchants.” They sell information to whoever pays the required price. In 2005, one such company—ChoicePoint—had records on over 220 million people. Professor Christopher Slobogin notes that the amount of information culled by corporate data miners

can provide the inquirer with a wide array of data about any of us, including basic demographic information, income, net worth, real property holdings, social security number, current and previous addresses, phone numbers and fax numbers, names of neighbors, driver records, license plate and VIN numbers, bankruptcy and debtor filings, employment, business and criminal records, bank account balances and activity, stock purchases, and credit card activity.

In October 2010, the Wall Street Journal broke the story of extensive user privacy breaches by Facebook. It discovered that popular Facebook applications were “providing access to people’s names and, in some cases, their friends’ names” to Internet tracking companies.  According to the Journal, the breach affected “tens of millions” of users—including those who were vigilant in setting their privacy protections—and was in violation of Facebook’s stated policies.   In the same month, the New York Times reported on two studies that found that “in certain circumstances, advertisers—or snoops posing as advertisers [on Facebook]—may be able to learn sensitive profile information, like a person’s sexual orientation or religion, even if the person is sharing that information only with a small circle of friends.” 

In addition, the nearly ubiquitous Facebook “Like” button and Twitter “Tweet” button on Web sites “notify Facebook and Twitter that a person visited those sites even when users don’t click on the buttons.”  These widgets have been added to millions of Web pages and they appear on more than one-third of the world’s top 1000 Web sites—allowing sites with those widgets to track specific Facebook users. The tracking (which is used for targeted advertising) continues until the user specifically logs out of his or her account, even if the user turns off the computer. 

One may argue that the private sector merely uses this information for commercial purposes, while the government may use it to jail people, suppress free speech, and otherwise violate their rights.  However, one must note that the violation of privacy by private agents has some similar effects to violations committed by government agents—effects that lead to discrimination and “chilling” of expression and dissent.  Thus, when gay people who seek to keep their sexual orientation private are “outed” by the media, or banks call in loans of those they find out have cancer, or employers refuse to hire people because they learn about their political or religious views, privacy is violated in a manner about as consequential as if the same violations had been carried out by a government agency.

_

Privacy merchants in service of the governments:

Even if one disregards the facts already cited, which show that corporate violations of privacy are far-reaching and chilling, one must note that the information corporations amass is available to the government. Laws may prevent the government from ordering a private company to conduct surveillance on innocent citizens not suspected of anything, or from generating dossiers that the government itself is banned from generating (in other words, when corporations act as government agents, they may be subject to the same or similar limitations by which the government must abide). However, the government can and does use data already amassed by Privacy Merchants for its own purposes. Nor do prevailing laws prevent private corporations from analyzing online activity with an eye towards the government's needs and shaping their privacy-violating data in ways that make it more attractive to government purchasers of their services. Indeed, because the government is such a large and reliable client, corporate databanks have a strong financial interest in anticipating its needs. The thesis that what is private does not stay private is far from hypothetical.

According to Daniel Solove, “for quite some time, the government has been increasingly contracting with businesses to acquire databases of personal information.  Database firms are willing to supply the information and the government is willing to pay for it.”   Solove points out that the government can “find out details about people’s race, income, opinions, political beliefs, health, lifestyle, and purchasing habits from the database companies that keep extensive personal information on millions of Americans.” 

Hoofnagle similarly warns that “private sector commercial data brokers have built massive data centers with personal information custom-tailored to law enforcement agents.”  ChoicePoint, a major Privacy Merchant, has at least thirty-five contracts with government agencies, including the Department of Justice (through which it provides its databases to the Federal Bureau of Investigation (“FBI”)), as well as the Drug Enforcement Administration (“DEA”), the Internal Revenue Service (“IRS”), and the Bureau of Citizenship and Immigration Services.

Another corporate data miner, Florida-based SeisInt, ran a massive database called MATRIX (Multi-State Anti-Terrorism Information Exchange) in a joint effort among several U.S. states to coordinate counterterrorism efforts. The federal government paid $12 million to support the program, which SeisInt developed with extensive amounts of data, including individuals’ “criminal histories, photographs, property ownership, SSNs, addresses, bankruptcies, family members, and credit information.”  Even before the 9/11 attacks, the U.S. Marshals Service alone performed up to 40,000 searches every month using private databanks. The exact number of contracts the government has made with corporate data miners is unknown because many of the contracts are classified. However, one 2006 government study found that at least fifty-two federal agencies had launched—or were planning to launch at the time of the study— at least 199 data mining projects that rely on the services and technology of commercial databanks. 

Other government tracking and surveillance efforts have relied on private corporations.  In 2006, it was disclosed that three major telecommunications providers, AT&T, Verizon, and BellSouth, had cooperated with the National Security Agency (“NSA”) to provide it with the phone call records of “tens of millions of Americans”—a program which, according to one source, was “the largest database ever assembled in the world.”   The companies, which agreed to work with the NSA, provide phone service to over 200 million Americans, leading the program significantly closer to its ultimate goal:  creating a database of every phone call made within the United States.  Other government projects relying on private sources include efforts by Homeland Security to secure air travel and the nation’s borders and a Pentagon program which collects data on teenagers to better target military recruitment efforts. 

Moreover, the trend is to extend this use, as evidenced by a 2011 FBI manual that enables agents to search for private citizens in commercial databases without prior authorization or even notification.  In 2011, Google revealed that the U.S. government made the most requests for Internet users’ private data in 2010, with Google complying with 94% of those orders. 

One may well hold that some government uses of private databanks serve legitimate purposes, even though these databanks are loaded with extensive dossiers on most adult Americans rather than only on those for whom there is some evidence or reason to suspect that they are violating the law. However, one must still note that from here on, whether such databanks sit in FBI headquarters or in some corporate office matters little. At most, they are just a click—and a payment—away.

_

To get a brief glimpse into how serious the threat to online privacy can be, let’s explore a few examples. The following showcase the U.S. and U.K. governments’ blatant disregard for internet privacy: 

Prism:

Prism is the name of a surveillance program under the NSA that compels tech companies like Microsoft, Google, YouTube, Apple, etc. to grant access to user data on their servers.

_

Optic Nerve:

Optic Nerve, a program started by the British intelligence agency Government Communications Headquarters (GCHQ) with the NSA's assistance, is one that turned the webcams of millions of Britons and Americans against them. It allowed secret access into Yahoo! webcam chats and took one still image for every five minutes of video per user. These images and their associated metadata were then subjected to experimental facial recognition software. In a six-month period in 2008, it spied on the private conversations of about two million Yahoo! users. The users' interactions were watched without any targeted focus and without any limit to only those individuals believed to pose a national security threat.

SIGINT Enabling Project:

The SIGINT Enabling Project shatters yet another delusion about the government making efforts in the right direction when it comes to internet security. The NSA spent $250 million per year on this project to bypass encryption in commercial products. They tampered with standards such as those outlined by the National Institute of Standards and Technology (NIST) to weaken protocols and promote vulnerable cryptography.

According to a Reuters report, the NSA paid $10 million to RSA, an influential network security company, to create backdoors in encryption products. RSA set DUAL_EC_DRBG, an algorithm known as a Dual Elliptic Curve Deterministic Random Bit Generator, as the default cryptographically secure pseudo-random number generator (CSPRNG) in its BSAFE toolkit. Until 2014, DUAL_EC_DRBG was considered one of the NIST standardized CSPRNGs.
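The episode underlines why the choice of random number generator matters: if an attacker can predict, or has backdoored, the generator, any keys built from its output are effectively worthless. As a small, generic illustration (unrelated to BSAFE or Dual_EC_DRBG itself), an ordinary pseudo-random generator such as Python's random module is fully reproducible from its seed, whereas a cryptographically secure generator is designed to be unpredictable:

```python
# Ordinary PRNG: anyone who knows (or guesses) the seed reproduces the "random" output.
import random, secrets

rng = random.Random(1234)                    # seed known to the attacker
print([rng.randint(0, 255) for _ in range(4)])

attacker = random.Random(1234)               # same seed -> identical sequence
print([attacker.randint(0, 255) for _ in range(4)])

# CSPRNG: output is drawn from the operating system's entropy source
# and cannot be reproduced from a guessable seed.
print(secrets.token_hex(16))                 # suitable for keys, tokens, nonces
```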

The world that we live in today is one dominated by alliances such as Five Eyes, Nine Eyes, and Fourteen Eyes. There are several mass surveillance projects run by governments from across the world that attempt to sidestep security controls by force of law, exploiting backdoors, or hacking.

_

In a nutshell:

Corporations, especially those that make trading in private information their main line of business—the Privacy Merchants—are major violators of privacy, and their reach is rapidly expanding. Given that the information these corporations amass and process is also available to the government, it is no longer possible to protect privacy by curbing only the State. One must assume that what is private is also public in two senses of these words: that one's privacy (including sensitive matters) is rapidly corroded by the private sector, and that whatever the private sector learns is also available to the government. So the coin of privacy violation has two sides, government and corporate; no matter how you toss the coin, your privacy will be compromised.

_______

_______

Facial Recognition and privacy violation:

Facial recognition technology is making its way into our lives in more and more places all the time. Clearview AI runs a facial recognition app. Clearview has collected a large database of more than three billion images from places like Facebook and YouTube and uses that information to identify people in other images. The company has begun selling its technology to law enforcement agencies around the United States. How can people have any expectation of privacy if they are being monitored everywhere they go? And with AI systems in place, anyone can easily be identified just by showing their face.

Facial recognition without an individual’s consent has been at the center of controversy in recent news. It is often associated with widespread surveillance and a breach of civilian privacy. It is a technology that removes control from the person whose likeness is captured without consent; in some cases it is used to catch bad actors or known terrorists, but in other cases the intent is more malicious. For example, American billionaire John Catsimatidis was criticized for using the Clearview AI app to profile his daughter’s date. Catsimatidis simply captured a photo of the individual and uploaded it to the app to conduct a full-fledged background check. While his intent was seemingly innocent, the use of the technology was a clear breach of the suitor’s privacy, as it was used without the individual’s consent or awareness. Such use can and should be considered an abuse of the technology, and one that regulatory bodies need to address.

Facial Recognition is a technology that matches captured images with other facial images held, for example, in databases or “watchlists”. It is an extremely intrusive form of surveillance and can seriously undermine our freedoms.

______

CCTV and invasion of privacy:

Many countries now employ public video surveillance as a primary tool to monitor population movements and to prevent crime and terrorism, both in the private and public sectors. Councils, law enforcement and security management professionals rely heavily on video surveillance as a tool to fight crime and prevent terrorism. Such systems continue to enjoy general public support but they do involve intrusion into the lives of ordinary people as they go about their day to day business and can raise wider privacy concerns.

The potential value of public surveillance technology was well demonstrated back in April 2013, when investigators identified the two suspects in the Boston Marathon bombing after sifting through video images captured by the city’s cameras. The Boston bombers were apprehended quickly thanks to surveillance cameras, and there is no dispute over how well the public cameras performed that day. Yet many lingering questions remain and will continue to drive debate for the foreseeable future.

Who regulates and implements CCTV usage?

Who draws the line between legitimate public interest and plain harassment when a camera is placed? What about a private camera placed on property with malicious intent, and who regulates cameras on private property? Surveillance cameras may deter crime, but it matters who is at the other end of the lens: who ultimately views what the camera observes? Who decides when a privately owned security camera is poorly or maliciously aimed, for example deliberately pointed into the windows of a private residence? Are privately owned and operated surveillance cameras to be treated the same way as public cameras? Who really is watching you, and who owns that camera anyway: Big Brother, a corporation, or a harassing neighbour? At the moment, nobody draws these lines. The camera can legally do what a peeping tom cannot: it can peer inside windows with the full protection of the law. If a person stood watching outside a window it could be a crime, yet the same person can place a surveillance camera and remotely view someone within the privacy of their own home.

So CCTV cameras could become an infringement on your civil liberties. Why film innocent people doing nothing criminal in private or public places? Though CCTV may serve the purpose of security and of protecting people from anti-social and criminal elements, there have been reports revealing how CCTV recordings have been used as an intrusion into another person’s privacy, for malicious fun or as a nuisance.

______

3-D printing poses a threat to people’s privacy:

3-D printing technology poses a “grave and growing threat” to individual privacy because of the potential for products to reveal private information about individuals. People could use cameras, laptops or mobile phones to track and trace the origins of 3-D printed objects, and how they have been used, if the objects carry watermarks.

Put simply, watermarks are essentially any mark or logo that can be used to identify the owner of any given content, and they have existed for decades. Anything from an ISBN number to a QR code can be found on everyday objects, but the practice could spread to 3D printed objects as well. You can actually put a watermark into an STL file. There are other techniques as well. You can alter the surface of an object, put a texture on something that you print, or you can rotate small elements of it ever so slightly. These could be exploited by individuals, companies, and state entities. The rise of the Internet of Things, the increasing complexity of watermarking technologies that can survive transfer between different file formats, and the ability for big data to track 3-D printed content could allow greater state surveillance of individuals.
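To make the watermarking idea concrete, here is a deliberately simple sketch of one way an identifier could be embedded in an ASCII STL file, namely by writing it into the solid/endsolid name field the format already carries. This is only an illustration of the principle; practical 3-D watermarking schemes hide the mark in the geometry itself (surface texture, tiny rotations) so that it survives re-export and printing. The file names and marking scheme here are hypothetical.

```python
# Minimal sketch: embedding an identifier in the name field of an ASCII STL file.
# ASCII STL files begin with "solid <name>" and end with "endsolid <name>";
# real watermarking schemes instead perturb the geometry so the mark survives printing.
def watermark_stl(in_path: str, out_path: str, mark: str) -> None:
    with open(in_path, "r") as f:
        lines = f.readlines()
    if lines and lines[0].startswith("solid"):
        lines[0] = f"solid wm_{mark}\n"          # stamp the identifier into the header
    if lines and lines[-1].startswith("endsolid"):
        lines[-1] = f"endsolid wm_{mark}\n"      # and into the footer
    with open(out_path, "w") as f:
        f.writelines(lines)

# Hypothetical usage:
# watermark_stl("cup.stl", "cup_marked.stl", "owner1234")
```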

3-D printing will have a profound impact upon our notions of social privacy. This has the potential to be considerably more invasive than the Internet of Things. Every physical product that is 3-D printed has the potential to be tracked in a way that has never occurred before. In the future, as 3-D printing becomes more commonplace, there will be the potential for strangers to trace, track and observe objects, which can reveal an incredible amount of information about the users of such content. The problem is that the more we 3-D print content, the more there is the possibility that everyday items might be used to track the movements of individuals. Something as basic as a 3-D printed cup, or brooch, could be tracked and traced in future through the uploading of photographs. There is potential for all 3-D biotech materials, such as blood vessels or replicas of body parts, to be traced.

Digital watermarking and 3-D printed products present a future where objects can be searched for with nothing more than the equivalent of a Google search word. 3-D printing and digital watermarking specifically has not been considered by any government or regulatory body, nor has there been any regulatory research carried out on the matter.

______

Smart wearables as a threat to our privacy:  

There was a minor scandal in 2011 when fitness trackers were still novel and one of the leading manufacturers, Fitbit, made all data public by default. It also provided dozens of different categories for activity – the simple motion sensor could tell that you were moving, but not what you were doing. So you simply told it whether you were skiing, running or having sex, and it worked out how many calories you’d burned. Unfortunately, because this data was made public, it could be Googled. Use the right search term and you could view long lists of amorous encounters and pore over the stats such as intensity and duration. It was voyeurism-cum-data analysis, and all very embarrassing for Fitbit. Similar messes will occur time and time again as we learn to harness data from new and increasingly sophisticated trackers which interact with each other in complex ways.

Suppose you are wearing one which tracks your heart rate and logs the data on a graph. If you share that information publicly, people can tell exactly when and for how long your heart rate rises and falls. On the surface, that seems harmless. But what if a client sees your nerves before a big meeting and spots an opportunity to renegotiate a contract in their favour? Add GPS tracking into the mix and it gets even more dangerous: burglars wait until you are out before targeting your home (similar crimes have already been reported). Clearly there are lessons to be learned, for people and for the companies which collate this data. A recent survey by PwC found that 82 per cent of Americans were worried about the privacy implications of wearable devices, while 86 per cent said that wearables would make them more prone to security breaches. It’s not just who we grant access to our data that we should be worried about, but the people who’ll take it regardless: the hackers.

______

The IoT threat to privacy:

The most dangerous part of IoT is that consumers are surrendering their privacy, bit by bit, without realizing it, because they are unaware of what data is being collected and how it is being used. As mobile applications, wearables and other Wi-Fi-connected consumer products replace “dumb” devices on the market, consumers will not be able to buy products that don’t have the ability to track them. It is normal for consumers to upgrade their appliances, and it most likely does not occur to them that those new devices will also be monitoring them.

People assume that a refrigerator or an internet-connected toy couldn’t collect any sensitive data, so they don’t think twice about connecting these devices to their network. The problem is that the company that makes the device may have shipped it with vulnerabilities in the software it runs. Hackers can then exploit these vulnerabilities to access other devices connected to your network.

After an Electronic Frontier Foundation activist tweeted about the unsettling similarity of the Samsung Smart TV privacy policy — which warned consumers not to discuss sensitive topics near the device — to a passage from George Orwell’s 1984, widespread criticism caused Samsung to edit its privacy policy and clarify the Smart TV’s data collection practices.

But most people do not read privacy policies for every device they buy or every app they download, and, even if they attempted to do so, most would be written in legal language unintelligible to the average consumer. Those same devices also typically come with similarly unintelligible terms of use, which include mandatory arbitration clauses forcing them to give up their right to be heard in court if they are harmed by the product. As a result, the privacy of consumers can be compromised, and they are left without any real remedy.

Increased corporate transparency is desperately needed, and will be the foundation of any successful solution to increased privacy in the IoT. This transparency could be accomplished either by industry self-regulation or governmental regulation requiring companies to receive informed and meaningful consent from consumers before collecting data.

Generally, industries will respond if their customers demand more privacy. For example, after surveys revealed that new-car buyers are concerned about the data privacy and security of connected cars, the Alliance of Automobile Manufacturers (a trade association of 12 automotive manufacturers) responded by developing privacy principles they agreed to follow. Businesses can self-regulate by developing and adopting industry-wide best practices on cybersecurity and data minimization. When companies collect user data, they must take responsibility for protecting their users; if they do not want to be responsible for the data, they should refrain from collecting it in the first place.

_______

_______

Section-21

Privacy protection:   

Many countries give citizens rights to privacy in their constitutions. Representative examples include the Constitution of Brazil, which says “the privacy, private life, honor and image of people are inviolable”; the Constitution of South Africa, which says that “everyone has a right to privacy”; and the Constitution of the Republic of Korea, which says “the privacy of no citizen shall be infringed.” In most countries whose constitutions do not explicitly describe privacy rights, court decisions have interpreted their constitutions as intending to give privacy rights. Many countries have broad privacy laws outside their constitutions, including Australia’s Privacy Act 1988, Argentina’s Law for the Protection of Personal Data of 2000, Canada’s 2000 Personal Information Protection and Electronic Documents Act, and Japan’s 2003 Personal Information Protection Law.

Beyond national privacy laws, there are international privacy agreements. The United Nations Universal Declaration of Human Rights says “No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honor and reputation.” The Organisation for Economic Co-operation and Development published its Privacy Guidelines in 1980. The European Union’s 1995 Data Protection Directive guides privacy protection in Europe. The 2004 Privacy Framework by the Asia-Pacific Economic Cooperation is a privacy protection agreement for the members of that organization.

Privacy International’s privacy ranking in 2007:

The highest-ranking countries in 2007 were Greece, Romania and Canada. In terms of statutory protections and privacy enforcement, the US was the worst-ranking country in the democratic world.

_

Multiple competing interests in privacy protection: 

An important implication of the definition of privacy as an interest is that it has to be balanced against many other, often competing, interests:

  • the privacy interests of one person or category of people may conflict with some other interest of their own, and the two may have to be traded off (e.g. privacy against access to credit, or quality of health care);
  • the privacy interest of one person or category of people may conflict with the privacy interests of another person, or another category of people (e.g. health care information that is relevant to multiple members of a family); and
  • the privacy interest of one person or category of people may conflict with other interests of another person, category of people, organisation, or society as a whole (e.g. creditors, an insurer, and protection of the public against serious diseases).

Hence:

Privacy Protection is a process of finding appropriate balances between privacy and multiple competing interests.

Because there are so many dimensions of the privacy interest, and so many competing interests, at so many levels of society, the formulation of detailed, operational rules about privacy protection is a difficult exercise. The most constructive approach is to:

  • establish general principles;
  • apply the principles to all organisations;
  • create effective sanctions against non-compliance;
  • develop operational codes of practice, consistent with the Principles, applying to specific industry sectors and to particular applications of technology;
  • establish dispute-resolution procedures at the levels of individual organisations and industry sectors; and
  • bind the framework together by making the principles, codes and sanctions enforceable through quasi-judicial (tribunal) and court procedures.

_

Major models for privacy protection:

There are four major models for privacy protection. Depending on their application, these models can be complementary or contradictory. In most countries several are used simultaneously. In the countries that protect privacy most effectively, all of the models are used together to ensure privacy protection.

-1. Comprehensive Laws

In many countries around the world, there is a general law that governs the collection, use and dissemination of personal information by both the public and private sectors. An oversight body then ensures compliance. This is the preferred model for most countries adopting data protection laws and was adopted by the European Union to ensure compliance with its data protection regime. A variation of these laws, which is described as a “co-regulatory model,” was adopted in Canada and Australia. Under this approach, industry develops rules for the protection of privacy that are enforced by the industry and overseen by the privacy agency.

-2. Sectoral Laws

Some countries, such as the United States, have avoided enacting general data protection rules in favor of specific sectoral laws governing, for example, video rental records and financial privacy. In such cases, enforcement is achieved through a range of mechanisms. A major drawback of this approach is that it requires new legislation to be introduced with each new technology, so protections frequently lag behind. The lack of legal protections for individuals’ privacy on the Internet in the United States is a striking example of its limitations. There is also the problem of the lack of an oversight agency. In many countries, sectoral laws are used to complement comprehensive legislation by providing more detailed protections for certain categories of information, such as telecommunications, police files or consumer credit records.

-3. Self-Regulation

Data protection can also be achieved, at least in theory, through various forms of self-regulation, in which companies and industry bodies establish codes of practice and engage in self-policing. However, in many countries, especially the United States, these efforts have been disappointing, with little evidence that the aims of the codes are regularly fulfilled. Adequacy and enforcement are the major problems with these approaches. Industry codes in many countries have tended to provide only weak protections and lack enforcement.

-4. Technologies of Privacy

With the recent development of commercially available technology-based systems, privacy protection has also moved into the hands of individual users. Users of the Internet and of some physical applications can employ a range of programs and systems that provide varying degrees of privacy and security of communications. These include encryption, anonymous remailers, proxy servers and digital cash. Users should be aware that not all tools effectively protect privacy. Some are poorly designed while others may be designed to facilitate law enforcement access.

_______

Privacy protection can be achieved by three stakeholders:  

-1. By the state

-2. By companies

-3. By people themselves

______

______

The most common data privacy regulations:

International Data Privacy Law: the GDPR

The most important data protection legislation enacted to date is the General Data Protection Regulation (GDPR). It governs the collection, use, transmission, and security of data collected from residents of any of the 28 member countries of the European Union. The law applies to all EU residents, regardless of the location of the entity that collects the personal data. Fines of up to €20 million or 4% of total global turnover, whichever is higher, may be imposed on organizations that fail to comply with the GDPR. Some important requirements of the GDPR include:

-1. Consent

Data subjects must be allowed to give explicit, unambiguous consent before the collection of personal data. Personal data includes information collected through the use of cookies. Some information not usually considered “personal information” in the United States, such as the user’s computer IP address, is considered to be “personal data” according to the GDPR.

-2. Data Breach Notification

In most cases, organizations are required to notify the supervisory authority within 72 hours of becoming aware of a data breach affecting users’ personal information, and to notify affected data subjects without undue delay when the breach is likely to put them at high risk.

-3. Data Subjects’ Rights

Data subjects (people whose data is collected and processed) have certain rights regarding their personal information. These rights should be communicated to data subjects in a clear, easy to access privacy policy on the organization’s website.

-The right to be informed. Data subjects must be informed about the collection and use of their personal data when the data is obtained.

-The right to access their data. A data subject can request a copy of their personal data via a data subject request. Data controllers must explain the means of collection, what’s being processed, and with whom it is shared.

-The right of rectification. If a data subject’s data is inaccurate or incomplete, they have the right to ask you to rectify it.

-The right of erasure. Data subjects have the right to request the erasure of personal data related to them on certain grounds within 30 days.

-The right to restrict processing. Data subjects have the right to request the restriction or suppression of their personal data (though you can still store it).

-The right to data portability. Data subjects can have their data transferred from one electronic system to another at any time safely and securely without disrupting its usability.

-The right to object. Data subjects can object to how their information is used for marketing, sales, or non-service-related purposes. The right to object does not apply where legal or official authority is carried out, a task is carried out for public interest, or when the organization needs to process data to provide you with a service for which you signed up.
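In engineering terms, the access and portability rights above usually become a “data subject access request” export: gathering every record an organisation holds about a person into a machine-readable bundle. A minimal sketch follows, with entirely hypothetical table and field names.

```python
# Minimal sketch of a data-subject-access-request (DSAR) export.
# All table and field names are hypothetical; a real export would pull from
# every system that stores personal data, not from a single in-memory dictionary.
import json
from datetime import datetime, timezone

FAKE_DB = {
    "users":  [{"user_id": 42, "email": "alice@example.com", "name": "Alice"}],
    "orders": [{"user_id": 42, "item": "kettle", "paid": 19.99}],
}

def export_personal_data(user_id: int) -> str:
    bundle = {
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "subject_id": user_id,
        "records": {
            table: [row for row in rows if row.get("user_id") == user_id]
            for table, rows in FAKE_DB.items()
        },
    }
    return json.dumps(bundle, indent=2)   # portable, machine-readable format

print(export_personal_data(42))
```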

_

What types of privacy data does the GDPR protect?

  • Basic identity information such as name, address and ID numbers
  • Web data such as location, IP address, cookie data and RFID tags
  • Health and genetic data
  • Biometric data
  • Racial or ethnic data
  • Political opinions
  • Sexual orientation

_

The General Data Protection Regulation (GDPR) came into force in 2018 to protect the rights of citizens in the EU when it comes to data collection and privacy. GDPR applies to companies that meet the following criteria:

-1. A presence in an EU country.

-2. No presence in the EU, but it processes personal data of European residents.

-3. More than 250 employees.

-4. Fewer than 250 employees but its data-processing impacts the rights and freedoms of data subjects, is not occasional, or includes certain types of sensitive personal data.

This means it effectively applies to almost all companies. It gives customers the right to know what data is being collected and sets requirements for how and when businesses must report breaches.

GDPR is one of the toughest data privacy regulations to comply with. It does allow for a tiered approach to fines and penalties based on the relative seriousness of the offense, but businesses shouldn’t count on leniency; in 2019, British Airways was fined $228 million and Marriott International was fined over $124 million for exposing millions of records of personal data.

A new survey conducted by Propeller Insights and sponsored by Netsparker Ltd. asked executives which industries would be most affected by GDPR. Most (53%) saw the technology sector being most impacted followed by online retailers (45%), software companies (44%), financial services (37%), online services/SaaS (34%), and retail/consumer packaged goods (33%).

_

CCPA:

When the European General Data Protection Regulation (GDPR) came into full force on 25 May 2018, we could not yet grasp the far-reaching consequences it would have. Little by little, other governments are following the EU’s lead. California has moved ahead with an effort to create one cohesive data privacy law of its own, the California Consumer Privacy Act (CCPA), which became fully effective on January 1, 2020.

The CCPA is heavily inspired by the GDPR. The California Consumer Privacy Act (CCPA) applies to companies that do business in California and either 1) generate $25 million or more in annual revenue; 2) buy or sell the personal information of 50,000 or more consumers, households, or devices; or 3) earn more than half their annual revenue from selling consumers’ personal data. The law allows any California resident to get a full list of the data a business has about them and entitles consumers to know with whom businesses have shared that data. If a business violates the privacy guidelines in the CCPA, consumers are allowed to sue the business even if there hasn’t been a data breach.

The important thing to remember is that both CCPA and GDPR have the same goal, to protect the privacy of individuals, no matter if they are called consumers or data subjects.

_

Key differences between CCPA and GDPR:

_

The Health Insurance Portability and Accountability Act (HIPAA) sets the standard for how patients’ information has to be handled by doctors’ offices, hospitals, insurance companies, and other businesses that handle personal health information. HIPAA requires that businesses that process patient data and providers (e.g., hospitals) safeguard patient information, and it allows that information to be disclosed only in certain situations. 

HIPAA provides four general rules that businesses must abide by, which are:

-1. Ensure the confidentiality, integrity, and availability of all e-PHI they create, receive, maintain or transmit;

-2. Identify and protect against reasonably anticipated threats to the security or integrity of the information;

-3. Protect against reasonably anticipated, impermissible uses or disclosures; and

-4. Ensure compliance by their workforce.

_

Payment Card Industry Data Security Standards (PCI-DSS) is somewhat unique, as it isn’t a government regulation and is imposed and enforced by an independent regulatory body, the Payment Card Industry Security Standards Council. Any business that accepts, stores, or transmits cardholder data is subject to PCI-DSS. This regulation requires businesses to have policies and processes in place to protect their customers’ information and ensure they’re properly handling and storing credit card data. This even applies to businesses that use third-party vendors to handle credit card payments. All businesses involved in ecommerce need to be well versed in these requirements and prepared to make sure their vendors are too.

_

Tokenization for Data Privacy and Security:

One of the unique things about tokenization, and one of its greatest strengths, is its potential to satisfy both data privacy and security concerns. Through its ability to pseudonymize information, tokenization can act as a security failsafe to protect sensitive data in the event of a breach, rendering the data stored in the breached system unreadable. In effect, pseudonymization desensitizes data by de-identifying it and preventing it from being returned to its original, sensitive form. Because tokenization removes sensitive data from internal systems, it can virtually eliminate the risk of data theft, making it a particularly useful tool for risk reduction and compliance in terms of both data privacy and security considerations. So even if the security systems established to protect data privacy are compromised, the privacy of that sensitive information is not.
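A minimal sketch of the idea follows, assuming an in-memory “token vault”. A production system would use a hardened, access-controlled vault service, but the principle is the same: systems outside the vault only ever see a meaningless token.

```python
# Minimal tokenization sketch: sensitive values are swapped for random tokens,
# and only the vault can map a token back to the original value.
import secrets

class TokenVault:
    def __init__(self):
        self._store = {}           # token -> original value, kept only inside the vault

    def tokenize(self, sensitive: str) -> str:
        token = "tok_" + secrets.token_hex(16)   # unguessable, carries no information
        self._store[token] = sensitive
        return token

    def detokenize(self, token: str) -> str:
        return self._store[token]  # a tightly restricted operation in a real deployment

vault = TokenVault()
token = vault.tokenize("4111 1111 1111 1111")   # e.g. a card number
print(token)                      # safe to store or log in downstream systems
print(vault.detokenize(token))    # only the vault can recover the original
```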

______  

Privacy law:

Privacy law is the area of law concerned with the protection and preservation of the privacy rights of individuals. Increasingly, governments and other public as well as private organizations collect vast amounts of personal information about individuals for a variety of purposes. The law of privacy regulates the type of information which may be collected and how this information may be used. The scope of applicability of privacy laws is called expectation of privacy.

Classification of Privacy Law:

Privacy Laws can be broadly classified into:

-1. General Privacy Law:  General privacy laws have an overall bearing on the personal information of individuals and affect the policies that govern many different areas of information.

-2. Specific Privacy Law: These laws are designed to regulate specific types of information. Some examples include:

  • Health privacy laws
  • Financial privacy laws
  • Online privacy laws
  • Communication privacy laws
  • Information privacy laws
  • Privacy in one’s home

_______

Law that mandates ‘Do Not Track’ compliance:

As you probably know, many countries are right now taking up the greatly needed task of updating their privacy laws for this modern era. However, they are consistently missing one key component when doing so: an easy opt-out mechanism. While Europe’s GDPR law does a lot of great things, it also has created pop-up hell, much like the cookie law that preceded it. What we need right now is a law that works in concert with GDPR (and other similar laws) to give consumers a simple mechanism to exercise their opt-out rights.

Ten years ago, privacy researchers proposed a compelling idea to help protect people’s privacy online: a web browser setting called Do Not Track. Once enabled, your browser would send a Do Not Track signal to the websites you visit, informing them that you do not give them permission to collect or share your personal information for behavioral advertising, price discrimination, or any other purpose. If this setting were honored, all those hidden trackers that follow you around the Internet would be cut off in one shot.

Unfortunately, the idea fell apart when the ad-tech industry balked at any meaningful self-regulation. Despite that, many web browsers did build the feature into their platforms, and, in the intervening years, hundreds of millions of people worldwide have turned it on. A Forrester research report found 25% of people using the Do Not Track setting, and another national survey found 23%.

Of course, unbeknownst to the vast majority of these people, this browser setting does next to nothing right now. It is currently left to each site individually to do what it thinks is right, and none of the big tech companies act on it, giving all these people a false sense of privacy. That, however, could change overnight with a law that mandates Do Not Track compliance.

Here’s how it would work.

The signal would work like it already does today – enabled by your web browser, operating system (for apps), or Internet router (for home devices). Once on, companies that receive the signal would have to respect it, and stop tracking you. The legislation would need to define the line of what is allowed and what is not allowed.

For example:

-1. No third-party tracking by default. Data brokers would no longer be legally able to use hidden trackers to slurp up your personal information from the sites you visit. And the companies that deploy the most trackers across the web — led by Google, Facebook, and Twitter — would no longer be able to collect and use your browsing history without your permission.

-2. No first-party tracking outside what the user expects. For example, if you use Whatsapp, its parent company (Facebook) wouldn’t be able to use your data from Whatsapp in unrelated situations (such as for advertising on Instagram, also owned by Facebook). As another example, if you go to a weather site, it could give you the local forecast, but not share or sell your location history.
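Mechanically, the Do Not Track signal is just an HTTP request header (DNT: 1) sent with every request. Below is a minimal sketch of how a site could read and honour it, assuming a Flask application; the analytics script path is hypothetical.

```python
# Minimal sketch: honouring the Do Not Track request header on the server side.
# The "DNT" header name is the real signal; the page content and the analytics
# script path are hypothetical placeholders.
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    dnt = request.headers.get("DNT", "0")
    tracking_allowed = dnt != "1"        # respect the signal instead of ignoring it
    if tracking_allowed:
        return "<p>Welcome.</p><script src='/static/analytics.js'></script>"
    return "<p>Welcome. Tracking is disabled per your Do Not Track setting.</p>"

# Run with, for example:  flask --app dnt_demo run   (module name hypothetical)
```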

As a one-and-done setting, Do Not Track provides that simple mechanism to enable consumers to exercise their opt-out rights and avoid invasive data collection and profiling. The endless stream of privacy popups that Europeans have been subject to under GDPR would significantly diminish with Do Not Track. And, as a setting built into major browsers and operating systems, it is not easily undermined by dark patterns.

Importantly, giving legislative teeth to the Do Not Track browser setting would not destroy online advertising, as some companies fear. People who turn Do Not Track on could still be shown contextual ads (based on the context of the page, i.e. its content like the search you type in), as opposed to behavioral ads (based on creepy profiles of your search history, likes, purchases, and more). Increasing evidence says this can be similarly profitable. In other words, business can continue to thrive, users can continue to get great products, and your privacy can be protected.

Several U.S. Senators have expressed bipartisan support for Do Not Track legislation: Sen. Wyden proposed a bill in November 2018, and Sen. Hawley introduced the “Do Not Track Act” in May 2019 which is now co-sponsored by Sen. Feinstein.

The technical work is done, the legal foundation is in place — what we as individuals can do now is call upon our elected representatives in every nation to support Do Not Track legislation to give control back to users.

Critics of Do Not Track say it would not change online privacy for the better; in fact, the setting could even be used in fingerprinting efforts. Browser fingerprinting is an incredibly accurate method of identifying unique browsers and tracking online activity. 

______

The Global Privacy Control (GPC):

The Global Privacy Control (GPC) is a new initiative by researchers, several newspaper organizations from the United States, some browser makers, the EFF, some search engines, and some other organizations to improve user privacy and rights on the Internet. Summed up in a single sentence, GPC lets the sites a user connects to know that the user denies them the right to sell or share personal information with third parties.

While that sounds like a Do Not Track header 2.0, it is designed to work with existing legal frameworks (and upcoming ones) such as the California Consumer Privacy Act (CCPA) or the European General Data Protection Regulation (GDPR).

It all begins with a browser, extension or app that supports GPC. Currently, that means using a development version of Brave, the DuckDuckGo app for Android or iOS, or browser extensions by DuckDuckGo, Disconnect, EFF or Abine.

Brave ships with GPC enabled and no option to turn it off; other browsers, apps or extensions may require users to enable it first. In the DuckDuckGo Privacy Browser app, for instance, it is necessary to enable Global Privacy Control in the app settings to use it.

For users, that is all there is to it. The browser, app or extension adds the GPC information to the data that is submitted during connections so that sites are aware of it. The next step depends entirely on the site that the user connects to. Sites that don’t participate will ignore the header, and everything remains as if the Global Privacy Control directive does not exist. If a site participates, it will honor the request and make sure that user data is not shared or sold to third-parties.
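Under the GPC proposal, the signal travels as a Sec-GPC: 1 request header, and participating sites can advertise support via a /.well-known/gpc.json resource. Here is a minimal client-side sketch, assuming the Python requests library and using example.com purely as a placeholder site.

```python
# Minimal sketch: sending the Global Privacy Control signal and checking whether
# a site advertises GPC support. The header name and well-known path follow the
# GPC proposal; example.com is only a placeholder.
import requests

site = "https://example.com"
headers = {"Sec-GPC": "1"}                      # "do not sell or share my data"

page = requests.get(site, headers=headers, timeout=10)
print("Fetched page with GPC signal, status:", page.status_code)

probe = requests.get(f"{site}/.well-known/gpc.json", timeout=10)
if probe.ok:
    print("Site advertises GPC support:", probe.json())
else:
    print("No GPC support resource found (status", probe.status_code, ")")
```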

Right now, support is limited to a few extensions, apps, a single desktop browser with marginal market share, and some sites that participate. While some of the participating sites are major, e.g. the New York Times, it is a very limited solution at the moment. Mozilla and Automattic (WordPress) are also spearheading the effort but have not shipped implementations at this point. Even if these two companies, and perhaps others, implement GPC support, enforcing GPC would still require major Internet companies such as Google, Microsoft or Apple to join as well, and legislation in other regions of the world to introduce privacy bills.

_______

_______

Technological solutions for privacy protection:

Whereas information technology is typically seen as the cause of privacy problems, there are also several ways in which information technology can help to solve these problems. There are rules, guidelines or best practices that can be used for designing privacy-preserving systems. Such possibilities range from ethically-informed design methodologies to using encryption to protect personal information from unauthorized use. In particular, methods from the field of information security, aimed at protecting information against unauthorized access, can play a key role in the protection of personal data. 

_

-1. Design methods:

Value sensitive design provides a “theoretically grounded approach to the design of technology that accounts for human values in a principled and comprehensive manner throughout the design process” (Friedman et al. 2006). It provides a set of rules and guidelines for designing a system with a certain value in mind. One such value can be ‘privacy’, and value sensitive design can thus be used as a method to design privacy-friendly IT systems (Van den Hoven et al. 2015). The ‘privacy by design’ approach as advocated by Cavoukian (2009) and others can be regarded as one of the value sensitive design approaches that specifically focuses on privacy (Warnier et al. 2015). More recently, approaches such as “privacy engineering” (Ceross & Simpson 2018) extend the privacy by design approach by aiming to provide a more practical, deployable set of methods by which to achieve system-wide privacy.

_

Privacy by design:

The term “Privacy by Design” means nothing more than “data protection through technology design.” The privacy by design approach provides high-level guidelines in the form of principles for designing privacy-preserving systems. These principles have at their core that “data protection needs to be viewed in proactive rather than reactive terms, making privacy by design preventive and not simply remedial” (Cavoukian 2010). Privacy by design’s main point is that data protection should be central in all phases of product life cycles, from initial design to operational use and disposal.

Cavoukian’s approach to privacy has been criticized as vague, difficult to enforce and difficult to apply to certain disciplines, as well as for prioritizing corporate interests over consumers’ interests and placing insufficient emphasis on minimizing data collection.

The European GDPR regulation incorporates privacy by design.

Privacy by design is based on seven “foundational principles”:

-1. Proactive not reactive; preventive not remedial

-2. Privacy as the default setting

-3. Privacy embedded into design

-4. Full functionality – positive-sum, not zero-sum

-5. End-to-end security – full lifecycle protection

-6. Visibility and transparency – keep it open

-7. Respect for user privacy – keep it user-centric

_

The Privacy Impact Assessment approach proposed by Clarke (2009) makes a similar point. It proposes “a systematic process for evaluating the potential effects on privacy of a project, initiative or proposed system or scheme” (Clarke 2009). Note that these approaches should not only be seen as auditing approaches, but rather as a means to make privacy awareness and compliance an integral part of the organizational and engineering culture.

There are also several industry guidelines that can be used to design privacy preserving IT systems. The Payment Card Industry Data Security Standard, for example, gives very clear guidelines for privacy and security sensitive systems design in the domain of the credit card industry and its partners (retailers, banks). Various International Organization for Standardization (ISO) standards (Hone & Eloff 2002) also serve as a source of best practices and guidelines, especially with respect to information security, for the design of privacy friendly systems. Furthermore, the principles that are formed by the EU Data Protection Directive, which are themselves based on the Fair Information Practices (Gellman 2014) from the early 70s – transparency, purpose, proportionality, access, transfer – are technologically neutral and as such can also be considered as high level design principles. Systems that are designed with these rules and guidelines in mind should thus – in principle – be in compliance with EU privacy laws and respect the privacy of its users.

_

-2. Cryptography:

Cryptography has long been used as a means to protect data, dating back to the Caesar cipher more than two thousand years ago. Modern cryptographic techniques are essential in any IT system that needs to store (and thus protect) personal data, for example by providing secure (confidential) connections for browsing (HTTPS) and networking (VPN). Note however that by itself cryptography does not provide any protection against data breaching; only when applied correctly in a specific context does it become a ‘fence’ around personal data. In addition, cryptographic schemes that become outdated by faster computers or new attacks may pose threats to (long-term) privacy.

_

The degree of privacy is mostly linked to the type of encryption utilized and the computational capacity available. Different encryption algorithms are currently available, offering certain guarantees for users. Several protocols in the application layer rely on these algorithms as the core of privacy enforcement. Examples include public-key cryptography and algorithms such as RSA and DSA. In addition, and due to recent media revelations, some applications are moving to new cryptography schemes based on elliptic curve cryptography, such as Elliptic Curve Diffie-Hellman (ECDH), the Integrated Encryption Scheme (IES) or the Elliptic Curve Digital Signature Algorithm (ECDSA). The main argument behind adopting new cryptography schemes is suspected evidence of deliberately weakened pseudo-random number generators in some existing implementations, and the resulting possibility of broken cryptography. Furthermore, the possibility of encapsulating connections through a SOCKS interface allows the use of routing techniques through anonymous networks that are difficult to trace.
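As a concrete illustration of one of the elliptic-curve schemes mentioned above, the sketch below performs an ECDH key agreement with the widely used Python cryptography package: each party generates a key pair, the public keys are exchanged, and both sides derive the same symmetric key without it ever travelling over the network. This is a minimal sketch, not a complete protocol; in particular it omits authentication of the public keys.

```python
# Minimal sketch: Elliptic Curve Diffie-Hellman (ECDH) key agreement, then
# deriving a symmetric key from the shared secret with HKDF.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Each party generates its own key pair on the P-256 curve.
alice_priv = ec.generate_private_key(ec.SECP256R1())
bob_priv = ec.generate_private_key(ec.SECP256R1())

# Public keys are exchanged; each side combines its private key with the
# other's public key and obtains the same shared secret.
alice_shared = alice_priv.exchange(ec.ECDH(), bob_priv.public_key())
bob_shared = bob_priv.exchange(ec.ECDH(), alice_priv.public_key())
assert alice_shared == bob_shared

# Never use the raw shared secret directly; derive a key from it.
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"handshake demo").derive(alice_shared)
print(key.hex())
```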

_

Encryption converts electronic data into a form that is unreadable by anyone for whom the information is not intended. Three forms of encryption are prevalent in day-to-day Internet usage:

  • End-to-end encryption: In end-to-end encryption, the keys to decrypt communications are held exclusively by the sender and recipient. When end-to-end encryption is employed, any intermediate device, service provider or potential interceptor is unable to read the content of communications. End-to-end encryption is offered by popular messaging services and applications.
  • Disk / device encryption: Disk or device encryption is used to protect stored information. This means that data held on a disk or device cannot be read or accessed by anyone who does not possess the PIN or password, including hardware manufacturers and software providers. Device encryption is commonly used in computers and smartphones.

Is encrypting your Hard Drive effective?

An adversary that is able to physically access your devices on multiple occasions will have little difficulty defeating an encrypted hard drive. An adversary observing your internet traffic will likely remain unaware you’ve gone to the effort – an encrypted system looks almost exactly the same as an unencrypted system when it communicates with the rest of the world. A single mistake in how you encrypt your hard drive will allow sophisticated attackers to break your encryption.

Take home point:

Encrypting your devices is a great idea to reduce the privacy and security impact of a device being lost or stolen. It won’t do much else.
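For completeness, the sketch below shows encryption of data at rest at the file level, using the cryptography package’s Fernet recipe (authenticated symmetric encryption). Full-disk encryption such as BitLocker, FileVault or LUKS operates at a lower layer, but the underlying point is the same: without the key the stored bytes are unreadable, and losing the key means losing the data.

```python
# Minimal sketch: encrypting data at rest with Fernet (AES-based, authenticated).
# Anyone without `key` sees only ciphertext; losing the key loses the data.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # store this somewhere safer than the data itself
f = Fernet(key)

plaintext = b"2021-03-01, cardiology appointment, Dr Example"   # hypothetical record
ciphertext = f.encrypt(plaintext)    # safe to store on disk or in a backup
print(ciphertext)

recovered = f.decrypt(ciphertext)    # only possible with the key
assert recovered == plaintext
```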

  • Transport encryption: Transport encryption follows information as it traverses a computer network and can thereby encrypt individuals’ browsing activity. Once data reaches site operators, however, it can be accessed or disclosed. Transport encryption is frequently offered by interactive websites and includes HTTPS, Secure Sockets Layer (SSL) and Transport Layer Security (TLS).

The use of encryption is on the rise. More hardware manufacturers offer device encryption, more messaging applications have introduced end-to-end encryption, and more websites now facilitate transport encryption. Some also believe that recent Internet-of-things-related security breaches, including the hacking of online cameras to publish video footage from baby monitors, foreshadow the greater availability of encryption in connected products.

Encryption can secure communications, web browsing and online transactions against outside monitoring and interference in ways that protect human rights, but it can also frustrate legitimate government surveillance and the apprehension of cybercriminals. By the same token, while encryption can protect children’s data from illegitimate external monitoring and unauthorized access, encryption can equally be used to evade detection by those who wish to do them harm. Law enforcement authorities have in particular noted the challenges that encryption poses to investigating and preventing cases of child sexual exploitation. 

A blockchain is basically a distributed ledger that stores transactions in a non-repudiable way, without the use of a trusted third party. Cryptography is used to ensure that all transactions are “approved” by members of the blockchain and stored in such a way that they are linked to previous transactions and cannot be removed. Although focused on data integrity and not inherently anonymous, blockchain technology enables many privacy-related applications (Yli-Huumo et al. 2016, Karame and Capkun 2018), such as anonymous cryptocurrency (Narayanan et al. 2016) and self-sovereign identity.

_

Is SSL really secure?

HTTPS (Hyper Text Transfer Protocol Secure) appears in the URL when a website is secured by an SSL certificate. SSL is the technology some websites use to cause a nice re-assuring padlock to appear in browser address bars. That is quite literally what motivates most purchases of SSL certificates – the appearance of security. It is an utter mess of a technology with a design too complicated to conduct a useful security audit. SSL is a broken technology vulnerable to many technical attacks, only some of which are in the public domain. Furthermore, the technology relies on placing complete trust in an impractically large number of individuals and organizations – many of which have a demonstrated history of not being trustworthy. Furthermore, the expertise required to securely implement SSL is far beyond the expertise of most of the individuals tasked with implementing it.

SSL makes it so that it’s more likely your device is talking to the device you intend. SSL also makes it hard to observe the contents of your communication without modifying that communication (a risky proposition for an adversary wishing to remain covert). But it’s nowhere close to the panacea many vendors claim. The insurance policies which cover the “$1,000,000 security guarantee” are narrowly defined and have enough exceptions that they are pointless. They are good marketing tools which provide a great reason to sell SSL certificates to website owners at a variety of prices based on their ability to pay – but don’t serve a significant non-marketing purpose.

Take home point:

When you see the SSL padlock, it is usually uneconomic for an adversary to intercept your communications with that site. There is no assurance that the server you are communicating with will handle your data securely or in a way that meets your privacy expectations.
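What the padlock actually buys you can be seen by opening a TLS connection yourself. The sketch below, using only the Python standard library, connects to a host, lets the default certificate validation run, and prints whom the certificate says you are talking to. Note that nothing in it says anything about how the server will handle your data afterwards; the host name is only an example.

```python
# Minimal sketch: establish a TLS (SSL) connection and inspect the server
# certificate that the browser padlock is based on.
import socket
import ssl

host = "www.example.com"                  # placeholder host
context = ssl.create_default_context()    # verifies the certificate chain and hostname

with socket.create_connection((host, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        print("Negotiated protocol:", tls.version())              # e.g. TLSv1.3
        print("Subject:", dict(item[0] for item in cert["subject"]))
        print("Issuer:",  dict(item[0] for item in cert["issuer"]))
        print("Valid until:", cert["notAfter"])
```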

_

Homomorphic encryption: the best tool for protecting privacy:   

It can save a life, figure out someone’s predisposition to developing cancer, solve a crime from a long time ago or find long-lost relatives: genome sequencing has come a long way since the human genome was first sequenced in the early 2000s. Fast-forward to today, and this process of determining someone’s complete genetic code is becoming ever more routine. Thousands of COVID-19 survivors, for instance, are now getting their genome mapped, in a bid to help researchers understand how specific genetic makeup could affect a person’s susceptibility to the coronavirus.

But while peeking into someone’s DNA often does help prevent, diagnose and treat many diseases, obtaining the genetic fingerprint also exposes that individual’s personal information encoded in the genome. This is the conundrum around the future of precision medicine. Suddenly, you’re sharing all six billion base pairs of genes with the people sequencing your genome. Whatever the goal, genome mapping and sequencing jeopardizes our privacy.  A unified DNA database for civil and criminal cases might lead to speculative searching and will impinge on one’s right to privacy. 

But it doesn’t have to be like that. There is a way to completely obscure someone’s DNA records (and, to be clear, sensitive data sets in general) while still keeping the data useful: by fully homomorphic encryption (FHE). It is a type of next-generation cryptography that is so secure that even future quantum computers won’t be able to crack it.

Encryption we commonly use today doesn’t make our data totally safe. Whenever one needs to run any computations, for example to carry out necessary medical genetic testing on a sequenced genome, the data have to be decrypted. However briefly, the data become susceptible to theft and leaks.

With FHE, though, the data never get decrypted. The information is encoded in such a way that it remains encrypted all the time—when it’s being transmitted or when it’s in storage, and also during any computations. The data stay cryptographically jumbled to preserve privacy while they are being processed, and so that even the people handling the data can’t know the contents. So even if the data do get stolen or leaked, they will remain safely encrypted. The recipient simply has to decrypt the results with a special secret key, and that move doesn’t reveal any information about the source.
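The idea of computing on data that stays encrypted can be glimpsed even in classic cryptosystems. The toy sketch below uses textbook RSA, which happens to be multiplicatively homomorphic: multiplying two ciphertexts yields an encryption of the product of the plaintexts. This is neither secure (tiny key, no padding) nor fully homomorphic (FHE supports arbitrary combinations of additions and multiplications), but it shows how a party holding only ciphertexts can still do useful arithmetic on them.

```python
# Toy sketch of the homomorphic idea using textbook RSA's multiplicative property:
# Enc(a) * Enc(b) mod n == Enc(a * b). Tiny key and no padding; illustration only.
p, q = 61, 53                      # toy primes (never this small in practice)
n = p * q                          # 3233
phi = (p - 1) * (q - 1)
e = 17                             # public exponent
d = pow(e, -1, phi)                # private exponent (requires Python 3.8+)

def enc(m: int) -> int: return pow(m, e, n)
def dec(c: int) -> int: return pow(c, d, n)

a, b = 7, 6
ca, cb = enc(a), enc(b)

# A server that only ever sees ca and cb can still compute the encrypted product:
c_product = (ca * cb) % n

assert dec(c_product) == a * b     # 42, recoverable only by the key holder
print(dec(c_product))
```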

Even when quantum computers become powerful enough to break modern cryptography, easily cracking typical encryption algorithms, they won’t be able to break homomorphic encryption. This is because FHE is based on the mathematics of lattices: repeating, multidimensional, grid-like collections of points. Lattice-based encryption schemes hide data inside such a collection, some distance away from a point. Calculating just how far away an encrypted message is from a lattice point is extremely difficult for both a quantum and a traditional computer.

Preserving genomic privacy is just one possible use of FHE. It can be used to preserve any sensitive data, be they medical records or financial information.

Homomorphic encryption also addresses the problem of sharing data—critical because of Europe’s GDPR regulations, a country’s specific privacy laws or even a company’s own regulations. For instance, take a bank. If two departments were to share their data, one dealing with insurance and another one with investment, there would be data aggregation, giving data analysts access to all the data. With FHE, the analysts wouldn’t have a clue what the data are about.

Currently, the computational requirements of FHE are a lot greater than with typical modern encryption, making the process much, much longer. But the technology keeps improving, and in the near future is likely to become fast enough for many different applications. When that happens, it should become the default crypto option for sensitive data, especially medical and genomic. Because at the end of the day, there’s nothing more important than the data about our genetic makeup and that of our children—the information about what makes us “us.”

_

-3. Proxy server:

One of the oldest technologies used to enforce privacy and anonymity is the proxy server. Perhaps due to the simplicity of their functioning, together with their popularity in the early days of the Internet, proxy servers are still one of the main technologies in use for enforcing privacy and anonymity. The function of a proxy server consists mainly in masking the client’s requests, providing a new identity, i.e. a different IP address, possibly located in a different geographical region. A vast number of public and private proxy servers are currently available; some, such as www.anonymizer.ru, provide integrated proxy solutions within the web browser. In addition, several lists of free proxies are published daily across the Internet, offering all kinds of proxies located in different countries, with different levels of anonymity. Notwithstanding this, proxy servers cannot by themselves be considered a reliable method of guaranteeing privacy and anonymity. The main reason is that the proxy server knows the origin and destination of the requests; if it is compromised, it can expose the identity of the users behind it. Also, although the proxy server can keep the identity secret, there is no guarantee that the content of the requests is not being monitored. Therefore, proxy servers cannot guarantee any property related to plausible deniability and true anonymity/privacy, and they should be avoided as a method of guaranteeing privacy and anonymity.
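In practice, routing traffic through a proxy is a one-line configuration in most HTTP clients. Below is a minimal sketch with the Python requests library and a placeholder proxy address; note that, exactly as described above, the proxy operator still sees both who you are and where you are going.

```python
# Minimal sketch: sending web requests through an HTTP/HTTPS proxy.
# The proxy address is a placeholder (a documentation-only TEST-NET address);
# the proxy operator can still see both the origin and destination of requests.
import requests

proxies = {
    "http":  "http://203.0.113.10:8080",
    "https": "http://203.0.113.10:8080",
}

resp = requests.get("https://example.com", proxies=proxies, timeout=15)
print(resp.status_code)
```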

_

-4. Onion routing:

Onion Routing is a general-purpose infrastructure for private communication over a public network. The core architecture of onion routing is the implementation of mix networks, i.e. nodes that accept messages from different sources and route them randomly to other nodes within the network. The messages transmitted between them are encrypted, and different layers of encryption are removed as the messages travel across the nodes. This increases the difficulty of monitoring the traffic (and thus of revealing identities). In addition, onion routing is not node-dependent; compromising a router does not compromise the network itself (although it can facilitate traffic analysis). Nevertheless, onion routing has some weaknesses in protecting the privacy and anonymity of its users.

Recent research shows that even if the traffic is encrypted and hard to trace, it is still possible to infer the population of users and their geographical locations. Furthermore, intersection attacks and timing analysis can reveal users’ behaviour within the onion routing network, leading to further identification of sources or destinations. Despite these issues, onion routing is still considered one of the best alternatives when aiming to guarantee privacy and anonymity.

TOR:

TOR, previously known as The Onion Router, is among the most popular solutions used these days to protect privacy and anonymity on the Internet. The main goal of TOR is to provide a circuit-based, low-latency anonymous communication service. TOR’s core architecture is based on the same principles as onion routing. TOR contains several improvements over traditional onion routing, including: “perfect forward secrecy, congestion control, directory servers, integrity checking, configurable exit policies, and a practical design for location-hidden services via rendezvous points”.


TOR has won popularity among ordinary Internet users. The main reason (in addition to its enforcement of privacy and anonymity) is that it is free software and multiple cross-platform clients are available. In addition, several extensions/add-ons are available for the most popular web browsers, making TOR a very accessible solution. Because TOR is configurable in most applications through a SOCKS interface, it can be used with a broad number of protocols, facilitating anonymity in different types of services. 
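That SOCKS interface is also how ordinary scripts can be pointed at TOR. Below is a minimal sketch, assuming a local TOR client is listening on its default SOCKS port 9050 and that the requests library is installed with SOCKS support (pip install requests[socks]); the socks5h scheme makes DNS resolution happen inside the TOR network as well.

```python
# Minimal sketch: routing an HTTP request through a locally running TOR client
# via its SOCKS interface (default port 9050).
import requests

tor_proxy = "socks5h://127.0.0.1:9050"   # socks5h = resolve DNS through TOR as well
proxies = {"http": tor_proxy, "https": tor_proxy}

# The Tor Project's check service reports whether the request arrived via TOR.
resp = requests.get("https://check.torproject.org/api/ip",
                    proxies=proxies, timeout=30)
print(resp.json())
```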

The success of TOR does not rely only on its core technology and principles, but on the network of volunteers that maintain the nodes. Due to TOR’s design, anyone with enough bandwidth can provide a new router, allowing TOR to expand worldwide. Nowadays, the TOR network is composed of more than 5,000 routers.

Furthermore, TOR’s user population has fluctuated over time. Recent media revelations concerning global surveillance programs have made TOR even more popular among ordinary users, and the network has grown considerably in recent months, in both relays and users, reaching peaks of more than 5 million daily connections.

_

-5. Seals:

Another proposal to solve the Internet privacy dilemma harnesses optional seals to identify Web sites that adhere to the seal provider’s privacy principles.  “The most notable examples of such initiatives are TRUSTe, Better Business Bureau Online, and SecureAssure.”  Participating Web businesses donning a seal assure the site’s visitors that the site’s privacy policy and practices conform to the privacy policy standards outlined by the seal-sponsoring organization. For example, the privacy policy of the Better Business Bureau’s Privacy Program includes “verification, monitoring and review, consumer dispute resolution, a compliance seal, enforcement mechanisms and an educational component.”  Seal programs also seem to pass EU muster, because the programs’ privacy policies meet the rigorous demands of the Directive.  

Unfortunately, voluntary seal programs have faltered as a feasible solution to the privacy issue for several reasons.  First, the programs are completely voluntary, thus severely limiting the number of Web sites that fall under the purview of a seal program. Also, in many cases, a seal program’s sponsors, who established and fund the seal program, are also seal program participants.  In addition, although the seal programs boast of enforcement mechanisms, the only real penalty that the seal issuer can assess against a violator is the revocation of the seal.  Lastly, it is difficult to uncover seal participants who violate the privacy policies of a program, which further undermines the effectiveness of the seal programs.  

_

-6. The “Privacy Toolbar”:

The nucleus of the proposed privacy program is a graphical user interface coined the “privacy toolbar.”  This toolbar would be similar in appearance to the visual toolbars of many software applications and operate in a similar fashion. The toolbar would comprise a series of buttons, each containing a picture icon and representing a “core element” of a privacy policy.  Thus, the particular buttons that appear on a Web site’s privacy toolbar would depend on its treatment of an individual’s information.  However, every toolbar would derive from the same pool of icons, furthering uniformity and reliability while allowing each toolbar to be custom-fit to the site’s data collection practices. The toolbars should also have the same basic construction and be placed in a conspicuous location on the site.  Furthermore, the icons should be readily apparent and internationally recognizable, in a similar manner to road signs, and serve the same purpose: imparting information about what lies ahead for the person who utilizes the Web site.  As a result, these iconic buttons would serve as visual management guides to an individual visiting a particular Web site.
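Purely as an illustration of how such a toolbar could be assembled from a shared pool of icons, here is a minimal sketch in Python; the element names and icon labels below are hypothetical and not part of any existing standard:

    # Hypothetical shared pool of "core element" icons used by every toolbar.
    CORE_ELEMENTS = {
        "collects_personal_data": "[collect]",
        "shares_with_third_parties": "[share]",
        "uses_tracking_cookies": "[cookies]",
        "offers_opt_out": "[opt-out]",
    }

    def build_toolbar(site_practices: dict) -> list:
        """Pick only the icons that describe this site's data practices."""
        return [icon for element, icon in CORE_ELEMENTS.items()
                if site_practices.get(element)]

    # Example: a site that collects data, sets cookies, and offers an opt-out.
    print(build_toolbar({
        "collects_personal_data": True,
        "uses_tracking_cookies": True,
        "offers_opt_out": True,
    }))        # ['[collect]', '[cookies]', '[opt-out]']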

The privacy toolbar is designed to compress a complex privacy policy into simple icons in order to facilitate a user’s understanding of a site’s privacy policy. By design, the toolbar should not supplant the posting of a privacy policy in full text. In fact, the toolbar may encourage Web sites to remove layers of complexity that cloak their current privacy policies and create easy-to-read, consumer-friendly textual privacy policies that clearly and fully explain their information collection practices.

Educating the Internet public regarding the meaning of the buttons located on the toolbar may require a formal program that utilizes various media.  Thus, successful implementation of the program may require governmental spending to help educate the e-community about the toolbar program, its purpose, and its limitations.  In addition to a formal campaign to impart general learning, the toolbar itself should be an indispensable tool for informal self-education.  Each button on the toolbar, therefore, should be a functional button.  When depressed (“clicked”), the button should link the user to a site that explains the element in detail as it pertains to the site and any steps the user may need to take to effectuate that element.  In addition, all toolbars would include a help button, which would link to an FTC Web page not only describing in detail the general mechanics and definitions of the toolbar program, but also a place to report suspected violators of the program.  Considering the power of the Internet and user familiarity with toolbars, the self-education program may yield successful results without pursuing secondary educational avenues.

_

-7. ISP cooperation:

Some authors have highlighted that ISPs play a crucial role in privacy and anonymity matters. They are the main entities responsible for implementing mechanisms that can affect users’ privacy and anonymity. Nevertheless, few ISPs are “friendly” in these matters. Some proposals involve a commitment from ISPs to provide better-quality relay bridges for TOR, and there have been official requests for greater neutrality and protection from ISPs. Nevertheless, all of them remain subject to laws that sometimes conflict with the mechanisms for guaranteeing privacy and anonymity.

_

-8. Identity management:

The use and management of users’ online identifiers is crucial on the current Internet and in social networks. Online reputation is becoming more and more important, both for users and for companies. In the era of big data, correct information about users has increasing monetary value.

‘Single sign on’ frameworks, provided by independent third parties (OpenID) but also by large companies such as Facebook, Microsoft and Google (Ko et al. 2010), make it easy for users to connect to numerous online services using a single online identity. These online identities are usually directly linked to the real world (offline) identities of individuals; indeed Facebook, Google and others require this form of log on (den Haak 2012). Requiring a direct link between online and ‘real world’ identities is problematic from a privacy perspective, because it allows profiling of users (Benevenuto et al. 2012). Not all users realize how much data companies gather in this manner, or how easy it is to build a detailed profile of them. Profiling becomes even easier if the profile information is combined with other techniques such as implicit authentication via cookies and tracking cookies (Mayer & Mitchell 2012).

From a privacy perspective a better solution would be the use of attribute-based authentication (Goyal et al. 2006), which allows access to online services based on the attributes of users, for example their friends, nationality, age, etc. Depending on the attributes used, they might still be traced back to specific individuals, but this is no longer crucial. In addition, users can no longer be tracked across different services, because they can use different attributes to access different services, which makes it difficult to trace online identities over multiple transactions, thus providing unlinkability for the user. Recently (Allen 2016, Other Internet Resources), the concept of self-sovereign identity has emerged, which aims for users to have complete ownership and control of their own digital identities. Blockchain technology is used to make it possible for users to control a digital identity without the use of a traditional trusted third party (Baars 2016).
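As a rough sketch of the attribute-based idea in Python (the attribute names and policies below are made up for illustration; real attribute-based credential systems prove possession of attributes cryptographically rather than simply asserting them):

    # A user presents only the attributes a service needs, not a full identity.
    def grant_access(presented: dict, policy: dict) -> bool:
        """Check each policy requirement against the presented attributes."""
        if presented.get("age", 0) < policy.get("min_age", 0):
            return False
        allowed = policy.get("nationalities")
        if allowed and presented.get("nationality") not in allowed:
            return False
        return True

    # Different services see different attribute subsets, which makes the
    # resulting sessions hard to link back to a single online identity.
    video_service = {"min_age": 18}
    voting_portal = {"min_age": 18, "nationalities": {"NL", "BE"}}

    print(grant_access({"age": 25}, video_service))                       # True
    print(grant_access({"age": 25, "nationality": "NL"}, voting_portal))  # True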

_______

_______

The Concept of Privacy as a Risk Management Discipline:

The principles of risk management can be applied to a company’s privacy protection initiatives.  Privacy risk is defined as the “potential loss of control over personal information”. Although an individual may consent to the use of his or her personal information, the “loss of control” occurs when the organization fails to provide adequate safeguards. 

The Personal Information Protection and Electronic Documents Act (PIPEDA) is the federal privacy law for private-sector organizations in Canada. Schedule 1 of PIPEDA contains ten principles that guide corporations in the protection of personal information. Principle 7, “Safeguards”, recommends that “personal information shall be protected by security safeguards appropriate to the sensitivity of the information”. This includes protection against loss, theft, unauthorized access, disclosure, copying, use, or modification, with recommended safeguards that include physical, organizational, and technological measures.

The challenge corporations face in meeting this recommendation is that they do not fully understand the nature of the risk that comes with being responsible for safeguarding privacy, despite an abundance of information on the general issue of risk. Dozens of studies, whitepapers and similar resources outline the types of threats that organizations face; what is missing is information about privacy risk specifically. In the absence of useful and readily available information, organizations default to technology solutions to reduce privacy risk before understanding the nature of the risk being managed. It is effectively “putting the cart before the horse”.

Privacy risk as operational risk:

Within the discipline of risk management, privacy as a risk is rarely identified or effectively managed. This may be a result of the failure by corporate executives to understand either the sources of privacy risk or the strategies to treat them. Integrating privacy risk into an organization’s risk management strategy requires an understanding of the type or categorization of risk and where it should reside within the risk management structure. Privacy should be seen as a form of operational risk, despite the ambiguity of this risk category. Operational risk is defined as “the risk of [money] loss resulting from inadequate or failed internal processes, people and systems or from external events”, including legal risk but excluding strategic and reputation-linked risks. The exclusion of reputational risk can be problematic given the extent to which companies speak of the perceived link between privacy management and reputation (usually referred to in terms of customer trust). Notwithstanding this difficulty, however, it is reasonable to treat privacy risk within an operational risk framework, given that privacy breaches frequently result from failed processes and systems or from external events.

_

Risk Management Process:

Risk management is defined as “the process of identifying risk, assessing risk, and taking steps to reduce risk to an acceptable level”. There are five steps in the risk management process that help companies to answer key questions as outlined below:

What are the key privacy risks and what will happen if the risk materializes?        

-1. Identification and classification of risk type

-2. Assessment of likelihood and severity of loss

How can we deal with these risks?    

-3. Evaluation of necessary strategies to address risks

-4. Implementation of strategies

How do we learn and apply our learning given new information or in the event of a breach?

-5. Monitoring and modification of strategies

_

There are four strategies for treating identified risks: 

-1. Mitigation

-2. Avoidance

-3. Acceptance or retention

-4. Transference.

_

These five basic steps and four strategies have been mapped into a Privacy Risk Management Model as shown in Figure below. 

Figure above shows how the stages and steps of privacy risk management fit together. Three observations are noteworthy. First, the boxes illustrate the process of a risk management cycle that begins with the identification of risk and ends with recovery from a breach. Joining the last box and the first is a feedback loop that illustrates the learning from a breach incident and the anticipated effects of this learning on the risk identification and strategy steps. Second, appearing over the boxes are numbered circles corresponding to the discrete steps in the risk management process described above. Note that steps 4 (implementation of strategies) and 5 (monitoring and modification of strategies) appear both pre-breach and post-breach to signify that these actions are continuous. Third, note that the first two boxes are encircled. This indicates that there should be regular reviews and updating of company privacy risk management policies.
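A minimal sketch of how the identification, assessment and strategy-selection steps could be recorded in a simple privacy risk register (the example risks, the 1–5 likelihood/severity scales and the selection thresholds are illustrative assumptions, not part of any standard):

    # Steps 1-2: identify privacy risks and assess likelihood and severity (1-5).
    risks = [
        {"name": "Unencrypted backup tapes lost in transit", "likelihood": 2, "severity": 5},
        {"name": "Marketing data retained longer than needed", "likelihood": 4, "severity": 3},
        {"name": "Vendor mishandles shared customer data", "likelihood": 3, "severity": 4},
    ]

    def choose_strategy(risk: dict) -> str:
        """Step 3: a toy rule for picking one of the four treatment strategies."""
        score = risk["likelihood"] * risk["severity"]
        if score >= 12:
            return "mitigation"       # reduce the risk directly
        if risk["severity"] >= 4:
            return "transference"     # e.g. insurance or contractual transfer
        if score <= 4:
            return "acceptance"       # retain and monitor (step 5)
        return "avoidance"            # stop the risky processing altogether

    # Step 4: implement; step 5: monitor and revisit this register regularly.
    for risk in sorted(risks, key=lambda r: r["likelihood"] * r["severity"], reverse=True):
        print(f'{risk["name"]}: {choose_strategy(risk)}')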

_______

_______

Data privacy advice for businesses: 

_

_

Data privacy advice for consumers:

______

______

How do you protect internet & data privacy by yourself?

-1. Use VPN: 

There are various ways of protecting your online privacy, but one of the most effective is a VPN. It is a tool that provides an encrypted tunnel for all your online activities, which means it encodes all the information transferred between you and your host site, leaving little chance for snooping and spying. It also gives you an anonymous IP address and disguises your actual identity, hiding your geographical location and making your online existence safer and more secure.

There are various VPNs available, both free and paid. Some VPNs operate on a small scale, with servers in only a few countries, while others are international services with coverage across most parts of the world.

VPNs work by funnelling all of your internet traffic through an encrypted pipe to the VPN server, making it more difficult for anyone on the internet to see which sites you are visiting or which apps you are using. But VPNs don’t inherently protect your privacy or give you anonymity. VPNs simply divert all of your internet traffic from going to your internet provider’s systems into the VPN provider’s systems instead.

Why should you trust a VPN that promises to protect your privacy more than your internet provider? The answer is that you can’t, and you shouldn’t.

By far, some of the worst offenders are the free VPNs. As the old adage goes, if it’s free then you are the product. What that means is that they make money off you — specifically, your data. Like any service that costs nothing, VPNs are often supported by ads. That means taking your internet traffic and selling it to the highest bidder to serve you targeted ads while you’re connected to the VPN. Other free VPNs have been accused of injecting ads into the websites that you visit.

While there are paid and premium VPNs that are generally more mindful about your privacy, they aren’t anonymous, as they can be linked to your billing address. Paid VPNs also don’t solve the problem of funnelling all of your internet traffic to a potentially untrustworthy company.

Some VPN providers also claim to protect your privacy by not storing any logs or tracking which websites you visit or when. While that may be true in some cases, there is no way you can be completely sure. In fact, some VPN providers have claimed they don’t store any logs — but those claims were later proven false.

_

Does VPN really improve your Privacy? 

A VPN creates a relatively secure connection between your computer and another computer on the internet. VPNs can be used to see the internet from the perspective of another country, and they are also a good countermeasure for many of the risks of using public Wi-Fi. But while the connection to your VPN provider may be secure, the connection from your VPN provider to the rest of the internet remains insecure. The privacy situation is just as bad as if you hadn’t bothered with a VPN. VPNs only protect against local privacy threats.

But there’s also a new risk introduced by using a VPN. VPNs provide a convenient chokepoint for your adversary to monitor all of your communications. By routing all your communications through a VPN, your adversary only needs to compromise your VPN provider’s systems or otherwise compel their assistance in monitoring all of your traffic. Without a VPN, your adversary needs to figure out all the networks you use, and find a way to monitor your use of each of those networks. With a VPN, they only need to monitor traffic coming from and going to your VPN provider. Which is easier?

Take home point:

Using a VPN is a good choice when using public Wi-Fi networks. Although they can address some privacy issues, they also introduce new ones. Using a VPN on a day-to-day basis in the belief that doing so will increase your privacy is probably misguided. VPNs can be useful, but it’s important to know their limitations. Just don’t rely on them to protect your privacy or your anonymity.   

_

-2. Use virtual credit card (VCC):

If you have ever bought anything online—and the chances are that you have—you have exposed yourself to potential hack attacks and data breaches. Even if there were no fraudulent activities on your bank account, your information was probably compromised at some point.

Although online shopping is an excellent way to conduct most of your purchases, safety and privacy concerns are not a minor drawback. The solution comes in the form of virtual credit cards (VCCs) that are becoming increasingly popular with fans of buying things on the Internet.

You can get this nifty service either from your bank or a specialized VCC issuer, like Privacy.com.

A Privacy virtual credit card is a virtual card number that you can use as a payment token with online retailers. Although it has all the essential features of a real credit card, this number is a one-time payment method that does not disclose your data to merchants. VCCs are generated for particular purchases and cannot be used for anything else. They don’t store any bank account or credit card information, so even if the merchant gets hacked, your details won’t be compromised. You can think of a VCC as a token tied to a specific retailer, which ensures that your other funds remain safe from fraudsters and thieves. Privacy developed its virtual credit cards in an attempt to offer a secure and easy online payment method that allows you to buy products or services even from iffy online stores.

A virtual credit card is a randomly created card number that you can get every time you are making an online purchase. You must link your Privacy VCC to your bank account or your debit card as the source of funds. The account data will not be visible or available to the merchant or anyone else.

When the time comes to hit the Purchase button, you simply enter the virtual credit card details provided by Privacy. VCCs appear as regular card numbers to merchants, so the payment goes through the same way as it would with your standard credit or debit card.

You can create cards via the Privacy website, or you can add the extension to your web browser. If you add the extension, the process gets a lot quicker because the extension will offer to make the card as soon as you are ready to pay. If you conduct most of your shopping on your phone or tablet, there is also the Privacy mobile app available.

_

-3. Conduct Safe Browsing:

Hackers can easily track your activities and get into your system through your browser. It’s highly recommended to keep your browser updated to the latest version. Avoid spammy websites that ask for user details. You can also block ads in your browser and take the extra time to actually read privacy policies before giving your consent.

Browse in incognito or private mode:

If you don’t want your computer to save your browsing history, temporary internet files, or cookies, do your web surfing in private mode. Web browsers today offer their own versions of this form of privacy protection. In Chrome, it’s called Incognito Mode. Firefox calls its setting Private Browsing, and Internet Explorer uses the name InPrivate Browsing for its privacy feature. When you search with these modes turned on, others won’t be able to trace your browsing history from your computer.

But these private modes aren’t completely private. When you’re searching in incognito or private mode, your Internet Service Provider (ISP) can still see your browsing activity. If you are searching on a company computer, so can your employer. The websites you visit can also track you.

So, yes, incognito browsing does have certain benefits. But it’s far from the only tool available to help you maintain your privacy while online. Anonymous search engines and virtual private networks can bolster your online privacy.

_

-4. Keep Your System Up-to-Date:

Keep your system up to date to ensure that you don’t miss any feature and security fixes. If you find it a hassle to apply updates manually, you can always use tools to automate your software updates. Scan your system regularly, or better yet, keep automatic scanning turned on.

_

-5. Use Anti-Virus:

A strong anti-virus program will keep your device free from all types of malware, such as spyware, viruses, and Trojans. A good anti-virus will also alert you if it finds something wrong in your system. Using anti-virus software is essential because it gives you real-time protection and updates.

_

-6. Adjust Your Settings on Social Media:

Take advantage of the options that are available to you. Big Internet companies such as Facebook and Google usually give you options to opt out of some, if not all, of their personalization and tracking.

_

-7. Don’t use public storages for private information:

Don’t use online services that are meant for sharing information to store your private data. For example, Google Docs isn’t an ideal place to store a list of passwords, and Dropbox is not the best venue for your passport scans unless they are kept in an encrypted archive.

_

-8. Use messaging apps with end-to-end encryption:

Most modern messaging apps use encryption, but in many cases it is what they call encryption in transit — messages are decrypted on the provider’s side and stored on its servers. What if someone hacks those servers? Don’t take that risk: choose end-to-end encryption, so that even the messaging service provider can’t see your conversations.

  • Use a messaging app with end-to-end encryption — for example, WhatsApp;
  • Note that by default, Facebook Messenger, Telegram and Google Allo do not use end-to-end encryption. To enable it, manually start a secret chat.
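Here is a minimal sketch of the end-to-end idea in Python using the PyNaCl library (the key handling is deliberately simplified; production messengers layer far more on top, such as the Signal protocol’s double ratchet):

    from nacl.public import PrivateKey, Box

    # Each endpoint generates its own key pair; only public keys are exchanged.
    alice_private = PrivateKey.generate()
    bob_private = PrivateKey.generate()

    # Alice encrypts using her private key and Bob's public key.
    sender_box = Box(alice_private, bob_private.public_key)
    ciphertext = sender_box.encrypt(b"meet at noon")

    # The messaging server relays only ciphertext and holds no key to open it.
    # Bob decrypts using his private key and Alice's public key.
    receiver_box = Box(bob_private, alice_private.public_key)
    print(receiver_box.decrypt(ciphertext))   # b'meet at noon'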

_

-9. Use secure passwords:

Using weak passwords to protect your private information is as good as shouting that information to passers-by. It’s nearly impossible to memorize long and unique passwords for all the services you use, but with a password manager you can memorize just one master password.

  • Use long (12 characters and more) passwords everywhere;
  • Use a different password for each service;
  • Use a Password Manager to make using secure passwords easier.
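As a quick illustration of the first two points, the following Python sketch generates a long, random, per-service password using the standard library’s secrets module (the 16-character length and the character set are just reasonable defaults):

    import secrets
    import string

    ALPHABET = string.ascii_letters + string.digits + string.punctuation

    def generate_password(length: int = 16) -> str:
        """Cryptographically secure random password; use a different one per service."""
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    # One distinct password for each service, stored in your password manager.
    for service in ("email", "banking", "social media"):
        print(service, "->", generate_password())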

_

-10. Review permissions for mobile apps and browser extensions:

Mobile apps prompt you to give them permissions to access contacts or files in device storage, and to use the camera, microphone, geolocation, and so on. Some really cannot work without these permissions, but some use this information to profile you for marketing (and worse). Fortunately, it’s relatively easy to control which apps are given which permissions. The same stands for browser extensions, which also have unfortunate spying tendencies.

_

-11. Stay private on Wi-Fi networks:

Public Wi-Fi networks usually do not encrypt traffic, and that means anyone on the same network can try to snoop on your traffic. Avoid transmitting any sensitive data — logins, passwords, credit card data, and so forth — over public Wi-Fi, and use a VPN to encrypt your data and protect it from prying eyes.

Many people believe that it’s perfectly safe to use public Wi-Fi, as long as you don’t do anything “important” such as accessing online banking. Accessing online banking from a public Wi-Fi network is actually not especially problematic – as long as you correctly type in the https URL for your bank’s website.

Every unencrypted connection your device makes while using a public Wi-Fi network is easily observed. Security tokens, which are almost as useful to an attacker as your actual username and password, are frequently transmitted without any protection. Many websites use cookies in an insecure way that creates a risk when a device that was previously used on a “trustworthy” network is later used on a “dangerous” network. Connecting to a public Wi-Fi network without first doing a complete reset of your browser is probably a bad idea. During your use of a public Wi-Fi network, an adversary can put files into your browser’s cache that create a security problem even after you stop using that network.

Additionally, many devices will automatically re-connect to any previously used Wi-Fi network, unless you specifically tell the device not to. Apple iOS devices allow you to “forget” a network while you are connected to it, but once you are outside the range of the Wi-Fi hotspot, your only option is to reset all network settings – there is no simple way to view the list of networks to which your device will automatically connect. Public Wi-Fi hotspots are problematic because it’s easy for an adversary to set up a fake Wi-Fi network which looks the same to your device as the public hotspot you previously used. If the signal of the fake network is strong enough, your device may then automatically connect to it instead of the “trustworthy” network you expect.

Most devices will regularly transmit a unique identifier whenever the Wi-Fi radio is on (even if the device is in your pocket and not in use). Many devices will also transmit information about the networks to which they have recently connected. If “Yourname’s iPhone” is one of those networks, your name is broadcast. Furthermore, publicly available databases indicate the precise location of most Wi-Fi hotspots (which is how your laptop can locate itself on a map). Combine those databases with the data your phone transmits, and it’s not difficult for an attacker to figure out where you live, work, or otherwise spend your time.

In a nutshell:

Reset your web browser before and after using a public Wi-Fi network. Remove open networks from your device’s auto connect list. Consider turning off your device’s Wi-Fi radio when not in use.

_

-12. Use a different search engine:

If you’re like many web surfers, you rely heavily on Google as your search engine. But you don’t have to. Privacy is one reason people prefer to use anonymous search engines. This type of search engine doesn’t collect or share your search history or clicks. Anonymous search engines can also block ad trackers on the websites you visit. Switch to DuckDuckGo or Startpage as these search engines do not track or store any information about you, or place cookies on your machine. Keep this in mind, especially when you’re using the Tor network.

_

-13. Delete Cookies at Browser Exit:

You should delete cookies regularly, as they’re used by websites, advertisers, and other third parties to track you online. While you can clear your cookies manually, you’re better off configuring your browser to automatically delete them at the end of the browsing session. Once your cookies are deleted, use ad blockers and anti-tracking tools to avoid disclosing your browsing habits without consent. It is also a good idea to clear your browser cache, because doing so prevents you from using old forms, protects your personal information, and helps applications run better on your computer.

Can deleting cookies increase your privacy meaningfully?

Many believe that deleting their cookies will prevent their online activities from being tracked. But if cookies are deleted while cache, history, and other browser settings (“state”) are left behind, many of the cookies will simply be reconstructed. Even if you do a complete browser reset, it’s still pretty easy to fingerprint your device. The size of your screen, the fonts and plugins you have installed, subtle manufacturing inconsistencies, and various other properties of your device make it pretty easy to identify it as the unique snowflake that it is. Common devices that are hard to customize (such as iPhones) are a little harder to fingerprint, but not enough to actually protect you. Furthermore, some of the most interesting approaches to fingerprinting a device are trade secrets. The technology industry is a long way from building devices that don’t implicitly identify themselves every time you use them.

Even if you manage to shed your digital fingerprint, your new fingerprint will be linked to your old one the moment you do something on the internet which identifies you. This could be something as obvious as logging into a website with your email address. It’s also pretty likely that you are one of the only people in the world who visit a specific combination of seven different websites.
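As a rough illustration of why clearing cookies is not enough, the following Python sketch hashes a handful of device properties into a stable identifier (the attribute list is a simplified, made-up subset; real fingerprinting scripts combine dozens of signals such as canvas rendering, audio behaviour and installed fonts):

    import hashlib
    import json

    def device_fingerprint(properties: dict) -> str:
        """Deterministic hash of device properties; unaffected by cookie deletion."""
        canonical = json.dumps(properties, sort_keys=True)
        return hashlib.sha256(canonical.encode()).hexdigest()[:16]

    device = {
        "screen": "2560x1440",
        "timezone": "Asia/Kolkata",
        "fonts": ["Arial", "Georgia", "Noto Sans"],
        "plugins": ["pdf-viewer"],
        "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    }

    # Same device, same fingerprint, with or without cookies.
    print(device_fingerprint(device))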

If you went to the effort of deleting the cache on your device, you probably didn’t think about the fact your ISP may also operate another cache on your behalf – and that one doesn’t have a delete button under your control. If you ask them about it they’ll probably mumble something about “increasing network performance” or “improving battery life”, both of which are valid explanations.

Take home point:

Deleting your cookies will make you appear as a “new” visitor to less sophisticated tracking systems. But it will not meaningfully increase your privacy online, and the “delete cookies” button often leads to a false expectation of privacy that simply does not exist.

_

-14. Use the Tor Network to surf the Internet:

Use the Tor browser as it was built for enabling anonymous communication. It uses onion routing to not only prevent third-party trackers from following you around but also to mask your IP address from prying eyes. The idea is to prevent an observer from linking your activities on the web to you. Of course, that requires certain precautions on your end as well, such as:

-using pseudonyms to protect your actual identity,

-compartmentalizing your social media use, and/or

-not leaving trails of your real email addresses.

Orbot is Tor for Android. It packages the anonymizing features and functionality of Tor and brings them to the Android mobile operating system. The bad news? There’s no official Tor app for iOS due to interoperability issues. The good news? The Tor project has endorsed the open-source Onion Browser.

______

______

Differential privacy:

While there are many ways to provide privacy for people who share their data, differential privacy has recently emerged as a leading technique and is being rapidly adopted.

Imagine your local tourism committee wanted to find out the most popular places in your area. A simple solution would be to collect lists of all the locations you have visited from your mobile device, combine it with similar lists for everyone else in your area, and count how often each location was visited. While efficient, collecting people’s sensitive data in this way can have dire consequences. Even if the data is stripped of names, it may still be possible for a data analyst or a hacker to identify and stalk individuals.

Differential privacy can be used to protect everyone’s personal data while gleaning useful information from it. Differential privacy disguises individuals’ information by randomly changing the lists of places they have visited, possibly by removing some locations and adding others. These introduced errors make it virtually impossible to compare people’s information and use the process of elimination to determine someone’s identity. Importantly, these random changes are small enough to ensure that the summary statistics – in this case, the most popular places – are accurate.
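A minimal sketch of this idea in its “local” form, using classic randomized response on a single yes/no question such as “did you visit the town square?” (the honesty probability of 0.75 is an illustrative choice; the aggregate estimate corrects for the deliberately injected noise):

    import random

    def randomized_response(truth: bool, p_honest: float = 0.75) -> bool:
        """Report the truth with probability p_honest, otherwise answer at random."""
        if random.random() < p_honest:
            return truth
        return random.random() < 0.5

    def estimate_true_rate(reports, p_honest: float = 0.75) -> float:
        """Invert the known noise to recover the population-level proportion."""
        observed = sum(reports) / len(reports)
        return (observed - (1 - p_honest) * 0.5) / p_honest

    # Simulate 10,000 people, 30% of whom really visited the place.
    truths = [random.random() < 0.30 for _ in range(10_000)]
    reports = [randomized_response(t) for t in truths]
    print(round(estimate_true_rate(reports), 3))   # close to 0.30, yet no single
                                                   # report can be trusted on its own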

_

Differential privacy is a rigorous mathematical definition of privacy. In the simplest setting, consider an algorithm that analyzes a dataset and computes statistics about it (such as the data’s mean, variance, median, mode, etc.). Such an algorithm is said to be differentially private if by looking at the output, one cannot tell whether any individual’s data was included in the original dataset or not. In other words, the guarantee of a differentially private algorithm is that its behavior hardly changes when a single individual joins or leaves the dataset — anything the algorithm might output on a database containing some individual’s information is almost as likely to have come from a database without that individual’s information. Most notably, this guarantee holds for any individual and any dataset. Therefore, regardless of how eccentric any single individual’s details are, and regardless of the details of anyone else in the database, the guarantee of differential privacy still holds. This gives a formal guarantee that individual-level information about participants in the database is not leaked.
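In the standard formulation, a randomized algorithm M is ε-differentially private if, for every pair of datasets D and D′ that differ in the record of a single individual and for every set S of possible outputs, Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S]. The parameter ε, often called the privacy budget, controls how much the presence or absence of any one person is allowed to change the output distribution: the smaller ε is, the stronger the guarantee.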

_

Through extensive theoretical research, differential privacy shows promise in enabling research data to be shared in a wide variety of settings. The simplest and most well-studied scenario is statistical query release: a data owner can specify counting queries, such as “how many people in the database are male?” or “how many people in the database live in Massachusetts?” and receive answers perturbed by a small amount of random noise. Differentially private algorithms are able to answer a large number of such queries approximately, so that researchers seeing these approximate answers can draw roughly the same conclusions as if they had the data themselves.
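A minimal sketch of such query release using the Laplace mechanism, the textbook way to answer counting queries under ε-differential privacy (the toy dataset and the ε value are illustrative):

    import numpy as np

    def private_count(records, predicate, epsilon: float = 0.5) -> float:
        """Counting query answered with Laplace noise of scale 1/epsilon.

        A count changes by at most 1 when one person joins or leaves the
        dataset (sensitivity 1), so Laplace(1/epsilon) noise is enough for
        epsilon-differential privacy.
        """
        true_count = sum(1 for r in records if predicate(r))
        return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

    # Illustrative records: (sex, state) pairs.
    people = [("M", "MA"), ("F", "CA"), ("F", "MA"), ("M", "NY"), ("F", "MA")]

    print(private_count(people, lambda p: p[0] == "M"))    # noisy answer near 2
    print(private_count(people, lambda p: p[1] == "MA"))   # noisy answer near 3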

However, the reach of differential privacy extends far beyond the simple case of statistical queries. For instance, there are differentially private versions of algorithms in machine learning, game theory and economic mechanism design, statistical estimation, and streaming.

It is worth remarking that differential privacy works better on larger databases. This is because as the number of individuals in a database grows, the effect of any single individual on a given aggregate statistic diminishes.

_

In short, Differential Privacy permits:

— Companies to access large amounts of sensitive data for research and business purposes without breaching privacy.

— Research institutions to develop differential privacy technology that automates privacy processes within cloud-sharing communities across countries, thereby protecting users’ privacy and easing data sharing problems.

Differentially private techniques can strip data of their identifying characteristics so that they can’t be used by anyone — hackers, government agencies, and even the company that collects them — to compromise any user’s privacy. That’s important for anyone who cares about protecting the rights of at-risk users, whose privacy is vital for their safety. Ideally, differential privacy will enable companies to collect information, while reducing the risk that it will be accessed and used in a way that harms human rights.

_

In practice, differential privacy isn’t perfect. The randomization process must be calibrated carefully. Too much randomness will make the summary statistics inaccurate. Too little will leave people vulnerable to being identified. Also, if the randomization takes place after everyone’s unaltered data has been collected, as is common in some versions of differential privacy, hackers may still be able to get at the original data.

When differential privacy was developed in 2006, it was mostly regarded as a theoretically interesting tool. In 2014, Google became the first company to start publicly using differential privacy for data collection.

Since then, new systems using differential privacy have been deployed by Microsoft, Google and the U.S. Census Bureau. Apple uses it to power machine learning algorithms without needing to see your data, and Uber turned to it to make sure their internal data analysts can’t abuse their power. Differential privacy is often hailed as the solution to the online advertising industry’s privacy issues by allowing advertisers to learn how people respond to their ads without tracking individuals.

_______

_______

Section-22  

Privacy policy:

A privacy policy is a statement or legal document (in privacy law) that discloses some or all of the ways a party (a company or website) gathers, uses, discloses, and manages a customer’s, client’s or visitor’s data. It explicitly describes whether that information is kept confidential, or is shared with or sold to third parties. Personal information can be anything that can be used to identify an individual, including but not limited to the person’s name, address, date of birth, marital status, contact information, ID issue and expiry dates, financial records, credit information, medical history, travel history, and intentions to acquire goods and services. In the case of a business, it is often a statement that declares a party’s policy on how it collects, stores, and releases personal information it collects. It informs the client what specific information is collected, and whether it is kept confidential, shared with partners, or sold to other firms or enterprises. Privacy policies typically represent a broader, more generalized treatment, as opposed to data use statements, which tend to be more detailed and specific.

For example, an excerpt from Pinterest’s Privacy Policy agreement clearly describes the information Pinterest collects from its users as well as from any other source that users enable Pinterest to gather information from. The information that the user voluntarily gives includes names, photos, pins, likes, email address, and/or phone number etc., all of which is regarded as personal information. Additionally, Pinterest also states that it collects user location data from mobile devices, and if someone makes a purchase on Pinterest, payment and contact information – including an address and phone number – will be collected. If users buy products or services for others, Pinterest gathers their contact information and shipping details, too. Users may also give Pinterest permission to access information that is shared with other websites like Facebook and Twitter by linking their Pinterest account with them. This information would also include information about their friends and followers. The account settings have information about how much access Pinterest has to their users’ data.

In sum, a Privacy Policy is where you let your users know all about how you make sure their privacy is respected by your business practices.

The exact contents of a certain privacy policy will depend upon the applicable law and may need to address requirements across geographical boundaries and legal jurisdictions. Most countries have their own legislation and guidelines of who is covered, what information can be collected, and what it can be used for. In general, data protection laws in Europe cover the private sector, as well as the public sector. Their privacy laws apply not only to government operations but also to private enterprises and commercial transactions.

The Internet privacy requirements of the California Business and Professions Code (CalOPPA) mandate that websites collecting Personally Identifiable Information (PII) from California residents must conspicuously post their privacy policy.

_

Why you need a Privacy Policy:

Privacy is not a new concept. Humans have always desired privacy in their social as well as private lives. But the idea of privacy as a human right is a relatively modern phenomenon. Around the world, laws and regulations have been developed for the protection of data related to government, education, health, children, consumers, financial institutions, etc. This data is critical to the person it belongs to. Data privacy and security binds individuals and industries together and runs complex systems in our society. From credit card numbers and social security numbers to email addresses and phone numbers, our sensitive, personally identifiable information is important. This sort of information in unreliable hands can potentially have far-reaching consequences. Companies or websites that handle customer information are required to publish their Privacy Policies on their business websites. If you own a website, web app, mobile app or desktop app that collects or processes user data, you most certainly will have to post a Privacy Policy on your website (or give in-app access to the full Privacy Policy agreement).

There are several reasons for a website to post its Privacy Policy agreement on its website.

Here are some of the main reasons:

-1. Required by the law

-2. Required by third party services

-3. Users are interested in their privacy

-4. It’s ubiquitous

-5. Increases Transparency

_____

Privacy Policies in the European Union:

In the European Union, the Data Protection Directive oversees the processing of personal data and requires businesses operating from the European Union to post a Privacy Policy on their websites. In January of 2012, the European Commission unveiled a draft of the European General Data Protection Regulation (GDPR) that supersedes the original Data Protection Directive. Its main purpose is to strengthen and unify the processes involving data collected from individuals within the European Union. The GDPR became enforceable on May 25, 2018. The Organization for Economic Cooperation and Development (OECD) issued guidelines for protecting consumers’ personal data, which include notifying users when their data is being collected, collecting data only for the stated purpose, not disclosing the data without the user’s consent, and other ways to protect consumers.

____

Privacy Policies in the United Kingdom:

The Data Protection Act is a United Kingdom Act of Parliament designed to protect users’ personal data whether it’s stored on computers or paper filing systems. It follows closely in line with the European Union’s Data Protection Directive.

The Data Protection Act comprises eight data protection principles:

-Personal data is processed fairly and lawfully.

-It is only obtained for specified, lawful purposes.

-The data is adequate, relevant, and not excessive for the purpose it was collected.

-The data is accurate and up to date.

-The data will not be kept for longer than is necessary.

-Personal data is processed in compliance with the rights of the users.

-Appropriate measures are taken against unlawful data processing.

-The personal data cannot be transferred to a country outside the European Economic Area unless that country guarantees an adequate level of protection of personal data.

______

Privacy Policies required by Third Parties:

Privacy Policies aren’t only required by federal, state, or country law. In some cases, third-party services you use to enhance your site or provide performance/analytical data will also require you to post a Privacy Policy on your website for their protection. What this means is that if you’re not required by law to have a Privacy Policy published on your site, for example, because you’re not knowingly collecting any form of personal user data, third-party services may still require you to post one in order to use their services.

Google Analytics:

Google Analytics is a great example of a third-party website that requires you to have a Privacy Policy even if you’re running a simple website that doesn’t collect any personal data from users. In its Terms of Service agreement, it says you will need to have an appropriate Privacy Policy and abide by it.

Google Analytics Terms of Service Privacy clause:

It goes on to explain the different provisions your Policy should include, such as notifying visitors that you’re using cookies to collect data and that you’re using Google Analytics which collects and processes data on its own. You’re also required to provide clear information about how cookies and other information is stored and accessed on user devices in cases where the activity is related to the services offered by Google Analytics. Furthermore, your visitors must give consent to let you store and access these cookies.

Google AdSense:

In the Google AdSense Terms of Service agreement, Google states that you’re required to publish a clearly labelled and easily accessible Privacy Policy on your website at all times while using the AdSense services.

Google AdSense Terms of Service: Privacy clause updated for 2018:

The Privacy Policy should contain information about how your site and Google AdSense uses:

-Cookies

-Device-specific information

-Location information

-Information stored on, accessed on, or collected from user’s devices in relation to AdSense

In addition to this, Google also gives you the responsibility of making sure your visitors give consent to the storing and accessing of all of the above-mentioned data.

Apple App Store:

If your mobile app collects user data and you want to distribute the app in the Apple App Store, you’ll need to have a Privacy Policy. Apple’s App Store Review Guidelines contain a Data Collection and Storage clause that requires you to have a Privacy Policy.

Google Play:

The Google Play Developer Distribution Agreement requires you to protect the privacy and legal rights of users if you use Google Play to publish your app. This means that you’re required to post a legally binding Privacy Policy that informs users of the information you’re collecting and that protects their personal data. It also states that your app can only use the collected information for the purposes you stated at the time of securing the user’s consent. As well, if you’re storing any of the information that you collect through your app, you must store it securely and only for as long as you need it.

If your website/mobile app collects personal information from users, you need to be aware of:

-Privacy laws and Privacy Policy requirements in your jurisdiction and others where you operate

-Privacy Policy requirements of third-party services your website/app uses

-Privacy Policy requirements of any app store you use to distribute your app

With all of these principles in mind, you should be ready to review your current Privacy Policy for needed updates, or create your first Privacy Policy for your website or mobile app.

_______

_______

Criticism of privacy policy:

Many critics have attacked the efficacy and legitimacy of privacy policies found on the Internet. Concerns exist about the effectiveness of industry-regulated privacy policies. For example, a 2000 FTC report Privacy Online: Fair Information Practices in the Electronic Marketplace found that while the vast majority of websites surveyed had some manner of privacy disclosure, most did not meet the standard set in the FTC Principles. In addition, many organizations reserve the express right to unilaterally change the terms of their policies. In June 2009 the EFF website TOSback began tracking such changes on 56 popular internet services, including monitoring the privacy policies of Amazon, Google and Facebook.

There are also questions about whether consumers understand privacy policies and whether they help consumers make more informed decisions. A 2002 report from the Stanford Persuasive Technology Lab contended that a website’s visual designs had more influence than the website’s privacy policy when consumers assessed the website’s credibility. A 2007 study by Carnegie Mellon University claimed “when not presented with prominent privacy information…” consumers were “…likely to make purchases from the vendor with the lowest price, regardless of that site’s privacy policies”. However, the same study also showed that when information about privacy practices is clearly presented, consumers prefer retailers who better protect their privacy and some are willing to “pay a premium to purchase from more privacy protective websites”. Furthermore, a 2007 study at the University of California, Berkeley found that “75% of consumers think as long as a site has a privacy policy it means it won’t share data with third parties,” confusing the existence of a privacy policy with extensive privacy protection.

Privacy policies suffer generally from a lack of precision, especially when compared with the emerging form of the Data Use Statement. Where privacy statements provide a more general overview of data collection and use, data use statements represent a much more specific treatment. As a result, privacy policies may not meet the increased demand for transparency that data use statements provide.

Critics also question whether consumers even read privacy policies or can understand what they read. A 2001 study by the Privacy Leadership Initiative claimed only 3% of consumers read privacy policies carefully, and 64% briefly glanced at, or never read, privacy policies. The average website user, having read a privacy statement, may be more uncertain about the trustworthiness of the website than before. One possible issue is the length and complexity of policies. According to a 2008 Carnegie Mellon study, the average length of a privacy policy is 2,500 words and requires an average of 10 minutes to read. The study found that “privacy policies are hard to read” and, as a result, “read infrequently”. However, efforts to make the information more presentable tend to simplify it to the point that it no longer conveys the extent to which users’ data is being shared and sold. This is known as the ‘transparency paradox.’

It is also believed that for proper privacy to be offered by service providers, it is not enough to force transparency through regulation, but it is also essential to have viable alternatives, so that the Internet services market (such as that of social networks) can operate like a free market where choices can be made by consumers.

______

______

Section-23

Privacy as a paid service:  

Some major services, such as Google and Facebook, still don’t charge users, which is something Pierre Valade, CEO of the privacy-focused startup Jumbo, sees as its own kind of toll. “People have learned, unfortunately the hard way, that if you’re free, they’re the product,” he said. So when Valade set out to create an app that would be a privacy assistant for everyday users, he knew he’d have to charge a monthly fee to make it work. Jumbo quietly began charging $3 or more per month for its privacy manager, a previously free smartphone app that lets people go to one place to manage their privacy settings for Facebook, Google, Amazon and more. The idea of paying for privacy is a sign that the concept of privacy itself is evolving — and emerging as a big business.

Understanding and managing one’s digital footprint is now so complicated that corporations and individuals can’t do it very well by themselves. Some people are seeking a middleman between tech companies and themselves, and now there’s a monthly price tag. Privacy as a service, however, is something that doesn’t sit well with privacy advocates and civil rights experts, who worry that freedom from corporate or government surveillance could soon become another sharp contrast between the haves and the have-nots.

The Jumbo app is a kind of dashboard for other accounts from common online services. Users log into those accounts through Jumbo, which scans the accounts to see what the current privacy settings are and makes recommendations. Wondering what information Google is keeping? Jumbo walks through the data step by step, giving users the option to delete their histories or leave the data where it is on Google’s servers. And Jumbo says it doesn’t store any of the data itself. The attraction is convenience, but it’s also an education in how much data the tech companies collect and store. “On Facebook, we look at more than 40 settings. That would take you an hour, and you’re probably going to miss some,” Valade said.

Putting a price tag on privacy — or just a more convenient form of it — may be unsettling to people who think the onus should be on prying tech companies, not on individual consumers, to protect privacy. “If you think of privacy as a right, you shouldn’t have to pay for it,” said Justin Brookman, director of consumer privacy and technology policy for the nonprofit Consumer Reports. “The law should be protecting us, and people shouldn’t have to worry about it.”

It’s not clear how big tech companies like Google, Facebook or Amazon will respond to the growth of Jumbo and what will likely be other subscription services like it.

_____

Paying for privacy:

Growing demands for privacy and increases in the quantity and variety of consumer data have engendered various business offerings to allow companies, and in some instances consumers, to capitalize on these developments. One such example is the emerging “personal data economy” (PDE) in which companies, such as Datacoup, purchase data directly from individuals. At the opposite end of the spectrum, the “pay-for-privacy” (PFP) model requires consumers to pay an additional fee to prevent their data from being collected and mined for advertising purposes.

______

Data Privacy as a Service:

Privacy as a Service (PaaS or DPaaS) is a form of software as a service (SaaS) platform where disclosure notices, consent management and compliance software are combined to offer businesses a comprehensive managed privacy service to improve transparency and user control regarding data privacy. For a complete solution that combines software with privacy professionals, a privacy team can be included to operate as an external Data Protection Officer and guide the organization through impact assessments and introducing privacy by design into the workflow.

Organizations are collecting an increasing amount of data in order to facilitate product improvements and improved marketing. Consumers are willing to share data but businesses may have taken their data collection, usage and sharing too far. As users begin demanding more transparency into organization’s privacy practices, businesses may need to turn to external vendors in order to overhaul their privacy systems. This sort of Data Privacy as a Service will likely be a popular offering in the future as businesses try to maintain or rebuild customer trust from data breaches and controversial data sharing or usage.

______

______

Section-24 

Privacy and intellectual property:

Some economists and privacy advocates have proposed giving individuals property rights in their personal data to promote information privacy in cyberspace. A property rights approach would allow individuals to negotiate with firms about the uses to which they are willing to have personal data put and would force businesses to internalize a higher proportion of the societal costs of personal data processing. However, granting individuals property rights in personal information is unlikely to achieve information privacy goals in part because a key mechanism of property law, namely the general policy favouring free alienability of such rights, would more likely defeat than achieve information privacy goals.

_

A pressing concern today is whether the rationale underlying the protection of personal data is itself a meaningful foundation for according intellectual property (IP) rights in personal data to data subjects. In particular, are there technological attributes of the collection, use and processing of personal data on the Internet, and of global access to that data, that provide a strong justification for extending IP rights to data subjects? A central issue is whether data subjects need the protection of such rights in a technological revolution in which they are increasingly exposed to the use and abuse of their personal data. A further question is how IP law can provide them with the requisite protection of their private space, or whether other means of protecting personal data, such as general contract rights, render IP protections redundant or at least less necessary. Lawmakers often fail to distinguish between general property and IP protection of personal data; yet IP protection encompasses important attributes of both property and contract law, and laws that implement IP protection in light of its sui generis attributes are a more fitting means of protecting personal data than the alternatives. Providing IP rights in personal data would go some way toward strengthening data subjects’ control over, and protection of, their personal data, and toward strengthening data protection law more generally. This also supports greater harmonization of IP law across jurisdictions to ensure that the protection of personal data becomes more coherent and internationally sustainable.

_

The linkages between intellectual property and privacy, in its broadest connotation, have existed for centuries. While some of them have been codified and recognized through judgments, others may be implied and/or read into the law by extension that does not amount to extrapolation. Some examples of privacy in the intellectual property include private, secret information that qualifies as a trade secret, publicity and celebrity rights, expressions of and including persons, moral rights, performer’s rights, inventorship, and invention information, and pre-public product representations.

In today’s social media context, any person having a profile on Facebook may be considered a celebrity with publicity rights, which means that the right to control the commercial use of one’s online persona is available to one and all. Information shared online by a person, from photographs and videos to updates and comments, is subject matter for protection as copyrights, trade marks, and trade secrets, and most of these rights are transferred on social media platforms and online forums through electronic contracts in the form of terms of service, etc. Facebook, Twitter, LinkedIn, and other social media platforms take non-exclusive rights with respect to the said intellectual property, and are currently free to use the same for both non-commercial and/or commercial purposes.

Expressions in the form of literary, musical, and other works are an extension of the person expressing them, and privacy rights extend to the expression as well. If privacy rights add an extra layer of protection to copyrights and other rights over expressions, moral rights will be positively impacted, but the exercise of rights granted by copyright law, and fair use, may be limited or impacted detrimentally. An author may be able to control how a work may or may not be used, even after his rights are exhausted or transferred, on privacy grounds. For example, if my expression is an element of my privacy, and I transfer copyrights in such expression, which is made in the form of a book, and copies of the book are sold by my publisher, can I control the distribution of my book using my privacy rights? Would it be different if I had shared the book in confidence, and the publisher releases it without my permission? Will I have an extra tool to take action in the form of a violation of privacy rights? On its face, such an extension cannot be ruled out.

Even patent law may not be free from privacy incursions. Inventions are extensions of an inventor’s personality, and all data/information with respect to inventions and their transfer/disclosure may be susceptible to control under the aegis of the right to privacy. Information about patents and their exploitation may also be subject to the extensions of the right of privacy. Trade secret law is a creature of common law, and privacy rights may prove to be handy in an action against misappropriation of trade secrets.

______

______

Section-25

Is privacy dead?

Protecting privacy is a losing game today. More and more data about each of us is being generated faster and faster from more and more devices, and we can’t keep up. It’s a losing game both for individuals and for our legal system. If we don’t change the rules of the game soon, it will turn into a losing game for our economy and society. Snowden, Equifax, and Cambridge Analytica provide three conspicuous reasons to take action, but there are really quintillions of reasons. That’s how fast IBM estimates we are generating digital information: quintillions of bytes of data every day (a quintillion is a 1 followed by 18 zeros). This explosion is generated by the doubling of computer processing power every 18-24 months that has driven growth in information technology throughout the computer age, now compounded by the billions of devices that collect and transmit data, storage devices and data centers that make it cheaper and easier to keep the data from these devices, greater bandwidth to move that data faster, and more powerful and sophisticated software to extract information from this mass of data. All this is both enabled and magnified by the singularity of network effects—the value that is added by being connected to others in a network—in ways we are still learning. This information Big Bang is doubling the volume of digital information in the world every two years. The data explosion that has put privacy and security in the spotlight will accelerate. Futurists and business forecasters debate just how many tens of billions of devices will be connected in the coming decades, but the order of magnitude is unmistakable—and staggering in its impact on the quantity and speed of bits of information moving around the globe.

_

Most recent proposals for privacy legislation aim at slices of the issues this explosion presents. The Equifax breach produced legislation aimed at data brokers. Responses to the role of Facebook and Twitter in public debate have focused on political ad disclosure, what to do about bots, or limits to online tracking for ads. Most state legislation has targeted specific topics like use of data from ed-tech products, access to social media accounts by employers, and privacy protections from drones and license-plate readers. Facebook’s simplification and expansion of its privacy controls and recent federal privacy bills in reaction to events focus on increasing transparency and consumer choice. So do the European GDPR and the newly enacted California Consumer Privacy Act. Measures like these double down on the existing privacy regime. The trouble is, this system cannot keep pace with the explosion of digital information, and the pervasiveness of this information has undermined key premises of these laws in ways that are increasingly glaring. Our current laws were designed to address collection and storage of structured data by government, business, and other organizations and are busting at the seams in a world where we are all connected and constantly sharing. It is time for a more comprehensive and ambitious approach. We need to think bigger, or we will continue to play a losing game.

_

Our existing laws developed as a series of responses to specific concerns, a checkerboard of federal and state laws, common law jurisprudence, and public and private enforcement that has built up over more than a century. It began with the famous Harvard Law Review article by (later) Justice Louis Brandeis and his law partner Samuel Warren in 1890 that provided a foundation for case law and state statutes for much of the 20th Century, much of which addressed the impact of mass media on individuals who wanted, as Warren and Brandeis put it, “to be let alone.” The advent of mainframe computers saw the first data privacy laws adopted in the 1970s to address the power of information in the hands of big institutions like banks and government: the federal Fair Credit Reporting Act that gives us access to information on credit reports and the Privacy Act that governs federal agencies. Today, our checkerboard of privacy and data security laws covers data that concerns people the most. These include health data, genetic information, student records and information pertaining to children in general, financial information, and electronic communications (with differing rules for telecommunications carriers, cable providers, and emails).

_

As the data universe keeps expanding, more and more of it falls outside the various specific laws on the books. This includes most of the data we generate through such widespread uses as web searches, social media, e-commerce, and smartphone apps. The changes come faster than legislation or regulatory rules can adapt, and they erase the sectoral boundaries that have defined our privacy laws. Take a smartwatch, for one example: the data it generates about heart rate and activity is covered by the Health Insurance Portability and Accountability Act (HIPAA) if it is shared with your doctor, but not when it goes to fitness apps like Strava (to compare performance with peers). Either way, it is the same data, just as sensitive to the individual and just as much of a risk in the wrong hands. It makes little sense that protection of data should depend entirely on who happens to hold it. This arbitrariness will spread as more and more connected devices are embedded in everything from clothing to cars to home appliances to street furniture. Add to that striking changes in patterns of business integration and innovation—traditional telephone providers like Verizon and AT&T are entering entertainment, while startups launch into the provinces of financial institutions like currency trading and credit and all kinds of enterprises compete for space in the autonomous vehicle ecosystem—and the sectoral boundaries that have defined privacy protection cease to make any sense.

_

Putting so much data into so many hands also is changing the nature of information that is protected as private. To most people, “personal information” means information like social security numbers, account numbers, and other information that is unique to them. This concept aims at “personally identifiable information,” but data scientists have repeatedly demonstrated that this focus can be too narrow. The aggregation and correlation of data from various sources make it increasingly possible to link supposedly anonymous information to specific individuals and to infer characteristics and information about them. The result is that today, a widening range of data has the potential to be personal information, i.e. to identify us uniquely. Few laws or regulations address this new reality.

_

Nowadays, almost every aspect of our lives is in the hands of some third party somewhere. This challenges judgments about “expectations of privacy” that have been a major premise for defining the scope of privacy protection. These judgments present binary choices: if private information is somehow public or in the hands of a third party, people often are deemed to have no expectation of privacy. This is particularly true when it comes to government access to information—emails, for example, are nominally less protected under laws once they have been stored 180 days or more, and articles and activities in plain sight are considered categorically available to government authorities. But the concept also gets applied to commercial data in terms and conditions of service and to scraping of information on public websites.

_

As more devices and sensors are deployed in the environments we pass through as we carry on our days, privacy will become impossible if we are deemed to have surrendered our privacy simply by going about the world or sharing information with any other person. Plenty of people have said privacy is dead, starting most famously with Sun Microsystems’ Scott McNealy back in the 20th century (“you have zero privacy … get over it”) and echoed by a chorus of despairing writers since then. Without normative rules to provide a more constant anchor than shifting expectations, true privacy actually could be dead or dying.

_

Constant streams of data about us change the ways that privacy should be protected. Our existing laws rely heavily on notice and consent—the privacy notices and privacy policies that we encounter online or receive from credit card companies and medical providers, and the boxes we check or forms we sign. These declarations are what provide the basis for the FTC to find deceptive acts and practices when companies fail to do what they said. This system follows the model of informed consent in medical care and human subject research, where consent is often asked for in person, and was imported into internet privacy in the 1990s. Maybe informed consent was practical two decades ago, but it is a fantasy today. In a constant stream of online interactions, especially on the small screens that now account for the majority of usage, it is unrealistic to read through privacy policies. And people simply don’t. At the end of the day, it is simply too much to read through even the plainest English privacy notice, and being familiar with the terms and conditions or privacy settings for all the services we use is out of the question. Moreover, individual choice becomes utterly meaningless as increasingly automated data collection leaves no opportunity for any real notice, much less individual consent. We don’t get asked for consent to the terms of surveillance cameras on the streets or “beacons” in stores that pick up cell phone identifiers. At best, a sign may be posted somewhere announcing that these devices are in place. As devices and sensors increasingly are deployed throughout the environments we pass through, some after-the-fact access and control can play a role, but old-fashioned notice and choice become impossible.

_

We know very little about how the businesses that collect our data operate. There is no practical way even a reasonably sophisticated person can get their arms around the data that they generate and what that data says about them. After all, making sense of the expanding data universe is what data scientists do. How can the rest of us, who are far from being data scientists, hope to keep up? As a result, the businesses that use the data know far more than we do about what our data consists of and what their algorithms say about us. Add this vast gulf in knowledge and power to the absence of any real give-and-take in our constant exchanges of information, and you have businesses able by and large to set the terms on which they collect and share this data.

_

The fundamental need for baseline privacy legislation is to ensure that individuals can trust that data about them will be used, stored, and shared in ways that are consistent with their interests and the circumstances in which it was collected. This should hold regardless of how the data is collected, who receives it, or the uses it is put to. If it is personal data, it should have enduring protection. Such trust is an essential building block of a sustainable digital world. It is what enables the sharing of data for socially or economically beneficial uses without putting human beings at risk. But trust is betrayed too often, whether by intentional actors like Cambridge Analytica or Russian “Fancy Bears,” or by bros in cubes inculcated with an imperative to “deploy or die.” Trust needs a stronger foundation that provides people with consistent assurance that data about them will be handled fairly and consistently with their interests.

_

The conventional wisdom is that the easiest way to stop social media companies like Facebook and Twitter from tracking and profiling you is simply by deleting your social media accounts. That, for example, was the basis for the #DeleteFacebook movement that gained momentum around the time of the Facebook Cambridge Analytica scandal in early 2018. But now a new study by researchers at the University of Adelaide in Australia and the University of Vermont in the United States suggests that even deleting your social media accounts might not be enough to protect your social media privacy. This research study, which was published in the journal Nature Human Behaviour, analysed 30.8 million Twitter messages from 13,905 Twitter accounts to see whether it might be possible to profile an individual simply by examining the profiles and interactions with his or her friends. From a social media privacy perspective, the study turned up some very concerning results. It turns out that the science research team didn’t even need 15 accounts to figure out a person’s profile. All they needed was tweets from 8-9 accounts (i.e. the “friends” of the user), and they could start to create some startlingly accurate profiles. For example, machine learning algorithms could start to predict factors such as “political affiliation” or “leisure interests” simply by studying the tweets of someone’s friends. Often, they were able to do this with up to 95 percent accuracy. In many ways, the study is an affirmation of the adage, “Tell me who your friends are, and I’ll tell you who you are.” Every day, say the researchers, your friends are leaving tell-tale clues about you, what you like, and even how you are likely to vote in any election. Thus, even if you decide to delete your social media account, your profile is still “encoded” in previous interactions with your friends. You can think of your friends as creating a “mirror image” of yourself – all a company or government entity needs to do is figure out who a person’s friends are, and it’s possible to predict how a person will act or behave.

This obviously has social media privacy implications. In a base case scenario, a clever brand would be able to craft marketing messages customized for you, simply by analyzing the people in your network. Search engines would be able to deliver search results geared to specific people based on what their friends are saying. And, in an even scarier worst-case scenario, an authoritarian government might be able to crack down on a group of political dissidents very quickly simply by putting a few machine learning algorithms to work. Even people suspected of having certain thoughts might be rounded up, solely on the basis of Internet users in their network.

And there’s another element to the research study on social media privacy that is perhaps more subtle, and that is the fact that social media privacy is not necessarily an individual choice. Friends are sharing personal information about you, even if you are doing everything possible to protect your social media privacy (even to the extent of deleting your Facebook account or restricting access to personal data in other ways). This would seem to fly in the face of conventional wisdom about online activity and how data is collected. This conventional wisdom suggests that each individual is in control of his/her social media privacy. All it takes is checking a few boxes, the thinking goes, and you can immediately move from “weak” social media privacy to “strong” social media privacy. But this doesn’t seem to be the case. And it’s also particularly troubling for social media privacy advocates that some of the biggest tech companies, including Facebook, appear to be collecting “shadow profiles” of non-users. What this means is that Facebook is not only collecting data on its own users (which most people realize), but also that it is creating profiles of non-users simply by capturing all the ambient data that flows through the social network on a daily basis. For example, if you tag a photo of your grandmother on Facebook, and your grandmother is not on Facebook yet, is Facebook able to start assembling a “shadow profile” of your grandmother without her realizing it? Information is collected on social media sites in ways that might not be obvious to social media users.

There is no place to hide on social networking platforms. Your behavior is now predictable from the social media data of just 8-9 of your friends. Even when you have deleted your accounts, you can still be profiled based on personal information online derived from your friends’ posts. Protecting social media privacy may be difficult, as this research suggests that even deleting your accounts might not be enough.
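
As a rough, hypothetical illustration (not the study’s actual method or data), here is a minimal Python sketch of how a simple text classifier could infer a person’s political leaning from nothing but their friends’ posts; every post, label and prediction below is invented:

# Hypothetical sketch, not the study's actual code: inferring a user's
# political leaning from text posted by their friends, using a simple
# bag-of-words classifier. All posts and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Training data: concatenated recent posts of each user's friends,
# labelled with that user's known political affiliation.
friends_posts = [
    "tax cuts small government border security",
    "climate action healthcare for all union rights",
    "second amendment family values lower taxes",
    "green energy social justice public schools",
]
labels = ["right", "left", "right", "left"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(friends_posts, labels)

# The target user may have deleted their own account; only their
# friends' posts are needed to make a prediction about them.
target = "public healthcare climate march voter registration"
print(model.predict([target])[0])   # likely "left" on this toy data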

_

There is a new report about the future of privacy from Pew Research Center, which collected the opinions of more than 2,500 experts in computer programming, engineering, publishing, data science, and related fields.

Some respondents told Pew they are confident that policymakers will, in the next decade, establish privacy rights that protect individuals from government and corporate surveillance. But many others are pessimistic about the possibility that such a framework might come about in the next 10 years—or ever.

Experts agreed, though, that our expectations about personal privacy are changing dramatically. While privacy once generally meant, “I assume no one is looking,” as one respondent put it, the public is beginning to accept the opposite: that someone usually is. And whether or not people accept it, that new normal—public life and mass surveillance as a default—will become a component of the ever-widening socioeconomic divide. Privacy as we know it today will become a luxury commodity. Opting out will be for the rich. To some extent that’s already true. Consider the supermarkets that require you to fill out an application—including your name, address, phone number, and so on—in order to get a rewards card that unlocks coupons. In the next 10 years, we will see the development of more encryption technologies and boutique services for people prepared to pay a premium for greater control over their data. This is the creation of privacy as a luxury good. It also has the unfortunate effect of establishing a new divide: the privacy rich and the privacy poor. Whether genuine control over your information will be extended to the majority of people—and for free—seems very unlikely, without a much stronger policy commitment.

And there’s little incentive for the entities that benefit from a breakdown in privacy to change the way they operate. In order to get more robust privacy protections—like terms of service agreements that are actually readable to non-lawyers, or rules that let people review the personal information that data brokers collect about them—many experts agree that individuals will have to demand them. But even that may not work. Where there’s tension between convenience and privacy, individuals are already primed to give up their right to be left alone. For instance, consider the Facebook user who feels uneasy about the site’s interest in her personal data but determines quitting isn’t an option because she’d be giving up the easiest way to stay in touch with friends and family.

_

Our mentality is changing the way we think about our privacy rights in the first place. “By 2025, many of the issues, behaviors, and information we consider to be private today will not be so,” said Homero Gil de Zuniga, director of the Digital Media Research Program at the University of Texas-Austin, in the Pew report. “Information will be even more pervasive, even more liquid, and portable. The digital private sphere, as well as the digital public sphere, will most likely completely overlap.” In other words, the conveniences of the modern world will likely dictate privacy norms. This is already happening all around us. As the media critic Mark Andrejevic points out to Pew, many people today treat email as though it’s equivalent to a private face-to-face conversation. It is not. “We will continue to act as if we have what we once called ‘privacy,’” Andrejevic told Pew, “but we will know, on some level, that much of what we do is recorded, captured, and retrievable, and even further, that this information will provide comprehensive clues about aspects of our lives that we imagined to be somehow exempt from data collection.” “We are embarked, irreversibly, I suspect, upon a trajectory toward a world in which those spaces, times, and spheres of activity free from data collection and monitoring will, for all practical purposes, disappear.”

_______

_______

Moral of the story:  

_

  1. Privacy is essential to who we are as human beings, and we make decisions about it every single day. It gives us a space to be ourselves without judgement, allows us to think freely without discrimination, and is an important element of giving us control over who knows what about us. Privacy is important — perhaps more so than ever due to the assaults by con artists, corporates, governments, the press, and social media in the 21st century. Privacy can be defined as the right of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others.

_

  1. The right to privacy is a basic human right, a fundamental right. The right to privacy is our right to keep a domain around us, which includes all those things that are part of us, such as our body, home, property, thoughts, feelings, secrets, and identity. The right to privacy gives us the ability to choose which parts of this domain can be accessed by others and to control the extent, manner, and timing of the use of those parts we choose to disclose. Privacy essentially limits access to domains related to us. The right to privacy is held on a high pedestal because privacy helps to create barriers and manage boundaries to defend ourselves from unwarranted interference with our personal lives and allows us to negotiate who we are and how we desire to engage with the outside world.

_

  1. Human beings value their privacy and the protection of their personal sphere of life. As technology has advanced, the way in which privacy is protected and violated has changed with it. We are living in a world where personal data is collected, created, used, processed, analysed, shared, transferred, copied, and stored in unprecedented ways and at an extraordinary speed and volume – without our consent! The technical capabilities to collect, store and search large quantities of data concerning telephone conversations, internet searches and electronic payments are now in place and are routinely used by government agencies and corporate actors alike. Therefore, privacy is challenged and eroded by digital technology on a daily basis.

  1. Privacy and the Internet have a complex relationship. On the one hand, technology has enhanced privacy by offering more accessible means to communicate and access information, and advances in encryption have made many online transactions and interactions increasingly secure, with users enjoying greater protection of their messages from prying eyes. On the other hand, new and varied threats to privacy have emerged with the growth of the digital universe, including government surveillance; data collection by internet giants, online retailers, search engines and email providers that track users’ behaviour and collate and sell information to advertisers and marketers; not to mention identity thieves, cybercriminals and hackers exploiting vulnerabilities in online banking and e-commerce platforms for financial gain.

_

  1. While the right to privacy is now well-established in international law, understandings of privacy have continued to differ significantly across cultures, societies, ethnic traditions and time. What privacy means varies between Europe and the US, between libertarians and public figures, between the developed world and developing countries, between women and men. The concept of privacy is also context specific, and acquires a different meaning depending on the stated reasons for the information being gathered, the intentions of the parties involved, as well as the politics, convention and cultural expectations. Although the claim to privacy is universal, its concrete form differs according to the prevailing societal characteristics and the economic and cultural environment.

_

  1. Different authors have classified privacy differently. One way to classify privacy is privacy of space, body, information and choice.

_

  1. Personally identifiable information (PII) is any data that can be used to identify a specific individual, for example a mailing or email address or phone numbers. Technology has rendered the conventional definition of personally identifiable information obsolete. You can find out who an individual is without PII by using powerful data mining, which relies on sophisticated statistical correlations and social signatures. From information shared on social media sites, to cookies collecting user browser history, to individuals transacting online, to mobile phones registering location data – information about an individual is generated through each use of the internet, and seemingly harmless data can now be combined and analysed to identify individuals and learn personal information about them.
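
A minimal sketch, using entirely fictional records, of how such combination works in practice: joining a “de-identified” dataset to a public one on quasi-identifiers such as ZIP code, birth date and sex is often enough to put names back on the records (the code assumes both tables happen to share those three columns):

# Fictional illustration of re-identification by linking datasets on
# quasi-identifiers (ZIP code, birth date, sex). No real records are used.
import pandas as pd

health = pd.DataFrame({            # "anonymised": names removed
    "zip": ["02139", "02139", "10001"],
    "birth_date": ["1965-07-31", "1980-01-02", "1990-05-05"],
    "sex": ["F", "M", "F"],
    "diagnosis": ["diabetes", "asthma", "hypertension"],
})
voters = pd.DataFrame({            # public list that includes names
    "name": ["Alice Smith", "Bob Jones", "Carol Lee"],
    "zip": ["02139", "02139", "10001"],
    "birth_date": ["1965-07-31", "1980-01-02", "1990-05-05"],
    "sex": ["F", "M", "F"],
})

# Joining on the quasi-identifiers re-attaches names to diagnoses.
reidentified = health.merge(voters, on=["zip", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])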

_

  1. Anonymity refers exclusively to the matters related to the identity. Anonymity ensures that a user may use a resource or service without disclosing the user’s identity. Anonymity is when nobody knows who you are but potentially they know what you are doing. Privacy is when nobody is aware of what you are doing but potentially they know your identity. Anonymity is a technique one uses to enhance privacy.

_

  1. Privacy concerns people, whereas confidentiality concerns data. Privacy restricts the public from accessing the personal details about a person, whereas confidentiality protects the information from the range of unauthorized persons. In privacy, everyone is disallowed from interfering in the personal matters of a person. Conversely, in confidentiality some specified and trustworthy people are allowed to have access to the information.

_

  1. Secrecy is keeping to yourself something that society does not permit you to withhold, whereas privacy is keeping to yourself something that society accepts you may withhold. Privacy is about exercising the choice to withhold information which others have no need to know. Secrecy, on the other hand, is about withholding information that people may have a right to know. What you do in the bathroom is private. It’s not something you’d like other people to sit and watch. But it’s usually not a secret. Everyone has a pretty good idea of what goes on when you go into the bathroom and close the door, even if they can’t see you do it. However, if you went into the bathroom and did some illicit drugs that your friends wouldn’t approve of, that would probably be a secret.

_

  1. Privacy and security are related. Privacy relates to any rights you have to control your personal information and how it’s used. Security, on the other hand, refers to how your personal information is protected. Data privacy governs how data is collected, shared and used. Data security protects data from compromise by external attackers and malicious insiders. A company might have strict privacy regulations, but if they don’t have robust security in place, your data can be easily stolen by hackers. If they have strong security, but lenient privacy policies, then your data might be guarded against hackers, but that doesn’t guarantee that your data won’t be shared with third parties or abused by the company itself. No security measure can prevent invasion of privacy by those who have authority to access the record. While security and privacy are interdependent, security can be achieved without privacy but privacy cannot be achieved without security. It is impossible to implement a successful privacy program without the support of a security program. Data security can exist without data privacy, but not the other way around.

_

  1. Encryption converts electronic data into a form that is unreadable by anyone for whom the information is not intended. The use of encryption is on the rise. More hardware manufacturers offer device encryption, more messaging applications have introduced end-to-end encryption, and more websites now facilitate transport encryption. Encryption can secure communications, web browsing and online transactions against outside monitoring and interference in ways that protect privacy rights, but it can also frustrate legitimate government surveillance and capture of cybercriminals.
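
For readers who want to see the basic idea in code, here is a minimal Python sketch using the widely available cryptography package; it shows only symmetric encryption with a shared key, whereas real end-to-end encryption also involves key exchange and authentication:

# Minimal sketch of symmetric encryption using the "cryptography" package
# (pip install cryptography). Illustrative only, not a full messaging protocol.
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # secret key known only to the endpoints
f = Fernet(key)

token = f.encrypt(b"meet me at 6 pm")  # unreadable to anyone without the key
print(token)

print(f.decrypt(token))                # only a key holder can recover the message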

When end-to-end encryption is applied with no access to content to protect privacy, it severely undermines the ability of companies to take action against illegal activity on their own platforms. It also prevents law enforcement from investigating and prosecuting the most serious crimes committed on these services, such as online child sexual abuse, grooming and terrorism. The debate between privacy and security has been framed incorrectly as a zero-sum game in which we are forced to choose between one value and the other. But protecting privacy isn’t fatal to security measures; it merely involves adequate supervision and regulation. Both privacy and security are essential interests, and the balance we strike between them affects the very foundations of our freedom and democracy.

Technology is giving the government unprecedented tools for watching people and amassing information about them—video surveillance, location tracking, data mining, wiretapping, bugging, thermal sensors, spy satellites, X-ray devices, and more. The privacy-security debate profoundly influences how these government activities are regulated. The government can use these massive breaches of privacy for the public good, for example to prevent terrorism, but the government can also use them to target opposition and government critics, or use them to maintain the status quo and support the establishment.

_

  1. Data privacy describes the practices which ensure that the data shared by customers is only used for its intended purpose. Data privacy (information privacy) is also known as data protection. Data privacy is a subset of privacy. This is because protecting user data and sensitive information is a first step to keeping user data private. Data privacy is challenging since it attempts to use data while protecting an individual’s privacy preferences and personally identifiable information. Data privacy includes how data should be collected, stored, and shared with any third parties, as well as compliance with the applicable privacy laws (such as CCPA or GDPR). Internet privacy is a subset of data privacy. Internet privacy is also known as online privacy. Internet privacy refers to the vast range of technologies, protocols and concepts related to giving individual users or other parties more privacy protections in their use of the global Internet.

_

  1. Once you’ve given away your data to one company, it’s very hard to limit its use. While you might feel comfortable with Amazon or Facebook using your personal information, you just can’t be sure that other “third-party” companies and even government agencies won’t also find ways of accessing it. Another issue is that sites like Facebook and Google often store user data, even after someone has deleted their account. If you’ve used Facebook’s payment systems, your card details will still be in their logs. One major problem with this is that, long after you stop using an application, a data breach can still put your personal information in the hands of hackers and criminals.

_

  1. Search engine privacy is a subset of internet privacy that deals with user data being collected by search engines. Privacy concerns regarding search engines can take many forms, such as the ability of search engines to log individual search queries, browsing history, IP addresses, and cookies of users, and to conduct user profiling in general. Search engines can also abuse and compromise their users’ privacy by selling their data to advertisers for profit.

_

  1. The most overlooked culprits in the internet privacy landscape are the actual internet service providers themselves. A typical practice that compromises your internet privacy is the collecting and selling of browser history. You can be extremely diligent with your use of social media and search engines, but your ISP will still be able to track your activity. This is because you need to go through them to access these sites in the first place. The best way to get around this is by using a virtual private network (VPN).

_

  1. The borderless nature of information flows over the Internet complicates online privacy, as an individual’s data is subject to different levels of protection depending on the jurisdiction in which it resides and where the data server is located. Governments should ensure that legislation is technology neutral and that its rules are applied consistently to all players in the internet ecosystem.

_

  1. By design, social media technologies contest mechanisms for control and access to personal information, as the sharing of user-generated content is central to their function. This proves that social networking companies need private information to become public so their sites can operate. They require people to share and connect with each other. Users are often the targets as well as the source of information in social networking. This may not necessarily be a bad thing; however, one must be aware of the privacy concerns. There are several ways advertisers can invade your social media privacy, take advantage of your data and make you a target for their ads. Also, social media has opened up an entirely new realm for hackers to get information from normal posts and messages.

_

  1. Your behavior is now predictable from the social media data of just 8-9 of your friends. Every day your friends are leaving tell-tale clues about you, what you like, and even how you are likely to vote in any election. Thus, even if you decide to delete your social media account, your profile is still “encoded” in previous interactions with your friends. You can think of your friends as creating a “mirror image” of yourself – all a company or government entity needs to do is figure out who a person’s friends are, and it’s possible to predict how a person will act or behave. Therefore, social media privacy is not necessarily an individual choice.

_

  1. A survey found that 45% of online households reported that privacy concerns stopped them from: conducting financial transactions; buying goods or services; posting on social networks; or expressing opinions on controversial or political issues via the Internet. Another survey found that 85 percent of respondents believe that at least one tech company is currently spying on them. Majority of Americans report being concerned about the way their data is being used by companies (79%) or the government (64%).

_

  1. Mass surveillance is an unprecedented intrusion into the privacy of ordinary people. At no point in history have we accepted that governments should be able to monitor everything we do to keep us safe. Mass surveillance not only compromises the very essence of privacy, but also jeopardizes the enjoyment of other human rights such as freedom of expression and freedom of assembly & association. This can undermine democratic movements, impede innovation, and leave citizens vulnerable to the abuse of power.

When we believe we are being observed, we are more likely to behave according to socially accepted norms. The change in behaviour, thus, has less to do with the content of our actions, but more to do with the knowledge of being watched. Such a modification of behaviour is also evident in the arena of free speech and expression. Persons critical of the ruling government may be more likely to self-censor their views if they believe their communications are being monitored. The reduction in diversity of views only undermines the democratic process.

Mass surveillance can disproportionately affect certain groups in society based on appearance, ethnicity, and religion. Authorities could gather data to find and crack down on groups they disagree with. They could use the information to target journalists, persecute activists and discriminate against minorities.

Mass surveillance hasn’t made us safer. Almost every major terrorist attack on Western soil in the past fifteen years was committed by someone already on the government’s radar for one or another reason. 

_

  1. The premise that privacy is about hiding a wrong is completely wrong. Watch someone long enough, and you’ll find something to arrest — or just blackmail — with. Privacy is important because without it, surveillance information will be abused: to peep, to sell to marketers and to spy on political enemies — whoever they happen to be at the time. Privacy protects us from abuses by those in power, even if we’re doing nothing wrong at the time of surveillance. Privacy is an essential way we seek to protect ourselves and society against arbitrary and unjustified use of power, by reducing what can be known about us and done to us, while protecting us from others who may wish to exert control. Loss of privacy can result in ruining a person’s reputation, honour, relationship, intellectual possession and the like. To lose control of one’s personal information is in some measure to lose control of one’s life and one’s dignity.

_

  1. Measures to protect the privacy of personal information on the Internet offer substantial individual, economic, and societal benefits that must be vigorously protected. First, individuals benefit from the protection of privacy in cyberspace by enjoying the right to be left alone. Secondly lack of privacy may actually stagnate the e-commerce economy. In fact, polls reveal that the privacy concern is the top reason why consumers avoid using the Internet. Finally, intrusive data collection, coupled with the lack of any meaningful choice regarding protection, could lead to avoidance of the Internet as a free-flowing medium of free speech. People may lie to protect their personal data, refuse to answer questions fearing that their answers will become a record in a marketer’s database, or avoid the Internet altogether. Privacy protections, therefore, may also protect against untruthful data and self-censorship.

_

  1. Privacy is important in democracy to prevent undue influence. While lack of privacy in the voting process could enable vote buying and coercion, there are more subtle ways of influencing the democratic process. About 87 million Facebook users’ personal data may have been utilized by Cambridge Analytica, in violation of those users’ privacy, for the political campaigns of Trump’s 2016 election and the Brexit vote, although it is unclear whether it affected the outcome of these political campaigns.

_

  1. Privacy is the key to freedom of thought, expression, research and movement. Privacy is a limit on government power, as well as the power of private sector companies. The more someone knows about us, the more power they can have over us.

_

  1. People need some level of privacy to be at their best in the workplace, because privacy facilitates innovation, the creative process, focus, and engagement, which translate into effectiveness.

_

  1. We have often heard a flawed counter-argument to privacy: “Why should I care? I have nothing to hide.” The nothing-to-hide argument states that government surveillance programs do not threaten privacy unless they uncover illegal activities, and that if they do uncover illegal activities, the person committing these activities does not have the right to keep them private. If you have nothing to hide, then information about you cannot really be used against you. Thus, the argument proceeds, no harm should be caused to you by the breach of your privacy.

The nothing-to-hide argument is fallacious because:

-1. Just because you’re not doing something wrong doesn’t mean you shouldn’t be allowed privacy.

-2. Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.

-3. Individuals may wish to hide embarrassing behavior or conduct not accepted by the dominant culture.

-4. Cardinal Richelieu stated “If one would give me six lines written by the hand of the most honest man, I would find something in them to have him hanged”, referring to how a government can find aspects in a person’s life in order to prosecute or blackmail that individual. Government can leak information about a person and cause damage to that person, or use information about a person to deny access to services even if a person did not actually engage in wrongdoing, and government can cause damage to one’s personal life through making errors.

-5. The nothing-to-hide argument fundamentally misunderstands the consequences of the perceived loss of privacy and the ensuing chilling effects on speech and behaviour. Privacy underpins a healthy democracy, and ensures our freedoms of expression, association, and assembly. The erosion of privacy is something that affects all people, even those who have nothing to hide.

-6. The nothing-to-hide argument frames privacy as something only criminals and other bad actors would demand, but nothing could be further from the truth.

-7. The nothing-to-hide argument implies that government surveillance should be acceptable as the default. On the contrary, privacy should be the default.

_

  1. The right to be forgotten is the right to have private information about a person be removed from Internet searches and other directories under some circumstances. The issue has arisen from desires of individuals to determine the development of their life in an autonomous way, without being perpetually or periodically stigmatized as a consequence of a specific action performed in the past. The right to be forgotten is distinct from the right to privacy. The right to privacy constitutes information that is not publicly known, whereas the right to be forgotten involves removing information that was publicly known at a certain time and not allowing third parties to access the information. Proponents claim that the information will only be removed if the impact on the individual’s privacy is greater than the public’s right to find it. Experts argue that asking search engines to remove certain content is a form of internet censorship and censoring certain information through search engines is a violation of the freedom of expression. In Europe, the right to be forgotten trumps freedom of speech; the reverse is true in the United States.

_

  1. The ‘privacy paradox’ suggests that while Internet users are concerned about privacy, their behaviors do not mirror those concerns; users claim to be very concerned about their privacy but do very little to protect their personal data. The purported explanation for the privacy paradox is that users lack awareness of the risks and the degree of protection. Users may underestimate the harm of disclosing information online. However, a study found that, when downloading an app, functionality, app design, and cost appeared to outweigh privacy concerns.

On the other hand, many argue that it may be more of a privacy dilemma, because people would like to do more but they also want to use services that would not exist without sharing their data. Users are tempted to exchange their personal data for the benefits of using services, and provide data as payment for the services. When the service is free, the data is needed as a form of payment. People do understand that they pay with personal data, but believe they get a fair deal. What is the alternative? Pay money for services. What makes internet so impactful to the world is the fact that it can deliver information to those who wouldn’t have had it before. Throwing up a paywall on the whole thing would do more harm than good, and likely create more inequality than we already have.   

_

  1. On first inspection, it would appear that the right of access to information and the right to protection of personal privacy are irreconcilable. However, privacy and right to information are often described as “two sides of the same coin”—mainly acting as complementary rights that promote individuals’ rights to protect themselves and to promote government accountability. Both are focused on ensuring the accountability of powerful institutions to individuals in the information age. The common purpose of these two rights is “to continue maintaining the non-transparency of citizens in a world that has undergone the information revolution while rendering transparent the state.” A right to information gives citizens access to information about the functioning of the state, and the right to privacy gives citizens control over their personal information. For the most part, these two rights complement each other in holding governments accountable to individuals. But there is a potential conflict between these rights when there is a demand for access to personal information held by government bodies, or when privacy laws are improperly invoked by governments to cover up their targeting of individuals who show dissent.

_

  1. The Internet has undoubtedly enhanced children’s autonomy and independence, key aspects of their right to privacy. On the other hand, children experience more serious threats to their privacy from a greater range of actors than any other group. Children’s privacy online is placed at serious risk by those who seek to exploit and abuse them, using the Internet as a means to contact and groom children for abuse or share child sexual abuse material. Yet children’s privacy is also at risk from the very measures that have been put in place to protect them from these threats. Laws designed to facilitate the prevention and detection of crimes against children online often mandate Internet monitoring and surveillance, incentivize intermediaries to generate and retain personal information, and provide government authorities with access to privately-held data. It is fair to say that children’s rights to privacy and the protection of personal information and reputation must be considered, even attenuated, in the context of the need to protect children from harm and abuse.

_

  1. Children are of incredible interest to businesses. They are the largest and most powerful consumer group; they are more susceptible to advertising and marketing techniques; and their preferences and behaviours are more open to influence and manipulation. Therefore, children are specially targeted for corporate data collection violating their privacy.

_

  1. While the motivation to protect children from harmful content, sexual exploitation and disclosing personal information is undoubtedly legitimate, parental controls also present a clear interference with children’s privacy. Parents have been given authority over their children’s privacy online by requiring parental involvement, control and consent for the use of widely-available online services but it can impede children’s freedom of expression, access to information and development of digital literacy. Perhaps most concerning, parents who threaten their children’s safety may use their power to cut off digital lifelines for seeking outside assistance.

_

  1. Women disproportionately face fetishization, harassment, and threats of violence online, and matters are made worse by their often lower awareness of the potential threats posed by technology, data, and interface design. Invasion of privacy is one of the most damaging types of sexual harassment because it damages a woman’s reputation and personal relationships; this can include anything from leaking important and private information in order to coerce her into a sexual relationship to secretly recording her in places that are clearly private.

_

  1. It’s common practice for businesses to try and acquire as much personal information about you as possible. That’s because this data is more powerful than ever. When a business has a ton of data about you, they can be more effective with their marketing and deliver highly targeted advertisements. With just a handful of details about you, a company like Facebook or Google can sell ads for double the cost. When more user data equals more profit, it’s not hard to predict the direction the ship will go. As reading and writing, health care and shopping, and sex and gossip increasingly take place in cyberspace, citizens around the world are concerned that the most intimate details of their daily lives are being monitored, searched, recorded, stored, and often misinterpreted when taken out of context. Numerous surveys document that lack of privacy protections is a major barrier to consumer participation in electronic commerce. Companies risk losing up to 55% of customers if they suffer a significant personal data leak. So businesses are now beginning to take privacy protection more seriously. A growing number of companies, under public and regulatory scrutiny, have begun incorporating privacy into their management process and actually marketing their “privacy sensitivity” to the public.

_

  1. Users generate loads of data when online. This is not only data explicitly entered by the user, but also numerous statistics on user behavior: sites visited, links clicked, search terms entered, etc. We have to distinguish between data that are actually recorded and information that can be statistically predicted from such records. It has been shown that age, gender, occupation, education level, and even personality can be predicted from people’s website browsing logs. Facebook Likes can be used to automatically and accurately predict a range of highly sensitive personal attributes including: sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender. Such data may be used in profiling the user, creating patterns of typical combinations of user properties, which can then be used to predict interests and behavior. Predicting users’ individual attributes and preferences can be used to improve numerous products and services.

On the other hand, the predictability of individual attributes from digital records of behavior may have considerable negative implications, because it can easily be applied to large numbers of people without obtaining their individual consent and without them noticing. This predicting of personal information to improve products, services, and targeting can lead to dangerous invasions of privacy. The profiling could lead to refusal of insurance or a credit card, in which case profit is the main reason for discrimination. Profiling could also be used by organizations or governments that have discrimination of particular groups on their political agenda, in order to find their targets and deny them access to services, or worse. Data analysis and machine learning techniques are used to generate prediction models of individual users that can be used for targeted advertisement, but also for more malicious intents such as fraud or micro-targeting to influence elections. One can imagine situations in which such predictions, even if incorrect, could pose a threat to an individual’s well-being, freedom, or even life. There is a risk that the growing awareness of digital exposure may negatively affect people’s experience of digital technologies, decrease their trust in online services, or even completely deter them from using digital technology.
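
As a toy, entirely fictional sketch of the kind of modelling described above, a handful of “Likes” per user can be encoded as a 0/1 matrix and fed to a standard classifier to infer a sensitive attribute; the pages, users and attribute here are invented:

# Fictional sketch: inferring a sensitive attribute from which pages a user
# has "Liked". Rows = users, columns = pages; 1 means the user liked the page.
import numpy as np
from sklearn.linear_model import LogisticRegression

pages = ["hiking club", "party A", "gaming cafe", "party B"]
likes = np.array([
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 1],
])
supports_party_A = np.array([1, 1, 0, 0])    # the attribute being inferred

clf = LogisticRegression().fit(likes, supports_party_A)

new_user = np.array([[1, 1, 1, 0]])          # a user who never stated a preference
print(clf.predict_proba(new_user)[0, 1])     # inferred probability of supporting party A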

_

  1. The main originators of the threats against privacy are governmental institutions and big corporations. The motivations behind these threats are varied. Nevertheless, they can be classified under four categories: social, political, technological and economic.

Data once collected can be used, misused, shared, and stored in perpetuity. Worse, it can be combined with other individually inconsequential data points to reveal extremely significant information about an individual. It is true that both the private sector and the state can know this information. Both governments and corporates are threats to privacy. But in the hands of the state, which has the monopoly on coercion and violence, it is far more potent. Between government and the private sector, government is the clearer threat to privacy. Governments have the power to take information from people and use it in ways that are objectionable or harmful. This is a power that no business has. People can always turn away from businesses that do not satisfy their demands for privacy. One may argue that the private sector merely uses this information for commercial purposes, while the government may use it to jail people, suppress free speech, target government critics and violate their rights. However, one must note that the violation of privacy by private agents has some similar effects to violations committed by government agents—effects that lead to discrimination and “chilling” of expression and dissent. Corporations, especially those that make trading in private information their main line of business—the Privacy Merchants—are major violators of privacy, and their reach is rapidly expanding. One must also note that the information corporations amass is available to the government, as government has been increasingly contracting with businesses to acquire databases of personal information. Therefore, what is private is also public: one’s privacy (including sensitive matters) is rapidly corroded by the private sector, and whatever it learns is also available to the government. So the coin of privacy violation has two sides, government and corporates; no matter how you toss the coin, your privacy will be compromised. When you’re doing stuff online, you should behave as if you’re doing it in public — because increasingly, it is.

_

  1. Unwarranted invasion of privacy by the media is widespread in India. The Indian norms and code of ethics in journalism fail to make a clear distinction between public and private space. The functions of the courts are usurped by media trials, and media trials hamper fair investigation in India.

_

  1. As a principle, a publication concerning strictly private matters infringes the right to privacy unless the consent of the concerned person is obtained or such publication is considered to be in the public interest. Areas considered to be of public interest include, but are not limited to, misuse of public office; improper use of public money; protection of public health, safety and environment; protection of national security; crime and social behaviour; and similar political and socioeconomic topics.

_

  1. Smartphone apps do invade your privacy. Apps can track your web habits, look into your contact list, make phone calls without your cognizance, record with the microphone, take photos, extract data, track your location, examine your files and more which is not at all acceptable.

_

  1. Privacy violations in the healthcare sector include the disclosure of personal health information to third parties without consent; unlimited or unnecessary collection of personal health data; research and commercial uses of personal health data without de-identification; and improper security standards, storage and disposal. Individuals whose private health information was inappropriately accessed face a number of potential harms. Individuals could lose their job, health insurance, housing and relationships if private health information becomes public knowledge.

_

  1. Aggressive digital surveillance has been justified by governments worldwide during Covid-19 pandemic for Covid-19 patients and their contact tracing, enforcing quarantine and encouraging mask wearing. However stringent procedures are required to keep this information safe and to delete it when no longer in use. Data collection during Covid-19 pandemic must be done in such a way that it doesn’t create future privacy risks.  

_

  1. Facial recognition, CCTV, 3D printing, smart wearables and IoT are all privacy risks.

_

  1. The term “Privacy by Design” means nothing more than “data protection through technology design.” Privacy by design’s main point is that data protection should be central in all phases of product life cycles, from initial design to operational use and disposal.
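
A hypothetical sketch of what “data protection through technology design” can look like at the level of code: the collection step itself stores a pseudonym instead of the raw identifier, keeps marketing consent off by default, and records a deletion date up front. The field names and the 90-day retention period are invented for illustration:

# Hypothetical "privacy by design" default for a signup handler.
import hashlib
import datetime

RETENTION_DAYS = 90   # assumed retention period, decided at design time

def store_signup(email, marketing_opt_in=False):
    return {
        # one-way pseudonym instead of the raw email address
        "user_key": hashlib.sha256(email.lower().encode()).hexdigest(),
        "marketing_opt_in": marketing_opt_in,   # off unless the user opts in
        "delete_after": (datetime.date.today()
                         + datetime.timedelta(days=RETENTION_DAYS)).isoformat(),
        # no name, address or birth date is collected at all
    }

print(store_signup("alice@example.com"))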

_

  1. Deleting your cookies will make you appear as a “new” visitor to less sophisticated tracking systems. But it will not meaningfully increase your privacy online, and the “delete cookies” button often leads to a false expectation of privacy that simply does not exist.
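
A minimal sketch of why this is so, assuming a tracker that falls back to browser fingerprinting (the attributes below are illustrative): even after cookies are cleared, a hash of fairly stable browser attributes can re-identify the same visitor.

    import hashlib

    def fingerprint(user_agent: str, screen: str, timezone: str, languages: str) -> str:
        """Hash a few fairly stable browser attributes into a pseudo-identifier."""
        raw = "|".join([user_agent, screen, timezone, languages])
        return hashlib.sha256(raw.encode()).hexdigest()[:16]

    # The same browser, before and after the user clears cookies:
    before = fingerprint("Mozilla/5.0 (X11; Linux x86_64)", "1920x1080", "Asia/Kolkata", "en-IN,en")
    after  = fingerprint("Mozilla/5.0 (X11; Linux x86_64)", "1920x1080", "Asia/Kolkata", "en-IN,en")
    print(before == after)  # True: the tracker recognises the visitor without any cookie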

_

  1. There are various ways of protecting your online privacy, and one of the most effective is a virtual private network (VPN). It is a tool that provides an encrypted tunnel for all your online activities: it encodes all the information transferred between you and the site you are visiting, leaving little opportunity for snooping and spying along the way. It also gives you an anonymous IP address and disguises your actual identity, hiding your geographical location and making your online existence safer and more secure. Using a VPN is a good choice when using public Wi-Fi networks. A VPN also prevents ISPs from tracking what you do online and selling that information to other companies. Although VPNs can address some privacy issues, they also introduce new ones. A VPN provides a convenient chokepoint for your adversary to monitor all of your communications: by routing all your communications through a VPN, your adversary only needs to compromise your VPN provider’s systems to monitor all of your traffic. Also, some VPN providers have claimed that they don’t store any logs, but those claims have been proven false.

_

  1. If you’re like many web surfers, you rely heavily on Google as your search engine. But you don’t have to. Switch to DuckDuckGo or Startpage as these search engines do not track or store any information about you, or place cookies on your machine. You can also use the Tor browser as it was built for enabling anonymous communication. It uses onion routing to not only prevent third-party trackers from following you around but also to mask your IP address from prying eyes. The idea is to prevent an observer from linking your activities on the web to you.
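
To make the idea of onion routing concrete, here is a toy sketch in Python (using the third-party cryptography package; the relay names are illustrative, and this is nothing like the real Tor protocol): the message is wrapped in one layer of encryption per relay, and each relay can peel off only its own layer, so no single relay sees both who you are and what you requested.

    # pip install cryptography
    from cryptography.fernet import Fernet

    # Three hypothetical relays, each with its own key (Tor negotiates these per circuit).
    relay_keys = {"entry": Fernet.generate_key(),
                  "middle": Fernet.generate_key(),
                  "exit": Fernet.generate_key()}

    def wrap(message: bytes, path) -> bytes:
        """Encrypt in reverse path order so the entry relay's layer is outermost."""
        for relay in reversed(path):
            message = Fernet(relay_keys[relay]).encrypt(message)
        return message

    def peel(message: bytes, relay: str) -> bytes:
        """Each relay removes exactly one layer and forwards the rest."""
        return Fernet(relay_keys[relay]).decrypt(message)

    path = ["entry", "middle", "exit"]
    onion = wrap(b"GET https://example.com", path)
    for relay in path:
        onion = peel(onion, relay)   # only the exit relay finally sees the plaintext request
    print(onion)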

_

  1. Do Not Track is a one-and-done web browser setting that provides a simple mechanism to enable consumers to exercise their opt-out rights and avoid invasive data collection and profiling. The Do Not Track browser setting would not destroy online advertising, as some companies fear. People who turn Do Not Track on could still be shown contextual ads (based on the context of the page, i.e. its content, such as the search you type in), as opposed to behavioural ads (based on creepy profiles of your search history, likes, purchases, and more). Increasing evidence says this can be similarly profitable. In other words, business can continue to thrive, users can continue to get great products, and your privacy can be protected.
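
Technically, the setting just adds a "DNT: 1" header to every request; whether it is honoured is entirely up to the site. A minimal sketch of a server that respects it (the function and variable names are illustrative assumptions):

    def choose_ad(request_headers: dict, page_topic: str, user_profile: dict) -> str:
        """Serve a contextual ad when the visitor sends DNT: 1, a behavioural ad otherwise."""
        if request_headers.get("DNT") == "1":
            return f"contextual ad about {page_topic}"              # no profile consulted
        return f"behavioural ad based on {user_profile.get('interests', [])}"

    # Example: the same page, with and without the Do Not Track header.
    print(choose_ad({"DNT": "1"}, "hiking boots", {"interests": ["cameras"]}))
    print(choose_ad({}, "hiking boots", {"interests": ["cameras"]}))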

_

  1. Differential privacy can be used to protect everyone’s personal data while gleaning useful information from it. Differentially private techniques add carefully calibrated random noise to data or to query results, so that aggregate patterns can still be learned while no one — hackers, government agencies, or even the company that collects the data — can confidently infer any individual’s contribution. Differential privacy will enable companies to collect information while reducing the risk that it will be accessed and used in a way that harms privacy rights.
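
A minimal sketch of the standard Laplace mechanism for a counting query (the epsilon value and data are illustrative): adding or removing any one person changes a count by at most 1, and the noise is scaled to hide a change of that size, so the noisy answer is useful in aggregate but reveals little about any individual.

    import math
    import random

    def laplace_noise(scale: float) -> float:
        """Sample Laplace(0, scale) noise via inverse transform sampling."""
        u = random.uniform(-0.5, 0.5)
        return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

    def private_count(values, predicate, epsilon=0.5):
        """Differentially private count: true count plus Laplace(1/epsilon) noise,
        since a count has sensitivity 1 (one person changes it by at most 1)."""
        true_count = sum(1 for v in values if predicate(v))
        return true_count + laplace_noise(1.0 / epsilon)

    # Illustrative data: ages of survey respondents; query: how many are over 60?
    ages = [23, 35, 41, 67, 29, 72, 54, 61, 44, 38]
    print(private_count(ages, lambda age: age > 60, epsilon=0.5))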

_

  1. A privacy policy is a statement or legal document (in privacy law) that discloses some or all of the ways a party (company or website) gathers, uses, discloses, and manages a customer’s or visitor’s data. It explicitly describes whether that information is kept confidential, or is shared with or sold to third parties. But most people do not read privacy policies for every device they buy or every app they download or every website they visit and, even if they attempted to do so, most would be written in legal language unintelligible to the average consumer. Privacy policies are hard to read and are therefore read infrequently. However, efforts to make the information more presentable simplify it to the point that it no longer conveys the extent to which users’ data is being shared and sold. This is known as the ‘transparency paradox.’ Many policies essentially consist of a site showing the user a privacy policy and having them click to agree. This is intended to let the user freely decide whether or not to go ahead and use the website. This decision, however, may not actually be made so freely because the costs of opting out can be very high.

_

  1. Because there are so many dimensions of the privacy interest, and so many competing interests at so many levels of society, the formulation of detailed, operational rules about privacy protection is a difficult exercise. Even with the adoption of legal and other protections, violations of privacy remain a concern. More and more data about each of us is being generated faster and faster from more and more devices, and we can’t keep up with protecting privacy. We are generating quintillions of bytes of data every day — a quintillion is a 1 followed by 18 zeros. Privacy laws and regulations cannot keep pace with the explosion of digital information, and the pervasiveness of this information has undermined key premises of these laws in ways that are increasingly glaring. As the data universe keeps expanding, more and more of it falls outside the various specific laws on the books. In many countries, laws have not kept up with the technology, leaving significant gaps in protections. In other countries, law enforcement and intelligence agencies have been given significant exemptions. Finally, in the absence of adequate oversight and enforcement, the mere presence of a law may not provide adequate protection. There are widespread violations of laws relating to surveillance of communications, even in the most democratic of countries.

_

  1. There are several criticisms of the right to privacy.

-1. The right to privacy is not an absolute right; it can be restricted to protect national security, prevent and investigate crime, encourage innovation and the spread of knowledge, and prevent the dissipation of social welfare benefits. The right to privacy must be balanced against the state’s compelling interests, including the promotion of public safety and improving the quality of life, for example through seat-belt laws and motorcycle helmet requirements.

-2. Too much privacy in cyberspace can be a problem. Cyberspace privacy can obscure the sources of tortious misconduct, criminality, incivility, surveillance, and threats to public health and safety.

-3. According to one well-known argument, there is no right to privacy and nothing special about privacy, because any interest protected as private can be equally well explained and protected by other interests or rights, most notably rights to property and bodily security.

-4. Posner criticizes privacy for concealing information, which reduces market efficiency.

-5. Privacy comes at a cost, and the relevant “costs” to be considered are not just financial expenses in support of compliance or lost profits due to privacy restrictions, but also the unnecessary expenditures borne by a country’s consumers and the lost opportunities for a country’s technological advancement.

-6. Access to data is crucial for competition and innovation in the digital economy – not only for businesses, but also for governments and individuals. Overall, data access and sharing is estimated to generate social and economic benefits worth between 0.1% and 1.5% of gross domestic product (GDP) in the case of public-sector data, and between 1% and 2.5% of GDP when also including private-sector data.   

-7. There is a feminist critique of privacy, which says that granting special status to privacy is detrimental to women because it is used as a shield to dominate and control them, harm them, degrade them, silence them, and cover up abuse.

_

  1. Granting individuals property rights in personal data is unlikely to achieve information privacy goals, because a key mechanism of property law — the general policy favouring free alienability of such rights — would more likely defeat those goals than achieve them.

_

  1. The linkages between intellectual property and privacy, in its broadest connotation, have existed for centuries. Examples of privacy in intellectual property include private, secret information that qualifies as a trade secret; publicity and celebrity rights; expressions of and including persons; moral rights; performers’ rights; inventorship and invention information; and pre-public product representations. Providing intellectual property rights in personal data goes some way towards strengthening data subjects’ control and protection over their personal data and strengthening data protection law more generally.

_

  1. Even today, privacy laws mainly serve the rich and the powerful, and only the wealthy can afford to bring actions in the courts. Whether or not people accept it, that new normal—public life and mass surveillance as a default—will become a component of the ever-widening socioeconomic divide. Privacy as we know it today will become a luxury commodity. Opting out will be for the rich. There are smartphone apps that charge money to become your privacy manager and manage all privacy settings for Facebook, Google, Amazon and more. In the next 10 years, we will see the development of more encryption technologies and boutique services for people prepared to pay a premium for greater control over their data. This is the creation of privacy as a luxury good. It also has the unfortunate effect of establishing a new divide: the privacy rich and the privacy poor. Whether genuine control over your information will be extended to the majority of people—and for free—seems very unlikely. Privacy as a paid service is something that doesn’t sit well with privacy advocates and civil rights experts, who worry that freedom from corporate or government surveillance could soon become another sharp contrast between the haves and the have-nots. If privacy is a right, why should you pay for it? And if you can’t pay, do you lose the right to privacy? Well, the truth is that poor people don’t have privacy.

_____

Dr. Rajiv Desai. MD.

November 20, 2020

_____

Postscript:

I am the victim of unprecedented privacy violation in the history of mankind perpetrated by Indian media under the pretext of freedom of the press, which includes telephone tapping, email snooping, secretly recording private conversations in government healthcare facilities, secretly recording private conversations in a private clinic, snooping on court papers of divorce, snooping on classroom lectures for students, snooping on Zoom online lectures for students, hiring women to secretly record sexual acts, snooping on WhatsApp chats, snooping on income tax returns, and so on and so forth.

______ 
