Tag Archives: right to privacy

Over Your Dead Body: the Creation of Internet Companies

Jeff Kosseff’s “The Twenty-Six Words That Created the Internet” (2019) explains how the internet as we know it came to be. The 26 words are these: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” They form Section 230 of the Communications Decency Act, itself a part of the Telecommunications Act of 1996. Section 230 shields online platforms from legal liability for content generated by third-party users. Put simply: If you’re harassed by a Facebook user, or if your business is defamed by a Yelp reviewer, you might be able to sue the harasser or the reviewer, assuming you know his or her identity, but don’t bother suing Facebook or Yelp. They’re probably immune. That immunity is what enabled American tech firms to become far more than producers of content (the online versions of newspapers, say, or company websites) and to harness the energy and creativity of hundreds of millions of individual users. The most popular sites on the web—YouTube, Twitter, Facebook, eBay, Reddit, Wikipedia, Amazon—depend in part or in whole on user-generated content…

Because of Section 230, the U.S. was able to cultivate online companies in ways that other countries—even countries in the developed world—could not….American law’s “internet exceptionalism,” as it’s known, is the source of mind-blowing technological innovation, unprecedented economic opportunity and a great deal of human pain. The book chronicles the plights of several people who found themselves targeted or terrorized by mostly anonymous users… Each of them sued the internet service providers or websites that facilitated these acts of malice and failed to do anything about them when alerted. And each lost—thanks to the immunity afforded to providers by Section 230.

Has the time come to delete the section?

Excerpt from Barton Swaim, ‘The Twenty-Six Words That Created the Internet’ Review: Protecting the Providers, WSJ, Aug. 19, 2019

Stasi Reborn: Democratizing Internet Censorship

The internet is the “spiritual home” of hundreds of millions of Chinese people. So China’s leader, Xi Jinping, described it in 2016. He said he expected citizens to help keep the place tidy. Many have taken up the challenge. In December 2019 netizens reported 12.2m pieces of “inappropriate” content to the authorities—four times as many as in the same month of 2015. The surge does not indicate that the internet in China is becoming more unruly. Rather, censorship is becoming more bottom-up.

Officials have been mobilising people to join the fight in this “drawn-out war”, as a magazine editor called it in a speech in September to Shanghai’s first group of city-appointed volunteer censors. “Internet governance requires that every netizen take part,” an official told the gathering. It was arranged by the city’s cyber-administration during its first “propaganda month” promoting citizen censorship. The 140 people there swore to report any online “disorder”…

Information-technology rules, which took effect on December 1st, 2019, oblige new subscribers to mobile-phone services not only to prove their identities, as has long been required, but also to have their faces scanned. That, presumably, will make it easier for police to catch the people who post the bad stuff online.

Excerpt from The Year of the Rat-fink: Online Censorship, Economist, Jan 18, 2020

The Repressive Digital Technologies of the West

A growing, multi-billion-dollar industry exports “intrusion software” designed to snoop on smartphones, desktop computers and servers. There is compelling evidence that such software is being used by oppressive regimes to spy on and harass their critics. The same tools could also proliferate and be turned back against the West. Governments need to ensure that this new kind of arms export does not slip through the net.

A recent lawsuit brought by WhatsApp, for instance, alleges that more than 1,400 users of its messaging app were targeted using software made by NSO Group, an Israeli firm. Many of the alleged victims were lawyers, journalists and campaigners. (NSO denies the allegations and says its technology is not designed or licensed for use against human-rights activists and journalists.) Other firms’ hacking tools were used by the blood-soaked regime of Omar al-Bashir in Sudan. These technologies can be used across borders. Some victims of oppressive governments have been dissidents or lawyers living as exiles in rich countries.

Western governments should tighten the rules for moral, economic and strategic reasons. The moral case is obvious. It makes no sense for rich democracies to complain about China’s export of repressive digital technologies if Western tools can be used to the same ends. The economic case is clear, too: unlike conventional arms sales, a reduction in spyware exports would not lead to big manufacturing-job losses at home.

The strategic case revolves around the risk of proliferation. Software can be reverse-engineered, copied indefinitely and—potentially—used to attack anyone in the world…. There is a risk that oppressive regimes acquire capabilities that can then be used against not just their own citizens, but Western citizens, firms and allies, too. It would be in the West’s collective self-interest to limit the spread of such technology.

A starting-point would be to enforce existing export-licensing more tightly… Rich countries should make it harder for ex-spooks to pursue second careers as digital mercenaries in the service of autocrats. The arms trade used to be about rifles, explosives and jets. Now it is about software and information, too. Time for the regime governing the export of weapons to catch up.

The spying business: Western firms should not sell spyware to tyrants, Economist, Dec. 14, 2019

Dodging the Camera: How to Beat the Surveillance State at Its Own Game

Powered by advances in artificial intelligence (AI), face-recognition systems are spreading like knotweed. Facebook, a social network, uses the technology to label people in uploaded photographs. Modern smartphones can be unlocked with it… America’s Department of Homeland Security reckons face recognition will scrutinise 97% of outbound airline passengers by 2023. Networks of face-recognition cameras are part of the police state China has built in Xinjiang, in the country’s far west. And a number of British police forces have tested the technology as a tool of mass surveillance in trials designed to spot criminals on the street. A backlash, though, is brewing.

Refuseniks can also take matters into their own hands by trying to hide their faces from the cameras or, as has happened recently during protests in Hong Kong, by pointing hand-held lasers at cctv cameras to dazzle them. Meanwhile, a small but growing group of privacy campaigners and academics are looking at ways to subvert the underlying technology directly…

Laser Pointers Used to Blind CCTV cameras during the Hong Kong Protests 2019

In 2010… an American researcher and artist named Adam Harvey created “cv [computer vision] Dazzle”, a style of make-up designed to fool face recognisers. It uses bright colours, high contrast, graded shading and asymmetric stylings to confound an algorithm’s assumptions about what a face looks like. To a human being, the result is still clearly a face. But a computer—or, at least, the specific algorithm Mr Harvey was aiming at—is baffled….

Modern Make-Up to Hide from CCTV cameras

HyperFace is a newer project of Mr Harvey’s. Where cv Dazzle aims to alter faces, HyperFace aims to hide them among dozens of fakes. It uses blocky, semi-abstract and comparatively innocent-looking patterns that are designed to appeal as strongly as possible to face classifiers. The idea is to disguise the real thing among a sea of false positives. Clothes with the pattern, which features lines and sets of dark spots vaguely reminiscent of mouths and pairs of eyes, are available…

Hyperface Clothing for Camouflage

 Even in China, says Mr Harvey, only a fraction of cctv cameras collect pictures sharp enough for face recognition to work. Low-tech approaches can help, too. “Even small things like wearing turtlenecks, wearing sunglasses, looking at your phone [and therefore not at the cameras]—together these have some protective effect”. 

Excerpts from As face-recognition technology spreads, so do ideas for subverting it: Fooling Big Brother,  Economist, Aug. 17, 2019

Your Typing Discloses Who You Are: Behavioral Biometrics

Behavioural biometrics make it possible to identify an individual’s “unique motion fingerprint”… With the right software, data from a phone’s sensors can reveal details as personal as which part of someone’s foot strikes the pavement first, and how hard; the length of a walker’s stride; the number of strides per minute; and the swing and spring in the walker’s hips and step. It can also work out whether the phone in question is in a handbag, a pocket or held in a hand.
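To make the idea concrete, here is a minimal sketch of how one such feature—strides per minute—might be estimated from raw accelerometer samples by counting peaks in the signal’s magnitude. The function name, sampling assumptions and threshold are hypothetical illustrations, not how Unifyid or any vendor actually does it; real systems extract far richer features.

```python
# Hypothetical sketch: estimate walking cadence (steps per minute)
# from phone accelerometer readings by counting rising edges where
# the acceleration magnitude crosses a footfall threshold.

def cadence_spm(samples, rate_hz, threshold=11.0):
    """samples: list of (ax, ay, az) tuples in m/s^2.
    rate_hz: sampling rate of the sensor.
    Returns the estimated steps per minute."""
    mags = [(ax * ax + ay * ay + az * az) ** 0.5 for ax, ay, az in samples]
    steps = 0
    above = False
    for m in mags:
        if m > threshold and not above:  # rising edge = one footfall
            steps += 1
            above = True
        elif m < threshold:
            above = False
    minutes = len(samples) / rate_hz / 60.0
    return steps / minutes if minutes else 0.0
```

A classifier would combine dozens of such features (stride length, heel-strike asymmetry, hip sway) rather than cadence alone.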

Using these variables, Unifyid, a private company, sorts gaits into about 50,000 distinct types. When coupled with information about a user’s finger pressure and speed on the touchscreen, as well as a device’s regular places of use—as revealed by its gps unit—that user’s identity can be pretty well determined….Behavioural biometrics can, moreover, go beyond verifying a user’s identity. It can also detect circumstances in which it is likely that a fraud is being committed. On a device with a keyboard, for instance, a warning sign is when the typing takes on a staccato style, with a longer-than-usual finger “flight time” between keystrokes. This, according to Aleksander Kijek, head of product at Nethone, a firm in Warsaw that works out behavioural biometrics for companies that sell things online, is an indication that the device has been hijacked and is under the remote control of a computer program rather than a human typist…
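The “flight time” signal described above can be sketched in a few lines: a session looks machine-driven when the gaps between keystrokes are both unusually long and unusually uniform, where human typing is short and irregular. This is a hypothetical illustration under assumed thresholds, not Nethone’s actual detector.

```python
# Hypothetical sketch of flight-time analysis: flag a typing session
# as bot-like when inter-keystroke gaps are long (staccato pacing)
# and nearly constant (machine-timed input).

def looks_scripted(key_times, max_mean=0.5, min_spread=0.02):
    """key_times: keystroke timestamps in seconds, ascending.
    Returns True when flight times are long and nearly uniform."""
    gaps = [b - a for a, b in zip(key_times, key_times[1:])]
    if len(gaps) < 2:
        return False  # too few keystrokes to judge
    mean = sum(gaps) / len(gaps)
    var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
    spread = var ** 0.5  # standard deviation of flight times
    return mean > max_mean and spread < min_spread
```

A production system would compare each session against the user’s own historical typing profile rather than fixed thresholds.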

Used wisely, behavioural biometrics could be a boon…Used unwisely, however, the system could become yet another electronic spy on people’s privacy, permitting complete strangers to monitor your every action, from the moment you reach for your phone in the morning, to when you fling it on the floor at night.

Excerpts from Behavioural biometrics: Online identification is getting more and more intrusive, Economist, May 23, 2019

Facebook Denizens Unite! The Right to Privacy and Big Tech

The European Union’s (EU) approach to regulating the big tech companies rests on two principles. One draws on its members’ cultures, which tend to protect individual privacy. The other uses the EU’s legal powers to boost competition. The first leads to the assertion that you have sovereignty over data about you: you should have the right to access them, amend them and determine who can use them. This is the essence of the General Data Protection Regulation (GDPR), whose principles are already being copied by many countries across the world. The next step is to allow interoperability between services, so that users can easily switch between providers, shifting to firms that offer better financial terms or treat customers more ethically. (Imagine if you could move all your friends and posts to Acebook, a firm with higher privacy standards than Facebook and which gave you a cut of its advertising revenues.)

Europe’s second principle is that firms cannot lock out competition. That means equal treatment for rivals who use their platforms. The EU has blocked Google from competing unfairly with shopping sites that appear in its search results or with rival browsers that use its Android operating system. A German proposal says that a dominant firm must share bulk, anonymised data with competitors, so that the economy can function properly instead of being ruled by a few data-hoarding giants. (For example, all transport firms should have access to Uber’s information about traffic patterns.) Germany has changed its laws to stop tech giants buying up scores of startups that might one day pose a threat.

As Ms Vestager has explained, popular services like Facebook use their customers as part of the “production machinery”. …The logical step beyond limiting the accrual of data is demanding their disbursement. If tech companies are dominant by virtue of their data troves, competition authorities working with privacy regulators may feel justified in demanding they share those data, either with the people who generate them or with other companies in the market. That could whittle away a big chunk of what makes big tech so valuable, both because Europe is a large market, and because regulators elsewhere may see Europe’s actions as a model to copy. It could also open up new paths to innovation.

In recent decades, American antitrust policy has been dominated by free-marketeers of the so-called Chicago School, deeply sceptical of the government’s role in any but the most egregious cases. Dominant firms are frequently left unmolested in the belief they will soon lose their perch anyway…By contrast, “Europe is philosophically more sceptical of firms that have market power.” ..

Tech lobbyists in Brussels worry that Ms Vestager agrees with those who believe that their data empires make Google and its like natural monopolies, in that no one else can replicate Google’s knowledge of what users have searched for, or Amazon’s of what they have bought. She sent shivers through the business in January when she compared such companies to water and electricity utilities, which because of their irreproducible networks of pipes and power lines are stringently regulated….

The idea is for consumers to be able to move data about their Google searches, Amazon purchasing history or Uber rides to a rival service. So, for example, social-media users could post messages to Facebook from other platforms with approaches to privacy that they prefer…

Excerpts from Why Big Tech Should Fear Europe, Economist, Mar. 3, 2019; The Power of Privacy, Economist, Mar. 3, 2019

Your Biometric Data in Facebook

A federal judge has dismissed a class action lawsuit against Facebook after the California-based social media site claimed there was a lack of personal jurisdiction in Illinois. The plaintiff in the case, Fredrick William Gullen, filed the complaint alleging violations of the Illinois Biometric Information Privacy Act. Gullen is not a Facebook user, but he alleged that his image was uploaded to the site and that his biometric identifiers and biometric information were collected, stored and used by Facebook without his consent. The Illinois Biometric Information Privacy Act, implemented in 2008, regulates the collection, use, and storage of biometric identifiers and biometric information such as scans of face or hand geometry. The act specifically excludes photographs, demographic information, and physical descriptions….

In the Facebook case, no ruling has been made on whether the information on Facebook counts as biometric identifiers and biometric information under the Illinois Biometric Information Privacy Act. Instead, the judge agreed with Facebook that the case could not be tried in Illinois.

However, the company is currently facing a proposed class action in California relating to some of the same questions….How the California class action will play out remains to be seen. California does not yet have a clear policy on biometric privacy. A bill pending in the state’s legislature would extend the scope of the data security law to include biometric data as well as geophysical location, but it has not yet become law. The question of privacy with regard to biometric information is one that has garnered increasing attention in recent months. On Feb. 4, 2016, the Biometrics Institute, an independent research and analysis organization, released revised guidelines comprising 16 privacy principles for companies that gather and use biometrics data.

Excerpts from Emma Gallimore, Federal judge boots Illinois biometrics class action against Facebook, Legal Newswire, Feb. 22, 2016, 12:15pm

See also the case (pdf)