Tag Archives: right to privacy

Addictive Ads and Digital Dignity

Social-media firms make almost all their money from advertising. This pushes them to collect as much user data as possible, the better to target ads. Critics call this “surveillance capitalism”. It also gives them every reason to make their services as addictive as possible, so users watch more ads…

The new owner could turn TikTok from a social-media service to a digital commonwealth, governed by a set of rules akin to a constitution with its own checks and balances. User councils (a legislature, if you will) could have a say in writing guidelines for content moderation. Management (the executive branch) would be obliged to follow due process. And people who felt their posts had been wrongfully taken down could appeal to an independent arbiter (the judiciary). Facebook has toyed with platform constitutionalism and now has an “oversight board” to hear user appeals…

Why would any company limit itself this way? For one thing, it is what some firms say they want. Microsoft in particular claims to be a responsible tech giant. In January 2020 its chief executive, Satya Nadella, told fellow plutocrats in Davos about the need for “data dignity”—ie, granting users more control over their data and a bigger share of the value these data create…Governments increasingly concur. In its Digital Services Act, to be unveiled in 2020, the European Union is likely to demand transparency and due process from social-media platforms…In the United States, Andrew Yang, a former Democratic presidential candidate, has launched a campaign to get online firms to pay users a “digital dividend”. Getting ahead of such ideas makes more sense than re-engineering platforms later to comply.

Excerpt from: Reconstituted: Schumpeter, Economist, Sept 5, 2020

See also Utilities for Democracy: Why and How the Algorithmic Infrastructure of Facebook and Google Must Be Regulated (2020)

Breath and Sweat: the Biometrics of All Private Things

It is not just DNA that people scatter to the wind as they go about their business. They shed a whole range of other chemicals as well, in their breath, their urine, their faeces and their sweat. Collectively, these molecules are referred to as metabolites….

The most common way of analysing metabolite content is gas chromatography-mass spectrometry. This technique sorts molecules by their weight, producing a pattern of peaks that correspond to different substances….There are, however, a lot of information sources out there, in the form of publicly available metabolite databases. The databases themselves are getting better, too…. A study just published by Feliciano Priego-Capote at the University of Cordoba, in Spain, for example, shows it is possible to extract much meaningful information from even a dried-up drop of sweat. “The day is coming soon”, observes Cecil Lewis, a molecular anthropologist at the University of Oklahoma, who is studying the matter, “when it will be possible to swab a person’s desk, steering wheel or phone and determine a wide range of incredibly private things about them….”
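The matching step the article describes—comparing observed spectral peaks against public metabolite databases—can be sketched in a few lines. This is a toy illustration only: the database, masses (approximate monoisotopic values) and tolerance are chosen for the example, not taken from any real GC-MS pipeline.

```python
# Toy sketch: match a list of observed mass peaks against a small
# metabolite "database". Values and tolerance are illustrative only.

METABOLITE_DB = {
    "caffeine": 194.08,      # approx. monoisotopic mass
    "cortisol": 362.21,
    "urea": 60.03,
    "lactic acid": 90.03,
}

def match_peaks(observed_peaks, db=METABOLITE_DB, tolerance=0.05):
    """Return (peak, metabolite) pairs whose known mass lies
    within `tolerance` of an observed peak."""
    hits = []
    for peak in observed_peaks:
        for name, mass in db.items():
            if abs(peak - mass) <= tolerance:
                hits.append((peak, name))
    return hits

print(match_peaks([194.08, 90.02, 500.0]))
```

Real workflows add retention-time matching and isotope-pattern checks, but the core lookup-with-tolerance idea is the same.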


The police may be tempted to push the boundaries as well. The fourth amendment to America’s constitution protects against unwarranted searches and seizures of evidence. This means it is hard to force someone to give a sample. But if obtaining such merely requires taking a swab of a surface in a public place—perhaps a keyboard someone has just used—the fourth amendment is unlikely to apply.

That is not necessarily wrong, if it means more criminals are caught and convicted. But it needs to be thought about carefully, because many metabolites are sticky. Cocaine is a case in point. Studies have shown that as many as two-thirds of the dollar bills in circulation in America carry traces of this substance, which might thus end up on the fingertips of the innocent, as well as the guilty.

Excerpts from Metabolites and You, Economist, Feb. 15, 2019

Biometrics Run Amok: Your Heartbeat ID, please

Before pulling the trigger, a sniper planning to assassinate an enemy operative must be sure the right person is in the cross-hairs. Western forces commonly use software that compares a suspect’s facial features or gait with those recorded in libraries of biometric data compiled by police and intelligence agencies. Such technology can, however, be foiled by a disguise, head-covering or even an affected limp. For this reason America’s Special Operations Command (SOC), which oversees the units responsible for such operations in the various arms of America’s forces, has long wanted extra ways to confirm a potential target’s identity. Responding to a request from SOC, the Combating Terrorism Technical Support Office (CTTSO), an agency of the defence department, has now developed a new tool for the job.

This system, dubbed Jetson, is able to measure, from up to 200 metres away, the minute vibrations induced in clothing by someone’s heartbeat. Since hearts differ in both shape and contraction pattern, the details of heartbeats differ, too. The effect of this on the fabric of garments produces what Ideal Innovations, a firm involved in the Jetson project, calls a “heartprint”—a pattern reckoned sufficiently distinctive to confirm someone’s identity.

To measure heartprints remotely Jetson employs gadgets called laser vibrometers. These work by detecting minute variations in a laser beam that has been reflected off an object of interest. They have been used for decades to study things like bridges, aircraft bodies, warship cannons and wind turbines—searching for otherwise-invisible cracks, air pockets and other dangerous defects in materials. However, only in the past five years or so has laser vibrometry become good enough to distinguish the vibrations induced in fabric by heartprints….
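The core signal-processing idea—pulling a periodic cardiac signature out of a noisy vibration trace—can be illustrated with a toy example. The Jetson system itself is not public; the sketch below simply simulates a noisy vibrometer trace containing a 1.2 Hz cardiac fundamental (with a harmonic) and recovers the dominant frequency by correlating against candidate sinusoids. All parameters are invented for illustration.

```python
# Toy sketch: recover a heartbeat frequency from a simulated, noisy
# vibration trace by brute-force correlation with candidate sinusoids.
import math
import random

FS = 250          # samples per second
DURATION = 10     # seconds of signal
HEART_HZ = 1.2    # simulated heart rate (~72 bpm)

random.seed(0)
n = FS * DURATION
signal = [
    math.sin(2 * math.pi * HEART_HZ * i / FS)            # fundamental
    + 0.4 * math.sin(2 * math.pi * 2 * HEART_HZ * i / FS)  # harmonic
    + random.gauss(0, 0.5)                               # sensor noise
    for i in range(n)
]

def band_power(freq):
    """Magnitude of the signal's correlation with a sinusoid at `freq` Hz."""
    re = sum(s * math.cos(2 * math.pi * freq * i / FS) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / FS) for i, s in enumerate(signal))
    return math.hypot(re, im)

# scan the plausible cardiac band, 0.7-3.0 Hz, in 0.1 Hz steps
candidates = [round(0.7 + 0.1 * k, 1) for k in range(24)]
dominant = max(candidates, key=band_power)
print(f"estimated heart rate: {dominant * 60:.0f} bpm")
```

A distinguishing "heartprint" would need far more than the fundamental frequency—the relative strength of harmonics and the beat-to-beat waveform shape—but frequency recovery from a weak, noisy trace is the starting point.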

Candice Tresch, a spokeswoman for the CTTSO…cannot discuss the process by which heartprint libraries might be built up in the first place. One starting point, presumably, would be to catalogue the heartbeats of detainees in the way that fingerprints and DNA samples are now taken routinely.

Excerpts from Personal identification: People can now be identified at a distance by their heartbeat, Economist, Jan 23, 2020

Over Your Dead Body: the Creation of Internet Companies

Jeff Kosseff’s “The Twenty-Six Words That Created the Internet” (2019) explains how the internet was created. The 26 words are these: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” They form Section 230 of the Communications Decency Act, itself a part of the Telecommunications Act of 1996. Section 230 shields online platforms from legal liability for content generated by third-party users. Put simply: If you’re harassed by a Facebook user, or if your business is defamed by a Yelp reviewer, you might be able to sue the harasser or the reviewer, assuming you know his or her identity, but don’t bother suing Facebook or Yelp. They’re probably immune. That immunity is what enabled American tech firms to become far more than producers of content (the online versions of newspapers, say, or company websites) and to harness the energy and creativity of hundreds of millions of individual users. The most popular sites on the web—YouTube, Twitter, Facebook, eBay, Reddit, Wikipedia, Amazon—depend in part or in whole on user-generated content…

Because of Section 230, the U.S. was able to cultivate online companies in ways that other countries—even countries in the developed world—could not….American law’s “internet exceptionalism,” as it’s known, is the source of mind-blowing technological innovation, unprecedented economic opportunity and a great deal of human pain. The book chronicles the plights of several people who found themselves targeted or terrorized by mostly anonymous users… Each of them sued the internet service providers or websites that facilitated these acts of malice and failed to do anything about them when alerted. And each lost—thanks to the immunity afforded to providers by Section 230.

Has the time come to delete the section?

Excerpt from Barton Swaim, ‘The Twenty-Six Words That Created the Internet’ Review: Protecting the Providers, WSJ, Aug. 19, 2019

Stasi Reborn: Democratizing Internet Censorship

The internet is the “spiritual home” of hundreds of millions of Chinese people. So China’s leader, Xi Jinping, described it in 2016. He said he expected citizens to help keep the place tidy. Many have taken up the challenge. In December 2019 netizens reported 12.2m pieces of “inappropriate” content to the authorities—four times as many as in the same month of 2015. The surge does not indicate that the internet in China is becoming more unruly. Rather, censorship is becoming more bottom-up.

Officials have been mobilising people to join the fight in this “drawn-out war”, as a magazine editor called it in a speech in September to Shanghai’s first group of city-appointed volunteer censors. “Internet governance requires that every netizen take part,” an official told the gathering. It was arranged by the city’s cyber-administration during its first “propaganda month” promoting citizen censorship. The 140 people there swore to report any online “disorder”…

Information-technology rules, which took effect on December 1st 2019, oblige new subscribers to mobile-phone services not only to prove their identities, as has long been required, but also to have their faces scanned. That, presumably, will make it easier for police to catch the people who post the bad stuff online.

Excerpt from The Year of the Rat-fink: Online Censorship, Economist, Jan 18, 2020

The Repressive Digital Technologies of the West

A growing, multi-billion-dollar industry exports “intrusion software” designed to snoop on smartphones, desktop computers and servers. There is compelling evidence that such software is being used by oppressive regimes to spy on and harass their critics. The same tools could also proliferate and be turned back against the West. Governments need to ensure that this new kind of arms export does not slip through the net.

A recent lawsuit brought by WhatsApp, for instance, alleges that more than 1,400 users of its messaging app were targeted using software made by NSO Group, an Israeli firm. Many of the alleged victims were lawyers, journalists and campaigners. (NSO denies the allegations and says its technology is not designed or licensed for use against human-rights activists and journalists.) Other firms’ hacking tools were used by the blood-soaked regime of Omar al-Bashir in Sudan. These technologies can be used across borders. Some victims of oppressive governments have been dissidents or lawyers living as exiles in rich countries.

Western governments should tighten the rules for moral, economic and strategic reasons. The moral case is obvious. It makes no sense for rich democracies to complain about China’s export of repressive digital technologies if Western tools can be used to the same ends. The economic case is clear, too: unlike conventional arms sales, a reduction in spyware exports would not lead to big manufacturing-job losses at home.

The strategic case revolves around the risk of proliferation. Software can be reverse-engineered, copied indefinitely and—potentially—used to attack anyone in the world…. There is a risk that oppressive regimes acquire capabilities that can then be used against not just their own citizens, but Western citizens, firms and allies, too. It would be in the West’s collective self-interest to limit the spread of such technology.

A starting-point would be to enforce existing export-licensing more tightly… Rich countries should make it harder for ex-spooks to pursue second careers as digital mercenaries in the service of autocrats. The arms trade used to be about rifles, explosives and jets. Now it is about software and information, too. Time for the regime governing the export of weapons to catch up.

The spying business: Western firms should not sell spyware to tyrants, Economist, Dec. 14, 2019

Dodging the Camera: How to Beat the Surveillance State in its Own Game

Powered by advances in artificial intelligence (AI), face-recognition systems are spreading like knotweed. Facebook, a social network, uses the technology to label people in uploaded photographs. Modern smartphones can be unlocked with it… America’s Department of Homeland Security reckons face recognition will scrutinise 97% of outbound airline passengers by 2023. Networks of face-recognition cameras are part of the police state China has built in Xinjiang, in the country’s far west. And a number of British police forces have tested the technology as a tool of mass surveillance in trials designed to spot criminals on the street.  A backlash, though, is brewing.

Refuseniks can also take matters into their own hands by trying to hide their faces from the cameras or, as has happened recently during protests in Hong Kong, by pointing hand-held lasers at CCTV cameras to dazzle them. Meanwhile, a small but growing group of privacy campaigners and academics are looking at ways to subvert the underlying technology directly…

Laser Pointers Used to Blind CCTV cameras during the Hong Kong Protests 2019

In 2010… an American researcher and artist named Adam Harvey created “cv [computer vision] Dazzle”, a style of make-up designed to fool face recognisers. It uses bright colours, high contrast, graded shading and asymmetric stylings to confound an algorithm’s assumptions about what a face looks like. To a human being, the result is still clearly a face. But a computer—or, at least, the specific algorithm Mr Harvey was aiming at—is baffled….

Modern Make-Up to Hide from CCTV cameras

HyperFace is a newer project of Mr Harvey’s. Where cv Dazzle aims to alter faces, HyperFace aims to hide them among dozens of fakes. It uses blocky, semi-abstract and comparatively innocent-looking patterns that are designed to appeal as strongly as possible to face classifiers. The idea is to disguise the real thing among a sea of false positives. Clothes with the pattern, which features lines and sets of dark spots vaguely reminiscent of mouths and pairs of eyes, are available…

Hyperface Clothing for Camouflage

Even in China, says Mr Harvey, only a fraction of CCTV cameras collect pictures sharp enough for face recognition to work. Low-tech approaches can help, too. “Even small things like wearing turtlenecks, wearing sunglasses, looking at your phone [and therefore not at the cameras]—together these have some protective effect”.

Excerpts from As face-recognition technology spreads, so do ideas for subverting it: Fooling Big Brother,  Economist, Aug. 17, 2019

Your Typing Discloses Who You Are: Behavioral Biometrics

Behavioural biometrics make it possible to identify an individual’s “unique motion fingerprint”,… With the right software, data from a phone’s sensors can reveal details as personal as which part of someone’s foot strikes the pavement first, and how hard; the length of a walker’s stride; the number of strides per minute; and the swing and spring in the walker’s hips and step. It can also work out whether the phone in question is in a handbag, a pocket or held in a hand.

Using these variables, Unifyid, a private company, sorts gaits into about 50,000 distinct types. When coupled with information about a user’s finger pressure and speed on the touchscreen, as well as a device’s regular places of use—as revealed by its GPS unit—that user’s identity can be pretty well determined….Behavioural biometrics can, moreover, go beyond verifying a user’s identity. It can also detect circumstances in which it is likely that a fraud is being committed. On a device with a keyboard, for instance, a warning sign is when the typing takes on a staccato style, with a longer-than-usual finger “flight time” between keystrokes. This, according to Aleksander Kijek, head of product at Nethone, a firm in Warsaw that works out behavioural biometrics for companies that sell things online, is an indication that the device has been hijacked and is under the remote control of a computer program rather than a human typist…
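The "staccato typing" signal described above—long, unnaturally uniform flight times between keystrokes—is easy to sketch. This is not Nethone's actual model; the function, thresholds and sample timings below are invented to illustrate the idea that scripted input tends to be both slower and far more regular than a human typist.

```python
# Illustrative sketch: flag keystroke streams whose inter-key "flight
# times" are long and nearly identical, a pattern associated in the
# article with remote-controlled, scripted input. Thresholds are invented.
from statistics import mean, pstdev

def looks_scripted(key_times_ms, min_flight=150, max_jitter=10):
    """key_times_ms: timestamps (ms) of successive keystrokes.
    Returns True when the gaps are long and metronomically uniform."""
    flights = [b - a for a, b in zip(key_times_ms, key_times_ms[1:])]
    if len(flights) < 3:
        return False          # too little evidence to judge
    return mean(flights) > min_flight and pstdev(flights) < max_jitter

human = [0, 95, 210, 290, 430, 520]     # quick, irregular gaps
bot   = [0, 200, 400, 601, 800, 1001]   # slow, near-identical gaps
print(looks_scripted(human), looks_scripted(bot))
```

A production system would combine many such features (pressure, swipe speed, device angle) in a statistical model rather than hand-set thresholds, but the feature extraction step looks much like this.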

Used wisely, behavioural biometrics could be a boon…Used unwisely, however, the system could become yet another electronic spy on people’s privacy, permitting complete strangers to monitor your every action, from the moment you reach for your phone in the morning, to when you fling it on the floor at night.

Excerpts from Behavioural biometrics: Online identification is getting more and more intrusive, Economist, May 23, 2019

Facebook Denizens Unite! The Right to Privacy and Big Tech

The European Union’s (EU) approach to regulating the big tech companies has two strands. One draws on its members’ cultures, which tend to protect individual privacy. The other uses the EU’s legal powers to boost competition. The first leads to the assertion that you have sovereignty over data about you: you should have the right to access them, amend them and determine who can use them. This is the essence of the General Data Protection Regulation (GDPR), whose principles are already being copied by many countries across the world. The next step is to allow interoperability between services, so that users can easily switch between providers, shifting to firms that offer better financial terms or treat customers more ethically. (Imagine if you could move all your friends and posts to Acebook, a firm with higher privacy standards than Facebook and which gave you a cut of its advertising revenues.)

Europe’s second principle is that firms cannot lock out competition. That means equal treatment for rivals who use their platforms. The EU has blocked Google from competing unfairly with shopping sites that appear in its search results or with rival browsers that use its Android operating system. A German proposal says that a dominant firm must share bulk, anonymised data with competitors, so that the economy can function properly instead of being ruled by a few data-hoarding giants. (For example, all transport firms should have access to Uber’s information about traffic patterns.) Germany has changed its laws to stop tech giants buying up scores of startups that might one day pose a threat.

As Ms Vestager has explained, popular services like Facebook use their customers as part of the “production machinery”. …The logical step beyond limiting the accrual of data is demanding their disbursement. If tech companies are dominant by virtue of their data troves, competition authorities working with privacy regulators may feel justified in demanding they share those data, either with the people who generate them or with other companies in the market. That could whittle away a big chunk of what makes big tech so valuable, both because Europe is a large market, and because regulators elsewhere may see Europe’s actions as a model to copy. It could also open up new paths to innovation.

In recent decades, American antitrust policy has been dominated by free-marketeers of the so-called Chicago School, deeply sceptical of the government’s role in any but the most egregious cases. Dominant firms are frequently left unmolested in the belief they will soon lose their perch anyway…By contrast, “Europe is philosophically more sceptical of firms that have market power.” ..

Tech lobbyists in Brussels worry that Ms Vestager agrees with those who believe that their data empires make Google and its like natural monopolies, in that no one else can replicate Google’s knowledge of what users have searched for, or Amazon’s of what they have bought. She sent shivers through the business in January when she compared such companies to water and electricity utilities, which because of their irreproducible networks of pipes and power lines are stringently regulated….

The idea is for consumers to be able to move data about their Google searches, Amazon purchasing history or Uber rides to a rival service. So, for example, social-media users could post messages to Facebook from other platforms with approaches to privacy that they prefer…

Excerpts from Why Big Tech Should Fear Europe, Economist, Mar. 3, 2019; The Power of Privacy, Economist, Mar. 3, 2019

Your Biometric Data in Facebook

A federal judge has dismissed a class action lawsuit against Facebook after the California-based social media site claimed there was a lack of personal jurisdiction in Illinois. The plaintiff in the case, Fredrick William Gullen, filed the complaint alleging violations of the Illinois Biometric Information Privacy Act. Gullen is not a Facebook user, but he alleged that his image was uploaded to the site and that his biometric identifiers and biometric information were collected, stored and used by Facebook without his consent. The Illinois Biometric Information Privacy Act, implemented in 2008, regulates the collection, use, and storage of biometric identifiers and biometric information such as scans of face or hand geometry. The act specifically excludes photographs, demographic information, and physical descriptions….

In the Facebook case, no ruling has been made on whether the information on Facebook counts as biometric identifiers and biometric information under the Illinois Biometric Information Privacy Act. Instead, the judge agreed with Facebook that the case could not be tried in Illinois.

However, the company is currently facing a proposed class action in California relating to some of the same questions….How the California class action will play out remains to be seen. California does not yet have a clear policy on biometric privacy. A bill pending in the state’s legislature would extend the scope of the data security law to include biometric data as well as geophysical location, but it has not yet become law. The question of privacy with regard to biometric information is one that has garnered increasing attention in recent months. On Feb. 4, 2016, the Biometrics Institute, an independent research and analysis organization, released revised guidelines comprising 16 privacy principles for companies that gather and use biometrics data.

Excerpts from Emma Gallimore, Federal judge boots Illinois biometrics class action against Facebook, Legal Newswire, Feb. 22, 2016, 12:15pm

See also the case (pdf)

By Hook or By Crook: Harvesting DNA of Indigenous Peoples

Tensions persist between Western scientists and Indigenous communities around the world. (“Indigenous” is an internationally inclusive term for the original inhabitants, and their descendants, of regions later colonized by other groups.) Scientists have used Indigenous samples without permission, disregarded their customs around the dead, and resisted returning samples, data, and human remains to those who claim them. Indigenous communities have often responded by severely restricting scientists’ sampling of their bodies and their ancestors, even as genomics has boomed, with increasing relevance for health….

The Indigenous Peoples in Genomics (SING) program aims to train Indigenous scientists in genomics so that they can introduce that field’s tools to their communities as well as bring a sorely needed Indigenous perspective to research. Since Malhi helped found it at UI in 2011, SING has trained more than 100 graduates and has expanded to New Zealand and Canada. The program has created a strong community of Indigenous scientists and non-Indigenous allies who are raising the profile of these ethical issues and developing ways to improve a historically fraught relationship…

Some Indigenous communities, such as the Navajo Nation, decline to participate in genetic research at all. And many tribes don’t permit research on their ancestors’ remains. Such opposition can feel like a hostile stumbling block to Western scientists, some of whom have gone to court to gain or maintain access to Indigenous samples. Not being able to study at least some early samples would “result in a world heritage disaster of unprecedented proportions,” the American Association of Physical Anthropologists said in 2007 in a debate over an amendment to the Native American Graves Protection and Repatriation Act.

To understand why so many Indigenous people distrust Western scientists, consider how intertwined science has been with colonialism, says SING co-founder Kim TallBear, an anthropologist at the University of Alberta in Edmonton, Canada, and a member of the Sisseton Wahpeton Oyate in North and South Dakota. “While the U.S. was moving westward, stealing land, and massacring Indians, you had contract grave robbers coming out onto the battlefields and immediately picking up the dead—Native people—and boiling them down to bone, and sending their bones back east,” she says. Many of those skeletons were displayed and studied in museums by researchers who used them to argue for the biological inferiority of Indigenous people. Some of those skeletons are still there. “Science was there, always. It’s part of that power structure,” she says.

Many Indigenous communities see echoes of this painful history reverberating in the 21st century. In 2003, the Havasupai Tribe in Arizona discovered that samples taken for a study on diabetes had been used for research projects they had never consented to, including on population genetics and schizophrenia. They sued Arizona State University in Tempe, which eventually returned the samples and paid $700,000 to the tribe (Science, 30 April 2010)…

Researchers working for the Human Genome Diversity Project (HGDP), a major international effort, were collecting samples from around the world to build a public database of global genetic variation. The project publicly emphasized the importance of collecting DNA from genetically isolated Indigenous populations before they “went extinct.”  That rationale “was offensive to Indigenous populations worldwide,” Gachupin says. “Resources for infrastructure and for the wellbeing of the community were not forthcoming, and yet now here were these millions and millions of dollars being invested to ‘save’ their DNA.” The message from the scientific establishment was, she says, “We don’t care about the person. We just want your DNA.” Some activists dubbed the HGDP “the Vampire Project,” believing the only beneficiaries would be Western scientists and people who could afford costly medical treatments.

Excerpts from Lizzie Wade, Bridging the Gap, Science,  Sept. 28, 2018

How to Stop the Exploitation of Internet Users

Data breaches at Facebook and Google—and along with Amazon, those firms’ online dominance—crest a growing wave of anxiety around the internet’s evolving structure and its impact on humanity…The runaway success of a few startups has created new, proprietized one-stop platforms. Many people are not really using the web at all, but rather flitting among a small handful of totalizing apps like Facebook and Google. And those application-layer providers have dabbled in providing physical-layer internet access. Facebook’s Free Basics program has been one of several experiments that use broadband data cap exceptions to promote some sites and services over others.

What to do? Columbia University law professor Tim Wu has called upon regulators to break up giants like Facebook, but more subtle interventions should be tried first…Firms that do leverage users’ data should be “information fiduciaries,” obliged to use what they learn in ways that reflect a loyalty to users’ interests…The internet was designed to be resilient and flexible, without need for drastic intervention. But its trends toward centralization, and exploitation of its users, call for action.

Excerpts from Jonathan Zittrain, Fixing the internet, Science, Nov. 23, 2018

Behavior Mining

Understanding and assessing the readiness of the warfighter is complex, intrusive, done relatively infrequently, and relies heavily on self-reporting. Readiness is determined through medical intervention with the help of advanced equipment, such as electrocardiographs (EKGs) and other specialized medical devices that are too expensive and cumbersome to employ continuously without supervision in non-controlled environments. On the other hand, currently 92% of adults in the United States own a cell phone, which could be used as the basis for continuous, passive health and readiness assessment. The WASH program will use data collected from cellphone sensors to enable novel algorithms that conduct passive, continuous, real-time assessment of the warfighter.

DARPA’s WASH [Warfighter Analytics using Smartphones for Health] will extract physiological signals, which may be weak and noisy, that are embedded in the data obtained through existing mobile device sensors (e.g., accelerometer, screen, microphone). Such extraction and analysis, done on a continuous basis, will be used to determine current health status and identify latent or developing health disorders. WASH will develop algorithms and techniques for identifying both known indicators of physiological problems (such as disease, illness, and/or injury) and deviations from the warfighter’s micro-behaviors that could indicate such problems.
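A minimal example of the kind of passive signal such analytics would start from is counting gait cycles in a phone's accelerometer stream. The sketch below is not the WASH algorithm (which is not public); it simulates an accelerometer-magnitude trace and counts steps as upward crossings of a threshold, with the sampling rate, cadence and threshold all invented for illustration.

```python
# Toy sketch: count gait cycles in a simulated accelerometer-magnitude
# trace by threshold crossings. All parameters are illustrative.
import math

def step_count(samples, threshold=1.2):
    """samples: accelerometer magnitudes in g. Each upward crossing
    of `threshold` is counted as one step."""
    steps = 0
    for prev, cur in zip(samples, samples[1:]):
        if prev < threshold <= cur:
            steps += 1
    return steps

FS = 50                        # sensor sampling rate, Hz
trace = [1.0 + 0.5 * math.sin(2 * math.pi * 1.8 * i / FS)  # ~1.8 steps/s
         for i in range(FS * 10)]                          # 10 seconds
print(step_count(trace))
```

From counts like this one can derive strides per minute and, with more features (stride regularity, impact sharpness), the deviations from a person's baseline "micro-behaviours" that the solicitation describes.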

Excerpt from Warfighter Analytics using Smartphones for Health (WASH)
Solicitation Number: DARPA-SN-17-4, May, 2, 2018

See also Modeling and discovering human behavior from smartphone sensing life-log data for identification purpose

Data Security: Real Fear

On its website, ProfitBricks touts what it calls “100 percent German data protection,” underneath the black, red, and gold colors of the German flag. “Having a German cloud helps tremendously,” says Markus Schaffrin, an IT security expert at Eco, a lobbying group for Internet companies. “Germany has some of the most stringent data-protection laws, and cloud-service providers with domestic data centers are of course highlighting that.”

The companies known as the Mittelstand—the small and midsize enterprises that form the backbone of the German economy—are rapidly embracing the idea of the networked factory. Yet they remain wary of entrusting intellectual property to a cloud controlled by global technology behemoths and possibly subject to government snooping. “Small and medium enterprises are afraid that those monsters we sometimes call Internet companies will suck out the brain of innovation,” says Joe Kaeser, chief executive officer of Siemens, which in March began offering cloud services using a network managed by German software powerhouse SAP.

In a case being closely watched in Germany, the U.S. Department of Justice has demanded that Microsoft hand over e-mails stored on a data server in Ireland. The software maker argues that the U.S. has no jurisdiction there; the U.S. government says it does, because Microsoft is an American company. …

U.S. companies aren’t ceding the market. Microsoft will offer its Azure public cloud infrastructure in German data centers, with T-Systems acting as a trustee of customer data. The companies say the arrangement will keep information away from non-German authorities. And IBM in December opened a research and sales hub for Watson, its cloud-based cognitive computing platform, in Munich—a move intended to reassure Mittelstand buyers about the security of their data. “If a customer wants data never to leave Bavaria, then it won’t,” says Harriet Green, IBM’s general manager for Watson. “I’m being invited in by many, many customers in Germany, because fear about security is very, very real.”

Excerpts from Building a National Fortress in the Cloud, Bloomberg, May 19, 2016

Platform Capitalism: FANG

Hardly a day goes by without some tech company proclaiming that it wants to reinvent itself as a platform. …Some prominent critics even speak of “platform capitalism” – a broader transformation of how goods and services are produced, shared and delivered. Such is the transformation we are witnessing across many sectors of the economy: taxi companies used to transport passengers, but Uber just connects drivers with passengers. Hotels used to offer hospitality services; Airbnb just connects hosts with guests. And this list goes on: even Amazon connects booksellers with buyers of used books….

But Uber’s offer to drivers in Seoul does raise some genuinely interesting questions. What is it that Uber’s platform offers that traditional cabs can’t get elsewhere? It’s mostly three things: payment infrastructure to make transactions smoother; identity infrastructure to screen out any unwanted passengers; and sensor infrastructure, present on our smartphones, which traces the location of the car and the customer in real time. This list has hardly anything to do with transport; they are the kind of peripheral activity that traditional taxi companies have always ignored.
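The sensor infrastructure Morozov describes reduces, at bottom, to continuously comparing coordinates. A minimal sketch of how a platform might match a passenger to the closest car using the haversine great-circle formula; the coordinates and the `nearest_driver` dispatcher are invented for illustration, not Uber's actual logic:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_driver(passenger, drivers):
    """Pick the available driver closest to the passenger's current position."""
    return min(drivers, key=lambda d: haversine_km(*passenger, *d["pos"]))

passenger = (37.7749, -122.4194)  # San Francisco
drivers = [
    {"id": "a", "pos": (37.7849, -122.4094)},  # about 1.4 km away
    {"id": "b", "pos": (37.8044, -122.2712)},  # Oakland, much farther
]
print(nearest_driver(passenger, drivers)["id"])  # prints "a"
```

The same distance comparison, run every few seconds against the phone's GPS fix, is all the "real-time tracing" of car and customer amounts to.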

However, with the transition to a knowledge-based economy, these peripherals are no longer really peripherals – they are at the very centre of service provision. There’s a good reason why so many platforms are based in Silicon Valley: the main peripherals today are data, algorithms and server power. And this explains why so many renowned publishers would team up with Facebook to have their stories published there in a new feature called Instant Articles. Most of them simply do not have the know-how and the infrastructure to be as nimble, resourceful and impressive as Facebook when it comes to presenting the right articles to the right people at the right time – and doing it faster than any other platform.

Few industries could remain unaffected by the platform fever. The unspoken truth, though, is that most of the current big-name platforms are monopolies, riding on the network effects of operating a service that becomes more valuable as more people join it. This is why they can muster so much power; Amazon is in constant power struggles with publishers – but there is no second Amazon they can turn to.
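The network effect the paragraph describes can be given a back-of-the-envelope form. Under the Metcalfe-style argument, a network's potential value tracks the number of distinct user pairs rather than the number of users, so an incumbent's lead compounds; this is an illustrative toy model, not a claim about any firm's actual valuation:

```python
def possible_connections(n):
    """Metcalfe-style proxy for network value: distinct pairs among n users."""
    return n * (n - 1) // 2

# Doubling the user base roughly quadruples the number of possible connections,
# which is why a "second Amazon" is so hard to bootstrap from zero.
for n in (1_000, 2_000, 4_000):
    print(n, possible_connections(n))
```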

Venture capitalists such as Peter Thiel want us to believe that this monopoly status is a feature, not a bug: if these companies weren’t monopolies, they would never have so much cash to spend on innovation.  This, however, still doesn’t address the question of just how much power we should surrender to these companies.

Making sure that we can move our reputation – as well as our browsing history and a map of our social connections – between platforms would be a good start. It’s also important to treat other, more technical parts of the emerging platform landscape – from services that can verify our identity to new payment systems to geolocational sensors – as actual infrastructure, ensuring that everybody can access it on the same, nondiscriminatory terms.

Most platforms are parasitic: feeding off existing social and economic relations. They don’t produce anything on their own – they only rearrange bits and pieces developed by someone else. Given the enormous – and mostly untaxed – profits made by such corporations, the world of “platform capitalism”, for all its heady rhetoric, is not so different from its predecessor. The only thing that’s changed is who pockets the money.

Excerpt from Evgeny Morozov, Where Uber and Amazon rule: welcome to the world of the platform, Guardian, Nov. 15, 2015

Looking Behind the Brick Wall: Military

From the DARPA website on project  Revolutionary Enhancement of Visibility by Exploiting Active Light-fields (REVEAL) program

Imagine, for example, squad members patrolling a street in a deployed urban environment, and an armed assailant crouching behind a car or a concrete barrier. Without the benefit of different vantage points (from the air, for example), the squad could be blind to the hidden threat. If by chance a glass storefront window were behind the assailant, the squad might spot the assailant’s reflection in the window. But if the backdrop were a brick wall, there would be no visible reflection. By exploiting currently untapped aspects of light and the varied paths of photons bouncing off the brick wall, troops using hardware based on the theoretical foundations provided by REVEAL might someday be able to detect the otherwise hidden assailant [or see clearly what people are doing inside their homes].

Another potential application could be determining an unknown material’s composition and other properties from a safe distance, avoiding the potential danger associated with close proximity and physical examination. Based on information carried by the photons interacting with the material, it may be possible for troops in the future to identify radioactive, biological or chemical threats and camouflaged targets from much farther away than currently possible.

See also FBO

Online Anonymity Guaranteed by DARPA

From the DARPA website—DARPA “BRANDEIS” PROGRAM AIMS TO ENSURE ONLINE PRIVACY

DARPA announced plans on March 11, 2015 to research and develop tools for online privacy, one of the most vexing problems facing the connected world as devices and data proliferate beyond a capacity to be managed responsibly. The program is named for former Supreme Court Justice Louis Brandeis, who while a student at Harvard Law School co-developed the concept of a “right to privacy”…The goal of DARPA’s newly launched Brandeis program is to enable information systems that would allow individuals, enterprises and U.S. government agencies to keep personal and/or proprietary information private.

Existing methods for protecting private information fall broadly into two categories: filtering the release of data at the source, or trusting the user of the data to provide diligent protection. Filtering data at the source, such as by removing a person’s name or identity from a data set or record, is increasingly inadequate because of improvements in algorithms that can cross-correlate redacted data with public information to re-identify the individual. According to research conducted by Dr. Latanya Sweeney at Carnegie Mellon University, birthdate, zip code and gender are sufficient to identify 87% of Americans by name.
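The attack behind Sweeney's figure is mechanically simple: a "de-identified" release and a public record set are joined on the shared quasi-identifiers (zip code, birthdate, gender), re-attaching names to sensitive rows. A toy sketch with entirely hypothetical data:

```python
# Hypothetical toy datasets: a "de-identified" release and a public roster.
medical = [
    {"zip": "02138", "dob": "1945-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "dob": "1972-03-14", "sex": "M", "diagnosis": "asthma"},
]
voters = [
    {"name": "J. Doe", "zip": "02138", "dob": "1945-07-31", "sex": "F"},
    {"name": "R. Roe", "zip": "02140", "dob": "1980-01-01", "sex": "M"},
]

def reidentify(released, public):
    """Join the two datasets on the quasi-identifier triple (zip, dob, sex)."""
    key = lambda r: (r["zip"], r["dob"], r["sex"])
    index = {key(p): p["name"] for p in public}
    return [(index[key(r)], r["diagnosis"]) for r in released if key(r) in index]

print(reidentify(medical, voters))  # the 02138 record re-attaches to a name
```

Removing the name column from the first dataset protected nothing: the triple is unique enough to act as a name.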

On the other side of the equation, trusting an aggregator and other data recipients to diligently protect their store of data is also difficult. In the past few months alone, as many as 80 million social security numbers were stolen from a health insurer, terabytes of sensitive corporate data (including personnel records) were exfiltrated from a major movie studio and many personal images were illegitimately downloaded from cloud services.

“Currently, most consumers do not have effective mechanisms to protect their own data, and the people with whom we share data are often not effective at providing adequate protection.”

The vision of the Brandeis program is to break the tension between (a) maintaining privacy and (b) being able to tap into the huge value of data. Rather than having to balance between them, Brandeis aims to build a third option, enabling safe and predictable sharing of data in which privacy is preserved. Specifically, Brandeis will develop tools and techniques that enable us to build systems in which private data may be used only for its intended purpose and no other. The potential for impact is dramatic.
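The announcement does not prescribe a mechanism, but differential privacy is one well-known technique in the spirit Brandeis describes: aggregate queries are answered with calibrated noise, so the data's statistical value is tapped while no single record noticeably shifts the answer. A minimal sketch of the standard Laplace mechanism, with invented data:

```python
import math
import random

def dp_count(values, predicate, epsilon=0.5):
    """Release a count with Laplace noise of scale 1/epsilon.

    A counting query has sensitivity 1 (adding or removing one person changes
    the true count by at most 1), so this scale suffices for epsilon-DP.
    """
    true_count = sum(1 for v in values if predicate(v))
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    # Laplace sample via the inverse-CDF method
    noise = -(1.0 / epsilon) * math.copysign(1, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

ages = [23, 35, 45, 52, 61, 29, 38, 47]
print(dp_count(ages, lambda a: a >= 40))  # noisy count of people aged 40+
```

Smaller `epsilon` means stronger privacy and noisier answers; the trade-off is explicit rather than an all-or-nothing choice between releasing data and locking it away.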

Assured data privacy can open the doors to personal medicine (leveraging cross-linked genotype/phenotype data), effective smart cities (where buildings, energy use, and traffic controls are all optimized minute by minute), detailed global data (where every car is gathering data on the environment, weather, emergency situations, etc.), and fine grained internet awareness (where every company and device shares network and cyber-attack data). Without strong privacy controls, every one of these possibilities would face systematic opposition [it should].

From the DARPA website

Data Hunger: Google

[Some] worry that Google could prove to be the ultimate digital monopoly. They do not think that its reason for being is primarily online search or the advertising business; they see it as being in the business of mining any and all data it can accumulate for new profit streams. The data hunger such a goal demands is the main reason, they argue, why Google is entering markets as diverse as self-driving cars, smart homes, robotics and health care. “Google is trying to leverage the advantage it has in one area into many others,” says Nathan Newman, a lawyer and technology activist. The idea is that Google could use its assets—its data, its unparalleled ability to exploit those data, its brilliant employees and knack for managing them—to take control of other industries.

For such a data-centric conglomerate to get ever more dominant seems against the flow of history and intuitively unlikely. But intuitive views of the direction of internet competition have been wrong before, as the existence of giants like Google, Amazon and Facebook bears witness. And should it show signs of coming to pass, the current antitrust skirmishes will give way to an epic battle on the scale of the one against Standard Oil. “If we will not endure a king as a political power,” said John Sherman, the senator who gave his name to America’s original antitrust law, “we should not endure a king over the production, transportation and sale of any of the necessaries of life.” Even one that makes things very, very easy.

Excerpt from Internet monopolies: Everybody wants to rule the world, Economist, Nov. 29, 2014. at 19

Internet or Equinet?

“The Internet governance should be multilateral, transparent, democratic, and representative, with the participation of governments, private sector, civil society, and international organizations, in their respective roles. This should be one of the foundational principles of Internet governance,” the external affairs ministry says in its initial submission to the April 23-24 Global Multistakeholder Meeting on the Future of Internet Governance, also referred to as NETmundial, in Sao Paulo, Brazil. The proposal for a decentralised Internet is significant in view of Edward Snowden’s revelations of mass surveillance in recent months.

“The structures that manage and regulate the core Internet resources need to be internationalized, and made representative and democratic. The governance of the Internet should also be sensitive to the cultures and national interests of all nations.” “The mechanism for governance of the Internet should therefore be transparent and should address all related issues. The Internet must be owned by the global community for mutual benefit and be rendered impervious to possible manipulation or misuse by any particular stakeholder, whether state or non-state,” the ministry note says. NETmundial will see representatives from nearly 180 countries participating to debate the future of the Internet…

The US announced last month its intent to relinquish control of a vital part of the Internet Corporation for Assigned Names and Numbers (ICANN) – the Internet Assigned Numbers Authority (IANA). “Many nations still think that a multilateral role might be more suitable than a multistakeholder approach, and two years back India had proposed a 50-nation ‘Committee of Internet Related Policies’ (CIRP) for global internet governance,” Bhattacharjee added.

The concept of Equinet was first floated by Communications Minister Kapil Sibal in 2012 at the Internet Governance Forum in Baku, Azerbaijan. Dr. Govind, chief executive officer of the National Internet Exchange of India, is hopeful that Equinet is achievable. “Equinet is a concept of the Internet as a powerful medium benefiting people across the spectrum. It is all the more significant for India as we have 220 million Internet users, standing third globally after China and the US.” “Moreover, by the year-end India’s number of Internet users is expected to surpass that of the US. The word Equinet means an equitable Internet which plays the role of an equaliser in the society and not limited only to the privileged people.”

He said the role of government in Internet management is important as far as policy, security and privacy of cyberspace are concerned, but the roles of the private sector, civil society and other stakeholders are no less important. “Internet needs to be managed in a more collaborative, cooperative, consultative and consensual manner.”

Talking about the global strategy of renaming the Internet as Equinet, he said: “Globally the US has the largest control over the management of the Internet, which is understandable since everything about the Internet started there. Developing countries still have not much say over the global management of the Internet. But it is important that Internet management be more decentralised and globalised so that the developing countries have more participation and a say in the management, where their consent is taken as well.”

The ministry note said: “A mechanism for accountability should be put in place in respect of crimes committed in cyberspace, such that the Internet is a free and secure space for universal benefaction. A ‘new cyber jurisprudence’ needs to be evolved to deal with cyber crime, without being limited by political boundaries, and cyber-justice can be delivered in near real time.”

But other experts doubt the possibility of an Equinet, or of equalising the Internet globally. Sivasubramanian Muthusamy, president of the Internet Society India, Chennai, and a participant in NETmundial, told IANS that the idea of Equinet is not achievable. “Totally wrong idea. Internet provides a level playing field already. It is designed and operated to be universally accessible, free and open. Internet as it is operated today offers the greatest hope for developing countries to access global markets and prosper.” “The idea of proposing to rename the Internet as Equinet has a political motive, that would pave way for telecom companies to have a bigger role to bring in harmful commercial models that would destabilize the open architecture of the Internet. If India is considering such a proposal, it would be severely criticized. The proposal does not make any sense. It is wrong advice or misplaced input that must have prompted the government of India to think of such a strange idea,” he said.

Excerpt from India wants Internet to become Equinet, Business Standard, Apr. 20, 2014

The Transparent Individual

By integrating data you want into the visual field in front of you, Google Glass is meant to break down the distinction between looking at the screen and looking at the world. When switched on, its microphones will hear what you hear, allowing Glass to, say, display on its screen the name of any song playing nearby…It could also contribute a lot to the company’s core business. Head-mounted screens would let people spend time online that would previously have been offline. They also fit with the company’s interest in developing “anticipatory search” technology—ways of delivering helpful information before users think to look for it. Glass will allow such services to work without the customer even having to reach for a phone, slipping them ever more seamlessly into the wearer’s life. A service called Google Now already scans a user’s online calendar, e-mail and browsing history as a way of providing information he has not yet thought to look for. How much more it could do if it saw through his eyes or knew whom he was talking to…

People may in time want to live on camera in ways like this, if they see advantages in doing so. But what of living on the cameras of others? “Creep shots”—furtive pictures of breasts and bottoms taken in public places—are a sleazy fact of modern life. The camera phone has joined the Chinese burn in the armamentarium of the school bully, and does far more lasting damage. As cameras connect more commonly, sometimes autonomously, to the internet, hackers have learned how to take control of them remotely, with an eye to mischief, voyeurism or blackmail.  More wearable cameras probably mean more possibilities for such abuse.

Face-recognition technology, which allows software to match portraits to people, could take things further. The technology is improving, and is already used as an unobtrusive, fairly accurate way of knowing who people are. Some schools, for example, use it to monitor attendance. It is also being built into photo-sharing sites: Facebook uses it to suggest the names with which a photo you upload might be tagged. Governments check whether faces are turning up on more than one driver’s licence per jurisdiction; police forces identify people seen near a crime scene. Documents released to the Electronic Frontier Foundation, a campaign group, show that in August 2012 the Federal Bureau of Investigation’s “Next Generation Identification” database contained almost 13m searchable images of about 7m subjects.
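The article does not describe the FBI's actual pipeline, but face-matching systems of this kind typically map each face image to a numeric embedding vector and compare embeddings by similarity, declaring a match above a tuned threshold. An illustrative sketch with hypothetical three-dimensional embeddings (real systems use hundreds of dimensions and learned models to produce them):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify(probe, gallery, threshold=0.8):
    """Return the best-matching enrolled identity, or None if below threshold."""
    name = max(gallery, key=lambda n: cosine_similarity(probe, gallery[n]))
    return name if cosine_similarity(probe, gallery[name]) >= threshold else None

# Hypothetical enrolled gallery of searchable face embeddings
gallery = {"subject_a": [1.0, 0.0, 0.0], "subject_b": [0.0, 1.0, 0.0]}
print(identify([0.9, 0.1, 0.0], gallery))  # close to subject_a's embedding
print(identify([0.5, 0.5, 0.5], gallery))  # ambiguous: falls below threshold
```

The threshold is the policy lever: lower it and more crime-scene photos "match" someone in the 13m-image database, at the cost of more false accusations.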

Face recognition is a technology, like that of drones, which could be a boon to all sorts of surveillance around the world, and may make mask-free demonstrations in repressive states a thing of the past. The potential for abuse by people other than governments is clear, too…In America, warrants to seize user data from Facebook often also request any stored photos in which the suspect has been tagged by friends (though the firm does not always comply). Warrants as broad as some of those from which the National Security Agency and others have benefited in the past could allow access to all stored photos taken in a particular place and time.

The people’s panopticon, Economist,  Nov. 16, 2013, at 27

Open Government Data a Bonus to Mining Companies

On May 9th, 2013 Barack Obama ordered that all data created or collected by America’s federal government must be made available free to the public, unless this would violate privacy, confidentiality or security. “Open and machine-readable”, the president said, is “the new default for government information.”

This is a big bang for big data, and will spur a frenzy of activity. Pollution numbers will affect property prices. Restaurant reviews will mention official sanitation ratings. Data from tollbooths could be used to determine prices for nearby billboards. Combining data from multiple sources will yield fresh insights. For example, correlating school data with transport information and tax returns may show that academic performance depends less on income than the amount of time parents spend with their brats.
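The cross-source insight the article imagines is, mechanically, a join on a shared key followed by a simple statistic. A toy sketch with invented per-student records, correlating test scores with parental time across two notionally separate open datasets:

```python
import math

# Hypothetical records drawn from two separate open-data releases
test_scores  = {"s1": 85, "s2": 70, "s3": 92, "s4": 64}
parent_hours = {"s1": 10, "s2": 4,  "s3": 12, "s4": 3}

def pearson(xs, ys):
    """Pearson correlation coefficient of two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Join the two sources on the shared student identifier, then correlate
keys = sorted(test_scores.keys() & parent_hours.keys())
r = pearson([test_scores[k] for k in keys], [parent_hours[k] for k in keys])
print(round(r, 2))  # strong positive correlation in this made-up sample
```

The same shared-key join is also exactly what makes the privacy hurdle in the next paragraph so difficult: innocuous datasets become revealing once linked.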

Over the next few months federal agencies must make an inventory of their data and prioritise their release. They must also take steps not to release information that, though innocuous on its own, could be joined with other data to undermine privacy—a difficult hurdle.  Many countries have moved in the same direction. In Europe the information held by governments could be used to generate an estimated €140 billion ($180 billion) a year. Only Britain has gone as far as America in making data available, however. For example, it requires the cost of all government transactions with citizens to be made public. Not all public bodies are keen on transparency. The Royal Mail refuses to publish its database of postal addresses because it makes money licensing it to businesses. On May 15th an independent review decried such practices, arguing that public-sector data belong to the public.

Rufus Pollock of the Open Knowledge Foundation, a think-tank, says most firms will eventually use at least some public-sector information in their business.

Open data: A new goldmine, Economist,  May 18, 2013, at 73

Watching your Internet Fingerprint

The current standard method for validating a user’s identity for authentication on an information system requires humans to do something that is inherently difficult: create, remember, and manage long, complex passwords. Moreover, as long as the session remains active, typical systems incorporate no mechanisms to verify that the user originally authenticated is the user still in control of the keyboard. Thus, unauthorized individuals may improperly obtain extended access to information system resources if a password is compromised or if a user does not exercise adequate vigilance after initially authenticating at the console.

The Active Authentication program seeks to address this problem by developing novel ways of validating the identity of the person at the console that focus on the unique aspects of the individual through the use of software-based biometrics. Biometrics is defined as the characteristics used to uniquely recognize humans based upon one or more intrinsic physical or behavioral traits. This program focuses on the computational behavioral traits that can be observed through how we interact with the world. Just as when you touch something with your finger you leave behind a fingerprint, when you interact with technology you do so in a pattern based on how your mind processes information, leaving behind a “cognitive fingerprint.”

This BAA addresses the first phase of this program. In the first phase, the focus will be on researching biometrics that do not require the installation of additional hardware sensors. Rather, DARPA will look for research on biometrics that can be captured through the technology already in use in a standard DoD office environment, looking for aspects of the “cognitive fingerprint.” A heavy emphasis will be placed on validating any potential new biometrics with empirical tests to ensure they would be effective in large-scale deployments.

The later planned phases of the program that are not addressed in this BAA will focus on developing a solution that integrates any available biometrics using a new authentication platform suitable for deployment on a standard Department of Defense desktop or laptop. The planned combinatorial approach of using multiple modalities for continuous user identification and authentication is expected to deliver a system that is accurate, robust, and transparent to the user’s normal computing experience. The authentication platform is planned to be developed with open Application Programming Interfaces (APIs) to allow the integration of other software or hardware biometrics available in the future from any source.

The combined aspects of the individual that this program is attempting to uncover are the aspects that are the computational behavioral “fingerprint” of the person at the keyboard. This has also been referred to in existing research as the “cognitive fingerprint.” The proposed theory is that how individuals formulate their thoughts and actions are reflected through their behavior, and this behavior in turn can be captured as metrics in how the individual performs tasks using the computer.

Some examples of the computational behavior metrics of the cognitive fingerprint include:

− keystrokes

− eye scans

− how the user searches for information (verbs and predicates used)

− how the user selects information (verbs and predicates used)

− how the user reads the material selected

• eye tracking on the page

• speed with which the individual reads the content

− methods and structure of communication (exchange of email)

These examples are only provided for illustrative purposes and are not intended as a list of potential research topics. The examples above include potential biometrics that would not be supported through this BAA due to a requirement for the deployment of additional hardware based sensors (such as tracking eye scans).
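Of the examples above, keystroke timing is the easiest to make concrete: a continuous-authentication check might compare the inter-keystroke intervals of the current session against an enrolled profile and flag a drift in rhythm. A deliberately simplified sketch; the timings, threshold, and function names are all invented, and real systems use far richer statistical models than a mean absolute deviation:

```python
def digraph_latencies(timestamps):
    """Inter-keystroke intervals (ms) from successive key-press timestamps."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def profile_distance(sample, enrolled):
    """Mean absolute deviation between a session sample and the enrolled profile."""
    return sum(abs(s - e) for s, e in zip(sample, enrolled)) / len(enrolled)

def still_same_user(sample, enrolled, tolerance_ms=40):
    """Crude continuous check: is the current typing rhythm near the profile?"""
    return profile_distance(sample, enrolled) <= tolerance_ms

enrolled = digraph_latencies([0, 110, 260, 390, 530])   # the user's typical rhythm
session  = digraph_latencies([0, 120, 250, 400, 520])   # close to the profile
intruder = digraph_latencies([0, 300, 340, 700, 760])   # markedly different rhythm
print(still_same_user(session, enrolled))    # True
print(still_same_user(intruder, enrolled))   # False
```

Because it needs only the keyboard already on every desk, this is the sort of biometric the first phase's no-new-hardware constraint permits, unlike eye tracking.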

Excerpt from, Broad Agency Announcement, Active Authentication, DARPA-BAA-12-06, January 12, 2012

On Feb. 12, 2013, two groups announced related projects. The first is an industry group calling itself the FIDO (Fast IDentity Online) Alliance. It consists of the computer maker Lenovo, the security firm Nok Nok Labs, the online-payment giant PayPal, the biometrics experts Agnito, and the authentication specialists Validity. The second is the Defense Advanced Research Projects Agency (DARPA), a research and development arm of the Defense Department.

Excerpt from DARPA, FIDO Alliance Join Race to Replace Passwords, CNET, Feb. 12, 2013

Three Activists and their Twitter Accounts

A federal appeals court ruled Friday (Jan. 25, 2013) that prosecutors can demand Twitter account information of certain users in their criminal probe into the disclosure of classified documents on WikiLeaks. The three-judge panel of the 4th U.S. Circuit Court of Appeals also said the government’s reasons for seeking the information can remain sealed.

The case involves three Twitter account holders with some connection to the secret-busting WikiLeaks website. They had argued that forcing Twitter to cooperate with the investigation by turning over data amounts to an invasion of privacy and has a chilling effect on the free speech rights of Twitter users.

The federal panel in Richmond rejected their appeal and affirmed a magistrate’s court order that Twitter must turn over limited account information to prosecutors. The court said it weighed the right of public access against the need to keep an investigation secret. The appeals court agreed with the magistrate that the government’s interest in keeping the documents secret outweighs the right to public access.

Prosecutors have said federal law specifically allows them to seek account information as a routine investigative tool. Specifically, the Stored Communications Act allows them to obtain certain electronic data without a search warrant or a demonstration of probable cause. The government must only show that it has a reasonable belief that the records it seeks are relevant to an ongoing criminal investigation. “This is essentially a reasonable suspicion standard,” the court wrote. Under the Stored Communications Act, the government can also keep documents related to its investigation sealed from the subscribers. The appeals panel concluded the subscribers had no First Amendment right to access the documents. Prosecutors submitted their rationale for seeking the Twitter information to U.S. Magistrate Judge Theresa Carroll Buchanan, but it, too, was kept secret and sealed.

The court wrote that the “government’s interests in maintaining secrecy of its investigation, preventing potential subjects from being tipped off, or altering behavior to thwart the government’s ongoing investigation, outweighed” the subscribers’ claims.

The American Civil Liberties Union and the Electronic Frontier Foundation, representing the Twitter users, said the government can use those IP addresses as a sort of virtual tracking device to identify a specific computer used by an account holder and with it the user’s physical location.

The appeals panel also allows the government to keep secret any similar orders it sought from other social media sites.

“This case shows just how easy it is for the government to obtain information about what people are doing on the Internet, and it highlights the need for our electronic privacy laws to catch up with technology,” said ACLU attorney Aden Fine. “The government should not be able to get private information like this without getting a warrant and also satisfying the standard required by the First Amendment, and it shouldn’t be able to do so in secret except in unusual circumstances.”

The original order issued in December 2010 at prosecutors’ request also sought Twitter account information from WikiLeaks founder Julian Assange and Pfc. Bradley Manning, who faces life in prison if he’s convicted of indirectly aiding the enemy by leaking U.S. secrets while working as an intelligence analyst in Baghdad in 2009 and 2010.  Neither Assange nor Manning was a party in the lawsuit challenging the legality of the Twitter order.

WikiLeaks Case: U.S. Appeals Court Rules On Investigation, Huffington Post, Jan. 25, 2013

Chevron, 50 Activists and their Email Accounts

The Electronic Frontier Foundation (EFF) and EarthRights International (ERI) asked judges in California and New York today to quash subpoenas issued by Chevron Corporation to three email providers demanding identifying information about the users of more than 100 email accounts, including environmental activists, journalists, and attorneys. The information Chevron wants could be used to create a detailed map of the individuals’ locations and associations over nearly a decade.

The subpoenas are the latest salvo in the long-running battle over damage caused by oil drilling in Ecuador. After years of litigation, an Ecuadorian court last year imposed a judgment of over $17 billion on Chevron for dumping toxic waste into Amazon waterways and causing massive harm to the rainforest. Instead of paying, Chevron sued more than 50 people who were involved in the Ecuador lawsuit, claiming they were part of a conspiracy to defraud the oil giant. None of the individuals represented by EFF and ERI has been sued by Chevron or accused of wrongdoing.

“Environmental advocates have the right to speak anonymously and travel without their every move and association being exposed to Chevron,” said Marcia Hofmann, EFF Senior Staff Attorney. “These sweeping subpoenas create a chilling effect among those who have spoken out against the oil giant’s activities in Ecuador.”

The motions to quash filed today asked the courts to reject the subpoenas, pointing out that anonymous speakers who are not parties in a lawsuit receive particularly strong First Amendment protections. EFF first won court recognition of this protection in Doe v. 2theMart.com in 2001. Chevron’s subpoenas also violate the legal protections for the right of association for political action that were developed during the civil rights era.

“The courts have long recognized that forcing activists to reveal their names and political associations will chill First Amendment rights and can only be done in the most extreme situations,” added Marco Simons, Legal Director of ERI, which has provided legal assistance to third parties affected by the Chevron litigation in two international proceedings. “We look forward to having those longstanding principles applied in this case so that people can engage in journalism and political activism and assist in litigation against environmental destruction without fear that their identities and personal email information will be put at risk.”

EFF and ERI are challenging the subpoenas to Google and Yahoo! in the U.S. District Court for the Northern District of California and the subpoena to Microsoft in the U.S. District Court for the Northern District of New York.

EFF and ERI Fight to Quash Speech-Chilling Subpoenas from Chevron, Press Release of Electronic Frontier Foundation, Oct. 22, 2012