
Why a Dumb Internet is Best

Functional splintering [of the internet] is already happening. When tech companies build “walled gardens”, they decide the rules for what happens inside the walls, and users outside the network are excluded…

Governments are playing catch-up but they will eventually reclaim the regulatory power that has slipped from their grasp. Dictatorships such as China retained control from the start; others, including Russia, are following Beijing. With democracies, too, asserting their jurisdiction over the digital economy, a fragmentation of the internet along national lines is more likely. …The prospect of a “splinternet” has not been lost on governments. To avoid it, Japan’s G20 presidency has pushed for a shared approach to internet governance. In January 2019, prime minister Shinzo Abe called for “data free flow with trust”. The 2019 Osaka summit pledged international co-operation to “encourage the interoperability of different frameworks”.

But Europe is most in the crosshairs of those who warn against fragmentation…US tech giants have not appreciated EU authorities challenging their business model through privacy laws or competition rulings. But more objective commentators, too, fear the EU may cut itself off from the global digital economy. The critics fail to recognise that fragmentation can be the best outcome if values and tastes fundamentally differ…

If Europeans collectively do not want micro-targeted advertising, or artificial intelligence-powered behaviour manipulation, or excessive data collection, then the absence on a European internet of services using such techniques is a gain, not a loss. The price could be to miss out on some services available elsewhere… More probably, non-EU providers will eventually find a way to charge EU users in lieu of monetising their data…Some fear EU rules make it hard to collect the big data sets needed for AI training. But the same point applies. EU consumers may not want AI trained to do intrusive things. In any case, Europe is a big enough market to generate stripped, non-personal data needed for dumber but more tolerable AI, though this may require more harmonised within-EU digital governance. Indeed, even if stricter EU rules splinter the global internet, they also create incentives for more investment into EU-tailored digital products. In the absence of global regulatory agreements, that is a good second best for Europe to aim for.

Excerpts from Martin Sandbu, Europe Should Not Be Afraid of Splinternet, FT, July 2, 2019

The Internet Was Never Open

Rarely has a manifesto been so wrong. “A Declaration of the Independence of Cyberspace”, written 20 years ago by John Perry Barlow, a digital civil-libertarian, begins thus: “Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.”

At the turn of the century, it seemed as though this techno-Utopian vision of the world could indeed be a reality. It didn’t last… Autocratic governments around the world…have invested in online-surveillance gear. Filtering systems restrict access: to porn in Britain, to Facebook and Google in China, to dissent in Russia.

Competing operating systems and networks offer inducements to keep their users within the fold, consolidating their power. Their algorithms personalise the web so that no two people get the same search results or social media feeds, betraying the idea of a digital commons. Five companies account for nearly two-thirds of revenue from advertising, the dominant business model of the web.

The open internet accounts for barely 20% of the entire web. The rest of it is hidden away in unsearchable “walled gardens” such as Facebook, whose algorithms are opaque, or on the “dark web”, a shady parallel world wide web. Data gathered from the activities of internet users are being concentrated in fewer hands. And big hands they are too. BCG, a consultancy, reckons that the internet will account for 5.3% of GDP of the world’s 20 big economies this year, or $4.2 trillion.

How did this come to pass? The simple reply is that the free, open, democratic internet dreamed up by the optimists of Silicon Valley was never more than a brief interlude. The more nuanced answer is that the open internet never really existed.

[T]he internet was developed “by the US military to serve US military purposes”… The decentralised, packet-based system of communication that forms the basis of the internet originated in America’s need to withstand a massive attack on its soil. Even the much-ballyhooed Silicon Valley model of venture capital as a way to place bets on risky new businesses has military origins.

In the 1980s the American military began to lose interest in the internet…. The time had come for the hackers and geeks who had been experimenting with early computers and phone lines. Today they are the giants. Google, Apple, Facebook, Amazon and Microsoft—together with some telecoms operators—help set policy in Europe and America on everything from privacy rights and copyright law to child protection and national security. As these companies grow more powerful, the state is pushing back…

The other big risk is that the tension between states and companies resolves into a symbiotic relationship. A leaked e-mail shows a Google executive communicating with Hillary Clinton’s state department about an online tool that would be “important in encouraging more [Syrians] to defect and giving confidence to the opposition.”+++ If technology firms with global reach quietly promote the foreign-policy interests of one country, that can only increase suspicion and accelerate the fracturing of the web into regional internets….

Mr Malcomson describes the internet as a “global private marketplace built on a government platform, not unlike the global airport system”.

Excerpts from Evolution of the internet: Growing up, Economist, Mar. 26, 2016

+++The email said Google would be “partnering with Al Jazeera”, which would take “primary ownership” of the tool, maintaining it and publicizing it in Syria. It was eventually published by Al Jazeera in English and Arabic.

Who Controls Peoples’ Data?

The McKinsey Global Institute estimates that cross-border flows of goods, services and data added 10 per cent to global gross domestic product in the decade to 2015, with data providing a third of that increase. That share of the contribution seems likely to rise: conventional trade has slowed sharply, while digital flows have surged. Yet as the whole economy becomes more information-intensive — even heavy industries such as oil and gas are becoming data-driven — the cost of blocking those flows increases…

Yet that is precisely what is happening. Governments have sharply increased “data localisation” measures requiring information to be held in servers inside individual countries. The European Centre for International Political Economy, a think-tank, calculates that in the decade to 2016, the number of significant data localisation measures in the world’s large economies nearly tripled from 31 to 84.

Even in advanced economies, exporting data on individuals is heavily restricted because of privacy concerns, which have been highlighted by the Facebook/Cambridge Analytica scandal. Many EU countries have curbs on moving personal data even to other member states. Studies for the Global Commission on Internet Governance, an independent research project, estimate that current constraints — such as restrictions on moving data on banking, gambling and tax records — reduce EU GDP by half a per cent.

In China, the champion data localiser, restrictions are even more severe. As well as long-established controls over technology transfer and state surveillance of the population, such measures form part of its interventionist “Made in China 2025” industrial strategy, designed to make it a world leader in tech-heavy sectors such as artificial intelligence and robotics.

China’s Great Firewall has long blocked most foreign web applications, and a cyber security law passed in 2016 also imposed rules against exporting personal information, forcing companies including Apple and LinkedIn to hold information on Chinese users on local servers. Beijing has also given itself a variety of powers to block the export of “important data” on grounds of reducing vaguely defined economic, scientific or technological risks to national security or the public interest. “The likelihood that any company operating in China will find itself in a legal blind spot where it can freely transfer commercial or business data outside the country is less than 1 per cent,” says ECIPE director Hosuk Lee-Makiyama…

Other emerging markets, such as Russia, India, Indonesia and Vietnam, are also leading data localisers. Russia has blocked LinkedIn from operating there after it refused to transfer data on Russian users to local servers.

Business organisations including the US Chamber of Commerce want rules to restrain what they call “digital protectionism”. But data trade experts point to a serious hole in global governance, with a coherent approach prevented by the differing philosophies of the big trading powers. Susan Aaronson, a trade academic at George Washington University in Washington, DC, says: “There are currently three powers — the EU, the US and China — in the process of creating separate data realms.”

The most obvious way to protect international flows of data is in trade deals — whether multilateral, regional or bilateral. Yet the only World Trade Organization rules governing data flows predate the internet and have not been thoroughly tested through litigation. The WTO recently recruited Alibaba co-founder Jack Ma to front an ecommerce initiative, but officials involved admit it is unlikely to produce anything concrete for a long time. In any case, Prof Aaronson says: “While data has traditionally been addressed in trade deals as an ecommerce issue, it goes far wider than that.”

The internet has always been regarded by pioneers and campaigners as a decentralised, self-regulating community. Activists have tended to regard government intervention with suspicion, except for its role in protecting personal data, and many are wary of legislation to enable data flows. “While we support the approach of preventing data localisation, we need to balance that against other rights such as data protection, cyber security and consumer rights,” says Jeremy Malcolm, senior global policy analyst at the Electronic Frontier Foundation, a campaign group for internet freedom…

Europe has traditionally had a very different philosophy towards data and privacy from the US. In Germany, for instance, public opinion tends to support strict privacy laws — usually attributed to lingering memories of surveillance by the Stasi secret police in East Germany. The EU’s new General Data Protection Regulation (GDPR), which comes into force on May 25, 2018, imposes a long list of requirements on companies processing personal data, on pain of fines that could total as much as 4 per cent of annual turnover….But trade experts warn that the GDPR is very cautiously written, with a blanket exemption for measures claiming to protect privacy. Mr Lee-Makiyama says: “The EU text will essentially provide no meaningful restriction on countries wanting to practice data localisation.”

Against this political backdrop, the prospects for broad and binding international rules on data flow are dim. …In the battle for dominance over setting rules for commerce, the EU and US often adopt contrasting approaches. While the US often tries to export its product standards in trade diplomacy, the EU tends to write rules for itself and let the gravity of its huge market pull other economies into its regulatory orbit. Businesses faced with multiple regulatory regimes will tend to work to the highest standard, known widely as the “Brussels effect”. Companies such as Facebook have promised to follow GDPR throughout their global operations as the price of operating in Europe.

Excerpts from Data protectionism: the growing menace to global business, Financial Times, May 13, 2018

Biometrics: Behavioral and Physical

From the DARPA PDF document available at FedBizOpps.gov: Enhanced Attribution,
Solicitation Number: DARPA-BAA-16-34

Malicious actors in cyberspace currently operate with little fear of being caught, because it is extremely difficult, in some cases perhaps even impossible, to reliably and confidently attribute actions in cyberspace to individuals. The reason cyber attribution is difficult stems at least in part from a lack of end-to-end accountability in the current Internet infrastructure… The identities of malicious cyber operators are largely obscured by the use of multiple layers of indirection… The lack of detailed information about the actions and identities of the adversary cyber operators inhibits policymaker considerations and decisions for both cyber and non-cyber response options (e.g., economic sanctions under EO-13694).

DARPA’s Enhanced Attribution program aims to make currently opaque malicious cyber adversary actions and individual cyber operator attribution transparent by providing high-fidelity visibility into all aspects of malicious cyber operator actions, and to increase the Government’s ability to publicly reveal the actions of individual malicious cyber operators without damaging sources and methods…

The program seeks to develop:

– technologies to extract behavioral and physical biometrics from a range of devices and vantage points to consistently identify virtual personas and individual malicious cyber operators over time and across different endpoint devices and C2 infrastructures;

– techniques to decompose the software tools and actions of malicious cyber operators into semantically rich and compressed knowledge representations;

– scalable techniques to fuse, manage, and project such ground-truth information over time, toward developing a full historical and current picture of malicious activity;

– algorithms for developing predictive behavioral profiles within the context of cyber campaigns; and

– technologies for validating and perhaps enriching this knowledge base with other sources of data, including public and commercial sources of information.

Excerpts from Enhanced Attribution, Solicitation Number: DARPA-BAA-16-34, April 22, 2016
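
The first goal on that list — consistently identifying virtual personas across endpoints from behavioral and physical biometrics — is, at its core, a cross-session matching problem. Purely as a hedged illustration (the features, scales, session labels, and threshold below are invented; DARPA’s actual methods are not public), a sketch of linking operator sessions by comparing scaled behavioral feature vectors might look like this:

```python
# Hypothetical sketch: linking operator "personas" across endpoints by
# comparing behavioral feature vectors. Features, scales, and threshold
# are invented for illustration; DARPA's actual methods are not public.
import math

# Each session is summarized as [mean keystroke interval (ms),
# command error rate, active-hours offset from UTC]. Values fabricated.
SESSIONS = {
    "endpoint-A/persona-1": [112.0, 0.04, 3.0],
    "endpoint-B/persona-7": [115.0, 0.05, 3.0],
    "endpoint-C/persona-2": [210.0, 0.19, 11.0],
}
SCALES = [100.0, 0.1, 10.0]  # rough per-feature scales for normalization

def distance(u, v):
    """Euclidean distance between two sessions after per-feature scaling."""
    return math.sqrt(sum(((a - b) / s) ** 2 for a, b, s in zip(u, v, SCALES)))

def link_personas(sessions, threshold=0.5):
    """Return pairs of session labels whose behavior is close enough to
    plausibly come from the same operator."""
    labels = list(sessions)
    return [(a, b)
            for i, a in enumerate(labels)
            for b in labels[i + 1:]
            if distance(sessions[a], sessions[b]) <= threshold]

print(link_personas(SESSIONS))
# -> [('endpoint-A/persona-1', 'endpoint-B/persona-7')]
```

In this toy version, the two sessions with similar typing speed, error rate, and working hours are linked as one operator despite appearing under different personas on different endpoints, which is the behavior the solicitation asks for at much higher fidelity.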

Wikipedia Lawsuit against U.S. NSA

Excerpts from the Lawsuit of Wikipedia against the NSA

UNITED STATES DISTRICT COURT DISTRICT OF MARYLAND, Case 1:15-cv-00662-RDB, Filed 03/10/15

The government conducts at least two kinds of surveillance under the Foreign Intelligence Surveillance Act Amendments Act of 2008 (FAA). Under a program called “PRISM,” the government obtains stored and real-time communications directly from U.S. companies—such as Google, Yahoo, Facebook, and Microsoft—that provide communications services to targeted accounts.

This case concerns a second form of surveillance, called Upstream. Upstream surveillance involves the NSA’s seizing and searching the internet communications of U.S. citizens and residents en masse as those communications travel across the internet “backbone” in the United States. The internet backbone is the network of high-capacity cables, switches, and routers that facilitates both domestic and international communication via the internet. The NSA conducts Upstream surveillance by connecting surveillance devices to multiple major internet cables, switches, and routers inside the United States. These access points are controlled by the country’s largest telecommunications providers, including Verizon Communications, Inc. and AT&T, Inc.…

With the assistance of telecommunications providers, the NSA intercepts a wide variety of internet communications, including emails, instant messages, webpages, voice calls, and video chats. It copies and reviews substantially all international emails and other “text-based” communications—i.e., those whose content includes searchable text.

More specifically, Upstream surveillance encompasses the following processes, some of which are implemented by telecommunications providers acting at the NSA’s direction:

• Copying. Using surveillance devices installed at key access points, the NSA makes a copy of substantially all international text-based communications—and many domestic ones—flowing across certain high-capacity cables, switches, and routers. The copied traffic includes email, internet-messaging communications, web-browsing content, and search-engine queries.

• Filtering. The NSA attempts to filter out and discard some wholly domestic communications from the stream of internet data, while preserving international communications. The NSA’s filtering out of domestic communications is incomplete, however, for multiple reasons. Among them, the NSA does not eliminate bundles of domestic and international communications that transit the internet backbone together. Nor does it eliminate domestic communications that happen to be routed abroad.

• Content Review. The NSA reviews the copied communications—including their full content—for instances of its search terms. The search terms, called “selectors,” include email addresses, phone numbers, internet protocol (“IP”) addresses, and other identifiers that NSA analysts believe to be associated with foreign intelligence targets. Again, the NSA’s targets are not limited to suspected foreign agents and terrorists, nor are its selectors limited to individual email addresses. The NSA may monitor or “task” selectors used by large groups of people who are not suspected of any wrongdoing— such as the IP addresses of computer servers used by hundreds of different people.

• Retention and Use. The NSA retains all communications that contain selectors associated with its targets, as well as those that happened to be bundled with them in transit….

NSA analysts may read, query, data-mine, and analyze these communications with few restrictions, and they may share the results of those efforts with the FBI, including in aid of criminal investigations… In other words, the NSA copies and reviews the communications of millions of innocent people to determine whether they are discussing or reading anything containing the NSA’s search terms. The NSA’s practice of reviewing the content of communications for selectors is sometimes called “about” surveillance. This is because its purpose is to identify not just communications that are to or from the NSA’s targets but also those that are merely “about” its targets. Although it could do so, the government makes no meaningful effort to avoid the interception of communications that are merely “about” its targets; nor does it later purge those communications.

PDF document of Lawsuit
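
The complaint’s four steps — copying, filtering, content review against selectors, and retention — amount to a streaming filter pipeline, and the “about” surveillance it describes is simply matching selectors inside message content rather than only in sender and recipient fields. Purely as a toy illustration (the real systems are classified and operate on raw backbone traffic, not tidy records; the selectors and messages below are invented):

```python
# Toy model of the copy/filter/review/retain pipeline the complaint
# describes. Entirely illustrative: selectors and traffic are invented,
# and real Upstream systems operate on raw packets, not neat dicts.

SELECTORS = {"target@example.org", "198.51.100.7"}  # hypothetical selectors

def is_wholly_domestic(msg):
    """Filtering: discard communications with both endpoints in the US.
    (The complaint stresses that this filtering is incomplete in practice.)"""
    return msg["src_country"] == "US" and msg["dst_country"] == "US"

def matches_selector(msg):
    """Content review: hit if a selector is the sender or recipient, or
    appears anywhere in the body -- the latter is "about" surveillance."""
    if msg["sender"] in SELECTORS or msg["recipient"] in SELECTORS:
        return True
    return any(sel in msg["body"] for sel in SELECTORS)

def upstream(stream):
    retained = []
    for msg in stream:                 # Copying: every message is scanned
        if is_wholly_domestic(msg):    # Filtering (imperfect by design)
            continue
        if matches_selector(msg):      # Content review against selectors
            retained.append(msg)       # Retention and use
    return retained

traffic = [
    {"sender": "alice@example.com", "recipient": "bob@example.de",
     "src_country": "US", "dst_country": "DE",
     "body": "forwarding a note from target@example.org about the talk"},
    {"sender": "carol@example.com", "recipient": "dan@example.com",
     "src_country": "US", "dst_country": "US", "body": "lunch today?"},
]

# The first message is retained even though it is neither to nor from a
# target -- it is merely "about" one. The second is filtered as domestic.
print(len(upstream(traffic)))  # -> 1
```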

Internet or Equinet?

“The Internet governance should be multilateral, transparent, democratic, and representative, with the participation of governments, private sector, civil society, and international organizations, in their respective roles. This should be one of the foundational principles of Internet governance,” the external affairs ministry says in its initial submission to the April 23-24 Global Multistakeholder Meeting on the Future of Internet Governance, also referred to as NETmundial, in Sao Paulo, Brazil. The proposal for a decentralised Internet is significant in view of Edward Snowden’s revelations of mass surveillance in recent months.

“The structures that manage and regulate the core Internet resources need to be internationalized, and made representative and democratic. The governance of the Internet should also be sensitive to the cultures and national interests of all nations. The mechanism for governance of the Internet should therefore be transparent and should address all related issues. The Internet must be owned by the global community for mutual benefit and be rendered impervious to possible manipulation or misuse by any particular stakeholder, whether state or non-state,” the ministry note says. NETmundial will see representatives from nearly 180 countries participating to debate the future of the Internet…

The US announced last month its intent to relinquish control of a vital part of the Internet Corporation for Assigned Names and Numbers (ICANN): the Internet Assigned Numbers Authority (IANA). “Many nations still think that a multilateral role might be more suitable than a multistakeholder approach, and two years back India had proposed a 50-nation ‘Committee of Internet Related Policies’ (CIRP) for global internet governance,” Bhattacharjee added.

The concept of Equinet was first floated by Communications Minister Kapil Sibal in 2012 at the Internet Governance Forum in Baku, Azerbaijan. Dr. Govind, chief executive officer, National Internet Exchange of India, is hopeful that Equinet is achievable. “Equinet is a concept of the Internet as a powerful medium benefiting people across the spectrum. It is all the more significant for India as we have 220 million Internet users, standing third globally after China and the US. Moreover, by the year-end India’s number of Internet users is expected to surpass that of the US. The word Equinet means an equitable Internet which plays the role of an equaliser in society and is not limited only to the privileged.”

He said the role of government in Internet management is important as far as policy, security and privacy of cyberspace are concerned, but the roles of the private sector, civil society and other stakeholders are no less important. “Internet needs to be managed in a more collaborative, cooperative, consultative and consensual manner.”

Talking about the global strategy of renaming the Internet as Equinet, he said: “Globally the US has the largest control over the management of the Internet, which is understandable since everything about the Internet started there. Developing countries still do not have much say over the global management of the Internet. But it is important that Internet management be more decentralised and globalised so that the developing countries have more participation and a say in the management, where their consent is taken as well.”

The ministry note said: “A mechanism for accountability should be put in place in respect of crimes committed in cyberspace, such that the Internet is a free and secure space for universal benefaction. A ‘new cyber jurisprudence’ needs to be evolved to deal with cyber crime, without being limited by political boundaries, so that cyber-justice can be delivered in near real time.”

But other experts doubt the possibility of an Equinet or of equalising the Internet globally. Sivasubramanian Muthusamy, president, Internet Society India, Chennai, who is also a participant in NETmundial, told IANS that the idea of Equinet is not achievable. “Totally wrong idea. Internet provides a level playing field already. It is designed and operated to be universally accessible, free and open. Internet as it is operated today offers the greatest hope for developing countries to access global markets and prosper.” “The idea of proposing to rename the Internet as Equinet has a political motive that would pave the way for telecom companies to have a bigger role to bring in harmful commercial models that would destabilize the open architecture of the Internet. If India is considering such a proposal, it would be severely criticized. The proposal does not make any sense. It is wrong advice or misplaced input that must have prompted the government of India to think of such a strange idea,” he said.

Excerpt from India wants Internet to become Equinet, Business Standard, Apr. 20, 2014

Watching your Internet Fingerprint

The current standard method for validating a user’s identity for authentication on an information system requires humans to do something that is inherently difficult: create, remember, and manage long, complex passwords. Moreover, as long as the session remains active, typical systems incorporate no mechanisms to verify that the user originally authenticated is the user still in control of the keyboard. Thus, unauthorized individuals may improperly obtain extended access to information system resources if a password is compromised or if a user does not exercise adequate vigilance after initially authenticating at the console.

The Active Authentication program seeks to address this problem by developing novel ways of validating the identity of the person at the console that focus on the unique aspects of the individual through the use of software-based biometrics. Biometrics are defined as the characteristics used to uniquely recognize humans based upon one or more intrinsic physical or behavioral traits. This program focuses on the computational behavioral traits that can be observed through how we interact with the world. Just as when you touch something with your finger you leave behind a fingerprint, when you interact with technology you do so in a pattern based on how your mind processes information, leaving behind a “cognitive fingerprint.”

This BAA addresses the first phase of this program. In the first phase, the focus will be on researching biometrics that do not require the installation of additional hardware sensors. Rather, DARPA will look for research on biometrics that can be captured through the technology already in use in a standard DoD office environment, looking for aspects of the “cognitive fingerprint.” A heavy emphasis will be placed on validating any potential new biometrics with empirical tests to ensure they would be effective in large-scale deployments.

The later planned phases of the program that are not addressed in this BAA will focus on developing a solution that integrates any available biometrics using a new authentication platform suitable for deployment on a standard Department of Defense desktop or laptop. The planned combinatorial approach of using multiple modalities for continuous user identification and authentication is expected to deliver a system that is accurate, robust, and transparent to the user’s normal computing experience. The authentication platform is planned to be developed with open Application Programming Interfaces (APIs) to allow the integration of other software or hardware biometrics available in the future from any source.

The combined aspects of the individual that this program is attempting to uncover are the aspects that constitute the computational behavioral “fingerprint” of the person at the keyboard. This has also been referred to in existing research as the “cognitive fingerprint.” The proposed theory is that how individuals formulate their thoughts and actions is reflected in their behavior, and this behavior in turn can be captured as metrics in how the individual performs tasks using the computer.

Some examples of the computational behavior metrics of the cognitive fingerprint include:

− keystrokes

− eye scans

− how the user searches for information (verbs and predicates used)

− how the user selects information (verbs and predicates used)

− how the user reads the material selected
  • eye tracking on the page
  • speed with which the individual reads the content

− methods and structure of communication (exchange of email)

These examples are only provided for illustrative purposes and are not intended as a list of potential research topics. The examples above include potential biometrics that would not be supported through this BAA due to a requirement for the deployment of additional hardware based sensors (such as tracking eye scans).

Excerpt from Broad Agency Announcement, Active Authentication, DARPA-BAA-12-06, January 12, 2012
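
To make the “cognitive fingerprint” idea concrete, here is a minimal sketch of the first metric on the list, keystroke dynamics. It enrolls a user from inter-key timing samples and then continuously scores later activity against that profile; the statistics, numbers, and threshold are invented for illustration and do not reflect any deployed DoD system:

```python
# Hypothetical sketch of continuous authentication from keystroke timing,
# one behavioral metric named in the BAA. All numbers are invented.
from statistics import mean, stdev

def enroll(intervals):
    """Build a typing profile (mean and standard deviation of inter-key
    intervals, in milliseconds) from an enrollment session."""
    return {"mu": mean(intervals), "sigma": stdev(intervals)}

def anomaly_score(profile, intervals):
    """Mean absolute z-score of a live sample against the profile; higher
    means the typing rhythm looks less like the enrolled user."""
    mu, sigma = profile["mu"], profile["sigma"]
    return mean(abs(x - mu) / sigma for x in intervals)

def still_same_user(profile, intervals, threshold=2.0):
    """The continuous check: run repeatedly while the session is active,
    rather than trusting only the password typed at login."""
    return anomaly_score(profile, intervals) < threshold

# Enrollment: the legitimate user's inter-key intervals (ms).
profile = enroll([105, 98, 110, 102, 99, 107, 101, 104])

print(still_same_user(profile, [103, 100, 108, 97]))   # True: same rhythm
print(still_same_user(profile, [180, 175, 190, 168]))  # False: new typist
```

The point of the sketch is the program’s core shift: authentication becomes a background process that keeps re-verifying the user for as long as the session is active, instead of a one-time password check at the console.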

On Feb. 12, 2013, two groups announced related projects. The first is an industry group calling itself the FIDO (Fast IDentity Online) Alliance. It consists of the computer maker Lenovo, the security firm Nok Nok Labs, the online-payments giant PayPal, the biometrics experts Agnito, and the authentication specialists Validity. The second is the Defense Advanced Research Projects Agency (DARPA), a research and development arm of the Defense Department.

Excerpt from DARPA, FIDO Alliance Join Race to Replace Passwords, CNET, Feb. 12, 2013