Tag Archives: cyber-sovereignty

Let Them Eat Data! Decolonizing Artificial Intelligence

Tap water isn’t drinkable. Power outages are common. The national average annual wage is $2,200. Yet rising on Jakarta’s outskirts are giant, windowless buildings packed inside with Nvidia’s latest artificial-intelligence chips. They mark Indonesia’s surprising rise as an AI hot spot, a market estimated to grow 30% annually over the next five years to $2.4 billion.

The multitrillion-dollar spending spree on AI has spread to the developing world. It is driven in part by a philosophy known in some academic circles as AI decolonization. The idea is simple. Foreign powers once extracted resources such as oil from colonies, offering minimal benefits to the locals. Today, developing nations aim to ensure that the AI boom enriches more than just Silicon Valley.  Regulations effectively require tech companies such as Google and Meta to process local data domestically. That pushes companies to build or rent data facilities onshore instead of relying on global infrastructure. These investments add up to billions of dollars and create jobs that foster national talent, or so developing nations hope.

AI decolonization is a twist on data sovereignty, a concept that gained traction after Edward Snowden revealed that American tech companies cooperated with U.S. government surveillance of foreign leaders. The European Union in 2018 pioneered data-protection laws that other nations have since mimicked.

Regulations vary by country and industry, but the principle is this: If a developing-nation bank wants an American tech giant to store customer data and analyze it with AI, the bank must hire a company with domestically located servers… Nvidia Chief Executive Jensen Huang championed “sovereign AI” during a visit to Jakarta in 2024.

“No country can afford to have its natural resource—the data of its people—be extracted, transformed into intelligence and then imported back into the country,” Huang said…

Excerpt from Stu Woo, It’s Not Just Rich Countries. Tech’s Trillion-Dollar Bet on AI Is Everywhere, WSJ, Oct. 26, 2025
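To make the data-residency rule described above concrete, here is a minimal sketch of what “domestically located servers” can mean in practice: an application pins its storage to an in-country cloud region so customer records never leave the jurisdiction. This is purely illustrative, assuming an AWS-style object store; the Jakarta region (ap-southeast-3) is real, but the bucket name and the customer record are invented, and no particular bank’s setup is implied.

```python
# Minimal sketch of data residency in practice (hypothetical example):
# customer records are written to an object-storage bucket pinned to an
# in-country region, here AWS Asia Pacific (Jakarta), ap-southeast-3.
import boto3

REGION = "ap-southeast-3"                  # AWS's Jakarta region
BUCKET = "example-bank-customer-records"   # invented name, for illustration only

s3 = boto3.client("s3", region_name=REGION)

# The explicit location constraint keeps the bucket (and the data in it)
# on servers inside the country.
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)

# A fictional customer record, stored in-country and available for
# in-country AI processing.
s3.put_object(
    Bucket=BUCKET,
    Key="customers/0001.json",
    Body=b'{"customer_id": "0001", "balance_idr": 15000000}',
)
```

The same residency logic then extends to where the models that analyze those records run, which is what drives the onshore data-centre building the article describes.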

Algorithms as Weapons: Tracking, Targeting Nuclear Weapons

 
New and unproved technologies—this time computer systems capable of performing superhuman tasks using machine learning and other forms of artificial intelligence (AI)—threaten to destabilise the global “strategic balance”, by seeming to offer ways to launch a knockout blow against a nuclear-armed adversary, without triggering an all-out war.

A report issued in November by America’s National Security Commission on Artificial Intelligence, a body created by Congress and chaired by Eric Schmidt, a former boss of Google, and Robert Work, who was deputy defence secretary from 2014 to 2017, ponders how AI systems may reshape global balances of power, as dramatically as electricity changed warfare and society in the 19th century. Notably, it focuses on the ability of AI to “find the needle in the haystack”, by spotting patterns and anomalies in vast pools of data… In a military context, it may one day find the stealthiest nuclear-armed submarines, wherever they lurk. The commission is blunt. Nuclear deterrence could be undermined if AI-equipped systems succeed in tracking and targeting previously invulnerable military assets. That in turn could increase incentives for states, in a crisis, to launch a devastating pre-emptive strike. China’s rise as an AI power represents the most complex strategic challenge that America faces, the commission adds, because the two rivals’ tech sectors are so entangled by commercial, academic and investment ties.

Some Chinese officials sound gung-ho about AI as a path to prosperity and development, with few qualms about privacy or lost jobs. Still, other Chinese fret about AI that might put winning a war ahead of global stability, like some game-playing doomsday machine. Chinese officials have studied initiatives such as the “Digital Geneva Convention” drafted by Microsoft, a technology giant. This would require states to forswear cyber-attacks on such critical infrastructure as power grids, hospitals and international financial systems.  AI would make it easier to locate and exploit vulnerabilities in these…

One obstacle is physical. Warheads or missile defences can be counted by weapons inspectors. In contrast, rival powers cannot safely show off their most potent algorithms, or even describe AI capabilities in a verifiable way… Westerners worry especially about so-called “black box” algorithms, powerful systems that generate seemingly accurate results but whose reasoning is a mystery even to their designers.

Excerpts from Chaguan: The Digital Divide, Economist, Jan 18, 2019

The Repressive Digital Technologies of the West

A growing, multi-billion-dollar industry exports “intrusion software” designed to snoop on smartphones, desktop computers and servers. There is compelling evidence that such software is being used by oppressive regimes to spy on and harass their critics. The same tools could also proliferate and be turned back against the West. Governments need to ensure that this new kind of arms export does not slip through the net.

A recent lawsuit brought by WhatsApp, for instance, alleges that more than 1,400 users of its messaging app were targeted using software made by NSO Group, an Israeli firm. Many of the alleged victims were lawyers, journalists and campaigners. (NSO denies the allegations and says its technology is not designed or licensed for use against human-rights activists and journalists.) Other firms’ hacking tools were used by the blood-soaked regime of Omar al-Bashir in Sudan. These technologies can be used across borders. Some victims of oppressive governments have been dissidents or lawyers living as exiles in rich countries.

Western governments should tighten the rules for moral, economic and strategic reasons. The moral case is obvious. It makes no sense for rich democracies to complain about China’s export of repressive digital technologies if Western tools can be used to the same ends. The economic case is clear, too: unlike conventional arms sales, a reduction in spyware exports would not lead to big manufacturing-job losses at home.

The strategic case revolves around the risk of proliferation. Software can be reverse-engineered, copied indefinitely and—potentially—used to attack anyone in the world…. There is a risk that oppressive regimes acquire capabilities that can then be used against not just their own citizens, but Western citizens, firms and allies, too. It would be in the West’s collective self-interest to limit the spread of such technology.

A starting-point would be to enforce existing export-licensing more tightly… Rich countries should make it harder for ex-spooks to pursue second careers as digital mercenaries in the service of autocrats. The arms trade used to be about rifles, explosives and jets. Now it is about software and information, too. Time for the regime governing the export of weapons to catch up.

Excerpt from The spying business: Western firms should not sell spyware to tyrants, Economist, Dec. 14, 2019

DARPA for Transparent Computing

From the DARPA website
Modern computing systems act as black boxes in that they accept inputs and generate outputs but provide little to no visibility into their internal workings. This greatly limits the potential to understand… advanced persistent threats (APTs). APT adversaries act slowly and deliberately over a long period of time to expand their presence in an enterprise network and achieve their mission goals (e.g., information exfiltration, interference with decision making and denial of capability). Because modern computing systems are opaque, APTs can remain undetected for years if their individual activities can blend with the background “noise” inherent in any large, complex environment…
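To see why such activity is so hard to spot, consider a toy sketch (not part of the DARPA text; every number below is invented) of a naive per-day traffic alarm that never fires on an adversary exfiltrating data slowly enough to sit inside normal background noise:

```python
# Toy illustration with invented numbers: a naive per-day volume alarm never
# fires on an APT that exfiltrates data slowly, because each day's activity
# stays well inside the normal "noise" of a busy enterprise host.

DAILY_ALERT_THRESHOLD_MB = 500   # naive rule: flag any host sending >500 MB/day
TYPICAL_DAILY_TRAFFIC_MB = 300   # ordinary, legitimate traffic for this host
APT_EXFIL_MB_PER_DAY = 20        # attacker trickles out 20 MB per day
DAYS = 365

observed_per_day = TYPICAL_DAILY_TRAFFIC_MB + APT_EXFIL_MB_PER_DAY  # 320 MB/day
alert_fires = observed_per_day > DAILY_ALERT_THRESHOLD_MB           # False, every day
stolen_mb = APT_EXFIL_MB_PER_DAY * DAYS                             # 7,300 MB in a year

print(f"Daily observed traffic: {observed_per_day} MB -> alert fires: {alert_fires}")
print(f"Data quietly exfiltrated over the year: {stolen_mb} MB")
```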

The Transparent Computing (TC) program aims to make currently opaque computing systems transparent by providing high-fidelity visibility into component interactions during system operation across all layers of software abstraction, while imposing minimal performance overhead. The program will develop technologies to record and preserve the provenance of all system elements/components (inputs, software modules, processes, etc.); dynamically track the interactions and causal dependencies among cyber system components; assemble these dependencies into end-to-end system behaviors; and reason over these behaviors, both forensically and in real-time. By automatically or semi-automatically “connecting the dots” across multiple activities that are individually legitimate but collectively indicate malice or abnormal behavior, TC has the potential to enable the prompt detection of APTs and other cyber threats, and allow complete root cause analysis and damage assessment once adversary activity is identified. In addition, the TC program will integrate its basic cyber reasoning functions in an enterprise-scale cyber monitoring and control construct that enforces security policies at key ingress/exit points, e.g., the firewall.

Excerpt from http://www.darpa.mil/Our_Work/I2O/Programs/Transparent_Computing.aspx
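The program description above amounts to a pipeline: record provenance for every system element, track causal dependencies among components, assemble them into end-to-end behaviors, and reason over the result. As a purely illustrative sketch of that idea, assuming nothing about DARPA’s actual implementation (the entities, events and the alert below are invented), a provenance graph with a backward root-cause walk might look like this:

```python
# Minimal sketch of provenance-style "connecting the dots" (illustration only,
# not DARPA's implementation): every recorded event adds a causal edge between
# system entities; from a suspicious sink we walk the edges backwards to
# recover the end-to-end chain and its root cause.
from collections import defaultdict, deque

# parents[entity] -> set of (entity_it_depends_on, event_description)
parents = defaultdict(set)

def record(source, event, target):
    """Record that `target` causally depends on `source` via `event`."""
    parents[target].add((source, event))

# Individually legitimate-looking events (hypothetical trace):
record("email:invoice.pdf",    "opened by",   "proc:outlook.exe")
record("proc:outlook.exe",     "spawned",     "proc:powershell.exe")
record("file:customer_db.csv", "read by",     "proc:powershell.exe")
record("proc:powershell.exe",  "wrote",       "file:staging.zip")
record("file:staging.zip",     "uploaded to", "net:203.0.113.7:443")

def root_cause_chain(sink):
    """Breadth-first walk back through causal dependencies from `sink`."""
    chain, seen, queue = [], {sink}, deque([sink])
    while queue:
        node = queue.popleft()
        for src, event in parents[node]:
            chain.append(f"{src} --{event}--> {node}")
            if src not in seen:
                seen.add(src)
                queue.append(src)
    return chain

# A bulk upload to an unfamiliar address trips the (made-up) alert; tracing
# provenance then explains how the data got there.
for step in root_cause_chain("net:203.0.113.7:443"):
    print(step)
```

Each recorded edge looks routine on its own; it is the assembled chain, from opened attachment to bulk upload of a staged archive, that collectively indicates malice and pinpoints the root cause, which is the “connecting the dots” the program describes.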