So You Want a Job? De-Humanizing the Hiring Process

Dr. Kai-Fu Lee, chairman and chief executive of venture-capital firm Sinovation Ventures and author of “AI Superpowers: China, Silicon Valley, and the New World Order,” maintains that AI “will wipe out a huge portion of work as we’ve known it.” He hit on that theme when he spoke at The Wall Street Journal’s virtual CIO Network summit.

Artificial intelligence (AI), i.e., robots, according to Dr. Lee, can be used for recruiting… We can have a lot of résumés coming in, and we want to match those résumés with job descriptions and route them to the right managers. If you’re thinking about AI computer and video interaction, there are products you can deploy to screen candidates. For example, AI can have a conversation with the person via videoconference. AI would then grade candidates based on their answers to your preprogrammed questions, as well as their micro-expressions and facial expressions, to reflect whether they possess the right IQ and EQ (emotional intelligence) for a particular job.
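To make the résumé-routing step concrete, here is a minimal sketch, assuming scikit-learn and wholly invented job postings, manager addresses and threshold; it is not any vendor’s actual screening product. It scores an incoming résumé against each job description with TF-IDF cosine similarity and routes it to the manager whose opening matches best, or to no one if nothing clears the threshold.

```python
# Minimal résumé-to-job matching sketch (illustrative, not a vendor product).
# Requires scikit-learn. Job texts, addresses and the threshold are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

JOBS = {
    "data-engineer": "Build and operate ETL pipelines in Python and SQL.",
    "ml-researcher": "Train, evaluate and publish deep-learning models.",
}
MANAGERS = {"data-engineer": "alice@example.com", "ml-researcher": "bob@example.com"}

def route_resume(resume_text, threshold=0.15):
    """Return the best-matching manager's address, or None if no job clears the bar."""
    corpus = list(JOBS.values()) + [resume_text]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)
    scores = cosine_similarity(tfidf[-1], tfidf[:-1]).ravel()  # résumé vs. each job
    best = int(scores.argmax())
    return MANAGERS[list(JOBS)[best]] if scores[best] >= threshold else None

print(route_resume("Five years building SQL and Python data pipelines."))
```

Grading interview answers and micro-expressions, by contrast, would require speech and vision models far beyond a sketch like this, which is partly why that kind of screening is controversial.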

Excerpts from Jared Council, AI’s Impact on Businesses—and Jobs, WSJ, Mar. 8, 2021

How to Fool Your Enemy: Artificial Intelligence in Conflict

The contest between China and America, the world’s two superpowers, has many dimensions… One of the most alarming and least understood is the race towards artificial-intelligence-enabled warfare. Both countries are investing large sums in militarised artificial intelligence (AI), from autonomous robots to software that gives generals rapid tactical advice in the heat of battle… As Jack Shanahan, a general who is the Pentagon’s point man for AI, put it last month, “What I don’t want to see is a future where our potential adversaries have a fully AI-enabled force and we do not.”

AI-enabled weapons may offer superhuman speed and precision. To gain a military advantage, the temptation for armies will be to let them not only recommend decisions but also issue orders. That could have worrying consequences. Able to think faster than humans, an AI-enabled command system might cue up missile strikes on aircraft carriers and airbases at a pace that leaves no time for diplomacy, and in ways its operators do not fully understand. On top of that, AI systems can be hacked and tricked with manipulated data.

AI in war might aid surprise attacks or confound them, and the death toll could range from none to millions. Unlike missile silos, software cannot be spied on from satellites. And whereas warheads can be inspected by enemies without reducing their potency, showing the outside world an algorithm could compromise its effectiveness. The incentive may be for both sides to mislead the other. “Adversaries’ ignorance of AI-developed configurations will become a strategic advantage,” suggests Henry Kissinger, who led America’s cold-war arms-control efforts with the Soviet Union… Amid a confrontation between the world’s two big powers, the temptation will be to cut corners for temporary advantage.

Excerpts from Mind control: Artificial intelligence and war, Economist, Sept. 7, 2019

Example of the Use of AI in Warfare: The Real-time Adversarial Intelligence and Decision-making (RAID) program, under the auspices of the Defense Advanced Research Projects Agency’s (DARPA) Information Exploitation Office (IXO), focuses on the challenge of anticipating enemy actions in a military operation. In the US Air Force community, the term predictive battlespace awareness refers to capabilities that would help the commander and staff to characterize and predict likely enemy courses of action… Today’s practices of military intelligence and decision-making do include a number of processes specifically aimed at predicting enemy actions. Currently, these processes are largely manual and mental, and do not involve any significant use of technical means. Even when computerized wargaming is used (albeit rarely in field conditions), it relies either on human guidance of the simulated enemy units or on simple reactive behaviors of such simulated units; in neither case is there a computerized prediction of intelligent, forward-looking enemy actions…

[The deception reasoning of the adversary is very important in this context.] Deception reasoning refers to an important aspect of predicting enemy actions: the fact that military operations have historically depended, crucially, on the ability to use various forms of concealment and deception for friendly purposes while detecting and counteracting the enemy’s concealment and deception. Therefore, adversarial reasoning must include deception reasoning.

The RAID Program will develop a real-time adversarial predictive analysis tool that operates as an automated enemy predictor providing a continuously updated picture of probable enemy actions in tactical ground operations. The RAID Program will strive to: prove that adversarial reasoning can be automated; prove that automated adversarial reasoning can include deception….
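The program’s two goals, automated adversarial reasoning that includes deception, can be pictured with a toy model. The sketch below is a simplification of the idea, not DARPA’s algorithm, and all courses of action and probabilities are invented: it keeps a belief over hypothetical enemy courses of action (COAs) and applies one Bayes update when scouts report movement to the north; deception enters because the feint COA makes that misleading observation especially likely.

```python
# Toy Bayesian predictor over enemy courses of action (COAs); illustrative only.
# Deception reasoning appears as a COA under which misleading observations are
# *more* likely than under the straightforward interpretation.

PRIORS = {"attack-north": 0.4, "attack-south": 0.4, "feint-north-attack-south": 0.2}

# P("movement seen in the north" | COA), made-up numbers for illustration.
LIKELIHOOD = {
    "attack-north": 0.8,              # really attacking north
    "attack-south": 0.1,              # movement north would be unexpected
    "feint-north-attack-south": 0.9,  # deliberately showing force in the north
}

def update(belief, obs_likelihood):
    """One Bayes update: posterior is proportional to prior times likelihood."""
    unnorm = {coa: belief[coa] * obs_likelihood[coa] for coa in belief}
    total = sum(unnorm.values())
    return {coa: p / total for coa, p in unnorm.items()}

belief = update(PRIORS, LIKELIHOOD)  # after the "movement north" report
for coa, p in belief.items():
    print(f"{coa}: {p:.2f}")  # the feint hypothesis gains weight, not just attack-north
```

A predictor without the feint hypothesis would read the same report as near-certain evidence of a northern attack; keeping deception inside the hypothesis space is what the RAID description calls deception reasoning.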

Excerpts from Real-time Adversarial Intelligence and Decision-making (RAID), US Federal Grants

Military Robots and Automated Killing

Military robots come in an astonishing range of shapes and sizes. DelFly, a dragonfly-shaped surveillance drone built at the Delft University of Technology in the Netherlands, weighs less than a gold wedding ring, camera included. At the other end of the scale is America’s biggest and fastest drone, the $15m Avenger, the first of which recently began testing in Afghanistan. It uses a jet engine to carry up to 2.7 tonnes of bombs, sensors and other types of payload at more than 740kph (460mph).

On the ground, robots range from truck-sized to tiny. TerraMax, a robotics kit made by Oshkosh Defense, based in Wisconsin, turns military lorries or armoured vehicles into remotely controlled or autonomous machines. And smaller robotic beasties are hopping, crawling and running into action, as three models built by Boston Dynamics, a spin-out from the Massachusetts Institute of Technology (MIT), illustrate.  By jabbing the ground with a gas-powered piston, the Sand Flea can leap through a window, or onto a roof nine metres up. Gyro-stabilisers provide smooth in-air filming and landings. The 5kg robot then rolls along on wheels until another hop is needed—to jump up some stairs, perhaps, or to a rooftop across the street. Another robot, RiSE, resembles a giant cockroach and uses six legs, tipped with short, Velcro-like spikes, to climb coarse walls. Biggest of all is the LS3, a four-legged dog-like robot that uses computer vision to trot behind a human over rough terrain carrying more than 180kg of supplies. The firm says it could be deployed within three years.

Demand for land robots, also known as unmanned ground vehicles (UGVs), began to pick up a decade ago after American-led forces knocked the Taliban from power in Afghanistan. Soldiers hunting Osama bin Laden and his al-Qaeda fighters in the Hindu Kush were keen to send robot scouts into caves first. Remote-controlled ground robots then proved enormously helpful in the discovery and removal of makeshift roadside bombs in Afghanistan, Iraq and elsewhere. Visiongain, a research firm, reckons a total of $689m will be spent on ground robots this year. The ten biggest buyers, in descending order, are America, Israel (a distant second), Britain, Germany, China, South Korea, Singapore, Australia, France and Canada.

Robots’ capabilities have steadily improved. Upload a mugshot into an SUGV, a briefcase-sized robot that runs on caterpillar tracks, and it can identify a man walking in a crowd and follow him. Its maker, iRobot, another MIT spin-out, is best known for its robot vacuum cleaners. Its latest military robot, FirstLook, is a smaller device that also runs on tracks. Equipped with four cameras, it is designed to be thrown through windows or over walls.

Another throwable reconnaissance robot, the Scout XT Throwbot made by Recon Robotics, based in Edina, Minnesota, was one of the stars of the Ground Robotics Capabilities conference held in San Diego in March. Shaped like a two-headed hammer with wheels on each head, the Scout XT has the heft of a grenade and can be thrown through glass windows. Wheel spikes provide traction on steep or rocky surfaces. In February the US Army ordered 1,100 Scout XTs for $13.9m. Another version, being developed with the US Navy, can be taken to a ship inside a small aquatic robot, and will use magnetic wheels to climb up the hull and onto the deck, says Alan Bignall, Recon’s boss.

Even more exotic designs are in development. DARPA, the research arm of America’s Department of Defence, is funding the development of small, soft robots that move like jerky slithering blobs. EATR, another DARPA project, is a foraging robot that gathers leaves and wood for fuel and then burns it to generate electricity. Researchers at Italy’s Sant’Anna School of Advanced Studies, in Pisa, have designed a snakelike aquatic robot. And a small helicopter drone called the Pelican, designed by German and American companies, could remain aloft for weeks, powered by energy from a ground-based laser….

A larger worry is that countries with high-performance military robots may be more inclined to launch attacks. Robots protect soldiers and improve their odds of success. Using drones sidesteps the tricky politics of putting boots on foreign soil. In the past eight years drone strikes by America’s Central Intelligence Agency (CIA) have killed more than 2,400 people in Pakistan, including 479 civilians, according to the Bureau of Investigative Journalism in London. Technological progress appears to have contributed to an increase in the frequency of strikes. In 2005 CIA drones struck targets in Pakistan three times; last year there were 76 strikes there. Do armed robots make killing too easy?

Not necessarily… Today’s drones, blimps, unmanned boats and reconnaissance robots collect and transmit so much data, says Missy Cummings, a former fighter pilot who is now a robotics researcher at MIT, that Western countries now practise “warfare by committee”. Government lawyers and others in operations rooms monitor video feeds from robots to call off strikes that are illegal or would “look bad on CNN”, she says. And unlike pilots at the scene, these remote observers are unaffected by the physical toil of flying a jet or the adrenalin rush of combat.

In March Britain’s Royal Artillery began buying robotic missiles designed by MBDA, a French company. The Fire Shadow is a “loitering munition” capable of travelling 100km, more than twice the maximum range of a traditional artillery shell. It can circle in the sky for hours, using sensors to track even a moving target. A human operator, viewing a video feed, then issues an instruction to attack, fly elsewhere to find a better target, or abort the mission by destroying itself. But bypassing the human operator to automate attacks would be, technologically, in the “realm of feasibility”, an MBDA spokesman says…

Traditional rules of engagement stipulate that a human must decide if a weapon is to be fired. But this restriction is starting to come under pressure. Already, defence planners are considering whether a drone aircraft should be able to fire a weapon based on its own analysis. In 2009 the authors of a US Air Force report suggested that humans will increasingly operate not “in the loop” but “on the loop”, monitoring armed robots rather than fully controlling them. Better artificial intelligence will eventually allow robots to “make lethal combat decisions”, they wrote, provided legal and ethical issues can be resolved…
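The report’s distinction is easy to state in code. In this hypothetical sketch (the function names and veto window are mine, not the report’s), an in-the-loop controller blocks until a human approves each action, while an on-the-loop controller proceeds unless a veto arrives within a fixed window.

```python
# Hypothetical sketch of "in the loop" vs. "on the loop" human control.
import queue

def in_the_loop(approvals: "queue.Queue[bool]") -> bool:
    """In the loop: nothing happens until a human explicitly approves or rejects."""
    return approvals.get()  # blocks indefinitely, waiting for a human decision

def on_the_loop(vetoes: "queue.Queue[bool]", window_s: float = 2.0) -> bool:
    """On the loop: the machine proceeds unless a human veto arrives in time."""
    try:
        vetoes.get(timeout=window_s)  # any message within the window is a veto
        return False
    except queue.Empty:
        return True  # the window elapsed with no veto: the machine acts

# A silent human: in_the_loop would wait forever; on_the_loop acts after 2 seconds.
print(on_the_loop(queue.Queue()))  # True
```

The shift the report describes is precisely the move from the first function’s default of inaction to the second’s default of action.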

Pressure will grow for armies to automate their robots if only so machines can shoot before being shot, says Jürgen Altmann of the Technical University of Dortmund, in Germany, and a founder of the International Committee for Robot Arms Control, an advocacy group. Some robot weapons already operate without human operators to save precious seconds. An incoming anti-ship missile detected even a dozen miles away can be safely shot down only by a robot, says Frank Biemans, head of sensing technologies for the Goalkeeper automatic ship-defence cannons made by Thales Nederland. Admittedly, that involves a machine destroying another machine. But as human operators struggle to assimilate the information collected by robotic sensors, decision-making by robots seems likely to increase.

This might be a good thing, says Ronald Arkin, a roboticist at the Georgia Institute of Technology, who is developing “ethics software” for armed robots. By crunching data from drone sensors and military databases, it might be possible to predict, for example, that a strike from a missile could damage a nearby religious building. Clever software might be used to call off attacks as well as initiate them.
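Arkin’s idea can be pictured with a deliberately crude check; the sketch below is an illustration of the concept, not his system, and the site list, coordinates and radius are invented. It vetoes a proposed strike whenever a protected site falls inside the weapon’s estimated effects radius.

```python
# Crude illustration of an automated call-off check; not Arkin's software.
# Real collateral-damage estimation involves far more than a distance test.
import math

# Hypothetical protected sites: name -> (x, y) position on a local grid, in metres.
PROTECTED_SITES = {"mosque": (120.0, 80.0), "hospital": (500.0, 410.0)}

def strike_permitted(impact_xy, effects_radius_m):
    """Return False (call off) if any protected site lies within the effects radius."""
    ix, iy = impact_xy
    for name, (sx, sy) in PROTECTED_SITES.items():
        if math.hypot(sx - ix, sy - iy) <= effects_radius_m:
            print(f"call off: {name} is inside the {effects_radius_m} m radius")
            return False
    return True

print(strike_permitted((130.0, 90.0), 50.0))   # False: the mosque is ~14 m away
print(strike_permitted((900.0, 900.0), 50.0))  # True in this toy layout
```

Run before launch rather than after, the same test is what “calling off attacks as well as initiating them” amounts to in miniature.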

In the air, on land and at sea, military robots are proliferating. But the revolution in military robotics does have an Achilles heel, notes Emmanuel Goffi of the French air-force academy in Salon-de-Provence. As robots become more autonomous, identifying a human to hold accountable for a bloody blunder will become very difficult, he says. Should it be the robot’s programmer, designer, manufacturer, human overseer or that overseer’s superiors? It is hard to say. The backlash from a deadly and well-publicised mistake may be the only thing that can halt the rapid march of the robots.

Excerpts from Robots go to war: March of the robots, Economist Technology Quarterly, June 2, 2012