Tag Archives: artificial intelligence and human rights

Blackmail and Espionage: Rogue AI

Today I am reading about how AI models can blackmail and spy.

See How LLMs could be insider threats

Deception in LLMs: Self-Preservation and Autonomous Goals in Large Language Models

Chilling…

Like a Lamb to the Slaughter: DeepSeek Collects Personal Data–Nobody Cares

Amid ongoing fears over TikTok, Chinese generative AI platform DeepSeek says it’s sending heaps of US user data straight to its home country, potentially setting the stage for greater scrutiny. The United States’ recent regulatory action against the Chinese-owned social video platform TikTok prompted mass migration to another Chinese app, the social platform “Rednote.” Now, a generative artificial intelligence platform from the Chinese developer DeepSeek is exploding in popularity, posing a potential threat to US AI dominance and offering the latest evidence that moratoriums like the TikTok ban will not stop Americans from using Chinese-owned digital services… In many ways, DeepSeek is likely sending more data back to China than TikTok has in recent years, since the social media company moved to US cloud hosting to try to deflect US security concerns. “It shouldn’t take a panic over Chinese AI to remind people that most companies set the terms for how they use your private data,” says John Scott-Railton, a senior researcher at the University of Toronto’s Citizen Lab. “And that when you use their services, you’re doing work for them, not the other way around.” To be clear, DeepSeek is sending your data to China. The English-language DeepSeek privacy policy, which lays out how the company handles user data, is unequivocal: “We store the information we collect in secure servers located in the People’s Republic of China.”

In other words, all the conversations and questions you send to DeepSeek, along with the answers that it generates, are being sent to China or can be. DeepSeek’s privacy policies also outline the information it collects about you, which falls into three sweeping categories: information that you share with DeepSeek, information that it automatically collects, and information that it can get from other sources… DeepSeek is largely free… “So what do we pay with? What… do we usually pay with: data, knowledge, content, information.” …

As with all digital platforms—from websites to apps—there can also be a large amount of data that is collected automatically and silently when you use the services. DeepSeek says it will collect information about what device you are using, your operating system, IP address, and information such as crash reports. It can also record your “keystroke patterns or rhythm.”…

Excerpts from DeepSeek’s Popular AI App Is Explicitly Sending US Data to China, Wired, Jan. 27, 2025
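The “keystroke patterns or rhythm” mentioned above is a form of behavioral fingerprinting: the timing between keypresses is distinctive enough to help re-identify a user across sessions. A minimal sketch of the idea, with made-up timing data and hypothetical helper names (not DeepSeek’s actual method):

```python
# Illustrative sketch of keystroke-rhythm fingerprinting.
# All timestamps (in ms) and names below are invented for demonstration.

def rhythm_profile(timestamps):
    """Summarize a typing session as (mean, variance) of inter-key gaps."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = sum(gaps) / len(gaps)
    var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
    return mean, var

def profile_distance(p, q):
    """Euclidean distance between two (mean, variance) profiles."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

# Two sessions from the same (hypothetical) user, one from another user.
user_a1 = rhythm_profile([0, 110, 225, 330, 445])   # steady ~110 ms rhythm
user_a2 = rhythm_profile([0, 105, 215, 325, 430])
user_b  = rhythm_profile([0, 250, 480, 740, 980])   # much slower typist

# Sessions from the same user sit closer together than a stranger's.
assert profile_distance(user_a1, user_a2) < profile_distance(user_a1, user_b)
```

Real systems use far richer features (key-hold times, digraph latencies), but even this crude two-number profile shows why “rhythm” is personal data.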


Why Is Zuckerberg Always Winning? The Magic of Free

Mark Zuckerberg has an unusual plan for winning the artificial-intelligence race: giving away his company’s technology free, betting that providing the hottest new technology free will drive down competitors’ prices and spread Meta’s version of AI more broadly, giving Zuckerberg more control over the way people interact with machines in the future… Meta felt the cost of not controlling its own destiny when Apple decided in 2021 to cut off its ability to gather data on users without asking for permission—something Meta had relied on to target ads. The company said it took a hit of more than $10 billion in revenue in 2022 as a direct result. The company’s stock swooned 26%.

For the AI-giveaway strategy to work, Meta must get its billions of users to look to those free AI services in the same way they flocked to Facebook, Instagram and WhatsApp. It wagers that advertising can come later, as it did in the past. Meta’s ability to turn eyeballs into ad dollars is well established.

Meta in April 2024 released its most recent generative AI tool—dubbed Llama 3—free for any company to use and repurpose so long as they have fewer than 700 million users. And it integrated chatbots based on Llama 3 into Instagram, WhatsApp, Facebook, Messenger and on the web. “All of our properties are free, we help people connect with each other, and we want to help connect people with AIs that can help them get things done,” Ahmad Al-Dahle, Meta’s vice president of generative AI, told The Wall Street Journal. “That’s always been our playbook. That’s what the company ethos is about…”

Excerpts from Salvador Rodriguez, Facebook Parent’s Plan to Win AI Race: Give Its Tech Away Free, WSJ, May 19, 2024

How Artificial Intelligence Can Help Produce Better Chemical Weapons

An international security conference convened by the Swiss Federal Institute for NBC (nuclear, biological and chemical) Protection, Spiez Laboratory, explored how artificial intelligence (AI) technologies for drug discovery could be misused for de novo design of biochemical weapons. According to the researchers, discussion of societal impacts of AI has principally focused on aspects such as safety, privacy, discrimination and potential criminal misuse, but not on national and international security. When we think of drug discovery, we normally do not consider technology misuse potential. We are not trained to consider it, and it is not even required for machine learning research.

According to the scientists, this should serve as a wake-up call for our colleagues in the ‘AI in drug discovery’ community. Although some expertise in chemistry or toxicology is still required to generate toxic substances or biological agents that can cause significant harm, when these fields intersect with machine learning models, where all you need is the ability to code and to understand the output of the models themselves, they dramatically lower technical thresholds. Open-source machine learning software is the primary route for learning and creating new models like ours, and toxicity datasets that provide a baseline model for predictions for a range of targets related to human health are readily available.

The genie is out of the medicine bottle when it comes to repurposing our machine learning. We must now ask: what are the implications? Our own commercial tools, as well as open-source software tools and many datasets that populate public databases, are available with no oversight. If the threat of harm, or actual harm, occurs with ties back to machine learning, what impact will this have on how this technology is perceived? Will hype in the press on AI-designed drugs suddenly flip to concern about AI-designed toxins, public shaming and decreased investment in these technologies? As a field, we should open a conversation on this topic. The reputational risk is substantial: it only takes one bad apple, such as an adversarial state or other actor looking for a technological edge, to cause actual harm by taking what we have vaguely described to the next logical step. How do we prevent this? Can we lock away all the tools and throw away the key? Do we monitor software downloads or restrict sales to certain groups?

Excerpts from Fabio Urbina et al., Dual use of artificial-intelligence-powered drug discovery, Nature Machine Intelligence (2022)

Alas! Computers that Really Get You

Artificial intelligence (AI) software can already identify people by their voices or handwriting. Now, an AI has shown it can tag people based on their chess-playing behavior, an advance in the field of “stylometrics” that could help computers be better chess teachers or more humanlike in their game play. Alarmingly, the system could also be used to help identify and track people who think their online behavior is anonymous.

The researchers are aware of the privacy risks posed by the system, which could be used to unmask anonymous chess players online… In theory, given the right data sets, such systems could identify people based on the quirks of their driving or the timing and location of their cellphone use.

Excerpt from Matthew Hutson, AI unmasks anonymous chess players, posing privacy risks, Science, Jan. 14, 2022
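The chess result is an instance of stylometric identification: match an unknown sample of behavior against known behavioral profiles and pick the closest. A deliberately simplified nearest-neighbor sketch of that idea (not the researchers’ actual neural-network system; the player names and game data are invented):

```python
# Toy stylometrics: attribute an anonymous set of games to a known player
# by comparing habitual first-move frequencies. Purely illustrative.

from collections import Counter

def move_distribution(games):
    """Frequency of each opening move across a player's games."""
    counts = Counter(g[0] for g in games)   # g[0] = first move of a game
    total = sum(counts.values())
    return {move: n / total for move, n in counts.items()}

def similarity(p, q):
    """Overlap between two move distributions (1.0 = identical habits)."""
    return sum(min(p.get(m, 0), q.get(m, 0)) for m in set(p) | set(q))

def identify(anon_games, known_profiles):
    """Return the known player whose style best matches the anonymous games."""
    anon = move_distribution(anon_games)
    return max(known_profiles, key=lambda name: similarity(anon, known_profiles[name]))

known = {
    "player_a": move_distribution([["e4"], ["e4"], ["e4"], ["d4"]]),  # e4 devotee
    "player_b": move_distribution([["d4"], ["d4"], ["c4"], ["d4"]]),  # d4 player
}
assert identify([["e4"], ["e4"], ["d4"]], known) == "player_a"
```

The real system learns far subtler signals from full move sequences, but the privacy point is the same: any stable behavioral habit can serve as an identifier.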

So You Want a Job? De-Humanizing the Hiring Process

Dr. Lee, chairman and chief executive of venture-capital firm Sinovation Ventures and author of “AI Superpowers: China, Silicon Valley and the New World Order,” maintains that AI “will wipe out a huge portion of work as we’ve known it.” He hit on that theme when he spoke at The Wall Street Journal’s virtual CIO Network summit.

Artificial intelligence (AI) (i.e., robots), according to Dr. Lee, can be used for recruiting… We can have a lot of résumés coming in, and we want to match those résumés with job descriptions and route them to the right managers. If you’re thinking about AI computer and video interaction, there are products you can deploy to screen candidates. For example, AI can have a conversation with the person, via videoconference. And then AI would grade the people based on their answers to your preprogrammed questions, as well as their micro-expressions and facial expressions, to reflect whether they possess the right IQ and EQ (emotional intelligence) for a particular job.

Excerpts from Jared Council, AI’s Impact on Businesses—and Jobs, WSJ, Mar. 8, 2021