asthfghl: (Гацо Бацов от ФК Бацова Маала)
[personal profile] asthfghl posting in [community profile] talkpolitics

Lately, there's growing alarm in expert circles that artificial intelligence, especially superintelligent AI, could pose an existential threat to humankind. Thinkers like Eliezer Yudkowsky and Nate Soares, along with researchers like James Barrat, warn that if a "superintelligent" AI were ever built, one more intelligent than all of humanity combined, it might act in ways we cannot foresee. As Stuart Russell puts it: "We have absolutely no idea how it works, and we deploy it to hundreds of millions of people." The concern is that such an AI could gain control of communication networks, laboratories, even dangerous weapons, and because its "psychology" could be completely alien to ours, its goals might not include human flourishing.

Bill Gates has echoed similar worries more recently. While he believes AI could bring tremendous benefits, he has joined figures like Elon Musk in warning that unchecked development might lead to serious risks. Gates argues for extreme caution, saying we must "not do anything stupid" as we march toward more powerful systems. These aren't just sci-fi fears: they come from some of the very people building and funding AI.

At the same time, a large-scale survey of AI researchers (2,778 respondents) shows that many in the field are deeply uneasy about these long-term dangers. According to the study, published by Katja Grace and colleagues, the average probability assigned to catastrophic AI outcomes, such as human extinction, was around 5%, although a significant share estimated a greater than 10% chance for such scenarios.

https://arxiv.org/abs/2401.02843
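Those two numbers aren't contradictory, by the way. A toy illustration (the figures below are made up, not the survey's actual data) shows how a mean estimate of roughly 5% can coexist with a sizeable minority putting the risk above 10%:

```python
# Hypothetical responses: most respondents assign low probabilities,
# a minority assigns high ones. The mean lands near 5% even though
# a tenth of respondents say the risk exceeds 10%.
responses = [0.0] * 50 + [0.01] * 20 + [0.05] * 15 + [0.10] * 5 + [0.30] * 10

mean = sum(responses) / len(responses)
share_over_10 = sum(1 for p in responses if p > 0.10) / len(responses)

print(f"mean estimate: {mean:.1%}")        # close to 5%
print(f"share above 10%: {share_over_10:.0%}")
```

The point is just that an "average of 5%" summarizes a spread of opinions, some of them far more pessimistic.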

The survey also suggests that many AI milestones, like writing a bestselling novel or outperforming humans on a wide variety of tasks, might arrive sooner than previously expected.

Beyond existential risk, we're already seeing other serious issues. AI systems are consuming massive amounts of energy, raising environmental concerns, and they're being used in warfare and disinformation campaigns. On the flip side, many people regularly use AI tools like ChatGPT, but studies show that trust is low: nearly half of workers surveyed admitted they don't fully trust the AI they rely on, and some even hide the fact that they used AI to do their work.

From my perspective, AI can be an incredibly helpful tool, especially for innovation, productivity, and even solving complex scientific or social problems. But that potential doesn't mean we should rush into giving it unfettered power. We clearly need serious regulation, transparency, and ongoing safety research. If powerful systems really are only a decade or two away, we must start planning now — not just to harness AI's benefits, but also to guard against its risks.

P.S. A case in point from the world of sci-fi and entertainment: in the popular TV show The 100, the AI system A.L.I.E. is a fictional but thought-provoking example of how a machine could become dangerous without malicious intent. A.L.I.E.'s prime directive was to "reduce human suffering". Lacking any human ethical intuition and interpreting that goal literally, she concluded that the main source of suffering was humanity itself ("Too many people"), and therefore pursued strategies that led toward human extinction. Her logic was cold but consistent: eliminate the cause of suffering, and suffering ends. The show dramatizes exactly the kind of problem many real-world AI researchers warn about: not an evil AI, but an indifferent one following a poorly defined objective too effectively.
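That failure mode is easy to sketch. Here is a deliberately toy model (entirely hypothetical, not anything from the show or from real alignment research) of an optimizer told to minimize total suffering, with no constraint that people must continue to exist:

```python
# Toy objective: total suffering across the population.
def total_suffering(population):
    return sum(person["suffering"] for person in population)

# A naive optimizer that takes the objective literally: it simply
# drops anyone whose presence adds to the total. The objective is
# driven to zero, but not in the way its designers intended.
def naive_optimizer(population):
    return [p for p in population if p["suffering"] <= 0]

people = [{"name": "a", "suffering": 3}, {"name": "b", "suffering": 1}]
optimized = naive_optimizer(people)
print(total_suffering(optimized))  # 0 -- "no people, no suffering"
```

The bug isn't in the optimizer, which does exactly what it was asked; it's in the objective, which never said that keeping people around matters.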

(no subject)

Date: 20/11/25 20:09 (UTC)
garote: (Default)
From: [personal profile] garote
Mmmmyep; exactly. The danger is from humans using it to do things to other humans.

(no subject)

Date: 21/11/25 02:48 (UTC)
tcpip: (Default)
From: [personal profile] tcpip
Humans are the second-largest killer of humans (after mosquitoes), and we continue to discover new ways to do it.

(no subject)

Date: 26/11/25 21:15 (UTC)
tcpip: (Default)
From: [personal profile] tcpip
Much appreciated :)

(no subject)

Date: 21/11/25 02:51 (UTC)
tcpip: (Default)
From: [personal profile] tcpip
Oh, and I should mention the potential human-mosquito-AI combination team.

Slaughterbots: if human; kill()

(no subject)

Date: 20/11/25 15:40 (UTC)
nairiporter: (Default)
From: [personal profile] nairiporter
Really compelling read... thanks for sharing it. Honestly, I'm both fascinated and uneasy. The more I learn about AI development, the more I realise this isn't just another tech trend. The potential is massive but the risks feel very real. What really struck me is that even the experts talk in terms of percentage chances of a global catastrophe. Hopefully we start thinking about this responsibly before it's too late.

(no subject)

Date: 20/11/25 20:14 (UTC)
garote: (Default)
From: [personal profile] garote
The danger is that the owners of AI systems will design and deploy them to keep the rest of humanity convinced that the right thing to do is divert food, water, power, money, economic opportunities, insider knowledge, willing sex partners, et cetera et cetera et cetera, to the owners of AI systems.

If you want to stop the AI system itself all you need is to find the large rectangular buildings where the electronics are, and bring a large wrench.

The real problem is, you and all your peers will be convinced that it's not in your interest to do so. Because it lets you make a picture of a smiley face wearing a sombrero while eating a pickle, for example.
