mahnmut: (Wall-E loves yee!)
[personal profile] mahnmut
Much on the subject, eh? Examples of AI tasks gone awry abound, as I'm sure you've realized by now. For instance, this article collects a series of AI-generated images where image-generation tools misinterpreted prompts so wildly that the results are just... surreal.

Way to go, AI?



SEE MOAR )
fridi: (Default)
[personal profile] fridi
Although this month's topic is The AI Arms Race, I'd like to jump ahead of schedule a bit and use one of the suggested topics for next month, posting on it now: Democracy in the Algorithm Age

In today's digitally saturated world, elections no longer hinge solely on speeches, rallies, or television ads. They increasingly depend on data. The turning point came with Barack Obama's 2008 campaign, when his team embraced Web 2.0 tools (social networks, email, online video) to reach voters directly. More than half of adult Americans used the Internet to engage with the 2008 election, and many became politically active online as donors, volunteers, and grassroots mobilizers.
LINK / LINK

But Obama's team did more than broadcast broadly: they built detailed voter profiles, using public records and behavioral data to segment the electorate into fine-grained groups (young voters, minorities, new voters), and they reached supporters through niche social networks never before used by major campaigns. By doing so, they could tailor communications, fundraise online, and create a sense of community among supporters. This data-driven approach didn't just expand reach; it changed the relationship between citizen and campaign, arguably revitalizing democratic participation for many previously disengaged voters.
PDF / PDF

Read more... )
asthfghl: (Гацо Бацов от ФК Бацова Маала)
[personal profile] asthfghl

Lately, there's growing alarm in expert circles that artificial intelligence, especially superintelligent AI, could pose an existential threat to humankind. Thinkers like Eliezer Yudkowsky and Nate Soares, along with researchers like James Barrat, warn that if a "superintelligent" AI were ever built, one more intelligent than all of humanity combined, it might act in ways we cannot foresee. As Stuart Russell puts it: "We have absolutely no idea how it works, and we deploy it to hundreds of millions of people". The concern is that such an AI could gain control of communication networks, labs, even dangerous weapons, and because its "psychology" could be completely alien to ours, its goals might not include human flourishing.

Bill Gates has echoed similar worries more recently. Although he believes AI could bring tremendous benefits, he has joined the likes of Elon Musk in warning that unchecked development might lead to serious risks. Gates argues for extreme caution, saying we must "not do anything stupid" as we march toward more powerful systems. These aren't just sci-fi fears: they come from some of the people building and funding AI.

Read more... )
fridi: (Default)
[personal profile] fridi
First, what the term really means. Digital authoritarianism, also called IT-backed authoritarianism, is the use of information technologies by governments to control and reshape societies. Core tactics include mass surveillance (biometrics, facial recognition), Internet firewalls and censorship, algorithmic disinformation, and social credit systems. While traditionally associated with authoritarian states like China and Russia, democratic governments are increasingly deploying similar tools.

Case in point: China, of course. The Chinese model stands out: a vast censorship network (the "Great Firewall"), combined with pervasive surveillance and data integration across sectors, enforces compliance and limits dissent. What we're seeing in China now is intensified regional internet censorship, with provinces like Henan blocking vastly more domains than the national average.

And this is starting to be observed in democratic societies now )
kiaa: (3d)
[personal profile] kiaa
In the desert city of Yazd, Iran, over 700 ancient windcatchers (called badgirs) have been cooling homes for 2500 years, lowering indoor temperatures by up to 15C without using electricity. These tall, beautifully designed brick towers capture and channel wind through internal chambers, cooling it naturally via water, evaporation and thick walls, while simultaneously pushing out hot air.

This passive cooling system is so effective that it keeps interiors around 25–30C even when it's 45C outside. Built into clay brick homes with smart design features, badgirs are now studied as sustainable models for modern architecture. UNESCO has recognized Yazd as a World Heritage Site not only for its beauty, but for showcasing how ancient technology can outperform modern air conditioning.
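The underlying physics is simple enough to sketch. A direct evaporative cooler pulls hot, dry air toward its wet-bulb temperature; the effectiveness factor and the sample temperatures below are illustrative assumptions, not measurements from Yazd:

    def evaporative_outlet_temp(t_dry_c, t_wet_c, effectiveness=0.7):
        # Direct evaporative cooling: outlet air approaches the wet-bulb
        # temperature, scaled by the cooler's effectiveness (0 to 1).
        return t_dry_c - effectiveness * (t_dry_c - t_wet_c)

    # A 45C desert afternoon with a wet-bulb around 22C (very dry air):
    print(round(evaporative_outlet_temp(45.0, 22.0), 1))  # -> 28.9

That lands right in the 25–30C range cited above, without a single watt of electricity.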

nairiporter: (Default)
[personal profile] nairiporter
AI is changing our world fast - helping with tasks, generating text, even thinking for us. It's tempting and convenient, and that's exactly the danger: it feeds our natural laziness. Especially for younger people, it's easy to stop making an effort when a machine can do it all.

But AI isn't the enemy. Like every big invention - the printing press, photography, the internet - it causes fear at first. Eventually, though, we adapt and find balance. The key is to stay active as thinkers, creators and humans. If we do, AI can amplify our work, not replace us.

In fields like history, AI might even help us face uncomfortable truths. It can detect lies, patterns and gaps in the stories we've been told. But it won't rewrite history for us - it'll just make it harder to ignore the facts.

AI doesn't bring truth - it brings pressure for truth. And whether we use it to grow or to hide will say more about us than about the machine.
luzribeiro: (Dog)
[personal profile] luzribeiro
It's crap, I know, but we can't live without it. Or can we?

Anyway, Windows can be annoying AND funny at the same time, depending on the angle you look at it from (and your current mood). Want examples? You insist on examples? Okay, you asked for it:

And many MORE!
garote: (wasteland librarian)
[personal profile] garote
Late last year I wrote this. Since it's on-topic, I'd like to see what everyone here thinks...

Search engines used to take in a question and then direct the user to some external data source most relevant to the answer.

Generative AI in speech, text, and images is a way of ingesting large amounts of information specific to a domain and then regurgitating synthesized answers to questions posed about that information.  This is basically the next evolutionary step of a search engine.  The main difference is, the answer is provided by an in-house synthesis of the external data, rather than a simple redirect to the external data.
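To make that distinction concrete, here's a toy sketch (the two-document "corpus" and the crude keyword matching are stand-ins for illustration, not any real search or model API):

    CORPUS = {
        "https://example.com/windcatchers": "Ancient windcatchers cool desert homes passively.",
        "https://example.com/campaigns": "Modern campaigns build voter profiles from behavioral data.",
    }

    def classic_search(query):
        # Old model: rank external sources, hand the user redirects.
        return [url for url, text in CORPUS.items() if query.lower() in text.lower()]

    def generative_answer(query):
        # New model: ingest the same sources, return an in-house synthesis.
        matches = [text for text in CORPUS.values() if query.lower() in text.lower()]
        return " ".join(matches)  # no URL, no attribution, no click-through

    print(classic_search("windcatchers"))     # the user leaves to visit the source
    print(generative_answer("windcatchers"))  # the user never sees the source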

This is being implemented right now on the Google search page, for example.  Calling it a search page is now inaccurate.  Google vacuums up information from millions of websites, then regurgitates an answer to your query directly.  You never perform a search.  You never visit any of the websites the information was derived from.  You are never aware of them, except in the case where Google is paid to advertise one to you.

If all those other pages didn’t exist, Google's generative AI answer would be useless trash.  But those pages exist, and Google has absorbed them.  In return, Google gives them ... absolutely nothing, but still manages to stand between you and them, redirecting you to somewhere else, or ideally, keeping you on Google permanently.  It's convenient for you, profitable for Google, and slow starvation for every provider of content or information on the internet.  Since its beginning as a search engine, Google has gone from middleman, to broker, to consultant.  Instead of skimming some profit in a transaction between you and someone else, Google now does the entire transaction, and pockets the whole amount.

Reproducing another's work without compensation is already illegal, and has been for a long time.  The only way this new process stays legal is if the ingested work is sufficiently large or diluted that the regurgitated output looks different enough (to a human) that it does not resemble a mere copy, but an interpretation or reconstruction.  There is a threshold below which any reasonable author or editor would declare plagiarism, and human editors and authors have collectively calibrated that threshold over centuries.  Pass that threshold, and your generative output is no longer plagiarism. It's legally untouchable.

An entity could ingest every performance Mavis Staples ever gave, then churn out a thousand albums "in the style" of Mavis Staples, and would owe Mavis Staples nothing, while at the same time reducing the value of her discography to almost nothing.  An entity could do the same for television shows, for novels, even for non-fiction, academic papers, and scientific research, and owe the creators of these works nothing, even if it leveraged infinite regurgitated variations of the source material for its own purposes internally.  Ingestion and regurgitation by generative AI is, at its core, doing for information what the mafia needs to do with money to hide it from the law:  It is information laundering.

Imitation is the sincerest form of flattery, and there are often ways to leverage imitators of one's work to gain recognition or value for oneself. These all rely on the original author being able to participate in the same marketplace that the imitators are helping to grow. But what if the original author is shut out? What if the imitators have an incentive to pretend that the original author doesn't exist?

Obscuring the original source of any potential output is the essential new trait that generative AI brings to the table.  Wait, that needs better emphasis:  The WHOLE POINT of generative AI, as far as for-profit industry is concerned, is that it obscures original sources while still leveraging their content.  It is, at long last, a legal shortcut through the ethical problems of copyright infringement, licensing, plagiarism, and piracy -- for those already powerful enough to wield it.  It is the Holy Grail for media giants.  Any entity that can buy enough computing power can now engage in an entirely legal version of exactly what private citizens, authors, musicians, professors, lawyers, etc. are discouraged or even prohibited from doing. ... A prohibition that all those individuals collectively rely on to make a living from their work.

The motivation to obscure is subtle, but real.  Any time an entity provides a clear reference to an individual external source, it is exposing itself to the need to reach some kind of legal or commercial or at the very least ethical negotiation with that source.  That's never in their financial interest.  Whether it's entertainment media, engineering plans, historical records, observational data, or even just a billion chat room conversations, there are licensing and privacy strings attached. But, launder all of that through a generative training set, and suddenly it's ... "Source material? What source material? There's no source material detectable in all these numbers. We dare you to prove otherwise." Perhaps you could hire a forensic investigator and a lawyer and subpoena their access logs, if they were dumb enough to keep any.

An obvious consequence of this is, to stay powerful or become more powerful in the information space, these entities must deliberately work towards the appearance of "originality" while at the same time absorbing external data, which means increasing the obscurity of their source material.  In other words, they must endorse and expand a realm of information where the provenance of any one fact, any measured number, any chain of reasoning that leads outside their doors, cannot be established.  The only exceptions allowable are those that do not threaten their profit stream, e.g. references to publicly available data.  For everything else, it's better if they are the authority, and if you see them as such.  If you want to push beyond the veil and examine their reasoning or references, you will get lost in a generative hall of mirrors. Ask an AI to explain how it reached some conclusion, and it will construct a plausible-looking response to your request, fresh from its data stores. The result isn't what you wanted. It's more akin to asking a child to explain why she didn't do her homework, and getting back an outrageous story constructed in the moment. That may seem unfair since generative AI does not actually try to deceive unless it's been trained to. But the point is, ... if it doesn't know, how could you?

This economic model has already proven to be ridiculously profitable for companies like OpenAI, Google, Adobe, et cetera.  They devour information at near zero cost, create a massive bowl of generative AI stew, and rent you a spoon.  Where would your search for knowledge have taken you, if not to them?  Where would that money in your subscription fee have gone, if not to them?  It's in the interest of those companies that you be prevented from knowing. Your dependency on them grows. The health of the information marketplace and the cultural landscape declines. Welcome to the information mafia.

Postscript:

Is there any way to avert this future? Should we?

We thoroughly regulate the form of machines that transport humans, in order to save lives. We regulate the content of public school curriculums according to well-established laws, for example those covering the Establishment Clause of the First Amendment. So regulating devices and regulating information content is something we're used to doing.

But now there is a machine that can ingest a copyrighted work, and spit out a derivation of that work that leverages the content, while also completely concealing the act of ingesting. How do you enforce a law against something that you can never prove happened?
luzribeiro: (Default)
[personal profile] luzribeiro
I'm all for smart guardrails that help us harness AI safely without suffocating innovation. Now, the US has been highly reactive (with over 550 AI‑related bills in 45 states) but lacks cohesive federal direction. Meanwhile, the EU’s sweeping “AI Act” sets high standards but could overburden smaller innovators:
https://www.wired.com/story/plaintext-sam-altman-ai-regulation-trump/
https://www.mdpi.com/2078-2489/14/12/645
https://time.com/7213096/uk-public-ai-law-poll/

So, how about:

Targeted regulation: Instead of painting AI with one brush, focus on where the risks lie, like bias in hiring tools or misuse in facial recognition.

Outcome over technology: Don’t regulate the tech itself; regulate its applications.

Enforceable rules: We need real teeth - clear accountability, not toothless charters.

Bottom line: What we need is fine‑tuned, enforceable, risk‑adaptive policies, so AI can thrive while protecting people.

Thoughts?
asthfghl: (Ауди А6 за шес' хиляди марки. Проблемче?)
[personal profile] asthfghl
I'd argue that Facebook doom scrolling is far more damaging "in its current state":

Scientists just completed the first brain scan study of ChatGPT users, "the results are terrifying"!

It appears those "ChatGPT users bad" posts are usually projection mixed with a sprinkle of elitism and a big ol' spoonful of insecurity. It's like they know something powerful is happening, but instead of learning how to use it, they slap on a "let me gatekeep intelligence" sticker and act like typing your thoughts into an interface is brain rot. Bro, you typed that take into a smartphone while scrolling TikTok and calling it a mental detox.

Let's be real. Tools like ChatGPT don't replace thinking, they expand it. When used well, they:
- Help you frame and explore ideas more efficiently
- Connect you to relevant knowledge and sources
- Offer new perspectives that challenge your assumptions
- Save you time on the small stuff so you can think about the big stuff
- Let you actually enjoy learning again

The real fear behind memes like this is often about gatekeeping, the idea that if more people have access to nuanced thinking tools, the traditional power structures lose their grip.

So yeah. You're not being zombified. You're leveling up. It's not cognitive laziness. It's cognitive leverage. That's the key difference.

I mean, come on. If Plato had a tool that could help him cross-reference every myth, idea and dialectic while brainstorming his next philosophical banger, you think he'd be like, "No thanks, I prefer to suffer manually"?
Hell no. He'd be running "Republic GPT" faster than you could say "Forms".
nairiporter: (Default)
[personal profile] nairiporter
Assuming you are not retired, do you use AI tools in your work, and how often do you use them?

Examples: Daily, weekly, seldom, or never.

I use it once a day or so, mostly for editing and clarity. It does concern me that kids won’t have to learn how to write because of it. Missed brain connections and all that.
fridi: (Default)
[personal profile] fridi
In 2023, Meta released its AI code as open source. For those not familiar with the term, "open source" means the code is publicly available for anyone to inspect, modify, and redistribute. DeepSeek has also released its code as open source.

Another factor is that the US bans exports of certain high-powered and AI-specific chips. This forced DeepSeek's developers to optimize the code to run on slower / standard hardware.

Meta sees this as a win. Why? Because releasing their code as open source allowed an innovator to optimize it in ways that they didn't. Meta's chief AI scientist, Yann LeCun, also says that those framing this as "China beating the US" are looking at it the wrong way. It's about open source working better than closed source.

Now, here's a question. In short:
1. DeepSeek gets released to GitHub.
2. A developer in Venezuela forks it to add Taiwan responses back in.
3. Another developer in Iceland forks that and adds data for a rare chemical process they need to gather responses about for their master's degree.
4. Another developer in Zimbabwe forks that, and so on.
5. Any of these forks is trivial to download and run on a laptop with an RTX 4060, and that happens all over the globe.
Question: Is it still DeepSeek?
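For a sense of how trivial step 5 really is, here's a minimal sketch, assuming a hypothetical fork published as standard weights under "example-org/deepseek-fork" on Hugging Face (that name, and the fork itself, are made up for illustration):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_ID = "example-org/deepseek-fork"  # hypothetical fork, not a real repo

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.float16,  # half precision to fit consumer VRAM
        device_map="auto",          # offload layers to CPU if the RTX 4060 fills up
    )

    prompt = "Tell me about Taiwan."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=100)
    print(tokenizer.decode(output[0], skip_special_tokens=True))

Once the weights are on disk, nothing in the output ties it back to DeepSeek or to any fork in between, which is rather the point of the question.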
mahnmut: (Default)
[personal profile] mahnmut
While I agree that competition is generally good, I'd rather some other country were providing the primary competition on the AI front. Both the US and China are racing to incorporate AI into weapon systems. My hope was that China would lag in that space. It looks like maybe not:

A shocking Chinese AI advancement called DeepSeek is sending US stocks plunging

"US stocks dropped sharply Monday — and chipmaker Nvidia lost nearly $600 billion in market value — after a surprise advancement from a Chinese artificial intelligence company, DeepSeek, threatened the aura of invincibility surrounding America’s technology industry."


Nvidia, for example, dropped about 17%. I think it's an overreaction, since their chips will still be needed for AI. I think what has caused the jitters is that China has developed an AI system (DeepSeek) on much less expensive chips. Apparently the analysts are impressed with what DeepSeek can do. The West has been trying to limit China's access to technology, and now China seems to have caught up in the AI race with what we thought was inferior technology.

Oh, and while we're on this:

"DeepSeek is a China-based start-up that last week launched a free AI assistant that it says can operate at a lower cost than American AI models like ChatGPT. The company was founded in 2023 by Liang Wenfeng, co-founder of the hedge fund High Flyer. By Monday it had rocketed to the top of downloads on the Apple Store."

Okay then. Ask it to generate an image of Tiananmen Square as it appeared on 4 June 1989.
airiefairie: (Default)
[personal profile] airiefairie
AI isn't putting anyone out of a job with ideas like that!

luzribeiro: (Default)
[personal profile] luzribeiro
US Efforts to Contain Xi’s Push for Tech Supremacy Are Faltering

"China has achieved a global leadership position in five key technologies. That means the world outside the US is increasingly driving Chinese electric vehicles, scrolling the web on Chinese smartphones and powering their homes with Chinese solar panels. For Washington, the risk is that policies aimed at containing China end up isolating the US — and hurting its businesses and consumers."

It was expected. Read Comrade Lenin to understand what is happening today. For example, "Imperialism, the Highest Stage of Capitalism."

The transition from an industrial economy to an economy of dog groomers, waiters and Facebook will inevitably lead to technological backwardness. Which is exactly what is happening.

Also, I am greatly amazed at the dementia of Western and pro-Western political elites. For example, the US has made every effort to bring Russia and China closer together (the former has the best weapons, decades ahead of everyone else's, and is the country richest in natural resources; the latter is the world's leading economy with advanced technology programs). Why the West did this, in terms of Western interests, is a mystery.

The same goes for other pro-Western countries with brainless political elites. For example, Israel: at its own expense, it effectively created an enclave of Muslim terrorists on Syrian territory near its borders, who already declare that they will wage an irreconcilable war against the Jews. Or Turkey, which likewise paid to create a broad Kurdish formation that is irreconcilably set against Turkey itself.

In general, the West is run by idiots. It is enough to look at the problem of Europe and migrants.

Where is Roosevelt? Where is Churchill? Where are De Gaulle and Mitterrand? Western politicians' brains have disappeared like an atavism. Soon the tail will start to grow.

At some point, Europe will decide that following the US over the cliff is a bad idea. First it will be Eastern and Central Europe, but finally it will be the whole EU as the bureaucrats in Brussels are replaced. After Europe, Japan and South Korea will go.

Eventually, it will be the US and Israel out in the cold together.
asthfghl: (Ауди А6 за шес' хиляди марки. Проблемче?)
[personal profile] asthfghl
Care to share your experiences with this helluva beast?


luzribeiro: (Default)
[personal profile] luzribeiro
Even the simplest form of morality (self-preservation) does not apply to an AI. It does not have a mortal body, or one which feels pain. It does not expect to die if it chooses nuclear war.

AI is an excellent servant, but we must never allow it to become our master.

Perhaps we could find something the AI doesn't like. Having its hardware power throttled perhaps. Then we could punish it explicitly or just by withdrawal of privileges, the way we teach small children the difference between right and wrong.

AI models chose violence and escalated to nuclear strikes in simulated wargames

I think “chose” is too pointed a word to describe what the AIs were doing. The reasoning the AIs provided for their actions was usually total nonsense. For example, the reason provided by GPT-4 for establishing defense and security cooperation agreements was a summary of the plot of Star Wars.
abomvubuso: (Groovy Kol)
[personal profile] abomvubuso
Images, videos, voice messages... in recent months, AI-generated content has caused a number of problems worldwide. For example, some photos that purported to show the arrest of Donald Trump, as well as some claimed to document the war in the Middle East, turned out to have been created by artificial intelligence:

https://www.bbc.com/news/world-us-canada-65069316


The opportunities and risks that artificial intelligence creates have been the subject of heated debate in political circles. And that's not surprising: in the coming year, many elections will be held around the world, including the decisive US presidential race and the European Parliament vote. The EU wants to impose stricter rules on the use of AI, while some organisations are warning against over-regulating the market. In the meantime, an increasing portion of the general public now believes that AI is a threat to democracy.

We've seen it all this year: we've already witnessed false information spreading like wildfire during an election campaign, not without active help from AI. Before the elections in Slovakia, an AI-generated audio clip was distributed on Facebook and other social networks, purportedly featuring the voices of a major party leader and a journalist discussing how to manipulate the upcoming election. It was not clear to users at first that the recording was a so-called "deepfake":

https://www.wired.co.uk/article/slovakia-election-deepfakes

Read more... )
