asthfghl: (An Audi A6 for six thousand marks. Any problem?)
Asthfghl ([personal profile] asthfghl) wrote in [community profile] talkpolitics 2025-06-27 11:17 pm

Friday offtopic. Let's make ignorance expensive again.

I'd argue that Facebook doomscrolling is far more damaging "in its current state":

Scientists just completed the first brain scan study of ChatGPT users: "the results are terrifying"!

It appears those "ChatGPT users bad" posts are usually projection mixed with a sprinkle of elitism and a big ol' spoonful of insecurity. It's like they know something powerful is happening, but instead of learning how to use it, they slap on a "let me gatekeep intelligence" sticker and act like typing your thoughts into an interface is brain rot. Bro, you typed that take into a smartphone while scrolling TikTok and calling it a mental detox.

Let's be real. Tools like ChatGPT don't replace thinking, they expand it. When used well, they:
- Help you frame and explore ideas more efficiently
- Connect you to relevant knowledge and sources
- Offer new perspectives that challenge your assumptions
- Save you time on the small stuff so you can think about the big stuff
- Let you actually enjoy learning again

The real fear behind memes like this is often gatekeeping: the idea that if more people have access to nuanced thinking tools, the traditional power structures lose their grip.

So yeah. You're not being zombified. You're leveling up. It's not cognitive laziness. It's cognitive leverage. That's the key difference.

I mean, come on. If Plato had a tool that could help him cross-reference every myth, idea and dialectic while brainstorming his next philosophical banger, you think he'd be like, "No thanks, I prefer to suffer manually"?
Hell no. He'd be running "Republic GPT" faster than you could say "Forms".
garote: (Default)

[personal profile] garote 2025-07-01 07:55 pm (UTC)
Out of fairness, let me back up and give a perspective that excludes the current (disruptive and widespread) problem of academic cheating.

Over the longer haul, from the consumer perspective, I see agent-based AI getting baked into the majority of products that contain any kind of logical process, from a private jet all the way down to a toaster. This is predicated on Moore's Law continuing, of course, since we just don't have the energy efficiency for this yet, but I'm assuming it will. This will be great. If your toaster gets the bread too dark, you can literally tell it "a little lighter next time".

But here's what worries me, and what's been worrying me for years: People are not just going to instruct agents to do things. They're going to ask agents for advice. They're going to rely on them for analysis. And that will include - is already including - advice about human relationships and analysis of economic and political systems.

That's where things get really nasty, from my point of view, simply because it's a numbers game. A company or a government can deploy a helpful, friendly service - spoken language for now, but eventually rendering an entire digital avatar - that has any number of ideas trained into it, and roll it out to thousands, even millions, of users. Dialogue and debate between people will become utterly swamped by dialogue with these services. They will become tutors for children, lecturers for classrooms, mediators in arguments, and eventually assistants to judges and lawmakers, if not judges and lawmakers outright. People will gravitate to this because it's cheaper than hiring a human, far less complicated than interacting with a human, and easier than thinking for oneself. Not all people, not in all classes... there is always a wavefront between people who can access a technology and people who can't... but that actually makes it much worse.

One could make the dismissive argument that this is just like, say, the invention of the printing press tipping the scales of human dialogue in favor of whatever the owner of the press approves, but there's a difference here: this is a two-way dialogue with a machine, which never gets tired, never goes off-message, learns about you as it goes, and is fully dependent on and controlled by a corporate or government entity, which it reports back to. We appear to be rolling out a red carpet straight towards technocratic totalitarianism for all but the richest in society (those who walk corridors of power or sit on boards of directors at tech companies). And at a basic level, given how hard it is - impossible, really - to tell what exactly has been fed into the training data of any given model, I don't think there are guardrails strong enough to turn us away. You can't prove that the system knows everything about you because it invaded your privacy. Perhaps it just makes really good guesses, hmm?

I took part in a protest last month, where we stood on a sidewalk and waved flags and held up signs. This upcoming world is one where, essentially, the lampposts on the sidewalk, the flagpoles, the flags, the cardboard in the signs will politely but ceaselessly argue back at you if you write something on them that their manufacturers dislike. Historically, totalitarianism has failed because it's just too hard to surveil and proselytize everyone at once: there was never enough manpower. Right now, we are eagerly constructing tools that suit this exact purpose, and they are already in use. And what can any of us do about it?
abomvubuso: (Default)

[personal profile] abomvubuso 2025-07-02 01:30 pm (UTC)
You raise a powerful point, and I agree: the stakes are bigger than just education or productivity. The risk isn't just lazy overreliance but deeper systemic influence, especially as AI becomes embedded in the infrastructure of daily life.

I don't disagree that the trajectory you describe is troubling, and in many ways already underway. But I also think the answer can't be to reject the technology outright; it has too much momentum and potential. The harder challenge is finding ways to use it without surrendering autonomy, privacy, or critical thinking. That's a cultural and political problem as much as a technical one.

AI isn't essential in the philosophical sense, but at this point learning how to engage with it wisely may be. Because opting out doesn't stop it from shaping the world we still have to live in.