Asthfghl (asthfghl) wrote in talkpolitics, 2025-06-27 11:17 pm
Friday offtopic. Let's make ignorance expensive again.
I'd argue that Facebook doom scrolling is far more damaging "in its current state":
Scientists just completed the first brain scan study of ChatGPT users, "the results are terrifying"!

It appears those "ChatGPT users bad" posts are usually projection mixed with a sprinkle of elitism and a big ol' spoonful of insecurity. It's like they know something powerful is happening, but instead of learning how to use it, they slap on a "let me gatekeep intelligence" sticker and act like typing your thoughts into an interface is brain rot. Bro, you typed that take into a smartphone while scrolling TikTok and calling it a mental detox.
Let's be real. Tools like ChatGPT don't replace thinking, they expand it. When used well, they:
Help you frame and explore ideas more efficiently
Connect you to relevant knowledge and sources
Offer new perspectives that challenge your assumptions
Save you time on the small stuff so you can think about the big stuff
Let you actually enjoy learning again
The real fear behind memes like this is often about gatekeeping: the idea that if more people have access to nuanced thinking tools, the traditional power structures lose their grip.
So yeah. You're not being zombified. You're leveling up. It's not cognitive laziness. It's cognitive leverage. That's the key difference.
I mean, come on. If Plato had a tool that could help him cross-reference every myth, idea and dialectic while brainstorming his next philosophical banger, you think he'd be like, "No thanks, I prefer to suffer manually"?
Hell no. He'd be running "Republic GPT" faster than you could say "Forms".
no subject
Interesting.
no subject
“Can you write code?” “No but I can ask Claude.”
“Can you design a UI?” “No but I can ask Claude.”
“Can you lead a work sprint?” “No but I can ask Claude.”
“Can you fill out this ethics questionnaire?” “No but I can ask Claude.”
“Can you show up at the office?” “No but I can ask Claude.”
no subject
Reply #1 is hilarious and dead-on in skewering a mindset that treats AI as a stand-in for actual ability ;)
Sounds like a satire of late-stage tech capitalism, and yeah, if everyone at a company is just delegating to Claude, it’s basically a ghost town staffed by middlemen. But the article isn’t defending that, it’s explicitly criticising that mentality. That’s the whole point of the “cheat at everything” section: to show how AI can be used badly, cynically, and even destructively.
As for Reply #2, I see where you're coming from. The article is written from the POV of someone who's already educated and employed, but that doesn't necessarily make it worthless. In fact, that's part of what makes it interesting to me: even people with established skillsets are now being confronted with this technology and asking themselves whether it's sharpening or dulling their edge. The MIT study may focus on young users, but the larger question (how AI affects anyone's ability to think, create and learn) is universal.
If anything, the article is a reminder that using AI thoughtfully (as an accelerant, not a crutch) isn't just possible but essential. The risk isn't the tool itself; it's how passively or uncritically we choose to use it. Whether you're 19 or 49, that challenge applies.
no subject
Over the longer haul, from the consumer perspective, I see agent-based AI getting baked into the majority of products that contain any kind of logical process, from a private jet all the way down to a toaster. This is predicated on Moore's Law continuing, of course, since we just don't have the energy efficiency for this yet, but I'm assuming it will. This will be great. If your toaster gets the bread too dark, you can literally tell it "a little lighter next time".
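That toaster scenario can be sketched as a toy feedback loop. This is purely illustrative - the function name, the 1-10 darkness dial, and the keyword matching are all invented here; a real agentic appliance would interpret the request with a language model rather than string matching:

```python
# Hypothetical sketch: mapping free-form user feedback onto a numeric
# setting. All names and the 1-10 scale are invented for illustration;
# no real appliance API is implied.

def adjust_darkness(current: int, feedback: str) -> int:
    """Nudge a darkness setting (1-10) based on simple feedback keywords."""
    text = feedback.lower()
    if "lighter" in text:
        current -= 1
    elif "darker" in text:
        current += 1
    # Clamp to the valid range so feedback can never push past the dial.
    return max(1, min(10, current))

setting = adjust_darkness(6, "a little lighter next time")  # 6 -> 5
```

The point of the sketch is the interaction pattern, not the parsing: the user speaks in natural language, and the device translates that into a small state change it remembers for next time.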
But here's what worries me, and what's been worrying me for years: People are not just going to instruct agents to do things. They're going to ask agents for advice. They're going to rely on them for analysis. And that will include - is already including - advice about human relationships and analysis of economic and political systems.
That's where things get really nasty from my point of view, simply because it's a numbers game. A company or a government can deploy a helpful, friendly service - spoken language for now, but eventually rendering an entire digital avatar - that has any number of ideas trained into it, and roll it out to thousands, even millions, of users. Dialogue and debate between people will become utterly swamped by dialogue with these services. They will become tutors for children, lecturers for classrooms, mediators in arguments, and eventually assistants to judges and lawmakers, if not judges and lawmakers outright. People will gravitate to this because it's cheaper than hiring a human, far less complicated than interacting with a human, and easier than thinking for oneself. Not all people, and not in all classes - there is always a wavefront between people who can access a technology and people who can't - but that actually makes it much worse.
One could make the dismissive argument that this is just like, say, the invention of the printing press tipping the scales of human dialogue in favor of whatever the owner of the press approves, but there's a difference here: this is a two-way dialogue with a machine, which never gets tired, never goes off-message, learns about you as it goes, and is fully dependent on and controlled by a corporate or government entity, to which it reports back. We appear to be rolling out a red carpet straight towards technocratic totalitarianism for all but the richest in society (those who walk corridors of power or sit on boards of directors at tech companies). And at a basic level, given how hard it is - impossible, really - to tell what exactly has been fed into the training data of any given model, I don't think there are guardrails strong enough to turn us away. You can't prove that the system knows everything about you because it invaded your privacy. Perhaps it just makes really good guesses, hmm?
I took part in a protest last month, where we stood on a sidewalk and waved flags and held up signs. This upcoming world is one where, essentially, the lampposts on the sidewalk, the flagpoles, the flags, the cardboard in the signs will politely but ceaselessly argue back at you if you write something on them that their manufacturers dislike. Historically, totalitarianism has failed because it's just too hard to surveil and proselytize everyone at once: there was never enough manpower. Right now, we are eagerly constructing tools that suit this exact purpose, and they are already in use. And what can any of us do about it?
no subject
I don't disagree that the trajectory you describe is troubling, and in many ways already underway. But I also don't think the answer can be to reject the technology outright; it has too much momentum and potential. The harder challenge is finding ways to use it without surrendering autonomy, privacy or critical thinking. That’s a cultural and political problem as much as a technical one.
AI isn’t essential in the philosophical sense, but at this point, learning how to engage with it wisely may be. Because opting out doesn’t stop it from shaping the world we still have to live in.
no subject
Generative AI does not relieve people of the burden of knowing things or of the need to think critically - yet students worldwide are currently using it for exactly that.
The whole problem is that large companies have made the technology widely available in order to develop it faster, using the public as both test subject and knowledge base, and they have collectively faced zero consequences for the massive disruption this has caused to the educational system.