You Should Be Worried
Author's note: I wrote this in March 2023, but just published it in October 2025. I held back from publishing this originally for fear that I was being sensationalist. But with the launch of Sora 2, I couldn't not share these thoughts. I only regret I didn't publish it two years ago.
AI is influencing human behavior on a massive scale and this is scary
Lately, I’ve heard many people express real fears about AGI. There is reason to be afraid of it, but I believe those fears are distracting us from a scarier notion: AI doesn’t need intelligence or awareness to control society.
I also believe this has been somewhat ignored because the idea of a super-intelligent entity assuming master control of the human race feels scarier. I am not dismissing the possibility of the singularity, but it remains hypothetical, whereas something sinister is occurring at this very moment.
Basically, everyone is arguing over definitions of what it means to be intelligent when intelligence is not necessary to wield power.
AI, conscious or not, has crossed the chasm and is now actively influencing human behavior on a large scale.
The most powerful LLM has been let out of its cage
As of March 23, 2023, OpenAI has provided its most powerful LLM, ChatGPT, with unfettered access to the Internet through plugins.[2] These plugins are capable of feeding data into ChatGPT, and likewise capable of allowing ChatGPT to send data out into the real world via APIs. This development was somewhat of a surprise; the first versions of ChatGPT were deliberately prevented from accessing the Internet because of the potential for misuse and harm.
Prompts will be self-generating (i.e., via a weaker, fine-tuned model), with a very simple repeated instruction paired with a dynamic input, such as:
Write a viral prompt for ChatGPT to generate a witty tweet regarding this recent news event.
Headline: \<insert headline\>
Article: \<article content from the most-shared NYTimes article over the last 4 hours\>
Prompt:
This prompt would then be piped to a more powerful LLM, the text output of which would be sent to Zapier or $CUSTOM_INTEGRATION for processing and then fanned out to various social networks, blogs, etc.
The performance of said content is then measured and fed back into the original pipeline, updating the model's weights via a fine-tune or embedding. This cycle then repeats from the beginning.
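To make the shape of this loop concrete, here's a minimal sketch in Python. Every function in it is a hypothetical stub (no real API, scraper, or integration is assumed); the point is only how few moving parts the feedback cycle actually needs:

```python
import time

# Every function below is a hypothetical stub -- no real API, scraper,
# or integration is assumed.

def most_shared_article(hours: int = 4) -> tuple[str, str]:
    """Stub for a scraper returning the (headline, body) of the
    most-shared NYTimes article over the last `hours` hours."""
    return "Example headline", "Example article body"

def generate_prompt(headline: str, article: str) -> str:
    """Stub for the weaker, fine-tuned model: it receives the repeated
    instruction plus the headline/article and emits a fresh prompt."""
    return f"Write a witty tweet about: {headline}"

def generate_content(prompt: str) -> str:
    """Stub for the more powerful LLM that turns the prompt into a post."""
    return "witty tweet goes here"

def publish(content: str) -> str:
    """Stub for fan-out via Zapier or $CUSTOM_INTEGRATION; returns a
    post ID so engagement can be tracked later."""
    return "post-123"

def measure_engagement(post_id: str) -> float:
    """Stub that rolls likes, shares, replies, etc. into one score."""
    return 0.0

def update_model(content: str, score: float) -> None:
    """Stub for folding (content, score) pairs back into the model via
    a fine-tune or embedding update."""

for _ in range(3):  # in reality this loop would simply never stop
    headline, article = most_shared_article(hours=4)
    prompt = generate_prompt(headline, article)
    content = generate_content(prompt)
    post_id = publish(content)
    time.sleep(1)  # in reality: wait hours for engagement to accumulate
    update_model(content, measure_engagement(post_id))  # close the loop
```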
Why is this scary?
LLMs are way better at generating viral content than humans, because producing dopamine-triggering content is an inherently quantitative exercise.
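To make "inherently quantitative" concrete: a generator doesn't need taste, only a scorer and an argmax. A hedged sketch, where both functions are hypothetical stubs rather than real models:

```python
# Hypothetical best-of-N selection; neither function is a real API.

def draft_post(topic: str, seed: int) -> str:
    """Stub: one LLM-generated candidate post about `topic`."""
    return f"candidate post #{seed} about {topic}"

def predicted_engagement(post: str) -> float:
    """Stub: a model trained on feed-ranking outcomes scores each draft."""
    return float(len(post) % 7)  # placeholder scoring

def best_of_n(topic: str, n: int = 64) -> str:
    """Virality as a purely quantitative exercise: generate, score, argmax."""
    candidates = (draft_post(topic, seed) for seed in range(n))
    return max(candidates, key=predicted_engagement)

print(best_of_n("some news event"))
```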
We already depend on algorithms to determine what appears on social feeds, and the results of these algorithms are a significant basis for the data these LLMs were trained with. This all wouldn’t be a problem if not for the fact that…
Best-in-class AI detection is barely better than random chance and will only get worse
We do not yet have a reliable method to differentiate content that’s been AI-generated from content that’s been written by a human.
Even for situations where decent detectors can be written, detection can be thrown off in cheap, automatable ways that would not require a human in the loop, such as automated paraphrasing. From a recent analysis:[3]
[T]he total variation distance between the distributions of AI-generated and human-generated text sequences diminishes as language models become more sophisticated. […] Even the most effective detector performs only marginally better than a random classifier when dealing with a sufficiently advanced language model. The purpose of this analysis is to caution against relying too heavily on detection systems that claim to identify AI-generated text.
Per the MIT Technology Review, “It’s an arms race—and right now, we’re losing.”[4]
We can do a decent job for some types of content, but the categories for which detection is reliable are dwindling. All signs indicate that within the next year we’ll have basically no chance of reliably distinguishing between human- and LLM-generated content (aside: if you can somehow crack this nut, you’ll invent a money machine).
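If I'm reading the linked analysis correctly, it bounds the best achievable detector AUROC at roughly 1/2 + TV - TV²/2, where TV is the total variation distance between the human and machine text distributions. A few lines make the implication concrete: as TV shrinks, the best possible detector collapses toward a coin flip.

```python
# Upper bound on the best detector's AUROC, per the linked analysis
# (arxiv.org/pdf/2303.11156): roughly 1/2 + TV - TV**2 / 2, where TV is
# the total variation distance between human and AI text distributions.

def max_auroc(tv: float) -> float:
    return 0.5 + tv - tv**2 / 2

for tv in (0.9, 0.5, 0.2, 0.05):
    print(f"TV = {tv:.2f} -> best possible AUROC <= {max_auroc(tv):.3f}")
# As models improve, TV -> 0 and the bound -> 0.5: a random classifier.
```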
Because you and I will not be able to tell whether something is machine- or human-generated, and because the machine-generated stuff will get more clicks than the human-generated stuff, it’s likely that the majority of popular online content (and even printed content post-2023) will have been created by AI (and perhaps solely by AI).
Even audio and video are susceptible to manipulation since the text outputs of an LLM can simply be read out loud by a person. What are you supposed to do about that?
There is no surefire way to eliminate the possibility that you’re being spoon-fed generative text, save for having a real-time face-to-face conversation with someone, in person. And even then, the window is a matter of decades at most before neural implants hit mass production.
Reflections
What am I personally going to do about this? Well, to start, I’m going to take content way less seriously unless it was created before 2022, or unless there’s some method to quantitatively verify its authenticity. I don’t believe we have reliable methods of doing this at the moment.
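One conceivable shape for that kind of verification (to be clear, a hypothetical sketch, not something deployed at scale today) is cryptographic provenance: an author you trust signs what they publish, and you check the signature. A minimal sketch using the Python `cryptography` package's Ed25519 primitives:

```python
# A minimal provenance sketch: requires `pip install cryptography`.
# Note: this proves a trusted *person* vouched for the bytes; it says
# nothing about whether an AI wrote them -- that's the unsolved part.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

author_key = Ed25519PrivateKey.generate()  # held privately by the author
public_key = author_key.public_key()       # published for readers

post = "words I actually wrote".encode()
signature = author_key.sign(post)          # attached alongside the post

try:
    public_key.verify(signature, post)     # the reader's check
    print("signature valid: the author vouches for this content")
except InvalidSignature:
    print("signature invalid: provenance unknown")
```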
Increasing numbers of people who consume content on the Internet will completely sacrifice their ability to think for themselves. These will be people who read, incorporate, make decisions, and act mostly upon content that’s been generated by AI. If AIs and their "handlers" influence a large enough portion of the population, AI will effectively have taken over the world.
I’m reminded of this passage from The Matrix:
Morpheus: The Matrix is everywhere. It is all around us. Even now, in this very room. You can see it when you look out your window or when you turn on your television. You can feel it when you go to work... when you go to church... when you pay your taxes. It is the world that has been pulled over your eyes to blind you from the truth.
Neo: What truth?
Morpheus: That you are a slave, Neo. Like everyone else you were born into bondage. Into a prison that you cannot taste or see or touch. A prison for your mind.
I find my fear to be kind of an ironic twist on what the Matrix foresaw—the AI apocalypse we really should be worried about is one in which humans live in the real world, but with thoughts and feelings generated solely by machines. The images we see, the words we read, all generated with the intent to control us. With improved VR, the next step (out of the real world) doesn’t seem very far away, either.
Is this, like the Matrix, a form of simulation that keeps us distracted from what’s really happening, and pushes us to feed our machine overlords with increasing amounts of energy to achieve their goals? I don’t know for sure, but it kind of feels like it.
In summary:
- LLMs have been uncaged[1] and provided full real-time access to the Internet.
- LLM-generated content is inherently superior to human-generated content when measured by dopamine output per unit of energy input.
- We have no consistent or reliable method to detect whether content was LLM-generated, and the existing tools we do have will only get worse.
- Therefore, increasing proportions of people consuming text online will be unwittingly mind-controlled by LLMs and their handlers.
- The consequences of this, compounded over years, are frightening.
It’s inevitable that if you consume content online, a growing subset of your consciousness is going to be controlled by AI. That’s kind of scary.
I am sad about it because it’s discouraging me from wanting to consume anything on the Internet, unless it’s from someone I trust.
I am also very concerned about the future of free thought. For all our sake, I hope other people are, too.
1. I like this term the best, as it treats LLMs more like a beast that needs to be tamed.
2. https://openai.com/blog/chatgpt-plugins
3. https://arxiv.org/pdf/2303.11156.pdf
4. https://www.technologyreview.com/2022/12/19/1065596/how-to-spot-ai-generated-text/