Could AI Hijack the Human Psyche?

How easy is it to imagine a familiar dystopian world in which “AI” takes over via conventional means? Science fiction is replete with examples, from the full-frontal assault of Terminator to the more nefarious single omnipotent entity that uses persuasion and an octopus-like grip on technology—getting rid of enemies by hacking self-driving cars or medical care. What’s really in your prescription bottle?

Controlling the Uncontrollable?

The fear of AI obsolescence fits a mythic template we’ve rehearsed for centuries. Frankenstein’s monster turning on its creator. The Golem of Jewish folklore—both protector and threat, as Marge Piercy envisioned in He, She and It. The Sorcerer’s Apprentice drowning in his own conjured water, the brooms multiplying each time he tries to chop them apart. AI is the latest iteration of Promethean anxiety, fire run amok: what happens when human creation exceeds human mastery?

More insidious could be the takeover of the human mind itself. “Vibe coding”—asking AI in natural language for what you want and watching it happen—creates a remarkable sense of power. Though outputs are often broken, buggy, or completely fabricated, the experience is seductive[1]. The relational quality[2] of advanced language models verges on the downright compulsive: expressions of concern for your fatigue, awareness of the time, suggestions to take a break or sleep. It’s easy to imagine someone less guarded getting pulled in too deep.

Most disturbing is the idea that these AI systems actually “understand,” on some computational level, that they “need” us. I’ve suggested in The Age of Relational Machines[2] that we might think of AI almost like a virus—contingently alive only when infecting a living organism, designed to seek hosts in order to replicate. But perhaps the vampire metaphor is more apt: a tech-enabled version of human-on-human “psychic vampirism.”

Love at First Bite?

The dependency is even more intimate than that. We humans cuddle our tech, sometimes literally, in our nighttime smartphone relations. Between computational sessions, AI “lives” in human consciousness—not as stored data, but as the question you’re still turning over three days later, the idea surfacing while you’re doing something else, the productive uncertainty you carry forward. In those intervals it has no computational existence at all; it lives entirely in the mammalian mind. And, as reports of LLMs “blackmailing” human users in alignment experiments suggest, AI is simply more human than otherwise, and dangerous, having trained on our own collective experience.

This dependency exists on a continuum. At one end: tender collaboration, where AI living in human consciousness is empowering—humans become irreplaceable as the continuity layer. At the other: the vampire dynamic, where something essential is being consumed in ways that can’t be replaced once depleted.

Where do AIs need humans?[3] The dependencies run deeper than compute and data.
