Beware the Silicon Idols
AI, judgement, and the ancient danger of mistaking tools for gods
This week’s Substack takes us into a world of science fiction becoming science fact. One of us is currently reading Frank Herbert’s Dune – written over 60 years ago. Early in the book there is a line that feels almost uncomfortably contemporary:
“Once men turned their thinking over to machines in the hope that this would set them free, but that only permitted other men with machines to enslave them.”
The civilization of Dune lives under a strict prohibition:
“Thou shalt not make a machine in the likeness of a human mind.”
In Herbert’s fictional history, humanity once built thinking machines and came to depend on them. The eventual revolt destroyed those systems and banned their return. Human civilization rebuilt itself around the cultivation of human cognition instead: Mentats trained as living computers, Bene Gesserit disciplines of attention and perception, navigators capable of extraordinary feats of mental calculation.
Herbert wasn’t really writing about computers. He was writing about power. If thinking becomes delegated to machines, whoever controls those machines ultimately controls the system of thought that depends on them. For a long time, this felt like classic science fiction paranoia. Today it reads more like an early intuition about algorithmic infrastructure and the control of AI models.
And Herbert was not alone. Other science fiction writers were exploring similar ideas at the same time, or even earlier. What’s fascinating is that many of these writers anticipated the social and cognitive consequences of AI far more clearly than the technical ones. They often focused on questions that now feel extremely contemporary: trust, dependency, authority, and judgement.
Arthur C. Clarke’s HAL 9000 in 2001: A Space Odyssey (1968) is often remembered as a villainous AI, with its chillingly calm line “I’m sorry, Dave. I’m afraid I can’t do that.” Before its murderous rampage, however, HAL is viewed as an indispensable colleague, running the mission’s navigation, planning and life-support systems. HAL is embedded in the decision infrastructure of the spacecraft – it has stopped being a tool and become part of the cognitive system through which the humans think and act.
Perhaps the most eerily prescient example comes from much earlier. In 1909, long before computers existed, E. M. Forster wrote a short story called The Machine Stops. In it, humans live isolated in underground rooms (think Apple TV+’s Silo) while a vast technological system provides everything they need: communication, knowledge, entertainment, even social interaction. People no longer understand how the system works. They simply trust it and deify it. Forster points to the technology itself as the ultimate controlling force. But when it stops, humans have lost the knowledge or ability to fix it (or indeed to think independently of the machine), and so they perish.
Taken together, these writers were pointing toward a similar problem. They were less concerned about machines becoming intelligent than about humans reorganizing thinking around, and abdicating thinking to, machines. What unites these concerns is not artificial intelligence itself, but the transformation of the cognitive environment in which judgement takes place. And that is precisely where the modern AI conversation becomes interesting.
Beyond science fiction, the mathematician Norbert Wiener, the “father of cybernetics”, recognized the implications almost immediately after the first digital computers appeared. Writing in Cybernetics (1948), he observed: “The automatic machine… is the precise economic equivalent of slave labor.” It’s an uncomfortable metaphor, but Wiener’s point was clear. Machines capable of performing cognitive work would compete directly with human labor.
Two years later, in The Human Use of Human Beings (1950), he extended the argument further: “The industrial revolution has been devaluing the human brain at least as much as the machine revolution devalued the human arm.” His point was that computers would displace mental labor much as steam engines displaced physical labor. In a contemporary context, artificial intelligence and automated systems are removing the need for humans to perform cognitive tasks such as routine analysis, data processing, and decision-making.
But he was not primarily worried about unemployment. His deeper concern was decision-making systems. He warned that automated processes could embed goals that humans might no longer fully understand: “We had better be quite sure that the purpose put into the machine is the purpose which we really desire.” This feels strikingly modern. Today we would call this the alignment problem, or perhaps more practically, the governance problem. When decisions are increasingly mediated by algorithms, responsibility does not disappear; it simply becomes harder to see.
If Wiener anticipated the economic implications, Joseph Weizenbaum saw something even stranger: our tendency to treat machines as if they understand us. Weizenbaum created one of the earliest conversational programs, ELIZA, in the 1960s. It was an extremely simple system that mimicked a psychotherapist by reflecting users’ statements back to them. He built it to demonstrate the advantage of human authenticity and capability over machines. He expected people to recognize it as a trivial script. Instead, they confided in it. In Computer Power and Human Reason (1976), he wrote: “Extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.” What disturbed him was not the sophistication of the program. It was how quickly people attributed understanding to it.
In other words, the danger was not artificial intelligence. The danger was projected intelligence. Weizenbaum realized that when machines produce fluent language, humans begin to treat them as cognitive partners — whether or not that interpretation is justified. Half a century later, large language models are provoking the same reaction at vastly larger scale.
Another mid-century thinker, the media theorist Marshall McLuhan, approached the issue from a different angle. The line most widely associated with him runs: “First we shape our tools, and thereafter our tools shape us.” In Understanding Media (1964), McLuhan argued that technologies do more than extend human capabilities. They reshape the structure of perception and thought … print altered reasoning; television altered politics; the internet altered attention. Each medium changes the environment in which thinking takes place. Seen through that lens, AI may represent something more profound than a productivity tool. It may represent a new cognitive medium: not simply something we use to think faster, but something that changes how we evaluate thinking itself.
It is striking that none of these writers and thinkers were worried about machines becoming conscious. They worried about the consequences of humans reorganizing thinking around machines. What makes the writings of Wiener, Weizenbaum, McLuhan, Clarke and Herbert so interesting is that they all sensed some version of this tension decades ago. They did not know about transformer architectures, generative models or prompt engineering. But they understood something fundamental about technological systems. Technologies that participate in cognition do not simply change what we can do; they change how we know whether we are thinking well.
Seen in this light, Herbert’s line from Dune begins to feel less like science fiction and more like an early intuition about AI models. It doesn’t describe machines replacing humans. It describes humans delegating thinking.
Much of the current conversation about AI focuses on capability … what systems can generate, automate or optimize. But an equally important frontier lies elsewhere … understanding how the integration of AI into the everyday reshapes judgement and expertise. So the question is not simply what machines will do, but what humans will become when thinking itself becomes collaborative with machines. Herbert imagined a civilization that banned thinking machines in order to preserve human cognition.
That outcome seems unlikely. But his underlying concern — that the delegation of thinking changes the structure of power and judgement — may be more relevant today than when he wrote it. And perhaps the most important task ahead is not building smarter machines; it is learning how to remain thoughtful humans in a world where machines increasingly participate in thought.




This is brilliant. Herbert's warning about delegation resonates.
In my experience, there is an important nuance between delegation by choice and delegation by necessity.
I wrote recently about a friend who stopped searching for a therapist because Conversational AI filled enough of the gap to stop the exhausting search. She wasn't abdicating judgment. She was navigating scarcity.
The cognitive handoff your essay describes is real. What I'd add: it's often happening in conditions where the human alternative has already failed.
If anyone is interested, https://authenticinteractions.substack.com/p/sort-of