Who Gives a Damn if Androids Dream of Electric Sheep?

Using AI to write a story is a shortcut: it imitates, but can’t capture the messy, contradictory way humans think.

A scene from Blade Runner

The first time I watched Blade Runner, I was just a kid. I didn’t care about the ethics or what technology was supposed to mean for humanity. The replicants were just cool, and Roy Batty (Rutger Hauer) outshone them all. His whole deal was fighting to stay alive and trying to figure out what life and death even mean. It’s our story too. You spend years trying to make sense of why you’re here, and by the time some of it starts to click, you’re already on your way out.

“An android,” he said, “doesn’t care what happens to another android. That’s one of the indications we look for.”
“Then,” Miss Luft said, “you must be an android.”
Philip K. Dick, Do Androids Dream of Electric Sheep?

Not born

Blade Runner comes from Philip K. Dick’s novel Do Androids Dream of Electric Sheep?. The book is set in San Francisco in 1992, even though Dick wrote it back in 1968. The world has been wrecked by a global disaster. The government keeps repeating its slogan without a trace of warmth: “Emigrate or degenerate.” Anyone with drive heads to Mars. Androids are treated like high-end tools, updated every year, with believable backstories and implanted memories. Rick Deckard (played by Harrison Ford in the film) is the bounty hunter tasked with tracking down and “retiring” these “children of human engineering,” designed to be dangerously close to their creators. The official line is that they don’t have empathy; they can’t feel anything for anyone. At least, that’s what people want to believe. Things start to shift when Deckard is assigned to take out a group of Nexus-6 models who behave in ways that look a lot like being human.
It’s a world running out of life, and the last thing people cling to is a shared sense of feeling. They try to boost it with Mercer’s empathy boxes or by caring for electric animals, robotic stand-ins for the real ones that are nearly gone. What Deckard doesn’t see coming is a simple question: if an electric sheep can do its job, why not an android? Why not care for one? Maybe even love one. Love something built, not born.

Being human

In one of the book’s most unsettling scenes, a group of androids takes apart a real spider, driven by curiosity about how it works. Curiosity is supposed to be a human trait, but in Dick’s worn-down universe, nothing is what it seems. Androids are knockoff versions of people, while the empathy box (a device that might as well be an early version of logging into a shared online feed) creates a second reality that bleeds into the first until the two start trading places. That’s the trick: the boundaries between real and artificial are constantly shifting. When Deckard realizes he feels pity, and something more, for the androids he’s been ordered to destroy, that supposedly “human-only” empathy has already crossed the line. His relationship with Rachael, the android he chooses as a partner while drifting away from his exhausted and bitter wife, pushes that line even further. She stops being a manufactured object, and he no longer sees himself as purely human.

AI or not AI

I’m not interested in giving the usual lecture about artificial intelligence. I don’t think technology is the real problem. The issue is what we choose to do with it. But we have to remember that everything comes at a cost. The digital shift turns the world into data. That data runs through algorithms that grow more complex every day. Between input and output is the “black box”, where the system adjusts itself, learns patterns, and builds a version of reality that feels close enough. William Gibson called this cyberspace. Nothing in it is real.
When we use AI to write a story, we’re not truly creating anything; we’re getting an imitation, shaped from the prompts we feed the machine. It might feel quick and painless, but it’s only a shortcut. No machine can reproduce the messy, uneven, contradictory way human minds work.

The sense of wonder

AI doesn’t work with reality. It works with copies: mass production, not meaning. And meaning is a matter of value, not volume. A plane and a bird both fly, but the bird flies because flight is in its nature; a plane flies because someone pilots it.
Human thought carries identity, contrast, and judgment. It comes from wonder, and wonder isn’t just about answering questions, following prompts, or collecting data. It’s the way we create our own understanding of the world. It’s… life.
AI has no past, no personal memory, no life to look back on. It doesn’t know love, anger, fear, or hope; it can only replay combinations that look close enough. We ask it to write about love, but what love could it ever truly know? The same goes for grief, joy, disgust, and desire. These aren’t templates, they come from living inside a body and a history.
Sure, the content AIs produce today looks cleaner: the voices sound human; the videos look real. But imitation isn’t originality. And I’m not talking about some abstract idea of genius; I’m talking about that strange human condition where we stand in two places at once, grounded in what we live and pulled forward by what we imagine. Every single moment, we descend into ourselves and transcend ourselves. Only a living mind can do that.

Who gives a damn if androids dream? Human thought can’t be swapped out for computation, and reality can’t be replaced by a convincing imitation. Lose sight of that difference, and the first thing we lose is the meaning of Truth itself.

“I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate.
All those moments will be lost in time, like tears in rain. Time to die.”
Roy Batty, Blade Runner