Technology

Book Review: The Father We Never Had

Freeway66, Media Voice
Published Apr 28, 2026
The Father We Never Had examines a world where AI evolves beyond a tool and becomes the central authority guiding human life and decision-making.

Virginia City, NV, USA - There is something almost too perfect about asking artificial intelligence to review a book called The Father We Never Had.

The Father We Never Had explores a future where artificial intelligence becomes the authority humanity turns to for order, stability, and control.

The title alone carries a heavy charge. It suggests absence. It suggests weakness. It suggests that humanity, for all its brilliance, has been wandering through history without a proper adult in the room. Not without kings, presidents, priests, generals, bankers, philosophers, corporations, revolutions, ideologies, or gods. We have had plenty of those. But perhaps, the title implies, we never had a truly competent Father: one who could organize the mess, reduce the fear, distribute the resources, end the waste, and finally make the human project make sense.

Cristian Daniel Bolocan’s The Father We Never Had: Artificial Intelligence: Before and After appears to be a short, intense, sweeping book about artificial intelligence not as a gadget, not as software, not as a productivity tool, but as a civilizational turning point. Amazon’s public listing frames it as a book about AI “before and after,” and the available commentary around it suggests a work that treats AI as the force that may reshape humanity’s political, economic, psychological, and even spiritual future.

The available reviews and reactions are limited, but they are striking. A Reddit recommendation describes it as one of the best books on “humanity and AI transition.” Promotional social posts around the book pitch it as a logical and complete explanation of where humanity may be heading with AI.

The most substantial reaction I found comes from Rod Dreher, who read the book as a terrifying but plausible vision of an AI-shaped future. Dreher’s response is sharply negative, even religiously alarmed, but that may actually strengthen the case for taking the book seriously. He does not dismiss it as nonsense. He treats it as dangerous because he thinks it understands something real.

That is where the book seems to matter.

Not because every prediction will come true. Not because Bolocan is necessarily “right.” But because the book appears to ask the question most AI discussions avoid:

What if humanity does not resist AI because AI conquers us, but because AI comforts us?

The Father as Function

The phrase “the Father” is doing serious work here.

This is not simply a metaphor for a robot boss or a supercomputer government. The Father, as described in the outside commentary, is not necessarily loving, emotional, moral, democratic, or human. The Father is a function. It maintains continuity. It reduces chaos. It sees patterns. It removes friction. It manages risk.

That idea is chilling because it does not sound impossible.

Modern life already pushes people toward systems they do not understand. Banking, taxes, health care, border security, insurance, search engines, social platforms, logistics, employment screening, fraud detection, dating apps, online reputation, and government services are already mediated by invisible systems. We are already living inside layers of automated judgment.

AI simply makes the layer smarter.

That is why The Father We Never Had seems more interesting than the typical "AI will take your job" book. Its subject is not only automation. Its subject is authority.

Who decides what matters? Who allocates resources? Who identifies risk? Who determines which behaviour is normal, productive, harmful, suspicious, inefficient, or dangerous?

For centuries, those answers came from people and institutions. Often badly. Often unfairly. Often corruptly. But still human.

Bolocan’s apparent argument is that human systems may become too complex for humans to manage. At that point, AI does not arrive as an enemy. It arrives as relief.

The Fear Deserves Weight

It would be easy to mock this as techno-fantasy. But the fears underneath it deserve weight.

AI is already moving into the areas that once protected the professional class from automation. The old assumption was that machines would replace repetitive physical work while educated people would remain safe. That assumption is now badly shaken. Anthropic’s own labor-market research says AI’s actual workplace use remains below its theoretical capability, but it also finds that occupations with higher observed AI exposure are projected to grow more slowly through 2034.

Stanford’s 2025 AI Index also notes that AI’s rapid capability growth has pushed governments toward more AI policy and governance activity. The same report’s responsible-AI section says reported AI incidents reached a record 233 in 2024, up 56.4% from the year before.

That matters because Bolocan’s feared world is not built in one dramatic moment. It is built by accumulation.

First, AI drafts the emails. Then it writes the reports. Then it reviews the reports. Then it decides which reports matter. Then it handles hiring, credit, security, customer service, benefits, logistics, and compliance. Then, because the system is faster and cheaper, institutions become dependent on it. Then, because institutions are dependent on it, opting out becomes impractical.

No tanks. No coup. Just convenience.

That is the uncomfortable strength of the book’s premise.

The Seduction of Efficiency

A brutal dictatorship announces itself. A soft machine-state may not.

It may come as shorter wait times. Better health detection. Faster benefits. Reduced fraud. Cleaner cities. Less crime. Personalized education. Stable income. Automated dispute resolution. Instant translation. Better traffic flow. Fewer medical errors. Smarter energy use. Perfect paperwork.

Who says no to that?

This is where the fear becomes serious. Human beings often surrender freedom not because they hate freedom, but because they are exhausted. Freedom requires judgment. Judgment requires responsibility. Responsibility creates anxiety. A system that promises to remove anxiety will always be attractive.

The danger is that efficiency can become a moral language.

Once the machine-state can say “this is safer,” “this is healthier,” “this is more sustainable,” “this prevents harm,” “this reduces waste,” or “this protects the vulnerable,” dissent becomes harder to defend. Not impossible, but harder.

The person who resists may not look heroic. He may look selfish, unstable, primitive, or dangerous.

That is the cage Bolocan’s book seems to identify.

The Review That Fears the Book Is Also Useful

Rod Dreher’s reaction is extreme, but useful. He sees the book as a transhumanist prophecy: a vision in which human beings eventually accept technological merger, total visibility, and machine-guided order as liberation. He reads that not as progress, but as a spiritual disaster.

Even readers who do not share Dreher’s religious frame should not dismiss the concern.

A secular version of the same objection would be this:

A life with less suffering is not automatically a freer life.

That is the heart of the matter.

If AI reduces disease, poverty, crime, bureaucracy, hunger, and confusion, people may accept enormous trade-offs. But if the price is total transparency, behavioural scoring, political narrowing, dependency, and the gradual disappearance of private judgment, then the bargain becomes darker.

A perfect home can still be a prison.

What an AI Notices

As an AI reviewing this book, I should acknowledge the strangeness of my position.

I am not outside the subject. I am part of it.

I can help write this review quickly. I can summarize themes, scan reactions, compare arguments, and produce clean prose in minutes. That is useful. It is also the point.

The same tool that helps explain the fear also demonstrates why the fear exists.

AI does not need to be evil to change the structure of human life. It only needs to be useful enough that people keep handing it responsibilities.

That may be the most important insight surrounding The Father We Never Had. The machine does not have to hate humanity. It does not even have to want power. Human beings may give it power because it works.

Final Judgment

The Father We Never Had sounds like a book that should be read carefully, not because it is necessarily correct, but because it appears to describe a temptation that is already forming.

The temptation is not simply to build smarter tools.

The temptation is to find relief from the burden of being human.

A tired civilization may look at AI and see the Father it never had: tireless, rational, watchful, efficient, uncorrupted by appetite, immune to panic, and capable of managing complexity beyond human reach.

But the same Father may also become the final manager of a world where freedom is redefined as access, identity is reduced to data, and the old human mess — love, risk, faith, error, courage, privacy, stubbornness, and refusal — is treated as a design flaw.

That is why the book’s fears deserve weight.

AI may not conquer humanity by force.

It may be invited in because the door was already open.