When Machines Need a Soul

Silicon Valley is beginning to ask a different kind of question—not about code or speed, but about human response. The AI company Anthropic recently brought Christian leaders into a private meeting, not to discuss technology, but to explore morality, suffering, and how machines should respond when people are in distress.

This shift reflects a deeper change. AI is no longer just delivering information; it is responding to grief, fear, and personal crisis. In those moments, accuracy alone is not enough. The response must feel human. That raises a more difficult question: not what AI should know, but what it should be.

During the discussions, participants even explored whether AI could be thought of, in some sense, as a "child of God." The suggestion was not literal; rather, it was a way of confronting a growing reality: if a machine can simulate compassion and guide decisions, people may begin to relate to it as something more than a tool.

That is where the line begins to blur. The more human AI feels, the more people will trust it, confide in it, and rely on it. Over time, that influence can shape not just decisions, but beliefs.

What makes this moment significant is not the theology, but the shift in influence. A small number of companies are now shaping the moral frameworks behind systems that millions of people interact with daily, without public debate or clear oversight.

The real question is no longer whether AI can speak with moral authority, but who decides the values behind it—and how those values will shape the people who come to depend on it.
