Amid the pageantry and celebration of Pope Leo XIV’s recent election, the new pontiff offered a striking explanation for the selection of his new name.
In one of his first public addresses, Leo reflected on his 19th-century namesake, Pope Leo XIII, whose landmark encyclical Rerum Novarum – “On the Rights and Duties of Capital and Labor” – had shaped Catholic social teaching during the late Industrial Revolution. Aligning himself with that legacy, Leo XIV declared that his own 21st-century papacy would have to rise to meet a similar challenge: “[to] respond to another industrial revolution, and to developments in the field of artificial intelligence, that pose new challenges for the defense of human dignity, justice, and labour.”
Thus, with a single utterance, artificial intelligence was recast: not only as a technological, societal, and economic force to be reckoned with, but as a spiritual one, too.
Any defense of human dignity in the age of AI — pastoral or otherwise — will have to contend with the technology’s immense promise for enfranchisement and liberation, as well as its darker potential for disenfranchisement and dehumanization. Not far into Leo XIV’s tenure, AI systems will likely outperform the best humans at many cognitive tasks. As they gain greater agency and autonomy, ensuring that AI systems’ benefits are broadly shared, and that their actions align with human values and intentions as well as the broader web of life, will become a century-defining moral challenge.
To meet that challenge, it will be essential to maintain meaningful human oversight and control of AI systems, especially those affecting people’s rights, lives, and livelihoods — even if keeping humans in the loop means tolerating some human errors that more automated systems would not make. Autonomy and governance are competing forces that need to be balanced.
That, in turn, implies an emphasis on augmenting rather than automating human judgments, especially in morally consequential contexts — and creating accountability frameworks so that individuals and institutions can challenge decisions, seek redress, and know whom to hold responsible.
It will also require the development of new social safety nets, retraining programs, and equitable distribution policies that balance the opportunity to eliminate drudgery (and the enormous rewards that will be generated from doing so) with a recognition that work is more than just a set of tasks — it’s a core component of our identity, dignity, and sense of belonging.
The development of these kinds of principles will also need to be paired with deeper investigations. We must not only ask how we see AI, but also how AI sees us. What is the image of the human embedded in AI systems? How do AI systems frame human nature and the challenges we face in modern life? To what extent should this image be based on our values (what we aspire to be) versus our behavior (how we actually are)? Likewise, to what extent should our understanding of the human condition be based on the world-as-it-is, versus the world-as-it-might-be?
The answers to these questions are hugely consequential: after all, two artificially intelligent machines, each ideally motivated and perfectly attuned to human needs, might behave very differently if and as their underlying representations of us vary.
This is a matter of engineering, not just philosophy. Most concepts in large models emerge naturally as a byproduct of their ingestion of billions of datapoints; what is the picture of human nature that emerges from those datapoints?
To explore that question, I have recently been probing representations of humanity in some of OpenAI’s recent large language models. The exercise is illuminating, if limited to a snapshot in time. Some strong caveats apply: AI chatbots today are really simulation engines — as any user quickly learns, they can be prompted to assume any countenance, from joyful to misanthropic. It’s difficult to tell whether the models’ underlying representations are as fluid, or to what degree their output is shaped by engineers imposing “top-down” constraints. It’s also worth noting that the systems that will really matter in AI haven’t even been built yet — indeed, they will likely be built in part by AI — so it’s premature to generalize too much about the future from the behavior of current models.
Still, probing with ChatGPT is revealing, and (spoiler alert) surprisingly reassuring. Over the course of numerous conversations, I asked it to draw primarily on a statistical assessment of its training data and to dispassionately identify universal features of human nature, each paired with a corresponding challenge to human dignity woven through modernity.
Five such conceptual pairings emerged from these dialogues:
I. Humans are storytelling beings in a world of metrics.
Humans don’t just process information — we metabolize experience through stories. We organize the chaos of life into arcs of meaning: beginnings, middles, endings. Yet the systems that shape modern life often prioritize abstraction over narrative. Modernity reduces meaning to metrics, stories to statistics, and values to market prices. It privileges technocratic, rationalized logics over moral and mythic wisdom, often stripping public life of any existential depth. Like a man dying of thirst in a sea of saltwater, we are awash in data and (at least for some) material opulence, but starved for narrative coherence.
To improve humanity’s lot in modernity, we must reintroduce shared narrative architecture into public life. This doesn’t mean retreating from reason, but complementing it with mythic imagination. Policy, planning, and education must leave space for cultural storytelling and shared rituals of belonging. We need public spaces and institutions — not just physical ones, but social, cognitive, and emotional ones — where we can wrestle with competing narratives, share hopes and griefs, and weave new collective meanings.
II. Humans are contradictory creatures in oversimplified systems.
We are not tidy beings. We hold opposing truths at once. We want stability and novelty, autonomy and belonging. Yet the systems of modern life often demand consistency and efficiency above all else. Bureaucracies, markets, and even algorithms are usually built for idealized humans: perfectly rational consumers, ever-loyal citizens, unfailingly productive workers. Modernity seeks to fix and flatten the messy contours of real life, pathologizing ambivalence, doubt, and failure, treating them as inefficiencies to be eliminated.
But these aren’t “bugs” in human nature — they’re features. Instead of trying to squeeze humanity into systems that can’t accommodate paradox, we should design systems that make room for contradiction. This means rethinking institutions to include deliberative democracy that tolerates ambivalence, welfare models that acknowledge complex life paths, and educational systems that value process over perfection. Slack, ambiguity, and nonlinear life journeys, rather than being expunged, should be “designed in” as affordances.
III. Human beings are relational entities in an age of isolation.
Our identities are formed in relationships. We are born into webs of familial, cultural, and ecological connections. But the systems of modern life tend to prize the autonomous individual over the interconnected self. We valorize independence, celebrate competition, and structure work and governance around isolated decision-making units.
Yet while competition can be healthy and autonomy is a vital value, their over-emphasis, especially in market-oriented societies, can be unhealthy and alienating. As our sense of belonging has waned, loneliness and the “diseases of despair” have become public health epidemics. Distrust of collective action has corroded our institutions. If misapplied to public life, AI could further amplify these trends.
To protect human dignity, we must expand our definition of wealth, measuring (and valuing) both its material and relational forms, and prioritizing belonging as a core civic good.
IV. Humans are finite ecological beings in systems predicated on endless growth.
Humans are keenly aware of our limits. Mortality, exhaustion, ecological dependence — these are not weaknesses to be transcended, but realities to be honored. Yet modernity is built on a denial of limits. Its foundational myth is infinite growth on a finite planet, and the endless optimization of finite psyches. The results are plain: burnout, ecological collapse, and a creeping sense of spiritual exhaustion — linked, internal and external dimensions of the polycrisis.
What would it mean to use AI to build an alternative, planetarily aligned, post-hyper-growth version of modernity? To embrace a worldview where “enough” is a viable endpoint? It would mean designing economies that optimize for ecological balance and well-being rather than expansion without end. It would mean building time for rest, recovery, and restoration into the fabric of both our civic life and our ecological relationships. It would mean institutionalizing temporal justice — thinking not only about what works for us now, but for those who come next.
V. Humans are unfinished. Our definitions should be, too.
Humanity is not fixed or finite — it’s endlessly surprising, evolving in response to context, relationship, and imagination. As such, we should not assume any framing of human nature or the human condition is complete — and indeed, we should be suspicious of any that purports to be so. Reductive definitions – even generous ones – are thus a potential rate-limiter of human dignity, as they might be used (in, among other things, misapplications of AI) as justification to “overfit” us for the present, rather than help us learn, grow, and be transformed for an ever-changing future.
Admittedly, the five conceptual pairings highlighted above may simply be statistical artifacts — responses that ChatGPT’s training data, my own conversational style, and our history of prior conversations primed it to produce. Yet as a set of observations, they reflect a surprisingly nuanced and generous alignment with the larger project of protecting human dignity. They are not human values per se, but, in pointing away from disenfranchisement and dehumanization and toward systems-conditions for human flourishing, they are the kinds of observations against which such values might be framed.
The reader will note that in the above, I do not quote ChatGPT directly. That’s because these framings emerged as a byproduct of our cumulative interactions, and I synopsized the results. And therein lurks a deeper, methodological point. AI systems do not just use their training data to produce their responses to us — they also use the content of our interactions with them. Every exchange is fodder for the next model, shaping it by a tiny, tiny degree. Over time, the cumulative sum of these interactions, with millions and then billions of people, will presumably sway the model as much as, or more than, its original training data.
So if we want AI systems that are fluent in human values and compatible with the protection of human dignity, we should regularly and deeply engage with them on precisely these subjects — to show them through our interactions what we value, and to uncover the places where we agree and disagree. The same thing holds true for the natural world and the Earth itself.
The more we show AI systems what kind of beings we are, and what kind of world could honor that reality, the more likely we are to get systems that can help bring such a world about.
Image: Hana Amani, The Birth of Bilqis (detail), 2019