Note: a version of this reflection appeared in AI + 1, a publication of the Rockefeller Foundation. 

~

There is no word in English for the vertiginous mix of fascination, yearning, and anxiety that marks contemporary discussions of AI. Every position is taken: Entrepreneurs wax breathless about its promise. Economists ponder its potential for economic dislocation. Policymakers worry about constraining its potential for abuse. Circumspect engineers will tell you how much harder it is to implement in practice than headlines suggest. Activists point out AI’s ability to perniciously lock in our unconscious (and not-so-unconscious) biases; other activists are busy applying AI to try to overcome those very same biases. Techno-utopians look forward to an inevitable, ecstatic merger of humanity and machine. Their more cynical contemporaries worry about existential risks and the loss of human control.

This diversity is fitting, for all of these positions are well-founded to some degree. AI will increase wealth, and concentrate wealth, and destroy wealth, all at the same time. It will amplify our biases and be used to overcome them. It will support both democracy and autocracy. It will be a means of liberation from, and subjugation to, various forms of labor. It will be used to help heal the planet and to intensify our consumption of its resources. It will enhance life and diminish it. It will be deployed as an instrument of peace and as an instrument of war. It will be used to tell you the truth and to lie to you. As the folk singer Ani DiFranco observed, every tool is a weapon, if you hold it right.

One thing we can say for certain: AI will not enter the scene neutrally. Its emergence will be conditioned as much by our cultural values, our economic systems, and our capacity for collective imagination as it will be by the technology itself. Is work drudgery or dignity? What should we not do, even if we can? Who matters? And what do we owe one another, anyway? How we answer these kinds of questions – indeed, whether we ask them at all – hints at the tasks that we might assign to artificial intelligence and the considerations that will guide and constrain its emergence.

Here in the West, AI is emerging at a moment of enormous imbalance of power between the techne – the realm of the builders and makers of technology – and the polis – the larger social order. Techne has merged with the modern market and assumed, in places, both its agenda and its appetites. There is an accelerating feedback loop underway: powerful algorithms, deployed by ever-more-powerful enterprises, beget greater usage of certain digital products and platforms, which, in turn, generate ever-larger volumes of data, which inevitably are used to develop ever-more-effective algorithms, and the cycle repeats and intensifies. In the guise of useful servants, AI algorithms are thus propelled into every crevice of our lives, observing us, talking to us, listening to us. Always on, in the background, called forth like a djinn with a magic phrase: are they learning more about us than we know about ourselves? Or misunderstanding us more than the deepest cynic might? Which is worse?

Those who control these algorithms and the vast troves of data that inform them are the new titans — of the ten richest Americans, eight are technologists with a significant stake in the AI economy. Together, they own as much wealth as roughly half of humanity.

At the other end of the socioeconomic spectrum, in communities that are on the receiving end of these technologies, conversations about AI and automation are often colored by a pessimistic, “rise of the robots” subtext. It’s a framing that presupposes inevitable human defeat, downshifting and dislocation. “AI is something that will happen to my community,” a friend in a Rust Belt city recently told me, “not for it.”

In this telling, the Uber-driver’s side-hustle – itself an accommodation to the loss of a prior, stable job – is just a blip, a fleeting opportunity that will last only as long as it takes to get us to driverless cars, and then, good luck, friend. This is the inevitable, grinding endpoint of a worldview that frames technology primarily as a tool to maximize economic productivity, and human beings as a cost to be eliminated as quickly as possible. “Software is eating the world,” one well-known Silicon Valley venture capital firm likes to glibly cheer-lead, as if software were a primal force of nature and not a choice. How different the world looks to those whose livelihoods are on the menu.

Of course, it doesn’t have to be this way. Rather than deploying AI solely for efficient economic production, what if we decided to optimize it for human well-being, self-expression and ecological rebalancing? What if we used AI to narrow the gap between the agony and opulence that defines contemporary capitalism? How might we return an ‘AI dividend’ to citizens, in the form of reduced, more dignified, and more fulfilling labor and more free time? Industrial revolutions are lumpy affairs – some places boom while others limp along. How might we smooth out the lumps? Maybe, as Bill Gates mused, if a robot takes your job, a robot should pay your taxes. (There’s a reason that Silicon Valley elites have recently become smitten with ideas of universal basic income – they know what’s coming.)

It’s not just new economic thinking that will be required. In a world where a small constellation of algorithmic arbiters frames what you see, where you go, who you vote for, what you buy and how you are treated, threats to critical thinking, free will and social solidarity abound. We will use AI to shape our choices, to help us make them, and just as often, to eliminate them – and the more of our autonomy we cede to the machines, the more dependent we may become. There is a fine line between algorithmic seduction and algorithmic coercion. And there is also the pernicious possibility that, as we give algorithms places of authority in social life, we will gently bend our choices, our perspectives and our sense of ourselves to satisfy them – reducing the breadth of our humanity to the templates the machines understand best.

We are also just now beginning to understand how the algorithms that power social media amplify certain communities, discourses, politics, and polities, while invisibly suppressing others. One wonders whether AI will be a glue or a solvent for liberal democracy. Such democracies are, at their core, complex power-sharing relationships, designed to balance the interests of individuals, communities, markets, governments, institutions, and the rest of society’s messy machinery. They require common frames of reference to function, rooted in a connective tissue of consensual reality. No one is really sure whether our algorithmically driven, hyper-targeted social-media bubbles are truly compatible with democracy as we have understood it. (That’s a point well understood both by Cambridge Analytica, which sought to weaponize AI to shatter the commons, and by white nationalists, who’ve sought to legitimize and normalize their long-suppressed ideologies amid the shards. Both exploited the same techniques.)

And all this is before so-called “deepfakes” – AI forgeries that allow us to synthesize apparent speech and imagery of well-known figures – and other digital chicanery are deployed at scale. There has been much hand-wringing already about the propaganda dangers of deepfakes, but the true power of such weaponized misinformation may, paradoxically, not lie in making you believe an outright lie. Rather, it may simply suffice for deepfakes to nudge you to feel a certain way – positively or negatively – about their subject, even when you know what you are seeing is not real. Deepfakes intoxicate because they let us play out our preexisting beliefs about their subject as we watch. What a buffoon! or That woman is a danger! They trip our most ancient neural circuits, the ones that adjudicate in-groups and out-groups, us-and-them, revulsion and belonging. As such, AI may be used to harden lines of difference where they should be soft, and to make the politics of refusal – of de-consensus and dropping out – as intoxicating as the politics of consensus and coalition-building.

Meanwhile, it is inevitable that the algorithms that underwrite what’s left of our common public life will become increasingly politically contested. We will fight over AI. We will demand our inclusion in various algorithms. We will demand our exclusion from others. We will agitate for proper representation, for the right to be forgotten, and for the right to be remembered. We will set up our own alternatives when we don’t like the results. These fights are just beginning. Things may yet fall apart. The center may not hold.

And yet… while all of these concerns about politics and economics are legitimate, they do not tell anything like the complete story. AI can and will also be used to enrich the human spirit, expand our creativity, and amplify the true and the beautiful. It will be used to encourage trust, empathy, compassion, cooperation, and reconciliation – to create sociable media, not just social media.

Already, researchers have shown how AI can be used to reduce racist speech online, resolve conflicts, counter domestic violence, detect and counter depression, and encourage greater compassion – treatments for these and many other chronic ailments of the soul. Though still in their infancy, these tools will help us not only to promote greater wellbeing, but to demonstrate to the AIs that passively observe us just how elastic human nature can be. Indeed, if we don’t use AI to encourage the better angels of our nature, these algorithms may come to encode a dimmer view and, in a reinforcing feedback loop, embolden our demons by default.

~

AI will not only emulate human intelligence; it will also transform what it means for people to perceive, to predict and to decide.

When practical computers were first invented – in the days when a single machine took up an entire floor of a building – computation was so expensive that human beings had to ration their own access to it. Many teams of people would share access to a single computational resource – even if that meant running your computer program at four in the morning.

Now, of course, we’ve made computation so absurdly cheap and abundant that things have reversed: it’s many computers that share access to a single person. We luxuriate in computation. We leave computers running, idly, doing nothing in particular. We build what are, in historical terms, frivolities: smart watches, video games, mobile phones with cameras optimized to let us take pictures of our breakfasts. We expand the range of problems we solve with computers and invent new ones that we hadn’t even considered problems before. Over time, many of these “frivolities” have become even more important to us than the “serious” uses of computers that they have long since replaced.

The arrival of AI is fostering something deeply similar — not in the realm of computation, but in its successors: measurement and prediction.

As we instrument the world with more and more sensors, producing ever-more data, and analyzing them with ever-more-powerful algorithms, we are lowering the cost of measurement. Consequently, many more things can be measured than ever before. As more and more of the world becomes observable with these sensors, we will produce an ever-increasing supply of indicators, and move from a retrospective understanding of the world around us to an increasingly complete real-time one. Expectations are shifting accordingly. If the aperture of contemporary life feels like it’s widening, and the time-signature of lived experience feels like it’s accelerating, this is one reason why.

And this is mere prelude. With enough sensors and enough data, the algorithms of AI will shift us from real-time to an increasingly predictive understanding of the world – seeing not just what was, nor what is, but what is likely to be.

Paradoxically, in many fields, this will likely increase the premium we put on human judgment – the ability to adeptly synthesize this new bounty of indicators and make sound decisions about them. An AI algorithm made by Google is now able to detect breast cancer as well as or better than a radiologist; soon, others may be able to predict your risk of cancer many years from now. It’s still your oncologist, however, who is going to have to synthesize these and dozens of other signals to determine what to do in response. The more informed her decisions become, the more expensive they are likely to remain.

~

Eventually, machines will augment or transcend human capabilities in many fields. But this is not the end of the story. You can see that in the domains where AI has been deployed the longest and most impactfully. There is a story after the fall of man.

Consider what has happened in perhaps the ur-domain of artificial intelligence: chess.

When IBM’s Deep Blue computer beat Garry Kasparov in 1997, ending the era of human dominance in chess, it was a John-Henry-versus-the-steam-drill-style affair. A typical grandmaster is thought to be able to look 20-30 moves ahead during a game; a player of Kasparov’s exquisite skill might be expected to do substantially more than that. Deep Blue, however, was able to calculate 50 billion possible positions in the three minutes allocated for a single move. The chess master was simply computationally outmatched.

Deep Blue’s computational advantage wasn’t paired with any deep understanding of chess as a game, however. To the computer, chess was a very complex mathematical function to be solved by brute force, aided by thousands of rules artisanally hand-coded into the software by expert human players. Perhaps unsurprisingly, Deep Blue’s style of play was deemed “robotic” and “unrelenting” – and that style remained dominant in the computational chess of Deep Blue’s descendants, all the way to the present day.

All of that changed with the recent rise of genuine machine-learning techniques pioneered by Google’s DeepMind unit. The company’s AlphaZero program was given no strategy, only the rules of chess, and then played itself 44 million times. After just four hours of this self-training, it had developed sufficient mastery to become the most successful chess-playing entity – computer or human – in history.

Several things are notable about AlphaZero’s approach. First, rather than evaluating tens of millions of moves, the program analyzed only about sixty thousand – approaching the more instinctual analysis of human beings instead of the brute-force methods of its predecessors.

Second, the style of AlphaZero’s play stunned human players, who described it as “beautiful”, “creative” and “intuitive” — words that one would normally associate with human play. Here was a machine with an apparently deep understanding of the game itself, evidencing something very close to human creativity. Being self-taught, AlphaZero was unconstrained by the long history of human styles of play. It not only discovered our most elegant strategies for itself, but also found entirely new ones, never seen before.

In other words, chess is more interesting to human beings now that we have lost our dominance over it. In the aftermath of our fall, the world – in this case, the game of chess – is revealed to be more pregnant with possibilities than our own expertise suggested. We lose our dominance, but we gain, in a sense, a richer world.

And this is likely a generalizable lesson: when a long age of human dominance in a particular intellectual pursuit falls before AI, we don’t turn away from the pursuit in which we have been bested. It’s possible that AlphaZero’s successors may one day develop strategies that are fundamentally incomprehensible to us; but in the meantime, they are magnificent teachers – expanding humanity’s understanding of the truth of the game in a way no human grandmaster could. Even the programs that AlphaZero bested – those brute-force approaches that are themselves better than any human player – are, through their wide availability, improving everyone’s game. Somehow, our diminished status doesn’t reduce our love of chess – much as the reality of LeBron James doesn’t diminish our love of playing basketball.

Variations of this story will unfold in every field and creative endeavor. Humanity will be stretched by artificial intelligence, augmented and empowered by it, and in places, bested by it. The age of human dominance in some fields will come to a close, as it already has in many areas of life. That will be cause for concern, but also for celebration — because we humans admire excellence, and we love to learn, and the rise of AI will provide ample opportunities for both.

Artificial intelligence will provide us with answers we couldn’t have arrived at any other way. We can ensure a humane future with AI by doing what we do best: relentlessly asking questions, imagining alternatives, and remembering the power inherent in our choices – we have more than we know.

Anna Ridler, Detail from Untitled (from the Second training set), from the series “Fall of the House of Usher,” 2018.