Humanity and AI:
Notes from the Field

June 10, 2020

Note: a version of this reflection appeared in AI + 1, a publication of the Rockefeller Foundation. 

~

There is no word in English for the vertiginous mix of fascination, yearning, and anxiety that marks contemporary discussions of AI. Every position is taken: Entrepreneurs wax breathlessly about its promise. Economists ponder its potential for economic dislocation. Policymakers worry about constraining its potential for abuse. Circumspect engineers will tell you how much harder it is to implement in practice than headlines suggest. Activists point out AI’s ability to perniciously lock in our unconscious (and not-so-unconscious) biases; other activists are busy applying AI to try to overcome those very same biases. Techno-utopians look forward to an inevitable, ecstatic merger of humanity and machine. Their more cynical contemporaries worry about existential risks and the loss of human control.

This diversity is fitting, for all of these positions are well-founded to some degree. AI will increase wealth, and concentrate wealth, and destroy wealth, all at the same time. It will amplify our biases and be used to overcome them. It will support both democracy and autocracy. It will be a means of liberation from, and subjugation to, various forms of labor. It will be used to help heal the planet and to intensify our consumption of its resources. It will enhance life and diminish it. It will be deployed as an instrument of peace and as an instrument of war. It will be used to tell you the truth and to lie to you. As the folk singer Ani DiFranco observed, every tool is a weapon if you hold it right.

One thing we can say for certain: AI will not enter the scene neutrally. Its emergence will be conditioned as much by our cultural values, our economic systems, and our capacity for collective imagination as it will be by the technology itself. Is work drudgery or dignity? What should we not do, even if we can? Who matters? And what do we owe one another, anyway? How we answer these kinds of questions – indeed, whether we ask them at all – hints at the tasks that we might assign to artificial intelligence and the considerations that will guide and constrain its emergence.

Here in the West, AI is emerging at a moment of enormous imbalance of power between the techne – the realm of the builders and makers of technology – and the polis – the larger social order. Techne has merged with the modern market and assumed, in places, both its agenda and its appetites. There is an accelerating feedback loop underway: powerful algorithms, deployed by ever-more-powerful enterprises, beget greater usage of certain digital products and platforms, which, in turn, generate ever-larger volumes of data, which inevitably are used to develop ever-more-effective algorithms, and the cycle repeats and intensifies. In the guise of useful servants, AI algorithms are thus propelled into every crevice of our lives, observing us, talking to us, listening to us. Always on, in the background, called forth like a djinn with a magic phrase: are they learning more about us than we know about ourselves? Or misunderstanding us more than the deepest cynic might? Which is worse?

Those who control these algorithms and the vast troves of data that inform them are the new titans — of the ten richest Americans, eight are technologists with a significant stake in the AI economy. Together, they own as much wealth as roughly half of humanity.

At the other end of the socioeconomic spectrum, in communities that are on the receiving end of these technologies, conversations about AI and automation are often colored by a pessimistic, “rise of the robots” subtext. It’s a framing that presupposes inevitable human defeat, downshifting and dislocation. “AI is something that will happen to my community,” a friend in a Rust Belt city recently told me, “not for it.”

In this telling, the Uber driver’s side-hustle – itself an accommodation to the loss of a prior, stable job – is just a blip, a fleeting opportunity that will last only as long as it takes to get us to driverless cars, and then, good luck, friend. This is the inevitable, grinding endpoint of a worldview that frames technology primarily as a tool to maximize economic productivity, and human beings as a cost to be eliminated as quickly as possible. “Software is eating the world,” one well-known Silicon Valley venture capital firm likes to glibly cheerlead, as if it were a primal force of nature, and not a choice. How different the world looks to those whose livelihoods are on the menu.

Of course, it doesn’t have to be this way. Rather than deploying AI solely for efficient economic production, what if we decided to optimize it for human well-being, self-expression and ecological rebalancing? What if we used AI to narrow the gap between the agony and opulence that defines contemporary capitalism? How might we return an ‘AI dividend’ to citizens, in the form of reduced, more dignified, and more fulfilling labor and more free time? Industrial revolutions are lumpy affairs – some places boom while others limp along. How might we smooth out the lumps? Maybe, as Bill Gates mused, if a robot takes your job, a robot should pay your taxes. (There’s a reason that Silicon Valley elites have recently become smitten with ideas of universal basic income – they know what’s coming.)

It’s not just new economic thinking that will be required. In a world where a small constellation of algorithmic arbiters frames what you see, where you go, who you vote for, what you buy and how you are treated, threats to critical thinking, free will and social solidarity abound. We will use AI to shape our choices, to help us make them, and just as often, to eliminate them – and the more of our autonomy we cede to the machines, the more dependent we may become. There is a fine line between algorithmic seduction and algorithmic coercion. And there is also the pernicious possibility that, as we give algorithms places of authority in social life, we will gently bend our choices, our perspectives and our sense of ourselves to satisfy them – reducing the breadth of our humanity to those templates that the machines best understand.

We are also just now beginning to understand how the algorithms that power social media amplify certain communities, discourses, politics, and polities, while invisibly suppressing others. One wonders whether AI will be a glue or a solvent for liberal democracy. Such democracies are, at their core, complex power-sharing relationships, designed to balance the interests of individuals, communities, markets, governments, institutions, and the rest of society’s messy machinery. They require common frames of reference to function, rooted in a connective tissue of consensual reality. No one is really sure whether our algorithmically driven, hyper-targeted social-media bubbles are truly compatible with democracy as we have understood it. (That’s a point well understood by both Cambridge Analytica, which sought to weaponize AI to shatter the commons, and white nationalists, who have sought to legitimize and normalize their long-suppressed ideologies amid the shards. Both exploited the same techniques.)

And all this is before so-called “deepfakes” – AI forgeries that allow us to synthesize apparent speech from well-known figures – and other digital chicanery are deployed at scale. There has been much handwringing already about the propaganda dangers of deepfakes, but the true power of such weaponized misinformation may, paradoxically, not be in making you believe an outright lie. Rather, it may simply suffice for deepfakes to nudge you to feel a certain way – positively or negatively – about their subject, even when you know it’s not real. Deepfakes intoxicate because they let us play out our preexisting beliefs about their subject as we watch. What a buffoon! or That woman is a danger! They trip our most ancient neural circuits, the ones which adjudicate in-groups and out-groups, us-and-them, revulsion and belonging. As such, AI may be used to harden lines of difference where they should be soft, and make the politics of refusal – of de-consensus and dropping out – as intoxicating as the politics of consensus and coalition-building.

Meanwhile, it is inevitable that the algorithms that underwrite what’s left of our common public life will become increasingly politically contested. We will fight over AI. We will demand our inclusion in various algorithms. We will demand our exclusion from others. We will agitate for proper representation, for the right to be forgotten, and for the right to be remembered. We will set up our own alternatives when we don’t like the results. These fights are just beginning. Things may yet fall apart. The center may not hold.

And yet… while all of these concerns about politics and economics are legitimate, they do not tell anything like the complete story. AI can and will also be used to enrich the human spirit, expand our creativity, and amplify the true and the beautiful. It will be used to encourage trust, empathy, compassion, cooperation, and reconciliation – to create sociable media, not just social media.

Already, researchers have shown how they can use AI to reduce racist speech online, resolve conflicts, counter domestic violence, detect and counter depression, and encourage greater compassion – addressing these and many other chronic ailments of the soul. Though still in their infancy, these tools will help us not only promote greater wellbeing, but demonstrate to the AIs that passively observe us just how elastic human nature can be. Indeed, if we don’t use AI to encourage the better angels of our nature, these algorithms may come to encode a dimmer view, and, in a reinforcing feedback loop, embolden our demons by default.

~

AI will not only emulate human intelligence; it will also transform what it means for people to perceive, to predict and to decide.

When practical computers were first invented – in the days when single machines took up an entire floor of a building – computation was so expensive that human beings had to ration their own access to it. Many teams of people would share access to a single computational resource – even if that meant running your computer program at four in the morning.

Now, of course, we’ve made computation so absurdly cheap and abundant that things have reversed: it is many computers that share access to a single person. Now, we luxuriate in computation. We leave computers running, idly, doing nothing in particular. We build what are, in historical terms, frivolities like smart watches and video games and mobile phones with cameras optimized to let us take pictures of our breakfasts. We expand the range of problems we solve with computers and invent new problems to solve that we hadn’t even considered problems before. In time, many of these “frivolities” have become even more important to us than the “serious” uses of computers that they have long since replaced.

The arrival of AI is fostering something deeply similar — not in the realm of computation, but in its successors: measurement and prediction.

As we instrument the world with more and more sensors, producing ever-more data, and analyzing them with ever-more-powerful algorithms, we are lowering the cost of measurement. Consequently, many more things can be measured than ever before. As more and more of the world becomes observable with these sensors, we will produce an ever-increasing supply of indicators, and move from a retrospective understanding of the world around us to an increasingly complete real-time one. Expectations are shifting accordingly. If the aperture of contemporary life feels like it’s widening, and the time-signature of lived experience feels like it’s accelerating, this is one reason.

And this is mere prelude. With enough sensors and enough data, the algorithms of AI will shift us from real-time to an increasingly predictive understanding of the world – seeing not just what was, nor what is, but what is likely to be.

Paradoxically, in many fields, this will likely increase the premium we put on human judgment – the ability to adeptly synthesize this new bounty of indicators and make sound decisions about them. An AI algorithm made by Google is now able to detect breast cancer as well as, or better than, a radiologist; and soon, others may be able to predict your risk of cancer many years from now. It’s still your oncologist, however, who is going to have to synthesize these and dozens of other signals to determine what to do in response. The more informed her decisions become, the more expensive they are likely to remain.

~

Eventually, machines will augment, or transcend, human capabilities in many fields. But this is not the end of the story. You can see that in the domains where AI has been deployed the longest and most impactfully. There is a story after the fall of man.

Consider what has happened in perhaps the ur-domain of artificial intelligence: chess.

When IBM’s Deep Blue computer beat Garry Kasparov in 1997, ending the era of human dominance in chess, it was a John-Henry-versus-the-steam-engine-style affair. A typical grandmaster is thought to be able to look 20-30 moves ahead during a game; a player of Kasparov’s exquisite skill might be expected to do substantially more than that. Deep Blue, however, was able to calculate 50 billion possible positions in the three minutes allocated for a single move. The chess master was simply computationally outmatched.

Deep Blue’s computational advantage wasn’t paired with any deep understanding of chess as a game, however. To the computer, chess was a very complex mathematical function to be solved by brute force, aided by thousands of rules that were artisanally hand-coded into the software by expert human players. Perhaps unsurprisingly, Deep Blue’s style of play was deemed “robotic” and “unrelenting”. And that style remained dominant in computational chess, carried forward by Deep Blue’s descendants all the way to the present day.

All of that changed with the recent rise of genuine machine-learning techniques pioneered by Google’s DeepMind unit. The company’s AlphaZero program was given no strategy, only the rules of chess, and played itself 44 million times. After just four hours of self-training, it developed sufficient mastery to become the most successful chess-playing entity – of either computer or human variety – in history.

Several things are notable about AlphaZero’s approach. First, rather than evaluating tens of millions of moves, the program only analyzed about sixty thousand – approaching the more instinctual analysis of human beings, rather than the brute-force methods of its predecessors.

Second, the style of AlphaZero’s play stunned human players, who described it as “beautiful”, “creative” and “intuitive” — words that one would normally associate with human play. Here was a machine with an apparently deep understanding of the game itself, evidencing something very close to human creativity. Being self-taught, AlphaZero was unconstrained by the long history of human styles of play. It not only discovered our most elegant human strategies for itself, but entirely new ones, never seen before.

In other words, chess is more interesting to human beings for their having lost dominance. In the aftermath of our fall, the world – in this case, the game of chess – is revealed to be more pregnant with possibilities than our own expertise suggested. We lose our dominance, but we gain, in a sense, a richer world.

And this is likely a generalizable lesson: after a long age of human dominance in a particular intellectual pursuit falls before AI, we don’t turn away from those pursuits where we have been bested. It’s possible that AlphaZero’s successors may one day develop strategies that are fundamentally incomprehensible to us; but in the meantime, they are magnificent teachers – expanding humanity’s understanding of the truth of the game in a way no human grandmaster could. Even the programs that AlphaZero bested – those brute force approaches that are themselves better than any human player – are, through their wide availability, improving everyone’s game. Somehow, our diminished status doesn’t reduce our love of chess – much in the way that the reality of LeBron James doesn’t diminish our love of playing basketball.

Variations of this story will unfold in every field and creative endeavor. Humanity will be stretched by artificial intelligence, augmented and empowered by it, and in places, bested by it. The age of human dominance in some fields will come to a close, as it already has in many areas of life. That will be cause for concern, but also for celebration — because we humans admire excellence, and we love to learn, and the rise of AI will provide ample opportunities for both.

Artificial intelligence will provide us with answers we couldn’t have arrived at any other way. We can ensure a humane future with AI by doing what we do best: relentlessly asking questions, imagining alternatives, and remembering the power inherent in our choices – we have more than we know.

Anna Ridler, Detail from Untitled (from the Second training set), from the series “Fall of the House of Usher,” 2018.

Ending The Tyranny
of the Average

November 6, 2018

This blog is returning from hiatus, with a new, occasional series on thinking about complex problems.

At a recent convening of leaders in public health, Dr. David Fleming, of PATH, shared what has become a common observation regarding the relationship between spending and outcomes on US healthcare: namely, that the US spends far more than any other nation (both per-capita and in total dollars) on the health of our citizens, yet achieves results (measured in terms of life expectancy) which place us last on the list of developed nations:

This fact has featured prominently in debates over the design of the US healthcare system, fueling arguments on all sides. Yet a much more interesting picture emerges, Dr. Fleming showed, when you look at life expectancy by county, rather than nationally:

Here we see how US counties perform when compared to the world’s ten best-performing countries in terms of life expectancy. The darkest blue counties are up to fifteen years ahead of the world’s best-performing countries; the darkest red counties are up to fifty years behind – that is, their life expectancy today matches what the best-performing countries achieved decades ago.

What becomes immediately clear from even a casual glance is how place-based US health performance is. America’s worst-performing communities are clustered in the American Deep South — there are counties in rural Alabama, for example, where life expectancy substantially lags that of, say, Vietnam or Bangladesh. Conversely, there are certain Northeastern and California communities that are so far ahead it may take even the most advanced economies another decade just to catch up to them.

The absence of these distinctions in our everyday debates over healthcare illustrates what statisticians call the tyranny of the average, a term used to describe the consequences of using averages as indicators of the performance of complex systems.

Average indicators mask the “lumpy” reality of many complex phenomena, and generally, dumb down our thinking and our debates. They suppress our understanding of both the negative and positive outliers in a given domain. Aggregate educational statistics about a city, for example, can “hide” a failing school, just as readily as they can “hide” a school that is outperforming. They are an enemy of accountability and a disincentive to action.

Averages also shape our view of what constitute “normal” phenomena in the popular imagination, and they reinforce our assumptions about what “normal” responses to those phenomena should be. The real world, of course, is much more diverse. Psychologists, for example, are learning that there is a far wider array of healthy responses to adverse life events than our default cultural categories suggest. As Pace University’s Anthony Mancini writes:

 “Reliance on average responses has led to the cultural assumption that most people experience considerable distress following loss and traumatic events, and that everyone can benefit from professional intervention. After 9/11, for example, counselors and therapists descended on New York City to provide early interventions, particularly to emergency service workers, assuming that they were at high risk of developing posttraumatic stress disorder. In fact, most people—even those who experience high levels of exposure to acute stress—recover without professional help.

And we now know that many early interventions are actually harmful and can impede natural processes of recovery. For example, critical incident stress debriefing, a once widely used technique immediately following a traumatic event, actually resulted in increased distress three years later among survivors of motor vehicle accidents who received this treatment, compared to survivors who received no treatment.”

The Tyranny of the Average is given perfect mathematical expression in Anscombe’s quartet, four datasets that appear nearly identical when described with simple statistics, but reveal a completely different picture when graphed. All of these datasets have exactly the same average, variance and correlation – yet the underlying behavior of the system they represent is completely hidden until you look at the data spatially:

Each of these variations describes a different reality, and each of those realities requires a different range of strategies if we want to drive intentional (and presumably beneficial) change. There are many strategies which might improve one variation, but actually harm the others.
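The quartet’s uncanny sameness is easy to verify for yourself. Here is a minimal Python sketch, using the dataset values from Anscombe’s original 1973 paper, that computes the summary statistics for all four sets:

```python
from statistics import mean, variance

# Anscombe's quartet (F. J. Anscombe, 1973): four small datasets
# engineered to share nearly identical summary statistics.
x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]  # sets I-III share the same x values
quartet = {
    "I":   (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    "II":  (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    "III": (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    "IV":  ([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8],
            [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
}

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

for name, (xs, ys) in quartet.items():
    print(f"{name:>3}: mean(x)={mean(xs):.2f}  var(x)={variance(xs):.2f}  "
          f"mean(y)={mean(ys):.2f}  var(y)={variance(ys):.2f}  r={pearson(xs, ys):.2f}")
```

Every dataset reports mean(x)=9.00, var(x)=11.00, mean(y)=7.50, var(y)≈4.12 and r=0.82 – yet a scatter plot of each looks entirely different: a linear trend, a curve, a line with one outlier, and a vertical cluster with one extreme point.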

For many complex social phenomena, such “lumpy” distributions are often fractal – repeating at every resolution. Just as some counties dramatically over- or under-perform their peers in healthcare, so too one community within a county may do so, and one neighborhood within that community, and one block within that neighborhood.

You can see this in evidence in the State of Mississippi, which ranks 50th in the country in terms of infant mortality, with 9.08 infant deaths per 1000 live births. (This is called the “IMR” or “Infant Mortality Rate.”) Drill down on this “averaged” statewide indicator and, as you might expect, you’ll find wild disparities at the county level, with some counties doing dramatically worse (or better) than their peers:

Immediately, you may think this is a good proxy map of where the maternal health system is strongest and where it is weakest. Yet even this map hides something significant, and distorting – the shocking disparity between the IMR for black and white mothers. Here’s the relevant comparison data for the twenty most populous counties in Mississippi:

As you can see, in every single county on the list, the IMR for black mothers is higher than it is for white mothers. In Alcorn county, for example, the IMR for black mothers is 30.8 deaths per 1000 births – 275% higher than it is for white mothers who live in the same county. (The IMR in North Korea, in contrast, is ‘only’ 22.) In Pearl River County, the IMR for black mothers is 24.9, an incredible 344% higher than the IMR for white mothers of 5.6. Yet, on the above map, Pearl River only merits a severity ranking of 2 (on a scale of 1-5) because the aggregate statistic effectively “hides” this unequal racial outcome.
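These disparity percentages are simple ratio calculations. A quick sketch, using only the Pearl River County figures cited above (the Alcorn white-mother IMR is not given in the text, so it is omitted here):

```python
def pct_higher(a: float, b: float) -> float:
    """How much higher a is than b, expressed as a percentage of b."""
    return (a - b) / b * 100

# Pearl River County, MS: infant deaths per 1,000 live births, as cited above.
imr_black, imr_white = 24.9, 5.6
gap = pct_higher(imr_black, imr_white)
print(f"Black-mother IMR is {gap:.1f}% higher than white-mother IMR")  # ≈ 344.6%
```

The exact ratio is about 344.6%, which the text truncates to 344% – either way, a disparity that the county’s aggregate severity ranking of 2 entirely conceals.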

Two observations stand out here: first, for those seeking to encourage positive social and ecological change, having higher-resolution statistics is not only tactically essential, it’s politically and even morally essential. Without them, you can’t see the texture of the problem you’re confronting – and you can’t build the interest groups, stakeholders, accountability frameworks or political case necessary to measure and drive change.

The second is that many of our 21st-century problems may ultimately be better understood in terms of place, rather than in terms of problem-set. Poverty, limited social power, environmental degradation, underfunded education, poor health access and other issues often cluster spatially – and mutually reinforce one another. But sometimes, they don’t. Understanding the difference is essential if we’re to avoid one-size-fits-all solutions.

To do that, we need a revolution in data collection and reporting – including the production of new kinds of indicators that not only tell us the health of the overall system, but can decompose that larger picture into its constituent parts – and reveal the nuanced ‘texture’ of a place in both space and time.

The Namib desert, as seen from the Korean KompSat-2 Satellite

Seeing the Whole

April 23, 2016

“The world is full of magic things, patiently waiting for our senses to grow sharper.”
―W.B. Yeats

Human perception is a fickle, paradoxical instrument.

Our visual sense, while more acute than that of many species, is hardly the keenest in the animal kingdom. An eagle, for instance, has eyesight so sharp it can spot small prey more than three kilometers away. The next time you happen upon one in the wild, know that it saw you coming from afar, and waited patiently for you to arrive.

Birds are also “tetrachromats”; in addition to the spectra that are visible to humans, they possess a fourth kind of cone in their retinas, which allows them to see ultraviolet wavelengths of light. A very few human beings have inherited a genetic variant that confers tetrachromacy; they describe living in a world of spectacular subtlety and vibrancy, wholly unavailable to the rest of us, in which hundreds of unseen variations lurk in what we might otherwise label “green” or “blue.”

Other animals, especially small ones, sense time in ways that we might consider super-human. A common housefly processes about four times more visual information each second than a human being. Its “mental movie” is composed of two hundred and fifty frames per second; ours, a paltry sixty. As a consequence, some zoologists believe that a fly’s experience of time is radically slowed. To the fly, we appear as lumbering beasts, haplessly waving our rolled-up newspapers in slow motion.

One could fill a book with such unflattering comparisons, but don’t pity the poor humans. Over thousands of years of self-compounding refinement, we have managed to augment our otherwise provincial senses far beyond what any other animal might hope for. Indeed, this increasing sensual acuity is a central theme in the story of human progress.

Consider: In the 5th century BCE, the Greek philosopher Democritus first developed the (at the time, unobvious) idea that the world was filled with small, indivisible particles—άτομα. His contemporary, Aristotle, thought this idea was ludicrous, and the idea languished for centuries. Today, along the border between France and Switzerland, physicists at the Large Hadron Collider regularly accelerate subatomic particles to 99.999999% of the speed of light, and then smash them together in violent explosions that simulate the earliest moments after the Big Bang. In the resulting flash, which lasts for only a few billionths of a second, they glimpse the esoteric particles that form the basic building blocks of the universe. To even attempt this feat required the invention of detectors that are so exquisitely sensitive that they must be continually readjusted to compensate for minute fluctuations in the gravitational pull of the moon.

In a similar vein, we have peered further out into the inky darkness—and thus, further back in time—than any other animal. The universe is 13.7 billion years old. Human beings have built an instrument—the Planck Space Telescope—that has detected the dim remnants of radiation emitted when the cosmos was a mere 380 thousand years old—or about 0.0004 billion years after its birth. Said another way: if the entire history of the cosmos were compressed into a year, we human beings have peered all the way back to its first fifteen minutes.

These Olympic feats of enhanced perception are among the crowning achievements of our species. Yet even as we celebrate them, our workaday senses remain stubbornly parochial.

Walking down the street, we easily sense changes that occur at one or two meters per second, especially if those changes occur where our experience tells us they ought to. But we’re terrible at sensing changes that unfold significantly faster—or slower—than our preferred speed, or occur where our experiences haven’t conditioned us to look.

This parochialism is part of the reason why we so poorly understand the world around us. Our planet is vastly larger and more complex than our ability to readily comprehend, and moves at speeds, and scales, and with interdependencies that do not conform to our everyday modes of thinking. If it did, climate change would have been solved long ago.

Paradoxically, humanity’s civilizing instinct inflames these perceptual biases. Civilization can be understood, in part, as the imposition of a kind of human-scale regularity upon the world. From inside it, it’s easy to forget that we nest, unsteadily, within the larger complexity of the whole—and not the other way around.

Now, thankfully, humanity is developing new technologies that can help us sense the world at scale, and make change visible in ways that are much more amenable to human cognition. And that matters, because seeing the world, deeply and in its totality, is the first step on the path to communion, empathy, and stewardship.

Mangroves spread in fractal patterns along the remote Keep River in Australia.
Image courtesy of Planet Labs.

 

For the past several years, I’ve been lucky enough to work with Earth-imaging specialists, planetary scientists, engineers and others who regularly peer at the world through a suite of these new instruments.

Uncontrolled peat fires in Indonesia, exacerbated by a strong El Niño.
Image courtesy of Planet Labs.

Some of these colleagues, at a company called Planet Labs, are deploying the largest constellation of Earth-observing satellites in human history. When fully operational, this system will collectively image the entire surface of the Earth, in high resolution, every day.

Luuq, Somalia rests in a large oxbow in the Jubba River, and is currently a haven for hundreds of Somalia’s “internally displaced persons.”
Image courtesy of Planet Labs.

Through the Planet Labs satellites (called Doves) and other Earth-imaging tools, on any given day, one can witness the world of the Anthropocene—the Age of Humans—in all its complexity. Agricultural fires signal the onset of the planting season in Brazil. Refugee camps expand along the Turkish border with Syria. Ice floes dissolve off the coast of Nova Scotia. The Amazonian rainforest is slowly, and illegally, denuded. Monolithic manufacturing complexes spread out in China. Megacities in Africa push ever outward. Crater-like remnants of nuclear bomb tests scar the Nevada desert. The density of nighttime illumination hints at the relative poverty, and equity, of human societies.

Nevada’s “Plutonium Valley”, where nuclear weapons were tested in the 1950s, will remain radioactive for 400 generations.
Image courtesy of Planet Labs.

Not all of this sensing is done with satellites. At the University of Washington, computer scientist Ricardo Martin Brualla and his colleagues have developed software tools that harvest countless digital snapshots that we post on the Internet and synthesize them into films that show the aggregate change in one place over time.

 

 

 (Above: The retreat of the Briksdalsbreen glacier in Norway, and the rise of the Las Vegas skyline, recomposed from hundreds of Internet photos, by Ricardo Martin Brualla and colleagues. Used by permission.)

For the first time in our history, broad access to these kinds of tools and imagery makes visible, for anyone, the hidden dynamism of the planet—a dynamism that we spy occasionally, and only liminally, in our everyday life.

These images reveal not only change, but also vast diversity. Look upon the Earth long enough, and you will find almost every adjective fulfilled, somewhere. The world is beautiful, of course. But it is also sometimes ugly. It is intensely intertwined with human affairs, although it is sometimes indifferent or even overtly hostile to them. In some places, we are instruments of the world’s ruin; in others, less frequently, its regeneration.

The world is being built. It is growing. It is on fire. It is collapsing. It is in bloom. It is in decay.

And it is all these things at once.

Sitting with the grand simultaneity of it all, with the direct perception of boundless, kaleidoscopic global change, one begins to feel something new: the possibility of a planetary sense.

And here is the crux of the matter: Earth observation, if entered into deeply, can be not only a psychological experience, but a spiritual one, too.

This requires not just looking, but beholding: sitting in deep and focused awareness, in full presence, without judgment.

Through this practice, we can begin to internalize the complex and subtle array of connections, patterns, and rhythms that dance upon the Earth. With practice, one can induce a kind of “perceptual flickering”—the rapid switching of awareness between radically different scales of time, space, and organization.

As this awareness grows, so too do a host of simultaneous emotions: joy at the breathtaking beauty of the world; wonder at its occasional, deep strangeness; empathy with its suffering; urgency toward the relief of that suffering. These, in turn, reinforce an abiding solidarity with the planet and its many inhabitants.

Still deeper, this solidarity gives way to a sense of unity. The subject-object distinction collapses, and we discover that the dynamism of the world does not end at the water’s edge of our senses. It continues inward. We contain, and are contained within, a great multitude of systems and processes—flickering into being, growing, ebbing, and renewing.

Such an observation should not be paralyzing, but liberating. The world has conspired to produce consciousness at the human scale, but it hasn’t limited our ability to sense or act solely to that scale.

Language sometimes fails us. It is predicated on syntactical rules that often reinforce our separateness. We read “Earth Day” through the lens of this linguistic separation—as if we were somehow outside of the Earth, and not, in reality, utterly cradled within it.

By cultivating our planetary sense, learning to look more directly at the world, we can push past the illusions of syntax toward a deep, contemplative ecology of which we are an integral part.

A version of this essay first appeared on the website of the Garrison Institute, where I serve as a Board member, to commemorate Earth Day.

At the top: A dry riverbed in the Namib desert, as seen from the Korean KOMPSAT-2 satellite.

Patterns of Resilience and Collapse

March 13, 2016

In 1901, the writer and Nobel laureate Maurice Maeterlinck published “The Life of the Bee”, which popularized the idea that humanity owes our continued survival to the dutiful pollinator. “It is … estimated that more than a hundred thousand varieties of plants would disappear if the bees did not visit them,” Maeterlinck noted, “and possibly even our civilization, for in these mysteries all things intertwine.”

Today, we are getting perilously close to testing Maeterlinck’s hypothesis empirically. Bee colonies, which are responsible for billions of dollars of agriculture, food security, and ecosystem services, are collapsing around the world. It would not take the death of the last hive for life to become grim for at least some of the people, plants and animals who depend on bees, either directly or indirectly.

But precisely how many bee colonies have to collapse before the larger system fails? Are we near the tipping point? Here, we are on much murkier ground.

The same dynamic holds true in our understanding of countless other marquee challenges of our present age, from ocean warming and climate change, to the genetic basis of cancer, the performance of electrical grids, digital networks, and even the stock market. While all of these complex systems have compensatory elements, there is a point when the accumulation of damage becomes unrecoverable, and catastrophic collapse follows. But when?

Last month, network theorists Jianxi Gao, Baruch Barzel and Albert-László Barabási published an important paper in Nature, “Universal resilience patterns in complex networks”, which takes us a significant step closer to determining these pernicious tipping points. The authors present a universal mathematical framework that allows one to compute a straightforward “resilience function” (or perhaps more aptly, a “collapse threshold”) for any complex system – its point of no return.

Until now, the only way to identify such thresholds was to painstakingly map the interactions of all of the individual elements in the system – a task that was impractical or impossible in most circumstances. (Even our most sophisticated models of the climate, run on supercomputers, model only a fraction of the forces acting upon it.) Gao, Barzel and Barabási have taken much of that complexity out, collapsing it into a single indicator of system health. As such, if borne out, this work will certainly find its way into applied resilience domains – especially disaster risk reduction, climate adaptation and finance.
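To make the idea concrete: as I read the paper, its central move is to collapse a network’s many pairwise interactions into a single effective parameter, β_eff = ⟨s_out · s_in⟩ / ⟨s⟩, computed from the weighted in- and out-degrees of the nodes. Here is a minimal sketch of that calculation in Python — my own illustration of the published formula, not the authors’ code, and the small example network is invented for demonstration:

```python
# Sketch of the effective network parameter from Gao, Barzel & Barabasi,
# "Universal resilience patterns in complex networks" (Nature, 2016).
# This is an illustrative reading of the paper's formula, not its code.
import numpy as np

def beta_eff(A):
    """Collapse a weighted network into a single resilience indicator.

    A : (N, N) non-negative matrix, A[i, j] = strength of j's influence on i.
    Returns beta_eff = <s_out * s_in> / <s>, where s_in and s_out are the
    weighted in- and out-degrees (their means are equal, both sum to the
    total edge weight over N).
    """
    s_in = A.sum(axis=1)    # weighted in-degree of each node
    s_out = A.sum(axis=0)   # weighted out-degree of each node
    return (s_out * s_in).mean() / s_in.mean()

# Toy directed network of three nodes (hypothetical example):
A = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 1., 0.]])
print(beta_eff(A))  # approximately 1.6 for this small network
```

The appeal of the reduction is exactly what the paragraph above describes: instead of mapping every interaction’s dynamics, one tracks where β_eff sits relative to a critical threshold for the system’s dynamics, which is what makes the approach practical for large real-world networks.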

The above video, which acts as a companion to the paper, does a terrific job of explaining the work (and key concepts of ecological resilience in general).


Introjis: Emoticons for Introverts

February 14, 2015

Continuing the recent theme of bringing emotional richness to social media, FastCo.Exist has details of Introjis, emoticons designed for introverts by designer Rebecca Evie Lynch.

Introjis allow introverted users to express a natural and healthy affinity for solitude or quietude, and wordlessly express the occasional distress or fatigue that comes from being in the crowd too long. They’re an evocative counterpoint to more traditional emojis, which express a different range of emotional temperatures.

The “No to the invitation, but thank you!” Introji


The “I want to leave the party” Introji


The “Let’s sit quietly and do our own thing” Introji