Chapter 31, Artificial Intelligence and Polarization, continued....

p33
Prejudice and hostility have always animated this social identity instinct. Hunter-gatherer tribes sometimes competed for resources or territory. One group’s survival might require the defeat of another. Because of that, social-identity instincts drive us to distrust and, if necessary, rally against out-group members. Our minds compel those behaviors by sparking two emotions in particular: fear and hate. Both are more social than you might think. Fear of a physical threat from without causes us to feel a greater sense of camaraderie with our in-group, as if rushing to our tribe for safety. It also makes us more distrustful of, and more willing to harm, people whom we perceive as different. Think of the response to the September 11 attacks: a tide of patriotic flag-waving fervor and an alignment of fellow feeling, but one that was also followed by a spike in anti-Muslim hate crimes. See Chapters 11-14

These are deeply social instincts, so social media platforms, by turning every tap or swipe into a social act, reliably surface them. And because the platforms elevate whatever sentiments best win engagement, they often produce those instincts in their most extreme form. The result can be an artificial reality in which the in-group is always virtuous but besieged, the out-group is always a terrifying threat, and virtually everything that happens is a matter of us-versus-them. See Chapters 6+7, 22-25, 28 – the unneureal and polarization

p37
Shuttling between interviews with politicians and activists, I came to see Myanmar’s future as shakier than it had been portrayed. The military still held vestiges of power that it seemed reluctant to surrender. Among the clergy, an extremist fringe was rising. And its newly available social media was filling with racism and conspiracies. Online, angry talk of traitorous minorities felt ubiquitous.

A worrying name kept coming up in my conversations: Wirathu. The Buddhist monk had spent the past decade imprisoned for his hate-filled sermons and had just been released as part of a general amnesty. He’d immediately joined Facebook and YouTube. Now, rather than traveling the country temple by temple to spread hate, he used the platforms to reach much of the country, perhaps multiple times per day. He accused the country’s Muslim minority of terrifying crimes, blending rumor with shameless fabrication. On Facebook especially, his posts circulated and recirculated among users who took them as fact, creating an alternate reality (the unneureal) defined by conspiracy and rage, which propelled Wirathu to a new level of stardom.

A Stanford researcher who had worked in Myanmar, Aela Callan, met with senior Facebook managers in late 2013 to warn them that hate speech was overrunning the platform, she later told the reporter Timothy McLaughlin. For a country with hundreds of thousands of users, and soon millions, Facebook employed only one moderator who could review content in Burmese, Myanmar’s predominant language, leaving the platform effectively unsupervised. The managers told Callan that Facebook would press forward with its Myanmar expansion anyway.

In early 2014, Callan relayed another warning to Facebook: the problem was worsening, and with it the threat of violence. Again, little changed. A few months later, Wirathu shared a post falsely claiming that two Muslim tea shop owners in the city of Mandalay had raped a Buddhist woman. He posted the names of the tea sellers and their shop, calling their

p38
fictitious assault the opening shot in a mass Muslim uprising against Buddhists. He urged the government to raid Muslims’ homes and mosques in a preemptive strike—a common demand of genocidaires, whose implied message is that regular citizens must do what the authorities will not. The post went viral, dominating feeds across the country. Outraged users joined in the froth, urging one another to wipe out their Muslim neighbors. Hundreds rioted in Mandalay, attacking Muslim businesses and owners, killing two people and wounding many more.

As the riots spread, a senior government official called someone he knew at the Myanmar office of Deloitte, a consulting firm, to ask for help in contacting Facebook. But neither could reach anyone at the company. In desperation, the government blocked access to Facebook in Mandalay. The riots cooled. The next day, officials at Facebook finally responded to the Deloitte representative, not to inquire after the violence but to ask if he knew why the platform had been blocked. Facebook’s perception was unneureal. In a meeting two weeks later with the government official and others, a Facebook representative said that they were working to improve their responsiveness to dangerous content in Myanmar. But if the company made any changes, the effect was undetectable on its platform. As soon as the government lifted its virtual blockade, hate speech, and Wirathu’s audience, only grew. “From at least that Mandalay incident, Facebook knew,” (now their perception is neureal) David Madden, an Australian who ran Myanmar’s largest tech-startup accelerator, told McLaughlin, the reporter. “That’s not 20/20 hindsight. The scale of this problem was significant and it was already apparent.”

Either unable or unwilling to consider that its product might be dangerous, Facebook continued expanding its reach in Myanmar and other developing and under-monitored countries. It moored itself entirely to a self-enriching Silicon Valley credo that Schmidt had recited on that early visit to Yangon: “The answer to bad speech is more speech. More communication, more voices.”

p43
As Gamergate entered the public consciousness, Wu leveraged connections at social networks to lobby them, at the least, to curb the harassment campaigns emerging from their systems. But the Silicon Valleyites she spoke to, mostly young white men, seemed never to have considered that hate and harassment might have real consequences, much less how to curb them. “It’s not because they’re villains,” she said. “They just don’t have a certain lived experience a lot of women, and queer people, and people of color have.”

The least responsive companies were Facebook, which would not engage with her at all, and Reddit, one of the places where Gamergate had started. The more that Wu interacted with the platform operators or explored the poison emanating from their sites, the more she suspected a wider danger. “Software increasingly defines the world around us,” she wrote in early 2015. Platforms and apps “create our social realities—how we make friends, how we get jobs, and how mankind interacts.” See Chapters 6+7 on neuroreality.  But they had been designed with little input from people outside of the Valley’s narrow worldview or demographic. “These systems are the next frontier of human evolution (See Chapter 2), and they’re increasingly dangerous for us,” Wu concluded, adding, in a sentiment that was considered overstated at the time, “The stakes couldn’t be higher.”

That transformation had been set in motion forty years earlier, with a generation of Silicon Valley computer makers who saw themselves as revolutionaries destined to tear down the American status quo in its entirety, and who built social networking, very explicitly, as the tool by which they would do it. But their new digital society, envisioned as an eventual replacement for all that had come before, was engineered less for liberation than for anger and conflict, thanks to an original sin of Silicon Valley capitalism and, in the 1990s, a fateful twist in toy economics. The result was a digital world already coursing, by the early 2000s, with a strange mix of male geek chauvinism and, though it was initially dismissed, far-right extremism.

Gamergate announced our new era, of American life shaped by social media’s incentives and rules, from platforms just beyond the outskirts of mainstream society. Within a few years, those platforms would grow Gamergate and its offshoots into nationwide movements, carry them into the homes of millions of digital newcomers, and mobilize them into a political force that would, very soon, ride into the White House.

p60
Toy departments were, at that moment, sharply segmenting by gender. President Reagan had lifted regulations forbidding TV advertising aimed at children. Marketers, seized by a neo-Freudianism then in vogue, believed they could hook kids by indulging their nascent curiosity about their own genders. New TV programming like My Little Pony and GI Joe delivered hyper-exaggerated gender norms, hijacking children’s natural gender self-discovery and converting it into a desire for molded plastic products. If this sounds like a strikingly crisp echo of social media’s business model, it’s no coincidence. Tapping into our deepest psychological needs, then training us to pursue them through commercial consumption that will leave us unfulfilled and coming back for more, has been central to American capitalism since the postwar boom.

p65
Others from DiResta’s informal group of social media watchers were noticing Facebook and other platforms routing them in similar ways. The same pattern played out over and over, as if those A.I.s had all independently arrived at some common, terrible truth about human nature. “I called it radicalization via the recommendation engine,” she said. “By having engagement-driven metrics, you created a world in which rage-filled content would become the norm.” The algorithmic logic was sound, even brilliant.
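As a minimal sketch of the dynamic DiResta is describing, assume a feed ranked purely by a predicted-engagement score in which outrage earns far more engagement than informativeness. The Post fields, weights, and example posts below are invented for illustration; this is not any platform's actual model.

```python
# Toy engagement-driven ranker -- hypothetical scores, not real platform code.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    outrage: float  # 0.0 (calm) to 1.0 (enraging), an assumed score
    quality: float  # 0.0 to 1.0 informativeness, also assumed

def predicted_engagement(post: Post) -> float:
    # Assumed engagement model: outrage drives clicks and shares
    # far more than quality does.
    return 0.8 * post.outrage + 0.2 * post.quality

def rank_feed(posts):
    # The engagement-driven metric: sort purely by predicted engagement,
    # with no term for accuracy, civility, or downstream harm.
    return sorted(posts, key=predicted_engagement, reverse=True)

feed = rank_feed([
    Post("Measured explainer of the policy debate", outrage=0.1, quality=0.9),
    Post("THEY are coming for YOUR community",      outrage=0.9, quality=0.1),
    Post("Local charity drive this weekend",        outrage=0.0, quality=0.6),
])
for post in feed:
    print(f"{predicted_engagement(post):.2f}  {post.text}")
# The enraging post ranks first even though it is the least informative.
```

Nothing in that objective asks whether a post is true or cruel; whatever wins engagement wins distribution, which is why rage-filled content becomes the norm.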


Radicalization is an obsessive, life-consuming process. Believers come back again and again, their obsession becoming an identity, with social media platforms the center of their day-to-day lives. And radicals, driven by the urgency of their cause, recruit other radicals. “We had built an outrage machine in which people actually participated in pushing the content along,” DiResta said, where the people who became radicalized were thereafter “the disseminators of that content.” She had seen it over and over.

p80
The result was a near-universal convergence on these behaviors and ways of thinking, incentivized all along the way by social media. There were moments, Wu admitted, when she found herself tempted to spin up outrage, to rally her followers against some adversary, to push a claim that, while dubious, might flatter the identity and inflame the prejudices of her in-group. She usually caught herself, but not always. “It’s something I really struggle with, myself, in my own person, the way I interact with the world, because there’s something really dangerous that’s been unlocked here,” she said. “This cycle of aggrievement and resentment and identity, and mob anger, it feels like it’s consuming and poisoning the entire nation.” See Chapters 15, 22-25, 28

p86
Recall those early tribes of up to 150 people. To survive, the group had to ensure that everyone acted in the collective interest, part of which was getting along with one another. That required a shared code of behavior. But how do you get everyone to internalize, and to follow, that code? Moral outrage is our species’ adaptation for this challenge. When you see someone violating an important norm, you get angry. You want to see them punished. And you feel compelled to broadcast that anger, so that others will also see the violation and want to join in shaming, and perhaps punishing, the transgressor. See Chapters 11-14

Popular culture often portrays morality as emerging from our most high-minded selves: the better angels of our nature, the enlightened mind. Sentimentalism holds that it is actually motivated by social impulses like conformity and reputation management (remember the sociometer?), which we experience as emotion. Neurological research supports this. As people faced with moral dilemmas work out how to respond, they exhibit heavy activity in neural regions associated with emotions. And the emotional brain works fast, often resolving to a decision before conscious reason even has a chance to kick in. Only when asked to explain their choices do research subjects activate the parts of the brain responsible for rational calculation, which they use, retroactively, to justify whatever emotion-driven action they had already decided on.

Those moral-emotional choices seemed reliably to serve a social purpose, like seeking peers’ approval, rewarding a Good Samaritan, or punishing a transgressor. But the instinctual nature of that behavior leaves it open to manipulation. Which is exactly what despots, extremists, and propagandists have learned to do, rallying people to their side by triggering outrage—often at some scapegoat or imagined wrongdoer. What would happen when, inevitably, social platforms learned to do the same? See Chapters 11+12, 15, 24+25, 28

p92
Truth or falsity has little bearing on a post’s reception, except to the extent that a liar is freer to alter facts to conform to a button-pushing narrative. What matters is whether the post can provoke a powerful reaction, usually outrage. A 2013 study of the Chinese platform Weibo found that anger consistently travels further than other sentiments. Studies of Twitter and Facebook have repeatedly found the same, though researchers have narrowed the effect from anger in general to moral outrage specifically. Users internalize the attentional rewards that accompany such posts, learning to produce more, which also trains the platforms’ algorithms to promote them even further.

Many of these incidents had a left-wing valence to them, leading to fears of a “cancel culture” run amok. But this merely reflected the concentration of left-leaning users in academic, literary, journalistic, and other spaces that tend to be more visible in American life. The same pattern was also unfolding in right-leaning communities. But most such instances were dismissed as the work of fringe weirdos (Gamergate, anti-vaxxers) or extremists (incels, the alt-right). Right or left, the common variable was always social media, the incentives it imposes, the behavior it elicits.

For its targets, the damage, deserved or not, is real and lasting. Our brains process social ostracism as, quite literally, pain. Being shunned hurts for the same reason that a knife piercing your skin hurts: you have evolved to experience both as mortal threats. Our social sensitivity evolved for tribes where angering a few dozen comrades could mean a real risk of death. On social media, one person can, with little warning, face the fury and condemnation of thousands. At that scale, the effect can be psychologically devastating. “The big part of harassment that people who haven’t been repeatedly harassed by a hateful mob are lucky to not get is: It changes your life forever,” Pao, the former Reddit chief, once wrote. “You don’t trust as easily.”

The consequences extended beyond handfuls of people targeted by arguably misplaced or disproportionate anger. Public life itself was becoming more fiercely tribal, more extreme, more centered on hating and punishing the slightest transgression. “I’m telling you, these platforms are not designed for thoughtful conversation,” Wu said. “Twitter, and Facebook, and social media platforms are designed for: ‘We’re right. They’re wrong. Let’s put this person down really fast and really hard.’ And it just amplifies every division we have.” The purpose of the synthisophy Facebook page is thoughtful conversation on political topics, which is what we have been doing since 2017.

p93
THE MYSTERY OF moral outrage—why are we so drawn to an emotion that makes us behave in ways we deplore?—was ultimately unraveled by a seventy-year-old Russian geneticist holed up in a Siberian research lab, breeding thousands of foxes. See Chapters 2-7. Lyudmila Trut arrived at the lab in 1959, fresh out of Moscow State University, to search for the origins of something that had seemed unrelated: animal domestication.

p94
Domestication was a mystery. Charles Darwin had speculated that it might be genetic. But no one knew what external pressures turned wolves into dogs or how the wolf’s biology changed to make it so friendly. Darwin’s disciples, though, had identified a clue: domesticated animals, whether dog or horse or cow, all had shorter tails, softer ears, slighter builds, and spottier coats than their wild counterparts. And many had a distinctive, star-shaped spot on their foreheads.

If Trut could trigger domestication in a controlled setting, she might isolate its causes. Her lab, attached to a Siberian fur factory, started with hundreds of wild foxes. She scored each on its friendliness to humans, bred only the friendliest 10 percent, then repeated the process with that generation’s children. By the tenth generation, sure enough, one fox was born with floppy ears. Another had a star-shaped forehead mark. And they were, Trut wrote, “eager to establish human contact, whimpering to attract attention, and sniffing and licking experimenters like dogs.” Darwin had been right. Domestication was genetic. Subsequent generations of the foxes, as they grew friendlier still, had shorter legs and tails and snouts, smaller skulls, flatter faces, spottier fur coloring.

Trut studied the animals for half a century, finally discovering the secret to domestication: neural crest cells. Every animal starts life with a set. The cells migrate through the embryo as it grows, converting themselves into jawbones, cartilage, teeth, skin pigment, and parts of the nervous system. Their path ends just above the animal’s eyes. That’s why domesticated foxes had white forehead marks: the neural crest cells passed on to them by their friendlier parents never made it that far. This also explained the floppy ears, shorter tails, and smaller snouts.

Further, it unlocked a change in personality, because neural crest cells also become the glands that produce the hormones responsible for triggering fear and aggression. Wild foxes were fearful toward humans and

p95
aggressive with one another, traits that served them well in the wild. When Trut bred the friendliest foxes, she was unknowingly promoting animals with fewer neural crest cells, stunting their neurological development in a very specific and powerful way.

Of the many revelations to flow from Trut’s research, perhaps the greatest was resolving a long-standing mystery about humans. About 250,000 years ago, our brains, after growing larger for millions of years, started shrinking. Strangely, it occurred just as humans seemed to be getting smarter, judging by tools found with their remains. Humans simultaneously developed thinner arm and leg bones, flatter faces (no more caveman brow ridges), and smaller teeth, with male bodies more closely resembling those of females. With Trut’s findings, the reason was suddenly clear. These were the markers of a sudden drop in neural crest cells—of domestication.

But Trut’s foxes had been domesticated by an external force: her. What had intervened in the evolutionary trajectory of humans to suddenly favor docile individuals over aggressive ones? The English anthropologist Richard Wrangham developed an answer: language. For millions of years, our ancestors who would eventually become Homo sapiens formed small communities led by an alpha. The strongest, most aggressive male would dominate, passing on his genes at the expense of the weaker males.

All great apes despise bullies. Chimpanzees, for instance, show preferential treatment toward peers who are kind to them and disfavor those who are cruel. But they have no way of sharing that information with one another. Bullies never suffer from poor reputations because there is, without language, no such thing. That changed when our ancestors developed language sophisticated enough to discuss one another’s behavior. Aggression went from an asset—the means by which alpha males dominated their clan—to a liability that the wider group, tired of being lorded over, could band together to punish.

“Language-based conspiracy was the key, because it gave whispering beta males the power to join forces to kill alpha-male bullies,” Wrangham wrote in a pathbreaking 2019 book. Every time an ancient human clan tore down a despotic alpha, they were doing the same thing that Lyudmila Trut did to her foxes: selecting for docility. More cooperative males reproduced; the aggressive ones did not. We self-domesticated. See Chapters 2, 11-14

p96
But just as early humans were breeding one form of aggression out, they were selecting another in: the collective violence they’d used both to topple the alphas and to impose a new order in their place. Life became ruled by what the anthropologist Ernest Gellner called “tyranny of the cousins.” Tribes became leaderless, consensus-based societies, held together by fealty to a shared moral code, which the group’s adults (the “cousins”) enforced, at times violently. “To be a nonconformist, to offend community standards, or to gain a reputation for being mean became dangerous adventures,” Wrangham wrote. Upset the collective and you might be shunned or exiled—or wake up to a rock slamming into your forehead. Most hunter-gatherer societies live this way today, suggesting that the practice draws on something intrinsic to our species. See Chapters 11-14

The basis of this new order was moral outrage. It was how you alerted your community to misbehavior—how you rallied them, or were yourself rallied, to punish a transgression. And it was the threat that hung over your head from birth until death, keeping you in line. Moral outrage, when it gathers enough momentum, becomes what Wrangham calls “proactive” and “coalitional” aggression—colloquially known as a mob. When you see a mob, you are seeing the cousins’ tyranny, the mechanism of our self-domestication. This threat, often deadly, became an evolutionary pressure in its own right, leading us to develop ultrafine sensitivities to the group’s moral standards—and an instinct to go along. If you want to prove to the group that it can trust you to enforce its standards, pick up a rock and start throwing. Otherwise, you might be next.

In our very recent history, we decided that those impulses are more dangerous than beneficial. We replaced the tyranny of cousins with the rule of law (mostly), banned collective violence, and discouraged moblike behavior. But instincts cannot be entirely neutralized, only contained.

p97
Social networks, by tapping directly into our most visceral group emotions, bypass that containment wall—and, in the right circumstances, tear it down altogether, sending those primordial behaviors spilling back into society, as seen on January 6th, 2021. See Chapters 6+7, 11-15, 22-25, 28

When you see a post expressing moral outrage, 250,000 years of evolution kick in. It impels you to join in. It makes you forget your internal moral senses and defer to the group’s. And it makes inflicting harm on the target of the outrage feel necessary—even intensely pleasurable. Brain scans find that, when subjects harm someone they believe is a moral wrongdoer, their dopamine-reward centers activate. The platforms also remove many of the checks that normally restrain us from taking things too far. From behind a screen, far from our victims, there is no pang of guilt at seeing pain on the face of someone we’ve harmed. Nor is there shame at realizing that our anger has visibly crossed into cruelty. In the real world, if you scream expletives at someone for wearing a baseball cap in an expensive restaurant, you’ll be shunned yourself, punished for violating norms against excessive displays of anger and for disrupting your fellow restaurant-goers. Online, if others take note of your outburst at all, it will likely be to join in. See Chapters 6+7, 11-15, 22-25, 28

Social platforms are unnaturally rich with sources of moral outrage; there is always a tweet or news development to get angry about, along with plenty of users to highlight it to a potential audience of millions. It’s like standing in the center of the largest crowd ever assembled, knowing that, at any moment, it might transform into a mob. This creates powerful incentives for what the philosophers Justin Tosi and Brandon Warmke have termed “moral grandstanding”—showing off that you are more outraged, and therefore more moral, than everyone else. “In a quest to impress peers,” Tosi and Warmke write, “grandstanders trump up moral charges, pile on in cases of public shaming, announce that anyone who disagrees with them is obviously wrong, or exaggerate emotional displays.”

Off-line, moral grandstanders might heighten a particular group’s sensitivities a few degrees by pressuring peers to match them. Or they might

p98
simply annoy everyone. But on social networks, grandstanders are systematically rewarded and amplified. This can trigger “a moral arms race,” Tosi and Warmke cautioned, in which people “adopt extreme and implausible views, and refuse to listen to the other side.”

If this were just a few internet forums, the consequences might be some unpleasant arguments. But by the mid-2010s social networks had become the vector through which much of the world’s news was consumed and interpreted. This created a world, Tosi and Warmke warned in a follow-up study with the psychologist Joshua Grubbs, defined by “homogeneity, ingroup/outgroup biases, and a culture that encourages outrage.”

The result was a doom-loop of polarization and misinformation. When Congress passed a stimulus package in 2020, for example, the most-shared posts on Twitter reported that the bill siphoned $500 million meant for low-income Americans to Israel’s government and another $154 million for the National Art Gallery, that it funded a clandestine $33 million operation to overthrow Venezuela’s president, that it slashed unemployment benefits, and that $600 Covid-relief checks were really just loans that the IRS would take back on the following year’s taxes.

All were false; they were unneureal. But the platform’s extreme bias toward outrage meant that misinformation prevailed, which created demand for more outrage-affirming rumors and lies. Heartless Republicans wanted poor people to starve. Craven Democrats had sold out Americans to big business.

p103
Though the largest outrage-generating machine in history might appear governed by the collective will of its participants, it was in fact ruled by Silicon Valley, whose systems were designed not to promote social progress or to fairly distribute justice, but to maximize our time on site, to make money.

 

p112
YouTube was training users to spend their days absorbing content that ranged from intellectual junk food to outright poison—far from the journey of enlightenment and discovery that Guillaume Chaslot had felt the platform made possible. “It’s so important, I need to push the project,” he recalled of his thinking. “And then I got fired.”

YouTube maintains that Chaslot was let go, that October, for poor performance. Chaslot believes he was dismissed for blowing a whistle no one wanted to hear. It was, perhaps, a distinction with little difference; YouTube was reengineering itself around a single-minded pursuit that Chaslot wasn’t on board with. “These values of moderation, of kindness, anything that you can think of that are values on which our society is based, the engineers didn’t care about putting these values in the system,” he said. “They just cared about ad revenue. They were thinking that just by caring about one metric, which is watch time, then you’ll do good for everybody. But this is just false.”

p120
In the earlier A.I., an automated system had built the programs that picked videos. But, as with the spam-catching A.I.s, humans oversaw that system, intervening as it evolved to guide it and make changes. Now, deep learning was sophisticated enough to assume that oversight job, too. As a result, in most cases, “there’s going to be no humans actually making algorithmic tweaks, measuring those tweaks, and then implementing those tweaks,” the head of an agency that developed talent for YouTube wrote in an article deciphering the deep-learning paper. “So, when YouTube claims they can’t really say why the algorithm does what it does, they probably mean that very literally.”

p121
“We design a lot of algorithms so we can produce interesting content for you,” Zuckerberg said in an interview. “It analyzes all the information available to each user and it actually computes what’s going to be the most interesting piece of information.” An ex-Facebooker put it more bluntly: “It is designed to make you want to keep scrolling, keep looking, keep liking.” Another: “That’s the key. That’s the secret sauce. That’s how, that’s why we’re worth X billion dollars.”

In 2014, the same year that Wojcicki took over YouTube, Facebook’s algorithm replaced its preference for Upworthy-style clickbait with something even more magnetic: emotionally engaging interactions. Across the second half of that year, as the company gradually retooled its systems, the platform’s in-house researchers tracked 10 million users to understand the effects. They found that the changes artificially inflated the amount of pro-liberal content that liberal users saw and the amount of pro-conservative content that conservatives saw. Just as Pariser had warned. The result, even if nobody at Facebook had consciously intended as much, was algorithmically ingrained hyperpartisanship, polarization. This was more powerful than sorting people into the Facebook equivalent of a Fox News or MSNBC news feed, because while the relationship between a cable TV network and the viewer is one-way, the relationship between a Facebook algorithm and the user is bidirectional. Each trains the other. The process, as Facebook’s researchers put it, somewhat gingerly, in an implied warning that the company did not heed, was “associated with adopting more extreme attitudes over time and misperceiving facts about current events.”
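That bidirectional relationship lends itself to a tiny simulation. The sketch below is a toy model under stated assumptions (the learning rate, drift rate, and click probabilities are invented for illustration and are not Facebook's actual system): a ranker updates its estimate toward whatever the user clicks, while repeated exposure nudges the user's own appetite toward whatever the ranker serves, so the two quantities ratchet upward together.

```python
# Toy model of "each trains the other" -- hypothetical numbers, not any
# platform's real ranking system.
import random

random.seed(0)

user_pref = 0.2       # user's appetite for partisan content (0..1), assumed
model_estimate = 0.2  # the ranker's estimate of that appetite, assumed
LEARNING_RATE = 0.3   # how quickly the model chases observed clicks
DRIFT_RATE = 0.1      # how much exposure shifts the user's own taste

for step in range(10):
    # The ranker serves content slightly more partisan than its estimate,
    # because it predicts that content will earn more engagement.
    served = min(1.0, model_estimate + 0.1)

    # The user is more likely to click when the served content matches
    # (or modestly exceeds) their current taste.
    clicked = random.random() < 0.5 + 0.5 * min(served, user_pref + 0.2)

    if clicked:
        # The model trains on the user: its estimate moves toward what was clicked.
        model_estimate += LEARNING_RATE * (served - model_estimate)
        # The user is trained by the model: exposure drags their taste upward.
        user_pref += DRIFT_RATE * (served - user_pref)

    print(f"step {step}: served={served:.2f} "
          f"model={model_estimate:.2f} user={user_pref:.2f}")
```

Set DRIFT_RATE to zero and only the model converges on the user; leave it on and the user converges on the model too, which is the feedback loop the researchers' warning points at.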

But the Valley’s algorithmic ambitions only grew, to nothing less than mastery of the human mind.

p124
After he was fired, Guillaume Chaslot returned home to Paris. He spent a couple of years working on a French e-commerce site. Silicon Valley was a distant memory. Until, on a long bus ride in late 2015, his seatmate’s smartphone caught his attention. The man was watching YouTube, video after video, all discussing conspiracies. Chaslot’s first thought was an engineer’s: “His watch session is fantastic.” The video-recommending algorithm zagged between topics, keeping the experience fresh, while pulling the man deeper into the abyss. “That’s when I realized,” Chaslot said, “from a human point of view, this is actually a disaster. My algorithm that I’d helped build was pushing him toward these more and more hateful videos.”

Striking up a conversation, Chaslot asked him about the video then on his screen, which described a plot to exterminate billions of people. He hoped the man would laugh the video off, realizing it was absurd. Instead, he told Chaslot, “You have to look at this.” The media would never reveal such secrets, he explained, but the truth was right there on YouTube. You can’t believe everything on the internet, Chaslot told him. But he was too embarrassed to admit to the man that he’d worked at YouTube, which was how he knew its system pulled users down rabbit holes without regard for the truth. “He was telling me, ‘Oh, but there are so many videos, it has to be true,’” Chaslot said. “What convinced him was not the individual videos, it was the repetition. And the repetition came from the recommendation engine.”

YouTube was exploiting a cognitive loophole known as the illusory truth effect (the unneureal). We are, every hour of every day, bombarded with information. To cope, we take mental shortcuts to quickly decide what to accept or reject. One is familiarity: if a claim feels like something we’ve accepted as true before, it probably still is. It’s a gap in our mental defenses you could drive a truck through. Chaslot’s seatmate had been exposed to the same crazed conspiracies so many times that his mind likely mistook familiarity for the whiff of truth. As with everything else on social media, the effect is compounded by a false sense of social consensus, which triggers our conformity instincts.


Chapter 31, Artificial Intelligence and Polarization, continued....