Can We Get Smarter About Disinformation?

Over the past five or so years, “disinformation” has become a catchall explanation for what ails the country. Everything from QAnon conspiracies to your aunt’s vaccine-hesitant Facebook confessions to Twitter bots that try to stir up online fights and deepen the country’s ideological divides is gathered under that label.

Last year the writer Joe Bernstein published a long essay in Harper’s about something he called “Big Disinfo.” He defines Big Disinfo as a “new field of knowledge production that emerged during the Trump years at the juncture of media, academia and policy research” that marshals the resources of experts, pundits and the like to present solutions to the disinformation problem. These answers, which include the regulation of tech companies, ultrasensitive filters and tribunals that will oversee the division of “good information” and “bad information,” are oftentimes vague or even utopian. One problem is that nobody can really agree on what “disinformation” actually entails: Is it a coordinated campaign from a foreign government, or is it simply people who disagree with you and decide to spread their opinions in an online forum?

As an example of what Big Disinfo looks like, Bernstein mentions the Aspen Institute’s Commission on Information Disorder, a 15-person group that included Katie Couric and Prince Harry.

Bernstein goes on to argue that Big Disinfo, with its vaunted personalities and its talent for sounding a constant state of alarm in the prestige media, the academy and the Beltway, actually serves the best interests of tech giants. Big Disinfo relies on the assumption that online information — a political advertisement, a meme or an article from a dubious source — is so intoxicating and powerful that it can swing elections, spawn splinter groups like those that stormed the Capitol on Jan. 6 and turn normally rational people into brainless zombies who believe everything in their social media feeds. This supports the pitch that social media platforms like Facebook and Twitter have been making to advertisers since their inception: “Hey, people really pay attention to what they read on here.”

This alignment, of course, does not mean that disinformation isn’t a problem. Anti-vaccine information alone should be cause for serious concern. But like Bernstein, I am generally skeptical of many of Big Disinfo’s prescriptions because they place too much faith in both the ability and the willingness of big tech companies to solve their own problems.

I also believe that Big Disinfo has oversold the problem a bit, not so much in how harmful specific disinformation campaigns can be but in how much everything we see can change our opinions. Social media has certainly changed nearly every facet of our lives, but it’s difficult to see any streamlined narrative in the daily chaos of the information that’s presented to us every time we pick up our phones.

Instead, the explanation Bernstein presents in his essay seems far more convincing: We are captivated by our devices, but our fixation is mostly dull and dumb. The people who have an incentive to present the medium as alluring, powerful and ultimately behavior-changing are the social media companies that use that perception to sell advertisements. It may also be true that typical people see more disinformation than they did 30 years ago, but it’s difficult to quantify the difference between reading, say, fantastical tabloid headlines about U.F.O. sightings every time you walk down the grocery aisle and reading the falsehoods that come across our feeds.

As I type this, the Joe Rogan controversy is slogging on. For those who have somehow avoided the story or, perhaps, paid attention to more important matters, Rogan initially landed in hot water when he began having anti-vaccine guests on his wildly popular podcast. He is not vaccinated against Covid and has not pushed back much on his guests’ unscientific claims. As a result, a variety of artists, most notably Neil Young and Joni Mitchell, pulled their music from Spotify, which reportedly paid Rogan $100 million for exclusive rights to his show. This episode was followed by the resurfacing of dozens of instances over the past 12 years in which Rogan used a racial slur. He also told a disgusting anecdote in 2013 in which he compared a movie theater in a Black neighborhood to “Planet of the Apes.”

Rogan should be condemned for his racist comments. There’s no context that can explain away his casual use of slurs, nor is there any comedic explanation for the bigotry of his “Planet of the Apes” story. I certainly respect the right of musicians to pull their music from the platform, especially given the paltry sums it pays artists. Spotify customers, of course, are welcome to pull their subscriptions in protest.

But these concerns are largely tertiary to the disinformation question. Americans have become intensely focused on how tech companies handle voices we find distasteful, repugnant or dangerous. My sense is that this has happened because social media largely serves up a world of entertainment, news and sports. It also allows its users to believe that they are participating in activism by posting about everything from police brutality to the Oscars, especially when their sentiments are part of a groundswell of opinion. As a result, online outrage will almost always be about things that are consumed online, like Rogan’s podcast, actors and comedians who say something offensive and some supposedly salacious books that are being dubiously canceled by the online right.

The ecosystem is closed and, at this point, almost entirely self-referential. News media, entertainment and sports go in; outrage over news media, entertainment and sports comes out.

Within these parameters, does the fight over disinformation simply mark the limit of what we are willing to do in the name of change? Do we care deeply because we really believe that people are being led astray? Or are we just responding to what’s in front of us and admitting that while our political imaginations might be limited, we at least can clean up our timelines? Disinformation is certainly a real concern, but it also allows us to pretend that all the country’s problems can be solved by better algorithms and terms of service.

The effects of this myopia have bled out into other parts of our daily interactions. Big Disinfo now shapes how we think about our fellow citizens, especially those we think are in thrall to a magical Facebook post. After the 2020 election, the news was filled with stories about how minority communities, particularly Asian American and Latino ones, had been bombarded with foreign-language disinformation campaigns.

This particular disinformation panic coincided with a shift in both of those demographics to the Republican Party, one that has mostly continued over the past two years. The implication was that these voters had somehow been tricked by right-wing messaging into abandoning the Democratic Party or, at the very least, its ideals. A 2018 paper from the UCLA Civil Rights Project, for example, argued that Asian American voters who opposed affirmative action had fallen for misinformation. That idea implies that if we just shut down the sources of misinformation, everyone will suddenly line up to vote for progressive candidates. It is also a broken way to think about our neighbors and fellow citizens.

There may very well be some misinformation about race-based preferences in college admissions floating around somewhere on the internet, but it’s far more likely that Asian Americans, many of whom believe that elite colleges are discriminating against them, simply oppose racial preferences out of pure self-interest. In these instances, the charge of misinformation obscures more than it illuminates.

At the same time, it’s true that too many people believe dubious sources of information on the internet. In 2019 researchers at Stanford published a study about how well American high schoolers could discern online disinformation. Of the more than 3,000 students who were shown a “grainy video claiming to show ballot stuffing in the 2016 Democratic primaries,” 52 percent believed it showed “strong evidence” of voter fraud. (The video was shot in Russia.) The report also found that 96 percent of students “did not consider why ties between a climate change website and the fossil fuel industry might lessen that website’s credibility.” When asked how they evaluated the credibility of a site, they “focused on superficial markers of credibility: the site’s aesthetics, its top-level domain or how it portrayed itself on the About page.”

The study found that the same income-based gaps that exist in the rest of society also show up in how well students can detect disinformation. Wealthier, cosmopolitan students with highly educated parents were better at spotting disinformation than poorer students from rural areas. Black and Latino kids did worse than white kids. These splits mirror how Big Disinfo sometimes frames the problem it seeks to solve, especially when it comes to the Covid pandemic: Educated liberals in cities listen to science, while the rest of the country sees an Instagram post and decides never to get the vaccine. I do not think there’s any solution to disinformation that relies on the wealthy and educated hectoring everyone else to just wake up and get the vaccine or whatever, nor do I think there’s any way to meaningfully rein in disinformation online.

The path toward solving the disinformation problem runs through broadening access to education and fixing income inequality, not through persuading tech companies to remove a few notorious accounts. The focus should be not so much on how Big Tech acts but more on trying to create a resilient public that can spot truly harmful disinformation. This will require a relatively narrow but functional definition of what disinformation entails. One clear dichotomy: We should think about how to educate the public effectively about vaccines, but we should avoid the temptation to ascribe all political difference to the brainwashing of everyone who disagrees with us. The more disinformation is used as a bludgeon to beat down everything we don’t like, the less we will understand how it actually influences people’s thoughts.

In Thursday’s edition of the newsletter, I will look at what some countries in Europe and Asia are doing to prepare their populations for their online lives.

Have feedback? Send a note to [email protected].

Jay Caspian Kang (@jaycaspiankang), a writer for Opinion and The New York Times Magazine, is the author of “The Loneliest Americans.”
