Over the past several years, we’ve witnessed how AI and other advanced technologies can be used to inflame conflict, warp our perceptions of reality, and even undermine the truth itself. How do we better protect ourselves against the dangers of these technologies? Have we misunderstood the problem? And what role should tech play in finding new solutions? We unpack it all in the latest episode of Working Better, including a deep-dive conversation with Sam Woolley, author of The Reality Game: How the Next Wave of Technology Will Break the Truth.

Featuring:

  • Sam Woolley – Writer, Researcher, and Professor at University of Texas
  • Josh Levin – Co-Founder, Partner at Cascade Data Labs

Credits:

  • Produced, written, and edited by Maxx Parcell
  • Sound engineering by Chris Mitchell
  • Music by Luc Parcell
  • Additional editing support by Ashley Higuchi
  • Production support by Belen Battisti

Show Notes

(00:02) Have you ever heard of The Vegetable Lamb?

The Vegetable Lamb is extraordinary in that it belongs to a class nearly all its own: the ultra-rare half-plant / half-animal. To the naked eye, it looks like a small shrub you might find in gardens all over the world, marked by a single thick stem that shoots straight up like a sunflower. At the top of that stem are small encased bulbs… open one up and you’ll find… a small lamb, alive, fuzzy, and waiting to be plucked and raised for its wool.

Okay yes, it’s nonsense.

But while The Vegetable Lamb most definitely doesn’t exist, it was accepted and spread as truth for literally hundreds of years. Europeans in the Middle Ages knew that cotton was this thing that arrived from India, but weren’t sure exactly how it grew. Enter a fantastical story about lambs growing from the ground like cauliflower. The folklore first took hold because it satisfied a gap in understanding.

As travel writers brought stories from around the world, the tale of the Vegetable Lamb seemed to propagate all on its own. Even as scientific thinking exploded during the Renaissance and academics debunked the possibility of such a creature, the stories of the Vegetable Lamb pressed on, ultimately falling apart around the 1600s when attempts to prove the creature existed were revealed to be plants cut and bundled together in the shape of a lamb. Sort of the “three children stacked on each other’s shoulders under a trench coat” approach to deception.

So what can the Vegetable Lamb teach us today? False stories spread like a virus, particularly when they help us make sense of something we don’t understand.

Back in the 1400s, it was tall tales about sheep grown from the ground. Today it’s conspiracy theories and misinformation, now being amplified by modern technologies like AI bots, social media algorithms, and deepfakes.

Today’s question: Has technology broken the truth?

We’ll hear from Josh Levin from Cascade Data Labs and take a deep dive with Sam Woolley, the author of “The Reality Game: How the Next Wave of Technology Will Break the Truth.”

Now just like I say to my wife when I need to both reintroduce myself AND show her I fixed a clogged bathtub…

I’m Scott Hermes, this is Working Better.

(02:42) Do We Still Know What The Truth Is?

Okay, so in part one of “Data vs Goliath” we talked about the role of data in our lives at a human level: how we interact with new information, and how confirmation bias and cultural identity can steer us away from data we don’t agree with at work, in our relationships, and in our communities.

So if part one looked at how we’re inherently wired to put up a fight against the truth, part two will call into question: do we still know what the truth is? Playing the role of Goliath this time around: Technology.

This is clearly a politically charged topic at the moment. There are many lenses we can use to understand the post-truth era, the proliferation of fake news, and the day-to-day status of truth and democracy. Our intention is to help unpack the role of technology in both the problem and the potential solutions – which look very different depending on who you talk to.

We’re going to explain how social media algorithms and AI bots might not work the way you think they do, how emerging technologies pose an entirely new challenge, and the different schools of thought about how we should solve the problem.

We’ve created a reinforcing feedback loop between our media consumption and our political consumption. Things have grown to the point where it’s impossible to separate the two, and every news article seems to have some implicit political bent.

Josh Levin – Co-Founder, Partner at Cascade Data Labs

(03:52) Social Media Bubbles

“I’m going to go off script. I’m here because I’m very concerned” – Tristan Harris

That’s Tristan Harris (tr-STAHN like “naan”, not like “kristin”), speaking at a Congressional Hearing in January 2020. You might recognize Tristan’s voice if you’ve seen the hit Netflix documentary “The Social Dilemma.” The film explores the dangerous impact of social media, as told by former employees of Facebook, Instagram, Twitter, YouTube, and Google.

The argument goes like this: Social media companies are having a much more profound effect on society than their founders ever could have imagined, particularly when it comes to the rapid spread of hate speech, propaganda, and widely debunked conspiracy theories. While the fake news phenomenon has brought plenty of absurd stories worth laughing at, Tristan argues the underlying dangers are more severe than we might think.

“Flat Earth conspiracy theories were recommended hundreds of millions of times. This might sound funny and ‘look at those people,’ but actually this is very serious. I have a researcher friend who studied this – if the Flat Earth theory is true, it means that not just all of government is lying to you, but all of science is lying to you. So think about that for a second. That’s a meltdown of all of our rational epistemic understanding of the world.” – Tristan Harris

Okay so how does this happen, and how do companies like Facebook fit into it all? For starters, there are 2.7 billion people around the world on Facebook. Over a third of the human population. Because Facebook’s business model is centered around ad sales, the thing really being sold isn’t the platform itself, but the attention of those 2.7 billion people.

Advertisers will always pay a premium to be where the eyeballs are. That’s nothing new. What IS different is the level of sophistication with which these companies can do so. Because everything comes down to holding your attention, highly nuanced algorithms are designed to feed you the PERFECT thing to keep you clicking and scrolling.

“We’re always connected and these algorithms have hacked our brains to constantly want to engage with them.” – Josh Levin

That’s Josh Levin. Josh is a data scientist and co-founder of Cascade Data Labs in Portland, Oregon, which recently joined the Kin + Carta family.

“It’s such a subtle and unconscious addiction. I’ll find myself checking my phone – email, newsfeed, whatever – when I’m walking to the bathroom. Like I can’t have a moment of space for it. It’s completely mindless. And there’s all this research on the health benefits of mindfulness and we’re completely pushing our brains in the opposite direction.” – Josh Levin

Every photo, status update, friend suggestion, group recommendation, news article – it’s all driven by an evolving model of what you want to see next. “Machine learning” means the model learns and improves itself with every click. What do people tend to click on the most and respond most favorably to? As we learned from Part 1 – there’s nothing we gobble up faster than reinforcement that we’re the good guys.
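
To make that feedback loop a little more concrete, here’s a minimal sketch of what engagement-driven ranking can look like. It’s purely illustrative – the class, field names, and scoring rule are assumptions for this episode page, not any platform’s actual system.

```python
from collections import defaultdict

# Hypothetical engagement-driven ranker: affinity scores rise for the kinds
# of content a user has clicked on before, so the feed serves more of it.
class EngagementRanker:
    def __init__(self, learning_rate=0.1):
        self.learning_rate = learning_rate
        # per-user affinity for each content topic, learned from clicks
        self.affinity = defaultdict(lambda: defaultdict(float))

    def rank_feed(self, user_id, candidate_posts):
        """Order candidate posts by predicted engagement for this user."""
        scores = self.affinity[user_id]
        return sorted(candidate_posts,
                      key=lambda post: scores[post["topic"]],
                      reverse=True)

    def record_click(self, user_id, post):
        """Every click nudges the model toward more of the same topic."""
        self.affinity[user_id][post["topic"]] += self.learning_rate


ranker = EngagementRanker()
posts = [{"id": 1, "topic": "local_news"}, {"id": 2, "topic": "conspiracy"}]
ranker.record_click("alice", posts[1])      # one click on a conspiracy post...
print(ranker.rank_feed("alice", posts)[0])  # ...and that topic now ranks first
```

The point is the loop: every click shifts the scores, the reordered feed makes the next similar click more likely, and the cycle repeats.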

“Collectively, over time, we’ve created a reinforcing feedback loop between our media consumption and our political consumption. As our politics became more polarized, we became more easily sortable, and our media has followed suit. In turn, content creators have picked up on the interplay of these algorithms and our consumption, and things have grown even more extreme, to the point where it’s impossible to separate the two, and every news article seems to have some implicit political bent.” – Josh Levin

On either end of the political spectrum, if you’ve ever felt like the other side appears to be living in their own reality… in a sense, they are. When our feeds learn to present us with only one side of the equation, polarization becomes inevitable. A study from Upturn revealed it actually costs more for a politician to advertise across the aisle. It’s quite literally more expensive to try to change someone’s mind than it is to reinforce their beliefs.

And when an algorithm is learning that feeding you conspiracy theories will keep you there, guess what’s going to keep showing up on the menu. For breakfast, may I recommend a secret cabal of lizard people controlling the world. For lunch, we are featuring a platter of “JFK Jr. never died and is secretly Cher,” and for dinner, our house specialty: an ad for reptile deterrent spray, because it’s important to protect your family.

Propaganda and deliberate DIS-information work the same way. Content is shared and amplified by fake AI bot accounts, designed to stoke outrage and destabilize a functioning society.

So what do we do?

First, we cleanse the palate. It’s time for Cooler Terms with Pooler & Hermes.

(08:50) Cooler Terms

SCOTT:

Okay good I’m feeling hopeful again.

To summarize so far:

  • Social media companies want our attention
  • Algorithms hold our attention by predicting what we’ll click on, based on what we’ve clicked on in the past and what people like us have clicked on in the past
  • We all end up in our own echo chambers
  • As polarization gets worse and worse, we become susceptible to propaganda and influence campaigns

By the time you’ve learned the new TikTok dance, it’s already out of style.

So what do solutions look like?

(11:40) The Innovation-Led Argument

Some say new technology and media forms have always raised concerns about collateral damage. In 1897, a London writer lamented the invention of the telephone, saying “we shall soon be nothing but transparent heaps of jelly to each other.” My doctor says so far I’m only an opaque heap of jelly, so the joke’s on him.

Free market advocates say regulation stifles competition, and that people should consider using different social media or messaging services like Signal, Telegram or Mastodon.

Others, many on both sides of the aisle, argue that the government should stay out of it – that censorship of any kind, like Twitter banning Trump or Parler being removed from Amazon Web Services, is a slippery slope, and that it’s up to individuals to create change. Here’s a clip from a video from the Foundation for Economic Education:

“We should all have concerns about the amount of control a handful of coders and executives at Facebook, Twitter, and YouTube have over the ideas we’re allowed to talk about. But the way we stop them from dictating the limits of people’s speech isn’t to let the government control what we’re allowed to say or decide how the platforms are allowed to operate.”

People have gotten savvier to the fact that there are people out there trying to manipulate them online, and that these different types of technologies and political strategies are part of social media.

Sam Woolley – Writer, Researcher, and Professor at University of Texas

(12:54) The Pro-Regulation Argument

Reform advocates like Tristan Harris say of course individuals have power, but when we focus the problem on specific CONTENT, or frame it as a few executives in Silicon Valley who control everything… we’re still missing the bigger problem:

“We often frame this issue as a few bad apples – we’ve got these bad deepfakes, bad content, bad bots, dark patterns. What I want to argue is that we have dark infrastructure. This is now the infrastructure by which 2.7 billion people, bigger than the size of Christianity, make sense of the world. It’s the information environment. If private companies went along and built nuclear power plants all across the United States, and they started melting down, and they said, ‘Well, it’s your responsibility to have hazmat suits and build a radiation kit’ – that’s essentially what we’re experiencing now. The responsibility is being put on consumers, when in fact, if it’s the infrastructure, the responsibility should be put on the people building that infrastructure.”

Protection of children and teenagers has become a primary focus of those calling for regulation. Tristan points out that policies designed to safeguard the content kids are exposed to are not a new idea at all:

“We used to have Saturday morning cartoons. We protected children from certain kinds of advertising – time, place, and manner restrictions. When YouTube gobbles up that part of the attention economy, we lose all those protections. So why not bring back the protections of Saturday morning?” – Tristan Harris

There is a LOT of nuance to this debate, and a lot more to unpack in terms of how we solve the issue of technology and misinformation. To help us do exactly that, we’re thrilled to welcome Dr. Sam Woolley. Sam is a writer, researcher, and professor with a focus on emerging media technologies and propaganda. His work looks at how automation, algorithms, and AI are leveraged for both freedom and control. His recent book, The Reality Game: How the Next Wave of Technology Will Break the Truth, explores the future of digital disinformation across virtual reality, video, and other media tools, including a pragmatic roadmap for how society can respond.

(14:24) Interview With Sam Woolley

SCOTT:

Tell us a little bit about your background. What are you working on, and how did you get interested in it?

SAM:

I’m primarily a writer and researcher. I work at the University of Texas, where I oversee something called the Propaganda Research Lab within the Center for Media Engagement. My team is mostly focused on doing analysis of emerging media and trends like disinformation and manipulation of public opinion. So, we study all these sorts of emerging things on platforms, from Parler to Facebook to Twitter, and the ways in which these platforms are leveraged for various forms of manipulation or hate or harassment. Simultaneous to that, we also focus on solutions, so we look at the ways in which we can solve these problems.

SCOTT:

Do you feel like the truth is already broken in some ways, and if not, what will it mean to eventually break it?

SAM:

In some ways, the truth has always been a broken concept, right? Hannah Arendt wrote back in the ’60s about the connections between truth and politics and basically made the assertion, very correctly, that politicians have always had a very flexible relationship with the truth. And, in fact, that lying is oftentimes seen as a necessary part of politics – not just by demagogues or authoritarians, but also by statesmen and regular politicians in Congress in the United States.

That’s not to say, though, that facts are broken, because I think truth and facts are two different concepts that require a little bit of parsing. In today’s day and age, you have people like Kellyanne Conway sitting before the news media saying things like, “We’re presenting our own alternative facts,” in reference to Sean Spicer’s comments as press secretary. But alternative facts are not facts, right? Fact actually refers to science and to empirical knowledge – being able to verify, being part of the scientific process.

I would say that while the truth is under attack, and the truth is always and already broken in some ways, what we have to do is work our way back to a point where we have a shared reality.

10 years ago, a decade ago when I first started studying this stuff, no one really wanted to talk about the fact that social media and new media technologies were being used for manipulation and that this was causing a real challenge to the truth.

SCOTT:

What is computational propaganda?

SAM:

Computational propaganda is a term that Phil Howard, the director of the Oxford Internet Institute, and I coined together back when we were both at the University of Washington. We were thinking about the ways in which social media was being used during the Arab Spring and during the Occupy Wall Street protests, particularly looking at automated fake profiles – profiles on Twitter that had software behind them to automate their tasks and spread particular political perspectives.

Basically these bots, political bots, were being used to manipulate public opinion by amplifying particular content. The goals were either getting people to re-share that content, because it gave the content the illusion of popularity, or getting algorithms to pick it up.

We landed on this term, computational propaganda, because really what we saw was the ways in which propaganda, which is something that’s been around for a very long time, was being enhanced by things like automation and anonymity. It was bots but it was also the algorithmic processes that I was just talking about. You had social media companies out there saying things like, “We’re not the arbiters of the truth. We just present information,” and trying to dress themselves up as a modern-day AT&T or some such.

SCOTT:

How automated are the bots, actually? Or is it still a fairly manual process to run them?

SAM:

Social media bots, or social bots – and more specifically what my colleagues have called political bots – are forward-facing bots that have a presence on social media. It’s an automated piece of software that gets built to automate a social media profile, so it can automate the posts, it can automate likes, or in the case of Twitter, retweets. It can post automated comments on, say, a news site below the line. And so bots are fairly automated things. They usually get built through the application programming interface on a given site, the API.

There are also sock puppets, which you mentioned. Sock puppets are usually run by people. They’re accounts that have no clear identity on a site like Facebook, Twitter, YouTube, whatever, that are manually run.

Increasingly, we’re seeing a merger of these two things. There used to be more separation. Most of the political bots that were out there on Twitter back in 2013, or even 2016 during the US election, were very heavy-handed. They were used just to massively boost particular political candidates, or to spam comment sections with repetitive comments over and over and over again. That was all they needed to do, because propagandists and people attempting to spread disinformation are very pragmatic. They oftentimes use the cheapest and simplest tools they need to get the job done.

SCOTT:

In your book, you argue that fighting fake news AI bots with more AI is a mistake. Why do you think that?

SAM:

Every time I go to a conference there’s always someone at the conference that says, “Well, if Russia wants to use bots or AI, then we should use AI in the US,” or the UK, or wherever I am. So, there’s this very adversarial idea that we should fight fire with fire.

I disagree with this. I think that responding to bots with more bots, or responding to AI with more AI systems, oftentimes can lead to unexpected and problematic consequences. The number-one thing that happens when you fight bots with bots is a tremendous amount of noise. Bots already mess up our information ecosystem. They already have created a tremendous amount of noise on Twitter. And they’ve created a tremendous amount of distrust in social media, because people just don’t know what to believe. They don’t know whether or not random profiles that they’re interacting with might be aimed at manipulation. And that distrust is something that we don’t want to throw fuel on, right? To extend that fire metaphor, right?

SAM:

Let’s think about ways to build policy that helps to limit the effectiveness of political bots. Say for instance stopping them from posting every minute, which Facebook and Twitter have picked up in the last several years and other companies have as well. We’ve seen this be quite successful.

On the other hand, however, we also need to do a redesigning of some of the social media technology in a way that also doesn’t allow cyborg accounts, so combinations of bots and people, to function quite so well.
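
As a rough illustration of the rate-limit style of policy Sam describes, here’s a minimal sketch of a posting throttle that flags accounts publishing faster than once a minute. The thresholds and interface are assumptions for illustration, not how Facebook or Twitter actually implement it.

```python
import time
from collections import defaultdict, deque

# Hypothetical posting throttle: reject accounts that post faster than the
# allowed cadence (here, at most 1 post per 60-second sliding window).
class PostRateLimiter:
    def __init__(self, max_posts=1, window_seconds=60):
        self.max_posts = max_posts
        self.window_seconds = window_seconds
        self.history = defaultdict(deque)   # account_id -> recent post timestamps

    def allow_post(self, account_id, now=None):
        now = now if now is not None else time.time()
        timestamps = self.history[account_id]
        # Drop timestamps that have fallen outside the sliding window.
        while timestamps and now - timestamps[0] > self.window_seconds:
            timestamps.popleft()
        if len(timestamps) >= self.max_posts:
            return False                    # bot-like cadence: block or flag for review
        timestamps.append(now)
        return True


limiter = PostRateLimiter()
print(limiter.allow_post("acct_123", now=0.0))   # True  -- first post goes through
print(limiter.allow_post("acct_123", now=10.0))  # False -- second post within 60s is blocked
```

Real systems layer many more signals on top, but even a crude cadence check like this illustrates why the most heavy-handed, every-minute amplification bots became less effective.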

SCOTT:

So AI bots are not a good use of the technology. There are policies, obviously, if we can get these platform providers to agree to them. But is there any sort of technology that might help in this fight?

SAM:

I think, to add nuance to the point on bots and AI, there are beneficial ways we can use both of these things in concert to create smart bots that, say for instance, help to verify information, or help to educate, or help to connect particular communities that need to be connected – say, educational communities that otherwise wouldn’t be speaking. So maybe we can use AI bots as the stitches in the patchwork quilt to connect diverse communities in order to create more equality and better democracy.

As we build these AI systems we have to think very carefully about what we’re putting in, because what we put in matters on the backend.

SCOTT:

Do you feel like there’s a role the government has in enforcing some of these ideas or policies?

SAM:

I’m in favor of regulation. I think regulation has to happen. I think that the social media companies know, in the United States in particular, that regulation has to happen. The US Government has got to stop divesting from the regulation of the digital sphere just because they don’t understand it, because there’s lots of people that do understand it here in the United States and can help them to build sensible regulation. It’s just a question of getting organized and doing it.

If we’re going to regulate the space, and I think we are, we have to involve technologists, public interest technologists and researchers in this process, so that the policy that gets created is actually sensible and is able to be mapped onto the internet and digital ecosystems, rather than just trying to retrofit old policies that were created pre-internet.

SCOTT:

How do you recognize that there’s a bot in play or that someone’s attempting to perhaps influence you?

SAM:

People have gotten more savvy to the fact that there are people out there trying to manipulate them online, and that these different types of technologies and political strategies are part of social media. So, that awareness is good. I would say that on an individual level there’s lots of different tools that are out there.

On our website, you can find a lot of different tools for tracking whether or not a profile is a bot profile, or whether or not a story has been fact checked. It might be disinformation.
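
The specific tools vary, but many bot checkers lean on simple observable signals. Here’s a minimal sketch of the kind of heuristic scoring such a tool might use – the signals, weights, and example numbers are illustrative assumptions, not the logic of any particular checker.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    posts_per_day: float     # average posting frequency
    account_age_days: int    # how long the account has existed
    followers: int
    following: int
    has_default_avatar: bool

def bot_likelihood(profile: Profile) -> float:
    """Crude 0-1 score built from common bot signals (illustrative weights only)."""
    score = 0.0
    if profile.posts_per_day > 50:          # relentless, round-the-clock posting
        score += 0.35
    if profile.account_age_days < 30:       # brand-new account
        score += 0.2
    if profile.following > 0 and profile.followers / profile.following < 0.1:
        score += 0.25                       # follows many accounts, followed by few
    if profile.has_default_avatar:          # no photo, no effort at a persona
        score += 0.2
    return min(score, 1.0)

suspect = Profile(posts_per_day=120, account_age_days=7,
                  followers=15, following=900, has_default_avatar=True)
print(f"bot likelihood: {bot_likelihood(suspect):.2f}")  # high score -> worth a closer look
```

No single signal is conclusive; the point of tools like these is to combine several weak signals into a score that tells you when a closer look is warranted.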

There’s also organizations like the Poynter Institute, which generally is aimed at helping journalists to become better reporters, but that has also done a lot of work to put out information on fact checking and disinformation in the digital sphere.

We also need to think before we post something that’s very political or inflammatory.

SCOTT:

It would be a great design principle to build into a place where you don’t want disinformation to spread rapidly.

SAM:

We have to ask ourselves what does it look like to redesign social media and other communication tools in the digital realm with the fact in mind that people will discuss politics, that people will be discussing elections, voting, people will be discussing science, climate change, these sorts of things?

SCOTT:

Anything else you want to leave us with Sam?

SAM:

The last thing I would say is that we need to be thinking about what’s next for social media. The move to encryption and encrypted sites and private sites is worrying to me, because I think what it means is that we will become more ignorant of the fact that there still is a lot of disinformation and extremism flowing, and that might allow us to pretend like it doesn’t exist, but it still very much exists. So, pushing people to dark corners of the internet is not necessarily a good thing, and it’s not a solution.

As this continues to happen, and I think it will, we need to be thinking very carefully both as companies, as policymakers, as researchers about the ways in which we can maintain encrypted spaces, because they’re really important for all sorts of human rights-oriented communication, while also preventing a lot of the horrible misuse of these platforms that we see, that we know about, like terrorism but also disinformation, child pornography, these sorts of things that are really problematic.

I think we already have mechanisms for understanding how we can prevent the sharing of that kind of content, by allowing people to tag it and share it with the company and say, “Hey, this is bad.” But as companies, let’s not just move towards encryption and think that that’s the solution. It’s not.

We need to be thinking carefully as companies, as policymakers, as researchers about the ways in which we can maintain encrypted spaces, because they’re really important for all sorts of human rights-oriented communication.

Sam Woolley – Writer, Researcher, and Professor at University of Texas

Conclusion

Thanks again to Sam – the name of the book is The Reality Game: How the Next Wave of Technology Will Break the Truth. Go find it; I know we just scratched the surface of what Sam has to say.

Clearly, this is a complicated problem. Pulitzer Prize-winning biologist and entomologist E.O. Wilson sums it up well: “The real problem with humanity is that we have Paleolithic emotions, medieval institutions, and god-like technology.”

Said another way: We react like cave people, cling to outdated ways of functioning as a society, and inflame it all with extraordinarily potent technology that seems to have a mind all its own.

For anyone looking to change their relationship with social media, Sam outlined a lot of great advice in terms of what to do at the individual level. There’s a LOT of great thinking around this topic, so we wanted to include some other recommendations.

The Center for Humane Technology is an organization founded by Tristan Harris. They recommend some simple things you can do right now:

  • Turn off notifications
  • Remove apps you think are toxic
  • Seek out voices you don’t agree with
  • Limit outrage from your diet
  • Support local journalism

And finally – remember to focus on the positive. Like we’ve explored in these last couple episodes, we’re sometimes wired to be our own worst enemy, and that can include focusing on the negative. But we started this show to seek problems worth solving, and this most certainly qualifies. If you want some light reading in the meantime, I found this great 14th century book called “The Most Fearful and Extraordinary Secrets of Cotton as toldeth to the author verily by a member of yon most secret cabal who, forsooth, wisheth that THOU dost NOT knoweth of these secrets”

Credits

That is our show for this week. Thanks again to Sam Woolley and Josh Levin for joining us. I hope you enjoyed our first two-parter. Please join us for our next episode, which will be the last episode of our first season. I am really excited about it. We have had our Labs team working on solutions to the problems of social distancing that we covered waaay back in Episode 2. Our last episode will dive into how they approached the problems, the three different solutions they came up with, and how those solutions are working out.

Let us know how we are doing. If you liked the episode, please subscribe and rate us in your favorite podcast dispenser. If you hated this episode, why are you still listening to me talk?

If you haven’t already removed the following apps from your phone, please reach out to us on Facebook, Instagram, LinkedIn, or Twitter. If you have forsworn all technology and retreated to a state of nature in a local cave, you can always send us a note by mashing up some of those red berries down near the creek, being careful not to drink any of the berry juice, then use a stick or a porcupine quill (not currently attached to a live porcupine), to write us a note and set it adrift on the aforementioned creek. If the beavers don’t eat it, we will get your message in the spring.

See you next episode!