Pando

“After the pandemic ends, there won’t be a return to normal,” says futurist P.W. Singer

P.W. Singer, author of Burn-In: A Novel of the Real Robotic Revolution, discusses how COVID-19 will impact the future of work, how business leaders are coping with it, and how fiction can shape technology policy.

By Evan Selinger, written on September 15, 2020

From The Interviews Desk

P.W. Singer is a strategist, author, and Senior Fellow at New America. The Wall Street Journal has described him as “the premier futurist in the national-security environment.”

I’m excited to talk with him about Burn-In: A Novel of the Real Robotic Revolution, his latest collaboration with August Cole, who used to be a defense industry reporter for the Wall Street Journal.

Burn-In is a fantastic, genre-defying work that paints a picture of an unspecified time in the near future. Singer and Cole bring us along for an action-packed ride that follows fictional protagonist Lara Keegan, an FBI Special Agent and U.S. Marine Corps veteran, as she teams up with TAMS (short for Tactical Autonomous Mobility System), a military-grade robot that’s being field-tested for a possible new role in law enforcement—a role that has the potential to disrupt traditional approaches for pursuing, apprehending, and interrogating suspects.

The human-machine pair is after a terrorist who will stop at nothing to complete a destructive mission, one that’s driven by personal and ideological motives and carried out by futuristic technological means. Early in the story, we’re shocked to learn he avoided detection while committing a murder because he disguised his face with an AI-designed mask manufactured on a 3-D printer.

To present credible scenarios involving cutting-edge hardware and software, Singer and Cole did meticulous research on emerging technologies. The book contains 27 pages of endnotes that detail the real-world research and events that underlie and constrain the world they have created. For example, the world is filled with self-driving vehicles and contains delivery drones modeled on an existing Amazon patent. And so, while the influence of writers like Isaac Asimov and William Gibson is palpable, Singer and Cole are nevertheless experimenting with a distinctive narrative. The text is neither fully fiction nor nonfiction but an informative and entertaining blend of both.

I’m going to push the conversation in the direction of technological trends, rather than put too much emphasis on plot points. This way, we can discuss poignant scenes, avoid spoilers, and consider how the book can be used as a springboard for discussing where society is heading, what dangers deserve special attention, and what can be done to prevent problems and foster resilience.  


The following is an edited version of a conversation between Evan Selinger, Professor of Philosophy at Rochester Institute of Technology, and P.W. Singer:

Evan: Before jumping to the future, let’s start off by talking about the present, a pandemic period marked by tragedy, uncertainty, and anxiety. Today, many technologies are being used as governance tools to assist with pressing objectives related to surveillance, compliance, and the distribution of goods and services. What trends are being accelerated now that play important roles in Burn-In? And do you find any of them surprising or particularly distressing? I’m asking because privacy and civil liberties advocates are emphasizing that decisions made to address immediate threats can have lasting effects. Simply put, there are slippery-slope dangers: privacy-protective norms can erode; personal information that’s collected, stored, and analyzed, as well as surveillance infrastructure, can be repurposed; and powerful private and public sector entities can gain even more influence and control.

Peter: That’s a fantastic question. The trends and technologies that Burn-In explores through this smash-up of novel and nonfiction were obviously all in place before the pandemic hit. But the pandemic has definitely accelerated them. In some areas, we’ve seen the timeline move forward past what experts projected, like in telemedicine, where we jumped in a matter of weeks to where the industry thought it would be ten years from now. In others, such as the scale of distance learning and remote work, it went well beyond what anyone expected.

The state of emergency is allowing technologies to proliferate in the physical realm. The case that hits home for me brings us back to the opening of Burn-In. Two characters are talking in front of a train station, and to show the scene is futuristic, a delivery robot goes by on the sidewalk. Yet this is real technology that came out of Estonia and was then tested in Mountain View. During the pandemic, the system was rapidly rolled out to deliver groceries in Washington, D.C., making that future-set scene the present. The same has happened with everything from policing drones to subway-cleaning robots to a broader AI-fueled tracking of individuals and society, surveilling everything from movement to body temperature in a way that even the Chinese government hadn’t dreamed of. Yet, with AI, the government can do more than track people. It can also predict and influence their behavior at scale.

Evan: In light of all this, is the desire to return to normal after the pandemic ends a misplaced fantasy?

Peter: Yes, we’re not going 100% back. Robots aren’t only being deployed to keep workers safe; they’re also being used to replace them. Business leaders are already thinking about the next pandemic. They don’t want to go through the same experience of shutdowns. And they realize one way to avoid it is to automate as much of the workforce as they can.

Much like the trends, the last few months have surfaced other issues that were already in play, from the privacy concerns you laid out to others like inequality, including the inequality caused by the ripple effects of all these displaced jobs.

Remember, the science fiction writer William Gibson wrote, “The future is already here—it’s just not very evenly distributed.” Today, not everyone has access to expensive technology. While some kids are able to do remote learning easily, others are relying on free WiFi from Taco Bell. That will all have lasting effects.

Evan: Great point. When people talk about how the use of technology can exacerbate deep problems in the future, their warnings are often characterized as dystopian. The problem with this label is it suggests things are okay now but eventually will become catastrophically bad. In reality, society is permeated by inequalities. The status quo contains some people who are living well and many who are living through hell. A period of utopia for some, the privileged, is a dystopia for others, the marginalized and systemically disadvantaged. Burn-In doesn’t shy away from spotlighting inequality and depicting technological advancements as linked to market dynamics and political forces that will elevate as well as disenfranchise. What are the biggest technological threats that can widen inequalities? And what can be done to counteract them?

Peter: Every book has underlying themes. Moby Dick is not actually just a story about a whale. So too, one of the underlying themes of Burn-In is that there’s an incredibly fine line between utopian and dystopian views of the future. What you experience depends on where you are in society. That’s why Burn-In doesn’t have to be interpreted merely as a text that makes predictions. It also can be understood as an act of prevention that spotlights problems that deserve to be addressed. Society can shape whether the dystopian or utopian outcome is more likely by focusing on issues like privacy, user control, and algorithmic bias.

Evan: Let’s try an imaginative exercise. Pretend that before Burn-In was published, an editor noticed that you and Cole don’t include civil society organizations and asked the two of you to write a few more pages that revolve around future technology: civil liberties attorneys at the American Civil Liberties Union writing a report about the world they live in—one that outlines how the world has changed between now and their time period and spells out an emancipatory agenda of what needs to be fought for and who needs to be challenged. What would the report look like? And if today’s ACLU staff could read these bonus pages, what would they conclude?

Peter: Perhaps the main thing that Burn-In visualizes, and that organizations like the ACLU would want to wrestle with, is the reality of a city that is always watching you. How will AI and robotics actually be used by everyone from the police to parents? That can be for the good. As we’re seeing play out right now, the new proliferation of sensors documents all kinds of rights violations that have a long history, with that information now going viral all the time. Yet there are also enormous concerns that go beyond even privacy in a city that uses technologies like facial recognition to track people at scale. Such tracking can be used to influence people, including what they purchase and what political opinions they hold.

Another point from Burn-In that organizations like the ACLU might want to think about is that the surveillance technology used against you will be determined by where you are geographically. The big questions are: Who controls the space, and what power do they have? The book follows a character as they go everywhere from a train station, to a Starbucks, to an office building, to a college campus. In each of these spaces, there’s a massive amount of information collection going on. But in the U.S., each space doesn’t have to function in the same way because there’s a cacophony of decentralized activity. For example, we’re starting to see some university campuses becoming no-go zones for face recognition, as opposed to urban and rural police forces that are deploying it, as well as social media companies and even Kentucky Fried Chicken.

I think the biggest challenge for civil society organizations is much the same as it is for corporations and the military: recognizing that even the issues that seem like science fiction in Burn-In are playing out right now and will be determined over the next generation. To give a contrast of how not to handle it, Steve Mnuchin, the Secretary of the Treasury, said issues of AI and automation are not on his “radar” because they’re not going to be an issue for “50-100 years.” Wrong!

Evan: Speaking of universities, I want to share one of my favorite moments in the book. Relative to advancing the plot, it’s a small scene. But since I’m a professor and a parent, it stood out. You and Cole describe elite universities as having specially designated tech-free areas called IQ zones. They’re safe spaces for students to freely converse about whatever is on their minds, including dissenting political leanings and subversive ethical ideals, without being monitored or nudged by machines.

In Re-Engineering Humanity, Brett Frischmann and I argue society needs to have ample spaces like this—spaces to exercise what we call “the right to be off,” where people can critically interrogate how they’re being programmed to behave and think. What were you trying to convey by situating IQ zones at prestigious and presumably expensive schools? And to get at a broader point, what will it take to create spaces for breathing room as the internet of things expands and surveillance, along with other mechanisms of social control, further intensifies in so-called smart homes, cars, and cities?

Peter: The scene is great for quickly painting a picture of the need for locales where you aren’t completely observed. Given the problems we’ve been discussing, the question is where they can be created. Since universities are artificial zones compared with many other places in society, they can be designed as escapist oases. The interesting thing in the book is that the zones aren’t designed for students to get away from police monitoring or anything like that. It’s to escape parental observation and criticism. It’s also important to note that when the main character walks by the zone an hour later, all of the students we saw gathering there have left. They’ve gone back to other parts of the world, where an internet of intelligent things will always be monitoring, predicting, and influencing. That’s just how it’s going to be, for both better and worse.

Evan: After reading Burn-In, I came away with a sense that one of your intellectual agendas is to persuade readers that they should be skeptical of associating any technology with either exclusively good or bad uses. The same technologies that good actors can use for socially beneficial purposes can be redirected by bad actors to create all kinds of harm, from personal harassment to existential destruction through cyberwarfare and cyberterrorism.

On the one hand, I think it’s an important message. The very technologies that industries and government agencies hype up by selectively and idealistically focusing on positive use cases are also the ones that bad actors will predictably try to weaponize. This is especially the case with democratized AI-infused products and services. As Burn-In suggests, the costs of acquiring and deploying them are dropping, and the idea that especially potent ones will remain isolated to sectors like the military is laughable.

On the other hand, I personally don’t want to characterize all technologies as fundamentally dual-use. That’s because, in my opinion, some technologies have strong affordances—affordances that are likely to lead society down tightly constrained paths when they manifest in specific environments. Take facial recognition, a technology that features prominently in our discussion and Burn-In. Woodrow Hartzog and I have been calling it a perfect tool of oppression and arguing it should be banned because we can’t see how regulation—even regulation informed by trustworthy AI principles—that permits some use under some circumstances will ever be enough to prevent routine abuse, especially by law enforcement. What’s your take on arguments like this—arguments that fit the abolitionist mold of what Frank Pasquale calls the “second wave of algorithmic accountability”?  

Peter: Maybe my normative position is that I’m a realist. I don’t think we’ll be able to ban new tech like face recognition. Once the technology has been invented, you can’t put it back in the box. It’s out there in the world. Given the low barriers to entry, even if you ban facial recognition technology, you’ll still have all kinds of bad actors using it anyway.

More importantly, I don’t see in our economy and politics any inclination or real ability to ban it. There’s simply too much utility in facial recognition technology, whether for public security, public health, or private profit. To think that we can ban it, that’s the actual dreamy science fiction. And how would you do it? Would you ironically have to use AI to hunt down people using facial recognition tech?

Instead, it is better to think about how to shape it and our use of it. Think of a triangle with security, privacy, and convenience/profit at its corners. Whether it’s a facial recognition app or one of my favorite technologies from the book, a drone crossed with a teddy bear that gives your kid an entirely new, awesome toy, every technology lies somewhere within that triangle. The question is which corner it is drawn towards and who decides. And if you aren’t deciding on the tradeoff of security/privacy/profit, someone else is deciding for you.

Evan: I’m really interested to see what impact Burn-In will have on conversations about technology and technology policy. In an age of information abundance, it’s hard to grab people’s attention, much less sustain it, especially when you’re asking them to look to the horizon. Since you and Cole previously collaborated on Ghost Fleet: A Novel of the Next World War, I thought it would be fitting to end by asking whether you remain confident about the value of blending fact and fiction and if you can contextualize your remarks by reflecting on the influence of the previous book.    

Peter: Absolutely. We ended up having more real-world impact with our fiction than with our traditional work. It was this smash-up of a novel and nonfiction that got us invitations to brief the White House and testify before Congress. The briefings and lessons sparked changes that range from three different government investigations to a redo of Army training courses. The Navy even named a $3.6 billion ship program “Ghost Fleet.”

All of this and more happened because we took the approach of using “useful fiction” instead of a traditional white paper or PowerPoint. The power of narrative is remarkable. And what has been exciting is that we’ve now started to help others do the same. A few weeks back, we set up a training course for the U.S. Air Force’s futures team to help them do forecasting more effectively and better communicate their insights into strategy. And, fittingly, it melded experts in fiction and nonfiction, sharing the insights of people ranging from venture capitalists to the creators behind The Walking Dead, The Hunger Games, and Game of Thrones.

The key takeaway for me is that a well-told story can be a useful story indeed. 
