Chitra Ragavan

Courtney Bowman

Ep. 43 –– A physics major pursues a grounding in philosophy and finds his niche in Silicon Valley / Courtney Bowman, Director, Privacy and Civil Liberties Engineering Team, Palantir.

Courtney Bowman thought he was destined for a career in physics until he took a philosophy class. It triggered a deep skepticism of the ability of hard sciences to solve mankind’s biggest problems.

When Bowman told his academic mentor and family about his desire to pursue philosophy, they were alarmed and tried to dissuade him, fearing a dead end to his career.

But Bowman went with his gut and ignored conventional wisdom, and it paid off, strangely enough, in Silicon Valley, where Bowman found his niche in the one-of-a-kind Privacy and Civil Liberties Engineering team he leads at Palantir, the big-data analytics platform deployed by the U.S. government and other governments around the world to contain the spread of #coronavirus.

#COVID-19 raises unprecedented legal, ethical, moral, even existential questions around the use of mobility tracking, contact tracing, immunity passports, and other powerful big data tools.

“We’re talking about contact tracing applications that rely on mobile phones and specific applications on mobile phones. But not everyone carries a mobile phone. Not everyone has a mobile phone, or is technologically savvy in the use of their mobile phones. So then you raise all sorts of issues about the ‘digital divide.’ Does this mean that the people who maybe are most advantaged and most privileged, because they have access to technology, are going to get a disproportionate advantage in the use of that technology?” asks Bowman. “Meaning that some of the most vulnerable communities that are less technologically savvy are not receiving the public health benefits of something like contact tracing. And those are real concerns, particularly when you have disproportionate spread of a disease and disproportionate accessibility and availability of public health resources. So there are real, broader cultural, sociological, and environmental concerns that come into play when you’re talking about applying this type of technology to the real world.”

That’s where Bowman’s unique philosophical grounding and non-traditional perspectives come in handy as nations around the world ponder these weighty questions about how to put a lid on the pandemic.

Note: I was a senior advisor at Palantir Technologies from 2007-2015 and own equity in the startup.

Read the Transcript

Download the PDF

Chitra Ragavan:

Courtney Bowman thought he was destined for a career in physics until he took a philosophy class that triggered a deep skepticism of the hard sciences and the ability of science alone to address the issues raised by technological advances.

Chitra Ragavan:

Bowman decided to pursue his quest for a philosophical underpinning for his life and work, which unexpectedly gave him the tools to address some of the most challenging and salient technology questions of the day.

Chitra Ragavan:

Hello everyone. I’m Chitra Ragavan, and this is When It Mattered. This episode is brought to you by Goodstory, an advisory firm helping technology startups find their narrative. I’m joined now by Courtney Bowman, a former colleague and Director of Privacy and Civil Liberties engineering at Palantir Technologies. Bowman’s work addresses complex issues at the intersection of policy, law, technology, ethics, and social norms. Bowman is working closely with the U.S. government and governments around the world to address the issues around the collection and analysis of massive amounts of data from the COVID-19 pandemic. Courtney, welcome to the podcast.

Courtney Bowman:

Chitra, thank you so much. It’s an honor to be invited to your podcast and I really appreciate it.

Chitra Ragavan:

So what were you doing in life when you first began to understand the need for philosophy as an underpinning for your life and work?

Courtney Bowman:

Yeah, so it’s an interesting question that I’ve been reflecting on quite a bit lately. As now I’ve been sheltering in place where I grew up in New Mexico a little bit outside of Albuquerque. And one of the reflections and recollections I had was as a high school student and a college student where I had the opportunity and the privilege to be able to intern in a physics laboratory at Sandia National Labs. So in those early days, I was really quite passionate about physics and pursuing a career in physics. And was really trending in that direction when I went to university. My initial thinking was really directed at how can I learn from the hard sciences, physics in particular and what might be the course and trajectory for a career moving in that direction. But things changed as I started to take more courses and explore more of the humanities as an undergraduate at Stanford.

Chitra Ragavan:

So was there a particular issue or a problem that you were trying to solve via physics when you kind of had this existential change of thinking?

Courtney Bowman:

Yeah, so at the time I was taking kind of core coursework in physics as an undergraduate. And during my summers, I would come back to New Mexico and was doing internships at the labs, mostly focused on optical physics.

Courtney Bowman:

But when I started to take classes in philosophy, one of the things that I discovered is that the hard sciences prepare you for answering a certain set of questions and give you a certain set of tools for dealing with problems in the world. But they don’t always address the richness of the things that we encounter. And as I started to take more philosophy courses and became interested in what I guess would now be considered a more obscure aspect of 20th century philosophy, one of the realizations I had is that there’s a richness to understanding and exploring the world that you don’t necessarily get from the toolkit that’s provided by the hard sciences and physics in particular.

Chitra Ragavan:

So what did you do then? Did you break away from physics, and how were your teachers and mentors responding to your change of heart?

Courtney Bowman:

So I had a couple of kind of distinct conversations. I think this was one summer when I came back to New Mexico and was working at the National Labs for my internship. I remember speaking with my mentor and advisor and starting to articulate that I was being pulled in different directions, had begun taking the philosophy courses, and wasn’t as convinced as I had been in previous years that physics was the path forward for me.

Courtney Bowman:

And then I remember one very distinct conversation with my advisor where I started to articulate this alternative or branching path. And she reacted quite abruptly and was concerned, maybe even a little bit disappointed, and began to discourage me from getting distracted from physics and engineering. And in that moment, despite the guidance I was getting from her, and despite conversations I was having with my family at the time, who all saw philosophy as kind of a dead end or something that hermits or academics pursue, I had this inner gnawing, this intuition, that this was something really important that I needed to pursue. And I did. I kind of hedged my bets at the time and pursued both courses of study.

Courtney Bowman:

So I ended up getting degrees in both physics and philosophy. But philosophy from that point on became kind of the stronger passion for me and one of the driving thrusts for the things that I wanted to focus on, not just in my personal life, but see if I could figure out ways of applying that knowledge to the things that I was doing professionally.

Chitra Ragavan:

So what did you do next and how did you start to move your career forward with this dual interest?

Courtney Bowman:

So after I graduated, I ended up spending some time abroad, mostly in Germany. And at that point I was trending in the direction of philosophy graduate work. And one of the things that you do if you’re preparing for philosophy graduate studies is make sure that you have both the academic grounding and the language grounding in a core language that’s associated with philosophical study. So usually it’s German, French, Latin, or Greek. And my focus and interest being in the school of 20th century German philosophy dictated that German was the course of study that I needed to pursue. So I ended up living in Berlin for a while, taking language courses and getting more proficient in the German language. And that continued for a while as I was thinking about applications for graduate study.

Courtney Bowman:

But another interesting thing happened at that point. I started to become a little bit disenchanted with the prospect of becoming a ‘professional’ philosopher, or going into academic study, in part because of conversations that I had with many friends who were in that course of study, but also because I realized that the richness of description of the world that I was looking for, and that I had found insufficient in physics, also kind of nagged at me in the views that I was getting on the humanities side, in the philosophy course of study that I was pursuing.

Courtney Bowman:

So I was kind of at a bit of a crossroads. And ended up coming back to the states a bit shiftless at the time and on a lark interviewed and took a job in Silicon Valley and started working at Google. This was roughly back in 2004, 2005.

Chitra Ragavan:

And were you able to start bringing these things together, because you were now working on some of these technology issues and problems at Google?

Courtney Bowman:

A little bit. So at the time I was mostly in a position where, thankfully, I had some physics and mathematics background and was able to apply that to quantitative analytics. Which got me really involved in understanding some of the complex issues of data analysis, and I developed a depth of knowledge in that space in application to those areas of technology. But I was a little bit uninterested in the specific course of work that I was doing. So I was in that role for a few years, but by happenstance I became familiar several years later with this other company, Palantir Technologies, which is where I am now. There I was presented with an opportunity to work on something that more squarely seemed to at least point in the direction of bringing together these different threads of background and interests that had historically informed my view of the world.

Chitra Ragavan:

Now, you were finding through all of your exploration, particularly in the area of artificial intelligence, for instance, that your initial hypothesis, that science and physics and engineering can’t solve some of these hard issues people are trying to solve through artificial intelligence, was in fact true. What were you seeing? Where was that gap?

Courtney Bowman:

This is one of the interesting things where I started to realize that these rather arcane areas of study that I had focused on, the school of German philosophy from the early 20th century called phenomenology, attempted to articulate a methodology and approach to an alternative view of the world that isn’t focused or grounded in what we think of as the subject-object distinction, but looks at a more holistic picture of our experiences and tries to describe those things in terms of the embodied life that we live, the situational elements of what constitutes our experiences. All of these things that in the abstract academic sense seem a little bit removed from the world started to come into clear relief for me as I began to think about the promises and failures of certain types of technology, and specifically questions around things like artificial intelligence.

Courtney Bowman:

And the conviction in the space of artificial intelligence has historically been that you can come up with a theory of the mind and then build a technology that models that theory, and somehow you have intelligence.

Courtney Bowman:

And because of a lot of these musings, and readings, and discussions that I had had over the years in the course of my philosophical work, I began to question those premises and recognize that what we actually find from the philosophical inquiries and discussions is a clear sense of what the limits of technology are. And this was actually quite an insightful moment for me, to realize that a lot of what happens in Silicon Valley, a lot of what happens in the technology sector, is oftentimes subject to this sort of solutionist mindset: that technology can solve all of the world’s problems. But in fact, most of the world’s problems exist and inhere in spaces that are so complex that if you focus on just a technological approach to them, you’re losing out on the richness of the challenge, and oftentimes failing to address the underlying issues in a meaningful or complete way.

Courtney Bowman:

And that was something that has really followed me throughout all of the different components of the work that I’m currently engaged in. Not just in artificial intelligence, but more generally recognizing that the challenges we’re dealing with today, with respect to applying complex data science and information systems, are much deeper than the data or the data science themselves. For those problems and challenges to attach to the real world, they also have to address the complexities in the real world. And that means understanding the sociological, the cultural, the philosophical and ethical aspects. All those things have to be brought together into a richer approach to those types of problems.

Chitra Ragavan:

And the work that you’re doing at Palantir now, of course, has to do with data analytics. The data, what meaning you can derive from data. But also, because there’s so much data being generated and analyzed, it raises a lot of interesting, complex questions about how to protect people’s privacy and civil liberties. And you’re part of a pretty unique team at Palantir that’s trying to find some of the solutions to these pretty thorny questions. Talk about this team that you’re leading and some of the challenges that you’re facing in general, before we talk about the pandemic, which has brought everything into relief.

Courtney Bowman:

Yeah. So when I joined Palantir in 2010, I came on board with another colleague, and together we were asked to start to construct this idea of what we call privacy engineering, or the privacy and civil liberties engineering team. And the founding concept was a recognition of the complexity of applying information science and technologies to these extraordinarily nuanced, complex challenges in the world.

Courtney Bowman:

When we looked around to see what the existing models were for approaching normative questions, like what privacy obligations one has in applying technology to address things like counter-terrorism or economic supply chain questions, we found that oftentimes the toolkit available was limited to basic legal approaches.

Courtney Bowman:

So when we were founding this team, what we observed was that most of how Silicon Valley and other industries were tackling questions of privacy was along a purely legalistic dimension.

Courtney Bowman:

The challenge with that is that technology is constantly straining the boundaries of what exists in the law and what our normative conceptions are for dealing with those technologies. Another way of putting that is that the law is always lagging behind what technology is doing. And in order to start to get ahead of that problem, you have to constantly look around corners and try to anticipate how the norms of applied technology are going to play out. Think of the law as kind of the floor, and the ethical and normative considerations as the thing that you want to build against.

Courtney Bowman:

So what I started to work on with colleagues at Palantir was building out this concept of privacy engineering as a pursuit that’s multidisciplinary. It has to incorporate not just the pure engineering considerations, but also the ethics, the humanities, the legal considerations, and the cultural, sociological, and psychological dimensions of applied technology, and put those things in the context of the uses of the technology in the world. If you do that, you start to get to better solutions that are more closely grounded in the realities, as opposed to grafting technology onto a problem space where the technology may not actually make sense.

Chitra Ragavan:

Are there any examples, even very general ones, of how you can explain that complexity?

Courtney Bowman:

Yeah, there are many examples of where that plays out. A lot of what we’re doing is helping complex institutions, for example commercial entities like a bank or some other financial institution that have been built up over the years through mergers and acquisitions. What that means in practice is that the data foundations they’re drawing upon come from all different sorts of systems, and those systems don’t necessarily talk to each other clearly. But if you’re at a leadership level within an organization like that, within a financial institution, and you’re trying to make comprehensive decisions about where to direct the business, you need a broader view of the data that’s available. You need to understand the landscape of all of the contractual obligations you have with the consumers or businesses that you’re working with. And you need to put those considerations into a broader context of what’s happening in the economy throughout the world.

Courtney Bowman:

So to merge all that information together and to deal with information sources that are maybe highly regulated because you’re in the financial sector, and because that information is personally identifiable, you need to understand what the consumer expectations are for the use of that information, what the privacy obligations are, where the law and regulation is likely to trend. So that if you’re interested in building new financial products or applying new applications for data science, you can anticipate what the consumer requirements are going to be, not just under the law, but also in accordance with the expectations that people have when they sign up and form a relationship of trust with a powerful institution like that.

Chitra Ragavan:

Yeah. I think that issue of trust is so important. And even technology companies like Palantir, as you know, have become incredibly controversial, especially among some of the far-left advocacy groups, because of the power of these platforms. Whether it’s Palantir, or Facebook, or Google, there’s tremendous power there, and as you said, the law is always lagging behind. So how do you protect people’s privacy and civil liberties? Even the fact that Palantir has a group like this, dedicated to understanding these issues, raises suspicion and a lack of trust among people. How do technology companies deal with this? How can you cope with the growing complexity of technology and the growing distrust of the public?

Courtney Bowman:

It’s, I think, a great question, Chitra, because it’s one of the founding concepts of Palantir. For people who recognize the reference, Palantir comes from The Lord of the Rings, Tolkien’s fantasy novels. A palantír is a sort of seeing stone, and it’s analogous to a piece of technology. And this is by intent, right? Technology that is powerful, that allows you to see great distances or understand a lot of information, is not ethically neutral. It implies a sense of moral responsibility on the part of its users. And that was the origin of this naming convention for the company: that we as a company wanted to recognize that in building powerful technology, we also have responsibilities to help ensure that that technology is constructed and applied responsibly.

Courtney Bowman:

So to answer your question, I think that one of the ways you start to address these questions of trust is starting from the premise that the technology is not just a neutral application. It needs to attach to the complex situations that you’re trying to address in the world, inclusive of not just the mission objectives, but also the core concerns around privacy and civil liberties.

Courtney Bowman:

And one of the approaches that we’ve taken to addressing that is building out a set of critical institutional practices grounded in the work that I and my team do around privacy engineering. Not just in the sense of building new privacy-protective capabilities into our software platforms, but treating those capabilities as things that relate to organizational processes, that tie into business practices, and that build in a working fluency with new data protection regulations in all of the landscapes and sectors in which we operate.

Chitra Ragavan:

And now you have COVID-19, and it seems that everything that you have been working towards in terms of understanding through both a philosophical and engineering lens, has kind of brought you to this moment in the world where a pandemic has essentially upended all the norms and rules of all the games that we are engaged in. How do you sort of deal with some of these fundamental privacy and civil liberties challenges of tracking this pandemic? What are some of the broad issues that you’re confronting? And then we can sort of break it down further.

Courtney Bowman:

Yeah. So there’s a lot that’s happening right now. As a starting point, I would say yes, there are particularly exigent circumstances that are driving concerns, and there are very real and legitimate concerns that need to be addressed. But in many ways, those of us in the technology sector, and the world at large in many respects, are well equipped to answer the challenge. In part because we’ve dealt with similar exigent circumstances in the past, but also because we have strong playbooks for understanding why the needs of the moment should be treated appropriately. They shouldn’t be accepted as the absolute new norms. We should think about limits on the types of interventions that we’re going to apply now, and make sure that we’re building in checks and balances over time to protect against invasions of privacy, surveillance, or other concerns from the privacy, civil liberties, and civil society communities.

Courtney Bowman:

But to speak specifically about the types of challenges that are being raised right now, I should say that we as a company at Palantir are focused on the data integration side of the problem. So we work with public health and other institutions that have access to certain types of data, and we work with them to process that information in responsible ways. We’re not involved in the data collection side. But as we help support this work, in terms of integrating information sources from the various applications that the public health sector is involved in, we’re looking at questions around things like mobility tracking. The devices that we carry with us, the applications that we use on our phones, the services that we rely on, platforms like Google, Facebook, and Apple, are all constantly generating different forms of telemetry or geolocation data. That data can be used for varying purposes, like ad targeting, but it can also be applied in aggregate to understand how people are moving.

Courtney Bowman:

So if you, for example, as a public health agency, wanted to understand whether the stay-at-home order that you’ve established for your region is effective, and is resulting in people in aggregate staying at home more often or traveling less, you might be able to use that data to answer those sorts of questions. That’s one class of problems. But it also raises concerns from the privacy community, in that even though you’ve aggregated the data, there may be a prospect of re-identifying it if it hasn’t been aggregated and de-identified or anonymized in a responsible way. The concern would be that that data could then be repurposed at a later point to try to infer how people have moved, whether they’ve moved to sensitive locations, or whether they carried out activities that might be in contravention of the law or other norms. Those applications might be outside the scope of the original intent, but they raise real concerns.

Courtney Bowman:

Another area of interest from the privacy community, in the use of technology in this current crisis, is what are called immunity passports. This is a concept of, let’s assume that we have broad-scale testing to understand whether people are infected. Then you can take that information, whether you’ve been recently tested or, with the eventuality of a vaccine, whether you’ve been vaccinated, preserve that status on your phone, and use it as a passport for entry into your workplace, into stores, and, as other parts of the economy start to open up, into other private sector and public sector spaces.

Courtney Bowman:

So there are real concerns with that idea, because then you run into issues of, well, are people going to be incentivized to cheat the system? Are there other types of data being generated through the application that tracks your immunity passport that could be used to further identify where you’re going or what you’re doing? And that carries some disconcerting overtones for many people. It falls into that quasi-fascist “papers, please” paradigm that I think raises the hackles of some people.

Courtney Bowman:

And then there’s a third tranche of technology application that I think a lot of people are talking about and you see in the news: digital contact tracing, or exposure notification systems. This gets to the idea of using the mobile devices that many of us carry around all the time as a proxy for understanding whether, if we’ve been infected, we’ve had contact with other people. So your phone can create a low-energy signal using the same Bluetooth technology that you attach your earphones with, and that signal can tell other phones whether you may have been in contact with their owners. Depending on what the implementation regime is for those types of contact tracing applications, there’s a lot of information being generated that could be centralized or held in a decentralized mode, and that could raise concerns about the ability to track people’s movements and understand their interactions with other people. From the perspective of privacy or civil liberties advocates, that’s the worrisome prospect, especially if you’re not entirely convinced of the utility or efficacy of that type of application.

Chitra Ragavan:

So when it comes to contact tracing, for instance, this is something that we’ve always done from a public safety perspective. For instance, in E. coli outbreaks or other food safety outbreaks. But I guess in that case it’s been done in more of an analog way. With digital technology, it takes it to a whole different level, doesn’t it?

Courtney Bowman:

It does. It does. There’s the seeming promise here that suddenly you have this technology that seems to reflect or synthesize the activity of the human analog. Because the basic idea with traditional contact tracing is, let’s say you’ve contracted a disease like salmonella or E. coli through bad food that you’ve eaten. You want to be able to trace whether you, or your food supply chain, have had contact with other people. And the contact tracing process involves people asking questions about who you’ve seen, where you’ve been, the types of stores that you’ve been shopping at. There’s well-established precedent for applying that type of regime. And it relies on subject matter experts who can ask qualified questions and help determine what of the information you’re providing is signal and what’s noise.

Courtney Bowman:

Now, the appeal of digital contact tracing, in concept, is that you can do the same thing that humans have done, but maybe do it at scale. But there are also drawbacks, and the temptation may be tempered by other considerations.

Courtney Bowman:

So the drawbacks are that contact tracing, generally speaking, doesn’t work for a disease unless you can actually, broadly, identify who’s contracted the illness and who hasn’t. So it requires large-scale testing. But it’s also only a useful paradigm to apply in a situation where the cases of contracted illness have been reduced to a certain threshold. We’re currently in a phase in the U.S. where COVID is being spread through community transmission, which means it’s happening at a much larger scale, to the extent that it doesn’t make sense to try to trace individual points of contact, because it’s being spread across communities.

Courtney Bowman:

But as those numbers start to drop and assuming we get to a point where there’s more general testing that’s available, then you can start to think about well, what if we used our phones to help augment the ability to trace who we’ve had contact with?

Courtney Bowman:

And that is an appealing prospect because, for example, we don’t always remember who we’ve talked to or who we’ve interacted with in the course of the day. But there are also downsides that accrue to that type of application.

Chitra Ragavan:

What are some of those downsides?

Courtney Bowman:

Yeah, so some of the challenges there are, if you’re talking about using your cell phone and the Bluetooth signal that’s broadcast by your cell phone: if you’re walking through a parking lot and you happen to walk past a car with the door closed, the window closed, and you are wearing a mask and the other person is wearing a mask, that may trigger a signal of contact with the other person when in fact there’s virtually no prospect of having shared the same air and possibly transmitted COVID.

Courtney Bowman:

So there are all sorts of cases like that where you can imagine the risk of a false positive emerging. And that’s one of the virtues of traditional contact tracing, because a human tracer can provide that context. This goes back to one of the broader themes, a point of realization that I’ve had throughout my career, as someone who’s focused both on the hard sciences, technology, and engineering and on the broader implications of that technology: having the tools to put technology in the right context, where you understand the limits and constraints and don’t treat the technology as the pure solution. The implication here is that as you start to identify those false positive risks, and the risks of misuse or misaligned incentives for things like digital contact tracing, you need to create the supporting organizational structures to roll out those systems in an effective way, one that acknowledges the limitations and can get to better outcomes.
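
To make the false-positive problem concrete, here is a minimal sketch of the kind of filtering an exposure notification system might apply before flagging a contact, using signal duration and strength as rough proxies for a meaningful encounter. The thresholds, field names, and data shapes are illustrative assumptions, not any deployed system’s actual logic:

```python
from dataclasses import dataclass

@dataclass
class ContactEvent:
    other_id: str        # rotating pseudonymous identifier heard over Bluetooth
    duration_min: float  # how long the signal was continuously observed
    avg_rssi_dbm: float  # average received signal strength (closer = stronger)

# Illustrative thresholds only; real systems calibrate these empirically, and
# signal strength is a noisy proxy for distance (car doors, walls, pockets,
# and bodies all attenuate the radio signal).
MIN_DURATION_MIN = 15.0  # brief passes are unlikely exposures
MIN_RSSI_DBM = -65.0     # weak signals suggest the device was far away

def plausible_exposures(events: list) -> list:
    """Keep only contacts long and close enough to plausibly matter."""
    return [e for e in events
            if e.duration_min >= MIN_DURATION_MIN
            and e.avg_rssi_dbm >= MIN_RSSI_DBM]

events = [
    ContactEvent("id_a", duration_min=0.5, avg_rssi_dbm=-80.0),   # parking-lot pass
    ContactEvent("id_b", duration_min=30.0, avg_rssi_dbm=-55.0),  # shared office
]
print(plausible_exposures(events))  # only id_b survives the filter
```

Even with filtering like this, a car door or a wall is invisible to the radio signal, which is Bowman’s point: the technology needs surrounding human and organizational context to interpret what a “contact” really was.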

Courtney Bowman:

But there are also other concerns that are raised in contact tracing. So for example, we’re talking about contact tracing applications that rely on mobile phones and specific applications on mobile phones. But not everyone carries a mobile phone. Not everyone has a mobile phone, or is technologically savvy in the use of their mobile phones. So then you raise all sorts of issues about the ‘digital divide.’ Does this mean that the people who maybe are most advantaged and most privileged, because they have access to technology, are going to get a disproportionate advantage in the use of that technology? Meaning that some of the most vulnerable communities that are less technologically savvy are not receiving the public health benefits of something like contact tracing.

Courtney Bowman:

And those are real concerns, particularly when you have disproportionate spread of a disease and disproportionate accessibility and availability of public health resources. So there are real, broader cultural, sociological, and environmental concerns that come into play when you’re talking about applying this type of technology to the real world.

Chitra Ragavan:

And you can see some of those parallels in artificial intelligence and facial recognition technologies, for instance, where there have been studies showing an inability to recognize the faces of people from minority communities, and things like that. So it seems the problem lies not just in the ability of the underprivileged to have access to some of this technology, but also in having data collected that would benefit them in terms of treatments. You see that in clinical trials, which are disproportionately focused on privileged communities that have access to them. So it just seems like there are a lot of parallels here.

Courtney Bowman:

Yeah. The reality is that bias is everywhere in the world. And if you don’t take the time to understand the nature of that bias, and you’re using technologies like artificial intelligence and machine learning as blunt tools, or as cool, fascinating tools, for addressing problems, without understanding the limitations and drawbacks of those tools, you end up with incomplete solutions, or solutions that are worse than the underlying problems themselves. Because those solutions further instantiate and even reify the bias that already exists in the analog world.

Chitra Ragavan:

And in terms of mobility and contact tracing and testing, etc., where do the individual’s rights versus the societal good come in? How do you balance that need, and where are we in that process as we start to talk about and maybe implement some of these things?

Courtney Bowman:

It’s not an easy question to answer universally, in part because different societies have different cultural norms and expectations. There are, for example, more communitarian societies that tend to value the interests of the community above individual demands or individual rights. So we see different regimes being tried out in Southeast Asia, where there seems to be a higher acceptance of, or willingness for, the government imposing certain solutions. But in Western societies, in the U.S. and in many parts of Europe, we tend to see more of a focus on individual rights, and on the importance of creating consent or opt-in regimes for applying these types of technologies.

Courtney Bowman:

So what that means in practice is that if, for example, we decide that it’s appropriate and effective to use digital contact tracing, the way that it’s used is that people get to choose whether they’re going to download an application onto their phone and whether that application is going to be turned on all the time. They might also choose whether they’re uploading information about their health status, if they’ve had a recent test for COVID that has come up positive, to a centralized authority, a public health institution that can then make further determinations.

Courtney Bowman:

So a lot of the focus right now is around this idea of end user or citizen consent. But that’s also predicated on the ability of public health institutions to effectively convey why it’s important for people to use these applications, what the benefits are, and to convince them that they should opt in to those types of applications. Because the reality is, if you don’t have broad adoption of a tool like digital contact tracing, and by some estimates that means 60% or even more of a population not just adopting the contact tracing applications but actively and consistently using them, the technology is going to be ineffective, because you’re just not getting significant signal across the full population of who’s coming into contact with whom.
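
A back-of-the-envelope illustration of why that adoption threshold is so demanding: if uptake is roughly independent across people, a contact is only recorded when both parties run the app, so observed contacts scale with the square of the adoption rate. This is a simplification that ignores correlated adoption within households and communities:

```python
# A contact is captured only if BOTH people run (and actively use) the app.
# Under rough independence, that probability is the adoption rate squared.
for adoption in (0.2, 0.4, 0.6, 0.8):
    print(f"adoption {adoption:.0%} -> contacts observed ~{adoption**2:.0%}")

# adoption 20% -> contacts observed ~4%
# adoption 40% -> contacts observed ~16%
# adoption 60% -> contacts observed ~36%
# adoption 80% -> contacts observed ~64%
```

Even at the often-cited 60% adoption figure, only about a third of contacts would be visible to the system, which is why consistent, active use matters as much as downloads.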

Courtney Bowman:

So there are real challenges and trade-offs that need to be weighed. If you’re going to choose a consent-oriented regime over a mandatory regime, you have to have compelling reasons to get people to adopt that practice. Otherwise it’s not going to work.

Chitra Ragavan:

So Courtney, how much of this is in the realm of the theoretical and how much of it is actually being implemented in the U.S. and then around the world?

Courtney Bowman:

So with respect to contact tracing, there are lots of examples that we can point to already where different governments have started to implement these programs. There are versions of this in China, in South Korea, in Israel, and in a number of other jurisdictions. And I think in many environments, in governments and even in the private sector, there are a lot of active efforts to explore the appropriate applications. And in civil society, in technology communities, in academic circles, there’s been a lot of work to scope out different forms of these types of technologies. So for example, many of us have read about Apple and Google working on a joint protocol. That would be a layer in the operating system, underneath the applications for contact tracing, that creates kind of a common standard for what they call a decentralized approach to contact tracing, where the critical information about who you’ve come into contact with is de-identified and preserved on your local device, and only limited information is shared in a centralized place. So there’s a lot of active research and development and work going into this in different locations, including the U.S.
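
A heavily simplified sketch of the decentralized derive-and-match shape Bowman describes. The real Apple/Google Exposure Notification protocol specifies its own key schedule and ciphers; this HMAC-based stand-in is only meant to show that matching can happen on the device, with nothing uploaded but a diagnosed person’s daily keys:

```python
import hmac, hashlib

def rolling_ids(daily_key: bytes, intervals: int = 96) -> set:
    """Derive the rotating identifiers a phone would broadcast over one day.
    Simplified: the real protocol specifies its own key derivation and
    ciphers; this HMAC stand-in only shows the derive-and-match shape."""
    return {hmac.new(daily_key, i.to_bytes(2, "big"), hashlib.sha256).digest()[:16]
            for i in range(intervals)}

# Each phone keeps, locally, the identifiers it has heard nearby.
heard_nearby = set()

alice_key = b"alice-day-42-secret-key-material"
heard_nearby |= set(list(rolling_ids(alice_key))[:3])  # we passed Alice briefly

# Alice tests positive and consents to publishing only her daily keys.
published_keys = [alice_key]

# Matching happens on-device: re-derive identifiers from the published keys
# and intersect with what this phone heard. No central contact graph exists.
exposed = any(rolling_ids(k) & heard_nearby for k in published_keys)
print("possible exposure:", exposed)  # True
```

The design choice to notice here is that no central party ever sees who met whom; each phone answers the “was I exposed?” question for itself.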

Courtney Bowman:

So that’s one example. As for some of the other technologies that I talked about, like the immunity passport concept: that’s being considered and deployed in some places, China is one example, to varying degrees of success, and it’s raising a lot of concerns in the community around both how effective it is and whether it’s an appropriate thing to do from a privacy and individual rights perspective.

Courtney Bowman:

Mobility tracking, I would say, is probably one of the more broadly used tools. And one of the reasons it may be more broadly adopted is that there are reasonable ways of addressing the privacy interests that accrue to that type of technology. Because in order for that mobility tracking data to be useful, it really is only a question of understanding aggregate movements. So if you can take data from individual devices and come up with reasonably strong assurances that that data is being aggregated in ways that de-identify it and reduce the risk of it being re-identified, and by that I mean reducing the risk of taking that aggregate-level data and identifying an individual phone or an individual person from the information, then you may be getting a really strong signal for public health officials about whether stay-at-home orders or shelter-in-place orders or other public health messages are being disseminated and taken seriously by the population. So from the perspective of understanding how effective you’re being with public health initiatives, that’s a tool that I think has much broader use than some of the other technologies that I talked about.
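
As a sketch of what “aggregated in ways that de-identify it” can mean in practice, here is a minimal example that reports regional stay-at-home rates only for regions covered by at least K distinct devices, suppressing small cells that would be easy to re-identify. The data, column names, and threshold are hypothetical:

```python
import pandas as pd

# Hypothetical per-device mobility records; columns are illustrative.
pings = pd.DataFrame({
    "device_id":   ["d1", "d2", "d3", "d4", "d5", "d6", "d1"],
    "region":      ["A",  "A",  "A",  "A",  "A",  "B",  "B"],
    "stayed_home": [1,    0,    1,    1,    1,    0,    1],
})

K = 5  # suppress any aggregate covering fewer than K distinct devices

agg = (pings.groupby("region")
            .agg(devices=("device_id", "nunique"),
                 stay_home_rate=("stayed_home", "mean")))

# Drop small cells instead of publishing them: a region with only two
# devices is trivially re-identifiable, so it never leaves this step.
safe = agg[agg["devices"] >= K].drop(columns="devices")
print(safe)  # only region A (5 distinct devices) is reported
```

Real de-identification regimes go further (noise addition, coarser geography, differential privacy), but the minimum-count suppression above is the basic shape of the safeguard.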

Chitra Ragavan:

Governments can say to their people that we’re doing all of this, gathering all of these massive amounts of data, for the public good. But given politics, and all of the competing interests and potentially competing agendas, and given how much even this pandemic has become politicized, even in the U.S. given the proximity of the presidential race, how do you actually protect people? We’re not just setting a precedent for this pandemic but for future ones, or any kind of crisis. And you’re collecting, not you, but governments are collecting and analyzing data, and potentially holding onto data that could be used for other purposes. Is that something we should worry about?

Courtney Bowman:

Absolutely. And I think this is one area where we have strong precedent for how to address this class of concerns. We have, for example, principle-based systems like the Fair Information Principles that give us a good sense, in general terms, of how we should be addressing these concerns. The Fair Information Principles go back to the late 1970s, and they are well enshrined in data protection law and regulation around the world, not just in the U.S.

Courtney Bowman:

But they speak to things like necessity, proportionality, and transparency. So necessity is the idea of starting with just the data that you need. And one of the ways that you might look at that is, if you’re trying to build out a technology-oriented response to COVID, what are the specific problems or challenges that you’re dealing with?

Courtney Bowman:

Starting there, you then use those problems or challenges to define the data landscape that you need in order to address them. You can think of that approach in opposition to the idea of, well, let’s go out and grab all the data that may be available, in an environment in which all of the applications and technology and gadgets that we have are generating a huge exhaust of data. The temptation is to pull all that data together into a massive data warehouse and start to mine it for insights. If instead you focus on the things that matter, the data sources that matter for a specific problem or question, then you’re thinking about the necessity component. And you’re also thinking about proportionality, in terms of the minimum amount of data needed to solve the problem. That helps minimize the risk of overexposure of data, and it minimizes the temptation to repurpose data for applications that go beyond the initial justification.
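
A minimal sketch of that necessity-and-proportionality idea expressed as code: define the question first, then admit only the fields the question requires, so extraneous identifiers never enter the pipeline. The purposes and field names are invented for illustration:

```python
# Start from the question, then admit only the fields needed to answer it,
# rather than warehousing the full data exhaust "just in case."
PURPOSES = {
    # hypothetical purpose -> minimum required fields; names are illustrative
    "measure_stay_home_compliance": {"region", "date", "stayed_home"},
    "plan_icu_capacity": {"region", "date", "icu_beds_free"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Project a record down to the fields justified by a stated purpose."""
    allowed = PURPOSES[purpose]
    return {k: v for k, v in record.items() if k in allowed}

raw = {"device_id": "d1", "region": "A", "date": "2020-05-01",
       "stayed_home": 1, "contacts": ["d7", "d9"]}
print(minimize(raw, "measure_stay_home_compliance"))
# {'region': 'A', 'date': '2020-05-01', 'stayed_home': 1}
# The device_id and contact list never enter this pipeline at all.
```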

Courtney Bowman:

But the other prong here, one other principle that I think is important, is transparency: explaining what it is you’re trying to do with the data that’s available, or, if you need to acquire new sources of data, how and why you’re going to acquire that data and what you’re going to do with it. Because I think if you can explain to the general population what you’re doing, why you’re doing it, and why it’s important to addressing the public health demands of the moment, you generally get much better adoption and a much better sense of comfort. It demystifies the technology. It makes the applications more accessible.

Courtney Bowman:

But one of the other lessons that we’ve learned from historical precedent, and we can think of 9/11 as one example here, is that we are in an emergency situation right now, and we want to be careful that that emergency situation is treated as such, which is to say that it’s not held as the new norm for all time going forward.

Courtney Bowman:

So if we think about the current circumstances as a time-bound situation, where the things that we do now, in terms of applications like contact tracing, only apply for the duration of the COVID public health crisis, and there are natural points to re-examine the systems, tune them down, turn them off, and delete the data, then you can also get closer to building the public trust that this is a specific application for a specific point in time, not the definition of a new normal for all time to come.
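
A minimal sketch of that time-bound idea: attach a retention window at ingestion and purge on schedule, so deletion is a default behavior of the system rather than an afterthought. The 30-day window is an arbitrary illustration, not a recommended or legally mandated period:

```python
from datetime import date, timedelta

# Hypothetical retention rule: every record collected for the COVID
# response carries an expiry from the moment it is ingested.
RETENTION = timedelta(days=30)  # illustrative window, not a legal standard

records = [
    {"id": 1, "ingested": date(2020, 4, 1)},
    {"id": 2, "ingested": date(2020, 5, 10)},
]

def purge(records, today):
    """Delete anything past its retention window; keep the rest."""
    kept = [r for r in records if today - r["ingested"] <= RETENTION]
    return kept, len(records) - len(kept)

kept, deleted = purge(records, today=date(2020, 5, 15))
print(f"kept {len(kept)} record(s), deleted {deleted}")  # kept 1, deleted 1
```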

Chitra Ragavan:

In terms of your own personal life, Courtney, have you had any of what I call viral insights about your life and work because of COVID, kind of that moment of clarity brought on by a crisis?

Courtney Bowman:

Yeah, I think every day is sort of a realization that the work that I’m doing and the history of experiences that I’ve had, and that I’ve been quite fortunate to be able to have, in terms of exploring all of these different academic pursuits and being able to merge them into a career that really attaches to meaningful challenges of the day, has informed the way that I look at the world. It’s informed my view of the role of technology, both its importance and its limitations. So when I think about basic choices that I make around the applications that I install on my phone, or even the choice to carry around my phone, or to use other gadgets that are a part of my life: What am I choosing? What are the implications? What does this mean in terms of the data that I’m generating about myself and the prospects of using that data? And as someone who is operating on the other side of the equation, working for a company that’s building technology, what are my responsibilities to the communities that are served by the customers my company works for? What are my obligations? How should I be thinking about those responsibilities? Those are things that constantly weigh on me and inform basically every choice that I make in my personal life and in my professional life.

Chitra Ragavan:

Looking back at that young intern at Sandia National Labs, telling his mentor that he really, really needed to study philosophy: what would you say to that young man about the journey you’ve been on and where you find yourself today, as a philosophy-driven privacy and civil liberties expert in the middle of a pandemic?

Courtney Bowman:

Yeah, I would say trust your intuitions. I had a sense then that I couldn’t articulate. But I knew it was important. I knew that there was a pull towards understanding a broader set of issues, and not just being fixated on this space of technology, and engineering, and hard sciences. That it was important for me to explore the humanities. It was important for me to branch out and see the world through a number of different lenses.

Courtney Bowman:

But I stuck to that intuition in a moment when people who were extraordinarily credible, whom I had a great deal of respect for, were telling me that there may not be a lot of professional applications for the things that I was thinking about in a kind of abstract academic sense. I’m really proud that I stuck with that intuition. And I think it turned out to be one of the most important choices that I’ve made in my life.

Chitra Ragavan:

Courtney, thank you so much for joining me today and for these deep insights. Really enjoyed having you on the podcast.

Courtney Bowman:

Chitra, thanks so much for having me. This is a lot of fun.

Chitra Ragavan:

Courtney Bowman is a former colleague and Director of Privacy and Civil Liberties engineering at Palantir Technologies. Bowman’s work addresses complex issues at the intersection of policy, law, technology, ethics, and social norms. Bowman also is co-author of the book The Architecture of Privacy, which provides a multidisciplinary framework for designing and building privacy protective information systems. He is currently deeply involved in understanding the privacy and civil liberties implications of data collection and analysis relating to the COVID-19 pandemic, working with the U.S. government and governments around the world. This is When It Mattered. I’m Chitra Ragavan.

Chitra Ragavan:

Thanks for listening to When It Mattered. Don’t forget to subscribe to the show on Apple Podcasts or your preferred podcast platform. And if you like the show, please rate it five stars, leave a review, and do recommend it to your friends, family, and colleagues.

Chitra Ragavan:

When It Mattered is a weekly leadership podcast produced by Goodstory, an advisory firm helping technology startups with strategy, brand positioning, and narrative. For questions, comments, and transcripts, please visit our website at goodstory.io or send us an email at podcast at goodstory.io. Our producer is Jeremy Corr, Founder and CEO of Executive Podcasting Solutions. Our theme song is composed by Jack Yagerline. Join us next week for another edition of When It Mattered. I’ll see you then.