How I Got Here: Erica Finkle ’09

Erica Finkle ’09, the former Meta AI policy director, is now director of responsible AI at Microsoft AI.

What is responsible AI?

Responsible AI is the notion that ethical principles should guide the development and use of AI. It can encompass concepts like safety, privacy, and accessibility, which apply to any technology that is developed. Responsible AI can also encompass new areas that are specific to AI: the psychosocial risks of interacting with AI, how AI might affect people’s critical thinking, AI becoming autonomous or acting on its own, how AI might affect elections or democracy, or the ways people interact with the world around them through the AI they are using. It can also overlap with ensuring product quality and ensuring that the AI is appropriate for the ways people will use it.

What does your job entail?

Microsoft’s AI product is called Copilot, and I work primarily on consumer Copilots. My job is really twofold. I work hand in hand with product counsel and legal teams that are focused on compliance. They love someone in this role who has a legal background because we already understand the compliance risks but, more importantly, are often anticipating the next compliance challenge and trying to address it before things go wrong. I also work on internal voluntary governance, ensuring that new models and new AI-related features go through a process whereby all risks related to safety, privacy, accessibility, etc., are assessed and mitigated. I shepherd that review process. One cutting-edge area still taking shape is how to evaluate an AI for harmful manipulation of people.

Thinking about responsible AI more broadly and the role you play, does the pressure to do the right thing for humanity weigh on you?

In some respects, yes. I grapple with that sense of responsibility by seeking out a wide range of opinions and expertise, which are then brought to bear on many of the ethical challenges. I have no formal ethics training other than what I received through my law degree and my public policy degree, but I have worked with ethicists at Meta and Microsoft. From a public policy perspective, this might mean speaking to regulators, government officials, civil society groups, academics, and researchers. At Meta, we worked with Stanford’s Deliberative Democracy Lab, which helped us get users’ opinions on different aspects of AI.

What were some of the highlights of your time at Columbia Law School?

Being part of the Environmental Law Moot Court Competition and the Environmental Law Clinic with Professor Edward Lloyd. We worked with the Earth Institute on amicus briefs around climate change, and we had to describe the science in a way that would be meaningful in a court of law and for judges. That experience shaped what I do now, when I have to describe technology in ways that can be understood by regulators, government officials, or internal legislative teams.

After graduating from Columbia Law, you earned a master’s degree in public policy at the University of Oxford, then worked at Linklaters. How did you eventually get your first job at Facebook, and what kind of work did you do there?

I was a policy manager for the City and County of San Francisco, protecting the privacy of the folks who live in San Francisco while also making data available in areas like public health and epidemiology so it could be leveraged by researchers and the public. I had exposure to educational privacy, health privacy, and consumer privacy issues. But I was really interested in WhatsApp, which I started using when I lived in London, when no one in the U.S. was using it. Facebook, before it was called Meta, had recently acquired WhatsApp, so I took a role at Facebook and started working on Facebook Messenger and WhatsApp as a privacy program manager.

Over time, I worked on youth privacy, which intersected with my knowledge of educational privacy. The field was changing significantly and rapidly as we worked to understand youth protections, primarily for teen users aged 13-plus, and to develop the best set of mitigations, parental controls, and other ways to address youth issues.

What made you leave Meta and decide to take a role with Microsoft?

Throughout my time at Meta, I worked on early machine learning, then shifted to generative AI, and then moved to working on AI policy full time. The opportunity to build new models and new experiences at Microsoft as part of a young and growing team was what I was looking for.

What is your greatest fear about AI?

What keeps me up at night is the broad notion that society, in all aspects of the economy and every sector, wouldn’t be ready if an AI research lab were to rapidly develop AI with some form of generalized intelligence superior to humans’.

Do you think AI might improve the world?

I truly believe AI can advance some of the scientific and health purposes that the world and humanity need. I’m not saying AI is going to be the cure for cancer, but I do think there are small steps in the various medical and scientific advancement processes where AI can be appropriately and helpfully leveraged.

This interview has been edited and condensed.

The Brief on Erica Finkle ’09

Hometown: Voorheesville, New York
Current city: Half Moon Bay, California
Degrees: Cornell University, B.A. in government
Columbia Law School, J.D.
Amsterdam Law School, LL.M.
University of Oxford, M.P.P.
On the benefits of international study: “I earned my criminal law degree from the University of Amsterdam during my third year of law school. The University of Amsterdam students came to New York and took classes with us for the fall semester of my 3L year, and then we took classes in Amsterdam with them in the spring. We had a cohort focused on learning together and gaining comparative legal perspectives. That experience has been foundational for me in working at global companies, particularly in terms of understanding and interacting with folks abroad who have very different ways of thinking about what the law should be in a variety of areas.”
Hobbies: Hiking, gardening, creative writing
Time spent daily on social media: Zero. “Unless I’m really bored at an airport or use LinkedIn for work.”
Last question you asked AI: “Can you improve this prompt?”