Next steps on trustworthy AI: transparency, bias and better data governance
Over the last few years, Mozilla has turned its attention to AI, asking: how can we make the data-driven technologies we all use every day more trustworthy? How can we make things like social networks, home assistants and search engines both more helpful and less harmful in the era ahead?
In 2021, we will take a next step with this work by digging deeper into three areas where we think we can make real progress: transparency, bias and better data governance. While these may feel like big, abstract concepts at first glance, all three are at the heart of problems we hear about every day in the news: problems that are top of mind not just in tech circles, but also amongst policymakers, business leaders and the public at large.
Think about this: we know that social networks are driving misinformation and political divisions around the world. And there is growing consensus that we urgently need to do something to fix this. Yet we can’t easily see inside — we can’t scrutinize — the AI that drives these platforms, making genuine fixes and real accountability impossible. Researchers, policymakers and developers need to be able to see how these systems work (transparency) if we’re going to tackle this issue.
Or, this: we know that AI-driven technology can discriminate, exclude or otherwise harm some people more than others. And, as automated systems become commonplace in everything from online advertising to financial services to policing, the impact of these systems becomes ever more real. We need to look at how systemic racism and the lack of diversity in the tech industry sit at the root of these problems (bias). Concretely, we also need to build tools to detect and mitigate bias — and to build for inclusivity — within the technologies that we use every day.
And, finally, this: we know the constant collection of data about what we do online makes (most of) us deeply uncomfortable. And we know that current data collection practices are at the heart of many of the problems we face with tech today, including misinformation and discrimination. Yet there are few examples of technology that works differently. We need to develop new methods that use AI and data in a way that respects us as people, and that gives us power over the data collected about us (better data governance) — and then use these new methods to create alternatives to the online products and services we all use today.
Late last year, we zeroed in on transparency, bias and data governance for the reasons suggested above — each of these areas is central to the biggest ‘technology meets society’ issues that we face today. There is growing consensus that we need to tackle these issues. Importantly, we believe that this widespread awareness creates a unique opportunity for us to act: to build products, write laws and develop norms that result in a very different digital world. Over the next few years, we have a chance to make real progress towards more trustworthy AI — and a better internet — overall.
This opportunity for action — the chance to make the internet different and better — has shaped how we think about the next steps in our work. Practically, the teams within Mozilla Foundation are organizing our 2021 work around objectives tied to these themes:
- Test AI transparency best practices to increase adoption by builders and policymakers.
- Accelerate the impact of people working to mitigate bias in AI.
- Accelerate equitable data governance alternatives as a way to advance trustworthy AI.
These teams are also focusing on collaborating with others across the internet health movement — and with people in other social movements — to make progress on these issues. We’ve set a specific 2021 objective to ‘partner with diverse movements at the intersection of their primary issues and trustworthy AI’.
We already have momentum — and work underway — on all of these topics, although more with some than others. We spent much of last year developing initiatives related to better data governance, including the Data Futures Lab, which announced its first round of grantee partners in December. And, also in 2020, we worked with citizens on projects like YouTube Regrets Reporter to show what social media transparency could look like in action. While our work is more nascent on the issue of bias, we are supporting the work of people like Mozilla Fellows Deborah Raji and Camille François, who are exploring concrete ways to tackle this challenge. We hope to learn from them as we shape our own thinking here.
Our high level plans for this work are outlined in our 2021 Objectives and Key Results, which you can find on the Mozilla wiki. We’ll post more detail on our plans — and calls for partnership — in the coming weeks, including overviews of our work and thinking on transparency, bias and better data governance. We’ll also post about efforts to expand partnerships we have with organizations in other movements.
As we look ahead, it’s important to remember: AI and data are the defining computing technologies of today, just as the web was the defining technology 20 years ago when Mozilla was founded. As with the web, the norms we set around both AI and data have the potential to delight us and unlock good, or to discriminate and divide. It’s still early days. We still have the chance to define where AI will take us, and to bend it towards helping rather than harming humanity. That’s an important place for all of us to be focusing our attention right now.
P.S. for more background on Mozilla's thinking about trustworthy AI, take a look at this blog post and associated discussion paper.