Driving Levers for Change
Artificial intelligence is changing how we work, learn, and socialize.
Because this technology can perpetuate individual and collective harms, consumer pressure is needed to push companies to change their practices. Over the past year, this has been an important focus of our movement-based approach to making AI more trustworthy. A key component of Mozilla’s approach to mitigating harms and imagining new pathways is supporting Mozilla Fellows, who are creating new ways to hold platforms to account on behalf of consumers and fuel the movement for internet health.
We’ve run campaigns highlighting goings-on at Facebook and Zoom. We launched the RegretsReporter extension to give users a way to take action when they are recommended harmful videos. We released a new and improved Privacy Not Included Buyer’s Guide. We supported artists spotlighting the dangers of deepfakes and raising awareness about filter bubbles and bots.
Mozilla Fellows have also been applying pressure, pushing for greater transparency, and pulling levers to hold platforms and corporations to account. Over the course of the last year, Fellows have convened stakeholders, sparked conversations, interrogated systems, and developed tools to demand more corporate accountability and transparency. This post highlights some of the work done by our most recent Fellows.
Harriet Kingaby | London, UK
Hosted at Consumers International, Harriet’s fellowship focused on a critical question: how can we integrate AI into advertising in a way that is net positive for consumers, society and the environment?
Harriet says this issue is important because “digital advertising is a booming industry: worth over $300 billion in 2019 alone. It’s also the primary business model sustaining the internet, humanity’s most important communications tool. But as AI-powered advertising grows more pervasive and sophisticated, it is doing so without guardrails. There are few rules to ensure it doesn’t surveil, misinform, or exclude consumers. If the industry doesn’t undergo major reform, these problems will only grow more pronounced.”
Harriet’s impact, in her own words
“My report, AI & Advertising: A Consumer Perspective, used the Consumers International Digital Trust Framework to identify seven major threats that AI-powered ads present to consumers, from discrimination to misinformation. Many of these harms will be fundamentally changed or exacerbated by the addition of machine learning or emotion recognition to ad creation and targeting, particularly in countries without data protection legislation. Some of the most disturbing findings include:
- Evidence of patents that would allow facial data to be captured and stored via home televisions and used to target ads
- Online scams optimised via machine learning or deepfake technology
- A total lack of consumer agency in the face of algorithmic decision-making, which will be exacerbated by machine learning
- The resurrection of debunked science, such as phrenology, via facial recognition startups
- A huge environmental toll, if AI is introduced to advertising without a plan to reduce its carbon footprint
This research also uncovered a multitude of initiatives, lawsuits, products and technologies aimed at fixing or combatting these harms, but a lack of proactive planning for an AI-enhanced advertising future. It calls for the creation of mediated, cross-disciplinary forums that place human rights on equal footing with commercial issues.”
Coverage of Harriet’s work
- Advertising is the canary in the coal mine
- Alternative business models for the web
- The ethics of using Facebook as a non-profit
- Combatting climate misinformation & advertising
- Ad spend, white supremacy and climate denial, Harriet’s blog
How you can take action with Harriet
If you work for an organisation with an advertising spend, get them to sign up to The Conscious Advertising Network. If you create AI for advertising or use AI in your advertising, and are concerned about the consequences, get in touch.
What Harriet is doing next
She will be working on climate denial and how it is funded and enabled by advertising. She is working with a team of researchers, brand safety experts, and social media specialists to create guidance on communicating about climate in a more inclusive way and fighting back against misinformation. If you'd like to know more, get in touch here.
Finally, what change has Harriet seen as a result of the fellowship?
From COVID misinformation, to Black Lives Matter, to the Stop Hate For Profit campaign, advertising as a funding model for the web has become a hot topic in 2020.
Emmi
Emmi argues this issue is important because “At this point I think everyone agrees that we are facing an infodemic that is cracking societies at the seams. We need more people working on more parts of the problem, and fast.”
Emmi’s impact, in their own words
“We've made a suite of open-source, easy-to-use tools (the Social Media Analysis Toolkit, or SMAT) to investigate and visualize things happening on social media; they have been used by journalists, researchers, and activists all over the world.” You can see more of Emmi’s work at RebelliousData.com.
Coverage of Emmi’s work
- USA Today: “When Trump started his speech before the Capitol riot, talk on Parler turned to civil war”
- Business Insider: “Amazon's decision to sever ties with Parler might not kill the controversial social media platform”
- The New York Times: “They Found a Way to Limit Big Tech’s Power: Using the Design of Bitcoin”
- MIT Technology Review: “A guide to being an ethical online investigator”
- Bellingcat: “Exposed Email Logs Show 8kun Owner in Contact With QAnon Influencers and Enthusiasts”
- Rebellious Data: “The Decentralized Web of Hate”
How you can take action with Emmi
“We are looking for core users, funding, and compute primarily but dev support is welcome as well.”
What Emmi is doing next
“After the fellowship I will continue developing SMAT and Rebellious Data to help us transcend the infodemic.”
Finally, what change has Emmi seen as a result of the fellowship?
“Through the course of my fellowship I have seen a lot more people become interested in this problem space, hacking the problem in different ways, as its impact becomes more and more obvious.”
Aurum Linh | NY, US
Aurum explains the problem they addressed in their fellowship while hosted by the Digital Freedom Fund: “Law has not been able to keep pace with the rapid growth of technology. Far too often, lawyers who are challenging algorithmic systems have difficulty equipping themselves with the technical expertise required to build effective cases.
The struggle to grasp how machine learning technology works results in ineffective action, which reflects not a lack of will, but a lack of knowledge-sharing between industries.” They assert this is important because “today, biased algorithms are being used to decide how long someone stays in jail; whether a parent is deemed fit to take care of their children; and whether someone is eligible for welfare. To challenge these systems effectively, those in law must break the black box of technology.”
Aurum worked closely with Fellow Jonathan McCully to develop Atlas Lab, which Aurum describes as a platform for lawyers to:
- Fill the knowledge gap around how an automated decision-making (ADM) system is built, beyond the collection of the dataset, using an approach that requires no code and limited math knowledge
- Stay updated on ongoing litigation cases globally
- Gain the technical understanding to craft more effective FOI requests when challenging automated decision-making systems
Coverage of Aurum’s work
- How Machines Make Decisions is a Human Rights Issue
- What Decolonising Digital Rights Looks Like
- A New Resource for AI, Litigation, and Human Rights
- Atlas Lab — Breaking the Black Box of Law and Tech
How you can take action with Aurum
They are currently seeking partners to continue this project. Please reach out to email@example.com. For lawyers: Atlas Lab submissions are open, and they’d like to feature your case study on any challenge to algorithmic decision-making.
What Aurum is doing next
The vision for Atlas Lab is to become the global platform for collaboration, strategy, and education at the intersection of law and technology. This year, Aurum is organizing an event series on algorithmic injustice. The series is focused on the voices of those whose lives have been affected by ADMs, and those who are working within the law to challenge its use. RSVP to receive updates as this event series develops throughout 2021!
Finally, what change has Aurum seen as a result of the fellowship?
Their platform now serves as a bridge between law and technology, with a core focus on strategic litigation against biased algorithmic decision-making systems around the globe.
Jonathan McCully | London, UK
Jonathan McCully is based in London, UK and took on a fellowship alongside his work as a Legal Adviser to the Digital Freedom Fund.
Jonathan explains that through his fellowship, “I wanted to explore ways of breaking down knowledge barriers between litigators and technologists: two disciplines that seem miles apart, in many ways, but that have a crucial role to play in vindicating our rights before the courts in cases involving ‘black box’ technologies.” This is important, he asserts, because “litigation is an important tool for vindicating our rights against powerful actors. Rights violations are increasingly obscured by the use of automated systems that need to be ‘unpacked’ prior to a legal challenge. Litigation like this is stronger when technologists and litigators work closely together and understand each other’s disciplines.”
Jonathan’s impact, in his own words
“This year, I have been speaking with a range of different litigators and technologists interested in the topic of safeguarding human rights against harmful artificial intelligence through litigation. By hosting a combination of workshops and one-on-one conversations, I was able to get a better sense of the knowledge gaps that exist around litigation and machine learning technologies, as well as the resources that would be useful for these two audiences. Since then, I have been working with Aurum Linh to build a tool that seeks to demystify litigation and machine learning tech, called Atlas Lab.”
Coverage of Jonathan’s work
- Explainer: What Is The Digital Welfare State?
- Taking Police Tech To Court
- A Project to Demystify Litigation and Artificial Intelligence
- Tackling AI in the Time of COVID
- UK Police Targeting Black People With Fingerprint Scanners, Digital Privacy News
- A New Resource for AI, Litigation, and Human Rights
- Atlas Lab — Breaking the Black Box of Law and Tech
How you can take action with Jonathan
He says, “I think this is a project that will never really be finished; there will always be knowledge and information to share across the disciplines of litigation and tech. I would love to continue working with others to facilitate this in some way. If this interests you too, get in touch!”
What Jonathan is doing next
He will continue his role as Legal Adviser to the Digital Freedom Fund, an organisation that supports strategic litigation on digital rights. They have some exciting plans for 2021, and hope to be able to take action in close collaboration with litigators from the digital rights field.
Finally, what change has Jonathan seen as a result of the fellowship?
He says, “the deployment of automated systems continues to grow but, thankfully, so does a community of human rights litigators and public interest technologists eager to safeguard our rights against these technologies. So watch this space!”