Research: A Feminist Approach to Right to Privacy and Data Protection

By Mozilla | Dec. 8, 2020 | Fellowships & Awards

New research by Mozilla Fellow Chenai Chair explores how, without sufficient data protection safeguards in place, AI systems in South Africa can disproportionately harm women, girls, and gender non-conforming individuals. It also provides recommendations for gender-responsive policy approaches and increased civil society engagement.


Across the world, AI systems are becoming more and more embedded in our everyday lives: They power social media platforms, help governments carry out essential services, and are deeply entwined with economies.

But rather than acting as equalizers, these systems often perpetuate and amplify society’s most persistent inequalities, especially with regard to gender. AI systems can discriminate against, surveil, and be weaponized against women, girls, and gender non-conforming individuals, such as lesbian, gay, bisexual, transgender, and/or intersex people.

Today, new research by Mozilla Fellow Chenai Chair examines how these harms are playing out in South Africa, a country both accelerating AI adoption and struggling with gender inequality. The paper also examines feminist approaches to data protections to mitigate these harms.

Titled “My Data Rights – A Feminist Reading of the Right to Privacy and Data Protection in the age of AI,” the paper answers two chief questions: What would a gender-responsive data protection and privacy law entail to safeguard against gendered AI harms? And how can civil society help ensure gender-transformative law and practice, with a focus on the right to privacy and data protection?

The paper draws on in-depth qualitative interviews with digital rights, gender, and sexual justice activists; technical and policy analysts; and legal experts across South Africa. It also includes a quantitative survey of activists who work on gender and sexual justice issues, covering their privacy and data protection concerns and recommendations.

Says Chair, a South Africa- and Zimbabwe-based researcher who also serves as Research Manager at the World Wide Web Foundation: “The current state of inequality in society that affects women and the gender diverse communities also extends to the digital space. But the narrative of AI for development and economic growth at times overlooks this reality. By taking on a feminist conceptual framework, this research highlights the challenges specific to women and gender diverse people in society overall and how this is a continuum with digital technologies. The development of AI follows the trend of excluding gender diverse people in the development and implementation of these innovations.”

At the core of the paper are four key takeaways and four opportunities. The takeaways include:

  • AI adoption is accelerating in South Africa. AI-driven solutions are on the rise in South Africa, largely propelled by the private sector’s need for a competitive edge in markets and the public sector’s need for efficient developmental solutions. Meanwhile, policy conversations on governing AI have begun in the country, focused on ensuring economic gains, innovation, and trade compliance. And the presidentially appointed Fourth Industrial Revolution Commission has recommended the establishment of an artificial intelligence institute.
  • Privacy and data rights are especially important for the marginalized. The right to privacy is particularly important from a gender perspective, as access to digital platforms can be fraught for women and gender diverse people facing patriarchal dynamics online and offline. Issues emerging from a gender perspective include lack of agency and control over data, unequal power dynamics, loss of privacy, and discrimination and bias at the intersection of race, class, and gender.
  • The range of data harms is extensive. AI systems and the data that powers them can discriminate in a range of ways. These include facial recognition systems that misidentify non-normative bodies; deepfakes that harass and humiliate; targeted anti-LGBTQI+ ads; hiring algorithms that prioritize male candidates; and more.
  • Current regulatory safeguards are lacking. The current Protection of Personal Information Act isn’t suited to address the intersection of AI and gender issues. Because the law was drafted close to two decades ago, it has significant gaps around AI-specific provisions. The law needs to become contextually responsive, address its gender-exclusionary language (the text uses only “he/she”), and account for the gender and sexuality harms it currently fails to capture with nuance.

And the opportunities include:

  • Policy and regulation. New laws should provide for context-based implementation and be future-proofed against further AI innovations. Meanwhile, new oversight bodies should monitor and evaluate AI systems. And civil society and the legal community should pursue strategic litigation to build up case law around gender-specific harms.
  • Research and documentation. Research and documentation are necessary to fill the knowledge gap in understanding the context-based impact of AI-based innovations. Case studies documenting the impact of AI on marginalised groups should explore responsive means of safeguarding against harms and injustices. Research can also support the development of governance models and of AI registers that document the proliferation of AI, where it is used, and the impact it has.
  • Public awareness. Raising understanding of the opportunities and challenges AI poses for privacy and data protection requires collaborative, relatable, and innovative campaigns. Public awareness efforts should focus on campaigns for diverse marginalised groups; the creation of collaborative spaces that are safe for gender and sexual minorities to learn and raise their concerns; and resourcing from different stakeholders to ensure the support needed for public participation.
  • Responsibility of the technical community. The technical community has a responsibility to carry out public engagement and share information on how its systems work, to ensure accountability and trust in AI-based innovations. Civil society can be drawn into developing ethical guidelines that are cognisant of the experiences of injustice of marginalised groups. In the design and implementation of AI-based solutions, digital literacy, privacy by design, and context responsiveness should serve as underlying guiding principles.

For further research and analysis by Chenai Chair, visit https://mydatarights.africa/