
Technology can be used to increase social injustice: where do YOU stand?

Written by Tracey Gyateng
Published on July 3, 2020

Talk for Open Tech 2020

The growth of the digital, data and technology sector continues at pace, which makes it an exciting area to be involved in. I couldn’t resist the lure of data science, and for almost two years I was the data science manager at DataKind UK, a wonderful charity with a passionate community of volunteer data scientists who provide data science support and projects to the public and social sector. It is a pioneer in doing ‘data for good’; but as the data-for-good industry began to widen, with more and more tech companies creating AI-for-good initiatives, I became increasingly uncomfortable. Labelling work as data/tech/AI for good doesn’t necessarily mean you are doing any good, especially when applying these tools and techniques to the public/social sector*.

Fail Fast, Fail Forward? Not in the social sector

The mantra of ‘fail fast, fail forward’, along with inexplicable algorithmic black boxes, generally should not be deployed in the complex and complicated public/social sector, as it can exacerbate societal injustice: the barriers which prevent people from gaining equal access to wealth, health, wellbeing, justice and opportunity because of their race, ethnicity, gender, age, disability or religion. It is precisely because these inequalities exist that we need technologists to understand the context and environment in which technology will be deployed, and to work with the people who are most likely to be affected.

To quote Desmond Tutu: “If you are neutral in situations of injustice, you have chosen the side of the oppressor. If an elephant has its foot on the tail of a mouse, and you say that you are neutral, the mouse will not appreciate your neutrality.” To illustrate this point, at DataKind UK, Christine Henry, the head of the ethics committee, and Ruby Childs ran a session about ethics in AI, highlighting many of the recent cases where technology had caused, or was likely to cause, significant harm. These included the ProPublica investigation into an algorithm used in the US criminal justice system, which showed that black people were more likely to receive harsher penalties than white people. There’s also the case of the algorithm used in Amazon’s job recruitment that was biased against women. I discussed the session with a data scientist who had attended, and he admitted that he had no interest in ethics and didn’t accept that an algorithm used within the criminal justice system could be biased. He believed that police officers treat everyone fairly, and that you only enter the criminal justice system if you have done something wrong. I was slightly taken aback. It hadn’t occurred to me that people within my circle didn’t know that the criminal justice system can be biased against ethnic minorities, and not just in the USA.

The Good Immigrant

For those who don’t know me, I am black British with Caribbean heritage. My parents were immigrants, and I would describe their parenting as moulding me to be a ‘good immigrant’: to work hard and not draw attention to myself. If I encountered the police, I should always be calm and polite, no matter what happened. I knew why they said this to me: I grew up aware of the volatile history of police relations with black and brown communities. Focusing just on key policing events in my lifetime, we have the 1980s riots across the UK, in which largely black communities protested against the unequal and violent treatment they received from the police. In the 1990s, the Macpherson report found that the Metropolitan Police was institutionally racist, following the many failures experienced by the family of Stephen Lawrence, a black teenager who was murdered by racists in south London. And to this date, stop and search statistics continue to show the disproportionate number of stops and searches of ethnic minorities: figures for the 2018 financial year show black people are 9 times as likely to be stopped and searched as white people, and Asians 5 times as likely. Stop and search is a tool that disproportionately affects ethnic minority groups despite the lack of evidence that it is an effective crime deterrent.

By not understanding the history of policing of black and other ethnic minorities in the UK and US (a history which could be extended to other sectors: education, work, health), my ex-colleague had assumed that everyone was treated equally and so the outcomes of policing should be fair. At surface level, this colour blindness appears to be a good thing. Few would disagree that everybody should be treated equally. We must be able to openly say, “I don’t judge you because of your colour.” But we should also be able to say, “I see your colour, gender, age, disability and sexual preference, because they are a part of who you are.” To ignore them is to ignore a part of that person, a part which may dictate how they are treated in society and their sense of being included; a part which makes them who they are. Ignoring differences often leads to a default standard being set that reflects the majority, and in the tech industry that is the middle-class white male. This wouldn’t be such a problem if we were only building products and services for middle-class white males.

Using digital / data / technology to dismantle social injustice

So, as technologists or fans of technology working in the public/social sector, where do we go from here? The first step is to be humble: recognise that you don’t have all the answers. Be open-minded and genuinely listen to people from different disciplines, experiences and backgrounds. I found it irritating that, on several occasions while working at DataKind UK, men who barely knew me or the organisation would explain my job to me, critique it, and offer ideas for improvement. These ‘mansplainers’ didn’t reflect the vast majority of the DataKind UK community, but it happened frequently enough not to be a random one-off.

Let’s have some humility, let’s be open, and let’s try to tame the instinctive reaction to inflict hurt when someone highlights a problem. A good example of this is the work to expose problems of bias in facial recognition technology (FRT), largely started by three black women: Timnit Gebru, Joy Buolamwini and Deb Raji. Their pioneering work exposed the technology’s high error rates on dark-skinned women, and opened the box to further essential critiques, such as the issue of consent when using images of faces scraped from social media, and other underhanded data collection techniques such as collecting images from homeless people. Then there is the overall purpose of FRT, which to date has been concentrated in policing and security, generating an environment of constant surveillance and a lack of privacy. Although the recent spotlight on racial injustice and the Black Lives Matter movement has led to tech companies listening and acting on their research, either committing to stop developing FRT or placing a moratorium on the technology, these researchers are still met with criticism and abuse when they speak about racial injustice in the tech industry. It shouldn’t be this hard. Be open and listen when someone raises a problem.

Secondly, technologists need to have a clear understanding of the problem they are working on and the context it is situated in. Where is the data from? What is its social/political history? Who stands to win and lose? It’s important to do research. The recent Black Lives Matter movement has amplified a number of reading lists circulating to help people understand the wider societal implications of technology on black and other marginalised communities. To highlight a few (of many excellent reads): I found Race After Technology by Professor Ruha Benjamin very illuminating**; she shines a light on historic social injustices within technology and how these continue to be recycled within new technological innovations. On gender, I’d encourage you to read Data Feminism by Catherine D’Ignazio and Lauren F. Klein, which discusses power dynamics through an intersectional feminist lens. I haven’t done my homework to recommend works on age and disability, but that is on my list as I seek to improve my knowledge to fight social injustice.

Third, as part of understanding the wider context in which we build tech, we must include the voices of people who hold less power in society and who are most likely to be disproportionately affected. But how do you include a range of stakeholders in the problem definition and throughout technology product/service development and implementation? I’m encouraged to see more discussion about this taking place: for example, a recent paper from Google and DeepMind computer and data scientists explores how data scientists can involve communities at the problem definition stage. The Ada Lovelace Institute’s recent paper No Green Lights, No Red Lines uses the technique of public deliberation to meaningfully engage the public about public health monitoring technologies. We must include a broad spectrum of people in the discussions of technology development and use, but we must also value people’s time to contribute. Now is not the time to be extractive without financially valuing people’s time, nor to place the burden on communities to continually advocate for inclusion.

TL;DR: Neutrality where social injustices are present means a continuation, and sometimes an exacerbation, of social injustice. This might sound corny, but use your skills and power towards social, economic and political justice for all. Please listen, openly and honestly, without the need to act defensively the moment a problem is highlighted. Understand the social, historical and political environment in which you are working. This requires you to do research, and to actively include groups that hold less power in society. There is more to be said on working to dismantle social injustice (which I’m still learning about), but hopefully you will join me on this journey.

So I will leave you with a quote from Audre Lorde, a black American feminist, writer, librarian and civil rights activist, which I found on the Open Heroines website, a fantastic community which seeks to support the inclusion and empowerment of women and gender minorities in the civic tech / open data / open government field.

“My silences had not protected me. Your silence will not protect you. But for every real word spoken, for every attempt I had ever made to speak those truths for which I am still seeking, I had made contact with other women while we examined the words to fit a world in which we all believed, bridging our differences.”

  • * See this excellent post by Alix Dunn on the problem of using ‘#TechForGood’.
  • ** Race After Technology was discussed at DataKind UK’s ethics book club. See my write-up of the event here.