No, I can’t show you: why systemic bias is invisible in individual pieces
I write and talk about Post Normal design. At its core it is about destroying the anchors that tie us to what was previously viewed as Normal. That historic sense of normal was biased toward a very specific and very small group of people: white men. The anchors that hold us back are the datasets, the tools and the epistemologies created then to justify and maintain the superiority and priority of white men in the design and provision of products and services. We cannot build new futures and innovate for global diversity if the whole system is anchored to, and dragging us back toward, the past.
This problem is systemic. The biases are embedded into the structures and philosophies of research and design.
In particular, a lot of the biases are there due to the deliberate actions of researchers in the 19th century who wanted to prove white male superiority during the period of colonial expansion. German and British academics were heavily involved in developing the research methodologies, survey tools and statistical analyses that could prove why racism, ableism and sexism were natural and typical. The use of categorisation, averages and statistics was guided by the need to center white men as the model human and then to other and "edge case" everyone else. This is why human factors and ergonomic design still thinks of women as smaller men: maleness is default. This is why AI recognition systems still fail to register black people: whiteness is default.
The Normal is default. It is structural and systemic. Statistical research tools were developed in late 19th and early 20th century universities by men, such as Francis Galton, who believed they served a higher purpose.
Normal was never neutral and yet it is the root of our epistemologies.
This is the systemic bias.
Can you show me the specific problem tool?
I am writing this post because I was asked last night to name specific tools and practices that are biased.
And I can’t.
This is because the problem isn’t in any particular tool or method.
There is a particularly Western idea that more understanding comes thru Atomisation. That dividing problems into smaller and smaller chunks and naming each one of them will enable greater comprehension. This idea has become more powerful in a modern society that treasures the Individual (as user/consumer/economic unit).
If there is a problem then granularity (more detail, more parts, more names) is the pathway to a solution.
The problem is that Atomisation does not work when the problem is systemic. Breaking systemic bias into an analysis of individual tools will show nothing. The bias is not in that layer.
This is similar to the discussions about white male privilege. It is very easy to point at individual white men living in poverty (or black men with wealth) and say that clearly it is not true. Individualisation does not help. White male privilege is in how white men are not generally disadvantaged in life. They do not encounter the barriers that women and black people do. Opportunities are more open. That specific men fail is not proof that such general privilege does not exist.
In the same way, just because specific tools are not explicitly biased and can be used neutrally does not mean systemic bias does not exist. White maleness as the Normal, as the default, works as a tendency, not an overt action. It's in hundreds of years of going the wrong way for the majority of people.
I recommend reading The End of Average by Todd Rose as a better way of understanding some of the different perspectives of specific versus systemic.
Look at the world
So, to end, I am going to say that I cannot prove that systemic bias toward white men is demonstrable in any particular tool or practice.
Yet it is there and you need only look at the world to know it is there.
Silicon Valley’s BroTech (the endless array of products and services that seem only to serve the needs of young single white men who miss their mother’s housekeeping skills) is not accidental. It is systemic.
The continued exclusion of women, black people and disabled people (and all the intersections thereof) is not accidental. It is bias.
In theory terms, it is worth thinking about the Matrix of Domination. This is Patricia Hill Collins's theory about the intersection of identities and layers. Knowing what layer (institutional, communal and individual) you are working in and arguing about is helpful. Critiquing systemic bias (an institutional-layer problem) as untrue while using counter-examples from specific people or tools (individual-layer artefacts) is the problem I have been describing. It is a rhetorical trick to switch layers while arguing.
The only practical advice I can give is from AI research. I have been peripherally involved in the ethics of Machine Learning. I have attended events by both Google and Microsoft. The Google AI ethics event I attended provided about the same level of confidence and trust that you might feel when you discover that the driver of your child's school bus was someone dressed as Barney the Dinosaur wearing a blindfold. It was terrifying in its naivety about bias. Microsoft, however, offered tools. The most useful was Datasheets for Datasets. This is the idea of being open in how you work and being responsible thru explicitly naming the team and maintaining contactability over time. You may be unable to perceive the systemic bias in your tools and work here and now, but others can.
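As a rough illustration of that practice (not the canonical Datasheets for Datasets template, which is a set of prose questions, and with field names that are my own assumptions), the spirit of it can be sketched as structured metadata shipped alongside a dataset:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class Datasheet:
    """Minimal sketch of dataset documentation in the spirit of
    Datasheets for Datasets. Field names are illustrative, not
    the canonical questions from the original proposal."""
    name: str
    motivation: str          # why the dataset was created, and by whom
    composition: str         # what the instances represent; known gaps
    collection_process: str  # how and when the data was gathered
    known_biases: str        # populations over- or under-represented
    maintainers: list = field(default_factory=list)  # named, contactable people
    contact: str = ""        # so future users can raise issues you cannot see

# A hypothetical example record for an imaginary dataset.
sheet = Datasheet(
    name="example-faces-v1",
    motivation="Internal face-detection benchmark.",
    composition="10k images; skews toward light-skinned adults.",
    collection_process="Collected 2019; consent status recorded per image.",
    known_biases="Under-represents darker skin tones and older people.",
    maintainers=["A. Researcher"],
    contact="datasets@example.org",
)

# Publishing asdict(sheet) beside the data keeps the team named and
# contactable over time, which is the point of the practice.
print(asdict(sheet)["known_biases"])
```

The value is not in the data structure itself but in forcing the documenting team to name themselves and stay reachable, so that bias they cannot see today can be reported back to them later.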
We can create change and fight systemic bias thru shared community and critical communication over time. It is in that layer that we can recognise those historic tendencies and move away from them.