The world is disclosed to each of us in our own particular way. As designers we want our designs to be functional and useful but, more importantly, we want them to be usable.
So my designs need to be usable by more people than "just me." Why? Because I respond to the world according to how it is disclosed to me, and I am a population of one.
What about the other billions of people in the world? Are they me?
I (we) can't simply assume that everyone is like me. A designed thing, a thing in the world, must therefore be inclusive, not biased (insofar as that is possible).
And so, in principle, I should ensure that everyone who wants to use my software can use it. Design thinking attempts to authentically shift the focus of design towards designing for inclusion. The challenge is no longer to deliver a bare minimum of functionality. The challenge is to design for someone, for any number of "someones". Not to design for everybody, because "everybody" ends up being nobody. Design for a somebody, a person, and many persons.
As a designer I need to be aware of my own biases (we all have them). Our (my) biases may be gendered, cultural, or age-related, and they skew our (my) understanding of the world. Consequently, it is almost inevitable that our (my) designs encode these assumptions and biases, our (my) own ways of perceiving the world. As an aside: unfortunately, AI and technology, applied without reflection, will almost certainly amplify these biases. A bias is a pre-judgement; you can consider it a kind of decision-making shortcut, a way to quickly resolve complex questions. Biases have utility, acting as shorthand for personal preference, and bias as preference mostly does the job, expressing our preferred choice or configuration. But biases frequently go awry.
Design bias is particularly insidious in software, especially in systems that use data to drive the performance of the system, which in turn generates more data for 'tweaking' or driving that performance: in short, algorithmic bias, as found in neural nets, AI, and machine learning.
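The feedback loop described above can be made concrete with a minimal sketch (the scenario, item names, and numbers are all hypothetical): a system ranks two equally good items, allocates exposure in proportion to accumulated clicks, and re-ranks on the new click data. One lucky early click compounds into all of the exposure.

```python
# Hypothetical sketch of a data-driven feedback loop: the system's output
# (exposure) generates the data (clicks) that drives its next output.
def run_feedback_loop(rounds=10, impressions=100):
    # Both items are genuinely equal: the same true click-through rate.
    true_ctr = {"item_x": 0.1, "item_y": 0.1}
    # item_x happens to receive a single early click; item_y receives none.
    clicks = {"item_x": 1, "item_y": 0}
    for _ in range(rounds):
        total = clicks["item_x"] + clicks["item_y"]
        for item in clicks:
            # Exposure share is proportional to accumulated clicks.
            share = clicks[item] / total
            # Deterministic stand-in for sampling: expected new clicks.
            clicks[item] += int(impressions * share * true_ctr[item])
    return clicks

result = run_feedback_loop()
# item_y never gets exposure, so it can never earn a click;
# item_x's one lucky click snowballs into all future exposure.
```

Despite identical underlying quality, the loop locks in the early accident: `item_y` stays at zero clicks forever while `item_x` grows, which is the "rich get richer" dynamic the text describes.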
The deep problem with design biases is that they produce systemic effects upon the entire population:
- Confirmation bias (interpreting information in a way that confirms my preconception)
- Interaction bias (homophily, preferring interaction with others exhibiting certain traits or behaviours)
- Automation bias (preferring information presented by a computer or other interface over information from people or from the situation itself)
- Association bias (attributing qualities to new situations based on limited previous experience)
- Dataset bias (algorithmic bias arising from a skewed, partial, or non-representative dataset)
And there are many, many other forms of human cognitive bias that spill over into the designs and things we make.
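Dataset bias, the last item in the list, can be illustrated with a minimal sketch (the task, group labels, and counts are all hypothetical): a trivial "model" that learns the most common outcome per group from a skewed sample will lean heavily on the over-represented group, including when it must fall back for groups it has never seen.

```python
from collections import Counter

# Hypothetical, skewed training data: each record is (group, outcome).
# Group "A" is heavily over-represented; group "B" barely appears.
training_data = (
    [("A", "approve")] * 80 + [("A", "reject")] * 10 +
    [("B", "approve")] * 4  + [("B", "reject")] * 6
)

def train_majority_model(records):
    """Learn the most common outcome per group -- a stand-in for any
    data-driven model that inherits the skew of its training set."""
    per_group = {}
    for group, outcome in records:
        per_group.setdefault(group, Counter())[outcome] += 1
    # Unseen groups fall back to the global majority, which the
    # over-represented group dominates.
    global_majority = Counter(o for _, o in records).most_common(1)[0][0]

    def predict(group):
        counts = per_group.get(group)
        return counts.most_common(1)[0][0] if counts else global_majority

    return predict

predict = train_majority_model(training_data)
predict("A")  # backed by 90 examples
predict("B")  # rests on just 10 examples
predict("C")  # unseen group: inherits group A's majority outcome
```

The point is not the toy model but the shape of the failure: the scarce group's prediction rests on thin evidence, and any group absent from the dataset is simply treated as if it were the dominant one.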
How can we meet this challenge? Cooper (the design consultancy founded by Alan Cooper, author of "About Face" and designer of Microsoft Visual Basic) is often credited with creating the "persona" method. The persona method is a disciplined approach: design for a person with a biography, a person with an age, a height, a history. A person who lives in the world, with their own projects and their own goals. A persona has a name; they might be busy, they might be using a design in a public space or while performing some specific activity. They are a fictional somebody, not an average somebody.
The persona approach is one way. A more authentic and valid approach is to bring in experts: people from different communities, backgrounds, and abilities. People who represent themselves and people like them.