April 17, 2023

How Do UX Professionals Identify Harmful Technology Before It Launches?

What is good design if not #designingforgood?

While the previous events in this series have all emphasized the possibilities and problems that come with technology, we wanted to end with a focus on action: what can emerging Black Digital Humanists do to alleviate harm in technology? In this workshop, UX professional Lisa D. Dance guides participants through a mock UX journey, showcasing some of the ways harm can show up during the design process and strategies for mitigating it.

As Shamika discussed in our opening talk, technology has a wide impact on people, both positive and negative. Lisa highlights that the decisions we make as UX designers can make major differences in the user experience of a product. Decisions made throughout the entire process of researching, designing, developing, and deploying products, services, and technology must be weighed critically to ensure we don't cause unnecessary or unintended harm.

Ethical Research is the careful consideration of the rights, well-being, and dignity of the people involved in research activities. Inclusive Design goes further: it is about creating products that understand and enable people of all backgrounds and abilities, beyond those directly involved in our research. AI Ethics is the adoption of ethical guidelines and governance for the research, planning, development, and deployment of artificial intelligence.

So, how do we practice ethical research and inclusive design? One way is by including diverse users in your research process from the very beginning. Much of identifying harm is about context. There are, of course, some hard no's (your product can't cause physical harm, for example), but many other issues aren't as clear-cut and require deeper consideration. Who's using this product? In what contexts? What are the potential ways people could use the product for harm, and how can we alleviate that risk? These are questions we have to ask while designing, and they can even be written down as a working checklist, as in the sketch below.
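As an illustration only (this is not code from Lisa's workshop), here is a minimal sketch of how a team might encode these questions as a design-review checklist. Every category name, question, and helper below is a hypothetical example:

    # A minimal, hypothetical harm-review checklist; the names and
    # categories are illustrative assumptions, not workshop materials.
    from dataclasses import dataclass

    @dataclass
    class ReviewItem:
        question: str        # a design question to work through
        category: str        # e.g. "context", "misuse", "privacy"
        answered: bool = False

    def open_questions(checklist):
        """Return the items the team has not yet worked through."""
        return [item for item in checklist if not item.answered]

    checklist = [
        ReviewItem("Who is using this product, and in what contexts?", "context"),
        ReviewItem("How could someone misuse this product to cause harm?", "misuse"),
        ReviewItem("Whose data do we collect, and who is left out?", "privacy"),
    ]

    for item in open_questions(checklist):
        print(f"[{item.category}] {item.question}")

The point of a structure like this is simply that unanswered questions stay visible throughout research, design, development, and deployment, rather than being raised once and forgotten.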

There are many categories of harm that we have to consider as UX designers, including financial harm, health-related harm, and privacy violations, as shown in the following chart:

[Chart from Lisa's presentation: categories of harm to consider in UX design]

With the emergence of AI technologies, Lisa highlights some key problems: deepfakes being used for political misinformation and to create nonconsensual sexual content; questions of where the data in training datasets comes from and who is or isn't included in it; and racist AI-generated content.

On how to begin these conversations in our own work environments, Lisa suggests starting by finding numbers or metrics, such as customer complaints, to help make your case. And when talking to stakeholders, sometimes you have to frame things through money: how much financial impact will there be if you don't make these changes? Even a rough back-of-the-envelope estimate, like the sketch below, can start that conversation.
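To make that framing concrete, here is a hypothetical back-of-the-envelope calculation. The figures and variable names are invented for illustration and do not come from the workshop:

    # Hypothetical sketch: rough monthly cost of leaving a known harm unfixed.
    # All numbers below are made-up assumptions for illustration.
    complaints_per_month = 120        # assumed volume of related complaints
    cost_per_complaint = 15.0         # assumed support cost per complaint ($)
    churned_customers_per_month = 8   # assumed customers lost to the issue
    lifetime_value = 400.0            # assumed revenue lost per churned customer ($)

    monthly_impact = (complaints_per_month * cost_per_complaint
                      + churned_customers_per_month * lifetime_value)
    print(f"Estimated cost of inaction: ${monthly_impact:,.0f}/month")
    # -> Estimated cost of inaction: $5,000/month

Even crude numbers like these translate a design concern into the language stakeholders already use to prioritize work.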

Further Readings

  • Design Justice: Community-Led Practices to Build the Worlds We Need, Sasha Costanza-Chock
  • Mismatch: How Inclusion Shapes Design, Kat Holmes
