The Ethics of AI - An Empathy-Based Approach?

Posted on July 25, 2018 by Noon van der Silk

There’s lots of talk about the Ethics of AI at the moment. As with any research, there’s too much for any one person to read. Here’s a bunch of papers that I’ve collected haphazardly in the early part of this year:

One thing I wanted to think about is, speaking as someone working in this field and interested in making changes in my day-to-day life, what kind of tools or ideas would be useful for me? What should I do?

Alongside this thought, another thought I had is that somehow the big lists of rules feel very impersonal and disconnected from my experiences. I also feel a little bit unsatisfied about opt-in rules. Here are a few from around the place that I’ve seen:

I have a few problems with these rules:

The positive aspects of them are:

So, what should any given engineer working in this area do? One thought I’ve had recently is a simple one: Let’s just aim at building empathy for the people that will be affected by our software.

This is reasonably actionable, say, with local groups, by organising meetings between technical people and the people that may be affected. For example, in the medical-AI setting, let’s organise regular catch-ups between the engineers, the doctors, nursing staff, and hospital administration types, along with perhaps patient representatives.

In the setting of, say, law software, again we just set up regular events for the two groups to chat through issues, work together on small projects, and build a mutual understanding of difficulties.

I think this approach is a bit nicer than, say, creating a new set of rules that make sense for us locally and then forcing people to follow them. One thing I like about the empathy-based/collaborative approach (or “human-centered design”, another term for this kind of thing) is that it allows people to adapt to local circumstances. I think that’s really crucial in allowing any one person to feel like they have some control over the application of whatever rules they come up with, and thus in getting them to actually take an interest in enforcing those rules in their organisation.

So, my new rule of thumb for this ethics-related AI stuff will be: Can I meet with some of the people that will be affected? What are their thoughts? What problems are they working through and what are they interested in?

As always, I’m interested in your thoughts on the matter!