The Ethics of AI - An Empathy-Based Approach?
There’s lots of talk about the Ethics of AI at the moment. As with any research, there’s too much for any one person to read. Here’s a bunch of papers that I’ve collected haphazardly in the early part of this year:
- “Dave…I can assure you…that it’s going to be all right…” – A definition, case for, and survey of algorithmic assurances in human-autonomy trust relationships
- A Voting-Based System for Ethical Decision Making
- Actually, It’s About Ethics in Computational Social Science: A Multi-party Risk-Benefit Framework for Online Community Research
- Attentive Explanations: Justifying Decisions and Pointing to the Evidence (Extended Abstract)
- Automated Reasoning for Robot Ethics
- Blue Sky Ideas in Artificial Intelligence Education from the EAAI 2017 New and Future AI Educator Program
- Children and the Data Cycle: Rights and Ethics in a Big Data World
- Concrete Problems in AI Safety
- Does mitigating ML’s impact disparity require treatment disparity?
- Ethical Artificial Intelligence - An Open Question
- Ethical Considerations in Artificial Intelligence Courses
- Ethics of autonomous information systems towards an artificial thinking
- Formalizing Preference Utilitarianism in Physical World Models
- Goal Conflict in Designing an Autonomous Artificial System
- In The Wild Residual Data Research and Privacy
- Institutionally Distributed Deep Learning Networks
- Maintaining The Humanity of Our Models
- Mammalian Value Systems
- Mapping for accessibility: A case study of ethics in data science for social good
- Modeling Epistemological Principles for Bias Mitigation in AI Systems: An Illustration in Hiring Decisions
- Predict Responsibly: Increasing Fairness by Learning To Defer
- Recovering the History of Informed Consent for Data Science and Internet Industry Research Ethics
- Responsible Autonomy
- Stoic Ethics for Artificial Agents
- The Dark Side of Ethical Robots
- This robot stinks! Differences between perceived mistreatment of robot and computer partners
- Towards Moral Autonomous Systems
- Using experimental game theory to transit human values to ethical AI
One thing I wanted to think about, as someone working in this field and interested in making changes in my day-to-day life, is: what kind of tools or ideas would actually be useful for me? What should I do?
Alongside that, another thought I had is that the big lists of rules feel very impersonal and disconnected from my experience. I also feel a little unsatisfied with opt-in rules. Here are a few that I’ve seen around the place:
- Future of Life (June 2018, relevant items)
- 5 - Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.
- 6 - Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
- 7 - Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
- 8 - Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
- 9 - Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
- 10 - Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviours can be assured to align with human values throughout their operation.
- 11 - Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
- 12 - Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
- 13 - Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.
- 14 - Shared Benefit: AI technologies should benefit and empower as many people as possible.
- 15 - Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
- 16 - Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
- 17 - Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.
- 18 - AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
- AI For Humanity (June 2018)
- 01 - Developing an aggressive data policy
- 02 - Targeting four strategic sectors
- 03 - Boosting the potential of French research
- 04 - Planning for the impact of AI on labour
- 05 - Making AI more environmentally friendly
- 06 - Opening up the black boxes of AI
- 07 - Ensuring that AI supports inclusivity and diversity
- Humans for AI (June 2018)
- Broaden the pipeline of minorities currently in tech careers seeking to move to careers in AI, by being the go-to destination for all things AI, because we believe that diversity of thought and opinion ultimately builds better products.
- Build an open and inclusive community of people interested in AI by facilitating interactions with experts, practitioners, and thought leaders in the field.
- Leverage AI to release a set of free products built by this community to further our mission of bringing diversity to AI.
- Demystify AI by providing a basic understanding of the concepts, thinking and events in AI for novices and non-technical people interested in how AI will impact their lives and their jobs.
- Concrete Problems in AI Safety (2016)
- Avoid Negative Side Effects
- Avoid Reward Hacking
- Scalable Oversight
- Safe Exploration
- Robustness to Distributional Shift
I have a few problems with these rules:
- It’s easy to imagine situations in which they are counter-productive,
- I don’t feel a lot of ownership of them, as I wasn’t involved in their construction,
- No-one is enforcing them on me,
- They’re often highly impractical, or contain colloquial/regional/political concerns (“Boost French Research …”),
- They’re also very overwhelming and demanding: how can I ensure that we do all of them?
- Even if I say I’m doing these things, how does any non-technical person know? How can I prove it?
The positive aspects of them are:
- It’s sometimes easy to think about how to apply them to day-to-day work,
- They help me think of things that I might not otherwise care about day-to-day (e.g. environmental concerns),
- It might help to lobby governments/organisations to get funding to make progress on certain aspects?
- It provides a framework that might be useful for discussions with colleagues/other people.
So, what should any given engineer working in this area do? One thought I’ve had recently is a simple one: Let’s just aim at building empathy for the people that will be affected by our software.
This is reasonably actionable, say, with local groups, by organising meetings between technical people and the people who may be affected. For example, in the medical-AI setting, let’s organise regular catch-ups between the engineers, the doctors, the nursing staff, and hospital administration types, along with perhaps patient representatives.
In the setting of, say, legal software, again we just set up regular events for the two groups to chat through issues, work together on small projects, and build a mutual understanding of each other’s difficulties.
I think this approach is a bit nicer than, say, creating a new set of rules that make sense for us locally and then forcing people to follow them. One thing I like about the empathy-based/collaborative approach (or “human-centered design”, another term for this kind of thing) is that it allows people to adapt to local circumstances. That’s really crucial in allowing any one person to feel like they have some control over the application of whatever rules they come up with, and thus in getting them to actually take an interest in enforcing those rules in their organisation.
So, my new rule of thumb for this ethics-related AI stuff will be: Can I meet with some of the people that will be affected? What are their thoughts? What problems are they working through and what are they interested in?
As always, I’m interested in your thoughts on the matter!