A reading list on AI ethics
There is no substitute for doing your own research, as this is a fast-changing field. Here are some resources that got me started on the road to understanding the complex issues around ethics & AI. The first section has open-ended descriptive sources covering the issue from multiple perspectives. Next are some practical frameworks, tools, & to-do lists. After this, a collection of some of the most-cited works highlighting scenarios where bias is known to exist.
Philosophers, Social Sciences & AI
AI Policy Guide & AI Safety Syllabus by 80,000 Hours.
Algorithms as Culture: some tactics for the ethnography of algorithmic systems. A social scientist's view of how algorithms are contextualised by their makers. Published in Big Data & Society.
Archaeology for Cyborgs: An invitation to adapt the archaeological protocols around researching human remains as a framework for dealing with personal data ethically.
Before you Make a Thing: Jentery Sayers's list of resources combining theory and practice for critically assessing context and impact when creating technical objects.
Big Data for the People: it's time to take it back from our tech overlords: A Marxist framework suggesting that the value of big data would be more fairly distributed if treated as common property.
Big Data Is People! Pragmatic, real-world examples of the differences between primary uses and secondary uses of data and a call to action for renegotiating how we obtain consent for these uses at a corporate and societal level. Echoes the point about "treating data as people" from PLoS's 'Ten Simple Rules for Responsible Big Data Research', linked below.
The Biggest Buzzword in Silicon Valley Doesn't Make Any Sense: "[Seaver] asked coders at technology companies about their relationship to the word 'algorithm' and discovered that even they feel alienated by the projects they work on, in part because they often tackle small pieces of bigger, non-personal projects—tentacles of an algorithmic organism if you will—and end up missing any closeness to the whole." QZ's take on Seaver's academic article "Algorithms as Culture" (above).
Conceptualising the right to data protection in an era of Big Data: Examining the cultural & societal values underpinning European rights to data protection, including the normative morality of Habermas, Mann's three types of veillance, & Foucault's panopticon.
The Cybersurveillance Dilemma: Foucault, Hobbes and Mill Weigh In: a critical-thinking introduction to cybersurveillance by Macat. Focused on government surveillance & the "social contract" but covers many of the same ethical questions faced by all who work with large data sets. Foucault's approach to power as "discipline" seems particularly relevant for civic & commercial usages of large data sets.
The Ethics of Artificial Intelligence: O'Reilly article with a summary of the distinction between 'ethics' & 'morals' & how individuals/teams can use this to frame their decision-making about the tools they work on/with.
The Ethics and Governance of Artificial Intelligence: MIT Media Lab's interdisciplinary course covering ethical, moral, & legal ramifications of AI development.
Europe’s silver bullet in global AI battle: Ethics: Europe seeks to shape the global rules of engagement for AI, much as its GDPR has had an outsize global impact on privacy.
Facebook has a new process for discussing ethics. But is it ethical? A critical reflection on how Facebook's research teams approach ethical decision-making. Noting that all institutional review boards are subjective, shaped by the ethics of the people who sit on them, the article pushes for greater transparency about the makeup of Facebook's ethical review board. (Critical questions from the curator of this list: but how much transparency is there for other IRBs? Are we holding Facebook to a different standard than we apply to other institutions? Even if we are, does the pervasive nature of Facebook warrant closer scrutiny?)
Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction: "Just as the crumple zone in a car is designed to absorb the force of impact in a crash, the human in a robotic system may become simply a component — accidentally or intentionally — that is intended to bear the brunt of the moral and legal penalties when the overall system fails." Uses historical examples from aviation & nuclear energy to contextualise the current situation with reference to moral requirements in AI.
No One Should Trust AI: Joanna Bryson's article in the UN's AI & Global Governance. "More importantly, no human should need to trust an AI system, because it is both possible and desirable to engineer AI for accountability. We do not need to trust an AI system, we can know how likely it is to perform the task assigned, and only that task. When a system using AI causes damage, we need to know we can hold the human beings behind that system to account."
A People's Guide to AI: an accessible guide to AI focused on equality.
A Review of Future and Ethical Perspectives of Robotics and AI. Jim Torresen's views on the current state of AI ethics and a discursive review of what the future holds.
Scaling Ethics: A Kantian approach to understanding what types of ethical frameworks can scale in an industry focused on speed & reach.
Surveillance, Power & Everyday Life: Another look at power as 'discipline' in terms of the automated decisions made about people (legal, commercial, & societal) without their knowledge. From the Oxford Handbook of Information & Communication Technologies. Draws heavily on Gary T. Marx's concept of a "surveillance society" (with nods, again, to Foucault.) Surveillance here transcends the state & permeates everyday interactions in all kinds of ways.
Towards an Ethics of Algorithms: Convening, Observation, Probability and Timeliness: Science, Technology & Human Values journal article with the grounding of ethics as "the study of what we ought to do."
Use Our Personal Data for the Common Good: a proposal that the open-source ethos of the Human Genome Project could be a way to equitably use personal data for public benefit, with a critical acknowledgement of concerns about privacy & inequality that might become problems if this model is adopted.
Ethical Frameworks, Tools & To-Do Lists
A Code of Ethics for Data Scientists: DJ Patil's crowd-sourced call for developing an ethical framework defined by data scientists (in conjunction with Bloomberg.)
After a Year of Tech Scandals, Our 10 Recommendations for AI: by the AI Now institute. Focuses on regulation, governance, and creating public protections for citizens, consumers and whistleblowers. Includes a link to their 2018 report.
AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations: Five ethical principles with 20 recommendations devised by Atomium-EISMD to create a "Good AI Society."
AI Ethics: Seven Traps: a sort of reverse ethical framework designed to help decision-makers avoid seven common pitfalls in AI ethics strategy. The traps are reductionism, simplicity, relativism, value alignment, dichotomy, myopia, and the rule of law.
AI Ethics Resources: a reading list compiled by Rachel Thomas. Grouped into: building technical skills; starting a reading group; syllabi from ethics courses; experts to follow; institutes & fellowships; building your own network; and related posts on fast.ai.
AI in the UK: Ready, Willing and Able? The House of Lords Artificial Intelligence Committee's recommendations on developing a policy framework for AI.
Artificial Intelligence: the global landscape of ethics guidelines: comprehensive overview of public, private sector, and academic research organisations' recommendations on AI ethics, demonstrating five overarching emerging principles (transparency, justice and fairness, non-maleficence, responsibility, and privacy).
An Overview of National AI Strategies: review of current national strategies and related policy frameworks on AI by Tim Dutton.
The Data Ethics Canvas: the Open Data Institute's canvas can be used to identify, assess, and debate ethical issues around big data projects in order to create practical next steps to address the issues.
Data Science Ethical Framework: Gov.UK's ethical framework for data scientists working in the UK civil service.
Doteveryone Ethical Tech Initiatives Directory: a crowdsourced list of initiatives to produce ethical and responsible technology. Standards, training courses, advocacy, guidelines, campaigns, regulations, networks, tools, etc.
Ethical Assessment of New Technologies: a Meta-Methodology. Developed in 2010 for the Ethics group of BCS, The Chartered Institute for IT, this simple framework for assessing ethical issues in technology comprises five stages: Define questions, Issues analysis, Options evaluation, Decision determination, Explanations dissemination (DIODE). Notably, the authors do not consider this a code of ethics in itself, but a framework for assessing ethical issues in emergent technologies.
The Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS). The Institute of Electrical and Electronics Engineers (IEEE) is developing a certification program for marking Transparency, Accountability, and Algorithmic Bias.
Ethics in Action. IEEE's standards and working-practice initiative for ethical guidelines around automation and intelligent systems.
Ethics guidelines for trustworthy AI: The EU's recommendations and requirements for a trustworthy AI system. Trustworthy AI should be lawful, ethical, and robust. Assessing this involves seven key areas: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.
Ethical OS. Toolkit by the Omidyar Network's Tech and Society Solutions Lab and the Institute for the Future.
Everyday Ethics for Artificial Intelligence. IBM's practical guide to AI ethics, with example use-cases and recommendations.
Fairness Through Awareness: Dwork et al.'s formal criterion for measuring fairness in classification models (similar individuals should receive similar outcomes). Could be used to assess whether bias exists in a classification system; a minimal sketch of the core condition appears at the end of this section.
The Futurice Principles for Ethical AI: five simple principles for ethical AI, with a data ethics canvas tool to work through particular scenarios.
NIST Cybersecurity Framework: often cited as a critical tool for giving cybersecurity experts a common language to speak about cybersecurity issues; the National Institute of Standards and Technology (NIST)'s forthcoming (c.2019) framework on data privacy will likely draw on the precedents set by its cybersecurity framework.
The Partnership on AI to Benefit People and Society: a consortium established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.
Platforms Should Become Information Fiduciaries: one way to ensure service providers act in the best interests of their users is to make them information fiduciaries, giving them extra responsibilities towards the people using their services.
Policy and investment recommendations for trustworthy Artificial Intelligence: recommendations by the EU's High-Level Expert Group on AI. Trustworthy AI should be lawful, ethical and robust. Specific ethics guidelines provided in "Ethics Guidelines for Trustworthy AI", linked above.
Responsible Data: mailing list to discuss responsible data issues, tools and approaches set up by The Engine Room.
Responsible Tech: the Organisational Ecosystem Map. Who's doing what in the field of responsible technology? Use this network graph to find out. Covers a broader area than AI; includes tags for AI & other specific domains.
Ten Simple Rules for Responsible Big Data Research: PLOS Computational Biology article with ten simple, jargon-free guidelines for conducting ethical research with big data.
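To make the "Fairness Through Awareness" criterion above concrete: the paper's central idea is a Lipschitz condition, requiring that the distance between a model's output distributions for any two individuals never exceed the task-specific distance between those individuals ("similar people get similar outcomes"). Below is a minimal sketch of that check in Python; the `metric` and `model` callables are hypothetical placeholders for a task-specific similarity metric and a classifier, not anything prescribed by the paper.

```python
import numpy as np

def total_variation(p, q):
    """Statistical (total variation) distance between two outcome distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return 0.5 * np.abs(p - q).sum()

def lipschitz_violations(individuals, metric, predict_proba):
    """Return pairs where D(M(x_i), M(x_j)) > d(x_i, x_j), i.e. where the
    model treats two individuals more differently than the task-specific
    metric says they actually differ."""
    violations = []
    for i in range(len(individuals)):
        for j in range(i + 1, len(individuals)):
            d = metric(individuals[i], individuals[j])
            D = total_variation(predict_proba(individuals[i]),
                                predict_proba(individuals[j]))
            if D > d:
                violations.append((i, j, round(D, 3), round(d, 3)))
    return violations

# Toy demonstration: two near-identical applicants fall on opposite sides
# of a hard decision threshold, so the model's outputs differ far more
# than the applicants themselves do: a violation of the condition.
applicants = [np.array([0.51]), np.array([0.49])]
metric = lambda x, y: float(np.abs(x - y).max())            # placeholder d(x, y)
model = lambda x: [0.9, 0.1] if x[0] > 0.5 else [0.1, 0.9]  # placeholder M(x)
print(lipschitz_violations(applicants, metric, model))      # [(0, 1, 0.8, 0.02)]
```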
Where Bias Exists: Reported Bias Scenarios (& Some Solutions)
AI Essentials. A curated news feed with the latest on AI.
Algorithms Are the Wrong Usual Suspect! Look to the developer's unconscious bias, not the algorithm itself, to ferret out underlying algorithmic problems. With some examples of where bias has been identified & industry-wide suggestions for how to combat bias (e.g. more diversity in the field.)
How Algorithms Can Punish the Poor: Slate's coverage of Virginia Eubanks' book _Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor_. Includes examples like automated decisions on access to subsidised housing & welfare benefits, & predictive modelling for child abuse & neglect.
How Coders are Fighting Bias in Facial Recognition Software: practical examples of overcoming training set bias from facial recognition companies Gfycat & Modiface. The solution in both cases: gathering additional data to make the training sets more representative of different population groups. (Another solution not covered in the article: fixing the open-source data sets to ensure that these are representative of diverse population groups.)
How to Make a Racist AI Without Really Trying: a cautionary tale about how bias can accidentally creep in when the builder of the AI is not aware of all the context.
Invisibilia: Do the Patterns in Your Past Predict Your Future? NPR's look at the difficulty of identifying a consistently predictive model for sociological factors like school grades, propensity for homelessness, etc.
Machine Bias: ProPublica article detailing the racial bias in risk assessment software for predicting future criminality among defendants in the USA.
My Algorithm is Better than Yours! Why it's essential for everyone, not just data scientists, to understand the limitations & biases of different algorithmic techniques & training sets. This enables people to make informed decisions about how much to trust the outcomes of specific instances of automated decision-making. (This seems analogous to helping patients understand how likely their test results are to be accurate; a worked base-rate example appears at the end of this list.)
The Government Is Using the Most Vulnerable People to Test Facial Recognition Software: NIST, the agency charged with creating standards and regulation for technology including facial recognition tech, has been using data sets obtained without consent, often of photos taken in highly vulnerable circumstances, to train its verification test for facial recognition programmes, the gold-standard test in the industry.
Tech's Sexist Algorithms and How to Fix Them: The FT's examination of gender bias in AI. Main points of bias identified in the article: biased training sets, lack of diversity in the field (i.e. the people impacted by the technology are not the ones building the technology) & lack of clear legal frameworks to mitigate the effects of bias.
Technology is Biased Too. How Do We Fix It?: A deeper look at the risk assessment software described by ProPublica, highlighting similar risks in facial recognition software. Suggests solutions in the form of legislation, transparency & accountability.
Towards Provably Moral AI Agents in Bottom-Up Learning Frameworks: Can you train a machine learning algorithm to behave morally?
Thread on bias in AI-based hiring: https://twitter.com/random_walker/status/978447070909607936
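To make the medical-test analogy from "My Algorithm is Better than Yours!" concrete, here is a small worked base-rate example. All numbers are hypothetical, chosen only to show how a classifier with seemingly strong accuracy figures can still be wrong most of the times it raises a flag when the thing it screens for is rare.

```python
# Hypothetical screening model: 90% sensitivity (true-positive rate) and
# 95% specificity (true-negative rate), applied to a population where
# only 1% of cases are genuinely positive.
prevalence = 0.01
sensitivity = 0.90
specificity = 0.95

true_positives = prevalence * sensitivity                # 0.009 of all cases
false_positives = (1 - prevalence) * (1 - specificity)   # 0.0495 of all cases
precision = true_positives / (true_positives + false_positives)

# Despite the headline "90%/95% accuracy", a flagged case is genuine
# only about 15% of the time.
print(f"Probability a flag is correct: {precision:.1%}")  # 15.4%
```

The same arithmetic applies to automated decisions about rare outcomes (fraud, recidivism, child-welfare flags): without knowing the base rate, an accuracy figure alone says little about how much to trust any individual flag.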