A reading list on AI ethics

There is no substitute for doing your own research, as this is a fast-changing field. Here are some resources that got me started on the road to understanding the complex issues around ethics & AI. The first section has open-ended descriptive sources covering the issue from multiple perspectives. Next are some practical frameworks, tools, & to-do lists. After this comes a collection of some of the most-cited works highlighting scenarios where bias is known to exist.

Philosophy, Social Sciences & AI

  • AI Policy Guide & AI Safety Syllabus by 80,000 Hours.

  • Algorithms as Culture: some tactics for the ethnography of algorithmic systems. A social scientist's view of how algorithms are contextualised by their makers. Published in Big Data & Society.

  • Archaeology for Cyborgs: An invitation to adapt the archaeological protocols around researching human remains as a framework for dealing with personal data ethically.

  • Before you Make a Thing: Jentery Sayers's list of resources combining theory and practice for critically assessing context and impact when creating technical objects.

  • Big Data for the People: it's time to take it back from our tech overlords: A Marxist framework suggesting that the value of big data would be more fairly distributed if treated as common property.

  • Big Data Is People! Pragmatic, real-world examples of the differences between primary uses and secondary uses of data and a call to action for renegotiating how we obtain consent for these uses at a corporate and societal level. Echoes the point about "treating data as people" from PLoS's 'Ten Simple Rules for Responsible Big Data Research', linked below.

  • The Biggest Buzzword in Silicon Valley Doesn't Make Any Sense: "[Seaver] asked coders at technology companies about their relationship to the word “algorithm” and discovered that even they feel alienated by the projects they work on, in part because they often tackle small pieces of bigger, non-personal projects—tentacles of an algorithmic organism if you will—and end up missing any closeness to the whole." QZ's view of Seaver's academic article "Algorithms as Culture" (above).

  • Conceptualising the right to data protection in an era of Big Data: Examining the cultural & societal values underpinning European rights to data protection, including the normative morality of Habermas, Mann's three types of veillance, & Foucault's panopticon.

  • The Cybersurveillance Dilemma: Foucault, Hobbes and Mill Weigh In: a critical-thinking introduction to cybersurveillance by Macat. Focused on government surveillance & the "social contract" but covers many of the same ethical questions faced by all who work with large data sets. Foucault's approach to power as "discipline" seems particularly relevant for civic & commercial usages of large data sets.

  • The Ethics of Artificial Intelligence: O'Reilly article summarising the distinction between 'ethics' & 'morals', & how individuals & teams can use this distinction to frame their decision-making about the tools they build & use.

  • The Ethics and Governance of Artificial Intelligence: MIT Media Lab's interdisciplinary course covering ethical, moral, & legal ramifications of AI development.

  • Europe’s silver bullet in global AI battle: Ethics: Europe seeks an outsize role in setting the rules of engagement for AI, much as its GDPR regulation has had a global impact on privacy.

  • Facebook has a new process for discussing ethics. But is it ethical? A critical reflection on how the ethical decision-making process is approached by Facebook's research teams. Noting that all institutional review boards are subjective, shaped by the ethics of the people who sit on them, the article pushes for greater transparency about the makeup of Facebook's ethical review board. (Critical questions from the curator of this list: how much transparency is there for other IRBs? Are we holding Facebook to a different standard than the one we apply to other institutions? Even if we are, does Facebook's pervasive reach warrant closer scrutiny?)

  • Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction: "Just as the crumple zone in a car is designed to absorb the force of impact in a crash, the human in a robotic system may become simply a component — accidentally or intentionally — that is intended to bear the brunt of the moral and legal penalties when the overall system fails." Uses historical examples from aviation & nuclear energy to contextualise the current situation with reference to moral requirements in AI.

  • No One Should Trust AI: Joanna Bryson's article in the UN's AI & Global Governance. "More importantly, no human should need to trust an AI system, because it is both possible and desirable to engineer AI for accountability. We do not need to trust an AI system, we can know how likely it is to perform the task assigned, and only that task. When a system using AI causes damage, we need to know we can hold the human beings behind that system to account."

  • A People's Guide to AI: an accessible guide to AI focused on equality.

  • A Review of Future and Ethical Perspectives of Robotics and AI: Jim Torresen's views on the current state of AI ethics & a discursive review of what the future holds.

  • Scaling Ethics: A Kantian approach to understanding what types of ethical frameworks can scale in an industry focused on speed & reach.

  • Surveillance, Power & Everyday Life: Another look at power as 'discipline' in terms of the automated decisions made about people (legal, commercial, & societal) without their knowledge. From the Oxford Handbook of Information & Communication Technologies. Draws heavily on Gary T. Marx's concept of a "surveillance society" (with nods, again, to Foucault). Surveillance here transcends the state, permeating daily-life interactions in all kinds of ways.

  • Towards an Ethics of Algorithms: Convening, Observation, Probability and Timeliness: Science, Technology & Human Values journal article with the grounding of ethics as "the study of what we ought to do."

  • Use Our Personal Data for the Common Good: a proposal that the open-source ethos of the Human Genome Project could be a way to equitably use personal data for public benefit, with a critical acknowledgement of concerns about privacy & inequality that might become problems if this model is adopted.

Ethical Frameworks, Tools & To-Do Lists

Where Bias Exists: Reported Bias Scenarios (& Some Solutions)

  • AI Essentials: a curated news feed with the latest on AI.

  • Algorithms Are the Wrong Usual Suspect! Look to developers' unconscious biases, not the algorithm itself, to ferret out underlying algorithmic problems. With some examples of where bias has been identified & industry-wide suggestions for how to combat it (e.g. more diversity in the field).

  • How Algorithms Can Punish the Poor: Slate's coverage of Virginia Eubanks' book _Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor_. Includes examples like automated decisions about access to subsidised housing & welfare benefits, & predictive modelling for child abuse & neglect.

  • How Coders are Fighting Bias in Facial Recognition Software: practical examples of overcoming training set bias from facial recognition companies Gfycat & Modiface. The solution in both cases: gathering additional data to make the training sets more representative of different population groups; a sketch of this kind of training-set audit follows this list. (Another solution not covered in the article: fixing the open-source data sets to ensure that these are representative of diverse population groups.)

  • How to Make a Racist AI Without Really Trying: a cautionary tale about how bias can accidentally creep in when the builder of the AI is not aware of all the context.

  • Invisibilia: Do the Patterns in Your Past Predict Your Future? NPR's look at the difficulty of identifying a consistently predictive model for sociological factors like school grades, propensity for homelessness, etc.

  • Machine Bias: ProPublica article detailing the racial bias in risk assessment software used to predict future criminality among defendants in the USA. (The error-rate comparison at the heart of the article is sketched after this list.)

  • My Algorithm is Better than Yours! Why it's essential for everyone, not just data scientists, to understand the limitations & biases of different algorithmic techniques & training sets. This enables people to make informed decisions about how much to trust the outcomes of specific instances of automated decision-making. (This seems analogous to helping patients understand the likelihood of their test results being accurate; a worked example of that base-rate effect follows this list.)

  • The Government Is Using the Most Vulnerable People to Test Facial Recognition Software: NIST, the agency charged with creating standards and regulation for technology including facial recognition tech, has been using data sets obtained without consent, often comprising photos taken in highly vulnerable circumstances, to build its verification test for facial recognition programmes (the gold-standard test in the industry).

  • Tech's Sexist Algorithms and How to Fix Them: The FT's examination of gender bias in AI. Main points of bias identified in the article: biased training sets, lack of diversity in the field (i.e. the people impacted by the technology are not the ones building the technology) & lack of clear legal frameworks to mitigate the effects of bias.

  • Technology is Biased Too. How Do We Fix It?: A deeper look at the risk assessment software described by ProPublica as well as highlighting similar risks in facial recognition software. Suggested solutions in the form of legislation, transparency & accountability.

  • Towards Provably Moral AI Agents in Bottom-Up Learning Frameworks: Can you train a machine learning algorithm to behave morally?

  • A Twitter thread by Arvind Narayanan on bias in AI-driven hiring: https://twitter.com/random_walker/status/978447070909607936
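
A few of the pieces above describe fixes concretely enough to sketch in code. First, the training-set audit referenced from "How Coders are Fighting Bias in Facial Recognition Software": before gathering more data, count who is represented in the set you already have. This is a minimal sketch; the group labels & proportions are hypothetical.

```python
# Audit a training set's demographic balance before training a model.
# The labels and counts here are invented, for illustration only.
from collections import Counter

training_labels = ["group_a"] * 800 + ["group_b"] * 150 + ["group_c"] * 50

counts = Counter(training_labels)
total = sum(counts.values())
for group, n in counts.most_common():
    print(f"{group}: {n / total:.0%}")
# A heavy skew (80% / 15% / 5% here) is the cue to gather additional data
# for the under-represented groups before training a recognition model.
```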
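
Next, the error-rate comparison at the heart of ProPublica's "Machine Bias": the disparity shows up when false positive rates are computed separately per group. A minimal sketch on fabricated records, purely to show the shape of the calculation:

```python
# Compare false positive rates across groups, the kind of disparity
# ProPublica measured. All records below are fabricated for illustration.
from collections import defaultdict

# (group, predicted_high_risk, reoffended)
records = [
    ("group_a", True, False), ("group_a", True, False),
    ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", False, False),
    ("group_b", False, False), ("group_b", False, True),
]

false_pos = defaultdict(int)  # labelled high-risk but did not reoffend
negatives = defaultdict(int)  # everyone who did not reoffend

for group, predicted_high_risk, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if predicted_high_risk:
            false_pos[group] += 1

for group in sorted(negatives):
    print(f"{group}: {false_pos[group] / negatives[group]:.0%}")
# group_a: 67% vs group_b: 33%. Unequal false positive rates mean the
# model's mistakes fall more heavily on one group than the other.
```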
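
Finally, the worked example promised under "My Algorithm is Better than Yours!": the medical-test analogy comes down to Bayes' rule, where the value of a positive result depends on the base rate as much as on the model's accuracy. The sensitivity, specificity, & prevalence figures here are hypothetical.

```python
# Why base rates matter when deciding how much to trust a positive flag.
# Sensitivity, specificity, and prevalence are hypothetical numbers.

def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(condition | positive result), via Bayes' rule."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# A system that is right 95% of the time in both directions,
# flagging something that occurs in only 1% of cases:
print(f"{positive_predictive_value(0.01, 0.95, 0.95):.0%}")  # ~16%
# Roughly five out of six positive flags are wrong, despite the
# impressive-sounding accuracy: exactly the kind of limitation a
# non-specialist needs explained to weigh an algorithm's output.
```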
