
Whose worldview should serve as the template for AI morality?
Artificial intelligence (AI) was once the stuff of science fiction, but it is becoming widespread: it is used in mobile phone technology and motor vehicles, and it powers tools for agriculture and healthcare.
But concerns have emerged about the accountability of AI and related technologies like machine learning. In December 2020 a computer scientist, Timnit Gebru, was fired from Google’s Ethical AI team. She had previously raised the alarm about the social effects of bias in AI technologies. For instance, in a 2018 paper Gebru and another researcher, Joy Buolamwini, had shown that facial recognition software was less accurate at identifying women and people of colour than at identifying white men. Biases in training data can have far-reaching and unintended effects.
Over the past several years, concerns around AI ethics have gone mainstream. The concerns, and the outcomes everyone wants to avoid, are largely agreed upon and well documented. No one wants to push out discriminatory or biased AI. No one wants to be the object of a lawsuit or regulatory investigation for violations of privacy. But once we have all agreed that biased, black-box, privacy-violating AI is bad, where do we go from here? The question nearly every senior leader asks is: how do we take action to mitigate those ethical risks?
To talk about this, Michael Avery is joined by Emma Ruttkamp-Bloem, Professor and Head of the Department of Philosophy at the University of Pretoria and AI Ethics Lead at the South African Centre for AI Research (CAIR); Dr Tanya de Villiers-Botha, Head of the Unit for the Ethics of Technology at the Centre for Applied Ethics; and Johan Steyn, chair of the special interest group on artificial intelligence and robotics at the Institute of Information Technology Professionals of South Africa.





