U of T's Centre for Ethics launches Oxford Handbook of Ethics of AI

Markus Dubber, the director of U of T's Centre for Ethics, co-edited the nearly 900-page handbook, which examines the evolving field of AI through an interdisciplinary and international lens (photo courtesy Faculty of Law)

From smart cities and autonomous vehicles to facial recognition software and digital assistants standing by to answer questions, artificial intelligence (AI) is quickly becoming an important part of everyday life – and it’s only the beginning. 

But the AI revolution also brings with it important ethical questions about the technology’s impact on people and society – questions that Markus Dubber, director of the University of Toronto’s Centre for Ethics, is delving into with the release of the Oxford Handbook of Ethics of AI, a first of its kind globally. 

Launched earlier this month, the nearly 900-page handbook examines the ever-evolving field of AI through an interdisciplinary and international lens, exploring 44 topics, including fairness and bias, race and gender, AI and consent, the ethics of automating design and more.   

 

“I like these Oxford handbooks because of the opportunity to define the canon of the field and also shape it,” says Dubber, a professor in the Faculty of Law and co-editor of the handbook. “We can include approaches, perspectives and feature scholars who perhaps hadn't been thought of as central to the scholarly enterprise.” 

Building on the centre’s “Ethics of AI in Context” initiative – an interdisciplinary workshop series launched in 2017 – Dubber worked with co-editors Sunit Das, an associate professor in U of T’s Faculty of Medicine, and Frank Pasquale of the University of Maryland to create an interdisciplinary handbook. They wanted to broaden the conversation around the ethics of AI to include perspectives from the humanities, social sciences, law, medicine and engineering. 

Read a Q&A with Markus Dubber about the handbook project

“AI now touches every aspect of everything we do individually, politically, communally, socially. So, all disciplines should be part of this conversation,” Dubber says. 

“It's exciting to see the range of people participating: engineers, philosophers, political scientists, lawyers and cognitive scientists. Some didn't necessarily think of themselves as being part of this discipline at the beginning, but now they do.” 

And while Dubber is no stranger to Oxford University Press handbooks – he has co-edited several law handbooks – the ethics of AI came with its own unique challenges. 

“There’s no handbook for the handbook,” he says. 

“The discipline is so fast moving – not just the tech but also the ethics. Anytime there's a new tech development, there will be some new ethical issue. So, it's not just that the technology keeps evolving, the ethics – the reflection on the normative dimensions of the technology – will change, too.” 

But Dubber says the Centre for Ethics, an interdisciplinary unit from the outset, was the perfect place to take on this challenge – and it allowed him to involve his students from the centre in the project. 

Graduate and undergraduate students in the centre’s cross-divisional Ethics of Artificial Intelligence in Context course, along with research assistants and student affiliates of the centre’s Ethics of AI Lab, collaborated to pull together an annotated bibliography that serves as an open-access companion to the handbook. 

“It was fun to see the students take charge and produce something of first-rate quality that, as far as I know, is the only one of its kind anywhere,” Dubber says. “This is a comprehensive annotated bibliography of more than 800 sources covering 44 different subjects in the ethics of AI, ranging from smart cities to the singularity to European AI policy.” 

What does Dubber want readers to take away from the handbook? He hopes people will realize they don’t have to be experts in AI to think critically about the ethical issues surrounding the field. 

“Instead of being overwhelmed by this accumulation of expertise across the world and across disciplines, they can see these are all important approaches and perspectives,” he says. “Ultimately, it's up to each one of us to decide what role AI – or technology generally – should play.” 

For Dubber, ethics is a great equalizer. 

“You don't need a degree in ethics to have an ethical view of anything,” he says. “You don't need a degree to have a sense of what's right and wrong and to think things through.” 
