Companies need to stop designing new artificial-intelligence technology just because they can, and people need to stop adapting their practices, habits and laws to fit the new technology. Instead, AI should be designed to fit exactly what people actually need.
That’s the view of 50 global experts who’ve contributed research papers to Human-Centred AI, a new book co-edited by two Université de Montréal experts that explores the risks — and missed opportunities — of the status quo and how it can be made better.
One important way would be through legal mechanisms, now woefully inadequate to the task, said contributor Pierre Larouche, an UdeM law professor and faculty vice-dean who specializes in competition law.
Treating AI as “a standalone object of law and regulation” and assuming that there is “no law currently applicable to AI” has left some policymakers feeling inadequate to an insurmountable task, said Larouche.
“Despite the scarcity – if not outright absence – of specific rules concerning AI as such, there is no shortage of laws that can be applied to AI, because of its embeddedness in social and economic relationships,” he said.
The challenge is not to create new legislation but to extend and apply existing laws to AI, he argued. That way, policymakers won’t fall into the trap of “delaying tactics designed to extend discussion indefinitely, while the technology continues to progress at a fast pace.”
Montreal lawyer Benjamin Prud’homme, vice-president of policy, society and global affairs at the UdeM-affiliated Mila (Quebec Artificial Intelligence Institute), one of the largest academic communities dedicated to AI, agrees.
He urges policymakers to “start moving away from the dichotomy between innovation and regulation (and) that we acknowledge it might be okay to stifle innovation if that innovation is irresponsible.”
Prud’homme cited the European Union as a pro-active example, pointing to its “very ambitious AI Act, the first systemic law on AI, (which) should be definitively approved in the next few months.”
Co-edited by a UdeM professor and health-law expert and a UdeM public-health expert, along with colleagues at the University of Cambridge and Osaka University, Human-Centred AI brings together specialists in disciplines ranging from education to management to political science.
The book examines AI technologies in a number of contexts – including agriculture, workplace environments, healthcare, criminal justice and higher education – and offers people-focused approaches to regulation, along with interdisciplinary ways of working together to make AI more responsive to human needs.
A University of Edinburgh philosophy professor points to increasingly popular generative AI as an example of technology that is not human-centred. She argues the technology was created by organizations simply wanting to see how powerful they can make a system, rather than making “something designed by us, for us, and to benefit us.”
Other contributors to the new book look at how AI is shaping human behaviour (via Google, Facebook and other platforms), how AI lacks data on minorities and hence helps marginalize them, and how AI undermines privacy because people are unaware of how their information is collected and stored.