Initiatives to regulate artificial intelligence (AI) have sprung up around the world, spearheaded by the likes of the OECD and UNESCO. It's time to harmonise and consolidate, a conference on AI ethics held under the auspices of Slovenia's presidency of the EU Council heard this week.
“We are clearly at a developmental point where you've got a lot of actors right now contributing to this movement from principles to practice, and we simply need to work together in a multistakeholder way to harmonise these approaches,” said David Leslie of the Council of Europe's Ad hoc Committee on Artificial Intelligence (CAHAI). Leslie leads on ethics at the UK's Alan Turing Institute and is the author of the UK government's official guidance on the responsible design and implementation of AI systems in the public sector.

“Between 2015 and 2020, 117 bodies of varying standing published AI ethics principles, and the number keeps growing. In total, 91 came out of Europe and North America. This high number shows there are hopes and concerns related to AI technologies, but also an interest in building some type of mechanism of consensus on AI governance,” said Marielza Oliveira, director for partnerships at the United Nations' agency for education, science and culture (UNESCO).
All these initiatives are already moving towards harmonisation, she believes. The next big step will come in mid-September with the launch of the Globalpolicy.AI platform, enabling eight intergovernmental organisations, including the European Commission, the OECD, the United Nations and the World Bank, to work together on defining principles for future AI applications.
AI holds the promise of changing almost every industry, boosting productivity, improving forecasting and supporting moves to be more energy efficient. AI-based technologies can help predict crop yields, are already playing a role in drug discovery, and are being applied to autonomous vehicles and the automation of administrative tasks.
In 2018, market analysts McKinsey estimated AI-based technologies could give the global economy a $13 trillion boost by 2030, amounting to 1.2% additional global GDP growth per year.
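As a rough sanity check, the two McKinsey figures are consistent with each other: compounding an extra 1.2% of growth per year over the twelve years to 2030, against an assumed 2018 world GDP of roughly $85 trillion (a figure not given in the article), lands near $13 trillion. A minimal back-of-envelope sketch:

```python
# Back-of-envelope check of the McKinsey figures.
# Assumptions (not from the article): world GDP ~$85 trillion in 2018,
# and the 1.2% extra growth compounds annually from 2018 to 2030.
baseline_gdp = 85.0     # trillions of US dollars, assumed
extra_growth = 0.012    # 1.2% additional GDP growth per year
years = 12              # 2018 -> 2030

# Cumulative uplift over the baseline from compounding the extra growth
uplift = (1 + extra_growth) ** years - 1
boost = baseline_gdp * uplift
print(f"~${boost:.1f} trillion")  # close to the reported $13 trillion
```

The exact result depends on the baseline GDP assumed, but any plausible value in the mid-$80-trillion range reproduces the reported order of magnitude.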
But there are evident risks. AI-powered systems can infringe privacy, while applications based on low-quality data can lead to bias and discrimination. In 2016, an investigation by the US news organisation ProPublica showed an algorithm used by US law authorities to assess a criminal offender's likelihood of reoffending was racially biased.
Rules for AI technologies are needed to limit the risks in areas such as health intervention, credit scoring and insurance ratings.