Top academic weighs in on artificial intelligence regulation

The buzz regarding the potential for artificial intelligence to revolutionise our lives is inescapable.

Development of artificial intelligence (AI) technology is a huge growth area, and investors are banking on an "AI boom" in everything from cybersecurity to healthcare. The capabilities and achievements of AI in some areas are certainly astonishing: self-driving cars are no longer theoretical but a reality, and AlphaGo is now arguably the strongest Go player in history. But the picture isn't all rosy; The Economist has recently described a "Techlash" against the digital giants. As with any technology, AI has negative as well as positive effects. Applications of AI in social media can help us find long-lost friends, but those same channels can be manipulated to disseminate fake news and influence our decisions. AI applications, whether in smart cities, logistics management, or build-to-order and just-in-time manufacturing, can be optimised to increase efficiency, but who should take responsibility when automated processes cause harm? Are these and other similar worries matters of public concern that warrant a societal response?

Richard Diffenthal and Helen McGowan of our London Tech Hub team talk to Karen Yeung, one of our Academics Advisory Panel members, about the case for regulating AI.

Read More: Artificial intelligence: time to get regulating?
