The Challenge of Generative AI Regulation: Fragmentation, Data Protection, and Policy

An interview with author Olaf Groth, co-author of The Great Remobilization


Emerging technologies such as Generative AI are powerful. They also involve an incredible amount of data. How should we think about regulating these technologies while maintaining a rich, innovative business ecosystem? On a global scale, governments have taken very different approaches to questions of data privacy and AI governance, leaving us with a highly fragmented regulatory environment. How can we bring different concerns, policymakers, and perspectives to the table in order to arrive at the appropriate level of AI regulation?


To address these questions, we’re speaking with Dr. Olaf Groth, the founder and CEO of Cambrian Futures. He is the co-author of Solomon’s Code: Humanity in a World of Thinking Machines (Pegasus, 2018), and most recently of The Great Remobilization: Strategies & Designs For A Smarter Future (MIT Press, forthcoming Oct. 17th, 2023).


In this interview, we explore the thorny issue of AI regulation and data protection: How can we safely govern these technologies on a global scale?  

 

We’ve spoken previously about Brain-Computer Interfaces and Cognitive Technologies. These emerging technologies present a huge challenge to regulators. How can regulation play an important role, without stifling innovation?

Many of us are grateful that the European Union has been the 500-pound gorilla that has essentially put regulation forward. It seeks to protect the dignity of the individual. This is laudable, and it is a partial success. I say "partial" because, at the same time, the EU is mercilessly behind in innovating in this space. It is stifling its own innovators, because it is a 500-million-person market that is hopelessly fragmented. By laying blanket regulation on top of these innovators, it isn't really helping itself or others. That's the flip side of regulation like the GDPR and the European AI Act.


Overall, the new EU AI regulations are highly imperfect and not helpful for small innovators. But in passing them, Europe is sending a signal into the world that we need to figure this out. And yes, Europe has started down that painful path of trying to figure it out. It's catching a lot of flak, as it should, but at least it's trying.

Outside of the ongoing work of The European Union, where else are you seeing a positive push for more oversight and data regulation?


Outside of Europe, a few places stand out. The first is here in California, with the CCPA, the California Consumer Privacy Act. We're also now looking at new acts, currently in the works, to protect consumer data.


Singapore is another country at the forefront of this. The government there is looking at privacy and agency for both individuals and businesses, as well as the need to protect their data and their footprints. What's interesting here is that Singapore, of course, is not a democracy. And in terms of global progress on this issue, it's a good thing that we have both democracies and non-democracies working on it. Similarly, the UAE has sequenced the genomes of all its citizens and is very strict on data protection. You may agree or disagree with the sequencing, but it's a good thing that they are so staunch on data protection.


Ultimately, these efforts are an important step in operationalizing governance and in developing new types of AI technologies that still allow you to extract insights. That ability is important to retain: once you shut out various personal data footprints, or restrict them heavily, how are you going to derive insights from them? That's the conundrum. We need international digital property rights protection, for instance, to create trust and, in turn, free up data. That's not a contradiction but a requirement for more data-driven insights.


In the meantime, there are nascent approaches to extracting these insights ethically, for instance, new types of algorithms that can crawl encrypted data without ever exposing the underlying data. These are the technologies we need to consider when maintaining this balance. Overall, regulatory fragmentation creates a lot of uncertainty around these technologies and the data they gather.


These concerns around data privacy seem to reach their fullest expression when we're looking at Generative AI. Some individuals from the private sector have called for a moratorium, but there is no unified position when it comes to this technology. How should we think about regulation for Generative AI?


The first thing is already happening, and I was very happy to be part of it. This was at the World Economic Forum's Centre for the Fourth Industrial Revolution, where we launched the global AI Governance Alliance. We had our first official working group meeting this morning, and it went really well. We also have a convening body that acts as a think tank and brings policymakers, entrepreneurs, corporate executives, activists, and the media together. It's a really important facilitation mechanism - the sort of bridge needed to fight this fragmentation and to make suggestions for new and better regulation and safeguards.


This is all very new. Ultimately, some of these safeguards may entail self-regulation. Some of this will have to happen country by country, region by region, while some of it will have to be global.


We need organizations themselves to recognize that data, AI, and cognitive technologies all require a different approach. They require us to examine, and then regulate, very different phenomena than what we've dealt with before, much faster and with much greater technical depth. Ultimately, this also means that organizations and industries need to hire the right people at the right times in order to be able to do that. And then we need the same thing at the company level, the national level, and the global level.


The overall regulatory framework may end up taking various forms, and the World Economic Forum can make a real difference in fostering these discussions. That includes building a bridge to China. People and their data travel across borders, and China is too big, but also too different, not to engage with a great deal of attention and resolve.


More from Dr. Olaf Groth on Brain-Computer Interfaces and Cognitive Technologies


About The Author

Olaf Groth, PhD, is lead co-author of The Great Remobilization: Strategies & Designs For A Smarter Future (MIT Press, forthcoming Oct. 17th, 2023) and of the AI book Solomon's Code: Humanity in a World of Thinking Machines (Pegasus, 2018). He has 30 years of experience as an executive and adviser building strategies, capabilities, programs, and ventures across 35+ countries with multinationals (e.g. AirTouch, Boeing, Chevron, GE, Qualcomm, Q-Cells, Vodafone, Volkswagen), consultancies, startups, VCs, foundations, governments, and academia. He is a Professor of Practice at UC Berkeley's Haas School of Business and Adjunct Professor at Hult International Business School.

