At the beginning of March 2023, missions #3 and #4 brought me to Bonn and Zurich. It’s always refreshing to discuss EU topics outside the bubble and to meet experts & practitioners.
◾ On 2 March, I gave the introductory keynote at a policy workshop on 'The EU AI Act and Voices from the Global South' organized by the Academy of International Affairs NRW in Bonn. I underlined that the EU should be more humble, as our AI Act is based on internationally agreed concepts. We are not the only ones trying to regulate; many other countries are working on AI laws. A 2nd Brussels Effect will only happen if we: (a) step up the regulatory dialogue with our international partners, (b) do not deviate too much from globally accepted frameworks, and (c) strike the right balance between protecting against AI risks and promoting AI deployment. Find more information.
◾ On 10 March, I was one of the panelists on 'Regulation of AI in insurance' during the #PROGRES23 conference organized by The Geneva Association in Zurich. I emphasized that a horizontal EU law will never be able to cover every sector or AI use case adequately. There are just too many, and they are too diverse. Thus, the EU regulator should have gone for a US Bill of Rights + Sectoral Laws approach. Now that the European Commission has chosen a horizontal approach, we should at least make the AI Act broad and flexible enough to take these differences into account. And we must hope for excellent Harmonised Standards. Find more information.
◾ Afterwards, I met the Lakera Team - a very exciting AI safety startup that helps developers test, monitor, protect, and improve their model performance and data quality in order to become AI Act compliant. Their products would complement AI Trust Labels such as VDE's by testing models against specific criteria and indicators. Although I tend to be rather critical of the implementation of the AI Act, meetings like this one show me that there is still hope that the new legal framework can work after all.