Olívia Erdélyi - Standards as the Missing Link in AI Regulation

Keynote Speech

In a lively and informative session, Olívia Erdélyi outlined how standards can become a powerful ally for organizations striving to meet complex AI regulatory demands. She began by noting the ever-expanding set of regulations—such as the EU’s AI Act—that can leave practitioners wondering where to start. According to Erdélyi, standards should be seen not as optional extras but as integral tools that bridge gaps and offer actionable detail often absent from high-level legislation.

Regulations vs. Standards

Erdélyi clarified that “regulation” is an umbrella term covering both statutes (like the AI Act) and more detailed instruments, including standards. While high-level statutes set out broad objectives, they typically lack the technical detail needed for everyday implementation. That is where standards come in, providing the nuts and bolts for compliance.

Key Standardization Bodies

Erdélyi highlighted the global significance of ISO (International Organization for Standardization) and IEC (International Electrotechnical Commission). Their Joint Technical Committee 1 (JTC1), and specifically its Subcommittee 42 (SC42), is central to developing AI-related standards—ranging from foundational definitions (e.g., ISO/IEC 22989) to data governance and risk management. At the European level, CEN and CENELEC are similarly vital, working on standards that often align with international norms while also accounting for regional requirements.

Navigating the Landscape

One major challenge lies in determining which “target” is regulated. Are we talking about the AI system as a product or service? Or is it about the organizations themselves? The answer matters because different standards address each scenario. For instance, risk management (aligned with ISO 31000 and ISO/IEC 23894) appears in Article 9 of the AI Act, while quality management systems, potentially corresponding to ISO/IEC 42001, feature in Article 17. Understanding the interplay between these frameworks is crucial.

Certification Gaps and Future Directions

Formal certification of AI products and services remains in its infancy. Erdélyi mentioned ISO/IEC 17067, which sets out how certification schemes should be designed but does not define AI-specific requirements. Similarly, ISO/IEC 17026 (for tangible products) and 17028 (for intangible services) lay the groundwork that AI-specific schemes will build upon. Erdélyi expects further development in these areas, signaling a fast-evolving space that organizations should watch closely.

A Call to Action

Erdélyi acknowledged the “messy” state of AI regulation and standardization but urged organizations not to wait. Embracing existing standards, even if imperfect, can help maintain market trust and protect competitive advantage. As regulators and standards bodies refine their frameworks, adopting practical measures now is the best route to staying ahead.

Olívia Erdélyi
Professor, University of Bonn & Canterbury / Partner, Phi Institute