
As Artificial Intelligence (AI) continues to advance rapidly across industries, its integration into regulatory frameworks and the development of new regulations to address its implications are becoming increasingly significant. AI has the potential to transform the way products and services are designed, tested, and brought to market, particularly in sectors such as healthcare and medical technology. But does this necessitate a complete overhaul of the regulatory landscape? This blog explores the evolving role of AI in regulation, providing insights into how regulators and industries respond to this fast-paced technological change. Are you curious to find out how our experts explore this topic? Then, read on and get to know the ISS AG experts Samuel Kilchenmann, Senior Consultant Digital Health & AI, and Kaspar Gerber, Regulatory Compliance Expert.

AI and Regulation: A Call for Evolution or Revolution?

When discussing whether existing regulatory systems need to be rethought entirely, the two experts hold differing opinions. Kaspar contends that AI’s impact on regulation is more about refinement than radical transformation. In his view, the technology itself is not fundamentally new – it represents an evolution of existing tools. He argues that a more nuanced approach to standards and testing is needed rather than a complete rewriting of regulations. While AI’s rapid growth may demand greater emphasis on how products are checked, verified, and validated, Kaspar believes it does not necessarily require a shift in the overall regulatory process.

Samuel offers a contrasting perspective. He points out that AI has already made a significant impact on regulations, particularly in the medical sector. For instance, the introduction of frameworks like the U.S. Predetermined Change Control Plan (PCCP), which allows for the approval of medical devices with the flexibility of ongoing updates, demonstrates how AI-driven innovations have influenced regulatory approaches. While initially developed to address AI-specific challenges, the program has evolved to accommodate other types of submissions not directly related to AI. These changes, while not fundamentally new, represent a shift toward more adaptable, real-time regulatory processes designed to keep pace with AI’s rapid evolution. The EU’s response to regulating AI is the AI Act, whose approach allows products to access the market with the understanding that they may undergo further development, adding an element of flexibility to the certification process.

Medical devices are regulated regardless of the technology they incorporate and must meet the same requirements. Although verification processes may differ, I see no major gaps for AI.

Kaspar, how can AI enhance regulatory compliance and transform the broader regulatory landscape?

In my daily work, I frequently consult regulations and standards to answer compliance questions. While delving into these documents, I occasionally encounter contradictions between legal texts. Resolving these inconsistencies can be challenging, as the proposed course of action to be compliant often depends on how these contradictions are interpreted. AI could assist in identifying contradictions or gaps in regulatory documents, such as guidelines and policies, making the compliance process more efficient and precise. AI could also be leveraged to cross-check and standardise documents, ensuring new guidelines align with existing regulations and are free of internal inconsistencies. This could assist regulators in spotting errors in real-time, ultimately improving the quality and consistency of regulations across industries.
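
As a deliberately simple illustration of the kind of automated cross-checking described above (not an AI system, and all document texts and function names here are hypothetical), a tool could extract numeric requirements from two regulatory texts and flag where they disagree:

```python
import re

def extract_limits(text):
    """Extract (requirement keyword, time unit) -> numeric limit pairs,
    e.g. 'retain records for 5 years' -> ('retain', 'year'): 5."""
    pattern = re.compile(r"(retain|report|review)\D*?(\d+)\s*(day|month|year)s?",
                         re.IGNORECASE)
    return {(m.group(1).lower(), m.group(3).lower()): int(m.group(2))
            for m in pattern.finditer(text)}

def find_contradictions(doc_a, doc_b):
    """Return requirements that appear in both documents with different limits."""
    a, b = extract_limits(doc_a), extract_limits(doc_b)
    return {key: (a[key], b[key]) for key in a.keys() & b.keys() if a[key] != b[key]}

guideline = "Manufacturers must retain records for 5 years and report incidents within 15 days."
policy = "Records shall be retained for 10 years; incidents must be reported within 15 days."

print(find_contradictions(guideline, policy))  # flags the 5- vs 10-year retention conflict
```

A real system would use natural-language understanding rather than regular expressions, but the principle is the same: normalise requirements from different documents into a comparable form, then flag the mismatches for a human reviewer.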

 


AI Regulation in the Context of Emerging Technologies

AI regulation shares similarities with the regulation of other emerging technologies, such as nanotechnology. These similarities are reflected in the emerging concept of conditional approvals, which has become an integral part of the evolving regulatory landscape and is recognised as state-of-the-art in certain jurisdictions. They are also visible in the ongoing discussion of regulatory approaches and strategies – especially the question of whether horizontal regulations, which provide broad frameworks applicable across technologies, or vertical regulations, which are tailored to specific industries or applications, are the better fit. Kaspar points out that both AI and nanotechnology introduced unknowns that regulators struggle to address. For nanotechnology, the potential risks of nanoparticles in the human body were (and still mostly are) unknown, leading to regulation based on the precautionary principle.

Similarly, AI introduces new risks, such as algorithmic biases, misclassifications, and data security concerns, which require careful management. While AI presents both known and emerging risks, it also benefits from more robust risk mitigation and management strategies compared to nanotechnology. As AI systems become increasingly integral to decision-making, particularly in healthcare, regulators must address both the technological complexities and the broader implications of AI-driven decisions.

 


Balancing Innovation and Regulation: Addressing Risks in AI-Driven Decision-Making

As AI advances and becomes integral to critical sectors such as healthcare, concerns about its potential for misuse remain significant. One major issue is the lack of transparency surrounding the data used to train AI systems. Without a clear understanding of data sources and processing methods, there is a heightened risk that AI-generated outputs could be misleading or unreliable, particularly in decision-making contexts. In healthcare, for instance, AI-powered decision-support tools might lead professionals to rely excessively on recommendations, increasing the likelihood of misdiagnoses or inappropriate treatments. The notion that humans remain accountable for final decisions often fails to hold in practice. Kaspar points out that, over time, many users might follow AI recommendations without critically evaluating them, particularly when the system is designed to present highly persuasive results – an observation that aligns with the considerations outlined in the AI Act, Article 6.3, exceptions b and c. These exceptions recognise scenarios where AI systems are intended to enhance the outcomes of prior human actions or to identify patterns and deviations without replacing or unduly influencing human judgment, provided proper human review is maintained. Initially, AI-generated results are often met with critical scrutiny. However, as trust in the system grows, this level of scrutiny tends to diminish – a natural and understandable aspect of human behaviour.

Furthermore, the integrity of the data remains paramount. Data leakage or poisoning can severely compromise a system’s reliability. Moreover, AI’s scalability adds another layer of complexity, as even minor changes to data or algorithms can have unpredictable and far-reaching consequences. Even if one believes that existing regulatory frameworks don’t need a complete overhaul, it is evident that adapting standards and enhancing oversight are essential to mitigating these risks. Ultimately, as AI technology continues to evolve, achieving the right balance between fostering innovation and maintaining robust regulation will be critical to ensuring its safe and responsible application.

While the basic working principles of AI systems are known and easily understood, AI models' sheer size and complexity make them hard to understand fully. Small changes can lead to drastically different outcomes, highlighting the need for explainability.

Samuel, how can AI misuse and the safeguarding of data security be addressed?

To prevent AI misuse and ensure data security, especially in high-risk sectors like healthcare, several considerations and measures are needed:

1. Validation and Verification: AI systems must undergo rigorous checks to ensure accuracy and reliability, including independent verification to prevent errors. Misinterpretation of data or misapplication of AI outside its intended scope can lead to critical errors, such as incorrect diagnoses. In recognition of these challenges, significant efforts are underway to develop new standards and frameworks to address them.
2. Data Integrity: Companies must manage data securely, ensuring proper consent and clear separation of training, test, and verification datasets to avoid contamination. Additionally, emphasis should be placed on data quality, ensuring that datasets are as concise and clean as possible.
3. Explainability: While full explainability may be difficult to achieve, a robust understanding of how the technology operates is essential. Equally important is ensuring that users are adequately trained to use the system effectively and to critically evaluate whether its outputs are reasonable.
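
The dataset separation described in point 2 can be enforced mechanically. The following is a minimal, hypothetical sketch (record contents and function names are illustrative): each record is fingerprinted, so overlaps between training, test, and verification sets can be detected before contamination skews results.

```python
import hashlib

def record_fingerprint(record: str) -> str:
    """Stable fingerprint for a data record, so duplicates can be
    detected even across separately stored copies of the data."""
    return hashlib.sha256(record.strip().lower().encode("utf-8")).hexdigest()

def check_disjoint(train, test, verification):
    """Return the fingerprints shared between any two splits.
    An empty result means the splits are properly separated."""
    splits = {
        "train": {record_fingerprint(r) for r in train},
        "test": {record_fingerprint(r) for r in test},
        "verification": {record_fingerprint(r) for r in verification},
    }
    names = list(splits)
    overlaps = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            shared = splits[a] & splits[b]
            if shared:
                overlaps[(a, b)] = shared
    return overlaps

# A record that leaked from training into the test set is flagged:
train = ["patient scan 001", "patient scan 002"]
test_set = ["patient scan 002", "patient scan 003"]
verification = ["patient scan 004"]
print(check_disjoint(train, test_set, verification))
```

Running such a check as part of the data pipeline, rather than relying on manual bookkeeping, is one concrete way to uphold the data integrity Samuel describes.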

By focusing on these areas, companies can prevent misuse and ensure that AI systems remain effective, secure, and trustworthy.

 


 

Considerations on Recurring Topics and Discussions

Buzzwords often evoke varied reactions within specialised fields, ranging from positive to sceptical. Samuel and Kaspar offer differing perspectives on key topics such as efficiency, security, and interpretation.


Efficiency

Samuel

To me, it feels like a buzzword. Everyone strives for maximum efficiency, to the point where we become overly efficient and don’t allow ourselves any time to simply relax.

Kaspar

With 10,000 tools for efficiency, the options are overwhelming and can lead to a loss of focus. Sometimes, disconnecting is the best way to remain efficient.



Security

Samuel

Today, there's a push for 100% security, but taking risks is key to progress in entrepreneurship. A sole focus on security also induces harm as it hinders or delays new technologies from becoming available. A healthy risk-benefit ratio is key.

Kaspar

While the goal is patient safety, security serves as a tool to demonstrate and maintain it.



Interpretation

Samuel

To each their own.

Kaspar

My daily work consists of interpreting regulations to reach a conclusion. Still, it's very much a personal approach.

 


The Superpowered Teammates That Would Boost the Two Experts

In the ever-evolving world of AI, the right teammates can make all the difference. Kaspar and Samuel’s ideal superpowered counterparts perfectly complement their expertise in navigating the complexities of AI and regulation, each offering unique abilities. Samuel’s counterpart, Inspector Gadget, epitomises his ability to deploy the perfect AI tool for any challenge. Like Inspector Gadget’s extendable arms and magnifying glasses, Samuel’s toolset ranges from advanced data analytics to AI troubleshooting. While these tools, much like Gadget’s gadgets, may occasionally fall short of perfection, Samuel knows how to adapt and optimise them to meet challenges head-on, from navigating regulatory frameworks to analysing intricate data sets.

Kaspar’s ideal teammate is Data from Star Trek: The Next Generation, a synthetic being renowned for his unparalleled intellect and the ability to process complex information at lightning speed. To Kaspar, Data symbolises the ultimate AI assistant – one capable of managing regulatory processes with minimal input, analysing vast datasets, making informed decisions, and even drafting guidelines. With Data’s super-intelligence, Kaspar envisions a future where AI not only enhances efficiency but also empowers Regulatory Compliance Experts to stay ahead of regulatory changes and easily navigate the complexities of AI and regulation.

As AI continues to advance, so too must the regulatory frameworks that govern its use. The conversation surrounding AI regulation remains ongoing, with no universal approach yet agreed upon. However, clear trends are emerging, and regulators are continuously learning and adjusting. One certainty is that the regulation of AI must be as dynamic and adaptable as the technology itself. While AI may not necessitate a complete overhaul of existing regulatory structures, it requires more sophisticated and flexible approaches capable of keeping pace with its rapid evolution.

 


About The Experts


Samuel, a dedicated AI enthusiast, joined ISS AG in 2023, where he leads the development of services focused on AI product validation, verification, quality assurance, and regulatory compliance. He holds a PhD in Bioengineering and Biomedical Engineering from EPFL.
Samuel Kilchenmann, PhD
Senior Consultant Digital Health & AI
Kaspar, experienced in navigating complex regulatory compliance cases, joined ISS AG in 2016. He holds an MS in Biomedical Engineering from the University of Bern.
Kaspar Gerber
Regulatory Compliance Expert

 

