
Artificial intelligence is poised to transform healthcare by expediting the digitalisation of medical devices. The use of AI in medical devices is expected to proliferate worldwide in the coming years, and on the regulatory side, the world's attention has been focused on the EU's AI Act and the implications of these first comprehensive AI rules. In 2021, the European Commission proposed the AI Act, which has since been the subject of intense debate. The new Act regulates AI systems in general but creates specific challenges for medical device manufacturers, who must comply with the requirements of the AI Act as well as the Medical Device Regulation (MDR) and the In Vitro Diagnostic Medical Devices Regulation (IVDR). The implications of the AI Act are still uncertain, and its full impact will only be understood once the implementation tools are in place, a situation reminiscent of the MDR and the IVDR. The following elaborations offer a broad overview of the requirements introduced by the new Act and how they may affect the availability of innovative AI-based medical devices on the European market.

The AI Act sets out a regulatory framework based on risk

On 13 March 2024, the European Parliament approved the text of the Artificial Intelligence Act (AI Act). The AI Act comes in the form of an EU regulation, which will be binding in its entirety and directly applicable in all EU Member States. This is a type of legal act the medical device industry is familiar with, since the MDR and IVDR replaced the previous directives. The Act will enter into force 20 days after its publication in the Official Journal of the EU, which will start the transition timelines for its provisions and requirements. Similar to the MDR and IVDR, around 20 delegated and implementing acts are expected following the adoption of the Act to implement the new regulatory framework.

The AI regulation establishes a framework to categorise applications based on risk. Applications with unacceptable risks will be banned, high-risk applications will be heavily regulated, and limited-risk applications will be subject to transparency requirements. While an individual risk classification assessment will be required in every case, most medical devices will fall into the high-risk group. 

This means that there will be requirements regarding:

  • Rigorous risk assessment and mitigation: Companies must thoroughly assess the potential risks associated with their AI applications. This includes identifying potential hazards, evaluating the likelihood and severity of harm, and implementing measures to mitigate these risks.
  • High-quality data sets: Data used to train and validate AI systems must be accurate, complete, and free from biases that could lead to discriminatory outcomes.
  • Transparency and traceability.
  • Appropriate human oversight to ensure accountability.

Medical device software that uses AI technology must undergo a conformity assessment by a notified body. Technical documentation and a Declaration of Conformity will be required to demonstrate conformity with the AI Act and to affix the respective CE mark. The legal framework will apply to both public and private actors inside and outside the EU, as long as the AI system is placed on the Union market or its use affects people located in the EU. It concerns both providers and deployers of high-risk AI systems. Importers of AI systems will also have to ensure that the foreign provider has already carried out the appropriate conformity assessment procedure and that the system bears the CE marking and is accompanied by the required documentation and instructions for use.

All of this sounds rather familiar, but the regulation of AI brings its own particular challenges. AI is advancing at an unprecedented pace, making its future capabilities and applications hard to predict and leaving regulators the difficult task of crafting flexible rules without stifling innovation.

The EU provides sandboxes to test AI applications under regulatory supervision

One way to address the challenges of regulating AI is through so-called regulatory sandboxes, a specific type of regulatory mechanism. They offer businesses the opportunity to test and experiment with innovative products, services, or business models under the supervision of a regulator for a defined period. Sandboxes give companies access to near-market conditions, trading closer regulatory supervision for less stringent requirements. Using these tools also promotes exchange between companies and authorities, and the knowledge gained can be made available to other companies.

However, such sandboxes are only useful if they follow a flexible and less bureaucratic process, particularly for application, assessment, and exit. Furthermore, to be functional, these sandboxes should not be overcrowded and should be able to provide expert advice and assistance in many fields, including technological and regulatory matters.

Regulatory sandboxes and real-world testing will be established at the national level. Much hope is being placed in pilot projects that are already running, and the knowledge gained from such projects can be expected to contribute to practical guidelines for the responsible use of AI. Companies that participate in such projects can not only secure early market access but also help make the technology safer. Such sandbox projects have already been initiated in various countries, including Denmark, Finland, and Switzerland. In Switzerland, the canton of Zurich has completed its first projects and has already published guidelines with its findings, which are available on its website.

AI Office at EU level to support governing bodies in Member States

Understanding the Act's implications and its potential to shape a more regulated and secure AI landscape is paramount. Recognising this, the EU has established the AI Office, a centre for AI expertise, to provide the necessary support and practical guidance to the Member States' governing bodies. The office will implement the AI Act and coordinate AI policy, with the primary goal of promoting AI knowledge and innovation. It will also set up EU-level advisory bodies, facilitate support and information exchange, and help ensure the coherent application of the AI Act across Member States.

In addition to its other responsibilities, the office will draw up codes of practice, guidelines, and implementing and delegated acts to support the implementation of the AI Act. It will provide access to AI sandboxes and real-world testing and encourage collaboration with institutions worldwide. To ensure the new rules are implemented effectively and uniformly, the office will issue recommendations and opinions to the Commission regarding high-risk AI systems and other relevant aspects. It will also support standardisation activities in the area.

Are start-ups and small firms the ones drawing the short straw – again?

While some praise the Act, others have been strongly critical of it. The most common opposing view is that the AI Act will harm EU-produced technology, start-ups, and research, potentially leading to their demise. The additional compliance costs hit start-ups and small and medium-sized enterprises (SMEs) the hardest. The EU Commission emphasises that the Act aims to enhance safety rather than hamper innovation, which is why SMEs will benefit from priority access to regulatory sandboxes.

Critics fear clashing conformity assessment procedure requirements and additional administrative inefficiencies

Industry associations are calling for effective alignment mechanisms between the AI Office, the European Commission's Medical Device Coordination Group, and all stakeholders in the upcoming implementation of the AI Act. This alignment is needed to ensure the safety, performance, and effectiveness of AI-enabled medical devices and to streamline the regulatory process, potentially reducing costs and delays. The requirement for another conformity assessment by a notified body is especially concerning and may lead to further delays in the availability of medical devices on the EU market.

The AI Act introduces requirements, such as risk management and conformity assessment procedures, that may clash with existing elements of the MDR, even though the MDR already ensures patient safety for medical devices. Critics point out that crucial terminology and obligations in the AI Act and the MDR/IVDR are not aligned, which could lead to contradictory and inefficient implementing measures. Additional guidance is necessary to prevent the new legal text from complicating the regulation of medical devices and IVDs and from aggravating potential incompatibilities with medical device regulations, guidance documents, and harmonised standards.

One example of how the AI Act introduces additional administrative inefficiencies is that manufacturers of medical devices incorporating an AI system that qualifies as a biometric categorisation system must register in both EUDAMED and a yet-to-be-established AIA database.

Outlook 

The AI Act presents an opportunity for Europe to position itself in the global AI market. However, this is not without its challenges. The regulatory framework may impose additional burdens on companies, particularly those in the high-risk category. On the other hand, it also provides a clear framework for the responsible use of AI, which can enhance trust and encourage investment.

It is also encouraging that the European Union states in the AI Act that start-ups and SMEs should be prioritised for access to AI sandboxes. This is a clear commitment to strengthening local players and thus creating a strong AI ecosystem, rather than relying on a few global players. Recognising the unique challenges posed by AI technology, authorities worldwide emphasise collaboration with industry and expert groups. This approach not only acknowledges the expertise and insights of these stakeholders but also ensures that regulatory strategies are comprehensive and effective.

It remains to be seen whether the Act succeeds in promoting innovation and investment in AI. Medical device manufacturers should stay informed about the Act's requirements, standards, and potential implications for product development and safety in order to make informed decisions and maintain compliance.

Samuel Kilchenmann, PhD   
Digital Medtech Consultant
