Martha Cahill

National Contact Point for Cluster 1 Health

Enterprise Ireland          

Artificial intelligence in Horizon Europe

Artificial intelligence (AI) continues to revolutionise how we communicate, work, learn, analyse and research. How it is used and applied in Horizon Europe, however, must be carefully considered. This was the focus of the Artificial Intelligence (AI) in Horizon Europe panel at the Horizon Europe Impact Conference last December.

The panel included: John Durcan, Chief Technologist, IDA; Patrick Fenton, Chief Operating Officer, IDIRO Analytics; and Professor Alan Smeaton, member of the Government's AI Advisory Council, Professor at Dublin City University (DCU) and a founding director of the Insight Centre for Data Analytics.

In the conversation, facilitated by Martha Cahill, National Contact Point for Horizon Europe's Cluster 1 – Health at Enterprise Ireland, the panellists discussed how AI should be addressed in proposals, explored the potential implications of upcoming regulatory changes, and highlighted the importance of data quality, along with some key dos and don'ts.

Addressing AI in proposals

The application of AI and AI research has featured significantly in Horizon Europe since the first calls in 2021. In 2024, the European Commission launched the GenAI4EU initiative to stimulate the uptake of generative AI, dedicating some €4bn of Horizon Europe funding to generative AI projects across all sectors for the remainder of the programme.

Despite this strong emphasis on AI, feedback from Horizon Europe evaluators has shown that AI is not being properly addressed in research proposals.

When AI forms part of a research project, applicants must explain several elements. The use of AI techniques and any associated risks must be addressed in the ethics section of the application, including the scope of AI activities and the deployment and use of AI-based systems or techniques. Applicants must provide a detailed description of any ethical issues, such as bias, and explain how these challenges have been addressed. Where an AI algorithm is used, applicants must explain how its conclusions are drawn and disclose any risks or biases that may be built into the algorithm.

Different pieces of legislation could govern different applications of AI. In health, for example, an AI system could be classed as a medical device, which would mean it falls under different regulation from the new AI Act. It is essential that applicants understand which legislation may be relevant to their sector and ensure this is addressed and highlighted in the application.

Approaching high-risk AI

The regulatory landscape is set to become more complex, and researchers must be aware of this and prepared for it. The European Union (EU) AI Act, for example, is set to come into effect within the next couple of years. This legislation governs high-risk AI, which can include systems relating to health, safety or human rights.

While legislation is important to support the development of trustworthy AI, it adds a layer of complexity that researchers need to be prepared to navigate. The onus will be on them to decide whether their projects are considered high-risk or whether their use of AI falls under specific legislation.

As understanding of the EU AI Act develops, applicants must carry out due diligence to determine what level of risk their project falls into. The safest approach is to keep standards high and treat every project as high-risk; that way, all bases are covered. It is also important to keep humans in the loop throughout the process and to stay on top of evolving regulatory changes.

As with any new legislation, guidance will become available over time, as it did after the General Data Protection Regulation (GDPR) was adopted in 2016. It is crucial for organisations to embrace AI, seek the relevant guidance and training, and ensure everyone involved understands the concepts behind AI.

Maintaining data integrity

Protecting the quality and integrity of data is an important consideration in any area of research and must be addressed in proposals. It is vital to encrypt data and ensure sensitive information is treated with absolute care and respect.

Data, especially sensitive information, must be encrypted before it is transferred to or shared with relevant stakeholders.

It is also important to ensure data is trustworthy. With the emergence of generative AI and its applications for research, this means being mindful of the quality of the data the AI has been trained on.

The European Commission will continue to embrace the use of AI and generative AI, and to be successful in Horizon Europe, applicants must approach it responsibly and ethically. This includes explaining the approach taken and the steps followed to ensure AI is used transparently. Researchers must also be prepared for upcoming regulatory changes, set to come into effect in 2027. Finally, they will need to consider data quality and address any concerns or biases in their applications.