Horizon Europe embraces Artificial Intelligence in Health Research
There is little doubt about the popularity and buzz around artificial intelligence (AI), which is set to revolutionise numerous industrial sectors over the coming years. And few sectors offer greater opportunities for the application of AI than health research and healthcare.
Health is inherently complex, with large volumes of data, significant patient-to-patient variability and a wide range of illnesses, conditions and infectious diseases.
The scope and breadth of opportunities to apply AI in health are enormous. These include clinical decision-making, personalised medicine, drug discovery, clinical trial analysis, radiology and image screening, pathology, surgical interventions and robotics, patient care, and remote monitoring of disease progression.
In the area of public health surveillance, AI can be used as a tool for the prediction of epidemics, the modelling of disease physiology and predictive modelling more broadly. Within the healthcare setting, AI can support healthcare professionals in clinical decision-making, assisting a diagnosis, determining the optimal patient treatment and care, and increasing efficiency across the entire healthcare system.
Artificial Intelligence (AI) in the Horizon Europe Health Cluster
The application of artificial intelligence (AI) and AI research has featured strongly in the Horizon Europe health cluster since the first call in 2021. Here are some examples of previously funded topics from the first health work programme 2021 to 2022:
If you view the statistics on the number of applications under the call update section at the links above, you will notice that most of the topics focused on the development of AI were extremely competitive and over-subscribed. As a result, a decision was made that AI would now be considered a tool in a ‘toolbox’, or a platform/enabling technology, in a similar way to proteomics and genomics for health. Applicants should be aware of this and should scan the topic texts in the work programmes for specific mentions of ‘artificial intelligence’, ‘AI’ or ‘new or next-generation tools and technologies’; even where AI is not mentioned explicitly, it can be applied in a proposal wherever relevant or required.
Artificial Intelligence now needs to be addressed in the Ethics Section of the application forms
The use of AI, the techniques used and its risks now need to be addressed in the ethics section of Horizon Europe’s application. The scope of AI activities involves the development, deployment and/or use of AI-based systems or techniques. All applicants will be asked the following questions in their proposal:
1. Does this activity involve the development, deployment and/or use of Artificial Intelligence? (if yes, detail in the self-assessment whether that could raise ethical concerns related to human rights and values and detail how this will be addressed). [YES/NO]
2. Are there any other ethics issues that should be taken into consideration? [YES/NO]
If there are ethics issues, a detailed description of them will need to be provided. In addition, in the health programme, the use and application of AI may need to be addressed in more detail in what is referred to as the Clinical Trial Document Essential Information for Clinical Studies. Note that this document has an expanded use: it is also the place to provide details on any sensitive data used, the data management plan, information on patient consent, and how AI is applied in the diagnosis, treatment and care of patients. In most cases this document is a mandatory requirement for health-related research projects.
Feedback from the Horizon Europe Evaluation of AI Projects in the Health Cluster
After each call or round of funding in Horizon Europe, the health programme committee or national representatives are provided with collective feedback on the evaluation process. Following the first call in 2021, the expert panel concluded that the AI elements of the proposals were not adequately addressed: the application of AI, the basis of its development and the logic behind the algorithms were not adequately explained. Future applicants are therefore advised that projects utilising AI should provide adequate information, using explainable AI tools if necessary, on how the AI is applied and the basis on which it was developed.
Explainable AI and Ethical Consideration for Health
Explainable AI, also known as interpretable AI or explainable machine learning, is artificial intelligence in which humans can understand the reasoning behind the decisions or predictions made by an AI tool or program.
For example, a doctor or researcher might use explainable AI for cancer detection and treatment, where the algorithm shows the reasoning behind a given AI model's decision. This makes it easier not only for doctors to make treatment decisions, but also to provide data-backed explanations to their patients.
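One simple form of explainable AI is an inherently interpretable model whose learned weights can be inspected directly. The sketch below (an illustration only, not from any Horizon Europe project) trains a logistic regression classifier on scikit-learn's bundled breast cancer dataset and prints the features that most strongly drive a malignant/benign prediction:

```python
# Illustrative sketch: an interpretable model for cancer detection.
# Standardised logistic regression coefficients show how strongly each
# feature pushes a prediction towards one class -- a basic form of
# explainable AI. Dataset and feature names come from scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(data.data, data.target)

# Rank features by the magnitude of their (standardised) coefficients,
# so a clinician can see which measurements the model relies on most.
coefs = model.named_steps["logisticregression"].coef_[0]
ranked = sorted(zip(data.feature_names, coefs), key=lambda t: -abs(t[1]))
for name, weight in ranked[:5]:
    print(f"{name}: {weight:+.2f}")
```

More complex models (deep networks, gradient boosting) typically need post-hoc explanation tools instead, but the principle is the same: the reasoning behind each prediction must be surfaced to the clinician.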
Healthcare professionals and patients need to know that AI is being used responsibly, accurately and equitably, and that it is inclusive of and applicable to all patients. This is a difficult balancing act, as there is often variation by age, gender or ethnicity in how patients present with a disease or condition. Here are some examples of these variations:
· Skin cancer prevalence is influenced by race and ethnicity: Caucasians (white or pale-skinned individuals) are more likely to develop many types of skin cancer than any other racial group.
· Men can develop breast cancer, but this disease is about 100 times more common among women than men.
· You can develop type 2 diabetes at any age, even during childhood. However, type 2 diabetes occurs most often in middle-aged and older people.
To add to these naturally occurring variations, the growing issue of co-morbidity or multi-morbidity, the simultaneous presence of two or more diseases or medical conditions such as diabetes, cardiovascular disease or COPD, is also a significant concern and should be considered when applying AI and building AI algorithms. With multimorbidity come the risks of poly-pharmacy (the use of multiple medicines, which may have synergistic or antagonistic effects); this too needs to be considered when predicting a patient’s treatment and care using AI technology.
Many datasets are not broad enough to accurately reflect the entire population, which can inadvertently introduce bias when AI tools and algorithms are used to direct a patient’s treatment and care. In health, particular consideration must be given to the accuracy of an AI program, not just to ensure it determines a patient’s care correctly, but also to ensure a patient is not denied care due to an inherent bias in an AI algorithm.
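One practical way to surface this kind of bias is to measure a model's accuracy separately within each demographic subgroup rather than only overall. The sketch below is hypothetical: the column names (`label`, `prediction`, `ethnicity`) and the toy records are invented for illustration, not drawn from any real study:

```python
# Hypothetical sketch: auditing an AI model's predictions for subgroup bias.
# A large accuracy gap between subgroups signals that the training data may
# not represent all patients equally.
import pandas as pd

def subgroup_accuracy(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Fraction of correct predictions within each demographic subgroup."""
    return (df["prediction"] == df["label"]).groupby(df[group_col]).mean()

# Toy records: labels vs. model predictions for two (fictional) groups.
records = pd.DataFrame({
    "label":      [1, 0, 1, 1, 0, 1, 0, 0],
    "prediction": [1, 0, 1, 0, 0, 0, 0, 1],
    "ethnicity":  ["A", "A", "A", "A", "B", "B", "B", "B"],
})
per_group = subgroup_accuracy(records, "ethnicity")
print(per_group)  # group A: 0.75, group B: 0.50 -- a gap worth investigating
```

In a real project such an audit would use held-out clinical data and additional fairness metrics (e.g. false-negative rates per group), since accuracy alone can mask a model that denies care to one subgroup more often than another.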
The complexity of human health, with its many challenges and variables, provides a compelling case for the application of AI. However, healthcare deserves extra consideration because it presents a much broader spectrum of risks than most other uses of AI. After all, an error by an AI programme used in diagnosis or treatment could cause physical harm to a patient, or even death. We simply can’t afford to get this wrong.
Key Takeaways
AI is now considered a tool or platform technology that can be applied across the health research programme. Applicants are advised to provide an adequate explanation of the logic or reasoning behind their AI algorithms, and to be mindful of AI bias at all times.