Abbreviations
EEG – Electroencephalography
MEG – Magnetoencephalography
AI – Artificial Intelligence
HTA – Health Technology Assessment
MCI – Mild Cognitive Impairment
ML – Machine Learning
DL – Deep Learning
GDPR – General Data Protection Regulation
Ethics and trustworthiness
Below, you will find a collection of resources exploring the ethical considerations of AI in healthcare, including insights into European regulations, human rights issues, technical uncertainties, and the broader societal impacts of AI-driven tools. Aimed at clinicians, patient organizations, patients, carers, politicians, healthcare managers, and informal patient representatives, this collection is intended to help readers understand and navigate the ethical dimensions of AI in the healthcare sector.
Ethics in healthcare
Resources addressing the ethical dimensions of healthcare, including information on patient consent, privacy concerns, and the impact of emerging technologies on patient rights. Relevant guidelines on ethical practices, including maintaining equity and transparency in treatment and care.
Stay up to date – check back soon for new reports on this topic.
AI-Mind deliverables
D1.5, August 2022: Report on ethics and acceptability of digital diagnostic solutions: This deliverable examines the ethical considerations surrounding the use and communication of AI-based risk prediction tools in clinical settings. It reflects on how clinicians convey AI-generated dementia risk predictions to individuals with Mild Cognitive Impairment (MCI) and outlines a strategy for fostering trustworthy communication. Additionally, the report explores the potential impact of these technologies on the doctor–patient relationship, acknowledging that AI-driven changes in diagnostics will affect not only patients but also their partners, families (including “families of choice”), and healthcare professionals.
Info cards: dementia, MCI, risk factors. Available here.
Trustworthiness of AI
These collected resources provide insights into evaluating AI models for reliability and accuracy, with an emphasis on transparency and accountability. They also cover user interface design principles, a human-centric approach, and best practices for validating AI systems and integrating user feedback.
Challenges and Trends in User Trust Discourse in AI Popularity. Read here: DOI
A systematic literature review of user trust in AI-enabled systems: an HCI perspective. Read here: DOI
AI-Mind deliverables
D1.2, November 2021: AI-Medical Device software compliance requirements map: This deliverable provides a preliminary assessment of the Medical Device Regulation (MDR) and the proposed Artificial Intelligence Act (AIA) in relation to the AI-Mind Connector and Predictor. The MDR establishes mandatory requirements for medical devices in Europe, replacing the previous Medical Device Directive (MDD) as of May 2021. The AIA, introduced by the European Commission, aims to regulate AI applications through strict documentation, training, and monitoring requirements. Based on an initial evaluation, AI-Mind is expected to be classified as a Class IIa medical device and a high-risk AI system.
D4.2, May 2022: AI-Mind UI/UX Design and Guidelines: This deliverable describes the research and development of key user-focused elements for the AI-Mind Platform, including user personas, usage scenarios, and journey maps. It also presents early and advanced prototypes of the AI-Mind Connector and Predictor applications, along with guidelines for their use. These elements follow a Human-Centered Design (HCD) approach to create a platform that is easy to use and trusted by its users. The report explains the methods used, the development process, and the main results obtained.
Stay up to date – check back soon for new resources on this topic.