
How do we address the risks of Artificial Intelligence?




You will probably have heard a lot about Artificial Intelligence (AI) and how it is being used to transform healthcare. Here are a few examples:

  • Empowering patients: AI is being used to help patients manage their own health and access information about their condition.

  • Patient monitoring: AI can be used to monitor vital signs, track patient progress and detect potential issues, leading to improved condition management and triage, and ultimately better care outcomes for patients.

  • Clinical decision support: AI-powered diagnostic tools can analyse large amounts of medical data, such as images and patient records, and in some tasks support faster and more accurate diagnoses than humans achieve alone.

  • Predictive analytics: AI is being used to predict and prevent medical issues by analysing patterns in patient data, such as a patient's genetic and medical information, to identify risks and create personalised prevention and treatment plans.

  • Streamlining clinical workflows: AI is being applied to improve the productivity and efficiency of healthcare delivery by automating routine tasks, such as appointment scheduling, freeing healthcare professionals to focus on more complex tasks and spend more time looking after patients.

  • Drug discovery: AI is being used to examine large data sets to identify new drug targets and to simulate drug interactions, resulting in faster drug discovery.


However, what you may not have heard is that the benefits of AI do not always flow equally across a population. There is increasing evidence that biases built into AI technologies, mirroring inequities inherent in our society and health systems, can lead to unintended consequences and may even cause harm. Questions are being raised about the risks and unintended negative impacts we may introduce, for example inaccurate or unfair diagnoses and treatment decisions. As reports uncovering serious issues mount, these ethical debates centre on how AI, and importantly the data that underpins it, should be used, and on its potential to perpetuate societal biases.


In 2018, an AI system Amazon had developed to assist in the selection of job candidates was found to be biased against women[1]. The algorithm was trained on resumes submitted to the company over a ten-year period, during which the majority of applicants were male. As a result, the system developed a bias against resumes containing words and phrases typically associated with women, such as "female" and "women's". It downgraded resumes that mentioned women's organisations and even penalised candidates who had attended all-women's colleges. After the discovery, Amazon reportedly scrapped the model, moved to diversify its pool of applicants by actively reaching out to women and underrepresented groups in the technology industry, and began monitoring its hiring practices to ensure they are fair and unbiased.


This isn't a new problem. In 1988, the UK Commission for Racial Equality found a British medical school guilty of discrimination after discovering that the computer program it used to shortlist applicants for interview was biased against female candidates and those with non-European names[2]. The program matched human admissions decisions with 90 to 95% accuracy precisely because it mimicked the bias inherent in those decisions; notably, the school still had a higher representation of non-European students than other medical schools in London.


Fast forward 30 years, and the Artificial Intelligence tools available today are considerably more complex, but the challenges we face are the same. Within healthcare there is growing concern about the risks that the increasing use of AI algorithms poses to marginalised populations, such as women, Black individuals, and low-income patients. Such biases can result in under-diagnosis, where the AI algorithm inaccurately labels a patient as healthy, potentially delaying crucial access to medical care.


In 2019, a clinical algorithm used in the US to identify patients requiring additional clinical care was found to require Black patients to be sicker than equivalent white patients before recommending them for the same care[3]. On further examination, it emerged that the algorithm had been trained using health spending as an indicator of historical clinical need. Black individuals have historically spent less on healthcare because of long-standing disparities in wealth and income, not because of any reduced need for care. Similarly, an AI system used for chest X-ray pathology classification has been found to consistently and selectively under-diagnose under-served patient populations, with the under-diagnosis rate even higher for those belonging to ‘intersectional’ groups, such as Hispanic women[4].
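To make that mechanism concrete, here is a minimal sketch in Python using synthetic data. It is not the study's actual model: the group labels, the 30% spending gap, and the top-10% referral cut-off are all illustrative assumptions.

```python
# Minimal sketch: training a "who needs care?" score on spending, when one
# group spends less per unit of true need, bakes the disparity into the score.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)   # 0 = group A, 1 = group B (synthetic labels)
need = rng.gamma(2.0, 1.0, n)   # true clinical need, same distribution in both groups

# Illustrative assumption: group B spends ~30% less per unit of need,
# reflecting unequal access to care rather than better health.
spend = need * np.where(group == 1, 0.7, 1.0) + rng.normal(0, 0.1, n)

# Stand-in for a risk score trained on spending: refer the top 10% of spenders.
threshold = np.quantile(spend, 0.9)
referred = spend >= threshold

for g in (0, 1):
    mask = referred & (group == g)
    print(f"group {g}: mean true need of referred patients = {need[mask].mean():.2f}")
# Group B's referred patients show higher mean need: they had to be sicker
# to clear the same spending threshold.
```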


Such examples raise ethical concerns that AI could worsen existing inequities in access to healthcare and treatment, leading to unequal delivery of care and perpetuating existing inequalities in health.


Bias can be introduced into an AI system at any stage of its development: in the data used to train the models, in the algorithms used to develop them, and in the way they are deployed and used in clinical care. When an AI system is trained on data that contains prejudiced human decisions or mirrors historical or social disparities, the AI will perpetuate these prejudices, even when variables such as gender, race, or sexual orientation are excluded. As our understanding and use of AI grows, it is increasingly important to understand these sources of bias in order to implement effective mitigation strategies.
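To illustrate that last point, the sketch below trains a simple classifier on synthetic, historically biased referral decisions, with the protected attribute deliberately left out of the inputs. Everything here, the variable names, the strength of the postcode proxy, and the size of the historical bias, is an assumption made for the example, not a real dataset or model.

```python
# Minimal sketch: excluding a protected attribute does not remove bias
# when a correlated proxy variable remains in the training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

protected = rng.integers(0, 2, n)                   # synthetic protected group label
postcode = 0.8 * protected + rng.normal(0, 0.3, n)  # proxy strongly tied to the group
symptoms = rng.normal(0, 1, n)                      # legitimate clinical signal

# Historical labels carry human bias: the protected group was
# under-referred at the same symptom level.
logits = symptoms - 1.0 * protected
referred = rng.random(n) < 1 / (1 + np.exp(-logits))

# Train WITHOUT the protected attribute; the proxy is still available.
X = np.column_stack([symptoms, postcode])
model = LogisticRegression().fit(X, referred)

for g in (0, 1):
    rate = model.predict_proba(X[protected == g])[:, 1].mean()
    print(f"group {g}: mean predicted referral probability = {rate:.2f}")
# The gap persists: the model reconstructs group membership from the proxy.
```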


A lack of regulation has so far allowed this situation to continue; however, this is now changing. Governments across the world are considering these issues and consulting on regulatory frameworks and governance structures that will effectively mitigate the risks of AI, ensuring AI-driven health technologies are effective, safe and equitable, while also accommodating the speed at which the technology develops.


The EU, for example, is enacting a raft of legislation to make AI systems safer. The AI Act is one such proposed piece of legislation[5]. The Act aims to create a comprehensive framework for AI, covering issues such as ethical considerations, transparency, and accountability. It is expected to provide a legal basis for AI regulation, ensuring that AI systems are used in a manner consistent with European values and principles, such as respect for human dignity, non-discrimination, and privacy. The Act is also expected to provide guidelines for the development of AI systems and to ensure that these systems are tested and evaluated for safety, reliability, and effectiveness.


It's not all bad news. AI brings significant opportunities for enhancing traditional human decision-making, which, as we all know, is flawed in its own right. Machine learning systems disregard variables that, based on the data available to them, do not affect outcomes. This contrasts with humans, who may conceal, or be unaware of, the factors that influence their flawed decisions, for example when hiring or rejecting a job candidate. It is also easier to assess and unpick an algorithm for bias than a person, and doing so can even expose hidden human biases.
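As a rough sketch of what probing an algorithm for bias can look like, the snippet below compares under-diagnosis (false negative) rates across patient subgroups, the metric at the heart of the chest X-ray study cited above. The `underdiagnosis_rate` helper and the toy arrays are hypothetical, made up for illustration.

```python
# Minimal sketch of a fairness audit: compare error rates across groups.
import numpy as np

def underdiagnosis_rate(y_true, y_pred, group, g):
    """False negative rate within one group: truly ill patients labelled healthy."""
    mask = (group == g) & (y_true == 1)
    return np.mean(y_pred[mask] == 0)

# Hypothetical toy data: 1 = disease present / flagged, 0 = healthy.
y_true = np.array([1, 1, 1, 1, 0, 0, 1, 1])
y_pred = np.array([1, 0, 1, 1, 0, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # patient subgroup labels

for g in (0, 1):
    rate = underdiagnosis_rate(y_true, y_pred, group, g)
    print(f"group {g}: under-diagnosis rate = {rate:.2f}")
# A large gap between groups is a red flag worth investigating; the same
# check is far harder to run on opaque human decision-making.
```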


AI can also be utilised to enhance decision-making in ways that benefit traditionally marginalised groups, a concept referred to as “disparate benefits from improved prediction”. The idea is that better predictions can disproportionately benefit groups who have historically been subject to human biases in decision-making, reducing the impact of those biases and leading to more equitable outcomes.


Artificial Intelligence has been hailed as a transformative technology in healthcare, but there are also concerns that the biases inherent in our society and health systems can lead to unintended consequences and harm. Despite advances in AI, the challenge of preventing bias remains. It is important to understand these issues and the sources of bias in order to implement effective mitigation strategies and ensure fair and equal delivery of care.


Want to find out more? Read Equiti Health’s report on mitigating bias in Artificial Intelligence, which you can find here.


REFERENCES

[1] Dastin, J. "Amazon scraps secret AI recruiting tool that showed bias against women." Reuters, 10 October 2018.

[2] Lowry, S. and Macpherson, G. "A blot on the profession." British Medical Journal, 296(6623): 657–658, 1988.

[3] Obermeyer, Z., Powers, B., Vogeli, C. and Mullainathan, S. "Dissecting racial bias in an algorithm used to manage the health of populations." Science, 366(6464): 447–453, 2019.

[4] Seyyed-Kalantari, L., Zhang, H., McDermott, M.B.A., Chen, I.Y. and Ghassemi, M. "Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations." Nature Medicine, 27: 2176–2182, 2021.

[5] European Commission. "Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)." COM(2021) 206 final, 2021.
