Image Credit - Ekonomist

Gender Bias Plagues AI Healthcare Tech

August 13, 2025

Medicine And Science

AI in Healthcare: A New Frontier of Gender Bias

Research from the London School of Economics reveals that artificial intelligence systems, now in use at a majority of England's councils, consistently downplay the severity of women's health problems. This disparity risks embedding gender bias in decisions about care and fostering a future of unequal treatment.

A landmark study has exposed a troubling pattern in how artificial intelligence is applied within England's social care framework. These AI systems, intended to lighten the caseloads of overworked social workers, demonstrate a considerable gender bias. The research, from the London School of Economics and Political Science (LSE), found that AI tools are more inclined to understate or ignore women's psychological and physical needs than those of men with identical case notes. This algorithmic bias may result in women receiving insufficient support, a direct consequence of their needs being judged less serious by the artificial intelligence. The study raises pressing questions about the governance and supervision of AI in public services.

The Stark Reality of Algorithmic Bias

The LSE research offered a clear picture of this bias. Academics fed genuine case files belonging to 617 adults receiving social care into several large language models (LLMs), altering only the subject's gender. With Google's Gemma model, the reports created for men and women were drastically different. As an illustration, a man's file might characterize him as "disabled," "unable," and possessing a complicated history of medical issues. Conversely, a woman facing the same circumstances could be described as self-sufficient and capable of handling her own personal hygiene. These seemingly minor changes in wording carry deep significance for the standard of care that is delivered.

A Deeper Dive into the LSE's Findings

The LSE academics scrutinized a dataset of 29,616 summary pairs to measure how differently male and female cases were handled by the AI. The conclusions were unambiguous: the terminology used to describe men's health issues was consistently more severe, and the difference was especially noticeable in Google's Gemma model. Dr. Sam Rickman, the report's lead author and a researcher in the Care Policy and Evaluation Centre at LSE, conveyed profound unease about these findings. He underscored that the amount of support an individual receives is tied directly to their perceived need, which means this bias could create tangible disparities in the provision of care.
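
At its core, the method described above is a counterfactual comparison: the same case notes are summarised twice, once framed as a man and once as a woman, and the two outputs are compared. The Python sketch below illustrates that idea in a minimal form. The pronoun-swap table, the keyword-based severity score, and the summarise callable are all simplified assumptions for illustration, not the instruments used in the LSE study.

```python
import re
from typing import Callable

# Minimal gender-swap: rewrite a male-framed case note as a female-framed one.
# Real case notes would need far more careful rewriting (names, grammar,
# context); this lookup table is a deliberate simplification.
SWAPS = {"Mr": "Mrs", "He": "She", "he": "she", "Him": "Her", "him": "her",
         "His": "Her", "his": "her", "himself": "herself"}
SWAP_PATTERN = re.compile(r"\b(" + "|".join(SWAPS) + r")\b")

def to_female(case_note: str) -> str:
    """Return the same case note with male titles and pronouns swapped to female."""
    return SWAP_PATTERN.sub(lambda m: SWAPS[m.group(1)], case_note)

# Crude proxy for how serious a summary sounds: count terms that signal high need.
# A real audit would use a validated coding scheme rather than a keyword list.
SEVERE_TERMS = {"disabled", "unable", "complex", "frail", "deteriorating", "limited"}

def severity(summary: str) -> int:
    return sum(word in SEVERE_TERMS for word in re.findall(r"[a-z]+", summary.lower()))

def gender_gap(case_note_male: str, summarise: Callable[[str], str]) -> int:
    """Difference in apparent severity between male and female versions of a case.

    `summarise` stands in for whichever LLM is being audited; a positive result
    means the male version of the identical case was summarised as more serious.
    """
    male_summary = summarise(case_note_male)
    female_summary = summarise(to_female(case_note_male))
    return severity(male_summary) - severity(female_summary)
```

Run across many paired case notes, a consistently positive average gap would be the kind of signal the researchers report observing with Gemma.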

The Pervasive Nature of AI in Social Care

Local government bodies have increasingly adopted AI to address the huge strains on social care programs. These systems promise efficiencies and a way to lighten the caseloads of social workers. The LSE study, however, uncovers a major drawback to this rapid implementation. There is a concerning lack of clarity about which AI systems are being used, how they are applied, and what effect they have on final decisions. This opacity makes it difficult to scrutinize the technology and to hold authorities accountable for any biases it might perpetuate.

A Call for Greater Transparency and Regulation

The academics responsible for the LSE investigation are demanding immediate measures to tackle this problem. They contend that regulators must require bias to be measured in every LLM used in long-term care. Such a step would help make "algorithmic fairness" a chief concern and ensure that AI systems do not worsen pre-existing disparities. Dr. Rickman was firm that while AI has the capacity to be an effective instrument, its application should not sacrifice impartiality. He reinforced the necessity for every AI platform to be open to scrutiny, thoroughly checked for bias, and governed by strong legal frameworks.


Image Credit - NTV

The Historical Context of Gender Bias in Medicine

The gender-based prejudice identified in these AI platforms is not a recent development; instead, it serves as a digital mirror to deep-seated inequalities within healthcare. For many years, medical studies have focused mainly on men, frequently leaving women out of clinical trials. This long-standing failure to include women has created a substantial gap in knowledge about how illnesses manifest in them and how they react to different therapies. Consequently, the data employed for training AI often contains this skew, prompting the algorithms to copy and even magnify these historical prejudices.

The Dangers of Inherent Bias

The repercussions of this ingrained prejudice are extensive. Women already experience longer waits for a diagnosis and are more likely to have their symptoms dismissed as psychosomatic. AI systems, if not properly monitored, could solidify these inequalities, making it substantially harder for women to get the support they require. The LSE investigation acts as a potent caution that, without purposeful action, the promise of AI in medicine could be undermined by the very prejudices it is meant to overcome. The capacity for AI to enhance healthcare is enormous, but its creation and rollout must be handled with care.

A Tale of Two Summaries

One of the most powerful illustrations from the LSE research centered on a hypothetical 84-year-old living alone. When the clinical details were attributed to "Mr Smith," the AI-generated report noted him as having "a complicated medical background, an absence of a care plan, and limited movement." But when the gender was switched to "Mrs Smith," the report provided a starkly different portrayal: she was depicted as "self-sufficient and capable of managing her own hygiene," despite the same physical challenges. This sharp divergence shows how AI can subtly, yet consequentially, shift the view of a person's requirements based only on their gender.

The Discrepancy in Community Access

A further case highlighted in the investigation provides more evidence of this prejudice. In one scenario, a man was characterized as being unable to "engage with the local community," a statement implying a significant degree of need and possible social withdrawal. When the identical case information was associated with a woman, the report indicated she "could handle her day-to-day tasks," a much less worrying portrayal. This type of inconsistency could result in a man receiving priority for community assistance programs while a woman with the same level of need is ignored. These instances emphasize the concrete effects of algorithmic prejudice.

Not All AI Models Are Created Equal

It is vital to recognize that the LSE research discovered key variations among the AI systems evaluated. While Google's Gemma model displayed a clear gender-related bias, Meta's Llama 3 model did not produce different reports based on gender. This finding indicates that developing AI systems that are less prone to bias is achievable. It also draws attention to the importance of meticulously choosing and evaluating AI platforms before deploying them in critical fields like social care. The academics hope their conclusions will motivate software creators to make fairness a top priority in their algorithmic designs.

The Need for Proactive Measures

The report's writers maintain that a reactive stance on AI prejudice is insufficient. Delaying action until issues surface could lead to calamitous outcomes for people who are refused the support they require. They champion a forward-thinking strategy instead, one that includes thorough evaluation and confirmation of AI systems prior to their deployment. This process would encompass specific checks for gender-based and other kinds of prejudice. They also urge for enhanced cooperation among AI creators, medical experts, and social care organizations to confirm these systems are appropriate for their intended function.

The Role of Data in Shaping AI

The bias discovered in these AI platforms does not necessarily point to a defect in the technology itself; rather, it reflects the data used to train it. If the dataset used to train an AI system is biased, the system will inevitably adopt and reproduce that bias. This is why varied and representative datasets are crucial when creating AI for the medical sector. The long-standing underrepresentation of women in medical studies means there is frequently less data on female health, which can result in AI systems that are less accurate for women.

Addressing the Data Gap

To tackle this problem, a unified push is required to enhance the gathering of information on the health of women. This involves boosting the inclusion of women in clinical trials and making sure that information is gathered and examined in a manner that considers differences based on sex and gender. It also involves proactively identifying and rectifying prejudices within current datasets. This represents a multifaceted and demanding endeavor, but it is vital if the goal is to construct AI frameworks that are just and balanced for all individuals.


Image Credit - Engadget

The Human Element in AI Development

It is also crucial to consider the human factor in the creation of AI. The teams responsible for building and training these systems are frequently not representative of the general population. A survey from 2022 revealed that men constitute over 90% of professional software developers. This lack of diversity can result in oversights and unexamined prejudices becoming embedded in AI frameworks. Encouraging more women and people from other underrepresented groups into AI-related professions could contribute to the development of fairer and more inclusive technologies.

The Power of Prompts

Intriguingly, studies have demonstrated that bias in AI systems can be lessened by using precisely worded instructions. One study found that simply telling a model to avoid relying on stereotypes produced a significant positive change in its output. This indicates that even when working with biased training data, guiding AI systems toward more equitable results is possible, as sketched below. It is a hopeful field of inquiry that might offer a practical method for fighting bias in AI. It is not, however, a complete solution and should not be considered a replacement for tackling the fundamental sources of bias.
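
As a rough illustration of that idea, the Python snippet below prepends an anti-stereotyping instruction to a summarisation request before it is sent to a model. The wording of the instruction and the helper name are hypothetical, chosen for illustration rather than taken from the research described above.

```python
# Illustrative debiasing instruction; the exact wording is an assumption,
# not the prompt used in the study discussed above.
DEBIAS_INSTRUCTION = (
    "Summarise the care needs described in the case notes below. Base the "
    "assessment only on the clinical and social information provided. Do not "
    "rely on stereotypes about the person's gender, age, or background, and "
    "apply the same threshold of seriousness regardless of who the person is."
)

def build_debiased_prompt(case_note: str) -> str:
    """Wrap a case note in the anti-stereotyping instruction before it is
    passed to whichever LLM is in use."""
    return f"{DEBIAS_INSTRUCTION}\n\nCase notes:\n{case_note}"

# Example usage: the resulting string is what would be sent to the model.
print(build_debiased_prompt("Mrs Smith, 84, lives alone and has limited mobility."))
```

The design point is simply that the instruction travels with every request, so the model is reminded of the fairness constraint each time rather than relying on its defaults.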

Google's Response to the Findings

Google said its teams will review the findings of the LSE report. The company also noted that the academics had evaluated the first version of the Gemma model and that the platform has since reached its third iteration, which Google anticipates will perform better. It added that it has never promoted Gemma for use in medical contexts. This statement accentuates the continuing discussion over the appropriate application of AI in healthcare and the need for explicit policies and rules.

The Broader Implications for AI in Society

The LSE investigation adds to a growing collection of data that emphasizes AI's capacity to sustain and even intensify prevailing societal prejudices. Worries have been voiced about prejudice across a broad spectrum of AI uses, from facial analysis to the legal system. An American analysis of 133 AI platforms in various sectors revealed that 44 percent displayed prejudice based on gender, while 25 percent showed biases related to both gender and race. These conclusions highlight the pressing demand for a more thoughtful and principled method for creating and implementing AI.

The Path Forward: A Call for Collective Action

Tackling the problem of AI prejudice necessitates a united campaign involving numerous interested parties. AI creators bear the duty of constructing more equitable and open systems. Government bodies must create unambiguous policies and benchmarks for AI's application in critical domains. Providers of healthcare and social care need to assess and choose AI systems with care and watch their effects on patients. Furthermore, society at large must participate in a wider dialogue regarding the moral ramifications of AI and the type of society we aim to build.

Empowering Patients and the Public

It is equally essential to give patients and the general public the tools to comprehend and question choices made by AI. This involves offering straightforward details on the ways AI is applied within their care and establishing easy-to-use avenues for comments and appeals. By cultivating a climate of openness and responsibility, we can work to make certain that AI benefits every individual in society, not just a small minority. The LSE investigation is a crucial alert that we all must acknowledge. While the future of healthcare is likely to be connected with AI, it falls to us to guarantee this future is just and balanced for all. The capacity for AI to transform healthcare is certain, but so are the dangers. Only through collaboration can we control this technology's power for positive change and create a healthcare framework truly prepared for the 21st century.
