Pulse oximeters, the small devices that monitor blood oxygen levels, play a key role in healthcare technology. Their importance has grown with the rise of artificial intelligence in health tech, yet they often give inaccurate readings for people with darker skin tones. This flaw highlights systemic healthcare inequities and undermines trust in medical devices.
Addressing these disparities involves revisiting the development of pulse oximeters, reforming regulations, and promoting inclusivity in design. This journey also sparks conversations about how artificial intelligence in health tech could help prevent similar issues in future innovations.
The origins of bias in pulse oximeters
Pulse oximeters work by shining red and infrared light through the skin and comparing how much of each wavelength is absorbed by oxygenated versus deoxygenated hemoglobin. Melanin, the pigment responsible for darker skin tones, also absorbs this light and can skew the comparison, producing falsely high oxygen saturation readings. The discrepancy can have life-threatening consequences, delaying treatment for patients who appear to have adequate oxygen levels but do not.
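To make the mechanism concrete, the sketch below shows the classic "ratio of ratios" calculation behind conventional pulse oximetry. The function name and the linear coefficients are illustrative stand-ins: real devices map the ratio to saturation through a proprietary calibration curve fitted on human volunteers, and that calibration step is exactly where a non-diverse cohort bakes bias into every subsequent reading.

```python
import numpy as np

def estimate_spo2(red: np.ndarray, infrared: np.ndarray) -> float:
    """Estimate oxygen saturation from red and infrared PPG signals.

    A simplified sketch of the "ratio of ratios" method. The linear
    coefficients (110, 25) are textbook illustrative values, not any
    manufacturer's calibration: real devices fit this curve empirically
    on human volunteers, so a non-diverse calibration cohort embeds
    bias into every reading the device later produces.
    """
    # Split each channel into pulsatile (AC) and baseline (DC) components.
    ac_red, dc_red = red.max() - red.min(), red.mean()
    ac_ir, dc_ir = infrared.max() - infrared.min(), infrared.mean()

    # Ratio of ratios: relative absorbance at the two wavelengths.
    r = (ac_red / dc_red) / (ac_ir / dc_ir)

    # Empirical linear calibration curve mapping r to SpO2 (%).
    return 110.0 - 25.0 * r
```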
The origins of this bias date back to the 1970s when pulse oximeters were first developed in Japan, a country with relatively homogeneous skin tones. These designs were later adopted globally without modifications to address racial diversity. By the 1980s, pulse oximeters became standard tools in healthcare, but the limitations stemming from their design were overlooked.
Studies repeatedly highlighted this issue in the decades that followed, yet manufacturers and regulators failed to act, perpetuating the bias. The devices continued to be calibrated predominantly on lighter-skinned individuals, embedding inequity into the foundations of this healthcare technology.
The role of regulatory oversight
Regulatory bodies such as the U.S. Food and Drug Administration (FDA) are pivotal in ensuring the safety and efficacy of medical devices. However, during the 1980s, when pulse oximeters entered the market, the FDA’s oversight was less robust. Devices were approved under the 510(k) pathway, which allowed new products to bypass rigorous testing if deemed “substantially equivalent” to existing devices. This approach inadvertently reinforced the biases of earlier models.
Until recently, FDA guidelines for pulse oximeter testing required minimal diversity among study participants. In 2013, the FDA recommended, but did not mandate, that 15% of test subjects have darker skin tones. These tests also typically involved healthy volunteers, failing to replicate the complex conditions seen in real-world clinical settings.
The FDA acknowledged the performance disparities in pulse oximeters and updated its guidance. The agency now emphasizes inclusive testing across a broad spectrum of skin tones using the Monk Skin Tone Scale, which accounts for greater variation in pigmentation. However, questions remain about whether manufacturers will apply these standards retroactively to devices already in use and how swiftly these changes will be implemented.
COVID-19 highlights bias in healthcare technology
The COVID-19 pandemic brought unprecedented reliance on healthcare technology, including pulse oximeters. These devices became indispensable for monitoring oxygen levels in patients, especially those battling severe respiratory symptoms. However, their shortcomings became starkly apparent, particularly for Black and brown patients.
In a pivotal 2020 study, researchers found that Black patients were nearly three times as likely as white patients to have occult hypoxemia, dangerously low blood oxygen levels that pulse oximeters failed to detect. This inaccuracy delayed critical interventions and contributed to the disproportionate impact of COVID-19 on communities of color.
The pandemic underscored the urgent need for trust in healthcare technology. Patients and clinicians alike depend on these devices for life-saving decisions, making it critical to address flaws that undermine their reliability.
AI in health tech: A double-edged sword
The integration of artificial intelligence in health tech has the potential to address biases in traditional medical devices like pulse oximeters. AI algorithms, when trained on diverse datasets, can detect and correct disparities in device performance across different skin tones. For example, AI-enabled oximeters could dynamically adjust their calculations based on skin pigmentation, ensuring accurate readings for all patients.
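As a rough illustration of that idea, the sketch below fits a correction model on hypothetical data pairing raw device readings and Monk Skin Tone indices with reference values from arterial blood gas analysis. Every number here is simulated, including the bias term; a real correction would have to be learned from clinical data and validated prospectively.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: a device's raw SpO2 readings, each
# patient's Monk Skin Tone index (1-10), and reference saturation from
# arterial blood gas analysis. The bias term is simulated purely for
# illustration; real error patterns must come from clinical data.
rng = np.random.default_rng(seed=42)
raw_spo2 = rng.uniform(85.0, 100.0, size=500)
monk_tone = rng.integers(1, 11, size=500)
reference = raw_spo2 - 0.4 * monk_tone + rng.normal(0.0, 0.5, size=500)

# Fit a correction mapping (raw reading, skin tone) -> reference value.
features = np.column_stack([raw_spo2, monk_tone])
model = LinearRegression().fit(features, reference)

# Adjust a new reading of 95% for a patient with Monk tone 8.
corrected = model.predict(np.array([[95.0, 8.0]]))[0]
print(f"Corrected SpO2: {corrected:.1f}%")
```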
However, AI is not immune to bias. If algorithms are trained on skewed or incomplete data, they can perpetuate the very inequities they aim to solve. Ensuring fairness in AI-driven healthcare technology requires meticulous data curation and ongoing validation. Trust in these innovations hinges on transparency, inclusivity, and accountability throughout the development process.
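One concrete piece of that curation work is simply auditing who is in the training data. The toy check below flags Monk Skin Tone categories that are missing or underrepresented in a dataset; the 5% threshold is an arbitrary value of our own choosing, and a real audit would also stratify by clinical condition, perfusion, and other factors.

```python
from collections import Counter

def underrepresented_tones(monk_tones: list[int],
                           min_share: float = 0.05) -> dict[int, float]:
    """Report Monk Skin Tone categories below a minimum dataset share.

    A toy curation check, not formal study design: min_share is an
    illustrative threshold, not a regulatory or clinical standard.
    """
    counts = Counter(monk_tones)
    total = len(monk_tones)
    return {tone: counts[tone] / total
            for tone in range(1, 11)
            if counts[tone] / total < min_share}

# Example: a dataset heavily skewed toward lighter tones.
sample = [1] * 400 + [2] * 300 + [3] * 200 + [8] * 20 + [10] * 5
print(underrepresented_tones(sample))  # tones 4-7, 9 absent; 8, 10 under 5%
```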
Revising standards and advancing equity
Addressing the bias in pulse oximeters involves more than updating regulatory guidelines. It requires systemic changes in how healthcare technology is developed, tested, and implemented. These efforts must center on equity, acknowledging that medical devices are not neutral tools but products of a system shaped by historical and social inequalities.
Challenges in revising standards: One major obstacle is the reliance on legacy devices. Many pulse oximeters currently in use were approved under outdated standards that did not account for racial diversity. Retrofitting these devices with improved calibration models is technically and logistically complex. Regulators must enforce stringent testing requirements not only for new devices but also for those already on the market.
Additionally, increasing diversity in clinical trials is critical. Historically, people of color have been underrepresented in medical research, leading to gaps in knowledge and inequities in care. Ensuring that clinical studies include participants with a wide range of skin tones is essential for developing universally effective healthcare technology.
The role of medical education: Medical education also plays a crucial role in addressing bias. For decades, the limitations of pulse oximeters were absent from medical curricula, leaving clinicians unaware of their potential inaccuracies. Incorporating health equity into medical training can equip future healthcare providers with the knowledge and tools to recognize and mitigate systemic biases.
Building trust through inclusion and accountability
Building trust in healthcare technology requires meaningful action to address disparities and prevent future inequities. This includes fostering collaboration between engineers, clinicians, and community advocates to design technologies that serve diverse populations effectively.
Legal and advocacy efforts: Advocacy groups and legal actions have also brought attention to the issue. Lawsuits, such as one filed by Roots Community Health Center, have pressured regulators and manufacturers to prioritize equity in healthcare technology. These efforts underscore the importance of accountability in driving systemic change.
In parallel, organizations like the National Institutes of Health are working to promote inclusive research practices. By funding studies that focus on health disparities and supporting researchers from underrepresented backgrounds, the NIH aims to foster innovation that benefits all communities.
The promise of artificial intelligence: Artificial intelligence in health tech offers a pathway to more equitable solutions. By leveraging AI, developers can design devices that learn from diverse datasets and adapt to individual patient characteristics. However, realizing this potential requires vigilance to ensure that AI systems do not inherit or amplify existing biases.
Efforts are underway to incorporate fairness metrics into AI algorithms and create frameworks for ethical AI deployment in healthcare. These initiatives aim to enhance trust in healthcare technology by demonstrating its reliability and inclusivity.
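As a minimal example of such a metric, the sketch below measures the spread between the worst and best per-group mean absolute error. A hypothetical deployment gate might require this gap to stay below a clinically justified threshold before a model ships, and again after every retraining run.

```python
import numpy as np

def worst_group_mae_gap(y_true: np.ndarray, y_pred: np.ndarray,
                        groups: np.ndarray) -> float:
    """Spread between the worst and best per-group mean absolute error.

    One simple fairness metric among many; which metric and threshold
    are appropriate is a clinical and regulatory question, not a purely
    technical one.
    """
    per_group_mae = {
        g: np.abs(y_true[groups == g] - y_pred[groups == g]).mean()
        for g in np.unique(groups)
    }
    return max(per_group_mae.values()) - min(per_group_mae.values())

# Example: SpO2 prediction errors evaluated separately per group.
y_true = np.array([92.0, 88.0, 95.0, 90.0])
y_pred = np.array([92.5, 91.0, 95.2, 93.5])
groups = np.array(["light", "dark", "light", "dark"])
print(worst_group_mae_gap(y_true, y_pred, groups))  # 2.9 (3.25 - 0.35)
```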
Toward a more inclusive future
The bias in pulse oximeters underscores the broader challenges of achieving equity in healthcare technology, emphasizing the need for a comprehensive approach to address systemic racism in research, education, and practice.
Inclusive healthcare technology requires dismantling structural barriers: rethinking regulatory processes, diversifying clinical research, and integrating health equity into medical education. AI in health tech presents opportunities to minimize bias, but it demands rigorous oversight to ensure ethical use. Emphasizing inclusivity and accountability can foster trust in medical devices and ensure that technological progress benefits everyone, irrespective of race or ethnicity.
Eliminating that bias will take collaboration across sectors and the ethical use of AI. Trust in healthcare technology is a moral imperative; by correcting past errors and prioritizing inclusivity, the field can build a more equitable future.