Beyond distance and stigma: AI's potential to democratise healthcare

Access to healthcare and health information remains vastly unequal, shaped by factors such as wealth, geography, language, and literacy. Women in India often lack access to basic sexual health resources, with up to 71% of girls unaware of menstruation before their first period. In the US, rural hospital closures, especially of obstetric units, have left many women in these regions without adequate maternal care.

To combat this issue, a growing number of organisations are exploring the potential of AI. One group has piloted Myna Bolo, a chatbot designed to help women in marginalised communities in India access reliable information on a range of taboo topics, such as menstruation, contraception, and sexual health.

Closer to home, Northwell Health launched a pregnancy chatbot to educate and support pregnant and postpartum women in the US, aiming to improve maternal health outcomes and identify women with urgent health concerns.

However, the growing prominence of AI chatbots in healthcare also raises some real concerns about patient trust, misinformation, and data privacy. With these points in mind, what is AI's potential to democratise healthcare access while remaining mindful of the roadblocks?

Challenges with healthcare access among underserved communities

Nearly one-third of the US population lives in a healthcare desert - an area with insufficient access to primary care. Lack of insurance or a language barrier widens the gap.

The National Cancer Institute (NCI) notes that people with low incomes, limited (if any) access to paid medical leave, low health literacy, inadequate insurance coverage, and long distances to clinics are less likely to have recommended screening tests or receive timely treatment.

In these conditions, underserved communities are less likely to investigate symptoms until they develop into more serious and, in some cases, irreversible conditions, including strokes, heart attacks, advanced cancers, untreated infections, and complications of diabetes. Many of these conditions could be prevented with better access to healthcare information.

Empowering disadvantaged communities on taboo health topics

As smartphones become more widely available, healthcare providers can increasingly reach and educate underserved populations. Healthcare information needs to be non-judgmental, confidential, and accurate. Large language models (LLMs) equipped with natural language processing (NLP) and sentiment analysis can offer culturally sensitive responses, reliable health education, and a safe space for self-assessment.
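As a toy illustration of the sentiment-analysis step described above, the sketch below routes a message to an empathetic or purely informational reply based on a hand-written distress lexicon. The word list and reply templates are hypothetical; production chatbots use trained sentiment models rather than keyword matching.

```python
# Hypothetical sketch: route a user's message to an empathetic or
# informational reply based on a simple lexicon sentiment check.
# The lexicon and replies are illustrative, not from any real chatbot.

DISTRESS_WORDS = {"scared", "ashamed", "worried", "alone", "pain"}

def sentiment_route(message: str) -> str:
    """Return 'empathetic' if the message signals distress, else 'informational'."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return "empathetic" if words & DISTRESS_WORDS else "informational"

def reply(message: str) -> str:
    """Compose a response whose tone matches the detected sentiment."""
    if sentiment_route(message) == "empathetic":
        return ("You're not alone; many people have the same questions. "
                "Here is some reliable information, in confidence.")
    return "Here is some reliable information on that topic."
```

A real system would combine a trained classifier with culturally reviewed response templates, but the routing logic follows this same shape.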

AI chatbots can empower underserved communities with knowledge regarding their sexual health. For example, chatbots like SnehAI, a Hindi chatbot aimed at young people in India, can provide non-judgemental responses to a range of questions about safe sex, sexually transmitted infections (STIs), and contraception. Users can access relevant information in their local language and from the privacy of their smartphone.

Access to mental health support is another area where AI can assist. In Northern Europe, ChatPal, a multilingual AI chatbot, has been piloted to promote the mental well-being of people living in rural areas with limited access to therapists.

Of course, we have to recognise the limits of chatbots in this field. Rather than replacing therapists, mental health chatbots, like ChatPal, are used to complement the work of clinicians and to prevent the escalation of mental health difficulties.

Making diagnosis and healthcare accessible from home

LLMs trained on diverse, up-to-date source material in different languages help break down healthcare access barriers in underserved communities. Built with fact-checking capabilities and access to users' GPS locations, these chatbots can provide information, referrals, and treatment options relevant to the user's area.
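A GPS-based referral of the kind described can be sketched as a nearest-neighbour lookup over clinic coordinates using the haversine distance. The clinic names and coordinates below are invented for illustration.

```python
import math

# Illustrative sketch: refer a user to the nearest clinic based on their
# GPS coordinates. Clinic names and locations are made up for this example.

CLINICS = {
    "Clinic A": (28.6139, 77.2090),  # hypothetical clinic near Delhi
    "Clinic B": (19.0760, 72.8777),  # hypothetical clinic near Mumbai
}

def haversine_km(a, b):
    """Great-circle distance in kilometres between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_clinic(user_pos):
    """Return the name of the clinic closest to the user's coordinates."""
    return min(CLINICS, key=lambda name: haversine_km(user_pos, CLINICS[name]))
```

A production referral service would also account for opening hours, services offered, and travel options, but proximity lookup is the core step.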

AI-powered symptom checkers can offer doctor-approved advice to individuals and communities with limited healthcare access, providing 24/7 availability and remote accessibility. Patients can describe their symptoms to the chatbot by text or verbally (bypassing the need for literacy) and receive basic health information relevant to their needs. This can help individuals identify potential issues early, seek professional help sooner, and feel less anxious.
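The triage step such a symptom checker performs can be caricatured as a rule lookup over reported symptoms. The symptom lists and messages below are placeholders; real products rely on clinically validated models, not hand-written sets.

```python
# Minimal triage sketch, assuming a hand-written mapping from reported
# symptoms to an urgency level. Real symptom checkers use clinically
# validated logic; these lists are illustrative only.

URGENT = {"chest pain", "severe bleeding", "difficulty breathing"}
ROUTINE = {"mild headache", "runny nose", "sore throat"}

def triage(symptoms):
    """Return an urgency message for a list of free-text symptom phrases."""
    reported = {s.lower().strip() for s in symptoms}
    if reported & URGENT:
        return "urgent: seek professional care now"
    if reported & ROUTINE:
        return "routine: self-care advice and monitoring"
    return "unknown: refer to a clinician"
```

The "unknown" fallback mirrors the limits noted below: when the tool cannot classify a symptom, it should defer to a qualified professional rather than guess.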

Again, as with mental health chatbots, it's important to acknowledge limits. These checkers are for informational purposes only and cannot replace diagnosis or treatment by a qualified healthcare provider. Where more severe symptoms are reported, doctors can receive alerts, approve responses, or prescribe treatments to be collected from the user's nearest pharmacy.

In addition, deliverable health kits, such as STI test packages, and wearable devices that integrate with smartphones are reducing the need for physical access to hospitals. Paired with chatbots that comply with the Health Insurance Portability and Accountability Act (HIPAA), patients can receive real-time health updates, ask questions in their native language, and follow treatment plans confidently and safely.

Discreet devices like Wei Gao’s wearable biosensor ring allow doctors and patients to monitor oestrogen levels at home in real time. And for parents, Firstday Healthcare offers wireless sensors that monitor babies' vital signs from home. In both cases, telehealth providers can integrate chatbot services to provide real-time support and relieve any concerns.

Public awareness campaigns that promote telehealth should form part of an effective plan to improve prevention and early diagnosis in underserved communities.

How to create safe digital spaces for open conversations on health

Countries around the world have been implementing stricter data privacy regulations governing AI, and industry frameworks are emerging alongside them. The Coalition for Health AI's blueprint for trustworthy AI is one example: it highlights usefulness, safety, accountability, explainability, and fairness as critical elements of trustworthy AI. Within this, rigorous ongoing testing for accuracy, data integrity, and bias is essential.

Alignment with existing medical data regulations, including HIPAA, is also critical. HIPAA sets rules governing the collection, handling, transmission, and storage of data, along with standards for de-identification of protected health information (PHI). On top of this, trained moderators who address inappropriate behaviour, keep discussions on topic, and encourage constructive, supportive feedback help create a safe digital space for users.
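De-identification in the spirit of HIPAA's Safe Harbor method can be approximated, in miniature, by masking direct identifiers before a chat transcript is stored. The regex patterns below cover only three of the eighteen Safe Harbor identifier categories and are illustrative only; real systems must handle all of them, plus free-text names and locations.

```python
import re

# Hedged sketch of rule-based de-identification: mask phone numbers,
# email addresses, and dates before storing a transcript. Patterns are
# deliberately simplistic; production systems cover all 18 HIPAA Safe
# Harbor identifier categories.

PATTERNS = [
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
]

def deidentify(text: str) -> str:
    """Replace recognised direct identifiers with placeholder tokens."""
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text
```

Masking happens before storage, so the stored transcript never contains the raw identifier; a reversible tokenisation scheme would be needed if clinicians must later re-contact the patient.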

Whatever the patient's literacy level, HIPAA-compliant NLP tools are helping healthcare providers deliver culturally relevant health information, in audio or written form, translated into many languages. The rise of smartphones, wearables, and HIPAA-compliant chatbots can bridge accessibility gaps by offering remote consultations and assistance with the latest medical devices, paving the way for a more empowered and healthier future.

Nate MacLeitch