Responsible AI in Global Health: Solutions from the Global South
Issue 19, February 2025

In this month's Research Roundup, guest editor Chaitali Sinha presents a new report from the International Development Research Centre (IDRC), Strengthening Health Systems through Responsible AI: An Emergent Research Landscape. The report explores the growing field of implementation research on responsible AI, highlighting Global South case studies that aim to address health challenges and strengthen health systems, particularly in underserved contexts. This month’s Roundup presents key insights from the report, examining the prerequisites for guiding responsible design, implementation, and evaluation of emerging AI-based technologies in global health. The full paper is available in English, French, and Spanish.
Guest Editor’s Remarks:
As many parts of the world grapple with a polycrisis, there is no time to waste when it comes to supporting evidence-based solutions that positively influence health and wellbeing for all. Artificial Intelligence (AI) solutions can contribute toward this goal, so long as their promise of advancing diagnostics, public health surveillance, and health service planning and delivery is examined alongside the perils: increased risks to patient safety, weakened data privacy, and strained trust in health systems stemming from coded biases, misinformation, and disinformation.
Real-world applications and evidence are needed to ensure that AI-based technologies deliver net gains for the people most in need. Toward filling this gap, a new discussion paper maps an emergent research landscape for leveraging responsible AI solutions across the Global South, with a focus on unmet health needs among populations experiencing the highest levels of vulnerability.
Published by Canada’s International Development Research Centre (IDRC), the paper showcases 12 case studies from across the Global South. Each case study, funded by IDRC and the UK Foreign, Commonwealth and Development Office (FCDO), starts with a global health challenge and then presents research questions, outcomes, and evidence demonstrating how the locally designed AI solution is responsible in both its design and deployment.
From Brazil to Bangladesh, South Africa to Indonesia, Lebanon to Ethiopia, and many countries in between, the case studies use implementation research to answer questions of whether, how, for whom, and in what contexts responsible AI solutions are improving health outcomes, redressing health inequities, and strengthening health systems. For example, in Ghana, AI-powered acoustic analysis is transforming mosquito classification to combat malaria, while in Guatemala, AI-assisted prenatal ultrasounds for Indigenous women are improving culturally responsive and respectful care.
Too often, the discourse on researching responsible AI solutions in global health is dominated by Global North perspectives, yet many of the most innovative, contextually relevant, and meaningful applications are being developed in the Global South. In providing 12 examples of how locally championed social and technological innovations are addressing global health challenges in context-specific and rigorous ways, this paper argues for a shift in the narrative—one where contributions and leadership from the Global South are valued and celebrated.
Responsible AI Implementation Across Health System Levels
Three key entry points (health services, community health, and individual health) have been identified through literature review, project analysis, and key informant interviews to guide the responsible use of AI. The full paper provides a set of indicative implementation research questions alongside each entry point.
Health Facility Level: Strengthening the Health Workforce
AI solutions can address skill gaps, improve workforce planning, and enhance provider-patient relationships, supporting effective service delivery and long-term health system resilience.
Community Level: One Health Surveillance and Solutions
AI-driven One Health approaches offer a valuable framework for disease surveillance, risk prediction, and misinformation control by leveraging large datasets, spatial modeling, and real-time monitoring tools.
Individual and Caregiver Level: Self-Care Interventions
AI-driven self-care interventions empower individuals and/or their caregivers to manage their health through tools such as personalized tracking, early disease detection, and digital coaching.
Prerequisites for the Design and Implementation of Responsible AI
While AI holds significant potential to help address pressing health challenges, prevailing research agendas—often shaped by dominant power structures—frequently overlook marginalized populations and underrepresented health conditions in the Global South. To address this imbalance, locally driven and context-specific investigations that close the "know-do gap" through implementation research are essential. Five cross-cutting prerequisites have been identified to inform research design and implementation strategies for AI-based digital health innovations.
Regulation, Policy, and Governance
This prerequisite encompasses the development of robust regulatory and policy frameworks that ensure the safe, ethical, and effective use of AI in health systems, balancing the need for innovation with regulatory compliance to protect patient rights and public health.
Many countries lack dedicated legislation for the use of AI in healthcare. Instead, they rely on broader medical device regulations that fail to address AI-specific challenges, which increases risks related to data privacy, accountability, and equitable access.
Best practices:
- Enforce data protection laws
- Promote transparency in AI decision-making
- Generate clinical evidence for AI tools
- Establish clear accountability lines
- Develop ethical guidelines with human oversight
- Research pricing regulations to improve accessibility
- Foster collaborative interoperability standards from a Global South perspective
Data Quality and Representation
Using high-quality, representative, and disaggregated data when developing responsible AI systems is essential for effectively addressing diverse health needs.
Low-quality data can result in various forms of bias that compromise the fairness and accuracy of AI-driven decisions. These issues are exacerbated by structural weaknesses in data availability, governance, and the underrepresentation of linguistic diversity, particularly in low-resource settings.
Best practices:
- Address poor data quality stemming from underrepresentation, misrepresentation, and overrepresentation
- Ensure robust data triangulation during model training
- Ensure secure interoperability of data systems
- Assess the availability and reliability of local datasets
- Promote cross-border data sharing with strong governance practices
Gender Equality and Inclusion
Because health outcomes for women, men, and gender-diverse individuals are shaped by biological, social, cultural, and political factors, AI-driven health solutions must actively address gender disparities and systemic inequities.
AI technologies risk perpetuating and amplifying existing inequities if their design and implementation do not consider systemic oppression, exclusion, and gender norms. Without deliberate action, AI solutions may reflect the dominant perspectives of those in power, resulting in both visible and invisible forms of discrimination.
Best practices:
- Ensure equitable access to AI-enabled tools and connectivity
- Understand how social and gender norms influence AI solutions and regulations, alongside health-seeking and health provider behaviors
- Address impacts of data biases and underrepresentation of certain groups
- Design solutions to support individuals with disabilities
- Mitigate bias and exclusion in diagnosis and treatment
- Diversify pool of locally trained talent across disciplines
Ethics and Sustainability
Aligning AI solutions in healthcare with ethical principles that protect human rights is paramount, along with promoting transparency and ensuring equitable outcomes.
AI systems increasingly rely on granular personal data. Safeguarding privacy and minimizing harm from data misuse are therefore crucial to prevent the exacerbation of health inequities.
Best practices:
- Align AI solutions with human rights principles
- Minimize bias to ensure fair treatment across user groups
- Improve transparency and clarity of explanation
- Ensure appropriate human oversight in decision-making processes
- Enhance data security to protect privacy
- Reduce the environmental impact of AI technologies
Global South-led and Equitable Partnerships
Prioritizing locally-led research and decision-making ensures that experts and institutions from the Global South drive the translation of evidence into relevant policies and practices.
Addressing historical power imbalances between the Global North and Global South is essential to give those most affected by AI and global health policies a meaningful role in shaping them.
Best practices:
- Strengthen research ecosystems in the Global South
- Support lead authorship from local institutions
- Prioritize the strengths and needs of Global South-based organizations in partnerships
- Embed equity principles in multi-partner collaborations to ensure fair resource distribution and inclusive decision-making
Global and Regional Hubs Advancing AI for Global Health
Four regional hubs in the Global South and one global hub (with presence in four regions) are leveraging AI to address region-specific health challenges while fostering equitable partnerships and capacity-building.
The Global South AI for Pandemic & Epidemic Preparedness & Response Network (AI4PEP) examines how responsible AI can help close gaps in public health preparedness knowledge and practice across the Global South. It seeks to strengthen the capacity of interdisciplinary researchers and policymakers across Africa, Asia, Latin America, and the Middle East to support early detection, response, mitigation, and control of infectious disease outbreaks as they develop.
Artificial Intelligence for Global Health (AI4GH)
The Artificial Intelligence for Global Health (AI4GH) initiative, supported by IDRC, brings together the four regional hubs described below.

Latin America and the Caribbean
The Center for Artificial Intelligence and Health for Latin America and the Caribbean is led by the Center for Implementation and Innovation in Health Policies of the Institute for Clinical and Health Effectiveness in Argentina, with support from IDRC, to strengthen AI-driven health research in the region.

Asia
AI-Sarosh is an IDRC-funded initiative based in Pakistan focused on leveraging AI solutions to reduce the burden of sexual, reproductive, and maternal health issues in South Asia through early detection, prevention, and intervention strategies.

Sub-Saharan Africa
The Hub for Artificial Intelligence in Maternal, Sexual, and Reproductive Health is a networking platform in Uganda fostering collaboration among Pan-African researchers, organizations, and innovators to advance AI-driven health solutions in maternal, sexual, and reproductive health.

Middle East and North Africa
The Global Health and Artificial Intelligence Network in the Middle East and North Africa, led by the E-Sahha Program at the Global Health Institute with IDRC support in Lebanon, empowers researchers in the region to design and implement responsible AI-based digital health interventions through capacity-building and partnerships.
Case Studies on Locally-driven AI Initiatives across the Global South:
The case studies featured in this report highlight diverse AI initiatives showcasing how responsible design and implementation of AI technologies can address pressing health challenges while promoting equity, inclusivity, and sustainability. They cover a range of applications, including disease surveillance, maternal and reproductive health, environmental monitoring, and combating misinformation, illustrating how AI can be adapted to meet local health needs and contexts.
Combating Disinformation in Brazil with Dominique, an AI Assistant
The AutoAI-Pandemics hub in Brazil developed Dominique, an AI-powered conversational assistant that combats disinformation, such as false claims about vaccines, by analyzing statement truthfulness, promoting fact-checking, and providing accessible verification tools. Built using a gated recurrent unit (GRU) model trained on a Portuguese news corpus, Dominique achieved a 95.7% accuracy rate and is available in Portuguese and English, earning international recognition for improving how the public consumes information.
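As a rough sketch of the kind of pipeline this describes (not Dominique's actual architecture), a recurrent text classifier for Portuguese claims might look like the following; the vocabulary size, model dimensions, and truthfulness labels are illustrative assumptions.

```python
# Illustrative sketch only: a minimal GRU-based claim classifier, NOT
# Dominique's actual model. Vocabulary size, embedding dimension, and
# the binary truthfulness labels are assumptions made for the example.
import torch
import torch.nn as nn

class ClaimClassifier(nn.Module):
    def __init__(self, vocab_size=30_000, embed_dim=128, hidden_dim=256, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer-encoded Portuguese text
        embedded = self.embedding(token_ids)
        _, hidden = self.gru(embedded)                  # hidden: (2, batch, hidden_dim)
        pooled = torch.cat([hidden[0], hidden[1]], dim=-1)
        return self.classifier(pooled)                  # logits over truthfulness labels

# Toy usage: score a batch of two already-tokenized statements.
model = ClaimClassifier()
fake_batch = torch.randint(1, 30_000, (2, 40))
print(torch.softmax(model(fake_batch), dim=-1))
```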
Responsible AI Elements: The project prioritizes transparency, fairness, and ethical information sharing, with a user-friendly design that promotes critical thinking and reduces vulnerability to misinformation.
Improving Prenatal Ultrasound Access for Indigenous Women in Guatemala
The NatallA project in Guatemala is developing an AI-enabled tool to help midwives perform fetal ultrasounds without the need for specialized personnel, helping address maternal health disparities among Indigenous women. Leveraging convolutional neural networks (CNNs), the tool extracts diagnostic features from ultrasound videos, enabling decentralized primary health care in local languages and cultural contexts.
Responsible AI Elements: The project ensures transparency and inclusivity by involving Indigenous communities, protecting data privacy, and using representative datasets to minimize bias.
Detecting Prenatal and Perinatal Depression in Bangladesh
A multidisciplinary team at Eminence Associates for Social Development in Bangladesh developed an AI-enabled tool to assist non-specialized healthcare providers in detecting prenatal and perinatal depression. By using a graph neural network model, the tool assists these providers in analyzing visual, facial, and acoustic cues from pregnant and postpartum women. This intervention complements traditional PHQ-9 assessments and reduces bias in the data collected.
Responsible AI Elements: The solution incorporates human-in-the-loop validation, prioritizes patient privacy, and uses culturally-relevant datasets to ensure fairness and empathy in mental health diagnostics.
Enhancing Sexual and Reproductive Health Information Access for Adolescents with Disabilities in Ghana
Researchers at the University of Ghana developed an AI-powered solution to improve access to sexual and reproductive health (SRH) information for adolescents with hearing, speech, and visual disabilities. Using fine-tuned large language models (LLMs) such as ChatGPT and Gemini, and drawing on surveys with 400 students, the AI system generates culturally-sensitive, age-appropriate content.
Responsible AI Elements: The project addresses inclusivity through offline options, engages stakeholders for content validation, and ensures bias mitigation through culturally-informed prompt engineering of the AI model.
Boosting Sexual and Reproductive Health Knowledge for Refugee Women in Turkey via AI Chatbot
The Medical Rescue Association of Turkey developed an AI-powered chatbot to improve SRH knowledge among refugee women, overcoming barriers such as language, xenophobia, and limited digital literacy. The tool provides personalized, culturally-sensitive advice accessible via WhatsApp, and engaged 110 users across 248 conversations in its beta phase.
Responsible AI Elements: The chatbot emphasizes ethical data use, user privacy, and adaptability to meet users' cultural and situational needs.
Expanding Maternal Health Support with PROMPTS in Kenya
Jacaranda Health’s PROMPTS platform provides real-time, personalized maternal health information in English as well as Swahili, Sheng, and five other less common African languages. By classifying users into vulnerability segments, the tool provides health guidance while collecting socioeconomic data to address maternal health disparities.
Responsible AI Elements: PROMPTS employs open-source models, ensures human-in-the-loop validation, and incorporates user feedback to improve fairness and usability.
Classifying Mosquito Species in Ghana with AI-Enabled Acoustics
To aid malaria control efforts, researchers at the Kwame Nkrumah University of Science and Technology (KNUST) developed a deep learning model that classifies mosquito species based on wingbeat sounds. Trained on over 25,000 recordings, the model achieved 92% accuracy in identifying species, offering a non-invasive alternative to traditional identification methods.
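The general approach, turning short audio clips into spectrograms and classifying them with a convolutional network, can be sketched as follows; the sample rate, clip length, and species labels are assumptions for illustration, not details of the KNUST model.

```python
# Illustrative sketch only: classifying mosquito species from short
# wingbeat audio clips via mel-spectrograms and a small CNN. The sample
# rate, clip length, and four-species label set are assumptions.
import torch
import torch.nn as nn
import torchaudio

SAMPLE_RATE = 8_000          # assumed recording rate
NUM_SPECIES = 4              # assumed label set (e.g. Anopheles vs. Aedes vs. Culex species)

to_mel = torchaudio.transforms.MelSpectrogram(sample_rate=SAMPLE_RATE, n_mels=64)

class WingbeatCNN(nn.Module):
    def __init__(self, num_classes=NUM_SPECIES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, waveform):
        # waveform: (batch, samples) mono audio of a wingbeat recording
        spec = to_mel(waveform).unsqueeze(1)   # (batch, 1, n_mels, time)
        return self.head(self.features(spec).flatten(1))

# Toy usage on one second of synthetic audio.
model = WingbeatCNN()
clip = torch.randn(2, SAMPLE_RATE)
print(model(clip).shape)       # (2, NUM_SPECIES) logits
```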
Responsible AI Elements: Community engagement and inclusive consultations ensure the tool addresses local needs and is accessible to vulnerable populations.
AI-Driven Outbreak Early Warning System in the Dominican Republic
The Health Research Institute of the Universidad Autónoma de Santo Domingo is developing an AI-enabled early warning system to predict outbreaks of mosquito-borne diseases such as dengue and Zika. By integrating epidemiological and environmental data, the tool provides timely alerts to improve public preparedness and response.
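As an illustrative sketch of how epidemiological and environmental signals might be combined into a weekly alert (not the institute's actual system), consider the following minimal model built on synthetic data.

```python
# Illustrative sketch only: a minimal early-warning model that flags
# weeks with elevated dengue risk from lagged case counts plus rainfall
# and temperature. Feature choices, thresholds, and the synthetic data
# are assumptions for the example.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
weeks = 200
df = pd.DataFrame({
    "cases": rng.poisson(20, weeks),
    "rainfall_mm": rng.gamma(2.0, 30.0, weeks),
    "mean_temp_c": rng.normal(27, 2, weeks),
})

# Lagged predictors: conditions a few weeks back drive current risk.
for lag in (1, 2, 4):
    df[f"cases_lag{lag}"] = df["cases"].shift(lag)
    df[f"rain_lag{lag}"] = df["rainfall_mm"].shift(lag)
df["alert"] = (df["cases"] > df["cases"].quantile(0.85)).astype(int)
df = df.dropna()

features = [c for c in df.columns if "lag" in c] + ["mean_temp_c"]
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(df[features], df["alert"])

# Probability of an outbreak-level week given the latest observations.
print(model.predict_proba(df[features].tail(1))[0, 1])
```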
Responsible AI Elements: The project incorporates community input, ensures transparency in data use, and focuses on equitable access to outbreak information.
Strengthening Dengue Surveillance with AI in Indonesia
A team at the Universitas Gadjah Mada is developing an AI-enabled prediction model to enhance dengue surveillance by integrating environmental and syndromic surveillance data. Working with Indonesia’s Ministry of Health, the project aims to predict infection rates and improve data quality in healthcare facilities.
Responsible AI Elements: Stakeholder engagement ensures transparency, alignment with local health needs, and ethical use of medical records data.
Enhancing Air Quality Monitoring in South Africa with AI
Wits University and iThemba LABS developed a cost-effective, AI-powered air quality monitoring network to combat pollution in Gauteng province, South Africa. Affordable sensors provide real-time data visualized through dashboards to guide interventions against respiratory and cardiovascular diseases.
Responsible AI Elements: The project makes data publicly accessible, incorporates community feedback, and ensures transparency in AI predictions.
Improving Wastewater Surveillance for Pathogens in Tunisia
The Institut Pasteur de Tunis, in partnership with the Ministry of Health, developed an AI-enabled dashboard to monitor wastewater for high-priority pathogens and antimicrobial resistance. Covering 21 treatment plants, the tool supports outbreak prediction and resource allocation.
Responsible AI Elements: The model promotes transparency, ethical data use, and equitable resource distribution, particularly benefiting marginalized communities.
Community-Based Acute Flaccid Paralysis Surveillance Using AI in Ethiopia
Jimma University developed PolioAntenna, a mobile app leveraging deep learning to enhance acute flaccid paralysis (AFP) surveillance. Trained on 428 images from six Ethiopian regions, the AI system automates detection and supports early polio response efforts.
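One common way to train an image classifier from only a few hundred labeled photos is transfer learning from a pretrained backbone; the sketch below illustrates that pattern with assumed labels and does not reproduce the PolioAntenna model.

```python
# Illustrative sketch only: fine-tuning a small pretrained image model
# on a limited set of labeled photos (the report mentions 428 images),
# a common pattern when local data are scarce. The backbone choice and
# two-class labels are assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # assumed labels, e.g. "possible AFP" vs. "no AFP"

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False            # freeze generic visual features
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)  # trainable head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One toy training step on a synthetic batch of 224x224 RGB images.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (4,))
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```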
Responsible AI Elements: The project involves local health workers, prioritizes gender-inclusive data collection, and ensures culturally-sensitive AI model deployment.
Meet our guest editor:

Chaitali Sinha, a Senior Program Specialist at Canada’s International Development Research Centre (IDRC), has over 20 years of experience supporting research for development programs in Africa, Asia, Latin America and the Caribbean, and the Middle East. Her areas of expertise include sexual and reproductive health and rights, health information systems, digital health, feminist research and gender analysis, and responsible use of artificial intelligence solutions to strengthen health systems. Chaitali holds a master’s degree in international development studies, a bachelor’s degree in management information systems, and completed graduate courses in epidemiology and global health policy.