Public trust of AI in healthcare in South Africa: results of a survey

This literature review details the main themes in the literature on patient opinions on artificial intelligence (AI) use in healthcare. We begin by considering patient attitudes towards AI use in healthcare generally, noting some demographic determinants of AI trust. Next, we consider sources of trust in healthcare and AI-specific challenges to that trust. Lastly, we describe patient opinions on AI’s impact on the doctor-patient relationship.

Patient attitudes

A number of studies from the United States and Europe report mixed attitudes towards AI use in healthcare. Some studies indicate positive attitudes among patients, particularly regarding the potential for increased accuracy and efficiency in clinical decision-making, although these participants also indicate some wariness towards AI use [17, 18]. In particular, oncology patients in the United States and the United Kingdom are hopeful that AI can improve the speed and quality of data analysis and help detect mistakes, especially in the context of cancer care [19]. Additionally, in Europe, 63.4% of respondents reported approving or strongly approving of AI use [20].

Americans also generally perceive healthcare as a field where AI applications could bring benefits, particularly in advancing public health, but they express caution about AI making personal health decisions and managing healthcare data [21]. Although some American participants expressed optimism about AI improving healthcare, this enthusiasm was moderated by significant discomfort and doubt regarding potential negative impacts, including privacy risks, reduced clinician interaction, increased healthcare costs, and AI’s lack of explainability [22].

Despite some positive perceptions, studies consistently highlight significant apprehensions and discomfort among patients, especially when AI systems lack transparency or explainability [22,23,24]. In particular, there is considerable discomfort among American patients when health practitioners rely on AI for their medical care, despite potential benefits such as reducing bias in healthcare [24].

When it comes to personal medical decisions, caution dominates. Two studies from the United States found that participants were generally hesitant to rely on AI systems in these contexts. Specifically, Tyson et al. [24] found that 60% of Americans would feel uncomfortable with their healthcare provider relying on AI for diagnosis and treatment, while Beets et al. [21] reported that only 8% of participants were comfortable with an AI making end-of-life care decisions.

While positive attitudes towards AI do exist, they do not necessarily lead to higher AI adoption rates. A Canadian study found that positive attitudes did not correlate with increased AI uptake, nor did negative attitudes invariably result in resistance [25]. A similar trend was observed in the use of surgical robots, where American participants generally viewed the technology positively but became more cautious when considering its use for their own treatment [24]. More determinative of AI adoption are factors such as familiarity with the technology and education. Studies from the United States and the United Kingdom show that participants unfamiliar with AI-powered surgical robots were more apprehensive about using them for personal care, whereas those familiar with AI were either evenly divided or supportive of their use [19, 24].

Education emerged as a key determinant of support for AI use in healthcare in studies conducted in the United States, United Kingdom, and Europe. Higher educational attainment and income levels were positively associated with support for AI applications in healthcare [19, 24]. Specific education on AI in healthcare increased trust among European participants [20]. Additionally, digital literacy and familiarity with AI technology were positively associated with acceptance in healthcare settings [26]. Education on AI in healthcare also led to increased willingness to share data for AI-driven health research among American participants [21].

Comparison of trust in AI vs. trust in humans

Institutions and professionals are an important source of trust for AI in healthcare. A study from the United States found that participants were more likely to place trust in AI systems when physicians endorsed or recommended their use [27]. Similarly, a Canadian study found that AI applications aligning with appropriate regulation and approved by regulatory bodies were more acceptable to participants [28].

In studies in the United States and Europe, the least trusted stakeholders were commercial entities [20, 29]. In Canadian and Australian studies, participants showed concern over the safety of their data where private stakeholders have access to it [25, 30]. This concern may explain why some participants in a further study from the United States favoured physical consultations with their physicians, demonstrating the trust they place in their physicians [31].

An important means of building trust was knowing how an AI came to its decision. One study from the United States suggested that a lack of interpretability may produce lower trust in AI recommendations [31]. However, when American and Australian participants were asked to prioritise aspects of AI in healthcare, accuracy was valued far more highly than interpretability [30, 32] and was one of the most important factors determining AI use [33]. American participants considered AI’s ability to access more data than humans crucial to its accuracy [33]. Similarly, in another study, American participants were substantially more likely to select AI systems proven to be more accurate than humans [28].

Importantly, the prioritisation of accuracy over interpretability became more pronounced as the stakes of AI decisions rose and resources grew scarce. Comfort with AI use also varied depending on the clinical application of the system [22, 24]. Accordingly, studies from the United States and the United Kingdom found that acceptance of AI systems declined as the severity of the disease being assessed increased and resources became scarcer [22, 26]. Thus, American patients were comfortable with AI recommendations for low-risk interventions such as general wellness strategies or talk therapy, but uncomfortable with AI systems diagnosing disease or recommending medication [18]. This may reflect participants’ mixed opinions on AI’s impact on healthcare outcomes: while some studies in the United States reported positive attitudes towards AI’s impact [22], others reported ethical concerns and a significant lack of support for the view that AI use improves healthcare outcomes [21, 22, 34].

The mixed perceptions of AI’s impact on healthcare outcomes are further illustrated by a study from the United Kingdom in which participants expressed concern that AI could itself become a source of error [19]. Among American participants, this concern was rooted in the perceived rapid emergence and deployment of AI technologies in healthcare and the worry that existing problems would be exacerbated before it is fully understood how to remedy them. A study in the United States found this aversion so persistent that AI uptake may not increase even where AI is proven to be accurate and a physician is given the final decision on medical care [27].

AI impact on the patient-physician relationship

A key healthcare concern raised by American participants was the disruption that AI systems may cause to the patient-physician relationship [24]. The patient-physician relationship is a primary source of trust in healthcare, and participants expressed discomfort with the possibility of AI interfering with this bond, especially in mental healthcare settings [18]. In a study from the United Kingdom, radiotherapy patients in particular were concerned about potentially less personal interaction and fewer opportunities to raise concerns or have worries assuaged by another human [35]. AI’s inability to embody crucial qualities of human relationships, such as empathy and warmth, led European participants to believe that AI could not replace humans in healthcare [17]. In line with this, European patients generally trusted physicians over AI in most clinical settings, except with respect to the most current clinical knowledge, and trusted AI most when it operated under the supervision of a physician [36].

Relegating the AI system to an advisory role was not a complete solution, though. Where AI merely provided recommendations, participants in a study from Europe expressed concern that these recommendations would go unchallenged, leading to overreliance and a loss of expertise, or to blind trust in AI systems [17].

In the interest of preserving responsibility for decisions, American, European, and Australian participants expressed an interest in retaining the choice of whether to use AI systems and in knowing when AI systems are being used [17, 30, 33]. This maintains a means for patients to dispute AI decisions or correct recommendations [17]. In a study from Europe, however, not all participants considered disclosure appropriate; some suggested that it may be unnecessary, confusing, or overwhelming, and could further erode trust or cause harm [17]. Further, in a study from Australia, although some participants felt empowered by the ability to challenge results, others argued that they did not challenge current results from their physicians and therefore did not consider it important to be able to challenge AI recommendations [30]. Nevertheless, the majority of participants recognised that knowing who is responsible for decision-making in healthcare is foundational to patients’ challenging and accepting of decisions [30].

Importantly, in a study from the United States in which an AI system and a physician conflicted in their decision, participants reported being more likely to trust the physician’s decision, even where the AI systems and physicians were equally effective [33]. This was echoed in determinations of who should make final decisions, with most participants in studies from the United States, Canada, and Europe agreeing that physicians should make all care decisions [17, 37]. At least one study from the United States suggested this question may not be significant, finding that whether physicians merely deferred to AI recommendations or incorporated them into their care did not affect acceptance of those recommendations [27].

Ultimately, American patients strongly preferred supervised AI use, considering AI a means to double-check or complement physicians’ efforts rather than a stand-alone technology [33]. They were unwilling to undergo procedures such as autonomous robotic surgery without immediate human supervision [38]. Even where they were willing, it was only when the surgeon had fully explained exactly how the AI system would be applied in the surgery [33].

Ethical considerations in AI adoption in healthcare settings in Africa

The adoption of artificial intelligence (AI) in healthcare across Africa involves complex ethical considerations that intersect with cultural, religious, and socio-economic factors. Elendu et al. [39] highlight how societal trust in AI extends beyond technical reliability to encompass alignment with moral and cultural values. In a similar vein, Ferlito et al. [40] suggest that community-based ethics, such as Ubuntu—which emphasise interconnectedness and shared responsibility—offer an important foundation for ethical AI governance in Africa. Public concerns around data misuse, algorithmic bias, and the dehumanisation of healthcare decisions are central to these considerations, necessitating context-sensitive approaches.

Naidoo et al. [41] focus on the gaps in South Africa’s existing legal frameworks for AI adoption, identifying biases in data collection and outdated regulations as significant barriers to public trust. They advocate for modernising policy to address these gaps, proposing a national governance framework that incorporates fairness and accountability while empowering healthcare workers through reskilling initiatives. These steps are essential to bridging the divide between technological advancement and societal acceptance.

Sihlahla et al. [42] explore ethical principles governing AI in South African radiology, emphasising the need to mitigate algorithmic biases and ensure equitable access to AI benefits. Their work underscores the importance of culturally responsive governance to build trust in AI applications. In a broader analysis, Townsend et al. [43] examine AI governance in 12 African countries, highlighting the lack of public engagement and ethical oversight as barriers to effective implementation. Their findings reveal the need for inclusive policymaking that incorporates diverse cultural and ethical perspectives, fostering trust and acceptance among communities.

Eke et al. [44] echo these sentiments by stressing the importance of public engagement in shaping AI policies. They propose integrating African philosophical values, such as Ubuntu, into governance models to ensure ethical alignment and societal trust. These recommendations align with global frameworks like those of the World Health Organization, which advocate for transparency, accountability, and cultural sensitivity in AI deployment.

Kabata and Thaldar [45] highlight the challenges of implementing the human-in-the-loop requirement in low-resource settings, such as rural Africa, where medical expertise is often limited. They propose a human-rights-based regulatory framework prioritising accessibility and safety, shifting oversight to public institutions to ensure accountable AI governance. Their emphasis on ethical principles, including autonomy and beneficence, underscores the importance of culturally sensitive approaches in building public trust.

In conclusion, the effective adoption of AI in African healthcare requires governance frameworks that integrate cultural values, address biases, and incorporate public engagement. A human-rights-based approach, as advocated by Kabata and Thaldar [45], provides a promising pathway for fostering trust and ensuring that AI technologies align with the needs of diverse and resource-constrained settings.