ISSN: 2574-1241

Impact Factor: 0.548


Research Article | Open Access

Ethical Guidelines of Integrating Artificial Intelligence in Healthcare in Alignment with Sustainable Development

Volume 59, Issue 4

Abdulmajeed Faihan Alotaibi*

• Biomedical Engineering, Medical Services Directorate – Ministry of Defense (MOD), Kingdom of Saudi Arabia

Received: November 04, 2024; Published: November 20, 2024

*Corresponding author: Abdulmajeed Faihan Alotaibi, Biomedical Engineering, Medical Services Directorate – Ministry of Defense (MOD), Kingdom of Saudi Arabia

DOI: 10.26717/BJSTR.2024.59.009325


ABSTRACT

The integration of Artificial Intelligence (AI) in healthcare presents significant opportunities and challenges, particularly concerning ethical considerations. This research employs a narrative review method to synthesize existing literature on the ethical guidelines that govern AI technologies in healthcare, focusing on principles such as equity and accessibility, transparency and accountability, data privacy and security, collaboration, sustainability, patient-centricity, and the promotion of health and well-being. By examining diverse studies, the review identifies emerging trends, highlights gaps in current research, and contextualizes findings within the broader healthcare landscape. The flexibility of the narrative review allows for a comprehensive exploration of the complexities surrounding AI in healthcare, revealing the need for multidisciplinary collaboration and robust regulatory frameworks to ensure responsible implementation. Furthermore, the research underscores the importance of engaging underserved communities and prioritizing sustainability to enhance health outcomes and promote equity. Ultimately, this study contributes to a deeper understanding of how ethical principles can guide the effective integration of AI technologies in healthcare systems, paving the way for improved patient care and alignment with sustainable development goals. In conclusion, while these ethical guidelines provide a robust framework for the integration of AI in healthcare, their successful implementation requires ongoing evaluation and adaptation. Stakeholders must remain vigilant to ensure that AI technologies are not only effective but also equitable, transparent, and aligned with the ultimate goal of improving patient care and health outcomes across diverse populations.

Keywords: Ethics; Guidelines; Artificial Intelligence; Healthcare; Sustainable Development

Introduction

The integration of Artificial Intelligence (AI) in healthcare has emerged as a transformative force, promising to enhance diagnostic accuracy, optimize treatment pathways, and improve patient outcomes (Kulkov, et al. [1]). However, the rapid adoption of AI technologies in this sensitive field raises significant ethical concerns, particularly when viewed through the lens of sustainable development (Liaw, et al. [2]). Sustainable development emphasizes meeting the needs of the present without compromising the ability of future generations to meet their own needs (Fan, et al. [3]). This principle necessitates a careful examination of how AI systems are designed, implemented, and governed, ensuring they contribute positively to health equity, environmental sustainability, and social justice (Di Vaio, et al. [4]). As AI technologies become increasingly prevalent in healthcare, it is essential to establish ethical guidelines that not only address the immediate implications of these technologies but also consider their broader societal impact (Joshi, et al. [5]). Ethical frameworks in AI must encompass principles of equity, accountability, transparency, and patient empowerment, while aligning with the Sustainable Development Goals (SDGs) set forth by the United Nations [6]. These goals serve as a comprehensive blueprint for addressing global challenges, including health inequalities, access to quality healthcare, and sustainable resource management.

AI’s potential to revolutionize healthcare is particularly salient in contexts where traditional healthcare systems face challenges such as resource scarcity, workforce shortages, and rising costs (Murphy, et al. [7]). For instance, AI applications in telemedicine have facilitated remote consultations, thereby improving access to healthcare services in underserved areas (Buruk, et al. [8]). However, as these technologies evolve, it is crucial to scrutinize their ethical implications, especially concerning data privacy, bias, and the potential for exacerbating existing disparities in healthcare access (Palomares, et al. [9]). The ethical guidelines for AI in healthcare must therefore be informed by an understanding of the interconnectedness of health, social equity, and environmental sustainability. The World Health Organization (WHO [10]) emphasizes that health is not merely the absence of disease but a state of complete physical, mental, and social well-being. This holistic view necessitates a multidisciplinary approach to AI implementation in healthcare, where ethical considerations are integrated into every stage of development and deployment (Vinuesa, et al. [11]).

Research Questions

This research seeks to answer the following questions:

1. What ethical principles should guide the development and implementation of artificial intelligence technologies in healthcare to ensure alignment with sustainable development goals?

2. What are the potential impacts of ethical AI practices on the sustainability of healthcare systems, particularly in terms of resource allocation, patient outcomes, and environmental considerations?

3. How can stakeholders in the healthcare sector, including policymakers and practitioners, effectively integrate ethical guidelines for AI to promote equitable access and minimize biases in healthcare delivery?

Research Objectives

This research attempts to fulfill the following objectives:

1. To identify the ethical principles that should guide the development and implementation of artificial intelligence technologies in healthcare to ensure alignment with sustainable development goals.

2. To assess the potential impacts of ethical AI practices on the sustainability of healthcare systems, focusing on resource allocation, patient outcomes, and environmental considerations.

3. To explore effective strategies for stakeholders in the healthcare sector, including policymakers and practitioners, to integrate ethical guidelines for AI, promoting equitable access and minimizing biases in healthcare delivery.

Research Significance

The exploration of ethical guidelines for artificial intelligence (AI) in healthcare from a sustainable development perspective is of paramount importance in today’s rapidly evolving technological landscape. As AI continues to reshape healthcare practices, the need for robust ethical frameworks becomes increasingly critical. This research aims to identify the ethical principles that should guide AI development, ensuring that these technologies not only enhance healthcare delivery but also align with the broader goals of sustainable development. By establishing a strong ethical foundation, this study contributes to creating a healthcare environment that prioritizes patient welfare and societal well-being. Moreover, this research addresses the pressing need for equitable access to AI-driven healthcare solutions. By investigating effective strategies for integrating ethical guidelines, it seeks to empower stakeholders—including policymakers, healthcare providers, and technology developers—to work collaboratively towards minimizing biases and disparities in healthcare delivery. This is especially pertinent in diverse populations, where the risk of exacerbating existing inequalities can undermine the potential benefits of AI.

By fostering an inclusive approach, the research emphasizes the importance of social justice in healthcare, aligning with sustainable development goals that advocate for equity and inclusivity. Lastly, the assessment of the potential impacts of ethical AI practices on the sustainability of healthcare systems provides valuable insights into resource allocation, patient outcomes, and environmental considerations. As healthcare systems worldwide face challenges such as rising costs, resource scarcity, and environmental degradation, understanding how ethical AI can promote sustainability is crucial. This research not only highlights the necessity of integrating ethical considerations into AI applications but also underscores the broader implications for health systems, ultimately contributing to more resilient and sustainable healthcare frameworks. By addressing these interconnected issues, the research aims to pave the way for responsible AI adoption in healthcare, ensuring that technological advancements serve the greater good while safeguarding ethical standards.

Methodology

This research is a narrative review in design. A narrative review is a qualitative research method designed to synthesize existing literature on a specific topic, providing a comprehensive overview rather than a systematic assessment of studies (Rother [12]). This approach is particularly beneficial for exploring complex issues, such as the integration of artificial intelligence in healthcare, as it helps elucidate ethical guidelines, challenges, and opportunities associated with these technologies (Ferrari [13]). By weaving together findings from various studies, a narrative review presents a cohesive narrative that captures the current state of research and identifies implications for future practice and policy. One of the key strengths of a narrative review lies in its methodological flexibility (Rother [12]). Unlike systematic reviews, which adhere to strict protocols for literature selection, narrative reviews allow for the inclusion of a broader range of literature, including theoretical papers and opinion pieces. This flexibility enriches the discussion by incorporating diverse viewpoints and insights. Additionally, narrative reviews effectively identify emerging trends in the literature and highlight gaps in research, guiding future investigations into areas such as the impact of AI on marginalized populations (Ferrari [13]). This narrative review provides valuable contextualization by synthesizing findings from various sources, enabling a deeper understanding of how AI technologies interact with ethical guidelines and healthcare practices.

Literature Review

Ethical AI refers to the development and deployment of artificial intelligence systems that prioritize ethical considerations throughout their lifecycle (Cumming, et al. [14]). This encompasses various principles, including fairness, accountability, transparency, and privacy (Amann, et al. [15]). According to Gabriel [16], ethical AI should ensure that algorithms function without bias, safeguard user data, and provide transparent decision-making processes. These principles are particularly crucial in healthcare, where AI systems can influence clinical decisions that directly affect patient outcomes. Sustainable development in healthcare is defined as the provision of medical services that meet the needs of the present without compromising the ability of future generations to meet their own needs (Roberts, et al. [17]). The World Health Organization (WHO) emphasizes that sustainable healthcare systems must be equitable, efficient, and resilient. This perspective aligns with the United Nations Sustainable Development Goals (SDGs), particularly Goal 3, which aims to ensure healthy lives and promote well-being for all at all ages. Sustainable healthcare practices encompass not only the economic and environmental sustainability of health systems but also the social dimensions, including equitable access to care and the reduction of health disparities (Raman, et al. [18]).

As the integration of AI in healthcare accelerates, the need for comprehensive ethical frameworks has become increasingly evident (Li [19]). Several organizations and researchers have recognized this necessity, establishing guidelines that aim to ensure AI technologies are developed and deployed responsibly. Among these, the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems has articulated a foundational framework that emphasizes key ethical principles essential for guiding AI in healthcare (Stahl [20]). The IEEE framework highlights the importance of transparency, advocating for clear communication about how AI systems operate, including the algorithms used and the data on which they are trained (Taeihagh [21]). This transparency is vital for fostering trust among healthcare providers and patients, as it allows stakeholders to understand the decision-making processes of AI technologies. Additionally, the framework underscores the principle of accountability, emphasizing the necessity of holding individuals and organizations responsible for the outcomes produced by AI systems (Shafik [22]). In high-stakes environments like healthcare, where erroneous outputs can have serious consequences for patient safety, establishing clear lines of responsibility is crucial. Central to the IEEE initiative is the prioritization of human well-being, which ensures that AI technologies are designed to enhance patient welfare, promote health equity, and improve overall health outcomes (Christy, et al. [23]).

Similarly, the World Health Organization (WHO) has recognized the importance of ethical considerations in deploying AI technologies within health systems (Renda [24]). In its guidelines for the ethical use of AI, the WHO emphasizes stakeholder engagement, calling for the active involvement of a diverse range of participants—healthcare professionals, patients, ethicists, and policymakers—in the development and implementation of AI systems. This participatory approach ensures that multiple perspectives are considered, enhancing the relevance and acceptability of AI technologies in various healthcare contexts. Furthermore, the WHO guidelines focus on the promotion of equity, stressing that AI should not exacerbate existing disparities but should instead serve to reduce inequalities in access to healthcare services. This commitment to equity is essential for achieving the United Nations Sustainable Development Goals, particularly those related to health and well-being (Jobin, et al. [25]).

The WHO also advocates for thorough risk assessment and management prior to the deployment of AI systems. This involves evaluating potential ethical, social, and clinical risks associated with AI applications, enabling healthcare providers to mitigate negative impacts and enhance the safety and effectiveness of these technologies. Additionally, the guidelines highlight the necessity for continuous monitoring and evaluation of AI systems post-implementation. This ongoing assessment ensures that any ethical concerns or unintended consequences can be promptly addressed, fostering an adaptive approach to AI governance in healthcare (Thiebes, et al. [26]). Several other frameworks and guidelines have been proposed globally to address the ethical use of AI in healthcare, each emphasizing critical principles to ensure responsible deployment. One significant framework is the European Commission’s Ethics Guidelines for Trustworthy AI, which outlines key requirements for AI systems to be considered trustworthy (Kulkov, et al. [1]). These guidelines stress the importance of human agency and oversight, ensuring that AI empowers human decision-making rather than replacing it. Additionally, they highlight the necessity for technical robustness and safety, where AI systems must be resilient and secure to prevent harm.

Privacy and data governance are also paramount, with a strong emphasis on respecting privacy and ensuring data protection (Liaw, et al. [2]). Transparency is critical, requiring clear communication about AI operations, while principles of diversity, non-discrimination, and fairness demand that AI be inclusive and avoid biases. Furthermore, the guidelines advocate for societal and environmental well-being, ensuring that AI contributes positively to society and the environment, alongside mechanisms for accountability in AI decision-making (Fan, et al. [3]). Another influential set of principles is the Asilomar AI Principles, developed during the Asilomar Conference on Beneficial AI (Di Vaio, et al. [4]). These principles focus on ensuring that AI development is safe and beneficial, asserting that research should be conducted for the benefit of humanity. They emphasize that AI systems should be robust and verifiable and designed to be compatible with human values. Moreover, the impact of AI on society should be continuously monitored and evaluated (Di Vaio, et al. [4]). The American Medical Association (AMA) has also published guidelines that specifically address ethical considerations for AI in healthcare (Buruk, et al. [8]). These guidelines stress the need for clinical effectiveness, requiring that AI demonstrate clinical efficacy and safety. They advocate for equity, ensuring that AI applications promote equitable access to healthcare, while also calling for transparency in AI decision-making processes. Respecting patient autonomy and upholding informed consent are crucial elements of the AMA’s ethical framework.

The National Institute of Standards and Technology (NIST) has developed a framework that emphasizes transparency through clear documentation of AI systems and highlights the importance of robustness and security to protect against adversarial attacks. Additionally, the NIST framework addresses fairness by focusing on mitigating bias in AI training datasets and algorithms (Palomares, et al. [9]). In the realm of interoperability, Health Level Seven International (HL7) provides guidelines that incorporate ethical considerations for AI, particularly regarding patient safety—ensuring that AI applications do not compromise patient safety—and protecting data privacy in AI applications (Vinuesa, et al. [11]). The Oxford Principles for AI, developed by researchers at the Future of Humanity Institute at Oxford University, emphasize that AI should be used for the common good, developed transparently, and designed to be robust and secure. These principles advocate for the governance of AI development by ethical standards (Cumming, et al. [14]). Lastly, the Joint Commission’s Guidance on AI in Healthcare outlines standards for the quality and safety of healthcare organizations, focusing on ensuring that AI applications maintain or improve the quality of patient care. The guidance underscores the importance of patient safety, mandating that AI systems undergo rigorous testing to ensure they do not harm patients (Amann, et al. [15]). Collectively, these frameworks and guidelines provide a comprehensive foundation for the ethical use of AI in healthcare, emphasizing transparency, accountability, equity, and patient safety as essential components of responsible AI deployment.

Results and Discussion

The integration of artificial intelligence (AI) in healthcare presents opportunities to enhance patient care and optimize health systems. However, to ensure that these advancements align with sustainable development goals, seven ethical guidelines have been identified. Each guideline plays a critical role in framing the responsible use of AI in healthcare.

Equity and Accessibility

Equity and accessibility are fundamental principles that ensure all individuals, regardless of their socio-economic status, geographical location, or demographic characteristics, have equal access to AI-driven healthcare solutions (Raman, et al. [18]). This principle emphasizes the need to address disparities in healthcare access and outcomes that may be exacerbated by AI technologies. Efforts should be made to design AI systems that consider the unique needs of marginalized and underserved populations, ensuring that these innovations do not reinforce existing inequalities but rather promote health equity (Gabriel [16]).

Transparency and Accountability

Transparency and accountability are crucial for building trust in AI systems within healthcare. Transparency involves clear communication about how AI algorithms operate, the data used for training, and the decision-making processes involved (Li [19]). This openness allows stakeholders, including patients and healthcare providers, to understand and trust the technology. Accountability refers to the responsibility of developers, healthcare organizations, and policymakers to ensure that AI systems are used ethically and effectively. Mechanisms must be established to hold parties accountable for the outcomes generated by AI applications, particularly in cases where decisions may significantly impact patient health and safety (Roberts, et al. [17]).

Data Privacy and Security

Data privacy and security are paramount in the context of AI in healthcare, as these technologies often rely on vast amounts of sensitive patient data (Stahl [20]). This principle stresses the importance of protecting personal health information from unauthorized access and breaches. AI systems must be designed with robust security measures that comply with relevant regulations (such as HIPAA in the United States or GDPR in Europe) to safeguard patient privacy. Moreover, ethical guidelines should address informed consent, ensuring that patients are aware of how their data will be used and have the option to opt-in or opt-out of data sharing (Taeihagh [21]).
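As an illustrative sketch only (not part of the cited guidelines), the opt-in consent requirement described above can be enforced programmatically before patient records ever reach an AI training pipeline. The record fields and the `consented` flag below are hypothetical; a real system would implement consent management against its own data model and applicable regulations.

```python
from dataclasses import dataclass, field

@dataclass
class PatientRecord:
    # Hypothetical minimal record; real systems hold far more fields,
    # all subject to HIPAA/GDPR-style protections.
    patient_id: str
    consented: bool                 # explicit opt-in for secondary data use
    features: dict = field(default_factory=dict)

def filter_consented(records):
    """Return only the records whose owners opted in to data sharing."""
    return [r for r in records if r.consented]

records = [
    PatientRecord("p1", True, {"age": 54}),
    PatientRecord("p2", False, {"age": 61}),  # opted out: must be excluded
    PatientRecord("p3", True, {"age": 47}),
]

training_set = filter_consented(records)
print([r.patient_id for r in training_set])  # → ['p1', 'p3']
```

The key design point is that exclusion happens at the data-ingestion boundary, so downstream model code never observes non-consented records at all.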

Collaboration and Multidisciplinary Approaches

The complexity of healthcare requires collaboration and multidisciplinary approaches to effectively integrate AI technologies (Shafik [22]). This principle encourages the involvement of diverse stakeholders—such as healthcare professionals, data scientists, ethicists, and patients—in the development and implementation of AI systems. By fostering collaboration, organizations can leverage a wide range of expertise and perspectives, leading to more effective and ethically sound AI solutions. Engaging patients and communities in this process also helps ensure that AI applications are relevant and responsive to the needs of the populations they serve (Christy, et al. [23]).

Sustainability and Environmental Considerations

Sustainability and environmental considerations are increasingly important in the context of healthcare and AI integration (Renda [24]). This principle advocates for the development of AI technologies that not only improve health outcomes but also minimize environmental impact. For example, AI can be used to optimize resource use within healthcare facilities, reduce waste, and enhance energy efficiency. By prioritizing sustainability, healthcare organizations can contribute to broader environmental goals, such as reducing carbon footprints and promoting public health (Jobin, et al. [25]).

Patient-Centric Approaches

Patient-centric approaches place the needs and preferences of patients at the forefront of AI integration in healthcare (Jobin, et al. [25]). This principle emphasizes the importance of designing AI systems that enhance patient engagement and empower individuals to take an active role in their healthcare decisions. AI should be used to provide personalized recommendations, facilitate shared decision- making, and improve the overall patient experience. By focusing on patient-centricity, healthcare providers can foster a more compassionate and effective healthcare environment (Kulkov, et al. [1]).

Promotion of Health and Well-Being

The promotion of health and well-being is an overarching ethical guideline that underscores the ultimate goal of AI applications in healthcare: to enhance the health of individuals and communities (Liaw, et al. [2]). This principle advocates for the use of AI not only to treat illnesses but also to prevent them and promote overall well-being. AI technologies can play a vital role in identifying health risks, supporting preventive care, and facilitating health education. By aligning AI initiatives with the promotion of health and well-being, healthcare systems can contribute to the sustainable development of healthier populations (Fan, et al. [3]). Implementing ethical AI practices can significantly enhance the sustainability of healthcare systems by optimizing resource allocation, improving patient outcomes, and addressing environmental considerations (Di Vaio, et al. [4]). Ethical AI can lead to more efficient resource allocation by identifying high-priority areas for intervention, thereby reducing waste and ensuring that resources are directed toward the most pressing healthcare needs (Joshi, et al. [5]).

Improved patient outcomes can result from AI’s ability to provide personalized treatment recommendations and predictive analytics, which can enhance the quality of care and reduce hospital readmissions (Murphy, et al. [7]). Furthermore, ethical AI practices that prioritize sustainability can help healthcare organizations assess the environmental impact of their technologies, promoting energy-efficient solutions and reducing the overall carbon footprint of healthcare operations (Buruk, et al. [8]). By integrating these ethical considerations, healthcare systems can not only improve their operational efficiency but also contribute to broader sustainability goals, ultimately leading to a healthier population and environment (Palomares, et al. [9]). Inclusive design is paramount in developing AI systems that cater to diverse populations. Researchers argue that engaging underrepresented communities in the design process can lead to more effective solutions that address specific needs (Vinuesa, et al. [11]). However, achieving true inclusivity requires not only stakeholder engagement but also a commitment to understanding the unique challenges faced by these communities. Failure to do so may result in AI systems that inadvertently reinforce existing disparities (Cumming, et al. [14]).

AI applications must be equipped to communicate effectively across different linguistic and cultural contexts. Studies indicate that culturally competent AI can enhance patient engagement and satisfaction (Amann, et al. [15]). However, the challenge lies in developing algorithms that accurately interpret and respond to diverse cultural nuances. Without this capability, AI tools risk alienating users and failing to provide equitable care (Gabriel [16]). Affordability is a critical factor in ensuring accessibility to AI-driven healthcare solutions. While AI has the potential to reduce costs through efficiency, the initial investment in technology can be prohibitive for low-income populations (Roberts, et al. [17]). Policymakers must explore models that subsidize costs for underserved communities to prevent widening the healthcare gap (Raman, et al. [18]). Enhancing digital literacy is essential for empowering individuals to utilize AI technologies effectively. Research shows that low digital literacy can hinder the adoption of telehealth services, particularly among older adults and marginalized groups (Li [19]). Therefore, targeted educational initiatives are necessary to equip these populations with the skills needed to navigate AI-driven healthcare systems (Stahl [20]).

AI can play a pivotal role in identifying health disparities by analyzing large datasets to uncover patterns that may not be visible through traditional methods (Christy, et al. [23]). However, the effectiveness of this approach depends on the quality and representativeness of the data used. If datasets are biased or incomplete, AI may perpetuate existing inequalities rather than mitigate them (Renda [24]). Personalization of care through AI can enhance treatment outcomes by considering social determinants of health. However, researchers caution that algorithms must be carefully designed to avoid reinforcing biases present in historical data (Jobin, et al. [25]). Continuous monitoring and adjustment of AI systems are necessary to ensure that they adapt to the evolving needs of diverse patient populations (Thiebes, et al. [26]). Establishing robust regulatory frameworks is crucial for guiding the ethical deployment of AI in healthcare. Current regulations often lag behind technological advancements, leading to gaps that can be exploited (Kulkov, et al. [1]). Policymakers must prioritize equity and accessibility in these frameworks to ensure that AI technologies do not exacerbate existing disparities (Liaw, et al. [2]).
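One concrete way to operationalize the dataset-representativeness concern raised above is a routine audit comparing each subgroup's share of a training dataset against its share of the served population. The following sketch is purely illustrative: the subgroup labels, population shares, and the 10% tolerance threshold are hypothetical choices, not values drawn from the literature cited here.

```python
from collections import Counter

def representation_report(groups, population_shares, tolerance=0.10):
    """Compare each subgroup's share of a dataset against its share of the
    served population. Flag any subgroup whose dataset share falls short of
    its population share by more than `tolerance` (absolute difference).
    The tolerance value here is an illustrative choice."""
    counts = Counter(groups)
    total = sum(counts.values())
    report = {}
    for group, pop_share in population_shares.items():
        data_share = counts.get(group, 0) / total
        underrepresented = (pop_share - data_share) > tolerance
        report[group] = (data_share, pop_share, underrepresented)
    return report

# Hypothetical dataset: 80% group A, 20% group B,
# while the served population is 60% A / 40% B.
groups = ["A"] * 80 + ["B"] * 20
report = representation_report(groups, {"A": 0.60, "B": 0.40})
print(report["B"])  # → (0.2, 0.4, True): group B is underrepresented
```

An audit like this is only a first step; it detects sampling imbalance, not the historical or label bias that the cited authors also warn about, which requires separate model-level evaluation.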

Stakeholders in the healthcare sector, including policymakers and practitioners, can effectively integrate ethical guidelines for AI by adopting a collaborative and inclusive approach (Fan, et al. [3]). This involves establishing multidisciplinary teams that include ethicists, healthcare professionals, and community representatives to ensure diverse perspectives are considered in AI development. Policymakers should create regulatory frameworks that mandate transparency and accountability in AI systems, requiring developers to disclose how algorithms are trained and validated to minimize biases (Di Vaio, et al. [4]). Additionally, training programs focused on ethical AI practices can equip healthcare providers with the knowledge to recognize and address biases in AI applications. Engaging with underserved communities during the design and implementation phases can also help ensure that AI solutions are accessible and relevant to all populations [27]. By prioritizing these strategies, stakeholders can promote equitable access to AI technologies and mitigate the risk of exacerbating existing health disparities (Murphy, et al. [7]).

Conclusion & Recommendations

The development and implementation of AI technologies in healthcare should be guided by several ethical principles to ensure alignment with sustainable development goals. Key principles include Equity and Accessibility, which emphasize the need for inclusive design that allows marginalized populations to benefit from AI advancements. Transparency and Accountability are crucial for fostering trust among stakeholders, ensuring that AI systems operate in an understandable and responsible manner. Additionally, Data Privacy and Security must be prioritized to protect sensitive patient information while enabling effective data utilization.

The ethical guidelines for AI in healthcare must be rooted in the principles of sustainable development, addressing the multifaceted challenges posed by the integration of AI technologies. By prioritizing equity, transparency, data privacy, and interdisciplinary collaboration, these guidelines can ensure that AI serves as a force for good within the healthcare sector. As the landscape of healthcare continues to evolve with technological advancements, it is imperative that ethical considerations remain at the forefront, guiding the development and implementation of AI in ways that enhance health outcomes for all while safeguarding the planet for future generations. The establishment of robust ethical frameworks will not only bolster public trust in AI technologies but also contribute to a more equitable and sustainable healthcare system.

While the integration of AI in healthcare holds significant promises for enhancing equity and accessibility, it is imperative to critically evaluate the guidelines that govern its implementation. By addressing the challenges associated with inclusive design, cultural competence, affordability, digital literacy, health disparity identification, personalized care, and regulatory frameworks, stakeholders can work towards a more equitable healthcare system that leverages the full potential of AI technologies. To effectively harness the potential of AI in healthcare while upholding ethical principles, several key recommendations should be implemented. First, it is essential to establish multidisciplinary collaborations that bring together healthcare professionals, data scientists, ethicists, and community representatives. This collaborative approach will ensure that diverse perspectives are integrated into the development and implementation of AI technologies, enhancing their relevance and effectiveness in addressing the needs of various populations.

Additionally, developing and enforcing robust regulatory frameworks is crucial. These regulations should mandate transparency and accountability in AI systems, requiring clear documentation of algorithmic decision-making processes and data usage. Such measures will foster trust among stakeholders and patients, ensuring that AI technologies are deployed responsibly and ethically.

Training and education also play a vital role in promoting ethical AI integration. Targeted training programs for healthcare providers should focus on recognizing and addressing biases in AI technologies. By equipping practitioners with the knowledge necessary to promote equitable care delivery, healthcare systems can mitigate the risks of exacerbating existing disparities.

Engaging underserved communities in the design and testing of AI applications is imperative to ensure that their specific needs are met. This active involvement promotes accessibility and relevance in AI solutions, ultimately resulting in better health outcomes for marginalized populations.

Incorporating sustainability assessments into the evaluation of AI technologies is another important recommendation. By prioritizing environmental considerations, healthcare organizations can evaluate the ecological impact of AI applications and promote practices that reduce the carbon footprint of their operations, aligning with broader sustainability goals. Finally, establishing mechanisms for the continuous monitoring and evaluation of AI applications will allow stakeholders to assess their impact on health equity, patient outcomes, and resource allocation. This ongoing evaluation enables timely adjustments to improve the effectiveness of AI technologies, ensuring that they contribute positively to the healthcare landscape.

References

  1. Kulkov I, Kulkova J, Rohrbeck R, Menvielle L, Kaartemo V, et al. (2024) Artificial intelligence-driven sustainable development: Examining organizational, technical, and processing approaches to achieving global goals. Sustainable Development 32(3): 2253-2267.
  2. Liaw ST, Liyanage H, Kuziemsky C, Terry AL, Schreiber R, et al. (2020) Ethical use of electronic health record data and artificial intelligence: recommendations of the Primary Care Informatics Working Group of the International Medical Informatics Association. Yearbook of Medical Informatics 29(1): 51-57.
  3. Fan Z, Yan Z, Wen S (2023) Deep learning and artificial intelligence in sustainability: a review of SDGs, renewable energy, and environmental health. Sustainability 15(18): 13493.
  4. Di Vaio A, Palladino R, Hassan R, Escobar O (2020) Artificial intelligence and business models in the sustainable development goals perspective: A systematic literature review. Journal of Business Research 121: 283-314.
  5. Joshi R, Pandey K, Kumari S (2024) Artificial Intelligence for Advanced Sustainable Development Goals: A 360-Degree Approach. In: Preserving Health, Preserving Earth: The Path to Sustainable Healthcare, pp. 281-303.
  6. United Nations (2015) Transforming our world: The 2030 agenda for sustainable development. United Nations.
  7. Murphy K, Di Ruggiero E, Upshur R, Willison DJ, Malhotra N, et al. (2021) Artificial intelligence for good health: a scoping review of the ethics literature. BMC Medical Ethics 22: 1-17.
  8. Buruk B, Ekmekci PE, Arda B (2020) A critical perspective on guidelines for responsible and trustworthy artificial intelligence. Medicine, Health Care and Philosophy 23(3): 387-399.
  9. Palomares I, Martínez Cámara E, Montes R, García Moral P, Chiachio M, et al. (2021) A panoramic view and SWOT analysis of artificial intelligence for achieving the sustainable development goals by 2030: progress and prospects. Applied Intelligence 51: 6497-6527.
  10. World Health Organization (2020) Sustainable Development.
  11. Vinuesa R, Azizpour H, Leite I, Balaam M, Dignum V, et al. (2020) The role of artificial intelligence in achieving the Sustainable Development Goals. Nature Communications 11(1): 1-10.
  12. Rother ET (2007) Systematic literature review X narrative review. Acta Paulista de Enfermagem 20: v-vi.
  13. Ferrari R (2015) Writing narrative style literature reviews. Medical Writing 24(4): 230-235.
  14. Cumming D, Saurabh K, Rani N, Upadhyay P (2024) Towards AI ethics-led sustainability frameworks and toolkits: Review and research agenda. Journal of Sustainable Finance and Accounting 1: 100003.
  15. Amann J, Blasimme A, Vayena E, Frey D, Madai VI, et al. (2020) Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Medical Informatics and Decision Making 20(1): 1-9.
  16. Gabriel I (2020) Artificial intelligence, values, and alignment. Minds and Machines 30(3): 411-437.
  17. Roberts H, Zhang J, Bariach B, Cowls J, Gilburt B, et al. (2024) Artificial intelligence in support of the circular economy: ethical considerations and a path forward. AI & Society 39(3): 1451-1464.
  18. Raman R, Pattnaik D, Lathabai HH, Kumar C, Govindan K, et al. (2024) Green and sustainable AI research: an integrated thematic and topic modeling analysis. Journal of Big Data 11(1): 55.
  19. Li Z (2024) Ethical frontiers in artificial intelligence: navigating the complexities of bias, privacy, and accountability. International Journal of Engineering and Management Research 14(3): 109-116.
  20. Stahl BC (2021) Artificial Intelligence for a Better Future: An Ecosystem Perspective on the Ethics of AI and Emerging Digital Technologies, p. 124.
  21. Taeihagh A (2021) Governance of artificial intelligence. Policy and Society 40(2): 137-157.
  22. Shafik W (2024) Artificial Intelligence and Machine Learning with Cyber Ethics for the Future World. In: Future Communication Systems Using Artificial Intelligence, Internet of Things and Data Science, pp. 110-130.
  23. Christy V, Manda VK, Gnanadasan ML (2024) Ethical Frameworks for Use in Artificial Intelligence Systems. In: Generative AI and Implications for Ethics, Security, and Data Management, pp. 122-154.
  24. Renda A (2019) Artificial Intelligence: Ethics, governance and policy challenges. CEPS Centre for European Policy Studies.
  25. Jobin A, Ienca M, Vayena E (2019) The global landscape of AI ethics guidelines. Nature Machine Intelligence 1(9): 389-399.
  26. Thiebes S, Lins S, Sunyaev A (2021) Trustworthy artificial intelligence. Electronic Markets 31: 447-464.
  27. Leimanis A, Palkova K (2021) Ethical guidelines for artificial intelligence in healthcare from the sustainable development perspective. European Journal of Sustainable Development 10(1): 90-90.