Indian Journal of Community Medicine

: 2021  |  Volume : 46  |  Issue : 2  |  Page : 178--181

Role of public health ethics for responsible use of artificial intelligence technologies

Sudip Bhattacharya1, Md Y Hossain2, Ruchi Juyal1, Neha Sharma1, Keerti Bhusan Pradhan3, Amarjeet Singh4
1 Department of Community Medicine, Himalayan Institute of Medical Sciences, Dehradun, Uttarakhand, India
2 Department of Health Promotion and Community Health Sciences, School of Public Health, Texas A and M University, Texas, USA
3 Department of Healthcare Management, Faculty of Healthcare Management, Chitkara University, Rajpura, Punjab, India
4 Department of Community Medicine and School of Public Health, PGIMER, Chandigarh, India

Correspondence Address:
Dr. Sudip Bhattacharya
Department of Community Medicine, Himalayan Institute of Medical Sciences, Dehradun, Uttarakhand


Recent advancements in artificial intelligence (AI) technologies have shown promising success in optimizing health-care processes and improving health services research and practice, leading to better health outcomes. However, the role of public health ethics in the era of AI has not been widely evaluated. This article aims to describe a responsible approach to AI design, development, and use from a public health perspective. This responsible approach should focus on the collective well-being of humankind and incorporate ethical principles and societal values. Such approaches are important because AI concerns and impacts the health and well-being of all of us collectively. Rather than limiting such discourses to the individual level, ethical considerations regarding AI systems should be analyzed at large, considering the complex socio-technological reality around the world.

How to cite this article:
Bhattacharya S, Hossain MY, Juyal R, Sharma N, Pradhan KB, Singh A. Role of public health ethics for responsible use of artificial intelligence technologies. Indian J Community Med 2021;46:178-181

How to cite this URL:
Bhattacharya S, Hossain MY, Juyal R, Sharma N, Pradhan KB, Singh A. Role of public health ethics for responsible use of artificial intelligence technologies. Indian J Community Med [serial online] 2021 [cited 2021 Jun 20];46:178-181
Available from:

Full Text


Technological advancements such as artificial intelligence (AI) are all set to rapidly change our daily lives across the globe.[1] Our lives have become easier due to better processing of data and informed decision-making with the help of AI technologies. For example, many of us now use a virtual assistant, such as Alexa, for routine domestic activities.[2]

It is generally agreed that most technical or research work in the future will be based on machine learning, AI, and robotic technologies.[1],[2]

AI refers to intelligence acquired by a computer system through its repeated use, by which it can perform the activities designed for it with high (reportedly 90%–96%) accuracy and reliability.[1]

Such applications are emerging across most service industries, including health care; for example, personalized AI-based robots are being used to address geriatric care-related problems in Japan, helping older persons with self-care tasks such as taking their morning pills or switching off the light at bedtime.[3] While most of these applications of AI have shown success in developed nations, many resource-constrained developing countries have also started adopting such technologies, though on a limited scale.

Niramai Health Analytics (Bengaluru) has developed Thermalytix, an AI-embedded technology by which breast cancer can be detected 5 years earlier than by mammography or clinical examination; this is indeed path-breaking research in the field of cancer.[4],[5]

In India, many tertiary care centers now conduct robot-assisted surgery (urological surgery, neurosurgery, and more) because of its certain advantages over conventional surgery (e.g., suturing small nerves, which is beyond human skill).[6]

However, in both clinical settings and public health organizations, critical challenges may affect the application of AI in health care, for example, effects on patient–doctor communication, patient safety, the efficacy and efficiency of health services, and the humane aspects of formal and informal caregiving. Moreover, individuals, organizations, and nations with limited resources may not be able to leverage the ethical and safe use of AI to advance their health and well-being.

Ethical aspects raise many questions: Will there be sociodemographic biases, such as racial discrimination and cultural differences, in AI-based health-care decision-making? How will it affect health care-related jobs and health-care budgeting? Who will be responsible for accidental errors during invasive AI-based medical procedures or for unintended consequences of AI-driven treatment algorithms? What would be the ethical implications of using automated machines in health care?

These questions are quite pertinent, and answering these AI-enabled health care-related ethical questions may not be easy. For example, a Nigerian AI-enabled start-up, SaferMom, enables pregnant women to use low-cost mobile technologies to receive health information, such as danger signs in pregnancy and vaccination schedules, and make informed decisions.

Yet again, the point is: if misinformation delivered through such a platform leads to a complication, who will be held accountable? The software developers of the company, or the individuals who approved the use of the platform?

One prevailing example is "facial recognition," which has become contentious because of its lower accuracy in recognizing African faces compared with Caucasian faces. The reason is that the training datasets on which these systems were built were collected predominantly from Caucasian faces.

Erroneous results in medical research can occur if the research does not capture the nature of the whole treated population. Another universal example is that current medical research datasets on heart attacks are focused on men, while heart attack symptoms differ between males and females. Not including women in research studies thus leads to biased results. In a similar fashion, the priority of health issues may change with respect to time, place, and person.[7]
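The dataset-representation problem described above can be illustrated with a small, purely synthetic sketch (the "symptom score," group sizes, and all numbers are hypothetical, not real clinical data): a single diagnostic threshold tuned on a male-dominated dataset misses a far larger share of cases in the under-represented group.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: men with heart attack tend to show high "typical
# symptom" scores, while women more often present atypically (lower scores).
n_men, n_women = 900, 100                      # women under-represented
men_cases = rng.normal(7.0, 1.0, n_men)        # scores of men with heart attack
women_cases = rng.normal(4.0, 1.0, n_women)    # scores of women with heart attack
healthy = rng.normal(2.0, 1.0, 1000)           # scores of people without one

# Pick the single decision threshold that maximizes balanced accuracy
# on the pooled (male-dominated) training data.
cases = np.concatenate([men_cases, women_cases])

def balanced_accuracy(t):
    sensitivity = (cases > t).mean()           # cases correctly flagged
    specificity = (healthy <= t).mean()        # healthy correctly cleared
    return (sensitivity + specificity) / 2

best_t = max(np.linspace(0.0, 10.0, 1001), key=balanced_accuracy)

# Sensitivity (true-positive rate) by group under the shared threshold:
sens_men = (men_cases > best_t).mean()
sens_women = (women_cases > best_t).mean()
print(f"threshold={best_t:.2f}  sensitivity: men={sens_men:.2f}, women={sens_women:.2f}")
```

Because the threshold is fitted to the pooled, mostly male data, it sits far above the typical female presentation, so women's cases are missed much more often than men's even though the overall accuracy of the model looks good. The same mechanism underlies the facial recognition disparity mentioned earlier.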

Thus, the promised advantages of AI in improving health care-related organizational practices can be jeopardized by the above-mentioned challenges.

These issues highlight the need for a more detailed analysis of different aspects of AI technology before adopting or applying it in the health-care sector.[8],[9],[10]

Due to progressive advancements in AI technologies, there is a pressing need to conduct health impact assessments of adopting such technologies in health care.[11] Responsible members of society, such as policymakers, religious leaders, scientists, health workers, civil society, and the common people, may have multiple queries about the prospects and challenges of using AI-based medical technologies. This necessitates an entirely new understanding of techno-social and techno-medical interactions, of the ethical considerations of AI systems, and of innovative procedures to safeguard autonomy, beneficence, and human rights in the age of AI-based medical systems.[11]

To better understand and address these issues, it is essential to adopt an accountability framework approach to the design, development, and implementation of AI-based medical technology. This approach should center on the well-being of humankind, incorporating ethical principles and societal values. Such an approach is important because AI concerns and impacts all of us collectively. We should not limit our analysis to the individual level; rather, we must study AI-based medical systems in the context of the multifaceted techno-medical reality and dynamics within society.[11]

Regarding a responsible AI approach, Stuart Russell opined that "If we are developing machines to act with some autonomy, then we had better be quite sure that the purpose put into the machine is the purpose which we really desire."[12]

Defining "responsibility" is a challenging as well as exhausting task. Here, the moot question is: what is the real meaning of responsibility? Who is responsible, for what, in what capacity, and who decides that? To our knowledge, AI-based medical systems are nothing but medical devices built for an intended purpose, that is, saving or improving life. They cannot be perceived as responsible players. That is why a chain of responsibility is required that links the AI system's behavior to accountable players.[13]

Recently, the European Parliament has called for legal personhood for AI-based medical systems, based on an extrapolation of the current potential of AI competencies. However, this is not scientifically grounded.[13]

Even if we agree on the legal personhood of AI systems, many questions remain unanswered. For example, where will the responsibility for clinical decision-making lie?

In an AI system, who will be accountable for clinical decisions? Is the diagnostic algorithm developer responsible, or the provider of the human data, the producers of the sensors used for data collection, the medicolegal authorities that authorized the use of such software, or the user who agreed with and acted on the medical device's output? Answering these questions is difficult, and ascertaining the accurate distribution of responsibility is more difficult still.[13]

To tackle this issue, a new governance framework for AI systems is required, one that will ensure and monitor the chain of responsibility across all players in different settings. AI governance should also ensure that the advance of AI technology is centered on the beneficence and well-being of society. To implement this, lawmakers require a thorough understanding of the capacity and limitations of each AI system in order to address challenging issues such as accountability, responsibility, and transparency. We are ultimately responsible for the positive and negative consequences of AI technology.[13]

From a public health perspective, too, we must acknowledge the importance of human values when devising any new medical technology such as AI, and we must simultaneously maintain a continuous chain of accountability and trust for the activities and results of AI systems across different settings in our communities. Responsibility and accountability rest not only with those who produce or implement AI systems; they are also a collective responsibility of the governments that legislate on their implementation in different departments of health care. The final aim of evolving AI should not be the construction of intelligent medical devices that supersede health workers. Rather, it should aim to make our lives better through advanced technologies. It is all about considering and shaping technology that can influence our daily lives in a healthy way. The focus of AI should not only be on making or using hi-tech machines but also on ensuring that AI remains person centric and grounded in humanity. That is why AI technology is trans-disciplinary in nature; it requires contributions from sectors such as medical sciences, social sciences, law, economics, and the humanities, besides technological advances.[13]

The term "responsible AI" is often used interchangeably with ethics. However, the two are not the same concept, even if tightly knit. Responsibility is the practical aspect of ethics: it deals with legal, economic, and cultural aspects to decide the benefit or loss to society, whereas ethics is "the study of morals and values."

Ethical success demands sustained observation of what happens, whereas responsible AI is action oriented.[13]

It is more than ticking off ethical checklists; rather, it is the development of smart systems based on core human values and ethics. In simple words, AI systems are machines made by humans to achieve certain objectives. That is why it is of the utmost importance to look into their societal, medicolegal, and ethical aspects during all stages of AI system development.

Following are the ways to maintain ethics in designing AI-based systems in health care:[13]

Ethics in design, which refers to the methodological and technical approaches that consider cultural aspects during the design of AI systems

Ethics in application, which is more about the behavioral ethics of developing and adopting AI systems in health care. Such ethical discussions may focus on the behavioral aspects of the people designing and using AI systems in health services, which is likely to shape the future digital behavior of AI systems

Ethics for diverse stakeholders, which may refer to the rules and governing requirements that safeguard the integrity of all players as they develop and implement AI systems. Understanding these ethical constructs may inform how complex AI systems can effectively serve the population's health needs, which is contingent upon the actions of individual and collective human intelligence in social contexts.

Several agencies have already started developing regulatory frameworks for using AI in clinical practice, such as the "Ethics guidelines for trustworthy AI" by the European Commission. The US Food and Drug Administration also acknowledges this issue of regulatory framework development. Such guidelines are extremely essential for the optimal development and implementation of AI-based systems.[12],[13],[14]

Singapore's framework on how AI can be used ethically and responsibly unfolds new perspectives and challenges. A few articles of this framework also describe the subtleties of the challenges related to aggregated data and the preservation of human autonomy. Article 3.6 states that, as people live according to their societal contexts, organizations operating in various countries must contemplate the differing norms and values of each society. Article 3.7 adds that certain risks are manifested at the group level.

The "Ethical AI Toolkit" developed by the Smart Dubai Office guides individuals and organizations providing AI services.[13]

This toolkit addresses the "black box" issue and prescribes intelligible AI systems. It also offers suggestions for identifying bias in decision-making processes.

Despite offering good high-level guidelines, the main drawback of this toolkit is that it fails to describe how such values are to be implemented.[14]

The demand for AI has increased so much that international organizations are incorporating AI into their research, agendas, and work plans. One such example is the AI guideline formulated by the Organisation for Economic Co-operation and Development. Similarly, a third high-level meeting, the "AI for Good Global Summit," will be conducted by the International Telecommunication Union (ITU) very soon.

The World Intellectual Property Organization is also in the lead, having released a major report on AI, and the WHO is trying to create a focus group on AI in collaboration with the ITU and the International Labour Organization.[7],[15]

We believe that such efforts can lay down powerful platforms for shaping the standards and benchmarks that will guide national policymakers.[7],[15]

 Conclusion and Recommendation

In the era of rapid globalization and digitalization, it is already difficult to manage the preexisting digital divide among nations. In such a scenario, the increasing use of newer technologies such as AI may further widen health disparities. Arguably, how public health systems and organizations respond to such complex challenges will largely depend on how effectively the ethical challenges are investigated and addressed. Such responses will need empirical research, philosophical discourse, and a collective will among the key stakeholders in health care, upholding the promises of AI while minimizing digital health disparities around the world.


The authors would like to thank all the authors of the books, articles, and journals that were referred to in preparing this manuscript.

Financial support and sponsorship


Conflicts of interest

There are no conflicts of interest.


1Artificial Intelligence and its Role in Near Future. GroundAI. Available from: [Last accessed on 2019 Feb 15].
2Amazon Alexa. In: Wikipedia. 2019. Available from: [Last accessed on 2019 Feb 15].
3Japanese Automakers Look to Robots to Aid the Elderly. Scientific American. Available from: [Last accessed on 2019 Feb 15].
4Niramai – A Novel Breast Cancer Screening Solution. Available from: [Last accessed on 2018 Nov 12].
5Thermalytix– Niramai [Internet]. Available from: [Last accessed on 2018 Oct 30].
6Adverse Events in Robotic Surgery: A Retrospective Study of 14 Years of FDA Data. Available from: [Last accessed on 2019 Feb 15].
7Sallstrom S, Morris O, Mehta H. Artificial Intelligence in Africa's Healthcare: Ethical Considerations. ORF. Available from: [Last accessed on 2020 Jan 14].
8Jiang F, Jiang Y, Zhi H, Dong Y, Li H, Ma S, et al. Artificial intelligence in healthcare: Past, present and future. Stroke Vasc Neurol 2017;2:230.
9Harari YN. 21 Lessons for the 21st Century. London: Jonathan Cape; 2018. p. 368.
10Woirol GR. The Technological Unemployment and Structural Unemployment Debates. Westport, Conn: Praeger; 1996. p. 224.
11Globalising Artificial Intelligence for Improved Clinical Practice. Indian Journal of Medical Ethics. Available from: [Last accessed on 2019 Nov 22].
12Artificial Intelligence and the Future of Humans. Pew Research Center: Internet, Science & Tech; 2018. Available from: [Last accessed on 2020 Jan 14].
13Dignum V. Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Cham: Springer International Publishing; 2019. (Artificial Intelligence: Foundations, Theory, and Algorithms). Available from: [Last accessed on 2019 Nov 22].
14Artificial Intelligence Principles & Ethics, Smart Dubai. Available from: [Last accessed on 2020 Jan 14].
15European Commission. Ethics Guidelines for Trustworthy AI. DigitalSingle Market. 2019. Available from: [Last accessed on 2019 Oct 20].