Lydia Laval

Practical AI Ethics

Advancements in artificial intelligence (AI) are revolutionising human life in unprecedented ways, ushering in what many refer to as the second machine age. In this era, AI technologies are reshaping societal interactions and norms, prompting a critical reassessment of ethics in the development and deployment of AI systems. It is no longer sufficient to view technology merely as a tool; instead, ethics must be deeply integrated into the very fabric of AI systems to ensure responsible and beneficial outcomes for society. In this article, we argue that incorporating ethics into AI systems is imperative, and propose three key considerations to guide this discourse: autonomy, the right to explanation, and value alignment.



The rapid proliferation of AI technologies across various domains underscores the need for ethical considerations. While these technologies offer significant benefits, such as increased efficiency and productivity, they also raise profound societal questions regarding their impact on employment, decision-making processes, and personal autonomy. As AI systems become more autonomous and pervasive, concerns about their ability to make ethical decisions independently intensify. Therefore, it is essential to develop frameworks that embed ethical principles into AI systems, ensuring that they align with societal values and priorities.


Autonomy is a central aspect of AI systems, but it must be accompanied by ethical constraints. While autonomous agents have the capacity to make decisions independently, they should operate within ethical boundaries established by human stakeholders. The notion of autonomy in AI is not reducible to freedom and free will; it encompasses the ability to act in alignment with ethical principles and rationality. By setting clear guidelines and constraints, humans can ensure that AI systems prioritise ethical considerations in their decision-making processes.
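As a toy illustration, the idea of an autonomous agent operating inside human-set ethical boundaries can be sketched as a constraint filter over candidate actions. Everything here (the `Action` type, the `discloses_data` attribute, `choose_action`) is hypothetical scaffolding for illustration, not a real framework:

```python
# Sketch: an agent chooses freely, but only among actions that pass
# every human-specified ethical constraint. All names are illustrative.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    name: str
    utility: float          # the agent's own estimate of usefulness
    discloses_data: bool    # example attribute a constraint may inspect

# A constraint is a predicate: True means the action is permissible.
EthicalConstraint = Callable[[Action], bool]

def choose_action(candidates: List[Action],
                  constraints: List[EthicalConstraint]) -> Action:
    """Pick the highest-utility action that satisfies every constraint."""
    permissible = [a for a in candidates
                   if all(c(a) for c in constraints)]
    if not permissible:
        raise RuntimeError("No permissible action; defer to a human.")
    return max(permissible, key=lambda a: a.utility)

# Human stakeholders encode a boundary: never share personal data.
no_data_sharing: EthicalConstraint = lambda a: not a.discloses_data

actions = [Action("sell-user-profile", utility=0.9, discloses_data=True),
           Action("show-generic-ad", utility=0.4, discloses_data=False)]

best = choose_action(actions, [no_data_sharing])
print(best.name)  # the lower-utility but permissible action is chosen
```

The point of the sketch is the division of labour: the agent optimises autonomously, but the boundary of the permissible is set by humans, not learned or negotiated by the system itself.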


The right to explanation is another critical aspect of ethical AI. As AI systems make decisions that impact individuals and society, there is a growing demand for transparency and accountability. Users have a moral right to understand the rationale behind AI decisions, especially in cases where these decisions have significant implications. Explainable AI (XAI) seeks to address this need by providing clear and understandable explanations for AI decisions, enhancing trust and accountability in AI systems.
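To make "explanation" concrete, consider the simplest possible case: a linear scoring model, where each feature's weighted contribution is itself an explanation of the decision. The model, weights, and feature names below are invented for illustration; real XAI work applies analogous attribution ideas (such as SHAP or LIME) to far more opaque models:

```python
# Sketch: for a linear scoring model, each feature's contribution
# (weight * value) directly explains the decision. Hypothetical model.
weights = {"income": 0.6, "debt": -0.8, "years_employed": 0.3}

def score(applicant: dict) -> float:
    """Weighted sum over the model's features."""
    return sum(weights[f] * applicant[f] for f in weights)

def explain(applicant: dict) -> list:
    """Rank features by the magnitude of their contribution to the score."""
    contribs = {f: weights[f] * applicant[f] for f in weights}
    return sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True)

applicant = {"income": 1.0, "debt": 2.0, "years_employed": 1.5}
print(score(applicant))    # negative score: the application would be declined
for feature, contribution in explain(applicant):
    print(f"{feature}: {contribution:+.2f}")
```

Here the explanation a user would receive is that debt dominated the outcome, a rationale they can verify and contest. The challenge XAI addresses is recovering comparably faithful explanations from models whose internals are not this transparent.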


So far in this article we have merely scratched the surface of a vast field encompassing numerous guidelines and declarations pertaining to the ethical development of AI systems. While diverse in scope and audience, they all aim to foster ethical practices in AI design and deployment. However, the effectiveness of these efforts remains modest, with ethical codes often failing to significantly influence decision-making among developers and engineers. To address this gap, we explore the foundations of AI ethics and propose alternative methodologies to enhance its impact. 


AI ethics documents typically present normative principles and values intended to guide ethical decision-making in AI development. Despite variations in approach, they share a common goal of promoting responsible AI design and use. However, the translation of ethical principles into actionable guidance has proven challenging, with limited evidence of their influence on real-world practices. This disconnect underscores the need for a re-evaluation of the current model of AI ethics and a broader methodological approach.

One prominent example of such a document is the "Ethics guidelines for trustworthy artificial intelligence" by the European Commission's High-Level Expert Group on Artificial Intelligence (AI HLEG). This document, while comprehensive, reflects a narrow understanding of applied ethics, focusing primarily on normative principles without sufficient consideration for practical implementation. Similar initiatives in the United States, such as the National Artificial Intelligence Initiative and IEEE's Ethically Aligned Design, also prioritise ethical principles but lack mechanisms for effective enforcement.


Despite widespread recognition of the importance of ethical AI development, existing guidelines have had limited impact on industry practices. Studies have shown that ethical codes, such as those issued by professional associations like the Association for Computing Machinery, have had minimal influence on the decision-making of software engineers. This discrepancy can be attributed to various factors, including the difficulty of applying abstract ethical principles to concrete design decisions and the lack of organisational support for ethical considerations in AI development.


To address these challenges, scholars have proposed alternative approaches to AI ethics, ranging from virtue ethics education to the development of ethical decision-making frameworks. While these efforts hold promise, they often overlook the systemic barriers to ethical practice within organisations. The instrumental logic of economic enterprises, which prioritises profit-making and efficiency, often undermines ethical considerations in AI development. Additionally, existing organisational structures may not incentivise employees to prioritise ethical concerns, leading to a disconnect between ethical principles and actual practices.


Bioethics offers valuable lessons for the evolution of AI ethics as a field of applied ethics. Emerging in response to the ethical challenges posed by advances in biomedical science and technology, bioethics has undergone significant development as an academic discipline. However, it has also faced criticism for its narrow focus on individual autonomy and its institutionalisation within bureaucratic structures. Critics argue that bioethics has become overly bureaucratised, with bioethicists assuming authoritative roles within healthcare institutions and policy-making bodies. This institutionalisation has led to a prioritisation of certain ethical concerns, such as individual rights and autonomy, at the expense of broader societal values and considerations of social justice. Additionally, the commercialisation of bioethics has raised concerns about conflicts of interest and the influence of industry stakeholders on ethical decision-making.


These lessons bear directly on how AI ethics should develop as a field of applied ethics. By adopting a more holistic and interdisciplinary approach, AI ethics can address the complex ethical challenges posed by AI technologies while incorporating insights from related fields such as systems theory, safety research, and impact assessment. Moreover, AI ethics must consider the broader societal implications of AI deployment, including issues of equity, justice, and human rights.


As machines evolve into moral agents, integrating ethics into AI systems becomes imperative, requiring a nuanced understanding of human-AI coexistence. The future of AI ethics depends on its ability to address complex ethical challenges while transcending disciplinary boundaries. Drawing insights from disciplines like bioethics and other fields of applied ethics, AI ethics aims to develop robust methodologies and frameworks for ethical AI design and implementation, ensuring that AI technologies serve collective welfare and uphold fundamental human values.


Future research should explore hybrid or mixed agency models and how they map onto ethical frameworks, combining philosophical scrutiny with empirical research. AI ethics can also benefit from the experience of neighbouring domains such as bioethics, leveraging formal ethical models to shape norms and recommend best practices.

Ultimately, the measure of success in AI ethics lies in promoting human flourishing, with virtue ethics serving as a promising approach to guide AI ethics toward enhancing human well-being.


References


Anderson, M., and Anderson, S. L. (eds.). (2011). Machine Ethics. Cambridge: Cambridge University Press.


Anagnostou, M., Karvounidou, O., Katritzidaki, C., Kechagia, C., and Melidou, K. (2022). Characteristics and challenges in the industries towards responsible AI: a systematic literature review. Ethics Inf. Technol.


Stoeklé, H.-C., Deleuze, J.-F., and Vogt, G. (2019). Society, law, morality and bioethics: a systemic point of view. Ethics Med. Public Health.


Jonsen, A. R. (2012). "A history of bioethics as discipline and discourse," in Bioethics: An Introduction to the History, Methods, and Practice, 3rd Edn., eds N. S. Jecker, A. R. Jonsen, and R. A. Pearlman (London: Jones & Bartlett Learning).


Ashok, M., Madan, R., Joha, A., and Sivarajah, U. (2022). Ethical framework for artificial intelligence and digital technologies. Int. J. Inf. Manag. 62:102432.


Baars, B. J., and Franklin, S. (2009). Consciousness is computational: the LIDA model of global workspace theory. Int. J. Mach. Conscious.


ACM (2018). ACM Code of Ethics and Professional Conduct: Affirming Our Obligation to Use Our Skills to Benefit Society.


Appelbaum, S. H. (1997). Socio-technical systems theory: an intervention strategy for organizational development. Manag. Decis.


Cooley, K., Walliser, J., and Wolsten, K. (2023). "Trusting the moral judgments of a robot: perceived moral competence and humanlikeness of a GPT-3 enabled AI," in Proceedings of the 56th Hawaii International Conference on System Sciences.


von der Pfordten, D. (2012). Five elements of normative ethics: a general theory of normative individualism. Ethical Theory Moral Pract.


Etzioni, A., and Etzioni, O. (2017). Incorporating ethics into artificial intelligence. J. Ethics.


Frankish, K., and Ramsey, W. M. (2014). "Introduction," in The Cambridge Handbook of Artificial Intelligence, eds K. Frankish and W. M. Ramsey (Cambridge: Cambridge University Press).


Elliott, C. (2005). The soul of a new machine: bioethicists in the bureaucracy. Cambridge Q. Healthc. Ethics.


Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., et al. (2018). AI4People: an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds Mach.


Hooker, J., and Kim, T. W. (2019). Ethical implications of the fourth industrial revolution for business and society.


Kim, T. W., and Routledge, B. R. (2018). "Informational privacy, a right to explanation, and interpretable AI," in 2018 IEEE Symposium on Privacy-Aware Computing (PAC) (Washington, DC: IEEE).


Howard, R. (2019). "Ethical distinctions for building your ethical code," in Next-Generation Ethics: Engineering a Better Society, ed. A. Abbas (Cambridge: Cambridge University Press).


