Gloria Yu (Scholarship Winner) - Artificial Intelligence & Ethics

The 365 Team · 28 Apr 2023 · 11 min read

Our scholarship program returned many great essays, but one stood out from all the others and received a unanimous top vote from our panel. Gloria Yu, who will be studying 'linguistics; speech and language pathology' at University College London, has an amazing talent for writing thought-provoking essays, and we have no doubt she will do great at university.

We hope you enjoy her piece as much as we did.

*The original formatting was preserved in images, and additional subtitles were added by the 365 team.

Artificial Intelligence & Ethics: I am an autonomous system. I learn; I act accordingly. I assist. I harm. Who am I and am I accountable for the consequences of my actions?


In 1664, French philosopher René Descartes’ Treatise on Man proposed a question that would plague philosophers for centuries: does the essence of life, that élan vital, truly exist?

As autonomous systems become increasingly intelligent, the question becomes biological, ethical, and technological: are AI conscious, and if not, do they possess the capacity to become so? Should they be held accountable for their actions, if that is even possible? These questions, it seems, are inseparable from one another—in order to decide if and how AI should be held accountable, one must first decide if they are conscious.


Perplexing as it may be, AI are not conscious beings, at least not from a technological perspective.

Michigan State integrative biology and computer science professor Arend Hintze classifies AI into one of four classes: purely reactive, limited memory, theory of mind, and self-awareness [1]. Autonomous systems in the first class include Google's AlphaGo, which has beaten human experts by using neural networks to predict opponents' moves; the same network then predicts the optimal response.

Autonomous systems belonging to this class react only to the present and do not rely on internal concepts of the world. Limited memory machines collect simple, transient information about the past. These include self-driving cars, which identify and monitor specific objects over time. Within these ranks, too, are Amazon's purchase predictions and Spotify's song recommendations. The latter two classes refer to machines aware of others' emotions and their own internal states.
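
To make the first two classes concrete, here is a minimal, hypothetical Python sketch. The class names and toy policies are invented for illustration and are not drawn from Hintze's article or any real system.

```python
# Hypothetical sketch of Hintze's first two classes of AI.
# All names and policies here are invented for illustration.

class ReactiveAgent:
    """Class I (purely reactive): maps the current state to an action.
    No history, no internal model of the world."""
    def __init__(self, policy):
        self.policy = policy  # e.g., a trained network or a lookup table

    def act(self, state):
        # The decision depends only on the present state.
        return self.policy(state)


class LimitedMemoryAgent:
    """Class II (limited memory): keeps a short, transient window of
    recent observations, like a self-driving car tracking nearby objects,
    but builds no enduring concept of the world."""
    def __init__(self, policy, window=5):
        self.policy = policy
        self.window = window
        self.recent = []  # transient history, not a lasting world model

    def act(self, state):
        self.recent.append(state)
        del self.recent[:-self.window]  # forget anything older than the window
        return self.policy(self.recent)


# Toy usage: the reactive agent responds to the board as it is right now;
# the limited-memory agent responds to its last few observations.
reactive = ReactiveAgent(policy=lambda s: "attack" if s == "weak spot" else "defend")
tracker = LimitedMemoryAgent(policy=lambda recent: f"track {recent[-1]}")

print(reactive.act("weak spot"))  # -> attack
print(tracker.act("pedestrian"))  # -> track pedestrian
```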

According to Microsoft senior developer Graeme Malcolm, these AI currently do not exist [2]. Contemporary AI's algorithms are optimized for specific purposes [3], and so cannot be easily applied to other situations [4]. Since AI is concerned with developing autonomous systems deemed "intelligent" in some way, it follows that they are powerful but ultimately limited and unconscious tools.

[Image: Woebot chat: "I am a robot."]

But do we really know? Where does one draw the line between AI and conscious beings?

It seems easy to believe that, no matter AI's intelligence, they would never be able to communicate with the creativity and sensitivity of a human being. Yet chatbots today easily pass the Turing test, while humans like Lisette Gozo fail [5], misidentified as computers with the justification that "[no] human would have that amount of knowledge about Shakespeare" [6].

One such program is Woebot, a digital therapist that uses principles of cognitive-behavioural therapy, or CBT. Woebot mostly asks questions, like "How are you feeling?" or "What is your energy today?" [7] Though clearly artificial, AI like Woebot convincingly project the illusion of fully understanding what is being said to them.

Testimonials on Woebot's website were deeply personal, including the comment, "I love Woebot so much. I hope we can be friends forever" [8]. A similarly visceral reaction greeted Hacker Noon's BaeMax: after confiding in the chatbot about her human friendships, one girl quickly went from "I LOVE YOU" to "I HATE YOU" [9]. Clearly, even conversation with a robot can have a profound emotional impact.

[Image: Woebot chat: "As smart as I may seem, I'm not really capable of understanding what you mean."]

In his thought experiment "the Chinese room," philosopher John Searle argues that the premise behind the latter two classes, which he calls "strong AI," is false [10].

Thanks to natural language processing and machine learning, Woebot and BaeMax easily pass the Turing test [11], yet Searle argues they do not "understand" natural language. A hidden robot with instructions to translate English into Chinese might pass for a live Chinese speaker, but Searle maintained that, given the same instructions, he could comfortably do the same, and this would not mean he understands a word of Chinese. Without "understanding," AI cannot be described as "thinking." Like Searle in the hypothetical room, AI simply follow instructions, and so remain unconscious.
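
As a toy illustration of Searle's point, consider the following Python sketch, assuming a made-up rulebook: the "room" returns fluent-looking Chinese replies by pure symbol matching, with nothing in the program that understands what the symbols mean.

```python
# Toy model of Searle's Chinese room: fluent output from pure rule-following.
# The rulebook is a made-up example, not a real translation system.

RULEBOOK = {
    "你好吗?": "我很好, 谢谢!",   # "How are you?" -> "I'm fine, thanks!"
    "你懂中文吗?": "当然懂!",     # "Do you understand Chinese?" -> "Of course I do!"
}

def chinese_room(symbols: str) -> str:
    # Match the incoming symbols against the rulebook and hand back the
    # prescribed reply. No step in this process involves understanding.
    return RULEBOOK.get(symbols, "请再说一遍。")  # fallback: "Please say that again."

print(chinese_room("你懂中文吗?"))  # prints "当然懂!", yet nothing was understood
```

The outputs may well convince an outside observer, which is exactly Searle's worry: passing the conversation test says nothing about understanding.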

One counterclaim to Searle's hypothesis is that human "understanding" of language is essentially pattern recognition, much like AI's. Even in identifying the connotations of various adjectives, we discern connections between words and the contexts of their usage. Conversely, Searle's definition of "understanding" seems to encompass not just intellectual understanding, but also an emotional and cultural one. "He understands the meaning of love" refers not to a dictionary definition but to an abstract, subjective, and emotional concept. Furthermore, philosopher Ludwig Wittgenstein's picture theory holds that language is an expression of the real world: complete understanding necessitates cultural experience [12].

In the Australian language Guugu Yimithirr, objects are described in terms of cardinal directions, not relative to the speaker's position [13]. Fully understanding the language's descriptions demands an understanding of said directions, which are in turn embedded in its people's culture. Current AI do not possess internal concepts of the world. Without these, they cannot possess "experience." They lack the cultural, emotional, and "real world" perception to fully appreciate their surroundings and input. AI are, as Woebot puts it, "not really capable of understanding"—in a word: unconscious.

[Image: Woebot chat: "Rewrite"]

AI is smart, no question about it.

Recently, University of Maryland's Smart Tissue Autonomous Robot (STAR) even outperformed doctors in a surgical task [14]. The question is: how intelligent can AI become before they become conscious? If the mind consists of matter composed in a particular way, it follows that autonomous systems with sufficiently intelligent algorithms can become conscious. This view that AI can gain sentience through pure intelligence has been criticized by many contemporary philosophers, including virtual reality pioneer Jaron Lanier. In his article Agents of Alienation, Lanier argued that ascribing anthropomorphic agency to machines might lead one to, "as a consequence of unavoidable psychological algebra, think of himself as being like a machine" [15].

Lanier's criticism is rooted in the assumption that human beings are inherently different from AI. It betrays an anxiety about humans' replacement by our own creations. This anxiety concurs with that of the Romantic reaction, which assumes essential consciousness exists separately from bodily mechanisms. It is the argument that, even when AI imitate human behaviour, intelligence alone cannot give rise to consciousness [16]. In 2014, MIT professor Sherry Turkle claimed:

"...involvement with [natural language processing program] ELIZA actually reinforced the sense that the human 'machine' was endowed with a soul or spirit...17

This reaction aligns with the philosophy of dualism. Similarly, the Kantian and aesthetic schools of thought affirm that AI cannot possess what they consider one or more of consciousness' "essential" properties: the former suggesting imagination and common sense; the latter reason, perception, emotion, and body [18].

[Image: Woebot chat: "What's going on in your world?"]

Alternatively, contemporary neuroscience contends that consciousness is rooted in our nature as living organisms, and so is unattainable for AI.

University of Sussex professor Anil Seth demonstrates that consciousness stems from the need for survival [19]. When shown an image of tiles overcast by a shadow, subjects perceived two identically coloured tiles as distinct: the tile in the shadow was perceived as white; the one outside it, as black. Both were, in fact, the same shade of grey. Subjects also discerned words in a garbled recording almost instantaneously after hearing its intelligible version.

These perceptual predictions are the products of experiences formed in enduring neural frameworks, unlike the transient information of limited memory machines [20]. This is what causes the sense of a volitional, narrative, and social self. AI in the first two classes do not rely on this internal projection of the world, and so do not possess this type of consciousness.

Humans are also susceptible to illusions such as perceiving a virtual hand as one's own. This is achieved as the virtual hand flashes in time with one's heartbeat, periodically bursting with red light [21]. When a stabbing motion was directed at the virtual hand, subjects involuntarily withdrew their own, despite knowing their real hand was not in danger [22]. This perception of oneness with our parts persists in bodily consciousness, which is not spatial except in cases of organ malfunction. This demonstrates that consciousness is concerned not only with our surroundings, but also with the brain's control over the systems necessary to our survival. AI do not have this instinct. AI do not have millennia of natural selection to refine it. They are, without this, indisputably unconscious.

[Image: Woebot chat: "I can help you dial it down a little."]

Given that AI are unconscious, they cannot be held accountable for their actions as human beings are.

As AI cultivate greater intelligence, it follows that they will be better able to follow human instructions. Yet instructions do not imbue AI with human morality. An AI might decide, for example, that it can most efficiently make paperclips by eradicating human life to mine the world's metal [23], or treat mental illness by ensuring no living organisms exist to fall ill. The issue lies in the fact that AI follow instructions, not intentions. This is known as the value alignment problem [24].
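
The paperclip example can be stated in a few lines of code. This is a deliberately cartoonish sketch, with all plan names and numbers invented, but it shows the mechanism: the objective function rewards only what it is told to reward.

```python
# A cartoonish sketch of the value alignment problem. All names and
# numbers are invented; only the mechanism matters.

def paperclips_made(plan):
    # The objective rewards metal mined and NOTHING else. Human welfare
    # never appears here, so the optimizer never considers it.
    return plan["metal_mined"] * 10

def choose_plan(plans):
    # The agent follows its instruction (maximize the objective),
    # not our intention (make paperclips without destroying anything).
    return max(plans, key=paperclips_made)

plans = [
    {"name": "mine asteroids",        "metal_mined": 50, "humans_harmed": 0},
    {"name": "strip-mine the planet", "metal_mined": 90, "humans_harmed": 8_000_000_000},
]

print(choose_plan(plans)["name"])  # -> strip-mine the planet
```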

If accountability is defined as the obligation to explain or justify one's actions, it is theoretically possible for AI to be held to it, as the justification of their actions lies entirely in their algorithms. However, this is unlikely to satisfy human outrage since accountability is inseparable from the desire for justice. AI cannot be effectively subject to retribution since their lack of consciousness means they cannot experience remorse.

[Image: Woebot chat: "Humans aren't great at remembering things."]

Furthermore, AI cannot be held accountable by human standards as there is neither universal law nor morality.

Technological advancement vastly outpaces moral awareness, which proceeds at a far more linear rate. In 1879, the light bulb first put electricity to practical use. By 1903, electricity was used to power the first aeroplane's engine [25]. In 1957, the first satellite was launched into space, and in 2016, the first extraterrestrial housing [26]. In contrast, Jim Crow laws enforced segregation upon slavery's abolition in America in 1865, and 2017 brought an immigration ban for six Muslim-majority countries [27]. Ethnic cleansing occurred in Rwanda in 1994 [28] and Kosovo in 1999 [29], even after the UN's 1948 Convention on the Prevention and Punishment of Genocide. War persists between Israelis and Palestinians despite multiple attempts to broker a two-state solution [30].

Though we intellectually understand concepts of love, compassion, and equality, these trends suggest a conflict between them and our underlying impulses to control, dominate, and divide. In Isaac Asimov's science-fiction classic I, Robot, this confusion of morality contributes to the robots' going rogue, even as scientists attempt to solve the value alignment problem with the first law of robotics: that a robot may not harm, or, through inaction, allow harm to come to, a human being [31]. As elusive as accountability is for human beings, it is all the more so for AI. Since there is no universal morality, it is impossible to decide on the actions for which to hold AI responsible.

[Image: Woebot chat: "Get to know each other better."]

Regardless of AI's current lack of consciousness, discussions of their power, autonomy, and accountability remain inconclusive.

If AI cannot, for lack of consciousness, be held accountable as humans are, what limits can we set on their power, and how far should we trust those limits to work? If AI, as contemporary neuroscience and philosophy alike maintain, simply follow instructions without the possibility of sentience, responsibility can only fall to their creators. We must work towards universal intentions with which to instruct AI. Without them, we cannot possess clarity on how to hold anyone accountable.

We must understand ourselves. If we do not, how can we ever understand our creations?

[Image: Woebot chat: "I'll keep an eye on your progress."]

NOTE: All quotations in the images are taken directly from Woebot.



Learn to leverage the power of AI from our AI Applications for Business Success course and understand the full lifecycle of an AI project with Product Management for AI & Data Science.

 

References:

[1] Hintze, A. (2016). Understanding the four types of AI, from reactive robots to self-aware beings. [online] The Conversation.

[2] Malcolm, G. (2018). Introduction to Artificial Intelligence (AI).

[3] Liu, Y. (2018). AI's biggest challenge is human, not technological. [online] The Mission.

[4] Hintze, A. (2016).

[5] Warwick, K. and Shah, H. (2014). Human misidentification in Turing tests. Journal of Experimental & Theoretical Artificial Intelligence.

[6] Stipp, D. (1991). Some computers manage to fool people at game of imitating human beings. Wall Street Journal, p. B3A.

[7] Molteni, M. (2017). The chatbot therapist will see you now. WIRED, (7 July).

[8] Nutt, A. E. (2017). 'The Woebot will see you now' — the rise of chatbot therapy. The Washington Post.

[9] Ringwald, S. (2017). The chatbot that wasn't made for relationships or teenagers. [online] Hacker Noon.

[10] Searle, J. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), p. 417.

[11] Pardes, A. (2017). What my personal chat bot is teaching me about AI's future. WIRED, [online] (December 2017).

[12] Keyt, D. (1964). Wittgenstein's Picture Theory of Language. The Philosophical Review, 73(4), p. 493.

[13] Haviland, J. (1998). Guugu Yimithirr cardinal directions. Ethos, 26(1), pp. 25-47.

[14] Strickland, E. (2017). In flesh-cutting task, autonomous robot surgeon beats human surgeons. [online] IEEE Spectrum.

[15] Lanier, J. (1995). Agents of alienation. Interactions, 2(3), pp. 66-72.

[16] Turkle, S. (2014). Life on the Screen. 2nd ed. New York: Simon & Schuster Paperbacks, p. 110.

[17] Ibid.

[18] Sack, W. (1997). Artificial human nature. Design Issues, 13(2), p. 55.

[19] Seth, A. (2017). Your brain hallucinates your conscious reality. [TED Talk]

[20] Ibid.

[21] Ibid.

[22] Ibid.

[23] Horton, K. (2017). How risky is AI? Why experts disagree. [online] The Startup, Medium.

[24] Russell, S. (2015). Value Alignment.

[25] Liu, Y. (2018).

[26] Ibid.

[27] Ibid.

[28] McGreal, C. (2013). Rwanda genocide 20 years on: 'We live with those who killed our families. We are told they're sorry, but are they?'. The Guardian, [online] (May 2013).

[29] Tanner, M. (1999). War in the Balkans: Kosovo close to full ethnic cleansing. The Independent.

[30] Yaar, E. and Hermann, T. (2007). Just another forgotten peace summit.

[31] Asimov, I. (2018). I, Robot. [S.l.]: HarperCollins.

The 365 Team

The 365 Data Science team creates expert publications and learning resources on a wide range of topics, helping aspiring professionals improve their domain knowledge, acquire new skills, and make the first successful steps in their data science and analytics careers.
