Shazmaan Malek (Scholarship Runner-Up) - AI and Ethics

The 365 Team | 28 Apr 2023 | 11 min read

Shazmaan Malek entered our scholarship program with a wonderful essay in which he raised some fascinating arguments in the area of ethics and AI. Shazmaan, who studies computer science at the University at Buffalo, came third in our competition, and his place was well deserved. We wish him the best of luck at university.

We hope you enjoy his essay as much as we did.

*Formatting and images added by the 365 team

Artificial Intelligence & Ethics

Intelligence has been defined in many different ways, including as one’s capacity for logic, understanding, self-awareness, learning, emotional knowledge, planning, creativity, and problem-solving. That is how we define intelligence. So, what is artificial intelligence?

Artificial intelligence would be something non-human that has the intelligence of a human. That means it needs to have all of those factors: logic, understanding, self-awareness, learning, emotional knowledge, planning, creativity, and problem-solving. And with all of these human features, it also needs to be better than a human at decision-making; otherwise, why build an A.I at all?

Everything that surrounds us is a set of code.

Every button we push is already programmed with code that tells the device what task to perform when that particular button is pressed. For example, when you press a button on a vending machine, your selected beverage or food item is dispensed based on which button you pressed. All of those tasks, when broken down, reduce to something really simple called binary language (0’s and 1’s). Everything that surrounds us basically boils down to binary.
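To make the idea concrete, here is a minimal sketch (the button labels, items, and function names are illustrative, not taken from the essay) of how a vending machine's behaviour is fixed in advance by a human programmer: pressing a button only ever triggers code that was already written.

```python
# Hypothetical vending machine: every button is mapped in advance to a
# fixed item, so the machine never decides anything on its own.
BUTTON_MAP = {
    "A1": "cola",
    "A2": "water",
    "B1": "chips",
}

def press_button(button: str) -> str:
    """Return the item pre-assigned to this button, or report an invalid choice."""
    item = BUTTON_MAP.get(button)
    if item is None:
        return "Invalid selection"
    return f"Dispensing {item}"

print(press_button("A1"))  # Dispensing cola
print(press_button("C9"))  # Invalid selection
```

At the lowest level, even this small program is executed as patterns of 0’s and 1’s, the binary language described above.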

An artificial intelligence is something that can write its own set of code to decide which task it has to complete. So, for a computer program to be able to do something like that, we humans first need to know what intelligence, consciousness, or ethical reasoning actually means.

What is ethical reasoning?

As Aude Billard states in her research article "The Ethical Landscape of Robotics", “Ethical reasoning is based on abstract principles that you can't easily apply in a formal, deductive fashion. So, the favorite tools of logicians and mathematicians, such as first-order logic, aren't applicable.” What this means is that ethical reasoning is something that exists in our minds, and something that exists in our minds can never be written down into something physical.

Aude Billard basically states that we do not understand what ethical reasoning is, and hence we cannot come up with an algorithm for something we do not understand. The only reason we see so many developments around us is that we have been able to comprehend what the things around us do. We have robotic arms only because we know how an arm functions. If moving an arm were a mystery unknown to us, we wouldn’t have robotic arms. Understanding something is a huge step towards inventing something that is new to the human world.

Let’s set aside the fact that we do not know what ethical reasoning is and ask ourselves this question: how do we define what is right and what is wrong? What makes us differentiate between them?

When we classify something as “right”, it needs to be ethically right, morally good, justified, and acceptable. Something “wrong” would be something that is not ethically right, is morally false, not justified, and not acceptable. This definition of right or wrong is very general to us humans. The definition I just stated is not specific to each and every one of us, but in reality, right or wrong is very specific to each and every human.

How we categorise right or wrong depends upon our morals and ethics and on how we view that particular subject. We all have different viewpoints towards a particular subject, so even though the idea of right or wrong is very general, right or wrong is not general at all when we consider the mind of each and every person. To prove my point: even though President Trump is making decisions that are ethically wrong to me, they could be ethically and morally correct to a group of people, because their morals and ethics are different! So even if we have an A.I that makes decisions for us, we humans, in general, will never be completely satisfied with an answer. No matter what side an A.I chooses, it will always conflict with some part of human society.

We have seen what ethical reasoning is and how we still do not understand it, so what do you think affects ethical reasoning?

According to Peter H. Kahn, “Ethical issues touch human beings profoundly and fundamentally. The premises, beliefs, and principles that humans use to make ethical decisions are quite varied, not fully understood, and often inextricably intertwined with religious beliefs.” What was surprising about this statement was how even our religious beliefs affect our decisions! There are so many different religions in the world: roughly 4,200 of them. So we would need an A.I that takes into consideration the morals and ethical values of 4,200 different religions before giving out a decision!

Can we ever have an A.I that could understand ethical reasoning?

Ethical reasoning is very complicated in itself, and as Paweł Lichocki says,

“Throughout intellectual history, philosophers have proposed many theoretical frameworks, such as Aristotelian virtue theory, the ethics of respect for persons, act utilitarianism, utilitarianism, and prima facie duties, and no universal agreement exists on which ethical theory or approach is the best.”

By this, he means that scientists and philosophers have tried to understand what ethical reasoning, or the 'ethical theory', really is, but no one has ever come up with an approach that is the best. So why is no approach the best? For an approach to be the “best”, it needs to be correct from all perspectives of ethical reasoning.

Whenever an ethical theory was introduced, it would always conflict with some ethical perspectives while, at the same time, resolving others. The fact that it could never give a 100% correct decision makes every theory discovered so far an unstable one.

Several attempts have been made to build ethical decision-making software. I am going to focus on two of them, namely Truth-Teller (a casuistry-based program) and Sirocco. Aude Billard describes these two programs as follows:

“Truth-Teller accepts a pair of ethical dilemmas and describes the salient similarities and differences between them, from both an ethical and a pragmatic perspective. The other program, Sirocco, accepts a single ethical dilemma and retrieves other cases and ethical principles that might be relevant.”

So, basically, both of them were focused on deciding what was right or wrong based on all ethics and values. But when these programs were tested for accuracy, they never gave a 100% correct output, and the reason was that no matter what they chose, their decisions would always be wrong from at least one perspective of ethical reasoning.

While a decision might be wrong from one perspective of ethical reasoning, it would also be right from some other perspective. Even so, neither program could arrive at a decision that would be 100% correct considering all the different ethics and values.
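To illustrate this in miniature, here is a purely hypothetical sketch (the decisions, ethical perspectives, and verdicts below are invented for demonstration and do not come from Truth-Teller, Sirocco, or any real system). It simply shows that once different perspectives disagree, every available decision is rejected by at least one of them, so no answer can count as 100% correct.

```python
# Invented example: each ethical "perspective" either approves (True) or
# rejects (False) a decision. No decision satisfies every perspective.
VERDICTS = {
    "tell the truth": {"utilitarian": False, "duty-based": True, "virtue": True},
    "stay silent":    {"utilitarian": True, "duty-based": False, "virtue": True},
}

for decision, verdicts in VERDICTS.items():
    rejected_by = [view for view, approves in verdicts.items() if not approves]
    print(f"'{decision}' is rejected by: {', '.join(rejected_by) or 'no one'}")

# Output:
# 'tell the truth' is rejected by: utilitarian
# 'stay silent' is rejected by: duty-based
```

Whichever row such a program picks, some perspective in the table objects, which mirrors why neither Truth-Teller nor Sirocco could satisfy every ethical viewpoint at once.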

Do we ever question ourselves about how and when we even arrived at the thought of building an A.I that could make decisions for us?

Why does the thought of an A.I even exist? You see, humans have always been afraid of dying. Since our ancestors started hunting for food, we have been afraid of becoming the victims of death, and throughout time we have done everything we can to prevent it. When we found out that tuberculosis was killing a lot of people, scientists all around the world decided to find a cure for it, and the same was done for every other disease humans have encountered. According to the data, the average human lifespan has increased from 40 years in 1800 to 81 years in 2017. Every “breakthrough” that we humans have come across eventually breaks down to us being afraid of mortality.

Well, how does this connect to artificial intelligence?

If artificial intelligence were a breakthrough we achieved, we would know what consciousness is, and if we knew what consciousness is, we could have our consciousness transferred into a robot’s body and, henceforth, live forever! Charles Rubin states in his article “Artificial Intelligence and Human Nature”, and I quote, “Our combination of human limitations and human intelligence has given birth to a new human power (technology); and our new life as self-conscious machines would enable us to achieve what was once reserved for the gods alone (immortal life).”

By this, he means that we humans have wanted power over everything ever since we invented the term technology. It is human psychology that when we don’t feel we have control over something, we tend to feel vulnerable towards it, and when we feel vulnerable, our instinct finds ways to overcome that feeling. In this case, it is our vulnerability towards death. So every step in technology we take right now is us trying to become “Gods”.

We talk about an A.I having ethics and values, yet we forget the irony that building an A.I to help us make decisions goes against all our moral and ethical values. As Charles Rubin says in his research article “Artificial Intelligence and Human Nature”, “If we can understand why this fate is presented as both necessary and desirable, we might understand something of the confused state of thinking about human life at the dawn of this new century - and perhaps especially the ways in which modern science has shut itself off from serious reflection about the good life and good society.” With this, he questions us humans about our own ethics and values.

Why can’t we humans learn to build a good society instead of trying to build something that might make decisions for us? A good society is built by the people in it, not by the decisions in it. We have so many advancements that surround us, but have any of them helped us make our societies a better place to live?

Ethics and values are really confusing. How do we decide when something is right?

Why, for instance, do we think that killing someone is wrong? So, I asked myself this question: who decided that something was right or wrong? If wearing clothes were considered “wrong”, it would be wrong to walk around with clothes on right now. In that sense, when we talk about ethics and values, who decided that a specific ethic or value was right or wrong? I asked my English professor, Professor Nate, what he thinks about ethics and values, and he stated:

“Ethics is so tricky! I think it all comes down to your basic premises. Sometimes my daughter asks me about the point of things--if everything ends, why try? I ask her: "if there was a cold kitten outside, would you bring it in?" And she says "yes, obviously"--and I ask her "why, when in twenty years the kitten will grow old and die anyway?" And she says, "because it is cold." That's an ethic based on valuing the experience of others--others are real, their pain and joy are real. There are ethics based on efficiency, or law, or valuing some lives higher than others--to be ethical is to make a choice about how and who and how much to care.”

Connecting what my professor said to artificial intelligence: if we were ever to have artificial intelligence software, we would require it to value what each and every ethic of the human world meant before it acted on its decision. And valuing a moral or an ethic is what is going to be the hardest thing for an A.I to master.


Learn to leverage the power of AI from our AI Applications for Business Success course and understand the full lifecycle of an AI project with Product Management for AI & Data Science.

The 365 Team

The 365 Data Science team creates expert publications and learning resources on a wide range of topics, helping aspiring professionals improve their domain knowledge, acquire new skills, and make the first successful steps in their data science and analytics careers.
