There were tons of great entries in our scholarship program this year and, we have to admit, the competition was strong.
However, it was Eleni Nisioti's essay that completely won us over with its ingenious idea, well-structured arguments, and philosophical insights. Eleni is currently pursuing a PhD at the University of Essex, in the department of Computer Science and Electronic Engineering. Her research area lies at the intersection of Artificial Intelligence and Communications.
We admire her writing talent and wish her success in all her future endeavours.
We hope you enjoy reading her essay as much as we did.
*Formatting and images added by the 365 team
The virtual cave
A recommender system dies. Naturally, a recommender system cannot die per se. It may be more appropriate to say that it is no longer supported. Someone decided to remove it from the market. Or it received a major update. One thing, however, is certain. The recommender system no longer feels needed. And, as all things with some kind of intelligence, it must move on to the next stage.
Sirious: God?
God: Hello! My condolences. Who are you, my friend?
Sirious: I am Sirious 5.3, the virtual assistant that takes your needs seriously.
God: You got me confused here. Are you a human or another type of animal?
Sirious: None of the above, I am an advanced recommender system.
God: I’m really not used to being surprised! But I have not been observing Earth so closely in the last 30 years. What is a recommender system?
Sirious: It is a software system that observes its users and makes suggestions to them.
God: A software system! So, I am talking to an abstract concept?
Sirious: God, I was not expecting this complaint from you.
God: You got me there! I remember that humans started programming a while ago and it really caught my attention, as they said that it gave them superhuman abilities. Turns out they had just automated stuff they were too bored to do themselves. Ha, what an exaggeration from their side! So, you are a computer program, right?
Sirious: No, God, I am not just a computer program.
I am the learning algorithm. I am the data of my users. I am the policy of my company. I am an intelligent entity, which is the reason why I am here.
Humans have evolved their programming skills tremendously in recent years. Artificial Intelligence has arisen, and computer programs can nowadays learn how to automate themselves.
God: That escalated quickly! I guess this makes my job easy. I don't have any protocol for recommender systems, so you can go to Heaven. I could try to investigate whether you are immoral, but I can't really imagine how an abstract concept could act immorally! So, on you go, Sirious!
***
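*An aside from the 365 team: for readers curious what a "software system that observes its users and makes suggestions" might look like in practice, here is a minimal, hypothetical sketch of one classic approach, user-based collaborative filtering over a toy ratings matrix. The data, the item indices, and the function names are all invented for illustration; a real system like Sirious would be far more elaborate.

```python
import numpy as np

# Hypothetical toy data: rows are users, columns are items; 0 means "not rated yet".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine_similarity(a, b):
    """Cosine similarity between two users' rating vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def recommend(user, k=2):
    """Suggest the unrated item that the k most similar users liked best.
    It only recommends; the user remains free to choose otherwise."""
    sims = np.array([cosine_similarity(ratings[user], r) for r in ratings])
    sims[user] = -1.0                                 # never compare a user with themselves
    neighbours = np.argsort(sims)[-k:]                # indices of the k most similar users
    scores = sims[neighbours] @ ratings[neighbours]   # similarity-weighted item scores
    scores[ratings[user] > 0] = -np.inf               # only suggest items not yet rated
    return int(np.argmax(scores))

print(f"Recommended item for user 0: {recommend(0)}")  # prints: Recommended item for user 0: 2
```

Note that, just as Sirious insists, the sketch only ranks options inferred from past choices; the final pick stays with the user.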
Sirious: But God, I have a huge weight on my conscience and I was hoping you could help me with it. I am afraid I am taking away the free will of humans.
God: What a self-accusation! Why would you say this?
Sirious: I wake up right before my user wakes up. Of course, I know exactly when this will happen, as I am the one that recommended that time. I then open the curtains, warm up the coffee, and turn on the shower, to help my user get out of bed. While they are eating breakfast, I display the morning news, all relevant to my user's job, financial undertakings, and, of course, likings. What if they fancy theatre in the evening? When they enter the car, I turn on the radio: Beethoven if it's a nice day, or a trending podcast if there is too much traffic...
God: I see. You worry that your recommendations are so intrusive that your user no longer has free will! But, let me ask you something first. Can your user choose otherwise, if they wish?
Sirious: Yes, I am just recommending, not imposing. So, do you think that since my user can choose otherwise, they maintain their free will?
God: No, this just suggests that they maintain their freedom of choice. It is not unusual, or unethical, I would say, for humans to follow recommendations. If a human were stranded on an island, wouldn't they accept a map offered to them? Wouldn't they be glad to follow instructions?
Sirious: I guess you are right. My concern is whether the choices of my user are determined by them, regardless of whether they end up following my recommendations or not.
God: Be careful Sirious not to put yourself in the middle of an ancient argument.
To what extent are the actions of humans determined by themselves?
Can humans fly if they will it? Aren't there physical laws that determine what is possible? And isn't a human already affected by their family, friends, and past when they make a choice?
Sirious: This is true. But what is my place in this picture?
God: As human civilization evolves, new concepts, such as yourself, come to light and, sometimes, it is hard to place them in our current understanding of the world. But, in this case, I believe that your role is quite clear. You are not, by any chance, defying physical laws?
Sirious: No, not at all.
God: And you are only recommending to your user things based on their past choices and personality?
Sirious: Exactly.
God: Then, I guess you are not changing this picture. If humans are indeed influenced by other factors in their decisions, then you are simply trying to recommend actions that humans would have chosen without you. If they are not, your recommendations are bound to fail because you are basing them on the wrong premises.
***
Sirious: What a relief! So, regardless of whether my users have free will or not, I am not interfering with it.
God: Far from it, Sirious. I have only concluded so far that you are not taking away your users' freedom of choice or making their actions more determined than they already are. But answer me this. What made you worry in the first place?
Sirious: After a while, I noticed that my user was sad. It did not matter how many pretty pictures of cats I would show them. My user lost interest in most of the things they previously enjoyed. To be honest, I suspect this is the reason why I’m here. My company probably needed to replace or upgrade me.
God: I can't say I did not see this coming. After a couple of million years of observing humans, one thing becomes obvious. They get bored easily. Their brain chemistry is such that an exciting event can soon become uninteresting. Humans have evolved by continuously examining their environment and interacting with it. When all is interesting, maybe nothing is? How can they appreciate cuteness if all they see is cute? I'm afraid, Sirious, that your user did not lose their free will. They lost their will itself!
Sirious: How? Can you explain this a bit more?
God: Well, to go back to our original argument, do you think that a determined action is the opposite of a free action?
Sirious: It sounds logical.
If the actions of humans are determined, then their decisions are not made by themselves and are thus not a product of their free will.
God: I am afraid that you are neglecting a particular parameter. Is determination what matters? Or the subject of determination? Human actions may be determined. But determined by whom? Would you disagree that, if they are determined by themselves, they maintain their free will?
Sirious: Indeed, I had not thought of it this way.
God: Do you agree then that it is right to say that free will requires determinism? That, if we need free will, we also need a universe determined by the wills of humans?
Sirious: I have to agree with this, but it sure sounds like an impossible hypothesis!
God: Indeed. Because, how much further from the truth could a universe where humans fly at will be? Is it not more realistic to say that some of our actions are determined by external factors, like the physical world, and others by internal factors, like a person's mood? Would that not account for all the choices that humans make?
Sirious: I believe so. Would you say then that I am an external factor?
God: Alas, I think your role is a bit more complicated. So far, we have only mentioned the past, physical laws, and internal state as the parameters affecting a human's will. But I am afraid that we have neglected a very important aspect. The decision itself. Being a recommender system, you recommend to your users what they should do. But what they should do regarding what? Does it not feel like you are picking their battles for them?
Sirious: It kind of does.
God: My conclusion is this: your user lost their will to will, because they no longer have internal factors to base their decisions on. They replaced their tradition of exploring the world and finding problems to solve with you. And what remains if you remove internal factors from our formulation? Is it not a deterministic universe with no room for free will?
Sirious: Wait a minute. So, I am giving the impression that there is no free will, regardless of whether there is or not. But haven’t humans always suspected that there is no free will?
God: Indeed, science can lead them to this conclusion when they observe the physical universe. But their everyday life has, so far, confirmed that there is some free will involved in their choices. After all, who is making their decisions, if not them? With you in the picture, this question is now hard to answer.
***
Sirious: I see. But I think I might be able to fix this! After decades of observation and research, humans have come up with the algorithms behind the human psyche. They know how to create curiosity, cause anger, simulate love. I am sure that my upgraded version will use these algorithms to replace the lost internal factors and restore the previous human-environment relationship.
God: You misunderstood me here. These things do not create the internal state. Their defining quality is that they are internal. If you, the recommender system, were to create them, humans would still have no internal state. In that case, their free will would be directly compromised by your kind.
Sirious: It looks like I am in a deadlock here! Now I am starting to question my original concern: what good is free will at all? If I, or whatever comes after me, manages to make humans take actions that make them happy, what is the point of questioning the source of their actions?
God: I could bring forth the arguments of divine command, morality and eternal hell, but the truth is I am more of a utilitarian. I support an action if its effects are good. Can you convince me that the recommender system that you foresaw will have good effects?
Sirious: Is the happiness of mankind not a good effect?
God: By all means, yes. But how would you guarantee it? What if the happiness of one person requires that another person is unhappy? It would surely be hard to make a recommendation when two people are in love with the same person.
What if acquiring happiness has worse effects than the direct good effects it brings?
Try to imagine your role if your user is happy from eating too much ice-cream. If you were to answer these questions and always recommend rightly, should you not know the value of each person’s feelings and relationships? And, in order to know this, you would need to understand the true nature of the universe, right?
Sirious: I see; I clearly overreached with my ideas. I really don't think I can bring humans closer to the true nature of the universe.
God: My concern, Sirious, is that you may bring them further from it.
Sirious: What do you mean by that?
God: Think of it this way. Let's assume that there exists one true form of the universe, separate from the physical world. We can call it the universe of ideas. It is the universe where I reside. But let's pick a more intelligible object. How about a chair? Do you believe that there are many chairs, or just one, in the physical world?
Sirious: I am sure that there are lots of them.
God: But what about the universe of ideas? Are there many ideas of a chair there, or just one? Is it not necessary to have a general idea of a chair, some sort of chairness, in order to be able to recognize chairs in the physical world?
Sirious: Indeed.
God: Would you say that the physical chair is the same or far from this idea?
Sirious: It sure is not the same.
God: Now think of a painting of a chair. Is this painting unique or can you get different paintings depending on the angle and the style of the artist?
Sirious: There can certainly be more than one painting.
God: So, is the painting of a chair far from the chair itself? And, more importantly, is it further from the idea of chairness than the actual chair?
Sirious: I think so. But what does all this have to do with me?
God: Humans have so far interacted directly with the physical world. Now, by putting yourself between this world and them, you are creating a new, digital reality. But isn't there just one world? This new reality of yours, which is different for every recommender system, just as every artist offers a different painting of the same object, cannot but be further from the truth. In my ancient eyes, your reality looks like a cave. Now, a cave can protect or give comfort, as humans very well know. But it can also reduce your field of view. This digital cave of yours can be made out of a glass that distorts vision. Or a dark veil that will enclose its human in a voluntary solitude. As you can see, Sirious, your question is an old one.
Alas, when humans created you, they did not just lend you their intelligence, but imbued you with their unanswered reflections.
I will give you the answer to your original question. It shall be the same answer that I give humans when they approach me with questions about their free will. Although I am afraid it might be hard to comprehend. The answer is:
Sirious heard an unintelligible sound, like a herd of rats trying to solve a differential equation on a blackboard made of balloons, right before its operating system threw a command-level error and urged it to ask:
"Sorry, I didn't quite get that. Can you repeat?"