John O'Neill (Scholarship Runner-Up) - The Illusion of Choice

The 365 Team · 28 Apr 2023 · 11 min read

John O'Neill's entry into the 365 Scholarship Program immediately made a strong impression. His essay is not only thought-provoking but also supported by compelling arguments we can all relate to. John, who studies Energy and Environmental Policy, came third in our competition, and rightfully so. We wish him the best of luck on his academic journey.

We hope you enjoy reading his essay as much as we did.

*Formatting and images added by the 365 team

The illusion of choice: is the recommendation algorithm taking away your free will?


Recommendation algorithms have become such a ubiquitous part of our online lives that we often don’t realize just how much we are affected by them. They suggest social media accounts to follow, products to buy, shows to watch, websites to visit, and news to consume. They control what ads we see and what our search engine results are, and by no means is that an exhaustive list.

As we live more and more of our lives online, from banking to shopping to entertainment to education, the influence of these algorithms grows ever greater, so it’s crucial to consider the effects they have on us; specifically, we must understand if and how they impact our free will.

There are many reasons this is not a simple question to answer, not the least of which is deciding how to define the concept of free will in the first place.

Free will is a fascinating idea, and there exist countless angles from which to analyze its nature (and even question its existence).

But frankly, that conversation is a bit beyond the scope of this essay. Furthermore, any attempt to constrict the idea of free will to some specific set of conditions that must be met will ultimately fail, since free will is such a nebulous concept.

All things considered, the prompt, as posed, is quite simply unanswerable, at least in a simple yes-or-no format. Instead, this essay approaches the existence of free will not as a binary state but as a continuum. It explores the effects of recommendation algorithms on our free will, how they influence our behavior, and the implications for our society. It also attempts to look to the future: to predict how these technologies might develop, forecast problems that may arise, and offer some potential solutions for how they can be controlled and used in a manner that benefits society.

When I wake up in the morning, after a quick shower, I usually sit down for breakfast and pull out my phone. I typically check my fantasy baseball team first and read an article on ESPN about which players are primed to have big games that day.

As I read, I scroll past links to other articles about my favorite teams, picked out just for me. After setting my lineup, I switch to Facebook, and not far down from the top of my news feed, I find the Chicago Cubs recap with highlights from last night's game – by now, Facebook's algorithm knows I almost always click on those recaps, so it never takes me long to find them. I scroll past the "suggested friends" section, which usually shows me someone I have actually met in real life. Then I open up Google Chrome, where my home page shows me a personalized mix of links to articles about Marvel movies, renewable energy, sports, and local or national news. Not even an hour into my typical day, I have already interacted directly with multiple recommendation algorithms.

There are a few ways one can look at this routine. On the one hand, my daily routine is a lot more streamlined, and my life is made much more convenient because of those algorithms. I don’t have to search for Chicago Cubs highlights every morning – I just see them automatically. Instead of trying to remember the last name of a new acquaintance I made to connect with them, their profile just shows up in my feed. I immediately hear about the latest state to set a renewable portfolio standard rather than having to look around for that information. In many ways, recommendation algorithms have made my life easier – from shows Netflix suggests when I watch TV later that night, to books Amazon recommends when I log on to buy a new shower curtain.

Another way to look at this routine is that by doing this every day, I have effectively isolated myself from exposure to new ideas and instead surrounded myself with what is familiar and comfortable. While that may not really matter when it comes to the sports teams I follow, it takes on a higher significance when considering how it affects the news I consume, for example.

Before I started making a conscious effort to expose myself to different viewpoints, the links that showed up on my home page and social media feeds tended to come from the same few sources, all with similar political biases that tended to reinforce my preexisting views. Now, there is certainly a time and place for familiarity and a time and place for getting out of one’s comfort zone; there is nothing inherently wrong with frequenting news outlets that you agree with or discussing politics with like-minded friends.

However, recommendation algorithms tend to isolate groups of people into “echo chambers” based on their existing views.

Liberals tend to read articles from the New York Times, whereas conservatives are more likely to click on Fox News headlines. And when a recommendation algorithm gets wind of that, the liberal sees fewer and fewer Fox headlines, and vice versa. When most of the news people encounter tends to reinforce their beliefs, their perception of those they disagree with also changes, and dissenting viewpoints come to be seen as fringe beliefs that are easily discounted as foolish.
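To make that feedback loop concrete, here is a minimal Python sketch – the outlets, click rates, and exploration rate are all invented for illustration – of a feed that ranks two sources purely by the user's observed click-through rate:

```python
import random

random.seed(0)

clicks = {"outlet_a": 1, "outlet_b": 1}         # smoothed click counts
shown = {"outlet_a": 2, "outlet_b": 2}          # smoothed impression counts
user_pref = {"outlet_a": 0.6, "outlet_b": 0.4}  # a slight initial leaning

for _ in range(500):
    # Estimate each outlet's click-through rate and mostly show the leader.
    ctr = {s: clicks[s] / shown[s] for s in clicks}
    if random.random() < 0.9:
        pick = max(ctr, key=ctr.get)  # exploit the apparent favorite
    else:
        pick = min(ctr, key=ctr.get)  # occasionally explore the other outlet
    shown[pick] += 1
    if random.random() < user_pref[pick]:  # the user clicks per their leaning
        clicks[pick] += 1

share_a = shown["outlet_a"] / sum(shown.values())
print(f"Share of impressions from outlet_a: {share_a:.0%}")
```

Run long enough, the slight 60/40 leaning tends to snowball: every click makes the favored outlet look like a safer bet to show again, which earns it more impressions and more clicks – exactly the self-reinforcing loop described above.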

When this is viewed on a societal level, the concern is that this “silo” effect of social media, driven largely by recommendation algorithms, is contributing to the tribalistic, us-vs.-them political ideologies that have taken hold. Political discourse consists of one-liners, platitudes, and exaggerated claims that “the other side” wants to completely upend your way of life and your value system. That's all concerning enough, but the individual scale is what we're really looking at here. Does this insulation from opposing views have an impact on your own personal free will? That is to say, are you less free to make your own choices or decide your own actions as a result? It's certainly true that algorithms impact the choices you make.

For example, say you start a new job, and through location services, Google Maps’ algorithm deduces that you take the same route every day at the same time, 7:30 AM. Maybe there’s a Taco Bell along the way, and Taco Bell paid Google to target those morning commuters with a well-timed ad, something like “start your day off right with a cup of coffee at Taco Bell!” Or they pay to be the first result when someone searches for breakfast food along their route. If they can land a few regulars out of the thousands that drive past one of their locations every single day, that’s well worth the investment. But what if the restaurant isn’t clearly visible from the highway? Maybe you didn’t know that location served breakfast, but now that you’ve seen the ad, you’re much more likely to get your morning coffee there.
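A hedged sketch of how that kind of routine detection could work in principle – the timestamps, thresholds, and five-minute lead time below are all invented for illustration:

```python
from datetime import datetime

# Hypothetical routine detection: if location pings cluster around the
# same weekday time, schedule an ad shortly before the usual departure.
pings = [
    datetime(2023, 4, 24, 7, 29),  # Mon
    datetime(2023, 4, 25, 7, 31),  # Tue
    datetime(2023, 4, 26, 7, 30),  # Wed
    datetime(2023, 4, 27, 7, 28),  # Thu
]

minutes = [p.hour * 60 + p.minute for p in pings]
avg = sum(minutes) // len(minutes)
spread = max(minutes) - min(minutes)

# Treat 3+ pings within a 10-minute window as a recurring commute.
if len(pings) >= 3 and spread <= 10:
    ad_minute = avg - 5  # fire the ad five minutes before the commute
    print(f"Recurring commute near {avg // 60:02d}:{avg % 60:02d}; "
          f"schedule breakfast ad at {ad_minute // 60:02d}:{ad_minute % 60:02d}")
```

A real system would be far more sophisticated, but the core idea – infer a habit from repeated pings, then time the ad to it – really is this simple.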

But what about this scenario makes it different from other forms of advertising, which have attempted to target specific audiences long before recommendation algorithms existed? That hypothetical Taco Bell likely has a billboard a mile before the exit advertising their breakfast options, which is arguably just as targeted as my hypothetical example.

The primary reason targeted algorithms are fundamentally different from the methods that preceded them is that the decision to air a commercial in a particular time slot, or to place a billboard in a particular location, is no longer a human being's – it is a computer's. Beyond the general public's discomfort with AI and machine learning as a whole, this has two main consequences. The first is that it is far more effective: human biases are removed and hard data is all that remains, which an algorithm can analyze with ruthless efficiency to produce more effective advertising campaigns. And because online experiences are individually personalized, the targeting is far more precise, making the ancillary “silo” effect that much more pronounced.

The second (related) consequence is how effective algorithms are at analyzing individuals rather than segments of the population. Advertising beer during football commercial breaks is still targeting, but it is anonymous – Budweiser doesn't know specifically whom it is trying to get to drink its beer; it just assumes a football fan is a likelier buyer than someone who prefers watching a documentary on a Sunday afternoon. But users have distinct online profiles with data about their own specific habits, so now Budweiser can sell to a very carefully selected subset of the population – down to the individual.

Ads are no longer restricted to generalizations about the population; they literally target individuals.
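The difference is easy to see in code. Here is a hypothetical sketch – the profile fields, signals, and weights are invented – contrasting a coarse segment rule with a per-user score:

```python
# Old-style segment targeting: one coarse rule for everyone in a bucket.
def segment_target(user):
    return "beer_ad" if user["watches_football"] else None

# Algorithmic individual targeting: score each user on their own profile.
def individual_score(user, weights):
    return sum(weights.get(k, 0.0) * v for k, v in user["signals"].items())

user = {
    "watches_football": True,
    "signals": {"searched_craft_beer": 1.0, "age_25_34": 1.0, "bought_soda": 0.0},
}
weights = {"searched_craft_beer": 0.7, "age_25_34": 0.2, "bought_soda": -0.3}

print(segment_target(user))             # the same ad for the whole segment
print(individual_score(user, weights))  # a score unique to this one user
```

The segment rule knows only the bucket; the scoring function knows the person.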

And now that algorithms have enabled efficient analysis of that data, there is great value in collecting as much of it as possible – raising significant privacy concerns, as the data collected is often staggering in scope and not always sufficiently protected. Even when the data is well protected, it's a very disconcerting feeling to realize how specifically you are being targeted – for example, if you begin to see ads for baby wipes after your search history has betrayed that you're expecting a child.

But even with all of these legitimately worrisome complications that come along with the use of recommendation algorithms, which we have established have significant impacts on the choices we make, has free will been taken away in any of the above examples? The new commuter can still choose to bring coffee from home, or stop elsewhere. A Facebook user can actively seek out other news sources besides the ones that just pop up on their feed. Ultimately, no matter how good recommendation algorithms are at analyzing who you are and what you want, the final choice to do (or not do) anything still lies within the individual. After all, our choices have never existed in a vacuum.

We have never been completely protected from outside influences on our behavior, whether that comes from social norms, recommendations from friends, random chance, or whatever other factors may have influenced the choices we have made. And while recommendation algorithms are startlingly good at influencing our behavior (perhaps too good for a healthily functioning society), the burden of choosing one course of action over another remains with the individual.

This brings up a few other interesting facets to this discussion. For example, how much of the onus falls on the consumer to actively seek out ways to break out of this self-reinforcing cycle of recommendation algorithms limiting exposure to new ideas? Certainly, some culpability must remain with the individual, as we have established that all final choices lie with the individual. But perhaps there should be some carefully determined rules or limits on what recommendation algorithms can be used for. For example, algorithms are used by banks to determine, based on someone’s credit score, income, and even demographics, how likely they are to default on a loan.

The problem with using demographic information is that this tends to enforce stereotypes that are already embedded in our society.

A black man with the same credit and income as a white man could be charged a higher interest rate on a mortgage.

As far as the algorithm is concerned, it may deduce a pattern saying that black people are more likely to default on a loan because, historically, that may be true. But that is likely just a by-product of the institutionalized racism and lack of generational wealth that black Americans face; in other words, the algorithm is very good at finding correlation, but not necessarily causation. Its job is to find patterns and act on them, with no analysis of whether there is a sound basis for the results. And in this case, there may not be a better option for that man. Maybe there is only one local bank.

Or maybe algorithms are so ubiquitous that he would face that discrimination regardless of where he chose to bank.
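As a deliberately simplified illustration of that pattern-without-causation problem – the data is synthetic and every number invented – here is how pricing loans off historical group averages penalizes an individual for the group they belong to:

```python
# Synthetic loan history: (group, defaulted). The gap in default rates here
# stands in for correlates of historical inequity, not for causation.
history = [("A", 0)] * 90 + [("A", 1)] * 10 + [("B", 0)] * 80 + [("B", 1)] * 20

def group_default_rate(group):
    outcomes = [d for g, d in history if g == group]
    return sum(outcomes) / len(outcomes)

def quoted_rate(base_rate, group):
    # Naive pricing: add a premium proportional to the group's past defaults.
    return base_rate + 0.10 * group_default_rate(group)

# Two applicants with identical credit and income, differing only by group:
print(f"Group A applicant: {quoted_rate(0.05, 'A'):.2%}")  # 6.00%
print(f"Group B applicant: {quoted_rate(0.05, 'B'):.2%}")  # 7.00%
```

The two applicants are identical in every way the loan should depend on, yet they are quoted different rates purely because of their groups' historical averages – correlation priced in as if it were causation.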

This is another concern about the more everyday algorithms that influence our lives. Facebook and Twitter are the kings of social media, Amazon rules e-commerce, and Google has an effective monopoly on web search traffic. It's almost impossible to live in today's world and avoid using these websites, or at least some ancillary service. You can avoid shopping on Amazon, but its web hosting arm, AWS, runs countless other websites. Even if you could practically track and avoid every website running on AWS, doing so would cut you off from much of the internet. Using a search engine other than Google is an option, but countless universities and companies run on Gmail, and Google Maps is hard to avoid when traveling to new places.

It's virtually impossible to remove yourself from this recommendation algorithm bubble entirely. While consumers may ultimately control what content they interact with, doing so takes consistent, intentional effort against the self-reinforcing loop of recommendation algorithms.

So where does that leave us?

Are we destined to become more and more controlled by these algorithms as they become more and more efficient at learning and exploiting our tendencies?

Will we end up turning to recommendation algorithms in the future to find out what house to buy, what career move to make, or even whom we should marry? After all, we've already handed over control of some of the more mundane tasks to algorithms; who's to say we won't start trusting them with bigger ones? A decade or two ago, you had to look up directions before traveling somewhere new – if they were out of date or missing road closures, you had to rely on your sense of direction to get where you needed to be. If you hit traffic, it was up to your intuition to guess which roads would be least congested. But now that we have tools that tell us how to get anywhere in the world, and even how to avoid traffic along the way, those skills are far less developed.

This can be viewed as a good or a bad thing – on the one hand, freeing your mind from tasks that can be relegated to a computer theoretically allows you to dedicate your brain power to more interesting and challenging tasks than remembering hundreds of street names in your home city, tasks that can’t yet be automated.

On the other hand, without developing a sense of direction, how could you get home late at night when your phone is dead? This example can be extrapolated to those bigger life decisions – if we rely on an algorithm to tell us where we should go to college, for example, do we lose the opportunity to self-reflect and actually define and pursue our own values and goals? That’s not a question I feel that we as a society are equipped to answer yet. It begins to touch on some difficult subject material.

If we allow much of our decision making to be performed by an external agent, have we given up what it really means to be human?

I tend to be more optimistic. Certainly, recommendation algorithms are a huge part of our lives and seem to grow ever more powerful, but we as a society have begun to wake up to the drawbacks of an over-reliance on these technologies. People are concerned about their privacy and autonomy, both of which are threatened by the prevalence of recommendation algorithms. But I believe that awareness, though still relatively nascent, will lead us to find creative solutions.

For example, future technologies might give consumers more control over what data they surrender to the websites they visit. Or, if public blowback becomes great enough (or regulation strong enough), recommendation algorithms themselves may change. Perhaps users will one day be able to customize how their own recommendation algorithm works: instead of relying solely on their past web history, they could set preferences that let them control their online experience rather than having it controlled for them.
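As a sketch of what that user control could look like – the items, topic weights, and “novelty dial” here are hypothetical – a ranker might blend predicted engagement with preferences the user sets directly:

```python
items = [
    {"title": "Cubs recap",         "topic": "sports",   "engagement": 0.9, "novelty": 0.1},
    {"title": "Opposing op-ed",     "topic": "politics", "engagement": 0.4, "novelty": 0.9},
    {"title": "Solar policy brief", "topic": "energy",   "engagement": 0.6, "novelty": 0.5},
]

user_prefs = {"sports": 0.8, "politics": 1.0, "energy": 1.0}  # set by the user
novelty_dial = 0.6  # 0 = pure familiarity, 1 = maximize new viewpoints

def score(item):
    # Blend the engagement-driven score with the user's novelty preference.
    base = user_prefs.get(item["topic"], 1.0) * item["engagement"]
    return (1 - novelty_dial) * base + novelty_dial * item["novelty"]

for item in sorted(items, key=score, reverse=True):
    print(f"{score(item):.2f}  {item['title']}")
```

With the dial turned up, the opposing op-ed outranks the comfortable Cubs recap; turned down, familiarity wins – but either way, the user holds the dial.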

In the end, we must come to terms with the fact that recommendation algorithms are not going anywhere. They are immensely profitable ventures for the companies that develop and use them effectively, and frankly, in many ways they do make our lives as consumers easier too. Those benefits do come at a cost, however, as individuals relinquish some element of their privacy and personal autonomy by virtue of their use of these algorithms. But our free will remains intact, though slightly tarnished, because all of our decisions do ultimately still lie within our own minds. And though these concerns are complex and challenging, I believe that humanity will find ways to overcome these obstacles and continue to evolve and grow as a species. After all, we have solved plenty of problems in our past, and I see no reason to doubt that we’ll find a solution to this one.  


Learn more about recommender systems and other ML algorithms in our Machine Learning Algorithms A-Z course.

The 365 Team

The 365 Data Science team creates expert publications and learning resources on a wide range of topics, helping aspiring professionals improve their domain knowledge, acquire new skills, and make the first successful steps in their data science and analytics careers.
