Debunking 10 Misconceptions About AI

Marta Teneva 2 May 2023 12 min read

Today, misconceptions about AI are spreading like wildfire.

It seems that rapid technological advancement inspires all kinds of myths about people losing their jobs and about the demise of humanity as a whole.

At the same time, the sci-fi genre paints a dystopian future where robots have taken over.

You have probably heard some of those myths. You may even find some of them believable.

That’s why we compiled a list of the 10 most common misconceptions about AI and debunked them with genuine pleasure. One by one.

So, before you end up in a heated argument with your friends about whether there’s a difference between AI and ML, scroll down and get the facts.

So, here they are… 10 Common Misconceptions About AI Debunked.

1. AI works like the human brain

AI’s progress in recent years is truly mind-blowing. However, the notion that AI works like the human brain couldn’t be further from the truth. There are still areas of AI that remain extremely challenging, such as language and judgment of relevance.

Let’s discuss language first.

On the surface, it seems that we can communicate directly with programs, by speaking English to Siri or typing Russian words into Google’s search engine. Yes, Google, and NLP (Natural Language Processing) programs in general, find associations between words and texts. But they lack overall understanding, and they still struggle with both content and grammar. For instance, familiar action sequences (‘scripts’) can be used as source material for AI-generated novels and TV shows, but the resulting plots are far from intriguing or entertaining. So, unless you enjoy reading computer-generated annual business reports, you will most probably find AI’s writing efforts boring or unintelligible.


In addition, computers often make grammar mistakes or use awkward expressions. An elegant writing style is still something only humans can boast of.

As for judgment of relevance… Well, AI still has a long way to go until it can figure out how to put on a T-shirt or fold a satin dress. Sure, there are some alluringly misleading examples of AI’s progress, such as Siri, Alexa, and Google Duplex, which seem to hold meaningful conversations and even make reservations at your favorite restaurant. But they can easily be fooled into giving chaotic answers if the conversation goes off track. Watson, for instance, beat the two top human champions on the game show Jeopardy!. But it doesn’t always win.

For instance, it tripped up on an “Olympic oddities” question and lost. The question asked about the anatomical oddity of the German-American gymnast George Eyser: namely, a missing leg. Watson correctly specified the part of the body that was… odd – the leg. However, it failed to grasp that the essential fact in its stored data was that this person had a leg missing. Thus, the answer “leg” was deemed incorrect. Of course, that won’t happen again, because Watson’s programmers have since flagged the importance of the word ‘missing’… But there will be other mistakes. Truth be told, even in day-to-day situations, people routinely rely on relevance judgments that greatly surpass Watson’s.

So, we can lay this misconception to rest. If anything, AI has taught us that the processes in the human brain are even more complex and harder to recreate than we previously thought.

2. Intelligent machines can learn on their own

Another common misconception is that computers can learn on their own. Well, not really. Sure, they can grasp how to perform a task better, or make predictions based on existing data. Nevertheless, we, the human programmers, data administrators, and users, provide the necessary input for their learning and improvement. Machines can’t yet handle key components of intelligence, such as problem-solving and planning, entirely on their own.

In other words, unless provided with initial data, they can’t figure out how to achieve goals. Think of playing chess. You could argue that ML makes it possible for AI programs like DeepMind’s AlphaZero to achieve a superhuman level of chess play after teaching itself for a mere 4 hours. Wrong. AlphaZero’s success would still be impossible without the engineers who encoded the rules of the game, designed the training process, and supplied the computing power. What about reasoning? Once again, it’s computer scientists who enable AI technologies to interpret human languages, be it English or Chinese.
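If you’re wondering what that human input looks like in practice, here’s a deliberately tiny sketch in Python (using scikit-learn purely for illustration; the features and labels are invented and have nothing to do with AlphaZero’s actual setup). The point is that a person chooses the task, encodes the examples, and picks the algorithm, and the model can only generalize from what it’s handed.

```python
# A minimal sketch (not AlphaZero!) showing that "self-learning" still starts
# with human-supplied inputs: we define the task, encode the data, and pick
# the algorithm. The model only generalizes from the examples we hand it.
from sklearn.tree import DecisionTreeClassifier

# Human-encoded training data: toy board features -> "best move type" labels.
# (Numbers and labels invented purely for illustration.)
X_train = [
    [1, 0, 0],   # center controlled, no threats
    [0, 1, 0],   # opponent threatens a line
    [0, 0, 1],   # winning move available
]
y_train = ["develop", "block", "win"]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)          # learning happens only from our examples

print(model.predict([[0, 0, 1]]))    # -> ['win']
```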

With that said, don’t worry. Our beloved technologies can’t do without us (at least not in the foreseeable future).

3. AI can be 100% objective

Hardly so. Algorithms are only as fair as the people who create them. So, a prejudiced data scientist will create prejudiced algorithms based on their intentional or unintentional preferences. Ironically, these biases may remain unexposed until the algorithms are used publicly.

An interesting example is Amazon’s recruiting tool which showed bias against women.

The company’s experimental hiring tool used AI to rate job candidates by giving them one to five stars, much like shoppers rate products on Amazon.

But by 2015, it was strikingly obvious that candidates for software developer jobs and other technical positions were not rated in a gender-neutral way.

As it turned out, Amazon’s computer models had been trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. And because of the male dominance across the tech industry, most of those resumes came from men.

So, what happened is that Amazon’s system taught itself that male candidates were preferable. It penalized resumes that included the word “women’s” and undervalued graduates of all-women’s colleges. Sure enough, Amazon made the programs neutral to these specific terms. But does that guarantee the machines won’t come up with other ways of scoring candidates that could prove discriminatory?
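To see how this kind of bias creeps in, here’s a deliberately small, made-up sketch in Python (toy numbers, not Amazon’s model). Because the historical hiring labels are skewed, the classifier latches onto a proxy feature that effectively encodes gender, even though nobody told it to.

```python
# A toy illustration (NOT Amazon's system) of how bias sneaks in: if the
# historical "hired" labels are skewed, the model learns the proxy feature
# that encodes gender, even though we never told it to.
from sklearn.linear_model import LogisticRegression

# Invented features: [years_of_experience, resume_mentions_womens_club]
X_hist = [[5, 0], [6, 0], [4, 0], [5, 1], [6, 1], [4, 1]]
y_hist = [1, 1, 1, 0, 0, 0]   # skewed past decisions: only the first group got hired

model = LogisticRegression().fit(X_hist, y_hist)

# Two equally experienced candidates, differing only in the proxy feature:
print(model.predict([[5, 0], [5, 1]]))   # -> [1 0]: the old bias is reproduced
```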

All in all, 100% bias-free AI is way out of reach for the time being.

4. AI and ML are interchangeable terms

To put it simply – no. And this glaring misconception probably stems from the fact that the terms AI and Machine Learning (ML) are often wrongly used as substitutes for one another. So, let’s clarify what’s what.

Machine Learning is a sub-field of AI. ML is the ability of machines to predict outcomes and give recommendations without explicit instructions from programmers. AI, on the other hand, is much larger in scope. It’s the science of making machines exhibit traits of human intelligence, and it’s a more general term (which is also open to philosophical discussion). The concept of AI is ever-changing, largely due to constant technological advancement. For instance, in the 1980s the Gemini home robot was revolutionary with its ability to take voice commands and keep a map of your home for navigation. Today, however, it would be considered more a charming relic than AI.

Anyhow, if the concept of ML sparked your interest, you can find a more thorough explanation of the subject in this article.

5. AI will take your job

That’s a common fear, but it’s really more of a history-repeats-itself kind of thing. People had the same concerns during the Industrial Revolution. The fear of losing our jobs to robots, however, is far from grounded. AI is currently designed to work with humans, not against them, in order to improve efficiency.

So, think of it in the following terms. AI could do boring and repetitive tasks, while you concentrate on more creative and challenging work (such as learning the skills you need to become successful in data science).

And even if some roles are taken over by AI in the future, this would only generate demand for new types of jobs, based on new capabilities and needs.

Well, in case you’re still anxious about robots replacing you at the workplace, check out https://willrobotstakemyjob.com/, enter your job title and see the percentage of risk for your position. Then, finally, breathe a sigh of relief. Or not.


6. AI cannot be creative

Many people believe AI has very little to do with creativity. However, AI technology has already generated unprecedented and valuable ideas. While AI is certainly not autonomous, it can be creative when combined with human understanding and intuition. There are plenty of examples of AI creativity in engine design, pharmaceuticals, and various types of computer art.

Rolls-Royce uses AI to learn from its past engine designs and simulation data. AI also helps the company predict the performance of brand-new engine designs. Furthermore, Rolls-Royce employs AI in the manufacturing of new components and in servicing older engines, where major components need inspection and replacement.

Pharmaceutical companies use machine learning software, too. It makes predictions about a patient’s response to possible drug treatments. How does AI do it? By inferring possible connections between factors such as the body’s ability to absorb the compounds and the person’s metabolism.

AI certainly finds a creative outlet in CG art, as many visuals couldn’t have been created, or sometimes even envisioned, without it. For example, the computer program AARON, written by artist Harold Cohen, creates original artistic images. Cohen himself says the program is a better colorist than he is. New styles or imagery must be hand-coded by the artist, which rules out fully human-free creativity. Still, Cohen compares the relationship between him and his program to that of Renaissance painters and their assistants.


Examples of AI creativity can be discovered in music, too.

For instance, David Cope developed an AI program called EMI (Experiments in Musical Intelligence). It analyzes music compositions, identifies and characterizes the musical genre, and recombines pieces and patterns into new original works. Thus, it has composed musical pieces in the styles of Beethoven, Mozart, Chopin, Bach, and more.

So, although AI isn’t an independent artist, it definitely poses some important questions like: What is the essence of art? Is it created in the “mind” of the artist or in the eye of the beholder? Who knows, maybe AI will give us some creative answers in the future.

7. All AIs are created equal

Not at all. Basically, there are three types of AI: ANI (Artificial Narrow Intelligence), AGI (Artificial General Intelligence), and ASI (Artificial Super Intelligence).


ANI, or artificial narrow intelligence, performs single tasks, such as playing chess or checking the weather. In addition, it can automate repetitive work. Bots powered by ANI can take over tasks humans find boring: they can search databases to look up product details, shipping dates, and order histories.

AGI, on the other hand, hasn’t come into existence quite yet. In theory, AGI should be able to fully mimic human intelligence and behavior. It should be a creative problem-solver that can make decisions under pressure. But that’s still very much in the future. It is widely believed that once we reach AGI, we’ll be on the fast lane to ASI, or Artificial Super Intelligence: a mighty and sophisticated program that surpasses human brainpower and will lead us to our demise. Fortunately, for now, that can only happen in your favorite sci-fi movies.

8. AI algorithms can figure out any and all of your messy data

As powerful as it may be, AI needs our help to figure out data. Data engineers don’t expect AI to analyze raw data. They label it first.


Data labeling is the process of taking raw data, cleaning it, and tagging it with meaningful labels so that machines can learn from it.
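As a rough illustration (with made-up records, not Pfizer’s actual pipeline), here’s what cleaning and labeling raw data might look like in Python before any machine learning happens:

```python
# A minimal, made-up sketch of data labeling: messy free-text records are
# cleaned and tagged (by humans or by human-written rules) so that a model
# can learn from them later.
import pandas as pd

raw = pd.DataFrame({
    "note": ["  Patient reports mild headache ", "NO SIDE EFFECTS", None,
             "severe nausea after dose"],
})

# Cleaning: drop empty records, normalize whitespace and casing.
clean = raw.dropna().copy()
clean["note"] = clean["note"].str.strip().str.lower()

# Labeling: a human-defined rule (or a human annotator) assigns the target label.
clean["adverse_event"] = clean["note"].str.contains("headache|nausea").astype(int)

print(clean)   # this labeled table is what the ML model actually ingests
```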

For example, the well-known pharmaceutical company Pfizer meticulously labels its data. To make sure the data stays relevant, the company updates it every six months. Once the data is labeled, it can be used effectively in ML. Pfizer applies machine learning to patient and physician data to assess which approaches work best for different types of patients. The company created a model that leverages anonymized longitudinal prescribing data from physicians and examines thousands of variables. In the end, the analysis revealed that physicians who identify the optimal dosage of one of Pfizer’s medicines receive better patient feedback. These insights helped the company deliver more patient-centric services and find other ways to support its patient population.

So, if we want perfect results and solutions, we’d better make sure we’ve provided perfect training data first.

9. AI is new

Although it seems like the latest thing, AI was first foreseen back in the 1840s. That’s right. Lady Ada Lovelace (an English mathematician and writer) predicted part of it. In her words, a machine ‘might compose elaborate and scientific pieces of music of any degree of complexity or extent’.

A century later, Alan Turing and his team laid the foundations for machine learning. They created the Bombe machine to crack the Enigma code the Germans used to send encrypted messages during World War II. After the war, Turing helped design the first modern computer in Manchester in 1948. But he couldn’t take AI much further, because the technology available at the time was too primitive. It wasn’t until the mid-1950s that more powerful machines were developed.

A prominent 1950s milestone was Arthur Samuel’s draughts (checkers) player, which learned to beat Samuel himself. You can imagine the headlines it made back then!

In the 1960s, computer scientists devoted themselves to developing algorithms for solving mathematical problems and to machine learning in robots.

And although AI research funding was scarce in the 1970s and 1980s (the so-called AI winters), things changed for the better in the ’90s, paving the way for today’s achievements in AI. So, the ideas behind the terms AI and ML go way back, although the concepts have evolved considerably over time.

10. “Cognitive AI” technologies are able to understand and solve new problems the way the human brain can

Generally speaking, cognitive AI technologies mimic certain functions of the human brain: they can identify an image or analyze the meaning of a sentence. But they definitely need human intervention.

Facebook, for example, has an image recognition application that analyzes photos on Facebook and Instagram and serves users ads tailored to the content they interact with. The app also helps identify banned content, inadmissible use of brands and logos, and terrorism-related content.

Nevertheless, Facebook has encountered problems with some types of cognitive technology.

When it tried to identify important and relevant news items to present to users, the automated process failed to distinguish real news from fake. In fact, Russian hackers managed to post deliberately false news on Facebook without being detected by the automated filters. That’s a prime example of security lagging behind. Want to know one of the reasons why? It turns out there are certain patterns, known as adversarial examples, that can trick algorithms into misclassifying objects. If you’re curious about the details, check out this pretty interesting article on how such patterns are generated and layered onto images.
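For a rough idea of how such patterns work, here’s a compact, FGSM-style sketch in Python (the pretrained model, the random stand-in image, and the epsilon value are illustrative assumptions, not the specific attack from the Facebook example). A tiny, carefully chosen nudge to every pixel is often enough to flip a classifier’s prediction.

```python
# A compact sketch of the adversarial-example idea (FGSM-style), using a
# pretrained torchvision classifier purely for illustration.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()

# A random stand-in for a real photo; in practice you'd load and normalize an image.
image = torch.rand(1, 3, 224, 224, requires_grad=True)
label = model(image).argmax(dim=1)           # the model's current prediction

# The gradient of the loss w.r.t. the pixels tells us how to nudge the image.
loss = F.cross_entropy(model(image), label)
loss.backward()

epsilon = 0.03                               # a barely visible perturbation
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print(label.item(), model(adversarial).argmax(dim=1).item())  # often disagree
```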

Hence, we agree that cognitive technologies are a great tool, but your brain is far superior (and you should definitely apply it to distinguish fake news from reliable reporting).

So, those were the 10 misconceptions about AI we did our best to shed light on. We hope you found them as intriguing as we did.

Now that you know the truth, feel free to go ahead and spread the word. And if you’re still thirsty for AI insights, follow the link to this amazing essay about Artificial Intelligence and Ethics, written by our scholarship program winner Gloria Yu, and take our AI Applications for Business Success course.

Marta Teneva

Senior Copywriter

Marta is a former Senior Copywriter at 365 Data Science. Digging into her own experience of transitioning into a new field and all the uncertainty that initially goes with it, she creates informative and fun-to-read content that helps our readers expand their career options in data science and achieve the goals they have set for themselves.
