ChatGPT: How to Understand and Compete with the AI Bot

Aleksandra Yosifova 15 Apr 2024 11 min read

OpenAI released ChatGPT, and the world reacted with awe and trepidation.

Why?

What can this AI chatbot achieve? Where does it fail? Is it a threat to your job? How do you remain relevant in the 21st-century job market?

We answer these questions and more below.

What Is ChatGPT?

ChatGPT is a text-based AI assistant from OpenAI—an AI research laboratory co-founded by Elon Musk (who stepped down from the board in 2018 and now remains an investor).

The new chatbot is not the first breakthrough creation from OpenAI—not even for 2022. The company is also responsible for DALL-E—an AI system that creates surprisingly realistic images from text descriptions. OpenAI launched the new and improved second version of the program in November 2022. Not a month later, it rolled out ChatGPT, sparking a flurry of excitement.

Seriously, the public’s reaction was unprecedented. As always, Twitter’s take on it was beyond hilarious.

A Twitter user asked ChatGPT to write a biblical verse in the style of the King James Bible explaining how to remove a peanut butter sandwich from a VCR.

Elon Musk himself deemed the chatbot “scary good” and “not far from dangerously strong” in a tweet. Others compared it to Google, claiming its responses surpass the search engine’s in depth and usefulness.

But why did a chatbot trigger such a reaction?

The technology itself isn’t groundbreaking. GPT stands for “generative pre-trained transformer,” which is an autoregressive language model that uses deep learning to produce human-like speech. But there were other transformer-based models before it, such as the Bidirectional Encoder Representations from Transformers (BERT).

Besides, ChatGPT isn’t even the company’s first language model. OpenAI released the first GPT in 2018. Of course, that version wasn’t nearly as good as their most recent creation. GPT-3, however, made quite the impression in 2020, standing out from other language models with its sheer number of parameters.

ChatGPT is an upgraded version of GPT-3 and one of the largest and most powerful language processing AI models to date, with 175 billion parameters.

But what makes it special?

Mostly, the fact that it’s scary good. It produces human-like responses in various domains and tasks in the blink of an eye.

What Can ChatGPT Do?

OpenAI says ChatGPT can “answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.” And what’s most impressive is that it can use humor. (Some of its jokes are actually funny.)

ChatGPT's joke is: 'Elon Musk walks into a bar and says, "Hey, have you heard about ChatGPT? It's the latest AI technology to make conversations with humans." Bill Gates and Steve Jobs look at each other and reply, "AI? We invented that!"'

In other words, it’s more human-like than anything we’ve seen.

With the right prompt, the AI chatbot can do almost anything! It’s capable of content generation, summarization, classification, categorization, sentiment analysis, data extraction, and translation.

It can generate various text types, including scripts, poems, essays, and computer code. Your creativity is the only limit—you can make it write a biblical verse explaining how to remove a peanut butter sandwich from a VCR or direct it to condemn itself in the tone of Shakespeare.

ChatGPT writes a sonnet in the style of William Shakespeare, condemning itself.

Its potential applications go beyond the anecdotal into the practical realm, spanning from game and app development to college-level essay writing and even medical diagnosis.
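
If you want to build on these capabilities rather than chat through the web interface, OpenAI exposes the same family of models through an API. Below is a minimal sketch in Python using the openai package (the call shown matches the 1.x versions of the library; the prompt is just an example):

```python
from openai import OpenAI

# Assumes the OPENAI_API_KEY environment variable is set.
client = OpenAI()

# Ask a GPT-3.5 model to perform one of the tasks listed above: summarization.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "Summarize in one sentence: OpenAI released ChatGPT, "
                    "and the world reacted with awe and trepidation."},
    ],
)

print(response.choices[0].message.content)
```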

How did OpenAI achieve this?

How Does ChatGPT Work?

As we said, ChatGPT is an autoregressive language model that uses deep learning to produce human-like speech. If you’re unfamiliar with data science, that may sound like a string of complicated words with unclear meanings.

Let’s demystify this complex creation by defining each term, starting with language models.

Language Models

Language models are machine learning models that predict the probability of a particular word coming next in a sequence. They form the basis of natural language understanding and generation: the ability of machines to comprehend and produce human-like speech. To grasp how they work, we need to dive deeper into machine and deep learning.
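
To make “predicting the next word” concrete, here is a toy sketch in Python. It estimates next-word probabilities from simple word-pair counts, which is a drastic simplification of what GPT does with a neural network (and the tiny corpus is invented for illustration), but the core task is the same: given the text so far, assign a probability to each candidate next word.

```python
from collections import Counter, defaultdict

# A tiny, made-up corpus; real language models train on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def next_word_probabilities(word):
    """Probability of each candidate next word, given the current word."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("the"))
# {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```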

Machine Learning

Simply put, machine learning (ML) models are algorithms that leverage data to improve their performance on a given task. The quality of a model depends on how sophisticated the algorithm is and the quantity and quality of the data it is trained with.

The main types of machine learning are supervised, unsupervised, and reinforcement learning. In supervised learning, the ML model is fed labeled data; every data point in the training set comes with its category.
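
Here is what that looks like in practice: a minimal supervised-learning sketch with scikit-learn, where the data points (hours studied, hours slept) and their pass/fail labels are made up purely for illustration.

```python
from sklearn.linear_model import LogisticRegression

# Labeled training data, made up for illustration: each data point is
# (hours studied, hours slept), and each label marks the category
# (1 = passed the exam, 0 = failed).
X = [[8, 7], [1, 4], [6, 8], [2, 5], [9, 6], [3, 3]]
y = [1, 0, 1, 0, 1, 0]

model = LogisticRegression()
model.fit(X, y)  # the model learns the input-output mapping from the labels

print(model.predict([[7, 6]]))  # predict the category of a new, unseen point
```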

In unsupervised learning, by contrast, the model discovers patterns in unlabeled data on its own. In reinforcement learning, we evaluate the model’s performance, and it learns based on our feedback. Regardless of the type of ML, data scientists and ML engineers can fine-tune the models, meaning they tweak the parameters to improve their performance.

The GPT-3.5 model was trained and fine-tuned using supervised and reinforcement learning. But the architecture of the language model is based on deep learning and, more specifically, on artificial neural networks.

Deep Learning

Deep learning is a more complex form of machine learning that stacks several ‘layers’ of processing. It relies on a structure called a ‘neural network,’ which was inspired by the human brain.

Artificial Neural Networks

Artificial neural networks consist of an input layer, one or more hidden layers where the incoming information is processed, and an output layer.

Each layer contains connected nodes—resembling the neurons in the brain. When data enters through the input layer, each connection carries a weight reflecting its relevance to the desired output. If a node’s weighted input exceeds a given threshold, the node is activated, and the information is transmitted to the next layer for processing. Finally, the model produces an output; hopefully a relevant one, if the model is good and the training data is sufficient.
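
Here is that flow as a minimal sketch in Python. The weights are random stand-ins, since a real network learns them during training, but the structure mirrors the description above: an input layer, a hidden layer with an activation threshold, and an output layer.

```python
import numpy as np

# A minimal feedforward network: 2 inputs -> 3 hidden nodes -> 1 output.
rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(2, 3))  # weights from input layer to hidden layer
W_output = rng.normal(size=(3, 1))  # weights from hidden layer to output layer

def relu(x):
    # Activation: a node passes information on only when its weighted
    # input exceeds the threshold (here, zero).
    return np.maximum(0, x)

x = np.array([0.5, -1.2])    # the input layer receives the data
hidden = relu(x @ W_hidden)  # the hidden layer processes it
output = hidden @ W_output   # the output layer produces the result
print(output)
```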

There are many approaches to building these complex AI systems. We focus on the ones involved in the creation of ChatGPT.

The Methodology Behind ChatGPT

As we mentioned, ChatGPT is a generative pre-trained transformer. This is a group of autoregressive language models that predict the most probable next word in a sequence based on the preceding text.

Being a transformer model, it uses attention mechanisms to assign weights to the words in a sequence according to their relevance for predicting the next token. The model determines these weights based not only on the meaning but also on the order and hierarchy of the words in a sentence. It’s a sophisticated deep learning algorithm with an impressive capability to understand and produce human-like speech.
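
Here is the core computation, scaled dot-product attention, as a short sketch in Python. The query, key, and value vectors are random stand-ins for the learned word representations a real transformer would use.

```python
import numpy as np

# Scaled dot-product attention over a 4-word sequence with 8-dimensional
# embeddings.
rng = np.random.default_rng(1)
seq_len, dim = 4, 8

Q = rng.normal(size=(seq_len, dim))  # queries: what each word is looking for
K = rng.normal(size=(seq_len, dim))  # keys: what each word offers
V = rng.normal(size=(seq_len, dim))  # values: the information each word holds

scores = Q @ K.T / np.sqrt(dim)  # relevance of every word to every other word
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # softmax
attended = weights @ V  # each word's output: a relevance-weighted mix

print(weights.round(2))  # each row sums to 1
```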

At this point, language models can be fine-tuned to perform specific tasks. But ChatGPT’s creators used a different approach. Instead of specializing the model for a given domain or challenge, they trained it with an enormous data set.

During training, the model adjusted its 175 billion parameters to improve its predictions. As a result, it developed various capabilities it wasn’t explicitly trained for, such as translating from English to French.

GPT-3.5 was trained on a colossal amount of text and code from the public web and the OpenAI lab, all dating from before Q4 2021. The model finished training at the beginning of 2022, and OpenAI fine-tuned specific systems from it to create ChatGPT.

For this purpose, OpenAI used Reinforcement Learning from Human Feedback (RLHF): a machine learning technique that uses human feedback to improve a model’s performance. This involves adjusting the parameters based on human trainers’ ratings of the model’s outputs.

The following is ChatGPT’s training and fine-tuning process, step by step, as described by OpenAI:

Step 1: Collect Demonstration Data and Train a Supervised Policy.

First, GPT-3.5 was trained with supervised learning. This means that the training data fed to the model consists of labeled input-output pairs. In this case, the data points were human-generated prompts and example responses. In other words, human trainers provided the output from both perspectives—the user and the AI bot.

Step 2: Collect Comparison Data and Train a Reward Model.

Next, OpenAI created a model for reinforcement learning. It collected comparison data with model-generated responses, which human trainers ranked from best to worst. Then, their feedback was used to train the reward model.

Step 3: Optimize a Policy Against the Reward Model Using the PPO Reinforcement Learning Algorithm.

Finally, the GPT-3.5 model was fine-tuned using Reinforcement Learning from Human Feedback. This means that the reward model from Step 2 was applied to the supervised policy from Step 1. Simply put, GPT-3.5 responded to new prompts from a test set, its outputs were scored by the reward model, and this feedback was used to improve its performance.
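
Putting the three steps together, the fine-tuning loop looks roughly like the sketch below. It is purely schematic: language_model, reward_model, and ppo_update are hypothetical stand-ins, since OpenAI hasn’t published its implementation.

```python
# A schematic sketch of the loop described in Steps 1-3, not OpenAI's actual
# code. `language_model`, `reward_model`, and `ppo_update` are hypothetical
# stand-ins for components whose real implementations are not public.

def rlhf_fine_tuning(language_model, reward_model, prompts, iterations=3):
    for _ in range(iterations):
        for prompt in prompts:
            # The current policy (Step 1) generates a response to a new prompt.
            response = language_model.generate(prompt)

            # The reward model (Step 2, trained on human rankings) scores it.
            reward = reward_model.score(prompt, response)

            # PPO (Step 3) nudges the model toward higher-reward responses.
            ppo_update(language_model, prompt, response, reward)
    return language_model
```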

This process was iterated several times before OpenAI released ChatGPT to the general public. And while the AI chatbot is arguably better than anything we’ve seen, it isn’t without its downsides.

What Can’t ChatGPT Do?

Applying general knowledge to specific domains without prior experience with the task at hand is a typically human capability. ChatGPT is one of the few AI systems that produce comparable results.

But before you throw your diploma out the window, consider its limitations.

The AI chatbot may write like a human but cannot reason like one yet.

The Outputs Depend on the Instructions

Despite its profound knowledge of various topics, the quality of its answers depends largely on the instructions provided. For one thing, it requires detailed context to produce more sophisticated outputs.

In fact, it’s so reliant on the immediate context that it may appear to forget everything it knows about other topics. For example, if we present it with a number sequence and then ask an unrelated question (e.g., about a historical fact instead of the next number in the sequence), ChatGPT may respond that it doesn’t know the answer.

Gives Wrong Answers to Simple Questions

You’d expect a bot with 175 billion parameters to know the answers to all questions. But that’s not always the case with ChatGPT.

Despite the numerous examples we’ve seen of programs coded by the chatbot, its output isn’t perfect. In fact, Stack Overflow (a question-and-answer platform for programmers) temporarily banned its usage because the website was allegedly flooded with incorrect answers generated by the AI chatbot.

Workarounds for Its Restrictions

ChatGPT is programmed not to provide information that could harm people or instructions for illegal activities. Some see this as a huge step toward ethical AI, while others argue it’s a downside. But the fundamental limitation is that people have found workarounds for these restrictions, tricking the bot into bypassing its own safety features simply by rephrasing their instructions. This seems profoundly flawed for such a complex creation.

Unaware of Current Events

ChatGPT doesn’t work with real-time data, nor does it crawl the web for recent information. In fact, its knowledge dates from before Q4 2021, which means it cannot answer factual questions about current events, and its references may be outdated. Considering how much has happened in the past year alone, that’s a significant limitation.

But more importantly, this isn’t likely to change in the foreseeable future. Running a system with 175 billion parameters already requires an overwhelming amount of computational power. Feeding it real-time data on top of that is hardly achievable at this point.

But what will happen when AI bots become even better?

Will AI Make Tech Jobs Obsolete?

The AI text generator came out with a splash and was received with awe and apprehension.

But is ChatGPT really a threat to our jobs?

As we said, it’s not the first AI chatbot of this magnitude, and although some would argue it’s the best one yet, it’s still far from being widely implemented for commercial use.

In addition to the limitations listed above, the system requires vast computational power to implement and maintain. Besides, a chatbot with 175 billion parameters that’s capable of producing Shakespearean monologues might be overkill for, say, an e-commerce shop’s support channel.

So, is ChatGPT a precursor to mass unemployment?

It’s safe to say that your job is not threatened by this particular system at the moment. With that said, the rapid development of technologies of this magnitude is drastically changing the job market.

To remain competitive, you’d need to upskill. After all, technologies like ChatGPT are only possible thanks to the highly skilled workforce behind them.

When asked how it was created, ChatGPT said the following:

“The process of my creation began with the development of a concept and design. The software engineers then wrote code to create the basic structure of my program. After that, the designers worked on the user interface and visuals, making sure that the program was easy to use and visually appealing. Finally, the engineers tested the program to make sure it was functioning properly and that all features were working correctly.”

Here are some of the people who worked to design this mix of software and sorcery.

  • Data architects who designed the system’s architecture
  • Product managers responsible for the planning and execution of the product
  • Data engineers who created the basic structure and fine-tuned the system
  • Software engineers involved in the design, development, and testing of the system
  • Data scientists carrying out the model’s development and training
  • Data analysts responsible for data processing for the training of the model
  • Designers who created the UI and visuals
  • Human AI trainers involved in the training and fine-tuning of the model

The list is by no means exhaustive, but it is enough to illustrate an important point—the next in-demand jobs will be those involved in creating such technologies.

How to Remain Relevant in the Job Market

For starters, learn to use ChatGPT to aid your productivity at work with our Intro to ChatGPT and Generative AI course.

GPT-4—the next incarnation of the company’s large language model—came out in 2023. And it’s just one of many AI models in development and a small (although significant) piece among the plethora of technologies flooding the market.

If you don’t see your profession on the list above, it doesn’t mean you’ll lose your job to AI. But if you want to be on the crest of a wave and not feel threatened, you need to upskill.

The data science field is on the rise and presents numerous career development opportunities. The examples above demonstrate that ChatGPT is only as creative as the person using it. Learning how to leverage new technologies can take your career to another level. And if you want to be at the forefront of innovation, the “sexiest job of the 21st century” is the way to go.

With the proper training, you can become a product manager for AI, learn machine learning and deep learning to build complex models, familiarize yourself with convolutional neural networks to create image classification systems, and leverage the power of AI to meet your business goals.

Our courses can help you build a successful career even if you’re starting from scratch. 365 Data Science is the best place to familiarize yourself with data science and take the first steps in your professional development.

Pick a course to upskill or build your data science knowledge from the ground up with our Data Scientist Career Track. Sign up via the link below to try our learning platform for free and see if this is the right career path for you.

Aleksandra Yosifova

Blog author at 365 Data Science

Aleksandra is a Copywriter and Editor at 365 Data Science. She holds a bachelor’s degree in Psychology and is currently pursuing a Master’s in Cognitive Science. Thanks to her background in both research and writing, she learned how to deliver complex ideas in simple terms. She believes that knowledge empowers people and science should be accessible to all. Her passion for science communication brought her to 365 Data Science.
