12.04.2024
I'm enjoying this course. I logged into Hugging Face. I definitely desire to create my own LLM. I hope you have more courses on different aspects of LLMs.
This LLM course will guide you step by step from interested field observer to well-equipped NLP developer. We’ll detail the transformer architecture that powers these large language models and allows them to process and understand text successfully.
In recent years, large language models (LLMs) have dominated the tech news for their incredible ability to write poetry, essays, social media content, code, and more. They’re the hot new topic in natural language processing. This Intro to Large Language Models course teaches you the knowledge and skills required to experiment and create your own language model solutions. Through a combination of video lessons and practical coding exercises, we’ll cover what these language models are, their functions, and ways to implement them into your own projects. Whether you want to generate content, create a chatbot, or train these models on your own custom data and NLP tasks, this course equips you with the fundamental tools and concepts to fine-tune LLM models and tackle these challenges.
By the end of this Intro to Large Language Models course, you’ll have a solid foundation in large language models and the skills required to begin creating your own applications based on these models. You’ll also be able to gain LLM certification by completing the final exams.
We’ll begin our journey with an introduction to large language models. We’ll study the world of LLMs, their applications, how they’re trained, and the datasets they’ve been trained on.
In this segment of the LLM course, we’ll break down the transformer architecture and explain the mechanics behind encoders and decoders, embeddings, multi-headed attention, and the significance of the feed-forward layer. You’ll learn the advantages of transformers over RNNs.
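To give a flavor of what the attention mechanism computes before we get there, here is a minimal NumPy sketch of a single attention head: each token's output is a weighted sum of all token values, with weights derived from query–key similarity. This is an illustrative toy, not the course's implementation.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention: weight each value by query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity between every query and key
    # Numerically stable softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output row is a weighted mix of value rows

# Toy example: 3 tokens with 4-dimensional embeddings
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V
print(out.shape)  # (3, 4)
```

In a real transformer, Q, K, and V are separate learned projections of the token embeddings, and several such heads run in parallel before the feed-forward layer.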
We’ll examine GPT models closely and begin the practical part of the LLM tutorial. We’ll connect to OpenAI’s API and implement a simple chatbot with a personality: a poetic chatbot. I’ll also show you how to use LangChain to work with your own custom data, feeding information from the 365 web pages to our model.
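The core idea behind giving a chatbot a personality is the system message: it sits ahead of every user question and steers the model's tone. A minimal sketch using the OpenAI Python SDK is shown below; the model name and prompt wording are assumptions for illustration, and calling the API requires an `OPENAI_API_KEY` in your environment.

```python
SYSTEM_PROMPT = "You are a poetic assistant: answer every question in rhyming verse."

def build_messages(question: str) -> list[dict]:
    """Attach the personality via a system message ahead of the user's question."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": question},
    ]

def ask_poet(question: str) -> str:
    """Send the conversation to OpenAI's chat API (network call; needs an API key)."""
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat model works here
        messages=build_messages(question),
    )
    return response.choices[0].message.content

msgs = build_messages("What is a transformer?")
print(msgs[0]["role"])  # system
```

LangChain builds on the same chat interface, adding pieces like document loaders and retrievers so the model can answer from your own data.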
The open-source Hugging Face library gives us an alternative way to interact with LLMs. We’ll learn about pre-trained and customized tokenizers and how to integrate Hugging Face into PyTorch and TensorFlow deep learning workflows.
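Tokenizers like BERT's split unknown words into known subword pieces. The toy greedy longest-match tokenizer below illustrates the WordPiece idea with a hand-picked mini-vocabulary (an assumption for this sketch); in practice you would load a real one with `AutoTokenizer.from_pretrained("bert-base-uncased")` from the `transformers` library.

```python
# Tiny hand-picked vocabulary; "##" marks a piece that continues a word.
VOCAB = {"un", "##believ", "##able", "believ", "##er", "the", "##s"}

def wordpiece(word: str) -> list[str]:
    """Greedily split a word into the longest known subword pieces."""
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        while end > start:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece  # continuation pieces are prefixed
            if piece in VOCAB:
                pieces.append(piece)
                start = end
                break
            end -= 1  # shrink the candidate and try again
        else:
            return ["[UNK]"]  # no known piece matches at this position
    return pieces

print(wordpiece("unbelievable"))  # ['un', '##believ', '##able']
```

Real tokenizers also handle casing, special tokens like [CLS] and [SEP], and return the token IDs that PyTorch or TensorFlow models consume.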
This section of our Intro to Large Language Models course will explore BERT's architecture and contrast it with GPT models. It will delve into the workings of question-answering systems both theoretically and practically and examine variations of BERT—including the optimized RoBERTa and the smaller, lightweight DistilBERT.
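In extractive question answering, a BERT-style model scores every context token as a possible answer start and end; the answer is the highest-scoring valid span. The sketch below shows just that span-selection step with made-up logits (in practice they come from a fine-tuned model, e.g. Hugging Face's `pipeline("question-answering")`).

```python
import numpy as np

tokens = ["The", "transformer", "was", "introduced", "in", "2017"]
# Made-up logits; a fine-tuned BERT QA head would produce these per token.
start_logits = np.array([0.1, 0.2, 0.0, 0.1, 0.3, 2.5])
end_logits   = np.array([0.0, 0.1, 0.2, 0.1, 0.2, 3.0])

def best_span(start_logits, end_logits, max_len=15):
    """Pick the (start, end) pair with the highest combined score, start <= end."""
    best, best_score = (0, 0), -np.inf
    for i in range(len(start_logits)):
        for j in range(i, min(i + max_len, len(end_logits))):
            score = start_logits[i] + end_logits[j]
            if score > best_score:
                best, best_score = (i, j), score
    return best

s, e = best_span(start_logits, end_logits)
print(" ".join(tokens[s:e + 1]))  # 2017
```

The `max_len` cap is a common trick to rule out implausibly long answer spans.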
In the final Intro to Large Language Models course section, we’ll look under the hood of XLNet—a novel LLM trained by predicting tokens in randomly permuted orders rather than strictly left to right. We’ll also compare XLNet with our previously discussed models, BERT and GPT.
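To make the permutation idea concrete, here is a small illustrative sketch (not XLNet's actual implementation): a factorization order is sampled, and each token is then predicted using only the tokens that precede it in that sampled order—not in the original sentence.

```python
import random

tokens = ["the", "cat", "sat", "on", "the", "mat"]

random.seed(42)  # fixed seed so the sampled order is reproducible
order = list(range(len(tokens)))
random.shuffle(order)  # one sampled factorization order of the positions

# Each token may condition only on tokens earlier in the permuted order.
for step, pos in enumerate(order):
    visible = sorted(order[:step])  # positions already "seen" at this step
    context = [tokens[i] for i in visible]
    print(f"predict {tokens[pos]!r} given {context}")
```

Averaged over many sampled orders, every token gets predicted from many different contexts, which is how XLNet captures bidirectional context without BERT's masked-token objective.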
Student feedback
“We’ll use models like those in the GPT series, BERT, and XLNet and introduce you to various other models at your disposal. We will also explain how these models work in detail, and you’ll begin experimenting with them through different practical exercises and LLM tutorials.”
Worked at BBC
Intro to LLMs
with Lauren Newbould