DS 6051

Decoding Large Language Models


Course Description

Evolution of language models, from encoding words as simple vectors to training LLMs. Students train and build LLMs; study concepts such as self- and cross-attention in LLMs and their applications; and review research on tokenizers, Retrieval-Augmented Generation (RAG), prompt engineering, fine-tuning LLMs with Low-Rank Adapters (LoRA), quantization in LLMs, QLoRA, In-Context Learning (ICL), and Chain-of-Thought (CoT) reasoning. Implementation uses Python libraries.

