Covers the evolution of language models, from encoding words as simple vectors to training large language models (LLMs). Shows how to train and build an LLM, explains concepts such as self- and cross-attention and their applications, and reviews research on tokenizers, Retrieval-Augmented Generation (RAG), prompt engineering, fine-tuning LLMs with Low-Rank Adapters (LoRA), quantization in LLMs, QLoRA, in-context learning (ICL), and chain-of-thought (CoT) reasoning. Examples use Python libraries.
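
As a taste of the attention concept mentioned above, here is a minimal NumPy sketch of scaled dot-product self-attention; the sequence length, embedding size, and random projection weights are illustrative assumptions, not anything specified in the text.

```python
# Minimal sketch of scaled dot-product self-attention (illustrative only).
# Shapes and random weights are assumptions for demonstration; in a trained
# model the Q/K/V projections are learned parameters.
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """x: (seq_len, d_model) token embeddings -> attended outputs, same shape."""
    d_model = x.shape[-1]
    rng = np.random.default_rng(0)
    W_q = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    W_k = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    W_v = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)

    Q, K, V = x @ W_q, x @ W_k, x @ W_v
    # Scores measure how much each token attends to every other token.
    scores = Q @ K.T / np.sqrt(d_model)
    # Softmax over the key dimension turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Usage: 4 tokens, each an 8-dimensional embedding.
out = self_attention(np.random.default_rng(1).standard_normal((4, 8)))
print(out.shape)  # (4, 8)
```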