[Diagram: Transformer workflow from token embeddings through self-attention layers, MLP, and residual connections to softmax sampling, backpropagation training, and the RLHF reward model]
How Large Language Models Work

Large Language Models (LLMs) like ChatGPT, Claude, Gemini, and Llama are built on a single breakthrough idea: the Transformer. This article...
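The Transformer's core operation is scaled dot-product attention. A minimal sketch of that operation (an illustration of the general technique, not code from the linked article):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention over one sequence.

    Q, K, V: arrays of shape (seq_len, d_k). Each output row is a
    softmax-weighted mix of the value vectors in V.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 8))
K = rng.standard_normal((3, 8))
V = rng.standard_normal((3, 8))
out = attention(Q, K, V)   # shape (3, 8): one context vector per token
```

In a full model this runs per attention head, with Q, K, and V produced by learned linear projections of the token embeddings.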

LLM MODELS, PROVIDERS AND TRAINING
Fine-Tuning LLMs with LoRA: 2025 Guide

Low-Rank Adaptation (LoRA) has revolutionized how we fine-tune large language models in 2025. This technique allows developers to adapt models like Llama...
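The idea behind LoRA can be sketched in a few lines: freeze the pretrained weight matrix W and learn only a low-rank update B @ A. The dimensions, rank, and alpha below are illustrative assumptions, not values from the guide:

```python
import numpy as np

# Hypothetical sizes for illustration; real LLM layers are far larger.
d_out, d_in, r = 64, 64, 4          # rank r << min(d_out, d_in)
alpha = 8.0                          # LoRA scaling hyperparameter
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = 0.01 * rng.standard_normal((r, d_in))   # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init

def lora_forward(x):
    # Base path plus scaled low-rank update: (W + (alpha/r) * B @ A) @ x
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapted layer matches the frozen model,
# so fine-tuning starts from the pretrained behavior.
assert np.allclose(lora_forward(x), W @ x)

# Trainable parameters: r * (d_in + d_out) instead of d_in * d_out.
lora_params, full_params = r * (d_in + d_out), d_in * d_out
```

Only A and B receive gradients during fine-tuning, which is why LoRA cuts trainable parameters and optimizer memory so sharply relative to full fine-tuning.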


© 2025 Amir Teymoori