[Figure: Transformer architecture diagram showing the workflow from token embeddings through self-attention layers, MLP, residual connections, softmax sampling, training with backpropagation, and the RLHF reward model]
How Large Language Models Work

Large Language Models (LLMs) like ChatGPT, Claude, Gemini, and Llama are built on a single breakthrough idea: the Transformer. This article...

LLM MODELS, PROVIDERS AND TRAINING
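As a rough illustration of the self-attention step named in the excerpt above, here is a minimal single-head scaled dot-product attention sketch in Python with NumPy; the shapes, weight matrices, and random inputs are illustrative assumptions, not code from the article.

import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # Project each token embedding to queries, keys, and values.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Score every token pair, scaled by sqrt of the key dimension.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Softmax each row into attention weights (numerically stable form).
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted mix of the value vectors.
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                      # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)              # shape (4, 8)

In a full Transformer block this output would then pass through the residual connections and MLP shown in the diagram above.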
[Figure: RAG system architecture with hybrid search and vector databases]
Production RAG Systems with Hybrid Search

Retrieval-Augmented Generation (RAG) has become the standard architecture for LLM applications that need accurate, up-to-date information. However, naive RAG implementations often fail...

RAG, GRAPH RAG AND VECTOR DATABASES
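Since the excerpt above centers on hybrid search, here is a minimal sketch of one common fusion technique, reciprocal rank fusion (RRF), which combines a keyword (sparse) ranking with a vector (dense) ranking; the document IDs and retriever outputs are hypothetical, and the article may use a different fusion method.

from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    # Each ranked list contributes 1 / (k + rank) per document;
    # k = 60 is the constant from the original RRF paper.
    scores = defaultdict(float)
    for ranked_ids in rankings:
        for rank, doc_id in enumerate(ranked_ids, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["doc3", "doc1", "doc7"]    # hypothetical keyword results
vector_hits = ["doc1", "doc9", "doc3"]  # hypothetical embedding results
print(reciprocal_rank_fusion([bm25_hits, vector_hits]))

Documents retrieved by both rankers (doc1, doc3) float to the top, which is the usual motivation for hybrid retrieval.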

© 2025 Amir Teymoori