Maps
🗺️
LLM & AI Ecosystem
Concepts and categories — the best place to start
🤖
AI Agents
Building blocks, patterns and frameworks for autonomous AI systems
💻
AI Coding Assistants
IDE plugins, agentic IDEs and CLI agents for developers
🏢
Proprietary Models
AI labs and their flagship model families
🔓
Open-Source / Open-Weights Models
Organizations releasing open-weights models and their flagship releases
☁️
Cloud Managed AI
AWS, Google Cloud, Azure and Cloudflare AI platforms compared
⚡
Dense vs MoE
How Dense and Mixture-of-Experts architectures process a token
🗜️
Quantization — Memory by Format
How numerical precision (FP32 → INT4) affects a 70B model's memory footprint (see the worked estimate after this list)
🔍
RAG
Pipeline stages, components and tools for Retrieval-Augmented Generation
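
A quick worked estimate of the claim in the Quantization card above: weight memory is roughly parameter count times bytes per parameter. The bytes-per-parameter values below are standard for each numerical format; the 70B count and the format range come from the card, and the sketch deliberately ignores activations, KV cache, and quantization scale/zero-point overhead, which add to the real footprint.

# Rough weight-memory estimate for a 70B-parameter model at common formats.
# Illustrative sketch only, not a deployment calculator.

PARAMS = 70e9  # 70B parameters, per the card above

BYTES_PER_PARAM = {
    "FP32": 4.0,   # 32-bit float
    "FP16": 2.0,   # 16-bit float (BF16 is the same size)
    "INT8": 1.0,   # 8-bit integer
    "INT4": 0.5,   # 4-bit integer (two parameters per byte)
}

for fmt, nbytes in BYTES_PER_PARAM.items():
    gib = PARAMS * nbytes / 2**30
    print(f"{fmt}: ~{gib:,.0f} GiB")

# Prints roughly: FP32 ~261 GiB, FP16 ~130 GiB, INT8 ~65 GiB, INT4 ~33 GiB,
# i.e. each halving of precision halves the weight footprint.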
Links
📊
Artificial Analysis
Independent analysis of AI — choose the best model and provider for your use case
📈
Epoch AI
Research institute studying AI progress — trends in compute, data, and capabilities
🎓
Andrej Karpathy
AI researcher and educator — deep learning courses, neural networks from scratch, and clear explanations of LLMs
🐦
Claude Devs (X)
Official Anthropic account for Claude developers — API updates, tips and community news
🧰
OpenAI Developers
Official OpenAI developer resources — docs, guides, examples, and platform updates
News
Dense vs MoE Models
v2026-Q1 · 2026-04-03 · EN / FR
How Dense and Mixture-of-Experts architectures differ in processing a token
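
A minimal sketch of the distinction this explainer covers, under illustrative assumptions: the layer sizes, expert count, and softmax-over-top-k gating below are placeholders, not details taken from the map. A dense feed-forward layer applies all of its weights to every token, while a Mixture-of-Experts layer scores the token with a router and runs only its top-k expert FFNs.

import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff = 8, 32          # illustrative sizes, not from the map
n_experts, top_k = 4, 2        # illustrative MoE config: 4 experts, route to top-2

x = rng.standard_normal(d_model)  # one token's hidden state

def ffn(x, w_in, w_out):
    """A single feed-forward block: up-project, ReLU, down-project."""
    return np.maximum(x @ w_in, 0.0) @ w_out

# Dense: one FFN, every parameter touches every token.
dense_w_in = rng.standard_normal((d_model, d_ff))
dense_w_out = rng.standard_normal((d_ff, d_model))
dense_out = ffn(x, dense_w_in, dense_w_out)

# MoE: n_experts FFNs plus a router; only top_k experts run per token.
experts = [(rng.standard_normal((d_model, d_ff)),
            rng.standard_normal((d_ff, d_model))) for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts))

logits = x @ router                      # router scores, one per expert
chosen = np.argsort(logits)[-top_k:]     # indices of the top-k experts
weights = np.exp(logits[chosen])
weights /= weights.sum()                 # softmax over the chosen experts only

moe_out = sum(w * ffn(x, *experts[i]) for w, i in zip(weights, chosen))

# The MoE layer stores n_experts times the dense layer's FFN parameters,
# but each token only pays the compute cost of top_k of them.
print(dense_out.shape, moe_out.shape)  # both (8,)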