Practical LLM systems
Working with local-first models (Ollama, llama.cpp), retrieval over personal knowledge bases, and small-but-capable models you can run for under $100. Tracking projects from Karpathy and the local-AI community: nanochat, LLM-wiki-local, context-engineering, locomo memory benchmarks.
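As a minimal sketch of the retrieval side, here is keyword-overlap scoring over a tiny "personal knowledge base". The notes, function names, and scoring are illustrative only, not taken from any of the projects above; a real local-first setup would typically use embeddings (e.g. from a model served by Ollama) rather than bag-of-words overlap.

```python
# Sketch: naive retrieval over a small personal knowledge base.
# Notes and scoring are illustrative; real systems would use embeddings.

def tokenize(text: str) -> set[str]:
    """Lowercase bag-of-words; good enough for a sketch."""
    return set(text.lower().split())

def retrieve(query: str, notes: list[str], k: int = 2) -> list[str]:
    """Return the k notes sharing the most words with the query."""
    q = tokenize(query)
    scored = sorted(notes, key=lambda n: len(q & tokenize(n)), reverse=True)
    return scored[:k]

notes = [
    "llama.cpp runs quantized GGUF models on CPU",
    "Ollama wraps llama.cpp with a local HTTP API on port 11434",
    "nanochat trains a small ChatGPT-style model for about $100",
]

top = retrieve("how do I run a model with Ollama locally", notes, k=1)
print(top[0])  # the Ollama note wins on word overlap
```

Swapping `tokenize` for an embedding call keeps the same retrieve-then-prompt shape while improving recall on paraphrased queries.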