Sebastian Raschka
17 mentions across all digests
Sebastian Raschka is an AI researcher and writer who publishes analyses on coding agent architectures, open-weight LLM surveys, and practical AI engineering topics, including breakdowns of tools like Claude Code and Codex CLI.
[AINews] The Claude Code Source Leak
Claude Code's 500k-line source code leaked, exposing the aggressive prompt caching, repository context injection, and custom LSP tooling that power its production architecture.
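The digest only names prompt caching as one of the leaked mechanisms, so the sketch below is a generic illustration of the technique using the Anthropic Messages API's documented `cache_control` field, not Claude Code's actual implementation; the model ID, file path, and injected repo context are assumptions.

```python
# Minimal sketch of prompt caching for an agent that injects repository context.
# Illustrative only: not taken from the leaked Claude Code source.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Large, stable context (system prompt + injected repository files) is marked
# with cache_control so repeated agent turns reuse the cached prefix instead
# of re-processing it on every request.
repo_context = open("docs/ARCHITECTURE.md").read()  # hypothetical repo file

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model ID for illustration
    max_tokens=1024,
    system=[
        {"type": "text", "text": "You are a coding agent working in this repository."},
        {
            "type": "text",
            "text": repo_context,
            "cache_control": {"type": "ephemeral"},  # cache this large block
        },
    ],
    messages=[{"role": "user", "content": "Where is the request router defined?"}],
)
print(response.content[0].text)
```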
Components of A Coding Agent
The six infrastructure components that power coding agents—tool use, context management, prompt caching, repo access, memory, and session continuity—matter as much to performance as the underlying model, per Raschka's breakdown.
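A minimal sketch of how those six components might be wired together in a single agent loop; every class and function name here is a hypothetical illustration, not Raschka's code or any real agent's internals.

```python
# Toy coding-agent turn wiring together: tool use, context management,
# prompt caching (stable prefix), repo access, memory, and session continuity.
from dataclasses import dataclass, field


@dataclass
class Session:
    """Session continuity: conversation state persisted across turns."""
    history: list = field(default_factory=list)   # context management
    memory: dict = field(default_factory=dict)    # long-lived notes and facts


def read_file(path: str) -> str:
    """Repo access: one example tool the model can call."""
    with open(path) as f:
        return f.read()


TOOLS = {"read_file": read_file}  # tool use: registry of callable tools


def run_turn(session: Session, user_msg: str, call_llm) -> str:
    """One agent turn. `call_llm` stands in for any chat-completion client;
    prompt caching would key on the stable prefix (system prompt + memory)
    so it is not re-processed every turn."""
    prompt = {
        "system": "You are a coding agent.",        # stable, cache-friendly prefix
        "memory": session.memory,                   # memory
        "history": session.history + [user_msg],    # context management
    }
    action = call_llm(prompt)  # e.g. {"tool": "read_file", "args": {"path": "main.py"}}
    if action.get("tool") in TOOLS:
        result = TOOLS[action["tool"]](**action["args"])
        session.history += [user_msg, f"tool:{action['tool']} -> {result[:200]}"]
        return result
    session.history += [user_msg, action.get("text", "")]
    return action.get("text", "")
```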
A Dream of Spring for Open-Weight LLMs: 10 Architectures from Jan-Feb 2026
Raschka surveys 10 open-weight LLM architectures from Jan-Feb 2026 (Arcee, Moonshot, Qwen, Cohere) spanning 3B to 1T parameters, revealing divergent design choices in MoE configs and efficiency strategies.
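The "MoE configs" those models diverge on are essentially a few knobs: total expert count, experts activated per token (top-k), and expert width. A tiny top-k router sketch makes the knobs concrete; the values are arbitrary and not taken from any model in the survey.

```python
# Toy mixture-of-experts layer showing the main configuration knobs.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKMoE(nn.Module):
    def __init__(self, d_model=512, d_ff=1024, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)          # per-token routing scores
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                                    # x: (tokens, d_model)
        weights = F.softmax(self.router(x), dim=-1)          # (tokens, n_experts)
        top_w, top_idx = weights.topk(self.top_k, dim=-1)    # keep only top-k experts
        top_w = top_w / top_w.sum(dim=-1, keepdim=True)      # renormalize gate weights
        out = torch.zeros_like(x)
        for slot in range(self.top_k):                       # dense loop for clarity
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, slot] == e
                if mask.any():
                    out[mask] += top_w[mask, slot, None] * expert(x[mask])
        return out


moe = TopKMoE()
print(moe(torch.randn(4, 512)).shape)  # torch.Size([4, 512])
```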
Categories of Inference-Time Scaling for Improved LLM Reasoning
Raschka systematizes inference-time compute scaling techniques for LLMs, showing practitioners can more than triple reasoning accuracy (15%→52%) by trading additional inference compute for better outputs, without retraining the model.
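One representative technique in this family is self-consistency: sample several reasoning chains for the same question and keep the majority answer. The sketch below assumes a generic `generate` client and a naive answer extractor; neither is from the article, and the 15%→52% figures are the article's, not reproduced here.

```python
# Minimal sketch of self-consistency, one inference-time scaling technique:
# spend more compute at inference (n_samples model calls) instead of retraining.
from collections import Counter


def extract_answer(completion: str) -> str:
    """Naive final-answer extraction: last non-empty line of the completion."""
    lines = [ln.strip() for ln in completion.splitlines() if ln.strip()]
    return lines[-1] if lines else ""


def self_consistency(question: str, generate, n_samples: int = 8,
                     temperature: float = 0.8) -> str:
    """Sample n_samples independent reasoning chains and majority-vote the answer."""
    answers = [
        extract_answer(generate(question, temperature=temperature))
        for _ in range(n_samples)
    ]
    return Counter(answers).most_common(1)[0][0]
```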
The State Of LLMs 2025: Progress, Problems, and Predictions
DeepSeek R1 sparked a post-training paradigm shift: RLVR and GRPO are displacing RLHF as the industry standard, while model architectures converge on MoE and efficient attention.
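The core of GRPO is a group-relative advantage: sample a group of completions per prompt, score each with a verifiable reward (the "RLVR" part), and normalize rewards within the group instead of training a value model. A small sketch of that computation follows; the reward values are illustrative assumptions, and the policy-gradient update that consumes these advantages is omitted.

```python
# Group-relative advantage at the core of GRPO, as used with verifiable rewards.
import numpy as np


def grpo_advantages(rewards: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Advantage of each completion relative to its own group:
    A_i = (r_i - mean(r)) / (std(r) + eps)."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)


# Example: 6 sampled answers to one math problem, reward 1.0 if the final
# answer passes a programmatic verifier, else 0.0.
rewards = np.array([1.0, 0.0, 0.0, 1.0, 1.0, 0.0])
print(grpo_advantages(rewards))  # positive for correct samples, negative otherwise
```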