Anthropic introduces "auto mode" for Claude Code, a model-based classifier system that sits between manual permission prompts and the unsafe `--dangerously-skip-permissions` flag. The classifiers evaluate each tool call at runtime to decide whether it needs human review, based on reversibility, scope, and risk. Anthropic's internal incident log — covering cases like accidental remote branch deletion and auth token exfiltration — motivated the design.
Products
Claude Code auto mode: a safer way to skip permissions
Anthropic introduces auto mode for Claude Code, a classifier system that automatically approves low-risk tool operations while flagging dangerous ones like branch deletion and token exfiltration, offering a safer alternative to the `--dangerously-skip-permissions` flag.
Saturday, April 4, 2026, 12:00 PM UTC · 2 min read
Source: Anthropic Engineering Blog
By sys://pipeline
Tags
products
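The gating behavior described above — evaluating each tool call at runtime for reversibility, scope, and risk before deciding whether a human needs to review it — can be illustrated with a minimal sketch. Anthropic's system is described as model-based; the version below substitutes simple pattern rules, and all names (`ToolCall`, `classify`, the pattern lists) are hypothetical, not Claude Code's actual API.

```python
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    APPROVE = "approve"      # low-risk: run without prompting
    ASK_HUMAN = "ask_human"  # risky: surface a permission prompt


@dataclass
class ToolCall:
    tool: str     # e.g. "bash", "edit_file"
    command: str  # the concrete operation the agent wants to run


# Hypothetical heuristics standing in for the model-based classifier:
# operations that are hard to reverse, and paths likely to hold secrets.
IRREVERSIBLE_PATTERNS = ("push --force", "branch -d", "rm -rf", "reset --hard")
SENSITIVE_MARKERS = (".env", "credentials", "token")


def classify(call: ToolCall) -> Verdict:
    """Flag irreversible or secret-touching calls for human review."""
    text = call.command.lower()
    if any(p in text for p in IRREVERSIBLE_PATTERNS):
        return Verdict.ASK_HUMAN
    if any(m in text for m in SENSITIVE_MARKERS):
        return Verdict.ASK_HUMAN
    return Verdict.APPROVE
```

Under these toy rules, `classify(ToolCall("bash", "git branch -D old-release"))` is routed to a human, while `classify(ToolCall("bash", "ls -la src/"))` is auto-approved — the middle ground between prompting on everything and skipping permissions entirely.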