AI for software developers is in a 'dangerous state'

AI coding tools are creating a lethal skill-erosion trap: developers increasingly depend on them while losing the expertise needed to audit their output, a risk amplified by the industry's shift toward autonomous agent systems with minimal human oversight.

Thursday, March 19, 2026, 12:00 PM UTC · 2 min read · Source: The Register

Thoughtworks AI lead Birgitta Böckeler warned at QCon London that AI coding tools have entered a "dangerous state": too useful to avoid, yet eroding the developer expertise needed to review AI output. She highlighted context engineering and sub-agents as key trends, and flagged Simon Willison's "lethal trifecta" (access to untrusted content, private data, and external communication) as a major security risk for autonomous agents. Claude Code's agent teams preview and Cursor's agent swarms were cited as examples of the push toward less human supervision.
Tags: safety