
AI for software developers is in a 'dangerous state'


QCon London: AI is in a dangerous state where it is too useful not to use, but where, by using it, developers are giving up the experience they need to review what it does, said a speaker at QCon London, a vendor-neutral developer conference underway this week.

Birgitta Böckeler, Thoughtworks AI lead, tells QCon that strong forces are tempting humans out of the loop

Birgitta Böckeler, global lead for AI-assisted software delivery at Thoughtworks, reprised the subject of her talk at the same event last year: the state of AI for developers.

"A year ago I was mainly talking about the new agentic modes. The term vibe coding was about two months old, and Claude Code… was not generally available yet," she said.

The focus today is on context engineering, she told attendees. "You want to curate the information that your model or your agent sees, to get better results."

Context engineering involves rules, commands, instructions, and resources, including MCP (model context protocol) tools that an LLM (large language model) can use to perform tasks more accurately. These are defined locally, reducing the size of the context that is sent to a remote LLM.
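As a rough illustration of context engineering, the sketch below assembles a prompt from locally stored rules files before the task is sent to a model. The directory layout and file names are assumptions for illustration, not the convention of any particular tool:

```python
from pathlib import Path

def assemble_context(task: str, rules_dir: str = ".agent/rules") -> str:
    """Concatenate locally defined rules and instructions with the task,
    producing the curated context that would be sent to a remote LLM.
    The .agent/rules layout here is hypothetical."""
    parts = []
    rules = Path(rules_dir)
    if rules.is_dir():
        # Deterministic order so the model sees rules consistently.
        for f in sorted(rules.glob("*.md")):
            parts.append(f.read_text())
    parts.append(f"Task: {task}")
    return "\n\n".join(parts)
```

The point is that the curation happens locally: only the selected rules and the task travel to the model, rather than the whole project.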

The longer it goes without supervision, the more I have to review afterwards...

"Even though context windows are a lot bigger now than a year ago, when they get full… the effectiveness of the agent degrades and it starts costing a lot more money," she said.

Another up-and-coming feature is sub-agents, where a main agent spawns other agents to perform specialized tasks and report back. This reduces the load on the main agent, and can also help by giving sub-agents a degree of independence. "People like to have a separate context window that doesn't know about all the history in the session, to do a code review or use a different model," she explained.
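The separate-context idea can be sketched in a few lines. In this toy model (the class and method names are made up for illustration), each agent owns its own message list, so a spawned sub-agent starts with none of the parent's session history and returns only a compact summary:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Minimal sketch: each agent holds its own context window
    (a message list), independent of any other agent's."""
    name: str
    context: list = field(default_factory=list)

    def spawn_subagent(self, task: str) -> "Agent":
        # Fresh, empty context: the sub-agent sees only its task,
        # not the parent's accumulated history.
        sub = Agent(name=f"{self.name}/sub")
        sub.context.append({"role": "user", "content": task})
        return sub

    def report_back(self, parent: "Agent", summary: str) -> None:
        # Only a short summary lands in the parent's context,
        # keeping the main window small.
        parent.context.append({"role": "tool", "content": summary})

main = Agent("main")
main.context.append({"role": "user", "content": "refactor module X"})
reviewer = main.spawn_subagent("review the diff for module X")
reviewer.report_back(main, "review done: 2 minor issues")
```

This is the load-reduction Böckeler describes: the reviewer's context holds one message, not the whole session, and the main agent receives a summary rather than the sub-agent's full transcript.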

The sub-agent concept can go much further, with the trend being towards less supervision. Böckeler referenced AI enthusiast Steve Yegge, who defined eight stages of developer evolution to AI, culminating in building your own agent orchestrator. Yegge did this with a project called Gas Town, "an industrialized coding factory manned by superintelligent robot chimps."

Cursor and Anthropic are also experimenting with agent swarms, Böckeler said, and Claude Code has a preview feature called agent teams. "The key is, there needs to be a lot of orchestration," she explained.

Advances like these are forming "strong forces that are tempting us out of the loop," said Böckeler.

The problem is that AI is not safe. It makes errors, and is vulnerable to issues such as prompt injection. This means developers are in the business of risk assessment. "Always a combination of three things: probability, impact, and detectability," said Böckeler.
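One way to read the three factors: the formula below is our toy illustration, not anything Böckeler presented. Each input is on an assumed 0-to-1 scale, and high detectability lowers the residual risk because an error that is easy to spot is easy to fix:

```python
def risk_score(probability: float, impact: float, detectability: float) -> float:
    """Toy combination of the three risk factors. The multiplicative
    form and the 0-1 scales are illustrative assumptions: detectability
    of 1.0 means an error is almost certain to be caught in review."""
    return probability * impact * (1.0 - detectability)

# A frequent but low-impact, easily spotted error scores low...
low = risk_score(probability=0.8, impact=0.2, detectability=0.9)
# ...while a rarer, high-impact, hard-to-detect one scores much higher.
high = risk_score(probability=0.3, impact=0.9, detectability=0.1)
```

Detectability is the lever that agentic workflows threaten most: the less you review, the later you find out.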

The potential productivity of reduced agent supervision is in opposition to the need for review. "The longer it goes without supervision, the more I have to review afterwards," she told QCon.

Risks include bad code, malware and secret extraction. Böckeler referenced Simon Willison's lethal trifecta. "When you have an agent that has exposure to untrusted content and access to private data and can externally communicate, then you have a high risk of getting data problems, getting security problems," she said, adding that just giving an agent read and send rights to email is enough to hit this problem.
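The trifecta is essentially a capability check, which can be sketched as below (the capability labels are our own shorthand for Willison's three conditions):

```python
def lethal_trifecta(capabilities: set) -> bool:
    """True when an agent combines all three capabilities that make
    data exfiltration via prompt injection likely. Labels are
    illustrative shorthand, not an official taxonomy."""
    required = {"untrusted_content", "private_data", "external_comms"}
    return required <= capabilities

# An email agent with read and send rights already hits all three:
email_agent = {"untrusted_content",   # incoming mail is attacker-controlled
               "private_data",        # the inbox itself
               "external_comms"}      # it can send mail out
```

The mitigation follows directly: removing any one leg, such as the ability to communicate externally, breaks the trifecta.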

There may be hints towards a solution in the shape of what OpenAI called "harness engineering" – devising environments in which agents can do reliable work.

Another AI trend is increasing cost. Agents are doing more: it used to be just autocomplete; now it is researching existing code, making a plan, reviewing it, running tests, revising the plan, and so on. "Flat rates that are not flat rates, because you get request limiting, and then you see people on Reddit saying: oh, it's only the middle of the month and I'm out of tokens, what do I do? Because we can't work without them any more," she said.

Following the session, we asked Böckeler about the future. Are coding skills becoming irrelevant?

"We're getting into this dangerous state where AI is so useful that you do want to use it, but you cannot and maybe never will be able to give it everything," she told us. "You always have to understand what is going on. At the same time, you're getting less experience of that because you're not doing it yourself any more."

AI will evolve as we learn from our mistakes, she said, but with uncertainty about how long it will take to get to a less risky place.