From “Vibe Coding” to Context Engineering: 2025 in Review
Source: Thoughtworks
By Ken Mugrage

This year we’ve seen a real-time experiment playing out across the technology industry: one in which AI’s software engineering capabilities have been put to the test against human technologists. And although 2025 may have started with AI looking strong, the transition from “vibe coding” to what’s being termed “context engineering” highlights that while the work of human developers is evolving, they nevertheless remain absolutely critical.
This is captured in the latest volume of the Thoughtworks Technology Radar, a report on the technologies used by our teams on projects with clients. In it, we see the emergence of techniques and tooling designed to help teams better tackle the problem of managing context when working with LLMs and AI agents.
Taken together, there’s a clear signal about the direction of travel in software engineering, and in AI more broadly. After years of the industry assuming progress in AI is all about scale and speed, we’re starting to see that what matters is the ability to handle context effectively.
Vibes, antipatterns and new innovations
It was all the way back in February 2025 that Andrej Karpathy coined the term “vibe coding”. Although it might have been meant flippantly, it took the industry by storm. It certainly sparked debate at Thoughtworks; many of us were sceptical. On an April episode of our Technology Podcast, we talked about our concerns and were cautious about how it might evolve.
Unsurprisingly, given the implied imprecision of vibe coding, antipatterns have proliferated. We’ve once again noted, for instance, complacency with AI-generated code in the latest volume of the Technology Radar. But it’s also worth pointing out that early ventures into vibe coding exposed a degree of complacency about what AI models can actually handle: users demanded more and prompts grew larger, but model reliability started to falter.
Experimenting with generative AI
This is one of the drivers behind the growing interest in engineering context. We’re well aware of its importance: when working with coding assistants like Claude Code and Augment Code, providing the necessary context, or “knowledge priming”, is crucial. It ensures outputs are more consistent and reliable, which ultimately leads to better software that needs less rework, reducing rewrites and potentially driving productivity.
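To make the idea of knowledge priming concrete, here is a minimal Python sketch. It is illustrative only, not taken from any assistant’s actual implementation: the `ContextDoc` type, the `prime_prompt` function and the character budget are all assumptions. The point it demonstrates is that curated project knowledge is prepended to the task, with lower-priority documents dropped when a rough context budget is exceeded:

```python
from dataclasses import dataclass

@dataclass
class ContextDoc:
    """A curated piece of project knowledge (e.g. architecture notes)."""
    title: str
    body: str

def prime_prompt(task: str, docs: list[ContextDoc], budget_chars: int = 4000) -> str:
    """Prepend curated context to a task prompt, trimming to a rough budget.

    Docs are assumed to be ordered by relevance; lower-priority docs are
    dropped first once the budget is exceeded.
    """
    sections = []
    used = 0
    for doc in docs:
        section = f"## {doc.title}\n{doc.body}"
        if used + len(section) > budget_chars:
            break  # budget exhausted: drop the remaining, lower-priority docs
        sections.append(section)
        used += len(section)
    return "\n\n".join(["# Project context", *sections, "# Task", task])

docs = [
    ContextDoc("Coding standards", "Use dependency injection; no global state."),
    ContextDoc("Domain glossary", "A 'booking' is a confirmed reservation."),
]
prompt = prime_prompt("Add a cancellation endpoint.", docs)
```

In a real assistant the budget would be counted in tokens rather than characters and the documents retrieved by relevance, but the principle is the same: context is curated deliberately rather than dumped in wholesale.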
We’ve seen good results using generative AI to understand legacy codebases, provided the context is prepared effectively. Indeed, with the appropriate context it can even help when we don’t have full access to source code.
It’s important to remember that context isn’t just about more data and more detail. This is one of the lessons we’ve taken from using generative AI for forward engineering. It might sound counterintuitive, but in this scenario we’ve found AI to be more effective when it’s further abstracted from the underlying system: in other words, further removed from the specifics of the legacy code. This is because the solution space becomes much wider, allowing us to better leverage the generative and creative capabilities of the AI models we use.
Context is critical in the agentic era
The backdrop to the changes of recent months is the growth of agents and agentic systems, both as something organizations want to develop as products and as something they want to leverage. This has forced the industry to properly reckon with context and move away from a purely vibes-based approach.
Indeed, far from simply getting on with the tasks they’ve been programmed to do, agents require significant human intervention to ensure they are equipped to respond to complex and dynamic contexts.
There are a number of context-related technologies aimed at tackling this challenge, including agents.md, Context7 and Mem0. But it’s also a question of approach. For instance, we’ve found success with anchoring coding agents to a reference application, essentially providing agents with a contextual ground truth. We’re also experimenting with teams of coding agents; while this might sound like it increases complexity, it actually removes some of the burden of giving a single agent all the dense layers of context it needs to do its job successfully.
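The team-of-agents idea can be sketched in a few lines of Python. This is a hedged illustration, not a description of any of the tools named above: the `Agent` type, the `fake_llm` stand-in and the routing table are all hypothetical. What it shows is the division of labour itself, where each agent carries only its own narrow slice of context and a coordinator routes work to the right one:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    """A narrowly scoped agent: a name, its own slice of context,
    and a handler standing in for a real LLM call."""
    name: str
    context: str
    handle: Callable[[str, str], str]

def fake_llm(context: str, task: str) -> str:
    # Placeholder for a real model call; just echoes what it was given.
    return f"[{task}] done using: {context}"

# Each agent carries only the context it needs, instead of one agent
# having to hold every dense layer of context at once.
team = {
    "frontend": Agent("frontend", "React app, components in src/ui", fake_llm),
    "backend": Agent("backend", "Go services behind an API gateway", fake_llm),
}

def dispatch(area: str, task: str) -> str:
    """Route a task to the agent whose context covers it."""
    agent = team[area]
    return agent.handle(agent.context, task)

result = dispatch("backend", "add rate limiting")
```

The design choice is the same one the paragraph above describes: rather than one agent holding everything, context is partitioned so each agent’s prompt stays small and focused.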

Towards consensus
Hopefully the space will mature as practices and standards embed. It would be remiss not to mention the significance of the Model Context Protocol, which has emerged as the go-to protocol for connecting LLMs and agentic AI to sources of context. Relatedly, the Agent2Agent (A2A) protocol leads the way in standardizing how agents interact with one another.
It remains to be seen whether these are the standards that win out. But in any case, it’s important to consider the day-to-day practices that allow us, as software engineers and technologists, to collaborate effectively even when dealing with highly complex and dynamic systems. Sure, AI needs context, but so do we. Techniques like curated shared instructions for software teams may not sound like the hottest innovation on the planet, but they can be remarkably powerful for helping teams work together.
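As a small sketch of what curated shared instructions might look like mechanically, here is a Python fragment assuming a simple, hypothetical two-level scheme: team-wide defaults merged with repo-specific overrides. Both rule sets and the merge policy are illustrative assumptions, not a standard:

```python
def merge_instructions(team_rules: dict[str, str], repo_rules: dict[str, str]) -> str:
    """Merge shared team instructions with repo-specific ones.

    Repo-level entries override team-level entries with the same key,
    so a repo can specialize the shared defaults without copying them.
    """
    merged = {**team_rules, **repo_rules}
    return "\n".join(f"- {topic}: {rule}" for topic, rule in sorted(merged.items()))

team_rules = {"testing": "Write tests first.", "style": "Prefer small functions."}
repo_rules = {"testing": "Use the repo's golden-file harness."}
instructions = merge_instructions(team_rules, repo_rules)
```

The merged text could then feed both humans (a contributing guide) and AI assistants (a shared prompt preamble), which is what makes the technique quietly powerful.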
There’s perhaps also a conversation to be had about what these changes mean for agile software development. Spec-driven development is one idea that appears to have some traction, but there are still questions about how we remain adaptable and flexible while also building robust contextual foundations and ground truths for AI systems.
Software engineers can solve the context challenge
Clearly, 2025 has been a huge year in the evolution of software engineering as a practice. There’s a lot the industry needs to monitor closely, but it’s also an exciting time. And while fears about AI job automation may remain, the fact that the conversation has moved from questions of speed and scale to context puts software engineers right at the heart of things.
Once again, it will be down to them to experiment, collaborate and learn: the future depends on it.
Originally published at https://www.thoughtworks.com.