Sandeep Chivukula

Categories

  • ai
  • leadership
  • product-management
  • strategy
  • ux

Context Pollution - A Natural Outcome of Poor UX

I was in full flow, an hour deep into a session with one of today’s most advanced AI agents, hammering out a new idea exploration. We were pulling in industry reports, summarizing articles, and iterating on multiple sections of the document at once. In that flow state, the AI felt like a true partner.

Then suddenly, things started getting wonky. I had to repeat prior instructions. The agent started reintroducing concepts we’d dropped 30 minutes earlier. I scrolled up to see what was happening.

It was a sprawling, chaotic battlefield of ideas. We had easily churned through over 100,000 tokens of back-and-forth. The brilliant final strategy was in there, but it was buried under mountains of discarded summaries, false starts, and abandoned tangents. And it hit me: the AI hadn’t been a partner in my creative process; it had been a court reporter, meticulously recording every word but understanding none of the intent.

This is a critical design failure I call Context Pollution. It’s what happens when an agent meticulously tracks every output, the literal transcript, while completely missing the outcome we are trying to achieve.

The High Cost of Context Pollution

Context Pollution is what happens when an agent’s history becomes so cluttered it degrades the quality of collaboration. The problem is concrete:

  • Confusion & Drift: The agent loses the plot, referencing discarded ideas.
  • Performance Degradation: It re-processes thousands of irrelevant tokens, slowing the creative process.
  • Increased Cost: Every token costs money. Rereading a novel’s worth of brainstorming to add a sentence is fantastically inefficient.
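The cost point deserves a back-of-the-envelope sketch. Because each turn resends the entire transcript as input, spend grows quadratically with session length. The numbers below are made up for illustration, not real API pricing:

```python
# Rough illustration (hypothetical numbers, not real pricing): when every
# turn resends the whole history as input, cost grows quadratically.
PRICE_PER_1K_TOKENS = 0.003   # assumed input price in USD, for illustration
TOKENS_PER_TURN = 500         # assumed average length of one turn

def session_cost(turns: int) -> float:
    """Total input cost when turn t re-reads all t prior turns."""
    total_tokens = sum(TOKENS_PER_TURN * t for t in range(1, turns + 1))
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS
```

Under these assumptions, doubling the session length roughly quadruples the spend, which is exactly why a long brainstorm gets fantastically inefficient near the end.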

1 Million Token Context Window Meme

Now you might be thinking: no problem, my model has a 1 million token context window, so that solves this. It doesn’t. A bigger window is just a bigger junk pile to search for the needle of critical information. It simply postpones the inevitable signal loss.

AI Doesn’t Get vFinal_Final_revised_02.pdf

The root of this problem reveals a deeper truth: our current agents are literalists in a world of nuance. They don’t understand that creation is a messy process. Their design misses a key point in how humans create: the process itself is iterative and deep BUT ephemeral. There are many intermediate outputs, but the outcome is what matters.

We’ve all seen the AI declare, “Here is the best and ultimate final version!” only for us to immediately reply, “That’s a good start, but change the tone.” This mirrors our own chaotic file habits. Our desktops are littered with Exec_Presentation_v2_final, Exec_Presentation_v3_final_new_final, and Exec_Presentation_v4_USE_THIS_ONE.

Final_Final_v9 Source: X - @Studio_aaa

The difference is, we know which one is the ground truth. The AI does not.

The Emerging Art of Context Engineering

To combat the general issue of context management, a new discipline is emerging: Context Engineering (see LangChain’s “The Rise of Context Engineering”). It is the art of skillfully structuring prompts and managing conversational history to guide an AI toward a desired outcome. A recent guide on effective context engineering from Anthropic details the incredible effort required to do this well.

Context is Everything Graphic Source: Dex Horthy

This discipline includes several clever computational techniques:

  • Automated Summarization: This technique, found in frameworks like LangChain, automatically condenses the conversation.
  • Retrieval-Augmented Generation (RAG): This powerful tool connects an LLM to external knowledge, as explained in depth by Meta AI.
  • Tree-of-Thought: This advanced prompting technique, detailed in the paper “Tree of Thoughts: Deliberate Problem Solving with Large Language Models,” encourages the LLM to internally brainstorm different reasoning paths.
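To make the first of these concrete, here is a minimal, framework-agnostic sketch of automated summarization. Everything here is my own illustration, not any library’s actual API: `llm_summarize` is a placeholder where a real model call would go, and `count_tokens` is a crude stand-in for a real tokenizer.

```python
# Minimal sketch of automated summarization: once the transcript exceeds a
# token budget, older turns are condensed into a single summary message.

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer (e.g., whitespace word count).
    return len(text.split())

def llm_summarize(messages: list[str]) -> str:
    # Placeholder: a real implementation would call an LLM here.
    return "SUMMARY: " + " | ".join(m[:20] for m in messages)

def compact(history: list[str], budget: int = 1000, keep_recent: int = 4) -> list[str]:
    """Condense everything but the last few turns when over budget."""
    if sum(count_tokens(m) for m in history) <= budget:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    return [llm_summarize(old)] + recent
```

The trade-off is visible even in this toy version: the summary keeps the conversation small, but whatever nuance the summarizer drops is gone for good, which is why summarization alone does not solve Context Pollution.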

While these techniques are powerful, they all frame the issue as a computational problem. They try to solve a human behavior problem with more sophisticated engineering, when a better understanding of user needs might guide us to a simpler answer.

A New Model: The “Branch and Fold” Methodology

What if we solved this with design instead of just computation? I call this new design pattern the “Branch and Fold” Methodology.

The solution is to change the user interface from a linear log to a two-dimensional information plane. The main conversation flows vertically. But at any point, the user can create a horizontal Branch: a self-contained sandbox for exploration. As you can see in the prototype below, this isn’t just a separate chat; it’s an explicit ‘Brainstorming Active’ mode, with its own dedicated session token count, completely isolated from the main conversation.

When you find the insight you need, you Fold the branch. The messy exploration collapses, committing only the distilled outcome back to the main conversation. The context pollution vanishes; the outcome remains.
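The mechanics can be sketched as a simple data model. All names here are my own, invented for illustration; they come from no shipping product. The key move is in `fold`: the branch’s messy turns never enter the main context, only the distilled outcome does.

```python
# Illustrative data model for "Branch and Fold" (hypothetical names).
from dataclasses import dataclass, field

def count_tokens(text: str) -> int:
    return len(text.split())  # crude stand-in for a real tokenizer

@dataclass
class Branch:
    """A sandboxed exploration with its own isolated token count."""
    turns: list[str] = field(default_factory=list)

    def add(self, turn: str) -> None:
        self.turns.append(turn)

    @property
    def token_count(self) -> int:
        return sum(count_tokens(t) for t in self.turns)

@dataclass
class Conversation:
    main: list[str] = field(default_factory=list)

    def fold(self, branch: Branch, outcome: str) -> None:
        """Collapse a branch: only the distilled outcome joins main context."""
        self.main.append(outcome)  # the exploration turns are discarded
```

In this sketch, a branch carrying thousands of brainstorming tokens folds down to a single outcome message, which is the whole point: the main conversation’s context stays clean no matter how messy the exploration was.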

Branch and Fold UX Visualization

The proof is in the pudding: the solution lies in a more thoughtful interface. To make this tangible, I started describing the concept to Gemini 3.0 in Google’s AI Studio, intending to build a simple UX mock-up. But something incredible happened: given the clarity of the user-centric concept I provided, the AI didn’t just create a mock-up; it built the basis of a full working prototype.

Here is the ‘Branch and Fold’ prototype in action. A sandboxed ‘Brainstorming Active’ session has its own self-contained token count, keeping the main conversation clean and focused, with a collapsed Artifact that synthesizes the information from the brainstorming.


This idea is built on core principles of good product management: matching technology to user intent, driving outcomes over outputs, and building trust through transparency. I am looking forward to refining and experimenting to quantify the impact.

The Tangible Benefits

This user-centric model delivers immediate benefits. It creates True Composability, allowing finalized branches to become reusable components. It Increases User Trust and Control, providing a “scratchpad” for free exploration. And it drives Efficiency Gains in quality, speed, and cost for any use case the model is tackling.

The Hard Questions & The Path Forward

The most pressing question is how a methodology designed for a visual UX translates to a non-visual interface like a CLI. This is a fascinating area because the CLI offers powerful workflow capabilities, like running agents in parallel, that consumer UIs can’t easily match.

In some sense, a developer’s workflow is already parallel; the opportunity is to make the AI a true parallel partner. Instead of today’s sandboxed, manual context management tied to a specific branch, the system would dynamically manage the context as work is completed across the project, long before a final git merge. This moves beyond the current branch to a multi-branch awareness of the entire project, a profound step towards a true multi-agent development system. I am curious to see how this progresses.

Another big open question is how this pattern could apply to agent-to-agent communication. How do agents brainstorm amongst themselves, sharing interim outputs, while still driving towards a shared outcome?

The Timeless Principle

Ultimately, the fundamentals of great product design haven’t been repealed by the AI revolution. It all comes back to a deep, obsessive understanding of how users think, work, and create. As product executives, our job is not to be mesmerized by the capabilities of this new technology, but to bend it to the needs of our users. The tools are new, but the mission is, and always has been, the same: start with the user.