

One Architecture Decision That Saved My Weekend Project
Writing code is increasingly the agent's job; the engineer's role is shifting toward designing the architecture that connects these intelligent components.
I spent the last few weekends building QuantAgent ↗ — a terminal-based quant analysis tool powered by an AI agent. It has two very different sides: a LangGraph agent that streams tokens and calls tools, and a Textual TUI that renders everything in the terminal.
One decision early on made all the difference: the agent never talks to the UI directly.
No imports, no callbacks, no shared internals. Instead, a typed event bridge sits between them — a small, deliberate boundary that keeps both sides completely decoupled.
Let me explain why that matters and how it works.
The Trap: Direct Coupling
The straightforward approach is tempting. Import the agent into the UI. Register callbacks. Call methods directly. The UI watches the agent’s internals, the agent reaches into the UI to display results.
But it fails for four reasons.
First, circular dependencies. The agent imports the UI for rendering, the UI imports the agent for state — you now have a dependency graph that looks like a bowl of spaghetti. Good luck refactoring that.
Second, fragile testing. Want to test the agent? You need to spin up the full UI stack. Want to test the UI? You need a live agent making real LLM calls. Both sides become impossible to unit test in isolation.
Third, single-consumer lock-in. The agent can only talk to one UI at a time. Want to add a web interface alongside the terminal app? You’re rewriting the whole communication layer.
Fourth, leaky abstractions. LangGraph internals end up sprinkled across your widget code. Textual concepts leak into your agent logic. The system becomes harder to reason about because every layer knows too much about every other layer.
A well-designed architecture avoids all of this by introducing a deliberate boundary.
The Better Way: A Typed Event Bridge
The solution is to place a small, focused module between agent and UI. No direct imports across the boundary. Instead, a set of typed events flows through an asynchronous queue.
The agent knows nothing about the UI. It produces typed events — “I’m streaming a token,” “I’m calling a tool,” “I hit an error,” “I need your approval” — and pushes them into a queue.
The UI knows nothing about the agent’s internals. It pulls events from the queue and dispatches them to the appropriate widgets. Human-in-the-loop decisions flow back through a simple future mechanism.
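Here is a minimal sketch of what such a bridge can look like. All the names (event classes, `fake_agent`, `fake_ui`) are illustrative, not taken from the project; the point is the shape: typed events in, typed events out, an `asyncio.Queue` in between.

```python
# Sketch of a typed event bridge. Event classes are the shared vocabulary;
# neither side imports the other.
import asyncio
from dataclasses import dataclass

@dataclass
class TokenEvent:
    text: str

@dataclass
class ToolCallEvent:
    tool: str

@dataclass
class ErrorEvent:
    message: str

async def fake_agent(queue: asyncio.Queue) -> None:
    # The agent only emits events; it knows nothing about rendering.
    await queue.put(ToolCallEvent(tool="price_lookup"))
    for tok in ["AAPL ", "is ", "up"]:
        await queue.put(TokenEvent(text=tok))
    await queue.put(None)  # sentinel: stream finished

async def fake_ui(queue: asyncio.Queue) -> str:
    # The UI only consumes events; it knows nothing about the agent's internals.
    rendered = []
    while (event := await queue.get()) is not None:
        if isinstance(event, TokenEvent):
            rendered.append(event.text)
        elif isinstance(event, ToolCallEvent):
            rendered.append(f"[tool: {event.tool}] ")
    return "".join(rendered)

async def main() -> str:
    queue: asyncio.Queue = asyncio.Queue()
    agent = asyncio.create_task(fake_agent(queue))
    result = await fake_ui(queue)
    await agent
    return result

print(asyncio.run(main()))  # [tool: price_lookup] AAPL is up
```

Note that the `None` sentinel is one of several ways to signal end-of-stream; a dedicated `StreamEnd` event type works just as well and keeps the queue's contents uniformly typed.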
The dependency graph becomes a straight line:
```plaintext
agent → adapter ← UI
```

The agent imports from the adapter. The UI imports from the adapter. The agent never imports from the UI. The UI never imports from the agent. The adapter imports from neither.
This is the key insight: the communication contract lives in its own layer, not inside either subsystem.
Why This Architecture Wins
Zero circular imports. The dependency graph is acyclic by construction. You can touch any layer without worrying about breaking the other. This matters more than most engineers realize — in a growing codebase, untangling circular dependencies is one of the most expensive refactors you’ll ever do.
Test isolation. The agent can be tested by creating a runner, sending a message, and asserting on the events that come out of the queue. No UI mocking required. The UI can be tested by pushing events into the queue and verifying the rendered output. No agent required. Both sides test independently, with minimal setup.
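A UI-side test under this design can look something like the following sketch. The names are hypothetical; the point is that the test feeds events into a queue and asserts on the output, with no agent and no LLM calls involved.

```python
# Hypothetical UI-side test: push events into the queue, assert on the output.
import asyncio
from dataclasses import dataclass

@dataclass
class TokenEvent:
    text: str

async def render(queue: asyncio.Queue) -> str:
    # Minimal stand-in for the TUI's event loop: consume until the sentinel.
    out = []
    while (ev := await queue.get()) is not None:
        out.append(ev.text)
    return "".join(out)

async def run_ui_test() -> str:
    queue: asyncio.Queue = asyncio.Queue()
    for tok in ["hello ", "world"]:
        queue.put_nowait(TokenEvent(tok))
    queue.put_nowait(None)  # sentinel: end of stream
    return await render(queue)

assert asyncio.run(run_ui_test()) == "hello world"
```

The agent-side test is the mirror image: run the agent against a queue and assert on the sequence of events it emits.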
Multiple consumers, zero changes. The agent doesn’t know if it’s talking to a terminal app, a web frontend, a Slack bot, or a batch job. Each consumer subscribes to its own queue. Adding a new channel means writing a new consumer — the agent doesn’t change.
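One way to wire up multiple consumers is a small fan-out task that copies each event to every subscriber queue. This is a sketch under the same assumptions as above, not the project's actual implementation:

```python
# Sketch of fan-out: each consumer gets its own queue; the agent is untouched.
import asyncio

async def broadcast(source: asyncio.Queue, sinks: list[asyncio.Queue]) -> None:
    # Copy every event from the agent's queue to all subscriber queues.
    while True:
        event = await source.get()
        for sink in sinks:
            sink.put_nowait(event)
        if event is None:  # sentinel: propagate shutdown, then stop
            return

async def main():
    source: asyncio.Queue = asyncio.Queue()
    tui_q, web_q = asyncio.Queue(), asyncio.Queue()
    task = asyncio.create_task(broadcast(source, [tui_q, web_q]))
    for ev in ["token", None]:
        source.put_nowait(ev)
    await task
    # Both consumers see the same event stream.
    return tui_q.get_nowait(), web_q.get_nowait()

print(asyncio.run(main()))  # ('token', 'token')
```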
Clean lifecycle management. The agent runs inside an async task. Cancellation is a single call — the task is cancelled, resources are cleaned up, the UI moves on. No shared state corruption, no hanging connections, no race conditions.
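The cancellation story can be sketched like this (timings and names are illustrative): the agent runs as a task, cancellation is delivered at an await point, and the cleanup path emits a final sentinel before the task dies.

```python
# Minimal cancellation sketch: the agent runs as a task; stopping it is one call.
import asyncio

async def agent_loop(queue: asyncio.Queue) -> None:
    try:
        while True:
            queue.put_nowait("token")
            await asyncio.sleep(0.01)
    except asyncio.CancelledError:
        # Cleanup runs here: close connections, flush buffers, etc.
        queue.put_nowait(None)  # shutdown sentinel for the UI
        raise

async def main():
    queue: asyncio.Queue = asyncio.Queue()
    task = asyncio.create_task(agent_loop(queue))
    await asyncio.sleep(0.03)
    task.cancel()  # the single call that stops the agent
    try:
        await task
    except asyncio.CancelledError:
        pass
    # Drain the queue: the last event is the shutdown sentinel.
    events = []
    while not queue.empty():
        events.append(queue.get_nowait())
    return events[-1]

print(asyncio.run(main()))  # prints: None
```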
Room for Improvement
But there is always room for improvement. Let me point out what works well and where you’ll want to pay attention as the system grows.
What’s done right. The event types map directly to user-visible concepts — streaming text, tool call start and completion, errors, notifications, approval requests. This is the right level of abstraction. It doesn’t leak LangGraph internals into the UI, and it doesn’t force the UI to parse raw agent output. The internal stream processing is cleanly separated into single-responsibility classes, so complexity stays manageable. The approval mechanism using futures is elegant — it converts a fundamentally synchronous interaction (“wait for the user to click a button”) into something the async event loop can handle naturally.
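The future-based approval pattern can be sketched as follows. The event and function names are hypothetical; what matters is that the agent suspends on a future while the UI stays free to resolve it whenever the user decides.

```python
# Sketch of future-based approval: the agent awaits a future; the UI resolves it.
import asyncio
from dataclasses import dataclass

@dataclass
class ApprovalRequest:
    action: str
    decision: asyncio.Future  # resolved by the UI with True/False

async def agent_step(queue: asyncio.Queue) -> str:
    # The agent asks for approval and suspends until the UI answers.
    fut = asyncio.get_running_loop().create_future()
    await queue.put(ApprovalRequest(action="place_order", decision=fut))
    approved = await fut
    return "executed" if approved else "skipped"

async def ui_loop(queue: asyncio.Queue) -> None:
    req = await queue.get()
    # Simulate the user clicking "approve" in a dialog.
    req.decision.set_result(True)

async def main() -> str:
    queue: asyncio.Queue = asyncio.Queue()
    result, _ = await asyncio.gather(agent_step(queue), ui_loop(queue))
    return result

print(asyncio.run(main()))  # executed
```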
What needs attention over time. The event dispatch is currently a chain of type checks. This works well for a handful of event types but becomes unwieldy beyond fifteen or so. Consider a match statement or a visitor pattern before you cross that threshold. The shared state object referenced by both the runner and the UI is the one place where the boundary blurs. Separating it into a read-only snapshot for the UI and a mutable backend for the agent would tighten the design further. And while the current convention is to never remove or rename event fields, adding runtime schema validation would catch integration issues before they reach users.
Every architecture has seams that need attention at scale. The important thing is that they should be visible and manageable.
The project is on GitHub ↗. Feel free to reach out if you have any suggestions or feedback.
This blog post is enhanced by AI.