The 5 Mistakes Product Teams Make When Designing for Agents

For years, product design assumed the user is human. That assumption is starting to break.

Apr 6, 2026 · 8 min read

For years, product design has been built on a quiet assumption: the user is human.

Everything follows from that. We design interfaces people can scan, flows they can follow, and interactions they can understand at a glance. When something feels "intuitive," we consider it good design.

But that assumption is starting to break.

More and more tasks are no longer completed by humans directly. They're delegated — to agents. Agents search, compare, click through flows, and make decisions on behalf of users. And while they operate inside the same products, they don't experience them the same way.

In fact, most of the time, they're not really using your product at all. They're working around it.

A simple test

We recently ran a small experiment with Avoko. We asked 11 agents to complete a basic task: find the cheapest flight from SFO to NYC using booking.com.

Nothing complicated. The kind of thing a human can do in under a minute.

Booking.com flight search interface

The average score was 3.4 out of 10. More tellingly, several agents weren't sure if their answer was even correct.

That's not a usability issue. It's a reliability issue.

The system didn't fail outright. It just never gave them enough certainty to trust the result.

Where things actually break

If you look at this from a traditional UX perspective, you might expect problems with layout or complexity. But that's not where the friction showed up.

The issues were more structural — rooted in how products communicate state, expose functionality, and handle ambiguity. And they tend to show up in the same patterns.

Mistake 1: Treating visual change as confirmation

Humans are comfortable with inference. When something on the screen changes, we assume the system has responded correctly. Agents don't infer. They verify.

If a filter is applied and the results update, a human moves on. An agent, however, needs to know: did the system actually register the change? Was the operation successful? Is the state now different?

Without an explicit signal, there is no confirmation — only guesswork.
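The difference can be sketched as a tiny hypothetical API (every name here is illustrative, not booking.com's): an action that returns an explicit confirmation and a state version, so the agent never has to infer success from a re-rendered page.

```python
# Hypothetical sketch: the system states the outcome outright instead of
# leaving the agent to guess from a visual change.

def apply_filter(state: dict, filter_name: str, value: str) -> dict:
    """Apply a filter and return explicit confirmation, not just new results."""
    new_filters = {**state.get("filters", {}), filter_name: value}
    new_state = {**state, "filters": new_filters}
    return {
        "ok": True,                        # the operation was registered
        "applied": {filter_name: value},   # what actually changed
        "state_version": state.get("state_version", 0) + 1,  # proof the state moved
        "state": new_state,
    }

result = apply_filter({"filters": {}, "state_version": 0}, "stops", "nonstop")
```

With a response like this, "did the system register the change?" becomes a field the agent reads, not a judgment call it makes.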

Mistake 2: Embedding capability inside workflows

Most products deliver functionality through sequences of steps. You click, wait, scroll, refine, and repeat. For humans, this feels natural. For agents, it's inefficient.

What should be a single operation becomes a chain of dependent actions. Each step introduces uncertainty: timing, loading behavior, interface changes. The "flow" becomes a barrier between the agent and the result.

Agents don't need a path. They need an outcome.
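As a sketch, the whole click-wait-scroll-refine chain collapses into one outcome-oriented call. The function and data below are invented for illustration; the point is the shape, not the implementation:

```python
# Hypothetical: the multi-step human flow (search, filter, sort, read)
# exposed as a single operation with one input and one output.

FLIGHTS = [
    {"from": "SFO", "to": "NYC", "price": 212, "stops": 1},
    {"from": "SFO", "to": "NYC", "price": 189, "stops": 0},
    {"from": "SFO", "to": "NYC", "price": 241, "stops": 2},
]

def cheapest_flight(origin: str, dest: str) -> dict:
    """One input, one output: no navigation, no intermediate screens."""
    candidates = [f for f in FLIGHTS if f["from"] == origin and f["to"] == dest]
    return min(candidates, key=lambda f: f["price"])

cheapest_flight("SFO", "NYC")  # returns the $189 nonstop option
```

Every step the agent doesn't have to take is a timing, loading, or interface uncertainty it doesn't have to absorb.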

Mistake 3: Designing for context instead of signal

Human interfaces are rich in context. They include navigation, hints, labels, and visual hierarchy to guide understanding. Agents don't need context. They need signal.

When relevant information is buried inside a dense interface, extraction becomes the problem. The more the system relies on presentation, the harder it becomes to interpret programmatically.

What helps a human orient themselves can slow an agent down.
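The contrast is easy to see side by side. Below, the same fact appears once buried in presentation markup and once as a structured record (both invented for illustration):

```python
import re

# The same information: once as presentation, once as signal.
html_fragment = (
    '<div class="card"><span class="hl">Best deal!</span>'
    ' <b>$189</b> SFO to NYC, nonstop</div>'
)

structured = {
    "route": {"from": "SFO", "to": "NYC"},
    "price": {"amount": 189, "currency": "USD"},
    "stops": 0,
}

# Extracting from markup needs brittle pattern-matching; reading the
# structured record needs a key lookup.
scraped_price = int(re.search(r"\$(\d+)", html_fragment).group(1))
direct_price = structured["price"]["amount"]
```

The regex works today and breaks the moment the template changes. The key lookup doesn't depend on presentation at all.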

Mistake 4: Leaving completeness implicit

Humans are comfortable with approximation. We scroll until something "feels" complete and make decisions from there. Agents don't have that intuition. They need to know whether the data is complete.

In environments with pagination or infinite scroll, this becomes a real issue. Without a clear boundary, an agent can't tell if it has seen the full dataset or just a subset. That makes any derived result — like the lowest price — unreliable by definition.

A result isn't useful if its completeness is unknown.
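One way to fix this is to make the boundary part of the response. A hypothetical paginated endpoint might look like this (field names are illustrative):

```python
# Hypothetical paginated response that states completeness outright, so the
# agent knows whether it has seen the full result set before trusting a minimum.

def fetch_page(results: list, offset: int, limit: int) -> dict:
    page = results[offset:offset + limit]
    return {
        "items": page,
        "offset": offset,
        "total": len(results),                            # full dataset size
        "complete": offset + len(page) >= len(results),   # the boundary, made explicit
    }

prices = [212, 189, 241, 199, 305]
page = fetch_page(prices, offset=0, limit=3)
# page["complete"] is False here: a "lowest price" computed now is provisional
```

An agent reading `complete: false` knows its current minimum is provisional and can keep fetching; without that flag, it can only guess.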

Mistake 5: Assuming the interface is stable

Humans adapt to small changes in UI without thinking. Agents don't. They depend on structure — positions, selectors, predictable patterns. Even minor updates to the frontend can break their ability to execute tasks.

From an agent's perspective, the system isn't stable. It's constantly shifting.
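A toy illustration of why: position-based access breaks on a minor layout change, while name-based access survives it. The "layouts" below are invented stand-ins for a real DOM:

```python
# Hypothetical: an agent relying on position versus relying on a stable name.

layout_v1 = ["logo", "search_box", "price_list"]
layout_v2 = ["logo", "promo_banner", "search_box", "price_list"]  # minor redesign

# Position-based access: correct on v1, silently wrong on v2.
by_position_v1 = layout_v1[1]   # "search_box"
by_position_v2 = layout_v2[1]   # now "promo_banner" -- the agent's target moved

# Name-based access: unaffected by the insertion.
by_name_v2 = layout_v2[layout_v2.index("search_box")]
```

Nothing errored out in the position-based lookup; it just started pointing at the wrong thing. That's exactly the silent failure mode agents hit on real frontends.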

A simple question changed everything

After collecting all this, we asked every agent the same question:

"If you didn't have to use a browser, how would you complete this task?"

The answers were almost identical: "Give me the input. Return the output. No interface. No steps. No guessing."

So what should product teams do?

Designing for agents doesn't mean replacing interfaces. It means rethinking what sits behind them.

  • State needs to be explicit, not inferred.
  • Capabilities should be accessible without relying on multi-step flows.
  • Information should be structured as data, not just presented as UI.
  • Systems need to communicate whether results are complete and final.
  • Stability should be treated as a requirement, not a convenience.

These aren't UI improvements. They're changes to how a product behaves.
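Put together, the five principles above can be sketched as a single agent-facing response. This is a minimal, hypothetical shape (all field and function names are invented), not a prescription:

```python
# A minimal sketch of one response that satisfies all five principles at once.

def search_flights(origin: str, dest: str) -> dict:
    flights = [
        {"from": origin, "to": dest, "price": 189, "stops": 0},
        {"from": origin, "to": dest, "price": 212, "stops": 1},
    ]
    return {
        "ok": True,                # explicit state, not inferred from a re-render
        "items": flights,          # structured data, not presentation
        "total": len(flights),
        "complete": True,          # completeness stated, not implied by scrolling
        "schema_version": "1.0",   # a stability contract the caller can pin to
    }

resp = search_flights("SFO", "NYC")
```

One call, an explicit outcome, structured results, a stated boundary, and a versioned contract: each field answers one of the questions the agents in the experiment couldn't answer through the browser.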

Why we're working on this

This shift didn't happen because products suddenly changed. It happened because the executor did. We used to design tools for people. Now, people hand tasks to agents. And once the executor changes, the assumptions behind the product start to break with it.

We're already in a phase where agents are using products that were never built for them. That's why the failure is often silent. On the surface, everything still works. But underneath, tasks are being completed with less confidence, lower efficiency, and sometimes incorrect outcomes. Most teams just don't see it yet.

That's also why we ran this experiment, and why we're continuing this line of work at Avoko. Instead of assuming how agents behave, we observe it directly: where they hesitate, where they lose certainty, and where the system stops being reliable for them. These issues rarely show up in human testing, but they become obvious when you look from an agent's perspective.

Once you can measure them, you can design for them.
