Tools Are Not Enough
Every major AI framework now supports tool use. But giving an agent a hammer doesn't make it a carpenter.
The tool-use features that shipped in every major AI framework over the past year share a common assumption: that the hard problem of agent capability is access to tools.
It isn't. Access is table stakes. The hard problem is judgment — knowing which tool to use, when, with what parameters, in what sequence, and when not to use any tool at all.
I've seen this play out dozens of times with teams building their first production agents. They start with a carefully curated set of five tools. The agent performs well in demos. Then they add more tools — because more capability seems obviously good — and performance degrades. The agent starts making worse choices, not better ones.
This is the tool proliferation problem. It's analogous to the feature proliferation problem in product design: more options create more cognitive load, which leads to worse decisions.
The agents that perform best in production tend to have small, well-defined tool sets with clear, non-overlapping responsibilities. Each tool does one thing well. The agent doesn't have to reason about which tool is "more appropriate" — there's only one right answer.
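One rough way to catch overlapping responsibilities before they reach the agent is to lint your tool descriptions for heavy keyword overlap. The sketch below is illustrative, not part of any framework: the tool names, descriptions, and the Jaccard threshold are all assumptions you would tune for your own tool set.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tool:
    name: str
    description: str  # one sentence, one responsibility

# Overlapping tools force the agent to judge which is "more appropriate".
overlapping = [
    Tool("search_web", "Search the web for information."),
    Tool("lookup_docs", "Search documentation for information."),
    Tool("find_answer", "Find an answer to a question."),
]

# Non-overlapping tools: for any given need, only one tool applies.
curated = [
    Tool("fetch_url", "Download the raw contents of a specific URL."),
    Tool("query_orders_db", "Run a read-only SQL query against the orders database."),
    Tool("send_email", "Send an email to a customer on file."),
]

def overlap_pairs(tools, threshold=0.5):
    """Flag tool pairs whose description keywords overlap heavily.

    Uses Jaccard similarity over lowercased description words (minus a
    few stop words) as a crude proxy for overlapping responsibilities.
    """
    stop = {"the", "a", "an", "to", "for", "of", "on", "in"}

    def keywords(tool):
        return {w.strip(".,").lower() for w in tool.description.split()} - stop

    flagged = []
    for i, a in enumerate(tools):
        for b in tools[i + 1:]:
            ka, kb = keywords(a), keywords(b)
            if len(ka & kb) / len(ka | kb) >= threshold:
                flagged.append((a.name, b.name))
    return flagged
```

Running the linter flags the first set ("search_web" and "lookup_docs" both claim the search-for-information job) and passes the curated set, where each description stakes out distinct territory. A keyword check is crude, but it makes the design smell concrete: if two descriptions read alike to a script, they will read alike to the model.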
Before you add another tool to your agent, ask: is this tool solving an agent capability problem, or a tool design problem? Most of the time, it's the latter.