Thank you for your comment! And sorry upfront for the long answer ☕...
PydanticAI's production readiness
While PydanticAI is still in early development, I believe it's production-ready for specific use cases, particularly when you need structured, type-safe AI interactions. Its foundation on Pydantic and its overall simplicity give it a solid base. That said, I'd recommend:
1. Starting with simple implementations and only adding complexity when needed
2. Having proper monitoring in place (PydanticAI's Logfire integration is excellent for this)
3. Thoroughly testing in sandbox environments before production deployment
I share that feeling - existing frameworks do not offer enough robustness, and for me at least, PydanticAI feels much better.
The key is observability: making as much as possible visible inside the black box we call an agent. That was also the issue I had when experimenting - it was hard to follow when and why the agent used which tool function, and how different prompts affected this. Making the agent observable is the first step toward making it production-ready, i.e., controllable.
The only reason I would be careful with PydanticAI in production is that it is under heavy development. It is in that stage of an open-source project where many major things change quickly. They also mention it in their documentation:
PydanticAI is in early beta, the API is still subject to change and there's a lot more to do.
On your philosophical question 😉
This is a fascinating question! In my view, AI agents solve fundamentally different problems compared to traditional APIs and UIs in several ways:
1. Dynamic problem solving: Unlike traditional APIs, which follow fixed paths, agents can dynamically adapt their approach based on context. In the Airflow example from the article, the agent determines the relevant DAG without needing exact DAG IDs - something that would require complex logic in a traditional API. A stakeholder simply asks why data is missing from the monthly activity report; the agent then autonomously calls its tool functions to gather context and work out which DAG is relevant in this scenario. The user never needs to know exact DAG IDs - the agent acts as a layer on top of traditional APIs, translating natural-language requests into actual API calls.
2. Natural language: Agents bridge the gap between human intent and system capabilities. Instead of users learning specific API endpoints or UI workflows, they can express their needs naturally. The system handles the translation to technical operations.
3. Agents can:
- Break down complex tasks dynamically (unlike fixed API workflows)
- Chain operations based on context
- Recover from errors through reasoning
- Make autonomous decisions within defined boundaries
However, it's important to note that agents aren't always the answer:
- Complexity vs value: Often a simple API endpoint or UI is more efficient than an agent
- Control vs flexibility: Traditional interfaces offer more predictable behavior
- Cost and latency: Agent systems are typically slower and more expensive than a direct API call - you trade that overhead for better task performance
The key is choosing the right tool for the job. In the Airflow example, an agent makes sense because:
1. The interaction pattern (natural language → DAG operations) benefits from dynamic reasoning
2. The scope is well-defined with clear success criteria
3. The system can leverage structured outputs and tooling
In essence, agents don't just provide a more advanced way to achieve traditional outcomes - they enable new kinds of interactions that weren't practical with traditional approaches. But they should be used thoughtfully, as part of a broader system architecture that includes traditional APIs and UIs where they make more sense. One issue is the current hype surrounding agentic systems, which will continue in 2025. I hope decision-makers will not pursue projects involving AI agents solely because they are trendy, but rather when they make practical sense.
Thanks again for your comment; it really made me think, and I enjoyed it a lot. It's a great and important philosophical question. I might turn my thoughts into an article. 😉