Aligned by Design: What Purpose-Built AI Looks Like in Modern Pharmacy Benefits
Written by
Gaston Concilio
Apr 15, 2026
Artificial intelligence (AI) may be the most overpromised term in healthcare right now. Every benefits vendor has an AI story. Most of those stories are light on specifics.
There's a difference between AI that's purpose-built for a problem and AI that's retrofitted onto systems that weren't built for it. If you're a benefits leader trying to figure out whether your pharmacy benefit manager’s (PBM) AI story is driving real impact or just driving headlines, that distinction matters more than you might think.
Before I share what SmithRx built, let's talk about what we didn't build, because that's where the real differences show up.
Not All AI Is Created Equal
When people hear 'AI,' they picture a general-purpose tool like ChatGPT that they've likely tried themselves. These systems are trained on the public internet, and many maintain ongoing access to it. They generate confident answers based on probability, and occasionally make things up.
That's a reasonable model for a general-purpose consumer tool, but it's too broad an approach for a system handling your members' prescription coverage.
The stakes are different in healthcare. The data is sensitive, the answers have to be right, and the reasoning behind every decision needs to be something a real person can follow and verify.
AI is only as good as what it’s built on. Many of the AI features deployed by players in pharmacy benefits right now are sitting on top of dated infrastructure that was never designed for modern AI. That's not an exaggeration; some of the largest players in this industry are still relying on outdated data exchange methods that predate the modern internet.
Garbage In, Garbage Out
Modern AI requires clean, structured, and accessible data. If the underlying system gets coverage calculations wrong, the AI tool inherits those errors. So when you hear a large PBM announce an AI feature, you still have due diligence to do before you can trust the output. Ask what data the AI is actually running on, and whether human judgment is applied before it reaches your members.
At SmithRx, we have been building our platform to be AI-ready from the ground up, and our data layer to be fully transparent by design. That's not a marketing line. It's an architectural decision we made before we wrote a single line of code for an AI system.
We Built a Researcher, Not a Chatbot
Our member support tool is one example of an autonomous AI agent we've built at SmithRx. Think of it less like a chatbot and more like a highly specialized researcher. It knows which questions to ask, exactly where to look for answers, and how to verify what it finds before showing it to anyone.
It doesn't browse the internet or pull from outside sources. It works exclusively from SmithRx's verified data sources: your plan design, your formulary, your members' coverage details, and internally curated knowledge articles. Nothing else gets in.
For example, when a member asks whether a specific drug is covered in their plan, the system doesn't just return a yes or no. It shows its work, step by step, in plain English: here's what your plan says, here's how that applies to this medication, here's what your options are, and why.
We built this tool to provide answers people can actually follow and verify. That means our member services team spends less time hunting for information and more time talking to members.
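To make that mental model concrete, here is a minimal Python sketch of the pattern described above: a lookup that answers only from an internal, verified data store and returns a plain-English reasoning trail alongside the answer. Every name here (the formulary table, the plan and drug identifiers, the functions) is hypothetical, not SmithRx code.

```python
from dataclasses import dataclass, field

# Hypothetical internal data source: the ONLY place the agent may look.
# No internet access, no outside sources.
FORMULARY = {
    ("plan-123", "atorvastatin"): {"covered": True, "tier": 1, "copay": 10},
}

@dataclass
class Answer:
    covered: bool
    steps: list[str] = field(default_factory=list)  # plain-English reasoning trail

def coverage_lookup(plan_id: str, drug: str) -> Answer:
    """Answer a coverage question using only verified internal data,
    recording each step so a person can follow and verify the reasoning."""
    steps = [f"Checked plan {plan_id}'s formulary for {drug}."]
    entry = FORMULARY.get((plan_id, drug))
    if entry is None:
        steps.append("Drug not found in the formulary; escalating to a human agent.")
        return Answer(covered=False, steps=steps)
    steps.append(f"Found {drug} on tier {entry['tier']} with a ${entry['copay']} copay.")
    return Answer(covered=entry["covered"], steps=steps)
```

The point of the sketch is the shape of the output: the answer never travels without the steps that produced it, which is what lets a member services agent verify it before anything reaches a member.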
Top AI Questions from Benefit Leaders
These questions come up time and time again in conversations I have with benefits leaders: on sales calls, in broker meetings, and during customer check-ins.
Q: Is member data training your AI?
A: No. We use foundation models from providers like Anthropic. We are not in the business of training large language models, and member data has no role in that process.
Q: Will AI make decisions about my members without a human involved?
A: Our approach includes what's known as "human-in-the-loop" (HITL), which means exactly what it sounds like. A real person is involved at every meaningful decision point. The AI surfaces information and reasoning. A person (a trained agent or clinician) makes the call.
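For the technically inclined, the HITL pattern reduces to a simple gate: the AI can propose, but only a person can dispose. The sketch below is illustrative only (hypothetical names, not SmithRx code).

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Recommendation:
    action: str     # what the AI proposes
    rationale: str  # the reasoning it surfaces for the reviewer

def decide(rec: Recommendation, human_review: Callable[[Recommendation], bool]) -> str:
    """No member-facing action happens unless a person signs off.
    The AI's output is an input to a human decision, never the decision itself."""
    if human_review(rec):
        return rec.action
    return "escalate"  # reviewer declines; route back to the standard workflow
```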
Q: Is this the same AI I keep reading about getting things wrong?
A: No. General-purpose AI systems are built to handle anything you throw at them, which means they sometimes generate answers that sound right but aren't. This is what's known as "hallucinating." Our system is entirely different. It is rigidly scoped to a specific, verified set of data and a defined set of tasks. The universe it operates in is intentionally small, and in healthcare, that is a feature, not a limitation.
Q: Is this system secure? What happens to my members' data?
A: Our AI systems run on HIPAA-compliant AWS infrastructure, the same secure foundation that governs everything else we build. Member data is protected under the same privacy and compliance standards it always has been. The addition of our AI tools doesn't change those obligations; it inherits them.
How We Rigorously Tested AI at SmithRx
We didn't just turn this tool on and hope for the best. We ran a controlled experiment to definitively prove its value before broader rollout. We split our team into a control group following standard workflow and a treatment group using the AI-assisted tool. The treatment group was trained on the tool and used it consistently throughout the experiment, so we could isolate the AI's true impact on quality, accuracy, and overall performance.
We held the results to a bar for statistical significance similar to what you'd expect in a clinical trial, to ensure the improvements weren't just a statistical fluke. The results were definitive. We saw a statistically significant improvement across all measured metrics, including the metric that tracks how long service agents spend on the phone with members.
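For readers who want to see what that kind of check looks like in practice, here is a minimal sketch of a two-sample test on handle times, using a normal approximation and Python's standard library only. The numbers are invented for illustration; they are not our experiment's data, and a production analysis would use a proper t-test and power calculation.

```python
import math
from statistics import mean, stdev

def two_sample_test(control: list[float], treatment: list[float]) -> tuple[float, float]:
    """Welch-style test statistic on the difference in mean handle time,
    with a two-sided p-value from the normal approximation."""
    se = math.sqrt(stdev(control) ** 2 / len(control)
                   + stdev(treatment) ** 2 / len(treatment))
    z = (mean(control) - mean(treatment)) / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided p-value
    return z, p

# Hypothetical handle times in minutes for each group (illustrative only)
control = [8.2, 7.9, 9.1, 8.5, 8.8, 9.0, 8.4, 8.7, 9.3, 8.6]
treatment = [7.1, 6.8, 7.5, 7.0, 7.3, 6.9, 7.4, 7.2, 6.7, 7.6]
z, p = two_sample_test(control, treatment)
```

If `p` falls below the significance threshold (0.05 is the conventional bar), the difference between the groups is unlikely to be a fluke, which is exactly the question a controlled experiment like ours is designed to answer.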
It's remarkably easy to slap basic AI functionality onto a problem and call it innovation, especially when the people using the tool aren't AI specialists themselves. We chose a different path, and the difference shows up in genuinely improved outcomes, not just for our team, but for the members on the other end of every interaction.
How to Vet Healthcare AI, Including Ours
The next time a vendor tells you they're using AI, these questions will tell you whether you're looking at real impact or a talking point:
What infrastructure does your AI run on, and was it designed for healthcare specifically?
Where does the data powering your AI tool come from?
Did you test your AI on your own team before deploying it to members?
If they can't answer those questions clearly and specifically, that's a flag worth paying attention to.
Ask us the same questions. We built SmithRx to have nothing to hide, and that includes how our AI works.
Want a full list of questions worth asking to help you evaluate AI promises? Download our Pharmacy Benefits Partner AI Innovation Checklist to bring the right questions into any PBM AI conversation.

Written by
Gaston Concilio
Vice President of Engineering, SmithRx
Gaston Concilio is VP of Engineering at SmithRx, where he leads development of the core platform and AI strategy behind the company's modern PBM model. With 17 years of experience at Amazon and IBM, he specializes in large-scale software development and building high-performing engineering teams. He holds a Master's in Industrial and Organizational Psychology, a B.S. in Computer Information Systems, and is currently a Doctoral candidate in Business Administration.
