A recruitment software company was spending £12,000 a month on customer support. Two full-time support agents were handling an average of 1,400 tickets per month. Forty percent of those tickets were asking the same twelve questions: how to export data, how to reset passwords, how to configure integrations, how to read invoice reports.
Six hundred tickets a month on questions that had been answered hundreds of times before. Two experienced support agents spending a third of their working lives typing the same paragraphs.
We built them a custom AI assistant. Three months later, 53% of tickets were resolved without human intervention. Support agent time was freed for complex, relationship-sensitive issues. Customer satisfaction scores improved — because response times dropped from hours to seconds.
How AI Ticket Deflection Actually Works
AI chatbots now deflect 45–60% of incoming customer queries across B2B SaaS companies, with retail and financial services companies seeing even higher rates. The underlying mechanism is the same across industries: the AI is trained on your existing knowledge base, identifies the intent behind a query, and provides a relevant, accurate answer drawn from your own documentation.
The key word is "custom." An off-the-shelf chatbot that gives generic answers frustrates customers. A custom AI trained on your specific product, your specific documentation, and your specific tone resolves the query in a way that feels helpful rather than deflective.
The Build Process
Week 1–2: Knowledge Base Audit and Preparation
We started by auditing the client's existing support documentation: help articles, FAQ pages, past ticket resolutions, product guides, and video transcripts. We identified the 40 most common query types, mapped them to existing answers, and flagged where documentation was outdated, missing, or ambiguous.
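An audit like this usually starts with a simple tally of historical tickets against a query-type taxonomy. The keyword mapping below is a hypothetical sketch; a real audit would use the client's own categories and typically a trained classifier rather than keyword matching:

```python
from collections import Counter

# Hypothetical keyword → query-type mapping for illustration only.
QUERY_TYPES = {
    "export": "data-export",
    "password": "password-reset",
    "integration": "integration-setup",
    "invoice": "invoice-reports",
}

def classify_ticket(subject: str) -> str:
    """Assign a ticket to a query type by first matching keyword."""
    lowered = subject.lower()
    for keyword, query_type in QUERY_TYPES.items():
        if keyword in lowered:
            return query_type
    return "other"

def audit(tickets: list[str], top_n: int = 5) -> list[tuple[str, int]]:
    """Tally query types across historical tickets, most common first."""
    return Counter(classify_ticket(t) for t in tickets).most_common(top_n)
```

The output of this pass is the priority list: the query types at the top get their documentation cleaned first, because they dominate deflection volume.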
This phase is often underestimated. The quality of an AI assistant is entirely dependent on the quality of the knowledge it's trained on. Garbage in, garbage out. We spent a week cleaning, updating, and restructuring the documentation before a single line of AI configuration was written.
Week 3: AI Configuration and Training
We used a combination of retrieval-augmented generation (RAG) — where the AI retrieves relevant documentation and uses it to construct an answer — and fine-tuned responses for the most common query types. The AI was configured with the company's tone of voice guidelines, escalation triggers (queries it should always hand to a human), and boundaries (topics it should not attempt to answer).
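The retrieval half of RAG can be sketched as a similarity search over documentation chunks. The bag-of-words vectors below stand in for the dense embeddings a production system would use; the ranked chunks would then be passed to an LLM as context for answer construction:

```python
import math
import re
from collections import Counter

def _vec(text: str) -> Counter:
    """Term-frequency vector (a stand-in for real embeddings)."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documentation chunks most similar to the query.
    In production, these chunks become the LLM's grounding context."""
    qv = _vec(query)
    ranked = sorted(docs, key=lambda d: _cosine(qv, _vec(d)), reverse=True)
    return ranked[:k]
```

Grounding answers in retrieved chunks, rather than letting the model answer freely, is what keeps responses drawn from your own documentation instead of the model's general training data.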
Escalation triggers were particularly important. Any query involving billing disputes, cancellation requests, data privacy issues, or explicit expressions of frustration was configured to route immediately to a human agent with full context. The AI handles volume. Humans handle nuance.
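In code, escalation routing is a guard that runs before the AI is allowed to answer. The trigger list below is illustrative; the client's real triggers were tuned to their policies and included sentiment signals, not just keywords:

```python
from dataclasses import dataclass

# Illustrative escalation triggers, not the client's production list.
ESCALATION_TRIGGERS = (
    "billing dispute", "cancel", "refund",
    "data privacy", "gdpr", "frustrated", "unacceptable",
)

@dataclass
class RoutingDecision:
    handler: str  # "ai" or "human"
    reason: str

def route(query: str) -> RoutingDecision:
    """Send sensitive or emotionally charged queries straight to a human."""
    lowered = query.lower()
    for trigger in ESCALATION_TRIGGERS:
        if trigger in lowered:
            return RoutingDecision("human", f"matched trigger: {trigger!r}")
    return RoutingDecision("ai", "no escalation trigger matched")
```

Running this check first means a frustrated customer never gets a chatbot reply, which matters more for satisfaction scores than any deflection percentage.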
Week 4: Testing, Iteration, and Deployment
We tested the assistant against 200 historical tickets — checking response accuracy, tone, escalation decisions, and edge-case handling. Accuracy at the end of week four was 91% on the test set. We deployed to 20% of incoming traffic first, monitored for two weeks, and then rolled out to 100%.
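Both halves of this step — scoring against historical tickets and the staged rollout — can be sketched simply. The accuracy metric below stands in for the richer human review of tone and escalation decisions; the hash-based bucketing is one common way to split traffic deterministically:

```python
import hashlib
from typing import Callable

def evaluate(assistant: Callable[[str], str],
             test_set: list[tuple[str, str]]) -> float:
    """Fraction of historical tickets where the assistant's predicted
    intent matches the labelled expected intent."""
    correct = sum(1 for query, expected in test_set
                  if assistant(query) == expected)
    return correct / len(test_set)

def in_rollout(user_id: str, percent: int) -> bool:
    """Deterministic bucketing for a staged rollout (e.g. percent=20).
    The same user always lands in the same bucket across sessions."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < percent
```

Deterministic bucketing matters for the monitoring phase: a user who saw the AI at 20% rollout keeps seeing it, so satisfaction comparisons are not muddied by users bouncing between handlers.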
The Results at 90 Days
- Monthly tickets handled by AI (no human): 742 of 1,398 (53%)
- Average AI response time: 8 seconds (vs. 4.2 hours for human agents)
- Customer satisfaction score: 4.4/5 for AI-resolved tickets (vs. 4.1/5 for human-resolved — counterintuitively higher due to response speed)
- Support team time reclaimed: ~22 hours per week
- Monthly cost saving vs. headcount: approximately £4,800
Freshworks research on AI in customer service found that 95% of decision-makers report reduced support costs after AI implementation, with 2–5x ROI multipliers within the first year. This client's trajectory was consistent with that — the assistant paid for its build cost within the first four months of operation.
What We Can Build for You
Custom AI assistants are not just for customer support. We've built them for: internal HR query handling (leave policies, benefits, payroll questions), sales qualification (pre-screening inbound leads before passing to salespeople), candidate FAQ handling for recruitment businesses, and internal knowledge bases for distributed teams.
The use case determines the architecture. What's consistent across all of them is the process: understand the query types, build quality knowledge, configure carefully, test extensively, and deploy incrementally.
If your team is spending time answering the same questions repeatedly, there's a strong chance a well-built AI assistant could handle the majority of those queries — freeing your people for work that actually requires them.
