
Stop Using AI. Start Implementing It.

Updated: Apr 7


Everyone is investing in AI. Far fewer are implementing it well.

We spoke with Gaspar Wong, Head of Product at TriloDocs, to understand what separates the two and what leaders in regulated industries need to know before they start.

 


1. Everyone is talking about adopting AI - but there’s a difference between using AI and implementing it. Where do most organisations go wrong?


The distinction matters more than people realise. Using AI - handing your team a ChatGPT licence and letting people experiment - is relatively low risk and low commitment. Implementation is a fundamentally different undertaking. It touches your processes, your governance, your people. And that's where most organisations underestimate what they're signing up for.


The most common mistake is starting with the solution rather than the problem. Companies scan the market, find something that looks promising, and roll it out - without having clearly defined what they're trying to fix or how the tool fits into the way their teams actually work. The result is predictable: friction, resistance, and an AI investment that never really lands.


What separates successful implementations is the willingness to slow down at the start. To ask: where are we losing time? Where does repeatable logic eat into work that should require real expertise? Where do we still need human judgment, and where can we safely remove it from the equation? That diagnostic phase is what makes everything that follows coherent. Once you understand the problem, the right architecture becomes obvious. Without it, you're just adding complexity and calling it transformation.


2. There are fundamentally different types of AI - rule-based systems, large language models, agentic workflows. How do you avoid picking the wrong type of AI for the wrong problem?


There are two sides to this: understanding the tool and understanding the problem. And honestly, the first part is the easier one. There's plenty of material out there on what different AI systems can do. What you can't read up on is what to use for your specific situation.


For me, the key point is to tackle the problem at hand with the correct tools and approaches - and the easiest way to think about it is in layers.


At the base, you have tasks that require absolute consistency - structured data processing, compliance checks, calculations where the same input must always produce the same output. Rule-based systems were built for this. Above that, you have tasks that involve ambiguity: drafting narrative content, translating complex findings into lay language, working with context that can't be reduced to a formula. That's where large language models add real value. And at the top, agentic workflows that orchestrate both - powerful, but the most complex to set up and the easiest to misapply.


Where things go wrong is when there is a mismatch. If you ask a large language model to infer from messy structured data, or to make clinical interpretations that should be governed by external logic - ethical considerations in a paediatric trial, risk factors not previously documented - you're asking it to guess where you need certainty. The opposite is equally limiting. The safest approach is to separate your problems first, then build the architecture around that separation. Deterministic systems where precision is mandatory. Generative systems where flexibility is needed. And clear orchestration between them.
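That separation can be made concrete. The sketch below is a loose illustration of the layered idea, not TriloDocs code - every name and rule in it is hypothetical, and the generative layer is stubbed where a real system would call a language model:

```python
from dataclasses import dataclass

@dataclass
class Task:
    kind: str       # "deterministic" or "generative"
    payload: dict

def rule_based_check(payload: dict) -> bool:
    # Deterministic layer: the same input must always produce the
    # same output. Hypothetical compliance rule for illustration.
    return payload["dose_mg"] <= payload["max_dose_mg"]

def draft_narrative(payload: dict) -> str:
    # Generative layer: in production this would call a language
    # model; stubbed here so the sketch stays self-contained.
    return f"The study enrolled {payload['n']} participants."

def orchestrate(task: Task):
    # Orchestration layer: route each task to the component that
    # was built for it, never the other way around.
    if task.kind == "deterministic":
        return rule_based_check(task.payload)
    if task.kind == "generative":
        return draft_narrative(task.payload)
    raise ValueError(f"Unknown task kind: {task.kind}")
```

The point of the routing step is that a compliance check never reaches the generative component, and a drafting task never gets forced through rigid rules - the mismatch described above is designed out rather than policed after the fact.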


3. What does it cost to run AI in production - not the licence fee, but the real operational overhead that nobody talks about upfront?


The licence fee is the easy part. It's a number on a yearly invoice, and everyone understands it. What's harder to quantify - and what vendors rarely walk you through - are the layers underneath it.


The first is integration. How does this system connect to what you already have? How clean is your data, and what does it cost to get it there? Do you have the infrastructure, the technical architecture, and the IT resources and knowledge to adopt it? If not, you'll have to build all of that before AI adoption can create value. These are real costs, and they're almost always underestimated.


The second is ongoing operation. This isn't a solution you deploy and forget. You're monitoring outputs, running quality checks, maintaining governance and audit trails, managing a human review process. And nothing stands still: the underlying models evolve, regulations change, data changes, the technology moves. Keeping pace with that is its own workstream.


The third is exit. It sounds premature to think about it at the start, but it's exactly when you should. What are the vendor's data security commitments? What happens if the relationship ends - by choice or otherwise? How do you disengage without exposure? In regulated environments especially, these aren't hypothetical questions. They need answers before you sign anything.



4. “AI saves time” is the headline everyone leads with. But what does timesaving mean when you factor in integration, change management, and workflow redesign?


It's real - but it's not immediate, and it's not automatic. Many AI tools feel fast upfront but shift the burden downstream. If experts still need to verify every output, you haven't saved time - you've just redistributed the risk. The early stages of any serious AI implementation will often look like the opposite of efficiency. Teams are learning, processes are being redesigned, and the tool hasn't yet found its place in the workflow. That's normal, and organisations that expect instant returns tend to abandon good implementations too early.


The gains come once the system is properly embedded. And they're most visible in the work that nobody wants to talk about: the repetitive, high-effort tasks that consume expert time without requiring expert judgment. Validating data points. Reformatting outputs. Cross-checking the same tables. When AI absorbs that work reliably, the savings compound across the entire workflow. Even marginal improvements at multiple stages add up to something significant.


But it's important to frame this correctly. The value of AI isn't to replace the people doing the job - we still need people for the complex understanding, the contextual writing, the interpretation of what the data is actually telling us. What AI does is remove the repetitive effort, so that experts can focus on the higher-level, higher-value judgments that actually move things forward. In regulated environments, even small reductions in that kind of friction can translate into significant timeline improvements across an entire program.



5. You can’t just drop AI into an existing process and expect it to work. What does genuine integration into an everyday operational workflow look like?


There's an organisational dimension to integration, and then there's the human one - and the second is often what determines whether an implementation actually sticks.


The people using these tools day-to-day are not tech professionals. They're domain experts under deadline and delivery pressure, working in environments where accuracy isn't optional. They don't have the bandwidth to learn a complex new system, and they shouldn't have to. If the tool requires significant effort to understand or is poorly designed, it will be quietly set aside the moment things get busy - and in this industry, things are always busy.


When we were building TriloDocs, we deliberately drew inspiration from the interactions that already felt natural to our users. The way medical writers ask each other questions when they pick up a new protocol, for example. The instinct to search rather than navigate. The expectation that a system should surface what's relevant without requiring you to know exactly what to ask or prompt for. Design that starts from those moments of familiarity creates far less resistance than design that starts from capability.


The other piece is governance. Users need to trust that the system is operating within clear boundaries - that there are no surprises waiting for them when a senior reviewer or an inspector looks at the output. That confidence doesn't come from documentation. It comes from experience, and it builds over time when the tool consistently does what it says it will.



6. Most companies start with one AI tool. Why is that rarely enough - and how do you build from there?


Because most problems worth solving aren't one-dimensional. A single AI tool, however well chosen, is optimised for a specific type of task. The moment you move outside that scope - different data types, different outputs, different levels of precision required - you're asking it to do something it wasn't designed for.


In practice, a mature AI implementation looks more like an architecture than a product. You might have a rule-based system handling the deterministic processing, a language model managing narrative outputs, and a validation layer sitting across both. Each component does what it does well, and the orchestration between them is where the real value gets created.
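As a rough sketch of that orchestration idea - hypothetical throughout, with the language model again stubbed out - a validation layer across both components might simply require that the generated narrative agrees with the figure the deterministic layer computed:

```python
def deterministic_totals(rows: list[dict]) -> int:
    # Rule-based component: the same table always yields the same total.
    return sum(r["value"] for r in rows)

def narrative_summary(total: int) -> str:
    # Generative component, stubbed: a real system would call a
    # language model to draft this sentence.
    return f"A total of {total} events were reported."

def validate(total: int, narrative: str) -> bool:
    # Validation layer sitting across both: the narrative must cite
    # the exact figure the deterministic layer computed.
    return str(total) in narrative

# Usage: each component does one job; the check ties them together.
rows = [{"value": 3}, {"value": 4}]
total = deterministic_totals(rows)
text = narrative_summary(total)
ok = validate(total, text)
```

The design choice here is that neither component trusts the other: the number comes only from the deterministic side, the prose only from the generative side, and the cross-check between them is where the architecture earns its keep.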


We went through exactly this process with TriloDocs. The temptation to build one system that does everything is real - and it took discipline to resist it. The mistake is trying to build all of that at once. Start with the most clearly defined problem, implement something that solves it reliably, and let the architecture grow from there. Each layer you add should be connected to a specific need - not to a vision of what the system might eventually become. Organisations that take that incremental approach end up with something coherent and maintainable. Those that try to solve everything upfront tend to build systems that are expensive to run and difficult to explain.



7. How do you know which part of your business is actually ready for AI - and which parts aren't?


There are multiple signals worth looking for. The first is process clarity. AI performs best when it's operating against a well-understood workflow, where the inputs are defined, the business and domain logic is consistent, and the expected output is clear. If the process itself is still evolving or poorly documented, introducing AI will amplify the confusion rather than resolve it.


The second is repeatability. The strongest use cases are those where preset logic sets are being applied over and over, consuming time and capacity that could be better spent elsewhere. That's where AI delivers the most immediate and measurable return.


The third is data quality. Generative models in particular are only as reliable as what goes into them. Variable, inconsistent, or poorly structured data leads to outputs that require constant correction - which defeats the purpose entirely and creates significant downstream risk in regulated contexts.


Beyond these three, there's a dimension that gets overlooked in most readiness assessments: the people. Resistance to AI adoption often reflects legitimate concerns about job security, workload, and trust in the technology. Those concerns don't go away if you ignore them. Whether your teams genuinely believe the tool is there to help them - rather than replace them - will determine whether the implementation gains traction or stalls. And when choosing a vendor, it's worth asking whether they understand your user base well enough to support that conversation, or whether they're focused purely on the technical delivery.



8. For an operations or innovation leader starting this journey - what's the one thing they need to understand before implementing anything?


Simple: implementing AI is not just buying a tool. It's an organisational and architectural decision - a commitment to moving in a direction, not a one-time purchase.


Before anything else, understand your own organisation. What problem are you actually trying to solve? Who does this affect, and how ready are they? What operational overhead can you realistically absorb once the solution is embedded? These questions matter because the answers shape everything - the tool you choose, the vendor relationship you need, and the timeline you set.


On that last point: think carefully about what kind of partnership you're looking for. AI is not a static solution - it evolves, and your needs will evolve with it. With TriloDocs, for example, the approach is deliberately non-invasive - minimal setup, with most of the operational overhead managed on our side. But some solutions may require significant internal investment to get running. Neither is inherently wrong, but you need to know what you're taking on before you commit.


And through all of it, keep the human dimension front and centre. You can have the right architecture, the right governance, the right tool, and still lose if the people using it don't want to. Implementation is as much a change management challenge as a technical one. Teams need to understand that the tool is there to make their work better, not to pile five more projects onto the time it saves on one. That message needs to be deliberate and consistent.


It's not an easy journey, and it shouldn't be treated as one. The organisations that navigate it well tend to be those that invest as much in the relationship with their implementation partner as in the technology itself - because the hard questions don't stop once the contract is signed. They're just getting started, and so are you.



Gaspar Wong is Head of Product at TriloDocs, where he leads the development of the company's AI platform for regulatory documentation. He brings a background in product leadership and cloud strategy, with experience delivering large-scale technology programmes across global organisations.

 
 