August 12, 2024

How trustworthy can AI be?

Trust = Efficiency in the world of AI

The value-add of AI is limited by how trustworthy the AI systems being used actually are. The more trustworthy AI systems are, the more companies can be hands-off with the work done by AI. On the other hand, the less trust companies have in AI systems, the more manual effort they will spend verifying the performance of those systems.

While this makes sense conceptually, it's difficult for users of AI to determine when an AI is trustworthy enough to do certain tasks without human oversight, or with only a specific amount of it. Fundamentally, this is because LLM-based systems don't have a finite solution set (or at least not an easily observable one). In other words, it's impossible in principle to see every possible output that an LLM could produce, which makes it very difficult to definitively validate that it will work in every single instance.
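Because the output space can't be enumerated, teams typically fall back on sampling: run the system many times and apply deterministic checks to each output. Here's a minimal sketch of that idea - the `generate_draft` stub and the specific checks are hypothetical illustrations, not any particular vendor's implementation:

```python
import re

def generate_draft(prompt: str) -> str:
    """Stand-in for an LLM call; swap in a real client in practice."""
    return "Study ART-001 enrolled 120 patients across 4 sites."

# Deterministic validators: each returns True if the draft passes.
CHECKS = {
    "non_empty": lambda text: len(text.strip()) > 0,
    "mentions_study_id": lambda text: re.search(r"ART-\d+", text) is not None,
    "no_placeholder_text": lambda text: "TODO" not in text and "[INSERT" not in text,
}

def estimate_pass_rate(prompt: str, n_samples: int = 100) -> float:
    """Sample the system repeatedly and report the fraction of
    outputs that pass every deterministic check."""
    passes = 0
    for _ in range(n_samples):
        draft = generate_draft(prompt)
        if all(check(draft) for check in CHECKS.values()):
            passes += 1
    return passes / n_samples

if __name__ == "__main__":
    rate = estimate_pass_rate("Summarize enrollment for study ART-001.")
    print(f"Estimated pass rate: {rate:.1%}")
```

A high sample pass rate is evidence of reliability, not a proof of it - which is exactly why the design choices discussed below matter.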

This is where it becomes important to understand what can be done to mitigate this trustworthiness issue, and how organizations can tie specific aspects of AI systems and features to trustworthiness to get a better sense of how much they can actually trust those systems.

The rest of this blog post describes two of the main dimensions along which AI systems can improve their trustworthiness.

Trust on the Technical Level: Building Robust AI Systems

At the heart of trustworthy AI lies the technical foundation upon which these systems are built. For AI to be reliable, particularly in a field as demanding as life sciences, it must be designed with rigorous safeguards to minimize errors and enhance consistency.

At the root of this is building a system that leverages as little AI as possible (as we've written about here): less AI tends to mean a more reliable software application overall.

Narrowly scoping the work done by AI, fitting AI into workflows that include non-AI components, and, where possible, handing the critical parts of document drafting in a regulated space to non-AI components are key to building a technically trustworthy AI system. In fact, that's how Artos achieves such high reliability with its systems.
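To make the pattern concrete, here's a hypothetical sketch of what "as little AI as possible" can look like in a document pipeline - critical facts flow deterministically from structured data, the AI step is narrowly scoped to prose, and a deterministic guardrail sits between them. This is an illustration of the pattern, not Artos's actual architecture:

```python
from dataclasses import dataclass

@dataclass
class StudyData:
    study_id: str
    n_patients: int
    n_sites: int

def render_summary_table(data: StudyData) -> str:
    """Deterministic: critical facts come straight from structured data."""
    return (f"Study ID: {data.study_id}\n"
            f"Patients enrolled: {data.n_patients}\n"
            f"Sites: {data.n_sites}")

def draft_narrative(data: StudyData) -> str:
    """The one narrowly scoped AI step: prose only, no numbers of record.
    Stubbed here; in practice this would call an LLM."""
    return "Enrollment proceeded on schedule across all participating sites."

def validate_narrative(narrative: str, data: StudyData) -> bool:
    """Deterministic guardrail: reject drafts that restate figures,
    so the table stays the single source of numeric truth."""
    return str(data.n_patients) not in narrative

def assemble_section(data: StudyData) -> str:
    narrative = draft_narrative(data)
    if not validate_narrative(narrative, data):
        raise ValueError("Narrative restates figures; regenerate or escalate.")
    return render_summary_table(data) + "\n\n" + narrative

print(assemble_section(StudyData("ART-001", 120, 4)))
```

The design choice here is that the AI never touches the numbers that matter; even a bad draft can only degrade the prose, not the facts.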

Trust on the Transparency Level: Ensuring Openness and Accountability

Trust is not something that can be solved solely by a more technically robust AI system. Ultimately, trust in the system is something that humans will have to decide on.

And especially in the early days of AI, this is where transparency is key. High-performing AI systems that are transparent make it far easier to build trust in them.

Being able to trace how AI systems complete key actions, leveraging AI systems that "show their work", and seeing that on a repeated, consistent basis without too much effort are key to building this trust in the system.
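One common way to make a system "show its work" is a structured audit trail that records, for every AI action, what went in, what came out, and which sources were used. A minimal sketch follows - the `run_step` wrapper and the field names are illustrative assumptions, not a description of any particular product:

```python
import json
import time
from typing import Callable

def run_step(step_name: str, fn: Callable[[str], str], prompt: str,
             sources: list[str], log_path: str = "audit_log.jsonl") -> str:
    """Run one AI step and append a structured record of it,
    so every output can be traced back to its inputs and sources."""
    output = fn(prompt)
    record = {
        "timestamp": time.time(),
        "step": step_name,
        "prompt": prompt,
        "sources": sources,  # documents the step was allowed to draw on
        "output": output,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return output

# Usage with a stubbed model call:
draft = run_step(
    step_name="draft_enrollment_narrative",
    fn=lambda p: "Enrollment proceeded on schedule.",  # stand-in for an LLM call
    prompt="Summarize enrollment for study ART-001.",
    sources=["protocol_v2.pdf", "site_report_2024-06.csv"],
)
```

A log like this is what lets a reviewer check the AI's work repeatedly and consistently without re-deriving it from scratch each time.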

Across the industry, we expect that increasingly robust features on these two fronts will be what makes AI systems increasingly trustworthy.

The implications of different levels of trust in an AI system

Having now worked with everyone from small startups to large pharma, we've found that trust is the real key to unlocking value with AI.

Trust allows companies to achieve far greater efficiency with AI (assuming the AI systems are robust and performant). This occurs at two levels:

  1. At the user level, users become far more efficient when they spend less time checking the work of an AI system. This individual-level improvement unlocks tangible organizational benefits from AI - things like cost efficiencies, time savings, and resource reallocation.

  2. This feeds into the second effect of trusted AI systems: easier change management. While change management as a whole is a complex topic with many parts, the logic here is fairly straightforward: the more efficient AI systems become relative to the status quo, the easier it becomes for potential users to adopt the new system. This cost-benefit analysis is what organizations should be weighing when setting change management strategy, and one we see under-emphasized in a lot of organization-wide hype around AI.

While trust in AI is an obvious consideration for anyone who has seen AI's shortcomings, tackling the problem can seem vague and intractable.

However, by focusing on building technically robust systems and ensuring transparency, organizations can gradually build the trust necessary to fully leverage AI's potential. This approach - at a broad level - helps clarify what levers organizations will have to push and pull on to unlock the full potential of AI.

Stay Informed, Subscribe to Our Newsletter

Sign up for our newsletter to get alerts on new feature releases, free resources, industry insights, and more.
