June 24, 2024
A Framework for Generative AI Adoption in the Life Sciences
Approaching AI
In the life sciences industry, Generative AI presents an abundance of possibilities. While that’s exciting, it also poses a practical problem for the people who have been tasked with figuring out what Generative AI is and how to use it: there’s just a lot to figure out.
At Artos, we’ve now talked to and partnered with many of the professionals bringing AI to the life sciences, from small startups to large pharma. And we’ve realized that, for many organizations, especially those just beginning to think about AI, figuring out how the enterprise should adopt it is easier said than done.
There are multiple teams involved in these decisions: the executive team, SMEs from different departments, IT, and legal. Each of these teams is important to the adoption of AI, but coordinating their efforts can be difficult. How do you reconcile what the SMEs and end users want (and are seeing from vendors) with what IT needs in terms of security and where the executive team wants to take AI overall?
The Risk-Reward Paradigm
An easy way to think about it
The organizations that have thought about AI most rigorously all seem to guide their thinking with roughly the same framework: what is the risk-reward ratio of adopting AI?
It seems like an obvious framework, but it’s remarkably effective at clarifying exactly how an organization feels about AI adoption. The rest of this post lays out that framework in a way that can help companies figure out exactly how they are comfortable leveraging AI.
The Reward Side
The potential reward should always be thought about in terms of concrete business use cases that could benefit from AI. For each of those use cases, focusing on the main metrics that Generative AI should affect can help determine how large the potential reward could be. So far, three main categories of metrics seem to be of interest to companies that have already adopted Generative AI tools (a rough way to size the first two follows this list):
Time-savings: The use of Generative AI can significantly reduce the time spent on various tasks by automating and augmenting processes. Especially for key processes related to bringing treatments to market, time-savings is by far the most common metric tracked by companies deploying Generative AI.
Cost-savings: This is the other side of the same coin. Unsurprisingly, we find that AI is not seen as a replacement for the existing workforce. AI will certainly change processes for the better, but a human will still be required in the process; the savings come from making that human’s time go further, not from eliminating it.
Competitive intelligence: Companies’ data is what makes them unique. As such, much of the conversation around AI involves how AI can help companies better leverage their data - for example, coming up with better responses to agency feedback based on prior studies or improving trial design - to continue developing their differentiated position in a market. This is different from just trying to achieve the same outcome on a shorter timeline or with a smaller budget; this is about improving the outcome itself. It is also the trickiest metric to measure, since the value-add of AI here can’t be identified immediately or even in the short term. Companies looking to get this out of AI will need to invest early and be patient.
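For the first two metrics, a back-of-the-envelope estimate per use case is often enough to size the reward. Below is a minimal sketch of one way to do that in Python; the task, counts, hours, expected reduction, and loaded rate are all hypothetical assumptions your organization would replace with its own numbers, not benchmarks from any engagement.

# Hypothetical reward estimate for one candidate use case.
# Every number below is an illustrative assumption, not a benchmark.

def estimate_annual_reward(tasks_per_year: int,
                           hours_per_task: float,
                           expected_reduction: float,
                           loaded_hourly_rate: float) -> dict:
    """Translate an expected time-saving into hours and dollars per year."""
    hours_saved = tasks_per_year * hours_per_task * expected_reduction
    return {
        "hours_saved_per_year": hours_saved,
        "cost_saved_per_year": hours_saved * loaded_hourly_rate,
    }

# Example: a drafting task done 40 times a year at 30 hours each,
# assuming AI removes 50% of the effort at a $150/hour loaded rate.
print(estimate_annual_reward(40, 30.0, 0.5, 150.0))
# {'hours_saved_per_year': 600.0, 'cost_saved_per_year': 90000.0}

Even a crude estimate like this puts candidate use cases on the same footing before you weigh them against the risk side below.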
Investing in learning about AI, so that you have a good sense of what the potential rewards could be for your organization, is key to determining your organization’s risk-reward ratio effectively.
The Risk Side
The risk side is arguably more important. And the concerns generally break down into three categories:
Data Security: This is by far the largest concern that we’ve seen from those exploring AI, and it revolves around how much data is exposed to Generative AI tools and how safe that data is. The exact answers will vary by organization and its tolerance for data-related risk, but we generally find that organizations evaluate the risk here by asking four questions:
What data is being given to an AI system?
What exactly happens to that data when it is used with an AI system?
How many places does that data end up being stored in, and how secure are those places?
How much control/visibility do I get into the places my data is stored?
Implementation Costs: This is a question of how much it costs to get up and running with AI. The question has two flavors to it: 1) AI will take effort for end users to adopt, since it demands a new way of doing work, and 2) AI can take effort for an organization to adopt from an IT and legal perspective. Determining how large the implementation costs will be comes down to a few factors:
How excited are end users about modifying their current workflows? Especially for more complex use cases in the life sciences that already have codified processes, whether end users will be comfortable with that change is an open question, and it is best answered by involving the potential end users of AI in the discussion. The more intrinsically excited your end users are about the use cases you’re considering, the easier adoption will be and the lower the cost of change management.
On a related note, what bandwidth do your end users have to handle this change? They may be very excited about the technology, but all new technologies have a learning curve. If potential users don’t have the bandwidth to learn the new technology, adoption may be slower and implementation costs higher. Alternatively, you may find that, to achieve adoption, you need to add bandwidth for those end users, which is itself an added cost. For example, some organizations that we’ve worked with choose to create a dedicated AI team that is responsible for change management around AI and/or for lightening end users’ load so that they have the bandwidth to adopt.
What does your current software/IT infrastructure look like? Where your data lives and in what form can affect how much utility you get out of AI. Generally speaking, the more your data exists in a single, well-organized place, the lower the implementation cost is likely to be and the more your organization is likely to benefit from AI.
Technology: There is, of course, the risk that the technology does not actually work. Being educated on AI is the best way to mitigate this risk. However, when it comes to adopting a relatively new technology, there are a few things beyond that to consider when determining how much risk you’re taking on:
How specific is the use case? The more focused you are on exactly what you want AI to do, the more likely you are to have the technology work in your favor. This doesn’t necessarily mean you have to start with only one use case, but it does mean that you should be very clear about the job (or jobs) that the AI should do.
How well does the use case align with what AI is capable of? This is where having a good intuition for AI is important. If you find a use case that meets all the other criteria discussed here but that AI cannot handle well, the result will still be a disappointment.
How capable is the team developing the technology? Whether you’re thinking about building AI in-house or working with a vendor, this is key to consider, because getting AI to work is not as simple as it seems. What makes the most recent hype around AI particularly troublesome for organizations looking to adopt it is that AI has become easier than ever to try out, especially for those with limited experience in the space. This makes it incredibly easy for even inexperienced teams to build a promising proof-of-concept that works in one or two particular instances of a use case. However, building AI that is high-quality, works consistently, and does so across all instances of a use case is much more difficult.
The Right Risk-Reward Ratio
Organizations in the life sciences industry appear to take these factors into account in some manner when formulating their strategy towards AI. However, exactly what risk-reward ratio is acceptable, and whether Generative AI clears that bar, differs for each company.
For example, some companies opt to test out AI with a use case that offers smaller time- or cost-savings but has a lower risk profile. Others might pursue larger savings and use that to justify incurring additional risk around technology failure.
Many companies may also be able to incur larger risks in one of those categories because the risk in another category is lessened. If you have an end user group that is very enthusiastic about adopting AI, the implementation risk might be lessened and you might be willing to incur additional risk around the use cases you want to develop AI tools for.
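One way to make these trade-offs explicit is to score each candidate use case against the reward metrics and risk categories above and compare the resulting ratios. The sketch below is a minimal, hypothetical rubric in Python: the category names mirror this post, but the 1-5 scales, the example use cases, and every score are assumptions that your organization would replace with its own judgment.

# Hypothetical risk-reward rubric. Scores are on an assumed 1-5 scale
# (higher reward = better, higher risk = worse); all values are illustrative.

REWARDS = ("time_savings", "cost_savings", "competitive_intelligence")
RISKS = ("data_security", "implementation_cost", "technology")

def risk_reward_ratio(scores: dict) -> float:
    """Summed reward scores over summed risk scores; higher is better."""
    return sum(scores[k] for k in REWARDS) / sum(scores[k] for k in RISKS)

candidates = {
    # Modest reward, but enthusiastic end users keep implementation risk low.
    "document QC assistant": dict(
        time_savings=2, cost_savings=2, competitive_intelligence=1,
        data_security=1, implementation_cost=1, technology=2),
    # Bigger prize, but more exposure in every risk category.
    "submission drafting": dict(
        time_savings=5, cost_savings=4, competitive_intelligence=3,
        data_security=3, implementation_cost=3, technology=4),
}

for name, scores in candidates.items():
    print(f"{name}: {risk_reward_ratio(scores):.2f}")
# document QC assistant: 1.25
# submission drafting: 1.20

Note how the lower-reward use case can come out slightly ahead purely because its risk profile is lower: the trade-off described above in numeric form. Lessening one risk category (say, implementation) raises the ratio without touching the reward side at all.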
It is also entirely possible that your organization, after thinking things through with this framework, decides that the risk-reward ratio is not favorable enough to justify adopting AI. And there’s nothing wrong with that.
While AI is one of the most powerful and exciting technologies the world has seen, it’s best treated as a tool, not unlike the other tools that organizations in the life sciences space use to improve their business processes. And the process of evaluating that tool should feel similar.
Reach out to info@artosai.com with questions. We’d be happy to share what we’ve learned and help your organization design an AI strategy (for free, no strings attached).