September 2, 2024

A Guide to Evaluating the Security of Enterprise AI: AI Models and Enterprise Applications

Enterprise AI applications are powerful tools that combine the capabilities of AI models, such as large language models (LLMs), with enterprise-level software systems. These applications solve specific business problems by creating workflows that automate tasks, generate insights, or improve decision-making. Because biotech, pharmaceutical, and medical device companies are considering feeding highly sensitive data into enterprise AI applications, it is critical for these organizations to thoroughly understand the security risks of this type of tool, especially given the negative press around security that AI systems like ChatGPT received in the early days after release.

When evaluating the security of enterprise AI, it is key to understand the components that make up an enterprise AI application. From a security standpoint, there are two main components to consider: the AI models/systems and the enterprise applications built on them. Each of these elements presents unique challenges and considerations when it comes to protecting data and ensuring robust security protocols.

1. Evaluating the Security of AI Models and Systems

AI models, such as GPT-4 (a closed-source model) or LLaMA (an open-source model), serve as the backbone for many enterprise AI applications. The security considerations for these models are nuanced, especially since they are often designed to process and analyze vast amounts of data, including sensitive or proprietary information. To ensure proper security protocols are in place, businesses should ask the following questions:

Does the model change after it sees my data?

This question pertains to whether the AI model is being trained on your data. If the model is trained on proprietary or sensitive information, it’s essential to know how that data is used, who has access to it, and what other data is integrated into the model. Closed-source models like GPT-4 often operate as black boxes, meaning it can be difficult to ascertain what happens to your data once it's submitted. Although these models are generally not fine-tuned on every user interaction, some models store inputs for evaluation purposes.

On the other hand, with open-source models like LLaMA, you have more control over how data is handled. Since these models can be deployed and managed internally, organizations can make explicit decisions about what data is fed into the model and ensure that only authorized personnel access it.
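To make this concrete, here is a minimal sketch of what internal deployment can look like, assuming a LLaMA-style checkpoint has already been downloaded to a local directory and the Hugging Face transformers library is available; the model path and prompt are purely illustrative. The point is that inference runs entirely on infrastructure you control, so prompts and outputs never leave your environment.

```python
# Minimal sketch: running an open-source model entirely inside your own
# infrastructure, so data is never sent to a third-party service.
# The local checkpoint path below is a hypothetical example.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "./models/llama"  # locally downloaded weights; no external API calls

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR)

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Run inference locally; nothing leaves machines you control."""
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

print(generate("Summarize our SOP for equipment calibration:"))
```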

Does the model or system store my data?

Even if a model isn’t actively trained on your data, some systems store inputs and outputs for various purposes, such as evaluation, performance improvement, or troubleshooting. This is often the case for closed-source systems like GPT-4: the provider may store input/output logs temporarily to improve model performance or monitor for misuse. However, you may not have control over this process unless the service provider offers explicit guarantees about data retention and deletion. This presents a concern for organizations handling sensitive or regulated data.

In contrast, open-source systems provide more flexibility. Since they are self-hosted, businesses can define policies for whether data is stored, for how long, and who has access to it. When evaluating a system, always ask whether it stores your data, and if so, how long the data is stored, what rights you have over its removal, and whether those rights are legally enforceable. Note that open-source models can be less performant and more difficult to maintain, which raises questions about the risk-reward ratio of AI applications, a topic Artos also discusses on its blog.
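As an illustration of what such a policy could look like in practice for a self-hosted system, the sketch below assumes interaction logs are written as .jsonl files to a hypothetical directory and enforces a 30-day retention window; the paths and the window are assumptions, not recommendations.

```python
# Minimal sketch: a self-imposed retention policy for a self-hosted AI system.
# Prompt/response logs older than RETENTION_DAYS are deleted on a schedule.
import time
from pathlib import Path

LOG_DIR = Path("/var/log/ai-app/interactions")  # hypothetical log location
RETENTION_DAYS = 30                              # illustrative retention window

def purge_old_logs() -> int:
    """Delete interaction logs older than the retention window."""
    cutoff = time.time() - RETENTION_DAYS * 86_400
    removed = 0
    for log_file in LOG_DIR.glob("*.jsonl"):
        if log_file.stat().st_mtime < cutoff:
            log_file.unlink()  # permanently delete the expired log file
            removed += 1
    return removed

if __name__ == "__main__":
    print(f"Purged {purge_old_logs()} expired log files")
```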

2. Evaluating the Security of Enterprise Applications

While AI models power enterprise applications, the security of the application layer—the software that facilitates workflows and user interactions—is equally critical. In many ways, securing these systems is an extension of long-standing IT practices, but AI introduces some new challenges.

Governance and Permissions: Who can see what?

Enterprise applications are typically designed to handle data governance and permissions. Traditional software systems often work with well-structured data, making it relatively straightforward to define who can see what. However, in AI-driven applications, this becomes more complicated. AI models analyze data dynamically and can produce outputs that cross data boundaries.

For instance, if your organization uses an AI application across multiple teams, each team should only access data that’s relevant to their products or tasks. Yet, an AI system that treats all users equally could inadvertently expose sensitive data from one project to another. Without granular controls built into the system, these risks become heightened. Businesses must ensure that AI applications have strict role-based access controls that mirror those of other enterprise software systems. Moreover, AI-generated outputs must be restricted based on the same permissioning rules as the underlying data.
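One way to make this concrete is to filter data by role before it ever reaches the model, so generated outputs can only draw on documents the requesting user is cleared to see. The sketch below is a simplified illustration; the Document and User structures and the team-based rule are assumptions standing in for whatever permissioning scheme your organization already uses.

```python
# Minimal sketch: role-based filtering applied *before* documents reach the model,
# so AI outputs inherit the same access boundaries as the underlying data.
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    team: str   # owning team, e.g. "oncology" or "regulatory" (illustrative)
    text: str

@dataclass
class User:
    user_id: str
    teams: set[str]  # teams whose data this user is authorized to access

def authorized_context(user: User, documents: list[Document]) -> list[Document]:
    """Return only the documents whose owning team the user belongs to."""
    return [d for d in documents if d.team in user.teams]

def build_prompt(user: User, question: str, documents: list[Document]) -> str:
    """Assemble a prompt that contains only data this user may see."""
    context = "\n\n".join(d.text for d in authorized_context(user, documents))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
```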

How is the AI itself governed and monitored?

The AI within enterprise applications isn’t just a passive observer; it actively shapes outputs and recommendations based on the data it processes. Given this, AI systems must be subject to rigorous governance protocols. Monitoring AI activity is vital to ensure that it operates within predefined limits. For example, larger organizations will need audit trails to track the decisions made by AI models, ensuring that every action is explainable, accountable, and compliant with regulatory frameworks. Artos works to ensure these types of rules are respected.

AI systems should be governed with the same attention to detail as other parts of enterprise software: logging activities, restricting access to certain features, and ensuring the model's decision-making aligns with the organization's objectives. Without adequate governance, AI’s autonomy can lead to unintended or even harmful outcomes.
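As a simple illustration of what such logging might look like, the sketch below records every model call with the requesting user, the model used, and hashes of the input and output, so interactions can be audited later without retaining raw text; the field names and log destination are assumptions.

```python
# Minimal sketch: an append-only audit trail for AI activity, recording who
# called the model, when, and hashes of the input/output for later review.
import hashlib
import json
import time

AUDIT_LOG = "audit_trail.jsonl"  # hypothetical append-only log file

def log_ai_call(user_id: str, model_name: str, prompt: str, response: str) -> None:
    """Append one audit record per model invocation."""
    record = {
        "timestamp": time.time(),
        "user_id": user_id,
        "model": model_name,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
```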

Conclusion

Evaluating the security of enterprise AI applications requires a deep understanding of both the AI models they are built on and the enterprise systems that enable their use. Closed- and open-source AI models pose different security concerns, and it’s important to know how your data is being used, stored, and accessed. At the enterprise application level, governance and permissions are crucial to ensuring that sensitive information is protected and that AI is accountable for its actions.

By asking the right questions and implementing strong governance protocols, organizations can safely and effectively deploy enterprise AI applications, unlocking their potential while maintaining security and compliance.
