Business AI Preparedness: The Race to Leverage Secure AI

With the meteoric rise of large language models (LLMs) in the 2020s, companies across a range of industries are rushing to harness the potential of this frontier AI, knowing that if they don’t, their competitors will. 

The most prominent example of this boom is ChatGPT, the popular generative AI chatbot from OpenAI, which is estimated to have reached 100 million monthly active users within two months of launch, making it the fastest-growing consumer application of all time. But this explosion of interest and investment is accompanied by growing concerns among companies and governments about the safety, security, and trustworthiness of LLMs. In this blog post, we analyze the benefits and risks of LLMs through the lens of our recent survey and discuss how companies can address those risks to leverage AI without compromising privacy, security, and trust.

What Are LLMs?

LLMs are a powerful new type of AI built on neural networks, a class of machine learning (ML) model that learns billions of parameters by training on vast amounts of data. This training enables LLMs to understand, summarize, generate, and predict natural language. Generative AI applications such as ChatGPT, mentioned above, are built on top of LLMs. The effectiveness of an LLM depends on the quantity and quality of the data available for training.
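
To make this concrete, here is a minimal sketch of text generation, assuming the open-source Hugging Face transformers library and the small, publicly available GPT-2 model. GPT-2 is chosen purely for illustration; it is orders of magnitude smaller than frontier LLMs, but the underlying principle of next-token prediction is the same.

```python
# Minimal illustration of LLM text generation using the open-source
# Hugging Face `transformers` library (pip install transformers torch).
from transformers import pipeline

# GPT-2 has ~124M parameters; frontier LLMs have billions or more,
# but both work by predicting the most likely next tokens.
generator = pipeline("text-generation", model="gpt2")

prompt = "Large language models can help businesses by"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(outputs[0]["generated_text"])
```

The same interaction pattern applies to hosted LLMs: an application sends a prompt and receives generated text, which means everything in the prompt leaves the user's environment. That detail matters for the privacy discussion below.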

Our LLM Survey

Recently, Inpher reached out to our community of customers, partners, and innovators to get their feedback on the use and challenges of LLMs in their environments. The following are some of our notable findings.

74% of respondents use LLMs

The rapid adoption of LLMs by individuals and businesses is underscored by our survey results: approximately three-quarters of respondents are currently using LLMs.

Moreover, 100% of respondents anticipate a positive effect on business from LLMs. This positive perception of the potential of LLMs is backed by research. For example, a Harvard-led study found that consultants at the well-known Boston Consulting Group (BCG) improved their performance by 40% on average when using GPT-4, widely considered the most capable LLM at the time. One of the key findings of the study, according to BCG's François Candelon, is the importance of clean, differentiated data for companies to use in their AI applications.

It is worth noting that, according to our survey, people most often access LLMs via public websites. This fits with an observation from Devvret Rishi, Chief Product Officer at Predibase, that LLMs excel at answering generic questions grounded in publicly available data, which covers “90% of the journey.” The last mile, however, is the most formidable hurdle: providing the contextual insights that truly matter for a business. This is likely to involve sensitive information, such as proprietary data and IP, as the sketch below illustrates.
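
As a purely hypothetical sketch of that last mile, consider what it takes to ground a question in proprietary context. The document store, prompt format, and data below are invented placeholders, not a real product or API:

```python
# Hypothetical sketch: grounding an LLM prompt in proprietary context.
# The internal "document store" and its contents are invented examples.

INTERNAL_DOCS = {
    "q3_pricing": "Confidential: enterprise renewals get a 12% discount.",
}

def build_prompt(question: str, doc_key: str) -> str:
    """Prepend proprietary context to the user's question."""
    context = INTERNAL_DOCS[doc_key]
    return f"Context (confidential):\n{context}\n\nQuestion: {question}"

prompt = build_prompt("What renewal discount should we quote?", "q3_pricing")
print(prompt)
# Sending this prompt to a public LLM endpoint would expose the
# confidential context -- exactly the risk respondents flag below.
```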

70% of respondents have concerns around LLMs

Despite recognizing the enormous potential of LLMs, most respondents are aware of risks as well. 

Quantity and quality of data are crucial for effective LLMs, and some of the industries that stand to benefit most, such as banking and healthcare, handle highly sensitive information. So it makes sense that, according to our survey, privacy leakage is the top concern, followed by security risks and IP leakage. Moreover, ensuring privacy is the area where respondents are most likely to look for partners, underscoring that they do not currently have these capabilities in-house.

In fact, 38% of respondents are already aware of security breaches in their industry. A recent example is Samsung, which in May 2023 became the latest in a series of companies to crack down on AI use after discovering that an engineer had accidentally leaked sensitive internal source code by uploading it to ChatGPT. Samsung responded by banning the use of all generative AI tools.

Many publicly available LLMs are explicit about the lack of data privacy in their services. For example, Pi.ai states in its Terms of Service that it will use users’ data for training.

37% of respondents have no corporate policy around LLMs

There is a risk that, in their hurry to tap the benefits of LLMs and frontier AI, companies will leave privacy and security concerns for later. More than a third of respondents to our survey have no corporate policy around LLMs, even though 73% are subject to regulatory requirements or security standards when using LLMs.

Trustworthy AI promotes building and deploying AI systems that are reliable, ethical, transparent, and accountable; we discussed this concept in more detail in a previous blog post. Governments around the world are responding to the need to create guardrails around new AI technologies to ensure trustworthiness, among other priorities. The following are two prominent recent examples.

President Biden’s Executive Order

In October 2023, President Biden issued an Executive Order on safe, secure, and trustworthy AI. The Order requires developers of the most powerful AI systems to share their safety test results and other critical information with the U.S. government, to help ensure these systems are safe, secure, and trustworthy before they are released to the public.

The Order also funds a Research Coordination Network to advance research in privacy-preserving technologies, and it calls for developing guidelines for, and promoting the adoption of, those technologies.

The accompanying White House fact sheet notes that the Executive Order is intended to support and complement Japan’s leadership of the G-7 Hiroshima Process, the UK Summit on AI Safety, India’s leadership as Chair of the Global Partnership on AI, and ongoing discussions at the United Nations.

AI Safety Summit

In early November 2023, the U.K. Prime Minister hosted a summit on AI safety. Twenty-eight countries, as well as the EU, signed the Bletchley Declaration at the Summit, with the goal of fostering international collaboration. The signatories’ stated agenda is to identify AI risks of shared concern and to build risk-based policies across their countries, collaborating as appropriate.

Conclusion

Businesses do not have to throw caution (and security and privacy) to the wind in their hurry to start benefiting from LLMs. Neither do they have to ban LLMs outright. A third option is to pair LLMs with innovative privacy-enhancing technologies (PETs) that enable companies to securely harness the advantages of LLMs while complying with regulations; the sketch below gives a flavor of this approach.
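
To give a flavor of the direction, here is a deliberately simple sketch of one privacy-preserving pattern: scrubbing obvious identifiers from a prompt before it leaves the company’s environment. This regex-based redaction is only an illustration, not Inpher’s technology; production-grade PETs such as secure multiparty computation and fully homomorphic encryption offer far stronger guarantees.

```python
# Illustrative only: redact obvious identifiers client-side before a
# prompt is sent to an external LLM. Real PETs are far stronger.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Draft a reply to jane.doe@example.com about SSN 123-45-6789."
print(redact(prompt))  # Draft a reply to [EMAIL] about SSN [SSN].
```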

PETs are most familiar when applied to longer-standing machine learning systems. But because frontier AI raises many of the same concerns as older ML, PETs have the potential to provide solutions here as well. We will discuss this promising direction further in future blog posts.