The Privacy-Utility Trade Off with Generative AI

The Privacy-Utility Trade Off with Generative AI

Authors:

The rapid rise of generative Artificial Intelligence (AI) based on Large Language Models (LLMs), such as OpenAI’s ChatGPT and Google Bard, promises substantial business benefits – and substantial risks. In our recent survey, 70% of respondents have concerns around LLMs and 38% of respondents are already aware of security breaches in their industry. Especially when sensitive data and intellectual property are involved, inputting prompts to the AI service in a way that is not secure raises serious concerns about trust, privacy, and compliance. 

The risks of unsecured prompts are as multifaceted as the challenges and flaws of generative AI.

  • Data exposed to the AI service provider
    In May 2023, Samsung became the latest in a series of companies to ban the use of ChatGPT and other AI-powered chatbots after an engineer input sensitive internal source code.
    One of Samsung’s primary concerns was that data inputs get stored on servers owned by the companies operating the AI services, like OpenAI, Microsoft, and Google, without an easy option to access and delete them.
    .
  • Data accidentally revealed in AI outputs
    A common rule of thumb for generative AI is, Don’t input anything into a chatbot that you don’t want third parties to read. Many publicly available LLMs,
    including the free version of OpenAI (ChatGPT), note in their Terms of Service that users’ content will by default be used for training. When users input sensitive data, it could be returned by the LLM in output elsewhere.
    Generative AI services can mitigate this issue with data sanitization and the option for users to opt out of having their data included in the training model. Restrictions within the system prompt about the types of data the LLM should return can also help. But the unpredictability of LLMs means such restrictions might not always work.
    .
  • Data deliberately breached
    Sensitive input data can leak through a failure or a deliberate breach, such as a man-in-the-middle attack that makes use of security holes in LLM designs. Those security holes exist: for example, a few months ago
    OpenAI repaired a flaw that revealed parts of users’ conversations with the chatbot as well as, in some cases, payment details.
    Prompt injections are considered a key security weakness of generative AI. Using malicious inputs, prompt injections subvert AI models: direct injections overwrite system prompts, while indirect ones manipulate inputs from outside sources like websites. Security researchers have demonstrated how prompt injections could be used to steal data. 

The privacy-utility tradeoff

The Samsung case – in which various employees in South Korea used ChatGPT in violation of company directives – demonstrates the difficulty of preventing employees from taking advantage of the benefits of LLMs. Research from Add People shows that one-third of UK workers, for example, are using tools such as ChatGPT without corporate knowledge or permission.

Companies that don’t ban LLMs altogether may limit the inputs that employees can use – or even leverage a plug-in that blocks users from inputting sensitive data. Yet it is intuitive that restricting employees’ inputs also limits the usefulness of LLMs. This privacy-utility tradeoff is present whenever risk mitigation involves restricting user prompts.

Leverage LLMs privately, securely and with complete autonomy

Security technology must keep pace with the development of LLMs. A way out of the privacy-utility trade off is to secure users’ inputs. 

For example, Inpher SecurAI leverages Trusted Execution Environments (TEEs) to ensure that both the prompt and the output are secured. Data is securely communicated such that it is provably not visible to either the system administrator of the TEE or to any other external party (including Inpher). Aligning with ethical and regulatory demands, Inpher enables organizations to harness the potential of LLMs for sensitive data. For more information, read our white paper about enhancing LLMs with SecurAI.