An old friend—let’s call him "Danny Boy" (we’ve been friends a long time; he’s not actually old)—has been a serial entrepreneur for years. Danny Boy has always had a knack for spotting solutions to problems that others, myself included, just don’t see.
We caught up recently on a snowy mountain, having both parachuted in for a charity event. While waiting for our pick-up, we started talking about a mutual friend—let’s call him "The Colonel"—who heads up the sales function in Europe for a software services company. The Colonel was eager to expand beyond Europe into new markets but was running into the usual hurdles: logistics, language barriers, and unfamiliar cultures.
Naturally, the conversation turned to AI and large language models (LLMs) and how they’re reshaping the world. We wondered: would this tech help or hinder The Colonel’s global domination ambitions?
I was convinced it had to be a force for good. Imagine the efficiency gains—The Colonel could instantly translate request-for-proposal (RFP) bid documents from any language into English, accelerating his company’s ability to compete globally.
Danny Boy, however, looked horrified.
He pointed out that one of the key reasons organizations are banning these tools is exactly that use case. Taking an RFP written in, say, German and uploading it to ChatGPT for translation breaches all sorts of privacy laws—and exposes you and your company to serious sanctions.
A bit later, holed up in our alpine hut sipping hot wine, I fired up ChatGPT and asked it (I think it’s an “it,” though who really knows these days when it comes to pronouns) the following:
In 500 words, can you outline the data and privacy risks associated with using ChatGPT and other LLMs when organisations ask them to translate company-privileged information?
Here's the response:
Data and Privacy Risks of Using LLMs for Translating Privileged Information
Organizations increasingly turn to large language models (LLMs) like ChatGPT for tasks such as translating internal documents. While these models offer speed and cost-efficiency, they pose significant data and privacy risks, especially when handling company-privileged or confidential information.
1. Data Exposure Risk
One of the primary concerns is inadvertent data exposure. LLMs process data by transmitting inputs to centralized servers (cloud infrastructure) where the model operates. Even if providers claim strong security, the act of transmitting sensitive company information outside the organization’s own controlled environment inherently increases the risk of interception or unauthorized access—whether through cyberattacks, technical faults, or insider threats within the service provider.
2. Lack of True Data Isolation
Although reputable LLM providers often state that they do not use specific customer data to retrain their models, the complexity of cloud-based systems means absolute data isolation is difficult to guarantee. Misconfigurations or unclear data-handling policies may lead to inadvertent retention or logging of sensitive inputs, potentially exposing privileged material.
3. Jurisdictional and Regulatory Compliance
When data is sent to an LLM service, it may be processed in multiple jurisdictions, depending on the provider’s data center locations. This can trigger compliance issues with regulations such as the GDPR (EU), CCPA (California), or other data protection laws, particularly concerning cross-border data transfers and the duty to maintain data residency. Organizations may unknowingly breach legal obligations if they do not verify where and how their data is processed.
4. Ambiguity in Terms of Service
LLM service agreements often contain broad language regarding data use, disclaiming responsibility for certain risks. This lack of precision can leave organizations exposed if disputes arise over misuse of privileged information. Without explicit contractual safeguards, there is limited recourse if confidential data is compromised.
5. Risk of Misinterpretation or Loss of Context
LLMs may lack contextual understanding of sector-specific or legally sensitive terms, increasing the risk of inaccurate translation. For privileged company data, especially in legal, financial, or technical domains, a mistranslation can result in operational errors, legal liabilities, or reputational damage.
6. Insider Threats and Supply Chain Risks
Even with strong encryption and security measures, LLM providers are part of a larger supply chain. Insider threats (e.g., employees at the provider) and third-party service dependencies (e.g., subcontractors managing infrastructure) expand the attack surface and the potential for breaches.
7. Long-Term Data Retention Uncertainties
Questions often remain about how long LLM providers retain input data and under what conditions it may be archived, deleted, or repurposed. Lack of transparency in data retention practices raises concerns about long-term control over privileged company information.
"There you are!" said Danny-boy.
So, what is the solution to this problem if you're a CISO or head of risk for an organisation?
I pondered a response with the sort of irritation held by a man who is being ignored at the bar (I was).
Asking employees not to use tools that, from their perspective, make their lives easier is an uphill task, and I suspect, if history is anything to go by, it will ultimately prove unsuccessful!
It's at this point that Danny Boy's eyes lit up. "The solution," he said, "is an AI Data Firewall - and if you want to know what that is, you'll have to look at this..."
https://www.contextul.io/datafirewall
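I won't speak for what Contextul has actually built, but the general idea behind an AI data firewall is easy to sketch: intercept text before it leaves the organisation, swap the sensitive fields for placeholders, send only the sanitised version to the external LLM, and restore the originals in the translated output. Here's a minimal, purely illustrative Python sketch of that flow. The pattern names and regexes are hypothetical stand-ins; a real product would use proper PII and entity detection, not three regular expressions:

```python
import re

# Hypothetical redaction patterns; a real firewall would use proper
# PII/NER detection, not a handful of regexes.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "IBAN":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "PHONE": re.compile(r"\+\d{1,3}[\s\d-]{7,14}\d"),
}

def redact(text: str) -> tuple[str, dict[str, str]]:
    """Swap sensitive substrings for placeholders before the text leaves
    the organisation. Returns the redacted text plus a mapping so the
    placeholders can be restored in the translated result."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            placeholder = f"[{label}_{i}]"
            mapping[placeholder] = match
            text = text.replace(match, placeholder, 1)
    return text, mapping

def restore(translated: str, mapping: dict[str, str]) -> str:
    """Put the original values back into the translated text."""
    for placeholder, original in mapping.items():
        translated = translated.replace(placeholder, original)
    return translated

safe_text, secrets = redact("Kontakt: anna.schmidt@example.de, Tel. +49 30 1234567")
# safe_text can now be sent to an external LLM for translation;
# `secrets` never leaves your own infrastructure.
```

The point of the design is that the placeholder mapping stays inside the organisation's own boundary, so the external model only ever sees sanitised text. It addresses the exposure and retention risks above, though not, of course, the translation-quality ones.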
Broadsword out!
About The Impact Team
The Impact Team is a digital transformation consultancy that partners with organisations to enhance their digital products and services. Their expertise encompasses advising on team structures, managing design operations, and implementing governance frameworks, all with a focus on customer-centric solutions and effective execution.
Recognising the importance of continuous improvement, The Impact Team integrates change within organisations to swiftly respond to evolving market demands. They foster a culture of innovation and adaptability, embedding these principles into the organisational fabric.
In the realm of cybersecurity, they employ advanced technologies and best practices to protect data, systems, and networks from malicious attacks and vulnerabilities. This approach ensures that digital assets remain secure and resilient against evolving cyber risks.
The Impact Team operates globally, with offices in London, New York, Hong Kong, Tallinn, and Dubai, enabling them to deliver tailored digital transformation services across various regions.
Their mission is to empower organisations to thrive in the digital age while fostering a sustainable and responsible future. They are committed to providing ESG-friendly solutions that drive meaningful change and create value for clients, society, and the planet.
Through their comprehensive approach, The Impact Team aims to transform businesses by fine-tuning operations to achieve tangible, impactful results, ultimately contributing to business growth and success.