Samsung has imposed a temporary ban on the use of generative AI tools, including ChatGPT, Google Bard, and Bing, following an internal data leak in April 2023.
The leak has been traced to employees inadvertently sharing sensitive data with ChatGPT, most likely while using the tool to generate code and speed up engineering tasks. In response, Samsung has declared that it is preparing its own internal artificial intelligence tools; the company does not yet have a generative AI product of its own.
According to Samsung (in communication with TechCrunch): “The company is reviewing measures to create a secure environment for safely using generative AI to enhance employees’ productivity and efficiency. However, until these measures are ready, we are temporarily restricting the use of generative AI through company devices.”
Matt Fulmer, Cyber Intelligence Engineering Manager at Deep Instinct, explains to Digital Journal why this incident is a concern.
Fulmer begins by assessing the background to the issue: “Samsung’s decision to ban usage of ChatGPT within their environments after an ‘accidental’ exposure incident occurred while using the LLM (Large Language Model) generative AI highlights the growing need for organizations to understand how these platforms work on the backend.”
Fulmer attributes the issue to human error, particularly the way people interact with novel technology. Here he assesses: “In this situation, employees not understanding how LLMs are trained resulted in the system giving access to the proprietary data outside of the organization.”
Outlining how the process works, Fulmer says: “To train any kind of LLM you need a data set to ‘feed’ the system. That data set can be (and is) pulled from the requests/conversations held with the LLM and then fed in to create a new model with better accuracy.”
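Fulmer’s point can be illustrated with a short sketch. The endpoint, model name, and code snippet below are hypothetical placeholders, not any real provider’s API: the illustration is simply that whatever an engineer pastes into a hosted chatbot leaves the corporate network as an ordinary web request, which the provider may retain and later fold into a training data set.

import requests

# Illustrative sketch only: "api.example-llm.com" and "example-model" are
# hypothetical placeholders, not a real provider API. The point is that
# anything pasted into a hosted chatbot travels off-premises as an
# ordinary HTTPS request.

PROPRIETARY_SNIPPET = """
def calibrate_sensor(raw_readings):
    # internal, unreleased calibration logic
    return [r * 0.98 + 1.2 for r in raw_readings]
"""

response = requests.post(
    "https://api.example-llm.com/v1/chat",   # hypothetical endpoint
    json={
        "model": "example-model",            # hypothetical model name
        "messages": [
            {"role": "user",
             "content": "Review this code for bugs:\n" + PROPRIETARY_SNIPPET},
        ],
    },
    timeout=30,
)
print(response.status_code)

Once the request is made, the proprietary code is governed by the provider’s data-retention and training policies rather than the company’s own controls, which is exactly the exposure Fulmer describes.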
This leads Fulmer to conclude: “Companies need to be wary of this as many are using LLMs to spot-check code or proofread documents to suggest changes.”
Returning to the Samsung incident, Fulmer drills down to the core concern: “The concern comes not only from a potential leak of intellectual property, but also that someone could find information which exposes a weakness within a company and then use generative AI to create a payload which specifically targets that company.”
Is this likely? It appears so, notes Fulmer: “Threat actors are combing the LLMs regularly to try and find information about companies which were accidentally slipped in requests (such as emails being included in documents which were spot-checked), so that they can leverage that information and payloads generated to efficiently launch targeted attacks.”
What should be done for now within enterprise environments?
Assessing the risk for technology-driven enterprises, Fulmer recommends: “To start, follow Samsung’s lead and implement a security lockdown to prevent members of the organization from using LLMs to try and ‘simplify’ their jobs. In addition, create the necessary security policies to outline an AUP (Acceptable Usage Policy) and clearly define the penalties for violation of the AUP.”
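One minimal way to begin such a lockdown, assuming outbound traffic is routed through a proxy or filter the organization controls, is a deny-list check against known generative-AI hosts. The sketch below is illustrative only; the host list and the is_request_allowed helper are assumptions for the example, and a real deployment would pair such a control with the AUP Fulmer describes.

from urllib.parse import urlparse

# Example deny list of generative-AI services; in practice this would be
# maintained as policy changes, alongside the Acceptable Usage Policy.
BLOCKED_GENAI_HOSTS = {
    "chat.openai.com",
    "bard.google.com",
    "www.bing.com",
}

def is_request_allowed(url: str) -> bool:
    """Return False when the destination is a known generative-AI service."""
    host = urlparse(url).hostname or ""
    return not any(host == h or host.endswith("." + h) for h in BLOCKED_GENAI_HOSTS)

# A request to a blocked chatbot is denied; ordinary business traffic passes.
print(is_request_allowed("https://chat.openai.com/backend/conversation"))  # False
print(is_request_allowed("https://intranet.example.com/wiki"))             # True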
In conclusion, Fulmer takes a nuanced view of technologies like ChatGPT: “This technology can be a boon to society but right now it’s a burden on anyone within security given the real and persistent threat it has become.”