i³ Threat Advisory: ChatGPT and AI Chat Tools

ACT NOW TO MITIGATE RISK

  1. Apply application control on company-issued endpoint devices (see the sketch after this list).
  2. Provide regular employee training on the safe use of AI chat-based tools.
  3. Monitor confidential document usage alongside such tools.
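
To make the first item concrete, the minimal sketch below (Python, using the psutil library) inventories running processes against a hypothetical blocklist of AI chat desktop clients. The executable names are illustrative assumptions rather than a vetted list, and the sketch is a detection-oriented companion to application control, not an enforcement mechanism.

    # Minimal sketch: flag running processes that match a blocklist of AI chat
    # desktop clients. The names below are illustrative assumptions; replace
    # them with the blocked/approved application lists defined by your own
    # application control policy.
    import psutil

    BLOCKED_AI_APPS = {"chatgpt.exe", "claude.exe"}  # hypothetical executable names

    def find_blocked_ai_apps():
        """Return (pid, name) pairs for running processes on the blocklist."""
        hits = []
        for proc in psutil.process_iter(attrs=["pid", "name"]):
            name = (proc.info.get("name") or "").lower()
            if name in BLOCKED_AI_APPS:
                hits.append((proc.info["pid"], name))
        return hits

    if __name__ == "__main__":
        for pid, name in find_blocked_ai_apps():
            print(f"Blocked AI application running: {name} (pid {pid})")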

INTRODUCTION

Artificial Intelligence (AI) has been growing in popularity since one of its first commercial applications in 1980, a product known as XCON (expert configurer). Another early milestone was IBM's Deep Blue, which in 1997 became the first computer to defeat a reigning world chess champion in a match.

As AI becomes increasingly integrated into everyday products and services, such as operating systems and browsers, many businesses are concerned about the potential implications. In a 2023 online survey of American full- and part-time workers, 57% reported having tried ChatGPT, while 16% regularly used it at work.

ChatGPT is a large language model created by OpenAI that is designed to understand and respond to natural language. It is trained on a wide variety of texts – including books, articles, and websites – which allows it to understand and answer questions on different topics and learn from conversations.

This Threat Advisory highlights the risks associated with the use of ChatGPT and other AI chat tools and provides steps for early detection and mitigation of those risks.

OPERATIONAL SCENARIOS

The DTEX i³ team often hears from organizations concerned about the potential risks of AI tools when used by employees who access and manage sensitive information about products in development. A growing concern is the control and safety of data (such as source code) once it has been entered into an AI tool like ChatGPT. In one instance, the DTEX i³ team observed a developer using an AI tool and source code in an attempt to develop a Remote Code Execution (RCE) backdoor into their organization’s product. The finding was passed on to the organization for further investigation.

Knowing which tools are being used in the environment, and who is using them, allows organizations to deliver targeted user training and maintain the security of source code.

Other Risks Associated with ChatGPT and Similar Tools:

Tools such as OpenAI’s ChatGPT ingest a user’s input and store that data on the provider’s servers. The data may be reviewed by OpenAI staff and reused as part of the model’s training process.

The DTEX i³ team has observed a number of incidents where employees or contractors have entered confidential business information or source code into tools like ChatGPT to help with work-related tasks. This poses an increased risk of accidental data loss, and a number of organizations have begun heightened monitoring and alerting on this type of behavior.
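
To illustrate what this kind of monitoring can look like in practice, the minimal sketch below screens a captured prompt for obvious confidentiality markers and source-code patterns before it leaves the endpoint. The markers and patterns are illustrative placeholders rather than a complete data loss prevention policy, and how prompts are captured (browser extension, endpoint agent, or proxy) is assumed to be handled elsewhere.

    # Minimal sketch: screen outbound text (e.g., a prompt captured by an
    # endpoint agent or proxy) for obvious confidentiality markers before it
    # reaches an AI chat tool. The markers and patterns are illustrative
    # placeholders, not a complete DLP policy.
    import re

    CONFIDENTIAL_MARKERS = ["confidential", "internal use only", "proprietary"]
    SOURCE_CODE_PATTERNS = [
        re.compile(r"\bdef \w+\(.*\):"),                        # Python function definition
        re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # embedded private key
    ]

    def screen_prompt(text):
        """Return a list of reasons the text looks sensitive (empty if none)."""
        reasons = []
        lowered = text.lower()
        for marker in CONFIDENTIAL_MARKERS:
            if marker in lowered:
                reasons.append(f"contains marker: {marker!r}")
        for pattern in SOURCE_CODE_PATTERNS:
            if pattern.search(text):
                reasons.append(f"matches source-code pattern: {pattern.pattern}")
        return reasons

    if __name__ == "__main__":
        sample = "CONFIDENTIAL roadmap\ndef decrypt_license(key):"
        for reason in screen_prompt(sample):
            print("Flagged:", reason)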

OpenAI describes how input data is used on its platform as follows: “As part of our commitment to safe and responsible AI, we review conversations to improve our systems and to ensure the content complies with our policies and safety requirements. Your conversations may be reviewed by our AI trainers to improve our systems. No, we are not able to delete specific prompts from your history. Please don’t share any sensitive information in your conversations.”

EARLY DETECTION AND MITIGATION

DTEX Intelligence Release 6.10.0 contains a new Data Enrichment category designed to help DTEX insider risk practitioners detect the use of ChatGPT and other AI chat tools.

Practitioners within each organization are best placed to configure and tune detection logic for their environment. The AI chat-based technology space is expanding rapidly, covering not just text-based solutions but also image, video, and sound. The DTEX i³ team can be contacted for investigation support or further assistance.

How to Identify and Mitigate the Use of AI Chat Tools 

This content is classed as “limited distribution” and is only available to approved insider risk practitioners.
Log in to the customer portal to access the indicators, or contact the i³ team to request access.

DTEX i³ RECOMMENDATIONS

  • Review current acceptable use policies regarding the use of AI-based tools. Where gaps exist, add specific language indicating what level of usage by employees, if any, is acceptable.
  • Monitor activity by any means available, or by implementing the detections recommended in the full iTA-23-02 with the DTEX InTERCEPT platform. The tagging rules can then be leveraged for anomaly detection alerting (e.g. user vs. user or user vs. peer) or threshold-based alerting (see the sketch after this list). Tagging can also help quantify current exposure to the problem.
  • Quantify current exposure to the problem by determining which users may be accessing such tools. Identify common usage patterns and impacted user populations.
  • Implement application control to limit the number of AI tools in use. Once the organization has an approved set of AI-based tools, this helps focus activity monitoring.
  • Leverage ‘teachable moments’ to automate user education around violations of updated policies. With this capability, users can be emailed directly and automatically, without intervention by insider risk analysts.
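
As a companion to the monitoring recommendation above, the minimal sketch below shows one way threshold-based alerting could look independent of any particular product. It counts per-user, per-day requests to a small list of AI chat tool domains in a simple comma-separated proxy log (date, user, domain per line) and flags users above a daily threshold; the log format, domain list, and threshold are all illustrative assumptions.

    # Minimal sketch of threshold-based alerting: count per-user, per-day
    # requests to AI chat tool domains in a proxy log and flag users above a
    # threshold. The log format (date,user,domain per line), the domain list,
    # and the threshold are illustrative assumptions; in practice this logic
    # would live in your monitoring platform.
    import csv
    from collections import Counter

    AI_CHAT_DOMAINS = {"chat.openai.com", "chatgpt.com", "claude.ai", "gemini.google.com"}
    DAILY_THRESHOLD = 20  # alert when a user exceeds this many requests in a day

    def flag_heavy_users(log_path):
        """Return (date, user, count) tuples exceeding DAILY_THRESHOLD."""
        counts = Counter()
        with open(log_path, newline="") as handle:
            for date, user, domain in csv.reader(handle):
                if domain.strip().lower() in AI_CHAT_DOMAINS:
                    counts[(date, user)] += 1
        return [(d, u, n) for (d, u), n in counts.items() if n > DAILY_THRESHOLD]

    if __name__ == "__main__":
        for date, user, count in flag_heavy_users("proxy_log.csv"):
            print(f"{date}: {user} made {count} requests to AI chat tools")

The same per-user counts could also feed the anomaly detection alerting mentioned above by comparing each user against a peer-group baseline rather than a fixed threshold.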

CONCLUSION

AI-based tools are not going to diminish in use, particularly in corporate environments. They offer increased productivity, accuracy, and efficiency, allowing employees to focus more on complex tasks. The ability to control and monitor the use of AI tools will be critical for organizations to protect themselves going into 2024 and beyond.

The DTEX InTERCEPT platform gives organizations the user-based awareness to stay on top of current and emerging technologies and how they are being used.

INVESTIGATIONS SUPPORT AND IMPLEMENTATION

For intelligence or investigations support on ChatGPT usage or rule configuration, contact DTEX i³. Extra care should be taken when implementing behavioral indicators in large enterprise deployments.

RESOURCES

ChatGPT FAQs

The OpenAI ChatGPT website and frequently asked questions (FAQs) page will provide readers with the most up-to-date information on the application’s terms of use and development.

ChatGPT Data Breaches: Full Timeline Through 2023

This is one example of many articles detailing a data breach that affected ChatGPT. Similar platforms have been affected by breaches as well, and there are likely many more that go unreported or receive no media coverage.

i³ Mission

DTEX i³ Mission Statement

DTEX i³’s mission is to uplift enterprise security by proactively detecting and mitigating insider risks.

Combining 20 years of insider risk experience with our potential risk indicators, we empower organizations to stay resilient and maintain control of their public narrative and global success.

Importantly, DTEX i³ often discovers wider security threats that extend beyond insider risks. Such external threats are typically the outcome of an insider incident, not the intention of the insider.

In both cases, DTEX i³ prioritizes detection and deterrence, helping organizations move beyond reactive incident response.

Contact i³