A new approach for GenAI risk protection


When generative AI (GenAI) hit the consumer market with the release of OpenAI’s ChatGPT, users worldwide flocked to the product and began experimenting with its capabilities across industries. The release also sent instant panic through the hearts of information security professionals, whose job is to protect organizations from risks such as the loss or theft of sensitive data: personally identifiable information (PII), protected health information (PHI), sensitive corporate data and intellectual property.

Before we jump into protection mode, we must first ask ourselves: “What is it we are trying to protect with GenAI?” I see three primary objectives: 1) preventing the leakage of sensitive corporate data and intellectual property, 2) safeguarding PII and PHI, and 3) blocking malware, maliciously generated code and similar threats.

Traditional enterprise data loss prevention (DLP) tools (such as those from Fortra, Symantec, Netskope, Trellix and Microsoft) have been around for years, but they are expensive, cumbersome to implement and require considerable care and feeding from IT professionals to be effective in an organization. They offer comprehensive solutions typically built around data-centric and network-centric DLP, which integrate with data sources and monitor the network and its egress points. As a result, only large organizations with plenty of resources can realistically deploy legacy DLP tools.
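To make the egress-monitoring idea concrete, here is a minimal sketch of the kind of check a DLP layer might apply to an outbound GenAI prompt. The pattern names and regexes are illustrative assumptions, not any vendor’s actual rule set; real products use far richer detection (fingerprinting, exact data matching, machine learning classifiers).

```python
import re

# Hypothetical PII patterns a DLP egress filter might look for in a prompt.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of the PII patterns found in an outbound prompt."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(prompt)]

def allow_egress(prompt: str) -> bool:
    """Allow the prompt to leave the organization only if no pattern matches."""
    return not scan_prompt(prompt)
```

A harmless prompt like “Summarize this meeting agenda” passes, while one containing an SSN or email address is flagged. The point is not the regexes themselves but where the check sits: between the user and the GenAI endpoint, at the egress point the legacy tools monitor.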
