Where in the world is your AI? Identify and secure AI across a hybrid environment

Artificial intelligence is quickly becoming an integral component of daily business operations — by 2026, more than 80% of enterprises will have used generative AI APIs or deployed AI-enabled applications, according to Gartner.

Most of this activity is happening in cloud and SaaS applications such as Workday, Salesforce, and Office 365. Amazon Web Services, for example, has helped over 100,000 organizations adopt AI-enabled machine learning to support contact centers, virtual assistants, investment analyses, legal and professional services, medical diagnostics, manufacturing, and more.

As more AI features are added to SaaS and cloud apps, it becomes more and more important for CISOs to find and bring these apps into their governance, risk, and compliance (GRC) programs. And that means taking a closer look at data usage and protection strategies, experts say.

“Your AI strategy is as good as your data strategy,” says Brad Arkin, chief trust officer at Salesforce. “Organizations adopting AI must balance trust with innovation. Tactically, that means companies need to do their diligence — for example, taking the time to classify data and implement specific policies for AI use cases.”

Easier said than done, given how deep these AI layers extend below the surface. For example, Salesforce, which attributed an 11% increase in revenue in fiscal 2024 to its unified Einstein 1 AI platform, integrates AI into pre-built modules supporting customer service, commerce, marketing, and sales. These modules integrate with fourth-party AI apps such as chatbots that connect to secure data lakes for retrieval — or optionally to on-premises databases, depending on configuration.

“There are open-source, free, and paid generative AI tools — thousands of them — and businesses are innovating to make them useful in new ways,” says Matthew Rosenquist, CISO and cybersecurity strategist at Mercury Risk and Compliance. “You want to write contracts, review stacks of resumes, draft status reports, create media-rich marketing content? There’s a gen AI for that, and so much more. Every department will want to start using this new disruptive tech,” Rosenquist says. “The question is, what sensitive data and systems are being processed or exposed? And what other fourth-party systems does the AI application connect to? These are tremendous blind spots.”

Rosenquist points to a past client that wanted to replace its human help desk with an AI chatbot for password resets. That bot, he says, would validate the user and reset corporate passwords for the IT department — a huge time-saver, but the system would require administrative access to sensitive credential systems that would be exposed to the internet without thorough testing, vetting, and protection. “Disruptive technology is powerful, but also comes with equitable risks that must be managed,” he says.

Insecure AI connected to vulnerable systems can cause big problems

Threat vectors such as DNS and APIs that connect to backend or cloud-based data lakes or repositories, particularly over the internet of things (IoT), constitute two major vulnerabilities to sensitive data, adds Julie Saslow Schroeder, a chief legal officer and pioneer in AI and data privacy laws and SaaS platforms. “By putting up insecure chatbots connecting to vulnerable systems, and allowing them access to your sensitive data, you could break every global privacy regulation that exists without understanding and addressing all the threat vectors.”

Solving these issues won’t be easy, she says, and will require the right multidisciplinary expertise, including developers, data scientists, cybersecurity, legal/risk/regulatory compliance, and other groups.

When it comes to assessing AI usage, business units play a key role in shaping AI policy and managing AI risk, says Renee Guttmann, former CISO of Coca-Cola and other Fortune 500 organizations. This includes helping to identify where AI has been adopted. “Initial discovery begins with relationships with the business units to help identify if AI is coming in the back door,” she explains.

To illustrate her point, she refers to an October 2023 Gartner survey of 2,400 global CIOs. In it, 45% of respondents say they are beginning to work with their C-suite peers to bring IT and business staff together to co-lead digital delivery, while 70% say generative AI is a game-changing technology that’s rapidly advancing this democratization of digital delivery beyond the IT function.

SSPM and other tools can help identify and secure AI components

Guttmann also advises CISOs to speak to their security solution providers about functionality that they have within their products to address AI risk. Capabilities like SaaS security posture management (SSPM) can scan SaaS applications and flag AI tools that have been integrated with core SaaS applications to provide visibility into the risk level of each tool as well as the users who authorized it and are actively using it. “This will enable organizations to understand how AI is being used within their organization and whether the AI governance policies of the organization are being followed,” Guttmann says.
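
To make that concrete, the sketch below shows the kind of check an SSPM product automates: enumerating the third-party apps that users have granted OAuth access to in a Microsoft 365 tenant, then flagging names that suggest AI tooling. The Graph API endpoints are real, but the token handling, keyword list, and scope reporting are simplified assumptions for illustration.

```python
# Minimal SSPM-style discovery sketch: list integrated apps in a Microsoft
# 365 tenant and flag likely AI tools, then show which OAuth scopes each
# was granted. Assumes the GRAPH_TOKEN environment variable holds a valid
# Graph API token with Directory.Read.All; the keyword list is illustrative.
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}
AI_KEYWORDS = ("gpt", "openai", "copilot", "gemini", "claude", "ai assistant")

def list_service_principals():
    """Yield every service principal (integrated app) in the tenant."""
    url = f"{GRAPH}/servicePrincipals?$select=id,displayName"
    while url:
        page = requests.get(url, headers=HEADERS, timeout=30).json()
        yield from page.get("value", [])
        url = page.get("@odata.nextLink")  # follow Graph pagination

def flag_ai_apps():
    for sp in list_service_principals():
        if any(k in sp["displayName"].lower() for k in AI_KEYWORDS):
            # Pull the OAuth grants to see what the app can reach.
            grants = requests.get(
                f"{GRAPH}/servicePrincipals/{sp['id']}/oauth2PermissionGrants",
                headers=HEADERS, timeout=30,
            ).json().get("value", [])
            scopes = {g.get("scope", "").strip() for g in grants}
            print(f"{sp['displayName']}: scopes={scopes or 'none granted'}")

if __name__ == "__main__":
    flag_ai_apps()
```

Commercial SSPM tools add the risk scoring and user attribution Guttmann describes on top of this raw inventory.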

Holistically assessing every SaaS, cloud, and third-party instance for sensitive data risk has never been easy given their different configurations and protocols, according to Philip Bues, cloud security research manager at IDC. The addition of AI, he says, makes this even more challenging. “While there’s no automated way to find every instance of AI running in cloud, SaaS, and third-party apps, there are some discovery tools that are better suited for this mission. Logging analytics, for example, can help organizations understand and get a handle on what AI is being used, where, and for what purpose.”
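
As a rough illustration of the logging-analytics approach Bues describes, the sketch below scans a web-proxy log for requests to well-known generative AI endpoints and summarizes who is using which service. The log format and the domain-to-service mapping are assumptions for the example; a real deployment would feed this from a SIEM or proxy export.

```python
# Illustrative log-analytics sketch: count proxy-log requests to known
# generative AI endpoints, per user. The three-column log layout
# (timestamp, user, host) and the domain list are assumptions.
import csv
from collections import Counter

AI_DOMAINS = {
    "api.openai.com": "OpenAI API",
    "chat.openai.com": "ChatGPT",
    "bedrock-runtime.us-east-1.amazonaws.com": "Amazon Bedrock",
    "generativelanguage.googleapis.com": "Google Gemini API",
}

def summarize(proxy_log_path: str) -> Counter:
    hits = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f, fieldnames=["ts", "user", "host"]):
            service = AI_DOMAINS.get(row["host"])
            if service:
                hits[(row["user"], service)] += 1
    return hits

for (user, service), count in summarize("proxy.log").most_common():
    print(f"{user} -> {service}: {count} requests")
```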

It helps to break discovery down into three parts: AI hosted internally, AI as a service, and AI-powered SaaS, says Michael Rinehart, vice president of artificial intelligence at Securiti.ai. “Not only do you have to discover where these assets lie, you need to assess their risk. Yes, you may be uncovering SaaS apps, but you don’t necessarily know if they are AI-powered, and you don’t know the terms of use of how they manage data and AI training,” he notes.
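
One lightweight way to capture Rinehart’s three-part inventory, along with the risk questions he raises, is a simple asset record like the sketch below. The field names and risk checks are illustrative, not drawn from any particular product.

```python
# Illustrative inventory record for discovered AI assets, organized by
# Rinehart's three hosting categories, with the open risk questions
# (terms of use, training behavior) tracked explicitly.
from dataclasses import dataclass, field
from enum import Enum

class AICategory(Enum):
    HOSTED_INTERNALLY = "hosted internally"
    AI_AS_A_SERVICE = "AI as a service"
    AI_POWERED_SAAS = "AI-powered SaaS"

@dataclass
class AIAsset:
    name: str
    category: AICategory
    data_classifications: list[str] = field(default_factory=list)
    terms_of_use_reviewed: bool = False          # do we know how data is handled?
    trains_on_customer_data: bool | None = None  # unknown until vetted

    def risk_gaps(self) -> list[str]:
        gaps = []
        if not self.terms_of_use_reviewed:
            gaps.append("terms of use not reviewed")
        if self.trains_on_customer_data is None:
            gaps.append("training behavior unknown")
        return gaps

asset = AIAsset("HelpDeskBot", AICategory.AI_POWERED_SAAS, ["credentials"])
print(asset.risk_gaps())  # ['terms of use not reviewed', 'training behavior unknown']
```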

Bring AI and data usage into compliance

Guttmann also recommends reviewing SaaS applications and third-party SaaS API integrations for data governance. “For example, SaaS security providers help enterprises better understand and manage the behavior and risk of their SaaS ecosystem,” she says.

AWS, Azure, Google, and other trusted cloud vendors have their own security controls built into their AI models, along with optional controls that security teams can and should take advantage of to secure their AI usage.

Dinis Cruz, founder of The Cyber Boardroom, is a strong proponent of the built-in configurability of cloud and SaaS apps to ensure a more secure environment that supports his AI-based business.

In large cloud environments, such as with Amazon Bedrock, he has access to read-only models that don’t learn on sensitive data, don’t retain any data, and can leverage existing security controls to manage the data exposed to the models (like authentication, authorization, DLP, observability, and more).
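
A minimal sketch of that pattern on Amazon Bedrock might look like the following: the application sends only a prompt plus its own vetted context through the standard AWS SDK, so existing IAM, network, and logging controls all apply. The model ID, region, and inference settings are assumptions for the example.

```python
# Minimal sketch of prompt-centric Bedrock usage: the model sees only
# the content the caller supplies; nothing is used for training and the
# call rides on existing AWS authentication and logging controls.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask(question: str, context: str) -> str:
    response = client.converse(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumed model
        messages=[{
            "role": "user",
            # We bring our own content; the model operates only on this.
            "content": [{"text": f"Context:\n{context}\n\nQuestion: {question}"}],
        }],
        inferenceConfig={"maxTokens": 512, "temperature": 0.2},
    )
    return response["output"]["message"]["content"][0]["text"]
```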

“The entire security model is centered around the AI prompts sent to the best model for the task at hand, which dramatically reduces the attack surface,” Cruz says. “We bring our own content and space that the models should be operating on, and that’s controllable.”

Providers such as AWS are already providing more secure AI integration

Sherry Marcus, director of Amazon Bedrock Science at AWS, says the platform is engineered so customers can bring foundation models to their data already stored in AWS. With the right security configurations and data protection controls in place, the data on AWS used for AI and for building AI applications is also protected. “So, instead of having to move their data to external models to do generative AI, customers are staying within their own data perimeters,” she explains. “It becomes your own personal genAI model that stays within your AWS account.”
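
One concrete expression of that data perimeter is IAM policy: scoping an application role so it can invoke only the foundation models the organization has vetted. The policy below, shown as a Python dict for convenience, is a hedged illustration; the model ARN and region are placeholders.

```python
# Illustrative IAM policy restricting a role to invoking one approved
# Bedrock foundation model, keeping AI calls inside the account's
# vetted perimeter. The ARN and region are placeholder assumptions.
import json

bedrock_invoke_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "InvokeApprovedModelsOnly",
        "Effect": "Allow",
        "Action": [
            "bedrock:InvokeModel",
            "bedrock:InvokeModelWithResponseStream",
        ],
        "Resource": [
            "arn:aws:bedrock:us-east-1::foundation-model/"
            "anthropic.claude-3-sonnet-20240229-v1:0"
        ],
    }],
}

print(json.dumps(bedrock_invoke_policy, indent=2))
```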

Bill Shinn, senior principal in the Office of the CISO at AWS, also points out that cloud access security broker (CASB), data loss prevention, DNS and/or proxy controls, and EDR services can be used to identify and monitor when users are engaging with AI services. These controls and industry best practices are mapped to various AI use cases in an AWS blog on safe AI configuration.

AI security as a shared responsibility between providers and customers

Arkin says security is a shared responsibility between cloud/SaaS providers and enterprise customers, emphasizing optional detection controls like event monitoring and audit trails that help customers gain insights into who’s accessing their data, for what purpose, and the type of processing being done.

He also points to Salesforce Trust Services like Shield Platform Encryption, Event Monitoring, and Einstein Data Detect to classify data in Salesforce secure data lakes and configure data protection policies. On a more detailed level, Salesforce’s Einstein AI “trust layer” gives customers choices on secure data retrieval and dynamic grounding for safe provision of AI prompts with context, along with data masking and zero data retention to protect the privacy and security of sensitive data when the prompt is sent to a third-party large language model.
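
To illustrate the general data-masking idea (not Salesforce’s actual implementation), the sketch below redacts obvious sensitive values from a prompt before it leaves for a third-party LLM and keeps a local mapping so responses can be un-masked. The regex patterns cover only emails and US-style SSNs and are purely illustrative.

```python
# Generic prompt-masking sketch: replace sensitive values with
# placeholders before sending a prompt to an external LLM, retaining a
# local mapping for un-masking the response. Simplified illustration,
# not any vendor's trust layer.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask(prompt: str) -> tuple[str, dict[str, str]]:
    """Replace sensitive values with placeholders; return the mapping."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        # Deduplicate matches while preserving order.
        for i, match in enumerate(dict.fromkeys(pattern.findall(prompt))):
            placeholder = f"<{label}_{i}>"
            mapping[placeholder] = match
            prompt = prompt.replace(match, placeholder)
    return prompt, mapping

masked, mapping = mask("Reset access for jane.doe@example.com, SSN 123-45-6789.")
print(masked)  # Reset access for <EMAIL_0>, SSN <SSN_0>.
```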

Not everyone agrees that this “shared responsibility” is effectively protecting sensitive data processed and stored in clouds and SaaS, given the history of breaches involving misconfigurations on the part of the users. Toss in AI, with all its forms and layers, and it certainly complicates matters.

“This is not a problem that can be fixed using traditional tools, so to speak. If there are multiple avenues to help fix a business problem, we should look into all of them,” says Steve Dufour, CISO at Embold Health, a physician recommendation service that’s innovating with AI. “And you really have to have a good communication style internally, and a good training program to help educate employees on what these are actually doing.”

