The adoption of unsanctioned generative artificial intelligence platforms is increasing risk and causing headaches for security teams, a study finds.
By Alex Scroxton, Security Editor
In the three months to the end of May this year, the use of generative artificial intelligence (GenAI) platforms among enterprise end users grew rapidly. While security teams work to facilitate the safe adoption of software-as-a-service (SaaS) AI frameworks such as Azure OpenAI, Amazon Bedrock and Google Vertex AI, unsanctioned on-premise shadow AI now accounts for half of AI application adoption in the enterprise and is compounding security risks, according to a report.
The study, compiled by data protection platform supplier Netskope, examined users' growing reliance on on-premise GenAI platforms, which are now the most popular way of building AI agents and applications. These platforms include tools such as Ollama, LM Studio and Ramalama, and are the fastest-growing category of shadow AI.
Netskope said shadow AI is often chosen for its relative ease of use and flexibility, but in order to speed up their projects, employees give the platforms access to enterprise databases, leaving them wide open to data theft or leakage.
Ray Canzanese, director of Netskope Threat Labs, said: “The rapid growth of shadow AI places the onus on organisations to identify who is creating new AI apps and AI agents using GenAI platforms, and where they are building and deploying them.
“Security teams don’t wish to hinder employee end users’ aspirations for innovation, but AI usage will only increase. To safeguard this innovation, organisations need to overhaul AI app controls and evolve DLP [data loss prevention] to include real-time user training elements.”
The most popular way to use GenAI locally is to deploy a large language model (LLM) interface, which allows interaction with different models from the same ‘storefront’. Ollama is the most popular of these frameworks, but unlike the leading SaaS solutions, it has no inbuilt authentication. Users must therefore deploy it behind a proxy or private access solution with appropriate authentication, which is a difficult task for the average user, according to the Netskope report.
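To illustrate the kind of setup the report describes, the sketch below shows a hypothetical reverse-proxy configuration placing basic authentication and TLS in front of a local Ollama instance (which listens on port 11434 by default). The hostname, certificate paths and credentials file are all assumptions for the example, not a recommendation from Netskope:

```nginx
# Hypothetical nginx reverse proxy adding authentication in front of
# a local Ollama instance (default port 11434), which has none itself.
server {
    listen 443 ssl;
    server_name ollama.internal.example.com;   # assumed internal hostname

    ssl_certificate     /etc/nginx/certs/ollama.crt;   # assumed cert paths
    ssl_certificate_key /etc/nginx/certs/ollama.key;

    # Require credentials before any request reaches Ollama
    auth_basic           "Ollama access";
    auth_basic_user_file /etc/nginx/.htpasswd;

    location / {
        proxy_pass http://127.0.0.1:11434;
        proxy_set_header Host $host;
    }
}
```

A stronger variant would swap basic auth for single sign-on via an identity-aware proxy, but even this minimal layer closes off the unauthenticated access the report warns about.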
While OpenAI, Bedrock and Vertex provide safeguards against model abuse, Ollama users are responsible for preventing misuse themselves. Netskope said that while on-premise GenAI has some benefits, such as helping organisations leverage their existing GPU resources or build tools that interact and integrate more closely with their on-premise systems, these may be outweighed by the responsibility organisations take on for securing the GenAI infrastructure, a burden that does not arise with SaaS.
Netskope’s analysts are tracking 1,550 GenAI SaaS apps, which its customers can easily identify using focused searches within its platform. Usage can also be tracked by monitoring who is accessing AI marketplaces, Hugging Face being one example.
In addition to identifying use, IT and security leaders should consider formulating and enforcing policies that restrict employee access to approved services and block unapproved ones. They should also implement DLP to account for data sharing in GenAI tools, and adopt real-time coaching to steer users towards approved tools and sensible practices.
Adopting continuous monitoring of GenAI usage, and conducting an inventory against frameworks from the likes of NIST, OWASP and Mitre, is also recommended.
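As one small building block of such an inventory, the sketch below queries Ollama's documented `/api/tags` REST endpoint, which lists the models pulled onto a host, and extracts their names. The host address and the idea of sweeping internal machines this way are assumptions for illustration, not a method from the report:

```python
import json
from urllib.request import urlopen


def list_local_models(payload: dict) -> list[str]:
    """Extract sorted model names from an Ollama /api/tags response payload."""
    return sorted(m["name"] for m in payload.get("models", []))


def inventory_host(host: str = "http://127.0.0.1:11434") -> list[str]:
    """Query a host's Ollama API (if reachable) and return its model names."""
    with urlopen(f"{host}/api/tags", timeout=5) as resp:
        return list_local_models(json.load(resp))


if __name__ == "__main__":
    # Example payload shaped like an /api/tags response, used offline here
    sample = {"models": [{"name": "llama3:latest"}, {"name": "mistral:7b"}]}
    print(list_local_models(sample))
```

Run across a range of internal hosts, a script like this gives security teams a first rough census of unsanctioned local LLM deployments to check against the NIST, OWASP and Mitre guidance.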
The report’s authors warned: “Agentic shadow AI is like having a person come into your office each day and handle data and take actions on systems without being background checked or having security monitoring in place.”

