Shadow AI and Security: How to Regain Control Before Regulators Step In
Employees love using AI tools, sometimes a little too much. From ChatGPT to image generators, staff are embracing AI in their daily work, often without IT’s green light. This phenomenon, dubbed “shadow AI,” refers to the unsanctioned use of AI applications by employees without IT or management approval, and it’s the new wildcard in corporate security.
Just as “shadow IT” meant rogue apps and cloud services running under the radar, shadow AI is all about unofficial AI usage, and it’s exploding in workplaces across Australia and New Zealand. A recent report found that over 80% of workers (and nearly 90% of security professionals!) admit to using unapproved AI tools on the job.
While these tools can boost productivity, they also create serious risks around data leakage, compliance breaches, and other security vulnerabilities. Here’s why shadow AI is a ticking time bomb and how organisations can regain control before regulators or disasters force their hand.
Why Shadow AI Is Widespread
One reason shadow AI has proliferated is that modern AI tools are just so easy to access. Many are free or low-cost SaaS apps available with a few clicks.
Employees often turn to these AI solutions to work smarter or faster when official systems don’t meet their needs[39]. For example, a marketer might use an AI copywriting tool to draft content quickly, or a developer might use an AI code assistant to troubleshoot, all without telling IT.
There’s an upside: this initiative can spark innovation and agility. In fact, teams sometimes find creative solutions through shadow AI that management hadn’t considered. It’s a bit of a double-edged sword: shadow AI can drive real improvements in workflows, which is why even some managers and executives engage in it. (Surprise: executives were found to be more likely than regular staff to regularly use unapproved AI tools, perhaps feeling confident they can handle the risks.)
Another factor is the overconfidence gap. Paradoxically, the more employees think they know about AI security, the more likely they are to bypass the rules. A study by UpGuard found that many users who believed they understood AI risks still routinely used unsanctioned tools. In their minds, they can “manage” the risk, even if that’s not true in practice.
Combine this with the finding that less than half of workers are even aware of their company’s AI usage policies, and you have a recipe for rampant shadow AI. People either don’t know the rules or think they don’t need to follow them.
The Security and Compliance Risks
The risks from shadow AI are very real. Foremost is data security. When employees plug sensitive company information into a random AI app or web service, there’s no telling where that data goes or who can access it.
Imagine an employee pasting a customer list or product design into a free AI tool where the data could be stored on external servers, potentially even used to train someone else’s AI model. It’s no wonder a CISO poll found 1 in 5 companies had experienced data leakage due to employees using generative AI tools.
High-profile incidents of confidential data exposure via ChatGPT have already made headlines. Samsung, for instance, banned the tool after engineers accidentally leaked sensitive source code to it. Once that data is out, it can’t be clawed back, and it may even become public. This is exactly why regulators like the OAIC (Office of the Australian Information Commissioner) warn organisations not to input personal or sensitive data into public AI tools.
Compliance violations are another major risk. Many industries have strict data protection laws (think health data, financial info, personal identifiers under the Privacy Act). If an employee feeds regulated data into an AI without proper controls, the company could be in breach of those laws. Internationally, a wave of AI-specific regulation is also on its way, and existing rules like the EU’s GDPR already hold companies liable for how they process personal data; unvetted AI usage could run afoul of both.
Even locally, while Australia and NZ haven’t yet enacted AI-specific laws, regulators are watching. If businesses don’t self-regulate shadow AI, we could see tougher rules imposed. And regardless of AI-specific laws, existing laws still apply. A client record exposed via an AI tool is treated the same as any other data breach under the law. Companies have faced massive fines (Australian penalties for serious privacy breaches can reach tens of millions of dollars) and reputational damage for such lapses.
Beyond data issues, shadow AI can introduce unknown vulnerabilities. Unauthorised software hasn’t been vetted for malware or security flaws. An innocuous-looking AI browser extension, for example, could be a backdoor for attackers if it’s poorly secured.
Also, when AI tools make automated decisions (say, an employee using a free AI to screen job applicants or to make pricing recommendations), they might produce biased or incorrect results that conflict with company policy or ethics. If those outputs are acted on, the company could be inadvertently discriminating or making faulty business decisions. And if discovered, that’s a reputational black eye or even legal liability.
In short, shadow AI can undermine all three pillars of cybersecurity (confidentiality, integrity, and availability): confidential data can leak, decision integrity can be compromised, and system availability is at risk if rogue tools cause incidents. It’s a headache the CIO and CISO can’t ignore.
Taking Back Control: Best Practices
How can organisations get a handle on shadow AI without stifling innovation? Here’s a playbook to regain control:
- Discover and Monitor
You can’t manage what you don’t know exists. Start by assessing the current state of shadow AI in your org. This might include surveys, network monitoring, or software asset checks to identify what AI tools people are using. You might be surprised: shadow AI usage spans all departments (with marketing and sales often the heaviest users).
IT teams can use data loss prevention (DLP) and monitoring solutions to spot telltale signs (e.g. large copy-pastes into web-based AI tools, or unusual API calls). The goal isn’t to snoop on everyone, but to map out the landscape. Once you know the popular unofficial tools, you can prioritise which ones pose a high risk and need immediate action.
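As a rough illustration of what that discovery step can look like, here is a minimal Python sketch that scans a web-proxy log export for requests to a handful of well-known AI domains and summarises who is using them and how heavily. The log format, column names, domain list, and size threshold are all assumptions for the sketch; adapt them to whatever your proxy or SIEM actually exports.

```python
"""Illustrative shadow-AI discovery sketch.

Assumes a CSV proxy log export with columns: timestamp, user, dest_host, bytes_sent.
The log path, domain list, and threshold are placeholders for this example.
"""
import csv
from collections import Counter, defaultdict

# Hypothetical list of AI-tool domains to look for (extend for your environment).
AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
}

LOG_PATH = "proxy_log.csv"      # assumed export from your web proxy
LARGE_UPLOAD_BYTES = 100_000    # arbitrary threshold for a "large" request


def summarise(log_path: str) -> None:
    hits_by_user = defaultdict(Counter)  # user -> {domain: request count}
    large_uploads = []                   # (user, domain, bytes) worth a closer look

    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["dest_host"].lower()
            if host not in AI_DOMAINS:
                continue
            user = row["user"]
            hits_by_user[user][host] += 1
            sent = int(row.get("bytes_sent", 0) or 0)
            if sent >= LARGE_UPLOAD_BYTES:
                large_uploads.append((user, host, sent))

    print("Shadow AI usage by user:")
    for user, counts in sorted(hits_by_user.items()):
        summary = ", ".join(f"{domain} x{n}" for domain, n in counts.most_common())
        print(f"  {user}: {summary}")

    print("\nLarge uploads worth reviewing:")
    for user, host, sent in large_uploads:
        print(f"  {user} sent {sent:,} bytes to {host}")


if __name__ == "__main__":
    summarise(LOG_PATH)
```

Even a crude report like this is usually enough to show which teams are leaning on unsanctioned tools the most, and therefore which ones need a sanctioned alternative first.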
- Update Policies and Educate Employees
If you haven’t already, implement a clear AI usage policy that spells out what is and isn’t allowed. This should cover generative AI tools, data-handling rules, and the requirement to obtain approval for new tech.
However, a policy on paper isn’t enough: fewer than 50% of employees currently know their company’s AI policies, so a major push on awareness is needed. Conduct training sessions specifically about the risks of shadow AI. Explain real examples of data leaks and compliance issues to make it concrete. Emphasise that sharing work data with public AI tools is akin to sharing it with the world.
Also, highlight that while AI is useful, it must be used within a safe framework. Make it clear there’s no punishment for past shadow AI usage; you’re not trying to police the past, but protect the future. Encourage employees to come forward with AI tools they find useful so IT can evaluate safer alternatives (making them part of the solution).
- Provide Secure AI Alternatives
One reason shadow AI flourishes is that official tools are lacking. To counter this, organisations should offer sanctioned AI platforms that employees can use safely and in line with policy. For example, if staff are using ChatGPT on their own, consider providing an enterprise-approved version (such as ChatGPT Enterprise or Microsoft’s Azure OpenAI service, where company data stays protected).
These paid solutions often include data privacy assurances (e.g., no data used for training) that mitigate leakage concerns. Similarly, there are enterprise-grade tools for code assist, image generation, and more. By provisioning vetted AI tools and integrating them into workflows, you remove the temptation for employees to go rogue.
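To illustrate what a sanctioned alternative can look like from the employee’s side, here is a minimal sketch that sends a chat request to an Azure OpenAI deployment via the official openai Python SDK. The endpoint, deployment name, and environment-variable names are placeholders for whatever your organisation provisions; the point is simply that prompts go to an instance the company controls under enterprise data-handling terms, not to a personal consumer account.

```python
"""Minimal sketch: calling a company-sanctioned Azure OpenAI deployment.

The endpoint, deployment name, and environment variable names below are
placeholders; your admins would supply the real values for your tenant.
"""
import os

from openai import AzureOpenAI  # requires openai>=1.0

client = AzureOpenAI(
    azure_endpoint=os.environ["COMPANY_AOAI_ENDPOINT"],  # e.g. https://yourorg.openai.azure.com
    api_key=os.environ["COMPANY_AOAI_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="company-gpt4o",  # the deployment name your admins created (placeholder)
    messages=[
        {"role": "system", "content": "You are an internal assistant. Follow company data-handling policy."},
        {"role": "user", "content": "Draft a short status update for the Q3 project review."},
    ],
)

print(response.choices[0].message.content)
```

Wrapping something like this in an internal chat front end or a shared API gateway gives staff the convenience they were seeking from consumer tools, while keeping the traffic on infrastructure the company can audit.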
It’s the classic carrot approach: give people the functionality they want, in a way that the company can oversee. Survey your teams to identify which AI applications would help them and prioritise officially rolling out those capabilities. When people have a fast, approved tool at their fingertips, they’re far less likely to seek unsanctioned apps.
- Implement Technical Safeguards
In parallel, use technology to enforce limits. This can include blocking or restricting access to known-risky AI web services on corporate networks, especially those with poor security records.
Many companies have already moved to ban or limit access to generative AI; in fact, one 2023 survey found that 93% of Australian organisations were implementing bans or restrictions on tools like ChatGPT due to security concerns.
While a wholesale ban may be overkill (and drive usage underground on personal devices), selective blocking, combined with the offer of alternatives, is effective. You can also set up data loss prevention rules: for instance, detecting if someone tries to upload a client file to an external AI site and blocking that action with an alert about policy. Another safeguard is requiring VPN or managed devices for internet access, so that personal devices on office Wi-Fi can’t quietly access shadow services without some oversight.
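To make the DLP idea concrete, the sketch below shows the kind of check an outbound proxy hook or DLP rule might apply: block a request headed to an unapproved AI domain if the payload looks like it contains sensitive identifiers, and alert on anything else so the user gets a policy reminder. The blocklist, regex patterns, and size threshold are illustrative assumptions only; commercial DLP products ship far more sophisticated detectors.

```python
"""Illustrative DLP-style guardrail for outbound requests to AI services.

The domain blocklist, regex patterns, and size threshold are assumptions for
this sketch; a real deployment would hook into your proxy or DLP platform.
"""
import re

UNAPPROVED_AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}
MAX_PAYLOAD_BYTES = 50_000  # arbitrary "that looks like a whole file" threshold

# Very rough patterns for data that shouldn't leave the company.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{2,3}\s?\d{3}\s?\d{3}\b"),                       # TFN-like numbers
    re.compile(r"\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b", re.I),   # email addresses
    re.compile(r"\bCONFIDENTIAL\b", re.I),                            # document markings
]


def evaluate_request(dest_host: str, payload: bytes) -> str:
    """Return 'allow', 'alert', or 'block' for an outbound request."""
    if dest_host.lower() not in UNAPPROVED_AI_DOMAINS:
        return "allow"

    text = payload.decode("utf-8", errors="ignore")
    if any(pattern.search(text) for pattern in SENSITIVE_PATTERNS):
        return "block"   # likely sensitive data headed to an unapproved AI tool
    if len(payload) > MAX_PAYLOAD_BYTES:
        return "alert"   # large paste or upload: notify the user and security team
    return "alert"       # small request: let it through but remind the user of policy


if __name__ == "__main__":
    sample = b"Customer list: jane@example.com, CONFIDENTIAL pricing attached"
    print(evaluate_request("chat.openai.com", sample))  # -> block
```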
The aim is to create guardrails: employees could still break the rules, but it takes deliberate effort and they’re reminded at each attempt.
- Foster a Culture of Communication, Not Fear
Regaining control isn’t about Big Brother tactics; it’s about partnership between IT and staff. Encourage an environment where employees can ask, “Hey, I found this cool AI tool, can we use it?” without fear of reprisal. Then have a process to evaluate those tools quickly. Security teams should assess the tool’s data policies, security, compliance implications, and either approve it (perhaps with guidelines) or provide a safer alternative. When employees feel heard and supported in their desire to innovate, they’re more likely to cooperate with governance.
On the flip side, if the first response is always “No, banned!”, people will find ways around it. Executives need to set the tone here by adhering to the policies themselves; if leadership clandestinely uses AI against the rules, it sends the wrong message. Leading by example (no more secret AI slide-deck generator, CFO!) reinforces the seriousness of following procedures.
- Prepare for Regulatory Scrutiny
It’s wise to get ahead of regulators on this issue. Document the steps you’re taking to control AI use: updated policies, training sessions held, tools provided, incidents monitored, and so on. This creates an evidence trail that your organisation is acting responsibly, so if (or when) an audit or regulation comes, you can demonstrate your compliance efforts. Keep an eye on the legal landscape: proposals for AI regulation in various sectors are brewing. For instance, the Australian government’s voluntary AI Ethics framework and the NAIC’s Guidance for AI Adoption may foreshadow future standards[34][52]. By aligning your internal practices with these guidelines now (transparency, accountability, risk management), you reduce the risk of being caught off guard. Also, ensure your privacy officers are involved: shadow AI often intersects with privacy law, so their input on permissible use of personal data is key. In essence, treat shadow AI management as part of your compliance regime, not just IT hygiene.
The Clock Is Ticking
Regulators and industry standards bodies are increasingly aware of shadow AI. We’re already seeing warnings and guidance: for example, Australia’s privacy commissioner explicitly advises against feeding personal data into public AI tools[15], and financial regulators like APRA stress that firms using AI must have “adequate guardrails” in place or else proceed with extreme caution[53][54]. There’s a sense that if companies don’t get a handle on this voluntarily, stricter rules will come. Overseas, we’ve already seen authorities investigate companies whose employees caused breaches via AI. It’s not far-fetched that an NZ or Australian company could face legal action if shadow AI leads to a leak of customer data or a discriminatory outcome affecting customers. The reputational fallout alone should be motivation to act now.
In summary, shadow AI is a classic case of technology racing ahead of governance. It’s understandable: AI tools are exciting and can deliver real benefits. But uncontrolled use is a risk powder keg. Organisations in ANZ must strike a balance: enable innovation with AI within a secure, managed framework. By taking the steps outlined, from visibility and policies to cultural change, you can defuse the shadow AI threat before it blows up into a data breach or regulatory crisis. The goal is to bring those shadow activities into the light: sanctioned, secured, and still beneficial.
Don’t wait for a costly incident to act. iClick can help your organisation audit its shadow AI exposure and implement a robust AI governance plan. Contact us for a security assessment and take proactive steps now to secure your AI use, before someone else (or the law) forces your hand.
Let’s create something extraordinary. Partner with us.
Start by saying hello