Last spring, Gartner issued eight cybersecurity predictions at their Security and Risk Management Summit. Among the eight, one really caught my eye:
“By 2027, 75% of employees will acquire, modify or create technology outside IT’s visibility – up from 41% in 2022.”[1]
In a world today fixated on privacy, data security, compliance, and control, how could this prediction come true?
To find out, I conducted an informal survey of people I interact with regularly. It turned out almost everyone I spoke with had a device or application they use regularly that IT doesn’t know about.
The most concerning thing I heard was that nearly everyone mentioned using an unsanctioned AI tool at work to help them with daily tasks.
They use ChatGPT, Gemini, Adobe, Grammarly, Copilot, Semrush, DALL-E, and others daily.
The Conference Board backs that finding, estimating that 56% of people regularly use AI tools at work[2]. It adds that only 24% of companies have policies in place to govern the use of such tools, and another 23% are working to develop one.
In effect, we have a huge 'Shadow AI' problem and need to get ahead of it because the technology isn’t going away; it’s accelerating.
When I read Gartner’s prediction, I thought of the bring-your-own-device (BYOD) days – 'Shadow IT,' if you will.
If I bring in a personal laptop without a security agent installed, the IT team doesn’t know about it. That’s bad enough, but the risk from Shadow AI may surpass the risk from Shadow IT.
McKinsey & Company points out a dozen risks from AI that need to be addressed. The top six are inaccuracy, cybersecurity, intellectual property infringement, regulatory compliance, explainability (the same prompt can yield a different answer every time), and privacy[3].
We are only beginning to understand how to mitigate these risks to acceptable levels. In the meantime, data is flying out the door, and the risks remain. I think Gartner might be right.
IT and cybersecurity teams are certainly not going to stop the use of these AI tools, so what should they do?
I believe this is a huge opportunity for them to demonstrate leadership as they embrace this technology.
Take your technical expertise and guide your fellow employees on an educational journey, helping them implement AI safely, securely, and effectively as the technology continues to develop.
Shift from exercising total control over what employees can do to leading them and shepherding the company’s assets. You’ll gain more insight into what they are doing and why.
Turn your company’s Shadow AI problem into a workable program. The upside for you and the company is huge!
Contact us today if your organization needs help creating and implementing effective AI policies.
ABOUT THE AUTHOR
Chuck Matthews is the CEO of PacketWatch, a US-based boutique cybersecurity firm focused on incident response, managed detection and response, forensics, and advisory services utilizing their proprietary network-based threat-hunting platform.
References
[1] Gartner unveils top eight cybersecurity predictions for 2023-2024. (2023, March 28). Gartner.com. Retrieved March 22, 2024, from https://www.gartner.com/en/newsroom/press-releases/2023-03-28-gartner-unveils-top-8-cybersecurity-predictions-for-2023-2024
[2] Majority of US workers are already using generative AI tools. (n.d.). The Conference Board. Retrieved from https://www.conference-board.org/press/us-workers-and-generative-ai
[3] The state of AI in 2023: Generative AI’s breakout year. (2023, August 1). McKinsey & Company. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year