The Rise of Shadow AI
The use of shadow AI, or AI tools that are not approved by IT, is happening right now in many organizations and is only going to increase. According to Microsoft and LinkedIn, 75% of global knowledge workers are using generative AI at work. This spans all age groups, with Gen Z at 85%, Millennials at 78%, Gen X at 76%, and Baby Boomers at 73%.
This broad use of generative AI is enabling employees to be more creative and productive. However, it may also increase potential risks that C-suite leaders should not ignore, such as proprietary data leaks, governance and compliance violations, and security gaps.
With these potential risks in mind, C-suite leaders need to promote and build a responsible AI culture: one that encourages transparency and open dialog about which tools work best, and that makes an effort to understand how employees are using these tools across the organization. Doing this will lead to more responsible use.
Shadow AI – What is it?
Shadow AI refers to the use of artificial intelligence software, large language models (LLMs), or other AI functionality that has not been properly reviewed and approved by IT and is not part of the IT strategy.
While most employees who leverage AI are just trying to work smarter and faster, they may be unintentionally creating and escalating certain risks. Similar to shadow IT, shadow AI can address immediate challenges that employees may be facing, and may also enable faster delivery. However, due to the advanced capabilities of AI, risks can also be escalated faster.
Why Employees are Turning to Shadow AI Tools
Employees have been increasingly using AI tools to make some tasks faster, such as drafting documents and emails, as well as writing and testing software code. Additionally, certain AI tools can provide fresh ideas for current projects that are easier to obtain than with traditional search engines.
Another reason for the increasing use: AI tools are everywhere. They are designed with users in mind and are often free. With the pressure to be more productive while innovating, many employees are motivated to start using new tools right away instead of going through a lengthy IT approval process.
This “bottom-up” approach to AI adoption is being driven by those who prefer immediate problem-solving to deliver on their commitments, even before those in the C-suite know that unsanctioned AI tools have been used.
The Risks You Need to Know About
While the reasons for using shadow AI tools are clear, so are the potential risks, which span security, compliance, and operations across the organization.
Below are several to be aware of:
Unintentional Data Leaks
Employees can upload or paste proprietary and sensitive information, such as financial records, source code, and personal details, into public AI tools such as generative AI platforms. Because data entered into these public tools can be retained and used to train future models, it can later surface for other users of those same tools, potentially exposing it to competitors or regulators.
Violations of Laws and Regulations
Using shadow AI tools in regulated industries, such as healthcare, finance, and law, can result in a breach of data protection laws if sensitive data is used on platforms that are not compliant. This can include public platforms that re-use data and those that do not provide complete transparency.
Security Breaches
Tools that are not properly vetted by IT may have weaknesses that can expose organizations to increased cyber-attacks, including hackers gaining access to critical data and systems.
Damage to Your Reputation
AI-generated content can include inconsistent or inaccurate information. Even worse, it can contain ethically questionable or sensitive material. If this type of content is sent to customers or posted on social media, it can misrepresent your brand, leading to lost sales and public rebuke.
Lack of Accountability
Employees who use shadow AI tools may be doing so without anyone in IT knowing how these tools are being used. This creates ambiguity as to who is responsible for errors or the misuse of data. Additionally, shadow AI tools may not log any decision-making, making it difficult or impossible to reconstruct how decisions were made.
The Actual Work Being Done With AI
What are employees actually doing with AI? Here are a few examples:
- Writing emails or proposals – AI tools can draft emails based on key points provided by users. Additionally, users can specify tone (e.g., friendly, formal) and personalization to make the email more applicable to the reader. Proposal writing AI tools can analyze materials such as reports and solicitations to build responses that can be finalized later by a person.
- Writing social posts, blogs, or product descriptions – After providing brief user input, AI tools can generate captions, blogs, phrases, or images in seconds. These can then be tailored for different uses, such as in social media platforms or a corporate blog.
- Summarizing meetings – AI tools can take meeting notes, summarize key points, and allow the user to share both. They can also provide complete transcripts, highlight decisions made, and turn next steps into actionable tasks that can be assigned to team members.
- Creating presentation slides and decks – Employees are using AI to help choose slide layouts and designs and to turn text from documents, such as Word or PDF files, into fully designed slides. With some tools, the user needs only to provide a brief outline, and the tool generates slides complete with content and visuals.
- Brainstorming names or ad copy – From a brief description of a business or product, AI tools can generate a list of potential names, which employees can then refine with a tone or style to fit the industry. Additionally, AI can be used to create compelling ad headlines, descriptions, and calls to action.
- Generating code – Employees are using AI tools to generate code in various programming languages. Once the code is generated, AI can scan for bugs and suggest fixes. In addition, AI can refactor and optimize older code while still retaining core functionality.
- Translating documents or generating FAQs – AI tools can quickly translate documents or text into multiple languages. As tools have gotten better at retaining the original meaning once translated, these translated documents can be used both professionally and academically. Regarding FAQs, AI can analyze topics and keywords, creating questions and answers.
How to Respond: Allow Responsible Use With Guardrails
AI tools are clearly valuable, and companies should balance enabling creativity and faster delivery while implementing guardrails to mitigate potential risks. Smart companies aren’t simply saying “no” to using AI tools, but they are saying, “Here’s how.” Below are a few tips to enable better use of AI tools that may not be sanctioned by IT:
- Don’t default to blocking everything. A policy that blocks all unsanctioned AI tools is unlikely to stop their use and may simply push it further under the radar.
- Set up a procedure that can more quickly approve tools based on security and compliance standards that are already established within the organization.
- Publish clear acceptable use policies that cover data handling (e.g., no proprietary data in public models) and spell out the consequences of unauthorized use.
- Educate employees on the risks of shadow AI, including security vulnerabilities and legal implications. Promote and attend hands-on demonstrations of new tools to enable conversation and smart use.
- Offer secure and approved alternatives. Steering employees toward software that IT has already vetted reduces risk. Ensure that employees are aware of available AI capabilities by providing demonstrations.
- Consider leveraging monitoring solutions to track activities in real time.
- Facilitate open discussions. Regularly include AI usage in workshops, team and department meetings, and internal messaging platforms. This fosters transparency, addresses concerns, and promotes responsible AI use while keeping the topic at the forefront of employees’ minds.
Build a Culture of Responsibility
AI is not replacing employees. If used responsibly, it will amplify important skills and allow all employees to be more creative. With this in mind, encourage transparency, safe experimentation, and open dialog. Communicate the good, as well as any potential risks, and ensure there is a clear escalation path if the use of unsanctioned AI creates unintended consequences.
Shadow AI is a signal, not of rebellion but of need and untapped potential. Your teams want to move fast and work smarter. Your job is to make sure they can do that safely, ethically, and securely.
At Cox Business, we provide the enterprise-grade connectivity, cloud services, private networking, and managed IT solutions needed to support modern, AI-powered workplaces without compromising security or compliance.