After years of combatting shadow IT, organisations now have to cope with a new kind of employee-created, unsanctioned IT setup that could expose a workplace to emerging cyber threats – shadow AI.
As its name implies, shadow AI refers to AI apps or services, such as the chatbots ChatGPT and Bard and advanced platforms like AlphaCode and SecondBrain, that operate beyond an IT team’s control and bring added risk to organisations.
With more users jumping onto the AI bandwagon without the safeguards put in place at work, shadow AI will pose a greater risk than shadow IT, which refers to systems and devices not managed by an IT department, warns WalkMe, a software-as-a-service (SaaS) company.
“Shadow IT could look like employees sharing work files on a Google Drive instead of the company’s approved work drive or when virtual meetings are organised on Zoom instead of the official company platform, such as Microsoft Teams,” said Vivek Behl, digital transformation officer for WalkMe.
However, employees using AI outside the company’s purview may unwittingly expose confidential company data, opening it up to unknown risks and possible data breaches.
“Employees may not understand that using a generative AI platform is not the same as using Google search or working on a Word document or the company’s cloud-based systems,” said Behl.
For example, an employee using large language models (LLMs) to verify programming code or plan business strategies can inadvertently disclose proprietary information, because data entered into prompts may be retained by the AI provider and LLMs can later reproduce the exact details of customer data in their outputs.
“This is concerning because aside from the general population, malicious cyber attackers are also using these same platforms to generate malware to attack key business databases,” said Behl.
In September, it was discovered that AI researchers at Microsoft accidentally leaked 38TB of sensitive company data from the backups of two employee workstations when they published open-source training data on GitHub, a software development platform.
The exposed data contained private access keys, passwords for Microsoft services, and more than 30,000 internal messages sent by Microsoft employees on the Teams app.
Earlier this year, Samsung accidentally leaked sensitive data when staff uploaded data to ChatGPT. In response, the electronics giant temporarily banned the use of generative AI tools on company-owned devices, and non-company-owned devices running on internal networks.
Major banks, like Bank of America, Citi, Deutsche Bank, Goldman Sachs, Wells Fargo and JPMorgan, have placed restrictions on the use of ChatGPT by employees.
Research by YouGov and Microsoft shows that workers in Asia are using AI-based platforms for their work, including generative AI platforms that their companies have banned.
Banning generative AI within an organisation could create a competitive disadvantage, since it promises to increase creativity and improve productivity by streamlining administrative tasks.
Gartner predicts that by 2025, generative AI will become a workforce partner for 90 per cent of companies worldwide. As a result, businesses that actively block the use of AI in the workplace risk being left behind.
A balanced approach
Behl emphasises the importance of balancing the risk with the benefits of generative AI. “Organisations need to implement the right combination of technologies and guidelines to lower the risk of a data leak.”
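One concrete example of such a technology is a lightweight prompt filter that screens what employees send to external AI services. The sketch below is a hypothetical Python example with placeholder patterns and function names, not any specific vendor’s tooling; it simply flags prompts that appear to contain e-mail addresses, API keys, card numbers or private keys before they leave the company network.

```python
# Minimal sketch of a client-side "prompt guard" (illustrative only).
# It scans outgoing text for patterns that look like secrets or customer
# data before the prompt is forwarded to an approved generative AI service.
import re
from typing import List

SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api key / token": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "private key block": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def find_sensitive(text: str) -> List[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

def guarded_prompt(text: str) -> str:
    """Block prompts that appear to contain confidential data."""
    findings = find_sensitive(text)
    if findings:
        raise ValueError(f"Prompt blocked: possible {', '.join(findings)} detected")
    return text  # deemed safe to forward to the approved AI service

if __name__ == "__main__":
    try:
        guarded_prompt("Summarise this contract for client jane.doe@example.com")
    except ValueError as err:
        print(err)  # Prompt blocked: possible email address detected
```

In practice such filtering would usually sit in a secure gateway or data loss prevention layer rather than in each application, and would be paired with the guidelines and training Behl describes.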
In other words, business leaders should be educated about the ever-evolving AI landscape and its associated risks and rewards. This knowledge will enable them to allocate resources and align policies with business needs and compliance regulations to make AI adoption safer and more intelligent.
Open communication channels with employees, such as town hall discussions or workshops, can boost transparency and confidence in using AI technologies.
Finally, AI explainability is the foundation of effective AI adoption in the workplace, where all stakeholders are fully informed about how AI works, how data is being used, and how outcomes are determined. A lack of transparency can lead to distrust in AI systems, according to research firm Forrester.