Shadow AI: Reshaping the Future, But at What Cost?

by Vik Bogdanov, February 17th, 2024

Too Long; Didn't Read

In response to security vulnerabilities exposed by AI tools like Amazon Q and ChatGPT, major companies such as Amazon, Samsung, and Apple have implemented strict AI usage policies. Despite these efforts, a "Shadow AI" culture has emerged, with employees bypassing restrictions to use AI for efficiency, highlighting a significant gap between policy and practice. Recent studies show widespread unofficial use of generative AI in the workplace despite corporate bans. This underscores the challenge of balancing security concerns with AI's benefits and pushes organizations to explore strategies for managing Shadow AI: developing comprehensive AI usage policies, fostering a culture of innovation, and strengthening data governance to mitigate risks and harness AI's potential responsibly.



Exploring Shadow AI's impact on business: risks, strategies, and the quest for a secure, innovative future. How will companies navigate this new frontier?


In December 2023, Amazon unveiled its latest AI venture, Q, promising a safer alternative to consumer-focused chatbots like ChatGPT. However, the excitement was short-lived. Just three days after the announcement, Amazon Q was embroiled in controversy. Employees raised alarms over inadequate security and privacy measures, revealing that Q fell short of Amazon's own stringent corporate standards. Critics highlighted its tendency to "hallucinate" and to leak sensitive information, including AWS data center locations, unreleased product features, and internal discount programs. Amazon's engineers were forced into damage-control mode, addressing critical issues tagged as a "sev 2" emergency to prevent further fallout.


Around the same time, Samsung Electronics Co. grappled with an AI-induced headache of its own. Sensitive internal source code found its way onto ChatGPT, spotlighting glaring security vulnerabilities. The response was swift: a company-wide ban on generative AI tools, communicated through an internal memo. Samsung's decision underscored the difficulty of managing data on external AI platforms such as Google Gemini and Microsoft Copilot, where control over data retrieval and deletion is elusive. The move mirrored the concerns of 65% of Samsung employees, who viewed these AI services as a digital Trojan horse. Despite the ban's impact on productivity, Samsung remained firm, opting to develop in-house AI solutions for translation, document summarization, and software development until a secure environment for using AI could be established.


Apple, too, joined the fray, prohibiting its employees from using ChatGPT and similar AI-powered tools. The ban was partly fueled by the tools' affiliation with Microsoft, a direct competitor, stoking fears about the security of Apple's sensitive data. The trend wasn't exclusive to tech giants; financial behemoths like JPMorgan Chase, Deutsche Bank, Wells Fargo, and others also limited AI chatbots' use, aiming to shield sensitive financial information from third-party eyes.


Yet these restrictions inadvertently birthed a culture of "Shadow AI," in which employees turn to personal devices and unsanctioned AI tools at work in their quest for efficiency and time savings, highlighting a significant gap between policy and practice in AI usage.

Shadow AI: An Unseen Threat

Although concrete data is scarce, numerous individuals at companies with AI restrictions have admitted to employing such workarounds, and those are only the ones open about it. Shadow AI usage is prevalent across many organizations: employees use AI in ways that contradict or outright violate company policy, turning it into an activity they feel compelled to conceal.


As I dug deeper into this issue, I found recent studies confirming that despite the many stories about companies restricting genAI use in the workplace, employees don't seem to be using it any less. Recent research by Dell indicates that 91% of respondents have dabbled with generative AI in some capacity, and 71% report having used it specifically at work.


A study conducted by ISACA highlights a significant gap between AI adoption in the workplace and the formal policies governing its use in Australia and New Zealand. While 63% of employees in these regions use AI for various tasks, only 36% of organizations officially allow it. The survey reveals that AI is being applied to creating written content (51%), enhancing productivity (37%), automating repetitive tasks (37%), improving decision-making (29%), and customer service (20%). Yet only 11% of organizations have a comprehensive policy for AI usage, and 21% have no intention of establishing one.


Moreover, the ISACA study indicates a lack of AI-related training within organizations: only 4% offer it to all staff, and 57% provide no training at all, even to those directly affected by AI technologies. This raises concerns similar to those around Shadow IT, where employees use IT resources without formal approval, potentially putting organizational security and governance at risk.


Navigating the New Frontier of Risk and Responsibility

Much like Shadow IT snuck up on businesses, Shadow AI is already here, forcing organizations to confront their GenAI stance head-on while they are still figuring out how to use it.


Experts believe guardrails will not stop employees from using AI tools, since those tools significantly enhance productivity and save time. Company CIOs must therefore confront this issue and explore mitigation strategies aligned with their organization's tolerance for risk. Well-intentioned employees will inevitably use these tools to work more efficiently, so corporate tech leaders can head off potential harm to the organization by proactively addressing and managing this trend.


Every employee's interaction with AI tools can be a potential point of vulnerability.


The history of Shadow IT is marked by significant data breaches, such as the infamous incidents involving unsecured Amazon S3 buckets, which led to the public exposure of the personal data of 30,000 individuals. These historical precedents serve as a cautionary tale, emphasizing the need for rigorous data governance in the age of AI.
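The guardrail for that particular class of mistake is well understood today. As a minimal illustrative sketch (assuming boto3 credentials are already configured and using a made-up bucket name), enforcing a block-public-access configuration is the kind of control that would have prevented many of those S3 exposures:

```python
import boto3

# Enforce "block public access" settings on a bucket so a misconfigured ACL
# or bucket policy cannot expose its contents. The bucket name is hypothetical.
s3 = boto3.client("s3")
s3.put_public_access_block(
    Bucket="example-analytics-exports",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```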


Shadow AI is a more formidable challenge than Shadow IT for several reasons. First, the decentralized nature of AI tool usage means the potential for data misuse or leakage is not limited to a technical subset of employees (e.g., developers) but extends across the entire organization. Additionally, AIaaS (AI as a Service) models inherently learn from the data they process, creating a dual layer of risk: the possibility of AI vendors accessing sensitive data, and the enhanced ability of bad actors to discover and exploit exposed data.

Strategies to Tackle Shadow AI

According to Amir Sohrabi, regional VP of EMEA and Asia and head of digital transformation at SAS, tech leaders with a data-first mindset will be able to drive enhanced efficiencies in 2024 and beyond. This is because maximizing the benefits of generative AI tools is contingent upon well-organized data, necessitating robust data management practices encompassing data access, hygiene, and governance.


In his article for CIO.com, Nick Brackney, Gen AI & Cloud Evangelist Leader at Dell Technologies, points to "three prescriptive ways" businesses can successfully combat Shadow AI.


First, establish a centralized strategy for generative AI usage, involving executive leadership to define use cases, create secure access, and protect data. This approach simplifies enforcement and scaling across the organization, though it takes effort to build and calls for identifying easy wins early to ensure success.


Second, keep your data organized and understand which types, such as trade secrets and other sensitive information, should not be placed in public or hosted private cloud AI offerings. For those types of data, use AI solutions that give you complete control or that do not retain conversation logs.
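As an illustrative sketch of that gatekeeping step (not a production DLP tool; the patterns and the project codename below are placeholder assumptions), a thin screening layer can flag obviously sensitive material before a prompt ever reaches a public AI service:

```python
import re

# Hypothetical patterns for data that should never leave the organization.
# Real classification rules would come from your data governance program.
BLOCKED_PATTERNS = {
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal project codename": re.compile(r"\bproject[- ]nightfall\b", re.I),
    "api key": re.compile(r"\b[A-Za-z0-9]{32,}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the labels of any blocked data types found in the prompt."""
    return [label for label, pattern in BLOCKED_PATTERNS.items() if pattern.search(prompt)]

violations = screen_prompt("Summarize the Q3 roadmap for Project Nightfall")
if violations:
    print("Prompt blocked; contains: " + ", ".join(violations))
else:
    print("Prompt cleared for an external AI service")
```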


Third, control the AI service by bringing it to your data, whether on-premises or through secure cloud solutions, to leverage advantages in governance, employee productivity, and secure data access. This approach enhances the end-user experience, ensuring compliance and reducing the risk of data exposure.
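One way to picture that "bring the AI to your data" pattern: assuming a self-hosted model served through an OpenAI-compatible endpoint on the corporate network (the internal URL and model name below are placeholders, not real services), client code barely changes while prompts and documents stay in-house:

```python
from openai import OpenAI

# Point the standard client at an internal, self-hosted inference server
# (for example vLLM or Ollama) instead of a public SaaS endpoint, so prompts
# and documents never leave the corporate network.
client = OpenAI(
    base_url="http://ai-gateway.internal.example:8000/v1",  # hypothetical internal host
    api_key="not-needed-for-local-deployments",
)

response = client.chat.completions.create(
    model="internal-llama-3-8b",  # placeholder name for an internally hosted model
    messages=[{"role": "user", "content": "Summarize this internal design doc: ..."}],
)
print(response.choices[0].message.content)
```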


Crafting a clear AI Acceptable Use Policy is crucial for delineating improper AI practices that could potentially harm your organization and for guiding the integration of AI applications in line with data security protocols and risk management strategies. This policy serves as a benchmark, allowing decision-makers to evaluate the usage of AI tools within the organization against established guidelines, swiftly pinpoint any risk exposures, and determine the necessary corrective actions.
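To make the "benchmark" idea tangible, here is a deliberately toy sketch (the tool names, data classifications, and rules are all invented for illustration) of encoding part of an acceptable use policy so that reported AI usage can be checked against it:

```python
# A toy encoding of an AI Acceptable Use Policy: which tools are sanctioned,
# and which data classifications each one may handle. All values are invented.
POLICY = {
    "approved-internal-assistant": {"public", "internal", "confidential"},
    "public-chatbot": {"public"},
}

def evaluate_usage(tool: str, data_classification: str) -> str:
    """Compare a reported AI usage against the policy and flag exposures."""
    allowed = POLICY.get(tool)
    if allowed is None:
        return f"RISK: '{tool}' is not a sanctioned AI tool"
    if data_classification not in allowed:
        return f"RISK: '{tool}' must not process {data_classification} data"
    return "OK: usage is within policy"

print(evaluate_usage("public-chatbot", "confidential"))        # flags a risk exposure
print(evaluate_usage("approved-internal-assistant", "internal"))  # within policy
```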


Ethan Mollick, professor at the Wharton School of the University of Pennsylvania, offers another thought-provoking approach. He believes that traditional methods for integrating new technologies are ineffective for AI due to their centralized nature and slow pace, making it difficult for IT departments to develop competitive in-house AI models or for consultants to provide specific guidance. The real potential for AI application lies with the employees who are experts in their own jobs, suggesting that for organizations to truly benefit from AI, they must engage their workforce (aka "Secret Cyborgs") in using AI technologies.


First and foremost, organizations should acknowledge that employees at any level could possess valuable AI skills, irrespective of their formal role or past performance. Having identified the secret cyborgs among their AI-savvy staff, companies must foster a collective learning environment, for example through crowd-sourced prompt libraries, and build a culture that diminishes apprehension around AI by offering assurances against AI-driven job loss and by promoting AI as a way to eliminate mundane tasks and make work more engaging.


Establishing psychological safety is crucial to encouraging open AI usage among employees.


Employers should offer substantial rewards for identifying significant opportunities where AI can aid the organization. These could include financial incentives, promotions, or flexible working conditions, and could be handled through gamification.


Today's organizations should act swiftly to determine how productivity gains from AI will be used, how work processes should be reorganized in light of AI capabilities, and how to manage the risks that come with AI use, such as hallucinated output and IP concerns. This calls for a proactive approach to crafting comprehensive AI policies, engaging employees across all levels to leverage their insights, and fostering a culture that rewards AI-driven innovation.


As technophiles and businesses navigate the complexities of AI integration, what strategies will you deploy to responsibly and innovatively leverage this transformative technology within your organization or personal projects, ensuring a balanced approach to privacy, security, and efficiency?


Don’t forget to check out my previous article to discover AI’s dirty secret!