Security Implications When Adopting AI
With the power to transform not only the way we work but also how we live day-to-day, Artificial Intelligence (AI) has become a hot topic globally. Yet despite the transformative benefits, the adoption and rise of AI bring additional security risks. In today’s article, Cyber Services Director Sean Tickle shares some of the key security implications of AI.
Delivering proven business benefits, AI has grown in popularity in recent months. As with any new technology, organisations utilising AI must navigate and remediate a new and complex set of cyber and data risks. While AI tools such as Microsoft Copilot for Security can offer some assistance, these tools still lack the efficiency and scope of a human security professional.
As we continue to unlock the power of machine learning and AI-powered productivity in a rapidly evolving digital world, here are the multifaceted security challenges to keep in mind when implementing and rolling out AI tools, so that your organisation remains as cyber resilient as possible.
Key security implications of AI
Addressing the security implications of adopting AI requires a comprehensive approach, and it is worth engaging a professional security service provider that can help your organisation understand its data structure and specific security vulnerabilities before implementing AI tools.
Key security considerations such service providers may look out for include:
Data privacy and protection
Large-scale data requirements: datasets are the lifeblood of any AI system, and organisations embracing AI will typically require large datasets, some of which can contain sensitive personal, financial, or medical information. Poor handling of these datasets may have significant business consequences, including data breaches that compromise user privacy.
Inference attacks: even anonymised datasets may be vulnerable where an attacker deduces private information from seemingly harmless data (see the sketch after this list).
Data ownership: data is a valuable business asset, and the question of who owns and controls the data used to train AI models can lead to conflicts, especially where sensitive or proprietary data is involved.
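To make the inference risk concrete, the sketch below joins an "anonymised" dataset with a public one on quasi-identifiers (postcode, birth year, gender) to re-identify individuals. The datasets, column names and values are hypothetical, purely for illustration.

```python
# A minimal sketch of a linkage (inference) attack: the datasets, column
# names and values are hypothetical and purely illustrative.
import pandas as pd

# "Anonymised" health data released without names.
anonymised = pd.DataFrame({
    "postcode":   ["D04", "D04", "T12"],
    "birth_year": [1985, 1990, 1985],
    "gender":     ["F", "M", "F"],
    "diagnosis":  ["asthma", "diabetes", "hypertension"],
})

# A public dataset (e.g. an electoral register) that includes names.
public = pd.DataFrame({
    "name":       ["A. Murphy", "B. Kelly"],
    "postcode":   ["D04", "T12"],
    "birth_year": [1985, 1985],
    "gender":     ["F", "F"],
})

# Joining on quasi-identifiers re-identifies individuals and links them
# to their sensitive attribute, despite the names having been removed.
reidentified = public.merge(anonymised, on=["postcode", "birth_year", "gender"])
print(reidentified[["name", "diagnosis"]])
```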
Adversarial attacks
Adversarial examples: manipulation of input data to deceive AI systems, for example a security-monitoring model, causing incorrect or dangerous outcomes (a minimal sketch follows this list).
Model poisoning: introduction of malicious data during the training phase to poison AI models, resulting in biased or harmful predictions and outputs.
Model inversion: by querying an AI system repeatedly, cyber-attackers can reconstruct sensitive data the model was trained on, exposing private information.
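As an illustration of the adversarial-example technique, the sketch below applies the well-known fast gradient sign method (FGSM) to a pretrained image classifier. The model choice and the random input are placeholder assumptions; the point is simply how a small, gradient-guided perturbation can flip a model's output.

```python
# A minimal FGSM (fast gradient sign method) sketch; the model and input
# are placeholders, purely to illustrate how a small, crafted perturbation
# can change a classifier's prediction.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="DEFAULT").eval()  # any pretrained classifier

def fgsm_attack(image, label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

# Hypothetical input: one 224x224 RGB image and an assumed true class index.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([207])

x_adv = fgsm_attack(x, y)
print("original prediction:", model(x).argmax(1).item())
print("adversarial prediction:", model(x_adv).argmax(1).item())
```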
Bias and fairness issues
Discriminatory outcomes: AI systems trained on biased data may perpetuate or even amplify those biases, leading to unfair or discriminatory decisions, especially in critical areas such as hiring and lending (a simple fairness check is sketched after this list).
Black-box decision making: the decision-making processes of AI models, deep learning systems in particular, are still not well understood. This lack of transparency can make it difficult to identify or correct biased or malicious behaviour in the system.
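One simple way to surface discriminatory outcomes is to compare a model's decision rates across groups, a demographic parity check. The sketch below uses made-up decisions and group labels purely for illustration; a real bias audit would cover far more than this single metric.

```python
# A minimal demographic-parity check: the predictions and group labels
# below are made up, purely to show the kind of measurement involved.
import pandas as pd

results = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   0,   0,   1,   0],   # model decisions
})

# Approval rate per group; a large gap suggests the model (or its training
# data) may be treating one group less favourably than another.
rates = results.groupby("group")["approved"].mean()
print(rates)
print("demographic parity gap:", rates.max() - rates.min())
```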
Automation and weaponised AI
AI-powered cyber-attacks: because AI models are designed to adapt to and learn from the environment in which they operate, AI can be used to launch more sophisticated cyber-attacks that adapt to avoid detection.
Deepfakes and misinformation: AI can generate highly realistic fake content (videos, images, audio), leading to the spread of disinformation and more convincing phishing scams.
System vulnerabilities
AI dependency: as our dependency on AI grows, systems become vulnerable to AI-related failures or attacks, e.g., an AI system used for fraud detection might be targeted by adversarial attacks.
AI software vulnerabilities: as with all software, AI systems can have vulnerabilities that malicious actors can exploit, whether it’s in the AI algorithms or the infrastructure supporting them.
Insider threats and misuse
AI Misuse: employees or insiders could misuse AI systems to access sensitive information, manipulate results, or engage in malicious activities like surveillance or fraud.
Human-in-the-loop (HITL) bypass: as AI systems automate more decision-making processes, fewer human checks and balances may be put in place, increasing the risk of misuse or errors going unnoticed.
Ethical and regulatory compliance
Regulatory compliance: as governments introduce new laws and regulations governing AI, organisations must ensure that AI systems comply with data protection, ethical use, and transparency standards. Failing to do so may lead to legal penalties.
Ethical implications: misuse or mishandling of AI can lead to ethical dilemmas, where systems prioritise profits over fairness, privacy, or human welfare.
As we can see, while AI does indeed offer powerful use cases for enhancing productivity, human oversight and security activity are still necessary if we are to ensure that these systems function ethically, fairly, and safely, providing a safeguard against unpredictable, biased, or erroneous outcomes.
An AI example: Copilot for Microsoft 365
To round off this article, it is worth considering a real use case of AI, one that many organisations have already implemented and are using today: Copilot for Microsoft 365 (M365 Copilot for short).
It is important to note that M365 Copilot has been selected not because it is any less secure than other productivity tools (in fact, it benefits from Microsoft’s own security and compliance protocols, which can be very robust when configured correctly), but because the tool’s growing popularity means it is one many business users are familiar with.
Powered by a large language model (LLM), M365 Copilot interacts with data primarily by processing it in response to user prompts, generating content or insights based on the patterns the model has learnt and the data it can access across SharePoint. Copilot interacts with applications (e.g., Word, Excel, MS Teams) and company data (chats, emails, documents) to assist users in automating tasks, generating content, prioritising their workload, and so on.
These powerful search capabilities, however, are precisely why – prior to implementing AI tools like Copilot – organisations must establish a strong foundation of data security. It is imperative that guardrails are put in place to ensure that the AI can only interact with data that it’s safe for users to access. For instance, without privileged access management controls in place, Copilot could inadvertently expose sensitive and personal data in response to a user prompt, leading to unauthorised access and data leakage.
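As a rough illustration of such a guardrail, the sketch below uses the Microsoft Graph API to flag files in a SharePoint document library whose sharing links are scoped to anonymous users or the whole organisation. The drive identifier and token handling are placeholders, and a production audit would add pagination, throttling handling and a least-privilege app registration.

```python
# A rough sketch of a pre-Copilot permissions sweep using the Microsoft
# Graph API: flag files in a drive whose sharing links are broader than
# intended. DRIVE_ID and the access token are placeholders; a real audit
# would handle pagination and throttling and use an appropriately scoped
# app registration (e.g. Files.Read.All / Sites.Read.All).
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"      # obtained via your normal OAuth2 flow
DRIVE_ID = "<drive-id>"       # the SharePoint document library to audit
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def broad_links(drive_id):
    """Yield (file name, link scope) for over-shared items in the drive."""
    items = requests.get(
        f"{GRAPH}/drives/{drive_id}/root/children", headers=HEADERS
    ).json().get("value", [])
    for item in items:
        perms = requests.get(
            f"{GRAPH}/drives/{drive_id}/items/{item['id']}/permissions",
            headers=HEADERS,
        ).json().get("value", [])
        for perm in perms:
            scope = perm.get("link", {}).get("scope")
            if scope in ("anonymous", "organization"):
                yield item["name"], scope

for name, scope in broad_links(DRIVE_ID):
    print(f"review sharing on '{name}' (link scope: {scope})")
```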
In addition to ensuring data/user access and data privacy controls are in place when implementing AI tools such as M365 Copilot, we would also recommend organisations:
Interact with data outputs – ensure human oversight in reviewing and validating AI-generated outputs, especially in critical tasks such as drafting legal documents or processing sensitive business data.
Ensure data encryption – Copilot requires access to data in real time, and this data is transmitted to and from the service; to mitigate the risk of interception, ensure robust data encryption is in place.
Enact strict sharing policies – without clear communication of data-sharing policies and monitoring of audit logs, employees may unintentionally share sensitive information outside the organisation or expose proprietary data to unauthorised users (a minimal audit-log check is sketched after this list).
Provide training for users – users need to understand the limitations of AI and maintain human control over critical or sensitive decisions. Regular audits and reviews of AI-assisted tasks should be implemented to ensure compliance and accountability.
Consider compliance requirements – each organisation is subject to its own organisational, industry and local compliance and data governance rules and regulations, so Microsoft 365 Copilot must be configured to meet your organisation’s specific compliance requirements, for example GDPR, HIPAA or CCPA.
Regularly audit systems – AI systems can be vulnerable to adversarial attacks (e.g., attackers injecting misleading data to manipulate AI-generated outputs). It will be important to regularly update and audit AI models for security vulnerabilities and have internal security protocols in place to monitor and respond to suspicious activity.
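To make the audit-log recommendation above more concrete, here is a minimal sketch that scans a hypothetical CSV export of audit events for external sharing. The file name and column names are assumptions for illustration only, not an actual export schema.

```python
# A minimal sketch of audit-log monitoring for risky sharing: the CSV file
# and its column names ("Operation", "User", "TargetUser") are hypothetical,
# purely to illustrate the kind of check involved.
import pandas as pd

INTERNAL_DOMAIN = "example.com"   # assumption: your organisation's domain

events = pd.read_csv("audit_log_export.csv")

# Flag sharing operations where the recipient is outside the organisation.
sharing = events[events["Operation"].str.contains("Sharing", case=False, na=False)]
external = sharing[~sharing["TargetUser"].str.endswith(f"@{INTERNAL_DOMAIN}", na=False)]

for _, row in external.iterrows():
    print(f"{row['User']} shared content externally with {row['TargetUser']}")
```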
Before organisations introduce AI like Microsoft 365 Copilot into their environment, I, like Microsoft, recommend that a strong foundation of security is built first.
More information about how to build a Zero Trust security strategy for Copilot is available from Microsoft, covering seven layers of protection in your Microsoft 365 tenant. These are: data protection, identity and access, app protection, device management, threat protection, secure collaboration in Teams, and user permissions to data.
However, if you would like to find out more about how Storm Technology can assist in securing your IT environment for the adoption of AI tools, please get in touch with our experienced security specialists.