10 AI cybersecurity considerations for boards

AI is reshaping how organisations operate and compete. It’s automating operations and supporting smarter decisions across the business landscape.
Yet with its promise comes increased risk, especially in data security. Every AI system brings potential vulnerabilities in how data is handled, how models behave, and how regulations are interpreted and applied.
Cybersecurity isn’t just a technical issue. It touches every part of the organisation, including the boardroom. That’s because AI decisions are strategic, and without the right oversight, innovation can undermine safety, compliance, and trust.
But AI governance isn't easy. For many boards, this remains unfamiliar ground.
Here are ten considerations to help boards evaluate the cybersecurity implications of AI and guide their organisations through responsible adoption.
What are the top 10 cybersecurity considerations for boards adopting AI?
AI is no longer experimental; it's foundational. As adoption accelerates, boards must ensure that cybersecurity keeps pace with innovation.
According to PwC’s 28th Annual Global CEO Survey, nearly half of CEOs list AI integration, including GenAI, as a top priority for the next three years. That momentum places clear pressure on boards to lead with care and foresight.
The ten considerations that follow provide boards with practical insight to support responsible, secure AI governance.
1: Data protection and privacy
AI needs data. A lot of it. And often, that data is personal, sensitive, or legally protected. Think board documents, internal communications, and strategic notes: not exactly the kind of thing you want slipping through the cracks.
When security isn’t built in from the start, AI can quickly become a liability. It doesn’t take much: a misconfigured setting, an overlooked consent form, an unclear data trail.
To help protect privacy and stay compliant, boards should:
- Make sure every AI initiative aligns with GDPR, the Swiss DPA, and any other local laws
- Push for privacy-by-design, so data protection is built in from the start
- Ask for clear governance rules: Who accesses what? What’s encrypted? What’s logged?
- Support regular reviews of how data is collected, stored, processed, and shared
You don’t need to be a technical expert. But if your AI setup isn’t protecting people’s data, it’s probably putting your organisation at risk. And that’s something the board can’t ignore.
2: Third-party vendor management
Most AI tools don’t work in isolation. They connect with external vendors such as cloud platforms, APIs, and proprietary models. Each one adds power, but also risk.
Some vendors are great on paper. But behind the polished features, there might be hidden dependencies: unclear data pathways, offshore hosting, or infrastructure you don’t control. That’s where things can get complicated, especially if something goes wrong.
To reduce third-party risk, boards should:
- Check not just what a vendor offers, but where and how your data is handled
- Look for tools that give you flexibility, rather than locking you into one provider
- Ask for clear commitments around security and response times in every contract
- Reassess vendor risk regularly, especially as your AI tools grow or change
You don’t need to micromanage vendors. But you do need to know who you’re trusting, and whether their choices could quietly become your problem.
3: AI system integrity
AI learns from the data you give it. That’s what makes it powerful and risky. If that data is flawed, or if someone tampers with it, the system can start producing odd results. Sometimes, you won’t even notice anything’s wrong until a poor decision has already been made.
These aren’t just technical glitches. They’re real risks to strategy, reputation, and trust. And if boards don’t ask the right questions, these blind spots can go unchecked for too long.
To help protect system integrity, boards should:
- Ask for regular testing that checks how AI systems behave under pressure
- Ensure there’s a way to spot strange or unexpected outputs quickly
- Support plans that make it clear how and when to turn off AI tools if needed
- Back tools that explain how decisions are made, so nothing stays hidden in a black box
It doesn’t take a deep technical dive. Just curiosity, clear roles, and the confidence to ask: “Can we trust what this system is telling us?”
4: Ethical AI use
AI should help people, not harm them. But it doesn’t always work that way. If left unchecked, AI systems can reinforce bias, treat groups unfairly, or make decisions that don’t align with your organisation’s values.
It’s not just an ethics issue. It’s also reputational, legal, and operational. When trust is lost, it’s hard to win back, and often expensive to fix.
To support ethical use of AI, boards should:
- Lead the creation of company-wide principles for how AI should and shouldn’t be used
- Make sure diverse teams are involved in building and testing AI systems
- Ask for independent reviews that look at fairness and potential discrimination
- Set clear lines of accountability for how AI decisions impact customers and employees
As KPMG points out, “Building trust in AI requires a commitment to ethical principles, transparency, and accountability.”
5: Regulatory compliance
The rules around AI are changing fast. Every few months, there’s a new proposal, a new framework, or a new region tightening its stance. For boards, this means keeping one eye on the horizon at all times.
You don’t need to become an expert in the EU AI Act or data transfer laws. But you do need to make sure your organisation isn’t flying blind, because the risks, from fines to bad press to operational setbacks, aren’t just hypothetical anymore.
To help stay on the right side of regulation, boards should:
- Ask for regular updates on the legal landscape, especially across the regions you operate in
- Involve legal and compliance teams early, ideally before rolling out any new AI tool
- Make sure there’s a paper trail: What decisions were made? Why? What risks were flagged?
Deloitte highlights the urgency: “Organisations must stay informed and proactive to ensure compliance and avoid potential penalties.”
6: Incident response planning
AI can speed things up, including when something goes wrong. A small issue can spiral quickly if no one’s prepared. And the usual playbook for cyber incidents might not cut it when AI is involved.
So the question for boards isn’t just “Do we have a plan?” It’s “Have we tested it for this?”
To make sure your organisation can respond fast and well, boards should:
- Update existing incident response plans to include AI-specific scenarios
- Run drills that explore what could go wrong and how the team would react
- Agree on who’s in charge when things escalate, including who can shut down systems if needed
- Plan how to communicate clearly with regulators, stakeholders, and the public if a breach happens
It’s not about predicting every outcome. It’s about being ready to act when the unexpected hits, because with AI, it often will.
7: Employee training and awareness
AI changes how people work. It introduces new tools, new risks, and sometimes, a bit of confusion. If employees don’t understand how these systems work, or what to watch out for, they can quickly become the weakest link in your security chain.
Not because they’re careless. Just because no one told them.
To build a stronger culture of awareness, boards should:
- Make sure training includes practical, AI-specific risks and examples
- Support regular refreshers, not once-a-year tick-box exercises
- Encourage curiosity. It should be normal to ask, “How does this actually work?”
- Track who’s been trained, what they’ve learned, and where there might be gaps
You don’t need everyone to become an AI expert. But giving people just enough understanding, and permission to ask questions, goes a long way.
8: Continuous monitoring and evaluation
AI isn’t something you set up once and forget. It changes. The data shifts. The risks evolve. And if no one’s keeping an eye on it, problems can build up quietly in the background.
That’s why ongoing monitoring matters, not just from IT, but from the board’s side too.
To stay in control, boards should:
- Ask for real-time monitoring that flags anything unusual
- Support tools that provide visibility into how the AI is behaving and what risks might be emerging
- Request regular updates on system health, threat exposure, and what’s being done about it
- Set clear expectations for reporting, including what gets escalated and when
You don’t need to see every dashboard. But you do need to know that someone is watching, and that there’s a plan when something looks off.
9: Collaboration with cybersecurity experts
Not every risk shows up on a dashboard. And sometimes, the biggest threats are the ones your internal team hasn’t seen yet.
That’s why it helps to get a second opinion, or a third. When it comes to AI, fresh eyes can catch things that slip past even the best internal teams.
To broaden your field of vision, boards should:
- Bring in external experts to pressure-test systems and spot weak points
- Join threat-sharing networks to learn what others are seeing out there
- Look at lessons from other sectors, both the wins and the failures
- Consider having external advisors or a subcommittee focused on AI and cybersecurity
You don’t have to be the expert in the room. But bringing them into the conversation? That’s a smart move.
10: Transparent communication
AI can be complex. But if the people affected by it (your teams, customers, and regulators) don’t know what’s going on, they’ll start to assume the worst. Silence rarely builds trust.
The same goes for when something goes wrong. A delayed response or vague update can make a tough situation even harder to manage.
To build trust and keep people on side, boards should:
- Be clear about how AI is being used and what safeguards are in place
- Share policies and ethical principles wherever possible, inside and outside the organisation
- Communicate quickly and openly if a breach or issue happens
- Keep an open line with regulators, shareholders, and employees
It doesn’t have to be perfect. Just honest. Because when AI is involved, people need to know someone’s in charge and that they’ll hear the truth if things don’t go to plan.
What is Sherpany’s approach to AI cybersecurity?
As EY notes, “Boards have a responsibility to understand the full range and extent of the risks and opportunities presented by AI.”
Sherpany takes this responsibility seriously, blending rigorous security protocols with a privacy-first mindset to protect what matters most: your data. Visit the Sherpany Trust Centre to learn more.
Here’s how we do it:
1. Encrypted, private, and geo-redundant
Security isn’t a feature for us. It’s the foundation. Sherpany’s infrastructure is designed to keep your board data protected, private, and always available.
Here’s what that looks like:
- End-to-end encryption: Your data is encrypted both in transit and at rest, using TLS for transport and AES-256 for storage
- Swiss-based hosting: We host data in independent, geo-redundant data centres in Switzerland, ensuring high availability and strong local protection
- Private AI model: Any AI we use runs in a closed environment, with no connection to public models or external APIs
- No vendor lock-in: Our AI tools work within Sherpany’s secure ecosystem, not through third-party platforms
There are no shortcuts. No back doors. Just a secure, private environment built to meet the standards board data deserves.
2. Built-in, certified compliance
Compliance isn’t an afterthought. It’s built into everything we do. With laws and standards evolving quickly, Sherpany gives boards the confidence that their tools are aligned from day one.
Here’s how we keep things compliant:
- ISO 27001 certified: One of the most widely recognised standards for information security management
- ISAE 3000 Type II assurance: Independent validation of our controls and processes
- Fully aligned with GDPR and Swiss data protection laws
- No exposure to the US CLOUD Act: Your data stays sovereign and shielded from foreign access
It’s not just about ticking boxes. It’s about peace of mind, built in from the start.
3. Always-on risk management
Security isn’t something we set and forget. The risks change. The tools evolve. And we believe it’s our job to stay one step ahead.
Here’s how we keep things tight:
- Continuous monitoring: We watch systems around the clock so issues can be caught early
- Independent audits: Regular reviews keep us sharp, including checks against FINMA 18/3
- Bug bounty programme: Ethical hackers help us find and fix weak spots before others do
No system is perfect. But we’re committed to being proactive, curious, and always ready.
4. Security tools to avoid human errors
Most breaches don’t start with a clever hack. They start with a simple mistake. A reused password. A misplaced phone. A file sent to the wrong person. It happens.
That’s why we build tools that help people avoid small errors turning into big problems:
- Two-factor authentication (2FA) adds an extra layer of protection
- Access controls let you decide who can view, edit, or download sensitive documents
- Remote wipe for mobile keeps your data secure even if a device is lost
We all slip up sometimes. Our job is to make sure those moments don’t become security risks.
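To make the 2FA point concrete: a common form of second factor is the time-based one-time password (TOTP) defined in RFC 6238, the six-digit code an authenticator app shows. As a minimal illustration (not a description of Sherpany's actual implementation), it can be sketched with only the Python standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Derive a time-based one-time password (RFC 6238) from a base32 secret.

    `at` is a Unix timestamp; it defaults to the current time, which is why
    the code changes every `step` (30) seconds.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of time steps since the Unix epoch.
    counter = int((time.time() if at is None else at) // step)
    # HMAC the 8-byte big-endian counter with the shared secret (RFC 4226).
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset given by the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Because both sides derive the code from a shared secret and the clock, a verifying server usually also accepts the codes for one or two adjacent time steps to tolerate clock drift.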
5. Built for regulated industries
Some sectors face tighter regulations and higher risks. Sherpany is built for both.
We provide features that help boards operate with confidence, even under strict compliance demands:
- Confidentiality labels restrict access based on document sensitivity
- Restricted actions prevent copying, downloading, or printing sensitive content
- Password-protected uploads add extra protection for shared files
It’s control where you need it, without slowing your board down.
Augment your board with AI, securely
AI can be a powerful asset, but only if it’s built on a strong foundation of trust, security, and clear governance. For boards, that means asking better questions, staying close to the risks, and making security part of every AI conversation.
These ten considerations are a place to start. They won’t solve everything, but they will help your board lead with confidence and avoid the kinds of risks that quietly grow when no one’s paying attention.
At Sherpany, we help boards embrace AI without compromising what matters most.
Want to see how it works? Book a free demo and explore how Sherpany helps you govern AI securely, clearly, and on your terms.