A standard sentence in Microsoft's own Copilot documentation: Copilot can use any content the requesting user has permission to read.
Read that twice.
Most M365 environments I work in have years of accumulated permissions debt. Files in SharePoint sites whose membership stopped being audited in 2022. Teams channels that picked up extra members during a project and never had them removed. "Anyone with the link" sharing on documents people forgot they ever shared. "Everyone except external users" used as a shortcut on at least one library.
None of that was a problem yesterday. Nobody opens a deeply nested folder by accident, and the sharing list of an old SharePoint site doesn't matter if nobody's actually browsing it.
Copilot changes that. The moment a user types "summarise everything our company has on [X]", the search runs against everything they have permission to read, and Copilot returns the most relevant content it can find. If a senior salaries spreadsheet was shared with "Everyone in the company" two years ago and quietly forgotten, Copilot will surface it the moment someone asks about salary benchmarks.
Below is what I'd check, in order, before turning Copilot on for any client.
1. Audit your SharePoint and Teams sharing posture
The biggest single source of accidental exposure is broad sharing that nobody cleaned up.
The patterns that matter:
- "Anyone" links. Documents shared with "Anyone with the link" are accessible to anyone who has the link, including outside your tenant. Microsoft has a tenant-level toggle and a per-site override. Most sites don't need it on.
- "Everyone except external users." A built-in M365 group containing every internal account. It gets used as a shortcut to share with the company, then the document is forgotten. Search for it across SharePoint and OneDrive. Anything permissioned this way is effectively company-wide readable.
- Site collection membership. Old project sites have accumulated members who left the project long ago. Most tenants don't run a regular access review. Now's the time to start.
- Team membership. Teams accumulate non-obvious members: external guests, contractors, staff who've changed roles and kept access. Worth a sweep before Copilot indexes the conversations.
The Data Access Governance reports in the SharePoint admin center (part of SharePoint Advanced Management) surface a lot of this. Useful, but not magic. You still have to make the decisions.
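If you'd rather script the sweep than click through reports, Microsoft Graph exposes enough to catch the worst of it. Below is a minimal sketch that walks a document library and flags anything shared with an "anyone" link. The endpoints and the permission schema (a link facet with scope "anonymous") are the Graph v1.0 drive API; get_token() is a placeholder you'd supply yourself, for example an app registration with Files.Read.All.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def flag_anyone_links(drive_id: str, token: str) -> None:
    """Walk a drive and print every item carrying an anonymous ('anyone') link."""
    headers = {"Authorization": f"Bearer {token}"}
    # Depth-first walk starting at the drive root.
    stack = [f"{GRAPH}/drives/{drive_id}/root/children"]
    while stack:
        url = stack.pop()
        while url:  # follow @odata.nextLink paging inside each folder
            resp = requests.get(url, headers=headers)
            resp.raise_for_status()
            page = resp.json()
            for item in page.get("value", []):
                if "folder" in item:
                    stack.append(f"{GRAPH}/drives/{drive_id}/items/{item['id']}/children")
                perms = requests.get(
                    f"{GRAPH}/drives/{drive_id}/items/{item['id']}/permissions",
                    headers=headers,
                ).json()
                for perm in perms.get("value", []):
                    # 'anonymous' scope is the "Anyone with the link" case
                    if (perm.get("link") or {}).get("scope") == "anonymous":
                        print(f"ANYONE link: {item.get('webUrl')}")
            url = page.get("@odata.nextLink")
```

One Graph call per item makes this slow on a big library, so it's a spot-check tool; use the admin-center reports for the full estate.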
2. Deploy sensitivity labels and actually apply them
Sensitivity labels are M365's primary mechanism for telling Copilot "don't use this content as grounding material".
A label can be configured to exclude content from Copilot even when the user has permission to read it. In practice that means encryption which grants VIEW but not the EXTRACT usage right, so the user can still open the file while Copilot can't lift its contents into a response. That's the cleanest way to ringfence the obviously sensitive things: HR records, board papers, contracts under NDA, financial detail.
Two parts to this:
- Configure the label policy so Confidential and Highly Confidential categories exclude content from Copilot grounding.
- Get labels applied. Auto-labelling rules can apply labels based on content (TFNs, credit card numbers, named keywords). User-applied labelling works if you've trained users to do it.
The auto-labelling effort is real, and it's the part most teams underestimate. Expect to spend the bulk of the project here, not on the technical configuration. The configuration takes a day. Tuning the policies on real content, dealing with the false positives (a "TFN" pattern matches a lot of nine-digit numbers that aren't TFNs), running simulation mode for long enough to trust the rules, and getting the business to validate that the right things are being caught: that's where the time goes. Plan for weeks of iteration on a mid-size tenant before you switch policies from simulation to enforce. If you turn it on cold, you'll either over-label and break workflows, or under-label and miss the things you were trying to protect.
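To make the false-positive point concrete: a TFN isn't just nine digits, it carries a checksum, and validating that checksum is roughly what a tuned detector adds on top of the bare pattern. A sketch using the published ATO weighting algorithm; the function name and the harness are mine, not anything Purview exposes.

```python
import re

# Published ATO weights: a valid TFN's weighted digit sum is divisible by 11.
TFN_WEIGHTS = [1, 4, 3, 7, 5, 8, 6, 9, 10]

def passes_tfn_checksum(candidate: str) -> bool:
    digits = re.sub(r"[\s-]", "", candidate)
    if not re.fullmatch(r"\d{9}", digits):
        return False
    total = sum(int(d) * w for d, w in zip(digits, TFN_WEIGHTS))
    return total % 11 == 0

print(passes_tfn_checksum("123 456 782"))  # True: the well-known ATO test number
print(passes_tfn_checksum("123 456 789"))  # False: nine digits, fails the checksum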
Microsoft's Purview information protection docs are the right starting point. If you want a hand, that's the day job.
3. Tighten Conditional Access for the identity Copilot inherits
Copilot inherits the user's identity. Whatever Conditional Access policy applies to the user applies to their Copilot interactions.
That means if your CA policies allow access from any location, or don't enforce MFA on standard users, those policies cover Copilot too. Tighten them before turning Copilot on, not after:
- MFA enforced for every user, not just admins. A surprising number of tenants still have standard accounts on password-only.
- Legacy authentication blocked. Copilot won't use it, but leaving it open gives attackers a side door around the MFA and Conditional Access controls everything else here depends on.
- Compliant device policies. If Copilot can surface confidential content, the device asking should be Intune-enrolled and compliant.
This is also bare-minimum Essential Eight Maturity Level 1 territory for MFA and admin separation. If you haven't done it for compliance reasons, do it for Copilot reasons.
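The first pass over that checklist is mechanical enough to script. The sketch below pulls every Conditional Access policy from the Graph v1.0 endpoint and flags enabled policies that don't include the built-in MFA grant control. It assumes an app registration with Policy.Read.All and a get_token() helper you'd write yourself, and it won't catch policies hollowed out by user or app exclusions, so treat it as a first pass, not an audit.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def policies_missing_mfa(token: str) -> list[str]:
    """Return display names of enabled CA policies with no MFA grant control."""
    headers = {"Authorization": f"Bearer {token}"}
    resp = requests.get(f"{GRAPH}/identity/conditionalAccess/policies", headers=headers)
    resp.raise_for_status()
    flagged = []
    for policy in resp.json().get("value", []):
        if policy.get("state") != "enabled":
            continue  # skip disabled and report-only policies
        grants = (policy.get("grantControls") or {}).get("builtInControls") or []
        if "mfa" not in grants:
            flagged.append(policy["displayName"])
    return flagged

# token = get_token()  # your MSAL client-credentials helper (Policy.Read.All)
# for name in policies_missing_mfa(token):
#     print(f"No MFA grant control: {name}")
```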
4. Refresh DLP for the new interaction surface
Data Loss Prevention policies still apply to Copilot, but the surface behaves differently to the ones your existing policies were probably written for. Three things worth knowing before you assume your current DLP coverage is enough.
First, Copilot interactions are a distinct location in DLP policy scope. If your existing policies cover Exchange, SharePoint, OneDrive, and Teams but not the Copilot location, they don't apply to Copilot prompts and responses. Add it explicitly.
Second, the user experience of a DLP block in Copilot is not the same as in Outlook. In Outlook a user sees a policy tip before they send. In a Copilot chat the response is filtered or blocked after the model has generated it, which means the user sees a refusal or a redacted answer with no explanation of why. Worth knowing before users start logging tickets that say "Copilot is broken".
Third, DLP fires on the response, not the grounding content. Copilot can ground on a document that contains a TFN, then produce a summary that doesn't include the TFN, and DLP won't fire because the response is clean. The control for keeping that document out of grounding in the first place is the sensitivity label from point 2, not DLP. The two layers do different jobs and you need both.
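If the layering is hard to hold in your head, here's a toy model of the gap. Nothing in it is a real API: EXCLUDED_LABELS stands in for the label policy from point 2, and the regex stands in for a DLP sensitive-info rule inspecting the generated response.

```python
import re

EXCLUDED_LABELS = {"Confidential", "Highly Confidential"}   # label layer (point 2)
TFN_PATTERN = re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b")  # stand-in DLP rule

def label_blocks_grounding(doc_label) -> bool:
    # Layer 1: the sensitivity label keeps the document out of grounding
    # entirely, before any response exists.
    return doc_label in EXCLUDED_LABELS

def dlp_blocks_response(response: str) -> bool:
    # Layer 2: DLP inspects the generated response text, not the source.
    return bool(TFN_PATTERN.search(response))

doc = {"label": None, "text": "Employee TFN: 123 456 782, salary $187,000"}
summary = "The document covers one employee's salary arrangements."  # TFN not repeated

assert not label_blocks_grounding(doc["label"])  # unlabelled, so grounding is allowed
assert not dlp_blocks_response(summary)          # clean response, so DLP stays silent
# Net result: a sensitive source was used and neither control fired.
# Only the label layer could have kept the document out of grounding.
```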
If your DLP policies are old, or were configured for a narrower scope, this is the prompt to revisit them across the modern set of identifiers (PII, financial, health, contractual) and the modern set of locations, including the Copilot one.
5. Turn on Purview audit logging and actually review it
Copilot interactions are loggable. Every prompt, every response, every file referenced as grounding material.
Two reasons to turn it on before users start:
- Incident response. If something surfaces that shouldn't have, you need the trail to find it, scope the impact, and harden against repeat.
- Posture review. What users actually ask Copilot tells you what content needs labelling, what permission gaps remain, and where training is needed.
If you're aligned to ISO 27001, this is straightforward Annex A audit-logging territory (A.8.15 and A.5.28 specifically). If you're working toward the Essential Eight, there's nothing AI-specific in it yet, but the moment you classify Copilot as a critical system, the existing logging expectations attach to it.
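The posture-review loop in the second bullet can start with nothing fancier than an audit export. A sketch: it reads a CSV exported from Purview audit search and counts files Copilot used as grounding that carry no sensitivity label. The Operations and AuditData column names match typical audit exports, but the CopilotEventData, AccessedResources and SensitivityLabelId fields inside the JSON are assumptions to verify against your own tenant's events before trusting the numbers.

```python
import csv
import json
from collections import Counter

def unlabelled_grounding_files(export_path: str) -> Counter:
    """Count files referenced as Copilot grounding with no sensitivity label."""
    hits = Counter()
    with open(export_path, newline="", encoding="utf-8-sig") as f:
        for row in csv.DictReader(f):
            if row.get("Operations") != "CopilotInteraction":
                continue  # only Copilot events; other record types pass through
            detail = json.loads(row.get("AuditData") or "{}")
            event = detail.get("CopilotEventData", {})
            for resource in event.get("AccessedResources", []):
                if not resource.get("SensitivityLabelId"):
                    hits[resource.get("Name", "unknown")] += 1
    return hits

# The top of this list is next week's labelling backlog.
for name, count in unlabelled_grounding_files("audit_export.csv").most_common(20):
    print(f"{count:4d}  {name}")
```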
The piece this doesn't fix
Do all five and you've handled the technical side competently. There's still one thing the technical work doesn't address.
Users.
Specifically, three failure modes I see in tenants that turned Copilot on without thinking about the human layer:
- People ask leading questions, accept the first answer Copilot gives, and act on it without verifying the grounding documents. Copilot's response sounds confident regardless of whether the underlying content was correct or current.
- People assume Copilot has been "sanitised" of confidential content and treat anything it returns as safe to share. The permission model is the opposite: Copilot returns whatever the user has access to, which is often more than they realised.
- People drop confidential context into prompts (client names, financial figures, internal project codenames) without thinking about where that prompt is logged or how it might surface in a future response.
None of that is fixable with Conditional Access or DLP. It's a workforce capability problem, and it sits outside what an MSP does. We can configure the tool. We can't teach 400 people how to think about a probabilistic system that sounds authoritative regardless of whether it's right.
That layer is its own discipline. The people I've started pointing clients at for it are Pretty Agile. They're the AI-Native Charter Partner for Australia and New Zealand and they teach the staff-capability layer that sits on top of whatever IT setup we've built. The split is clean. We make sure Copilot can be turned on safely. They make sure your team gets value out of it once it is.
If you skip the workforce piece, the technical work is wasted. The cleanest M365 tenant in the world doesn't help if the people using Copilot don't understand what it's actually doing.
Short version
Microsoft 365 Copilot doesn't introduce new permission risks. It surfaces the existing ones. Every "we forgot about that file" exposure that's been quietly sitting in your tenant since 2021 becomes findable the moment someone asks a question that touches it.
Five things, in order:
- Audit and tighten SharePoint and Teams sharing.
- Deploy sensitivity labels with Copilot exclusions, and budget weeks, not days, for the auto-labelling work.
- Tighten Conditional Access (MFA, legacy auth, device compliance).
- Refresh DLP, including the Copilot location and the layering with sensitivity labels.
- Turn on Purview audit logging and review it.
Then accept that the technical work is half the job.
If you want help with the technical side, book a discovery call. For the workforce side, Pretty Agile is who I'd send you to.