What Access Creep Actually Looks Like Across Three Clouds
Multicloud IAM has grown genuinely complicated: permission sprawl and identity debt accumulate with every provider you add. And the uncomfortable truth? Most access creep doesn’t announce itself. It whispers. A contractor’s service account in AWS that nobody deprovisioned. An Azure AD group still granting editor permissions to a storage account even though the project wrapped up in 2022. A GCP custom role built for a one-time data migration that now carries compute.instances.create across production resources.
This is the multicloud IAM setup that stops access creep — or more precisely, the setup that accepts creep already happened and walks you through finding and fixing it.
Here’s what I’ve personally seen in real environments. A mid-sized fintech running workloads across AWS, Azure, and GCP operated under one informal policy: “spin up whatever you need.” No approval process. No expiration dates on permissions. Within 18 months, they had 847 service accounts spread across three clouds. When the security team finally audited, they found:
- 113 AWS IAM users with AdministratorAccess who hadn’t logged in for six months
- A managed identity in Azure bound to a long-deleted App Service, still holding Key Vault secrets
- A GCP service account with roles spanning five projects, created by someone who’d left the company entirely
None of it was malicious. Just the natural entropy of identity management at scale. Each cloud runs its own permission model, its own naming convention, its own audit trail. Without a unified view, permissions compound quietly — for months, sometimes years.
The problem is so common that most teams don’t realize it exists until compliance comes knocking or an incident forces a retrospective nobody wanted to write.
Why Multi-Cloud Makes IAM Harder Than It Should Be
Single-cloud IAM is manageable. You pick a policy language, attach it to a principal, audit through one console. Multi-cloud breaks that entirely.
AWS uses role-based access control built on inline and managed policies. You assign permissions directly to a user, group, or role — granular, but it demands you think about every single action up front.
Azure RBAC works differently. Scopes come first. You define a role — built-in or custom — then assign it to a principal at a specific scope: subscription, resource group, or individual resource. The scope inheritance model trips people up constantly. A Contributor role at subscription level cascades down through everything beneath it.
GCP’s IAM is its own animal. Basic roles like Viewer, Editor, and Owner are genuinely dangerous — too broad for almost any real use case. Predefined roles are better. Custom roles require you to understand resource hierarchies deeply: projects, folders, organization nodes. Each level matters, and getting it wrong has real consequences.
The core structural problem is that there is no unified identity plane. You can’t federate a single AWS role to Azure and GCP simultaneously. You can’t write one policy that works across all three. You end up creating service accounts in each cloud — each with its own permissions, its own lifecycle, its own audit trail.
Federated identities complicate things further. You might use Azure AD as a central identity provider for AWS. Good start. But that Azure AD identity still needs cloud-specific role assignments in Azure itself, plus cross-account roles in AWS, plus separate service accounts in GCP. The permissions are distributed. The source of truth is murky.
Naming conventions diverge too. One team calls it prod-app-service-account in AWS, prod_app_svc in Azure, and prod-app-sa in GCP. An auditor trying to track a single logical identity across those three clouds hits friction immediately — and that friction is where things get missed.
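One lightweight mitigation is normalizing principal names into a canonical logical identity before correlating audit exports across clouds. A minimal sketch, assuming the (hypothetical) naming patterns above, where each cloud appends its own service-account suffix:

```python
import re

def canonical_identity(name):
    """Collapse a cloud-specific principal name into a logical identity.

    Lowercases, splits on '-', '_' and '.', and drops common per-cloud
    suffixes like 'service', 'account', 'svc', or 'sa'.
    """
    tokens = re.split(r"[-_.]+", name.lower())
    suffixes = {"service", "account", "svc", "sa"}
    core = [t for t in tokens if t not in suffixes]
    return "-".join(core)

# The three spellings of the same logical identity collapse to one key:
names = ["prod-app-service-account", "prod_app_svc", "prod-app-sa"]
print({canonical_identity(n) for n in names})  # {'prod-app'}
```

With a mapping like this applied to each cloud's export, an auditor can join the three views on one key instead of eyeballing name variants.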
How to Audit What You Actually Have Right Now
Before you fix anything, you need to see it. Not theoretically. Literally.
AWS IAM Access Analyzer is the most useful tool here, and most teams never run it. Go to the IAM console, find Access Analyzer, create an analyzer scoped to your organization. It runs continuously and flags:
- Unused roles — specifically anything older than 90 days with no recorded activity
- Overly permissive policies that grant more than they should
- Cross-account access pointing outside your organization entirely
Export the findings. You’ll see exactly which roles haven’t been touched in months. Cross-reference against your CMDB or service registry. If a role belongs to a service that’s been decommissioned, you just found your first target.
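The cross-referencing step can be scripted once the findings are exported. A sketch under two stated assumptions: the export has a `resource` column, and roles follow a `<service>-<purpose>-role` naming convention (both are illustrative, not Access Analyzer's actual export schema):

```python
import csv
import io

def stale_role_targets(findings_csv, active_services):
    """Return roles from an exported findings CSV whose owning service
    no longer appears in the service registry / CMDB."""
    reader = csv.DictReader(io.StringIO(findings_csv))
    targets = []
    for row in reader:
        # Assumes roles are named '<service>-<purpose>-role'.
        service = row["resource"].split("-")[0]
        if service not in active_services:
            targets.append(row["resource"])
    return targets

export = """resource,finding
billing-export-role,UnusedRole
checkout-api-role,UnusedRole
"""
print(stale_role_targets(export, active_services={"checkout"}))
# ['billing-export-role']
```

Anything this returns is a role whose owning service is gone: your first deprecation targets.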
Look specifically for wildcards. A policy granting s3:* or ec2:* is a permission disaster waiting for a bad day. Access Analyzer flags these, but scan manually too. Run through this:
- Open the IAM console
- Filter by policies with `*` in the actions field
- Note the creation date — old wildcards are almost always technical debt nobody wanted to own
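The manual scan above can be sketched as a small check over policy documents. This parses the standard AWS policy JSON shape (handling both the list and single-string forms of `Action`) and flags wildcard actions:

```python
import json

def wildcard_actions(policy_doc):
    """Flag Action entries containing '*' in an AWS IAM policy document."""
    policy = json.loads(policy_doc)
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # single-statement shorthand
        statements = [statements]
    flagged = []
    for stmt in statements:
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        flagged.extend(a for a in actions if "*" in a)
    return flagged

doc = json.dumps({
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": ["s3:*", "s3:GetObject"], "Resource": "*"},
        {"Effect": "Allow", "Action": "logs:PutLogEvents", "Resource": "*"},
    ],
})
print(wildcard_actions(doc))  # ['s3:*']
```

Point it at the output of `aws iam get-policy-version` across your account and the wildcard inventory writes itself.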
Azure AD access reviews are the Azure equivalent. Navigate to Azure AD, then Identity Governance → Access reviews. Create a review scoped to a resource group or subscription. Assign reviewers — ideally resource owners or team leads, not security staff. Ask one simple question: does this principal still need this role?
The review process forces real conversation. People actually think about it instead of assuming permissions are permanent by default.
Pay attention to:
- Assignments older than one year with no documented business reason
- Service principals bound to deleted resources — visible directly through the portal
- Groups with excessive nesting — a user in Group A, which sits in Group B, which holds Contributor on a production subscription, is a privilege escalation path waiting to be discovered
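The nesting problem is just a graph walk, and scripting it makes the inherited roles visible. A sketch over two assumed input shapes (a membership map and a group-to-role-assignment map, which you would populate from a directory export):

```python
def escalation_paths(memberships, role_assignments, user):
    """Walk transitive group membership and report every role the user
    inherits through nesting.

    memberships: {member: [groups it belongs to]}
    role_assignments: {group: [(role, scope), ...]}
    """
    inherited, frontier, seen = [], [user], set()
    while frontier:
        node = frontier.pop()
        for group in memberships.get(node, []):
            if group in seen:  # guard against membership cycles
                continue
            seen.add(group)
            for role, scope in role_assignments.get(group, []):
                inherited.append((group, role, scope))
            frontier.append(group)
    return inherited

# alice -> group-a -> group-b, and group-b holds Contributor on prod
memberships = {"alice": ["group-a"], "group-a": ["group-b"]}
assignments = {"group-b": [("Contributor", "/subscriptions/prod")]}
print(escalation_paths(memberships, assignments, "alice"))
# [('group-b', 'Contributor', '/subscriptions/prod')]
```

Run it per user and anything surfacing a production scope two or more hops away deserves a review question of its own.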
GCP Policy Analyzer — found in the Cloud Console under Security — shows you who has what across projects and folders. Unlike the other two clouds, GCP surfaces effective permissions clearly. Run a query like “list all principals with Editor role across all projects.” Export to CSV. Review it somewhere quiet.
GCP-specific red flags to watch for:
- Service accounts carrying Owner role at organization level — this should not exist
- Custom roles granting iam.serviceAccountKeys.create, which lets someone extract and exfiltrate credentials
- Service accounts with multiple active keys — stale keys are a liability you’re paying for without knowing it
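All three red flags can be checked mechanically once the bindings and key inventory are exported. A sketch assuming three illustrative input shapes (tuples of principal/role/level, a map of custom roles to their permissions, and a key count per service account):

```python
def gcp_red_flags(bindings, custom_role_perms, key_counts):
    """Scan exported IAM data for the three red-flag patterns above.

    bindings: list of (principal, role, level) tuples
    custom_role_perms: {custom role name: set of permissions it grants}
    key_counts: {service account email: count of active keys}
    """
    flags = []
    for principal, role, level in bindings:
        if role == "roles/owner" and level == "organization":
            flags.append(f"{principal}: Owner at organization level")
        if "iam.serviceAccountKeys.create" in custom_role_perms.get(role, set()):
            flags.append(f"{principal}: {role} can mint service account keys")
    for sa, count in key_counts.items():
        if count > 1:
            flags.append(f"{sa}: {count} active keys")
    return flags

bindings = [("sa-legacy@proj.iam.gserviceaccount.com", "roles/owner", "organization")]
key_counts = {"sa-etl@proj.iam.gserviceaccount.com": 3}
print(gcp_red_flags(bindings, {}, key_counts))
```

Feed it the Policy Analyzer CSV export and the output is your prioritized fix list.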
Run this audit across all three clouds. Spend a full week on it. You’ll find hundreds of permissioned principals you genuinely forgot existed.
Fixing Access Creep Without Breaking Everything
Frustrated by hundreds of stale roles staring back at them, most teams panic and start revoking aggressively. This is the mistake. Revoke too fast and you’ll take down a batch job, a microservice, or some silent dependency nobody ever documented — usually at 2 a.m. on a Tuesday. Don’t make that mistake.
The safe sequence looks like this:
First, enforce least privilege on new principals only. Don’t touch the existing mess yet. From today forward, every new service account gets the narrowest permission possible. In AWS, scope IAM policies down to specific resources and actions. In Azure, use custom roles instead of built-in Editor. In GCP, create custom roles listing only specific resources needed.
Second, consolidate roles. You don’t need seventeen different service accounts if five role types cover all the actual work. Identify the common patterns. “Needs to read from S3 and write logs to CloudWatch” is one role — reuse it instead of creating a new one every time a service needs the same access.
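Identifying those common patterns is a grouping exercise you can automate: bucket service accounts by their exact permission set and look for duplicates. A minimal sketch (the account names and permission strings are illustrative):

```python
from collections import defaultdict

def consolidation_candidates(accounts):
    """Group service accounts by their exact permission set; any group
    with more than one member can likely share a single role.

    accounts: {account name: iterable of permission strings}
    """
    by_perms = defaultdict(list)
    for name, perms in accounts.items():
        by_perms[frozenset(perms)].append(name)
    return [sorted(group) for group in by_perms.values() if len(group) > 1]

accounts = {
    "etl-a": {"s3:GetObject", "logs:PutLogEvents"},
    "etl-b": {"s3:GetObject", "logs:PutLogEvents"},
    "admin-x": {"iam:*"},
}
print(consolidation_candidates(accounts))  # [['etl-a', 'etl-b']]
```

Exact-match grouping is deliberately conservative; near-duplicate permission sets are also consolidation candidates, but those merit a human look.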
Third, deprecate gradually. Pick one unused role. Mark it deprecated in your IaC or a shared tracking doc. Alert the team. Wait two weeks. If nothing breaks, remove it. One role at a time. Boring, yes — but safe.
Fourth, sunset old service accounts. For AWS, use the aws iam get-access-key-last-used API to find keys unused in 180 days. Disable them first — don’t delete immediately. Wait a week. Delete. Apply the same approach to Azure managed identities and GCP service account keys.
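The filtering step after pulling last-used timestamps is simple enough to sketch. This assumes you've already collected a map of key IDs to last-used datetimes (the key IDs here are fictional placeholders, and `None` means the key was never used):

```python
from datetime import datetime, timedelta, timezone

def stale_keys(last_used, threshold_days=180, now=None):
    """Return key IDs unused for threshold_days or more.

    last_used: {key id: datetime of last use, or None if never used}
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=threshold_days)
    return [k for k, ts in last_used.items() if ts is None or ts < cutoff]

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
usage = {
    "key-old": datetime(2023, 9, 1, tzinfo=timezone.utc),
    "key-hot": datetime(2024, 5, 20, tzinfo=timezone.utc),
    "key-never": None,
}
print(stale_keys(usage, now=now))  # ['key-old', 'key-never']
```

Never-used keys are deliberately treated as stale: a key that has existed for months with zero activity is exactly the liability described above.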
Fifth, implement Just-in-Time access for high-risk operations. Instead of granting a service account permanent admin access, grant it for 15 minutes when actually needed. AWS handles this via IAM roles with duration limits. Azure offers Privileged Identity Management. GCP has custom conditions. It’s a longer play, but it eliminates standing privileges that sit unused — the exact thing that gets exploited.
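On the AWS side, STS `AssumeRole` accepts a `DurationSeconds` between 900 seconds (15 minutes) and the role's configured maximum session duration (which can be raised to 12 hours). A small sketch of the clamping logic a JIT broker would apply before calling STS:

```python
def jit_session_seconds(requested_minutes, role_max_seconds=3600):
    """Clamp a requested just-in-time session length to what STS allows.

    AssumeRole accepts DurationSeconds between 900 (15 min) and the
    role's maximum session duration (3600 by default, up to 43200).
    """
    requested = requested_minutes * 60
    return max(900, min(requested, role_max_seconds))

print(jit_session_seconds(15))   # 900  -- the 15-minute grant from above
print(jit_session_seconds(1))    # 900  -- floor enforced
print(jit_session_seconds(120))  # 3600 -- capped at the role max
```

The point of the floor and cap is that standing privilege never enters the picture: every credential the broker hands out expires on its own.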
The most common failure I’ve seen: teams revoke a permission, a batch job fails silently, nobody notices for hours, and then everyone scrambles to restore access under pressure. Prevent this by:
- Adding monitoring to any service using the role you’re changing
- Keeping a rollback plan — the policy you removed, ready to reapply in under five minutes
- Testing removal in non-production first, every time
Keeping It Clean After the Initial Fix
You’ve audited. You’ve fixed the worst of it. Now the goal is making sure it doesn’t quietly rebuild itself over the next eighteen months.
Automate the reviews. Block out a quarterly audit on a shared calendar — make it an actual recurring event. AWS: run Access Analyzer monthly. Azure: run access reviews on all subscriptions every quarter. GCP: export IAM bindings via CLI and diff them against a saved baseline. Assign ownership to a specific person. It needs to be someone’s job, not everyone’s vague responsibility.
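The GCP diff-against-baseline step reduces to set arithmetic over exported bindings. A sketch assuming bindings are flattened into (principal, role, resource) tuples before comparison:

```python
def binding_drift(baseline, current):
    """Diff two collections of (principal, role, resource) bindings.

    Returns (added, removed) relative to the saved baseline: the
    'diff against a baseline' step of a quarterly review.
    """
    baseline, current = set(baseline), set(current)
    return sorted(current - baseline), sorted(baseline - current)

baseline = {("alice@example.com", "roles/viewer", "proj-a")}
current = {
    ("alice@example.com", "roles/viewer", "proj-a"),
    ("bot@proj-a.iam.gserviceaccount.com", "roles/editor", "proj-a"),
}
added, removed = binding_drift(baseline, current)
print(added)    # [('bot@proj-a.iam.gserviceaccount.com', 'roles/editor', 'proj-a')]
print(removed)  # []
```

Anything in `added` that didn't come through a reviewed change is drift; commit the new state as the next baseline once it's been accounted for.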
Use Infrastructure as Code to enforce permissions. Terraform and Pulumi let you define all roles and assignments in version control — every permission grant becomes a code change, reviewable and traceable. Pair that with policy-as-code tools like OPA (Open Policy Agent) to block dangerous permission patterns before they’re ever deployed. A simple example: block any role granting iam.* actions at the pipeline level.
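In a real pipeline that check would be expressed as a Rego rule evaluated by OPA; the logic itself is a one-liner, sketched here in Python against an assumed role-definition shape with a `permissions` list:

```python
def iam_violations(role_def):
    """Reject any role definition granting iam.* actions.

    Returns the offending permissions so the pipeline can fail with a
    useful message instead of a bare denial.
    """
    return [p for p in role_def.get("permissions", [])
            if p.startswith("iam.")]

risky = {"name": "migration-helper",
         "permissions": ["compute.instances.create", "iam.roles.update"]}
print(iam_violations(risky))  # ['iam.roles.update']
```

Wire the equivalent rule into the plan/apply stage and dangerous grants get rejected in code review, not discovered in the next audit.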
Alert on permission changes. Configure CloudTrail in AWS, Activity Log in Azure, and Cloud Audit Logs in GCP to surface new role assignments in near-real-time. Create alerts for wildcards, cross-account trusts, or service accounts being added to sensitive groups. Route these into a SIEM or even simple log aggregation — anything that creates a visible signal.
Tools like Veza or Brainwave are worth evaluating for continuous monitoring, because multicloud IAM demands a unified view across identity sources: manually correlating AWS, Azure, and GCP IAM configs every month doesn’t scale past about 50 service accounts. They’re not cheap — Veza runs roughly $40,000–$100,000+ annually depending on scale — but they replace manual quarterly audits with something that actually catches drift as it happens. In my experience, a unified tool absorbs the cross-cloud naming divergence that manual correlation keeps tripping over. Your mileage may vary, but the math usually favors automation past a certain threshold.
A realistic maintenance cadence: automated alerts monthly, manual review quarterly, deep audit annually. Under 50 service accounts total, quarterly is probably enough. Over 200, monthly is the floor.
That’s the real payoff of a structured IAM approach: it turns an invisible, compounding problem into something you can actually manage. Multicloud IAM never stays clean on its own. But with the right audit habits, safe remediation steps, and continuous monitoring, access creep stops being an audit surprise and becomes a known, managed risk.