Most cloud breaches today are not the result of sophisticated zero-day exploits. They are the result of misconfigurations — permissions set too wide, tokens left active, storage buckets left open. The 2024 Verizon DBIR confirmed that misconfiguration and misuse account for a significant portion of cloud security incidents, and the trend has only accelerated.
What has changed in 2026 is the attack surface. Nearly 90% of organizations now run multi-cloud environments, meaning a single blind spot in one cloud can become the entry point for a cross-cloud incident. More importantly, the rise of AI-native infrastructure has introduced an entirely new category of misconfigurations that most security teams are not yet equipped to detect or remediate.
This article covers the 15 most critical cloud misconfigurations as of April 2026 — with a heavier focus on AI-related risks since that is where the new threat frontier lies. For each one, you will find what the risk is, why it matters, and the specific commands to fix it.
| # | Misconfiguration | Severity | Ease of Fix | Category |
|---|---|---|---|---|
| 1 | Exposed AI Training Data & Embeddings | Critical | Medium | AI & Data Pipelines |
| 2 | Unsecured AI Inference APIs | Critical | Easy | AI & Data Pipelines |
| 3 | Shadow AI Instances | High | Hard | AI & Data Pipelines |
| 4 | Lack of Data Provenance for AI | High | Hard | AI & Data Pipelines |
| 5 | Non-Human Identity (NHI) Sprawl | Critical | Hard | Identity & Access |
| 6 | Improper Offboarding of Automated Tokens | High | Medium | Identity & Access |
| 7 | Over-Privileged Service Accounts | Critical | Easy | Identity & Access |
| 8 | Secrets Leaked in CI/CD Logs | Critical | Medium | Identity & Access |
| 9 | Publicly Accessible Storage Buckets | Critical | Easy | Infrastructure |
| 10 | Disabled MFA on Privileged Accounts | Critical | Easy | Infrastructure |
| 11 | Open Management Ports (SSH/RDP) | High | Easy | Infrastructure |
| 12 | Unencrypted Snapshots and Backups | High | Easy | Infrastructure |
| 13 | Lack of Multi-Cloud Logging | High | Medium | Visibility & Governance |
| 14 | Misconfigured K8s Admission Controllers | Critical | Medium | Visibility & Governance |
| 15 | Shadow IT / Zombie Resources | High | Medium | Visibility & Governance |
Category 1: AI & Data Pipeline Misconfigurations
AI infrastructure has moved fast, and security has not always kept up. These four misconfigurations represent the highest-growth risk area in 2026 — and the one most likely to catch teams off guard.
1. Exposed AI Training Data & Embeddings
Teams building RAG (Retrieval-Augmented Generation) pipelines regularly store sensitive documents, PII, and proprietary data in S3 buckets, Azure Blob containers, or GCS buckets to feed their models. When those storage locations are not properly locked down, the training data — and the embeddings derived from it — become publicly accessible.
Why it matters: An attacker with access to your embeddings can reverse-engineer sensitive business data or use it to craft targeted prompt injection attacks against your AI systems.
How to Fix?
AWS:
aws s3api put-bucket-acl --bucket <ai-data-bucket> --acl private
aws s3api put-public-access-block --bucket <ai-data-bucket> \
--public-access-block-configuration \
BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
Azure:
az storage account update --name <storage-account> --resource-group <rg> \
--allow-blob-public-access false
GCP:
gcloud storage buckets update gs://<ai-data-bucket> --public-access-prevention
2. Unsecured AI Inference APIs
When teams deploy LLM endpoints — whether via Amazon Bedrock, Azure OpenAI, or Vertex AI — they sometimes expose the inference API without authentication or rate-limiting. The result is an open endpoint that anyone can query, running up your cloud bill and exposing your model to abuse.
Why it matters: An unauthenticated inference endpoint can be used for prompt injection attacks, model extraction, or simply running up thousands of dollars in compute costs.
How to Fix?
AWS — Bedrock calls are authenticated through IAM by default; make sure no unauthenticated proxy fronts the endpoint, and enable invocation logging so abuse is visible:
aws bedrock put-model-invocation-logging-configuration --logging-config \
'{"cloudWatchConfig":{"logGroupName":"/aws/bedrock/invocations","roleArn":"<role-arn>","largeDataDeliveryS3Config":{"bucketName":"<bucket>"}},"s3Config":{"bucketName":"<bucket>"},"embeddingDataDeliveryEnabled":true,"imageDataDeliveryEnabled":true,"textDataDeliveryEnabled":true}'
Azure — Disable key-based (local) auth on Azure OpenAI so only Microsoft Entra ID tokens are accepted:
az resource update --ids <openai-resource-id> \
--set properties.disableLocalAuth=true
GCP — Vertex AI endpoints are gated by IAM; labels alone do not restrict access, so make sure no public principals hold prediction roles:
gcloud projects remove-iam-policy-binding <project-id> \
--member='allUsers' --role='roles/aiplatform.user'
3. Shadow AI Instances
A development team spins up a third-party GenAI tool, feeds it internal Slack messages and source code to ‘boost productivity’, and does not tell anyone in security. This is Shadow AI — unmanaged, unvetted AI deployments that operate outside your security controls.
Why it matters: Data submitted to an unvetted GenAI tool may be used to train third-party models, violating data residency requirements and potentially leaking IP or source code.
How to Fix?
Detection: Use CASB (Cloud Access Security Broker) tools to identify unauthorized AI service usage in network traffic. On AWS, use VPC Flow Logs + GuardDuty anomaly detection. On Azure, use Defender for Cloud Apps. On GCP, use Cloud Armor + VPC Service Controls.
AWS — Use a Service Control Policy (SCP) to deny AI service actions to all but approved roles:
aws organizations create-policy --name deny-unapproved-ai --type SERVICE_CONTROL_POLICY \
--description 'Restrict AI services to approved roles' --content file://deny-unapproved-ai.json
Azure — Block unapproved SaaS AI apps with Defender for Cloud Apps session policies (configured in the portal; Conditional Access has no native az CLI command).
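A sketch of what the SCP file could contain (the file name and the approved-role naming pattern are assumptions; adjust both to your environment):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAIServicesOutsideApprovedRoles",
      "Effect": "Deny",
      "Action": ["bedrock:*", "sagemaker:*"],
      "Resource": "*",
      "Condition": {
        "ArnNotLike": { "aws:PrincipalArn": "arn:aws:iam::*:role/approved-ai-*" }
      }
    }
  ]
}
```

Attach the policy to the organizational units where AI workloads are not sanctioned; principals matching the approved-role pattern keep access.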
4. Lack of Data Provenance for AI
Many teams cannot answer a simple question: where did this training data come from? Without data lineage tracking, you cannot detect if a malicious actor has tampered with your training dataset — a technique known as data poisoning. Corrupted training data produces a corrupted model, with outputs that subtly serve an attacker’s goals.
Why it matters: Data poisoning is one of the hardest attacks to detect because the model behaves normally until it encounters specific trigger inputs.
How to Fix?
- Enable object versioning and integrity checksums on all AI training data storage.
- Use AWS Glue Data Catalog, Microsoft Purview, or GCP Dataplex to track data lineage end-to-end.
- Enforce immutable storage policies so training datasets cannot be overwritten silently.
AWS — Enable S3 Object Lock for training data:
aws s3api put-object-lock-configuration --bucket <training-data-bucket> \
--object-lock-configuration \
'{"ObjectLockEnabled":"Enabled","Rule":{"DefaultRetention":{"Mode":"COMPLIANCE","Days":90}}}'
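Beyond cloud-side controls, integrity can be tracked directly. A minimal sketch, assuming the training dataset is synced to a local directory (paths and function names are illustrative), builds a checksum manifest once and re-verifies it before each training run:

```shell
# Build a sha256 manifest over every file in a dataset directory.
make_manifest() {
  find "$1" -type f -print0 | sort -z | xargs -0 sha256sum > "$2"
}

# Re-verify the dataset against the manifest; a non-zero exit means
# at least one file changed since the manifest was written.
verify_manifest() {
  sha256sum --check --quiet "$1"
}
```

Store the manifest outside the dataset (e.g. in a separate locked bucket) so a tampered dataset cannot carry a matching manifest with it.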
Category 2: Identity & Access Management
IAM remains the #1 risk category in cloud security. The difference in 2026 is that non-human identities now vastly outnumber human ones — and they receive far less oversight.
5. Non-Human Identity (NHI) Sprawl
Service accounts, API keys, OAuth tokens, and CI/CD credentials now outnumber human users by roughly 80:1 in mature cloud environments. Most of them are over-privileged, many are unused, and almost none have automatic expiry. NHI sprawl is the quiet risk that most IAM programs miss.
How to Fix?
AWS — Identify unused IAM roles and access keys:
aws iam generate-credential-report && aws iam get-credential-report
aws iam get-role --role-name <role-name> --query 'Role.RoleLastUsed' # list-roles omits RoleLastUsed, so check per role
Azure — Audit service principals:
az ad sp list --all --query '[].{Name:displayName, AppId:appId, Created:createdDateTime}'
GCP — List service accounts and their key age:
gcloud iam service-accounts list --format='table(email,disabled)'
gcloud iam service-accounts keys list --iam-account=<sa-email>
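To turn the AWS credential report into an actionable list, a small filter helps. This is a sketch (the column name follows the real credential-report CSV header; the cutoff date is whatever your rotation policy demands) that prints users whose first access key was last used before the cutoff:

```shell
# Print users whose access key 1 was last used before an ISO cutoff date.
# Reads credential-report CSV on stdin; ISO timestamps compare lexically.
flag_stale_keys() {
  cutoff="$1"
  awk -F, -v cutoff="$cutoff" '
    NR == 1 { for (i = 1; i <= NF; i++)
                if ($i == "access_key_1_last_used_date") col = i; next }
    col && $col != "N/A" && substr($col, 1, 10) < cutoff { print $1 }
  '
}

# Usage (the report body is base64-encoded):
# aws iam get-credential-report --query Content --output text \
#   | base64 -d | flag_stale_keys 2026-01-01
```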
6. Improper Offboarding of Automated Tokens
When a developer leaves or changes roles, their human account is disabled — but the OAuth tokens, API keys, and service account credentials they created keep working. These orphaned credentials become live attack vectors with no active owner monitoring them.
How to Fix?
Tie all automated tokens to a team or role identity, not an individual human.
AWS — Deactivate old access keys immediately:
aws iam update-access-key --access-key-id <key-id> --status Inactive --user-name <username>
Azure — Revoke app tokens on offboarding:
az ad app credential delete --id <app-id> --key-id <key-id>
GCP — Delete orphaned service account keys:
gcloud iam service-accounts keys delete <key-id> --iam-account=<sa-email>
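Finding orphaned or aged keys in bulk is easier with a filter. One option, sketched here under the assumption that the key list is exported as `gcloud iam service-accounts keys list --iam-account=<sa-email> --format='csv(name,validAfterTime)'`, flags any key created before a cutoff date:

```shell
# Print key names created before an ISO cutoff date.
# Expects a csv(name,validAfterTime) listing on stdin; skips the header row.
flag_old_keys() {
  cutoff="$1"
  awk -F, -v cutoff="$cutoff" 'NR > 1 && substr($2, 1, 10) < cutoff { print $1 }'
}
```

Feed the output into `gcloud iam service-accounts keys delete` once each key's owner confirms it is unused.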
7. Over-Privileged Service Accounts
Giving an application service account Owner, Admin, or Editor rights because it is ‘easier’ is one of the most common and most dangerous misconfigurations in cloud. If that service account is compromised, the attacker has full control of your environment.
How to Fix?
AWS — Use IAM Access Analyzer to right-size policies:
aws accessanalyzer start-policy-generation --policy-generation-details \
'{"principalArn":"<role-arn>"}'
Azure — Review and downgrade over-privileged service principals:
az role assignment list --assignee <sp-object-id> --all
az role assignment delete --assignee <sp-object-id> --role Owner --scope /subscriptions/<sub-id>
GCP — Use IAM Recommender to reduce permissions:
gcloud recommender recommendations list \
--recommender=google.iam.policy.Recommender --location=global --project=<project-id>
8. Secrets Leaked in CI/CD Build Logs
A developer accidentally prints an environment variable in a GitHub Actions workflow. The log is public. The API key is now exposed. This happens more often than most teams realize — plaintext secrets in Jenkins, GitHub Actions, Azure DevOps, or Cloud Build logs are a persistent risk.
How to Fix?
Never pass secrets as plain environment variables in CI/CD pipelines. Always use secrets managers.
AWS — Use Secrets Manager in pipelines:
aws secretsmanager get-secret-value --secret-id <secret-name> --query SecretString --output text
Azure — Reference Key Vault secrets in DevOps pipelines:
az keyvault secret show --name <secret-name> --vault-name <vault-name> --query value
GCP — Use Secret Manager in Cloud Build:
gcloud secrets versions access latest --secret=<secret-name>
Enable secret scanning in GitHub (push protection) or Azure DevOps Advanced Security to catch leaks before they reach logs.
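For GitHub Actions specifically, a hypothetical workflow step (secret ID and variable names are illustrative) shows the pattern: fetch the secret at run time and register it with the log masker, rather than declaring it as a plain env var that any `echo` can leak:

```yaml
steps:
  - name: Fetch API key at run time (never echo it)
    run: |
      API_KEY=$(aws secretsmanager get-secret-value \
        --secret-id my-app/api-key --query SecretString --output text)
      echo "::add-mask::$API_KEY"            # GitHub redacts the value in all logs
      echo "API_KEY=$API_KEY" >> "$GITHUB_ENV"
```

The `::add-mask::` command ensures that even an accidental print of the variable appears as `***` in the build log.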
Category 3: Infrastructure Hygiene
These are the classics — misconfigurations that have existed for years and still appear in breach reports every quarter. They are straightforward to fix, which makes leaving them open inexcusable.
9. Publicly Accessible Storage Buckets
An open S3 bucket, Azure Blob container, or GCS bucket is still one of the most common causes of cloud data breaches. Automated scanners find them within minutes of creation.
AWS:
aws s3api put-public-access-block --bucket <bucket-name> \
--public-access-block-configuration \
BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
Azure:
az storage account update --name <storage-account> --resource-group <rg> \
--allow-blob-public-access false
GCP:
gcloud storage buckets update gs://<bucket-name> --uniform-bucket-level-access --public-access-prevention
10. Disabled MFA on Privileged Accounts
Root accounts, Global Administrators, and Org Admins operating without MFA are a single phishing email away from full environment compromise. No further explanation needed.
AWS — Enforce MFA via IAM policy condition:
aws iam create-policy --policy-name RequireMFA --policy-document file://require-mfa-policy.json
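A trimmed sketch of what `require-mfa-policy.json` could contain, following AWS's documented deny-unless-MFA pattern (attach it to human user groups, not to service roles, or you will break automation):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllExceptSelfServiceIfNoMFA",
      "Effect": "Deny",
      "NotAction": [
        "iam:ChangePassword",
        "iam:GetUser",
        "iam:*MFADevice*",
        "sts:GetSessionToken"
      ],
      "Resource": "*",
      "Condition": { "BoolIfExists": { "aws:MultiFactorAuthPresent": "false" } }
    }
  ]
}
```

The `NotAction` carve-out leaves users just enough access to enroll an MFA device; everything else is denied until they sign in with MFA.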
Azure — Enforce MFA via Conditional Access (there is no native az command for this; create the policy in the Entra admin center or through Microsoft Graph):
az rest --method GET --uri https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies # audit existing policies
GCP — Enforce 2-Step Verification at the org level in the Google Workspace / Cloud Identity Admin console (Security > Authentication > 2-Step Verification); gcloud has no flag for this, so treat it as a console-level control and audit it regularly.
11. Open Management Ports (SSH/RDP)
Ports 22 and 3389 open to 0.0.0.0/0 are scanned and probed continuously. Use cloud-native session management instead.
AWS — Remove open SSH/RDP rules and use Session Manager:
aws ec2 revoke-security-group-ingress --group-id <sg-id> --protocol tcp --port 22 --cidr 0.0.0.0/0
Azure — Enable JIT VM access via Defender for Cloud (the policy itself is created in the portal or through the Defender REST API; audit existing policies from the CLI):
az security jit-policy list
GCP — Use IAP (Identity-Aware Proxy) for SSH/RDP:
gcloud compute instances add-iam-policy-binding <instance> \
--member='user:<email>' --role='roles/iap.tunnelResourceAccessor'
12. Unencrypted Snapshots and Backups
Unencrypted EBS snapshots, Azure Managed Disk backups, or GCP persistent disk snapshots can be copied and read by anyone with access to your cloud account — or by an attacker who gains temporary access.
AWS — Encrypt existing EBS snapshot:
aws ec2 copy-snapshot --source-snapshot-id <snap-id> --source-region <region> \
--encrypted --kms-key-id <key-id>
Azure — Enforce encryption on managed disks:
az disk update --name <disk-name> --resource-group <rg> \
--encryption-type EncryptionAtRestWithCustomerKey --disk-encryption-set <des-id>
GCP — Create encrypted snapshot:
gcloud compute snapshots create <snapshot-name> --source-disk=<disk> \
--source-disk-zone=<zone> --kms-key=<key-resource-id>
Category 4: Visibility & Governance Gaps
13. Lack of Multi-Cloud Logging
Watching your AWS CloudTrail while staying blind to Azure Activity Logs and GCP Audit Logs is not security — it is tunnel vision. Attackers routinely pivot between clouds precisely because cross-cloud visibility is weak.
How to Fix?
AWS — Enable CloudTrail in all regions with S3 log delivery and CloudWatch integration:
aws cloudtrail create-trail --name org-trail --s3-bucket-name <log-bucket> \
--is-multi-region-trail --enable-log-file-validation
Azure — Export Activity Logs and Entra Sign-in Logs to a centralized Log Analytics Workspace or Microsoft Sentinel:
az monitor diagnostic-settings create --name central-logs --resource <resource-id> \
--workspace <workspace-id> --logs '[{"category":"Administrative","enabled":true}]'
GCP — Enable org-level Audit Logs and sink to centralized Cloud Logging or a SIEM:
gcloud logging sinks create org-sink <destination> --organization=<org-id> --include-children
14. Misconfigured Kubernetes Admission Controllers
Running containers as root in EKS, AKS, or GKE — because no admission controller blocks it — is one of the most exploitable misconfigurations in cloud-native environments. A container escape from a root-privileged pod can mean full node compromise.
How to Fix?
Enable and enforce Pod Security Admission (PSA) at the namespace level with the ‘restricted’ profile.
AWS EKS:
kubectl label namespace <namespace> pod-security.kubernetes.io/enforce=restricted
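The same enforcement can be pinned in version control as a namespace manifest (the namespace name is illustrative); `audit` and `warn` surface violations in logs and kubectl output before `enforce` starts rejecting pods:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ml-workloads                              # illustrative namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
```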
Azure AKS — Enable Azure Policy add-on for Kubernetes:
az aks update --name <cluster-name> --resource-group <rg> --enable-addons azure-policy
GCP GKE — PodSecurityPolicy was removed in Kubernetes 1.25, so the legacy --enable-pod-security-policy flag no longer works; apply the same PSA namespace labels as on EKS, or enable Policy Controller:
gcloud container fleet policycontroller enable --memberships=<membership>
15. Shadow IT and Zombie Resources
Test environments, forgotten dev instances, and untagged resources accumulate in every cloud environment. They rarely have current security patches, they often have permissive IAM settings from their original ‘temporary’ deployment, and nobody is watching them.
How to Fix?
AWS — Use AWS Config with a required-tags rule to flag untagged resources:
aws configservice put-config-rule --config-rule \
'{"ConfigRuleName":"required-tags","Source":{"Owner":"AWS","SourceIdentifier":"REQUIRED_TAGS"},"InputParameters":"{\"tag1Key\":\"owner\"}"}'
Azure — Use Azure Policy to enforce tagging and deny untagged resource creation:
az policy assignment create --name require-tags --policy <built-in-policy-id> \
--scope /subscriptions/<sub-id>
GCP — Use Resource Manager labels and Organization Policy constraints to enforce labeling:
gcloud resource-manager tags bindings create --tag-value=<value-name-id> --parent=<resource-name>
Strategic Prevention: Moving Upstream
Fixing misconfigurations after they are in production is reactive. The goal is to catch them earlier — ideally before code reaches your cloud environment.
- Compliance as Code: Use CSPM tools — AWS Security Hub, Microsoft Defender for Cloud, or GCP Security Command Center — to continuously evaluate your environment against security benchmarks (CIS, NIST, PCI-DSS). Treat failed checks as code defects, not operational tasks.
- Shift-Left Detection: Integrate tools like Checkov, Trivy, or Semgrep into your CI/CD pipelines to catch IAM misconfigurations and secrets in Terraform, CloudFormation, or Bicep templates before they are deployed. A misconfiguration caught in the IDE costs nothing to fix. The same misconfiguration caught after a breach costs millions.
- AI-Specific Posture: As AI workloads mature, expect dedicated AI Security Posture Management (AI-SPM) tooling to emerge as a category. For now, apply standard CSPM controls to AI data stores, enforce authentication on all inference endpoints, and track data lineage as a first-class security concern.
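As a concrete shift-left example, a Terraform definition like this hypothetical one (resource and bucket names are illustrative) is exactly the kind of gap IaC scanners catch in CI, along with the remediation they expect:

```hcl
# Misconfigured: an S3 bucket with no public-access block attached.
# Checkov or Trivy flags this before it ever reaches the cloud.
resource "aws_s3_bucket" "training_data" {
  bucket = "example-training-data"   # illustrative name
}

# The fix the scanner expects: an explicit public-access block.
resource "aws_s3_bucket_public_access_block" "training_data" {
  bucket                  = aws_s3_bucket.training_data.id
  block_public_acls       = true
  ignore_public_acls      = true
  block_public_policy     = true
  restrict_public_buckets = true
}
```

Running `checkov -d .` against the module directory in the pipeline turns the missing block into a failed build instead of a production finding.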
Conclusion
In 2026, the three pillars of cloud security risk are Identity, AI, and Visibility. Misconfigurations in these areas are not theoretical — they are the documented root cause of the most significant cloud breaches happening right now.
The good news is that every misconfiguration on this list is fixable with native cloud tooling, specific CLI commands, and clear policies. The barrier is not capability — it is prioritization.
Your Action This Week: Start your multi-cloud audit today. Run IAM Access Analyzer on AWS, review active PIM assignments and service principal permissions on Azure, and use GCP IAM Recommender to surface your most over-privileged service accounts. One hour of audit work today can close the gaps that attackers are actively looking for.