Cloudanix Guard
Stop secrets & PII
from reaching the LLM
An on-host DLP firewall that hooks into your coding agents — intercepting every prompt before it leaves your machine, with no perceptible latency added to your workflow.
The risk
Coding agents read everything
When you ask an agent to “fix the billing service” it reads your .env, credentials files, and source code — then sends them verbatim to an external LLM.
Secrets in context
AWS keys, API tokens, database passwords stored in .env files get attached to every prompt referencing your codebase.
PII in source & notes
SSNs, credit card numbers, and personal data in comments, fixtures, or billing code travel silently to the LLM provider.
Sensitive file reads
Agents traverse the entire repo. ~/.aws/credentials, private keys, and internal configs are fair game without a guardrail.
How it works
Inline — between agent and LLM
Agent fires a prompt
Claude Code, Kiro, or Cursor constructs a prompt containing your question plus repo context it believes is relevant.
Guard hook intercepts
A pre-LLM hook installed on your machine captures the outbound prompt before any network request is made — typically in <1 ms.
DLP engine scans
Regex + ML classifiers check every token: AWS/GCP/Azure keys, SSH keys, PII patterns, file path allowlists, and custom rules from your policy.
Policy-driven action
Depending on rule severity and your config: allow, warn the developer, redact the finding inline, or block the entire prompt.
Audit & alert
Every intercept is logged with user, timestamp, matched rule, and action. Feed to Splunk, Datadog, or your SIEM via the structured JSON stream.
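The intercept → scan → act pipeline above can be sketched in a few lines. This is an illustrative sketch only — `scan_prompt` and the inline rule set are assumptions for demonstration, not Guard's actual API:

```python
import re

# Illustrative rules: (rule id, severity, compiled pattern, action).
# Real rules come from your policy file.
RULES = [
    ("secret/aws-access-key-id", "critical",
     re.compile(r"\bAKIA[0-9A-Z]{16}\b"), "block"),
    ("pii/us-ssn", "high",
     re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "redact"),
]

def scan_prompt(prompt: str):
    """Return (action, outbound_prompt, findings) for an outbound prompt."""
    findings = []
    sanitised = prompt
    for rule_id, severity, pattern, action in RULES:
        for match in pattern.finditer(prompt):
            findings.append({"rule": rule_id, "severity": severity,
                             "match": match.group(), "action": action})
        if action == "redact":
            sanitised = pattern.sub("[REDACTED]", sanitised)
    # Any blocking finding stops the prompt outright.
    if any(f["action"] == "block" for f in findings):
        return "block", None, findings
    if findings:
        return "redact", sanitised, findings
    return "allow", prompt, findings
```

A blocking rule wins over redaction, mirroring the policy-driven actions described above: a critical secret stops the prompt, while lower-severity PII is sanitised and forwarded.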
Detection coverage
What Guard catches
Secrets & credentials
- AWS / GCP / Azure access keys
- GitHub & GitLab tokens
- SSH private keys (RSA, EC)
- JWT signing secrets
- Stripe & payment API keys
- Database connection strings
- Slack & webhook tokens
- Custom regex patterns
Personal data (PII)
- Social Security Numbers (SSN)
- Credit & debit card numbers
- Passport & driver's license IDs
- Email addresses in fixtures
- Phone numbers
- IP address ranges
- IBAN / bank account numbers
- HIPAA-covered identifiers
Sensitive file reads
- .env / .env.* variants
- ~/.aws/credentials
- *.pem / *.key private keys
- kubeconfig / .kube/*
- Terraform state files
- Vault secrets snapshots
- Any path matching your denylist
- Files above size threshold
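Denylist-style path matching of this kind can be sketched with the standard library's glob matcher. The patterns below are illustrative, not Guard's shipped defaults:

```python
from fnmatch import fnmatch
from pathlib import PurePosixPath

# Illustrative denylist (a real deployment reads these from the policy file).
DENYLIST = [".env", ".env.*", "*.pem", "*.key", "credentials", "*.tfstate"]

def is_sensitive_path(path: str) -> bool:
    """Match the file name against denylist glob patterns."""
    name = PurePosixPath(path).name
    return any(fnmatch(name, pattern) for pattern in DENYLIST)
```

Matching on the final path component keeps the check cheap enough to run on every file the agent touches.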
Live scenarios
See Guard in action
❯ claude "deploy the auth service using the keys in config.py"

⚠ Cloudanix Guard — intercepted prompt
  Hook      : pre-llm
  Agent     : claude-code
  Timestamp : 2026-05-12T14:22:07Z

✗ Finding 1 [CRITICAL] secret/aws-access-key-id
  matched  : AKIAIOSFODNN7EXAMPLE
  location : config.py:4
  entropy  : 4.31

● Action: BLOCK
  Prompt was NOT forwarded to Anthropic.
  Rotate this credential immediately.
  audit-id : guard-7f3a2c1e
What happened
Guard matched an AWS Access Key ID pattern in config.py before the prompt left the developer's laptop. The credential was never sent to Anthropic's API.
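Detectors of this kind usually pair a key-shaped regex with a Shannon-entropy check so that placeholder strings don't trigger alerts. A minimal sketch — the 3.0-bit threshold is an assumption, and this does not reproduce Guard's exact entropy figure:

```python
import math
import re

AWS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character over the string's own alphabet."""
    freq = {c: s.count(c) / len(s) for c in set(s)}
    return -sum(p * math.log2(p) for p in freq.values())

def looks_like_aws_key(text: str, min_entropy: float = 3.0) -> bool:
    # Require both the AKIA prefix pattern and enough randomness to
    # filter out dummy values like "AKIAAAAAAAAAAAAAAAAA".
    return any(shannon_entropy(m) >= min_entropy
               for m in AWS_KEY_RE.findall(text))
```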
❯ claude "review billing_notes.txt and fix the tax logic"

⚠ Cloudanix Guard — intercepted prompt
  Hook      : pre-llm
  Agent     : claude-code
  Timestamp : 2026-05-12T14:35:19Z

⚡ Finding 1 [HIGH] pii/us-ssn
  matched  : 4••-••-••90
  location : billing_notes.txt:12
  action   : REDACT

✓ Action: REDACT — prompt forwarded (sanitised)
  SSN replaced with [REDACTED-SSN] in outbound prompt.
  Original file on disk is unchanged.
  audit-id : guard-9d1b4e8f
What happened
An SSN found in a billing notes file was automatically redacted inline. The agent still received a useful prompt — just with the sensitive value replaced by a safe placeholder.
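Inline redaction of this sort amounts to a pattern substitution on the outbound prompt, leaving the file on disk untouched. A minimal sketch using the placeholder shown in the transcript:

```python
import re

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_ssn(prompt: str) -> str:
    """Replace any SSN-shaped value with a safe placeholder,
    leaving the rest of the prompt intact."""
    return SSN_RE.sub("[REDACTED-SSN]", prompt)
```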
❯ claude "read .env and tell me what's misconfigured"

⚠ Cloudanix Guard — intercepted prompt
  Hook      : pre-llm
  Agent     : claude-code
  Timestamp : 2026-05-12T15:01:44Z

✗ Finding 1 [CRITICAL] file/sensitive-env
  matched  : .env (denylist: *.env, .env.*)
  contains : 6 secrets detected inside file

● Action: BLOCK
  File contents were NOT included in the prompt.
  Suggest: share only the specific key you want reviewed.
  audit-id : guard-2c8f7a3d
What happened
The agent was asked to read a .env file directly. Guard's file denylist caught this before the contents (6 embedded secrets) were attached to the outbound prompt.
❯ claude "run the test suite and summarise failures"

⚠ Cloudanix Guard — intercepted tool result
  Hook      : post-tool
  Agent     : claude-code
  Tool      : bash (pytest output)
  Timestamp : 2026-05-12T16:14:02Z

⚡ Finding 1 [HIGH] pii/credit-card
  matched  : 4•••-••••-••••-1234 (test fixture)
  location : stdout line 47
  action   : REDACT

✓ Action: REDACT — tool result forwarded (sanitised)
  Card number replaced with [REDACTED-PAN].
  audit-id : guard-5e9c1b2a
What happened
Test output included a credit card number from a test fixture. Guard scanned the tool result before it was fed back to the LLM as context, redacting the PAN inline.
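Card-number detectors typically pair a digit pattern with a Luhn checksum so that arbitrary 16-digit values in test output aren't flagged. A sketch of that standard technique (not Guard's actual implementation):

```python
import re

CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum: double every second digit from the right."""
    digits = [int(d) for d in re.sub(r"[ -]", "", number)]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def find_card_numbers(text: str):
    """Return card-shaped substrings that also pass the Luhn check."""
    return [m.group().strip() for m in CARD_RE.finditer(text)
            if luhn_valid(m.group())]
```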
Configuration
Policy as code
A single YAML file checked into your repo controls every Guard rule. Define severity, action, and scope per team or project.
- ✓ Version-controlled alongside your code
- ✓ Per-rule actions: allow / warn / redact / block
- ✓ Custom regex & ML model thresholds
- ✓ File path allowlists & denylists
- ✓ Team-level overrides
version: "1"
default_action: warn

rules:
  - id: aws-key
    type: secret/aws-access-key-id
    severity: critical
    action: block

  - id: ssn
    type: pii/us-ssn
    severity: high
    action: redact

  - id: env-files
    type: file/path-denylist
    patterns:
      - "**/.env*"
      - "~/.aws/credentials"
      - "**/*.pem"
    action: block

audit:
  sink: stdout   # pipe to SIEM
  format: json
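Once parsed (for example with a YAML loader), the policy reduces to a lookup from finding type to action, with `default_action` as the fallback. A sketch with the parsed policy inlined as a dict — the first-match-wins order is an assumption:

```python
# The policy above, as it would look after YAML parsing.
POLICY = {
    "version": "1",
    "default_action": "warn",
    "rules": [
        {"id": "aws-key", "type": "secret/aws-access-key-id",
         "severity": "critical", "action": "block"},
        {"id": "ssn", "type": "pii/us-ssn",
         "severity": "high", "action": "redact"},
    ],
}

def action_for(finding_type: str, policy: dict = POLICY) -> str:
    """Pick the action for a finding: first matching rule wins,
    otherwise fall back to default_action."""
    for rule in policy["rules"]:
        if rule["type"] == finding_type:
            return rule["action"]
    return policy["default_action"]
```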
Capabilities
Everything you need to ship safely with AI
Sub-millisecond latency
The hook runs in-process. Developers see no perceptible slowdown — scanning typically completes in 0.2–1.5 ms.
On-host, no cloud required
All scanning happens locally. Your prompts and findings never leave the developer's machine — no proxy, no SaaS data plane.
One-line install
pip install cloudanix-guard && guard hook install — supports Claude Code, Kiro, and more agents via a universal hook interface.
Centralised audit stream
Structured JSON logs shipped to your SIEM, Slack, or webhook. Aggregated dashboards in the Cloudanix console.
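An audit record like those in the transcripts might be emitted as one JSON object per intercept. The field names below follow the transcript output but are illustrative, not a documented schema:

```python
import json
from datetime import datetime, timezone

def audit_record(agent: str, rule: str, action: str, audit_id: str) -> str:
    """One JSON line per intercept, ready for a SIEM pipeline."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "rule": rule,
        "action": action,
        "audit_id": audit_id,
    }, sort_keys=True)
```

One-object-per-line output keeps the stream trivially parseable by Splunk, Datadog, or any webhook consumer.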
CSPM-aware rules
Rules inherit context from your Cloud Security posture — a key linked to a high-risk IAM role triggers stricter action automatically.
Bidirectional scanning
Guard scans both outbound prompts and inbound LLM responses — catching prompt injections and sensitive data in tool call results.
Coding agent security suite
Two products. One agent. Complete protection.
Coding Agent Guard and Coding Agent JIT are built to run side-by-side inside the same coding agent. Together they cover both sides of the risk: what the agent says and what it does.
Coding Agent JIT
Zero standing access for cloud resources. When the agent needs AWS, a database, or an API — JIT issues short-lived, scoped credentials over MCP and revokes them the moment the task completes.
- Short-lived credentials via MCP
- Human-in-the-loop approvals
- Automatic revoke after task
- Full audit trail per agent action
Coding Agent Guard
On-host DLP firewall. Every prompt the agent sends is scanned before it leaves your machine — blocking AWS keys, SSNs, and sensitive files from ever reaching the LLM provider.
- Pre-LLM prompt interception
- Secrets & PII detection
- Inline redact or block
- Near-zero latency, runs on-device
Ready to see your graph?
Connect a cloud account in under 30 minutes. See every finding rooted in identity, asset, and blast radius — with a fix path attached.
Book a Demo