Cloudanix Joins AWS ISV Accelerate Program

Securing the SDLC in the AI Era: Culture, Champions, and Defense in Depth with Ashish Bhadouria

Ashish Bhadouria of IKEA shares how to integrate security into SDLC through culture and champions programs, and how defense in depth applies to AI-native applications.

Integrating security into the software development lifecycle remains one of the hardest problems in the industry — not because of tooling gaps, but because of culture. And now, with AI-native applications introducing entirely new attack surfaces, the challenge has compounded.

We spoke with Ashish Bhadouria, Domain Engineering Manager for Security and Privacy at IKEA, on the Scale to Zero podcast. Ashish has built security infrastructure across TCS, T-Systems (for Nokia, Jaguar Land Rover), Skype, Microsoft Office, Unity Technologies, and Zalando before joining IKEA, giving him a broad perspective on how security culture scales across diverse industries.

You can read the complete transcript of the episode here >

What are the fundamental challenges of integrating security into SDLC?

Despite years of “shift left” awareness, most organizations still struggle with the basics. Ashish identifies the persistent challenges:

  • Security as an afterthought: Teams still treat security as a gate at the end rather than a property of the process.
  • Tool overload: Too many tools in CI/CD pipelines create alert fatigue. Developers cannot process the volume of signals.
  • Prioritization paralysis: According to the 2025 State of DevSecOps report, only 18% of critical vulnerabilities actually needed prioritization. Teams drown in noise.
  • Unclear threat modeling practices: Organizations do not know when to invoke threat modeling, how long it should take, or how to plan mitigations from findings.
  • Engineering velocity mismatch: Security teams cannot keep pace with deployment frequency.

The solution is not more tools — it is providing actionable metrics within developer workflows where they already work, so they do not need to check ten different dashboards.

How do you overcome developer resistance to security involvement?

Ashish draws on Jason Chan’s “Security Partnership” framework from Netflix. The core idea: build a culture where developers seek security guidance voluntarily rather than having it imposed on them.

His approach operates on three layers:

  • Executive layer: Agree on security checkpoints before major product releases. Speak in cultural metrics — champion engagement rates, NPS scores, security maturity trends.
  • Product layer: Align with product teams on what gets checked during sprints and how findings are prioritized. Use product metrics they already care about.
  • Engineering layer: Provide tooling that surfaces security signals directly in developer workflows. Use platforms like Backstage to consolidate signals — secret detection, vulnerabilities, dependencies — into a single view within their existing CI/CD pipeline.

The key principle: target developer pain points first. If they are drowning in alerts, reduce noise before asking them to do more. Small improvements build trust, and trust builds culture.

Why are security champions programs essential?

Ashish is emphatic: the signals and findings that come through security champions are more valuable than what any tool produces. Engineers know their context. They know their environment. When they surface issues through a champions program, those findings address fundamental security challenges that automated scanning misses.

Champions programs deliver value across multiple dimensions:

  • Contextual awareness: Champions understand the business logic and data flows that tools cannot infer.
  • Cultural alignment: Having an insider carrying the security agenda is far more effective than an external team pushing requirements.
  • Scalable coverage: In organizations with many lines of business (like IKEA), champions provide visibility that a central security team cannot achieve alone.
  • AI readiness: As AI adoption accelerates, champions become even more critical for detecting novel risks that traditional tools are not yet designed to catch.

For enterprises with multiple business units operating with autonomy, Ashish advocates “aligned autonomy” — foundational security standards that everyone follows, with flexibility in how each unit achieves maturity above that baseline.

How does the SDLC change for AI-native applications?

The traditional SDLC needs rearchitecting for AI-native environments. The challenges are fundamentally different:

  • MCP was built without security: When the Model Context Protocol launched, security was not part of the design. Controls are being retrofitted after adoption — repeating the same pattern as traditional software.
  • Prompt injection is not just SQL injection: It combines injection with social engineering and has 10x impact depending on context. The attacker pool is also vastly larger — anyone who can type a prompt can attempt exploitation.
  • Open-source AI models are not secure by default: People assume that because a model exists publicly, it has been vetted. Model poisoning, ethical issues, and data leakage risks are real.
  • Speed of adoption outpaces security: Engineers exploring AI tools move faster than security teams can build controls.

Ashish notes that the “S in MCP stands for security” joke captures the industry’s current state perfectly.

What architectural patterns protect AI workloads?

Ashish recommends a defense-in-depth model applied specifically to AI application stacks, with security controls at every layer:

  • Perimeter layer: Deploy AI-aware gateways that detect adversarial prompts and anomalous input patterns.
  • Network layer: Create segmentation for machine learning workflows — isolate training, inference, and data pipelines.
  • Application layer: Implement prompt sanitization, input validation, and output filtering at the application boundary.
  • Data layer: Enforce privacy practices, ethics regulations, and fairness guidelines on training and inference data.
  • Model layer: Conduct adversarial testing, build guardrails, and monitor for model drift and poisoning attempts.

Threat modeling at each layer is paramount. Understanding what can go wrong at the model layer versus the application layer versus the data layer requires different expertise and produces different mitigations.

How is AI transforming defensive security capabilities?

While attackers leverage AI for more sophisticated phishing (Microsoft reported a 57% uptick in sophisticated phishing attempts), defenders gain capabilities too:

  • SOC analyst augmentation: AI reduces false positives and improves signal quality. Analysts shift from triaging noise to higher-quality engagement with real threats. The volume of alerts may not decrease, but the quality of each analyst’s work improves dramatically.
  • Accelerated threat modeling: Collecting threat scenarios, creating diagrams, and generating action plans that previously took four hours now take 30 minutes with AI assistance.
  • Automated attack surface monitoring: ML models continuously scan for subdomain dangling, exposed secrets, and misconfigurations — then take corrective action automatically.
  • Improved security testing in CI/CD: Automated security code reviews and generated test cases within pipelines catch issues earlier.

Ashish emphasizes that context is king. The more contextual knowledge AI systems have about your specific organization, the better they can prioritize signals and reduce noise. Human-in-the-loop remains essential — AI improves the quality of work, but does not eliminate the need for human judgment.

Ready to see your graph?

Connect a cloud account in under 30 minutes. See every finding rooted in identity, asset, and blast radius — with a fix path attached.

Book a Demo

Blog

Read More Posts

Your Trusted Partner in Data Protection with Cutting-Edge Solutions for
Comprehensive Data Security.

Wednesday, Apr 29, 2026

Code Security Best Practices for DevSecOps Teams in 2026

In 2026, the speed of software development has reached a point where traditional security methods can no longer keep up.

Read More

Wednesday, Apr 29, 2026

Integrating Security into Every Stage: A Blueprint for Secure Software Development

The escalating frequency and severity of software vulnerabilities exploited in the wild forced a paradigm shift in how a

Read More

Tuesday, Apr 14, 2026

Top 15 Cloud Misconfigurations in 2026 - How to Fix Them?

Most cloud breaches today are not the result of sophisticated zero-day exploits. They are the result of misconfiguration

Read More