Security is often viewed as a series of technical hurdles or an “alphabet soup” of tool acquisitions. However, as Dakota Riley—a Staff Security Engineer and SANS instructor with roots in CyberArk and AWS—emphasizes, the most resilient programs are built on culture rather than compliance checklists.
In a recent discussion, Dakota shared a provocative but grounded perspective: “Good security is just good engineering”. This guide explores how organizations can shift from a “department of no” to a culture of ownership, automation, and continuous feedback.
## The Core Philosophy: Security as Software Quality
The fundamental shift required for a strong security culture is viewing security not as a separate chore, but as an inherent aspect of software quality.
- Engineering Alignment: Instead of chasing “hygiene problems” after the fact, security should be integrated into the initial product design.
- Problem-First Thinking: Organizations should work backward from the specific problems they want to solve rather than buying products (like a Cloud Access Security Broker (CASB) or Cloud Security Posture Management (CSPM) tool) and trying to retrofit them into the program.
- The Goal of Elimination: The ultimate mindset should be to eliminate entire classes of vulnerabilities through design rather than managing a never-ending mess with 50 different products.
## Three Essential Elements of Security Culture
Dakota highlights three pillars that define a mature security engineering culture:
- Ownership and Accountability: Both product and security engineers must feel a sense of ownership over the things they create. If an engineer knows they will be paged because they wrote a poor detection, they are highly incentivized to fix it.
- Empowerment: Leadership must trust engineers to make decisions and provide them with the necessary business context to do so effectively.
- Psychological Safety: Innovation requires the freedom to change things, which inherently involves risk. If engineers fear termination for making a mistake, innovation stops entirely.
## Navigating the Divide: Startups vs. Enterprises
The challenges of building culture vary significantly depending on organizational size and structure.
| Feature | Startup Culture | Enterprise Culture |
|---|---|---|
| Primary Goal | Survival and delivery. | Regulatory compliance and scale. |
| Structure | Lean, “T-shaped” engineers filling multiple roles. | Siloed teams (e.g., logging separate from detection). |
| Action | High individual impact; can often just “do it” via PRs. | Multiple stakeholders require cross-team alignment. |
| Friction | Resource constraints. | Organizational inertia and bureaucracy. |
### Advice for New Leads
- For Startups: Identify the “bad things” that could cause irreversible business damage or put the company out of business, and build the roadmap around those risks.
- For Enterprises: Focus on team incentives and structure. Create “paved roads” and self-service capabilities so other teams can succeed without constant security intervention.
## Reimagining DevSecOps and “Shift Left”
Dakota warns that “DevSecOps” has been diluted by marketing to mean little more than a toolchain: static analysis (SAST), dynamic analysis (DAST), and software composition analysis (SCA). True DevSecOps is a continuous feedback loop:
- Clear Requirements: Set prescriptive logging and audit requirements upfront so engineers know the standard they must meet from day one.
- Policy as Code: Enforce security posture through code in the pipeline. This gives developers the same feedback mechanism they get from unit tests.
- Tuning for Fidelity: Don’t block pipelines with low-fidelity checks that produce false positives. High-confidence checks should block; lower-confidence ones should be observed by security teams for later tuning.
- Reducing Fatigue: The goal is to reduce “ticket fatigue”. Providing high-context, detailed fixes—or even automated pull requests—makes it much more likely that engineering will address the risk.
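The gating bullets above can be sketched in code. This is a minimal, hypothetical example (the findings format and `confidence` field are assumptions, not any specific tool's output): high-confidence findings fail the pipeline with a nonzero exit code, while low-confidence findings are printed for the security team to tune later rather than blocking developers.

```python
import sys

# Hypothetical findings from pipeline security checks. The shape of these
# records, including the "confidence" label, is an illustrative assumption.
FINDINGS = [
    {"rule": "s3-bucket-public-read", "confidence": "high",
     "resource": "aws_s3_bucket.logs"},
    {"rule": "possible-hardcoded-secret", "confidence": "low",
     "resource": "app/settings.py"},
]

def gate(findings):
    """Block the pipeline only on high-confidence findings; surface
    low-confidence ones as non-blocking observations for later tuning."""
    blocking = [f for f in findings if f["confidence"] == "high"]
    observed = [f for f in findings if f["confidence"] != "high"]
    for f in observed:
        print(f"OBSERVE (non-blocking): {f['rule']} on {f['resource']}")
    for f in blocking:
        print(f"BLOCK: {f['rule']} on {f['resource']}")
    return 1 if blocking else 0  # nonzero exit fails the CI job

if __name__ == "__main__":
    sys.exit(gate(FINDINGS))
```

The key design choice mirrors the unit-test analogy: developers get immediate, deterministic feedback in the pipeline, and only checks the security team trusts are allowed to break the build.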
## Measuring Culture: The “Toil” Metric
Standard KPIs can be “slippery” because numbers without context lack meaning. Instead, Dakota recommends measuring Toil, a concept from Google’s Site Reliability Engineering (SRE) book:
- Toil vs. Value: Measure the amount of undifferentiated heavy lifting—boring, repetitive work that doesn’t add value.
- Paved Roads: Track whether the team is re-solving the same problems or building repeatable “paved roads” for secure images, log ingestion, and cloud environments.
- Novelty: A successful culture should see humans moving away from hygiene tasks toward solving novel, intellectually stimulating problems.
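One lightweight way to operationalize this metric is to classify logged work as toil or value and track the ratio over time. The sketch below is purely illustrative: the work-log format, the category names, and the rule that `repeat_*` entries count as toil are all assumptions, not a prescribed taxonomy.

```python
from collections import Counter

# Hypothetical work log of (task_category, hours). Treating categories
# prefixed "repeat_" as toil is an illustrative convention only.
WORK_LOG = [
    ("repeat_manual_triage", 6),
    ("repeat_rebuild_golden_image", 4),
    ("build_paved_road_log_pipeline", 8),
    ("novel_detection_research", 6),
]

def toil_ratio(log):
    """Fraction of hours spent on repetitive, undifferentiated work."""
    hours = Counter()
    for category, h in log:
        kind = "toil" if category.startswith("repeat_") else "value"
        hours[kind] += h
    total = hours["toil"] + hours["value"]
    return hours["toil"] / total if total else 0.0

print(f"toil ratio: {toil_ratio(WORK_LOG):.0%}")  # 10 of 24 hours -> 42%
```

A falling toil ratio suggests the team is successfully converting repeat work into paved roads; a flat or rising one signals the team is re-solving the same problems.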
## The Future Mindset: AI and Mentorship
As the industry moves into the age of AI and non-deterministic systems, the engineering mindset becomes even more critical.
- Hiring for Mindset: Interviews should use open-ended, “security-flavored” system design questions to see how a candidate breaks down complex problems into smaller, actionable steps.
- AI Augmentation: AI tools (LLMs) should be used for problems that don’t require absolute determinism, such as summarizing patterns or handling classes of alerts in SOAR workflows.
- Human in the Loop: Because LLMs are statistical models that “guess the next token,” they will make mistakes. For critical decisions, a “human in the loop” remains essential to augment and validate AI-driven triage.
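A human-in-the-loop SOAR-style flow might look like the sketch below. Everything here is hypothetical: `summarize_with_llm` is a stand-in for a real model API call, and the severity-based routing rule is one possible policy, not a recommendation from any specific product.

```python
from dataclasses import dataclass

@dataclass
class TriageResult:
    alert_id: str
    summary: str
    auto_closed: bool
    needs_human: bool

def summarize_with_llm(alert: dict) -> str:
    """Stand-in for an LLM call; a real workflow would invoke a model
    API here to summarize the alert's context for the analyst."""
    return f"{alert['type']} observed on {alert['host']}"

def triage(alert: dict) -> TriageResult:
    summary = summarize_with_llm(alert)
    # Statistical models can guess wrong, so anything critical is routed
    # to a human for validation rather than auto-closed.
    if alert["severity"] == "critical":
        return TriageResult(alert["id"], summary,
                            auto_closed=False, needs_human=True)
    # Well-understood, low-severity alert classes can be handled
    # automatically, reducing analyst fatigue.
    return TriageResult(alert["id"], summary,
                        auto_closed=True, needs_human=False)

alerts = [
    {"id": "a1", "severity": "low", "type": "port-scan", "host": "web-1"},
    {"id": "a2", "severity": "critical", "type": "priv-esc", "host": "db-1"},
]
for a in alerts:
    r = triage(a)
    print(r.alert_id, "human review" if r.needs_human else "auto-closed")
```

The point of the structure is that the LLM augments rather than decides: it does the summarization (a task tolerant of imprecision), while the routing logic guarantees a human validates anything consequential.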
## Conclusion: Get Your Hands Dirty
Building a security culture is not an overnight transformation; it is a series of “small wins” that challenge existing bureaucracies. Dakota’s ultimate advice for any professional in this space is to embrace the Pragmatic Programmer mindset: stay curious, tinker, build, and break things in consequence-free environments to truly learn how systems work.
## Recommended Resources
- Book: The Pragmatic Programmer.
- Practice: “Paved road” engineering and Policy as Code.
- Concept: Toil reduction and psychological safety.