What Is Threat Modeling?
Threat modeling is the practice of systematically identifying, prioritizing, and mitigating potential security threats to a system before adversaries can exploit them. Unlike reactive security measures that respond to incidents after they occur, threat modeling takes a proactive stance by asking fundamental questions: What are we building? What could go wrong? What should we do about it? And did we do a good job?
This structured approach forces teams to think like attackers while still in the design phase, when fixing vulnerabilities costs far less than patching them in production. A well-executed threat model reveals not just obvious attack vectors but also subtle architectural weaknesses that might otherwise remain hidden until a breach occurs.
Why Threat Modeling Matters
Security vulnerabilities discovered in production cost far more to fix than those caught during design. An oft-cited IBM figure holds that defects found after release cost roughly 30 times more to remediate than those caught during requirements gathering. Threat modeling shifts security left in the development lifecycle, catching issues when they’re easiest and cheapest to address.
Beyond cost savings, threat modeling provides a common language for technical and non-technical stakeholders to discuss security risks. Security teams can articulate threats in terms of business impact rather than technical jargon, while developers gain concrete guidance on which threats warrant defensive coding. Product managers understand the security implications of feature decisions before committing resources.
The process also creates valuable documentation that persists throughout the application lifecycle. New team members can quickly understand the security considerations embedded in system architecture. During incident response, existing threat models provide a structured framework for investigating how an attack occurred and what other systems might be vulnerable. Compliance frameworks like PCI-DSS and SOC 2 increasingly expect organizations to demonstrate systematic threat identification processes.
Perhaps most importantly, threat modeling cultivates a security mindset across engineering teams. Developers who regularly participate in threat modeling exercises begin thinking about attack surfaces and trust boundaries naturally, integrating security considerations into daily work without needing constant oversight from security specialists.
Common Threat Modeling Frameworks
Different threat modeling methodologies exist because different contexts demand different approaches. A payment processing system faces threats very different from a social media platform, and a medical device has constraints that don’t apply to web applications. Understanding multiple frameworks helps teams choose the right tool for their specific situation.
STRIDE
Microsoft developed STRIDE as a mnemonic for categorizing threats based on the security properties they violate. The acronym stands for Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. This framework proves particularly effective for analyzing component-based architectures where you can map each component against all six threat categories.
Spoofing threats involve attackers impersonating users, systems, or data. A threat model might identify risks like weak authentication allowing attackers to assume another user’s identity, or missing cryptographic verification allowing malicious components to masquerade as trusted services.
Tampering covers unauthorized modification of data in transit or at rest. This includes attacks like man-in-the-middle manipulation of network traffic, SQL injection altering database records, or attackers modifying configuration files to change application behavior.
Repudiation threats occur when systems lack adequate logging or audit trails, allowing attackers to deny malicious actions. Without proper logging of administrative operations or cryptographically signed transactions, organizations struggle to establish accountability when incidents occur.
Information disclosure encompasses any unauthorized access to confidential data. This ranges from obvious threats like database exposure to subtle issues like timing attacks that leak information through performance characteristics, or verbose error messages revealing system internals.
Denial of service threats prevent legitimate users from accessing system resources. These include traditional network-based DDoS attacks but also application-level attacks like algorithmic complexity exploits that exhaust CPU, or resource exhaustion through uncapped API requests.
Elevation of privilege occurs when attackers gain higher permission levels than intended. This includes classic privilege escalation exploits but also architectural flaws like insecure direct object references allowing users to access resources beyond their authorization.
STRIDE’s strength lies in its completeness. By methodically considering each category for every component, teams catch threats they might otherwise overlook. Its weakness is verbosity for large systems, where exhaustively analyzing every component against all six categories becomes tedious.
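The methodical per-component review STRIDE prescribes can be kept honest with a simple checklist structure that makes unexamined cells visible. The sketch below is illustrative only; the component names and threats are hypothetical, not drawn from any real system.

```python
# Sketch of a STRIDE-per-component checklist. Component and threat
# names are illustrative placeholders, not a real system.
STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information Disclosure", "Denial of Service", "Elevation of Privilege",
]

components = ["web_frontend", "auth_service", "orders_db"]

# Start every component with an empty slot for each category so that
# unexamined cells show up as explicit gaps rather than silent omissions.
model = {c: {cat: [] for cat in STRIDE} for c in components}

model["auth_service"]["Spoofing"].append(
    "Weak password policy lets attackers guess credentials")
model["auth_service"]["Denial of Service"].append(
    "Unlimited login attempts exhaust resources")
model["orders_db"]["Tampering"].append(
    "SQL injection alters order records")

# Report which component/category cells still lack any recorded analysis.
for comp, cats in model.items():
    for cat, threats in cats.items():
        if not threats:
            print(f"UNREVIEWED: {comp} / {cat}")
```

Tracking empty cells directly addresses STRIDE's verbosity problem: the team can see at a glance how much of the matrix remains, and deliberately mark low-value cells as reviewed rather than losing track of them.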
PASTA (Process for Attack Simulation and Threat Analysis)
PASTA takes a risk-centric approach focused on business objectives rather than technical components. This seven-stage methodology begins by defining business objectives and ends with risk management and mitigation strategies, ensuring technical security controls align with business priorities.
Stage one defines business objectives, identifying what matters most to the organization. A financial services application might prioritize transaction integrity and regulatory compliance, while a social network emphasizes user privacy and platform availability. These objectives guide all subsequent threat analysis.
Stage two defines the technical scope, mapping the application architecture and infrastructure. This includes data flow diagrams, deployment diagrams, and identification of trust boundaries. Understanding scope prevents analysis paralysis while ensuring critical components receive adequate scrutiny.
Stage three decomposes the application into manageable pieces, identifying entry points, assets, actors, and use cases. This granular view reveals attack surfaces and helps prioritize which components warrant deeper analysis based on their criticality and exposure.
Stage four analyzes threats through attacker profiling and attack library analysis. Instead of generic threats, PASTA considers specific threat actors relevant to the organization (nation-states, organized crime, insiders) and their tactics, techniques, and procedures. This context-aware analysis produces more actionable findings than generic threat lists.
Stage five identifies vulnerabilities and weaknesses in the current architecture, correlating them with threats identified in stage four. This produces a concrete list of exploitable weaknesses rather than theoretical concerns.
Stage six enumerates attack scenarios, constructing attack trees that show how identified vulnerabilities could be chained together. This narrative approach helps non-technical stakeholders understand realistic attack paths and their potential business impact.
Stage seven produces risk and impact analysis, quantifying the likelihood and impact of each threat scenario. This risk ranking informs mitigation priorities and helps justify security investments to business leadership.
PASTA’s business-centric approach makes it particularly valuable when security teams need executive buy-in for mitigation investments. Its main drawback is the time investment required for thorough execution of all seven stages.
LINDDUN
LINDDUN focuses specifically on privacy threats, making it essential for applications handling personally identifiable information. The framework’s name represents seven privacy threat categories: Linkability, Identifiability, Non-repudiation, Detectability, Disclosure of information, Unawareness, and Non-compliance.
Linkability threats occur when attackers can correlate different pieces of information about users across contexts. Even when individual data points seem innocuous, linking them together can reveal sensitive information. For example, linking a user’s location history with their purchase history might expose medical conditions or political affiliations.
Identifiability threats involve revealing user identities from supposedly anonymized data. De-anonymization attacks have successfully re-identified individuals in “anonymous” datasets by correlating them with public information or other datasets.
Non-repudiation privacy threats occur when users cannot plausibly deny actions they took. While non-repudiation is often a security goal, in privacy contexts it can be problematic. Strong audit logs that prove every user action might enable surveillance or create liability risks for users.
Detectability threats arise when the mere existence of certain data or actions can be detected, even without accessing the data itself. Traffic analysis might reveal that two users are communicating even if message contents remain encrypted.
Disclosure threats cover unauthorized access to personal information, similar to STRIDE’s information disclosure but with privacy-specific considerations like tracking, profiling, and data aggregation risks.
Unawareness threats occur when users lack knowledge about how their data is collected, processed, or shared. This includes insufficient privacy policies, hidden data collection, or failure to obtain meaningful consent.
Non-compliance addresses violations of privacy regulations like GDPR, CCPA, or HIPAA. These threats often carry direct legal and financial consequences beyond the security implications.
LINDDUN proves particularly valuable for consumer applications, healthcare systems, and any context where privacy regulations apply. Its specialized focus makes it less suitable as a standalone framework for applications where privacy is not a primary concern.
Attack Trees
Attack trees provide a graphical representation of how attackers might compromise a system, with the root node representing the attacker’s goal and child nodes representing sub-goals or attack steps. This hierarchical approach helps teams visualize attack paths and identify critical nodes where defensive controls provide maximum impact.
An attack tree for “steal user credentials” might branch into sub-goals like “phish users,” “exploit authentication vulnerabilities,” or “compromise credential database.” Each sub-goal further branches into specific tactics. The phishing branch might include “spear phishing emails,” “fake login pages,” or “phone-based social engineering.”
Attack trees support both OR-relationships (attacker needs only one path to succeed) and AND-relationships (attacker must complete multiple steps). This distinction helps prioritize defenses. OR-nodes with many branches require defense at the root or across all branches, while AND-nodes can be disrupted by securing any single component.
Teams can annotate attack tree nodes with attributes like difficulty, cost, required skill level, or probability of detection. This enriched analysis reveals which attack paths pose the greatest realistic threat rather than just theoretical possibilities.
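One common enrichment is annotating leaf nodes with an estimated attacker cost and propagating it upward: an OR node costs as little as its cheapest child, while an AND node costs the sum of its children. A minimal sketch, using the credential-theft tree from above with purely illustrative cost figures:

```python
# Minimal attack-tree cost analysis. An OR node succeeds via any one
# child (cheapest path wins); an AND node requires every child (costs sum).
# Tree structure and dollar figures are illustrative, not a real assessment.
def cheapest_attack(node):
    if "cost" in node:                      # leaf: annotated attacker cost
        return node["cost"]
    child_costs = [cheapest_attack(c) for c in node["children"]]
    return sum(child_costs) if node["type"] == "AND" else min(child_costs)

steal_credentials = {
    "type": "OR",
    "children": [
        {"name": "phish users", "cost": 500},
        {"name": "exploit auth vulnerability", "cost": 20000},
        {
            "name": "compromise credential database",
            "type": "AND",
            "children": [
                {"name": "gain network foothold", "cost": 5000},
                {"name": "escalate to DB host", "cost": 8000},
            ],
        },
    ],
}

print(cheapest_attack(steal_credentials))  # 500: phishing is the cheapest path
```

The same propagation works for any attribute with a sensible combiner: probability of detection, required skill level, or time to execute. The cheapest realistic path, not the most exotic one, is usually where defensive investment pays off first.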
The visual nature of attack trees makes them excellent communication tools. Non-technical stakeholders can follow attack paths without understanding implementation details, while technical teams can drill into specific nodes for detailed analysis.
Attack trees work well in combination with other frameworks. You might use STRIDE to identify threats, then construct attack trees for the most critical threats to understand how attackers would actually exploit them.
The Four-Step Threat Modeling Process
While specific frameworks differ, most threat modeling follows a common four-question structure that guides teams from understanding the system to validating their security controls.
What Are We Building?
This foundational step requires creating an accurate representation of the system architecture. Data flow diagrams (DFDs) prove most common, showing how information moves between components, where trust boundaries exist, and what assets require protection.
Effective diagrams focus on security-relevant details rather than comprehensive technical documentation. A threat model DFD might abstract away internal microservice choreography to focus on external API boundaries, data stores, and authentication checkpoints. The goal is clarity about where data enters the system, how it transforms, and where it exits.
Trust boundaries deserve special attention. These represent points where data moves from a more trusted context to a less trusted one, or vice versa. A web application’s trust boundaries include the connection between user browsers and the web server, between web servers and databases, and between the application and external APIs. Attacks often exploit inadequate security controls at these boundaries.
Identifying assets and their sensitivity guides prioritization throughout the threat modeling process. Not all data carries equal risk. Customer payment information demands stronger protection than publicly visible product descriptions. Understanding asset sensitivity helps teams allocate security resources proportionally.
This step also catalogs entry points where attackers might interact with the system. Entry points include obvious items like authentication endpoints and API operations, but also subtler attack surfaces like file upload functionality, search features that might be vulnerable to injection, or administrative interfaces accessible only from specific networks.
What Could Go Wrong?
With the system architecture mapped, teams systematically identify threats using their chosen framework. This divergent thinking phase encourages brainstorming without premature filtering. The goal is comprehensive threat identification, not immediate solutions.
Using STRIDE as an example, the team examines each component and trust boundary against all six threat categories. Looking at a user authentication service, they might identify spoofing threats from weak password policies, tampering threats to authentication tokens, repudiation threats from insufficient login auditing, information disclosure through timing attacks on password validation, denial of service from unlimited login attempts, and elevation of privilege through session fixation vulnerabilities.
Effective threat identification requires the right participants. Security specialists bring knowledge of attack techniques and vulnerability patterns. Developers understand implementation details that might create unexpected attack surfaces. Architects see system-wide implications of design decisions. Product owners clarify business context and acceptable risk trade-offs.
Teams should challenge assumptions during this phase. “Users will never do that” often precedes exploitation. “That system is internal only” might ignore insider threats or the consequences of network breach. “We’ll add that security control later” often means never. Documenting and challenging assumptions surfaces hidden risks.
Existing vulnerability databases and security advisories provide inspiration for concrete threats. Rather than inventing theoretical attacks, teams can reference real vulnerabilities affecting similar components. If the system uses a PostgreSQL database, reviewing PostgreSQL CVEs reveals classes of vulnerabilities to check for.
What Should We Do About It?
Not every identified threat warrants mitigation. This convergent phase requires risk assessment to prioritize threats and select appropriate countermeasures. Teams typically evaluate threats based on likelihood (how easy is the attack to execute) and impact (what damage results from success).
Risk matrices help structure this assessment. A simple 3×3 grid with high/medium/low likelihood and impact produces nine risk levels. High-likelihood, high-impact threats demand immediate mitigation. Low-likelihood, low-impact threats might be accepted without action. The middle cases require judgment based on available resources and risk appetite.
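The 3×3 grid reduces to a few lines of code. The thresholds and action labels below are illustrative policy choices, not a standard; each organization tunes them to its own risk appetite.

```python
# 3x3 risk matrix sketch: combine high/medium/low likelihood and impact
# into a handling decision. Thresholds and labels are illustrative
# policy choices, not a fixed standard.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_level(likelihood: str, impact: str) -> str:
    score = LEVELS[likelihood] * LEVELS[impact]   # ranges 1..9
    if score >= 6:        # e.g. high/high, high/medium
        return "mitigate immediately"
    if score >= 3:        # the middle cases requiring judgment
        return "assess against risk appetite"
    return "candidate for acceptance"

print(risk_level("high", "high"))    # mitigate immediately
print(risk_level("medium", "medium"))  # assess against risk appetite
print(risk_level("low", "medium"))   # candidate for acceptance
```

Multiplying the two scores is one of several reasonable combiners; some teams use a lookup table instead so that asymmetric cells (say, low-likelihood but catastrophic impact) can be handled specially.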
For prioritized threats, teams select mitigations from four categories: prevent, detect, respond, or accept. Prevention eliminates the vulnerability or attack vector. Input validation prevents injection attacks. Authentication prevents unauthorized access. Encryption prevents eavesdropping.
When prevention proves impractical, detection provides a fallback. Intrusion detection systems alert on suspicious activity patterns. Anomaly detection flags unusual API usage. Comprehensive logging enables forensic investigation after incidents.
Response capabilities minimize impact when attacks succeed. Incident response playbooks guide rapid containment. Automated circuit breakers limit damage from denial of service. Backup and recovery procedures restore service after data corruption.
Risk acceptance acknowledges that some threats aren’t worth mitigating given their low likelihood, low impact, or prohibitive mitigation cost. Accepted risks should be documented with clear rationale and re-evaluated periodically as circumstances change.
Mitigation selection considers implementation cost, operational overhead, user experience impact, and maintenance burden. The goal is effective risk reduction, not perfect security. A $10,000 engineering effort to prevent a $1,000 loss makes no business sense.
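That comparison can be made explicit with the standard annualized loss expectancy calculation (ALE = single-loss expectancy × annual rate of occurrence). The figures below are illustrative:

```python
# Illustrative cost-benefit check: a mitigation is worth buying only when
# the annualized loss expectancy (ALE) it removes exceeds its annual cost.
def worth_mitigating(single_loss: float, annual_occurrences: float,
                     annual_control_cost: float) -> bool:
    ale = single_loss * annual_occurrences
    return ale > annual_control_cost

# A $1,000 loss expected once per year vs. a $10,000 control: not worth it.
print(worth_mitigating(1_000, 1.0, 10_000))   # False

# A $50,000 loss expected twice per year vs. the same control: worth it.
print(worth_mitigating(50_000, 2.0, 10_000))  # True
```

Real risk quantification layers uncertainty ranges and partial risk reduction on top of this, but even the naive form keeps prioritization discussions anchored to numbers rather than intuition.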
Did We Do a Good Job?
Threat modeling requires validation to ensure the model accurately reflects both the system and the threat landscape. This final step verifies that identified threats are legitimate, selected mitigations are effective, and no critical threats were missed.
Code review and security testing validate that planned mitigations were actually implemented. Threat models often identify necessary controls that developers forget during implementation or that get cut from sprint scope. Explicit verification catches these gaps before deployment.
Penetration testing provides external validation by having security professionals attempt the attacks described in the threat model. Successful penetration tests reveal either incorrect assumptions in the threat model or implementation flaws in mitigations. Failed attacks confirm that defenses work as intended.
Red team exercises simulate sophisticated adversary tactics against the complete system, potentially uncovering attack paths that individual threat models missed. While threat modeling analyzes components in isolation, red teams exploit the interaction between components to find vulnerabilities that emerge from integration.
Threat models require maintenance as systems evolve. New features introduce new attack surfaces. Architectural changes alter trust boundaries. Deployment changes modify the threat landscape (moving from on-premises to cloud introduces different threats). Regularly revisiting threat models keeps them relevant.
Some teams integrate threat modeling into their change management process, requiring threat model updates for any significant architectural change or new feature. This continuous approach prevents threat models from becoming stale documentation that nobody trusts or references.
Who Should Do Threat Modeling?
Effective threat modeling requires diverse perspectives, not just security specialists. Different roles contribute different insights that collectively produce more comprehensive threat analysis than any single person could achieve.
Security engineers bring deep knowledge of attack techniques, common vulnerability patterns, and security control best practices. They know what went wrong in similar systems and can anticipate subtle attack vectors that others might miss. However, security teams often lack detailed understanding of implementation specifics that create or eliminate vulnerabilities.
Software developers and architects understand the system’s actual implementation, not just its intended design. They know which libraries are used, what configuration options are set, where error handling might be weak, and what technical debt exists. This implementation knowledge reveals attack surfaces that design documents miss.
Operations and DevOps teams understand the deployment environment, infrastructure dependencies, monitoring capabilities, and incident response procedures. They know what detective and responsive controls exist, what visibility gaps remain, and what operational constraints limit mitigation options.
Product managers and business stakeholders provide context about intended use cases, business priorities, and acceptable risk tradeoffs. They help calibrate risk assessments by clarifying which assets are most valuable and which threats would most severely impact business objectives.
For small teams without dedicated security staff, the development lead often facilitates threat modeling while drawing on the entire team’s collective knowledge. External security consultants can facilitate initial threat modeling sessions while training teams to continue the practice independently.
The key is ensuring that nobody performs threat modeling in isolation. An individual working alone inevitably brings blind spots and biases. Group sessions leverage collective knowledge while building shared understanding of security considerations across the team.
Threat Modeling in Practice
Real-world threat modeling adapts to project constraints, organizational culture, and available expertise. Teams rarely execute textbook-perfect threat modeling processes, instead pragmatically adjusting methodologies to fit their context.
Some organizations embed lightweight threat modeling into sprint planning, spending 30 minutes reviewing security implications of upcoming stories rather than conducting marathon threat modeling sessions. This continuous approach keeps threat models current and integrates security thinking into regular development rhythm.
Other teams conduct intensive threat modeling workshops during major design milestones, when architects have solidified high-level design but before implementation details lock in security decisions. These focused sessions can span days, producing comprehensive threat models and mitigation plans.
Automation helps scale threat modeling across large organizations. Tools can parse architecture-as-code definitions to generate baseline data flow diagrams, flag common anti-patterns, or suggest threats based on component types. While automation cannot replace human judgment, it reduces manual effort and ensures consistency.
Many teams maintain threat libraries cataloging common threats and mitigations relevant to their technology stack and business context. Rather than starting from scratch for each threat model, teams adapt existing patterns to new contexts, accelerating analysis while capturing organizational knowledge.
Successful threat modeling programs measure and communicate value to maintain organizational commitment. Metrics might track threats identified per project, percentage of threats mitigated before production, security issues prevented through early detection, or cost savings from finding vulnerabilities during design versus production.
Integration with existing development processes proves critical for long-term sustainability. Threat modeling that operates as a disconnected security process often fails to influence actual development decisions. Embedding threat modeling into standard architecture review, design documentation, and sprint planning ensures findings translate into action.
Common Threat Modeling Mistakes
Teams new to threat modeling often fall into predictable traps that undermine effectiveness. Understanding these common mistakes helps avoid wasted effort and disillusionment.
Analysis paralysis occurs when teams attempt to identify every possible threat before taking action. Threat modeling should be iterative, not exhaustive. Focus on high-risk areas first, implement mitigations, then revisit for deeper analysis. Perfect is the enemy of good, especially for teams building their first threat model.
Scope creep produces threat models so broad and detailed that nobody can maintain or reference them. Effective threat models focus on security-relevant architecture at the appropriate abstraction level. Implementation details belong in security requirements or secure coding guidelines, not threat models.
Security-only participation creates threat models divorced from development reality. When security teams threat model in isolation, they miss implementation details that create or eliminate vulnerabilities, propose impractical mitigations that never get implemented, and fail to build security awareness among developers who actually write the code.
Modeling what you hope to build rather than what actually exists leads to threat models that incorrectly assume security controls are present. Threat modeling should reflect current architecture, not aspirational design. Document gaps between current state and desired state explicitly rather than pretending they don’t exist.
One-and-done threat modeling produces documentation that immediately becomes stale. Systems evolve continuously through new features, architectural changes, and deployment modifications. Threat models require regular updates to remain relevant. Teams should treat threat models as living documents that evolve with the system.
Focusing exclusively on prevention ignores detection and response. No system is perfectly secure, which means threat models should consider what happens when prevention fails. Detective controls provide visibility into attacks in progress, while response capabilities limit damage.
Ignoring business context leads to misallocated security resources. A threat model that treats all assets as equally critical might invest enormous effort protecting low-value data while under-protecting crown jewels. Effective threat modeling prioritizes based on business impact, not just technical feasibility of attacks.
How Praetorian Helps with Threat Modeling
Threat models are hypotheses. Praetorian tests them. The best way to validate whether your threat model accurately captures real risks is to have skilled attackers attempt the attacks it describes.
Praetorian’s offensive security engineers, including Black Hat and DEF CON presenters, published researchers, and CVE contributors, bring real-world attacker perspective to your threat modeling process. Through penetration testing and red team engagements, they validate which theoretical threats are actually exploitable in your specific environment and uncover attack paths your team may not have considered.
Praetorian Guard then keeps that validation continuous. By unifying attack surface management, vulnerability management, breach and attack simulation, continuous penetration testing, cyber threat intelligence, and attack path mapping into one managed service, Guard ensures your threat model stays current as your environment evolves. New assets are automatically discovered. New attack paths are continuously mapped. And every finding is human-verified before it reaches your team.