How to Scope a Penetration Test
Scoping a penetration test is where most engagements live or die. Get it wrong, and you’ll either waste your budget testing systems that don’t matter or miss critical vulnerabilities in the assets you forgot to include. Get it right, and you’ll walk away with actionable intelligence that actually improves your security posture. The difference between these outcomes usually comes down to decisions made weeks before any testing begins.
Most organizations approach pen test scoping backwards. They start with a list of IP addresses or domain names, hand it to a vendor, and hope for the best. But effective scoping requires thinking strategically about what you’re trying to protect, what attackers actually care about, and how your business generates value. A well-scoped engagement tells a story about your attack surface and the threats that matter most to your organization.
Why Scoping Matters More Than You Think
The scope document is not just administrative paperwork. It’s the contract between you and your security testing provider about what success looks like. When executives ask “are we secure?” after a pen test, the honest answer is always “within the boundaries we tested.” Those boundaries are defined during scoping.
Under-scoping leads to false confidence. You might get a clean report because you only tested your marketing website when attackers actually target your customer portal and API endpoints. Over-scoping wastes resources and dilutes focus. Testing everything at once often means testing nothing particularly well. You end up with a surface-level assessment across 200 assets instead of deep analysis of the 20 that actually matter.
Budget plays into this, obviously. But the more important constraint is usually time and attention. Even with unlimited budget, your internal teams can only respond to findings, provide access, and remediate vulnerabilities at a certain pace. Scoping helps align the cadence of testing with your organization’s capacity to act on results.
This is why companies like Praetorian emphasize understanding your business context before proposing a scope. The goal is not to maximize billable hours. It’s to identify the testing approach that gives you the highest return on your security investment. For many organizations, that means starting narrow and deep rather than broad and shallow.
Defining Clear Objectives Before Anything Else
Before you list a single IP address or domain name, answer this question: what are you trying to learn? Different objectives demand different scoping approaches. Are you checking compliance boxes for a customer audit? Testing a new product before launch? Validating whether last quarter’s remediation actually worked? Simulating a specific threat actor?
Compliance-driven engagements have predefined requirements that shape your scope. PCI DSS mandates testing of cardholder data environments. HIPAA focuses on systems that store or process protected health information. SOC 2 Type II often requires annual penetration testing of customer-facing applications. These frameworks give you a starting point, but the minimum compliance scope rarely covers your full attack surface.
Pre-launch testing for new products or features demands a different approach. Here you’re looking for design flaws and implementation bugs before customers are exposed to risk. The scope should cover all components of the new functionality, including backend APIs, frontend interfaces, data flows, and third-party integrations. This is where working with a provider experienced in modern application architectures pays off. Praetorian’s team includes researchers who have discovered vulnerabilities in everything from mobile banking apps to cloud infrastructure platforms.
Threat modeling exercises help define scope for advanced engagements. If your primary concern is ransomware operators, the scope should emphasize perimeter defenses, credential security, and lateral movement paths. If you’re worried about insider threats, testing should focus on access controls, data loss prevention, and privilege escalation. If nation-state actors keep you up at night, you need red team operations that simulate sophisticated attack chains rather than traditional vulnerability assessments.
Types of Penetration Test Scope
Network Infrastructure Testing
Network penetration testing examines your perimeter defenses and internal network segmentation. External network tests start from the perspective of an internet attacker with no prior access. Internal network tests assume the attacker has already breached your perimeter or is operating as a malicious insider.
For external network scope, you typically provide IP ranges, domain names, and any public-facing services. This includes your primary website, email servers, VPN endpoints, and any other systems directly accessible from the internet. The tester will perform reconnaissance to discover additional assets, but explicitly defining what’s in scope prevents confusion about whether that old forgotten server in a cloud account counts.
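As a minimal sketch, that explicit boundary can be encoded so reconnaissance results are checked against the authorized scope before testing begins. Everything below is hypothetical: the CIDR ranges are RFC 5737 documentation blocks and `example.com` stands in for a client's real domains.

```python
import ipaddress

# Hypothetical in-scope definition; replace with the ranges and domains
# actually authorized in the scope document.
IN_SCOPE_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),   # external perimeter
    ipaddress.ip_network("198.51.100.0/25"),  # DMZ segment
]
IN_SCOPE_DOMAINS = {"example.com", "portal.example.com"}

def host_in_scope(ip: str) -> bool:
    """Return True if a discovered IP falls inside an authorized range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in IN_SCOPE_RANGES)

def domain_in_scope(hostname: str) -> bool:
    """Match exact in-scope domains or subdomains of an in-scope apex."""
    return hostname in IN_SCOPE_DOMAINS or any(
        hostname.endswith("." + d) for d in IN_SCOPE_DOMAINS
    )
```

A check like `host_in_scope("203.0.113.42")` returns True while an address outside the authorized ranges returns False, which is exactly the question a tester must answer when reconnaissance turns up that forgotten cloud server.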
Internal network testing requires a different conversation. Do you want the tester starting from a single compromised workstation? A wireless guest network? A contractor VPN connection? Each starting point tells a different story about how an attacker might move through your environment. The scope should specify which network segments are in scope, whether wireless networks are included, and any restrictions on testing production systems versus development environments.
Web Application Testing
Web application penetration testing focuses on vulnerabilities in your custom-developed software and how it handles user input, authentication, and sensitive data. The scope needs to define which applications matter most and how much testing depth you want.
Simple scoping lists URLs and says “test everything.” Better scoping explains the application’s purpose, user roles, critical functionality, and known risk areas. If your web app has 50 pages but only 3 handle payment processing, that context helps testers prioritize where to spend time. If certain user roles have administrative privileges, those code paths deserve extra scrutiny.
API testing deserves specific attention in modern applications. Many organizations have public APIs, partner APIs, and internal APIs that follow different security models. Your scope should specify which API endpoints are included, whether documentation is available, and what level of access credentials will be provided. Testing an API without documentation is possible but much less efficient than testing with OpenAPI specs or Postman collections.
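To illustrate why documentation matters, a tester handed an OpenAPI document can flatten it into an endpoint inventory before deciding testing depth per endpoint. The spec fragment below is hypothetical, and `endpoints_from_spec` is an illustrative helper rather than part of any standard tooling.

```python
# Minimal sketch: turn an OpenAPI 'paths' object into a testable inventory.
HTTP_METHODS = {"get", "post", "put", "patch", "delete", "head", "options"}

spec = {
    "servers": [{"url": "https://api.example.com/v1"}],  # hypothetical API
    "paths": {
        "/users/{id}": {"get": {}, "delete": {}},
        "/payments": {"post": {}},
    },
}

def endpoints_from_spec(spec: dict) -> list[str]:
    """Flatten an OpenAPI spec into sorted 'METHOD url' strings."""
    base = spec["servers"][0]["url"]
    endpoints = []
    for path, operations in spec["paths"].items():
        for method in operations:
            if method in HTTP_METHODS:
                endpoints.append(f"{method.upper()} {base}{path}")
    return sorted(endpoints)
```

With a real spec, this inventory becomes the artifact you and the tester agree on: which of these endpoints are in scope, and with what credentials.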
Single-page applications and mobile app backends require different scoping considerations than traditional server-rendered websites. The scope should address whether testing includes the client-side code, the backend API, or both. It should specify whether testers can reverse engineer mobile apps or if you’ll provide source code access.
Cloud Environment Testing
Cloud infrastructure testing examines your AWS, Azure, or GCP configurations for security weaknesses. This is where traditional network penetration testing models break down because cloud environments are highly dynamic. IP addresses change constantly, services scale up and down, and your attack surface shifts based on how developers deploy resources.
Scoping cloud tests requires specifying which accounts are in scope, which regions, and which services. Are you testing compute instances, storage buckets, databases, serverless functions, container orchestration, or all of the above? Each cloud service has its own security model and potential misconfigurations.
The scope should also address whether testing includes cloud-specific attack paths like stolen IAM credentials, misconfigured access policies, or compromised CI/CD pipelines. Many organizations separate their production and development cloud accounts. Clarify whether both are in scope or if you want to start with lower-risk environments.
Praetorian Guard approaches cloud security differently from traditional pen testing models by combining automated scanning with manual validation. This continuous monitoring approach means your cloud security posture is evaluated constantly rather than during a two-week annual engagement. The platform maps your entire cloud infrastructure, identifies misconfigurations, and validates whether vulnerabilities are actually exploitable rather than just theoretically possible.
API Security Testing
APIs power modern applications but often have weaker security controls than web interfaces. API penetration testing scope should define which endpoints matter most, what authentication methods are used, and what kinds of data the API exposes.
REST APIs are the most common target. Scope should include base URLs, available endpoints, and whether the API is public, partner-facing, or internal only. GraphQL APIs need different testing approaches because of their flexible query language and potential for information disclosure through introspection.
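The introspection concern can be made concrete. The sketch below only builds the JSON payload of a standard GraphQL introspection probe; the endpoint and transport are deliberately omitted, and sending such a probe is only appropriate against explicitly in-scope systems.

```python
import json

# Standard GraphQL introspection probe: if the server answers with a full
# type listing, introspection is exposed and the schema is discoverable.
INTROSPECTION_QUERY = """
query {
  __schema {
    types { name }
  }
}
"""

# POST this body with Content-Type: application/json to the GraphQL endpoint.
payload = json.dumps({"query": INTROSPECTION_QUERY})
```

A production API that returns the `__schema` type listing is handing attackers a map of every queryable field, which is why scope discussions for GraphQL should cover introspection explicitly.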
Legacy SOAP APIs still exist in many enterprise environments. If these are in scope, testers need WSDL files or documentation to understand available operations. The scope should also specify whether backend systems accessed by APIs are in scope or if testing stops at the API gateway layer.
Authentication and authorization testing is critical for APIs. The scope should clarify whether testers should attempt to bypass authentication, escalate privileges between user roles, or access data belonging to other users. These are standard API security tests, but some organizations get nervous about “breaking” authentication, so explicit scope definition prevents surprises.
Social Engineering and Physical Testing
Social engineering tests attempt to manipulate your employees into revealing credentials or granting unauthorized access. Physical penetration testing tries to gain access to facilities where IT systems are housed. Both require careful scoping because they involve human targets and potential legal exposure.
For phishing campaigns, scope includes which employees are targets, what kinds of pretexts are acceptable, and what happens if someone falls for the test. Do testers stop after getting credentials, or do they use compromised accounts to see how far they can go? Some organizations want full-scale simulation where testers behave like real attackers. Others want gentler approaches that focus on awareness rather than exploitation.
Physical security testing scope must define which facilities are in scope, what methods are permitted (tailgating, lock picking, social engineering guards), and what success looks like. Are testers trying to reach a specific server room? Access executive offices? Plant a rogue device on the network? Clear objectives prevent security theater where testers prove they can sneak into a lobby but don’t demonstrate actual risk to critical assets.
Both social engineering and physical testing require more legal and HR coordination than technical testing. Your scope document should reference any separate agreements addressing liability, employee notification (or lack thereof), and what happens if someone calls the police on your penetration testers.
Black Box vs Gray Box vs White Box Testing
The amount of information you provide to testers dramatically impacts what they find and how long it takes. These approaches aren’t better or worse in absolute terms. They’re different tools for different objectives.
Black box testing gives testers nothing except a target name. They start where any internet attacker would start, using reconnaissance to discover assets and vulnerabilities. This approach tests your security from an outsider’s perspective and validates whether your security-through-obscurity measures actually work. The downside is efficiency. Testers spend significant time on reconnaissance and asset discovery that could be spent finding vulnerabilities.
Gray box testing provides some information but not everything. You might give testers IP ranges and URLs but not credentials. Or you might provide low-privilege user accounts but not administrative access. This is the most common scoping approach because it balances realism with efficiency. Testers don’t waste time rediscovering things you already know about, but they still have to work to escalate privileges and move laterally.
White box testing provides full transparency. Testers get source code, architecture diagrams, credentials for all user roles, and direct access to developers. This approach finds the most vulnerabilities because testers can examine code and configurations directly rather than probing from the outside. It’s particularly valuable for pre-launch testing where you want to find and fix everything before customers are exposed.
Praetorian’s methodology combines all three approaches depending on what stage of testing you’re in. Initial reconnaissance might be black box to understand what attackers see. Then testing shifts to gray box for efficient vulnerability discovery. Finally, when testers find something interesting, they might request white box access to determine root cause and whether similar issues exist elsewhere in the codebase.
What to Include and Exclude From Scope
Defining what’s in scope is the easy part. The harder part is explicitly defining what’s out of scope. Without clear boundaries, you risk testers accidentally affecting systems you didn’t mean to include or wasting time on assets that don’t matter.
In-scope assets should be listed explicitly with IP ranges, domain names, or application URLs. Provide context for why each asset matters. “Production customer portal” is more useful than “app.example.com” because it tells testers the asset handles sensitive data and deserves extra attention. Include information about asset ownership, hosting location, and criticality to your business.
Out-of-scope exclusions protect fragile systems and third-party assets you don’t control. Legacy systems that might crash under scanning should be explicitly excluded with explanation. Third-party SaaS platforms where you’re just a customer should be out of scope unless you have written permission from the vendor. Development and staging systems might be out of scope if you’re only concerned about production security, or they might be in scope if you worry about pivot attacks through lower-security environments.
Testing restrictions are different from scope exclusions. A system might be in scope but with restrictions like “no denial of service testing” or “no automated scanning of this database.” Document these restrictions clearly so testers understand boundaries. At the same time, be careful about creating so many restrictions that testing becomes meaningless. If you prohibit everything an attacker would actually do, you’re not really testing security.
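One way to keep exclusions and restrictions from blurring together is to model them as separate fields in the scope document. The sketch below uses hypothetical asset names and an illustrative schema, not any standard scoping format.

```python
from dataclasses import dataclass, field

@dataclass
class ScopeEntry:
    asset: str
    context: str                                  # why this asset matters
    restrictions: list[str] = field(default_factory=list)

@dataclass
class Scope:
    in_scope: list[ScopeEntry]
    out_of_scope: list[str]                       # explicit exclusions

    def may_test(self, asset: str) -> bool:
        """Exclusions win outright; everything else must be listed."""
        if asset in self.out_of_scope:
            return False
        return any(e.asset == asset for e in self.in_scope)

    def restrictions_for(self, asset: str) -> list[str]:
        """An in-scope asset may still carry testing restrictions."""
        for e in self.in_scope:
            if e.asset == asset:
                return e.restrictions
        return []

# Hypothetical engagement: one restricted production asset, one fragile
# legacy system excluded entirely.
scope = Scope(
    in_scope=[
        ScopeEntry(
            asset="app.example.com",
            context="Production customer portal",
            restrictions=["no denial of service testing"],
        ),
    ],
    out_of_scope=["legacy-erp.example.com"],
)
```

The distinction the model enforces is the one the text draws: an excluded asset is never touched, while a restricted asset is fair game within stated limits.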
Praetorian Guard’s continuous testing model changes this conversation because scope can evolve over time. Instead of locking in a fixed list of assets for a two-week engagement, the platform continuously discovers your attack surface and adjusts testing accordingly. When you deploy new infrastructure or applications, they’re automatically included. When you decommission old systems, they drop out of scope. This dynamic approach better reflects how modern organizations actually work.
Setting Rules of Engagement
Rules of engagement define how testers can interact with your systems and what happens when they find vulnerabilities. This is the operational playbook that governs the engagement from kickoff to report delivery.
Timing windows specify when testing can occur. Some organizations restrict testing to business hours so staff are available to respond to issues. Others prefer overnight or weekend testing to minimize impact on production systems. The scope should specify time zones and any blackout dates, such as major sales events or system maintenance windows.
Communication protocols define how testers report findings in real time. Critical vulnerabilities like SQL injection or authentication bypass should be reported immediately, not saved for the final report. The scope should list primary and backup contacts with phone numbers and expected response times. Decide upfront whether testers should stop when they find critical issues or continue testing to discover the full extent of vulnerabilities.
Authorization evidence protects everyone involved. The scope document itself serves as authorization, but you might also need letters for third-party hosting providers or ISPs explaining that testing is authorized and not an actual attack. Some organizations create “get out of jail free” cards for physical penetration testers with contact information for lawyers or executives who can confirm testing is legitimate.
Data handling requirements specify what happens to sensitive information discovered during testing. If testers find customer databases, PII, or trade secrets, the scope should mandate how that data is handled, stored, and destroyed. Many organizations require encrypted communications and prohibit testers from downloading sensitive data even if vulnerabilities allow it.
Timeline and Scheduling Considerations
Penetration test timelines depend on scope complexity, access requirements, and how quickly your team can respond to tester questions. Realistic scheduling prevents rushed testing that misses vulnerabilities and reduces disruption to your operations.
External network and web application tests typically run one to three weeks depending on scope size. Simple applications might need only a few days. Complex applications with dozens of user roles and hundreds of features might need a month. The key is matching testing duration to scope complexity. Too little time means testers only scratch the surface. Too much time hits diminishing returns where additional days find fewer new issues.
Internal network testing requires coordination with your IT team for access. If testers need physical presence in your office, schedule around travel time and facility access. If testing happens remotely through VPN, provision credentials and ensure testers can reach internal systems before the engagement officially starts. Nothing burns budget faster than having penetration testers sitting idle because firewall rules block their access.
Retesting after remediation should be included in your initial scope. Most providers include some amount of retesting to validate fixes. If remediation takes months, you might need a separate retest engagement. But ideally, the scope includes time for testers to return after you’ve patched vulnerabilities and confirm they’re actually fixed.
Continuous testing models eliminate many scheduling headaches. Praetorian Guard runs on an ongoing basis rather than being bounded by artificial start and end dates. This means testing adapts to your deployment schedule. New features are tested when they launch, not months later during your annual pen test. The sine wave methodology alternates between broad reconnaissance and deep exploitation, providing consistent security validation without the feast-or-famine cycle of annual engagements.
Common Scoping Mistakes
The first mistake is treating scope as a checkbox exercise. Organizations copy last year’s scope without considering what’s changed. Did you migrate to the cloud? Launch new customer-facing features? Acquire another company? Your scope should evolve as your technology and business model evolve.
The second mistake is hiding information from testers. Some organizations deliberately withhold details thinking it makes the test more realistic. But unless you’re explicitly paying for a black box assessment, this just wastes time. Real attackers have unlimited time for reconnaissance. Your penetration testers have a fixed budget. Give them the information they need to focus on finding vulnerabilities rather than rediscovering your network architecture.
The third mistake is unclear prioritization. Not all systems matter equally, but scopes often present everything as equally important. Be honest about what keeps executives up at night. Is it the customer database? The payment processing system? Intellectual property in engineering networks? Clear priorities help testers allocate time appropriately and ensure high-risk systems get thorough examination.
The fourth mistake is scope creep without budget adjustment. Testing starts with 10 web applications, but halfway through, you ask testers to also look at 5 new APIs and a mobile app. Scope changes happen, but they should come with timeline and budget discussions. Otherwise, you end up with surface-level testing of everything instead of deep testing of what matters.
The fifth mistake is treating penetration testing as a one-time validation. You test once, get a report, fix some things, and declare success. But your attack surface changes continuously. New code ships every sprint. Cloud configurations change daily. Last year’s clean pen test report says nothing about this year’s security posture. Smart organizations either schedule regular testing or adopt continuous models that match the pace of modern development.
The Continuous Testing Alternative
Traditional penetration testing works on an annual or semi-annual cycle. You scope an engagement, wait for a testing slot, run the test over a few weeks, spend months remediating, and then start planning next year’s test. This model made sense when infrastructure was static and application releases happened quarterly. It’s increasingly mismatched to how modern organizations operate.
Continuous penetration testing inverts this model. Instead of periodic deep dives, you get ongoing security validation that adapts to your changing environment. The scope isn’t fixed at the beginning of a two-week engagement. It evolves as your attack surface evolves. New assets are discovered and tested automatically. Remediated vulnerabilities are retested continuously to ensure fixes hold up over time.
This approach solves several scoping challenges inherent to traditional engagements. You don’t need to predict in January which systems will matter most in November when testing finally happens. You don’t need separate scopes for different asset types because testing covers your full environment. You don’t need to choose between testing breadth and depth because continuous models provide both over time.
Praetorian Guard exemplifies this continuous approach. The platform combines automated attack surface discovery with manual penetration testing by elite security researchers. It continuously maps your external attack surface, identifies vulnerabilities through automated scanning, and then validates findings through manual exploitation. This eliminates false positives that plague automated scanners while providing coverage that matches the scale of modern environments.
The Guard methodology alternates between reconnaissance phases that discover new assets and exploitation phases that deeply test known assets. This sine wave pattern provides consistent security validation without overwhelming your team with findings. Instead of getting a 200-page report once a year, you get a continuous stream of validated vulnerabilities prioritized by actual risk to your organization.
Scoping Guard is simpler than traditional engagements because you’re defining your security program rather than individual testing projects. You specify which domains and IP ranges belong to your organization. The platform handles asset discovery from there. You define which types of testing matter most to your business. The platform allocates resources accordingly. As your environment changes, the testing scope adjusts automatically.
How Praetorian Helps With Scoping
Praetorian has conducted thousands of penetration tests across every industry and technology stack. This experience shapes how the team approaches scoping conversations. Rather than taking a list of IP addresses and signing a contract, Praetorian’s engagement process starts with understanding your business, your threat model, and your security maturity.
Initial scoping calls focus on your objectives rather than jumping straight to technical assets. What compliance frameworks apply to your organization? What kinds of security incidents keep executives up at night? What testing have you done before, and what did you learn? These conversations help identify the right testing approach before anyone starts listing domains and IP ranges.
Praetorian’s team includes researchers who regularly speak at Black Hat and DEF CON, contribute to open source security tools, and discover vulnerabilities in major platforms. This expertise means they can spot scoping gaps others might miss. If you’re migrating to microservices but your proposed scope only covers traditional web applications, they’ll flag that gap. If you’re testing cloud infrastructure but haven’t considered supply chain attacks through your CI/CD pipeline, they’ll raise that concern.
The Praetorian Guard platform provides asset discovery that validates and expands your initial scope. You might think you have 20 public-facing systems, but continuous reconnaissance often discovers forgotten assets, shadow IT, or sprawling cloud infrastructure that nobody fully mapped. This discovery process ensures testing actually covers your attack surface rather than just the assets you remembered to list.
Guard’s managed service model means you’re not just getting software and hoping it works. You have a dedicated team monitoring your security posture, validating findings, and providing context about which vulnerabilities matter most to your specific environment. This human-in-the-loop approach catches nuanced issues that automated tools miss and filters out false positives that waste your remediation resources.
Perhaps most importantly, Praetorian structures engagements around outcomes rather than checking compliance boxes. The goal is not to produce a report. It’s to help you understand your real security posture, prioritize limited resources effectively, and improve resilience against attacks that actually threaten your business. This outcome-focused approach starts with scoping conversations that dig deeper than “here’s our IP range, send us a quote.”