A Structured Security Approach

Our assessments follow a structured methodology focused on real risk — not checkbox findings. Each phase builds on the last, and every finding is manually validated before it reaches the report.

01
Reconnaissance & Attack Surface Mapping
What we do
Before any vulnerability can be tested, we need a complete picture of your externally reachable attack surface. This phase is about building an accurate target map — subdomains, services, technologies, and cloud assets — rather than testing what the client remembers to tell us about.
Techniques
Passive DNS enumeration and certificate transparency log analysis
Subdomain discovery via brute-force and permutation wordlists
Technology stack fingerprinting (server headers, JS frameworks, CDN)
Port and service scanning across discovered IP ranges
Cloud storage enumeration (S3, GCS, Azure Blob)
Passive OSINT: job postings, GitHub, Shodan, security.txt
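The certificate transparency technique above can be sketched as a small parser for crt.sh's JSON output (the response to a query like `https://crt.sh/?q=%.example.com&output=json`). The `name_value` field is crt.sh's; the sample data and function name are illustrative:

```python
# Sketch: extract unique hostnames from crt.sh JSON entries.
def subdomains_from_crtsh(entries):
    names = set()
    for entry in entries:
        # name_value can pack several SAN entries separated by newlines
        for name in entry.get("name_value", "").splitlines():
            name = name.strip().lstrip("*.").lower()
            if name:
                names.add(name)
    return sorted(names)

# Illustrative sample of what crt.sh returns:
sample = [
    {"name_value": "www.example.com\napi.example.com"},
    {"name_value": "*.dev.example.com"},
]
print(subdomains_from_crtsh(sample))
# ['api.example.com', 'dev.example.com', 'www.example.com']
```

In practice this feeds the candidate list that brute-force and permutation tools then expand on.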
Tools used
amass, subfinder, dnsx, nmap, Shodan, crt.sh, httpx, manual OSINT
Phase output
Confirmed in-scope asset inventory (subdomains, IPs, open ports)
Technology stack map used to direct subsequent testing
Any out-of-scope assets flagged for client awareness
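At its core, the port-scanning step reduces to TCP connect checks like the one below. nmap does this far faster and with more probe types; this minimal sketch tests against a throwaway local listener rather than any real target:

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """A single TCP connect check, the simplest probe a port scanner performs."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a throwaway listener on localhost, so nothing external is touched.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))                  # OS assigns a free port
listener.listen(1)
port = listener.getsockname()[1]
open_result = is_port_open("127.0.0.1", port)    # True: listener is up
listener.close()
closed_result = is_port_open("127.0.0.1", port)  # False: connection refused
```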
02
Application Mapping & Analysis
What we do
Most scanners skip this step and fire tests blindly. We map the application's structure, authentication flows, API surface, and third-party integrations first. Understanding what the application does is what allows us to find business logic flaws that no automated tool will ever surface.
Techniques
Authenticated and unauthenticated application crawling
JavaScript bundle analysis for exposed secrets, routes, and API keys
API endpoint enumeration and schema discovery (REST and GraphQL)
Authentication flow mapping (login, registration, password reset, OAuth)
Third-party integration review (payment processors, analytics, CDNs)
Source map and build artifact discovery
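The secret-hunting part of JavaScript bundle analysis is, at its simplest, pattern matching. Tools like trufflehog go much further (entropy analysis, verified detectors), but the core idea looks like this; the patterns shown are a tiny illustrative subset, and the sample bundle text uses AWS's documented example key:

```python
import re

# High-signal secret patterns (illustrative subset; real scanners carry hundreds).
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "google_api_key": re.compile(r"AIza[0-9A-Za-z_\-]{35}"),
}

def scan_bundle(text: str):
    """Return (pattern_name, match) pairs found in a JS bundle's source."""
    return [(name, m.group(0))
            for name, pat in PATTERNS.items()
            for m in pat.finditer(text)]

# Hypothetical minified bundle snippet:
sample_js = 'const cfg={key:"AKIAIOSFODNN7EXAMPLE",region:"us-east-1"};'
print(scan_bundle(sample_js))
# [('aws_access_key', 'AKIAIOSFODNN7EXAMPLE')]
```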
Tools used
Burp Suite Pro, ffuf, gau, waybackurls, trufflehog, manual review, Browser DevTools
Phase output
Complete endpoint inventory including API routes and authentication paths
Identified high-value targets for focused security testing
Any credentials or secrets found during analysis (immediate notification)
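For GraphQL schema discovery, the standard starting point is an introspection query. A minimal request builder might look like the following; the endpoint URL is hypothetical, and the query is a trimmed version of the full introspection document:

```python
import json
import urllib.request

# Trimmed introspection query: enough to confirm introspection is enabled
# and list type names. The full query also pulls fields, args, and directives.
INTROSPECTION_QUERY = "{ __schema { queryType { name } types { name kind } } }"

def build_introspection_request(endpoint: str) -> urllib.request.Request:
    body = json.dumps({"query": INTROSPECTION_QUERY}).encode()
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical target; in practice, send with urllib.request.urlopen(req).
req = build_introspection_request("https://app.example.com/graphql")
```

A production API that answers this query with a full schema has introspection enabled, which itself becomes a (usually low-severity) finding.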
03
Vulnerability Testing
What we do
Automated scanning runs first to catch surface-level issues efficiently. Then manual testing targets the areas that matter: authentication flows, authorization controls, API security, and application-specific logic that no tool can reason about.
Testing areas
Authentication: brute-force protections, credential stuffing, account enumeration
Authorization: horizontal and vertical privilege escalation, IDOR testing
Injection: SQL, NoSQL, command, template injection across all input surfaces
Session management: fixation, hijacking, cookie security attributes
API security: rate limiting, mass assignment, unauthenticated access
CORS and cross-origin request handling
Business logic: workflow bypasses, race conditions, price manipulation
Configuration: security headers, TLS strength, error disclosure
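As one concrete example from the list above, an IDOR check reduces to comparing what two differently-privileged sessions can retrieve for the same object ID. The sketch below runs against a stub backend with a deliberately missing authorization check; `fetch`, the route, and the tokens are all illustrative:

```python
def check_idor(fetch, object_id, owner_session, other_session):
    """fetch(session, object_id) -> (status, body). Flags a potential IDOR
    when a non-owner successfully retrieves the owner's object."""
    owner_status, owner_body = fetch(owner_session, object_id)
    other_status, other_body = fetch(other_session, object_id)
    return owner_status == 200 and other_status == 200 and owner_body == other_body

# Stub backend for /invoices/<id> with no ownership check (the vulnerability):
records = {42: {"owner": "alice", "total": "19.99"}}
def fetch(session, object_id):
    rec = records.get(object_id)
    return (200, rec) if rec else (404, None)

print(check_idor(fetch, 42, "alice-token", "bob-token"))
# True: bob reads alice's invoice
```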
Tools used
Burp Suite Pro, Nuclei, sqlmap, ffuf, custom scripts, manual exploitation
Automated scanners are used as a starting point, not the endpoint. Every finding from automated tools is manually reviewed before being included in the report. Scanner output that cannot be manually confirmed is discarded — not reported.
04
Finding Validation & Triage
What we do
This is the step most assessors skip — and the reason security reports often feel disconnected from reality. We eliminate false positives and contextualize severity against your specific architecture, compensating controls, and business context before a finding is written up.
Validation questions
Is the finding actually exploitable, or just theoretically possible?
What's the realistic attack chain, not just "an attacker could..."?
Does an existing compensating control already reduce the risk?
What's the real business impact: data exposure, session compromise, availability?
Is severity calibrated to context, not just CVSS base score?
Would remediation cost be proportional to the actual risk reduction?
Phase output
Validated finding list with false positives removed
Severity ratings calibrated to real-world exploitability and business impact
Prioritized remediation order based on actual risk, not CVSS score alone
Context
A missing Content-Security-Policy header with no XSS finding is informational, not medium severity. A CORS misconfiguration on a public API that serves no authenticated data is low, not medium. Severity inflation wastes your remediation budget on non-issues and trains developers to ignore security findings.
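Part of this calibration can be made mechanical. The toy adjustment below mirrors the examples above; the field names and downgrade rules are illustrative, and real triage remains a human judgment:

```python
SEVERITIES = ["informational", "low", "medium", "high", "critical"]

def downgrade(severity: str, steps: int = 1) -> str:
    return SEVERITIES[max(SEVERITIES.index(severity) - steps, 0)]

def calibrate(finding: dict) -> str:
    """Adjust a base severity for context."""
    sev = finding["base_severity"]
    if finding.get("compensating_control"):   # e.g. WAF rule, network restriction
        sev = downgrade(sev)
    if not finding.get("exploit_confirmed"):  # theoretical only, never demonstrated
        sev = downgrade(sev)
    return sev

# Missing CSP header, no demonstrated XSS, framework auto-escaping in place:
print(calibrate({"base_severity": "medium",
                 "compensating_control": True,
                 "exploit_confirmed": False}))
# informational
```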
05
Reporting & Retest
What we do
Every finding is documented with specific evidence, realistic impact, and actionable remediation guidance — not copy-pasted OWASP recommendations. The report is written for two audiences: a business owner who needs to understand the risk, and a developer who needs to know exactly what to fix.
Report structure
Executive summary: plain-English risk overview for non-technical stakeholders
Risk overview: severity distribution and primary risk categories
Technical findings: evidence, impact, and fix guidance per vulnerability
Remediation guidance: specific, implementation-ready instructions
Phase output
Finding ID, severity, and category classification
Actual evidence: request/response pairs, file paths, confirmed API calls
Scoped impact: what an attacker can realistically do with this, and what they can't
Implementation-ready remediation with code examples where applicable
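Structurally, each finding carries the fields listed above. As a sketch (the ID scheme, field names, and example values are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    finding_id: str                 # e.g. "SD-0001" (hypothetical ID scheme)
    severity: str                   # calibrated severity, not raw CVSS
    category: str                   # e.g. "Authorization"
    evidence: list = field(default_factory=list)  # request/response pairs, paths
    impact: str = ""                # what an attacker can realistically do
    remediation: str = ""           # implementation-ready fix guidance

f = Finding(
    finding_id="SD-0001",
    severity="high",
    category="Authorization",
    evidence=["GET /api/invoices/42 -> 200 as non-owner"],
    impact="Any authenticated user can read other users' invoices.",
    remediation="Enforce object-level ownership checks on every record access.",
)
```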
After you remediate the identified findings, SurfaceDelta provides one verification retest at no additional cost to confirm the issues are resolved and no regressions were introduced. You receive an updated report reflecting the confirmed fixes.
Ready to start?

Request a vulnerability assessment

Typical turnaround: 5–7 business days. Includes one verification retest.

Request Assessment