Penetration Testing for Web Applications: Process, Value and Typical Weaknesses
A pentest is not an automated scan. What a real penetration test delivers, how it runs, where the most common gaps are (OWASP Top 10) — and when the effort actually pays off.

"We ran a security scan, it was green" — and three months later a record still leaked that any logged-in user could have seen via a modified URL.
That scenario is not invented; it is the result of the most common misconception: that an automated scan and a penetration test are the same thing. This article explains the difference, the process of a real pentest, the typical gaps, and when it pays off.
Scan, pentest, code review, audit — four different things
- Automated scan: A tool checks known patterns (outdated libraries, missing headers). Fast, cheap, finds the obvious — but no logic flaws.
- Penetration test: A human attacks the application deliberately like a real adversary: bypass roles, trick permissions, abuse inputs, defeat business logic. Finds exactly what scanners cannot see.
- Code review: Looking into the source, not just the running app. Finds causes, not just symptoms.
- Security audit: Assesses processes, configuration and organisation, not just technology.
A scanner says "this lock is a known model". A pentester checks whether your door with your keys, roles and edge cases actually holds.
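The difference can be made concrete with a toy sketch. A scanner can mechanically flag a pattern such as a missing security header; it has no concept of whether user A may read user B's record. The header list and function below are illustrative, not a real scanner:

```python
# Minimal sketch of what an automated scanner does: pattern-match known
# issues, here a check for missing security headers. The set of
# "recommended" headers is an illustrative subset, not a complete list.

RECOMMENDED = {
    "content-security-policy",
    "strict-transport-security",
    "x-content-type-options",
}

def missing_security_headers(response_headers):
    """Return recommended headers absent from an HTTP response.

    Header names are compared case-insensitively, as HTTP requires.
    """
    present = {name.lower() for name in response_headers}
    return {h for h in RECOMMENDED if h not in present}
```

This is a pattern check: fast, cheap, repeatable. Whether a role model holds, or an ID in a URL leaks someone else's data, cannot be expressed as a header-style pattern at all — that is the pentester's job.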
How a pentest runs
A serious penetration test is structured, not chaotic poking:
- Scoping. What is tested (web app, API, login, roles), from which perspective (no account, as user, as admin), and what is explicitly off-limits (production vs staging, load tests).
- Reconnaissance. Map the attack surface: endpoints, parameters, roles, data flows.
- Exploitation. Targeted attacks on access control, authentication, inputs, configuration and business logic.
- Evidence. Every gap is documented reproducibly — with steps, impact and risk, not just "finding XY".
- Report & retest. Prioritised findings, concrete recommendations, and a retest that checks whether the fix actually holds.
The value is in the last step: a pentest without a retest is a snapshot, not a security gain.
The typical gaps: OWASP Top 10
The OWASP Top 10 (2021) is the recognised reference for web application risks. Number one since 2021 — and still number one in the current list — is Broken Access Control: users can reach data or actions they are not entitled to. In OWASP's analysis, 94 % of the applications analysed were tested for some form of broken access control, and the category had more occurrences than any other.
The recurring patterns in mid-sized companies:
- Broken Access Control (A01): Other people's records via manipulated IDs/URLs, API endpoints with no permission check, "hidden in the menu" as a security concept.
- Cryptographic Failures (A02): Sensitive data unencrypted or weakly protected.
- Injection (A03): Input interpreted as a command (SQL, commands, and in AI features as prompt injection — see Understanding Prompt Injection).
- Security Misconfiguration (A05): Default passwords, open debug endpoints, over-broad permissions.
- Vulnerable Components (A06): Outdated libraries with known holes.
- Logging/Monitoring Failures (A09): A successful attack goes unnoticed because nobody is recording.
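The injection pattern behind A03 comes down to one rule: input must travel as data, never as part of the command. A minimal sketch in Python with sqlite3 — table, names and payload are invented for illustration:

```python
import sqlite3

# Toy database for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

def find_user_unsafe(name):
    # VULNERABLE: user input is concatenated into the SQL string,
    # so the input can rewrite the query itself.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver passes the input strictly as
    # data, so it can never change the structure of the statement.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
# find_user_unsafe(payload) returns every row in the table;
# find_user_safe(payload) returns none.
```

The same principle applies beyond SQL: shell commands, LDAP filters — and, in AI features, prompts, where the "command channel" is natural language.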
By far the most common real damage in the SME context is A01 — and that is exactly what no scanner finds reliably, because it is logic, not a pattern.
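Why scanners miss A01: whether a lookup is allowed depends on business rules, not on a recognisable pattern. A minimal sketch of the "manipulated ID" case — records, owners and function names are hypothetical:

```python
# Hypothetical data store: each record belongs to one user.
RECORDS = {
    101: {"owner": "alice", "data": "Alice's invoice"},
    102: {"owner": "bob", "data": "Bob's invoice"},
}

def get_record_broken(user, record_id):
    # A01 pattern: the handler trusts the ID from the URL. Any
    # logged-in user can fetch any record by changing the number.
    return RECORDS.get(record_id)

def get_record_checked(user, record_id):
    # The fix: verify ownership server-side on every request.
    record = RECORDS.get(record_id)
    if record is None or record["owner"] != user:
        return None  # deny: not found, or not this user's record
    return record
```

Both versions return valid responses and identical headers for legitimate requests — which is exactly why a pattern-matching scanner cannot tell them apart, while a pentester logged in as "alice" simply tries ID 102.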
When a pentest pays off
Not every application needs a full annual pentest. But there are clear triggers:
- Before the go-live of an application with personal or business-critical data.
- After a major change to authentication, roles or payment logic.
- When a customer, partner or insurer requires it contractually.
- For regulatory relevance (GDPR-sensitive processing).
- When the app processes external content or triggers actions automatically.
Germany's BSI IT security situation report classifies the threat level as persistently high, and ENISA's Threat Landscape confirms web-based attacks as a constant. Security is not a project milestone but a state that has to be maintained.
What a good pentest report contains
- Findings prioritised by real risk, not tool score.
- For each gap: reproduction, impact, recommendation.
- A management summary (what it means for the business) and a technical part.
- An agreed retest after remediation.
A report that is just a list of "findings" with CVSS numbers helps nobody decide what to do first.
Checklist before commissioning
- Is the scope clear (what, from which role, staging vs production)?
- Is it tested manually, not just scanned?
- Are business logic and access control examined, not just headers?
- Is a retest part of the offer?
- Do we get a prioritised, readable report — not just a tool export?
- Is it clear who fixes the findings and who verifies?
Frequently asked questions
Isn't an automated scan enough? For known, surface-level issues, yes. For access control and business logic — the most common real damage — no. They complement each other; one does not replace the other.
Pentest on production or staging? Preferably a production-like staging environment with real data structures but no real personal data. Production only with clear rules.
How often? Before relevant releases and after major changes to auth/roles/payment — not a rigid "once a year" but event-driven.
What does it cost? Like software: depends on scope. A clearly bounded test of an app with login and three roles is calculable; "test everything" is not.
Conclusion
A penetration test is not a green checkmark but a controlled attack by a human — exactly where scanners are blind: access control, roles, business logic. A structured process, OWASP-oriented depth and a retest separate a real pentest from an expensive scan with a report.
Further reading
- Understanding Prompt Injection: Why AI Applications Need Their Own Security Checks — the sibling risk class for AI features.
- GDPR-Compliant AI Applications — security and data protection belong planned together.
Next step
Before go-live or after a change to roles and auth? Talk to us about a clearly scoped penetration test and security review with a prioritised report and a retest.
Sources
- OWASP, Top 10:2021 Web Application Security Risks — owasp.org
- OWASP, A01:2021 Broken Access Control — owasp.org
- BSI, Die Lage der IT-Sicherheit in Deutschland — bsi.bund.de
- ENISA, Threat Landscape — enisa.europa.eu