OWASP Top 10: The Developer's Guide to Not Getting Hacked
Published by Pentesty · Security Practitioner Series
You didn't set out to write vulnerable code. Nobody does.
But every year, the same categories of vulnerabilities show up in production applications across every industry, every stack, and every team size. The OWASP Top 10 exists because these aren't random mistakes. They're predictable patterns, which means they're also preventable ones.
This guide walks through each one from the perspective of someone building software. Not "here's how an attacker exploits it," but "here's what's actually happening in your code, and what a safer version looks like." That framing matters, because understanding the root cause is what makes you a developer who doesn't repeat the pattern.
A01: Broken Access Control
This is the number one risk on the list, and once you see the pattern, you'll start noticing it everywhere.
Access control is the part of your application that answers the question: is this user allowed to do this thing? When it's broken, users can act outside their intended permissions. That means reading other people's data, modifying records they don't own, accessing admin functions from a regular account, or elevating their own privileges.
The failure usually isn't dramatic. It's a missing check. A developer builds a feature, wires up the UI to only show it to admins, and forgets to enforce the same restriction on the API endpoint that UI calls. The menu disappears for regular users. The endpoint doesn't.
A common version of this is an application that exposes /api/orders/4821 and doesn't verify that the requesting user owns order 4821. Anyone who discovers the pattern can walk through other people's orders just by changing the number, the classic insecure direct object reference (IDOR). In our previous article on pentest reports, we saw how findings like these end up buried in scanner output without enough context to prioritize them.
The fix isn't complicated. Every request that touches a protected resource needs a server-side check that verifies the requester has the right to do what they're asking. The UI is not an access control layer. The frontend is not a trust boundary.
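A minimal sketch of what that server-side check looks like. The data store, route shape, and names here (`ORDERS`, `get_order`, `Forbidden`) are hypothetical; the point is that the ownership check runs on every request, regardless of what the UI shows or hides.

```python
# Hypothetical in-memory data for illustration.
ORDERS = {
    4821: {"owner_id": "alice", "total": 99.50},
    4822: {"owner_id": "bob", "total": 12.00},
}

class Forbidden(Exception):
    """Raised when the requester does not own the resource."""

def get_order(order_id: int, requester_id: str) -> dict:
    order = ORDERS.get(order_id)
    if order is None:
        raise KeyError(order_id)
    # The critical line: verify ownership on the server, every time.
    # Hiding the link in the UI is not a substitute for this check.
    if order["owner_id"] != requester_id:
        raise Forbidden(f"user {requester_id!r} may not read order {order_id}")
    return order
```

In a real application this check usually lives in a shared authorization layer rather than being repeated in every handler, so it can't be forgotten on one endpoint.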
Platform software can violate the same rule at scale: when session state is trusted without strict validation, a single parsing bug can collapse the entire trust boundary. For a real-world hosting example, see our breakdown of CVE-2026-41940 (cPanel CRLF / authentication bypass).
A02: Cryptographic Failures
This category used to be called "Sensitive Data Exposure," and that name was actually more useful. The point isn't cryptography as an abstract concept. The point is: what happens to sensitive data when it's stored or transmitted, and is that treatment appropriate?
The failures here come in a few shapes.
Transmitting data over unencrypted connections is the obvious one, largely solved by HTTPS adoption. But it still shows up in internal service-to-service communication, mobile app backends, and legacy systems.
Storing sensitive data without proper protection is more common. Passwords stored as plain text, or hashed with MD5 or SHA-1, are effectively readable to anyone who accesses the database. These algorithms weren't designed for password storage. bcrypt, scrypt, and Argon2 were. They're slow by design, which makes brute-force attacks against them impractical.
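A sketch of proper password storage using the standard library's scrypt, one of the slow-by-design KDFs mentioned above. The cost parameters shown are common defaults, not a recommendation tuned for your hardware.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) using scrypt, a memory-hard KDF built for passwords."""
    salt = os.urandom(16)  # unique salt per user, stored alongside the digest
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1)  # cost parameters: tune for your hardware
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

Note that verification recomputes the hash; the original password is never recoverable from what's stored, which is exactly the property you want.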
Using encryption incorrectly is subtler. An application might use AES but with a hardcoded key, or reuse initialization vectors, or use ECB mode (which leaks patterns in the data). The presence of encryption doesn't mean data is protected.
The developer question to ask: for every piece of data your application handles, does it need to be stored at all? If yes, does it need to be stored in a recoverable form? Often the answer to the second question is no, which means hashing is appropriate, not encryption.
A03: Injection
Injection happens when user-supplied data is sent to an interpreter as part of a command or query, without being properly separated from the command structure itself. The interpreter can't tell the difference between "data the user provided" and "instructions I should follow."
SQL is the most well-known instance. But the same class of vulnerability exists for OS command injection, LDAP injection, NoSQL injection, and template injection, where user input ends up inside a server-side template engine that processes it as code.
The conceptual fix is the same everywhere: use parameterized interfaces that keep data and instructions separate. For SQL, that means prepared statements. For OS commands, that means avoiding shell calls with user input entirely, or using safe APIs that don't invoke a shell. For templates, that means never rendering user input as a template.
Input validation helps, but it's not the primary defense. Validation can be bypassed. Parameterization enforces separation at the interpreter level, which is what actually matters.
A04: Insecure Design
This one is newer on the list, and it points to something that the other categories don't: some vulnerabilities aren't implementation bugs. They're design bugs.
An implementation bug means the design was sound but the code introduced a flaw. Insecure design means the approach itself, if implemented correctly, would still produce an insecure system.
A few patterns that fall here:
A password reset flow that uses security questions. Even if that flow is implemented perfectly, the design is the problem. Security questions are guessable, researchable, and don't provide meaningful protection.
A multi-tenant application that uses a shared database with a tenant_id column to separate customer data. The design relies entirely on every query correctly filtering by tenant. One missed WHERE clause exposes cross-tenant data. A design that separates tenants at the database level entirely is more robust.
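If database-level separation isn't available, a design that at least centralizes the tenant filter is better than trusting every call site. A sketch of that idea, with hypothetical names; this narrows the risk rather than eliminating it, and physical separation remains the stronger design.

```python
import sqlite3

class TenantScopedRepo:
    """All reads go through one method that always appends the tenant filter,
    so an individual call site cannot forget the WHERE clause."""

    def __init__(self, conn: sqlite3.Connection, tenant_id: str):
        self.conn = conn
        self.tenant_id = tenant_id

    def fetch(self, sql: str, params: tuple = ()) -> list:
        # tenant_id is appended by the repository, never by the caller
        return self.conn.execute(sql + " AND tenant_id = ?",
                                 params + (self.tenant_id,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (id INTEGER, amount REAL, tenant_id TEXT)")
conn.executemany("INSERT INTO invoices VALUES (?, ?, ?)",
                 [(1, 10.0, "acme"), (2, 99.0, "globex")])

acme = TenantScopedRepo(conn, "acme")
rows = acme.fetch("SELECT id, amount FROM invoices WHERE 1=1")
# Only acme's invoice comes back; globex's row is unreachable through this repo.
```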
The practical implication for developers: security thinking needs to happen during design, not during code review. Threat modeling, even informally, asks "what could go wrong with this approach" before writing a single line of code. It's much cheaper than retrofitting security into a flawed design later.
A05: Security Misconfiguration
This is one of the most common vulnerabilities in real-world applications, and also one of the most preventable. It doesn't require a clever attacker. It just requires someone finding what was left unlocked.
Security misconfiguration covers a broad range of failures: default credentials that were never changed, error messages that expose stack traces and internal paths to end users, directory listing enabled on a web server, unnecessary features enabled (like a database admin interface exposed to the internet), cloud storage buckets with public read permissions, or missing security headers on HTTP responses.
The developer habit that helps most here: treat your production environment as a different thing from your development environment. Verbose error messages are helpful during development. They're harmful in production. Debug mode should never reach production. Default credentials need to be changed before deployment, not after.
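One way to encode that habit is configuration that fails closed: debug is off unless explicitly enabled, and can never be enabled in production. A minimal sketch; the environment variable names (`APP_ENV`, `APP_DEBUG`) are illustrative, not a standard.

```python
import os

def load_config() -> dict:
    """Fail closed: default to the strict setting, and refuse debug in production
    even if someone sets the flag by mistake."""
    env = os.environ.get("APP_ENV", "production")   # unknown environment = production
    debug = os.environ.get("APP_DEBUG") == "1" and env != "production"
    return {"env": env, "debug": debug}
```

The same fail-closed principle applies to the rest of the list in the paragraph above: verbose errors, admin interfaces, and permissive storage ACLs should all require an explicit, deliberate opt-in.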
A06: Vulnerable and Outdated Components
Your application isn't just your code. It's every library, framework, and dependency your code relies on, and every dependency those dependencies rely on. A vulnerability in any of those is effectively a vulnerability in your application.
The Log4Shell vulnerability in late 2021 is the clearest recent example. A critical flaw in a widely used Java logging library affected an enormous number of applications across industries, many of which had teams who had never thought much about that particular dependency.
The failure mode here is usually not ignorance of this risk in principle. It's that dependency management is unglamorous work. Libraries get pinned to a version that works and then rarely revisited. A year later, that version has known vulnerabilities, and nobody has updated it because "it works."
Use automated tools that track known vulnerabilities against your dependency list. GitHub Dependabot, Snyk, and similar tools will open pull requests when a dependency you're using has a published CVE. Let them. Review and merge those updates.
In finance, the blast radius of a weak vendor or an unpatched integration is not theoretical — it is measured in client notifications and regulatory filings. For how that shows up when a major bank confirms account-data access, read our BTG Pactual incident overview.
A07: Identification and Authentication Failures
Authentication is the system that answers "who are you?" Failures here mean the answer can be wrong, bypassed, or manipulated.
Common patterns: weak password policies, missing rate limiting that allows credential stuffing, session tokens that don't expire or aren't properly invalidated on logout.
Session fixation is a subtle one. If your application creates a session before a user authenticates and then elevates that same session after they log in, an attacker who plants a known session token before the login can use that token to access the authenticated session afterward. The fix is simple: regenerate the session identifier after any authentication event.
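The regeneration step can be sketched with a toy in-memory session store. Real frameworks provide this (often as a "regenerate" or "cycle session" call); the point is that the pre-login token is invalidated and a fresh one issued at the moment of authentication.

```python
import secrets

SESSIONS: dict[str, dict] = {}  # token -> session state (toy store for illustration)

def start_session() -> str:
    """Anonymous session, e.g. created when the login page is first served."""
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = {"user": None}
    return token

def login(old_token: str, user: str) -> str:
    """Regenerate the session ID on authentication, so a token planted
    (fixated) before login never becomes an authenticated session."""
    state = SESSIONS.pop(old_token, {})        # the old token dies here
    new_token = secrets.token_urlsafe(32)
    SESSIONS[new_token] = {**state, "user": user}
    return new_token
```

An attacker who knew the pre-login token is left holding a dead credential: it no longer exists in the store after authentication.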
JWT misuse is increasingly common as more applications move to token-based authentication. Common mistakes include using the "none" algorithm, failing to validate the algorithm specified in the token header, or storing sensitive data in the payload without remembering that JWTs are only encoded, not encrypted.
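The "encoded, not encrypted" point is easy to demonstrate: anyone can read a JWT payload with no key at all. The token below is constructed in the example itself (with an obviously fake signature) so the snippet is self-contained.

```python
import base64
import json

def b64url(data: dict) -> str:
    return base64.urlsafe_b64encode(json.dumps(data).encode()).rstrip(b"=").decode()

# Build a sample token: header.payload.signature (signature faked for the demo).
token = ".".join([
    b64url({"alg": "HS256", "typ": "JWT"}),
    b64url({"sub": "alice", "ssn": "000-00-0000"}),   # sensitive data: a mistake
    "fake-signature",
])

def decode_jwt_payload(token: str) -> dict:
    """Read a JWT payload without any key. The signature protects integrity,
    not confidentiality: everything in the payload is visible to the holder."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)   # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

In real code, always verify the signature with a proper JWT library and pin the expected algorithm server-side; never trust the `alg` field the client sends.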
Multi-factor authentication is the most impactful defensive control in this category. It doesn't solve every problem, but it significantly raises the cost of account takeover even when passwords are compromised.
Real incidents drive that lesson home: when millions of emails and names leave a consumer platform, attackers immediately weaponize them for stuffing and spear-phishing. For a recent case study on downstream risk after a large learning-platform claim, see the Udemy / ShinyHunters breach analysis.
A08: Software and Data Integrity Failures
This category became prominent after the SolarWinds attack demonstrated exactly what it looks like when software integrity is compromised at scale.
The category covers situations where an application makes assumptions about the integrity of software updates, critical data, or CI/CD pipeline components without verifying those assumptions. If your application auto-updates a plugin without verifying the signature of the update, a compromised update source delivers malicious code that gets automatically trusted and installed.
For developers, the most practical implication is around your supply chain. Where does your code come from? Where do your dependencies come from? Are those sources verified? CI/CD pipelines that have excessive permissions, or that pull from third-party sources without integrity checking, represent real exposure.
A09: Security Logging and Monitoring Failures
This category is different from the others. The vulnerability isn't that an attacker can do something they shouldn't. It's that when they do, nobody notices.
Insufficient logging and monitoring doesn't cause breaches directly. But it means breaches go undetected for longer, investigation is harder, and the opportunity to stop an attack mid-execution is lost.
What good logging looks like for a developer:
Authentication events should be logged: successful logins, failed logins, password resets, MFA events. Access to sensitive resources should be logged with enough context to answer "who accessed what, when, from where." Errors should be logged server-side in detail, but that detail should never be surfaced to the end user.
Logs need to be queryable and retained long enough to be useful. Logs paired with alerting on anomalous patterns — too many failed logins, unusual geographic access, access at unusual hours — convert passive records into active detection.
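A sketch of what an auth-event log line can carry using the standard library's logging module. The field names and format are illustrative; what matters is that every line can answer "who, what, when, from where," and that failures get a level your alerting can key on.

```python
import logging

logging.basicConfig(format="%(asctime)s %(levelname)s %(message)s")
auth_log = logging.getLogger("auth")
auth_log.setLevel(logging.INFO)

def log_login_attempt(user: str, source_ip: str, success: bool) -> None:
    """Log every attempt; failed logins at WARNING so alert rules
    (e.g. N failures per minute per IP) can key on the level."""
    level = logging.INFO if success else logging.WARNING
    auth_log.log(level, "login user=%s ip=%s success=%s", user, source_ip, success)
```

The key=value shape keeps lines machine-queryable, which is what turns a log archive into something you can actually alert on.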
A10: Server-Side Request Forgery (SSRF)
When you build a feature that fetches a remote resource on behalf of a user, you're giving that user a degree of control over what your server contacts. That's fine when the expected inputs are URLs to public resources. It becomes a problem when there's nothing preventing your server from being pointed at internal services, cloud metadata endpoints, or other infrastructure that should never be reachable from outside.
The developer questions to ask when building any "fetch a URL" feature:
What is the legitimate set of URLs this feature should ever reach? Can you define that as an allowlist? Does this feature need to be able to reach internal IP ranges, loopback addresses, or private cloud infrastructure? The answer is almost certainly no, which means those should be explicitly blocked.
Allowlists are stronger than blocklists here. There are too many ways to represent an IP address for a blocklist to be reliably complete.
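A sketch of an allowlist check for a "fetch a URL" feature, using only the standard library. The allowed hostnames are hypothetical; the resolve-and-verify step is belt and braces against an allowed name being pointed at internal infrastructure.

```python
import ipaddress
import socket
from urllib.parse import urlparse

ALLOWED_HOSTS = {"images.example.com", "cdn.example.com"}   # hypothetical allowlist

def is_safe_url(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False                         # no file://, gopher://, etc.
    if parsed.hostname not in ALLOWED_HOSTS:
        return False                         # allowlist, not blocklist
    # Resolve and reject private/loopback/link-local targets too.
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            return False
    return True
```

One caveat worth knowing: checking DNS and then fetching separately leaves a window for DNS rebinding, so production-grade defenses connect to the IP they validated rather than re-resolving the hostname.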
The Through Line
Reading all ten of these together, a pattern emerges that's worth naming explicitly.
Most of these vulnerabilities aren't exotic. They're not the product of a clever attacker finding a subtle flaw. They're predictable failures that happen when security isn't part of the design and development process from the beginning: missing checks, missing validation, missing logging, missing updates, missing verification.
Security isn't a personality type or a separate discipline you hand off to a specialist team. It's a set of questions you learn to ask at each stage of building something. What could go wrong with this design? What happens if this input is malicious? What does this log when something bad happens? How do I know this dependency is safe?
Those questions, asked consistently, are what the OWASP Top 10 is trying to teach. For a deeper look at how these vulnerabilities show up in real security assessments — and why they often get lost in report noise — read our article on why most pentest reports fail to drive remediation.
At Pentesty, we believe security knowledge should be accessible to everyone building software, not just specialists. If you want to go deeper on any of these, we have structured labs and learning paths at pentesty.co that let you explore these concepts hands-on in safe, legal environments.
Because understanding is where it starts. Practice is where it sticks.
Related on Pentesty
CVE-2026-41940: Critical cPanel & WHM flaw →
CVSS 9.8 authentication bypass via CRLF in session files — patch table, timeline, and host-level checks.
The Udemy breach & ShinyHunters →
Extortion, identity datasets, and why MFA plus unique passwords still matter when the vendor is in the headlines.
BTG Pactual & financial data security →
International accounts, LGPD/GDPR context, and why third-party risk is part of the same dependency story.
Rockstar & ShinyHunters: ransom refusal →
Exfiltration, deadlines, and why paying rarely buys deletion — the human and legal layer on top of vulns.
Inside ShinyHunters: extortion playbook →
Five phases from access to leak — maps cleanly onto the misconfig and dependency failures OWASP tracks.
Have a question about secure development or want to see these vulnerabilities in action? Get early access to Pentesty and explore hands-on labs for each OWASP category.
