OWASP Top 10 explained - 2021
The Open Worldwide Application Security Project (OWASP) is a nonprofit foundation focused on improving the security of software. It provides free, vendor-neutral tools, resources, and standards that help developers, testers, and security professionals build secure applications.
One of its most recognized contributions is the OWASP Top 10 — a curated list of the ten most critical web application security risks. It serves as both an awareness document and a foundational guide for secure coding and secure software design practices.
In this post, we go beyond just listing these vulnerabilities. We’ll break each one down by explaining how to identify it during reviews or testing, how attackers typically exploit it, and most importantly, how to mitigate or prevent it through secure development practices. Whether you're a developer or an aspiring security engineer, understanding these fundamentals is key to building and defending secure systems.
A01:2021 - Broken Access Control
Broken Access Control is the #1 vulnerability in the OWASP Top 10 (2021), highlighting how commonly applications fail to enforce proper restrictions on what authenticated users can do. When access control is broken, users can act outside of their intended permissions — for example, gaining unauthorized access to data, modifying resources, or escalating privileges.
🔍 What is Access Control? Access Control is the mechanism that governs who can access what in a system. It ensures that users can only perform actions or access resources they're authorized to — based on roles like user, admin, editor, etc.
There are several types of access control models:
- Discretionary Access Control (DAC)
- Mandatory Access Control (MAC)
- Role-Based Access Control (RBAC)
- Attribute-Based Access Control (ABAC)
Broken Access Control occurs when the system fails to properly enforce these access rules, allowing attackers to bypass restrictions. This is often due to:
- Missing or improper validation of user permissions
- Relying solely on client-side enforcement (e.g., hiding UI elements)
- Using predictable object references (like /api/users/123)
- Improper enforcement of role boundaries
Real-World Examples
- Insecure Direct Object References (IDOR): A user modifies a URL like /invoice/1001 to /invoice/1002 and views another user's invoice because there's no check if they have access.
- Vertical Privilege Escalation: A regular user accesses admin-only features by calling admin-only endpoints or modifying a request.
- Horizontal Privilege Escalation: A user accesses data of another user at the same privilege level due to insufficient checks.
- Forceful Browsing: An attacker discovers admin pages not properly protected, simply by guessing URLs like /admin/panel.
How to Test for Broken Access Control
- Try accessing resources without authentication or as a different role.
- Modify parameters like user IDs or roles in API calls.
- Use tools like Burp Suite, OWASP ZAP, Postman, or curl to test endpoints.
- Look for hidden UI elements and test if the backend still accepts the action.
How to Prevent Broken Access Control
- Deny by default – access should be granted only when explicitly allowed.
- Enforce access checks on the server-side, not the client.
- Use role-based or attribute-based access control models consistently.
- Avoid exposing direct object references (e.g., use UUIDs instead of incremental IDs).
- Log and monitor access control failures, and alert on anomalies.
- Use secure frameworks that provide built-in authorization checks (e.g., Spring Security, .NET Identity, NestJS guards).
- Implement access control mechanisms at code level rather than through configuration.
- Disable web server directory listing.
- Enforce record-level access control (e.g., users should only see their own records).
- Avoid CORS misconfigurations that might lead to unauthorized cross-origin access.
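To make deny-by-default and record-level checks concrete, here is a minimal sketch using Flask; the current_user() helper, the X-User header, and the in-memory INVOICES store are hypothetical stand-ins for a real session or token mechanism:

```python
# Minimal sketch of deny-by-default, record-level access control (Flask).
# current_user() and INVOICES are hypothetical placeholders for illustration.
from functools import wraps
from flask import Flask, abort, g, request

app = Flask(__name__)

# Pretend data store: invoice id -> owning user id
INVOICES = {1001: "alice", 1002: "bob"}

def current_user():
    # In a real app this would come from a verified session or token.
    return request.headers.get("X-User")  # placeholder only

def login_required(f):
    @wraps(f)
    def wrapper(*args, **kwargs):
        user = current_user()
        if user is None:
            abort(401)          # deny by default: no identity, no access
        g.user = user
        return f(*args, **kwargs)
    return wrapper

@app.route("/invoice/<int:invoice_id>")
@login_required
def get_invoice(invoice_id):
    owner = INVOICES.get(invoice_id)
    if owner is None or owner != g.user:
        abort(403)              # record-level check: only the owner may view it
    return {"invoice": invoice_id, "owner": owner}
```

The key point is that every request is rejected unless an explicit, server-side rule grants access; hiding the link in the UI alone would not be enough.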
A02:2021 - Cryptographic Failures
Also known in earlier OWASP editions as Sensitive Data Exposure, Cryptographic Failures refer to the improper implementation or absence of encryption, leading to the exposure or compromise of sensitive data. This category is ranked #2 because of how frequently applications fail to protect data at rest or in transit.
Cryptographic failures happen when:
- Sensitive data (like passwords, credit card numbers, personal information) is not properly encrypted
- Weak or outdated cryptographic algorithms (e.g., MD5, SHA-1, RC4) are used
- Encryption is implemented incorrectly (e.g., missing IVs, improper key management)
- Data is transmitted over unencrypted HTTP or stored in plaintext
Real-World Examples
- Passwords stored in plaintext in a database that gets leaked in a breach
- An app sending login credentials over HTTP instead of HTTPS, allowing attackers to sniff traffic
- A mobile app using hardcoded encryption keys that attackers extract via reverse engineering
- Using weak ciphers or self-signed certificates without validation
- Not enforcing TLS 1.2 or above, leaving users vulnerable to downgrade attacks
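As one concrete illustration of getting password storage right, here is a minimal sketch using the bcrypt library (install with `pip install bcrypt`) instead of a fast or broken hash like MD5:

```python
# Sketch: store password hashes with bcrypt instead of MD5/SHA-1 or plaintext.
import bcrypt

def hash_password(password: str) -> bytes:
    # gensalt() embeds a per-password random salt and a tunable work factor
    return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt(rounds=12))

def verify_password(password: str, stored_hash: bytes) -> bool:
    # comparison is handled by the library
    return bcrypt.checkpw(password.encode("utf-8"), stored_hash)

stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", stored))  # True
print(verify_password("wrong guess", stored))                   # False
```

Argon2 or PBKDF2 would work similarly; the essential properties are a per-password salt and a work factor that keeps brute-forcing expensive.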
A03:2021 - Injection
Injection occurs when untrusted data is sent to an interpreter as part of a command or query, tricking the system into executing unintended commands or accessing unauthorized data. Attackers exploit this by inserting malicious input into the application's code execution path.
Injection vulnerabilities are dangerous because they can:
- Expose sensitive data
- Allow remote code execution
- Modify or delete data
- Bypass authentication
- Take full control of the server or database
Common Types of Injection
Type | Example |
---|---|
SQL Injection (SQLi) | Injecting malicious SQL into database queries |
Command Injection | Injecting system commands (e.g., rm -rf /) |
LDAP Injection | Modifying LDAP queries |
XPath Injection | Targeting XML data queries |
NoSQL Injection | Targeting MongoDB, Firebase, etc. |
HTML/Script Injection | Cross-Site Scripting (XSS) – handled separately in OWASP |
How to Prevent Injection
- Use Parameterized Queries / Prepared Statements
  - Prevent direct inclusion of user input in queries.
- Use ORM Frameworks or Query Builders
  - Tools like Sequelize, Hibernate, and TypeORM often handle escaping internally.
- Avoid Dynamic Queries Wherever Possible
- Escape User Input If Dynamic Construction Is Unavoidable
  - Be very careful with escaping — this is error-prone and often incomplete.
- Use Whitelisting for Input Validation
  - Only allow known good values (e.g., status = “active” or “inactive” only).
- Least Privilege Database Access
  - Prevent injection from causing full DB compromise by limiting permissions.
- Use Web Application Firewalls (WAFs)
  - Adds an extra layer of protection but should not be your primary defense.
- Keep Software and Libraries Up to Date
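To illustrate the first point above, here is a small sketch contrasting string-built SQL with a parameterized query, using Python's built-in sqlite3 module; the users table and the payload are made up for the example:

```python
# Sketch: vulnerable string concatenation vs. a parameterized query (sqlite3).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

user_input = "alice' OR '1'='1"   # classic injection payload

# DANGEROUS: user input becomes part of the SQL text itself
unsafe = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(unsafe)   # returns every row -- the filter was bypassed

# SAFE: the driver sends the value separately; it can never change the query
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(safe)     # returns [] -- the payload is treated as a literal username
```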
How to Detect Injection
- Manual Testing: Try inserting ' OR '1'='1 or ; ls into input fields.
- Automated Scanners: Use tools like:
- OWASP ZAP
- Burp Suite
- SQLMap
- Nmap + NSE scripts
- Code Review: Look for dynamic queries using user input.
A04:2021 - Insecure Design
Insecure Design refers to security flaws that originate from design decisions, rather than implementation bugs. It means the application was designed without security in mind, allowing attackers to exploit logic flaws, missing controls, or insecure workflows.
Unlike insecure implementation, which might be fixed with a patch or code update, insecure design reflects fundamental issues in how the app is structured.
Real-World Example A web app allows users to guess passwords without rate limiting. Even if the password check is perfectly coded, the lack of design for brute-force protection is a design flaw — not a bug.
Common Insecure Design Scenarios
Scenario | Risk |
---|---|
No rate-limiting on login attempts | Brute-force attacks possible |
Business logic allows users to cancel others' orders | Horizontal privilege escalation |
Poor workflow for password reset | Account takeover |
Lack of user journey validation (e.g., skipping payment step in checkout) | Free products or services |
Role-based access not enforced in design | Users accessing admin functions |
No threat modeling in design phase | Security risks go unaddressed |
🔍 Why It Happens
- Security wasn’t considered during the design phase.
- No threat modeling or abuse case analysis.
- Overreliance on functional requirements and neglect of security requirements.
- Business pressure to deliver fast, skipping secure design practices.
Key Differences: Insecure Design vs. Insecure Implementation
Aspect | Insecure Design | Insecure Implementation |
---|---|---|
Root Cause | Poor architectural decisions | Bugs or mistakes in code |
Example | No rate-limiting designed | Password hash function implemented incorrectly |
Fix | Requires redesign | Often a patch or code fix |
Prevention Strategy | Threat modeling, secure design principles | Secure coding practices, code reviews |
How to Prevent Insecure Design
- Start with a Secure Software Development Lifecycle (SSDLC)
  - Integrate security from the early stages of development.
- Perform Threat Modeling and Risk Analysis
  - Identify potential attack vectors before implementation.
  - Tools: Microsoft Threat Modeling Tool, STRIDE model
- Define Secure Design Requirements
  - E.g., authentication, authorization, encryption, audit logging, etc.
- Apply Design Patterns for Security
  - Least privilege, fail-safe defaults, separation of concerns.
- Establish Abuse Cases Alongside Use Cases
  - “How could a malicious user break this flow?”
- Security Architecture Reviews
  - Involve security architects early in the design process.
- Use Security-Focused Frameworks
  - E.g., using Spring Security for Java, or OAuth2 standards.
- Include Security Requirements in User Stories
  - “As a developer, I need to rate-limit login attempts to avoid brute-force.”
Design Questions to Ask
- What happens if someone skips this step in the flow?
- Can a user access or modify someone else’s data?
- Are any critical actions lacking authorization?
- Are default values or configurations secure?
- What are the potential abuse cases?
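Returning to the recurring rate-limiting example, below is a rough sketch of a fixed-window limiter that could be designed in from the start; the thresholds and in-memory storage are illustrative only, and production systems typically push this into Redis or an API gateway:

```python
# Sketch: fixed-window rate limiter for login attempts (in-memory, illustrative).
import time
from collections import defaultdict

WINDOW_SECONDS = 300      # 5-minute window (illustrative threshold)
MAX_ATTEMPTS = 5          # allowed attempts per window

_attempts: dict[str, list[float]] = defaultdict(list)

def allow_login_attempt(username: str) -> bool:
    now = time.time()
    # keep only attempts that fall inside the current window
    recent = [t for t in _attempts[username] if now - t < WINDOW_SECONDS]
    _attempts[username] = recent
    if len(recent) >= MAX_ATTEMPTS:
        return False          # refused for the rest of the window
    recent.append(now)
    return True

# Usage: call before checking the password, and reject early when it returns False.
for i in range(7):
    print(i, allow_login_attempt("alice"))   # attempts 5 and 6 are refused
```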
A05:2021 - Security Misconfiguration
Security Misconfiguration happens when systems, frameworks, libraries, or applications are configured insecurely or left with default settings that are unsafe. It's one of the most common and widespread vulnerabilities in real-world applications.
Misconfigurations open the door for attackers to:
- Access admin interfaces
- View sensitive data
- Execute commands
- Gain full control of systems
Real-World Example A cloud storage bucket (e.g., Amazon S3) is publicly accessible because it was never configured to require authentication. Now, anyone on the internet can download private files.
Common Types of Security Misconfiguration
Misconfiguration Type | Risk / Impact |
---|---|
Default credentials (e.g., admin:admin) | Easy compromise |
Directory listing enabled | Internal files exposed |
Unused services or ports left open | Increases attack surface |
Stack traces shown to users | Reveals sensitive internals |
Missing security headers | Allows exploits like XSS, clickjacking |
Overly permissive CORS policies | Data theft from other origins |
Verbose error messages | Leaks logic, structure, or credentials |
Debug mode enabled in production | Full application control |
🧪 Example: Misconfigured Web Server A web server that shows directory contents:
http://example.com/uploads/
Instead of blocking access, it shows:
Index of /uploads
• secret.txt
• password_backup.zip
A hacker can now download sensitive files without authentication.
Why It Happens
- Teams overlook default settings
- Lack of hardening guides or checklists
- Insecure features enabled in production (e.g., debug mode)
- Miscommunication between DevOps, developers, and security teams
- No regular reviews of cloud or server configurations
How to Prevent Security Misconfiguration
- Establish and Enforce Secure Defaults
  - Disable unused features and services
  - Change default usernames/passwords
- Automate Configuration Management
  - Use tools like Ansible, Terraform, or Chef to enforce secure states
- Apply the Principle of Least Privilege
  - Limit access to only what's necessary (e.g., file permissions, API keys)
- Perform Regular Security Reviews
  - Review cloud permissions (e.g., S3 bucket policies, firewall rules)
  - Scan for open ports, services, and misconfigured headers
- Disable Directory Listing and Debug Features
  - Especially in production environments
- Implement Proper Error Handling
  - Don’t expose stack traces or internal error messages to users
- Set Secure HTTP Headers
  - Examples: X-Content-Type-Options: nosniff, Content-Security-Policy, Strict-Transport-Security
- Use a Web Application Firewall (WAF)
  - Protects against many classes of attacks
- Keep Software Up to Date
  - Old versions might revert to insecure configurations
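As a small illustration of the secure-headers point above, here is a sketch of setting baseline headers on every response in a Flask app; the header values shown are common defaults, not a complete policy:

```python
# Sketch: add baseline security headers to every response (Flask).
from flask import Flask

app = Flask(__name__)

@app.after_request
def set_security_headers(response):
    response.headers["X-Content-Type-Options"] = "nosniff"
    response.headers["Content-Security-Policy"] = "default-src 'self'"
    response.headers["Strict-Transport-Security"] = (
        "max-age=31536000; includeSubDomains"
    )
    response.headers["X-Frame-Options"] = "DENY"   # basic clickjacking defense
    return response

@app.route("/")
def index():
    return "hello"
```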
Tools to Detect Misconfigurations
- OWASP ZAP / Burp Suite – scan for headers, exposed info, etc.
- Nikto – find common misconfigurations in web servers
- Cloud Security Scanners – e.g., AWS Inspector, GCP Security Command Center
- OpenVAS or Nessus – general-purpose vulnerability scanning
A06:2021 - Vulnerable and Outdated Components
This risk occurs when applications:
- Use software libraries, frameworks, or dependencies with known vulnerabilities, or
- Fail to update or patch components regularly, or
- Include components without proper security checks
It affects both client-side (e.g., JavaScript) and server-side components (e.g., Java, Python, PHP, .NET libraries).
Even if your code is secure, a vulnerability in one of your dependencies (or their dependencies!) can expose your app to:
- Remote code execution (RCE)
- Data breaches
- Privilege escalation
- Denial of Service (DoS)
Real-World Examples
- Log4Shell (Log4j, CVE-2021-44228): A critical RCE in the popular Java logging library Log4j. Millions of apps were vulnerable because the component was embedded deeply in the Java ecosystem.
- Equifax Data Breach (2017): Caused by a known vulnerability in Apache Struts (CVE-2017-5638). Attackers exploited it because the system wasn’t patched in time.
- npm Dependency Hijacking: Attackers publish malicious packages with similar names to legitimate ones or inject malicious code into widely used packages.
Signs of Vulnerable/Outdated Components
- No inventory of third-party components used
- Old versions with known CVEs in production
- Dependencies not pinned to fixed versions
- No regular update or patching process
- Using components from untrusted sources (e.g., GitHub repos, random websites)
How to Prevent It
Best Practice | Description |
---|---|
Maintain an inventory (SBOM) | Know what third-party packages and versions you’re using |
Automate dependency checking | Use tools to scan for known CVEs |
Update regularly | Patch and upgrade libraries frequently |
Prefer well-maintained libraries | Avoid abandoned or unverified projects |
Pin versions | Avoid auto-updating to untested versions |
Use security-focused audit tooling | Like npm audit, pip-audit, etc. |
Recommended Tools
Tool | Language / Stack |
---|---|
npm audit | Node.js |
yarn audit | Node.js |
pip-audit | Python |
OWASP Dependency-Check | Java, .NET, etc. |
Snyk | Multiple languages |
GitHub Dependabot | GitHub-integrated |
Retire.js | JavaScript |
Quick Checklist
- Do you have an inventory of all dependencies (with versions)?
- Are you using automated tools to detect vulnerabilities?
- Do you have a patch management policy?
- Are unused components removed from your codebase?
- Are you testing component updates in staging before production?
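If you have no inventory at all, even a tiny script can produce a first-pass list of installed Python packages to feed into an SBOM or audit process. This is only a sketch; dedicated tools like pip-audit or Dependency-Track go much further:

```python
# Sketch: dump installed Python packages and versions as a first-pass inventory.
import json
from importlib import metadata

def installed_packages() -> dict[str, str]:
    # distributions() lists every installed distribution in the environment
    return {
        dist.metadata["Name"]: dist.version
        for dist in metadata.distributions()
        if dist.metadata["Name"]  # skip entries with broken metadata
    }

if __name__ == "__main__":
    inventory = installed_packages()
    print(json.dumps(inventory, indent=2, sort_keys=True))
    # Feed this into SBOM tooling or compare against known-vulnerable versions.
```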
A07:2021 - Identification and Authentication Failures
This vulnerability occurs when an application fails to properly verify the identity of users or services, or implements authentication mechanisms insecurely. It includes flaws like:
- Weak or broken authentication
- Poor session management
- Credential stuffing vulnerability
- Brute-force attack susceptibility
- Misuse of identity tokens
Authentication is the first line of defense for any system. If attackers can:
- log in as another user (horizontal privilege escalation),
- log in as an admin (vertical escalation), or
- bypass login entirely,

they can compromise confidentiality, integrity, and availability.
Real-World Examples
- Missing Rate Limiting on Login: An attacker can brute-force login credentials without restrictions.
- Predictable Session IDs: If the app generates session tokens in a predictable way, attackers can guess another user's token and hijack their session.
- JWTs Without Signature Verification: Accepting unsigned JWTs or using "none" as the algorithm can allow attackers to forge tokens and gain access.
Common Causes
Misimplementation | Risk |
---|---|
No account lockout or rate-limiting | Brute-force attacks |
Use of weak or hardcoded passwords | Easy to guess or reuse |
Insecure password storage (e.g., plain text) | Credential theft |
Session IDs exposed in URLs or logs | Session hijacking |
Poor 2FA implementation or none at all | Easy account takeover |
Missing or misconfigured authentication tokens | Identity spoofing |
How to Prevent It
- Secure Authentication Practices
  - Enforce strong password policies: minimum length, complexity, and password history checks
- Implement Multi-Factor Authentication (MFA)
  - Prevents access even if a password is stolen
- Limit Login Attempts
  - Rate limit, captcha, or lock accounts after multiple failed attempts
- Use Secure Authentication Libraries
  - Don’t build your own auth system from scratch
- Secure Session Management
  - Use HTTP-only, secure cookies
  - Rotate session IDs after login
  - Invalidate sessions on logout or timeout
- Store Passwords Securely
  - Use strong hashing algorithms like bcrypt, argon2, or PBKDF2
  - Never store plain text passwords
- Verify All Identity Tokens
  - Validate JWTs properly (issuer, expiration, signature)
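For the last point, here is a sketch of strict JWT validation using the PyJWT library (`pip install PyJWT`); the secret, issuer, and audience values are placeholders:

```python
# Sketch: verify a JWT's signature, expiry, issuer, and audience with PyJWT.
import jwt  # PyJWT

SECRET = "replace-with-a-real-secret"        # placeholder; load from a secret store
EXPECTED_ISSUER = "https://auth.example.com"  # placeholder
EXPECTED_AUDIENCE = "my-api"                  # placeholder

def verify_token(token: str) -> dict:
    # Pin the algorithm explicitly -- never accept "none" or whatever the header claims.
    return jwt.decode(
        token,
        SECRET,
        algorithms=["HS256"],
        issuer=EXPECTED_ISSUER,
        audience=EXPECTED_AUDIENCE,
        options={"require": ["exp", "iss", "aud"]},
    )

# jwt.decode raises an exception (expired signature, wrong issuer, bad signature, ...)
# on any failure, so callers should treat any exception as "not authenticated".
```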
Tools to Detect | Tool/Technique | Purpose |
---|---|---|
🔧 Burp Suite / OWASP ZAP | Check for authentication bypass, token flaws | |
🔧 Hydra / Medusa | Brute-force username/password attacks | |
🔧 Mitmproxy | See if credentials or tokens are exposed | |
🔧 Auth security scanners | Specialized tools for login/session vulnerabilities |
A08:2021 - Software and Data Integrity Failures
This category refers to failures related to code and infrastructure that do not protect against integrity violations, particularly in:
- Software updates (auto-updates, plugins, libraries)
- CI/CD pipelines
- Serialized data or configuration files
- Trust boundaries being assumed, not validated
If attackers can manipulate code, libraries, or deployment mechanisms, they may be able to introduce malicious code or behavior.
Why It’s Critical Modern applications rely heavily on external sources:
- Open-source libraries
- Auto-update mechanisms
- Scripts in CI/CD
Without verifying the integrity and authenticity of these elements, attackers can:
- Inject malware
- Modify critical configurations
- Gain unauthorized access to systems
- Compromise the entire supply chain (as seen in real-world attacks)
Real-World Examples
- SolarWinds Supply Chain Attack (2020): Attackers compromised the build system of SolarWinds; malicious updates were delivered to customers (including U.S. government agencies).
- Codecov CI/CD Breach (2021): Attackers modified a Bash uploader script in Codecov’s pipeline, allowing them to steal credentials and secrets from CI systems.
- Unsigned Software Updates: Some desktop apps downloaded updates over HTTP or from unverified sources; attackers performed man-in-the-middle (MITM) attacks and delivered malware.
Common Vulnerabilities in This Category
Weakness | Risk |
---|---|
No signature verification of software | Code tampering |
Insecure deserialization | Remote Code Execution (RCE) |
Trusting files or scripts from unverified sources | Supply chain attack |
Exposed or vulnerable CI/CD tools | Unauthorized deployment |
Lack of runtime code integrity checks | Malware injection at runtime |
How to Prevent It
Best Practice | Description |
---|---|
Digitally sign code and updates | Ensure software authenticity |
Verify integrity of third-party code | Use checksums (e.g., SHA256) or signed packages |
Secure CI/CD pipelines | Limit access, audit changes, use secrets properly |
Restrict deserialization | Avoid it where possible; use secure formats like JSON |
Implement integrity checks | Verify runtime code hasn’t been altered |
Use trusted repositories | Avoid installing from unknown or modified mirrors |
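For the integrity-check rows above, here is a minimal sketch of verifying a downloaded artifact against a published SHA-256 digest before using it; the file name and expected digest are placeholders:

```python
# Sketch: verify a downloaded artifact against a published SHA-256 checksum.
import hashlib
import hmac

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):  # stream large files
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: str, expected_hex: str) -> bool:
    # compare_digest avoids timing side channels when comparing digests
    return hmac.compare_digest(sha256_of(path), expected_hex.lower())

# Example (placeholder values): refuse to install or deploy if this returns False.
# verify_artifact("release.tar.gz", "<published sha256 digest>")
```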
Helpful Tools
Tool/Technique | Purpose |
---|---|
cosign, sigstore | Sign and verify container images and binaries |
SLSA (Supply-chain Levels for Software Artifacts) | Secure CI/CD practices |
Checksums (SHA256, etc.) | Verify file integrity |
OWASP Dependency-Track | Monitor component integrity |
Software Bill of Materials (SBOM) | Know exactly what you're deploying |
A09:2021 - Security Logging and Monitoring Failures
This category includes failures in detecting, monitoring, and responding to security incidents due to inadequate logging, alerting, or visibility into the system.
If a breach occurs and there's no way to detect it, or logs are missing/unclear, attackers may:
- Stay hidden for long periods
- Exfiltrate data unnoticed
- Destroy or modify logs to cover their tracks
Why It’s Critical
Most successful attacks are not detected immediately. Poor logging and monitoring:
- Delays incident response
- Increases damage
- Allows attackers to maintain persistence
- Violates compliance standards like GDPR, PCI-DSS, HIPAA
Common Issues
Problem | Consequence |
---|---|
Logging disabled or incomplete | Attacks go unnoticed |
Logs stored without protection | Attackers tamper with evidence |
No alerting on suspicious activities | Delayed or no response |
No central log aggregation (e.g., SIEM) | Difficult to trace attacks |
No retention or backup policies | Forensics becomes impossible |
Real-World Example
Capital One Breach (2019): A misconfigured WAF was exploited via an SSRF vulnerability. Poor alerting and logging delayed detection, and more than 100 million customer records were exposed.
Impact of Logging Failures
Attack Type | Effect if Logging Fails |
---|---|
Brute-force login | Goes undetected |
SQL injection | Leaves no trace in logs |
Privilege escalation | Can’t be audited |
File modification | Tampered logs = no forensics |
Insider threats | No record of user behavior |
How to Prevent It
Best Practice | Description |
---|---|
Enable detailed logging | Log all authentication, access, errors |
Protect logs | Encrypt logs and restrict access |
Monitor in real time | Use SIEMs like Splunk, ELK, or Wazuh |
Set alerts for critical events | E.g., multiple login failures, file changes |
Audit logs regularly | Look for anomalies and intrusion signs |
Store logs offsite or in append-only mode | Prevent tampering by attackers |
Comply with logging standards | E.g., PCI-DSS, NIST SP 800-92 |
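As a starting point for detailed, machine-readable logging, here is a sketch using Python's standard logging module to emit authentication and authorization events as JSON lines; the event names and fields are just one reasonable convention:

```python
# Sketch: structured, timestamped security-event logging with the stdlib.
import json
import logging

security_log = logging.getLogger("security")
handler = logging.FileHandler("security-events.log")   # ship to a SIEM in practice
handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
security_log.addHandler(handler)
security_log.setLevel(logging.INFO)

def log_event(event: str, **fields):
    # One JSON object per line is easy for ELK/Splunk/Wazuh to ingest.
    record = {"event": event, **fields}
    security_log.info(json.dumps(record, default=str))

# Usage examples (field names are illustrative):
log_event("login_failed", user="alice", source_ip="203.0.113.7")
log_event("permission_denied", user="bob", resource="/admin/panel")
```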
Tools for Logging & Monitoring
Tool | Stack/Type |
---|---|
ELK Stack | Elasticsearch, Logstash, Kibana |
Splunk | Commercial SIEM |
Wazuh | Open-source SIEM |
Prometheus + Grafana | Metrics and alerting |
Fluentd | Log aggregator |
OSSEC | Host-based IDS |
Auditd (Linux) | System auditing |
Developer Checklist
- Are all access and error events logged?
- Are logs timestamped and labeled properly?
- Are alerts configured for critical events?
- Are logs regularly reviewed?
- Are logs stored securely (offsite or immutable)?
- Can you trace actions performed by users and services?
A10:2021 - Server-Side Request Forgery (SSRF)
Server-Side Request Forgery (SSRF) occurs when an attacker tricks a server into making HTTP requests to internal or external resources that the attacker shouldn't have access to.
The vulnerable server becomes a proxy, allowing attackers to:
- Access internal-only services (like http://localhost:8080)
- Bypass firewalls or IP restrictions
- Enumerate internal systems
- Exfiltrate data
- Perform port scanning from the server’s perspective
🧨 Why It’s Dangerous Even if a service is not directly accessible from the internet, SSRF can let attackers:
- Reach internal-only applications or metadata endpoints (e.g., AWS metadata: http://169.254.169.254)
- Exploit trusted internal services
- Chain SSRF with other vulnerabilities like RCE, credential theft, or cloud service abuse
💥 Real-World Example: 🔥 Capital One AWS SSRF Breach (2019)
- The attacker exploited an SSRF flaw in a WAF (Web Application Firewall)
- Accessed AWS EC2 instance metadata
- Retrieved IAM credentials
- Used them to download customer data from S3
- Over 100 million records were exposed
🧪 How SSRF Works
A typical vulnerable scenario:
GET /fetch?url=http://example.com
The server makes a request to the provided url. An attacker can modify this:
/fetch?url=http://localhost:8080/admin
Now the server is tricked into requesting an internal admin interface that the attacker cannot reach directly.
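In code, the vulnerable pattern is simply handing the user-supplied URL straight to an HTTP client. Here is a minimal Flask sketch of the flaw described above (the endpoint and parameter names mirror the example; do not deploy this):

```python
# Sketch of the VULNERABLE pattern: the server fetches whatever URL it is given.
import requests
from flask import Flask, request

app = Flask(__name__)

@app.route("/fetch")
def fetch():
    # No validation of the target: /fetch?url=http://localhost:8080/admin or
    # http://169.254.169.254/ makes the server reach internal-only endpoints
    # on the attacker's behalf.
    url = request.args.get("url", "")
    resp = requests.get(url, timeout=5)
    return resp.text
```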
🔥 What Attackers Can Do
Attack Objective | SSRF Impact |
---|---|
Internal network scan | Enumerate internal IPs and ports |
Access cloud metadata | Steal credentials (e.g., AWS IAM tokens) |
Reach admin endpoints | Trigger unintended behavior |
Trigger callbacks | Abuse SSRF for SSRF-to-RCE |
Data exfiltration | Leak sensitive data via responses |
🧷 Common SSRF Targets
Target Type | Example |
---|---|
Cloud Metadata APIs | http://169.254.169.254 (AWS) |
Docker APIs | http://localhost:2375 |
Internal dashboards | http://localhost:8080 |
Other services | Redis, GCP metadata, Kubernetes APIs |
✅ How to Prevent SSRF
🔒 Mitigation Strategy | Explanation |
---|---|
Do not fetch URLs from user input | Avoid it completely if possible |
Allowlist domains/IPs | Only permit safe, specific URLs |
Block internal IP ranges | Deny requests to private/internal IPs |
Disable unnecessary protocols | Block file://, ftp://, gopher://, etc. |
Use DNS pinning | Prevent DNS rebinding attacks |
Monitor outbound requests | Detect unusual internal traffic patterns |
Restrict IAM roles | Limit the power of credentials on the server |
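A defensive counterpart to the sketch above: validate the target against an allowlist and reject private or link-local addresses before fetching. The hostnames are illustrative, and DNS rebinding needs additional care (for example, connecting to the already-resolved IP):

```python
# Sketch: allowlist + private-range checks before the server fetches a URL.
import ipaddress
import socket
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.example.com", "images.example.com"}   # illustrative allowlist

def is_safe_url(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):    # block file://, gopher://, ...
        return False
    host = parsed.hostname or ""
    if host not in ALLOWED_HOSTS:                 # deny anything not allowlisted
        return False
    try:
        resolved = socket.getaddrinfo(host, None)  # resolve to concrete IPs
    except socket.gaierror:
        return False
    for _family, _type, _proto, _canon, sockaddr in resolved:
        ip = ipaddress.ip_address(sockaddr[0])
        # reject loopback, RFC 1918, and link-local (cloud metadata) addresses
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            return False
    return True

# Usage: only call requests.get(url) when is_safe_url(url) is True,
# and ideally connect to the already-resolved IP to reduce DNS-rebinding risk.
```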
🛡 Developer Checklist
- Does the app fetch any resources using user input?
- Are internal IPs and metadata endpoints blocked?
- Are only trusted, validated domains allowed?
- Are all HTTP client libraries securely configured?
- Are network egress rules (e.g., firewall) enforced?