Qodana / Static Code Analysis Guide / Code reviews for Security
Keep your codebase secure and compliant with a code review for security.
To a greater or lesser degree, security is a consideration in any software development project. How important it is depends on a number of factors, including the context in which the application will be used, the types of users, and the data available from it.
For applications where security is not considered a high priority, adopting software development best practices might be sufficient. In this case, incorporating security checks into your code reviews is a good way to ensure you don’t introduce flaws that you could easily avoid.
When security is a priority, such as in systems with access to sensitive data, it should be addressed from the design stage and considered throughout the software development lifecycle. This means using vulnerability assessments, secure code reviews, automated tests, and simulated attacks to make your codebase more bulletproof.
Andrew Hoffman put it well in Web Application Security: “Writing a secure web application starts at the architecture phase. A vulnerability discovered in this phase can cost as much as 60 times less than a vulnerability found in production code.”
While assessments and tests are important for verifying the security of the software as a whole, code reviews provide an opportunity to assess each incremental addition to your codebase and relay feedback on how those changes affect the security posture of your application.
In a secure code review, your focus as a reviewer is on whether the code changes have introduced any new vulnerabilities and whether security best practices have been implemented correctly. As with all code reviews, providing feedback soon after the changes have been written improves code quality and shares knowledge within the team, which in turn improves quality over the long term.
In this guide, we’ll discuss common security issues, best practices to consider during code reviews, and ancillary tools that can support the process. If you have security experts in your organization, they may be responsible for secure code reviews. Where possible, consider pairing them with code reviews so that you can learn from their approach.
Lay the groundwork with threat modeling
Threat modeling is the process of systematically identifying potential threats and vulnerabilities in your software so that you can implement effective countermeasures. There is a range of threat modeling frameworks and tools that you can use to facilitate the process, such as STRIDE, OCTAVE, and OWASP Threat Dragon.
Once you’ve identified the main threats to your software, use your threat model to inform code reviews and prioritize fixing flaws identified in existing code via security testing tools.
Run automated security scans
There is a wide range of application security tools and platforms that you can use to test your software. SAST and SCA are two of the simplest to incorporate into your development or code review process.
Static application security testing (SAST) tools help with analysis in a source code security review, checking for potential vulnerabilities such as buffer overflows and cross-site scripting. Having identified issues, some also provide guidance on remediation steps. While Qodana isn’t a security tool per se, it does offer taint analysis and checks for SQL injection, vulnerable dependencies, hard-coded passwords, and more to help you secure your code as a team.
Software composition analysis (SCA) tools check your open-source dependencies for known flaws. Out-of-date dependencies are a common source of vulnerabilities in all types of software and provide an easy target for hackers.
SCA tools mitigate this by automatically scanning your codebase for open-source code known to contain security flaws. In most cases, they also identify the fixed version that you should upgrade to in order to protect your system.
Ideally, the author of the code changes should have made use of these tools and addressed any issues raised by them before requesting a code review. However, if SAST and SCA do not form part of your team’s toolchain or you’re new to security testing, consider running them as part of your code reviews so you can query any findings they raise.
Once you’ve laid the groundwork by understanding the risks specific to your software and run automated scans to check for common flaws, you can start manually reviewing code changes.
The following are some of the most common sources of security flaws in application code, which are worth keeping in mind when performing a code review for security. Note that this list is not exhaustive. You can also find more guidance on best practices and common weaknesses in the OWASP Top Ten and MITRE CWE lists.
Validate user inputs
Any code that allows a user to input values into your system – such as a form to collect personal details or a REST API that accepts requests from clients outside your organization – should validate those inputs to protect against injection, cross-site scripting, and local file inclusion attacks.
Although input validation has long been a best practice, failing to implement it – or relying on denylists to block malicious inputs – remains a common mistake in software development. If the code you’re reviewing introduces an input, check that it has suitable validation and an automated test that verifies the validation works.
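As a minimal sketch of the allowlist approach, the hypothetical validator below accepts only an explicitly permitted character set and length, rather than trying to enumerate known-bad inputs (the pattern and limits here are illustrative assumptions, not a universal rule):

```python
import re

# Allowlist: accept only characters we explicitly permit, up to a fixed length.
# Denylists (blocking ";", "<script>", etc.) are easy to bypass and hard to maintain.
USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9_.-]{1,32}$")

def validate_username(value: str) -> str:
    """Return the username if it matches the allowlist, else raise ValueError."""
    if not USERNAME_PATTERN.fullmatch(value):
        raise ValueError("invalid username")
    return value
```

The corresponding automated test is then a matter of asserting that valid inputs pass and that injection-style inputs are rejected.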
Store credentials and secrets securely
Although various tools can scan your code for this type of security flaw, hard-coded passwords, access tokens, and encryption keys are generally also possible to spot during a manual code review. If you find any, flag them to the author of the changes immediately.
Code review is an opportunity to share knowledge and best practices. Explaining the risk that hard-coded secrets present, and how to avoid them, should prevent the issue from arising again. For credentials used to connect to other services (such as a database or API), use a secret vault and reference it from the application code. All credentials used to authenticate to your application should be salted and hashed.
Particular care should be taken to ensure secrets are not committed to version control, where they will persist in the revision history. When secrets have been committed, take steps to purge the history and/or recreate accounts with new credentials.
Check authentication and authorization logic
Flaws in authentication and authorization logic are a common target for malicious actors and should be considered carefully as part of a secure code review. There are several best practices in this area, from preventing login error messages from leaking data and not logging invalid passwords, to applying the right cryptographic techniques.
If you find code that implements authentication and authorization logic from scratch, it’s worth questioning why you’re not making use of a standard framework. Not only have existing frameworks been well tested in the real world, but they are also regularly updated with fixes to address any new vulnerabilities.
If your business logic requires that a user or service is logged in before accessing parts of your system, ensure the code checks both that they are logged in and they have the right permissions before allowing access. A common mistake is to equate authentication with authorization, thereby elevating privileges incorrectly.
Share only what you need to
The data that an application outputs is another common source of security exploits. When reviewing code, consider whether confirmation and error messages contain more information than they strictly need to.
For example, including details of the cause of the error, such as a null pointer exception or out of memory condition, in a user-facing message can provide valuable clues about how your system works to potential attackers.
Similarly, if the code you’re reviewing includes log statements, check the log level and ensure that potentially sensitive data is restricted to debug level logs.
Make sure the code is robust
As a code reviewer, part of your role is to consider the design of the code and its maintainability in the future. Modern software can contain dozens or even hundreds of dependencies.
Even with good automated test coverage, it’s not possible to test that every interaction with each dependency works as expected when the code was implemented. As well as creating the potential for unexpected behavior in the future, making assumptions about how other components will behave can have security implications.
For example, if the code expects that a username field will contain a maximum of 32 characters, that assumption should be validated in the data parsing code that relies on it, rather than relying on an upstream authentication framework to enforce that requirement.
Otherwise, an update to that framework in a few months or years could increase the permitted string length, removing the protection that used to be in place and exposing your application to an injection attack.
Checking that the logic is robust at the point it is written is more effective and practical than trying to work out the impact of updating a dependency long after it has been introduced, particularly if that dependency needs to be updated urgently to address a different security issue.
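To illustrate the username-length example, the parsing code can enforce its own assumption rather than trusting an upstream framework to keep doing so (the 32-character limit is the figure from the example above, not a general recommendation):

```python
MAX_USERNAME_LENGTH = 32  # the assumption this parser relies on

def parse_username(raw: str) -> str:
    # Enforce the limit locally: if an upstream framework update later
    # relaxes its own validation, this code remains protected.
    if len(raw) > MAX_USERNAME_LENGTH:
        raise ValueError(f"username exceeds {MAX_USERNAME_LENGTH} characters")
    return raw
```

The check is cheap, and it documents the assumption in the one place that depends on it.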
Look for duplication
Not all security flaws relate to security best practices. Part of the art of code review is questioning each piece of logic to check that it’s necessary (as well as doing what is required). For example, if the code under review provides the same data in two different ways, it’s worth asking why that is needed and whether there is a more elegant solution that avoids duplication.
The Heartbleed vulnerability in OpenSSL resulted from just such a flaw. The heartbeat request from the client included both a string and a value for the string’s length, and the server used the claimed length to determine how much data to return from memory. That left a door open for a malicious user to retrieve additional data that they should not have been able to access. If the code had derived the length from the string itself, the exploit would not have existed.
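A toy sketch of the flaw (in Python rather than OpenSSL’s C, with a stand-in byte string playing the role of adjacent process memory) makes the duplication obvious:

```python
def heartbeat_unsafe(payload: bytes, claimed_len: int) -> bytes:
    # Flawed: trusts the client-supplied length, so an oversized claim
    # echoes back whatever happens to sit next to the payload.
    memory = payload + b"SECRET-SESSION-KEY"  # stand-in for adjacent memory
    return memory[:claimed_len]

def heartbeat_safe(payload: bytes) -> bytes:
    # Derive the length from the data itself; nothing extra can leak.
    return payload[: len(payload)]
```

The redundant client-supplied length is exactly the kind of duplication worth questioning in review.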
Think like a hacker
As a code reviewer, you’re normally looking at a piece of code for the first time, which means you can be more objective about how it might be misused than the author of the changes. In addition to thinking through potential exploits and security flaws, consider pulling the change down locally and testing how the program responds to inputs other than those expected by the business logic.
This practice is known as fuzzing, and it is also used by attackers to find potential exploits. Ideally, the code should handle unexpected inputs gracefully in every case. However, if the program crashes or does something that doesn’t make sense, it indicates a potential entry point for a malicious actor (even if you can’t imagine what that would be in practice).
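As a minimal sketch of the idea, the hypothetical harness below throws random strings at a toy parser and records any input that fails with something other than the error the code is documented to raise (the parser, trial count, and alphabet are all illustrative assumptions; real fuzzing uses dedicated tools):

```python
import random
import string

def parse_age(value: str) -> int:
    """Toy parser under test: expects a small non-negative integer."""
    age = int(value)  # raises ValueError for non-numeric input
    if not 0 <= age <= 150:
        raise ValueError("age out of range")
    return age

def fuzz(parser, trials: int = 1000, seed: int = 0) -> list[str]:
    """Feed random strings to the parser; collect inputs that fail
    with anything other than the expected, handled ValueError."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        candidate = "".join(
            rng.choice(string.printable) for _ in range(rng.randint(0, 12))
        )
        try:
            parser(candidate)
        except ValueError:
            pass  # expected, gracefully handled error
        except Exception:
            crashes.append(candidate)  # unexpected failure: investigate
    return crashes
```

An empty crash list doesn’t prove the code is safe, but any entry in it is a concrete lead worth raising in the review.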
See how Qodana helps you catch vulnerabilities early and keep your code safe and compliant from day one.