Companies that identify and remediate software vulnerabilities early and often generate maintenance savings that reduce overall development costs.
Security code reviews help software development teams find security bugs early in the development cycle. In 2011, Forrester reported that fixing security bugs late in the development process can cost 30 times as much. Not 30 percent more, but 30 times more.
By performing an in-depth source code review that combines automated analysis with manual inspection, an organization can identify and remediate software vulnerabilities earlier in the development lifecycle.
We start the security code review with a suite of automated tools, including open source static analysis tools, internally developed scripts, and commercial static analysis products. Automated static analysis has proven fairly effective at finding syntactic bugs, which make up approximately 50 percent of all software vulnerabilities. The results of these scans feed into a prioritized list of security mechanisms to review and potential vulnerabilities to investigate, and that list drives a test plan that ensures complete and efficient coverage of the application and the areas of concern. Because automated scans are far less labor intensive than manual code inspection, automated tools let organizations scale the coverage of an application security program and provide at least a minimum level of secure code analysis across an enterprise. Their particular strength is quickly identifying "low hanging fruit" across large sets of applications.
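To make the "syntactic bug" category concrete, here is a minimal, hypothetical sketch of the kind of pattern automated static analysis reliably flags: SQL built by string concatenation (classic SQL injection), alongside its parameterized remediation. The function names and schema are illustrative only.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Static analyzers flag this pattern: untrusted input concatenated
    # directly into a SQL statement (SQL injection).
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # The remediated form binds the value as a parameter, which the
    # database driver escapes for us.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```

Because the flaw is purely a matter of syntax (a tainted string reaching a query API), a pattern-matching tool can find it without understanding what the application does.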
However, NSA studies have shown that even if a software security team leveraged every static analysis tool on the market today, the combined results would identify less than 40 percent of the security bugs in an application. Moreover, static analysis tools are incapable of finding application flaws and business logic vulnerabilities, which require context and an understanding of the application to identify. To overcome these limitations, Praetorian experts manually validate every issue found and manually inspect the code, applying knowledge of the business logic, use and abuse cases, and extensive prior experience to reduce the likelihood of both false positives and false negatives. Unfortunately, manual methods are also labor intensive and expensive.
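A hypothetical sketch of a business logic vulnerability of the kind described above: the code below is syntactically clean, so a pattern-matching tool has nothing to flag, yet any authenticated user can read any other user's invoice because the ownership check is missing. The names and data shapes are illustrative assumptions, not a real API.

```python
def get_invoice_flawed(current_user, invoices, invoice_id):
    # No insecure API calls, no tainted strings: nothing for a
    # syntax-oriented tool to flag. Only a reviewer who knows the
    # business rule ("users may see only their own invoices") will
    # notice the missing authorization check.
    return invoices[invoice_id]

def get_invoice_fixed(current_user, invoices, invoice_id):
    invoice = invoices[invoice_id]
    # The remediated version enforces the ownership rule.
    if invoice["owner"] != current_user:
        raise PermissionError("not your invoice")
    return invoice
```

Spotting the flawed version requires exactly the context a manual reviewer brings: knowledge of who is supposed to be allowed to do what.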
For these reasons, combining automated review with manual review is the best approach. Using both methods together enables our consultants to identify more software security vulnerabilities in an efficient and cost-effective manner.
In addition, when assessing larger applications of 100,000 lines of code or more, we recommend a threat model in conjunction with the security code review. The threat model helps us understand the application's functionality, technical design, and existing security threats and countermeasures. For large code bases where a threat model is warranted, it helps us focus our review on the key components of the code and can reduce the amount of code that needs to be reviewed by as much as 70 percent.
How long does a manual code review take to complete?
On average, a good software consultant can review 2,000 to 2,500 lines of code per day (10,000 to 12,500 lines of code per week). Manual code review is a labor-intensive process, so we recommend reserving it for the critical components of an application. To identify those critical components, manual code reviews are typically performed in conjunction with a threat model.
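As a back-of-the-envelope aid, the review rates above translate into a simple duration estimate. The helper below is a hypothetical sketch, not a scoping tool; it just divides lines of code by the quoted daily rate.

```python
def review_days(loc, rate_per_day=2000):
    """Estimated reviewer-days to manually inspect `loc` lines of code,
    at the conservative end of the 2,000-2,500 lines-per-day range."""
    return loc / rate_per_day

# A 50,000-line component at the conservative rate:
# review_days(50_000) -> 25.0 reviewer-days (five working weeks for one reviewer)
```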
When is it appropriate to employ other software assurance services in conjunction with a code review?
Other security activities, such as threat modeling, should be considered when a code base is larger than 100,000 lines of code and a higher level of rigor than automated static analysis is required. During a threat model, the application is decomposed into its respective parts and the critical components of the code base are identified. Focusing on the parts of the code that really matter can reduce the scope of manual review by up to 70 percent, so a threat model can save a client considerable money on a manual code review.
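The savings arithmetic can be made explicit. This is illustrative only, combining the figures already quoted (a 100,000-line code base, an up-to-70-percent scope reduction, and roughly 2,000 lines reviewed per day); the function is a hypothetical sketch.

```python
def manual_review_effort(total_loc, reduction=0.70, rate_per_day=2000):
    """Return (lines actually reviewed, estimated reviewer-days) after a
    threat model trims the manual review scope by `reduction`."""
    reviewed = total_loc * (1 - reduction)
    return reviewed, reviewed / rate_per_day

# 100,000 LOC with a threat model: about 30,000 lines to review, roughly
# 15 reviewer-days at the conservative rate, versus 50 reviewer-days for
# the full code base.
```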
How much does a security code review cost?
The cost depends on the size of the product, measured in lines of code, and the level of rigor with which the code is to be inspected. This is determined through pre-sale client discussions and scoping questionnaires. The price of an engagement is delivered as a fixed-bid quote.
How do level of rigor, price, and quality relate to one another?
Level of rigor, price, and quality are all directly proportional to one another. For a baseline of secure code quality, automated static analysis tools offer broad coverage of the code base at an attractive price point but tend to identify only the low hanging fruit. Companies with additional budget, or those who place additional weight on the quality of inspection, often employ a hybrid approach of automated analysis and manual inspection to identify more vulnerabilities. For companies that sell 1) enterprise software products, 2) software products with considerable exposure, or 3) software products considered mission critical, a combination of threat modeling, automated static analysis, and manual code inspection is leveraged. Please refer to the graph above, which illustrates vulnerabilities discovered versus level of rigor.
What would be an appropriate level of rigor if we have budget constraints for this initiative?
Automated static analysis tools are employed for a low level of rigor and serve as a first-pass filter. This level of rigor is typically chosen by companies that are constrained by budget or are simply trying to satisfy a compliance or customer requirement.