
Top 7 Myths of AppSec Automation


Is there anything holding you back from implementing application security automation? Check out this post on the top seven AppSec automation myths.

· Security Zone ·


Security automation, and application security automation specifically, has picked up tremendous steam over the past year. We have seen a steady increase in engagement from our customers and prospects with data sheets, blogs, and conversations around AppSec tooling, security regression, and DevSecOps in general. While we’ve met security and product engineering teams of varying complexity across market verticals, the conversations have converged on a surprisingly finite set of common questions.

Here is our compilation of what we’d like to call the top seven myths of AppSec automation, for all of you who are either currently involved in DevSecOps or planning to adopt it in the near future.

MYTH: DAST + SAST = Complete Automation. Nothing More.

One obvious strategy that teams envisage for automation is integrating Static (SAST) and Dynamic (DAST) analysis tools into the development pipeline. Since these tools have built-in integrations with Continuous Integration (CI) and defect-tracking services, teams can get their automation plumbing up and running in a matter of days. Sounds pretty straightforward, right? Well, not so much.

What tends to get overlooked is the fact that these scanners generate a lot of "noise" in the system, by way of false positives, repetitive results across scanners, disparities in vulnerability nomenclature (the same bug by many names), etc. In an automated system, all of this noise gets raised as high-priority tickets in the bug-tracking system, making it close to impossible for the engineering team to prioritize and remediate issues.

Vulnerability correlation is essential, as it greatly reduces manual triaging of results across tools. The crux of correlation lies in its ability to deduplicate repeated results across SAST and DAST, normalize vulnerability nomenclature, flag false positives, and arrive at a single set of unique results. These results can then be pushed to the defect trackers, ensuring that engineering and security teams have a single set of vulnerabilities to further scrutinize.
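As a sketch, correlation can be as simple as keying each finding on a normalized identifier plus location, so the same bug reported under different names by SAST and DAST collapses into a single ticket. The use of a CWE ID as the normalized name, along with the tool names, fields, and sample findings below, are all illustrative assumptions:

```python
# Minimal vulnerability-correlation sketch: deduplicate findings from
# multiple scanners by (normalized identifier, location).
def correlate(findings):
    unique = {}
    for f in findings:
        key = (f["cwe"], f["location"])  # normalized name + where it was found
        if key in unique:
            unique[key]["tools"].add(f["tool"])  # duplicate: keep provenance only
        else:
            unique[key] = {"cwe": f["cwe"], "location": f["location"],
                           "name": f["name"], "tools": {f["tool"]}}
    return list(unique.values())

# Hypothetical raw results: the same SQL injection reported twice under
# different names, plus one XSS finding.
findings = [
    {"tool": "sast", "name": "SQL Injection",  "cwe": "CWE-89", "location": "/search"},
    {"tool": "dast", "name": "SQLi (blind)",   "cwe": "CWE-89", "location": "/search"},
    {"tool": "dast", "name": "Reflected XSS",  "cwe": "CWE-79", "location": "/profile"},
]

unique = correlate(findings)  # 3 raw results collapse to 2 unique issues
```

Only the deduplicated list would then be pushed to the defect tracker, with the `tools` set recording which scanners agreed on each finding.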

MYTH: Security Automation Will “Always” Delay My Build Timeframes

While it is true that security automation will have some impact on your build time, that impact can be minimized depending on the architecture of your build pipeline. One option (especially for early-adopter teams) is to set up a separate security pipeline. This can run as a parallel process and be configured to scan on a frequency and timeline that will not impact your mainstream application build timeframe.

Another option would be to set up SAST and dependency checkers as part of your main pipeline and a separate pipeline for DAST scanning. Since SAST and dependency checkers have a shorter execution time, they have less impact on your app pipeline. In this scenario, one can configure static analysis to run daily and dynamic analysis to run weekly. The impact can be further reduced by tuning the scanner policy to run sanity scans on daily builds and deeper scans on weekly builds.

Additionally, in the event that high-severity flaws are identified, teams can configure pipeline rules to break the build.
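A minimal sketch of such a build-breaking rule, assuming scanner results have already been correlated into a list of findings with severity labels (the field names and four-level severity scale are illustrative assumptions):

```python
# Break-the-build gate: fail the pipeline when any finding meets or
# exceeds a configurable severity threshold.
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def should_break_build(findings, threshold="high"):
    limit = SEVERITY_RANK[threshold]
    return any(SEVERITY_RANK[f["severity"]] >= limit for f in findings)

# Hypothetical correlated findings for the current build.
findings = [
    {"id": "V-1", "severity": "medium"},
    {"id": "V-2", "severity": "high"},
]

build_ok = not should_break_build(findings)  # V-2 trips the high-severity gate
```

A CI step could call `should_break_build` on the correlated results and exit non-zero to fail the stage, making the policy explicit and tunable per pipeline.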

MYTH: Quality Assurance (QA) and Security Are Mutually Exclusive

Product engineering teams have accepted the fact that, in order to build a secure application from the bottom up, engineering and security teams need to work in unison. However, in almost every conversation that I have had on the subject, I am asked why QA needs to be involved and what value they would bring.

In simple-speak, functional walkthrough scripts developed by QA are crucial in providing DAST scanners additional context, thereby enabling the tool to scan with greater efficiency and depth. Let me break it down for you.

Almost all DAST scanners tend to be crawler-based. This essentially means that, in order to be effective, the scanner has to traverse multiple pages of the web application. However, applications today are increasingly built as single-page apps or rely heavily on a microservices architecture. In such a scenario, application functions are not invoked based on URLs but are called upon based on user input. Without multiple URLs to traverse and build a sitemap, DAST scanners will only scan the home page at best and end up doing a very cursory scan.

A QA walkthrough script essentially validates the functionality of the application by traversing through various modules sequentially. For example, if you are booking air tickets on a travel e-commerce website, the flow would be Login→Select Destination→Select Dates→Search & Select Carriers→Proceed to Checkout→Make Payment→Ticket Booked. The walkthrough script provides all the input parameters and output responses of the application as part of this workflow. Proxying the walkthrough script through DAST gives the scanner context of the sequential workflow. The scanner can then traverse the application based on the workflow and fire its payloads accordingly.

This is especially important in the case of microservices or APIs. A walkthrough script will specify the API URL and specific input parameters extending the capabilities of DAST (which can only comprehend web pages) to APIs as well. Using walkthrough scripts also allows for module level scanning, which comes in handy when you want to test only for iterative feature additions to the application.
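To make this concrete, here is a hedged sketch of what such a walkthrough script might look like, replaying the booking flow above through an intercepting DAST proxy so the scanner sees the full sequential workflow and every input parameter. The proxy address, endpoints, and parameters are all assumptions for illustration:

```python
import urllib.parse
import urllib.request

# Hypothetical intercepting DAST proxy (e.g. a scanner listening locally).
PROXY = "127.0.0.1:8080"

# QA walkthrough of the booking workflow: Login -> Search -> Checkout -> Payment.
# Every step, method, path, and parameter here is illustrative.
BOOKING_FLOW = [
    ("POST", "/login",        {"user": "qa", "password": "secret"}),
    ("GET",  "/destinations", {"query": "SFO"}),
    ("POST", "/search",       {"from": "SFO", "to": "JFK", "date": "2024-07-01"}),
    ("POST", "/checkout",     {"carrier": "AA", "fare": "economy"}),
    ("POST", "/payment",      {"card": "4111111111111111"}),
]

def replay(base_url):
    # Route every request through the proxy so the scanner records the
    # sequential workflow and can fire its payloads against each step.
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": PROXY, "https": PROXY}))
    responses = []
    for method, path, params in BOOKING_FLOW:
        encoded = urllib.parse.urlencode(params)
        if method == "GET":
            req = urllib.request.Request(base_url + path + "?" + encoded)
        else:
            req = urllib.request.Request(base_url + path, data=encoded.encode())
        responses.append(opener.open(req))
    return responses
```

Because the flow is plain data, the same approach extends to API endpoints that a crawler could never discover, and individual steps can be replayed for module-level scans of new features.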

MYTH: There Is No Change Needed in the Current Penetration Testing Process

In order for continuous security automation to be effective, it is essential to change the current pen-testing process to an iterative one, aligned to new releases as much as possible. The first step would be to adopt a threat-modeling approach to pen-testing. Threat modeling is ideally conducted during application whiteboarding and allows security testers to draw up threat scenarios and associated mitigations for critical sections of the application. Threat models can be further mapped to test cases. Pen-testers can then focus their effort on validating threat scenarios pertaining to logic flaws, which are unique to each application. Threat scenarios of a generic nature (such as CSRF or XSS) can be scripted and automated. These generic test scripts can run against every build, saving a lot of time compared to manual validation of every vulnerability.

The current system of pen-testing involves testing the application in its entirety at periodic intervals and delivering "PDF" reports at the end, which does not fit a DevOps environment. With PDF reports, developers are unable to reproduce a vulnerability or the specific conditions in which the bug was found. They also have no way of validating remediations without going back to the security team. Under constant pressure to maintain a stable app and deliver new features, developers more often than not ignore security reports, as they just do not fall within their list of priorities or daily workflow.

To ensure that pen-testing stays relevant for DevOps, security needs to adopt regression testing. Logic flaws identified during pen-testing can be scripted and automated as part of engineering’s CI services. Every time the application is built, the CI will invoke the script to validate the presence of logic flaws. This benefits engineering by reducing their dependency on security and enables security to conduct pen-testing iteratively in line with new feature releases.
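As a sketch of what a scripted logic-flaw regression might look like, here is a hypothetical authorization-bypass (IDOR) check that CI could run on every build. The `fetch_invoice` stub, resource names, and status codes stand in for a real HTTP call against the application and are assumptions for illustration:

```python
# Simulated endpoint behavior: a correctly fixed app returns 403 when a
# user requests another user's invoice. In a real regression script this
# would be an authenticated HTTP request against the deployed build.
INVOICES = {"inv-100": "alice", "inv-200": "bob"}

def fetch_invoice(invoice_id, as_user):
    owner = INVOICES.get(invoice_id)
    if owner is None:
        return 404
    return 200 if owner == as_user else 403

def test_no_idor():
    # The logic flaw found during pen-testing: Bob must not be able to
    # read Alice's invoice. This assertion guards against regression.
    assert fetch_invoice("inv-100", as_user="bob") == 403

def test_owner_access_still_works():
    # The fix must not break the legitimate path.
    assert fetch_invoice("inv-100", as_user="alice") == 200
```

Run by the CI service on every build, a suite of such checks validates previously identified logic flaws without pulling the security team back in.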

MYTH: Automation Replaces Manual Penetration Testing

It is a common (and widespread) misconception that, with application security automation in place, penetration testing is no longer needed: since pen-testing is primarily a manual process, the thinking goes, it has no place in an automated environment.

Contrary to popular belief, testing done purely by tools does not constitute a full pen-test. A pen-test comprises Vulnerability Assessment (VA) and manual exploitation, or penetration testing (PT). VA is a tool-driven process that scans the application in its entirety, thereby giving coverage. But VA only identifies 30-40 percent of the vulnerabilities in an application, essentially the low-hanging fruit: generic flaws present in all types of applications.

Let’s be clear: there is no substitute for manual penetration tests. PT is the only way to identify logic flaws, such as a privilege escalation or an authorization bypass, which cannot be identified by tools. These flaws are unique to each application and are typically of high severity. While VA gives coverage, PT brings depth, and both put together constitute a comprehensive security assessment of your application.

MYTH: Testers Need Not Have Coding Skills

In order to be relevant and effective in a DevOps environment, security testers need to invest in developing coding skills. Typically, security testers have a black box or "outside-in" view of an application. They do not understand the application's architecture or the finer nuances of how the app has been coded. As a result, any assessment conducted by security is taken as a "finger-pointing" exercise by engineering.

Investing in coding skills helps testers understand the unique aspects of each programming language and appreciate why a particular functionality has been coded a certain way. It also broadens their skill set to conducting white-box assessments and code reviews. Conducting a tabletop code walkthrough alongside developers enables security to recommend appropriate code changes without adversely affecting the stability of the app.

Coding skills also help security create "Exploit as Code." Scripting high-severity vulnerabilities (logic bombs) identified during pen-tests and automating them as part of the build ensures that they are caught early in the build process. Over time, these exploit scripts can act as a regression suite, validating logic flaws across multiple iterations of the application.

MYTH: AppSec Automation Kicks in 100 Percent From Day One of Implementation

Implementing continuous security automation does not mean that 100 percent automation will be achieved from day one. There are two core aspects to AppSec automation: the first is the raw plumbing of DAST/SAST integration in continuous delivery; the second is continuous security regression. Identifying the right mix of open-source and commercial tools, configuring them with appropriate policies and scan frequencies, and, finally, integrating and automating them to run as part of, or in parallel to, the build pipeline is achievable in a matter of days (as long as all dependencies and access requirements are met).

However, building security regressions will take more time. In most cases, the target app or apps are already in production. A baselining exercise would have to be carried out to identify the backlog of security vulnerabilities. Once identified, these would have to be scripted and automated as part of the build pipeline.

The above compilation reflects very real conversations that we’ve had with security testers, product architects, engineering heads, developers, QA, and DevOps professionals. So, if any of these myths were holding you back from jumping on the automation bandwagon, hopefully, our humble opinion has changed yours.



Published at DZone with permission of

Opinions expressed by DZone contributors are their own.
