At the center of any security program, as in any business or marriage, is trust. That’s what makes bug bounties – and all gradients of crowd-sourced security models – difficult for some application defenders inside banks to digest. 

For the uninitiated, these models break down into several flavors: 

  • Vulnerability Disclosure Policies are essential notices that give hackers a specified path to report software weaknesses in online services. Such policies are often associated with a designated email alias.
  • Public Bug Bounties are built atop vulnerability disclosure policies. They provide monetary incentives for independent security researchers to point out software weaknesses.
  • Private Bug Bounties are defined by the scope of the program and are not time-bound. Unlike public programs, however, only invited security researchers can participate. Those researchers often produce more valid results for internal teams to triage.
  • Crowd-Sourced Penetration Tests – such as those from Synack and Cobalt – function just like a traditional engagement, except that they include a targeted group of testers matched to projects because of their skill sets. The vendors vet these testers carefully and treat them more like contractors than freelancers. The end result is on-demand testing guided by a platform that sometimes incentivizes finding bugs in areas where few have been discovered.

All of the above rely on a gig economy that gives private companies (and increasingly governments) access to security researchers who either can’t or don’t want to work inside a bank. There are, of course, security benefits: continuous testing and (for private and public bug bounties) paying only for valid results. Both are important to the modern realities of digital banking. 

When asked about their top concerns about hackers testing their web applications, however, security professionals inside financial institutions (FIs) overwhelmingly questioned whether outsiders could be trusted. See the chart below. 

They’re right to worry. Brian Krebs recently unmasked a 22-year-old bug hunter – recognized by both T-Mobile and AT&T for his prowess at identifying security flaws in their respective services – as running an illegal side hustle: exploiting flaws similar to those he had found for the mobile carriers in order to sell their customers’ personal information.  

In 2015, a security researcher upset about a bounty for Instagram went too far. In essence, he hacked the website – publishing a blog post bragging about how he exploited the weakness to download SSL certificates and private keys from the site. These are just two of the most high-profile examples. 

Earlier this year, we published a report on crowd-sourced security models. The research highlighted the importance of making engagement with independent security researchers part of an FI’s digital banking strategy. Vulnerability disclosure forces internal teams to create procedures for escalating bug reports that come from outside the bank. 

American Express and Goldman Sachs, for instance, have published detailed instructions on how best to report flaws directly to them. Others have publicly designated special security-and-responsible-disclosure@ email addresses. 
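One increasingly common way to publish such contact details is a security.txt file (RFC 9116), served from the /.well-known/ path of a site. The sketch below is purely illustrative – the domain, email alias, and URLs are hypothetical, not those of any bank named above:

```
# Served at https://example-bank.com/.well-known/security.txt
# Contact: where researchers should send reports (hypothetical alias)
Contact: mailto:security-disclosure@example-bank.com
# Expires: required by RFC 9116; signals how long this file is valid
Expires: 2026-12-31T23:59:59Z
# Policy: link to the bank's vulnerability disclosure policy (hypothetical URL)
Policy: https://example-bank.com/responsible-disclosure
Preferred-Languages: en
```

Publishing a file like this gives outside researchers the "specified path" a vulnerability disclosure policy promises, without requiring them to hunt for the right inbox.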

To their credit, the companies overseeing this testing are aware of issues of trust. The big message at platform vendors – HackerOne and Bugcrowd included – is that independent security researchers are motivated equally by cash and by making the internet safer. 

That’s fine. But what matters to security executives inside banks is shifting liability. One vendor’s general counsel told me that it’s a cocktail of assurances that keeps its customers happy: indemnities, warranties, insurance, limitations of liability, data protection agreements (under GDPR), security policies, and certifications all contribute.

“It’s part of a larger program where the end goal is that a customer,” they added, “says to itself ‘we have a good and fair and reasonable outcome here.’” 

Crowd-sourced penetration testing shops offer much of, if not exactly, the same coverage as their competitors (think: NCC Group, IOActive, and Bishop Fox). Cobalt’s researchers are covered by its errors-and-omissions insurance as well as its general cybersecurity insurance. In addition to insurance, Synack’s platform monitors all of its researchers’ testing activity for questionable behavior. (Bugcrowd recently launched a similar penetration testing service.)

Regardless, on a recent call with an online bank, an executive asked me: Can any of these companies be trusted? My answer: Yes, as long as you shift the risk.

Independent security researchers are testing your web applications with or without your permission. Banks can either get on board with the idea and partner with those mostly economically incentivized bug hunters – or ignore that reality. 


About Sean Sposito

Sean Sposito is an analyst in the fraud & security practice at Javelin Strategy & Research. His primary focus is the intersection of retail banking and information security. The topics he’s keenly interested in are vulnerability disclosure, cybersecurity insurance, threat intelligence, and the overall challenges facing security executives inside financial institutions. 

Before joining Javelin, Sean worked as a reporter at the San Francisco Chronicle, the Atlanta Journal-Constitution, and American Banker, among others. As a content strategist at the Christian Science Monitor, he counseled security vendors, PR agencies, and in-house communications executives on storytelling techniques and media engagement. 

He has moderated panels at the Visa Security Summit, the ATM Debit & Prepaid Forum, the Emerging and Mobile Payments Card Forum, the Mobile Banking and Commerce Summit, and the Mobile Payment Conference, among others. He holds a bachelor’s degree from the University of Missouri’s School of Journalism. 

Stay in Touch!