Wednesday, May 13, 2009

5 great Web security blogs you haven't heard of

I read a tremendous amount of online material, much of which originates from 200+ RSS feeds. Sure, the well-known blogs continue to generate great, timely content, but there are a few diamonds in the rough that don't get a lot of attention. They focus on quality rather than quantity, offering deep infosec business and technical analysis on subjects not well covered elsewhere. Figured I should share a few of my favorites.


Boaz Gelbord
With a business rather than technical tone, Boaz discusses how organizations act and react to industry forces such as compliance, regulation, and law. He routinely explores how management, spending, and incentives influence organizational behavior.

ZScaler Research - Michael Sutton, Jeff Forristal, etc.
Heavy on technical detail and very timely on Web security issues: Cross-Site Scripting, browser security, worms, and more. What more could you want!?

Tactical Web Application Security - Ryan Barnett
A technical and operational point of view on Web security matters with great attack analysis.

HolisticInfoSec.org - Russ McRee
The best way I can describe Russ is that he keeps the infosec industry honest, and that includes vendors AND website owners. While exceptionally fair-minded, he's not at all shy about calling BS when he sees it.

The Spanner - Gareth Heyes
Deeply technical. Browser vendors beware: Gareth Heyes is the master of HTML/JS code obfuscation. Encodings, strange "features", and XSS are just some of the topics covered in stellar detail.

Friday, May 08, 2009

WAFs and anti-SDL assumptions

When someone advocates Web Application Firewalls (WAF), some people mistakenly assume they must also be anti-Security Development Lifecycle (SDL). For myself and WhiteHat Security, nothing could be further from the truth. While WhiteHat advocates the use of WAFs, what most people do not realize is that we also develop a significant amount of mission-critical Web code in-house. Our SDL is incredibly important to us because, just like many of you, we have a development team responsible for building websites: websites that safeguard and maintain access control over extremely sensitive data, our customers' data. That website would be WhiteHat Sentinel.

WhiteHat Sentinel, beyond the mass-scale vulnerability scanning capabilities we are well known for, is also a sophisticated multi-user Web-based portal that manages website vulnerability data. Yes, we use WAFs, but don’t assume for a moment this means we are pushing insecure code and expecting them to keep everything safe. No way! The fact is our systems are attacked constantly by robots, competitors, customers, third-party vendors, and everyone else you can imagine. We commonly receive free (unauthorized) scans from products such as AppScan and WebInspect, which is somewhat ironic. We also routinely attack ourselves: Sentinel scans Sentinel. Through it all our system MUST continue humming along securely 24x7x365. That can’t happen without a commitment to software security.

After nearly a decade we are used to this type of environment. It is part of being in the industry, and this visibility is one thing that keeps us on the ball. We know that if we let down our guard, even for a moment, bad things will surely happen, as they have for so many other security vendors named in unfortunate headlines. In my opinion complacency is the enemy of security, even more so than complexity. While I can’t describe all the processes we use to protect the data, I can say they are significant. I’ll see what I can do about dedicating some future posts to our internal development processes. Who knows, some people might be interested. In the meantime, one could assume we are loaded up with network, host, and yes, even software security products and processes.

Real-World website vulnerability disclosure & patch timeline

Protecting high-traffic, high-value websites can be an interesting InfoSec job to say the least. One thing you quickly learn is that you are under constant attack by essentially everyone, with every technique they have, all the time. Noisy robots (worms & viruses) and fully targeted attackers are just par for the course. Sometimes the attackers are independent researchers testing their mettle, or the third-party security firm hired to report what all the aforementioned attackers already know (or are likely to know) about your current security posture.

When new Web code is pushed to production, it's a really good idea to have a vulnerability assessment scheduled immediately afterward, separate from any SDL defect-reduction processes. PCI-DSS requires this of merchants. At this point it becomes a “find and fix” race between the good guys and the bad guys to identify any previously unknown issues. Below is a real-world website vulnerability disclosure and patch timeline from a WhiteHat Sentinel customer who takes security, well, very seriously. The website in question is rather large and sophisticated.

* Specific dates and times have been replaced with elapsed time (DD:HH:MM) to protect the identity of those involved. Some exact dates/times could not be confirmed.


??:??:?? - New Web code is pushed to a production website
00:00:00 - WhiteHat Sentinel scheduled scan initiates
02:19:45 - Independent researcher notifies website owner of a website vulnerability
02:20:19 - WhiteHat Sentinel independently identifies (identical) potential vulnerability
* WhiteHat Sentinel scan total time elapsed: 00:26:19 (excluding blackout windows)
02:21:24 - Independent researchers publicly discloses vulnerability details
02:23:18 - WhiteHat Operations verifies Sentinel discovered vulnerability (customer notification sent)
02:23:45 - Website owner receives the notifications and begins resolution processes
03:00:00 - WhiteHat Operations notifies customer of public disclosure
??:??:?? - Web code security update is pushed to production website
03:09:06 - WhiteHat Sentinel confirms issue resolved

Notice the independent researcher reported the exact same issue we found, less than an hour before we found it! They could just as easily have chosen not to disclose it. Also note the customer's speed of fix, under 12 clock hours, which is stellar considering most fixes take weeks or months. As you can see, the bad guys are scanning and testing just as hard, fast, and continuously as we are, which is a little scary to think about.
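
For anyone wanting to double-check the fix window, here is a minimal sketch in Python (the timestamps are transcribed from the timeline above; nothing else is assumed) that converts the DD:HH:MM elapsed times and confirms the turnaround was under 12 clock hours.

from datetime import timedelta

def parse(elapsed):
    """Convert a DD:HH:MM elapsed-time string into a timedelta."""
    days, hours, minutes = (int(part) for part in elapsed.split(":"))
    return timedelta(days=days, hours=hours, minutes=minutes)

# Timestamps transcribed from the timeline above.
owner_begins_fix = parse("02:23:45")   # website owner begins resolution
fix_confirmed    = parse("03:09:06")   # Sentinel confirms issue resolved

fix_window = fix_confirmed - owner_begins_fix
print(fix_window)                        # 9:21:00
print(fix_window < timedelta(hours=12))  # True, i.e. under 12 clock hours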

Saturday, May 02, 2009

8 reasons why website vulnerabilities are not fixed

Some reasons I've heard over the years. In no particular order...
  1. No one at the organization understands or is responsible for maintaining the code.
  2. Features are prioritized ahead of security fixes.
  3. Affected code is owned by an unresponsive third-party vendor.
  4. Website will be decommissioned or replaced "soon".
  5. Risk of exploitation is accepted.
  6. Solution conflicts with business use case.
  7. Compliance does not require it.
  8. No one at the organization knows about, understands, or respects the issue.
Any others?

Friday, May 01, 2009

Mythbusting: Secure code is less expensive to develop

Conventional wisdom says developing secure software from the beginning is less expensive in the long run. Commonly cited as evidence is an IEEE article, "Software Defect Reduction Top 10 List" (published Jan 2001), which states, "Finding and fixing a software problem after delivery is often 100 times more expensive than finding and fixing it during the requirements and design phase." Many security practitioners borrowed this metric (and others like it) in an effort to justify a security development life-cycle (SDL) investment, because software vulnerabilities can be viewed as nothing more than defects (problems). The reason is that it's much easier to demonstrate a hard return on investment by saying, "If we spend $X implementing an SDL we will save $Y," as opposed to attempting to quantify a nebulous risk value by estimating, "If we spend $X implementing an SDL, we'll reduce our risk of losing $Y by B%."

The elephant in the room is that vulnerabilities are NOT the same as functional problems, and the quote from the aforementioned article references research from 1987. That's pre-World Wide Web! Data predating C#, Java, Ruby, PHP, and even Perl -- certainly reason enough to question its applicability in today's landscape. For now though, let's focus on what a vulnerability is and isn't. A vulnerability is a piece of unintended functionality enabling an attacker to penetrate a system. An attacker might exploit a vulnerability to access confidential information, obtain elevated account privileges, and so on. A single instance represents an ongoing business risk until remediated, a risk not guaranteed to materialize, and one that may be acceptable according to an organization's tolerance for that risk. Said simply, a vulnerability does not necessarily have to be fixed for an application to continue functioning as expected. This is very different from a functional problem (or bug if you prefer) that actively prevents an application from delivering service and/or generating revenue, and that does have to be fixed (thank you Joel!).

Functional defects often number in the hundreds, even thousands or more depending on the code base, easily claiming substantial portions of maintenance budgets. Reducing the costs associated with functional defects is a major reason why so much energy is spent evolving the art of software development into a true engineering science. It would be fantastic to eliminate security problems using the same money-saving logic before they materialize. To that end, best-practice activities such as threat modeling, architectural design reviews, developer training, source code audits, scanning during QA, and more are recommended. The BSIMM, a survey of nine leading software security initiatives, accounted for 110 such activities. Frequently these activities require fundamental changes to the way software is built and business is conducted. Investments starting at six to seven figures are not abnormal, so SDL implementation should not be taken lightly and must be justified accordingly. Therefore, the question we need to answer is: how much could we have eliminated upfront, and at what cost?

Our work at WhiteHat Security reveals websites today average about seven serious and remotely exploitable vulnerabilities, leaving the door open to compromise of sensitive information, financial loss, brand damage, violation of industry regulation, and downtime. For those unfamiliar with the methods commonly employed to break into websites, they are not buffer overflows, format string issues, and unsigned integers. Those techniques are most often applied to commercial and open source software. Custom Web applications are instead exploited via SQL Injection, Cross-Site Scripting (XSS), and various forms of business logic flaws -- the very same issues prevalent in our Top Ten list and, not so coincidentally, a leading cause of data loss according to Verizon's 2009 Data Breach Investigations Report, which points to external attackers, linked to organized crime, exploiting Web-based flaws.
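
To make the contrast with memory-corruption bugs concrete, here is a minimal, hypothetical sketch in Python using the standard sqlite3 module (the users table, field names, and input value are invented purely for illustration) showing how a typical SQL Injection flaw arises and how a parameterized query removes it.

import sqlite3

# Hypothetical example data, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'top-secret')")

user_input = "' OR '1'='1"  # attacker-controlled value

# Vulnerable: attacker-controlled input is concatenated into the SQL string,
# so the WHERE clause always evaluates true and every row is returned.
vulnerable = conn.execute(
    "SELECT secret FROM users WHERE username = '" + user_input + "'"
).fetchall()
print(vulnerable)  # [('top-secret',)] -- data leaked

# Fixed: a parameterized query treats the input strictly as data.
safe = conn.execute(
    "SELECT secret FROM users WHERE username = ?", (user_input,)
).fetchall()
print(safe)  # [] -- no match, no leak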

To estimate cost-to-fix, I queried my 1,100+ Twitter (@jeremiahg) followers (and other personal contacts) for how many man-hours (not clock time) it normally requires to fix (code, test, and deploy) a standard XSS or SQL Injection vulnerability. Answers ranged from 2 to 80 hours, so I selected 40 as a conservative value, paired with $100 per hour in hard development costs. Calculated:

40 man-hours x $100 per hour
= $4,000 in labor costs to fix a single vulnerability

$4,000 x 7 (average # of vulnerabilities per website)
= $28,000 in outstanding insecure software debt
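
For the curious, here is a quick back-of-the-envelope sketch in Python using the figures quoted above; the 2-80 hour spread from the survey simply shows how sensitive the total is to the man-hour assumption.

# Figures quoted above: $100/hour, 7 serious vulnerabilities per site on average.
HOURLY_RATE = 100
AVG_VULNS_PER_SITE = 7

def insecure_software_debt(man_hours_per_fix, vulns=AVG_VULNS_PER_SITE, rate=HOURLY_RATE):
    """Estimated labor cost to fix all outstanding vulnerabilities on one site."""
    return man_hours_per_fix * rate * vulns

print(insecure_software_debt(40))  # 28000 -- the $28,000 estimate used above

# The survey answers ranged from 2 to 80 man-hours per fix:
for hours in (2, 40, 80):
    print(hours, "hours ->", insecure_software_debt(hours))
# 2 hours -> 1400
# 40 hours -> 28000
# 80 hours -> 56000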

To be fair, there are outlier websites so insecure, such as those having no enforced notion of authorization, that the entire system must be rebuilt from scratch. Still, we should not automatically assume supporting Web-based software is the same as supporting traditional desktop software, as the differences are vast. Interestingly, even if the aforementioned 100x-less-expensive-to-fix-during-the-design-phase metric still holds true, the estimates calculated above do not seem to be a cause for serious concern. Surely implementing a regimented SDL to prevent vulnerabilities before they happen is orders of magnitude more expensive than the $28,000 required to fix them after the fact. When you look at the numbers in those terms, a fast-paced and unencumbered release-early-release-often development model sounds more economical. Only it isn't. It really isn't. Not due to raw development costs, mind you, but because the risks of compromise are at an all-time high.

Essentially every recent computer security report directly links the rise in cybercrime, malware propagation, and data loss to Web-based attacks. According to Websense, "70 percent of the top 100 most popular Web sites either hosted malicious content or contained a masked redirect to lure unsuspecting victims from legitimate sites to malicious sites." The Web Hacking Incident Database has hundreds more specific examples, and when a mission-critical website is compromised it is basically guaranteed to surpass $28,000 in hard and soft costs. Downtime, financial fraud, loss of visitor traffic and sales when search engines blacklist the site, recovery efforts, increased support call volume, FTC and payment card industry fines, headlines tarnishing trust in the brand, and so on are typical. Of course this assumes the organization survives at all, which has not always been the case. Pay now or pay later: this is where the meaningful costs are located. Reducing the risk of such an event, and minimizing the effects when it does occur, is a worthwhile investment. How much to spend is proportional to each organization's tolerance for risk and the security professional's ability to convey that risk accurately to the stakeholders. At the end of the day, having a defensible position of security due care is essential. This is a big reason why implementing an SDL can be less expensive than not.