IP

Monday, November 29, 2010

AJAX: Is your application secure enough?


 
Introduction
We see it all around us these days: web applications get niftier by the day by utilising various techniques recently introduced in web browsers such as Internet Explorer and Firefox. One of those techniques involves JavaScript; more specifically, the XmlHttpRequest class, or object.
Webmail applications use it to quickly update the list of messages in your Inbox, while other applications use the technology to suggest search queries in real time. All this without reloading the main, sometimes image- and banner-ridden, page. (That said, it will most probably be used by some of those ads as well.)
Before we go into possible weaknesses and things to keep in mind when implementing an AJAX-enabled application, first a brief description of how this technology works.
The Basics
Asynchronous JavaScript and XML, dubbed AJAX, basically boils down to the following. Let me illustrate with an example: an email application. You are looking at your Inbox and want to delete a message. In a plain HTML application, the POST or GET request would perform the action and then redirect back to the Inbox, effectively reloading it.
With the XmlHttpRequest-object, however, this request can be done while the main page is still being shown.
In the background, a call is made which performs the actual action on the server and optionally responds with new data. (Note that this request can only be made to the web-site the script is hosted on: it would leave massive DoS possibilities if I could create an HTML page that, using JavaScript, requested thousands of concurrent pages from another web-site. You can guess what would happen if a lot of people visited that page.)
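The flow above can be sketched in a few lines of JavaScript. The endpoint and parameter names here are made up for illustration:

```javascript
// Build the query string for a hypothetical "delete message" action.
function buildDeleteQuery(messageId) {
  return "action=delete&msgid=" + encodeURIComponent(messageId);
}

// Fire the request in the background; the page itself is not reloaded.
// (XMLHttpRequest only exists in the browser and is only touched here.)
function deleteMessage(messageId, onDone) {
  var xhr = new XMLHttpRequest();
  xhr.open("POST", "/mail/handler", true); // true = asynchronous
  xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && onDone) onDone(xhr.responseText);
  };
  xhr.send(buildDeleteQuery(messageId));
}
```

The `onDone` callback is where the application would update the message list with whatever the server sent back.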
The Question
Some web-enabled applications, such as email, do have pretty destructive functionality that could be abused. The question is: will the average AJAX-enabled web-application be able to tell the difference between a real and a faked XmlHttpRequest?
Do you know if your recently developed AJAX-enabled or enhanced application is able to do this? And if so — does it do this adequately?
Do you even check the referrer or some trivial token such as the User-Agent? Chances are you do not even know. Chances are that other people, by now, do.
To be sure that the system you have implemented, or one you are interested in using, is properly secured and thus trustworthy, one has to 'sniff around'.
Incidentally, the first time I discovered such a thing was in a lame preview function on a lame ringtone site. The XmlHttpRequest URI's 'len' parameter specified the length of the preview to generate, and it appeared to read from the original file. By entering this URI in a browser (well, actually, curl) and specifying a very large value, one could easily grab all the files.
This is a fatal mistake: implementing an AJAX interface that accepts GET requests. GET requests are the easiest to fake. More on this later.
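To see why the ringtone site fell over, consider how such a preview URI might be built. The path and parameter names are my own reconstruction, not the actual site's:

```javascript
// Hypothetical preview URI as the ringtone site might construct it.
function previewUri(fileId, len) {
  return "/preview?file=" + encodeURIComponent(fileId) + "&len=" + len;
}

// The site intended something like a short clip:
var intended = previewUri("tone123", 10);

// But nothing stops anyone from replaying the same GET with a huge
// value, effectively downloading the whole file:
var abused = previewUri("tone123", 999999);
```

Because it is a plain GET, the "abused" URI works just as well pasted into a browser's address bar or fed to curl.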
The question is: can we perform an action while somebody is logged in somewhere else? It is closely related to XSS (Cross-Site Scripting), but then again, it isn't quite the same; this class of attack is known as Cross-Site Request Forgery (CSRF).
My Prediction
Some popular applications I checked are hardened in such a way that they use some form of random sequence numbering: the server tells the application, in encoded form, what to use as the sequence number when sending the next command. This is mostly obscured by JavaScript and a pain in the ass to dissect, but not impossible.
And as you may have already noted: if there is improper authentication on the location called by the XmlHttpRequest object, this leaves an opening for malicious use. This is exactly where we can expect weaknesses and holes to arise. There should be proper authentication in place. At all times.
As all these systems are built by humans, chances are this isn't done properly.
HTTP traffic analysis
Analysing HTTP traffic with tools like Ethereal (nowadays called Wireshark; yes, I like GUIs, so sue me) surely comes in handy to figure out whether applications you use are actually safe from exploitation. It allows one to easily filter and follow TCP streams, so one can properly analyse what is happening there.
If you want to investigate your own application, a sniffer isn't even necessary, but I would suggest you let a colleague who hasn't implemented it play around with your app and a sniffer in an attempt to 'break' through it.
Cookies
Cookies are our friend when it comes to exploiting, I mean researching any vulnerabilities in AJAX implementations.
If the XmlHttp interface is merely protected by cookies, exploiting it is all the easier: the moment you get the browser to make a request to that website, it happily sends any cookies along with it.
Back to my earlier remark about GET requests being a pretty lame implementation: from a developer's point of view, I can imagine one temporarily accepting GET requests to be able to debug things easily without constantly having to enter irritating HTTP data using telnet. But when you are done with it, you really should disable it immediately!
I could shove such a GET request into a hidden image link. Sure, the browser doesn't understand the returned data, which might not even be an image. But it does happily send any authenticating cookies along, and the web application on the other end will have performed the operation.
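The image trick boils down to a single line of markup. The victim URI here is invented; any state-changing GET endpoint would do:

```javascript
// Craft an <img> tag whose "image" is really a state-changing GET
// request. Any browser rendering it fetches the URI automatically,
// cookies and all; the broken 1x1 "image" goes unnoticed.
function csrfImageTag(actionUri) {
  return '<img src="' + actionUri + '" width="1" height="1" alt="">';
}

// Embedded in a forum post or HTML email, this silently fires the request:
var tag = csrfImageTag("http://victim.example/mail?action=delete&msgid=42");
```

This is exactly why state-changing operations must never be reachable via GET.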
Using GET is a major mistake to make. POST is a lot better, as it is harder to fake. The XmlHttpRequest object can easily do a POST, but I cannot get a script (I could have embedded one in this article, for instance) to do a POST request to another website, because of the restriction noted earlier: you can only make requests to the same web-site the web-application is hosted on.
One can modify one's own browser to make requests to other websites, but it would be hard to get the browser on somebody else's machine to do this.
Or would it?
If proper authentication, or rather credential verification, still sucks, I can simply set up a web-site that performs the exact POST request the AJAX interface expects. It will be accepted and the operation will be performed. Incidentally, I have found a popular site that, so far, does not seem to have proper checks in place. More on that one in another article.
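Faking that POST does not require anything fancy: a hidden, self-submitting form on the attacker's page will do, since regular form POSTs are not bound by the same-site restriction that XmlHttpRequest is. The endpoint and field names below are invented:

```javascript
// Build an auto-submitting HTML form that POSTs to the victim's AJAX
// endpoint. The visitor's browser attaches its cookies as usual.
function csrfPostPage(actionUri, fields) {
  var inputs = "";
  for (var name in fields) {
    inputs += '<input type="hidden" name="' + name +
              '" value="' + fields[name] + '">';
  }
  return '<form id="f" method="POST" action="' + actionUri + '">' +
         inputs + "</form>" +
         "<script>document.getElementById('f').submit();<\/script>";
}

var page = csrfPostPage("http://victim.example/mail/handler",
                        { action: "delete", msgid: "42" });
```

Anyone lured to a page containing this markup performs the POST without ever knowing it.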
Merely using cookies is again a bad idea.
One should also check the User-Agent and possibly a token in another header (the XmlHttpRequest object nicely allows one to send additional request headers, so you could just put some token in a custom header field). Sure, these can still be faked, but it may fend off some investigating skiddiots.
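Setting such a token is a one-liner on the client side. The header name here is my own invention, and, as said, anyone driving their own HTTP client can forge it just as easily:

```javascript
// Client side: attach a custom token header to an XmlHttpRequest
// before calling send(). Plain <img>/<form> tricks cannot set this.
function attachToken(xhr, token) {
  xhr.setRequestHeader("X-App-Token", token);
}

// Server side (sketch): reject any request lacking the expected token.
// Header names are compared lower-cased, as most frameworks normalise them.
function checkToken(headers, expected) {
  return headers["x-app-token"] === expected;
}
```

The real value of a custom header is that a plain cross-site form or image cannot set one at all, so it filters out the lazy attacks even though a dedicated client can still fake it.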
Sequence Numbering, kinda…
A possible way of securing one's application is to use some form of 'sequence-numbering'-like scheme.
Roughly, this boils down to the following.
One lets the page, or some included JavaScript generated on the server side, contain a token; the client performs some operation on it, and the result is used in the next request to the webserver. The webserver should not accept any request carrying another 'sequence number', so to speak.
The server's 'challenge string' should be as random as possible in order to make it unpredictable: if one could guess what the next sequence number will be, the interface is again wide open to abuse.
There are probably other ways of hardening interfaces like this, but they all basically come down to keeping some secret obtained from the webserver as far out of the end-user's reach as possible.
You can implement this as elaborately as you want, but it can be implemented very simply as well.
For instance, when I, as a logged-in user of a web-enabled email application, get assigned a Session-ID and such, the page my browser receives includes a variable iSeq containing an unpredictable number. When I click "Delete This Message", this number is transmitted along with the rest of the parameters. The server can then respond with new data and, hidden in the cookies or some other HTTP header field, pass along the next sequence number, the only one the web-server will accept as a valid request.
As far as I know, this seems the only way of securing it. It can still be abused if spyware sniffs HTTP communications, which spyware has recently started doing.
Javascript Insertion
On a side note, I wanted to throw in a remark on JavaScript insertion. This is an old security violation, not really restricted to AJAX, and not an attack on AJAX itself. Rather, it is an attack utilising the XmlHttpRequest object for malice.
If I were able to insert JavaScript into the web application I am currently looking at in my other browser window, I could easily delete any post the site allows me to delete. Now, that doesn't seem all that destructive, as it only affects that one user? Wrong: any user who visits will have his own posts deleted. Ouch.
Javascript insertion has been a nasty one for years and it still is when people throw their home-brew stuff into production.
On a weakly implemented forum or web journal, one could even post new messages, including the JavaScript, so that any visitor with the proper permission would re-post the message, keeping the flood of spam coming.
As these technologies keep developing, lazy website developers do not always update their websites to keep up with the changes. The 'AJAX enhancements' that some sites got recently might have been improperly implemented. This might be a good time to check all those old web-applications for possible JavaScript insertion tricks.
If you didn't mind the cookies getting caught, the sudden deletion of random items and/or public embarrassment might be something to entice you to verify your code.
