Tech lead of pdf.js here: All of the above exploits were issues with extension code in firefox, i.e. other extensions could have these issues too. If you were to use the web only version of pdf.js none of these exploits would apply.
For comparison, NIST NVD lists 445 CVEs for Acrobat, or at least 17 per year since introduction. However CVEs haven't been maintained since the early 90s, so that number should be much higher. I think pdf.js does just fine.
pdf.js does a lot less, of course. Really you should compare Firefox to Acrobat, as they are both rich media rendering apps with a lot of functionality.
One of the points of something like pdf.js is that in most cases you don't need all that extra fluff. You just want to look at some PDF. So doing less is exactly what allows pdf.js to be (more) secure.
The wording was confusing for me too. At first reading I understood it as saying CVEs were no longer being issued for Acrobat, which definitely isn't the case. I assume the intended meaning was that Acrobat was first released in 1993[0], but the first CVE was CVE-1999-0001 (source: downloaded the raw dump from [1], ran grep -m1 CVE-....-0001).
But, I'm doubtful there would have been all that many CVEs issued for Acrobat from 1993-1998. There was only one CVE that mentioned "Acrobat" each year from 1999-2001, and three in 2002. The more recent years are the fun ones - but I have no idea whether that's a result of freshly-introduced exploitable bugs or just increased attention.
I don't even want my browser to have a 'local file context', is there a way to switch such behavior off entirely until explicit permission is given?
All these extra bells and whistles added to browsers to allow websites to pretend they're 'native apps' should require a very large switch to be thrown from 'safe' to 'unsafe' whenever an application requests such a thing. And what a pdf reader has to do with javascript is a mystery as well. Systems that are too complex are almost by definition insecure.
Oh great. What could possibly go wrong? Give javascript access to local storage through some 'hard to trigger' gate. That's just asking for it. Hindsight and all that, but still, this is not a good idea. A browser should not let its own internal language, sandboxed for the web, have access to the local system through some loophole. It's only a matter of time before such a loophole becomes an exploit. I wonder if javascript has access to devices such as cameras and microphones through similar loopholes. That would be a bit of a problem.
You haven't the slightest understanding of software security. PDF.js was written to replace a component authored in a memory-unsafe language, for which exploits were being found at a rate measured in tens per year. Since its introduction PDF.js has only had 2 holes that were directly exploitable, neither leading to remote code execution, which was the default behaviour for pretty much any bug found in Acrobat.
If you don't want a browser that has some notion of "local file context" you should just sell your laptop and go live in a cave. FWIW the entire Firefox UI and every plugin for it is _written_ in Javascript served from local disk. Chrome, Safari and IE aren't far behind.
> If you don't want a browser that has some notion of "local file context" you should just sell your laptop and go live in a cave.
this is kind of a silly statement. nobody would argue a program shouldn't be able to access local files, in this case, we would presume, PDF content that's been downloaded into a cache. the very simple argument is that the code which deals with opening and reading files from disk should be completely isolated from scripting-language code that runs dynamically in the same object space as the front-end scripting environment. e.g., put the .js in a sandbox by default the way we used to take for granted.
I understand that in the Mozilla suite, this barn door was left open years ago and the horses are far and wide by now.
That's not true. There have been PDF.js exploits that lead straight to RCE. This has the additional downside of leading to immediate compromise on every platform.
If large amounts of code written in memory unsafe languages is such a concern then Mozilla should immediately stop adding large numbers of highly complex new features implemented in unsafe code to Firefox every year, mostly to do things that have absolutely nothing to do with displaying web pages but are enabled by default for political reasons.
Just like switching to PDF.js was a decision taken to try and reduce the security attack surface, the decisions to add webgl, webrtc, webfonts, webm, websockets, new css features and so on were all decisions taken in the full knowledge that adding those things would vastly increase the attack surface and inevitably lead to security exploits. These new web features are responsible for a slew of new vulnerabilities and new classes of information leaks.
> (a) If large amounts of code written in memory unsafe languages is such a concern then Mozilla should immediately stop adding large numbers of highly complex new features implemented in unsafe code to Firefox every year,
> (b) ... mostly to do things that have absolutely nothing to do with displaying web pages but are enabled by default for political reasons.
(a) Mozilla is working on adding/replacing parts of Firefox with a language emphasizing security (among other things). The first Rust push in Firefox landed a Rust mp4 parser [1], on 2015-06-17. Others will come; in the meantime, the world keeps turning, and users / web developers expect these new web features, which Moz devs implement with the infrastructure they have and know. They're not going to sit on their hands and declare a moratorium until Rust (or other security-mitigating features/changes) are fully integrated.
(b) Not sure what you mean by political reasons and maybe you want to stay stuck in 1992, but I don't, and like many users I do want "webgl, webrtc, webfonts, webm, websockets, new css features and so on" .
EDIT I'd have added "You can install links if you want a simple browser letting you read static html documents", which you would have answered with "But I can't, every website requires these features now", to which I'd have answered "a. Yeah, not everyone (that's an understatement) does progressive enhancement, but ultimately b. The times they are a-changing"
It doesn't help your position that you are unable to express it without belittling anybody who disagrees with you, using stuff like "stay stuck in 1992" ("If you don't like America you should go to Russia!").
Also, the links/lynx jokes have really gotten tired, plenty of people browse the web with ublock, no(t)script, webgl and webrtc disabled and so on.
The pretense that anybody who tries to retain a modicum of control over what their browser does and does not do is a Luddite is frankly irritating.
And the whole language debate is completely off point, we have plenty of safe(r) languages for writing stuff, the misguided idea is that the only way to do so is to use javascript and stick the resulting program inside the browser.
> It doesn't help your position that you are unable to express it without belittling anybody who disagrees with you, using stuff like "stay stuck in 1992" ("If you don't like America you should go to Russia!").
True, that was useless, could have just said "I and many users do want these features" . Thanks, and sorry anon.
> the links/lynx jokes have really gotten tired, plenty of people browse the web with ublock, no(t)script, webgl and webrtc disabled and so on. The pretense that anybody who tries to retain a modicum of control over what their browser does and does not do is a Luddite is frankly irritating.
That wasn't a links joke; I could have phrased it with your own words, "You can install ublock, no(t)script, and disable webgl/webrtc if you want a simple browser letting you read static html documents", and "But I can't, every website requires these features now" would still be an answer.
My conclusion isn't that "anyone trying to retain a modicum of control over what their browser does and does not do is a Luddite" (and I do use some of these extensions too), it's that the barebones web experience anon wants is broken now (and probably forever), due to:
a. Sadly, non-respect of progressive enhancement in cases where it's possible (documents).
b. The fact that _some_ parts of the web are increasingly not documents, but whole apps whose progressive-enhancement baseline (running without all the bells and whistles) would do nothing because they depend on these features.
> And the whole language debate is completely off point, we have plenty of safe(r) languages for writing stuff, the misguided idea is that the only way to do so is to use javascript and stick the resulting program inside the browser.
Yes. Development practices, testing, fuzzing, and safe(r) languages, like Rust.
Ok, I guess that I misread the tone of your post (i.e. we largely agree).
I don't, however, think that the web is that broken without those features (javascript being the hardest one to police).
Judging from the browsing habits of my family members, they don't spend nearly as much time inside web applications as the HN news cycle would lead me to believe: some news sites, some webmail (and even there, when presented with a decent looking mail application they happily switched), the most basic functions of facebook, and "utilities" i.e. web banking, traveling, university websites.
None of these uses requires the ability to play quake3 inside firefox, or is really an application inside a webpage. Same probably goes for all the browsers in the workplace, for instance.
I'll agree with you that few sites will do progressive enhancement (and decent accessibility); I'm just disappointed in the defeatist attitude of browser vendors and expert users: the idea of having a browser safe mode that you can lock down doesn't strike me as such an impossibility, and it would give some incentive to developers to get their act together.
> None of these uses requires the ability to play quake3 inside firefox, or is really an application inside a webpage.
Maybe, for now. But WebRTC/WebSockets have a value proposition for real-time interaction in collaborative office suites. Canvas/WebGL have one for performance in authoring tools and for article illustrations. Documents are readable in your default serif/sans-serif set, but WebFonts are a good designer/author tool just like fonts are in print. Etc. Renouncing this added value because each new feature increases the attack surface sounds like throwing the baby out with the bathwater.
> I'm just disappointed in the defeatist attitude of browser vendors and expert users: the idea of having a browser safe mode that you can lock down doesn't strike me as such an impossibility, and it would give some incentive to developers to get their act together.
Two thoughts:
1. Such a "Safe mode" disabling features presents high risks of breaking tons of sites, leaving non-expert users in the dark, and these users are the most likely to be clueless about what's wrong and may just switch to another browser.
In the case of JavaScript, Firefox is actually going the opposite way of what you want, by making it harder to disable it [1]. The closest to your wish with Firefox is probably to use their LTS version, ESR, where the dust settled for a little while more (but which ironically, was affected by today's exploit ^^).
2. Can what you are proposing be a "mode"? Take the "Reader View" mode of recent Firefox builds, proposing a Readability-like mode streamlining long reads: this one is clearly a _mode_, you click on it, the text turns big, the page gets sepia, side content disappears; you know you're in it and you're not going to constantly browse with it. But would you alternate between "default" mode and "Safe" mode? What a terrible choice to make: you would certainly stay in "Safe" mode, and at that point it stops being a mode and the browser is simply, constantly altering content, deepening the cluelessness of non-expert users in case of breakage.
2.1. EDIT this reminds me a lot of Polaris tracking protection [2], a project/feature of recent Firefox builds to block http requests of trackers, for privacy. I use the feature, and even I, a moderately "expert" user, was left puzzled when it blocked all the images in an article (can't recover the link; it was a Russian article/domain by a photographer exploring the remnants of a military space shuttle launch site). Anyway, Polaris had the images' domain in its blacklist and blocked them. Glancing at the console, I saw Polaris blocking them and disabled it for the time of a page refresh. But how to handle this simply for non-expert users? This is tough to implement, and directly opposes the "don't break userland" equivalent of the web.
1. Yes, you have highlighted a source of frustration: currently, to limit certain features one must either install half a dozen extensions on chromium or firefox, or stick to ESR versions of firefox, or gtkwebkit browsers (which I'm afraid do lag behind the apple upstream when it comes to security fixes). Hopefully with CEF and servo, swapping one engine for another will be easier, so the situation may improve a bit.
In an ideal world, this would be what standards are for: all the browsers agree on a set of minimum features, and security-conscious users or administrators can decide to stick to that (I have no clue whether other browser vendors would be interested).
This would break websites in a predictable manner. After all, sooner or later browser vendors will probably decide to break all tls-less websites.
Some websites would be broken, but for people using a screen reader the web is already broken, and at least they would have a clear metric to point at when dealing with banks/news sites/institutions: if it breaks firefox/chrome/safari/edge safe mode, the webdesigner is doing something wrong.
Similarly, the limits imposed by organizations would help: if you are an enterprise website you must render correctly in this mode. I'm convinced that administrators enforcing a "no IE policy" in the workplace did help move us away from a world in which frontpage's HTML was acceptable.
My parents and users of enterprise workstations don't have browser choice anyway: they cannot install software.
2. Sure, the problem with modes is the problem with UAC: you end up asking permission so often that you devalue the role of permissions, or you require the user to constantly check the current status of the application (e.g. the lock icon for SSL), which most users won't do.
Polaris probably suffers from similar problems, as all "restrictive" extensions do.
I'll admit that my solution is squarely aimed at users that cannot switch browser (or cannot switch browser mode), similarly to the gatekeeper role of apple on the iphone, except that it gives the power to switch to administrators/technically advanced users, which apple does not.
>Mozilla is working on adding/replacing parts of Firefox with a language emphasizing security (among other things).
The safety of the implementation language is far from the only concern when considering the security impact of modern browser features. The recent WebRTC issues are well documented, as was the HSTS 'supercookies' issue. Even something seemingly fairly innocuous like css keyframe animation can be used to do remote timing attacks without js to leak browser state such as browsing history[1]. SVG filters in Firefox allowed information to be read from arbitrary pages through timing attacks, till they removed some of the optimisations[2]. Those kinds of things are not solvable with a safer language (in some cases that probably makes fixing timing attacks more difficult/impossible). I'm sure there are more of these kinds of things to be found. Some of them are realistically never going to be fixed now, because they are baked into the standards and the browser vendors clearly care more about animating gizmos and not breaking existing sites than about leaking users' browser state.
>I'd have added "You can install links if you want a simple browser letting you read static html documents", which you would have answered with "But I can't, every website requires these features now", to which I'd have answered "a. Yeah, not everyone (that's an understatement) does progressive enhancement, but ultimately b. The times they are a-changing"
I'm not concerned about myself. I disable stuff like WebGL that I don't use, and I block most Javascript etc etc. My concern is for the average user who has absolutely no idea these features even exist, never mind knowing which ones they can turn off without breaking the sites they use. The general insecurity of the web affects me (and everybody else). When a site gets hacked because one of the admins was exploited by a browser vulnerability and my details get leaked that affects me.
> The safety of the implementation language is far from the only concern when considering the security impact of modern browser features. [...] Those kinds of things are not solvable with a safer language (in some cases that probably makes fixing timing attacks more difficult/impossible). I'm sure there are more of these kinds of things to be found. Some of them are realistically never going to be fixed now, because they are baked into the standards and the browser vendors clearly care more about animating gizmos and not breaking existing sites than about leaking users' browser state.
Good points, I didn't know the SVG fix had taken so long. Rust (which, as you say, is no silver bullet) is one data point showing Mozilla's commitment to security, but the variance in the time to fixing exploits is worth consideration. Today's exploit was fixed in one day, SVG took 18 months. Why? Did Moz do a good job at prioritizing based on the severity / availability of exploits in the wild, or was the long time to the SVG fix just caused by technical difficulties? I don't know, maybe a mozillian involved can comment.
> If you don't want a browser that has some notion of "local file context" you should just sell your laptop and go live in a cave.
Thank you for your constructive advice.
And I note that so far my stuff written in 'memory unsafe languages' has been in production since '99 or so without a compromise to date, over hundreds of billions of requests.
Maybe it's not just the language.
And what business does a browser have with a .pdf file anyway, where does that end? excel sheets? word documents? proprietary format 'x'? Web browsers should stick to web browsing or at least have a mode where they will stick to just web browsing.
> And what business does a browser have with a .pdf file anyway, where does that end? excel sheets? word documents? proprietary format 'x'? Web browsers should stick to web browsing or at least have a mode where they will stick to just web browsing.
Displaying arbitrary media content is web browsing; the web is an interconnected network of servers providing hypermedia content that is self-describing as to content type so that clients (like browsers) can appropriately choose how to handle content based on its type.
It's true that early web browsers only handled HTML, plain text, and a few image formats internally, and relied on external software to handle all other media, but all of that (including the parts for which they relied on external software) is part of "web browsing".
Sure, and if I install some plug-in to deal with a proprietary format that's my own doing and risk. But by default a browser should stick to a sensible subset otherwise we might as well author our web-pages in .pdf format instead of HTML.
Anyway, I've already been called grumpy and being told to sell my laptop and go live in a cave so I'll give HN a miss for the next couple of days or so.
You know, they would try to render Word documents and Excel sheets if they could, if the formats weren't quite as Lovecraftian to display properly as they are.
I bet they'd even try to render .PSD files.
So maybe PDF just hits that bad spot of being not quite too arcane to implement, yet still crazy enough to be a gigantic attack surface.
With external pdf viewers, the browser asked each time before downloading and showing a pdf. Now it displays pdfs by default. That's why an exploit could sneak through as an advertisement. It couldn't have done that before.
You're getting old and grumpy, Jacques, before you know it you will start your sentences with "back in the day..." :)
On a more serious note, I guess this is the toll we have to pay for innovation pushing. I can understand the reasoning behind writing everything in JS: it allows you to consolidate a lot of mechanisms in a single platform. Once you have that platform secure, any application you will write will (should?) be secure too.
Too bad that theory and practice are usually not the same, in practice..
What innovation and what benefits do I reap by using pdf.js? It's slower and has fewer features than okular. It's stuck inside a firefox window, so I cannot add a window rule for it (barring adding one for firefox in general).
The same holds on windows: why would I use pdf.js when there are faster, lighter pdf readers (e.g. sumatra) or the actual adobe acrobat reader and its eight billion features?
Heck, I've also noticed that many users will skim the file and then forget to save it, so it doesn't even help less tech-savvy users.
There are genuine improvements in the new web technologies, but they are mixed with a lot of stuff that simply does not belong there, and with the insufferable attitude "you can do it in javascript, hence you should do it in javascript" (I'm not criticizing you, eh, and there is some security argument to be made).
Everyone is punished for Windows' Adobe Reader. I never got it either. PDF is not a web format. I would not want to read doc files in my browser either. Evince(-light) starts up in milliseconds.
> I would not want to read doc files in my browser either.
This doesn't make sense to me. Why should the viewer care about the implementation details of a document? If I click on a link to a document, I want to see the result in the browser, and I think that that's the correct default. Only if I'm clicking on something which produces something that isn't intended to be a document (an archive, for example) does opening another program make sense as the default.
I agree with you that this innovation isn't a particularly good one. However, a single platform in a language that allows you to develop and test rapidly (which, arguably, javascript is) is a consequence of the ever-increasing push for innovation, which I can understand.
In addition to that, I am very glad that Chrome and Firefox ship with their own PDF readers and I don't have to deal with Adobe anymore to read a portable document format.
The benefit I guess is that it acts like any other webpage. If I click on a link to a pdf, that pdf replaces the page I was looking at. If I click on a link in the pdf, the target of the link replaces the current page. I can have them open in Firefox tabs just like every other thing I look at online.
printing from pdf.js in linux is a bit of a headache as well, compared to okular or evince. Usually takes about 10 times as long (no joke) to print a pdf from inside firefox.
That's if it works at all! I've found that pdf.js fails to print entirely when the document is sufficiently large. For example, when printing a scanned white paper from 20+ years ago.
I have mupdf in firefox (iceweasel) using mozplugger. I could always set it up to not display pdfs and only download them, use mupdf through mozplugger, or use the builtin pdf.js viewer. Having said that, I'm not sure that mupdf is safer than pdf.js, but it's much faster.
I was debating the merit of having the pdf reader bundled and on by default, instead of having the pdf file downloaded like most other files.
One can certainly change firefox's behaviour in the settings (unlike webgl).
I disagree that you are the arbiter of defining what innovation is, and the rest of your comment is similarly far outside of the bounds of the applicability of your opinion.
I'm aware of those. It's the 'inband/out-of-band' problem of old rearing its ugly head again, if you mix code/control and data in one stream it's asking for trouble.
Actually, I'm pretty sure that's used to let you read webpages stored on file://. That's a feature that has been present in browsers since ~1993. I don't think you can deactivate it.
> And what a pdf reader has to do with javascript is a mystery as well.
It's a pdf reader written in JavaScript, just as there are other pdf readers written in other programming languages.
In that case, is the issue actually specific to pdf.js? If it's written entirely in javascript, could this not be exploited some other way? Or does pdf.js have special permissions in this context?
Every major browser has a built-in PDF viewer (except maybe Safari? I don't own a mac). Mozilla's is the only one that's written in JS. The rest are proprietary native blobs. There have been security vulnerabilities (significant ones) found in the native PDF viewer blobs before, so given the choice between a same-origin policy breakage in pdf.js and an exploit in a native PDF viewer that can own my entire machine, I'll take pdf.js any day.
Maybe it would be better if browsers didn't have a pdf viewer, though. Then I'd have to manually download it and open it in a viewer to get owned, which is not going to happen with an ad network.
The fact that every major pdf viewer app tries to install a browser plugin doesn't help.
Being well-tested from the perspective of regular use is not quite the same as being well-tested from the perspective of defending against a hostile actor.
>Maybe it would be better if browsers didn't have a pdf viewer
I think the main reason Chrome bothered making one was because Adobe's reader kept having security vulnerabilities. I trust Chrome's one more than Adobe's. (I just looked up cvedetails.com for Adobe Acrobat Reader: total number of vulnerabilities: 434.)
But choosing a different default PDF viewer does not necessarily “disable” pdf.js, or does it? Does the exploit just show a malicious PDF file and then relies on Firefox using pdf.js to display that file, or does it somehow “request” pdf.js be used?
Run the browser in a container or in a sandboxed environment (kind of like chroot, but note that chroot itself should not be used for security purposes). There may be docker containers with just firefox; if not, it's easy to create one. Or use vmware, though that is much more heavyweight than sandboxing or containers.
Why, a docker-initiated chroot would prevent FF from accessing any files beyond FF and its libs, including the ones this malware steals; on Qubes it would have access to everything user-readable in the AppVM, which may include some secrets, as the Qubes workflow involves user-supervised copying of files across VMs.
Obviously docker, as opposed to Qubes, won't stop more complex malware that exploits the kernel.
From all I have heard, docker is not even secure enough to let user A do things in dockerthingyA and user B in dockerthingyB. From what I was told, user A could easily break out into dockerthingyB and maybe even the host. Are you sure it really is not possible short of exploiting the kernel (or docker I guess)?
I don't know the details, but this is rather due to how docker operates -- there is a daemon that runs with root privs (which are essential to create a container) controlled by a client over a protocol that has no concept of fine-grained access lists. Consequently, user A can do anything with user B's containers, because docker doesn't even have such a thing as container ownership. Also, the docker protocol includes something which is basically opening a shell as root, thus users with docker access effectively also have passwordless sudo. All those choices are basically ok for docker because it is designed for single-user systems like developer laptops or application servers.
Currently, for multi-user systems the only safe option for containers is sadly virtualisation or emulation; a nice implementation of rootless chroot is proot, http://proot.me/
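A minimal invocation looks something like this (assuming you have already prepared a guest rootfs directory, e.g. with debootstrap; the path is just an example):

    # run a shell inside ./rootfs without any root privileges
    proot -r ./rootfs /bin/sh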
I've run Firefox in a Red Hat/Fedora SELinux sandbox [1] [2] for the past 5 years or so. It is a little more tedious for things such as file uploads/downloads and cut-and-paste -- but worth it, IMHO.
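The basic invocation is roughly like this (the -t type and the home/tmp flags are from memory; check the sandbox man page for the exact options on your release):

    # run firefox under the sandbox_web_t SELinux type, inside a nested X server,
    # with dedicated home and tmp directories
    sandbox -X -t sandbox_web_t -H ~/sandbox/home -T ~/sandbox/tmp firefox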
I organise my files, I don't put everything into the same directory. I save them all over my file system. Same for uploads, I do not put them into one directory prior to uploading.
I think a better method is what Apple has done in OS X. When the app needs to read from or write to a user specified file, the app calls a specific API that presents a file picker dialog. The file picker dialog is running in a separate process from the sandboxed app, and the app will temporarily be granted permissions to access this particular chosen file through this API.
If you have a directory that you want to expose, you can set that up. It doesn't have to be just ~/Downloads/Firefox. If you want to expose something like ~/Documents but deny access to ~/Documents/Private you can do that. With a little effort, you can probably even configure a helper utility that toggles access on and off dynamically with a status charm in the notifications area.
That sounds incredibly cumbersome, akin to things like umatrix or noscript (which I use but 99% of users would never touch or be able to correctly control).
and get instant process isolation and protection from these kinds of exploits.
Heck, mozilla could use the same underlying mechanisms internally (cgroups, namespaces) that docker already uses, without introducing the dependency on docker (if that's what's bothering you). So while the implementation may not be ideal (installing docker is an overhead, I acknowledge that), what it does technology-wise is an improvement for security.
The new Firejail app [1] may be worth exploring as it is designed to run locally installed apps like browsers and games in sandboxes, as opposed to more chroot oriented container managers like LXC, Docker or Nspawn. They all use namespaces.
If you want to use a chroot-oriented container manager, it's better to use an unprivileged container so you are not running as root. Currently only LXC has support for unprivileged containers. We have an experimental GUI app container with Chrome that can be used in unprivileged mode. [2]
You can even run your own sandbox with a simple command like this: 'unshare -fp --mount-proc'. That gives you a bash shell in its own pid space. You can expand this command further to use more namespaces like mount, net, user to get yourself a sandbox, as in the sketch below.
That is what apps like firejail and container managers are using, but it's useful to know what's happening underneath. We are currently working on a guide on how to use unshare that may help. [3]
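For example, something along these lines (as an unprivileged user this needs a kernel with user namespaces enabled; on older setups run it as root and drop the user-namespace flags):

    # new user, pid, mount and network namespaces; /proc is remounted for the new pid ns
    unshare --user --map-root-user --fork --pid --mount-proc --mount --net bash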
Actually, the nature of this vulnerability would prevent any attacks against docker:
>> The vulnerability does not enable the execution of arbitrary code but the exploit was able to inject a JavaScript payload into the local file context. This allowed it to search for and upload potentially sensitive local files.
Moreover, since it seems you believe that javascript is a liability in this case (as much as I loathe the language: it's not!), be aware that you still need to interpret javascript to read all of the pdfs you can find:
I created a docker container with firefox, but every time I launched it my network connection blipped. It wasn't a very good experience, although we are talking about me using docker 1.3 when 1.7 is now out.
So Docker running your firefox probably won't be as secure as doing it in a vm, but it will start pretty much instantly on your desktop where your vm won't, and it will be more secure than just running it natively.
Most tutorials show sharing an X11 server, but that's not a great security solution, as X11 is totally insecure. But I am working on this with my project subuser. See http://subuser.org/news/0.3.html
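To be concrete, the pattern those tutorials show is roughly the following (image name hypothetical); note that it hands the container a direct line to your X server, which is exactly the weakness mentioned above:

    # share the host X socket with the container (tutorials usually also relax X access
    # control via xhost or a mounted Xauthority file); any client of your X server can
    # snoop on keystrokes of other X clients
    docker run --rm -it \
        -e DISPLAY=$DISPLAY \
        -v /tmp/.X11-unix:/tmp/.X11-unix \
        someone/firefox firefox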
The lack of additional detail in this very sparse announcement really compromises users' ability to do damage control effectively.
Would like to know if an installation is vulnerable if:
1) Applications, PDF is set to "Always ask"
2) Ublock and/or privoxy are used
3) Javascript is disabled
4) pdfjs.previousHandler.alwaysAskBeforeHandling == false
5) pdfjs.disabled == true
Also which advertising network and which Russian site would be helpful for blocklists.
I reported this 0-day. It used a PDF.JS same origin policy violation to access local files. You should be safe because you have javascript disabled and pdfjs.disabled set to true. There's no way for the script to run. It was on an international news website operating from Russia. The exploit was not on an ad network. The exploit was simply injected on every news article page through an iframe. Therefore I assume the news site was compromised. It could have been deliberately injected by the website operators, but I highly doubt it. The exploit targeted developers or tech-savvy people. On Linux, it targeted the contents of the ~/.ssh directory and some other sensitive files. I should say that I am not a security expert and I came across this 0-day by accident.
No it was not. I'm not sure if I should mention which website it was (yet). The exploit is still active. I am trying to get in touch with them to get it removed.
> The exploit was simply injected on every news article page through an iframe
Was the "src" of the iframe 3rd-party to the web site? I want to know whether merely blocking 3rd-party iframes would also have prevented the exploit from working even if javascript is not blocked.
Agreed, I use an ad blocker and have Firefox's PDF viewer disabled and I have no clue if I'm still vulnerable. At a minimum, I'd like to know if disabling the viewer is enough to mitigate the risk, or if popular add-ons like Adblock Plus, NoScript, or Privacy Badger are enough.
Totally agreed. I use a few of those, and I have exempted pdf.js in the past because I would rather use that than native PDF readers on my work laptop, since Adobe Reader/Acrobat is a wonderfully famous vector.
Once again, this demonstrates that blocking advertisements is a really good idea from an InfoSec perspective. Ad blocking not only abates a nuisance, it's an important security measure.
In this case, it was disguised as an advertisement but it was not running on an advertisement server. My adblocker did not catch it. It was injected into the page as an iframe twice. Once disguised as an ad ([IP-address]/ad.php), another time with just the IP-address of the server. I guess it was included a second time in case an adblocker caught the first one. Because it doesn't make sense to include the same exploit twice, unless I am missing something?
The script triggered a file dialog showing it was trying to access a local file. I opened the Developer Tools and saw all kinds of other files being accessed, including my private and public keys. I nearly got a heart attack. I quickly revoked all SSH keys and started monitoring the requests to narrow it down before I submitted the bug ticket with all the information I had, including the exploit script that was executed.
Update:
I played around with the exploit some more to find out what exactly triggered the file dialog. Turns out my OS (Ubuntu 15.04) actually saved me.
When you try to open a file with Firefox it will first try to map the file to a mimetype using the ExternalHelperAppService (https://developer.mozilla.org/en-US/docs/How_Mozilla_determi...). In case a mimetype is found, a file dialog is shown so you can open the file with the right application; in case it is not, the contents of the file will be displayed in the browser. In this case my OS provided the ExternalHelperAppService with a mimetype for one of my public keys with the .pub file extension: application/vnd.ms-publisher. Of course that's not the correct mimetype for the public key file, but that's basically what saved me, by showing a file dialog because it found a mimetype. All other files had no file extension so no mimetype was found.
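If you are curious what mimetype your own desktop maps a given file to, you can query the shared-mime-info database directly; this is not necessarily the exact lookup Firefox performs, but it is the same data most Linux desktops feed it:

    # ask the desktop mime database what type it assigns to a file
    xdg-mime query filetype ~/.ssh/id_rsa.pub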
I also discovered that my private keys were all encrypted with a passphrase so even though they have been compromised it was not as bad as I initially believed.
By that logic it's more like an argument for disabling JS entirely - there is nothing about this that's specific to ads, and the reporter has speculated that it was placed by an attacker and only disguised as an ad.
Not executing any JS is safer, sure, but that's beside the point. If you strive for absolute security, power off your computer and never touch it again. This is about what you can do to improve the situation without impairing usability.
An adblocker doesn't impact usability (in most cases, it improves it significantly, through lower page load times and less space occupied by non-content), but prevents the vast majority of malvertising. Blocking all Javascript blocks all of them, but makes the modern web nearly unusable.
Unfortunately, an adblocker impacts the income of site owners.
Otherwise, I would have been using these programs for a long time already, but my conscience does not allow it.
Well, they can ask my conscience to not run an adblocker because otherwise it impacts their income. If it was just that.
But they cannot ask my conscience to open myself up to security issues because otherwise it impacts their income.
(note that I have read the rest of the thread and am aware that simply running an adblocker wouldn't have prevented this exploit)
(second note/disclaimer is that I do run µBlock, for the personal reason that I feel they also cannot ask my conscience to open my attention to energy-draining distractions because otherwise it impacts their income)
I don't want to get into a discussion about ad-based business models and the morality of it. For me, the trade-off definitely favours security. I also just can't concentrate when the page is littered with flashing ads. Thus for me, the alternative to adblockers is not seeing ads; it's not visiting the sites, because I'm not willing to put up with that for content that very likely isn't worth the ad bombardment.
I do block JS by default. If it's a site that won't render something readable without JS, I usually just move on. If it's one that I really need to interact with I'll enable it for that site, which does open some risk, but this approach generally makes drive-by exploits less likely.
I find the first sentence fascinating, "Yesterday morning, August 5, a Firefox user informed us...".
I'd love to know more about this person and their skill set. How was the exploit detected and isolated? How did this issue get reported and resolved in a day?
Assuming the Mozilla way, I wonder what the bugzilla report will read when it comes out of embargo.
It's me. I discovered the exploit in the wild when I became a victim of it. Skill-set limited. I was able to identify it and understand what it basically does, but not much more.
Modest too, "The script triggered a file dialog showing it was trying to access a local file. I opened the Developer Tools and saw all kinds of other files being accessed, including my private and public keys. I nearly got a heart attack. I quickly revoked all SSH keys and started monitoring the requests to narrow it down before I submitted the bug ticket with all the information I had, including the exploit script that was executed."
Wow, lucky that it triggered a prompt. Thanks for the response!
If at all possible it would be worth naming and shaming the advertising network that is allowing this exploit through.
Why do advertising networks allow advertisers to execute Javascript? What need is there for it?
Every time one of these exploits that use advertising networks is found, it just increases the value of blockers such as uBlock. Whether you accept adverts or not, you shouldn't have to accept javascript being executed on your machine that isn't from the site you visited.
The networks themselves rely almost exclusively on javascript nowadays so the websites have little choice, the ad networks then in turn pass some or all of this trust to whoever makes the creatives, which up until recently were quite frequently done in flash and are now sometimes in javascript.
Personally I think all ads should be served up in a totally passive visual format (png, jpeg, gif) and have no other attributes than a non-javascript link target. That would take care of almost all drive-by injection. But ad networks serve up what their customers want, and their customers want interactive ads because the click-through rates are higher and because otherwise the competition would be doing it and they'd go out of business.
Ad networks that do serve up javascript should at a minimum pull the script to their own server and audit the code of the script. Good luck with that though.
Fortunately it's easy enough to install an ad blocker and get rid of that part of the problem entirely but it would be nice if users without an ad blocker wouldn't have to worry about this.
I agree. It's actually the animation of the adverts that I find most distracting. Text, and/or a static image - not an animated gif would be fine. I would enable ad networks that could guarantee that is all they will serve up.
> naming and shaming the advertising network that is allowing this exploit through
The person who found and reported the exploit said this particular exploit did not originate from an ad server[1].
I have always argued that, without disabling javascript, merely disabling 3rd-party iframe tags is a good first move[2]: significantly less breakage than disabling javascript, yet it effectively steps up security/privacy protection.
In the current case, the person who found it confirmed that just blocking 3rd-party frame tags would have foiled the exploit.[3]
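For uBlock users, my understanding is that a single global dynamic-filtering rule, added under "My rules", does this (double-check the syntax against the uBlock documentation):

    * * 3p-frame block

You can then add per-site noop rules for the few sites where embedded third-party frames are actually wanted.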
One of the goals of ad networks is to fill as much capacity as possible before relinquishing control to the website owner. The website owner then passes that unused capacity to a chain of competing networks. The last in the chain is usually a poor quality remnant network with junk ads.
The way an ad network fills capacity is by allowing other ad networks to be their advertisers. Those ad networks buy the crappy traffic and fill it with junk ads.
It's those crappy ads that look bad and may have scams attached to them - they get passed around so much that they can get lost in the system.
That said, premium campaigns can also have bad ads. Like advertisers pretending to be premium clients but under the right conditions (like geolocation, date, time, viewing host) the ads will turn bad. It's a game of cat and mouse, and those ad networks are more geared for sales.
According to user fukusa it was not an advertising network; rather, it looks like the site was compromised to run the script, and it was disguised as an ad.
I believe using "about:config" and setting "pdfjs.disabled" to "true" will neutralize the vulnerability, at least from the description they gave of it, but confirmation from them to that effect would be appreciated, especially for users stuck on the current (or older) version, as the download page acknowledges some might be:
Note: If you use your Linux distribution's packaged version of Firefox, you will need to wait for an updated package to be released to its package repository
It would be particularly scandalous if they knew that disabling pdfjs would suffice yet refused to mention it because they couldn't bear to see their precious CPU/memory-hogging scribd knockoff no one asked for being disabled by their users, in effect putting their grandiose vision of the browser-as-OS ahead of their users' security.
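For those stuck on an older build in the meantime, the same pref can also be set from a user.js file in the profile directory rather than flipping it in about:config; a minimal sketch:

    user_pref("pdfjs.disabled", true);

That only disables the built-in viewer; it obviously doesn't patch the underlying bug.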
Some more details would be helpful here. Specifically:
1. If PDF files aren't set to open using Firefox's built-in PDF viewer, was the relevant system still vulnerable? (That is, if under Options->Applications, PDFs were set to something other than "Preview in Firefox", would this attack still work?)
2. Which were the 8 popular FTP clients potentially affected?
3. Was this specific case all that could be done or was it an example of a wider class of potential exploits? (That is, can we actually trust any sensitive credentials in any applications on any system that has been running Firefox before today? And could we have disclosed other sensitive information that was held in well known local files?)
I do deal with sensitive details, and have access to lots of external systems run by various clients. If there is a real danger here then I need to act. If there isn't, then I would prefer not to spend the next 1-2 days of my time updating everything that could have been silently compromised instead of doing revenue-generating work, and worse, contacting every client I work with to notify them that their security may have been compromised and it's my responsibility.
I'm roughly in the same boat as you, but what I don't get is: if your work is that sensitive, then why don't you run with at least ghostery, umatrix and adblock on your machine?
The last thing I need is to have to contact a customer to tell them their data might have escaped my desktop computer because I took my browser to some unsafe site.
I do run with the browser locked up tight. It runs basically no plug-ins by default, and I have multiple privacy and blocker plug-ins active. I also have a complete log of every piece of software and update to it that has been manually/voluntarily installed in the entire lifetime of every affected system.
What I don't know right now is whether any of that actually helps me in this case.
I really recommend you use a VM for browsing, or even a physically separate computer from the one that you keep your sensitive stuff on. It's a pain but it's a lot better than any of the alternative scenarios.
In principle I agree, but unfortunately running in a VM is the one otherwise reasonable precaution that I can't realistically take on the machines in question. I do a lot of web development, so if I'm running everything in a VM all the time then I'm not testing using the same browsers that my clients' customers will be. Maybe it would have given some reassurance in this specific case, but in general if those client sites incorporate any third party resources this sort of attack is still a concern.
It's not so much having trouble as just inconsistency of implementation across platforms. For example, I usually do this kind of work on Windows. I certainly could spin up a quick Ubuntu VM and run Firefox in that, but various aspects of the page rendering might change as a result. Given that far more Firefox-using visitors on most real sites will be running Windows than any other platform, testing with real Firefox on Windows is less error-prone. Ditto for Chrome, etc.
Given that the company that makes the closed source Ghostery add-on makes their living selling user data, I would say that for sensitive work this addon should probably be avoided.
Some combination of script- and ad-blocker would probably be a good idea though.
If you're referring to 'Ghostrank' that's opt-in and off by default.
If there is something else going on then I'd really like to know about it!
Their sources are open and you can inspect them to make sure they don't do anything nefarious so that would have to be quite an elaborate play on their part with the downloads being different than the published source in critical parts.
Wow, that seriously sucks, they were open source at some point.
See:
Ghostery was acquired in January 2010 and is no longer open source. This is an old version of the extension. See http://www.ghostery.com/ for a current version.
Specifically, the "Securing the Web browser" section.
[edit] Also worth mentioning is the stuff about smartcards in that blog post. You can steal my ~/.ssh/ and my ~/.gnupg/, but because I'm using a smartcard, it won't do you any good.
That's a great post. Very thorough. However, a couple of observations concerning some security issues you might not be aware of:
First, X itself is very insecure, so by allowing your web browser to share the same X server as the rest of your apps, you are making the rest of your apps more vulnerable.
Second, the so-called "Trusted" Platform Module you're using for extra entropy may itself not be very trustable, despite the name. So you may want to rethink that.
Finally, according to the vendor of the GPG smartcard you're using, "the software on this card is not available as free software due to NDAs required for certain parts."
That there are NDAs on parts of the card or the software (it's not clear which) makes the card suspect, and I don't see where I can get the source of the code (free or not) that's running on the card. An ideal smart card would, like gpg itself, have completely open and transparent hardware and software. I'm not sure if any of those kinds of cards exist, however.
That said, I'm sure all the security measures you're taking in sum make you far better off than the typical computer user, but there's room for improvement.
Re X being insecure. Yep. People have brought this up in the comments of the blog post. It doesn't reduce security by shifting it to a different user, and no, it's not as good as running under a VM. However it does give it some extra protection. For example, it would have protected your main user's ~/.ssh/ and ~/.gnupg/ directories etc. that this latest pdf.js vulnerability could have exposed.
Re the TPM, even in the worst case scenario where the TPM is totally evil, it can't reduce the randomness on my system. It will either keep it the same or improve it. At least on Linux, where it is just one extra source of entropy on top of the other existing ones.
Re the smart card, that may be the case, but it's probably the safest one out there, recommended and pushed by the guy who wrote GnuPG.
It's worth noting that the blog post is 4 years old now.
Updating software shouldn't by itself give a sense of security; instead, use sandboxing/encryption technologies more generally. For firefox you could use firejail[0] or sandfox[1].
Or even more general approaches like subuser[2] or Qubes OS[3].
Personally I use FF 28.x + Noscript + Adblock plus + Firejail 0.9.28-1 and I feel quite confident I won't get hacked by random attacks.
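Basic usage is just prefixing the command; for example (the --private flag, if I remember the option correctly, swaps in a throwaway home directory so ~/.ssh and friends are not even visible to the browser):

    # run firefox in a firejail sandbox with a temporary, empty home directory
    firejail --private firefox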
The exploit basically uploaded the contents of a bunch of sensitive files to some server. Besides uploading the full list of files in the User directory (not the contents of the files) it uploaded the contents of the following files:
Hi, I run the site https://scriptobservatory.org, which scans the internet and keeps track of what JavaScript people are sent as they browse the internet. Could you drop me an email with a copy of the exploit script (OR a list of a few unique strings found in the exploit script)?
With that, I can search the history of what we've been sent to get a list of all webpages that this exploit has been seen on.
Email is scriptobservatory -at- gmail -dot- com or you can input it in the "Do you have a list of websites you want to be scanned regularly?" text box.
To people helping others on the internet who claim to be security professionals, remember to make sure that the person is actually trustworthy, so you're not helping criminals. Even though a_cherepanov is a new account with only this comment, I suppose their email domain makes them trustworthy enough: ESET is a Slovakian security company that's had a Wikipedia page for 5+ years.
Hi fukusa, I know a Russian website (not a news site, it is webdev oriented) that triggers some PDF error in Firefox 35 and does not do that with latest Firefox 39.0.3. I sent a bug report to owners 6 days ago (just because PDF errors on a webpage are strange) and they have not fixed it yet. Could you check this website? I can send you an URL the way you prefer.
The first page says: "There are two times when Firefox will communicate with Mozilla’s partners while using Phishing and Malware Protection. The first is during the regular updates to the lists of reporting phishing and malware sites. No information about you or the sites you visit is communicated during list updates. The second is in the event that you encounter a reported phishing or malware site. Before blocking the site, Firefox will request a double-check to ensure that the reported site has not been removed from the list since your last update." That seems pretty reasonable. But the second one looks like it checks every executable file you download. Why isn't that mentioned on the FAQ?
"When you download an application file, Firefox will verify the signature. If it is signed, Firefox then compares the signature with a list of known safe publishers. For files that are not identified by the lists as “safe” (allowed) or as “malware” (blocked), Firefox asks Google’s Safe Browsing service if the software is safe by sending it some of the download’s metadata."
All the comments thus far have focused on the un/reasonableness of the vulnerability, plus some potshots at FF.
I've not seen any discussion about how this exploit targeted dev keys. I see that as a data point that we've turned a corner: the coder in this case decided to grab auth keys/passwords (with a presumably low rate of success).
As logical as it may be (without RCE, not much more they could have done with a higher rate of success), I don't think it'd have been done ten years ago.
As far as I understand with this exploit it was only possible to read files, not write to them or compromise the targets in some other way. With that in mind, it makes sense to target keys. Because the keys are an indirect way to compromise new targets.
> Yesterday morning, August 5, a Firefox user informed us that an advertisement on a news site in Russia was serving a Firefox exploit that searched for sensitive files and uploaded them to a server that appears to be in Ukraine.
Which russian website, excuse me? Why not share the name?
Browsers are supposed to browse, that's all. More and more stuff like this will come up with HTML5/JavaScript, and people will begin to wonder why the world is jumping through all the JavaScript hoops to build a web app that is essentially a rich client app when they could use tools that are designed for that. Are they more or less secure? Neither; once you can touch the user's filesystem the risk is the same, which is why it still baffles me that developers actually want to code in JavaScript and dozens of one-off libs when they could use first-class tools which are far better designed. Browsers are supposed to browse, that is all they are supposed to do.
Once upon a time, the Internet was supposed to just be a network of interconnected hypertext documents. But as soon as we decided that the Web should be a platform[1], and Netscape Navigator packaged JS, we started down a road from which it's quite hard to return.
I know it's an unpopular opinion, but I actually miss the days where webpages were static and did not need JS to load basic functionality.
With the rise of the IoT, security is only going to be more and more difficult (e.g., all the automanufacturers' issues as of late); here's hoping we can figure out a way to make security mainstream…
> Browsers are supposed to browse, that is all they are supposed to do.
Try telling that to people who want them to do more. No one wants to download and install your desktop app - it's too much work and people are too concerned about security. Mobile app stores are much better at minimizing that friction which is why native applications are so popular on that platform... but there's still friction.
The web is awesome because it's so easily accessible. And people want to do sophisticated things easily - they don't want to mess with downloading and installing stuff.
The fact that the web started off a certain way and browsers are called "browsers" has literally zero impact on what people demand from their technology. What they want now is for their browsers to solve problems. So that's what people make.
To me browsing would include all the JS stuff we have now plus all kinds of things we haven't dreamed up yet. Call me old fashioned but I'm all for continuing to move the web forward.
There will be vulnerabilities in native apps, there will be vulnerabilities in web apps or, put more simply, there will be vulnerabilities.
What is browsing, then? Read-only? Are forums browsing, or interactive apps? Where do you draw the line?
I'm all for less bloat, and I can't figure out why a browser would double as a PDF reader, for instance, when a native app is invariably faster, more feature-rich, more customisable and more secure. However, it's difficult to draw a concrete line between plain browsing and web apps.
> I can't figure out why a browser would double as a PDF reader, for instance, when a native app is invariably faster, more feature-rich, more customisable and more secure.
A native app is less secure. They're all written in memory-unsafe languages, are not guaranteed to be up-to-date, and do not run sandboxed. Integrating a JS PDF viewer into the browser hurts performance, but it's more convenient (no separate app to open, can start reading before it finishes downloading), and much less likely to be a security risk.
Ugh here we go again... no idea what the constant complaining about adding features and functionality to web browsers or the web in general really accomplishes at this point. The ship sailed more than 5 years ago. The likelihood of browsers returning to light html document readers is exactly zero. Time to move on to more productive complaints.
The only viable rich client app frameworks I'm thinking of that are sandboxed are Flash, Java applets, and the OS X app store. The first two are at different stages of being universally disabled for security reasons, and the latter hasn't taken off and is not portable like the first two.
Mobile is a different story of course, but also not portable.
Semi off-topic: What does the security track record of Chrome's integrated PDF viewer (PDFium) look like? Should I make it Click-to-play or is it about as secure as any other part of the browser?
Edit: NVD does list a bunch of vulnerabilities with "PDFium" in them [1], and I guess there are a few more from when it wasn't called PDFium yet, but I'm curious as to how an expert would interpret these numbers.
time to start running everything in its own container. i don't like the idea of docker for production, but i like the idea of docker for my desktop. i want to now run every single command in a container. i can run firefox in a linux container, e.g. https://bbs.archlinux.org/viewtopic.php?id=196327
The article claims this exploit leaves no trace, but what about Linux atimes (assuming you don't have noatime set)? Eg. if you found multiple shell scripts with similar access times when you know you haven't worked on them at the same time. If this is a workable method of detection then it would be a good idea to avoid accessing any potentially affected files until you have recorded full access times.
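One way to record access times without touching the file contents yourself (stat only reads inode metadata, so it should not bump atime):

    # print last-access time and name for the likely targets
    stat -c '%x  %n' ~/.ssh/*

Keep in mind that relatime, the default mount option on most distros, only updates atime in limited cases, so a stale atime is not proof the file wasn't read.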
These browser vulnerabilities have got me thinking that I should start browsing in a VM. Has anyone moved to this level of isolation? Steve Gibson on the last Security Now podcast said he's been experimenting with Sandboxie...
Sandboxie looks like a paid closed source solution, I'm not sure they give me a compelling value proposition over something like a light linux distro under VirtualBox.
I was for a while, using a W7 VM on VirtualBox. I hooked it up to the VPN interface so that if the VPN dropped, I wasn't leaking traffic and it couldn't access the local network or host machine without significant difficulty.
It was initially for minimising the risk of false positives while testing remote access from the network I was on at the time.
Probably not enough to be hacking the NSA, but it quickly added a layer of protection against leaking stuff.
I am not a security pro, but I wonder if server-side installations of PDF.js are exploitable. WordPress plugins using PDF.js, can these become a new vector to attack webservers? Case: a site uses a PDF.js plugin to render PDFs for users. Is it possible to access the server filesystem through PDF.js?
They are not exploitable (at least not in the same way). The Firefox PDF viewer is a modification of PDF.js, and PDF.js code runs in the browser, not on the web server. The exploit might poke a hole in the EMBED tag security of the web browser (and not in the PDF.js code itself). A WP plugin should be as safe as any other web application (unless it introduces a similar security hole in its own code, e.g. XSS).
Firefox's 'About' page seems to lack enough information here.
My page just said 'Firefox 39 available' and 'restart to upgrade'. But the exploit page notes that you need version 39.0.3 in order to be protected. So it's unclear if the upgrade would fix things or not.
Another good idea is to never visit news sites based in Russia. Not only will you not get infected with random malware, you also won't have to read the blatant propaganda that passes as "news" over here.
Because it is the only browser that is not tied hand-and-foot to some major global commercial player, and because each and every browser ever launched had security issues.
If you had taken a moment to actually scan those search results, you would have realized that "Chrome is the new IE" is typically a reference to its ubiquity.
Safari and Firefox are typically called the new IE because they are lagging behind the times.
Which is why I said "IE6" in my original comment. The later versions of IE were very decent. They certainly didn't seem like the frozen accident of history - an issue Firefox continues to grapple with.
Umm, no. See, Firefox gives no one any reason to hate it based on ideology. It's a pro-consumer, pro-internet-user, privacy-respecting, standards-compliant browser. What is not to love in this ideology?
All the same, as of 2015, it is a poor implementation of a browser from the technical point of view. Not because it was designed badly, but because it has simply not kept up.
A quick Google search found only four:
https://www.mozilla.org/en-US/security/advisories/mfsa2013-9... (another local file disclosure)
https://www.mozilla.org/en-US/security/advisories/mfsa2015-3... (needs to be "combined with a separate vulnerability" to be exploitable)
https://www.mozilla.org/en-US/security/advisories/mfsa2015-6... (needs to be "combined with a separate vulnerability" to be exploitable)
https://www.mozilla.org/en-US/security/advisories/mfsa2015-7... (this one)
It still is looking better than the plugin it replaced.