Hacker News new | past | comments | ask | show | jobs | submit | arielby's comments login

No. This is a subtle vulnerability that involves the flags in the x86 page table not matching the hypervisor's view of them - not a mere buffer overflow. Ordinary static analysis couldn't have caught this. Safe languages couldn't have prevented it. Even a complete formal proof would have missed this without a good model.


Disagree. If those flags were properly typed then whoever added superpages would have had to make a decision about whether that was something guests should be allowed to set. Sure, they could still have made the wrong choice, but having the mask for which bits are allowed defined separately makes it much easier for a programmer to simply forget.

Heck, even without a type system, the problem is that the check is backwards. There shouldn't be a mask of flags that the guest isn't allowed to set; the flags should have been ANDed with a mask of the flags they are allowed to set, so that any new flag would be disallowed by default.
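To make the default-deny point concrete, here is a toy sketch in Python; the flag names and bit positions are made up for illustration and are not Xen's actual PTE layout.

```python
# Hypothetical page-table flag bits, for illustration only.
FLAG_PRESENT = 1 << 0
FLAG_RW      = 1 << 1
FLAG_USER    = 1 << 2
FLAG_PSE     = 1 << 7   # a later-added "superpage" bit

# Blacklist style: bits the guest may NOT set. Whoever added the new
# PSE bit forgot to extend this mask, so it slips through.
DENY_MASK = 0

def check_blacklist(flags):
    return (flags & DENY_MASK) == 0

# Whitelist style: bits the guest MAY set. Anything new is rejected
# until someone explicitly decides it is safe.
ALLOW_MASK = FLAG_PRESENT | FLAG_RW | FLAG_USER

def check_whitelist(flags):
    return (flags & ~ALLOW_MASK) == 0

guest_flags = FLAG_PRESENT | FLAG_PSE
print(check_blacklist(guest_flags))  # True  - the forgotten flag is allowed
print(check_whitelist(guest_flags))  # False - disallowed by default
```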


The mask of flags that the guest isn't allowed to set is created by taking a whitelist of flags that it is allowed to set and inverting it, so that every flag that's not explicitly allowed is denied. The flags involved here are ones the guest was intentionally granted the ability to change. The problem is that Xen subtly mishandled the consequences of a guest changing them in a way that wouldn't affect normal guests. I haven't seen a solution to this that doesn't, in the end, basically boil down to not making that mistake in the first place.


I wonder which pen-testing techniques were used by Alibaba to find this vulnerability.


> This bug might also be considered an argument for the view of ditching of para-virtualized (PV) VMs, and switch to HVMs

It's not like Xen HVMs have a better security story than PV VMs. The paravirtualization code should probably be audited more heavily than it already is.


Not more difficult than C - you write the crypto functions in asm. You could use a C compiler to handle the ABI but the code isn't really C code.


I would split use #3 into two parts: 3a) disambiguation for builtin operators - e.g. a float local needs to go in a float register, adding 16-bit integers uses 16-bit addition, a struct local needs to go in an appropriately-sized memory location. This is basically the same as #2. 3b) schema-ed data storage. This allows laying out structured data in a compact way in memory (e.g. `struct x { int a; int b; }` consists of 2 integers one after the other). This functionality is useful even in unityped code.
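A minimal illustration of 3b, using Python's `struct` module as a stand-in for a compiler's layout algorithm: the schema `struct x { int a; int b; }` pins down a compact representation, two 4-byte integers one after the other, independently of any type checking of the surrounding code.

```python
import struct

# "<ii" is the schema: two little-endian 4-byte ints, back to back.
packed = struct.pack("<ii", 1, 2)
print(len(packed))            # 8 bytes: the two ints, one after the other
a, b = struct.unpack("<ii", packed)
print(a, b)                   # 1 2

# Field offsets follow directly from the schema: 'a' at 0, 'b' at 4.
print(struct.calcsize("<i"))  # 4: the offset of 'b'
```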


While you are right that standard arithmetic operators are typically polymorphic, that doesn't mean there is another use case. Other examples I had in mind were little- and big-endian integers, or binary and decimal integers. Assembler is an example of a language where arithmetic operators are not polymorphic.

What I would say perhaps is that #2 and #3 are in a way complementary. In 2, we describe a common operation with a more abstract type, while in 3, we describe the specific representation with a more concrete type. So both #2 and #3 address issues of naming things on different levels of abstraction, which aren't directly related to the program specification or its correctness (use #1).


The arithmetic operators in x86/x86-64 are certainly polymorphic (over word-length, plus integer vs. x87 vs. SSE).

I think the distinction is that #3b-types, which denote the "encoding" of a value, mapping it to its meaning, are often used as a basis for a #2-types system (to parametrize operations).


> The arithmetic operators in x86/x86-64 are certainly polymorphic

I disagree - at least in machine code, the instruction code determines the type of the operands. If they were polymorphic, the type of the operands would determine the specific instructions to be used (so you could, for example, reuse the same code for different word lengths). Maybe modern assemblers can do that (and have a generic instruction name for addition, say) - it's been 20 years since I programmed in x86. I recently used mainframe assembler, where it is as I describe.

Polymorphism is all about names. You want the same name (and by extension, the same code) to refer to potentially different operations on data.

> I think the distinction is that #3b-types

I am still not sure how it is different from #3a in your definition. When I say 16-bit signed integer, I can also mean this as an encoding of an abstract "integer number". The whole point of declaring a type in the sense of #3 is to be specific about the encoding. In mathematics, you (typically) don't care about that; but you sometimes care about use case #2, which is typically dealt with by mapping to more abstract concepts via some morphism.


That's why you make sure all operations you do are bidirectional.


If you are careful and each step gives an equivalent statement, this is correct. For example, if you read math books/papers, whenever such transformation is made, the author usually states that it is equivalent.

However, the problem is that students apply this technique blindly, manipulating the formula in any way they can/want. Such carelessness is the real problem.
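The standard example of a step that is an implication but not an equivalence is squaring both sides:

```latex
x = -1 \;\Longrightarrow\; x^2 = 1 \;\Longleftrightarrow\; x = 1 \ \text{or}\ x = -1
```

The first arrow cannot be reversed, so the candidate x = 1 has to be checked against the original equation and discarded.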


A few years ago it was popular to serve files via a not-really-trusted mirror (e.g. Sourceforge), while serving the website via a personal server.


libgcrypt was written before timing side channels were an issue. Modern crypto libraries (and OpenSSL) are at least supposed to be constant-time and not affected.
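As a rough illustration of the constant-time idea (Python used as pseudocode here; `hmac.compare_digest` is the stdlib's constant-time comparison):

```python
import hmac

# A naive comparison returns at the first mismatching byte, so its
# running time reveals how long a prefix of the secret was guessed.
def leaky_eq(a, b):
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

# The constant-time version examines every byte, regardless of where
# (or whether) a mismatch occurs.
def ct_eq(a, b):
    return hmac.compare_digest(a, b)

print(ct_eq(b"secret-mac", b"secret-mac"))  # True
print(ct_eq(b"secret-mac", b"guess-mac!"))  # False
```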


The problem with the original lottery is that most of its value comes from high-EV, tiny-probability events, e.g. the 2^{-50} probability of winning 2^50 dollars. The practical result of that event does not seem to be worth 2^50 utilons, to say the least. It is hard even to reason about events worth that much.

However, many perturbations of this lottery can actually be good bets.

For example, suppose you gain 3^n dollars with probability 2^{-n}. Then you have a 1/128 chance of winning $2187, a 1/256 chance of winning $6561, and this game starts looking much nicer.
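Concretely, outcome n contributes (3/2)^n to the expected value, so the partial sums grow without bound. A quick check in Python (`ev_terms` is just an ad-hoc helper name):

```python
# Outcome n pays 3**n dollars with probability 2**-n, contributing
# (3/2)**n to the expected value.
def ev_terms(k):
    return [(3 ** n) / (2 ** n) for n in range(1, k + 1)]

terms = ev_terms(8)
print(terms[6])    # 2187/128 ≈ 17.09: the $2187-at-1/128 outcome alone
print(sum(terms))  # ≈ 73.89 from just the first 8 outcomes
```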

The "Pascal's Mugging" divergence is a different problem, where Solomonoff-style priors imply negative-exponential probabilities of Busy-Beaverish payoffs. Ordinary priors don't really have this problem.


It seems like the same class of problems, because they are both about high payoff, low probability bets. Solomonoff induction is just a formalization used to show the result is very general.

Any reasonable prior should have similar cases. Unless you really believe the mugger being a matrix lord has 0 probability, or that God has 0 probability, etc, you are forced to act as if they are true. Which results in wasted effort in the vast majority of possible outcomes, in exchange for a massive payoff in incredibly rare outcomes.

Assigning 0 probability is not something you should do lightly. It would mean you could wake up and find yourself outside the matrix, and still not believe it had any chance of being true. It would mean God himself could come to you and say "yeah, it's all real," and you would be forced to believe there is still 0 probability he exists.


chroot was always intended to allow running processes picky about their system environment.


This is the well-known CRT fault attack, nothing new. SSL implementations that don't verify their signatures leak the private key if the signature computation goes wrong - this is essentially a hardware problem. Verifying your signatures is fast, though, so doing it is worthwhile hardening (the NSA can potentially use cosmic rays for this attack; probably nobody else can).
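For what it's worth, the hardening described above can be sketched with toy RSA-CRT numbers (textbook-sized primes, an XOR standing in for a hardware fault; none of this is production crypto):

```python
import math

# Toy RSA parameters (far too small for real use).
p, q = 61, 53
n = p * q                 # 3233
e = 17
d = 413                   # e*d = 7021 ≡ 1 (mod lcm(p-1, q-1) = 780)
dp, dq = d % (p - 1), d % (q - 1)
qinv = pow(q, -1, p)      # modular inverse (Python 3.8+)

def sign_crt(m, glitch=0):
    sp = pow(m, dp, p) ^ glitch   # glitch simulates a transient fault
    sq = pow(m, dq, q)
    h = (qinv * (sp - sq)) % p    # Garner recombination
    return (sq + h * q) % n

def sign_checked(m):
    s = sign_crt(m)
    if pow(s, e, n) != m:         # cheap public-key verification
        raise RuntimeError("fault detected; signature withheld")
    return s

m = 42
good = sign_checked(m)            # verified, safe to release
bad = sign_crt(m, glitch=1)       # faulty; if released, it leaks q:
print(math.gcd(pow(bad, e, n) - m, n))  # 53
```

The last line is the attack itself: a signature that is correct mod q but faulted mod p reveals a prime factor via one gcd, which is why the verification step before release matters.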

The findings are very similar to the classic "Ron was wrong, Whit is right" paper - if you scan the entire Internet, you will find broken hardware. You will also find SSH servers with their root password being `12345678`.

