Paper author here, thanks for the interest :). I just wanted to elaborate on the overhead numbers you mentioned.
I am not specifically familiar with the 35%+ numbers, but you should take into account the cost of any defense on top of the cost of putting tags in pointers, because that makes all the difference.
We found in our experiments on SPEC CPU2006 that, without static analysis for optimizations, the cost of pointer tagging itself (tagging + masking) is 22% geomean, which is almost entirely due to masking on loads/stores. This is with a 64-bit masking constant (0x80000000ffffffff) which is unique to our approach and is relatively inefficient on x86. With 32-bit masking (0xffffffff constant), which is more common, the number is about 14%.
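For anyone curious what that masking looks like, here is a toy Python sketch of the arithmetic (the real instrumentation emits an AND on every load/store; the example tag layout is my assumption, but the two mask constants are the ones mentioned above). The point of the 64-bit mask is that it preserves bit 63, so a pointer whose tag signals an overflow stays non-canonical and faults on dereference, whereas the common 32-bit mask simply strips the whole tag.

```python
# Toy sketch of tag masking (illustrative values, not the actual
# instrumentation). Layout assumed: low 32 bits = address, upper
# 32 bits = tag, with bit 63 acting as the overflow bit.
MASK_32 = 0x00000000FFFFFFFF  # common scheme: strip the whole upper-half tag
MASK_64 = 0x80000000FFFFFFFF  # Delta Pointers' mask: additionally keep bit 63

addr = 0x0000_1000                 # 32-bit address part (made up)
tag  = 0x7FFF_F000                 # delta tag with bit 63 clear: no overflow
ptr  = (tag << 32) | addr

assert ptr & MASK_32 == addr       # tag stripped before dereference
assert ptr & MASK_64 == addr       # no overflow: same address, valid pointer

overflowed = ptr | (1 << 63)       # overflow bit gets set by pointer arithmetic
assert overflowed & MASK_64 == (1 << 63) | addr
# Bit 63 survives the 64-bit mask, so the dereference targets a
# non-canonical address and the CPU faults: that is the detection.
```

The 32-bit mask needs only a simple AND with an immediate-like constant, while the 64-bit constant generally has to be materialized in a register first on x86, which is part of why it costs more.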
This only covers metadata management and is not useful on its own, so you need to implement some kind of defense on top of this, which is usually costly. The hardware support used by HWAsan only removes masking overhead by having the (ARM) hardware ignore the upper byte of the pointer, which is a nice way of not having to pay for masking.
"I am not specifically familiar with the 35%+ numbers, "
This is confusing to me
Last sentence of second paragraph of your paper:
"We show that Delta Pointers are effective in detecting arbitrary
buffer overflows and, at 35% overhead on SPEC, offer much better
performance than competing solutions."
:)
"
This only covers metadata management and is not useful on its own, so you need to implement some kind of defense on top of this which is usually costly. The hardware support used by HWAsan only removes masking overhead by having the (ARM) hardware ignore the upper byte of the pointer, which is a nice way to not having to pay for masking."
In "Parameter Selection" you say "We simplify this by having service owners define the current number of outliers, if there are any, at configuration time."
How do service owners get this data? by manual inspection of graphs like the ones shown earlier?
I have the same issue on Chrome 40.0.2214.111 on Linux. Other than that I agree, it's nice to have multiple alternatives for different purposes. For example, there is also Skeleton (http://getskeleton.com/) which has a very minimalistic design as well.
The examples are very relatable, at least for me personally.
Judging from this fragment, I would consider myself a stack person:
"For tasks that represent pain that must be immediately alleviated, it’s not that the stack feels I must fix it; rather, this must be fixed. The task is painful not because the stack must fix it, but because it exists, period."
The frustrating part about this is that it is very hard to explain to coworkers why you feel this, because you intuitively expect them to feel the same way. I can recall some cases in which this has caused some friction when working on group projects.
CoffeeScript is a transpiler: it converts its own code directly into JavaScript code. This project converts its code to a Python AST, not to Python code itself. You can generate Python code from the AST, but that doesn't always mean the resulting Python code is runnable.
> You can generate Python code from the AST, but that doesn't always mean the resulting Python code is runnable.
I believe this is incorrect. Can you show an example of a valid abstract syntax tree that could not be represented in the grammar of the original language?
I know that variable names in Python's AST can be any string, so for instance you could declare and use a variable named "+a-b$ $$?", even though that's not valid Python code. That's pretty minor, though, so I'm curious if there's anything more significant.
We ran into this very "problem" in Hy. The AST allows several characters in Names that Python itself does not.
A good example is gensym: there we make an ast.Name node with a ":" prefix. If you run astor over this, you produce Python source containing that very character. What happens if you try to run that Python code? Syntax error. There are also a few other cases I can't think of right now.
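The gensym case can be reproduced with nothing but the stdlib (using ast.unparse, Python 3.9+, in place of astor): the ":"-prefixed name compiles and runs fine from the AST, but the unparsed source is not valid Python.

```python
import ast

# Build an assignment to a Name whose id is not a valid Python
# identifier (":g_1", like a Hy gensym). The AST layer accepts it.
tree = ast.Module(
    body=[ast.Assign(
        targets=[ast.Name(id=":g_1", ctx=ast.Store())],
        value=ast.Constant(42),
    )],
    type_ignores=[],
)
ast.fix_missing_locations(tree)

# Compiling the AST directly works; the variable exists at runtime.
ns = {}
exec(compile(tree, "<ast>", "exec"), ns)
assert ns[":g_1"] == 42

# But unparsing it yields source text that won't re-parse.
src = ast.unparse(tree)  # ':g_1 = 42'
try:
    ast.parse(src)
    print("parsed fine")
except SyntaxError:
    print("round-trip source is not valid Python")
```

So the AST is strictly more permissive than the grammar, which is exactly why AST-to-source does not always give you runnable code.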
A good read, I especially like the parts about error handling and maps/applicatives. They are well-illustrated and tackle some difficult common problems I had shortly after I started using functional languages.
The part about functors/monads/monoids is also nice, although I feel like it would be better with the accompanying talk to bind it together a bit more.
By the way, I do think pattern matching could have been discussed more extensively. Pattern matching is one of the reasons I love functional programming so much: combined with algebraic data types, it can be much more expressive than imperative control-flow statements (e.g. if-else). For programmers who are used to imperative programming, getting to grips with pattern matching can be quite a hurdle.
Nice explanation; the overall layout and text-to-code ratio make it easy and fun to follow. I would like to see the finished post once all the TODOs are filled in. I would also suggest adding "AMD64" explicitly to the page title for clarity.
So, I just completed the "curiosity killed the mouse" level. It took three of us a while to realise what was going on, but after a few minutes we got to the non-obvious solution, and I got sent back to the first level. I feel a bit screwed over now... =|
Is that still true, Magnificent Cursors Creator? This morning I finished it and, if I'm remembering my pre-coffee activities correctly, a few more stages after it, and never reached the end.
A difference is that a GPU uses a lot of power and takes up a lot of space. I can imagine an optimized, energy-efficient chip would be useful in embedded systems. Something like a Raspberry Pi for image processing maybe?
Like Tegra K1? GPUs are more energy efficient than normal CPUs for some tasks, so getting lower absolute power consumption is just a matter of using fewer cores.
By the way, the source code for Delta Pointers went online today! https://github.com/vusec/deltapointers