gtank's comments | Hacker News

Super cool. I've been waiting for someone to pick up Zanzibar since the paper came out!

What are your plans for surfacing the policy relations to developers?


When you say policy relations, do you mean specifically the policies that we've written to drive ShareWith? Or do you mean the ability for developers to write their own policies for their own apps?

For the former, I agree it could be important and informative to show how the various roles and their relationships map to access decisions; we should write a blog post soon explaining how we interact with the platform to build ShareWith!

For the latter, we plan to open up the platform to developers as quickly as we can, so stay tuned!


Gotcha. I was wondering about both - it seems possibly useful to be able to automate my authZ and groups in ShareWith, but mostly I would like to see what a Zanzibar-aaS can do. Guess I'll stay tuned :)


This is a user-facing implementation of https://wiki.mozilla.org/Security/Binary_Transparency, built on top of Let's Encrypt (https://letsencrypt.org/) and existing Certificate Transparency infrastructure.

You may already know that packages are signed, and that signing prevents someone from shipping you a random evil package instead of the one that the developer intended to release.

Transparency is a new concept that fills in a missing piece of that story: how can you be sure that you got the same artifact as everyone else? It works by adding a hash of every release to an append-only public log. Now, when you're deciding if you want to install that package, you check not just the signature but also if the hash of the thing you've received is in the public log.

Because of the logging, someone can't just ship you a custom evil version even if they steal the signing keys! At minimum they'll have to submit their version to the log as well, which makes that previously undetectable attack publicly visible forever. In the world of TLS certificates, log monitors catch all kinds of mistakes and malice. I'm excited to see the idea finally making progress in other domains.
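
If you want to see the mechanics, here's a rough Go sketch of the "is this hash in the log?" check, done the way Certificate Transparency does it: the client recomputes the Merkle tree root from the artifact and an inclusion proof (RFC 6962-style) and compares it to a root it already trusts. This isn't the Binary Transparency client itself; the artifact bytes, index, tree size, and proof below are made up for the example, where a real client would get them from the log and a signed tree head.

    package main

    import (
        "bytes"
        "crypto/sha256"
        "fmt"
    )

    // leafHash is the RFC 6962 leaf hash: SHA-256(0x00 || data).
    func leafHash(data []byte) []byte {
        h := sha256.New()
        h.Write([]byte{0x00})
        h.Write(data)
        return h.Sum(nil)
    }

    // nodeHash is the RFC 6962 interior node hash: SHA-256(0x01 || left || right).
    func nodeHash(left, right []byte) []byte {
        h := sha256.New()
        h.Write([]byte{0x01})
        h.Write(left)
        h.Write(right)
        return h.Sum(nil)
    }

    // verifyInclusion recomputes the tree root from a leaf and its audit path
    // and compares it to the root we already trust. A match means the log has
    // publicly committed to this exact artifact.
    func verifyInclusion(leaf []byte, index, treeSize uint64, proof [][]byte, root []byte) bool {
        if index >= treeSize {
            return false
        }
        fn, sn := index, treeSize-1
        r := leafHash(leaf)
        for _, p := range proof {
            if sn == 0 {
                return false
            }
            if fn%2 == 1 || fn == sn {
                r = nodeHash(p, r)
                // If we were the right edge of a partial subtree, skip the
                // levels where we have no sibling.
                for fn%2 == 0 && fn != 0 {
                    fn >>= 1
                    sn >>= 1
                }
            } else {
                r = nodeHash(r, p)
            }
            fn >>= 1
            sn >>= 1
        }
        return sn == 0 && bytes.Equal(r, root)
    }

    func main() {
        // Toy two-entry log so the example is self-contained.
        a := []byte("pkg-1.0.tar.gz artifact bytes")
        b := []byte("pkg-1.1.tar.gz artifact bytes")
        root := nodeHash(leafHash(a), leafHash(b))
        // Proof that entry b (index 1) is in the tree of size 2: its sibling's hash.
        fmt.Println(verifyInclusion(b, 1, 2, [][]byte{leafHash(a)}, root)) // true
    }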


It’s enough because of the underlying PAKE technique. The password doesn’t directly become the key, and an unsuccessful brute-force attempt breaks the session.

PAKE isn’t really new, but it’s certainly underused for how much it changes the password game: https://blog.cryptographyengineering.com/2018/10/19/lets-tal...


The server can still bruteforce it though right?


(author of magic-wormhole here)

Nope, the server gets no more power than a random network attacker. The codes are single-use, enforced by PAKE, so an attacker (or the server) gets at most one chance to guess the code for any single execution of the program.


Author here, glad you enjoyed the talk!

Looking into it more, I noticed that there's a Go implementation[1] that is noted to be constant-time and sits behind a !amd64 build tag. So it isn't just the assembly one.

[1] https://golang.org/src/crypto/elliptic/p256.go


Wow, I completely missed that (largely because I assumed a Go implementation could not guarantee constant time). Sorry about that, and thanks for looking deeper than I did.

On that note, how much of a guarantee is there in the Go implementation? I assume in most cases it's going to be constant time, but isn't that a little harder to guarantee than with the asm version? And if not, why not just use the Go implementation everywhere for consistency?


The asm version is much more performant. The Go version is a port of the implementation in NSS, but the compiler is free to screw it up in various ways. Furthermore, correctness is always a concern: clever unsaturated-arithmetic code is hard to get right.
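
To give a flavor of what "the compiler is free to screw it up" means, here's a small generic sketch (not code from crypto/elliptic) of the branch-free style this kind of implementation relies on: a conditional copy that does the same work whether or not the condition is true. The constant-time property lives entirely in how this compiles; an optimizer that turns the masking back into a branch quietly reintroduces a timing leak, and there's no way to tell the Go compiler not to do that, which is part of the appeal of hand-written assembly.

    package main

    import (
        "crypto/subtle"
        "fmt"
    )

    // ctCopy copies src into dst only when v == 1 (v must be 0 or 1), but does
    // the same amount of work either way, so neither the timing nor the memory
    // access pattern depends on the secret selector bit.
    func ctCopy(v int, dst, src []byte) {
        mask := byte(-v) // 0xff when v == 1, 0x00 when v == 0
        for i := range dst {
            dst[i] = (dst[i] &^ mask) | (src[i] & mask)
        }
    }

    func main() {
        secretBit := subtle.ConstantTimeByteEq(42, 42) // 1, computed without branching
        dst := []byte{0, 0, 0, 0}
        src := []byte{1, 2, 3, 4}
        ctCopy(secretBit, dst, src)
        fmt.Println(dst) // [1 2 3 4]

        // The standard library ships the same primitive as subtle.ConstantTimeCopy.
        subtle.ConstantTimeCopy(0, dst, []byte{9, 9, 9, 9}) // selector is 0, so dst is untouched
        fmt.Println(dst)                                    // still [1 2 3 4]
    }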



You should see their (much faster!) subsequent paper. It develops an entirely new mapping to take advantage of Intel vector extensions, then invokes Valefor instead of Bael.


There's a lot more to defeating traffic analysis than random padding: http://freehaven.net/anonbib/cache/oakland2012-peekaboo.pdf


CoreOS has released one: https://github.com/coreos/go-oidc

We're currently using it in our own OAuth/OIDC identity provider.
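
If you're curious what it looks like in practice, here's a rough sketch of verifying an ID token with the library's newer Provider/Verifier API (the issuer URL, client ID, and raw token below are placeholders, not anything from our own code):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/coreos/go-oidc"
    )

    func main() {
        ctx := context.Background()

        // Fetch the issuer's discovery document (endpoints, signing keys, ...).
        provider, err := oidc.NewProvider(ctx, "https://issuer.example.com")
        if err != nil {
            log.Fatal(err)
        }

        // The verifier checks the token's signature, issuer, audience, and expiry.
        verifier := provider.Verifier(&oidc.Config{ClientID: "my-client-id"})

        rawIDToken := "eyJ..." // placeholder; normally taken from the token response
        idToken, err := verifier.Verify(ctx, rawIDToken)
        if err != nil {
            log.Fatal(err)
        }

        // Decode whichever claims the application cares about.
        var claims struct {
            Subject string `json:"sub"`
            Email   string `json:"email"`
        }
        if err := idToken.Claims(&claims); err != nil {
            log.Fatal(err)
        }
        fmt.Println(claims.Subject, claims.Email)
    }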


I highly endorse this sort of thing! Reverse engineering online games is how I really got started with computers. It's a great teaching tool because the reward loop is short and immediately relevant - you get superpowers, in the game you already play with your friends, in almost direct proportion to how much you've learned.

Depending on the game you'll learn about binary reversing, executable formats, networking, rendering, x86 assembly, C, JVM bytecode, or more advanced topics. We dove right into hard things because it was fun and there was no one to tell us they were too hard for kids. The end result among my group of friends seems to be several careers in tech with a decided systems and security skew.

edit: I remember Runescape in particular. They applied such an escalating series of obfuscations to the client code and network protocol that we deployed things I now recognize as AST analysis and machine learning to work past them. These days, I really wonder what the view from the Jagex security team was like. Did they have fun constantly coming up with new challenges for bored teenagers?


I too got into my game development career by, essentially, trying to hack Runescape. I sank an unhealthy amount of my early teenage years into working on Runescape bots, for both the original game and RS2. Every time I think back on it, there are so many happy and exciting memories.

From AutoRune scripts, to writing bots, to computer vision, all for one game. That turned into an obsession with an industry that had me move halfway across the world to work on our own multiplayer virtual worlds.

The community was pretty active, and at one point I was building/hosting the most-used public bots/sites. I can imagine our paths crossed one way or another at some point!


I have a very similar set of good memories. I put more time into deobfuscation, updating, detection evasion, and (eventually) server emulation than bots per se.

My contact info is in my profile, it would be cool to see if we ran into each other back then.


Me too! It was such a fantastic experience in retrospect. I've never been so incentivized to learn as during my reverse engineering years.

I started off writing SCAR scripts for Runescape and then got into development for the Aryan bot and private RS servers (if any of those ring any bells). I remember deobfuscation tools being consistently released and updated, but I left the Runescape scene for Warcraft before I could get that far involved.

I have a friend who worked on anti-cheat technology for a very well-known gaming company. He had in fact started off selling hacks for said company and was hired to protect against them many years later. It was entertaining because part of his job was to maintain his old aliases, create new ones, and frequent all the hacking forums and IRC channels to listen in on the discussions. He would even sometimes contribute random information as part of his ruse. The best part was when his team would leave secret messages for the hack developers (in the byte patterns of the signature scans, for example, or embedded within the anti-cheat modules that were being mapped into memory). They had a great sense of humor, and the hacking community picked up on it and loved it.

I think coming from our background, getting to work in a security capacity like that would be fun as hell. It's very open-ended, like a never-ending cat-and-mouse game. I'm sure the Jagex security team felt a similar drive back in the day.


People who object to JavaScript crypto usually mean that in the context of "browser JavaScript", which is fraught with peril [1]. The JavaScript language itself isn't necessarily the problem (although parts of it are dodgy by the standards of what you'd like to implement crypto with).

[1] http://matasano.com/articles/javascript-cryptography/

