List of the top smart contract vulnerabilities (dasp.co)
93 points by wepple on April 12, 2018 | hide | past | favorite | 38 comments


I'll never forget this pull request. Just having these two `internal` keywords there would have saved $150 million.

https://github.com/paritytech/parity/pull/6102/files#diff-8e...

A lot of people think "smart contracts" are inherently insecure. I personally believe that most of the "hacks" were very preventable, if the people writing the code had slowed down to think about what they were doing.
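The "internal by default" idea behind that Parity fix can be illustrated with a small Python sketch (hypothetical names; this is an analogy to Solidity visibility, not real Solidity): only methods explicitly marked external are reachable through the dispatcher, so forgetting an annotation fails closed instead of exposing the function.

```python
# Hypothetical sketch: modelling "internal unless explicitly external".
# In Solidity at the time, the default was the other way around, which
# is exactly what the Parity bug exploited.
EXTERNAL = set()

def external(fn):
    """Mark a method as callable from outside the contract."""
    EXTERNAL.add(fn.__name__)
    return fn

class Wallet:
    def __init__(self, owner):
        self.owner = owner

    @external
    def get_owner(self):
        return self.owner

    def _set_owner(self, new_owner):
        # Internal: never registered, so never dispatchable from outside.
        self.owner = new_owner

def dispatch(contract, name, *args):
    """Entry point for external calls: unlisted methods fail closed."""
    if name not in EXTERNAL:
        raise PermissionError(f"{name} is not externally callable")
    return getattr(contract, name)(*args)
```

With this design, a developer who forgets to think about visibility gets a safe default rather than a $150M hole.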


Every single mistake in the world is preventable under your definition, though; every single mistake has an alternative action that was a non-mistake, and if only the people had thought more and chosen the non-mistake course of action the mistake would have been prevented. If a ‘mistake’ has no alternative that would have prevented it, it is no longer a mistake and it is instead a tragedy.

This doesn’t help us prevent the mistake in the future, though. You can’t prevent bugs and vulnerabilities in code by just mandating that everyone think more before acting.

This is why we have a rule during our learning reviews at work following an incident that the action item can’t be, “be more careful next time.” That is not an action item, that is magical thinking.

We have a saying at my work - “plan for a world where we are just as stupid tomorrow as we are today”


> every single mistake has an alternative action that was a non-mistake

This is only true for a very narrow view of what a "mistake" is. Classical tragedy is more inclined to the view that in some situations, there will be at least one mistake somewhere no matter what choices people make.

In a more contemporary source, there's an episode of Friends in which the backstory is that, years ago, Phoebe married a gay ice dancer so that he could get a green card, and also because she was secretly in love with him. He comes back to ask for a divorce because he's learned that he isn't gay after all and he's fallen in love with another girl.

This is painful for Phoebe, and she asks "Do you think, if you'd realized you weren't gay earlier, that we could have..." "Never mind, I don't think there's any answer you could give that would make me feel better."


From the article they address your point. https://dasp.co/#item-10

A lot of "hacks" are preventable in hindsight, though. I think the real issue is that we are just beginning to secure smart contracts, and it's more about the process than the person who made a mistake.

The tooling for Solidity is alright but most of the EVM and the languages associated with it are in their infancy. Formal Verification might be a solution here but we also have to consider the developer ergonomics of writing secure smart contract code.


This reminds me of 'Poka-Yoke', which is (as I understand it) about making accidents less likely by design. Functions should be externally visible only as a deliberate choice, so why not make `internal` the default? When designing a contract language, every possible effort should be made to be safe by design.


Related to this is the maxim to "make it easy to do the right thing, and difficult to do the wrong thing".

When applied to programming, a first step towards that could be as simple as using a safer language, where it's more difficult to shoot oneself in the foot, and where all sorts of errors are prevented by default.


Totally agree about making `internal` the default visibility for Solidity/EVM functions.

Should also require visibility to be declared (right now just compiler warnings happen).


It's mostly these minute details that get overlooked during code reviews, which leads to bigger problems. Bigger design mistakes rarely happen.


How wrong am I to equate "smart contracts" with a general purpose protocol negotiation thingie? Like TLS handshakes, but with code vs a spec? Or like a two party constraint satisfaction solver?


A smart contract can have persistent state, so it's more than just code; it's also data. You can have a notion of users and privileges baked into the contract, for example.


The new version of securify, which will soon be available at https://securify.ch, catches 7 out of the 9 mentioned issues automatically and is free.

I am one of the tool's authors. We went on to found ChainSecurity (https://chainsecurity.com) in order to deepen our research and understanding of smart contract security. Feel free to ask any questions about securify or ChainSecurity.


Do you have any views on Solidity as a contract language and how suitable it is for being formally verified? Does it make any properties you want to check difficult or impossible to verify?

My experience with formal verification is that the further a language is away from being a pure functional language with easy to check termination, the more difficult you make life for yourself. You really want a smart contract language that was specifically created to be easy to formally verify rather than trying to tack on formal verification later.


Note that tools like Securify work directly on the EVM, and are therefore agnostic to the high-level language the contract is written in. The EVM itself is not as amenable to formal analysis (no types, no explicit function calls, etc.).


It's a bit confusing how to parse the audit. For example, the output for this contract: https://etherscan.io/address/0xb6ed7644c69416d67b522e20bc294...

It's a bit opaque which lines of code you are warning about.


Thank you for the feedback. We will add some extra information. In this particular example there seem to be no warnings. However, as I said, the new version is coming soon.


Please correct me if I'm wrong, but isn't it the nature of a turing machine (decentralized or not) to produce an endless series of "unforeseen" exploits?


Yes, but even if your language is total you have an analogue of the halting problem, where instead of analysis being undecidable it just takes an unrealistic amount of effort.


My understanding is more hand-wavy than formal, but would the combination of a total language and some limitation on the total number of instructions produce a provably predictable language?


Likely not. It takes only a very small number of instructions to implement an interpreter for a Turing-complete language, and then you can create arbitrary behavior by passing different input. The number of necessary instructions is shockingly small, if using something like Binary Lambda Calculus or Subtract-And-Branch-If-Negative.
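As a rough illustration of how little machinery universality needs, here is a subtract-and-branch one-instruction machine (SUBLEQ) interpreter in a few lines of Python (a sketch; real SUBLEQ variants differ slightly in their halt convention):

```python
def subleq(mem):
    """One-instruction machine: at pc, read (a, b, c);
    do mem[b] -= mem[a]; jump to c if the result is <= 0,
    otherwise fall through. A negative address halts."""
    mem = list(mem)
    pc = 0
    while 0 <= pc and pc + 2 < len(mem):
        a, b, c = mem[pc:pc + 3]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# Tiny program: subtract cell 3 from itself (zeroing it), then halt
# by jumping to the negative address -1.
result = subleq([3, 3, -1, 5])
```

Since this handful of lines is enough to host arbitrary computation, an instruction-count cap alone doesn't tame the programs you can express; it only bounds how long they can run.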


I find that an interesting thesis but I don't quite understand how it's true. Could you expand on that?


Based on the halting problem, a provably complete analysis of the behavior of some (most?) programs in a turing-complete language is impossible. To put it another way, it is impossible to be sure that there isn't another vulnerability after you've patched the last one.

I don't own any crypto assets myself, but I think that using a non-turing-complete scripting language for the bitcoin blockchain was a wise decision rather than a lack of vision.


The halting problem doesn't say that you can't do a complete analysis of a PARTICULAR program to see if it halts, it just says you can't have a general algorithm that would do this for ALL programs.

Since we are trying to analyze particular programs, you aren't going to be limited by the halting problem.


If it's possible to do a complete analysis of any particular program, shouldn't it follow that it's possible to do a complete analysis of all programs?

If only a subset of all programs can be completely analyzed as particular programs, what are the boundaries of that set?


So the halting problem isn't stating that there exists a program that you can't analyze to see if it stops, the problem states you can't create a single PROGRAM that will be able to analyze all other programs.

A very crude explanation for the proof is something like this:

Imagine you design a program that takes another program as input and tells you if it halts.

Now, imagine you create another program, and all it does is call your first program, passing itself as the argument. Then, it does the opposite of whatever the first program predicts that it will do; if your 'halting detection program' says this new program will halt, the program instead loops forever. If your 'halting detection program' says this new program will loop forever, instead it halts.

So, in short, the idea is that you can't create a program IN ADVANCE that can't be tricked by a future program that knows about the halting detection program you are using.

So yes, you CAN analyze any program, but not by using a single algorithm.
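The trick above can be sketched in Python (hypothetical `halts` oracle; the "troll" program simply inverts whatever the oracle predicts about it):

```python
def make_troll(halts):
    """Given a claimed halting oracle, build a program the oracle
    must misjudge: the troll does the opposite of the prediction."""
    def troll():
        if halts(troll):
            while True:          # oracle said "halts" -> loop forever
                pass
        else:
            return "halted"      # oracle said "loops" -> halt at once
    return troll

# Take any concrete oracle, e.g. one that always predicts "loops":
pessimist = lambda program: False
troll = make_troll(pessimist)
# The oracle predicted troll loops forever; calling it shows it halts,
# so this particular oracle is wrong -- and the same construction
# defeats every other candidate oracle.
```

Note the quantifier order: each individual oracle is defeated by a troll built against it, which is exactly "no single algorithm works for all programs", not "some program can never be analyzed".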

A fun story using this idea is in Godel, Escher, Bach: https://genius.com/Douglas-hofstadter-contracrostipunctus-an...


I'm familiar with the proof and the story. My point is that the claim that any particular program can be fully analyzed is equivalent to saying that all programs can be analyzed.

Perhaps you're saying that it's possible to analyze any particular program for known exploits once they've been discovered?


Take the set of all finite C programs without loops or recursive function calls. They (clearly?) halt.

Or: the set of all finite Brainfuck programs that don't contain a `[` or `]` instruction. Those also halt.
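Membership in that second set can even be decided mechanically, a sketch:

```python
def trivially_halts(program):
    """A Brainfuck program containing no '[' or ']' has no loops,
    so every instruction executes at most once and it must halt."""
    return '[' not in program and ']' not in program
```

This is the general pattern: carve out a decidable subset of programs with a syntactic check, and accept that some halting programs fall outside it.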

Also, see Idris: https://www.idris-lang.org/. It's a language that has dependent types and a totality checker.

A dependent type is a type that depends on a value. For example, the type of a length-indexed array depends on how many elements are in the array.

A totality checker means that you must prove that a program that you write halts; otherwise, it fails to compile.

In Idris, if you ever write the equivalent of reduce recursively, you must prove that the recursion will terminate. This is done by noting that the length of an array is dependent on how many elements are in the array, noting that if an array has zero length then reduce will terminate, and noting that every time reduce is called with a non-zero array length it returns reduce called with that array length decremented by one. Then, since repeatedly subtracting one from a positive number eventually reaches zero (at which point the recursion halts) you're guaranteed that your reduce function halts.
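The termination argument above can be sketched in Python (Idris would verify this mechanically; here the measure is only documented in a comment):

```python
def reduce(f, acc, xs):
    """Structurally recursive fold. Termination measure: len(xs)
    strictly decreases on every recursive call, and the recursion
    stops at length zero -- the same shape of argument a totality
    checker accepts."""
    if not xs:                               # length 0: terminate
        return acc
    return reduce(f, f(acc, xs[0]), xs[1:])  # recurse on a shorter list

total = reduce(lambda a, b: a + b, 0, [1, 2, 3])
```

The point is that "repeatedly subtracting one from a non-negative length reaches zero" is a proof obligation Idris discharges at compile time, whereas here it lives only in the docstring.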

EDIT: I guess you are right in the strict sense that languages like Idris aren't Turing complete. But they still can do a huge number of computable and useful things (and most of the things you'd want other languages to do), so I feel like "Turing complete" is mostly a semantic argument here.


In my mind it's more like an acceptable upper bound on the provability of immutable and eternal transactions on some distributed machine. The downside of unexpected exploits far outweighs the upside of unlimited flexibility. Bitcoin chose a non-Turing-complete script for its transactions, and the short history of cryptocurrency has already shown that to be much more secure than the infinite possibilities of the Ethereum VM. Time will tell, but I personally expect "shocking" exploits of Ethereum smart contracts to continue ad infinitum.

Perhaps there remains some unexplored region in the limited yet flexible yet verifiable space of languages that was until now uninteresting.
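A toy evaluator in that spirit (a sketch, not actual Bitcoin Script, with made-up opcode names): because there is no jump or loop opcode, execution is bounded by the script's length, so every script provably terminates.

```python
def run_script(script, stack=None):
    """Evaluate a loop-free, jump-free stack script. Each opcode
    runs at most once, so the step count is <= len(script)."""
    stack = list(stack or [])
    for op in script:
        if op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "EQUAL":
            b, a = stack.pop(), stack.pop()
            stack.append(a == b)
        else:
            stack.append(op)   # anything else is a pushed literal
    return stack

# "2 3 ADD 5 EQUAL" -- the classic puzzle-script shape.
final_stack = run_script([2, 3, "ADD", 5, "EQUAL"])
```

Limited expressiveness buys a blanket termination guarantee; the open question in the comment above is how much expressiveness you can add back while keeping scripts this easy to verify.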


My point is that the halting problem doesn’t prevent you from analyzing any particular program for exploits.


I wasn't claiming that it is not possible to analyze a program to detect known exploits. Rather, that it is extremely probable that another exploit will be found after the last one has been patched, and it is impossible to prove otherwise.

The halting problem doesn't prevent playing whack-a-mole against previous exploits, but it ensures that proving more exploits do not exist is impossible. That's why I think a turing-complete language for smart contracts is a poor choice.


Program 1:

  exit
Program 2:

  while 1: pass
The first one halts. The second one does not. It's possible - trivial - to analyze each of them, but that doesn't tell you anything about whether all programs can be analyzed by one procedure.


This is not true. There are sound static analyses. The problem is that they have oodles of false positives.


What ever happened with the frozen funds caused by the Parity multi-sig wallet bug? Is there a plan to unfreeze them somehow?



What progress is being made though? This was a huge amount of money that was frozen. I would have thought something would have been done by now.


At the moment there are no plans. There were some attempts to design an "upgrade" that would make the funds recoverable while benefiting the platform, but they had dangerous side effects. A lot of people opposed a targeted recovery fork, for reasons including:

- It's not such a large amount that it's a systemic risk.

- The hack was arguably enabled by negligence; the contract was changed after its last security audit, hacked, changed again and still didn't get a new security audit, and only after that the funds were frozen. Strong incentives to be more careful are probably good. Forking every time somebody's negligent could get messy.

- The DAO hack involved an attack that was new to most people in the community, and even the tutorial code on ethereum.org was vulnerable to similar hacks. These hacks were more in the nature of simple oversights, enabled by overly complicated code. Good auditors would probably have found them.

- The largest loss of funds was to the entity that made the contract (or a related entity anyway). That entity also said they had plenty of other money for their project.

- Most of the remaining losses were to ICOs, who should have gotten competent advice to avoid this contract (given the first hack and lack of audit), and who've demonstrated fundraising ability so could conceivably get bailed out by their own investors.

- Despite heavy criticism from certain quarters about Ethereum's supposed lack of immutability (after the DAO hack), immutability actually is a strong community value. Some people supported the DAO fix on the grounds that it was early days, but feel that the network is more mature now.


I heard someone say that smart contracts don't eliminate the need for a trusted party, they just make that trusted party a program, since most people don't fully understand what these contracts are saying. At least when you get a lawyer to write up a contract, the lawyer is usually familiar with the law and with how similar contracts are interpreted by the court system.


The timeline for the first one is just sad.


1. Front-running has nothing to do with TOCTOU/race conditions. The real race-condition example would actually be the reentrancy attack.

2. #9 has nothing to do with "off-chain issues". "Off-chain" has a very clear meaning and refers to layer-2 scaling solutions. There's no need to call a website bug an "off-chain issue".



