Hacker News

The SWC benchmark is not an effective way to test bug detection tools. It contains only contrived examples that are not representative of vulnerabilities in real code. As such, it fails to measure important outcomes like false positives, which quickly lead developers to uninstall a tool, and it rewards "noisy" tools that simply alert on everything.
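To make the "rewards noisy tools" point concrete, here is a minimal sketch (file names and numbers are invented for illustration): a detector that flags everything scores perfectly on a benchmark built only from known-vulnerable snippets, and its real weakness only shows up once safe, patched code is included.

```python
# A "detector" that simply flags every contract as vulnerable.
def always_alert(source_path):
    return True

# A benchmark built only from known-vulnerable snippets: every
# ground-truth label is True, so the noisy tool looks perfect.
positives_only = [("reentrancy.sol", True), ("tx_origin.sol", True)]

# A benchmark that also includes safe, patched code: now false
# positives count against the tool.
with_negatives = positives_only + [
    ("reentrancy_patched.sol", False),
    ("safe_math.sol", False),
]

def detection_score(tool, cases):
    # Fraction of cases where the tool's verdict matches ground truth.
    return sum(tool(src) == label for src, label in cases) / len(cases)

print(detection_score(always_alert, positives_only))  # 1.0
print(detection_score(always_alert, with_negatives))  # 0.5
```

The same tool drops from a perfect score to coin-flip accuracy the moment negative examples exist, which is exactly the failure mode a positives-only benchmark can never surface.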

It would be more effective to test bug detection tools against a database of real smart contract code with precisely identified vulnerabilities and example patches. Any benchmark must contain more than simple tests: it must approximate real software with enough complexity to properly stress tools for automated bug discovery.
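One hypothetical shape for an entry in such a database (all field names here are invented, not from any existing benchmark): each case pairs real contract code with a ground-truth label, a precise flaw location, and an example patch, so the patched version can double as a false-positive test.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical schema for one benchmark case; fields are illustrative.
@dataclass
class BenchmarkCase:
    source_path: str               # real-world contract code, not a toy snippet
    vulnerable: bool               # ground truth: is there a flaw here?
    flaw_location: Optional[str]   # precise function/line of the vulnerability
    patch_path: Optional[str]      # example fix; the "after" code tests FPs

# A vulnerable contract and its patched counterpart form a natural pair:
buggy = BenchmarkCase("crowdsale.sol", True, "refund()", "crowdsale_fixed.sol")
safe = BenchmarkCase("crowdsale_fixed.sol", False, None, None)
```

A tool is then scored on both halves of each pair: it must flag the flaw in the original and stay quiet on the patch.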

There is more discussion of what a good bug detection benchmark looks like in our writeup on our Challenge Sets from DARPA (https://blog.trailofbits.com/2016/08/01/your-tool-works-bett...) and in "Not So Smart Contracts" (https://github.com/crytic/not-so-smart-contracts), a similar effort we have been working on for smart contracts.

You'll note that ETH Zurich and ChainSecurity came up with their own benchmark that meets the description above when they evaluated VerX in the paper I linked above. I'm eagerly awaiting the release of those test cases, since they'll help us improve many of our tools.



