
As with most processes, the dilemma with code reviews is in figuring out how they impact your team and your organization.

In a huge org with thousands of engineers, one already burdened by hours per day of interruptions and process overhead, and release runways that already involve six stamps of bureaucracy, mandatory code reviews have very little downside (it's in the noise) but highly variable return (many people are just droning under the weight of process). The org loses little by mandating them, but only certain teams will see much value from them.

On the other extreme, a startup with five engineers will get backlogged with reviews (which then get shortchanged) because everybody is under pressure either to stay in their high-productivity flow or to put out some pressing fire. The reviews probably could catch issues and share critical knowledge very regularly, but the org pays a pronounced penalty for the overhead and interruptions.

People long for "one size fits all" rules, crafting essays and publishing research papers to justify them, but what's actually right is often far more idiosyncratic.



I don't disagree with the idea that "it depends" but for me, code review has generally worked better with lower overhead in the "startup with five engineers" type organisation. Can I ask you some follow-up questions on your experience in reviewing and receiving reviews? If so, send me an email at hn@xkqr.org!



