
Maybe the review tooling you use isn't good enough? At Google it shows all test runs related to the change, it shows linter findings inline in the file diffs, you automatically associate the change with the tasks it is meant to solve, and you write a ~100-word description explaining what it does and why.

If the change adds tests, uses proper naming, solves the issue it claims to solve in a reasonable way, doesn't touch anything it doesn't claim to solve, and none of the tools complain, then you just accept it after checking that the tests are reasonable and the code doesn't look strange. It takes a while to get new engineers to do this consistently before review, and review itself takes time, but once you know how to write changes that are easy to review, reviews are very quick.

Edit: Btw, optimizing the code you write to reduce review time is another aspect of productivity. It isn't productive to create a lot of extra work for others, so you practice until everything is as obvious as possible. And as people get used to your changes being easy to review, it gets even quicker.



It's less about setting up for the review and more about getting teammates to review the PRs promptly.

I work remotely, so my (small) team mostly works async even though we're in similar time zones. I doubt pinging them for 10 PR reviews in a single day would go well.

Async + people trying to do their own stuff + 10 interruptions is not a great combo. Comments/discussion on a PR can grind things to a halt as well.

I completely agree with everything you're saying about making PRs easy to review, though. My team already tends to do those things, thanks to our manager.


Could be about incentives, then. At Google the code reviews you do are tracked, so doing many looks good in your performance reviews, and some people try to review quickly in order to get more reviews. If you don't get any credit for doing code reviews, I can see it being very hard to get people to do them.

Also, as long as you have a pending review assigned to you, it shows up as a "you have stuff to do" icon in the Google tools until you respond to it, just like a message. I guess that helps as well.


Interesting. We don't track code reviews as far as I'm aware, and we use GitHub so you just get an email if you're tagged as a reviewer.

I would say it's not uncommon to have to wait a day for reviews.


Then the first thing to do if you want to improve that is to track code reviews: make it visible which changes each person reviewed, not just which they submitted, and use that data when talking about performance. If one person does all the reviews, they should get a ton of credit, since that is hard, important work. Typically lead programmers do more reviews and juniors write more changes, so doing a lot of reviews is associated with seniority and something to strive for rather than avoid. I bet your manager would be interested in that information too, so maybe you just need to suggest it to them.
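If you're on GitHub, that data is already in the API. Here's a rough sketch of tallying submitted reviews per person over recent PRs (assuming Python with the requests library and a GITHUB_TOKEN env var; the owner/repo names are placeholders):

    import os
    from collections import Counter

    import requests

    # Placeholders: replace with your own org/repo; the token needs repo scope.
    OWNER = "your-org"
    REPO = "your-repo"
    API = "https://api.github.com"
    HEADERS = {"Authorization": f"token {os.environ['GITHUB_TOKEN']}"}

    def recent_pull_numbers(limit=50):
        """Fetch the most recently updated closed PRs."""
        resp = requests.get(
            f"{API}/repos/{OWNER}/{REPO}/pulls",
            headers=HEADERS,
            params={"state": "closed", "sort": "updated",
                    "direction": "desc", "per_page": limit},
        )
        resp.raise_for_status()
        return [pr["number"] for pr in resp.json()]

    def review_counts():
        """Tally submitted reviews per reviewer across recent PRs."""
        counts = Counter()
        for number in recent_pull_numbers():
            resp = requests.get(
                f"{API}/repos/{OWNER}/{REPO}/pulls/{number}/reviews",
                headers=HEADERS,
            )
            resp.raise_for_status()
            for review in resp.json():
                if review.get("user"):  # user can be null for deleted accounts
                    counts[review["user"]["login"]] += 1
        return counts

    if __name__ == "__main__":
        for login, n in review_counts().most_common():
            print(f"{login}: {n} reviews")

Obviously the raw count shouldn't be the metric itself, but it gives a manager the visibility to start the conversation.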


There's a huge problem with this. The incentive you're describing isn't to do good code reviews; it's to do a lot of code reviews. The two are not the same, and while they aren't mutually exclusive, by tracking this metric and rating people on it you create an incentive only for fast reviews. But you really want both: fast, and good and thorough.

I completely agree that senior and lead people should be doing more reviews, thereby helping everyone else and ultimately benefiting other programmers and the organization overall.


The incentives for reviewing code should be exactly the same as the incentives for writing code.

So the most important part is that the reviewer should be just as visible in all processes and tools as the writer of the code. That way both get recognition, and both share the blame when the code is bad or adds technical debt. And in performance reviews you don't just look at all the code they wrote; you also look at all the code they reviewed.

You are right that it is dumb to just count the number and say more is better, but the same goes for code submissions. They are very similar problems, so you can use the same process for both.


And that last part unfortunately is what happens in a lot of companies: they take something like the comment I replied to, "look at metric X", and then literally look at that metric and only that metric (or maybe that and two or three others), and there's your performance review.

You mentioned another metric, "code submissions", so say the number of reviews done is augmented by the number of PRs merged. You've just created another incentive to push out a lot of code fast.

After all, that's the point of the metrics, right? Have a few numbers that accurately describe the "value" of that employee. </sarcasm> No qualitative analysis needed; that would take time and money we don't have. Been there, had to do that (or write huge amounts of text explaining why I thought an employee was actually really good at their job, and that their PR count was low because they got the short end and worked on the hardest problems in the worst parts of our code base and did an amazing job, especially given they were new).

If that's not how it's actually done where you work, great!



