Hacker News: dbolgheroni's comments

Code review is another sacred process that seems too good not to have, but many teams use it as a "we care about quality" stamp when in fact they don't. It gets used for nitpicking code style (important, but not the main reason to have CR, and there are tools for that), while reviewers leave comments like "LGTM" and approve whatever arrives in the pull request anyway.


I've not yet seen code review implemented well anywhere I've worked. It's not really considered "real work" (it may result in zero lines of code changed), and it takes time to properly read through code and figure out where the weaknesses might be. I just end up being forced to skim for anything obvious and merge, because there isn't enough time to review the code properly.


As a manager, code review has two benefits that typically matter to me: (a) cost: it's cheaper to fix a defect that hasn't shipped (reading tests for missing cases is a useful review, in my experience); (b) bus factor: making sure someone else has at least a passing familiarity with the code. There are also ancillary (and somewhat performative) benefits like compliance: your ISO 27001 or SOC 2 change control processes likely require a review.

It's hard, though, to keep code reviews from turning into style and architecture reviews. Code reviewing for style is subjective. (And if someone on the team regularly produces very poor quality code, code review isn't the vehicle for fixing that.) Code reviewing for architecture is expensive; settle on a design before producing production-ready code.

My $0.02 from the other side of the manager/programmer fence.


Out of interest, how are you using code reviews to be ISO-27001 compliant?


ISO 27001's change management process requires that [you have and execute a change management policy that requires that] changes are conducted as planned, that changes are evaluated for impact, and that they are authorized. In my experience, auditors will accept peer review as a component of your change management procedure and a meaningful contributor to meeting these requirements.

"All changes are reviewed by a subject matter expert who verifies that the change meets the planned activity as described in the associated issue/ticket. Changes are not deployed to production environments until authorized by a subject matter expert after review. An independent reviewer evaluates changes for production impact before the change is deployed..."

If you are doing code review already, might as well leverage it here.


Code review where I've worked seems, in practice, to be either rubber-stamping or back-scratching. Not once have I felt the need for it. If people are unsure about a change, they usually just ask.


If teams care about each other’s code, they ought to collaborate on its design and implementation from the start. I’ve come to see code reviews (as a gate at the end of some cycle) as an abdication of responsibility and the worst possible way to achieve alignment and high quality. Any team that gets to the end of a feature without confidence that it can be immediately rolled out to create value for users has a fundamentally flawed process.


> they ought to collaborate on its design and implementation from the start

That's exactly right. After that process, it comes down to trusting your coworkers to execute capably. And if you don't think a coworker is capable, say so (or, if they're junior, more prudently hand them the simpler tasks; perhaps review their code behind their back and let it go if the code is "valid", even if it's not the Best Way™ in your opinion).


I installed VSCode/Codium from flatpak to get a usable debugger for a Python project I'm working on. After some time tweaking VSCode/Codium trying to get the debugger to work, I realized flatpak itself could be the problem. After another considerable amount of time trying different flatpak permissions, I realized this was not a good use of my time. I installed the same packages from snap, and everything worked fine.
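For anyone who wants to try the permissions route before giving up, this is the kind of thing I was fiddling with. A rough sketch; I'm assuming the app ID is com.vscodium.codium (check yours with `flatpak list`), and whether it actually fixes the debugger depends on your setup:

```shell
# Show the permission overrides currently in effect for the app
flatpak override --user --show com.vscodium.codium

# Grant access to the home directory and the host's /tmp, which
# debug adapters often need for sockets and temporary files
flatpak override --user --filesystem=home --filesystem=/tmp com.vscodium.codium

# Undo all user overrides if they don't help
flatpak override --user --reset com.vscodium.codium
```

Flatseal is a friendlier GUI for the same overrides, if you'd rather not do this on the command line.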


The emacs flatpak is just a long and painful road leading nowhere.


Flatpak is far better suited to applications than to system tools, e.g., Chrom{e,ium}, because of the sandboxing.


That's what many OEMs have been doing for decades, and it's exactly what many SDV (software-defined vehicle) efforts have been trying to get rid of, since integrating many different products from many different manufacturers is slow, let alone iterating on and designing new features.

Related to CAN: the bus is standard, but the thing is, CAN is just a bus, not an application protocol. There are many ways two ECUs (a vehicle's modules) can end up talking in incompatible ways.
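A contrived sketch of what "incompatible" means in practice: the frame format is standardized, but the meaning of the payload bytes isn't. Here the signal name, byte order, and scaling are hypothetical, chosen just to show two ECUs decoding the same bytes differently:

```python
import struct

# The same 8-byte CAN frame payload, as seen on the bus
payload = bytes([0x12, 0x34, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00])

# ECU A expects a big-endian 16-bit engine-speed signal, 0.25 rpm per bit
rpm_a = struct.unpack_from(">H", payload)[0] * 0.25

# ECU B expects the same signal little-endian, 1 rpm per bit
rpm_b = struct.unpack_from("<H", payload)[0] * 1.0

print(rpm_a)  # 1165.0
print(rpm_b)  # 13330.0 -- same bytes, wildly different value
```

This is why DBC files (or AUTOSAR system descriptions) exist: both sides have to agree out-of-band on what each bit means, because CAN itself doesn't say.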


Stefan Sperling does great work on the OpenBSD side.


It's possible to create a custom network for libvirt, but you have to add a static route in the router for the other hosts in your LAN to see the VMs.

Using virsh, you can dump the default network with net-dumpxml (this is the default bridge libvirt creates), modify it, and create another network from it. Add the modified file with net-create (non-persistent) or net-define (persistent).

This way the VMs can participate in the LAN and, at the same time, the LAN can see your VMs. It works with wifi and doesn't depend on workarounds for bridging wifi and ethernet. Debian has a wiki entry on how to bridge with a wireless NIC [0], but I don't think it's worth the trouble.

[0] https://wiki.debian.org/BridgeNetworkConnections#Bridging_wi...
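Roughly, the steps above look like this. The network name "lan0", the bridge name, and the subnet are placeholders; pick values that don't clash with your existing networks:

```shell
# Dump the stock network libvirt creates as a starting point
virsh net-dumpxml default > lan0.xml

# Edit lan0.xml: change <name> to lan0, pick an unused <bridge name=...>,
# delete the <uuid> and <mac> lines (libvirt regenerates them), and set a
# fresh <ip address=...> subnet, e.g. 192.168.100.1/255.255.255.0

virsh net-define lan0.xml   # persistent (use net-create for non-persistent)
virsh net-start lan0
virsh net-autostart lan0

# Then, on the router: add a static route for the new subnet, e.g.
#   192.168.100.0/24 via <this host's LAN address>
# so other LAN hosts know to reach the VMs through your machine
```

Guests attached to the lan0 network then get addresses in the new subnet, and the static route makes them reachable from the rest of the LAN.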


Thanks, now I remember I got stuck there because the router in question does not allow for custom routes.

But why do you duplicate the default bridge? Wouldn't adding a route in the router + default bridge be enough for this setup to work?


You can just use the default bridge, but still have to add a static route in the router.


Not supported. It can't be anything.


Bad if you have Nvidia, but it works fine with Intel and AMD, even on new hardware.


The last time I did embedded Linux software professionally, the target chip had an ARM Mali Midgard GPU, and I consumed the GLES 3.1 GPU API.


No, FreeBSD has jails, but k8s uses different container runtimes for OCI containers (containerd, CRI-O, Docker Engine, etc.).


Besides Larry Finger, another Linux kernel developer passed away this week: https://lwn.net/Articles/979617/


This was posted here too, but unfortunately didn't gain a lot of traction (maybe because the title doesn't mention who he was?): https://news.ycombinator.com/item?id=40815468


Some points: 1) A tool automatically generating thousands or millions of patches won't scale, since they can't all be reviewed by developers; 2) a tool could automate patch generation, but it won't automate running tests to check whether the patch works or whether any regressions were introduced; 3) https://marc.info/?l=openbsd-tech&m=171817275920057&w=2


I'm afraid I didn't translate my sarcasm into text.

