neandrake's comments (Hacker News)

Git’s rerere functionality should address this, right?

https://git-scm.com/book/en/v2/Git-Tools-Rerere


I'm not a heavy jj user. But as I understand it, git-rerere is bolted on top and not well integrated. It just stores resolutions locally on each machine in .git/rr-cache. You need extra steps to share resolutions between contributors. https://stackoverflow.com/questions/12566023/sharing-rerere-...

In jj, resolutions are first-class objects—treated the same as any normal commit, pushed/pulled like normal, etc.
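To make the contrast concrete, here is a minimal sketch of how rerere is enabled and where its state lives (throwaway repo and paths are illustrative; the `.git/rr-cache` location is from the git docs):

```shell
# rerere is opt-in and per-clone; recorded resolutions never travel with pushes.
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git config rerere.enabled true    # start recording conflict resolutions
git config rerere.enabled         # prints: true
# Resolutions are stored here, outside the object database,
# which is why sharing them between contributors needs extra steps:
echo "local-only store: .git/rr-cache"
```

Because that cache sits outside the commit graph, it is invisible to `git push`/`git pull`, unlike jj's in-commit conflict state.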


rerere is indeed bolted on.

Linus doesn’t want people who he pulls from to back-merge, i.e. to merge at “random points” from him into their trees to keep up to date. But what about merge conflicts? Then he has to deal with them. So then he might begrudgingly ask them to back-merge right before making a pull: a real zig-zag pattern.

First-class conflicts would solve all that I think.


In some cases, yes, but I think the way jj handles conflicts is easier to follow. You can see the conflict resolution in `jj diff` and you can rebase it like a regular commit. rerere's state is harder to understand, I think. See https://github.com/martinvonz/jj/issues/175#issuecomment-107... for some more discussion.


In theory yes. In practice I’ve had it enabled since it was first released and it helps… some?


I don’t even prefer it for indicating that something can return null. In most cases the Option API is more tedious and awkward than a null check, and there are existing annotations that can flag nullable return types for documentation. The only places I’ve found Option helpful are where it aids method composability, or in some hashmaps as a simple way to distinguish between “lookup not previously tried” vs “lookup resulted in no value” vs “lookup resulted in a value”. That last one is a hack and looked down upon because the Option value itself is null, but I view it the same as `Boolean` being nullable: as long as the details are encapsulated it’s not too problematic.
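The three-state hashmap idea can be sketched roughly like this (class and method names are made up for illustration; this is one way to encapsulate the pattern, not a standard API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Three states, encoded by the map plus the Optional:
//   key absent             -> lookup not previously tried
//   key -> Optional.empty  -> lookup tried, no value found
//   key -> Optional.of(v)  -> lookup tried, value found
class LookupCache {
    private final Map<String, Optional<String>> cache = new HashMap<>();

    void recordMiss(String key)              { cache.put(key, Optional.empty()); }
    void recordHit(String key, String value) { cache.put(key, Optional.of(value)); }

    boolean wasTried(String key) { return cache.containsKey(key); }
    // Returns null when the lookup was never tried -- the "hack" part.
    Optional<String> result(String key) { return cache.get(key); }
}
```

Keeping the null-returning `result` private to one class is what makes the hack tolerable: callers only ever see the encapsulated three-way answer.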


I mean, the same goes for all crime being a people problem. The anti-cheat keeps honest people honest, as they say.


> I mean, the same goes for all crime being a people problem.

Yes, and technical solutions are usually a bad sign. How nice is the neighborhood that locks pharmacy razors vs the one that doesn't?


> The technique was called product-line engineering.

Thank you for adding this bit of info! We’ve been using this model for a while but I hadn’t come across a label for it. Now knowing this, I’m coming across a ton of great info about managing/sustaining these product families.


I think you misread the original post. They’re referring to LLVM the compiler toolchain, not LLMs.


Great write up. I read through maybe half of the points and skipped to the end, bookmarked to come back and finish later.

I’d like it if the author could elaborate a bit on the types of projects that influenced these guidelines: roughly what constitutes a “large” project, were the projects open source, private/commercial, or a mix of both, does a web/service platform vs. a desktop/local platform have an impact on these, etc.


Their website just recently got a makeover, which might be why this was posted.

https://github.com/apache/guacamole-website/pull/138

Great project


I work with software medical devices. Another aspect here, aside from 510(k), is that compliance is required, or effectively required, with standards like IEC 62304 (medical device software life cycle processes) and ISO 13485 (quality management systems for medical devices).

For the scenario you describe, the piece that’s missing is risk analysis, a requirement. In preparation to release to market they must evaluate the probability and severity of the button not doing X or doing X incorrectly, and develop mitigations if the risk is unacceptable. What you ask for - documentation and specs - exists at some level, but the manufacturer has to define what level is necessary for them. I could see an argument against the manufacturer deciding this for themselves, though it’s likely impractical to do otherwise.

For software medical devices that have hundreds of transitive dependencies it’s not feasible to go to the level you’re describing. Some management of dependencies is necessary, but treating them as a black box - with quality/test management and risk analysis of the black box - is what the current system defines as a reasonable trade-off. Again, I could see arguments for changing this, though for many manufacturers the EU has instituted stricter regulation in the past ~5 years, which has been a bit painful but overall probably a good thing.

Today, one of the aspects of medical device development under tighter scrutiny is cybersecurity. It’s pretty painful right now. Previously there was not much required related to cybersecurity - obviously not ideal - but the pendulum has swung to the other end of the spectrum, making it a significant burden. We’ll see; most of it is adopting new processes, which is always painful and slows down progress at first. After the initial hump it should ease, and ultimately be better for patient care and medical institutions in the long run.


Agree with you here.

Wanted to echo " It's (cybersecurity) pretty painful right now".

The FDA just implemented new requirements. Basically, they require penetration testing on all new medical devices. The issue is they don’t have the in-house expertise to know the technical details, and they haven’t defined the tests, etc. Additionally, there isn’t yet an ecosystem of partners and service providers to compete in providing those penetration testing services.

Pragmatically what it means for folks trying to get a device cleared at the current moment:

You need to send your device to the one and only penetration testing house that does this for the FDA now and let them try to physically hack into your medical device. You have to make tampering impossible, and evident if someone tampers with it in any way. This is in addition to all the software security work we need to do.

Imagine if you were making computer monitors. One day you are suddenly required to make it so a technical expert cannot open the monitor up using specialized tools.


From a submission standpoint, as I write this, FDA seemingly cares more about cybersecurity than your medical device actually demonstrating safety and efficacy within intended use. The time that review teams have to review any given device has stayed the same, but fear-driven, heavy-handed cybersecurity regulations (which must be followed) have been added to the mix.


Respectfully, it sounds like the FDA is trying to implement requirements for manufacturers to do what they should've been doing all along. The shitty flipside to my point is that market forces pushed manufacturers to cut costs and externalize the infosec risk onto the patients. The secure products aren't interesting to healthcare providers.

I'm, admittedly, a bit salty because I recently looked at a healthcare device that I was prescribed and found evidence that my data is likely being trivially exposed by anyone who wants to look. I can't verify this because it's very likely illegal, and I don't feel comfortable reporting it to the device vendor for fear of being accused of hacking. If there's a way to report it to the FDA, I'd be thrilled -- but I don't know what that looks like.


Did they provide you with Instructions for Use as a lay person? This might contain the legalese on what is or isn't permissible.

The company is required to have a complaint handling process, such that you making them aware of these vulnerabilities would mean they have to at least handle the feedback.

Maybe your findings can be rephrased in a way that doesn't require you to show how vulnerable their servers are, but instead says you suspect they are unsafe.


True. The issue is they aren't specific about it. One penetration testing house may pass a device when another doesn't.

They haven't solved the issue. They've highlighted it and left it up to chaos to solve it.


I'm more surprised that medical devices were not previously required to be tamper evident.


They are, and it's generally good practice. The difference is now they spend much more time and effort on the penetration testing.

An example: in the past it would be OK to use security-bit screws for this. Yes, you can buy the bits online, but it was at least one layer of perceived security.

This doesn't fly anymore.

The real challenge is they implemented these new requirements on devices that were already in the submission process. Also, these things aren't written down anywhere in standards, etc., so you can't know them ahead of time when you design. You have to just wait until the penetration testing and find out.

Ultimately the new rules aren't the challenge, it's the fact that you don't get to know them when you start and finish the design, you find out later.


Isn't the intention that everyone submitting their devices for approval have done in-house penetration tests extensively? Or at least laid out specific claims as to what it can endure and what it cannot?

The third party test seems to be just the last verification stage to reassure the FDA the company is not making unsupportable claims.


Not really because these things aren't defined.

There is no definition or standard to which you would do your in-house tests. It's not like other things where you design to comply with ISO-whatever and then test to that.

Here the standard so to speak is defined by the penetration test itself.

An example from safes. No safe is impenetrable; safes are spec'd by the number of minutes they resist a tool attack. Then, when a safe company goes to UL or whoever to certify the safe, UL technicians get the best commercially available tools and try their best to break into the safe, timing themselves. If it takes them longer than the spec, it passes.

Here there is no spec. There is no defined time. There is no standard. It's just up to what you can get the penetration test house to agree to write.


But the company has to submit in writing an application laying out their claims?

I'm not really sure why the lack of such a standard definition prevents people from writing that down and then being willing to back up their words?

I can see a time efficiency argument, cost reduction argument, etc., for standard definitions here, but at the end of the day, they're not necessary.

The companies that offer the most credible products, verified via third party testing, get FDA approval. Everyone else gets weeded out.


>"But the company has to submit in writing an application laying out their claims?"

How so?


I'm saying the written submission doesn't contain this, and even if it did, there is no one reviewing it who actually knows the technical details well enough to provide meaningful oversight.

It's similar to that quote from a Boeing insider that came to light: "they (Boeing airplanes) are designed by clowns supervised by monkeys".

Note - these are not my words or opinion, just a quote from another guy


You don’t think they contain… written claims?

I’m not saying they contain foolproof technical specs, but broad claims certainly.

If you genuinely refuse to believe this is possible or is currently done by some fraction of folks, then I guess I’ll leave it at that.


Are you unsure about the meaning of submitting a written application?

Or is there some other confusion here?


Please see my reply immediately above.


I thought most of the reasons - modifications to git not being welcomed - were already known to anyone who went looking, but seeing the quotes from ex-Facebookers is interesting.

I’d be more interested in why Facebook ditched Mercurial for their own tool. Once they left, more and more notable projects left too, though for more reasons than that alone. The Mercurial project is still under development, but it feels like a ghost town. It’s a shame, as I much prefer Mercurial to git, but it’s difficult to choose as an option when the integrations and tooling are lacking, and new tool development is less likely to integrate with Mercurial when it has so few resources behind it.


It’s probably not part of their repo, but exists alongside it, possibly gitignored. Either way, in the same workspace the IDE will have to invalidate a lot of it when you switch branches, and again when you switch back. Separate clones, or worktrees, mean the caches don’t get invalidated: your IDE just sees each one as a separate project, each with its own cache, regardless of where that cache exists.
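A minimal sketch of the worktree setup described above (directory and branch names are made up; `git worktree` itself is a standard command):

```shell
# One extra directory per long-lived branch: the IDE treats each as its own
# project, so switching branches in one never invalidates the other's cache.
tmp=$(mktemp -d) && cd "$tmp"
git init -q main-clone && cd main-clone
git config user.email you@example.com && git config user.name you
git commit -q --allow-empty -m "init"
git branch feature
git worktree add -q ../feature-wt feature   # separate checkout, shared object store
git worktree list
```

Both checkouts share one object database, so this is much cheaper than a second full clone while still giving each branch its own working directory.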

