Hacker News | jagrsw's comments

I don't know much about corporations, but why are business plans needed at all? I mean, for EU citizens.

Banks (loans), immigration authorities, and investors can be interested, but their interests don't cover every corporation out there.


There’s absolutely no need to have a business plan to start a company in Germany. You need articles of incorporation, and they state a company purpose, but this can be something as simple as “do IT consulting”.

Obviously, having a credible plan helps if you try to convince banks to loan you money or any such thing, but the act of registering a company requires no such thing.


It's basically a proof of "most basic effort" that you're serious. You could probably note down some stuff on a single A4 and get it approved; it doesn't have to be a 40-page dossier.

Kind of like fizzbuzz: just something really simple and basic to get rid of the "easy scams" and so on.

Edit: So "easy scams" is probably the wrong term. I initially wrote "riffraff" because in my mother tongue that isn't so... disparaging, but what I meant was that it's used as a "bare minimum filter", basically.


That doesn't really sound like a barrier to the easy scams at all. It just sounds like something someone once thought would be a good idea and now everyone has to do it because that's the process.


ChatGPT, give me a convincing-sounding business plan for starting a business in Germany.

Done.


How would this get rid of easy scams?


To be fair, the existence of Surströmming [https://en.wikipedia.org/wiki/Surstr%C3%B6mming] is a valid casus belli. We aren't talking about food here - it's "haloanaerobic bacteria producing hydrogen sulfide in a pressurized vessel". An unregulated bio-weapons program hiding in plain sight.


> To stop me you'd have to compel me.

  Cow A: "That building smells like blood and steel. I don't think we come back out of there"
  Cow B: "Maybe. But the corn is right there and I’m hungry. To stop me, you'd have to compel me"
Past safety is not a perfect predictor of future safety.


The author has a knack for generating buzz (and making technically interesting inventions) :)

I'm a little concerned that no one (besides the author?) has checked the implementation to see if reducing the attack surface in one area (memory security) might cause problems in other layers.

For example, Filip mentioned that some setuid programs can be compiled with it, but it also makes changes to ld.so. I pointed this out to the author on Twitter, as it could be problematic. Setuid applications need to be written super-defensively because they can be affected by environment variables, file descriptors (e.g. there could be funny logic bugs if fd 1 or 2 is closed for a setuid app, and it then opens something and starts using printf() - think about it :)), rlimits, and signals. The custom modifications to ld.so likely don't account for this yet?

In other words, these are still teething problems with Fil-C, which will be reviewed and fixed over time. I just want to point out that using it for real-world "infrastructures" might be somewhat risky at this point. We need unix nerds to experiment with it first.

OTOH, it's probably a good idea to test your codebase with it (provided it compiles, of course) - this phase could uncover some interesting problems (assuming there aren't too many false positives).


I've been doing just that. If there's a way to break fil-c we're gonna find it.


Wishful thinking: Any possible chance that means you might make a Fil-C APE hybrid? It would neatly address the fact that Fil-C already needs all of its dependencies to also use Fil-C.


If you are really concerned you should do this and then report back. Otherwise it is just a mild form of concern trolling.


I checked the code, reported a bug, and Filip fixed it. Therefore, as I said, I was a little concerned.


Yes, but the fact that the author has a pretty good turnaround time for fixing bugs (I wish all open source projects were that fast) and listens to input belies the tone of your comment, which makes me come away with a negative view of the project, when in fact the evidence points to the opposite.

It's a 'damning with faint praise' thing and I'm not sure to what degree you are aware of it but I don't think it is a fair way to treat the author and the project. HN has enough of a habit of pissing on other people's accomplishments already. Critics have it easy, playwrights put in the hours.


I understand your point, and I have the utmost respect for the author who initiated, implemented, and published this project. It's a fantastic piece of work (I reviewed some part of it) that will very likely play an important role in the future - it's simply too good not to.

At the same time, however, the author seems to be operating on the principle: "If I don't make big claims, no one will notice." The statements about the actual security benefits should be independently verified - this hasn't happened yet, but it probably will, as the project is gaining increasing attention.


> "If I don't make big claims, no one will notice."

I am making big claims because there are big claims to be made.

> The statements about the actual security benefits should be independently verified - this hasn't happened yet

I don't know what this means. Folks other than me have independently verified my claims, just not exhaustively. No memory safe language runtime has been exhaustively verified, save maybe Spark. So you're either saying something that isn't true at all, or that could be said for any memory safe language runtime.


To clarify the position, my concern isn't that the project is bad - it's that security engineering is a two-front war. You have to add new protections (memory safety) without breaking existing contracts (like ld.so behavior).

When a project makes 'big claims' about safety, less technical users might interpret that as 'production ready'. My caution is caused by the fact that modifying the runtime is high-risk territory where regressions can introduce vulns that are distinct from the memory safety issues you are solving.

The goal is to prevent the regression in the first place. I'm looking forward to seeing how the verification matures and rooting for it.


> without breaking existing contracts (like ld.so behavior)

If you think that Fil-C regresses ld.so then get specific. Otherwise what you’re doing is spreading fear, uncertainty, and doubt for no good reason.

Fil-C has always honored the setuid behavior provided by ld.so. There was a bug - since fixed - that the Fil-C runtime called getenv instead of secure_getenv.

> When a project makes 'big claims' about safety, less technical users might interpret that as 'production ready'.

Fil-C is production ready and already has production users.


I would suggest you re-read your comment in a week or so to see if by then you are far enough away from writing it to see how others perceive it. If it wasn't your intention to be negative, then maybe my non-native English is the cause of this, but even upon re-reading it, that's how I perceive it.

- You start off by commenting that the author has a knack for self-promotion and invention. My impression is that he's posting a status report for a project that is underway.

- You follow this up with something that you can't possibly know and use that to put the project down, whilst at the same time positioning yourself as a higher-grade authority because you are apparently able to see something that others do not - effectively doing that which you accuse the author of: self-promotion.

- You then double down on this by showing that it was you who pointed out to the author that there was a bug in the software, which in the normal course of open source development is not usually enough to place yourself morally or technically above the authors.

- You then in your more or less official capacity of established critic warn others to hold off putting this project to the test until 'adults' have reviewed it.

- And then finally you suggest they do it anyway, with your permission this time (and of course now amply warned) with the implicit assumption that problems will turn up (most likely this will be the case) and that you hope 'there won't be too many false positives', strongly suggesting that there might be.

And in your comment prior to this reply you do that once again, making statements that put words in the mouth of the author.


You're right, my tone was off.


It's difficult for me to have a positive opinion of the author when he responds with dismissal and derision to concerns others have raised about Fil-C and memory safety under data races.

The fact is that Fil-C allows capability and pointer writes to tear. That is, when thread 1 writes pointer P2 to a memory location previously holding P1, thread 2 can observe, briefly, the pointer P2 combined with the capability for P1 (or vice versa, the capability for P2 coupled to the pointer bits for P1).

Because thread 2 can observe a mismatch between a pointer and its capability, an attacker controlled index into P2 from thread 2 can access memory of an object other than the one to which P2 points.

The mismatch of pointer and capability breaks memory safety: an attacker can break the abstraction of pointers-as-handles and do nefarious things with pointers viewed instead as locations in RAM.

On one hand, this break is minor and doesn't appear when memory access is correctly synchronized. Fil-C is plenty useful even if this corner case is unsafe.

On the other hand, the Fil-C author's reaction to discourse about this corner case makes me hesitant to use his system at all. He claims Java has the same problem. It does not. He claims it's not a memory safety violation because thread 1 could previously have seen P1 and its capability and therefore accessed any memory P1's capability allowed. That's correct but irrelevant: thread 2 has P2 and it's paired with the wrong capability. Kaboom.

The guy is technically talented, but he presents himself as Prometheus bringing the fire of memory safety to C-kind. He doesn't acknowledge corner cases like the one I've described. Nor does he acknowledge practical realities like the inevitability of some kind of unsafe escape hatch (e.g. for writing a debugger). He says such things are unnecessary because he's wrapped every system call and added code to enforce his memory model's invariants around it. Okay, is it possible to do that in the context of process_vm_writev?

I hope, sincerely, the author is able to shift perspectives and acknowledge the limitations of his genuinely useful technology. The more he presents it as a panacea, the less I want to use it.


> Because thread 2 can observe a mismatch between a pointer and its capability, an attacker controlled index into P2 from thread 2 can access memory of an object other than the one to which P2 points.

Under Fil-C’s memory safety rules, „the object at which P points” is determined entirely by the capability and nothing else.

You got the capability for P1? You can access P1. That’s all there is to it. And the stores and loads of the capability itself never tear. They are atomic and monotonic (LLVM’s way of saying they follow something like the JMM).

This isn’t a violation of memory safety as most folks working in this space understand it. Memory safety is about preventing the weird execution that happens when an attacker can access all memory, not just the memory they happen to get a capability to.

> He claims Java has the same problem. It does not.

It does: in Java, what object you can access is entirely determined by what objects you got to load from memory, just like in Fil-C.

You’re trying to define „object” in terms of the untrusted intval, which for Fil-C’s execution model is just a glorified index.

Just because the nature of the guarantees doesn’t match your specific expectations does not mean that those guarantees are flawed. All type systems allow incorrect programs to do wrong things. Memory safety isn’t about 100% correctness - it’s about limiting the fallout of incorrect execution to a bounded set of memory.

> That's correct but irrelevant: thread 2 has P2 and it's paired with the wrong capability. Kaboom.

Yes, kaboom. The kaboom you get is a safety panic because a nonadversarial program would have had in bounds pointers and the tear that arises from the race causes an OOB pointer that panics on access. No memory safe language prevents adversarial programs from doing bad things (that’s what sandboxes are for, as TFA elucidates).

But that doesn’t matter. What matters is that someone attacking Fil-C cannot use a UAF or OOBA to access all memory. They can only use it to access whatever objects they happen to have visibility into based on local variables and whatever can be transitively loaded from them by the code being attacked.

That’s memory safety.

> He doesn't acknowledge corner cases like the one I've described.

You know about this case because it’s clearly documented in the Fil-C documentation. You’re just disagreeing with the notion that the pointer’s intval is untrusted and irrelevant to the threat model.


> The kaboom you get is a safety panic

You don't always get a panic. An attacker who can get a program to access an offset he controls relative to P2 can access P1 if P2 is torn such that it's still coupled, at the moment of adversarial access, with P1's capability. That's dangerous if a program has made a control decision based on the pointer bits being P2. IOW, an attacker controlled offset can transform P2 back into P1 and access memory using P1's capability even if program control flow has proceeded as though only P2 were accessible at the moment of adversarial access.

That can definitely enable a "weird execution" in the sense that it can let an attacker make the program follow an execution path that a plain reading of the source code suggests it can't.

Is it a corner case that'll seldom come up in practice? No. Is it a weakening of memory safety relative to what the JVM and Rust provide? Yes.

You are trying to define the problem away with sleight of hand about the pointer "really" being its capability while ignoring that programs make decisions based on pointer identity independent of capability -- because they're C programs and can't even observe these capabilities. The JVM doesn't have this problem, because in the JVM, the pointer is the capability.

It's exactly this refusal to acknowledge limitations that spooks me about your whole system.


> An attacker who can get a program to access an offset he controls relative to P2 can access P1 if P2 is torn such that it's still coupled, at the moment of adversarial access, with P1's capability

Only if the program was written in a way that allowed for legitimate access to P1. You’re articulating this as if P1 was out of thin air; it’s not. It’s the capability you loaded because the program was written in a way that let you have access to it. Like if you wrote a Java program in a way where a shared field F sometimes pointed to object P1. Of course that means loaders of F get to access P1.

> That can definitely enable a "weird execution"

Accessing a non-free object pointed to by a pointer you loaded from the heap is not weird.

I get the feeling that you’re not following me on what „weird execution” is. It’s when the attacker can use a bug in one part of the software to control the entire program’s behavior. Your example ain’t that.

> Is it a corner case that'll seldom come up in practice? No. Is it a weakening of memory safety relative to what the JVM and Rust provide? Yes.

I don’t care about whether it’s a corner case.

My point is that there’s no capability model violation and no weird execution in your example.

It’s exactly like what the JVM provides if you think of the intval as just a field selector.

I’m not claiming it’s like what rust provides. Rust has stricter rules that are enforced less strictly (you can and do use the unsafe escape hatch in rust code to an extent that has no equal in Fil-C).


I think his argument is that you can have code like this:

  user = s->user;
  if(user == bob)
    user->acls[s->idx]->has_all_privileges = true;
And this happens:

  1. s->user is initialized to alice.
  2. Thread 1 sets s->idx to ((alice - bob) / sizeof(...)) and s->user to bob, but only the intval portion of the write lands; the capability still points to alice.
  3. Thread 2 executes the if, which succeeds, and then unexpectedly gives all privileges to alice, since the bob intval plus the idx points to alice, while the capability is still for alice.

It does seem a real issue although perhaps not very likely to be present and exploitable.

Seems perhaps fixable by making pointer equality require that capabilities are also equal.


I understand his argument.

Here are the reasons why I don’t buy it:

1. I’m not claiming that Fil-C fixes all security bugs. I’m only claiming that it’s memory safe and I am defining what that means with high precision. As with all definitions of memory safety, it doesn’t catch all things that all people consider to be bad.

2. Your program would crash with a safety panic in the absence of a race. Security bugs are when the program runs fine normally, but is exploitable under adversarial use. Your program crashes normally, and is exploitable under adversarial use.

So not only is it not likely to be present or exploitable, but if you wrote that code then you’d be crashing in Fil-C in whatever tests you ran at your desk or whenever a normal user tried to use your code.

But perhaps point 1 is still the most important: of course you can write code with security bugs in Fil-C, Rust, or Java. Memory safety is just about making sure a local bug doesn't result in control of arbitrary memory in the whole program. Fil-C achieves that key property here, hence it's memory safe.


> I’m only claiming that it’s memory safe and I am defining what that means with high precision

Do you have your definition of memory safety anywhere? Specifically one that's precise enough that if I observe a bug in a C program compiled via Fil-C, I can tell whether this is a Fil-C bug allowing (in your definition) memory unsafety (e.g. I'm pretty sure an out-of-bounds read would be memory unsafety), or if it's considered a non-memory-safety bug that Fil-C isn't trying to prevent (e.g. I'm pretty sure a program that doesn't check for symlinks before overwriting a path is something you're not trying to protect against). I tried skimming your website for such a definition and couldn't find this definition, sorry if I missed it.

I typically see memory safety discussed in the context of Rust, which considers any torn read to be memory-unsafe UB (even for types that don't involve pointers like `[u64; 2]`, such a data race is considered memory-unsafe UB!), but it sounds like you don't agree with that definition.


In my understanding the program can work correctly in normal use.

It is buggy because it fails to check that s->idx is in bounds, but that isn't a problem if non-adversarial use of s->idx is in bounds (for example, if the program is a server with an accompanying client and s->idx is always in bounds when coming from the unmodified client).

It is also potentially buggy because it doesn't use atomic pointers despite concurrent use, but I think non-atomic pointers work reliably on most compiler/arch combinations, so this is commonplace in C code.

A somewhat related issue is that, since Fil-C capabilities currently are only at the object level, such an out-of-bounds access can reach other parts of the object (e.g. an out-of-bounds access in an array contained in an array element can overwrite other elements of the outer array).

It is true though that this doesn't give arbitrary access to any memory, just to the whole object referred to by any capability write that the read may map to, with pointer value checks being unrelated to the accessed object.


If you set the index to `((alice - bob) / sizeof(...))` then that will fail under Fil-C’s rules (unless you get lucky with the torn capability and the capability refers to Alice).


Exactly. I agree that this specific problem is hard to exploit.

> Seems perhaps fixable by making pointer equality require that capabilities are also equal

You'd need 128-bit atomics or something. You'd ruin performance. I think Fil-C is actually making the right engineering tradeoff here.

My point is that the way Pizlo communicates about this issue and others makes me disinclined to trust his system.

- His incorrect claims about the JVM worry me.

- His schtick about how Fil-C is safer than Rust because the latter has the "unsafe" keyword and the former does not is more definitional shenanigans. Both Fil-C and Rust have unsafe code: it's just that in the Fil-C case, only Pizlo gets to write unsafe code and he calls it a runtime.

What other caveats are hiding behind Pizlo's broadly confident but narrowly true assertions?

I really want to like Fil-C. It's good technology, and something like it can really improve the baseline level of information security in society. But Pizlo is going to have to learn to be less grandiose and knock it off with the word games. If he doesn't, he'll be remembered not as the guy who finally fixed C security but merely as an inspiration for the guy who does.


All I’m really hearing is that this guy rubs you the wrong way, so you’re not going to give him the benefit of the doubt that you’d give to others.

I mean, maybe you’re right that his personality will turn everyone off and none of this stuff will ever make it upstream. But that kind of seems like a problem you’re actively trying to create via your discourse.


> Only if the program was written in a way that allowed for legitimate access to P1. You’re articulating this as if P1 was out of thin air; it’s not.

My program:

  if (p == P2) return p[attacker_controlled_index];
If the return statement can access P1, disjoint from P2, that's a weird execution for any useful definition of "weird". You can't just define the problem away.

Your central claim is that you can take any old C program, compile it with Fil-C, and get a memory-safe C program. Turns out you get memory safety only if you write that C program with Fil-C's memory model and its limits in mind. If someone's going to do that, why not write instead with Rust's memory model in mind and not pay a 4x performance penalty?


> that's a weird execution for any useful definition of "weird".

Weird execution is a term of art in the security biz. This is not that.

Weird execution happens when the attacker can control all of memory, not just objects the victim program rightly loaded from the heap.

> Your central claim is that you can take any old C program, compile it with Fil-C, and get a memory-safe C program.

Yes. Your program is memory safe. You get to access P1 if p pointed at P1.

You don’t get to define what memory safety means in Fil-C. I have defined it here: https://fil-c.org/gimso

Not every memory safe language defines it the same way. Python and JavaScript have a weaker definition since they both have powerful reflection including eval and similar superpowers. Rust has a weaker definition if you consider that you can use `unsafe`. Go has a weaker definition if you consider that tearing in Go leads to actual weird execution (attacker gets to pop the entire Go type system). Java’s definition is most similar to Fil-C’s, but even there you could argue both ways (Java has more unsafe code in its implementation while Fil-C doesn’t have the strict aliasing of Java’s type system).

You can always argue that someone else’s language isn’t memory safe if you allow yourself to define memory safety in a different way. That’s not a super useful line of argumentation, though it is amusing and fun.


You may define "memory safety" as you like. I will define "trustworthy system" as one in which the author acknowledges and owns limitations instead of iteratively refining private definitions until the limitations disappear. You can define a mathematical notation in which 2+3=9, but I'm under no obligation to accept it, and I'll take the attempt into consideration when evaluating the credibility of proofs in this strange notation.

Nobody is trying to hide the existence of "eval" or "unsafe". You're making a categorical claim of safety that's true only under a tendentious reading of common English words. Users reading your claims will come away with a mistaken faith in your system's guarantees.

Let us each invest according to our definitions.


> I will define "trustworthy system" as one in which the author acknowledges and owns limitations instead of iteratively refining private definitions until the limitations disappear.

You know about this limitation that you keep going on about because it’s extremely well documented on fil-c.org


[Woman walking on beach at sunset, holding hands with husband]

Voiceover: "Miracurol cures cancer."

[Couple now laughing over dinner with friends]

"Ask your doctor if Miracurol is right for you."

[Same footage continues, voice accelerates]

"In clinical trials, five mice with lymphoma received Miracurol. All five were cured. One exploded. Not tested in humans. Side effects include headache, itchiness, impotence, explosion, and death. Miracurol's cancer-free guarantee applies only to cancers covered under Miracurol's definition of cancer, available at miracurol.org. Manufacturer not responsible for outcomes following improper use. Consult your doctor."

[Couple walking golden retriever, sun flare]

Voiceover: "Miracurol. Because you deserve to live cancer-free."

Patient: "I exploded."

Miracurol: "That's extremely well documented on miracurol.org."


It’s really sad to see your posts on this thread. Fil-C is an incredible achievement and absolutely full of interesting technical details to dig into. I’m not a mod, but as a reader of the site and someone who takes a curious interest in the progress of Fil-C, can you please stop attacking its creator like this. It’s tedious, needlessly rude, and lessens the opportunity for the rest of us to actually learn something from an expert.


Facts are facts and exist independent of who discovers them. If you'd like to learn, the last thing you want to do is stop people poking at contradictions and pressure-testing claims. If Fil-C is really the "incredible achievement" you say it is, it can withstand scrutiny.


Wait, no, you just said

> I will define "trustworthy system" as one in which the author acknowledges and owns limitations

You can't then go on to complain that the author does document the limitations but considers the overall system good. Fil-C, by the definition you just espoused, is a "trustworthy system".


Sorry to intrude on the discussion, but I have a hard time grasping how to produce the behavior mentioned by quotemstr. From what I understand the following program would do it:

    int arr1[] = {1, 2, 3, 4, 5};
    int arr2[] = {10, 20, 30, 40, 50};
    int *p1 = &arr1[1];  
    int *p2 = &arr2[2];  
    int *p = choose_between(p1,p2);

    //then sometime later, a function gets passed p
    // and this snippet runs
    if (p == p2) {
     //p gets torn by another thread
     return p; // this allows an illegal index/pointer combo, possibly returning p1[1]
    }
Is this program demonstrating the issue? Does this execute under Fil-C's rules without a memory fault? If not, could you provide some pseudocode that causes the described behavior?


No, this program doesn’t demonstrate the issue.

You can’t access out of bounds of whatever capability you loaded.


Fil-C lets programs access objects through the wrong pointer under data race. All over the Internet, you've responded to the tearing critique (and I'm not the only one making it) by alternately 1) asserting that racing code will panic safely on tear, which is factually incorrect, and 2) asserting that a program can access memory only through its loaded capabilities, which is factually correct but a non sequitur for the subject at hand.

You're shredding your credibility for nothing. You can instead just acknowledge Fil-C provides memory safety only for code correctly synchronized under the C memory model. That's still plenty useful and nobody will think less of you for it. They'll think more, honestly.


> asserting that racing code will panic safely on tear, which is factually incorrect

Try it. That’s what happens.

> through its loaded capabilities, which is factually correct but a non sequitur for the subject at hand.

It’s literally the safety property that Fil-C guarantees.

Safety properties provided by languages aren’t about preventing every bad thing that users can imagine. Just because the language does something different than what you expect - even if it allows you to write a program with a security bug - doesn’t mean that the language in question isn’t memory safe.

> You're shredding your credibility for nothing. You can instead just acknowledge Fil-C provides memory safety only for code correctly synchronized under the C memory model.

Fil-C provides memory safety even for incorrectly synchronized code. That safety guarantee is easy to understand and easy to verify: you only get to access the memory of the capability you actually loaded. You’re trying to evade this definition by getting hung up on what the pointer’s intval was, and your PoC uses a pointer comparison to illustrate that. You’re right that the intval is untrusted under Fil-C rules.

I’m not going to downplay the guarantees of my technology just to appease you. Whether or not you find me credible is less important to me than being honest about what Fil-C guarantees.


In https://news.ycombinator.com/item?id=46270657, you write

> If you set the index to `((alice - bob) / sizeof(...))` then that will fail under Fil-C’s rules (unless you get lucky with the torn capability and the capability refers to Alice).

In the comment above, you write, referring to a fault on access through a torn capability

> Try it. That’s what happens.

Your position would be clearer if you could resolve this contradiction. Yes or no: does an access through a pointer with an arbitrary offset under a data race that results in that pointer's capability tearing always fault?

> You’re right that the intval is untrusted under Fil-C rules.

Can Fil-C compile C?

You can't argue, simultaneously,

1) that it's the capability, not your "intval", that is the real pointer with respect to execution flow, and

2) that Fil-C compiles normal C in which the "intval" has semantic meaning.

Your argument is that Fil-C is correct with respect to capabilities even if pointers are transiently incorrect under data races. The trouble is that Fil-C programs can't observe these capabilities but can observe pointers, and so make control flow decisions based on these transiently incorrect (you call them "untrusted") inputs.


Can you show an actual minimal C program which has this problem? I’m trying to follow along here, but it’s very hard for me to understand the exact scenario you’re talking about.


I think at this point it's reasonable to conclude that quotemstr does not have a legitimate concern until a program demonstrating the issue can be presented.


> Rust has a weaker definition if you consider that you can use `unsafe`

I don't see it. Rust makes the same guarantees regardless of the unsafe keyword. The difference is only that with the unsafe keyword you the programmer are responsible for upholding those guarantees whereas the compiler can check safe Rust.


C is safe by the same logic, then? You can write safe code in anything if you don’t make mistakes.


But the definition is what we're talking about, not whether you make mistakes. Of course it's important that safe Rust is checked by the compiler, but that's crucially not part of how safety is defined.

I would guess that somebody more on the pulse of C's safety efforts could tell you whether they have a definition of memory safety for C or whether they're comfortable with an existing definition from somebody else.


What I mean is, what’s to stop us saying that C upholds all the same guarantees that Rust does and that it’s the programmer that’s responsible for upholding them (just as the programmer is responsible in the case of Rust code marked ‘unsafe’)? This seems like a semantic game to avoid acknowledging that unsafe Rust comes with some (though not all) of the same risks as C code.

In short, the definitions are not important. What matters are the risks that you do or don’t run. And if your Rust code contains unsafe blocks, you are running risks that you wouldn’t be if you used Fil-C, which has no such escape hatch. (Of course this goes both ways – your Fil-C code is more likely to fail, safely, with a runtime error due to a mistake that Rust would have caught at compile time.)


And do you say that C offers these guarantees ?

Real world C software does not read like software written by people who are in fact upholding those guarantees you say C could equally have. It reads as though they think such a guarantee is a joke or an irrelevance. It's not rare for me to run into people who think C's pointers are just indexing into a massive array of all RAM (or its equivalent on today's systems with virtual addressing), that's not just not in the same ballpark as a safe C program, that's playing a different sport on another continent.


You seem to be suggesting that a language being safe or unsafe is a social contract rather than a technical property of the language.

>And do you say that C offers these guarantees ?

No, that would be silly, and it's an illustration of why it is silly to say that a language guarantees X if it is the programmer who must check that X holds. If we go down that route (which, to repeat, would be silly), then we can make C safe without any technical changes just by adding some language to the standard saying that C programmers are obliged to ensure that their code maintains a certain list of invariants. When you say that "Rust makes the same guarantees regardless of the unsafe keyword", it seems to me that you are doing something equally pointless.


> You seem to be suggesting that a language being safe or unsafe is a social contract rather than a technical property of the language.

Quite some way up this thread pizlonator insists that each programming language defines memory safety differently, quantifying some as "weaker" or "stronger" and giving the example that Rust has the `unsafe` keyword and so that's weaker than Fil-C.

That's what we were discussing when you jumped in with your C hypothetical.

You apparently instead believe in a single universal "safety" and every language is either absolutely safe or unsafe according to foldr for whatever that's worth - but that's not what we were talking about.


No, I just think that Rust is less safe than it would be if it didn’t have the unsafe escape hatch.

I think you’re taking issue with how pizlonator phrased his post rather than addressing the substance of his point that Fil-C does not have the ‘unsafe’ escape hatch and is therefore safer in this respect. Sure, Rust uses a pretty standard definition of memory safety when talking about the desired property of the program, but pizlonator is talking about the definition of memory safety that the Rust compiler actually guarantees that Rust code will meet, which (when you include unsafe-marked code) is a conditional and weaker one.


I still can't agree with "weaker". Yes, it's conditional on the unsafe code actually obeying the rules, and on the tooling, but Fil-C is in the same situation: there will be bugs in the compiler, and indeed in some cases the same bugs, because LLVM has plenty of bugs.

Crucially, unsafe Rust doesn't have weaker rules, it has the same rules; that's my whole thrust here. The change is that the burden of obeying those rules is on you, the programmer, and that's a heavy burden. But it is very much possible for skilled practitioners to carefully achieve that. It's a very similar skill to writing C++ stdlib implementations. Aria's "Pre-pooping your pants" essay is colourful but ultimately it's the same sort of thing strong exception guarantees are made of in C++. We go in eyes open, expecting the worst so that we're pleasantly surprised when it doesn't happen.

It's not practical for humans to write code like this all day, every day; they make too many mistakes, and that's the problem in C or C++. But it seems like it is practical for some skilled people, sometimes, with the benefit of oversight from similarly skilled peers.

If that isn't enough for you I have good news and I have bad news. The bad news is that for general purpose software too bad, we've known since the middle of last century that we can't do better than this. Fil-C isn't magic, nor are Java and C#. At runtime we detect we can no longer assure correct operation and we abort, this might not be OK, but we can't do better.

The good news is that we can do better if we're willing to sacrifice generality. The difficulties all come from the fact that ultimately if we're a general purpose programming language we can be made into a Gödel number and then obliged to perform computations on ourselves and we're toast. But we can invent useful languages which aren't powerful enough to do that. Want to decompress ZIP files? No generality needed. Crop some JPEGs? Not needed. Validate whether these graphs are isomorphic? Ditto.

Rust is a general purpose language, but you might well not need one for your problem, and I say we should prefer not to use a general purpose language when we don't need one.


I don't think it makes sense to compare non-invariant-respecting unsafe blocks to compiler bugs. It would make sense to do so if unsafe blocks were only present in a highly-verified Rust stdlib, but we both know that's not the case.

>Crucially unsafe Rust doesn't have weaker rules, it has the same rules, that's my whole thrust here. The change is that the burden of obeying those rules is on you, the programmer, and that's a heavy burden.

Now we're circling back to my argument about C. The C standards committee could declare that 'unsafe C' (i.e. all C) has these very same rules (which C programmers have the heavy burden of obeying). Would this instantly convert C into a memory safe language? Of course not! It's an empty semantic gesture. Similarly, merely saying "Rust programmers are obliged to respect the following invariants inside unsafe blocks!" does nothing to actually decrease the risks associated with unsafe blocks (leaving aside whatever exhortative success such admonitions might have).

What next, if we accept this logic? Is Perl a language with strict static typing, where "the burden of checking the types falls on you, the programmer"?


> Would this instantly convert C into a memory safe language?

I actually strongly encourage finishing this thought. Imagine you're WG14 and you're intending to drop this "safe C" bombshell in C29 and think about the implications just for your document.

You "just" need to spell out all these rules for the language itself, Rust only has a handful but you've got pages of these, and then you need to go re-design all your APIs so that instead of basically YOLO† each of the standard library APIs has an explicable set of safety requirements, just as Rust has to do with its relatively small subset of unsafe APIs.

A subset of this work is already underway for WG14 and has been for a few years. You'll need to hurry them along to prepare for your epoch making announcement because they're not expecting to be anywhere close by 2029 but hey, shoot for the stars right?

At the end of this, you've announced a deeply incompatible C version and the benefit is that if your customers can hire people who don't make mistakes and they port all their C to this new version, it has similar properties to if they were to rewrite it in Rust. Don't expect applause, in fact, I'd recommend hiring bodyguards.

† I think people really underestimate how much C relies on this. Remember C provides qsort, an unstable type-erased comparison sort (hopefully an introsort, but in some implementations literally just Hoare's Quicksort which is significantly older than C itself) which has arbitrary Undefined Behaviour if you screwed up your comparison function and yet for all the popular implementations it's still slower than in Rust which doesn't have that UB problem at all.

So now you're documenting specifically for functions like this why "safe C" is both much harder to use and slower in your standards document like it's an achievement, and unlike Karoline Leavitt you're not even getting paid to do this. Maybe you should take up knitting instead?


You’ve kindly finished the thought for me. You now have a “safe” version of C purely by updating the standard and associated documentation. (It’s not actually true that you need to redesign all the APIs in the stdlib. You just need to document the appropriate restrictions on how they may safely be used, just as you would have to do with a Rust function marked ‘unsafe’. It’s trivially possible to write an unsafe-marked Rust function that’s guaranteed to be memory safe if and only if an arbitrary invariant is maintained.)
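For illustration, a hypothetical function of exactly that kind (the name, data, and invariant are invented for this sketch):

```rust
static VALUES: [u8; 3] = [1, 2, 3];

/// SAFETY: the caller must pass `i < 3`. The invariant lives
/// entirely in this comment; the compiler cannot check it.
unsafe fn read_value(i: usize) -> u8 {
    *VALUES.as_ptr().add(i)
}

fn main() {
    // Upholding the documented invariant: memory safe.
    assert_eq!(unsafe { read_value(2) }, 3);
    // `read_value(3)` would compile just as happily, and be UB.
}
```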

In reality of course, this is all absurd. No matter how much or how little work it turns out to be, writing reams of standardese and leaving actual C implementations untouched would do nothing to reduce the safety risks associated with C code.

By the way, you’re wasting some energy arguing points that I agree with (e.g. that Rust has a better sorting API than C). My comments here are not anti-Rust. I merely disagree with the claim that Rust code marked ‘unsafe’ is as safe as regular Rust code.


>> It’s not actually true that you need to redesign all the APIs in the stdlib. You just need to document the appropriate restrictions on how they may safely be used [..]

That's exactly the problem. Notice how that C API qsort has basically the same affordance as Rust's sort_unstable (except it is type erased) but it has a lot more restrictions, because C was a New Jersey language where what matters is that it's easy to implement. So you need to figure out (for some of the API I expect this may be a many-years study) what the requirements are that you're going to document.

Or, which I'm confident will often be faster, reform the API and provide the thing programmers actually wanted anyway, the "Right Thing", which means more work for an implementer of your "Safe C" and also, which was my point, actually does get you a safer result after all.


Those requirements are already documented. Let’s not pretend that it’s impossible to write safe code that uses qsort.


> ... then we can make C safe without any technical changes just by adding some language to the standard saying that C programmers are obliged to ensure that their code maintains a certain list of invariants.

In Rust you can use #![forbid(unsafe_code)] to totally forbid unsafe code in your codebase. Rust also checks for memory safety at compile time, these are strong guarantees that ensure that if the code compiles it is memory safe.
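A minimal sketch of that; the attribute applies crate-wide:

```rust
// Crate-level attribute: the compiler rejects any `unsafe` anywhere
// in this crate, so everything below is checked safe Rust.
#![forbid(unsafe_code)]

fn main() {
    let nums = vec![1, 2, 3];
    let sum: i32 = nums.iter().sum();
    assert_eq!(sum, 6);
    // Adding `unsafe { ... }` anywhere in this crate would now be a
    // hard compile error, not just a lint you can silence locally.
}
```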


I'm aware of that, but I'm responding to the original claim that "Rust makes the same guarantees regardless of the unsafe keyword" (see https://news.ycombinator.com/item?id=46262774)


Ah. I agree with you. When unsafe is used the borrow checker cannot check for memory safety, the programmer has to provide the guarantees by making sure their code does not violate memory safety, similar to programming in C.

But unsafe Rust is still far better than C because the unsafe keyword is visible and one can grep it and audit the unsafe parts. Idiomatic Rust also requires that the programmer provides comments as to why that part is unsafe.


I think making things more explicit with "unsafe" is an advantage of Rust, but I think "far better" is a bit of an exaggeration. In C you need to audit pointer arithmetic, malloc/free, casts and unions. If you limit pointer arithmetic to a few safe accessor functions and have documented lifetime rules, this is also relatively simple to do (more difficult than "grep" but not much). Vice versa, if you use a lot of "unsafe" in Rust or in complicated ways, it can also easily become impossible to guarantee safety. In contrast to what people seem to believe, the bug does not need to be inside an unsafe block (a logic error outside can cause the UB inside unsafe, or a violation of some of Rust's invariants inside unsafe can allow UB outside of unsafe) and can even result from the interaction of unsafe blocks.

The practical memory safety we see in Rust is much more the result of trying hard to avoid memory safety issues and requiring comments for unsafe blocks is part of this culture.


> the bug does not need to be inside in unsafe block

The argument is that while you wouldn't in fact fix the bug by modifying the unsafe code block, the unsafe code block was wrong until you fixed the other code.

For example, imagine if a hypothetical typo existed inside RawVec (the implementation details of Vec) causing the growable array to initially believe it has 1 element inside it, not 0, even though no space has been allocated and nothing was stored. That's safe code, and of course the correct fix would be to change it from 1 to 0, easy. But this type is arguably broken because the unsafe code would dereference a pointer that isn't valid, trying to reach that non-existent value. It would be insane, perhaps even impossible, to modify that code to somehow handle the "We wrote 1 instead of 0" mistake, when you could instead fix the bug - but that is where the theoretical fault lies.
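A toy sketch of that scenario (this is not the real RawVec, just an invented TinyVec with the same shape of invariant):

```rust
// A toy fixed-capacity vec. The unsafe read is only sound because
// safe code maintains `len <= number of slots written by push`.
struct TinyVec {
    buf: [i32; 4],
    len: usize,
}

impl TinyVec {
    fn new() -> Self {
        // If this were `len: 1` (the hypothetical typo), `get(0)`
        // below would read a slot nothing ever wrote. The fix belongs
        // here, in safe code, not in the unsafe block.
        TinyVec { buf: [0; 4], len: 0 }
    }

    fn push(&mut self, x: i32) {
        self.buf[self.len] = x; // panics (safely) if full
        self.len += 1;
    }

    fn get(&self, i: usize) -> Option<i32> {
        if i < self.len {
            // SAFETY: i < len, and safe code guarantees the first
            // `len` slots were initialized by `push`.
            Some(unsafe { *self.buf.as_ptr().add(i) })
        } else {
            None
        }
    }
}

fn main() {
    let mut v = TinyVec::new();
    assert_eq!(v.get(0), None); // with the typo, this would be Some(garbage)
    v.push(42);
    assert_eq!(v.get(0), Some(42));
}
```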


Since you know C and you know Rust:

I'm curious what you make of quotemstr's point about a race causing a mismatch between the pointer's capability and its index. First off, in your estimation can this realistically be exploited to wreak havoc on extant C programs compiled using Fil-C? Second, is such a mismatch able to happen in safe Rust? Third, is such a mismatch able to happen in unsafe Rust?

Edit: clarification to narrow the question even further


I can try.

"Wreak havoc" is a very vague claim. Instinctively the tearing feels like something very difficult to usefully exploit, but, we know historically that the only people who can reliably tell you whether it was difficult are the attackers actually trying to do it. Don't believe the defenders.

AIUI this capability versus value distinction is a Fil-C thing. So, that's not a thing in Rust at all. In Safe Rust the pointer types, which is what we care about here, aren't very interesting because safe Rust can't dereference them, safe Rust is fine with you making a pointer from the word "LAUGHING" (not a pointer to the string, just the literal bytes in ASCII, but treated as a pointer) or from just some random bytes you found in a data file, because it's not allowed to dereference them so, cool, whatever, no harm no foul.

In unsafe Rust we're allowed to dereference valid pointers, but it's our job to ensure we obey that rule about validity, it being our job to obey rules is what "unsafe" means. So, that silly "LAUGHING" pointer isn't valid, it's just pointer-shaped toxic material. Even if, by coincidence, a pointer you have happened to have the same address as that pointer, in both C and Rust it's not OK to just go around dereferencing invalid pointers, they are not offsets into an imaginary huge array of all memory even though some C programmers act like they are.
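A sketch of that "LAUGHING" pointer in safe Rust (the literal byte reinterpretation is the point; nothing here is something you'd want in real code):

```rust
fn main() {
    // Safe Rust is happy to *make* a pointer out of arbitrary bytes:
    // here, the 8 ASCII bytes of "LAUGHING" reinterpreted as an address.
    let addr = u64::from_le_bytes(*b"LAUGHING") as usize;
    let p = addr as *const u8;

    // Creating, printing and comparing it are all fine in safe Rust...
    println!("{:p}", p);
    assert_eq!(p as usize, addr);

    // ...because dereferencing is what requires `unsafe`, and doing it
    // here would be UB: this is pointer-shaped toxic material, not a
    // valid pointer.
    // let _ = unsafe { *p }; // don't
}
```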

Ignoring the Fil-C specific capabilities, in Rust the tearing issue is a matter of synchronization, which is something Rust cares about as part of delivering "fearless concurrency". Rust's marker traits Send and Sync are a good place to start learning about that. Yes, we could unsafely implement these marker traits in unsafe Rust when we shouldn't, and thus enable what I imagine you'd call havoc.

So, mostly the problem is that your question is (unintentionally) too vague to answer well but I hope I was at least somewhat helpful.


I'm not an expert here but I have to say this feels like a very weak objection.

p points to P1. One thread reads through p. Another thread races with that and mutates p to point to P2. The result is the first thread reads from either P1 or P2 (but no other object).

This seems totally fine and expected to me? If there's a data race on a pointer, you might read one or the other values, but not garbage and not out of bounds. I mean, if it could guarantee a panic that's nice, but that's a bonus, not required for safety.
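A sketch of that property in Rust terms, using AtomicPtr to make the race well-defined (in Rust the equivalent race on a plain pointer would itself be UB; the assertion is the "one of the two objects, never garbage" guarantee described above):

```rust
use std::sync::atomic::{AtomicPtr, Ordering};
use std::thread;

// Two distinct objects; a shared pointer races between them.
static P1: i32 = 1;
static P2: i32 = 2;

fn main() {
    let shared = AtomicPtr::new(&P1 as *const i32 as *mut i32);

    thread::scope(|s| {
        // Writer: repoint the shared pointer mid-run.
        s.spawn(|| {
            shared.store(&P2 as *const i32 as *mut i32, Ordering::Relaxed);
        });
        // Reader: whichever write wins the race, the loaded pointer is
        // one of the two valid objects -- never garbage, never out of
        // bounds.
        s.spawn(|| {
            let p = shared.load(Ordering::Relaxed);
            let v = unsafe { *p };
            assert!(v == 1 || v == 2);
        });
    });
}
```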


Posts like the one I made about how to do sandboxing are specifically to make the runtime transparent to folks so that meaningful auditing can happen.

> For example, Filip mentioned that some setuid programs can be compiled with it, but it also makes changes to ld.so. I pointed this out to the author on Twitter, as it could be problematic.

The changes to ld.so are tiny and don’t affect anything interesting to setuid. Basically it’s just one change: teaching the ld.so that the layout of libc is different.

More than a month ago, I fixed a setuid bug where the Fil-C runtime was calling getenv rather than secure_getenv. Now I’m just using secure_getenv.

> In other words, these are still teething problems with Fil-C, which will be reviewed and fixed over time. I just want to point out that using it for real-world "infrastructures" might be somewhat risky at this point. We need unix nerds to experiment with it.

There’s some truth to what you’re saying and there’s also some FUD to what you’re saying. Like a perfectly ambiguous mix of truth and FUD. Good job I guess?


Is it FUD? Approximately speaking, all software has bugs. Being an early adopter for security critical things is bound to carry significant risk. It seems like a relevant topic to bring up in this sort of venue for a project of this sort.


It's true. I used to promote high-assurance kernels. They had low odds of coding errors but the specs could be wrong. Many problems Linux et al. solved are essentially spec-level. So, we just apply all of that to the secure designs, right?

Well, those spec issues are usually not documented or new engineers won't know where to find a full list. That means the architecturally-insecure OS's might be more secure in specific areas due to all the investment put into them over time. So, recommending the "higher-security design" might actually lower security.

For techniques like Fil-C, the issues include abstraction gap attacks and implementation problems. For the former, the model of Fil-C might mismatch the legacy code in some ways. (Ex: Ada/C FFI with trampolines.) Also, the interactions between legacy and Fil-C might introduce new bugs because integrations are essentially a new program. This problem did occur in practice in a few research works.

I haven't reviewed Fil-C. I've forgotten too much C and the author was really clever. It might be hard to prove the absence of bugs in it. However, it might still be very helpful in securing C programs.


It’s like half FUD.

The FUDish part is that the only actual bug bro is referring to got fixed a while ago (and didn’t have to do with ld.so), and the rest is hypothetical.


> a perfectly ambiguous mix of truth and FUD

Congrats on Fil-C reaching heisentroll levels!


It's not possible even in 10 years (.. but maybe in 11).

What a shift in the last 5 years (never -> 100 years -> 11)


ChatGPT has done too many things that "a computer can't do". The "AI effect" denial is strong, but it has its limits.


“Machines will be capable, within twenty years, of doing any work that a man can do.” - Herbert Simon, 1965


The simple additive scoring here is sus. It means a model that's perfect on 9/10 axes but scores 0% on Speed (i.e., takes effectively infinite time to produce a result) would be considered "90% AGI".

By this logic, a vast parallel search running on Commodore 64s that produces an answer after BusyBeaver(100) years would be almost AGI, which doesn't pass the sniff test.

A more meaningful metric would be more multiplicative in nature.


That "blank slate" idea doesn't really apply to humans, either.

We are born with inherited "data" - innate behaviors, basic pattern recognition, etc. Some even claim that we're born with a basic physics toolkit (things are generally solid, and they move). We then build on that by being imitators, amassing new skills and methods simply by observation and performing search.


Sure, there's lots of inbuilt stuff like basic needs and emotions. But still, a baby doesn't know anything about the world. It's the ability to collect data and train on it that makes it an AGI.


> a baby doesn't know anything about the world

That's wrong. It knows how to process and signal low carbohydrate levels in the blood, and it knows how to react to a perceived threat (the Moro reflex).

It knows how to follow solid objects with its eyes (when its visual system adapts) - it knows that certain visual stimuli correspond to physical systems.

Could it be that your concept of "know" is just the common-sense one, "produces output in English/German/etc."?


No, I totally agree that there's all kinds of innate knowledge, but it's very similar for humans and animals. I don't think this knowledge is intelligence. My point was that a baby is already an AGI, and it shouldn't require a lifetime of learning to become one. Also, if intelligence is just problem solving (like an IQ test) then it should be independent of knowledge.


Reminds me of the Orange Alternative movement in communist-era Poland. A group would wear t-shirts, each with a letter, spelling an innocent phrase.

When one turned away, the message would instantly become different, like changing "Down with the heat" to "Down with the cops" - https://sztukapubliczna.pl/pl/precz-z-u-palami-pomaranczowa-...

https://en.wikipedia.org/wiki/Orange_Alternative

the whole world is a work of art, so even a single policeman standing in the street is a work of art


Skilled essay, but not an argument. Opens with "As Jung notes" as an appeal to authority, then more name-drops.

Misses clear definitions (what counts as "friendship with self"?) and the mechanism (how X->Y). Anecdotes/quotes != proofs.

IOW, prestige != proof. Two quick checks: 1) strip the names - does the reasoning still stand? 2) Flip to counterexamples - does the thesis survive? We all know people who are hard on themselves but deeply loving to others.

Nice essay, but treat it as an opinion to test, not a truth to inherit. The thread reads as if the case were already proven.


I usually treat “think pieces” like these as mere mental stimulation. It’s pretty difficult to say anything definitive about how everyone should live or how everyone should think, but I don’t think that means we ourselves should not reflect on those things. After all, we live and think, and why not try to do those things better?


it at least seems like it has a modicum of human thought, whereas this GPT drivel does not.


> that not everybody writes in English.

I don't know... I understand the history and reasons for this capitalization behavior in Turkish, and my native language isn't English either; it needed a lot of strange encodings before the introduction of UTF-8.

But messing around with the capitalization of ASCII characters (codepoints <= 127) is risky business, in my opinion. These codepoints are explicitly named:

"LATIN CAPITAL LETTER I" "LATIN SMALL LETTER I"

and requiring them not to map to each other exactly during uppercasing/lowercasing sounds very risky.
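For what it's worth, Rust's char::to_uppercase is an example of a stdlib that hard-codes the locale-independent Unicode default mapping, which keeps those two named codepoints paired (a sketch):

```rust
fn main() {
    // Unicode default (locale-independent) case mapping:
    // LATIN SMALL LETTER I <-> LATIN CAPITAL LETTER I.
    assert_eq!('i'.to_uppercase().to_string(), "I");
    assert_eq!('I'.to_lowercase().to_string(), "i");

    // A Turkish-locale mapping would instead pair 'i' with U+0130 'İ'
    // and 'I' with U+0131 'ı'; the Rust stdlib deliberately does not
    // do locale-sensitive case mapping.
    assert_ne!('i'.to_uppercase().to_string(), "\u{0130}");
}
```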

