
As someone who is already fairly hostile to C and actually getting even more hostile over time (the recent shenanigans where compilers exploit "undefined" behavior for "optimizations", in a language where it's virtually impossible to avoid "undefined" behavior at scale, are insane), part of my hostility is precisely that there have historically been no other engineering-valid alternatives. (There have been alternatives, just not ones you would have been wise to bet your career on.) It is an example of one of the worst local optima we can find in computer programming, where something comes out that is pretty good for its time, and then entrenches itself so thoroughly that nothing can ever dislodge it again, even though it isn't actually a great foundation to build on for decades at a time.
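
To make that parenthetical concrete, here is the canonical sort of thing I mean (a minimal sketch; the function names are invented for illustration, not taken from any real project):

    #include <limits.h>
    #include <stdio.h>

    /* A plausible-looking overflow guard. Signed overflow is undefined
     * behavior, so an optimizing compiler is allowed to assume `a + b`
     * never wraps and fold this test down to `b >= 0`, quietly removing
     * the very check it was written to provide. */
    int add_wont_overflow_naive(int a, int b)
    {
        return a + b >= a;
    }

    /* The well-defined way: test the operands before doing the arithmetic. */
    int add_wont_overflow(int a, int b)
    {
        if (b > 0 && a > INT_MAX - b) return 0;
        if (b < 0 && a < INT_MIN - b) return 0;
        return 1;
    }

    int main(void)
    {
        /* With optimizations on, the naive check typically claims
         * INT_MAX + 1 is fine; without them it usually says it isn't.
         * That divergence is exactly what undefined behavior licenses. */
        printf("naive:   %d\n", add_wont_overflow_naive(INT_MAX, 1));
        printf("checked: %d\n", add_wont_overflow(INT_MAX, 1));
        return 0;
    }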

Other examples of this kind of local optimum include SQL, Javascript (though WebAssembly may free us from that before we're stuck with it in 2040), and Unix.

When I say this sort of thing, people tend to flip out and start defending those techs against the claim that I don't think they're good at all. That's not the problem. They are good. The problem is that they're not good enough to build on for the future, but they are good enough to entrench themselves.

The consequence is that it takes a lot of riling up the community with knowledge of the shortcomings to fertilize the ground for alternatives to arise and be supported enough to make progress.

So, please stop defending C. It is what it is. It had a good run. Nothing will ever make it stop being the most important language of the last four decades. But if you don't want it to be the most important language of the next four decades... and you really shouldn't... please stop defending it. Give the alternatives space to breathe.

$OBLIGATORY_RUST_CALLOUT.



I'm not defending C. If anything, I'm attacking all of the so-called "better" replacements. Specifically, your callout that they should have existed by now. I treat that argument as a fallacy nowadays. I'll call it the "wishful thinking fallacy."

I do this because if there is anything that calls into question the "science" part of computer science, it is the dogma of folks that refuse the empirical side of the field. It isn't like folks haven't tried to make better languages and toolchains throughout the years. You even acknowledge this. They just haven't delivered on their promises.

And, some of that is because you are ignoring all of the advances that have happened in C. If you are not using modern tools like valgrind/coverity/whatever, you are not comparing C fairly to contemporary languages.


FWIW, I worked at Coverity. I didn't write the analyzers, but I sat next to the team that did. It is an amazing piece of software and that team would regularly present new features to us that I would have sworn were provably impossible, like "you've just solved the halting problem" kind of things. I'm sure any of those engineers could glance at an IOCCC entry and say "oh, there's a missing semicolon in that part that looks like a cat's head". Seriously, it's black magic written by wizards and is the most impressive codebase I've ever seen.

But.

Holy crap, the idea that you have to use something like Coverity just to make C code not suck is a bitter pill. I mean, I'm glad it exists but it's terrifying that it has to and that it's still being developed. After decades of development, the Coverity team is still finding enough examples of new ways to write bad code that they can still improve it.

I don't write C because I'm on a first-name basis with the only people on the planet I trust to get it right, and I am not among them.


"If you are not using modern tools like valgrind/coverity/whatever, you are not comparing C fairly to contemporary languages."

I personally consider C + strong static analysis tools to be an entirely different language. I fully acknowledge this is my personal opinion.

Consequently, I consider it equivocation in this sort of context to talk about C when convenient, and then about C + strong static analysis tools when convenient. You're only programming in one or the other. My utter contempt is for C the language by itself. This is relevant because, as near as I can tell, C has at least a 10:1 advantage in the field over C + strong analysis tools, and I'm probably underestimating by at least an order of magnitude. It is not realistic to pretend that C programmers are using these tools at scale. (Now, you can point at a ton of high-profile C programs that do use these tools, because high-profile C programs are exactly where they get used. But as near as I can tell they are firmly the exceptions, not the rule. And you can point at a ton of high-profile projects that don't use them, too.)

If everybody started using all these tools, including the ones that you actually have to buy because there's no open source equivalent, it is true that most of my issues with C would be resolved. It would also be true that programmers would be a lot less gung-ho about C, though, because as anyone who has deeply integrated these strong tools into their workflow can attest, C + strong analysis tools is a much more complicated language. Now, in my opinion, that complication is being revealed by the analysis tools, not created: all C code is actually that complicated, and programmers are merely kept in the dark by C's extensive structural deficiencies, which are what make it safe to write only with these rather sophisticated tools. But the end result is still that you're working in a much more complicated, demanding language with a much more baroque compilation process. Claims about C's productivity, compiler speed, probably execution speed, ease of programming, and probably quite a few other things all go flying out the window if this is the solution, and C looks eminently more replaceable.
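
To be concrete about "revealed, not created", this is the sort of thing I mean: code that every compiler accepts without complaint, and that an analyzer (or Valgrind at run time) objects to immediately. The function and file names are invented for the sketch:

    #include <stdio.h>
    #include <stdlib.h>

    /* Perfectly legal C. A static analyzer will report that `f` leaks
     * on the second early return and `buf` leaks on the third; Valgrind
     * shows the same at run time. */
    int load_config(const char *path, char **out)
    {
        FILE *f = fopen(path, "r");
        if (!f)
            return -1;

        char *buf = malloc(4096);
        if (!buf)
            return -1;              /* leaks f */

        size_t n = fread(buf, 1, 4095, f);
        if (n == 0) {
            fclose(f);
            return -1;              /* leaks buf */
        }

        buf[n] = '\0';
        fclose(f);
        *out = buf;
        return 0;
    }

    int main(void)
    {
        char *cfg = NULL;
        if (load_config("example.conf", &cfg) != 0)
            return 1;
        puts(cfg);
        free(cfg);
        return 0;
    }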


We have not escaped the "wishful thinking" yet. Do I want there to be a replacement? Yes. I do my coding in other languages.

Have I grown weary of the attacks on C? Also yes.

Are not enough people using advanced toolchains? Yeah. I agree with that. But, this is effectively the same complaint that not enough people are using higher level languages. With the exception that fitting some of the extra tools on there is easier than switching languages.


Lint was created for C in 1979.

In Dennis Ritchie's own words:

"Although the first edition of K&R described most of the rules that brought C's type structure to its present form, many programs written in the older, more relaxed style persisted, and so did compilers that tolerated it. To encourage people to pay more attention to the official language rules, to detect legal but suspicious constructions, and to help find interface mismatches undetectable with simple mechanisms for separate compilation, Steve Johnson adapted his pcc compiler to produce lint [Johnson 79b], which scanned a set of files and remarked on dubious constructions."

Taken from https://www.bell-labs.com/usr/dmr/www/chist.html

We are in 2016, and static analysis is still ignored by the majority of C programmers that "just get work done".


Lint has a history of being proprietary.

The Free Unixes of the true UNIX DNA lineage have a lint that is a rewrite by Jochen Pohl (done for NetBSD sometime in the early 90's?). Somehow BSD didn't inherit the original lint; if you use lint on BSD, you're not literally using the program that dates back to 1979.

Other Lints like FlexeLint are proprietary.

C spread like wildfire in the 80's, but an implementation of lint did not follow. Most non-Unix systems that could be programmed in C did not have a lint accompanying that C compiler.

Definitely, it seems there was a lack of promotion of lint from the birthplace of C.

If I were to give a definition of lint, it would be: "that mythical C checking program everyone knows but hasn't used".

It must be suffering from a curse which affects all technology named using a four-letter word starting with L. (Or at least LI.)


> C spread like wildfire in the 80's,

Maybe in the US; I only got my first contact with C in 1992, via Turbo C 2.0.

Before that I had already used multiple languages, and was using Turbo Pascal 6.0 by the time I learned C.

Then again, I hardly knew anyone with access to UNIX, my first contact being Xenix in 1994.

This type of experience was quite common in Portugal: who had money to buy expensive UNIX workstations?

As for lint being proprietary: well, I usually paid for my tools, before FOSS started to be a thing.

And the same place where they had Turbo C 2.0 had also just gotten Turbo C++ 1.0.

So there was a tool with a type system that allowed me to bend C to be more like Turbo Pascal, provided I cared to use said type system improvements.

Which led to me never using C++ as just an improved C compiler, but rather taking advantage of C++'s type system to ensure my arrays and strings were properly bounds-checked, and IO was done safely.

Oh, and taking C++'s side in the whole C vs C++ USENET debates since 1994. Or rather, type safety in systems programming vs C.

So these HN and Reddit discussions regarding C's unsuitability for writing safe software are hardly anything new to me.


Not just C programmers. I write mostly VB.Net, C#, Java, and I can never compile any of the programs that I work with (created by teams of smart people) with all warnings treated as errors, never mind any more sophisticated analysis.

And the justification is always the same, just as you said: "just get work done".


Apologies if this is rude, but I'm not sure what this is getting at. I know that static analysis is old. I also know that I have been guilty of ignoring it sometimes. Especially as I was learning.

I think it is still in the wishful thinking world to think "better tools" should have won by now.


The thing is that the working C programmer class doesn't value quality and secure code: performance FTW, everything else gets done later, if there is time, that is.

Even the language authors were honest and:

1 - created a tool in 1979 that most C developers ignore even in 2016, 37 years later;

2 - realized that C was past its due date and took part in the design of Limbo and Go.

Yet lots of C coders continue to improve this NIST list, every single day:

https://nvd.nist.gov/visualizations/cwe-over-time

For me, wishful thinking is the mentality that there are C developers out there who manage to write memory-corruption-free code without using such tools.

Never met any since my first contact with C in 1992.


Okay, so I have written ... multiple projects that ran as long as they had power with no evidence of memory corruption over a span of 30 years.

I've used Lint. Early on. It was great training. After a while, you don't need it so much.

I think you are Vastly overestimating the skill level needed to write normatively-correct C code by quite a margin.


> with no evidence of memory corruption over a span of 30 years.

How many people did you have on those teams, and how was the development process?

I too have written such software, when working in teams of no more than two people where we took a Pascal-like approach to programming in C.

Basically:

- Use translation units as modules

- structs are always exposed as Abstract Data Types, with accessors either via macros (when speed matters) or functions

- assert everywhere any assumption that can lead to memory corruption

- in debug code, make use of assert with pointer validation helpers, e.g. IsBadReadPtr()

- arrays are never accessed with pointer syntax

- use of compiler extensions for bound checking during debug builds

- all warnings enabled and considered as errors

And quite a few others; a minimal sketch of the struct-as-ADT and assert points follows.
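
Something along these lines (every name here is invented for illustration):

    /* counter.h -- the translation unit's public face: the struct is
     * opaque, so callers can only go through the accessors. */
    typedef struct Counter Counter;

    Counter *counter_new(int limit);
    void     counter_free(Counter *c);
    int      counter_get(const Counter *c);
    void     counter_increment(Counter *c);

    /* counter.c -- the module body. */
    #include <assert.h>
    #include <stdlib.h>

    struct Counter {
        int value;
        int limit;
    };

    Counter *counter_new(int limit)
    {
        assert(limit > 0);               /* document and enforce assumptions */
        Counter *c = malloc(sizeof *c);
        if (c) {
            c->value = 0;
            c->limit = limit;
        }
        return c;
    }

    int counter_get(const Counter *c)
    {
        assert(c != NULL);
        return c->value;
    }

    void counter_increment(Counter *c)
    {
        assert(c != NULL);
        assert(c->value < c->limit);     /* catch misuse in debug builds */
        c->value++;
    }

    void counter_free(Counter *c)
    {
        free(c);
    }

    int main(void)
    {
        Counter *c = counter_new(3);
        counter_increment(c);
        int ok = (counter_get(c) == 1);
        counter_free(c);
        return ok ? 0 : 1;
    }

The debug-only pointer validation and bound checking items from the list would sit behind #ifndef NDEBUG in the same spirit, whether that's IsBadReadPtr() on Windows or building the debug configuration with something like -fsanitize=address where the toolchain supports it.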

I can count the friends and co-workers who have managed to write good quality C code since I learned C in 1992 on the fingers of one hand: five.

C++ unfortunately also inherits the flaws of C due to its copy-paste compatibility with C, but at least I can make use of the type system via structs, classes and operator overloading to enforce invariants that in C can never be more than plain conventions.

> I think you are Vastly overestimating the skill level needed to write normatively-correct C code by quite a margin.

Anyway, I don't need to prove anything when we have so many nice CVE databases full of examples.

Or when companies like Apple, which are supposed to have such ideal C programmers, do an OS release with 36 bugs, 31 of them (86%) being C memory corruption bugs.

https://support.apple.com/en-us/HT206903


I'm not trying to say you're wrong. Especially given the "...so many nice CVE databases full of examples".

As I recall, you had six(?) defects identified on a rather significant subsystem. While that's hardly ideal, it doesn't sound like those six defects should cost that much to fix.

And I'm not trying to say I can achieve some abstract perfection - I will have the odd memory overwrite early in unit test. But SFAIK, I've caught the overwhelming majority of them. I'm also willing to spend several hours building scaffolding to flush 'em out... this is especially true of networked code. I tend to use Tcl to torture them.

I will say - all the old guys I knew who were good C coders just got out of coding pretty much altogether. When I see discussions from younger coders, based on the problems they have, I feel that there's been some loss of information.

Also - for a major release of something as big as El Capitan, 36 seems a not-unreasonable number of severe defects. Obviously, we'd prefer it was a smaller number, and we don't know how many will be found eventually.


> 36 seems a not-unreasonable number of severe defects.

The point is that it should have been 5 defects (36 total - 31 memory defects).


Also: "Me too I have written such software, when working in teams of no more than two people where we taken a Pascal like approach to programming in C."

There you go. That's how I have done it as well. The degree of "Pascal-like"-ness is always ... something to be humble about. I really, really wish it had been Ada all along but that seems not to be likely.


Static analysis isn't the only game in town anyway. An approach that seems to be gaining popularity is fuzzing. I'm using the AFL program (American Fuzzy Lop) in an ongoing way to find problems in code. Stress tests under Valgrind and fuzzing are quite powerful together.
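
The harness side of it is cheap; something along these lines (the parser name and its trivial body are stand-ins for the real code under test):

    /* fuzz_target.c -- a minimal AFL-style harness.  Build it with
     * afl-gcc or afl-clang-fast instead of the normal compiler, seed a
     * testcases/ directory, then run something like:
     *
     *     afl-fuzz -i testcases -o findings -- ./fuzz_target
     */
    #include <stdio.h>
    #include <string.h>

    static int parse_record(const unsigned char *data, size_t len)
    {
        /* Replace with the real parser under test. */
        return len > 0 && memchr(data, '\n', len) != NULL;
    }

    int main(void)
    {
        static unsigned char buf[1 << 16];
        size_t len = fread(buf, 1, sizeof buf, stdin);  /* AFL feeds input on stdin by default */
        return parse_record(buf, len) ? 0 : 1;
    }

Whatever it finds ends up under the output directory (crashes/ and hangs/), and those inputs can then be replayed under Valgrind for a precise diagnosis.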


Assuming one is using a C toolchain that allows for it, which usually isn't the case for most embedded projects and their commercial tools.


Also the skill level needed to write code that evades lint-type tools while still being incorrect. :)


Yeah, there's that :)


It is not helpful to characterize a discussion of C's problems as C-bashing. H2CO3 titled his article "Let's Stop Bashing C", but the article he was responding to was not a rant against C. It was actually a suggestion that some of the more dubious features of C should not be carried forward into future languages. You may disagree with specific issues (I am with H2CO3 on braces), but I think you agree with the principle.


Indeed. When we look at Valgrind, for instance, it only runs on Linux (and not on all architectures). So you can only use it to debug C code that you can port to Linux (which requires it to be portable, which is different from "correct"), and you can only debug those issues in that code which you can reproduce in that Linux port.


> It isn't like folks haven't tried to make better languages and toolchains throughout the years. You even acknowledge this. They just haven't delivered on their promises.

The question also has to consider why they failed, and what the criterion for failure is. That may be entirely the point: the languages may be better by some metrics, worse in others, but they cannot easily beat the inertia factor, as a result of C being so entrenched for 4+ decades. Everyone is very aware of the inertia factor; it seems to permeate all levels of technical decisions in all kinds of fields, for good and bad reasons. If the failure criterion is "not used as widely as C", then, like, everything has failed, so it's a pretty bad criterion, I think.

Ada has been around for a while now, and has seen a fair amount of high-assurance industrial use. It's really unclear if you can actually characterize Ada as a total failure, for example; it lives on, although it's certainly less popular. But it was designed with the field in mind from day 1. I think a lot of this has to do with things like institutional knowledge and inertia, and the vast amounts of time and money sunk into tooling for systems like C.

That's not bad; the money and effort obviously helped. But it makes the narrative that all these other things are strictly worse or were never attempted a little bit muddier. Does the "empirical" lack of better replacements imply that better replacements cannot exist, or never existed? I think Ada alone negates the core of that argument: alternatives have certainly been tried and have seen some level of success. Or perhaps we didn't actually try hard enough, due to resources? Or maybe it suggests that other contributing factors, externalities, gave rise to such a "monopoly" of computing intellect?

Also, it's easy to forget, but today general-purpose programming languages come with enough demanding requirements (expectations around package managers, libraries, editors) that actually supporting and promoting one to the point of wide usage is ridiculously time- and money-consuming. Programming languages are a commodity, for the most part. Existing languages almost all either struggle to exist, have their primary developers financed by some corporation, or have been around long enough to become effectively immortal instead of drifting into the void.

Yet, there's almost no money in it as a field. It's insanely hard to make a business out of things like compiler tech or programming languages these days, outside of specialized fields and clients. Unless you have deep pockets and a willingness to throw money into the void for a while, bootstrapping a "competitive" programming language is going to be insanely hard on any kind of reasonable timeframe. Rust is a good example: Mozilla financed a vast amount of the development for a long time, and sustained it even in the beginning stages when it was much riskier as an investment. That's likely one of the major reasons it's even remotely as popular as it is today: they ate the cost of catching up with modern expectations.

All of these compounding factors are not going away. I agree there are few viable alternatives; but I don't agree it's so easy to explain it all away "empirically" with such a simple analysis as yours, that it was all either never tried or just "failed" (without defining what that even means).

> And, some of that is because you are ignoring all of the advances that have happened in C. If you are not using modern tools like valgrind/coverity/whatever, you are not comparing C fairly to contemporary languages.

The fact that you suggest a 15+ year old, insanely complicated, proprietary static analysis tool to ensure your code should basically be allowed to exist on the modern internet (because otherwise it will just be a nightmare hellhole security problem for someone down the road) -- a tool that still fails to catch the high-profile vulnerabilities that keep occurring, and still requires constant evolution -- all so that C can be "fairly" judged against modern languages... it's a bit alarming when you think about it.

Well, I'm being slightly tongue-in-cheek. I mean, every time I write C, I heavily indulge in these tools as my insurance to ensure I'm doing things correctly. I even tend to use CompCert to find undefined behavior! Coverity is really good, no doubt.

But when I do this, it definitely gives me pause at the investment I have to make. I don't sit around thinking "Yes, this is basically a great solution to the problem". It would almost be, like, I dunno... calling the story of the Titanic a success? Lifeboats hardly seem like the ultimate risk insurance when you're the one crashing into ice, especially when it seems we never have enough of them.


I'm not sure where this conversation is going. :) I think I resonate/agree with your entire upper part. At least, I find myself nodding my head a lot.

As for the criticism of the static analysis tool's age: that criticism doesn't make sense. I could make the same criticism of a 26ish+ year old and insanely complicated language toolchain (Haskell), and that is if I just pick the popular one. I could go with your example, Ada, for a 36+ year old option.

If the criticism is simply about its proprietary nature, that cuts against much of your earlier argument. The lack of money in the field is a large part of the problem.


> The problem is that they're not good enough to build on for the future, but they are good enough to entrench themselves.

Richard Gabriel made the same observation in his 1989 "The Rise of Worse is Better": https://www.dreamsongs.com/RiseOfWorseIsBetter.html

> The worse-is-better philosophy means that implementation simplicity has highest priority, which means Unix and C are easy to port on such machines. Therefore, one expects that if the 50% functionality Unix and C support is satisfactory, they will start to appear everywhere. And they have, haven’t they?

> Unix and C are the ultimate computer viruses.

> [...]

> It is important to remember that the initial virus has to be basically good. If so, the viral spread is assured as long as it is portable. Once the virus has spread, there will be pressure to improve it, possibly by increasing its functionality closer to 90%, but users have already been conditioned to accept worse than the right thing. [...]

> The good news is that in 1995 we will have a good operating system and programming language; the bad news is that they will be Unix and C++.


I came here to share the same link. This is the reason why C is still relevant today; it continues to be good enough! I think the same goes for Unix.


I mostly agree with you, with the exception of SQL.

I actually think SQL is a fairly good language for what it does, and the alternatives have been fairly crappy (at a minimum it should not be lumped in with Javascript). If there is a "Rust of SQL", let me know.


SQL's problems can be seen by comparing the true relational database theory with what SQL can do.

In particular, relational database theory does not require tables to have homogeneous table rows that consist entirely of scalar values. That was an efficiency optimization introduced back in the 70s, for perfectly sensible reasons at the time. But writing that into the language at such a fundamental level bends the whole language around it in a really unpleasant way compared to what it could be. For a similar example, consider the difference between a language that has no first-class function references, vs. one that has them added in. The first language may do everything you need, but you'll be bending a lot of code around that problem.

Of course, a variety of extensions have been bodged on to the sides of SQL over the years to deal with the resulting problems, and generally in a good engine like Postgres or something you can do almost anything you want. I'd also say that if you can understand what I mean when I say "Postgres is actually a much better database than understanding SQL would lead you to believe", you're on the track of what I mean.

Again, I'm not saying SQL is bad, I'm saying its success is holding back the things that could be better.


> In particular, relational database theory does not require tables to have homogeneous table rows that consist entirely of scalar values.

Are you sure about that? https://en.wikipedia.org/wiki/First_normal_form

Homogeneous values are not required, but having atomic/scalar values as the fundamental unit of a relation is a pretty core concept in database normalization theory.

Or are you saying database normalization is a performance optimization and we should judge RDBMS' by their support for relational algebra/calculus and not normalization theory?


I agree SQL is not perfect but I don't know of many examples of alternative languages (that are not completely proprietary).

It honestly would be helpful to see some alternatives that you think are actually good (as the ones I have seen in the past are fairly awful). The homogeneous rows point is valid but I'm not entirely sure even SQL requires that (as you can have non scalar data types as columns in many databases... although I suppose that is to your point about postgres).

And yeah I mainly only use Postgres so I'm biased :)


"I agree SQL is not perfect but I don't know of many examples of alternative languages (that are not completely proprietary)."

That's the point.



