Today's and even yesterday's models are quite capable of handling mundane tasks, and even the companies behind frontier models are investing heavily in strategies to manage context instead of blindly plowing through problems with brute-force generalist models.
But let's flip this around: what on earth even suggests to you that most users need frontier models?
Everybody has difficult decisions to make in their daily lives and in their work.
Having access to a model that draws from good sources and takes time to think instead of hallucinating a response is important in many domains of life.
The fact that the C++ standards committee has been working on Contracts for nearly a decade by itself refutes your claim.
I understand you want to self-promote, but there is no need to do it at the expense of others. I mean, might it be that your implementation sucked?
The late nineties is approaching three decades ago; if the C++ committee has now been working on this for nearly a decade, that's fifteen to twenty years of them not working on it. It's quite plausible that contracts simply weren't valued at the time.
Also, in my view the committee has been entertaining wider and wider language extensions. In 2016 there was a serious proposal for a graphics API based on (I think) Cairo. My own sense is that it's out of control and the language is just getting stuff added on because it can.
Contracts are great as a concept, and it's hard to separate the wild expanse of C++ from the truly useful subset of features.
There are several things proposed in the early days of C++ that arguably should be added.
I am not sure what the "truly useful features" are if you take into account that C++ spans games, servers, embedded, audio, heterogeneous programming, some GUI frameworks, hard real-time systems, and more.
I would say some of the features that are truly useful in some niches are less important in others, and vice versa.
Not a very fair assumption. However, even if your not-so-friendly point were true, I'd like people who have invented popular languages to "self-promote" more (here, dlang). It is great to get comments on HN from people who have actually achieved something nice!
It should copy Zig's '= undefined;' instead of D's '= void;'. The latter is very confusing: why have a keyword that means nothing, but also anything? This is a pretty common flaw in D, see also: static.
Nobody in D was confused by `= void;`. People understood it immediately.
> why have a keyword that means nothing, but also anything?
googling void: "A void is a space containing nothing, a feeling of utter emptiness, or a legal nullity, representing a state of absolute vacancy or lack."
> I am somewhat dismayed that contracts were accepted. It feels like piling on ever more complexity to a language which has already surpassed its complexity budget, and given that the feature comes with its own set of footguns I'm not sure that it is justified.
I don't think this opinion is well informed. Contracts are a killer feature: they enable static code analysis that covers error handling and verifiably correct state, and this comes for free in the components you consume in your code.
Contracts aren't for handling errors. That blog post is extremely out of date and doesn't reflect the current state of contracts.
Modern C++ contracts are being sold as purely a debugging aid. You can't rely on contracts, the way you might an assert, to catch problems; that's an intentional part of the design.
> This seems extremely confused. The copyright system does not have a way to grant these permissions because the material is not covered under copyright!
This opinion is simplistic. LLMs are trained on pre-existing content, and their output directly reflects their training corpus. This means LLMs can generate output that matches existing work verbatim, and that work can very well be subject to copyright.
Language models are good at translation and retrieval, and this extends to computer languages. LLMs translate from GPL code to other licenses the same way Google Translate turns French into English, except that the source material is implicitly stored in the LLM.
> QA has a reputation problem. Because it is considered as unimportant role, good people don't get attracted to it.
Hard disagree. QA's main tasks are acceptance testing and V&V: they act as independent parties who verify that results are actually being delivered. Their importance is up there next to Product Owners.
The problem is that some QA roles are relegated to running the same old test scripts and doing bullshit manual exploratory tests that catch and assert nothing. It's the ultimate bullshit job because it's performative. This is clear in the way LLM agents are seen as threatening the very existence of the role.
> Good QA people know how to find regression and bugs _that you didn’t think about_ which is the whole reason why it shouldn’t be under “engineering” and that it should exist.
I think the core of the issue is the weasel phrase "good QA" instead of just "QA". You're underlining the importance of having someone on a team who has good product understanding and good ownership skills. How many QAs fit that description? Not many, unfortunately. Some FAANGs outright eliminated the role and replaced it with a mix of product owners and team ownership, and some QAs are just doing their 9-to-5 going through manual test scripts to verify that something meets a definition of done. What happens when an LLM agent can do the same instantly as a step in a random CI/CD pipeline?
> The attention this topic receives is disproportionate considering how rare we are, especially close to the Olympics level.
We all remember state-sponsored doping scandals from the 60s, where iron curtain nations invested heavily in medical research and experiments on prospective athletes to try to get medals. It's not hard to understand how badly this would turn out to be if the same sort of unscrupulous regime could just abuse this loophole to seek the same benefit.
> It's not hard to understand how badly this would turn out to be if the same sort of unscrupulous regime could just abuse this loophole to seek the same benefit.
Surely this is something that can be addressed if it ever becomes a problem. Surely we don't need to write rules for scenarios that aren't causing issues...
> Surely this is something that can be addressed if it ever becomes a problem.
You're advocating for creating pressure and incentives to commit this class of abuse, which has already been committed at an industrial scale for decades, and your strategy is to ignore history and the facts until the consequences of your actions catch up with you.
And all this in exchange for which tradeoff?
Even the fight against doping is far more proactive than what you are advocating.
Trans athletes at the Olympics are causing disruptions at an industrial scale?
There's been exactly one trans woman at the Olympics: Laurel Hubbard, competing for New Zealand. She won zero medals.
I'm advocating for "there is zero documented evidence this is a problem, the IOC should use their time and energy solving actual problems like doping."
All three medallists in the women's 800m at the 2016 Rio Olympics were male. This was highly controversial, as having three male athletes take gold, silver and bronze in what should have been a celebration of female athletic excellence wasn't exactly a desirable outcome.
Although the headline of the linked article focuses on males with a transgender identity, the purpose of the IOC's new policy is to exclude all male physiological advantage from the female category, including cases like the above.
I'm aware, but those women aren't trans. They have disorders of sex development and were assigned female at birth.
Laurel Hubbard is trans, was assigned male at birth, and competed under hormone therapy. (Which studies have shown reduces or eliminates the biological male advantage for trans women.)
We can discuss DSD AFAB athletes as well, but I was focused on trans athletes.
> As far as I see, this issue is only tangentially related to transgender rights.
It affects the rights of transgender people, so it is directly related to transgender rights. Also, I don't think it's at all a coincidence that the people spreading hate about transgender people are the same ones so concerned about this particular issue.
People spreading hate and prejudice always have <reasons>.
> We all remember state-sponsored doping scandals from the 60s
We all do? People born in the 1950s or earlier might remember, making them at least 65 years old. I've never heard of it from people of any age. In any case, it's hard to connect this 60 year old issue with today's decision.
If an unscrupulous regime wanted to get medals with that method they'd just give cis women testosterone during puberty. Nothing about the new trans-exclusionary standards would deter that.
No XY chromosomes, no SRY gene to test for. You're left with validating that someone's entire development happened in the absence of testosterone, which would, if it's even possible, require incredibly invasive and extensive testing.
> It's pointless to write a whole article about how model collapse is actually happening and isn't just a theoretical concern with no evidence that model collapse is actually happening.
Except, perhaps, the linked article on the peer-reviewed paper that describes the problem in detail.
> Researchers at Oxford and Cambridge published work on this back in 2023, showing how iterative training on synthetic data leads to progressive degradation.
Unfortunately, it's more than that: the Linux installation instructions on the certbot website[0] give options for pip or snap. Distro packages are not mentioned.
I feel no need to. I'm quite certain that the certbot folks are aware of the existence of distro packages and even know how to check https://pkgs.org/download/certbot for availability. One might guess that they only want to supply instructions for upstream-managed distribution channels rather than dealing with e.g. some ancient version being shipped on Debian.