Can someone explain to me what is so horrible about curly braces that we need a whole host of "human-friendly" configuration languages with nontrivial parsing just to get around them?
We are stuck in the old paradigm of characters taking up space on the screen, and in the idea that a markup language must be readable in a classic dumb TUI. If, just imagine it, we used some Unicode range of control characters for the semantic markup, with a standardized UX for it, we wouldn't need to use normal characters as delimiters or escape them in strings.
The following would have parseable structure, but would be free of visual noise.
Title: Markup languages: decades of going in the wrong direction
Keywords: hypertext,
delimiters,
ˋ, ", \
People have suggested using the control characters for CSV structured files. The problem is that they are impossible to edit.
Control characters are invisible; using them means changing text editors to display them. They are also, outside the usual ones, hard to type: the ASCII ones have Ctrl combos, but editors have long used those for other things.
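For what it's worth, the control-character idea is already sitting unused in ASCII itself: 0x1E (record separator) and 0x1F (unit separator) were designed for exactly this. A minimal Python sketch (field contents invented for illustration) shows why it's appealing on paper, since no quoting or escaping is ever needed:

```python
# Encode tabular data using ASCII's dedicated separator characters:
# 0x1E = record separator, 0x1F = unit (field) separator.
RS, US = "\x1e", "\x1f"

def encode(rows):
    # Fields may freely contain commas, quotes, and newlines,
    # because none of those characters act as delimiters.
    return RS.join(US.join(fields) for fields in rows)

def decode(blob):
    return [record.split(US) for record in blob.split(RS)]

rows = [
    ["title", 'He said "hi", then left'],  # commas and quotes are fine
    ["note", "line1\nline2"],              # embedded newlines too
]
assert decode(encode(rows)) == rows
```

It also demonstrates the editing problem described above: print the encoded blob and the fields appear visually glued together, because most terminals and editors render 0x1E/0x1F as nothing at all.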
Also, what is the difference between using some new character to start a block and using "{" or "\n"? Why invent a new thing to indicate a new level when we already have space and tab?
Well, in that case, it's all the ways that IDEs like to jank up whitespace, as well as the additional difficulty of knowing the 'context'.
With JSON it's fairly easy: if I want to end my structure as well as the structure containing it, I can just type }} and add the next element.
With whitespace you have to keep track of HOW MUCH whitespace, and trust me, once you've got people who are entirely inconsistent about how much whitespace they use, it becomes a huge PITA.
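That difference is easy to demonstrate: to a JSON parser, indentation is pure decoration, because the braces alone carry the nesting. A quick Python check (toy data, nothing from the thread):

```python
import json

# The same nested structure with wildly inconsistent indentation still
# parses identically, because the braces carry the nesting; the
# whitespace is noise to the parser.
messy = '{"a": {"b": {"c": 1}}, "d": 2}'
pretty = """{
      "a": {"b": {
  "c": 1}},
            "d": 2
}"""
assert json.loads(messy) == json.loads(pretty)
# In an indentation-based format, those two layouts would be two
# different (and mostly invalid) documents.
```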
Honestly, nothing. Except the endless debate on where the braces go, and how long they're allowed to stay on a single line.
It seems trivial, but replacing scope delimiters with a per-line signifier (i.e. indent) makes the scope of each line self-contained and sidesteps that discussion.
Is that worth YAYAML (Yet another YAML)? I don't know. But I certainly get the desire to skip the discussion :)
The language-learning premise in this post is a bit ridiculous - if I started with the goal of learning a language and ended up worrying about the asymptotic complexity of my automated k-book recommendation algorithm for arbitrary values of k, then I think I should worry about a serious case of procrastination.
But the algorithms are interesting, so I think a better title would have been "why submodular NP hard problems are cool" or something similar.
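For anyone curious what the submodular angle looks like concretely: picking k books to maximize the number of distinct words covered is an instance of maximum coverage, which is NP-hard, but the classic greedy algorithm is guaranteed to get within a factor of (1 - 1/e) of optimal. A toy Python sketch with made-up book vocabularies:

```python
# Greedy maximum coverage: repeatedly pick the book that adds the most
# not-yet-covered words. Coverage is submodular (marginal gains shrink
# as the covered set grows), which is what makes greedy near-optimal.
def greedy_cover(books, k):
    covered, picked = set(), []
    for _ in range(k):
        best = max(books, key=lambda b: len(books[b] - covered))
        if not books[best] - covered:
            break  # no remaining book adds anything new
        picked.append(best)
        covered |= books[best]
    return picked, covered

books = {  # toy vocabularies, invented for illustration
    "A": {"the", "cat", "sat"},
    "B": {"the", "dog", "ran", "far"},
    "C": {"cat", "dog"},
}
picked, covered = greedy_cover(books, 2)
```

Greedy first takes B (four new words), then A (two more); C would only have added one. That shrinking marginal gain is exactly what submodularity means.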
The thing about language is that words have a weird distribution. The most common 100 words show up in every single sentence, but then tons of "common" words show up statistically almost never. "Octopus" is a common word, but it's only going to be useful if you're talking to a marine biologist, or to a three-year-old that's obsessed with octopuses; otherwise you're hardly ever going to use it. There are a lot of words like that. "Spine" of a book? It's probably not "spine" in your target language.
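That "weird distribution" is roughly Zipf's law: the r-th most common word appears with frequency proportional to 1/r. A back-of-the-envelope Python check (the 50,000-word vocabulary size is an arbitrary assumption):

```python
# Zipf's law: the frequency of the word at rank r is proportional to 1/r.
N = 50_000                                  # assumed vocabulary size
weights = [1 / r for r in range(1, N + 1)]
total = sum(weights)

top_100_share = sum(weights[:100]) / total  # the everywhere-words
rank_20k_share = weights[19_999] / total    # an "octopus"-tier word

# Under this model, the 100 most common words cover close to half of
# all running text, while a word deep in the tail shows up only a few
# times per million tokens.
```

Which is why "knowing the top 100 words" and "knowing enough words to read a book" are such wildly different goals.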
Well, I'm sure you could build an amazing anti-procrastination app that has pluggable anti-procrastination strategies and uses the multi-armed bandit algorithm, as well as an RL-trained RNN to discover your personal, optimal schedule of anti-procrastination interventions, while automatically prompting an LLM to devise new strategies as soon as the old ones begin to lose their effectiveness and also giving you an option to post your anti-procrastination progress online and watch the anti-procrastination achievements of your friends or invite them to go on a virtual anti-procrastination quest together...
"Procrastination, perfectionism and writer's block are not moral flaws; nor are they caused by laziness, lack of discipline or lack of commitment. They are habits rooted in fear and scarcity - and the great news is that once we start alleviating our fears and resourcing ourselves abundantly, our procrastination and related problems are often remarkably easily solved."
It's directed at writers, but it's really for all perfectionists.
Agreeing with you that Duolingo seems more like a nudging/psychological manipulation testbed with a thin veneer of language learning on top to provide legitimacy.
But what makes you think that this is because "most consumers just want" it that way? The whole effect of dopamine hits is to manipulate what users believe they "want". But you cannot claim to be working in the interests of your users after you manipulated them.
I.e. if a user installed Duolingo because they genuinely wanted to learn the language and then got sidetracked by all the gamification stuff, I don't think you can say they "really" just wanted to play games the whole time.
(Duolingo is walking a fine line here, which was probably the reason they picked language learning in the first place: Because in that field, users really do want a certain degree of nudging and manipulation, to help them keep up with the tedious process of frequent repetition.
That was sort of the official value proposition of Duolingo, and I think the reason why many users installed it. It's also why many of the nudging strategies work at all: they can assume a cooperating user.
But if you use the app, you can see that it frequently tries to push beyond that mutually agreed purpose: Trying to upsell you to the paid version, invite friends, take part in global leaderboard challenges, etc - all of which has very little to do with language learning)
I think there is an important observation in it though: dynamic, loosely-typed languages will let you create code that "works" faster, but over the long run they lead to more ecosystem bloat, because the language leaves more unexpected edge cases for the programmer to decide how to handle.
Untyped languages force developers into a tradeoff between readability and safety that exists only to a much lesser degree in typed languages. Different authors in the ecosystem will make that tradeoff in a different way.
In my experience, this only holds true for small scripts. When you're doing scientific computing or deep learning with data flowing between different libraries, the lack of type safety makes development much slower if you don't maintain strict discipline around your interfaces.
For this particular example where they have to do a runtime parse to do the string to number conversion, yes. But in general static type checks are resolved at compile time, so they incur neither runtime cost nor do they increase the size of the resulting code. This is the primary benefit of doing static type checking.
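To make the compile-time/runtime split concrete, here's a Python sketch using type hints as a stand-in for a static type system (checked ahead of time by a tool like mypy): the annotations are inert at runtime, while the string-to-number parse is unavoidable runtime work:

```python
# A static checker (e.g. mypy) verifies the annotations before the
# program runs; at runtime they are just stored metadata, so they add
# no per-call checks and no extra generated code. The actual
# string-to-number conversion, by contrast, must happen at runtime.
def parse_port(raw: str) -> int:
    return int(raw)        # runtime cost: a real parse must happen

port: int = parse_port("8080")

# The annotations are inert metadata, not executed checks:
assert parse_port.__annotations__ == {"raw": str, "return": int}
# A static checker would reject parse_port(8080) before execution,
# whereas the parse inside still runs on every call.
```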
Sorry, but this makes no sense. Numerical instability would lead to random fluctuations in output quality, but not to a continuous slow decline like the OP described.
Heard of similar experiences from real-life acquaintances: a prompt worked reliably for hundreds of requests per day over several months - and then, when a newer model was released, the model suddenly started to make mistakes, ignore parts of the prompt, etc.
I agree, it doesn't have to be deliberate malice like intentionally nerfing a model to make people switch to the newer one - it might just be that fewer resources are allocated to the older model once the newer one is available, so the inference parameters change - but some effect around the release of a newer model does seem to be there.
I'm responding to the parent comment who's suggesting we version control the "model" in Docker. There are infra reasons why companies don't do that. Numerical instability is one class of inference issues, but there can be other bugs in the stack separate from them intentionally changing the weights or switching to a quantized model.
As for the original forum post:
- Multiple numerical computation bugs can compound to make things worse (we saw this in the latest Anthropic post-mortem)
- OP didn't provide any details on eval methodology, so I don't think it's worth speculating on this anecdotal report until we see more data
I agree, more automated tools for API migration would be a good next step, but I think that's missing the point a bit.
Read the actionable part of the "dependency error" mail again:
> Please reply-all and explain: Is this expected or do you need to fix anything in your package? If expected, have all maintainers of affected packages been informed well in advance? Are there false positives in our results?
This is not a hard fail and a demand that you go back and rewrite your package. Nor is it a demand that you go out on your own and write pull requests for all the dependent packages.
The only strict requirement is to notify the dependents and explain the reason for the change. Depending on the nature of the change, it may then be something the dependents can easily fix themselves - or, if they can't, you will likely get feedback on what you'd have to change in your package to make the migration feasible.
In the end, it's a request for developers to get up and talk to their users and figure out a solution together, instead of just relying on automation and deciding everything unilaterally. It's sad that this is indeed a novel concept.
(And hey, as a side effect: If breaking changes suddenly have a cost for the author, this might give momentum to actually develop those automated migration systems. In a traditional package repository, no one might even have seen the need for them in the first place)