You seem to have a fundamental misunderstanding about type systems. Most (the best?) type systems are erased. This means they only have meaning at compile time, and they make sure your code is sound and preferably free of UB.
The "it's only bits" argument makes no sense in the world of types. In the end it's all machine code, which humans (in practice) never write or read.
I know, but a type system works by encoding what you want the data to do. Types are a metaphor, and their utility is only as good as how well the metaphor holds.
Within a single computer that's easy: a single machine is generally well behaved, you're not going to lose data, and so your type assumptions hold.
When you add distribution you cannot make as many assumptions, and as such you encode that into the type with a bunch of optionals. Once you have gotten everything into optionals, you’re effectively doing the same checks you’d be doing with a dynamic language everywhere anyway.
I feel like at that point the types stop buying you very much: your code doesn't end up looking or acting significantly different from the equivalent dynamic code, and the type annotations are just noise.
I like HM type systems, I have written many applications in Haskell and Rust and F#, so it’s not like I think type systems are bad in general or anything. I just don’t think HM type systems encode this kind of uncertainty nicely.
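To make the optionals point concrete, here is a hypothetical sketch in TypeScript (the types and field names are made up for illustration):

```typescript
// A record as it exists on one well-behaved machine.
interface User {
  id: string;
  name: string;
}

// The same record once it crosses a network boundary: every field
// may be missing, stale, or not yet replicated.
interface RemoteUser {
  id?: string;
  name?: string;
}

// The resulting checks look just like defensive dynamic-language code.
function displayName(u: RemoteUser): string {
  if (u.name !== undefined) {
    return u.name;
  }
  return u.id !== undefined ? `user ${u.id}` : "unknown user";
}
```

Once every field is optional, every access site needs a check like this, which is exactly the ceremony a dynamic language would impose anyway.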
Gleam is nice. However, it is still very lacking in the stdlib; you will need lots of dependencies to build something usable. I kind of wish Gleam could target something like Go; then you would have the option to go native without a "heavy" VM like the BEAM.
PHP suffers from this too. Through overly strict backwards compatibility, PHP has become a real mess of a language. IIRC there is still the ad-hoc parameter ordering across the standard library and no namespacing for builtins. Everything is global.
With JS I kind of get it, as you can't control the environment. But PHP does not have this limitation, and they still can't fix the mess that is PHP.
Personally I think a happy medium is to compile to C99. Then, after your own compiler's high-level syntax transformation pass, you can pass it through the Tiny C Compiler, which is roughly 10x faster than Clang -O0. When you need performance optimizations at the cost of build speed, or a compilation target that TCC does not support, you can freely switch to compiling with Clang, getting much of the value of LLVM without ever specifically targeting it. This is what I do for my own language, and it makes my life significantly easier and is perfectly sufficient for my use, since as with most languages mine will never be used by millions of people (or perhaps only ever one person, as I have not deigned to publish it).
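As a sketch, the backend split boils down to something like this (the function name, flags, and the driver being written in TypeScript are all illustrative; it just constructs the command line for the generated C99 file):

```typescript
// Hypothetical build driver: after emitting C99, choose a backend
// C compiler depending on whether we want a fast dev build or an
// optimized release build.
function backendCommand(cFile: string, out: string, release: boolean): string[] {
  return release
    ? ["clang", "-O2", "-o", out, cFile] // slow, optimized: LLVM does the heavy lifting
    : ["tcc", "-o", out, cFile];         // fast dev builds via Tiny C Compiler
}
```

The point is that the language itself only ever knows about C99; which backend consumes it is a per-build decision.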
I think writing a compiler targeting machine code from scratch only really makes sense if you have Google's resources, as Go did. That includes both the money and the talent pool of employees that can be assigned to work on the task full-time; not everyone has Ken Thompson lying around on payroll. To do better than LLVM is a herculean feat, and most languages will never be mainstream enough to justify the undertaking; indeed I think an undertaking of that scale would prevent a language from ever getting far enough along to attract users/contributors if it doesn't already have powerful backing from day 0.
That might be convenient if your language has semantics that map well-ish to C99 semantics. But C is a really messy language with lots of little quirks. For example, Rust code would compile to something slower if it had to use C as an intermediate representation (the aliasing guarantees of `&mut` references, for instance, are hard to express in portable C).
Also, compiled languages want accurate and rich debug info. All of that information would be lost.
Compile times are an issue, not only for LLVM itself but also for users, with Rust as a prime example. Rust has horrible compile times for anything larger, which makes it a real PITA to use.
I think that’s primarily a Rust issue, not an LLVM issue. LLVM is at least competitive performance-wise in every case I’ve used it, and is usually the fastest option (given a specific linker behavior) outright. That’s especially true on larger code bases (e.g. chromium, or ZFS).
Rust is also substantially faster to compile than it was a few years ago, so I have some hope for improvements in that area as well.
It boils down to hardware support. Look at games like CS2: you need maximum performance to play, and so far Windows is just better optimized, as the driver authors prioritize it.
I know why TypeScript "succeeded", but I always wonder what kind of web we would have if Haxe had in fact become more popular for the web in the early days. My guesstimate is we would have had bundlers written in native code much, much earlier, and generally much faster and more robust tooling. It's only now that we see projects like esbuild, TS itself being rewritten in a compiled language (Go), and other efforts in Rust.
Also it would have been interesting to see what influence Haxe would have had on JavaScript itself.
That's true, but it comes with a cost. TS has become an incredibly complex language, even though it only provides types. That being said, it will always lack features, as it is only "JavaScript".
Haxe has a more robust but simpler type system, which comes from OCaml.
Haxe also compiles to C++, so making native tools would have been a breeze.
TS sits in the top chair, but there are many "better" options out there, like Elm (even if it's kind of a dead language), ReScript/ReasonML, etc.
TS is good, but will never be a perfect language for JavaScript. It will always be a mediocre option, one that grows more and more complex with every new release.
Yes, amazing language - I recall having a look at it in 2013 when I was scrambling for a replacement for Flash (also amazing platform). Shame it doesn't solve the problem at hand.
Hardly anyone cares that TypeScript isn't perfect when they can migrate their (or anyone else's) terrible but business-critical JS library to TS and continue development without skipping a beat.
For the same reason ReasonML took years to overtake fartscroll.js in the number of stars on GitHub.
A huge part of TS's complexity is there so that library authors can describe some exotic behaviours seen normally only in dynamically-typed languages. When you're writing an app you don't need the vast majority of those features. Personally I regret every usage of `infer` in application code.
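As a made-up illustration, `infer` powers conditional types like the following, which library authors use to pull other types apart but which application code rarely needs:

```typescript
// Conditional type using `infer`: extract the element type of an array.
type ElementOf<T> = T extends (infer E)[] ? E : never;

type N = ElementOf<number[]>; // resolves to number
type S = ElementOf<string[]>; // resolves to string

// In application code, a plain annotation usually does the same job
// with far less type-level machinery.
const first: ElementOf<number[]> = [1, 2, 3][0];
```

This kind of type-level programming is powerful for typing exotic JS APIs, but in an app it tends to obscure rather than document intent.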
> For the same reason ReasonML took years to overtake fartscroll.js in the number of stars on GitHub.
Wow, that's a fascinating statistic! Certainly puts the popularity delta into a different light.
On a separate note, the fartscroll.js demo page[0] no longer works since code.onion.com is no longer accessible. Truly disappointing. Luckily their release zip contains an example.html!
This is pretty cool. The only issue (a huge one, imho) is the fact that the MacBook screen does "not go all the way" flat, meaning you can't use it the way you normally would to draw or write; you're stuck working at a 90-degree angle.
I feel it depends on whether you inspect and edit the code as part of the workflow, or just test what the AI produced and give feedback without participating in the coding yourself.
Most of the slop I witness is the latter. This is evident in huge, multi-10K-line pull requests. The code is just an artifact, while the prompting is the "new" coding.