When I was in third grade, I decided I wanted to make computer games so I could have more of them. Dad got me started with GW-BASIC turtle graphics, and I made pictures with it - usually non-functional title screens for my games.
At some point I had made a small space ship and was able to make it turn around with the wonderful angle command [1]. However, I could not figure out how to make it move "forward" regardless of the angle.
I was also attending an after hours computer graphics club, mostly about Deluxe Paint, taught by a 20-something student (who much later went on to found a GPU company and got acquihired by ATI/AMD). He would help me occasionally, and in this case he took a tiny slip of paper and wrote down a couple of lines about sin and cos. No questions, no explanations, no gatekeeping.
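In hindsight, those couple of lines presumably boiled down to something like this (a modern TypeScript sketch rather than whatever was actually on the paper; the function name and the radians convention are mine):

  // Move "forward" along the current heading: the angle picks the
  // direction, cos/sin split the speed into x and y components.
  function stepForward(x: number, y: number, angle: number, speed: number) {
    return {
      x: x + Math.cos(angle) * speed,
      y: y + Math.sin(angle) * speed,
    };
  }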
Just like that I internalized this foundational piece of trig - later when it arrived in school maths it was easy and obvious for me. I had a practical application, but even more important, I think, was that it started as a need I had and, when given to me, felt like a gift and an enabler.
Still much later I studied Seymour Papert's pedagogy and understood I had lived it. I consider myself fortunate.

1: http://www.antonis.de/qbebooks/gwbasman/draw.html
Finnish has been very peripheral and isolated due to geography. It is closely related to Estonian, but remains much more similar to their common archaic root, while Estonian has streamlined and developed due to more contact and exchange.
In my experience search engines have rapidly deteriorated - probably because of the SEO arms race - and LLMs often feel like search engines used to feel back when they worked. Who knows what will happen once all the marketing attention shifts towards influencing LLM output.
It’s worth noting that C++ standard libraries have mostly moved away from copy-on-write strings, due to their poor performance in multithreaded scenarios. And JavaScript engines have ended up adding a bunch of optimizations that simulate mutable strings in certain common scenarios. It depends on what the code in question is doing, and I think the ideal scenario is to allow both in different contexts as long as they can be kept distinct.
Mutable string literals can't be easily deduplicated, unless your language semantics are that a literal is a singleton and all mutations are visible to all other evaluations of that literal. But no sane language would do that.
If the strings are backed by reference counted buffers, you can use copy-on-write semantics to provide the API of a mutable string but share buffers when a string is copied. Most C++ standard libraries actually did this prior to the multicore era.
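As a rough sketch of the mechanism (hypothetical TypeScript, not what any C++ standard library literally does): copies share one reference-counted buffer, and the first write through a handle detaches it onto a private copy.

  // Copy-on-write string sketch: clones share one buffer; a write
  // detaches onto a private copy when the buffer is shared.
  class CowString {
    private buf: { chars: string[]; owners: number };

    constructor(s: string) {
      this.buf = { chars: [...s], owners: 1 };
    }

    clone(): CowString {
      const copy = new CowString("");
      copy.buf = this.buf;  // share the existing buffer...
      this.buf.owners++;    // ...and bump its reference count
      return copy;
    }

    setChar(i: number, ch: string): void {
      if (this.buf.owners > 1) {  // shared: copy before writing
        this.buf.owners--;
        this.buf = { chars: [...this.buf.chars], owners: 1 };
      }
      this.buf.chars[i] = ch;
    }

    toString(): string {
      return this.buf.chars.join("");
    }
  }

  const a = new CowString("hello");
  const b = a.clone();  // no copying yet, the buffer is shared
  b.setChar(0, "H");    // b detaches; a still reads "hello"

The catch the earlier comment alludes to: in multithreaded C++ those owner counts have to be atomic, and every copy and destruction touches them, which is a large part of why the major standard libraries dropped COW strings.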
For sure. Data structures and call graphs like to converge, so when designing a data model, you are actually designing the (most natural) program flow too.
The recent C# feature called interceptors [1] pretty much looks like COMEFROM from where I stand. Yet everyone talking about it has either been serious, or very good at trolling.
they added a language feature which is sensitive to precise line/character offsets in your source code, so the tiniest change to the source code invalidates your code…
I’m speechless. Whatever they are aiming to achieve here, surely there is a more elegant, less ugly way
You are not supposed to use interceptors in code you write yourself. The feature exists for Roslyn Source Generators that run every time you build the code.
I’m still confused though, if you’re generating the code anyway, why do you need an interceptor? Can’t you just generate the code to match what you want to redirect to, directly inline?
Yes, if all the code were generated. The problem is when you want to modify the behavior of user-supplied code - Roslyn Source Generators are additive, so you cannot make modifications directly to user-supplied code.
Basically, a generator gets the files (and ambient metadata) that are part of the compilation, filters them down to the parts it depends on, transforms those into an in-memory representation of the data needed for code generation, and finally adds new files to the compilation. Since it can only add new files, it cannot, for example, add code to be executed before or after user code like with AOP. Interceptors are a solution to that problem.
Interesting, so you're saying generated code can change the behavior of user code with no indication, when reading the user code directly, that this is happening... that sounds pretty horrifying. I guess AOP in general is pretty horrifying to me though. Maybe it's useful if you restrict its use to very specific things like logging or something.
Well yes, hopefully you know what you are doing when you reference a source generator. This could of course also be done with a custom MSBuild task that modifies the code sent to the compiler, or the assembly after compilation (like Fody); Source Generators just make the process more streamlined and integrate with things like IntelliSense.
> Well yes, hopefully you know what you are doing when you reference a source generator
I don't think there's much that's scary about generating source code in general. If it's self-contained and you have to actually call the generated code to use it, it's not really much different than any other code. But the idea of having code A change the behavior of code B is what's horrifying, regardless of whether code A is generated or not. If I'm reading code B I want to be able to reason about what I see without having to worry about some spooky action at a distance coming from somewhere else.
Things are constantly doing this. Frameworks use reflection or markup or all other kinds of things that count as magic if you don't bother to understand what's going on.
I wrote blitters in assembly back in those days for my teenager hobby games. When I could actually target the 386 with its dword moves, it felt blisteringly fast. Maybe the 386 didn't run 286 code much faster, but I recall the chip being one of the most mind-blowing target machine upgrades I experienced. Much later I recall the FPU-supported quadword copy on the 486DX and of course P6 meeting MMX in the Pentium II. Good times.
You're 100% right that the 386 had a huge amount of changes that were pivotal in the future of x86 and the ability to write good/fast code.
I think a bigger challenge back then was the lack of software that could take advantage of it. Given the nascent state of the industry, lots of folks wrote for the 'lowest common denominator' and kept it at that (e.g. the expense of hardware made it impractical to test things like switching routines based on CPU detection).
And even then, of course, sometimes folks were lazy. One of my (least) favorite examples of this is the PC 'version' (it's not at all the original) of Mega Man 3. On a 486/33 you had the option of it being almost impossibly twitchy fast, or dog slow thanks to the turbo button. Or, the fun thing where Turbo Pascal-compiled apps could start crapping out if the CPU was too fast...
Sorry, I digress. The 386 was a seemingly small step that was actually a leap forward. Folks just had to catch up.
I was programming in Turbo Pascal at the time, which was still 16-bit. But when I upgraded my 286 to a Cyrix 486 on a 386 motherboard[1], I could use the full 32-bit registers by prefixing assembly instructions with the 0x66 operand-size override, emitted with db[1].
This was a huge boost for a lot of my 3D rendering code, despite the prefix not being free compared to pure 32-bit mode.
Imagine how it felt going from an 8086 @ 8 MHz to an 80486SX (the cheapo version without an FPU) @ 33 MHz, with blazingly fast REP MOVSD over some form of proto local bus that Compaq implemented using a Tseng Labs ET4000/W32i VGA chip.
> But I don't think you can limit people's wealth and not call it communism.
In communism, an individual cannot own any means of production - effectively 0% of the society's total capital. I don't think it follows that any non-communist system must permit any single individual to gain up to 100% of the society's wealth.
I don't know what the limit could look like or how to make it work, but societies commonly called capitalist already implement various brakes on free trade, from regulation to capital and immigration controls, subsidies, tariffs...
C++ monomorphises generics on demand too. That's why it can have errors specific to specialization and why template error messages spam long causal chains.
C++ compile times are due to headers, which in the case of templates result in a lot of redundant work that is then deduplicated by the linker.
In my mind Clojure is Lispy, Python is not, nor is Javascript.
In addition to REPL and macros, I think two other Lispy features are essential:
nil is not just the sad path poison value that makes everything explode: lisp is written so that optionals compose well.
Speaking of composing, Lisps tend to be amazing with regard to composability. This is another line that cuts between CL, Scheme and Clojure on one side, with Python and Javascript firmly on the other side in my experience.
Lisps are as dynamic as languages ever go, unapologetically.
I just wanted to add that "dynamic" doesn't mean untyped or weakly typed. Clojure is a strongly typed, dynamically typed PL. The ClojureScript compiler, for example, can in many cases produce safer JS code than even TypeScript ever could.
Out of curiosity, can you give an example of where ClojureScript is safer than TypeScript? I'm pretty far removed from the frontend world so this sounds pretty interesting to me.
The last time I did ClojureScript in a serious capacity was for a school project in 2021, specifically because I wanted to play with re-frame and the people who designed the project made the mistake of saying I could use "whatever language I want".
It makes sense, but I guess I didn't realize that ClojureScript generates some nice runtime wrappers to ensure correctness (or to at least minimize incorrectness).
I guess that means that if you need to do any kind of CPU-intensive stuff, ClojureScript will be a bit slower than TypeScript or JavaScript, right? In your example, you're adding an extra "if" statement to do the type check. Not that it's a good idea to use JS or TypeScript for anything CPU-heavy anyway...
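To make the "extra if" concrete, the kind of guard being described might look roughly like this if you wrote it by hand in TypeScript (purely illustrative, not actual ClojureScript compiler output):

  // Hand-written illustration of a runtime guard: anything that slips
  // past the static types (any/unknown, bad JSON, plain-JS callers)
  // fails loudly instead of silently misbehaving.
  function add(a: unknown, b: unknown): number {
    if (typeof a !== "number" || typeof b !== "number") {
      throw new TypeError("add expects two numbers");
    }
    return a + b;
  }

  add(1, 2);    // 3
  add(1, "2");  // type-checks (unknown accepts anything) but throws at
                // runtime instead of quietly producing "12"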
> ClojureScript will be a bit slower than TypeScript or JavaScript, right?
In rare cases, sure, it can add some overhead, and might not be suitable for, I dunno, game engines, etc., but in most use-cases it's absolutely negligible and brings enormous advantages otherwise.
Besides, there are some types of applications that are simply really difficult to build with a more "traditional" approach - watch this talk, I promise, it's some jaw-dropping stuff:
Having read Let over Lambda, I would say I find Javascript to be (a superset of?) a lispy language. If functional values with lexical binding are supported, then you get all the power of The Little Lisper.
Perhaps the macro facilities are also convenient but that is not the part that makes Lisp mathematical, it's the higher order programming.
And it needn't even be anything fancy: just being able to have a data table of tests and have the test functions generated and executed from that table demonstrates the power.
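A minimal TypeScript sketch of that idea (the table, square, and the tiny runner are all made up for illustration):

  // Table-driven tests: the cases are plain data, and a higher-order
  // runner turns each row into an executed check.
  type Case = { name: string; input: number; expected: number };

  const square = (n: number) => n * n;

  const cases: Case[] = [
    { name: "zero", input: 0, expected: 0 },
    { name: "negative", input: -3, expected: 9 },
    { name: "positive", input: 4, expected: 16 },
  ];

  const run = (fn: (n: number) => number, table: Case[]) =>
    table.forEach(({ name, input, expected }) =>
      console.log(`${fn(input) === expected ? "PASS" : "FAIL"} ${name}`));

  run(square, cases);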