> Models do not (broadly speaking) learn over time. They can be tuned by their operators, or periodically rebuilt with new inputs or feedback from users and experts. Models also do not remember things intrinsically: when a chatbot references something you said an hour ago, it is because the entire chat history is fed to the model at every turn. Longer-term “memory” is achieved by asking the chatbot to summarize a conversation, and dumping that shorter summary into the input of every run.
This is the part of the article that will age the fastest; it's already out of date in labs.
I'm struggling to see how that could possibly be true, unless we're counting automation of the "dumping that shorter summary into the input of every run" step.
I can imagine it being true with models so small that each user could afford their own, but not with big shared models like the ones backing all the major services. Is that what you mean?
I see nothing to preclude a foundation model being augmented by a smaller model that serializes particulars about an individual's cumulative interactions with it and streams them into the execution thread of the foundation model.
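For the curious, the "summarize and re-inject" pattern the quoted article describes can be sketched roughly like this (hypothetical Python; `call_model` and `summarize` are placeholders standing in for real LLM API calls):

```python
def call_model(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM API here.
    return f"reply to: {prompt[-40:]}"

def summarize(text: str, max_chars: int = 200) -> str:
    # Placeholder: a real implementation would ask the model to summarize.
    return text[-max_chars:]

class Chat:
    def __init__(self):
        self.summary = ""   # long-term "memory": a rolling summary
        self.history = []   # short-term memory: recent turns, verbatim

    def send(self, user_msg: str) -> str:
        # Every turn, the model sees the summary plus recent history --
        # the model itself remembers nothing between calls.
        prompt = f"Summary so far: {self.summary}\n"
        prompt += "\n".join(self.history) + f"\nUser: {user_msg}"
        reply = call_model(prompt)
        self.history.append(f"User: {user_msg}")
        self.history.append(f"Assistant: {reply}")
        # Periodically fold older turns into the summary so the
        # prompt stays bounded in size.
        if len(self.history) > 6:
            self.summary = summarize(self.summary + "\n".join(self.history[:4]))
            self.history = self.history[4:]
        return reply
```

Nothing here learns; the only state is the text shuttled back into the prompt, which is the point the article is making.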
AI is exactly the right term: the machines can do "intelligence", and they do so artificially.
Just like we have machines that can do "math", and they do so artificially.
Or "logic", and they do so artificially.
I assume we'll drop the "artificial" part in my lifetime, since there's nothing truly artificial about it (just as with math and logic); it's really just mechanical.
No one cares that transistors can do math or logic, and it shouldn't bother people that transistors can predict next tokens either.
> AI is exactly the right term: the machines can do "intelligence", and they do so artificially.
AI in pop culture doesn't mean that at all. Most people's impression of AI before the LLM craze came from some form of media based on Asimov's laws of robotics. Now that LLMs have taken over the world, people can define AI as anything they want.
In 2018, i.e. "pre-LLM", the label "AI" was already stamped on everything, so I highly doubt that most people thought their washing machines were sentient in any way. I remember this starkly, because my team at Ericsson (about 120k employees at the time) was responsible for one of the crucial steps in getting models into production, and basically every single project wanted that stamp.
The meaning has been slowly diluted, more and more, across decades.
> Just like we have machines that can do "math", and they do so artificially.
Nobody calls calculators "artificial mathematicians", though; we refer to them by a unique word that defines what they can and can't do in a far less fanciful and ambiguous way.
Yes, you're right! I'm just a dolt who's never checked what a .kext on OS X actually is.
I had been under the impression that DriverKit drivers were quite a different beast, but they're really not. Here's the layout of a NS ".config" bundle:
The driver itself is a Mach-O MH_OBJECT image, flagged with MH_NOUNDEFS (except for the _reloc images, which are MH_PRELOAD; no clue how these two files relate/interact!).
OS X added a dedicated image type (MH_KEXT_BUNDLE) and cleaned things up a bit, standardizing on plists instead of the "INI-esque" .table files, but yeah, basically the same.
IOKit was almost done in Java; C++ was the engineering plan to stop that from happening.
Remember: there was a short window of time where everyone thought Java was the future and Java support was featured heavily in some of the early OS X announcements.
Also DriverKit's Objective-C model was not the same as userspace. As I recall the compiler resolved all message sends at compile time. It was much less dynamic.
Mostly because they thought Objective-C wasn't going to land well with the Object Pascal / C++ communities, given those were the languages on Mac OS previously.
Note that Android Things did indeed use Java for writing drivers, and that on Android, since Project Treble and the new userspace driver model in Android 8, drivers are a mix of C++, Rust, and some Java, all talking to the kernel via Android IPC.
Yes, that's also the reason Java was originally introduced: Apple was afraid that a developer community educated in Object Pascal / C++ wouldn't be keen on learning Objective-C.
When those fears proved unfounded and devs actually welcomed Objective-C, they dropped Java and the whole Java/Objective-C runtime interop.
I'm very familiar with Clojure, but even I can't make a good argument that:
(tc/select-rows ds #(> (% "year") 2008))
is more intuitive than, or at least as intuitive as:
filter(ds, year > 2008)
as cited above. I think there's a good argument to be made that Clojure's data processing abilities, particularly around immutable data, make a compelling case in spite of the syntax. The REPL is great too, and the JVM is fast. But I still to this day imagine infix comparisons in my head and then mentally move the comparator to the front of the list to make sure I get it right.
I am really not in data science, and I have decent Clojure experience. Is there a reason anyone would pick Clojure over something like K? From what I understand, those array languages are really good for writing safe but efficient code on rectangular data.
Personally, I don’t see the need for this with NixOS. Setting aside the fact that Omarchy is way too opinionated (Basecamp installed by default?), NixOS is already quite composable, so you can easily build a well-formed experience out of isolated NixOS modules.
Why? Most people’s system configurations are publicly accessible on GitHub. Stuff like Omarchy only makes sense* when the system must be configured imperatively and there is a cost to trying things (accumulation of application residue). When you build your system declaratively you can just copy the bits you like from other people’s configs, or even just run their config as-is.
* IMO Omarchy doesn't make sense anyway; far too much opinion and too little utility. It's not a distro, it's some guy's overly promoted pile of crufty scripts and dotfiles.