
Another limitation: only five active user accounts (with UIs) per machine.

> Models do not (broadly speaking) learn over time. They can be tuned by their operators, or periodically rebuilt with new inputs or feedback from users and experts. Models also do not remember things intrinsically: when a chatbot references something you said an hour ago, it is because the entire chat history is fed to the model at every turn. Longer-term “memory” is achieved by asking the chatbot to summarize a conversation, and dumping that shorter summary into the input of every run.

This is the part of the article that will age the fastest, it's already out-of-date in labs.
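For context, the rolling-summary pattern the quoted paragraph describes can be sketched roughly like this (illustrative names only; `summarize` stands in for an actual model call that would compress the conversation):

```clojure
(require '[clojure.string :as str])

;; Stand-in for a model call that compresses the conversation so far.
(defn summarize [history]
  (str/join " | " (map :content history)))

;; Prepend a summary of older turns so the model "remembers" them,
;; even though the model itself is stateless between runs.
(defn build-prompt [history user-msg]
  [{:role :system :content (str "Summary so far: " (summarize history))}
   {:role :user :content user-msg}])
```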


I'm struggling to reckon how that can even possibly be true, unless we're counting automation of the "dumping that shorter summary into the input of every run" thing.

I can imagine it being true with models so small that each user could afford to have their own, but not with big shared models like what're getting used for all the major services. Is that what you mean?


> Is that what you mean?

I think the confusion is that, when I write "model", you read "LLM."

LLMs aren't the only kind of AI model, and they have the limitations Aphyr mentions, for the obvious reasons you're thinking of.

His mistake is thinking that's the only model that exhibits intelligence today, but it's not.


I see nothing to preclude a foundation model being augmented by a smaller model that serializes particulars about an individual's cumulative interaction with the model and then streams them into the execution thread of the foundation model.

Source?

In what way?

AI is exactly the right term: the machines can do "intelligence", and they do so artificially.

Just like we have machines that can do "math", and they do so artificially.

Or "logic", and they do so artificially.

I assume we'll drop the "artificial" part in my lifetime, since there's nothing truly artificial about it (just like math and logic), since it's really just mechanical.

No one cares that transistors can do math or logic, and it shouldn't bother people that transistors can predict next tokens either.


> AI is exactly the right term: the machines can do "intelligence", and they do so artificially.

AI in pop culture doesn't mean that at all. Most people's impression of AI before the LLM craze came from some form of media based on Asimov's laws of robotics. Now that LLMs have taken over the world, they can define AI as anything they want.


In 2018, i.e. “pre-LLM”, the label “AI” was already stamped on everything, so I highly doubt that most people thought their washing machines were sentient in any way. I remember this starkly, because my team at Ericsson (about 120k employees at the time) was responsible for one of the crucial steps in getting models into production, and basically every single project wanted that stamp.

The meaning has been slowly diluted more and more across decades.


> Most people's impression of AI before the LLM craze came from some form of media based on Asimov's laws of robotics.

I'll let you in on a secret: "positronic brains" are just very fast parallel computers running LLMs.


> Just like we have machines that can do "math", and they do so artificially.

Nobody calls calculators "artificial mathematicians", though; we refer to them by a unique word that defines what they can and can't do in a far less fanciful and ambiguous way.


As I remember it, they were basically the same—but IOKit is C++ (with restrictions) because 3rd party developers didn't want to learn Objective-C.

But that's a hazy, 20-year-old memory.


Yes, you're right! I'm just a dolt who's never checked what a .kext on OS X actually is.

I had been under the impression that DriverKit drivers were quite a different beast, but they're really not. Here's the layout of a NS ".config" bundle:

  ./CG6FrameBuffer.config/English.lproj
  ./CG6FrameBuffer.config/English.lproj/Info.rtf
  ./CG6FrameBuffer.config/English.lproj/Localizable.strings
  ./CG6FrameBuffer.config/CG6FrameBuffer_reloc
  ./CG6FrameBuffer.config/Default.table
  ./CG6FrameBuffer.config/Display.modes
  ./CG6FrameBuffer.config/CG6FrameBuffer
The driver itself is a Mach-O MH_OBJECT image, flagged with MH_NOUNDEFS (except for the _reloc images, which are MH_PRELOAD; no clue how these two files relate/interact!).

Now, on OS X:

  ./AirPortAtheros40.kext/Contents
  ./AirPortAtheros40.kext/Contents/_CodeSignature
  ./AirPortAtheros40.kext/Contents/_CodeSignature/CodeResources
  ./AirPortAtheros40.kext/Contents/MacOS
  ./AirPortAtheros40.kext/Contents/MacOS/AirPortAtheros40
  ./AirPortAtheros40.kext/Contents/Info.plist
  ./AirPortAtheros40.kext/Contents/version.plist
OS X added a dedicated image type (MH_KEXT_BUNDLE), cleaned things up a bit, and standardized on plists instead of the "INI-esque" .table files, but yeah, basically the same.

You're focusing on the executable format, which is very much not the driver model.

From here:

https://news.ycombinator.com/item?id=10006411

"At some stage in the future we may be able to move IOKit over to a good programming language"


IOKit was almost done in Java; C++ was the engineering plan to stop that from happening.

Remember: there was a short window of time where everyone thought Java was the future and Java support was featured heavily in some of the early OS X announcements.

Also DriverKit's Objective-C model was not the same as userspace. As I recall the compiler resolved all message sends at compile time. It was much less dynamic.


Mostly because they thought Objective-C wasn't going to land well with the Object Pascal / C++ communities, given those were the languages on Mac OS previously.

Note that Android Things did indeed use Java for writing drivers, and on Android, since Project Treble and the new userspace driver model in Android 8, drivers are a mix of C++, Rust, and some Java, all talking to the kernel via Android IPC.


There was also the Java-like syntax for ObjC but I don’t think that ever shipped.

> there was a short window of time where everyone thought Java was the future

Makes me think of how plists in macOS are xml because back then xml was the future


Yes, and for the same reason Java was originally introduced: Apple was afraid that a developer community educated in Object Pascal / C++ wouldn't be keen on learning Objective-C.

When those fears proved untrue, and devs actually welcomed Objective-C, they dropped Java and the whole Java/Objective-C runtime interop.


> Isn't it more-or-less the same as the Python REPL?

Not even close.


Also discussed in detail in Crafting Interpreters.[0]

[0] https://craftinginterpreters.com/


> Bridging that mismatch at the macro level seems like the harder problem than the basic REPL integration.

You can put entire compilers in macros (and people do; core.async in Clojure works this way), since macros are just functions that take and return code.
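As a minimal illustration of that point (`unless*` is a made-up name, chosen to avoid clashing with anything in clojure.core):

```clojure
;; A macro is just a function from code (data) to code (data):
;; it receives unevaluated forms and returns a new form.
(defmacro unless* [test then]
  `(if (not ~test) ~then))

;; (unless* false :ok) evaluates to :ok
```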


> is just so much more readable

I thought that too before I learned Clojure, now I find them equally readable.


I'm very familiar with Clojure, but even I can't make a good argument that:

    (tc/select-rows ds #(> (% "year") 2008))
is more intuitive than, or at least as intuitive as:

    filter(ds, year > 2008)
as cited above. I think there's a good argument to be made that Clojure's data processing abilities, particularly around immutable data, make a compelling case in spite of the syntax. The REPL is great too, and the JVM is fast. But to this day I still imagine infix comparisons in my head and then mentally move the comparator to the front of the list to make sure I get it right.


I am really not in data science, and I have decent Clojure experience. Is there a reason anyone would pick Clojure over something like K? From what I understand, those array languages are really good for writing safe but efficient code on rectangular data.


How about this?

    (filter ds (> year 2008))
That's a trivial Clojure macro to make work if it's what you find "intuitive."
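A sketch of what such a macro could look like (assuming tablecloth's `tc/select-rows` and string column names, as in the snippet above; `filter-ds` is a hypothetical name to avoid shadowing `clojure.core/filter`):

```clojure
(defmacro filter-ds
  "Rewrite an infix-ish predicate like (filter-ds ds (> year 2008))
   into a tc/select-rows call over the named column."
  [ds [op col v]]
  `(tc/select-rows ~ds (fn [row#] (~op (row# ~(name col)) ~v))))

;; (filter-ds ds (> year 2008))
;; expands (roughly) to:
;; (tc/select-rows ds (fn [row] (> (row "year") 2008)))
```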


What I'd like to see is Omarchy implemented via the Nix package manager. (Seems like a good project for AI, actually.)


Already exists, although I don’t know how well maintained it is: https://github.com/henrysipp/omarchy-nix

Personally, I don’t see the need for this with NixOS. Setting aside the fact that Omarchy is way too opinionated (Basecamp installed by default?), NixOS is already quite composable, so you can easily build a well-formed experience out of isolated NixOS modules.


Why? Most people’s system configurations are publicly accessible on GitHub. Stuff like Omarchy only makes sense* when the system must be configured imperatively and there is a cost to trying things (accumulation of application residue). When you build your system declaratively you can just copy the bits you like from other people’s configs, or even just run their config as-is.

* IMO Omarchy doesn’t make sense anyway, far too much opinion and too little utility. It’s not a distro it’s some guy’s overly promoted pile of crufty scripts and dotfiles.


You can point AI at the omarchy repo and have it generate a plan and then implement it step by step.

My entire config is almost "omarchy", at least visually with hyprland and some other packages.


> If you know of any other snippet of code that can master all that complexity as beautifully, I'd love to see it.

Electric Clojure: https://electric.hyperfiddle.net/fiddle/electric-tutorial.tw...


Sick!!! Great example! I'm actually a longtime friend and angel investor in Dustin but I hadn't seen this

