Humans are older than money, so evidently we don't need it to survive, but there is more to existence than mere survival. I agree that people's basic needs should be taken care of, but I don't think that is something that needs to happen because of automation. It needs to happen because it is simply the right thing to do. I would go as far as saying it shouldn't just be basic needs. Society should be aiming to provide the entire hierarchy of needs for everyone.
I think having employment delivers some of the higher needs to a subset of people, but it is a privileged few. A huge number work just to provide for their basic needs. Advocating for using the advances in automation to raise everybody up is what we need. Instead we seem to be maintaining a system that gives a few what everyone wants, while the rest of us are too busy with the survival part to influence that change.
> Society should be aiming to provide the entire hierarchy of needs for everyone.
I don’t know. Society should provide the framework within which people can achieve their needs (and wants), but not the needs and wants themselves directly.
Otherwise you put an artificial cap on human growth and create an inefficient allocation of resources.
A lot of it is optimizing applications for higher-memory devices. RAM is completely worthless if it's not used, so ideally you should be running your software with close to maximum RAM usage for your device. Of course, the software developer doesn't necessarily know what device you will be using, or how much other software will be running, so they aim for averages.
For example, Java applications will claim much more memory than they need for the heap. Most of that memory will be unused, but it's necessary for the application to run fast. If you've ever run a Java app at a consistent 90% heap usage, you know it grinds to an absolute halt with constant garbage collection.
The same is true for caching techniques. Reading from storage is slow, so it often makes sense to put stuff in RAM even if you're not using it very often.
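As a minimal sketch of that trade-off (the slow `read_record` function is hypothetical, standing in for any storage read), a read-through cache deliberately keeps results resident in RAM so repeat accesses skip the disk:

```python
import functools
import time

@functools.lru_cache(maxsize=1024)  # holds up to 1024 results in RAM
def read_record(key):
    time.sleep(0.01)  # stand-in for a slow storage read
    return f"record-{key}"

read_record(42)  # slow: goes to "storage"
read_record(42)  # fast: served from the in-memory cache
```

Most of those 1024 cached entries may never be touched again, which is exactly the "unused but useful" RAM the parent comments describe.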
I also believe that this memory usage could be decreased significantly, but I don't know by how much (or how much would be worth it). Some RAM usage is genuinely useful, such as caching or things related to graphics. Some is cumulative bloat in applications, caused by developers not caring much or by duplication of bundled libraries.
But I remember that in 2016, a decade ago, Fedora Gnome consumed about 1.6GB of RAM on my PC with 2GB of RAM. Considering that a decade later the standard Ubuntu Gnome consumes only 400MB more, and that my new laptop has 16GB of RAM (the system may use more RAM when more RAM is installed), I think the increase is not that bad for a decade. I thought it would be much worse.
But why that much? The first computer I bought had 192MB of RAM, and I ran a 1600x1200 desktop with 24-bit color. When Windows 2000 came out, all of the transparency effects ran great. Office worked fine, as did Visual Studio and 1024x768 gaming (I know that's quite a step down from 1080p).
What has changed? Why do I need 10x the RAM to open a handful of terminals and a text editor?
> What has changed? Why do I need 10x the RAM to open a handful of terminals and a text editor?
It’s not a factor of ten, but a 4K monitor has about four times as many pixels. Cached font bitmaps scale with that, photos take more memory, etc.
> When Windows 2000 came out
In those times, when part of a window became uncovered, the OS would ask the application to redraw that part. Nowadays, the OS knows what’s there because it keeps the pixels around, so it can bitblit the pixels in.
Again, not a factor of ten, but it contributes.
The number of background processes likely also increased, and chances are you used to run fewer applications at the same time. Your handful of terminals may be a bit fuller now than it was back then.
Neither of those really explains why you need gigabytes of RAM nowadays, though, but then they didn't explain why Windows 2000 needed whatever it needed at the time, either.
The main real reason is “because we can afford to”.
Partly because we have more layers of abstraction. To take an extreme example, when you open a tiny < 1KB HTML file in any modern browser, the tab's memory consumption will still be on the order of tens, if not hundreds, of megabytes. This is because the browser has to load and initialize its entire huge runtime environment (JS / DOM / CSS, graphics, etc.) even though that tiny HTML file might use only a tiny fraction of the browser's features.
Partly because increased RAM usage can sometimes improve execution speed / smoothness or security (caching, browser tab isolation).
Partly because developers have less pressure to optimize software performance, so they optimize other things, such as development time.
Two programmers sat at a table. One was a youngster and the other an older guy with a large beard. The old guy was asked: "You. Yeah you. Why the heck did you need 64K of RAM?". The old man replied, "To land on the moon!". Then the youngster was asked: "And you, why oh why did you need 4Gig?". The youngster replied: "To run MS-Word!"
I remember running Xubuntu (XFCE) and Lubuntu (LXDE, before LXQt) on a laptop with 4 GB of RAM and it was a pretty pleasant experience! My guess is that the desktop environment is the culprit for most modern distros!
Well, to start, you likely have two screen-sized buffers, for the current and next frame. The primary code portion is drivers, since the modern expectation is that you can plug in pretty much anything and have it work automatically.
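Back-of-the-envelope numbers, assuming 4 bytes per pixel (e.g. BGRA): two full-screen buffers are small change at 1600x1200 but account for tens of megabytes at 4K, which is also why a 4K desktop inherently needs several times the pixel memory of the older setups mentioned above.

```python
def framebuffer_mib(width, height, buffers=2, bytes_per_pixel=4):
    """Memory for `buffers` full-screen pixel buffers, in MiB."""
    return buffers * width * height * bytes_per_pixel / 2**20

print(framebuffer_mib(1600, 1200))  # ~14.6 MiB, Windows 2000-era desktop
print(framebuffer_mib(3840, 2160))  # ~63.3 MiB, double-buffered 4K desktop
```

And that is just the final composited frames; every window that keeps its own backing store adds its own buffer on top.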
That is dependent upon the quality of the AI. The argument is not about the quality of the components but the method used.
It's trivial to say using an inadequate tool will have an inadequate result.
It's only an interesting claim if you are saying that no attainable quality of tool can produce an adequate result (in this argument, the adequate result in question is a developer with an understanding of what they produce).
The problem I see with this argument is that the ship sailed on understanding what you are doing years ago. It seems like it is abstraction layers all the way down.
If an AI is capable of producing an elegant solution with fewer levels of abstraction it could be possible that we end up drifting towards having a better understanding of what's going on.
This is the natural conclusion of what was really claimed about model collapse, and indeed natural evolution. Making an imperfect copy while invoking a selection mechanism is evolution.
Some of the claims about models training on their own data, in their enthusiasm to frame it as a failure, went further and suggested that it magnified biases. I had my doubts about those conclusions. If they were true, it would be a much greater breakthrough, because the ability to magnify a property implies a way to measure a weak version of that property. That, in turn, would mean they had found a way to provide a training signal to avoid bias. It would be great if that's what they did, but I suspect there would have been more news about it.
Perhaps this paper will put to rest the notion that AI output is useless as training data. It has only ever been the case that it was useless as an indiscriminate source of data.
I like the idea of tree curation. People view the branch of their interest. Anyone can submit anything to any point but are unlikely to be noticed if they submit closer to the trunk. Curated lists submit their lists to curators closer to the trunk.
The furthest branches have the least volume (need filters to stop bulk submission to all levels, but still allow some multi submission). It allows curators to contribute in a small field. They then submit their preferred items to the next level up. If that curator likes it they send it further. A leaf level curator can bypass any curator above but with the same risk of being ignored if the higher level node receives too much volume.
You could even run fully AI branches, whose picks would only make it all the way up by convincing a human curator somewhere above them of their quality. If they don't do a good job, they simply get ignored. People can listen to them directly if they are so inclined.
>It's sure baffling how Anthropic has kept Claude Code's plan mode so linear and inflexible
It's difficult to know what the appropriate process for a model would be without widespread deployment. I can see how they have to strike a fine balance between keeping up with what the feedback shows would be best and changing the way the user interacts with the system. Often it's easy to tell what would be better once something is deployed, but if people are productively using the currently deployed system you always have to weigh the advantage of a new method against the cost of people having to adapt. It is rare to make something universally better, and making things worse for users is bad.
Counting the r's in strawberry presents a different level of task than people imagine. It seems trivial to a naive observer because the answer is easily derivable from the question itself, without extra knowledge.
A more accurate analogy for humans would be to imagine that every word had a colour. You are told that there are also sequences of other colours that correspond to the same word as that colour. You are even given a book showing every combination to memorise.
You learn the colours well enough that you can read and write coherently using them.
Then comes the question of how many chocolate-browns are in teal-with-a-hint-of-red. You know that teal-with-a-hint-of-red is a fruit, and you know that the colour can also be constructed as crimson followed by Disney-blond. Now, do both of those contain chocolate-brown, or just one of them? And how many?
It requires exercising memory to do a task that is underrepresented in the training data, because humans simply never have to do it: for a human, the answer can be derived directly from the question's representation. Humans don't need the ability the LLM needs, because the letter representation makes that ability unnecessary.
That’s what makes it a fair evaluation, and something that requires improvement. We shouldn’t only evaluate agent skills by what is most commonly represented in training data. We expect performance from them in areas where existing training data may be deficient. You don’t need to invent an absurdity to find these cases.
It's reasonable to test their ability to do this, and it's worth working to make it better.
The issue is that people claim the performance is representative of a human's performance in the same situation. That gives an incorrect overall estimation of ability.