To be clear, a Cerebras chip consumes a whole wafer and has only 44 GB of SRAM on it. To fit a 405B model in bf16 precision (excluding KV cache and activation memory) you need 19 of these “chips” (and the requirement grows as the sequence length increases, because of the KV cache). Looking online, it seems one wafer can fit between 60 and 80 H100 dies, so using wafer manufacturing cost as the metric that is equivalent to using on the order of 1,100-1,500 H100s.
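For reference, a quick back-of-the-envelope sketch of that arithmetic (the 44 GB of SRAM and the 60-80 dies per wafer are the figures quoted above; treat both as rough assumptions, not measured values):

    import math

    # Figures quoted in the comment above; both are rough assumptions.
    params = 405e9              # model parameters
    bytes_per_param = 2         # bf16 = 2 bytes per parameter
    sram_per_wafer_gb = 44      # SRAM on one Cerebras wafer-scale chip
    h100_dies_per_wafer = (60, 80)

    weights_gb = params * bytes_per_param / 1e9            # 810 GB of weights alone
    wafers = math.ceil(weights_gb / sram_per_wafer_gb)     # -> 19 wafers

    low, high = (wafers * n for n in h100_dies_per_wafer)  # -> 1140 to 1520
    print(f"{weights_gb:.0f} GB of weights need {wafers} wafers,")
    print(f"i.e. roughly {low}-{high} H100 dies' worth of wafer area.")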
Beeper mini only worked with iMessage for a few days before Apple killed it. A few months later the DOJ sued Apple. Hacks like this show us the world we could be living in, a world which can be hard to envision otherwise. If we want to actually live in that world, we have to fight for it (and protect the hackers besides).
I mean, he was right … for what we knew at the time. He predicted correctly that achieving a general intelligence would require mimicking the extremely complex neural networks in our brain, something the hardware of the time was very far from being able to do. He could not predict that things would move so fast on the hardware side (nobody could have) and make this somewhat possible. I would argue we are still a bit short of the compute needed to make this a reality, but it is now much more obvious that it is possible if we continue on this path.
> He predicted correctly that achieving a general intelligence would require mimicking the extremely complex neural networks in our brain
Besides the name, neural networks and human brains don't have that much in common.
Most of the relevant similarities are there. Every plausible model in computational neuroscience is based on neural nets or a close approximation thereof; everything else is either magic or a complete non-starter.
ok - except detailed webs of statistical probabilities only emit things that "look right" .. not at all the idea of General Artificial Intelligence.
secondly, people selling things and people banding together behind one-way mirrors have a lot of incentive to devolve into smoke-and-mirrors.
Predicting is a social grandstand in a way, as well as insight. Lots of ordinary research has insight without grandstanding, so this is a media item as much as it is real investigation, IMHO.
To be honest, restricting funding for the kind of symbolic AI research criticized in this discussion might have helped AI more than it hurt, by eventually pivoting the field toward neural networks and backpropagation. I don't know how good it would have been if that kind of research had continued to be fully funded.
>except detailed webs of statistical probabilities only emit things that "look right" .. not at all the idea of General Artificial Intelligence.
I mean, this is what evolution does too. The variants that 'looked right' but were not fit to survive got weeded out. The variants that were wrong but didn't negatively affect fitness to the point of non-reproduction stayed around. Looking right and being right are not significantly different in this case.
yes, you have made the point that I argue against above. I claim that "looking right" and "being right" are absolutely and fundamentally different at the core. At the same time, I acknowledge that from a tool-use, utilitarian, automation, or sales point of view, results that "look right" can be applied for real value in the real world.
Many corollaries exist. My claim is that "looking right" is not at all General Artificial Intelligence, yes.
"Being right" seems to be an arbitrary and impossibly high bar. Human at their very best are only "looks right" creatures. I don't think that the goal of AGI is god-like intelligence.
Humans "at their very best" are at least trying to be right. Language models don't - they are not concerned with any notion of objective truth, or even with "looking right" in order to gain social status like some human bullshitter - they are simply babbling.
That this strategy is apparently enough to convince a large number of (supposedly) intelligent people otherwise is very troubling!
Not saying that General AI is impossible, or that LLMs couldn't be a useful component in its architecture. But what we have right now is just a speech center; what's missing is the rest of the brain.
Also, simply replicating / approximating something produced by natural evolution seems to me like the wrong approach, for both practical and ethical reasons: if we get something with >= human-like intelligence, it would be a black box whose workings we could never understand, and it might be a sentient being capable of suffering.
What makes it "much more obvious that it is possible" to simulate the human brain? If you're thinking of artificial neural nets, those clearly have nothing to do with human intelligence, which was very obviously not learned by training on millions of examples of human intelligence; that would have been a complete non-starter. But that's all that artificial neural nets can do, learn from examples of the outputs of human intelligence.
It is just as clear that human brains have one more ability beyond learning from observations, and that's the ability to reason from what is already known, without training on any more observations. That is how we can deal with novel situations that we have never experienced before. Without this ability, a system is forever doomed to be trapped in the proximal consequences of what it has observed.
And it is just as clear that neural nets are completely incapable of doing anything remotely like reasoning, much as the people in the neural nets community keep trying, and trying. The branch of AI that Lighthill almost dealt a lethal blow to (his idiotic report brought about the first AI winter), the branch inaugurated and championed by McCarthy, Michie, Simon and Newell, Shannon, and others, is thankfully still going and still studying the subject of reasoning - and making plenty of progress, while flying under the hype.
Well, it's a game engine, and the language it is based on is important; you would for sure pass on a game engine written in JavaScript and instead choose one that uses technologies you know.
So I think it is important to explicitly state which technologies a game engine uses.
Am I the only one not understanding the point of all these efforts to run Linux on completely closed, undocumented devices (like iPhones, and even Macs for that matter)? It seems like a lot of effort for a crappy solution that maybe 0.0001% of users will try before realizing it is crap. Just the effort of keeping up with Apple's yearly hardware updates, their complete disregard for backward compatibility, and the absence of hardware documentation makes these efforts little more than a technical exercise (very cool for sure), but I can't see anything more than that.
Isn't that one of the "true" mantras of hacking ("hacker news")?
Doing hard things just for the sake of it
> A hacker is a person skilled in information technology who uses their technical knowledge to achieve a goal or overcome an obstacle, within a computerized system by non-standard means.
It excites skilled hackers, which is good for the health of the overall community. Every platform has some poorly documented nooks and crannies and doesn't want to be fully opened up. This is practicing those skills in the most hostile environment.
Plus, it gives us backup plans, if Intel and AMD are hit by meteors.
And maybe they'll keep a couple iPads out of the landfills.
I see these as more art and a statement about Linux’s ubiquitousness rather than anything especially meaningful. There’s probably not a lot of people who want to run Doom on a lamp, but people did that anyway.
The point is that people in their free time, can break into systems that cost millions in design to keep those people out.
For me, the homebrew community is the reason why I have a career in technology in the first place. Breaking into things you're not supposed to is fun, and teaches a lot of practical knowledge.
I understand your gripe, but think of it like this: manufacturers like Apple make locked-down devices which are now able to be unlocked. This reduces e-waste while showing manufacturers that we want more freedom and will do as we please with OUR hardware.
This doesn’t meaningfully reduce e-waste. The average iPhone user is conditioned to trade in their phone, and those trade-ins already end up in decent recycling programs.
IMO, the main (only?) reduction of e-waste comes from reselling the old phones on the secondary market. Re-use is much more effective than recycling. Until the bitter end.
That's a second life, and a valid thing - but I'm specifically talking about people wanting to use (e.g.) an iPhone 6S (which will be discontinued soon).
Apple's hit something like 20% of their materials being from recycled products. This is a significant number and is much better than these things sitting around taking up space.
I respect that you are a utilitarian, so I understand your point. But you should also respect others' non-utilitarian hackerisms, and let them hack whatever device they want to, including "non-useful" iDevices.
The alternative is all those old devices becoming garbage - I mean "recycled", or so they say. And then you go buy a Pi or some crippled computing machine to run your home automation.