boutell's comments | Hacker News

For those who would like to try it for real rather than wait in a queue: it's a very fast model, and even on your CPU it doesn't take very long.

This repo is a convenience wrapper that sets up Python just so, runs ml-sharp, and then builds a static WebXR website so you can view the results on a Quest headset or similar.

I find it is most interesting for indoor scenes with people in them. It's a bit like a rag doll version of everyone. Outdoor scenes with nature tend to generate complex topologies of polygons that kill the headset's frame rate.

I've cleaned this up since my earlier submission. It should now work on any Mac or Linux environment.


Knowing all of these is exactly what a developer shouldn't need to do. Fix "big O" problems in your own code. Be aware of the few exceptionally weird, counterintuitive things if it matters on a "big O" level, like "you think this common operation is O(1) but it's actually O(N^2)", if there actually are any of those. And just get stuff done.
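
A concrete example of the trap I mean (Python, my own illustration, not from the article): a membership check against a list looks like one cheap operation, but it scans the whole list, so the first version below quietly does O(N^2) work while the set version stays O(N).

    # Illustrative only: "x in seen" on a list scans the whole list,
    # so each check is O(len(seen)) and the loop is O(N^2) overall.
    def dedupe_slow(items):
        seen = []
        out = []
        for x in items:
            if x not in seen:
                seen.append(x)
                out.append(x)
        return out

    # Same logic with a set: membership checks are O(1) on average,
    # so the whole loop is O(N).
    def dedupe_fast(items):
        seen = set()
        out = []
        for x in items:
            if x not in seen:
                seen.add(x)
                out.append(x)
        return out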

I guess you could find yourself in a situation where a 2X speedup is make or break and you're not a week away from needing 4X, etc. But not very often.


I like Claude, but this is an absolutely tone-deaf move on Anthropic's part.

I've been pondering that, given what the inputs are, LLMs should really be public domain. I don't necessarily mean legally; I know about transformative works and all that. I'm thinking more on an ethical level.


Socialized training. Socialized profits.


I love programming in listicle:

HERE ARE THE TOP 10 fibbonacci_numbers:

YOU WON'T BELIEVE n := 6


He lost me a bit at the end, talking about running chatbots on CPUs. I know it's possible, but it's inherently parallel computing, isn't it? Would that ever really make sense? I expected to hear something more like low-end consumer GPUs.

Recent-generation LLMs do seem to have some significant efficiency gains. And routers to decide whether you really need all of their power on a given question. And Google is building its custom TPUs. So I'm not sure I buy the idea that everyone ignores efficiency.


(Hi, Tom!) Reread the article and look for “CPU”. The whole article is about doing deep learning on CPUs, not GPUs. Moonshine, the open-source project and startup he talks about, demonstrates speech recognition and real-time translation on the device rather than on a server. My understanding is that doing The Math in parallel is itself a performance hack, but Doing Less Math is also a performance hack.
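
A rough sketch of the Doing Less Math lever (my numbers, nothing from the article): the dense matmuls that dominate these models cost roughly n * d * d multiply-accumulates per layer, so shrinking the model width d pays off quadratically on any hardware, CPU included.

    # Rough illustration only: multiply-accumulate count for a dense
    # (n x d) @ (d x d) matmul, for two hypothetical model widths.
    def matmul_macs(n_tokens: int, d_model: int) -> int:
        return n_tokens * d_model * d_model

    big = matmul_macs(n_tokens=100, d_model=4096)
    small = matmul_macs(n_tokens=100, d_model=1024)
    print(f"wide layer:   {big:.2e} MACs")
    print(f"narrow layer: {small:.2e} MACs")
    print(f"reduction:    {big / small:.0f}x")  # ~16x less math per layer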


Google owns 14% of Anthropic, and Anthropic is using Google TPUs, as well as AWS Trainium and of course GPUs. It isn't necessary for one company to create both the winning hardware and the winning software to be part of the solution. In fact, with the race in software this close, hardware seems like the better bet.

https://www.anthropic.com/news/expanding-our-use-of-google-c...


Had me going for a minute there.


Poe's Law strikes again!


This is absolutely the first thing I looked for too. They barely mentioned thermal management at all. Maybe they know something I don't, but I know from past posts here that many people share this concern. Very strange that they didn't go there. Or maybe they didn't go there because they have no solution, and this is just greenwashing for the costs of AI.


No, they just assumed their design fits within the operational envelope of a conventional satellite: the paper (which no one read, apparently) literally says their system design "assumes a relatively conventional, discrete compute payload, satellite bus, thermal radiator, and solar panel designs".

This is not the 1960s. Today, if you have an idea for doing something in space, you can start by scoping out the details of your mission plan and payload requirements, and then see if you can solve it with parts off a catalogue.

(Of course there are a million issues that will crop up when actually designing and building the spacecraft, but that's too low-level for this kind of paper, which just notes that (the authors believe) the platform requirements fall close enough to existing systems to not be worth belaboring.)


Since this isn't the 1960s, and it's Google with their resources, maybe they'd go for some superconducting logic based on Josephson junctions, like RSFQ? In parts, at least?

So they wouldn't have the burden of cooling it down first, like on earth? Instead being able to rely on the cold out there, as long as it stays in the shadow, or is otherwise isolated from sources of heat? Again, with less mess to deal with, like on earth? Since it's fucking cold up there already? And depending on the ratio of superconducting logic vs. conventional CMOS or whatever, less need to cool that, because superconducting stuff emits less heat, and the remaining 'smartphony' stuff is easy to deal with?

If I had those resources at hand, I'd try.


> Instead being able to rely on the cold out there, as long as it stays in the shadow, or is otherwise isolated from sources of heat?

All the sources of power to run anything are also sources of heat. Doesn't matter if they're the sun or RTGs; they're unavoidably all sources of heat.

> Since it's fucking cold up there already?

Better to describe it as an insulator, rather than hot or cold.

> If I had those resources at hand, I'd try.

FWIW, my "if I had the resources" thing would be making a global power grid. Divert 5% of current Chinese aluminium production for the next 20 years. 1 Ω the long way around when finished, and then nobody would need to care about the duty cycle of PV.
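
Back-of-envelope, with my own simplifications (a single pure-aluminium conductor the length of Earth's circumference, ignoring real grid topology), here's roughly what "1 Ω the long way around" implies:

    # R = rho * L / A, so A = rho * L / R; mass follows from A * L * density.
    RESISTIVITY_AL = 2.65e-8   # ohm * m, aluminium near 20 C
    DENSITY_AL = 2700.0        # kg / m^3
    LOOP_LENGTH = 4.0075e7     # m, roughly Earth's circumference

    def conductor_for(target_ohms):
        area = RESISTIVITY_AL * LOOP_LENGTH / target_ohms
        mass = area * LOOP_LENGTH * DENSITY_AL
        return area, mass

    area, mass = conductor_for(1.0)
    print(f"cross-section: {area:.2f} m^2")       # roughly 1 m^2
    print(f"aluminium:     {mass / 1e9:.0f} Mt")  # on the order of 100 Mt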

China might even do it; there was some news a while back about a trans-Pacific connection, but I didn't think to check whether it was some random Chinese company doing a fund-raising round with big plans, or something more serious.


The basic principle still is that you need to shed as heat whatever energy you absorb from the Sun. Electronics don't create extra heat; they convert electricity into it. So, unless I'm missing something, I'd expect any benefit of superconducting to manifest as less power required per unit of compute, or more compute per fixed energy budget. Power requirements can't go to zero, for fundamental reasons (the ones that make parts of CS into branches of physics).
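
To make that concrete (my own back-of-envelope, not from the paper): a radiator sheds heat at roughly εσAT⁴, so even a modest payload needs square metres of radiator at ordinary panel temperatures.

    # Back-of-envelope radiator sizing via the Stefan-Boltzmann law, ignoring
    # absorbed sunlight and Earthshine (my assumptions, not the paper's).
    SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

    def radiator_area_m2(heat_watts, temp_kelvin, emissivity=0.9):
        return heat_watts / (emissivity * SIGMA * temp_kelvin ** 4)

    # A 1 kW heat load with radiators held near room temperature:
    print(f"{radiator_area_m2(1000.0, 300.0):.1f} m^2")  # about 2.4 m^2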

> If I had those resources at hand, I'd try.

I would too, and maybe they will, eventually. This paper is merely exploring whether there's a point in doing it in the first place.


Not sure why there's so much love for the idea that the article is slop. I suffer through slop regularly, and this article didn't press that button for me at all.


Don't miss out on zooming in; each one is visualized so you can tell the non-Starlink ones apart.

