Genuine questions: why are they calling it “reciprocal”? Is the US just matching the tariffs set by the other countries?
Also, this announcement has wiped out any plans of buying tech products this year, plus a holiday to the US and Canada later in the year. Good thing too, as the entire globe is probably staring down the barrel of a recession.
Someone reverse-engineered the formula. They divided each country's trade deficit with the US by that country's exports to the US (i.e., US imports from it) and assumed the entire ratio was a tariff.
So for example the US imported about $28 billion of goods from Indonesia and has a $17.9 billion trade deficit with it. 17.9/28 = 0.64, or 64%, which is assumed to be entirely caused by tariffs. So they divide by two and impose 32%.
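For reference, a quick sketch of that apparent formula (this is my reading of the numbers in this thread, not an official source; the 10% floor is an assumption based on the announced baseline rate):

```typescript
// Apparent "reciprocal" rate: half of (US goods deficit / US imports from
// that country), with what looks like a 10% baseline floor. Illustrative only.
function claimedReciprocalRate(importsBn: number, deficitBn: number): number {
  const impliedTariff = deficitBn / importsBn; // treated as if it were "their tariff"
  return Math.max(0.10, impliedTariff / 2);    // halved, floored at 10%
}

console.log(claimedReciprocalRate(28, 17.9));     // Indonesia: ~0.32 -> 32% imposed
console.log(claimedReciprocalRate(136.6, 123.5)); // Vietnam:   ~0.45 -> announced 46%
```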
Anyway, no, the US isn't matching tariffs; it's dramatically exceeding them.
That's a bit of a messed up way to calculate things.
I also think the US deficits are hugely overstated because much of what the US produces is intellectual capital rather than physical goods and the profits are made to appear in foreign subsidiaries for tax reasons. Like if I buy Microsoft stuff in the UK, Microsoft make out it was made in Ireland for tax purposes, but really the value is created in and owned by the US. The US company both wrote the software and owns Microsoft Ireland. So much of the perceived unfairness Trump is having a go at isn't real.
You raise an excellent point that US corporate tax evasion is exaggerating the trade deficit. However, from the perspective of winning US elections, I think it does not change the issue that the trade deficit falls more on de-industrializing Midwestern states, and the corporations you are referring to are concentrated in Northeastern and Western states.
Secondly, if Microsoft or Apple makes the profit appear in Ireland, it cannot move that money back to the domestic US, right? So as long as the money sits overseas, it would not count towards US trade and thus the deficit calculation is fair.
They don't move the profit back to the US, but through Ireland and the Netherlands they move it out of the EU, mostly to tax havens in the Caribbean. From there it funds their stock buybacks, which I think mostly amounts to it flowing back into the US.
Again, not flowing back to the right people. All of this could have been solved by sane redistribution, but no. It'll still be redistribution but in a cruder, less apparent form.
If the profits went back to Apple HQ directly they would serve to raise the share price and allow stock buybacks and stock based compensation for employees. Same as they do now.
You may not like a tech company succeeding at exports and having a rising share price, but that is distinct from the overall point which is that properly considered these are US exports obscured by the US tax code which incentivizes profits abroad.
That's a great point. I checked into this, and if and when the profits are repatriated they indeed only show up in the capital account, not the current account.
However, in practice, even if not repatriated, those exports show up in the US economy. Profits raise the share price, which allows stock grants at higher values, effectively a wage, as one example.
I wonder how big an effect this phenomenon you highlight has. Must be a fairly large overstatement of the US trade deficit.
If the US has a trade deficit, doesn't that mean the US is trading make-believe pieces of paper for real goods?
Like, if I scribble on a piece of paper and then trade you the piece of paper for an incredibly engineered brand new laptop, is that bad for me? Is this a sign of my weakness?
I know economics can be complicated, and probably "it depends", but why is a trade deficit bad? Why does the Trump administration want to eliminate trade deficits?
Because when the blowback comes, people will be looking to cast blame for starting this whole trade war, and when that time comes Trump will point to the word "reciprocal" and say "we didn't start this, we were only reciprocating".
> Is the US just matching the tariffs set by the other countries?
No. Trump claims that the new tariffs are a 50% discount on what those countries tariff US goods at. (Even if that's questionable - is VAT a tariff?)
If he's correct, or anywhere close, this is a "tough love" strategy to force negotiations. We'll see how it goes. It also plays to his base - why should we tariff any less than they do us? And they have a point, it's the principle of the thing.
According to [1], the White House claims Vietnam has a 90% tariff rate.
According to [2], 90.4% is the ratio of the US trade deficit with Vietnam to Vietnam's exports to the US -- a $123.5B deficit on $136.6B of exports.
The same math holds for other countries: e.g. Japan's claimed 46% tariff rate is the US deficit of $68.5B on $148.2B of Japanese exports, and the EU's claimed 39% tariff rate is the deficit of $235.6B on $605.8B of exports.
Who knows, maaaaybe it just so happens that these countries magically have tariff rates that match the ratio of their trade deficits.
Or maybe, the reason Vietnam doesn't buy a lot of US stuff is because they're poor. The reason they sell the US a bunch of stuff is because their labour is cheap to Americans. (They do have tariffs, but they're nowhere near 90%: [3].)
America's government is not trustworthy. Assuming that what they say is truthful is a poor use of time.
Prior to yesterday's announcement, the claim regarding tariffs was that the goal was to bring manufacturing back to American soil. This is unlikely to happen in any case, but it requires at minimum that consumers put up with high prices for a while (with "a while" being measured in years, if not decades). Actually, the "liberation day" tariffs strongly agree with this goal: after speculation, the administration announced the formula for these new tariffs, which has nothing to do with counter-tariffs or trade barriers as claimed, and instead comes from the ratio of the goods trade deficit to imports from that country. In other words, countries that export a lot of goods to the US (without the US having commensurate goods exports to them) get high tariffs. This makes sense if the goal is to incentivize manufacturing in the US, by making manufactured goods from outside more expensive.
There is another camp that thinks that Trump doesn't really have a goal per se, and is rather doing all this as an exercise in showing off his strength and drawing attention to himself. This camp holds that eventually Trump will get bored, or the public will turn on him, and he'll need to get rid of the tariffs to save face. We call these people "optimists".
Tangentially related: is there a good guide or set of setup scripts for running self-hosted Postgres with backups and a secondary standby? I just want something I can deploy to a VPS/dedicated box for all my side projects.
If not, is Supabase the most painless way to get started?
Using bun has been a great experience so far. I used to dread setting up typescript/jest/react/webpack for a new project with breaking changes all over the place. With bun, it’s been self contained and painless and it just works for my use. Can’t comment on the 3rd party libraries they are integrating like s3, sql etc but at least it looks like they are focused on most common/asked for ones.
Thanks for the great work and bringing some much needed sanity in the node.js tooling space!
Last I tried (several months ago) it didn't: the built-in frontend bundler was not very useful, so everybody just used 3rd-party bundlers, and (for most people) there was no meaningful difference compared to Node.js. They seem to be putting more effort into the bundler now, so it looks like it can handle plain SPA applications just fine (no SSR). The bundler is inspired by esbuild, so you can expect similar capabilities.
IMO the main benefit of using their bundler is that things (imports/ES-modules, typescript, unit tests, etc) just behave the same way across build scripts, frontend code, unit tests, etc. You don't get weird errors like "oh the ?. syntax is not supported in the unit test because I didn't add the right transform to jest configuration. But works fine in the frontend where I am using babel".
But if you want to use vercel/nextjs/astro you still are not using their bundler so no better or worse there.
not up to this point, but with this release, bun is now a bundler.
That means potentially no webpack, vite and their jungle of dependencies. It's possible to have bun as a sole dependency for your front and back end. Tbh I'll likely add React and co, but it's possible to do a vanilla front end with plain web components.
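For what it's worth, here's a minimal sketch of what that can look like with Bun's bundler API (the entry-point path and options are just illustrative, not from this thread):

```typescript
// build.ts -- run with `bun run build.ts`; bundles a TS/TSX entry point
// with no separate transpiler, test runner, or bundler config.
const result = await Bun.build({
  entrypoints: ["./src/index.tsx"], // hypothetical entry point
  outdir: "./dist",
  target: "browser",
  minify: true,
});

if (!result.success) {
  // Print bundler diagnostics instead of failing silently.
  for (const message of result.logs) console.error(message);
  process.exit(1);
}
```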
I have been setting up these React/TS/etc projects with Vite or Next.js just fine; I think you're underestimating how much progress has happened in other tooling as well.
Idk about Next 15, but you can literally bootstrap Next 13 using a single index.tsx, with typescript & next being the only 2 dependencies in package.json. No TypeScript is fine too.
It's not new, has been the case for a few years, so honestly I don't get people complaining about next's complexity.
The timing on when this essay is being published is interesting. Are all the tech billionaires falling in line before the next administration takes over? Also, let this be a lesson that no matter how “brilliant” and rich someone is, they can have comically bad takes.
I feel this is bigger than the 5x series GPUs. Given the craze around AI/LLMs, this can also potentially eat into Apple’s slice of the enthusiast AI dev segment once the M4 Max/Ultra Mac minis are released. I sure wished I held some Nvidia stocks, they seem to be doing everything right in the last few years!
This is something every company should make sure they have: an onboarding path.
Xeon Phi failed for a number of reasons, but one where it didn't need to fail was the availability of software optimised for it. Now we have Xeons and EPYCs, and MI300Cs with lots of efficient cores, but we could have been writing software tailored for those for 10 years now. Extracting performance from them would be a solved problem at this point. The same applies to Itanium - the very first thing Intel should have made sure it had was good Linux support. They could have had it before the first silicon was released. Itanium was well supported for a while, but it's long dead by now.
Similarly, Sun failed with SPARC, which also didn't have an easy onboarding path after they gave up on workstations. They did some things right: OpenSolaris ensured the OS remained relevant (still is, even if a bit niche), and looking the other way on x86 Solaris helped people learn and train on it. Oracle Cloud could, at least, offer it on cloud instances. Would be nice.
Now we see IBM doing the same - there is no reasonable entry level POWER machine that can compete in performance with a workstation-class x86. There is a small half-rack machine that can be mounted on a deskside case, and that's it. I don't know of any company that's planning to deploy new systems on AIX (much less IBMi, which is also POWER), or even for Linux on POWER, because it's just too easy to build it on other, competing platforms. You can get AIX, IBMi and even IBMz cloud instances from IBM cloud, but it's not easy (and I never found a "from-zero-to-ssh-or-5250-or-3270" tutorial for them). I wonder if it's even possible. You can get Linux on Z instances, but there doesn't seem to be a way to get Linux on POWER. At least not from them (several HPC research labs still offer those).
1000% all these ai hardware companies will fail if they don't have this. You must have a cheap way to experiment and develop. Even if you want to only sell a $30000 datacenter card you still need a very low cost way to play.
Sad to see big companies like Intel and AMD don't understand this, but they've never come to terms with the fact that software killed the hardware star.
People tend to limit their usage when it's time-billed. You need some sort of desktop computer anyway, so, if you spend the 3K this one costs, you have unlimited time of Nvidia cloud software. When you need to run on bigger metal, then you pay $2/hour.
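Back-of-envelope on that trade-off (numbers from this thread; electricity, depreciation, and resale value ignored, so purely illustrative):

```typescript
// When does a ~$3k local box beat renting bigger metal at ~$2/hour?
const boxPriceUsd = 3000;
const cloudUsdPerHour = 2;

const breakEvenHours = boxPriceUsd / cloudUsdPerHour; // 1500 GPU-hours
const workingDays = breakEvenHours / 8;               // ~188 eight-hour days
console.log(`Break-even after ~${breakEvenHours} hours (~${workingDays} working days)`);
```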
Yes. Most people make do with a generic desktop and an Nvidia GPU. What makes this machine attractive is the beefy GPU and the full Nvidia support for the whole AI stack.
I have the skills to write efficient CUDA kernels, but $2/hr is 10% of my salary, so no way I'm renting any H100s. The electricity price for my computer is already painful enough as is. I am sure there are many eastern European developers who are more skilled and get paid even less. This is a huge waste of resources all due to NVIDIA's artificial market segmentation. Or maybe I am just cranky because I want more VRAM for cheap.
This has 128GB of unified memory. A similarly configured Mac Studio costs almost twice as much, and I'm not sure the GPU is in the same league (software-support-wise, it isn't, but that's fixable).
A real shame it's not running mainline Linux - I don't like their distro based on Ubuntu LTS.
$4,799 for an M2 Ultra with 128GB of RAM, so not quite twice as much. I'm not sure what the benchmark comparison would be. $5,799 if you want an extra 16 GPU cores (60 vs 76).
We'll need to look into benchmarks when the numbers come out. Software support is also important, and a Mac will not help you that much if you are targeting CUDA.
I have to agree the desktop experience of the Mac is great, on par with the best Linuxes out there.
A lot of models are optimized for Metal already, especially Llama, DeepSeek, and Qwen. You are still taking a hit, but there wasn't an alternative solution for getting that much VRAM for less than $5k before this Nvidia project came out. Will definitely look at it closely if it isn't just vaporware.
They can't walk it back now without some major backlash.
The one thing I wonder is noise. That box is awfully small for the amount of compute it packs, and high-end Mac Studios are 50% heatsink. There isn’t much space in this box for a silent fan.
It really mystifies me that Intel, AMD, and other hardware companies (obviously Nvidia in this case) don't either form a consortium or each maintain their own in-house Linux distribution with excellent support.
Windows has always been a barrier to hardware feature adoption for Intel. You had to wait 2 to 3 years, sometimes longer, for Windows to get around to providing hardware support.
Any OS optimization in Windows had to go through Microsoft. So say you added some instructions, custom silicon or whatever to speed up enterprise databases, or high-speed networking that needed some special kernel features, etc. - there was always Microsoft in the way.
And not just foot-dragging in communication - even getting a line to their technical people was a problem.
Microsoft would look at every single change: whether or not it would challenge their monopoly, whether or not it was in their business interest, whether or not it kept you, the hardware vendor, in a subservient role.
From the consumer perspective, it seems that MSFT has provided scheduler changes fairly rapidly for CPU changes, like X3D, P/e cores, etc. At least within a couple of months, if not at release.
AMD and Intel work directly with Microsoft when shipping new silicon that requires such changes.
Raptor Computing provides POWER9 workstations. They're not cheap, still use last-gen hardware (DDR4/PCIe 4 ... and POWER9 itself) but they're out there.
Raptor's value proposition is a 100% free and open platform, from the firmware and up, but, if they were willing to compromise on that, they'd be able to launch a POWER10 box.
Not sure it'd be competitive in price with other workstation-class machines. I don't know how expensive IBM's S1012 deskside is, but with only 64 threads, it'd be a meh workstation.
There were Phi cards, but they were pricey and power hungry (at the time, now current GPU cards probably meet or exceed the Phi card's power consumption) for plugging into your home PC. A few years back there was a big fire sale on Phi cards - you could pick one up for like $200. But by then nobody cared.
The developers they are referring to aren’t just enthusiasts; they are also developers who were purchasing SuperMicro and Lambda PCs to develop models for their employers. Many enterprises will buy these for local development because it frees up the highly expensive enterprise-level chip for commercial use.
This is a genius move. I am more baffled by the insane form factor that can pack this much power inside a Mac Mini-esque body. For just $6000, two of these can run 400B+ models locally. That is absolutely bonkers. Imagine running ChatGPT on your desktop. You couldn’t dream about this stuff even 1 year ago. What a time to be alive!
The 1 petaFLOP and 200B-parameter model capacity specs are for FP4 (4-bit floating point), which means inference, not training/development. It'd still be a decent personal development machine, but not for that size of model.
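Rough arithmetic behind that, just to make the point concrete (weights only; KV cache and activations ignored):

```typescript
// Why 4-bit weights are what make a ~200B-parameter model fit in 128 GB.
const params = 200e9;

const fp16GB = (params * 2) / 1e9;   // 2 bytes/param   -> ~400 GB, doesn't fit
const fp8GB  = (params * 1) / 1e9;   // 1 byte/param    -> ~200 GB, doesn't fit
const fp4GB  = (params * 0.5) / 1e9; // 0.5 bytes/param -> ~100 GB, fits

console.log({ fp16GB, fp8GB, fp4GB });
```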
This looks like a bigger brother of Orin AGX, which has 64GB of RAM and runs smaller LLMs. The question will be power and performance vs 5090. We know price is 1.5x
> Nvidia says that two Project Digits machines can be linked together to run up to 405-billion-parameter models, if a job calls for it. Project Digits can deliver a standalone experience, as alluded to earlier, or connect to a primary Windows or Mac PC.
I’m not so sure it’s negligible. My anecdotal experience is that since Apple Silicon chips were found to be “ok” enough to run inference with MLX, more non-technical people in my circle have asked me how they can run LLMs on their macs.
Surely a smaller market than gamers or datacenters, though.
It's annoying. I do LLMs for work and have a bit of an interest in them and in doing stuff with GANs etc.
I have a bit of an interest in games too.
If I could get one platform for both, I could justify 2k maybe a bit more.
I can't justify that for just one half: running games on Mac, right now via Linux: no thanks.
And on the PC side, nvidia consumer cards only go to 24gb which is a bit limiting for LLMs, while being very expensive - I only play games every few months.
The new $2k card from Nvidia will be 32GB but your point stands. AMD is planning a unified chiplet based GPU architecture (AI/data center/workstation/gaming) called UDNA, which might alleviate some of these issues. It's been delayed and delayed though - hence the lackluster GPU offerings from team Red this cycle - so I haven't been getting my hopes up.
Maybe (LP)CAMM2 memory will make model usage just cheap enough that I can have a hosting server for it and do my usual midrange gaming GPU thing before then.
I mean negligible to their bottom line. There may be tons of units bought or not, but the margin on a single datacenter system would buy tens of these.
It’s purely an ecosystem play imho. It benefits the kind of people who will go on to make potentially cool things and will stay loyal.
> It’s purely an ecosystem play imho. It benefits the kind of people who will go on to make potentially cool things and will stay loyal.
It will be massive for research labs. Most academics have to jump through a lot of hoops to get to play with not just CUDA, but also GPUDirect/RDMA/Infiniband etc. If you get older/donated hardware, you may have a large cluster but not newer features.
They have, because until now Apple Silicon was the only practical way for many to work with larger models at home because they can be configured with 64-192GB of unified memory. Even the laptops can be configured with up to 128GB of unified memory.
Performance is not amazing (roughly 4060 level, I think?) but in many ways it was the only game in town unless you were willing and able to build a multi-3090/4090 rig.
I'm currently wondering how likely it is I'll get into deeper LLM usage, and therefore how much Apple Silicon I need (because I'm addicted to macOS). So I'm some way closer to your steel man than you'd expect. But I'm probably a niche within a niche.
Doubt it, a year ago useful local LLMs on a Mac (via something like ollama) was barely taking off.
If what you say is true, you were among the first 100 people on the planet who were doing this; which, btw, further supports my argument about how extremely rare that use case is for Mac users.
People were running llama.cpp on Mac laptops in March 2023 and Llama2 was released in July 2023. People were buying Macs to run LLMs months before M3 machines became available in November 2023.
I keep thinking about stocks that have 100xd, and most seemed like obscure names to me as a layman. But man, Nvidia was a household name to anyone that ever played any game. And still so many of us never bothered buying the stock
Incredible fumble for me personally as an investor
Unless you predicted AI and crypto, it was just really good, not 100x. It did ~20x from 2005-2020 but ~500x from 2005-2025.
And if you truly did predict that Nvidia would own those markets and those markets would be massive, you could have also bought Amazon, Google or heck even Bitcoin. Anything you touched in tech really would have made you a millionaire really.
Survivorship bias though. It's hard to name all the companies that failed in the dot-com bust, and even among the ones that made it through, because they're not around any more, they're harder to remember than the winners. But MCI, Palm, RIM, Nortel, Compaq, Pets.com, and Webvan all failed and went to zero. There's an uncountable number of ICOs and NFTs that ended up nowhere. SVB isn't exactly a tech stock, but they were strongly connected to the sector and they failed.
It is interesting to think about crypto as a stairstep that Nvidia used to get to its current position in AI. It wasn't games > ai, but games > crypto > ai.
Nvidia joined S&P500 in 2001 so if you've been doing passive index fund investing, you probably got a little bit of it in your funds. So there was some upside to it.
There's a titanic market with people wanting some uncensored local LLM/image/video generation model. This market extremely overlaps with gamers today, but will grow exponentially every year.
How big is that market you claim? Local LLM/image generation already exists out of the box on the latest Samsung flagship phones, and it's mostly a gimmick that gets old pretty quickly. Hardly comparable to gaming in terms of market size and profitability.
Plus, YouTube and Google Images are already full of AI-generated slop and people are already tired of it. "AI fatigue" amongst the majority of general consumers is a documented thing. Gaming fatigue is not.
It is. You may know it as the "I prefer to play board games (and feel smugly superior about it) because they're ${more social, require imagination, $whatever}" crowd.
"The global gaming market size was valued at approximately USD 221.24 billion in 2024. It is forecasted to reach USD 424.23 billion by 2033, growing at a CAGR of around 6.50% during the forecast period (2025-2033)"
Farmville style games underwent similar explosive estimates of growth, up until they collapsed.
Much of the growth in gaming of late has come from exploitive dark patterns, and those dark patterns eventually stop working because users become immune to them.
>Farmville style games underwent similar explosive estimates of growth, up until they collapsed.
They did not collapse, they moved to smartphones. The "free"-to-play gacha portion of the gaming market is so successful it is most of the market. "Live service" games are literally traditional game makers trying to grab a tiny slice of that market, because it's infinitely more profitable than making actual games.
>those dark patterns eventually stop working because users become immune to them.
Really? Slot machines have been around for generations and have not become any less effective. Gambling of all forms has relied on the exact same physiological response for millennia. None of this is going away without legislation.
> Slot machines have been around for generations and have not become any less effective.
Slot machines are not a growth market. The majority of people wised up to them literal generations ago, although enough people remain susceptible to maintain a handful of city economies.
> They did not collapse, they moved to smartphones
Agreed, but the dark patterns being used are different. The previous dark patterns became ineffective. The level of sophistication of psychological trickery in modern f2p games is far beyond anything Farmville ever attempted.
The rise of live service games also does not bode well for infinite growth in the industry as there's only so many hours to go around each day for playing games and even the evilest of player manipulation techniques can only squeeze so much blood from a stone.
The industry is already seeing the failure of new live service games to launch, possibly analogous to what happened in the MMO market when there was a rush of releases after WoW. With the exception of addicts, most people can only spend so many hours a day playing games.
I think he implied AI generated porn. Perhaps also other kind of images that are at odds with morality and/or the law. I'm not sure but probably Samsung phones don't let you do that.
I'm sure a lot of people see "uncensored" and think "porn" but there's a lot of stuff that e.g. Dall-E won't let you do.
Suppose you're a content creator and you need an image of a real person or something copyrighted like a lot of sports logos for your latest YouTube video's thumbnail. That kind of thing.
I'm not getting into how good or bad that is; I'm just saying I think it's a pretty common use case.
AI porn is currently cringe, just like Eliza for conversations was cringe.
The cutting edge will advance, and convincing bespoke porn of people's crushes/coworkers/bosses/enemies/toddlers will become a thing. With all the mayhem that results.
It will always be cringe due to how so-called "AI" works. Since it's fundamentally just log-likelihood optimization under the hood, it will always be a statistically most average image. Which means it will always have that characteristic "plastic" and overdone look.
The current state of the art in AI image generation was unimaginable a few years back. The idea that it'll stay as-is for the next century seems... silly.
If you're talking about some sort of non-existent sci-fi future "AI" that isn't just log-likelihood optimization, then most likely such a fantastical thing wouldn't be using NVidia's GPU with CUDA.
This hardware is only good for current-generation "AI".
I think there are a lot of non-porn uses. I see a lot of YouTube thumbnails that seem AI generated, but feature copyrighted stuff.
(example: a thumbnail for a YT video about a video game, featuring AI-generated art based on that game. because copyright reasons, in my very limited experience Dall-E won't let you do that)
I agree that AI porn doesn't seem a real market driver. With 8 billion people on Earth I know it has its fans I guess, but people barely pay for porn in the first place so I reallllly dunno how many people are paying for AI porn either directly or indirectly.
It's unclear to me if AI generated video will ever really cross the "uncanny valley." Of course, people betting against AI have lost those bets again and again but I don't know.
> No. There's already too much porn on the internet, and AI porn is cringe and will get old very fast.
I needed an uncensored model in order to, guess what, make an AI draw my niece snowboarding down a waterfall. All the online services refuse on basis that the picture contains -- oh horrors -- a child.
Yeah, and there's that story about "private window" mode in browsers because you were shopping for birthday gifts that one time. You know what I mean though.
I really don't. Censored models are so censored they're practically useless for anything but landscapes. Half of them refuse to put humans in the pictures at all.
Sure, but those developers will create functionality that requires advanced GPUs, and people will want that functionality. Eventually the OS will expect it and it will become the default everywhere. So it is an important step that will keep Nvidia growing in the following years.
That’s not what I’m saying. I’m saying that the people buying this aren’t going to shift their bottom line in any kind of noticeable way. They’re already sold out of their money makers. This is just an entrenchment opportunity.
If this is gonna be widely used by ML engineers, in biopharma, etc., and they land $1000 margins on half a million units sold, that's half a billion in revenue, with potential to grow.
If they're already an "enthusiast, grad student, hacker", are they likely to choose the "plumbers and people that know how to build houses" career track?
True passion for one's career is rare, despite the clichéd platitudes encouraging otherwise. That's something we should encourage and invest in regardless of the field.
Boring fact: The underlying theme of the movie Her is actually divorce and the destructive impact it has on people, the futuristic AI stuff is just for stuffing!
The overall theme of Her was human relationships. It was not about AI, and not just about divorce in particular. The AI was just a plot device to include a bodyless person in the equation. Watch it again with this in mind and you will see what I mean.
The universal theme of Her was the set of harmonics that define what is something and the thresholds, boundaries, windows onto what is not thatthing but someotherthing, even if the thing perceived is a mirror, not just about human relationships in particular. The relationship was just a plot device to make a work of deep philosophy into a marketable romantic comedy.
OpenAI doesn’t make any profit. So either it dies or prices go up. Not to mention the privacy aspect of your own machine and the freedom of choice which models to run
Recent reports put paying subscribers at roughly 10M. At ~$30/month that's ~$3.6B of revenue, which roughly matches their reported figures. So to break even at their ~$5B of costs, assuming they need no further major investment in infrastructure, they only need to grow paying subscriptions from ~10M to ~14M. Since ~250M people engage with OpenAI's free tier, that projection doesn't sound too surreal.
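A rough break-even sketch under those assumptions (all numbers approximate, per the figures above):

```typescript
// How many ~$30/month subscribers cover ~$5B of annual costs?
const monthlyPriceUsd = 30;
const annualCostsUsd = 5e9;

const revenuePerSubPerYear = monthlyPriceUsd * 12;           // ~$360
const breakEvenSubs = annualCostsUsd / revenuePerSubPerYear; // ~13.9M
console.log(`~${(breakEvenSubs / 1e6).toFixed(1)}M paying subscribers to break even`);
```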
If Silicon Valley could tell the difference between utopias and dystopias, we wouldn't have companies named Soylent or iRobot, and the recently announced Anduril/Palantir/OpenAI partnership to hasten the creation of either SkyNet or Big Brother wouldn't have happened at all.
The fire-breathing 120W Zen 5-powered flagship Ryzen AI Max+ 395 comes packing 16 CPU cores and 32 threads paired with 40 RDNA 3.5 (Radeon 8060S) integrated graphics cores (CUs), but perhaps more importantly, it supports up to 128GB of memory that is shared among the CPU, GPU, and XDNA 2 NPU AI engines. The memory can also be carved up to a distinct pool dedicated to the GPU only, thus delivering an astounding 256 GB/s of memory throughput that unlocks incredible performance in memory capacity-constrained AI workloads (details below). AMD says this delivers groundbreaking capabilities for thin-and-light laptops and mini workstations, particularly in AI workloads. The company also shared plenty of gaming and content creation benchmarks.
[...]
AMD also shared some rather impressive results showing a Llama 70B Nemotron LLM AI model running on both the Ryzen AI Max+ 395 with 128GB of total system RAM (32GB for the CPU, 96GB allocated to the GPU) and a desktop Nvidia GeForce RTX 4090 with 24GB of VRAM (details of the setups in the slide below). AMD says the AI Max+ 395 delivers up to 2.2X the tokens/second performance of the desktop RTX 4090 card, but the company didn’t share time-to-first-token benchmarks.
Perhaps more importantly, AMD claims to do this at an 87% lower TDP than the 450W RTX 4090, with the AI Max+ running at a mere 55W. That implies that systems built on this platform will have exceptional power efficiency metrics in AI workloads.
Strix Halo is a replacement for the high-power laptop CPUs from the HX series of Intel and AMD, together with a discrete GPU.
The thermal design power of a laptop CPU-dGPU combo is normally much higher than 120 W, which is the maximum TDP recommended for Strix Halo. The faster laptop dGPUs want more than 120 W only for themselves, not counting the CPU.
So any claims of being surprised that the TDP range for Strix Halo is 45 W to 120 W are weird, like the commenter has never seen a gaming laptop or a mobile workstation laptop.
> The thermal design power of a laptop CPU-dGPU combo is normally much higher than 120 W
Normally? Much higher than 120W? Those are some pretty abnormal (and dare I say niche?) laptops you're talking about there. Remember, that's not peak power - thermal design power is what the laptop should be able to power and cool pretty much continuously.
At those power levels, they're usually called DTR: desktop replacement. You certainly can't call it "just a laptop" anymore once we're in needs-two-power-supplies territory.
Any laptop that is marketed as a "gaming laptop" or "mobile workstation" belongs to this category.
I do not know the proportion of gaming laptops and mobile workstations vs. thin-and-light laptops. While obviously there must be many more light laptops, gaming laptops cannot be a niche product, because there are too many models offered by a lot of vendors.
My own laptop is a Dell Precision, so it belongs to this class. I would not call Dell Precision laptops a niche product, even if they are typically used only by professionals.
My previous laptop was some Lenovo Yoga that also belonged to this class, having a discrete NVIDIA GPU. In general, any laptop having a discrete GPU belongs to this class, because the laptop CPUs intended to be paired with discrete GPUs have a default TDP of 45 W or 55 W, while the smallest laptop discrete GPUs may have TDPs of 55 W to 75 W, but the faster laptop GPUs have TDPs between 100 W and 150 W, so the combo with CPU reaches a TDP around 200 W for the biggest laptops.
You can't usually just add up the TDPs of the CPU and GPU, because neither the cooling nor the power circuitry supports that kind of load. That's why AMD's SmartShift is a thing.
People are very unaware of just how much better a gaming laptop from 3 years ago is (compared to a Copilot laptop). These laptops are sub-$500 on eBay, and Best Buy won't give you more than $150 for one as a trade-in (almost like they won't admit that those laptops outclass the new category of AI PC).
I think this is a race that Apple doesn't know it's part of. Apple has something that happens to work well for AI, as a side effect of having a nice GPU with lots of fast shared memory. It's not marketed for inference.
From the people I talk to, the enthusiast market is Nvidia 4090/3090 saturated, because people want to do their fine-tunes (also porn) in their off time. The Venn diagram of users who post about diffusion models and LLMs running at home is pretty much a circle.
Yeah, I really don't think the overlap is as much as you imagine. At least in /r/localllama and the discord servers I frequent, the vast majority of users are interested in one or the other primarily, and may just dabble with other things. Obviously this is just my observations...I could be totally misreading things.
> I sure wished I held some Nvidia stocks, they seem to be doing everything right in the last few years!
They were propelled by the unexpected LLM boom. But plan 'A' was robotics, in which Nvidia has invested a lot for decades. I think their time is about to come, with Tesla's humanoids at $20-30k and the Chinese already selling them for $16k.
This is somewhat similar to what GeForce was to gamers back in the day, but for AI enthusiasts. Sure, the price is much higher, but at least it's a completely integrated solution.
Yep that's what I'm thinking as well. I was going to buy a 5090 mainly to play around with LLM code generation, but this is a worthy option for roughly the same price as building a new PC with a 5090.
Having your main pc as an LLM rig also really sucks for multitasking, since if you want to keep a model loaded to use it when needed, it means you have zero resources left to do anything else. GPU memory maxed out, most of the RAM used. Having a dedicated machine even if it's slower is a lot more practical imo, since you can actually do other things while it generates instead of having to sit there and wait, not being able to do anything else.
I think it isn't about enthusiasts. To me it looks like Huang/NVDA is pushing a small revolution further, using the opening provided by the AI wave: up until now the GPU was an add-on to the general computing core, onto which that core offloaded some computing. With AI, that offloaded computing becomes de facto the main computing, and Huang/NVDA is turning the tables by making the CPU just a small add-on to the GPU, with some general computing offloaded to that CPU.
With the CPU located that "close" and with unified memory, that would stimulate parallelization of a lot of general computing so it can be executed on the GPU, very fast that way, instead of on the CPU. Take the classic of enterprise computing - databases, the SQL ones: a lot of what they do (with some work, maybe everything) can be executed on the GPU with a significant performance gain vs. the CPU. Why isn't it happening today? Loading/unloading data onto the GPU eats into performance, the complexity of having only some operations offloaded to the GPU is very high in dev effort, etc. Streamlined development on a platform with unified memory will change that. That way Huang/NVDA may pull the rug out from under CPU-first platforms like AMD/INTC and own both the new AI computing and a significant share of classic enterprise computing.
>GPU databases are niche products with severe limitations.
Today. For reasons like the ones I mentioned.
>GPUs are fast at massively parallel math problems, they aren't useful for all tasks.
GPUs are fast at massively parallel tasks. Their memory bandwidth is 10x that of a CPU, for example. So typical database operations that are massively parallel in nature, like join or filter, would run about that much faster.
The majority of computing can be parallelized and thus benefit from being executed on the GPU (given unified memory of practically usable enterprise sizes, like 128GB).
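To put a number on the bandwidth argument, a purely illustrative back-of-envelope (these bandwidth figures are assumptions for the sake of the example, not specs for any particular part):

```typescript
// Time to stream a column through memory, assuming the scan/filter is
// bandwidth-bound rather than compute- or storage-bound.
const columnBytes = 100e9;     // 100 GB of column data already in memory
const cpuBytesPerSec = 100e9;  // ~100 GB/s, server-CPU ballpark
const gpuBytesPerSec = 1000e9; // ~1 TB/s, big-GPU ballpark

console.log(`CPU scan: ~${(columnBytes / cpuBytesPerSec).toFixed(1)} s`);  // ~1.0 s
console.log(`GPU scan: ~${(columnBytes / gpuBytesPerSec).toFixed(2)} s`);  // ~0.10 s
```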
> So typical database operations that are massively parallel in nature, like join or filter, would run about that much faster.
Given workload A, how much of the total runtime would JOIN or FILTER take, in contrast to, for example, the storage engine layer? My gut feeling tells me not much, since to see the actual gain you'd need to be able to parallelize everything, including the storage-engine challenges.
IIRC all the startups building databases around GPUs failed to deliver in the last ~10 years. All of them are shut down if I am not mistaken.
With cheap large RAM and SSDs, storage has already become much less of an issue, especially when the database is primarily an in-memory one.
How about attaching SSD-based storage to NVLink? :) Nvidia does have direct-to-memory tech and uses wide buses, so I don't see any issue with them direct-attaching arrays of SSDs if they feel like it.
>IIRC all the startups building databases around GPUs failed to deliver in the last ~10 years. All of them are shut down if I am not mistaken.
As I already said, the model of a database offloading some ops to a GPU with its own separate memory isn't feasible, and those startups confirmed it. Especially when the GPU would have 8-16GB while main RAM can easily be 1-2TB with 100-200 CPU cores. With 128GB of unified memory like on the GB10, the situation looks completely different (that Nvidia allows only 2 of them to be connected by NVLink is just market segmentation, not a real technical limitation).
I mean, you wouldn't run a database on a GB10 device or a cluster thereof. GH200 is another story; however, the potential improvement wrt databases-on-GPUs still hinges on whether there are enough workloads that are compute-bound for a substantial part of the total wall-clock time.
In other words, and hypothetically, if you can improve logical plan execution to run 2x faster by rewriting the algorithms to make use of GPU resources but physical plan execution remains to be bottlenecked by the storage engine, then the total sum of gains is negligible.
But I guess there could perhaps be some use-case where this could be proved as a win.
"The GB10 Superchip enables Project DIGITS to deliver powerful performance using only a standard electrical outlet. Each Project DIGITS features 128GB of unified, coherent memory and up to 4TB of NVMe storage. With the supercomputer, developers can run up to 200-billion-parameter large language models to supercharge AI innovation."
"Grace is the first data center CPU to utilize server-class high-speed LPDDR5X memory with a wide memory subsystem that delivers up to 500GB/s of bandwidth "
I'm so tired of this recent obsession with the stock market. Now that retail is deeply invested, it is tainting everything, even here on a technology forum. I don't remember people mentioning Apple stock every time Steve Jobs made an announcement in past decades. Nowadays it seems everyone is invested in Nvidia and just wants the stock to go up, and every product announcement is a means to that end. I really hope we get a crash so that we can get back to a more sane relationship with companies and their products.
Not expecting this to compete with the 5x series in terms of gaming, but it's interesting to note that the increase in gaming performance Jensen was talking about with Blackwell was largely related to frames generated via inference on the tensor cores.
I wonder how it would go as a productivity/tinkering/gaming rig? Could a GPU potentially be stacked in the same way an additional Digit can?
> they seem to be doing everything right in the last few years
About that... Not like there isn't a lot to be desired from the linux drivers: I'm running a K80 and M40 in a workstation at home and the thought of having to ever touch the drivers, now that the system is operational, terrifies me. It is by far the biggest "don't fix it if it ain't broke" thing in my life.
That IS the second system (my AI home rig). I've given up on Nvidia for using it on my main computer because of their horrid drivers. I switched to Intel ARC about a month ago and I love it. The only downside is that I have a xeon on my main computer and Intel never really bothered to make ARC compatible with xeons so I had to hack my bios around, hoping I don't mess everything up. Luckily for me, it all went well so now I'm probably one of a dozen or so people worldwide to be running xeons + arc on linux. That said, the fact that I don't have to deal with nvidia's wretched linux drivers does bring a smile to my face.
There will undoubtably be a Mac Studio (and Mac Pro?) bump to M4 at some point. Benchmarks [0] reflect how memory bandwidth and core count [1] compare to processor improvements. Granted, ymmv to your workload.
The nVidia price (USD 3k) is closer to a top Mac mini, but I trust Apple more than nVidia for the end-to-end support from hardware to apps. Not an Apple fanboy, but a user/dev, and I don't think we realize what Apple really achieved, industrially speaking. The M1 was launched in late 2020.
It eats into all of NVDA's consumer-facing clients, no? I can see why OpenAI etc. are looking for alternative hardware solutions to train their next models.
Am I the only one disappointed by these? They cost roughly half the price of a MacBook Pro and offer, hmm, half the capacity in RAM. Sure, speed matters in AI, but what do I do with speed when I can't load a 70b model?
On the other hand, with a $5000 MacBook Pro, I can easily load a 70b model and have a "full" MacBook Pro as a plus. I am not sure I fully understand the value of these cards for someone who wants to run personal AI models.
Hm? They have 128GB of RAM. Macbook Pros cap out at 128GB as well. Will be interesting to see how a Project Digits machine performs in terms of inference speed.
No, MacBook Pros cap at 128GB. But, still, they are laptops. It'll be interesting to see if Apple can offer a good counter for the desktop. The Mac Pro can go to 192GB, which is closer to the 128GB Digits + your desktop machine. At a $9,299 price tag, it's not too competitive, but close.
Bro, we can connect two Project Digits as well. I was only looking at the M4 MacBook because of the 128GB unified memory. Now this beast can cook better LLMs at just $3K, with a 4TB SSD too.
The M4 Max MacBook (128GB unified RAM and 4TB storage) is $5,999.
So, no more Apple for me. I will just get the Digits and can build a workstation as well.
They are perfectly fine for certain people. I can run Qwen-2.5-coder 14B on my M2 Max MacBook Pro with 32gb at ~16 tok/sec. At least in my circle, people are budget conscious and would prefer using existing devices rather than pay for subscriptions where possible.
And we know why they won't ship NVLink anymore on prosumer GPUs: they control almost the entire segment and why give more away for free? Good for the company and investors, bad for us consumers.
> I can run Qwen-2.5-coder 14B on my M2 Max MacBook Pro with 32gb at ~16 tok/sec. At least in my circle, people are budget conscious
Qwen 2.5 32B on openrouter is $0.16/million output tokens. At your 16 tokens per second, 1 million tokens is 17 continuous hours of output.
Openrouter will charge you 16 cents for that.
I think you may want to reevaluate which is the real budget choice here
Edit: elaborating, that extra 16GB ram on the Mac to hold the Qwen model costs $400, or equivalently 1770 days of continuous output. All assuming electricity is free
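The arithmetic, for anyone checking (prices and speeds as stated above, all approximate):

```typescript
// OpenRouter cost of the same output volume, vs the ~$400 RAM upgrade.
const usdPerMillionTokens = 0.16;
const tokensPerSecond = 16;
const ramUpgradeUsd = 400;

const hoursPerMillionTokens = 1e6 / tokensPerSecond / 3600;             // ~17.4 h
const millionsOfTokensForUpgrade = ramUpgradeUsd / usdPerMillionTokens; // 2500
const daysOfContinuousOutput =
  (millionsOfTokensForUpgrade * hoursPerMillionTokens) / 24;            // ~1800 days

console.log({ hoursPerMillionTokens, daysOfContinuousOutput });
```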
It's a no brainer for me cause I already own the MacBook and I don't mind waiting a few extra seconds. Also, I didn't buy the mac for this purpose, it's just my daily device. So yes, I'm sure OpenRouter is cheaper, but I just don't have to think about using it as long as the open models are reasonable good for my use. Of course your needs may be quite different.
I dread the day I have to eat at US restaurants, or even get something delivered. Part of the reason is cultural: I’ve never lived in a country where a tip is mandatory and you’ll be called out if you don’t. The other reason being it involves a degree of social pressure and shame, if one doesn’t tip enough. Both don’t sit well with me.
But I can attest that if I’m forced to tip, I’ll not return to that establishment.
In my experience, it's uncommon to be called out for not leaving a tip, at least on the East Coast of the US. At worst, you might get a nasty look depending on the context (unlikely at a cafe, maybe at a nice restaurant with good service).
Exactly why I'll never visit twice any establishment that asks me for a tip. If I ever go eat in a restaurant, it's because I want to have less stress in my life, not because I want to put myself in a shitty situation where not only do I need to be up to date with current social norms (which has always been a difficulty of mine), but I also need to do things that go against my beliefs (tipping is a scam akin to TicketMaster: "€100 ticket + €20 shipping fee + €30 convenience fee + €35 what-are-you-going-to-do-about-this fee + €15 this-fee-won't-make-a-difference = €200 vs advertised €100"). As a result, whenever I visit a restaurant, the thing I remember the most is not the food nor the ambience, but the moment of tipping, when the waiter begs me for a tip like a Syrian refugee begs for water. This is not a problem in my country because thankfully begging isn't as common here, but when I was travelling I was once asked for a tip during a hotel breakfast, which BTW was shitty.
Also, tipping is a monument to human stupidity. Apparently, people would rather pay €10 + 20% tip than €12 with no tip, because the former feels cheaper, even though it's a stupid way to organize pricing.
I've lived here for a long time, but this stuff still gives me anxiety. I get that you are supposed to tip in restaurants, but I'm unsure which other services need tipping. Is it required to tip, for instance, the HVAC repairman? Are you supposed to tip mechanics? Do native-born Americans have a spidey sense for which services need tipping that I just need to become sensitized to?
Any luxury service (consensus determines what a luxury is in this case, not the individual) that is personalized, intimate, and requires spending a lot of time with you would carry the expectation that you tip. HVAC is seen as a necessity in the US, the same with cars, and the same with medical care. Tipping wouldn't be expected in those instances.
The only situation where there is an unwritten expectation for a tip is at a sit-down restaurant with waitstaff and for food/grocery delivery. These are luxuries 99% of the time.
In all other cases that I know of, they will ask you outright if you want to tip. For example, when I get a haircut or a massage, they ask me explicitly if I want to tip, and, because it was a personalized, intimate, luxury service, I oblige. For simple walk-up services like coffee or take out, I wouldn't tip.
The only other times I tip are for exceptional service (e.g. in a fastfood drive through) or if it's a local business that I'm fond of.
>The other reason being it involves a degree of social pressure and shame, if one doesn’t tip enough.
It's standard to tip 15% for decent service in a restaurant (sans wine). You are of course free to tip more for good service or less for crappy service, but unless your experience is truly exceptional (in either a good or bad way) you can never go wrong with 15%. This is standard at any US restaurant where you sit down and are served by a waiter or waitress. You are never "forced to tip", but you will be universally looked down upon in any sit-down establishment that you fail to tip in, so you might be best off not returning.
The last metric in this 2017 study, before the pandemic, showed tipping was between 18% and 19% in "surveys (that) are aimed at diners who patronize full-service midscale and upscale restaurants". It also shows a downward trend in the last few data points. All things considered, including diners who patronize "downscale" full-service restaurants (like diners), and given the many decades-long standard of 15% tips, it seems to me a safe standard to continue to use. Certainly no foreign visitor will ever face vitriol for tipping 15%.
The 15% standard supplanted the previous 10% standard somewhere in the 1970s and lasted to the early/mid-aughts depending on where in the US one lived. I don't agree that ~30 years is "many decades-long". Further, that 15% itself was an uptick from the prior standard demonstrates that we're dealing with a moving target, for better or worse.
I'm also from a culture where tipping doesn't happen. I've been living in the US for a number of years, and I rarely go to a restaurant here, because the experience is too awkward.
On the other hand, I find delivery services quite reasonable. They tell me the total price (including the expected tip) before I order. You rarely see that kind of honesty in an actual restaurant. And I don't have to see the person I tip, which makes the experience much less awkward.
Speaking for myself, it is not a mental block, it is disgust with the social design of tip culture. Including tips, waiters are better paid than teachers, and many other more essential professions that require higher qualifications. The pressure is disproportionate to their financial situation. Let's normalize paying everybody what they are worth and do away with the tortuous guilt trips.
To be clear, I am promoting eliminating tips, and paying everybody in the lower 90% of wage earners more for their work. I have no interest in shortchanging waiters.
> tips are tax-advantaged (there's no sales tax on the tip)
Someone please correct me if I'm wrong, but the bigger tax that's avoided here is corporate taxes, I think. The tip goes directly to the employee, and it's thus not taxed as corporate income, is my understanding.
EDIT: Ah, I missed that corporate taxes were on net earnings rather than gross, so this wouldn't make any difference. Thank you!
The primary corporate advantages to tips is they allow the business to display artificially low prices to customers (since they don't include the tip) and pay artificially low wages to employees (tipped jobs have a lower minimum wage).
> Through our voluntary early retirement and separation offerings, we are more than halfway to our workforce reduction target of approximately 15,000 by the end of the year. We still have difficult decisions to make and will notify impacted employees in the middle of October.
Gonna be the finest engine you've seen since the industrial revolution. Grease those gears, guys, we're shedding a head count of no less than 15,000 to keep this baby going. And that's just this year.
That's not easy. Apparently, with the number of employees they have, it's difficult to innovate. I'm not surprised. Innovation is easier in smaller groups of people.
There is a little bit of "when a measure becomes a target, it ceases to be a good measure" with respect to chasing financial performance.
While you can't escape thinking about financial metrics, the goal should be something like creating great products, building a competitive barrier etc. Financials can act as a constraint rather than a goal.
A concrete example is Costco.
Even here, Gelsinger puts it last, which sort of reads like a constraint. Seems fair.
Yup, sure. I'd argue one of the factors involved in the long term problem is when the company starts trying hard to make money as opposed to serving customers well (again, financial metrics must be a constraint). It's not the only factor (incentives get whacky, bureaucracy is difficult), but it's a factor which isn't appreciated as much as the other two.
It'd be nice if they could give me a compelling reason to upgrade my computer more than once or twice a decade, other than 'Our new AI computers have keyboards that go to 11'.