
Genuine questions: why are they calling it “reciprocal”? Is the US just matching the tariffs set by the other countries?

Also, this announcement has wiped out any plans of buying tech products this year, plus a holiday to the US and Canada later in the year. Good thing too, as the entire globe is probably staring down the barrel of a recession.


Someone calculated the formula used. They divided the trade deficit of each country by total trade of each country and assumed that was all a tariff.

So for example Indonesia and the US traded $28 billion. The US has a 17.9 billion trade deficit with Indonesia. 17.9/28 =0.639, or 64%, which is assumed to be all caused by tariffs. So they divide by two and impose 32%.
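
A minimal sketch of that arithmetic (my own illustration, using the rounded Indonesia figures above):

    # Reverse-engineered "reciprocal tariff" formula: treat the whole trade
    # deficit as if it were caused by a tariff, then halve ("discount") it.
    def implied_tariff(deficit_bn: float, total_trade_bn: float) -> float:
        assumed_foreign_tariff = deficit_bn / total_trade_bn  # 17.9 / 28 ≈ 0.64
        return assumed_foreign_tariff / 2                     # "discounted" to ≈ 0.32

    print(f"{implied_tariff(17.9, 28):.0%}")  # -> 32%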

Anyway no the US isn't matching tariffs they're dramatically exceeding them.


Thanks for pointing this out. As a follow up, if the US has a trade surplus, they seem to just slap 10% in both columns.


That's a bit of a messed up way to calculate things.

I also think the US deficits are hugely overstated because much of what the US produces is intellectual capital rather than physical goods and the profits are made to appear in foreign subsidiaries for tax reasons. Like if I buy Microsoft stuff in the UK, Microsoft make out it was made in Ireland for tax purposes, but really the value is created in and owned by the US. The US company both wrote the software and owns Microsoft Ireland. So much of the perceived unfairness Trump is having a go at isn't real.


You raise an excellent point that US corporate tax evasion is exaggerating the trade deficit. However, from the perspective of winning US elections, I think it does not change the issue that the trade deficit falls more on de-industrializing Midwestern states, and the corporations you are referring to are concentrated in Northeastern and Western states.

Secondly, if Microsoft or Apple makes the profit appear in Ireland, it cannot move that money back to the domestic US, right? So as long as the money sits overseas, it would not count towards US trade and thus the deficit calculation is fair.


They don't move the profit back to the US, but through Ireland and the Netherlands they move it out of the EU, mostly to some tax havens in the Caribbean. From there they use it for stock buybacks, which I think mostly means it flows back into the US.


This is no longer true. Said loophole was eliminated in 2017, and completely closed in 2020.


Again, not flowing back to the right people. All of this could have been solved by sane redistribution, but no. It'll still be redistribution but in a cruder, less apparent form.


If the profits went back to Apple HQ directly they would serve to raise the share price and allow stock buybacks and stock based compensation for employees. Same as they do now.

You may not like a tech company succeeding at exports and having a rising share price, but that is distinct from the overall point which is that properly considered these are US exports obscured by the US tax code which incentivizes profits abroad.


That's a great point. I checked into this, and if and when the profits are repatriated they indeed only show up in the capital account, not the current account.

However, in practice, even if not repatriated, those exports show up in the US economy. Profits raise the share price, which allows stock grants at higher values - effectively a wage, as one example.

I wonder how big an effect this phenomenon you highlight has. Must be a fairly large overstatement of the US trade deficit.


If the US has a trade deficit, doesn't that mean the US is trading make-believe pieces of paper for real goods?

Like, if I scribble on a piece of paper and then trade you the piece of paper for an incredibly engineered brand new laptop, is that bad for me? Is this a sign of my weakness?

I know economics can be complicated, and probably "it depends", but why is a trade deficit bad? Why does the Trump administration want to eliminate trade deficits?


It's just spin. The new duties are purely punitive.


It's like the blaming in a playground fight; who started first? For context: https://nitter.net/KushDesai47/status/1907618136444067901


Because when the blowback comes, people will be looking to cast blame for starting this whole trade war, and when that time comes Trump will point to the word "reciprocal" and say "we didn't start this, we were only reciprocating".


> Is the US just matching the tariffs set by the other countries?

No. Trump claims that the new tariffs are a 50% discount on what those countries tariff US goods at. (Even if that's questionable - is VAT a tariff?)

If he's correct, or anywhere close, this is a "tough love" strategy to force negotiations. We'll see how it goes. It also plays to his base - why should we tariff any less than they do us? And they have a point, it's the principle of the thing.


> If he's correct

He's not.

According to [1], the White House claims Vietnam has a 90% tariff rate.

According to [2], 90.4% is the ratio of Vietnam's trade deficit with the US -- they have a deficit of $123.5B on $136.6B of exports.

The same math holds true for other countries, e.g. Japan's claimed 46% tariff rate is their deficit of $68.5B on $148.2B of exports. The EU's claimed 39% tariff rate is their deficit of $235.6B on $605.8B of exports.

Who knows, maaaaybe it just so happens that these countries magically have tariff rates that match the ratio of their trade deficits.

Or maybe, the reason Vietnam doesn't buy a lot of US stuff is because they're poor. The reason they sell the US a bunch of stuff is because their labour is cheap to Americans. (They do have tariffs, but they're nowhere near 90%: [3].)

America's government is not trustworthy. Assuming that what they say is truthful is a poor use of time.

[1]: https://x.com/WhiteHouse/status/1907533090559324204/photo/1

[2]: https://ustr.gov/countries-regions/southeast-asia-pacific/vi...

[3]: https://www.investmentmonitor.ai/news/vietnam-gives-us-tax-b...


It's so quaint to me that people actually believe his rhetoric. How long do you think people will put up with high prices before they turn on him?


If high prices are inevitable, what’s their endgame? Are they actually incompetent or are people too pessimistic about what they’re attempting to do?


Prior to yesterday's announcement, the claim regarding tariffs was that the goal was to bring manufacturing back to American soil. This is unlikely to happen in any case, but it requires at minimum that consumers put up with high prices for a while (with "a while" being measured in years, if not decades). Actually, the "liberation day" tariffs strongly agree with this goal: after speculation, the administration announced the formula for these new tariffs, which has nothing to do with counter-tariffs or trade barriers as claimed, and instead comes from the ratio of the trade deficit in goods to the overall amount of trade. In other words, countries that export a lot of goods to the US (without commensurate US goods exports in return) get high tariffs. This makes sense if the goal is to incentivize manufacturing in the US, by making manufactured goods from outside more expensive.

There is another camp that thinks that Trump doesn't really have a goal per se, and is rather doing all this as an exercise in showing off his strength and drawing attention to himself. This camp holds that eventually Trump will get bored, or the public will turn on him, and he'll need to get rid of the tariffs to save face. We call these people "optimists".


> If he's correct

Trump is not in the business of being _correct_, or indeed caring about correctness as a concept.

And no, these are, obviously, not the actual tariffs, don’t be silly.


Tangentially related: are there good guides or setup scripts for running self-hosted Postgres with backups and a secondary standby? Like, I just want something I can deploy to a VPS/dedicated box for all my side projects.

If not, is supabase the most painless way to get started?


Using bun has been a great experience so far. I used to dread setting up typescript/jest/react/webpack for a new project with breaking changes all over the place. With bun, it’s been self contained and painless and it just works for my use. Can’t comment on the 3rd party libraries they are integrating like s3, sql etc but at least it looks like they are focused on most common/asked for ones.

Thanks for the great work and bringing some much needed sanity in the node.js tooling space!


How does bun make a difference in the frontend tech stack that you mentioned?


Last I tried (several months ago) it didn't: the built-in frontend bundler was not very useful, so everybody just used 3rd party bundlers, and (for most people) it would not make any meaningful difference compared to nodejs. It seems they are putting more effort into the bundler now, so it looks like it can handle plain SPA applications just fine (no SSR). The bundler is inspired by esbuild, so you can expect similar capabilities.

IMO the main benefit of using their bundler is that things (imports/ES-modules, typescript, unit tests, etc) just behave the same way across build scripts, frontend code, unit tests, etc. You don't get weird errors like "oh the ?. syntax is not supported in the unit test because I didn't add the right transform to jest configuration. But works fine in the frontend where I am using babel".

But if you want to use vercel/nextjs/astro you still are not using their bundler so no better or worse there.


not up to this point, but with this release, bun is now a bundler.

That means potentially no webpack, vite, and their jungle of dependencies. It's possible to have bun as the sole dependency for your front and back end. Tbh I'll likely add React and co, but it's possible to do a vanilla front end with plain web components.


Bun has always been a bundler (and package manager, and Node runtime). This release adds "HTML imports" as a way to use the bundler.


Doesn't the name "bun" come from the fact that it's a bundler?


No, it's just a name. Originally it was just an alternative to nodejs.


not really a bundler without a dev server that you can just set to an entry point, and css support


I have been setting up these react/ts/etc projects with vite or next.js just fine; I think you're underestimating how much progress has happened in other tooling as well.


Idk about next 15, but you can literally bootstrap next 13 using a single index.tsx with typescript & next being the only 2 dependencies in package.json. No typescript is fine too.

It's not new, has been the case for a few years, so honestly I don't get people complaining about next's complexity.


Bun is amazing. It's a life hack for me. ChatGPT doesn't know much about it so there's some productivity hit, but I love bun.


The timing on when this essay is being published is interesting. Are all the tech billionaires falling in line before the next administration takes over? Also, let this be a lesson that no matter how “brilliant” and rich someone is, they can have comically bad takes.


Seems like a great way to write self documenting code which can be optionally used by your python runtime.
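
For example (my own minimal sketch, not from the article): annotations are plain metadata that stays out of the way at runtime unless a tool asks for it:

    from typing import get_type_hints

    def scale(values: list[float], factor: float = 2.0) -> list[float]:
        # The hints document intent; plain Python ignores them when calling.
        return [v * factor for v in values]

    print(scale([1.0, 2.5]))       # [2.0, 5.0]
    print(get_type_hints(scale))   # type checkers, validators, ORMs can read these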


I feel this is bigger than the 5x series GPUs. Given the craze around AI/LLMs, this can also potentially eat into Apple’s slice of the enthusiast AI dev segment once the M4 Max/Ultra Mac minis are released. I sure wished I held some Nvidia stocks, they seem to be doing everything right in the last few years!


This is something every company should make sure they have: an onboarding path.

Xeon Phi failed for a number of reasons, but one where it didn't need to fail was the availability of software optimised for it. Now we have Xeons and EPYCs, and MI300Cs with lots of efficient cores, but we could have been writing software tailored for those for 10 years now. Extracting performance from them would be a solved problem at this point. The same applies to Itanium - the very first thing Intel should have made sure it had was good Linux support. They could have had it before the first silicon was released. Itanium was well supported for a while, but it's long dead by now.

Similarly, Sun failed with SPARC, which also didn't have an easy onboarding path after they gave up on workstations. They did some things right: OpenSolaris ensured the OS remained relevant (still is, even if a bit niche), and looking the other way on x86 Solaris helped people learn and train on it. Oracle cloud could, at least, offer it on cloud instances. Would be nice.

Now we see IBM doing the same - there is no reasonable entry level POWER machine that can compete in performance with a workstation-class x86. There is a small half-rack machine that can be mounted on a deskside case, and that's it. I don't know of any company that's planning to deploy new systems on AIX (much less IBMi, which is also POWER), or even for Linux on POWER, because it's just too easy to build it on other, competing platforms. You can get AIX, IBMi and even IBMz cloud instances from IBM cloud, but it's not easy (and I never found a "from-zero-to-ssh-or-5250-or-3270" tutorial for them). I wonder if it's even possible. You can get Linux on Z instances, but there doesn't seem to be a way to get Linux on POWER. At least not from them (several HPC research labs still offer those).


1000% all these ai hardware companies will fail if they don't have this. You must have a cheap way to experiment and develop. Even if you want to only sell a $30000 datacenter card you still need a very low cost way to play.

Sad to see big companies like intel and amd don't understand this but they've never come to terms with the fact that software killed the hardware star


Isn’t the cloud GPU market covering this? I can run a model for $2/hr, or get a 8xH100 if I need to play with something bigger.


People tend to limit their usage when it's time-billed. You need some sort of desktop computer anyway, so, if you spend the 3K this one costs, you have unlimited time of Nvidia cloud software. When you need to run on bigger metal, then you pay $2/hour.


3k is still very steep for anyone not on a Silicon Valley-like salary.


Yes. Most people make do with a generic desktop and an Nvidia GPU. What makes this machine attractive is the beefy GPU and the full Nvidia support for the whole AI stack.


I have the skills to write efficient CUDA kernels, but $2/hr is 10% of my salary, so no way I'm renting any H100s. The electricity price for my computer is already painful enough as is. I am sure there are many eastern European developers who are more skilled and get paid even less. This is a huge waste of resources all due to NVIDIA's artificial market segmentation. Or maybe I am just cranky because I want more VRAM for cheap.


This has 128GB of unified memory. A similarly configured Mac Studio costs almost twice as much, and I'm not sure the GPU is in the same league (software support wise, it isn't, but that's fixable).

A real shame it's not running mainline Linux - I don't like their distro based on Ubuntu LTS.


$4,799 for an M2 Ultra with 128GB of RAM, so not quite twice as much. I'm not sure what the benchmark comparison would be. $5,799 if you want an extra 16 GPU cores (60 vs 76).


We'll need to look into benchmarks when the numbers come out. Software support is also important, and a Mac will not help you that much if you are targeting CUDA.

I have to agree the desktop experience of the Mac is great, on par with the best Linuxes out there.


A lot of models are optimized for Metal already, especially Llama, DeepSeek, and Qwen. You are still taking a hit, but there wasn't an alternative solution for getting that much VRAM for less than $5k before this NVIDIA project came out. Will definitely look at it closely if it isn't just vaporware.


They can't walk back now without some major backlash.

The one thing I wonder is noise. That box is awfully small for the amount of compute it packs, and high-end Mac Studios are 50% heatsink. There isn’t much space in this box for a silent fan.


> Sad to see big companies like intel and amd don't understand this

And it's not like they've never been bitten by this before (Intel has).


Well, Intel management is very good at snatching defeat from the jaws of victory



At least they don’t suffer from a lack of onboarding paths for x86, and it seems they are doing a nice job with their dGPUs.

Still unforgivable that their new CPUs hit the market without excellent Linux support.


It really mystifies me that Intel, AMD, and other hardware companies (obviously Nvidia in this case) don't either form a consortium or each have their own in-house Linux distribution with excellent support.

Windows has always been a barrier to hardware feature adoption for Intel. You had to wait 2 to 3 years, sometimes longer, for Windows to get around to providing hardware support.

Any OS optimization in Windows had to go through Microsoft. So say you added some instructions, custom silicon or whatever, to speed up enterprise databases, or provide high-speed networking that needed some special kernel features, etc - there was always Microsoft in the way.

Not just foot-dragging in communication, but the problem of getting the tech people aligned.

Microsoft would look at every single change: whether or not it would challenge their monopoly, whether or not it was in their business interest, whether or not it kept you, the hardware vendor, in a subservient role.


From the consumer perspective, it seems that MSFT has provided scheduler changes fairly rapidly for CPU changes, like X3D, P/E cores, etc. At least within a couple of months, if not at release.

AMD/Intel work directly with Microsoft when shipping new silicon that would otherwise require it.


> From the consumer perspective, it seems that MSFT has provided scheduler changes fairly rapidly

Now they have some competition. This is relatively new, and Satya Nadella reshaped the company because of that.


Raptor Computing provides POWER9 workstations. They're not cheap, still use last-gen hardware (DDR4/PCIe 4 ... and POWER9 itself) but they're out there.

https://www.raptorcs.com/content/base/products.html


It kind of defeats the purpose of an onboarding platform if it’s more expensive than the one you think of moving away from.

IBM should see some entry-level products as loss leaders.


They're not offering POWER10 either because IBM closed the firmware again. Stupid move.


Raptor's value proposition is a 100% free and open platform, from the firmware and up, but, if they were willing to compromise on that, they'd be able to launch a POWER10 box.

Not sure it'd be competitive in price with other workstation-class machines. I don't know how expensive IBM's S1012 deskside is, but with only 64 threads, it'd be a meh workstation.


There were Phi cards, but they were pricey and power hungry (at the time, now current GPU cards probably meet or exceed the Phi card's power consumption) for plugging into your home PC. A few years back there was a big fire sale on Phi cards - you could pick one up for like $200. But by then nobody cared.


Imagine if they were sold at cost in the beginning. Also, think about having one as the only CPU rather than a card.


The developers they are referring to aren’t just enthusiasts; they are also developers who were purchasing SuperMicro and Lambda PCs to develop models for their employers. Many enterprises will buy these for local development because it frees up the highly expensive enterprise-level chip for commercial use.

This is a genius move. I am more baffled by the insane form factor that can pack this much power inside a Mac Mini-esque body. For just $6000, two of these can run 400B+ models locally. That is absolutely bonkers. Imagine running ChatGPT on your desktop. You couldn’t dream about this stuff even 1 year ago. What a time to be alive!


The 1 PetaFLOP and 200GB model-capacity specs are for FP4 (4-bit floating point), which means inference, not training/development. It'd still be a decent personal development machine, but not for that size of model.


This looks like a bigger brother of Orin AGX, which has 64GB of RAM and runs smaller LLMs. The question will be power and performance vs 5090. We know price is 1.5x


How does it run 400B models across two? I didn’t see that in the article


> Nvidia says that two Project Digits machines can be linked together to run up to 405-billion-parameter models, if a job calls for it. Project Digits can deliver a standalone experience, as alluded to earlier, or connect to a primary Windows or Mac PC.


Point to point ConnectX connection (RDMA with GPUDirect)


Not sure exactly, but they mentioned linking them together with ConnectX, which could be ethernet or IB. No idea on the speed though.


I think the enthusiast side of things is a negligible part of the market.

That said, enthusiasts do help drive a lot of the improvements to the tech stack so if they start using this, it’ll entrench NVIDIA even more.


I’m not so sure it’s negligible. My anecdotal experience is that since Apple Silicon chips were found to be “ok” enough to run inference with MLX, more non-technical people in my circle have asked me how they can run LLMs on their macs.

Surely a smaller market than gamers or datacenters for sure.


It's annoying. I do LLMs for work and have a bit of an interest in them and in doing stuff with GANs etc.

I have a bit of an interest in games too.

If I could get one platform for both, I could justify 2k maybe a bit more.

I can't justify that for just one half: running games on Mac, right now via Linux: no thanks.

And on the PC side, nvidia consumer cards only go to 24gb which is a bit limiting for LLMs, while being very expensive - I only play games every few months.


The new $2k card from Nvidia will be 32GB but your point stands. AMD is planning a unified chiplet based GPU architecture (AI/data center/workstation/gaming) called UDNA, which might alleviate some of these issues. It's been delayed and delayed though - hence the lackluster GPU offerings from team Red this cycle - so I haven't been getting my hopes up.

Maybe (LP)CAMM2 memory will make model usage just cheap enough that I can have a hosting server for it and do my usual midrange gaming GPU thing before then.


Grace + Hopper, Grace + Blackwell, and the discussed GB10 are much like the currently shipping AMD MI300A.

I do hope that an AMD Strix Halo ships with 2 LPCAMM2 slots for a total width of 256 bits.


Unified architecture is still on track for 2026-ish.


32gb as of last night :)


I mean negligible to their bottom line. There may be tons of units bought or not, but the margin on a single datacenter system would buy tens of these.

It’s purely an ecosystem play imho. It benefits the kind of people who will go on to make potentially cool things and will stay loyal.


>It’s purely an ecosystem play imho. It benefits the kind of people who will go on to make potentially cool things and will stay loyal.

100%

The people who prototype on a 3k workstation will also be the people who decide how to architect for a 3k GPU buildout for model training.


> It’s purely an ecosystem play imho. It benefits the kind of people who will go on to make potentially cool things and will stay loyal.

It will be massive for research labs. Most academics have to jump through a lot of hoops to get to play with not just CUDA, but also GPUDirect/RDMA/Infiniband etc. If you get older/donated hardware, you may have a large cluster but not newer features.


Academic minimal-bureaucracy purchasing card limit is about $4k, so pricing is convenient*2.


Developers, developers, developers - the Ballmer monkey dance - the key to being entrenched is the platform ecosystem.

Also why AWS is giving away Trainium credits for free.


Yes, but people already had their Macs for others reasons.

No one goes to an Apple store thinking "I'll get a laptop to do AI inference".


They have, because until now Apple Silicon was the only practical way for many to work with larger models at home because they can be configured with 64-192GB of unified memory. Even the laptops can be configured with up to 128GB of unified memory.

Performance is not amazing (roughly 4060 level, I think?) but in many ways it was the only game in town unless you were willing and able to build a multi-3090/4090 rig.


I would bet that people running LLMs on their Macs, today, is <0.1% of their user base.


People buying Macs for LLMs—sure I agree.

Since the current MacOS comes built in with small LLMs, that number might be closer to 50% not 0.1%.


I'm not arguing whether or not Macs are capable of doing it, but whether is a material force that drives people to buy Macs because of it; it's not.


Higher than that buying the top end machines though, which are very high margin


All macs? Yes. But of 192GB mac configs? Probably >50%


I'm currently wondering how likely it is I'll get into deeper LLM usage, and therefore how much Apple Silicon I need (because I'm addicted to macOS). So I'm some way closer to your steel man than you'd expect. But I'm probably a niche within a niche.


Tons of people do, my next machine will likely be a Mac for 60% this reason and 40% Windows being so user hostile now.


my $5k m3 max 128gb disagrees


Doubt it, a year ago useful local LLMs on a Mac (via something like ollama) were barely taking off.

If what you say is true, you were among the first 100 people on the planet doing this; which, btw, further supports my argument about how extremely rare that use case is for Mac users.


No, I got a MacBook Pro 14”with M2 Max and 64GB for LLMs, and that was two generations back.


People were running llama.cpp on Mac laptops in March 2023 and Llama2 was released in July 2023. People were buying Macs to run LLMs months before M3 machines became available in November 2023.


You could have said the same about gamers buying expensive hardware in the 00's. It's what made Nvidia big.


I keep thinking about stocks that have 100xd, and most seemed like obscure names to me as a layman. But man, Nvidia was a household name to anyone that ever played any game. And still so many of us never bothered buying the stock

Incredible fumble for me personally as an investor


Unless you predicted AI and crypto, it was just really good, not 100x. It 20x'd from 2005-2020 but ~500x'd from 2005-2025.

And if you truly did predict that Nvidia would own those markets and those markets would be massive, you could have also bought Amazon, Google or heck even Bitcoin. Anything you touched in tech really would have made you a millionaire really.


Survivorship bias though. It's hard to name all the companies that failed in the dot com bust, and even the ones that made it through, because they're not around any more, are harder to remember than the winners. But MCI, Palm, RIM, Nortel, Compaq, Pets.com, and Webvan all failed and went to zero. There's an uncountable number of ICOs and NFTs that ended up nowhere. SVB isn't exactly a tech stock, but they were strongly connected to tech and they failed.


It is interesting to think about crypto as a stairstep that Nvidia used to get to its current position in AI. It wasn't games > ai, but games > crypto > ai.


Nvidia joined S&P500 in 2001 so if you've been doing passive index fund investing, you probably got a little bit of it in your funds. So there was some upside to it.


There are a lot more gamers than people wanting to play with LLMs at home.


There's a titanic market with people wanting some uncensored local LLM/image/video generation model. This market extremely overlaps with gamers today, but will grow exponentially every year.


How big is that market you claim? Local LLM image generation already exists out of the box on the latest Samsung flagship phones and it's mostly a gimmick that gets old pretty quickly. Hardly comparable to gaming in terms of market size and profitability.

Plus, YouTube and Google Images are already full of AI generated slop and people are already tired of it. "AI fatigue" amongst the majority of general consumers is a documented thing. Gaming fatigue is not.


> Gaming fatigue is not.

It is. You may know it as the "I prefer to play board games (and feel smugly superior about it) because they're ${more social, require imagination, $whatever}" crowd.


The market heavily disagrees with you.

"The global gaming market size was valued at approximately USD 221.24 billion in 2024. It is forecasted to reach USD 424.23 billion by 2033, growing at a CAGR of around 6.50% during the forecast period (2025-2033)"


Farmville style games underwent similar explosive estimates of growth, up until they collapsed.

Much of the growth in gaming of late has come from exploitive dark patterns, and those dark patterns eventually stop working because users become immune to them.


>Farmville style games underwent similar explosive estimates of growth, up until they collapsed.

They did not collapse, they moved to smartphones. The "free"-to-play gacha portion of the gaming market is so successful it is most of the market. "Live service" games are literally traditional game makers trying to grab a tiny slice of that market, because it's infinitely more profitable than making actual games.

>those dark patterns eventually stop working because users become immune to them.

Really? Slot machines have been around for generations and have not become any less effective. Gambling of all forms has relied on the exact same physiological response for millennia. None of this is going away without legislation.


> Slot machines have been around for generations and have not become any less effective.

Slot machines are not a growth market. The majority of people wised up to them literal generations ago, although enough people remain susceptible to maintain a handful of city economies.

> They did not collapse, they moved to smartphones

Agreed, but the dark patterns being used are different. The previous dark patterns became ineffective. The level of sophistication of psychological trickery in modern f2p games is far beyond anything Farmville ever attempted.

The rise of live service games also does not bode well for infinite growth in the industry as there's only so many hours to go around each day for playing games and even the evilest of player manipulation techniques can only squeeze so much blood from a stone.

The industry is already seeing the failure of new live service games to launch, possibly analogous to what happened in the MMO market when there was a rush of releases after WoW. With the exception of addicts, most people can only spend so many hours a day playing games.


I think he implied AI generated porn. Perhaps also other kind of images that are at odds with morality and/or the law. I'm not sure but probably Samsung phones don't let you do that.


I'm sure a lot of people see "uncensored" and think "porn" but there's a lot of stuff that e.g. Dall-E won't let you do.

Suppose you're a content creator and you need an image of a real person or something copyrighted like a lot of sports logos for your latest YouTube video's thumbnail. That kind of thing.

I'm not getting into how good or bad that is; I'm just saying I think it's a pretty common use case.


Apart from the uncensored bit, I'm in this small market.

Do I buy a Macbook with a silly amount of RAM when I only want to mess with images occasionally?

Do I get a big Nvidia card, topping out at 24gb - still small for some LLMs - but at least I could occasionally play games using it?


>There's a titanic market

Titanic - so about to hit an iceberg and sink?


> There's a titanic market with people wanting some uncensored local LLM/image/video generation model.

No. There's already too much porn on the internet, and AI porn is cringe and will get old very fast.


AI porn is currently cringe, just like Eliza for conversations was cringe.

The cutting edge will advance, and convincing bespoke porn of people's crushes/coworkers/bosses/enemies/toddlers will become a thing. With all the mayhem that results.


It will always be cringe due to how so-called "AI" works. Since it's fundamentally just log-likelihood optimization under the hood, it will always produce the statistically most average image. Which means it will always have that characteristic "plastic" and overdone look.


The current state of the art in AI image generation was unimaginable a few years back. The idea that it'll stay as-is for the next century seems... silly.


If you're talking about some sort of non-existent sci-fi future "AI" that isn't just log-likelihood optimization, then most likely such a fantastical thing wouldn't be using NVidia's GPU with CUDA.

This hardware is only good for current-generation "AI".


I think there are a lot of non-porn uses. I see a lot of YouTube thumbnails that seem AI generated, but feature copyrighted stuff.

(example: a thumbnail for a YT video about a video game, featuring AI-generated art based on that game. because copyright reasons, in my very limited experience Dall-E won't let you do that)

I agree that AI porn doesn't seem a real market driver. With 8 billion people on Earth I know it has its fans I guess, but people barely pay for porn in the first place so I reallllly dunno how many people are paying for AI porn either directly or indirectly.

It's unclear to me if AI generated video will ever really cross the "uncanny valley." Of course, people betting against AI have lost those bets again and again but I don't know.


> No. There's already too much porn on the internet, and AI porn is cringe and will get old very fast.

I needed an uncensored model in order to, guess what, make an AI draw my niece snowboarding down a waterfall. All the online services refuse on basis that the picture contains -- oh horrors -- a child.

"Uncensored" absolutely does not imply NSFW.


Yeah, and there's that story about "private window" mode in browsers because you were shopping for birthday gifts that one time. You know what I mean though.


I really don't. Censored models are so censored they're practically useless for anything but landscapes. Half of them refuse to put humans in the pictures at all.


I think scams will create far more demand. Spear phishing targets by creating persistent, elaborate online environments is going to be big.


>There's a titantic market

How so?

Only 40% of gamers use a PC, a portion of those use AI in any meaningful way, and a fraction of those want to set up a local AI instance.

Then someone releases an uncensored, cloud based AI and takes your market?


Sure, but those developers will create functionality that requires advanced GPUs, and people will want that functionality. Eventually the OS will expect it and it will become the default everywhere. So it is an important step that will keep nvidia growing in the following years.


AMD thought the enthusiast side of things was a negligible side of the market.


That’s not what I’m saying. I’m saying that the people buying this aren’t going to shift their bottom line in any kind of noticeable way. They’re already sold out of their money makers. This is just an entrenchment opportunity.


If this is gonna be widely used by ML engineers, in biopharma, etc., and they land $1000 margins at half a million units sold, that's half a billion in revenue, with potential to grow.


today’s enthusiast, grad student, hacker is tomorrow’s startup founder, CEO, CTO or 10x contributor in large tech company


> tomorrow’s startup founder, CEO, CTO or 10x contributor in large tech company

Do we need more of those? We need plumbers and people that know how to build houses. We are completely full on founders and executives.


If they're already an "enthusiast, grad student, hacker", are they likely to choose the "plumbers and people that know how to build houses" career track?

True passion for one's career is rare, despite the clichéd platitudes encouraging otherwise. That's something we should encourage and invest in regardless of the field.


We might not, but Nvidia would certainly like it.


If I were NVidia, I would be throwing everything I could at making entertainment experiences that need one of these to run...

I mean, this is awfully close to being "Her" in a box, right?


I feel like a lot of people miss that Her was a dystopian future, not an ideal to hit.

Also, it’s $3000. For that you could buy subscriptions to OpenAI etc and have the dystopian partner everywhere you go.


We already live in dystopian hell and I'd like to have Scarlett Johansen whispering in my ear, thanks.

Also, I don't particularly want my data to be processed by anyone else.


Fun fact: Her was set in the year 2025.


Boring fact: The underlying theme of the movie Her is actually divorce and the destructive impact it has on people; the futuristic AI stuff is just stuffing!


The overall theme of Her was human relationships. It was not about AI, and not just about divorce in particular. The AI was just a plot device to include a bodiless person in the equation. Watch it again with this in mind and you will see what I mean.


The universal theme of Her was the set of harmonics that define what is something and the thresholds, boundaries, windows onto what is not thatthing but someotherthing, even if the thing perceived is a mirror, not just about human relationships in particular. The relationship was just a plot device to make a work of deep philosophy into a marketable romantic comedy.


This is exactly the scenario where you don't want "the cloud" anywhere.


OpenAI doesn’t make any profit. So either it dies or prices go up. Not to mention the privacy aspect of your own machine and the freedom of choice which models to run


> So either it dies or prices go up.

Or efficiency gains in hardware and software catchup making current price point profitable.


Training data keeps getting more expensive, and they need constant new input, otherwise the AI's knowledge becomes outdated.


OpenAI built a 3 billion dollar business in less than 3 years of a commercial offering.


3 billion revenue and 5 billion loss doesn’t sound like a sustainable business model.


Rumor has it they run queries at a profit, and most of the cost is in training and staff.

If that is true, their path to profitability isn't super rocky. Their path to achieving their current valuation may end up being trickier though!


The real question is what the next 3 years look like. If it's another 5 billion burned for 3 billion or less in revenue, that's one thing... But...


How...


Recent reports say there are ~10M paying customers. At ~30 USD/month for 12 months this is ~3.6B of revenue, which kinda matches their reported figures. So to break even at their ~5B costs, assuming they need no further major investment in infrastructure, they only need to increase paying subscriptions from ~10M to ~20M. Since there are ~250M people who have engaged with OpenAI's free tier, a 2x projection doesn't sound too surreal.


One man's dystopia is another man's dream. There's no "missing" in the moral of a movie, you make whatever you want out of it.


If Silicon Valley could tell the difference between utopias and dystopias, we wouldn't have companies named Soylent or iRobot, and the recently announced Anduril/Palantir/OpenAI partnership to hasten the creation of either SkyNet or Big Brother wouldn't have happened at all.


I mean, we still act like a "wild goose chase" is a bad thing.

We still schedule "bi-weekly" meetings.

We can't agree on which way charge goes in a wire.

Have you seen the y-axis on an economists chart?


The dystopian overton window has shifted, didn't you know, moral ambiguity is a win now? :) Tesla was right.


they don't miss that part. they just want to be the evil character.


Please name the dystopian elements of Her.


The real interesting stuff will happen when we get multimodal LMs that can do VR output.


Yeah, it's more about preempting competitors from attracting any ecosystem development than the revenue itself.


Jensen did say in a recent interview, paraphrasing, "they are trying to kill my company".

Those Macs with unified memory are a threat he is immediately addressing. Jensen is a wartime CEO from the looks of it; he's not joking.

No wonder AMD is staying out of the high-end space, since NVIDIA is going head on with Apple (and AMD is not in the business of competing with Apple).


From https://www.tomshardware.com/pc-components/cpus/amds-beastly...

The fire-breathing 120W Zen 5-powered flagship Ryzen AI Max+ 395 comes packing 16 CPU cores and 32 threads paired with 40 RDNA 3.5 (Radeon 8060S) integrated graphics cores (CUs), but perhaps more importantly, it supports up to 128GB of memory that is shared among the CPU, GPU, and XDNA 2 NPU AI engines. The memory can also be carved up to a distinct pool dedicated to the GPU only, thus delivering an astounding 256 GB/s of memory throughput that unlocks incredible performance in memory capacity-constrained AI workloads (details below). AMD says this delivers groundbreaking capabilities for thin-and-light laptops and mini workstations, particularly in AI workloads. The company also shared plenty of gaming and content creation benchmarks.

[...]

AMD also shared some rather impressive results showing a Llama 70B Nemotron LLM AI model running on both the Ryzen AI Max+ 395 with 128GB of total system RAM (32GB for the CPU, 96GB allocated to the GPU) and a desktop Nvidia GeForce RTX 4090 with 24GB of VRAM (details of the setups in the slide below). AMD says the AI Max+ 395 delivers up to 2.2X the tokens/second performance of the desktop RTX 4090 card, but the company didn’t share time-to-first-token benchmarks.

Perhaps more importantly, AMD claims to do this at an 87% lower TDP than the 450W RTX 4090, with the AI Max+ running at a mere 55W. That implies that systems built on this platform will have exceptional power efficiency metrics in AI workloads.


"Fire breathing" is completely inappropriate.

Strix Halo is a replacement for the high-power laptop CPUs from the HX series of Intel and AMD, together with a discrete GPU.

The thermal design power of a laptop CPU-dGPU combo is normally much higher than 120 W, which is the maximum TDP recommended for Strix Halo. The faster laptop dGPUs want more than 120 W only for themselves, not counting the CPU.

So any claims of being surprised that the TDP range for Strix Halo is 45 W to 120 W are weird, like the commenter has never seen a gaming laptop or a mobile workstation laptop.


> The thermal design power of a laptop CPU-dGPU combo is normally much higher than 120 W

Normally? Much higher than 120W? Those are some pretty abnormal (and dare I say niche?) laptops you're talking about there. Remember, that's not peak power - thermal design power is what the laptop should be able to power and cool pretty much continuously.

At those power levels, they're usually called DTR: desktop replacement. You certainly can't call it "just a laptop" anymore once we're in needs-two-power-supplies territory.


Any laptop that in marketed as "gaming laptop" or "mobile workstation" belongs to this category.

I do not know what the proportion of gaming laptops and mobile workstations vs. thin-and-light laptops is. While obviously there must be many more light laptops, gaming laptops cannot be a niche product, because there are too many models offered by a lot of vendors.

My own laptop is a Dell Precision, so it belongs to this class. I would not call Dell Precision laptops a niche product, even if they are typically used only by professionals.

My previous laptop was some Lenovo Yoga that also belonged to this class, having a discrete NVIDIA GPU. In general, any laptop having a discrete GPU belongs to this class, because the laptop CPUs intended to be paired with discrete GPUs have a default TDP of 45 W or 55 W, while the smallest laptop discrete GPUs may have TDPs of 55 W to 75 W, but the faster laptop GPUs have TDPs between 100 W and 150 W, so the combo with CPU reaches a TDP around 200 W for the biggest laptops.


You can't usually just add up the TDPs of CPU and GPU, because neither cooling nor the power circuitry supports that kind of load. That's why AMDs SmartShift is a thing.


People are very unaware of just how much better a gaming laptop from 3 years ago is (compared to a copilot laptop). These laptops are sub $500 on eBay, and Best Buy won't give you more than $150 for one as a trade-in (almost like they won't admit that those laptops outclass the new category of AI PC).


> since NVIDIA is going head on with Apple

I think this is a race that Apple doesn't know it's part of. Apple has something that happens to work well for AI, as a side effect of having a nice GPU with lots of fast shared memory. It's not marketed for inference.


Apple is both well aware and marketing it, as seen at https://www.apple.com/my/newsroom/2024/10/apples-new-macbook...

Quote:

"It also supports up to 128GB of unified memory, so developers can easily interact with LLMs that have nearly 200 billion parameters."


Which interview was this?


https://fortune.com/2023/11/11/nvidia-ceo-jensen-huang-says-...

I can't find the exact Youtube video, but it's out there.


You missed the Ryzen AI Max+ 395 product announcement.


From the people I talk to, the enthusiast market is nvidia 4090/3090 saturated, because people want to do their fine tunes (and also porn) in their off time. The Venn diagram of users who post about diffusion models and LLMs running at home is pretty much a circle.


Not your weights, not your waifu


Yeah, I really don't think the overlap is as much as you imagine. At least in /r/localllama and the discord servers I frequent, the vast majority of users are interested in one or the other primarily, and may just dabble with other things. Obviously this is just my observations...I could be totally misreading things.


> I sure wished I held some Nvidia stocks, they seem to be doing everything right in the last few years!

They were propelled by the unexpected LLM boom. But plan 'A' was robotics, in which NVidia has invested a lot for decades. I think their time is about to come, with Tesla's humanoids at 20-30k and the Chinese already selling theirs for $16k.


This is somewhat similar to what GeForce was to gamers back in the days, but for AI enthusiasts. Sure, the price is much higher, but at least it's a completely integrated solution.


Yep that's what I'm thinking as well. I was going to buy a 5090 mainly to play around with LLM code generation, but this is a worthy option for roughly the same price as building a new PC with a 5090.


It has 128 GB of unified RAM. It will not be as fast as the 32 GB VRAM of the 5090, but what gamer cards have always lacked was memory.

Plus you have fast interconnects, if you want to stack them.

I was somewhat attracted by the Jetson AGX Orin with 64 GB RAM, but this one is a no-brainer for me, as long as idle power is reasonable.


Having your main pc as an LLM rig also really sucks for multitasking, since if you want to keep a model loaded to use it when needed, it means you have zero resources left to do anything else. GPU memory maxed out, most of the RAM used. Having a dedicated machine even if it's slower is a lot more practical imo, since you can actually do other things while it generates instead of having to sit there and wait, not being able to do anything else.


>enthusiast AI dev segment

I think it isn't about enthusiasts. To me it looks like Huang/NVDA is pushing further a small revolution, using the opening provided by the AI wave - up until now the GPU was an add-on to the general computing core, which offloaded some computing onto it. With AI, that offloaded computing becomes de-facto the main computing, and Huang/NVDA is turning the tables by making the CPU just a small add-on to the GPU, with some general computing offloaded to that CPU.

The CPU being located that "close", and with unified memory - that would stimulate development of parallelization for a lot of general computing so that it would be executed on the GPU, very fast that way, instead of on the CPU. For example, the classic of enterprise computing - databases, the SQL ones - a lot in these databases (if not, with some work, everything) can be executed on the GPU with a significant performance gain vs. the CPU. Why isn't it happening today? Load/unload onto the GPU eats into performance, the complexity of having only some operations offloaded to the GPU is very high in dev effort, etc. Streamlined development on a platform with unified memory will change that. That way Huang/NVDA may pull the rug out from under the CPU-first platforms like AMD/INTC and would own both - the new AI computing as well as a significant share of the classic enterprise one.


> these databases can be executed on the GPU with a significant performance gain vs. the CPU

No, they can't. GPU databases are niche products with severe limitations.

GPUs are fast at massively parallel math problems; they aren't useful for all tasks.


>GPU databases are niche products with severe limitations.

Today. For the reasons I mentioned.

>GPUs are fast at massively parallel math problems; they aren't useful for all tasks.

GPUs are fast at massively parallel tasks. Their memory bandwidth is 10x that of a CPU, for example. So typical database operations that are massively parallel in nature, like join or filter, would run about that much faster.

The majority of computing can be parallelized and thus benefit from being executed on the GPU (with unified memory of a practically usable size for enterprise, like 128GB).


> So typical database operations that are massively parallel in nature, like join or filter, would run about that much faster.

Given workload A, how much of the total runtime would JOIN or FILTER take in contrast to, say, the storage engine layer? My gut feeling tells me not much, since to see the actual gain you'd need to be able to parallelize everything, including the storage engine challenges.

IIRC all the startups building databases around GPUs failed to deliver in the last ~10 years. All of them are shut down if I am not mistaken.


With cheap large RAM and SSDs, storage has already become much less of an issue, especially when the database is primarily an in-memory one.

How about attaching SSD-based storage to NVLink? :) Nvidia does have direct-to-memory tech and uses wide buses, so I don't see any issue for them to direct-attach arrays of SSDs if they feel like it.

>IIRC all the startups building databases around GPUs failed to deliver in the last ~10 years. All of them are shut down if I am not mistaken.

As I already said - the model of a database offloading some ops to a GPU with its own separate memory isn't feasible, and those startups confirmed it. Especially when the GPU would have 8-16GB while the main RAM can easily be 1-2TB with 100-200 CPU cores. With 128GB of unified memory like on the GB10, the situation looks completely different (that Nvidia allows only 2 to be connected by NVLink is just market segmentation, not a real technical limitation).


I mean you wouldn't run a database on a GB10 device or a cluster thereof. GH200 is another story; however, the potential improvement wrt databases-on-GPUs still falls short of answering whether there are enough workloads that are compute-bound for a substantial part of the total wall-clock time of a given workload.

In other words, and hypothetically, if you can improve logical plan execution to run 2x faster by rewriting the algorithms to make use of GPU resources but physical plan execution remains to be bottlenecked by the storage engine, then the total sum of gains is negligible.

But I guess there could perhaps be some use-case where this could be proved as a win.


The unified memory is no faster for the GPU than for the CPU, so it's not 10x the CPU. HBM on a GPU is much faster.


No. The unified memory on GB10 is much faster than regular RAM to CPU system:

https://nvidianews.nvidia.com/news/nvidia-puts-grace-blackwe...

"The GB10 Superchip enables Project DIGITS to deliver powerful performance using only a standard electrical outlet. Each Project DIGITS features 128GB of unified, coherent memory and up to 4TB of NVMe storage. With the supercomputer, developers can run up to 200-billion-parameter large language models to supercharge AI innovation."

https://www.nvidia.com/en-us/data-center/grace-cpu-superchip...

"Grace is the first data center CPU to utilize server-class high-speed LPDDR5X memory with a wide memory subsystem that delivers up to 500GB/s of bandwidth "

As far as I can see, it is about 4x that of Zen 5.


> I sure wished I held some Nvidia stocks

I’m so tired of this recent obsession with the stock market. Now that retail is deeply invested it is tainting everything, like here on a technology forum. I don’t remember people mentioning Apple stock every time Steve Jobs made an announcement in the past decades. Nowadays it seems everyone is invested in Nvidia and just want the stock to go up, and every product announcement is a mean to that end. I really hope we get a crash so that we can get back to a more sane relation with companies and their products.


> hope we get a crash

That's the best time to buy. ;)


But if you buy and it crashes, you lose the money, no?


“Bigger” in what sense? For AI? Sure, because this an AI product. 5x series are gaming cards.


Not expecting this to compete with the 5x series in terms of gaming; but it's interesting to note that the increase in gaming performance Jensen was speaking about with Blackwell was largely related to frames inferenced by the tensor cores.

I wonder how it would go as a productivity/tinkering/gaming rig? Could a GPU potentially be stacked in the same way an additional Digits can?


It would, had nvidia not crippled NVLink on GeForce.


Bigger in the sense of the announcements.


Eh. Gaming cards, but also significantly faster. If the model fits in the VRAM the 5090 is a much better buy.


I bet $100k on NVIDIA stocks ~7 years ago, just recently closed out a bunch of them


> they seem to be doing everything right in the last few years

About that... Not like there isn't a lot to be desired from the linux drivers: I'm running a K80 and M40 in a workstation at home and the thought of having to ever touch the drivers, now that the system is operational, terrifies me. It is by far the biggest "don't fix it if it ain't broke" thing in my life.


Use a filesystem that snapshots AND do a complete backup.


Buy a second system which you can touch?


That IS the second system (my AI home rig). I've given up on Nvidia for my main computer because of their horrid drivers. I switched to Intel ARC about a month ago and I love it. The only downside is that I have a Xeon in my main computer and Intel never really bothered to make ARC compatible with Xeons, so I had to hack around in my BIOS, hoping I wouldn't mess everything up. Luckily for me, it all went well, so now I'm probably one of a dozen or so people worldwide running Xeons + ARC on linux. That said, the fact that I don't have to deal with nvidia's wretched linux drivers does bring a smile to my face.


Will there really be a Mac mini with Max or Ultra CPUs? This feels like somewhat of an overlap with the Mac Studio.


There will undoubtedly be a Mac Studio (and Mac Pro?) bump to M4 at some point. Benchmarks [0] reflect how memory bandwidth and core count [1] compare to processor improvements. Granted, ymmv depending on your workload.

0. https://www.macstadium.com/blog/m4-mac-mini-review

1. https://www.apple.com/mac/compare/?modelList=Mac-mini-M4,Mac...


The nVidia price (USD 3k) is closer to a top Mac mini, but I trust Apple more than nVidia for the end-to-end support from hardware to apps. Not an Apple fanboy but a user/dev, and I don't think we realize what Apple really achieved, industrially speaking. The M1 was launched in late 2020.


Did they say anything about power consumption?

Apple M chips are pretty efficient.


Not only that, but it should help free up the gpus for the gamers.


it eats into all NVDA consumer-facing clients no? I can see why openai and etc are looking for alternative hardware solution to train their next model.


I can confirm this is the case (for me).


I would like to have a Mac as my personal computer and Digits as a server to run LLMs.


Am I the only one disappointed by these? They cost roughly half the price of a MacBook Pro and offer, hmm... half the capacity in RAM. Sure, speed matters in AI, but what do I do with speed when I can't load a 70b model?

On the other hand, with a $5000 MacBook Pro, I can easily load a 70b model and have a "full" MacBook Pro as a plus. I am not sure I fully understand the value of these for someone who wants to run personal AI models.


Are you, perhaps, commenting on the wrong thread? Project Digits is a $3k 128GB computer... the best your $5K MBP can have for RAM is... 128GB.


Hm? They have 128GB of RAM. Macbook Pros cap out at 128GB as well. Will be interesting to see how a Project Digits machine performs in terms of inference speed.


Then buy two and stack them!

Also, I'm unfamiliar with Macs; is there really a MacBook Pro with 256GB of RAM?


No, MacBook Pros cap at 128GB. But, still, they are a laptop. It'll be interesting to see if Apple can offer a good counter for the desktop. The Mac Pro can go to 192GB, which is closer to the 128GB Digits plus your desktop machine. At a $9,299 price tag it's not too competitive, but close.


> It'll be interesting to see if Apple can offer a good counter for the desktop.

Mac Pro [0] is a desktop with M2 Ultra and up to 192GB of unified memory.

[0] https://www.apple.com/mac-pro/


Bro, we can connect two Project Digits as well. I was only looking at the M4 MacBook because of the 128GB of unified memory. Now this beast can cook better LLMs at just $3K, with a 4TB SSD too. An M4 Max MacBook Pro (128GB unified RAM and 4TB storage) is $5,999. So, no more Apple for me. I'll just get the Digits and can build a workstation as well.


What slice?

Also, macOS devices are not very good inference solutions. They are just believed to be by diehards.

I don't think Digits will perform well either.

If NVIDIA wanted you to have good performance on a budget, it would ship NVLink on the 5090.


They are perfectly fine for certain people. I can run Qwen-2.5-coder 14B on my M2 Max MacBook Pro with 32gb at ~16 tok/sec. At least in my circle, people are budget conscious and would prefer using existing devices rather than pay for subscriptions where possible.

And we know why they won't ship NVLink anymore on prosumer GPUs: they control almost the entire segment and why give more away for free? Good for the company and investors, bad for us consumers.
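
(For anyone curious, here's a minimal sketch of what that local setup looks like in practice, assuming a default Ollama install serving its HTTP API on port 11434; the model tag is a guess, check "ollama list" for the exact name:)

    # minimal sketch: query a locally served Qwen coder model via Ollama's HTTP API
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "qwen2.5-coder:14b",  # assumed tag for the 14B coder model
            "prompt": "Write a Python function that reverses a string.",
            "stream": False,               # single JSON response instead of a stream
        },
        timeout=300,
    )
    print(resp.json()["response"])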


> I can run Qwen-2.5-coder 14B on my M2 Max MacBook Pro with 32gb at ~16 tok/sec. At least in my circle, people are budget conscious

Qwen 2.5 32B on openrouter is $0.16/million output tokens. At your 16 tokens per second, 1 million tokens is 17 continuous hours of output.

Openrouter will charge you 16 cents for that.

I think you may want to reevaluate which is the real budget choice here

Edit: elaborating, that extra 16GB of RAM on the Mac to hold the Qwen model costs $400, or equivalently 1,770 days of continuous output. All assuming electricity is free.
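
For the curious, the arithmetic behind those figures (the exact day count shifts a bit depending on what you assume the RAM upgrade costs, but it lands around 1,800 days):

    # back-of-the-envelope: local generation at 16 tok/sec vs. $0.16 per million API tokens
    price_per_million = 0.16                      # $ per million output tokens
    tok_per_sec = 16
    hours_per_million = 1e6 / tok_per_sec / 3600  # ~17.4 hours of continuous output
    ram_upgrade = 400                             # assumed cost of the extra 16GB of RAM
    tokens_bought = ram_upgrade / price_per_million * 1e6   # ~2.5 billion tokens
    days_equivalent = tokens_bought / tok_per_sec / 86400   # ~1,800 days of continuous output
    print(round(hours_per_million, 1), round(days_equivalent))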


It's a no brainer for me cause I already own the MacBook and I don't mind waiting a few extra seconds. Also, I didn't buy the mac for this purpose, it's just my daily device. So yes, I'm sure OpenRouter is cheaper, but I just don't have to think about using it as long as the open models are reasonable good for my use. Of course your needs may be quite different.


> Openrouter will charge you 16 cents for that

And log everything too?


It's a great option if you want to leak your entire internal codebase to 3rd parties.


> Also, macOS devices are not very good inference solutions

They are good for single-batch inference and have very good tok/sec/user. Ollama works perfectly on a Mac.


This is timely as I’m just building out a crawler in scrapy. Thanks!


I dread the day I have to eat at US restaurants, or even get something delivered. Part of the reason is cultural: I’ve never lived in a country where a tip is mandatory and you’ll be called out if you don’t tip. The other reason is that it involves a degree of social pressure and shame if one doesn’t tip enough. Neither sits well with me.

But I can attest that if I’m forced to tip, I’ll not return to that establishment.


In my experience, it's uncommon to be called out for not leaving a tip, at least on the East Coast of the US. At worst, you might get a nasty look depending on the context (unlikely at a cafe, maybe at a nice restaurant with good service).


You run the risk of people spitting in your food if they recognize you when you return.


If they'll spit in your food for not leaving a tip, they'll probably spit in your food for other reasons as well. Best to simply avoid those places.


Why do you think that's even remotely common?


I also used to be nervous about this, but it's really not complicated when you know how.

Just add 15% to the price. If you pay with a credit card, you just write it in the tip field on the receipt. No words need to be said.
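
For example, on a $40 check, 15% is $6: write 6.00 on the tip line and 46.00 as the total, and you're done.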


Exactly why I'll never visit twice any establishment that asks me for a tip. If I ever go to eat in a restaurant, it's because I want to have less stress in my life, not because I want to put myself in a shitty situation where not only do I need to be up to date with current social norms (which has always been a difficulty of mine), but I also need to do things that go against my beliefs (tipping is a scam akin to TicketMaster's "€100 ticket + €20 shipping fee + €30 convenience fee + €35 what-are-you-going-to-do-about-this fee + €15 this-fee-won't-make-a-difference = €200 vs the advertised €100"). As a result, whenever I visit a restaurant, the thing I remember most is neither the food nor the ambience, but the moment of tipping, when the waiter begs me for a tip like a Syrian refugee begs for water. This is not a problem in my country, because thankfully begging isn't as common here, but when I was travelling I was once asked for a tip during a hotel breakfast, which BTW was shitty.

Also, tipping is a monument to human stupidity. Apparently, people would rather pay €10 + 20% tip than €12 with no tip, because the former feels cheaper, even though it's a stupid way to organize pricing.


I've lived here for a long time, but this stuff still gives me anxiety. I get that you are supposed to tip in restaurants, but I'm unsure which other services need tipping. Is it required to tip, for instance, the HVAC repairman? Are you supposed to tip mechanics? Do native-born Americans have a spidey sense for which people need tipping that I just need to become sensitized to?


Any luxury service (consensus determines what a luxury is in this case, not the individual) that is personalized, intimate, and requires spending a lot of time with you would carry the expectation that you tip. HVAC is seen as a necessity in the US, the same with cars, and the same with medical care. Tipping wouldn't be expected in those instances.

The only situations where there is an unwritten expectation of a tip are at a sit-down restaurant with waitstaff and for food/grocery delivery. These are luxuries 99% of the time.

In all other cases that I know of, they will ask you outright if you want to tip. For example, when I get a haircut or a massage, they ask me explicitly if I want to tip, and, because it was a personalized, intimate, luxury service, I oblige. For simple walk-up services like coffee or take out, I wouldn't tip.

The only other times I tip are for exceptional service (e.g. in a fast-food drive-through) or if it's a local business that I'm fond of.


>The other reason being it involves a degree of social pressure and shame, if one doesn’t tip enough.

It's standard to tip 15% for decent service in a restaurant (sans wine). You are of course free to tip more for good service or less for crappy service, but unless your experience is truly exceptional (in either a good or bad way) you can never go wrong with 15%. This is standard at any US restaurant where you sit down and are served by a waiter or waitress. You are never "forced to tip," but you will be universally looked down upon in any sit-down establishment where you fail to tip, so you might be best off not returning.


> It's standard to tip 15% for decent service in a restaurant

This is not true, and hasn't been for some time [1]

[1] https://www.researchgate.net/figure/Average-reported-tip-rat...


The last metric in this 2017 study, before the pandemic, showed tipping was between 18% and 19% in "surveys (that) are aimed at diners who patronize full-service midscale and upscale restaurants". It also shows a downward trend in the last few data points. All things considered, including diners who patronize "downscale" full-service restaurants (like diners), and given the many decades-long standard of 15% tips, it seems to me a safe standard to continue to use. Certainly no foreign visitor will ever face vitriol for tipping 15%.


> the many decades-long standard of 15% tips

The 15% standard supplanted the previous 10% standard somewhere in the 1970s and lasted into the early/mid-aughts, depending on where in the US one lived. I don't agree that ~30 years is "many decades-long". Further, the fact that 15% itself was an uptick from the prior standard demonstrates that we're dealing with a moving target, for better or worse.


Depends on the state, since some are increasing the base tipped wage. In California, tipped workers get $15/hr plus tips, so many have decreased the tip to 15%.


CA minimum wage is currently $16/hr, and higher for fast food workers (though those are often untipped jobs).


I'm also from a culture where tipping doesn't happen. I've been living in the US for a number of years, and I rarely go to a restaurant here, because the experience is too awkward.

On the other hand, I find delivery services quite reasonable. They tell me the total price (including the expected tip) before I order. You rarely see that kind of honesty in an actual restaurant. And I don't have to see the person I tip, which makes the experience much less awkward.


It's a broken system but it will never go away since tips are tax-advantaged (there's no sales tax on the tip).

The mental block can be helped a little by adding 1/3rd to all the menu prices as you read them.
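
(E.g., a $30 menu price ends up at roughly $40 once ~8-10% sales tax and an 18-20% tip are added, which is where the one-third rule of thumb comes from.)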


Speaking for myself, it is not a mental block; it is disgust with the social design of tip culture. Including tips, waiters are better paid than teachers and people in many other more essential professions that require higher qualifications, so the pressure is disproportionate to their actual financial situation. Let's normalize paying everybody what they are worth and do away with the tortuous guilt trips.

To be clear, I am promoting eliminating tips, and paying everybody in the lower 90% of wage earners more for their work. I have no interest in shortchanging waiters.


> tips are tax-advantaged (there's no sales tax on the tip)

Someone please correct me if I'm wrong, but the bigger tax that's avoided here is corporate taxes, I think. The tip goes directly to the employee, and it's thus not taxed as corporate income, is my understanding.

EDIT: Ah, I missed that corporate taxes were on net earnings rather than gross, so this wouldn't make any difference. Thank you!


The primary corporate advantages to tips is they allow the business to display artificially low prices to customers (since they don't include the tip) and pay artificially low wages to employees (tipped jobs have a lower minimum wage).


> tipped jobs have a lower minimum wage

This isn't true in Alaska, California, Minnesota, Montana, Nevada, Oregon, or Washington.

And in the states where it is true, the employee is still required to be paid at least the full minimum wage inclusive of tips.


No: wages are deducted as a business expense, so companies don't pay income taxes on them anyway.


> An engine of financial performance

Cool, new strategy?!

> Through our voluntary early retirement and separation offerings, we are more than halfway to our workforce reduction target of approximately 15,000 by the end of the year. We still have difficult decisions to make and will notify impacted employees in the middle of October.

Oh right.


Silly me I thought Intel was a chip company. Turns out it's an "engine of financial performance."


Must be an internal combustion engine, because it's moved by discrete firings.


Gonna be the finest engine you've seen since the industrial revolution. Grease those gears, guys, we're shedding a head count of no less than 15,000 to keep this baby going. And that's just this year.


Look out Moore's Law!

Instead of doubling transistors every two years, there's a new sheriff in town -- Gelsinger's law: the staff roster halves every two years!

We predict in the coming decades, Intel will consist of a single CEO producing microchips with over a quintillion transistors!


With the results they're getting, what are they supposed to do?


Innovate? /s


That's not easy. Apparently, with the number of employees they have, it's difficult to innovate. I'm not surprised. Innovation is easier in smaller groups of people.


Quality control checks out.


To reach something, you need to get rid of something. Newton's third law.


Every company should strive to be an "engine of financial performance."

What other expectations do you have of Intel?

The number of engine metaphors I could toss into this discussion is endless.


There is a little bit of "when a measure becomes a target, it ceases to be a good measure" with respect to chasing financial performance.

While you can't escape thinking about financial metrics, the goal should be something like creating great products, building a competitive barrier etc. Financials can act as a constraint rather than a goal.

A concrete example is Costco.

Even here, Gelsinger puts it last, which sort of reads like a constraint. Seems fair.


This is a long term problem. Intel, and other once great American companies, do not have the talent or culture needed to make great products anymore.


Yup, sure. I'd argue one of the factors involved in the long term problem is when the company starts trying hard to make money as opposed to serving customers well (again, financial metrics must be a constraint). It's not the only factor (incentives get whacky, bureaucracy is difficult), but it's a factor which isn't appreciated as much as the other two.


It'd be nice if they could give me a compelling reason to upgrade my computer more than once or twice a decade, other than 'Our new AI computers have keyboards that go to 11'.


A company serves three groups.

Customers, employees, and owners.

I don't have a strong opinion about whether customers or employees come first, but owners should be last.


How could you have a strong opinion on what ought to be, when you've only baselessly asserted what is? Hume wept.


"The Shareholder Value Myth" is an interesting book about companies purpose https://www.amazon.com/The-Shareholder-Value-Myth-Shareholde...


I think with this kind of leadership they should probably liquidate and return money to the stockholders.


> What other expectations do you have of Intel?

I expect them to follow the Silicon Valley maxim of "Build the best product instead of focusing on making the most amount of money".


Beatings to continue...


That’s some Minority Report level shit right there!

