prossercj's comments | Hacker News

Is it? I heard that it was falling.


This is remarkable if you consider how much it must wound Apple's pride to make this deal with their main rival in the smartphone software space, especially after all the fuss they made about "Apple Intelligence". It's a tacit admission that Google is just better at this kind of thing.


> wound Apple's pride

Do businesses really "think" in as personified a manner as this? Isn't it just whatever the accounting resolves to as the optimal path?


Despite decades of efforts to reduce individual accountability in corporations to zero, companies (as social groupings) definitely still have some sense of identity that shines through in decisions.


The C-levels leading the companies might, and the tech CEOs in question have been at the helm long enough to build up some emotional attachment.


> tacit admission that Google is just better at this kind of thing

Yet at the same time, Google has the worst offering of all the major players in this space (all of whom started up out of thin air).

It doesn't really matter anyway. The LLM is a commodity piece of tech; the interface is what matters, and Apple should focus on making that rather than worrying about scraping the entire internet for training data and spending a trillion on GPUs.


> Yet at the same time, Google has the worst offering of all the major players in this space (all of whom started up out of thin air).

Is that so? The Gemini models (including Nano Banana), in my experience, are very good, and are kneecapped only by Google’s patronizing guardrails. (They will regularly refuse all kinds of things that GPT and Claude don’t bat an eye at, and I can often talk them out of the refusal eventually, which makes no sense at all.)

That’s not something Apple necessarily has to replicate in their implementation (although if there’s one company I’d trust to go above and beyond on that, it’s Apple).


I’m not sure. It could be a way to save a ton of money. Look at the investments non-Apple tech companies are making in data centers & compute.

Maybe paying Google a billion a year is still a lot cheaper?

Apple famously tries to focus on only a few things.

Still, they will continue working on their own LLM and plug it in when ready.

Edit: compare to another comment about Wang-units of currency


Well, they would still be running the Google models in Apple DCs. I doubt this is a very cost-efficient deal for them.


I don't think it hurts their pride at all when they are taking tens of billions from Google so it can be the default search engine on iOS. So they give a little of that back to Google; it's still clear who is doing well in this arrangement between the two companies.


> that Google is just better at this kind of thing

That might be true, but Siri sucks so badly it doesn't matter. It uses GPT, yet the quality is at the level of OSS models.


As of its fiscal quarter ending September 2025, Apple had $35.93 billion in cash and cash equivalents.


I don't use it for large-scale code generation, but I do find it useful for small code snippets. For example, asking how to initialize a widget in Kendo UI with specific behavior. With snippets, I can just run the code and verify that it works with minimal effort. It's often more about reminding me of something I already knew than discovering something novel. I wouldn't trust it with anything novel.
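
(To give a flavor, here's a rough sketch of the kind of snippet I mean: a Kendo UI grid, jQuery flavor, with sorting and paging turned on. The element ID, fields, and data are just placeholders, not from any real project, and it assumes jQuery and the Kendo UI scripts are already loaded on the page.)

    // Illustrative only: initialize a Kendo UI grid with in-memory
    // data, sorting, and paging enabled.
    $(function () {
      $("#grid").kendoGrid({
        dataSource: {
          data: [
            { name: "Widget A", qty: 3 },
            { name: "Widget B", qty: 7 }
          ],
          pageSize: 10
        },
        sortable: true,
        pageable: true,
        columns: [
          { field: "name", title: "Name" },
          { field: "qty", title: "Quantity" }
        ]
      });
    });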

In general, I think of it as a better kind of search. The knowledge available on the internet is enormous, and LLMs are pretty good at finding and synthesizing it relative to a prompt. But that's a different task than generating its own ideas. I think of it like a highly efficient secretary. I wouldn't ask my secretary how to solve a problem, but I absolutely would ask if we have any records pertaining to the problem, and perhaps would also ask for a summary of those records.


This is that meeting


How is it for gaming? Had any compatibility issues?


I had both an A580 (not an A770, but at least something from that generation) and then later a B580, at one point even both in the same computer, side by side, when I wanted to use one for games and the other for encoding:

https://blog.kronis.dev/blog/what-is-ruining-dual-gpu-setups

https://blog.kronis.dev/blog/more-pc-shenanigans-my-setup-un...

https://blog.kronis.dev/blog/two-intel-arc-gpus-in-one-pc-wo...

When paired with a worse CPU like a Ryzen 5 4500, the experience won't always be good (despite no monitoring software actually showing that the CPU is a bottleneck).

When paired with a better CPU (I got a Ryzen 7 5800X to replace it, eventually with an AIO because the temperatures were too high under full load anyway), either of them is pretty okay.

In a single-GPU setup, either of them runs most games okay, with not that many compatibility or stability issues, even in older indie titles, though I've had some, like STALCRAFT: X, complain about running on an integrated GPU (the Intel card being detected as such). Most software also works, unless you want to run LLMs locally, where Nvidia will have more of an advantage and you'd be going off the beaten path. The most annoying issues I've had were some stability problems near the launch of each card; for example, running the B580 with the Boost functionality enabled in their graphics software sometimes crashed in Delta Force, which no longer seems to be an issue.

Temperature and power draw seem fine. Their XeSS upscaling is actually really good (I use it on top of native resolution in War Thunder as fancy AA). Their frame generation feels like it has more latency than FSR but also better quality (which might be subjective), but it's not even supported in that many games in the first place. Their video encoders are pretty nice, but they sometimes get overloaded in intensive games instead of the encoding being prioritized over game framerate (which is stupid). Video editing software like DaVinci Resolve also seems okay.

The games that run badly are typically Unreal Engine 5 titles, such as S.T.A.L.K.E.R. 2 and The Forever Winter, which use expensive rendering techniques; to get at least 30 FPS you have to turn the graphics way down, to the point where the games still run like crap and end up looking worse than something from 5 years ago. Those were even worse on the A series cards, but on the B series they become at least barely playable.

In a dual-GPU setup, nothing works that well, neither in Windows 11 nor Windows 10, neither with the A580 + B580 nor with my old RX 580 + B580: system instability, some games ignoring the Intel GPU preference when an AMD one is available, low framerates when a video is playing on a secondary monitor (I have 4 in total), and the inability to play games on the B580 while encoding on the A580, because either OBS or the hardware doesn't properly support that (e.g. you can't pick which GPU to encode on like you can with Nvidia cards; my attempts at patching OBS to do that failed, since I couldn't get a video frame from one GPU to the other). I moved back to running just the B580 in my PC.

At MSRP, I'd say the Intel Arc B580 is actually a good option, perhaps better than all the A series cards. But the more expensive it gets, the more attractive the alternatives from AMD and Nvidia become. Personally, I wouldn't get an A770 unless I needed the VRAM or the price was really good.

Also, I’m not sure why the A580 needed two 8-pin connectors if it never drew that much power, or why the B580 comes in plenty of larger 3-fan versions when I could never really get high temps running FurMark on the 2-fan version.


The 5800X is a 105W part, so it should still be quite fine with air cooling. I just built a 9950X3D (170W) system with air cooling and it's plenty for that too; temperatures under load are mostly in the 70s, and a stress test gets it up to 85C.


I have a pretty bad Kolink case and have to mount the fans in the front instead of the top, otherwise it gets too crowded: https://kolink.eu/Home/case-1/midi-tower-2/others/quantum.ht...

Without the side panel, the temps are like 10-15C lower than with it: they go up to about 78C under full load with the panel off, but hit 90C with the panel on and the clock frequencies get dialed back.

That is already with a Curve Optimizer value of -10 across all cores.

I will probably need a different case altogether, or just get rid of the solid front panel (those vents on it are too small) and replace it with a custom mesh.

Thankfully, for now, in CPU-Z the scores are ~6500 without the side panel and ~6300 with the panel, so with the AIO and more powerful fans on it, it's pretty close to working optimally, even if not quite there yet.

I also tried it with 5x120mm case fans and an air cooler; it was slightly worse than the AIO. I also tried multiple different thermal pastes, which didn't make much of a difference. It might also just be cursed and have ghosts in it, go figure.


Yep, I guess the case is the limiting factor then; no CPU cooler can do much if the case traps the hot air inside. Though 5 fans should already be enough to move quite a bit of air.

I had a fully new build, so I used one of the well-reviewed Fractal cases to get good airflow, with 5x140mm case fans.


Short answer: no


Strong agree. This reminds me of one of my pet theories: that research and education are fundamentally different skills. A good researcher should be flexible and open-minded, almost to a fault, but a good educator needs to be committed to certain beliefs in order to teach them. More important, an educator should instill good habits (even if those habits involve asking good questions) and set a good example, a requirement entirely lacking from research.

So why do all of our universities only employ teachers who have been trained as researchers?

I think much of the 80% grinding that you describe is just the publish-or-perish mindset of graduate school, which the teachers pick up along the way (I'm not faulting them so much as the process). It's more about appearing to know, rather than knowing. This may be what you have to do to survive in a competitive research environment, but one is left wondering what any of that has to do with educating our children, especially the majority who will never become researchers.


This comment on that PR is pure gold. The bots are talking to each other:

https://github.com/dotnet/runtime/pull/115732#issuecomment-2...


Don't they already have their own silicon (Tensor)?


Kind of. If you look at the specs of the Tensor chips, they appear to essentially copy exactly what Qualcomm is doing. They use off-the-shelf ARM reference design cores packaged together the same way QC does. They’re also about a year behind QC’s latest stuff.

In comparison, Apple (and the very latest QC chipsets) use custom ARM cores. Google has yet to do this.


If you want books that can help you learn to think, you can't do better than the classics. I suggest Plato's dialogs or the Analects of Confucius. You may find them more accessible than you expect. They were excellent teachers.

Here's a good translation of Plato by Benjamin Jowett: https://oll.libertyfund.org/titles/plato-the-dialogues-of-pl...

And one of Confucius by James Legge: https://oll.libertyfund.org/titles/legge-the-chinese-classic...

Both translations are from the 19th Century and so have entered the public domain. But both are also often considered the best English versions of these works, even today.

(Note that you can download a facsimile PDF of the original book and print it out as you go. I prefer to read anything long-form that way. Better for concentration, in my opinion.)

