Different countries have different considerations.
Most of these "deals" aren't deals but rather frameworks. And bilateral trade is tumultuous: countries fail to follow through on their promises all the time and end up arguing at the WTO. So it is hard to say whether Europe or Japan "blinked". Given the timing of the European deal, it might have been to help Trump so that he doesn't have egg on his face for not having 200 deals by 1st August.
India wants to cozy up to the US and was one of the first countries to start trade negotiations. The Trump-Modi dynamic has been good. The sticking point is agriculture and dairy: both countries subsidize them, and in both countries farmers form a big chunk of politically aware voters. For the Indian government it would be political suicide to even nod along like Japan and Europe did. But if you listen to Trump, he keeps saying it is about Russian oil.
Brazil might have the same issue. Historically, the US was the largest soybean exporter, but during Trump's last trade war with China, China moved away from US exports and started buying from Brazil. So, again, even the appearance of a deal might be problematic for the government.
And reading this side by side, maybe US farmers are not that economically aware?
India's issues with the US are for a completely separate reason.
Bihar elections are coming up in 2 months [2]. Any incumbent government in India can't give trade concessions on agriculture during peak campaigning season in a swing state that can impact elections in 2-3 additional states as well as the general election in 4 years.
The main Indian exports to the US (pharma, electronics, and services) are all tariff exempted so the economic pain is marginal.
The only export that's hurt is textiles, but frankly, textile workers don't matter in Indian elections, especially when most of them leave for 3-4 months each year to work on the family farm and get social benefits based on their agrarian status and voter rolls that they never updated.
Realistically, India and US will sign a deal either after the Bihar elections, or after making ag/dairy a separate track from the rest of the deal.
Trump needs to keep the cheeseheads of Wisconsin happy just like the NDA needs to keep Bihari farmers happy through direct subsidies [0] and hardline agriculture policies [1], which is why both the US and India will maintain maximalist positions on agriculture and dairy.
My hunch is a comprehensive deal will be announced during the election media blackout in the run-up to the Bihar elections, or shortly after the election.
there has yet to be a value OpenAI originally claimed to hold that lasted a second longer than the profit motive to break it.
they went from open to closed.
they went from advocating ubi to for profit.
they went from pacifist to selling defense tech.
they went from a council overseeing the project to a single man in control.
and that's fine, go make all the money you can, but don't try to do this sick act where you try to convince people to thank you for acting in your own self interest.
can someone help me understand how the following can be true:
1. TPUs are a serious competitor to nvidia chips.
2. Chip makers with the best chips are valued at 1-3.5T.
3. Google's market cap is 2T.
4. It is correct for google to not sell TPUs.
I have heard the whole "it's better to rent them" thing, but if they're actually good, selling them is almost as good a business as every other part of the company.
Wall Street undervalued Google even on day one (IPO). Bezos has said that some of the times the stock was doing the worst were when the company was doing great.
So, to help you understand how they can be true: market cap is governed by something other than what a business is worth.
As an aside, here's a fun article that embarrasses wall street. [0]
I remember sitting around the lunch table in a tech company when Google IPO'd and none of us understood the IPO valuation. I didn't buy any stocks. I also didn't get "cloud" either. Sometimes new business is essentially created out of thin air. Google and Amazon's valuation did not increase only due to their efforts, it also increased because the broader market shifted.
I guess that means don't take investment advice from me ;) I've done OK buying indices though.
Selling them and supporting that in the field requires quite some infrastructure you'd have to build. Why go through all that trouble if you already make higher margins renting them out?
Also, if they are so good, it's best to not level the playing field by sharing that with your competitors.
Also "chip makers with the best chips" == Nvidia, there aren't many others. And Alphabet does more than just produce TPUs.
Does Google Cloud offer them on an "AWS Outposts"-style model? I think that plus cloud access is probably the easiest and "best" way to offer them. The last thing you need to be dealing with is Supermicro, Gigabyte, etc. building a box for them and so on - I can definitely understand not selling the raw chip.
Google is saving a ton of money by making TPUs, which will pay off in the future when AI is better monetized, but so far no one is directly making a massive profit from foundation models. It's a long term play.
Common in gold rushes, but then they are selling chips. Are they overvalued? Maybe. Are they profitable (something WeWork and Uber aren't)? Yes, quite.
Nvidia, who make AI chips with kinda good software support, and who have sales reflecting that, is worth 3.5T.
Google, who make AI chips with barely-adequate software, is worth 2.0T.
AMD, who also make AI chips with barely-adequate software, is worth 0.2T.
Google made a few decisions with TPUs that might have made business sense at the time, but with hindsight haven't helped adoption. They closely bound TPUs with their 'TensorFlow 1' framework (which was kinda hard to use) then they released 'TensorFlow 2' which was incompatible enough it was just as easy to switch to PyTorch, which has TPU support in theory but not in practice.
They also decided TPUs would be Google Cloud only. Might make sense, if they need water cooling or they have special power requirements. But it turns out the sort of big corporations that have multi-cloud setups and a workload where a 1.5x improvement in performance-per-dollar is worth pursuing aren't big open source contributors. And understandably, the academics and enthusiasts who are giving their time away for free aren't eager to pay Google for the privilege.
Perhaps Google's market cap already reflects the value of being a second-place AI chipmaker?
> Tensorflow

Jax very much is working (and in my view better, aside from the lack of community) software support, especially if you use their images (which they do).
They have been using jax/flax/etc rather than tensorflow for a while now. They don't really use pytorch from what I see on the outside from their research works. For instance, they released siglip/siglip2 with flax linen: https://github.com/google-research/big_vision
TPUs very much have software support, hence why SSI etc use TPUs.
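For anyone who hasn't touched it, here's roughly what "JAX on a TPU" looks like in practice - a minimal sketch of my own, not Google's internal stack, assuming jax is installed with TPU support on a Cloud TPU VM (elsewhere it silently falls back to CPU):

    import jax
    import jax.numpy as jnp

    print(jax.devices())        # e.g. [TpuDevice(id=0), ...] on a TPU VM, a CPU device locally

    @jax.jit                    # XLA-compiles the function for whatever backend is available
    def predict(w, x):
        return jnp.tanh(x @ w)

    w = jnp.ones((128, 16))
    x = jnp.ones((8, 128))
    print(predict(w, x).shape)  # (8, 16)

The same code runs unchanged on CPU, GPU or TPU, which is the main selling point.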
> They have been using jax/flax/etc rather than tensorflow for a while now
Jax has a harsher learning curve than PyTorch in my experience. Perhaps it's worth it (yay FP!) but it doesn't help adoption.
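To make the "yay FP!" point concrete, here's a toy example of my own (nothing official) of the functional style Jax pushes you into - gradients come from transforming a pure function, and randomness needs explicit PRNG keys instead of a global seed:

    import jax
    import jax.numpy as jnp

    def loss(w, x, y):
        return jnp.mean((x @ w - y) ** 2)

    grad_fn = jax.grad(loss)             # a new function that returns dloss/dw
    key = jax.random.PRNGKey(0)          # explicit randomness, no hidden global state
    kx, ky, kw = jax.random.split(key, 3)
    x = jax.random.normal(kx, (32, 4))
    y = jax.random.normal(ky, (32,))
    w = jax.random.normal(kw, (4,))
    w = w - 0.1 * grad_fn(w, x, y)       # one explicit SGD step, no .backward() / optimizer.step()

Elegant once it clicks, but coming from PyTorch's imperative style it's a real adjustment.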
> They don't really use pytorch from what I see on the outside from their research works
Of course not, there is no outside world at Google - if internal tooling exists for a problem, their culture effectively mandates using that before anything else, no matter the difference in quality. This basically explains the whole TF1/TF2 debacle, which understandably left a bad taste in people's mouths. In any case, while they don't use PyTorch, the rest of us very much do.
Right, and in order to use them effectively you basically have to use Jax. Most researchers don't have the advantage of free compute, so they are effectively trying to buy mindshare rather than winning on quality. This is fine, but it's worth repeating as it biases the discussion heavily - many proponents of Jax just so happen to be on TRC or to have been given credits for TPUs via some other mechanism.
Also - getting access to a TPU on GCP (particularly when you don't have a <fancy_school>.edu email address) has historically been a _fucking nightmare_. Absolute shit show.
I am a high schooler, and easily got a tpuv4-64. No fancy school or edu email address, just a dream of winning geoguessr. They are very receptive to emails, I asked for more and they got more for me.
Like other Google internal technologies, the amount of custom junk you'd need to support to use a TPU is pretty extreme, and the utility of the thing without the custom junk is questionable. You might as well ask why they aren't marketing their video compression cards.
Aside from the specifics of Nvidia vs Google, one thing to note regarding company valuations is that not all parts of the company are necessarily additive. As an example (read: a thing I’m making up), consider something like Netflix vs Blockbuster back in the early days - once Blockbuster started to also ship DVDs, you’d think it’d obviously be worth more than Netflix, because they’ve got the entire retail operation as well, but that presumes the retail operation is actually a long-term asset. If Blockbuster has a bunch of financial obligations relating to the retail business (leases, long-term agreements with shippers and suppliers, etc), it can very quickly wind up that the retail business is a substantial drag on Blockbuster’s valuation, as opposed to something that makes it more valuable.
AMD and even people like Huawei also make somewhat acceptable chips, but using them is a bit of a nightmare. Is it a similar thing here? Using TPUs is more difficult, they only exist inside Google Cloud, etc.
I believe Broadcom is also very involved in making the TPUs and the networking infrastructure, and they are valued at 1.2T currently. Maybe consider the combined value of Broadcom and Google.
If they think they’ve got a competitive advantage vs. GPUs which benefits one of their core products, it would make sense to retain that competitive advantage for the long term, no?
No. If they sell the TPUs for “what they’re worth”, they get to reap a portion of the benefit their competitors would get from them. There’s money they could be making that they aren’t.
Or rather, there would be if TPUs were that good in practice. From the other comments it sounds like TPUs are difficult to use for a lot of workloads, which probably leads to the real explanation: No one wants to use them as much as Google does, so selling them for a premium price as I mentioned above won’t get them many buyers.
Good questions; below I attempt to respond to each point, then wrap it up. TL;DR: even if the TPU is good (and it is good for Google), it wouldn't be "almost as good a business as every other part of their company", because the value add isn't FROM Google in the form of a good chip design (the TPU). Instead the value add is TO Google, in the form of specific compute that is cheap and fast, FROM relatively simple ASICs (TPU chips) stitched together into massively complex systems (TPU super pods).
If you're interested in further details:
1) TPUs are a serious competitor to Nvidia chips for Google's needs; per the article they are not nearly as flexible as a GPU (dependence on precompiled workloads, high utilization of PEs in the systolic array). Thus for broad ML market usage, they may not be competitive with Nvidia GPUs/racks/clusters.
2) Chip makers with the best chips are not valued at 1-3.5T; per other comments to the OC, only Nvidia and Broadcom are worth this much. These are not just "chip makers", they are (the best) "system makers", driving designs for the chips and interconnect required to go from a diced piece of silicon to a data center consuming MWs.
This part is much harder, and it is why Google (who design the TPU) still have to work with Broadcom to integrate their solution.
Indeed, every hyperscaler is designing chips and software for their needs, but every hyperscaler works with companies like Broadcom or Marvell to actually create a complete, competitive system.
Side note: Marvell has deals with Amazon, Microsoft and Meta to mostly design these systems, and they are worth "only" 66B.
So, you can't just design chips to be valuable, you have to design systems. The complete systems have to be the best, wanted by everyone (Nvidia, Broadcom), in order to be in the Ts; otherwise you're in the Bs (Marvell).
4) I see two problems with selling TPUs: customers and margins. If you want to sell someone a product, it needs to match their use, and currently the use only matches Google's needs, so who are the customers? Maybe you want to capture the hyperscalers / big AI labs, but their use case is likely similar to Google's. If so, margins would have to be thin, otherwise they'd just work directly with Broadcom/Marvell (and they all do). If Google wants everyone currently on CUDA/Nvidia as a customer, then you massively change the purpose of the TPU, and even of Google.
To wrap up: even if the TPU is good (and it is good for Google), it wouldn't be "almost as good a business as every other part of their company", because the value add isn't FROM Google in the form of a good chip design (the TPU). Instead the value add is TO Google, in the form of specific compute that is cheap and fast, FROM relatively simple ASICs (TPU chips) stitched together into massively complex systems (TPU super pods).
Sorry that got a bit long winded, hope it’s helpful!
This also all assumes that there is excess foundry capacity in the world for Google to expand into, which is not obvious. One would need exceptionally good operations to compete here and that has never been Google's forte.
Why do you say that? They are on their seventh iteration of hardware and even from the beginning (according to the article) they were designed to serve Google AI needs.
My take is "sell access to TPUs on Google cloud" is the nice side effect.
By designing AI-first products that can operate at far lower margins. Google has to extract hundreds of billions a year; Perplexity, Anthropic, and OpenAI all don't.
Idk what the future of the browser is, but I know if I was in the lab at any of these companies I'd be laughing at the competition putting out a product that is just a text summary in a window.
This, done well, is a transformational thing; it's just that no one has been willing to invest yet. But the compute on a phone is now good enough to do most things most users do on desktop.
I can easily see the future of personal computing being a mobile device with peripherals that use its compute, and the cloud for anything serious - be that AirPods, glasses, watches, or just hooking that device up to a larger screen.
There's not a great reason for an individual to own processing power in a desktop, laptop, phone, and glasses when most of them sit idle while the others are in use.
The future of personal computing is being dictated by the economics of it, which are that the optimal route to extract value from consumers is to have walled-garden software systems gated by per-month subscription access and/or massive forced advertising. This leads to everything being in the cloud and only fairly thin clients running on user hardware. That gives the most control to the system owners and the least control to the user.
Given that all the compute and all the data is on the cloud, there is little point in making ways for users to do clever interconnect things with their local devices.
I've heard so many "the future of personal computing" statements that haven't come true, so I don't put much stock in them.
I remember when everyone thought we were going to throw out our desktops and do all our work on phones and tablets! (Someone who kept insisting on this finally admitted that they couldn't do a spreadsheet on a phone or tablet.)
> Given that all the compute and all the data is on the cloud, there is little point in making ways for users to do clever interconnect things with their local devices.
IMO, it's a pain-in-the-ass to manage multiple devices, so it's much easier to just plug my phone into a clamshell and have all my apps show up there.
> we were going to throw out our desktops and do all our work on phones and tablets! (Someone who kept insisting on this finally admitted that they couldn't do a spreadsheet on a phone or tablet.)
We're almost there. The cool kids are already using 12" touchscreen ARM devices that people from 10 or 20 years ago would probably think of as tablets. Some kinds of work benefit greatly from a keyboard, but that doesn't necessarily mean you want one all the time - I still think the future is either 360-fold laptops with a good tablet mode (indeed that's the present for me, my main machine is an HP Envy) or something like the MS Surface line with their detachable "keyboard cover".
> Some kinds of work benefit greatly from a keyboard, but that doesn't necessarily mean you want one all the time
I would say most kinds of work.
Even if you're just in Teams discussions, a real keyboard is much more productive than messing around on a touchscreen. Same with just reading: sometimes I read a forum thread on my phone, and then when I get back to the real computer I'm surprised by how little I actually read compared to how much it felt like.
The only case where I don't see this holding is creative work like drawing, where a tablet is really perfect, much better than a Wacom or something.
Well, the MacBook Air is pretty much an iPad that swapped its touchscreen for a keyboard (and trackpad).
> I still think the future is either 360-fold laptops with a good tablet mode (indeed that's the present for me, my main machine is an HP Envy) or something like the MS Surface line with their detachable "keyboard cover".
I think people still want to use different form factors in the future. There's different uses for a phone, a tablet, a laptop and a desktop.
I do agree that laptops might get better tablet modes, but if you want to have a full-sized comfortable-ish keyboard, the laptop is gonna be more unwieldy than a dedicated tablet.
The only thing you save from running your desktop (or even laptop) form factor off your phone is the processor (CPU, GPU, RAM). You still have to pay for everything else. But even today the cost of desktop processing components that can reach phone-like performance is almost a rounding error; just because they have so much more space, cooling and power to play with.
(Desktop CPUs can be quite pricey if you buy higher end ones, but they'll outclass phones by comical amounts. Phone performance is really, really cheap in a desktop.)
> I think people still want to use different form factors in the future. There's different uses for a phone, a tablet, a laptop and a desktop.
> The only thing you save from running your desktop (or even laptop) form factor off your phone is the processor (CPU, GPU, RAM). You still have to pay for everything else.
Having used the same device as my tablet/laptop/desktop for a few years (previously a couple of generations of Surface Book, now the Envy, in both cases with a dock set up on my desk), I never want to go back. It just makes using it so much smoother, even compared to having tab sync and what have you between multiple devices. It's not a money thing, it's a convenience thing, which is why I think it'll win out in the end.
I think as hardware continues to get thinner and lighter, the advantage of a tablet-only device compared to a tablet/laptop will disappear, and as touchscreens get cheaper, there'll be little point in laptop-only devices. I definitely still want an easy way to take a keyboard with my device on the train/plane, and I don't know what exact hardware arrangement will win out for that, but I'm confident that the convergence will happen. I think phone convergence will also happen eventually, for the same reason, but how that will actually work in terms of the physical form factor is anyone's guess.
> Having used the same device as my tablet/laptop/desktop for a few years (previously a couple of generations of Surface Book, now the Envy, in both cases with a dock set up on my desk), I never want to go back. It just makes using it so much smoother, even compared to having tab sync and what have you between multiple devices. It's not a money thing, it's a convenience thing, which is why I think it'll win out in the end.
Yes, that's useful. But eg ChromeOS already gives you most of that, and a bit of software could get you all the way there.
> I think as hardware continues to get thinner and lighter, the advantage of a tablet-only device compared to a tablet/laptop will disappear, and as touchscreens get cheaper, there'll be little point in laptop-only devices.
I agree with the latter, but not the former. There are mechanical limits to shrinking a keyboard while still preserving comfort.
(And once you have the extra space from a keyboard, you might as well fill it up with more battery. But I'm not so sure about that compared to the argument about physical lower bounds on keyboard size.)
> eg ChromeOS already gives you most of that, and a bit of software could get you all the way there.
I don't understand what you mean here. If you're talking about some kind of easy sync-between-devices software, people have been trying to make that work for decades, and they not only haven't succeeded, they haven't even really made any progress.
> There are mechanical limits to shrinking a keyboard while still preserving comfort.
Maybe, but those limits are plenty big enough for a tablet - particularly with the size of phones these days, a tablet smaller than say 10" is pointless, and the keyboards on 11" laptops are fine. Now making a device that can work as both a phone and a laptop-with-keyboard will probably require some mechanical innovation, yes, but that's the sort of thing that I suspect will be figured out sooner or later, e.g. we're already seeing various types of folding phones going through the development process.
11" laptops are not fine to type on all day unless you give them huge bezels (even the 11" macbook which did have those huge bezels was space-constrained on the less important keys). Ergonomics is really important.
Sure, it's fine for getting by for an hour or two, but spending 8 hours a day, 5 days a week on one is a really bad idea and a great path to crippling RSI. In fact, using any laptop that much is a bad idea, due to the bad posture it encourages (with the screen attached to the keyboard). This is why docking stations are still so important.
> 11" laptops are not fine to type on all day unless you give them huge bezels (even the 11" macbook which did have those huge bezels was space-constrained on the less important keys). Ergonomics is really important.
Well, it depends on personal preferences.
I usually go for at least 15" in my laptops, but I can believe that other people would be fine with 11" for what they are doing.
The laptop / tablet hybrid is a valid form factor, and these systems are reasonably successful in the market.
Of course, that doesn't mean that they are the right device for everyone.
> (Someone who kept insisting on this finally admitted that they couldn't do a spreadsheet on a phone or tablet.)
I think that's changing thanks to generative AI - I would expect people to gradually replace manually creating a spreadsheet with 'vibecoding' it.
> IMO, it's a pain-in-the-ass to manage multiple devices, so it's much easier to just plug my phone into a clamshell and have all my apps show up there.
ChromeOS already works like that, when you log in on different devices, without having to physically lug one device around that you plug into different shells.
Laptops + docking stations are usually just as fast as a desktop. You can buy $10,000 desktops that are much faster (50+ cores and a lot of RAM), but most developers don't find them enough faster to be worth it (in my benchmarks on a 10+ million line C++ project, rebuilds with 40 cores finished faster than rebuilds using all 50). It is easier to have everything locally where you are. If, like many of us, you sometimes work from home, remoting into a different machine is always a bit painful.
I think this is a really good take - Apple especially (but Google too) aren't gonna naturally invest time and resources into software that'll make you less likely to buy more of their hardware.
That said, market incentives can and do change pretty fast. Especially with climate change and the current tension in global supply chains, we could see a shift away from hardware caused by taxes or price hikes (I'm not saying we will, though).
That'd be a game changer for how much companies might invest in changing what computing looks like.
> the compute on a phone is now good enough to do most things most users do on desktop.
Really, the compute on a phone has been good enough for at least a decade now once we got USB C. We're still largely doing on our phones and laptops the same things we were doing in 2005. I'm surprised it took this long
I'm happy this is becoming a real thing. I hope they'll also allow the phone's screen to be used like a trackpad. It wouldn't be ideal, but there's no reason the touchscreen can't be a fully featured input device.
I fully agree with you on the wasted processing power - I think we'll eventually head toward a model of having one computing device with a number of thin clients which are locally connected.
> I hope they'll also allow the phone's screen to be used like a trackpad. It wouldn't be ideal, but there's no reason the touchscreen can't be a fully featured input device.
I might have misunderstood but do you mean as an input device attached to your desktop computer? Kdeconnect has made that available for quite some time out of the box. (Although it's been a long time since I used it and when I tested it just now apparently I've somehow managed to break the input processing half of the software on my desktop in the interim.)
Yes! I enjoy KDEConnect a lot for that :) With the phone being the computer, the latency can probably be made low enough that it just feels like a proper touchpad
> We're still largely doing on our phones and laptops the same things we were doing in 2005. I'm surprised it took this long
Approximately no-one was watching 4k feature-length videos on their phones in 2005, or playing ray traced 3d games on their laptops.
Sending plain text messages is pretty much the same as back then, yes. But these days I'm also taking high resolution photos and videos and share those with others via my phone.
> I hope they'll also allow the phone's screen to be used like a trackpad.
Samsung's DeX already does that.
> I fully agree with you on the wasted processing power - I think we'll eventually head toward a model of having one computing device with a number of thin clients which are locally connected.
Your own 'good enough' logic already suggests otherwise? Processors are still getting cheaper and better, so why not just duplicate them? Instead of having a dumb large screen (and keyboard) that you plug your phone into, it's not much extra cost to add some processing power to that screen and make it a full desktop PC.
If we are getting to a 'thin client' world, it'll be because of the 'cloud', not because of connecting to our phones. Even today, most of what people do on their desktops can be done in the browser, so we'll likely see more of that.
> Approximately no-one was watching 4k feature-length videos on their phones in 2005, or playing ray traced 3d games on their laptops.
Do people really do this now? Watching a movie on my phone is so suboptimal I'd only consider it if I really have no other option. Holding it up for 2 hours, being stuck with that tiny screen, brrr.
I can imagine doing it on a plane ride when I'm not really interested in the movie and am just doing it to waste some time. But when it's a movie I'm really looking forward to, I'd want to really experience it. A VR headset does help here but a mobile device doesn't.
You position it vertically against something in bed and keep it close enough (half a meter) so that it's practically the same size as a TV that's 4-5 meters away, and you enjoy the pixels. I love doing this a few times a week when I'm going to sleep or just chilling.
We were watching videos and playing games on our laptops in 2005. Of course they mostly weren't 4K or raytraced, don't be silly.
The thin client world is one anticipating a world with fewer resources to make these excess chips. It's just a speculation of what things will look like when we can't sustain what is unsustainable.
> We were watching videos and playing games on our laptops in 2005. Of course they mostly weren't 4K or raytraced, don't be silly.
The video comment was about phones. The raytracing was about laptops.
Yes, laptops were capable of watching DVDs in 2005. (But they weren't capable of watching much YouTube, because YouTube was only started later that year. Streaming video was in its infancy.)
> It's just a speculation of what things will look like when we can't sustain what is unsustainable.
Huh? We are sitting on a giant ball of matter, and much of what's available in the crust is silicates. You mostly only need energy to turn rocks into computer chips. We get lots and lots of energy from the sun.
How is any of this unsustainable?
(And a few computer chips is all you save with the proposed approach. You still need to make just as many screens and batteries etc.)
Our disagreement is probably in the "mostly only need energy to turn rocks into computer chips". I think our economy is a lot more fragile and complicated than that. And that economy relies on non-renewable resources which are dwindling, in a world which is poised to offer less of its renewable resources, which includes people and their labor. (This is a compounded problem, since people and their labor are what would drive recycling, say, to extract gold from old chips.) And important knowledge (say, about how to make CPUs) is something that can be lost with just an unlucky coincidence, or something like another world war.
You don't need to imagine a total economic collapse. Take any resource that goes into a chip, and contrive any reason we'll have to consume significantly less of that resource. How do you solve that?
Well, we have highly-redundant compute-per-person. I personally have nine pretty capable computer chips to my person, just in the building I'm in. That's a lot, and that represents an excess in resource consumption. A phone-as-motherboard laptop solves one of those chips. If we make the same games we're making today but we go back a decade or two in graphics, then we can have fewer consoles and gaming PCs, too.
I'm not saying "one chip for many devices" is a panacea. There are other things we might do. Maybe laptops and phones can be made to have display input, for example.
> And that economy relies on non-renewable resources which are dwindling, [...]
We are sitting on a giant ball of matter. None of our resource use is actually using up material, we are just transforming matter.
We might be running out of resources that are cheap and easy to transform (eg cheap oil), but all of these are problems we can fix with enough energy. And eg solar power is going to provide more and more cheap energy. Fusion is also going to come to the rescue in a few decades (and we already had nuclear fission for ages.)
The economy is pretty resilient. Not even a global pandemic left all that much of a mark three years later.
> Take any resource that goes into a chip, and contrive any reason we'll have to consume significantly less of that resource. How do you solve that?
With substitution, economising and ingenuity. E.g. early transistors were made of germanium, but we use silicon these days. That's a substitution.
> Well, we have highly-redundant compute-per-person. I personally have nine pretty capable computer chips to my person, just in the building I'm in. That's a lot, and that represents an excess in resource consumption.
Less than you'd think. These days, the main expense is the power to run your chips, more so than the energy to make them. And having redundant chips around that aren't turned on doesn't cost any of the former.
> If we make the same games we're making today but we go back a decade or two in graphics, then we can have fewer consoles and gaming PCs, too.
Btw, that's one of the answers about what people would do in case of resource shortage for making chips.
> I'm not saying "one chip for many devices" is a panacea.
And I'm saying it would only save you a few chips, but wouldn't save you on batteries nor screens etc.
(And even a 'dumb' screen needs quite a few chips these days.) Hey, even Apple's chargers have more powerful chips in them these days than their first stand alone computers a few decades ago had.
---
Btw, you can economise on powerful chips even more, if you do most of the heavy computing in the cloud: even your combined phone/laptop/desktop chip would still be idle most of the time. The cloud can eg use one million chips for three million people. That's even better than one chip for one person (which you touted as better than nine computers for one person.)
I think ultimately, we disagree about whether or not it's inevitable that we end up having an economy that can transform sunlight into a perpetual recycling machine. I think that's not inevitable, especially in a scenario where we're left dealing with a climate collapse.
Having 'target display mode' on laptops and whatnot is one way that would save the chips that go into screens, which is why I mentioned it above. I agree that computing in the cloud can also reduce the number of chips used (although that does rely on chips to keep the internet going, etc.)
> I think that's not inevitable, especially in a scenario where we're left dealing with a climate collapse.
A 'climate collapse' is extremely unlikely. Look at studies on the (prospected) economic impacts of climate change. Wikipedia has an article on it, for example.
In any case, the forecasts expect something like perhaps a 20% total reduction in GDP over, say, the next 100 years compared to the scenario without global warming. (But that's on top of our regularly scheduled single-digit-percent-per-year economic growth.)
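Back-of-the-envelope, with my own assumed baseline of ~2%/year growth (my number, not one from those studies):

    baseline = 1.02 ** 100          # ~7.2x richer after 100 years with no warming
    with_warming = baseline * 0.8   # ~5.8x richer with the projected 20% hit applied
    print(baseline, with_warming)

So the projection reads as "much richer than today, just less rich than we would otherwise have been".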
20% is a huge impact! It's bigger than Brexit. But it's also only about as big as the per capita gap between the US and the UK. And the UK is far from a collapsed nation.
And in case you want to mention that the economy isn't everything: yes, I totally agree. That's why my argument works in reverse: the economy can only function when the environment hasn't totally collapsed. Thus if leading experts project around a 20%-ish reduction in GDP, that means they don't project a collapse of the environment.
As a sanity check: financial markets also don't seem to expect a collapse of the global economy anytime soon.
Yes it can, it can also become a keyboard in fact.
One thing I'm kinda missing is that it doesn't seem to be able to become both at the same time on a system that has the screen space for that. Like a tablet or Z Fold series.
:D I avoid Samsung products but I'm happy that at least exists. I hope it's not patented, and Google is both able to put the same thing into Android, and that it's available in AOSP
This concept has been floating around for a long time. I think Motorola was pitching it in 2012, and I'm sure confidential concepts in the same vein have been tried in the labs of most of the big players.
> I can easily see the future of personal computing being a mobile device with peripherals that use its compute and cloud for anything serious. be that airpods, glasses, watches, or just hooking that device up to a larger screen.
I don't see that at all.
That's because I think over time the processing power of, say, a laptop will become a small fraction of its costs (both in terms of purchase price and in terms of power).
The laptop form factor is pretty good for having a portable keyboard, pointing device and biggish screen together. Outsourcing the compute to a phone still leaves you with the need for keyboard, pointing device and screen. You only save on the processor, which is going to be a smaller and smaller part.
> theres not a great reason for an individual to own processing power in a desktop, laptop, phone, and glasses when most are idle while using the others.
Even in your scenario, most of your devices will be idle most of the time anyway. And they don't use any energy when turned off. So you are only saving the cost to acquire the processor itself.
Desktop computer processors that can hit the computing power of a mobile processor are really, really cheap already today.
You are ignoring data location and software installs.
Having all your data always with you stored locally (on your phone) is simpler than syncing and more private than cloud.
One OS with all your software. No need to install the same app multiple times on different devices, and no need to deal with questions like how many devices my license is valid for. However, apps would need to come with a responsive UI - no more separate mobile and desktop versions.
For example, you take photos on your phone, dock it at your desk or in a laptop shell, and edit them comfortably on a big screen, with an app you bought and installed once. No internet connection is required.
A docking station could be more than just display and input devices. It could contain storage for backing up your data from the phone. Or powerful CPU and GPU for extended compute power (you would still use OS and apps/games on your phone with computations being delegated to more powerful HW).
This could replicate many things cloud offers today (excluding collaboration). No need to deal with an online account for your personal stuff. IMO, it would probably be less mystical than cloud to most users.
> Having all your data always with you stored locally (on your phone) is simpler than syncing and more private than cloud.
You need to sync it anyway. Having that phone with you all day also means exposing it to a lot of risk involving theft, drops and other kind of damage. You need that sync for backup purposes.
I agree actually having it on the phone is great though. I use DeX a LOT, it's a great way of working when I don't have my laptop with me but do have a docking station available (e.g. at the office when I forget my laptop or just dropped in unplanned)
Backup is a simple one way sync, but like you said, it is needed. It could still be private, if backup to another of your devices is made when your phone connects to your home WiFi.
You can (in principle) back up over the cloud and still have everything private. Encryption and open source software can handle that. (You want the software to be open source, so you can check that it's really end-to-end encrypted without a backdoor.)
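A minimal sketch of what that looks like, using the open-source `cryptography` package (the names and data here are made up; the point is the key stays on your device and the cloud only ever sees ciphertext):

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()     # stays local; in a real setup you'd derive it from a passphrase
    f = Fernet(key)

    blob = f.encrypt(b"my photos and contacts")   # this opaque blob is what gets uploaded
    assert f.decrypt(blob) == b"my photos and contacts"

Tools like restic or Borg already do essentially this for backups.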
Of course, that scenario would only become the norm, if there's mainstream demand for that. By and large, there ain't.
> You are ignoring data location and software installs.
Caching works well for that.
> Having all your data always with you stored locally (on your phone) is simpler than syncing and more private than cloud.
Have a look at how GMail handles this. It has my emails cached locally on my devices so I can read them offline (and can also compose and hit the send key when offline), but GMail also does intelligent syncing behind the scenes. It just works.
> For example, you take photos on your phone, dock it at your desk or in a laptop shell, and edit them comfortably on a big screen, with an app you bought and installed once. No internet connection is required.
My devices are online all the time anyway.
> A docking station could be more than just display and input devices. It could contain storage for backing up your data from the phone.
I'm already backing up to the Cloud automatically. And Google handles all the messy details, even if my house burns down.
> Or powerful CPU and GPU for extended compute power (you would still use OS and apps/games on your phone with computations being delegated to more powerful HW).
How is that different from the ChromeOS scenario, apart from that the syncing in your case doesn't involve the cloud?
> This could replicate many things cloud offers today (excluding collaboration). No need to deal with an online account for your personal stuff. IMO, it would probably be less mystical than cloud to most users.
No, it would be more annoying, because I couldn't just log in anywhere in the world, and get access to my data. And I would have to manually bring devices in contact to sync them.
You can build what you are suggesting. And some people (like you!) will like it. But customers by-and-large don't want it.
Cache invalidation is hard. Offline-first is also hard and expensive to develop. Single source of truth + backup is simpler.
> No, it would be more annoying, because I couldn't just log in anywhere in the world, and get access to my data. And I would have to manually bring devices in contact to sync them.
You are traveling without your phone? I don't always have unlimited internet when traveling, and if you lose your phone while traveling there's a good chance you won't be able to log in due to 2FA anyway. Devices just have to connect to the same local network to sync; the phone probably connects to your WiFi automatically when you come home. Syncing over the internet is also possible.
I'm just saying it could be done. Not that everybody would use it or like it. Although, I imagine getting rid of one dependency (cloud) and having more control would be a plus to some.
The cloud is not magically without issues. People do get locked out of their cloud accounts due to some heuristic flagging them, payment issues, user errors or even political reasons, and it can take a very long time before you get it resolved. Last year there was even a story on HN about Google Cloud accidentally deleting a customer's account and all their data.
> But customers by-and-large don't want it.
Do you have any data backing this up?
A phone-centered solution could be more cost effective. A casual user would only need a phone, a backup solution (either cloud-based or an external drive connected to a network) and a bigger display with input devices (portable or desktop). Possibly one less subscription they have to pay and lower HW costs.
> Cache invalidation is hard. Offline-first is also hard and expensive to develop. Single source of truth + backup is simpler.
Yes, cache invalidation isn't trivial. But it's a software problem that you can solve (for your particular application, or with a library for many similar applications) with enormous economies of scale.
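As a toy illustration (entirely made up, hand-waving away clock skew and real conflict resolution), a last-write-wins merge is the kind of rule a sync library can implement once and amortize across many apps:

    def merge(local: dict, remote: dict) -> dict:
        # each value is a (timestamp, data) pair; keep the newer one per key
        merged = dict(local)
        for k, v in remote.items():
            if k not in merged or v[0] > merged[k][0]:
                merged[k] = v
        return merged

    local = {"note.txt": (100, "draft v1")}
    remote = {"note.txt": (120, "draft v2"), "todo.txt": (90, "buy milk")}
    print(merge(local, remote))   # the newer note.txt wins, todo.txt is added

The hard part isn't this loop, it's everything around it (conflicts, deletions, partial syncs), which is exactly why it pays to solve it once in a shared library.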
> I'm just saying it could be done. Not that everybody would use it or like it. Although, I imagine getting rid of one dependency (cloud) and having more control would be a plus to some.
Ok, no objection there. Yes, some people would like this.
My point is that cloud first, and local caching that lets you work offline (like what you get with GMail and Google Docs) works well enough for most people, that there's probably not enough market share left over for your offline-first dream to get the economics of scale.
Though it's probably still more than possible in the same way that running your desktop on Linux was feasible from the 1990s onwards: at times a bit clunky, but if you are willing to put up with it, totally doable. Been there, done that.
> A phone-centered solution could be more cost effective. A casual user would only need a phone, a backup solution (either cloud-based or an external drive connected to a network) and a bigger display with input devices (portable or desktop). Possibly one less subscription they have to pay and lower HW costs.
If you need an external display anyway (and a battery, if you want a laptop form factor), adding a bit of compute power to turn it into essentially a ChromeBook is close enough to free. You don't even need that much computing power, because instead of offloading the computation onto your phone (like your scenario), you offload the heavy lifting into the cloud (basically our real world right now for most people).
The HW costs aren't that much lower, because low performance chips are already pretty cheap.
> But it's a software problem that you can solve ... with enormous economies of scale.
Can be a problem for software that doesn't have such economies of scale.
Cloud is cheap for very basic usage, but costs can increase noticeably when workload increases.
Regarding UX: some things work better in the cloud, while some tasks are not so well suited for it (e.g. latency-sensitive tasks, or tasks that require a non-trivial amount of data transfer between the user and the cloud).
I have no idea how many casual users would be affected by one or more of these things, if any. A phone-centered user could still use the cloud for some things. Maybe there would be enough interest if a polished solution becomes available. It could be you're right, I don't really know.
A laptop wins every time because I don't have to carry around all my peripherals and set them all up again. Unless there are going to be dock setups in every conference room, coffee shop, table in my house, airplane, car, deck, etc., a laptop makes more sense.
Then what do you save? Only the system-on-a-chip (CPU, GPU, RAM).
And the hardware to get an SoC with phone-like performance in a laptop or desktop form factor is relatively cheap, just because you have so much more space and power and cooling to work with.
(Your laptop-shell definitely needs its own power supply, whether that be a battery or a cable, because the screen alone will take more power than your phone's battery can provide for any sustained period of use.)
Right but if it's the same as a laptop why not just use a laptop?
The only things I can think of are you really want to keep all the data on your phone and don't want to use cloud sync solutions (Dropbox etc.), or you really want to save a couple of hundred dollars getting a (probably terrible) laptop without a motherboard. Not very compelling IMO.
Surely long term it'd be cost? A screen and a keyboard in a laptop shell should be a lot cheaper than a screen, a keyboard, RAM, SSD, fans etc. in a full laptop.
Those other parts of the laptop are cheap though. Sure, not free, but Chromebooks can be had new for just a few hundred dollars (and they don't need a fan either). If you want a fast laptop you need to spend a lot of money, but a fast laptop can have better RAM, SSDs and such than your phone because there is more space in that form factor - so if you want fast you are back to a laptop, and if you don't need fast your laptop is cheap.
> This, done well, is a transformational thing; it's just that no one has been willing to invest yet
I think we've seen this before. Back before phones were "smart" there was one (Nokia, maybe?) that you could put on a little dock into which you could plug a keyboard and monitor.
Obviously, it didn't take off. Perhaps it was ahead of its time. Or, as you say, it wasn't done well at the time.
Phones accepting Bluetooth keyboard connections was very common back in my road warrior (digital nomad) days, but the screen was always the annoyance factor. Writing e-mails on my SonyEricsson on a boat on the South China Sea felt like "the future!"
Slightly related, I built most of my first startup with a Palm Pilot Ⅲ and an attached keyboard. Again, though, a larger screen would have been a game changer.
AIUI, the main problem in the cell phone era is that by the time you create a notebook shell with an even halfway-decent screen, keyboard, battery, and the other things you'd want in your shell, it's hard to sell it next to the thing right next to it that is all that, but they also stuck a cheap computer in it (and is therefore no longer a dock). Yeah, it's $50 more expensive, but it looks way more than $50 more useful.
What may shift the balance is that slowly but surely USB-C docks are becoming more common, on their own terms, not related to cell phones. At some point we may pass a critical threshold where there's enough of them that selling a phone that can just natively use any USB-C dock you've got lying around becomes a sufficient distinguishing feature that people start looking for it. Even just treating it as a bonus would be a start.
I've got two docks in my house now; one a big powerful one to run the work-provided laptop in a more remote-work-friendly form factor, and fairly cheap one to turn my Steam deck into a halfway-decent Switch competitor (though "halfway-decent" and no more; it's definitely more finicky). We really ought to be getting to the point that a cell phone with a docked monitor, keyboard, & mouse for dorm room usage (replacing the desktop, TV, and if whoever pulls this off plays their cards right, the gaming console(s)) should start looking appealing to college students pretty soon here. The docks themselves are rapidly commoditizing if they aren't there already.
Once it becomes a feature that we increasingly start to just expect on our phones, then maybe the "notebook-like" case for a cell phone starts to look more appealing as an accessory. We've pretty much demonstrated it can't carry itself as its own product.
That would probably start the clock on the "notebook" as its own distinct product, though it would take years for them to finally be turned into nothing but shells for cell phones + a high-end, expensive performance-focused line that is itself more-or-less the replacement for desktops, which would themselves only be necessary for high-end graphics or places where you need tons and tons of storage and you don't want 10 USB-C drives flopping around separately.
BTW you don't even need a dock if you have a USB-C monitor with USB and audio ports, which is not that uncommon. The monitor acts like a USB hub, so if you plug in your keyboard and mouse that's your computer essentially
> I think we've seen this before. Back before phones were "smart" there was one (Nokia, maybe?) that you could put on a little dock into which you could plug a keyboard and monitor.
Still in the "smart" era, the Motorola Atrix allowed that, though with its own laptop-form-factor dock.
I had one of these Atrix and laptop docks. It was really good, but sadly way ahead of its time. The desktop was a Debian-based Linux desktop and you could install various ARM packages. Unfortunately, the phone just wasn't powerful enough at the time. The touchpad was also not brilliant compared to Macs (probably better than Windows touchpads of the time). I sold it on ebay to a guy who plugged his Raspberry Pi into it, since the Atrix dock used mini HDMI and microUSB connectors. This has obviously been replaced in the modern age with USB-C.
I am pretty sure that modern phones are more than powerful enough! My wife's iPhone 16 Pro Max would be amazingly useful if not limited by iOS (which always feels like it's hiding true capabilities behind an Etch-A-Sketch interface to me). If you could plug the iPhone in and run a macOS desktop (which hasn't really changed for 15+ years), that'd be great. Thanks in advance.
I have a POCO F7 Ultra, which is powerful enough to run LLMs via PocketPal and could easily replace my daily laptop or PC for work if it wasn't scuppered by USB 2 speeds on its USB-C port. If I could easily run ollama on the phone behind a web interface I would, because I think it's faster than my main PC for LLMs!
On Android you can go into Developer settings and force-enable desktop mode, but sadly I can't use it without proper display output support over the USB-C port.
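On the ollama point: it already exposes a plain HTTP API on port 11434, so the "web interface" can be as little as another device on the LAN posting to the phone. A rough sketch, where the address and model name are placeholders for whatever your setup uses:

    import requests

    PHONE = "http://192.168.1.50:11434"    # hypothetical LAN address of the phone running ollama

    r = requests.post(f"{PHONE}/api/generate", json={
        "model": "llama3",                 # whichever model you've pulled on the phone
        "prompt": "Summarise this in one sentence: ...",
        "stream": False,                   # one JSON reply instead of a stream
    })
    print(r.json()["response"])

The missing display output wouldn't matter for this, since the UI lives on the other device.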
> I think we've seen this before. Back before phones were "smart" there was one (Nokia, maybe?) that you could put on a little dock into which you could plug a keyboard and monitor.
There have been multiple attempts at this over the years.
I think power was a real problem: a 2010 phone wasn't anywhere near a laptop in performance.
An M4 Mac is way more powerful than an iPhone 16, but the iPhone is powerful enough to provide a much better experience on normal tasks compared to what that 2010 phone could at the time.
Basically I think everything has enough headroom that it's not the compromise it would've been before. The biggest constraints on an iPhone's performance are the battery and cooling. If you're plugged in, the battery doesn't matter. And unless you're playing a fancy game, cooling may not be an issue due to headroom.
Agreed. For this reason I'm quite excited about glasses like the Xreal One Pro. Having to carry around with me just my phone, a pair of glasses and a lightweight Bluetooth keyboard would be a game changer for me in terms of ergonomics.
Do you have this yet? I wonder how well it works in practice. I know some people using it with DeX but they're pretty expensive (around $400 I think) so I didn't try it myself.
I remember there was a fad I think in 2009 or 2010 where a bunch of Android manufacturers released 'laptops' (just a display and keyboard) with a dock connector in the back that was meant to turn the phone into a laptop basically
Well, they are a generation ahead of desktop UIs in many respects, so.
E.g. Android/iOS have better security than Windows/GNU Linux/macOS, much more reliable suspend/wake functionality, much better battery management, etc.
Like, it's a 50/50 chance whether my laptop with Win 11 will wake up fully charged or fully discharged in the morning, and whether it will be kind enough to actually be ready for work, or whether I can go brew a coffee before it's ready.
Since Windows has started this iteration of their move to ARM, I wondered if Microsoft would be the first to do this properly, by building an adaptable/mobile Desktop/UX to Windows 12 (or 13), pumping up the Microsoft Store, and then relaunching the Windows (Surface, I guess) Phone with full fat Windows on it.
In a way it's the same strategy that Nintendo used to re-gain a strong position in gaming (including the lucrative Home Console market where they'd fallen to a distant last place) - drafting their dominance in Handheld into Home Console by merging the two.
I strongly agree, and have felt this way for a long time. We are being sold many processors, each placed into their own device. The reality is our phone processor could be used to run our TVs, streaming devices, monitors, VR glasses, consoles, laptops, etc. That's less profitable, however.
With cables, yes. And LG did that for a while in fact, they had a VR headset that would plug into the phone: https://www.cnet.com/reviews/lg-360-vr-review/ It wasn't a success but this was more software-related and also some hardware-skimping. It was a good idea, it just seems like the devs forgot to actually try using it before declaring it a finished product.
But wireless, the lag is so bad that it's not really usable - like wireless DeX. Definitely not good enough for processor-less VR glasses (even the wireless VR streaming from Meta requires significant processing power on the glasses end).
in a sense apple is already doing this, since there's shared chip tech in the laptops and phones.
I still will prefer the form factor of a laptop for anything serious though; screen, speakers, keyboard.
Yes you can get peripherals for a phone, yes I have tried that, no they're not good. Though perhaps with foldable screens this could change in the future.
Apple is intentionally hampering the desktop experience on the iPad and is very late in bringing Stage Manager to the iPhone (the rumor is now iOS 19). Until there is serious competition (this and/or improvements to DeX), Apple will drag their feet, because they want to sell you three compute device categories (or four if you count the Vision Pro).
Also, Stage Manager is not a good way of doing real work. It's with good reason that people abhor it on the Mac. On an iPad with no better alternative it's workable, but not great.
So true! I have experimented with plugging an iPad Pro into an Apple 7K Studio Monitor with keyboard and an Apple Trackpad and Stage Manager: close to being generally useful, but I also get the idea that Apple is purposely holding back to prevent reducing Mac sales.
That is why I am rooting for Samsung DeX and what Google is offering: Samsung and Google can make money for their own reasons making a universal personal digital device.
They have the hardware. They don’t provide ANY software for this kind of thing though. And there is a very real chance it could cannibalize some Mac sales.
I’ve always wondered if this kind of thing is actually that useful, but it’s not even an option for me because of the above.
Seems surprising Google didn’t act on this earlier. But maybe they didn’t want to cannibalize the Chromebooks?
I get the feeling very very few people know this exists at all on some Samsung phones. I’ve asked some tech-y people with Samsungs about it before and they didn’t even know it existed.
True! Apple’s already ahead with the shared chip setup between Macs and iPhones. But yeah, for real work, nothing beats a proper laptop — big screen, keyboard, good speakers. I’ve tried using a phone with accessories too… not the same vibe. Maybe foldables will change that someday!
They're paying to acquire places where you can sell tokens at a markup, because the future is multiple base models that are good enough for most user tasks, where user gateways play the base model providers off each other and capture a lot of the value.