I could see a “portable” all-in-one being successful. Something closer to the first Mac, with a handle in the back and a built-in screen, in the 10-14” range. Back in the day, my mom had an internet appliance (light browsing and email only), which was close to what we’re talking about. She loved it.
You’re of course looking at all laptop parts, but the form factor would be part of the appeal. Then again, an iPad dock would also probably cover this form factor for about the same cost.
This is a cool demo, but it tells me that CSS might be too complex now. Why should you be able to emulate a CPU with a styling language? I’m not sure what you get by using a Turing complete language for visual styling.
I don't know much about CSS, but Turing completeness is notorious for showing up in systems unintentionally.
It doesn't take much to be Turing-complete: if a system provides unbounded read/write memory plus branching or conditional recursion, you're usually there.
As an example, Magic The Gathering (the card game) is Turing-complete: https://arxiv.org/abs/1904.09828 . You can use creature tokens as memory and various game mechanics to do flow control. Was this intentional by the designers? Most likely not...
* MOV on x86: using memory-mapped lookup tables, you can simulate logic gates and branching using only the MOV instruction.
* PowerPoint (Without Macros): using On-Click Animations and Hyperlinks, shapes on slides act as the tape and clicking them triggers animations that move the head or change the state of the slide.
* find and mkdir (Linux commands): find has an -execdir flag that executes commands in the directories it finds. By using mkdir to create specific folder structures, you can create a feedback loop that functions as a tag system (a model of universal computation; see the sketch after this list).
* Soldier Crabs: Researchers showed that swarms of Mictyris guinotae can be funneled through gates to implement Boolean logic. While a full computer hasn't been built with them, the logic gates (AND, OR, NOT) are the building blocks for one.
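To make the "it doesn't take much" point concrete, here's a toy tag-system interpreter, as a minimal Python sketch (the rule set is De Mol's Collatz tag system; the function name and step cap are just illustrative):

    # Minimal 2-tag system: a queue is the memory, a lookup table is the
    # control flow. Each step reads the head symbol, appends that symbol's
    # production to the tail, then deletes the first two symbols.
    def run_tag_system(word, rules, deletion=2, max_steps=100):
        word = list(word)
        for _ in range(max_steps):
            if len(word) < deletion or word[0] not in rules:
                break  # halt
            word.extend(rules[word[0]])  # append production for head symbol
            del word[:deletion]          # consume the front of the queue
        return "".join(word)

    # De Mol's system {a -> bc, b -> a, c -> aaa}: starting from "a" * n,
    # the run walks the Collatz sequence of n and halts at "a" when it hits 1.
    print(run_tag_system("aaa", {"a": "bc", "b": "a", "c": "aaa"}))  # -> a

That queue-plus-lookup-table shape is exactly what keeps showing up by accident: anything that can append to unbounded storage and branch on a symbol is already most of the way there.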
Even water is Turing Complete:
* Fluidic Logic Gates: the Coandă effect is the tendency of a fluid jet to stay attached to a convex surface. By using tiny air or water jets to push a main stream from one channel to another, you can create the fluid equivalent of a transistor.
* MONIAC (Monetary National Income Analogue Computer): Bill Phillips's 1949 hydraulic computer that modeled the UK economy with water flowing through tanks and valves.
* The Navier-Stokes equations, which describe how fluids move, are Turing-complete.
* In 2015, Stanford researchers developed a computer that operates using the physics of moving water droplets. Tiny iron-infused water droplets are moved by magnetic fields through a maze of tracks. The presence or absence of a droplet represents a 1 or a 0. By colliding with or diverting each other, the droplets interact and perform calculations.
Perhaps so, but ISTM that it encapsulates the same basic point. Try to make something rich and general and you often end up re-implementing a whole computer inside your computer.
Which is why these days it's easier in many cases to just embed an Arm core and implement your controller's functionality in software.
The devil is in the details. The crux of the article is in these two lines:
> Across the U.S., the average annual cost of care for an infant and a 4-year-old is $28,190, according to Child Care Aware of America. The U.S. Department of Health and Human Services (HHS) considers child care affordable when it accounts for no more than 7% of a household’s income.
It’s been a while, but the $28k number seems reasonable. It’s more expensive in different areas, and the article goes into the numbers by state. But the part where it gets difficult to see is the 7% number. You only require $400k/year if you cap child care costs at 7% ($28,190 / 0.07 ≈ $403k). When my kids were in daycare, it cost significantly more than 7% of our income.
With all due respect, why in the world does $28k sound like a reasonable number for care of a single child? That's more than many people make at a full-time job.
An enterprising person, great with children, would see an opportunity to make 72k a year to watch three kids. And another might offer to do it for 60k, and so on.
Regulations prevent that, and kill the free market. Now everyone pays a crapton to some facility where the owner jumped through all the hoops to get certified, while hiring minimum wage people to care for the kids. Not that some aren't great, but is it really a better system?
Reality shows that the outcomes are centers with a television on 24/7, or where kids are given drugs to sleep. That's not the exception but the rule. That's why inspections and licensing came in.
The long-term costs of that -- crime, mental health, etc. -- explain why subsidies make sense. Every rich country has universal public education for a good reason.
Market forces, as you point out, will drive your enterprising person making 72k out-of-business very quickly, and the market becomes a cesspool.
I didn’t mean that it was a reasonable cost, only a reasonable estimate of the cost.
Childcare in the US is way more expensive than it should be. The costs are also highly location dependent with the coasts being much more expensive than the Midwest etc…
Yes, you're right. I'd read the quote out of context and for whatever reason read it as "for either of these ages", but the preceding sentence clarifies it as two individual kids. Still absurd to me, but perhaps less so.
And taking care of an infant is extra hard. The laws where I live mandate twice as many workers per infant compared to 3-4 year olds, and three times as many compared to school age kids.
In every system, from hunter-gatherer society through feudalism, socialism, and capitalism, you need to exchange your work for the products of other people's work. No system will give you the ability to not work and still get what you want.
Capitalism is the least bad one, because there's a correlation between making something that people want and the value you get to keep to feed yourself.
The conditions for retirees relative to those still working in the USSR were fairly decent (by the end). But it's not exactly a fair comparison, since the demographic situation there was much better than it is now in most developed countries.
And all the countries in Europe these days (which, coincidentally, are all capitalist) generally have retirement systems similar to the US.
Where will these resources come from if the capitalist system is the most productive of them all? You can't redistribute something out of nothing. You need a healthy tax base to redistribute.
Childcare is just too regulated, so people don't want to do it, and those who do ask for a premium.
Retirement in a socialist country? Or in China before capitalism came there? Or in a feudal society where your children were your insurance in old age?
Why don't you ask them "where will these resources come from"? This isn't a hypothetical we're discussing; most of Europe has protected retirement.
The original commenter was saying something more like "we will work until we die, but that's okay, because capitalism is great! And it's not like you would get to retire under any other system anyway"
Not all capitalism is equal though. The overlap between socialism and capitalism is state capitalism, and it turns out if you want affordable childcare, healthcare, utilities, public transport, etc. then state capitalism is the way to go.
Read Milton Friedman on why this won't work. If you have seen any government-run organization, you'll see why.
Taxes can be levied on productive businesses that are in ruthless competition in the free and international market. No productive businesses = nothing to redistribute. No international business = no imports. Everybody is poor and hungry.
There is a difference between productive business and basic infrastructure. Just look at China, they have state capitalism with free market economy according to the 60/70/80/90 rule. The state capitalism covers most basic needs like utilities, healthcare, and public transport extremely efficiently.
The free market economy is ruthlessly efficient in the national and international market due to involution and strategic loans from state-owned banks.
There's a difference between the capitalism our parents had and the one we have. They had taxes on the rich, so they could afford homes and retire, and never needed 3 jobs to scrape by. In fact, one job was enough to afford a home and a family.
This is just painfully and obviously not the reason why childcare is so expensive now. Labor costs are higher these days (Baumol's cost disease), regulations have become more strict because we are more protective of our children, and multigenerational living has declined.
Taxing the rich is great but it's not gonna fix any of those.
Taxing the rich means that there are fewer people with absurd money, which means businesses won't have many customers at such high prices. It's like McDonald's charging $5 for a quarter of a potato: they're hoping that the sales lost from poorer customers at such a high price are made up for by a smaller number of high-priced purchases from the rich.
If even the rich couldn't, or didn't want to, afford $5 quarter potatoes, then they'd have to lower the price.
Taxing the rich is not going to solve any of this. It may help some, but it's nothing like the panacea certain people pretend it is.
If we took all the assets from all the billionaires in the US, that total is something like $6-7 trillion if we pretend there's no asset price decrease in the selling of said assets.
Sounds like a lot, but we're nearly $40 trillion in debt. Taxing the rich heavily won't solve a spending problem.
The federal government specifically, and the admin class in general was a lot smaller during our parents' era.
If you were already set up as a non-profit entity with 501(c)(3) US tax status (or similar in other locales), this would be straightforward. Or even if you were a for-profit company taking part with an LLC or other corporate structure. In those cases, you probably already have an accountant or tax advisor to help handle this stuff. For smaller, individual-level contributors, I can see how the extra paperwork and overhead could create enough of a hassle to make it not worthwhile. Which is sad.
It looks like the author here is from Bulgaria, so who knows what other hassles they would have on their side.
I don’t know how you have an open source code base that allows new devs to submit pull requests while still blocking bulk PR submissions. It seems like an impossible needle to thread. It doesn’t even sound like it’s necessarily automated agents making the PRs, but devs who are manually making PRs based on AI code that they don’t understand.
(Sadly) Do we need a captcha on PR submissions, and a confirmation that the submitter understands the code in the PR?
You want to allow new devs to submit code. And I’m not against AI code at all, but the submitter should understand every change they are proposing. I’m just not sure how you stop this without putting roadblocks in the way of new contributors.
If you believe the submitter should understand every change they're proposing, then you should be against AI code. The entire premise behind AI coding is to not have to understand, or even care, about the code you generate.
It would take less time for a competent developer to write any non-trivial PR themselves than it would for a vibe coder to have an AI generate it and then learn it well enough (and test it well enough, assuming they aren't also testing with AI, which they probably are) to be able to submit it.
AI code is the reason Godot and every other open source project is going to have to stop public PRs and only allow them from a team of vetted submitters. FOSS is going to have to go proprietary just to survive.
Is this the start of more frequent code migrations out of GitHub?
For years, the best argument for centralizing on Github was that this was where the developers were. This is where you can have pull requests managed quickly and easily between developers and teams that otherwise weren't related. Getting random PRs from the community had very little friction. Most of the other features were `git` specific (branches, merges, post-commit hooks, etc), but pull requests, code review, and CI actions were very much Github specific.
However, with more Copilot, et al getting pushed through Github (and now-reverted Action pricing changes), having so much code in one place might not be enough of a benefit anymore. There is nothing about Git repositories that inherently requires Github, so it will be interesting to see how Gentoo fares.
I don't know if it's a one-off or not. Gentoo has always been happy to do their own thing, so it might just be them, but it's a trend I'm hearing talked about more frequently.
I'm watching this pretty closely. I've been mirroring my GitHub repos to my own Forgejo instance for a few weeks, but am waiting for more federation before I reverse the mirrors.
Note that Forgejo's API has a bug right now and you need to manually re-configure the mirror credentials for the mirrors to continue to receive updates.
Once the protocols are in place, one hopes that other forges could participate as well, though the history of the internet is littered with instances where federation APIs just became spam firehoses (see especially pingback/trackback on blog platforms).
GitLab has also indicated that, as a company, they're not interested in developing this themselves, especially given all the other demands they get from their customer base. The epic you refer to had been closed for this reason, but was later reopened for the sake of the community. For there to be federation support in self-hosted GitLab instances, a further community effort is needed, and right now AFAIK no one is actively working on any ActivityPub-related user stories.
I use GitHub because that's where PRs go, but I've never liked their PR model. I much prefer the Phabricator/Gerrit ability to consider each commit independently (that is, have a personal branch 5 commits ahead of HEAD, and be able to send PRs for each without having them squashed).
I wonder if federation will also bring more diversity into the actual process. Maybe there will be hosts that let you use that Phabricator model.
I also wonder how this all gets paid for. Does it take pockets as deep as Microsoft's to keep npm/GitHub afloat? Will there be a free, open-source commons on other forges?
Unless I misunderstood your workflow, the Forgejo AGit approach mentioned in the OP might already cover that.
You can push any ref, not necessarily HEAD. So as long as you send commits in order from a rebase on main, it should be OK. Unless I got something wrong from the docs?
Personally, I'd like to go the other way: not just that PRs are the unit of contribution, but that rebased PRs are a first-class concept and versioning of the changes between entire PRs is a critical thing to track.
That's effectively what I do. I have my dev branch, and then I make separate branches for each PR with just the commit in it. Works well enough so long as the commits are independent, but it's still a pain in the ass to manage.
That’s the trick in your system — all commits have to be completely independent. Generally mine aren’t, so unless we want to review each minor commit, they get squashed.
I can see merit in your suggestion, but it does require some discipline in practice. I’m not sure I could do it.
The way Gerrit handles this is to make a series of PR-like things that are each dependent on the previous one. The concept of "PR that depends on another PR" is a really useful one, and I wish forges supported it better.
I just want a forge to be able to let me push up commits without making a fork. Do the smart thing for me, I don't need a fork of a project to send in my patch!
Right. GitHub started as and still is that "social coding platform" from 2008 inspired by the then-novel growth hacking of that era understood and demonstrated by Facebook—where it wasn't enough to host, say, your user's LiveJournal blog, and their friends might sign up if they wanted, and that was that. No, rather, you host your users' content but put it inside a closed system where you've erected artificial barriers that make it impossible to do certain things unless those friends are actively using the platform, too.
GitHub could have been project hosting and patch submission. It's the natural model for both the style of contributions that you see most on GitHub today and how it's used by Linux. (Pull requests are meant for a small circle of trusted collaborators that you're regularly pulling from and have already paid the one-time cost to set up in your remotes—someone is meant to literally invoke git-pull to get a whole slew of changes that have already been vetted by someone within the circle of frequent collaborators—since it is, after all, already in their tree—and anyone else submits patches.) Allowing simple patch submission poses a problem, though, in that even if Alice chooses to host projects on GitHub, then Bob might decide Gitorious is better and host stuff there even while remaining friendly and sending patches to Alice's GitHub-hosted project. By going with a different, proprietary pull request system and forcing a clunky forking workflow on Alice and Bob, on the other hand, you can enforce where the source of the changes are coming from (i.e. another repo hosted on GitHub). And that's what they did.
I’m speculating here, but I think this is at least a plausible explanation. There is no guarantee that the pull request will be accepted. And the new commit has to live somewhere. When you require a fork, the commit is stored in the submitter’s version. If you don’t require the fork, the commit is stored somewhere in the main project repository. Personally, this is something I’d try to avoid.
I don’t know how the AGit-flow stores the commit, but I assume it would have to be in the main repo, and I’m happy for that not to be used for random PRs.
Requiring forks makes it more convoluted for simple quick pushes, but I can see why it would be done this way.
I suspect the real answer is that’s the way Linux is developed. Traditionally, the main developers all kept their own separate branches that would be used to track changes. When it was time for a new release, the appropriate commits would then be merged into the main repository. For large scale changes, having separate forks makes sense — there is a lot to track. But it does make the simple use-case more difficult. Git was designed to make the complex use-cases possible, sometimes at the expense of usability for simpler use cases.
I would love for the git-bug project[1] to be successful in achieving that. That way, Git forges are just nice web porcelain on top of very easy-to-migrate data.
No. Git is not a web-based GUI capable of managing users and permissions, facilitating the creation and management of repositories, handling pull requests, handling comments and communication, doing CI, or a variety of other tasks that sites like Codeberg and Forgejo and GitLab and GitHub do. If you don't want those things, that's fine, but that isn't an argument that git subsumes them.
People were doing that by using additional tools on top of git, not via git alone. I intentionally only listed things that git doesn't do.
There's not much point in observing "but you could have done those things with email!". We could have done them with tarballs before git existed, too, if we built sufficient additional tooling atop them. That doesn't mean we have the functionality of current forges in a federated model, yet.
That doesn't cover tracking pull requests, discussing them, closing them, making suggestions on them...
Those exist (badly and not integrated) as part of additional tools such as email, or as tasks done manually, or as part of forge software.
I don't think there's much point in splitting this hair further. I stand by the original statement that I'd love to see federated pull requests between forges, with all the capabilities people expect of a modern forge.
I think people (especially those who joined the internet after the .com bubble) underestimate the level of decentralization and federation that came with the old-school protocols such as email, Usenet, and maybe even IRC, before the web-centric, mainframe-like thin-client mentality took over.
Give me “email” PR process anytime. Can review on a flight. Offline. Distraction free. On my federated email server and have it work with your federated email server.
And the clients were pretty decent at running locally. And it still works great for established projects like the Linux kernel etc.
It’s just a pain to set up for a new project, compared to pushing to some forge. But not impossible. Bring back the intentionality of email, with powerful clients doing threading, sorting, syncing etc., locally.
I'm older than the web. I worked on projects using CVS, SVN, mercurial, git-and-email, git-with-shared-repository, and git-with-forges. I'll take forges every time, and it isn't even close. It's not a matter of not having done it the old way, it's a matter of not wanting to do it again.
I guess we might have opposite experiences. Part of which I understand - society moved on, the modern ways are more mature and developed… but I wonder how much of that can be backported without handing over to the centralized systems again.
The advantage of old-school was partially that the user agents were, in fact, user agents. Greasemonkey tried to bridge the gap a bit, but the Web does not lend itself to much user-side customization; the protocol is too low-level, too generic, offering a lot of creative space to website creators but making it harder to customize those creations to the user’s wants.
I'm older than the trees, but younger than the mountains! Email all day, all the way. Young people are very fascinated and impressed by how much more I can achieve, faster, with email, compared with their chats, web 3.0 interfaces, and other crap.
Yes, it takes time to learn, but that is true for anything worthwhile.
What I like about git-and-email-patches is the barrier to entry.
I think it's dwm that explicitly advertises a small and elitist userbase as a feature/design goal. I feel like mailing lists as a workflow serve a similar purpose, even if unintentionally.
With the advent of AI slop as pull request I think I'm gravitating to platforms with a higher barrier to entry, not lower.
What is a forge? What is a modern forge? What is a pull request?
There is code or a repository; there is a diff or patch. Everything else you're labeling as a pull request is unknown, not part of the original design, debatable.
GitHub style pull request is not part of the original design.
What aspects and features do you want to keep, and what exactly do you say many others are interested in?
We don't even know what a forge is. Let alone a modern one.
Coincidentally, my most-used project is on Codeberg, & is a filter list (for uBlock Origin & the like) for hiding a lot of Microsoft GitHub’s social features, upsells, Copilot pushes, & so on to try to make it tolerable until more projects migrate away <https://codeberg.org/toastal/github-less-social>.
Arch Linux has used their own GitLab instance for a long time (though with mirrors to GitHub). Debian and Fedora have both run their own git infra for a long time. Not sure about other distros. I was surprised Gentoo used GitHub at all.
Pretty sure several of these distros started doing this with cvs or svn way back before git became popular even.
The amount of inference required for semantic grouping is small enough to run locally. It can even be zero if semantic tagging is done manually by authors, reviewers, or even just readers.
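For what it's worth, here's roughly what I have in mind, as a sketch. It assumes the sentence-transformers package with its small all-MiniLM-L6-v2 model (around 80 MB, runs fine on a laptop CPU); the example titles and the distance threshold are made up:

    from sentence_transformers import SentenceTransformer
    from sklearn.cluster import AgglomerativeClustering

    titles = [
        "Fix null pointer crash in parser",
        "Parser segfaults on empty input",
        "Add dark mode to settings page",
        "Settings: support dark theme",
    ]

    # Embed locally, then group anything within a cosine-distance threshold.
    model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = model.encode(titles, normalize_embeddings=True)

    labels = AgglomerativeClustering(
        n_clusters=None, distance_threshold=0.5,
        metric="cosine", linkage="average").fit_predict(embeddings)
    for label, title in zip(labels, titles):
        print(label, title)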
Where did "AI for inference" and "semantic tagging" come from in this discussion? Typically for code repositories - AIs/LLMs are doing reviews/tests/etc, not sure what/where semantic tagging fits? Even do be done manually by humans.
And besides that, have you tried/tested whether "the amount of inference required for semantic grouping is small enough to run locally"?
While you can definitely run local inference on GPUs (even ~6-year-old GPUs, and it would not be slow), using normal CPUs it's pretty annoyingly slow (and takes up 100% of all CPU cores). Supposedly unified memory (Strix Halo and such) makes it faster than an ordinary CPU, but it's still (much) slower than a GPU.
I don't have a Strix Halo or that type of unified-memory Mac to test that specifically, so that part is an inference I got from an LLM and from what the Internet/benchmarks are saying.
The stuff he says in [1] completely does not match my usage. I absolutely do use fork and star. I use release. I use the homepage link, and read the short description.
I'm also quite used to the GitHub layout and so have a very easy time using Codeberg and such.
I am definitely willing to believe that there are better ways to do this stuff, but it'll be hard to attract detractors if it causes friction, and unfamiliarity causes friction.
I really don't get this... like you're a code checkout away from just asking claude locally. I get that it is a bit more extra friction but "you should have an agent prompt on your forge's page" is a _huge_ costly ask!
I say this as someone who does browse the web view for repos a lot, so I get the niceness of browsing online... but even then sometimes I'm just checking out a repo cuz ripgrep locally works better.
I also check for the License of a project when I'm looking at a project for the first time. I usually only look at that information once, but it should be easily viewed.
I also look for releases if it's a program I want to install... much easier to download a processed artifact than pull the project and build it myself.
But, I think I'm coming around to the idea that we might need to rethink what the point of the repository is for outside users. There's a big difference in the needs of internal and external users, and perhaps it's time for some new ideas.
(I mean, it's been 18 years since Github was founded, we're due for a shakeup)
Hrm. Mitchell has been very level-headed about AI tools, but this seems like a rare overstep into hype territory.
"This new thing that hasn't been shipped, tested, proven, in a public capacity on real projects should be the default experience going forwards" is a bit much.
I for one wouldn't prefer a pre-chewed machine analysis. That sounds like an interesting feature to explore, but why does it need to be forced into the spotlight?
Aren't they literally moving off GitHub _because_ of LLMs and the enshittification optimising for them causes? This line of thinking and these features seem to push people _off_ your platform, not onto it.
I hope so. When Microsoft embraced GitHub there was a sizeable migration away from it. A lot of it went to Gitlab which, if I recall correctly, tanked due to the volume.
But it didn't stick. And it always irked me, having Microsoft in control of the "default" Git service, given their history of hostility towards Free software.
At the time I (and many others) had a much more positive view of Microsoft. In 2018 Nadella was bringing a lot of positive change to Microsoft. The release of VSCode and WSL among the more visible trends that signaled a new direction. A world in which Microsoft wasn't the preferred owner of Github, but could at least be a good steward and an open-source friendly company.
Now in 2026 things look different. While the fears that Microsoft would revert to 90s Embrace, Extend, Extinguish mostly haven't come to pass, their products are instead all plagued by declining quality and stability, and a product direction that seems to willfully ignore most of the user base.
Find a project, find out if it's the original or a fork, and either way, find all the other possibly more relevant forks. Maybe the original is actually derelict but 2 others are current. Or just forks with significant different features, etc. Find all the oddball individual small fixes or hacks, so even if you don't want to use someone's fork you may still like to pluck the one change they made to theirs.
I was going to also mention search, but that can probably be had about the same in regular Google, at least for searching project names and docs to find the simple existence of projects. But maybe code search is still only within GitHub.
I hope so. Ever since Trump and the US corporations declared software-war against Europeans, I want to reduce all dependencies on US corporations as much as possible. Ideally to zero. Also hardware-wise. This will take a long time, but Canadians understood the problem domain here. European politicians still need to understand that Trump and his cronies changed things permanently.
Yes, you are right. I read a lot about European FOSS projects (and my own blog is a member of a planet for German FOSS articles). Migrating away from GitHub has been a topic for a while in that scene now. First just because GitHub is not FOSS, then accelerated because of Microsoft, and now Microsoft mismanaging GitHub with AI bullshit has accelerated it even more. Plus the push for independence from US services; Trump's imperialism is a big factor as well.
So absolutely not the start of the movement, but it seems to be accelerating more and more.
I get the sense that this is true for many enshittified services. See anything Microsoft. The FOSS movement seems to be gaining some traction again.
My guess is it's driven by very poor user experience coupled with worse product.
Technical people who care about privacy/surveillance at least a little bit need only take one look at the current state of tech and the US govt to see how fucking fast dystopia is becoming reality. See the Discord/OpenAI writeup that came out, ads literally everywhere, Flock and Ring cameras wide open and passively performing recon, routers doing the same... it's like Snow Crash out here
Makes perfect sense that those who know would say fuck this, im out. Convenience isn't worth it anymore
I was thinking of something similar — instead of just two passes, couldn’t you also store different quantized values? If you have thousands of documents, you could narrow it down to a handful with a few bit-wise Hamming comparisons before doing the full cosine similarity on just that handful. If you had more than one bitmap stored, you’d have fewer comparisons at each step too.
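Something like this coarse-to-fine search, sketched with plain numpy (the corpus size, dimensions, and shortlist size are made up for illustration):

    import numpy as np

    def binarize(vecs):
        # 1 bit per dimension (just the sign): (n, 384) float32 -> (n, 48) uint8
        return np.packbits(vecs > 0, axis=-1)

    def hamming(query_bits, db_bits):
        # XOR then popcount = Hamming distance per document
        return np.unpackbits(np.bitwise_xor(db_bits, query_bits), axis=1).sum(axis=1)

    def search(query, db, db_bits, shortlist=32):
        # Cheap pass: Hamming distance on the 1-bit codes.
        candidates = np.argsort(hamming(binarize(query[None, :]), db_bits))[:shortlist]
        # Expensive pass: full cosine similarity, but only on the shortlist.
        sims = db[candidates] @ query / (
            np.linalg.norm(db[candidates], axis=1) * np.linalg.norm(query))
        return candidates[np.argmax(sims)]

    rng = np.random.default_rng(0)
    db = rng.standard_normal((10_000, 384)).astype(np.float32)
    query = rng.standard_normal(384).astype(np.float32)
    print(search(query, db, binarize(db)))

The multi-resolution version would just add passes: a 1-bit pass over everything, say a 4-bit pass over the survivors, then full precision on the last handful.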
I came across this a week ago when I was looking at some LLM-generated code for a ToUpper() function. At some point I “knew” this relationship, but I didn’t really “grok” it until I read a function that converted lowercase ASCII to uppercase by using a bitwise XOR with 0x20.
It makes sense, but it didn’t really hit me until recently. Now, I’m wondering what other hidden cleverness is there that used to be common knowledge, but is now lost in the abstractions.
A similar bit-flipping trick was used to swap between numeric row + symbol keys on the keyboard, and the shifted symbols on the same keys.
These bit-flips made it easier to construct the circuits for keyboards that output ASCII.
I believe the layout of the shifted symbols on the numeric row were based on an early IBM Selectric typewriter for the US market. Then IBM went and changed it, and the latter is the origin of the ANSI keyboard layout we have now.
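You can still see that pairing in the ASCII table itself. A quick illustration, assuming the old bit-paired layout (where shift-2 was " and shift-6 was &, unlike modern ANSI):

    # On bit-paired keyboards, Shift just flipped bit 4 of the key's code:
    # '1' (0x31) -> '!' (0x21), '2' (0x32) -> '"' (0x22), and so on.
    for digit in "123456789":
        print(digit, "->", chr(ord(digit) ^ 0x10))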
I left out that on the line before, there was a check to make sure the input byte was between ‘a’ and ‘z’. This ensures that if the char is already uppercase, you don’t flip it back to lowercase. And at that point, XOR, AND with ~0x20, or even subtracting 0x20 would all work. For some reason the LLM thought the XOR was faster.
I honestly wouldn’t have thought anything of it if I hadn’t seen it written as `b ^ 0x20`.
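For anyone who wants to see the whole thing end to end, a small sketch (the guard is the range check described above; the function name is mine):

    # ASCII puts 'A'..'Z' at 0x41..0x5A and 'a'..'z' at 0x61..0x7A,
    # so the two cases differ only in bit 5 (0x20).
    def to_upper(b: int) -> int:
        if ord('a') <= b <= ord('z'):  # guard: only touch lowercase letters
            b ^= 0x20                  # flip bit 5: 'g' (0x67) -> 'G' (0x47)
        return b

    print(chr(to_upper(ord('g'))))  # G
    print(chr(to_upper(ord('G'))))  # G (guard leaves it alone)
    print(chr(to_upper(ord('['))))  # [ (without the guard, XOR would give '{')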
The question isn’t if the demand is real or not (supplies are low, so demand must exist). The question is if the demand curve has permanently shifted, or is this a short-term issue. No one builds new capacity in response to short term changes, because you’ll have difficulty recouping the capital expense.
If AI will permanently cause an increase in hard drive demand above the current growth curve, then WD et al. will build new capacity, increasing supply (and reducing costs). But this really isn’t something that is known at this point.
My post argues that the demand has permanently shifted.
By the way, plenty of people on HN and Reddit ask if the demand is real or not. They all think there's some collusion to keep the AI bubble going by all the companies. They don't believe AI is that useful today.
Usefulness and overvaluation are not mutually exclusive. AI is useful, but it is not a fraction as useful as these companies’ spending rates would have one believe.
If it is, then the world is going to lose pretty much all white collar jobs. That's not really the bright future they're selling either.
> My post argues that the demand has permanently shifted
The time horizon for this is murky at best. This is something you think, but can’t know. But, you’re putting money behind it, so if you’re right, you’ll make a good profit!
But for the larger companies (like WD), overbuilding capacity can be a big problem. They can’t plan factory expansion based on what might be a short-term bubble. That’s how companies go out of business. There is plenty to suggest that you’re right, that AI will cause permanently increased demand for computing/storage resources, because it is useful and does consume and produce a lot of new data and media.
But I’m still skeptical.
The massive increase in spending can’t be sustainable. We can’t continue to feed the AI beast at this rate and still have other devices. Silicon wafer fabs can’t be built on demand and take time. SSD/HD factories take time. I think we are seeing an expansion to see who the big players will be in the next 3-5 years. Once that order has been established, I think we will fall back to more sustainable rates of demand. This isn’t collusion, it’s just market dynamics at play in a common market. Sadly, we are all part of the same pool, and so everything is expensive for all of us. At some point, though, the AI money will dry up or get more expensive. Then I think we’ll see a reversion back to “normal” demand, maybe slightly elevated, but not the crazy jump we’ve seen for the past two years.
Us being in the same pool as AI is one of the potential risks pointed out by AI safety experts.
To use an analogy, imagine you're a small fluffy mammal that lives in fertile soils in open plains. Suddenly a bunch of humans show up with plows and till you and your environment under to grow crops.
Maybe the humans suddenly won't need crops any longer and you'll get your territory back. But if that doesn't happen and a paradigm change occurred you're in trouble.
The most important question is are we in 1994 or 2000 of the bubble for investors and suppliers like Samsung, WD, SK Hynix, TSMC.
What about 10 years from now? 15 years? Will AI provide more value in 2040 than in 2026? The internet ultimately provided far more value than even peak dotcom bubble thought.
> The internet ultimately provided far more value than even peak dotcom bubble thought.
Yeah, but not to the early investors. The early investors lost their shirts. The internet provided a lot of value after the bubble popped and everyone lost money.