> We have committed to open-sourcing Mojo in 2026. Mojo is still young, so we will continue to incubate it within Modular until more of its internal architecture is fleshed out.
> That rules it out of any production deployment until 2026, then.
Has that ever stopped anyone before? Java, C#/.NET, Swift, and probably more started out as closed-source languages/platforms, yet seem to have been deployed to production environments before their eventual open-sourcing.
I don't think Java (when it was owned by Sun) or .NET (even currently) ran the risk of a VC "our incredible journey" event causing "barrel bending," or of the backing company running out of money. In the former case, they'd want their pound of flesh and the "our new compiler pricing is ..." would be no good. In the latter, even if they actually opened the platform on the way out, it would still require finding a steward who could carry the platform forward, which is a :-( place to be if you have critical code running on it.
I guess the summary is that neither Java [at the time] nor .NET was a profit center for its owner, nor its owner's only reason for existing.
They certainly were, because no one other than a few hardliners was writing Java or C# and VB code in bare-bones editors and compiling from the command line, as only the bare-bones SDKs were free beer, and only for the desktop.
IDEs and implementations for embedded and phones were all paid products: the IDEs paid for by developers or their employers, the others by OEMs.
My point is about the early days, JCafe, Visual Age, Visual Studio, Forte, before free-beer IDEs for them became common.
On the Java side that happened with Eclipse/NetBeans, on the .NET side with the Visual Studio Express editions.
Yes, and there is a reason for that: both are deeply integrated into Microsoft's ecosystem, and whether one likes that or not, that ecosystem is the dominant platform for desktop computing, especially in commercial settings.
For example, GIMP works without any issues. And the productivity boost is tremendous; for me it's very hard to work on anything else. I rarely encounter programs where it does more harm than good.
In particular, having multiple desktops with different names allows me to locate windows much more quickly than looking through dozens of terminals manually.
Right now, I do have: 1 mail, 2 web, 3 gimp, 4 chat, 5 notes, 6 terminal, 7 ssh cluster
And fusion reactors cannot end up in a Chernobyl-style disaster.
That's a huge safety plus, and that kind of risk is one of the major reasons many countries are phasing out fission reactors.
In the 40s and 50s programmable general-purpose electronic computers were solving problems.
Ballistics tables, decryption of enemy messages, and more. Early programmable general-purpose electronic computers could, from the moment they were turned on, solve problems in minutes that would take human computers months or years. In the 40s, ENIAC proved the feasibility of thermonuclear weaponry.
By 1957 the promise and peril of computing entered popular culture with the Spencer Tracy and Katharine Hepburn film "Desk Set" where a computer is installed in a library and runs amok, firing everybody, all while romantic shenanigans occur. It was sponsored by IBM and is one of the first instances of product placement in films.
People knew "electronic brains" were the future the second they started spitting out printouts of practically unsolvable problems instantly-- they just didn't (during your timeframe) predict the invention and adoption of the transistor and its miniaturization, which made computers ubiquitous household objects.
Even the quote about the supposed limited market for computers, trotted out from time to time to demonstrate the hesitance of industry and academia to adopt computers, is wrong.
In 1953, when Thomas Watson supposedly said that "there's only a market for five computers," what he actually said was "When we developed the IBM 701 we created a customer list of 20 organizations who might want it, and because it is so expensive we expected to only sign five deals, but we ended up signing 18" (paraphrased).
Militaries, universities, and industry all wanted all of the programmable general-purpose electronic computers they could afford the second they became available, because they all knew they could solve problems.
Included for comparison is a list of problems that quantum computing has solved:
Exactly. There were many doubts about exactly how useful computers would be. But they were already proving their usefulness in some fields.
The current state of quantum computers is so much worse than that. It's not just that they have produced zero useful results. It's that when these quantum computers do produce results on toy problems, the creators are having a very hard time even proving the results actually came from quantum effects.
I don't think that you can really make that comparison. "Conventional" computers had more proven practical usage (especially by nation states) in the 40s/50s than quantum computing does today.
By the 1940s and 50s, computers were already being used for practical and useful work, and calculating machines had a _long_ history of being useful, and it didn't take that long between the _idea_ of a calculating machine and having something that people paid for and used because it had practical value.
They've been plugging along at quantum computers for decades now and have not produced a single useful machine (although a lot of the math and science behind it has been useful for theoretical physics).
Survivor bias. Just because a certain thing seemed like a scam and turned out useful does not mean all things that seem like a scam will turn out useful.
I'm not the OP, but when you're of a certain age, you don't need citations for that. Memory serves. And my family was saying those sorts of things and teasing me about being into computers as late as the 1970's.
I can attest to the fact that people who didn't understand computers at all were questioning the value of spending time on them long after the 1970s. The issue is that there are people today who do understand quantum computing that are questioning their value and that's not a great sign.
In the 1970s both my parents were programming computers professionally.
Computing was already a huge industry. IBM's revenues alone were in the multi-billion-dollar range in the 1970s. And a billion dollars was A LOT of money in the 1970s.
Just to get things right. The big AI LLM hype started end of 2022 with the launch of ChatGPT, DALL-E 2, ....
Most people in society connect AI directly to ChatGPT and hence OpenAI. And there has been a lot of progress in image generation, video generation, ...
So I think your timeline and views are slightly off.
> Just to get things right. The big AI LLM hype started end of 2022 with the launch of ChatGPT, DALL-E 2, ....
GPT-2 was released in 2019, GPT-3 in 2020. I'd say 2020 is significant because that's when people seriously considered the Turing test passed reliably for the first time. But for the sake of this argument, it hardly matters what date years back we choose. There's been enough time since then to see the plateau.
> Most people in society connect AI directly to ChatGPT and hence OpenAI.
I'd double-check that assumption. Many people I've spoken to take a moment to remember that "AI" stands for artificial intelligence. Outside of tongue-in-cheek jokes, OpenAI has about 50% market share in LLMs, but you can't forget that Samsung makes AI washing machines, let alone all the purely fraudulent uses of the "AI" label.
> And there has been a lot of progress in image generation, video generation, ...
These are entirely different architectures from LLM/chat though. But you're right that OpenAI does that, too. When I said that they don't stray much from chat, I was thinking more about AlexNet and the broad applications of ML in general. But you're right, OpenAI also did/does diffusion, GANs, transformer vision.
This doesn't change my views much on chat being "not seeing the forest for the trees" though. In the big picture, I think there aren't many hockey sticks/exponentials left in LLMs to discover. That is not true about other AI/ML.
>In the big picture, I think there aren't many hockey sticks/exponentials left in LLMs to discover. That is not true about other AI/ML.
We do appear to be hitting a cap on the current generation of auto-regressive LLMs, but this isn't a surprise to anyone on the frontier. The leaked conversations between Ilya, Sam and Elon from the early OpenAI days acknowledge they didn't have a clue as to architecture, only that scale was the key to making experiments even possible. No one expected this generation of LLMs to make it nearly this far. There's a general feeling of "quiet before the storm" in the industry, in anticipation of an architecture/training breakthrough, with a focus on more agentic, RL-centric training methods. But it's going to take a while for anyone to prove out an architecture sufficiently, train it at scale to be competitive with SOTA LLMs, and perform enough post-training, validation and red-teaming to be comfortable releasing it to the public.
Current LLMs are years and hundreds of millions of dollars of training in. That's a very high bar for a new architecture, even if it significantly improves on LLMs.
ChatGPT was not released to the general public until November 2022, and the mobile apps were not released until May 2023. For most of the world, LLMs did not exist before those dates.
This site and many others were littered with OpenAI stories calling it the next Bell Labs or Xerox PARC and other such nonsense going back to 2016.
And GPT stories kicked into high gear all over the web and TV in 2019 in the lead-up to GPT-2 when OpenAI was telling the world it was too dangerous to release.
Certainly by 2021 and early 2022, LLM AI was being reported on all over the place.
>For most of the world, LLMs did not exist before those dates.
Just because people don't use something doesn't mean they don't know about it. Plenty of people were hearing about the existential threat of (LLM) AI long before ChatGPT. Fox News and CNN had stories on GPT-2 years before ChatGPT was even a thing. Exposure doesn't get much more mainstream than that.
As another proxy, compare Nvidia revenues - $26.91bln in 2022, $26.97bln in 2023, $60bln 2024, $130bln 2025. I think it's clear the hype didn't start until 2023.
You're welcome to point out articles and stories before this time period "hyping" LLMs, but what I remember is that before ChatGPT there was very little conversation around LLMs.
If you're in this space and follow it closely, it can be difficult to notice the scale. It just feels like the hype was always big. 15 years ago it was all big data and sentiment analysis and NLP, machine translation buzz. In 2016 Google Translate switched to neural nets (LSTM) which was relatively big news. The king+woman-man=queen stuff with word2vec. Transformer in 2017. BERT and ELMo. GPT2 was a meme in techie culture, there was even a joke subreddit where GPT2 models were posting comments. GPT3 was also big news in the techie circles. But it was only after ChatGPT that the average person on the street would know about it.
Image generation was also a continuous slope of hype all the way from the original GAN, then thispersondoesnotexist, the sketch-to-photo toys by Nvidia and others, the avocado sofa of DallE. Then DallE2, etc.
The hype can continue to grow beyond our limit of perception. For people who follow such news their hype sensor can be maxed out earlier, and they don't see how ridiculously broadly it has spread in society now, because they didn't notice how niche it was before, even though it seemed to be "everywhere".
There's a canyon of a difference between excitement and buzz vs. hype. There was buzz in 2022, there was hype in 2023. No one was spending billions in this space until a public demarcation point that, not coincidentally, happened right after ChatGPT.
I had almost considered getting a paid AI service anyway; with Kagi I get the freedom to choose between different models, plus a nice interface for search, translation, ...
With Kagi the AI service also does not know who I am.
I'm quite happy so far, and the Android app works fine. 95% of the time I don't open a browser but instead use the app to answer my questions.
The privacy feature somehow hasn't worked in my Firefox browser yet.
I think it actually depends on what you define as a "pixel". Sure, a pixel on your screen emits light from a tiny square into space.
And sure, a sensor pixel measures the intensity over a tiny square.
But let's say I calculate something like:
# samples from 0, 0.1, ..., 1
x = range(0, 1, 11)
# evaluate the sin function at each point
y = sin.(x)
Then each pixel (or entry in the array) is not a tiny square; it represents the value of sin at that specific location. A real pixelated detector would instead have integrated sin over each pixel, `y[u] = int_{u}^{u + 0.1} sin(x) dx`, which is entirely different from the point-wise evaluation above.
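To make the difference concrete, here is a minimal sketch in the same Julia-ish style as above (the names `edges`, `integrated`, and `midpoint_approx` are mine, and it uses the closed-form antiderivative of sin, so this is only an illustration, not part of the original example):

dx = 0.1
# pixel edges 0.0, 0.1, ..., 1.0
edges = 0:dx:1.0
# point-wise samples: the "grid of values" view
point_samples = sin.(edges)
# what an integrating detector pixel records over [u, u + dx],
# using int_u^{u+dx} sin(x) dx = cos(u) - cos(u + dx)
integrated = [cos(u) - cos(u + dx) for u in edges[1:end-1]]
# the integrated value is close to dx * sin at the pixel *midpoint*, not at the left edge
midpoint_approx = dx .* sin.(edges[1:end-1] .+ dx/2)
println(point_samples[1:3])   # ≈ 0.0, 0.0998, 0.1987
println(integrated[1:3])      # ≈ 0.0050, 0.0149, 0.0247
println(midpoint_approx[1:3]) # ≈ 0.0050, 0.0149, 0.0247

The integrated values track dx * sin at the pixel midpoints rather than sin at the edges, which is exactly the point-sample-versus-little-square distinction.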
So for me that's the main difference to understand.
> Will Mojo be open-sourced?
> We have committed to open-sourcing Mojo in 2026. Mojo is still young, so we will continue to incubate it within Modular until more of its internal architecture is fleshed out.
Soon it might be though.