They paid about $50k for ch.at. I have a single letter country code domain (3 characters total, x.xx). There are still some single letter country code domains available to register, you could get one for under $1k USD if you want one.
There are still quite a few XX.XX left, but mostly just under obscure ccTLDs (unless you are willing to consider IDN/Unicode domains under .ws or similar).
I hold a good two-letter .chat domain, hi.chat, and pay $250 a year to renew. I get enquiries all the time but have no idea how to price it, so I don't respond. Anyone have ideas on how to go about valuing it?
If you have a lot of inquiries, start responding with ridiculous prices (whatever ridiculous means to you: $100k, $1M, whatever). Quote a different price for each new request. People will either agree, stop talking, or start negotiating down. After 30 emails I bet you will have some idea of how much you can sell it for.
One simple thought: it’s just an email reply, not a contract or obligation to sell at a particular price. You can change your mind at any time.
I mean, ch.at is an incredible domain hack, but I'm not sure it's worth millions. If it were ch.com it could fetch mid six figures and up. Either way, absolutely amazing domain.
Simple: define the price for customers they have insufficient data on as max(price). Private mode and VPNs are actually super easy to detect for someone willing to extract the maximum value out of you.
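To make the mechanism concrete, here is a minimal hypothetical sketch of that pricing rule; the IP ranges and the lookup are purely illustrative placeholders, not any real vendor's detection method:

```python
# Sketch of "default to max(price) when you can't profile the customer".
# The CIDR list is a hypothetical stand-in for a VPN/datacenter IP database.
import ipaddress

HYPOTHETICAL_VPN_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),   # stand-in for a VPN provider's block
    ipaddress.ip_network("198.51.100.0/24"),  # stand-in for a datacenter ASN block
]

def quote_price(client_ip: str, profiled_price: float | None, max_price: float) -> float:
    """Known, profiled customers get their tailored price;
    unknown or VPN/private-mode visitors fall back to max_price."""
    ip = ipaddress.ip_address(client_ip)
    looks_masked = any(ip in net for net in HYPOTHETICAL_VPN_RANGES)
    if profiled_price is None or looks_masked:
        return max_price
    return profiled_price
```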
I agree. I saw Xorq and, with the font I have, it looked like Xorg.
No offense, but I avoid AI like the plague, so when I saw this I thought "WHY?", thinking it was X11 (Xorg).
Now a bit of a rant. Why oh why do people have to use X-something for their products? Time to come up with something else. If I were the X.Org Foundation I would have sued Twitter. And I would have sent a letter to the smaller tech players using X for their products :)
This is really useful. In our case, we were inspired by a terminal game called Zork, which is an Activision trademark. So we tried to be creative, and replacing the Z with an X made sense at the time. Of course, we did not want to be like any "dork", hence the Q.
Now that I read the thought process above, I think it leaves much to be desired.
Great point... honestly this would be our third rebrand, but it's worth considering. I am just hoping that we are in a different enough space that it could slide...
Yeah, please don't stick with Xorq; it's so visually similar to Xorg that it looks like the tail of the g didn't render, which is totally possible on the web.
If it were any good I would assume there would be no need to hype it up.
My theory is that LLMs will get commoditized within the next year. The edge OpenAI had over the competition is arguably lost. If the trend continues we will be looking at commodity-like inference pricing, where only the most efficient providers, like Cerebras and Groq, will actually be making money in the end.
I don't think so; look at how Sora changed every... Well, Operator was a game changer for... Hmm, but what about GPT-4.5 or PhD-level o3... o3-pro...? I mean, the $10k/month agents are definitely coming... any day now...
With this comparison you are saying the original iPhone was like version 6 of a well-established line of products in a market that had seen major releases a few times a year for about three years.
That's certainly not how the first iPhone is usually described.
"My theory is that LLMs will get commoditized within the next year."
Incredibly bad theory. It's like saying every LLM is the same because they can all talk, even though the newer ones continue to smash through benchmarks the older ones couldn't. And now that happens quarterly instead of yearly, so you can't even say it's slowing down.
At the moment most of the dollars are coming from consumer (and business) subscriptions. That's where the valuations are getting pegged, and most API dollars are probably seen as experimental. Model quality matters, but product experience is what is driving revenue. In that sense OpenAI is doing quite well.
If that is the case, the $300 billion question is whether someone can create a product experience that is as good as OpenAI’s.
In my mind there are really three dimensions they can differentiate on: cost, speed, and quality. Cost is hard because they’re already losing money. Speed is hard because differentiation would require better hardware (more capex).
For many tasks, perhaps even a majority right now, the quality of free models is approaching good enough.
OpenAI could create models which are unambiguously more reliable than the competition, or ones which are able to answer questions no other model can. Neither of those has happened yet afaik.
Competitors just need to wait for OpenAI to burn all their free money and dig themselves a debt hole they can’t easily climb out of, and then offer a similar experience at a price that barely breaks even or makes a tiny profit, and they win.
> three dimensions they can differentiate on: cost, speed, and quality
The fourth dimension is likely to be the most powerful of the differentiators: specificity.
Think Cursor or Lovable, but tailored for other industries.
There's a weird thing where engineers tend to be highly paid, but the people who employ engineers are hesitant to spend highly on tools to make those engineers more productive. Hence all of Cursor's magic only gets its base price to roughly half of Intercom's entry-level fee, and Intercom is a tool for people who do customer support.
LLMs applied to high-value industries outside of tech are going to be a big differentiator. And the companies that build such solutions will not have the giant costs associated with building the next foundation model, or potentially operating any models at all.
The fact that xAI exists only out of Elon Musk's personal spite and still produced a top-performing model certainly implies that model training isn't any kind of moat. It's certainly very expensive, but not mysterious.
The demo is not convincing. I rarely find myself migrating between package managers, and if I did, I would expect Claude Code to ace this task without MCP help.
Appreciate the feedback. I'll add it to my todo list to try your suggestion of comparing Claude Code with and without the MCP to measure the quantitative difference.
As there is no reason to believe that Harry Potter is axiomatic to our culture in the way that other concepts are, it is strange to me that the LLMs are able to respond in this way, and not at all expected. Why do you think this outcome is expected? Are the LLMs somehow encoding the same content in such a way that they can be prompted to decode it? Does it matter legally how LLMs are doing what they do technically? This is pertinent to the court case that Meta is currently party to.
> See for example OpenAI's comment in the year of GPT-2's release: OpenAI (2019). Comment Regarding Request for Comments on Intellectual Property Protection for Artificial Intelligence Innovation (PDF) (Report). United States Patent and Trademark Office. p. 9. PTO–C–2019–0038. “Well-constructed AI systems generally do not regenerate, in any nontrivial portion, unaltered data from any particular work in their training corpus”
> During the hearing, Judge Chhabria said that he would not take into account AI licensing markets when considering market harm under the fourth factor, indicating that AI licensing is too “circular.” What he meant is that if AI training qualifies as fair use, then there is no need to license and therefore no harmful market effect.
I know this is arguing against the point that this copyright lobbyist is making, but I hope so much that this is the case. The “if you sample, you must license” precedent was bad, and it was an unfair taking from the commons by copyright holders, imo.
The paper this post is referencing is freely available:
Open WebUI (https://github.com/open-webui/open-webui) accepts OpenRouter API keys and lets you choose from all models on OpenRouter in a ChatGPT-like (or even slightly better, IMHO) interface.
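The reason this works is that OpenRouter exposes an OpenAI-compatible endpoint, so any client that speaks that protocol can point at it. A minimal sketch with the openai Python client; the model slug is just an example, swap in any model listed on openrouter.ai:

```python
# OpenRouter speaks the OpenAI chat-completions protocol, which is why
# frontends like Open WebUI can use an OpenRouter key directly.
# Requires `pip install openai`.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # OpenRouter's OpenAI-compatible endpoint
    api_key="sk-or-...",                      # your OpenRouter API key
)

response = client.chat.completions.create(
    model="openai/gpt-4o",  # example slug; any OpenRouter model works
    messages=[{"role": "user", "content": "Hello from an OpenRouter-backed client"}],
)
print(response.choices[0].message.content)
```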