This is what a failed App Store looks like. Everybody complains about the 30% cut that seems to be the norm on Steam, the iOS App Store and the like.
But getting an app store to take off is incredibly hard, and the Mac App Store is the proof of that. It should be successful; everything points to it. Despite that, absolutely no one uses it, so big apps aren't on it, and fake / low-quality apps are thus more visible, which lowers the trust even more. And then you have a chicken-and-egg problem.
If this is your metric for a "failed app store", then the iOS app store also qualifies, since it has just as many such listings.
As for the Mac App Store: even ignoring Apple's first-party apps, it has MS Office, WhatsApp, Telegram, Kindle, Facebook, Slack, Parallels, LibreOffice, VLC, just to name a few. I use a Mac as my primary desktop, and I'd say that about half of the apps I use daily are from the store. As far as I can tell, the ones that are missing are mostly missing because they can't do what they need to do within the sandbox.
Everybody complains about the lack of alternatives. Apple could charge a 99% cut for all I care, but they should have to compete with other iOS app store providers to prove their cut is worthwhile.
> the Mac app store is the proof of that. It should be successful
The Mac App Store shouldn't be successful. It's the exact same situation as the iOS App Store, but with professional software and competitive third-party storefronts. The Mac is the closest thing Apple has to a healthy software ecosystem.
I don't know if the iOS App Store is much better. The killer for me is paid-for placements for competing apps when I search for something: a clear-cut case of putting revenue over user experience.
How far away from each other are those opposing wind trends? It's one of the issues with the global grid in Europe: if you look at weather patterns, all of Western Europe often tends to be contained in the same cell.
And assuming you can find those opposing wind trends not too distant from each other, how reliable is that (anti)correlation?
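For what it's worth, this is straightforward to check if you can get hourly output data for candidate sites. A minimal sketch with pandas, assuming hypothetical CSVs of hourly capacity factors (file names and columns are made up; sources like renewables.ninja export something similar):

    import pandas as pd

    # Hypothetical hourly capacity-factor series for two sites; the file
    # names and column layout are placeholders -- adapt to your data source.
    site_a = pd.read_csv("wind_site_a.csv", parse_dates=["time"], index_col="time")["cf"]
    site_b = pd.read_csv("wind_site_b.csv", parse_dates=["time"], index_col="time")["cf"]

    # Pearson correlation over the overlap: +1 = sites rise and fall
    # together (no smoothing benefit), -1 = perfectly opposed (ideal).
    print("overall correlation:", site_a.corr(site_b))

    # Reliability check: compute the correlation month by month and look
    # at the spread, not just the long-run average.
    df = pd.concat({"a": site_a, "b": site_b}, axis=1)
    monthly = df.groupby(pd.Grouper(freq="MS")).apply(lambda g: g["a"].corr(g["b"]))
    print(monthly.describe())

A long-run average near -1 is worth little if individual months swing between -0.9 and +0.5; it's the worst month that sizes your backup capacity.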
This exists because of a cognitive bias: we tend to focus on direct, attributable harm while overlooking larger, diffuse, and indirect harm.
A nuclear plant could operate safely for 50 years, causing no harm, but if it explodes once and kills 10,000 people, there's going to be a trial. A coal plant could run for the same 50 years without any dramatic accident, yet contribute to 2,000 premature deaths every single year through air pollution, adding up to 100,000 deaths. Nobody notices, nobody is sued, business as usual. It's legally safer today to be "1% responsible for 1,000 deaths" than to be "100% responsible for a single one". Fix this and that law goes away.
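The back-of-envelope behind those numbers, spelled out (the figures are the illustrative ones from above, not sourced epidemiology):

    # Illustrative arithmetic only, using the figures from the comment above.
    YEARS = 50

    # Nuclear: one hypothetical catastrophic accident in the whole period.
    nuclear_deaths = 10_000          # all at once, fully attributable, trial follows

    # Coal: diffuse harm from air pollution, year after year.
    coal_deaths_per_year = 2_000     # premature deaths, attributed only statistically
    coal_deaths = coal_deaths_per_year * YEARS   # 100,000 -- 10x worse, no trial

    print(f"nuclear (one accident):   {nuclear_deaths:,}")
    print(f"coal (chronic pollution): {coal_deaths:,}")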
Well, no, that's more down to nuclear fans constantly using the worst possible comparisons, and creating false dichotomies. The better comparisons are renewables or natural gas, not an ancient technology literally everybody (outside of its investors) agrees is bad and should go.
Nuclear sits just between wind (slightly more dangerous) and solar (slightly less dangerous) per unit of electricity production, all of them being much safer than hydro, and ridiculously safer than gas, oil and coal. It's a really, really safe option.
Note that these numbers are a bit old, and since then installation of consumer solar has increased significantly. Installing solar panels on residential roofs is much more dangerous than installing them in utility-scale plants, so the death rates for solar are significantly underestimated. Meanwhile, accident rates for plant construction (nuclear, solar or otherwise) keep dropping.
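For reference, the commonly cited figures behind this ordering (roughly the Our World in Data deaths-per-TWh estimates, circa 2020; treat the exact values as approximate):

    # Deaths per TWh of electricity, approximate figures as popularized by
    # Our World in Data (circa 2020); exact values vary by study and year.
    deaths_per_twh = {
        "coal": 24.6,
        "oil": 18.4,
        "natural gas": 2.8,
        "hydro": 1.3,       # dominated by rare dam failures
        "wind": 0.04,
        "nuclear": 0.03,    # including Chernobyl and Fukushima estimates
        "solar": 0.02,
    }

    for source, rate in sorted(deaths_per_twh.items(), key=lambda kv: kv[1]):
        print(f"{source:>12}: {rate:6.2f}")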
Our meetings often involve a mix of onsite and offsite employees. A typical setup might be CEO + CTO + a VP in a room, connected to the call as a single Zoom client (under whichever of these 3 guys got into the meeting room first), then a few additional people joining remotely from home, each on their own Zoom instance. The guys in the meeting room use a dedicated camera that captures the entire room and has all participants in sight.
Is this a setup you are trying to address? How are you able to recognize speakers in this configuration?
Most transcript systems we have tried bundle everything said by the onsite people into a single entity, which pretty much destroys the value of the transcript, especially if people in that room disagree with each other: reading the transcript makes it feel like the one onsite "guy" is very schizophrenic.
That’s a great question! We partner with a number of different transcription providers that use AI to identify different speakers based on the sound of their voice. This prevents all the speakers from a conference room from being bundled together as the same person. We’re also going to be looking to add this functionality to our own transcription service in the coming months.
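For the curious, the general technique here is speaker diarization, and the open-source pyannote.audio library does the same kind of thing. A rough sketch (not our providers' actual pipelines; the model name and token are placeholders):

    # Generic speaker-diarization sketch with pyannote.audio -- the same
    # class of technique, not any specific vendor's pipeline.
    from pyannote.audio import Pipeline

    # Pretrained diarization pipeline from Hugging Face (needs an access token).
    pipeline = Pipeline.from_pretrained(
        "pyannote/speaker-diarization-3.1",
        use_auth_token="YOUR_HF_TOKEN",   # placeholder
    )

    # One audio file from the conference-room mic in; speaker-labelled turns out.
    diarization = pipeline("meeting_room.wav")
    for turn, _, speaker in diarization.itertracks(yield_label=True):
        print(f"{turn.start:7.1f}s - {turn.end:7.1f}s  {speaker}")

Those labelled turns are then aligned with the transcription timestamps so each line of the transcript gets attributed to a distinct in-room speaker rather than to the room as a whole.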
The median salary in the US is $29/hour.
By definition a one-hour meeting has at least two people in it, often more. So two median earners talking for an hour cost ~$60. The meetings you really want transcripts for often contain more than two people, and often involve people earning more than the median. I'd happily add $1 to every single one of my meetings if they get more productive.
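Spelled out, using the $29/hour figure above (the attendee counts are made up for illustration):

    # Back-of-envelope meeting cost, using the parent's $29/hour median wage.
    def meeting_cost(people: int, hours: float, hourly_wage: float = 29.0) -> float:
        return people * hours * hourly_wage

    print(meeting_cost(2, 1))    # $58: two median earners, one hour
    print(meeting_cost(6, 1.5))  # $261: a more typical "important" meeting
    # Against that, $1 per meeting for a usable transcript is noise.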
You can complain all you want about land allocation, but when there are 2-ton vehicles going 100+ km/h somewhere and it's closed to pedestrians, you don't let your kid go there. Period.
Similarly, if they are reintroducing wolves and grizzlies in a forest near you, and they close it off for trails etc., you don't organize a weekend camping trip there.
Sometimes it's about common sense, stop blaming "society and the government" for your inability to function as a reasonable human being.
Well, they are for cars. So play at your own risk.
I played in the streets and rode my bike every summer day (still do, just not as much). But I also knew that 2,000 lbs of steel traveling at 30 mph is not going to stop for me, so my life was in my hands and I needed to be vigilant.
That's a mistake, I think. They are for traffic, and that includes kids, adult pedestrians, people on bikes, people with dogs, mopeds, and yes, cars. And as the driver of one of those you have a responsibility towards all those less-protected members of society.
Search engines are as well. At least in the EU, they are literally the first thing a browser asks you about when you launch it for the first time.
With a dropdown, changing takes 2 clicks. For search engines you are prompted up front: using Google is one click, and using something else costs the same single click.
Google still manages an outrageous level of dominance.
Being perceived as the best is still a huge head start. 99% of the population is not using "last week's LLM that topped AIME and ARC-AGI". They are using "ChatGPT" with the default model selected.
People are going to switch when "their tech friend tells them to switch". The same way they switched from Internet Explorer to Chrome.
Once you reach that position you can afford to be "not the best but good enough" for a long time.
xAI needs to convince investors that OpenAI is struggling, so there is an opening to take the crown and become popular enough to get people to switch. They have Twitter to help make that happen.
And they need to convince them that no one else is going to be so much better than they are anytime soon; they just need to be good enough.
> They are using “ChatGPT” with the default model selected.
Funny, because the OpenAI models are not the first or even second choice for anyone I know who uses Copilot for coding agents. Anthropic and Google are absolutely stomping them in this space.
Search engines are pretty different from LLMs, in any case: they all have different UX right in your face, different functionality, etc. The LLMs simply generate text, that's all they do, and the differences are far more subtle.
Your sample is tech people using Copilot, which is a very small slice of the population. Hundreds of millions of casual users around the world default to ChatGPT, and in my country, for example, it's basically a household name at this point. They haven't even heard of Claude or Gemini. For Google, it looks like the Google+ vs Facebook situation all over again.
The European EPRs aren't underwhelming: the power plants are delivering precisely what was planned. The underwhelming part comes from delays and cost overruns caused by local political opposition and lack of vision, as well as difficulties finding builders with the required know-how.
Despite this, both France (which has just finished building an EPR) and the UK (which is building one right now) are doubling down and launching new projects to capitalise on the knowledge gained.
In France all the historical reactors worked so well that we did not feel the need to build more. This led to talented engineers going into retirement without having a chance to pass on their knowledge and experience, causing cost overruns on the new constructions. This is not inherent to the technology itself but a symptom of our decision to put it aside for a while. As an example, when I was in engineering school I remember being told "don't do a nuclear physics major, there are no jobs for that in the future". Not easy to retain excellence in a field when that's what you tell your children. All the students who went there anyway are in very, very high demand today, as you might expect.
The new generation of reactors is more complex, mainly because of additional safety and reliability requirements, which is a good thing. They are certified for a lifespan of 60 years and costs are computed on that basis. Some old-generation reactors in the US are looking to extend their lifespan to 80 years. It's extremely likely the new, safer reactors will be able to go beyond that, reducing the per-MWh costs compared to current estimates.
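To see how lifespan feeds into the per-MWh number, here's the undiscounted back-of-envelope (all inputs are illustrative, roughly EPR-sized; real LCOE calculations discount future output, which blunts the effect of lifetime extensions):

    # Undiscounted capital cost per MWh vs. plant lifespan.
    # All inputs are illustrative, roughly EPR-sized.
    CAPEX_EUR = 12e9          # construction cost
    CAPACITY_MW = 1_600       # one EPR
    CAPACITY_FACTOR = 0.90
    HOURS_PER_YEAR = 8_760

    def capex_per_mwh(lifespan_years: int) -> float:
        lifetime_mwh = CAPACITY_MW * CAPACITY_FACTOR * HOURS_PER_YEAR * lifespan_years
        return CAPEX_EUR / lifetime_mwh

    for years in (40, 60, 80):
        print(f"{years} years: {capex_per_mwh(years):5.1f} EUR/MWh")
    # 40 years: ~23.8 / 60 years: ~15.9 / 80 years: ~11.9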
We are slowly re-learning how to build reactors, and mastering a new technology at the same time. The more reactors we build based on that experience, the more that initial cost will be distributed.
There is nothing underwhelming in what was delivered; the process to get there was, but we will get better at that.
Yes? In California we adopted a streamlined process for new renewable connections. The first project was recently approved under the new lightweight process. It is being built by an irrigation district on a bunch of retired farmland.
"The Darden project, which is owned by IP Darden I, LLC, a subsidiary of Intersect Power" [0]
"Intersect has closed ~$6B in project financings and raised ~$2.1B in corporate equity to support the buildout of clean energy assets including data center opportunities across the U.S." [1]
Farms quite often have fairly bulky connections that allow them to use their existing grid connection for smaller-scale solar. Enough for them to have a steady, consistent income unconnected to seasons and weather, and a very good tradeoff. You see a ton of these in rural Minnesota because the utility, Xcel, is one of the very few utilities in the country that isn't stuck in the 1970s and can adapt to new technology.
For both the French and the British, the current investments are fueled by the desire to subsidize their military nuclear ambitions.
Going by the expected Sizewell C costs, it will be even more expensive than Hinkley Point C; nothing learned.
The "lifespan" you proclaim is also an extremely rosy picture. Just about the entire plant, except the outer shell and a few core components like the pressure vessel, gets replaced over it.
You also have no idea whether expensive nuclear power will have an economical lifespan that lasts as long.
We already see existing nuclear plants all over Europe being forced out of the market by cheap renewables. This will only worsen, leaving nuclear power fewer and fewer hours over which to amortize its insanely high costs.
> the power plants are delivering precisely what was planned
No. The load factor of the pair of EPRs built at Taishan in China (5 years late and 60% over budget) is quite bad (0.55 and 0.76).
In France the EPR isn't even producing electricity yet, while it was supposed to be delivered in 2012 (budget: €3.3 billion; real cost: more than €23.7 billion).
> delay and cost overruns caused by local political opposition and lack of vision
Source? An official report (dubbed the "Folz report") explains why the EPR project in France (Flamanville) was a failure; I cannot find "local political opposition" among the causes.
> This lead to talented engineers going to retirement without having a chance to pass on their knowledge and experience
The Civaux-2 reactor was delivered in 1999.
In 2000 the French nuclear sector (at the time "Areva") was trying to sell EPRs (even in France).
In 2003 Finland ordered an EPR and work began in 2005.
How exactly are we supposed to believe that all the knowledge vanished, without anyone in the industry acting accordingly, especially while the existing French fleet of reactors (56 at the time) had to be maintained?
Even EDF, as early as 1986, considered the nuclear fleet too large: "We will have two to four too many nuclear reactors by 1990," ( https://www.lemonde.fr/archives/article/1986/01/17/nous-auro... ) and this was confirmed by the 1989 Rouvillois-Guillaume-Pellat report. The reason is well known: after the oil price shock, hydrocarbon prices had fallen significantly and sustainably, and they were competing with electricity.
However, reactors were built until the end of the 1990s. Three of them were started after 1985, and four were built in the 1990s. Some were ready to go in 1999 but only diverged and began generating electricity in 2002...
> certified for a lifespan of 60 years
Subject to a successful in-depth technical inspection every 10 years.
I would expect a lot of attempts to fail, and those tend not to be published, or gather less attention. So if we have reached a local optimum, any technique that gets close to the current benchmarks is worth publishing as soon as results reach that point. All the ones that are too distant are discarded. In the end, all the papers you see are close to the current status quo.
It's possible that some of those new architectures / optimizations would allow us to go beyond the current benchmark scores, but probably with more training data and money. And to get money you need to show results, which is what you see today. Scaling remains king; maybe one of these techniques is 2025's "attention" paper, but even that one needed a lot of scaling to go from the 2017 version to ChatGPT.
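The selection effect is easy to simulate: generate lots of noisy "attempts", publish only the ones that land near the current benchmark, and the published set clusters at the status quo even though the underlying distribution is wide. A toy model (all thresholds arbitrary):

    # Toy model of publication bias around a benchmark score.
    import random

    random.seed(0)
    SOTA = 80.0            # current benchmark score
    PUBLISH_MARGIN = 2.0   # attempts within this of SOTA get written up (arbitrary)

    # Many architecture/optimizer attempts; most land well below the benchmark.
    attempts = [random.gauss(mu=70, sigma=6) for _ in range(10_000)]
    published = [s for s in attempts if s >= SOTA - PUBLISH_MARGIN]

    print(f"attempts:  {len(attempts):>6}, mean score {sum(attempts)/len(attempts):.1f}")
    print(f"published: {len(published):>6}, mean score {sum(published)/len(published):.1f}")
    # The published papers all sit near SOTA -- not because every idea works,
    # but because only the ones that got close are visible.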