The whole thing that differentiates this company from the dozen other seemingly interchangeable new-space entrants is the novel technology they've developed to facilitate reuse. Even if there weren't a market for five tons to LEO (and to be clear, Rocket Lab seems to be doing decent business launching a lot less) and this were purely a technology demonstrator, why would you build a technology demonstrator that doesn't show off the thing that makes your company interesting?
> Amazon Kuiper is positioning to compete with SpaceX’s Starlink broadband constellation but it would not rule out seeking launch services from its competitor given the tight deadline, Limp said. “We are open to talking to SpaceX. You’d be crazy not to, given their track record.”
> The Falcon 9 [22.8 tons to LEO], however, is not as large as Amazon would like it to be in order to get maximum bang for its launch buck, as Kuiper satellites are larger than Starlink’s.
> “I would say Falcon 9 is probably at the low end of the capacity that we need,” Limp said. Perhaps a better option would be Falcon Heavy or the much larger Starship, which is still in development. As Starship transitions to production readiness, “that becomes a very viable candidate for us as well.”
Maybe, but they also say Starship would be the best option. There is no way Nova or any other rocket could compete here. Nova would only be good relative to Starship for launching individual smaller payloads into specific orbits, so not for satellite constellations.
I'm not sure the goal of this competition, in and of itself, is AGI. They point to current LLMs emerging from transformers, which in turn emerged from a general basket of building blocks from machine-translation research (attention, etc.). The suggestion seems to be that some fundamental building blocks are still missing between where we are now and AGI, and that this is an attempt to spur the development of some of them. By analogy with LLMs, the goal here is to come up with a new thing like "attention," not a new thing like GPT-4.
Intent on whose part, though? Like, supposing arguendo that the company's goal was to make the voice sound indistinguishable from SJ's in Her, but they wanted to maintain plausible deniability, so they instead cast as wide a net as possible during auditions, happened upon an actor who they thought already sounded indistinguishable from SJ without any special instruction, and cast that person solely for that reason. That seems as morally dubious to me as achieving the same outcome by deliberately instructing the performer.
> happened upon an actor who they thought already sounded indistinguishable from SJ without special instruction, and cast that person solely for that reason
So who was doing the selecting, and were they instructed to perform their selection this way? If there were a lawsuit, discovery would reveal emails or any other communiqués that would be evidence of this.
If, for some reason, there is _zero_ evidence that this was chosen as a criterion, then it would be pretty hard to prove intent.
I think this is overthinking it. ChatGPT is billed as a general-purpose question-answerer, as are its competitors. A regular user shouldn't have to care how it works, or know anything about context or temperature or whatever. They ask a question, it gives what looks like a plausible answer, but it doesn't actually do the task that was asked for and that it appears to do. The technical reasons it can't do the thing are interesting, but not the point.
But it's like asking a person to generate the same thing, and then, as they start to list off their answers, stopping them by throwing up your hand after their first response, writing that down, going back in time, asking that person to do the same thing, stopping them again, and repeating, and then being surprised after 1,000 times that you didn't get a reasonable distribution.
Meanwhile, if you let the person actually continue, you may well get 'Left... Left Right Left Left...' etc.
If you asked me to pick a random number between one and six and ignore all previous attempts, I would roll a die and you would get a uniform distribution (or at least not 99% the same number).
If you are saying that this thing can't generate random numbers on the first try, then it can't generate random numbers. Which makes sense. Computers have a really hard time with randomness, and that's why every computer science course makes it clear that we're using pseudo-randomness most of the time.
> If you asked me to pick a random number between one and six and ignore all previous attempts, I would roll a die and you would get a uniform distribution
I believe what GP is getting at is that if you didn't have a die, and you truly ignored all your previous attempts to the point of genuinely forgetting that the question had been asked, then your answer would likely be the same every time. Imagine asking a person with severe Alzheimer's to pick a number, then asking again a few minutes later. You'd probably get the same answer.
Yeah, I get what they are saying and there's no reason to believe that.
The die is an analogy for our decision making. They are implicitly claiming that randomness only shows up across a remembered sequence of choices, and that simply isn't how randomness, or even halfway decent pseudo-randomness, works.
Any system, whether it be a die, a person, or an LLM, doesn't have to know about its previous random choices to make random choices that follow some distribution going forward, presuming it's actually capable of randomness.
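A minimal sketch of that point (in Rust, assuming the `rand` crate's 0.8 API; the 80% bias is just an illustrative figure): each draw below is made with zero knowledge of earlier draws, yet over many trials the totals still follow the requested distribution.

```rust
use rand::Rng;

fn main() {
    let mut rng = rand::thread_rng();
    let (mut lefts, mut rights) = (0u32, 0u32);
    for _ in 0..1000 {
        // Memoryless choice: gen_bool(0.8) is true with probability 0.8,
        // independently of every earlier call.
        if rng.gen_bool(0.8) {
            lefts += 1;
        } else {
            rights += 1;
        }
    }
    // Typically something close to "left: 800, right: 200".
    println!("left: {lefts}, right: {rights}");
}
```

The distribution comes from the sampling step itself, not from consulting a record of prior picks; that's all the die in the analogy is doing too.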
I'm just impressed that it answered "left" rather than outputting python code that I could run which would sample the list ["left", "right"] with an 80% bias.
It wouldn't be 0 dollars, though; the majority of their users are apparently outside the US. So the question is: how does the amount you could get by selling a US-inclusive TikTok compare to the potential future earnings of a US-free TikTok? If the market prices it accurately, you'd sort of expect the former to be higher (a US-inclusive version seems obviously more valuable), but maybe they think the market undervalues them, or maybe prospective buyers would smell blood in the water because of the deadline and try to low-ball, etc.
If you want to call from Go into Rust, you can declare any Rust function as `extern "C"` and then call it the same way you would call C from Go. Not sure about going the other way.
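A minimal sketch of the Rust side (the function name `add` and the crate setup are illustrative, not from the parent comment): mark the function `#[no_mangle] pub extern "C"`, build the crate as a `cdylib` or `staticlib`, and cgo can then call it like any other C symbol.

```rust
// Cargo.toml for this crate would need:
//   [lib]
//   crate-type = ["cdylib"]   # or "staticlib"

#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    // Only C-compatible types (integers, floats, raw pointers,
    // #[repr(C)] structs) should cross this boundary.
    a + b
}
```

On the Go side you'd declare the function in the cgo preamble (e.g. `int32_t add(int32_t, int32_t);`), link against the built library, and call it as `C.add(1, 2)`.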
Furthermore: sharp or dull, Ginsu-style or traditional prop, putting a part of your body in the interaction zone is a bad day. The article's style just gives it a more body-horror feel.
If you're building a submarine, you don't have to call it a "future submarine" until it submerges; people understand that if you say "I'm currently building a submarine," it has yet to go under water, but the thing you're building is still a submarine. I think that's generally true of not-yet-built or not-yet-used things: it's understood that if it hasn't done the thing yet, you're describing what it's going to be/do.
If SAFs are to be economically viable at all, they'll almost certainly need to be able to run in existing, unmodified engines. So: all engines will be able to run on some amount of SAF, anywhere from 0% to 100%, as will this new engine. This statement has no information content whatsoever.