What would an EV muscle car look like? The Tesla Model 3 seems to check a lot of boxes. Definitely not the noise or muscle-car power delivery, but those seem like unrealistic goals for an EV.
IMO what made American muscle cars special was their combination of power + unique styling + price. Yeah, they were noisy (which some really liked) and sometimes impractical, but that was okay because they weren't trying to be everything to everyone.
A Model 3 might check a lot of boxes, but its styling is definitely not unique, and the rest of the car is trying to appeal to as many people as possible.
Insecure dudes seem to derive insouciant-to-proud self-satisfaction from inflicting noise, visual, and actual pollution on the rest of us which partially explains coal rollers and gaudy, loud motorcycles.
A couple of thousand acres, big sheds, a couple of silos, a few trucks and combine harvesters, and a go-hard-or-go-home, frequently-sideways attitude will do that, it seems.
To each their own, I guess. Yeah, I get needing 4WD for snow and mountains and reasonable capabilities suitable for farm and work, but excess is excess. I also don't understand why people put up with highway-princess trucks that cost $60k+. My dad's shop '78 Chevy C-10 Scottsdale lacked headrests, cupholders, and a radio; only the latter was ever added. You used to be able to buy cheap-ish fleet trims of trucks if you didn't want a bunch of plastic and extra nonsense; I don't know if that's still the case. I wonder if Toyota makes fleet versions of Tundras (made near where I live) or Tacomas (if they're not discontinuing tacos), given it doesn't sell the Hi-Lux here.
It's not for me, but I'm adjacent to rural and mining industries and understand the joy of building an OTT desktop computing rig or a near-industrial home lab.
In a similar manner, a mechanic who works on aircraft engines for crop dusters (capable of short takeoffs and landings with heavy loads, and drafting over fields with low clearance) can also enjoy tuning the heck out of a V-8 and taking it to the limit.
It's not insecurity driving that behavior; it's confidence veering into overconfidence.
You can see that same "let's have a go and push it" mentality in building MudCrab underwater EV cars.
I read the full article (really resonated with it, fwiw), and I'm struggling to understand the issues you're describing.
> Yes of course relationship questions don’t have a “correct” answer. But physics questions do. Code vulnerability questions do. Math questions do. I mean seriously?
Can you say more? It seems to me the article says the same thing you are.
> I don’t actually think the above paragraph makes any sense, does anyone disagree with me? “Instead of planning we observe and hypothesize”?
I think the author is drawing a connection to the world of science, specifically quantum mechanics, where the best way to make progress has been to describe and test theories (as opposed to math where we have proofs). Though it's not a great analog since LLMs are not probabilistic in the same way quantum mechanics is.
In any case, I appreciated the article because it talks through a shift from deterministic to probabilistic systems that I've been seeing in my work.
Sure, but it's overblown. People have been reasoning about and building probabilistic systems formally since the birth of information theory back in the 1940s. Many systems we already rely on today are highly stochastic in their own ways.
Yes, LLMs are a bit of a new beast in terms of the use of stochastic processes as producers—but we do know how to deal with these systems. Half the "novelty" is just people either forgetting past work or being ignorant of it in the first place.
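To make "we do know how to deal with these systems" concrete: one standard pattern is to wrap the stochastic producer in a deterministic validator with a retry budget, so everything downstream only ever sees checked output. A minimal sketch (the flaky producer here is a stand-in I made up for an LLM call, not a real API):

```python
import random

def flaky_producer(rng):
    """Stand-in for a stochastic producer (e.g. an LLM call):
    usually emits a usable answer, sometimes garbage."""
    return "42" if rng.random() < 0.7 else "not a number"

def validated(producer, is_valid, rng, max_tries=10):
    """Classic treatment of a stochastic component: retry behind a
    deterministic validator until the output passes the check."""
    for _ in range(max_tries):
        out = producer(rng)
        if is_valid(out):
            return out
    raise RuntimeError("no valid output within retry budget")

rng = random.Random(0)
result = validated(flaky_producer, str.isdigit, rng)
```

The point is that the system as a whole stays analyzable: you reason about the probability the validator ever passes a bad output, and about the retry budget, exactly the way the older literature on noisy channels and randomized algorithms does.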
> Half the "novelty" is just people either forgetting past work or being ignorant of it in the first place.
We also see this in cryptocurrencies. The path to forgetting is greased by money and fame, and eventually people are forced to "rediscover" the same ancient problems they had insisted couldn't possibly apply.
Truly appreciate the perspective. Any pointers to previous work on dealing with stochastic systems? Part of my work is securing AI workloads, and it seems like losing determinism throws out a lot of assumptions in previously accepted approaches.
You're implying any real system can have a single top priority, which is equally false. There are always multiple priorities, and the one sitting at the top changes with context.
> We really need better ways of measuring economic health.
What would that look like to you? It seems to me no single, measurable metric is going to tell you economic health. There are a bunch of indicators that need to be interpreted.
Do you think Iran will have nukes in the near (20 years, just to put a number) term? Your position really only makes sense if that's not the case. By whatever means, the goal now seems to be to prevent that.
> Do you think Iran will have nukes in the near (20 years, just to put a number) term?
If they managed to get enough of their HEU, and any spent reactor fuel, out of Fordo and elsewhere into locations we don't know about, where they happen to have previously built backup facilities, then they could have them very quickly. Hopefully a) they didn't build backup facilities, and b) they didn't get a chance to spirit away the materials without us noticing.
If we fail, there's still the hope that other commenters here are right, and Iran isn't intent on using them offensively. If so, then Iran itself will be safe from this sort of attack.
... but it will also be clear to every other country that the only way to be secure from Western military intervention is to possess nuclear weapons. There will be a precedent of a country acquiring them despite Western demands and surviving. This will lead to a world where proliferation is rampant, though not necessarily one where their use loses the taboo it carries today.
I think this downplays the security issue. It's true that scoping the token correctly would prevent this exploit, but it's not a reasonable solution under the assumptions that are taken by the designers of MCP. LLM+MCP is intended to be ultra flexible, and requiring a new (differently scoped) token for each input is not flexible.
Perhaps you could have an allow/deny popup whenever the LLM wanted to interact with a service. But I think the end state there is presenting the user a bunch of metadata about the operation, which the user then needs to reason about. I don't know that that's much better; those OAuth prompts tend to be click-throughs for users.