Wrong. They show up in some of the best and most widely and intensely read prose that exists, with good reason. Of course, like any tool, they can be misused by people who don't know better.
This comment is really below the standards one might expect here, a total ad hominem. Why don't you open one of your own big boy books and tell us which one it was that used no em dashes?
And what's your argument? That YOU have never read any literature that made use of em dashes? And yet you're offering no evidence of these dashless works at all?
N=0
And yes, you made an ad hominem; check the site guidelines. You can make your point without being a dick.
I'm widely read. Enough that I'm not a pseud trying to imply that I read a lot of super duper hard books as evidence that my midwit opinion on em dashes is better than that of the guy here who made me mad by criticizing them.
Let me know if you're so stoked at reading Infinite Jest that you think that's proof I'm wrong.
> I don't know of any prose that relies on crutch dashes
If that's true then you are not widely read, and if it's false then you are a disingenuous troll and your comments here are exactly as worthless as they look. Either way your opinion is still bad. Sorry you're mad about it, but I'm done here.
I'm sure somebody else has mentioned this, but if you're willing to buy physical media, it's not difficult to rip even 4K HDR Blu-rays yourself and stream them from a self-hosted platform like Jellyfin.
I'm able to find most of what I want to watch on physical media in either HD or 4K, with the exception of more obscure anime. Some TV shows can be expensive to pick up and more laborious to rip, though.
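For anyone curious what that workflow roughly looks like: this is a minimal sketch, not a definitive setup. It assumes MakeMKV's CLI (`makemkvcon`) as the ripper and a Jellyfin movie library laid out as "Title (Year)" folders; the library path, title, and drive index here are all placeholders you'd swap for your own.

```shell
# Sketch: rip a disc with MakeMKV's CLI, then file the result where a
# Jellyfin "Movies" library expects it: <library root>/Title (Year)/
TITLE="Some Movie"                     # placeholder title
YEAR=1999                              # placeholder year
LIBRARY="/tmp/jellyfin-demo/Movies"    # assumed Jellyfin library root
DEST="$LIBRARY/$TITLE ($YEAR)"

mkdir -p "$DEST"

# Needs an optical drive with a disc inserted; uncomment to actually rip
# all titles from the first drive into the destination folder:
# makemkvcon mkv disc:0 all "$DEST"

echo "$DEST"
```

After a library scan, Jellyfin matches the folder name against its metadata providers, which is why the "Title (Year)" convention matters more than the filename of the .mkv inside.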
And yet I have vivid memories of many situations that weren't dangerous in the slightest, and essentially verbatim recall of a lot of useless information, e.g. quotes from my favorite books and movies.
I am not sure exactly what point you're trying to make, but I do think it's reductive at best to describe memory as a tool for avoiding/escaping danger, and misguided to evaluate it in the frame of verbatim recall of large volumes of information.
If there were any indication of a hard takeoff being even slightly imminent, I really don't think key employees of the company where that was happening would be jumping ship. The amounts of money flying around are direct evidence of how desperate everybody involved is to be in the right place when (so they imagine) that takeoff happens.
Based on my own usage patterns I don't think this is too implausible. When I do use an LLM chatbot for a "search", I'm almost always gathering initial information to use in one or more traditional search queries to find authoritative sources.
It does kind of contradict my own assumption that most people just take what the chatbot says as gospel and don't look any deeper, but I also generally think it's a bad idea to assume most people are stupid. So maybe there's a bit of a contradiction there.
I think it is pretty safe to assume that at least one third of Google's users will take what the chatbot says as gospel and will not look any deeper, just as users previously took the first search result as the verbatim truth.
Thank you. That's different from my use of AI summaries, which is "ignore them". I know that I want definitive info, so I immediately look for something deeper than a summary.
But I also share your assumption about "most people".
If you start down this road, policing which media people have access to based on your own totally subjective moral standards and interpretations, you will end up with pervasive censorship that severely stunts cultural development. Because who is "you", anyway? The answer is, of course, whichever monstrous nutjob chooses to devote a huge chunk of their time and money to seizing the levers of power so that they can impose their monstrous nutjobbery on everyone else.
Seriously, outside of special, clearly delineated cases with indisputable negative externalities (especially on the production side), when has [effectively] banning certain [types of] media been a net good? Seems to me that all it's good for is political repression and fueling moral panics.
> Heck, writing in a language you didn't personally invent is like using a player piano.
Do you actually believe that any arbitrary act of writing is necessarily equivalent in creative terms to flipping a switch on a machine you didn't build and listening to it play music you didn't write? Because that's frankly insane.
Importing a library someone else wrote basically is flipping a switch and getting software behavior you didn't write.
Frankly I don't see a difference in creative terms between writing an app that does <thing> that relies heavily on importing already-written libraries for a lot of the heavy lifting, and describing what you have in mind for <thing> to an LLM in sufficient detail that it is able to create a working version of whatever it is.
Actually, I can see an argument that both of those are potentially equal, in creative terms, to writing the whole thing from scratch. If the author's goal was to write beautiful software, that's one thing, but if the author's goal is to create <thing>? Then the existence and characteristics of <thing> is the measure of their creativity, not the method of construction.
The real question is what you yourself are adding to the creative process. Importing libraries into a moderately complex piece of software you wrote yourself is analogous to including genai-produced elements in a collage assembled by hand, with additional elements added (e.g. painted) on top also by hand. But just passing off the output of some genai system as your own work is like forking somebody else's library on GitHub and claiming to be the author of it.
> If the author's goal was to write beautiful software, that's one thing, but if the author's goal is to create <thing>? Then the existence and characteristics of <thing> is the measure of their creativity, not the method of construction.
What you are missing is that the nature of a piece of art (for a very loose definition of 'art') made by humans is defined as much by the process of creating it (and by developing your skills as an artist to the point where that act of creation is possible) as by whatever ideas you had about it before you started working on it. Vastly more so, generally, if you go back to the beginning of your journey as an artist.
If you just use genai, you are not taking that journey, and the product of the creative process is not a product of your creative process. Therefore, said product is not descended from your initial idea in the same way it would have been if you'd done the work yourself.
Not only that, they also have a fairly conservative approach to design that seems to keep a lot of the stupid bullshit out of their cars. I own multiple late model Japanese cars from different manufacturers and have had zero issues with them. The ADAS systems they do have, while arguably basic by 2025 standards, function flawlessly. All essential controls (including climate control) are physical.
Seems like generally it ended up being a surveillance play, in practice if not in original intent. For example, Dog coin has reportedly been passing data taken from other agencies directly to ICE^[1] for law enforcement applications, and there was that other matter of logins, apparently from Russia, to accounts that Dog coin personnel demanded agencies create on their internal systems with (auditable) logging disabled^[2]. And probably more that I'm forgetting.
One does wonder whether this was all part of Musk's vision, or more thanks to the scum he hired to staff Dog coin and/or other lawless opportunists in the Trump administration.
If Musk's intent was to gut all of the agencies that were in a position to regulate any of his companies, then DOGE does seem to have been an outstanding success.
I see your refusal to acquiesce to Musk's appropriation of an innocent meme, and raise you a, "Keep calling it 'doge', but pronounce it phonetically to piss him off."
Sounds like no one requested that OpenAI wait a week:
>We weren't in touch with IMO. I spoke with one organizer before the post to let him know. He requested we wait until after the closing ceremony ends to respect the kids, and we did.
OpenAI jumped the gun before *the closing party* - they didn't even let the kids celebrate their win properly before stealing the spotlight.
Also in the article:
> In response to the controversy, OpenAI research scientist Noam Brown posted on X, "We weren't in touch with IMO. I spoke with one organizer before the post to let him know. He requested we wait until after the closing ceremony ends to respect the kids, and we did."
> However, an IMO coordinator told X user Mikhail Samin that OpenAI actually announced before the closing ceremony, contradicting Brown's claim. The coordinator called OpenAI's actions "rude and inappropriate," noting that OpenAI "wasn't one of the AI companies that cooperated with the IMO on testing their models."