Hacker News: caconym_'s comments

Wrong. They show up in some of the best and most widely and intensely read prose that exists, with good reason. Of course, like any tool, they can be misused by people who don't know better.

You wanna cite any sources for that?

I don't know of any prose that relies on crutch dashes


Pretty much any good work of literature? Or any technical document? Framing them as "crutch dashes" doesn't instantly make them so.

To make sure I wasn't posting out of my ass, I opened Gravity's Rainbow to 4 or 5 random pages. All confirmed what I said above.

Maybe you should read more widely. It's good for the brain!


>Gravity's Rainbow

>widely and intensely read prose

Dunno about that.

Pynchon's prose is notably colloquial, complete with lots of ellipses. Congrats on your Big Boy Book™ though.


This comment is really below the standards one might expect here, a total ad hominem. Why don't you open one of your own big boy books and tell us which one it was that used no em dashes?

Oh no, I did an ad hominem

My whole point was that opening "big boy books" doesn't actually make a point about the validity of a thing. That's just argumentum ad populum.


And what's your argument? YOU have never read any literature that made use of em dashes? But you're showing no evidence of these dashless works at all?

N=0

And yes, you made an ad hominem, check the site guidelines. You can make your point without being a dick.


It's just a book, man. Maybe you should read more widely; it might even cure you of the obscenely stupid opinion you've expressed here.

I'm widely read. Enough that I'm not a pseud trying to imply that I read a lot of super duper hard books as evidence that my midwit opinion on em dashes is better than the guy here who made me mad by criticizing them.

Let me know if you're so stoked at reading Infinite Jest that you think that's proof I'm wrong.


You made this claim above:

> I don't know of any prose that relies on crutch dashes

If that's true then you are not widely read, and if it's false then you are a disingenuous troll and your comments here are exactly as worthless as they look. Either way your opinion is still bad. Sorry you're mad about it, but I'm done here.


Good—see ya!

I'm sure somebody else has mentioned this, but if you're willing to buy physical media, it's not difficult to rip even 4K HDR blu-rays yourself and stream from a self-hosted platform like Jellyfin.

I'm able to find most of what I want to watch on physical media in either HD or 4K, with the exception of more obscure anime. Some TV shows can be expensive to pick up and more laborious to rip, though.
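For anyone curious, a minimal sketch of that workflow, assuming MakeMKV's `makemkvcon` CLI is installed; the title, year, and library paths here are made up for illustration, and you'd point `LIBRARY` at whatever directory your Jellyfin server scans:

```shell
#!/bin/sh
set -e

STAGING="${STAGING:-/tmp/rip}"                # where raw rips land
LIBRARY="${LIBRARY:-/tmp/media/movies}"       # hypothetical Jellyfin library dir
mkdir -p "$STAGING" "$LIBRARY/Stalker (1979)"

# Lossless remux of every title on the first optical drive into $STAGING.
# (Guarded so the script is a no-op without MakeMKV installed.)
if command -v makemkvcon >/dev/null 2>&1; then
    makemkvcon mkv disc:0 all "$STAGING"
fi

# Jellyfin's metadata matching works best with a "Name (Year)" folder and
# file name, so rename the rip accordingly (assumes one main title here).
for f in "$STAGING"/*.mkv; do
    [ -e "$f" ] || continue
    mv "$f" "$LIBRARY/Stalker (1979)/Stalker (1979).mkv"
done
```

Ripping is lossless and fast; if disk space matters you can transcode afterward, but for 4K HDR most people keep the remux to preserve quality.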


Older TV shows are often also only available on DVD, which is much lower quality than what streaming services (and thus pirates) have for them.


And yet I have vivid memories of many situations that weren't dangerous in the slightest, and essentially verbatim recall of a lot of useless information e.g. quotes from my favorite books and movies.

I am not sure exactly what point you're trying to make, but I do think it's reductive at best to describe memory as a tool for avoiding/escaping danger, and misguided to evaluate it in the frame of verbatim recall of large volumes of information.


If there was any indication of a hard takeoff being even slightly imminent, I really don't think key employees of the company where that was happening would be jumping ship. The amounts of money flying around are direct evidence of how desperate everybody involved is to be in the right place when (so they imagine) that takeoff happens.


If LLMs are an AGI dead end then this has all been the greatest scam in history.


Based on my own usage patterns I don't think this is too implausible. When I do use an LLM chatbot for a "search", I'm almost always gathering initial information to use in one or more traditional search queries to find authoritative sources.

It does kind of contradict my own assumption that most people just take what the chatbot says as gospel and don't look any deeper, but I also generally think it's a bad idea to assume most people are stupid. So maybe there's a bit of a contradiction there.


I think it is pretty safe to assume that at least one third of Google's users will take what the chatbot says as gospel and will not look any deeper, just as users previously took the first search result as the verbatim truth.


> almost always gathering initial information to use in one or more traditional search queries to find authoritative sources

For me at least with Perplexity, Grok and ChatGPT, all results come back with citations in every paragraph, so I haven't had to do that.


Thank you. That's different from my use of AI summaries, which is "ignore them". I know that I want definitive info, so I look for deeper info than a summary immediately.

But I also share your assumption about "most people".


If you start down this road, policing which media people have access to based on your own totally subjective moral standards and interpretations, you will end up with pervasive censorship that severely stunts cultural development. Because who is "you", anyway? The answer is, of course, whichever monstrous nutjob chooses to devote a huge chunk of their time and money to seizing the levers of power so that they can impose their monstrous nutjobbery on everyone else.

Seriously, outside of special, clearly delineated cases with indisputable negative externalities (especially on the production side), when has [effectively] banning certain [types of] media been a net good? Seems to me that all it's good for is political repression and fueling moral panics.


> Heck, writing in a language you didn't personally invent is like using a player piano.

Do you actually believe that any arbitrary act of writing is necessarily equivalent in creative terms to flipping a switch on a machine you didn't build and listening to it play music you didn't write? Because that's frankly insane.


Yes, the language comment was hyperbolic.

Importing a library someone else wrote is basically flipping a switch and getting software behavior you didn't write.

Frankly I don't see a difference in creative terms between writing an app that does <thing> that relies heavily on importing already-written libraries for a lot of the heavy lifting, and describing what you have in mind for <thing> to an LLM in sufficient detail that it is able to create a working version of whatever it is.

Actually, I can see an argument that both of those are also potentially equal, in creative terms, to writing the whole thing from scratch. If the author's goal was to write beautiful software, that's one thing, but if the author's goal is to create <thing>? Then the existence and characteristics of <thing> are the measure of their creativity, not the method of construction.


The real question is what you yourself are adding to the creative process. Importing libraries into a moderately complex piece of software you wrote yourself is analogous to including genai-produced elements in a collage assembled by hand, with additional elements added (e.g. painted) on top also by hand. But just passing off the output of some genai system as your own work is like forking somebody else's library on Github and claiming to be the author of it.

> If the author's goal was to write beautiful software, that's one thing, but if the author's goal is to create <thing>? Then the existence and characteristics of <thing> is the measure of their creativity, not the method of construction.

What you are missing is that the nature of a piece of art (for a very loose definition of 'art') made by humans is defined as much by the process of creating it (and by developing your skills as an artist to the point where that act of creation is possible) as by whatever ideas you had about it before you started working on it. Vastly more so, generally, if you go back to the beginning of your journey as an artist.

If you just use genai, you are not taking that journey, and the product of the creative process is not a product of your creative process. Therefore, said product is not descended from your initial idea in the same way it would have been if you'd done the work yourself.


Not only that, they also have a fairly conservative approach to design that seems to keep a lot of the stupid bullshit out of their cars. I own multiple late model Japanese cars from different manufacturers and have had zero issues with them. The ADAS systems they do have, while arguably basic by 2025 standards, function flawlessly. All essential controls (including climate control) are physical.


Seems like generally it ended up being a surveillance play, in practice if not original intent. For example, Dog coin has been reported to be passing data taken from other agencies directly to ICE^[1] for law enforcement applications, and there was that other matter of logins apparently from Russia using accounts the Dog coin personnel demanded agencies create on their internal systems with (auditable) logging disabled^[2]. And probably more that I'm forgetting.

One does wonder whether this was all part of Musk's vision, or more thanks to the scum he hired to staff Dog coin and/or other lawless opportunists in the Trump administration.

[1] https://www.washingtonpost.com/immigration/2025/04/16/medica...

[2] https://www.reuters.com/technology/cybersecurity/whistleblow...


The idea that Musk's intent was to gut all of the agencies that were in a position to regulate any of his companies does seem to suggest that DOGE was an outstanding success.


Good point!


I see your refusal to acquiesce to Musk's appropriation of an innocent meme, and raise you a, "Keep calling it 'doge', but pronounce it phonetically to piss him off."


I can respect that but I honestly have no concept of how it's "supposed" to be pronounced. At least with Dog coin I know where I stand.


I wouldn't read too much into the timelines, as it seems that OpenAI simply broke an embargo that the other players were up to that point respecting: https://arstechnica.com/ai/2025/07/openai-jumps-gun-on-inter...

Very in character for them!


Sounds like no one requested that OpenAI wait a week:

>We weren't in touch with IMO. I spoke with one organizer before the post to let him know. He requested we wait until after the closing ceremony ends to respect the kids, and we did.

https://x.com/polynoamial/status/1947024171860476264?s=46

https://x.com/polynoamial/status/1947398531259523481?s=46

(I work at OpenAI, but was not part of this work)


https://x.com/Mihonarium/status/1946880931723194389

OpenAI jumped the gun before *the closing party* - didn't even let the kids celebrate their winning properly before stealing the spotlight

Also in the article

> In response to the controversy, OpenAI research scientist Noam Brown posted on X, "We weren't in touch with IMO. I spoke with one organizer before the post to let him know. He requested we wait until after the closing ceremony ends to respect the kids, and we did."

> However, an IMO coordinator told X user Mikhail Samin that OpenAI actually announced before the closing ceremony, contradicting Brown's claim. The coordinator called OpenAI's actions "rude and inappropriate," noting that OpenAI "wasn't one of the AI companies that cooperated with the IMO on testing their models."


Hard to make such a request when they simply weren't participating officially or communicating through proper channels.


