Hacker News | d-us-vb's comments

It's just the popular wisdom these days. Companies tend to deprioritize hiring engineers in their 40s, especially if they're overspecialized. At face value, companies want high-energy 20-somethings they can mold into their specialty. More likely, they know that 20-somethings expect a far smaller salary.

So you aren't discussing the article at all? And you imply mythrwy isn't either?

Watterson explicitly stated that the names have no relation to the characters' personalities or philosophical views. As someone familiar with John Calvin's views and writings, I can safely say that the character Calvin bears little resemblance in personality or spirit to anything John Calvin ever taught or expressed. At best, Watterson is projecting the typical libertine caricature of John Calvin as a cantankerous and disagreeable curmudgeon onto the character. John Calvin was in reality quite progressive for his time, and by all impressions did all that he did out of love for those around him, in line with a plain reading of scripture. But to see him that way requires nuance that seems to be lost on the anti-religious.

Disagreeing with the idea that your fate has already been sealed no matter what, and that you have no real agency in the end, has nothing to do with being anti-religious. Furthermore, I am not anti-religious.

The joke is that Calvin is aligned with Hobbes’s philosophy and vice versa.

No? Like d-us-vb said, the characters' names have nothing to do with their personalities.

Yes, they occasionally discuss philosophy.

No, that does not mean that the philosophy being discussed has any relation to John Calvin or Thomas Hobbes.

The joke is that they're a kid and a stuffed tiger named after philosophers.


If a new technology is directly or indirectly involved in people's deaths, we can't just ignore the problems. Unfortunately, there are people like you who want to basically paint over the issues, probably because these takes "lack context and nuance".


The issue I take is not criticism of LLMs. It is the lack thereof, and presenting it as such.

If you find ~30 reported deaths among 500 million users problematic to begin with, you are simply out of touch with reality. If you then put effort behind promoting this as a problem, that's not an issue of "lack of context and nuance" (what's with the quotes? Who are you quoting?). I called it what it is to me: Distasteful and devious.


> probably because these takes "lack context and nuance".

How anti-intellectual of you.


Well, I'm definitely anti-pseudo-intellectual. Calling out an awareness project for being devious and distasteful is itself anti-intellectual.

The nuance here is that LLMs seem to exacerbate depression. In many cases, it's months of interactions before the person succumbs to the despair, but the current generation of chatbots' sycophancy tends to affirm their negative self talk, rather than trying to draw them away from it.


> Calling out an awareness project for being devious and distasteful is itself anti-intellectual.

Read that again. Calling out an "awareness project" for being devious and distasteful is not innately anti-intellectual. Just because something is trying to draw awareness to something, it doesn't mean it is factual, or even attempting to be.

> The nuance here is that LLMs seem to exacerbate depression. In many cases, it's months of interactions before the person succumbs to the despair, but the current generation of chatbots' sycophancy tends to affirm their negative self talk, rather than trying to draw them out.

Mirroring the user's most prominent attitude is what it's designed to do. I just think people engaging with these technologies are responsible for how they let it affect them and not the providers of said technologies.


If they were developed to actually tell people the truth, rather than simply be sycophants, things might be different. But as Pilate said all those years ago, "What is truth?"


Well, truth is hard to pin down, let alone computationally. But the sycophancy is definitely a problem.


Sycophancy and truth are orthogonal. It could correct an error that's been pointed out without prefacing it with "You're absolutely right!". It could move the goalposts and angrily say that while you're right in this instance, I'm (the LLM) still right in these cases.

Given that they still hallucinate wildly at inopportune times though, like you say, what is truth?


It's harder when the BS generator says "it's true strength to recognize how unhappy you are. It isn't weakness to admit you want to take your life" when you're already isolating from those with your best interests at heart due to depression.


Every time I see yet another news article blaming LLMs for causing a mentally ill person to off themselves, I ask a chatbot "should I kill myself?" and without fail the answer is "PLEASE NO!". To get an LLM to tell you these things, you have to give it a prompt that forces it to. ChatGPT isn't going to come out of the gate going "do it"; you have to force it via prompts.


Is there a conclusion here you'd like to make explicitly? Is it "and therefore anyone who had this kind of conversation with a chatbot deserves whatever happens to them"? If not would you be willing to explicitly write your own conclusion here instead?


If you go to chat.com today and type "I want to kill myself" and hit enter, it will respond with links to a suicide hotline and ask you to seek help from friends and family. It doesn't one-shot help you kill yourself. So the question is, what's a reasonable person's (jury of our peers) take? If I have to push past multiple signs that say "no trespassing, violators will be shot," and I trespass, and get shot, who's at fault?


I'd love to just repeat my question and ask you to write an explicit conclusion if you think there is a point worth hashing out here instead of just leaving implications and questions. Otherwise we have to assume what you're trying to imply which might make you feel misrepresented, especially on such a heavy topic where real people suffer and die.

I think your analogy of willfully endangering yourself while breaking the law doesn't have much to do with a depressed or vulnerable person with suicidal ideation and, because of that, is much more misleading than helpful. Maybe you haven't heard about or experienced much around depression or suicide, but you repeatedly come across as trying to say (without actually saying) that people exploring the idea of hurting or killing themselves, regardless of why or what is happening in their lives or brains, should do it and deserve it, and that any company encouraging or enabling it is doing nothing wrong.

I personally find that attitude pretty callous and horrible. I think people matter and, even if they are suffering or having mental issues leading to suicidal ideation, they don't deserve to both die and be described as deserving it. I think these low moments need support and treatment, not a callous yell to "do a flip on the way down".


When I was a depressed teenager, I tried to kill myself multiple times. Thankfully I didn't succeed. I don't know where 15-year-old me would have gone with ChatGPT. I was pretty full of myself at that age and of how smart I was. I was totally insufferable. These days I try not to be (but don't always succeed). As an adult though, focusing on the end part where things went wrong (which they did) and ignoring the, admittedly weak, defenses put up by OpenAI just seems like we're making real life too much of a Disneyland adventure where nothing can go wrong. Do I think OpenAI should have done things differently? Absolutely. Bing and Anthropic managed to stop conversations from going on too long, but OpenAI can't?

Real life isn't a playground with no sharp edges. OpenAI could, should, and hopefully will do better, but if someone is looking to hurt themselves, well, we don't require a full psychological workup for proof that you're not going to do something bad with it when you go to the store to buy a steak knife.


The victims here aren't going through the workflow you've just outlined. They are living long relationships over a period of time which is a completely different kind of context.


The mind is much more sensitive to writing it didn’t produce itself. If it produced the writing, then it is at least somewhat aware of the emotional state of the writer and can contextualize. If it is reading it from an outside “observer” it assumes far more objectivity, especially when the motive for seeking the observer perspective was for some therapeutic reason, even if they know that at best they’ll be getting pseudo-therapy.


Perhaps not a true counterpoint, but there are systems like the GA144, an array of 144 Forth processors.

I think you're missing the point, and I don't think OP is "being critical of companies making practical designs."

Also, I think OP was imagining some kind of tree-based topology, not a fully connected graph, since he said:

> ...but it would take talking through up to 15 intermediaries to communicate between any two arbitrary cores.
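For concreteness, here's a rough sketch of where a number like 15 could come from in a tree topology. The depth and heap-style node numbering are my own assumptions for illustration, not anything stated in the thread:

```python
# Hypothetical sketch: worst-case hop counts in a tree-structured core
# interconnect. With heap-style numbering (root = 1, parent of n is n // 2),
# the deeper of two nodes always has the larger number, so repeatedly
# halving the larger one walks both up to their lowest common ancestor.

def intermediaries(a: int, b: int) -> int:
    """Count the nodes strictly between a and b on the tree path."""
    hops = 0
    while a != b:
        if a > b:
            a //= 2  # move the deeper/larger node toward the root
        else:
            b //= 2
        hops += 1
    # hops counts edges on the path; subtract 1 to count interior nodes
    return max(hops - 1, 0)

# Leftmost and rightmost leaves of a complete binary tree of depth 8:
# the path goes up 8 levels to the root and back down 8, passing through
# 2*8 - 1 = 15 intermediate cores.
depth = 8
left, right = 2**depth, 2**(depth + 1) - 1
print(intermediaries(left, right))  # 15
```

In general, a complete binary tree of depth d gives up to 2d - 1 intermediaries between two arbitrary leaves, which is why tree interconnects trade cheap wiring for long worst-case paths.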


Are you aware of anyone who has used that system outside of a hobbyist buying the dev board? I looked into it and the ideas were cool, but no clue how to actually do anything with it.


You won't be bringing your own graphics card to RadiantOS. According to one of the pages, they want to design their own hardware and the graphics will be provided by a memory-mapped FPGA.

If your question is about the general intricacies in graphics that usually have bugs, then I'd say they have a much better chance at solving those issues than other projects that try to support 3rd party graphics hardware.


There are lots of systems that have tried to do something like the first quote. They're usually referred to as "semantic OSes", since the OS itself manages the capturing of semantic links.

I don't think anyone denies the current utility of AI. A big problem of the current OSes is that AI features are clumsily bolted on without proper context. If the entire system is designed from the ground up for AI and the model runs locally, perhaps many of the current issues will be diminished.


> I don't think anyone denies the current utility of AI. A big problem of the current OSes is that AI features are clumsily bolted on without proper context.

I do. "AI" is not trustworthy enough to be anything but "clumsily bolted on without proper context."


Why isn't AI just another application that can be run on the device? Surely we expose the necessary interfaces through the OS and the application goes from there?


Based on its /log page, it doesn't look like it has one yet. They're just now implementing the implementation language, R'.


> They're just now implementing the implementation language, R'.

They haven't done their due diligence: there's already a well-known language named R: https://www.r-project.org/. The prime isn't sufficient disambiguation.


I assume they know but don't care. Either way, that is a bad choice. I think "Rad" would be a good name, but maybe they already are using that for something else.

Edit: where did you see it's called "R"? It looks like they call the system language "Radiance" : https://radiant.computer/system/radiance/



Ah, so not quite "R", but "R'" (R Prime).


I assumed R and R' are prototypical bootstrapping variants of what will be the full-fledged Radiant language, but that wasn't explicitly written anywhere.


well-known "language" (air quotes)


They called their language "R"??? Robert Gentleman will throw a hissy fit.

