Hacker News: rafterydj's comments

Do you have any concerns about production bugs or human-in-the-loop?

Human-in-the-loop is no longer needed, as all production bugs are fixed automatically by a bug-fix agent.

I believe existing laws carve out exceptions for medical fitness for certain positions for this very reason. If I may, stepping back for a second: the reason privacy laws exist is to protect people from bad behavior by employers, health insurers, and so on.

If we circumvent those privacy laws, whether through user license agreements or new technology, we remove those protections from ordinary citizens. Behavior we already decided as a society to ban can then be perpetrated again, perhaps under a fresh new name that dodges the old laws.

If I understand your comment, you are essentially asking why those old laws existed in the first place. I would suggest that racism and other systemic issues, along with differences in insurance premiums, are more than enough to justify privacy laws. Take a normal office job, as opposed to a physically demanding one: there is no reason at all that health conditions should matter there. The thought of not being hired because I have a young child, or a health condition, that would raise the group rate the insurer passes on to my employer (which it would be in the employer's interest to avoid) is a terrible one. And it did happen, and we banned that practice (or did our best to).

All this to say: I believe HIPAA helps people, and if ChatGPT is being used to partially or fully facilitate medical decision-making, they should be bound by strict laws preventing the release of that data, regardless of their existing user agreements.


> I believe existing laws carve out exceptions for medical fitness for certain positions for this very reason.

It’s not just medical; there’s a broad carve-out called “bona fide occupational qualifications”. If there’s a good reason for it, antidiscrimination hiring laws allow exceptions.


If it was really down to two engineers, the choice was almost certainly whatever one or both of them were already comfortable and familiar with, and nothing more. Six months is such a short timeframe for a long-term project like this that I imagine they could not spare much time to analyze alternatives.


If you're comfortable with Next.js, you should be even more comfortable with a plain Node.js SSR application. It's the same thing, but simpler. The HTML doesn't even have to be pretty. We're really just querying a DB, showing forms, and saving forms. Hell, use PHP if you want.


I had assumed that these people were not junior devs left unsupervised to handle important government work.


Could you elaborate a little bit more?


Plugging your AI newsletter at the end of your comment comes off as engagement farming, not a genuine attempt to stimulate conversation.


> it seems likely that it's going to be on the LLM user to know if the output is protected by copyright.

To me, this is what seems more insane! If you've never read Harry Potter, and you ask an LLM to write you a story about a wizard boy, and it outputs 80% Harry Potter - how would you even know?

> there will probably be some "reasonable belief" defence in eventual laws.

This is probably true, but it's irksome to shift all blame away from the LLM producers, who use copyrighted data to peddle copyrighted output. That simply turns the business into copyright-infringement-as-a-service: what incentive would they have to actually build those "secondary search thingies", and build them well?

> it definitely isn't as simple as "LLM wrote it so we can ignore copyright".

Agreed. The copyright system is getting stress tested. It will be interesting to see how our legal systems can adapt to this.


> how would you even know?

The obvious way is to search the training data for close matches. LLMs need to do that and warn you about it. Of course, the problem is they all trained on pirated books and then deleted them...

But either way, it's kind of a "your problem" thing. You can't really just say, "I invented this great tool and it sometimes lets me violate copyright without realising. You don't mind, do you, copyright holders?"
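To make "searching the training data for close matches" concrete, here is a hedged sketch of shingled n-gram overlap, the naive version of such a check. The function names are mine, and a real system would use scalable fingerprinting (e.g. MinHash) over an index, not a linear scan of one source:

```javascript
// Split text into overlapping word n-grams ("shingles").
function shingles(text, n = 8) {
  const words = text.toLowerCase().split(/\s+/).filter(Boolean);
  const out = new Set();
  for (let i = 0; i + n <= words.length; i++) {
    out.add(words.slice(i, i + n).join(" "));
  }
  return out;
}

// Fraction of the candidate's n-grams that also appear in the source.
// Near 1.0 means the output is close to verbatim copying.
function overlapRatio(candidate, source, n = 8) {
  const cand = shingles(candidate, n);
  if (cand.size === 0) return 0;
  const src = shingles(source, n);
  let hits = 0;
  for (const s of cand) if (src.has(s)) hits++;
  return hits / cand.size;
}
```

A provider could run generated text through a check like this against its corpus and surface a warning above some threshold; the hard part is doing it at corpus scale, not the arithmetic.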


This is a frustrating comment. Please elaborate with a real point! Merely highlighting that something used to be the case does not imply that it should always be the case, nor that it reflects the case now.


Wow, I might need to steal that idea for bypassing Jira discussions. I hate Jira with all my might.


Please do. It works!

I don't think most folks are interested in sitting through mindless meetings (like my JIRA example).

That JIRA example is particularly annoying. It's a product team (with an external consultant) using JIRA to track progress. But as with anything that has a reporting component, people are now optimizing toward what's reported, not toward real work. Success in a week (or sprint) is the number of tickets closed, not whether anything actually happened.

I declined several of these JIRA update meetings. At least two invites popped onto my calendar as agenda-less hour-long blocks.

Then I joined one, asked all the questions about its purpose, and suggested what I would do to help with less overall effort and fewer pesky meeting invites.


Why do you hate Jira?


This sounds very interesting to me. I'd read through that blog post, as I'm working on expanding my K8s skills; as you say, the knowledge is very scattered!


Respectfully, I disagree. Social media is dangerous and corrosive to a healthy mind, but AI is like a rapidly adaptive cancer if you don't recognize it for what it is.

Reading accounts from people who fell into LLM-induced psychosis feels like watching, in real time, a mythological demon whispering insanities and temptations directly into someone's ear, in a way that algorithmically recommended posts from other people could never match.

It will naturally mimic your biases. It will find the response most likely to keep you engaging with it. It will tell you everything you want to hear, even if it is not based in reality. In my mind, these are the same dangers as social media, but dialed all the way up to 11.

