Hacker News | calibas's comments

Sounds like it's about the precession of the equinoxes and the new "Age of Aquarius".

True words aren’t eloquent;

eloquent words aren’t true.

Tao Te Ching – Verse 81


That poll doesn't say they're happy with their health insurance; it says they're at least somewhat satisfied with the coverage. You'd see very different results if you asked about health insurance prices.

The problem is the cost.


I believe most people would incorporate the value received into their satisfaction rating.

Most people also don't have any idea how much their insurance costs in total or how it compares to alternatives, so that would be a challenging question to write with any reasonable expectation of getting a coherent response.

> The problem is the cost.

You could say this about almost anything that isn't free, and could still say it about a number of things even if they were free.


I assume they mean:

<script><a href="/honeypot">Click Here!</a></script>

It would fool the dumber web crawlers.
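
As a rough illustration of why that works: a naive crawler that regex-scans raw markup will pick up the honeypot link, while a crawler built on a real HTML parser treats <script> contents as raw text and never sees the hidden <a> tag. A minimal Python sketch (the page markup here is made up):

    import re
    from html.parser import HTMLParser

    page = ('<p>Real content <a href="/article">here</a>.</p>'
            '<script><a href="/honeypot">Click Here!</a></script>')

    # A "dumb" crawler: regex over raw markup picks up the honeypot link.
    print(re.findall(r'href="([^"]+)"', page))  # ['/article', '/honeypot']

    # A crawler using an actual HTML parser: <script> contents are raw
    # text, so the hidden <a> never fires a start-tag event.
    class LinkExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.links.append(dict(attrs).get("href"))

    extractor = LinkExtractor()
    extractor.feed(page)
    print(extractor.links)  # ['/article']

Browsers behave like the second case, so human visitors never see or follow the link either.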


I remember seeing browser extensions that would preload links to show thumbnails. I was thinking about zip bombing crawlers, then realized the users of such extensions might receive zip bombs as well.


Advent of Computer Science Advent Calendars, Day 2


Seems we’ve reached that point.


Essentially, repurposing fracking as a method for energy storage.


And generation


It is pitch black. You are likely to be eaten by a grue.


I wonder if grue was taken from Nelson Goodman's Fact, Fiction, and Forecast. https://en.wikipedia.org/wiki/New_riddle_of_induction


Nyet. Jack Vance created grues in one of the Dying Earth series books.


Yes. Because it is pitch black and therefore you cannot determine its color (plus, the fact that you haven't been eaten by one yet does not justify the conclusion that you won't be). It's also a play on Gardner's "unexpected hanging paradox".


Love it. I use a grue reference on 404s to my blog.

https://mordenstar.com/zork


If this predicament seems particularly cruel, consider whose fault it could be: not a torch or a match in your inventory.

MC Frontalot - It Is Pitch Dark

https://www.youtube.com/watch?v=4nigRT2KmCE

Featuring Steve Meretzky!

https://en.wikipedia.org/wiki/Steve_Meretzky


Soon we'll have a system where students use AI for homework and teachers use AI to grade it. I'm sure it's already happening.


I'm lucky to teach in a school with relatively small classes (~25) and I can manually grade and review everything. (The autograde scripts I wrote in bash help.)

But yes, it's happening. If I had 300 students, I couldn't do this. I'd need a bunch of TAs or some AI. Or just pure autograding, which I always hated since the person who did nothing gets the same F as the person who left out a semicolon.
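
Not the bash setup described above, but a minimal Python sketch of the partial-credit idea (the file name and test cases are invented): run each submission against a set of I/O test cases and award the fraction passed, so partially working code isn't scored the same as a blank file.

    import subprocess

    # Hypothetical test cases: (stdin, expected stdout) pairs for a
    # program that reads two integers and prints their sum.
    TESTS = [
        ("2 3\n", "5\n"),
        ("10 -4\n", "6\n"),
        ("0 0\n", "0\n"),
    ]

    def grade(submission: str) -> float:
        """Return the fraction of test cases the submission passes."""
        passed = 0
        for stdin, expected in TESTS:
            try:
                result = subprocess.run(
                    ["python3", submission],
                    input=stdin,
                    capture_output=True,
                    text=True,
                    timeout=5,
                )
            except subprocess.TimeoutExpired:
                continue  # a hung submission just fails this case
            if result.returncode == 0 and result.stdout == expected:
                passed += 1
        return passed / len(TESTS)

    print(grade("student_submission.py"))  # e.g. 0.67

A submission that doesn't run at all still scores zero here, which is exactly the "missing semicolon gets the same F" problem; that part really does seem to need manual or rubric-based review.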

And students are definitely using AI, as evidenced by their stratospheric code improvements in the last 18 months.


It is. It's called magicschool, and districts have already signed contracts.


I have seen this joke beaten to death so many times, in so many forms.


I see an enormous threat here; I think you're just scratching the surface.

You have a customer facing LLM that has access to sensitive information.

You have an AI agent that can write and execute code.

Just imagine what you could do if you could bypass their safety mechanisms! Protecting LLMs from "social engineering" is going to be an important part of cybersecurity.


Yes, that’s the point: you can’t protect against that, so you shouldn’t construct the “lethal trifecta”.

https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/


You actually can protect against it, by tracking context entering/leaving the LLM, as long as it's wrapped in an MCP gateway with a trifecta blocker.

We've implemented this in open.edison.watch
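
I haven't looked at how open.edison.watch implements it, but the general "track what entered the context, refuse the call that completes the trifecta" idea can be sketched in a few lines of Python (the tool names here are invented):

    # The three legs of the "lethal trifecta": private data, untrusted
    # content, and a channel that can send data out. The guard lets any
    # two coexist but refuses the call that would complete all three.
    PRIVATE_SOURCES = {"crm_lookup", "read_inbox"}
    UNTRUSTED_SOURCES = {"fetch_url", "read_inbox"}
    EXFIL_CAPABLE = {"fetch_url", "send_email", "http_post"}

    class TrifectaGuard:
        def __init__(self):
            self.saw_private = False
            self.saw_untrusted = False

        def record_tool_result(self, tool: str) -> None:
            if tool in PRIVATE_SOURCES:
                self.saw_private = True
            if tool in UNTRUSTED_SOURCES:
                self.saw_untrusted = True

        def allow(self, tool: str) -> bool:
            if tool in EXFIL_CAPABLE and self.saw_private and self.saw_untrusted:
                return False
            return True

    guard = TrifectaGuard()
    guard.record_tool_result("crm_lookup")   # private data enters the context
    guard.record_tool_result("fetch_url")    # untrusted web content enters too
    print(guard.allow("send_email"))         # False: would complete the trifecta

The point is that this check sits in the gateway in front of the model, since the model itself can't be trusted to enforce it.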


True, you have to add guardrails outside the LLM.

Very tricky, though. I’d be curious to hear your response to simonw’s opinion on this.


Sorry, I'm not familiar with this. Can you please link me?


> You have a customer facing LLM that has access to sensitive information.

Why? You should never have an LLM deployed with more access to information than the user that provides its inputs.


Having sensitive information is kind of inherent to the way the training slurps up all the data these companies can find. The people who run chatgpt don't want to dox people but also don't want to filter its inputs. They don't want it to tell you how to kill yourself painlessly but they want it to know what the symptoms of various overdoses are.


Yes, agents. But for that, I think the usual approaches to censoring LLMs are not going to cut it. It is like making a text box smaller on a web page as a way to protect against buffer overflows: it will be enough for honest users, but no one who knows anything about cybersecurity will consider it appropriate; the input has to be validated on the back end.

In the same way, an LLM shouldn't have access to resources that shouldn't be directly accessible to the user. If the agent works on the user's data on the user's behalf (ex: vibe coding), then I don't consider jailbreaking to be a big problem. It could help write malware or things like that, but then again, it is not as if script kiddies couldn't work without AI.
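
To make the "validate on the back end" analogy concrete, here is a tiny hypothetical sketch in Python (the records and permission model are made up): the authorization check lives in the tool handler itself, so no amount of prompt-level persuasion widens what the model can read.

    # Records and ownership rules invented for illustration only.
    RECORDS = {
        "order-1001": {"owner": "alice", "total": 42.50},
        "order-2002": {"owner": "bob", "total": 99.99},
    }

    def get_order(order_id: str, *, acting_user: str) -> dict:
        """Tool exposed to the agent. The ownership check runs here,
        on the back end, not in the system prompt."""
        record = RECORDS.get(order_id)
        if record is None or record["owner"] != acting_user:
            raise PermissionError("not found or not yours")
        return record

    print(get_order("order-1001", acting_user="alice"))   # ok
    # get_order("order-2002", acting_user="alice")        # raises PermissionError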


> If the agent works on the user's data on the user's behalf (ex: vibe coding), then I don't consider jailbreaking to be a big problem. It could help write malware or things like that, but then again, it is not as if script kiddies couldn't work without AI.

Tricking it into writing malware isn't the big problem that I see.

It's things like prompt injection from fetching external URLs; that's going to be a major route for RCE attacks.

https://blog.trailofbits.com/2025/10/22/prompt-injection-to-...

There are plenty of things we should be doing to help mitigate these threats, but not all companies follow best practices when it comes to technology and security...


> You have a customer facing LLM that has access to sensitive information…You have an AI agent that can write and execute code.

Don’t do that then?

Seems like a pretty easy fix to me.


It's a stochastic process. You cannot guarantee its behavior.

> customer facing LLM that has access to sensitive information.

This will leak the information eventually.


Same

    ! [remote rejected] feature/ui-improvements -> feature/ui-improvements (Internal Server Error)
Edit: One minute after posting this it's working again.

