That poll doesn't say they're happy with their health insurance, it says they're at least somewhat satisfied with the coverage. You'd see very different results if you asked about health insurance prices.
I believe most people would incorporate the value received into their satisfaction rating.
Most people also don't have any idea how much their insurance costs in total or how it compares to alternatives, so that would be a challenging question to write with any reasonable expectation of getting a coherent response.
> The problem is the cost.
You could say this about almost anything that isn't free, and could still say it about a number of things even if they were free.
I remember seeing browser extensions that would preload links to show thumbnails. I was thinking about zip bombing crawlers then realized the users of such extensions might receive zip bombs as well.
Yes. Because it is pitch black and therefore you can not determine it's color (plus, the fact that you haven't been eaten by one yet does not justify the conclusion that you won't be). It's also a play on Gardener's "unexpected hanging paradox".
I'm lucky to teach in a school with relatively small classes (~25) and I can manually grade and review everything. (The autograde scripts I wrote in bash help.)
But yes, it's happening. If I had 300 students, I couldn't do this. I'd need a bunch of TAs or some AI. Or just pure autograding, which I always hated since the person who did nothing gets the same F as the person who left out a semicolon.
And students are definitely using AI, evidenced by their stratospheric code improvements in the last 18 months.
I see an enormous threat here, I think you're just scratching the surface.
You have a customer facing LLM that has access to sensitive information.
You have an AI agent that can write and execute code.
Just image what you could do if you can bypass their safety mechanisms! Protecting LLMs from "social engineering" is going to be an important part of cybersecurity.
Having sensitive information is kind of inherent to the way the training slurps up all the data these companies can find. The people who run chatgpt don't want to dox people but also don't want to filter its inputs. They don't want it to tell you how to kill yourself painlessly but they want it to know what the symptoms of various overdoses are.
Yes, agents. But for that, I think that the usual approaches to censor LLMs are not going to cut it. It is like making a text box smaller on a web page as a way to protect against buffer overflows, it will be enough for honest users, but no one who knows anything about cybersecurity will consider it appropriate, it has to be validated on the back end.
In the same way a LLM shouldn't have access to resources that shouldn't be directly accessible to the user. If the agent works on the user's data on the user's behalf (ex: vibe coding), then I don't consider jailbreaking to be a big problem. It could help write malware or things like that, but then again, it is not as if script kiddies couldn't work without AI.
> If the agent works on the user's data on the user's behalf (ex: vibe coding), then I don't consider jailbreaking to be a big problem. It could help write malware or things like that, but then again, it is not as if script kiddies couldn't work without AI.
Tricking it into writing malware isn't the big problem that I see.
It's things like prompt injections from fetching external URLs, it's going to be a major route for RCE attacks.
There's plenty of things we should be doing to help mitigate these threats, but not all companies follow best practices when it comes to technology and security...
reply