
This is the second time I've seen someone confused by this. The safety filtering is a layer on top of the model's built-in ability to refuse, similar to OpenAI's "moderation" endpoint.


This is true, but Gemini is still by far the least restrictive SOTA LLM out there (once all safety settings are set to none).
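For context, "setting all safety settings to none" refers to the per-category block thresholds the Gemini API accepts alongside a request. A minimal sketch of that payload, assuming the four harm categories and `BLOCK_NONE` threshold documented for the Gemini API (exact enum names may differ across client-library versions):

```python
# Sketch: building a Gemini API safety_settings payload that disables
# the API-level blocking for every harm category. This is the request
# structure only; the model's own trained-in refusals still apply.
HARM_CATEGORIES = [
    "HARM_CATEGORY_HARASSMENT",
    "HARM_CATEGORY_HATE_SPEECH",
    "HARM_CATEGORY_SEXUALLY_EXPLICIT",
    "HARM_CATEGORY_DANGEROUS_CONTENT",
]

# One entry per category, each with the most permissive threshold.
safety_settings = [
    {"category": category, "threshold": "BLOCK_NONE"}
    for category in HARM_CATEGORIES
]
```

This list would be passed as the `safetySettings` field of a `generateContent` request body (or via the equivalent `safety_settings` argument in a client library). Even with everything at `BLOCK_NONE`, refusals baked into the model's training remain, which is the distinction the parent comment is drawing.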



