
Any recommendations?



Yup, 3.1-70B-Instruct-lorablated is the one I currently recommend too for anti-rejection models — it seems roughly as anti-rejection as the original failspy "abliterated" model, but it works with 128k context since it's based on 3.1 instead of 3 (which only had 8k context). It's currently our second-most popular model on glhf.chat, behind Llama-3.1-405B-Instruct.


failspy’s or mlabonne’s models, or just look for any model with ‘abliterated’ in the title. E.g. try failspy/meta-llama-3-8b-instruct-abliterated-v3, though of course bigger models will probably be better.
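
Roughly what trying it looks like with Hugging Face transformers (a minimal sketch, not anything specific to this model — the repo id is the one named above, though the exact capitalization on the Hub may differ, and the generation arguments are just ordinary transformers usage):

    # Minimal sketch: load the suggested abliterated model and run one chat turn.
    import torch
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="failspy/meta-llama-3-8b-instruct-abliterated-v3",  # repo id from the comment above
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )

    messages = [{"role": "user", "content": "Hello!"}]
    out = generator(messages, max_new_tokens=128)
    print(out[0]["generated_text"])
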


No specific ones, but there are some abliteration LoRAs for Llama (8B and 70B, I think). Those should be good for what you want.
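
Applying one of those is just the usual peft adapter flow, something like this minimal sketch — note the adapter repo id here is a hypothetical placeholder, since no specific LoRA is named above:

    # Minimal sketch: layer an abliteration LoRA over a base Llama model with peft.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
    adapter_id = "some-org/llama-3-8b-abliteration-lora"  # hypothetical, substitute a real adapter

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    base = AutoModelForCausalLM.from_pretrained(
        base_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    model = PeftModel.from_pretrained(base, adapter_id)
    # Optionally fold the adapter weights into the base model for faster inference.
    model = model.merge_and_unload()
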




