
claude-instant-v1 is one of the "best kept secrets".

It is comparable in quality to gpt-3.5-turbo, while being four times faster (!) and at half the price (!).

We just released a minimal Python library, PyLLMs [1], to simplify using various LLMs (OpenAI, Anthropic, AI21..) and as part of that we designed an LLM benchmark. All open source.

[1] https://github.com/kagisearch/pyllms/tree/main#benchmarks
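For context, a minimal sketch of the usage pattern from the PyLLMs README; the `llms.init`/`complete` names come from the project docs and may change, and it needs an `ANTHROPIC_API_KEY` in the environment to actually run:

```python
import os

def demo():
    # pip install pyllms; the package is imported as `llms`
    import llms
    # Pick the model by name; the key is read from the environment.
    model = llms.init('claude-instant-v1')
    result = model.complete("Name three uses of an LLM benchmark.")
    print(result.text)

# Only call out to the API if a key is actually configured.
if os.environ.get("ANTHROPIC_API_KEY"):
    demo()
```
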



From my evals on nat.dev I found Claude Instant gives great responses and yes, on average 3-4x faster than 3.5. But one big difference at the moment is that anyone can sign up and get access to gpt-3.5-turbo right now, while Claude is still gated behind an invite/wait list. (I'm still waiting for access, for example.)


Exactly!

OpenAI are the only people who are shipping product like absolute maniacs. If I can’t use your fancy system, it doesn’t exist as far as I’m concerned. There’s a mountain of theoretical work, I don’t need a press release on top of it.

The game now is no longer theory, it’s shipping code. A 4-year plan means fuck all when OpenAI is not only ahead, but still running way faster.


I have Claude on Slack. It is far worse than ChatGPT. I'm presuming this is not the "claude-instant-v1" version, though it is fast. Any idea what version of Claude is in Slack?


I didn't know about Anthropic, so I just signed up for the waitlist, thanks for the heads-up!


Could PyLLMs connect to a locally running LLM (e.g., a LLaMA variant)?


Not yet but PRs welcome!



