
So I installed the litellm proxy, pointed it at the new Cerebras API with Qwen-235B, and hooked Aider up to litellm. It's not as good as Claude Code yet, but it's so much faster. I even tried feeding the leaked Claude Code prompt into Aider, but it doesn't do what I expect. Still worth trying, though I learned that Claude Code's prompt is very specific to Claude. I think this is very promising regardless! Aider basically spat out a bunch of text, installed some stuff, made some web calls, and exited. WAS REALLY FAST LOL.

You can repeat my experiment quickly with the following:

config.yaml for litellm:

```
model_list:
  - model_name: qwen3-235b
    litellm_params:
      model: cerebras/qwen-3-235b-a22b
      api_key: os.environ/CEREBRAS_API_KEY
      api_base: https://api.cerebras.ai/v1
```
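The os.environ/CEREBRAS_API_KEY line is litellm's syntax for reading the key from your environment, so export it before starting the proxy (the value here is a placeholder for your own key):

```
export CEREBRAS_API_KEY=your-key-here
```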

Run litellm with (you may need to install litellm[proxy] first):

```
litellm --config config.yaml --port 4000 --debug
```
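Before launching Aider, it's worth a quick sanity check that the proxy routes requests correctly. litellm serves an OpenAI-compatible endpoint, so something like this should return a completion (no auth header needed unless you configured a master key):

```
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen3-235b", "messages": [{"role": "user", "content": "say hi"}]}'
```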

Start aider with:

```
aider --model cerebras/qwen-3-235b-a22b --openai-api-base http://localhost:4000 \
  --openai-api-key fake-key --no-show-model-warnings --auto-commits \
  --system-file ./prompt.txt --yes
```
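One thing to watch: for the proxy to route a request, the model Aider asks for has to match a model_name in config.yaml. If the cerebras/ name isn't accepted through the proxy, the usual pattern for an OpenAI-compatible endpoint is an openai/ prefix plus the name from the config (a sketch of that variant, not what I ran):

```
aider --model openai/qwen3-235b --openai-api-base http://localhost:4000 \
  --openai-api-key fake-key --no-show-model-warnings --auto-commits \
  --system-file ./prompt.txt --yes
```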

Install whatever you need with pip etc. prompt.txt contains the leaked Claude Code prompt, which you can find yourself on the internet.



Thanks for the report. Can this be hooked up to Claude Code via a proxy?


You can copy-paste my entire previous comment into Claude Code along with your question and ask for suggestions. I'm sure it could create an MCP server for you.
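For what it's worth, there may be a more direct route than an MCP server: recent litellm proxy versions expose an Anthropic-compatible /v1/messages route, and Claude Code respects ANTHROPIC_BASE_URL, so something like this might work (untested sketch; whether it does depends on your litellm and Claude Code versions):

```
export ANTHROPIC_BASE_URL=http://localhost:4000
export ANTHROPIC_API_KEY=fake-key
claude
```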





