If you don't care about running it locally, just spend your money on an online service instead. That's perfectly fine.
But you can already run it locally. Is it cheap? No. Are we still at the beginning? Yes. We're still in a phase where this is a pure luxury, although getting into it by buying a 4090 is still relatively cheap, in my opinion.
Why run it locally, you ask? Personally, I think running AnythingLLM and similar frameworks on your own local data is interesting.
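To make "your own local data" concrete, here's a minimal sketch of the idea: feed a local file to a locally served model and ask a question about it. It assumes a server like Ollama is running on its default port (11434) with a model already pulled; the file name, model tag, and prompt are just placeholders. Frameworks like AnythingLLM essentially wrap this pattern in a UI with document indexing on top.

```python
# Minimal sketch: asking a locally served model about a local file.
# Assumes Ollama (or a compatible server) is running on localhost:11434
# and a model tagged "llama3" has been pulled -- adjust for your setup.
import requests

with open("notes.txt", encoding="utf-8") as f:
    context = f.read()

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": f"Summarize the following notes:\n\n{context}",
        "stream": False,  # return a single JSON object instead of a stream
    },
    timeout=120,
)
print(response.json()["response"])
```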
But I'm pretty sure that in a few years you'll be able to buy dedicated ML chips that run models locally, fast and cheap.
By the way, at least I don't know of any online service that is uncensored, offers a wide choice of LoRAs, and is cost-effective. For just playing around with LLMs, sure, there are plenty of services.