By that logic, Theranos wasn't a scam either. The hardware existed and it delivered a result. It didn't do what was claimed, however, and the results were routinely faked.
It was claimed that the R1 would navigate an app like a person would, and that it wouldn't matter if the UI changed because the AI would figure it out the same way a person would. In reality, it follows a script and breaks when the UI changes.
It was claimed that it would be faster than ChatGPT. The majority of it is a ChatGPT wrapper.
A product exists, sure, but I'd be surprised if anyone feels it met expectations.
Can't find the interview now, but I remember watching it, and yes, they specifically said that because it is an AI rather than just an automation script, it is intelligent and will not be thrown off by site redesigns or CAPTCHAs (they have since said that it won't handle CAPTCHAs either).
Turns out that it is just an automation script and it cannot deal with site redesigns or CAPTCHAs.
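For the curious, here's a minimal sketch of what "just an automation script" means in practice, in Python with Playwright. The site and selectors are hypothetical; the point is that every step is a hard-coded assumption about today's UI, so a redesign breaks the flow instead of being "figured out":

```python
# Hypothetical sketch of a scripted flow. The URL and selectors are made up
# for illustration; nothing here is taken from Rabbit's actual code.
from playwright.sync_api import sync_playwright

def order_ride(pickup: str, dropoff: str) -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example-rideshare.test/book")  # hypothetical site
        # Each selector is a frozen assumption about the current UI.
        page.fill("#pickup-input", pickup)
        page.fill("#dropoff-input", dropoff)
        page.click("button#confirm-ride")
        # If a redesign renames #confirm-ride, this times out with an error;
        # there is no "intelligence" to find the moved button.
        browser.close()

order_ride("Home", "Airport")
```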
Edit: just found they have also made this claim, about a feature which simply doesn't exist at all:
> The R1 also has a dedicated training mode, which you can use to teach the device how to do something, and it will supposedly be able to repeat the action on its own going forward. Lyu gives an example: “You’ll be like, ‘Hey, first of all, go to a software called Photoshop. Open it. Grab your photos here. Make a lasso on the watermark and click click click click. This is how you remove watermark.’” It takes 30 seconds for Rabbit OS to process, Lyu says, and then it can automatically remove all your watermarks going forward.
Regarding its “learning”: it is still a model that needs data. The best you can expect is that it will take actual UI sessions (as in users interacting with the website) for specific tasks to build its scripts, and as with any current “large” model it’s not going to update in realtime based on user input alone.
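To make that concrete, here's a hedged sketch (the names and schema are mine, not theirs) of what "teaching" looks like under that constraint: the demonstration gets logged for some future offline fine-tuning run, and nothing about logging it changes the model's behavior on the next request:

```python
# Hypothetical demonstration-logging sketch. "Learning" with current large
# models means collecting examples and retraining offline in batches, not
# updating live from one user session.
import json
import time

def record_step(log: list, action: str, target: str, value: str = "") -> None:
    # Append one UI event (click, type, ...) to an in-memory session log.
    log.append({"t": time.time(), "action": action, "target": target, "value": value})

session: list = []
record_step(session, "open_app", "Photoshop")
record_step(session, "select_tool", "lasso")
record_step(session, "click", "watermark_region")

# The session is persisted as one more training example. The model only
# "learns" it once enough such examples are batched into a fine-tuning run.
with open("demonstrations.jsonl", "a") as f:
    f.write(json.dumps({"task": "remove_watermark", "steps": session}) + "\n")
```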
Sure, but that’s all in the future. All of the selling points of this device are in the future tense. The “model” does not seem to exist, but it’s “being worked on”. Their client app was taken apart and there is nothing interesting in it. Their servers were hacked into and made to run Doom, which is funny, and there is no trace of any AI model there.
One of their former engineers gave a statement that LAM is just a marketing term and nothing like that exists.
If all the selling points are in future tense at what point can we call it a scam?
Edit: also, the founder’s previous gig was a crypto scam that likewise promised AI on the blockchain.
There is evidently a LAM of sorts, given the nature of the queries it can answer. It is able to use agents - something like LangChain or ChatGPT tools - to perform tasks that may be dependent on other tasks.
The problem is that their LAM sucks, and is likely no more than a task-builder prompt on GPT (instead of a model specifically tuned for generating these tasks), using LangChain for resolution. They also have limited tooling, and some of it is already broken.
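If that guess is right, the entire "LAM" could be as little as the following sketch. Everything here is assumed for illustration (model name, planning prompt, stub tools) rather than pulled from their stack, but it shows how a planning prompt over a general chat model plus a small hand-written tool registry can pass for an "action model", and why it breaks as soon as the tooling drifts:

```python
# Hypothetical "task builder prompt" architecture: a general chat model is
# asked to emit a JSON plan, which is dispatched to a few hand-written tools.
import json
from openai import OpenAI

TOOLS = {
    "search_web": lambda q: f"results for {q!r}",    # stub tools; real ones
    "play_music": lambda track: f"playing {track}",  # would call vendor APIs
}

client = OpenAI()  # assumes an API key in the environment

def run_task(request: str) -> None:
    plan = client.chat.completions.create(
        model="gpt-4o",  # assumed; any capable chat model fits the sketch
        messages=[
            {"role": "system", "content":
             "Decompose the user's request into a bare JSON list of steps, "
             'each {"tool": <name>, "arg": <string>}. Available tools: '
             + ", ".join(TOOLS)},
            {"role": "user", "content": request},
        ],
    )
    # Fragile by construction: a non-JSON reply or a renamed tool
    # (i.e. any drift in the tooling) raises here.
    for step in json.loads(plan.choices[0].message.content):
        print(TOOLS[step["tool"]](step["arg"]))

run_task("find a good jazz playlist and play it")
```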
As for it being a scam: I definitely don’t see how you can offer lifetime ChatGPT access with no subscription. So unless they are going to bring in additional revenue somehow, it is effectively a Ponzi scheme.