Looking at their docs, it seems that with Browserbase you'd still have to deploy your Playwright script as a long-running job and manage the infra around it yourself.
Our approach is a bit different: with Finic you just write the script, and we handle the entire job deployment and scaling on our end.
Looking at a performance recording in Chrome, it's not cobe.
Cobe does not seem to trigger the huge time spent in Layerize and Recalculate Style, which are the main areas where the page spends its time for me.
Curiously, it's not as bad on my corporate Windows laptop, which has worse specs but was outputting to a 30fps-locked display (my personal laptop was rendering to a 165Hz screen...)
Awesome! The problem with extracting a schema automatically is that you won't know what comes out of it upfront, and it could change on every run. What I'm trying to do is enable scraping webpages in a structured (and type-safe!) manner.
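To illustrate the schema-first idea (this is just a sketch, not any library's actual API): the caller declares the output type upfront, so every run either yields that exact shape or fails loudly. The `Product` dataclass and the `scraped` dict below are made-up examples standing in for whatever a selector pass would return.

```python
# Sketch of schema-first scraping: declare the output type upfront,
# then coerce raw scraped strings into it, failing loudly on mismatch.
# "Product" and the "scraped" dict are hypothetical examples.
from dataclasses import dataclass, fields

@dataclass
class Product:
    name: str
    price: float

def coerce(schema, raw: dict):
    """Validate and coerce a raw scraped dict into the declared dataclass."""
    kwargs = {}
    for f in fields(schema):
        if f.name not in raw:
            raise ValueError(f"missing field: {f.name}")
        kwargs[f.name] = f.type(raw[f.name])  # e.g. float("19.99") -> 19.99
    return schema(**kwargs)

# Pretend this dict came out of a headless-browser selector pass:
scraped = {"name": "Widget", "price": "19.99"}
product = coerce(Product, scraped)
print(product)  # Product(name='Widget', price=19.99)
```

The point is that downstream code can rely on `product.price` being a `float` on every run, instead of discovering at runtime that the auto-extracted schema drifted.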
In my experience, Anthropic's models are more steerable (they require less prompting) than OpenAI's. For example, in code generation I'd tell GPT-4 not to include any comments, yet sometimes it would just ignore this. I have not experienced this with Opus yet.
Also, curious why your unstructured idea did not pan out?