Hacker News | ushakov's comments

I do not understand what this actually is. Any difference between Browserbase and what you’re building?

Also, I'm curious why your unstructured idea didn't pan out.


Looking at their docs, it seems that with Browserbase you would still have to deploy your Playwright script to a long-running job and manage the infra around that yourself.

Our approach is a bit different: with Finic, you just write the script, and we handle the entire job deployment and scaling on our end.


I was talking to my friend today about dating apps being a soft-porn addiction.


The reason your website is lagging so much is the globe. I reported this a while ago: https://github.com/shuding/cobe/issues/78


Looking at performance recording in Chrome, it's not cobe.

Cobe does not seem to trigger the huge time spent in layerizing and style recalculations, which are the main areas where the page spends its time for me.

Curiously, it's not as bad on a corporate Windows laptop with worse specs, which was outputting to a 30fps-locked display (my personal laptop was rendering to a 165Hz screen...)


There’s also llm-scraper in TypeScript

https://github.com/mishushakov/llm-scraper


Something similar I worked on in the past https://github.com/lucgagan/auto-playwright/


Does it use ChatGPT every time you run the test or only when a test fails (to check if the selector has changed)?


Awesome! The problem with extracting the schema automatically is that you won't know what comes out of it upfront, and it could change on every run. What I'm trying to do is enable scraping webpages in a structured (and type-safe!) manner.
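To illustrate the difference, here's a minimal sketch of the structured approach in Python, using only the stdlib. The `Article` schema, field names, and the mock LLM output are all hypothetical; the point is that the caller declares the shape upfront and validates the model's extraction against it, instead of accepting whatever schema the model invents on each run.

```python
import json
from dataclasses import dataclass, fields

# Hypothetical target schema: extraction always yields these fields,
# so downstream code can rely on the shape instead of guessing.
@dataclass
class Article:
    title: str
    points: int
    url: str

def parse_extraction(raw: str) -> Article:
    """Validate a (mock) LLM JSON extraction against the schema before use."""
    data = json.loads(raw)
    allowed = {f.name for f in fields(Article)}
    unexpected = set(data) - allowed
    if unexpected:
        # Reject drift: the model returned fields the schema doesn't declare.
        raise ValueError(f"unexpected fields: {unexpected}")
    return Article(**data)

# Stand-in for a model response; a real run would get this from the LLM.
llm_output = '{"title": "Show HN: llm-scraper", "points": 120, "url": "https://example.com"}'
article = parse_extraction(llm_output)
print(article.title, article.points)
```

In practice a library like llm-scraper does the same thing with a richer schema language, but the contract is identical: the schema is the source of truth, not the model's output.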


Awesome! Keep in mind there's already scrapeghost and entities-extraction-web-scraper in Python.

I've tried using it with Groq's Llama 3 70B and it worked well :)
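Since Groq exposes an OpenAI-compatible endpoint, switching providers is mostly a base-URL change. A sketch of the request shape, with no network call; the base URL and model id reflect Groq's docs at the time but may change, so treat them as assumptions:

```python
import json

# Assumed values from Groq's OpenAI-compatible API (verify against their docs).
GROQ_BASE_URL = "https://api.groq.com/openai/v1"
MODEL = "llama3-70b-8192"

def build_chat_request(prompt: str) -> dict:
    """Build an OpenAI-style chat-completion payload. Any OpenAI-compatible
    backend (Groq, Ollama, a llama.cpp server) accepts this same shape."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0,  # deterministic-ish output suits extraction tasks
    }

payload = build_chat_request("Extract the page title as JSON.")
print(json.dumps(payload, indent=2))
```

The actual call would POST this to `GROQ_BASE_URL + "/chat/completions"` with an API key header.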


Definitely. Smaller models like Haiku are already pretty capable (and cheap!)


How does Haiku do with instruction following?


In my experience, Anthropic models are more steerable (they require less prompting) than OpenAI's. For example, in code generation I'd tell GPT-4 not to include any comments, yet sometimes it would just ignore this. I have not experienced this with Opus yet.


Thank you! I’m currently working on supporting local LLMs via llama.cpp, so cost won’t be an issue anymore.


Given that the Ollama API is OpenAI-compatible, that should be a drop-in, no?


Not really, I believe it’s missing function calling.

Edit: and grammar support as well
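For context, "function calling" means the backend must accept an extra `tools` field in the request and return structured `tool_calls`, which a plain chat-completion endpoint won't do. A sketch of the request side in the OpenAI style; the function name and its JSON-Schema parameters here are hypothetical:

```python
import json

# Hypothetical tool definition: the name, description, and parameter
# schema are illustrative, not from any real API.
tool = {
    "type": "function",
    "function": {
        "name": "extract_listing",
        "description": "Extract a product listing from page text",
        "parameters": {  # JSON Schema describing the function's arguments
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "price": {"type": "number"},
            },
            "required": ["title", "price"],
        },
    },
}

request = {
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "…page text…"}],
    "tools": [tool],  # the part a non-supporting backend would reject or ignore
}
print(json.dumps(request, indent=2))
```

Grammar support (e.g. llama.cpp's GBNF grammars, which constrain token sampling to a formal grammar) is a separate feature again, and at the time wasn't part of the OpenAI-compatible surface either.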


Ahh yeah gotcha


Correct. JS sites are supported out of the box since we're using Playwright!


Nice! Markdown output would be an awesome addition


Use something like Browserbase?

