yup, we've been using this approach with our product to make composing different integrations easier for the LLM and also give it the flexibility of code. The main difference is we use QuickJS instead of V8 isolates. Seeing a TS interface instead of an ugly JSON schema and simply writing code is far simpler for the LLM
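To make that concrete, here's a rough sketch of the same tool surfaced both ways (all names here are hypothetical, just for illustration):

```typescript
// As JSON schema, the shape most tool-calling APIs expect:
const searchOrdersTool = {
  name: 'searchOrders',
  parameters: {
    type: 'object',
    properties: {
      customerId: { type: 'string' },
      status: { type: 'string', enum: ['open', 'shipped'] },
    },
    required: ['customerId'],
  },
};

// As a TS interface the model writes ordinary code against inside the sandbox:
interface Order {
  id: string;
  status: 'open' | 'shipped';
}

interface Integrations {
  searchOrders(args: { customerId: string; status?: 'open' | 'shipped' }): Promise<Order[]>;
}

// The LLM then just emits code like:
//   const open = await integrations.searchOrders({ customerId: 'c_123', status: 'open' });
```

The interface version also composes better: the model can chain calls, map over results, and handle errors in one script instead of burning a round trip per tool call.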
Hey HN, Anders and Tom from Magnitude (YC S25) here. On our last Show HN post about our open-source browser agent, someone left a comment - "there are multiple similar projects like this posted here daily, and this one likely isn't the best". So we asked ourselves: are they right? We decided to run Magnitude on WebVoyager (a well-known benchmark for browser agents) to find out. We scored 94%, beating all other browser agents and making Magnitude state-of-the-art.
The original WebVoyager benchmark was meant to demonstrate a new technique for interacting with the browser by annotating the DOM. Since then, vision models have come a long way in terms of accuracy and visual understanding. Our pure-vision approach with our framework and today's models surpasses the hybrid DOM strategies used by the original WebVoyager paper and other agents like browser-use.
So why does pure-vision beat hybrid DOM approaches?
- Generalizes far better - handles canvas elements, iframes, drag-and-drop, precise text selection, and many other scenarios elegantly, where hybrid DOM approaches struggle and need per-case hacks to work
- Easier for the LLM - we think LLM performance is roughly proportional to prompt clarity. A crowded screenshot with loads of colored boxes plus a long list of element labels to pick from is a much harder prompt than a clean screenshot and the question "where do you want to click?" (see the sketch after this list)
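Roughly, the two prompt styles look like this (the element labels and wording are made up for illustration):

```typescript
// Set-of-marks / hybrid DOM: annotated screenshot + element list, pick an index.
const hybridPrompt = `The screenshot has numbered boxes drawn over 143 elements.
[12] <button> "Submit"  [13] <a> "Privacy policy"  [14] <div> "cart-icon"  ...
Which element number do you want to interact with?`;

// Pure vision: clean screenshot, answer in pixel coordinates.
const visionPrompt = `Here is a screenshot of the page.
Where do you want to click? Respond with {"x": number, "y": number}.`;
```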
We believe another reason for our success is that we can still hook into the browser as needed. We can use browser-native actions like tab switching, can look at network traffic to know when a page is ready, or use the DOM for other purposes like data extraction. Computer use agents like Operator or Claude Computer Use on the other hand are limited to generic mouse and keyboard controls.
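As a rough sketch of the kinds of hooks we mean (using Playwright for illustration - this is not our actual internals):

```typescript
import { chromium } from 'playwright';

const browser = await chromium.launch();
const page = await browser.newPage();

await page.goto('https://example.com');
await page.waitForLoadState('networkidle'); // network traffic as a readiness signal

// The DOM is still available for extraction, even though interaction stays vision-based:
const headings = await page.$$eval('h2', (els) => els.map((el) => el.textContent));

// Browser-native actions like opening/switching tabs remain available too:
const secondTab = await page.context().newPage();
```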
It's worth mentioning that WebVoyager is a strange and flawed benchmark. It contains many tasks that depend on the current date (and need their dates updated), tasks that depend on the time of day, and some tasks that are impossible or too ambiguous to properly evaluate. In the repo we detailed exactly the patches we made to the original WebVoyager benchmark such that each task is at least theoretically possible.
Why does this all matter? People are trying to adopt agents for real use cases, but they often fail to make it to production. We want to enable developers to build with production-ready browser agents - which is why it's important to get the fundamental interaction paradigm right. We think this benchmark is a step in the right direction, showing that pure-vision has best-in-class performance in the browser domain. Curious to hear what others think about this, would love to get your feedback!
So there's a very big difference between the sort of vision approach browser-use takes and what we do
browser-use is still strongly coupled to the DOM for interaction because of the set-of-marks approach it uses (for context - those little rainbow boxes you see around the elements). This makes it very difficult to get it to reliably do interactions beyond straightforward click/type - drag and drop, interacting with canvas, etc.
Since we interact based purely on what we see on the screen, using pixel coordinates, those sorts of interactions are much more natural for us and perform far more reliably. If you don't believe me, I encourage you to try getting both Magnitude and browser-use to drag and drop cards on a Kanban board :)
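For reference, a pure-coordinate drag needs no selectors at all. A minimal sketch with Playwright's mouse API (the x/y values are made up here; in practice they'd come from the vision model):

```typescript
import { chromium } from 'playwright';

const page = await (await chromium.launch()).newPage();
await page.goto('https://example.com/board'); // hypothetical Kanban board

await page.mouse.move(320, 540);                // hover the card
await page.mouse.down();
await page.mouse.move(760, 540, { steps: 20 }); // glide across columns
await page.mouse.up();                          // drop in the target column
```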
In our experience, DOM-based interaction is more repeatable and performant than vision/x,y-based interaction, but each has its tradeoffs - as you said, click-and-drag is harder when the source and target aren't classic DOM elements (e.g. canvas). We'll likely add x,y-based interaction as a fallback method at some point.
Hey, curious about your use cases for a Chrome extension - care to share more?
To answer your question - BAML is a DSL that helps define prompts, organize context, and get better structured-output performance from the LLM. In theory you should be able to map similar logic onto other clients.
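A hypothetical sketch of what "mapping similar logic onto another client" could look like in plain TypeScript (using zod for the schema - none of this is BAML's actual API): declare the output shape once, render it into the prompt, and validate the reply against it.

```typescript
import { z } from 'zod';

const ClickTarget = z.object({
  x: z.number(),
  y: z.number(),
  reason: z.string(),
});

const prompt = `Where should we click on this screenshot?
Respond with JSON matching { "x": number, "y": number, "reason": string }.`;

// ...send `prompt` plus the screenshot to your model of choice...
const modelReply = '{"x": 412, "y": 683, "reason": "submit button"}'; // placeholder reply
const target = ClickTarget.parse(JSON.parse(modelReply)); // throws if the shape drifts
```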
A Chrome extension has the advantage of user-friendly distribution, so non-tech-savvy users can also do automation. I'm also looking for automation on mobile devices (app webviews or mobile Safari), and because of platform limitations it doesn't seem like this can be extended to mobile anytime soon.
In 2018, I helped NFL front offices and the ticket brokers who bought wholesale in blocks of 10k manage event tickets - hundreds of thousands of them - across secondary marketplaces like StubHub and SeatGeek. Their primary marketplace, Ticketmaster, was very slow to develop an API that would import the data (barcodes) into the secondary markets and delist a ticket the moment it entered a secondary-market checkout or sold - the kind of delisting that prevents millions of dollars' worth of double-sold tickets. The problem was that Ticketmaster, for legal reasons, couldn't give us preferential access, so I was constantly updating our tooling whenever they changed their anti-bot protections. I created a side-loaded Chrome extension as a backup in case they blocked the automated browsers on a Friday night; it did everything the Puppeteer agents were doing and bought me time. It was a perfect stopgap. Users would press a button and watch it automatically navigate to pages and handle their workflow in their own browser, moving lightning fast.
You can do most anything you can do in Playwright - navigate, open new tabs, scroll - with the added benefit of keeping the human in the loop. Conceptually they are exactly the same; I can go into that more if you want.

Most of the limitations are security features, and for automated workflows those security features should be heeded for good reason. For example, the ChatGPT console requires isTrusted to be true and rejects synthetic events, so it is impossible to automate without workarounds, which they will likely close. That is the biggest limitation. On the other hand, there are 3 billion Chrome users, and they can download an extension with a single click. Bypassing security features that require a human button press or mouse click - to go fullscreen, play sound, or transfer money on a bank website - shouldn't be allowed. If the use case requires that, use Playwright or a BrowserWindow in an Electron application.

A Chrome extension with a side panel can collect every visible element using stacking context, to limit the amount of data processed by an LLM. It can capture all the inner text of a page. It can read every single fetch and XMLHttpRequest, which is a very good way to get data without loading tons of markup. It can make fetch and XMLHttpRequests in the MAIN world so they automatically carry all the cookies. And it can use huggingface/transformers.js to transcribe audio or video to text with OpenAI Whisper, or perform OCR (image to text) on WebGPU, if available.
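For the fetch/XMLHttpRequest point, here's an illustrative sketch of a MAIN-world tap (it assumes a manifest v3 content script declared with "world": "MAIN"; this is not drop-in extension code):

```typescript
const originalFetch = window.fetch.bind(window);

window.fetch = async (...args: Parameters<typeof fetch>) => {
  const response = await originalFetch(...args);
  response
    .clone() // clone so the page still gets an unread body
    .text()
    .then((body) => {
      // Relay to the isolated-world content script, which can forward
      // it on to the extension's service worker / side panel.
      window.postMessage({ source: 'fetch-tap', url: String(args[0]), body }, '*');
    })
    .catch(() => { /* ignore non-text bodies in this sketch */ });
  return response;
};
```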
I can systematically analyze, poke, and prod thousands of websites running Playwright in the cloud to discover all their capabilities and automatically create workflows with xstate v5, which are sent to the Chrome extension as JSON. For example, I can automatically navigate to a website, find all the inputs, try several ways to inject text, and use image-to-text to check whether the text actually landed in the field before adding it to the list of capabilities. So if a user is on the page, I can automate the workflow or notify the user when they need to take a step.
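As a sketch of what one of those discovered workflows might look like as an xstate v5 machine (the states and events are invented for illustration):

```typescript
import { createActor, createMachine } from 'xstate';

const fillFormMachine = createMachine({
  id: 'fillForm',
  initial: 'navigate',
  states: {
    navigate:   { on: { PAGE_READY: 'findInputs' } },
    findInputs: { on: { INPUTS_FOUND: 'injectText', NONE_FOUND: 'notifyUser' } },
    injectText: { on: { TEXT_VERIFIED: 'done', VERIFY_FAILED: 'notifyUser' } },
    notifyUser: { on: { USER_COMPLETED_STEP: 'done' } }, // human in the loop
    done:       { type: 'final' },
  },
});

const actor = createActor(fillFormMachine);
actor.subscribe((snapshot) => console.log('workflow state:', snapshot.value));
actor.start();
actor.send({ type: 'PAGE_READY' });
```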
I think the best idea is to have curated workflows and curated data embeddings targeting focused industries. It can automate navigating the browser to MLS and zillow.com, collecting information, injecting it into Google Sheets or Office 365 Excel, exporting it, navigating to email, writing up the information, attaching the file, and sending it - all with the human in the loop. Moreover, if it does 95% of the work, I don't think humans will mind pressing a button or taking an action when prompted. The question is: will people prefer this over something fully automated running somewhere in the cloud? How do you feel about using a code assistant? Do you like being in the loop?
This is all experimental. The gif has a good example of a side panel automating stock option trading. I'm going to try injecting your code to see if I can start to develop systematic, generalized automation with it. [0] [1]
Both of them are "visually grounded" - meaning if you ask for the location of something in an image, they can output the exact x/y pixel coordinates. Not many models can do this, and especially not many that are also large enough to reason well through sequences of actions.
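If you want to try this yourself, here's a sketch of querying a grounded model through an OpenAI-compatible endpoint (the model name, env vars, and output format are assumptions, not a documented contract):

```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: process.env.VLM_BASE_URL, // e.g. a Qwen 2.5-VL deployment
  apiKey: process.env.VLM_API_KEY ?? '',
});

const screenshotB64 = '...'; // base64 PNG of the current page (placeholder)

const completion = await client.chat.completions.create({
  model: 'qwen2.5-vl-72b-instruct',
  messages: [{
    role: 'user',
    content: [
      { type: 'image_url', image_url: { url: `data:image/png;base64,${screenshotB64}` } },
      { type: 'text', text: 'Give the pixel coordinates of the "Submit" button as {"x": number, "y": number}.' },
    ],
  }],
});

console.log(completion.choices[0].message.content); // e.g. {"x": 412, "y": 683}
```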
We're currently optimizing for reliability and quality, which is why we suggest Claude - but it can get expensive in some cases. Using Qwen 2.5-VL-72B is significantly cheaper, though it may not always be as reliable.
Most of our usage right now is for running test cases, and people often seem to prefer Qwen for that use case, since test cases are typically clearer about how they should be executed.
Something that is top of mind for us is figuring out a good way to "cache" workflows once they've been taken. That way you can repeat automations either with no LLM or with a smaller/cheaper LLM. This would enable deterministic, repeatable flows that are also very affordable and fast. So even if each step on the first run is only 95% reliable, once it gets through, it could repeat with 100% reliability.
I am desperately waiting for someone to write exactly this! Use the LLM to write the repeatable, robust script. If the script fails, THEN fall back to an LLM to recover and fix the script.
Yes, I wish we could combine browser-use, Stagehand, director.ai, and Playwright. Even better if I could record my session - mouse movements, clicks, DOM inspection, screen sharing - while talking through what I want to do. Then the LLM generates a scraper for each task and recovers if some scraping task breaks at some point.
Yeah, I think it's a little tricky to do this well and automatically, but it's essentially our goal - not necessarily literally writing a script, but storing the actions taken by the LLM and being able to repeat them, adapting only when needed
I think it depends a lot on how much you value your own time, since it's quite time-consuming to write and update Playwright scripts. It's gonna save you developer hours to write automations in natural language rather than messing around with and fixing selectors. It can also handle tasks that Playwright couldn't do at all - like extracting structured data from a messy/ambiguous DOM and adapting automatically to changing situations.
You can also use cheaper models depending on your needs, for example Qwen 2.5 VL 72B is pretty affordable and works pretty well for most situations.
But we can use an LLM to write that script and give that agent access to a browser to find DOM selectors etc. And then we have a stable script where, if needed, we can manually fix any LLM bugs just once…? I'm sure there are use cases with messy selectors as you say, but to me it feels like most cases are better covered by generating scripts.
Yeah, we've thought about this approach a lot - but the problem is that if your final program is a brittle script, you're going to need a way to fix it often, and then you're still depending on recurring use of LLMs/agents. So we think it's better for the program itself to be resilient to change, instead of you or your LLM assistant constantly having to make sure the program still works.
I wonder if a nice middle ground would be:
- recording the Playwright script behind the scenes and storing it
- trying that as a “happy path” first attempt to see if it passes
- if it doesn’t pass, rebuilding it with the AI and vision models
Best of both worlds. The Playwright script is more of a cache than a test (sketched below)
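A minimal sketch of that pattern - all names are hypothetical, and the two inner functions are stubs standing in for a real recorder and a real vision agent:

```typescript
type RecordedTrace = { steps: string[] }; // serialized Playwright actions

const cache = new Map<string, RecordedTrace>();

// Stub: replay a recorded trace; assume it throws on selector timeout etc.
async function replayTrace(trace: RecordedTrace): Promise<string> {
  throw new Error('selector timeout');
}

// Stub: drive the browser with the vision agent, recording each action taken.
async function runWithVisionAgent(task: string): Promise<{ result: string; trace: RecordedTrace }> {
  return { result: 'done', trace: { steps: [] } };
}

async function runTask(task: string): Promise<string> {
  const hit = cache.get(task);
  if (hit) {
    try {
      return await replayTrace(hit); // happy path: fast, cheap, deterministic
    } catch {
      // fell off the recorded trajectory -> rebuild below
    }
  }
  const { result, trace } = await runWithVisionAgent(task); // slow path
  cache.set(task, trace); // refresh the cache with the new trace
  return result;
}
```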
I think the difficulties with this approach are:
(1) You want a good "lookup" mechanism - given a task, how do you know which cache entry to load? You can do a simple string lookup on the task content, but when the task includes parameters or data, or is part of a bigger workflow, it gets trickier.
(2) You need a good way to detect when to adapt / fall back to the LLM. When the cache is only a Playwright script, it can be difficult to know when it has fallen off the existing trajectory. You can check for selector timeouts and the like, but you might be missing a lot of failures that never surface as errors (false negatives).
We do believe in a hybrid approach where a fast/deterministic representation is saved - but we think there's a more seamless way, where the framework itself is high-level and manages these details by caching the underlying actions it runs (see the sketch below).
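To illustrate both difficulties with hypothetical code: for (1), key the cache on the task template rather than the rendered task string; for (2), verify the page state after each replayed step instead of only catching selector timeouts:

```typescript
import { createHash } from 'node:crypto';

type Step = { action: 'click' | 'type'; x: number; y: number; text?: string };

// (1) "add eggs to cart" and "add milk to cart" share one entry if the key
// is the template; parameters get substituted into the steps at replay time.
const keyFor = (taskTemplate: string) =>
  createHash('sha256').update(taskTemplate.trim().toLowerCase()).digest('hex');

// (2) After each replayed step, run a cheap check (screenshot diff, DOM
// assertion, etc.) and bail out to the LLM as soon as the trajectory diverges.
async function replay(
  steps: Step[],
  execute: (s: Step) => Promise<void>,
  stillOnTrack: () => Promise<boolean>,
): Promise<void> {
  for (const step of steps) {
    await execute(step);
    if (!(await stillOnTrack())) {
      throw new Error('trajectory diverged'); // hand control back to the LLM here
    }
  }
}
```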
I think you are overstating. Just use Playwright codegen. No need for manual test writing, or at least 90% can get generated. Still 10x faster and cheaper.
I do, cheap praise doesn't benefit the community and it might be astroturf. Constructive criticism would be more valuable - there are multiple similar projects like this posted here daily, and this one likely isn't the best.
For context, we have no affiliation with KeysToHeaven (though we appreciate his comment). We do think our vision-first approach gives us a significant edge over other browser agents, though we probably could’ve made that aspect clearer in the title