Indeed, it escalated quite quickly: somewhat under the table in the first term, but the second term hadn't even started before it was all out in the open. There was some discontent (especially around the comic depicting it that the Washington Post refused to run, prompting their cartoonist to leave and publish it elsewhere), but nothing near the outrage there should be.
I think providing examples and sample code is better than tying your API to the AI SDK.
Because AI providers iterate on their APIs so quickly, many features arrive in the AI SDK weeks or months later (support for OpenAI computer use, for example, has been pending for ages).
I like the current API where you can wait for an event. Along the same lines, it would be great to have an API for streaming and receiving messages while everything else is left to the user, so they could use the AI SDK and stream the final response manually.
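To make the suggestion concrete, here is a minimal sketch of the API shape I mean. All names here (`MessageStream`, `onMessage`, `nextMessage`) are hypothetical, not any library's real API: the library only surfaces raw message events, and the user wires them into whatever SDK they prefer.

```typescript
type Message = { role: "user" | "assistant"; content: string };
type Listener = (msg: Message) => void;

class MessageStream {
  private listeners: Listener[] = [];

  // Subscribe to every message as it arrives (the streaming style).
  onMessage(fn: Listener): void {
    this.listeners.push(fn);
  }

  // Resolve once with the next message (the "wait for an event" style).
  nextMessage(): Promise<Message> {
    return new Promise((resolve) => {
      const once: Listener = (msg) => {
        // Remove this one-shot listener before resolving.
        this.listeners = this.listeners.filter((l) => l !== once);
        resolve(msg);
      };
      this.listeners.push(once);
    });
  }

  // Called internally when the underlying provider emits a message.
  emit(msg: Message): void {
    for (const fn of [...this.listeners]) fn(msg);
  }
}
```

With this shape, streaming the final response through the AI SDK (or anything else) stays entirely in the user's hands.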
I check multiple apps every day when ordering food or booking a taxi, looking for the cheapest price and availability. It's a pain, and I don't like the decision fatigue.
I thought, why not automate this and turn it into an app, so I built this MCP server.
I'm barely using vision (it's mostly a fallback); instead, I parse the accessibility tree into something an LLM can understand.
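The accessibility-tree flattening could look roughly like this. The `AXNode` shape is an assumption for illustration, not any platform's actual accessibility API:

```typescript
// Hypothetical node shape: real accessibility trees (AX on macOS,
// UIA on Windows, ARIA in browsers) differ, but all expose
// something like role + name + value + children.
interface AXNode {
  role: string;          // e.g. "button", "textfield"
  label?: string;        // accessible name
  value?: string;        // current value, if any
  children?: AXNode[];
}

// Flatten the tree into indented plain text an LLM can read.
function flattenTree(node: AXNode, depth = 0): string {
  const indent = "  ".repeat(depth);
  const parts = [node.role];
  if (node.label) parts.push(`"${node.label}"`);
  if (node.value) parts.push(`= ${node.value}`);
  const line = indent + parts.join(" ");
  const childLines = (node.children ?? []).map((c) =>
    flattenTree(c, depth + 1),
  );
  return [line, ...childLines].join("\n");
}
```

Text like this is far cheaper to feed to a model than screenshots, which is why vision only needs to be a fallback.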
As another commenter in this thread said, we are just a 'frontend wrapper' around other people's services. Therefore, it is not particularly difficult to add models that are already supported by other providers.
The benefit of using our wrapper is that you get a single API key and one bill for all your AI usage, and you don't need to hack together your own logic for routing requests between providers, handling failovers, tracking costs, or worrying about what happens when a provider goes down.
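For a sense of what the router saves you from writing yourself, here is a bare-bones failover sketch. The `Provider` interface and names are assumptions for illustration; a real gateway also handles cost tracking, rate limits, and health checks:

```typescript
interface Provider {
  name: string;
  call: (prompt: string) => Promise<string>;
}

// Try each provider in order; return the first successful response.
async function routeWithFailover(
  providers: Provider[],
  prompt: string,
): Promise<string> {
  const errors: string[] = [];
  for (const p of providers) {
    try {
      return await p.call(prompt); // first healthy provider wins
    } catch (err) {
      errors.push(`${p.name}: ${String(err)}`); // record and try the next one
    }
  }
  throw new Error(`all providers failed: ${errors.join("; ")}`);
}
```

Even this toy version shows why people outsource it: the moment you add per-provider pricing and outage detection, it stops being a weekend project.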
The market at the moment is hugely fragmented, with many unstable providers and constantly shifting prices. The benefit of a router is that you don't need to worry about any of that.
Scaling infrastructure to handle billions of tokens is no joke.
I believe they are approaching 1 trillion tokens per week.
Glama is much smaller; we only recently crossed 10bn tokens per day.
However, I have invested a lot more into the UX/UI of the chat itself; while OpenRouter is entirely focused on the API gateway (which is working for them), I am going for a hybrid approach.
The market is big enough for both projects to co-exist.