It doesn't take any coding ability to configure it. Out of the box you can search for any setting, and even if you edit the JSON config directly you get autocomplete.
Because their blog makes a big song and dance about the items I quoted above as the answer to their self-imposed question "why the heck do we need a new browser?".
They are saying "we need a new browser" because "Zero telemetry, privacy‑first access has been lost".
However, the reality is that zero telemetry has NOT been lost. It is there in Firefox Focus, Mullvad Browser, and doubtless others too.
Therefore, I am quite rightly returning to their question "why the heck do we need a new browser?".
Because "there are no zero telemetry browsers" is factually incorrect as an answer to their headline question.
At the very least they should address their competitor products and why they consider themselves different. Don't just bury your head in the sand and pretend your product has no competitors.
But really they should have picked a better headline topic to base their blog on.
> giving the LLM more capability to generate user interfaces
This is not dissimilar to the argument that "MCP need not exist, just tell the LLM to run commands and curl". Well, LLMs can do those things, and generate user interfaces too. It's just that they don't do it reliably (maybe never will, depending on how you define "reliable").
I guess as engineers we can do some work and create stopgap solutions, or we can all sit and wait for someone else (who? when?) to make AGIs where everything just magically works, reliably.
I use Perplexity a lot, pretty much exclusively with "deep research" on. Is this on the same level? Because Perplexity often takes more than a minute, and this is only 20 secs.
For most queries it is around the same level. Time spent isn't always the best indicator of quality, particularly when the search engine used returns little noise.
I don't think the reasoning level is what makes it for me. I am treating this as a search engine. I want the thing to fetch/read from the index as many pages as possible, or make as many agentic tool calls as needed. That takes time.
Also, he always sounds like "I know this will not work". Dude, are you a researcher? You're supposed to experiment and follow the results. That's what separates you from oracles and freaking philosophers or whatever.
If academia is in question, then so are their titles.
When I see "PhD", I read it as "we decided he was at least good enough for the cause", or "he fulfilled the criteria".
He's speaking to the entire feedforward Transformer-based paradigm. He sees little point in continuing to try to squeeze more blood out of that stone, and would rather move on to more appropriate ways of modelling ontologies per se, rather than the crude-for-what-we-use-them-for embedding-based methods that are popular today.
I really resonate with his view due to my background in physics and information theory. I for one welcome his new experimentation in other realms while so many still hack away at their LLMs in pursuit of SOTA benchmarks.
If the LLM hype doesn't cool down fast, we're probably looking at another AI winter. It appears to me that he's just trying to ensure he'll have funding for chasing the global maximum going forward.
> If the LLM hype doesn't cool down fast, we're probably looking at another AI winter.
Is the real bubble ignorance? Maybe you'll cool down, but will the rest of the world? There will just be more DeepSeeks and more advances until the US loses its standing.