They've had quite a few releases in the last year, so it's not dead, that's for sure, but it's unclear how many new customers they're able to sign up. And with IBM in charge, it's also unclear at what point they will lose interest.
Technomages. They were humans (and some others) bonded to Shadow-manipulated creatures that could affect spacetime. Technomages learned to "cast spells" in a private language they developed with their symbiote. These were essentially commands to "do the thing I'm talking about."
One of the challenges Technomages faced after acquiring their symbiote was gaining control over both the personal language and the impulses for destruction embedded by the Shadows (bias, anyone?).
So yes, learning the tricks of getting our AI agents to do the thing we actually meant, while aiming to overcome the bias built into the models when we can see it, does sound an awful lot like Technomages dealing with their symbiote.
Sincerely, I respect your response, given how arbitrary it seems in this form.
But... I'd like you to take a moment and think really hard about whether this is truly novel behavior for LLMs, or rather something that has always been part of the interplay between inter-agent communication and intra-agent thought :)
I don't use the Kagi Assistant yet, just the AI responses in search results. But regarding Perplexity, I'm a little bit disappointed.
I started using Perplexity about 1 to 1.5 years ago, when it was really good in terms of finding good results efficiently with an efficient interface, compared to ChatGPT and co. But nowadays I find the assistant responses not that good at all, with a lot of the provided links or suggested follow-up questions being of the same quality as Google's top SEO results or ads.
Despite having the paid plan for Perplexity, most of the time I try a request there and then still go to ChatGPT or Mistral to ask the question again.
For Kagi, when I use the in-search AI response, it is mostly good right away.
If you hit `a` on the search results page, it'll take you to a chat interface to better grok the AI in a conversational format. Not exactly the same as Perplexity still, but more akin to ChatGPT. There's also a "continue in Assistant" button at the bottom of the summary results to chat more about the query.
You can also go straight to the view by visiting `/assistant` and typing your query in there.
I still think Perplexity has the AI search stuff down better, but since Kagi gives me both a "legacy style" search AND AI search, it has better value to me imo.
I tried both and use Perplexity. Perplexity is much more akin to native ChatGPT, in that I can create "spaces" for my projects. For example, I just bought an RV, so I loaded all of the PDF manuals into a space so the LLM can reference them. As of a few months ago, Kagi is just thread-based.
FYI - I use Kagi every day for standard search and love it.
Perplexity aggressively makes shit up, particularly when it comes to science and finance. It provides sources, so you can check the bullshit, but I find the error rate is so high that it's practically dangerous.
I use both but cancelled my Perplexity subscription.
Kagi is the better version of Google search, especially if you learn how to use lenses, bangs, and all these features. Kagi Assistant is great if you're happy with basic no-frills chat, i.e. no usable voice input, no image gen, no canvas.
Perplexity is not bad, but somewhat stuck in the middle between ChatGPT/Gemini and search. They provide sources for search results that are somewhat more spot-on than what I've seen elsewhere. For example, it could find EV chargers with restaurants along a route for a trip I made, which ChatGPT, Gemini, and Kagi Assistant all failed at badly.
I found refining searches with Perplexity tedious, and it kept forgetting context once you started to reply. They have an AI-generated news feed which lured me into more doom scrolling.
Also, be aware that Perplexity's free tier may collect data about you, which Kagi does not.
TL;DR: Kagi is a superior search engine worth paying for. Perplexity seems good at queries that require context, but quite expensive.
Any suggestions for how you got your lenses, bangs, and assistant set up the way you like? I recently subscribed to Kagi and feel like I don't really know how to get in the habit of really using all the features.
For redirects, I like to create rules that send versioned sites (like documentation) to either "latest" or the version relevant to me. I also like to redirect sites to a more user-friendly version when applicable (like reddit.com => old.reddit.com).
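To make that concrete, here's a rough Python sketch of what the reddit.com => old.reddit.com rule effectively does to a result URL. The actual rule is configured in Kagi's redirect settings UI, not in code; the function and pattern below are just an illustration:

```python
import re

# Hypothetical sketch (not Kagi's actual rule syntax): roughly the rewrite
# my reddit.com => old.reddit.com redirect rule performs on result URLs.
def apply_redirect(url: str) -> str:
    return re.sub(r"^https?://(www\.)?reddit\.com", "https://old.reddit.com", url)

print(apply_redirect("https://www.reddit.com/r/programming/"))
# -> https://old.reddit.com/r/programming/
```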
If a site leaves me disgruntled when I visit it, I block it. If I find it too useful to block entirely (reddit) I lower it. I apply the inverse to sites I enjoy.
I find there are certain sites whose built-in search sucks. One such example is Dockerhub. In that case, when I want to search for a container on Dockerhub, it may be tempting to use the built-in !dh, but that is no good. Instead I favor the "snap" search, @dh, which will just add "site:hub.docker.com" to your query. This gives much better results than !dh. It can also be combined with the "I'm feeling lucky" bang (!), so you can search for something like "nats @dh !" and end up on the Dockerhub page for NATS - without ever even seeing Kagi if you do it from your URL bar. I use this pattern all the time, usually with Dockerhub and GitHub.
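For what it's worth, the snap search is really just query scoping; here's a minimal Python sketch of the idea (the function and default parameter are purely illustrative, not anything Kagi exposes):

```python
# Minimal sketch of what a "snap" search does: scope the query to a site
# with a site: filter instead of handing it to that site's own search.
def snap_query(query: str, site: str = "hub.docker.com") -> str:
    return f"{query} site:{site}"

print(snap_query("nats"))  # -> nats site:hub.docker.com
```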
You'll find with the above pattern that you'll start to want to apply this to sites that aren't natively supported as bangs. One such site for me has been Ollama. I added a !ollama to be able to search for models directly. It's also nice because just searching "!ollama" will bring me right to the homepage too, which is useful when I want to check to see if I'm on the latest version.
You'll also find there are subjects where you tend to prefer a small set of sources. Maybe it's some software or tool, or some hobby, where you prefer official documentation, maybe some known personal sites you trust, a reddit community, something like that. That's where custom lenses come in. I personally have a lens for the operating system I use (GNU Guix) (as well as a !p bang to search for packages) which includes official documentation, mailing list archives, IRC archives, things like that. I'm sure there are probably similar subjects in your world that you would enjoy having a more focused search for :)
As for the Kagi Assistant, I pretty much have just wired up an Assistant to use my Guix lens as a search source. That is pretty nice, because I can just ask it general questions like "how do I install nginx?" and get focused and relevant answers, instead of having it go off on how to install it on irrelevant distros.
You can start by looking at https://kagi.com/stats?stat=leaderboard and adjusting domains that make sense to you. E.g. I have all Pinterest domains lowered, as well as w3schools.com, as I prefer developer.mozilla.org (which I raised instead).
For bangs I’m pretty sure the default ones might be enough – just use them! Some of my go-to bangs are !gh, !gm for Google Maps (Kagi Maps are sometimes not as good in Asia), !yt, !mdn, and !amo for addons.mozilla.org.
That way, if an NYT article comes up in the results, I can get a paywall-free version directly. You can set it up at https://kagi.com/settings?p=redirects.
I just stumbled across cooked.wiki recently. It allows you to convert any bullshit online recipe to a no-frills ingredient list and instructions. I've only tested it a couple times and I know I'll rarely remember to use the shortcut (https://cooked.wiki/<recipe-url>). But setting it up as a redirect rule from search results might be the thing that pushes usability over the top.
I did not customize bangs and the standard lenses are already useful. I have two custom lenses for travel and academic papers, each searching specific domains. What I meant is that you learn how to use them. What is there by default is already quite good, like !yf !yt !uk etc.
I've subscribed to Kagi for 2 years and subscribed to Perplexity for 2 months. I liked Perplexity a lot—in particular, I think its AI search is faster and maybe a little better than Kagi's. But the lack of standard search and not being able to do non-search AI work led me to drop it and just keep Kagi. And then Kagi released Ki, their multi-step assistant, which is close to as good.
I use both. I only pay for Kagi because I have many models I can use and I can set up different contexts to use them in.
I rarely use Kagi search anymore and instead search via the Assistant. Both it and Perplexity give me much better results than I get from a traditional search engine.
I've never been great at getting what I want from search engines. With the Assistant and Perplexity, I type plain English with context and get what I'm looking for a large chunk of the time. That's a godsend to me.
I've found things that the Assistant does that make it worth paying for. I often use Perplexity, but what I use it for (deep research) isn't valuable enough to me at this time to pay for.
I like the Perplexity iOS app a lot and use it almost exclusively on my phone, which isn't enough use to justify a subscription.
I’m a Perplexity subscriber, but I’ve decided to favor Kagi. One issue I have with Perplexity is the lack of control over the search timeframes. With Kagi, I can use my own timeframe filters, such as last 2 days, last week, and last month.
Camera quality is the same within the same app on either OS.
Pixel Camera can be used on GrapheneOS with full features and photo/video quality if you want it.
GrapheneOS Camera has support for HDR+ on Pixels for regular photos and has Night mode too. It has EIS and HDRnet for video recording. It has a single exposure slider rather than their dual exposure sliders. It uses each of the cameras via zoom level / light level in the same way. More advanced features and configuration are being added to it over time.
Pixel Camera has more features, and the HDR+ it uses is more aggressive, which makes the photos look higher contrast rather than more natural.
GrapheneOS' own camera app doesn't have Google's proprietary processing, so it's not as good as stock, but with Pixel devices, you can install the original Google Camera/Pixel Camera app and get the original camera quality.
GrapheneOS Camera does have hardware accelerated HDR+ and Night mode along with HDRnet and EIS for videos on Pixels.
Photos in Pixel Camera look different because the HDR+ it uses by default is more aggressive resulting in higher contrast but a less natural look. Both are using HDR+ with hardware acceleration though. Videos are more similar and Night mode photos likely are too.
But I think Pixel Camera does some kind of sharpening post-processing, because I sometimes get blurry images with GrapheneOS Camera but almost never with Pixel Camera.
> I learned a ton from my DE. Like, really, a ton. Before that, I had been writing on various blogs for about a decade, but writing online is all about being direct because most people don’t have time. With a book, it’s different. People made a deliberate decision to buy your book. Now, it’s your job to bring them somewhere valuable. And if that takes time (meaning more words), so be it.
I have a hard time with this point. It feels to me like a lot of books have A LOT of unnecessary padding all over the place.
The example of taking 28 words and turning them into 120 is pretty good at showing this. The first paragraph is totally pointless - we are reading a book about the 100 most common mistakes, obviously this mistake is very common, so how did this increase the value?
Then we have another line explaining what happens in the code, which is totally useless because the code is super trivial.
Then the code, with more explanations on the side as if the previous line was not clear.
And only after that we get to the crux of the issue.
I understand that book publishers feel they need to justify the price of a book by reaching the 300-page mark in some way or other, but in my view this only makes the book worse.
It's your opinion, nothing wrong with it. Let me try to see if I can make you change it at least a bit.
> The first paragraph is totally pointless - we are reading a book about the 100 most common mistakes, obviously this mistake is very common, so how did this increase the value?
There are different levels of common mistakes, and this one is probably one that all devs have made at some point. So I think highlighting the fact that it's a frequent one does make sense.
> Then we have another line explaining what happens in the code, which is totally useless because the code is super trivial.
I have a rule: always explain the intention of the code. Even if it's 5 lines of code, it helps the reader better understand what we want to highlight.
> Then the code, with more explanations on the side as if the previous line was not clear.
The explanations on the side do not impact the size of the book, so the argument doesn't hold. I did this in many code snippets to highlight where the reader needs to focus.
> I understand that book publishers feel they need to justify the price of a book by reaching the 300-page mark in some way or other
This is more about guiding the readers, making sure the expectations are crystal clear and that they can follow me throughout an explanation. You judge it as a criterion to justify the price of the book, but that's not the real reason. At least not for my book, and I'm sure that's the case for many others :)
> This is more about guiding the readers, making sure the expectations are crystal clear and that they can follow me throughout an explanation.
Sure, but this holds true for the blog version as well, right?
To be clear, I'm not advocating for The Little Schemer version, and I'm not arguing that the blog version is the best it can be, but surely we can agree that the book-padding phenomenon does exist.
By the way, I have read parts of your book over at O'Reilly Learning, and I do think it is a good book. So I'm not trying to take a dump on your work. My criticism is aimed at publishers.
No worries, I didn't take it as a criticism. I understand your point. I mean, when we sign a contract there's a minimum number of pages to write. But personally, I never felt the pressure of having to add more stuff.
Instead, my DE told me multiple times that it's better to favor just-in-time teaching over just-in-case teaching. Meaning that, multiple times, he made me drop certain sections because they weren't really serving the chapter. They were "perhaps helpful" and he made me drop all of those.
I guess it also depends on who you're working with and which publisher. On this aspect, Manning was fair, imo.