Hacker News: benkaiser's comments

On this same topic, I just launched an app that lets you use your offline videos in an interface that is easy to use for kids.

https://play.google.com/store/apps/details?id=com.kaiserapps...

I stuck to a one-off payment, rather than the garbage subscription models all the other parenting apps use.


What we find essential about Safe Vision is that the kids can search like normal, but results are limited to the approved channels. With about 30 highly curated channels, the kids can find a lot of safe content.

It also generates an updated dashboard page with new videos from all the approved creators, which is also essential.

The offline thing has never come up for us. They charge a yearly subscription of $29.99, which we're happy to pay. Just an FYI.


I'm quite curious how they go about licensing the content. Maybe they just pay the creators a cut of the subscription fees, or is it really just scraping YouTube in some form?

As for the search you mentioned, that might come into it for an older age range. My little ones still can't read or spell yet; they just want to click on whichever thumbnail looks the most engaging at any random time.


As a developer, the one feature I really love in Chrome is PWAs. But Firefox abandoned PWA support years ago, and seems to have no appetite for adding PWAs back[1]. Maybe I'll just have to split my usage across PWAs in Chrome (since I trust those apps/websites anyway) and Firefox for general browsing.

[1] https://connect.mozilla.org/t5/ideas/bring-back-pwa-progress...


They seem intent on bringing it back one way or another [1], and there is also a workaround utility [2].

[1] https://connect.mozilla.org/t5/discussions/how-can-firefox-c...

[2] https://pwasforfirefox.filips.si/


Thanks for sharing, I wasn't aware of that blog post. That said, their approach sounds kind of disheartening. I love being able to use extensions within my PWAs (primarily uBlock Origin, obviously), and some Android Chrome forks (Kiwi/Mises) let me do this while still making me feel like I'm in a dedicated app (i.e. no browser chrome at all). The Firefox team really seems to stress here that they want to keep the chrome around the app (albeit different), which goes against the grain of what I expect a PWA to be in the first place: a chromeless website.


BrightIntosh is roughly the same as Vivid and is actually open source, but the author ships the binaries as a Mac App Store purchase for $2:

https://github.com/niklasr22/BrightIntosh


Yeah, there's a way to do this with Calibre + some plugins on Windows. I tried on my Mac but was unsuccessful.


Not the best reporting, but still interesting (although not surprising) that the Australian government is cautious about the use of DeepSeek.


Runs on an AMD 7900 XTX at about 20 tokens per second using LM Studio + Vulkan.


This is not too far from what AdNauseam does today as a simple browser extension.

https://adnauseam.io/


Predictably, Google has banned that extension from Chrome, but if we all ran AI agents in container swarms...


Absolutely.

I recently read "The Life You Can Save" by Peter Singer, and it really does a great job of making the case for generosity even amongst middle-class, first-world individuals.

ebook/audiobook are free from their website: https://www.thelifeyoucansave.org.au/the-book/


To add a little context here, I (the author) understand LLMs aren't the right tool (at least by themselves) for verbatim verse recall. The trigger for me doing the tests was seeing other people in my circles blindly trusting that ChatGPT was outputting verses accurately. My background allowed me to understand why that is a sketchy thing to do, but many people do not, so I wanted to see how worried we really should be.


Thanks for the response, it does sound like you've seen similar treatment of LLMs by others to what I've observed.

I think, though, that an important part of communicating about LLMs is talking about what they are designed to do and what they aren't. This is important because humans want to anthropomorphise, and LLMs are way past good enough for this to be easy; but, similar to pets, not being human means they won't live up to expectations. While your findings show that current large models are quite good at verbatim answers (for one of the most widely reproduced texts in the world), this is likely in no small part down to luck and the current way these models are trained.

My concern is that the takeaway from your article is somewhere between "most models reproduce text verbatim" and "large models reproduce popular text verbatim", where it should probably be that LLMs are not designed to be able to reproduce text verbatim and that you should just look up the text, or at least use an LLM that cites its references correctly.


Check out this video at the 22:20 mark. The goal he’s pursuing is to have the LLM recognize when it’s attempting to make a factual statement, and to quote its training set directly instead of just going with the most likely next token.

https://youtu.be/b2Hp0Jk9d4I?si=SwfKJrck5_0LPgTH


>or at least use an LLM that cites its references correctly.

Are there implementations where the LLM just outputs the text of the references (or the first 100 words)? I'm sure someone has implemented this already.


Gemini cites web references, NotebookLM cites references in your own material, and the Gemini APIs have features around citations and grounding in web search content. I'm not familiar with OpenAI or Anthropic's APIs but I imagine they do similar, although I don't think ChatGPT cites content.

All these are doing, however, is fact-checking and linking out to the fact-checking sources. They aren't extracting text verbatim from a database. You could probably get close with RAG techniques, but you still can't guarantee it, in the same way that if you ask an LLM to exactly repeat your question back to you, you can't guarantee it will do so verbatim.

Verbatim reproduction would be possible with some form of tool use, where rather than returning, say, a Bible verse itself, the LLM returns a structured request asking the orchestrator to run a tool that inserts the verse from a database.
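To make that concrete, here's a minimal sketch of the orchestrator side. Everything here (the `insert_verse` tool name, the message shape, the tiny in-memory database) is a hypothetical illustration, not any real API: the point is that the exact text comes from the database lookup, never from the model's token stream.

```python
# Hypothetical sketch: the model never emits the verse text itself; it emits a
# structured tool call, and the orchestrator substitutes the exact text from a
# local database, which is what guarantees verbatim output.

# Stand-in for a real scripture database, keyed by (book, chapter, verse).
VERSE_DB = {
    ("John", 3, 16): "For God so loved the world, that he gave his only begotten Son...",
}

def handle_model_output(message: dict) -> str:
    """Resolve a tool call from the model, or pass plain text through unchanged."""
    if message.get("tool") == "insert_verse":
        key = (message["book"], message["chapter"], message["verse"])
        return VERSE_DB[key]  # exact text, straight from the database
    return message["text"]

# The model (not shown) would return a structured message like this:
model_message = {"tool": "insert_verse", "book": "John", "chapter": 3, "verse": 16}
print(handle_model_output(model_message))
```

In a real system the tool call would come back through a function-calling API and the database would be a proper scripture source, but the guarantee works the same way: the model only chooses *which* verse to insert, not its wording.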


I can speak to a couple of perspectives I have seen other people use it for, ranging from valid to somewhat scary.

1. Preparing a sermon for church. I don't advocate for this, but it's definitely being done out there. Here, the pastor may know the topic they are speaking on, but want the LLM to help them plan out and structure the message.

2. Preparing lesson plans for Sunday School. This seems reasonably fine to me, but I would still err on the side of not trusting the raw scripture output, and instead look the verses up separately before reading them out.

The above examples may particularly come into play when English is not the speaker's first language: although they can understand and express their faith easily in their native language, ChatGPT can help them represent it well in English.

Personally, I think the use cases are many, but mostly for discussion / personal reflection. These include things like asking for perspectives that other Christians take on certain passages, helping understand how some scriptures link to other scriptures in the Bible, and sometimes even exploring some of the history of the Christian faith through the last ~2 millennia since it was written. Anything meaningful you can manually research further / reference before taking it at face value, but it can work as a great starting point for your search.

