Hacker News | jonathanstrange's comments

Yes, but there is nothing about pricing on this page. That doesn't make sense to me.

Neon is from Databricks. Here's their pricing page: https://neon.com/pricing

As long as it's purely opt-in and before opting in no data is ever sent to some server and no source code can be changed by it, I'm okay with it.

Looks nice & useful. However, I'd make two versions: the one you have, and additionally a JavaScript version that is a Progressive Web App (PWA). I'm pretty sure some AI could convert the plain page into a PWA for you.

The PWA has the advantage that it will also load when the internet is down and there is no need to save the page manually.


That sounds unreliable, since the cache can disappear. A regular mobile app would be safer.

How can it load when the internet is down?!? Doesn’t the PWA source have to be fetched? And if it’s cached, then so can the static resources be.

The complete web page and all its resources are saved locally by the service worker. "Clear site data"/clearing cookies will delete it, but clearing the normal browser cache won't, so it's overall a little more persistent than the cache for static resources.

However, it needs to be installed as an app to really work offline without initial loading. Chrome will prompt you for that on Android, Linux, and Windows. Safari can also do it but makes you jump through hoops. In Firefox, the PWA will work like a page that loads even when the machine is offline.
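To make the service-worker part concrete, here is a minimal cache-first sketch. The cache name and asset list (`offline-v1`, `/index.html`, etc.) are illustrative placeholders, not anything from the page being discussed; the lookup logic is pulled into a plain function so the strategy is visible:

```javascript
// sw.js — minimal cache-first service worker (sketch; names are illustrative)
const CACHE = 'offline-v1';
const ASSETS = ['/', '/index.html', '/app.js', '/style.css'];

// Cache-first lookup: serve from the cache, fall back to the network.
async function cacheFirst(request, cacheStore, fetchFn) {
  const hit = await cacheStore.match(request);
  return hit || fetchFn(request);
}

// Only wire up the listeners when running inside a service worker context.
if (typeof self !== 'undefined' && 'addEventListener' in self) {
  self.addEventListener('install', (e) =>
    e.waitUntil(caches.open(CACHE).then((c) => c.addAll(ASSETS))));
  self.addEventListener('fetch', (e) =>
    e.respondWith(caches.open(CACHE).then((c) => cacheFirst(e.request, c, (r) => fetch(r)))));
}
```

The page would register it once with `navigator.serviceWorker.register('/sw.js')`; after the first visit, everything in `ASSETS` is served locally even with no connection.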

Not a single sample chapter / reading sample? That's unusual.

I'm also using Gemini and it's the only option that consistently works for me so far. I'm using it in chat mode with copy&paste and it's pleasant to work with.

Both Claude and ChatGPT were unbearable, not primarily because of a lack of technical ability but because of their conversational tone. Obviously, it's pointless to take things personally with LLMs, but they were so passive-aggressive and sometimes maliciously compliant that they started to get to me, even though I was conscious of it and know very well how LLMs work. If they had been new hires, I would have fired both of them within two weeks. In contrast, Gemini Pro just "talks" normally, task-oriented and brief. It also doesn't reply with files that contain changes in completely unrelated places (including changing comments somewhere), which is the worst thing such a tool could possibly do.

Edit: Reading some other comments here, I have to add that the 1., 2., 3. numbering of comments can be annoying. It's helpful for answers but should be an option/parameter.


> Both Claude and ChatGPT were unbearable, not primarily because of lack of technical abilities but because of their conversational tone.

It's pretty much trial and error.

I tried using ChatGPT via the webchat interface on Sunday and it was so terse and to the point that it was basically useless. I had to prompt repeatedly for all the hidden details, so I basically gave up and used a different webchat LLM (I regularly switch between ChatGPT, Claude, Grok, and Gemini).

When I used it a month ago, it would point out potential footguns, flaws, etc. I suppose it just reinforces the point that "experience" gained using LLMs is mostly pointless: it gets invalidated the minute a model changes, a system prompt changes, etc.

For most purposes, they are all mostly the same, i.e., they produce output so similar you won't notice a difference.


I think you’re highlighting an aspect of agentic coding that’s undervalued: what do you do once trust is breached?

With humans you can categorically say ‘this guy lies in his comments and copy-pastes bullshit everywhere’ and treat them consistently from there on out. An LLM is guessing at everything all the time. Sometimes it’s copying flawless next-level code from Hacker News readers, sometimes it’s sabotaging your build by making unit tests forever green. Eternal vigilance is the opposite of how I think of development.


I think what OP means is that a US company cannot simultaneously comply with the CLOUD Act and the GDPR. That case has also been made by some courts in the EU: US law and practice are incompatible with the requirements of the GDPR. US companies that claim to process data in accordance with the GDPR seem to be deceiving their customers. Maybe I'm wrong, but it seems to me that companies in the EU who rely on US services, corporations in the US, and even governments themselves keep quiet about this unpleasant truth. It means that Microsoft Windows violates the GDPR, Google violates it, every US social network violates it, etc.

Of course, as someone else mentioned, that is not an argument against EU sovereignty but rather one of its motors.


My take, and perhaps also my hope, is that the software with the best survival chances is the kind developed by reasonable, down-to-earth people who understand human needs and desires well, have some overall vision, and create tools that just work and don't waste the user's time. Whether it's created with the help of AI or not might not matter much in the end.

On a side note, any kind of formula that contains what appears to be a variable on the left-hand side that appears nowhere on the right-hand side deranges my sense of beauty.


I get an Error 404 after login. When I click "Home" I get the same error again.

If the list of songs is randomly shuffled, you can only hear the same song twice if there's a duplicate or you've cycled through the whole list. That's why you shuffle lists instead of randomly selecting elements from them.
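The point above can be sketched in a few lines: a Fisher–Yates shuffle gives each pass through the list exactly one play per song, and a reshuffle only happens once the list is exhausted. The `shuffle`/`player` names are just for illustration:

```javascript
// Fisher–Yates shuffle: returns a new array; no song repeats within one pass.
function shuffle(arr) {
  const a = arr.slice();
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1)); // uniform index in [0, i]
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}

// Endless player: reshuffle only after the whole list has played through,
// as opposed to picking a random song each time (which repeats early).
function* player(songs) {
  while (true) yield* shuffle(songs);
}
```

Picking uniformly at random each time would, by the birthday bound, likely repeat a song well before all of them have played once; the shuffled-pass approach rules that out by construction.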

Luckily, current AI technology is still in its infancy and not good enough. That being said, none of this will matter in the long run. I just don't see how AI could fail to eventually replace most jobs done in front of a computer. For example, there is no reason why programs wouldn't be created and modified on the fly in the future. It's just logical to offer this functionality once agentic AI has gotten good enough.

However, nothing indicates that this will happen soon; we're talking about a timeline of a decade or longer. Maybe pricing, as well as hardware and energy shortages, will further slow down the transition. Right now, AI doesn't seem to be profitable for the companies offering it.

Feel free to downvote this comment, but make sure you revisit this post 10 years from now.

