claytongulick's comments | Hacker News

It's such a lovely and simple stack.

No Lit Element or Lit or whatever it's branded now, no framework, just vanilla web components, lit-html in a render() method, class properties for reactivity, JSDoc for opt-in typing, using it where it makes sense but not junking up the codebase where it's not needed...

No build step, no bundles, most things stay in light dom, so just normal CSS, no source maps, transpiling or wasted hours with framework version churn...

Such a wonderful and relaxing way to do modern web development.


I love it. I've had a hard time convincing clients it's the best way to go, but any side project, now and going forward, will start with this frontend stack and nothing more until it's truly necessary.


It made me happy to see this discussion, with more people enjoying the stack available in the browser. I think over time, what devs enjoy using is what becomes mainstream; React was the same fresh breeze in the past.


> (We sure as hell aren’t there yet, but that’s a possibility)

What makes you think so?

Most of the stuff I've read, my personal experience with the models, and my understanding of how these things work all point to the same conclusion:

AI is great at summarization and classification, but totally unreliable with generation.

That basic unreliability seems fundamental to LLMs. I haven't seen much improvement in the big models, and a lot of the researchers I've read theorize that we're pretty close to maxing out what scaling training and inference will do.

Are you seeing something else?


This seems really vague. What does "totally unreliable" mean?

If you mean that a completely non-technical user can't vibe code a complex app and have it be performant, secure, defect-free, etc, then I agree with you. For now. Maybe for a long time, we'll see.

But right now, today, I'm a professional software engineer with two decades of experience and I use Cursor and Opus to reliably generate code that's on par with the quality of what I can write, at least 10x faster than I can write it. I use it to build new features, explore the codebase, refactor existing features, write documentation, help with server management and devops, debug tricky bugs, etc. It's not perfect, but it's better than most engineers I've worked with in my career. It's like pair programming with a savant who knows everything, some of which is a little out of date, who has intermediate level taste. With a tiny bit of steering, we're an incredibly productive duo.


I know the tech is here to stay, and the best parts of it are where it provides accessibility and tears down barriers to entry.

My work is to make sure that you don't need to reach for AI just because human typing speed is limited.

I love to think in terms of instruments versus assistants: an assistant is unpredictable but easy to use. It tries to guess what you want. An instrument is predictable but relatively harder to use. It has a skill curve and perhaps a skill cap. The purpose of an instrument is to directly amplify the expressive power of its user or player through predictable, delicately calibrated responses.


Maybe we're working on different things?

My experience has been much worse. Random functions with no purpose, awful architecture with no theory of mind, thousands of lines of comprehension debt, bugs that are bizarre and difficult to track down and reason about...

This coupled with the occasional time when it "gets it right".

Those moments make me feel like I saved time, but when I truly critically look at my productivity, I see a net decline overall, and I feel myself getting dumber and losing my ability to come up with creative solutions.


Yeah, I haven't had any of that, especially in the last ~6 months. I'm using Cursor and lately Opus 4.5.

I have used Claude to write a lot of code. I am, however, already a programmer, one with ~25 years of experience. I’ve also led organizations of 2-200 people.

So while I don’t think the world I described exists today — one where non-programmers, with neither programming nor programmer-management experience, use these tools to build software — I don’t a priori disbelieve its possibility.


I had a funny example of that.

I had a question about how to do something in Django, and after googling found a good SO answer.

I read through it thinking about how much I appreciated the author's detailed explanation and answer.

When I looked at the author it was me from two years ago.


One of the key elements of effective UX is discoverability.

The user needs to be able to discover the capabilities and limitations of the system they are using.

For most practical examples I can think of, this approach would complicate that, if not make it nearly impossible.


I'm curious what makes you say this? I feel like I discover how most sufficiently complex apps work by googling, or youtube.

I'm also not convinced this makes traditional methods (walkthroughs, support videos, trainings, etc.) impossible?

My friend just discovered coding agents (lol), and he's constantly finding new things it can do for him...

"Oh it can ssh into my raspberrypi and run the code to test it. Wow"

That was an emergent property of the cli coding agent that had no "traditional" discoverability.


A teacher who needs to know which kids are struggling the most with a recent exam doesn't want to ask the AI ten different ways, deal with hallucinations and frustration, then send tech support a ticket only to receive a response that the MCP doesn't support that yet. That teacher isn't going to be impressed.

They just want to see a menu of available reports, and if the one they want isn't there, move on to a different way of doing what they need.


You understand how reality works right?

It's all just atoms clinging to each other.

Simple.

Heh.


You understand how the brain works?

You're the one then. All those laggardly neurobiologists are still struggling.


My brain is able to determine when it doesn't know something.

That does seem to be a bit important for any "intelligent" system.


So, if we find a way to ensure that transformers balk at hallucinating will you then say that they “understand” what they’re saying?

Because that’s what your comment indicates.


At a brief scan of the code, is there a bug with the way task rows are selected and rolled back?

It looks like multiple task rows are retrieved via a delete...returning statement, and an email is sent for each row. If there's an error, the delete statement is rolled back.

Let's hypothesize that a batch of ten tasks is retrieved, and the 9th has a bad email address, so the batch gets rolled back on error. On the next retry, the welcome email would be sent again for the ones that succeeded, right?

Even marking the task as "processed" with the tx in the task code wouldn't work, because that update statement would also get rolled back.

Am I missing something? (entirely possible, the code is written in "clever" style that makes it harder for me to understand, especially the bit where $sql(usr_id) is passed into the sql template before it's been returned, is there CTE magic there that I don't understand?)

I thought that this was the reason that most systems like this only pull one row at a time from the queue with skip locked...

Thanks to anyone who can clarify this for me if it is indeed correct!


I mean the inner select uses "limit 1", right? So it will usually (but not always as I said in another comment) only delete and return a single task.


Hmm, yep I didn't see that, thanks!

It's a confusing way to do things, to me. Like, why not select ordered by task date with limit 1? Still using for update and skip locked etc., hold the transaction, and update to 'complete' or delete/move the row when done? What's the advantage of the inner select like that?
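For reference, the pattern I had in mind is roughly this (a sketch, not the article's code; the status and created_at columns and the :claimed_id placeholder are my assumptions):

```sql
begin;

-- Claim one task: oldest first, skipping rows other workers hold locks on.
select task_id, task_type, params
from task
where status = 'pending'
order by created_at
limit 1
for update skip locked;

-- ...send the welcome email for the claimed row...

-- Mark it done (or delete/move it) inside the same transaction,
-- so a failure rolls back only this one task, not a whole batch.
update task set status = 'complete' where task_id = :claimed_id;

commit;
```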

And I'm still totally confused by:

    const [{ usr_id } = { usr_id: null }] = await sql`
        with usr_ as (
          insert into usr (email, password)
          values (${email}, crypt(${password}, gen_salt('bf')))
          returning *
        ), task_ as (
          insert into task (task_type, params)
          values ('SEND_EMAIL_WELCOME', ${sql({ usr_id })})
        )
        select * from usr_
      `;
This looks to me like usr_id would always be null?
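What I would have expected is something like this, with the task insert reading usr_id from the first CTE (plain SQL with positional parameters; assuming the usr primary key is usr_id and params is jsonb - my assumptions, not the article's):

```sql
with usr_ as (
  insert into usr (email, password)
  values ($1, crypt($2, gen_salt('bf')))
  returning usr_id
), task_ as (
  -- reference the data-modifying CTE directly instead of an outer variable
  insert into task (task_type, params)
  select 'SEND_EMAIL_WELCOME', jsonb_build_object('usr_id', usr_id)
  from usr_
)
select usr_id from usr_;
```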

I think the idea is great, I think I'm just struggling a bit with the code style, it seems to be "clever" in a way that increases cognitive load without a real benefit that I can see, but I suppose that's pretty subjective.


I hope that we see more measured, objective articles like this. It's been pretty frustrating as someone on the sidelines looking in, the degree of panic and emotion attached to the climate stuff, that has always seemed to be out of scale with the actual effects to me.

I'm ~50, and my whole life, back to the 80's, there have been these sort of breathless extreme articles about the existential threat that climate poses. I remember, as a kid, it was global cooling, and we were all going to have to deal with an ice age, which terrified me.

Then it was global warming, and the "tipping point" and hawaii and all of our coastal cities were going to be under water within 5 years.

Then it was "climate change" which was poorly defined to me, but humans were definitely to blame, and causing hurricanes and destroying the planet - even though when I bothered to look at the actual data, the rate of hurricanes and other events had actually decreased.

I've read some super compelling articles from what I'll call "measured environmentalists" that argue persuasively that to do the most good for people, we should shift our focus to immediate harms that we can actually control well - things like malaria, and reliable clean water and heating, that would have a far greater impact for tens of millions of people than something nebulous like carbon credits.

I'm far from an expert on this stuff, I just wish that the conversation (as with so many things) could have less yelling, and more considered thoughtful discussion. This article, and Gates' seem to be a great start.


An article talking about a complex system [1] (the Earth's climate system coupled to human industrial/farming systems) with few hard numbers, no mathematical models and graphs of their behavior, and no links to any such discussions, is not objective in any sense of the word. It's all the author's uncited subjective views.

This is the kind of stuff one should take in from one ear, and let it out through the other ear without letting it touch the brain.

[1] complexity in the sense of mathematics.


It sort of depends on the expertise of the author, right? In this case, it seems like an actual climate scientist that has moderated his opinion over time, at least that was my takeaway.

That makes it at least as valuable to me as any given "we're all going to die" article that pops up endlessly in these kinds of discussions.

I agree though, that a big problem with these conversations is dealing with complex systems, small signals and potentially large impacts and communicating all that in an effective way.

Most people (myself included) are simply not equipped to understand the details, so we rely on others to explain it to us.

My point was just that I enjoy a more balanced take on the issue.


> It sort of depends on the expertise of the author, right?

In a well-established field like Physics or Biology, if an expert is talking about the established part of their field, they can just say things and you can trust that they are correct. If they're saying things about the unestablished parts of their field - say, a physicist talking about string theory - they need to properly cite stuff.

In a not so well established field like Climate Science, where there is a lot of disagreement, every expert needs to cite their sources so people in adjacent fields can verify what they are saying.


> In a not so well established field like Climate Science, where there is a lot of disagreement

Is there? In the actual science, not in the I'm-a-contrarian-because-fossil-pays-well scene.


There are broad differences of opinion about the validity of business-as-usual (BAU) projections a few decades out, AMOC collapse, etc.

But the climate denialists like the author don't talk about that. They attack settled science and handwave away legitimate, serious concerns by saying that risk is incalculable.


In other words it's the same as the OP says about established vs not-yet-settled parts of physics.


For folks who aren't healthcare tech nerds, what happened in this case is called "unbundling" which is a fraudulent practice that can have steep penalties from CMS.

CMS maintains a service and set of tools to help prevent payers from getting hit with this called the National Correct Coding Initiative (NCCI) [1]. NCCI only applies to provider services and outpatient billing codes, but is still applicable for emergency room services.

There are a bunch of technical details for implementing the edits in the NCCI, but I think it's worth taking a moment to reflect on this.

It's pretty popular to point to the insurance company as the "bad guy" in healthcare, but this is the sort of stuff they deal with thousands of times per day.

As frustrating and horrible as this story is, it's not unique to an uninsured individual. A big problem in US healthcare is provider overbilling.

One of the most tragic jobs I held in healthcare tech was developing software for billing negotiation between providers and insurance companies. It was pretty eye-opening how terribly everyone behaves, and I learned to have a lot more sympathy for what insurance companies/government payers have to deal with.

As a patient trying to have necessary treatment paid for, it's incredibly frustrating to have a claim denied, and these are what we see in the news and experience personally.

As an insurance company, building robust systems that authorize necessary care while catching overbilling, overutilization and outright fraud is unfathomably complex and error prone.

This is one of the reasons I've become a fan of DPC (direct primary care) models [2] with HSAs and supplemental high-deductible catastrophic insurance to protect against hospital stays. It puts primary care back into a direct relationship with the patient, and lets insurance companies do what they are good at: pricing risk.

Some of the unintended consequences of how insurance companies are currently regulated is that in some states it can be difficult or impossible for an insurance company to provide a low cost, high deductible plan. They are forced to cover things that drive the costs up, so it's hard to do a DPC + catastrophic insurance option.

[1] https://www.cms.gov/national-correct-coding-initiative-ncci

[2] https://www.aafp.org/family-physician/practice-and-career/de...

