Hacker News | zdragnar's comments

We've had "natural language"-esque DSLs for a long time. Cucumber is probably among the better known, at least the first that comes to mind.

> Yet, with LLMs, we commit our generated source code, completely throwing away the English language abstraction.

Unless you want to commit your chats, that's very much a bonus. You don't want two different people "compiling" completely different versions of your application, because LLMs aren't deterministic.


> Hence the number of DOM elements stayed constant no matter how far you scroll and the only thing that grows is the Y coordinate.

This is generally called virtual scrolling, and it is not only an option in many common table libraries, but there are plenty of standalone implementations and other libraries (lists and things) that offer it. The technique certainly didn't originate with Netflix.
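For anyone curious what the technique actually looks like, here's a minimal sketch in TypeScript (all names here are made up for illustration; this is not Netflix's or any particular library's implementation):

```typescript
// Virtual scrolling in a nutshell: the window of visible rows is recomputed
// from scrollTop, so the number of rendered rows stays constant and only
// each row's Y offset grows as you scroll.
const ROW_HEIGHT = 40;      // px; assumes fixed-height rows
const VIEWPORT_HEIGHT = 800; // px; the visible scroll container

interface RenderedRow {
  index: number; // which logical row this node represents
  y: number;     // absolute Y position within the scroll container
}

function visibleRows(scrollTop: number, totalRows: number): RenderedRow[] {
  const first = Math.floor(scrollTop / ROW_HEIGHT);
  const count = Math.ceil(VIEWPORT_HEIGHT / ROW_HEIGHT) + 1; // +1 for a partially visible row
  const last = Math.min(totalRows, first + count);
  const rows: RenderedRow[] = [];
  for (let i = first; i < last; i++) {
    rows.push({ index: i, y: i * ROW_HEIGHT }); // DOM node count stays fixed
  }
  return rows;
}

// A million logical rows still render at most 21 nodes:
console.log(visibleRows(400_000, 1_000_000).length); // 21
```

The renderer then absolutely positions each of those rows at its `y` and gives the container a total height of `totalRows * ROW_HEIGHT` so the scrollbar stays honest. This is also why Ctrl-F breaks: most of the content simply isn't in the DOM.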


It's been about three years, but infinite scroll is nuanced depending on the content that needs to be displayed. It's a tough nut to crack and can require a lot of maintenance to keep stable.

None of which ChatGPT can handle, presumably.


And yet ChatGPT does not use it.

GP was mentioning that a solution to the problem exists, not that Netflix specifically invented it. Your quip that the technique is not specific to Netflix bolsters the argument that OpenAI should code that in.


I'm ignorant of the tech here. But I have noticed that ctrl-F search doesn't work for me on these longer chats. Which is what made me think they were doing something like virtual scrolling. I can't understand how the UI can get so slow if a bunch of the page is being swapped out.

Ctrl-A for select all doesn't work either. I actually wondered how they broke that.

They didn't actually name the solution: the solution is virtualization.

They described Netflix's implementation, but if someone actually wanted to follow up on this (even for their own personal interest), Dynamic HTML would not get you there, while virtualization would across all the places it's used: mobile, desktop, web, etc.


Vulgarity aside, I can sympathize. For years I've been told by designers that discoverability and intuitive interacting patterns are so important, yet every aspect of modern design focuses so much on minimizing "distractions" that features go undiscovered. We get forced into suboptimal workflows and usage patterns because everything gets over-fitted to the lowest common denominator.

This is the biggest reason I love Linux. I can choose my own desktop, or even forsake the desktop entirely for a simpler window manager, without changing operating systems. Some are hyper-focused on a tailored experience (GNOME) while others let you configure to your heart's content (KDE).

There are sacrifices to be made, of course, but not having to live under the oppression of Apple's benevolent dictator designers is absolutely worth it for me.


This, exactly.

Every macOS app has a menu item explicitly made for this exact thing. It's often the third item in the menu:

    File    Edit   View

But they refuse to put these viewing options under the View menu item. Why? Why would you not put these really great viewing options under View?

It's under the Window menu?

> we should probably chalk it up to laziness on developers' part

Also, developers at many companies don't own their time. They're given a certain amount per feature that they didn't estimate themselves, and the company doesn't give them time to fix Firefox specific bugs because it would cost them more than the user's monetary value is worth compared to other features or bugs.


Are there lessons on safety that need to be learned here? We already know what the happy path looks like, and we've plenty of lessons on what the unhappy path will look like.

It isn't as if electric charge coming from balcony solar panels is some new magical-seeming type of electricity.


Safety is statistical and depends on human behavior. Unexpected behaviors might appear. For example, some places require a power outlet on kitchen islands because, without one, people will run cords to the wall, which creates a tripping hazard.

Also, why do wires have to be fixed to joists every 300 mm? It's not about the electrons.


That's a pretty far cry from "complexity is unavoidable". Reading that to me implies that the complexity is inherent in programming language design, whereas this follow-up argument seems to say that complexity is the result of tacking on new features.

The latter is a bit tautological, since the size of the language grammar is itself a measure of complexity.


I think they even haven't adopted newer JVM features, it is a hosted language designed to depend on its host, plus it is a Lisp.

The complexity would be to grow like Common Lisp; instead it is up to Clojure folks to write Java, C#, and JavaScript code, and therein lies the complexity.


> I think they even haven't adopted newer JVM features

You don't know what you're talking about. Not only is Clojure steadily adopting newer JVM features (where that makes sense) - java streams, functional interfaces, qualified method values, interactive lib loading, JDK 21 virtual threads, etc. - it also constantly explores beyond the JVM - e.g., Jank targets LLVM and has C++ interop.

Pick some hardcore JVM topics and try searching what Clojurists think about them - GC, profiling, concurrency, etc. There's tons of interesting, deeply involved work constantly being hacked together by incredibly knowledgeable folks. You're casually name-dropping "complexity" without perhaps realizing that this is a community that includes people who have written production experience reports on Shenandoah GC, built profiling tools that work around safepoint bias, and given conference talks on tri-color marking algorithms. Dealing with complexity is their bread and butter. Challenging Clojurists to a debate about "complexity" is like dropping "the brain has neurons" around a group of neurosurgeons. They'd quietly say nothing, so you can "win" your argument, but they'll just... know.


I was talking about the JVM bytecodes for dynamic languages.

Also, I remember watching a recent talk where virtual threads were still "being considered".

Having to write portable code that has to take into account the host differences and differences in execution semantics, and still deliver the same outcome, is also complexity that keeps neurons busy.


Let's see.

> JVM bytecodes for dynamic languages

You're talking about invokedynamic - a bytecode instruction added in Java 7 specifically to make dynamic language dispatch efficient, right? Explained simply: the JVM was designed for static types, with method calls resolved at compile time. In dynamic languages you don't know the type of something until runtime, so implementations had to hack around that, typically by boxing everything and doing manual type checks. This was slow and awkward. JRuby and Groovy adopted invokedynamic eagerly. Clojure's dispatch model, though, is different. Most calls are either direct interop (already statically typed) or calls through a Var (a reference to a function value, not a dynamic method lookup). The Var indirection is a different shape of problem that invokedynamic doesn't solve as cleanly. It's not that it's useless, just that the fit isn't as natural.
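The boxing-and-manual-type-check workaround described above isn't JVM-specific, so here's a hypothetical TypeScript sketch of its shape - just to show the kind of per-call-site branching that invokedynamic was designed to eliminate (all names invented for illustration):

```typescript
// A dynamic-language runtime hosted on a statically typed VM can't know
// operand types at compile time, so every value is boxed with a runtime tag...
type Boxed =
  | { tag: "int"; value: number }
  | { tag: "str"; value: string };

// ...and every polymorphic operation must branch on those tags before
// dispatching. This check runs on every single call.
function plus(a: Boxed, b: Boxed): Boxed {
  if (a.tag === "int" && b.tag === "int") {
    return { tag: "int", value: a.value + b.value };
  }
  if (a.tag === "str" && b.tag === "str") {
    return { tag: "str", value: a.value + b.value };
  }
  throw new TypeError(`cannot add ${a.tag} and ${b.tag}`);
}

console.log(plus({ tag: "int", value: 2 }, { tag: "int", value: 3 }).value); // 5
```

invokedynamic lets the VM cache and inline the resolved call site after the first dispatch instead of re-running checks like these every time, which is why it helped JRuby and Groovy so much.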

> virtual threads was still "being considered"

That info is outdated. Clojure 1.12.0 shipped two years ago with virtual thread support, but the integration with core.async's thread pool model was not there (so you were not completely incorrect). However, core.async later reimplemented go blocks using virtual threads when available. The improvements are still underway: https://clojure.org/news/2025/10/01/async_virtual_threads

> take into account the host differences

Okay, this one is genuinely not that straightforward. The #? reader conditional in .cljc files is a clean, minimal mechanism. I don't know of any other language that can target completely different platforms from a single namespace as cleanly - even in Node.js you can't do it as nicely in practice. Kotlin Multiplatform is probably the closest competitor, but its `expect/actual` mechanism requires separate source sets, separate files, and considerably more boilerplate. You're not writing in the same namespace; you're wiring together parallel declarations. Scala.js and GHCJS are essentially separate compilation targets with thinner sharing stories. But yes, it still can get complicated - different hosts have meaningfully different concurrency and I/O models, so it's "shared logic, host-specific edges" rather than "write once, run anywhere". I still think Clojure handles all this far more elegantly than the alternatives.

So pragmatically speaking, you're pointing at complexity at the implementation/runtime layer, while Clojure's complexity reduction happens at a different layer entirely - data model, immutability by default, simpler concurrency reasoning, REPL workflow. Those layers mostly don't interfere with each other. You mentioned real concerns at the platform engineering level, but they in practice don't touch what Clojure is actually trying to simplify. Someone writing Clojure code never experiences invokedynamic problems one way or the other.


We also have to adulterate that methane with bitter-smelling agents to warn people of the danger when there's a leak. The line into the house is also limited by a regulator to ensure the pressure is very low. If gas builds up in a battery, it's either going to leak out slowly or build up and leak out all at once.

Very much not an equal comparison.


The methane is almost always piped in to be burned, and that can easily create odorless carbon monoxide. And the smell is not foolproof either. This does routinely kill people and we keep doing it. The jurisdictions that are banning it are doing so because of environmental reasons, not safety.

What the other poster said about the risk of releasing cyanide during overcharging is not true.

Cyanide could be released only at high temperatures, above 300 degrees Celsius.

During a fire, there are many other things in a car that will release toxic fumes more easily than a sealed battery.


The only people with any significant amount of lead acid batteries on their property are off grid types who typically store them away from their primary domicile as a fire safety precaution.

Fast charging a car/chemical weapon in your garage isn't terribly appealing.


If that battery is a chemical weapon then so is a big half-plastic box with ten gallons of gasoline inside.

Welcome to the new era, where programming is neither a skill nor a trade, but a task to be automated away by anyone with a paid subscription.

A lot of software isn't that important, so it's fine, but some actually is important - especially with a brand name slapped on it that people will trust.

The industry needs to step up and plant a flag for professional certification of proper software engineering. Real, hard exams, etc.

I can't even imagine what these exams would look like. The entire profession seems to boil down to making the appropriate tradeoffs for your specific application in your specific domain using your specific tech stack. There's almost nothing that you always should or shouldn't do.

All engineering professions are like that. NCEES has been licensing Professional Engineers for over a hundred years. The only thing stopping CS/SE is an unwillingness to submit to anything resembling oversight.

All software runs on somebody's hardware. Ultimately even an utterly benign program like `cowsay` could be backdoored to upload your ssh keys somewhere.

https://xkcd.com/2347/ , but with `fortune -a` and `cowsay` instead of imagemagick

I needed to replace my car's windshield in a hurry while on an extended trip. I called around to see who might have one in stock that could do a rush order. There was one place that had an automated voice system, and I hung up because it kept redirecting the conversation to get me to hand over more information than necessary to answer my question.

If I were already an existing customer and just wanted to schedule an oil change, it'd be fine, though I'd probably just schedule on the website anyway. I'm really only going to call in if I have an unusual circumstance and actually need to speak with someone.


Automated voice systems that try to sound human but are in fact purely scripted are insanely annoying. E.g. "I think you said 'windshield', is that correct? ... Got it, thanks!"

If you only have 4 options, just give me the old school list of voice options and I'll press 1 through 4, in less time, and being only moderately annoyed.

But a knowledgeable AI system as described in the article - one that knows what it knows and tells you when it doesn't - could work great. If it had access to inventory and the calendar, it might have worked for you. The question is whether the implementation lives up to the high expectations set by the article.


Me too, but I wonder whether we're in the minority here. I'm sure there must be plenty of people who just call places to get information easily found via the web, or there wouldn't be so many automated phone systems that explain how to get information via their website.

I know someone who works on the voice response system for $LARGEBANK. She says that more than 95% of calls are just to find out a checking account balance.

That's fine, and there's no need for AI pretending to be a human, or to ask me to talk to a computer as if it is a human. Routine decision trees work really well here.

In fact, decision trees are nice because they tell you more or less up front what they're capable of.

What really sucks (AI or decision tree, either way) is when they don't let you easily speak with someone.
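To make the appeal concrete: a decision tree's whole capability surface is enumerable up front, and a known key sequence always lands in the same place. A toy sketch (hypothetical menu and names, TypeScript for illustration):

```typescript
// An IVR decision tree: each node announces its options, and each keypress
// deterministically selects a child. Nothing is hidden behind conversation.
interface MenuNode {
  prompt: string;
  options?: Record<string, MenuNode>; // keypress -> next node
  action?: () => string;              // leaf: perform the task
}

const bankMenu: MenuNode = {
  prompt: "Press 1 for balance, 2 for recent transactions, 0 for an agent.",
  options: {
    "1": { prompt: "Balance", action: () => "Your balance is $375.86" },
    "2": { prompt: "Transactions", action: () => "Here are your last five transactions." },
    "0": { prompt: "Agent", action: () => "Transferring you to an agent." },
  },
};

// Muscle memory: replay a known key sequence straight to the leaf.
function navigate(root: MenuNode, keys: string[]): string {
  let node = root;
  for (const k of keys) {
    const next = node.options?.[k];
    if (!next) return node.prompt; // unknown key: replay the current menu
    node = next;
  }
  return node.action ? node.action() : node.prompt;
}

console.log(navigate(bankMenu, ["1"])); // "Your balance is $375.86"
```

The same determinism is exactly what an LLM-driven voice agent gives up: the set of things it can do is no longer discoverable from the prompt, and the same input won't always take the same path.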


I'd argue a well designed AI assistant would be considerably better than a decision tree for that use case. Decision trees are slow because you normally need to wait through several options before getting to the one you're interested in. (Though sure, perhaps not if your call is literally for the most common thing.) But with an AI you could jump straight to what you're interested in.

"Hi, I'm the LargeBank AI Assistant. How can I help you?" "I'd like to know the balance of my checking account."

And then authenticate and get the balance as usual. Simpler and faster. Agreed that it becomes a problem if it's seen as a replacement for human agents though. In an ideal world it would actually free up the human agents for when they're actually needed. In reality it'll probably be some of each.


I'd counter with the following:

por espanol marque beep

if you have a quest beep

for beep

beep beep beep beep beep

The account balance for account ending in NNNN is: $375.86

I shouldn't have to navigate a conversation in a situation where muscle memory will take me through the phone system decision tree in seconds.


I believe that. Probably 95% of my support calls to online shops are about order status (aka: the website shows "in preparation" for a week already, I need to talk to a real person).

I routinely call businesses instead of using their websites, but I do this to talk to a person instead of a machine.

Would you call a business to ask a question that's answered on their website?

Absolutely, routinely.

Often the relevant information is a pain to find on a website, but even if it isn't, the people who answer the phone often have important context like "Usually we do offer that, but recently one of our suppliers..." or "We can do that, but maybe instead..." or "Oh, the website isn't updated with..."

