Hacker News | new | past | comments | ask | show | jobs | submit — dfabulich's comments

> If you’re not already using a CLI-based agent like Claude Code or Codex CLI, you probably should be.

Are the CLI-based agents better (much better?) than the Cursor app? Why?

I like how easy it is to get Cursor to focus on a particular piece of code. I select the text, hit Cmd-L, and say "fix this part, it's broken like this ____."

I haven't really tried a CLI agent; sending snippets of code by CLI sounds really annoying. "Fix login.ts lines 148-160, it's broken like this ___"


Yeah I started with Cursor, went hybrid, and then in the last month or so I've totally swapped over.

Part of it is the snappier, more minimal UX, but the pure efficacy also just seems consistently better. Claude does its best work in CC. I'm sure the same is true of Codex.


Cursor Composer appears to have this type of coupling and uses IDE resources better than other models on average.


Seems you haven't heard of Cursor 2.0

https://cursor.com/blog/2-0


Better? Hard to say. Different? Yes. Worth evaluating? Absolutely. Using it for 30 minutes will answer your question better than any reply here.

I've been coding seriously for about 15 years. No single tool has changed how I code more than Claude Code, and I'm including non-"AI" tooling/services. This sounds like I'm shilling, but I'm not affiliated. It's played a large part in injecting my passion back into building stuff.


Claude is able to detect the lines of code selected in VS Code anyway


As are Gemini CLI and Codex. I run my CLIs in VS Code and only use it as a file browser.


They all have optional IDE integration; e.g. Claude knows the active VS Code tab and highlighted lines.


Is that better than Cursor? Same? Just different?


All I can say is when I switched from Cursor to Claude it took me less than 24 hours to realise I wouldn’t go back. The extra UI Cursor slaps on to VS Code is just bloat, which I found quite buggy (might be better now though), and the output was nowhere near as good. Maybe things have improved since I switched but Claude CLI with VS Code is giving me no reasons to want to try anything else. Cursor seemed like a promising and impressive toy, Claude CLI is just a great product that’s delivering value for me every day.


VS Code has agents built in now; have you used that UI?


That particular part is the same, roughly. The bigger issue is just that CC's a better agent than Cursor, last I checked.

There's even an official Anthropic VS Code extension to run CC in VS Code. The biggest advantage is being able to use VS Code's diff views, which I like more than in the terminal. But the VS Code CC extension doesn't support all the latest features of the terminal CC, so I'm usually still in the terminal.


Yes, and you can select multiple files to give it focus. It can run anything in your PATH too; e.g. it's pretty good at using `gh` and so on.


Claude is just better at coding than Cursor.

Really, the interface isn't a meaningful part of it. I also like Cmd-L, but Claude just does better at writing code.

...also, it's nice that Anthropic is just focusing on making cool stuff (like skills), while the folks from Cursor are... I dunno, whatever it is they're doing with Cursor 2.0 :shrug:


Cursor can use the Claude Sonnet and Claude Opus LLMs, so I would expect output to be quite similar in that respect.

The agentic part of the equation is improving on both sides all the time.


There's something in the prompting, tooling, and heuristics inside the Claude Code CLI itself that makes it more than just the model it's talking to, and that becomes clear if you point your ANTHROPIC_BASE_URL at another model. The results are often almost equivalent.

Whereas I tried Kilo Code and CoPilot and JetBrains' agent and others directly against Sonnet 4, and the output was ... not good ... in comparison.

I have my criticisms of Claude but still find it very impressive.


Claude Code is much more efficient even compared to Cursor using the Anthropic models. The planning and tool use is much better.


Direct use of Codex + GPT5 or Claude Code CLI gives a better result, compared to using the same models in Cursor. I've compared both. Cursor applies some of their augmentation, which reduces the output size, probably to save on tokens.


I use Claude and Codex in VS Code and they work really well.

> I select the text and Cmd-L, saying "fix this part, it's broken like this

This flow works well.


Your intuitions are what give you your axioms and Bayesian priors, the starting point of deduction and analysis, as well as your values and top-level goals.

You can't justify any belief at all without axioms/priors, or make any decisions about what to do without values/goals.

Intuition is the thing that gives you those axioms and values; it's really the "only game in town" for generating them.


There really isn't any Bayesian "prior" for us. We exist as agents interacting with an environment qua data stream. Every single moment brings new flows of "data," and as such there isn't a sense of having a prior and posterior, since this millisecond's prior is last millisecond's posterior.


> Your intuitions are what give you your axioms and Bayesian priors

No, I would say it's your perceptions and memories (of past perceptions).


Visualizing large graphs is a "tarpit idea," one that initially seems appealing but never succeeds in practice.

Fundamentally, the problem is that visual aids can only really represent a few dozen things before they become as complicated as the thing you were trying to understand in the first place.

And when analyzing messy node diagrams, it’s not just the nodes we’re trying to visualize, but the lines connecting the nodes (the “edges”). We can only visualize a few dozen of those, and that typically means we can visualize only a handful of nodes at a time.
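To make the combinatorics concrete, here's a quick TypeScript sketch (function name made up) of how fast edge counts outrun what a diagram can legibly show, taking a fully connected graph as the worst case:

```typescript
// A complete graph on n nodes has n * (n - 1) / 2 edges.
const edgeCount = (n: number): number => (n * (n - 1)) / 2;

console.log(edgeCount(10)); // 45 edges -- busy, but still readable
console.log(edgeCount(30)); // 435 edges -- far past "a few dozen"
```

So even an optimistic budget of a few dozen legible edges caps a dense diagram at roughly ten nodes.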

Visualization only works in trivial examples where you don’t need it; it fails in complex environments where you need it the most.


This is my problem with node-based editors, the one I'm most familiar with being Blender's shader editor. I mean, sure, I guess it represents the internal structure. But it always feels so messy. Sometimes I wish Blender would just let me work with a netlist.


There's a threshold of both user ability and scale of problem.

I mostly agree with you dfabulich - the repeated efforts to create node/pipeline "visual programming language" tools are not built for us, and feel redundant.

But I took issue with "where you don't need it" as this is very much dependent on who "you" is.


I would go further. The fundamental problem is the idea that there is a fundamentally correct representation of something. This actually goes further than even the visualization of the graph. Symbolic representations have the same trap.


This article lists some text adventures to try, but the recommendations are pretty old.

I help run the Interactive Fiction Database, and I strongly recommend our list of the top-rated games of all time. They’re all fantastic. https://ifdb.org/search?browse


Counterfeit Monkey is very high up on my favorite games of the 2010s: https://ifdb.org/viewgame?id=aearuuxv83plclpl

The puzzles are all excellent, and the writing is remarkable, both as a game and as fiction.


Why do you love the name?


Because computering is missing out on whimsical fun in every way nowadays!


I help run the Interactive Fiction Database at https://ifdb.org.

You really can't go wrong browsing our list of the best games of all time. https://ifdb.org/search?browse

All of the top-rated games have walkthroughs or other hints for when you get stuck. My top advice for new players: use the hints.


>You really can't go wrong browsing our list of the best games of all time. https://ifdb.org/search?browse

You can, because those games are the best according to the preferences of interactive fiction connoisseurs, and the preferences of connoisseurs never match those of the masses.

E.g. beer connoisseurs love IPAs, while most people find them way too bitter.


Beer connoisseurs don't love IPAs. Modern IPAs are the most like Budweiser rice beers that you can get in the fancy beer world, so that's what people who aspire to look like connoisseurs prefer. They prefer them to be very bitter, and/or flavored with exotic fruits, because they can't judge quality and rely on distinctness.


I just meant that absolutely none of those games suck.


I started playing Infocom games before there were online hints or even Invisiclues. I knew one of the authors quite well and resorted to (literally) calling him from time to time :-)


The number one game on the list is a word game not a text adventure game.

If I played that as my first text adventure, I'd think text adventures were like advanced Scrabble or Wordle.


This is a good point! I have personally decided to save some of the all-time greatest games until I am better at text adventures and can enjoy them with fewer hints.


Mailing lists aren't federated. Everyone has to email one particular address at one particular domain; whoever controls that email address + domain can censor/block emails. (That's a good thing when you're blocking spam!)

If you're OK with the fact that mailing lists are somewhat centralized, there are actually a ton of great alternatives to pure mailing lists.

All popular open-source web forums support email notifications, and most of them support posting by email (I know phpBB and Discourse do), and all of them have sitemaps with crawlable archives.


You can run your own mail server and name server on top. The network of mail is very much federated.

In mail we have so many freedoms. We have become so locked into technology that we have to introduce a term like “federation” to signify the interoperability and freedom of a single component. Mail is federation layered upon federation.

The fact that you can just use a mailings list address as a member of another mailing list gives you even more federation possibilities. All with the simplest of all message exchange protocols.


> You can run your own mail server and name server on top. The network of mail is very much federated.

While I do completely agree with that in theory (and I also love mail), I think it does not stand the reality test, because email deliverability tends to be a nightmare.

How do you solve this? Do you use a third party SMTP?


I ran multiple mail servers for years until about 10 years ago (moved out of the industry). The deliverability problem, as far as I know, hasn't really changed that much in the last decade. The key was to configure DKIM, SPF, only use secure protocols and monitor the various black/block-lists to make sure you aren't on them for very long. In my experience, if you end up on a few bad lists, and don't react quickly, the reputation of your domain goes down rapidly and it's harder to get off said lists.

You also want some spam filtering, which, these days, is apparently much more powerful with local LLMs. I used to just use various bayesian classification tools, but I've heard that the current state of affairs is better. Having said that, when you've trained the tool, it does a pretty good job.

It's not "plug-and-play", but it's not that hard. Once you've got it up and running the maintenance load goes to almost zero.
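For reference, the three records mentioned above are just DNS TXT entries. A rough illustration (example.com, the selector, the IP, and the key are all placeholders):

```
example.com.                      TXT "v=spf1 ip4:203.0.113.10 -all"
selector._domainkey.example.com.  TXT "v=DKIM1; k=rsa; p=<base64 public key>"
_dmarc.example.com.               TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
```

SPF lists which hosts may send for the domain, DKIM publishes the signing key, and DMARC tells receivers what to do on failure and where to send reports.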


> It's not "plug-and-play", but it's not that hard. Once you've got it up and running the maintenance load goes to almost zero.

This is where I disagree. In my opinion it might not be that hard, but the maintenance is really not zero, as you just described: you need a reputable IP as a prerequisite and constant monitoring of block lists.

Just having DKIM, SPF, and DMARC really was not enough, last time I checked, to get delivered to, say, Outlook.


I just realised, and this could be a red herring, that almost all of the domains I've administered were based in Australia. I suppose it's possible that the IP ranges I'm dealing with have a better reputation than those from other countries. I have administered a few domains from US companies and IPs, but they've often been based in known data centres, which may help their cause. I can't really speak to the reliability of hosting a mail server on a consumer / small business IP in the US / Europe / Asia. It's possible that all known, common IPs in these areas have a natural disadvantage when it comes to reputation. I suppose try running a tunnel from your server to a small VPS in a known data centre? Not ideal, but it may help.

It would be annoying if entire US/European/Asian ISP IP ranges were immediately blocked. We should have moved on from that for many reasons unrelated to email.


The monitoring of block lists is much more important than people assume. I haven't looked into it in detail, but it always seemed like the reputation was based on a ratio of number of messages to known bad messages. If you have a moderately busy server, and you manage to keep off the block lists (or at least pro-actively remove yourself from them), then the reputation gets higher and higher, and the maintenance goes down.

If you're a domain that only receives occasional messages, and you end up on Spamhaus and co, you're gonna have a problem. It seems that reputation at small scale is viral. You need actively good reputation and response time. But, honestly, it seemed that it didn't take more than about 3 months per domain I administered until they were just accepted by the net as valid, good actors.


If you consistently don't receive mail you expect, then you stop giving money to your mail host and get a different one.


It's not about receiving. Receiving is the easy part. It is about the delivery of your own mail.

> you stop giving money to your mail host and get a different one.

I was entertaining the "host your own mail server" thought, I agree that if you don't host it yourself then you can change your provider if it fails you.


Who needs the transmission more - the sender, or the recipient?

Much of the time, when it's for signup verification, especially for a free service, they just write "don't use @live.microsoft.com" underneath the email address box. The user wants to be signed up for the service more than the service provider wants a new user, at least by enough to use an alternate email address. Enough cases like this, and the user quits @live.microsoft.com.


> if you don't host it yourself then you can change your provider if it fails you.

Even if you host it yourself :-). The key is to own your domain.


If I recall correctly, the domain is not the only issue; IP is also deeply involved, or am I wrong?


IP address is involved in some receiver's reputation calculation. It's never involved when sending to a domain.


Sure, but then your mail gets dropped on the other end. The main issue I had the last time I tried running my own setup for mail was basically getting an email to an Outlook or live.microsoft address. My mails were dropped for no reason, effectively not landing in my friends' mailboxes, and without any error on my side to tell me that my mail was getting rejected.

This is when I decided to stop trying getting through with this and came back to paying a provider.


The fact that it is a nightmare is a bit of a myth. Granted, not everybody can do it, but that's not necessary.

And then there are many mail providers other than Gmail. It's just that nobody cares and probably the fact that a ton of (most?) people were forced to create a Gmail account by Google.


> The fact that it is a nightmare is a bit of a myth. Granted, not everybody can do it, but that's not necessary.

I agree to some extent. But it is more involved than deploying a Discourse instance in my opinion.

> And then there are many mail providers other than Gmail. It's just that nobody cares and probably the fact that a ton of (most?) people were forced to create a Gmail account by Google.

100% agree. This is the tradeoff I went for. I would love for it to be easier to self host but you can definitely use another provider.


Basically never, because it would require re-standardizing the DOM with a lower-level API. That would take years, and no major browser implementor is interested in starting down that road. https://danfabulich.medium.com/webassembly-wont-get-direct-d...

Killing JavaScript was never the point of WASM. WASM is for CPU-intensive pure functions, like video decoding.

Some people wrongly thought that WASM was trying to kill JS, but nobody working on standardizing WASM in browsers believed in that goal.


People should look at it from the same point of view Google takes with the NDK on Android.

"However, the NDK can be useful for cases in which you need to do one or more of the following:

- Squeeze extra performance out of a device to achieve low latency or run computationally intensive applications, such as games or physics simulations.

- Reuse your own or other developers' C or C++ libraries."

And I would argue WebGL/WebGPU are often better suited, given how clunky WebAssembly tooling still is for most languages.


It's really not that hard. Node.js has a very mature C FFI for interacting with JavaScript.

Just agree on some basic bindings

   wasb_create_u32(env: *const env, value: u32, result: *mut value) -> status
   wasb_create_string_utf8(env: *const env, str: *const c_char, length: i32, result: *mut value) -> status
There is even a shim that ports the Node.js C FFI to wasm: https://github.com/devongovett/napi-wasm/blob/main/index.mjs...

Heck, just include that in the browser so it doesn't need to be loaded at runtime

Stack that with browser support for WASI, and you have everything you need to work with the DOM from wasm, as well as things like threading.


That's the issue with WASM. Very, very few people want a limited VM that's only good for CPU-intensive pure functions. The WASM group has redirected a huge amount of resources to create something almost nobody wants.


A bit unrelated, but are there any recent benchmarks comparing it with native perf?

Hard to believe it can compete with V8 JIT anytime soon. Might be easier to integrate fast vector libraries within javascript engines.


I strongly agree. Algebraic types aren't "scary," but "algebraic types" is a bad term for what they are. In all popular languages that support "sum types" we just call them "unions."

Your favorite programming language probably already supports unions, including Python, TypeScript, Kotlin, Swift, PHP, Rust, C, and C++. C# is getting unions next year.
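For instance, a minimal TypeScript sketch (the type and function names are made up):

```typescript
// A tagged ("discriminated") union: a Shape is either a circle or a rect.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "rect"; width: number; height: number };

function area(s: Shape): number {
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius ** 2;
    case "rect":
      return s.width * s.height;
  }
}
```

The compiler narrows the union in each `case`; no algebraic vocabulary is needed to use it.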

The article never mentions the word "union," which is overwhelmingly more likely to be the term the reader is acquainted with. It mentions "sets" only at the start, preferring the more obscure terms "sum types" and "product types."

A union of type sets is kinda like a sum, but calling it a "sum" obscures what you're doing with it rather than revealing it. The "sum" is the sum of possible values in the union, but all of the most important types (numbers, arrays, strings) already have an absurdly large number of possible values, so computing the actual number of possible values is a pointless exercise, and a distraction from the union set.

Stop trying to make "algebraic" happen. It's not going to happen.


> In all popular languages that support "sum types" we just call them "unions."

When I was doing research on type theory in PL, there was an important distinction made between sum types and unions, so it’s important not to conflate them. Union types have the property that Union(A, A) = A, but the same doesn’t hold for sum types. Sum types differentiate between each member, even if they encapsulate the same type inside of it. A more appropriate comparison is tagged unions.


What you're calling a union type is not what GP is talking about.


So, a disjoint union in set theory terms?

If I understand correctly.


> Stop trying to make "algebraic" happen. It's not going to happen

It's been used for decades, there's no competitor, and ultimately it expresses a truth that is helpful to understand.

I agree that the random mixture of terminology is unhelpful for beginners, and it would be better to teach the concepts as set theory, sticking to set theoretic terminology. In the end, though, they'll have to be comfortable understanding the operations as algebra as well.
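The "algebra" really is just counting inhabitants. A quick TypeScript sketch (all names made up):

```typescript
type Bit = 0 | 1;      // 2 inhabitants
type Trit = 0 | 1 | 2; // 3 inhabitants

// Product: one Bit AND one Trit -> 2 * 3 = 6 possible pairs.
type Pair = [Bit, Trit];

// Sum: a Bit OR a Trit (tagged) -> 2 + 3 = 5 possible values.
type Either = { tag: "bit"; v: Bit } | { tag: "trit"; v: Trit };

const bits: Bit[] = [0, 1];
const trits: Trit[] = [0, 1, 2];
const pairs: Pair[] = bits.flatMap((b) => trits.map((t): Pair => [b, t]));
console.log(pairs.length);               // 6 = 2 * 3
console.log(bits.length + trits.length); // 5 = 2 + 3
```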


No, seriously, you literally never need to understand the operations as algebra. You just need to know how to use your language's type system.

None of the popular languages call them sum types, product types, or quotient types in their documentation. In TypeScript, it's called a "union type," and it uses a | operator. In Python, `from typing import Union`. In Kotlin, use sealed interfaces. In Rust, Swift, and PHP, they're enums.

There are subtle differences between each language. If you switch between languages frequently, you'll need to get used to the idiosyncrasies of each one. All of them can implement the pattern described in the OP article.

Knowing the language of algebraic types doesn't make it easier to switch between one popular language to another; in fact, it makes it harder, because instead of translating between Python and TypeScript or Python and Rust, you'll be translating from Python to the language of ADTs (sum types, tagged unions, untagged unions) and then from the language of ADTs to TypeScript.

People screw up translating from TypeScript to ADTs constantly, and then PL nerds argue about whether the translation was accurate or not. ("Um, actually, that's technically a tagged union, not a sum type.")

The "competitor" is to simply not use the language of algebraic data types when discussing working code. The competitor has won conclusively in the marketplace of ideas. ADT lingo has had decades to get popular. It never will.

ADT lingo expresses no deeper truth. It's just another language, one with no working compiler, no automated tests, and no running code. Use it if you enjoy it for its own sake, but you've gotta accept that this is a weird hobby (just like designing programming languages) and it's never going to get popular.


If you're so confident that algebraic data types will never become popular, I don't understand why you feel it's so important to convince a few nerds not to talk about them.

I don't think anyone here is advocating making the topic compulsory. There will always be some people who are interested in theory and others who aren't.


> None of the popular languages call them sum types, product types, or quotient types in their documentation.

It seems like you may want to spend a bit more time with the ML family of languages. Sure, you can argue the degree of what constitutes "popular" but OCaml and F# routinely make statistics on GitHub and Stack Overflow. The ML family will represent tuples as products directly (`A * B`), and sometimes use plus for sum types, too (though yes `|` is more popular even among the ML family), which they do call sum types.

> Knowing the language of algebraic types doesn't make it easier to switch between one popular language to another

The point of ADTs is not to build a universal type system but to generalize types into Set thinking. That does add some universals that a sum type should act like a sum type and a product type should act like a product type, but not all types in any type system are just sum types or just product types. ADT doesn't say anything about the types in a type system only that there are standard ways to combine them. It says there are a couple of reusable operators to "math" types together.

> ADT lingo expresses no deeper truth.

Haskell has been exploring the "deeper truths" of higher level type math for a long while now. Things like GADTs explore what happens when you apply things like Category Theory back on top of Algebraic Data Types, that because you have a + monoid and * monoid, what sorts of Monads do those in turn describe.

Admittedly yes, a lot of those explorations still feel more academic than practical (though I've seen some very practical uses of GADTs), but just because the insights for now are mostly benefiting "the Ivory Tower" and "Arch Haskell Wizards" doesn't mean that they don't exist. The general trend of this sort of thing is that first academia explores it, then compilers start to use it under the hood of how they compile, then the compiler writers find practical versions of it to include inside their own languages. That seems to be the trend in motion already.

Also, just because it's a relatively small audience building compilers doesn't mean we don't all benefit from the things they explore/learn/abstract/generalize. We might not care to know the full "math" of it, but we still get practical benefits from using that math. If a compiler author does it right, you don't need to know mathematical details and jargon like "what is a product type", you indeed just use the fruits of that math like "Tuples". The math improves the tools, and if the language is good to you, the math provides you with more tools and more freedom to generalize your own algorithms beyond simple types and type constructs, whether or not you care to learn the math. (That said, it does seem like a good idea to learn the type math here. ADTs are less confusing than they sound.)


The explainer goes into more detail about how it would work, if it ever ships: https://github.com/dickhardt/email-verification-protocol

> The Email Verification Protocol enables a web application to obtain a verified email address without sending an email, and without the user leaving the web page they are on. To enable the functionality, the mail domain delegates email verification to an issuer that has authentication cookies for the user. When the user provides an email to the HTML form field, the browser calls the issuer passing authentication cookies, the issuer returns a token, which the browser verifies and updates and provides to the web application. The web application then verifies the token and has a verified email address for the user.

> User privacy is enhanced as the issuer does not learn which web application is making the request as the request is mediated by the browser.

