
It already works; it's how people in China purchase adult content online, which is illegal there. Usually with USDT (which is also illegal there).

Have some empathy. Even if it's not certain, there's a non-zero possibility that a large number of developer jobs (especially those focused purely on coding) will go the way of factory workers and switchboard operators, and if that happened it'd be a very tough transition for many people.

I think there are just droves of people who came to software thinking it was the new stable, solid, respectable way to make money sitting in front of a computer.

I saw so many people who were not following what they liked, but rather "a good choice".

Now many are scared instead of excited and/or skeptical, because they don't trust their own ability to reinvent themselves all the time. We do change a lot of practices in short periods of time, and this is similar to an extent. Learning about and guiding this new world is extremely interesting. You don't need to lie like a CEO, either. Just solve the new challenges, and guide and build what comes next.


Recent LLMs are fairly good at following instructions, so a lot of the difference comes down to the level of detail and quality of the instructions given. Written communication is a skill for which there's a huge amount of variance among developers, so it's not surprising that different developers get very different results. The relative quality of the LLM's output is determined primarily by the written communication skills of the individuals instructing it.

It seems to me that if you can spell out all these instructions clearly, then you already know everything needed, it's easy for you to write the code yourself, and you don't need an LLM.

The amount of typing it takes to describe a solution in English text is often less than the amount of typing needed to actually implement it in code, especially after accounting for boilerplate and unit tests. Not to mention the time spent waiting for the compiler and test harness to run. As a concrete example, the HTTP/2 spec is far shorter than any HTTP/2 server implementation, the C spec is far shorter than any compliant C compiler, and the C++ spec is far, far shorter than any compliant C++ compiler.

>The amount of typing it takes to describe a solution in English text is often less than the amount of typing needed to actually implement it in code

I don't find this to be true. I find describing a solution well in English to be slower than describing the problem in code (i.e., by writing tests first) and having that be the structured data that the LLM uses to generate code.

It's far faster, from the results I'm seeing plus my own personal experience, to write clear tests, which benefit from being a form of structured data that the LLM can analyze. It's the guidance we have given to our engineers at my day job, and it has made working with these tools dramatically easier.
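
As a minimal sketch of what that structured data can look like (assuming HUnit; the function name `slugify` and the test cases are hypothetical, my own illustration), the LLM is handed a failing test file first and asked to make it pass:

    import Test.HUnit

    -- The function under test: just a stub for the LLM to fill in.
    -- (Hypothetical name, for illustration only.)
    slugify :: String -> String
    slugify = undefined

    -- Tests written first, acting as the structured specification
    -- the LLM generates code against.
    tests :: Test
    tests = TestList
      [ "lowercases"           ~: slugify "Hello World"  ~?= "hello-world"
      , "drops punctuation"    ~: slugify "Rock & Roll!" ~?= "rock-roll"
      , "collapses whitespace" ~: slugify "a   b"        ~?= "a-b"
      ]

    main :: IO ()
    main = runTestTT tests >>= print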

In some cases, I have found LLM performance to be subpar enough that it is, indeed, faster to write the code myself. If it has to hold many different pieces of information together, it starts to falter.


I don't think it's so clear-cut. The C spec I found is 4MB and the tcc compiler source code is 1.8MB. It might need some more code to be fully compliant, but it may still be smaller than 4MB. I think the main reason why code bases are much larger is that they contain stuff not covered by the spec (optimization, vendor-specific stuff, etc.).

Personally I'd rather write a compiler than a specification, but to each their own.


Use shorter variable names

Corpus also matters. I know Rust developers who aren't getting very good results even with high quality prompts.

On the other hand, I helped integrate Cursor as a staff engineer at my current job for all our developers (many hundreds), who primarily work in JavaScript / TypeScript, and even middling prompts will get results that only require refactoring, assuming the LLM doesn't need a ton of context for the code generation (e.g. greenfield or independent features).

Our general approach and guidance has been that developers need to write the tests first and have Cursor use those as a basis for what code to generate. This helps prevent atrophy, and over time we've found that's where developers add the most value with these tools. I know plenty of developers want to do it the other way (have AI generate the tests), but we've had more issues with that approach.

We discourage AI generating everything and having a human edit the output, as it tends to be slower than our chosen approach and more likely to have issues.

That said, LLMs still struggle if they need to hold a lot of context; for instance, if there are a bunch of files they need to understand in order to generate worthwhile code, particularly if you want them to re-use existing code.


>Corpus also matters. I know Rust developers who aren't getting very good results even with high quality prompts.

Which model were they using, out of interest? I've gotten decent results for Rust from Gemini 2.5 Pro. Its first attempt will often be disgusting (cloning and other inefficiencies everywhere), but it can be prompted to optimise that afterwards. It also helps a lot to think ahead about lifetimes and explicitly tell it how to structure them, if there might be anything tricky lifetime-wise.


No idea. I do know they all have access to Cursor and tried different models, even the more expensive options.

What you're describing, though, having to go through that elaborate detail, really drives home my point, and I think it shows a weakness in these tools that is a hidden cost to scaling their productivity benefits.

What I can tell you, though, both from observation and experience, is that because the corpus for TypeScript / JavaScript is vastly larger as it stands today, even Gemini 2.5 Pro will 'get to correct' faster with middling prompts than it will for a language like Rust.


I do a lot of work in a rather obscure technology (Kamailio) with an embedded domain-specific scripting language (C-style) that was invented in the early 2000s specifically for that purpose, and can corroborate this.

Although the training data set is not wholly bereft of Kamailio configurations, it's not well-represented, and it would be at least a few orders of magnitude smaller than any mainstream programming language. I've essentially never had it spit out anything faintly useful or complete Kamailio-wise, and LLM guidance on Kamailio issues is at least 50% hallucinations / smoking crack.

This is irrespective of prompt quality; I've been working with Kamailio since 2006 and have always enjoyed writing, so you can count on me to formulate a prompt that is both comprehensive and intricately specific. Regardless, it's often a GPT-2 level experience, or akin to running some heavily quantised 3bn parameter local Llama that doesn't actually know much of anything specific.

From this, one can conclude that a tremendous amount of reinforcement of the weights is needed before an LLM can produce useful results in anything that isn't quasi-universal.

I do think, from a labour-political perspective, that this will lead to some guarding and fencing to try to prevent one's work-product from functioning as free training for LLMs that the financial classes intend to use to displace you. I've speculated before that this will probably harm the culture of open-source, as there will now be a tension between maximal openness and digital serfdom to the LLM companies. I can easily see myself saying:

I know our next commercial product releases (based on open-source inputs), which are on-premise for various regulatory and security reasons, will be binary-only; I never minded customers looking through our plain-text scripts before, but I don't want them fed into LLMs for experiments with AI slop.


Yea, this! How many of the devs who say "it doesn't do what I expect" never tried to write up a plan of action before letting it just YOLO some new features? We have to learn to use this new tool, but how to do that is still changing all the time.

> We have to learn to use this new tool, but how to do that is still changing all the time.

so we need to program in natural language now, but targeting some subset of it that is not learnable?

and this is better how?


Yes, and writing for an LLM to consume is also its own skill, with nuances that are in flux as models and tooling improve.

There absolutely is; formally speaking, statements can be categorised into normative, saying how things should be, and positive, saying how things are. A politically neutral AI would avoid making any explicit or implicit normative statements.

>...and positive, saying how things are

This presumes that the AI has access to objective reality. Instead, the AI has access to subjective reports filed by fallible humans, about the state of the world. Even if we could concede that an AI might observe the world on its own terms, the language it might use to describe the world as it perceives it would be subjectively defined by humans.


That is exactly it. Humans are inherently subjective beings, seeing everything through their ideology, and as a result LLMs are, too.

They will always be a computer representation of the ideology that trained them.


AI simply not openly and proudly declaring itself MechaHitler while spreading White Supremacist lies and Racist ideology would be one small step in the right direction.

Cerebras have previously stated for other models they hosted that they didn't quantise, unlike Groq.

I just asked Gemini 2.5 Pro to write a function in Haskell to partition a list in linear time, and it did it perfectly. When you say you were using ChatGPT and Claude, do you mean you were using the free ones? Plain GPT-4o is very poor at coding.

    -- | Takes a predicate and a list, and returns a pair of lists.
    -- | The first list contains elements that satisfy the predicate.
    -- | The second contains the rest.
    partitionRecursive :: (a -> Bool) -> [a] -> ([a], [a])
    partitionRecursive _ [] = ([], []) -- Base case: An empty list results in two empty lists.
    partitionRecursive p (x:xs) =
        -- Recursively partition the rest of the list
        let (trues, falses) = partitionRecursive p xs
        in if p x
            -- If the predicate is true for x, add it to the 'trues' list.
            then (x : trues, falses)
            -- Otherwise, add it to the 'falses' list.
            else (trues, x : falses)
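    -- Example: partitionRecursive even [1..6] == ([2,4,6],[1,3,5])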

My Haskell reading is weak, but that looks like it would change the order of elements in the two lists, as you are prepending items to the front of `trues` and `falses` instead of "appending" them. Of course `append` is forbidden, because it takes linear time itself.

I just checked my code, and while I think the partition example still shows the problem, the problem I used to check is a similar, but different, one:

Split a list at an element that satisfies a predicate. Here is some code for that in Scheme:

    (define split-at
        (λ (lst pred)
          "Each element of LST is checked using the predicate. If the
    predicate is satisfied for an element, then that element
    will be seen as the separator. Return 2 values: The split
    off part and the remaining part of the list LST."
          (let iter ([lst° lst]
                     [index° 0]
                     [cont
                      (λ (acc-split-off rem-lst)
                        (values acc-split-off rem-lst))])
            (cond
             [(null? lst°)
              (cont '() lst°)]
             [(pred (car lst°) index°)
              (cont '() (cdr lst°))]
             [else
              (iter (cdr lst°)
                    (+ index° 1)
                    (λ (new-tail rem-lst)
                      (cont (cons (car lst°) new-tail)
                            rem-lst)))]))))
For this kind of stuff with constructed continuations, they somehow never get it. They will do `reverse`, `list->vector`, and `append` all day long, or make some other attempt at working around what you specify they shall not do. The concept of building up a continuation seems completely unknown to them.
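
For comparison, here is a rough Haskell rendering of the same continuation-building idea (my own sketch; the name `splitAtPred` is made up, not from the code above):

    -- Split a list at the first element satisfying the predicate
    -- (which also receives the element's index, as in the Scheme
    -- version above), building up a continuation rather than
    -- reversing an accumulator. The separator element is dropped.
    splitAtPred :: (a -> Int -> Bool) -> [a] -> ([a], [a])
    splitAtPred p = go 0 (\prefix rest -> (prefix, rest))
      where
        go _ cont []     = cont [] []
        go i cont (x:xs)
          | p x i        = cont [] xs
          | otherwise    = go (i + 1)
                              (\prefix rest -> cont (x : prefix) rest)
                              xs

    -- Example: splitAtPred (\x _ -> x == 3) [1..5] == ([1,2],[4,5])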

>Who, exactly, is clamoring for Recall in the first place?

NSA, CIA, maybe even ICE nowadays.


It'll be nice if this generates more pressure on programming language compilation times. If agentic LLMs get fast enough that compilation time becomes the main blocker in the development process, there'll be significant economic incentives for improving compiler performance.

It's a conspiracy theory to suggest that there's some deliberate elite conspiracy to replace white Canadians. It's not a conspiracy theory to state that if current demographic trends continue then whites (as in people with white skin) will be a minority there by the end of the century; it's basic extrapolation.

Also, it being a theory about a conspiracy doesn't necessarily make it wrong. Rich and powerful people often do conspire against labor. This even happens in the tech industry, where industry leaders across ostensibly competing corporations conspire with each other in anti-poaching schemes to suppress wages. Industrialists like Elon Musk flagrantly try to buy politicians for the obvious purpose of importing cheaper and easily coerced H-1B workers, to again suppress wages. Elon Musk has no tact or subtlety, so he doesn't bother to hide it, but what he's doing is the norm for his economic class. Conspiracy against common workers is a recurring theme throughout history and into the present. Dismissing out of hand the possibility of such a conspiracy happening in Canada, even as unprecedented levels of ethnic replacement with servile and desperate workers from an impoverished third-world country are underway, is simply absurd.

Source?

For the trends, I mean, not the extrapolation.


For the trends, https://en.wikipedia.org/wiki/Demographics_of_Canada has some charts in the "Ethnic origin" section.

Yeah, dismissing data as racist is quite annoying. I looked at the fertility rate in Poland and it is 0.33 per 1,000. In Portugal it got better at the expense of 10% of the population being immigrants, in a span of 5 years. Of course the natives revolted, all while they enjoy their lives without dependants.

The fault is not that of the Indians, or of immigrants who come for a better life and are often needed. The fault lies with all of the developed world, and some of the rest, deciding that having children is not good. I have the very controversial opinion that not wanting to have children is a disease, as we are living organisms and all living organisms reproduce. There may be manageable diseases, but the current demographic situation is a public health crisis.

I am very capitalist, but if there is one thing the state needs to intervene in, it is to make any kind of employment disturbance to families a severe liability. I just learned of a parent who lost his job after coming back from paternity leave; for me that company is in my bad books forever, and I sold its stock.

More importantly, evolution will route around the people who don't want to reproduce, because they will not pass on their childless traits. Crudely, it treats all childless people, even if they live to 100, as dead at birth. If somebody said that 60% of the developed world will die in about 5 generations, it would be a catastrophe fit for a Bruce Willis movie, but as it takes time, and we have immigrants with other fertility inclinations to fill the gap, the frog boils slowly into oblivion.



The problem isn't just "oh no, Canadian no have job"

the problem is people coming to Canada on their version of the H-1B visa and accepting jobs for 1/3 as much as Canadians. It's the same problem as in the US. It's not a racial problem, it's a socio-economic problem. These companies are trying to squeeze out every loonie and toonie they can, and the best way to do it is to replace your workforce with people who are willing to work for 1/3 as much just to be able to move there.


If they wanted to make an argument about economic exploitation of workers by capitalist institutions, they would have done so. Instead, once great replacement was brought up, they began defending it as real. For them, and for many, it's about race.

I'm with you - corporations absolutely exploit cheap labor at the expense of all else, including their "home nation" (a concept alien to corporations; corporations are "money go up" algorithms, nothing more). That corporations will pillage other countries for cheap labor is baked into the foundations of capitalism, and was often written about as something we'd need to figure out when we reached late-stage capitalism if we wanted to keep the system going.

Let's not mistake racists for ideological allies, though they may indeed be class allies who have unfortunately chosen a false reactionary explanation for the degradation of their country and comfort. Not just any false reactionary explanation, but the oldest one in the book, so rote and cliche I'm honestly surprised people still fall for it. "It's not the billionaires that control the means of production and all aspects of society that are making your life worse, it's uh.. uh... the Je - oh, we can't do that one anymore? The Bla - oh wait, that's not allowed anymore either? The INDIANS and MEXICANS!!"


1. The facts absolutely do support it, even Wikipedia: https://en.wikipedia.org/wiki/Demographics_of_Canada . Check the "Canada visible minority, aboriginal and Caucasian (assumed for 1981 to 2016) population as a percentage of the total population over time" chart there. The "Caucasian" proportion is down from 93.3% in 1981 to 69.8% in 2021

2. I defined white as: the skin looks white, not brown or darker. If you want something more specific: someone whom 90+% of Canadian people would agree "looks like a white person". Or just use whatever definition of "Caucasian" Wikipedia uses.

3. Nobody said anything about good or bad. The question of whether it's good or bad is completely orthogonal to the question of whether or not "white" people are projected to become a minority in Canada.


> 1. The facts absolutely do support it, even Wikipedia: https://en.wikipedia.org/wiki/Demographics_of_Canada . Check the "Canada visible minority, aboriginal and Caucasian (assumed for 1981 to 2016) population as a percentage of the total population over time" chart there. The "Caucasian" proportion is down from 93.3% in 1981 to 69.8% in 2021

No, you're making assumptions; you don't have a strong statistical argument here. You're just pointing at two numbers and saying "obviously this number that was once 93 and is now 69 will one day be 49." Why not "one day 20"? Why not "one day 1%? 0%"? You've done nothing here but point at statistics and hand-wave.

2. Ok! let's take a look at the wikipedia definition of Caucasian.

https://en.wikipedia.org/wiki/Caucasian_race

> The Caucasian race (also Caucasoid,[a] Europid, or Europoid)[2] is an obsolete racial classification of humans based on a now-disproven theory of biological race.

Oh.

> If you want more specific: someone that 90+% of Canadian people would agree "looks like a white person".

I guarantee you will never get 90% agreement on someone's race in a consistent manner. This is absurd on its face. If people can't even figure out whether Selena Gomez is white or not, it's just not gonna work out. Also, it still feels a little racist over here to be trying to draw lines around "white."

3. Oh, then, there's nothing to discuss, right?

It's very obvious that people are saying it's bad, btw. You've got rayiner over there saying NYC and Jersey would be better if they'd stayed British and Dutch, and if it's not bad, why are you trying to argue about it at all? So, then, I'll ask you directly: you're perfectly fine if the majority of Canada is one day whatever it is you call non-white?


>Chile would be a much better place if the CIA didn't overthrow its democracy and install a fascist dictator

Empirically that's not a very well-supported statement, if you compare the economy and living conditions of Chile to its neighbours. Empirically speaking, electing communist governments almost always leads to reduced living standards. It's like how, if the US hadn't intervened to help a fascist dictator in South Korea, the whole of South Korea would be as poor as North Korea is now.


> Empirically speaking, electing communist governments almost always leads to reduced living standards.

Almost always it leads to a CIA-backed coup or civil war, which indeed indirectly reduces living standards. In other scenarios it has resulted in generally improved living standards via industrialization, increased literacy, and social programs; in yet others, gross mismanagement and large-scale famines, or fluctuating results depending on the time scale. There is no commonly accepted uniform outcome, and "almost always worse living standards" is clearly not it.

