Hacker News | mythical_39's comments

One possible argument against the productivity claim is if the migration introduced too many bugs to be usable.

In which case the code produced has zero value, resulting in a wasted month.


Put it into a storage locker. Pre-pay 5 years of fees on the locker. Wait 5 years.

This is what we call a 'self-solving problem'.


Where I live none of the storage companies will let you pre-pay 1 year of rent let alone 5.

A major part of their business model is giving low initial rates and then raising the rent over and over knowing what a pain it will be for the renters to move to a different storage place.

The industry acronym for the strategy is "ECRIs", existing customer rate increases, and it is a profitability metric they track.


To be clear, it has been in storage for the past two-plus decades. It was moving houses every year for the last few years, with only the essentials, that made me realise how much of a hoarder I was. Nevertheless, dumping it somewhere and "forgetting" about it is not much of a solution - as someone else said here, you are just passing on the problem to the next generation.


I think the implication is once the fees stop being paid it ceases to be your problem. The storage company will sell the contents of your locker.


huh. I'm a professional data scientist, and my master's was in signal processing. In one class the final exam required us to transcribe Fourier transforms of speech into the actual words. In another, the final exam required us to perform 2D FFTs in our heads.

Please be careful about generalizing.

I agree that many 'data science' programs don't teach these skills, and you certainly have evidence behind your assertion.


I don’t think anyone is making the claim that data science has no merit, or data scientists are universally ignorant of anything.

Simply that some data scientists, formally trained or titled by themselves or others, have been known to apply their skills to data without having special knowledge regarding the data.

It is a bit of a cliché in some of our experiences. The consulting company that analyzes data for a decision-paralyzed organization, which seeks outside guidance in lieu of getting better leadership, is something I see.

That is a real phenomenon, and despite good intentions, can have all the effectiveness of reading tea leaves.

Because there is always data to be scienced. Competently or not.


> ... my master's was in signal processing

But, you are making my point for me here. Most data scientists don't get master's degrees in signal processing. You are also acknowledging that gaining expertise in a particular field was worth pursuing.


There is also a dimension which is about caring for your fellow man and caring for the land which the food you grow depends on.

Which society seems to have lost because we've focused too much on the one metric, "money", over all others.


Agreed. In addition, there is the dimension of not wanting to starve to death, so we should make sure we have a viable, regulated agriculture industry.


wait, did you see the part where the person you are replying to said that writing the code themself was essential to correctly solving the problem?

Because they didn't understand the architecture or the domain models otherwise.

Perhaps in your case you do have strong hands-on experience with the domain models, which may indeed have shifted your job requirements to supervising those implementing the actual models.

I do wonder, however, how much of your actual job also entails ensuring that whoever is doing the implementation is also growing in their understanding of the domain models. Are you developing the people under you? Is that part of your job?

If it is an AI that is reporting to you, how are you doing this? Are you writing "skills" files? How are you verifying that it is following them? How are you verifying that it understands them the same way that you intended it to?

Funny story-- I asked a LLM to review a call transcript to see if the caller was an existing customer. The LLM said True. It was only when I looked closer that I saw that the LLM meant "True-- the caller is an existing customer of one of our competitors". Not at all what I meant.


I saw that part and I disagreed with the very notion, hence why I wrote what I did.

> Because they didn't understand the architecture or the domain models otherwise.

My point is that requiring or expecting an in-depth understanding of all the algorithms you rely on is not a productive use of developer time, because outside narrow niches it is not what we're being paid for.

It is also not something the vast majority of us do now, or have done for several decades. I started with assembler, but most developers have never-ever worked less than a couple of abstractions up, often more, and leaned heavily on heaps of code they do not understand because it is not necessary.

Sometimes it is. But for the vast majority of us pretending it is necessary all the time or even much of the time is a folly.

> I do wonder, however, how much of your actual job also entails ensuring that whoever is doing the implementation is also growing in their understanding of the domain models. Are you developing the people under you? Is that part of your job?

Growing the people under me involves teaching them to solve problems, and already long before AI that typically involved teaching developers to stop obsessing over details with low ROI for the work they were actually doing, in favour of understanding and solving the problems of the business. Often that meant making them draw a line between what actually served the needs they were paid to solve and what was personally fun to them. (I've been guilty of diving into complex low-level problems I find fun rather than what solves the highest-ROI problems too - ask me about my compilers, my editor, my terminal - I'm excellent at yak shaving, but I work hard to keep that away from my work.)

> If it is an AI that is reporting to you, how are you doing this? Are you writing "skills" files? How are you verifying that it is following them? How are you verifying that it understands them the same way that you intended it to?

For AI use: Tests. Tests. More tests. And, yes, skills and agents. Not primarily even to verify that it understands the specs, but to create harnesses to run them in agent loops without having to babysit them every step of the way. If you use AI and spend your time babysitting them, you've become a glorified assistant to the machine.


Most of the tests are BS too.

And nobody is talking about verifying if the AI bubble sort is correct or not - but recognizing that if the AI is implementing its own bubble sort, you’re waaaay out in left field.

Especially if it’s doing it inline somewhere.

The underlying issue with AI slop, is that it’s harder to recognize unless you look closely, and then you realize the whole thing is bullshit.


> Most of the tests are BS too.

Only if you don't constrain the tests. If you use agents adversarially in generating test cases, tests and review of results, you can get robust and tight test cases.

Unless you're in research, most of what we do in our day jobs is boilerplate. Using these tools is not yet foolproof, but with some experience and experimentation you can get excellent results.


I don’t have to do boilerplate, generally.

And with all the stacking LLMs against each other, that just sounds like more work than just… writing the damn tests.


> I don’t have to do boilerplate, generally.

I meant this more in the sense of there is nothing new under the sun, and that LLMs have been trained on essentially everything that's available online "under the sun". Sure, there are new SaaS ideas every so often, but the software to produce the idea is rarely that novel (in that you can squint and figure out roughly how it works without thinking too hard), and is in that sense boilerplate.


hahaha, oh boy. that is roughly as useful or accurate as saying that all machines are just combinations of other machines, and hence there is nothing unique about any machine.


Vertical CNC mills and CNC lathes are, obviously, different machines with different use cases. But if you compare within the categories, the designs are almost all conceptually the same.

So, what about outside of some set of categories? Well, generally, no such thing exists: new ideas are extremely rare.

Anyone who truly enjoys entering code character for character, refusing to use refactoring tools (e.g. rename symbol), and/or not using AI assistance should feel free to do so.

I, on the other hand, want to concern myself with the end product, which is a matter of knowing what to build and how to build it. There’s nothing about AI assistance that entails that one isn’t in the driver’s seat wrt algorithm design/choices, database schema design, using SIMD where possible, understanding and implementing protocols (whether HTTP or CMSIS-DAP for debugging microcontrollers over USB JTAG probe), etc, etc.

AI helps me write exactly what I would write without it, but in a fraction of the time. Of course, when the rare novel thing comes up, I either need to coach the LLM, or step in and write that part myself.

But, as a Staff Engineer, this is no different than what I already do with my human peers: I describe what needs doing and how it should be done, delegate that work to N other less senior people, provide coaching when something doesn’t meet my expectations, and I personally solve the problems that no one else has a chance of beginning to solve if they spent the next year or two solely focused on it.

Could I solve any one of those individual, delegated tasks faster if I did it myself? Absolutely. But could I achieve the same progress, in aggregate, as a legion of less experienced developers working in parallel? No.

LLM usage is like having an army of Juniors. If the result is crap, that’s on the user for their poor management and/or lack of good judgement in assessing the results, much like how it is my failing if a project I lead as a Staff Engineer is a flop.


Sounds like she doth protest too much?


> Most of the tests are BS too.

Why are you creating BS tests?

> And nobody is talking about verifying if the AI bubble sort is correct or not - but recognizing that if the AI is implementing it’s own bubble sort, you’re waaaay out in left field.

Verifying time and space complexity is part of what your tests should cover.

But this is also a funny example - I'm willing to bet the average AI model today can write a far better sort than the vast majority of software developers, and is far more capable of analyzing time and space complexity than the average developer.

In fact, I just did a quick test with Claude, and asked for a simple sort that took into account time and space complexity, and "of course" it knows that it's well established that pure quicksort is suboptimal for a general-purpose sort, and gave me a simple hybrid sort based on insertion sort for small arrays, a heapsort fallback to stop pathological recursion, and a decently optimized quicksort - this won't beat e.g. timsort on typical data, but it's a good tradeoff between "simple" (quicksort can be written in 2-20 lines of code or so depending on language and how much performance you're willing to sacrifice for simplicity) and addressing the time/space complexity constraints. It's also close to a variant that incidentally was covered in an article in DDJ ca. 30 years ago, because most developers didn't know how to write one, and were still writing stupidly bad sorts manually instead of relying on an optimized library. Fewer developers know how to write good sorts today. And that's not bad - it's a result of not needing to think at that level of abstraction most of the time any more.
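For anyone curious what such a hybrid looks like, here is a hedged sketch in C (not the code Claude actually produced; the cutoff, depth budget, and names are illustrative): insertion sort for small slices, quicksort with a median-of-three pivot, and heapsort as the fallback when recursion depth suggests pathological input:

```c
#include <stddef.h>

/* Cutoff below which insertion sort wins; 16 is a typical guess,
   real libraries tune this empirically. */
#define SMALL_CUTOFF 16

static void swap_int(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Insertion sort: O(n^2) worst case, but fastest on tiny slices. */
static void insertion_sort(int *a, size_t n) {
    for (size_t i = 1; i < n; i++) {
        int key = a[i];
        size_t j = i;
        while (j > 0 && a[j - 1] > key) { a[j] = a[j - 1]; j--; }
        a[j] = key;
    }
}

/* Heapsort: guaranteed O(n log n) time, O(1) extra space; the
   fallback that stops quicksort's pathological O(n^2) cases. */
static void sift_down(int *a, size_t root, size_t end) {
    while (2 * root + 1 <= end) {
        size_t child = 2 * root + 1;
        if (child < end && a[child] < a[child + 1]) child++;
        if (a[root] >= a[child]) return;
        swap_int(&a[root], &a[child]);
        root = child;
    }
}

static void heap_sort(int *a, size_t n) {
    if (n < 2) return;
    for (size_t i = n / 2; i-- > 0; ) sift_down(a, i, n - 1);
    for (size_t end = n - 1; end > 0; end--) {
        swap_int(&a[0], &a[end]);
        sift_down(a, 0, end - 1);
    }
}

/* Quicksort with a median-of-three pivot and a recursion-depth cap. */
static void introsort_rec(int *a, size_t n, int depth) {
    if (n <= SMALL_CUTOFF) { insertion_sort(a, n); return; }
    if (depth == 0)        { heap_sort(a, n);      return; }

    /* Order a[0], a[mid], a[n-1]; the median becomes the pivot. */
    size_t mid = n / 2;
    if (a[mid] < a[0])     swap_int(&a[mid], &a[0]);
    if (a[n - 1] < a[0])   swap_int(&a[n - 1], &a[0]);
    if (a[n - 1] < a[mid]) swap_int(&a[n - 1], &a[mid]);
    int pivot = a[mid];

    /* Hoare partition. */
    size_t i = 0, j = n - 1;
    for (;;) {
        while (a[i] < pivot) i++;
        while (a[j] > pivot) j--;
        if (i >= j) break;
        swap_int(&a[i], &a[j]);
        i++; j--;
    }
    introsort_rec(a, j + 1, depth - 1);
    introsort_rec(a + j + 1, n - j - 1, depth - 1);
}

void hybrid_sort(int *a, size_t n) {
    int depth = 0;                      /* ~2*log2(n) depth budget */
    for (size_t m = n; m > 0; m >>= 1) depth++;
    introsort_rec(a, n, 2 * depth);
}
```

This is essentially the "introsort" scheme standard libraries use; the point is how little of it a working developer needs to reinvent by hand.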

And this is also a great illustration of the problem: even great developers often have big blind spots, where AI will draw on results they aren't even aware of. Truly great developers will be aware of their blind spots and know when to research, but most developers are not great.


But a human developer, even a not so great one, might know something about the characteristics of the actual data a particular program is expected to encounter that is more efficient than this AI-coded hybrid sort for this particular application. This is assuming the AI can't deduce the characteristics of the expected data from the specs, even if a particular time and space complexity is mandated.

I encountered something like this recently. I had to replace an exact data comparison operation (using a simple memcmp) with a function that would compare data and allow differences within a specified tolerance. The AI generated beautiful code using chunking and all kinds of bit twiddling that I don't understand.

But what it couldn't know was that most of the time the two data ranges would match exactly, thus taking the slowest path through the comparison by comparing every chunk in the two ranges. I had to stick a memcmp early in the function to exit early for the most common case, because it only occurred to me during profiling that most of the time the data doesn't change. There was no way I could have figured this out early enough to put it in a spec for an AI.
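The shape of that fix can be sketched like this (names and types are illustrative assumptions, not the actual code being described): a single memcmp handles the common unchanged case before falling back to the tolerant element-wise path:

```c
#include <math.h>
#include <stddef.h>
#include <string.h>

/* Compare two float buffers within a tolerance. Illustrative sketch,
   assuming float data; the real slow path was chunked/bit-twiddled. */
int within_tolerance(const float *a, const float *b, size_t n, float tol) {
    /* Fast path: profiling showed the buffers are usually identical,
       so one memcmp skips the per-element work. (Bitwise equality
       implies numeric equality here; bitwise-identical NaNs are also
       treated as "unchanged" by this path.) */
    if (memcmp(a, b, n * sizeof *a) == 0)
        return 1;

    /* Slow path: element-wise comparison with tolerance. */
    for (size_t i = 0; i < n; i++)
        if (fabsf(a[i] - b[i]) > tol)
            return 0;
    return 1;
}
```

The interesting part is not the slow path but the fast one: it encodes a fact about the data (it rarely changes) that only profiling, not the spec, revealed.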


> But a human developer, even a not so great one, might know something about the characteristics of the actual data a particular program is expected to encounter that is more efficient than this AI-coded hybrid sort for this particular application.

Sure. But then that belongs in a test case that 1) documents the assumptions, 2) demonstrates whether a specialised solution actually improves on the naive implementation, and 3) will catch regressions if/when those assumptions no longer hold.

My experience in that specific field is that odds are the human is making incorrect assumptions, very occasionally is not, and having a proper test harness to benchmark this is essential to validate the assumptions whether the human or an AI does the implementation (and not least in case the characteristics of the data end up changing over time).


>There was no way I could have figured this out early enough to put it in a spec for an AI.

This is an odd statement to me. You act like the AI can only write the application once and can never look at any other data to improve the application again.

>only occurred to me during profiling

At least to me this seems like something that is at far more risk of being automated than general application design in the first place.

Have the AI design the app. Pass it off to CI/CD testing and compile it. Send to a profiling step. AI profile analysis. Hot-spot identification. Return to the AI to iterate. Repeat.


> At least to me this seems like something that is at far more risk of being automated then general application design in the first place.

This function is a small part of a larger application with research components that are not AI-solvable at the moment. Of course a standalone function could have been optimised with AI profiling, but that's not the context here.


If your product has code in it that can only be understood and worked on by the person that wrote it, then your code is too complex and underdocumented and/or doesn't have enough test coverage.

Your time would be better spent, in a permanent code base, trying to get that LLM to understand something than it would be trying to understand the thing yourself. It might be the case that you need to understand the thing more thoroughly yourself so you can explain it to the LLM, and it might be the case that you need to write some code so that you can understand it and explain it, but eventually the LLM needs to get it based on the code comments and examples and tests.


> I don't think an American civil war in the next 5 years is at all likely.

Minnesota might like a word with you


Your first paragraph is close to correct, but have you thought of the difference between supportive vs coercive power?

If the Russian army is the force that enables Russian foreign policy, then why do some people in Ukraine think that they don't want to do things the Russian way?

Likewise, I wonder how helpful the US military will be at forcing our former allies to do things they don't want to do?


It's a fair question but the answer is fairly simple: the Russian military simply isn't strong. The only reason they haven't been bombed back into the Stone Age is because they have a nuclear arsenal. That has limited the West's response to being "proportionate".

The Russian-Ukraine front is kinda like WW1. There's no real air power to speak of. The front is dominated by artillery and infantry and fortifications like trenches.

Russia cannot project military power anywhere like how the US can and the US has decades of projecting that power to force countries to capitulate, essentially. Europe outsourced their security to the US, for example. But make no mistake: NATO is a protection racket. It projects American power into Europe.

This is one reason why I call this administration inept because they seem intent on splintering NATO, which actually diminishes American power. Just like disbanding USAID diminished American soft power, and quite cheaply at that.

The lesson since at least the (W) Bush administration is that only nuclear weapons can guarantee your survival.


many of whom are jacked on testosterone supplements.

The aggressiveness might be related to that.

https://www.them.us/story/testosterone-parties-silicon-valle... https://stand.ie/stand-newsroom/political-power-testosterone


not even P/E ratios resetting back to historic norms? Or have we finally entered a new age where highly elevated P/E are a permanent feature of markets?


"This is the new normal" is basically the sign of a bubble about to pop.


> ICE has widespread support for what they are actually doing

Minnesota would like to have a chat with you.


https://www.kttc.com/2026/01/20/americas-pulse-current-ice-a...

About 40-50% are in support, so the parent's accurate. That roughly matches the political divide across the states.


The first sentence says that that's a national poll, not a Minnesota poll


I mean, yes, conservatives like a gestapo mistreating their political opponents. That does not make it not a gestapo, or not lawless.

It makes conservatives who they are - a fascist party.


[flagged]


> deportation rates are much too low

“U.S. Border Patrol agents recorded nearly 238,000 apprehensions of migrants crossing the southern border illegally in fiscal year 2025” [1]. For 2012 to 2015, the chart shows about 360k, 420k, 480k and 330k, respectively.

That means ICE is spending $330 to 580 thousand dollars per additional Southwest border encounter in 2025 versus 2012. ($250 to 440 thousand if we average Obama’s second-term numbers.)

These numbers 10x even San Francisco’s circa 2016 homeless-industrial profligacy [2]. Unless ICE is a ball of wormy corruption, they’re clearly not focused on immigration enforcement.

If you prefer anecdotes, I live in Wyoming. Our farms are de facto exempt from enforcement. I believe in enforcing our immigration laws while we work to reform through the legislature. But that's clearly not what ICE is doing. The most-generous interpretation is they're making videos that make people who want enforcement feel good.

[1] https://www.whitehouse.gov/articles/2025/10/icymi-illegal-cr...

[2] https://www.hoover.org/research/despite-spending-11-billion-...


I've said for a while that ICE needs to go after businesses that hire illegal immigrants. So if you're looking for some kind of conflict in what I'm saying that isn't it.

That's why I said that ICE isn't doing enough.


It's kind of mind blowing people hate immigrants so much they are willing to burn their own country down to get rid of them.


I love immigrants lol. I just want immigrants that integrate and come into the country without breaking the law.


Why do you have so much hate for immigrants?


I'll tell you the same thing I told the other person:

I love immigrants lol. I just want immigrants that integrate and come into the country without breaking the law.


ICE is shooting bystanders in the face, blinding teenagers, and tear-gassing infants.


Thanks for sharing your perspective.


[dead]


It's just nice to know what my neighbors are thinking


> No, actually to be frank, I'd rather the Nazis go back to hiding under the floorboards in fear of public retribution.

You're defaming me and spreading lies. Any actual Nazis can go fuck themselves.


I don’t like that these kinds of posts get flagged. If a post is praising ICE or Trump it should be highlighted and mocked, not flagged and deleted. People should see how batshit insane MAGA people are.

