JackSlateur's comments | Hacker News

DNS search is stupid by itself and shall be avoided everywhere

For your sanity, only deal with FQDNs.
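A quick illustration of why (a sketch in Python, with hypothetical names):

  import socket

  def resolve(name: str):
      try:
          return {info[4][0] for info in socket.getaddrinfo(name, 443)}
      except socket.gaierror as err:
          return err

  # A bare name goes through the resolver's search list (resolv.conf),
  # so the answer depends on which host runs this.
  print(resolve("db"))

  # A fully-qualified name (the trailing dot makes it rooted) bypasses the
  # search list and resolves the same way everywhere.
  print(resolve("db.prod.example.com."))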


You may read Asimov, especially the books related to Solaria (https://asimov.fandom.com/wiki/Solaria)

Long story short: massive reduction of population

Of course, this is science fiction; but the question is still legit, and everybody knows the answers: how do we keep 8 billion people occupied? Useless work (as of now, for an increasing part of the population), drugs, or thinning the herd


If you truly have no conception of how to keep people "occupied", as your wording suggests, may I suggest studying even elementary-school-level humanities or philosophy.

It used to be, and I think will be again one day, that sociopathic comments like "thinning the herd" would be considered deeply troubling. It's clear, though, that a vision of a brighter future is necessary if we are to achieve it.


Anyone who can't think of something to do is the first to cause trouble...

No, they're the first to check in on their friends and family to see if they need help. They're the first to look for problems to solve.

But I did and I still do

Pascal, in his Pensées, introduced the idea that we (the human race) keep ourselves busy to avoid the anguish of the inner void that lurks in everybody, to avoid realizing too much that, in the end, life is meaningless, that we are nothing in the great scheme of the universe. So we work, we play, we do things, anything at all, and forget about ourselves.

Charles Baudelaire, in his poem "Enivrez-vous", spoke about the soul-crushing experience of doing nothing, and held that "being drunk on something, anything" is a requirement for living a good life (work, drugs, whatever): it is required to fight the terrible passage of time that burdens us.

Being on vacation is meaningful and appreciated because it contrasts with the rest of our life. Would we enjoy a lifetime of vacation? Yes at first, hardly afterwards (also, and this is lame but true, there is pleasure in getting things that not everybody can get).

In another way, and in essence, Maslow talks about the common need to solve problems, which some people see as the key to happiness: not avoiding problems, but solving them. There are always new problems in life, and if life fails to give us some, we find some ourselves.

So what would we do? Well, we could work on some very original ideas. Get more power than our neighbors. Get more things. Would that lead to conflicts and wars? No, no, of course not, that would be a first in history.

Now, you gave basically no information. Please answer me: how do we keep 8 billion people occupied? Not for a day, not for a month, but for a lifetime.


Because using a public key infrastructure for client certificates is terrible

mTLS is probably the only sane situation where a private PKI shall be used
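For instance, something along these lines on the server side (a minimal Python sketch; the cert and CA file names are made up):

  import ssl

  # Server-side context for mTLS: present our own cert, and only accept
  # clients whose certificates chain to our internal (private) CA.
  ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
  ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")
  ctx.load_verify_locations(cafile="internal-ca.pem")  # private CA, not the public PKI
  ctx.verify_mode = ssl.CERT_REQUIRED  # no valid client cert, no connection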


How are things done in the wider world?

In your answer (and excluding those using ACME): is this good behavior (that should be kept) or lame behavior (that we should aim to improve)?

Shorter and shorter cert lifetimes are a good idea because they are the only effective way to handle a private key leak. A better idea might exist, but nobody has found one yet.


Is IPsec still relevant?

It's not. What I have in mind is TLS-handshake-mediated keying and policy for ESP SA pairs. Why? Because ESP is much, much simpler to implement in silicon than TCP+TLS.

ESP is stateless if using IPv6 (no fragmentation), or even if using IPv4 (fragmented packets -> let the host handle them; PMTUD should mean no need for fragmentation the vast majority of the time). Statelessness makes HW offload easy to implement.


"Die a hero or live long enough to see yourself become the villain"

AI is both a near-perfect propaganda machine and, on the programming front, a self-fulfilling prophecy: yes, AI will be better at coding than humans. Mostly because humans are made worse by using AI.


It's a zero-sum game. AI cannot innovate; it can only predictively generate code based on what it's already seen. If we get to a point where new code is mostly or only written by AI, nothing new emerges. No new libraries, no new techniques, no new approaches. Fewer and fewer real developers means less and less new code.

Nonsense indeed. The model's knowledge is the current state of the art. Any computation it does advances it. It re-ingests the work of prior agents every time you run it on your codebase, so even though the model initializes the same way (until they update the model), upon repeated calls it ingests more and more novel information, inching the state of the art ever forwards.

Current state of the art? You must be joking. I see the code it has generated; some interns do better.

Obviously, you are also joking about AI being immune to consanguinity, right?


I've seen terrible things where it would overcomplicate and duplicate. But I've also seen it write really good code. I've been trying to get it to do the latter consistently. Detailed specs and heavy use of agents really helps with the code quality. The next step is editing the system prompts, to trim away any of the fat that's polluting the context.

If you have had interns who can write better code than Opus 4.5 I would very much like to hire them.

Nonsense. LLMs can easily build novel solutions based on my descriptions. Even in languages and with (proprietary) frameworks they have not been trained on, given a tiny bit of example code and the reference docs.

That's not novel, it's still applying techniques it's already seen, just on a different platform. Moreover, it has no way of knowing whether its approach is anywhere near idiomatic on that new platform.

I didn't say the platform was the novel aspect. And I'm getting pretty idiomatic code actually, just based on a bit of example code that shows it how. It's rather good at extrapolating.

> Mostly because humans are made worse by using AI.

I'm confident you are wrong about that.

AI makes people who are intellectually lazy and like to cheat worse, in the same way that a rich kid who hires someone to do their university homework for them is hurting their ability to learn.

A rich kid who hires a personal tutor and invests time with them is spending the same money but using it to get better, not worse.

Getting worse using AI is a choice. Plenty of people are choosing to use it to accelerate and improve their learning and skills instead.


[not an AI booster] I think you are the target of this article. I believe you are misunderstanding the current capacity of AI.

I think I spend too much time at work fixing the greatness of AI.

Are you hand-fixing the issues or having AI do it? I've found that second-pass quality is miles ahead of an initial implementation. If you're experienced, you'll know exactly where the code smells are. Point them out, and the agents will produce a much better implementation in this second pass. And have those people store the prompts in the repo! I put my specifications in ./doc/spec/*.md

Every time I got bad results, looking back I noticed my spec was just vague or relied on assumptions. Of course you can't fix your colleagues; if they suck, they suck, and somebody's gotta do the mopping :)


I think it would make sense to have these issues bubble up into the public consciousness of Hacker News.

I've never used AI to code; I'm a software architect and currently assume I get little value out of an LLM. It would be useful for me if this debate had a vaguely engineering-smelling quality to it, because it's currently just two groups shouting at each other and handwaving criticism away.

If you actually deal with AI-generated problems, please make a post about it; I'd love to have something concrete to point to.


PRs where somebody clearly doesn't know the tech well enough, or enough about how the complex app they're working on really works, and thus isn't able to tell a good design from a bad one for the feature they're building, but has AI-assisted themselves to something which "works", can become an absolute death spiral.

I wasted so much work time trying to steer one of these towards the light, which is very demotivating when design and "why did you do this?" questions are responded to with nothing but another flurry of commits. Even taking the time to fully understand the problem and suggest an alternative design which would fix most of the major issues did nothing (nothing useful must have emerged when that was fed into the coin slot...)

Since I started the review, I ended up becoming the "blocker" for this feature once people started asking why it hadn't landed yet (because I also have my own work to do), to the point where I just hit Approve: I knew it wouldn't work at all for the even more complex use cases I needed to implement in that area soon anyway, so I could just fix/rewrite it then.

From my own experience, the sooner you accept code from an LLM, the worse a time you're going to have. If it wasn't a good solution, or was even the wrong solution from the get-go, no amount of churning away at the code with an LLM will fix it. And if you _don't know_ how to fix it yourself, you can't suddenly go from reporting your great progress in stand-ups to "I have nothing" - maybe backwards progress is one of those new paradigms we'll have to accept?


Here is a sample

We are talking about a "stupid" tool that parses a google sheet and makes calls to a third-party API

So there is one google sheet per team, with one column per person

One line per day

And each day, someone is in charge of the duty

The tool grabs the data from the sheet and configures pagerduty so that alerts go to the right person

Very basic, no cleverness needed, really straightforward actually

So we have 1 person that wrote the code, with AI. Then we have a second person that checked the code (with AI). Then the shit comes to my desk, and I see this kind of cruft:

  def create_headers(api_token: str) -> dict:
    """Create headers for PagerDuty API requests.

    Args:
        api_token: PagerDuty API token.

    Returns:
        Headers dictionary.
    """
    return {
        "Accept": "application/vnd.pagerduty+json;version=2",
        "Authorization": f"Token token={api_token}",
        "Content-Type": "application/json",
    }
And then, we have 5 usages like this:

  def delete_override(
    base_url: str,
    schedule_id: str,
    override_id: str,
    api_token: str,
  ) -> None:
    """Delete an override from a schedule.

    Args:
        base_url: PagerDuty API base URL.
        schedule_id: ID of the schedule.
        override_id: ID of the override to delete.
        api_token: PagerDuty API token.
    """
    headers = create_headers(api_token)

    override_url = f"{base_url}/schedules/{schedule_id}/overrides/{override_id}"
    response = requests.delete(override_url, headers=headers, timeout=60)
    response.raise_for_status()


No HTTP keep-alive, no TCP reuse; the API key is passed down to every method, and so is the API endpoint. The timeout is redefined in each method. The file is ~800 lines of Python code, contains 19 methods, and only deals with PagerDuty (not the Google Sheet). It took 2 full-time days.
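For contrast, roughly the shape I would have expected (just a sketch, reusing the same hypothetical endpoint; a requests.Session gives keep-alive and connection reuse, and the token, base URL and timeout live in one place):

  import requests

  class PagerDutyClient:
      """One Session (HTTP keep-alive / TCP reuse); token, base URL and
      timeout stored in a single place."""

      def __init__(self, base_url: str, api_token: str, timeout: int = 60):
          self.base_url = base_url
          self.timeout = timeout
          self.session = requests.Session()
          self.session.headers.update({
              "Accept": "application/vnd.pagerduty+json;version=2",
              "Authorization": f"Token token={api_token}",
              "Content-Type": "application/json",
          })

      def delete_override(self, schedule_id: str, override_id: str) -> None:
          url = f"{self.base_url}/schedules/{schedule_id}/overrides/{override_id}"
          response = self.session.delete(url, timeout=self.timeout)
          response.raise_for_status()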

These people fail to produce anything meaningful; this is not really a surprise given their failure to do sane things on such a basic topic.

Does AI bring good ideas? Obviously no, but we knew this. Does AI improve the quality of the result (regardless of the quality of the idea)? Apparently no. Does AI improve productivity? Again, given this example: no. Are these people better, more skilled, or anything else? No.

Am I too demanding? Am I asking too much?


Try pasting that full code into Claude and prompting:

> No HTTP keep-alive, no TCP reuse; the API key is passed down to every method, and so is the API endpoint. The timeout is redefined in each method. Fix all of those issues.


So AI is a wonderful tool that will answer all of your questions, as long as you give it the right answer? That's probably right.

Even normal human-written code isn't guaranteed to be completely correct in one shot. That's why code review and QA still exist.

The issue here is more organizational, with the engineers not getting the code up to standard before handing it off, rather than the capabilities of the AI itself.


tell it the problem - it will introduce you to the keep-alives and reuses...

What if you don’t know the problem? That’s kind of the crux of the issue here. You still need an expert operator, and at that point it is just saving some typing, but not even necessarily saving time with all the back and forth.

I'm sorry your teammates have skill issues when it comes to using these tools.

> Mostly because humans are made worse by using AI.

For the type of work I do, I've found it best to tightly supervise my LLMs: giving lots of design guidance upfront and being very critical of the output. This is not easy work. In fact, this was always the hard part, and now I'm spending a larger percentage of my time doing it. As the impact of design mistakes is a lot smaller (I can just revert after 20 minutes instead of 3 days), I also get to learn from mistakes quicker. So I'd say I'm improving my skills faster than before.

For juniors though, I think you are right. By relying on this tech from early on in their careers, I think it will be very hard to grow their skills, taste and intuition. But maybe I'm just an old guy yelling at the clouds, and the next generation of developers will do just fine building careers as AI whisperers.


tl;dr: a GitHub Action with a cron schedule that runs whatever code is required to update whatever you have, then pushes the diff (if any) to a branch and creates a PR (if needed)

We are using something heavily derived from https://github.com/romoh/dependencies-autoupdate


An OOM is when malloc fails.

So you have to ensure that, in such a situation, you can execute code that does not require more memory: make sure the error path only frees stuff, or preallocate structures for that purpose.
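The idea is malloc/C territory, but the pattern transposes; a rough sketch in Python, with hypothetical names:

  cache = {}                            # hypothetical in-memory cache
  reserve = bytearray(8 * 1024 * 1024)  # 8 MiB set aside for the error path

  def load_file(path: str) -> bytes:
      global reserve
      try:
          with open(path, "rb") as f:
              return f.read()
      except MemoryError:
          reserve = None   # release the headroom we preallocated
          cache.clear()    # from here on: only free, never allocate
          raise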


I gave up alcohol a couple of years ago. One of the best life choices I ever made.


Using LoC as a KPI... so some people are still stuck there.

This is sad

"Perfection is achieved not when there is nothing more to add, but when there is nothing left to take away" (Saint-Exupéry)

