Hacker News | eats_indigo's comments

honestly expected this port to be headed in the opposite direction

smells like supply chain attack


Yeah, it's a shame because otherwise the library is really nice and could have become the default HTTP library, but it feels like someone will manage to inject some weird behaviour soon and half the planet will be compromised


How is the postgres ecosystem at signalling when these kinds of things are ready for adoption? I can think of a use case at work where this might be useful, but I'm hesitant to just start throwing random open-source extensions at our monolith DB.


The GitHub repo is about two weeks old and there's a single author - if I were you, I'd let it cook for a while longer.


In my experience, you wait for the next two major PG releases. If the extension is actively maintained, the authors add support quickly; if not, that's how you can tell it's been abandoned…


My thoughts exactly


did he just say locomotion came from squirrels


I think he was referencing something Richard Sutton said (iirc), along the lines of "If we can get to the intelligence of a squirrel, we're most of the way there".


I've been saying that for decades now. My point was that if you could get squirrel-level common sense, defined as not doing anything really bad in the next thirty seconds while making some progress on a task, you were almost there. Then you can back-seat drive the low-level system with something goal-oriented.

I once said that to Rod Brooks, when he was giving a talk at Stanford, back when he had insect-level robots and was working on Cog, a talking head. I asked why the next step was to reach for human-level AI, not mouse-level AI. Insect to human seemed too big a jump. He said "Because I don't want to go down in history as the creator of the world's greatest robot mouse".

He did go down in history as the creator of the robot vacuum cleaner, the Roomba.


timestamp?


In what way is this a format or standard? It's just markdown in a namespace.


You could get this page down to under 100 words by simply having it say "the name of the file LLM agents will look at for instructions on the repo is AGENTS.md; that's it, that's the standard".

It's a real problem! Every agent right now has its own weird filename. I love David Crawshaw's sketch.dev, but for reasons passing understanding they chose "dear_llm.md" for theirs.
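To the parent's point, a complete AGENTS.md can be nothing more than ordinary markdown instructions at the repo root; there's no required schema. Contents below are invented purely for illustration:

```markdown
# AGENTS.md

- Run `make test` before committing.
- Follow the existing code style; don't add dependencies without discussion.
- Never touch files under `vendor/`.
```

Any agent that honors the convention just reads this file before working in the repo.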


I created a ticket for adding AGENTS.md support.

edit: They're on it. Not everything has to be complex; sometimes somebody just has to do it.


Easy story points


Standards derive their value precisely from being simple and widely adopted - think of .gitignore, CONTRIBUTING.md, or LICENSE files that work because everyone agrees on their location and purpose.


.gitignore is not a standard: it’s a format used by one tool. A few other tools piggy-back on it (e.g. ripgrep by default ignores paths matched by .gitignore, .hgignore, &c.), which not infrequently causes confusion.

CONTRIBUTING.md is not a standard: it’s a convention pushed by one platform, used by some projects (but many more will not write down such information, or put it in a README file, or put it in some other documentation).

LICENSE is definitely not a standard: it’s one of a wide variety of names people use to hold licensing information, which some tools will be able to detect. I just looked through my /usr/share/licenses, of 1135 files, only 300 are named LICENSE—it’s the most popular single name, sure, with COPYING next at 182, but it’s still definitely a minority, though in certain ecosystems it may be more popular. Any license-detection tooling will be scanning for a lot more file names. “LICENSE” is a very weak convention, compared with the others.


All the different coding agents put their "rules" in different places: .cursor, CLAUDE.md, etc.

It makes no sense and it really needs standardisation. I hope this catches on.


As a workaround you can add a wrapper launcher script that symlinks them and then removes the symlink after you exit
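A sketch of that workaround as a script, with the filenames and the `cat` stand-in for launching the agent all illustrative (CLAUDE.md is just one example of a tool-specific name):

```shell
#!/bin/sh
# Expose AGENTS.md under the filename a specific agent expects,
# run the agent, then remove the symlink so the repo stays clean.
set -eu
printf 'repo instructions\n' > AGENTS.md   # stand-in for the real file
ln -sf AGENTS.md CLAUDE.md                 # agent-specific alias
cat CLAUDE.md                              # stand-in for launching the agent
rm -f CLAUDE.md                            # clean up the symlink on exit
```

In a real wrapper you'd replace the `cat` line with the agent binary and put the `rm` in a `trap ... EXIT` so the symlink is removed even if the agent crashes.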


Given Github's critical role in software engineering delivery, their SLA commitments are really quite poor, perhaps unacceptable.


luckily, git itself works pretty well when there's an outage

sucks for people that use issues/PRs for coordination and had a planning meeting scheduled, though
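One way to hedge against this is a second remote you push to alongside GitHub. The demo below uses a local bare repo as the stand-in backup (paths and names are made up); in practice it would be another forge's URL:

```shell
# Create a working repo with one commit.
git init -q demo
git -C demo -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m init
# A bare repo standing in for a second hosting provider.
git init -q --bare backup.git
# Register it and push - no central server involved at any point.
git -C demo remote add backup ../backup.git
git -C demo push -q backup HEAD:main
```

Since every clone has full history, pushes and pulls between any two remotes keep working while GitHub is down.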


It is critical for those who choose to use it.

If you deliberately decide to use a system that introduces a single point of failure into a decentralised system, you have to live with the consequences.

From their point of view, unless they start losing paying users over this, they have no incentive to improve. I assume customers are happy with the SLA, otherwise why use Github?


Network effects are quite strong


Love UV!

Also love Ruff from the Astral team. We just cut our linting + formatting across from pylint + Black to Ruff.

Saw lint times drop from 90 seconds to < 1.5 seconds. crazy stuff.
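For anyone considering the same switch, the configuration can stay small. A minimal pyproject.toml fragment might look like this (the rule codes are illustrative; pick the set that matches your old pylint config):

```toml
[tool.ruff]
line-length = 88          # match Black's default so reformatting is minimal

[tool.ruff.lint]
select = ["E", "F", "I"]  # pycodestyle, pyflakes, import sorting
```

Then `ruff check .` replaces the lint step and `ruff format .` replaces Black.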


Until the moment you realize that ruff performs only a subset of pylint's checks, and that very obvious mistakes can slip through easily, such as code that can't run because of an obvious error.


Do you have an example of this?



Which checks is it missing?


Many, but it also has many that pylint does not. I use them both, ruff first, followed by pylint.

https://docs.astral.sh/ruff/faq/#how-does-ruffs-linter-compa...

https://github.com/astral-sh/ruff/issues/970


Seems like mypy/pyright/pyre/pyrefly or the future ty could bridge the gap, though, if one is writing typed Python?


The big question is whether these tools will still be fast once they've added the checks they're currently missing.


I think if you ever have highly dynamic infrastructure requirements -- think along the lines of a control plane that's spinning up additional workers -- it's really helpful to be able to run your infra provisioning logic locally. There's nothing worse than having to wait on cloud builds to test your iterations.


What would be the difference between eg Tilt and Pulumi for this?


Tilt is for your laptop, where you're editing code. Pulumi is for your cloud environments.


I can use Pulumi on my laptop too; that's precisely what I do. For my use case I treat it more like a "programmable docker-compose", if you will.


Migrated from Skaffold to Tilt at my last co; found it much easier to configure granular rebuild rules, which led to faster dev loop cycles.


Any catch?


Love Tilt, but have found it much more difficult to set up in a docker-compose environment than in a cloud-native / minikube context.

