
> The trend, hinted by stuff like GPT-4o and proven by MetaQuery, looks modular.

Could you make this more explicit? What modularity is hinted at by 4o? The OpenAI blog post you cite (and anything else I've casually heard about it) seems to only imply the opposite.


Completely unrelated to anything I'm just taking this as an opportunity to yell this into the void while nix is on topic:

I have a theory that a disproportionately large obstacle to Nix understanding and adoption is its use of ; in a way that is just subtly different, right in the uncanny valley, from what ; means in any other language.

In the default autogenerated file everyone is given to start with, it immediately hits you with:

    environment.systemPackages = with pkgs; [ foo ];
How is that supposed to read as a single expression in a pure functional language?


To be fair, that is not problematic at all and most definitely not what I think is the issue with Nix adoption/learning curve.

Personally, it's the fact that there are 57698 ways of doing something and when you're new to Nix you're swarmed with all the options and no clear way of choosing where to go. For example, the docs still use a shell.nix for a dev shell but most have moved to a flake-based method...

I always recommend starting with devenv.sh from the excellent Domen Kozar and then slowly migrating to bare Nix flakes once you're more accustomed.


> How is that supposed to read as a single expression in a pure functional language?

Well, in Haskell the following is technically a single expression:

    do with pkgs; [ foo ];
(E.g. using 'let {with = pure; pkgs = 1; foo = 2}' makes the above type-check and compile.)

But extreme nerdery and nit-picking aside, I agree that the choice of syntax in Nix is unfortunate here.


Personally I think a bigger problem is the lack of discoverability of things in nixpkgs, which hits you as soon as you start writing anything remotely non-trivial. "Things" here means functions in 'lib', 'builtins', etc., as well as common idioms for solving various problems. I think this combines with the language's lack of types to make it hard to know what you can write, much less what you should write.

A language server with autocomplete and jump-to-definition would go a long way to making nix more accessible. As it stands, I generally have to clone nixpkgs locally and grep through it for definitions or examples of things related to what I'm doing, in order to figure out how they're used and to try to understand the idioms, even with 4 years of running NixOS under my belt and 3 years of using it for dev environments and packaging at work.


I agree the syntax isn't perfect, but in case you're actually confused, there are really only three places where semicolons go, and I would argue that two of them make a lot of sense: as the terminator for bindings in attribute sets, and as the terminator for `let` declarations.

Unfortunately it is also used with the somewhat confusing `with` operator, which I personally avoid. For those of you who aren't familiar, it works similarly to the now-deprecated JavaScript `with` statement, where `with foo; bar` will resolve to `bar` if it is in scope, otherwise it will resolve to `foo.bar`.
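
Here's a minimal sketch of that resolution rule (values made up):

    with { bar = 2; }; bar                  # => 2; bar comes from the attrset
    let bar = 1; in with { bar = 2; }; bar  # => 1; the lexically bound bar wins, with does not shadow it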


I actually prefer `with`, since it fits better with the language:

- It uses `;` in the same way as `assert`, whereas `let` uses a whole other keyword `in`.

- It uses attrsets as reified/first-class environments (unlike `let`), which lets us do `with foo; ...`.

- Since it uses attrsets, we can use their existing functionality, like `rec` and `inherit`, rather than duplicating it.

I've been using Nix for over a decade (it's even on my phone), and I've never once written a `let`.

(I agree that the shadowing behaviour is annoying, and we're stuck with it for back-compat; but that's only an issue for function arguments and let, and I don't use the latter)


Interesting, are you saying that instead of reaching for `let foo = bar; in expr` you usually use something like `with { foo = bar; }; expr`?

> Since it uses attrsets, we can use their existing functionality, like `rec` and `inherit`, rather than duplicating it.

`let` supports `inherit`, and is always `rec`. Or is that your point, that it is needlessly duplicated functionality?
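
For concreteness, here's a made-up sketch of the two spellings side by side:

    let x = 1 + 2; in x * 10                # let ... in (always recursive)
    with { x = 1 + 2; }; x * 10             # with over a plain attrset
    with rec { x = 1 + 2; y = x * 10; }; y  # rec is opt-in when bindings reference each other
All three evaluate to 30.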


Yes and yes :)

Functions with default arguments are also very useful, especially since `nix-build` will call them automatically. Those are "always `rec`" too, which (a) makes them convenient for intermediate values, and (b) provides a fine-grained way to override some functionality. I used this to great effect at a previous employer, for wrangling a bunch of inter-dependent Maven projects; but here's a made-up example:

    {
      # Main project directory. Override to build a different version.
      src ? pkgs.lib.cleanSource ./.
    
      # Take these files from src by default, but allow them to be overridden
    , config ? "${src}/config.json"
    , script ? "${src}/script.sh"
    
      # A couple of dependencies
    , jq ? pkgs.jq
    , pythonEnv ? python3.withPackages choosePyPackages
    , extraDeps ? [] # Not necessary, but might be useful for callers
    
      # Python is tricky, since it bakes all of its libraries into one derivation.
      # Exposing intermediate parts lets us override just the interpreter, or just
      # the set of packages, or both.
    , python3 ? pkgs.python3
    , choosePyPackages ? (p: pythonDeps p ++ extraPythonDeps p)
    , pythonDeps ? (p: [ p.numpy ])
    , extraPythonDeps ? (p: [])  # Again, not necessary but maybe useful
    
      # Most of our dependencies will ultimately come from Nixpkgs, so we should pin
      # a known-good revision. However, we should also allow that to be overridden;
      # e.g. if we want to pass the same revision into a bunch of projects, for
      # consistency.
    , pkgs ? import ./pinned-nixpkgs.nix
    }:
    # Some arbitrary result
    pkgs.writeShellApplication {
      name = "foo";
      runtimeInputs = [ jq pythonEnv ] ++ extraDeps;
      runtimeEnv = { inherit config; };
      text = builtins.readFile script;
    }
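
(For illustration: assuming the function above were saved as a hypothetical default.nix next to pinned-nixpkgs.nix, a caller could pin the same nixpkgs revision and override just the pieces it cares about:)

    let pkgs = import ./pinned-nixpkgs.nix; in
    import ./default.nix {
      inherit pkgs;                         # share the pinned revision
      python3 = pkgs.python311;             # swap just the interpreter
      extraPythonDeps = p: [ p.requests ];  # add to the Python deps without replacing them
    }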


> now-deprecated JavaScript `with` statement, where `with foo; bar` will resolve to `bar` if it is in scope, otherwise it will resolve to `foo.bar`.

Technically, in JavaScript it's `with (foo) bar`.

Source: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


Your p,k,g,s keys must be worn to nubs.


Reasoning transfers across domains.



^ This is publicly new information, and the second part especially contradicts consequential rumours that had been all but cemented in closely-following outsiders' understanding of Sonnet and Anthropic. Completely aside from anything else in this article.


Also, though it's not "new information": "Making AI that is smarter than almost all humans at almost all things [...] is most likely to happen in 2026-2027." continues to sail over everybody's head, not a single comment about it, even to shit on it. People will continue to act as though they are literally blind to this, as though they literally don't see it.


> People will continue to act as though they are literally blind to this, as though they literally don't see it.

Or like they see it and have learned the appropriate weight to give unsupported predictions of this type from people with a vested interest in them being perceived as true. It's not only not new information, it's not information at all.


I find that really, really, really hard to believe, given the current approaches.


we're getting used to it

and, personally, i think, if any CEO in this industry dares to say "we won't get a super AI in 2028"

many people will be disappointed, some people will be scared, and one president will be pissed off


Breaking innovative new ground on how to make things even worse.


spot on.


You know how you make one failing economy better? Combine it with another failing economy!


See also the Mark 14 torpedo, the primary American torpedo in WWII, which didn't actually work for the first two years of the war: they had never bothered to actually test it, because it would be too expensive.

https://en.wikipedia.org/wiki/Mark_14_torpedo


>because it would be too expensive.

Specifically, they didn't want to waste 1-10 torpedoes on testing, which maybe could have been defensible, but it became utterly indefensible when every single submarine came back from patrol with reports of "we launched a spread of 4 torps, 2 hit the hull of the enemy ship, zero detonations".

The lost value in un-sunk enemy shipping, the number of dead seamen who should have come back victorious, the number of subs that got sunk after an attack utterly failed: each of these individually was a price that dwarfed a single Mk 14 torpedo, and together they had a measurable impact on war performance.

All because the Bureau of Ordnance basically refused to hear any feedback.

Nearly every single component of the torpedo was unfit for service. The magnetic exploder didn't work. The contact detonator was nearly incapable of working because of the physics of torpedo impacts: a perpendicular hit, which was considered optimal, was actually less likely to detonate. The depth-keeping system was calibrated incorrectly, due to module integration mistakes, and in some cases ran 10 feet deeper than it was supposed to.

It's actually kind of common for US military procurement to produce a somewhat failed piece of equipment initially, but it usually gets modified and iterated on and improved to the point of being very respectable hardware in short order. The refusal of BuOrd to hear feedback is the real problem here. Their insane delays in fielding and responding to feedback cost real US lives. Once the torpedo was fixed up, the American sub fleet in the Pacific ran roughshod over Japanese supply lines and utterly crippled their ability to maintain control over the island chains.

The reason BuOrd gave for refusing to double-check their work as these scathing reports came in? You see, the Navy was struggling to produce enough torpedoes to meet requirements, so we can't waste a couple for testing. Instead, HUNDREDS of outright non-functioning torpedoes were sent to the bottom of the Pacific, completely wasted, with almost no hope of actually working, because they were never tested.

The entire situation should be required reading for anyone in management, anywhere. Textbook case of penny smart, pound foolish.


Makes perfect sense to me. If you don't test, then there are no bugs. Field reports can be written off as user error.

/s


There's a lovely YouTube historian, Drachinifel, who covered this: https://www.youtube.com/watch?app=desktop&v=eQ5Ru7Zu_1I


If I remember correctly, part of the issue was that they used magnetic-detector-based firing systems and only tested them off the coast of California. When they fired them elsewhere, the Earth's magnetic field was different enough that the detonators failed.


Not only did the magnetic fuses not work, the impact fuses would collapse and fail if the torpedo made a direct hit. And the torpedoes would consistently run deeper than they were set to. US torpedoes in the early stages of the war were nearly completely ineffective.


I had never heard that theory before!

FWIW, everyone at the beginning of WWII had magnetic detonator / torpedo problems, so it couldn't be just that. For one thing, they were difficult to keep at just the right depth to pass under a ship but still within detection range. The sub captain had to correctly identify the ship, look up its draft, and call down to manually set the depth keeping. (Good luck in the swells of the North Atlantic.) Often the torpedo just didn't run at that depth anyway, due again to issues with design/testing.

The contact detonators had their own issues: for one, they couldn't explode at an oblique angle and instead needed a near-right-angle impact - but even then they had a high dud ratio.

So, in theory the magnetic ones were preferable, even though standard doctrine was to fire for right-angle impact regardless (it makes evasion much more difficult, for one thing).


There's a building at NUWC Division Newport that's designed to survive a direct hit from a 500 lb bomb.

The joke is they had to build it to survive attack from Navy crews that were livid about the quality of the torpedoes built there.


I think the thing behind the group photo at the top is it.


At first I thought your comment was a joke about the clock tower, but then I realized the blade was the six dark gray rectangles just behind the team. (I initially thought the rectangles were a fountain or sculpture or something.)


Gmail, Youtube, etc.

Checked from Western US and Canada.



Did you know the soundtrack was composed in a module tracker? Someone recently recorded the full soundtrack playing back in MilkyTracker. Pretty neat, since you can see how the composers wrote the songs.

https://www.youtube.com/watch?v=iONsjiiqeKg


And you could use a utility called umr (unreal media reaper) to extract the songs. It was mind-blowing to then understand how the game switched patterns to suit the pace, a very clever use of this technology.


I listen to Foregone Destruction so much that when I play the actual map that uses it I always think I left my music player running in the background.


Ah, another jungle music example.[1]

[1]: https://pikuma.com/blog/jungle-music-video-game-drum-bass


I don't really understand where the dividing line is between Jungle and Drum and Bass, but I'm pretty sure Foregone Destruction is on the DnB side of it.


Kane Pixels[0] is the most popular Backrooms creator by a wide margin[1], and the recipient of that A24 movie deal.

His Backrooms canon is completely separate from The Backrooms Wiki canon. It has none of the prescribed "levels", or any of the rest of the Wiki's infinite proliferation of boring bullshit. It's purely descended from the original 4chan post.[2]

[0] https://www.youtube.com/@kanepixels

[1] https://www.youtube.com/results?search_query=backrooms

[2] https://i.kym-cdn.com/photos/images/newsfeed/001/495/035/a5a...


Of all the backrooms channels, this one is my second favorite, if not my favorite: https://youtube.com/@lostinthehyperverse - there's something nostalgic to it rather than action-driven, and it focuses less on an actual entity as the horror.

(Note that the channel contains pre-backrooms videos; you probably want to start with the ones that have 1997 in the name.)


Endless wiki spam nonsense aside, I think the biggest contrast between Kane Pixels' stuff and most other Backrooms-derived content is that he leans hard into the weird cosmic horror aspect and surrounding interactions with the world (and black comedy absurdity/tragedy to go with it, like a government contractor proposal to retrofit the Backrooms as an infinite housing complex), rather than just 'spooky endless office with squiggly monsters'.


After Kane Pixel, I recommend Return to Render, whose humour excellently highlights the absurdist undercurrent in the foundations of the Backrooms' lore.

https://www.youtube.com/@ReturnToRender


That guy is crazy good. Super creepy!

