
He releases an rc every single week (ok, except before rc1, when there's a two-week merge window), so there's no "off" time to upgrade anywhere.

Not that I approve of the untested changes; I'd have used a different gcc temporarily (container or whatever), but, yeah, well...


I find it surprising that Linus bases his development and release tools on whatever's in the repositories at the time. Surely it is best practice to pin to a specified, fixed version and upgrade as necessary, so everyone is working with the same tools?

This is common best practice in many environments...

Linus surely knows this, but here he's just being hard-headed.


People downloading and compiling the kernel will not be using a fixed version of GCC.


Why not specify one?


That can work, but it can also bring quite a few issues. Mozilla effectively does this; their build process downloads the build toolchain, including a specific clang version, during bootstrap, i.e., setting up the build environment.

This is super nice in theory, but it gets murky if you veer off the "I'm building current mainline Firefox" path. For example, I'm a maintainer of a Firefox fork that often lags a few versions behind. It has substantial changes, and we are only two guys doing the major work, so keeping up with current changes is not feasible. However, this is a research/security testing-focused project, so this is generally okay.

However, coming back to the build issue: apparently it's costly to host all those toolchain archives, so they frequently get deleted from the remote repository, which leads to the build only working on machines that downloaded the toolchain earlier (not a GitHub Actions runner, for example).

Given that there are many more downstream users of effectively a ton of kernel versions, this quickly gets fairly expensive and takes up a ton of effort unless you pin it to some old version and rarely change it.

So, as someone wanting to mess around with open source projects, I find their supporting more than one specific compiler version actually quite nice.


Conceptually it's no different from any other build dependency. It is not expensive to host many versions: $1 is enough to store over 1000 compiler versions, which would be overkill for the needs of the kernel.


How would that help? People use the compilers in their distros, regardless of what's documented as a supported version in some readme.


Because then, if something that is expected to compile doesn't compile correctly, you know that you should check your compiler version. It's the same reason you don't just specify which libraries your project depends on but also their versions.


People are usually going to go through `make`; I don't see a reason that couldn't be instrumented to (by default) acquire an upstream GCC rather than whatever forked garbage ends up in $PATH.


This would result in many more disasters, as the system GCC and the kernel GCC would quickly be out of sync, causing all sorts of "unexpected fun".


Why would it go wrong? The ABI is stable and independent of the compiler. You would hit issues with C++ but not C. I have certainly built kernels using a different version of GCC than the one the stuff in /lib was compiled with, without issue.


You'd think that, but in effect kconfig/kbuild has many cases where it says "if the compiler supports flag X, use it", where X implies an ABI break. Per-task stack protectors come to mind.


Ah that's interesting, thanks


Or just that they don't run Windows/macOS with Chrome like everyone else and it's "suspicious". I get Cloudflare captchas all the time with Firefox on Linux... (and I'm pretty sure there's no such app in my home network!)


FWIW I run firefox on linux too, and I don't have any trouble with cloudflare captchas. I get them every now and then but definitely not all the time.


I've considered this, but I'm running on a potato, and fetching the whole atuin history seems to take a while:

    $ time atuin history list --print0 -f "{time} | {command}" > /dev/null
    
    real 0m1.849s
(For some reason the built-in atuin search command doesn't take as long to show up? It might only fetch the last few entries from the db first... Eh, actually `atuin search` without arguments, which lists roughly the same thing, runs in less than half the time (0.85s), but -i is still another order of magnitude faster.)

Anyway, thanks - I'll fiddle with all this :)


I couldn't find a run-or-raise repo that has a ws.jq, and I'm not convinced it's https://github.com/thaliaarchi/wsjq (a whitespace programming language implemented in jq...). Could you point me at it?

Thanks for sharing!


Haha, no. It's a home grown abomination. Give me a couple of days & I'll put it in a repo for you.


Interestingly, running this in the console works, but running it from a bookmarklet changes all the page content to false,false,false,... (on Firefox).

Any idea?


Firefox expects your script to return undefined, so you can add one at the end (or even shorter: void 0).


Thank you (and neighbors), I didn't know this.


To clarify slightly, bookmarklet behavior across browsers is to call `document.write` with the result of the bookmarklet’s last expression unless that result is `undefined`, and calling `document.write` after page load completes replaces the page’s DOM with the content written. It’s a weird bookmarklet thing; I don’t think there’s anywhere else in JS that accepts a list of statements, not expressions, but cares about the result of the last expression.

People often disable this by making the last expression `void 0`, which evaluates to `undefined`. This is really an anachronism, though: the original point wasn’t actually brevity (it’s only one character shorter after URL encoding, not worth the funky syntax) but that just writing `undefined` was broken and sometimes didn’t evaluate to the special value undefined. That’s fixed now, so I would just append `undefined` instead.

Though, really, what you should do is always wrap bookmarklets in IIFEs, which avoids stomping on the page’s global variables, lets you write code with early exits, lets you opt back in with an explicit return rather than editing boilerplate, and also solves this issue as a bonus.
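
For illustration, a minimal sketch of that shape (the selector and behavior here are purely hypothetical, just enough to show the pattern):

    javascript:(() => {
      // Everything lives inside the IIFE, so no variables leak onto the page's global scope.
      const overlay = document.querySelector('.cookie-banner'); // hypothetical selector
      if (!overlay) return; // early exit works because we're inside a function
      overlay.remove();
      // The call expression evaluates to undefined (nothing is returned),
      // so the browser has nothing to document.write() over the page.
    })();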


Change map to forEach?


document.querySelectorAll('body *').forEach(e=>{if (["fixed","sticky"].includes(getComputedStyle(e).position)) e.remove()});


If someone has one for the "share this" popups that appear when you select text on way too many articles/blogs, I'd love to add it to my collection...


Don't loop on values with `*`; the key/value difference is the lack of `!` at the start of the expression; the `*` and `@` rules are the same as $@ and $*, and you almost never want `*`.


> Though at this point I don’t even bother colocating the .git repo alongside .jj. Meaning I haven’t found a need to fall back to git commands in maybe four or five months.

That interests me -- is there a native jj protocol for push/fetch? Even if just ssh? Or do you just work in local repos?


Backends can do whatever. The git backend knows how to push to git remotes. If you used a different backend, it would know what that backend expects. There aren’t any of these available publicly, but Google has one to work with Piper.


Google's internal git support for the Google3 monorepo was deprecated/is unsupported in favor of fig, the mercurial/hg based client for Google3.

Microsoft has a custom git client internally that includes a filesystem shim (think: the Windows equivalent of FUSE), since stat-ing monorepo amounts of files doesn't work. Also, GitHub runs custom git servers, though their custom database backend is behind a compatibility layer.

Finally, Sapling, Meta's git replacement, which maintains support for git servers, also supports Sapling servers (which were not released). Unsure whether to count that as a git backend, though.


For sure. This kind of thing is very useful. It’s why jj is backend agnostic by design.


How does jj handle very large git repos, e.g. this linux clone with a mishmash of quite a few upstreams and a 6GB .git dir?

(I agree I probably shouldn't focus on that first, and could just try jj on smaller repos... But git is already slow enough in there that it's an honest question; I don't need to keep using git there as long as I can keep pulling from the stable kernel git trees for regular merges.)


It works fine on nixpkgs, which is by some measures larger than the linux git repo.

jj git clone/fetch/push use the corresponding git commands under the hood, so they won't improve on git performance, but they also don't add much overhead of their own.

If you're using the revset syntax (`-r`), you can specify a revset that requires looking at every commit, which is slow, but that's equivalent to asking for `git log -n 1000000`.


Thanks! My nixpkgs clone is 5.2GB so it's not too far off indeed! :)

I think the nixpkgs workflow involves far fewer rebases/merges (some cherry-picks to stable branches, which I don't do much), but it's a very good data point, thank you. I guess I'll give it a try over the Christmas break...


Interesting, their configuration tool runs in the browser and they explicitly mention Linux; I don't have much experience with gaming keyboards, but the only one I bought in the past had some Windows-only configuration software and was a bit of a pain.


(This keyboard does not appear to be QMK-based, but for the record) any keyboard that runs QMK (an open-source firmware for keyboards) will offer the same experience: a web-based configurator and full Linux support.


Nuphy's other keyboards, such as the Air v2 (which I own and use), are QMK based. It's just the hall effect keyboards that aren't.


I think you mean to say QMK?


Configuration stored _on the device_, without needing software to autorun every time you reboot to get your settings back, should really be a no-brainer. There are unfortunately enough gaming keyboards and mice out there that haven't got this yet.


I'd love to see some standard HID feedback channel for interactive keyboard LED control. My previous Logitech keyboard had some fun interactions with Factorio that I miss. Things like flashing a specific letter or using the backlighting to show a progress bar.


This is why WebUSB/WebBT is a good thing: you can build configuration tools that run anywhere and are easily ported offline or replicated.


"Handy" and "a good thing" are not the same thing.

USB and BT are security nightmares, and the browser is a fantastic sandbox. I'm pretty sure a lot of 0-days will come from there.


As always, you have a choice between things existing at all on Linux and fretting about security issues that so far have not materialized.


Many mechanical keyboards use firmware that works with web apps using the WebUSB/WebHID APIs to allow easy cross-platform configuration. It's a welcome change from dedicated desktop software for your keyboard. All the configuration is stored, and behavior is managed, on the keyboard itself. This doesn't get you cool things like keyboard LED interactivity, but I can live without that.
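
For a rough idea of what those web configurators do under the hood, here's a minimal WebHID sketch; the vendor ID and report contents are made up, and a real tool would use whatever IDs and report format the specific firmware defines:

    // Ask the user to pick the keyboard, open it, and send one raw report.
    async function configureKeyboard() {
      const [device] = await navigator.hid.requestDevice({
        filters: [{ vendorId: 0x1234, usagePage: 0xff60, usage: 0x61 }], // illustrative IDs
      });
      if (!device) return; // user cancelled the picker
      await device.open();
      const payload = new Uint8Array(32);
      payload[0] = 0x01; // hypothetical "set keymap entry" command byte
      await device.sendReport(0x00, payload); // report ID 0: raw output report
    }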


Razer is the worst offender here.

