Hacker News | AceJohnny2's comments

Fedora stupidly uses beta compiler in new release, Torvalds blindly upgrades, makes breaking, unreviewed changes in kernel, then flames the maintainer who was working on cleanly updating the kernel for the not-yet-released compiler?

I admire Kees Cook's patience.


Exactly. As quoted in the article:

> you didn't coordinate with anyone. You didn't search lore for the warning strings, you didn't even check -next where you've now created merge conflicts. You put insufficiently tested patches into the tree at the last minute and cut an rc release that broke for everyone using GCC <15. You mercilessly flame maintainers for much much less.

Hypocrisy is an even worse trait than flaming people.


> Hypocrisy is an even worse trait than flaming people.

Eh, I mean, everyone's a hypocrite if you dig deep enough—we're all a big nest of contradictions internally. Recognition of this, and accountability, are paramount though. He could have simply owned his mistake and swallowed his pride, and this wouldn't have been such an issue.


On the one hand, sure, fine. He has raked people over the coals for less. However, this is just an RC. Further, how long has Linus been doing this?

I remember Maddox on xmission having a page explaining that while he may make a grammatical error from time to time, he has published literally hundreds of thousands of words, and the average email he receives contains 10% errors.

However, Linus is well-known for being abrasive, abusive, call it what you want. If you can't take it, don't foist it, Linus. Even if you've earned the right, IMO.


Nobody earns the right to be an asshole. That is not something that can be earned.


Indeed. On the other hand, the right to show that you are an asshole is available to anyone, and it has become quite popular!


I'd say if you're doing truly-heroic solo efforts, then you can earn that. (But I can only think of fictional examples.) For team efforts like the Linux kernel, sure, no amount of individual contribution to that project grants you the right to belittle the other contributors.


This idea that if you've done great things, then you've earned the right to treat people poorly, needs to go away. It's toxic and gross, and we should expect and demand better of our heroes (and ourselves).


Fabrice Bellard has earned that right, but somehow I don't think he is one!


Fabrice Bellard's work is impressive, but I wouldn't call it heroic. I was thinking more like, the grumpy-guts who ensures the local homeless shelter is adequately stocked with food, clean bedding, and toiletries, day-in and day-out, even in the depths of winter. You're allowed to be vaguely misanthropic in your interpersonal relationships if you're doing something like that, at least in my book.

Again, the only non-fictional people I know who qualify, are actually really nice to people.


Still nope.


IMHO Cook is following good development practices.

You need to know what you support. If you are going to change it, the change must be planned somehow.

I find Torvalds reckless for changing his development environment before a release. If he really needs that computer to release the kernel, it must be a stable one. Even better: it should be a VM (hosted somewhere) or part of a CI/CD pipeline.


The real problem here was "-Werror", dogmatically fixing warnings, and using the position of privilege to push in last-minute commits without review.

Compilers will be updated, they will have new warnings, this has happened numerous times and will happen in the future. The linux kernel has always supported a wide range of compiler versions, from the very latest to 5+ years old.

I've ranted about "-Werror" in the past, but to try to keep it concise: it breaks builds that would and should otherwise work. It breaks older code with newer compilers and with compilers for different platforms. This is bad because then you can't, say, use the exact code specified/intended without modifications, or you can't test and compare different versions or different toolchains, etc. A good developer will absolutely not tolerate a deluge of warnings all the time; they will decide to fix the warnings to get a clean build, over a reasonable time with well-considered changes, rather than be forced to fix them immediately with brash, disruptive code changes. And this is a perfect example why. New compiler, fine; new warnings, fine. Warnings are a useful feature, distinct from errors. "-Werror" is the real error.


With or without -Werror, you need your builds to be clean with the project's chosen compilers.

Linus decided, on a whim, that a pre-release of GCC 15 ought to suddenly be a compiler that the Linux project officially uses, and threw in some last-minute commits straight to main, which is insane. But even without -Werror, when the project decides to upgrade compiler versions, warnings must be silenced, either through disabling new warnings or through changing the source code. Warnings have value, and they only have value if they're not routinely ignored.

For the record, I agree that -Werror sucks. It's nice in CI, but it's terrible to have it enabled by default, as it means that your contributors will have their build broken just because they used a different compiler version than the ones which the project has decided to officially adopt. But I don't think it's the problem here. The problem here is Linus's sudden decision to upgrade to a pre-release version of GCC which has new warnings and commit "fixes" straight to main.


Sadly, I lost that battle with Torvalds. You can see me make some of those points on LKML.


I see, thanks. ( Found it here: https://lkml.org/lkml/2021/9/7/716 )


This is my take-away as well. Many projects let warnings fester until they hit a volume where critical warnings are missed amidst all the noise. That isn't ideal, but seems to be the norm in many spaces (for instance the nodejs world where it's just pages and pages of warnings and deprecations and critical vulnerabilities and...).

But pushing breaking changes just to suppress some new warning should not be the alternative. Working to minimize warnings in a pragmatic way seems more tenable.


Ironically, as a NodeJS dev, I was going to say the opposite: I'm very used to the idea that you have a strict set of warnings that block the build completely if they fail, and I find it very strange in the C world that this isn't the norm. But I think that's more to do with being able to pin dependencies more easily: by default, everyone on projects I work with uses the same set of dependencies always, including build dependencies and NodeJS versions. And any changes to that set of dependencies will be recorded as part of the repository history, so if new warnings/failures show up, it's very easy to see what caused it.

Whereas in a lot of the C (and C++, and even older Python) codebases I've seen, these sorts of dependencies aren't locked to the same extent, so it's harder to track upgrades, and therefore warnings are more likely to appear, well without warning.

But I think it's also probably the case that a C expert will produce codebases that have no warnings, and a C novice will produce codebases filled with warnings, and the same for JS. So I can imagine if you're just "visiting" the other language's ecosystem, you'll see worse projects and results than if you've spent a while there.


He releases an rc every single week (ok, except before rc1 there's the two-week merge window), so there's no "off" time to upgrade anywhere.

Not that I approve of the untested changes; I'd have used a different gcc temporarily (container or whatever), but, yeah, well...


I find it surprising that Linus chooses his development and release tools based on whatever's in the repositories at the time. Surely it is best practice to pin to a specified, fixed version and upgrade as necessary, so everyone is working with the same tools?

This is common best practice in many environments...

Linus surely knows this, but here he's just being hard headed.


People downloading and compiling the kernel will not be using a fixed version of GCC.


Why not specify one?


That can work, but it can also bring quite a few issues. Mozilla effectively does this; their build process downloads the build toolchain, including a specific clang version, during bootstrap, i.e., setting up the build environment.

This is super nice in theory, but it gets murky if you veer off the "I'm building current mainline Firefox" path. For example, I'm a maintainer of a Firefox fork that often lags a few versions behind. It has substantial changes, and we are only two guys doing the major work, so keeping up with current changes is not feasible. However, this is a research/security-testing-focused project, so this is generally okay.

However, coming back to the build issue: apparently it's costly to host all those toolchain archives, so they get frequently deleted from the remote repository, which leads to the build only working on machines that downloaded the toolchain earlier (i.e., not a GitHub Actions runner, for example).

Given that there are many more downstream users of effectively a ton of kernel versions, this quickly gets fairly expensive and takes up a ton of effort unless you pin it to some old version and rarely change it.

So, as someone wanting to mess around with open source projects, their supporting more than 1 specific compiler version is actually quite nice.


Conceptually it's no different than any other build dependency. It is not expensive to host many versions. $1 is enough to store over 1000 compiler versions which would be overkill for the needs of the kernel.


How would that help? People use the compilers in their distros, regardless of what's documented as a supported version in some readme.


Because then, if something that is expected to compile doesn't compile correctly, you know that you should check your compiler version. It is the exact same reason why you don't just specify which library your project depends on but also the libraries' version.


People are usually going to go through `make`, I don't see a reason that couldn't be instrumented to (by default) acquire an upstream GCC vs whatever forked garbage ends up in $PATH


This would result in many more disasters as system GCC and kernel GCC would quickly be out of sync causing all sorts of "unexpected fun".


Why would it go wrong, the ABI is stable and independent of compiler? You would hit issues with C++ but not C. I have certainly built kernels using different versions of GCC than what /lib stuff is compiled with, without issue.


You'd think that, but in effect kconfig/kbuild has many cases where they say "if the compiler supports flag X, use it" where X implies an ABI break. Per task stack protectors comes to mind.
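A sketch of the kbuild idiom being described (cc-option is kbuild's real "use the flag if the compiler supports it" helper; the particular flag here is illustrative, not the actual per-task stack protector machinery):

```make
# If $(CC) accepts the flag, it is added; if not, it is dropped
# silently. Kernels built with different compilers can thus diverge
# in ABI-affecting options without any explicit error.
KBUILD_CFLAGS += $(call cc-option,-mstack-protector-guard=tls)
```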


Ah that's interesting, thanks


I'm completely unsure whether to respond "it was stable, he was running a release version of Fedora" or "there's no such thing as stable under Linux".

The insanity is that the Kernel, Fedora and GCC are so badly coordinated that the beta of the compiler breaks the Kernel build (this is not a beta, this is a pre-alpha in a reasonable universe...is the Kernel a critical user of GCC? Apparently not), and a major distro packages that beta version of the compiler.

To borrow a phrase from Reddit: "everybody sucks here" (even Cook, who looks the best of everyone here, seems either oblivious or defeated about how clownshoes it is that released versions of major linux distros can't build the Kernel. The solution of "don't update to release versions" is crap).

(Writing this from a Linux machine, which I will continue using, but also sort of despise).


The GCC 15 transition has been very disruptive, but Fedora is known for being on the bleeding edge ("first" is in the "four foundations" [1]). Be glad because eventually everyone will get GCC 15, and we've worked out most of the problems for you already.

[1] https://docs.fedoraproject.org/en-US/project/


GCC 15.1 was released today. Your Fedora release came two weeks earlier, using a nonexistent version, 15.0.1, which ironically includes bugs you reported that were then fixed for 15.1. That just seems like poor decision making.


You're belittling the large amount of work done across thousands of packages to get them ready for GCC 15, which did involve backporting fixes to GCC 15 itself. All those fixes went into GCC upstream. GCC 15.1 was released two hours ago as of writing this message, even before the US wakes up, yet I'm sure there will be a build of it in Fedora later today.


Creating the fake release for gcc was by no means necessary for that.



Gentoo also has a tracker [1] for GCC 15 issues that they've been working on as well. (Note: GCC 15 is masked in Gentoo so you have to go out of your way to install it)

[1] https://bugs.gentoo.org/932474


This is just GCC 2.96 again, they will never learn.


GCC 2.96 lasted a year or more and even after GCC 3.0 was released it wasn't able to compile a working kernel. This lasted two weeks and the issue is just a new warning; it's just bad timing across the release cycles of two projects.


Do you work in marketing


> makes breaking, unreviewed changes in kernel,

And reverted them as soon as the issue became apparent.

> then flames the maintainer who was working on cleanly updating the kernel for the not-yet-released compiler?

Talking about changes that he had not pushed by the time Linus published the release candidate.

Also, the "not yet released" seems to be a red herring: as the article notes, having beta versions of compilers in new releases is a tradition for some distros, so that should not be unexpected. It makes some sense, since distros tend to stick to one compiler for each release, so shipping a soon-to-be-out-of-maintenance compiler from day one would only cause other issues down the road.


Fedora releases are supported for about 13 months after release. They could live with an older version of GCC for a year.


> They could live with an older version of GCC for a year.

That's just not what Fedora is, though. Being on the bleeding edge is foundational to Fedora, even if it's sometimes inconvenient. If you want battle-tested and stable, don't run Fedora, but use Debian or something.


Bleeding-edge is fine, but shipping a beta C compiler seems a bridge too far. Even Arch does not ship GCC 15 yet.


> I dont love the idea of Apple eco system for server side development.

100%! Everyone repeat after me: "macOS Is Not A Server OS"

macOS is approximately the worst OS you can run a server on:

1. It is buggy. We had a bug in Sonoma where our CI machines would freeze on some filesystem access. The bug was fixed during Sonoma's lifetime... but only released in Sequoia. Before that we had a rare bug that plagued us for years (once every few months across several CI machines) where a process would fail to execute with "/bin/sh: cannot execute binary file", indicating an erroneous ENOEXEC from the kernel (that bug quietly disappeared)

2. It has no LTS version. See above filesystem hang. Want a fix? Cool, it comes only in the next major OS version, along with a host of other changes you didn't want, and new bugs! (see point 4 below)

3. It is just poorly documented. Apple's docs are awful and poorly searchable; they will change things and not document it, and you're left with the community trying to reverse-engineer everything (I cannot recommend eclecticlight.co enough!)

4. It's just bloated with cruft for consumers. Upgrade to Sequoia 15.3? You got a free download of "Apple Intelligence" models, there go a few GBs! Again rely on the community to come up with the magic settings to disable stuff.

Ask me how I feel about macOS as a server.

(I lament the death of XServe, which could've driven more server-focused software quality)


> macOS Is Not A Server OS

It does not matter, Swift server-side can run (and be built) on Linux (or Windows BTW, or even embedded platforms (a subset of the language anyway))…



Offtopic, but I'm distracted by the opening example:

> After all, you don’t want to be building your iPhone app on literal iPhone hardware.

iPhones are impressively powerful, but you wouldn't know it from the software lockdown that Apple holds on it.

Example: https://www.tomsguide.com/phones/iphones/iphone-16-is-actual...

There's a reason people were clamoring for Apple to make ARM laptops/desktops for years before Apple finally committed.


I do not think I like this author...

> A critical piece of history here is to understand the really stupid way in which GCC does cross compiling. Traditionally, each GCC binary would be built for one target triple. [...] Nobody with a brain does this ^2

You're doing GCC a great disservice by ignoring its storied and essential history. It's over 40 years old, and was created at a time where there were no free/libre compilers. Computers were small and slow. Of course you wouldn't bundle multiple targets in one distribution.

LLVM benefitted from a completely different architecture and from starting with a blank slate when computers were already faster and much larger, and it was heavily sponsored by a vendor that was innately interested in cross-compiling: Apple. (Guess where LLVM's creator worked for years and led the development tools team.)


The older I get the more this kind of commentary (the OP, not you!) is a total turn off. Systems evolve and there's usually, not always, a reason for why "things are the way they are". It's typically arrogance to have this kind of tone. That said I was a bit like that when I was younger, and it took a few knockings down to realise the world is complex.


"This was the right way to do it forty years ago, so that's why the experience is worse" isn't a compelling reason for a user to suffer today.

Also, in this specific case, this ignores the history around LLVM offering itself up to the FSF. gcc could have benefitted from this fresh start too. But purely by accident, it did not.


> "This was the right way to do it forty years ago, so that's why the experience is worse" isn't a compelling reason for a user to suffer today.

On my system, "dnf repoquery --whatrequires cross-gcc-common" lists 26 gcc-*-linux-gnu packages (that is, kernel / firmware cross compilers for 26 architectures). The command "dnf repoquery --whatrequires cross-binutils-common" lists 31 binutils-*-linux-gnu packages.

The author writes, "LLVM and all cross compilers that follow it instead put all of the backends in one binary". Do those compilers support 25+ back-ends? And if they do, is it good design to install back-ends for (say) 23 such target architectures that you're never going to cross-compile for, in practice? Does that benefit the user?

My impression is that the author does not understand the modularity of gcc cross compilers / packages because he's unaware of (or doesn't care for) the scale that gcc aims at.


> And if they do, is it good design to install back-ends for (say) 23 such target architectures that you're never going to cross-compile for, in practice? Does that benefit the user?

  rustc --print target-list | wc -l
  287
I'm kinda surprised at how large that is, actually. But yeah, I don't mind if I have the capability to cross-compile to x86_64-wrs-vxworks that I'm never going to use.

I am not an expert on all of these details in clang specifically, but with rustc, we take advantage of llvm's target specifications, so that you can even configure a backend that the compiler doesn't yet know about by simply giving it a json file with a description. https://doc.rust-lang.org/nightly/nightly-rustc/rustc_target...

While these built-in ones aren't defined as JSON, you can ask the compiler to print one for you:

  rustc +nightly -Z unstable-options --target=x86_64-unknown-linux-gnu --print target-spec-json
It's lengthy so instead of pasting here, I've put this in a gist: https://gist.github.com/steveklabnik/a25cdefda1aef25d7b40df3...

Anyway, it is true that gcc supports more targets than llvm, at least in theory. https://blog.yossarian.net/2021/02/28/Weird-architectures-we...


I'd love to learn what accident you're referring to, Steve!

I vaguely recall the FSF (or maybe only Stallman) arguing against the modular nature of LLVM because a monolothic structure (like GCC's) makes it harder for anti-GPL actors (Apple!) to undermine it. Was this related?


That is true history, in my understanding, but it's not related.

Chris Lattner offered to donate the copyright of LLVM to the FSF at one point: https://gcc.gnu.org/legacy-ml/gcc/2005-11/msg00888.html

He even wrote some patches: https://gcc.gnu.org/legacy-ml/gcc/2005-11/msg01112.html

However, due to Stallman's... idiosyncratic email setup, he missed this: https://lists.gnu.org/archive/html/emacs-devel/2015-02/msg00...

> I am stunned to see that we had this offer.

> Now, based on hindsight, I wish we had accepted it.

Note this email is in 2015, ten years after the initial one.


The only truth to the story is the missed email.

There is nothing "unmodular" about GCC -- considering that it supports plenty of architectures, operating systems, and languages.

The big difference, which people seem to miss in the context of the GNU project and GNU system, is that modularity is for free software projects. GCC is plenty modular, and very easy to extend in any way, shape or form .. if you abide by the license!

If you want to be a parasite on a project licensed under the GNU GPL, you will have a rough ride .. that is after all the whole idea of copyleft.


Incredible. Thank you for sharing.


You're welcome! It's a wild story. Sometimes, history happens by accident.


Wow that is wild. Imagine how different things could have been...


> and was heavily sponsored by a vendor that was innately interested in cross-compiling

and innately disinterested in Free Software, too


A more pertinent (if dated) example would be "you don't want to be building your GBA game on literal Game Boy Advance hardware".


Or a microcontroller


iPhones have terrible heat dispersion compared to even a fanless computer like a macbook air. You get a few minutes at full load before thermal throttling kicks in, so you could do the occasional build of your iPhone app on an iPhone but it'd be pretty terrible as a development platform.

At work we had some benchmarking suites that ran on physical devices and even with significant effort put into cooling them they spent more time sleeping waiting to cool off than actually running the benchmarks.


Companies run Windows 11, right? How do they control what features are enabled? How can users leverage that control?


Group Policy?


Hard to say that many even do, really; hardware manufacturers in POS and medical are still shipping Win10 IoT.


Related: https://news.ycombinator.com/item?id=35623625 (2023) "Why Did Prolog Lose Steam (2010)"

Anecdotally, some years ago the Zebra Puzzle [1] made the rounds in my team. Two people solved it: myself, a young intern, who mapped out the constraints as a physical puzzle that I was able to solve visually, and a more seasoned colleague who used Prolog.

[1] https://en.wikipedia.org/wiki/Zebra_Puzzle


FYI, a very compact Prolog version of the Zebra/Einstein puzzle can be loaded into the editor for execution using "Load example" on [1].

[1]: https://quantumprolog.sgml.net/browser-demo/browser-demo.htm...


This is exciting, convergence is always good, but I'm confused about the value of putting the tracking information in a git commit header as opposed to a git trailer [1] where it currently lives.

In both cases, it's just metadata that tooling can extract.

Edit: then again, I've dealt with user error with the fragile semantics of trailers, so perhaps a header is just more robust?

[1] https://git-scm.com/docs/git-interpret-trailers


Mostly because it is jarring for users who want to interact with tools that require these footers -- and the setups to apply them, like Gerrit's change-id script, are often annoying, for example needing to support Windows users without requiring stuff like bash. Now, I wrote the prototype integration between Gerrit and Jujutsu (which is not mainline, but people use it) and it applies Change-Id trailers automatically to your commit messages, for any commits you send out. It's not the worst thing in the world, and it is a little fiddly bit of code.

But ignore all that: the actual _outcome_ we want is that it is just really nice to run 'jj gerrit send' and not think about anything else, and that you can pull changes back in (TBD) just as easily. I was not ever going to be happy with some solution that was like, "Do some weird git push to a special remote after you fix up all your commits or add some script to do it." That's what people do now, and it's not good enough. People hate that shit and rail at you about it. They will make a million reasons up why they hate it; it doesn't matter though. It should work out of the box and do what you expect. The current design does that now, and moving to use change-id headers will make that functionality more seamless for our users, easier to implement for us, and hopefully it will be useful to others, as well.

In the grand scheme it's a small detail, I guess. But small details matter to us.


Thanks for the explanation!

While you're around, do you know why Jujutsu created its own change-id format (the reverse hex), rather than use hashes (like Git & Gerrit)?


I don't know if it's the only or original reason, but one nice consequence of the reverse hex choice is that it means change IDs and commit IDs have completely different alphabets ('0-9a-f' versus 'z-k'), so you can never have an ambiguous overlap between the two.

Jujutsu mostly doesn't care about the real "format" of a ChangeId, though. It's "really" just an arbitrary Vec<u8> that the backend itself has to define and describe a little bit; the example backend has a 64-byte change ID, for example.[1] To the extent the reverse hex format matters, it's mostly used in the template language for rendering things to the user. But you could also extend that with other render methods too.

[1] https://github.com/jj-vcs/jj/blob/5dc9da3c2b8f502b4f93ab336b...


Yes, it was to avoid ambiguity between the two kinds of IDs. See https://github.com/jj-vcs/jj/pull/1238 (see the individual commits).


Interesting, that was just a few short months before I showed up. :)


I'm not an expert on this corner of git, but a guess: trailer keys are not unique, that is

  Signed-off-by: Alice <alice@example.com>
  Signed-off-by: Bob <bob@example.com>
is totally fine, but

  Change-id: wwyzlyyp
  Change-id: sopnqzkx
is not.

I've also heard of issues with people copy/pasting commit messages and including bits of trailers they shouldn't have, I believe.


~I think it's more that not all existing git commands (rebase, am, cherry-pick?) preserve all headers.~

ignore, misread the above


That's a downside of using headers, not a reason for using them. If upstream git changes to help this, it would involve having those preserve the headers. (though cherry-pick has good arguments of preserving vs generating a new one)


ah, I'm sorry, I misread your comment (and should have mentioned the cherry-pick thing anyway).


It’s all good!


what does "Free" mean? What do you think the author meant in this context?

The problem is that "Free" means two things in English, which is why some like to use the French/Spanish "Libre" instead, to separate the "free-as-in-speech" from "free-as-in-beer".


I haven't seen a person in this thread use free in the sense of payment yet; it seems pretty unambiguous that everyone is referring to free in the sense of liberty.

This includes the point the person you are responding to is trying to make, in which case they are noting that open source and source available are not the same things. Their point seems to be that open source, by the very definition to most who matter/care about the concept, implies the "free/libre/disentangled" portion. Whether that means derivative continuation (GPL and its kin) or not (MIT, BSD, etc).

This is why the phrase "source available" exists.


> This is kinda like Docker/Podman thing on Linux – but secure instead.

How true is this? I know jails have been around longer than Linux containers, which are explicitly not designed as "secure" isolation (which is why people like fly.io use VMs instead).

How battle-tested are FreeBSD jails?

In particular, I note we're talking FreeBSD, not OpenBSD, which is the one that's all about security.


Sure - let's have a discussion about the differences between the security of FreeBSD Jails and Linux Podman containers.

Isolation: With rootless Podman it seems to be on the same level as Jails - but only if you run Podman with SELinux or AppArmor enabled. Without SELinux/AppArmor the Jails offer better isolation. When you run Podman with SELinux/AppArmor and then you add the MAC Framework (like mac_sebsd/mac_jail/mac_bsdextended/mac_portacl) the Jails are more isolated again.

Kernel Syscalls Surface: Even rootless Podman has 'full' syscall access unless blocked by seccomp or SELinux. Jails have restricted use of syscalls without any additional tools - and that can be narrowed further with the MAC Framework on FreeBSD.

Firewall: You cannot run a firewall inside a rootless Podman container. You can run an entire network stack and any firewall like PF or IPFW independently from the host inside a VNET Jail - which means more security.
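For reference, a minimal sketch of such a VNET jail in /etc/jail.conf (the path, hostname, and interface names are illustrative):

```
# Each VNET jail gets its own network stack; PF/IPFW can run inside
# it, fully independent of the host firewall.
web {
    path = "/usr/local/jails/web";
    host.hostname = "web.example.org";
    vnet;
    vnet.interface = "epair0b";
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
}
```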

TL;DR: FreeBSD Jails are generally more secure out-of-the-box compared to Podman containers and even more secure if you take the time to add additional layers of security.

> How battle-tested are FreeBSD Jails?

Jails have been in production since 1999/2000, when they were introduced - so 25 years strong - very well battle-tested.

Docker has been with us since 2014, so that means about 10 years less - but we must compare to Podman ...

Rootless support for Podman first appeared in late 2019 (1.6), so that's less than 6 years to test.

That means Jails are the most battle tested of all of them.

Hope that helps.

Regards, vermaden


I read the first line and expected LLM spam, but I was wrong. Thanks for the detailed comparison.


Thanks, when I read it now it really sounds like LLM :)

Say hello to vermadenGPT :]


Linux containers are also fairly secure, even though that isn’t their explicit purpose. Container escape bugs are CVEs and are fixed immediately.

The line is just tribalism shade.


Running containers inside VMs in multitenant scenarios is so common that Google thought of inventing gVisor, which you can think of as a highly paravirtualized guest OS that is lighter than a full VM but still based on similar virtualization principles for isolation.


and here I thought the library shredder/scanner in Vinge's Rainbows End was just sci-fi loosely based on gene sequencing...

(I mean it is, but seeing this almost real-world implementation is fun!)

