
"In Firefox I was getting just under a billion operations per second when perf testing on hardware with slow DDR3 memory."

When you profile something and you get "a billion per second," what you've got there is a loop where the body has been entirely optimized away. Presumably the JIT noticed you were doing nothing and making no changes and optimized it away. I don't think there's a single real DOM operation you can do in an amortized 3-ish CPU cycles per operation (2015 3GHz-ish CPU, but even at 5GHz it wouldn't matter).

That's not a real performance number and you won't be seeing any real DOM operations being done at a billion per second anytime soon.
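
To see the failure mode concretely, here is a minimal sketch of the same pitfall in Go rather than JS (all names here are hypothetical): if a benchmark's result is never observed, the compiler is free to delete the loop body, and the "operations per second" number becomes meaningless.

  package bench

  import "testing"

  var sink int // a package-level sink the compiler must assume is observed

  func work(n int) int { return n*n + 1 }

  // BenchmarkDead discards the result, so after inlining the whole body
  // may be optimized away, reporting absurdly high throughput.
  func BenchmarkDead(b *testing.B) {
      for i := 0; i < b.N; i++ {
          work(i)
      }
  }

  // BenchmarkLive stores the result into the sink, so the loop body
  // survives and the timing reflects real work.
  func BenchmarkLive(b *testing.B) {
      for i := 0; i < b.N; i++ {
          sink = work(i)
      }
  }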


This post gets the reason why people are cutting off LLMs exactly backwards and consequently completely fails to address the core issue. The whole reason people are blocking LLMs is precisely that they believe it kills the flow of readers to your content. The LLMs present your ideas and content, maybe with super-tiny attribution that nobody notices or uses [1], maybe with no attribution at all, and you get nothing. People are blocking LLMs with the precise intent of trying to preserve the flow to their content, be it commercially, reputationally, whatever.

[1]: https://www.pewresearch.org/short-reads/2025/07/22/google-us...


Users don't seek content for the attribution; that's extra noise, unless there's a reason to contact the attributed author. And given that many websites offer an inefficient path to content, cluttered with ads and/or unnecessary animations for example, the LLM is merely improving the experience for the user.

"Why look at a sunset when you can read a summary about one from an AI?"

-Someone, somewhere, eventually


The whole conversation here is about incentives for the content creators, not the user.

Yes, as a user I'd like everything served to me on a silver platter, for free, on demand, and completely and 100% aligned with my interests exclusively with no thought given to anybody else... but that's not a realistic world. In the real world, if the content providers have no reason to provide content, they won't.

I kind of hate the connotations of "content provider", that neutral term that implies that it is all "content" that can just be measured in megabytes or something, but I mean the full richness here of the term: individual producers, small businesses, big businesses, everybody. Even with my personal site, if I weren't getting something out of it, however intangible it may be, I wouldn't do it. I'd be mighty pissed if I lost a job someday because I got accused of just spewing out LLM content that the LLM can only spew out because my own original ideas/formulations of ideas are on the internet.


Fair, if your content is your product, but I’m more than happy for every LLM on the planet to summarize my page and hype the virtues of my product to its user.

Enjoy the brief window of LLMs "hyping the virtues of your product" to their users for free. In 2030 that's not going to sound realistic at all. And I feel I'm being generous pushing it out to 2030; the first "sponsored training data" either already exists or will probably be out this year, the only question being whether it will be publicly admitted to or not.

Why do tech bros assume that every site is selling a product? There are blogs, personal web sites, communities, and open-source projects out there.

If there's no product, and it's free, why would one care about it appearing in the output of an LLM? If it's so secret that it shouldn't, then perhaps it should be behind some auth anyway.

Because writing is, in many senses, exposing yourself, and at the very least you want the recognition for it (even if only in the form of a visit to the website, and maybe the interactions that can follow)? Maybe you want at least the prestige that comes with writing good content that took a lot of time to create. Maybe you want to participate in a community with your stuff. Maybe a million other reasons.

I know that Medium, Substack, and the other "publication" platforms (like LinkedIn) are trying to commodify even the act of writing into purely a form of marketing (either for a product, or for your personal brand), but not everyone has given up just yet.


Agreed, and we can argue semantics, but many folks would consider the content in that case a product.

Not everything everyone does is for a profit motive. I'm not trying to sell you anything, myself included, when you visit my site. It's just reading material.

Something being a product does not require a profit motive.

Why would removing your content from LLM training data cause people to go and seek it out directly from you?

Would removing your website from google search results cause people to go directly to your website?


This seems like a weird comparison - Google's explicit purpose is to direct people to your site. An LLM's purpose is not.

The point being made is that just as the search engine was the primary means for users to discover content yesterday, so the LLM agent will become the primary means tomorrow. And that content doesn't have to be in the training data, but if an agent is unable to access some particular content, then it won't be discovered by users. Similar to if a search engine is unable to access it.

As the time horizon increases, planning for the future is necessary, then prudent, then sensible, then optimistic, then aspirational, then foolish, then sheer arrogance. Claiming 25 years of support for something like SQLite is already on the farther end of the good set of those adjectives as it is. And I don't mean that as disrespect for that project; that's actually a statement of respect because for the vast majority of projects out there I'd put 25 years of support as already being at "sheer arrogance", so putting them down somewhere around "optimistic" is already high praise. Claiming they've got a 50 or 100 year plan might sound good but it wouldn't mean anything real.

What they can do is renew the promise going forward; if in 2030 they again commit to 25 years of support, that would mean something to me. Claiming they can promise to be supporting it in 2075 or something right now is just not a sensible thing to do.


Having a plan for several hundred years is possible and we've seen such things happen in other facets of life. We as humans are clearly capable of building robust, durable social organizations, religion and civics both being testaments to that.

I'm curious how these plans would look and work in the context of software development. That was more what my question was about (SQLite also being the only project I'm familiar with that takes this seriously).

We've seen what lawyers can accomplish with their bar associations, and those were created over 200 years ago in the US! Lawyers also work with one of the clunkiest DSLs ever (legalese).

Imagine what they could accomplish if they used an actual language. :D


I’d be interested to know what you would classify as having been planned to last hundreds of years. Most of the long term institutions I can think of are the results of inertia and evolution, having been set up initially as an expediency in their time, rather than conforming to a plan set out hundreds of years ago.

The Philadelphia Bar Association was established in ~1800. I doubt the profession of law is going to disappear anytime soon, and lawyers have done a good job building their profession, all things considered. Imagine if the only way you could legally sell software was through partnerships with other developers?

Do you think such a thing would have helped or hurt our industry?

I honestly think help.


What I mean is that the bar was set up for the lawyers themselves at that time. They didn’t create a 250 year plan for a Philadelphia bar that has played out in all that time and gotten us to today. It’s stayed in existence because it happened to stay useful for the lawyers that followed after them. Law itself is a collection of decisions made by judges and juries in trials, not decisions that are calibrated to have an impact over hundreds of years. Institutions are more like organisms that evolve, trying to adapt to the environment they find themselves in. The ones that work are able to stick around, and the ones that don’t die off.

You don't see an institution that established useful norms persisting for lifetimes as one worth preserving and emulating?

I do.

Medieval guilds are another equivalent, but they could not deal with the industrial revolution or colonialism, so they don't seem like something worth studying (outside of their failures) if they couldn't deal with societal change.


The problem is that 900 lines of code is also nothing for your potential customers. Non-programmers have a very poor ability to judge how difficult something is and how much it is worth paying for. Writing 900 lines is probably less effort for most organizations than it is to evaluate paying for the functionality.

Out on the super, super far end of the distribution you may have things like paying for what is essentially 900-ish lines of extremely, extremely carefully vetted code for things like encryption, but that is very, very exceptional.

I've got a few open source projects on my GitHub that are in the 900 line range, and I know they're used in a few "interesting" places but I'm not crying about it because the simple truth is the commercial value of that code is simply $0. If I tried to sell it to the people using it, they would perfectly rationally just say no. I am abundantly compensated for it by all the other open source software I get to use.


"That makes it so that in absolute terms, Python is not as slow as you might naively expect."

But we don't measure programming language performance in absolute terms. We measure it in relative terms, generally against C. And while your Python code is speculating about how this Python object will be unboxed, where its methods are, how to unbox its parameters, what methods will be called on those, etc., compiled code is speculating on actual code the programmer has written, running that in parallel, such that by the time the Python interpreter is done speculating successfully on how some method call will resolve with actual objects, the compiled language is now done with ~50 lines of code of similar grammatical complexity. (Which is a sloppy term, since this is a bit of a sloppy conversation, but consider a series of "p.x = y"-level statements in Python versus C as the case I'm looking at here.)

There's no way around it. You can spend your amazingly capable speculative parallel CPU on churning through Python interpretation or you can spend it on doing real work, but you can't do both.

After all, the interpreter is just C code too. It's not like it gets access to special speculation opcodes that no other program does.


I love this “real work”. Real work, like writing linked lists, array bounds checking, all the error handling for opening files, etc, etc? There is a reason Python and C both have a use case, and it’s obvious Python will never be as fast as C doing “1 + 1”. The real “real work” is in getting stuff done, not just making sure the least amount of cpu cycles are used to accomplish some web form generation.

Anyway, I think you’re totally right, in your general message. Python will never be the fastest language in all contexts. Still, there is a lot of room for optimization, and given it’s a popular language, it’s worth the effort.


I can't figure out what your first paragraph is about. The topic under discussion is Python performance. We do not generally try to measure something as fuzzy as "real work", as you seem to be using the term, in performance discussions, because what even is that? There's a reason my post referenced "lines of code", still a rather fuzzy thing (which I already pointed out in my post), but it gets across the idea that while Python has to do a lot of work for "x.y = z", covering all the things that "x.y" might mean, including the possibility that the user has changed what it means since the last time this statement ran, compiled languages generally do over an order of magnitude less "work" in resolving that.

This is one of the issues with Python I've pointed out before, to the point that I suggest someone could make a language around this idea: https://jerf.org/iri/post/2025/programming_language_ideas/#s... In Python you pay and pay and pay and pay and pay for all this dynamic functionality, but in practice you aren't actually dynamically modifying class hierarchies and attaching arbitrary attributes with arbitrary types to arbitrary instances. You pay for these features but you benefit from them far less often than the number of times Python is paying for them. Python spends rather a lot of time spinning its wheels double-checking that it's still safe to do the thing it thinks it can do, and it's hard to remove that even with a JIT because it is extremely difficult to prove it can eliminate those checks.
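
To make the asymmetry concrete, here is a rough sketch in Go (hypothetical names, and only a loose analogy) of the gap between a static field write and the name-lookup-plus-checks machinery that a dynamic attribute assignment implies:

  package main

  import "fmt"

  // Static is what a compiled language sees: "Y" lives at a fixed offset.
  type Static struct{ Y int }

  // Dynamic loosely mimics what an interpreter does for every attribute
  // assignment: resolve the name through a hash map, every single time.
  type Dynamic struct{ attrs map[string]int }

  func main() {
      s := Static{}
      s.Y = 42 // compiles to a direct store at a known offset

      d := Dynamic{attrs: map[string]int{}}
      d.attrs["y"] = 42 // hashing, bucket probing, growth checks -- each time

      fmt.Println(s.Y, d.attrs["y"])
  }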


I understand what you're saying. In a way, my comment is actually off-topic to most of your comment. What I was saying in my first paragraph is that the words you use in the context of a language runtime's inefficiency can also be used to describe why these inefficiencies exist in the context of higher-level processes, like business efficiency. I find your choice of words amusing, given the juxtaposition of these contexts, even saying "you pay, pay, pay".

You claimed churning through Python interpretation is not "real work". You now correctly ask the question: what is "real work"? Why is interpreting Python not real work, if it means I don't have to check for array bounds?

>Why is interpreting Python not real work, if it means I don't have to check for array bounds?

Because other languages can do that for you too, much much faster...


To put it another way, I choose Python because of its semantics around dynamic operator definition, duck typing etc.

Just because I don’t write the bounds-checking and type-checking and dynamic-dispatch and error-handling code myself, doesn’t make it any less a conscious decision I made by choosing Python. It’s all “real work.”


Type checking and bounds checking aren't "real work" in the sense that, when somebody checks their bank account balance on your website or applies a sound effect to an audio track in their digital audio workstation, they don't think, "Oh good! The computer is going to do some type checking for me now!" Type checking and bounds checking may be good means to an end, but they are not the end, from the point of view of the outside world.

Of course, the bank account is only a means to the end of paying the dentist for installing crowns on your teeth and whatnot, and the sound effect is only a means to the end of making your music sound less like Daft Punk or something, so it's kind of fuzzy. It depends on what people are thinking about achieving. As programmers, because we know the experience of late nights debugging when our array bounds overflow, we think of bounds checking and type checking as ends in themselves.

But only up to a point! Often, type checking and bounds checking can be done at compile time, which is more efficient. When we do that, as long as it works correctly, we never† feel disappointed that our program isn't doing run-time type checks. We never look at our running programs and say, "This program would be better if it did more of its type checks at runtime!"

No. Run-time type checking is purely a deadweight loss: wasting some of the CPU on computation that doesn't move the program toward achieving the goals we were trying to achieve when we wrote it. It may be a worthwhile tradeoff (for simplicity of implementation, for example) but we must weigh it on the debit side of the ledger, not the credit side.

______

† Well, unless we're trying to debug a PyPy type-specialization bug or something. Then we might work hard to construct a program that forces PyPy to do more type-checking at runtime, and type checking does become an end.


> and the sound effect is only a means to the end of making your music sound less like Daft Punk or something

What do you mean? Daft Punk is not daft punk. Why single them out? :)


Well, originally I wrote "more like Daft Punk", but then I thought someone might think I was stereotyping musicians as being unoriginal and derivative, so I swung the other way.

I believe they are talking about the processor doing real work, not the programmer.

Yeah, I get it, but I found the choice of words funny, because these words can apply in the larger context. It's like saying Python transfers work from your man-hours to CPU hours :)

> And while your Python code is speculating about how this Python object will be unboxed

This is wrong, I think? The GP is talking about JIT'd code.


> After all, the interpreter is just C code too.

What interpreter? We’re talking about JITting Python to native code.


Welp, there is Mojo, so it looks like soon you will not really need to care that much. It'll probably get better performance than C too.

I've been hearing promises about "better than C" performance from Python for over 25 years. I remember them on comp.lang.python, back on that Usenet thing most people reading this have only heard about.

At this point, you just shouldn't be making that promise. Decent chance that promise is already older than you are. Just let the performance be what it is, and if you need better performance today, be aware that there are a wide variety of languages of all shapes and sizes standing by to give you ~25-50x better single threaded performance and even more on multi-core performance today if you need it. If you need it, waiting for Python to provide it is not a sensible bet.


I am a bit older than Python :). I imagine the creator of Clang and LLVM has a fairly good grasp on making things performant. Think of Mojo as Rust with better ergonomics and a more advanced compiler that you can mix and match with regular Python.

I maintain a program written in Python that is faster than the program written in C that it replaces. The C version can do a lot more operations, but it amounts to enumerating 2^N alternatives when you could enumerate N alternatives instead.

Certainly my version would be even faster if I implemented it in C, but the gains of going from exponential to linear completely dominate the language difference.


You're probably right; Mojo seems to be more "Python-like" than actually source-compatible with Python. A bunch of features, notably classes, are missing.

Give 'em a bit of time; it's a pretty young language.

Mojo feels less like a real programming language for humans and more like a language primarily for AIs. The docs for the language immediately dive into chatbots and AI prompts.

I mean, that's the use case they care about, for obvious reasons, but it's not the only use case.

Fiber just came through my area. They offer up to 3Gbps for less than I was paying Comcast for ~500Mbps asymmetric, and for more money I can get 5Gbps... but I just signed up for the 500Mbps symmetric and pocket the difference monthly, because what the hell am I going to do with even 1Gbps? My Wifi can't do 5Gbps, and all but two network devices in my house use Wifi to get to the internet. My NVMes can nominally do it, but it takes everything firing on all cylinders to actually achieve that. I've still got some spinning rust that is pretty full up at even 500Mbps. I do run backups to AWS, but that runs in the nighttime anyhow and could still finish a complete non-incremental backup in 4-5 hours at full speed, and I have incrementals anyhow. Sure, the one game a month I download from Steam would be ready in 4 minutes instead of 8, but, seriously, how much am I willing to pay for those four minutes? It's not like I'm staring at the progress bar at that point anyhow.
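
For the curious, the back-of-the-envelope math behind those numbers, assuming a roughly 30 GB game (a made-up but plausible size):

  package main

  import "fmt"

  // minutes returns the transfer time for a payload of the given size in
  // gigabytes at the given line rate in gigabits per second.
  func minutes(gigabytes, gbps float64) float64 {
      return gigabytes * 8 / gbps / 60
  }

  func main() {
      fmt.Printf("500 Mbps: %.0f min\n", minutes(30, 0.5)) // ~8 minutes
      fmt.Printf("1 Gbps:   %.0f min\n", minutes(30, 1.0)) // ~4 minutes
  }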

500Mbps is already enough for me to tailscale my house network up and have every single member of my family accessing the house Jellyfin server remotely simultaneously, which is not a realistic amount of load.

100Mbps down is still plenty for most people. 20Mbps up is definitely making some things annoying but most people will still be fine. It's a fine definition of minimum service for "broadband".


"Nothing prevents them from incrementally starting to add proper parallelism, multithreading, ..."

In principle, perhaps not. In practice, it is abundantly clear by now from repeated experience that trying to retrofit such things onto a scripting language that has been single-threaded for decades is an extremely difficult and error-prone process that can easily take a decade to reach production quality, if indeed it ever does, and then take another decade or more to become something you can just expect to work, expect to find libraries that use it properly, etc.

I don't think it's intrinsic to scripting languages. I think someone could greenfield one and have no more problems with multithreading than any other language. It's trying to put it into something that has been single-threaded for a decade or two already that is very, very hard. And to be honest, given what we've seen from the other languages that have done this, I'd have a very, very, very serious discussion with the dev team as to whether it's actually worth it. Other scripting languages have put a lot of work into this and it is not my perception that the result has been worth the effort.


This isn't really that important. I don't care if the probe is here because of magh'Kveh or because its creators are really motivated to zzzzssszsezesszzesz. What I care about is whether it's going to be benign (which includes just cruising through doing nothing) or malevolent to me. I don't even care if the aliens think they are doing us a favor by coming to a screeching halt, going full-bore at Earth, and converting our ecosystem into a completely different one that they think is "better" for whatever reason. However gurgurvivick that makes them feel, I'm going to classify that as a malign act and take appropriate action... because what else can I even do?

And from that perspective, "benign" and "malign" aren't that hard to pick up on. They are relative to humanity, and there is nothing wrong with that. In fact it would be pathological to not care about how the intentions are relative to their effect on humanity.

Whatever happens, it's not like we can actually cause an interstellar incident at this phase of our development. Anything that they would interpret as an interstellar incident, they were going to interpret that way anyhow (e.g. "how dare you prevent our probe from eliminating your species?"), and that responsibility is on them, not us. You can't blame a toddler who can barely tie their shoelaces for international incidents; likewise for us and interstellar incidents.


> Whatever happens, it's not like we can actually cause an interstellar incident at this phase of our development.

What if we have inadvertently caused tremendous offense via our radio/television/planetary radar signals?


One problem with your assumption here is that "humanity" has no definition of "benign" and "malign".

If we did have such a thing, coherent extrapolated volition would be solved, and that would solve half of the AI alignment problem.

This hypothetical "alien" problem is actually pretty much equivalent to the AI alignment problem. One half is, we don't know what we want, and the other half is, even if we knew... we don't know how to make "them" do what we want.


Sure, and I can't figure out whether the guy who is letting me in to traffic instead of cutting me off is malign or benign, because I lack a definition of those words. Alas, I am doomed to infinite confusion forever.

It's very fashionable to confuse the inability to draw bright shining lines as being unable to define a thing at all, but I don't have much respect for that attitude. Of all the outcomes, "the probe engages in indefinite behavior that we are never able to classify as 'humanly benign' or 'humanly malign'" is such a low percentage that it's something I'll worry about when it happens.

The world is full of concepts we can't draw bright shining lines through. In fact the ones we can are the exceptions. We manage to have definitions even so.


> One problem with your assumption here is that "humanity" has no definition of "benign" and "malign".

Agreed. One can think of any number of actions that would be impossible to rate on a benign/malign scale. E.g. as a trivial example: aliens destroy 80% of humanity, which leads to restoration of Earth ecosystems and prevention of the inevitable future war that would destroy 100% of humanity; in 100 years humanity is in a much better position than it would have been if left alone [0] [1]

And that doesn't even include intentions. We often do bad things for good reasons, with good intentions. Malignity includes or implies the intention to cause harm. That may not be present, or the intention may have been benign.

Morality is complicated and subjective. Even judging the outcome of an action as positive or negative is complicated and subjective.

[0] I don't really want to argue whether this is true, possible, etc. Pick your own variant of example where a seemingly-malign action is actually benign in the long term.

[1] Also raises the problem of estimating "better" in this context. Exercise left for the reader.


I feel confident that we do.

> and converting our ecosystem into a completely different one that they think is "better" for whatever reason.

You could theoretically be convinced that they are right and resign yourself to death.


The major problem with understanding articles like this is that while it typically doesn't involve quantum entanglement, it's close enough to quantum that it makes the science writers get all giddy about the words they are throwing around and they do their usual "why inform the reader about what is going on when we can just make them go Gee Whiz" schtick.

The key word is "quasi-particle" which is somewhat less exotic than it sounds. It is a combination of what you might call real or normal particles that produces some sort of pattern in it that itself acts like a particle of some sort. The resulting "quasi-particle" can have all kinds of interesting properties that normal particles can't have on their own, but what makes them "quasi" is that they can't exist on their own. They're intrinsically on top of some substrate of normal particles.

One of the simplest quasiparticles is the "electron hole". Take a lattice of some electrically neutral substance. Remove one electron from it. There is now an "electron hole" in it. You can treat that hole like a particle now. It can "move" to another location by having the real electrons change places. It can "flow" through a series of such events. You can model a lot of things with "electron holes" that act in very particle-like ways. But they don't exist on their own. This one is simple because you don't even need quantum mechanics to get a hold of it in your head.

Many more complicated scenarios are possible. Many interesting things can happen with them. Most, if not all, news articles about "new phases of matter", which science writers love to write about only slightly less than making "woo woo" motions with their fingers while talking about quantum entanglement, are new quasiparticles of some sort. This is somewhat less interesting than they think because if you include quasiparticles as "phases of matter" then there are already hundreds or thousands, but the science writer wants to write an article about every single one of them as if the list is now "solid, liquid, gas, Weyl semimetals" and then write the next article as if the list is now "solid, liquid, gas, ELECTRON HOLE" and so on and so on for each new quasiparticle.

But from this perspective, the list hasn't been so short as "solid, liquid, gas" for well over a hundred years now, and while adding a new one is often good science, it has also been "just" another one of thousands for a while now.

This post is not an explanation of "spin ice", "Weyl fermions", or anything else; what this is is the "secret decoder ring" to remove the wiggly fingers and the "woo woo" noises the science writers add to this topic every time they write about it, and to give you the terms you can Google to start reading up on what is one of the most interesting and productive fields in the hard sciences right now. Everyone loves to talk about how stuck particle physics is, but physics is making a lot of interesting findings in the field of making the particles we know about sing and dance in all sorts of new and interesting ways.


> One of the simplest quasiparticle is the "electron hole". Take a lattice of some electrically neutral substance. Remove one electron from it. There is now an "electron hole" in it. You can treat that hole like a particle now. It can "move" to another location by having the real electrons change places. It can "flow" through a series of such events. You can model a lot of things with "electron holes" that act in very particle-like ways. But they don't exist on their own. This one is simple because you don't even need quantum mechanics to get a hold of it in your head.

An electron hole seems like a simple, almost silly idea at first. Isn't it just like the hole in a sliding puzzle game? You move a neighbouring electron into the hole, so the hole disappears and a new hole appears at the neighbouring position. It seems to "move". Does this deserve a special name like "quasi-particle"?

But it's not like the hole in a sliding puzzle!

An electron hole moves with inertia, like a real particle. It behaves as if it has mass: You can push it and it starts moving. If you push it more, it accelerates more. But unlike a sliding puzzle, when you stop pushing, the electron hole carries on moving at the same speed.

It keeps going by itself in whatever direction it was going, until it's pushed in a different direction, or bounces off something.

You can't push a sliding puzzle hole at a diagonal angle, let alone push it that way and then watch the puzzle hole keep on moving that way by itself like an independently moving object, as far as it can go until it hits something.

If you had a large sliding puzzle with two holes, you wouldn't expect to be able to send them towards each other, bounce off each other and continue.

And you certainly can't perform double slit interference with sliding puzzle holes. You can, in principle (hard in practice), make electron hole beams and interfere them.

Things like holes and other patterns in matter behave remarkably like real, coherent particles, even though they are just patterns.


Thank you for that fantastic elaboration. I'll have to put it in my pocket for future discussions to link to.

Working with this sort of thing is on my short list of "if I had it to do all over again". It's really fascinating stuff.


OK, but you're not in "Go"-specific problems any more, that's just concurrency issues. There isn't any approach to concurrency that will rigorously prevent programmers from writing code that doesn't progress sufficiently, not even going to the extremes of Erlang or Haskell. Even when there are no locks qua locks to be seen in the system at all I've written code that starved the system for resources by doing things like trying to route too much stuff through one Erlang process.

I would say it is a Go-specific problem with how mutexes and defer are used together.

In Rust you would just throw a block around the mutex access, changing the scoping and ensuring the guard is dropped before the slow function is called.

Call it a minimally intrusive manual unlock.


In Rust you can also explicitly drop the guard.

    drop(foo); // Now foo doesn't exist; it was dropped, unlocking anything which was kept locked while foo existed
If you feel that the name drop isn't helpful, you can write your own function which consumes the guard; it needn't actually "do" anything with it - the whole point is that we moved the guard into this function, so, if the function doesn't return it or store it somewhere, it's gone. This is why Destructive Move is the correct semantic and C++ "move" was a mistake.

You can also just drop it by scoping the mutex guard to the critical area using a block, since it’ll be dropped when it goes out of scope.

Generally, in any language, I'd suggest that if you're fiddling with lots of locks (be they mutexes or whatever), you're taking the wrong approach.

Specifically for Go, I'd try to address the problem in CSP style, so as to avoid explicit locks unless absolutely necessary.

Now for the case you mention, one can actually achieve the same thing in Go; it just takes a bit of prior work to set up the infra.

  package main

  import "sync"

  type Foo struct {
      sync.Mutex
      s string
  }

  // doLocked runs fn while holding data's lock and releases it on return,
  // so the caller's slow work happens outside the critical section.
  func doLocked[T sync.Locker](data T, fn func(data T)) {
      data.Lock()
      defer data.Unlock()
      fn(data)
  }

  func main() {
      foo := &Foo{s: "Hello"}
      doLocked(foo, func(foo *Foo) {
          /* ... use foo under the lock ... */
      })
      /* do the slow stuff; the lock is already released */
  }

> OK, but you're not in "Go"-specific problems any more, that's just concurrency issues.

It's absolutely a Go-specific problem stemming from defer being function-scoped. Which could be ignored if Unlock were idempotent, but it's not.


It's tedious, I agree, but I found it easiest to just wrap it in an inline function defined and called there and then.

This alleviates all these problems of unlocks within if bodies at the cost of an indent (and maybe slight performance penalty).
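
A minimal sketch of that pattern (names hypothetical): the defer fires when the anonymous function returns, so the lock is released before the slow work starts.

  package main

  import (
      "sync"
      "time"
  )

  func slowWork() { time.Sleep(time.Second) } // stand-in for the slow part

  // update holds the lock only for the critical section by wrapping it in
  // an inline function; defer fires when that function returns, not update.
  func update(mu *sync.Mutex, m map[string]int) {
      func() {
          mu.Lock()
          defer mu.Unlock()
          m["key"]++ // critical section only
      }() // defined and called there and then; lock released here
      slowWork() // runs with the lock already released
  }

  func main() {
      var mu sync.Mutex
      m := map[string]int{}
      update(&mu, m)
  }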

