I'm surprised performance at this level even matters to most folks. If you truly thought this microbenchmark was the reason to choose one runtime over another, I'd be shocked. That makes it equally surprising that this error came from the Deno crew, who should all know better.
In either case, I hope they announce a correction and move on to more important matters. If you're trying to shave another tiny bit of RPS out of your boxes, then that's an incredible success problem; not the kind 99.999% of companies will ever have.
You'd be surprised. I've had countless battles with (junior-wannabe-senior) devs who wanted to use a different framework simply because it is "fast". When you point out that this project will be a huge success if it hits 10 req/s, the usual answer is "well, it doesn't hurt", when in truth it does - if nothing else, because it diverts discussion from important matters (like consistency of the company's tech stack) to irrelevant ones.
Deno's makers are well aware of this. An otherwise great Python framework (based on Starlette) is even named "FastAPI" in an attempt to use this to their advantage (it is great for other reasons, not because of its speed).
Unfortunately, lots of devs are looking for silver bullets when it comes to speed, instead of finding, investigating, and removing the actual bottlenecks.
Also what I thought … and indeed turns out to be the case in actual usage. It's quite fast to get a usable API up and running with FastAPI, starting from zero to the point of making useful requests to the API and getting back useful data. The actual speed of API access itself (the response times for the requests, etc) has never really been an issue I've wrestled with (me not being Twitter, etc. and not needing zillions of requests per second).
That's my take on it as well. FastAPI does use asyncio, which may have speed advantages in some circumstances. But the main takeaway, and the killer feature, is the fact that you build your API just by declaring function signatures.
You are declaring the types of the parameters, and FastAPI parses and enforces them for you. This saves quite a bit of code, and lets you focus on your business logic. Writing the code this way also allows FastAPI to generate a meaningful description of your API, which can be schema (such as Swagger) that tools can use to generate clients, or it can be documentation pages for humans to read. Any good documentation will still require you to write things, but this gets you further, faster than using something like Flask.
This example[0] takes these concepts even further.
Or just glance at the summary and see that it has nothing to do with @app.post.[1]
FastAPI has also properly supported async routes for longer than Flask, from what I understand.
(I've never personally used FastAPI for anything serious, since I have rarely used Python for anything other than machine learning for the past 5+ years, preferring to use Go, Rust, or TypeScript for most things, but I am aware of it, and seeing its claims misrepresented like that is mildly annoying. FastAPI is far more appealing to me than any other Python web framework I've ever seen, and I've only heard good things about it. Based on my experiences in other languages, their approach to writing APIs is absolutely a good one.)
Typing is the base for many reasons I love FastAPI, so yes it is useful. And I speak as someone who has used it in production on multiple projects, and even converted some from Flask (but not because of speed).
Far from "fast to get going": starting with FastAPI is actually slower, but it takes you further (as you pointed out). The train of thought that "typing" leads to "fast to get going", which leads to "fast" in the name... let's say I don't buy it.
The link to "performance" is difficult to miss, I don't think that is a coincidence. And it's OK. If this is what matters to devs that much, they would be stupid not to highlight it. I'm actually happy people are using it, even if for the "wrong" reasons.
You think no one cares about the convenience of having the type system do a lot of the work for you? Or being able to autogenerate client libraries? I find that position confusing.
Proper async support is decently important to me in any language or framework, but in the real world, I haven't often run into other developers who care much about that.
Async in Python is a huge deal, as it is in any language, yes. For reasons ranging from the very real (cutting down on incredible amounts of confusing boilerplate) to the very lame (it's been memed into developer consciousness enough that it becomes a primary yes/no gate for development teams).
It's not that. FastAPI comes with a way to declare Python types and get, for free:
- URL params, forms and query string parsing and validation
- (de)serialization format choice
- dependency injection for complex data retrieval (this one is very underrated and yet is amazing for identification, authentication, session and so on)
If the latter requires you to write your own mini-router to handle each verb separately, then yes. Having one handler per path-and-verb combination greatly improves readability and reduces boilerplate.
Maybe - in that case I'm judging them wrongly. But Flask (the king they dethroned) was faster to get going (with fewer features, granted), and their docs feature "Performance" [0] rather prominently.
Note that I don't hold this against them. They simply understand what makes the devs pick them and adjusted their market strategy accordingly. It would be nice if they didn't have to though.
Flask is faster to get going in what way? Writing types instantly saves you a ton of time and effort right out of the gate.[0]
And if your framework is faster, then of course you're going to mention it. Do you really think Flask or Django wouldn't point out that they were fast, if they were? I'm quite sure they would, since it's not shameful to educate the reader on what your framework offers compared to the competition, but they can't, because they're not.
Your link goes to the very bottom of that page, so is it really prominently featured compared to everything else they're trying to sell you on? It really doesn't seem like it. More convincing would be pointing to their list of "key features" at the top, which does mention performance first, but then quickly focuses back on "Fast to code" and "Fewer bugs".
flask took the approach of only providing basic functionality and relying on 3rd party packages for things like openapi docs and the like. fastapi has a lot more included, and it makes common tasks like documenting your api easier and more consistent
Kinda devil's advocate, but I've met several senior-should-be-junior cargo-culting devs who swear by popular tools that are objectively slower than alternatives, instead of actually taking the time to evaluate the less popular alternatives to see if the lack of surrounding ecosystem will actually affect their project. The result is a death by a thousand cuts, because they auto-pilot to "what is everyone else using"?
Putting aside who-wants-to-be-what, a change in a common tech stack is a serious change with all sorts of implications. Arguments for and against should be carefully considered, and no, "going faster can't hurt" is not an argument.
Mostly agree; if you're a Java shop, maybe stick to Java instead of confusing all of your engineers just because you read on HN how much faster Golang can be. But again, YMMV.
it's really only nuanced when you have a small team and a very tightly scoped project.
for anything that can plausibly grow in scope and team size (which, let's be honest here, is most complex projects), it almost never makes sense to go without an existing ecosystem. it becomes difficult to hire, difficult to train, difficult to pass off maintenance, slows down velocity of shipping, makes your team gradually re-invent a worse version of the framework/tooling you initially tried to avoid, etc.
i've been on both kinds of projects. when i build something solo it's a work of art in code size, API consistency, and performance...and that feels truly amazing. but unfortunately it's not something that is feasible with bigger and more diverse-skillset teams. ever-growing scope and shipping features quickly usually means giving up performance and well thought out design.
I agree. Preact, for instance, comes to mind. But it's faster! And? Is it really, in a business web app, and not just a printf("hello world")? Does it matter that much? Do they have as many devs working on it? What about edge cases? More than anything, for this type of change, does your faster magic new thing even have an ecosystem?
> You'd be surprised. I've had countless battles with (junior-wannabe-senior) devs who wanted to use a different framework simply because it is "fast". When you point out that this project will be a huge success if it has 10 req/s, the usual answer is "well it doesn't hurt", when it truth it does - if nothing else, because it diverts discussion from important matters (like consistency of the company's tech stack) to irrelevant ones.
I think that we as an industry don't have the best hindsight.
I'll use enterprise Java as an example of a few common situations:
- sometimes we go for Spring (or Spring Boot) as a framework, because that's what we know, but buy into a lot of complexity that actually slows us down
- other times we might look in the direction of something like Quarkus or Vert.X in the name of performance, but have to deal with a lack of maturity
- there's also something like Dropwizard which stitches together various idiomatic packages, yet doesn't have the popularity and tutorials we'd like
- people still end up being limited by ORMs, which can speed up development and make it convenient, but have hard-to-debug issues like over-eager fetching
- regardless of how fancy and "enterprise" your framework is, people still make data structure mistakes (e.g. iterating over a list instead of using a map)
- if you've written a singleton app (runs just on a single instance) that's monolithic, your background processes will still slow everything down
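The list-vs-map mistake above is language-agnostic; a minimal sketch (Python here for brevity, but the same point holds in Java) shows why it bites as data grows:

```python
# Looking up records by id in a list is O(n) per lookup; building a
# dict keyed by id once makes each subsequent lookup O(1).
import time

users = [{"id": i, "name": f"user{i}"} for i in range(50_000)]

def find_in_list(uid):
    return next(u for u in users if u["id"] == uid)  # linear scan

users_by_id = {u["id"]: u for u in users}  # build the index once

def find_in_map(uid):
    return users_by_id[uid]  # constant-time lookup

start = time.perf_counter()
for uid in range(0, 50_000, 500):
    find_in_list(uid)
list_time = time.perf_counter() - start

start = time.perf_counter()
for uid in range(0, 50_000, 500):
    find_in_map(uid)
map_time = time.perf_counter() - start
# map_time is typically orders of magnitude smaller than list_time
```

No framework choice can compensate for this kind of accidental quadratic behavior in request handlers.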
And then people wonder why it's hard to change their enterprise codebase, and wave their hands helplessly when their app needs at least 2 GB of RAM just to run locally, answering a simple REST request takes close to 20 seconds, and the app fires off about 2,000 database queries to return a relatively simple list with some data.
When people should think about performance, they're instead busy "getting things done", when people should think about "getting things done" they're busy bikeshedding about which new framework would look best on their CV. And we even pick the wrong problems to solve, given that many (but not all) of the systems out there won't really have that stringent load requirements and just writing decent code should be our priority, regardless of the framework/technology/language.
I remember load testing a Ruby API that I wrote: on a small VPS (1 CPU core, 4 GB of RAM), it could consistently (over 30 minutes) serve around 200 requests/second with database interaction for each, which is probably enough for almost any smaller project out there. Doubling those resources by scaling horizontally almost doubled that number, with the database eventually being the limiting factor (which could have been scaled vertically as well). And that is even considering that Ruby is slow, when compared to other options. But even "slow" can be enough, when your code is decently written (and the scale at which you operate doesn't force your hand).
Read requests, sure. In that particular instance, I was testing a write heavy workload for my Master's Degree with K6 at the time: https://k6.io/
The idea was to see how performance intensive COVID contact tracking would be, if GPS positions were to be sent by all of the devices using an app, which would later allow generating heat maps from this data, instead of just contact tracing. Of course, I talked about the privacy implications of this as well (the repository was called "COVID 1984"), but it was a nice exercise to demonstrate horizontal scaling, the benefits of containerization and to compare the overhead of Docker Swarm with lightweight Kubernetes (K3s).
So yes, write-heavy workloads are viable with Ruby on limited hardware, and read-heavy workloads can be even easier (depending on who can access what data).
The last sentence kind of contradicts the preceding ones.
I agree that it’s harmful to distract from what actually matters: the core goal and competencies of the team.
But therefore we should hope devs look for silver bullets to address performance without having to be distracted by it. “It’s out of the box pretty good so I don’t have to think about caching or CDNs or load balancing until later” is deeply valuable.
I disagree. If we're talking of microbenchmarks focused on 100k rps or whatever that are mostly IO/syscall limited, sure.
But if it's that JS execution is outright faster, it's a big deal.
People here handwave "oh, your business doesn't require more than 10 RPS". Sure. But RPS is just half the story; latency is the other half. I'll give you two examples:
1) SSR with something like Material UI is slow, especially because of the CSS-in-JS. The server rendering the page can easily take 200ms or more.
2) Modern backend stacks. On an API I maintain, I use Prisma + Apollo GraphQL. Some queries take 500ms. The same queries using REST and knex are <10ms. There are no slow SQL queries here or N+1 issues; it's just Prisma and GraphQL executing a lot of JS.
In either case, the user experience is impacted because the website becomes slower. A faster runtime would make this JS run faster, and thus the page/API load faster.
You've given 2 of the worst (imo) regressions in frontend dev in the last decade.
If you're relying on innovation to make your existing tech stack not behave like dogshit when proven and trivial solutions exist, you're valuing the wrong things when choosing your stack.
They may be the worst, but they have now become extremely common to find in a variety of company projects. Not just FAANG or FAANG-adjacent but boring insurance or healthcare companies too.
You're almost proving my point. This benchmark is focused on one tiny layer. It's not executing complex graph queries. If you want to do a real comparison, then the benchmark should be "Apollo graph query performance on Node vs Deno vs Bun", and then decide if it's good enough to justify comparing the speed of feature delivery with knex. Even then, the benchmark would need to be carefully crafted.
You may be able to slash SSR from 200ms to 195ms by using a different JS runtime. Or you may slash it to 100ms by rewriting and optimizing code (caching and such). Resources are limited. You either do one or the other.
It matters to the guy paying the AWS bill... or anyone who cares about their ecological impact. We have a duty to utilize resources as efficiently as possible, no different than anyone else. Building every new project on top of a mountain of abstraction that pushes resource utilization a few orders of magnitude beyond what is actually necessary to do the job is financially stupid at the least and socially irresponsible at worst.
If you're an auto manufacturer and you discover something like fuel injection that will dramatically improve efficiency for your customers (the people paying that bill), not adopting it makes you a terrible engineer. The 'developer velocity' argument is pure BS... there's absolutely no direct (or even really indirect) correlation there. If the engineers you have need someone else to write 9 libraries so that they can build a REST API, you need better engineers.
The only problem is that then you end up married to your mountain of abstractions. Designing things to work as intended from the outset is, in my experience, always the better path. It’s like ‘buy once, cry once’ for technical debt/effort.
Overly complex and feature-filled (or extremely barebones and "fast") frameworks can also have the property of giving no responses for additional months at a time. (i.e., sometimes "it works" is good enough, and our ego in design elegance doesn't need to get in the way of our need to keep existing as a business. If we need to rebuild or refactor later when we really know what we want, we can. :) )
It is [1] (and should be) pretty well known. 1.3 BILLION page views per month, 6K RPS with 9 (fairly weak) servers, sub-20ms response time with zero caching.
>also wondering what peak RPS is for HN.
Less than 100 RPS for logged-in users. Those numbers were from before 2020, but I doubt the current ones are significantly higher.
I mean... to be clear, they do tons of caching[0], which is certainly critical for their ability to have a non-cached response time of 20ms. Most of their responses should be coming from a cache, given the type of site they run, otherwise they would need a lot more servers.
This microbenchmark in particular isn’t a reason I’d consider Bun. But the sum of many performance and DX considerations that have been put into Bun’s development—and that they are core motivating principles for the creator—certainly are.
As for the error, I suspect it was an innocent mistake. I see no reason Deno would choose to mislead, when they’ve generally very publicly responded to performance deficits by acknowledging them and then actually improving real performance.
JavaScript is plagued by the idea that it is slow, when it is not. Many devs now have PTSD after arguing day after day that JavaScript is a good thing and not slow.
Performance is a very important thing in the JS world, for devs' peace of mind.
> JavaScript is plagued by the idea that it is slow, when it is not.
I benchmarked a hello world in .net and node/express, and the .net version was multiple orders of magnitude faster than the node/express version. That's a starting point, and as you add more logic, that gap only grows in my experience. Javascript may be fast _enough_ for many cases, and in a tight JIT loop it may be faster again, but by any measure, js is not quick.
You’re not wrong but I do think it’s important to remember the context: people don’t tend to write math-heavy code in classic JS (there’s a side discussion about WASM now) so relatively few apps bottleneck on CPU - it’s wild when you see people going on about how they need to switch frameworks based on some microbenchmark of request decoding when 99% of their request processing time is some kind of database. I’ve seen more Node apps blow up on RAM usage than CPU because someone thought async would magically make their app faster without asking how much temporary state they were using.
Where I think there’s more of a problem is cultural: similarly to Java, there’s a subset of programmers seemingly dedicated to layering abstractions faster than the JIT developers can optimize them.
>> Where I think there’s more of a problem is cultural: [...] there’s a subset of programmers seemingly dedicated to layering abstractions faster than the JIT developers can optimize them.
"Relatively few apps bottleneck on CPU" is a very 2010 opinion. With fast intranet (100Gbps+) and SSD, running business logic can become the bottleneck in many cases.
Unfortunately, I don't have readily available data to back up my claim either.
I’m not saying it can’t happen, just that it’s pretty rare. SSDs are not infinite in capacity and 100G networking isn’t common even in data centers - and more to the point, what really matters is latency: the number of cycles your CPU can execute in the time it takes for a network round trip is usually orders of magnitude greater than what your business logic needs.
Again, not saying it never happens but I’ve rarely seen the kind of microbenchmark this story is about end up correlating with real application performance. I have seen developers get all fired up in some religious war and endanger their entire project trying to see benefits which never materialized, though, a common feature there was this focus on toy benchmarks rather than measuring the whole system or what they could do at the app level if they weren’t supporting some niche framework.
And until we have a feature parity moderately complex web app written in multiple languages to compare, we'll never know. In the meantime, all we have to go on is basic benchmarks, and I've not ever seen a _single_ benchmark that puts any js, framework or otherwise in the same ballpark as java, .net or go. When I do, I'll happily change my tune, but until then I'll have to stick with what all the numbers I've ever seen say - js is significantly slower.
One example is the TechEmpower benchmarks' Fortunes section[0]. It's a fairly basic app, but it tests a full-stack web app in multiple languages, and it's pretty clear that JS is firmly in the middle of the pack, far behind the compiled options. If you have any sources to the contrary, I'd love to see them.
Maybe not .net, but I've worked on 3D graphics in the browser and can say with confidence that rewriting your app in C or C++ could see orders of magnitude perf increase over JS.
Comparing a pure JS implementation to code using OpenGL or WebGL, I imagine several orders of magnitude difference is likely.
But a decent JS implementation of 3D graphics would use one of the available tools for such applications, making the difference considerably smaller.
I've worked with a couple of the 'decent JS implementation of 3D graphics' libraries and, although they're not all like this, the ones I used were not built by people with experience doing low-level performance work. As such, they made some poor architectural decisions that prevented users of the libraries from doing some very basic optimizations that would have increased perf significantly.
The three major blockers I remember were:
1. Render contexts are created on the main thread and the user of the library gets no control over this. This means all driver overhead and library function calls block the main thread, which matters a lot when trying to hit 8ms/frame.
2. Loading textures asynchronously (in another thread, not Javascript async/await) was straight up impossible due to poor architecture. This means app startup was 500ms instead of 5ms. Maybe not a big deal to you, but our use case necessitated quick (a few frames at worst) startup.
3. The renderer used a scene graph, which was hilariously slow to traverse for large numbers of objects. Impossible to optimize by anyone as far as I can tell. Scene graphs just don't work well in JS.
I'm afraid it was a while back so I don't have it to hand, but what I do have is the techempower benchmarks [0] which show about a 10x difference between asp.net or go, and all of the js options. I'm not going to claim they're perfect, and would be happy if you could provide some info that supports your argument?
And yet the top js entry is above all c# and go entries. It takes the top overall on the composite score.
Not that it is representative of actual use cases. Can't use just-js in production as it's hyper optimized for this benchmark rather than a work horse. But it does provide a better view of what is possible if the work is put in.
in techempower, the vast vast majority of code running in the just-js entry is JavaScript. all the core libraries for networking and interacting with the OS are js wrappers around C++/v8. the http server, though incomplete and not production ready, is written in javascript, with http parsing handed off to picohttpparser. the postgres wire protocol is completely written in javascript. in fact, one of the advantages JS and other JIT languages have is you can optimize away a lot of unnecessary logic at run time when you need to. e.g. https://github.com/just-js/libs/blob/main/pg/pg.js#L241
the whole point of doing this was to prove that JS can be as fast as any other language for most real world web serving scenarios.
if i had more time to work on it, i am sure i could improve the fortunes score where it would be at or very close to the top of that ranking too.
You might call Node.js the same thing. Deno has a Rust shell and Bun is written in Zig.
just-js has spent a lot of effort optimizing the input and output gateways to the V8 engine, and it obviously pays off nicely. It does serve requests with the JS.
Is the boundary in the same place? Not familiar enough with the others to say exactly. But does it really matter?
I was hesitant as to how much I should go into this because you get into the semantics of the benchmark but this [0] thread goes into why - that particular implementation doesn't behave the same way as the other implementations, it uses a different db driver that doesn't synchronise, which won't be allowed in the next version of the benchmarks. Techempower publish regular snapshots of their benchmarks at [1], and if you look at any of the snapshots that aren't the last published set where the discrepancy was fixed you'll see that all of the js implementations lag far far behind.
i'm sorry but this is not true. postgres pipelining is not allowed in the benchmarks any more, and even when it was, just-js was completely compliant with the rules and it was other C++, PHP, Rust and C# frameworks that were non compliant.
the postgres driver was rewritten in JS because i spent so long benchmarking using the pg C driver and couldn't get the performance i needed from it. if you actually read the github thread you can see i even did a lot of work to verify the various frameworks were compliant with the requirements.
in round 21, postgres pipelining was disallowed for all frameworks and just-js/JavaScript is in first place. \o/
they upgraded postgres recently and it uses a different default authentication mechanism which broke just-js. they seem to have stopped doing runs right now so just-js should re-appear when they start again.
Oh, come on! I am in fact a front end developer. And when I saw the result first time a few years ago, I was surprised and wanted to use this “just JS”, but the reality was quite far from what I was expecting. It might look like JS, but if you check the source code of the app for the benchmark, you’ll realise that it looks more like C or C++.
I agree performance is important, but it's optimistic for any dev to assume that this particular layer is where things will be slow. Introduce a single file read or other third-party IO-dependent call into your HTTP response and poof.
I don't know if they were making a JS joke but I have legitimately had newer programmers tell me that Java is an interpreted language because it compiles to a bytecode language which is interpreted by the JVM. Inversely I've had people argue that JS and Python are compiled languages because their interpreters convert statements into bytecode before executing them. When someone starts trying to argue those points I find its best to just give them a thumbs up and leave the conversation.
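Both camps actually have a point: CPython, for instance, really does compile source to bytecode before its VM interprets it, which you can see for yourself with the standard `dis` module:

```python
# CPython compiles source to a code object (bytecode) first; the VM
# then interprets that bytecode instruction by instruction.
import dis

def add(a, b):
    return a + b

# compile() produces a code object without executing anything.
code = compile("x = 1 + 2", "<example>", "exec")
print(type(code).__name__)  # the name of the built-in code-object type

dis.dis(add)  # prints the bytecode, ending in a RETURN instruction
```

Which is why "compiled vs interpreted" describes an implementation strategy, not the language itself.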
Describing JavaScript can be confusing. C++ compiles to assembly, which maps 1-for-1 to machine code. But JavaScript is more like: "JavaScript is the programming language that interacts with your browser", or "JavaScript conforms to the ECMAScript specification, which describes how the language should act but is implemented according to each browser vendor's interpretation of said specification, and is further compiled by the browser." And that only covers browsers' JavaScript.
And I'm not even sure if the above is 100% accurate.
But all else equal, wouldn’t you want the fastest option available? Also, it’s not just about raw QPS. When a client connects to your app, you want them to receive data as quickly as possible so that they get the best user experience. That’s true whether you have 1 QPS or 100,000. Having a development philosophy that every part of the stack must run quickly is attractive.
All else is never equal. The level of adoption and support is what drives decisions in the end. That's why everyone still uses Node when Bun is probably better in every way.
Also I shouldn’t have to say this but all of the current JavaScript applications have already been written. Switching a large production codebase to a new framework does not dovetail well with modern dev practices, amplifying the pain of doing so.
I don't think application developers are the main target for their product. OTOH, if AWS/GCP/ETC. adopted Bun transparently to run your cloud functions faster and using less resources (thus less $$) that would be a win/win situation for all parties involved.
This is the point GP is making - they (the "junior-wannabe-senior") aren't even thinking of all else, and by focussing purely on the speed of operations are probably using the lesser optimal solution. Facetious example: A is faster than B, shaving a few cycles here and there. But nobody knows how to use A, it's support is lacklustre, and there are many known vulnerabilities that haven't been fixed. B is the most widely used in the industry, support and security is good. Junior-wannabe-senior is picking A because it's faster.
It's tempting to assume that just because this number is high, the rest of the dependencies required to meaningfully respond will be equally performant. That is rarely the case.
The challenge I have with these positions is that unless you have very specific latency requirements, most of the time you're better off focusing on solving a business problem and then measuring what is slow. Starting off with "well, it has to be fast, so let's use this brand-new thing" is the swan song of the eventually remorseful.
Bun has a philosophy that everything needs to be fast, including things like CLI tools and process startup. Quick process startup is important, especially in a serverless environment. I understand your point, but it’d be nice to limit the discussion to Bun and Deno and not other theoretical possibilities.
I appreciate that Bun wants to be fast, but what I need right now from Bun or Deno is a better concurrency primitive than postMessage(). I’m so… angry that we waited all this time for a worker threads implementation in Node, and what we got was this hot garbage. It’s a toy. There is no sane way to send multiple tasks to the same worker, because the results come back unordered and uncorrelated, unless you build your own layer on top. The public API should have been promise-based, not event-based.
Agreed. But the problem is not with "fast" but with "brand new". Often the two coincide (because new things frequently advertise themselves as the fast alternative). In the rare cases where they don't, fast can be a good choice.
But if the problem is in the framework or the runtime, it is too late to think about performance AFTER you have tied yourself to the slower one.
These sorts of things also build up over time. Usually when the underlying system is well thought out and performant it’s reflected in higher layers as well.
Yeah. Performance is rarely a concern. Although they are pushing it for serverless where micro benchmarks may matter if they are related to execution and startup time.
I think the benefits of Deno or Bun aren't as obvious when compared against Node on the DX front, either.
Most of their tooling and standard library can be used without adopting the CLI or switching runtimes.
Tools like tsx simplify running TypeScript code directly. tsx does pretty much what Deno does internally, using esbuild.
The modularity of runtime doesn't matter to consumers even if it's pretty cool.
FFI and security features are nice, but I think the future is running sensitive code as a wasm module directly in a separate isolated context.
Browser compatibility is an awesome boost, but most bundlers will polyfill that for you out of the box, and you will use a bundler with either Deno or Node most of the time. I know polyfilling is not perfect, but it's good enough for most.
I want to hear what strong reason people have for choosing to use either bun or deno in production.
I use deno for writing scripts because it's so easy to run them especially if they have any dependencies but outside of that, I haven't reached out for it.
Even when considering benchmarking errors, performance can be more objectively measured than developer ergonomics, good architecture, clear API and documentation, good implementation and other aspects that usually matter more than performance. It is usually the fun and immediately gamifiable aspect that more junior developers can and do easily optimize for.
In a world where engineers just keep piling on cloud toys and oh hey microservices to solve very common problems, pretending that database and HTTP roundtrips are basically free, your choice of framework and language should not even matter. This is what's wrong with this industry.
After a while you come to expect this to be the default state of the world. Then what surprises you is how often people believe benchmarks without asking to see the code.
Developers should focus more on the time spent getting an application up and running (and to market) rather than on how much time is spent serving an http request.
> Deno is a multi-threaded server that utilizes in this test almost 2x the CPU-time while Bun is running single-threaded utilizing only 1x the CPU-time.
So we should intentionally handicap Deno? This is complete nonsense. If Bun wants to come out ahead of Deno, then they should also consider scaling with the number of CPU cores.
Single-core CPUs are few and far between these days. Real-world performance is what matters.
And a real-world deployment of a single-threaded interpreter just runs multiple instances of it, so do and compare that. No need to "handicap" anything.
My gut would say that you're paying 2x the overhead for a JS runtime. Of course, such a benchmark (edit: N instances of Bun and N instances of Deno) would be much more realistic/relevant.
There was a recent post about a new Ruby server implementation. The author pointed out that for interpreted languages, forking quickly deviates and gains very little benefit from copy-on-write. However! Pre-warming the first instance and then forking afterwards brought the memory savings back down to compiled levels. No real point to my comment except that it was new knowledge to me, and that efficiency of forking really depends on the implementation of what/when you're forking.
This post doesn’t link to the original benchmark, or any code to verify what’s being suggested. Along with the click-bait heading, I think it’s fairly irrelevant.
My assumption is that this is a micro benchmark doing just an echo response, that is so far from what Bun/Deno/Node will ever be used for it’s just a farce.
The speed of your application framework's http server will (almost) never be your bottleneck.
Developer experience, available libs, security model, packaging, deployment story, and to some extent resource usage (within reason) are far more important.
Searching Google for the exact text in the slide doesn't yield any results. So perhaps that was part of the post author's angst...that the numbers were presented in a venue where they wouldn't get much scrutiny?
If you look at that blog post, he posted benchmarks (again without posting the source or methodology) and he compared Deno vs Node vs uWebSockets.js, which is just a small C library with a thin JS layer on top.
It seems like they are promoting their own library and doing a bit of their own deception by including a small library in their tests. Particularly a library that was designed entirely to perform well at that particular use case (high throughput for small bits of data).
Maybe it's just me, but there are some things I don't understand about this article.
> Deno is a multi-threaded server that utilizes in this test almost 2x the CPU-time while Bun is running single-threaded utilizing only 1x the CPU-time.
How do we know this? Also, when the author says that Deno is getting "almost 2x the CPU-time", does that mean Deno was being run on a 2-core machine?
> If we run the same test on a MacBook Air M1, with latest Bun v. latest Deno, we get the following:
What "same test" was run? The M1 MacBook Air has an 8-core CPU. If the "same test" means the test that the Deno people ran in the presentation, shouldn't Deno still be beating Bun due to the multi-core advantage?
I'm very willing to believe everything in the article. But especially with the author requesting that we verify and think critically, it would be helpful to have more details.
Looking at the source code [0] for the Deno HTTP server, they spawn at least one thread for the server to listen and handle requests, before the requests are sent through an MPSC channel to the thread running the Isolate.
Thermal throttling is going to be part of this mix. Especially on an Air. I can’t even run useful benchmarks on my work laptop. I get purely random results every time. I gave up and created benchmarks I can run in CI.
So the logical fallacy here is that since the Bun version leaves more cores idle, we can run more copies to get even more throughput. Yes, but actually no. You’ll get more throughput, but not at the multiple you think. The real test of a system is when you redline it for all scenarios.
Meanwhile the author does the same omission of important details that Deno is being criticized for: the M1 has 8 cores, yet there's no mention about how those are used in the Deno vs. Bun benchmark. Did Deno only run on one core? Did they run 8 Bun instances in parallel? Who knows.
I think nobody should be surprised that two startups competing for more or less the same space end up fighting over benchmarks. My recommendation is to ignore all benchmarks altogether and simply run the tests yourself.
If you care about performance that much you should be using neither. What you are buying into with these is DX, including features and language portability with other parts of your stack. As long as they are the same overall performance class, it doesn't matter.
> If you care about performance that much you should be using neither.
Unfortunately, by the time the coding industry understands this, we'll already have a fifth JS runtime that promises to solve all the performance issues that exist within the other four...
Node.js / Electron are some of my favorite tech, but if I need performance I'll go with Kotlin / Go / Rust; it's just simpler IMHO.
Unfortunately?? Y’all don’t know how good you have it. I work in Python and desperately wish there were multiple competing legitimately used Python runtimes. It’s slow as molasses and not getting faster because CPython has an overwhelming monopoly.
Node.js is getting faster, for free, because things like Deno and Bun exist.
I'm predominantly a React/Node/TS person by trade (just what I'm paid for), but like you I see this as a round peg square hole situation.
There are too many front-end JS frameworks and now it seems to be leaking to the backend as well in the form of runtimes. I'm just waiting for one of these groups to say "productivity is priority 1, followed by performance/security etc."
Because JS is sort of a nice sedan at its best - truly nice. But if you want a Ferrari, you're at the wrong dealership - that's a "you" problem.
If you are buying into JS/TS the very last thing you are getting is DX. I have never worked with such a low quality ecosystem ever before and I hope I won't have to endure it for too much longer.
If you want DX you are better off with JVM, Rust or something similarly well designed. Hell, Go has better DX than JS/TS and that is a pretty low bar.
Real build systems, real module systems (instead of like 3 competing incompatible ones) actual compilers instead of transpilation BS. Actually good linters and static analysis. Actually good IDEs.
The list of things all of these platforms do better than the whole JS/TS nightmare just goes on and on.
What you are getting when you buy JS is isomorphic code with browsers. That is all. DX? Forget it.
I have done front end development, as a hobby and professionally, for over a decade. I can confirm that this guy does not understand the history, limitations and complexity of tooling around the ecosystem and is just here being uninformed and whining. Phrases like "real compiler" say enough about people who mostly work on the back end and do not understand how the real world works for the web.
Or let me ask this question: how would you change all this? Why would people find it better than the current status quo, and why would that proposal be universally adopted? I am pretty sure you can't give a convincing answer, because if such a thing existed, people would have adopted it. Maybe you don't think this way, but there are a lot of smart people in this ecosystem who think about it a lot.
By the way, we have come a long way and are still much better than 2000s or early 2010s for many many reasons. I don't see any acknowledgement of that.
I'm not uninformed; I am aware better solutions don't currently exist for the frontend.
This however is a thread about using TS/JS on the backend as a web server and I think you will find all my arguments perfectly valid in that context.
Saying "Javascript tooling is better in 2022 than in 2000" is a useless statement even if true when saying "Javascript tooling is vastly inferior to JVM tooling" is also valid in 2022.
Just because it's better than it was doesn't make it good.
> I can confirm that this guy does not understand the history, limitation and complexity of tooling
It sounds like he does, at least a little bit, because that's the problem he has with it?
>Phrases like "real compiler" tell enough about people who mostly work in the back end that do not understand how the real world works for web.
"How the real world works for the web" is the problem.
>How would you change all this?
WebAssembly/WASI is a pretty good start IMO: Having a compiled language with escape hatches to browser APIs is the ideal for me. Personally, the closest I've seen is Flutter's Skia/CanvasKit (which does use WASM, apparently). I'd certainly like to see applications of similar concepts in other languages/frameworks though.
You're getting downvoted, but I absolutely agree that the ecosystem is a problem.
It's not the JS/TS language/syntax that is the problem per-se, but all the tooling that needs to exist around it for a decent DX.
NPM, Yarn, Babel, Webpack, TSC, ESLint, Prettier, even VSCode itself all contribute to an ecosystem that is a nightmare if you are trying to do anything outside of a handful of common scenarios
Tooling like Vite, Expo, CRA all try to reduce the headaches here, but at the end of the day just end up moving the headaches up an abstraction layer, and reduce the amount of supported scenarios.
That being said, there's not much support for using those languages outside of the ecosystem. Typescript is an incredible language, and if it was possible to transpile it to a language other than Javascript, that'd be ideal for me.
Yeah definitely. Typescript is passable, even has some decently cool stuff like discriminating unions which would be nice to have more widespread. The tooling however is just awful.
I'm trying to make it palatable with Bazel + rules_js/rules_ts and pnpm but even so still end up needing Babel and Webpack because browsers are a thing. This stack does make the build times somewhat more reasonable and actually manages to avoid too much wasted work locally/CI but boy does it take a lot of work to get done.
The time I'm investing in making a Typescript monorepo viable will pay off, but only when compared to not implementing my changes and staying with TS; the comparison to simply not using the stack, and using something better to solve our non-Typescript-specific business problems, would be rather lopsided.
My usual project stack is React Native (+React Native Web) to cover iOS/Android/Windows/Mac/Web all in one repo, with serverless functions (Workers/C@E/etc) on the API side in another, and with shared types between the two as a third.
However I have different business goals where I can actually take some time to choose a non-TS stack...
... but unfortunately with those kind of goals/restrictions the choices aren't really there, afaik.
On the client-side, you have Flutter, which is cool enough (No JS, everything in a canvas), but I'm not Dart's biggest fan, and there's no Dart WASM target yet to be able to write in one language for both (and share types).
On the server-side, most cloud companies only support Javascript + Rust + WASM, and WASM just isn't mature enough yet where it's not a total pain to work with (I think it'll get there at some point though)
So I'm left pretty much where you are, my last and only hope is a Typescript monorepo.
You are free to think that - I personally don't think you give it enough credit, but if your general argument is that there's a best tool for the job and this isn't it, I would agree within the context of this topic: performance.
Deno is objectively a pretty nice package and does a better job than its JS predecessors of delivering on DX. But my fear is that JS is inherently fragmented and this will never be truly resolved. I personally wouldn't dissuade those from using it - you can still achieve great results. But it's not a poster child for how we should do things. That undertone bothers me a lot more than the runtimes themselves.
Agree with the JS ecosystem being a dumpster fire, however you say "Go has better DX... and that is a pretty low bar." What in the world do you mean? Go has, in my opinion, some of the best DX of any language right now.
I use C and C++ on a day to day basis, which has an extremely bad rap in the DX department. Even coming from languages that, by and large, have very poor DX, I find the JS and TS ecosystems to be intolerable. The tooling is extremely slow, buggy, and often produces straight-up incorrect results.
I originally came from a C background where the tooling is essentially the worst in the business (embedded) so yeah, I definitely know what poor DX looks like lol.
For me, I would pick that again over the current state of Typescript/Javascript tooling. At least I felt like I had some semblance of sanity even when cobbling together piles of CMake.
Not rigged at all. Just a joke of a microbenchmark trying to equate a toy with an actual runtime.
Bun may be great someday, but right now it’s missing such basic functionality that any benchmark is a farce. Nobody cares that you can quickly ack a request a half million times a sec if you can’t do something as basic as spawn a child process.
Spawning processes was added in Bun v0.2.0 (released three days ago). It internally uses posix_spawn. That being said, there’s definitely still a lot of work to do in Bun
Fair enough and you seem to be adding features fairly quick which is impressive to watch, but my point still stands.
These benchmarks don't make me think Bun is fast, they make me think Bun is cheating in something they don't need to cheat in yet.
Go heads down, finish adding features, run another benchmark when you're closer to parity and promote THAT heavily. Sure it won't be such a dramatic difference once you have to compare apples to apples but it'll be more honest.
Here’s another perspective on this: someday, you’re going to find bugs that require more code to work correctly, and more code is slower. If you’ve fixated on performance you now have a conflict of interest, and conflicts make it easier to talk yourself into bad solutions.
I say this as someone who has gotten a lot of pushback in my career about performance work. Performance is a thing that needs to be priority. It doesn’t trump all else.
Make it work, make it right, make it fast. When people skip step 2 (and a godawful number of people do) they make enemies of themselves to their coworkers.
I fully agree. Played around with Bun quite a bit, but any benchmark is meaningless until it has all the features and stability to be production ready. Until then it's just a promising (and very respectable) start - but nothing more.
It seems like there is some battle between the two, with an article bashing Bun's performance metrics by a Deno contributor just the other day. This article doesn't do nearly as good a job of laying out the differences, though, or the Deno contributor's allegations, which appear to be legitimate.
Also, just because Bun is single threaded doesn't mean they should get special exceptions in performance testing. If they don't like the numbers then give a comparable test with Bun using multithreading.
> Also, just because Bun is single threaded doesn't mean they should get special exceptions in performance testing.
Maybe the chart should point that out, if that's the case. Or maybe the person running the benchmark should run more than one instance of Bun to make it an apples-to-apples comparison.
Micro and synthetic benchmarks are useful, however people tend to attribute WAY too much importance to them because it quickly becomes a battle of egos IMO.
It's entertaining seeing these 2 camps throwing jelly at each other; it's healthy to a point, and by the time it stops being healthy most have moved on to the new shiny thing.
I suspect since both have taken investor money, that is a factor too.
Post and heading are written to attribute this to malice, without offering any proof. It seems likely given how new Bun is that the benchmark writers simply lacked familiarity.
> Please - verify, verify, verify and think critically about what you read.
If you're going to excoriate someone for an improper benchmark, and then provide one of your own and advise your audience to "verify," then it might be wise to include instructions for how to reproduce your results.
"We interrupt the presentation of our project to bring you this important message about the existence of a similar project that we are afraid of, but think we beat on some benchmark of limited relevance. Having said that, please don't look up that project, let alone use it! ... We now return you to the presentation of our project."
I think it’s better to do it via multiple JS VMs than one http server splitting the work between multiple threads. There’s really not enough work for it to be worthwhile otherwise
Bun will eventually implement Worker/worker_threads and that will have an integration with the HTTP server for load balancing
I can’t find the tweet right now but I follow the developer on twitter and his rationale was essentially that he wanted to focus on nailing single core speeds before expanding to multi core.
I assume this has to do with how extremely experimental bun is at the moment anyway. So they are focusing on nailing the fundamentals before attempting anything that would make it „production ready“ such as multi core support.
I have no idea if this is Bun's plan or not, but IMO there's nothing wrong in opting for a share-nothing architecture where you simply spin up one Bun process per core and let caddy/nginx/... handle the load balancing.
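As a sketch of that share-nothing setup (ports and the upstream name here are made up for illustration), the reverse-proxy side can be as small as this nginx fragment, with one single-threaded runtime process listening on each port:

```nginx
upstream app_workers {
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
    server 127.0.0.1:3004;
}
server {
    listen 80;
    location / {
        proxy_pass http://app_workers;
    }
}
```

nginx round-robins across the upstreams by default, so each process stays on its own core with no shared state between them.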
Ah yeah in the submitted article they say the updated benchmark used “latest Bun…” - so does that mean that it was just tested using a released version (lacking multithreading) instead of a nightly build (with recently added multithreading)?
As an aside, I’ve now seen a couple of HN submissions relating to Bun and there’s been a vibe that there’s conspiracy or foul play afoot in each. That’s a little … unusual. What does the developer seem like on Twitter, fairly normal or a bit eccentric?
> As an aside, I’ve now seen a couple of HN submissions relating to Bun and there’s been a vibe that there’s conspiracy or foul play afoot in each. Kinda weird IMO
That's inevitable when companies with big investments have to market a performance-oriented product to a public that doesn't fully understand the nuance of software performance running on modern hardware.
The average JS dev doesn't really know what's happening behind the scenes, so a precise message will not be nearly as effective as slapping together a long and a short bar in a barchart. And since this type of marketing is imprecise by nature, tweaking some details lets opposing parties corroborate conflicting claims.
I've been through this same phenomenon when working at Redis Labs. Everybody had the fastest cache and/or db. Both Redis Labs and each of its competitors. You just had to pick the right bar chart.
I recently took a deep dive into competing json packages for go. They all have charts showing they are the fastest. Some just leave out faster packages (often because the readme is from before the faster packages existed), but others seem to have conflicting claims. When you look at them in detail, it’s often because they don’t understand what the other packages are doing (maybe willfully misunderstanding). So they end up comparing their package validating the json with another package validating the json and copying the values in memory.
I meant that there are comments or posts which appear and which make accusations that some other communities or people are deliberately biased against Bun. I couldn't remember if they came from the dev lead or from just a user of Bun, but I do recall them being a little bit weird
I'm surprised to read that "Bun actually sits above a URL router capable of matching methods, URL wildcards and parameters".
I thought bun was just a JS runtime, though I guess thinking about it maybe I don't even understand what a JS runtime is.
I always figured nodejs is v8 + a stdlib. I think bun is a custom zig-js engine + an extended stdlib (website claims "batteries included")?
Isn't the router generally "application/framework (i.e vue) specific"?
Does stuff like Express or Nextjs not actually ship a webserver and just use the underlying "runtime"'s `http.serve` function? Is it possible to make an analogous "webbrick to unicorn" server swap while still running nodejs or are you swapping runtimes to do that?
V8 only provides javascript execution. All the browser APIs need to be implemented by the runtime: fetch, filesystem, console, http, Intl, webgpu, serialization, any global objects, cloning, message passing, etc.
Any module system and dependency management is part of the runtime (loading scripts, running them before executing your code, etc).
Any other execution context, such as workers or running wasm, is also the runtime's responsibility to implement and manage.
FFI or native extensions are also part of the runtime. So one may use the built-in networking APIs to build a server framework such as express, or opt to bring their own networking layer through a native extension.
Frameworks such as express and next build on top of node's http module.
Many of the APIs provided by the runtime are themselves simply js scripts run in the global context before your code.
A router is application-specific, not part of any JS runtime, but URLPattern, a standardized API implemented by both browsers and deno, can be used to build one.
A runtime can choose to provide any API it wants under its own namespace, so deno could provide a full-blown router if they wanted to.
Not that you should choose it over node or bun.sh or deno. I look forward to seeing where bun.sh ends up on the chart.
Currently (well, as of the last benchmark, in July) deno is 0.9%-1.6% of the speed of the fastest options, while faster node options are around 20-40%.
That's pretty shocking. Both that deno is so far behind and that `just` is so far ahead (even of the rust/C entries).
Would love to read what is going on here / what `just` is doing so much better than deno. I think deno is using a pure js server in that benchmark (vs. its newest/unstable ffi-based one) but, even still, 1% is awful.
The most problematic thing about the state of the world for web framework benchmarks is that some of them drop a lot of real world requirements while showcasing high RPS.
E.g. a significant number of frameworks on the techempower benchmark don't run with any timeouts in the various stages of the HTTP request lifecycle. That means if you deployed those in the real world, the server would sooner or later run out of memory or fds due to broken client connections. Once you add timeouts, performance already drops.
Then there is logging and metrics, which are absolutely required for any production setup, but never included in any of those proof of concept setups.
If the performance after those mandatory things is added drops 2 to 4x, the conclusions from the benchmarks can be very different.
I'm curious if the Node.js setup that resulted in these numbers was also the default single process, rather than the cluster setup that's included with Node, but not configured by default.
I've been a little puzzled about the latest runtime competition. Bun is a great package manager but I don't see why you'd choose it as a runtime. Ditto with Deno and whatever other runtimes have entered the competition. Seems like you'd start using one, realize npm compatibility is rather poor, and pick the runtime that actually lets you, yknow, do stuff, i.e. node.
>Reminds me of how each version of Windows is "the fastest Windows ever!"
Says who? I searched for "windows 10 fastest" and the only relevant results were:
* claims about edge being faster, which is probably true given how much work they put into optimizing it for windows
* third party reviews saying that windows 10 S was faster (than home/pro), which is probably also true because it's so locked down that you can't run anything that slows it down
I'm not surprised that engines that prioritize "fresh" results only turn up claims that the most recent version of Windows is the fastest. Even so the most recent version does have higher system requirements than its predecessor.
Also your search fu is weak. Try: "Windows new version" faster old version