Oklo | Remote (US) or Santa Clara or Brooklyn | Full time | https://oklo.com
Join us in pioneering the next generation of nuclear reactors! You'll leverage your software skills alongside nuclear engineers to model, simulate, design, and deploy advanced fission power technology. You will work at the forefront of the nuclear industry, developing novel techniques to reach new levels of safety, efficiency, and resiliency. Come be a part of powering the future with advanced fission power plants to provide clean, reliable, affordable energy.
At $WORK we use SQLite in WASM via the official ES module, running read-only in the browser.
The performance is very poor, perhaps 100x worse than native. It's bad enough that we only use SQLite for trivial queries. All joins, sorting, etc. are done in JavaScript.
Profiling shows the slowdown is in the JS <-> WASM interop. This is exacerbated by SQLite's one-row-at-a-time "cursor" API, which means at least one FFI round-trip per row.
Don't confuse "presence of dynamic types" with "absence of static types."
Think about the web, which is full of dynamism: install this polyfill if needed, call this function if it exists, all sorts of progressive enhancement. Dynamic types are what make those possible.
Sure. I'm primarily a C# programmer, and C# does have a `dynamic` type; I also occasionally use VB, which uses late binding and can do dynamic typing as well.
You want to know how often I find dynamic typing the correct tool for the job? It's literally never.
Dynamic typing does allow you to do things faster as long as you can keep the whole type system in your head, which is why JavaScript was designed the way it was. That doesn't mean it is necessary to do any of those things, or is even the best way to do it.
If the allocation is backed by the kernel, then it will be zero-filled for security reasons. If it's backed by user-space malloc then who knows; but there's never a scenario where a malloc'ed page is quietly replaced by a zero-filled page behind the scenes.
SIMD is part of it, but the original guess is also correct, and that effect is bigger!
using_map is faster because it's not allocating: it's re-using the input array. That is, it operates on the input `v` in place, mutating each element rather than writing into a freshly allocated vector.
Even when passing the array as a borrow instead of a clone[1], `map` still auto-vectorizes and performs the new allocation in one go, avoiding the bounds checks and possible calls to `grow_one`.
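A minimal sketch of that reuse. The pointer check relies on std's in-place `collect` specialization (element size and alignment match, so the input's buffer is recycled) — an implementation detail of current rustc, not a documented guarantee:

```rust
fn main() {
    let v: Vec<i32> = vec![1, 2, 3, 4];
    let old_ptr = v.as_ptr();

    // `into_iter().map().collect()` can reuse the input allocation when the
    // element types have matching size/alignment (a std specialization).
    let doubled: Vec<i32> = v.into_iter().map(|x| x * 2).collect();

    assert_eq!(doubled, [2, 4, 6, 8]);
    // On current rustc the buffer is reused, so the pointer is unchanged:
    assert_eq!(doubled.as_ptr(), old_ptr);

    // Behaviorally this is the same as mutating each element in place:
    let mut w = vec![1, 2, 3, 4];
    for x in w.iter_mut() {
        *x *= 2;
    }
    assert_eq!(w, doubled);
}
```

No second allocation happens in the `collect` path, which is why it profiles like the in-place loop.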
"Numbers go into the numbers vector" is unusual - typically JS engines use either NaN-boxing or inline small integers (e.g. v8 SMI). I suppose this means that a simple `this.count += 1` will always allocate.
Have you considered using NaN-boxing? Also, are the type-specific vectors compacted by the GC, or do they maintain a free list?
We do have all safe integers inline (and most doubles too).
I answered about NaN boxing somewhere here but basically, we get quite a bit of mileage from our tagged union / enum / ADT based Value, so I don't think I'd change to NaN boxing now even if I could.
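The "tagged union / enum / ADT based Value" could look something like the sketch below (names invented for illustration, not the engine's actual code). Each variant carries its payload inline, so incrementing an integer never touches the heap:

```rust
// Hypothetical tagged-union Value: tag + inline payload per variant.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Value {
    Null,
    Bool(bool),
    Int(i64),    // safe integers stored inline
    Double(f64), // doubles inline too
    // Heap kinds (strings, objects) would carry a pointer or index.
}

impl Value {
    fn add_one(self) -> Value {
        match self {
            Value::Int(i) => Value::Int(i + 1), // pure register work, no allocation
            Value::Double(d) => Value::Double(d + 1.0),
            other => other, // a real engine would coerce or throw here
        }
    }
}

fn main() {
    assert_eq!(Value::Int(41).add_one(), Value::Int(42));
    // Tag word + 8-byte payload: typically 16 bytes on 64-bit targets,
    // versus the single 8-byte word a NaN-boxed representation would use.
    assert!(std::mem::size_of::<Value>() <= 16);
}
```

The trade-off versus NaN-boxing is roughly this doubled value size against much simpler, fully type-checked variant handling.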
That's not a problem in most modern lexers, as they usually have a "state": when you encounter "echo" you can switch to a new state, and that state may have different token rules. So "if" could be a string literal in the "echo" state, whereas it may be a keyword in the initial state.
Lex/Flex takes care of that mostly for you, which is one of the benefits of using a well-worn lexer generator rather than rolling your own.
unless I'm missing something, this shouldn't be an issue. The lexer could emit `if` as an IF token, and the parser could treat tags as STRING || IF ( || other keywords… )
That seems like it'd get really awkward pretty quickly. "if" isn't unique in this regard; there are about a hundred shell builtins, and all of them can be used as an argument to a command. (For example, "echo then complete command while true history" is a valid shell command consisting entirely of names of builtins, and the only keyword in it is the leading "echo".)
The problem lies with shells' extensive use of barewords. If you could eliminate the requirement for any bareword to be treated as a string, then parsing shell code would become much simpler... but few people would want to use it, because nobody wants to quote every argument in their interactive shell.
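The stateful-lexer idea from upthread can be sketched in a few lines. This is a deliberately toy model (whitespace-split tokens, `;` as the only separator, a tiny keyword list): a word is treated as a keyword only in "command position", and anywhere else — like the builtin names in "echo then complete command while true" — it's just a word:

```rust
#[derive(Debug, PartialEq)]
enum Tok {
    Keyword(String), // "if", "while", ... recognized only in command position
    Word(String),    // everything else, including keywords used as arguments
}

// Toy keyword set for illustration; real shells have more.
const KEYWORDS: &[&str] = &["if", "then", "else", "fi", "while", "do", "done"];

// Command position holds at the start of input, right after `;`, and right
// after another keyword (since `if`/`then` are themselves followed by a command).
fn lex(input: &str) -> Vec<Tok> {
    let mut toks = Vec::new();
    let mut command_pos = true;
    for w in input.split_whitespace() {
        if w == ";" {
            toks.push(Tok::Word(";".into())); // real shells tokenize `;` specially
            command_pos = true;
        } else if command_pos && KEYWORDS.contains(&w) {
            toks.push(Tok::Keyword(w.into()));
            // stay in command position: a command follows the keyword
        } else {
            toks.push(Tok::Word(w.into()));
            command_pos = false;
        }
    }
    toks
}

fn main() {
    // Builtin/keyword names used as arguments are all plain words:
    let toks = lex("echo then complete command while true");
    assert!(toks.iter().all(|t| matches!(t, Tok::Word(_))));

    // ...but they are keywords in command position:
    assert_eq!(lex("if true")[0], Tok::Keyword("if".into()));
}
```

This is essentially what Flex start conditions give you for free, with the state switching written out by hand.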
Rust goes to substantial lengths to allow unwinding from panics. For example, see how complicated `Vec::retain_mut` is. The complexity is due to the possibility of a panic, and the need to unwind.
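A small demonstration of why that care is needed (this shows the observable behavior, not the std internals): the caller's predicate can panic mid-iteration, and `retain_mut` must leave the Vec structurally valid — no leaks, no double drops — even while unwinding out of it:

```rust
use std::panic::{catch_unwind, AssertUnwindSafe};

fn main() {
    let mut v = vec![1, 2, 3, 4, 5];

    // The predicate panics partway through. std's retain_mut uses a drop
    // guard so the Vec stays valid even though we unwind out of the call.
    let result = catch_unwind(AssertUnwindSafe(|| {
        v.retain_mut(|x| {
            if *x == 3 {
                panic!("boom");
            }
            *x % 2 == 1 // keep odd numbers
        });
    }));

    assert!(result.is_err()); // the panic propagated out of retain_mut
    // Exactly which elements survive after a panic is unspecified, but the
    // Vec remains safe to use:
    assert!(v.len() <= 5);
    v.push(42);
    assert_eq!(*v.last().unwrap(), 42);
}
```

The complexity in the std implementation is mostly this guard logic: elements already decided must be shifted/truncated correctly on the unwind path so nothing is dropped twice or leaked.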