Your house sounds like a great place to hold a local fighting game tournament (or something like the old Smash Summit series for Smash Bros. Melee and Ultimate, before Beyond The Summit shut down).
Burrito works very well in my experience. I've used it to distribute an implementation of Breakout in Elixir, with OpenGL and Metal rendering backends, as a single binary. Pretty neat!
I’d like to take this moment to say that the recent She-Ra revival series on Netflix by ND Stevenson (the creator of Nimona) is pretty good; go watch it.
I might try this next, will check out and try to build it tomorrow
Shame that there’s no way to run ./configure on native Windows though; I’ll have to use MSYS2.
Imho maintainers should just keep a set of pre-made header files for Windows compilers; a huge portion of the work configure does exists because we historically had:
- Bad compilers (that lacked stdlib features)
- No package managers (so we had to detect versions instead of just specifying the libraries the program uses)
- The mess of various *nix/Linux distros having differing paths (/bin or /local/bin or /usr/local/bin or whatever?), with *nix binaries lacking a standardized way to just locate "themselves"
In contrast, Windows programs mostly just make an API call to find out where they live and then load files from relative paths. This also allows side-by-side installations of different versions instead of multiple builds. (Yes, some programs sadly needed installers, but that's just bad engineering; many programs have portable variants.)
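For what it's worth, the call in question is basically GetModuleFileName. A rough sketch (untested; the helper name is mine):

    #include <windows.h>
    #include <iostream>
    #include <string>

    // Ask Windows where our own executable lives, then resolve data files
    // relative to that directory (the "portable app" layout described above).
    std::wstring exe_dir() {
        wchar_t path[MAX_PATH];
        DWORD n = GetModuleFileNameW(nullptr, path, MAX_PATH); // nullptr = this .exe
        std::wstring p(path, n);
        return p.substr(0, p.find_last_of(L'\\')); // strip the file name
    }

    int main() {
        // e.g. load exe_dir() + L"\\data\\config.ini"
        std::wcout << exe_dir() << L"\n";
        return 0;
    }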
Considering that Mac programs are also self-contained, I guess macOS also has some sane APIs for program self-location.
Yes, I do realize that much of the centralization of programs harkens back to Unix multi-user paradigms with centralized management, but personal computers have been the norm for almost 40 years at this point (even if we've moved to web mainframes instead).
Gauche, which is hosted on this site, can do it. It does so by statically linking the entire Gauche system, so it may not be the best option. Besides Chez (compiling to native code), which sibling comments mentioned, other options are CHICKEN and Gambit, which compile to C (the CHICKEN docs even provide instructions for cross-compiling [0]).
I've always liked Bigloo. It's probably the most pragmatic of the Schemes, in my opinion. It never gets the attention that CHICKEN and Gambit get, though, and I've always wondered why.
Oh, right, I forgot about that one. As I remember, it's a good recommendation.
I'm guessing it doesn't get much chatter because INRIA isn't very good at promoting the stuff they do, and Bigloo doesn't have the academia-industry-matrimonial push that e.g. Pharo has received.
I've found Chicken reasonably good at compiling to a standalone executable on Linux. Because of how it works I imagine you can get it to work with msys too.
I am fairly sure CHICKEN can do this (I've never used it on Windows myself, but the homepage lists all three major platforms). It worked great for building executables on Linux, and it had a good ecosystem of packages.
The best way I've found to make a standalone executable is to compile my scheme program into a .boot file and embed it, along with Chez's .boot files, into a small C program that then calls the scheme program.
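The C side ends up tiny. From memory it's roughly this (check the Chez Scheme User's Guide for the exact C API; "app.boot" and "my-main" stand in for your own boot file and entry point):

    #include "scheme.h"   /* Chez's C kernel API */

    int main(int argc, char *argv[]) {
        Sscheme_init(0);                    /* 0 = default abnormal-exit handler */
        Sregister_boot_file("petite.boot"); /* Chez's own boot files */
        Sregister_boot_file("scheme.boot");
        Sregister_boot_file("app.boot");    /* your compiled program */
        Sbuild_heap(0, 0);
        /* call the Scheme entry point, assumed to be a top-level my-main */
        Scall0(Stop_level_value(Sstring_to_symbol("my-main")));
        Sscheme_deinit();
        return 0;
    }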
The author got hired by Modular, the AI startup founded by the creators of LLVM and Swift, and is now working on the new language Mojo.
He’s been bringing a bunch of ideas from Vale to Mojo
Oh nice! I just had an excuse to try Mojo via MAX inference, and it was pretty impressive. Basically on par with vLLM for some small benchmarks, with a bit of variance in TTFT and TPOT. Very cool!
Odin’s FAQ says it’s because closures require automatic memory management. [0] But if that’s the case, why do languages like C++ and Ada [1] support closures?
I can't speak for Ada, but C++ closures require that you explicitly specify what's captured from the enclosing environment and how (i.e. by copy or by reference). That capture is also unsafe, which relates to the issue of automatic memory management: for instance, if a function returns a closure that has captured a reference to something in the function's stack frame, stack semantics mean that value will be destroyed. I'm sure C++ developers are fine with them but - having not used them in anger - they sound quite brittle.
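To make the brittleness concrete, something like this compiles without complaint but is undefined behaviour (sketch; names are made up):

    #include <functional>

    std::function<int()> make_counter() {
        int count = 0;
        // Captures `count` by reference; `count` is destroyed when
        // make_counter returns, so the closure holds a dangling reference.
        return [&count] { return ++count; };
    }

    int main() {
        auto f = make_counter();
        return f(); // undefined behaviour: uses the dead stack slot
    }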
The answer in the Odin FAQ maybe could be expanded to say "many uses of closures require automatic memory management, and while Odin could add some kind of support for closures to handle the uses that don't, it'd add too much complexity and too much potential for bugs to be worthwhile". Not to speak for gingerbill, here.
> I’d argue that actual closures which are unified everywhere as a single procedure type with non-capturing procedure values require some form of automatic-memory-management. That does not necessarily [mean] garbage collection nor ARC, but it could be something akin to RAII. This is all still automatic and against the philosophy of Odin.
C++ doesn't have this feature either. A C++ closure does not have the same type as a regular C-style function with the same argument types and result type. The types of functions and closures are not unified.
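Concretely (sketch): a capture-less lambda converts to a plain function pointer, a capturing one doesn't, and the only way to give them one type is type erasure via something like std::function (which may allocate):

    #include <functional>

    int main() {
        int x = 42;

        auto plain     = [](int a) { return a + 1; };  // no captures
        auto capturing = [x](int a) { return a + x; }; // captures x by value

        int (*fp)(int) = plain;          // OK: capture-less lambda -> function pointer
        // int (*fp2)(int) = capturing;  // error: no such conversion exists

        // Unifying the two requires type erasure, which may heap-allocate:
        std::function<int(int)> f1 = plain;
        std::function<int(int)> f2 = capturing;
        return fp(1) + f1(1) + f2(1);
    }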
And C++ does have RAII, which the author feels is a kind of automatic memory management and against the philosophy of Odin.
So C++ doesn't have the feature G.B. says is impossible. I don't know enough to comment on Ada.
What Bill wrote, on his own web site, about his own language is simply this:
> For closures to work correctly would require a form of automatic memory management which will never be implemented into Odin.
I suppose you can insist Bill thinks "correctly" means all that verbiage about unified types - but then a reasonable question would be: why doesn't Odin provide these "not correct" closures people enjoy in other languages?
RAII is entirely irrelevant; the disposal of a closure over a Goose is the same as the disposal of a Goose value itself. In practice I expect a language like Odin would prefer to close over references, but again, Odin is able to dispose of references, so what's the problem?
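In C++ terms (sketch, silly names mine): a closure that holds a Goose by copy is torn down exactly like a plain Goose value, with no extra machinery required:

    #include <cstdio>
    #include <string>

    struct Goose {
        std::string name;
        ~Goose() { std::printf("goodbye %s\n", name.c_str()); }
    };

    int main() {
        {
            Goose g{"gertrude"};
            auto honk = [g] { std::printf("honk from %s\n", g.name.c_str()); };
            honk();
        } // both the local Goose and the copy captured inside `honk` are
          // destroyed here, by the exact same mechanism
        return 0;
    }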
Rust shows that closures don't require automatic memory management, unless you consider Rust's static ownership analysis to be "automatic memory management", which I suppose you could, but it's all at compile-time, not runtime. Of course, it's fair if he doesn't want his language to have an ownership system, but ownership systems honestly aren't very complex, they're just different.
You really don't think ownership systems are that complex, kibwen?
I just watched a recent Polonius talk ( https://m.youtube.com/watch?v=uCN_LRcswts ) and came away very impressed with the difficulty of implementing (or even modeling) the borrow checker. Or maybe you're referring to something else?
Ownership and the borrow checker are two distinct things; putting these concepts together is the premier novelty of Rust. "Ownership" is this: an analysis pass that enforces single ownership of values (call it "affine types" if you want to be fancy, but it's an extremely simple analysis), along with a mechanism to allow types to opt out of single ownership and allow multiple ownership/implicit copying (what Rust calls the `Copy` trait). That's all it is, and it automatically gives you Rust's trick of "automatic static memory management". It's much simpler than a borrow checker, which would also require a notion of generics and subtyping, to say nothing of lifetimes (or a control flow graph, which is what you want if you want a good borrow checker). Such a system of ownership without a borrow checker could even be memory-safe, if your language doesn't allow unmanaged pointers (though it wouldn't be as efficient as Rust, would involve more copying, and makes for slightly more annoying APIs).
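If a C++ analogy helps (and it is only a loose analogy, not how Rust is implemented): single ownership is roughly what a move-only type gives you, except Rust rejects the use-after-move at compile time instead of leaving a moved-from husk around:

    #include <cstdio>
    #include <memory>

    // Loose analogy: unique_ptr is move-only, like a non-Copy type in Rust;
    // int is duplicated implicitly, like a Copy type.
    void take(std::unique_ptr<int> p) { std::printf("%d\n", *p); }

    int main() {
        auto owned = std::make_unique<int>(7);
        take(std::move(owned));  // ownership moves into take(); freed there
        // *owned;               // Rust makes this use-after-move a compile error;
        //                       // C++ just leaves a null pointer lying around

        int copyable = 7;
        int also = copyable;     // "Copy"-like types are duplicated implicitly
        return also;
    }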
References are the purview of the borrow checker, naturally, but (assuming you care about memory safety) you might be able to get away with a system that doesn't let you store arbitrary references. Conceivably you could have some kind of simple-ish scope-based analysis that allows the compiler to transparently elide copies when passing owned values to functions deeper in the call stack. A mechanism to reify the notion of strictly-scoped values in the language (like Python's `with` statement) could probably go quite far here, letting the compiler know that a piece of data is anchored to a given scope that strictly outlives some child scopes, so that a reference to that data remains valid for those child scopes. You'd still have to have Rust's notion of aliasing XOR mutation, but if references are first-class values this might be tractable, because you get much of this for free from an ownership system (mutable references are owned, immutable references are copyable), although Rust has several other ways it bends over backwards to make this work nicely (e.g. compiler-inserted reborrowing). If you wanted to avoid that complexity, you could start by having only immutable references and allowing the inner data to be copied when you want to mutate it.
Conceptually, think about it. Coroutines require copying not only the variables but the function itself, outside of the lifetime of the parent function. (Or at least pointers thereto.) I would like to hear about a language with static coroutines but I am not aware of any. Even Rust doesn't do it, they just make you pass the lifetime around.
The parent is talking about closures, not coroutines. Rust does have closures that don't require a GC or passing lifetimes around (there's not even any syntax for putting a lifetime on a closure).
Rust closures can have lifetimes, and the lifetime of a closure is restricted to the lifetime of the shortest lived reference that it captures. But that just means the compiler protects you from having dangling pointers in your closure. And I don't think you can get much better than that without a runtime garbage collector.
A Rust closure "has" a lifetime in the sense that the anonymous underlying struct that Rust creates to hold the closed-over values has a lifetime, but that's not a requirement to have closures without a GC. Like C++, Odin doesn't care about being memory-safe, so you can just say "be careful doing anything that involves pointers" (which is how the language already works), or if you wanted to be a little safer you could just forbid closing over anything that contained a pointer.
Automatic memory management isn’t necessary for closures, but if you don’t have it then it is easy to end up with dangling pointers (e.g. you capture a local variable by reference, return the closure object, and then call the closure and use the referenced variable). This is a problem in C++ but isn’t in Ada, due to Ada’s stricter scoping rules. Capturing variables by value is safe (assuming the captured values contain no dangling references themselves). It might require allocation if you want type-erased function objects (like std::function in C++), but this could be done using explicit allocator and deallocator functions, with some help from the compiler to determine the closure object’s size.
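For example (sketch, mirroring the C++ case): the by-value capture survives the enclosing frame just fine; it's only the by-reference capture upthread that dangles. The type-erased wrapper is where allocation can come in:

    #include <functional>

    std::function<int()> make_counter_by_value() {
        int count = 0;
        // `count` is copied into the closure object, so the closure remains
        // valid after this function returns; std::function may heap-allocate
        // to store it, which is the allocation mentioned above.
        return [count]() mutable { return ++count; };
    }

    int main() {
        auto f = make_counter_by_value();
        f();        // 1
        return f(); // 2 -- no dangling state
    }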
I use Racket. It has a lot of standard libraries and also packages that you can download.
Using only the standard libraries, I made a few projects:
* Open a GUI to select a file, un-tar-gzip it, parse one of the expanded files as XML, edit the XML, and tar-gzip everything again. (This is a common pattern: many applications now save their data as XML compressed with tar and gzip.) I made an executable and sent it to my coworkers so they can just run it.
* A bot to reply to emails, using IMAP and SMTP. It reads the email, scrapes some data from one of my webpages, and sends it in the reply. The bot can only handle the easy questions, but in my case that's like 90% of them, so it saves me a lot of time.
* I used the web server so the T.A.s in my part of the university can fill in their preferences for the courses they want to teach. It handles around 500 users on an old computer without problems.
> It does not have syntax-rules or any of its friends
This is still super interesting of course, but why use Lisp at this point and not Lua or Python? I mean this earnestly as a daily Scheme user. Macros are 90% of what makes Lisp interesting.
Wicked, which is fully FOSS for non-console targets, I had some fun with earlier this year. It's a great, very modern, full-featured D3D-or-Vulkan renderer under active development with a lively yet cozy-sized community (including a handful of folks taking care of the Linux side). The API is easily learned and can be driven from your game code in Lua or C++ or a mix, as you see fit, with Lua scripts also being executable in Wicked's editor app (Windows/Linux).
My hunch is that by sticking to just Windows/Linux/consoles and firmly, decidedly skipping other cross-platform targets such as Apple's Metal, mobile OpenGL, and WebGL/WebGPU/WebAssembly, it stays maintainable, relatively unbuggy (not hundreds of bug-tagged open GitHub issues), and capable of ongoing rapid feature iteration.
What do you mean by standalone renderers? There's The Forge [0], which was used for Starfield. Also, there's nothing stopping you from taking a popular engine and using it as a renderer only. The Oblivion remaster runs the original game code underneath and uses Unreal Engine 5 for rendering. I assume the Diablo 2 remaster did something similar, because you can seamlessly switch between old and new graphics.
I mean yes you can do that, but it’s pretty hard since the engine expects you to work within its framework.
Not saying it’s impossible of course, just annoying.
Also, if I’m not mistaken, The Forge is just a cross-platform graphics wrapper? You still need to write a glTF renderer and all that yourself.
Yeah, I guess so, lol. I don't think it's super updated though. Pretty sure Red Eclipse has long since forked the Cube engine and done its own thing with it.