
DOTS stands for “Data-Oriented Technology Stack”, a collection of Unity technologies that includes their Entity Component System framework, the Burst compiler, and their Job system.

Each of these technologies has its own description, but here is a very quick (and perhaps oversimplified) explanation:

ECS is a framework where you have entities that themselves have one or more components, acted on by systems. The way these components are laid out in memory and acted upon by systems is typically more cache friendly than traditional game engine design with heap-allocated GameObjects that are themselves a mess of pointers (Structs of Arrays vs. Arrays of Structs).
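To make the layout point concrete, here is a minimal sketch in Rust rather than Unity's C# (the types `EntityAoS` and `WorldSoA` are made-up names for illustration):

    // Array of Structs (AoS): each entity's fields are interleaved in memory,
    // so a system that only reads positions drags velocity/health into cache too.
    #[allow(dead_code)]
    struct EntityAoS {
        position: [f32; 3],
        velocity: [f32; 3],
        health: f32,
    }

    // Struct of Arrays (SoA): each component lives in its own dense array.
    struct WorldSoA {
        positions: Vec<[f32; 3]>,
        velocities: Vec<[f32; 3]>,
        healths: Vec<f32>,
    }

    impl WorldSoA {
        // A "movement system": streams through two contiguous arrays and never
        // touches `healths` - the cache friendliness described above.
        fn integrate(&mut self, dt: f32) {
            for (p, v) in self.positions.iter_mut().zip(&self.velocities) {
                for i in 0..3 {
                    p[i] += v[i] * dt;
                }
            }
        }
    }

    fn main() {
        let mut world = WorldSoA {
            positions: vec![[0.0; 3]; 4],
            velocities: vec![[1.0, 0.0, 0.0]; 4],
            healths: vec![100.0; 4],
        };
        world.integrate(1.0 / 60.0);
        println!("{:?}", world.positions[0]);
    }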

Burst allows you to annotate functions, jobs, and systems written in an unmanaged subset of C# so that they compile to highly optimized instructions for better performance.

The job system allows you to schedule jobs with dependencies in a useful way that helps enable parallelism without having to deal with threading primitives manually. Jobs can also be Burst-compiled themselves and used in a non-parallel manner if you don't have something easily parallelizable.
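As a loose analogy only (plain Rust threads rather than Unity's job scheduler and worker pool), a job can declare a dependency on another by holding its handle:

    use std::thread;

    fn main() {
        // job_a runs on its own thread.
        let job_a = thread::spawn(|| (0u64..1_000).sum::<u64>());

        // job_b depends on job_a: it joins job_a's handle before doing its own
        // work, roughly like scheduling a job with a handle as its dependency.
        let job_b = thread::spawn(move || {
            let a = job_a.join().expect("job_a panicked");
            a * 2
        });

        println!("result: {}", job_b.join().expect("job_b panicked"));
    }

The real job system schedules many such jobs onto a fixed worker pool instead of spawning a thread per job; this only illustrates the dependency-declaration idea.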


While what you are saying is technically true, Chinese ships would be allowed to exercise their right of Transit Passage under UNCLOS through the Strait of Gibraltar.


China is not a signatory of UNCLOS. See the South China Sea debacle for an easy answer as to why.


To my knowledge, China is a signatory to UNCLOS, but has disputes around its "islands" in the South China Sea and their relation to the EEZ. I acknowledge that China's relationship to UNCLOS is, at a minimum, complicated and rapidly evolving, but I dispute that they do not have a right to transit passage. Or to be more specific, I would put forward that they have a plausible argument to claim transit passage.

The United States has not ratified UNCLOS, yet regularly claims the right of Transit Passage. This is one of the reasons why Iran claims that the United States cannot enter Iranian territorial waters (TTW) while making a Strait of Hormuz transit: because the US has not ratified UNCLOS, Iran's position is that the US cannot claim transit passage. For the United States (or any Western nation) to argue that China cannot claim Transit Passage would lend weight to Iran's argument, which, as you can imagine, they would not want to do.

I do not want to make any assumptions about your specific views on this matter - you may hold the opinion that China could not claim transit passage. However, I wanted to interject some perspective:

1. That may not be universally agreed upon.

2. Specifically, the United States and its allies may not make that argument because it would put them in a negative position in other international disputes.


I am the main author behind the JS/TS integration for emacs-ng. I still think it is one of the cooler things I have done that people have taken note of.

I am a game programmer by trade, and I have experience embedding JS into projects from a previous engine I worked on, where JS was the primary game scripting layer.

I love emacs and I love lisp in general - I added JS to emacs mainly to find out if I could do it. I also had a hope that adding JS could expand emacs usage to people who don't know elisp, and have JS be a "gateway" into elisp.

I think that the JS integration is a huge testament to the flexibility and quality of emacs and elisp.

As I have gotten older, I've had less and less time for open source, but I don't consider this project abandoned. I still want to upgrade us to the latest deno. My previous upgrade attempt (which you can see as a draft PR at the time of this comment) was a little too ambitious - I tried to move us to a more multithreaded approach, but I think I need to work on this more incrementally.


As a game dev, Factorio has always been a title that I would absolutely love to see the original source code for.

Also, for anyone whose first question was “how is this legal”:

“When asked about the legality of this whole endeavour, they showed great understanding and allowed the project to be released, provided it won’t be used for commercial purposes.”


The Factorio team is pretty chill about stuff, generally, which is why they make the game so moddable in the first place. The only time I remember them having to take any legal action was to deal with someone reselling stolen keys or something. It's understood that anything cool the community does is good for Factorio.

> i would absolutely love to see the original source code for

As someone who's seen the source code, I'd say it's relatively good code, but nothing particularly amazing. C++ with multiple inheritance, game objects doing method calls to each other in multiple phases, and a good dose of "maybe not the best way to do this but it works and we're not changing it now" code. What keeps it all running as well as it does is a comprehensive set of automated tests, and Rseding getting on your case any time you make a PR with less-than-optimal code. The interesting bits are the algorithms, which are covered by the Friday Facts.


> What keeps it all running as well as it does is a comprehensive set of automated tests

"Let's game it out enters the chat"

Really though, super cool to hear a perspective from someone with inside knowledge!


I'd be impressed if they managed to find any bugs, it's probably one of the least buggy games I've encountered (the speedruns for example contain no glitches at all).


This is underselling how un-buggy Factorio is. Just read the bugfix section of the previous releases: https://wiki.factorio.com/Version_history/1.1.0

“Fixed a crash when downgrading ghost of assembling machine when target machine cannot craft recipe due to missing pipes.”

“Fixed a crash when removing modded pipe-to-ground that connects to a shifted pipe-to-ground.”

“Fixed that solar panels on multiple surfaces would all produce electricity based on a daytime of one of the surfaces when they were part of a single electric network with a script created wire between surfaces.”

“Fixed a crash when moving blueprint book to blueprint library when there is also another book that will get under the cursor and tooltips are showing.”

“Fixed a crash related to teleporting spider vehicles with burner energy sources between surfaces.”

And so on. There was a blog post a while back that claimed they had fixed over 8,800 reported bugs, and that after more than four years of development they had reached 0 outstanding bugs.


Particularly notable is that many of the bugs they're fixing can only be triggered by mods. (This is the case for at least half of the ones in your example -- "modded pipe-to-ground" is obviously mod-only, and neither "solar panels on multiple surfaces" nor "spider vehicles with burner energy sources" are features present in the base game.)


There are some bits of Factorio source code on github — https://gist.github.com/Rseding91

and https://old.reddit.com/r/factorio/comments/13bsf3s/technical... is fascinating reading (things like: they used doubly linked lists, and multiple inheritance)


Factorio has a great dev blog. They don't show the code, but they go pretty in-depth on some challenges they have to overcome.

https://www.factorio.com/blog/


In fact, this community (Alt-F4) started after the weekly FFFs stopped, because people were missing them so much!


Naively, I had the inverse question after reading that: how would this be illegal? Why must this be a labor of love? Use of Factorio IP seems limited to visual inspiration and file format parsing, and it's clearly transformative (in the legal sense of the word).


Sometimes I think we're far too obsessed with legality. The developer is good to the community, the community is good to the developer. This seems like a much more productive relationship than arguing over what is legal.


"Visual inspiration" is generous. The models are a direct 1:1 copy of the in-game sprites, which sounds like a "derivative work" to me. IANAL.


In this instance, it would be less about "look and feel" and more about "would a person assume this project is done by the original Factorio developers" — are they misleading the audience? They copied much of the UI from the Factorio site, for instance — so if you wanted, you could make the argument the intent is to mislead.


I think you’re confusing trademark and copyright. Trademark is about confusing the public. Copyright is about derivative works. It seems pretty clear to me that this is a derivative work, due to its direct copying of Factorio sprites.


The website UI is by the Alt-F4 team, not the FUE5 team. It's like seeing this discussion and then complaining that FUE5 'copied' Factorio's colour scheme, thinking they are also behind Hacker News, lol.


I mean, nothing is clear in terms of "transformative" and never has been, but I absolutely agree with the overall point.


For anyone curious, Smedley Butler died in 1940[1]

[1] https://en.wikipedia.org/wiki/Smedley_Butler


Which was probably good for his reputation. Some people die at the right time. Had he lived a bit longer he might have come out against entry into WWII, which would not have aged well.


He was an Anti-Fascist, but wanted to prevent a war with the Fascists. In fact, he was so Anti-Fascist that the US Government apologized to Benito Mussolini on his behalf, and court-martialed him over his Anti-Fascist comments about Mussolini.

Source: New York Times.

https://www.nytimes.com/1931/01/30/archives/united-states-ap...


Sounds like he won a moral victory over the US Government and the Fascists, and then the US Government (ever full of contradictions) later helped depose the actual Fascists through the war he wanted to avoid.


Also, record of the apology from the Department of State.

https://history.state.gov/historicaldocuments/frus1931v02/d6...


> He was an Anti-Fascist, but wanted to prevent a war with the Fascists

That's rich :) For all intents and purposes you're supporting it; you just like to support it while feeling morally superior.


To anyone just passing through the comments, the Wiki article on Butler is a worthy read. He was an interesting person.


The YouTube series Knowing Better had a very good episode about Butler and this essay too: https://yewtu.be/watch?v=Lg-nUy2DalM


I appreciate the time that the Deno team has spent maintaining the separation between deno/deno_core/deno_runtime. The deno codebase is well organized and overall high quality.

One issue I've run into as someone who has embedded these libraries into another project is that a lot of really nice features are only available in deno proper (everything under their cli/ directory https://github.com/denoland/deno/tree/main/cli) rather than in deno_runtime or deno_core. Specifically, TypeScript is implemented in cli/.

A user of runtime or core could just reimplement those features piecemeal; however, I ended up forking deno and adding a lib.rs that exposes the code in cli/ as a library, and it has worked out pretty well for my needs.
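For anyone curious what the bottom layer looks like, a minimal sketch of embedding bare deno_core (API shapes from the Deno 1.x era; deno_core's interface changes frequently between releases, so treat the exact signatures as illustrative):

    use deno_core::{JsRuntime, RuntimeOptions};

    fn main() {
        // A bare JsRuntime: V8 plus deno_core's op machinery, but none of the
        // cli/ niceties (no TypeScript, no module graph, no Deno namespace).
        let mut runtime = JsRuntime::new(RuntimeOptions::default());
        let result = runtime
            .execute_script("<embedded>", "1 + 2")
            .expect("script failed");

        // Reading the result means dropping down to the raw v8 scope API.
        let scope = &mut runtime.handle_scope();
        let value = result.open(scope);
        println!("1 + 2 = {}", value.to_rust_string_lossy(scope));
    }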

Edit: Elsewhere in the thread the author of this blog post linked another post where they describe getting TS without having to use cli - something I will look into!


Agreed, I felt the same when toying with this concept for the first time. I initially thought I could just build directly off of deno_core, but quickly learned there's a lot you'll have to implement. Much of what I perceived as deno is really just the cli crate and other dependencies that feel somewhat "internal". Some examples I ran into of things you'll need to implement by referencing the cli were:

1. module loader: https://github.com/denoland/deno/blob/main/cli/module_loader...

2. runtime interfaces (workers): https://github.com/denoland/deno/blob/main/cli/worker.rs

3. invocation of the runtime: https://github.com/denoland/deno/blob/main/cli/tools/run.rs#...

4. permissions: https://github.com/denoland/deno/blob/main/runtime/permissio...

You can wire up some of these other packages or potentially even the CLI itself to avoid re-implementing much of the runtime over again, but it's a heavy dependency for a few crucial utilities that feel like they could exist in more lightweight runtime crates.
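On point 1 specifically, deno_core does ship stock loaders (e.g. FsModuleLoader) that you can wire in before writing your own. A hedged sketch, again with Deno 1.x-era signatures that have since shifted, assuming tokio as the async runtime ("main.js" is a hypothetical entry point):

    use std::rc::Rc;

    use deno_core::{FsModuleLoader, JsRuntime, RuntimeOptions};

    #[tokio::main(flavor = "current_thread")]
    async fn main() -> Result<(), deno_core::anyhow::Error> {
        let mut runtime = JsRuntime::new(RuntimeOptions {
            // FsModuleLoader only resolves local files; real embedders wrap it
            // (or write their own) to add remote imports, transpilation, etc.
            module_loader: Some(Rc::new(FsModuleLoader)),
            ..Default::default()
        });

        let specifier = deno_core::resolve_path("main.js")?;
        let module_id = runtime.load_main_module(&specifier, None).await?;
        let evaluation = runtime.mod_evaluate(module_id);
        runtime.run_event_loop(false).await?;
        evaluation.await??;
        Ok(())
    }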

At the end of the day, I feel like most people potentially want to offer APIs through the runtime as opposed to a package or framework (think Netlify edge functions, Shopify functions). I wonder if they are reserving this interface for Deno subhosting customers rather than making it more ergonomic for a self hosted runtime. Kind of similar to how the runtime crate injects Deno's Web Platform JS implementations: https://github.com/denoland/deno/tree/main/ext


Can you tell us a bit about a project you've embedded it in? Was it done to provide a scripting interface to an application? Something else?


It was an emacs fork that used TypeScript/JavaScript alongside elisp. I am currently in the process of upgrading it to deno 1.30.3.


This is a really cool project!

One thought that I have: would there be an issue sending this file in an insecure context where a MITM is possible? Alice sends Bob her HTML file over an unsecured medium; Jane intercepts this traffic and gives Bob a modified HTML file that will report the password back to Jane. Jane can set it up so that the browser still displays the local file as its source, but makes an HTTP request in the background to phone home to Jane's server.

My scenario only works if Bob doesn't inspect the page source before entering their password, or if the modifications are sufficiently obfuscated.


> Jane intercepts this traffic, and gives Bob a modified HTML file

I’d expect better from Jane. This is the kind of shit Mallory would pull.


Or Eve. But Jane, I would have never thought.


Yes. This is essentially a binary with the browser acting as the OS. And if it's hosted on a server it's a binary which may change at any moment. There isn't even a mechanism by which to freeze its hash and alert you if it changes - a browser would have to provide that functionality.

To be fair, the exact same problem afflicts something like Protonmail, so calling it a toy may be too harsh.


Maybe it could put a checksum in the url?
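Something along those lines could work, as long as the check happens outside the page itself (the browser won't enforce it for you). A minimal sketch of producing such a pin, in Rust with the sha2 crate (the file name is made up):

    use sha2::{Digest, Sha256};
    use std::fs;

    fn main() -> std::io::Result<()> {
        // Hash the exact bytes of the HTML file Alice sends; Bob recomputes
        // this and compares it with the checksum shared over another channel
        // (or stuck in the URL fragment, per the parent's idea).
        let bytes = fs::read("vault.html")?;
        let digest = Sha256::digest(&bytes);
        println!("sha256: {:x}", digest);
        Ok(())
    }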


100% agree. This is why I’m always a bit dubious of in-browser decryption. At any moment a little extra snippet of JS can be added to stealthily leak my secrets. There’s no great mechanism to ensure the JS I’m running is always the same JS I ran before. Compare this to a desktop app where I always know I’m running the exact same binary I ran last time. (And if I’m not then I’m probably already pwned anyway)

For this persons use case though, assuming they’re not a person of interest to any “threat actors”, I wouldn’t be too worried.


This isn't a 100% solution but you can pin external JavaScript includes in HTML using subresource integrity (https://developer.mozilla.org/en-US/docs/Web/Security/Subres...).

This doesn't help if someone can mess with the HTML though.


It may be too much for moms, but maybe the HTML document includes the content, while a Chrome extension does the work and presents the secret in the extension popup.

- Attack surface drastically reduced: only one MITM matters now, the extension installation.

- Requires great extension UX, to help dads know where to click.

Less portable than just a document, but perhaps a nice middle ground.


> too much for moms

I assume you are referring to the OP's post there but I did a double take because I know some extremely technical moms.


It's important to be precise and clear-thinking when articulating problems, lest we identify the wrong one but believe it's the right one.

Your issue is not with the program being in-browser, but rather it being an online one that gets ~continually re-fetched and immediately trusted without verification.


One counter-measure to this is to run the decryption without an internet connection. This is easy enough to explain to a hypothetical non-technical mother.


Yeah. The HTML must have a signature, and Bob must have Alice's public key to verify it... and here we are again. :)


What are your feelings on the direction of game engines across the industry right now?

What is an innovation you hope you see wider adoption of in the space?

Thank you for your efforts!


Some broad strokes on hot topics:

Raytracing: Exciting, useful, pretty. We need to support it asap. A pull request was recently opened to add initial support to Bevy's low level gpu abstraction (wgpu), so I'm excited!

Nanite-style rendering: Very cool. Not yet sure it's the best approach for Bevy, but I'd love to explore this!

Proprietary Engines Taking Cuts Of Sales: Game developers should be investing their time in open tooling without contracts or restrictive licensing. We have the experience. We should own our own tools. And there are so many more of us than there are Unity or Unreal developers. Proprietary tooling is a massive loss of potential.

AI Generated Art and Code: empowering and definitely the future, but currently an ethical disaster. Licenses are ignored. Artists' work and identity are assimilated without consent.


> Game developers should be investing their time in open tooling without contracts or restrictive licensing

Unfortunately, this is not possible on consoles, because of NDA issues. Even Bevy can't have open licensing there: https://github.com/rust-gamedev/wg/issues/90 (Godot also can't: https://docs.godotengine.org/en/stable/tutorials/platform/co...)

I expect that Bevy ports to consoles will take a cut of sales rather than a flat fee :(


From what I've heard, with Godot there are basically two paths you can take there. Pay another studio that has experience getting the engine to run on consoles to share their engine modifications with you, or even to port your game to the console for you; or have your own developers take the time to get the engine working on the consoles. For a studio without many engine programmers, the former is probably more likely to happen, and they may either take a cut or charge a flat fee, but the option to do the engineering yourself is always on the table.


I think that nativecomp represents a huge leap forward for emacs. I continually give kudos to Andrea Corallo and the entire team for making this a possibility.

I think that the next major leap for emacs needs to involve the garbage collector and allocation logic. I think that a large class of performance optimizations would be possible with an improved GC, including improvements to emacs' existing threading capabilities.


I think nativecomp doesn't really improve what makes emacs "slow".

Emacs is a single-threaded, synchronous, blocking UI system. Sometimes when my autocomplete, C++ checking, git checking, auto-formatting, ... run, the editor freezes for multiple seconds.

All this stuff runs in the UI thread. Braindamaged.

The other thing that makes emacs slow is remote editing. Emacs TRAMP uses one ssh connection per command. VS Code spawns a remote server and asynchronously updates the remote's state. VS Code's remote editing experience is as good as the local one, but the emacs experience is super laggy, with recurrent freezes of multiple seconds, etc. Particularly when navigating the filesystem on the remote in any modern emacs way (helm, ido, etc.). Or when auto-save happens and everything blocks for multiple seconds, etc.

---

I don't really care if native compilation makes single-threaded code faster if that single-threaded code runs in the UI thread and blocks the editor for 10 seconds. Sure, now maybe it blocks for 9 seconds, because you can't do much about the 9 seconds you have to wait for some IO operation to complete. But that still sucks.


A lot of this is down to configuration. For example, you say “Emacs TRAMP uses one ssh connection per command”, but that’s only true if you’re not using the ControlMaster SSH option. Add this to your ~/.ssh/config file:

    ControlMaster auto
    ControlPersist yes
    ControlPath ~/.ssh/control/%C
Then run mkdir -p ~/.ssh/control/, if you haven’t already. Why this isn’t the default I don’t know, but once configured correctly you won’t have any problems from TRAMP.


I have this, yet I still see emacs sending individual I/O operations over SSH and blocking on their completion.

Everything that requires remote file system interaction is super slow. Emacs just seem to be doing lots of individual I/O operations.

VScode has a remote server that batches them before updating over the network.

The client sends operations to the server and queries the server for updates asynchronously. The server can perform multiple operations, and batch them into one response.

Stuff like "regex all files in this directory" is performed by emacs as "list all files in the directory, wait; for each file, regex that file, wait". VSCode just sends the "regex all files" request and the server handles everything locally, sending one update back.

The difference is going from < 100ms for VSCode vs 5 seconds for emacs. Night and day.

The same happens for pretty much every modern feature (git status, diffs, blames and updates, autocompletion, correctness checks / intellisense, etc.).


This varies from command to command, but M-x grep literally just runs the program grep on the remote machine. It doesn’t enumerate the files and search each one individually. If you’re using something other than M-x grep, then sure, it might be written badly.


The command I use is `M-x find-grep`.

I also use `helm`, `ido`, etc. to navigate the file system and they are all super slow.


You might have already tried this: I’ve had fantastic performance starting a remote emacs server and using emacs (often in terminal mode) through ssh. It certainly has a one-time cost: set up your local terminal for all keys to go through, and sync your init files. I still use tramp for certain rare cases, but most work happens on 3 remote and 1 local machine with four different emacs servers running, plus a couple of additional non-development machines that I simply ssh into from within emacs shells.


I was going to say, there is a way to do the server side logic with Emacs... ;-)


See my comment about TRAMP and file system operations from a couple of months ago for some of the underlying issues.

https://news.ycombinator.com/item?id=25626412


> The same happens for pretty much every modern feature (git status, diffs, blames and updates, autocompletion, correctness checks / intellisense, etc.).

Could it be a configuration issue, at least in part? Because it does start to sound like it. For example, git status has never been slow for me in Emacs, except for huge diffs with lots of changes in lots of files. Same for diffs. I know that autocompletion depends on the language and the tools used for it, and on what things are checked for possible autocompletion entries. It is possible to limit autocompletion to only use some sources, or to make it use a language server for some languages. When developing Rust, Python or TypeScript in Emacs, I did not experience slow correctness checks (I assume you mean type checks and unused-variable kind of stuff).


> Why this isn’t the default I don’t know (…)

It is; it has been for a while. Check the TRAMP FAQ for details on when it is used automatically (grep for "ControlMaster"): https://www.gnu.org/software/emacs/manual/html_node/tramp/Fr...


I guess things are getting better.


> Why this isn’t the default I don’t know, [...]

I searched a little and found the following blog post, which claims that there are issues with it when you do heavy data transfers, for example via many rsyncs at the same time:

https://www.anchor.com.au/blog/2010/02/ssh-controlmaster-the...

I did not test it myself. This would seem like an appropriate reason to not make it the default.


I suppose that’s a good point. With multiple large simultaneous flows, multiple TCP connections probably will be better, unless SSH goes to all of the trouble to reimplement all of TCP’s nicer features. And at that point you should just be using a real VPN anyway.

I wonder if ssh shouldn’t have a sensible default like ~/.ssh/control/%C for ControlPath, so that you could just turn on ControlMaster and have it just work. Then TRAMP could set the ControlMaster option on the command line when it runs ssh. At least then people wouldn’t have to mess with their SSH config, and they wouldn’t have to consider whether it will break something else.


Emacs is definitely capable of multithreading. Still, using a single thread might actually be a boon because it simplifies the programming model. Right now, Elisp code can work under the assumption that there is only one thread accessing editor state at a time.

Disentangling this is probably going to be difficult, but the Emacs folks are probably smart enough to come up with something that matches with the spirit of Emacs. Libuv and callbacks might be a way, but cooperative scheduling of user threads on backing threads together with structured concurrency, like Java's Project Loom is attempting, are another way.

The emergence of the LSP protocol is a very promising development in the architecture of editors and IDEs. Many things can and maybe should be passed off to background processes to handle, both to expand the feature set of every editor out there, and to increase speed and stability.


Outsourcing things step-by-step to lightweight processes seems like a good incremental way to disentangle. Perhaps one could protect access to resources not initially invented by Emacs packages (Emacs core?) and make Emacs query a separate process, without a package knowing about it, so that all communication is channeled to the separate process. Then deprecate direct access to the resource at some point, offering a more direct way of communicating with the separate process. Finally, remove the state from the main process. I think people more knowledgeable about Emacs internals would need to judge this idea.

Is there anything in LSP's communication that makes it stand out from any other protocol? (I really mean the protocol, not the infrastructure on which it is used.) Beyond that, having things ready and running in a separate process is quite an old idea; this time it is applied to editors, completion, type checking and other features. In general this simplifies making use of multi-core systems. Watch any Joe Armstrong talk about it. The question is why it was not thought of before, or perhaps, if it was, why it was not done before.


In terms of LSP, I don’t think there’s anything particularly novel about it. It’s really only better than other similar protocols in the sense that it is gaining in popularity. I.e., the biggest thing that was missing is consensus on a shared protocol.

In terms of the protocol itself, I suspect it’s actually worse than a lot of other ones.


> I suspect it’s actually worse than a lot of other ones.

Choosing JSON for communication certainly could have been a wiser decision. Moreover, it sends entire file contents across for some queries; what a waste.


It indeed isn't. Some Emacs packages use that approach, and others are well-known to employ Unix utilities to do the legwork. Maven and Gradle both have daemon modes. Unfortunately, all these integrations are custom, so there is no chance for it to take off. Ycmd is also worth mentioning.

What really made the difference is that VS Code uses LSP to integrate languages. Instead of dozens of ad-hoc integrations, there is one unified way to integrate a language now, and Microsoft's backing ensured there is an ecosystem for it from day one.

Not all is rosy though, mind you. LSP uses JSON/RPC after all. Also, we are touching the Microsoft world here, which means that standard adherence is not necessarily a given[0].

[0] https://www.reddit.com/r/vim/comments/b3yzq4/a_lsp_client_ma...


SLIME is another good example.


Indeed. It does not need multithreading precisely, but at least asynchrony, which would allow side tasks not to block the main function of editing text. I have low familiarity with emacs internals, but I assume there is an event loop, which is a basic form of this.

Perhaps this needs to be used more, or another mechanism needs to be developed allowing tasks to be interrupted/resumed in priority service to the editing functions. A key example may be mode line refresh: this certainly must happen on the main thread, but it ought not to block other more important items.


With a few exceptions, I don't think moving stuff into the background promotes speed and stability. It makes it way easier to get away with reductions to both.


> All this stuff runs in the UI thread. Braindamaged.

Calling this "braindamaged" is hardly fair; I'm sure this decision was made ~35 years ago when it made more sense.


Forgive my ignorance, but why would doing all of the application work on a single thread make more sense 35 years ago?


The first consumer processors with multiple cores came out around 2005. Before there were multiple cores in consumer machines, there was no reason to implement concurrency in a way that would scale to multiple cores.


Not to mention, back when the Emacs we know today was created, there weren't many GUI platforms available in the first place. First public release of GNU Emacs - the one with Emacs Lisp - was in 1985, so the Emacs we know is ~3 years older than X11. Emacs added GUI support in 1986[0] - before X11 was a thing (though X itself existed since 1984).

Emacs started as a terminal app, the GUI was added as an afterthought, by pretending it's a TTY. The concept of a "UI thread" wasn't on Stallman's mind back then. It continued to evolve from there; fast forward 35 years, and now we're living with a GUI program that still thinks it's writing to a teletype[1].

--

[0] - https://stackoverflow.com/questions/10084842/first-gui-versi...

[1] - https://m.facebook.com/nt/screen/?params=%7B%22note_id%22%3A...


Isn't the issue more concurrency than parallelism, though? Concurrency works just fine with a single core thanks to kernel-level juggling.

They would've been wildly forward-looking if they had figured this out 35 years ago though.


VSCode's remote editing simply does not work on unstable networks. Tramp works like a champ in those situations.


I agree about this issue; maybe in the distant future this could be fixed as well, but where will VSCode be at that stage? Probably even more miles ahead, due to MS funding. When there is good async support in Emacs, a lot of packages will need to be rewritten.

However, I prefer Emacs any day over VSCode; I am very productive in Emacs because I'm used to doing almost everything in it.


I don't think VSCode is ahead of Emacs on some objective scale. I tried to re-create my Emacs configuration with it and found it impossible. Also, VSC appeared to be notably slower when tuned closer to my needs. At the end of the day, there are always users who are delighted by innumerable configuration possibilities, and those who believe it's unnecessary cognitive load. Emacs, being a DIY kit, appeals to the former, while VSC is built like a browser with a rich plugin system; it's in between but closer to the latter. I'm sure VSC does and will have the larger audience, but only a better Emacs can unseat Emacs on its side of the spectrum.

Btw, the effects of MS funding can be overvalued - e.g. MS Teams is among the worst IMs in existence.


vscode doesn't even support multiple monitors, it's far behind in many areas.


What does it mean for an editor to "support multiple monitors?"


Emacs allows you to open multiple windows that share the same local server / process and essentially work as one editor.

You can open the same or different files in the different views, use one of the views for debugging, shells, remotely running tests, while you use the other views for other stuff (showing different files side-by-side, etc.).

I often see vim and vs code users doing a lot of shuffling around when working with multiple open files to switch back and forth between files or across tabs, but with emacs none of this is really necessary.


There were two attempts to solve that - Guile Emacs[1] and Common Lisp Emacs[2]. Sadly, both are effectively dead now.

[1] https://www.emacswiki.org/emacs/GuileEmacs

[2] https://www.cliki.net/cl-emacs


My impression is that the major bottleneck is the redisplay loop.


In general I think this is an interesting idea, but I feel like this has a lot of overlap with asm.js and WebAssembly.

With this standard we would have standard dynamic JavaScript as the world knows it today, a restricted subset ('constraint spec') that is still designed for human readability/writability, and then asm.js/WebAssembly, which would not be written directly but instead would be an output of code written in other languages. Programmers will want interoperability between all of these paths, and that is a lot of complexity for these engines to manage.

