"Is this fun" and "is it something a person in their 20s would want to play" are entirely different questions.
There are people who absolutely thrive doing the things NetHack rewards you for in the long term: perseverance, patience, planning, resourcefulness, risk management, strategizing, analyzing and learning systems... I feel like it has a timeless and ageless appeal to a particular kind of player and has never been quite palatable to mainstream audiences. If you like NetHack today you probably do it for the same reasons you would 30 years ago.
My understanding is that UMoria is an independent, complete remake of the original Moria. NetHack, on the other hand, is the same project with the same codebase throughout its history. With that in mind, NetHack is a few months older than the Moria codebase still under development today.
Not to say that this is necessarily the right way to look at it. It's not clear-cut, is what I'm saying.
I wonder how much energy OpenAI spends each day on pink elephant paradoxing goblins. A prompt like that will preoccupy the LLM with goblins on every request.
That is a great point. The machine consumes energy putting goblins into every response, and it consumes energy keeping goblins out of every response. That is a great attack vector: if (wild imagination ensues) an adversary can do that x100 (goblins, potatoes, dragons, Lightning McQueen, etc.), they can render the machine useless/uneconomical from the standpoint of energy consumption.
Greater context size means more computational resources means more energy. Dedicating a portion of the context to telling the LLM not to refer to goblins then has a non-zero energy cost every time you prompt the model.
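A rough sketch of that arithmetic (every number below is an assumption, not a measurement; real per-token inference costs vary by orders of magnitude and aren't public):

```python
# Back-of-the-envelope sketch of the recurring cost of a system-prompt addition.
# All three constants are illustrative assumptions, not measured figures.

EXTRA_TOKENS = 50                 # assumed size of a "never mention goblins" instruction
ENERGY_PER_TOKEN_J = 0.3          # assumed joules per prompt token; estimates vary wildly
REQUESTS_PER_DAY = 1_000_000_000  # assumed daily request volume

daily_joules = EXTRA_TOKENS * ENERGY_PER_TOKEN_J * REQUESTS_PER_DAY
daily_kwh = daily_joules / 3.6e6  # 1 kWh = 3.6 MJ

print(f"{daily_kwh:,.0f} kWh/day spent re-reading the same instruction")
```

Whatever the true per-token figure, the cost scales linearly with both the instruction length and the request volume, and it is paid on every single prompt.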
> X11 or any network transparent graphics protocol doesn't solve the problems that a terminal solves. how do you pipe data through multiple applications in one command using a GUI for example? nobody has been able to solve that in a practical way yet.
It seems to me that you are conflating the role of the terminal with the role of the shell. The terminal accepts streams of text, along with control sequences that instruct the terminal itself, so that software can accept input and present output. It doesn't fundamentally need to be aware of the concepts of pipes and commands to do that.
Of course, that doesn't stop iTerm2 from doing RCE by design, but at a conceptual level this is not a problem inherent to a terminal.
i am not conflating them, the problem is rather that the current terminals define or restrict what the shell can do. shells are being rewritten already, but they could not do what i want them to do without the terminals changing too, so the terminal needs to be next.
You can absolutely pipe programs together using a shell without a terminal. This also doesn't restrict in any way the possibility to "output to some kind of graphical device/interface that uses that semantic information to display the data using nice graphical interface elements".
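As a minimal sketch (Python, purely for illustration), here are two programs wired together by a pipe with no terminal involved anywhere; it behaves identically when stdout is redirected to a file or the process has no controlling TTY:

```python
# Two processes connected by a kernel pipe; pipes are a kernel/shell concept,
# and nothing here requires a terminal.
import subprocess

producer = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE)
consumer = subprocess.Popen(
    ["wc", "-l"],
    stdin=producer.stdout,   # consumer reads what the producer writes
    stdout=subprocess.PIPE,
    text=True,
)
producer.stdout.close()  # ensure the consumer sees EOF when ls exits

output, _ = consumer.communicate()
print("line count:", output.strip())
```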
What is it specifically that you want to do which your favorite shell doesn't allow because it is restricted by terminals?
yes, i can write a program that takes data and displays it nicely. but in order to use graphics i need a second channel to send the graphics instructions, whether for x11, wayland, or even MacOS or windows. at that point i have two interfaces, the terminal and that graphics display. i want both to be in one. i want the terminal to be that graphical output. that currently is only possible through in-band escape sequences. that is the restriction i want to get rid of.
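for the curious, this is roughly what the in-band approach looks like today. a sketch of iTerm2's documented OSC 1337 inline-image sequence; the filename is made up, and it only renders in terminals implementing that protocol:

```python
# Sketch: pushing an image through the terminal's byte stream itself, via
# iTerm2's proprietary OSC 1337 inline-image escape sequence. graphics data
# and text share one channel, which is exactly the hack being discussed.
import base64
import sys

def show_inline_image(path):
    payload = base64.b64encode(open(path, "rb").read()).decode("ascii")
    # ESC ] 1337 ; File = inline=1 : <base64 payload> BEL
    sys.stdout.write(f"\x1b]1337;File=inline=1:{payload}\x07")
    sys.stdout.flush()

show_inline_image("plot.png")  # hypothetical file; harmless noise elsewhere
```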
I don't fully understand the request. On one hand you don't want a second channel to send the graphics instructions, but on the other you don't want to use in-band escape sequences.
Maybe you'd be interested to learn about plan9's graphical terminal. Its window manager runs entirely within it, and all windows just represent multiplexed access to limited areas of the terminal.
> On one hand you don't want a second channel to send the graphics instructions, but on the other you don't want to use in-band escape sequences
correct. the in-band sequences are dangerous and unwieldy. they don't convey enough information. they are a hack to work within the limitations of historical terminals. that's what this whole thread is about.
a separate graphics channel creates a separate window. then you have two windows. not good either. it needs to be one window, and considering that this window should be able to support multiple remote connections, it needs to be local; otherwise i would get a new window for each server i connect to. that works for some people, but not for me. and it needs to work through a single channel like ssh/mosh or another similar protocol and be forwardable.
so i want a third option. one approach is sending semantic data, letting the terminal interpret it and display it graphically. this is interesting because shells are already exploring semantic data. (elvish, murex, nushell, others...)
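a toy sketch of the idea (the terminal side is hypothetical, there is no standard for this yet): the program emits structured records instead of pre-rendered text, and the rendering decision moves to the terminal:

```python
# Sketch: emit structured records rather than formatted text. a
# semantics-aware terminal (hypothetical) could render this as a real table,
# a chart, or anything else; the program no longer decides the presentation.
import json
import os
import sys

records = [
    {"name": entry.name, "bytes": entry.stat().st_size}
    for entry in os.scandir(".")
]
json.dump(records, sys.stdout, indent=2)
```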
plan9 sounds interesting. i see several efforts to port aspects of it to linux. they all seem to have stalled. more work needs to be done here. that's what i am advocating.
The bug is in a feature of iTerm2 that the "bad dead hardware design" did not have. The "bad dead hardware design" was much simpler and less ambitious in scope.
If iTerm2 had stuck to emulating a VT220 this issue would not have existed. If anything it's the idea that it should "evolve" that's flawed. Something like a VT220 was designed for a kind of use that is surprisingly relevant still. I think doing something significantly different warrants designing something significantly different, not merely "evolving" existing solutions to other problems by haphazardly shoehorning new features into them without paying attention to security implications.
This is only the latest of several rather serious vulnerabilities in iTerm2's SSH integration.