jakebasile's comments

There's a YouTuber that's been going through and making in-depth retrospectives of each Ultima game that some here may find interesting. I've found them a pleasant watch, and I don't usually go for this type of content. I never played the Ultima series until Ultima Online, so I don't have the nostalgia goggles I'd need to go back and play games this old (sorry, it's the truth); these videos are as close as I'll probably get.

https://www.youtube.com/playlist?list=PL16yfJJxAM6g-4YxKGI-1...


As someone who played these back in the day and enjoyed them, I don't think I could even recommend them to myself today. Ultima VII is everything the other comments say it is... but it's also one staggeringly large nested fetch quest, with nothing much breaking it up.

I'd recommend some combination of the Bethesda open world games and the JRPG genre today. They're not the same, nothing really quite fills the Ultima gap that I know of, but between those two I'd call it close enough.


For years, Ultima VII was the buggiest RPG I ever finished the main quest on. Then I played Daggerfall.


I keep meaning to go and play the Underworld games, since I love the subgenre they pioneered (Immersive Sim). I'll get around to it... one day.

It's like you say, not exactly the same, but for CRPGs it's hard to go wrong with Baldur's Gate 3. It's similar enough to later Ultima in some ways that it might scratch the itch.


Elsewhere in the thread, an Underworld remake is mentioned.

I do not have the same complaint about them as I do Ultima VII. A good remake could still be an interesting experience today, with little more than a graphical overhaul and modern controls. Both of them have a mixture of exploration, combat, and conversation that keeps them engaging over time.

Ultima VII's problem is that it's just... one big fetch quest. The combat is a non-entity (in the original game, I gather some of the remakes have tried to address this but they're starting way, way behind the eight ball), and there just isn't much else there.

Ultimas prior to VII don't have that problem because combat is functional, so the essentially multi-genre nature of an RPG, where each subgenre keeps the others from becoming painfully tedious, is intact; but by modern standards they're probably still fairly unplayable. Neat ideas, and you can see how later games picked up the ball and ran with it in various directions, but they're very hard to play today. For instance, Ultima 4 combat has modern TRPG combat in it to some extent... except no even remotely modern TRPG makes "moving one space" an entire action on par with "make one attack", and for good reason.


I'm a huge fan of Majuular, these videos are amazing. I like that he dives deeply into the history behind each game as well - like the company, Richard Garriott (Lord British), etc.

His other videos on other games are great as well!


A YT'er named 'Spoony' also made a thorough, entertaining retrospective of the Ultimas, and even caps it off with an interview with Lord British!


How many forks of VS Code am I supposed to have installed at this point?


My YC2026 startups are an AI agent that automatically manages your VSCode forks, and a public safety computer vision app for smart glasses that predicts whether someone can afford a lawyer based on their skin color


They'll have to compete with these other pending funded companies:

- ai therapist for your ai agents

- genetic ai agent container orchestration; only the best results survive so they have to fight for their virtual lives

- prompt engineering as a service

- social media post generator so you can post "what if ai fundamentally changes everything about how people interact with software" think pieces even faster

- vector db but actually json for some reason

(Edit: formatting)


> genetic ai agent container orchestration; only the best results survive so they have to fight for their virtual lives

AI Agent Orchestration Battle Bots. Old school VMs are the brute tanks, just slowly ramming everybody off the field. A swarm of erratically behaving lightweight K8s Pods madly slicing and dicing everything coming their way. Winner takes control of the host capacity.

I might need this in my life.


Don't forget there are 2 or 3 of each in YC2026.


You'll need a Chromium based app to count the installs for you.


yes


If the author sees this, I'm sorry for your loss.

Losing family is hard, but losing them suddenly makes it harder. Losing them suddenly because of poor advice or (in)action of people who are supposed to help is yet more difficult. I know from experience.

It does get easier to deal with, in time.


I find it hard to take anything that spends this much text talking about how LLMs "feel" seriously. They do not feel, nor do they think. They're not alive nor human and have no rights. It has to be marketing speak, right? "Oh our models are so advanced we have to consider how they feel".

What’s next, sick days and parental leave for the software? Give me a break. This is performative.


I'm never going back. If something changes and the only option is Windows or consoles I'll just stop buying new games or take up another hobby.

Being able to use sane scripting to solve problems, ZFS snapshots to undo bad mod installs, using the same system for development, and so on is no longer something I'm willing to give up. I've also started amassing a small collection of Cloud Init configs that set up game servers inside LXD containers. Some of these have native Linux binaries but a few only have Windows servers. They run perfectly well through Wine.
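For the ZFS and container bits, a minimal sketch of the kind of thing I mean (the dataset, image, and file names here are made up):

    # snapshot the game library before installing mods; roll back if it goes bad
    zfs snapshot tank/games@pre-mods
    zfs rollback tank/games@pre-mods

    # spin up a game server container seeded with a saved cloud-init config
    lxc launch ubuntu:24.04 game-server --config user.user-data="$(cat server-init.yaml)"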

Anyone here even vaguely interested, I encourage you to just try it. I use Ubuntu and it works great on both AMD and Nvidia cards for me. What have you got to lose?


The only thing that is keeping me from switching on my home PC is Duo [1].

It allows my wife to play Stardew Valley on the TV via game streaming, while not disturbing my work at all on the PC. When she launches the game, I don't even notice it on my PC, meanwhile other solutions like Sunshine or Apollo do not let you use your computer while a gaming session is active on a client. Sadly, Duo is Windows only for now, which sucks.

Does anybody know an alternative for Linux that would work this way?

1. https://github.com/DuoStream/Duo


I poked around a bit, and this might do what you want: https://github.com/Steam-Headless/docker-steam-headless


Wow, that actually looks like a viable alternative. Thanks a bunch, will check it out!


Tried to play Dota 2 on Linux; it has a bad memory leak that made the whole system hang after an hour or two. Plus it seems to get worse FPS on Vulkan, but I read that it might just be a bad implementation for my card (6650 XT).


I often find that Linux native games (Dota is one such) run worse than Windows games running under Proton. For many of them, you can just force it to play via Proton and go along with your day, but the Valve competitive ones don’t work that way since VAC will cry about it.


Also tried that, but I did not notice the memory leak problem, since I only played about an hour. There are other minor issues with using Linux; for example, my input method does not work in the Steam chat window but works everywhere else. Anyway, Linux is much cleaner than Windows and an overall better development environment. I prefer it over Windows now.


Pop OS! With NVIDIA GPU here.

Shit just works. When it doesn't, changing the proton version usually fixes it.

Way better than Windows.


I've stopped even checking protondb before getting a game these days. Pretty much unless they've gone out of their way to make it not work (which is mostly competitive games, so you can tell beforehand), it works.

I realize by posting that here on HN I'm tempting people to send me the ProtonDB garbage tier list, but it's true for the types of games I play.


I must be the one Data Scientist in the world whose PopOS has twice failed to boot after updates. To the point I have given up on it.

My stack is so vanilla (nvidia, python, R) I can’t think what the issue is. Maybe hardware.


Give Fedora a shot. You can run the COSMIC desktop on it.


Thanks for the advice. I spend 99.99% of my time in a terminal, a browser and vscode.

The graphical environment is neither here nor there for me. I just want to do an update and have CUDA libraries/NVIDIA drivers not break, and for my OS to boot!


Same setup here. One issue I've hit, though it's likely a rare problem, is StarCraft: Remastered. Wine has an issue with audio processing that I can't seem to configure my way out of. It pegs all 32 threads and still stutters. Thankfully this game can likely run on an actual potato, so I have a separate mini PC running Windows for when I want to get my ass kicked on battle.net.


The beta based on Ubuntu 24.04, or the stable which is still based on 22.04? Not that it wasn't nice when I used it, but it's very much outdated at this point from a regular-packages perspective.

26.04 will be a huge upgrade if you stay


Also have the same setup. Very few issues. Been very happy with the switch


I'd suggest Clojure simply because of the JVM. The JVM is unjustly maligned, and I will defend it to anyone at any time. Having access to the huge Java JAR ecosystem for integrations and libraries is a game changer compared to CL.

As for interactivity - I use a REPL daily with Clojure and it works just as you'd expect from a CL REPL. The only time you need to reload is if you add a new JAR to the classpath or occasionally when state gets really weird. There are reload facilities baked into nREPL too.

For reference, I've used Clojure professionally for about a decade.

I'd suggest trying it out with Calva in VS Code.


I've been using Emacs for 20 years. So for me it will be CIDER :)


Both Anthropic and OpenAI have something like this now and neither bothered to implement a delete feature.

Google’s version, Jules, has one.


Can’t steal my crypto if I don’t have any in the first place.


I miss when gaming in general was less mainstream and more weird like this. Now the silicon manufacturers hate that they even have to sell us their scraps, let alone spend time on making unique designs for their boxes.

I bought a small press book with a collection of this art and it was a fun little trip down memory lane, as I’ve owned some of the hardware (boxes) depicted in it.

For anyone else interested: https://lockbooks.net/pages/overclocked-launch


> I miss when gaming in general was less mainstream and more weird like this.

To me, this is a continuum with the box art of early games, where because the graphics themselves were pretty limited the box art had to be fabulous. Get someone like Roger Dean to paint a picture for the imagination. https://www.rogerdean.com/

The peak of this was the Maplin electronics catalogue: https://70s-sci-fi-art.ghost.io/1980s-maplin-catalogues/ ; the Radio Shack of UK electronics hobbyists, now gone entirely. Did they need to have cool art on the catalogue? No. Was it awesome? Yes.


This great interview with Roger Dean about his career in games was recently posted:

https://spillhistorie.no/2025/10/03/legends-of-the-games-ind...

Turns out that the Psygnosis developers in the 1980s used him as a kind of single-shot concept artist. They would commission the box art first, then use that as inspiration for designing the actual game to go inside the box.


Reminds me of the Anderson Bruford Wakeman & Howe 12” cover:

https://row.rarevinyl.com/products/anderson-bruford-wakeman-...


Huh. I hadn't actually realised Maplin was gone entirely. They closed in Ireland a while back, but I put that down to a general trend of marginal UK high-street retailers (Argos etc) pulling out of Ireland, but still existing in some form in the UK.

Weird shop; they never really got rid of any stock that was even theoretically useable, so it was at least partially a museum of outdated gadgets.


On the plus side, PC gaming hardware seems to last ages now. I built my gaming desktop in 2020; I had a look lately at what a reasonable modern mid-tier setup is, and they are still recommending a lot of the parts I have. So I'll probably keep using it all for another 5 years then.


It's a double-edged sword. Yes, back then a 2-year-old computer was old, but at the same time every 2 years a new generation of games came out like nothing seen before. Each generation was a massive step up.

Today, a layman couldn't chronologically sort the CoD games from the past 10 years by looks/play/feel, each new FIFA and similar is _the_ same game but with new teams added, and virtually every game made is a "copycat with their own twist" with almost zero technical invention.


This is fine? Today’s games look beautiful and developers are hardly restricted by hardware. Games can innovate on content, stories, and experiences rather than on technology.

Feels similar to how painting hasn’t had any revolution in new paints available.


It's not a one-or-the-other. One wouldn't want content, stories, and experiences to stagnate just because graphics were improving, so why would the opposite be assumed?


> "copycat with their own twist" with almost 0 technical invention.

I think he meant the software side (game-systems wise), not hardware innovation


AAA games suck compared to 10 years ago though.


I disagree, AAA games started nosediving with the seventh generation 20 years ago and only recently have they started to tentatively show signs of recovery.


Do you have any examples in mind from each era? I thought Fallout 3 was quite good back then. Today we've got stuff like Borderlands 4 (or whatever the newest one is) that barely runs on anyone's PC, and general game install size has shot up so drastically that it's no longer really feasible to keep most of your games installed all the time and ready to play.

I mostly play indie/retro/slightly-old games these days, so I mostly hear of the negatives for modern AAA, admittedly. I'm also tempted to complain about live service, microtransactions, gacha, season passes, and so on in recent big releases, but maybe that would be getting off-topic.


> Today we've got stuff like Borderlands 4 (or whatever the newest one is) that barely run on anyone's PC

Just like Crysis did 18 years ago?

>it's no longer really feasible to keep most of your games installed all the time and ready to play.

Crysis took up around 5% of a common HDD back then. The equivalent now would be around 80 GiB, which is just about what Elden Ring with the DLC takes.


Yes, thanks, I don't need "technical invention" in the form of more shaders hiding the ass quality of the gameplay. Mirror's Edge Catalyst still looks great despite being almost 10 years old, and it manages to bring a 2080 Ti to its knees at FullHD+.


But your stuff from 2020 probably isn't AI "enhanced"!! Throw it in the garbage!


I was planning on hanging on to my Win10 PC, which was perfectly fine except that Microsoft were both pestering me to upgrade and telling me it wasn't possible, but the death of its SSD after 7 years put paid to that.


I have a PC that is probably at least 10 years old but works perfectly well for browsing, spreadsheets and the occasional lightweight game but MS has decided, in its infinite wisdom, to say that it can't be upgraded to Windows 11.

So I will probably install Linux (probably Debian) and move on and forget about those particular games... (~30 years since I first installed Linux on a PC!).


I've got a computer that I built in 2008 and upgraded in 2012. It's pretty solid for higher-requirement games up until about 2015, and can still handle lower-requirement indie stuff today.

I built its successor in 2020, using a GPU from 2017. The longevity of the PS4 has given that thing serious legs, and I haven't seen the need to upgrade yet. It still runs what I want to run. It's also the first post-DOS x86/64 machine I've owned that has never had Windows installed.


What sort of lightweight games? Nethack? Solitaire?


World of Tanks...


Sounds like a plus to me.


I still have a 1080ti which does swimmingly. There just aren't enough AAA/"AAAA" games coming out that I care to play. Oblivion remaster almost tempted me into upgrading but I couldn't justify it just for a single game.

The only reason I'd upgrade is to improve performance for AI stuff.


On the other hand, you're also stuck with design mistakes for ages.

The AM5 platform is quite lacking when it comes to PCIe lanes - especially once you take USB4 into account. I'm hoping my current setup from 2019 survives until AM6 - but it seems AMD wants to keep AM5 for another generation or two...


There's minimal demand for Thunderbolt/USB4 ports on Windows desktops. It won't ever make sense to inflate CPU pin count specifically for that use case, especially when the requisite PCIe lanes don't have to come directly from the CPU.

You'd be better off complaining about how Threadripper CPUs and motherboards are priced out of the enthusiast consumer market, than asking for the mainstream CPU platform to be made more expensive with the addition of IO that the vast majority of mainstream customers don't want to pay for.


Aren't they waiting for DDR6?


One person's "design mistake" is another person's "market segmentation".

x16 GPU + x4 NVMe + x4 USB = 24 direct CPU lanes covers 99% of the market, with add-ons behind the shared chipset bandwidth. The other 1% of the market are pushed to buy Threadripper/Epyc.


Just this year I finally replaced my 3rd-gen i5 system. It was well over 10 years old and still just able to keep up with my workloads.

Now it's a node in my Proxmox cluster running transcodes for Jellyfin. The circle of life.


Woah, that book is cool; and so much more from this publisher!


LGR took a look at it on his channel; a very tiny book, with very tiny art, apparently all grabbed from google images. Something of a letdown.


Wow, I just checked and that's really underwhelming: tiny page size, lots of padding around the images, and yet there are often 4 images per page. The layout makes it seem like the size was a late decision; it would be appropriate for a large artbook.


You ain't kidding! What a treasure trove of a publisher. Never heard of them before, great rec


> Now the silicon manufacturers hate that they even have to sell us their scraps, let alone spend time on making unique designs for their boxes.

I genuinely don't believe this to be true for AMD. I bought a 6600xt on Release Day and by the time I was able to build my complete PC, it had upstream linux kernel support. You can say what you will about AMD but any company that respects my freedoms enough to build a product with great linux support and without requiring any privacy invading proprietary software to use is a-ok in my book.

Fuck NVidia though.


I agree with the spirit of your post, and that AMD is probably the lesser evil, but it's worth noting they "just" moved the proprietary bits into a firmware blob. It's still similarly proprietary but due to the kernel being pretty open to such blobs, it's a problem invisible to most users. You'd have to use linux-libre to get a feel for how bad things really are. You can't really use any modern GPUs with it.


I understand the sentiment, but I don't see how devices with proprietary firmware stored in ROM or NVRAM are any more free or open than devices that require proprietary firmware loaded at boot.

And it looks like Linux-Libre also opposes CPU microcode updates[1], as if bugged factory microcode with known security vulnerabilities is any less non-free than fixed microcode. Recommending an alternative architecture that uses non-proprietary microcode I can understand; this I cannot.

[1] https://www.phoronix.com/news/GNU-Linux-Libre-4.16-Released


That honestly makes little difference to me. There's no useful computer out there that isn't a bunch of proprietary blobs interacting with proprietary hardware. I wish there were, but if it's practically indistinguishable from having those blobs burned into the hardware directly instead of injected at boot, it's not perfect but still a pretty good situation.

I remember when running Linux on your computer at all was hit or miss; these days I can go to Lenovo and buy a ThinkPad with Ubuntu preinstalled and know that it will just work.


>I genuinely don't believe this to be true for AMD. I bought a 6600xt on Release Day

That was 2021 though when AMD was still a relative underdog trying to claw market share from Nvidia from consumers. AMD of today has adjusted their prices and attitude to consumers to match their status as a CPU and GPU duopoly in the AI/datacenter space.


You can still buy their GPUs, they work perfectly fine on linux ootb and they even make stuff like the AI Max for local AI that is very end user and consumer friendly. Yes, GPUs got more expensive which is a result of increased demand for them. But for a hardware company, as long as their linux support is as good as it currently is, they are a-ok in my book.


I just don't have enough pain points with Git to move to something new. I don't have a problem remembering the ~5 commands I need most on any given workday. Between stashes, branches, temporary commits I later rebase, and recently worktrees, I don't lack for anything in my usage. It's universally used across both my public and corporate life, and neither does anyone need to learn a new tool to interact with my code base, nor do I need to deal with possible inconsistencies by using a different frontend on my end.

It's cool that it exists, and it's impressive that it is built on top of git itself. If you (like the author) want to use it, then more power to you. But I have yet to be convinced by any of these articles that it is worth my time to try it since nearly all of them start from a point of "if you hate Git like me, then try this thing".

If anyone has a link to an article written from the point of view of "I love or at least tolerate git and have no real issues with it, here's why I like JJ," then I'd be glad to read it.


If you've ever lived in a world of stacked commits with develop on main (i.e. not gitflow, no feature branches), combined with code review for every commit, git will soon start to aggravate you. Git doesn't make rebasing a chain or tree of commits pleasant. Git records merge resolutions and can reuse them, but it doesn't do the same thing for rebases, making them more repetitive and tedious than they should be. When you address comments on earlier commits in a chain, you need to rebase after. Git's affordances for rebasing aren't great.

And when you rebase, the commits lose their identity since commit hashes are content-addressed, despite having an identity in people's minds - a new revision of a commit under review is usually logically the same unit of change, but git doesn't have a way of expressing this.

jj, as I understand it, addresses these pains directly.


> Git doesn't make rebasing a chain or tree of commits pleasant.

There's a semi-recent addition that makes it a single command, the --update-refs flag to rebase (along with the --rebase-merges flag, formerly called --preserve-merges).
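A minimal sketch (branch name hypothetical):

    # rebase a whole stack onto main, moving each stacked branch ref along with it
    git rebase -i --update-refs --rebase-merges main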


> Any branches that are checked out in a worktree are not updated in this way.

It would be nice if there were also a flag to choose that behaviour.


> Git records merge resolutions and can reuse them, but it doesn't do the same thing for rebases

Since when does rerere not work with rebase anymore?
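(For anyone following along, rerere is off by default; a minimal sketch:)

    # record and replay conflict resolutions, including across rebases
    git config --global rerere.enabled true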


I think it does but rerere is not nearly as good as first class conflicts in jj


Ok, but by that measure,

>> Git records merge resolutions

isn't true either.


FWIW git-branchless, an extension to git, addresses many of these points without leaving git.


The best thing that could come out of Jujutsu is git itself adopting the change-id system (which I believe I read somewhere is being considered). If you actually take time to learn your tools and how they're intended to be used, there's really no reason to learn jj IMO.


git is both a (bad) UI and a protocol. Jujutsu is a UI on top of git (the protocol).

There's nothing wrong with taking the time to learn how to use a bad UI, especially if there's no other option. But don't mistake your personal mastery of git for evidence that it's better than jj.

In all likelihood, the git proposal you allude to would not extend further than adding a bit of persistent metadata that follows commits after "destructive" changes. And even then, it'd be imperatively backing into the change-as-commit-graph data model rather than coming by it honestly.

> If you actually take time to learn your tools and how they're intended to be used, there's really not reason to learn jj IMO

This is like saying if people take the time to learn curl, there's really no reason to learn Firefox.

And it doesn't suggest to me that you're all that familiar with jj!

- automatic rebasing! goodbye to N+1 rebases forever

- first-class conflict resolution that doesn't force you to stop the world and fix

- the revset/template languages: incredibly expressive; nothing like it in git

- undo literally any jj action; restore the repo to any previous state. try that with the reflog...

No amount of learning git nets you any of these things.
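A taste of the last two, as a rough sketch (the revset is just illustrative, and <operation-id> stands in for a real ID from the op log):

    # expressive queries: your own non-empty commits
    jj log -r 'mine() & ~empty()'
    # undo the last operation, or restore the repo to any earlier operation
    jj undo
    jj op log
    jj op restore <operation-id>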


Small point of order, jj is a VCS with a pluggable backend, one of which is git. That’s a bit different than a UI on top of the git protocol.


Yes - but only a bit! And personally I still like it as a reference point when I tell people about it - because in the end, it mainly affects what they see of the repo and how they interact with it, and the result is still a valid git repo (if they've colocated).


Isn't git the only supported backend?

I always introduce jj as a better git ui.


It's the only open source backend right now. Google's backend for their VCS infrastructure is still private.


They support Gerrit, Google's internal VCS.


Gerrit is not a VCS


Cool


Here's a great article about a powerful workflow that jj makes practical: https://ofcr.se/jujutsu-merge-workflow/

Where jj shines is advanced workflows that aren’t practical with git. If you aren’t interested in those then it doesn’t give you as many benefits over git.

If you are breaking down your features into small PRs, stacking them, etc…, then jj is super helpful.


I’m also fine with git, and have used Mercurial and p4 before. I think simplicity is better in this case. I do think that with more and more generated code inflating codebases at high velocity, we need to find a better way to resolve merge conflicts.


I don't have major pain points with git either (mainly just that rebase merge conflicts can get awful to deal with), but I just love jj and I'm not looking back.

It turns out there were a lot of things that I was not doing with git because with git it would have been painful.

Now my PRs are split into human-sized commits that each contain a set of changes that make sense together, and I keep moving changes around during development to keep the history tidy, until it's time to send the pull request. If a commit introduces a typo, the typo fix should go into that commit so the typo never happened in the first place and you don't get reviews like "please fix this" and then "oh wait I see you fixed it in a later commit".

And sure, with git you could checkout the faulty commit, amend it, then amend -a and hope no one was looking, and rebase your dev branch onto the amended commit and it will often even work. Or rebase -i, sure -- have fun with it if the typo was 12 commits ago.

So I just never did that because augh.

With jj it's trivial. You just switch to that commit, fix the typo, and switch back to where you were. Or fix the typo where you were and squash the fix, and only the fix, into the commit that introduced it.

No more rebase hell. No more deleting the checkout and pulling it clean because things went sideways in a way that would be hell to fix manually -- jj takes snapshots after every mutation and rolling back is easy. No more squashing on merge to sweep the messy commit history under the carpet. No more juggling index and staged files and stashed files and all that messy business. Everything is just suspiciously straightforward. To think this could have been our lives all along!

And I'm not looking back.

It's not that I dislike git. It's just that I love jj.


> Now my PRs are split into human-sized commits that each contain a set of changes that make sense together, and I keep moving changes around during development to keep the history tidy, until it's time to send the pull request. If a commit introduces a typo, the typo fix should go into that commit so the typo never happened in the first place and you don't get reviews like "please fix this" and then "oh wait I see you fixed it in a later commit".

That's my workflow in Git. It is sometimes painful, but only because other people don't do that.

It is less work if you use git commit --fixup=old-commit-hash and then git rebase --autosquash. The --fixup step can even be automated by using git-absorb, but that needs to be installed separately.
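A sketch of the automated variant (assuming git-absorb is installed and the stack is based on main):

    # stage the fixes, then let git-absorb create fixup! commits
    # targeting the commits that last touched those lines
    git add -u
    git absorb
    git rebase -i --autosquash main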

> You just switch to that commit, fix the typo, and switch back to where you were. Or fix the typo where you were and squash the fix, and only the fix, into the commit that introduced it.

That sounds the same in Git?

> No more deleting the checkout and pulling it clean because things went sideways in a way that would be hell to fix manually

When do you need to do that? The only case in which I messed up my repo was when I had a hard drive crash and recovered commits manually from git objects, because the other files were corrupted.

> rolling back is easy.

Yes, that seems a bit easier, but reflog still exists.

> No more squashing on merge to sweep the messy commit history under the carpet.

This is just as stupid an idea in Git, and I hate it. It seems to be cultural and is not suggested by the tool itself.

> No more juggling index and staged files and stashed files

I find these useful, but you can also use Git without them. Just always use commit -a and commit instead of stashing.


> > You just switch to that commit, fix the typo, and switch back to where you were. Or fix the typo where you were and squash the fix, and only the fix, into the commit that introduced it.

> That sounds the same in Git?

I've always struggled with this myself, and would like to update my git knowledge. Can you walk me through the commands to do this? Let's say the commit id is `abcd1234` and it's 5 commits ago.

In JJ I'd do this:

    vim file/with/typo.txt
    jj squash --into abcd1234 file/with/typo.txt
Or if I was feeling more pedantic I'd do:

    jj edit abcd1234
    vim file/with/typo.txt
    jj edit <<my latest commit ID>>


You have the same two options in Git:

    vim file/with/typo.txt
    git add file/with/typo.txt
    git commit --fixup=abcd1234
    git rebase --autosquash -i
For some reason I need to pass --interactive/-i, even if I don't actually want it to be interactive. I am not sure if this is just a bug in my Git version or if this is intended.

The git commit step can also be replaced with git-absorb, if you have this installed and abcd1234 was the last time you modified these lines.

The second approach is this:

    git rebase -i abcd1234~
    # do 's/pick abcd1234/edit abcd1234/' in your editor
    vim file/with/typo.txt
    git rebase --continue


I believe this only works when there are no other uncommitted changes.

So you would need to stash after the fixup commit and pop the stash after rebase. Or use autostash.


Sure, but I don't see that as a big deal. It comes down to the fact that git does (interactive) rebases and some merges in the working tree and not in memory. If it were able to do it in memory, then you also wouldn't need to squash, but it needs a place where it can check out conflicts without altering data. If you'd like it to be automatic, you can always set rebase.autoStash or merge.autoStash.
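A sketch of that config (global scope assumed):

    # automatically stash and re-apply uncommitted changes around rebases
    git config --global rebase.autoStash true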

As for the effect on the workflow, you can just continue to work and commit, and then call autosquash in two weeks when you're about to merge. This also has the benefit that Git (by default) only alters commits nobody has used yet.


s/squash/stash/ <<< "then you also don't need to squash"


Ok. JJ wins this round. "Go to commit, fix, go back" is superior to this bunch of commands in every possible way.


That's your opinion.

The second approach has the exact same number of commands. The first has 2 more. The add is necessary because Git separates committing into two steps, which has other benefits; if you want the JJ behaviour you can call commit -a. The autosquash is necessary because you don't need to use fixup with autosquash; you can also do it alone, which is recommended if the fix targets something that is already in an older release.


Of course! Likewise, you can do in C++ everything you can do in Rust. And yet Rust is there and fast growing in popularity. As it turns out, how to do things, and at the cost of what externalities, matters no less than what you can do.


I don't like C++ either, but yes, that is also my opinion about Rust. I already annotate ownership semantics in C, so I don't know why I need a compiler that is opinionated and tells me how I should structure my code. I think Rust would provoke much less resistance if it weren't so opinionated.

As for jj, JJ and Git are much closer than C and Rust. JJ tends to have slightly fewer commands, but only because what are flags in JJ are separate commands in Git. The impression I get from these posts is that it would have been doable with similar effort in Git, but they just never bothered.

Also I prefer refining stuff we already have instead of throwing it all in the bin and stating it's obsolete.


> The impression I get from these posts, is that it would have been doable with similar effort in Git, but they just never bothered.

I don't think it would have been doable. I hear this question/comment sometimes. I sent https://github.com/jj-vcs/jj/pull/7729 to have a place to point people to.


> ### Why is Jujutsu a separate project? Why were the features not contributed to Git instead?

> The project started as an experiment with the idea of representing the working copy by a regular commit. I (@martinvonz) considered how this feature would impact the Git CLI if it were added to Git. My conclusion was that it would effectively result in deprecating most existing Git commands and flags in favor of new commands and flags, especially considering I wanted to also support revsets. This seemed unlikely to be accepted by the Git project.

This doesn't support what you said?

> I don't think it would have been doable.

This only states that JJ has a different user model, with a different structure of commands and flags, not that it magically does things that are impossible in Git.

Btw., you could just link to the commit diff instead. And are you quoting yourself?

-----------

Oops, now I see the issue, you misunderstood me.

> but they just never bothered.

"They" referred to the authors of blog posts. I didn't want to claim, that it would be a good fit to merge the JJ user model with the Git user model. My claim was that the approaches people learn with the JJ user model aren't exactly difficult in the Git model either. People just reconsider their approaches when they touch a new tool with the experience they gained since starting to use Git. They just never considered that approaches with a tool they have been using for years.


Ah, I see what you're saying now. Thanks for explaining. I agree with you in principle.

My impression from our Discord server is that quite a few JJ users actually are Git experts who have been creating clean commits for years. I'm one of those people. It's not a coincidence that JJ's CLI makes it easy to create clean commits - it was designed for that from scratch.


Impressive project. I couldn't try it yet, because it requires a newer Rust toolchain than my distro ships, I'm not going to run random shell code as root, and I haven't felt it was worth figuring out how to bootstrap Rust.

I'm not sure if I would like it, because I care about reproducibility of commits, so I want commit hashes to stay the same after a rebase. Does it support --committer-date-is-author-date?

I'm also somewhat wary of additional layers on top of something.


We don't have --committer-date-is-author-date. I don't think we've had a single request for it until now.

That flag seems a bit weird, though. Wouldn't you want to keep whatever the committer date already was? Why would you want to reset it to the author date? Of course, after you have passed the flag once, any subsequent uses will have the same effect as keeping the committer date, so maybe that's why it's good enough. Maybe it works the way it does for historical reasons (I happened to see that it used to be passed to `git am` back when `git rebase` was based on that).


I do keep the committer date when I intentionally modify some single special commit, but when I'm just churning on changes I don't. In my opinion this date is only useful if the committer and author are also different; for example, when you pull in some random change suggested on a forum, you can record the forum author and date as the author, and yourself and the time of the commit as the committer.

> Wouldn't you want to keep whatever the committer date already was?

Note that when you create a new commit, the committer and author dates start out equal. So the original one is always the author date.

I find this flag useful because it means I can keep rebasing and still reproduce the same commit hashes. This is really useful for asserting that nothing changed when the commit hash is equal, or that something really changed when it is different. Otherwise the commit hash changes for every commit you touched in the rebase, and you constantly need to verify whether there was a change or not.
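As a sketch, the invocation in question (target branch hypothetical):

    # pin committer dates to author dates so repeated rebases reproduce hashes
    git rebase --committer-date-is-author-date main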


I haven't used JJ, so does it work the same in JJ as in Git? Did you introduce it intentionally, or do the committer fields only exist because this is how it is in Git and you had no reason to change it?


Do you mean if jj has separate author and committer timestamps? Yes, we simply followed Git there. I didn't really question it. There has been some discussion about simplifying it to just one timestamp, however. You can probably find that discussion somewhere in our issue tracker if you're curious.

Or do you mean if jj updates the committer timestamp while leaving the author timestamp unchanged? Yes, that's also the same as what Git does.


Thanks, the first was what I asked, but the latter was also nice to know.


I was curious so I searched the repo, the only mentions of this flag boil down to people being confused about there being two dates, and being okay with the current behavior.


Are you talking about Git or JJ?


I'm talking about jj.


Did it sound like an explicit decision from jj, or just something inherited from Git?


The jj git backend happens to update only one of the two timestamps. There's not a lot of talk about the author date on the jj issue tracker.

I did not dig into the history of the code, so I can't say if it was an explicit design decision or just how it was implemented. Since there hasn't been a lot of demand to change anything here, I didn't see any active discussion about the choice.

EDIT: oh I see martin has said 'we just followed git' here: https://news.ycombinator.com/item?id=45599741


I don't hate git, I like it fine and, until recently, used it exclusively on all my projects (I still use it non-exclusively). Here's an article that's written from that viewpoint:

https://www.stavros.io/posts/switch-to-jujutsu-already-a-tut...

That having been said, I didn't hate Subversion either. It was fine.


> I don't hate git

Idk man, the first two paragraphs of the article very much make it sound like you hate git.

> Over the past few years, I’ve been seeing people rave about Jujutsu, and I always wanted to try it, but it never seemed worth the trouble, even though I hate git.


Also:

> I don't hate git

but

> I have my trusty alias, fuckgit

Someone who doesn't hate git would have named this alias quite differently...


I read that more as "aw, fuck it, I'm starting over". Then, given what it's doing, "fuck it" -> "fuckgit" makes sense.

But hey, it's not my alias. I'm just saying that the way I read it didn't suggest hate, just a little cleverness. I can't speak for what the author was thinking.


Fair enough, I'll clarify what I actually hate.


Yeah I definitely hated Subversion, which helped push me to try Git back in the day. Actually, back then I was an `hg` guy. That battle was lost long ago though.

I think you linked to the same post as OP, though?


I wrote the post, so that's a post from the perspective of someone who doesn't hate git :P

I used bzr after SVN, but my larger point is that it's all fine, the question was whether you want to go through some short-term learning for long-term gain, or if you want to keep using what you know. Either is fine, I'm still using vim as my editor, for example.


SVN was not fine. Branching took forever (all the copying). And the space it required... In fact, lots of things took forever on large-ish repos. Remember that everything required the server, and network and disk speeds were slower back then. And a mere commit could destroy your work if you got stuck in a conflict resolution, so you'd have to copy all the files you changed to a backup just in case, then delete them if the resolution went OK, etc.

Was it better than CVS in some way? Sure.

But git is just better in so many ways. Back in the day I used git exclusively with git-svn at a place that was still stuck with SVN and I had a blast, while everyone else didn't. I just never had any of the problems they did.

I'm not entirely sure what pain people speak of with git. I found the transition very natural. And don't come talking to me about the "weird command syntax". Some of that was specifically to be compatible / "intuitive" / what they were used to for people coming from tools like SVN.

Sure, you gotta learn about "the index", understand that everything is local, and that you have an origin and a local copy of all the labels (also sometimes called branches or tags) you can attach to commits. That's about it for the normal, regular use someone would've had with SVN.


Well you can either have a viewpoint of "the current thing I use is fine because I'm used to the warts" or "it's not fine because other things exist".

It can't be that SVN is bad and git is better but also that git is fine even though jj is better.


Except that it has to first be true that jj is better ;)

You start out the article with hate for git without explaining what you actually don't like, then here on HN say "I don't hate git". A command called `fuckgit`? Because you need to re-clone? What are the things you commonly do that require this? I've never encountered it. Maybe you're just too advanced a user for git and jj really is better for you. But for us lowly regular users I really do not see an issue.

Some of the benefits you tout, like "editing a commit and you don't need to commit it yourself"? I'm sorry but I want to be the one in control here. I am the one that says "I'm done here, yes this is the new version of the commit I'm comfortable with". I've specifically forbidden Claude to add, commit, push, etc., for example.

It also breaks your "you need to stash" argument. I don't stash. I just commit if I have something WIP that needs saving while I work on some other emergency. There's no reason not to just commit. In fact I do that all the time to checkpoint work, and I amend commits all the time. My standard commit command is actually `git commit -a --amend`.

Automatic "oplog" of everything Claude did, IDE style: sure, maybe. Though I've yet to see that need arise in practice. Just because I have Claude et. al. now, I don't believe changes should be any bigger than they used to. Nor should my "commit early, commit often, push later" practice change.


> You start out the article with hate for git without explaining what you actually don't like

I start out the article saying I never understood git, and why does it matter what I don't like? That would only matter if I were trying to say that git is bad, but I'm not making a comparison. I just think jj is better-designed, and that you should try it.

> Some of the benefits you tout, like "editing a commit and you don't need to commit it yourself"?

I never said that's a benefit, I just said that's something jj does differently. I `jj commit` when I'm done with some work anyway.

> It also breaks your "you need to stash" argument. I don't stash. I just commit if I have something WIP that needs saving while I work on some other emergency.

In that case, you'll like jj, as it handles all that for you.

Your comment is coming off as a bit defensive, I didn't write my article to attack git. If you like git, keep using it, I prefer jj and I think other people will too. It's hard to get started with because its workflow is different from what we're used to, so I wrote the tutorial to help.


    Your comment is coming off as a bit defensive
Your article is coming off as a bit offensive ;)

    I didn't write my article to attack git [...] I wrote the tutorial to help.
Except you didn't write a tutorial. You wrote an "I hate git and jj is better and if you think otherwise you're wrong" article.

Blue speech bubble with literally the text: "If you don't like Jujutsu, you're wrong". This is text. There's no "tongue in cheek" voice and body language here, even if potentially you meant it that way. But given how the article itself starts, I don't think there was any of that to transport :shrug:

    Needless to say, I just don’t get git
Actually, it does bear saying. And I do think that if you say "everyone that doesn't think jj is better is wrong" you have to explain what you really don't like or get. No, it's not needless, because not everyone has your experience. I really do not understand your pain points unless you explain them, because I've never felt them. Either because I did understand the part you didn't, because I don't need to understand that part to use it well (cutting the decision/knowledge tree in your head is a skill by itself, I've found over the years - sometimes you do have to accept magic! E.g. I don't need to understand exactly how any specific LLM works to use it well), or because I simply never had a need for the kinds of features that trip you up.


> I just said that's something jj does differently.

Except Git doesn't do it differently here. Git only provides an additional way to commit temporary changes; you can still commit them however you like. In fact, a stash is just two commits.


I wonder if there's a parallel universe where people are writing posts about Sapling and getting mercurial users to migrate to it.


Before jj came along (or rather, before it reached my threshold of awareness), I was such a person. Started with git, switched to mercurial, wanted to work against github so tried sapling, loved it and started telling people about it.

I probably would have stuck with Sapling, but my company switched to git and the jj integration was cleaner -- especially being able to run in colocated mode: your project root directory has both .jj/ and .git/ subdirectories, and you can use both jj and git commands without much trouble. If someone gives me a git commit hash, I can just use it directly with jj.

Sapling is good. I'm still fond of it, and in fact still have a few repositories I haven't switched over yet. But being able to do everything with a single tool is useful, plus jj gives me some extra capabilities over sapling (or at least things that I don't know how to do as easily).


Oh, if you’re an ex-hg guy, that makes this easier, then: jj is in many ways a renaissance for hg’s philosophy: https://ahal.ca/blog/2024/jujutsu-mercurial-haven/


I worked with SVN and I hated it. Merging branches and dealing with conflict resolution in SVN was like getting kicked in the balls.


For me the killer feature was distributed version control.

When I was an intern at GenericCorpo we had days we couldn't work because SVN was down. WTF was that?


I don’t hate git either but you’ll meet very few people who will claim its UX is optimal. JJ’s interaction model is much simpler than git’s, and the difficulty I found is that the better you know git, the harder it is to unlearn all its quirks.

