Jackevansevo's comments

For the longest time I've been using vim's built-in `compiler` feature with tartansandal/vim-compiler-pytest, combined with tpope/vim-dispatch.
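
Roughly, the moving parts look like this (a minimal sketch; the compiler plugin supplies the makeprg/errorformat wiring, dispatch runs it asynchronously):

  " select the pytest compiler from tartansandal/vim-compiler-pytest;
  " this sets 'makeprg' to pytest and a matching 'errorformat'
  :compiler pytest

  " run the suite asynchronously via tpope/vim-dispatch;
  " failures land in the quickfix list
  :Make

  " jump between failing tests
  :cnext
  :cprevious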


Could you share an example of a workflow using the built in feature to run tests in Vim?


Sure, here's a recorded example: https://www.youtube.com/watch?v=TUeousvp4PQ


Interesting approach! Can you share more about this?


Rather than write something up or link to a bunch of articles, I recorded a quick screen capture: https://www.youtube.com/watch?v=TUeousvp4PQ


> Meanwhile I'm watching a community of mostly young people building and using tools like copilot, cursor, replit, jacob etc and wiring up LLMs into increasingly more complex workflows.

And yet, I don't see much evidence that software quality is improving; if anything it seems to be in rapid decline.


> I don't see much evidence that software quality is improving

Does it matter? Ever since FORTRAN and COBOL made programming easier for the unwashed masses, people have argued that all these 'noobs' entering the field is leading to software quality declining. I'm seeing novice developers in all kinds of fields happily solving complex real world problems and making themselves more productive using these tools. They're solving problems that only a few years ago would require an expensive team of developers and ML-experts to pull off. Is the software a great feat of high quality software engineering? Of course not. But it exists and it works. The alternative to them kludging something together with LLMs and OpenAI API calls isn't high quality software written by a team of experienced software engineers, it is not having the software.


Even if that were true (and I'd challenge that assumption[0]), there's no dichotomy here.

Software quality, for the most part, is a cost center, and as such will always be kept to the minimum bearable.

As the civil engineering saying goes, any fool can build a bridge that stands; it takes an engineer to build a bridge that barely stands.

And anyway, all of those concerns are orthogonal to the tooling used, in this case LLMs.

[0] Things we now take for granted, such as automated testing, safer languages, CI/CD, etc., make for far better software than when we used to roll our own crypto in C.


Author here: I'm super familiar with this kind of find and replace syntax inside vim or with sed. Usually it works great!

But in this specific situation it was tricky to handle matches spanning multiple lines while also preventing accidental renames.


For those tricky situations, there's "sledgehammer and review" and the second-order git-diff trick:

https://blog.moertel.com/posts/2013-02-18-git-second-order-d...
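
The gist of sledgehammer-and-review, as a sketch (hypothetical rename):

  # sledgehammer: apply the blunt mechanical rewrite and commit it unreviewed
  git grep -lz 'old_name' | xargs -0 sed -i 's/\bold_name\b/new_name/g'
  git commit -am 'mechanical rename'

  # review: hand-revert the misfires in a follow-up commit; its diff is,
  # in effect, a diff of the intended change against the mechanical one
  git commit -am 'fix rename misfires'
  git show HEAD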


I realise that, and I like the article. I was trying to convey in my response that devs should have these things in their toolkit, not that you "did the wrong thing"[1] somehow by using treesitter for this.

[1] like that's even possible in this situation


This is super cool! I wish I'd known about this.


Author here: I'm not aware of any IDE that can do this specific refactor.


PyCharm understands pytest fixtures and if this is really just about a single fixture called "database", it takes 3 seconds to do this refactoring by just renaming it.


To add some balance to everyone slating Jinja in the comments, I've personally found it great to use.

Sure, you CAN write unmaintainable business-logic spaghetti in your templates; that doesn't mean you SHOULD (most criticism appears to come from this angle).
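
For instance, keeping the logic in Python and handing the template plain values to loop over (a minimal sketch with a hypothetical invoice example):

  # compute the "business logic" (which invoices are overdue) before
  # rendering, rather than filtering inside the template
  from jinja2 import Environment

  env = Environment(autoescape=True)
  template = env.from_string(
      "<h1>{{ title }}</h1>"
      "<ul>{% for name in overdue %}<li>{{ name }}</li>{% endfor %}</ul>"
  )

  invoices = [{"name": "ACME", "days_late": 12}, {"name": "Initech", "days_late": 0}]
  overdue = [i["name"] for i in invoices if i["days_late"] > 0]

  print(template.render(title="Overdue invoices", overdue=overdue))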


Mozilla seems forever determined to do anything but actually improve their core product.

I know it's opt-in, but nobody is going to switch to a browser because they ship this kinda stuff.


The defaults it ships out of the box make the shell actually usable. Unsure I could ever go back to a regular bash/zsh prompt.

A lot of people will tell you this is slow and that you've got to use X, Y, Z instead. If you're new, I'd strongly recommend just sticking with this; it's much easier to configure.


I see a lot of negativity in this thread. Personally I think flatpaks make it super easy to use a rock solid stable distribution as your base OS, and then run the latest and greatest software on top.

In years gone by, if I wanted the latest versions of software I'd have to use an unstable rolling-release distro. Now I can just use Debian stable and essentially get the exact same experience.


Agreed. Just look at the Steam Deck: flatpak works great as a distro-independent way to install apps.

Flatpak supports multiple remotes (repos), everything is open source, and apps are partially sandboxed.

Locking down network permissions with Flatseal [1] is great for situations like PolyMC, where a maintainer's gone rogue, without having to worry about leaking data (the same can be done from the CLI, sketched below).

[1] https://flathub.org/apps/com.github.tchx84.Flatseal
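
Flatseal is a GUI over the same overrides flatpak exposes on the command line; e.g. (app id from memory, adjust as needed):

  # cut off network access for a single app, per-user, no root needed
  flatpak override --user --unshare=network org.polymc.PolyMC

  # check what the app is now allowed to do
  flatpak info --show-permissions org.polymc.PolyMC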


Although SteamOS is Arch-based, which is an "unstable" rolling release.

Personally I like flatpak because you don't need root (or a writable /) to install software.


Wait, so you can use your Nvidia GPU with Debian stable? The kernel was outdated last time I checked.

Flatpaks solve this?


Debian makes newer kernels available for Stable via Backports. I believe 6.5 is currently the latest available.
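
For reference, enabling backports on bookworm is roughly:

  # add the backports repo, then pull the newer kernel from it
  echo 'deb http://deb.debian.org/debian bookworm-backports main' \
    | sudo tee /etc/apt/sources.list.d/backports.list
  sudo apt update
  sudo apt install -t bookworm-backports linux-image-amd64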

There are also Nvidia flatpak driver packages available that install to match the driver version on the host.


I never mentioned anything about Nvidia GPUs...


At every single company I've ever worked at, the company-issued laptop has been significantly faster than the machines provisioned for CI (e.g. an M1 Mac vs GitHub Actions free-tier runners). Consequently I don't usually push code without running the tests locally; it's such a faster feedback loop.

I've always wondered if it would be possible to design some proof-of-work scheme where you could hash your filesystem + test artifacts to verify tests passed for a specific change (rough sketch below).

FWIW, in years of development I've never had an issue where "it works on my machine" didn't equate to the exact same result in CI.
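
Something like this is what I have in mind (a purely hypothetical scheme; it attests a passing run, it doesn't stop a dishonest client from faking the digest):

  import hashlib
  import subprocess

  def attest(test_cmd: list[str]) -> str:
      # hash of the exact tree being tested (won't cover uncommitted changes)
      tree = subprocess.check_output(["git", "rev-parse", "HEAD^{tree}"]).strip()
      result = subprocess.run(test_cmd, capture_output=True)
      result.check_returncode()  # only attest passing runs
      # bind the test output to the tree; could live in a signed git note
      return hashlib.sha256(tree + result.stdout).hexdigest()

  print(attest(["pytest", "-q"]))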


I've seen pieces of this, without the actual proof of work but with cryptographically hashed inputs and commands, such that your test run can just be a cache hit.

https://bazel.build/remote/caching
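
Configuration-wise it's just a couple of flags in .bazelrc (the cache URL is illustrative):

  build --remote_cache=https://cache.example.com
  build --remote_upload_local_results=true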


Maybe, but one important reason to have it done externally is to confirm it works elsewhere.


One bug that's bitten me a time or two is the rather annoying case-preserving but case-insensitive macOS filesystem. This can mean a link works locally but not when e.g. deployed to a Linux server.
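
Easy to demonstrate (assuming the default case-insensitive APFS volume):

  import pathlib

  pathlib.Path("README.md").write_text("hello")
  # True on a default macOS volume, False on Linux ext4
  print(pathlib.Path("readme.md").exists())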


It's a valid reason to have stuff run in CI (i.e. a consistent environment). But for my line of work I can't think of a single scenario where the architecture/platform has ever caused issues in tests; typically it's caught at build time or when resolving dependencies/packages.


Forgetting to commit a file is pretty common. Non-existent folder, not updating out-of-tree config, etc.


Have an easy way to run your tests in a container. Then you don't need to worry, unless you are deploying on strange hardware.
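
e.g. something along these lines (image and tag names are illustrative, and the Dockerfile is assumed to install your test deps):

  docker build -t myapp-test .
  docker run --rm myapp-test pytest -q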


What kind of software do you develop? In my area, running all the tests on my laptop would take days, and I frequently have issues where the tests pass locally but not in CI.


Backend webdev in Python, mostly worked for startups. Nothing crazy complex.


A few jobs ago, we didn't bother with a separate CI server, for exactly this reason. We just had the output of the local test run appended to our commit messages.


pre-commit more or less does this. When you install it locally as a git hook, it will only run against the changes being committed.
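
A minimal .pre-commit-config.yaml sketch (runs pytest against the staged Python files):

  repos:
    - repo: local
      hooks:
        - id: pytest
          name: pytest
          entry: pytest
          language: system
          types: [python]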


But is it smart enough to go and test transitive deps of the code you've made changes to?

I've been impressed by Pants (Python build tooling), which manages this really well: https://www.pantsbuild.org/docs/advanced-target-selection#ru...
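
e.g. (Pants 2.x; the exact flag spelling has changed between versions):

  ./pants --changed-since=origin/main --changed-dependees=transitive test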


I've always liked the theory of this but not the implementation. I'm a big fan of squash merges, so my branches are a mess of commits that each may not even build, let alone pass tests. If I had to run tests on each commit it would slow things down significantly for little benefit.

I wish it were more nuanced.


I use pre-commit and have a git alias for those wip commits.

  cmnv = commit --no-verify

