Yes, it went commercial. But Conda is still free, and the community built conda-forge to maintain packages. Conda + conda-forge can completely replace Anaconda for most people.
So even though they went commercial, they left some pretty good things behind for the open-source community.
I use Miniforge in a commercial environment and have never seen a package download from the main channel. I'm pretty sure a recipe that did that would be blocked by conda-forge reviewers.
How many of those 9% use Apple Pay? My bet is that it's just a small fraction. People still use Pix and physical credit/debit cards. Google Pay/Apple Pay are far behind.
Considering that all the major banks' credit cards can be added to Apple Pay's wallet, my educated guess is that if a person has an iPhone and a credit card, they use it through Apple Pay most of the time. Even meal voucher cards can now be added to Apple Pay's wallet.
I don't have statistics readily available, but you can search for CADE's (the Brazilian antitrust regulator) inquiry 08700.002893/2025-17 into Apple's refusal to support Pix on Apple Pay and comb through the documents.
> many scenarios where the only option was to pay with PIX
I guess you meant to say the "only option _beyond cash_ was Pix". Most places should accept a passport ID in place of a CPF. But if you found it hard to pay with credit cards, that has nothing to do with Pix...
I followed this rationale in a small project and opted for PostgreSQL pub/sub (LISTEN/NOTIFY) instead of Redis. But I went through so much trouble handling PostgreSQL disconnections correctly that I wonder whether Redis wouldn't have been the better choice.
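For context, the reconnection handling I mean looks roughly like this with psycopg2 (the DSN and channel name are made up):

```python
import select
import time

import psycopg2

DSN = "dbname=app user=app"  # hypothetical connection string
CHANNEL = "events"           # hypothetical notification channel


def listen_forever():
    while True:
        try:
            conn = psycopg2.connect(DSN)
            conn.autocommit = True  # notifications aren't delivered while a transaction is open
            with conn.cursor() as cur:
                cur.execute(f"LISTEN {CHANNEL};")
            while True:
                # Block until the socket is readable, then drain notifications.
                if select.select([conn], [], [], 5.0) == ([], [], []):
                    continue  # timeout, loop again
                conn.poll()
                while conn.notifies:
                    notify = conn.notifies.pop(0)
                    print("got notification:", notify.channel, notify.payload)
        except psycopg2.OperationalError:
            # Connection dropped (failover, idle timeout, network blip):
            # back off and reconnect.
            time.sleep(1)
```

The LISTEN subscription dies with the session, so every reconnect has to re-issue it, and that bookkeeping is most of what made me question the choice.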
IMHO one of the biggest advantages of using HTMX is not having to maintain three layers of test suites (backend, frontend, end-to-end). You just need to test that your endpoints return the right HTML piece. You don't need unit tests for the frontend since you don't have much logic there. Just rely on end-to-end tests as a complementary tool to check the small pieces of logic in the HTMX code.
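As a rough sketch of what that looks like with Django's test client (the URL name, form field, and markup are hypothetical):

```python
from django.test import TestCase
from django.urls import reverse


class TodoPartialTests(TestCase):
    def test_add_todo_returns_list_item_fragment(self):
        # Hypothetical HTMX endpoint that returns only the new <li> fragment.
        response = self.client.post(reverse("todo-add"), {"title": "Buy milk"})

        # assertContains also checks the status code (200 by default).
        self.assertContains(response, "Buy milk")
        self.assertContains(response, 'hx-delete="/todos/')
```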
I’m not sure I’d agree. The hx- attributes encode a lot of functionality, and you want to check that they do the right thing together. You can test that you get the right attributes in a response, but that’s only a proxy for the application doing the right thing, so you most likely still need the end-to-end tests.
The problem is that end-to-end tests are the hardest kind to write. At least if you have a frontend codebase, you can have some tests there and get some confidence that things work at a lower level.
> You just need to test that your endpoints return the right HTML piece. You don't need unit testing the frontend since you don't have that much logic there.
Only using the standard Django test client? That hasn't been my experience when I've had to work with HTMX code. E.g. if you're using HTMX for your sign-up form, it can be completely broken on the happy path and on form errors if the HTMX attributes are wrong, and the standard Django test client won't tell you, so you lose a lot of confidence when making refactors/changes.
That honestly sounds like a downside. Having to verify HTML as an output (in comparison to e.g. JSON) is brittle since it will often change just for presentation reasons. Having a clear and somewhat stable API (which is a boon for testing) is IMHO a strong reason for the SPA model.
> Having to verify HTML as an output (in comparison to e.g. JSON) is brittle since it will often change just for presentation reasons.
For HTML tests, if you target elements by their ARIA role, semantic tag, or by matching the text/label/title the user would see, it should be robust to a lot of presentation changes (and it gives you some accessibility checks too). It's much more robust than selectors like `body > div > div .card > h2`, which are sure to break in tests and slow you down when you make changes later.
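As a sketch of that style with Playwright's Python API (the URL and sign-up flow are hypothetical):

```python
from playwright.sync_api import expect, sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.test/signup")  # hypothetical URL

    # Target elements the way a user perceives them, not by DOM structure.
    page.get_by_label("Email").fill("user@example.test")
    page.get_by_role("button", name="Sign up").click()

    # Assert on the swapped-in fragment by its visible text, not a CSS path.
    expect(page.get_by_text("Check your inbox")).to_be_visible()

    browser.close()
```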
Not sure if this is what you meant, but you can't rely on only the JSON because you can't assume it's been properly transformed into the expected HTML.
In my experience snapshots have largely fallen out of vogue, at least as far as blocking deploys goes. They're usually flaky, change often, are slow to run, and it's way too easy to get false positives, which lulls people into complacency and into regenerating the snapshots as soon as they hit a failure. I'm guilty of that myself: unless I can spot the breakage immediately, I pretty much just regen the snapshot by default, subconsciously, because of how often I've had to do that for false positives.
Then what's to stop you from making any unit test pass by just changing the expectations? The best testing technique in the world can't save us from developer error.
The problem with snapshot tests is that they tend to fail when something is changed.
One property of good tests is that they only fail when something is broken.
Snapshot tests aren't really automated tests, because you have to manually check the outputs to see if failures are genuine. It's a reminder to do manual checking but it's still not an automated process.
> The best testing technique in the world can't save us from developer error.
Sure, but using a testing technique that increases developer error is unwise, and snapshot testing has done that every time I've seen it used, so I don't use it anymore.
Then fix the test so that it fails only when something breaks? Do people not fix flaky, overly broad, or incorrect unit tests? How is a snapshot test any different?
I'm not sure you understand what snapshot testing is. It's a form of testing where you render frontend components and take a "snapshot", storing the rendered output for later; on the next test run, the component is rendered again and compared to the saved snapshot.
Any change that modifies the rendering of any component under test will break snapshot tests. It's literally just a test that says, "Is this render function still the same as it used to be?"
I'm not sure you understand how to capture snapshots at the correct level of granularity. If you have snapshots that are breaking for irrelevant changes, then by definition those changes don't need to be snapshotted, do they? Cut down the snapshot to only the relevant part of the rendered HTML so that when it fails, you know something broke.
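As a rough sketch of what I mean, hand-rolling the snapshot and using BeautifulSoup to cut it down to just the form (URL name, selector, and snapshot path are hypothetical):

```python
from pathlib import Path

from bs4 import BeautifulSoup
from django.test import TestCase
from django.urls import reverse

SNAPSHOT = Path(__file__).parent / "snapshots" / "signup_form.html"


class SignupFormSnapshotTest(TestCase):
    def test_signup_form_matches_snapshot(self):
        response = self.client.get(reverse("signup"))  # hypothetical URL name
        soup = BeautifulSoup(response.content, "html.parser")

        # Snapshot only the form element, not the whole page, so unrelated
        # layout/styling changes elsewhere can't break this test.
        rendered = soup.select_one("form#signup").prettify()

        if not SNAPSHOT.exists():
            # First run records the snapshot; later runs compare against it.
            SNAPSHOT.parent.mkdir(parents=True, exist_ok=True)
            SNAPSHOT.write_text(rendered)

        self.assertEqual(rendered, SNAPSHOT.read_text())
```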
It would still break if you need to modify that output in any way. Need to add a class for styling? The snapshot breaks. Need to add a new hx-attribute? The snapshot breaks. It's not tied to the logic, it's tied to whatever markup you output. You've reduced the surface area of the problem, but not eliminated it.
> If you have snapshots that are breaking for irrelevant changes, then by definition those changes don't need to be snapshotted, do they?
How often? All the time? Or relatively rarely? For most projects, it's the latter. We are not constantly churning the styling.
> Need to add a new hx-attribute? The snapshot breaks. It's not tied to the logic
An `hx-` attribute is logic. That's the point of htmx.
> You've reduced the surface area of the problem, but not eliminated it.
That's a risk of any kind of unit test. You can always have false positives.
> You're so close to getting it.
That tests should be fixed until they're robust? I'm not a fan of the learned helplessness of the 'We couldn't figure out how to do XYZ, therefore XYZ is a bad idea' approach.
Do you have experience with snapshot tests? If not, you should try them out for a bit and see what we mean; you'll quickly run into the situation we described.
If you introduce a new feature and add a new button to the page, for example, the snapshot test will fail (by necessity, since you're changing what's on the page) because the snapshot doesn't match the current version with the new button. So what you have to do is regenerate the snapshot (regenerating it will always make it pass).
The problem, in my experience, is that these kinds of tests are massively unreliable. So you either fiddle with tolerances a lot (most testing libraries let you set a threshold for what constitutes a failure in a snapshot), or you have to manually hunt down and pixel-peep at what might've happened.
As an example of what usually happens: at some middle step of the test an assert fails because some new code causes things to render slightly out of order or delayed in some way. Ultimately the final product is fine, and if it were an intelligent system it would give it a pass. The unit tests pass. Even the integration and E2E tests pass, but the snapshot tests fail because something changed. The chance that this failure is a false positive is much, much higher than the chance of it being legitimate, but let's assume it's legitimate, like the good citizens we are, and try to hunt it down.
So you go through the snapshots one by one, setting breakpoints where you think things might break - note that this process is itself painful, slow and manual, and running snapshot tests on your local machine can often break things because they render differently than they would in CI, which leads to an extra set of headaches. What happens here is that you're testing an imperfect reflection of the actual underlying app, because a lot or most of it will be mocked/hoisted/whatever. So sifting out what is a potential breakage and what is just a quirk of the snapshot test is basically a gamble at best. You spin up the app locally and see that nothing looks out of the ordinary. No errors to be seen. Things look and behave as they should. You have your other tests, and they all pass. So you chalk it up to a false positive, regen the snapshots, and go on with your day.
The next day you add some other thing to the page. Uh oh, snapshots fail again!
Rinse and repeat like this, and after the 50th time sitting there waiting for the snapshot to be captured, you decide there are better ways to spend your limited time on this mortal plane and just say "Fuck it, this one's probably a false positive as well" after taking a two-minute look at the page locally before hitting the recapture command again.
I legitimately don't think I've ever actually seen a non-false positive snapshot test failure, to be honest.
Snapshot tests are even more sensitive to flakiness than E2E tests, but flakiness is still a bug that can (and probably should) be fixed like any other.
As an example - the different browser you have running locally and in CI? That's a bug. The fix is to run the browser in a container - the same container in both environments. You can probably get away without this with an E2E test. Snapshot test? No way.
I've seen people decry and abandon E2E tests for the same reason - simply because they couldn't get flakiness under control and thought it was impossible to do so.
I don't think it's intrinsically impossible, though. I think it's just a hard engineering problem. There are lots of non-obvious sources of flakiness - e.g. unpinned versions, selects without order bys, nondeterministic JavaScript code, etc. - but few that are simply impossible to fix.
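To make the "selects without order bys" one concrete, here's a tiny sketch with the stdlib sqlite3 module (table and data made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("alice",), ("bob",)])

# Flaky: SQL gives no ordering guarantee without ORDER BY, so a test that
# asserts on row order here can pass on one machine and fail on another.
unordered = conn.execute("SELECT name FROM users").fetchall()

# Deterministic: pin the order explicitly before asserting on it.
ordered = conn.execute("SELECT name FROM users ORDER BY name").fetchall()
assert ordered == [("alice",), ("bob",)]
```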
I second that. I used Dash for a while. Although it's nice to have everything wired up for free, the code quickly becomes a mess as soon as you try more complex things (either you put everything in one function and get a big, super-nested structure, or you build a lot of small components and quickly get lost in a sea of small functions).
I'm using a managed Postgres instance from a well-known provider and holy shit, I couldn't believe how slow it is. For small datasets I didn't notice, but when one of the tables reached 100K rows, queries started taking 5-10 seconds (the same query takes 0.5-0.6 seconds on my standard i5 Dell laptop).
I wasn't expecting blazing speed on the lowest tier, but 10x slower is bonkers.
Laptop SSDs are _shockingly_ fast, and getting equivalent speed from something in a datacenter (where you'll want at least two disks) is pretty expensive. It's so annoying.
We've been using it for years. Very simple to set up.