The easiest alternative is using a WHERE clause and filtering by an ID range, e.g. "WHERE id BETWEEN 1000 AND 1200". But this introduces a ton of limitations on how you can sort and filter, so the general advice against using LIMIT and OFFSET comes with plenty of caveats.
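A common name for this family of alternatives is keyset (or cursor) pagination: instead of OFFSET, you remember the last-seen sort key and ask for rows after it. A rough sketch of the idea in plain JavaScript, simulating the query over an in-memory array (table and column names are invented):

```javascript
// Keyset pagination: remember the last-seen sort key instead of using OFFSET.
// The SQL equivalent would be something like:
//   SELECT * FROM items WHERE id > ? ORDER BY id LIMIT ?
function keysetPage(rows, lastSeenId, pageSize) {
  return rows
    .filter((r) => r.id > lastSeenId)   // WHERE id > ?
    .sort((a, b) => a.id - b.id)        // ORDER BY id
    .slice(0, pageSize);                // LIMIT ?
}

const rows = [{ id: 3 }, { id: 1 }, { id: 7 }, { id: 5 }];
const page1 = keysetPage(rows, 0, 2);                         // ids 1, 3
const page2 = keysetPage(rows, page1[page1.length - 1].id, 2); // ids 5, 7
```

Unlike OFFSET, the database can seek straight to the cursor via an index, but the same caveat applies: the sort key has to be unique (or made unique with a tiebreaker column), which is part of why the advice has so many caveats.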
I mean Maya from Indian philosophy: the idea that we live in an illusion that makes it difficult for us to recognize the true essence of the world and things.
This seems like an outrageous statement on its face. They probably have React on their resume because that is where the job market has drifted. No one is getting hired these days, even at old-school Fortune 500 type companies, for listing "JavaServer Pages" or "Jinja SSR templates" on their resume for a frontend position.
That was my initial reaction too. Then I realized that every web UI rewrite I've seen was led by developer evangelism, never by management. Developers update their resumes and HR updates hiring reqs accordingly.
It's hard to see this as anything other than a developer-inflicted problem, except that there are probably three groups of developers affected by it:
a) true-believer evangelists
b) resume-builders
c) innocent bystanders
There's no way to tell from the presence of "Framework X" on a resume where a developer falls between bystander and evangelist. Probably the CTO's point is that they don't like "Framework X", and that the people in c) lacked the judgment and decisiveness to push back?!
Anyway, funny to think about, especially in comparison to the playbook for changing the back-end frameworks.
Isn't the point of having a PM to bridge the gap between leadership, who know the business domain, and IT, who know how to map general business needs onto a technical solution? Of course, that would require having a PM to begin with, instead of expecting your developers to become subject matter experts in your obscure domain.
The article actually argues the opposite. Developers should move their focus to integration / "real-world" tests. The major summary bullet point being:
"Aim for the highest level of integration while maintaining reasonable speed and cost"
My experience mirrors the author's. In any "real" business application, the unit tests end up mocking so many dependencies that changes become a chore, in many cases causing colleagues to skip certain obvious refactors because the thought of updating 300 unit tests is out of the question. I've found much better success testing at the integration level. And to be clear, this means writing tests inside the same project that run against a database. They should run as part of your build, both locally and in CI. The holy grail is probably writing all your business logic inside pure functions, and then unit testing those, while integration testing the outer layers for happy and error paths. But good luck trying to get your coworkers to think in pure functions.
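As a sketch of that holy-grail split (all names and the pricing rule are invented for illustration): the decision lives in a pure function you unit test, and the I/O lives in a thin shell you integration test against a real database:

```javascript
// Pure business logic: no database, no clock, no framework - trivially unit-testable.
// Hypothetical rule: orders of 10+ items get a bulk discount.
function invoiceTotal(lines, bulkThreshold = 10, discount = 0.5) {
  const total = lines.reduce((sum, l) => sum + l.qty * l.unitPrice, 0);
  const qty = lines.reduce((sum, l) => sum + l.qty, 0);
  return qty >= bulkThreshold ? total * (1 - discount) : total;
}

// Thin impure shell: this is the part the integration test exercises
// against a real database. `db` is a hypothetical data-access object.
async function billCustomer(db, customerId) {
  const lines = await db.fetchOpenLines(customerId);
  const total = invoiceTotal(lines);
  await db.saveInvoice(customerId, total);
  return total;
}
```

The pure function needs no mocks at all, and the shell is small enough that a handful of integration tests cover its happy and error paths.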
> The holy grail is probably writing all your business logic inside pure functions, and then unit testing those, while integration testing the outer layers for happy and error paths. But good luck trying to get your coworkers to think in pure functions.
I've come to a similar conclusion. Functions don't necessarily have to be pure in the academic sense, though; the more the business logic is decoupled from dependency injection and the less it relies on some framework, the better.
It makes testing a lot easier, but also code reuse. I've just been writing a one-off migration script where I could simply plug in parts of the core business logic. It would have been very annoying if that was relying on Angular, NestJS or whatever.
I've had the same experience. Suboptimal code isn't refactored because of the test code overhead, or, much worse, the tests on that same subpar code somehow morph into a perceived "gold standard" for how that code should work.
I avoid tests (aside from hands-on end user testing) as much as possible, actually, since they rarely seem to tell you anything you didn't already know.
> in many cases causing colleagues to skip certain obvious refactors because the thought of updating 300 unit tests is out of the question.
Good! They shouldn't do the refactor.
Because "obvious" refactors often introduce bugs (e.g. copy/paste errors), and if developers can't be bothered to write tests to catch them, they're going to screw over the other team members and users who will be forced to deal with their bugs in production.
> The holy grail is probably writing all your business logic inside pure functions, and then unit testing those, while integration testing the outer layers for happy and error paths.
So settle for half a loaf.
Write all the easy unit tests first. The coverage will be very incomplete, but something is better than nothing.
> Because "obvious" refactors often introduce bugs (e.g. copy/paste errors), and if developers can't be bothered to write tests to catch them, they're going to screw over the other team members and users who will be forced to deal with their bugs in production.
In my opinion, useful tests should be able to survive a refactor. That is the only sane way I've ever done refactoring.
If I'm doing a large refactor on a project and there are no tests, or if the tests will not pass after the refactor, the first thing I do is write tests at a level that will pass both before and after refactoring.
Rewriting tests during refactoring doesn't protect against regressions, in my experience.
Unit tests which can survive a refactor are a nice-to-have.
I would not rule out a refactor merely because I'd have to refactor some unit tests too. That's just part of the cost benefit analysis.
> Rewriting tests during refactoring doesn't protect against regressions, in my experience.
Your experience is completely at odds with mine. Every time I change code, there is the possibility for simple errors such as copy/paste mistakes. Trivial, cheap-to-write unit tests have saved me time and again from having to debug something down the line.
Overconfident devs who act as though they're above making such simple mistakes make for bad team members.
Tests that will survive a refactor are the most important tests to have.
The other tests are, at best, a false sense of security and often an active detriment that slows down future development. They might sometimes catch actual mistakes, but just as often they fail when nothing is broken, leading to the tests not being trusted and broken tests being updated even when something was actually broken.
I know where you're coming from. This is the classic argument for limiting unit tests to black-box testing of public APIs exclusively, avoiding clear-box testing altogether.
I agree that it's possible to write absolutely wretched fragile clear-box tests. And I agree that if you have a black-box test and a clear-box test which provide equal validation of functionality, the black-box test is superior because it will survive a refactor.
I generally dislike absolutist rules of any kind when it comes to unit testing and prefer to think of things in terms of ROI. Sometimes you can add a lot of value with a clear-box test because the functionality is impossible to write a black-box test for without a ton of extra work and time.
But sometimes you may be in an environment where absolutist rules are the only way to go.
I understand what the article is arguing. I agree with it, but think it's idealistic. If swaths of your code are a mess, integration testing is super painful. You can't easily add it until you clean up the mess, so the other forms of testing are more practical more often in my experience. If you get to a point where your code isn't a mess, I'd agree that you should start introducing meaningful integration tests.
I think this is just one of those cases where there is a context-sensitive strategy to testing. It depends completely on the cleanliness of your code and experience working with it.
I'm working on a codebase that evolved at the same pace as React, but without any thought for idiomatic principles. As a result, you have class-based components, purely functional components, hook-based components, HOCs, Redux state passed in through the older functional way, Redux state passed in through an HOC, styled-components, components styled with traditional CSS, and anything else you can think of that was in vogue at some point. It's a mediocre codebase that generally works as advertised, but the performance is trash due to misuse of styled-components, and the state management is a nightmare to traverse. I think this is the reality of a lot of 3-5 year old React frontends, and it has definitely soured my opinion on React / Redux a bit. I'm not sure if this affects other frontend frameworks as much, but it does seem like React was always pushing for a "new way to do things" every year.
From here, your problem sounds like a broken code review process or missing/unclear style guides in teams/company.
> I'm not sure if this affects other frontend frameworks as much, but it does seem like React was always pushing for a "new way to do things" every year.
In my experience, it purely depends on how you regulate the code internally. You can always find developers reaching for something shiny every week, even when you've already solved the problem the shiny thing solves a hundred times in your codebase. I had the same problem with some random integration of Blazor recently in a .NET codebase. Others had the same problem (before my time, apparently) integrating PrimeFaces into a JSF codebase.
It may be amazing tech, but do you really need it if you can KISS without it?
I believe React is inherently terrible, and not just for the reason you give. I think all libraries that try to abstract an underlying technology into something it is not, and then also have to provide escape hatches because the user actually has to know how the underlying stuff works, are inherently broken. I also think state-based reactivity is a terrible way to reason about things: sometimes you don't want a certain part of the state to participate in the reactivity, and at another time you do, so you have to add another state variable to say when it is OK for the first state variable to start participating. It just isn't a good way to create algorithms.
The whole idea of React performing a bunch of tasks in the background and updating everything when it feels like it is just a bunch of silly magic. In fact, my State of JavaScript 2021 answers would be pretty damning, but because of the libraries used on top of JavaScript (like React and TypeScript), not because of JavaScript itself.
React presents a simple view of the world, but in doing so, you now have no idea what is actually happening, and you need to know that. This isn't like an array in JavaScript, which is an abstraction over an array in C that needs memory allocated and new arrays created and so on. You don't need to know how that works, other than that the JavaScript version is a bit slower, and you accept that. You do need to know how all of the stuff React hides actually works to create your program.
In terms of hooks, there are so many "rules". Instead of seeing an API, the API is moved into the documentation and isn't visible in the code until run-time, when you get one of the many errors like not adding a dependency to a hook, using a hook outside of some specific area, and so on. It isn't JavaScript at all, so you no longer have the tools of the language freely available to make your life easier.
I really disagree with this, plus I have to say, it doesn't sound like you are offering an alternative.
1. Escape hatches are good. Designing an API is hard, and supporting every use case within that API is harder. Providing a way to make sure you don't get backed into a corner means you can continue building things.
2. State updates -> View updates seems perfectly reasonable to me. I don't think I've ever wanted a view to use a stale piece of state in one section and an up-to-date piece of state in another.
3. Reconciliation is just React's way of saying "we are trying to make things fast." I generally don't have to worry about how this happens while I'm building things: I can just build things. A library that lets me focus on business logic rather than boilerplate or implementation details is one that I want to continue using.
4. "Thinking in React" I put into two buckets: The first is easier to grasp, in how to reason about state (propagating down in one direction) and how to keep things in "React" land (don't manipulate the DOM directly, React won't "know" about it). The 2nd is in the API, and I'll admit does take some time to get used to. Lifecycle methods I think can be learned faster, but knowing the lifecycles around hooks also can be learned, of course. Plus, the abstraction and co-location benefits of hooks outweigh the learning curve IMO.
5. The [rules of hooks][1] that React provides are just two items, hardly overwhelming. The actual API of hooks (e.g. the need for a dependency array, returning a cleanup function from useEffect to run on unmount, etc.) is larger and takes longer to fully grasp, but being able to have a linter catch a large chunk of these is a good thing in my book. Plus, no library is without its footguns. In the jQuery days there were plenty of times I'd forget to remove a listener, create race conditions in ajax stuff, etc. Learning a library's API is just that: its API. You are writing in JavaScript, but of course no one is saying "the idea of dependency arrays is 'JavaScript'," although that concept definitely appears in other computer science / programming fields.
> A library that lets me focus on business logic rather than boilerplate or implementation details is one that I want to continue using.
React is the exact opposite of that. Endless boilerplate, abstractions and unwritten rules. As the reality of large projects sets in, they look nothing like the starter examples.
I really enjoy the Lit model. It’s quite a bit faster than React, it is a tiny bit of syntactic sugar on top of browser APIs. It’s powering Photoshop (https://web.dev/ps-on-the-web/) and a future release of YouTube I believe.
Lit is just react with web components. It uses all the same magic, and a whole bunch of documentation is needed (is this state reflective, reactive, blah blah). It does the same "let me figure out updates in the background and you don't worry about it" as React, just without a virtual DOM.
>it doesn't sound like you are offering an alternative.
Actually, I created a library, github.com/thebinarysearchtree/artwork, which I use but haven't documented for public use. You can probably see some earlier versions of actual components in the history with the initial commit. I think I created a login component or something. The essential concept behind it is that HTML is no longer necessary in a component world, where everything is broken down into little pieces and spread out in between loops and so on. Removing HTML, having access to the live DOM elements, and using modern JavaScript and DOM API features is now a better alternative than all of these ridiculous libraries that are just variations of React or Vue, or insane with their own compilers and virtual DOMs and god knows what else.
>State updates -> View updates
state updates -> query updates
state updates -> url / history updates
> A library that lets me focus on business logic rather than boilerplate or implementation details is one that I want to continue using.
From my experience, these implementation details are important. The library I use is less code than React and shows the implementation details so you know what is happening.
>The [rules of hooks][1] that React provides is just two items
That definitely isn't correct. It is missing the warnings about dependencies, where async methods can run, and a lot more. There is a list of eslint warnings somewhere. There are a lot of rules for React and hooks and the different types of hooks, and they are all in the documentation, not visible in the code. The order of your setState calls is important, you can't use async in certain places, the state has to be assumed to be stale unless you do setState(a => a + 1), and so on. It is quite frankly ridiculous. There is so much happening that is all hidden in magic.
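To make the "hidden in the docs" point concrete, here is a toy re-implementation of useState (emphatically not React's actual code) that shows where the famous call-order rule comes from: state lives in an array indexed by call order, so calling a hook conditionally shifts every later slot.

```javascript
// Toy useState: state lives in slots indexed by call order.
// This is exactly why hooks must run in the same order on every render.
const slots = [];
let cursor = 0;

function useState(initial) {
  const i = cursor++;
  if (slots[i] === undefined) slots[i] = initial;
  const setState = (v) => { slots[i] = v; };
  return [slots[i], setState];
}

function render(component) {
  cursor = 0; // reset the slot cursor before each render
  return component();
}

function Counter() {
  const [count, setCount] = useState(0);
  const [label] = useState("clicks");
  setCount(count + 1); // queue an update for the next render
  return `${label}: ${count}`;
}

const first = render(Counter);  // "clicks: 0"
const second = render(Counter); // "clicks: 1" - state survives because call order is stable
```

If Counter ever skipped its first useState call, the second call would read slot 0 and get the count instead of the label. None of that mechanism is visible in component code; it only surfaces as a documented rule plus a lint warning.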
Bluntly, I think it is all complete garbage that is more complex than it should be and there are now so many people writing incompatible React instead of standard JavaScript that works for everybody. I just find it hard to believe that there is anybody who has written a large project in React and has come away with any conclusion other than that React is complete and utter garbage. It is a failure and it will not stand the test of time. The only people who could think otherwise: have some kind of cocoon set up by a large tech company and they basically just write demo code all day, or they haven't worked with React long enough.
My library, which I have used to rewrite a large application with, is less code, faster, and easier to understand. I proved that React is complete garbage to myself by doing that.
You could say this of absolutely anything. Let's use assembler then so we know what we're doing. Oh no, wait, let's write the 0s and the 1s ourselves so we understand what the processor is doing. Even better, let's just put some voltage on some transistors so we understand the flow of the electricity.
At some point you need to accept that the underlying stuff just works and build on top of existing layers. That's what DHH calls "conceptual compression", I think, and that's also how science works. Building everything from scratch so that you understand it doesn't work, especially in a business context, and usually ends up in worse home-made frameworks.
React's blessing and curse is that it is so (relatively) simple. It holds only one opinion, that UI should be a pure function of input/state. It leaves everything else up to "you", which is where the problem sets in.
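That one opinion fits in a couple of lines. A minimal sketch, rendering to a plain string rather than a real DOM so it stays self-contained:

```javascript
// The core React idea, minus React: the view is a pure function of state.
// Rendering the same state always produces the same markup.
const TodoList = (state) =>
  `<ul>${state.todos.map((t) => `<li>${t}</li>`).join("")}</ul>`;

const markup = TodoList({ todos: ["ship", "test"] });
// markup === "<ul><li>ship</li><li>test</li></ul>"
```

Everything React adds on top (reconciliation, hooks, effects) is machinery for re-running that function efficiently when state changes, and that machinery is where the "everything else is up to you" problem sets in.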
I don't think I'll be looking to work in a React codebase again. I don't know what was wrong with class-based components, especially the lifecycle methods; they were so easy to understand/read. But almost every React codebase is now a combination of classes/hooks or 100% hooks, which imo is so messy/unreadable...
When I'm looking for a job change, I'll definitely look for Angular-only jobs (or even better, backend only).
> I'm not sure if this affects other frontend frameworks as much
Depends. Angular has been the same old solid brick you can toss at your enemies since 2017 or so. Vue has had at least one major breaking release, but there's a migration path, so it's not impossible to transition (although I know of one instance where the team opted not to do it because of uncertain estimates).
But none of them have this goldfish-memory-like attraction to the new and shiny.
I think I agree with you as well, although it is hard for me to picture exactly how you've structured your dependency graph. In any case, extracting the business logic into some sort of static method / class has definitely been one of the only useful things I can carry across projects that works in nearly all use cases. It also makes unit testing the actual business logic extremely easy. That said, you can end up with a static method that takes 20 parameters, which is always fun. But in those cases, you are left dealing with complexity that is intrinsic to the business, rather than complexity introduced through some bad architectural decision, so at least it is isolated.
> That said, you can end up with a static method that takes 20 parameters, which is always fun.
I used to get upset about this, but now I embrace it. If a business method requires certain things to operate, then modeling those things as arguments to the method is totally reasonable. Some business is complicated and messy so we would expect more arguments to be involved. Trying to sweep reality under the rug just makes things 10x harder elsewhere.
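One way to keep the arguments explicit without a 20-slot signature is a single parameter object (the pricing rule and every name here are invented for illustration):

```javascript
// Hypothetical business rule with many inputs, grouped into one params object.
// Every input is still explicit - nothing is smuggled in via globals or DI.
function quotePremium({ age, vehicleValue, priorClaims, region, discountRate }) {
  const base = vehicleValue / 20;                          // 5% of vehicle value
  const riskLoad = priorClaims * 100 + (age < 25 ? 250 : 0);
  const regional = region === "urban" ? 1.2 : 1.0;
  return (base + riskLoad) * regional * (1 - discountRate);
}

const premium = quotePremium({
  age: 30,
  vehicleValue: 20000,
  priorClaims: 1,
  region: "rural",
  discountRate: 0,
});
// base 1000 + riskLoad 100, rural multiplier 1.0 -> 1100
```

Call sites become self-documenting (every argument is named), and adding a 21st input doesn't force an argument-order change through every caller, while the function itself stays pure and unit-testable.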
I've tried this now on my 2019 MacBook Pro and my 2018 ThinkPad T480 running Linux. On the Mac, I can't say I really noticed a difference. Also, there were quite a few font rendering issues that were resolved with whatever patches JetBrains has applied. On Linux, certain actions are definitely faster. For instance, Ctrl+Tab to either quickly switch tabs or bring up the tab switcher menu is now instantaneous, whereas beforehand there was sometimes a pretty long delay.