It sounds like you're not asking, "How do I learn the language?" but "How do I know I'm doing it right?"
I think Gustedt's book is superb if you're just trying to learn the language, and it does a good job of presenting the most up-to-date standard. I admire the K&R as much as anyone else, but it's not exactly an up-to-date language resource at this point, and it doesn't really provide any guidance on the design and structure of systems in C (that's not its purpose).
You might have a look at Hanson's C Interfaces and Implementations: Techniques for Creating Reusable Software. That's focused on large projects and APIs, but it will give you a good sense of the "cognitive style" of C.
21st Century C is very opinionated, and it spends a great deal of time talking about tooling, but overall, it's a pretty good orientation to the C ecosystem and more modern idioms and libraries.
I might also put in a plug for Reese's Understanding and Using C Pointers. That sounds a bit narrow, but it's really a book about how you handle -- and think about -- memory in C, and it can be very eye-opening (even for people with a lot of experience with the language).
C forces you to think about things that you seldom have to consider with JavaScript, Python, or Go. And yes: it's an "unsafe" language that can set your hair on fire. But there are advantages to it as well. It's screaming-fast, the library ecosystem is absolutely gigantic, there are decades of others' wisdom and experience upon which to draw, it shows no signs of going away any time soon, and you'll have very little trouble keeping up with changes in the language.
It's also a language that you can actually hold in your head at one time, because there's very little sugar going on. It's certainly possible to write extremely obfuscated code, but in practice, I find that I'm only rarely baffled reading someone else's C code. If I am, it's usually because I don't understand the problem domain, not the language.
As a developer with carpal and cubital tunnel syndrome, I'm extremely excited about the possibility of voice programming. However, when I last tried it, my experience was that it's somewhere between non-existent and impractical.
It was actually the inspiration for my undergraduate research which explored (scratched the surface, really) the use of Lojban[0] as an "interactive metaprogramming language". The benefits of Lojban are that it is significantly more expressive than making sounds that are mapped to vim-like commands via a hacked natural language recognition engine, is isomorphic between text and speech (you can speak mathematical expressions with precision), is phonologically and grammatically unambiguous, and is standardized/has a community -- even if it's a small one.
I still think it's a paradigm shift and would be indescribably more efficient and powerful than traditional code exploration and editing. And I would love to continue, but it's such a massive project and working on something that foreign alone for long enough is really isolating because it's hard to explain. Lojban also has its fair share of problems for this and would need to be further standardized and scoped out for computers, so there's really not a clear path forward in any case.
Regardless, for those _currently_ suffering from hand and arm issues preventing them from coding and/or working, my advice is:
1) don't panic. Go on antidepressants and/or see a therapist if you need to; it's going to take a while to recover
2) rest as much as possible and make sure to get a lot of sleep
3) wear wrist braces to sleep to prevent further damage in the short term (consult a doctor and all that, you want to avoid muscle atrophy so don't use them for too long without starting some kind of muscle strengthening program)
4) invest in proper tools (standing desk + ergonomic keyboard is like magic for me, I can actually type again)
5) gain weight if you're on the low side of normal weight -- this helped my cubital tunnel syndrome quite a bit by giving my ulnar nerve more protection
And finally, don't give up hope; I'm able to work full time and don't wear wrist braces to sleep at all anymore after a little over a year.
These methods are really interesting for high-dimensional PDE (like HJB), but there's a ton of skepticism about the applicability of NN models for solving the more common PDE that arise in physical sciences and engineering.
The comparisons are rarely apples-to-apples, in that standard PDE technology can move to new domains, boundary conditions, materials, etc., without new training phases. If one needs to solve many nearby problems, there are many established techniques for leveraging that similarity. There is active research on using ML to refine these techniques, but it isn't a silver bullet.
Far more exciting, IMO, is to use known methods for representing (reference-frame invariant and entropy-compatible) constitutive relations while training their form from observations of the PDE, and to do so using multiscale modeling in which a fine-scale simulation (e.g., atomistic or grain-resolving for granular/composite media) is used to train/support multiscale constitutive relations. In this approach, the PDEs are still solved by "standard" methods such as finite element or finite volume, and thus can be designed with desired accuracy and exact conservation/compatibility properties and generalize immediately to new domains/boundary conditions, but the trained constitutive models are better able to represent real materials.
Step 1: put 'xrandr --dpi <your actual DPI>' in .xinitrc
Step 2: Use Qt applications (Plasma is a fantastic Qt desktop)
Step 3: Enjoy your reasonably sized everything.
"Scaling" is a broken concept to work around applications assuming 96 DPI (which is considered scale=1). You don't need it if you use programs that actually respect your real DPI. Unfortunately X11 doesn't properly compute DPI settings, even though EDID information generally contains the screen size - I imagine, for fear of breaking stuff.
(You can correct GTK3/GDK applications by setting GDK_DPI_SCALE=<actual dpi / 96>, but in my view it's a sin that you need to do that)
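To make the arithmetic concrete, here's a sketch of computing both values. The panel numbers are hypothetical (a 4K display reported by `xrandr` as 3840 pixels across and 597mm wide); substitute your own from `xrandr`'s output, which lists both the resolution and the physical size.

```shell
# Hypothetical example: xrandr reports "3840x2160 ... 597mm x 336mm"
px_width=3840
mm_width=597

# DPI = horizontal pixels / physical width in inches (25.4 mm per inch)
dpi=$(awk -v px="$px_width" -v mm="$mm_width" \
      'BEGIN { printf "%d", px / (mm / 25.4) + 0.5 }')
echo "xrandr --dpi $dpi"

# The GTK3 workaround from above: scale relative to the assumed 96 DPI
gdk_scale=$(awk -v d="$dpi" 'BEGIN { printf "%.2f", d / 96 }')
echo "GDK_DPI_SCALE=$gdk_scale"
```

For these example numbers that yields 'xrandr --dpi 163' for .xinitrc and GDK_DPI_SCALE=1.70 for the GTK3 workaround.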