You can observe this trend w/ AIfluencers already:
They create short guides on how to leverage AI tools like ChatGPT to generate income/reach, and all of their typical bullet points boil down to generating low-quality noise:
Examples: let ChatGPT…
1. write blog posts
2. generate scripts for youtube videos
3. create short stories so you can sell them as a book
Painful to see. I would like to think nobody in his right mind is willing to consume these delightful pieces of work. Dead internet theory on the next level.
That's only because the Netflix bean counters end up working like an AI on their end. AI trains on known things and outputs something from that finite set. Netflix bean counters similarly bet on known things or derivative things, and the output reflects that. The difference is that scripts for the unknown things still occasionally get produced in this latter situation, which is exponentially more than can be said if 100% of our content came from AI.
If you just go into the Midjourney Discord you'll see people with absolutely no mechanical artistic talent creating absolutely stunning novel art. The creativity people have there is crazy, and the AI is letting them go from head to canvas in a few seconds.
When I see AI art from those communities, it all looks the same to me.
People tend to overestimate their talent, and get overly attached to their own ideas. So when a tool like Midjourney/Stable Diffusion generates beautiful artwork based on a low-effort prompt from the user, it's comforting to think that they had a big part in it.
The reality is that almost everything those tools make is equally gorgeous/amazing regardless of who uses it, so they're effectively interchangeable. As a tool for personal expression, it has a lot of value because of the impact it can have on the individual using it, but as art it's pretty worthless. A generated AI image could potentially have some cultural impact or value, but not on the merits of "it looks pretty", since they all look like that.
My experience browsing midjourney supports this. It's amazing how quickly you go from "wow" to "whatever". But that's probably cold comfort for a huge number of jobbing illustrators. Paying clients typically don't want art.
We're already past the point of diminishing returns on "human creativity".
And I would argue that the current Netflix catalog (mostly recent productions) is indeed repetitive and stagnant. Originality in TV production is currently in a race to the bottom.
Often if I go job hunting, there's a 25%-250% increase in salary. The 250% is often some tiny startup in a developed nation, which comes with its own costs: no healthcare, dodgy contract, remote, etc. But the idea is that it's easy to just take (or turn down) a range much higher than what you make now.
When AR/VR finally happens, UI developers will have to deal with complexity from a completely different paradigm. For me, React's biggest strength has always been its ability to organize complexity into a manageable order. Combine this with the large pool of developers and the extensive ecosystem, and I think React will be the go-to tool for AR/VR apps. For this reason, I'm super hyped for R3F.
I really don't think React will be the go-to tool for VR; it's based on the DOM and trees of function calls, both of which are hierarchical, and that necessarily means you run into the gorilla-banana problem.
If you have a coffee cup on a table in VR, is that coffee cup a child of the table? How do you move the coffee cup off the table and put it onto another table? Is it now a child of that other table? What about the coffee in the cup? Is that a child of the cup? How do you change properties of the coffee without necessarily accessing the table and the cup?
Developers working on 3D systems have developed much better paradigms than the DOM for dealing with this problem. An Entity-Component-System architecture with "constraints" is the current best solution. In that architecture, you would create a coffee cup "entity" with a mesh "component" with another "constraint" component, constraining that coffee cup to the table (or better yet, mass component acted on by a physics system). Then you can simply remove the constraint component when removing the cup from one table, and re-add the constraint component when adding it to the other table.
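A minimal sketch of that idea in plain TypeScript; the `World`, `Transform`, and `AttachedTo` names are illustrative, not from any particular engine:

```typescript
// Minimal illustrative ECS: entities are ids, components live in maps
// keyed by entity id, and systems iterate over component storage.
type Entity = number;

interface Transform { x: number; y: number; z: number; }
// Constraint component: pins one entity to another at an offset.
interface AttachedTo { parent: Entity; offsetY: number; }

class World {
  private next: Entity = 0;
  transforms = new Map<Entity, Transform>();
  attachments = new Map<Entity, AttachedTo>();
  spawn(): Entity { return this.next++; }
}

// "Constraint system": each frame, snap attached entities to their target.
function constraintSystem(world: World) {
  for (const [entity, attached] of world.attachments) {
    const parentT = world.transforms.get(attached.parent);
    const t = world.transforms.get(entity);
    if (parentT && t) {
      t.x = parentT.x;
      t.z = parentT.z;
      t.y = parentT.y + attached.offsetY;
    }
  }
}

const world = new World();
const table1 = world.spawn();
const table2 = world.spawn();
const cup = world.spawn();
world.transforms.set(table1, { x: 0, y: 1, z: 0 });
world.transforms.set(table2, { x: 5, y: 1, z: 0 });
world.transforms.set(cup, { x: 0, y: 0, z: 0 });
world.attachments.set(cup, { parent: table1, offsetY: 0.1 });
constraintSystem(world); // cup follows table1

// Moving the cup is just swapping the constraint, no reparenting:
world.attachments.set(cup, { parent: table2, offsetY: 0.1 });
constraintSystem(world); // cup now follows table2
```

The point is that the cup never becomes a child of anything; "on the table" is just data that a system interprets each frame.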
Overall, I think web developers are in for some intense learning and paradigm shifts if 3D becomes the norm.
I don't see why an ECS would be incompatible with a DOM tree.
As for the gorilla-banana problem, I would think all objects in a scene would be under the root, with the exception of pieces that make up a thing and rarely separate (wheels on a car, for example).
While not incompatible with ECS, the DOM and this renderer go all-in on the JavaScript event loop. You would have to write your own run loop, which executes the systems on every frame (ideally creating a DAG and executing in parallel where possible), and leave the event loop behind, with all the niceties like `onClick`, to go full ECS. Otherwise you'll create some Frankenstein monster of part ECS, part event loop, part declarative React.
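For illustration, here's roughly what such a run loop looks like; this is a sequential sketch (no DAG, no parallelism), and the `World` type and system names are placeholders:

```typescript
// A hand-rolled run loop: systems are plain functions run every frame in
// a fixed order, replacing React's event-driven updates for the hot path.
type World = Record<string, unknown>; // placeholder for your ECS storage
type System = (world: World, dt: number) => void;

const physicsSystem: System = (world, dt) => { /* integrate bodies */ };
const renderSystem: System = (world, dt) => { /* draw the scene */ };
const systems: System[] = [physicsSystem, renderSystem];

function startLoop(world: World) {
  let last = performance.now();
  const frame = (now: number) => {
    const dt = (now - last) / 1000;
    last = now;
    // Sequential execution; a real engine might build a dependency DAG
    // and run independent systems in parallel.
    for (const system of systems) system(world, dt);
    requestAnimationFrame(frame);
  };
  requestAnimationFrame(frame);
}

startLoop({});
```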
Additionally, you can throw OOP in that mix as well, because Three.js has its own whole OOP-style framework that you're strapping declarative React on top of with this renderer. Reminds me of Jonathan Blow's talk on the end of civilization via endless layers of abstraction[1].
I really think, when it's ready, a Bevy[2]-style system either native or compiled to WASM with WebGPU will be ideal.
And while I'm airing opinions (forgive me), I think writing shaders now is like SQL 30 years ago. Developers left optimizing SQL (difficult, according to them) to database administrators by abstracting it away into ORMs. If history is any indicator, we'll be having the same arguments on Hacker News 30 years from now about 3D frameworks vs writing shaders directly as we're having now about ORMs vs writing SQL directly.
It's not that it's incompatible, it's that when the ECS is the primary tool for organization, a DOM tree (or scenegraph) is merely one way of iterating over the entities - not the way.
This provides tons of benefits: for example, you can decide to iterate over the entities by shader program and gain significant speedups for graphics processing, or maintain components that roughly sort them by their position in world space for physics, culling, or lighting, etc.
For a crude analogy, imagine if Document.querySelectorAll() were a zero-cost abstraction, i.e. it ran as fast as iterating over linear memory. In practice this isn't how it turns out with an ECS, but it's much closer and you can get this kind of performance for the "hot path" kind of queries.
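A toy illustration of that "multiple iteration orders over the same entities" point, with dense arrays standing in for real archetype or sparse-set storage:

```typescript
// The same entities, iterated in whatever order suits each system.
interface RenderItem { entity: number; shaderProgram: number; mesh: string; }

const renderItems: RenderItem[] = [
  { entity: 0, shaderProgram: 2, mesh: "cup" },
  { entity: 1, shaderProgram: 1, mesh: "table" },
  { entity: 2, shaderProgram: 1, mesh: "chair" },
];

// Render system: iterate sorted by shader program, so program binds
// (expensive state changes) are minimized.
const byShader = [...renderItems].sort((a, b) => a.shaderProgram - b.shaderProgram);
for (const item of byShader) {
  // bindProgram(item.shaderProgram); drawMesh(item.mesh); (hypothetical calls)
}

// A physics or culling system would keep its own ordering (e.g. spatially
// sorted); the scenegraph is just one more view, not *the* structure.
```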
To add to the sibling comment, there's another wonderful Rust ECS called shipyard[0] and I helped write a scenegraph for it (which I really need to update, one of these days)[1]
React is not based on the DOM; R3F merely expresses regular Three.js, which works as an object graph. Three is the usual choice for 3D on the web, and if you use it once you'll see that it is also quite natural. There is no conflict between the two, and React certainly doesn't change any rules or APIs; it just builds a graph, which you would normally form imperatively.
React is inspired by the DOM, and they split it out before 1.0 IIRC, but that misses the forest for the trees. The main issue I had is that React, Three.js, and R3F are all hierarchical/tree-like (what you and Three.js are calling a graph). You can, technically, build 3D scenes, but anything non-trivial will be very awkward.
Let's say you're building a game where you want a sphere to stick to whatever player you throw it at. How would you do that with a scene graph/OOP model? It'd be awkward, removing objects from one parent and adding them to another. Even more awkward if it's a complex object and you only want a part of that complex object to stick to the player. ECS + a constraint or physics system does a decent job (not perfect) of handling this in a relatively elegant and performant way.
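For contrast, this is roughly what the scene-graph version of "sticking" looks like in Three.js. `Object3D.attach` does exist and preserves the world transform, but you're still mutating the hierarchy to express what is really just a relationship:

```typescript
import * as THREE from "three";

const scene = new THREE.Scene();
const playerA = new THREE.Object3D();
const playerB = new THREE.Object3D();
const ball = new THREE.Mesh(
  new THREE.SphereGeometry(0.2),
  new THREE.MeshBasicMaterial()
);
scene.add(playerA, playerB);
playerA.add(ball); // ball now moves with playerA

// "Sticking" the ball to the other player means reparenting.
// attach() keeps the world transform while changing parents:
playerB.attach(ball);

// With ECS + constraints you'd instead swap a component on the ball
// and leave the hierarchy alone.
```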
I've used Three.js enough--built my portfolio[1] out of it, and then switched to Babylon when I realized how little I liked Three.js. For the record, I also dislike Babylon.
I have yet to encounter something that shouldn't be expressed as a graph. Three, Babylon, OGL, Blender, glTF, CAD, games: they're all scene aligned. That doesn't seem to be a conflict, since you still use shaders, physics, ECS, and so on.
Could you go into more detail about what you mean by "anything non-trivial"? Is there a real example of something that would not be possible to create in, say, Three.js?
Benefits include complete control at frame and pixel level, being cross-platform (same code runs on web, iOS, macOS, Linux), and having access to third-party C/C++ libraries for 3D graphics.
I have not put a lot of time into learning WebAssembly. But isn't WebGL a JavaScript API? Meaning wouldn't you be going from WebAssembly -> JavaScript engine -> WebGL? I was under the impression WebAssembly had no access to the outside world and could only access the relevant JavaScript APIs. But if it is true you can basically do WebAssembly -> native GL then that would be amazing.
You're correct. WebGL does require extra validation compared to native GL, but it's effectively the same API as OpenGL ES 2.0/3.0 and Emscripten handles the translation from C/C++ for you.
There's some overhead but it's negligible (assuming you're not making overly redundant API calls.)
And that last part is key: for modern high-performance graphics acceleration, the name of the game is "maximum throughput with minimum API interactions."
If your data isn't structured for fast rendering, it doesn't matter much what language you're using; they'll all be too slow.
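One common trick for keeping API interactions down is filtering out redundant state changes before they ever hit the driver. A minimal sketch; real renderers track far more state than this:

```typescript
// Minimal redundant-state filter: skip WebGL calls that wouldn't
// change anything, so only actual state transitions cost API time.
function makeStateCache(gl: WebGLRenderingContext) {
  let currentProgram: WebGLProgram | null = null;
  return {
    useProgram(program: WebGLProgram) {
      if (program !== currentProgram) {
        gl.useProgram(program); // only hit the API on real changes
        currentProgram = program;
      }
    },
  };
}
```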
Emscripten ships a "desktop GL" emulation library [0], which can have quite a bit of overhead. If you want something faster, you can use the native WebGL bindings [1].
The idea is that it does more caching of things like uniform locations and such, so you do very fast in-memory lookups in WASM without hitting the JS API as much.
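The caching idea, sketched in TypeScript; a real implementation would key the cache per shader program, this just shows the shape of it:

```typescript
// gl.getUniformLocation is a comparatively expensive JS/API call,
// so look each location up once and keep it in a Map.
// (A real cache would be per-program; this one assumes a single program.)
const uniformCache = new Map<string, WebGLUniformLocation | null>();

function getLocation(
  gl: WebGLRenderingContext,
  program: WebGLProgram,
  name: string
): WebGLUniformLocation | null {
  let loc = uniformCache.get(name);
  if (loc === undefined) {
    loc = gl.getUniformLocation(program, name); // one-time API hit
    uniformCache.set(name, loc);
  }
  return loc;
}
```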
In the future this will be obsolete since WebGPU has a more optimal API to begin with, and Rust/WASM won't need to go through the JS layer due to "interface types"
WebAssembly has no access to the outside world at the moment, that is correct. It is only able to call (and be called by) JS.
A C++ application compiled via Emscripten ships (a fairly large amount) of JS glue code that exposes all relevant Browser APIs like WebGL, Fetch or other HTML5 stuff to the actual WASM program.
As others commented, for WebGL an additional API translation is applied. If the source targets OpenGL ES 2 (or 3 for WebGL 2), this step has almost no overhead however.
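A stripped-down illustration of what that glue amounts to: plain JS functions handed to the WASM module as imports, forwarding to WebGL. Emscripten's generated glue is far more involved, and `js_clear` here is a made-up import name:

```typescript
// Miniature of what Emscripten's glue does: WebGL calls are exposed to
// the WASM module as imported functions it can call directly.
async function instantiate(bytes: ArrayBuffer, gl: WebGLRenderingContext) {
  const imports = {
    env: {
      // The C side would declare e.g.
      //   extern void js_clear(float r, float g, float b);
      js_clear(r: number, g: number, b: number) {
        gl.clearColor(r, g, b, 1.0);
        gl.clear(gl.COLOR_BUFFER_BIT);
      },
    },
  };
  const { instance } = await WebAssembly.instantiate(bytes, imports);
  return instance; // WASM -> JS glue -> WebGL, as described above
}
```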
React is one of the worst choices for doing something like that.
The underlying abstraction model of having a tree of components and re-rendering only the parts that have changed between renders doesn’t map to the hardware at all, meaning you’ll waste most of the HW performance just on maintaining the abstraction.
You'll also get zero benefit from the third-party libraries: there's nothing in them that can help you with the stuff that matters, like minimizing the number of GPU state transitions or minimizing GPU/CPU syncs.
It will be scenegraphs all over again, and the graphics industry has ditched these long ago in favor of simpler models, for good reasons.
Long story short, the happy path in graphics programming is very narrow and fragile, and you typically want to structure your abstraction around it.
You are arguing against Three.js, not React. R3F reconciles Three.js in the exact way it's getting used: as a graph. This, of course, is also how Blender, glTF, et al. work. If you make a WebGL app on the web you most likely use Three, and all React does is make that a little more ordered, with some additional benefits when it comes to performance, memory, and interop.
Except that for Three.js, it's React introducing complexity rather than improving the organisation of the code.
A simple component with defaults looks neat, but start building a complex scene and JSX gets in the way.
Three.js isn't DOM elements updated in JS. The state of each object in the scene is updated based on more than whether it changed.
Where Three.js lacks abstraction is a component system, in plain JS, to organise applications with decent patterns. Most Three apps are one big blob of code.
Okay yes, thank you for saving me from my drivel. Why would Three.js care about representing a document model, bubbling up events, and so on? If we do this, we do it fresh.
I think they need to take a very serious and hard look at performance before it can go anywhere near VR (where rendering speed and stability are paramount). I'm sure it works for simple things and can handle GUIs fine, but the overhead seems huge currently.
There are many large-scale apps built with it these days. It was initially made for complex use cases, to bring order into the scene graph, and of course to optimize raw rendering performance: https://docs.pmnd.rs/react-three-fiber/advanced/scaling-perf...
> find a position that seems to involve a TON of manual work - automate all of it - and DO NOT TELL anyone
This reminds me of the article "Now That's What I Call a Hacker" [1], where a guy left behind his automation scripts when he switched companies, which revealed some extreme scripts, like:
> hangover.sh - another cron-job that is set to specific dates. Sends automated emails like "not feeling well/gonna work from home" etc. Adds a random "reason" from another predefined array of strings. Fires if there are no interactive sessions on the server at 8:45am.
I'm putting this article and what you just said together. Now I think it's reasonable to believe there are a lot of IT professionals doing this; they are just hidden, because there is no reason to share this kind of work experience, given the competitive side of the industry.
I almost created this myself some time ago, although not specifically for hangovers. I was envisioning a sort of dead man's switch, where if I didn't check in before a certain time it would send an SMS to my manager calling in sick.
The main reason I didn't was that I figured I was more likely to forget to check in (and be forced to use a sick day when I didn't "need" one) than to be incapable of waking up, making the call on whether I was in a workable state, and sending the SMS myself.
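For fun, a toy version of that dead man's switch in TypeScript on Node. `sendSms` is a stand-in stub for whatever SMS provider you'd actually wire up, and the file-touch check-in is just one possible scheme:

```typescript
// Toy dead man's switch: if the check-in file wasn't touched today by
// the deadline, send the "calling in sick" message automatically.
import { statSync } from "node:fs";

// Stand-in for a real SMS provider call (hypothetical).
function sendSms(to: string, body: string): void {
  console.log(`SMS to ${to}: ${body}`);
}

function deadMansSwitch(checkInFile: string, deadlineHour: number) {
  const now = new Date();
  let lastTouch: Date | null = null;
  try {
    lastTouch = statSync(checkInFile).mtime;
  } catch {
    // file missing: never checked in
  }
  const checkedInToday =
    lastTouch !== null && lastTouch.toDateString() === now.toDateString();
  if (now.getHours() >= deadlineHour && !checkedInToday) {
    sendSms("+15551234567", "Not feeling well, working from home today.");
  }
}

// Run from cron around 8:45am; checking in = `touch ~/.checkin`.
deadMansSwitch(`${process.env.HOME}/.checkin`, 8);
```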
I had an insufferable coworker who doubled down on that attitude. If he hadn't thought of something, instead of just saying he could have done it, he'd start rambling about why it was a bad idea, and that he was smart for choosing not to do it. His whole attitude was "I know everything, and if I don't know it, it's because it's not worth knowing". Worst part was that the non-technical management ate it up because he was so confidently wrong, so people got dragged into working on his convoluted solutions that just happened to always match up with his skillset.
I have to deal with a couple of people like this and it's a waking nightmare. I'm trying to figure out if it's even possible to mitigate their damage or successfully negotiate with them, or if I just need to change jobs. This behavior is especially bad when it comes from manager / team lead types.
> The vaccine (...) uses messenger RNA, or mRNA, to carry instructions into the human body for making proteins that prime it to attack a specific virus.
I love how this technology turns a medical challenge into a software problem. Being able to code medicine will open up an affordable way to personalized drugs, instead of the current-day "one size fits all" solutions. What a time to be alive!
But in practice, personalized medicine is impossible to test. We rely upon giving tens of thousands of volunteers medicine ahead of time to prove that a medicine is safe.
While the technology theoretically exists to make and distribute personalized medicine, the ethics and safety questions of doing so remain unanswered.
Personalized medicine has been tested for years in clinical trials. Trials require tens -- not tens-of-thousands -- of volunteers.
Personalized therapies are being used to treat cancer patients right now, and using machine learning to find the right binding site for a particular patient's tumor should present no more ethical questions than giving them NSAIDs.
That's really not how mRNA works. If DNA is the source code of our body, then mRNA is the machine code that exists in L1-instruction cache. (There's another process out there that copies DNA into more readily processable RNA)
The way any of this mRNA stuff works is by throwing instructions into our body to create certain proteins in certain configurations. For COVID19, the vaccine is... as XKCD put it... a set of blueprints to build a "fake Death Star" without any weapons activated.
mRNA further has an "innate safety" mechanism, in that it degrades. That's why our body uses DNA after all: because DNA does not degrade, even though RNA is what's actually executing, so to speak.
So any mRNA medicine will have to be strictly temporary, and get its job done in a limited timeframe. The COVID19 virus gets around this fact by self-replicating: the instructions our cells execute are to create a new COVID19 virus. Like the original "quine", COVID19 (and all viruses) "prints itself" as part of its execution.
-------------
The "vaccine" is a set of blueprints for the COVID19 "spike protein" (and ONLY the spike-protein). Since it is missing all the other parts of COVID19, it cannot self-replicate. Our body then gets trained on recognizing the COVID19 spike protein, and is ready when the real thing attacks our body.
I believe there are also viruses that can insert themselves into the cell's DNA, and there are some articles on how some parts of human DNA are already old virus genome. Viruses are far more dangerous in that regard.
There are both DNA viruses and RNA viruses. RNA viruses are way more common though.
> [DNA] Viruses are far more dangerous in that regard.
Not necessarily. "Corrupt DNA" can be somewhat detected by the body as a cancer cell. Our "Natural Killer" cells then kill those cells.
Cancer / corrupted DNA happens all the time in our bodies. Even healthy bodies (!!). The difference between a cancer-patient and us however, is that a cancer-patient is overrun with cancer-cells.
Our bodies naturally kill off cancer under normal circumstances. Figuring out why cancer / corrupted DNA completely takes over the body is a big mystery, especially because our body is so good at fighting off cancer under normal conditions.
I mean, my point is that "if (race == whatever)" is completely nonsensical from an mRNA perspective.
There's no way for mRNA to scan the rest of your DNA sequence to make an if/else determination. mRNA is just gonna execute once it's in the body.
> Or how about some biological ransomware?
Yeah, that's called a poison and antidote / antitoxin. You don't need mRNA for that. Poison someone's food, and as they lie dying, you can offer them the antidote in exchange for something.
When we say mRNA "executes", that's a cell ingesting mRNA and assembling a protein (polypeptide). For example, the mRNA sequence 'ACU', when 'executed' by a cell, will turn into a Threonine: https://en.wikipedia.org/wiki/Threonine
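To make that concrete, here's a tiny slice of the codon table as a lookup (only a handful of real codons included; the full table has 64 entries):

```typescript
// An mRNA codon (3 bases) maps to one amino acid in the growing protein.
const codonTable: Record<string, string> = {
  ACU: "Threonine",
  ACC: "Threonine",
  ACA: "Threonine",
  ACG: "Threonine",
  AUG: "Methionine (start)",
  UUU: "Phenylalanine",
};

// "Execute" an mRNA strand: read it codon by codon.
function translate(mrna: string): string[] {
  const aminoAcids: string[] = [];
  for (let i = 0; i + 3 <= mrna.length; i += 3) {
    aminoAcids.push(codonTable[mrna.slice(i, i + 3)] ?? "?");
  }
  return aminoAcids;
}

console.log(translate("AUGACUUUU"));
// ["Methionine (start)", "Threonine", "Phenylalanine"]
```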
> There's no way for mRNA to scan the rest of your DNA sequence to make an if/else determination. mRNA is just gonna execute once its in the body.
Of course there is; genes can be turned on/off using transcription factors. Still, the sequences of vaccines are published and easy enough to sequence and analyze, so I don't think something like this could be mass-produced.
A better analogy would be anonymous functions thrown into the global namespace that get called with whatever arguments that may happen to be on the stack at that moment.
> my VSCode autoformats the classes to break lines just fine.
My only pain point with Tailwind was the order of classes. It helps with readability if the classes are always in the same order. I just discovered that eslint-plugin-tailwind [0] solves exactly this. Configuring VSCode to run the code formatter and perform ESLint fixes [1] is just an incredible DX.
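For reference, the relevant VSCode settings look something like this (the exact setup depends on your ESLint configuration; these two keys are standard VSCode/ESLint-extension settings):

```json
{
  "editor.formatOnSave": true,
  "editor.codeActionsOnSave": {
    "source.fixAll.eslint": true
  }
}
```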