It's great for workflows with clear points in the logic where things should happen, and it provides more sophisticated (i.e. powerful but harder to master) tools like Fibers and Streams, which let you reason about the failure cases of reactive asynchronous operations. In many cases it offers a clear path out of callback hell that is more reliable than promises and async/await.
However, while the Effect-ts docs are getting better and may be OK for people with a good knowledge of functional programming and TypeScript, they are nowhere near the quality they need to be for those who don't have that background. People looking for examples online will get frustrated because Effect API churn over the past three years has made many old posts and articles obsolete. Old GitHub repos won't work out of the box. And you had better be comfortable with codemods if you haven't frozen your Effect-ts version.
Fortunately the Effect-ts Discord is full of friendly, helpful people, and the Effect team provides high-quality assistance to anyone who asks. It makes me sad that this treasure trove of information is trapped in Discord, where search is of little value.
A good book, or a collection of high-quality examples of how to use Effect-ts with de facto standard frameworks like React, could help its adoption grow significantly.
Do you know if people have tried distributing execution across multiple machines using Effect? That's one of the big advantages of fully immutable code execution imo, and I'm really interested if anyone has succeeded in leveraging Effect for that purpose.
I don't believe anyone has done that for Effect-ts yet in the way I think you mean, i.e. something that would give effect execution a kind of location transparency.
That said, the person who built the ZIO project that inspired Effect-ts is now working on an exciting project called Golem Cloud¹, which aims to provide durable, reliable, location-transparent execution of Wasm programs.
Mike Stonebraker's DBOS² looks to provide something similar for Typescript.
I really just wish groups would train a Llama 3 model and have AI-assisted answers pulled directly from source code. Like if this project made https://docs.effect.website and I could query it directly, secretllama.com style.
That would require the models to have real reasoning ability as opposed to looking up preformed answers on Stack Overflow.
Study after study shows that many evaluations of LLMs are deceptive because they test on questions that were already answered in the training set. On the other hand, this is exactly why they can do so well on medical board exams: you don't learn to pass the boards by predicting from first principles which treatments should work, but by remembering which treatments have been proven to work.
For that matter, I've long wanted a conventional search engine that can be specialized for the particular software project I am working on: for instance, I might be using JDK 17 and jOOQ 3.15, and I only want to search those versions of the docs. It shouldn't be hard to look at the POM file to figure these versions out. I have the exact same problem with JavaScript, where I have to work with code that uses different versions of react-router, bootstrap, MUI, etc. The ideal LLM coding assistant should do the same.
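The JavaScript side of that idea is straightforward to sketch: read the project's manifest and strip the semver range prefixes to get concrete versions a doc search could scope to. (The inline package.json here is a made-up example, not from any real project.)

```javascript
// Sketch: extract pinned dependency versions from a package.json so a
// version-aware doc search could restrict results to those releases.
const pkg = JSON.parse(`{
  "dependencies": { "react-router": "^6.4.0", "bootstrap": "5.2.3" }
}`);

const versions = Object.entries(pkg.dependencies).map(([name, range]) => ({
  name,
  // Drop caret/tilde range markers; a real tool would resolve the
  // installed version from the lockfile instead.
  version: range.replace(/^[\^~]/, ""),
}));

console.log(versions);
```

A production version would read the lockfile (package-lock.json, pnpm-lock.yaml) rather than the declared ranges, since that is what's actually installed.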
I just asked GPT-4o to do the following and it succeeded:
> Write me JS code that generates a random list of 10 prime numbers, takes the sine of each (as radians), then takes the cosine of each of those values (as degrees) and then adds 5 to each value
I assure you, this task is not on Stack Overflow (because nobody would ever be deranged enough to want to do this), but GPT-4o could do it!
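For reference, a hand-written solution to that prompt might look something like this (one reasonable reading of the radians/degrees juggling; the prompt is ambiguous by design):

```javascript
// Primality by trial division -- fine for small candidates.
function isPrime(n) {
  if (n < 2) return false;
  for (let i = 2; i * i <= n; i++) {
    if (n % i === 0) return false;
  }
  return true;
}

// Collect `count` random primes below `max` by rejection sampling.
function randomPrimes(count, max = 1000) {
  const primes = [];
  while (primes.length < count) {
    const candidate = Math.floor(Math.random() * max) + 2;
    if (isPrime(candidate)) primes.push(candidate);
  }
  return primes;
}

const result = randomPrimes(10)
  .map((p) => Math.sin(p))                 // sine, input treated as radians
  .map((s) => Math.cos((s * Math.PI) / 180)) // cosine, value treated as degrees
  .map((c) => c + 5);                      // add 5 to each

console.log(result);
```

Since sin lands in [-1, 1] and the cosine of anything within one degree of zero is nearly 1, every output is squeezed into a sliver just under 6 — a nice sanity check that the deranged pipeline did what was asked.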
Admittedly, this shows only a very rudimentary reasoning ability but it demonstrates some. The idea that LLMs are just a slight evolution of search indexes is demonstrably false.
LLMs are also good at tasks that are roughly "linear" in the sense that a group of input tokens corresponds to a group of output tokens and that translation moves from left to right.
In a hard programming problem, for instance, you have a number of actions that have to take place in a certain dependency order, which you could resolve with a topological sort; but in a problem like the one above, each bit of English corresponds to one bit of JavaScript in basically the same order. Similarly, if it couldn't translate
take the sine of...
to
Math.sin(x)
because the language didn't already have a sin function, it would have to code one up. Similarly, translating between Chinese and English isn't really that hard because it is mostly a linear problem, people will accept some errors, and it's a bit of an ill-defined problem anyway. (Who's going to be mad if one word out of 50 is wrong, whereas getting one word wrong in a program could mean it doesn't compile and delivers zero value?)
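To make the "dependency order" contrast concrete, here is a minimal sketch (my own illustration, not from the discussion) of the kind of problem a topological sort resolves — the answer's structure does not follow the input's left-to-right order:

```javascript
// Kahn's algorithm: order tasks so each runs only after its dependencies.
// `deps` maps each task to the list of tasks it depends on.
function topoSort(deps) {
  const indegree = new Map();
  const dependents = new Map();
  for (const task of Object.keys(deps)) {
    indegree.set(task, 0);
    dependents.set(task, []);
  }
  for (const [task, requires] of Object.entries(deps)) {
    for (const dep of requires) {
      indegree.set(task, indegree.get(task) + 1);
      dependents.get(dep).push(task);
    }
  }
  // Start with tasks that depend on nothing, release their dependents.
  const queue = [...indegree.keys()].filter((t) => indegree.get(t) === 0);
  const order = [];
  while (queue.length > 0) {
    const t = queue.shift();
    order.push(t);
    for (const d of dependents.get(t)) {
      indegree.set(d, indegree.get(d) - 1);
      if (indegree.get(d) === 0) queue.push(d);
    }
  }
  return order;
}

const order = topoSort({ render: ["parse", "fetch"], parse: ["fetch"], fetch: [] });
console.log(order);
```

Notice that `fetch` must come out first even though it was listed last — exactly the non-linear restructuring that a token-by-token left-to-right translation doesn't give you for free.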
LLMs can do a bit of generalization beyond full-text search, but people really underestimate how much they fake reasoning through memory and generalization, and how asking them problems that aren't structurally close to problems in the training set reveals their weakness. Studies show that LLMs aren't robust at all to changes in the order of the parts of a problem, for instance.
The biggest problem people have in reasoning about ChatGPT is that they seem to have a strong psychological need to credit it with more intelligence than it has, the same way we're inclined to see a face in a cut stem. It can do an awful lot, but it cheats pervasively, and if you don't let it cheat it doesn't seem as smart anymore.