Is it true that it's a tradeoff? The "more precocial", the less flexibility to learn new things? Conversely, knowing less means fewer baked-in assumptions, which requires more flexibility in exchange.
Would it be true that what is precocial in us is the ability to mimic and to abstract specific patterns into general rules?
It must be a tradeoff. I don't have any proof, but my thinking is that we pay an extraordinary price in terms of resources required to keep human babies safe for years before they can keep themselves safe. That is a strong selection pressure on everyone involved. The fact that it still happens means it must somehow be worth it.
Naturally, I'm a dev. Could it be something to do with limited genetic storage being dedicated to software instead of coding for hardware capabilities? To my limited knowledge, increasing DNA size comes at a maintenance cost (transcription, replication, etc.), so there's a soft upper bound.
Does it matter? If it's well defined, each of those would be a node in the graph. Or can you elaborate?
Dozens doesn't seem like that much for a graph where a higher-level node would be Slack, and the agent only loads further if it needs anything related to Slack.
Or I'm not understanding.
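To illustrate what I mean, a toy sketch (the names and the shape of the registry are hypothetical):

```typescript
// Toy sketch of lazy tool loading: the agent sees only coarse top-level
// nodes and expands one on demand. All names here are hypothetical.
interface ToolNode {
  name: string;
  children?: () => ToolNode[]; // thunk, resolved only when needed
}

const registry: ToolNode[] = [
  {
    name: "slack",
    children: () => [
      { name: "slack.send_message" },
      { name: "slack.search_history" },
    ],
  },
  // ...dozens more top-level nodes
];

// Expand a node only when the task mentions something related to it.
function expand(node: ToolNode): ToolNode[] {
  return node.children ? node.children() : [];
}
```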
The difference between manipulation and influence is that in the first you are the only one taking advantage of the situation, while in the second you genuinely believe the other person will end up in a better place, and if you are wrong, no harm is done.
I guess it's also about whether you care about the other person or you are just pretending. Unfortunately, in my opinion there is no way to know, because some people are really good at pretending to care, even supporting you while keeping a hidden score-tracking board; basically, they are investing.
And then there are people who really care about you, but because they know they can't do anything, or don't know what to say, they won't reach out to you.
I guess we are only left with our instinct, and that is something you learn to calibrate with time.
I've been thinking a lot about this, and I want to build the following experiment, in case anyone is interested:
The experiment is about putting an LLM to play plman[0] with and without Prolog help.
plman is a Pac-Man-like game for learning Prolog; it was written by Professor Francisco J. Gallego from the University of Alicante to teach the logic course in computer science.
Basically, you write a solution in Prolog for a map, and plman executes it step by step so you can visually watch the pacman (plman) moving around the maze, eating and avoiding ghosts and other traps.
There is an interesting dynamic around finding keys for doors and timing-based traps.
There are different levels of complexity, and you can also easily write your own maps, since they are just ASCII characters in a text file.
I thought this was the perfect project to visually explain to my coworkers the limits of LLM "reasoning" and what symbolic reasoning is.
So far I have hooked up the ChatGPT API to try to solve scenarios, and it fails even with a substantial number of retries. That's what I was expecting.
The next step would be to write an MCP tool so that the LLM can navigate the problem by using the tool, but here is where I need guidance.
I'm not sure about the best dynamic to prove the usefulness of Prolog in a way that goes beyond what context retrieval or a DB query could do.
I'm not sure if the LLM should write the Prolog solution itself. I want to avoid building something trivial, like the LLM just asking for the already-solved steps, so my intuition is telling me that I need some sort of virtual-joystick MCP that hides Prolog from the LLM: the LLM would have access to the current state of the screen and could ask questions like "what would my position be if I move up?",
"what's the position of the ghost on its next move?", or "where is the door relative to my current position?"
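A minimal sketch of what that joystick tool surface could look like, using the TypeScript MCP SDK (the tool names, the queryPlman bridge, and the Prolog goals are all invented for illustration; plman itself exposes no such API):

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical bridge to a running Prolog/plman session, e.g. a SWI-Prolog
// child process. Stubbed out here.
async function queryPlman(goal: string): Promise<string> {
  return `not implemented: ${goal}`;
}

const server = new McpServer({ name: "plman-joystick", version: "0.1.0" });

// The LLM can only peek at the world through narrow questions; the Prolog
// side does the actual symbolic work and stays hidden.
server.tool(
  "peek_move",
  "Where would plman end up after one move in the given direction?",
  { direction: z.enum(["up", "down", "left", "right"]) },
  async ({ direction }) => ({
    content: [{ type: "text", text: await queryPlman(`peek(${direction})`) }],
  })
);

server.tool(
  "ghost_next_position",
  "Predicted position of the ghost on its next move.",
  {},
  async () => ({
    content: [{ type: "text", text: await queryPlman("ghost_next") }],
  })
);

await server.connect(new StdioServerTransport());
```

The idea is that the LLM only ever sees answers, while the Prolog program that computes them stays hidden behind the tool.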
I don't have the academic background to design this experiment properly. It would be great if anyone is interested in working together on this, or could give me some advice.
Prior work pending on my reading list:
- LoRP: LLM-based Logical Reasoning via Prolog [1]
- A Pipeline of Neural-Symbolic Integration to Enhance Spatial Reasoning in Large Language Models [2]
It certainly should be possible, but I don't know if it's actually implemented; at the very least you should be able to implement it as a filter plugin for ffmpeg.
Some of the more advanced CRT shaders actually attempt to mathematically model how the video gets distorted by the CRT, and even by the component video. If the effects of converting to film are so well understood that Pixar could adapt their films for the process, then it ought to be possible to post-process the video in a way that reproduces those artifacts.
I don't think it's possible for it to ever be exactly the same, since the display technology of a monitor is fundamentally different from a film projector (or a CRT), but it should be possible to get it good enough that it's indistinguishable from a photo of the film being displayed on a modern monitor (i.e. the colors aren't completely different, like in the comparisons in the article).
BTW, TFA didn't mention this, but about 15 years ago they re-rendered Toy Story and Toy Story 2 for a new theatrical run when those gimmicky 3D glasses were popular. If that's the version that's being distributed today on Disney Plus and Blu-ray (I don't know, but I feel like it probably is), then that could potentially be a more significant factor in ruining the color balance than not having been converted to film.
What I understood from YouTube gurus, take it with a grain of salt, is that your brain takes ketones as a source of energy to preserve the little glucose that goes into the system, and as a result it consumes less oxygen.
But I'm not sure the mouth taste comes from the brain's waste.
To some degree, if your brain was inflamed by bad eating habits, fasting would revert that and make the flushing more efficient as well.
Again, please take this with a double grain of salt, since I don't even know for sure that an inflamed brain is a thing, or what the correct term would be.
Wow! You are almost there. If you made a version that was only drawings, or drawings first and titles later, it would be awesome. Right now the titles take too long to write, and meanwhile the pace of the narration is lost; then it makes a cool drawing super fast. So it feels like with a bit of tweaking to the pacing you'll be able to get an outstanding result.
For me, because I jump from project to project, across different versions and languages, it's just perfect to use the same mise commands and know it's going to work.
Also I like having a global portable ~/.config/mise/config.toml file.
What I'm not sure about yet is how to mix it with brew on macOS. So far what I do is: everything I can install with mise, I do, so I know it will work on Linux as well, and everything else goes through brew.
There are exceptions like postgres, which seems to be available in the registry (asdf:mise-plugins/mise-postgres) but then needs its brew dependencies installed first.
My rule is: everything that doesn't have brew dependencies and is in the mise registry or plugins, I install from there; everything else from brew. But I'm curious, what's your approach?
The creator of mise has written about it [1], but I don't think he switches between Mac and Linux:
> I tend to basically just use core plugins. I like mise for managing things where I really care about the major version (like node). If it's something like shfmt or jq I don't really care about the version. I just want the latest and for me, I find brew to be better suited to that purpose.
Markdown UI loads a bunch of components and then displays them where you want them, by parsing some Markdown to figure out where they go.
MDX goes a step further by enabling you to actually write the component JSX in the Markdown as if it's React (rather than using ``` syntax). You can just display a component that was loaded from somewhere else. Basically, if you took the Markdown UI components as React components, you could use the runtime embed option from here: https://mdxjs.com/guides/embed/
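For reference, that runtime option looks roughly like this (a sketch using @mdx-js/mdx's `evaluate`; the source string and the Chart component are invented for illustration):

```typescript
// Runtime MDX evaluation, per the mdxjs.com embed guide. Note that evaluate()
// compiles and runs the MDX string, which is the safety concern raised below.
import { evaluate } from "@mdx-js/mdx";
import * as runtime from "react/jsx-runtime";

const source = `
# Hello

<Chart points={[1, 2, 3]} />
`;

// Returns an MDXContent component you can render like any React component,
// passing real components in via the `components` prop:
//   <Content components={{ Chart }} />
const { default: Content } = await evaluate(source, {
  ...runtime,
  baseUrl: import.meta.url,
});
```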
The reason I made it is that mdxjs (AFAIK) only works with a build step and locks the user into React, and you are still technically running code sent by the LLM, rather than a more secure, static JSON-like object.
MDX parses JSX, so it's compatible with any JSX runtime, not just React.
You're right that MDX executes arbitrary code, because it supports `import` and string interpolation and performs unsafe evals.
Because those evals aren't safe to use at runtime, I actually created a library that uses the MDX parser to support only the static subset of JSX (which is very similar to HTML, making it static and secure): https://www.npmjs.com/package/react-markdown-with-mdx
Totally agree. I use it with Go and the Echo framework, which is very light.
I have a very simple Vite config to build the CSS and JS, reload everything with air, and it's fast and simple.
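A sketch of roughly what I mean (the entry points and output directory are illustrative, not my actual setup):

```typescript
// vite.config.ts — roughly the kind of minimal config described above.
import { defineConfig } from "vite";

export default defineConfig({
  build: {
    // Emit bundles where the Go/Echo server serves static files (assumed path).
    outDir: "static/dist",
    rollupOptions: {
      input: {
        app: "assets/js/app.js", // plain JS entry
        style: "assets/css/app.css", // CSS entry; Vite emits a plain .css file
      },
    },
  },
});
```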
Edit: I can't prove it, but I think even AI coding is more efficient with this approach: clear server-side rendering, plain JS with modern features built with Vite, and CSS classes.