I'm not sure that's necessary. Early humans didn't know why a lot of things happened, such as why rubbing sticks makes fire; they just learned to use them from trial and error.
The physics of it were beyond them. I see it more as goal-oriented: "I want fire, how can I get it?".
I suppose that's cause-and-effect in a loose sense, but one doesn't have to view everything as C&E to get similar results. It seems more powerful to think of it as relationships instead of just C&E, because then you get a more general relationship-processing engine out of it instead of a single-purpose thing. Make C&E a subset of relationship processing. If the rest doesn't work, you can still get a C&E engine out of it by shutting off some features.
They understood cause and effect. They didn't know the causal chain in depth, but they did know that rubbing sticks together caused fire. They also knew that dumping water on the ground did not cause it to rain. Thus they could distinguish between correlation and causation.
Did they really understand cause and effect? Primitive cultures frequently used religious ceremonies (cause) to effect changes in the natural world. It didn't actually work, but somehow they fooled themselves into believing that it did.
According to Hume, causality is a habit of mind all humans have from experiencing many situations where A always follows B. And according to Kant, causality is one of the categories the mind imposes on the world of sensory impressions, like space and time, to make sense of the world.
Early childhood studies should be able to answer the question as to whether humans are wired to expect causal relationships. My understanding is that all children do develop that expectation, as do animals.
People getting umbrellas ready causes it to rain? We take models of the minds of others, etc., into account in a much richer understanding of causality than "A follows B in time".
Reasoning about cause and effect works both ways: "A causes B" also implies "B can be caused by A". Your example actually strengthens the case for a simplistic view of cause and effect.
People getting their umbrellas ready is caused by the expectation of rain.
"Why are they taking their umbrellas out?" "Oh it must be because it is about to rain." "Why do they have an expectation of rain?" "Because they saw the weather report and clouds are visible"
1. There is a time-based relationship between A and B: they tend to happen close together in time
2. There is a time-based relationship between A and B such that B often happens shortly after A.
3. When I (or the bot) create condition A, then B usually happens.
4. When I (or the bot) create condition A, then B usually does not happen.
5. Science or simulations explain how A triggers B, if #2 is observed.
A bot can be programmed to conclude there is a potential causal relationship if #2 happens. If not problematic, the bot can then do experiments to see whether #3 or #4 is the case. If #3 happens, the bot can label the relationship as "likely causal". If #4, label it "probably not causal, but puzzling".
#5 would probably be needed to conclude "most likely causal", and is probably an unrealistic expectation for the first generation of "common sense" AI, although it may have a simple physics simulator built in. The highest "causal" score would come from #2, #3, and #5 all being true.
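A minimal sketch of that labeling rule, as I imagine it (the function and flag names are my own invention, not from any real system):

```python
from typing import Optional

def causal_label(order_observed: bool,
                 experiment_result: Optional[bool],
                 mechanism_known: bool = False) -> str:
    """Toy labeling of a candidate A->B relationship, per #2-#5 above.

    order_observed    -- #2: B often happens shortly after A
    experiment_result -- #3/#4: did B follow the bot's own creation of A?
                         (None if no experiment has been run yet)
    mechanism_known   -- #5: science/simulation explains how A triggers B
    """
    if not order_observed:
        return "no candidate relationship"
    if experiment_result is None:
        return "potential causal relationship"  # worth experimenting on
    if experiment_result:
        return "most likely causal" if mechanism_known else "likely causal"
    return "probably not causal, but puzzling"

print(causal_label(True, None))        # potential causal relationship
print(causal_label(True, True))        # likely causal
print(causal_label(True, True, True))  # most likely causal (highest score)
print(causal_label(True, False))       # probably not causal, but puzzling
```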
An AI bot could test that theory by opening bunches of umbrellas. It's not too different from something a toddler might try. A parent would see that, chuckle, and explain that clouds cause rain, not umbrellas. It's a "fact" (rule) a parent instills in their child.
I was recently reading an essay (can't find it this second) about how some religious ceremonies actually introduced helpful randomness.
For example, if you hunt to the east and find good game, you'd keep hunting to the east. Eventually you'd kill everything over there or get them to move, and your hunting would get worse. The optimal approach might be to randomize the direction you hunt, so that game doesn't learn where you're hunting.
The society couldn't say WHY the ceremony was good, but over the long term, societies that kept applying the ceremony would have better outcomes than societies that didn't.
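Here's a toy simulation of that hunting story, under made-up numbers; it only models local depletion and slow regrowth, not game actually learning where you hunt:

```python
import random

def simulate(strategy, days=200, regen=0.05, seed=0):
    """Toy hunt: game in each direction depletes when hunted and slowly regrows."""
    rng = random.Random(seed)
    game = {"north": 1.0, "south": 1.0, "east": 1.0, "west": 1.0}
    total = 0.0
    for _ in range(days):
        d = strategy(rng)
        total += game[d] * 0.5          # catch half of what's there
        game[d] *= 0.5                  # local depletion
        for k in game:                  # slow regrowth, capped at carrying capacity
            game[k] = min(1.0, game[k] + regen)
    return total

always_east = lambda rng: "east"
randomized  = lambda rng: rng.choice(["north", "south", "east", "west"])

print(simulate(always_east))  # one area stays depleted; low long-run yield
print(simulate(randomized))   # pressure spread across areas; higher yield
```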
Sometimes superstition is just that ... but I'd also bet there are unintuitive/surprising benefits behind a lot of it.
"One performs the rain sacrifice and it rains. Why? I say: there is no special reason why. It is the same as when one does not perform the rain sacrifice and it rains anyway. When the sun and moon suffer eclipse, one tries to save them. When Heaven sends drought, one performs the rain sacrifice. One performs divination and only then decides on important affairs. But this is not to be regarded as bringing one what one seeks, but rather is done to give things proper form. Thus, the gentleman regards this as proper form, but the common people regard it as connecting with spirits."
There are psychological benefits. If you believe that things like rain are within your control then you can be confident instead of feeling helpless.
Obviously religions take advantage of this desire for the illusion of control and convince followers that practicing the religion will keep their lives free from external bad influences.
Sure, I think the randomization concept particularly interests me because it's NOT just psychological. There's actual real-world benefit to doing rituals / reading entrails / listening to people speaking in tongues vs. not, because our instincts or normal habits aren't always right.
Re: "I suppose that's cause-and-effect in a loose sense, but one doesn't have to view everything as C&E to get similar results. It seems more powerful to think of it as relationships instead of just C&E, because then you get a more general relationship-processing engine out of it instead of a single-purpose thing. Make C&E a subset of relationship processing. If the rest doesn't work, you can still get a C&E engine out of it by shutting off some features."
I may be wrong (heck, I'm probably wrong), but I can't help but feel that you're abstracting things out too much. Yes, a "cause / effect relationship" IS-A "relationship", but sometimes the distinctions actually matter. I'd argue that a "cause/effect relationship" (and the associated reasoning) is markedly different in at least one important sense, and that is that it includes time in two senses: direction, and duration. There's a difference between knowing that Thing A and Thing B are "somehow" related, and knowing that "Doing Thing A causes Thing B to happen shortly afterwards" or whatever.
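In data-structure terms, the difference could be as small as a direction plus one timing field. A rough sketch, with class and field names I've made up:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Relationship:
    """Thing A and Thing B are 'somehow' related: no direction, no timing."""
    a: str
    b: str

@dataclass
class CausalLink(Relationship):
    """IS-A Relationship, plus the two time senses: direction (a causes b,
    not the reverse) and duration (how soon b tends to follow a)."""
    typical_lag_seconds: Optional[float] = None

assoc = Relationship("fever", "malaria")                             # merely associated
cause = CausalLink("malaria", "fever", typical_lag_seconds=86400.0)  # illustrative lag
```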
To my way of thinking, this is something like what Pearl is talking about here:
"The key, he argues, is to replace reasoning by association with causal reasoning. Instead of the mere ability to correlate fever and malaria, machines need the capacity to reason that malaria causes fever."
That said, I do like your idea of trying to build the processing engine in such a way that you can turn features on and off, because I don't necessarily hold that "cause/effect" is the only kind of reasoning we need.
I'm not against tagging a vector between two or more nodes as "probably causal" (cause related) in some sense (or maybe a "weighted causal"). It just shouldn't be the ONLY tag.
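Something like this is what I have in mind; a rough sketch with invented names, where "causal" is just one tag among many and you can shut the rest off:

```python
from collections import defaultdict

class RelationshipGraph:
    """Nodes linked by tagged, weighted relations; 'causal' is just one tag."""
    def __init__(self):
        self.edges = defaultdict(list)   # node -> [(other_node, tag, weight)]

    def relate(self, a, b, tag, weight=1.0):
        self.edges[a].append((b, tag, weight))

    def neighbors(self, node, tags=None):
        """General relationship processing; pass tags={'causal'} to shut
        off everything except the C&E features."""
        return [(b, t, w) for (b, t, w) in self.edges[node]
                if tags is None or t in tags]

g = RelationshipGraph()
g.relate("rubbing_sticks", "fire", tag="causal", weight=0.9)   # a "weighted causal" tag
g.relate("fever", "malaria", tag="correlated", weight=0.7)     # some other tag

print(g.neighbors("rubbing_sticks"))          # full relationship engine
print(g.neighbors("fever", tags={"causal"}))  # causal-centric mode -> []
```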
Re quote: "The key, he argues, is to replace reasoning by association with causal reasoning. Instead of the mere ability to correlate fever and malaria, machines need the capacity to reason that malaria causes fever."
Back to my original point: humans did just fine at "intelligence" before science came along. That's step 3; we need step 2 first. Find correlations, and if part-1 seems to happen before part-2, the bot can infer something equivalent to a causal relationship. That may be the time element you are talking about. Perhaps it's a matter of interpreting what "causal" means. I see it as "finding relationships we can take advantage of to obtain our goals". Whether there is physics or chemistry behind a relationship is an unnecessary distraction (without other advances).
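That step-2 inference can be mechanical. A toy version, assuming the bot keeps a log of (event, timestamp) pairs (the event names here are made up):

```python
def precedes(log, a, b, window=5.0):
    """Fraction of A events followed by a B within `window` time units.
    `log` is a list of (event_name, timestamp) pairs."""
    a_times = [t for e, t in log if e == a]
    b_times = [t for e, t in log if e == b]
    if not a_times:
        return 0.0
    hits = sum(any(0 < tb - ta <= window for tb in b_times) for ta in a_times)
    return hits / len(a_times)

log = [("rub_sticks", 0), ("fire", 2), ("rub_sticks", 10), ("fire", 11),
       ("rain_dance", 20), ("rub_sticks", 30), ("fire", 33)]
print(precedes(log, "rub_sticks", "fire"))  # 1.0 -> infer a causal-ish relationship
print(precedes(log, "rain_dance", "fire"))  # 0.0 -> nothing to take advantage of
```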
Re: That said, I do like your idea of trying to build the processing engine in such a way that you can turn features on and off, because I don't necessarily hold that "cause/effect" is the only kind of reasoning we need.
So we kind of agree. If it turns out I'm wrong and we can make most of the bot work right with just causal relationships, then factor out the general-purpose "graph processor" for efficiency and make it causal-centric.
To me, learning cause-and-effect is a non-trivial process: recognizing a relationship often comes first and triggers a "formal" reasoning process afterwards (if any).
The more formal that later process is, the closer we get to real "knowledge". So your suggestion is quite practical, in the sense that we can start from that and figure out how to push the engine toward the more formal end of the spectrum.
That doesn't tell one directly if heat causes fire or the other way around. One just knows there is some relationship. Recognizing a relationship itself is often enough to trigger experimentation.