All the failures to detect humans will be used as training data to fine-tune the model.
Just like a toddler might be confused the first time they see a box with legs walking towards them, or mistake a hand puppet for a real living creature. I've seen this firsthand with my son (the latter).
AI tooling is already capable of identifying whatever it's trained to detect. The DARPA team just hadn't trained it on varied enough data when that particular exercise took place.
Not really. Depends entirely on how general-purpose (abstract) the learned concept is.
For example, detecting the possible presence of a cavity inside an object X, and whether that cavity is large enough to hide another object Y. Learning generic spatial properties like that can greatly improve a whole swath of downstream prediction tasks (i.e., in a transfer-learning sense).
That's exactly the problem: the learned "concept" is not general-purpose at all. It's (from what we can tell) a bunch of special cases. While the AI may learn, as special cases, cavities inside cardboard boxes and barrels and foxholes, let's say, it still has no general concept of a cavity, nor does it have a concept of "X is large enough to hide Y". This is what children learn (or maybe innately know), but which AIs apparently do not.
> It still has no general concept of a cavity, nor does it have a concept of "X is large enough to hide Y". This is what children learn (or maybe innately know), but which AIs apparently do not.
I take it you don't have any hands-on knowledge of the field, because I've built systems that detect exactly such properties: either directly, through their mathematical constructs (sometimes literally via a single OpenCV function call), or through deep classifier networks. It's not exactly rocket science.
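For anyone curious what that kind of check looks like in practice, here is a minimal sketch, not the poster's actual system: it assumes OpenCV 4.x and uses contour hierarchies to find holes (cavities) in a binarized mask, then applies a crude bounding-box test for "large enough to hide Y". The function names, the synthetic mask, and the fit test are all illustrative assumptions.

```python
import cv2
import numpy as np


def find_cavities(binary_mask):
    """Return the inner (hole) contours of a binarized object mask.

    With RETR_CCOMP, OpenCV builds a two-level hierarchy: top-level
    contours are outer boundaries, and contours with a parent are holes.
    """
    contours, hierarchy = cv2.findContours(
        binary_mask, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE
    )
    if hierarchy is None:
        return []
    hierarchy = hierarchy[0]  # shape (N, 4): [next, prev, first_child, parent]
    # A contour whose parent index is not -1 lies inside another contour,
    # i.e. it bounds a cavity.
    return [c for c, h in zip(contours, hierarchy) if h[3] != -1]


def cavity_can_hide(cavity, object_w, object_h):
    """Crude "is this cavity big enough to hide a w x h object?" test.

    Only compares axis-aligned bounding boxes (a deliberate simplification);
    a real system would check the fit far more carefully.
    """
    _, _, w, h = cv2.boundingRect(cavity)
    return (w >= object_w and h >= object_h) or (w >= object_h and h >= object_w)


if __name__ == "__main__":
    # Synthetic example: a filled box with a rectangular hollow inside it.
    mask = np.zeros((200, 200), dtype=np.uint8)
    cv2.rectangle(mask, (20, 20), (180, 180), 255, thickness=-1)  # the box
    cv2.rectangle(mask, (60, 60), (140, 140), 0, thickness=-1)    # the cavity
    cavities = find_cavities(mask)
    print(f"found {len(cavities)} cavity(ies)")
    print("big enough to hide a 50x30 object:",
          any(cavity_can_hide(c, 50, 30) for c in cavities))
```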
It would be murder if we weren't required by human progress to embrace fully autonomous vehicles as soon as possible. Take it up with whatever god inspires these sociopaths.
In this case there is very much intent. OP knows there isn't enough data to form a full model, so is relying on stochastic death to get that data, literally and knowingly trading lives for it. The intent is to kill people to figure out what information is missing.