A conglomerate developed an AI vision system that you could hook up to your anti-aircraft batteries to eliminate any chance of friendly fire. DARPA and the Pentagon went wild, pushing the system through testing so they could get to the live demonstration.
They hook up a live system loaded with dummy rounds, fly a few friendly planes over, and everything looks good. But when they fly a captured MiG-21 over, the system fails to respond. The brass is upset and the engineers are all scratching their heads trying to figure out what is going on, but as the sun sets the system lights up, trying to shoot down anything in the sky.
They quickly shut the system down and do a postmortem. In the review they find that all the training data for friendly planes were perfect-weather, blue-sky overflights, and all the training data for the enemy were nighttime / low-light pictures. The AI determined that anything flying during the day is friendly and anything at night is to be terminated with extreme prejudice.
We used synthetic data for training a (sort of) similar system. Not gonna get into the exact specifics, but we didn't have a lot of images of one kind of failure use case.
There are just not that many pictures of this stuff. We needed hundreds, ideally thousands, and had maybe a dozen or so.
Okay, so we got a couple of talented picture/design guys from the UI teams to come out and do a little Photoshop work on the images: take some of the existing ones, make a couple of similar-but-not-quite-the-same variants, and then hack those up in a few ways. Load those into the ML, tell it they're targets and to flag on those, etc.
Took a week or two, no dramas, and the early results were promising. Then it just started failing.
Turns out we ran into issues with two (2) pixels: black pixels against a background of darker black shades that the human eye basically didn't see or notice. These were artifacts from the Photoshopping, from re-using parts of a previous image multiple times. Since 51% or more of the target photos had those 2 pixels in them, the ML decided they were the signal, and photos lacking them, even when the target was painfully obvious to the naked eye, were fails.
Like, zooming in on them directly, you're like: yeah, okay, those pixels might be different, but otherwise you'd never see it. Thankfully output highlighting flagged it reasonably quickly, but it still took 2-3 weeks to nail down the issue.
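This kind of shortcut learning is easy to reproduce in miniature. Below is a toy sketch (all of it invented for illustration: the image size, the pixel positions, the noise model, and an artifact magnitude exaggerated so the effect shows up at this tiny scale), where a plain logistic regression is trained on random "images" whose positive class carries a small offset on two fixed pixels. The model's largest weights land on exactly those two pixels, i.e. it keys on the compositing artifact rather than anything a human would call the target.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 16x16 grayscale "images" flattened to 256-dim vectors,
# background is pure noise. In the positive (photoshopped) class, two
# fixed pixels carry a consistent offset -- the stand-in for the
# two-pixel compositing artifact. The offset is exaggerated here so
# the effect is visible with only 1000 samples.
n, d = 1000, 16 * 16
X = rng.normal(0.0, 1.0, (n, d))
y = np.array([0, 1] * (n // 2))

ARTIFACT_PIXELS = [37, 201]   # arbitrary, invented positions
for px in ARTIFACT_PIXELS:
    X[y == 1, px] += 2.0

# Plain logistic regression trained by gradient descent.
w = np.zeros(d)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid of the logits
    w -= 0.1 * (X.T @ (p - y)) / n       # gradient of the log-loss

# The two largest-magnitude weights sit on the artifact pixels:
top2 = sorted(int(i) for i in np.argsort(np.abs(w))[-2:])
print(top2)  # -> [37, 201], the planted artifact pixels
```

The point of the sketch is that nothing here forces the model to look at the artifact; it simply is the strongest, most consistent correlate of the label, so any reasonable optimizer will find it first. Real detectors on real images behave the same way, just with far subtler cues.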
It wouldn’t be cheap, but I could see 3D modeling and physically based rendering (I’ve been working with Octane, but others should do the job fine) being a really good use case for this. Having a bazillion years in 2D but having only gotten into 3D at a professional level a few years ago, I don’t think I’d even suggest a purely 2D approach if I was looking for optimal results. Match all the camera specs, simulate all sorts of lighting and weather patterns from all sorts of angles, etc.