Good question. It perhaps depends on the type of warning shot. Plenty of media has an anti-tech bent and will publicize warning shots if they see them -- and they already do this with near-term risks, such as facial recognition.
If the warning shot comes from an internal red team, then there's a higher likelihood that it isn't reported. To address that, I think we need to keep improving the culture around safety, so that we increase the odds that someone on or close to that red team blows the whistle if we're stepping toward undisclosed disaster.
I think the bigger risk isn't that we don't hear the warning shots, though. It's that we don't get the warning shots, or we get them far too late. Or, perhaps more likely, we get them but are already set on some inexorable path due to competitive pressure. And a million other "or"s.
You mention media publicizing warning shots. Does that really work at all?
Most of the reporting I see is half-dismissive: "facial recognition is a risk, but what are you going to do? It can't be bad to fight crime." This goes for everything, and it rarely results in effective control.
Internal safety practice in biology or chemistry labs kind of works, but it takes a long time to develop, and accidents still happen.
NTSB accident investigations: is there another field where each accident is taken as seriously as it is there? And step-wise improvement does not sound like a good approach for self-reproducing agents.
Great article on the "quarantine" of the Apollo mission's return. One lesson is that it got little priority (the capsule itself was vented to the air and opened in ocean water) and little effort: lots of things in the lab were insufficiently tested or designed (broken gloves and gloveboxes, fire procedures that involved breaching containment...). Another lesson is that the process was apparently not tested or wargamed nearly enough; was there no test run? A third is that, of course, it didn't go perfectly, given the first two points as a starting point and the fact that it was the first attempt. In hindsight, of course it would fail.
That argues for at least taking the idea of containment (for AI or Mars samples) more seriously. But it also argues that it will, of course, not be taken seriously enough, and that amateurs won't take it seriously either. Which is all the more reason, given this prior experience, to take it even more seriously.
Science is used to "fair warnings" (the screwdriver criticality accidents, Marie Curie, now lunar samples, and yes, also smallpox... plenty of stories), but all of these were minor: a few people died, and the rest learned. The risk from a sufficiently capable AI is not on the same scale, and for that we don't have much experience. The closest comparisons might be high-containment pathogen labs, which have plenty of problems of their own, and the difficulty of cleaning computers after an intrusion (the proper procedure being a clean re-install, which is not possible after an AI leak).
NASA had an easy win putting the astronauts in quarantine; there is no such easy win for current AI research. You can whistle-blow as much as you want, but AGI will be worked on until it is real, regardless of legislation, unless that legislation covers all countries, which is impossible.
Something to think about: even if there are AI 'warning shots', why do you think anyone will be allowed to hear them?