I don't know if it is the paper you are thinking of (likely not), but this idea of checking predictions against outcomes is a very common one in less mainstream AI research, including the so-called "energy-based models" of Yann LeCun and the reference frames of the Thousand Brains project.
A recent paper posted here also looked at recurrent neural nets and found that simplifying the design down to its core amounts to just maintaining a latent prediction and repeatedly adjusting it.
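To make that "adjust a latent prediction" loop concrete, here is a minimal toy sketch of my own (not the paper's actual architecture): the latent state is nudged by gradient descent to reduce the error between its decoded prediction and the observed input.

```python
import numpy as np

# Hypothetical toy, not from the paper: treat h as a latent prediction
# of the observation x and repeatedly adjust it to reduce the error
# of the decoded prediction W @ h (gradient descent on ||x - W h||^2).
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))   # made-up decoder: latent -> observation
x = rng.standard_normal(4)        # observed input
h = np.zeros(4)                   # latent prediction, initially blank

# Step size bounded by the largest curvature, which guarantees
# the prediction error decreases at every iteration.
lr = 1.0 / np.linalg.eigvalsh(W.T @ W).max()

initial = np.linalg.norm(x - W @ h)
for _ in range(100):
    error = x - W @ h             # how wrong the current prediction is
    h += lr * (W.T @ error)       # nudge the latent toward the data
final = np.linalg.norm(x - W @ h)

print(initial, final)             # residual shrinks as h is refined
```

The point of the sketch is only the shape of the computation: one latent vector, one error signal, one repeated correction step, which is roughly the "core" the comment above attributes to RNNs.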