Artifacts at the edges are due to occlusions. An occlusion is a part of the scene that wasn't visible to the original camera; you see these in VR if you move far enough from the original camera position to look behind something. This is a really hard problem for 6DOF. We've been improving the quality of occlusions over time, e.g.:
* v1: https://lifecastvr.com/demo_maui.html
* v2: https://lifecastvr.com/kalalea_fire.html
* v3: https://lifecastvr.com/hubner4.html
Version 3 uses a 2-layer representation: a second image+depthmap encodes a background layer, which is drawn to fill in the occlusions. This background layer can be precomputed in a variety of ways. For example, here is a CGI synthetic scene where we can construct the background layer perfectly:
https://lifecastvr.com/liferay.html
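To make the occlusion problem concrete, here's a minimal sketch of what a 2-layer compositor does. This is our own simplified pinhole-camera illustration, not Lifecast's actual projection model or renderer: forward-warp the foreground image+depthmap to the new viewpoint (which leaves holes where disocclusions appear), then fill those holes from a background layer warped the same way.

```python
import numpy as np

def forward_warp(rgb, depth, fx, fy, cx, cy, t):
    """Naively splat an image+depthmap into a camera translated by t (3-vector).

    Pixels where nothing lands remain holes (valid == False) -- these are
    the disocclusions a second layer must fill. Pinhole intrinsics
    (fx, fy, cx, cy) are illustrative; a per-pixel loop with a z-buffer is
    used for clarity, not speed.
    """
    h, w, _ = rgb.shape
    out = np.zeros_like(rgb)
    zbuf = np.full((h, w), np.inf)
    valid = np.zeros((h, w), dtype=bool)
    for v in range(h):
        for u in range(w):
            z = depth[v, u]
            # Back-project the pixel to 3D, then shift by the camera translation.
            x = (u - cx) / fx * z - t[0]
            y = (v - cy) / fy * z - t[1]
            z2 = z - t[2]
            if z2 <= 0:
                continue  # point is behind the new camera
            u2 = int(round(x / z2 * fx + cx))
            v2 = int(round(y / z2 * fy + cy))
            # Keep the nearest surface at each destination pixel (z-buffering).
            if 0 <= u2 < w and 0 <= v2 < h and z2 < zbuf[v2, u2]:
                zbuf[v2, u2] = z2
                out[v2, u2] = rgb[v, u]
                valid[v2, u2] = True
    return out, valid

def composite(fg_rgb, fg_valid, bg_rgb):
    """Fill disoccluded foreground pixels from the background layer."""
    return np.where(fg_valid[..., None], fg_rgb, bg_rgb)
```

With zero translation the warp is the identity and nothing is disoccluded; as you translate away from the original viewpoint, `valid` develops holes at depth edges, which is exactly where the background layer gets drawn.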
However, generating the background layer for real-world content is more challenging; we are on version 1 of that. We will improve it with machine learning in a future release. We can also substitute a "plate" 3D scene for the background in cases where the camera doesn't move, and we have experimented with using data from other frames when the camera moves. This will improve over time.
When we moved on to multiple (depth) camera setups, in-painting from old frames worked really well, even before any masking off of static vs. moving content (all done in real time, for live streaming).
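The in-painting-from-old-frames idea can be sketched like this. This is hypothetical code, not our production pipeline, and it assumes a static camera with per-pixel depth: for each pixel, keep the color observed at the greatest depth seen so far, so the accumulated plate gradually "sees past" moving foreground objects once they move out of the way.

```python
import numpy as np

class BackgroundAccumulator:
    """Accumulate a background plate over time from a static depth camera.

    A sketch of the technique: a pixel's stored color is replaced whenever
    a new frame observes something farther away there, i.e. the foreground
    object that was blocking the view has moved.
    """
    def __init__(self, h, w):
        self.rgb = np.zeros((h, w, 3))
        self.depth = np.zeros((h, w))  # 0 = nothing observed yet

    def update(self, rgb, depth):
        # Pixels where this frame sees farther than anything so far.
        farther = depth > self.depth
        self.depth = np.where(farther, depth, self.depth)
        self.rgb = np.where(farther[..., None], rgb, self.rgb)
        return self.rgb, self.depth
```

Masking static vs. moving content, as mentioned above, would sit in front of this: only pixels classified as static background need to be folded into the plate, which cuts down on ghosting from moving objects.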