Hacker News

It's not too far off. Vid2vid is already decent at keeping character consistency when configured correctly. Background/environment flickering is harder to control, but that makes sense: the process currently applies img2img to each frame independently, so nothing ties successive frames together. I think we'll soon see new models with temporal convolutions that make video-to-video transformations absolutely stunning.
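To illustrate why per-frame img2img flickers, here's a minimal sketch. The `stylize` function is a hypothetical stand-in for an img2img pass (the real model's frame-dependent noise is what causes flicker); blending each output with the previous one (a simple exponential moving average, not anything a real vid2vid pipeline necessarily does) shows how temporal coupling suppresses it:

```python
import numpy as np

rng = np.random.default_rng(0)

def stylize(frame):
    # Stand-in for an img2img pass: the real model introduces
    # frame-dependent variation, which is what shows up as flicker.
    return frame + rng.normal(0.0, 0.1, frame.shape)

frames = [np.full((4, 4), 0.5) for _ in range(30)]  # a static scene

# Naive approach: stylize every frame independently.
naive = [stylize(f) for f in frames]

# Temporally coupled approach: blend each stylized frame with the
# previous output, trading some sharpness for consistency.
smoothed = []
prev = None
for f in frames:
    out = stylize(f)
    if prev is not None:
        out = 0.8 * prev + 0.2 * out
    smoothed.append(out)
    prev = out

def flicker(seq):
    # Mean frame-to-frame difference: a crude flicker metric.
    return float(np.mean([np.abs(b - a).mean() for a, b in zip(seq, seq[1:])]))

print(flicker(naive) > flicker(smoothed))  # temporal blending reduces flicker
```

A temporal convolution in the model would learn this cross-frame coupling instead of applying it as a fixed post-hoc blend, which is why it should handle motion far better than naive smoothing.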


