via: https://prostheticknowledge.tumblr.com/


Everybody Dance Now


Graphics research from UC Berkeley delivers the most convincing motion synthesis with human poses to date, taking the dance moves from one video and recreating them in another:


This paper presents a simple method for “do as I do” motion transfer: given a source video of a person dancing we can transfer that performance to a novel (amateur) target after only a few minutes of the target subject performing standard moves. We pose this problem as a per-frame image-to-image translation with spatio-temporal smoothing. Using pose detections as an intermediate representation between source and target, we learn a mapping from pose images to a target subject’s appearance. We adapt this setup for temporally coherent video generation including realistic face synthesis.
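To make the abstract's pipeline concrete, here is a minimal sketch of the per-frame inference loop it describes: detect the source dancer's pose, smooth it over time, render it as a pose image, and translate that image into the target subject's appearance. The callables pose_detector, render_pose, and generator are hypothetical placeholders (no official code has been released), and the exponential smoothing below merely stands in for the paper's spatio-temporal smoothing.

```python
import numpy as np

def transfer_motion(source_frames, pose_detector, render_pose, generator,
                    alpha=0.5):
    """Transfer the motion in source_frames onto a target subject.

    pose_detector(frame)   -> (J, 2) array of 2D joint keypoints
    render_pose(keypoints) -> stick-figure image the generator expects
    generator(pose_image)  -> synthesized frame of the target subject
    """
    outputs = []
    prev = None
    for frame in source_frames:
        # Pose keypoints act as the intermediate representation that
        # decouples the source dancer's appearance from their motion.
        keypoints = pose_detector(frame)
        # Simple exponential smoothing of keypoints across frames, a
        # stand-in for the paper's spatio-temporal smoothing.
        if prev is not None:
            keypoints = alpha * prev + (1.0 - alpha) * keypoints
        prev = keypoints
        # Per-frame image-to-image translation: pose image -> target.
        outputs.append(generator(render_pose(keypoints)))
    return outputs
```

The key design choice, per the abstract, is that the generator is trained only on the target subject (a few minutes of standard moves), so at inference time any source performance can drive it through the shared pose representation.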


At the moment there is no official project website or code available, but the research paper can be found here.


Source: arxiv.org

