Facebook researchers develop AI method to convert videos of people into playable game characters
Apple recently showcased AI's ability to map facial data onto an animated character. That feature, known as Animoji, brings many people closer to the dream of turning themselves into their favorite avatars on a digital platform. The logical next step may be turning oneself into one's own game character, and a new paper from a group of Facebook researchers suggests this possibility is closer than ever.
These scientists - Oran Gafni, Lior Wolf and Yaniv Taigman - have developed a system that captures a person and their movements on video, then converts them into a 3D character that replicates those motions and gestures in response to joystick-based controls. This sounds like an impressive breakthrough; however, as the researchers themselves concede in their article, earlier projects have already attempted (or at least laid the groundwork for) this kind of conversion.
As in that body of work, Gafni, Wolf and Taigman used AI to model, mask and ultimately render their video-derived playable characters. Their method relies on two networks, Pose2Pose and Pose2Frame, which learn the subject's body shape, movement style and other characteristics in order to build up the resulting character.
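The division of labor between the two stages can be illustrated with a toy sketch. This is not the authors' implementation - the real Pose2Pose and Pose2Frame are deep networks, and every function name, shape and value below is an illustrative assumption - but it shows the data flow the paper describes: a control signal updates the pose, and the pose is then rendered into a frame.

```python
import numpy as np

def pose2pose(pose, control):
    """Toy stand-in for the pose-update stage: predict the next pose from
    the current pose and a joystick-style control signal (here, simply
    shift every joint by the control vector)."""
    return pose + control

def pose2frame(pose, background):
    """Toy stand-in for the rendering stage: draw the character at its
    pose onto the background, returning the composited frame and the
    mask separating character pixels from the scene."""
    frame = background.copy()
    mask = np.zeros(background.shape[:2], dtype=bool)
    for x, y in pose.astype(int):
        if 0 <= y < frame.shape[0] and 0 <= x < frame.shape[1]:
            mask[y, x] = True
            frame[y, x] = 255  # the 'character' is just white pixels here
    return frame, mask

# One step of the control loop: the joystick pushes the character right.
background = np.zeros((8, 8), dtype=np.uint8)
pose = np.array([[2.0, 4.0], [3.0, 4.0]])          # two 'joints' as (x, y)
next_pose = pose2pose(pose, control=np.array([1.0, 0.0]))
frame, mask = pose2frame(next_pose, background)
```

Running the loop repeatedly with fresh control inputs would yield a stream of frames, which is what makes the character "playable."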
However, the authors also assert that their method improves on earlier approaches: its masking and output techniques produce characters that sit more realistically against their backgrounds. In other words, the characters relate better to their surroundings; the shadows they cast, for example, are more realistic and accurate.
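The role a mask plays in placing a character against a background can be seen in ordinary linear compositing, where the output is mask * character + (1 - mask) * background. The sketch below shows only this standard blending idea with toy values; how the paper's networks actually learn and apply their masks is not reproduced here.

```python
import numpy as np

def composite(character, mask, background):
    """Linear blend: where the mask is ~1 the character shows through,
    where it is ~0 the background shows, and intermediate values mix
    the two (which is how soft effects like shadows can appear)."""
    return mask * character + (1.0 - mask) * background

character = np.full((2, 2), 200.0)   # flat gray 'character' layer
background = np.full((2, 2), 50.0)   # darker scene
mask = np.array([[1.0, 0.5],
                 [0.0, 0.0]])        # full, partial, and no coverage
out = composite(character, mask, background)
# out: [[200., 125.], [50., 50.]]
```

The quality of the mask is what determines how naturally the character blends in, which is why the authors' improved masking translates directly into more convincing shadows and surroundings.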
This also means that the Facebook scientists' characters can be composited onto different backgrounds with greater ease and efficacy than before. Their movements, while convincing, could still be a little more natural. Nevertheless, this remains a potentially significant advance in the field of AI-enhanced media conversion. It will be interesting to see where this research - available as a preprint on arXiv, the repository of advance research publications hosted by Cornell - takes us in the future.