Tweeted By @ylecun
Using a temporal ConvNet with dilated convolutions to turn a video of people moving around into a 3D pose sequence. From FAIR-Menlo Park. https://t.co/YzLTUvJB8z
— Yann LeCun (@ylecun) November 30, 2018
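To make the idea in the tweet concrete, here is a minimal sketch of a temporal ConvNet that uses stacked dilated 1D convolutions to lift a sequence of per-frame 2D keypoints into a 3D pose sequence. This is not the FAIR implementation: the 2D-keypoint input, joint count, channel width, and dilation schedule are all assumptions made for illustration.

```python
# Minimal sketch (assumptions, not the FAIR model): dilated temporal
# convolutions over per-frame 2D keypoints producing per-frame 3D joints.
import torch
import torch.nn as nn


class TemporalDilatedPoseNet(nn.Module):
    def __init__(self, num_joints=17, channels=256, dilations=(1, 3, 9)):
        super().__init__()
        in_ch = num_joints * 2          # (x, y) per joint, flattened per frame
        out_ch = num_joints * 3         # (x, y, z) per joint
        layers = [nn.Conv1d(in_ch, channels, kernel_size=3, padding=1),
                  nn.ReLU()]
        # Each dilated layer widens the temporal receptive field
        # without any pooling or striding.
        for d in dilations:
            layers += [nn.Conv1d(channels, channels, kernel_size=3,
                                 dilation=d, padding=d),
                       nn.BatchNorm1d(channels),
                       nn.ReLU()]
        layers += [nn.Conv1d(channels, out_ch, kernel_size=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, keypoints_2d):
        # keypoints_2d: (batch, frames, num_joints, 2)
        b, t, j, _ = keypoints_2d.shape
        x = keypoints_2d.reshape(b, t, j * 2).transpose(1, 2)  # (b, 2J, T)
        y = self.net(x).transpose(1, 2)                        # (b, T, 3J)
        return y.reshape(b, t, j, 3)                           # 3D pose per frame


if __name__ == "__main__":
    model = TemporalDilatedPoseNet()
    clip_2d = torch.randn(1, 81, 17, 2)   # 81 frames of detected 2D keypoints
    poses_3d = model(clip_2d)
    print(poses_3d.shape)                  # torch.Size([1, 81, 17, 3])
```

The appeal of convolving over time rather than recurring over it is that the whole clip is processed in parallel, and the dilation schedule lets a few layers cover a long temporal window of frames.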