LSTM Motion Generation

Motion Generation Using 3-Layer-LSTM

CS 231N (Convolutional Neural Networks) / 231A (Computer Vision) Joint Project
Jihee Hwang, Danish Shabbir (2017)

By feeding motion data from joint-annotated videos (the JHMDB dataset) into a 3-layer LSTM (Long Short-Term Memory) recurrent neural network, we were able to generate plausible human motion for a given action class from only 5 to 10 initial frames of an unseen motion sequence.
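The trained model itself isn't reproduced here, but the generation procedure described above can be sketched in plain NumPy: the seed frames prime three stacked LSTM layers, after which each predicted pose is fed back in as the next input (an autoregressive rollout). All dimensions, weights, and names below are illustrative placeholders, not the project's actual parameters.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step: gates computed from input x and hidden state h."""
    z = W @ x + U @ h + b              # pre-activations for the 4 gates, shape (4H,)
    H = h.shape[0]
    i = 1 / (1 + np.exp(-z[:H]))       # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))    # forget gate
    o = 1 / (1 + np.exp(-z[2*H:3*H]))  # output gate
    g = np.tanh(z[3*H:])               # candidate cell update
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def generate(seed_frames, params, W_out, b_out, n_generate):
    """Prime 3 stacked LSTM layers with the seed frames, then feed each
    predicted pose back in as the next input (autoregressive rollout)."""
    n_layers = len(params)
    H = params[0][2].shape[0] // 4
    h = [np.zeros(H) for _ in range(n_layers)]
    c = [np.zeros(H) for _ in range(n_layers)]
    out = []
    for t in range(len(seed_frames) + n_generate):
        x = seed_frames[t] if t < len(seed_frames) else out[-1]
        inp = x
        for l in range(n_layers):
            h[l], c[l] = lstm_step(inp, h[l], c[l], *params[l])
            inp = h[l]                 # output of layer l feeds layer l+1
        out.append(W_out @ inp + b_out)  # project hidden state to a pose
    return np.array(out[len(seed_frames):])

# Toy dimensions standing in for JHMDB poses: 15 joints x 2 coords = 30 dims
rng = np.random.default_rng(0)
D, H, L = 30, 64, 3
params = []
for l in range(L):
    d_in = D if l == 0 else H
    params.append((rng.normal(0, 0.1, (4*H, d_in)),   # input weights W
                   rng.normal(0, 0.1, (4*H, H)),       # recurrent weights U
                   np.zeros(4*H)))                     # bias b
W_out = rng.normal(0, 0.1, (D, H))
b_out = np.zeros(D)

seed = rng.normal(size=(5, D))          # 5 seed frames of an unseen motion
pred = generate(seed, params, W_out, b_out, n_generate=10)
print(pred.shape)                        # 10 generated poses of dimension 30
```

In the actual project the weights come from training in TensorFlow rather than random initialization; the sketch only shows how the seed-then-rollout loop is wired.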

The network was modeled in TensorFlow, and the project was implemented in Python.

Read the poster below for more detailed information.

