Key Frame Proposal Network for Efficient Pose Estimation in Videos
Abstract
Human pose estimation in video relies on local information by either estimating each frame independently or tracking poses across frames. In this paper, we propose a novel method that combines local approaches with global context. We introduce a lightweight, unsupervised key frame proposal network (KFPN) to select informative frames and a learned dictionary to recover the entire pose sequence from these frames. The KFPN speeds up pose estimation and provides robustness to bad frames with occlusion, motion blur, and illumination changes, while the learned dictionary provides global dynamic context. Experiments on the Penn Action and sub-JHMDB datasets show that the proposed method achieves state-of-the-art accuracy with a substantial speedup.
Keywords:
Fast human pose estimation in videos; key frame proposal network (KFPN); unsupervised learning

1 Introduction
Human pose estimation [belagiannis2017recurrent, Hourglass, pishchulin2013strong, deeppose, wei2016convolutional], which seeks to estimate the locations of human body joints, has many practical applications such as smart video surveillance [cristani2013human, park2008understanding], human-computer interaction [shotton2011real], and VR/AR [lin2010augmented].
The most general pose estimation pipeline extracts features from the input and then uses a classification/regression model to predict the locations of the joints. Recently, [sparselylabeled] introduced a Pose Warper capable of using a few manually annotated frames to propagate pose information across the complete video. However, it relies on annotated frames chosen at fixed intervals and thus fails to fully exploit the dynamic correlation between frames.
Here, we propose an alternative pose estimation pipeline based on two observations: not all frames are equally informative, and the motion of the body joints can be modeled using simple dynamics. The new pipeline uses a lightweight key frame proposal network (KFPN), shown in Fig. 1, to select a small number of frames on which to apply a pose estimation model. One of the main contributions of this paper is a new loss function, based on the recovery error in the latent feature space, for unsupervised training of this network. The second module of the pipeline is an efficient Human Pose Interpolation Module (HPIM), which uses a dynamics-based dictionary to obtain the pose in the remaining frames. Fig. 2 shows two sample outputs of our pipeline, where the poses shown in purple were interpolated from the automatically selected red key frames. The advantages of the proposed approach are:

It uses a very light, unsupervised model to select “important” frames.

It is highly efficient, since pose is estimated only at key frames.

It is robust to challenging conditions present in the non-key frames, such as occlusion, poor lighting conditions, motion blur, etc.

It can be used to reduce annotation efforts for supervised approaches by selecting which frames should be manually annotated.
2 Related Work
Image-Based Pose Estimation. Classical approaches use the structure and interconnectivity among the body parts and rely on handcrafted features. Currently, deep networks are used instead of handcrafted features. [chen2014articulated] used Deep Convolutional Neural Networks (DCNNs) to learn the conditional probabilities for the presence of parts and their spatial relationships. [yang2016end] combined the DCNN with an expressive mixture-of-parts model in an end-to-end framework. [chu2016structured] learned the correlations among body joints using an ImageNet pre-trained VGG16 base model. [wei2016convolutional] implicitly modeled long-range dependencies for articulated pose estimation. [Hourglass] proposed an “hourglass” architecture to handle large pixel displacements, opening a pathway to stacking features at different scales. [flownet, featurepriamid, PapandreouZKTTB17, Tang_dlcm, yang2017pyramid] made several improvements on multi-scale feature pyramids for estimating human pose. However, capturing sufficiently many scales is computationally expensive. [fastpose] proposed a teacher-student architecture to reduce network complexity and computational time. Finally, [openpose, PiPaf, nie2018mula] refined the locations of keypoints by exploiting the human body structure.
Video-Based Pose Estimation. Human pose estimation can be improved by capturing temporal and appearance information across frames. [simonyan2014two, song2017thin] use deep Convolutional Networks (ConvNets) with optical flow as input motion features. [pfister2015flowing] shows that an additional convolutional layer is able to learn a simpler model of the spatial human layout. [charles2016personalizing] improves on this work by demonstrating that joint estimates can be propagated from poses in the first few frames by integrating optical flow. Furthermore, tracking poses is another popular methodology, e.g. [poseTrack, simplebaseline], which can jointly refine the estimates. Others adopt Recurrent Neural Networks (RNNs) [luo2018lstm, gkioxari2016chained, 3Dlstm]. [gkioxari2016chained] shows that a sequence-to-sequence model can work for structured output prediction. A similar work [luo2018lstm] imposes sequential geometric consistency to handle image quality degradation. Despite their notable accuracy, RNN-based methods suffer from expensive computations. [nie2019dynamic] proposed to address this issue by using a lightweight distillator to distill pose kernels online, leveraging the temporal information among frames.
3 Proposed Approach
Fig. 1 shows the proposed architecture. Given $T$ consecutive frames, we aim to select a small number of frames that capture the global context and provide enough information to interpolate the poses in the entire video. This is challenging, since annotations for this task are usually unavailable. Next, we formulate this problem as the minimization of a loss function, which allows us to provide a set of optimal proposals deterministically and without supervision.
The main intuition behind the proposed architecture is that there is a high degree of spatial and temporal correlation in the data, which can be captured by a simple dynamics-based model. Then, key frames should be selected such that they are sufficient (but no more than strictly needed) to learn the dynamic model and recover the non-selected frames.
3.1 Atomic Dynamics-based Representation of Temporal Data
We will represent the dynamics of the input data using the dynamics-based atomic (DYAN) autoencoder introduced in [DYAN], where the atoms are the impulse responses of linear time invariant (LTI) systems, each with a pole $p_i$: $y_t = c p_i^t$, where $c$ is a constant and $t$ indicates time.^{1} The model uses $N$ atoms, collected as columns of a dictionary matrix $D_T \in \mathbb{R}^{T \times N}$:

^{1} Poles are in general complex numbers. A system with real outputs and a non-real pole $p$ must also have its conjugate pole $\bar{p}$: $y_t = c p^t + \bar{c} \bar{p}^t$.
$$D_T = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ p_1 & p_2 & \cdots & p_N \\ \vdots & \vdots & & \vdots \\ p_1^{T-1} & p_2^{T-1} & \cdots & p_N^{T-1} \end{bmatrix} \qquad (1)$$
Let $Y \in \mathbb{R}^{T \times P}$ be the input data matrix, where each column $y_p$ holds the temporal evolution of a data point (i.e. one coordinate of a human joint or the value of a feature, from time 1 to time $T$). Then, we represent $Y$ by a matrix $C \in \mathbb{R}^{N \times P}$ such that $Y = D_T C$, where the element $c_{np}$ indicates how much of the output of the $n$-th atom is used to recover the input data in $y_p$.
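To make the atomic representation concrete, here is a minimal NumPy sketch (our own illustration, not the paper's released code) that builds the dictionary of eq. (1) from a set of hypothetical poles and evaluates a trajectory lying in the span of two atoms:

```python
import numpy as np

def build_dictionary(T, poles):
    """Stack the impulse responses y_t = p_i^t of first-order LTI systems as
    columns: column i is (1, p_i, p_i^2, ..., p_i^{T-1}), as in eq. (1).
    Complex poles would come in conjugate pairs so real data can be
    represented; this toy example uses real poles only."""
    t = np.arange(T).reshape(-1, 1)                  # time indices 0 .. T-1
    return np.power(np.asarray(poles)[None, :], t)   # T x N dictionary D_T

# Hypothetical poles on/near the unit circle (N = 10 atoms, T = 8 frames).
T = 8
poles = [1.0, 0.9, 0.8, 0.7, 0.5, -0.5, -0.8, 0.99, 1.01, 0.3]
D = build_dictionary(T, poles)

# A trajectory y = D c driven by two atoms has a sparse code c.
c = np.zeros(len(poles))
c[1], c[5] = 2.0, -1.0                               # mix atoms p=0.9 and p=-0.5
y = D @ c                                            # length-T trajectory
```

Each column of `D` is one atom; a data matrix `Y` with several joint coordinates or features simply stacks one such trajectory per column.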
In [DYAN], the dictionary was learned from training data to predict future frames, by minimizing a loss function that penalized the reconstruction error of the input together with the $\ell_1$ norm of $C$, to promote the sparsity of $C$ (i.e. using as few atoms per data point as possible):
$$\min_{D_T, C} \; \|Y - D_T C\|_F^2 + \lambda \|C\|_1 \qquad (2)$$
In this paper, we propose a different loss function to learn $D_T$, which is better suited to the task of key frame selection. Furthermore, the learning procedure in [DYAN] requires solving a Lasso optimization problem for each input before it can evaluate the loss (2). In contrast, the loss function we derive in section 3.2 is computationally very efficient, since it does not require such an optimization step.
3.2 Key Frame Selection Unsupervised Loss
Given an input video with $T$ frames, consider the tensor of its deep features, with $c$ channels of width $w$ and height $h$, reshaped into a matrix $X \in \mathbb{R}^{T \times M}$, $M = cwh$. That is, the element $x_{tm}$ has the value of feature $m$, $m = 1, \dots, M$, at time $t$. Then, our goal is to select a subset of $K$ key frames, as small as possible, that captures the content of all the frames. Thus, we propose to cast this problem as finding a minimal subset of rows of $X$ (the key frames), such that it would be possible to recover the left-out frames (the other rows of $X$) using these few frames and their atomic dynamics-based representation.
Problem 1
Given a matrix of features $X \in \mathbb{R}^{T \times M}$ and an overcomplete dictionary $D_T \in \mathbb{R}^{T \times N}$, $N \gg T$, for which there exists an atomic dynamics-based representation $C$ such that $X = D_T C$, find a binary selection matrix $S \in \{0,1\}^{K \times T}$ with the least number of rows $K$, such that $X \approx D_T C_s$, where $C_s$ is the atomic dynamics-based representation of the selected key frames $SX$.
Problem 1 can be written as the following optimization problem:
$$\min_{K, S, C_s} \; \|X - D_T C_s\|_F^2 + \lambda K \qquad (3)$$
subject to:
$$S X = S D_T C_s \qquad (4)$$
$$s_{kt} \in \{0, 1\}, \qquad \sum_{t=1}^{T} s_{kt} = 1 \;\; \forall k, \qquad \sum_{k=1}^{K} s_{kt} \le 1 \;\; \forall t \qquad (5)$$
The first term in the objective (3) minimizes the recovery error, while the second term penalizes the number of frames selected. The constraint (4) establishes that $C_s$ should be the atomic dynamics-based representation of the key frames, and the constraints (5) force the binary selection matrix $S$ to select $K$ distinct frames. However, this problem is hard to solve, since the optimization variables are integer ($K$) or binary (the elements of $S$).
Next, we show how to obtain a relaxation of this problem, which is differentiable and suitable as an unsupervised loss function to train our key frame proposal network. The derivation has three main steps. First, we use the constraint (4) to replace $C_s$ with an expression that depends on $X$, $D_T$, and $S$. Next, we make a change of variables so that we do not have to minimize with respect to a matrix of unknown dimensions. Finally, we relax the binary variables to be real numbers between 0 and 1.
Eliminating $C_s$: Consider the atomic dynamics-based representation of $X$:
$$X = D_T C \qquad (6)$$
Multiplying both sides by $S$, defining $D_s = S D_T$, and using (4), we have:
$$S X = D_s C_s \qquad (7)$$
Noting that $D_T$ is an overcomplete dictionary, we select the solution for $C_s$ from (7) with minimum Frobenius norm, which can be found by solving:
$$\min_{C_s} \; \|C_s\|_F^2 \quad \text{subject to} \quad S X = D_s C_s \qquad (8)$$
The solution of this problem is:
$$C_s = D_s^T (D_s D_s^T)^{-1} S X \qquad (9)$$
since the rows of $D_T$ (see (1)) are linearly independent and hence the inverse of $D_s D_s^T$ exists. Substituting (9) into the first term of (3), we have:
$$\|X - D_T D_s^T (D_s D_s^T)^{-1} S X\|_F^2 \qquad (10)$$
Using the fact that $D_s = S D_T$ yields the following problem, equivalent to Problem 1:
$$\min_{K, S} \; \|X - D_T D_T^T S^T (S D_T D_T^T S^T)^{-1} S X\|_F^2 + \lambda K \qquad (11)$$
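The elimination steps (7)-(11) can be checked numerically. The sketch below is illustrative only (a random matrix stands in for the pole dictionary $D_T$); it computes the minimum-Frobenius-norm code of eq. (9) and verifies that it satisfies constraint (7) and matches the pseudo-inverse solution:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, M, K = 10, 40, 6, 4              # frames, atoms, features, key frames

D = rng.standard_normal((T, N))        # stand-in for the dictionary D_T
X = D @ rng.standard_normal((N, M))    # data with an exact representation X = D C

S = np.zeros((K, T))                   # binary selection of frames 0, 3, 6, 9
for row, frame in enumerate([0, 3, 6, 9]):
    S[row, frame] = 1.0

Ds = S @ D                             # reduced dictionary D_s = S D_T  (K x N)
Cs = Ds.T @ np.linalg.solve(Ds @ Ds.T, S @ X)         # eq. (9)

assert np.allclose(S @ X, Ds @ Cs)     # constraint (7): S X = D_s C_s
assert np.allclose(Cs, np.linalg.pinv(Ds) @ (S @ X))  # min-norm == pseudo-inverse
X_hat = D @ Cs                         # recovery of all T frames, as in (10)
```

Note that `X_hat` is in general only an approximation of `X`; the loss derived next scores exactly this recovery error.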
Minimizing with respect to a fixed-size matrix: Minimizing with respect to $S$ is difficult because one of its dimensions is $K$, which is a variable that we also want to minimize. To avoid this issue, we introduce an approximation trick, where we add a small perturbation $\delta > 0$ to the diagonal of $S D_T D_T^T S^T$:
$$(S D_T D_T^T S^T)^{-1} \approx (S D_T D_T^T S^T + \delta I_K)^{-1} \qquad (12)$$
and combine (12) with the Woodbury matrix identity
$$(A + UCV)^{-1} = A^{-1} - A^{-1} U (C^{-1} + V A^{-1} U)^{-1} V A^{-1}$$
by setting $A = \delta I_K$, $U = S D_T$, $C = I_N$, and $V = D_T^T S^T$, to get:
$$D_T^T S^T (S D_T D_T^T S^T + \delta I_K)^{-1} S = (D_T^T S^T S D_T + \delta I_N)^{-1} D_T^T S^T S \qquad (13)$$
Now, define $\Lambda = S^T S$, which is a matrix of fixed size $T \times T$. Furthermore, using the constraints (5), it is easy to show that $\Lambda$ is diagonal and that its $t$-th diagonal element is 1 if $S$ selects frame $t$ and 0 otherwise. Thus, the vector $r = \mathrm{diag}(\Lambda)$ is an indicator vector for the sought key frames, and the number of key frames is given by $K = \mathbf{1}^T r$. Therefore, the objective becomes:
$$\min_{r \in \{0,1\}^T} \; \|X - D_T (D_T^T \mathrm{diag}(r) D_T + \delta I_N)^{-1} D_T^T \mathrm{diag}(r) X\|_F^2 + \lambda \mathbf{1}^T r \qquad (14)$$
Note that the fact that the inverse is well defined follows from Woodbury's identity and the fact that $(D_T^T \Lambda D_T + \delta I_N)^{-1}$ exists, since $\delta > 0$ and $D_T^T \Lambda D_T$ is positive semidefinite.
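The equality (13) can also be sanity-checked numerically; in this illustrative snippet a random matrix again stands in for $D_T$:

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, K, delta = 10, 40, 4, 1e-4

D = rng.standard_normal((T, N))        # stand-in for D_T
S = np.eye(T)[[0, 3, 6, 9]]            # selects K = 4 of the T = 10 frames
Ds = S @ D

# Left: the K x K form appearing in (11)-(12).
lhs = D.T @ S.T @ np.linalg.solve(Ds @ Ds.T + delta * np.eye(K), S)
# Right: the equivalent N x N form (13), depending on S only through S^T S.
rhs = np.linalg.solve(D.T @ (S.T @ S) @ D + delta * np.eye(N), D.T @ S.T @ S)

assert np.allclose(lhs, rhs)           # the two expressions agree
```

The right-hand side is what makes the relaxation possible: $S$ enters only through the $T \times T$ diagonal matrix $S^T S$, whose diagonal is the indicator vector $r$.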
Relaxing the binary constraints: Finally, we relax the binary constraints on the elements of the indicator vector $r$ and let them be real numbers between 0 and 1. We now have the differentiable objective function:
$$L(r) = \|X - D_T (D_T^T \mathrm{diag}(r) D_T + \delta I_N)^{-1} D_T^T \mathrm{diag}(r) X\|_F^2 + \lambda \mathbf{1}^T r, \qquad 0 \le r_t \le 1 \qquad (15)$$
where the only unknown is $r$. Then, we can use the loss function:
$$\ell(r) = \|X - D_T (D_T^T \mathrm{diag}(r) D_T + \delta I_N)^{-1} D_T^T \mathrm{diag}(r) X\|_F^2 + \lambda \mathbf{1}^T r \qquad (16)$$
where the vector $r$ should be the output of a sigmoid layer, in order to push its elements towards binary values (see section 3.4 for more details).
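As a sketch of eqs. (15)/(16) (the name `key_frame_loss` is ours, not the released implementation, and the dictionary here is again a random stand-in), the relaxed loss can be evaluated in a few lines:

```python
import numpy as np

def key_frame_loss(r, X, D, lam=0.1, delta=1e-4):
    """Relaxed unsupervised loss of eqs. (15)/(16): recovery error of all
    frames from the (softly) selected ones, plus lam times the soft
    key-frame count 1^T r.
    r : (T,) soft indicator in [0, 1] (e.g. sigmoid outputs)
    X : (T, M) feature matrix;  D : (T, N) dynamics-based dictionary."""
    Lam = np.diag(r)                            # diag(r) plays the role of S^T S
    N = D.shape[1]
    G = np.linalg.solve(D.T @ Lam @ D + delta * np.eye(N), D.T @ Lam @ X)
    recovery = np.linalg.norm(X - D @ G) ** 2   # squared Frobenius recovery error
    return recovery + lam * r.sum()             # lam * (soft) number of key frames
```

Selecting no frames (`r = 0`) pays the full reconstruction error $\|X\|_F^2$, while selecting every frame drives the first term to nearly zero but pays $\lambda T$; training trades off the two.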
3.3 Human Pose Interpolation
Given a video with $T$ frames, let $P_s \in \mathbb{R}^{K \times 2J}$ be the 2D coordinates of the $J$ human joints for the $K$ key frames, $S$ be the associated selection matrix, and $D_T$ be a dynamics-based dictionary trained on skeleton sequences using a DYAN autoencoder [DYAN]. Then, the Human Pose Interpolation Module (HPIM) finds the skeletons for the entire sequence, which can be efficiently computed as follows. First, use the reduced dictionary $D_s = S D_T$ and (9) to compute the minimum Frobenius norm atomic dynamics-based representation for the key frame skeletons: $C_s = D_s^T (D_s D_s^T)^{-1} P_s$. Then, using the complete dictionary $D_T$, the entire skeleton sequence is given by:
$$P = D_T C_s = D_T D_s^T (D_s D_s^T)^{-1} P_s \qquad (17)$$
where the matrix $D_T D_s^T (D_s D_s^T)^{-1}$ can be computed ahead of the pose interpolation step, as soon as the key frames have been selected.
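Eq. (17) amounts to a single matrix product per sequence. A minimal NumPy sketch of the HPIM step (illustrative only; `interpolate_poses` is our name, and a random matrix stands in for the trained pose dictionary):

```python
import numpy as np

def interpolate_poses(P_key, key_idx, D):
    """Recover all T skeletons from key-frame skeletons via eq. (17).
    P_key   : (K, 2J) 2D joint coordinates at the K key frames
    key_idx : indices of the key frames within the T-frame clip
    D       : (T, N) dynamics-based pose dictionary."""
    Ds = D[key_idx]                                   # reduced dictionary S D_T
    Cs = Ds.T @ np.linalg.solve(Ds @ Ds.T, P_key)     # min-norm code, eq. (9)
    return D @ Cs                                     # full sequence, eq. (17)

rng = np.random.default_rng(2)
T, N, J, key_idx = 12, 30, 13, [0, 4, 8, 11]
D = rng.standard_normal((T, N))                       # stand-in for the dictionary
P_key = rng.standard_normal((len(key_idx), 2 * J))    # poses at the key frames
P = interpolate_poses(P_key, key_idx, D)              # (T, 2J) interpolated poses
```

By construction the interpolation passes exactly through the key-frame poses (`P[key_idx] == P_key`); the remaining rows are filled in by the dictionary's dynamics.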
3.4 Architecture, Training, and Inference
Fig. 3 shows the architecture of the KFPN, which is trained completely unsupervised by minimizing the loss (16). It consists of two Conv2D modules (Conv + BN + ReLU) followed by a Fully Connected (FC) layer and a Sigmoid layer. The first Conv2D downsizes the input feature tensor, while the second one uses the temporal dimension as input channels. The output of the FC layer is forced by the Sigmoid layer to values close to either 0 or 1, where a ‘1’ indicates a key frame and its index indicates which one. Inspired by [quantization], we utilize a control parameter $\beta$ to form a customized classification layer, $\sigma_\beta(x) = 1/(1 + e^{-\beta x})$, where $\beta$ is linearly increased with the training epoch. By controlling $\beta$, the output of the KFPN becomes nearly a binary indicator, such that the sum of its elements is the total number of key frames. The training and inference procedures are summarized in Algorithms 1 to 3, and code is available at https://github.com/Yuexiaoxi10/KeyFrameProposalNetworkforEfficientPoseEstimationinVideos.
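The annealed Sigmoid described above can be sketched as follows (a simple interpretation of the control parameter; the exact schedule in the released code may differ):

```python
import numpy as np

def annealed_sigmoid(x, beta):
    """Sigmoid with gain beta: as beta grows over training epochs, the
    outputs are pushed toward the binary values {0, 1}, turning soft
    frame scores into (nearly) hard key-frame indicators."""
    return 1.0 / (1.0 + np.exp(-beta * x))

x = np.array([-2.0, -0.1, 0.1, 2.0])      # hypothetical FC-layer outputs
soft = annealed_sigmoid(x, beta=1.0)      # early training: graded scores
hard = annealed_sigmoid(x, beta=50.0)     # late training: nearly binary
num_key_frames = hard.sum()               # approx. number of selected frames
```

With a small gain the gradient flows through all frames; as the gain grows, the layer commits to a near-binary selection whose sum approximates $K$.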
3.5 Online Key Frame Detection
The proposed KFPN can be modified to process incoming frames online, after a minimum set of initial frames has been processed. To do this, we add a discriminator module, shown in Fig. 4, consisting of four (Conv2D + BN + ReLU) blocks, which decides whether an incoming frame should be selected as a key frame. The discriminator is trained to distinguish between the features of the incoming frame and the features predicted from the set of key frames selected so far; the latter are easily generated by multiplying the atomic dynamics-based representation of the current key frames with the associated dynamics-based dictionary, extended with an additional row (since the number of frames has increased by one) [DYAN]. The reasoning behind this design is that when the features of the new frame cannot be predicted correctly, the frame must bring novel information and hence should be incorporated as a key frame.
4 Experiments
Following [luo2018lstm, nie2019dynamic], we evaluated the KFPN on two widely used public datasets: Penn Action [penn_action] and sub-JHMDB [Jhuang:ICCV:2013]. Penn Action is a large-scale benchmark depicting human daily activities in unconstrained videos. It has 2326 video clips of varying length, with 1258 reserved for training and 1068 for testing. It provides 13 annotated joint positions for each frame, as well as their visibilities. Following common convention, we only considered the visible joints for evaluation. sub-JHMDB [Jhuang:ICCV:2013] has 319 video clips in three different splits, with a training-to-testing ratio of roughly 3:1. It provides 15 joint annotations for each human body, but only annotates visible joints. Following [luo2018lstm, nie2019dynamic, song2017thin], the evaluation is reported as the average precision over all splits.
We adopted the ResNet family [Resnet] as our feature encoder and evaluated our method as the depth was varied from 18 to 101 (see subsection 4.3). During training, we froze the ResNet-X backbone, where X ∈ {18, 34, 50, 101}, and trained our KFPN only on the features output by the encoder. Following [nie2019dynamic], we adopted the pre-trained model from [simplebaseline] as our pose estimator. In our experiments, we applied a specific model trained on the MPII [mpii] dataset with ResNet-101. However, unlike previous work [nie2019dynamic], we did not do any fine-tuning on either dataset. To complete the experiments, we split the training set into training and validation parts with a rough ratio of 10:1 and used the validation split to validate our model during training. The learning rate of the KFPN for both datasets was set to 1e-8, and we used 1e-4 for the online-updating experiment. The ratio between the two terms in our loss function (16) is approximately 1:2 for Penn Action and 3:1 for sub-JHMDB.
The KFPN and HPIM dictionaries were initialized as in [DYAN], with 40 rows for both datasets. Since videos vary in length, we added dummy frames when they had fewer than 40 frames. For clips longer than 40 frames, we randomly selected 40 consecutive frames as our input during training and used a sliding window of size 40 during testing, in order to evaluate the entire input sequence.
4.1 Data Preprocessing and Evaluation Metrics
We followed conventional data pre-processing strategies. Input images were resized to 3x224x224 and normalized using the parameters provided by [Resnet]. After that, in order to obtain better pose estimates from the pose model, we used the person bounding box to crop each image and pad it to 384x384, with a scaling factor varying from 0.8 to 1.4. The Penn Action dataset provides such annotations, while sub-JHMDB does not. Therefore, we generated the person bounding box for each image using the person mask described in [luo2018lstm].
Following [nie2019dynamic, luo2018lstm, song2017thin], we evaluated our performance using the PCK score [Yang&Ramanan]: a body joint is considered correct only if it falls within $\alpha \cdot \max(h, w)$ pixels of the ground truth, where $h$ and $w$ denote the height and width of the person bounding box and $\alpha$ controls the threshold that determines how precise the estimate must be. We follow convention and set $\alpha = 0.2$.
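For reference, a straightforward implementation of this metric (our own sketch, with the threshold taken as $\alpha \cdot \max(h, w)$ as is common for these benchmarks):

```python
import numpy as np

def pck_score(pred, gt, visible, box_h, box_w, alpha=0.2):
    """Fraction of visible joints whose prediction falls within
    alpha * max(box_h, box_w) pixels of the ground truth.
    pred, gt : (J, 2) joint coordinates;  visible : (J,) boolean mask."""
    thresh = alpha * max(box_h, box_w)
    dist = np.linalg.norm(pred - gt, axis=1)        # per-joint pixel error
    correct = (dist <= thresh) & visible
    return correct.sum() / max(visible.sum(), 1)    # guard against empty masks

# Toy example: 3 joints, a 100 x 50 person box, so the threshold is 20 px.
gt = np.array([[0.0, 0.0], [10.0, 10.0], [20.0, 20.0]])
pred = np.array([[1.0, 0.0], [10.0, 40.0], [20.0, 21.0]])
vis = np.array([True, True, True])
score = pck_score(pred, gt, vis, box_h=100, box_w=50)
```

Invisible joints are simply excluded from both the numerator and the denominator, matching the convention of only evaluating visible joints.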
Our full framework consists of three steps: given an input video of length $T$, the KFPN first samples $K$ key frames; then, pose estimation is done on these frames; and finally, the HPIM interpolates these results for the full sequence. The reported running times are the aggregated time for these three steps. All running times, for all methods, were computed on an NVIDIA GTX 1080ti.
4.2 Qualitative Examples
4.3 Ablation Studies
In order to evaluate the effectiveness of our approach, we conducted ablation studies on the validation split for each dataset.
Backbone Selection. We tested the KFPN using different backbones from the ResNet family. Since sub-JHMDB is not a large dataset, we believe that our KFPN would easily overfit when using deeper feature maps; thus, we did not apply ResNet-101 on this dataset. Table 1 summarizes the results of this study, where we report running time (ms) and FLOPs (G) along with PCK scores (higher is better) and the average number of selected key frames. These results show that the smaller networks provide faster speed with only minor performance degradation. Based on these results, for the remaining experiments we used the best model on the validation set: ResNet-34 for Penn Action and ResNet-18 for sub-JHMDB.
Number of Key Frames Selection. To evaluate the selectivity of the KFPN, we randomly picked validation instances, ran the KFPN (using the Penn Action validation set with ResNet-34), and recorded the number of key frames selected for each instance. Given the number of key frames $K$, theoretically, one could determine the best selection by evaluating the PCK score for each possible combination of $K$ out of $T$ frames. Since it is infeasible to run that many combinations, we tried two alternatives: i) selecting $K$ frames by uniformly sampling the sequence (Uniform Sample), and ii) randomly sampling 100 out of all possible combinations and keeping the one with the best PCK score (Best Sample). Table 2 compares the average PCK score using the KFPN against Uniform Sampling and Best Random Sampling. From [TempoBai], it follows that, with high confidence, the best PCK score over 100 random subsets provides a good estimate of the unknown optimum over the set of all possible combinations. Thus, our unsupervised approach indeed achieves performance very close to the theoretical optimum.
Key Frames Selection Method:

        KFPN   Best Sample   Uniform Sample
PCK     98.0   96.4          79.3
Online Key Frame Selection. We compared the performance of batch versus online key frame selection. All evaluations were done on the sub-JHMDB dataset. In this experiment, we use an initial set of frames to select an initial set of key frames (using “batch” mode) and process the following frames using online detection. We compare the achieved PCK score and the number of selected frames against the results obtained using the batch approach on all frames. The results of this experiment are shown in Table 3 and Fig. 6. This experiment shows that, on one hand, using batch mode, shorter videos have a better PCK score than longer ones. This is because the beginning of an action is often simple (i.e. there is little motion at the start) and is well represented with very few key frames. On the other hand, online updating performs as well as batch mode, as long as the initial set of frames is big enough. This can be explained by the fact that if the initial set is too small, there is not enough information to predict frames far into the future, making it difficult to decide whether a new frame should be selected.
4.4 Comparison Against the State-of-the-Art
Comparisons against the state-of-the-art are reported in Table 4. We report our performance using ResNet-34 for Penn Action and ResNet-18 for sub-JHMDB, and also using ResNet-50, since it is the backbone used by [nie2019dynamic]. Our approach achieves the best performance and is 1.6X faster (6.8 ms vs. 11 ms) than the previous state-of-the-art [nie2019dynamic] on the Penn Action dataset, using an average of 17.5 key frames. Moreover, with our lightest model (ResNet-34), our approach is 2X faster than [nie2019dynamic] with only a minor PCK degradation. For the sub-JHMDB dataset, [nie2019dynamic] did not provide running times and is not open-sourced; thus, we compare running time against the best available open-sourced method [luo2018lstm]. Our approach performed the best of all methods, with a significant improvement on elbow (95.3%) and wrist (91.3%). For completeness, we also compared against the baseline [simplebaseline], a frame-based method, on both datasets. We observe that with our lightest model, we run more than 2X faster than [simplebaseline] without any degradation in accuracy.
4.5 Robustness of Our Approach
We hypothesize that our approach can achieve better performance than previous approaches using fewer input frames because the network selects “good” input frames, which are more robust when used with the frame-based method [simplebaseline]. To quantify this, we ran an experiment where we randomly partially occluded, blurred, or changed the illumination of random frames in the sub-JHMDB dataset. Table 5 shows that our approach (using ResNet-18) is more robust to all of these perturbations than [simplebaseline].
5 Conclusion
In this paper, we introduced a key frame proposal network (KFPN) and a human pose interpolation module (HPIM) for efficient video-based pose estimation. The proposed KFPN identifies the dynamically informative frames in a video, which allows an image-based pose estimation model to focus on only a few “good” frames instead of the entire video. With a suitably learned pose dynamics-based dictionary, we show that the entire pose sequence can be recovered by the HPIM using only the pose information from the frames selected by the KFPN. The proposed method achieves better (similar) accuracy than current state-of-the-art methods using only 60% (50%) of the inference time.
Acknowledgements
This work was supported by NSF grants IIS–1814631 and ECCS–1808381, and by the ALERT DHS Center of Excellence under Award Number 2013-ST-061-ED0001. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the U.S. Department of Homeland Security.