No frame left behind: Full Video Action Recognition

Figure 1: We efficiently utilize all frames for training without missing important information.

Abstract

Not all video frames are equally informative for recognizing an action, yet it is computationally infeasible to train deep networks on all video frames when actions develop over hundreds of frames. A common heuristic is to uniformly sample a small number of video frames and use these to recognize the action. Instead, here we propose full video action recognition and consider all video frames. To make this computationally tractable, we first cluster all frame activations along the temporal dimension based on their similarity with respect to the classification task, and then temporally aggregate the frames in each cluster into a smaller number of representations. Our method is end-to-end trainable and computationally efficient, as it relies on temporally localized clustering in combination with fast Hamming distances in feature space. We evaluate on UCF101, HMDB51, Breakfast, and Something-Something V1 and V2, where we compare favorably to existing heuristic frame sampling methods.
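The core idea in the abstract (cluster temporally adjacent frames by feature similarity using fast Hamming distances, then aggregate each cluster into one representation) can be illustrated with a minimal NumPy sketch. This is not the paper's end-to-end trainable method: the function name, the sign-based binarization, and the boundary-selection heuristic are our own simplifying assumptions for illustration.

```python
import numpy as np

def temporal_cluster_pool(features, num_clusters):
    """Illustrative sketch (not the paper's trainable method):
    group temporally adjacent frames by feature similarity, then
    average-pool each group into a single representation.

    features: (T, D) array of per-frame activations, T >= num_clusters.
    Returns a (num_clusters, D) array of pooled representations.
    """
    if num_clusters < 2:
        # Degenerate case: pool the whole video into one representation.
        return features.mean(axis=0, keepdims=True)

    # Binarize activations so similarity reduces to a fast Hamming distance.
    codes = features > 0  # (T, D) boolean codes

    # Hamming distance between each pair of temporally adjacent frames.
    dists = np.count_nonzero(codes[1:] != codes[:-1], axis=1)  # (T-1,)

    # Place cluster boundaries at the (num_clusters - 1) largest adjacent
    # distances; frames between boundaries are merged into one cluster.
    boundaries = np.sort(np.argsort(dists)[-(num_clusters - 1):] + 1)
    segments = np.split(features, boundaries)

    # Temporally aggregate each cluster by average pooling.
    return np.stack([seg.mean(axis=0) for seg in segments])
```

In the paper the clustering is driven by similarity with respect to the classification task and trained end to end; here the split is a fixed greedy heuristic, which only conveys why binarized features and Hamming distances keep the aggregation cheap.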

Publication
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021)
Xin Liu
PhD candidate
Silvia L. Pintea
Assistant Professor
Jan van Gemert
Associate Professor, Head of the CV Lab