Affiliation: Peking University
In this paper, we explore a more generalized form of video frame interpolation: interpolation at an arbitrary time step. To this end, we process different time steps in a unified way using adaptively generated convolutional kernels, with the help of meta-learning. Specifically, we develop a dual meta-learned frame interpolation framework that synthesizes intermediate frames under the guidance of context information and optical flow, taking the time step as side information. Extensive qualitative and quantitative evaluations demonstrate that our method not only outperforms state-of-the-art frame interpolation approaches but also supports interpolation at an arbitrary time step.
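The core idea above — generating convolution kernels conditioned on the time step — can be sketched as follows. This is a minimal illustrative toy, not the paper's actual architecture: a small meta-network (here a one-hidden-layer MLP with random weights, names like `meta_kernels` and `interpolate` are hypothetical) maps a time step t in (0, 1) to one kernel per input frame, and the intermediate frame is a t-weighted blend of the two convolved frames.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 3  # kernel size (illustrative)

# Meta-network parameters (a tiny one-hidden-layer MLP), randomly
# initialized here; in the paper these would be learned.
W1, b1 = rng.normal(size=(16, 1)), np.zeros(16)
W2, b2 = rng.normal(size=(2 * K * K, 16)), np.zeros(2 * K * K)

def meta_kernels(t):
    """Generate one KxK kernel per input frame, conditioned on time step t."""
    h = np.tanh(W1 @ np.array([t]) + b1)
    k = (W2 @ h + b2).reshape(2, K, K)
    # Softmax over each kernel so its weights sum to 1 (convex blending).
    k = np.exp(k - k.max(axis=(1, 2), keepdims=True))
    return k / k.sum(axis=(1, 2), keepdims=True)

def conv2d(img, ker):
    """Valid-mode 2-D convolution (no padding), plain loops for clarity."""
    H, W = img.shape
    out = np.zeros((H - K + 1, W - K + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + K, j:j + K] * ker)
    return out

def interpolate(frame0, frame1, t):
    """Blend time-step-conditioned convolutions of the two input frames."""
    k0, k1 = meta_kernels(t)
    return (1 - t) * conv2d(frame0, k0) + t * conv2d(frame1, k1)

frame0 = rng.random((8, 8))
frame1 = rng.random((8, 8))
mid = interpolate(frame0, frame1, 0.5)
print(mid.shape)  # valid-mode conv on 8x8 input gives (6, 6)
```

Because t is an input to the kernel generator rather than a fixed architectural choice, the same model can be queried at any t, which is what enables arbitrary-time-step interpolation.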