    Presenter(s)
    Yanchen Zhao, Institute of Digital Media, Peking University, China
    Author(s)
    Yanchen Zhao, Institute of Digital Media, Peking University
    Kai Lin, Peking University
    Shanshe Wang, Peking University
    Siwei Ma, Peking University
    Abstract

    In this paper, a model-selection-based multi-scale convolutional neural network (CNN) in-loop filter is proposed for Versatile Video Coding (VVC). We propose a novel network that jointly filters the luminance and chrominance components in the coding loop to improve the quality of the reconstructed frames. The proposed model consists of two main branches operating at different scales, together with a global identity connection; each branch contains several residual blocks. The residual features of the two branches are fused by a convolutional block attention module (CBAM) at the end of the model. In addition, we adopt a model selection strategy that chooses the best coding tree unit (CTU) level model in terms of R-D cost at the encoder. Compared with VTM-14.0, our method achieves 5.17%, 11.05%, and 10.89% BD-rate reductions under the all-intra (AI) configuration, and 6.56%, 13.14%, and 11.98% BD-rate reductions under the random-access (RA) configuration. Moreover, under the RA configuration, the proposed model selection strategy brings an extra 0.34% BD-rate reduction in the luminance component.
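    The CTU-level model selection described above can be sketched as a standard Lagrangian R-D decision: for each CTU, evaluate every candidate filter model (including disabling the filter) and keep the one with the lowest cost J = D + λR. The sketch below is a hypothetical illustration of that selection step, not the paper's actual implementation; all function names and numbers are assumptions for demonstration.

    ```python
    # Hypothetical sketch of CTU-level model selection by R-D cost.
    # Each candidate is (model_id, distortion, signalling_bits); a model_id
    # of None stands for switching the CNN filter off for this CTU.

    def rd_cost(distortion: float, rate_bits: float, lam: float) -> float:
        """Lagrangian rate-distortion cost: J = D + lambda * R."""
        return distortion + lam * rate_bits

    def select_ctu_model(candidates, lam: float):
        """Return the (model_id, cost) pair with the lowest R-D cost."""
        best_id, best_cost = None, float("inf")
        for model_id, dist, bits in candidates:
            cost = rd_cost(dist, bits, lam)
            if cost < best_cost:
                best_id, best_cost = model_id, cost
        return best_id, best_cost

    # Illustrative numbers: model 1 lowers distortion enough to justify
    # its signalling overhead at this lambda, so it is selected.
    choice, cost = select_ctu_model(
        [(None, 120.0, 0.0), (0, 100.0, 2.0), (1, 90.0, 2.0)], lam=4.0
    )
    ```

    At the decoder, only the selected model index per CTU needs to be parsed, so the overhead is the few signalling bits accounted for in the rate term.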

    Slides
    • Joint Luma and Chroma Multi-Scale CNN In-Loop Filter for Versatile Video Coding (application/pdf)