    Details
    Presenter(s)
    Vladan Velisavljevic
    Affiliation
    University of Bedfordshire
    Abstract

    Three-dimensional (3D) imaging has contributed significantly to immersive multimedia technologies such as free-viewpoint television, virtual and augmented reality, and six-degrees-of-freedom video. In these applications, virtual view synthesis plays a critical role in letting a user navigate the 3D immersive environment: virtual viewpoints are synthesized from a small subset of captured (camera) viewpoints. However, despite its success in providing a seamless visual experience, real-time delivery of high perceptual quality in 3D imaging remains challenging because of the huge volume of data to be transmitted and processed and the heavy computational load involved. In this talk, we present an analysis of the virtual view quality constraints imposed by signal compression distortion and by refocussing to variable view depths.

    The first part of the talk addresses compression of the large volume of data required to deliver an acceptable virtual view quality. Each captured viewpoint consists of a texture component and a depth component, which the virtual view synthesis system uses to create a new viewpoint. Based on the rate-distortion characteristics of the compressed components, we model the virtual view distortion so that distortion bounds can be estimated without significant computational cost and used to optimize the configuration of the compression system.

    The second part of the talk tackles scene refocussing in 3D images captured by plenoptic cameras. This special type of camera captures a 3D image signal at multiple viewpoints in a single shot with a single device, making it more efficient than a standard calibrated camera array. The captured signal supports virtual view synthesis within a range of viewpoint locations and refocussing to various scene depths.
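The view synthesis step described above, warping texture pixels to a new viewpoint using the depth component, can be illustrated with a minimal depth-image-based rendering sketch. This is our own simplified illustration, not the talk's method: it assumes a purely horizontal camera shift, a pinhole model, and nearest-pixel warping with a z-buffer, and all function and parameter names are hypothetical.

```python
import numpy as np

def synthesize_view(texture, depth, baseline, focal_length):
    """Forward-warp a reference view to a horizontally shifted virtual
    viewpoint using its per-pixel depth map (minimal DIBR sketch).

    texture : (H, W, 3) reference colour image
    depth   : (H, W) per-pixel scene depth (same units as baseline)
    baseline: horizontal offset between reference and virtual camera
    """
    h, w = depth.shape
    # Pinhole-model disparity in pixels: d = f * b / Z.
    disparity = focal_length * baseline / depth
    virt = np.zeros_like(texture)
    zbuf = np.full((h, w), np.inf)
    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.round(xs - disparity).astype(int)  # target columns
    valid = (xt >= 0) & (xt < w)
    for y, x, tx in zip(ys[valid], xs[valid], xt[valid]):
        # Z-buffer: when two pixels land on the same target column,
        # keep the one closer to the camera.
        if depth[y, x] < zbuf[y, tx]:
            zbuf[y, tx] = depth[y, x]
            virt[y, tx] = texture[y, x]
    return virt
```

In practice, unfilled target pixels (disocclusions) would be inpainted and sub-pixel warping would be used; those steps are omitted here.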
We present an analysis of scene depth estimation based on the captured plenoptic signal, which is used to enable refocussing.
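One common way to refocus a plenoptic capture, once it has been decoded into an array of sub-aperture views, is shift-and-sum: each view is shifted in proportion to its angular offset and the views are averaged, which brings one depth plane into focus. The following is a minimal sketch under our own assumptions (a 4-D `L[u, v, y, x]` array layout and an `alpha` shift parameter, both hypothetical, not taken from the talk).

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-sum refocusing of a 4-D light field L[u, v, y, x],
    where (u, v) index the sub-aperture views. `alpha` is the shift in
    pixels per unit angular step; alpha = 0 keeps the captured focal
    plane, other values focus on nearer or farther depth planes.
    """
    nu, nv, h, w = light_field.shape
    uc, vc = (nu - 1) / 2.0, (nv - 1) / 2.0
    out = np.zeros((h, w), dtype=float)
    for u in range(nu):
        for v in range(nv):
            dy = int(round(alpha * (u - uc)))
            dx = int(round(alpha * (v - vc)))
            # np.roll is a simple wrap-around stand-in for the
            # sub-pixel interpolated shifts used in practice.
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (nu * nv)
```

Scene depth estimation, the subject of this part of the talk, can then select a per-pixel `alpha` so that each region is rendered in focus.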

    Slides
    • Compression distortion and refocussing analysis in 3D imaging (PDF)
    Chair(s)
    Hyungtak Kim
    Affiliation
    Hongik University