Video s3
    Details
    Presenter(s)
    Manjula Narayanaswamy
    Affiliation
    Robert Gordon University
    Abstract

    A low-complexity wavelet-based visual saliency model is proposed to predict the regions of human eye fixation in images using bottom-up features. Unlike existing wavelet-based saliency detection models, the proposed model requires only two channels for saliency computation: luminance (Y) and chrominance (Cr) in the YCbCr colour space. These two channels are decomposed to their lowest resolution using the Discrete Wavelet Transform (DWT) to extract local contrast features at multiple scales. The reconstructed local details are integrated using a 2D entropy-based combination scheme to derive a combined map. The combined map is normalised and enhanced using a natural logarithm transformation to derive the final saliency map. The proposed model has been tested on two large public image datasets, and the experimental results show that it achieves better prediction accuracy with a significant reduction in complexity compared to existing benchmark models.

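    The following is a minimal sketch of the pipeline described in the abstract, assuming OpenCV (cv2) and PyWavelets (pywt) for colour conversion and wavelet decomposition. The wavelet choice ('haar'), the histogram-entropy weighting, and all function names are illustrative assumptions, not the authors' exact implementation.

    ```python
    # Sketch of a low-complexity wavelet-based saliency pipeline (assumptions noted above).
    import cv2
    import numpy as np
    import pywt


    def channel_saliency(channel, wavelet="haar"):
        """Multi-scale local contrast from a full DWT decomposition of one channel."""
        max_level = pywt.dwt_max_level(min(channel.shape), pywt.Wavelet(wavelet).dec_len)
        coeffs = pywt.wavedec2(channel, wavelet, level=max_level)
        # Zero the approximation band so only the local detail (contrast) is reconstructed.
        coeffs[0] = np.zeros_like(coeffs[0])
        details = pywt.waverec2(coeffs, wavelet)
        return np.abs(details[: channel.shape[0], : channel.shape[1]])


    def entropy_weight(feature_map, bins=256):
        """Weight a feature map by the Shannon entropy of its intensity histogram (assumption)."""
        hist, _ = np.histogram(feature_map, bins=bins, density=True)
        hist = hist[hist > 0]
        hist = hist / hist.sum()
        return -np.sum(hist * np.log2(hist))


    def saliency_map(bgr_image):
        ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb).astype(np.float64)
        y, cr = ycrcb[:, :, 0], ycrcb[:, :, 1]           # only Y and Cr are used
        maps = [channel_saliency(y), channel_saliency(cr)]
        weights = [entropy_weight(m) for m in maps]
        combined = sum(w * m for w, m in zip(weights, maps)) / (sum(weights) + 1e-12)
        combined = (combined - combined.min()) / (np.ptp(combined) + 1e-12)  # normalise to [0, 1]
        return np.log1p(combined)                        # natural-logarithm enhancement
    ```
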
    Slides
    • A Low-Complexity Wavelet-Based Visual Saliency Model to Predict Fixations (PDF)