    Details

    Author(s): Elvira Fleig, Jonas Geistert, Erik Bochinski, Rolf Jongebloed, Thomas Sikora
    Affiliation: Technische Universität Berlin
    Abstract

    Steered-Mixtures-of-Experts (SMoE) models provide sparse, edge-aware representations whose performance is competitive with state-of-the-art image compression. Unfortunately, the iterative model-building process comes with excessive computational demands. We introduce an edge-aware Autoencoder (AE) designed to avoid the time-consuming iterative optimization of SMoE models. It directly maps pixel blocks to model parameters for compression, in a spirit similar to recent work on “unfolding” of algorithms, while maintaining full compatibility with the established SMoE framework. With our AE, we achieve a quantum leap in performance: encoder run-time savings by a factor of 500 to 1000, with even improved image reconstruction quality.
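To make the decoder side concrete, the following is a minimal NumPy sketch of how a pixel block can be reconstructed from SMoE parameters. It assumes the common formulation with Gaussian steering kernels, softmax gating, and (for simplicity) constant experts; the function and parameter names (`smoe_reconstruct`, `centers`, `precisions`, `expert_values`) are illustrative, not from the paper. The AE proposed in the paper would regress exactly such parameters from a pixel block in a single forward pass, replacing iterative fitting.

```python
import numpy as np

def smoe_reconstruct(coords, centers, precisions, expert_values):
    """Reconstruct pixel values of a block from SMoE parameters.

    coords:        (P, 2) pixel coordinates of the block
    centers:       (K, 2) kernel centers (means)
    precisions:    (K, 2, 2) per-kernel precision (inverse covariance) matrices
    expert_values: (K,) constant intensity of each expert (simplified experts)
    """
    diff = coords[:, None, :] - centers[None, :, :]                # (P, K, 2)
    # Mahalanobis-style exponent for each pixel/kernel pair
    expo = -0.5 * np.einsum('pki,kij,pkj->pk', diff, precisions, diff)
    # Softmax gating: kernels compete, yielding sharp, edge-aware boundaries
    g = np.exp(expo - expo.max(axis=1, keepdims=True))
    weights = g / g.sum(axis=1, keepdims=True)                     # (P, K)
    # Each pixel is a convex combination of the expert values
    return weights @ expert_values                                 # (P,)

# Toy usage: an 8x8 block split by two kernels on either side of an edge
xs, ys = np.meshgrid(np.arange(8), np.arange(8), indexing='ij')
coords = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
centers = np.array([[2.0, 3.5], [6.0, 3.5]])       # one kernel per region
precisions = np.stack([np.eye(2), np.eye(2)])      # isotropic for the demo
block = smoe_reconstruct(coords, centers, precisions,
                         np.array([0.1, 0.9])).reshape(8, 8)
```

With these toy parameters the reconstruction forms a soft edge between the two kernel regions; anisotropic precision matrices would steer the kernels along image edges, which is where the "edge-aware" sparsity of SMoE comes from.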