Steered-Mixtures-of-Experts (SMoE) models provide sparse, edge-aware image representations whose compression performance is competitive with the state of the art. Unfortunately, the iterative model-building process comes with excessive computational demands. We introduce an edge-aware Autoencoder (AE) that avoids the time-consuming iterative optimization of SMoE models by directly mapping pixel blocks to model parameters for compression, similar in spirit to recent work on "unfolding" of algorithms, while maintaining full compatibility with the established SMoE framework. With our AE, we achieve a quantum leap in performance, reducing encoder run-time by a factor of 500 to 1000 while even improving image reconstruction quality.
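To make the decoder side concrete: an SMoE model represents a pixel block as a soft, gated mixture of kernels, so reconstruction from the parameters (whether produced by iterative optimization or predicted by the AE) is cheap. The sketch below is an illustrative simplification, not the paper's implementation: it uses constant experts and Gaussian gating kernels parameterized by centers and steering (inverse-covariance) matrices; the function name `smoe_reconstruct` and the 8x8 block setup are hypothetical.

```python
import numpy as np

def smoe_reconstruct(coords, centers, precisions, expert_values):
    """Reconstruct pixel values from (simplified) SMoE parameters.

    coords:        (P, 2) pixel coordinates
    centers:       (K, 2) kernel centers
    precisions:    (K, 2, 2) steering matrices (inverse covariances)
    expert_values: (K,) constant expert outputs (a simplification;
                   SMoE experts may also be affine in the coordinates)
    """
    diff = coords[:, None, :] - centers[None, :, :]        # (P, K, 2)
    # Mahalanobis distance to each kernel: d^T A d
    maha = np.einsum('pki,kij,pkj->pk', diff, precisions, diff)
    logits = -0.5 * maha
    logits -= logits.max(axis=1, keepdims=True)            # numerical stability
    gates = np.exp(logits)
    gates /= gates.sum(axis=1, keepdims=True)              # softmax gating
    return gates @ expert_values                           # (P,) pixel values

# Hypothetical example: an 8x8 block modeled by two kernels
xs = np.stack(np.meshgrid(np.arange(8), np.arange(8)), -1).reshape(-1, 2).astype(float)
centers = np.array([[2.0, 2.0], [6.0, 6.0]])
precisions = np.stack([np.eye(2), np.eye(2)])   # isotropic; steering would skew these
values = np.array([0.2, 0.9])
block = smoe_reconstruct(xs, centers, precisions, values).reshape(8, 8)
```

The time-consuming part the AE replaces is the inverse problem: finding `centers`, `precisions`, and `expert_values` that best fit a given block, which classically requires iterative gradient-based optimization per block.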