Our paper Inverse problem regularization with hierarchical variational autoencoders has been accepted for poster presentation at ICCV 2023! Congrats to Jean Prost!
On the Existence of Optimal Transport Gradient for Learning Generative Models
New preprint available! [HAL] [PDF] [ArXiv]
Abstract: The use of optimal transport cost for learning generative models has become popular with Wasserstein Generative Adversarial Networks (WGAN). Training of WGAN relies on a theoretical background: the calculation of the gradient of the optimal transport cost with respect to the generative model parameters. We first demonstrate that such a gradient may not be defined, which can result in numerical instabilities during gradient-based optimization. We address this issue by stating a valid differentiation theorem in the case of entropic regularized transport and by specifying conditions under which existence is ensured. By exploiting the discrete nature of empirical data, we formulate the gradient in a semi-discrete setting and propose an algorithm for the optimization of the generative model parameters. Finally, we numerically illustrate the advantage of the proposed framework.
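As a rough illustration of this semi-discrete setting, here is a minimal PyTorch sketch (PyTorch, the function names, the squared Euclidean cost and the uniform weights are my own illustrative choices, not taken from the paper) of an entropic semi-dual OT loss between generated samples and an empirical data measure.

```python
import math
import torch

def soft_c_transform(x, y, v, eps):
    """Entropic (soft) c-transform of the dual potential v, evaluated at the samples x.
    Cost c(x, y_j) = ||x - y_j||^2 and uniform weights 1/n are illustrative choices."""
    cost = torch.cdist(x, y) ** 2                        # (batch, n) pairwise costs
    n = y.shape[0]
    return -eps * (torch.logsumexp((v - cost) / eps, dim=1) - math.log(n))

def semi_dual_ot_loss(x_gen, y_data, v, eps=0.05):
    """Semi-discrete entropic OT cost in semi-dual form.
    x_gen = g_theta(z) is differentiable w.r.t. the generator parameters,
    y_data is the fixed empirical data, v holds one dual variable per data point."""
    return soft_c_transform(x_gen, y_data, v, eps).mean() + v.mean()

# Alternating scheme (sketch): ascend on the dual variable v, then descend on the
# generator parameters by backpropagating through x_gen with v held fixed.
```

With v held fixed at (an approximation of) its optimum, backpropagating through x_gen gives a candidate gradient for the generator parameters; whether and when such a gradient is well defined is precisely the question the preprint addresses.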
Learning local regularization for variational image restoration
New preprint available! [HAL] [PDF] [ArXiv]
Joint work with Jean Prost, Andrés Almansa and Nicolas Papadakis
Abstract: In this work, we propose a framework to learn a local regularization model for solving general image restoration problems. This regularizer is defined with a fully convolutional neural network that sees the image through a receptive field corresponding to small image patches. The regularizer is then learned as a critic between unpaired distributions of clean and degraded patches, using an energy based on Wasserstein generative adversarial networks. This yields a regularization function that can be incorporated into any image restoration problem. The efficiency of the framework is finally shown on denoising and deblurring applications.
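To fix ideas, here is a minimal PyTorch sketch of how such a learned patch-level critic can act as a regularizer in a variational restoration problem. The architecture, the degradation operator A and all hyperparameters are assumptions of mine, and the adversarial (WGAN-type) training of the critic is not shown.

```python
import torch
import torch.nn as nn

class PatchCritic(nn.Module):
    """Fully convolutional critic with a small receptive field (here roughly 7x7),
    so it only 'sees' the image through local patches; architecture is illustrative."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 32, 3), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 3),
        )
    def forward(self, x):
        return self.net(x).mean()   # average local score = regularization value R(x)

def restore(y, A, critic, lam=0.1, steps=200, lr=1e-2):
    """Gradient descent on ||A(x) - y||^2 + lam * R(x); A is a known degradation
    operator (e.g. a blur) given as a callable, critic is the learned regularizer."""
    x = y.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((A(x) - y) ** 2).sum() + lam * critic(x)
        loss.backward()
        opt.step()
    return x.detach()
```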
Semi-discrete OT loss for image generation
Some new applications of our recent work Wasserstein Generative Models for Patch-based Texture Synthesis [arXiv] [HAL], which proposed a loss based on the semi-dual formulation of OT.
- Style transfer using the semi-dual formulation to minimize the OT distance between VGG-19 features.
- Texture barycenters using our texture generation algorithm (Alg. 1 from this work).
The key idea is to generate a new texture whose patch distributions, at various scales, are Wasserstein barycenters of the patch distributions of the two inputs.
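Written out (with my own notation, not taken from the post): if μ_s(u) denotes the patch distribution of an image u at scale s, a barycentric texture between two exemplars v_1 and v_2 with weight ρ in [0,1] can be sought as

```latex
u^\star \in \operatorname*{arg\,min}_{u} \;\sum_{s}
  \Big[\, \rho \,\mathrm{OT}\big(\mu_s(u), \mu_s(v_1)\big)
  + (1-\rho)\,\mathrm{OT}\big(\mu_s(u), \mu_s(v_2)\big) \,\Big]
```

so that, at every scale, the patch distribution of u is (approximately) the Wasserstein barycenter of the two input patch distributions.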
Wasserstein Generative Models for Patch-based Texture Synthesis
Wasserstein Generative Models for Patch-based Texture Synthesis [arXiv] [HAL]
NEW! Published at SSVM 2021 [link]
Joint work with Arthur Leclaire, Nicolas Papadakis and Julien Rabin
Abstract: In this paper, we propose a framework to train a generative model for texture image synthesis from a single example. To do so, we exploit the local representation of images via the space of patches, that is, square sub-images of fixed size (e.g. 4×4). Our main contribution is to consider optimal transport to enforce the multiscale patch distribution of generated images, which leads to two different formulations. First, a pixel-based optimization method is proposed, relying on discrete optimal transport. We show that it is related to a well-known texture optimization framework based on iterated patch nearest-neighbor projections, while avoiding some of its shortcomings. Second, in a semi-discrete setting, we exploit the differential properties of Wasserstein distances to learn a fully convolutional network for texture generation. Once estimated, this network produces realistic and arbitrarily large texture samples in real time. The two formulations result in non-convex concave problems that can be optimized efficiently with convergence properties and improved stability compared to adversarial approaches, without relying on any regularization. By directly dealing with the patch distribution of synthesized images, we also overcome limitations of state-of-the-art techniques, such as patch aggregation issues that usually lead to low frequency artifacts (e.g. blurring) in traditional patch-based approaches, or statistical inconsistencies (e.g. color or patterns) in learning approaches.
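As an aside, the multiscale patch distributions mentioned in the abstract are easy to materialize; below is a small PyTorch sketch (the function name, the 4×4 patch size and the average-pooling pyramid are my own illustrative choices) that extracts the empirical patch distribution of an image at several dyadic scales.

```python
import torch
import torch.nn.functional as F

def patch_distributions(img, patch_size=4, scales=3):
    """Return one (num_patches, C*patch_size**2) tensor per scale for an image
    img of shape (1, C, H, W); each row is a flattened patch, i.e. one sample
    of the empirical patch distribution at that scale."""
    dists = []
    x = img
    for _ in range(scales):
        patches = F.unfold(x, kernel_size=patch_size)   # (1, C*p*p, num_patches)
        dists.append(patches.squeeze(0).t())            # (num_patches, C*p*p)
        x = F.avg_pool2d(x, kernel_size=2)              # move to the next coarser scale
    return dists
```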
New preprint available!
Statistical Modeling of the Patches DC Component for Low-Frequency Noise Reduction [pdf]
Abstract: In this work, we consider an additive white Gaussian noise (AWGN) model on the image patches in the context of patch-based image denoising. From this, we propose a derivation of the induced models on the centered patch of noise and on the DC component of the noise. These models allow us to treat the two components separately. We provide experiments with the HDMI method [pdf] that lead to improvements in denoising quality, particularly for residual low-frequency noise.
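For concreteness, the induced models follow from a standard Gaussian computation (the notation here is mine): for a patch with d pixels and noise n ~ N(0, σ²I_d), projecting onto the constant direction and its orthogonal complement gives

```latex
\bar{n} = \tfrac{1}{d}\,\mathbf{1}^\top n \;\sim\; \mathcal{N}\!\left(0,\ \tfrac{\sigma^2}{d}\right),
\qquad
n - \bar{n}\,\mathbf{1} \;\sim\; \mathcal{N}\!\left(0,\ \sigma^2\Big(I_d - \tfrac{1}{d}\,\mathbf{1}\mathbf{1}^\top\Big)\right)
```

and the two components are independent since the corresponding projections are orthogonal, which is what allows them to be treated separately.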
More… the first color experiments are in! For images with many constant areas and few textured parts, the results are extremely positive; for instance, the improvement for the dice image with noise of standard deviation 50/255 is up to 0.25 dB. The final result is even better than the recent deep learning method FFDNet.
New version of HDMI
An enhanced version of the paper High-Dimensional Mixture Models For Unsupervised Image Denoising (HDMI) is online. This version includes new experiments with color images!
[Figure: noisy image, clustering obtained by HDMI, and image denoised with HDMI]