Joint work with Arthur Leclaire, Nicolas Papadakis and Julien Rabin
Abstract: In this paper, we propose a framework to train a generative model for texture image synthesis from a single example. To do so, we exploit the local representation of images via the space of patches, that is, square sub-images of fixed size (e.g. 4×4). Our main contribution is to consider optimal transport to enforce the multiscale patch distribution of generated images, which leads to two different formulations. First, a pixel-based optimization method is proposed, relying on discrete optimal transport. We show that it is related to a well-known texture optimization framework based on iterated patch nearest-neighbor projections, while avoiding some of its shortcomings. Second, in a semi-discrete setting, we exploit the differential properties of Wasserstein distances to learn a fully convolutional network for texture generation. Once estimated, this network produces realistic and arbitrarily large texture samples in real time. The two formulations result in non-convex concave problems that can be optimized efficiently, with convergence properties and improved stability compared to adversarial approaches, without relying on any regularization. By directly dealing with the patch distribution of synthesized images, we also overcome limitations of state-of-the-art techniques, such as patch aggregation issues that usually lead to low-frequency artifacts (e.g. blurring) in traditional patch-based approaches, or statistical inconsistencies (e.g. color or patterns) in learning approaches.
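To give a flavor of the discrete optimal transport formulation between patch distributions, here is a minimal illustrative sketch (not the paper's implementation): it extracts non-overlapping patches from two grayscale images and computes an exact optimal transport cost between the two patch sets with the Hungarian algorithm, assuming uniform weights and equal patch counts. All function names are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def extract_patches(img, p=4, stride=4):
    """Collect non-overlapping p x p patches of img as flat vectors."""
    h, w = img.shape
    patches = [img[i:i + p, j:j + p].ravel()
               for i in range(0, h - p + 1, stride)
               for j in range(0, w - p + 1, stride)]
    return np.stack(patches)

def patch_ot_cost(img_a, img_b, p=4):
    """Exact discrete OT cost between equal-size patch sets (uniform weights)."""
    A = extract_patches(img_a, p)
    B = extract_patches(img_b, p)
    # pairwise squared Euclidean costs between patches
    C = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(C)  # optimal one-to-one matching
    return C[rows, cols].mean()

rng = np.random.default_rng(0)
x = rng.random((16, 16))
y = rng.random((16, 16))
cost_xy = patch_ot_cost(x, y)
```

In practice the paper works with much larger, overlapping patch sets at several scales, where exact assignment becomes too costly; this toy version only shows the objective being enforced.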
Title: On the use of Gaussian models on patches for image denoising
Abstract: Some recent denoising methods are based on a statistical modeling of the image patches. In the literature, Gaussian models or Gaussian mixture models are the most widely used priors. In this presentation, after introducing the statistical framework of patch-based image denoising, I will propose some clues to answer the following questions: Why are these Gaussian priors so widely used? What information do they encode? In the second part, I will present a mixture model for noisy patches adapted to the high dimension of the patch space. This results in a denoising algorithm based only on statistical tools, which achieves state-of-the-art performance. Finally, I will discuss the limitations and some developments of the proposed method.
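One reason Gaussian priors are so convenient, as hinted above, is that the optimal patch estimator has a closed form: with a noisy patch y = x + n, a Gaussian prior x ~ N(mu, Sigma) and AWGN n ~ N(0, sigma² I), the posterior mean is a Wiener filter. The sketch below illustrates this standard computation; it is not the talk's actual algorithm, and all names are illustrative.

```python
import numpy as np

def denoise_patch(y, mu, Sigma, sigma):
    """Posterior mean E[x | y] = mu + Sigma (Sigma + sigma^2 I)^-1 (y - mu)."""
    d = y.size
    W = Sigma @ np.linalg.inv(Sigma + sigma**2 * np.eye(d))
    return mu + W @ (y - mu)

# toy check: with prior covariance Sigma = I and noise sigma = 1,
# the filter shrinks the centered patch by exactly one half
mu = np.zeros(16)
y = np.ones(16)
x_hat = denoise_patch(y, np.eye(16), np.eye(16) * 0 + np.eye(16), 1.0)
```

A Gaussian mixture prior generalizes this by applying one such filter per component, weighted by the posterior component probabilities.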
Abstract: In this work, we consider an additive white Gaussian noise (AWGN) model on the image patches in the context of patch-based image denoising. From this model, we derive the induced models on the centered noise patch and on the DC component of the noise. These models allow us to treat the two components separately. We provide experiments with the HDMI method [pdf] that lead to denoising quality improvements, particularly for residual low-frequency noise.
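The decomposition described above can be sketched in a few lines: the noise on a patch is split into its DC component (its mean, carried by the all-ones vector) and a zero-mean centered residual, so that the two parts can be modeled separately. This is an illustrative sketch under that simple reading, with hypothetical names.

```python
import numpy as np

def split_dc(patch):
    """Return (dc, centered) such that patch == dc + centered,
    where dc is the constant DC component and centered has zero mean."""
    dc = patch.mean()
    return dc, patch - dc

rng = np.random.default_rng(1)
noise = rng.normal(0.0, 50 / 255, size=(4, 4))  # AWGN patch, sigma = 50/255
dc, centered = split_dc(noise)
```

For AWGN of variance sigma² on a patch of d pixels, the DC component is then itself Gaussian with variance sigma²/d, while the centered residual lives in the (d−1)-dimensional zero-mean subspace.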
More… the first color experiments came in! For images with many constant areas and few textured parts, the results are extremely positive: for instance, the improvement for the image dice with noise of standard deviation 50/255 is up to 0.25 dB. The final result is even better than that of the recent deep learning method FFDNet.