HDMI notebook

High-Dimensional Mixture Models For Image Denoising (HDMI)

This notebook presents the HDMI algorithm (from the HDMI paper) for grayscale images, in the ‘unsupervised’ version (i.e. when the noise variance is known).

Antoine Houdard, November 2020

You can download the notebook here.

In [1]:
import numpy as np

from numpy import linalg as LA

import matplotlib.pyplot as plt

from sklearn.feature_extraction import image

import time

1. Image processing tools

1.1 Function definitions

We start with some function definitions:

  • imgshow which displays a grayscale image (with clipping between 0 and 1)
  • im2col which extracts patches from an image
  • reprojection which performs a uniform aggregation of the patches
  • psnr which computes the PSNR (for images valued between 0 and 1)
In [2]:
def imgshow(img, title=''):

    '''

    display an image with title

    '''

    ax = plt.imshow(img.clip(0,1))

    ax.set_cmap('gray')

    plt.axis('off')

    plt.title(title)

    return plt.show()



def im2col(img, patchsize):

    '''

    extract overlapping patches from an image

    img: grayscale image of size NxM

    patchsize: size of patches [p,q]

    return: patches as an array of size (pq, npix) with npix=(N-p+1)*(M-q+1)

    '''

    patches = image.extract_patches_2d(img,patchsize)

    patches = np.reshape(patches,(patches.shape[0],patches.shape[1]*patches.shape[2]))

    return patches.T

  

def reprojection(patches, patchsize, imgsize):

    '''

    perform a uniform reprojection of a set of patches onto an image

    patches: size (pq, npix)

    patchsize: [p,q]

    imgsize: [N,M]

    with npix=(N-p+1)*(M-q+1)

    return: image NxM being the uniform reprojection of patches

    '''

    res = np.zeros(imgsize+[patches.shape[0]]) 

    weight = np.zeros(imgsize+[patches.shape[0]])

    

    for i in np.arange(patchsize[0]):

        for j in np.arange(patchsize[1]):

            k = j+patchsize[1]*i

            res[i:imgsize[0]-patchsize[0]+1+i,j:imgsize[1]-patchsize[1]+1+j,k] = np.reshape(patches[k,:],(imgsize[0]-patchsize[0]+1,imgsize[1]-patchsize[1]+1))

            weight[i:imgsize[0]-patchsize[0]+1+i,j:imgsize[1]-patchsize[1]+1+j,k] = 1



    resweight = np.sum(res*weight,2)/np.sum(weight,2)

    return resweight



def psnr(img1, img2):

    '''

    compute the PSNR between two images valued between 0 and 1

    '''

    MSE = np.mean((img1-img2)**2)

    return 10*np.log10(1**2/MSE)
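
As a quick sanity check (a minimal sketch, not part of the original notebook; test_img is just an illustrative random array): for an N x M image and p x q patches, im2col returns an array of shape (pq, (N-p+1)*(M-q+1)), and reprojecting the patches of a clean image gives back that image up to floating-point error.

test_img = np.random.rand(32, 48)
test_patches = im2col(test_img, [8, 8])
print(test_patches.shape)                 # (64, (32-8+1)*(48-8+1)) = (64, 1025)
test_back = reprojection(test_patches, [8, 8], list(test_img.shape))
print(np.allclose(test_back, test_img))   # True: uniform aggregation recovers the image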

1.2 Load an image and test

Let’s test these functions with an image

In [3]:
# download the image

!wget -q https://houdard.wp.imt.fr/files/2020/11/simpson_512.png

In [4]:
# import a grayscale image

img=plt.imread('simpson_512.png')

imgshow(img, 'clean image')



# add Gaussian white noise

stdv = 30/255

imgb = img + stdv*np.random.normal(0, 1, img.shape)

imgshow(imgb, 'noisy image')



# Testing the patch extraction and reprojection

patchsize=[8,8]

imgsize=list(img.shape)



# extract patches

patches = im2col(img,patchsize)



# reprojected image

reprojected_img = reprojection(patches,patchsize,imgsize) 

# display image

imgshow(reprojected_img, 'reprojected image')

2. The HDMI algorithm

2.1 Learning the HDMI model on the patch space

The first step is to learn the HDMI model, which is a Gaussian mixture model with dimension reduction in the high-dimensional patch space. To do so, we use an EM algorithm, starting from a random initialization.
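
For reference, and only as a sketch of the model (notation loosely follows the HDMI paper), each group k is assumed Gaussian with mean \mu_k and a low-rank-plus-noise covariance

\Sigma_k = Q_k \,\mathrm{diag}\big(a_{k,1},\dots,a_{k,d_k},\,b,\dots,b\big)\, Q_k^{\top},

where d_k is the intrinsic dimension of group k and b is a noise level shared by all groups. The EM loop below alternates an M-step (update the proportions, means, eigendecompositions, the dimensions d_k and b) and an E-step (update the posterior probabilities of each patch belonging to each group).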

The main algorithm is implemented in the HDMI_learn function.

In [5]:
def HDMI_learn(patches,K,IterMax,sigma):

    '''

    Learn the HDMI model (see the HDMI paper) for a set of noisy patches.



    Inputs:

      patches: the set of noisy patches

      K: number of groups

      IterMax: max iterations of the EM algorithm

      sigma: variance of the noise (used as a threshold for dimension reduction)



    Return:

      mu: the mean of each group 

      Q: orthogonal change of basis matrix  for each group

      ev: eigenvalues of each group

      b: estimated noise variance (should be close to sigma)

      K: K

      d: the dimension of each group

      posterior: the probability for each patch to belong to each group



    '''    

    # Initialization

    p,n = patches.shape

    mu  = np.zeros((p,K))

    Q   = np.zeros((p,p,K))

    pi  = np.ones((K,1))/K 

    a   = np.zeros((K,p))

    b   = sigma

    ev  = np.zeros((K,p))            # eigenvalues

    Phi = np.zeros((K,n)) 

    eps = 100*np.finfo(float).eps    # machine epsilon threshold for speeding-up the algorithm (all smaller values are considered as 0)

    

    # Random Initialization of groups

    posterior = np.zeros((K,n))

    classes   = np.random.randint(0,K,n)

    for i in np.arange(n):

        posterior[classes[i],i]=1

    

    t0 = time.time()

    # EM Loop

    for i in np.arange(IterMax):

        print('iteration '+str(i+1)+'/'+str(IterMax))

        

        # M-step 

        pi = np.mean(posterior,1)

        for k in np.arange(K):

            tk = posterior[k,:]

            stack   = patches[:,tk>eps]

            tk = tk[tk>eps]

            nk = np.sum(tk)               # normalization for class k

            mu[:,k] = np.average(stack,axis=1,weights=tk)

            centeredstack = stack - mu[:,k].reshape(-1,1)

            Sigmak   = ((tk/nk)*centeredstack@centeredstack.T)

            ev[k,:],Q[:,:,k]   = LA.eigh(Sigmak)

            

        # estimate the intrinsic dimension of each group

        meanev = np.cumsum(ev,axis=1)/np.arange(1,p+1)

        d = np.sum(meanev-sigma>0, axis = 1) # dimension of each group: number of cumulative eigenvalue means above sigma (should be at least 1)

        bvalues = np.diag(meanev[:,p-d])

        b = np.sum(pi*bvalues)

        

        # E-step

        for k in np.arange(K):

            s = np.sum(np.log(ev[k,p-d[k]:]))

            muk = mu[:,k]

            isev = np.sqrt(np.abs(1/(ev[k,p-d[k]:]) - 1/b))

            Qk = Q[:,p-d[k]:,k]*isev

            Y = (patches.T)@(Qk)-(muk.T)@(Qk)

            Phi[k,:] = - np.sum(np.square(Y), axis = 1) + s + (p-d[k])*np.log(b) - 2*np.log(pi[k]) - patches.T@((2/b)*muk) + muk.T@(muk/b)        

        

        for k in np.arange(K):

            # posterior[k] = exp(-Phi[k]/2) / sum_j exp(-Phi[j]/2), written in a numerically stable way

            posterior[k,:] = 1/np.sum(np.exp((Phi[k,:].reshape(-1,1)-Phi.T)/2), axis = 1)



    tt = time.time()-t0    

    print('Learning time = '+str(int(tt))+'s')

    return mu, Q, ev, b, K, d, posterior

2.2 Denoising

Once the HDMI model is learned, the function HDMI_denoise computes, for each noisy patch, its conditional expectation under the model, which gives an estimate of the underlying clean patch. We then perform a uniform aggregation with the function reprojection to recover the image.
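
In formula form (a sketch consistent with the code below, with the learned b playing the role of the noise variance), the denoised patch is the posterior-weighted combination of the group-wise conditional expectations:

\hat{x} \;=\; \sum_{k=1}^{K} t_k(y)\,\big(y - b\,\Sigma_k^{-1}(y-\mu_k)\big),

where y is the noisy patch, t_k(y) its posterior probability of belonging to group k, and \Sigma_k^{-1} is obtained cheaply from the stored eigenvectors and eigenvalues.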

In [6]:
def HDMI_denoise(patches, HDMM):
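
    '''

    denoise each noisy patch with its conditional expectation under the learned HDMI model

    patches: noisy patches of size (p, n)

    HDMM: the tuple (mu, Q, ev, b, K, d, posterior) returned by HDMI_learn

    return: denoised patches of size (p, n)

    '''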

    mu = HDMM[0]

    Q = HDMM[1]

    ev = HDMM[2]

    b = HDMM[3]

    K = HDMM[4] 

    d = HDMM[5] 

    posterior = HDMM[6]

    p,n = patches.shape

    denoised_patches = np.zeros(patches.shape)

    

    for k in np.arange(K):

        tk = posterior[k,:]

        stack   = patches[:,tk>0]

        tk = tk[tk>0]

        Qk = Q[:,p-d[k]:,k]

        iDelta = (b - ev[k,p-d[k]:])/(ev[k,p-d[k]:])

        centeredstack = stack - mu[:,k].reshape(-1,1)

        product = (np.eye(p) + (Qk*iDelta)@(Qk.T))@centeredstack

        denoised_patches[:, posterior[k,:]>0] = denoised_patches[:, posterior[k,:]>0] + tk*(stack - product)

                

    return denoised_patches

3. Run the algorithm

Initialize parameters

In [7]:
# noisy image: add Gaussian white noise

stdv = 30/255

imgb = img + stdv*np.random.normal(0, 1, img.shape)

imgshow(imgb, 'noisy image')



# extract noisy patches

patchsize=[8,8]

patchesb = im2col(imgb,patchsize)



# parameters for HDMI

K = 40

IterMax = 30

sigma = stdv**2

Run the EM algorithm that learns the High-Dimensional Mixture Model (be patient!)

In [8]:
# Learn the High-Dimensional Mixture Model (HDMM)

HDMM = HDMI_learn(patchesb, K, IterMax, sigma)

iteration 1/30

iteration 2/30

iteration 3/30

iteration 4/30

iteration 5/30

iteration 6/30

iteration 7/30

iteration 8/30

iteration 9/30

iteration 10/30

iteration 11/30

iteration 12/30

iteration 13/30

iteration 14/30

iteration 15/30

iteration 16/30

iteration 17/30

iteration 18/30

iteration 19/30

iteration 20/30

iteration 21/30

iteration 22/30

iteration 23/30

iteration 24/30

iteration 25/30

iteration 26/30

iteration 27/30

iteration 28/30

iteration 29/30

iteration 30/30

Learning time = 676s

Denoise each patch with conditional expectation

In [11]:
# denoise patches with HDMM model

patchesd = HDMI_denoise(patchesb, HDMM)

Uniformly aggregate patches in order to obtain the denoised image

In [12]:
# uniform aggregation

imgd = reprojection(patchesd,patchsize,imgsize) 



# display image

imgshow(img, 'clean image')

imgshow(imgb, 'noisy image')

imgshow(imgd, 'HDMI denoised')



# PSNR

print('PSNR = '+str(psnr(imgd,img)))

PSNR = 32.11599697490574
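
For convenience, the whole pipeline can be wrapped into a single helper. This is only a sketch: hdmi_denoise_image is not defined in the original notebook, it simply chains the functions above, with defaults mirroring the parameters used here.

def hdmi_denoise_image(img_noisy, stdv, K=40, IterMax=30, patchsize=(8, 8)):
    # extract noisy patches, learn the HDMM, denoise the patches, then aggregate
    patches_noisy = im2col(img_noisy, list(patchsize))
    model = HDMI_learn(patches_noisy, K, IterMax, stdv**2)
    patches_denoised = HDMI_denoise(patches_noisy, model)
    return reprojection(patches_denoised, list(patchsize), list(img_noisy.shape))

# example usage (same setting as above):
# imgd = hdmi_denoise_image(imgb, 30/255)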
