Monday, September 21, 2009

Japanese girlfriend wanted. (Activity # 18)

This activity is entitled 'Noise Models and Basic Image Restoration.' It introduced me to different noise models and the filters to use when each of them is encountered in real life (not likely). As usual, although this activity will orient the reader on different image restoration techniques, the best advice is still to start off with a good image--one that doesn't need restoration in the first place. The second-best advice would be to use Photoshop--which is why this post, and Scilab with the SIP toolbox, should come as a last resort (after Matlab).

The original test image and its PDF look like this:


Different Types of Noise



In the activity, we were given six noise models, as well as the equations needed for each model to act on the probability distribution function of the image (refer to Activity 3). These noise models are: Exponential Noise, Gamma Noise, Gaussian Noise, Salt-and-Pepper Noise, Uniform Noise and Rayleigh Noise. Note that I listed them in the order the sample images above are arranged, except for Rayleigh Noise. (Rayleigh noise is currently in the works.)
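To give a feel for how the noise actually gets applied, here's a minimal sketch in Scilab, assuming a grayscale image img already loaded and scaled to [0,1]; the variable names and noise parameters below are placeholders of my own, not the exact values I used.

// minimal sketch: adding a few of the noise models to a grayscale
// image "img" scaled to [0,1]; parameter values are placeholders
[m, n] = size(img);

gauss_noisy = img + grand(m, n, "nor", 0, 0.05);     // Gaussian, mean 0, sd 0.05
exp_noisy   = img + grand(m, n, "exp", 0.05);        // Exponential, mean 0.05
unif_noisy  = img + grand(m, n, "unf", -0.05, 0.05); // Uniform in [-0.05, 0.05]

// Salt-and-pepper: flip a fraction p of the pixels to 0 or 1
p = 0.05;
r = rand(m, n);
sp_noisy = img;
sp_noisy(r < p/2) = 0;          // pepper
sp_noisy(r > 1 - p/2) = 1;      // salt

// clip the additive cases back into [0,1]
gauss_noisy = min(max(gauss_noisy, 0), 1);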

While there are six noise models, only four filters were given for us to examine. These filters are: Arithmetic Mean, Geometric Mean, Harmonic Mean and Contra-harmonic Mean.

Exponential Noise

Gamma Noise

Gaussian Noise

Rayleigh Noise

Salt and Pepper Noise

Uniform Noise

The Arithmetic Mean filter, as the name implies, takes the average of the pixel values in each sub-image (hence a moving average; a sub-image here means a 3x3 patch); the Geometric Mean takes the n-th root of the product of the pixel values; the Harmonic Mean divides the number of pixels in the patch by the sum of the reciprocals of the pixel values; and finally, the Contra-harmonic filter takes the sum of the pixel values raised to the power Q+1 and divides it by the sum of the pixel values raised to the power Q, where Q is the order of the filter (Q = 0 reduces to the arithmetic mean).
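For concreteness, here's a sketch of the four filters over a 3x3 moving window in Scilab; the function name and structure are my own, and the image is assumed to be strictly positive (nudge pepper pixels slightly above zero first, otherwise the geometric, harmonic and negative-Q contra-harmonic means blow up).

// sketch of the four mean filters over a 3x3 window; "kind" picks the
// filter and Q is used only by the contra-harmonic case
function out = mean_filters(noisy, kind, Q)
    [m, n] = size(noisy);
    out = noisy;
    for i = 2:m-1
        for j = 2:n-1
            w = noisy(i-1:i+1, j-1:j+1);            // 3x3 patch
            select kind
            case "arithmetic" then
                out(i, j) = sum(w) / 9;
            case "geometric" then
                out(i, j) = prod(w)^(1/9);
            case "harmonic" then
                out(i, j) = 9 / sum(1 ./ w);
            case "contraharmonic" then
                out(i, j) = sum(w.^(Q+1)) / sum(w.^Q);
            end
        end
    end
endfunction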

Since this activity is pretty much about learning what kind of filter to use for a specific kind of noise, the images below can serve as a guide for the reader. Although the arithmetic mean and geometric mean filters produce similar-looking results most of the time, their outputs differ for Gaussian and Salt-and-Pepper noise. It is also important to note that for the contra-harmonic filter, we obtain different images for different values of Q.


The images above show the contra-harmonic filter acting on Salt-and-Pepper noise for Q=-2, Q=0 and Q=+2, respectively. From this, we can infer that the value of Q to use depends on the image being restored: negative values suit grayscale images biased towards black, where the white salt specks are the visible problem, while positive values suit images biased towards white, where the black pepper specks dominate. In the case of my Yuuko sketch, since it's biased towards white, it is restored best at around Q = +2.
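In terms of the hypothetical mean_filters() sketch above, the three panels would correspond to something like:

restored_neg  = mean_filters(sp_noisy, "contraharmonic", -2);
restored_zero = mean_filters(sp_noisy, "contraharmonic", 0);   // same as the arithmetic mean
restored_pos  = mean_filters(sp_noisy, "contraharmonic", +2);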

I thank Neil and Gilbert for their help with the noise-addition parts, and myself for helping others with the filtering part. xD Anyway, since all the objectives for this activity were accomplished, and I had my own batch-processing script as a bonus, I'll give myself a 10.

Thursday, September 10, 2009

What about the part-time job at 「ai sp@ce」? (Activity # 17)


This activity focuses on photometric stereo, a method of extracting shape and surface detail from shading. Using this method, we are able to reconstruct an object's surface from a series of images taken under point sources placed at different locations.

Suppose we have the matrix V given by

where the first index of V identifies the image while the second index refers to the x, y, and z coordinates of the light source, respectively. Assuming these sources illuminate a common object of interest, we can obtain a series of images with intensities defined by I_i(x,y) = V_i · g(x,y)

for each point (x,y). We would then have to find g for each point using the usual least-squares matrix operation (i.e. g = inv(V'*V)*V'*I) and normalize the g's to obtain the normal vectors. We get the surface normals using n = g/|g|,

and finally, taking the partial derivatives f_x = -n_x/n_z and f_y = -n_y/n_z, we obtain the surface elevation at a point (u,v) using f(u,v) = ∫[0,u] f_x dx' + ∫[0,v] f_y dy'.
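Here's a minimal sketch of those steps in Scilab, assuming the source images and the matrix V of light-source positions have already been pulled out of Photos.mat (e.g. via loadmatfile); the four image names I1..I4 and the plotting call are placeholders of mine.

[m, n] = size(I1);
I = [I1(:)'; I2(:)'; I3(:)'; I4(:)'];        // one flattened image per row

g = inv(V' * V) * V' * I;                    // 3 x (m*n): solve for g at each pixel
gnorm = sqrt(sum(g.^2, 'r'));                // length of each g
nhat = g ./ (ones(3, 1) * (gnorm + %eps));   // unit surface normals

fx = matrix(-nhat(1, :) ./ (nhat(3, :) + %eps), m, n);   // f_x = -nx/nz
fy = matrix(-nhat(2, :) ./ (nhat(3, :) + %eps), m, n);   // f_y = -ny/nz

f = cumsum(fx, 'c') + cumsum(fy, 'r');       // line-integrate the gradients
plot3d(1:m, 1:n, f);                         // surface elevation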

Following the steps I listed above, here's the Scilab output for the images contained in Photos.mat

I'd like to give myself a 10 for this activity, since this is one of the RARE activities that anyone can finish in one meeting. xD

Tuesday, September 8, 2009

What's the most important thing? (Activity # 16)

Neural networks are the third classification method for pattern recognition explored in this course, and they are the subject of Activity 16. Unlike LDA, neural networks don't need heuristics or explicit recognition rules for classification; instead they make use of learning, a function they try to imitate from the neurons in our brain.

In any case, the activity makes use of the ANN toolbox (ANN_toolbox_0.4.2, found on Scilab's Toolboxes Center), which makes this more of a plug-and-play exercise.

So how do neural networks work? Each input connection to an artificial neuron has its own weight (synaptic strength) that multiplies the corresponding input x_i. A neuron receives these weighted inputs from other neurons, lets their sum act on an activation function g, and the result z is then fired as an output to other neurons.
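As a toy illustration (with a logistic sigmoid as an assumed activation function and made-up numbers), a single neuron in Scilab boils down to:

x = [0.2; 0.7];            // inputs from other neurons
w = [0.5; -1.3];           // synaptic weights
b = 0.1;                   // bias term
a = w' * x + b;            // weighted sum of the inputs
z = 1 / (1 + exp(-a));     // z = g(a), fired as the output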

A neural network is formed by connecting neurons together. A typical network consists of an input layer, a hidden layer and an output layer. Just like in the previous activities, two modes were considered -- a training mode and a test mode. The parameters one can play around with in the ANN functions are the network architecture (how many neurons per layer), the learning rate and the number of iterations. A rough sketch of how these come together is shown below.
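This is only a rough reconstruction from memory -- I'm assuming the ann_FF_init / ann_FF_Std_online / ann_FF_run calls of ANN_toolbox_0.4.2, and the file names, architecture, learning rate and iteration count are placeholders -- but the run looked roughly like this (the data massaging it does is described right after):

train_x = fscanfMat("features.txt")';        // transpose to get a 2 x N input matrix
train_x = train_x / max(train_x);            // crude normalization to the [0,1] range
train_t = fscanfMat("classes.txt")';         // 1 x N targets, recoded as 1 and 0

N  = [2, 2, 1];                              // 2 inputs, 2 hidden neurons, 1 output
lp = [0.1, 0];                               // learning rate (plus a threshold term)
T  = 1000;                                   // number of training iterations

W = ann_FF_init(N);                          // random initial weights
W = ann_FF_Std_online(train_x, train_t, N, W, lp, T);   // training mode
out = round(ann_FF_run(train_x, N, W));      // test mode, classes after round()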

Using the same data I used in Activity # 15

I had to do a few modifications. First, the inputs must be in a [2xN] matrix (done by transposing the matrix right after the fscanfMat() call), after which the values must be normalized (to range from 0 to 1). For the training set, I also had to change the classification values (1 and 2) to 1 and 0 (this is crucial to avoid erroneous outputs). Finally, here's the output of my neural net classification after round():

For the win, I give myself a pat on the back and a well-deserved 10. xD I'd like to thank Gilbert for helping me understand the ANN toolbox.