# Digital Image Processing Questions and Answers – Noise Reduction by Spatial Filtering

This set of Digital Image Processing Multiple Choice Questions & Answers (MCQs) focuses on “Noise Reduction by Spatial Filtering”.

1. Spatial domain methods operate on image pixels directly, as given by g(x, y) = T[f(x, y)]. What do g, T, and f represent?
a) g represents output image, T represents noise matrix, f represents input image
b) g represents noise matrix, T represents input image, f represents output image
c) g represents output image, T represents input image, f represents noise matrix
d) g represents output image, T represents an operator defined over a neighborhood of point (x,y), f represents input image

Answer: d
Explanation: Consider a 3×3 matrix A whose first position is (1,1), and let (x,y) be the point (2,2). Then A is the 3×3 neighborhood of (x,y); the smallest possible neighborhood is 1×1. T is an intensity transformation operator defined over such a neighborhood, which acts on the input image f(x,y) to produce the output image g(x,y).

2. If the mask filter is given by the following matrix, find T[f(x,y)].

M00 M01 M02
M10 M11 M12
M20 M21 M22

a) T[f(x,y)] = f(x,y)×M00 + f(x,y)×M01 + f(x,y)×M02 + f(x,y)×M10 + f(x,y)×M11 + f(x,y)×M12 + f(x,y)×M20 + f(x,y)×M21 + f(x,y)×M22
b) T[f(x,y)] = f(x+1,y+1)×M00 + f(x-1,y-1)×M01 + f(x,y)×M02 + f(x+1,y)×M10 + f(x,y+1)×M11 + f(x-1,y)×M12 + f(x,y-1)×M20 + f(x+1,y-1)×M21 + f(x-1,y+1)×M22
c) T[f(x,y)] = f(x-1,y-1)×M00 + f(x-1,y)×M01 + f(x-1,y+1)×M02 + f(x,y-1)×M10 + f(x,y)×M11 + f(x,y+1)×M12 + f(x+1,y-1)×M20 + f(x+1,y)×M21 + f(x+1,y+1)×M22
d) T[f(x,y)] = f(x+1,y)×M00 + f(x,y+1)×M01 + f(x-1,y-1)×M02 + f(x+1,y+1)×M10 + f(x,y)×M11 + f(x-1,y+1)×M12 + f(x+1,y+1)×M20 + f(x-1,y-1)×M21 + f(x,y)×M22

Answer: c
Explanation: Since the mask is 3×3, its center coefficient M11 aligns with the pixel f(x,y), and each remaining coefficient aligns with the corresponding neighbor. T[f(x,y)] is the sum of products of mask coefficients and pixel values:
T[f(x,y)] = f(x-1,y-1)×M00 + f(x-1,y)×M01 + f(x-1,y+1)×M02 + f(x,y-1)×M10 + f(x,y)×M11 + f(x,y+1)×M12 + f(x+1,y-1)×M20 + f(x+1,y)×M21 + f(x+1,y+1)×M22
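The indexing above can be sketched in Python with NumPy. The helper `apply_mask` and the sample 5×5 image are illustrative assumptions, not from the original text:

```python
import numpy as np

def apply_mask(f, M, x, y):
    # T[f(x,y)]: sum of products of the 3x3 mask M with the
    # 3x3 neighborhood of f centred at (x, y)
    return sum(f[x - 1 + i, y - 1 + j] * M[i, j]
               for i in range(3) for j in range(3))

f = np.arange(25, dtype=float).reshape(5, 5)  # toy 5x5 "image"
M = np.full((3, 3), 1 / 9)                    # 3x3 averaging mask
print(apply_mask(f, M, 2, 2))                 # mean of the central 3x3 block
```

With an averaging mask the result at (2,2) is simply the mean of the surrounding 3×3 block.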

3. Which of the following represents the gray level transformation for image negative?

a) s=(L-1)-r
b) s=(L+1)+r
c) s=(L-1)*r
d) s=(L-1)/r

Answer: a
Explanation: In the negative transformation, each input pixel value r is subtracted from (L-1), and the result is mapped to the output image. For an 8 bpp image there are 2^8 = 256 levels; putting L = 256 into (a) gives s = (256-1) - r = 255 - r.
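A minimal sketch of the negative transformation with NumPy (the sample intensities are made up for illustration):

```python
import numpy as np

L = 256                          # 8 bpp -> 2**8 levels
r = np.array([0, 100, 255])      # sample input intensities
s = (L - 1) - r                  # s = (L-1) - r
print(s)                         # black becomes white and vice versa
```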

4. Which of the following represents the gray level transformation for log transformation?
a) s=c+log(1+r)
b) s=c-log(1+r)
c) s=c/log(1+r)
d) s=c*log(1+r)

Answer: d
Explanation: In the log transformation, r and s represent the pixel values of the input and output images respectively, and c is an arbitrary constant. Since log(0) is undefined (it tends to minus infinity), 1 is added to each input pixel value; log(1) = 0, so an input value of 0 maps to a finite output of 0.
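A quick NumPy sketch of s = c·log(1 + r); choosing c so that the maximum input maps back to 255 is a common convention, assumed here for illustration:

```python
import numpy as np

r = np.array([0.0, 50.0, 255.0])   # sample input intensities
c = 255 / np.log(1 + 255)          # scale so 255 maps back to 255
s = c * np.log(1 + r)              # s = c * log(1 + r)
print(s)                           # dark values are expanded
```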

5. Which of the following represents the gray level transformation for power-law transformation?
a) s=c+r^γ
b) s=c+log(r^γ)
c) s=c-r^γ
d) s=c*r^γ

Answer: d
Explanation: This transformation is used to adapt an image to the response of different display devices, each of which has its own gamma. For normalized intensities, a higher gamma gives a darker image and a lower gamma a brighter one. The gamma of a CRT typically lies between 1.8 and 2.5.
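The power-law transformation in NumPy, on normalised intensities; the gamma value 2.2 is an illustrative CRT-like choice:

```python
import numpy as np

r = np.array([0.0, 0.25, 1.0])   # normalised intensities in [0, 1]
gamma = 2.2                      # illustrative CRT-like gamma
s = 1.0 * r ** gamma             # s = c * r^gamma with c = 1
print(s)                         # mid-tones are pushed darker
```

With gamma greater than 1, mid-range values shrink (the image darkens) while 0 and 1 stay fixed.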

6. Smoothing filters are used for blurring and noise reduction. (True / False)
a) True
b) False

Answer: a
Explanation: Smoothing filters are used to reduce noise in an image or to produce a less pixelated result. Most smoothing filters are low-pass filters; they are also known as averaging filters.
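A minimal 3×3 averaging (box) filter sketch; the 5×5 test image with a single noise spike is an assumption for illustration, and borders are left unfiltered for brevity:

```python
import numpy as np

def box_smooth(img):
    # 3x3 averaging filter; border pixels left unchanged for brevity
    out = img.astype(float).copy()
    for x in range(1, img.shape[0] - 1):
        for y in range(1, img.shape[1] - 1):
            out[x, y] = img[x - 1:x + 2, y - 1:y + 2].mean()
    return out

noisy = np.zeros((5, 5))
noisy[2, 2] = 90.0                 # single "salt" noise spike
print(box_smooth(noisy)[2, 2])     # spike averaged over 9 pixels
```

The isolated spike of 90 is spread across its 3×3 neighborhood, dropping its peak to 10 — exactly the blurring/noise-reduction trade-off the question describes.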

7. What is contrast stretching?
a) Normalization of the image to improve the contrast.
b) Stretching the image from 50% to 150%.
c) Stretching a part of the image.
d) Stretching the color of an image from point A to point B.

Answer: a
Explanation: Contrast stretching is also called normalization of an image. It is a simple image enhancement technique that improves contrast by stretching the range of intensity values to span a desired range.
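Linear contrast stretching can be sketched as follows; the `stretch` helper and the low-contrast sample values are illustrative assumptions:

```python
import numpy as np

def stretch(img, lo=0.0, hi=255.0):
    # linearly map [img.min(), img.max()] onto [lo, hi]
    img = img.astype(float)
    return lo + (img - img.min()) * (hi - lo) / (img.max() - img.min())

narrow = np.array([100, 110, 120, 130])   # low-contrast intensities
print(stretch(narrow))                    # spread over the full [0, 255] range
```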

8. What is the formulation of gray-level slicing?
a) s=L*L
b) s=log(L+1)
c) s=L-1
d) s=L+1

Answer: c
Explanation: Gray-level slicing is also called intensity-level slicing. As the name suggests, it is used for highlighting a specific range of intensities in an image. This is done in two ways: highlighting only the range of interest while suppressing everything else, or highlighting the range while preserving the other intensities. The highlighted values are set to L-1, where L is the number of levels; for an 8-bit image, L = 256.
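Both slicing variants can be sketched with NumPy; the band [100, 200] and the sample pixel values are illustrative assumptions:

```python
import numpy as np

L = 256
img = np.array([10, 120, 180, 250])
in_band = (img >= 100) & (img <= 200)

# variant 1: highlight the band, suppress everything else
sliced = np.where(in_band, L - 1, 0)

# variant 2: highlight the band, preserve the other intensities
preserved = np.where(in_band, L - 1, img)
print(sliced, preserved)
```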

9. Which is not a goal of bit-plane slicing?
a) Converting to a binary image from gray level image
b) Represent an image with few bits and convert the image to a small size
c) Dividing images into slices
d) Enhance the image by focusing

Answer: c
Explanation: Bit-plane slicing is a method of representing an image with one or more bits of the byte for each pixel. Using only the MSB reduces the original gray-level image to a binary image. The three main goals of bit-plane slicing are: converting a gray-level image to a binary image, representing an image with fewer bits to reduce its size, and enhancing the image by focusing on specific planes. Dividing images into slices is not among them.
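Extracting the eight bit planes of an 8-bit image is a couple of lines with NumPy; the three sample pixel values are made up for illustration:

```python
import numpy as np

img = np.array([200, 37, 130], dtype=np.uint8)
planes = [(img >> b) & 1 for b in range(8)]   # plane 0 = LSB ... plane 7 = MSB
msb = planes[7]                               # binary image from the MSB alone
print(msb)                                    # values >= 128 map to 1
```

Keeping only the MSB plane turns the gray-level image into a binary image, as the explanation describes.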

10. Which of the following is not an image enhancement technique using arithmetic/logical operations?
a) AND, OR
b) NOT
c) SUBTRACTION AND AVERAGING
d) XOR

Answer: d
Explanation: Arithmetic and logical operations are used for image enhancement. The logical operations used are AND, OR, and NOT; the arithmetic operations used are subtraction and averaging. The XOR operation is not used for image enhancement.
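The operations named above can be sketched on two small frames with NumPy; the 2×2 sample arrays and the high-nibble mask are illustrative assumptions:

```python
import numpy as np

a = np.array([[100, 150], [200, 250]])
b = np.array([[90, 150], [180, 250]])

diff = np.abs(a - b)    # subtraction: highlights where the frames differ
avg  = (a + b) // 2     # averaging: reduces noise across two frames
high = a & 0xF0         # AND with a mask: keeps only the high nibble
print(diff, avg, high)
```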

Sanfoundry Global Education & Learning Series – Digital Image Processing.
