Colour vs Greyscale in Machine Vision
Colour vs. Greyscale
Although colour offers huge potential in machine vision, especially in applications such as navigation, autonomous intelligent vehicles (AIVs), food inspection and surgical robotics, colour images can take far longer to post-process than greyscale images [1]. Colour does not need to be altered or enhanced to be useful, as it already contains all of the information required. However, colour images contain inhomogeneous data, since the pixels of an image are likely to span many different colours, whereas a greyscale image is homogeneous and can therefore be processed as a single entity without window adaptation. Simple colour-processing techniques may involve subtracting two colour channels, or combining all three, so that colours can be discriminated (a channel here meaning red, green or blue (RGB), or any other colour obtained by combining these three). However, the amount of digital processing required to decide what actions to take on these channels may be considerable and requires careful consideration [1].
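As an illustration only (this is not a method from [1]), a minimal NumPy sketch of this kind of channel arithmetic might look as follows; the function name and threshold are assumptions:

```python
import numpy as np

def red_dominance_mask(rgb, threshold=40):
    """Flag pixels whose red channel clearly exceeds green and blue.

    rgb: H x W x 3 uint8 array; the threshold is an arbitrary example value.
    """
    r = rgb[..., 0].astype(np.int16)
    g = rgb[..., 1].astype(np.int16)
    b = rgb[..., 2].astype(np.int16)
    # Subtracting channels discriminates hue; combining them would approximate intensity.
    return ((r - g) > threshold) & ((r - b) > threshold)
```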
Some simple greyscale operations include clearing an image or setting its contents to a constant level. Others include inverting an image (i.e. turning a photograph from negative to positive) or shifting it up, down, left or right. These operations are made easier if the local intensities are similar [1]. A defining requirement when converting colour images to greyscale is that local perceptual colour differences in the input image must be translated into grey-level differences in the output image. In one approach, this is achieved by first computing the eigenvalues and eigenvectors of the structure tensor of the input image; these are then used to construct a gradient of the local colour differences, from which the greyscale image is computed by linear anisotropic diffusion [2].
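A rough sketch of the simple greyscale operations mentioned above, assuming 8-bit images and NumPy (the function names are illustrative, not from [1]):

```python
import numpy as np

def clear(img, level=0):
    """Set every pixel of an 8-bit greyscale image to a constant level."""
    return np.full_like(img, level)

def invert(img):
    """Turn an 8-bit greyscale image from negative to positive (or vice versa)."""
    return 255 - img

def shift_right(img, pixels=1):
    """Shift the image content to the right, padding the left edge with zeros."""
    out = np.zeros_like(img)
    out[:, pixels:] = img[:, :-pixels]
    return out
```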
Converting an image to greyscale can be done in a number of ways:
- Lightness Method: (max(R, G, B) + min(R, G, B)) / 2
- Average Method: (R+G+B) / 3
- Luminosity Method: 0.21 R + 0.72 G + 0.07 B. Because human vision is most sensitive to green, the luminosity method gives green the greatest weight [1].
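A minimal NumPy sketch of these three conversion rules (illustrative only; the weights follow those quoted above):

```python
import numpy as np

def to_greyscale(rgb, method="luminosity"):
    """Convert an H x W x 3 RGB image to greyscale using one of the three rules above."""
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    if method == "lightness":      # (max(R, G, B) + min(R, G, B)) / 2
        grey = (rgb.max(axis=-1) + rgb.min(axis=-1)) / 2
    elif method == "average":      # (R + G + B) / 3
        grey = (r + g + b) / 3
    else:                          # luminosity: green is weighted most heavily
        grey = 0.21 * r + 0.72 * g + 0.07 * b
    return grey.astype(np.uint8)
```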
While the luminosity method is the most commonly used when converting to greyscale, it can have a negative effect, as much of the important image data can be lost in the translation [2]. In an experiment to determine the most successful greyscale conversion method, Cordelli and Soda found that a conversion method based on RGB-derived information outperformed the three other models tested. The experiment was carried out for antibody detection in patients, where greyscale cameras were too costly and RGB cameras provided an adequate amount of data [3]. There has also been considerable work on the colourisation of greyscale images, such as the colour spaces developed by Ruderman et al. [4] [5]. Although much noise makes it through a greyscale filter, many works have aimed at eradicating this noise with a multitude of methods (e.g. fuzzy inference systems) [6] [7].
Edge Detection
Edges are projections of physical processes or of changes in illumination. Edge detection has been found to be one of the most critical tasks in applications such as pattern analysis and recognition, and it can be used in almost every field of image analysis (object recognition, target tracking, etc.). Because the edges within an image contain very important features, they carry a significant portion of the image's information. Edge detection techniques aim to identify distinct edges within an image (i.e. for greyscale images there should not be two edges of the same brightness, and for colour images there should not be two edges of the same colour) [8] [9].
Once edges are detected, unwanted information within the image (e.g. surface texture) can be filtered out, allowing the reconstruction of a much smaller image containing only the content relevant to the application [8] [10]. Edge detection on colour images has been found to be a much more difficult task than the same process on a greyscale image, because the detected edges within a colour image are generally not clearly visible or thick, resulting in false edges being detected [8].
More traditional edge detection operators focus on small areas such as 3×3 pixel windows; however, depending on the scene, this may result in many false positives being detected. One remedy, or alternative, is to run a first and a second edge detection filter, where the second has a neighbourhood size increased by a factor of 2 or 3. Combining the two sets of results allows false positives due to noise to be discarded. The downside, of course, is the possible omission of features that are genuine edges but do not extend beyond a single neighbourhood. The three main types of edge detection operators can be grouped as gradient operators (approximations of the first derivative), Laplacian operators (zero-crossing detectors) and image approximation algorithms [11]. Gradient-based techniques are less complex than Laplacian-based techniques: gradient-based techniques detect edges by finding the maxima and minima of the first derivative of the image, while Laplacian techniques detect edges by finding the zero-crossings of the second derivative [8].
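A rough SciPy illustration of the two derivative-based families follows; the threshold and sigma values are arbitrary assumptions, and this is not the specific formulation of [8] or [11]:

```python
import numpy as np
from scipy import ndimage

def gradient_edges(img, threshold=50.0):
    """First-derivative approach: mark pixels with a large gradient magnitude."""
    gx = ndimage.sobel(img.astype(np.float32), axis=1)
    gy = ndimage.sobel(img.astype(np.float32), axis=0)
    return np.hypot(gx, gy) > threshold

def laplacian_zero_crossings(img, sigma=2.0):
    """Second-derivative approach: find sign changes (zero crossings) in a
    Gaussian-smoothed Laplacian response."""
    log = ndimage.gaussian_laplace(img.astype(np.float32), sigma=sigma)
    signs = np.sign(log)
    # A zero crossing occurs where the sign flips between horizontal or vertical neighbours.
    cross_h = signs[:, :-1] * signs[:, 1:] < 0
    cross_v = signs[:-1, :] * signs[1:, :] < 0
    edges = np.zeros(img.shape, dtype=bool)
    edges[:, :-1] |= cross_h
    edges[:-1, :] |= cross_v
    return edges
```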
Canny Filter
Canny edge detection (developed by John F. Canny in 1986) [12] is used to detect edges while also suppressing noise. The Canny filter, however, sits outside the edge detection operator categories mentioned previously: it is based on the squared gradient magnitude, where local maxima above a specified threshold are then identified as edges [11]. One of the benefits of the Canny filter is that it acknowledges and overcomes white Gaussian noise (i.e. noise whose values at any pair of pixels are uncorrelated) [13].
One of the major issues with the original Canny filter is that its criteria imply that the optimal edge detector is infinitely wide, meaning that an endless number of edges would be distinguished in the image, rendering the edge detector useless. To overcome this, research has shown that combining additional edge-detector criteria can optimise performance [12]. While noisy images do present a challenge for Canny filters, this has been addressed numerous times; a prime example from 2002 is a generalisation combining the Canny and Deriche filter techniques, which significantly improved the signal-to-noise ratio compared with Deriche's original filter [14]. The Canny filter has been proven successful by Islam et al. for applications as fine as fingerprint detection when used with the discrete wavelet transform, with matching correlations as high as 0.9999 [15]. Canny filters can also play a large part in the feature-extraction portion of object recognition [16].
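In practice the Canny detector is usually applied through a library implementation; a minimal OpenCV sketch is given below (the file name, blur kernel and hysteresis thresholds are arbitrary example values, not taken from the works cited above):

```python
import cv2

# Read a test image directly as greyscale (the file name is an example only).
img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

# Gaussian pre-smoothing suppresses the white Gaussian noise the detector assumes.
blurred = cv2.GaussianBlur(img, (5, 5), 1.4)

# Hysteresis thresholds: weak edges are kept only when connected to strong edges.
edges = cv2.Canny(blurred, 50, 150)
```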
Sobel Operator
The Sobel operator, also known as the Sobel filter or the Sobel–Feldman operator, was devised by Irwin Sobel and Gary Feldman in 1968. It approximates the gradient of the image intensity by convolving the image with a pair of small 3×3 kernels, one responding to horizontal changes and the other to vertical changes; the two responses are then combined into a gradient magnitude, placing it in the gradient-based family of operators described above.
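A small sketch of the standard Sobel kernels and how the two directional responses are combined, using SciPy for the convolution (purely illustrative):

```python
import numpy as np
from scipy import ndimage

# Standard 3x3 Sobel kernels: approximations of the horizontal and vertical derivatives.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float32)
SOBEL_Y = SOBEL_X.T

def sobel_magnitude(img):
    """Approximate the gradient magnitude by convolving with both kernels."""
    gx = ndimage.convolve(img.astype(np.float32), SOBEL_X)
    gy = ndimage.convolve(img.astype(np.float32), SOBEL_Y)
    return np.hypot(gx, gy)
```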
Smoothing Filters
Since all edge detection results are easily affected by noise, filtering out this noise is of great importance. Many smoothing methods are based on an anisotropic modification of the heat-conduction equation; an alternative is instead to modify the solution of that equation [17]. An edge-preserving smoothing filter is essentially one which eliminates the fine detail of an image while preserving the structures that are integral to processing it; which detail is eliminated naturally depends on thresholds. Because a coarsely structured image may contain edges together with both high and low frequencies, edge-preserving smoothing cannot be achieved with linear filtering [17].
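As a toy illustration of threshold-dependent, edge-preserving smoothing (a simple sigma-filter-style sketch, not the non-linear Gaussian filter of [17]; the radius and threshold are arbitrary assumptions):

```python
import numpy as np

def sigma_filter(img, radius=2, threshold=20.0):
    """Average only those neighbours whose intensity lies within `threshold`
    of the centre pixel, so that averaging never crosses a strong edge."""
    img = img.astype(np.float32)
    padded = np.pad(img, radius, mode="reflect")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            keep = np.abs(window - img[y, x]) <= threshold
            out[y, x] = window[keep].mean()
    return out
```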
Gaussian Filter
Gaussian smoothing is based on the assumption, verified in many experiments, that the noise in an image is a close approximation to a Gaussian distribution, and the filter therefore removes this noise. However, as stated above, fine details (i.e. those that lie outside the Gaussian distribution) are left [18]. Gaussian filtering has been used successfully, and remains a standard image-processing step, for scale-space filtering, with applications such as blur removal in photographs [19]. It has proven popular due to its predictability, its symmetry and its smooth decrease around the mean. It is also convenient to work with, as it is readily integrated or differentiated [20].
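A brief SciPy sketch of Gaussian smoothing on a synthetic noisy image (the sigma value and test pattern are arbitrary assumptions):

```python
import numpy as np
from scipy import ndimage

# Synthetic noisy test image: a bright square corrupted with Gaussian noise.
rng = np.random.default_rng(0)
img = np.zeros((128, 128), dtype=np.float32)
img[32:96, 32:96] = 1.0
noisy = img + rng.normal(scale=0.2, size=img.shape)

# sigma sets the kernel width: larger sigma removes more noise but blurs edges.
smoothed = ndimage.gaussian_filter(noisy, sigma=2.0)
```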
Median Filter
The median filter (a particular kind of rank filter) is used to reduce noise within an image by locating pixels that have extreme, improbable intensities and replacing them with a more suitable value (i.e. the median value of their neighbourhood). Pixels with extreme intensities are generally caused by impulse noise, so it is important to remove these points [1] [21]. Blurring is kept to a minimum when applying the median filter, with only a slight loss of detail after the filtering process. As its most important application is to attenuate impulse noise without blurring edges, the median filter is one of the most commonly used filters in image-processing applications [1]. The process is not perfect, however, and comes with disadvantages such as edge jitter, streaking and the removal of some important image detail [22].
The filtering process is very similar to that of the mean filter, the main difference being that the centre pixel of the neighbourhood is replaced with the median of the neighbouring pixel values, whereas the mean filter uses the mean value as the replacement. The median filter has been found to preserve the useful detail in an image effectively. Generally, the median filter operates on a 3×3 neighbourhood, but a larger neighbourhood is required to produce more severe smoothing [23] [22].
The median is not affected by outliers (i.e. extreme pixel values within the neighbourhood will not affect the median as much as they affect the mean). It is also a much more effective way of preserving sharp edges, because the median is an actual pixel value from within the neighbourhood rather than an unrealistic new value of the kind generated by mean filtering [23].
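A short SciPy sketch contrasting median and mean filtering of simulated impulse noise (the noise levels and image are arbitrary assumptions):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = np.full((128, 128), 100, dtype=np.uint8)

# Simulate impulse ("salt and pepper") noise: a few pixels forced to 0 or 255.
noisy = img.copy()
noisy[rng.random(img.shape) < 0.02] = 255
noisy[rng.random(img.shape) < 0.02] = 0

median = ndimage.median_filter(noisy, size=3)   # 3x3 neighbourhood
mean = ndimage.uniform_filter(noisy, size=3)    # mean filter, for comparison
```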
[1] E. R. Davies, Computer Vision: Principles, Algorithms, Applications, Learning, 5th ed. United Kingdom, United States: Academic Press, 2018.
[2] I. Farup, M. Pedersen, and A. Alsam, “Colour-to-Greyscale Image Conversion by Linear Anisotropic Diffusion of Perceptual Colour Metrics,” in 2018 Colour and Visual Computing Symposium (CVCS), 2018, pp. 1–6.
[3] E. Cordelli and P. Soda, “Methods for greyscale representation of HEp-2 colour images,” in 2010 IEEE 23rd International Symposium on Computer-Based Medical Systems (CBMS), 2010, pp. 383–388.
[4] Ye Ji and Yan Chen, “Rendering greyscale image using color feature,” in 2008 International Conference on Machine Learning and Cybernetics, 2008, pp. 3017–3021.
[5] D. L. Ruderman, T. W. Cronin, and C.-C. Chiao, “Statistics of cone responses to natural images: implications for visual coding,” J. Opt. Soc. Am. A, vol. 15, no. 8, p. 2036, Aug. 1998.
[6] M. Sharique and M. A. Ali, “Two step impulse noise suppression in greyscale images using ANFIS,” in IMPACT-2013, 2013, pp. 80–83.
[7] P. Civicioglu, “Using Uncorrupted Neighborhoods of the Pixels for Impulsive Noise Suppression With ANFIS,” IEEE Trans. Image Process., vol. 16, no. 3, pp. 759–773, Mar. 2007.
[8] D. J. Bora, “A Novel Approach for Color Image Edge Detection Using Multidirectional Sobel Filter on HSV Color Space,” Int. J. Comput. Sci. Eng., vol. 5, no. 2, pp. 154–159, 2017.
[9] J. S. Lim, Two-Dimensional Signal and Image Processing. Prentice Hall, 1990.
[10] W. Gao, L. Yang, X. Zhang, and H. Liu, “An improved Sobel edge detection,” Proc. – 2010 3rd IEEE Int. Conf. Comput. Sci. Inf. Technol. ICCSIT 2010, vol. 5, pp. 67–71, 2010.
[11] M. Ali and D. Clausi, “Using the Canny edge detector for feature extraction and enhancement of remote sensing images,” in IGARSS 2001. Scanning the Present and Resolving the Future. Proceedings. IEEE 2001 International Geoscience and Remote Sensing Symposium (Cat. No.01CH37217), 2001, vol. 5, pp. 2298–2300.
[12] W. McIlhagga, “The Canny Edge Detector Revisited,” Int. J. Comput. Vis., vol. 91, no. 3, pp. 251–261, Feb. 2011.
[13] J. Canny, “A Computational Approach to Edge Detection,” IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-8, no. 6, pp. 679–698, Nov. 1986.
[14] E. Bourennane, P. Gouton, M. Paindavoine, and F. Truchetet, “Generalization of Canny–Deriche filter for detection of noisy exponential edge,” Signal Processing, vol. 82, no. 10, pp. 1317–1328, Oct. 2002.
[15] M. I. Islam, N. Begum, M. Alam, and M. R. Amin, “Fingerprint Detection Using Canny Filter and DWT, a New Approach,” J. Inf. Process. Syst., vol. 6, no. 4, 2010.
[16] A. D. O. Riordan, D. Toal, T. Newe, and G. Dooly, “Object Recognition within Smart Manufacturing,” in Flexible Automation and Intelligent Manufacturing Conference 2019, 2019.
[17] V. Aurich and J. Weule, “Non-Linear Gaussian Filters Performing Edge Preserving Diffusion,” in Mustererkennung, 1995, pp. 538–545.
[18] A. M. Wink and J. B. J. M. Roerdink, “Denoising Functional MR Images: A Comparison of Wavelet Denoising and Gaussian Smoothing,” IEEE Trans. Med. Imaging, vol. 23, no. 3, pp. 374–387, Mar. 2004.
[19] J. Babaud, A. P. Witkin, M. Baudin, and R. O. Duda, “Uniqueness of the Gaussian Kernel for Scale-Space Filtering,” IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-8, no. 1, pp. 26–33, Jan. 1986.
[20] A. P. Witkin, “Scale-space filtering,” in Proceedings of the Eighth International Joint Conference on Artificial Intelligence (IJCAI’83), vol. 2, 1983, pp. 1019–1022.
[21] Z. Wang and D. Zhang, “Progressive switching median filter for the removal of impulse noise from highly corrupted images,” IEEE Trans. Circuits Syst. II: Analog Digit. Signal Process., vol. 46, no. 1, pp. 78–80, 1999.
[22] L. Yin, R. Yang, and M. Gabbouj, “Weighted Median Filters: A Tutorial,” IEEE Trans. Circuits Syst. II: Analog Digit. Signal Process., vol. 43, no. 3, pp. 157–192, 1996.
[23] A. K. Jain, Fundamentals of Digital Image Processing. Englewood Cliffs, NJ: Prentice Hall, 1989.