Research on image segmentation method based on gray-level co-occurrence matrix


Image segmentation is the technology and process of dividing an image into regions with distinct characteristics and extracting the targets of interest. It is one of the key technologies in digital image processing and the basis for further image recognition, analysis and understanding. Many segmentation algorithms exist and several schemes for classifying them have been proposed; they are generally divided into three categories: (1) threshold segmentation; (2) edge detection; (3) region extraction. However, no single method can be applied universally to all kinds of images, so image segmentation remains an active and popular research topic in image processing. With the development of science and technology, image processing is being applied ever more widely in the military, mainly in camouflage design. Military camouflage is an important means of concealing weapons and equipment and preserving one's own forces in modern high-tech warfare, and it is also a prerequisite for defeating the enemy, so camouflage design has long been a research focus in many countries. This paper takes an aerial photograph of a mountainous area as the research object, analyzes its background and then performs image segmentation in preparation for subsequent camouflage design. Because the texture characteristics of the mountain background are pronounced, texture analysis is used to characterize it, and the gray-level co-occurrence matrix is the most commonly used method in texture analysis. In this paper, the gray-level co-occurrence matrix method is used to study the segmentation of the image.


1 Gray-level co-occurrence matrix
The gray-level co-occurrence matrix (GLCM) is one of the image texture analysis methods. It reflects the spatial information of the relative positions of different pixels and, to a certain extent, the spatial distribution characteristics of each gray level in a textured image; it is among the most frequently used features in texture analysis. The GLCM is a second-order statistical measure of the grayscale variation of an image and a basic function for describing texture structure: it counts the joint probability distribution of the gray levels at pairs of pixel positions. Let S be the set of pixel pairs in the target region R that satisfy a specific spatial relationship; the co-occurrence matrix P can then be defined as

P(i, j) = \frac{\#\{[(x_1, y_1), (x_2, y_2)] \in S \mid f(x_1, y_1) = i,\ f(x_2, y_2) = j\}}{\#S}    (1)

The numerator on the right-hand side of formula (1) is the number of pixel pairs that have the given spatial relationship and gray values i and j respectively, and the denominator is the total number of pixel pairs (# denotes the number of elements), so the resulting P is normalized. For an image f(i, j) of size N×N whose pixels take gray levels {0, 1, ..., G-1} (dynamic range G), the gray-level co-occurrence matrix is a two-dimensional matrix C(i, j) whose elements give the probability that gray levels i and j occur jointly at a given distance d and angle θ. Depending on the values of d and θ there may therefore be several co-occurrence matrices; in practice d is chosen appropriately and θ is generally taken as 0°, 45°, 90° or 135°, as shown in Figure 1.
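As a minimal illustration of formula (1) (a sketch in Python rather than the Matlab/VC++ tools used in the paper; the function name and test image are assumptions for illustration), the following code builds a normalized co-occurrence matrix for a quantized image, a pixel-pair distance d and one of the four standard angles:

```python
import numpy as np

def glcm(img, levels=16, d=1, angle=0):
    """Build a normalized gray-level co-occurrence matrix.

    img    : 2-D array of integer gray levels in [0, levels-1]
    levels : number of quantized gray levels (G)
    d      : pixel-pair distance
    angle  : 0, 45, 90 or 135 (degrees)
    """
    # Row/column offsets corresponding to the four standard directions.
    offsets = {0: (0, d), 45: (-d, d), 90: (-d, 0), 135: (-d, -d)}
    dr, dc = offsets[angle]

    P = np.zeros((levels, levels), dtype=np.float64)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                P[img[r, c], img[r2, c2]] += 1   # count the pixel pair (i, j)

    return P / P.sum()                            # normalize, as in formula (1)

# Example: a tiny 4-level test image, horizontal direction (0°), distance 1.
test = np.array([[0, 0, 1, 1],
                 [0, 0, 1, 1],
                 [0, 2, 2, 2],
                 [2, 2, 3, 3]])
print(glcm(test, levels=4, d=1, angle=0))
```

Changing d and angle yields the different co-occurrence matrices mentioned above, one per direction.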


2 Experimental design and analysis
2.1 Common parameters of gray-level co-occurrence matrix
In practical applications, parameters calculated from the gray-level co-occurrence matrix are used as feature quantities for image texture analysis. Haralick proposed 14 such parameters; the following four are mainly used in this experiment:
(1) Angular second moment (ASM). The angular second moment is the sum of the squares of the elements of the gray-level co-occurrence matrix, so it is also called energy. It reflects the uniformity of the gray-level distribution of the image and the coarseness of the texture. If all elements of the co-occurrence matrix are nearly equal, the ASM value is small; if some elements are large and the others small, i.e. the elements are concentrated, the ASM value is large. A large ASM value indicates a more uniform, regularly varying texture pattern;
(2) Contrast (CON). Contrast reflects the clarity of the image and the depth of the texture grooves. The deeper the grooves, the greater the contrast and the clearer the visual effect; conversely, a small contrast corresponds to shallow grooves and a blurred effect. The more pixel pairs with large gray-level differences there are, the larger this value; the larger the elements far from the diagonal of the co-occurrence matrix, the larger the CON;




(3) Correlation (COR). Correlation measures the similarity of the elements of the spatial gray-level co-occurrence matrix along the row or column direction, so its value reflects the local gray-level correlation in the image. When the matrix element values are uniform and equal, the correlation value is large; conversely, when the matrix element values differ greatly, the correlation value is small. If the image contains horizontal texture, the COR of the horizontal-direction matrix is larger than the COR values of the other matrices;
(4) Entropy (ENT). Entropy measures the amount of information contained in the image, and texture information is part of that information. If the image has no texture, the gray-level co-occurrence matrix is almost all zeros and the entropy is close to zero; if the image is full of fine texture, the values of P(i, j) are approximately equal and the entropy is at its largest; if the image contains relatively little texture, the values of P(i, j) differ greatly and the entropy is small.
2.2 Image preprocessing
Image preprocessing, i.e. filtering the image, is carried out to improve the speed and accuracy of image recognition: it removes interference, retains the parts that need to be processed and filters out the unnecessary parts. Taking the mountain photograph as an example, a picture of a suitable size is first selected as the research object, as shown in Figure 2(a); the picture is then scanned into the computer and numbered. Noise is removed and the image is binarized so that image features can be extracted more easily, as shown in Figure 2(b). Because computing the co-occurrence matrix is expensive, the gray scale of the image is quantized into 16 gray levels.
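The four measures above can be written compactly in terms of a normalized co-occurrence matrix P(i, j). The following sketch (again an illustration in Python rather than the paper's Matlab/VC++ code; the function names are hypothetical) quantizes an 8-bit image to 16 gray levels and computes ASM, CON, COR and ENT from a GLCM such as the one produced by the glcm function sketched in Section 1:

```python
import numpy as np

def quantize(img8, levels=16):
    """Map an 8-bit grayscale image (0..255) onto 'levels' gray levels."""
    return (img8.astype(np.uint16) * levels // 256).astype(np.uint8)

def haralick_features(P):
    """Compute ASM, CON, COR and ENT from a normalized GLCM P."""
    levels = P.shape[0]
    i, j = np.indices((levels, levels))

    asm = np.sum(P ** 2)                          # ASM = sum_ij P(i,j)^2
    con = np.sum((i - j) ** 2 * P)                # CON = sum_ij (i-j)^2 P(i,j)

    mu_i, mu_j = np.sum(i * P), np.sum(j * P)     # marginal means
    sd_i = np.sqrt(np.sum((i - mu_i) ** 2 * P))   # marginal standard deviations
    sd_j = np.sqrt(np.sum((j - mu_j) ** 2 * P))
    cor = np.sum((i - mu_i) * (j - mu_j) * P) / (sd_i * sd_j + 1e-12)

    nz = P[P > 0]                                 # skip zero entries to avoid log(0)
    ent = -np.sum(nz * np.log2(nz))               # ENT = -sum_ij P(i,j) log P(i,j)

    return asm, con, cor, ent
```

In the experiment below, such a feature set would be computed once per direction (0°, 45°, 90°, 135°) for every image block.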



2.3 Experimental design
There are many application examples of texture image recognition and classification. The general approach is to extract a set of texture features from each image with a texture feature measurement method; these features form the feature vector of the sample, and a statistical pattern recognition method is then applied in the feature space to identify and classify a large number of image samples. In this experiment, the digitized sample image is 109×116 pixels. The image is divided into non-overlapping windows of 16×16 pixels, giving 49 sub-images in total, with Ng = 16 (the 0~255 gray scale is divided into 16 levels). Features are extracted from each small block in 4 directions (0°, 45°, 90° and 135°). The specific design steps are as follows:
(1) Use the gray-level co-occurrence matrix described above to calculate the 4 most important feature values (angular second moment, contrast, correlation and entropy) in each direction, then take the mean and variance over the 4 directions to represent each feature, so that the 4 directional values become 2; this provides a total of 8 texture feature values. The extracted feature values are saved in the texture feature library as training samples;
(2) The texture feature values ​​of other small blocks are calculated as unknown samples and numbered;
(3) The feature values extracted from the unknown samples are compared with the feature values of the training samples in the texture feature library using the minimum Euclidean distance classification method. The number of an unknown sample is output as a successful match only when the weighted Euclidean distance between its feature vector and that of the training sample is the smallest; otherwise it is not output. After a successful match, the number of the matched unknown sample is unified with the number of the training sample;
(4) The texture feature values of another, still unmatched, sample are saved in the texture feature library as a new training sample, and pattern matching is performed in the same way. Repeat steps (2) and (3) until every unknown sample has been output.
After all the above steps are completed, small blocks belonging to the same texture region carry the same number, so texture classification is achieved. Region merging and division are then performed according to the different numbers, which yields the texture image segmentation. The overall experimental design process is shown in Figure 3, and a simplified sketch of the matching loop is given below.
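The following Python sketch shows one way the window-wise feature extraction and minimum-weighted-Euclidean-distance labelling described above could be organized. It reuses the glcm and haralick_features functions sketched earlier; the window size, inverse-spread weighting and distance threshold are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

ANGLES = (0, 45, 90, 135)

def block_features(block, levels=16, d=1):
    """8-element feature vector: mean and variance of (ASM, CON, COR, ENT)
    over the four directions, for one image block."""
    per_dir = np.array([haralick_features(glcm(block, levels, d, a)) for a in ANGLES])
    return np.concatenate([per_dir.mean(axis=0), per_dir.var(axis=0)])

def label_blocks(img, win=16, threshold=1.0):
    """Assign a texture label to each non-overlapping win x win block by
    minimum weighted Euclidean distance to the texture feature library."""
    rows, cols = img.shape[0] // win, img.shape[1] // win
    feats = [[block_features(img[r*win:(r+1)*win, c*win:(c+1)*win])
              for c in range(cols)] for r in range(rows)]

    # Weight each feature dimension by the inverse of its spread so that
    # no single feature dominates the distance (one possible weighting).
    all_f = np.array(feats).reshape(-1, 8)
    w = 1.0 / (all_f.std(axis=0) + 1e-12)

    labels = -np.ones((rows, cols), dtype=int)
    prototypes = []                           # texture feature library
    for r in range(rows):
        for c in range(cols):
            f = feats[r][c]
            if prototypes:
                dists = [np.sqrt(np.sum((w * (f - p)) ** 2)) for p in prototypes]
                k = int(np.argmin(dists))
                if dists[k] < threshold:      # matched an existing texture class
                    labels[r, c] = k
                    continue
            prototypes.append(f)              # new training sample / new class
            labels[r, c] = len(prototypes) - 1
    return labels
```

Blocks carrying the same label can then be merged into regions to produce the segmented image; the threshold controls how readily a block starts a new texture class rather than joining an existing one.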


2.4 Experimental results analysis
Following the above steps, the image is analyzed and processed with Matlab and VC++ tools, and cluster analysis and related methods are then applied to obtain the experimental results shown in Figure 4.

As Figure 4 shows, after the image with obvious texture features is binarized and its texture feature values are processed with the weighted Euclidean distance, the resulting image achieves good region merging and division, fulfilling the purpose of image segmentation.

3 Conclusion

In this paper, the gray-level co-occurrence matrix method is used to extract the texture features of the image; pattern matching is then performed on each texture region according to the weighted Euclidean distance, and the image is merged and divided according to the different texture regions. Finally, image segmentation is achieved with clustering and related methods. Repeated experiments show that, for images with significant texture features, segmentation based on the gray-level co-occurrence matrix has a certain accuracy and practicality and can achieve a good segmentation result.
