
Detailed explanation of k-means clustering model in Python

王林
Release: 2023-06-10 09:15:19

Cluster analysis is a method for discovering groups of similar objects in data, and it is widely used in fields such as data mining and machine learning. k-means clustering is one of the most common clustering methods: it divides the samples in a data set into k clusters so that the differences within each cluster are as small as possible and the differences between clusters are as large as possible. This article introduces the k-means clustering model in Python in detail.

  1. The principle of k-means clustering

The k-means clustering algorithm is an iterative clustering method. Its core steps are: initializing the centroids, computing distances, updating the centroids, and checking the stopping condition.

First, the number of clusters k must be specified. Then k data samples are randomly selected as the initial centroids, and each remaining sample is assigned to the cluster whose centroid is nearest. Next, the sum of the squared distances between all data points in a cluster and that cluster's centroid is computed as the cluster's error. The centroid of each cluster is then updated by moving it to the mean of all samples in that cluster. These steps are repeated until the error falls below a given threshold or the maximum number of iterations is reached.
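
To make these steps concrete, the following is a minimal NumPy sketch of the procedure; the function name kmeans_numpy and its parameters are purely illustrative and not part of any library.

import numpy as np

def kmeans_numpy(X, k, n_iters=100, tol=1e-4, seed=0):
    """Minimal k-means loop: random init, assign, update, stop when centroids settle."""
    rng = np.random.default_rng(seed)
    # Randomly pick k samples as the initial centroids.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assign every sample to the cluster with the nearest centroid (Euclidean distance).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of the samples assigned to it;
        # keep the old centroid if a cluster happens to be empty.
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        # Stop when the centroids no longer move noticeably.
        if np.linalg.norm(new_centroids - centroids) < tol:
            centroids = new_centroids
            break
        centroids = new_centroids
    return centroids, labels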

  2. Implementing k-means clustering in Python

In Python, the sklearn library provides the KMeans class, which is the simplest way to use the k-means clustering algorithm. The following uses the iris data set as an example to show how to implement k-means clustering in Python:

from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

iris = load_iris()
X = iris.data[:, :2]  # keep only the first two features for easier visualization
y = iris.target

kmeans = KMeans(n_clusters=3)  # cluster into 3 groups
kmeans.fit(X)

centroids = kmeans.cluster_centers_  # coordinates of the cluster centroids
labels = kmeans.labels_  # cluster label assigned to each sample

# Plot the result
import matplotlib.pyplot as plt

colors = ['red', 'green', 'blue']
for i in range(len(X)):
    plt.scatter(X[i][0], X[i][1], c=colors[labels[i]])

for c in centroids:
    plt.scatter(c[0], c[1], marker='x', s=300, linewidths=3, color='black')

plt.show()

Executing the above code produces a scatter plot of the clustering result.

In the plot, the red, green and blue points represent the different clusters, and the black "x" markers indicate the centroid of each cluster.
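
Once the model has been fitted, new observations can be assigned to the nearest learned centroid. The short follow-up below reuses the kmeans object from the snippet above; the two sample coordinates are made up purely for illustration.

import numpy as np

new_points = np.array([[5.0, 3.5], [6.5, 3.0]])  # hypothetical (sepal length, sepal width) pairs
print(kmeans.predict(new_points))    # index of the nearest centroid for each point
print(kmeans.transform(new_points))  # distance from each point to every centroid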

  3. How to choose the optimal k value

Determining the optimal value of k is one of the harder problems in the k-means clustering algorithm. Two common methods are introduced below: the elbow method and the silhouette coefficient method.

Elbow method: start with small integer values of k and compute the sum of squared errors (SSE) of the clustering for each k. As k increases, the SSE decreases; beyond a certain point, the SSE stops dropping significantly. Plotting SSE against k produces a curve shaped like an arm, and the k value at the "elbow" of this curve is taken as the optimal number of clusters.

Code example:

sse = []
for i in range(1, 11):
    kmeans = KMeans(n_clusters=i).fit(X)
    sse.append(kmeans.inertia_)  # inertia_ holds the model's sum of squared errors

plt.plot(range(1, 11), sse)
plt.xlabel('K')
plt.ylabel('SSE')
plt.show()

Silhouette coefficient method: the silhouette coefficient combines two factors, intra-cluster cohesion and inter-cluster separation. The larger the silhouette coefficient, the better the clustering. It is computed as follows:

For each sample, calculate its average distance to the other samples in the same cluster (called a), and its average distance to all samples in the nearest other cluster (called b).

Calculate the silhouette coefficient s of each sample, $s = \frac{b-a}{\max(a, b)}$. The silhouette coefficient of the entire model is the average of the silhouette coefficients of all samples.

Code example:

from sklearn.metrics import silhouette_score

sil_scores = []
for k in range(2, 11):
    kmeans = KMeans(n_clusters=k).fit(X)
    sil_score = silhouette_score(X, kmeans.labels_)  # compute the silhouette coefficient
    sil_scores.append(sil_score)

plt.plot(range(2, 11), sil_scores)
plt.xlabel('K')
plt.ylabel('Silhouette Coefficient')
plt.show()
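
As a sanity check on the formula above, the following is a minimal manual computation of the silhouette coefficient for a single sample, assuming the X array from the earlier examples; the result should match the per-sample value returned by sklearn's silhouette_samples.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples

kmeans = KMeans(n_clusters=3).fit(X)  # refit with 3 clusters for this check
labels = kmeans.labels_
i = 0  # index of the sample to inspect

# a: average distance to the other samples in the same cluster
same_cluster = (labels == labels[i])
same_cluster[i] = False
a = np.linalg.norm(X[same_cluster] - X[i], axis=1).mean()

# b: smallest average distance to the samples of any other cluster
b = min(
    np.linalg.norm(X[labels == c] - X[i], axis=1).mean()
    for c in np.unique(labels) if c != labels[i]
)

s = (b - a) / max(a, b)
print(s, silhouette_samples(X, labels)[i])  # the two values should match
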
  4. k-means clustering considerations

k-means clustering has the following considerations:

The choice of initial centroids has a large impact on the result; a poor initialization can lead to a poor clustering (see the sketch after these points).

The clustering results depend on the chosen distance metric, such as Euclidean distance or Manhattan distance; the choice should match the data at hand.

Outliers in the data set can pull centroids away from the bulk of the data and end up in inappropriate clusters, so removing outliers beforehand should be considered.

When the class distribution of the samples is unbalanced, a common problem is that the resulting clusters are heavily skewed.
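
To help with the first two points, a common approach is to standardize the features and let sklearn run the k-means++ initialization with several restarts. A minimal sketch, assuming the X array from the earlier examples:

from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Put all features on a comparable scale so that the Euclidean distance
# used by KMeans is not dominated by a single feature.
X_scaled = StandardScaler().fit_transform(X)

# k-means++ initialization plus several restarts (n_init) reduces the
# sensitivity to the random choice of initial centroids; the run with
# the lowest inertia is kept automatically.
kmeans = KMeans(n_clusters=3, init='k-means++', n_init=10, random_state=42)
kmeans.fit(X_scaled)
print(kmeans.inertia_)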

  5. Summary

k-means clustering is a widely used clustering algorithm. In Python, it can be implemented quickly with the KMeans class provided by the sklearn library, and the elbow method or the silhouette coefficient method can be used to determine the optimal number of clusters. In practice, attention should also be paid to the choice of k and the initialization of the centroids.

