K-Means Clustering with Scipy
K-means clustering is a method for finding clusters and cluster centers
in a set of unlabeled data. Intuitively, we might think of a cluster as
a group of data points whose inter-point distances are small compared
with the distances to points outside of the cluster. Given an initial
set of K centers, the K-means algorithm alternates between the following
two steps (a bare-bones NumPy sketch of this loop follows the list):
- for each center we identify the subset of training points (its cluster) that is closer to it than any other center;
- the means of each feature for the data points in each cluster are computed, and this mean vector becomes the new center for that cluster.
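Here is that sketch; the function name simple_kmeans, the random initialization, and the fixed number of iterations are illustrative choices, not part of SciPy's API:

from numpy import argmin, sqrt
from numpy.random import permutation

def simple_kmeans(data, k, n_iter=10):
    # start from k distinct points chosen at random (illustrative initialization)
    centers = data[permutation(len(data))[:k]].copy()
    for _ in range(n_iter):
        # step 1: assign each point to its closest center
        dists = sqrt(((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2))
        labels = argmin(dists, axis=1)
        # step 2: move each center to the mean of the points assigned to it
        for j in range(k):
            if (labels == j).any():
                centers[j] = data[labels == j].mean(axis=0)
    return centers, labels

A real implementation would stop when the assignments no longer change rather than after a fixed number of iterations; the SciPy routine used below also takes care of running the algorithm several times and keeping the best result.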
The SciPy library provides a good implementation of the K-means algorithm. Let's see how to use it:
from pylab import plot, show
from numpy import vstack, array
from numpy.random import rand
from scipy.cluster.vq import kmeans, vq

# data generation
data = vstack((rand(150,2) + array([.5,.5]), rand(150,2)))

# computing K-Means with K = 2 (2 clusters)
centroids,_ = kmeans(data, 2)
# assign each sample to a cluster
idx,_ = vq(data, centroids)

# some plotting using numpy's logical indexing
plot(data[idx==0,0], data[idx==0,1], 'ob',
     data[idx==1,0], data[idx==1,1], 'or')
plot(centroids[:,0], centroids[:,1], 'sg', markersize=8)
show()

The result should be as follows:
In this case we split the data into 2 clusters: the blue points have been assigned to the first and the red ones to the second. The squares are the centers of the clusters.
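Two details worth noting: the second value returned by kmeans (discarded as _ above) is the distortion, i.e. the mean Euclidean distance between the points and their closest centroid, and vq can be reused to assign new, unseen points to the centroids we just computed. A quick sketch, where new_points is just a made-up example:

from numpy import array
from scipy.cluster.vq import kmeans, vq

# keep the distortion this time instead of discarding it
centroids, distortion = kmeans(data, 2)

# assign two hypothetical new samples to the nearest centroid
new_points = array([[0.2, 0.3], [1.2, 1.1]])
labels, dists = vq(new_points, centroids)
print(labels, dists)  # cluster index and distance to its centroid for each point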
Now let's try to split the data into 3 clusters:
# now with K = 3 (3 clusters)
centroids,_ = kmeans(data, 3)
idx,_ = vq(data, centroids)

plot(data[idx==0,0], data[idx==0,1], 'ob',
     data[idx==1,0], data[idx==1,1], 'or',
     data[idx==2,0], data[idx==2,1], 'og')  # third cluster points
plot(centroids[:,0], centroids[:,1], 'sm', markersize=8)
show()

This time the result is as follows:
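How do we choose between 2 and 3 clusters (or any other K)? Beyond eyeballing the plots, the distortion returned by kmeans gives a rough, informal criterion: it keeps shrinking as K grows, and one usually looks for the point where the decrease levels off. A minimal sketch of such a sweep, with the range of K values picked arbitrarily:

from scipy.cluster.vq import kmeans

# distortion (mean distance to the closest centroid) for a few values of K
for k in range(1, 7):
    centroids, distortion = kmeans(data, k)
    print(k, distortion)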
Published at DZone with permission of Giuseppe Vettigli, DZone MVB.