
K-means Clustering and Voronoi Sets

· Big Data Zone ·

In the context of $k$-means, we want to partition the space of our observations into $k$ classes. Each observation belongs to the cluster with the nearest mean. Here “nearest” is in the sense of some norm, usually the $\ell_2$ (Euclidean) norm.

Consider the case where we have 2 classes, the means being the 2 black dots in the figures below. If we partition based on the nearest mean, the $\ell_2$ (Euclidean) norm gives the graph on the left, and the $\ell_1$ (Manhattan) norm the one on the right.

Points in the red region are closer to the mean in the upper part, while points in the blue region are closer to the mean in the lower part. Here, we will always use the standard $\ell_2$ (Euclidean) norm. Note that the graph above is related to Voronoi diagrams (or Voronoy, from Вороний in Ukrainian, or Вороно́й in Russian) with 2 points, the 2 means.
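To see how the choice of norm can change which mean is "nearest," here is a quick Python/NumPy sketch (the two means and the test point are hypothetical, chosen so the answer differs between norms):

```python
import numpy as np

# Two hypothetical means, and a point that switches clusters with the norm
m1, m2 = np.array([0.0, 0.0]), np.array([4.0, 2.0])
p = np.array([0.75, 4.0])

# l2 (Euclidean) distances to each mean
d2 = [np.linalg.norm(p - m, ord=2) for m in (m1, m2)]
# l1 (Manhattan) distances to each mean
d1 = [np.linalg.norm(p - m, ord=1) for m in (m1, m2)]

print(np.argmin(d2))  # 1: under l2, p is nearer to m2
print(np.argmin(d1))  # 0: under l1, p is nearer to m1
```

The point sits between the two boundaries: the $\ell_2$ boundary is the perpendicular bisector of the segment joining the means, while the $\ell_1$ boundary is piecewise linear, so the two partitions disagree in a wedge-shaped region.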

In order to illustrate the $k$-means clustering algorithm (here, Lloyd's algorithm), consider the following dataset:

set.seed(1)
# 5 narrow vertical strips of 100 points each, centered at x = .1, .3, .5, .7, .9
pts <- cbind(X=rnorm(500,rep(seq(1,9,by=2)/10,100),.022),Y=rnorm(500,.5,.15))
plot(pts)

Here, we have 5 groups, so let us run a 5-means algorithm.

• We randomly draw 5 points in the space (initial values for the means), $\boldsymbol{\mu}_1^{(1)},\cdots,\boldsymbol{\mu}_k^{(1)}$
• In the assignment step, we assign each point to the nearest mean

$S_i^{(t)} = \big \{ \boldsymbol{x}_j : \big \| \boldsymbol{x}_j - \boldsymbol{\mu}^{(t)}_i \big \|^2 \leq \big \| \boldsymbol{x}_j - \boldsymbol{\mu}^{(t)}_{i'} \big \|^2 \ \forall i'\in\{1 ,\cdots, k\} \big\}$

• In the update step, we compute the new centroids of the clusters

$\boldsymbol{\mu}^{(t+1)}_i = \frac{1}{|S^{(t)}_i|} \sum_{\boldsymbol{x}_j \in S^{(t)}_i} \boldsymbol{x}_j$
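The two steps above can be sketched directly. Here is a minimal Python/NumPy version of Lloyd's algorithm (an illustrative sketch with a made-up two-blob dataset, not the `kmeans()` implementation R uses below):

```python
import numpy as np

def lloyd(points, k, n_iter=100, seed=0):
    """One run of Lloyd's algorithm: random initial means, then alternate
    assignment and update steps until the means stop moving."""
    rng = np.random.default_rng(seed)
    # initial means: k points drawn at random from the data
    means = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iter):
        # assignment step: each point joins the cluster of its nearest mean
        d2 = ((points[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # update step: each mean becomes the centroid of its cluster
        new_means = np.array([points[labels == i].mean(axis=0)
                              if np.any(labels == i) else means[i]
                              for i in range(k)])
        if np.allclose(new_means, means):
            break
        means = new_means
    return means, labels

# demo on two well-separated (hypothetical) blobs
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0.0, 0.05, size=(50, 2)),
                 rng.normal(1.0, 0.05, size=(50, 2))])
means, labels = lloyd(pts, k=2)
```

On well-separated groups such as these, the algorithm recovers the two blobs regardless of which points are drawn as initial means.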

The successive iterations of the algorithm can be visualized below.

The code to get the clusters is:

kmeans(pts, centers=5, nstart = 1, algorithm = "Lloyd")

Observe that the assignment step is based on computations of Voronoi sets. This can be done in R using:

library(tripack)
km1 <- kmeans(pts, centers=5, nstart=1, algorithm="Lloyd")
V <- voronoi.mosaic(km1$centers[,1], km1$centers[,2])
P <- voronoi.polygons(V)
points(V, pch=19)
plot(V, add=TRUE)
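The connection runs through elementary geometry: a point belongs to the Voronoi cell of center $i$ exactly when it lies on $i$'s side of the perpendicular bisector between $i$ and every other center, which is the same as being nearest to $i$. A small Python/NumPy check with two hypothetical centers:

```python
import numpy as np

# hypothetical centers, stand-ins for k-means centroids
centers = np.array([[0.2, 0.3], [0.7, 0.8]])

def voronoi_cell(p, centers):
    """Index of the Voronoi cell containing p: simply the nearest center."""
    return int(np.argmin(((centers - p) ** 2).sum(axis=1)))

def bisector_side(p, a, b):
    """True if p lies on a's side of the perpendicular bisector of [a, b]."""
    # p is closer to a iff (p - midpoint) . (a - b) > 0
    return np.dot(p - (a + b) / 2, a - b) > 0

# the nearest-center rule and the bisector test agree everywhere
rng = np.random.default_rng(0)
for p in rng.uniform(0, 1, size=(1000, 2)):
    assert (voronoi_cell(p, centers) == 0) == bisector_side(p, centers[0], centers[1])
```

With more than two centers, a cell is the intersection of the half-planes given by all pairwise bisectors, which is exactly what `voronoi.mosaic` computes geometrically.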

This is what we can visualize below:

To visualize the $k$ means and the $k$ clusters (or regions), use:

km1 <- kmeans(pts, centers=5, nstart = 1, algorithm = "Lloyd")
library(tripack)
library(RColorBrewer)
CL5 <- brewer.pal(5, "Pastel1")
V <- voronoi.mosaic(km1$centers[,1],km1$centers[,2])
P <- voronoi.polygons(V)
plot(pts,pch=19,xlim=0:1,ylim=0:1,xlab="",ylab="",col=CL5[km1$cluster])
points(km1$centers[,1],km1$centers[,2],pch=3,cex=1.5,lwd=2)
plot(V,add=TRUE)

Here, the starting points are drawn randomly, so if we run the algorithm again, we might get a different clustering each time.

On that dataset, it is difficult to obtain clusters that match the five groups we can actually see. If we use:

set.seed(1)
# 5 well-separated groups: the four corners and the center of the unit square
A <- c(rep(.2,100),rep(.2,100),rep(.5,100),rep(.8,100),rep(.8,100))
B <- c(rep(.2,100),rep(.8,100),rep(.5,100),rep(.2,100),rep(.8,100))
pts <- cbind(X=rnorm(500,A,.075),Y=rnorm(500,B,.075))

we usually get something better.

Colors are obtained from the clusters returned by the $k$-means function, while the additional lines are obtained from the outputs of the Voronoi diagram functions.



Published at DZone with permission of

Opinions expressed by DZone contributors are their own.