# Density-Based Clustering and Identifying Arbitrarily Shaped Distributions Using R

### We take a look at how R can help us analyze, make sense of, and visualize data using density-based clustering algorithms.


I was startled to see a group of three blind people trying to navigate a busy corridor. The first one led the way, perhaps owing to his familiarity with the terrain, while the other two held on to the shoulder of the one before them, forming a trio that patiently and painstakingly made their way. This coordination and commitment helped them safely negotiate the sudden twists and turns and the wide and narrow passages filled with people moving in every direction.

The three-nearest-neighbor concept in statistics is intuitively similar. If we have a data set containing arbitrary shapes or patterns that are yet to be discovered, the task can be envisioned much like the navigation problem above.

If I do not know the caverns, passages, twists, and turns that the pathways offer, I cannot rely on a predefined, cookie-cutter approach in which I must know in advance how many clusters exist and then assign observations to them.

I must instead compute in real time and adapt on the fly, as the shapes and passages change during my navigational journey. Finding the nearest neighbor is like extending a hand into the darkness so that some sense can be gained through proximity and touch.

You can derive the same intuition from rock climbing: the rocks must be felt by hand or foot in order to assess their position, strength, and suitability as a foothold.

So, density-based clustering is suitable when the distributions are arbitrary and must be traced on the fly in order to recognize and map them. This is where algorithms like DBSCAN excel and simple K-means clustering fails: if all you have is a cutter that cuts out a predefined, fixed shape, that is what you get, and you learn nothing more than what you already know.

Now, let's look at the core concepts in this algorithm.

You have:

(i) a set of core observations, around which (ii) non-core yet directly reachable (traversable, as in graph theory) observations arrange themselves; (iii) observations that belong to neither (i) nor (ii) are considered noise or irrelevant.

To distinguish between core and non-core observations, we need a `minPts` criterion: the minimum number of points or observations that must exist within a "critical," or epsilon (`eps`), distance of a core point.
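To make the core/border/noise distinction concrete, here is a minimal base-R sketch of the classification rule alone (an illustrative toy, not the dbscan package's implementation; the function name and data are made up for the example):

```r
# Classify points as core, border, or noise using the two DBSCAN
# parameters described above: eps (the "critical" radius) and minPts.
classify_points <- function(x, eps, minPts) {
  d <- as.matrix(dist(x))              # pairwise Euclidean distances
  # count neighbors within eps, including the point itself
  n_within <- rowSums(d <= eps)
  core <- n_within >= minPts
  # border: not core, but within eps of at least one core point
  border <- !core & apply(d, 1, function(row) any(row <= eps & core))
  ifelse(core, "core", ifelse(border, "border", "noise"))
}

# a tight run of 5 points plus one far-away outlier
x <- rbind(matrix(0.1 * 1:10, ncol = 2), c(100, 100))
classify_points(x, eps = 1, minPts = 3)
```

The outlier has too few neighbors within `eps` and no core point nearby, so it falls out as noise.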

In order to find the nearest neighbors, we must define how we are going to measure the proximity.

R's dbscan package provides two main ways to search for the nearest neighbors: a linear scan, which simply compares Euclidean distances against every other point, and the KD-tree approach.

The latter is similar in concept to decision trees: we iteratively partition the space around a given observation in order to reach it. The space itself is specified through *n* dimensions, and we must therefore choose a sequence among those dimensions for partitioning.
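For intuition, the linear (brute-force) search can be written in a few lines of base R; this is a hedged sketch of the idea only, since a KD-tree reaches the same neighbors faster by recursively partitioning the space rather than scanning every point (the function name and toy data here are invented for illustration):

```r
# Brute-force k-nearest-neighbor search with Euclidean distance.
knn_linear <- function(x, k) {
  d <- as.matrix(dist(x))          # all pairwise Euclidean distances
  diag(d) <- Inf                   # a point is not its own neighbor
  # for each point, the indices of its k nearest neighbors
  t(apply(d, 1, function(row) order(row)[1:k]))
}

# four points on a line: 0, 1, 2, and a distant 10
pts <- cbind(c(0, 1, 2, 10), c(0, 0, 0, 0))
knn_linear(pts, k = 2)
```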

We will be working with a predefined data set named **DS3**, which ships with the dbscan package.

```
## install and load the dbscan package, which provides the DS3 data set
# install.packages("dbscan")
library(dbscan)
data("DS3")
```

Once you have the data set in the environment, you can compute the 3-NN, a.k.a. three nearest neighbors, for each observation. Effectively, `minPts` for core versus non-core classification becomes equal to 3.

```
## find the k nearest neighbors (k = 3) of each observation
nn3 <- kNN(DS3, k = 3, search = "kdtree")
## visualize the nearest neighbors (3 in this case)
plot(nn3, DS3)
```

We compute the neighbors using the KD-tree search method, and the result can be visualized in the plot below.

You can easily see how the arbitrary shapes are almost magically discovered through the principle of nearest-neighbor search.

The magic happens because this methodical approach of meeting and greeting the neighbors discovers more and more of them (so the visualization becomes denser and denser) along the body of a shape, and fewer and fewer (sparser and sparser) as the traversal approaches the shape's contours. The sparseness around the dense shapes provides the much-needed contrast to discover hidden shapes.

Compare this with the cookie cutter approach of K-Means clustering.

Now, since I do not know anything about the denseness or the sparseness, I must start with an arbitrary number of clusters, and, depending on that number, my partitioning and corresponding assignments will change.

Here is how distribution looks with, say, 25 clusters.

And here is how the partitions and assignments look with just 5 predefined clusters.

The knowledge that can be gained from K-means clustering is limited to the size (and dimensions) of the cut, rather than the actual shape of the distribution.
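The contrast can be sketched in a few lines of base R. Since the DS3 plots are not reproduced here, the built-in `faithful` data set stands in as a generic 2-D example (an assumption for illustration); the cluster counts 5 and 25 mirror the ones discussed above:

```r
# K-means must be told the number of clusters up front, and the
# resulting partition changes entirely with that choice.
set.seed(7)
km5  <- kmeans(faithful, centers = 5,  iter.max = 50)
km25 <- kmeans(faithful, centers = 25, iter.max = 50)

# same data, two very different cookie cuts
table(km5$cluster)
table(km25$cluster)

# visualize either partition, e.g.:
# plot(faithful, col = km5$cluster, pch = 20)
```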

We can complete the rest of the visualization leveraging the distance plot in R.

`kNNdistplot(DS3, k = 3)`

This allows us to estimate the epsilon distance, the "critical" radius around each core point within which the minimum number of nearest neighbors must exist (in this case `minPts = 3`, i.e. the three nearest neighbors).

Observations that do not meet the core criterion are classified as non-core; these can be further split into fringe (border) points, which sit on the fence yet remain reachable from a core point, and "noise," which belongs to neither the core nor the fringe classification.

`abline(h = 6, col = "red", lty = 2)`

Choosing where to draw this line (the "knee" of the curve) may need a bit of trial and error.
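If you prefer a numeric starting point over eyeballing the plot, the same k-NN distances can be computed by hand in base R; this is a rough sketch mimicking what `kNNdistplot` draws (the toy data and the percentile cut-off are assumptions, not a rule):

```r
# Compute each point's distance to its 3rd-nearest neighbor, sort the
# distances, and use a high percentile as a first guess for eps; the
# sorted curve is what kNNdistplot plots, and its "knee" is the target.
set.seed(42)
pts <- matrix(rnorm(200), ncol = 2)      # 100 toy 2-D points
d <- as.matrix(dist(pts))                # pairwise Euclidean distances
diag(d) <- Inf                           # exclude self-distance
knn3 <- apply(d, 1, function(row) sort(row)[3])  # 3rd-NN distance each
sorted <- sort(knn3)
eps_guess <- unname(quantile(sorted, 0.9))  # rough candidate for eps
```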

```
## recognize shapes through density-based clustering
res <- dbscan(DS3, eps = 6, minPts = 3)
plot(DS3, col = res$cluster, pch = 24)
```

That's it.

Another example of the simple elegance and efficiency with which R can perform complex data analysis and visualization.
