
Splitting a Node in a Tree


If we grow a tree with the standard functions in R, on the same dataset used to introduce classification trees in a previous post,

> MYOCARDE=read.table(
+ "http://freakonometrics.free.fr/saporta.csv",
+ header=TRUE,sep=";")
> library(rpart)
> cart<-rpart(PRONO~.,data=MYOCARDE)

we get

> library(rpart.plot)
> library(rattle)
> prp(cart,type=2,extra=1)
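Since the tree plot itself may not render here, note that the same splits can also be listed in text form with rpart's standard print method:

> print(cart)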

The first step is to split the root node (based on the whole dataset). To split it, we can use either the Gini index

$$\text{gini}(Y|X)=-\sum_{x\in\{A,B\}}\frac{n_x}{n}\sum_{y\in\{0,1\}}\frac{n_{x,y}}{n_x}\left(1-\frac{n_{x,y}}{n_x}\right)$$

> gini=function(y,classe){
+ T=table(y,classe)           # contingency table of outcome vs side of the split
+ nx=apply(T,2,sum)           # number of observations on each side, n_x
+ n=sum(T)                    # total number of observations
+ pxy=T/matrix(rep(nx,each=2),nrow=2)   # within-side frequencies n_{x,y}/n_x
+ omega=matrix(rep(nx,each=2),nrow=2)/n # side weights n_x/n
+ g=-sum(omega*pxy*(1-pxy))   # (negative) weighted Gini impurity
+ return(g)}
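As a quick sanity check on toy data: a perfect split attains the maximal value 0, while an uninformative 50/50 split gives -0.5, so better splits correspond to larger values of the criterion,

> gini(y=c(0,0,1,1),classe=c(TRUE,TRUE,FALSE,FALSE))
[1] 0
> gini(y=c(0,0,1,1),classe=c(TRUE,FALSE,TRUE,FALSE))
[1] -0.5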

or the entropy

$$\text{entropy}(Y|X)=-\sum_{x\in\{A,B\}}\frac{n_x}{n}\sum_{y\in\{0,1\}}\frac{n_{x,y}}{n_x}\log\left(\frac{n_{x,y}}{n_x}\right)$$

> entropie=function(y,classe){
+   T=table(y,classe)           # contingency table of outcome vs side of the split
+   nx=apply(T,2,sum)           # number of observations on each side, n_x
+   n=sum(T)                    # total number of observations
+   pxy=T/matrix(rep(nx,each=2),nrow=2)   # within-side frequencies n_{x,y}/n_x
+   omega=matrix(rep(nx,each=2),nrow=2)/n # side weights n_x/n
+   g=sum(omega*pxy*log(pxy),na.rm=TRUE)  # na.rm applies the 0*log(0)=0 convention
+   return(g)}

Note that this function returns the negative of the entropy above, so that, as with the Gini function, the best split is the one that maximizes the criterion.
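The same toy sanity check applies here: the perfect split gives 0 (thanks to the 0·log(0)=0 convention above) and the uninformative 50/50 split gives log(1/2),

> entropie(y=c(0,0,1,1),classe=c(TRUE,TRUE,FALSE,FALSE))
[1] 0
> entropie(y=c(0,0,1,1),classe=c(TRUE,FALSE,TRUE,FALSE))
[1] -0.6931472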

For instance, if we choose to split according to the first variable, with threshold 2.5, the Gini index would be

> CLASSE=MYOCARDE[,1]<=2.5
> gini(y=MYOCARDE$PRONO,classe=CLASSE)
[1] -0.4832375

To get the “optimal” split, we consider all variables and all thresholds (subject to some constraint, e.g. at least 5 observations per node)

> mat_gini=mat_v=matrix(NA,7,101)
> for(v in 1:7){
+   variable=MYOCARDE[,v]
+   v_seuil=seq(quantile(MYOCARDE[,v],6/length(MYOCARDE[,v])),
+               quantile(MYOCARDE[,v],1-6/length(MYOCARDE[,v])),
+               length=101)
+   mat_v[v,]=v_seuil
+   for(i in 1:101){
+     CLASSE=variable<=v_seuil[i]
+     mat_gini[v,i]=gini(y=MYOCARDE$PRONO,classe=CLASSE)}}
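Although the plots below make the answer visible, we can also read the optimal split directly off the grid (a small extra step, not in the code above); it should point to INSYS, consistent with the tree obtained with rpart,

> best=which(mat_gini==max(mat_gini),arr.ind=TRUE)
> names(MYOCARDE)[best[1,"row"]]      # splitting variable
> mat_v[best[1,"row"],best[1,"col"]]  # threshold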

> par(mfrow=c(2,3))
> for(v in 2:7){
+   plot(mat_v[v,],mat_gini[v,],type="l",
+   ylim=range(mat_gini),
+   main=names(MYOCARDE)[v]) 
+   abline(h=max(mat_gini),col="blue")
+ }

It looks like we should split according to the second variable (INSYS), as seen on the tree graph above. Of course, we could use the entropy instead: here, both criteria select the same split.

Now that we know how to split the first node, we keep that split.

> idx=which(MYOCARDE$INSYS>=19)

Let us move to the node on the right, where the second variable (INSYS) is above the threshold (here, 19). The idea is to run the same code as before, but on that subset,

> mat_gini=mat_v=matrix(NA,7,101)
> for(v in 1:7){
+   variable=MYOCARDE[idx,v]
+   v_seuil=seq(quantile(MYOCARDE[idx,v],6/length(MYOCARDE[idx,v])),
+               quantile(MYOCARDE[idx,v],1-6/length(MYOCARDE[idx,v])),
+               length=101)
+   mat_v[v,]=v_seuil
+   for(i in 1:101){
+     CLASSE=variable<=v_seuil[i]
+     mat_gini[v,i]=gini(y=MYOCARDE$PRONO[idx],classe=CLASSE)}}

> par(mfrow=c(2,3))
> for(v in 2:7){
+   plot(mat_v[v,],mat_gini[v,],type="l",
+        ylim=range(mat_gini),
+        main=names(MYOCARDE)[v])
+   abline(h=max(mat_gini),col="blue")
+ }

Here we see that we should split according to the last variable (REPUL), exactly as on the tree graph.
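The two grid searches above differ only in the subset of rows used, so they can be wrapped in a single helper. The following is a minimal sketch of such a function (the name best_split and its arguments are illustrative, not from the code above), using the same quantile constraint and a grid of the same size,

> best_split=function(data,y_name="PRONO",n_grid=101){
+   y=data[,y_name]
+   best=list(score=-Inf)
+   for(v in setdiff(names(data),y_name)){
+     x=data[,v]
+     seuils=seq(quantile(x,6/length(x)),
+                quantile(x,1-6/length(x)),length=n_grid)
+     for(s in seuils){
+       g=gini(y=y,classe=(x<=s))
+       if(!is.na(g)&&g>best$score)
+         best=list(variable=v,seuil=s,score=g)}}
+   best}
> best_split(MYOCARDE)        # root node: should select INSYS
> best_split(MYOCARDE[idx,])  # right-hand node: should select REPUL

Applying such a function recursively to each child node is, in essence, how the full tree is grown.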
