# R for Actuarial Science


As mentioned in the appendix of **Modern Actuarial Risk Theory**, “*R (and S) is the ‘lingua franca’ of data analysis and statistical computing, used in academia, climate research, computer science, bioinformatics, pharmaceutical industry, customer analytics, data mining, finance and by some insurers. Apart from being stable, fast, always up-to-date and very versatile, the chief advantage of R is that it is available to everyone free of charge. It has extensive and powerful graphics abilities, and is developing rapidly, being the statistical tool of choice in many academic environments.*”

R is based on the S statistical programming language developed by John Chambers at Bell Labs in the 1980s. To be more specific, R is an open-source implementation of the S language, developed by Robert Gentleman and Ross Ihaka. It is a vector-based language, which makes it extremely interesting for actuarial computations. For instance, consider some life tables,

```r
> td[39:52,]
   age    lx
39  38 95237
40  39 94997
41  40 94746
42  41 94476
43  42 94182
44  43 93868
45  44 93515
46  45 93133
47  46 92727
48  47 92295
49  48 91833
50  49 91332
51  50 90778
52  51 90171
> tv[39:52,]
   age    lx
39  38 97753
40  39 97648
41  40 97534
42  41 97413
43  42 97282
44  43 97138
45  44 96981
46  45 96810
47  46 96622
48  47 96424
49  48 96218
50  49 95995
51  50 95752
52  51 95488
```

Those (French) life tables can be imported directly in R:

```r
> td <- read.table(
+   "http://perso.univ-rennes1.fr/arthur.charpentier/td8890.csv",
+   sep=";", header=TRUE)
> tv <- read.table(
+   "http://perso.univ-rennes1.fr/arthur.charpentier/tv8890.csv",
+   sep=";", header=TRUE)
```

From those vectors, it is possible to construct the matrices of survival and death probabilities,

```r
> lx <- td$lx
> m <- length(lx)
> p <- matrix(0, m, m); d <- p
> for(i in 1:(m-1)){
+   p[1:(m-i), i] <- lx[1+(i+1):m] / lx[i+1]
+   d[1:(m-i), i] <- (lx[(1+i):(m)] - lx[(1+i):(m)+1]) / lx[i+1]
+ }
> diag(d[(m-1):1,]) <- 0
> diag(p[(m-1):1,]) <- 0
> q <- 1 - p
```
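As a quick sanity check, the same kind of one-year probabilities can be computed on a small, made-up life table (the lx values below are purely illustrative, not taken from TD or TV):

```r
# a tiny, purely illustrative life table (not TD/TV)
lx <- c(100, 95, 85, 60, 0)        # survivors at ages 0, 1, 2, 3, 4
m  <- length(lx)

# one-year survival and death probabilities at ages 0..3
px <- lx[2:m] / lx[1:(m-1)]        # 0.95, 0.8947..., 0.7059..., 0
qx <- 1 - px

# at every age, surviving and dying the year are complementary events
stopifnot(all(abs(px + qx - 1) < 1e-12))
```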

One can easily compute, e.g., the (curtate) expectation of life, defined as

$$e_x = \sum_{k \geq 1} {}_{k}p_x,$$

and one can compute the vector of life expectancies at various ages,

```r
> life.exp <- function(x){ sum(p[1:nrow(p), x]) }
> e <- Vectorize(life.exp)(1:m)
```
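The definition is easy to check by hand on a toy life table (again, illustrative lx values): the curtate life expectancy at age x is just the sum of the k-year survival probabilities.

```r
# toy life table, illustrative values only
lx  <- c(100, 95, 85, 60, 0)
# k-year survival probabilities from age 0: kp0 = lx[k+1]/lx[1]
kp0 <- lx[-1] / lx[1]              # 0.95, 0.85, 0.60, 0.00
e0  <- sum(kp0)                    # curtate expectation of life at age 0
# 0.95 + 0.85 + 0.60 + 0.00 = 2.4 years
```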

And actually, any kind of actuarial quantity can be derived from those matrices. The expected present value (or actuarial value) of a temporary life annuity-due is, for instance,

$$\ddot{a}_{x:\overline{n}|} = \sum_{k=0}^{n-1} \frac{{}_{k}p_x}{(1+i)^{k}}.$$

The code to compute those values is the following:

```r
> # i is the (constant) interest rate; adots is a pre-allocated m x m matrix
> for(j in 1:(m-1)){
+   adots[,j] <- cumsum(1/(1+i)^(0:(m-1)) * c(1, p[1:(m-1), j]))
+ }
```
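A quick way to convince oneself that the cumsum() construction is right: with a zero interest rate and no mortality, the n-year annuity-due must be worth exactly n (toy numbers, not from the tables above):

```r
# toy check of the annuity-due construction: i = 0 and certain survival
i    <- 0
kp   <- rep(1, 9)                          # kpx = 1 for k = 1,...,9
adot <- cumsum(1/(1+i)^(0:9) * c(1, kp))   # adot[n] = EPV of n-year annuity-due
stopifnot(all(adot == 1:10))               # worth exactly n when i = 0
```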

Or consider the expected present value of a term insurance,

$$A^1_{x:\overline{n}|} = \sum_{k=0}^{n-1} \frac{{}_{k|}q_x}{(1+i)^{k+1}},$$

computed with the following code,

```r
> for(j in 1:(m-1)){
+   a[,j] <- cumsum(1/(1+i)^(1:m) * d[,j])
+ }
```
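Similarly for the term insurance: with a zero interest rate, the EPV reduces to the probability of dying within n years, which is easy to verify on made-up deferred death probabilities:

```r
# toy check: i = 0, made-up deferred death probabilities k|qx
i  <- 0
dk <- c(0.1, 0.2, 0.3, 0.4)              # k|qx for k = 0,...,3 (sums to 1)
A  <- cumsum(1/(1+i)^(1:4) * dk)         # A[n] = EPV of n-year term insurance
stopifnot(abs(A[4] - 1) < 1e-12)         # death within 4 years is certain here
```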

Some more details can be found in the first part of the notes of the crash courses given last summer in Meielisalp. Vectors, or matrices, are extremely convenient to work with when dealing with life contingencies. It is also possible to model prospective mortality. Here, mortality is not only a function of the age, but also of the calendar year,

```r
> t(dtf)[1:10,1:10]
   1899  1900  1901  1902  1903  1904  1905  1906  1907  1908
0 64039 61635 56421 53321 52573 54947 50720 53734 47255 46997
1 12119 11293 10293 10616 10251 10514  9340 10262 10104  9517
2  6983  6091  5853  5734  5673  5494  5028  5232  4477  4094
3  4329  3953  3748  3654  3382  3283  3294  3262  2912  2721
4  3220  3063  2936  2710  2500  2360  2381  2505  2213  2078
5  2284  2149  2172  2020  1932  1770  1788  1782  1789  1751
6  1834  1836  1761  1651  1664  1433  1448  1517  1428  1328
7  1475  1534  1493  1420  1353  1228  1259  1250  1204  1108
8  1353  1358  1255  1229  1251  1169  1132  1134  1083   961
9  1175  1225  1154  1008  1089   981  1027  1025   957   885
```

Thus, we now have a force of mortality matrix, with entries $\mu_{x,t}$ indexed by the age $x$ and the calendar year $t$.
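On toy numbers, a crude force of mortality is simply the element-wise ratio of a deaths matrix to a central-exposure matrix, both indexed by age and year (the figures below are made up):

```r
# made-up deaths and central exposures, 3 ages x 3 years
D  <- matrix(c(10, 8, 6, 9, 7, 5, 8, 6, 4), nrow=3, ncol=3)
E  <- matrix(1000, nrow=3, ncol=3)
mu <- D / E                        # crude force of mortality mu[x, t]
# element-wise division: mu[1,1] = 10/1000 = 0.01
stopifnot(mu[1,1] == 0.01)
```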

It is also possible to use R packages to estimate a Lee-Carter model of the mortality rate,

```r
> library(demography)
> muh   <- matrix(death$male/exposure$male, nl, nc)
> poph  <- matrix(exposure$male, nl, nc)
> baseh <- demogdata(data=muh, pop=poph, ages=age, years=year,
+   type="mortality", label="france", name="hommes", lambda=1)
> lch   <- lca(baseh)   # fit the Lee-Carter model (needed before extracting residuals)
> res   <- residuals(lch, "pearson")
```

One can easily study the residuals, for instance as a function of the age,

or as a function of the year,

Some more details can be found in the second part of the notes of the crash courses given last summer in Meielisalp.

R is also interesting because of its huge number of libraries that can be used for predictive modeling. One can easily use smoothing functions in regressions, or regression trees,

```r
> library(tree)      # for regression trees
> library(splines)   # for bs()
> tree <- tree((nbr>0) ~ ageconducteur, data=sinistres, split="gini", mincut=1)
> age  <- data.frame(ageconducteur = 18:90)
> y1   <- predict(tree, age)
> reg  <- glm((nbr>0) ~ bs(ageconducteur), data=sinistres, family="binomial")
> y    <- predict(reg, age, type="response")
```

Some practitioners might be scared off because, as the legend goes, R is not as good as SAS at handling large databases. Actually, a lot of functions can be used to import datasets. The most convenient one is probably read.table(),

```r
> basecout <- read.table("http://freakonometrics.free.fr/basecout.csv",
+   sep=";", header=TRUE, encoding="latin1")
> tail(basecout, 4)
     numeropol  debut_pol    fin_pol freq_paiement langue  type_prof alimentation type_territoire
6512     87291 2002-10-16 2003-01-22       mensuel      a professeur   vegetarien          urbain
6513     87301 2002-10-01 2003-09-30       mensuel      a technicien   vegetarien          urbain
6514     87417 2002-10-24 2003-10-21       mensuel      f technicien   vegetalien     semi-urbain
6515     88128 2003-01-17 2004-01-16       mensuel      f     avocat   vegetarien     semi-urbain
             utilisation presence_alarme marque_voiture sexe exposition age duree_permis age_vehicule i   coutsin
6512 travail-occasionnel             oui           ford    m  0.2684932  47           29           28 1 1274.5901
6513              loisir             oui          honda    m  0.9972603  44           24           25 1  278.0745
6514 travail-occasionnel             non     volkswagen    f  0.9917808  23            3           11 1  403.1242
6515              loisir             non           fiat    f  0.9972603  23            4           11 1  230.9565
```

But if the dataset is too large, it is also possible to specify which variables might be interesting, using colClasses,

```r
> mycols <- rep("NULL", 18)
> mycols[c(1,4,5,12,13,14,18)] <- NA
> basecoutsubc <- read.table("http://freakonometrics.free.fr/basecout.csv",
+   colClasses = mycols, sep=";", header=TRUE, encoding="latin1")
> head(basecoutsubc, 4)
  numeropol freq_paiement langue sexe exposition age  coutsin
1         6        annuel      a    m  0.9945205  42 279.5839
2        27       mensuel      f    m  0.2438356  51 814.1677
3        27       mensuel      f    m  1.0000000  53 136.8634
4        76       mensuel      f    f  1.0000000  42 608.7267
```
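The colClasses mechanism is easy to test on a small file written locally (hypothetical column names; "NULL" drops a column, NA lets R guess its type):

```r
# write a small csv, then re-import only the first and third columns
tmp <- tempfile(fileext = ".csv")
write.table(data.frame(a = 1:3, b = letters[1:3], v = c(10, 20, 30)),
            tmp, sep = ";", row.names = FALSE)
cols <- c(NA, "NULL", NA)          # "NULL" = skip the column, NA = guess the type
sub  <- read.table(tmp, colClasses = cols, sep = ";", header = TRUE)
stopifnot(ncol(sub) == 2, all(names(sub) == c("a", "v")))
```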

It is also possible (before running a code on the entire dataset) to import only the first few lines of the dataset.

```r
> basecoutsubcr <- read.table("http://freakonometrics.free.fr/basecout.csv",
+   colClasses = mycols, sep=";", header=TRUE, encoding="latin1", nrows=100)
> tail(basecoutsubcr, 4)
    numeropol freq_paiement langue sexe exposition age   coutsin
97       1193       mensuel      f    f  0.9972603  55  265.0621
98       1204       mensuel      f    f  0.9972603  38 9547.7267
99       1231       mensuel      f    m  1.0000000  40  442.7267
100      1245        annuel      f    f  0.6767123  48  179.1925
```

It is also possible to import a zipped file. The file itself is smaller, so it can usually be imported faster.

```r
> import.zip <- function(file){
+   temp <- tempfile()
+   download.file(file, temp)
+   read.table(unz(temp, "basefreq.csv"), sep=";", header=TRUE, encoding="latin1")
+ }
> system.time(import.zip("http://freakonometrics.free.fr/basefreq.csv.zip"))
trying URL 'http://freakonometrics.free.fr/basefreq.csv.zip'
Content type 'application/zip' length 692655 bytes (676 Kb)
opened URL
==================================================
downloaded 676 Kb
   user  system elapsed
  0.762   0.029   4.578
> system.time(read.table("http://freakonometrics.free.fr/basefreq.csv",
+   sep=";", header=TRUE, encoding="latin1"))
   user  system elapsed
  0.591   0.072   9.277
```

Finally, note that it is possible to import any kind of dataset, not only text files: even Microsoft Excel workbooks. On a Windows computer, one can use SQL queries via the RODBC package,

```r
> library(RODBC)
> sheet       <- "c:\\documents and settings\\user\\excelsheet.xls"
> connection  <- odbcConnectExcel(sheet)
> spreadsheet <- sqlTables(connection)
> query       <- paste("SELECT * FROM", spreadsheet$TABLE_NAME[1], sep=" ")
> result      <- sqlQuery(connection, query)
```

Then, once the dataset is imported, several functions can be used to manipulate it,

```r
> cost <- aggregate(coutsin ~ agesex, mean, data=basecout)
> frequency <- merge(aggregate(nbsin ~ agesex, sum, data=basefreq),
+   aggregate(exposition ~ agesex, sum, data=basefreq))
> frequency$freq <- frequency$nbsin / frequency$exposition
> base.freq.cost <- merge(frequency, cost)
```
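The aggregate/merge pattern can be illustrated on a small, made-up portfolio (hypothetical variable names mirroring those of the real datasets):

```r
# toy frequency and cost datasets, one rating factor 'agesex'
basefreq2 <- data.frame(agesex = c("a", "a", "b"),
                        nbsin = c(1, 0, 2),
                        exposition = c(0.5, 0.5, 1.0))
basecout2 <- data.frame(agesex = c("a", "b"), coutsin = c(100, 200))

frequency2 <- merge(aggregate(nbsin ~ agesex, data = basefreq2, FUN = sum),
                    aggregate(exposition ~ agesex, data = basefreq2, FUN = sum))
frequency2$freq <- frequency2$nbsin / frequency2$exposition
base2 <- merge(frequency2, basecout2)

# group "a": 1 claim over 1 unit of exposure, so annualized frequency 1
stopifnot(base2$freq[base2$agesex == "a"] == 1)
```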

Finally, R is interesting for its graphical capabilities. “*If you can picture it in your head, chances are good that you can make it work in R. R makes it easy to read data, generate lines and points, and place them where you want them. It's very flexible and super quick. When you've only got two or three hours until deadline, R can be brilliant,*” said Amanda Cox, a graphics editor at the New York Times. “*R is particularly valuable in deadline situations when data is scant and time is precious.*”

Several cases were considered on the blog http://chartsnthings.tumblr.com/… First, we start with a simple graph, here state government control in the US,

then try to find a nice visual representation, e.g.,

and finally, you can just print it in your favorite newspaper.

And you can get any kind of graphs,

and not only about politics.

Graphs are important. “*It's not just about producing graphics for publication. It's about playing around and making a bunch of graphics that help you explore your data. This kind of graphical analysis is a really useful way to help you understand what you're dealing with, because if you can't see it, you can't really understand it. But when you start graphing it out, you can really see what you've got,*” said Peter Aldhous, San Francisco bureau chief of New Scientist magazine. Even for actuaries: “*The commercial insurance underwriting process was rigorous but also quite subjective and based on intuition. R enables us to communicate our analytic results in appealing and innovative ways to non-technical audiences through rapid development lifecycles. R helps us show our clients how they can improve their processes and effectiveness by enabling our consultants to conduct analyses efficiently,*” as explained by John Lucker, a principal at Deloitte Consulting who leads a team of advanced analytics professionals, in http://blog.revolutionanalytics.com/r-is-hot/. See also Andrew Gelman's view on graphs, http://www.stat.columbia.edu/…

So yes, actuaries might be interested in using R for actuarial communication, as mentioned in http://www.londonr.org/…

The Actuarial Toolkit (see http://www.actuaries.org.uk/…) stresses the interest of R: “*The power of the language R lies with its functions for statistical modelling, data analysis and graphics; its ability to read and write data from various data sources; as well as the opportunity to embed R in Excel or other languages like VBA. In the way SAS is good for data manipulations, R is superior for modelling and graphical output.*”

Since 2011, Asia Capital Reinsurance Group (ACR) has used R to solve big-data challenges (see http://www.reuters.com/…). And Lloyd's uses motion charts created with R to provide analysis to investors (as discussed on http://blog.revolutionanalytics.com/…).

A lot of information can be found on http://jeffreybreen.wordpress.com/…

Markus Gesmann mentions on his blog a lot of interesting graphs used for actuarial reporting, http://lamages.blogspot.ca/…

Further, R is free, which can be compared with SAS, at $6,000 per PC, or $28,000 per processor on a server (as mentioned on http://en.wikipedia.org/…).

It is also becoming more and more popular as a programming language. As mentioned in this month's Transparent Language Popularity index (see http://lang-index.sourceforge.net/), R is ranked 12th, far behind C or Java, but before Matlab (22nd) or SAS (27th). On StackOverflow (see http://stackoverflow.com/), R is also far behind C++ (399,232 occurrences) or Java (348,418), but with 21,818 occurrences it appears before Matlab (14,580) and SAS (899). As mentioned on http://r4stats.com/articles/popularity/, R is becoming more and more popular, based on listserv discussion traffic.

It is clearly the most popular software in data analysis, as mentioned by the Rexer Analytics survey in 2009.

What about actuaries? In a survey (see http://palisade.com/…), R was not extremely popular.

And if we consider only statistical software, SAS is still far ahead among UK and CAS actuaries.

But, as mentioned by Mike King, quantitative analyst at Bank of America, “*I can't think of any programming language that has such an incredible community of users. If you have a question, you can get it answered quickly by leaders in the field. That means very little downtime.*” This was also mentioned by Glenn Meyers in the Actuarial Review: “*The most powerful reason for using R is the community*” (in http://nytimes.com/…). For instance, http://r-bloggers.com/ has contributions from more than 425 R users.

As said by Bo Cowgill, from Google: “*The best thing about R is that it was developed by statisticians. The worst thing about R is that it was developed by statisticians.*”

Published at DZone with permission of Arthur Charpentier, DZone MVB. See the original article here.
