The Developer’s Guide to Data Science
When developers talk about using data, they are usually concerned with ACID, scalability, and other operational aspects of managing data. But data science is not just about making fancy business intelligence reports for management. Data drives the user experience directly, not after the fact.
Large-scale analysis and adaptive features are being built into the fabric of many of today's applications. The world is already full of applications that learn what we like. Gmail sorts our priority inbox for us. Facebook decides what's important in our newsfeed on our behalf. E-commerce sites are full of recommendations, sometimes eerily accurate. We see automatic tagging and classification of natural language resources. Ad-targeting systems predict how likely you are to click on a given ad. The list goes on and on.
Many of the applications discussed above emerged from web giants like Google, Yahoo, and Facebook, and other successful startups. Yes, these places are filled to the brim with very smart people, working on the bleeding edge. But make no mistake, this trend will trickle down into "regular" application development too. In fact, it already has. When users interact with slick and intelligent apps every day, their expectations for business applications rise as well. For enterprise applications, it's not a matter of if, but when.
This is why many enterprise developers will need to familiarize themselves with data science. Granted, the term is incredibly hyped, but there's a lot of substance behind the hype. So we might as well give it a name and try to figure out what it means for us as developers.
From Developer to Data Scientist
How do we cope with these increased expectations? It's not just a software engineering problem. You can't just throw libraries at it and hope for the best. Yes, there are great machine learning libraries, like Apache Mahout (Java) and scikit-learn (Python). There are even programming languages squarely aimed at doing data science, such as the R language. But it's not just about that. There is a more fundamental level of understanding you need to attain before you can properly wield these tools.
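To see why the libraries alone are not enough, consider how little code scikit-learn (mentioned above) requires. This is a minimal sketch with a synthetic dataset; the choice of logistic regression here is illustrative, not a recommendation. Everything that matters, like whether the features are meaningful and whether the model fits the problem, happens outside these few lines:

```python
# Fitting a classifier with scikit-learn takes almost no code;
# the hard part is everything around it.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for real data.
X, y = make_classification(n_samples=200, n_features=5, random_state=42)

clf = LogisticRegression()
clf.fit(X, y)
print(clf.score(X, y))  # accuracy on the training data
```

The danger is that code like this always runs and always produces a number, whether or not the model makes sense for your data.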
This article will not be enough to gain the required level of understanding. It can, however, show you the landmarks along the road to data science. This diagram (adapted from Drew Conway's original) shows the lay of the land:
As software engineers, we can relate to hacking skills. It's our bread and butter. And that's good, because from that solid foundation you can branch out into the other fields and become more well-rounded.
Let's tackle domain expertise first. It may sound obvious, but if you want to create good models for your data, then you need to know what you're talking about. This is not strictly true for all approaches; deep learning and some other machine learning techniques might be viewed as exceptions. In general, though, more domain-specific knowledge is better. So start looking beyond the user stories in your backlog and talk to your domain experts about what really makes the clock tick. Beware, though: if you only know your domain and can churn out decent code, you're in the danger zone. This means you're at risk of re-inventing the wheel, misapplying techniques, and shooting yourself in the foot in a myriad of other ways.
Of course, the elephant in the room here is "math & statistics." The link between math and the implementation of features such as recommendation or classification is very strong. Even if you're not building a recommender algorithm from scratch (which hopefully you wouldn't have to), you need to know what goes on under the hood in order to select the right one and to tune it correctly. As the diagram points out, the combination of domain expertise and math and statistics knowledge is traditionally the expertise area of researchers and analysts within companies. But when you combine these skills with software engineering prowess, many new doors will open.
What can you do as a developer if you don't want to miss the bus? Before diving head-first into libraries and tools, there are several areas where you can focus your energy:
- Data management
- Statistics
- Math
We'll look at each of them in the remainder of this article. Think of these items as the major stops on the road to data science.
Data Management

Recommendation, classification, and prediction engines cannot be coded in a vacuum. You need data to drive the process of creating and tuning a good recommender engine for your application, in your specific context. It all starts with gathering relevant data, which might already be in your databases. If you don't already have the data, you might have to set up new ways of capturing relevant data. Then comes the act of combining and cleaning data. This is also known as data wrangling or munging. Different algorithms have different preconditions on input data. You'll have to develop a strong intuition for good data versus messy data.
Typically, this phase of a data science project is very experimental. You'll need tools that help you quickly process lots of heterogeneous data and iterate on different strategies. Real-world data is ugly and lacks structure. Dynamic scripting languages are often used to filter and organize data because they fit this challenge perfectly. A popular choice is Python with pandas, or the R language.
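A small wrangling sketch with pandas gives the flavor. The messy table below is invented for illustration, but the steps (coercing types, normalizing strings, dropping incomplete rows) are typical of this phase:

```python
# Typical munging steps on a small, hypothetical messy dataset.
import pandas as pd

raw = pd.DataFrame({
    "age":  ["34", "n/a", "29", "41"],       # ages arrived as strings
    "city": ["NYC ", "Boston", "boston", None],  # inconsistent case, whitespace, missing
})

clean = raw.copy()
clean["age"] = pd.to_numeric(clean["age"], errors="coerce")  # "n/a" becomes NaN
clean["city"] = clean["city"].str.strip().str.upper()        # normalize text
clean = clean.dropna()                                       # drop incomplete rows
print(clean)
```

Each of these one-liners silently discards or transforms data, which is exactly why the next paragraph urges you to keep track of them.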
It's important to keep a close eye on everything related to data munging. Just because it's not production code doesn't mean it's not important. There won't be any compiler errors or test failures when you silently omit or distort data, but it will influence the validity of all subsequent steps. Make sure you keep all your data management scripts, and keep both the raw and the processed data. That way you can always retrace your steps. Garbage in, garbage out applies as always.
Statistics

Once you have data in the appropriate format, the time has come to do something useful with it. Much of the time you'll be working with sample data to create models that handle yet unseen data. How can you infer valid information from this sample? How do you even know your data is representative? This is where we enter the domain of statistics, a vitally important part of data science. I've heard it said: "A data scientist is a person who is better at statistics than any software engineer and better at software engineering than any statistician."
What should you know? Start by mastering the basics. Understand probabilities and probability distributions. When is a sample large enough to be representative? Know about common assumptions such as independence of probabilities, or that values are expected to follow a normal distribution. Many statistical procedures only make sense in the context of these assumptions. How do you test the significance of your findings? How do you select promising features from your data as input for algorithms? Any introductory material on statistics can teach you this. After that, move on to Bayesian statistics. It will pop up more and more in the context of machine learning.
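One of those basics, why sample size matters, can be demonstrated with a few lines of standard-library Python. This simulation (numbers chosen arbitrarily) shows that the spread of a sample mean shrinks roughly with the square root of the sample size:

```python
# The standard error of the mean shrinks as 1/sqrt(n):
# estimates from larger samples scatter far less around the true mean.
import random
import statistics

random.seed(0)

def mean_of_sample(n):
    # Draw n values from a normal distribution with mean 100, sd 15.
    return statistics.fmean(random.gauss(100, 15) for _ in range(n))

small = [mean_of_sample(10) for _ in range(1000)]    # 1000 estimates, n=10 each
large = [mean_of_sample(1000) for _ in range(1000)]  # 1000 estimates, n=1000 each

print(statistics.stdev(small))  # spread with n=10
print(statistics.stdev(large))  # roughly 10x smaller with n=1000
```

This is the kind of intuition you want before trusting any number computed from a sample.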
It's not just theory. Did you notice how we conveniently glossed over the "science" part of data science up till now? Doing data science is essentially setting up experiments with data. Fortunately, the world of statistics knows a thing or two about experimental setup. You'll learn that you should always divide your data into a training set (to build your model) and a test set (to validate your model). Otherwise, your model won't work for real-world data: you'll end up with an overfitted model. Even then, you're still susceptible to pitfalls like multiple testing. There's a lot to take into account.
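The train/test discipline described above is mechanically simple; here is a plain-Python sketch of an 80/20 holdout split (scikit-learn's `train_test_split` does the same job with more options):

```python
# Hold out part of the data: fit on the training set only,
# and judge the model on the test set it never saw.
import random

random.seed(42)
data = list(range(100))   # stand-in for your labeled samples

random.shuffle(data)      # shuffle first, so the split isn't ordered
split = int(len(data) * 0.8)
train, test = data[:split], data[split:]

print(len(train), len(test))  # 80 and 20
```

The crucial rule is that the test set must play no role whatsoever in building or tuning the model; otherwise its error estimate is meaningless.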
Math

Statistics tells you about the when and why, but for the how, math is unavoidable. Many popular algorithms such as linear regression, neural networks, and various recommendation algorithms all boil down to math. Linear algebra, to be more precise. So brushing up on vector and matrix manipulations is a must. Again, many libraries abstract over the details for you, but it is essential to know what is going on behind the scenes in order to know which knobs to turn. When results are different than you expected, you need to know how to debug the algorithm.
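As a concrete illustration of "it all boils down to linear algebra," ordinary least-squares regression can be solved directly with the normal equation, w = (XᵀX)⁻¹Xᵀy, in a few lines of numpy. The data below is synthetic, generated from y = 2 + 3x plus a little noise:

```python
# Linear regression as pure linear algebra: solve the normal equation.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
X = np.column_stack([np.ones(50), x])      # design matrix: bias column + feature
y = 2.0 + 3.0 * x + rng.normal(0, 0.1, 50)  # true intercept 2, slope 3, small noise

# Solve (X^T X) w = X^T y instead of inverting explicitly.
w = np.linalg.solve(X.T @ X, X.T @ y)
print(w)  # close to [2, 3]
```

Understanding this derivation is what lets you recognize, say, why a nearly collinear design matrix makes the solution unstable.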
It's also very instructive to try and code at least one algorithm from scratch. Take linear regression, for example, implemented with gradient descent. You will experience the intimate connection between optimization, derivatives, and linear algebra when researching and implementing it. Andrew Ng's Machine Learning class on Coursera takes you through this journey in a surprisingly accessible way.
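In that spirit, here is one possible from-scratch version of linear regression with gradient descent, in pure Python. The learning rate, epoch count, and toy data are arbitrary choices for the sketch:

```python
# Linear regression fitted by gradient descent on mean squared error.
def fit(xs, ys, lr=0.01, epochs=2000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Partial derivatives of MSE with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w   # step downhill along each gradient
        b -= lr * grad_b
    return w, b

xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]        # noise-free data from y = 2x + 1
w, b = fit(xs, ys)
print(round(w, 2), round(b, 2))  # converges to roughly 2.0 and 1.0
```

Deriving those two gradient expressions yourself is exactly the optimization/derivatives/linear-algebra connection the paragraph above describes.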
But Wait, There's More...
Besides the fundamentals discussed so far, getting good at data science involves many other skills, such as clearly communicating the results of data-driven experiments, or scaling whatever algorithm or data munging method you selected across a cluster for large datasets. Also, many algorithms in data science are "batch-oriented," requiring expensive recalculations. Translating them into online versions is often necessary. Fortunately, many (open source) products and libraries can help with the last two challenges.
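The batch-versus-online distinction is easy to see on the simplest possible statistic. A batch mean recomputes over all the data; an online mean folds each new observation into the running estimate. This toy class is illustrative only:

```python
# Online version of a batch computation: the mean is updated
# per observation instead of being recomputed from scratch.
class RunningMean:
    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n  # incremental update
        return self.mean

rm = RunningMean()
for value in [4, 8, 6, 2]:
    rm.update(value)
print(rm.mean)  # 5.0, identical to sum([4, 8, 6, 2]) / 4
```

Real algorithms (recommenders, classifiers) need far more intricate reworking to become online, but the shape of the problem is the same: update cheaply per observation instead of recomputing over everything.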
Data science is a fascinating combination of real-world software engineering, math, and statistics. This explains why the field is currently dominated by PhDs. On the flip side, we live in an age where education has never been more accessible, be it through MOOCs, websites, or books. If you want a hands-on book to get started, read Machine Learning for Hackers, then move on to a more rigorous book like The Elements of Statistical Learning. There are no shortcuts on the road to data science. Broadening your view from software engineering to data science will be hard, but certainly rewarding.