Agile Machine Learning: From Theory to Production
Here are a few things that you should consider if you are planning your machine learning roadmap in an agile development environment.
Later this year, Sumanas and I will be co-presenting a talk about researching Machine Learning in an agile development environment at the JAXLondon conference. This is a high-level overview of some of the topics we will be presenting (we will also try to get some cool ML demos in there too, just to make things a bit more interesting).
So What’s the Problem?
Artificial intelligence (AI) and machine learning (ML) are all the rage right now. At Google’s recent I/O 2017 event, they really put ML front and center with plans to bake it into all their products, and lots of other large companies are aligning themselves as machine learning companies as a natural progression from big data.
According to a recent Narrative Science survey, 38% of enterprises surveyed were already using AI, with 62% expecting to be using it by 2018. So it is understandable that many companies feel pressure to invest in an AI strategy before fully understanding what they aim to achieve, let alone how it might fit into a traditional engineering delivery team.
We have spent the last 12 months taking a new product to market, going from a simple idea to a production ML system. Along the way, we have had to integrate open-ended academic research tasks with our existing agile development process and project planning, and to work out how to deliver the ML system to production in a repeatable, robust way, with all the considerations expected of a normal software project.
Here are a few things you might consider if you are planning your ML roadmap (and topics we will cover in more detail in the JAXLondon session in October).
Machine Learning != Your Product
Machine learning is a powerful tool to enhance a product, whether by reducing the cost of human curation or by giving your voice/natural-language interface a more powerful understanding. However, machine learning shouldn't be considered the selling point of the product. Think of the end product first: is there a market for it regardless of whether it is powered by ML or by human oversight? Consider whether it makes sense to build a fully non-ML version of the product first, to prove market fit and start delivering value to customers.
Start Small and Build
The Lean Startup principles of MVP and fast iterations still apply here. If you can apply ML techniques to a non-ML product and get even a small increase in performance (better recommendations, reduced human effort and cost, improved user experience; replacing a human process with ML for just 5% of cases can start to realize cost benefits), then you start adding value straight away. By starting small, you prove the value ML can add whilst also getting the ML infrastructure tested and proven.
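One common way to realize the "just 5% of cases" idea is confidence-based routing: automate only the predictions the model is sure about, and leave everything else to the existing human process. The sketch below is illustrative only; the function names and the 0.95 threshold are assumptions, not something from the article.

```python
# A minimal sketch (names and threshold are illustrative): automate only
# high-confidence predictions, and fall back to the existing human
# process for everything else.

def route(prediction, confidence, threshold=0.95):
    """Return ('ml', label) for high-confidence cases, else ('human', None)."""
    if confidence >= threshold:
        return ("ml", prediction)       # model handles the easy cases
    return ("human", None)              # defer to the existing human process

# Even automating a modest slice of traffic reduces manual effort:
items = [("spam", 0.99), ("ham", 0.60), ("spam", 0.97), ("ham", 0.40)]
routed = [route(p, c) for p, c in items]
automated = sum(1 for source, _ in routed if source == "ml")
```

As model quality improves, lowering the threshold gradually widens the automated slice without ever risking the low-confidence cases.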
Tie Into Development Sprint Cycles
You may be hiring a new R&D team, or you may be using members of your existing engineering team. Either way, it helps to have them working in similar development sprint cycles (if you work in sprints), so that both teams understand what is happening and how work is progressing. Product and engineering changes and issues can inform the direction of R&D, and likewise, there may be data features or feedback from the R&D team that could be easily engineered and would make research simpler. Whilst research is ongoing, and is often a time-consuming task, having fortnightly (or whatever the sprint length is) checkpoints where ideas can be discussed and demoed is good for the whole team's understanding, as well as being a positive motivator.
Don’t Forget Clean Code!
Whilst experimenting and researching different ideas, it can be pretty easy to fall into hacking mode, rattling out rough scripts to prove an initial concept or idea. There is definitely a place for this, but as your team progresses, it pays to invest in good coding principles. One-off scripts can reasonably be hacked out, but as the team works on several ideas, having code that is re-usable and organized sensibly with proper separation of concerns makes future research easier and reduces the cost when it comes to production. Investing from the start in machinery that makes experiments easily testable (and that benchmarks different solutions against each other) will be well worth it.
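The "machinery" above can be as simple as a shared fit/predict contract plus a harness that scores every candidate the same way. The sketch below is a hypothetical illustration (class and function names are not from the article): any new idea that implements the contract is immediately testable and comparable against a baseline, instead of living in its own one-off script.

```python
# A minimal sketch of shared experiment machinery (names are illustrative):
# every candidate model implements the same fit/predict contract, so one
# harness can train, test, and benchmark each idea identically.

def accuracy(model, data):
    """Fraction of labelled (x, y) examples the model predicts correctly."""
    return sum(model.predict(x) == y for x, y in data) / len(data)

def benchmark(candidates, train, test):
    """Train every candidate on the same data and return comparable scores."""
    scores = {}
    for name, model in candidates.items():
        model.fit(train)
        scores[name] = accuracy(model, test)
    return scores

class MajorityBaseline:
    """Always predicts the most common training label; a sanity baseline."""
    def fit(self, data):
        labels = [y for _, y in data]
        self.label = max(set(labels), key=labels.count)
    def predict(self, x):
        return self.label

train = [(1, "a"), (2, "a"), (3, "b")]
test = [(4, "a"), (5, "b")]
scores = benchmark({"majority": MajorityBaseline()}, train, test)
```

A trivial baseline like this also doubles as a regression test: any new candidate that cannot beat the majority class is flagged before it gets anywhere near production.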
While the interwebs are awash with machine learning articles, tutorials, and clickbait links "guaranteed to reduce the error of your model in three quick steps," the following is a small list of resources that we think are worth browsing.
- Math-lite blog covering core ML and deep learning concepts run by a research scientist at Google Brain
- Deep Visualisation Toolbox, a 4-minute video showing how a deep net teaches itself "features" about the dataset.
- Play with a neural network right in your browser. This is a good resource to get a feel for how simple networks learn using a point-and-click interface.
- Course on ML taught at UBC by Nando de Freitas.
For the academically inclined, the following is a list of papers both recent and not so recent:
- AlexNet Paper: The deep convolutional net that demonstrated state-of-the-art performance on ImageNet classification.
- Dropout Layers: A simple way to prevent an NN from overfitting.
- Adversarial Training Paper: Intentionally inducing worst-case perturbations.
- Deep Residual Networks: Deep residual learning for image recognition.
- Data Augmentation Paper: Unsupervised feature learning by data augmentation.
- DenseNets Paper: Densely connected NN layers.
Published at DZone with permission of Rob Hinds, DZone MVB. See the original article here.