Machine Learning 1.0 Over Coffee
This article is aimed at anyone, technical or nontechnical, who wants to understand the steps in machine learning at a high level. Readable in five minutes over coffee. I think.
What Is Machine Learning?
Today, we live in a world of seemingly infinite connected devices, in both personal and commercial environments. The currency associated with these devices is data, which whizzes around in near real time and is stored both locally and in cloud environments. The types of data vary greatly, with text, audio, video, and numerical data just a sample of the modalities generated.
This data is a currency: there is value associated with it. But how do we extract that value? Data science is a high-growth field dedicated to extracting value and insight from data. Its recipe has numerous ingredients, with data mining, data optimization, statistics, and machine learning key to generating any successful flavor. And like any good recipe, you need a good chef. In data terms, these chefs are called data scientists, and they use a wide variety of tools to glean insight from the data and deliver impact for your business. The datasets themselves can be either univariate (a single variable, or feature) or multivariate (multiple variables, or features). A person's age would be an example of a univariate dataset, whereas a multivariate dataset would expand a person's feature set to include, for example, age, weight, and waist size.
Why Do It?
Machine learning (ML) is born out of the perspective that instead of telling computers how to perform every task, perhaps we can teach them to learn for themselves. Examples include predicting the sale price of your house from a set of features (square feet, number of bedrooms, area), determining whether an image shows a dog rather than a cat, and classifying the sentiment of a set of restaurant reviews as positive or negative. There are a host of applications like these across many industries.
Before the magic is induced from the algorithms, perhaps the most important step in any machine learning problem is the upfront data transformation and mining, working toward an optimized dataset. Optimization is required because most of the algorithms that "learn" are sensitive to what they receive as input, and the quality of that input can greatly impact the accuracy of the model you build. This step also ensures you have a thorough understanding of your dataset and the challenge you are trying to solve. Data transformation and mining techniques include record linkage, feature derivation, outlier detection, missing value management, and vector representation. Together, this work is sometimes called exploratory data analysis.
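Two of the techniques just listed, missing value management and outlier detection, can be sketched in a few lines of plain Python. This is only an illustrative approach (median fill plus a simple standard-deviation test), with made-up data; real projects tune these choices per dataset.

```python
import statistics

def clean_feature(values, z_threshold=3.0):
    """Fill missing values (None) with the median, then flag points
    more than z_threshold standard deviations from the mean."""
    observed = [v for v in values if v is not None]
    median = statistics.median(observed)
    filled = [v if v is not None else median for v in values]
    mean = statistics.mean(filled)
    stdev = statistics.stdev(filled)
    outliers = [v for v in filled if abs(v - mean) > z_threshold * stdev]
    return filled, outliers

# hypothetical ages with gaps and one implausible entry
ages = [34, 29, None, 31, 212, 27, 33, None, 30]
filled, outliers = clean_feature(ages, z_threshold=2.0)
print(filled)    # missing ages replaced by the median
print(outliers)  # the implausible age 212 stands out
```

Catching the 212 before training matters: a single wild value like this can drag a model's learned parameters far from anything sensible.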
Techniques Once Optimized
Once the data is presented in the right manner, there are a number of machine learning techniques you can apply. They are broken into supervised and unsupervised techniques: supervised learning takes a labeled input dataset to train your model on, while unsupervised learning is given no labels. Unsupervised techniques include learning vector quantization and clustering. Supervised techniques include nearest neighbors and decision trees. A third family is reinforcement learning, a type of algorithm that allows software agents and/or machines to automatically determine the ideal behavior within a specific context in order to maximize performance.
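Nearest neighbors, one of the supervised techniques mentioned above, is simple enough to sketch without any library. A new point is classified by majority vote among the k labeled training points closest to it; the pet measurements below are invented for illustration.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify query by majority vote among its k nearest
    training points, using Euclidean distance."""
    by_distance = sorted(train, key=lambda item: math.dist(item[0], query))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# hypothetical labeled data: (weight kg, height cm) -> species
train = [((4, 25), "cat"), ((5, 28), "cat"), ((3, 23), "cat"),
         ((20, 55), "dog"), ((25, 60), "dog"), ((18, 50), "dog")]

print(knn_predict(train, (4.5, 26)))  # near the cats
print(knn_predict(train, (22, 58)))   # near the dogs
```

Notice that there is no separate "training" step here: nearest neighbors simply memorizes the labeled examples, which is why the quality of those examples matters so much.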
Verifying your model is also an important step, and we often use confusion matrices to do it. This involves building a table of four results: true positives, true negatives, false positives, and false negatives. A set of test data is applied to the classifier and the results are analyzed to assess performance. Sometimes the result of a single model is still questionable. When this happens, machine learning has an answer in the form of ensemble methods, in which you build a series of models and derive your final prediction from them. Examples include bagging and boosting on the training data: bagging splits the training data into multiple input sets, while boosting works by building a series of increasingly complex models.
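The four cells of a confusion matrix are just tallies over (actual, predicted) pairs, so a minimal version fits in a short function. The labels below are toy data, and accuracy is shown as one simple summary you can derive from the table.

```python
def confusion_matrix(actual, predicted, positive="dog"):
    """Tally the four confusion-matrix cells for a binary
    classifier, treating `positive` as the positive class."""
    pairs = list(zip(actual, predicted))
    tp = sum(a == positive and p == positive for a, p in pairs)
    fp = sum(a != positive and p == positive for a, p in pairs)
    fn = sum(a == positive and p != positive for a, p in pairs)
    tn = sum(a != positive and p != positive for a, p in pairs)
    accuracy = (tp + tn) / len(pairs)
    return {"tp": tp, "fp": fp, "fn": fn, "tn": tn, "accuracy": accuracy}

actual    = ["dog", "dog", "cat", "cat", "dog", "cat"]
predicted = ["dog", "cat", "cat", "dog", "dog", "cat"]
print(confusion_matrix(actual, predicted))
```

Separating the four cells, rather than looking at accuracy alone, tells you *how* a model fails, and whether it tends to miss positives (false negatives) or raise false alarms (false positives).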
There are complementary techniques used in any successful machine learning project, including data management and visualization, and software languages such as Python and Java offer a variety of libraries you can use in your projects.
Taking a step further from machine learning, you arrive at a complementary area called artificial intelligence (AI), which leans more on methods such as neural networks and natural language processing that look to mimic the operation of the human brain. This shows how human-centric design in technology is evolving, and how much excitement there is about how humans and technology will work together in the future. It can be said this excitement comes from the realization that our evolving understanding of what it means to be human outweighs anything technology alone can deliver. People have always been at the core of innovation, and that is what has steadily improved our lives.
Published at DZone with permission of Denis Canty, DZone MVB. See the original article here.