
Getting the Most Out of Your Training Data With Support Vector Machines


Learn how to create a machine learning model that can automatically classify your data and how to efficiently use your training data.


In many cases, the acquisition of well-labeled training data is a huge hurdle for developing accurate prediction systems with supervised learning. At Love the Sales, we aggregate sales products from over 700 multinational retailers, which results in over two million products a day that need classification. It could take a traditional merchandising team four years to complete this task manually. Our requirement was to classify the textual metadata of these two million products (mostly fashion and homeware) into 1,000+ different categories arranged in a hierarchy.

Support Vector Machines

For our classification task, we decided to use support vector machines (SVMs). SVMs are a class of supervised machine learning algorithms that are proficient in the classification of linearly separable data. Essentially, given a large enough set of labeled training data, an SVM attempts to learn the best discriminative plane between the examples — to try to draw a multidimensional line in the sand. Find more on SVMs here and here.

For example, here are some possible ways to separate this dataset:

https://lh4.googleusercontent.com/5ahti0PvGJSa8M3bxWgt0DA-HzD1URzBtRHWatqdG5saKLZYk5v0fIuDLoOqCiISeS1nqWDi_7FdilyX3okTubTikH7a1n4POpYplPpAhMdHXusIf-VzgwIP824j5xG3wILcco2Z

The SVM will attempt to learn the optimal hyperplane:

https://lh6.googleusercontent.com/tr5KYXffZRTHuCQBw4aKzOHR1gnSc5Md9Umubm5xFM5egE6qck4-35DAyQctNp17guFo7hm06UiAvYyPopCoExjpVKkjcJOAkBrbkSgUlG3w3k2hybkdzLDbzMFHZls6rBWoV2s2

Images from opencv.org

While there are many machine learning algorithms for classification (e.g. neural networks, random forests, naive Bayes), SVMs really shine for data with many features — in our case, document classification, wherein each "word" is treated as a discrete feature.

While SVMs can actually classify into multiple classes, we opted to chain together many simple two-class SVMs in a hierarchical fashion.

The main reason for this is that when we tried it, it seemed to yield better results, and, importantly, it used a lot less memory on our machine learning platform because each SVM only had to know about two classes of data. High memory utilization for large datasets (300k+ examples) and large input vectors (1m different known words) was a definite obstacle for us.

Some simple well-known techniques for pre-processing our documents also helped a great deal in reducing the feature space, such as lowercasing, stemming, stripping odd characters, and removing "noisy" words and numbers.

Stemming is a common language-specific technique useful when dealing with large corpora of textual data, with the goal of taking different words with a similar meaning and root and "crunching" them down to similar tokens. For example, the words "clothing" and "clothes" have a very similar meaning; when each is stemmed through the common Porter stemming algorithm, the result is "cloth." In doing this, we halved the number of words we have to worry about. Using stemming in conjunction with noisy word removal (the removal of common words without any domain meaning, e.g. the, is, and, with, etc.), we can hope to see a large reduction in the number of words we have to deal with.
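As an illustration, here is a minimal pre-processing sketch in Python, assuming NLTK's implementation of the Porter stemmer is available; the stop-word list is a toy one chosen for this example, and Porter's output tokens differ slightly from the hand-worked examples later in this article.

```python
import re
from nltk.stem import PorterStemmer  # assumes NLTK is installed

# Toy stop-word list for illustration; a real pipeline would use a much fuller one.
STOP_WORDS = {"the", "is", "and", "with", "a", "of", "in", "this", "look", "ll", "s", "you"}

stemmer = PorterStemmer()

def preprocess(text):
    # Lowercase and strip odd characters and numbers, keeping only letters.
    text = re.sub(r"[^a-z]+", " ", text.lower())
    # Drop noisy words, then stem what remains.
    return [stemmer.stem(token) for token in text.split() if token not in STOP_WORDS]

print(preprocess("Men, you'll look fantastic in this great pair of men's skinny jeans."))
# Porter output, e.g.: ['men', 'fantast', 'great', 'pair', 'men', 'skinni', 'jean']
```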

Creating SVMs

Once you have pre-processed your textual data, the next step is to train your model. In order to do this, you must first transform your textual data into a format the SVM can understand. This is known as vectorization. Take the sentence:

"Men, you’ll look fantastic in this great pair of men's skinny jeans."

After pre-processing it as described above, with stemming and removal of the words we don't care about, we're left with:

"men fantastic great pair men skinny jean"

Taking just the words from the above example, we can see that one word is repeated, so we could encode the data like so:

Occurrences    Term
1              fantastic
1              great
1              jean
2              men
1              pair
1              skinny


This can be represented in a vector:

[1,1,1,2,1,1]

This is fine for a small set of terms (only one short sample in the case above). However, as we add more samples and more terms, our vocabulary increases. For example, if we add another training sample that isn't men's skinny jeans:

"women bootcut acid wash jean"

...we need to increase the overall vocabulary the algorithm must know about, i.e.:

[acid,bootcut,fantastic,great,jean,men,pair,skinny,wash,women]

This means that our new term vector for the initial men's skinny jeans example has to change to:

[0,0,1,1,1,2,1,1,0,0]
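To make the bookkeeping concrete, here is a small sketch in Python of building these dense count vectors over a shared vocabulary (the helper name to_dense_vector is illustrative, not from any particular library):

```python
from collections import Counter

samples = [
    "men fantastic great pair men skinny jean".split(),
    "women bootcut acid wash jean".split(),
]

# The vocabulary is the sorted set of every term seen across all samples.
vocabulary = sorted({term for sample in samples for term in sample})
# ['acid', 'bootcut', 'fantastic', 'great', 'jean', 'men', 'pair', 'skinny', 'wash', 'women']

def to_dense_vector(tokens, vocabulary):
    counts = Counter(tokens)
    # One slot per vocabulary term; most slots stay zero as the vocabulary grows.
    return [counts[term] for term in vocabulary]

print(to_dense_vector(samples[0], vocabulary))  # [0, 0, 1, 1, 1, 2, 1, 1, 0, 0]
print(to_dense_vector(samples[1], vocabulary))  # [1, 1, 0, 0, 1, 0, 0, 0, 1, 1]
```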

When dealing with thousands of samples, your vocabulary can grow very large and become quite cumbersome, as the encoded training samples end up mostly empty and very long:

[0,0,0,0,0,0,0,0,..... 2,0,0,0,0,0,.....1,0,0,0,0 …]

Thankfully, many machine learning libraries allow you to encode your term vectors as sparse vectors — which means you only need to supply non-zero cases, and the library (in our case, LibSVM) will magically figure it out for you and fill in the gaps.

In such a case, you provide the term vectors and the classes they represent as term indices relative to the entire vocabulary for all the training samples you wish to use. For example:

Term Index    Term
0             acid
1             bootcut
2             fantastic
3             great
4             jean
5             men
6             pair
7             skinny
8             wash
9             women


So, you would describe these words:

"men fantastic great pair men skinny jean"

As:

Term index #2: 1 Occurrence

Term index #3: 1 Occurrence

Term index #4: 1 Occurrence

Term index #5: 2 Occurrences

Term index #6: 1 Occurrence

Term index #7: 1 Occurrence

Which can then be encoded succinctly as:

[2:1,3:1,4:1,5:2,6:1,7:1]

Alexandre Kowalczyk has a great description of vocabulary preparation here, along with other great SVM tutorials.
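Here is a small sketch of the same sparse encoding, assuming the term indexing from the table above. The exact input format varies by library, so this only illustrates the index:count bookkeeping rather than LibSVM's actual API.

```python
from collections import Counter

vocabulary = ["acid", "bootcut", "fantastic", "great", "jean",
              "men", "pair", "skinny", "wash", "women"]
term_index = {term: i for i, term in enumerate(vocabulary)}

def to_sparse_vector(tokens):
    counts = Counter(tokens)
    # Keep only the non-zero entries, keyed by each term's index in the vocabulary.
    return {term_index[term]: count for term, count in counts.items()}

sparse = to_sparse_vector("men fantastic great pair men skinny jean".split())
print(dict(sorted(sparse.items())))  # {2: 1, 3: 1, 4: 1, 5: 2, 6: 1, 7: 1}

# Sparse-vector libraries (LibSVM included) accept this kind of index:value listing
# of non-zero entries, so the long, mostly-zero dense vector never has to be stored.
```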

Hierarchy and Data Structures

A key lesson for us was that the way these SVMs are structured can have a significant impact on how much training data is required. For example, a naive approach would have been as follows:

https://lh5.googleusercontent.com/Ulep6_pZ3faiWlDxua8qZC-P0fEZeWY-p7ZMO97MKA1R7lQkBOE4j8XiJBtZkRfZ98Cun8IghNMIoMOQMALFDrU0_LMl4rTIITsXP9PRIqncOQr907ZTtpSHkpiRCSAY1L3tXRS1

This approach requires that for every additional sub-category, two new SVMs be trained. For example, the addition of a new class for swimwear would require an additional SVM under men's and women's — not to mention the potential complexity of adding a unisex class at the top level. Overall, deep hierarchical structures can be too rigid to work with.

We were able to avoid a great deal of labeling and training work by flattening our data structures into many sub-trees, like so:

https://lh4.googleusercontent.com/UzE4_5C20nBH6poXAfZv9Mdd73g_aqpcvamNslWRd6KkSofmi1X01sKp9CHj0tXqSxjCEPAXEPib0F8F49fzOSrI5F0aCZPVGuzQSiljHwZo6bLp72mY_8lWlSA9zUHG7iQfdqc9

By decoupling our classification structure from the final hierarchy, it is possible to generate the final classification by traversing the SVM hierarchy with each document and interrogating the results with simple set-based logic such as:

Men's slim-fit jeans = (men's and jeans and slim fit) and not women's

This approach vastly reduces the number of SVMs required to classify documents, as the resultant sets can be intersected to represent the final classification.

https://lh3.googleusercontent.com/ZK6lpi_0ZyK2BklfZJtuaHjFT8Q33lcikDu7QqIEAjubGA_Jjr2s0gVSRrtIUFAipp7zWjVBah0-i9sf9SgK2kLgalzg27nrQV7h4D6y88DF90iUs14lpbb6ZBy8948rXiG8Qvcm
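As a rough illustration, the sketch below hard-codes the boolean decisions that the per-node SVMs would produce for one document and composes the final categories with simple set logic; the rule names and structure here are illustrative, not our production schema.

```python
# Per-node SVM decisions for one document (in reality these come from the
# trained SVMs; here they are hard-coded for illustration).
decisions = {
    "mens": True,
    "womens": False,
    "jeans": True,
    "slim_fit": True,
}

# Final categories are simple boolean expressions over the per-node SVM outputs.
CATEGORY_RULES = {
    "mens_slim_fit_jeans":
        lambda d: d["mens"] and d["jeans"] and d["slim_fit"] and not d["womens"],
    "womens_jeans":
        lambda d: d["womens"] and d["jeans"] and not d["mens"],
}

final_categories = [name for name, rule in CATEGORY_RULES.items() if rule(decisions)]
print(final_categories)  # ['mens_slim_fit_jeans']
```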

It should also now be evident that adding new classes opens up an exponentially increasing number of final categories. For example, adding a top-level children's class would immediately allow the creation of an entire dimension of new children’s categories (children’s jeans, shirts, bathing suits, etc.), with minimal additional training data (only one additional SVM):

https://lh6.googleusercontent.com/6h6KtZFoBv7WIlZPljq_3Ha8ZM_RLC1vmtrTjuG4Z8ILhiYaSeQ13W--zkcwGPK9nqDNUQZkx4qeg0kQ348c4fTgzuTfOppOisbrcQBcMMU8OTJE_P87yIAC5tY3Dc5QL1jsc8cz

Data Reuse

Because of the structure we chose, one key insight we were able to leverage was the reuse of training data via linked data relationships. Linking data enabled us to reuse our training data by an overall factor of 9x, massively reducing the cost and increasing the accuracy of predictions.

For each individual class, we obviously want as many training data examples as possible, covering both possible outcomes. Even though we built some excellent internal tooling — primarily, a nice fast user interface for searching, sorting, and labeling training data examples in large batches — labeling thousands of examples of each kind of product can still be laborious, costly, and error-prone. We determined that the best way to circumvent these issues was to attempt to reuse as much training data as we could across classes.

For example, given some basic domain knowledge of the categories, we know that washing machines can never be carpet cleaners:

https://lh4.googleusercontent.com/F7FN4PbIQGODYnLSuHgu3AdFDXpeLvl31JuwnoHSpAA-w5PIZ3QcOKC9XUQ9o2EifwZBuiO1S0jsPRXrwYSLF6E0J_PkhLsP4vEG8eLRSivKTsZeHyLoc6KqNcZpeTd0Ce1edSnI

By adding the ability to link excluded data, we can heavily bolster the number of negative training examples for the washing machines SVM by adding to it the positive training data from the carpet cleaners SVM. Put more simply, given that we know carpet cleaners can never be washing machines, we may as well reuse that training data.

This approach has a nice side effect: whenever the need arises to add some additional training data to improve the carpet cleaners SVM, it also improves the washing machines class via the linked negative data.

Finally, another chance for reuse that becomes apparent when considering a hierarchy is that the positive training data for any child node is always also positive training data for its parent.

For example, "jeans are always clothing" looks like:

https://lh5.googleusercontent.com/jrRm1Br8-MMUE0QZE7yXs5FRMY83Cd1ScSQ0VCbUzp60tYYfhVi5Wxq31f9NRr9e7Pl9a6cWA7m9t1hxvQ2_MAw8Dvza5NL3WjiapJ_MHd5Vp6dvsCTcyW8KWgMCRO4lJM2c9wWA

This means that for every positive example of training data added to the jeans SVM, an additional positive example is also added to the clothing SVM via the link.

Adding linked data is far more efficient than manually labeling thousands of examples.

https://lh4.googleusercontent.com/I-yqKpWEaM02N5GAnuwl3XkV2ZTnBt4oec-EvGFzcmz1zfdWBL1gF0rzqLUt5qAOBL4jdqdJlA9za6ShdxrIbIrq1ki66ukT8cHcCDN6GC9LmTDrK_UEhswFMlipJOjz8sRW2ROw
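Below is a small sketch of how such links might be resolved when assembling the training set for a given class node; the link structure, node names, and example strings are purely illustrative.

```python
# Labeled examples per class node (in reality, many thousands per node).
examples = {
    "washing_machines": ["bosch 8kg washing machine", "zanussi washer 1400 spin"],
    "carpet_cleaners":  ["vax carpet cleaner", "bissell carpet shampooer"],
    "jeans":            ["slim fit stretch jeans", "high waist mom jeans"],
    "clothing":         ["cotton crew neck t shirt"],
}

# Linked relationships between class nodes.
excludes = {"washing_machines": ["carpet_cleaners"]}  # can never be both
parents = {"jeans": "clothing"}                       # jeans are always clothing

def training_set(node):
    positives = list(examples[node])
    # Positive examples of any child node are also positives for its parent.
    positives += [ex for child, parent in parents.items() if parent == node
                  for ex in examples[child]]
    # Positive examples of excluded nodes are reused as negatives.
    negatives = [ex for other in excludes.get(node, []) for ex in examples[other]]
    return positives, negatives

print(training_set("washing_machines"))  # carpet cleaner examples become negatives
print(training_set("clothing"))          # jeans examples become extra positives
```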

Conclusion

Support vector machines have helped us reach a quality and speed of classification that we could never have achieved with a non-machine-learning approach. As such, we have come to see support vector machines as an excellent addition to any developer's toolbox, and investigating them also serves as a nice introduction to some key machine learning concepts.

Additionally, when it comes to the specifics of hierarchical classification systems, decoupling the classification component from the resulting hierarchy, flattening the data structure, and enabling the reuse of training data will all be beneficial in gaining as much efficiency as possible. The approaches outlined above have not only helped reduce the amount of training data we needed to label but have also given us greater flexibility overall.


Topics:
machine learning, ai, tutorial, classification, svm

