
Case Study: Automatically Training a Classifier With OptiML


No-show appointments cost hospitals over $150 billion per year. Learn how OptiML uses Bayesian parameter optimization to automatically train a classifier for this problem.


This post, the second in a series of six exploring OptiML, the new feature for automatic model optimization on BigML, focuses on a real-world use case in the healthcare industry: medical appointment no-shows. We will demonstrate how OptiML uses Bayesian parameter optimization to search for the best-performing model for your data. The status of the search is continually updated in the BigML Dashboard, and the process yields a list of models ranked by performance, which enables further exploration, evaluation, and prediction tasks.

The Dataset

No-show appointments represent a major healthcare expense, estimated to cost hospitals over $150 billion per year. A no-show occurs when all the necessary information for a medical appointment has been delivered, yet the patient fails to arrive at the scheduled time. This is distinct from events like cancellations, which require intervention and rescheduling but impose nowhere near the same financial burden.

Here, we use a verified dataset from Kaggle that documents over 100,000 scheduled medical appointments, each described by 15 variables (fields). In this dataset, 22,319 instances (~20% of the total) are labeled as no-shows. The fields describing each instance consist of simple descriptive information about the patient (e.g., age, gender), health characteristics (e.g., hypertension, diabetes), geographical information (e.g., neighborhood), communication information (e.g., whether an SMS reminder was sent), and scheduling and appointment date and time information.
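As a quick sanity check before modeling, a few lines of pandas can confirm the size and class balance described above. This is a minimal sketch: the local filename and the "No-show" column name are assumptions about the Kaggle CSV, so adjust them to match your download.

```python
import pandas as pd

# Load the Kaggle no-show appointments CSV (filename is an assumption;
# adjust to match your local download).
df = pd.read_csv("KaggleV2-May-2016.csv")

print(len(df))  # expect ~110,000 appointment records
# "No-show" is assumed to be a Yes/No column; ~20% should be "Yes".
print(df["No-show"].value_counts(normalize=True))
```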

Automatic Model Optimization

By design, OptiML exposes relatively few configuration options. Selecting the objective field from the dataset is essential; in our use case, this is whether or not a patient is a no-show for their visit. The other main parameters control how extensively the OptiML search will run by setting the maximum training time and the number of evaluations. The advanced configuration lets you choose which types of machine learning algorithms will be evaluated (models, i.e., decision trees; ensembles; logistic regression; and Deepnets), the evaluation approach (cross-validation by default), and the desired optimization metric.
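For readers who prefer the API to the Dashboard, the same configuration can be expressed as an arguments dictionary with the BigML Python bindings. This is a hedged sketch: the argument names and metric value below follow BigML's OptiML API as best I recall it, so verify them against the current API reference before relying on them.

```python
from bigml.api import BigML

api = BigML()  # reads BIGML_USERNAME / BIGML_API_KEY from the environment

# OptiML arguments mirroring the Dashboard options described above.
# Argument names are assumptions; check BigML's OptiML API reference.
optiml_args = {
    "objective_field": "No-show",      # what we want to predict
    "max_training_time": 1800,         # search budget, in seconds
    "model_types": ["model", "ensemble",
                    "logisticregression", "deepnet"],
    "metric": "area_under_roc_curve",  # optimization metric
}

# "dataset/..." is a hypothetical ID for the uploaded no-show dataset.
optiml = api.create_optiml("dataset/5d2b8e6eeba31d25b0000000", optiml_args)
```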

After choosing OptiML from the supervised drop-down menu, training is as simple as one click! While models are being trained and evaluated, the Dashboard displays the elapsed time, a running series of F-measures from completed evaluations, and a counter for the number of resources created.
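Outside the Dashboard, the same progress can be watched by polling the OptiML resource. A rough sketch, assuming the `optiml` resource created above and BigML's standard status codes (5 means finished); the status layout is an assumption to verify against a live response:

```python
import time

# Poll the OptiML resource until the search finishes.
while True:
    optiml = api.get_optiml(optiml)  # refresh the resource
    status = optiml["object"]["status"]
    print(f"{status.get('progress', 0):.0%} {status.get('message', '')}")
    if status["code"] == 5:          # 5 == FINISHED in BigML status codes
        break
    time.sleep(30)
```

In practice, `api.ok(optiml)` achieves the same thing by simply blocking until the resource is finished.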

Once the OptiML process is complete, a summary view displays the total number of resources and models created and selected, along with the total elapsed time and the amount of data processed. The selected models are the best-performing ones according to the evaluation metric chosen for the analysis. Because many of the evaluations are part of a cross-validation process, the number of selected models is expected to be substantially lower than the total number of models evaluated.
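Programmatically, the finished resource should carry the list of selected model IDs. A sketch under the assumption that the response stores them under a `models` key; print the full object if the keys differ in your account:

```python
# Inspect the finished OptiML resource for the selected models.
# The exact response keys are assumptions based on BigML's resource
# format; verify against a live response.
result = optiml["object"]
for model_id in result.get("models", []):
    print("selected:", model_id)  # e.g. "deepnet/...", "ensemble/..."
```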

Assessing Classification Model Performance

In our example, we can sort our best-performing models by any performance metric we would like. We can also view two metrics simultaneously, e.g. precision and recall, if we would like to consider different applications or error tolerances. Here, we are sorting by ROC AUC and also viewing the corresponding accuracy for each of the selected models. We can see that our top-performing model is a Deepnet using an auto-network search, which has an ROC AUC of 0.73899 and accuracy of 58.96%.
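The same ranking is easy to reproduce locally once the evaluation summaries are in hand. In this illustrative sketch, the Deepnet's AUC and accuracy and the logistic regression's AUC come from the results discussed here; the remaining numbers are placeholders, not measured values.

```python
# Rank candidate models by ROC AUC while keeping accuracy visible,
# mirroring the two-metric view in the Dashboard.
candidates = [
    {"name": "deepnet (auto-network search)",
     "roc_auc": 0.73899, "accuracy": 0.5896},  # from the results above
    {"name": "ensemble",
     "roc_auc": 0.72, "accuracy": 0.61},       # placeholder values
    {"name": "logistic regression",
     "roc_auc": 0.661, "accuracy": 0.62},      # accuracy is a placeholder
]
for c in sorted(candidates, key=lambda m: m["roc_auc"], reverse=True):
    print(f'{c["name"]:32} AUC={c["roc_auc"]:.3f}  acc={c["accuracy"]:.1%}')
```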

The Dashboard also allows us to select individual models generated by OptiML and compare their evaluations. Here, we display the ROC curves for the top-performing Deepnet, ensemble, model (decision tree), and logistic regression. With regard to AUC, only the logistic regression model (in green, AUC = 0.661) noticeably underperforms the alternatives.
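The Dashboard draws these comparison curves for you; as a generic, non-BigML stand-in, scikit-learn and matplotlib can produce the same kind of plot from held-out labels and predicted no-show probabilities. The sketch below uses synthetic data purely so it runs end to end; substitute real predictions from your models.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import auc, roc_curve

# Synthetic stand-ins for held-out labels and model scores, so the
# sketch is runnable; replace with real predictions in practice.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_score = 0.4 * y_true + 0.6 * rng.random(1000)

fpr, tpr, _ = roc_curve(y_true, y_score)  # points along the ROC curve
plt.plot(fpr, tpr, label=f"model (AUC = {auc(fpr, tpr):.3f})")
plt.plot([0, 1], [0, 1], "--", label="chance")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```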

The root causes behind the estimated 54.3 million patients who skip scheduled medical care are widely thought to be a combination of financial hardship, anxiety about long wait times, transportation difficulty, and poor medical literacy. With our current dataset, however, it is difficult to engineer features that accurately represent all of these issues. This is a good reminder that model optimization alone will not guarantee top-notch performance: like all machine learning methods, OptiML is only as good as the data it is fed. Regardless, OptiML lets us quickly prototype and assess achievable performance with minimal effort, freeing resources for high-yield tasks such as additional data acquisition and feature engineering.


