Using Apache Ignite’s Machine Learning for Fraud Detection at Scale

Let's dive deeper into using Apache Ignite's Machine Learning for fraud detection at scale.

By Akmal Chaudhri · Aug. 08, 18 · Analysis

This article is featured in the new DZone Guide to Artificial Intelligence: Automating Decision-Making. Get your free copy for more insightful articles, industry statistics, and more!

Apache Ignite is a very versatile product that supports a wide range of integrated components. These components include a machine learning (ML) library that supports popular ML algorithms, such as linear regression, k-NN classification, and k-means clustering.

The ML capabilities of Ignite provide a wide range of benefits. For example, Ignite can work on data in-place, avoiding costly ETL between different systems. Ignite also provides distributed computing, covering both the storage and the manipulation of data. The ML algorithms implemented in Ignite are optimized for distributed computing and can use Ignite’s collocated processing to great advantage. Ignite can also act as a sink for streaming data, allowing ML to be applied in real time. Finally, ML is often an iterative process, and context may change while an algorithm is running. To avoid loss of work and delay, Ignite supports partition-based datasets, which makes it tolerant to node failures.

Ignite ships with code examples that demonstrate the use of various ML algorithms on some well-known datasets. These code examples can work standalone, making it very easy to get started with the ML library, and the examples can be used as templates for user-specific problems.

With these benefits in mind, let’s undertake some exploratory analysis and write some code for a real-world problem. Specifically, let’s look at how Ignite could help with credit card fraud detection.

Background

Today, credit card fraud remains a major problem for many financial institutions. Historically, checking financial transactions has been a manual process. However, we can now apply ML techniques to identify potentially fraudulent transactions and build real-time fraud detection systems that act much faster and help stop fraudulent transactions in their tracks.

The Dataset

A suitable dataset for credit card fraud detection is available through Kaggle [1], provided by the Machine Learning Group at Université Libre de Bruxelles (ULB). The data are anonymized credit card transactions that contain both genuine and fraudulent cases. The transactions occurred over two days in September 2013, and the dataset contains a total of 284,807 transactions, of which 492 are fraudulent, representing just 0.172% of the total. The dataset is therefore highly unbalanced, which presents some challenges for analysis. It consists of the following fields:

  • Time: The number of seconds elapsed between a transaction and the first transaction in the dataset.
  • V1 to V28: Details not available due to confidentiality reasons.
  • Amount: The monetary value of the transaction.
  • Class: The response variable (0 = no fraud, 1 = fraud).

Machine Learning Algorithm

According to Andrea Dal Pozzolo, who was involved in the collection of the original dataset, fraud detection is a classification problem [2]. Also, since investigators may only review a limited number of transactions, the probability that a transaction is fraudulent is more important than the true classification. Therefore, a good algorithm to use for the initial analysis is logistic regression. This is because the outcome has only two possible values, and we are also interested in the probability.
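Since logistic regression outputs a probability rather than just a class label, it fits this requirement directly. In its standard form (this is the generic formulation, not anything specific to Ignite), the model estimates:

\[ P(\text{fraud} \mid x) = \sigma(w^\top x + b) = \frac{1}{1 + e^{-(w^\top x + b)}} \]

where x is the feature vector, w the learned weights, and b the bias; the probability can then be thresholded to obtain the 0/1 classification.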

Data Preparation

As previously mentioned, the dataset is highly unbalanced. There are a number of solutions we can use to manage an unbalanced dataset [3]. The initial approach we can take is to under-sample. We will keep all the 492 fraudulent transactions and reduce the number of non-fraudulent transactions. There are several ways we could perform this dataset reduction:

  1. Randomly remove majority class examples.
  2. Select every nth row from the majority class examples.

For our initial analysis, let’s use the second approach and select every 100th majority class example.
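As a rough illustration of this step, a minimal Java sketch of the under-sampling might look like the following. The input and output file names are assumptions, the header row and any quoting in the raw Kaggle CSV are glossed over, and the class label is taken from the last column as described above:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

public class UnderSample {
    public static void main(String[] args) throws IOException {
        List<String> rows = Files.readAllLines(Paths.get("creditcard.csv"));
        List<String> sampled = new ArrayList<>();
        int majorityCount = 0;

        for (String row : rows) {
            String[] cells = row.split(",");
            // Class is the last column: 0 = no fraud, 1 = fraud.
            boolean fraud = "1".equals(cells[cells.length - 1].trim());

            if (fraud)
                sampled.add(row); // Keep all 492 fraudulent transactions.
            else if (majorityCount++ % 100 == 0)
                sampled.add(row); // Keep every 100th non-fraudulent transaction.
        }

        Files.write(Paths.get("creditcard-undersampled.csv"), sampled);
    }
}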

We know that there are columns V1 to V28, but not what these represent. The Amount column varies significantly, from 0 to 25,691.16. We can remove the Time column, since it does not provide the actual time of a transaction. There are no missing values in the dataset.

Another decision that we need to make is whether to normalize the data. For our initial analysis, we won’t use normalization.

One approach to data preparation for this credit card fraud problem is described by Kevin Jacobs [4]. Simple analysis can often provide good initial insights and help refine the strategy for further data analysis. Using the approach described by Jacobs, we’ll create our training and test data using scikit-learn. We’ll then load the data into Ignite storage and perform our logistic regression using Ignite’s Machine Learning Library.

Once our training and test data are ready, we can start coding the application. You can download the code from GitHub [5] if you would like to follow along. We need to do the following:

  1. Read the training data and test data
  2. Store the training data and test data in Ignite
  3. Use the training data to fit the logistic regression model
  4. Apply the model to the test data
  5. Determine the confusion matrix and the accuracy of the model

Read the Training Data and Test Data

We have two CSV files with 30 columns, as follows:

  1. V1 to V28
  2. Amount
  3. Class (0 = no fraud, 1 = fraud)

We can use the following code to read in values from the CSV files:

private static void loadData(String fileName, IgniteCache<Integer, FraudObservation> cache)
        throws FileNotFoundException {
    Scanner scanner = new Scanner(new File(fileName));

    int cnt = 0;
    while (scanner.hasNextLine()) {
        String row = scanner.nextLine();
        String[] cells = row.split(",");
        double[] features = new double[cells.length - 1];

        // All columns except the last are features (V1 to V28 and Amount).
        for (int i = 0; i < cells.length - 1; i++)
            features[i] = Double.valueOf(cells[i]);

        // The last column is the class label (0 = no fraud, 1 = fraud).
        double fraudClass = Double.valueOf(cells[cells.length - 1]);

        cache.put(cnt++, new FraudObservation(features, fraudClass));
    }
}

The code reads the data line by line and splits each line on the CSV field separator. Each field value is converted to a double, and the data are then stored in Ignite.
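The loadData() method relies on a FraudObservation value class that holds the features and the label. The class itself is not shown in the article; a minimal sketch consistent with the calls used throughout (getFeatures() and getFraudClass()) could look like this, though the actual class in the GitHub repository [5] may differ:

public class FraudObservation {
    private final double[] features; // V1 to V28 plus Amount.
    private final double fraudClass; // 0 = no fraud, 1 = fraud.

    public FraudObservation(double[] features, double fraudClass) {
        this.features = features;
        this.fraudClass = fraudClass;
    }

    public double[] getFeatures() {
        return features;
    }

    public double getFraudClass() {
        return fraudClass;
    }
}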

Store the Training Data and Test Data in Ignite

The previous code stores data values in Ignite. To use this code, we need to create the Ignite storage first, as follows:

IgniteCache<Integer, FraudObservation> trainData = getCache(ignite, "FRAUD_TRAIN");
IgniteCache<Integer, FraudObservation> testData = getCache(ignite, "FRAUD_TEST");

loadData("src/main/resources/resources/fraud-train.csv", trainData);
loadData("src/main/resources/resources/fraud-test.csv", testData);

The code for getCache() is implemented like so:

private static IgniteCache<Integer, FraudObservation> getCache(Ignite ignite, String cacheName) {
    CacheConfiguration<Integer, FraudObservation> cacheConfiguration = new CacheConfiguration<>();
    cacheConfiguration.setName(cacheName);

    // Distribute the data across 10 partitions.
    cacheConfiguration.setAffinity(new RendezvousAffinityFunction(false, 10));

    IgniteCache<Integer, FraudObservation> cache = ignite.createCache(cacheConfiguration);

    return cache;
}

Use the Training Data to Fit the Logistic Regression Model

Now that our data are stored, we can create the trainer, as follows:

LogisticRegressionSGDTrainer<?> trainer = new LogisticRegressionSGDTrainer<>(new UpdatesStrategy<>(
    new SimpleGDUpdateCalculator(0.2),
    SimpleGDParameterUpdate::sumLocal,
    SimpleGDParameterUpdate::avg
), 100000, 10, 100, 123L);

We are using Ignite’s logistic regression trainer with stochastic gradient descent (SGD). The learning rate is set to 0.2 and controls how much the model changes on each update. We have also specified a maximum of 100,000 iterations, a batch size of 10, 100 local iterations, and a seed of 123.

We can now fit the logistic regression model to the training data, as follows:

LogisticRegressionModel mdl = trainer.fit(
    ignite,
    trainData,
    (k, v) -> v.getFeatures(),  // Feature extractor.
    (k, v) -> v.getFraudClass() // Label extractor.
).withRawLabels(true);

Ignite stores data in a key-value (K-V) format, so the above code uses the value part. The target value is the fraud class and the features are in the other columns.

Apply the Model to the Test Data

We are now ready to check the test data against the trained logistic regression model. The following code will do this for us:

int amountOfErrors = 0;
int totalAmount = 0;
int[][] confusionMtx = {{0, 0}, {0, 0}};

try (QueryCursor<Cache.Entry<Integer, FraudObservation>> cursor = testData.query(new ScanQuery<>())) {
    for (Cache.Entry<Integer, FraudObservation> testEntry : cursor) {
        FraudObservation observation = testEntry.getValue();

        double groundTruth = observation.getFraudClass();
        double prediction = mdl.apply(new DenseLocalOnHeapVector(observation.getFeatures()));

        totalAmount++;
        if ((int) groundTruth != (int) prediction)
            amountOfErrors++;

        int idx1 = (int) prediction;
        int idx2 = (int) groundTruth;
        confusionMtx[idx1][idx2]++;

        System.out.printf(">>> | %.4f\t | %.0f\t\t\t|\n", prediction, groundTruth);
    }
}

Determine the Confusion Matrix and the Accuracy of the Model

We can now evaluate the model by comparing its classifications against the actual fraud values (the ground truth) in our test data.
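The loop above accumulates only the error count and the confusion matrix; the remaining summary figures can be derived from them. A minimal sketch, continuing with the variables from the loop (note that, to match the reported figures, precision and recall here treat the majority “no fraud” class as the positive class):

// Derive the summary metrics from the loop above.
// confusionMtx[prediction][groundTruth], where 0 = no fraud and 1 = fraud.
double accuracy = 1.0 - (double) amountOfErrors / totalAmount;
double precision = (double) confusionMtx[0][0] / (confusionMtx[0][0] + confusionMtx[0][1]);
double recall = (double) confusionMtx[0][0] / (confusionMtx[0][0] + confusionMtx[1][0]);

System.out.printf(">>> Absolute amount of errors %d\n", amountOfErrors);
System.out.printf(">>> Accuracy %.4f\n", accuracy);
System.out.printf(">>> Precision %.4f\n", precision);
System.out.printf(">>> Recall %.4f\n", recall);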

Running the code gives us the following summary:

>>> Absolute amount of errors 80
>>> Accuracy 0.9520
>>> Precision 0.9479
>>> Recall 0.9986
>>> Confusion matrix is [[1420, 78], [2, 168]]

For the confusion matrix, we have the following:

                      ACTUAL: NO FRAUD       ACTUAL: FRAUD
PREDICTED: NO FRAUD   1420                   78 (Type I error)
PREDICTED: FRAUD      2 (Type II error)      168

Summary

Our initial results look promising, but there is room for improvement. We made a number of choices and assumptions for our initial analysis. Our next steps would be to go back and evaluate these to determine what changes we can make to tune our classifier. If we plan to use this classifier for a real-time credit card fraud detection system, we want to ensure that we can catch all the fraudulent transactions and also keep our customers happy by correctly identifying non-fraudulent transactions.

Once we have a good classifier, we can use it directly with transactions arriving in Ignite in real time. Additionally, with Ignite’s continuous learning capabilities, we can refine and tune our classifier further as new data arrive.
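As a sketch of what this could look like: later versions of Ignite’s ML library expose an update() method on trainers for incremental learning. Assuming such an API is available in the version you are using (this is an assumption, not code from the article; check the Ignite documentation), refining the model with a cache of new observations might look like:

LogisticRegressionModel updatedMdl = trainer.update(
    mdl,                        // The previously trained model.
    ignite,
    newData,                    // Hypothetical cache of newly arrived observations.
    (k, v) -> v.getFeatures(),  // Feature extractor, as before.
    (k, v) -> v.getFraudClass() // Label extractor, as before.
);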

Finally, using Ignite as the basis for a real-time fraud detection system provides many advantages, such as the ability to scale ML processing beyond a single node, the storage and manipulation of massive quantities of data, and zero ETL.

References

[1] kaggle.com/mlg-ulb/creditcardfraud

[2] slideshare.net/dalpozz/adaptive-machine-learning-for-credit-cardfraud-detection

[3] analyticsvidhya.com/blog/2017/03/imbalanced-classification-problem

[4] data-blogger.com/2017/06/15/fraud-detection-a-simple-machine-learning-approach

[5] github.com/VeryFatBoy/ignite-ml-examples/
