Salesforce, Mulesoft, and Killing the API


Zone Leader John Vester talks about how a thing that drives him crazy with Salesforce actually provided positive results elsewhere.


My team has been focused on integrating our Salesforce deployment with our back-end systems. To get started, we originally configured Salesforce to call our back-end systems directly (through Mashery). However, as our requirements and complexity grew, we took the opportunity to adopt Mulesoft as our ESB and message-queue technology.

Introducing Mulesoft and Anypoint Studio


Using Anypoint Studio, our developers were able to map out flows that perform the necessary actions. Since our back-end application hasn't reached the point where it can fire events, we used a polling technique: a flow calls an API end-point to determine whether any records have been updated since the last time the poll was run. Mulesoft provides something it calls a Watermark, which keeps track of the last successful polling call. The list of items to process was placed on an Anypoint MQ queue, which is part of Mulesoft's CloudHub offering.
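The watermark idea can be sketched in a few lines. This is a hedged illustration, not Mulesoft's actual implementation: `fetch_updated_ids` and the in-memory `queue` are hypothetical stand-ins for the back-end API and the Anypoint MQ publish step.

```python
from datetime import datetime, timezone

def poll_once(watermark, fetch_updated_ids, queue):
    """Ask the back-end for IDs updated since `watermark`, enqueue them,
    and return the new watermark for the next polling cycle."""
    now = datetime.now(timezone.utc)
    ids = fetch_updated_ids(since=watermark)  # only records changed since last success
    for record_id in ids:
        queue.append(record_id)  # stands in for publishing to Anypoint MQ
    return now  # advanced only after a successful poll
```

The key property is that the watermark only advances after a successful poll, so a failed cycle simply re-reads the same window next time.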

The original approach was simple: once we received a list of unique IDs to process, the listener flow called the existing APIs to obtain resource information from the back-end system. In other words, we called the end-points once for every unique ID we needed to process.

Mulesoft then took the data, performed any necessary transformations, and called an API end-point in Salesforce, which finished the processing.
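The original listener flow boils down to something like the sketch below. The helper names (`get_resource`, `post_to_salesforce`) and the payload shape are hypothetical; the point is the call count — N unique IDs means N back-end requests.

```python
def process_ids(ids, get_resource, post_to_salesforce):
    """Fetch each record individually, transform it, and forward to Salesforce.
    Returns the number of back-end calls made (one per ID)."""
    calls = 0
    for record_id in ids:
        resource = get_resource(record_id)  # one back-end request per ID
        calls += 1
        payload = {"externalId": record_id, "data": resource}  # transformation step
        post_to_salesforce(payload)
    return calls
```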

Our Findings

While the existing end-points for the back-end system worked great for the AngularJS application, we found that hitting those end-points several times (maybe even several thousand times) wasn't ideal. In fact, it caught the eye of not only the DevOps team but also the performance monitoring tools we had in place. It turns out our servers were sized for a user base nowhere near the load we were placing on them via the Mulesoft flow processing.

The biggest issues occurred during the evening processing and month-end processing. In both cases, we caused out-of-memory errors, which didn't fare well during business hours, and DevOps wasn't thrilled about fixing the issue in the middle of the night, either.

Our New Approach

Taking a cue from the bulkification Salesforce developers are required to practice, we decided to bulkify our Mulesoft calls to the back-end system. Instead of calling a single API end-point multiple times, we introduced a new end-point that accepts a list of unique IDs to process.
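The bulkified version looks roughly like this. Again a hedged sketch: `get_resources_bulk` stands in for the new list-accepting end-point, and the batch size of 200 is an illustrative choice, not a figure from our system. N IDs now cost ceil(N / batch_size) requests instead of N.

```python
def process_ids_bulk(ids, get_resources_bulk, batch_size=200):
    """Fetch records in batches via a list-accepting end-point.
    Returns (resources, request_count)."""
    resources, calls = [], 0
    for start in range(0, len(ids), batch_size):
        batch = ids[start:start + batch_size]
        resources.extend(get_resources_bulk(batch))  # one request per batch
        calls += 1
    return resources, calls
```

With a batch size of 200, the several-thousand-call evening run collapses to a few dozen requests, which is the whole point of the change.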

We had hoped to avoid introducing new end-points to the back-end system, but realized our original approach was flawed. Fortunately, the new functionality leaned mostly on some code refactoring and turned out to be a minor update. Our next step, currently underway, is to move our API end-points into their own microservice.

I will admit that my biggest complaint about Salesforce is the need to bulkify so many of the tasks performed within it. In this case, however, that very approach proved helpful, yielding the results we needed without impacting the application's performance.

Have a really great day!
