
External Data Operations on Salesforce Analytics Using Mulesoft Salesforce Analytics Connector Part 1


Read this article to learn how to use the Salesforce Analytics Connector to create a Salesforce Analytics dataset and to upsert, delete, and overwrite records.


In this series, I will explain how to work with the Salesforce Analytics Connector to create a Salesforce Analytics dataset and to upsert, delete, and overwrite records.

Salesforce provides the External Data API, which enables you to upload external data files to Analytics. The Salesforce Analytics Connector lets us connect to the Salesforce Analytics Cloud application using this same External Data API. The connector works like any other Anypoint Connector: we supply the data in the generic Java format List<Map>, and the connector internally takes care of the necessary transformations, making our life easy.
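To make the payload shape concrete, here is a minimal sketch of records in the generic List<Map> form the connector consumes. The column names (Id, Amount, CloseDate) are hypothetical examples; in a real flow they must match the fields declared in your metadata JSON.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DatasetRows {

    // Builds records in the generic List<Map> shape the connector expects.
    // Keys are example column names; they must match the metadata JSON fields.
    public static List<Map<String, Object>> buildRows() {
        List<Map<String, Object>> rows = new ArrayList<>();

        Map<String, Object> row1 = new LinkedHashMap<>();
        row1.put("Id", "OPP-0001");
        row1.put("Amount", 15000);
        row1.put("CloseDate", "2018-06-30");
        rows.add(row1);

        Map<String, Object> row2 = new LinkedHashMap<>();
        row2.put("Id", "OPP-0002");
        row2.put("Amount", 23000);
        row2.put("CloseDate", "2018-07-15");
        rows.add(row2);

        return rows;
    }
}
```

In a Mule flow, a list like this would typically be produced by a transformer (e.g. DataWeave) and handed to the connector as the message payload.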

Let's familiarize ourselves with the various terms we will come across for Salesforce Analytics integration:

DataSet: A dataset contains a set of source data, specially formatted and optimized for interactive exploration. Every dataset has a name that is unique within the Salesforce organization.

The InsightsExternalData Object: The InsightsExternalData object is used with the InsightsExternalDataPart object, which holds the parts of the data to be uploaded. Together, they provide a programmatic way to upload a large file in parts and trigger a data-flow into a dataset.

The InsightsExternalDataPart Object: The InsightsExternalDataPart object works with the InsightsExternalData object. After you insert a row into the InsightsExternalData object, you create part objects to split your data into parts. If your data file is larger than 10 MB, split it into parts that are each smaller than 10 MB.

MetaData JSON File: A metadata file is a JSON file that describes the structure of an external data file. With it, we define the schema of the data, assigning data types like Text, Numeric, or Date to the different fields. Depending on the data type, we need to provide additional attributes; for example, for a Date field we need to provide attributes such as format, firstDayOfWeek, fiscalMonthOffset, and isYearEndFiscalYear. For operations like Upsert and Delete on the dataset, we have to specify a primary key, which must be defined in the metadata file. Please refer to the Analytics External Data Format Reference (for building the metadata JSON): https://resources.docs.salesforce.com/214/latest/en-us/sfdc/pdf/bi_dev_guide_ext_data_format.pdf

A metadata JSON is required for almost all operations in the connector and must be configured in the connector configuration itself.
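As an illustration, a minimal metadata JSON for a hypothetical "Opportunity" CSV upload might look like the following. The object and field names here are examples, not from this article; see the format reference linked above for the full attribute list.

```json
{
  "fileFormat": {
    "charsetName": "UTF-8",
    "fieldsDelimitedBy": ",",
    "numberOfLinesToIgnore": 1
  },
  "objects": [
    {
      "connector": "CSV",
      "fullyQualifiedName": "Opportunity",
      "name": "Opportunity",
      "label": "Opportunity",
      "fields": [
        {
          "fullyQualifiedName": "Opportunity.Id",
          "name": "Id",
          "label": "Opportunity Id",
          "type": "Text",
          "isUniqueId": true
        },
        {
          "fullyQualifiedName": "Opportunity.Amount",
          "name": "Amount",
          "label": "Amount",
          "type": "Numeric",
          "precision": 18,
          "scale": 2,
          "defaultValue": "0"
        },
        {
          "fullyQualifiedName": "Opportunity.CloseDate",
          "name": "CloseDate",
          "label": "Close Date",
          "type": "Date",
          "format": "yyyy-MM-dd",
          "fiscalMonthOffset": 0,
          "firstDayOfWeek": 1,
          "isYearEndFiscalYear": true
        }
      ]
    }
  ]
}
```

Note the "isUniqueId": true on the Id field — this is the primary key that Upsert and Delete operations require.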

Edgemart Container: The Edgemart Container is the app (e.g. the Shared App) that owns the dataset. When using the connector to create or update a dataset, we need to provide the Salesforce ID or the developer name of the app. Make sure you have the necessary permissions on the app before creating/updating the dataset; otherwise, the operation will fail with insufficient privileges.

If you are not sure what the developer name/Salesforce ID of your app is, you can check it using this SOQL query, which you can run in the Developer Console: SELECT Id, DeveloperName, Name, AccessType, CreatedDate, Type FROM Folder WHERE Type = 'Insights'

DataSet ID: The Salesforce ID of the dataset. The Create Data Set operation of the connector returns the dataset ID as the payload, which can be used in subsequent operations.

DataSet Operations Available in Connector:

1. Create Data Set: Creates a new InsightsExternalData record in the Salesforce Analytics Cloud system and returns the identifier (the Salesforce ID of the created record) within the Salesforce Analytics Cloud system. This operation creates just the dataset skeleton in Analytics and also uploads the metadata JSON, which acts as the "schema/structure" for the dataset. Keep in mind that an InsightsExternalData record does not represent the dataset itself, even though the record is essential for dataset creation and is required to channel data into the dataset.

Type: Provide the path of the metadata JSON file. All the data parts uploaded later must align with this metadata JSON.

Operation: Even though the connector operation is named "Create DataSet," we can actually perform APPEND, OVERWRITE, UPSERT, and DELETE by selecting the appropriate operation.

  • APPEND: Append all data to the dataset. Creates a dataset if it doesn’t exist. If the dataset or rows contain a unique identifier, the append operation is not allowed.

  • OVERWRITE: Create a dataset with the given data, and replace the dataset if it exists.

  • UPSERT: Insert or update rows in the dataset. Creates a dataset if it doesn’t exist. The rows to upsert must contain one (and only one) field with a unique identifier; we have to set the "isUniqueId" attribute to true on exactly one field in the metadata JSON.

  • DELETE: Delete the rows from the dataset. The rows to delete must contain one (and only one) field with a unique identifier.

DELETE can be used only on an existing dataset.

APPEND, UPSERT, and OVERWRITE can be used to create a new dataset and also to modify it later.

Edgemart Container: We need to provide the Salesforce ID or the developer name of the app in which the DataSet needs to be created. Keep in mind these rules regarding Edgemart Container:

a. If this parameter is omitted when you’re creating a dataset, the name of the user’s private app is used.

b. If this parameter is omitted for an existing dataset, the system resolves the app name.

c. If this parameter is specified for an existing dataset, the name is required to match the name of the current app that contains the dataset.
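As a rough sketch of how this operation might be wired up in a Mule 3 flow — the element and attribute names below are assumptions based on the connector's sfdc-analytics namespace and may differ in your connector version, so treat this as illustrative only:

```xml
<!-- Illustrative only: attribute names may vary by connector version -->
<sfdc-analytics:config name="Salesforce_Analytics"
    username="${sfdc.username}"
    password="${sfdc.password}"
    securityToken="${sfdc.token}"
    doc:name="Salesforce Analytics Cloud"/>

<flow name="create-dataset-flow">
    <!-- Creates the InsightsExternalData record and uploads the metadata JSON -->
    <sfdc-analytics:create-data-set
        config-ref="Salesforce_Analytics"
        operation="UPSERT"
        label="Opportunities"
        edgemartAlias="Opportunities"
        edgemartContainer="SharedApp"
        type="metadata/opportunities.json"
        doc:name="Create data set"/>
    <!-- Payload is now the Salesforce ID of the created record,
         to be used by the upload and start-processing operations -->
</flow>
```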

2. Upload External Data: Inserts records into a dataset identified by an ID obtained from the Salesforce Analytics Cloud system. We have to pass the dataset Salesforce ID obtained from the "Create Data Set" operation, and we pass the chunked content in this operation itself. We have to make sure that the data passed through this operation aligns with the metadata JSON. This operation creates a data part (InsightsExternalDataPart) in the Salesforce Analytics Cloud system from the content passed; the InsightsExternalDataPart record created is a child of the InsightsExternalData record created in the "Create Data Set" operation. We also need to configure the path of the metadata JSON file using the "Type" drop-down.
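Since each InsightsExternalDataPart must stay under 10 MB, the source content has to be split into chunks before this operation is called for each part. Parts are reassembled in order on the Salesforce side, so a plain byte-wise split is sufficient; a minimal sketch (the 10 MB limit comes from the Salesforce docs, the helper name is ours):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class DataPartSplitter {

    // Salesforce caps each InsightsExternalDataPart at 10 MB.
    public static final int MAX_PART_BYTES = 10 * 1024 * 1024;

    // Splits raw file content into chunks no larger than maxBytes.
    // Parts are concatenated in order by Salesforce, so the split
    // does not need to fall on CSV row boundaries.
    public static List<byte[]> split(byte[] content, int maxBytes) {
        List<byte[]> parts = new ArrayList<>();
        for (int offset = 0; offset < content.length; offset += maxBytes) {
            int end = Math.min(content.length, offset + maxBytes);
            parts.add(Arrays.copyOfRange(content, offset, end));
        }
        return parts;
    }
}
```

Each chunk would then be passed as the payload of one "Upload External Data" call, all referencing the same dataset Salesforce ID.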

3. Start Data Processing: This operation tells the Salesforce Analytics Cloud system to start processing the data parts uploaded so far into a dataset. We have to provide the dataset Salesforce ID obtained from the "Create Data Set" operation. After this operation is executed from the connector, a "job" is created in the Analytics Cloud system; this job takes care of the actual channeling/association of the records into the dataset.

Additional Info: The status of the Job can be monitored using the Data Monitor under the Jobs tab (Analytics Studio -> Settings Button -> Data Monitor).

Using the Data Monitor, we can see whether the job succeeded, failed, or completed with warnings, along with metrics like the number of records that succeeded or failed.

4. Upload External Data into New Dataset and Start Processing: This creates a dataset, uploads data into it, and tells the Salesforce Analytics Cloud system to start processing the uploaded data. It combines all the operations above and can be used when we plan to send the data in a single data part.

We will cover the following scenarios in the subsequent parts of this series:

Scenario 1: Creating a new dataset, appending to an existing dataset, and uploading records to the dataset (in batches and in one go).

Scenario 2: Overwriting the dataset with the new set of Data.

Scenario 3: Creating a DataSet with a Primary Key, and inserting/updating records in the DataSet.

Scenario 4: Deleting records in the dataset.


