7 Things You Didn’t Know About DataMapper
Originally authored by Nicolas Domeniconi
Think you know DataMapper? Think again! Let me share seven little tricks and tools that will help you develop faster with DataMapper.
1. Using Flows as Lookup Tables
Let’s say you need to map and transform a message payload from one structure and format to another. Of course, DataMapper is the perfect tool for that job. However, what if you also need to add to the payload as it is transformed and mapped? You could use DataMapper and a lookup table (CSV, DB or User Defined) to find and add data to a message, or you could take advantage of a new feature in 3.4: the Flow Reference Lookup. With this feature, you can use a flow reference to invoke a separate flow to request, process or retrieve information and use it in the mapping.
One of the examples that comes bundled with Studio demonstrates the flow ref lookup functionality perfectly. You can read the full example details, but to keep things short and sweet, I’ll explain an abridged version of how it was built here.
We’ll be modifying the DataMapper with FlowRef Lookup example by deleting the FlowRef inside the DataMapper so we can create it from scratch and illustrate how easy it is to use.
Use Case: A company needs to upload contacts in a CSV file to Salesforce. Not only do all the contact fields need to be transformed and mapped from CSV to Salesforce, we need to figure out how to convert the value of “state” in the CSV to “region” in Salesforce.
To meet these objectives, we’re going to use a DataMapper and a FlowRef Lookup table to access another flow with a Groovy script which uses the value of “State” to determine “Region”.
Step 1: We already have two flows: one with a File endpoint, a Logger, a DataMapper and a Salesforce connector, and a second one with a Groovy script element and a Logger. The Groovy script can accept a value for “state”, figure out in which region the state belongs, and return a value for “region”.
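Before moving on, here’s a minimal, hypothetical sketch of what those two flows might look like in the XML view. The connector names, file path, Groovy region logic and Salesforce operation are all illustrative stand-ins, not the bundled example’s actual configuration:

<flow name="ContactsToSalesforceFlow">
    <!-- Picks up the contacts CSV; the path is a placeholder -->
    <file:inbound-endpoint path="/tmp/contacts" doc:name="File"/>
    <logger message="Processing file: #[message.inboundProperties['originalFilename']]" level="INFO" doc:name="Logger"/>
    <!-- The mapping configured in this post, including the FlowRef lookup -->
    <data-mapper:transform config-ref="csv_to_salesforce" doc:name="DataMapper"/>
    <sfdc:create config-ref="Salesforce" type="Contact" doc:name="Salesforce">
        <sfdc:objects ref="#[payload]"/>
    </sfdc:create>
</flow>

<flow name="LookUpSalesRegionFlow">
    <scripting:component doc:name="Groovy">
        <scripting:script engine="Groovy"><![CDATA[
            // The FlowRef lookup passes the "state" input in as the payload
            def state = payload.toString().trim().toLowerCase()
            // Illustrative assignments only; use your own state-to-region rules
            def west = ['ca', 'wa', 'or', 'nv', 'az']
            // The return value becomes the payload, i.e. the "region" output
            return west.contains(state) ? 'West' : 'East'
        ]]></scripting:script>
    </scripting:component>
    <logger message="Region resolved: #[payload]" level="INFO" doc:name="Logger"/>
</flow>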
Step 2: Next, configure a FlowRef Lookup Table in the DataMapper. Delete the existing mappings and follow along to create your own: in the Input pane of the DataMapper console, right-click Lookup Tables > Add FlowRef Lookup Table.
Step 3: Enter a name for the FlowRef Lookup table, then use the drop-down to select the Flow Name which we must reference to look up information; in this case, it’s the LookUpSalesRegionFlow. Use the “+” icon to add an input Name and Type (state, string), and an output Name and Type (region, string). Click OK to save your lookup configurations.
Step 4: From the Input panel of the DataMapper console, drag and drop the “region” field from the FlowRef Lookup table to the “Region__c” field in the Output panel. In the Lookup assignment dialog that appears, use the drop-down in the Expression field for “state”, then click OK to complete the mapping.
The result is a mapping in which every CSV field flows to its Salesforce counterpart, with “Region__c” populated via the FlowRef lookup.
Save, run, enjoy!
2. Using MEL to invoke Java functions
You may already know that you have the option of using one of two expression languages in DataMapper: Mule Expression Language (MEL) or Clover Transformation Language (CTL). What you may not know is that you can invoke Java functions using MEL.
Step 1: When you create a new mapping, DataMapper utilizes MEL by default. If you have previously changed your Default Script Type to CTL, you can change it back to MEL in the Mule Studio Preferences (Mule Studio > Preferences).
Step 2: Create any mapping you want, then click “Script” (upper right corner of the DataMapper console) to view the script of the mapping which looks something like this: “output.name = input.name”.
Step 3: Click to set your cursor just after “input.name”, then add “.toLowerCase()”. This modification invokes a Java function to change the input name to lowercase; see the snippet below.
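With that edit in place, the Script view shows something like this (the “name” field comes from my example mapping, not a fixed convention):

// Map the input field to the output, lower-casing it with a Java String method
output.name = input.name.toLowerCase();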
TIP! Did you know you can also use auto-complete to invoke a Java function? Set your cursor at the end of “input.name” then hit “Ctrl + Space Bar” to display a list of auto-complete options.
3. Streaming large files through DataMapper
Streaming! Yes, you can! Stream extra large files through DataMapper without consuming tons of memory. Let me illustrate with an example.
In this example flow, an HTTP endpoint accepts a message – a large file – and passes it to a DataMapper. After a Logger, the message reaches a Foreach scope that wraps a Database endpoint. DataMapper must create iterable objects from the file so that the Foreach can process the items one by one and push them into the database. To manage the processing of such a large file, you can enable streaming on the DataMapper.
Step 1: To enable streaming, click to open the DataMapper Properties (upper right hand corner of the DataMapper console).
Step 2: Check the box to enable streaming.
Step 3: Save and start streaming!
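If you peek at the generated XML, the Streaming checkbox corresponds, as far as I can tell, to a stream attribute on the DataMapper config element; the config name and .grf path below are placeholders:

<!-- Hypothetical XML view: the Streaming checkbox sets stream="true" -->
<data-mapper:config name="csv_to_maps"
                    transformationGraphPath="csv_to_maps.grf"
                    stream="true" doc:name="DataMapper"/>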
4. Viewing sample mapping values
Ever wished you could see an example of the values you’re mapping? Your wish has come true! If you used an input file example to define your input fields, DataMapper automatically detects the information in the file and uses it to show you sample values for each field.
For example, for my mapping input, I created a CSV file which contained the following information:
company_name, company_address, company_city, company_state, company_zip
Universal Exports, 55 Main Street, Miami, fl, 33126
I added a DataMapper to my flow and used the example CSV file to define the input fields. Because the example CSV contains values for each field, DataMapper displays sample values for each field to make mapping more intuitive.
5. Handling Mapping Metadata
At times, you may need to change some fields and re-create the mapping accordingly. DataMapper has a “magic” tool to make this happen.
Click the “magic wand” icon in the upper left-hand corner of the Input panel to display the Metadata Handling tools.
Reload Metadata
Step 1: Right-click your main input mapping item (in my example, “companies2”), and select Add field. Enter a name for your new field, use the drop-down to define the type, then click OK to save.
Step 2: Click the magic wand, then select Reload Metadata.
Step 3: Watch as DataMapper magically loads a sample value for your new field. Since the example file contains no data for the field, the value is “null”; in my example, the new field is “has_given_contact_permission”.
Recreate Metadata
Step 1: Add an input field to your CSV.
Step 2: In the Input panel, click the magic wand and select Re-Create Metadata. Browse to select your newly modified CSV example file, then click OK. The new field appears in the Input panel.
Recreate Metadata from Input
If you want to include the new field in the output, click the “magic wand” icon in the Output panel, then select Re-Create Metadata From Input to transfer all input fields – including any new ones – to the Output panel.
6. Propagating DataSense data
Automatically import an Anypoint Connector’s data structure into DataMapper? Consider it done! Using DataSense, you can easily map data between connectors without the hassle of manually researching and defining the fields. Each connector pulls the data structure from its respective SaaS, and a DataMapper dropped between them picks up the metadata so you can configure input and output with a few clicks.
To demonstrate, I’ll map a Salesforce connector’s input to another Salesforce connector’s output.
Step 1: Build a flow with two Salesforce Anypoint Connectors.
Step 2: Configure each Salesforce connector, testing the connectivity of each. See Testing Connections for details.
Step 3: Drop a DataMapper between the Salesforce connectors.
Step 4: Double-click to open the DataMapper. DataSense has already populated the input and output configurations, pulled automatically from each connector.
Step 5: Click Finish, and witness all necessary input and output fields appear, ready for drag-and-drop mapping.
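For reference, the finished flow’s XML looks roughly like the hypothetical sketch below; the connector configs, query and object type are placeholders for whatever operations you chose:

<flow name="SalesforceToSalesforceFlow">
    <!-- Source: DataSense reads the Contact metadata from this query -->
    <sfdc:query config-ref="Salesforce_Source"
                query="SELECT FirstName, LastName, Email FROM Contact"
                doc:name="Salesforce Query"/>
    <!-- Mapping whose input and output fields DataSense pre-populated -->
    <data-mapper:transform config-ref="contact_to_contact" doc:name="DataMapper"/>
    <!-- Target: creates the mapped contacts in the second org -->
    <sfdc:create config-ref="Salesforce_Target" type="Contact" doc:name="Salesforce Create">
        <sfdc:objects ref="#[payload]"/>
    </sfdc:create>
</flow>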
7. Mapping flat structures to tree structures
Last but not least, the joy of easily mapping flat structures to tree structures! To illustrate this activity, I’ll use the following XML as input:
<persons-in>
  <person>
    <id>1234</id>
    <phone1>123-456-7890</phone1>
    <phone2>234-567-8901</phone2>
  </person>
  <person>
    <id>5678</id>
    <phone1>345-678-9012</phone1>
    <phone2>456-789-0123</phone2>
  </person>
</persons-in>
The goal is to produce the following output:
<persons-out>
  <person>
    <id>1234</id>
    <phoneEntrys>
      <phoneEntry type="1">
        <number>123-456-7890</number>
      </phoneEntry>
      <phoneEntry type="2">
        <number>234-567-8901</number>
      </phoneEntry>
    </phoneEntrys>
  </person>
  <person>
    <id>5678</id>
    <phoneEntrys>
      <phoneEntry type="1">
        <number>345-678-9012</number>
      </phoneEntry>
      <phoneEntry type="2">
        <number>456-789-0123</number>
      </phoneEntry>
    </phoneEntrys>
  </person>
</persons-out>
To achieve this goal, I created an XML-to-XML mapping, letting DataMapper derive XSD schemas from the sample XML documents.
Step 1: To convert from flat to tree structure, I dragged and dropped the “phone1” input field to the “number” output field.
Step 2: I dragged and dropped the “person:person” input element to the “phoneEntry:phoneEntry” output element. This created a new Element Mapping, so we now have two “person” > “phoneEntry” element mappings.
Step 3: I dragged and dropped the “phone2” input field to the “number” output field. Checking the preview of the mapping, though, the phone entries were not nested under the correct person elements – not exactly what I had hoped to achieve. To fix this, I adjusted the script in DataMapper.
Step 4: Go to the Script view (upper right-hand corner of the DataMapper console) and, in both of the element mappings named “person” > “phoneEntry”, change the script line “output.__parent_id = input.__parent_id;” to “output.__parent_id = input.__id;”, as shown below.
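In context, the fix is a one-line swap inside each of those element mappings:

// Before: phoneEntry keyed to the person's own parent id
// output.__parent_id = input.__parent_id;
// After: key each phoneEntry to its person's id so it nests under that person
output.__parent_id = input.__id;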
Et voilà! Seven handy tricks to make your data mapping life easier!
Published at DZone with permission of Ross Mason, DZone MVB.