
Apache Oozie without the XML tax


· Big Data Zone ·

As you sink deeper into the big data world and your collection of MapReduce jobs grows, you realize that you need a workflow scheduler to chain all those jobs together. At the time of this writing there are two leading job schedulers in the Hadoop world: Apache Oozie, which comes bundled with pretty much every Hadoop distribution (except Pivotal HD), and LinkedIn's Azkaban. This post is not a comparison of Oozie and Azkaban, so I will cut to the chase and tell you up front that, in my opinion, Azkaban is better than Oozie in pretty much every respect. If you are starting a new cluster, I recommend seriously taking a look at Azkaban. But if you are stuck with a distribution that bundles Oozie, or your company only uses Oracle (Azkaban stores its workflows in MySQL) and will not allow MySQL in production (even though MySQL is owned by Oracle), then your best bet is to learn to love Apache Oozie.

Apache Oozie, like a number of other first-generation Hadoop components, uses an ungodly amount of XML for configuration. Cloudera's Hue project eases some of the pain of creating Oozie workflows, but if you cannot use a UI to build workflows (for version control and various other reasons), you need a different way to avoid XML hell. In this post we will create a simple Oozie workflow without writing a single line of XML and compare it to the traditional approach of creating flows in Oozie.

The example below is taken from 'Introduction to Oozie' by Boris Lublinsky and Michael Segel. Let's say you have two MapReduce jobs: one that does the initial ingestion of the data and a second that merges data of a given type. The actual ingestion needs to run the initial ingest and then merge data for two of the types, Lidar and Multicam. To automate this process, we create a simple Oozie workflow (see the article by Lublinsky and Segel for the workflow in all its XML glory). Let's take a stab at creating the same workflow without any XML, using Gradle and the gradle-oozie-plugin. Here is how the flow looks in the Groovy DSL:

oozie {

    // common properties shared by every java action
    def common_props = [
            jobTracker: '${jobTracker}',
            namenode: '${nameNode}',
            configuration: ["mapred.job.queue.name": "default"]
    ]

    def ingestor = [
            name: "ingestor",
            type: "java",
            mainClass: "com.navteq.assetmgmt.MapReduce.ips.IPSLoader",
            ok: "merging",
            error: "fail",
            args: ['${driveID}']
    ]

    def merging = [
            name: "merging",
            type: "fork",
            paths: [
                    "mergeLidar",
                    "mergeSignage"
            ]
    ]


    def mergeLidar = [
            name: "mergeLidar",
            type: "java",
            mainClass: "com.navteq.assetmgmt.hdfs.merge.MergerLoader",
            ok: "completed",
            error: "fail",
            args: ['-drive',
                    '${driveID}',
                    '-type',
                    'Lidar',
                    '-chunk',
                    '${lidarChunk}'
            ],
            javaOpts: "-Xmx2048m"
    ]


    def mergeSignage = [
            name: "mergeSignage",
            type: "java",
            mainClass: "com.navteq.assetmgmt.hdfs.merge.MergerLoader",
            ok: "completed",
            error: "fail",
            args: ['-drive',
                    '${driveID}',
                    '-type',
                    'Signage',
                    '-chunk',
                    '${signageChunk}'
            ],
            javaOpts: "-Xmx2048m"
    ]

    def completed = [
            name: "completed",
            type: "join",
            to: "end"
    ]

    def fail = [
            name: "fail",
            type: "kill",
            message: "Java failed, error message[\${wf:errorMessage(wf:lastErrorNode())}]"
    ]

    actions = [
            ingestor,
            merging,
            mergeLidar,
            mergeSignage,
            completed,
            fail]

    common = common_props
    start = "ingestor"
    end = "end"
    name = 'oozie_flow'
    namespace = 'uri:oozie:workflow:0.1'
    outputDir = file("$projectDir/workflow2")
}
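To use the DSL, the plugin has to be wired into the build. The sketch below shows the general shape; the buildscript coordinates here are assumptions, so check the gradle-oozie-plugin README (or the complete build.gradle linked at the end of this post) for the actual group, artifact, and version:

```groovy
// build.gradle -- sketch only; the classpath coordinates below are
// assumed, not the plugin's real published coordinates.
buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath 'com.github.gradle-oozie:gradle-oozie-plugin:1.0' // assumed coordinates
    }
}

apply plugin: 'oozie'

// the oozie { ... } block from above goes here
```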

Once we have the flow defined, we can generate the XML workflow by running 'gradle oozieWorkflow'. This approach is a lot cleaner: Groovy not only removes a lot of boilerplate XML, it also lets us define all the common properties in one place instead of repeating them in every node. Here is the complete build.gradle file and generated workflow. Hopefully this approach will reduce some of the pain of working with Apache Oozie.
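For comparison, here is roughly what the XML for just the ingestor action and the fork node looks like. This is a sketch based on the standard Oozie workflow schema, not the plugin's exact output, but it illustrates the boilerplate that every java action repeats:

```xml
<!-- java action: job tracker, name node, and queue config are
     repeated in every action node in hand-written workflows -->
<action name="ingestor">
    <java>
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <configuration>
            <property>
                <name>mapred.job.queue.name</name>
                <value>default</value>
            </property>
        </configuration>
        <main-class>com.navteq.assetmgmt.MapReduce.ips.IPSLoader</main-class>
        <arg>${driveID}</arg>
    </java>
    <ok to="merging"/>
    <error to="fail"/>
</action>

<!-- fork: run both merge jobs in parallel -->
<fork name="merging">
    <path start="mergeLidar"/>
    <path start="mergeSignage"/>
</fork>
```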

