Connect to Cloudant Data in AWS Glue Jobs Using JDBC

In this post, we will go over how to connect to Cloudant from AWS Glue jobs using the CData JDBC Driver hosted in Amazon S3.

By Jerod Johnson · Oct. 14, 2018 · Tutorial

AWS Glue is an ETL service from Amazon that allows you to easily prepare and load your data for storage and analytics. Using the PySpark module along with AWS Glue, you can create jobs that work with data over JDBC connectivity, loading the data directly into AWS data stores. In this article, we walk through uploading the CData JDBC Driver for Cloudant into an Amazon S3 bucket and creating and running an AWS Glue job to extract Cloudant data and store it in S3 as a CSV file.

Upload the CData JDBC Driver for Cloudant to an Amazon S3 Bucket

In order to work with the CData JDBC Driver for Cloudant in AWS Glue, you will need to store it (and any relevant license files) in a bucket in Amazon S3.

  1. Open the Amazon S3 Console.
  2. Select an existing bucket (or create a new one).
  3. Click Upload.
  4. Select the JAR file (cdata.jdbc.cloudant.jar) found in the lib directory in the installation location for the driver.
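
If you prefer to script the upload instead of using the console, the snippet below is a minimal sketch using boto3; the bucket name, key prefix, and local JAR path are placeholders you will need to adjust.

# Minimal sketch: upload the driver JAR to S3 with boto3.
# The bucket, key, and local path are hypothetical placeholders.
import boto3

s3 = boto3.client("s3")
s3.upload_file(
    Filename="/opt/cdata/lib/cdata.jdbc.cloudant.jar",  # assumed install location
    Bucket="my-glue-assets",                            # placeholder bucket
    Key="jars/cdata.jdbc.cloudant.jar",
)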

Configure the AWS Glue Job

  1. Navigate to ETL -> Jobs from the AWS Glue Console.
  2. Click Add Job to create a new Glue job.
  3. Fill in the Job properties:
    • Name: Fill in a name for the job, for example, CloudantGlueJob.
    • IAM Role: Select (or create) an IAM role that has the AWSGlueServiceRole and AmazonS3FullAccess permissions policies (the latter because both the JDBC driver and the destination are in Amazon S3).
    • This job runs: Select "A new script to be authored by you."
      Populate the script properties:
      • Script file name: A name for the script file, for example, GlueCloudantJDBC
      • S3 path where the script is stored: Fill in or browse to an S3 bucket.
      • Temporary directory: Fill in or browse to an S3 bucket.
    • Expand Script Libraries and job parameters (optional). For Dependent jars path, fill in or browse to the S3 bucket where you uploaded the JAR file.
  4. Click Next. Here you will have the option to add connections to other AWS endpoints, so if your destination is Redshift, MySQL, etc., you can create and use connections to those data sources.
  5. Click Next to review your job configuration.
  6. Click Finish to create the job.
  7. In the editor that opens, write a Python script for the job. You can use the sample script (see below) as an example.
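
If you manage your AWS resources from code, the job defined above can also be created with boto3. The sketch below approximates the console steps; the job name, IAM role, script location, temporary directory, and JAR path are all placeholders.

# Sketch: create the Glue job programmatically (placeholder names and paths throughout).
import boto3

glue = boto3.client("glue")
glue.create_job(
    Name="CloudantGlueJob",
    Role="MyGlueServiceRole",  # placeholder role with AWSGlueServiceRole and S3 access
    Command={
        "Name": "glueetl",
        "ScriptLocation": "s3://my-glue-assets/scripts/GlueCloudantJDBC.py",
    },
    DefaultArguments={
        "--TempDir": "s3://my-glue-assets/temp/",
        "--extra-jars": "s3://my-glue-assets/jars/cdata.jdbc.cloudant.jar",
    },
)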

Sample Glue Script

To connect to Cloudant using the CData JDBC driver, you will need to create a JDBC URL, populating the necessary connection properties. Additionally (unless you are using a Beta driver), you will need to set the RTK property in the JDBC URL. You can view the licensing file included in the installation for information on how to set this property.

Set the following connection properties to connect to Cloudant:

  • User: Set this to your username.
  • Password: Set this to your password.
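
The JDBC URL is simply these properties (plus the RTK) joined into a semicolon-delimited string. The sketch below shows one way to compose it in Python before passing it to the reader; the credential and RTK values are placeholders.

# Sketch: compose the Cloudant JDBC URL from connection properties (placeholder values).
user = "abc123"
password = "abcdef"
rtk = "5246..."  # your licensing key

jdbc_url = "jdbc:cloudant:RTK=" + rtk + ";User=" + user + ";Password=" + password + ";"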

Below is a sample script that uses the CData JDBC driver with the PySpark and AWS Glue modules to extract Cloudant data and write it to an S3 bucket in CSV format. Make any changes needed to suit your use case and save the job.

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ['JOB_NAME'])

sparkContext = SparkContext()
glueContext = GlueContext(sparkContext)
sparkSession = glueContext.spark_session

##Use the CData JDBC driver to read Cloudant data from the Movies table into a DataFrame
##Note the populated JDBC URL and driver class name
source_df = sparkSession.read.format("jdbc") \
    .option("url", "jdbc:cloudant:RTK=5246...;User=abc123;Password=abcdef;") \
    .option("dbtable", "Movies") \
    .option("driver", "cdata.jdbc.cloudant.CloudantDriver") \
    .load()

glueJob = Job(glueContext)
glueJob.init(args['JOB_NAME'], args)

##Convert DataFrames to AWS Glue's DynamicFrames Object
dynamic_dframe = DynamicFrame.fromDF(source_df, glueContext, "dynamic_df")

##Write the DynamicFrame as a file in CSV format to a folder in an S3 bucket.
##It is possible to write to any Amazon data store (SQL Server, Redshift, etc.) by using a previously defined connection.
retDatasink4 = glueContext.write_dynamic_frame.from_options(
    frame = dynamic_dframe,
    connection_type = "s3",
    connection_options = {"path": "s3://mybucket/outfiles"},
    format = "csv",
    transformation_ctx = "datasink4")

glueJob.commit()
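
As the comment in the script notes, the DynamicFrame can also be written to another AWS data store through a previously defined Glue connection instead of S3. The following is a sketch only; the connection name, database, table, and temporary directory are hypothetical.

##Sketch: write the same DynamicFrame to Redshift via a predefined catalog connection (placeholder names).
redshift_sink = glueContext.write_dynamic_frame.from_jdbc_conf(
    frame = dynamic_dframe,
    catalog_connection = "my-redshift-connection",
    connection_options = {"dbtable": "movies", "database": "analytics"},
    redshift_tmp_dir = "s3://mybucket/temp/",
    transformation_ctx = "redshift_sink")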

Run the Glue Job

With the script written, we are ready to run the Glue job. Click Run Job and wait for the extract/load to complete. You can view the status of the job from the Jobs page in the AWS Glue Console. Once the Job has succeeded, you will have a CSV file in your S3 bucket with data from the Cloudant Movies table.
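
You can also start and monitor the job programmatically instead of using the console. The sketch below assumes the job name created earlier; everything else is standard boto3.

# Sketch: start the Glue job and poll its status (the job name must match yours).
import time
import boto3

glue = boto3.client("glue")
run_id = glue.start_job_run(JobName="CloudantGlueJob")["JobRunId"]

while True:
    state = glue.get_job_run(JobName="CloudantGlueJob", RunId=run_id)["JobRun"]["JobRunState"]
    print(state)
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(30)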

Using the CData JDBC Driver for Cloudant in AWS Glue, you can easily create ETL jobs for Cloudant data, writing the data to an S3 bucket or loading it into any other AWS data store.

Published at DZone with permission of Jerod Johnson, DZone MVB. See the original article here.
