
Using Apache Spark to Query a Remote Authenticated MongoDB Server

Apache Spark is one of the most popular open-source tools for big data. Learn how to use it to ingest data from a remote, authenticated MongoDB server.

By Pradeeban Kathiravelu · Mar. 20, 19 · Tutorial

1. Download and Extract Spark

$ wget http://apache.spinellicreations.com/spark/spark-2.4.0/spark-2.4.0-bin-hadoop2.7.tgz
$ tar -xf spark-2.4.0-bin-hadoop2.7.tgz
$ cd spark-2.4.0-bin-hadoop2.7

Create a spark-defaults.conf file by copying spark-defaults.conf.template in the conf/ directory.

Then add the following line to the new file:

spark.debug.maxToStringFields=1000
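
For reference, both steps can be done from the Spark directory with a couple of shell commands (a minimal sketch; the echo append assumes the freshly copied spark-defaults.conf needs no other entries):

$ # copy the template, then append the setting to the new file
$ cp conf/spark-defaults.conf.template conf/spark-defaults.conf
$ echo "spark.debug.maxToStringFields=1000" >> conf/spark-defaults.conf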

2. Connect to MongoDB on a Remote Server

We use the MongoDB Spark Connector.

First, make sure the MongoDB instance on the remote server has bindIp set to the server's own local IP address (not just localhost), so that it accepts remote connections. In the command below, root and password stand for the credentials of your authenticated MongoDB database, and 192.168.1.32 is the remote server's private IP (i.e., the server where MongoDB is running). We read the oplog.rs collection in the local database and write the output to the sparkoutput database; change these values to match your setup.
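
The relevant section of the MongoDB configuration file might look like the sketch below (the path /etc/mongod.conf, the second bound address, and the security block are assumptions about your deployment):

# /etc/mongod.conf (assumed path) -- bind to localhost and the server's private IP
net:
  port: 27017
  bindIp: 127.0.0.1,192.168.1.32

# authentication is assumed to be enabled, matching the credentials used below
security:
  authorization: enabled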

$ ./bin/pyspark --conf "spark.mongodb.input.uri=mongodb://root:password@192.168.1.32:27017/local.oplog.rs?readPreference=primaryPreferred" --conf "spark.mongodb.output.uri=mongodb://root:password@192.168.1.32:27017/sparkoutput" --packages org.mongodb.spark:mongo-spark-connector_2.11:2.4.0
Python 2.7.5 (default, Oct 30 2018, 23:45:53)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Ivy Default Cache set to: /home/pkathi2/.ivy2/cache
The jars for the packages stored in: /home/pkathi2/.ivy2/jars
:: loading settings :: url = jar:file:/home/pkathi2/spark-2.4.0-bin-hadoop2.7/jars/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
org.mongodb.spark#mongo-spark-connector_2.11 added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent-33a37e02-1a24-498d-9217-e7025eeebd10;1.0
        confs: [default]
        found org.mongodb.spark#mongo-spark-connector_2.11;2.4.0 in central
        found org.mongodb#mongo-java-driver;3.9.0 in central
:: resolution report :: resolve 256ms :: artifacts dl 5ms
        :: modules in use:
        org.mongodb#mongo-java-driver;3.9.0 from central in [default]
        org.mongodb.spark#mongo-spark-connector_2.11;2.4.0 from central in [default]
        ---------------------------------------------------------------------
        |                  |            modules            ||   artifacts   |
        |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
        ---------------------------------------------------------------------
        |      default     |   2   |   0   |   0   |   0   ||   2   |   0   |
        ---------------------------------------------------------------------
:: retrieving :: org.apache.spark#spark-submit-parent-33a37e02-1a24-498d-9217-e7025eeebd10
        confs: [default]
        0 artifacts copied, 2 already retrieved (0kB/6ms)
19/03/06 08:24:16 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.4.0
      /_/

Using Python version 2.7.5 (default, Oct 30 2018 23:45:53)
SparkSession available as 'spark'.

>>> from pyspark.sql import SparkSession

>>> my_spark = SparkSession \
... .builder \
... .appName("myApp") \
... .config("spark.mongodb.input.uri", "mongodb://root:password@192.168.1.32:27017/local.oplog.rs?authSource=admin") \
... .config("spark.mongodb.output.uri", "mongodb://root:password@192.168.1.32:27017/sparkoutput?authSource=admin") \
... .getOrCreate()

Make sure you specify the correct authentication source (i.e., the database against which you authenticate on the MongoDB server; admin in this example).
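
If you prefer not to bake the credentials into the session configuration, the connector also accepts a uri option on each read. The sketch below is equivalent to the input URI configured above; the credentials, host, and authSource=admin are assumptions about your deployment:

>>> # read the same collection, overriding the URI (and authSource) for this read only
>>> df = my_spark.read.format("com.mongodb.spark.sql.DefaultSource") \
...     .option("uri", "mongodb://root:password@192.168.1.32:27017/local.oplog.rs?authSource=admin") \
...     .load()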

3. Perform Queries on the Mongo Collection

Now you can perform queries on your remote Mongo collection through the Spark instance. For example, the query below loads the collection into a DataFrame and prints its schema.

>>> df = spark.read.format("com.mongodb.spark.sql.DefaultSource").load()
>>> df.printSchema()


root
 |-- h: long (nullable = true)
 |-- ns: string (nullable = true)
 |-- o: struct (nullable = true)
 |    |-- $set: struct (nullable = true)
 |    |    |-- lastUse: timestamp (nullable = true)
 |    |-- $v: integer (nullable = true)
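
Beyond inspecting the schema, the same DataFrame can be filtered and written back through the connector. The following is a minimal sketch: the namespace value and the target collection name ops are assumptions, while the sparkoutput database comes from the output URI configured earlier.

>>> # keep only oplog entries for one namespace (the value is a placeholder)
>>> entries = df.filter(df.ns == "mydb.mycollection")
>>> # append the result to sparkoutput.ops on the remote server
>>> entries.write.format("com.mongodb.spark.sql.DefaultSource") \
...     .mode("append") \
...     .option("database", "sparkoutput") \
...     .option("collection", "ops") \
...     .save()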

Published at DZone with permission of Pradeeban Kathiravelu, DZone MVB. See the original article here.
