
Microsoft BCP Performance on Sqoop EXPORT to SQL Server From Hadoop

In this tutorial, you'll learn how to export data out of Hadoop to boost throughput using the DataDirect SQL Server JDBC driver and Apache Sqoop.


We've gotten everyone connected to SQL Server using Progress DataDirect's exclusive support for both NTLM and Kerberos authentication from Linux with Sqoop. Now, we plan to blow your minds with high-flying bulk insert performance into SQL Server using Sqoop's Generic JDBC Connector. Linux clients will get similar throughput to the Microsoft BCP tool.

So far, Cloudera and Hortonworks have been pointing shops to the high-performance DataDirect SQL Server JDBC driver to help load data volumes anywhere from 10 GB to 1 TB into SQL Server data marts and warehouses. It's common for the DataDirect SQL Server JDBC driver to speed up load times by 15-20x, and Sqoop sees a similar improvement since it leverages JDBC batches that the driver transparently converts into SQL Server's native bulk load protocol. Moving data out of Hadoop into external JDBC sources is an exciting project that represents the democratization of big data for downstream application consumers. You're definitely doing something right if you're ready to read on!

Get Started With Fast Performance for Sqoop EXPORT to SQL Server

  1. Download the DataDirect Connect for JDBC drivers and follow the quick-start guides supplied with the download.
  2. Copy the sqlserver.jar file to the $SQOOP_HOME/lib directory on your client machine (this will be /usr/lib/sqoop/lib if you installed from an RPM or Debian package). The JDBC driver needs to be installed only on the machine where Sqoop is executed, not on each node in your Hadoop cluster.
  3. Verify the database's recovery mode per the MSDN article "Considerations for Switching From the Full or Bulk-Logged Recovery Model." To verify the recovery mode, the database user can run the following query (it is also included in the smoke test shown after this list):
    SELECT name, recovery_model_desc
    FROM sys.databases
    WHERE name = 'database_name';
    Note the recovery_model_desc returned by this query; expect it to be 'BULK_LOGGED'.
  4. From the command line, run the Sqoop export command with properties similar to those below, or specify the equivalent in the Hue web UI for Sqoop jobs.
    sqoop export \
      --connect 'jdbc:datadirect:sqlserver://nc-sqlserver:1433;database=test;user=test01;password=test01;EnableBulkLoad=true;BulkLoadBatchSize=1024;BulkLoadOptions=0' \
      --driver com.ddtek.jdbc.sqlserver.SQLServerDriver \
      --table 'blah_1024MB' \
      --export-dir /user/hdfs/blah_1024MB/ \
      --input-lines-terminated-by '\n' \
      --input-fields-terminated-by ',' \
      --batch -m 10
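Before digging into the notes, a quick connectivity check can save time if the export fails. The following is a minimal smoke test, not part of the original setup: it assumes the same host, database, and credentials as the export command above and uses Sqoop's eval tool, which runs an ad hoc query through the same JDBC driver. A successful result confirms that the driver jar is on Sqoop's classpath and the connection details work, and it reports the recovery model checked in step 3.

    # Smoke test: run an ad hoc query through the DataDirect driver via sqoop eval.
    sqoop eval \
      --connect 'jdbc:datadirect:sqlserver://nc-sqlserver:1433;database=test;user=test01;password=test01' \
      --driver com.ddtek.jdbc.sqlserver.SQLServerDriver \
      --query "SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'test'"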

Notes

  • --batch enables JDBC batch mode for executing the underlying INSERT statements.
  • --driver must be specified when using a Generic JDBC connector.
  • --connect is the JDBC connection URL. EnableBulkLoad=true tells the DataDirect SQL Server driver to use the native bulk load protocol when inserting rows. BulkLoadBatchSize is the number of rows the driver attempts to bulk load in a single round trip to the server; if this value is less than sqoop.export.records.per.statement, each call to executeBatch will take more than one round trip to insert the batch of rows (see the sketch after these notes).
  • --table: The table to be populated in the target relational database as data is transferred from HDFS.
  • --export-dir identifies the HDFS directory that contains the Hadoop table to be exported.
  • --input-lines-terminated-by identifies the character which separates rows in the HDFS files.
  • --input-fields-terminated-by identifies the character which separates columns in the HDFS files.
  • -D sqoop.export.records.per.statement is not recommended here, nor is it equivalent to the JDBC batch size. Rather, it specifies the number of rows per SQL statement for data sources that support multi-row inserts, such as Postgres:
INSERT INTO films (code, title, did, date_prod, kind) VALUES
  ('B6717', 'Tampopo', 110, '1985-02-10', 'Comedy'),
  ('HG120', 'The Dinner Game', 140, DEFAULT, 'Comedy');
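To make the round-trip arithmetic from the --connect note concrete, here is a sketch with assumed (not benchmarked) values: if each JDBC batch carries about 10,000 rows and BulkLoadBatchSize is 1024, the driver needs roughly ten bulk-load round trips per executeBatch call; raising BulkLoadBatchSize to 10,000 or more reduces that to a single round trip per batch.

    # Illustrative variant of the export command above; the batch size shown is
    # an assumed value for the round-trip arithmetic, not a recommended setting.
    sqoop export \
      --connect 'jdbc:datadirect:sqlserver://nc-sqlserver:1433;database=test;user=test01;password=test01;EnableBulkLoad=true;BulkLoadBatchSize=10000;BulkLoadOptions=0' \
      --driver com.ddtek.jdbc.sqlserver.SQLServerDriver \
      --table 'blah_1024MB' \
      --export-dir /user/hdfs/blah_1024MB/ \
      --input-lines-terminated-by '\n' \
      --input-fields-terminated-by ',' \
      --batch -m 10

The right value depends on client memory and network latency, so benchmark against your own data rather than treating any single number as definitive.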

View the Sqoop user's guide for a complete reference.

Special thanks to Mike Spinak, Principal Software Engineer, and Danh Huynh, Systems Administrator, for their help with setup and testing in the six-node Cloudera CDH5.2.0-1.cdh5.2.0.p0.36 cluster to export data into SQL Server 2008 R2.

Show Me the Numbers

Results are still coming in from several shops, and the best to date is a load of 40 GB into SQL Server within ten minutes. On the system above, we loaded 37 GB in 18.5 minutes (roughly 2 GB per minute). There are several properties across Hadoop, Sqoop, the JDBC driver, and SQL Server that you can tune to improve performance even further.
