Loading Data From Multiple S3 Buckets Into H2O

In this quick tutorial, we learn how to load big data sets into the H2O open-source machine learning platform from several different Amazon S3 buckets.

Pavel Pscheidl · Mar. 21, 2019 · Tutorial


A common use case when working with the H2O open-source machine learning platform is loading data from Amazon S3 buckets. However, not all buckets are publicly accessible. For H2O users loading data from a single Amazon S3 bucket, the traditional ways of providing secret credentials and making the desired bucket accessible, described in H2O’s documentation, are sufficient. In general, H2O is able to pick up S3 credentials with a chain of providers, searching the following locations (an example of the environment-variable option appears after the list):

  • Credentials stored in an AwsCredentials.properties file.
  • From the EC2 instance itself, if H2O is running on it and the bucket is accessible by the same user.
  • Environment variables.
  • System properties aws.accessKeyId and aws.secretKey.
  • Profile credentials provider.

For details, see the documentation.
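
For the single-bucket case, the environment-variable option can be exercised directly from Python. Below is a minimal sketch, not the article's own method: the key values are placeholders, and it assumes h2o.init() launches a local H2O JVM that inherits the Python process's environment.

# A minimal sketch of the environment-variable option for a single bucket.
# Assumes h2o.init() starts a local cluster that inherits this process's
# environment; the key values are placeholders.
import os
import h2o

os.environ["AWS_ACCESS_KEY_ID"] = "ACCESSKEYID"          # placeholder
os.environ["AWS_SECRET_ACCESS_KEY"] = "SECRETACCESSKEY"  # placeholder

h2o.init()
iris = h2o.import_file("s3://test.somewhere.com/iris.csv")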

However, there can be a problem when connecting to multiple S3 buckets during one session: all of the above-mentioned options load the S3 credentials at startup, and the user has limited means of forcing a credential refresh.

The Solution

In order to make accessing multiple buckets with distinct credentials possible, one more credential provider with top priority has been introduced. For our users, this means calling a simple function in the H2O API before accessing a bucket at time (t) with credentials different from those used for a bucket accessed at time (t-1). Because the credentials are registered with the H2O cluster itself, they apply to all subsequent S3 imports until they are changed again.

Python Example

# The Iris dataset is about to be downloaded from an imaginary S3 bucket.
# No credentials are set anywhere yet, so set them right before the import.
import h2o
from h2o.persist import set_s3_credentials

h2o.init()

set_s3_credentials("ACCESSKEYID", "SECRETACCESSKEY")
iris = h2o.import_file("s3://test.somewhere.com/iris.csv")
airlines = h2o.import_file("s3://test.somewhere.com/airlines.csv")

# A new bucket somewhere else is being accessed; set the correct credentials first.
set_s3_credentials("DIFFERENT/ACCESSKEYID", "DIFFERENT/SECRETACCESSKEY")
iris = h2o.import_file("s3://differenttest.somewhereelse.com/different-iris.csv")

R Example

# The Iris dataset is about to be downloaded from an imaginary S3 bucket.
# No credentials are set anywhere yet, so set them right before the import.
library(h2o)
h2o.init()

h2o.set_s3_credentials("ACCESSKEYID", "SECRETACCESSKEY")
iris <- h2o.importFile(path = "s3://test.somewhere.com/iris.csv")
airlines <- h2o.importFile(path = "s3://test.somewhere.com/airlines.csv")

# A new bucket somewhere else is being accessed; set the correct credentials first.
h2o.set_s3_credentials("DIFFERENT/ACCESSKEYID", "DIFFERENT/SECRETACCESSKEY")
iris <- h2o.importFile(path = "s3://differenttest.somewhereelse.com/different-iris.csv")
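
Putting the steps together, here is a hedged Python sketch (bucket names and keys are again placeholders) that loads frames from several buckets in a single session by refreshing the credentials before each group of imports:

# Load frames from several buckets in one session; all names are placeholders.
import h2o
from h2o.persist import set_s3_credentials

h2o.init()

datasets = [
    # (access key id, secret access key, S3 paths)
    ("ACCESSKEYID", "SECRETACCESSKEY",
     ["s3://test.somewhere.com/iris.csv",
      "s3://test.somewhere.com/airlines.csv"]),
    ("DIFFERENT/ACCESSKEYID", "DIFFERENT/SECRETACCESSKEY",
     ["s3://differenttest.somewhereelse.com/different-iris.csv"]),
]

frames = []
for access_key, secret_key, paths in datasets:
    # Refresh the cluster-side credentials before touching the next bucket.
    set_s3_credentials(access_key, secret_key)
    frames.extend(h2o.import_file(p) for p in paths)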

Published at DZone with permission of Pavel Pscheidl. See the original article here.

Opinions expressed by DZone contributors are their own.
