
How to Migrate On-Premise Databases to IBM DB2 On Cloud

Database migration can look simple from the outside, but the devil is always in the details, and the route is rarely that simple.

Introduction

Database migration can look simple from the outside: extract the source data, then import or load it into the target database. But the devil is always in the details, and the route is rarely that simple. A database consists of more than just its data; it is made up of many different, often related, objects. In DB2, these fall into two categories: system objects and data objects. Let's see what they are; later in the article, some of the major objects are discussed from a caution perspective for planning and migration.

Most major database engines offer the same set of core database object types. (Please read up on these object types from the respective vendors; the definitions and functions are broadly similar. By analogy, when you move from one car to another, the basics of the car remain the same: four wheels, an engine, a chassis. The differences are in details such as ignition buttons, window controls, and overall structure, but the functional basis of the car is unchanged.)

  • Tables
  • Indexes
  • Sequences
  • Views
  • Synonyms
  • Aliases
  • Triggers
  • User-defined data types (UDTs)
  • User-defined functions (UDFs)
  • Stored procedures
  • Packages

System objects include:

  • Storage groups
  • Tablespaces
  • Buffer pools
  • System Catalog tables and views
  • Transaction log files

These objects in on-premise databases should be given proper care when planning a migration. It is very important to understand what can and cannot be migrated, since migrating some of them may require professional services from a third party or from the cloud vendor.

What Can and Can't Be Migrated?

General SQL user-defined functions (UDFs) can be migrated, but external UDFs may be a problem. External UDFs may be written in C, C++, or Java, and in some cases are compiled into a library that sits at a specified location and must be registered with DB2. Because the OS version on the target may differ, external UDFs need to be rebuilt on the cloud servers; migrating them may require database migration services from the cloud vendor, or they may not be migratable at all. Similarly, SQL stored procedures can be migrated to the target database, but external stored procedures carry the same constraints as external UDFs and are not supported.

Materialized query tables (MQTs) can be migrated, but they should be created after the data has been moved to the target database. Likewise, triggers should be migrated after the data is moved to the target database. The link between a system-period temporal table and its associated history table must be broken before the table's data can be moved (the same holds true for bitemporal tables). A system-period temporal table is a table that maintains historical versions of its rows. Bitemporal modeling is an information modeling technique designed to handle historical data along two different timelines, and a bitemporal table combines the historical tracking of a system-period temporal table with the time-specific data storage capabilities of an application-period temporal table. Bitemporal tables are generally used to keep user-based period information as well as system-based historical information.
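To make the planning concrete, here is a minimal Python sketch that partitions a database object inventory into objects that can be migrated directly, objects that should be created only after the data load, and objects that need a rebuild or vendor services. The inventory format, object names, and classification rules are illustrative assumptions, not an official IBM tool:

```python
# Hypothetical migration-planning helper: classifies DB2 objects by how
# they should be handled when moving to DB2 on Cloud, following the
# constraints described in the text above.

MIGRATE_DIRECTLY = {"table", "index", "sequence", "view", "synonym",
                    "alias", "udt", "sql_udf", "sql_procedure", "package"}
CREATE_AFTER_LOAD = {"mqt", "trigger"}          # create once data is in place
REBUILD_ON_TARGET = {"external_udf", "external_procedure"}  # recompile for target OS

def plan_migration(inventory):
    """Group (name, object_type) pairs into migration buckets."""
    plan = {"direct": [], "after_load": [], "rebuild": [], "review": []}
    for name, obj_type in inventory:
        if obj_type in MIGRATE_DIRECTLY:
            plan["direct"].append(name)
        elif obj_type in CREATE_AFTER_LOAD:
            plan["after_load"].append(name)
        elif obj_type in REBUILD_ON_TARGET:
            plan["rebuild"].append(name)
        else:
            # e.g. temporal tables: break the history-table link first
            plan["review"].append(name)
    return plan

inventory = [
    ("EMPLOYEE", "table"),
    ("EMP_SUMMARY", "mqt"),
    ("AUDIT_TRG", "trigger"),
    ("GEO_DIST", "external_udf"),
    ("EMP_HISTORY", "system_period_temporal"),
]
print(plan_migration(inventory))
```

A report like this gives the DBA a checklist of which objects can simply ride along with the data and which need dedicated attention.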

Now we have some idea of what can and cannot be migrated. Database administrators should schedule downtime while performing this work, proper planning and caution should go into each of the activities discussed, and adequate time should be allotted to understand the nature of the migration. Let me also point out a major constraint when migrating to DBaaS or to DB2 on cloud instances (from a system-object point of view): only one buffer pool is supported, so user spaces must be merged into the main user space with a single buffer pool before migrating to the target state. Multiple user spaces with multiple buffer pools are not supported for DBaaS or for DB2 on an instance (VM) in the cloud. So, remember that!

Now we can start the migration. IBM provides certain tools to perform migration tasks; the important ones are the db2look utility and IBM Optim High Performance Unload. The db2look utility generates the Data Definition Language (DDL) statements needed to recreate the objects on the target DB2. IBM Optim High Performance Unload can copy the current database to a temporary folder/bucket, which can be Amazon S3 or SoftLayer Swift, and the same tool can then be leveraged to move the data into the target through the import/load utilities.
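As an illustration, here is a small Python sketch that assembles a db2look command line. The database name, schema name, and output file are hypothetical; the flags used (-d for the database, -e to extract object DDL, -z to limit to a schema, -o for the output file, -l for tablespace and buffer pool DDL) are standard db2look options:

```python
import subprocess

def build_db2look_cmd(database, schema, outfile, include_storage=True):
    """Assemble a db2look invocation that extracts DDL for one schema.

    -d: source database name      -e: extract object DDL
    -z: limit to one schema       -o: write DDL to a file
    -l: include tablespace/buffer pool DDL (optional)
    """
    cmd = ["db2look", "-d", database, "-e", "-z", schema, "-o", outfile]
    if include_storage:
        cmd.append("-l")
    return cmd

cmd = build_db2look_cmd("SALESDB", "APPUSER", "salesdb_ddl.sql")
print(" ".join(cmd))
# On a machine with DB2 installed, the command could then be run with:
# subprocess.run(cmd, check=True)
```

The generated DDL file can be reviewed (and, if needed, adjusted for the one-buffer-pool constraint mentioned above) before being replayed against the target database.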

The various ways to move data to DB2 on cloud (DB2 Hosted, DB2 on Cloud, and DB2 Warehouse on Cloud) are given below:

  • Load data from a local file stored on the desktop (using the Bluemix interface)
  • Load data from a SoftLayer Swift object store (using the Bluemix interface)
  • Load data from Amazon S3 (using the Bluemix interface)
  • Use the DB2 data movement utilities remotely
  • Use the IBM data transfer service (25 TB to 100 TB)
  • Use the IBM Mass Data Migration service (100 TB or more)
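The size thresholds above can be captured in a small helper. This is a sketch: the thresholds come from the list, while the function name and the boundary handling are assumptions:

```python
def pick_transfer_method(size_tb):
    """Suggest a data-movement option for DB2 on Cloud based on data size (TB)."""
    if size_tb >= 100:
        return "IBM Mass Data Migration service"
    if size_tb >= 25:
        return "IBM data transfer service"
    # Smaller data sets can go over the network: Bluemix interface loads
    # (local file, SoftLayer Swift, Amazon S3) or the remote DB2 utilities.
    return "network load (Bluemix interface or DB2 data movement utilities)"

for size in (0.5, 60, 250):
    print(f"{size} TB -> {pick_transfer_method(size)}")
```

The point of the thresholds is practical: beyond a few tens of terabytes, shipping physical media is usually faster and cheaper than transferring over the network.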

Now comes the security aspect of migration: encryption using AES or 3DES is recommended, and SSL and TLS are the preferred methods to secure data in transit.

Let’s also shed some light on how an SSL connection to DB2 is established:

  • The client requests an SSL connection and lists its supported cipher suites (AES, 3DES).
  • The server responds with a selected cipher suite and a copy of its digital certificate, which includes a public key.
  • The client checks the validity of the certificate; if it is valid, a session key and a message authentication code (MAC) are encrypted with the public key and sent back to the server.
  • The server decrypts the session key and MAC, then sends an acknowledgment to start an encrypted session with the client.
  • The server and client securely exchange data using the selected session key and MAC.
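The client side of this handshake can be sketched with Python's standard ssl module. The context setup below is generic TLS, not DB2-specific (a real DB2 client configures SSL through its driver settings); it shows the pieces the handshake relies on: the cipher suites the client offers and the certificate verification it performs:

```python
import ssl

# Build a client-side TLS context that verifies the server's certificate,
# mirroring the handshake above: the context advertises its cipher suites
# and will validate the digital certificate the server sends back.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols

# Inspect which cipher suites this client would offer. AES-based suites
# dominate modern defaults; 3DES has been retired from them.
offered = [c["name"] for c in context.get_ciphers()]
print(f"{len(offered)} cipher suites offered, e.g. {offered[0]}")
print("certificate verification on:", context.verify_mode == ssl.CERT_REQUIRED)
```

Wrapping a socket with this context (`context.wrap_socket(...)`) would carry out the certificate check and session-key exchange described in the steps above.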

These are some of the important points to be considered while migrating on-premise databases to IBM DB2 on Cloud.

Feel free to share your views in the comments. 



Published at DZone with permission of
