
QA Approach for Data Migration

Data migration can be a pain. Read on to learn about the common mistakes and how to avoid them in your next migration project.

By Aravind TP · Mar. 27, 19 · Analysis

Many systems are being migrated to newer technologies these days. Cloud enablement is one of the major drivers; others include cost reduction, productivity improvement, and flexibility in managing data. When a system is migrated, the data it holds cannot be ignored and must be migrated as well. The data migration process carries significant risk because of the high volume and criticality of the data involved. That data consists of both business data and customer data: business data carries the business intelligence, mainly the rules and parameters used for the successful processing of the application, whereas customer data includes information such as buyer demographics and consumer ratings. Several steps are involved in successfully migrating data from one system to another.

First and foremost is planning and determining the scope of the migration. Once the scope is identified, we can list out the components with the help of SMEs and people working on the system. Plan the allocation of resources, including the tools, and the phases of the data migration. This includes the final step of actually migrating production data, which may affect the availability or downtime of the system. Plan in such a way that business continuity is interrupted as little as possible.


A thorough analysis by the development team, testing team, and business analysts is required in order to understand the system. Create artifacts and have them reviewed by the analysts to ensure the teams have a shared, accurate understanding of the existing system. Back up the data that is to be migrated; this backup can be used in the event of a major data migration failure.

There are two major things to be considered during data migration: data volume and data value.

  • Data volume: Both the initial data and the delta (the rate of change in the data) are major criteria for data migration. The initial data includes historical data as well, which needs to be loaded in a single step. We also need to consider changes in the architecture/design, or parallel processing, for loading huge volumes of data within a limited time. This reduces the downtime of the system, which is critical in domains like banking and FinTech.

  • Data value: Data value is all about the different types of data and their formats. The data types are character, numeric, float, double, etc.; the formats are values such as dates, addresses, phone numbers, etc. Analysis is required to capture the source and target data structures and map the corresponding fields before the migration, which can be automatic, manual, or a combination of both (see the mapping sketch below).
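
To make the mapping concrete, here is a minimal Python sketch of a field map under assumed source and target structures; every field name, and the date and phone formats, are hypothetical examples rather than anything from a particular system.

```python
from datetime import datetime

# Hypothetical source-to-target field map: each entry names the target
# column and a converter that reshapes the source value's format.
FIELD_MAP = {
    "CUST_NM":  ("customer_name", lambda v: v.strip()),
    "JOIN_DT":  ("joined_on",
                 lambda v: datetime.strptime(v, "%d%m%Y").date().isoformat()),
    "PHONE_NO": ("phone", lambda v: v.replace("-", "").strip()),
}

def transform_row(source_row: dict) -> dict:
    """Map one source record onto the target structure."""
    return {tgt: convert(source_row[src])
            for src, (tgt, convert) in FIELD_MAP.items()}

print(transform_row({"CUST_NM": " Jane Doe ",
                     "JOIN_DT": "27032019",
                     "PHONE_NO": "555-0100"}))
# {'customer_name': 'Jane Doe', 'joined_on': '2019-03-27', 'phone': '5550100'}
```

In practice, a map like this is the output of the analysis step: each source field gets a target field plus a format converter, and either a tool or a script applies it during the load.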

The following phases are involved in data migration:

  • Identify and segregate the databases, fields, and programs used in the application.
  • Validate the metadata, which includes the details of the fields and databases.
  • Map the database fields to the corresponding program-level fields.
  • Identify the business rules applied and segregate the ones required for data preparation.
  • Find the possible values of each database field using the mapped program-level fields and their usage across the entire system.
  • Identify the database primary keys, foreign keys, and indexes used (see the sketch after this list).
  • Analyze the program fields for possible relationships between the databases.
  • Identify the fields used across the databases, and the key fields, by analyzing the application.
  • Analyze the data validation programs for the possible values of the database fields.
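
As a sketch of the key-and-index identification step, the following Python snippet inventories primary keys, foreign keys, and indexes. It uses SQLite purely as a stand-in engine, and the orders/customers tables are hypothetical.

```python
import sqlite3  # stand-in engine; use your source database's driver in practice

# Build a tiny example schema so the inventory queries have something to read.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (customer_id INTEGER PRIMARY KEY);
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(customer_id)
    );
    CREATE INDEX idx_orders_customer ON orders(customer_id);
""")

# SQLite exposes keys and indexes via PRAGMA; on other engines, query
# information_schema (table_constraints, key_column_usage, statistics) instead.
pk_cols = [row[1] for row in conn.execute("PRAGMA table_info(orders)") if row[5]]
print("primary key columns:", pk_cols)  # ['order_id']
print("foreign keys:", conn.execute("PRAGMA foreign_key_list(orders)").fetchall())
print("indexes:", conn.execute("PRAGMA index_list(orders)").fetchall())
```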

Even if the databases in the source and target systems match exactly, the target system may generate errors when executed, because the target system's design or language can be entirely different from the source system's. So the handling of data in the target system should differ from that of the source system. One such case is conditional logic that misbehaves because of the difference between EBCDIC and ASCII encodings. Analysts or architects should foresee these kinds of errors before the data migration, and the data should be converted accordingly.
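
To see how the encoding difference bites, here is a small Python illustration using the cp037 EBCDIC code page; it demonstrates the pitfall rather than serving as a conversion utility.

```python
import codecs

# Why EBCDIC vs. ASCII breaks conditional logic: the two encodings order
# characters differently, so comparisons that worked on the mainframe can
# flip after a byte-for-byte copy.
record = "a".encode("cp037")   # cp037 is a common EBCDIC code page
print(record)                  # b'\x81' -- 'a' in EBCDIC

# In EBCDIC, lowercase letters sort BELOW uppercase (0x81 < 0xC1);
# in ASCII it is the opposite ('A' = 0x41 < 'a' = 0x61).
print("a".encode("cp037") < "A".encode("cp037"))  # True  (EBCDIC order)
print(b"a" < b"A")                                # False (ASCII order)

# The fix is to decode at the boundary, not to copy bytes.
print(codecs.decode(record, "cp037"))  # 'a'
```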

Challenges of Data Migration

Truncation and precision of data: In this case, the data to be migrated is only partially moved to the target system. This can happen because of extra padded spaces in the source data or because the target field holds fewer bytes than the source field (a check is sketched below).
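
A minimal sketch of a pre-load truncation check, assuming hypothetical target column widths: strip the fixed-width padding first, then verify the value still fits.

```python
# Hypothetical target column widths, e.g. exported from the target schema.
TARGET_WIDTHS = {"customer_name": 20, "city": 15}

def check_truncation(field: str, raw_value: str) -> str:
    value = raw_value.rstrip()  # drop fixed-width padding from the source
    width = TARGET_WIDTHS[field]
    if len(value) > width:
        raise ValueError(
            f"{field}: {value!r} is {len(value)} chars, target allows {width}")
    return value

print(check_truncation("city", "Nashville      "))  # 'Nashville'
```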

Data type mismatch: The data types must be mapped correctly: numeric fields should be mapped to integers, float types must be identified, and so on. Variable-length fields should also be mapped correctly and handled properly (a validation sketch follows).
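
As a sketch, a pre-load type check can confirm that each extracted string value actually parses into the target column's declared type; the column names and types here are hypothetical.

```python
# Hypothetical target column types, keyed by column name.
TARGET_TYPES = {"order_id": int, "amount": float, "notes": str}

def validate_types(row: dict) -> list[str]:
    """Return one error message per value that cannot take its target type."""
    errors = []
    for column, expected in TARGET_TYPES.items():
        try:
            expected(row[column])  # e.g. int('12'), float('3.50')
        except (ValueError, TypeError):
            errors.append(f"{column}: {row[column]!r} is not {expected.__name__}")
    return errors

print(validate_types({"order_id": "12", "amount": "3.50", "notes": "ok"}))  # []
print(validate_types({"order_id": "twelve", "amount": "3.50", "notes": ""}))
# ["order_id: 'twelve' is not int"]
```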

Null data translation: Null data should arrive in the target as null itself, not as spaces or default values. The application may have checks that rely on null validation. If the target application does not support null values, it should be modified to handle them, and the null handling should be converted to its equivalent in the target architecture as well.
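
A minimal sketch of null translation, assuming a hypothetical set of legacy sentinels: blanks and placeholder values are mapped back to real NULLs (None in Python) before the load.

```python
# Hypothetical sentinel values a legacy extract might use in place of NULL.
NULL_SENTINELS = {"", "N/A", "0000-00-00"}

def translate_nulls(row: dict) -> dict:
    """Replace blank/sentinel strings with a real NULL (None) per column."""
    return {col: (None if isinstance(v, str) and v.strip() in NULL_SENTINELS else v)
            for col, v in row.items()}

print(translate_nulls({"name": "Jane", "middle_name": "   ", "dob": "0000-00-00"}))
# {'name': 'Jane', 'middle_name': None, 'dob': None}
```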

Incorrect translation or data corruption: Data can come from different sources and in different formats stored across multiple source tables. A thorough analysis of the data is required when the architecture or design changes from a legacy system to a modern one. Incorrectly translated or corrupted data impacts the target system.

Misplaced data: This is data that is lost while performing the migration. It happens mainly because of the process followed while backing up the data or because of flawed analysis of the system.

Extra records: Duplicate records can come from different data sources, so the data needs to be filtered and transformed before loading it into the target system in order to remove duplicates (a deduplication sketch follows).
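
A minimal deduplication sketch in Python, keeping the first record seen per business key; the customer_id key is a hypothetical example.

```python
def deduplicate(records: list[dict], key: str) -> list[dict]:
    """Keep the first record seen for each value of the business key."""
    seen, unique = set(), []
    for record in records:
        if record[key] not in seen:
            seen.add(record[key])
            unique.append(record)
    return unique

rows = [{"customer_id": 1, "name": "Jane"},
        {"customer_id": 2, "name": "Raj"},
        {"customer_id": 1, "name": "Jane"}]  # duplicate from a second source
print(deduplicate(rows, "customer_id"))      # two unique rows remain
```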

Transformation logic errors: There can be logic errors that lead to the improper transformation of data, which produces unexpected results upon execution in the target system.

Hidden requirements: Some requirements get missed when subject matter experts on the source system are unavailable. SMEs would be able to explain the reasoning behind the design; if that is not captured during the build of the target system, it will affect the data migration project.

Numeric field precision: This mostly applies to calculated fields and reports. The precision of numeric data used in reports needs to be preserved as-is in the target system; invalid values may produce incorrect reports. For example, consider a percentage field that should hold a value between 0 and 100 for display but between 0 and 1 when used in calculations: the same quantity is 25% in the report and 0.25 in the formula (see the sketch below).
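
The percentage example can be pinned down with a short sketch using Python's Decimal, which avoids the binary float rounding drift that plagues migrated numeric fields; the field and its ranges follow the example above.

```python
from decimal import Decimal

# The display form (0-100) and the calculation form (0-1) are different
# encodings of the same value and must not be mixed during migration.
def display_to_ratio(percent: Decimal) -> Decimal:
    if not (Decimal(0) <= percent <= Decimal(100)):
        raise ValueError(f"{percent} is outside the 0-100 display range")
    return percent / Decimal(100)

rate = display_to_ratio(Decimal("25"))  # 25% as stored for reports
print(rate)                             # 0.25, the form used in formulas
print(Decimal("1000.00") * rate)        # 250.0000 -- exact, no float drift
```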

Rejected rows: Some data might get rejected during extraction, usually when the migration is automated. We need a report of these rejected rows, including their values; it will be useful for reloading the rejected data into the target system if required (see the sketch below).
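
A sketch of such a report, assuming a validate() callable like the type-check sketch above and a hypothetical CSV file name: failures are written out with their values and reasons so they can be corrected and reloaded.

```python
import csv

def load_with_reject_report(rows, validate, report_path="rejected_rows.csv"):
    """Load valid rows; write invalid ones, with values and reasons, to a CSV."""
    loaded = []
    with open(report_path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["row", "errors"])
        for row in rows:
            errors = validate(row)
            if errors:
                writer.writerow([row, "; ".join(errors)])  # keep values for reload
            else:
                loaded.append(row)
    return loaded

rows = [{"order_id": "12"}, {"order_id": "twelve"}]
good = load_with_reject_report(
    rows, lambda r: [] if r["order_id"].isdigit() else ["order_id not numeric"])
print(good)  # only the valid row is loaded; the other lands in the CSV
```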

Data cleansing: The process of preparing the data for migration. This is required mainly because the storage method and architecture of the target system differ from the source system. Cleansing also involves minor manipulation or transformation of the data to make it compatible with the target system.

Data quality: Execute mock migrations and pilot migrations before the final migration. These may surface errors that never appeared during any of the previous phases. Customize the ETL tools to handle these kinds of errors in the transformation process; this ensures the quality of the migration.

Schema validation: Automate the validation of the schema by comparing the source and target systems. Scripts can be written in Perl, Python, or any language of choice. Export both the source and target metadata into tables and run the script against them to identify mismatches in views, stored procedures, etc. (a Python sketch follows).
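
A hedged Python sketch of that comparison, assuming column metadata has already been exported from both systems into simple column-to-type mappings (the table, column, and type names are hypothetical):

```python
# Exported column metadata: "table.column" -> declared type.
source_schema = {"orders.order_id": "INTEGER", "orders.amount": "DECIMAL(10,2)",
                 "orders.notes": "VARCHAR(200)"}
target_schema = {"orders.order_id": "INTEGER", "orders.amount": "DECIMAL(8,2)",
                 "orders.status": "VARCHAR(20)"}

missing_in_target = source_schema.keys() - target_schema.keys()
extra_in_target = target_schema.keys() - source_schema.keys()
type_mismatches = {col: (source_schema[col], target_schema[col])
                   for col in source_schema.keys() & target_schema.keys()
                   if source_schema[col] != target_schema[col]}

print("missing in target:", missing_in_target)  # {'orders.notes'}
print("extra in target:", extra_in_target)      # {'orders.status'}
print("type mismatches:", type_mismatches)      # amount precision differs
```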

Data validation: Here, each data element in the source needs to be compared with the target. Execute scripts that compare the column values; any mismatch needs to be reported for reconciliation. The report should list the missing columns and data along with the key value, if any (see the sketch below).
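
A minimal sketch of the element-by-element comparison, keyed on the primary key; the row shapes and the key are hypothetical examples.

```python
def compare_rows(source: dict, target: dict) -> list[str]:
    """source/target map primary key -> row dict; return mismatch report lines."""
    report = []
    for key, src_row in source.items():
        tgt_row = target.get(key)
        if tgt_row is None:
            report.append(f"key {key}: row missing in target")
            continue
        for column, src_value in src_row.items():
            if tgt_row.get(column) != src_value:
                report.append(f"key {key}, column {column}: "
                              f"source={src_value!r} target={tgt_row.get(column)!r}")
    return report

src = {1: {"name": "Jane", "city": "Nashville"}}
tgt = {1: {"name": "Jane", "city": "NASHVILLE"}}
print(compare_rows(src, tgt))
# ["key 1, column city: source='Nashville' target='NASHVILLE'"]
```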

Once the data is migrated, execute the scripts against the source and target databases to find any errors or missing data in the mapped fields. Rectify these errors using the backup data you created. Do thorough testing to ensure that the data has been loaded to the proper location and is accessible from the various systems and applications as before.


Opinions expressed by DZone contributors are their own.
