Refcard #160

Data Warehousing

Best Practices for Collecting, Storing, and Delivering Decision-Support Data

by David Haertzen

Describes data warehousing, data modeling, dimensional databases, and data integration.

Section 1

What is Data Warehousing?

Data Warehousing is a process for collecting, storing, and delivering decision-support data for some or all of an enterprise. Data warehousing is a broad subject that is described point by point in this Refcard. A data warehouse is one of the artifacts created in the data warehousing process.

William (Bill) H. Inmon has provided an alternate and useful definition of a data warehouse: "A subject-oriented, integrated, time-variant, and nonvolatile collection of data in support of management's decision-making process."

As a total architecture, data warehousing involves people, processes, and technologies to achieve the goal of providing decision-support data that is consistent, integrated, standardized, and easy to understand.

See the book The Analytical Puzzle: Profitable Data Warehousing, Business Intelligence and Analytics (ISBN 978-1935504207) for details.

What a Data Warehouse is and is Not

A data warehouse is a database whose data includes a copy of operational data. This data is often obtained from multiple data sources and is useful for strategic decision making. It does not, however, contain original data.

"Data warehouse," by the way, is not another name for "database." Some people incorrectly use the term "data warehouse" as if it's a generic name for a database. A data warehouse does not only consist of historic data, it can be made up of analytics and reporting data, too. Transactional data that is managed in application data stores will not reside in a data warehouse.

Section 2

Data Warehouse Architecture

Data Warehouse Architecture Components

The data warehouse's technical architecture includes: data sources, data integration, BI/Analytics data stores, and data access.

Data Warehouse Architecture Components

Data Warehouse Tech Stack

  • Metadata Repository – A software tool that contains data that describes other data. There are two kinds of metadata: business metadata and technical metadata.
  • Data Modeling Tool – A software tool that enables the design of data and databases through graphical means. It provides a detailed design capability that includes the design of tables, columns, relationships, rules, and business definitions.
  • Data Profiling Tool – A software tool that supports understanding data through exploration and comparison. It accesses the data and explores it, looking for patterns such as typical values, outlying values, ranges, and allowed values, helping you better understand the content and quality of the data.
  • Data Integration Tools – ETL (extract, transform, and load) tools, as well as real-time integration tools such as ESB (enterprise service bus) software. These tools copy data from place to place and also scrub and cleanse the data.
  • RDBMS (Relational Database Management System) – Software that stores data in a relational format and is accessed using SQL (Structured Query Language). This is the database system that stores and maintains the warehouse data, and it is important to the scalability of the system.
  • Multidimensional Database (MDB) – Database software designed for data mart-type operations. It organizes data into multiple dimensions, known as "cubes," to support analytics.
  • Big Data Store – Software that manages huge amounts of data that other types of software, such as relational databases, cannot. This Big Data tends to be unstructured and consists of text, images, video, and audio.
  • Reporting and Query Tools – Business-intelligence software tools that select data through queries and present it as reports and/or graphical displays. Business users and analysts use these tools to explore the data and to produce the reports and outputs they need to understand it.
  • Data Mining Tools – Software tools that find patterns in stores of data or databases. These tools are useful for predictive analytics and optimization analytics.

Infrastructure Architecture

The data warehouse tech stack is built on a fundamental framework of hardware and software known as the infrastructure.


Using a data warehouse appliance or a dedicated database infrastructure helps support the data warehouse. This technique tends to yield the highest performance. The data warehouse appliance is optimized to provide database services using Massively Parallel Processing (MPP) architecture. It includes multiple, tightly coupled computers with specialized functions, plus at least one array of storage devices that are accessed in parallel. Specialized functions include: system controller, database access, data load and data backup.

Data warehouse appliances provide high performance. They can be up to 100 times faster than a typical database server. Consider a data warehouse appliance when more than 2 TB of data must be stored.

Data Architecture

Data architecture is a blueprint for the management of data in an enterprise. The data architect builds a picture of how multiple sub-domains work. Some of these subdomains are data governance, data quality, ILM (Information Lifecycle Management), data framework, metadata and semantics, master data, and, finally, business intelligence.


Data Architecture Sub-Domains

  • Data Governance (DG) – The overall management of data and information, including the people, processes, and technologies that improve the value obtained from data and information by treating data as an asset. DG is the cornerstone of the data architecture.
  • Data Quality Management (DQM) – The discipline of ensuring that data is fit for use by the enterprise. It includes obtaining requirements and rules that specify the dimensions of quality required, such as accuracy, completeness, timeliness, and allowed values.
  • Information Lifecycle Management (ILM) – The discipline of specifying and managing information through its life, from conception to disposal. Information activities that make up ILM include classification, creation, distribution, use, maintenance, and disposal.
  • Data Framework – A description of data-related systems in terms of a set of fundamental parts and the recommended methods for assembling those parts using patterns. The data framework can include database management, data storage, and data integration.
  • Metadata and Semantics – Information that describes and specifies data-related objects. This description can include the structure and storage of data, the business use of data, and the processes that act on the data. "Semantics" refers to the meaning of data.
  • Master Data Management (MDM) – An activity focused on producing and making available a "golden record" of master data and essential business entities, such as customers, products, and financial accounts. Master data is data describing major subjects of interest that is shared by multiple applications.
  • Business Intelligence – The people, tools, and processes that support planning and decision making, both strategic and operational, for an organization.

Data Flow

The diagram shows how data flows through the data warehouse system. Data originates in the data sources, such as inventory systems, and is copied through data integration into the data warehouse and operational data stores. These data stores then feed the data marts, where data is formatted for access through BI and analytics tools.


Section 3


Data is the raw material through which we can gain understanding. It is a critical element in data modeling, statistics, and data mining. It is the foundation of the pyramid that leads to wisdom and to informed action.

Data Attribute Characteristics

  • Name – Each attribute has a name, such as "Account Balance Amount." An attribute name is a string that identifies and describes an attribute. In the early stages of data design you may just list names without adding clarifying information.
  • Datatype – The datatype, also known as the "data format," could have a value such as decimal(12,4). This is the format used to store the attribute. It specifies whether the attribute is a string, a number, or a date, and it also specifies the size of the attribute.
  • Domain – A domain, such as Currency Amounts, is a categorization of attributes by function.
  • Initial Value – An initial value, such as 0.0000, is the default value that an attribute is assigned when it is first created.
  • Rules – Rules are constraints that limit the values that an attribute can contain. An example rule is "the attribute must be greater than or equal to 0.0000." Use of rules helps to improve data quality.
  • Definition – A narrative that conveys or describes the meaning of an attribute. For example, Account Balance Amount is a measure of the monetary value of a financial account, such as a bank account or an investment account.
Section 4

Data Modeling

Three levels of data modeling are developed in sequence:

  1. Conceptual Data Model - a high-level model that describes a problem using entities, attributes, and relationships.
  2. Logical Data Model - a detailed data model that describes a solution in business terms, and that also uses entities, attributes, and relationships.
  3. Physical Data Model - a detailed data model that defines database objects, such as tables and columns. This model is needed to implement the models in a database and produce a working solution (see the SQL sketch below).
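As an illustration of the physical level, here is a minimal sketch of a physical data model for a hypothetical Customer entity, expressed as standard SQL DDL. The table and column names are assumptions for the example, not part of this Refcard.

  CREATE TABLE customer (
      customer_id     INTEGER      NOT NULL,              -- unique identifier for the customer
      customer_name   VARCHAR(100) NOT NULL,              -- business name of the customer
      customer_status CHAR(1)      DEFAULT 'A' NOT NULL,  -- example rule: 'A' = active, 'I' = inactive
      CONSTRAINT pk_customer PRIMARY KEY (customer_id)
  );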


An entity is a core part of any conceptual and logical data model. An entity is an object of interest to an enterprise—it can be a person, organization, place, thing, activity, event, abstraction, or idea. Entities are represented as rectangles in the data model. Think of entities as singular nouns.



An attribute is a characteristic of an entity. Attributes are categorized as: primary keys, foreign keys, alternate keys, and non-keys, as depicted in the diagram below.



A relationship is an association between entities. Such a relationship is diagrammed by drawing a line between the related entities. The following diagram depicts two entities, Customer and Order, that have a relationship specified by the verb phrase "places" in this way: Customer places Order.



Cardinality specifies the number of entities that may participate in a given relationship, expressed as: one-to-one, one-to-many, or many-to-many, as depicted in the following example.


Cardinality is expressed as minimum and maximum numbers. In the first example below, an instance of entity A may have one instance of entity B, and entity B must have one and only one instance of entity A. Cardinality is specified by putting symbols on the relationship line near each of the two entities that are part of the relationship.

In the second case, entity A may have one or more instances of entity B, and entity B must have one and only one instance of entity A.


Minimum cardinality is expressed by the symbol farther away from the entity. A circle indicates that the entity is optional, while a bar indicates that the entity is mandatory: at least one instance is required.


Maximum cardinality is expressed by the symbol closest to the entity. A bar means that at most one entity instance can participate, while a crow's foot (a three-prong connector) means that many entity instances, a large unspecified number, may participate.

Maximum cardinality
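In SQL terms, the one-to-many "Customer places Order" relationship above is typically enforced with a foreign key on the "many" side. A minimal sketch, with assumed table and column names (it reuses the customer table sketched earlier):

  CREATE TABLE sales_order (
      order_id    INTEGER NOT NULL,
      customer_id INTEGER NOT NULL,          -- each order must have exactly one customer
      order_date  DATE    NOT NULL,
      CONSTRAINT pk_sales_order PRIMARY KEY (order_id),
      CONSTRAINT fk_order_customer FOREIGN KEY (customer_id)
          REFERENCES customer (customer_id)  -- a customer may place zero, one, or many orders
  );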

Section 5

Normalized Data

Normalization is a data modeling technique that organizes data by breaking it down to its lowest level, i.e., its "atomic" components, to avoid duplication. This method is used to design the Atomic Data Warehouse part of the data warehousing system.

  • First Normal Form – Entities contain no repeating groups of attributes.
  • Second Normal Form – Entity is in first normal form, and attributes that depend on only part of a composite key are separated out.
  • Third Normal Form – Entity is in second normal form, and non-key attributes that are facts about other non-key attributes are separated out.
  • Fourth Normal Form – Entity is in third normal form, and two or more independent, multi-valued facts for an entity are separated.
  • Fifth Normal Form – Entity is in fourth normal form, and all non-primary-key attributes depend on all attributes that make up the primary key.
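As a small, hypothetical example of normalization (the names are assumptions, not from the Refcard): a customer row that repeats phone columns such as phone_1, phone_2, and phone_3 violates first normal form, so the repeating group is moved to its own table.

  -- Unnormalized: repeating group of phone numbers on the customer row
  -- customer(customer_id, customer_name, phone_1, phone_2, phone_3)

  -- First normal form: the repeating group becomes a separate table
  CREATE TABLE customer_phone (
      customer_id  INTEGER     NOT NULL,
      phone_seq_nr SMALLINT    NOT NULL,     -- 1, 2, 3, ...
      phone_number VARCHAR(20) NOT NULL,
      CONSTRAINT pk_customer_phone PRIMARY KEY (customer_id, phone_seq_nr),
      CONSTRAINT fk_phone_customer FOREIGN KEY (customer_id)
          REFERENCES customer (customer_id)
  );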
Section 6

Atomic Data Warehouse

The atomic data warehouse (ADW) is an area where data is broken down into low-level components in preparation for export to data marts. The ADW is designed using normalization and methods that make for speedy history loading and recording.

Header and Detail Entities

The ADW is organized into non-changing data identified by logical keys, and changeable data that supports tracking of changes and rapid loads and inserts. Use an integer surrogate key as the primary key, then add an effective date to track changes.

Header and Detail Entities
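A minimal sketch of the header/detail pattern using assumed names: the header holds the non-changing logical key, while the detail holds the changeable attributes, versioned by effective date.

  -- Header: non-changing data identified by a surrogate key and a logical key
  CREATE TABLE adw_customer (
      dw_customer_id  INTEGER     NOT NULL,   -- integer surrogate key assigned by the warehouse
      customer_nbr    VARCHAR(20) NOT NULL,   -- logical (source system) key
      CONSTRAINT pk_adw_customer PRIMARY KEY (dw_customer_id)
  );

  -- Detail: changeable data; each change is inserted as a new version with its own effective date
  CREATE TABLE adw_customer_detail (
      dw_customer_id    INTEGER      NOT NULL,
      dw_effective_date TIMESTAMP    NOT NULL, -- when this version became active
      customer_name     VARCHAR(100) NOT NULL,
      credit_limit_amt  DECIMAL(12,4),
      CONSTRAINT pk_adw_customer_detail PRIMARY KEY (dw_customer_id, dw_effective_date),
      CONSTRAINT fk_detail_header FOREIGN KEY (dw_customer_id)
          REFERENCES adw_customer (dw_customer_id)
  );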

Associative Entities

Track the history of relationships between entities using an associative entity with effective dates and expiration dates.

Associative Entities
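A sketch of an associative entity that tracks the history of a relationship between two assumed entities, Customer and Sales Rep, using effective and expiration dates:

  CREATE TABLE adw_customer_sales_rep (
      dw_customer_id    INTEGER   NOT NULL,
      dw_sales_rep_id   INTEGER   NOT NULL,
      dw_effective_date TIMESTAMP NOT NULL,   -- when the relationship began
      dw_expire_date    TIMESTAMP,            -- when it ended; NULL while still current
      CONSTRAINT pk_cust_rep PRIMARY KEY (dw_customer_id, dw_sales_rep_id, dw_effective_date)
  );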

Atomic DW Specialized Attributes

Use specialized attributes to improve ADW efficiency and effectiveness. Identify these attributes using a prefix of dw_.

  • dw_xxx_id – Data warehouse assigned surrogate key. Replace "xxx" with a reference to the table name.
  • dw_insert_date – The date and time when a row was inserted into the data warehouse.
  • dw_effective_date – The date and time when a row in the data warehouse began to be active.
  • dw_expire_date – The date and time when a row in the data warehouse stopped being active.
  • dw_data_process_log_id – A reference to the data process log. The log is a record of how data was loaded or modified in the data warehouse.
Section 7

Supporting Tables

Supporting data is required to enable the data warehouse to operate smoothly. Here is some supporting data:

  • Code Management and Translation
  • Data Source Tracking
  • Error Logging

Code Translation

Data warehousing requires that codes, such as gender code and unit of measure, be translated to standard values with the aid of code-translation tables like these:

  • Code Set – Group of codes, such as "Gender Code"
  • Code – An individual code value
  • Code Translation – Mapping between code values
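A sketch of these code-translation tables (the column names are assumptions), plus a lookup that maps a source system's gender code to the warehouse standard value:

  CREATE TABLE code_set         (code_set_id INTEGER PRIMARY KEY, code_set_name VARCHAR(50));
  CREATE TABLE code             (code_id INTEGER PRIMARY KEY, code_set_id INTEGER,
                                 code_value VARCHAR(20), code_description VARCHAR(100));
  CREATE TABLE code_translation (source_system VARCHAR(30), source_code_value VARCHAR(20),
                                 code_id INTEGER);  -- maps source values to standard codes

  -- Translate the source value 'M' from the hypothetical 'CRM' system to the standard gender code
  SELECT c.code_value
  FROM   code_translation ct
  JOIN   code c ON c.code_id = ct.code_id
  WHERE  ct.source_system = 'CRM'
    AND  ct.source_code_value = 'M';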


Data-Source Tracking and Logging

Data-source tracking provides a means of tracing where data originated within a data warehouse:

  • Data Source – identifies the system or database
  • Data Process – traces the data-integration procedure
  • Data Process Log – traces each data warehouse load


Message Logging

Message logging provides a record of events that occur while loading the data warehouse:

  • Data Process Log – traces each data warehouse load
  • Message Type – specifies the kind of message
  • Message Log – contains an individual message

Message Logging

Section 8

Dimensional Database

A Dimensional Database is a database that is optimized for query and analysis and is not normalized like the Atomic Data Warehouse. It consists of fact and dimension tables, where each fact is connected to one or more dimensions.

Sales Order Fact
The Sales Order Fact includes the measures order quantity and currency amount. The dimensions Calendar Date, Product, Customer, Geo Location, and Sales Organization put the Sales Order Fact into context. This star schema supports looking at orders in a cube-like way, enabling slicing and dicing by customer, time, and product.

 Dimensional Database
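A sketch of the kind of slice-and-dice query this star schema supports, with assumed table and column names: total order amounts by product category and month for one customer segment.

  SELECT p.product_category,
         d.year_month,
         SUM(f.order_amt) AS total_order_amt
  FROM   sales_order_fact f
  JOIN   product_dim       p ON p.dw_product_id  = f.dw_product_id
  JOIN   calendar_date_dim d ON d.dw_date_id     = f.dw_order_date_id
  JOIN   customer_dim      c ON c.dw_customer_id = f.dw_customer_id
  WHERE  c.customer_segment = 'Retail'
  GROUP BY p.product_category, d.year_month
  ORDER BY p.product_category, d.year_month;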

Section 9


A fact is a set of measurements. It tends to contain quantitative data that gets presented to users. It often contains amounts of money and quantities of things. Facts are surrounded by dimensions that categorize the fact.

Anatomy of a Fact

Facts are SQL tables that include the following (see the sketch after this list):

  • Table Name – a descriptive name usually containing the word 'Fact'
  • Primary Keys – attributes that uniquely identify each fact occurrence and relate it to dimensions
  • Measures – quantitative metrics
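A minimal sketch of a fact table following that anatomy (assumed names): the primary key combines the dimension keys with the order number, and the measures are quantities and amounts.

  CREATE TABLE sales_order_fact (
      dw_order_date_id INTEGER       NOT NULL,  -- foreign keys to dimensions
      dw_product_id    INTEGER       NOT NULL,
      dw_customer_id   INTEGER       NOT NULL,
      order_nbr        VARCHAR(20)   NOT NULL,  -- order number from the source system
      order_qty        DECIMAL(12,4) NOT NULL,  -- measures
      order_amt        DECIMAL(12,4) NOT NULL,
      CONSTRAINT pk_sales_order_fact
          PRIMARY KEY (dw_order_date_id, dw_product_id, dw_customer_id, order_nbr)
  );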


Event Fact Example

Event facts record single occurrences, such as financial transactions, sales, complaints, or shipments.

Event Fact

Snapshot Fact

The snapshot fact captures the status of an item at a point in time, such as a general ledger balance or inventory level.

Snapshot Fact

Cumulative Snapshot Fact

The cumulative snapshot fact adds accumulated data, such as year-to-date amounts, to the snapshot fact.

Cumulative Snapshot Fact

Aggregated Fact

Aggregated facts provide summary information, such as general ledger totals during a period of time, or complaints per product per store per month.

Aggregated Fact
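An aggregated fact is typically populated by summarizing a lower-grain fact. A sketch, assuming the sales_order_fact and calendar_date_dim names used above and a hypothetical monthly_sales_fact target:

  INSERT INTO monthly_sales_fact (dw_product_id, dw_customer_id, year_month, order_qty, order_amt)
  SELECT f.dw_product_id,
         f.dw_customer_id,
         d.year_month,
         SUM(f.order_qty),
         SUM(f.order_amt)
  FROM   sales_order_fact f
  JOIN   calendar_date_dim d ON d.dw_date_id = f.dw_order_date_id
  GROUP BY f.dw_product_id, f.dw_customer_id, d.year_month;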

Fact-less Fact

The fact-less fact tracks an association between dimensions rather than quantitative metrics. Examples include miles, event attendance, and sales promotions.

Fact-less Fact

Section 10


A dimension is a database table that contains properties that identify and categorize. The attributes serve as labels for reports and as data points for summarization. In the dimensional model, dimensions surround and qualify facts.

Date and Time Dimensions

Date dimensions support trend analysis. Date dimensions include the date and its associated week, month, quarter, and year. Time-of-day dimensions are used to analyze daily business volume.

Date and Time Dimensions
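A sketch of a date dimension (assumed names); each row describes one calendar date along with the rollup attributes used for trend reporting:

  CREATE TABLE calendar_date_dim (
      dw_date_id     INTEGER     NOT NULL,  -- surrogate key, often of the form yyyymmdd
      calendar_date  DATE        NOT NULL,
      day_of_week    VARCHAR(10) NOT NULL,
      week_of_year   SMALLINT    NOT NULL,
      year_month     CHAR(7)     NOT NULL,  -- e.g. '2015-06'
      calendar_qtr   CHAR(6)     NOT NULL,  -- e.g. '2015Q2'
      calendar_year  SMALLINT    NOT NULL,
      CONSTRAINT pk_calendar_date_dim PRIMARY KEY (dw_date_id)
  );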

Multiple-Dimension Roles

One dimension can play multiple roles. The date dimension could play roles of a snapshot date, a project start date, and a project end date.

Multiple-Dimension Roles
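When one dimension plays several roles, the same table is simply joined more than once under different aliases. A sketch, assuming a project_fact table with start-date and end-date keys:

  SELECT p.project_nbr,
         start_dt.calendar_date AS project_start_date,
         end_dt.calendar_date   AS project_end_date
  FROM   project_fact p
  JOIN   calendar_date_dim start_dt ON start_dt.dw_date_id = p.dw_start_date_id
  JOIN   calendar_date_dim end_dt   ON end_dt.dw_date_id   = p.dw_end_date_id;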

Degenerate Dimension

A degenerate dimension has a dimension key without a dimension table. Examples include transaction numbers, shipment numbers, and order numbers.

Degenerate Dimension

Slowly-Changing Dimensions

Changes to dimensional data can be categorized into levels:

  • SCD Type 0 – Data is non-changing. It is inserted once and never changed.
  • SCD Type 1 – Changed data overwrites the prior value, so no history is kept.
  • SCD Type 2 – Each change is recorded as a new row, tracked with effective and expire dates, so full history is kept.
  • SCD Type 3 – A limited history is kept in additional columns, such as a "previous value" column.
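A sketch of an SCD Type 2 change applied to an assumed customer_dim table: the current row is expired, and a new row is inserted with a fresh effective date.

  -- Expire the currently active row for the changed customer
  UPDATE customer_dim
  SET    dw_expire_date = CURRENT_TIMESTAMP
  WHERE  customer_nbr   = 'C1001'
    AND  dw_expire_date IS NULL;

  -- Insert the new version of the row
  INSERT INTO customer_dim
         (dw_customer_id, customer_nbr, customer_name, customer_city,
          dw_effective_date, dw_expire_date)
  VALUES (98765, 'C1001', 'Acme Corp', 'Denver', CURRENT_TIMESTAMP, NULL);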
Section 11

Data Integration

Data integration is a technique for moving data or otherwise making data available across data stores. The data integration process can include extraction, movement, validation, cleansing, transformation, standardization, and loading.

Extract Transform Load (ETL)

In the ETL pattern of data integration, data is extracted from the data source and then transformed while in flight to a staging database. Data is then loaded into the data warehouse. ETL is strong for batch processing of bulk data.


Extract Load Transform (ELT)

In the ELT pattern of data integration, data is extracted from the data source and loaded to staging without transformation. After that, data is transformed within staging and then loaded to the data warehouse.
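In the ELT pattern the transformation step is often plain SQL run inside the staging or warehouse database. A minimal sketch, with assumed staging and warehouse table names:

  -- Transform raw staging rows and load them into the warehouse table
  INSERT INTO dw.customer_dim (customer_nbr, customer_name, gender_code, dw_insert_date)
  SELECT TRIM(s.cust_no),
         UPPER(TRIM(s.cust_name)),
         CASE WHEN s.gender IN ('M', 'F') THEN s.gender ELSE 'U' END,  -- standardize codes
         CURRENT_TIMESTAMP
  FROM   staging.customer_raw s
  WHERE  s.cust_no IS NOT NULL;                                        -- basic cleansing rule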


Change Data Capture (CDC)

The CDC pattern of data integration is strong in event processing. Database logs that contain a record of database changes are replicated in near real time to the staging area. This information is then transformed and loaded into the data warehouse.


CDC is a great technique for supporting real-time data warehouses.

