Data Warehousing Interview Questions and Answers
A data warehouse is the main repository of an organization's historical data, its corporate memory. It contains the raw material for management's decision support system. The critical factor leading to the use of a data warehouse is that a data analyst can perform complex queries and analysis, such as data mining, on the information without slowing down the operational systems (Ref: Wikipedia). A data warehouse is a collection of data designed to support management decision making. Data warehouses contain a wide variety of data that present a coherent picture of business conditions at a single point in time. It is a repository of integrated information, available for queries and analysis.
What are the fundamental stages of Data Warehousing?

Offline Operational Databases - Data warehouses in this initial stage are developed by simply copying the database of an operational system to an off-line server, where the processing load of reporting does not impact the operational system's performance.

Offline Data Warehouse - Data warehouses in this stage of evolution are updated on a regular time cycle (usually daily, weekly or monthly) from the operational systems, and the data is stored in an integrated, reporting-oriented data structure.

Real Time Data Warehouse - Data warehouses at this stage are updated on a transaction or event basis, every time an operational system performs a transaction (e.g. an order or a delivery).

Integrated Data Warehouse - Data warehouses at this stage are used to generate activity or transactions that are passed back into the operational systems for use in the daily activity of the organization.
What is Dimensional Modeling?

The dimensional model concept involves two types of tables and is different from 3rd normal form. It uses a Fact table, which contains the measurements of the business, and Dimension tables, which contain the context (the dimensions of calculation) of those measurements.

What is a Fact table?

A Fact table contains the measurements of a business process, along with the foreign keys for the dimension tables. For example, if your business process is "paper production", then "average production of paper by one machine" or "weekly production of paper" would be considered measurements of the business process.

What is a Dimension table?

A Dimension table contains textual attributes of the measurements stored in the fact tables. A dimension table is a collection of hierarchies, categories and logic which the user can use to traverse hierarchy nodes.
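The fact/dimension split above can be sketched with Python's built-in sqlite3 module. The table names, columns and figures below are illustrative assumptions, not taken from the text:

```python
import sqlite3

# In-memory warehouse with one dimension table and one fact table
# (illustrative names and data for the "paper production" example).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE dim_machine (
        machine_key  INTEGER PRIMARY KEY,  -- surrogate key
        machine_name TEXT                  -- textual context attribute
    )""")
conn.execute("""
    CREATE TABLE fact_production (
        machine_key   INTEGER REFERENCES dim_machine(machine_key),
        week          INTEGER,
        tons_produced REAL                 -- numeric measurement
    )""")

conn.executemany("INSERT INTO dim_machine VALUES (?, ?)",
                 [(1, "Machine A"), (2, "Machine B")])
conn.executemany("INSERT INTO fact_production VALUES (?, ?, ?)",
                 [(1, 1, 10.0), (1, 2, 12.0), (2, 1, 8.0)])

# "Average production of paper by one machine": join the fact table
# to the dimension table through the foreign key and aggregate.
rows = conn.execute("""
    SELECT d.machine_name, AVG(f.tons_produced)
    FROM fact_production f
    JOIN dim_machine d ON d.machine_key = f.machine_key
    GROUP BY d.machine_name
    ORDER BY d.machine_name""").fetchall()
print(rows)  # [('Machine A', 11.0), ('Machine B', 8.0)]
```

Note how the measurement (tons_produced) lives only in the fact table, while the descriptive context (machine_name) lives in the dimension table.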
What are the different methods of loading Dimension tables?

There are two different ways to load data in dimension tables.

Conventional (Slow):
All the constraints and keys are validated against the data before it is loaded; this way data integrity is maintained.

Direct (Fast):
All the constraints and keys are disabled before the data is loaded. Once the data is loaded, it is validated against all the constraints and keys. If data is found invalid or dirty, it is not included in the index and all future processes skip this data.
What is OLTP?

OLTP is an abbreviation of On-Line Transaction Processing. This system is an application that modifies data the instant it receives it and has a large number of concurrent users.

What is OLAP?

OLAP is an abbreviation of On-Line Analytical Processing. This system is an application that collects, manages, processes and presents multidimensional data for analysis and management purposes.
What is the difference between OLTP and OLAP?

Data Source
OLTP: Data is from the original data source of the operational system.
OLAP: Data is consolidated from various sources.

Purpose
OLTP: Snapshot of ongoing business processes that perform the fundamental business tasks.
OLAP: Multi-dimensional views of business activities, for planning and decision support.

Queries and Processing
OLTP: Simple, quick-running queries run by users.
OLAP: Complex, long-running queries run by the system to update the aggregated data.

Database Design
OLTP: Small database. Speed is not an issue due to the smaller database, and normalization will not degrade performance. This adopts the entity relationship (ER) model and an application-oriented database design.
OLAP: Large database. Speed is an issue due to the larger database, and de-normalizing will improve performance as there will be fewer tables to scan while performing tasks. This adopts a star, snowflake or fact constellation model and a subject-oriented database design.

Backup and System Administration
OLTP: Regular database backup and system administration can do the job.
OLAP: Reloading the OLTP data is considered a good backup option.
What are the foreign key columns in fact tables and dimension tables?

Foreign keys of dimension tables are primary keys of entity tables.
Foreign keys of fact tables are primary keys of dimension tables.
What is Data Mining?

Data Mining is the process of analyzing data from different perspectives and summarizing it into useful information.
What is the difference between a view and a materialized view?

A view takes the output of a query and makes it appear like a virtual table; it can be used in place of tables.

A materialized view provides indirect access to table data by storing the results of a query in a separate schema object.
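The difference can be demonstrated with sqlite3. SQLite has views but no native materialized views, so the snippet mimics one by storing the query results in a table (an assumption of this sketch, not a standard feature):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (amount REAL)")
conn.execute("INSERT INTO sales VALUES (100.0)")

# A view stores only the query text; it is re-run on every access.
conn.execute("CREATE VIEW v_total AS SELECT SUM(amount) AS total FROM sales")
# Mimic a materialized view: physically store the query's results.
conn.execute("CREATE TABLE mv_total AS SELECT SUM(amount) AS total FROM sales")

# New data arrives after both objects were created.
conn.execute("INSERT INTO sales VALUES (50.0)")

view_total = conn.execute("SELECT total FROM v_total").fetchone()[0]
mv_total = conn.execute("SELECT total FROM mv_total").fetchone()[0]
print(view_total)  # 150.0 -- the view reflects the new row
print(mv_total)    # 100.0 -- the stored copy is stale until refreshed
```

This is why materialized views are faster to query (the work is already done) but need a refresh strategy.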
What is an ER Diagram?

ER Diagrams are a major data modelling tool and help organize the data in your project into entities and define the relationships between the entities. This process has proved to enable the analyst to produce a good database structure so that the data can be stored and retrieved in the most efficient manner.

An entity-relationship (ER) diagram is a specialized graphic that illustrates the interrelationships between entities in a database - a type of diagram used in data modeling for relational databases. These diagrams show the structure of each table and the links between tables.
What is ODS?

ODS is an abbreviation of Operational Data Store - a database structure that is a repository for near real-time operational data rather than long-term trend data. The ODS may further become the enterprise-shared operational database, allowing operational systems that are being reengineered to use the ODS as their operational database.
What is ETL?

ETL is an abbreviation of extract, transform, and load. ETL is software that enables businesses to consolidate their disparate data while moving it from place to place, and it doesn't really matter that the data is in different forms or formats. The data can come from any source; ETL is powerful enough to handle such data disparities. First, the extract function reads data from a specified source database and extracts a desired subset of data. Next, the transform function works with the acquired data - using rules or lookup tables, or creating combinations with other data - to convert it to the desired state. Finally, the load function is used to write the resulting data to a target database.
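The three steps above can be sketched in plain Python. The source data, column names and transformation rules here are hypothetical:

```python
import csv
import io
import sqlite3

# Extract: read a desired subset of rows from a (hypothetical) CSV source.
source = io.StringIO("id,name,amount\n1, alice ,10\n2, BOB ,20\n")
rows = list(csv.DictReader(source))

# Transform: apply simple rules to reach the desired state
# (trim and normalize names, cast amounts to numbers).
transformed = [(int(r["id"]), r["name"].strip().title(), float(r["amount"]))
               for r in rows]

# Load: write the resulting data to a target database table.
target = sqlite3.connect(":memory:")
target.execute("CREATE TABLE customers (id INTEGER, name TEXT, amount REAL)")
target.executemany("INSERT INTO customers VALUES (?, ?, ?)", transformed)

print(target.execute("SELECT * FROM customers ORDER BY id").fetchall())
# [(1, 'Alice', 10.0), (2, 'Bob', 20.0)]
```

Real ETL tools add scheduling, error handling and lookup tables on top of this same extract/transform/load skeleton.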
What is VLDB?

VLDB is an abbreviation of Very Large DataBase. A one-terabyte database would normally be considered a VLDB. Typically, these are decision support systems or transaction processing applications serving large numbers of users.
Is OLTP database design optimal for a Data Warehouse?

No. OLTP database tables are normalized, which adds extra time for queries to return results. Additionally, an OLTP database is smaller and does not contain data spanning a long period (many years), which needs to be analyzed. An OLTP system is basically an ER model, not a Dimensional Model. If a complex query is executed on an OLTP system, it may cause heavy overhead on the OLTP server and affect normal business processes.
If de-normalization improves data warehouse performance, why is the fact table in normal form?

Foreign keys of fact tables are primary keys of dimension tables. It is clear that the fact table contains columns which are primary keys of other tables, and that in itself makes it a normal form table.
What is a lookup table?

A lookup table is the table placed on the target table based upon the primary key of the target; it updates the table by allowing only modified (new or updated) records based on the lookup condition.
What is an aggregate table?

An aggregate table contains the summary of existing warehouse data, grouped to certain levels of dimensions. It is always easier to retrieve data from an aggregate table than to visit the original table which has millions of records. Aggregate tables reduce the load on the database server, increase the performance of the query and let the result be retrieved quickly.
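A minimal sketch of the idea, with hypothetical detail-level rows rolled up once to a (day, product) grain so later queries read a handful of summary rows instead of the full detail:

```python
from collections import defaultdict

# Detail-level fact rows: (day, product, units) -- illustrative data.
detail = [("2024-01-01", "pen", 3), ("2024-01-01", "pen", 2),
          ("2024-01-01", "ink", 5), ("2024-01-02", "pen", 4)]

# Build the aggregate table once, grouped to the (day, product) level.
aggregate = defaultdict(int)
for day, product, units in detail:
    aggregate[(day, product)] += units

# Queries now hit 3 summary rows rather than 4 (or 4 million) detail rows.
print(dict(aggregate))
# {('2024-01-01', 'pen'): 5, ('2024-01-01', 'ink'): 5, ('2024-01-02', 'pen'): 4}
```

In a real warehouse the same roll-up would be a GROUP BY materialized into a summary table and refreshed on a schedule.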
What is real time data-warehousing?
Data warehousing captures business activity data. Real-time data warehousing captures business activity data as it occurs. As soon as the business activity is complete and there is data about it, the completed activity data flows into the data warehouse and becomes available instantly.
What are conformed dimensions?

Conformed dimensions mean the exact same thing with every possible fact table to which they are joined. They are common to the cubes.

What is a conformed fact?

Conformed facts are measures defined and calculated the same way across multiple Data Marts, so they can be used in combination with multiple fact tables.
How do you load the time dimension?
Time dimensions are
usually loaded by a program that loops through
all possible dates
that may appear in the data. 100 years may be
represented in a time
dimension, with one row per day.
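Such a loader can be sketched in a few lines of Python; the column names chosen for each row are illustrative:

```python
from datetime import date, timedelta

def build_time_dimension(start, end):
    """Loop through all dates in [start, end], producing one row per day."""
    rows, current = [], start
    while current <= end:
        rows.append({"date_key": current.isoformat(),
                     "year": current.year,
                     "month": current.month,
                     "day_of_week": current.strftime("%A")})
        current += timedelta(days=1)
    return rows

# Three days shown here; 100 years would be roughly 36,525 rows.
rows = build_time_dimension(date(2024, 1, 1), date(2024, 1, 3))
print(len(rows))               # 3
print(rows[0]["day_of_week"])  # Monday
```

Because the dimension is generated rather than sourced, it is typically loaded once up front and extended as needed.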
What is the level of granularity of a fact table?

Level of granularity means the level of detail that you put into the fact table in a data warehouse - what detail you are willing to record for each transactional fact.
What are non-additive facts?

Non-additive facts are facts that cannot be summed up over any of the dimensions present in the fact table. However, they are not considered useless; if there are changes in the dimensions, the same facts can be useful.
What is a factless fact table?

A fact table which does not contain numeric fact columns is called a factless fact table.
What are slowly changing dimensions (SCD)?

SCD is an abbreviation of Slowly Changing Dimensions. SCD applies to cases where the attribute for a record varies over time. There are three different types:

1) SCD1: The new record replaces the original record. Only one record exists in the database - the current data.

2) SCD2: A new record is added to the dimension table. Two records exist in the database - the current data and the previous history data.

3) SCD3: The original record is modified to include the new data. One record exists in the database - the new information is attached to the old information in the same record.
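The three types can be sketched on a single customer row; the field names (cust_id, city, current) are hypothetical:

```python
# One customer row whose "city" attribute changes over time.
dim = [{"cust_id": 1, "city": "Boston", "current": True}]

def scd1(rows, cust_id, new_city):
    """SCD1: overwrite in place -- only the current value survives."""
    for r in rows:
        if r["cust_id"] == cust_id:
            r["city"] = new_city

def scd2(rows, cust_id, new_city):
    """SCD2: expire the old row and add a new one -- full history is kept."""
    for r in rows:
        if r["cust_id"] == cust_id and r["current"]:
            r["current"] = False
    rows.append({"cust_id": cust_id, "city": new_city, "current": True})

def scd3(rows, cust_id, new_city):
    """SCD3: keep the previous value in an extra column of the same row."""
    for r in rows:
        if r["cust_id"] == cust_id:
            r["prev_city"], r["city"] = r["city"], new_city

scd2(dim, 1, "Chicago")
print(len(dim))  # 2 -- one history row plus one current row
```

SCD2 is the most common in practice because it preserves the full history for point-in-time reporting.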
What is hybrid slowly changing dimension?
Hybrid SCDs are a combination of both SCD1 and SCD2. It may happen that in a table, some columns are important and we need to track changes for them, i.e. capture their historical data, whereas for some other columns, even if the data changes, we don't care.
What is a BUS Schema?

A BUS Schema is composed of a master suite of conformed dimensions and standardized definitions of facts.
What is a Star Schema?

A star schema is a way of organizing the tables such that results can be retrieved from the warehouse database quickly.
What is a Snowflake Schema?

In a snowflake schema, each dimension has a primary dimension table, to which one or more additional dimension tables can join. The primary dimension table is the only table that can join to the fact table.
What is the difference between star and snowflake schema?

Star schema - A single fact table with N dimension tables; all dimensions are linked directly to the fact table. This schema is de-normalized and results in simple joins and less complex queries, as well as faster results.

Snowflake schema - Any dimension with extended (sub-)dimensions is known as a snowflake schema; dimensions may be interlinked or may have one-to-many relationships with other tables. This schema is normalized and results in complex joins and very complex queries, as well as slower results.
What is the difference between ER Modeling and Dimensional Modeling?

ER modeling is used for normalizing the OLTP database design. Dimensional modeling is used for de-normalizing the ROLAP/MOLAP design.
What is a degenerate dimension table?

If a table contains values which are neither dimensions nor measures, it is called a degenerate dimension table.
Why is Data Modeling Important?

Data modeling is probably the most labor-intensive and time-consuming part of the development process. The goal of the data model is to make sure that all data objects required by the database are completely and accurately represented. Because the data model uses easily understood notations and natural language, it can be reviewed and verified as correct by the end-users.
In computer science,
data modeling is the process of creating a data
model by applying a
data model theory to create a data model
instance. A data
model theory is a formal data model description.
When data modelling,
we are structuring and organizing data. These
data structures are
then typically implemented in a database
management system. In
addition to defining and organizing the data,
data modeling will
impose (implicitly or explicitly) constraints or
limitations on the
data placed within the structure.
Managing large quantities of structured and unstructured data is a primary function of information systems. Data models describe structured data for storage in data management systems such as relational databases. They typically do not describe unstructured data, such as word processing documents, email messages, pictures, digital audio, and video.
(Reference : Wikipedia)
What is a surrogate key?

A surrogate key is a substitution for the natural primary key. It is just a unique identifier or number for each row that can be used as the primary key of the table. The only requirement for a surrogate primary key is that it is unique for each row in the table. It is useful because the natural primary key can change, and this makes updates more difficult. Surrogate keys are typically integer or numeric values.
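A minimal sketch of a surrogate-key generator, assuming hypothetical natural keys such as customer SSNs; real warehouses would use a database sequence or identity column instead:

```python
import itertools

# An integer sequence with no business meaning, assigned
# independently of the natural key.
next_key = itertools.count(1)
dim_customer = {}

def get_surrogate(natural_key):
    """Return the existing surrogate for a natural key, or assign a new one."""
    if natural_key not in dim_customer:
        dim_customer[natural_key] = next(next_key)
    return dim_customer[natural_key]

print(get_surrogate("SSN-123"))  # 1
print(get_surrogate("SSN-456"))  # 2
print(get_surrogate("SSN-123"))  # 1 -- stable even if other attributes change
```

Because the surrogate never changes, fact rows keep pointing at the right dimension row even when the natural key is corrected or reissued.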
What is Data Mart?
A data mart (DM) is a specialized version of a data warehouse (DW). Like data warehouses, data marts contain a snapshot of operational data that helps business people to strategize based on analyses of past trends and experiences. The key difference is that the creation of a data mart is predicated on a specific, predefined need for a certain grouping and configuration of select data. A data mart configuration emphasizes easy access to relevant information (Reference: Wiki). Data marts are designed to help managers make strategic decisions about their business.
What is the difference between OLAP and a data warehouse?

A data warehouse is the place where the data is stored for analysis, whereas OLAP is the process of analyzing the data - managing aggregations and partitioning information into cubes for in-depth visualization.
What is a Cube and a Linked Cube with reference to a data warehouse?

Cubes are logical representations of multidimensional data. The edge of the cube contains dimension members and the body of the cube contains data values. The linking of cubes ensures that the data in the cubes remains consistent.
What is a junk dimension?

A number of very small dimensions might be lumped together to form a single dimension - a junk dimension - where the attributes are not closely related. Grouping random flags and text attributes in a dimension and moving them to a separate sub-dimension is known as a junk dimension.
What is snapshot with reference to data warehouse?
You can disconnect
the report from the catalog to which it is attached
by saving the report
with a snapshot of the data.
What is active data warehousing?
An active data warehouse provides information that enables decision-makers within an organization to manage customer relationships nimbly, efficiently and proactively.
What is the difference between data warehousing and business intelligence?

Data warehousing deals with all aspects of managing the development, implementation and operation of a data warehouse or data mart, including meta data management, data acquisition, data cleansing, data transformation, storage management, data distribution, data archiving, operational reporting, analytical reporting, security management, backup/recovery planning, etc. Business intelligence, on the other hand, is a set of software tools that enable an organization to analyze measurable aspects of their business such as sales performance, profitability, operational efficiency, effectiveness of marketing campaigns, market penetration among certain customer groups, cost trends, anomalies and exceptions, etc. Typically, the term "business intelligence" is used to encompass OLAP, data visualization, data mining and query/reporting tools. (Reference: Les Barbusinski)
Explain the paradigms of Bill Inmon and Ralph Kimball.

Bill Inmon's paradigm: The data warehouse is one part of the overall business intelligence system. An enterprise has one data warehouse, and data marts source their information from the data warehouse. In the data warehouse, information is stored in 3rd normal form.

Ralph Kimball's paradigm: The data warehouse is the conglomerate of all data marts within the enterprise. Information is always stored in the dimensional model.