Direct Data Distribution from Data Warehouses

Direct data distribution from data warehouses is the process of extracting consolidated, cleaned data directly from a data warehouse and distributing it to your GoodData workspaces. This article outlines essential data warehouse integration practices and provides links to detailed integration guides and best practices.

Supported Data Warehouses

In addition to the GoodData ADS data warehouse (see Data Warehouse), the GoodData platform supports direct integration with the following third-party data warehouses:

  • Snowflake
  • Amazon Redshift
  • Google BigQuery

You can integrate data from your Snowflake instance, Redshift cluster, or BigQuery project directly into the GoodData platform. The Automated Data Distribution (ADD) process synchronizes data from the warehouse with your customers’ workspaces on a defined schedule. This is a key approach to building optimized multi-tenant analytics for all your customers and users without the runaway costs associated with executing direct queries against your data warehouse. For more information about ADD benefits and usage, see Automated Data Distribution Reference.
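
To make the multi-tenant pattern concrete, below is a minimal sketch of a warehouse table that keeps all customers’ rows together and distinguishes them by a client_id column. It is ANSI-style SQL; the table and column names are illustrative assumptions, and the exact column naming that ADD expects is described in the Automated Data Distribution Reference.

    -- Illustrative only: one table holds rows for every customer, and ADD
    -- filters on client_id when loading each customer's workspace.
    CREATE TABLE out_orders (
        client_id  VARCHAR(128) NOT NULL,  -- identifies the customer's workspace
        order_id   VARCHAR(128) NOT NULL,
        order_date DATE,
        amount     DECIMAL(12, 2)
    );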

Setting up direct data distribution from a warehouse requires actions both on your Snowflake instance, Redshift cluster, or BigQuery project and in your GoodData workspace. Follow our step-by-step tutorials, which guide you through integrating your warehouse with GoodData and automate as much of the integration as possible. Depending on your experience, you can start with your own data, or you can first try our sample data for your warehouse-GoodData integration to better understand the processes involved:

If you have a GoodData workspace with the logical data model (LDM) that meets your business requirements for data analysis, see Integrate Data Warehouses Directly to GoodData based on an Existing LDM.

Components of Direct Data Distribution

Data Source

A Data Source is an entity that stores data warehouse credentials and the location of the Output Stage.

The Data Source is the main reference point when you are performing the following tasks:

  • Generating the Output Stage. During this process, we scan the data warehouse schema that your Data Source points to and generate recommended views for the Output Stage.
  • Generating a logical data model (LDM). The process scans the Output Stage connected to your Data Source and provides a definition of the LDM, which can then be used for generating the LDM in your workspace.
  • Validating the mapping between the Data Source and the LDM. This compares the Output Stage connected to your Data Source to the LDM and returns a list of inconsistencies. Validate the mapping after you have changed the Output Stage or the LDM to see what changes are required.
  • Managing data mapping items. Use data mapping when you want to override how data is loaded into workspaces by providing an alternative mapping scheme for project_id or client_id values.

Note: You can perform all the above tasks using individual API calls. For more information about creating and listing Data Sources, see the API Reference.

Output Stage

The Output Stage is a set of tables and/or views that serve as the source for loading data into the GoodData platform. You can prepare the Output Stage manually or generate it for the Data Source.

If you decide to create the Output Stage manually, make sure that the following requirements are met:

If you decide to generate the Output Stage for the Data Source, review the resulting SQL code to check the following:

  • All columns that should serve as connection points are prefixed with cp__.
  • All columns that should serve as references are prefixed with r__.
  • Attributes that are represented by numeric values (for example, a customer tier that can be 1, 2, or 3) are prefixed with a__. Without this prefix, the Data Source by default identifies all columns with a numeric data type (INT, FLOAT, and so on) as facts (the prefix f__).

You can generate the Output Stage from a different schema than the one that will contain the Output Stage.
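
For illustration, here is a hedged sketch of what an Output Stage view applying these prefixes can look like; it also reads from a source schema other than the one that holds the Output Stage, as described above. The schema, view, and column names are assumptions, not actual generator output.

    -- Illustrative sketch: the view lives in the output_stage schema but
    -- selects from a separate source schema.
    CREATE VIEW output_stage.out_customers AS
    SELECT
        customer_id    AS cp__customer_id,    -- connection point
        region_id      AS r__region_id,       -- reference to another dataset
        tier           AS a__tier,            -- numeric attribute; a__ keeps it
                                              -- from being treated as a fact
        lifetime_value AS f__lifetime_value   -- fact (numeric measure)
    FROM source.customers;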

Best Practices

For better data load performance, we recommend that you apply the following best practices:

  • Use tables/views that store data for all customers together, differentiated by a client_id column. ADD can load such data into the customers’ workspaces much faster than when the data is stored in dedicated per-customer tables/views.
  • Use incremental loads instead of full loads so that each run transfers only the rows that were added or changed since the previous run (see the sketch after this list).
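
As an illustration of the incremental-load practice, the sketch below exposes a "last modified" timestamp on an Output Stage view so that a load can pick up only rows changed since the previous run. The x__timestamp and x__client_id column names follow the Output Stage naming convention as we understand it, while the source table and column names are illustrative assumptions; verify the exact convention in the Automated Data Distribution Reference.

    -- Illustrative sketch: exposing a timestamp column lets ADD load only
    -- rows added or changed since the previous run instead of a full reload.
    CREATE VIEW output_stage.out_orders AS
    SELECT
        client_id  AS x__client_id,   -- multi-tenant differentiator
        order_id   AS cp__order_id,   -- connection point
        amount     AS f__amount,      -- fact
        updated_at AS x__timestamp    -- drives the incremental load
    FROM source.orders;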