Import Sample Data to Your Data Source

When integrating the Data Source (see Integrate a Data Source for a Data Warehouse), download and review the sample CSV file. We suggest that you examine the data first to help you understand how GoodData processes the rows and columns.
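
Before importing, it can help to confirm the header row and the number of data rows from the command line. A minimal sketch (the two-row stand-in file below is fabricated purely for illustration; point the commands at the sample CSV you actually downloaded):

```shell
# Fabricate a tiny stand-in file with the same shape as the sample CSV:
# a header row of column names followed by one data row per order line.
printf 'cp__order_line_id,a__order_id,d__date,f__price,f__quantity\n1,O-1,2023-01-01,9.99,2\n' > /tmp/sample.csv

# The header row lists the column names that map to table columns.
head -n 1 /tmp/sample.csv

# Count the data rows (everything after the header).
tail -n +2 /tmp/sample.csv | wc -l
```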

The import method differs slightly for each supported warehouse. Go to the section that applies to your setup and proceed.

Azure SQL Database

Log in to your Azure SQL Database with the account that you plan to use with GoodData. Ensure that the user that you want to use in the Data Source has all necessary privileges and that your Azure SQL Database can be accessed by GoodData. For more information about the required privileges, see GoodData-Azure SQL Database Integration Details.

Steps:

  1. Create a separate Azure SQL Database instance.
  2. Execute the following SQL queries to create a schema and table:

    DROP TABLE IF EXISTS [gdtrial].[out_csv_order_lines];
    DROP SCHEMA IF EXISTS [gdtrial];
    CREATE SCHEMA [gdtrial];
    
    CREATE TABLE [gdtrial].[out_csv_order_lines] (
        cp__order_line_id VARCHAR(255),
        a__order_id VARCHAR(255),
        d__date date,
        a__order_status VARCHAR(255),
        a__customer_id VARCHAR(255),
        a__customer_name VARCHAR(255),
        a__state VARCHAR(255),
        a__product_id VARCHAR(255),
        a__product_name VARCHAR(255),
        a__category VARCHAR(255),
        f__price NUMERIC(15,3),
        f__quantity NUMERIC(15,3)
    );
  3. Download and execute this SQL script, which uses INSERT statements to load the sample CSV data into your database.
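
If you prefer the command line to the Azure portal's query editor, the statements and the downloaded script can be run with the sqlcmd utility. A sketch with placeholder values (the server, database, user, and script file names are assumptions; substitute your own):

```shell
sqlcmd -S <server>.database.windows.net -d <database> -U <user> -P <password> -i create_table.sql
sqlcmd -S <server>.database.windows.net -d <database> -U <user> -P <password> -i insert_sample_data.sql
```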

Azure Synapse Analytics

Your Azure Synapse Analytics must have a Logical Data Warehouse layer. If it does not, create a Logical Data Warehouse that contains table/view examples.

Log in to your Azure Synapse Analytics with the account that you plan to use with GoodData. Ensure that the user that you want to use in the Data Source has all necessary privileges and that your Azure Synapse Analytics database can be accessed by GoodData. For more information about the required privileges, see GoodData-Azure Synapse Analytics Integration Details.

Steps:

  1. Create a separate Azure Synapse Analytics instance.
  2. Execute the following SQL queries to create a schema and table:

    DROP TABLE IF EXISTS [gdtrial].[out_csv_order_lines];
    DROP SCHEMA IF EXISTS [gdtrial];
    CREATE SCHEMA [gdtrial];
    
    CREATE TABLE [gdtrial].[out_csv_order_lines] (
        cp__order_line_id VARCHAR(255),
        a__order_id VARCHAR(255),
        d__date date,
        a__order_status VARCHAR(255),
        a__customer_id VARCHAR(255),
        a__customer_name VARCHAR(255),
        a__state VARCHAR(255),
        a__product_id VARCHAR(255),
        a__product_name VARCHAR(255),
        a__category VARCHAR(255),
        f__price NUMERIC(15,3),
        f__quantity NUMERIC(15,3)
    );
  3. Download and execute this SQL script, which uses INSERT statements to load the sample CSV data into your database.
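
The statements and the downloaded script can also be run with the sqlcmd utility against your Synapse SQL endpoint. A sketch with placeholder values (the workspace, database, user, and script file names are assumptions; substitute your own):

```shell
sqlcmd -S <workspace>.sql.azuresynapse.net -d <database> -U <user> -P <password> -i create_table.sql
sqlcmd -S <workspace>.sql.azuresynapse.net -d <database> -U <user> -P <password> -i insert_sample_data.sql
```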

BigQuery

Log in to your BigQuery project with the account that you plan to use with GoodData. Ensure that the user that you want to use in the Data Source has all necessary privileges and that your BigQuery project can be accessed by GoodData. For more information about the required privileges, see GoodData-BigQuery Integration Details.

Steps:

  1. Create a BigQuery service account.
  2. Create a new dataset named gdtrial.
  3. In the dataset, create a new table based on the sample CSV file.
    1. Name the destination table out_order_lines.
    2. Load the following schema:

      a__order_line_id:STRING,
      a__order_id:STRING,
      d__date:DATE,
      a__order_status:STRING,
      a__customer_id:STRING,
      a__customer_name:STRING,
      a__state:STRING,
      a__product_id:STRING,
      a__product_name:STRING,
      a__category:STRING,
      f__price:FLOAT,
      f__quantity:FLOAT
    3. Set the Header rows to skip option to 1.
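
The console steps above can also be done in a single command with the bq command-line tool. A sketch (the local file path is an assumption; the dataset, table, and schema match the steps above):

```shell
bq load --source_format=CSV --skip_leading_rows=1 \
    gdtrial.out_order_lines \
    ./gooddata_integration_sample_data.csv \
    a__order_line_id:STRING,a__order_id:STRING,d__date:DATE,a__order_status:STRING,a__customer_id:STRING,a__customer_name:STRING,a__state:STRING,a__product_id:STRING,a__product_name:STRING,a__category:STRING,f__price:FLOAT,f__quantity:FLOAT
```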

Microsoft SQL Server

Log in to your Microsoft SQL Server with the account that you plan to use with GoodData. Ensure that the user that you want to use in the Data Source has all necessary privileges and that your Microsoft SQL Server can be accessed by GoodData. For more information about the required privileges, see GoodData-Microsoft SQL Server Integration Details.

Steps:

  1. Create a separate Microsoft SQL Server instance.
  2. Execute the following SQL queries to create a schema and table:

    DROP TABLE IF EXISTS [gdtrial].[out_csv_order_lines];
    DROP SCHEMA IF EXISTS [gdtrial];
    CREATE SCHEMA [gdtrial];
    
    CREATE TABLE [gdtrial].[out_csv_order_lines] (
        cp__order_line_id VARCHAR(255),
        a__order_id VARCHAR(255),
        d__date date,
        a__order_status VARCHAR(255),
        a__customer_id VARCHAR(255),
        a__customer_name VARCHAR(255),
        a__state VARCHAR(255),
        a__product_id VARCHAR(255),
        a__product_name VARCHAR(255),
        a__category VARCHAR(255),
        f__price NUMERIC(15,3),
        f__quantity NUMERIC(15,3)
    );
  3. Download and execute this SQL script, which uses INSERT statements to load the sample CSV data into your database.

MongoDB BI Connector

Ensure that the user that you want to use in the Data Source has all necessary privileges and that your MongoDB Connector for BI can be accessed by GoodData. For more information about the required privileges, see GoodData-MongoDB BI Connector Integration Details.

Steps:

  1. Download the sample CSV data.
  2. Run the following command after specifying the path to the sample CSV data in the --file command-line argument (for example, --file=/users/gd/downloads/gooddata_integration_sample_data.csv).

    mongoimport --db=gdctrial --collection=out_csv_order_lines --type=csv \
       --columnsHaveTypes \
       --fields="cp__order_line_id.string(),a__order_id.string(),d__date.date(2006-01-02),a__order_status.string(),a__customer_id.string(),a__customer_name.string(),a__state.string(),a__product_id.string(),a__product_name.string(),a__category.string(),f__price.double(),f__quantity.int32()" \
       --file=/users/gd/downloads/gooddata_integration_sample_data.csv
  3. Restart the MongoDB Connector for BI.
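
To spot-check the import, you can count the loaded documents with mongosh (the database name matches the --db value in the mongoimport command above):

```shell
mongosh gdctrial --eval 'db.out_csv_order_lines.countDocuments()'
```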

MySQL

Log in to your MySQL database with the account that you plan to use with GoodData. Ensure that the user that you want to use in the Data Source has all necessary privileges and that your MySQL database can be accessed by GoodData. For more information about the required privileges, see GoodData-MySQL Integration Details.

Steps:

  1. Create a separate MySQL database.
  2. Execute the following SQL queries to create a database and table:

    CREATE DATABASE IF NOT EXISTS gdtrial;
    
    DROP TABLE IF EXISTS gdtrial.out_csv_order_lines;
    CREATE TABLE gdtrial.out_csv_order_lines (
        cp__order_line_id VARCHAR(255),
        a__order_id VARCHAR(255),
        d__date date,
        a__order_status VARCHAR(255),
        a__customer_id VARCHAR(255),
        a__customer_name VARCHAR(255),
        a__state VARCHAR(255),
        a__product_id VARCHAR(255),
        a__product_name VARCHAR(255),
        a__category VARCHAR(255),
        f__price NUMERIC(15,3),
        f__quantity NUMERIC(15,3)
    );
  3. Download and execute this SQL script, which uses INSERT statements to load the sample CSV data into your database.
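
The statements and the downloaded script can be run non-interactively with the mysql client. A sketch with placeholder values (the host, user, and script file names are assumptions; substitute your own):

```shell
mysql -h <host> -u <user> -p < create_table.sql
mysql -h <host> -u <user> -p < insert_sample_data.sql
```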

PostgreSQL

Log in to your PostgreSQL database with the account that you plan to use with GoodData. Ensure that the user that you want to use in the Data Source has all necessary privileges and that your PostgreSQL database can be accessed by GoodData. For more information about the required privileges, see GoodData-PostgreSQL Integration Details.

Steps:

  1. Create a separate PostgreSQL database.
  2. Execute the following SQL queries to create a schema and table:

    DROP SCHEMA IF EXISTS gdtrial CASCADE;
    CREATE SCHEMA gdtrial;
    
    DROP TABLE IF EXISTS gdtrial.out_order_lines;
    CREATE TABLE gdtrial.out_order_lines (
        cp__order_line_id VARCHAR(255),
        a__order_id VARCHAR(255),
        d__date date,
        a__order_status VARCHAR(255),
        a__customer_id VARCHAR(255),
        a__customer_name VARCHAR(255),
        a__state VARCHAR(255),
        a__product_id VARCHAR(255),
        a__product_name VARCHAR(255),
        a__category VARCHAR(255),
        f__price NUMERIC(15,3),
        f__quantity NUMERIC(15,3)
    );
  3. Download and execute this SQL script, which uses INSERT statements to load the sample CSV data into your database.
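
The statements and the downloaded script can be run non-interactively with the psql client. A sketch with placeholder values (the host, user, database, and script file names are assumptions; substitute your own):

```shell
psql -h <host> -U <user> -d <database> -f create_table.sql
psql -h <host> -U <user> -d <database> -f insert_sample_data.sql
```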

Redshift

Log in to your Redshift cluster with the account that you plan to use with GoodData. Ensure that the user that you want to use in the Data Source has all necessary privileges and that your Redshift cluster can be accessed by GoodData. For more information about the required privileges, see GoodData-Redshift Integration Details.

Steps:

  1. Create a separate Redshift database.
  2. Execute the following SQL queries to create a schema and table:

    DROP SCHEMA IF EXISTS gdtrial CASCADE;
    
    CREATE SCHEMA gdtrial;
    
    CREATE TABLE gdtrial.out_csv_order_lines (
        a__order_line_id VARCHAR(255),
        a__order_id VARCHAR(255),
        d__date date,
        a__order_status VARCHAR(255),
        a__customer_id VARCHAR(255),
        a__customer_name VARCHAR(255),
        a__state VARCHAR(255),
        a__product_id VARCHAR(255),
        a__product_name VARCHAR(255),
        a__category VARCHAR(255),
        f__price DECIMAL(12,2),
        f__quantity DECIMAL(15,6)
    );
  3. Download and execute this SQL script, which uses INSERT statements to load the sample CSV data into your database.
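
Redshift speaks the PostgreSQL wire protocol, so the statements and the downloaded script can also be run with the psql client against the cluster endpoint (5439 is Redshift's default port; the endpoint, user, database, and script file name are placeholders):

```shell
psql -h <cluster>.redshift.amazonaws.com -p 5439 -U <user> -d <database> -f insert_sample_data.sql
```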

Snowflake

Log in to your Snowflake instance with the account that you plan to use with GoodData. Ensure that the user that you want to use in the Data Source has all necessary privileges and that your Snowflake instance can be accessed by GoodData. For more information about the required privileges, see GoodData-Snowflake Integration Details.

Execute the following SQL queries to create a database, schema, and table:

-- Create the "gdtrial" database
CREATE OR REPLACE DATABASE gdtrial;

-- Create the "gdtrial" schema
CREATE OR REPLACE SCHEMA gdtrial;

-- Create the "out_csv_order_lines" table
CREATE OR REPLACE TABLE gdtrial.out_csv_order_lines (
a__order_line_id VARCHAR(255),
a__order_id VARCHAR(255),
d__date date,
a__order_status VARCHAR(255),
a__customer_id VARCHAR(255),
a__customer_name VARCHAR(255),
a__state VARCHAR(255),
a__product_id VARCHAR(255),
a__product_name VARCHAR(255),
a__category VARCHAR(255),
f__price DECIMAL(12,2),
f__quantity DECIMAL(15,6)
);

-- Create a file format describing the format of the order_lines.csv file
CREATE OR REPLACE FILE FORMAT "GDTRIAL"."GDTRIAL".gdtrialfileformat
    TYPE = 'CSV'
    COMPRESSION = 'NONE'
    FIELD_DELIMITER = ','
    RECORD_DELIMITER = '\n'
    SKIP_HEADER = 1
    FIELD_OPTIONALLY_ENCLOSED_BY = 'NONE'
    TRIM_SPACE = FALSE
    ERROR_ON_COLUMN_COUNT_MISMATCH = TRUE
    ESCAPE = 'NONE'
    ESCAPE_UNENCLOSED_FIELD = '\134'
    DATE_FORMAT = 'AUTO'
    TIMESTAMP_FORMAT = 'AUTO'
    NULL_IF = ('\\N');

-- Create an S3 stage with the sample data
CREATE OR REPLACE STAGE gdstage
file_format = gdtrialfileformat
url = 's3://gdc-prod-gdtrial/data';

-- Copy the sample CSV file from the S3 stage into the "out_csv_order_lines" table
COPY INTO out_csv_order_lines
FROM @gdstage/step1/order_lines.csv
file_format = (format_name = gdtrialfileformat);
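
The whole sequence above can be saved to a file and run as a script with the SnowSQL CLI. A sketch (the account identifier, user, and script file name are placeholders; substitute your own):

```shell
snowsql -a <account_identifier> -u <user> -f load_sample_data.sql
```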