Create a Data Source

A Data Source is an entity that holds the properties (location and access credentials) of the source of your data.

  • If you integrate data from data warehouses, a Data Source stores the properties of the data warehouse and, if you use the Output Stage, the prefix for the Output Stage tables and views.
  • If you integrate data from object storage services, a Data Source stores the properties of an object storage service.

For more information, see Direct Data Distribution from Data Warehouses and Object Storage Services.

Data Sources exist in the context of a domain. You do not need access to any workspace to view the Data Sources in your domain.

You can also create Data Sources via the API.
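
When using the API, you describe the Data Source in a request payload. The following sketch shows roughly what such a payload might look like for a Snowflake Data Source; the field names and structure here are illustrative assumptions, not the authoritative schema, so consult the API documentation for the exact endpoint and payload format:

```json
{
  "dataSource": {
    "name": "Production Snowflake",
    "alias": "prod_snowflake",
    "prefix": "out_",
    "connectionInfo": {
      "snowflake": {
        "url": "jdbc:snowflake://acme.snowflakecomputing.com",
        "database": "ANALYTICS",
        "schema": "PUBLIC",
        "warehouse": "LOAD_WH"
      }
    }
  }
}
```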

Create a Data Source for an Azure Blob Storage Account

Steps:

  1. On the top navigation bar, select Data.
    The LDM Modeler opens.
  2. Click Sources.
  3. Click the Azure Blob Storage icon, or click Add data in the left pane and select Azure Blob Storage.
    The connection parameter screen appears.
  4. Enter the name of the Data Source.
    The alias will be automatically generated from the name. You can update it, if needed.

    The alias is a reference to the Data Source, unique within the domain. The alias is used when exporting and importing the data pipeline (see Export and Import the Data Pipeline).

  5. Enter the connection string and the path to the source data.
    For more information, see GoodData-Azure Blob Storage Integration Details.
  6. Click Test connection.
    If the connection succeeds, a confirmation message appears.
  7. Click Save.
    The Data Source is created. The screen with the connection details opens.
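
The connection string follows the standard Azure Storage format. For example (the account name and key are placeholders; your values come from the Azure portal):

```
DefaultEndpointsProtocol=https;AccountName=mystorageaccount;AccountKey=<account-key>;EndpointSuffix=core.windows.net
```

The path then points to the container and folder that hold the source data, for example a container named data-exports with a csv folder inside it.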

Create a Data Source for an Azure SQL Database

Steps:

  1. On the top navigation bar, select Data.
    The LDM Modeler opens.
  2. Click Sources.
  3. Click the Azure SQL icon, or click Add data in the left pane and select Azure SQL.
    The connection parameter screen appears.
  4. Enter the name of the Data Source.
    The alias will be automatically generated from the name. You can update it, if needed.

    The alias is a reference to the Data Source, unique within the domain. The alias is used when exporting and importing the data pipeline (see Export and Import the Data Pipeline).

  5. Enter the details of your connection to the Azure SQL Database.
    For more information, see GoodData-Azure SQL Database Integration Details.

  6. (Optional) If you plan to use the Output Stage (see Direct Data Distribution from Data Warehouses and Object Storage Services), enter the prefix that will be used for the Output Stage tables and views.
  7. Click Test connection.
    If the connection succeeds, a confirmation message appears.
  8. Click Save.
    The Data Source is created. The screen with the connection details opens.
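
As an illustration, the connection details for an Azure SQL Database typically look like the following (all values are placeholders; Azure SQL servers listen on port 1433 by default):

```
Host:     myserver.database.windows.net
Port:     1433
Database: sales_db
Username: loader
Password: <password>
```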

Create a Data Source for an Azure Synapse Analytics Service

Steps:

  1. On the top navigation bar, select Data.
    The LDM Modeler opens.
  2. Click Sources.
  3. Click the Synapse SQL icon, or click Add data in the left pane and select Synapse SQL.
    The connection parameter screen appears.
  4. Enter the name of the Data Source.
    The alias will be automatically generated from the name. You can update it, if needed.

    The alias is a reference to the Data Source, unique within the domain. The alias is used when exporting and importing the data pipeline (see Export and Import the Data Pipeline).

  5. Enter the details of your connection to the Azure Synapse Analytics service.
    For more information, see GoodData-Azure Synapse Analytics Integration Details.

  6. (Optional) If you plan to use the Output Stage (see Direct Data Distribution from Data Warehouses and Object Storage Services), enter the prefix that will be used for the Output Stage tables and views.
  7. Click Test connection.
    If the connection succeeds, a confirmation message appears.
  8. Click Save.
    The Data Source is created. The screen with the connection details opens.
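
As an illustration, the connection details for an Azure Synapse Analytics dedicated SQL pool typically look like the following (all values are placeholders; the endpoint format is standard for Synapse workspaces):

```
Host:     myworkspace.sql.azuresynapse.net
Port:     1433
Database: sales_pool
Username: loader
Password: <password>
```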

Create a Data Source for a BigQuery Project

When creating a Data Source for a BigQuery project, you have the following options:

  • Upload the service account key file.
  • Fill in all the connection details manually.

For more information, see GoodData-BigQuery Integration Details.

Steps:

  1. On the top navigation bar, select Data.
    The LDM Modeler opens.
  2. Click Sources.
  3. Click the BigQuery icon, or click Add data in the left pane and select Google BigQuery.
    You are prompted to provide a service account key file.
  4. Do one of the following:
    • If you have a service account key, click Browse, navigate to the file, and upload it.
    • If you want to enter the details manually, click connect manually and enter the details of your connection to the BigQuery project.
  5. Enter the name of the Data Source.
    The alias will be automatically generated from the name. You can update it, if needed.

    The alias is a reference to the Data Source, unique within the domain. The alias is used when exporting and importing the data pipeline (see Export and Import the Data Pipeline).

  6. (Optional) If you plan to use the Output Stage (see Direct Data Distribution from Data Warehouses and Object Storage Services), enter the prefix that will be used for the Output Stage tables and views.
  7. Click Test connection.
    If the connection succeeds, a confirmation message appears.
  8. Click Save.
    The Data Source is created. The screen with the connection details opens.
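
The service account key file is a JSON document issued by Google Cloud when you create a key for a service account. Trimmed to its essential fields, it looks like this (all values are placeholders):

```json
{
  "type": "service_account",
  "project_id": "my-gcp-project",
  "private_key_id": "<key-id>",
  "private_key": "-----BEGIN PRIVATE KEY-----\n<key-material>\n-----END PRIVATE KEY-----\n",
  "client_email": "gooddata-loader@my-gcp-project.iam.gserviceaccount.com",
  "client_id": "<client-id>"
}
```

If you connect manually instead, you provide the same information (project ID, client email, private key) as individual fields.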

Create a Data Source for a Microsoft SQL Server

Steps:

  1. On the top navigation bar, select Data.
    The LDM Modeler opens.
  2. Click Sources.
  3. Click the Microsoft SQL Server icon, or click Add data in the left pane and select Microsoft SQL Server.
    The connection parameter screen appears.
  4. Enter the name of the Data Source.
    The alias will be automatically generated from the name. You can update it, if needed.

    The alias is a reference to the Data Source, unique within the domain. The alias is used when exporting and importing the data pipeline (see Export and Import the Data Pipeline).

  5. Enter the details of your connection to the Microsoft SQL Server.
    For more information, see GoodData-Microsoft SQL Server Integration Details.

  6. (Optional) If you plan to use the Output Stage (see Direct Data Distribution from Data Warehouses and Object Storage Services), enter the prefix that will be used for the Output Stage tables and views.
  7. Click Test connection.
    If the connection succeeds, a confirmation message appears.
  8. Click Save.
    The Data Source is created. The screen with the connection details opens.
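
As an illustration, the connection details for a Microsoft SQL Server typically look like the following (all values are placeholders; SQL Server listens on port 1433 by default):

```
Host:     mssql.example.com
Port:     1433
Database: sales_db
Username: loader
Password: <password>
```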

Create a Data Source for a PostgreSQL Database

Steps:

  1. On the top navigation bar, select Data.
    The LDM Modeler opens.
  2. Click Sources.
  3. Click the PostgreSQL icon, or click Add data in the left pane and select PostgreSQL.
    The connection parameter screen appears.
  4. Enter the name of the Data Source.
    The alias will be automatically generated from the name. You can update it, if needed.

    The alias is a reference to the Data Source, unique within the domain. The alias is used when exporting and importing the data pipeline (see Export and Import the Data Pipeline).

  5. Enter the details of your connection to the PostgreSQL database.
    For more information, see GoodData-PostgreSQL Integration Details.

  6. (Optional) If you plan to use the Output Stage (see Direct Data Distribution from Data Warehouses and Object Storage Services), enter the prefix that will be used for the Output Stage tables and views.
  7. Click Test connection.
    If the connection succeeds, a confirmation message appears.
  8. Click Save.
    The Data Source is created. The screen with the connection details opens.
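
As an illustration, the connection details for a PostgreSQL database typically look like the following (all values are placeholders; PostgreSQL listens on port 5432 by default):

```
Host:     postgres.example.com
Port:     5432
Database: sales_db
Username: loader
Password: <password>
```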

Create a Data Source for a Redshift Cluster

Steps:

  1. On the top navigation bar, select Data.
    The LDM Modeler opens.
  2. Click Sources.
  3. Click the Redshift icon, or click Add data in the left pane and select Amazon Redshift.
    The connection parameter screen appears.
  4. Enter the name of the Data Source.
    The alias will be automatically generated from the name. You can update it, if needed.

    The alias is a reference to the Data Source, unique within the domain. The alias is used when exporting and importing the data pipeline (see Export and Import the Data Pipeline).

  5. Enter the details of your connection to the Redshift cluster.
    For more information, see GoodData-Redshift Integration Details.
  6. (Optional) If you plan to use the Output Stage (see Direct Data Distribution from Data Warehouses and Object Storage Services), enter the prefix that will be used for the Output Stage tables and views.
  7. Click Test connection.
    If the connection succeeds, a confirmation message appears.
  8. Click Save.
    The Data Source is created. The screen with the connection details opens.
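
As an illustration, the connection details for a Redshift cluster typically look like the following (all values are placeholders; the host is the cluster endpoint from the AWS console, and Redshift listens on port 5439 by default):

```
Host:     examplecluster.abc123xyz789.us-east-1.redshift.amazonaws.com
Port:     5439
Database: dev
Username: loader
Password: <password>
```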

Create a Data Source for an S3 Bucket

Steps:

  1. On the top navigation bar, select Data.
    The LDM Modeler opens.
  2. Click Sources.
  3. Click the S3 icon, or click Add data in the left pane and select Amazon S3.
    The connection parameter screen appears.
  4. Enter the name of the Data Source.
    The alias will be automatically generated from the name. You can update it, if needed.

    The alias is a reference to the Data Source, unique within the domain. The alias is used when exporting and importing the data pipeline (see Export and Import the Data Pipeline).

  5. Enter the path to your S3 bucket, the access key, and the secret key.
    For more information, see GoodData-S3 Integration Details.
  6. If your S3 bucket uses a region other than the default US East (N. Virginia) region, enter the bucket's region code.
    For the list of the region codes, see AWS Service Endpoints.
  7. If your S3 bucket supports server-side encryption, select the Server side encryption checkbox.
  8. Click Test connection.
    If the connection succeeds, a confirmation message appears.
  9. Click Save.
    The Data Source is created. The screen with the connection details opens.
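
As an illustration, the S3 connection parameters might look like the following (the access key pair comes from AWS IAM; the access key shown is AWS's documented example value, and the other values are placeholders):

```
Bucket path: s3://my-company-exports/gooddata/
Access key:  AKIAIOSFODNN7EXAMPLE
Secret key:  <secret-access-key>
Region:      eu-west-1
```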

Create a Data Source for a Snowflake Instance

Steps:

  1. On the top navigation bar, select Data.
    The LDM Modeler opens.
  2. Click Sources.
  3. Click the Snowflake icon, or click Add data in the left pane and select Snowflake.
    The connection parameter screen appears.
  4. Enter the name of the Data Source.
    The alias will be automatically generated from the name. You can update it, if needed.

    The alias is a reference to the Data Source, unique within the domain. The alias is used when exporting and importing the data pipeline (see Export and Import the Data Pipeline).

  5. Enter the details of your connection to the Snowflake instance.
    For more information, see GoodData-Snowflake Integration Details.
  6. (Optional) If you plan to use the Output Stage (see Direct Data Distribution from Data Warehouses and Object Storage Services), enter the prefix that will be used for the Output Stage tables and views.
  7. Click Test connection.
    If the connection succeeds, a confirmation message appears.
  8. Click Save.
    The Data Source is created. The screen with the connection details opens.
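
As an illustration, the connection details for a Snowflake instance typically include the account URL plus the warehouse, database, and schema to use (all values are placeholders; the exact field labels in the UI may differ):

```
Connection URL: acme.snowflakecomputing.com
Warehouse:      LOAD_WH
Database:       ANALYTICS
Schema:         PUBLIC
Username:       GOODDATA_LOADER
Password:       <password>
```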
