Deploy a Data Loading Process for a Data Pipeline Brick

Data pipeline bricks help you perform specific tasks within the scope of the data preparation and distribution pipeline. Depending on its type, a data pipeline brick can download data from a data source, transform the data, or upload it to Data Warehouse for further distribution to workspaces.

For more information, see Data Preparation and Distribution Pipeline and Brick Reference.

Steps:

  1. Click your name in the top right corner, and select Data Integration Console.
    Alternatively, go to https://{your.domain.com}/admin/disc/.
  2. On the top navigation bar, select Workspaces and click the name of the workspace where you want to deploy a data pipeline brick.
  3. Click Deploy Process.
    The deployment dialog opens.
  4. From the Component dropdown, select the brick (downloader, executor, integrator, or utility) that you want to deploy.
  5. Fill in the fields to provide the required information, such as the path to the configuration file and the S3 properties.

    To save time when entering those parameters, you can set up dedicated Data Sources with this information and then reference those Data Sources in the deployed process. For more information, see Reuse Parameters in Multiple Data Loading Processes.

  6. Enter the name of the process.
    An alias is automatically generated from the name. You can update it if needed.

    The alias is a reference to the process, unique within the workspace. The alias is used when exporting and importing the data pipeline (see Export and Import the Data Pipeline).

  7. Click Deploy.
    The process is deployed.
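The fields from the deployment dialog can also be assembled programmatically, for example when scripting many deployments. The following sketch only builds the request payload such a deployment might carry; the field names (component identifier, configuration path, S3 property keys) are assumptions mirroring the dialog fields described above, not a documented API schema.

```python
import json

# Hypothetical sketch: assemble a deployment payload from the same fields
# the Deploy Process dialog asks for. All key names below are assumptions
# illustrating the dialog's inputs, not a documented API contract.

def build_deploy_payload(name, component, config_path, s3_params):
    """Build a deployment payload for one data pipeline brick process."""
    # The alias is auto-derived from the name and must be unique
    # within the workspace (used by pipeline export/import).
    alias = name.lower().replace(" ", "-")
    return {
        "process": {
            "name": name,
            "alias": alias,
            "component": component,  # e.g. a downloader, executor, or utility brick
            "parameters": {
                "config_path": config_path,  # path to the configuration file
                **s3_params,                 # S3 properties (bucket, region, ...)
            },
        }
    }

payload = build_deploy_payload(
    name="Sales CSV Downloader",
    component="csv_downloader",              # assumed component identifier
    config_path="/data/configuration.json",  # assumed example path
    s3_params={"s3_bucket": "my-bucket", "s3_region": "us-east-1"},
)
print(json.dumps(payload, indent=2))
```

Keeping the alias derivation in one place means exported and re-imported pipelines reference the same process identifier, which is why the dialog generates it from the name by default.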

You can now schedule the deployed data loading process (see Schedule a Data Load).
