Rollout Brick

The rollout brick synchronizes all the client workspaces (logical data model, ETL processes, dashboards, reports, metrics, and so on) in the synchronized segments with the latest version of the segment's master workspace.

For information about how to use the brick, see How to Use a Brick.


Prerequisites

Before using the rollout brick, make sure that the following is true:

  • The release brick (see Release Brick) and the provisioning brick (see Provisioning Brick) have been executed.
  • The segments have been created and associated with the clients.

How the Brick Works

The rollout brick synchronizes all the client workspaces (logical data model, ETL processes, dashboards, reports, metrics, and so on) in the synchronized segments with the latest version of the segment's master workspace. The latest version of the segment's master workspace is the one that the release brick (see Release Brick) created at its last execution.

The segments to synchronize are specified in the 'segments_filter' parameter.

Input

The rollout brick does not require any input besides the parameters that you have to add when scheduling the brick process.

Parameters

When scheduling the deployed brick (see How to Use a Brick and Schedule a Process on the Data Integration Console), add parameters to the schedule.

organization (string, mandatory, default: n/a)

The name of the domain where the brick is executed

segments_filter (array, mandatory, default: n/a)

The segments that you want to roll out

You must encode this parameter using the 'gd_encoded_params' parameter (see Specifying Complex Parameters).

Example:

"segments_filter": ["BASIC", "PREMIUM"]

ads_client (JSON, mandatory, default: n/a)

The Data Warehouse (ADS) instance where the lcm_release table exists

You must encode this parameter using the 'gd_encoded_params' parameter (see Specifying Complex Parameters).

Example:

"ads_client": {
  "jdbc_url": "jdbc:gdc:datawarehouse://my.company.com/gdc/datawarehouse/instances/kluuu4h3sogai9x2ztn4wc0g8lta7sn8"
}

data_product (string, optional, default: default)

The data product that contains the segments that you want to roll out

NOTE: If you have two or more data products, use the 'release_table_name' parameter (see later in this table).

release_table_name (string, optional, default: LCM_RELEASE)

(If you have multiple data products stored in one Data Warehouse instance) The name of the table in the Data Warehouse instance where the latest segment's master workspace IDs and versions are stored

technical_users (array, optional, default: n/a)

The users that are going to be added as admins to each client workspace

You must encode this parameter using the 'gd_encoded_params' parameter (see Specifying Complex Parameters).

Example:

"technical_users": ["dev_admin@gooddata.com", "admin@gooddata.com"]

update_preference (JSON, optional, default: n/a)

See update_preference.
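For instance, if you keep two data products in one Data Warehouse instance, you might maintain a separate release table per data product and point the brick at it. The following sketch is illustrative only; the data product name and table name are hypothetical:

```json
"data_product": "sales",
"release_table_name": "LCM_RELEASE_SALES"
```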

update_preference

The 'update_preference' parameter specifies the properties of the MAQL diffs that are applied when propagating changes from the master workspace to the client workspaces.

The 'update_preference' parameter is set up in JSON format. You must encode this parameter using the 'gd_encoded_params' parameter (see Specifying Complex Parameters).

The parameter is defined by the following JSON structure:

"update_preference": {
  "allow_cascade_drops": true|false,
  "keep_data": true|false
}
  • allow_cascade_drops: If set to 'true', the MAQL diff uses 'drops' with the 'cascade' option. These drops transitively delete all dashboard objects connected to the dropped LDM object. Set it to 'true' only if you are certain that you do not need metrics, reports, or dashboards that use the dropped object.
  • keep_data: If set to 'true', the MAQL diff execution does not truncate the data currently loaded in datasets included in the diff.

By default, the 'update_preference' parameter is set to the following, which is the least invasive scenario:

"update_preference": {
  "allow_cascade_drops": false,
  "keep_data": true
}

We recommend that you use the default configuration and set it up explicitly in the brick schedule. This way, you will be able to easily locate this parameter in your schedule and update it as needed in case of a failure.


The following are possible scenarios of how you can set up the 'update_preference' parameter depending on how you want the brick to behave:

  • You define neither 'allow_cascade_drops' nor 'keep_data'. The brick applies MAQL diffs starting from the least invasive scenario (the default configuration) and moves towards the most invasive one until a diff succeeds.

    "update_preference": { }
  • You define only 'keep_data' and do not set 'allow_cascade_drops' explicitly. The brick will first try to use the MAQL diffs with 'allow_cascade_drops' set to 'false' as a less invasive alternative. If it fails, the brick will try 'allow_cascade_drops' set to 'true'.

    "update_preference": {
      "keep_data": false|true
    }
  • You set 'allow_cascade_drops' to 'true' and do not set 'keep_data' explicitly. The brick will first try to use the MAQL diffs with 'keep_data' set to 'true' as a less invasive alternative. If it fails, the brick will try 'keep_data' set to 'false'.

    "update_preference": {
      "allow_cascade_drops": true
    }
  • You set 'allow_cascade_drops' to 'true' and set 'keep_data' to 'false'. This is the most invasive scenario. Use it carefully.

    "update_preference": {
      "allow_cascade_drops": true,
      "keep_data": false
    }

Example: Brick Configuration

The following is an example of configuring the brick parameters in the JSON format:

{
  "organization": "myCustomerDomain",
  "gd_encoded_params": {
    "segments_filter": ["BASIC"],
    "ads_client": {
      "jdbc_url": "jdbc:gdc:datawarehouse://analytics.myCustomDomain.com/gdc/datawarehouse/instances/kluuu4h3sogai9x2ztn4wc0g8lta7sn8"
    },
    "update_preference": {
      "allow_cascade_drops": false,
      "keep_data": true
    }
  }
}

Advanced Settings

This section describes advanced settings of the rollout brick.

Change these settings only if you are confident in executing the task or have no other options. Adjusting the advanced options incorrectly may cause unexpected side effects.

Proceed with caution.

dynamic_params (JSON, optional, default: n/a)

See 'dynamic_params' in Provisioning Brick.

delete_extra_process_schedule (Boolean, optional, default: true)

Specifies how the brick handles the processes and schedules in the client workspaces that are either not present in the master workspace or have been renamed in the master workspace

  • If not set or set to 'true', the brick deletes such processes and schedules from the client workspaces.
  • If set to 'false', the brick keeps such processes and schedules in the client workspaces.

exclude_fact_rule (Boolean, optional, default: false)

Specifies whether to skip number format validation (up to 15 digits in total, including a maximum of 6 digits after the decimal point).

  • If not set or set to 'false', number format validation is applied.
  • If set to 'true', number format validation is skipped.

synchronize_ldm (string, optional, default: diff_against_master_with_fallback)

Specifies how the brick synchronizes the logical data model (LDM) between the master workspaces and their corresponding client workspaces. The brick checks the LDM of a client workspace and determines whether a MAQL diff should be applied and what the DDL statement should be.

Possible values:

  • diff_against_clients: The brick creates and applies a MAQL diff for each client workspace separately.
  • diff_against_master: The brick creates a MAQL diff for the master workspace only and applies it to all client workspaces in the segment. This option is faster than diff_against_clients, but if the LDM of a client workspace has been adjusted, the synchronization fails for that workspace.
  • diff_against_master_with_fallback: The brick creates a MAQL diff for the master workspace and applies it to all client workspaces in the segment. If the LDM of a client workspace has been adjusted and the synchronization fails for that workspace, the brick falls back to creating a MAQL diff for that client workspace and applying it separately. This option is usually as fast as diff_against_master provided that no adjustments have been made in the client workspaces, and it is more resilient when adjustments have been made. This is the default.

include_computed_attributes (Boolean, optional, default: true)

Specifies whether to include computed attributes (see Use Computed Attributes) in the logical data model (LDM).

  • If not set or set to 'true', the datasets related to the computed attributes are included in the LDM.
  • If set to 'false', the computed attributes are ignored in both the master and client workspaces.
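As an illustration, a schedule that overrides several of the advanced settings might combine them with the regular parameters as sketched below. This is not a definitive configuration: the parameter values are examples only, and whether Boolean values are passed as JSON literals or strings follows your scheduler's conventions.

```json
{
  "organization": "myCustomerDomain",
  "synchronize_ldm": "diff_against_clients",
  "delete_extra_process_schedule": "false",
  "include_computed_attributes": "true",
  "gd_encoded_params": {
    "segments_filter": ["BASIC"],
    "ads_client": {
      "jdbc_url": "jdbc:gdc:datawarehouse://analytics.myCustomDomain.com/gdc/datawarehouse/instances/kluuu4h3sogai9x2ztn4wc0g8lta7sn8"
    }
  }
}
```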