Rollout Brick

The rollout brick synchronizes all the client workspaces (logical data model, ETL processes, dashboards, reports, metrics, and so on) in the synchronized segments with the latest version of the segment's master workspace.

For information about how to use the brick, see How to Use a Brick.

Prerequisites

Before using the rollout brick, make sure that the following is true:

  • The release brick (see Release Brick) and the provisioning brick (see Provisioning Brick) have been executed.
  • The segments have been created and associated with the clients.

How the Brick Works

The rollout brick synchronizes all the client workspaces (logical data model, ETL processes, dashboards, reports, metrics, and so on) in the synchronized segments with the latest version of the segment's master workspace. The latest version of the segment's master workspace is the one that the release brick (see Release Brick) created at its last execution.

The segments to synchronize are specified in the segments_filter parameter.

Rollout and Provisioning Metadata

Client synchronization creates metadata objects on the GoodData platform. These objects are then used during provisioning.

These objects are automatically cleaned up three years after the synchronization/rollout process was last executed.

To prevent provisioning issues caused by missing metadata objects, always keep the production master workspace synchronized with the production client workspaces. Use a different workspace for development.

Input

The rollout brick does not require any input besides the parameters that you have to add when scheduling the brick process.

Parameters

When scheduling the deployed brick (see How to Use a Brick and Schedule a Data Loading Process), add parameters to the schedule.

organization
Type: string
Mandatory? yes
Default: n/a

The name of the domain where the brick is executed
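
Example (the domain name shown is illustrative; it matches the configuration example later in this article):

"organization": "myCustomDomain"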

segments_filter
Type: array
Mandatory? yes
Default: n/a

The segments that you want to roll out

You must encode this parameter using the gd_encoded_params parameter (see Specifying Complex Parameters).

Example:

"segments_filter": ["BASIC", "PREMIUM"]

ads_client
Type: JSON
Mandatory? see description
Default: n/a

(If your input source resides on Data Warehouse) The Data Warehouse (ADS) instance where the lcm_release table exists

You must encode this parameter using the gd_encoded_params parameter (see Specifying Complex Parameters).

Example:

"ads_client": {
  "jdbc_url": "jdbc:gdc:datawarehouse://my.company.com/gdc/datawarehouse/instances/kluuu4h3sogai9x2ztn4wc0g8lta7sn8"
}

data_product
Type: string
Mandatory? no
Default: default

The data product that contains the segments that you want to roll out

NOTE: If your input source resides on Data Warehouse and you have two or more data products, use the release_table_name parameter (see further in this table).
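
Example (a sketch; the value shown is the parameter's default):

"data_product": "default"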

release_table_name
Type: string
Mandatory? see description
Default: LCM_RELEASE

(If your input source resides on Data Warehouse and you have multiple data products stored in one Data Warehouse instance) The name of the table in the Data Warehouse instance where the latest segment's master workspace IDs and versions are stored
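
Example (a sketch; shown with the default table name):

"release_table_name": "LCM_RELEASE"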

technical_users
Type: array
Mandatory? no
Default: n/a

The users that are going to be added as admins to each client workspace

The user logins are case-sensitive and must be written in lowercase.

You must encode this parameter using the gd_encoded_params parameter (see Specifying Complex Parameters).

Example:

"technical_users": ["dev_admin@gooddata.com", "admin@gooddata.com"]

update_preference
Type: JSON
Mandatory? no
Default: n/a

See update_preference.

update_preference

The update_preference parameter specifies the properties of MAQL diffs when propagating data from the master workspace to the clients' workspaces.

The update_preference parameter is set up in JSON format. You must encode this parameter using the gd_encoded_params parameter (see Specifying Complex Parameters).

The parameter is defined by the following JSON structure:

"update_preference": {
  "allow_cascade_drops": true|false,
  "keep_data": true|false
}
  • allow_cascade_drops: If set to true, the MAQL diff uses drops with the cascade option. These drops transitively delete all dashboard objects connected to the dropped LDM object. Set it to true only if you are certain that you do not need metrics, reports, or dashboards that use the dropped object.
  • keep_data: If set to true, the MAQL diff execution does not truncate the data currently loaded in datasets included in the diff.

By default, the update_preference parameter is set to the following, which is the least invasive scenario:

"update_preference": {
  "allow_cascade_drops": false,
  "keep_data": true
}

We recommend that you use the default configuration and set it up explicitly in the brick schedule. This way, you will be able to easily locate this parameter in your schedule and update it as needed in case of a failure.


The following are possible scenarios of how you can set up the update_preference parameter depending on how you want the brick to behave:

  • You define neither allow_cascade_drops nor keep_data. The brick uses the MAQL diffs starting from the least invasive scenario (the default configuration) and moving towards the most invasive one until a MAQL diff succeeds.

    "update_preference": { }
  • You define only keep_data and do not set allow_cascade_drops explicitly. The brick will first try to use the MAQL diffs with allow_cascade_drops set to false as a less invasive alternative. If it fails, the brick will try allow_cascade_drops set to true.

    "update_preference": {
      "keep_data": false|true
    }
  • You set allow_cascade_drops to true and do not set keep_data explicitly. The brick will first try to use the MAQL diffs with keep_data set to true as a less invasive alternative. If it fails, the brick will try keep_data set to false.

    "update_preference": {
      "allow_cascade_drops": true
    }
  • You set allow_cascade_drops to true and set keep_data to false. This is the most invasive scenario. Use it carefully.

    "update_preference": {
      "allow_cascade_drops": true,
      "keep_data": false
    }

Example - Brick Configuration

The following is an example of configuring the brick parameters in JSON format:

{
  "organization": "myCustomDomain",
  "gd_encoded_params": {
    "segments_filter": ["BASIC"],
    "ads_client": {
      "jdbc_url": "jdbc:gdc:datawarehouse://analytics.myCustomDomain.com/gdc/datawarehouse/instances/kluuu4h3sogai9x2ztn4wc0g8lta7sn8"
    },
    "update_preference": {
      "allow_cascade_drops": false,
      "keep_data": true
    }
  }
}

Advanced Settings

This section describes advanced settings of the rollout brick.

Change these settings only if you are confident in executing the task or have no other options. Adjusting the advanced options incorrectly may cause unexpected side effects.

Proceed with caution.

dynamic_params
Type: JSON
Mandatory? no
Default: n/a

See "Advanced Settings" in Provisioning Brick.

delete_extra_process_schedule
Type: Boolean
Mandatory? no
Default: true

Specifies how the brick handles the processes and schedules in the client workspaces that are either not present in the master workspace or have been renamed in the master workspace

  • If not set or set to true, the brick deletes such processes and schedules from the client workspaces.
  • If set to false, the brick keeps such processes and schedules in the client workspaces.
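
Example (a sketch that keeps the extra processes and schedules):

"delete_extra_process_schedule": false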

exclude_fact_rule
Type: Boolean
Mandatory? no
Default: false

Specifies whether to skip number format validation (up to 15 digits, including maximum 6 digits after the decimal point).

  • If not set or set to false, number format validation is used.
  • If set to true, number format validation is skipped.
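
Example (a sketch that skips the validation):

"exclude_fact_rule": true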

synchronize_ldm
Type: string
Mandatory? no
Default: diff_against_master_with_fallback

Specifies how the brick synchronizes the logical data model (LDM) of the master workspaces and their corresponding client workspaces. The brick checks the LDM of a client workspace and determines whether a MAQL diff should be applied and what the DDL statement should be.

Possible values:

  • diff_against_clients: The brick creates and applies a MAQL diff for each client workspace separately.
  • diff_against_master: The brick creates a MAQL diff for the master workspace only and applies it to all client workspaces in the segment. This option is faster than diff_against_clients, but if some adjustments have been made in a client workspace's LDM, the synchronization will fail for that workspace.
  • diff_against_master_with_fallback: The brick creates a MAQL diff for the master workspace and applies it to all client workspaces in the segment. If adjustments have been made in a client workspace's LDM and the synchronization fails for that workspace, the brick falls back and creates and applies the MAQL diff for that client workspace separately. This option is usually as fast as diff_against_master, provided that no adjustments have been made in the client workspaces, and it is more resilient when adjustments have been made. This is the default.
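
Example (a sketch using one of the values listed above):

"synchronize_ldm": "diff_against_clients"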

include_deprecated
Type: Boolean
Mandatory? no
Default: false

Specifies how to handle deprecated objects in the logical data model (LDM) while the LDM of a client workspace is being synchronized with the latest version of the segment's master workspace

  • If not set or set to false, the objects that are marked as deprecated in the LDM of either the master workspace, or the client workspace, or both, are not included in the generated MAQL diff that will be used for synchronizing the LDM of the client workspace.
  • If set to true, the deprecated objects are included in the generated MAQL diff and will be processed during the synchronization in the following way:
    • If an object is not deprecated in the master workspace LDM but deprecated in the client workspace LDM, the object will remain deprecated in the client workspace LDM.
    • If an object is deprecated in the master workspace LDM but not deprecated in the client workspace LDM, the object will remain not deprecated in the client workspace.
    • If an object is deprecated in the master workspace LDM but does not exist in the client workspace LDM, the object will be created in the client workspace as not deprecated.

NOTE: If the LDM of the master workspace or any client workspace includes deprecated objects, we recommend that you set the include_deprecated parameter to true. If the LDMs include deprecated objects (specifically, objects that are marked as deprecated in one LDM but not in the other) and include_deprecated is set to false, the synchronization may fail with the following error message:

Object %s already exists
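
Example (a sketch following the recommendation in the note above):

"include_deprecated": true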

include_computed_attributes
Type: Boolean
Mandatory? no
Default: true

Specifies whether to include computed attributes (see Use Computed Attributes) in the logical data model (LDM).

  • If not set or set to true, the datasets related to the computed attributes are included in the LDM.
  • If set to false, the computed attributes are ignored in both master and client workspaces.
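
Example (a sketch that ignores computed attributes):

"include_computed_attributes": false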

skip_actions
Type: array
Mandatory? no
Default: n/a

The actions or steps that you want the brick to skip while executing (for example, synchronizing computed attributes or collecting dynamically changing parameters)

The specified actions and steps will be excluded from the processing and will not be performed.

NOTE: Using this parameter incorrectly may cause unexpected side effects. If you want to use it, contact the GoodData specialist who was involved in implementing LCM at your site.
