Rollout Brick

The rollout brick synchronizes all the client workspaces (logical data model (LDM), data loading processes, dashboards, reports, metrics, and so on) in the synchronized segments with the latest version of the segment’s master workspace.

For information about how to use the brick, see How to Use a Brick.

Prerequisites

Before using the rollout brick, make sure that the following is true:

  • The release brick (see Release Brick) and the provisioning brick (see Provisioning Brick) have been executed.
  • The segments have been created and associated with the clients.

How the Brick Works

The rollout brick synchronizes all the client workspaces (LDM, data loading processes, dashboards, reports, metrics, and so on) in the synchronized segments with the latest version of the segment’s master workspace. The latest version of the segment’s master workspace is the one that the release brick (see Release Brick) created at its last execution.

The segments to synchronize are specified in the segments_filter parameter.

Rollout Brick and Custom Fields in the LDM

If you have custom fields set up in the LDMs of the client workspaces and want to preserve them during the rollout, set the synchronize_ldm parameter to diff_against_master (see Advanced Settings). For more information, see Add Custom Fields to the LDMs in Client Workspaces within the Same Segment.

Rollout and Provisioning Metadata

Client synchronization creates metadata objects on the GoodData platform. These objects are then used during provisioning.

These objects are automatically cleaned up three years after the synchronization/rollout process was last executed.

To prevent provisioning issues caused by missing metadata objects, always keep the production master workspace synchronized with the production client workspaces. Use a different workspace for development.

Input

The rollout brick does not require any input besides the parameters that you have to add when scheduling the brick process.

Parameters

When scheduling the deployed brick (see How to Use a Brick and Schedule a Data Load), add parameters to the schedule.

organization (string, mandatory, default: n/a)

The name of the domain where the brick is executed

segments_filter (array, mandatory, default: n/a)

The segments that you want to roll out

You must encode this parameter using the gd_encoded_params parameter (see Specifying Complex Parameters).

Example:

"segments_filter": ["BASIC", "PREMIUM"]

ads_client (JSON, mandatory: see description, default: n/a)

(Only if your input source resides on GoodData Data Warehouse (ADS)) The ADS instance where the LCM_RELEASE table exists (see the release_table_name parameter later in this section)

You must encode this parameter using the gd_encoded_params parameter (see Specifying Complex Parameters).

Example:

"ads_client": {
  "jdbc_url": "jdbc:gdc:datawarehouse://my.company.com/gdc/datawarehouse/instances/kluuu4h3sogai9x2ztn4wc0g8lta7sn8"
}

data_product (string, optional, default: default)

The data product that contains the segments that you want to roll out

NOTE: If your input source resides on ADS and you have two or more data products, use the release_table_name parameter (see later in this section).

release_table_name (string, mandatory: see description, default: LCM_RELEASE)

(Only if your input source resides on ADS and you have multiple data products stored in one ADS instance) The name of the table in the ADS instance where the IDs and versions of the latest segment master workspaces are stored

technical_users (array, optional, default: n/a)

The users that are going to be added as admins to each client workspace

The user logins are case-sensitive and must be written in lowercase.

You must encode this parameter using the gd_encoded_params parameter (see Specifying Complex Parameters).

Example:

"technical_users": ["dev_admin@gooddata.com", "admin@gooddata.com"]

disable_kd_dashboard_permission (Boolean, optional, default: false)

Synchronization of shared dashboard permissions for user groups is enabled by default, even when this parameter is not explicitly specified. Set the parameter to true to disable synchronizing user group permissions for shared dashboards. For more information, see LCM and Shared Dashboards.

update_preference (JSON, optional, default: n/a)

See update_preference.
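
For illustration, a schedule that turns off shared dashboard permission synchronization could combine these parameters as follows (a minimal sketch; the domain name and segment value are placeholders taken from the configuration example later in this article):

{
  "organization": "myCustomDomain",
  "disable_kd_dashboard_permission": true,
  "gd_encoded_params": {
    "segments_filter": ["BASIC"]
  }
}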

LCM and Shared Dashboards

In a segment’s master workspace, dashboards can be private or shared with all or some users/user groups.

The sharing permissions are propagated from the master workspace to the client workspaces in the following way:

  • Dashboards set as private in the master workspace remain private in the client workspaces.
  • Dashboards shared with all users in the master workspace become dashboards shared with all users in the client workspaces.
  • Dashboards shared with some users/user groups in the master workspace become dashboards shared with some user groups in the client workspaces.

In addition to the sharing permissions, the dashboards can be configured:

  • To allow only administrators to update the dashboards (that is, editors cannot update such dashboards)
  • To be displayed to users when they drill to these dashboards from facts, metrics, and attributes, even if the dashboards are not explicitly shared with those users

These settings are propagated to the client workspaces exactly as they are set in the master workspace. For more information about these settings, see Share Dashboards.

update_preference

The update_preference parameter specifies the properties of MAQL diffs when propagating data from the master workspace to the clients' workspaces.

The update_preference parameter is set up in JSON format. You must encode this parameter using the gd_encoded_params parameter (see Specifying Complex Parameters).

The parameter is defined by the following JSON structure:

"update_preference": {
  "allow_cascade_drops": true|false,
  "keep_data": true|false,
  "fallback_to_hard_sync": true|false 
}
  • allow_cascade_drops: If set to true, the MAQL diff uses drops with the cascade option. These drops transitively delete all dashboard objects connected to the dropped LDM object. Set it to true only if you are certain that you do not need metrics, reports, or dashboards that use the dropped object.

  • keep_data: If set to true, the MAQL diff execution does not truncate the data currently loaded in datasets included in the diff.

  • fallback_to_hard_sync: If set to true, the MAQL diff execution first runs without truncating the data currently loaded in the datasets included in the diff. If that execution fails, the diff executes again, this time truncating the data currently loaded in the datasets.

    The default value is false. If the datasets do not need hard synchronization, we recommend that you use the default value or omit the parameter.

By default, the update_preference parameter is set to the following, which is the least invasive scenario:

"update_preference": {
  "allow_cascade_drops": false,
  "keep_data": true,
  "fallback_to_hard_sync": false 
}

The following are possible scenarios of how you can set up the update_preference parameter depending on how you want the brick to behave:

  • You define neither allow_cascade_drops nor keep_data. The brick will apply the MAQL diffs starting from the least invasive scenario (the default configuration) and move towards the most invasive one until a MAQL diff succeeds.

    "update_preference": { }
    
  • You define only keep_data and do not set allow_cascade_drops explicitly. The brick will first try to use the MAQL diffs with allow_cascade_drops set to false as a less invasive alternative. If it fails, the brick will try allow_cascade_drops set to true.

    "update_preference": {
      "keep_data": false|true
    }
    
  • You set allow_cascade_drops to true and do not set keep_data explicitly. The brick will first try to use the MAQL diffs with keep_data set to true as a less invasive alternative. If it fails, the brick will try keep_data set to false.

    "update_preference": {
      "allow_cascade_drops": true
    }
    
  • You set allow_cascade_drops to true and set keep_data to false. This is the most invasive scenario. Use it carefully.

    "update_preference": {
      "allow_cascade_drops": true,
      "keep_data": false
    }
    
  • You set fallback_to_hard_sync to true, and set allow_cascade_drops and keep_data to false. The brick will execute a hard synchronization.

    "update_preference": {
      "allow_cascade_drops": false,
      "keep_data": false,
      "fallback_to_hard_sync": true 
    }
    

Example - Brick Configuration

The following is an example of configuring the brick parameters in the JSON format:

{
  "organization": "myCustomDomain",
  "gd_encoded_params": {
    "segments_filter": ["BASIC"],
    "ads_client": {
      "jdbc_url": "jdbc:gdc:datawarehouse://analytics.myCustomDomain.com/gdc/datawarehouse/instances/kluuu4h3sogai9x2ztn4wc0g8lta7sn8"
    },
    "update_preference": {
      "allow_cascade_drops": false,
      "keep_data": true
    }
  }
}

Advanced Settings

This section describes advanced settings of the rollout brick.

dynamic_params (JSON, optional, default: n/a)

See "Advanced Settings" in Provisioning Brick.

metric_format (JSON, optional, default: n/a)

See metric_format.

delete_extra_process_schedule (Boolean, optional, default: true)

Specifies how the brick should handle the processes and schedules that are either not present in the master workspace or have been renamed in the master workspace.

  • If not set or set to true, the brick deletes from the client workspaces the processes and schedules that are not present in the master workspace or have been renamed in the master workspace.
  • If set to false, the brick keeps such processes and schedules in the client workspaces.

keep_only_previous_masters_count (integer, optional, default: n/a)

The number of previous versions of the segment's master workspace to keep (not including the latest version). All the remaining versions will be deleted.

Example: Imagine that you have six versions of the master workspace, version 6 being the most recent and version 1 being the oldest. If you set this parameter to 3, the brick deletes versions 1 and 2. Versions 3, 4, and 5 remain untouched, as does version 6, the most recent version of the master workspace.

NOTE:

  • Setting this parameter to 0 deletes all previous versions of the segment's master workspace. Only the latest version of the master workspace remains. The deleted versions cannot be restored.
  • A previous version is considered for deletion only if it was created by a successful execution of the release brick (see Release Brick). When the release brick fails or is stopped during execution, a new version of the master workspace may still get created. When the release brick next runs and completes successfully, it creates a new version of the master workspace with the same version number as the one created during the previously failed execution. The versions created during failed executions of the release brick become orphans and are not deleted automatically. You can remove these orphan workspaces manually if needed.

exclude_fact_rule (Boolean, optional, default: false)

Specifies whether to skip number format validation (at most 15 digits, including a maximum of 6 digits after the decimal point).

  • If not set or set to false, number format validation is used.
  • If set to true, number format validation is skipped.

synchronize_ldm (string, optional, default: diff_against_master_with_fallback)

Specifies how the brick synchronizes the logical data model (LDM) between the master workspaces and their corresponding client workspaces. The brick checks the LDM of a client workspace and determines whether a MAQL diff should be applied and what the DDL statement should be.

Possible values:

  • diff_against_clients: The brick creates and applies a MAQL diff for each client workspace separately.
  • diff_against_master: The brick creates a MAQL diff for the master workspace only and applies it to all client workspaces in the segment. This option is faster than diff_against_clients, but if some adjustments have been made in a client workspace's LDM, the synchronization will fail for that workspace.
  • diff_against_master_with_fallback: The brick creates a MAQL diff for the master workspace and applies it to all client workspaces in the segment. If some adjustments have been made in a client workspace's LDM and the synchronization fails for that workspace, the brick falls back and creates and applies a MAQL diff for that client workspace separately. This option is usually as fast as diff_against_master provided that no adjustments have been made in the client workspaces, and it is more resilient when adjustments have been made. This is the default.

NOTE: Set this parameter to diff_against_master if you have custom fields in the LDMs of the client workspaces and want to preserve them during the rollout. For more information, see Add Custom Fields to the LDMs in Client Workspaces within the Same Segment.

include_deprecated (Boolean, optional, default: false)

Specifies how to handle deprecated objects in the logical data model (LDM) while the LDM of a client workspace is being synchronized with the latest version of the segment's master workspace.

  • If not set or set to false, the objects that are marked as deprecated in the LDM of the master workspace, the client workspace, or both are not included in the generated MAQL diff that will be used for synchronizing the LDM of the client workspace.
  • If set to true, the deprecated objects are included in the generated MAQL diff and will be processed during the synchronization in the following way:
    • If an object is not deprecated in the master workspace LDM but deprecated in the client workspace LDM, the object remains deprecated in the client workspace LDM.
    • If an object is deprecated in the master workspace LDM but not deprecated in the client workspace LDM, the object remains not deprecated in the client workspace LDM.
    • If an object is deprecated in the master workspace LDM but does not exist in the client workspace LDM, the object is created in the client workspace as not deprecated.

NOTE: If the LDM of the master workspace or any client workspace includes deprecated objects, we recommend that you set the include_deprecated parameter to true. If the LDM of the master workspace or a client workspace includes deprecated objects (specifically, objects that are marked as deprecated in one LDM but not in the other) and include_deprecated is set to false, the synchronization may fail with the following error message:

Object %s already exists

include_computed_attributes (Boolean, optional, default: true)

Specifies whether to include computed attributes (see Use Computed Attributes) in the logical data model (LDM).

  • If not set or set to true, the datasets related to the computed attributes are included in the LDM.
  • If set to false, the computed attributes are ignored in both the master and client workspaces.

skip_actions (array, optional, default: n/a)

The actions or steps that you want the brick to skip while executing (for example, synchronizing computed attributes or collecting dynamically changing parameters)

The specified actions and steps will be excluded from the processing and will not be performed.

NOTE: Using this parameter incorrectly may cause unexpected side effects. If you want to use it, contact the GoodData specialist who was involved in implementing LCM at your site.

abort_on_error (Boolean, optional, default: true)

Specifies whether the abort_on_error option is enabled when running the brick.

  • If not set or set to true, the abort_on_error option is enabled. If an error occurs, the brick fails.

  • If set to false, the abort_on_error option is disabled. If an error occurs, the brick keeps running, and the error is written to the execution log.

The brick execution finishes with one of the following statuses:

  • SUCCESS - when the brick finishes without any client workspace errors.

  • WARNING - when the brick finishes with at least one client workspace success and at least one client workspace error.

  • ERROR - when the brick finishes without any client workspace successes.
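
For illustration, the following minimal sketch combines several of these advanced parameters in one schedule configuration (values are examples only; as in the configuration example earlier in this article, simple parameters sit at the top level and complex ones under gd_encoded_params):

{
  "organization": "myCustomDomain",
  "synchronize_ldm": "diff_against_master",
  "keep_only_previous_masters_count": 3,
  "delete_extra_process_schedule": false,
  "gd_encoded_params": {
    "segments_filter": ["BASIC", "PREMIUM"]
  }
}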

metric_format

The metric_format parameter lets you specify a custom number format for metrics in each client workspace. For example, you can set up different currency codes for client workspaces with data from different countries (USD for the clients operating in the USA, EUR for the clients operating in Germany, and so on).

For more information about the number formatting, see Formatting Numbers in Insights.

A custom format is applied to the metrics in a specific client workspace based on tags that you add to the metrics in advance. The custom format is applied everywhere a number format is used in the GoodData Portal (see GoodData Portal). The custom format does not override the format that is defined for a metric in a specific report/insight (for more information about setting a number format in a report/insight, see Formatting Table Values Using the Configuration Pane and Format Numbers).

Steps:

  1. Add tags to the metrics that you want to apply a custom format to (see Add a Tag to a Metric). For example, you can use format_# to tag metrics using the COUNT function, format_$ to tag currency metrics, format_% to tag metrics with percentages.

  2. Create a table that maps the tags to number formats and the client IDs of the client workspaces where the number formats should be applied. Name the table columns tag, format, and client_id, respectively.

    tag        format                       client_id
    format_#   [>=1000000000]#,,,.0 B;      client_id_best_foods
               [>=1000000]#,,.0 M;
               [>=1000]#,.0 K;
               [>=0]#,##0;
               [<0]-#,##0
    format_%   #,##0%                       client_id_zen_table
    format_%   #,#0%                        client_id_best_foods
  3. Save the table in a supported location (see Types of Input Data Sources); a sample CSV file is shown after these steps:

    • If you use a data warehouse (for example, ADS, Snowflake, or Redshift), save the table as a database table named metric_formats.
    • If you use file storage (for example, S3, Azure Blob Storage, or a web location), save the table as a CSV file named metric_formats.csv.
  4. Create the JSON structure for the metric_format parameter, and add it to the brick schedule. Because it is a complex parameter, include it in your gd_encoded_params parameter (see Example - Brick Configuration). The metric_format parameter must contain a query for the metric_formats database table in the data warehouse or point to the metric_formats.csv file in the file storage.

    • The metric_formats table is located in a data warehouse.

      "gd_encoded_params": {
        ...
        "input_source": {
          "type": "{data_warehouse}",
          "query": "",
          "metric_format": {
            "query": "SELECT client_id, tag, format FROM metric_formats;"
          }
        }
      }
      

      For example, in ADS:

      "gd_encoded_params": {
        ...
        "input_source": {
          "type": "ads",
          "query": "",
          "metric_format": {
            "query": "SELECT client_id, tag, format FROM metric_formats;"
          }
        }
      }
      

      Or, in Redshift:

      "gd_encoded_params": {
        ...
        "input_source": {
          "type": "redshift",
          "query": "",
          "metric_format": {
            "query": "SELECT client_id, tag, format FROM metric_formats;"
          }
        }
      }
      
    • The metric_formats.csv file is located in a file storage.

      "gd_encoded_params": {
        ...
        "input_source": {
          "type": "{file_storage}",
          "{file_locator}": "",
          "metric_format": {
            "{file_locator}": "{location_of_metric_formats.csv}"
          }
        }
      }
      

      For example, in an S3 bucket:

      "gd_encoded_params": {
        ...
        "input_source": {
          "type": "s3",
          "file": "",
          "metric_format": {
            "file": "/data/upload/metric_formats.csv"
          }
        }
      }
      

      Or, in a web location:

      "gd_encoded_params": {
        ...
        "input_source": {
          "type": "web",
          "url": "",
          "metric_format": {
            "url": "https://files.acme.com/data/upload/metric_formats.csv"
          }
        }
      }
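
For reference, the mapping table from step 2, saved as the metric_formats.csv file described in step 3, might look like the following sketch. Formats that contain commas are quoted so that they are not parsed as column separators, and the conditional sections of the format_# format are joined on a single line:

tag,format,client_id
format_#,"[>=1000000000]#,,,.0 B;[>=1000000]#,,.0 M;[>=1000]#,.0 K;[>=0]#,##0;[<0]-#,##0",client_id_best_foods
format_%,"#,##0%",client_id_zen_table
format_%,"#,#0%",client_id_best_foods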