In the diagram below, you can review the core groups of platform services that interact with the objects of a project.
All clients access platform services through the WebApp services, invoking them via REST APIs. Typically, WebApp services are invoked from a desktop or mobile browser, but the platform also supports access from other web-based clients.
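As a rough illustration of this access pattern, the sketch below assembles a REST request to a platform service through a WebApp-style gateway. The host, resource path, and bearer-token auth scheme are all hypothetical placeholders, not documented platform endpoints.

```python
# Minimal sketch of a client invoking a platform service via REST through
# the WebApp layer. Host, path, and auth scheme are illustrative assumptions.
from urllib.parse import urljoin

BASE_URL = "https://secure.example.com/gdc/"  # hypothetical WebApp host

def build_request(resource: str, token: str) -> dict:
    """Assemble the URL and headers for a REST call to a platform service."""
    return {
        "url": urljoin(BASE_URL, resource.lstrip("/")),
        "headers": {
            "Accept": "application/json",
            "Authorization": f"Bearer {token}",  # auth scheme is an assumption
        },
    }

req = build_request("/md/projectId/query/reports", "demo-token")
print(req["url"])
```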
The CloudControl Center (C4) is the centralized broker that tracks users, projects, and the services on which they rely. Backed by a high-performance database, the C4 services are aware of all hardware nodes in the environment and broker the use of the stateless, redundant services spread across the available servers.
- When the GoodData Computing Fabric determines how to process a request, it queries C4 to verify that the user and project permissions required to execute the query are in place.
- Extensive caching is deployed to enhance performance.
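The permission check described above can be sketched as a cached lookup against an authoritative store. The permission model, names, and cache size below are illustrative assumptions; the caching layer stands in for the extensive caching the platform deploys.

```python
# Sketch of the C4 permission lookup the GCF performs before executing a
# query, with a cache in front of the authoritative database.
# The permission model and all names are illustrative assumptions.
from functools import lru_cache

# Stand-in for the C4 database of (user, project) -> granted roles.
_PERMISSIONS_DB = {("alice", "proj-1"): {"viewer", "editor"}}

@lru_cache(maxsize=4096)  # "extensive caching" modeled with an LRU cache
def can_execute(user: str, project: str) -> bool:
    """Return True if the user holds any role on the project."""
    return bool(_PERMISSIONS_DB.get((user, project)))

print(can_execute("alice", "proj-1"))  # permitted
print(can_execute("bob", "proj-1"))    # not permitted
```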
The Metadata Server manages all metadata (metrics, reports, and dashboards) created in the projects in the cluster and provides extensive support for graph operations, including the management of dependencies.
These data objects are stored in a separate relational database partition for each project.
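The dependency management the Metadata Server provides can be pictured as a graph over metadata objects: a dashboard uses reports, which in turn use metrics. The sketch below is an assumed, simplified model (the object identifiers and graph API are not the platform's actual interfaces) showing how dependents of an object can be found transitively.

```python
# Sketch of dependency tracking over project metadata objects
# (metrics, reports, dashboards). Names and API are illustrative assumptions.
from collections import defaultdict

class MetadataGraph:
    def __init__(self):
        self._used_by = defaultdict(set)  # object -> objects that use it

    def add_dependency(self, obj: str, uses: str) -> None:
        """Record that obj depends on (uses) another object."""
        self._used_by[uses].add(obj)

    def dependents(self, obj: str) -> set:
        """All objects that directly or transitively depend on obj."""
        result, stack = set(), [obj]
        while stack:
            for parent in self._used_by[stack.pop()]:
                if parent not in result:
                    result.add(parent)
                    stack.append(parent)
        return result

g = MetadataGraph()
g.add_dependency("report/revenue", "metric/sum_sales")
g.add_dependency("dashboard/exec", "report/revenue")
print(sorted(g.dependents("metric/sum_sales")))
```

A graph like this lets the server answer questions such as "what breaks if this metric is deleted?" before the change is applied.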
The ROLAP Engine performs the core data crunching and is responsible for loading and storing computed data.
GoodData Computing Fabric
The GoodData Computing Fabric (GCF) manages the distribution of asynchronous tasks to the available services, including dynamic load balancing. These tasks are defined as sets of hierarchies with implied dependencies.
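One way to read "hierarchies with implied dependencies" is that a parent task becomes runnable only after its subtasks complete, so a valid execution order is a post-order walk of the task tree. The task names and tree shape below are illustrative assumptions, not actual GCF task definitions.

```python
# Sketch of a task hierarchy with implied dependencies: every subtask must
# complete before its parent runs. Task names are illustrative assumptions.

def execution_order(task: dict) -> list:
    """Return a valid order: every subtask before its parent."""
    order = []
    for sub in task.get("subtasks", []):
        order.extend(execution_order(sub))
    order.append(task["name"])
    return order

etl = {
    "name": "publish",
    "subtasks": [
        {"name": "transform", "subtasks": [{"name": "extract"}]},
        {"name": "validate"},
    ],
}
print(execution_order(etl))  # subtasks first, parent last
```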
Key Benefit: To ensure fairness across all customers, the GoodData Platform enforces a Fair Use Policy, under which no single customer's workload may exceed a predefined percentage of available resources. By enforcing this policy, GoodData keeps the platform consistently available to all customers throughout the workday.
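A fair-use check of this kind can be sketched as a gate on scheduling: admit a customer's next task only if it keeps that customer's share of in-flight work under the cap. The cap value and the accounting model are illustrative assumptions, not the platform's published policy parameters.

```python
# Sketch of a fair-use gate: reject new work for a customer whose share of
# in-flight workload would exceed a predefined percentage.
# The cap and accounting model are illustrative assumptions.

FAIR_USE_CAP = 0.25  # hypothetical: no customer may hold >25% of resources

def may_schedule(customer: str, in_flight: dict) -> bool:
    """True if scheduling one more task keeps the customer under the cap."""
    total = sum(in_flight.values()) + 1
    share = (in_flight.get(customer, 0) + 1) / total
    return share <= FAIR_USE_CAP

load = {"acme": 5, "globex": 10, "initech": 9}
print(may_schedule("globex", load))  # 11/25 of the load: over the cap
print(may_schedule("acme", load))    # 6/25 of the load: under the cap
```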
Services are managed through an application container hosting dedicated workers for query, transformation, execution, and data export tasks, among others. By design, the GCF manages the interoperable execution of these workers, which may be developed in different programming languages.
For each type of worker, the GCF manages a queue to avoid bottlenecks and dynamically deploys new workers as demand requires.
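The per-worker-type queues and dynamic deployment can be sketched with a simple scale-up rule: when a queue's backlog grows past a threshold per deployed worker, deploy another worker of that type. The threshold, worker types, and scaling rule are illustrative assumptions.

```python
# Sketch of per-worker-type queues with dynamic worker deployment.
# Threshold, worker types, and scaling rule are illustrative assumptions.
from collections import defaultdict, deque

SCALE_THRESHOLD = 3  # hypothetical backlog-per-worker that triggers scaling

class Dispatcher:
    def __init__(self):
        self.queues = defaultdict(deque)        # worker type -> pending tasks
        self.workers = defaultdict(lambda: 1)   # worker type -> deployed count

    def submit(self, worker_type: str, task: str) -> None:
        """Queue a task; deploy another worker if the backlog grows too deep."""
        self.queues[worker_type].append(task)
        backlog = len(self.queues[worker_type])
        if backlog > SCALE_THRESHOLD * self.workers[worker_type]:
            self.workers[worker_type] += 1  # dynamically deploy a new worker

d = Dispatcher()
for i in range(8):
    d.submit("export", f"task-{i}")
print(d.workers["export"])  # worker count grows with the backlog
```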
Distributed File System
State information is shared through the Distributed File System. Workers managed by the GCF may query the DFS for state information required to complete their tasks.
The Distributed File System features redundant storage to prevent state loss, as well as I/O management services for higher throughput.
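To illustrate how redundant storage prevents state loss, the sketch below writes each piece of state to several replica nodes, so a read succeeds as long as any replica survives. The replication factor, placement scheme, and node model are illustrative assumptions, not the DFS's actual design.

```python
# Sketch of redundant state storage: each key is written to several replica
# nodes, and any surviving replica can serve a read. The replication factor
# and placement scheme are illustrative assumptions.

REPLICATION_FACTOR = 3  # hypothetical

class MiniDFS:
    def __init__(self, nodes: int):
        self.nodes = [dict() for _ in range(nodes)]

    def _placement(self, key: str) -> list:
        """Pick REPLICATION_FACTOR distinct nodes for a key."""
        start = hash(key) % len(self.nodes)
        return [(start + i) % len(self.nodes) for i in range(REPLICATION_FACTOR)]

    def put(self, key: str, state: str) -> None:
        for node in self._placement(key):
            self.nodes[node][key] = state

    def get(self, key: str):
        for node in self._placement(key):       # any replica may answer
            if key in self.nodes[node]:
                return self.nodes[node][key]
        return None

dfs = MiniDFS(nodes=5)
dfs.put("job-42/state", "running")
dfs.nodes[hash("job-42/state") % 5].clear()  # simulate losing one node
print(dfs.get("job-42/state"))  # state survives on the remaining replicas
```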