Partitioning improves query performance and reduces cost, both of which are critical for compute-based SQL query services.
Data Lake Partitioning Approach
The Openbridge default is to partition data by source and date. AWS, Microsoft, Google, Hive, Spark, and Presto all recommend a date-based partitioning convention. We also partition by the specific upstream data source so that changes in that source can be handled properly.
For each registered data lake destination, we follow this pattern:
Each aspect of the pattern defines a core element:
Storage = The Azure or AWS S3 bucket name
Parquet = The location for all Parquet objects
Source = The name of the upstream data source
Partition = The date partition of the data
Object = The data object, stored as compressed Apache Parquet
Here is an example of using AWS S3 and an Amazon Advertising upstream data source:
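The exact key layout depends on each Openbridge configuration, but a minimal sketch of the storage/parquet/source/partition/object pattern can be expressed as follows. The bucket name, source name, and object name below are hypothetical, chosen only to illustrate the convention:

```python
from datetime import date

def lake_key(bucket: str, source: str, dt: date, obj: str) -> str:
    """Build an illustrative object key following the
    storage / parquet / source / date-partition / object pattern."""
    partition = f"dt={dt.isoformat()}"  # Hive-style date partition
    return f"s3://{bucket}/parquet/{source}/{partition}/{obj}"

key = lake_key("my-lake-bucket", "amzn_advertising", date(2023, 6, 1),
               "part-0000.snappy.parquet")
print(key)
# -> s3://my-lake-bucket/parquet/amzn_advertising/dt=2023-06-01/part-0000.snappy.parquet
```

The `dt=YYYY-MM-DD` form is the Hive-style convention that query engines such as Athena, Spark, and Presto recognize for partition pruning.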
Openbridge defaults to Apache Parquet as the object format and Google's Snappy codec for compression. As a result, object key names follow the source and date-based naming convention:
Example: Data Lake Partitioning By Source and Date
As mentioned previously, we partition the data by source. Why? This preserves the lineage of data feeds: each feed is versioned based on its upstream definition, and if the upstream source changes, a new version is created. Changes that trigger a new version include changes to schemas, data types, or other modifications that require one:
Within each data feed, everything is date partitioned as described above:
For more reading on the topic, see these articles: