Data Lake Partitioning

Partitioning improves query performance and reduces cost, which is critical for compute-based SQL query services that bill by the amount of data scanned.

Data Lake Partitioning Approach

The Openbridge default is to partition data by source and by date. Date-based partitioning is the convention recommended by AWS, Microsoft, Google, Hive, Spark, and Presto. We also partition by the specific upstream data source so that changes in that source can be handled properly.

Pattern

For each registered data lake destination, we follow this pattern:

  • /storage/parquet/source/dt=yyyymmdd/objectname

Each segment of the pattern maps to a core element:

  • storage = the Azure or AWS S3 bucket name

  • parquet = the location for all Parquet objects

  • source = the name of the upstream data source

  • dt=yyyymmdd = the date partition of the data

  • objectname = the data object, stored as compressed Apache Parquet

Here is an example using AWS S3 and an Amazon Advertising upstream data source:

  • /mys3bucket/parquet/amazon_advertising_campaigns_v1/dt=20221222/
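To make the mapping concrete, here is a minimal Python sketch that assembles an object key following this pattern. The build_object_key helper is hypothetical, purely for illustration, and not part of any Openbridge tooling:

    from datetime import date

    def build_object_key(bucket: str, source: str, partition_date: date, object_name: str = "") -> str:
        # Assemble /storage/parquet/source/dt=yyyymmdd/objectname.
        dt = partition_date.strftime("%Y%m%d")
        return f"/{bucket}/parquet/{source}/dt={dt}/{object_name}"

    # Reproduces the example path above.
    print(build_object_key("mys3bucket", "amazon_advertising_campaigns_v1", date(2022, 12, 22)))
    # -> /mys3bucket/parquet/amazon_advertising_campaigns_v1/dt=20221222/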

Openbridge defaults to Apache Parquet as the object format and Google Snappy for compression. The resulting object key follows the source and date-based naming convention:

  • /mys3bucket/parquet/amazon_advertising_campaigns_v1/dt=20221222/8886b48543621d88f46260v77c686b21808c7b12-845e9_20180515000000.00000.SNAPPY.parquet
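For illustration, the following sketch writes a Snappy-compressed Parquet object into a directory laid out this way using the pyarrow library. The sample columns and the local target path are assumptions made for the example; a production pipeline would write to S3 or Azure storage instead:

    import os

    import pyarrow as pa
    import pyarrow.parquet as pq

    # Hypothetical sample rows standing in for an upstream feed.
    table = pa.table({
        "campaign_id": [101, 102],
        "impressions": [5000, 7500],
    })

    # Write into the date-partitioned layout; Snappy is set explicitly
    # to mirror the default described above.
    prefix = "mys3bucket/parquet/amazon_advertising_campaigns_v1/dt=20221222"
    os.makedirs(prefix, exist_ok=True)
    pq.write_table(table, f"{prefix}/example.SNAPPY.parquet", compression="snappy")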

Example: Data Lake Partitioning By Source and Date

As mentioned previously, we partition the data by source. Why? To preserve the lineage of each data feed. This is accomplished by creating a version of the feed based on its upstream definition: if the upstream source changes, a new version is created. Changes that trigger a new version include schema changes, data type changes, or other modifications to the upstream definition. For example, a feed stored under amazon_advertising_campaigns_v1 would advance to amazon_advertising_campaigns_v2.
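The idea can be sketched in Python as follows. This is a hypothetical illustration of source versioning, not Openbridge's actual implementation: a fingerprint of the upstream schema is tracked per source, and a mismatch bumps the version suffix:

    import hashlib
    import json

    # Hypothetical registry: source name -> (version, schema fingerprint).
    known: dict = {}

    def versioned_source(source: str, schema: dict) -> str:
        # Fingerprint the upstream schema definition.
        fingerprint = hashlib.sha1(
            json.dumps(schema, sort_keys=True).encode()
        ).hexdigest()
        version, last_fp = known.get(source, (1, fingerprint))
        if last_fp != fingerprint:
            version += 1  # upstream definition changed: bump the version
        known[source] = (version, fingerprint)
        return f"{source}_v{version}"

    print(versioned_source("amazon_advertising_campaigns", {"campaign_id": "int"}))
    # -> amazon_advertising_campaigns_v1
    print(versioned_source("amazon_advertising_campaigns", {"campaign_id": "string"}))
    # -> amazon_advertising_campaigns_v2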

Within each data feed, everything is date partitioned as described above, which is what lets query engines prune scans by date.
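As a minimal pyarrow sketch of that pruning (the path is the example path from above; any Hive-aware engine such as Hive, Spark, or Presto performs equivalent pruning), a filter on dt reads only the matching partition directory:

    import pyarrow as pa
    import pyarrow.dataset as ds

    # Treat dt=yyyymmdd directories as a Hive-style string partition column.
    partitioning = ds.partitioning(pa.schema([("dt", pa.string())]), flavor="hive")
    dataset = ds.dataset(
        "mys3bucket/parquet/amazon_advertising_campaigns_v1/",
        format="parquet",
        partitioning=partitioning,
    )

    # Only files under dt=20221222 are scanned; other dates are pruned.
    table = dataset.to_table(filter=ds.field("dt") == "20221222")
    print(table.num_rows)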

