There may be cases where duplicate entries are present for a given date. Why does this occur? Schema changes and processing timing. While we typically attempt to de-duplicate rows where possible, there are cases where we may not.

For example, data was supplied on 4/19 for an Amazon Sponsored Brands campaign. On May 1, Amazon updated the schema, which triggered an update in our system.

On 5/19, Amazon restated the impressions and clicks for the campaign's 4/19 data. Because that restatement arrived after the schema update, it came in as a new 4/19 row stored in the new schema v11, while the original 4/19 record is still resident in v10. The result is two rows for the same date, one in each table version.

This can also occur when re-requesting older dates as part of a backfill or historical data process. For example, if you requested data for October 2021, a schema change occurred in December 2021, and you then re-request October 2021 data in January 2022, the October 2021 data can end up in both the older and current table versions, similar to the Amazon example above.

Since we do not remove older snapshots, you can use the most recent record for a given ob_date (or other date-specific columns for a given data source). Here is an example query using ob_date:

SELECT *
FROM (
    SELECT
        *,
        ROW_NUMBER() OVER (
            PARTITION BY ob_transaction_id
            ORDER BY ob_processed_at DESC
        ) AS row
    FROM your_table -- replace with the table for your data source
) AS snapshots
WHERE campaign_id = 1234567890 AND row = 1
ORDER BY ob_date ASC
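To make the pattern concrete, here is a minimal, self-contained sketch that runs the same window-function logic against an in-memory SQLite database. The table name (ad_data) and the sample values are invented for illustration; the column names follow the query above. Requires SQLite 3.25+ for window functions, which ships with any recent Python.

```python
import sqlite3

# In-memory table standing in for the real data source.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE ad_data (
        ob_date TEXT,
        ob_transaction_id TEXT,
        ob_processed_at TEXT,
        campaign_id INTEGER,
        clicks INTEGER
    )
""")

# Two snapshots of the same 4/19 record: the later one carries restated clicks.
conn.executemany(
    "INSERT INTO ad_data VALUES (?, ?, ?, ?, ?)",
    [
        ("2021-04-19", "txn-1", "2021-04-20T00:00:00", 1234567890, 100),
        ("2021-04-19", "txn-1", "2021-05-19T00:00:00", 1234567890, 120),
    ],
)

# Same shape as the query above: number snapshots per transaction,
# newest first, and keep only row 1.
rows = conn.execute("""
    SELECT ob_date, clicks FROM (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY ob_transaction_id
                   ORDER BY ob_processed_at DESC
               ) AS row_num
        FROM ad_data
    )
    WHERE campaign_id = 1234567890 AND row_num = 1
    ORDER BY ob_date ASC
""").fetchall()
print(rows)  # only the most recent snapshot of the 4/19 record survives
```

Note that ordering by ob_processed_at descending and keeping row 1 is what selects the newest snapshot; with ascending order you would get the oldest instead.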

If you are using a BI tool such as Tableau, Power BI, or Google Data Studio, there are built-in functions for taking the MAX of a date, which let you keep only the most recent record for each date.

Lastly, if you are not comfortable doing this in a BI tool, another option is creating a custom view with this logic. The custom view can be specifically tailored to the analysis. A view looks like a table but runs custom SQL behind the scenes.
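As a sketch of the custom-view approach, the example below bakes the most-recent-snapshot logic into a view once, so downstream queries can read it like an ordinary table. Again this uses an in-memory SQLite database as a stand-in; the table name ad_data, the view name ad_data_latest, and the sample values are all invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE ad_data (
        ob_date TEXT, ob_transaction_id TEXT,
        ob_processed_at TEXT, campaign_id INTEGER, clicks INTEGER)
""")
conn.executemany(
    "INSERT INTO ad_data VALUES (?, ?, ?, ?, ?)",
    [("2021-04-19", "txn-1", "2021-04-20", 1234567890, 100),
     ("2021-04-19", "txn-1", "2021-05-19", 1234567890, 120)],
)

# The view hides the window-function boilerplate behind a table-like name.
conn.execute("""
    CREATE VIEW ad_data_latest AS
    SELECT * FROM (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY ob_transaction_id
                   ORDER BY ob_processed_at DESC
               ) AS row_num
        FROM ad_data
    )
    WHERE row_num = 1
""")

# Downstream reports query the view like a plain, already de-duplicated table.
latest = conn.execute(
    "SELECT ob_date, clicks FROM ad_data_latest WHERE campaign_id = ?",
    (1234567890,),
).fetchall()
print(latest)
```

The design benefit is that the de-duplication logic lives in one place: if the snapshot rules change, you update the view definition rather than every report that reads from it.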
