Attribution Reporting

Draft Community Group Report, 11

This version:
https://wicg.github.io/attribution-reporting-api
Issue Tracking:
GitHub
Inline In Spec
Editors:
( Google Inc. )
( Google Inc. )
( Google Inc. )
Not Ready For Implementation

This spec is not yet ready for implementation. It exists in this repository to record the ideas and promote discussion.

Before attempting to implement this spec, please contact the editors.


Abstract

An API to report that an event may have been caused by another cross-site event. These reports are designed to transfer little enough data between sites that the sites can’t use them to track individual users.

Status of this document

This specification was published by the Web Platform Incubator Community Group . It is not a W3C Standard nor is it on the W3C Standards Track. Please note that under the W3C Community Contributor License Agreement (CLA) there is a limited opt-out and other conditions apply. Learn more about W3C Community and Business Groups .

1. Introduction

This section is non-normative.

This specification describes how web browsers can provide a mechanism to the web that supports measuring and attributing conversions (e.g. purchases) to ads a user interacted with on another site. This mechanism should remove one need for cross-site identifiers like third-party cookies.

1.1. Overview

Pages/embedded sites are given the ability to register attribution sources and attribution triggers , which can be linked by the User Agent to generate and send attribution reports containing information from both of those events.

A reporter https://reporter.example embedded on https://source.example is able to measure whether an interaction on the page led to an action on https://destination.example by registering an attribution source with attribution destinations of « https://destination.example ». Reporters are able to register sources through a variety of surfaces, but ultimately the reporter is required to provide the User Agent with an HTTP-response header which allows the source to be eligible for attribution.

At a later point in time, the reporter, now embedded on https://destination.example , may register an attribution trigger . Reporters can register triggers by sending an HTTP-response header containing information about the action/event that occurred. Internally, the User Agent attempts to match the trigger to previously registered source events based on where the sources/triggers were registered and configurations provided by the reporter.

If the User Agent is able to attribute the trigger to a source, it will generate and send an attribution report to the reporter via an HTTP POST request at a later point in time.
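For illustration only (this spec is not ready for implementation, and all values below are hypothetical), the reporter’s responses might carry registration headers such as the following; the JSON formats of these header values are defined in § 11.2 and § 12.1:

Attribution-Reporting-Register-Source: {"source_event_id": "12345", "destination": "https://destination.example"}

Attribution-Reporting-Register-Trigger: {"event_trigger_data": [{"trigger_data": "3"}]}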

2. HTML monkeypatches

2.1. API for elements

interface mixin HTMLAttributionSrcElementUtils {
    [CEReactions, SecureContext] attribute USVString attributionSrc;
};
HTMLAnchorElement includes HTMLAttributionSrcElementUtils;
HTMLImageElement includes HTMLAttributionSrcElementUtils;
HTMLScriptElement includes HTMLAttributionSrcElementUtils;

Add the following content attributes :

a

attributionsrc - URL for attribution registration

img

attributionsrc - URL for attribution registration

script

attributionsrc - URL for attribution registration

Add the following content attribute descriptions:

a

The attributionsrc attribute is a string representing the URL of the resource that will register an attribution source when the a is navigated.

img

The attributionsrc attribute is a string representing the URL of the resource that will register an attribution source or attribution trigger when set.

script

The attributionsrc attribute is a string representing the URL of the resource that will register an attribution source or attribution trigger when set.

The IDL attribute attributionSrc must reflect the respective content attribute of the same name.
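For example (non-normative; the URLs are hypothetical), a script can opt an img element into background registration by assigning the reflected IDL attribute:

const img = document.createElement("img");
// Reflects the attributionsrc content attribute, which triggers the
// background attributionsrc request described below.
img.attributionSrc = "https://reporter.example/register";
img.src = "https://reporter.example/pixel.gif";
document.body.appendChild(img);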

To make background attributionsrc requests given an HTMLAttributionSrcElementUtils element and an eligibility eligibility :

  1. Let attributionSrc be element ’s attributionSrc .

  2. Let tokens be the result of splitting attributionSrc on ASCII whitespace.

  3. For each token of tokens :

    1. Parse token , relative to element ’s node document . If that is not successful, continue . Otherwise, let url be the resulting URL record .

    2. Run make a background attributionsrc request with url , contextOrigin , eligibility , and element ’s node document .

Set contextOrigin properly.

Consider allowing the user agent to limit the size of tokens .

Whenever an img or a script element is created or element ’s attributionSrc attribute is set or changed, run make background attributionsrc requests with element and " event-source-or-trigger ".

Monkeypatch img and script loading so that the presence of an attributionSrc attribute sets the src request’s Attribution Reporting eligibility to " event-source-or-trigger ".

Modify follow the hyperlink as follows:

After the step

If subject ’s link types includes...

add the steps

  1. Let navigationSourceEligible be false.

  2. If subject has an attributionsrc attribute:

    1. Set navigationSourceEligible to true.

    2. Make background attributionsrc requests with subject and " navigation-source ".

Add "and navigationSourceEligible set to navigationSourceEligible " to the step

Navigate targetNavigable ...

2.2. Window open steps

Modify the tokenize the features argument as follows:

Replace the step

Collect a sequence of code points that are not feature separators code points from features given position . Set value to the collected code points, converted to ASCII lowercase .

with

Collect a sequence of code points that are not feature separators code points from features given position . Set value to the collected code points, converted to ASCII lowercase . Set originalCaseValue to the collected code points.

Replace the step

If name is not the empty string, then set tokenizedFeatures [ name ] to value .

with the steps

  1. If name is not the empty string:

    1. Switch on name :

      " attributionsrc "

      Run the following steps:

      1. If tokenizedFeatures [ name ] does not exist , set tokenizedFeatures [ name ] to a new list .

      2. Append originalCaseValue to tokenizedFeatures [ name ].

      Anything else

      Set tokenizedFeatures [ name ] to value .

Modify the window open steps as follows:

After the step

Let tokenizedFeatures be the result of tokenizing features .

add the steps

  1. Let navigationSourceEligible be false.

  2. If tokenizedFeatures [" attributionsrc "] exists :

    1. Assert : tokenizedFeatures [" attributionsrc "] is a list .

    2. Set navigationSourceEligible to true.

    3. Set attributionSrcUrls to a new list .

    4. For each value of tokenizedFeatures [" attributionsrc "]:

      1. If value is the empty string, continue .

      2. Let decodedSrcBytes be the result of percent-decoding value .

      3. Let decodedSrc be the UTF-8 decode without BOM of decodedSrcBytes .

      4. Parse decodedSrc relative to the entry settings object , and set urlRecord to the resulting URL record , if any. If parsing failed, continue .

      5. Append urlRecord to attributionSrcUrls .

Use attributionSrcUrls with make a background attributionsrc request .

In each step that calls navigate , set navigationSourceEligible to navigationSourceEligible .

Add the following item to navigation params :

navigationSourceEligible

A boolean indicating whether the navigation can register a navigation source in its response. Defaults to false.

Modify navigate as follows:

Add an optional boolean parameter called navigationSourceEligible , defaulting to false.

In the step

Set navigationParams to a new navigation params with...

add the property

navigationSourceEligible

navigationSourceEligible

Use/propagate navigationSourceEligible to the navigation request 's Attribution Reporting eligibility .
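As a non-normative usage sketch of the feature tokenization above (the URLs are hypothetical), a page could open a window that is eligible to register a navigation source:

// Each "attributionsrc" feature value is percent-decoded by the window open
// steps above, so the URL is percent-encoded here.
window.open(
    "https://destination.example/landing",
    "_blank",
    "attributionsrc=" + encodeURIComponent("https://reporter.example/register-source"));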

4. Network monkeypatches

dictionary AttributionReportingRequestOptions {
  required boolean eventSourceEligible;
  required boolean triggerEligible;
};
partial dictionary RequestInit {
  AttributionReportingRequestOptions attributionReporting;
};
partial interface XMLHttpRequest {
  [SecureContext]
  undefined setAttributionReporting(AttributionReportingRequestOptions options);
};

A request has an associated Attribution Reporting eligibility (an eligibility ). Unless otherwise stated it is " unset ".

To get an eligibility from AttributionReportingRequestOptions given an optional AttributionReportingRequestOptions options :

  1. If options is null, return " unset ".

  2. Let eventSourceEligible be options ’s eventSourceEligible .

  3. Let triggerEligible be options ’s triggerEligible .

  4. If ( eventSourceEligible , triggerEligible ) is:

    (false, false)

    Return " empty ".

    (false, true)

    Return " trigger ".

    (true, false)

    Return " event-source ".

    (true, true)

    Return " event-source-or-trigger ".

Check permissions policy.
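A non-normative fetch sketch of the mapping above, assuming the user agent supports the attributionReporting member of RequestInit defined in this section:

// (eventSourceEligible, triggerEligible) = (true, false) maps to the
// "event-source" eligibility, which causes the request to carry an
// Attribution-Reporting-Eligible header (see below).
fetch("https://reporter.example/register-source", {
  keepalive: true,
  attributionReporting: { eventSourceEligible: true, triggerEligible: false },
});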

" Attribution-Reporting-Eligible " is a Dictionary Structured Header set on a request that indicates which registrations, if any, are allowed on the corresponding response . Its values are not specified and its allowed keys are:

" event-source "

An event source may be registered.

" navigation-source "

A navigation source may be registered.

" trigger "

A trigger may be registered.

To set Attribution Reporting headers given a header list headers and an eligibility eligibility :

  1. Delete " Attribution-Reporting-Eligible " from headers .

  2. Delete " Attribution-Reporting-Support " from headers .

  3. If eligibility is " unset ", return.

  4. Let dict be an ordered map .

  5. If eligibility is:

    " empty "

    Do nothing.

    " event-source "

    Set dict [" event-source "] to true.

    " navigation-source "

    Set dict [" navigation-source "] to true.

    " trigger "

    Set dict [" trigger "] to true.

    " event-source-or-trigger "

    Set dict [" event-source "] to true and set dict [" trigger "] to true.

  6. Set a structured field value given (" Attribution-Reporting-Eligible ", dict ) in headers .

  7. Set an OS-support header in headers .
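For example (non-normative), the " event-source-or-trigger " eligibility serializes the dictionary above as:

Attribution-Reporting-Eligible: event-source, trigger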

4.1. Fetch monkeypatches

Modify fetch as follows:

After the step

If request ’s header list does not contain Accept ...

add the step

  1. Set Attribution Reporting headers with request ’s header list and request ’s Attribution Reporting eligibility .

Modify Request(input, init) as follows:

In the step

Set request to a new request with the following properties:

add the property

Attribution Reporting eligibility

request ’s Attribution Reporting eligibility .

After the step

If init [" priority "] exists , then:

add the step

  1. If init [" attributionReporting "] exists , then set request ’s Attribution Reporting eligibility to the result of get an eligibility from AttributionReportingRequestOptions with it.

4.2. XMLHttpRequest monkeypatches

An XMLHttpRequest object has an associated Attribution Reporting eligibility (an eligibility ). Unless otherwise stated it is " unset ".

The setAttributionReporting(options) method must run these steps:

  1. If this ’s state is not opened , then throw an " InvalidStateError " DOMException .

  2. If this ’s send() flag is set, then throw an " InvalidStateError " DOMException .

  3. Set this ’s Attribution Reporting eligibility to the result of get an eligibility from AttributionReportingRequestOptions with options .

Modify send(body) as follows:

After the step:

Let req be a new request , initialized as follows...

Add the step:

  1. Set Attribution Reporting headers with req ’s header list and this ’s Attribution Reporting eligibility .
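A non-normative usage sketch (the URL is hypothetical); per the state checks above, setAttributionReporting() must be called after open() and before send():

const xhr = new XMLHttpRequest();
xhr.open("GET", "https://reporter.example/register-trigger");
// (false, true) maps to the "trigger" eligibility.
xhr.setAttributionReporting({ eventSourceEligible: false, triggerEligible: true });
xhr.send();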

5. Permissions Policy integration

This specification defines a policy-controlled feature identified by the string " attribution-reporting ". Its default allowlist is * .

6. Clear Site Data integration

In clear DOM-accessible storage for origin , add the following step:

  1. Run clear site data with origin .

To clear site data given an origin origin :

  1. For each attribution source source of the attribution source cache :

    1. If source ’s reporting origin and origin are same origin , remove source from the attribution source cache .

  2. For each event-level report report of the event-level report cache :

    1. If report ’s reporting origin and origin are same origin , remove report from the event-level report cache .

  3. For each aggregatable report report of the aggregatable report cache :

    1. If report ’s reporting origin and origin are same origin , remove report from the aggregatable report cache .

Note: We deliberately do not remove matching entries from the attribution rate-limit cache , as doing so would allow a site to reset and therefore exceed the intended rate limits at will.

7. Structures

7.1. Trigger state

A trigger state is a struct with the following items:

trigger data

A non-negative 64-bit integer.

report window

A non-negative integer.

7.2. Randomized response output configuration

A randomized response output configuration is a struct with the following items:

max attributions per source

A positive integer.

trigger data cardinality

A positive integer.

num report windows

A positive integer.

7.3. Randomized source response

A randomized source response is null or a set of trigger states .

7.4. Attribution filtering

A filter value is an ordered set of strings .

A filter map is an ordered map whose keys are strings and whose values are filter values .

7.5. Suitable origin

A suitable origin is an origin that is suitable .

7.6. Source type

A source type is one of the following:

" navigation "

The source was associated with a top-level navigation.

" event "

The source was not associated with a top-level navigation.

7.7. Attribution source

An attribution source is a struct with the following items:

source identifier

A string .

source origin

A suitable origin .

event ID

A non-negative 64-bit integer.

attribution destinations

An ordered set of sites .

reporting origin

A suitable origin .

source type

A source type .

expiry

A duration .

event report window

A duration .

aggregatable report window

A duration .

priority

A 64-bit integer.

source time

A moment .

number of event-level reports

Number of event-level reports created for this attribution source .

event-level attributable (default true)

A boolean .

dedup keys

An ordered set of dedup keys associated with this attribution source .

randomized response

A randomized source response .

randomized trigger rate

A number between 0 and 1 (both inclusive).

filter data

A filter map .

debug key

Null or a non-negative 64-bit integer.

aggregation keys

An ordered map whose keys are strings and whose values are non-negative 128-bit integers.

aggregatable budget consumed

A non-negative integer, total value of all aggregatable contributions created with this attribution source .

aggregatable dedup keys

An ordered set of aggregatable dedup key values associated with this attribution source .

debug reporting enabled

A boolean .

number of aggregatable reports

Number of aggregatable reports created for this attribution source .

An attribution source source ’s expiry time is source ’s source time + source ’s expiry .

An attribution source source ’s event report window time is source ’s source time + source ’s event report window .

An attribution source source ’s aggregatable report window time is source ’s source time + source ’s aggregatable report window .

An attribution source source ’s source site is the result of obtaining a site from source ’s source origin .

7.8. Aggregatable trigger data

An aggregatable trigger data is a struct with the following items:

key piece

A non-negative 128-bit integer.

source keys

An ordered set of strings .

filters

A list of filter maps .

negated filters

A list of filter maps .

7.9. Aggregatable dedup key

An aggregatable dedup key is a struct with the following items:

dedup key

Null or a non-negative 64-bit integer.

filters

A filter map .

negated filters

A filter map .

7.10. Event-level trigger configuration

An event-level trigger configuration is a struct with the following items:

trigger data

A non-negative 64-bit integer.

dedup key

Null or a non-negative 64-bit integer.

priority

A 64-bit integer.

filters

A list of filter maps .

negated filters

A list of filter maps .

7.11. Aggregation coordinator

An aggregation coordinator is one of a user-agent-determined set of strings that specifies which aggregation service deployment to use.

7.12. Aggregatable source registration time configuration

An aggregatable source registration time configuration is one of the following:

" exclude "

" source_registration_time " is excluded from an aggregatable report 's shared info .

" include "

" source_registration_time " is included in an aggregatable report 's shared info .

7.13. Attribution trigger

An attribution trigger is a struct with the following items:

attribution destination

A site .

trigger time

A moment .

reporting origin

A suitable origin .

filters

A list of filter maps .

negated filters

A list of filter maps .

debug key

Null or a non-negative 64-bit integer.

event-level trigger configurations

A set of event-level trigger configurations .

aggregatable trigger data

A list of aggregatable trigger data .

aggregatable values

An ordered map whose keys are strings and whose values are non-negative 32-bit integers.

aggregatable dedup keys

A list of aggregatable dedup keys .

serialized private state tokens

A list of byte sequences .

debug reporting enabled

A boolean .

aggregation coordinator

An aggregation coordinator .

aggregatable source registration time configuration

An aggregatable source registration time configuration .

7.14. Attribution report

An attribution report is a struct with the following items:

reporting origin

A suitable origin .

report time

A moment .

original report time

A moment .

delivered (default false)

A boolean .

report ID

A string .

source debug key

Null or a non-negative 64-bit integer.

trigger debug key

Null or a non-negative 64-bit integer.

7.15. Event-level report

An event-level report is an attribution report with the following additional items:

event ID

A non-negative 64-bit integer.

source type

A source type .

trigger data

A non-negative 64-bit integer.

randomized trigger rate

A number between 0 and 1 (both inclusive).

trigger priority

A 64-bit integer.

trigger time

A moment .

source identifier

A string.

attribution destinations

An ordered set of sites .

7.16. Aggregatable contribution

An aggregatable contribution is a struct with the following items:

key

A non-negative 128-bit integer.

value

A non-negative 32-bit integer.

7.17. Aggregatable report

An aggregatable report is an attribution report with the following additional items:

source time

A moment .

contributions

A list of aggregatable contributions .

effective attribution destination

A site .

serialized private state token

A byte sequence .

aggregation coordinator

An aggregation coordinator .

source registration time configuration

An aggregatable source registration time configuration .

is null report (default false)

A boolean .

7.18. Attribution rate-limits

A rate-limit scope is one of the following:

" source "

The attribution rate-limit record was created for the registration of an attribution source .

" attribution "

The attribution rate-limit record was created for an attribution.

An attribution rate-limit record is a struct with the following items:

scope

A rate-limit scope .

source site

A site .

attribution destination

A site .

reporting origin

A suitable origin .

time

A moment .

expiry time

Null or a moment .

7.19. Attribution debug data

A debug data type is a non-empty string that specifies the set of data that is contained in the body of an attribution debug data .

A source debug data type is a debug data type for source registrations. Possible values are:

" source-destination-limit "

" source-noised "

" source-storage-limit "

" source-success "

" source-unknown-error "

A trigger debug data type is a debug data type for trigger registrations. Possible values are:

An attribution debug data is a struct with the following items:

data type

A debug data type .

body

A map whose fields are determined by the data type .

7.20. Attribution debug report

An attribution debug report is a struct with the following items:

data

A list of attribution debug data .

reporting origin

A suitable origin .

7.21. Triggering result

A triggering status is one of the following:

Note: " noised " only applies for triggering event-level attribution when it is attributed successfully but dropped as the noise was applied to the source.

A triggering result is a tuple with the following items:

status

A triggering status .

debug data

Null or an attribution debug data .

8. Storage

A user agent holds an attribution source cache , which is an ordered set of attribution sources .

A user agent holds an event-level report cache , which is an ordered set of event-level reports .

A user agent holds an aggregatable report cache , which is an ordered set of aggregatable reports .

A user agent holds an attribution rate-limit cache , which is an ordered set of attribution rate-limit records .

The above caches are collectively known as the attribution caches . The attribution caches are shared among all environment settings objects .

Note: This would ideally use storage bottles to provide access to the attribution caches. However attribution data is inherently cross-site, and operations on storage would need to span across all storage bottle maps.

9. Vendor-Specific Values

Max source expiry is a positive duration that controls the maximum value that can be used as an expiry . It must be greater than or equal to 30 days.

Max entries per filter data is a positive integer that controls the maximum size of an attribution source 's filter data .

Max values per filter data entry is a positive integer that controls the maximum size of each value of an attribution source 's filter data .

Max aggregation keys per attribution is a positive integer that controls the maximum size of an attribution source 's aggregation keys , the maximum size of an aggregatable trigger data 's source keys , and the maximum size of an attribution trigger 's aggregatable values .

Max pending sources per source origin is a positive integer that controls how many attribution sources can be in the attribution source cache per source origin .

Navigation-source trigger data cardinality is a positive integer that controls the valid range of trigger data for triggers that are attributed to an attribution source whose source type is " navigation ": 0 <= trigger data < navigation-source trigger data cardinality .

Event-source trigger data cardinality is a positive integer that controls the valid range of trigger data for triggers that are attributed to an attribution source whose source type is " event ": 0 <= trigger data < event-source trigger data cardinality .

Randomized response epsilon is a non-negative double that controls the randomized response probability of an attribution source .

Randomized null report rate excluding source registration time is a double between 0 and 1 (both inclusive) that controls the randomized number of null reports generated for an attribution trigger whose aggregatable source registration time configuration is " exclude ".

Randomized null report rate including source registration time is a double between 0 and 1 (both inclusive) that controls the randomized number of null reports generated for an attribution trigger whose aggregatable source registration time configuration is " include ".

Max event-level reports per attribution destination is a positive integer that controls how many event-level reports can be in the event-level report cache per site in attribution destinations .

Max aggregatable reports per attribution destination is a positive integer that controls how many aggregatable reports can be in the aggregatable report cache per effective attribution destination .

Max attributions per navigation source is a positive integer that controls how many times a single attribution source whose source type is " navigation " can create an event-level report .

Max attributions per event source is a positive integer that controls how many times a single attribution source whose source type is " event " can create an event-level report .

Max aggregatable reports per source is a positive integer that controls how many aggregatable reports can be created by attribution triggers attributed to a single attribution source .

Max destinations covered by unexpired sources is a positive integer that controls the maximum number of distinct sites across all attribution destinations for unexpired attribution sources with a given ( source site , reporting origin site ).

Attribution rate-limit window is a positive duration that controls the rate-limiting window for attribution.

Max source reporting origins per rate-limit window is a positive integer that controls the maximum number of distinct reporting origins for a ( source site , attribution destination ) that can create attribution sources per attribution rate-limit window .

Max source reporting origins per source reporting site is a positive integer that controls the maximum number of distinct reporting origins for a ( source site , reporting origin site ) that can create attribution sources per origin rate-limit window .

Origin rate-limit window is a positive duration that controls the rate-limiting window for max source reporting origins per source reporting site .

Max attribution reporting origins per rate-limit window is a positive integer that controls the maximum number of distinct reporting origins for a ( source site , attribution destination ) that can create event-level reports per attribution rate-limit window .

Max attributions per rate-limit window is a positive integer that controls the maximum number of attributions for a ( source site , attribution destination , reporting origin site ) per attribution rate-limit window .

Allowed aggregatable budget per source is a positive integer that controls the total required aggregatable budget of all aggregatable reports created for an attribution source .

Min aggregatable report delay is a non-negative duration that controls the minimum delay to deliver an aggregatable report .

Randomized aggregatable report delay is a positive duration that controls the random delay to deliver an aggregatable report .

Default aggregation coordinator is the aggregation coordinator that controls how to obtain the public key for encrypting an aggregatable report by default.

10. General Algorithms

10.1. Serialize an integer

To serialize an integer , represent it as a string of the shortest possible decimal number.

This would ideally be replaced by a more descriptive algorithm in Infra. See infra/201

10.2. Serialize attribution destinations

To serialize attribution destinations destinations , run the following steps:

  1. Assert : destinations is not empty .

  2. Let destinationStrings be a list .

  3. For each destination in destinations :

    1. Assert : destination is not the opaque origin .

    2. Append destination serialized to destinationStrings .

  4. If destinationStrings ’s size is equal to 1, return destinationStrings [0].

  5. Return destinationStrings .

To check if a scheme is suitable given a string scheme :

  1. If scheme is " http " or " https ", return true.

  2. Return false.

To check if an origin is suitable given an origin origin :

  1. If origin is not a potentially trustworthy origin , return false.

  2. If origin ’s scheme is not suitable , return false.

  3. Return true.

10.3. Parsing filter data

To parse filter values given a value :

  1. If value is not a map , return null.

  2. Let result be a new filter map .

  3. For each filter → data of value :

    1. If data is not a list , return null.

    2. Let set be a new ordered set .

    3. For each d of data :

      1. If d is not a string , return null.

      2. Append d to set .

    4. Set result [ filter ] to set .

  4. Return result .

To parse filter data given a value :

  1. Let map be the result of running parse filter values with value .

  2. If map is null, return null.

  3. If map ’s size is greater than the user agent’s max entries per filter data , return null.

  4. For each filter → set of map :

    1. If set ’s size is greater than the user agent’s max values per filter data entry , return null.

  5. Return map .

Determine whether to limit length or code point length for filter and d above.
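For instance (non-normative; the keys and values are hypothetical), the following JSON value parses into a filter map with two entries, each holding a filter value of one string:

{"conversion_subdomain": ["electronics.megastore"], "product": ["1234"]}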

10.4. Parsing filters

To parse filters given a value :

  1. Let filtersList be a new list .

  2. If value is a map , then:

    1. Let filterMap be the result of running parse filter values with value .

    2. If filterMap is null, return null.

    3. Append filterMap to filtersList .

    4. Return filtersList .

  3. If value is not a list , return null.

  4. For each data of value :

    1. Let filterMap be the result of running parse filter values with data .

    2. If filterMap is null, return null.

    3. Append filterMap to filtersList .

  5. Return filtersList .
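Parse filters accepts either a single map or a list of maps, so both of the following (hypothetical) values parse into a list of filter maps:

{"source_type": ["navigation"]}

[{"source_type": ["navigation"]}, {"product": ["1234"]}]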

10.5. Cookie-based debugging

To check if cookie-based debugging is allowed given a suitable origin reportingOrigin :

  1. Let domain be the canonicalized domain name of reportingOrigin ’s host .

  2. For each cookie of the user agent’s cookie store :

    1. If cookie ’s name is not " ar_debug ", continue .

    2. If cookie ’s http-only-flag is false, continue .

    3. If cookie ’s secure-flag is false, continue .

    4. If cookie ’s same-site-flag is not " None ", continue .

    5. If cookie ’s host-only-flag is true and domain is not identical to cookie ’s domain, continue .

    6. If cookie ’s host-only-flag is false and domain does not domain-match cookie ’s domain, continue .

    7. If " / " does not path-match cookie ’s path, continue .

    8. Return allowed .

  3. Return blocked .

Ideally this would use the cookie-retrieval algorithm , but it cannot: There is no way to consider only cookies whose http-only-flag is true and whose same-site-flag is " None "; there is no way to prevent the last-access-time from being modified; and the return value is a string that would have to be further processed to check for the " ar_debug " cookie.
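For example (non-normative), a reporting origin could satisfy these checks by setting a cookie like the following; the cookie value itself is irrelevant to the check and is illustrative:

Set-Cookie: ar_debug=1; Secure; HttpOnly; SameSite=None; Path=/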

10.6. Obtaining a randomized response

To obtain a randomized response given trueValue , a set possibleValues , and a double randomPickRate :

  1. Assert : randomPickRate is between 0 and 1 (both inclusive).

  2. Let r be a random double between 0 (inclusive) and 1 (exclusive) with uniform probability.

  3. If r is less than randomPickRate , return a random item from possibleValues with uniform probability.

  4. Otherwise, return trueValue .
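A minimal non-normative sketch of this algorithm in JavaScript, assuming possibleValues is a non-empty array:

// With probability randomPickRate, returns a uniformly random member of
// possibleValues; otherwise returns trueValue.
function obtainRandomizedResponse(trueValue, possibleValues, randomPickRate) {
  const r = Math.random(); // uniform double in [0, 1)
  if (r < randomPickRate) {
    return possibleValues[Math.floor(Math.random() * possibleValues.length)];
  }
  return trueValue;
}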

10.7. Parsing aggregation key piece

To parse an aggregation key piece given a string input , perform the following steps. This algorithm will return either a non-negative 128-bit integer or an error.

  1. If input ’s code point length is not between 3 and 34 (both inclusive), return an error.

  2. If the first character is not a U+0030 DIGIT ZERO (0), return an error.

  3. If the second character is not a U+0058 LATIN CAPITAL LETTER X character (X) and not a U+0078 LATIN SMALL LETTER X character (x), return an error.

  4. Let value be the code point substring from 2 to the end of input .

  5. If the characters within value are not all ASCII hex digits , return an error.

  6. Interpret value as a hexadecimal number and return as a non-negative 128-bit integer.
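A minimal non-normative JavaScript sketch of this parsing, returning null in place of an error (for valid ASCII inputs, string length and code point length coincide):

function parseAggregationKeyPiece(input) {
  // "0x" or "0X" followed by 1 to 32 ASCII hex digits (3 to 34 code points).
  if (input.length < 3 || input.length > 34) return null;
  if (input[0] !== "0") return null;
  if (input[1] !== "x" && input[1] !== "X") return null;
  const hex = input.slice(2);
  if (!/^[0-9A-Fa-f]+$/.test(hex)) return null;
  // Interpret the remainder as a non-negative 128-bit integer.
  return BigInt("0x" + hex);
}

// parseAggregationKeyPiece("0x159") === 345n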

10.8. Can attribution rate-limit record be removed

Given an attribution rate-limit record record and a moment now :

  1. If record ’s time is after now , return false.

  2. If record ’s scope is " attribution ", return true.

  3. If record ’s expiry time is after now , return false.

  4. Return true.

10.9. Obtaining and delivering an attribution debug report

To obtain and deliver a debug report given a list of attribution debug data data and a suitable origin reportingOrigin :

  1. Let debugReport be an attribution debug report with the items:

    data

    data

    reporting origin

    reportingOrigin

  2. Queue a task to attempt to deliver a verbose debug report with debugReport .

10.10. Making a background attributionsrc request

An eligibility is one of the following:

" unset "

Depending on context, a trigger may or may not be registered.

" empty "

Neither a source nor a trigger may be registered.

" event-source "

An event source may be registered.

" navigation-source "

A navigation source may be registered.

" trigger "

A trigger may be registered.

" event-source-or-trigger "

An event source or a trigger may be registered.

A registrar is one of the following:

" web "

The user agent supports web registrations.

" os "

The user agent supports OS registrations.

To validate a background attributionsrc eligibility given an eligibility eligibility :

  1. Assert : eligibility is " navigation-source " or " event-source-or-trigger ".

To make a background attributionsrc request given a URL url , a suitable origin contextOrigin , an eligibility eligibility , and a Document document :

  1. Validate eligibility .

  2. If url ’s scheme is not suitable , return.

  3. Let context be document ’s relevant settings object .

  4. If context is not a secure context , return.

  5. If the " attribution-reporting " feature is not enabled in document with document ’s origin , return.

  6. Let supportedRegistrars be the result of getting supported registrars .

  7. If supportedRegistrars is empty , return.

  8. Let request be a new request with the following properties:

    method

    " GET "

    URL

    url

    keepalive

    true

    Attribution Reporting eligibility

    eligibility

  9. Fetch request with processResponse being process an attributionsrc response with contextOrigin , eligibility , and context .

Audit other properties on request and set them properly.

Support header-processing on redirects.

Check for transient activation with " navigation-source ".

To process an attributionsrc response given a suitable origin contextOrigin , an eligibility eligibility , an environment settings objects context , and a response response :

  1. Validate eligibility .

  2. Let reportingOrigin be response ’s URL 's origin .

  3. If reportingOrigin is not suitable , return.

  4. Let sourceHeader be the result of getting " Attribution-Reporting-Register-Source " from response ’s header list .

  5. Let triggerHeader be the result of getting " Attribution-Reporting-Register-Trigger " from response ’s header list .

  6. Let osSourceURLs be the result of getting OS-registration URLs from response ’s header list with " Attribution-Reporting-Register-OS-Source ".

  7. Let osTriggerURLs be the result of getting OS-registration URLs from response ’s header list with " Attribution-Reporting-Register-OS-Trigger ".

  8. If eligibility is:

    " navigation-source "

    Run the following steps:

    1. If sourceHeader and osSourceURLs are both null or both not null, return.

    2. If sourceHeader is not null:

      1. Let source be the result of running parse source-registration JSON with sourceHeader , contextOrigin , reportingOrigin , " navigation ", and context ’s current wall time .

      2. If source is not null, process source .

    3. If osSourceURLs is not null and the user agent supports OS registrations, process osSourceURLs according to an implementation-defined algorithm.

    " event-source-or-trigger "

    Run the following steps:

    1. If the number of non-null entries in « sourceHeader , triggerHeader , osSourceURLs , osTriggerURLs » is not 1, return.

    2. If sourceHeader is not null:

      1. Let source be the result of running parse source-registration JSON with sourceHeader , contextOrigin , reportingOrigin , " event ", and context ’s current wall time .

      2. If source is not null, process source .

    3. If triggerHeader is not null:

      1. Let destinationSite be the result of obtaining a site from contextOrigin .

      2. Let privateStateTokens be an empty list .

      3. Let trigger be the result of running create an attribution trigger with triggerHeader , destinationSite , reportingOrigin , privateStateTokens , and context ’s current wall time .

      4. If trigger is not null, trigger attribution with trigger .

    4. If osSourceURLs is not null and the user agent supports OS registrations, process osSourceURLs according to an implementation-defined algorithm.

    5. If osTriggerURLs is not null and the user agent supports OS registrations, process osTriggerURLs according to an implementation-defined algorithm.

Set privateStateTokens properly.

11. Source Algorithms

11.1. Obtaining a randomized source response

To obtain a set of possible trigger states given a randomized response output configuration config :

  1. Let possibleTriggerStates be a new empty set .

  2. For each integer triggerData between 0 (inclusive) and config ’s trigger data cardinality (exclusive):

    1. For each integer reportWindow between 0 (inclusive) and config ’s num report windows (exclusive):

      1. Let state be a new trigger state with the items:

        trigger data

        triggerData

        report window

        reportWindow

      2. Append state to possibleTriggerStates .

  3. Let possibleValues be a new empty set .

  4. For each integer attributions between 0 (inclusive) and config ’s max attributions per source (inclusive):

    1. Append to possibleValues all distinct attributions -length combinations of possibleTriggerStates .

  5. Return possibleValues .

To obtain a randomized source response pick rate given a randomized response output configuration config and a double epsilon :

  1. Let possibleValues be the result of obtaining a set of possible trigger states with config .

  2. Let numPossibleValues be the size of possibleValues .

  3. Return numPossibleValues / ( numPossibleValues - 1 + e^ epsilon ).

To obtain a randomized source response given a randomized response output configuration config and a double epsilon :

  1. Let possibleValues be the result of obtaining a set of possible trigger states with config .

  2. Let pickRate be the result of obtaining a randomized source response pick rate with config and epsilon .

  3. Return the result of obtaining a randomized response with null, possibleValues , and pickRate .
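A non-normative JavaScript sketch of the pick-rate formula above; numPossibleValues is assumed to have already been computed from the output configuration:

function randomizedResponsePickRate(numPossibleValues, epsilon) {
  return numPossibleValues / (numPossibleValues - 1 + Math.exp(epsilon));
}

// e.g. with 3 possible values and epsilon = 14, the pick rate is roughly
// 2.5e-6, so the true value is returned almost always.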

11.2. Parsing source-registration JSON

To parse an attribution destination from a string str :

  1. Let url be the result of running the URL parser on the value of the str .

  2. If url is failure or null, return null.

  3. If url ’s origin is not suitable , return null.

  4. Return the result of obtaining a site from url ’s origin .

To parse attribution destinations from a value val :

  1. Let result be an ordered set .

  2. If val is a string , append the result of running parse an attribution destination with val to result , and return result .

  3. If val is not a list , return null.

  4. For each value of val :

    1. If value is not a string , return null.

    2. Let destination be the result of parse an attribution destination with value .

    3. If destination is null, return null.

    4. Append destination to result .

  5. If result ’s size is greater than 3, return null.

  6. If result is empty , return null.

  7. return result .

Confirm that the maximum destinations size is workable.

To obtain a source expiry given a value :

  1. If value is not a string , return null.

  2. Let expirySeconds be the result of applying the rules for parsing integers to value .

  3. If expirySeconds is an error, return null.

  4. Let expiry be expirySeconds seconds.

  5. If expiry is less than 1 day, set expiry to 1 day.

  6. If expiry is greater than the user agent’s max source expiry , set expiry to the user agent’s max source expiry .

  7. Return expiry .

To parse aggregation keys given an ordered map map :

  1. Let aggregationKeys be a new ordered map .

  2. If map [" aggregation_keys "] does not exist , return aggregationKeys .

  3. Let values be map [" aggregation_keys "].

  4. If values is not an ordered map , return null.

  5. If values ’s size is greater than the user agent’s max aggregation keys per attribution , return null.

  6. For each key → value of values :

    1. If value is not a string , return null.

    2. Let keyPiece be the result of running parse an aggregation key piece with value .

    3. If keyPiece is an error, return null.

    4. Set aggregationKeys [ key ] to keyPiece .

  7. Return aggregationKeys .

Determine whether to limit length or code point length for key above.

To parse source-registration JSON given a byte sequence json , a suitable origin sourceOrigin , a suitable origin reportingOrigin , a source type sourceType , and a moment sourceTime :

  1. Let value be the result of running parse JSON bytes to an Infra value with json .

  2. If value is not an ordered map , return null.

  3. Let sourceEventId be 0.

  4. If value [" source_event_id "] exists and is a string :

    1. Set sourceEventId to the result of applying the rules for parsing non-negative integers to value [" source_event_id "].

    2. If sourceEventId is an error, set sourceEventId to 0.

  5. If value [" destination "] does not exist , return null.

  6. Let attributionDestinations be the result of running parse attribution destinations with value [" destination "].

  7. If attributionDestinations is null, return null.

  8. Let expiry be the result of running obtain a source expiry on value [" expiry "].

  9. If expiry is null, set expiry to 30 days.

  10. Let eventReportWindow be the result of running obtain a source expiry on value [" event_report_window "].

  11. Let aggregatableReportWindow be the result of running obtain a source expiry on value [" aggregatable_report_window "].

  12. Let priority be 0.

  13. If value [" priority "] exists and is a string :

    1. Set priority to the result of applying the rules for parsing integers to value [" priority "].

    2. If priority is an error, set priority to 0.

  14. Let filterData be a new filter map .

  15. If value [" filter_data "] exists :

    1. Set filterData to the result of running parse filter data with value [" filter_data "].

    2. If filterData is null, return null.

    3. If filterData [" source_type "] exists , return null.

  16. Set filterData [" source_type "] to « sourceType ».

  17. Let debugKey be null.

  18. If value [" debug_key "] exists and is a string :

    1. Set debugKey to the result of applying the rules for parsing non-negative integers to value [" debug_key "].

    2. If debugKey is an error, set debugKey to null.

    3. If the result of running check if cookie-based debugging is allowed with reportingOrigin is blocked , set debugKey to null.

  19. Let aggregationKeys be the result of running parse aggregation keys with value .

  20. If aggregationKeys is null, return null.

  21. Let triggerDataCardinality be the user agent’s navigation-source trigger data cardinality .

  22. Let maxAttributionsPerSource be the user agent’s max attributions per navigation source .

  23. If sourceType is " event ":

    1. Round expiry away from zero to the nearest day (86400 seconds).

    2. Set triggerDataCardinality to the user agent’s event-source trigger data cardinality .

    3. Set maxAttributionsPerSource to the user agent’s max attributions per event source .

  24. If eventReportWindow is null or greater than expiry , set eventReportWindow to expiry .

  25. If aggregatableReportWindow is null or greater than expiry , set aggregatableReportWindow to expiry .

  26. Let debugReportingEnabled be false.

  27. If value [" debug_reporting "] exists and is a boolean , set debugReportingEnabled to value [" debug_reporting "].

  28. Let randomizedResponseConfig be a new randomized response output configuration whose items are:

    max attributions per source

    maxAttributionsPerSource

    num report windows

    The result of obtaining the number of report windows with sourceType and eventReportWindow

    trigger data cardinality

    triggerDataCardinality

  29. Let epsilon be the user agent’s randomized response epsilon .

  30. Let source be a new attribution source struct whose items are:

    source identifier

    A new unique string

    source origin

    sourceOrigin

    event ID

    sourceEventId

    attribution destinations

    attributionDestinations

    reporting origin

    reportingOrigin

    expiry

    expiry

    event report window

    eventReportWindow

    aggregatable report window

    aggregatableReportWindow

    priority

    priority

    source time

    sourceTime

    source type

    sourceType

    randomized response

    The result of obtaining a randomized source response with randomizedResponseConfig and epsilon .

    randomized trigger rate

    The result of obtaining a randomized source response pick rate with randomizedResponseConfig and epsilon .

    filter data

    filterData

    debug key

    debugKey

    aggregation keys

    aggregationKeys

    aggregatable budget consumed

    0

    debug reporting enabled

    debugReportingEnabled

  31. Return source .

Determine proper charset-handling for the JSON header value.
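A non-normative example of a source-registration header whose JSON parses successfully under the steps above; all values are hypothetical and the header value is wrapped across lines only for readability:

Attribution-Reporting-Register-Source: {
  "source_event_id": "12345",
  "destination": "https://destination.example",
  "expiry": "604800",
  "priority": "100",
  "filter_data": {"product": ["1234"]},
  "aggregation_keys": {"campaignCounts": "0x159"},
  "debug_reporting": true
}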

11.3. Processing an attribution source

To check if an attribution source exceeds the unexpired destination limit given an attribution source source , run the following steps:

  1. Let unexpiredSources be all attribution rate-limit records record in the attribution rate-limit cache where all of the following are true:

  2. Let unexpiredDestinations be an empty set .

  3. For each attribution rate-limit record unexpiredRecord of unexpiredSources :

    1. Append unexpiredRecord ’s attribution destination to unexpiredDestinations .

  4. Let newDestinations be the result of taking the union of unexpiredDestinations and source ’s attribution destinations .

  5. Return whether newDestinations ’s size is greater than the user agent’s max destinations covered by unexpired sources .

To obtain a fake report given an attribution source source and a trigger state triggerState :

  1. Let fakeConfig be a new event-level trigger configuration with the items:

    trigger data

    triggerState ’s trigger data

    dedup key

    null

    priority

    0

    filters

    «[ " source_type " → « source ’s source type » ]»

  2. Let fakeTrigger be a new attribution trigger with the items:

    attribution destinations

    source ’s attribution destinations

    trigger time

    source ’s source time

    reporting origin

    source ’s reporting origin

    filters

    «[]»

    debug key

    null

    event-level trigger configurations

    « fakeConfig »

    aggregatable trigger data

    «»

    aggregatable values

    «[]»

    aggregatable dedup keys

    «»

    debug reporting enabled

    false

    aggregation coordinator

    default aggregation coordinator

    serialized private state tokens

    «»

    aggregatable source registration time configuration

    " exclude "

  3. Let fakeReport be the result of running obtain an event-level report with source , fakeTrigger , and fakeConfig .

  4. Set fakeReport ’s report time to the result of running obtain the report time at a window with source and triggerState ’s report window .

  5. Return fakeReport .

To check if debug reporting is allowed given a source debug data type dataType and a suitable origin reportingOrigin :

  1. If dataType is:

    " source-destination-limit "

    Return allowed .

    " source-noised "
    " source-storage-limit "
    " source-success "
    " source-unknown-error "

    Return the result of running check if cookie-based debugging is allowed with reportingOrigin .

To obtain and deliver a debug report on source registration given a source debug data type dataType and an attribution source source :

  1. If source ’s debug reporting enabled is false, return.

  2. If the result of running check if debug reporting is allowed with dataType and source ’s reporting origin is blocked , return.

  3. Let body be a new map with the following key/value pairs:

    " attribution_destination "

    source ’s attribution destinations , serialized .

    " source_event_id "

    source ’s event ID , serialized .

    " source_site "

    source ’s source site , serialized .

  4. If source ’s debug key is not null, set body [" source_debug_key "] to source ’s debug key , serialized .

  5. If dataType is:

    " source-destination-limit "

    Set body [" limit "] to the user agent’s max destinations covered by unexpired sources , serialized .

    " source-storage-limit "

    Set body [" limit "] to the user agent’s max pending sources per source origin , serialized .

  6. Let data be a new attribution debug data with the items:

    data type

    dataType

    body

    body

  7. Run obtain and deliver a debug report with « data » and source ’s reporting origin .

To process an attribution source given an attribution source source :

  1. Let cache be the user agent’s attribution source cache .

  2. Remove all attribution sources entry from cache where entry ’s expiry time is less than source ’s source time .

  3. Let pendingSourcesForSourceOrigin be the set of all attribution sources pendingSource of cache where pendingSource ’s source origin and source ’s source origin are same origin .

  4. If pendingSourcesForSourceOrigin ’s size is greater than or equal to the user agent’s max pending sources per source origin :

    1. Run obtain and deliver a debug report on source registration with " source-storage-limit " and source .

    2. Return.

  5. If the result of running check if an attribution source exceeds the unexpired destination limit with source is true:

    1. Run obtain and deliver a debug report on source registration with " source-destination-limit " and source .

    2. Return.

  6. For each destination in source ’s attribution destinations :

    1. Let rateLimitRecord be a new attribution rate-limit record with the items:

      scope

      " source "

      source site

      source ’s source site

      attribution destination

      destination

      reporting origin

      source ’s reporting origin

      time

      source ’s source time

      expiry time

      source ’s expiry time

    2. If the result of running should processing be blocked by reporting-origin limit with rateLimitRecord is blocked :

      1. Run obtain and deliver a debug report on source registration with " source-success " and source .

      2. Return.

    3. Append rateLimitRecord to the attribution rate-limit cache .

  7. Remove all attribution rate-limit records entry from the attribution rate-limit cache if the result of running can attribution rate-limit record be removed with entry and source ’s source time is true.

  8. Let debugDataType be " source-success ".

  9. If source ’s randomized response is not null and is a set :

    1. For each trigger state triggerState of source ’s randomized response :

      1. Let fakeReport be the result of running obtain a fake report with source and triggerState .

      2. Append fakeReport to the event-level report cache .

    2. If source ’s randomized response is not empty , then set source ’s event-level attributable to false.

    3. For each destination in source 's attribution destinations :

      1. Let rateLimitRecord be a new attribution rate-limit record with the items:

        scope

        " attribution "

        source site

        source ’s source site

        attribution destination

        destination

        reporting origin

        source ’s reporting origin

        time

        source ’s source time

        expiry time

        null

      2. Append rateLimitRecord to the attribution rate-limit cache .

    4. Set debugDataType to " source-noised ".

  10. Run obtain and deliver a debug report on source registration with debugDataType and source .

  11. Append source to cache .

Note: Because a fake report does not have a "real" effective destination, we need to subtract from the privacy budget of all possible destinations.

Should fake reports respect the user agent’s max event-level reports per attribution destination ?

12. Triggering Algorithms

12.1. Creating an attribution trigger

To parse event triggers given an ordered map map :

  1. Let eventTriggers be a new set .

  2. If map [" event_trigger_data "] does not exist , return eventTriggers .

  3. Let values be map [" event_trigger_data "].

  4. If values is not a list , return null.

  5. For each value of values :

    1. If value is not an ordered map , return null.

    2. Let triggerData be 0.

    3. If value [" trigger_data "] exists and is a string :

      1. Set triggerData to the result of applying the rules for parsing non-negative integers to value [" trigger_data "].

      2. If triggerData is an error, set triggerData to 0.

    4. Let dedupKey be null.

    5. If value [" deduplication_key "] exists and is a string :

      1. Set dedupKey to the result of applying the rules for parsing non-negative integers to value [" deduplication_key "].

      2. If dedupKey is an error, set dedupKey to null.

    6. Let priority be 0.

    7. If value [" priority "] exists and is a string :

      1. Set priority to the result of applying the rules for parsing integers to value [" priority "].

      2. If priority is an error, set priority to 0.

    8. Let filters be a list of filter maps , initially empty.

    9. If value [" filters "] exists :

      1. Set filters to the result of running parse filters with value [" filters "].

      2. If filters is null, return null.

    10. Let negatedFilters be a list of filter maps , initially empty.

    11. If value [" not_filters "] exists :

      1. Set negatedFilters to the result of running parse filters with value [" not_filters "].

      2. If negatedFilters is null, return null.

    12. Let eventTrigger be a new event-level trigger configuration with the items:

      trigger data

      triggerData

      dedup key

      dedupKey

      priority

      priority

      filters

      filters

      negated filters

      negatedFilters

    13. Append eventTrigger to eventTriggers .

  6. Return eventTriggers .

To parse aggregatable trigger data given an ordered map map :

  1. Let aggregatableTriggerData be a new list .

  2. If map [" aggregatable_trigger_data "] does not exist , return aggregatableTriggerData .

  3. Let values be map [" aggregatable_trigger_data "].

  4. If values is not a list , return null.

  5. For each value of values :

    1. If value is not an ordered map , return null.

    2. If value [" key_piece "] does not exist or is not a string , return null.

    3. Let keyPiece be the result of running parse an aggregation key piece with value [" key_piece "].

    4. If keyPiece is an error, return null.

    5. Let sourceKeys be a new ordered set .

    6. If value [" source_keys "] exists :

      1. If value [" source_keys "] is not a list , return null.

      2. If value [" source_keys "]'s size is greater than the user agent’s max aggregation keys per attribution , return null.

      3. For each sourceKey of value [" source_keys "]:

        1. If sourceKey is not a string , return null.

        2. Append sourceKey to sourceKeys .

    7. Let filters be a list of filter maps , initially empty.

    8. If value [" filters "] exists :

      1. Set filters to the result of running parse filters with value [" filters "].

      2. If filters is null, return null.

    9. Let negatedFilters be a list of filter maps , initially empty.

    10. If value [" not_filters "] exists :

      1. Set negatedFilters to the result of running parse filters with value [" not_filters "].

      2. If negatedFilters is null, return null.

    11. Let aggregatableTrigger be a new aggregatable trigger data with the items:

      key piece

      keyPiece

      source keys

      sourceKeys

      filters

      filters

      negated filters

      negatedFilters

    12. Append aggregatableTrigger to aggregatableTriggerData .

  6. Return aggregatableTriggerData .

Determine whether to limit length or code point length for sourceKey above.

To parse aggregatable values given an ordered map map :

  1. If map [" aggregatable_values "] does not exist , return «[]».

  2. Let values be map [" aggregatable_values "].

  3. If values is not an ordered map , return null.

  4. If values ’s size is greater than the user agent’s max aggregation keys per attribution , return null.

  5. For each key → value of values :

    1. If value is not an integer, return null.

    2. If value is less than or equal to 0, return null.

    3. If value is greater than allowed aggregatable budget per source , return null.

  6. Return values .

Determine whether to limit length or code point length for key above.
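
For example, the following non-normative " aggregatable_values " map (values are illustrative) satisfies the checks above: each value is a positive integer no greater than the allowed aggregatable budget per source , and the number of keys stays within max aggregation keys per attribution .

const aggregatableValues = {
  "campaignCounts": 32768,
  "geoValue": 1664
};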

To parse aggregatable dedup keys given an ordered map map :

  1. Let aggregatableDedupKeys be a new list .

  2. If map [" aggregatable_deduplication_keys "] does not exist , return aggregatableDedupKeys .

  3. Let values be map [" aggregatable_deduplication_keys "].

  4. If values is not a list , return null.

  5. For each value of values :

    1. If value is not an ordered map , return null.

    2. Let dedupKey be null.

    3. If value [" deduplication_key "] exists and is a string :

      1. Set dedupKey to the result of applying the rules for parsing non-negative integers to value [" deduplication_key "].

      2. If dedupKey is an error, set dedupKey to null.

    4. Let filters be a list of filter maps , initially empty.

    5. If value [" filters "] exists :

      1. Set filters to the result of running parse filters with value [" filters "].

      2. If filters is null, return null.

    6. Let negatedFilters be a list of filter maps , initially empty.

    7. If value [" not_filters "] exists :

      1. Set negatedFilters to the result of running parse filters with value [" not_filters "].

      2. If negatedFilters is null, return null.

    8. Let aggregatableDedupKey be a new aggregatable dedup key with the items:

      dedup key

      dedupKey

      filters

      filters

      negated filters

      negatedFilters

    9. Append aggregatableDedupKey to aggregatableDedupKeys .

  6. Return aggregatableDedupKeys .

To serialize a private state token given a string encodedBlindedPrivateStateToken :

  1. If encodedBlindedPrivateStateToken is null, return null.

  2. Let decoded be the result of forgiving-base64 decoding encodedBlindedPrivateStateToken .

  3. If decoded is failure, return null.

  4. Let tokens be the result of finishing issuance of decoded .

properly define the "finishing issuance" operation.

  5. If tokens is null, or has size not equal to 1, return null.

  6. Let token be tokens [0].

  7. Let redeemedBytes be the result of "beginning redemption" with token , the empty byte sequence (data), and 0 (the null timestamp).

properly define "begin redemption" operation. Consider running the algorithm at report sending time.

  8. Return the result of forgiving-base64 encoding redeemedBytes .

To create an attribution trigger given a byte sequence json , a site destination , a suitable origin reportingOrigin , a list of strings privateStateTokens , and a moment triggerTime :

  1. Let value be the result of running parse JSON bytes to an Infra value with json .

  2. If value is not an ordered map , return null.

  3. Let eventTriggers be the result of running parse event triggers with value .

  4. If eventTriggers is null, return null.

  5. Let aggregatableTriggerData be the result of running parse aggregatable trigger data with value .

  6. If aggregatableTriggerData is null, return null.

  7. Let aggregatableValues be the result of running parse aggregatable values with value .

  8. If aggregatableValues is null, return null.

  9. Let aggregatableDedupKeys be the result of running parse aggregatable dedup keys with value .

  10. If aggregatableDedupKeys is null, return null.

  11. Let debugKey be null.

  12. If value [" debug_key "] exists and is a string :

    1. Set debugKey to the result of applying the rules for parsing non-negative integers to value [" debug_key "].

    2. If debugKey is an error, set debugKey to null.

    3. If the result of running check if cookie-based debugging is allowed with reportingOrigin is blocked , set debugKey to null.

  13. Let filters be a list of filter maps , initially empty.

  14. If value [" filters "] exists:

    1. Set filters to the result of running parse filters with value [" filters "].

    2. If filters is null, return null.

  15. Let negatedFilters be a list of filter maps , initially empty.

  16. If value [" not_filters "] exists:

    1. Set negatedFilters to the result of running parse filters with value [" not_filters "].

    2. If negatedFilters is null, return null.

  17. Let debugReportingEnabled be false.

  18. If value [" debug_reporting "] exists and is a boolean , set debugReportingEnabled to value[" debug_reporting "].

  19. Let aggregationCoordinator be default aggregation coordinator .

  20. If value [" aggregation_coordinator_identifier "] exists :

    1. If value [" aggregation_coordinator_identifier "] is not a string , return null.

    2. If value [" aggregation_coordinator_identifier "] is not an aggregation coordinator , return null.

    3. Set aggregationCoordinator to value [" aggregation_coordinator_identifier "].

  21. Let aggregatableSourceRegTimeConfig be " exclude ".

  22. If value [" aggregatable_source_registration_time "] exists :

    1. If value [" aggregatable_source_registration_time "] is not a string , return null.

    2. If value [" aggregatable_source_registration_time "] is not an aggregatable source registration time configuration , return null.

    3. Set aggregatableSourceRegTimeConfig to value [" aggregatable_source_registration_time "].

  23. Let serializedPrivateStateTokens be a new empty list .

  24. For each privateStateToken of privateStateTokens :

    1. Let serializedPrivateStateToken be the result of serializing a private state token with privateStateToken .

    2. Append serializedPrivateStateToken to serializedPrivateStateTokens .

  25. Let trigger be a new attribution trigger with the items:

    attribution destination

    destination

    trigger time

    triggerTime

    reporting origin

    reportingOrigin

    filters

    filters

    negated filters

    negatedFilters

    debug key

    debugKey

    event-level trigger configurations

    eventTriggers

    aggregatable trigger data

    aggregatableTriggerData

    aggregatable values

    aggregatableValues

    aggregatable dedup keys

    aggregatableDedupKeys

    serialized private state tokens

    serializedPrivateStateTokens

    debug reporting enabled

    debugReportingEnabled

    aggregation coordinator

    aggregationCoordinator

    aggregatable source registration time configuration

    aggregatableSourceRegTimeConfig

  26. Return trigger .

Determine proper charset-handling for the JSON header value.
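
Putting the pieces together, the trigger-registration JSON (the byte sequence json passed to create an attribution trigger ) might contain the following non-normative content, shown here as a TypeScript object literal with illustrative values:

const triggerRegistrationJson = {
  "event_trigger_data": [
    { "trigger_data": "3", "priority": "100" }
  ],
  "aggregatable_trigger_data": [
    { "key_piece": "0x400", "source_keys": ["campaignCounts"] }
  ],
  "aggregatable_values": { "campaignCounts": 1664 },
  "filters": { "product": ["1234"] },
  "debug_reporting": true
};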

12.2. Does filter data match

To match filter values given a filter value a and a filter value b :

  1. If b is empty , then:

    1. If a is empty , then return true.

    2. Otherwise, return false.

  2. Let i be the intersection of a and b .

  3. If i is empty , then return false.

  4. Return true.

To match filter values with negation given a filter value a and a filter value b :

  1. If b is empty , then:

    1. If a is not empty , then return true.

    2. Otherwise, return false.

  2. Let i be the intersection of a and b .

  3. If i is not empty , then return false.

  4. Return true.

To match an attribution source’s filter data against a filter map given an attribution source source , a filter map filter , and a boolean isNegated :

  1. Let sourceData be source ’s filter data .

  2. For each key → filterValues of filter :

    1. If sourceData [ key ] does not exist , continue .

    2. Let sourceValues be sourceData [ key ].

    3. If isNegated is:

      false
      If the result of running match filter values with sourceValues and filterValues is false, return false.
      true
      If the result of running match filter values with negation with sourceValues and filterValues is false, return false.
  3. Return true.

To match an attribution source’s filter data against filters given an attribution source source , a list of filter maps filters , and a boolean isNegated :

  1. If filters is empty , return true.

  2. For each filter of filters :

    1. If the result of running match an attribution source’s filter data against a filter map with source , filter and isNegated is true, return true.

  3. Return false.

To match an attribution source’s filter data against filters and negated filters given an attribution source source , a list of filter maps filters , and a list of filter maps notFilters :

  1. If the result of running match an attribution source’s filter data against filters with source , filters , and isNegated set to false is false, return false.

  2. If the result of running match an attribution source’s filter data against filters with source , notFilters , and isNegated set to true is false, return false.

  3. Return true.
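
The following non-normative TypeScript sketch mirrors the filter-matching algorithms above. The FilterValue and FilterMap type aliases are illustrative; the spec’s filter map is an ordered map from strings to sets of strings.

type FilterValue = Set<string>;
type FilterMap = Map<string, FilterValue>;

function matchFilterValues(a: FilterValue, b: FilterValue): boolean {
  if (b.size === 0) return a.size === 0;
  for (const v of b) if (a.has(v)) return true; // non-empty intersection
  return false;
}

function matchFilterValuesWithNegation(a: FilterValue, b: FilterValue): boolean {
  if (b.size === 0) return a.size > 0;
  for (const v of b) if (a.has(v)) return false; // intersection must be empty
  return true;
}

function matchFilterData(sourceData: FilterMap, filter: FilterMap, isNegated: boolean): boolean {
  for (const [key, filterValues] of filter) {
    const sourceValues = sourceData.get(key);
    if (sourceValues === undefined) continue; // keys absent from the source's filter data are ignored
    const matched = isNegated
      ? matchFilterValuesWithNegation(sourceValues, filterValues)
      : matchFilterValues(sourceValues, filterValues);
    if (!matched) return false;
  }
  return true;
}

function matchFilters(sourceData: FilterMap, filters: FilterMap[], isNegated: boolean): boolean {
  if (filters.length === 0) return true;
  return filters.some(f => matchFilterData(sourceData, f, isNegated)); // OR across filter maps
}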

12.3. Should attribution be blocked by attribution rate limit

Given an attribution trigger trigger and an attribution source sourceToAttribute :

  1. Let matchingRateLimitRecords be all attribution rate-limit records record of attribution rate-limit cache where all of the following are true:

  2. If matchingRateLimitRecords ’s size is greater than or equal to max attributions per rate-limit window , return blocked .

  3. Return allowed .

12.4. Should processing be blocked by reporting-origin limit

Given an attribution rate-limit record newRecord :

  1. Let max be max source reporting origins per rate-limit window .

  2. If newRecord ’s scope is " attribution ", set max to max attribution reporting origins per rate-limit window .

  3. Let matchingRateLimitRecords be all attribution rate-limit records record in the attribution rate-limit cache where all of the following are true:

  4. Let distinctReportingOrigins be the ordered set containing the reporting origin of each record in matchingRateLimitRecords , unioned with « newRecord ’s reporting origin ».

  5. If distinctReportingOrigins ’s size is greater than max , return blocked .

Note: source scopes have an auxiliary max source reporting origins per source reporting site rate limit that also must be enforced.

  6. If newRecord ’s scope is " attribution ", return allowed .

  7. Let matchingRateLimitRecords be all attribution rate-limit records record in the attribution rate-limit cache where all of the following are true:

  8. Let distinctReportingOrigins be the ordered set containing the reporting origin of each record in matchingRateLimitRecords , unioned with « newRecord ’s reporting origin ».

  9. If distinctReportingOrigins ’s size is greater than or equal to max source reporting origins per source reporting site , return blocked .

  10. Return allowed .

12.5. Should attribution be blocked by rate limits

Given an attribution trigger trigger , an attribution source sourceToAttribute , and an attribution rate-limit record newRecord :

  1. If the result of running should attribution be blocked by attribution rate limit with trigger and sourceToAttribute is blocked :

    1. Let debugData be the result of running obtain debug data on trigger registration with " trigger-attributions-per-source-destination-limit ", trigger , sourceToAttribute and report set to null.

    2. Return the triggering result (" dropped ", debugData ).

  2. If the result of running should processing be blocked by reporting-origin limit with newRecord is blocked :

    1. Let debugData be the result of running obtain debug data on trigger registration with " trigger-reporting-origin-limit ", trigger , sourceToAttribute and report set to null.

    2. Return the triggering result (" dropped ", debugData ).

  3. Return null.

12.6. Creating aggregatable contributions

To create aggregatable contributions given an attribution source source and an attribution trigger trigger , run the following steps:

  1. Let aggregationKeys be the result of cloning source ’s aggregation keys .

  2. For each triggerData of trigger ’s aggregatable trigger data :

    1. If the result of running match an attribution source’s filter data against filters and negated filters with source , triggerData ’s filters , and triggerData ’s negated filters is false, continue .

    2. For each sourceKey of triggerData ’s source keys :

      1. If aggregationKeys [ sourceKey ] does not exist , continue .

      2. Set aggregationKeys [ sourceKey ] to aggregationKeys [ sourceKey ] OR triggerData ’s key piece .

  3. Let aggregatableValues be trigger ’s aggregatable values .

  4. Let contributions be a new empty list .

  5. For each id → key of aggregationKeys :

    1. If aggregatableValues [ id ] does not exist , continue .

    2. Let contribution be a new aggregatable contribution with the items:

      key

      key

      value

      aggregatableValues [ id ]

    3. Append contribution to contributions .

  6. Return contributions .
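
The following non-normative TypeScript sketch illustrates the contribution-creation steps above, with aggregation keys modeled as bigints. Filter matching on each aggregatable trigger datum is assumed to have already succeeded (the sketch omits it), and all names are illustrative.

interface AggregatableTriggerDatum {
  keyPiece: bigint;        // parsed "key_piece"
  sourceKeys: Set<string>; // parsed "source_keys"
}

function createAggregatableContributions(
  sourceAggregationKeys: Map<string, bigint>,
  triggerData: AggregatableTriggerDatum[],
  aggregatableValues: Map<string, number>
): { key: bigint; value: number }[] {
  const keys = new Map(sourceAggregationKeys); // clone the source's aggregation keys
  for (const datum of triggerData) {
    for (const sourceKey of datum.sourceKeys) {
      const existing = keys.get(sourceKey);
      if (existing === undefined) continue;
      keys.set(sourceKey, existing | datum.keyPiece); // bitwise OR of key pieces
    }
  }
  const contributions: { key: bigint; value: number }[] = [];
  for (const [id, key] of keys) {
    const value = aggregatableValues.get(id);
    if (value === undefined) continue; // keys without a declared value contribute nothing
    contributions.push({ key, value });
  }
  return contributions;
}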

12.7. Can source create aggregatable contributions

To check if an attribution source can create aggregatable contributions given an aggregatable report report and an attribution source sourceToAttribute , run the following steps:

  1. Let remainingAggregatableBudget be allowed aggregatable budget per source minus sourceToAttribute ’s aggregatable budget consumed .

  2. Assert : remainingAggregatableBudget is greater than or equal to 0.

  3. If report ’s required aggregatable budget is greater than remainingAggregatableBudget , return false.

  4. Return true.
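
A minimal sketch of the budget check follows, assuming an allowed aggregatable budget per source of 65536; the concrete limit is defined elsewhere in this specification, so treat the constant below as an assumption.

const ALLOWED_AGGREGATABLE_BUDGET_PER_SOURCE = 65536; // assumed value

function canCreateAggregatableContributions(
  requiredBudget: number,        // total value of the report's contributions
  budgetAlreadyConsumed: number  // source's aggregatable budget consumed
): boolean {
  const remaining = ALLOWED_AGGREGATABLE_BUDGET_PER_SOURCE - budgetAlreadyConsumed;
  return requiredBudget <= remaining;
}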

12.8. Obtaining debug data on trigger registration

To obtain debug data body on trigger registration given a trigger debug data type dataType , an attribution trigger trigger , an optional attribution source sourceToAttribute , and an optional attribution report report :

  1. Let body be a new empty map .

  2. If dataType is:

    " trigger-attributions-per-source-destination-limit "

    Set body [" limit "] to the user agent’s max attributions per rate-limit window , serialized .

    " trigger-reporting-origin-limit "

    Set body [" limit "] to the user agent’s max attribution reporting origins per rate-limit window , serialized .

    " trigger-event-storage-limit "

    Set body [" limit "] to max event-level reports per attribution destination , serialized .

    " trigger-aggregate-storage-limit "

    Set body [" limit "] to max aggregatable reports per attribution destination , serialized .

    " trigger-aggregate-insufficient-budget "

    Set body [" limit "] to allowed aggregatable budget per source , serialized .

    " trigger-aggregate-excessive-reports "

    Set body [" limit "] to max aggregatable reports per source , serialized .

    " trigger-event-low-priority "
    " trigger-event-excessive-reports "
    1. Assert : report is not null and is an event-level report .

    2. Return the result of running obtain an event-level report body with report .

  3. Set body [" attribution_destination "] to trigger ’s attribution destination , serialized .

  4. If trigger ’s debug key is not null, set body [" trigger_debug_key "] to trigger ’s debug key , serialized .

  5. If sourceToAttribute is not null:

    1. Set body [" source_event_id "] to sourceToAttribute ’s event ID , serialized .

    2. Set body [" source_site "] to sourceToAttribute ’s source site , serialized .

    3. If sourceToAttribute ’s debug key is not null, set body [" source_debug_key "] to sourceToAttribute ’s debug key , serialized .

  6. Return body .

To obtain debug data on trigger registration given a trigger debug data type dataType , an attribution trigger trigger , an optional attribution source sourceToAttribute , and an optional attribution report report :

  1. If trigger ’s debug reporting enabled is false, return null.

  2. If the result of running check if cookie-based debugging is allowed with trigger ’s reporting origin is blocked , return null.

  3. Let data be a new attribution debug data with the items:

    data type

    dataType .

    body

    The result of running obtain debug data body on trigger registration with dataType , trigger , sourceToAttribute and report .

  4. Return data .

12.9. Triggering event-level attribution

To trigger event-level attribution given an attribution trigger trigger , an attribution source sourceToAttribute , and an attribution rate-limit record rateLimitRecord , run the following steps:

  1. If trigger ’s event-level trigger configurations is empty , return the triggering result (" dropped ", null).

  2. If sourceToAttribute ’s randomized response is not null and is not empty :

    1. Assert : sourceToAttribute ’s event-level attributable is false.

    2. Let debugData be the result of running obtain debug data on trigger registration with " trigger-event-noise ", trigger , sourceToAttribute and report set to null.

    3. Return the triggering result (" dropped ", debugData ).

  3. If sourceToAttribute ’s event report window time is less than trigger ’s trigger time :

    1. Let debugData be the result of running obtain debug data on trigger registration with " trigger-event-report-window-passed ", trigger , sourceToAttribute and report set to null.

    2. Return the triggering result (" dropped ", debugData ).

  4. Let matchedConfig be null.

  5. For each event-level trigger configuration config of trigger ’s event-level trigger configurations :

    1. If the result of running match an attribution source’s filter data against filters and negated filters with sourceToAttribute , config ’s filters , and config ’s negated filters is true:

      1. Set matchedConfig to config .

      2. Break .

  6. If matchedConfig is null:

    1. Let debugData be the result of running obtain debug data on trigger registration with " trigger-event-no-matching-configurations ", trigger , sourceToAttribute and report set to null.

    2. Return the triggering result (" dropped ", debugData ).

  7. If matchedConfig ’s dedup key is not null and sourceToAttribute ’s dedup keys contains it:

    1. Let debugData be the result of running obtain debug data on trigger registration with " trigger-event-deduplicated ", trigger , sourceToAttribute and report set to null.

    2. Return the triggering result (" dropped ", debugData ).

  8. Let numMatchingReports be the number of entries in the event-level report cache whose attribution destinations contains trigger ’s attribution destination .

  9. If numMatchingReports is greater than or equal to the user agent’s max event-level reports per attribution destination :

    1. Let debugData be the result of running obtain debug data on trigger registration with " trigger-event-storage-limit ", trigger , sourceToAttribute and report set to null.

    2. Return the triggering result (" dropped ", debugData ).

  10. If the result of running should attribution be blocked by rate limits with trigger , sourceToAttribute , and rateLimitRecord is not null, return it.

  11. Let report be the result of running obtain an event-level report with sourceToAttribute , trigger , and matchedConfig .

  12. If sourceToAttribute ’s event-level attributable value is false:

    1. Let debugData be the result of running obtain debug data on trigger registration with " trigger-event-excessive-reports ", trigger , sourceToAttribute and report .

    2. Return the triggering result (" dropped ", debugData ).

  13. Let maxAttributionsPerSource be the user agent’s max attributions per navigation source .

  14. If sourceToAttribute ’s source type is " event ", set maxAttributionsPerSource to the user agent’s max attributions per event source .

  15. If sourceToAttribute ’s number of event-level reports value is equal to maxAttributionsPerSource , then:

    1. Let matchingReports be all entries in the event-level report cache where all of the following are true:

    2. If matchingReports is empty:

      1. Set sourceToAttribute ’s event-level attributable value to false.

      2. Let debugData be the result of running obtain debug data on trigger registration with " trigger-event-excessive-reports ", trigger , sourceToAttribute and report .

      3. Return the triggering result (" dropped ", debugData ).

    3. Set matchingReports to the result of sorting matchingReports in ascending order, with a being less than b if any of the following are true:

    4. Let lowestPriorityReport be the first item in matchingReports .

    5. If report ’s trigger priority is less than or equal to lowestPriorityReport ’s trigger priority :

      1. Let debugData be the result of running obtain debug data on trigger registration with " trigger-event-low-priority ", trigger , sourceToAttribute and report .

      2. Return the triggering result (" dropped ", debugData ).

    6. Remove lowestPriorityReport from the event-level report cache .

    7. Decrement sourceToAttribute ’s number of event-level reports value by 1.

  16. Let triggeringStatus be " attributed ".

  17. Let debugData be null.

  18. If sourceToAttribute ’s randomized response is:

    null

    Append report to the event-level report cache .

    not null
    1. Set triggeringStatus to " noised ".

    2. Set debugData to the result of running obtain debug data on trigger registration with " trigger-event-noise ", trigger , sourceToAttribute and report set to null.

  19. Increment sourceToAttribute ’s number of event-level reports value by 1.

  20. If matchedConfig ’s dedup key is not null, append it to sourceToAttribute ’s dedup keys .

  21. If report ’s source debug key is not null and report ’s trigger debug key is not null, queue a task to attempt to deliver a debug report with report .

  22. Return the triggering result ( triggeringStatus , debugData ).
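
Steps 13 to 15 above implement priority-based replacement: once a source has reached its maximum number of event-level reports, a new report can only be stored by evicting a pending report with strictly lower trigger priority . A non-normative sketch of that decision (all names are illustrative):

interface PendingReport {
  triggerPriority: number;
  // other report fields omitted
}

function decideReportReplacement(
  numReportsForSource: number,
  maxAttributionsPerSource: number,
  pendingReportsForSource: PendingReport[], // unsent reports attributed to this source
  newReportPriority: number
): "store" | "drop-excessive" | "drop-low-priority" {
  if (numReportsForSource < maxAttributionsPerSource) return "store";
  if (pendingReportsForSource.length === 0) return "drop-excessive"; // source becomes unattributable
  const lowest = [...pendingReportsForSource]
    .sort((a, b) => a.triggerPriority - b.triggerPriority)[0];
  if (newReportPriority <= lowest.triggerPriority) return "drop-low-priority";
  return "store"; // caller first evicts `lowest` from the event-level report cache
}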

12.10. Triggering aggregatable attribution

To trigger aggregatable attribution given an attribution trigger trigger , an attribution source sourceToAttribute , and an attribution rate-limit record rateLimitRecord , run the following steps:

  1. If the result of running check if an attribution trigger contains aggregatable data with trigger is false, return the triggering result (" dropped ", null).

  2. If sourceToAttribute ’s aggregatable report window time is less than trigger ’s trigger time :

    1. Let debugData be the result of running obtain debug data on trigger registration with " trigger-aggregate-report-window-passed ", trigger , sourceToAttribute and report set to null.

    2. Return the triggering result (" dropped ", debugData ).

  3. Let matchedDedupKey be null.

  4. For each aggregatable dedup key aggregatableDedupKey of trigger ’s aggregatable dedup keys :

    1. If the result of running match an attribution source’s filter data against filters and negated filters with sourceToAttribute , aggregatableDedupKey ’s filters , and aggregatableDedupKey ’s negated filters is true:

      1. Set matchedDedupKey to aggregatableDedupKey ’s dedup key .

      2. Break .

  5. If matchedDedupKey is not null and sourceToAttribute ’s aggregatable dedup keys contains it:

    1. Let debugData be the result of running obtain debug data on trigger registration with " trigger-aggregate-deduplicated ", trigger , sourceToAttribute and report set to null.

    2. Return the triggering result (" dropped ", debugData ).

  6. Let report be the result of running obtain an aggregatable report with sourceToAttribute and trigger .

  7. If report ’s contributions is empty :

    1. Let debugData be the result of running obtain debug data on trigger registration with " trigger-aggregate-no-contributions ", trigger , sourceToAttribute and report set to null.

    2. Return the triggering result (" dropped ", debugData ).

  8. Let numMatchingReports be the number of entries in the aggregatable report cache whose effective attribution destination equals trigger ’s attribution destination and whose is null report is false.

  9. If numMatchingReports is greater than or equal to the user agent’s max aggregatable reports per attribution destination :

    1. Let debugData be the result of running obtain debug data on trigger registration with " trigger-aggregate-storage-limit ", trigger , sourceToAttribute and report set to null.

    2. Return the triggering result (" dropped ", debugData ).

  10. If the result of running should attribution be blocked by rate limits with trigger , sourceToAttribute , and rateLimitRecord is not null, return it.

  11. If sourceToAttribute ’s number of aggregatable reports value is equal to max aggregatable reports per source , then:

    1. Let debugData be the result of running obtain debug data on trigger registration with " trigger-aggregate-excessive-reports ", trigger , sourceToAttribute , and report .

    2. Return the triggering result (" dropped ", debugData ).

  12. If the result of running check if an attribution source can create aggregatable contributions with report and sourceToAttribute is false:

    1. Let debugData be the result of running obtain debug data on trigger registration with " trigger-aggregate-insufficient-budget ", trigger , sourceToAttribute and report set to null.

    2. Return the triggering result (" dropped ", debugData ).

  13. Add report to the aggregatable report cache .

  14. Increment sourceToAttribute ’s number of aggregatable reports value by 1.

  15. Increment sourceToAttribute ’s aggregatable budget consumed value by report ’s required aggregatable budget .

  16. If matchedDedupKey is not null, append it to sourceToAttribute ’s aggregatable dedup keys .

  17. Run generate null reports and assign private state tokens with trigger and report .

  18. If report ’s source debug key is not null and report ’s trigger debug key is not null, queue a task to attempt to deliver a debug report with report .

  19. Return the triggering result (" attributed ", null).

12.11. Triggering attribution

To obtain and deliver a debug report on trigger registration given a trigger debug data type dataType , an attribution trigger trigger and an optional attribution source sourceToAttribute :

  1. Let debugData be the result of running obtain debug data on trigger registration with dataType , trigger , sourceToAttribute and report set to null.

  2. If debugData is null, return.

  3. Run obtain and deliver a debug report with « debugData » and trigger ’s reporting origin .

To find matching sources given an attribution trigger trigger :

  1. Let matchingSources be a new empty list .

  2. For each source of the attribution source cache :

    1. If source ’s attribution destinations does not contain trigger ’s attribution destination , continue .

    2. If source ’s reporting origin and trigger ’s reporting origin are not same origin , continue .

    3. If source ’s expiry time is less than or equal to trigger ’s trigger time , continue .

    4. Append source to matchingSources .

  3. Set matchingSources to the result of sorting matchingSources in descending order, with a being less than b if any of the following are true:

  4. Return matchingSources .

To check if an attribution trigger contains aggregatable data given an attribution trigger trigger , run the following steps:

  1. If trigger ’s aggregatable trigger data is not empty , return true.

  2. If trigger ’s aggregatable values is not empty , return true.

  3. Return false.

To trigger attribution given an attribution trigger trigger , run the following steps:

  1. Let hasAggregatableData be the result of checking if an attribution trigger contains aggregatable data with trigger .

  2. If trigger ’s event-level trigger configurations is empty and hasAggregatableData is false, return.

  3. Let matchingSources be the result of running find matching sources with trigger .

  4. If matchingSources is empty :

    1. Run obtain and deliver a debug report on trigger registration with " trigger-no-matching-source ", trigger and sourceToAttribute set to null.

    2. If hasAggregatableData is true, then run generate null reports and assign private state tokens with trigger and report set to null.

    3. Return.

  5. Let sourceToAttribute be matchingSources [0].

  6. If the result of running match an attribution source’s filter data against filters and negated filters with sourceToAttribute , trigger ’s filters , and trigger ’s negated filters is false:

    1. Run obtain and deliver a debug report on trigger registration with " trigger-no-matching-filter-data ", trigger , and sourceToAttribute .

    2. If hasAggregatableData is true, then run generate null reports and assign private state tokens with trigger and report set to null.

    3. Return.

  7. Let rateLimitRecord be a new attribution rate-limit record with the items:

    scope

    " attribution "

    source site

    sourceToAttribute ’s source site

    attribution destination

    trigger ’s attribution destination

    reporting origin

    sourceToAttribute ’s reporting origin

    time

    sourceToAttribute ’s source time

    expiry time

    null

  8. Let eventLevelResult be the result of running trigger event-level attribution with trigger , sourceToAttribute , and rateLimitRecord .

  9. Let aggregatableResult be the result of running trigger aggregatable attribution with trigger , sourceToAttribute , and rateLimitRecord .

  10. Let eventLevelDebugData be eventLevelResult ’s debug data .

  11. Let aggregatableDebugData be aggregatableResult ’s debug data .

  12. Let debugDataList be an empty list .

  13. If eventLevelDebugData is not null, then append eventLevelDebugData to debugDataList .

  14. If aggregatableDebugData is not null:

    1. If debugDataList is empty or aggregatableDebugData ’s data type does not equal eventLevelDebugData ’s data type , then append aggregatableDebugData to debugDataList .

  15. If debugDataList is not empty , then run obtain and deliver a debug report with debugDataList and trigger ’s reporting origin .

  16. If hasAggregatableData and aggregatableResult ’s status is " dropped ", run generate null reports and assign private state tokens with trigger and report set to null.

  17. If both eventLevelResult ’s status and aggregatableResult ’s status are " dropped ", return.

  18. Remove sourceToAttribute from matchingSources .

  19. For each item of matchingSources :

    1. Remove item from the attribution source cache .

  20. If neither eventLevelResult ’s status nor aggregatableResult ’s status is " attributed ", return.

  21. Append rateLimitRecord to the attribution rate-limit cache .

  22. Remove all attribution rate-limit records entry from the attribution rate-limit cache if the result of running can attribution rate-limit record be removed with entry and trigger ’s trigger time is true.

12.12. Establishing report delivery time

To obtain early deadlines given a source type sourceType :

  1. If sourceType is " event ", return «».

  2. Return « 2 days, 7 days ».

To obtain effective deadlines given a source type sourceType and a duration eventReportWindow :

  1. Let deadlines be the result of obtaining early deadlines given sourceType .

  2. Remove all elements in deadlines that are greater than eventReportWindow .

  3. Append eventReportWindow to deadlines .

  4. Return deadlines .

To obtain the number of report windows given a source type sourceType and a duration eventReportWindow :

  1. Let deadlines be the result of obtaining effective deadlines for sourceType and eventReportWindow .

  2. Return the size of deadlines .

To obtain a report time from deadline given a moment sourceTime and a duration deadline :

  1. Return sourceTime + deadline + 1 hour.

To obtain the report time at a window given an attribution source source and a non-negative integer window :

  1. Let deadlines be the result of running obtain effective deadlines with source ’s source type and source ’s event report window .

  2. Assert : deadlines [ window ] exists .

  3. Let deadline be deadlines [ window ].

  4. Return the result of running obtain a report time from deadline with source ’s source time and deadline .

To obtain an event-level report delivery time given an attribution source source and a moment triggerTime :

  1. Let deadlines be the result of obtaining effective deadlines with source ’s source type and source ’s event report window .

  2. Let deadlineToUse be the last item in deadlines .

  3. For each deadline of deadlines :

    1. Let time be source ’s source time + deadline .

    2. If time is less than triggerTime , continue .

    3. Set deadlineToUse to deadline .

    4. Break .

  4. Return the result of running obtain a report time from deadline with source ’s source time and deadlineToUse .

To obtain an aggregatable report delivery time given a moment triggerTime , perform the following steps. They return a moment .

  1. Let r be a random double between 0 (inclusive) and 1 (exclusive) with uniform probability.

  2. Return triggerTime + min aggregatable report delay + r * randomized aggregatable report delay .
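
The following non-normative TypeScript sketch of §12.12 expresses the deadlines in milliseconds; the 2-day and 7-day early deadlines and the 1-hour offset come directly from the algorithms above, while the function and parameter names are illustrative.

const HOUR = 60 * 60 * 1000;
const DAY = 24 * HOUR;

function obtainEarlyDeadlines(sourceType: "navigation" | "event"): number[] {
  return sourceType === "event" ? [] : [2 * DAY, 7 * DAY];
}

function obtainEffectiveDeadlines(sourceType: "navigation" | "event", eventReportWindow: number): number[] {
  const deadlines = obtainEarlyDeadlines(sourceType).filter(d => d <= eventReportWindow);
  deadlines.push(eventReportWindow);
  return deadlines;
}

function obtainEventLevelReportDeliveryTime(
  sourceType: "navigation" | "event",
  sourceTime: number,
  eventReportWindow: number,
  triggerTime: number
): number {
  const deadlines = obtainEffectiveDeadlines(sourceType, eventReportWindow);
  let deadlineToUse = deadlines[deadlines.length - 1];
  for (const deadline of deadlines) {
    if (sourceTime + deadline < triggerTime) continue; // this window has already passed
    deadlineToUse = deadline;
    break;
  }
  return sourceTime + deadlineToUse + HOUR; // obtain a report time from deadline
}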

12.13. Obtaining an event-level report

To obtain an event-level report given an attribution source source , an attribution trigger trigger , and an event-level trigger configuration config :

  1. Let triggerDataCardinality be the user agent’s navigation-source trigger data cardinality .

  2. If source ’s source type is " event ", set triggerDataCardinality to the user agent’s event-source trigger data cardinality .

  3. Let reportTime be the result of running obtain an event-level report delivery time with source and trigger ’s trigger time .

  4. Let report be a new event-level report struct whose items are:

    event ID

    source ’s event ID .

    trigger data

    The remainder when dividing config ’s trigger data by triggerDataCardinality .

    randomized trigger rate

    source ’s randomized trigger rate .

    reporting origin

    source ’s reporting origin .

    attribution destinations

    source ’s attribution destinations .

    report time

    reportTime

    original report time

    reportTime

    trigger priority

    config ’s priority .

    trigger time

    trigger ’s trigger time .

    source identifier

    source ’s source identifier .

    report id

    The result of generating a random UUID .

    source debug key

    source ’s debug key .

    trigger debug key

    trigger ’s debug key .

  5. Return report .

12.14. Obtaining an aggregatable report’s required budget

An aggregatable report report ’s required aggregatable budget is the total value of report ’s contributions .

12.15. Obtaining an aggregatable report

To obtain an aggregatable report given an attribution source source and an attribution trigger trigger :

  1. Let reportTime be the result of running obtain an aggregatable report delivery time with trigger ’s trigger time .

  2. Let report be a new aggregatable report struct whose items are:

    reporting origin

    source ’s reporting origin .

    effective attribution destination

    trigger ’s attribution destination .

    source time

    source ’s source time .

    original report time

    reportTime .

    report time

    reportTime .

    report id

    The result of generating a random UUID .

    source debug key

    source ’s debug key .

    trigger debug key

    trigger ’s debug key .

    contributions

    The result of running create aggregatable contributions with source and trigger .

    serialized private state token

    null.

    aggregation coordinator

    trigger ’s aggregation coordinator .

    source registration time configuration

    trigger ’s aggregatable source registration time configuration .

  3. Return report .

12.16. Generating randomized null reports

To obtain a null report given an attribution trigger trigger and a moment sourceTime :

  1. Let reportTime be the result of running obtain an aggregatable report delivery time with trigger ’s trigger time .

  2. Let contribution be a new aggregatable contribution with the items:

    key

    0

    value

    0

  3. Let report be a new aggregatable report struct whose items are:

    reporting origin

    trigger ’s reporting origin

    effective attribution destination

    trigger ’s attribution destination

    source time

    sourceTime

    original report time

    reportTime

    report time

    reportTime

    report id

    The result of generating a random UUID

    source debug key

    null

    trigger debug key

    trigger ’s debug key

    contributions

    « contribution »

    serialized private state token

    null

    aggregation coordinator

    trigger ’s aggregation coordinator

    source registration time configuration

    trigger ’s aggregatable source registration time configuration

    is null report

    true

  4. Return report .

To obtain rounded source time given a moment sourceTime , return sourceTime in seconds since the UNIX epoch, rounded down to a multiple of a whole day (86400 seconds).

To determine if a randomized null report is generated given a double randomPickRate :

  1. Assert : randomPickRate is between 0 and 1 (both inclusive).

  2. Let r be a random double between 0 (inclusive) and 1 (exclusive) with uniform probability.

  3. If r is less than randomPickRate , return true.

  4. Otherwise, return false.
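
A minimal sketch of the two helpers above (times are in seconds since the UNIX epoch; names are illustrative):

const SECONDS_PER_DAY = 86400;

function obtainRoundedSourceTime(sourceTimeSeconds: number): number {
  // Round down to a whole day.
  return Math.floor(sourceTimeSeconds / SECONDS_PER_DAY) * SECONDS_PER_DAY;
}

function isRandomizedNullReportGenerated(randomPickRate: number): boolean {
  // randomPickRate is expected to be in [0, 1].
  return Math.random() < randomPickRate;
}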

To generate null reports given an attribution trigger trigger and an optional aggregatable report report defaulting to null:

  1. Let nullReports be a new empty list .

  2. If trigger ’s aggregatable source registration time configuration is " exclude ":

    1. If report is null and the result of determining if a randomized null report is generated with randomized null report rate excluding source registration time is true:

      1. Let nullReport be the result of obtaining a null report with trigger and trigger ’s trigger time .

      2. Append nullReport to the aggregatable report cache .

      3. Append nullReport to nullReports .

  3. Otherwise:

    1. Let maxSourceExpiry be max source expiry .

    2. Round maxSourceExpiry away from zero to the nearest day (86400 seconds).

    3. Let roundedAttributedSourceTime be null.

    4. If report is not null, set roundedAttributedSourceTime to the result of obtaining rounded source time with report ’s source time .

    5. For each integer day between 0 (inclusive) and the number of days in maxSourceExpiry (inclusive):

      1. Let fakeSourceTime be trigger ’s trigger time - day days.

      2. If roundedAttributedSourceTime is not null and equals the result of obtaining rounded source time with fakeSourceTime :

        1. Continue .

      3. If the result of determining if a randomized null report is generated with randomized null report rate including source registration time is true:

        1. Let nullReport be the result of obtaining a null report with trigger and fakeSourceTime .

        2. Append nullReport to the aggregatable report cache .

        3. Append nullReport to nullReports .

  4. Return nullReports .

To shuffle a list list , reorder list ’s elements such that each possible permutation has equal probability of appearance.

To assign private state tokens given a list of aggregatable reports reports and an attribution trigger trigger :

  1. If reports is empty , return.

  2. Let privateStateTokens be trigger ’s serialized private state tokens .

  3. If privateStateTokens is empty , return.

  4. Shuffle reports .

  5. Shuffle privateStateTokens .

  6. Let n be the minimum of reports ’s size and privateStateTokens ’s size .

  7. For each integer i between 0 (inclusive) and n (exclusive):

    1. Set reports [i]'s serialized private state token to privateStateTokens [i].

Assign the ID associated with the private state token to report ID .

To generate null reports and assign private state tokens given an attribution trigger trigger and an optional aggregatable report report defaulting to null:

  1. Let reports be the result of generating null reports with trigger and report .

  2. If report is not null:

    1. Append report to reports .

  3. Run assign private state tokens with reports and trigger .

13. Report delivery

The user agent MUST periodically iterate over its event-level report cache and aggregatable report cache and run queue a report for delivery on each item.

To queue a report for delivery given an attribution report report and an environment settings objects context , run the following steps in parallel :

  1. If report ’s delivered value is true, return.

  2. Set report ’s delivered value to true.

  3. If report ’s report time is less than context ’s current wall time , add an implementation-defined random non-negative duration to report ’s report time .

    Note: On startup, it is possible the user agent will need to send many reports whose report times passed while the browser was closed. Adding random delay prevents temporal joining of reports from different source origin s.

  4. Wait until context ’s current wall time is equal to report ’s report time .

  5. Optionally, wait a further implementation-defined duration .

    Note: This is intended to allow user agents to optimize device resource usage.

  6. Run attempt to deliver a report with report .

13.1. Encode an unsigned k-bit integer

To encode an unsigned k-bit integer , represent it as a big-endian byte sequence of length k / 8, left padding with zero as necessary.
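
A non-normative sketch of this encoding (the function name is illustrative):

function encodeUnsignedKBitInteger(value: bigint, k: number): Uint8Array {
  const bytes = new Uint8Array(k / 8); // big-endian, left-padded with zeros
  for (let i = bytes.length - 1; i >= 0; i--) {
    bytes[i] = Number(value & 0xffn);
    value >>= 8n;
  }
  return bytes;
}
// e.g. encodeUnsignedKBitInteger(1n, 128) yields 15 zero bytes followed by 0x01.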

13.2. Obtaining an aggregatable report’s debug mode

An aggregatable report report ’s debug mode is the result of running the following steps:

  1. If report ’s source debug key is null, return disabled .

  2. If report ’s trigger debug key is null, return disabled .

  3. Return enabled .

13.3. Obtaining an aggregatable report’s shared info

An aggregatable report report ’s shared info is the result of running the following steps:

  1. Let reportingOrigin be report ’s reporting origin .

  2. Let sharedInfo be an ordered map of the following key/value pairs:

    " api "

    " attribution-reporting "

    " attribution_destination "

    report ’s effective attribution destination , serialized

    " report_id "

    report ’s report ID

    " reporting_origin "

    reportingOrigin , serialized

    " scheduled_report_time "

    report ’s original report time in seconds since the UNIX epoch, serialized

    " version "

    A string , API version.

  3. If report ’s debug mode is enabled , set sharedInfo [" debug_mode "] to " enabled ".

  4. If report ’s source registration time configuration is " include ", set sharedInfo [" source_registration_time "] to the result of obtaining rounded source time with report ’s source time , serialized .

  5. Return the string resulting from executing serialize an infra value to a json string on sharedInfo .
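
For example, a report’s shared info might serialize to the following JSON string (all values, including the version, are illustrative):

const sharedInfo = JSON.stringify({
  "api": "attribution-reporting",
  "attribution_destination": "https://destination.example",
  "report_id": "21abd97f-73e8-4b88-9389-a9fee6abda5e",
  "reporting_origin": "https://reporter.example",
  "scheduled_report_time": "1692000000",
  "version": "0.1"
});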

13.4. Obtaining an aggregatable report’s aggregation service payloads

To obtain the public key for encryption given an aggregation coordinator aggregationCoordinator , asynchronously return a user-agent-determined public key or an error in the event that the user agent failed to obtain the public key.

Note: The user agent might enforce weekly key rotation. If there are multiple keys, the user agent might independently pick a key uniformly at random for every encryption operation. The key should be uniquely identifiable.

An aggregatable report report ’s plaintext payload is the result of running the following steps:

  1. Let payloadData be a new empty list .

  2. For each contribution of report ’s contributions :

    1. Let contributionData be a map of the following key/value pairs:

      " bucket "

      contribution ’s key , encoded

      " value "

      contribution ’s value , encoded

    2. Append contributionData to payloadData .

  3. Let payload be a map of the following key/value pairs:

    " data "

    payloadData

    " operation "

    " histogram "

  4. Return the byte sequence resulting from CBOR encoding payload .

To obtain the encrypted payload given an aggregatable report report and a public key pkR , run the following steps:

  1. Let plaintext be report ’s plaintext payload .

  2. Let encodedSharedInfo be report ’s shared info , encoded .

  3. Let info be the concatenation of «" aggregation_service ", encodedSharedInfo ».

  4. Set up HPKE sender’s context with pkR and info .

  5. Return the byte sequence or an error resulting from encrypting plaintext with the sender’s context .

To obtain the aggregation service payloads given an aggregatable report report , run the following steps:

  1. Let pkR be the result of running obtain the public key for encryption with report ’s aggregation coordinator .

  2. If pkR is an error, return pkR .

  3. Let encryptedPayload be the result of running obtain the encrypted payload with report and pkR .

  4. If encryptedPayload is an error, return encryptedPayload .

  5. Let aggregationServicePayloads be a new empty list .

  6. Let aggregationServicePayload be a map of the following key/value pairs:

    " payload "

    encryptedPayload , base64 encoded

    " key_id "

    A string identifying pkR

  7. If report ’s debug mode is enabled , set aggregationServicePayload [" debug_cleartext_payload "] to report ’s plaintext payload , base64 encoded .

  8. Append aggregationServicePayload to aggregationServicePayloads .

  9. Return aggregationServicePayloads .

13.5. Serialize attribution report body

To obtain an event-level report body given an attribution report report , run the following steps:

  1. Let data be a map of the following key/value pairs:

    " attribution_destination "

    report ’s attribution destinations , serialized .

    " randomized_trigger_rate "

    report ’s randomized trigger rate

    " source_type "

    report ’s source type

    " source_event_id "

    report ’s event ID , serialized

    " trigger_data "

    report ’s trigger data , serialized

    " report_id "

    report ’s report ID

    " scheduled_report_time "

    report ’s original report time in seconds since the UNIX epoch, serialized

  2. If report ’s source debug key is not null, set data [" source_debug_key "] to report ’s source debug key , serialized .

  3. If report ’s trigger debug key is not null, set data [" trigger_debug_key "] to report ’s trigger debug key , serialized .

  4. Return data .
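
For example, an event-level report body might serialize as follows (all values are illustrative):

const eventLevelReportBody = {
  "attribution_destination": "https://destination.example",
  "randomized_trigger_rate": 0.0024263,
  "source_type": "navigation",
  "source_event_id": "123456789",
  "trigger_data": "3",
  "report_id": "7dd5b017-9863-47f1-9d7f-d50e1b7d0c35",
  "scheduled_report_time": "1692172800"
};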

To serialize an event-level report report , run the following steps:

  1. Let data be the result of running obtain an event-level report body with report .

  2. Return the byte sequence resulting from executing serialize an infra value to JSON bytes on data .

To serialize an aggregatable report report , run the following steps:

  1. Assert : report ’s effective attribution destination is not the opaque origin .

  2. Let aggregationServicePayloads be the result of running obtain the aggregation service payloads with report .

  3. If aggregationServicePayloads is an error, return aggregationServicePayloads .

  4. Let data be a map of the following key/value pairs:

    " shared_info "

    report ’s shared info

    " aggregation_service_payloads "

    aggregationServicePayloads

    " aggregation_coordinator_identifier "

    report ’s aggregation coordinator

  5. If report ’s source debug key is not null, set data [" source_debug_key "] to report ’s source debug key , serialized .

  6. If report ’s trigger debug key is not null, set data [" trigger_debug_key "] to report ’s trigger debug key , serialized .

  7. Return the byte sequence resulting from executing serialize an infra value to JSON bytes on data .

To serialize an attribution report report , run the following steps:

  1. If report is an:

    event-level report
    Return the result of running serialize an event-level report with report .
    aggregatable report
    Return the result of running serialize an aggregatable report with report .

Note: The inclusion of " report_id " in the report body is intended to allow the report recipient to perform deduplication and prevent double counting, in the event that the user agent retries reports on failure. To prevent the report recipient from learning additional information about whether a user is online, retries might be limited in number and subject to random delays.

13.6. Serialize attribution debug report body

To serialize an attribution debug report report , run the following steps:

  1. Let collection be an empty list .

  2. For each debugData of report ’s data :

    1. Let data be a map of the following key/value pairs:

      " type "

      debugData ’s data type

      " body "

      debugData ’s body

    2. Append data to collection .

  3. Return the byte sequence resulting from executing serialize an Infra value to JSON bytes on collection .

13.7. Get report request URL

To generate a report URL given a suitable origin reportingOrigin and a list of strings path :

  1. Let reportUrl be a new URL record.

  2. Set reportUrl ’s scheme to reportingOrigin ’s scheme .

  3. Set reportUrl ’s host to reportingOrigin ’s host .

  4. Set reportUrl ’s port to reportingOrigin ’s port .

  5. Let fullPath be «" .well-known ", " attribution-reporting "».

  6. Append path to fullPath .

  7. Set reportUrl ’s path to fullPath .

  8. Return reportUrl .

To generate an attribution report URL given an attribution report report and an optional boolean isDebugReport (default false):

  1. Let path be an empty list .

  2. If isDebugReport is true, append " debug " to path .

  3. If report is an:

    event-level report
    Append " report-event-attribution " to path .
    aggregatable report
    Append " report-aggregate-attribution " to path .
  4. Return the result of running generate a report URL with report ’s reporting origin and path .

To generate an attribution debug report URL given an attribution debug report report :

  1. Let path be «" debug ", " verbose "».

  2. Return the result of running generate a report URL with report ’s reporting origin and path .
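
For example, given a reporting origin of https://reporter.example , an event-level report is sent to https://reporter.example/.well-known/attribution-reporting/report-event-attribution , its debug variant to https://reporter.example/.well-known/attribution-reporting/debug/report-event-attribution , and a verbose debug report to https://reporter.example/.well-known/attribution-reporting/debug/verbose .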

13.8. Creating a report request

To create a report request given a URL url , a byte sequence body , and a header list newHeaders (defaults to an empty list ):

  1. Let headers be a new header list containing a header named " Content-Type " whose value is " application/json ".

  2. For each header in newHeaders :

    1. Append header to headers .

  3. Let request be a new request with the following properties:

    method

    " POST "

    URL

    url

    header list

    headers

    body

    A body whose source is body .

    referrer

    " no-referrer "

    client

    null

    window

    " no-window "

    service-workers mode

    " none "

    initiator

    ""

    mode

    " cors "

    unsafe-request flag

    set

    credentials mode

    " omit "

    cache mode

    " no-store "

  4. Return request .

13.9. Get report request headers

To generate attribution report headers given an attribution report report :

  1. Let newHeaders be a new header list .

  2. If report is an aggregatable report :

    1. If report ’s serialized private state token is not null, append a new header named " Sec-Attribution-Reporting-Private-State-Token " to newHeaders whose value is report ’s serialized private state token .

  3. Return newHeaders .

13.10. Issuing a report request

This algorithm constructs a request and attempts to deliver it to a suitable origin .

To remove a report from the cache given an attribution report report :

  1. If report is an:

    event-level report
    Queue a task to remove report from the event-level report cache .
    aggregatable report
    Queue a task to remove report from the aggregatable report cache .

To attempt to deliver a report given an attribution report report , run the following steps:

  1. Let url be the result of executing generate an attribution report URL on report .

  2. Let data be the result of executing serialize an attribution report on report .

  3. If data is an error, run remove a report from the cache with report and return.

  4. Let headers be the result of executing generate attribution report headers on report .

  5. Let request be the result of executing create a report request on url , data , and headers .

  6. Queue a task to fetch request with processResponse being remove a report from the cache with report .

This fetch should use a network partition key for an opaque origin. [Issue #220]

A user agent MAY retry this algorithm in the event that there was an error.

13.11. Issuing a debug report request

To attempt to deliver a debug report given an attribution report report :

  1. Let url be the result of executing generate an attribution report URL on report with isDebugReport set to true.

  2. Let data be the result of executing serialize an attribution report on report .

  3. If data is an error, return.

  4. Let headers be the result of executing generate attribution report headers on report .

  5. Let request be the result of executing create a report request on url , data , and headers .

  6. Fetch request .

13.12. Issuing a verbose debug request

To attempt to deliver a verbose debug report given an attribution debug report report :

  1. Let url be the result of executing generate an attribution debug report URL on report .

  2. Let data be the result of executing serialize an attribution debug report on report .

  3. Let request be the result of executing create a report request on url and data .

  4. Fetch request .

This fetch should use a network partition key for an opaque origin. [Issue #220]

A user agent MAY retry this algorithm in the event that there was an error.

14. Cross App and Web Algorithms

To get OS-registration URLs from a header list given a header list headers and a header name name :

  1. Let values be the result of getting name from headers with a type of " list ".

  2. If values is not a list , return null.

  3. Let urls be a new list .

  4. For each value of values :

    1. If value is not a string , continue .

    2. Let url be the result of running the URL parser on value .

    3. If url is failure or null, continue .

    4. Append url to urls .

  5. Return urls .

To get supported registrars :

  1. Let supportedRegistrars be an empty ordered set .

  2. If the user agent supports web registrations, append " web " to supportedRegistrars .

  3. If the user agent supports OS registrations, append " os " to supportedRegistrars .

  4. Return supportedRegistrars .

" Attribution-Reporting-Support " is a Dictionary Structured Header set on a request that indicates which registrars, if any, the corresponding response can use. Its values are not specified and its allowed keys are the registrars .

To set an OS-support header given a header list headers :

  1. Let supportedRegistrars be the result of getting supported registrars .

  2. Let dict be an ordered map .

  3. For each registrar of supportedRegistrars :

    1. Set dict [ registrar ] to true.

  4. Set a structured field value given (" Attribution-Reporting-Support ", dict ) in headers .
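
For example, a user agent that supports both registrars would send a request header of the following form; per structured-field serialization, dictionary members whose value is Boolean true serialize as bare keys:

Attribution-Reporting-Support: web, os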

15. Security considerations

TODO

16. Privacy considerations

TODO

16.1. Clearing attribution storage

A user agent’s attribution caches contain data about a user’s web activity. When a user agent clears an origin’s storage, it MUST also remove entries in the attribution caches whose source origin , attribution destinations , or reporting origin is the same as the cleared origin.

A user agent MAY clear attribution cache entries at other times. For example, when a user agent clears an origin from a user’s browsing history.

Conformance

Document conventions

Conformance requirements are expressed with a combination of descriptive assertions and RFC 2119 terminology. The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in the normative parts of this document are to be interpreted as described in RFC 2119. However, for readability, these words do not appear in all uppercase letters in this specification.

All of the text of this specification is normative except sections explicitly marked as non-normative, examples, and notes. [RFC2119]

Examples in this specification are introduced with the words “for example” or are set apart from the normative text with class="example" , like this:

This is an example of an informative example.

Informative notes begin with the word “Note” and are set apart from the normative text with class="note" , like this:

Note, this is an informative note.

Index

Terms defined by this specification

Terms defined by reference

References

Normative References

[CLEAR-SITE-DATA]
Mike West. Clear Site Data . URL: https://w3c.github.io/webappsec-clear-site-data/
[DOM]
Anne van Kesteren. DOM Standard . Living Standard. URL: https://dom.spec.whatwg.org/
[ENCODING]
Anne van Kesteren. Encoding Standard . Living Standard. URL: https://encoding.spec.whatwg.org/
[FETCH]
Anne van Kesteren. Fetch Standard . Living Standard. URL: https://fetch.spec.whatwg.org/
[HR-TIME-3]
Yoav Weiss. High Resolution Time . URL: https://w3c.github.io/hr-time/
[HTML]
Anne van Kesteren; et al. HTML Standard . Living Standard. URL: https://html.spec.whatwg.org/multipage/
[INFRA]
Anne van Kesteren; Domenic Denicola. Infra Standard . Living Standard. URL: https://infra.spec.whatwg.org/
[MEDIACAPTURE-STREAMS]
Cullen Jennings; et al. Media Capture and Streams . URL: https://w3c.github.io/mediacapture-main/
[PERMISSIONS-POLICY-1]
Ian Clelland. Permissions Policy . URL: https://w3c.github.io/webappsec-permissions-policy/
[RFC2119]
S. Bradner. Key words for use in RFCs to Indicate Requirement Levels . March 1997. Best Current Practice. URL: https://datatracker.ietf.org/doc/html/rfc2119
[RFC8949]
C. Bormann; P. Hoffman. Concise Binary Object Representation (CBOR) . December 2020. Internet Standard. URL: https://www.rfc-editor.org/rfc/rfc8949
[RFC9180]
R. Barnes; et al. Hybrid Public Key Encryption . February 2022. Informational. URL: https://www.rfc-editor.org/rfc/rfc9180
[SECURE-CONTEXTS]
Mike West. Secure Contexts . URL: https://w3c.github.io/webappsec-secure-contexts/
[URL]
Anne van Kesteren. URL Standard . Living Standard. URL: https://url.spec.whatwg.org/
[WEBIDL]
Edgar Chen; Timothy Gu. Web IDL Standard . Living Standard. URL: https://webidl.spec.whatwg.org/
[XHR]
Anne van Kesteren. XMLHttpRequest Standard . Living Standard. URL: https://xhr.spec.whatwg.org/

Informative References

[STORAGE]
Anne van Kesteren. Storage Standard . Living Standard. URL: https://storage.spec.whatwg.org/

IDL Index

interface mixin HTMLAttributionSrcElementUtils {
    [CEReactions, SecureContext] attribute USVString attributionSrc;
};
HTMLAnchorElement includes HTMLAttributionSrcElementUtils;
HTMLImageElement includes HTMLAttributionSrcElementUtils;
HTMLScriptElement includes HTMLAttributionSrcElementUtils;
dictionary AttributionReportingRequestOptions {
  required boolean eventSourceEligible;
  required boolean triggerEligible;
};
partial dictionary RequestInit {
  AttributionReportingRequestOptions attributionReporting;
};
partial interface XMLHttpRequest {
  [SecureContext]
  undefined setAttributionReporting(AttributionReportingRequestOptions options);
};
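
For example, a non-normative TypeScript sketch of how a page might use the RequestInit extension above to make a fetch request that is eligible to register an event source (the reporter URL is illustrative, and the cast is only needed while the built-in RequestInit type lacks the attributionReporting member):

async function registerEventSource(): Promise<void> {
  await fetch("https://reporter.example/register-source", {
    attributionReporting: {
      eventSourceEligible: true,
      triggerEligible: false,
    },
  } as RequestInit);
}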

Issues Index

Set contextOrigin properly.
Consider allowing the user agent to limit the size of tokens .
Monkeypatch img and script loading so that the presence of an attributionSrc attribute sets the img src or script src request’s Attribution Reporting eligibility to " event-source-or-trigger ".
Use attributionSrcUrls with make a background attributionsrc request .
Use/propagate navigationSourceEligible to the navigation request 's Attribution Reporting eligibility .
Check permissions policy.
This would ideally be replaced by a more descriptive algorithm in Infra. See infra/201
Determine whether to limit length or code point length for filter and d above.
Ideally this would use the cookie-retrieval algorithm , but it cannot: There is no way to consider only cookies whose http-only-flag is true and whose same-site-flag is " None "; there is no way to prevent the last-access-time from being modified; and the return value is a string that would have to be further processed to check for the " ar_debug " cookie.
Audit other properties on request and set them properly.
Support header-processing on redirects.
Check for transient activation with " navigation-source ".
Set privateStateTokens properly.
Confirm that the maximum destinations size is workable.
Determine whether to limit length or code point length for key above.
Determine proper charset-handling for the JSON header value.
Should fake reports respect the user agent’s max event-level reports per attribution destination ?
Determine whether to limit length or code point length for sourceKey above.
Determine whether to limit length or code point length for key above.
properly define the "finishing issuance" operation.
properly define "begin redemption" operation. Consider running the algorithm at report sending time.
Determine proper charset-handling for the JSON header value.
Assign the ID associated with the private state token to report ID .
This fetch should use a network partition key for an opaque origin. [Issue #220]
This fetch should use a network partition key for an opaque origin. [Issue #220]