
TaDA Live: Compositional Reasoning for Termination of Fine-grained Concurrent Programs

Published: 10 November 2021

Abstract

We present TaDA Live, a concurrent separation logic for reasoning compositionally about the termination of blocking fine-grained concurrent programs. The crucial challenge is how to deal with abstract atomic blocking: that is, abstract atomic operations that have blocking behaviour arising from busy-waiting patterns as found in, for example, fine-grained spin locks. Our fundamental innovation is with the design of abstract specifications that capture this blocking behaviour as liveness assumptions on the environment. We design a logic that can reason about the termination of clients that use such operations without breaking their abstraction boundaries, and the correctness of the implementations of the operations with respect to their abstract specifications. We introduce a novel semantic model using layered subjective obligations to express liveness invariants and a proof system that is sound with respect to the model. The subtlety of our specifications and reasoning is illustrated using several case studies.

1 Introduction

Compositional reasoning for fine-grained concurrent programs interacting with shared memory is a fundamental, open research problem. We are beginning to obtain a good understanding of how to reason about safety properties of concurrent programs: i.e., if the program terminates and the input satisfies the precondition, then the program does not fault and the result satisfies the postcondition. O’Hearn and Brookes [4, 36] introduced concurrent separation logic for reasoning compositionally about coarse-grained concurrent programs. Since then, there has been a flowering of work on modern concurrent separation logics for reasoning compositionally about safety properties of fine-grained concurrent programs: e.g., CAP [10], TaDA [7], Iris [24], and FCSL [34]. With these modern logics, it is possible to provide abstract specifications that match the intuitive software interface understood by the developer and to verify both implementations and client programs.
We have comparatively little understanding of how to reason compositionally about progress (liveness) properties for fine-grained concurrent algorithms: i.e., something good eventually happens. Examples of progress properties include termination, livelock-freedom, or that every user request is eventually served. The intricacies of the design of concurrent programs often arise precisely from the need to make the program correct with respect to progress properties. The goal of this article is to design a program logic to reason compositionally about the safety and termination of fine-grained concurrent programs: i.e., to be able to prove that if the input satisfies the precondition, then the program terminates without faulting and the result satisfies the postcondition. As with safety, the aim is to provide abstract specifications and to verify implementations and clients.
A truly compositional approach would achieve proof scalability through the reduction of large complex proofs into a composition of smaller, more tractable proofs, and proof reuse through the ability to define abstract interfaces between independent sub-proofs. Proof scalability for concurrent systems is achieved through thread-local reasoning: i.e., the proof of the parallel composition of threads should be the composition of smaller, separate proofs of each thread. Proof reuse is achieved when the right abstract interface for a module is identified, so the proof of correctness of the implementation of the module and the proof of its clients is decoupled: A proof of a client can be reused when swapping the implementation of the module for one satisfying the same specification; a proof of an implementation can be reused when the specification is general enough to support arbitrary correct clients.
For safety, thread-local reasoning can be obtained through rely/guarantee proofs: A protocol on shared state is specified in terms of the set of allowed updates, and each thread is verified to respect the protocol under the assumption that the environment respects the protocol. There have been successful attempts at using rely/guarantee reasoning to prove progress properties, such as termination, of non-blocking concurrent programs [5, 8, 13, 14, 21, 32], which are the programs where the progress of a thread does not depend on the progress of other threads. For example, the Total TaDA concurrent separation logic [8] was introduced to provide compositional reasoning about the safety and termination of non-blocking programs. It provided thread-local reasoning and abstract specification of module interfaces without the need to extend the rely/guarantee reasoning.
Standard rely/guarantee reasoning is not enough to prove progress properties for blocking programs. In a blocking program, termination of a thread may depend on other threads performing some updates to the shared state. For example, if a thread is requesting a lock that has been acquired by another thread, then the lack of progress of the thread currently owning the lock would hinder the progress of . Thread is blocked, waiting for the lock owner to release the lock. In such a situation, a safety abstraction of the environment is insufficient to support a termination argument for : Knowing that the release of the lock is always allowed to happen does not imply that it eventually happens.
There has been some work [3, 22, 27] on proving progress properties for programs where blocking is caused solely by blocking primitives such as built-in locks or channels. However, it is very common, especially for fine-grained programs, to use ad hoc busy-waiting patterns. For example, consider a thread running . The termination of this thread is entirely dependent on the environment eventually storing 1 in . This form of blocking is completely different from a call to a blocking primitive that cannot take a step in the current state. It instead corresponds to code executing steps without making real progress. We call this pattern of behaviour abstract blocking.
We have identified two ways to reason about progress in the presence of abstract blocking in the literature: the history-based approach and the refinement-based approach. The history-based approach [15, 25, 37] is very general but results in complex and indirect specifications with complicated reasoning involving explicit trace manipulations. We discuss this approach further in Section 6. In the refinement-based approach, the LiLi logic [30, 31] is the work most closely related to our goals. LiLi extends rely/guarantee with liveness information to prove a progress-preserving contextual refinement between the implementation of a module’s operations and simpler code representing their specifications. LiLi’s extension of rely/guarantee requires, however, heavy use of global auxiliary shared state manipulated through ghost code, which makes the proofs less local. Moreover, the specification code associated with abstractly atomic operations that are blocking is not atomic and exposes implementation details, which hinders scalability and reuse. We give a detailed comparison with our work and LiLi in Sections 2, 5.4, 6.
The refinement approach does not prove termination directly, but instead relates termination of implementation code with termination of specification code. By contrast, our goal is to develop a program logic with which we are able to verify specifications that describe termination directly, without the manipulation of histories, with proofs that keep auxiliary state as local as possible without requiring the addition of ghost code, and with specifications that allow the abstraction of implementation details while representing precisely the abstract termination guarantees.
Contributions. Our starting observation is that just as safety rely/guarantee arguments are centred around invariants, i.e., facts of the form always P, so liveness rely/guarantee arguments for proving progress in the presence of blocking should be centered around liveness invariants, i.e., facts of the form always eventually P. TaDA Live’s design is based on the idea that this is not a fluke: The dependence on liveness invariants might be considered a definition of abstract blocking. To capture this observation within a program logic, we introduce a number of key innovations:
subjective obligations, a new form of logical ghost state to express liveness invariants in a thread-local way without the need for ghost code;
obligation layers, to express dependencies between liveness invariants and avoid unsound circular reasoning;
abstract specifications for atomic blocking operations, to express termination guarantees conditionally on an environment liveness assumption of the form “always eventually .”
We obtain TaDA Live, a concurrent separation logic that uses liveness invariants to provide compositional reasoning for establishing safety and termination for blocking programs. The logic makes extensive use of abstract specifications for atomic blocking operations to achieve proof scalability and reuse. This article presents the following contributions:
the TaDA Live logic and its specification format;
a novel semantic model and soundness proof for the logic: the new model is a substantial re-definition of the TaDA model to allow for the non-trivial extensions needed to incorporate the liveness content of the TaDA Live specifications;
TaDA Live proofs for several paradigmatic case studies: two fine-grained implementations of locks showcase abstraction in the specifications and the obligation mechanism; a program mixing locks and busy-waiting illustrates common proof patterns for clients; two counter modules illustrate TaDA Live’s ability to hide internal blocking and proof reuse; and a set module using a lock-coupling pattern illustrates the generality of the layer system.
Outline. Section 2 provides an example-driven overview of the main innovations of TaDA Live. Section 3 introduces the assertion language and the semantics of the TaDA Live specifications. Section 4 presents the crucial proof rules of TaDA Live, with a running example to illustrate their use. Section 5 presents TaDA Live proofs of several key case studies and a discussion on the limitations of the TaDA Live reasoning. Section 6 contains related work, and Section 7 ends with conclusions and future work.

2 An Overview of TaDA Live

We introduce the main ideas of TaDA Live in this section, leaving the complex technical details for the following sections. Consider a simple example program with non-primitive blocking behaviour:
We use a first-order, fine-grained concurrent while language for manipulating shared state. The shared state comprises heap cells that have addresses and store values (addresses, integers, Booleans). The notation denotes the value stored at the heap cell with address . The thread on the left () is busy-waiting on the value stored at the shared heap cell at . Under fair scheduling, the program is guaranteed to terminate: Eventually, the right-hand thread () will be scheduled and will set the heap cell to 1; after that, eventually the left-hand thread will read the value 1 into the local variable and the while loop will terminate. Since we are aiming at a thread-local proof method, we should be able to break the proof of termination of the program into two separate proofs for the two threads.
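For concreteness, the program described above can be sketched in the article's While language roughly as follows (the address name done and the local variable v are illustrative stand-ins for the elided names in the original listing):

    // left thread: busy-wait until the shared cell at done holds 1
    v := [done];
    while (v = 0) {
        v := [done]
    }
  ∥
    // right thread: signal by storing 1 in the shared cell
    [done] := 1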
We first explore how to provide a thread-local proof of safety for this example program using the TaDA logic [7]. We then extend the reasoning with the ingredients needed to prove termination. TaDA is a concurrent separation logic, so it uses the standard separation logic assertions. Let us assume the precondition and, for simplicity, aim at the postcondition . TaDA uses the standard parallel rule for concurrent separation logics, where the precondition is separated into two preconditions , one for each thread. Since both threads dereference , we need a means to share the heap cell in the assertions, turning into a duplicable assertion, called a shared region in TaDA. For our example, we define a shared region with an associated interpretation, which specifies which resource is being shared. The region type (for “example”) is the name associated with this interpretation, and the region identifier is an abstract identifier associated with this specific instance of the region type . The arguments of the region are called the abstract state of the region. The definition of a region is completed by an interference protocol that restricts, in rely/guarantee style, the allowed updates to the abstract state. Here, we encode the facts that (a) only can update and (b) can only be updated to 1. Although such strong invariants are not required to just prove safety, they will be useful for the termination proof later. To encode fact (a), we introduce a form of ghost state called a guard, , which gives exclusive permission to update . Formally, guards (probably first introduced in deny-guarantee reasoning [11]) form a partial commutative monoid (PCM), where in this case is undefined to capture exclusive permission: If a thread owns , then no other thread can own it at the same time. To link with the ability to change , the protocol allows the guarded update
(1)
Fact (b) is encoded by this being the only allowed update.
In TaDA and other modern separation logics such as Iris, implication is generalised to the viewshift construct () from Reference [9], which can be used to consistently update ghost information, purely within the logic (as opposed to through ghost code). Here, it can be used to turn the owned resource into a shared resource where and . The guard assertion indicates ownership of the guard for the region with identity . Using standard reasoning, one can then prove and , which entails, by the parallel rule . By consequence and existential elimination on , we obtain our goal .
Let us now turn to termination. A thread-local approach would proceed by first proving that and terminate separately, and then concluding that their parallel composition terminates. In the case of non-blocking code, it is possible to obtain a proof of this form: By definition, a non-blocking thread does not need the progress of another thread to terminate. For non-blocking code, a rely/guarantee protocol that only asserts safety facts about the extent of the interference of the threads is all that is needed to prove termination. This is exploited by virtually all the program logics that prove total specifications for non-blocking programs [5, 8, 21, 32]. The non-blocking case allows the use of a while rule that is essentially the one of total Hoare logics:
Here, is an ordinal-valued variant that is shown to strictly decrease after each iteration. By well-foundedness of ordinals, there can only be finitely many iterations, and hence the loop terminates. However, this rule is completely inadequate for blocking code: In our example, the loop of admits no variant, since the iterations do not achieve any sort of progress. Indeed, none of the cited works can handle this simple example. Reasoning about progress for blocking programs requires a whole set of new reasoning principles and a genuine extension of rely/guarantee with liveness information.
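For reference, the standard total-correctness while rule alluded to above has roughly the following shape (a generic reconstruction in conventional Hoare-logic notation, not necessarily the exact rule displayed in the article):

    \[
    \frac{\forall \gamma.\ \vdash \{\, P(\gamma) \wedge B \,\}\ \mathbb{C}\ \{\, \exists \gamma' < \gamma.\ P(\gamma') \,\}}
         {\vdash \{\, \exists \gamma.\ P(\gamma) \,\}\ \mathtt{while}(B)\,\{\mathbb{C}\}\ \{\, \exists \gamma.\ P(\gamma) \wedge \neg B \,\}}
    \]

Here P(γ) is a loop invariant indexed by an ordinal-valued variant γ, which every iteration of the body must strictly decrease.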
In TaDA Live, the while rule has a more general form:1
The crucial difference is that the rule uses a set of target states : When an iteration starts in a target state, the variant must be shown to strictly decrease, (i.e., the iteration needs to produce measurable progress); when an iteration starts from a non-target state, the variant is only required not to increase, (i.e., no progress is undone). These two conditions alone do not prove the termination of the loop: The execution may be constantly in a non-target state. In our example, the is . To conclude that the loop terminates, the first premise requires : That is, in traces where holds constantly, with the help of the environment, we will be eventually always in a target state. The assertion captures facts that hold at any point of the iterations of the loop, as it is in the triple of the conclusion but framed off the triples in the premises. When finally happens, by fairness of the scheduler the loop will execute, and will do so from a state where, by the third premise, the iterations will make progress towards termination.
To make this reasoning work, the first problem we encounter is that none of the information in a standard rely/guarantee specification supports proving . Indeed, nothing in the protocol defined by expresses the idea that at some point the environment will help by setting to 1. A safety rely merely expresses that an update is allowed, not that it will eventually be executed. In other words, a safety rely alone is too imprecise an abstraction: It cannot distinguish environments that make the local thread terminate from the ones that do not. The first question we have to answer is: How can “help” from the environment be represented in a rely/guarantee proof?

Innovation 1: Subjective Obligations for Liveness Invariants

Safety arguments are centred around invariants: That is, facts of the form always P, encoded using regions in TaDA. TaDA Live’s basic observation is that to represent help from the environment, all that is needed is liveness invariants: That is, facts of the form always eventually P. By combining liveness invariants and safety invariants one can encode more complex progress conditions such as . To represent liveness invariants in a thread-local way, TaDA Live introduces a new kind of ghost state called obligations. Similarly to guards, they form a PCM. The interference protocol is augmented by a component that explains how an update affects the obligations. In our example, we want to represent the liveness invariant always eventually which, together with the invariant that can only be set to 1, implies . We therefore introduce an obligation (for update-to-1), where again undefined captures exclusivity, and extend the protocol to link to the update:
(2)
This transition to update the region can be executed by a thread with both the guard and the obligation; the effect of the update is to “consume” the resource, as the obligation resulting from the update is the unit . We say the update fulfils the obligation .
A safety rely, as expressed by specification (1), says: Verify a thread under the assumption that the environment steps will obey the protocol. As a first approximation, our liveness rely, as expressed by Equation (2), additionally says: Verify a thread under the assumption that the environment will always eventually fulfil the obligations it owns. (We will refine this idea in the next section to avoid unsound circular reasoning.) We say an obligation is assumed live if the environment always eventually fulfils . In other words: If, at any time, the environment owns , it eventually fulfils .
This idea introduces a complication: We need to locally keep track of which (relevant) obligations are owned by the environment to make use of the liveness rely assumption. We solve this problem by taking inspiration from the concept of subjective separation of Reference [29]. We introduce subjective obligation assertions: local obligations, , asserting local ownership of the obligation associated with region , and environmental obligations, , asserting environment ownership of the obligation . What makes these assertions interesting is the way they compose: That is, . If we start with local obligation and we want to fork into two threads, then we use to give responsibility of to one thread and knowledge that the environment has this responsibility to the other.
To complete the proof sketch for our example, we first need to extend the region interpretation by adding the obligation protocol:2
When the value at is 1, the obligation is owned by the interpretation, and hence owned by no thread. A thread owning and setting to 1 fulfils the obligation precisely by leaving it inside the interpretation. There is no other way of losing ownership of an obligation, because we adopt a classical interpretation of separation: That is, . For soundness, the interpretation of a region with ID is only allowed to own obligations of .
The TaDA Live proof starts by using viewshift to transform the resource in the precondition into this new region that is shared between the two threads:
where and are the preconditions of the proofs of and , respectively. To discharge in the proof of the while loop of , we can use , which holds throughout the loop: If we are in a state , then either , in which case we are in a target state and the value of will remain 1 forever; or in which case we know . By the liveness rely, when the environment owns , it will eventually fulfil it, which by Equation (2) can only be done by setting . Section 4 explains in detail how this argument is carried out formally in TaDA Live.
At this point, we are able to prove the total triples and . However, the standard parallel rule is unsound in the sense that the two triples can be proven even with but, in this case, the parallel composition would not terminate! TaDA Live’s parallel rule can recover soundness by checking that the postconditions of the two threads do not own pending obligations, which we can show by proving the stronger triples and . This condition is too restrictive in general, and we will relax it appropriately in the next section.

Innovation 2: Obligation Layers to Avoid Circular Arguments

Structuring liveness invariants through obligations, as sketched, presents a significant problem for soundness due to the possibility of making unsound circular liveness assumptions. Consider the following variant of our busy-waiting example:
There are two shared heap cells at and , respectively. The thread on the left () is busy-waiting on , which is supposed to be set by the thread on the right (), and vice versa, causing a classic high-level deadlock3 situation: The program does not terminate.
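A plausible rendering of this deadlocking variant in the While language (again with illustrative cell names x and y and local variables v1, v2, not necessarily those of the original listing):

    // left thread: wait for [y] to become 1, then set [x]
    v1 := [y];
    while (v1 = 0) { v1 := [y] };
    [x] := 1
  ∥
    // right thread: wait for [x] to become 1, then set [y]
    v2 := [x];
    while (v2 = 0) { v2 := [x] };
    [y] := 1

Each thread waits for a store that only the other thread can perform, and that store is placed after the wait, so neither loop ever exits.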
Let us try to replicate the argument we used for the busy-waiting example. We require a region sharing both cells, , where is the value stored at . We use two guards and , and two obligations, and linked to the update of and , respectively:
(3)
(4)
Without additional precautions, we would be able to derive the triples (for )
(5)
where . Given the interpretation we sketched earlier, these triples mean: Thread terminates provided its environment (i.e., thread ) always eventually fulfils obligation . This leads, in the application of the parallel rule, to an unsound circular argument: To show thread fulfils obligation , thread is relying on the assumption about the eventual fulfilment of by the environment, which, in turn, relies on the eventual fulfilment of by thread itself. The question is then: How can we rule out circular arguments, while keeping the proof thread-local? In particular, we want a solution that allows us to keep the abstraction of the environment as local and abstract as possible, without revealing unnecessary structure of the other threads.
Our solution is to specify dependencies between liveness invariants. We do this by imposing a partial order on obligations: Each obligation is associated with a layer, denoted lay , which is an element of a user-defined well-founded partial order, . Using layers, we can refine our reasoning principle and gain soundness: To be allowed to assume is live, one has to show all the locally owned obligations have layers greater than lay . The intuition is that local fulfilment of can depend on the environment’s fulfilment of only if lay .
In our deadlocking example, layers expose the circularity issue and prevent the triples (5) from being derivable. Specifically, the proof of the loop of requires us to prove . At this point, we are continuously holding the obligation so, to be able to assume live, we require lay . However, the proof of the loop of would require the symmetric constraint, lay , leading to a contradiction.
If we replace with , then the program terminates and indeed the proof goes through with lay . This is because the first instruction of fulfils so the loop no longer constantly owns it while assuming live. The structure of does not impose any dependency on the two liveness invariants.
The generalisation of the liveness rely to use obligations with layers enables us to give a general parallel rule: Instead of just forbidding pending obligations in the postconditions, we require that the postcondition of each thread only owns obligations with layers greater than the layers of obligations assumed live in the other thread’s proof.
Let us contrast our layered obligations with other solutions found in the literature. The LiLi logic cannot verify the above examples, as it lacks support for parallel composition.4 LiLi’s while rule does share the same high-level structure as WhileB, a structure that can be traced as far back as Reference [37]. The crucial difference is in how is proven. LiLi proposes the idea of definite actions, a reincarnation of “leads-to” assertions of Reference [37], to build a liveness rely. Definite actions require the identification of a logical global “queue” of threads where the thread at the front is always able to execute its action and that action implies global progress. In LiLi, the target states are the ones where the local thread is at the head of this queue, and the condition is proven by showing that when the head of the queue executes an action, there is some local well-founded progress measure that decreases. Definite actions have a number of drawbacks:
they require heavy introduction of ghost code for manipulating globally shared ghost state to construct the queue of threads; and
the progress reasoning on the queue requires analysing all possible ways the other threads may finally produce the target states.
Layered obligations are key to resolving these problems:
they remove the need for ghost code altogether and make liveness invariants local using the local/environmental obligation assertions; and
by only relying on the eventual fulfilment of layered obligations, the proof of can ignore how the environment is going to implement such fulfilment; the only important fact to retain about the how is which liveness invariants are assumed to guarantee the fulfilment.
There has been work on proving various safety (e.g., global deadlock-freedom) [16, 27] and progress (e.g., deadlock-freedom, termination) [3, 22, 26, 28] properties of concurrent programs under the assumption that the only source of blocking behaviour is the use of blocking primitives (e.g., built-in locks or channels). Although none of them can handle busy-waiting patterns like our previous examples, they typically detect deadlocks using “tokens” (often also called obligations) that represent the responsibility to call a blocking primitive. These tokens are arranged in an acyclic graph of dependencies. Superficially, these tokens are related to our layered obligations in that they are both devices to rule out cyclical dependencies. There are, however, deep differences between the two. Tokens are linked (ad hoc in the operational semantics and through ghost code) to blocking primitive operation calls, and dependencies between the tokens represent causal dependencies between these primitive events. By contrast, our layers represent dependencies between liveness assumptions and reflect a purely logical structure. This makes our layered obligations particularly general and flexible: They are able to express arbitrary high-level blocking patterns and not just primitive blocking operations, enabling truly abstract specifications.

Innovation 3: Abstract Atomic Specifications for Blocking Operations

Understanding blocking behaviour as the need for an abstraction of the environment that includes liveness invariants unlocks a novel approach in giving abstract, precise and reusable total specifications for abstractly atomic operations. Building on Total TaDA, we propose a new specification format that expresses the atomic effect of a linearisable operation, and succinctly states the liveness invariant required for ensuring termination, at the right level of abstraction. To see the problem and our solution, let us consider the paradigmatic example of two fine-grained implementations of a lock module.
Two Lock Implementations. Consider the spin lock and the CLH lock given in Figure 1. The implementations enable threads to compete for the acquisition of a lock at address by running concurrent invocations of the operation. Only one thread will succeed, leaving the others to wait until the operation is called by the winning thread.
Fig. 1.
Fig. 1. Two fine-grained lock implementations.
The primitive commands, such as assignment, lookup and mutation, are primitive atomic and non-blocking: every primitive command, if given a CPU cycle, will terminate in one step. Since reads and writes may race, the language is equipped with a compare-and-swap primitive command, , which checks if the value stored at is : If so, it atomically stores at and returns 1; otherwise it just returns 0. Similarly, the fetch-and-set primitive command, , stores at returning the value that was stored at just before overwriting it.
The spin lock in Figure 1 is standard. Its state comprises a heap cell at that stores either 0 (unlocked) or 1 (locked). The Craig-Landin-Hagersten (CLH) lock [18] in Figure 1 serves threads competing for the lock in a FIFO order. It queues requests, keeping a head and a tail pointer (at and , respectively). The predecessor pointers are stored in each thread’s local state (in ). The lock can be acquired by a thread once its predecessor signals release of the lock by setting its queue node to 0. Unlocking the lock corresponds to setting the queue’s head node value to 0.
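Since the code of Figure 1 is not reproduced here, the following sketches convey the shape of the two implementations (illustrative reconstructions in the article's While language; names such as alloc and the layout of the head/tail pointers are assumptions rather than the original code):

    // Spin lock: the cell at x stores 0 (unlocked) or 1 (locked)
    def lock(x) {
        b := 0;
        while (b = 0) {
            b := CAS(x, 0, 1)     // try to atomically flip 0 to 1
        }
    }
    def unlock(x) {
        [x] := 0
    }

    // CLH lock (sketch): [x] is the head pointer, [x + 1] the tail pointer
    def lock(x) {
        c := alloc(1);            // fresh queue node
        [c] := 1;                 // 1 means the node's owner has requested or holds the lock
        p := FAS(x + 1, c);       // enqueue: swap our node into the tail; p is our predecessor
        v := [p];
        while (v = 1) { v := [p] };   // busy-wait until the predecessor releases
        [x] := c                  // the lock is ours; our node becomes the head
    }
    def unlock(x) {
        h := [x];
        [h] := 0                  // signal release to our successor
    }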
Let us focus on the lock operation of the CLH lock. The interesting aspect is that displays blocking behaviour that is observable by the client of the module (it is indeed the quintessence of blocking). We cannot just provide a total triple for it: The operation does not always terminate. The challenge is to design a specification format that accurately captures the abstract functionality of the operation and its subtle termination properties.
First off, one would like a specification that hides the implementation details and only exposes the abstract state of the lock to the client: A lock instance is represented by an abstract resource ,5 where indicates the lock is locked, and means it is unlocked. It is worth noting that traditional Hoare triples are not able to represent the useful behaviour of . The triple requires the client to establish that the lock is unlocked before calling the operation, defeating the very purpose of the operation’s functionality. The triple allows the operation to be called in the locked state, but is not precise enough, since the same triple holds for a simple assignment . It does not express the property that, upon termination of the operation, we can claim that we have acquired the lock. A partial specification of a lock is already a challenge; a total specification is more difficult still.
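Writing L(x, l) as a stand-in for the article's (elided) abstract lock predicate, with l = 1 for locked and l = 0 for unlocked, the two inadequate Hoare triples discussed above can be rendered as:

    \[
    \{\, L(x, 0) \,\}\ \mathtt{lock}(x)\ \{\, L(x, 1) \,\}
    \qquad\qquad
    \{\, L(x, 0) \vee L(x, 1) \,\}\ \mathtt{lock}(x)\ \{\, L(x, 1) \,\}
    \]

The first forbids calling lock when the lock may be held; the second permits it, but is also satisfied by code that simply overwrites the lock state, so it says nothing about having actually acquired the lock.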
Proposed solutions in the literature can be divided into history-based, refinement-based, and abstract atomicity-based approaches. The history-based approach (e.g., Reference [38] for safety, References [15, 25] for progress) is expressive but at the price of complex and indirect specifications; the verification requires explicit manipulation of the histories, complicating client reasoning. The only progress-aware refinement-based approach that can modularly verify the CLH lock is the LiLi logic [31]. LiLi’s refinement is progress-preserving and contextual, allowing the result to be reused in arbitrary client contexts. For example, the LiLi proof for CLH lock (under weak fair scheduling) shows that
where is defined (in pseudocode) as6
The abstract state of the lock is represented by , but to represent the fact that threads will not be starved, an abstract FIFO queue at keeps track of the threads to be served; self is the thread ID of the caller. The command await(){} is a blocking primitive introduced to express the non-primitive blocking of the implementation. The potential absence of progress of the implementation’s busy-waiting steps is represented by potential absence of a step ahead in the specification.
LiLi’s specification style has three major drawbacks:
(1)
the specification code is not much simpler than the original implementation and is not able to hide the implementation detail of the thread queue;
(2)
the specification code is not atomic: It produces one step for entering the queue and one step for acquiring the lock;
(3)
since the termination properties are represented through the behaviour of code, a client proof that wants to make use of these properties must reprove them on the specification code before being able to use them in the argument.
These problems limit the abstraction capabilities, proof reuse, and scalability of the approach.
The abstract atomicity approach has been pioneered by the TaDA logic. It directly influenced logical atomicity in Iris [24] and was extended to provide total specification for non-blocking programs in Total TaDA [8]. The aim of the TaDA approach is to keep the Hoare-triple style of specification while being able to give precise and abstract specifications to fine-grained code like CLH lock. The TaDA solution is to provide a Hoare triple for , which embraces the fact that, between the invocation of the operation and the execution of the atomic update of the lock, there is a phase of interference where the environment can change the value of the lock. It is important to be able to distinguish the imprecise precondition that holds during the interference phase, , and the precise precondition, , that holds just before the atomic update performed by the lock operation at its linearisation point [20].
The TaDA safety specification for is the partial atomic triple:
(6)
The interference precondition describes the interference phase. It states that the environment must preserve the existence of the lock at but may change the value of , and the implementation of the lock must tolerate these environmental changes. The pseudo-quantifier is unusual, behaving like an evolving universal quantifier in that the environment is able to keep changing over time and behaving like an existential quantifier in that the implementation can assume that the lock always exists with . The triple (6) states that, if the environment satisfies the interference precondition and the operation terminates, then the implementation guarantees that, just before the linearisation point, the lock must have been available for locking () and, just afterwards, the lock has been locked by the operation (). Exclusive ownership of the lock after the operation terminates can be derived from the assertion in the postcondition: Just before we locked it, nobody else could claim that they owned the lock. The TaDA safety specification for is the partial atomic triple
This triple7 states that, to be used correctly, the unlock operation requires the lock to be locked and not changed by the environment during the interference phase; in return, the operation promises to atomically set the lock to be unlocked.
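Schematically, and eliding regions, levels, and layers, the two TaDA partial atomic triples have roughly the following shape (using L(x, l) as above and an ordinary ∀ in place of the pseudo-quantifier):

    \[
    \forall l \in \{0,1\}.\ \langle\, L(x, l) \,\rangle\ \mathtt{lock}(x)\ \langle\, L(x, 1) \ast l = 0 \,\rangle
    \qquad
    \langle\, L(x, 1) \,\rangle\ \mathtt{unlock}(x)\ \langle\, L(x, 0) \,\rangle
    \]

The angle brackets mark the atomic pre- and postcondition: the environment may keep changing l before the linearisation point, but at that point the lock must be unlocked (l = 0) and is atomically switched to locked.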
TaDA Live builds on the TaDA specification format. To turn the TaDA triple for into a total specification, the termination guarantee must depend on the environment: If the environment decides to hold the lock indefinitely, then no lock implementation should allow the operation to terminate. Hence, we express blocking as a liveness condition on the environment during the interference phase of an abstractly atomic operation. The CLH operation will terminate under weak fairness, provided that, if the lock is locked by the environment during the interference phase, then the environment will eventually unlock it. In general, a blocking operation will require an environment that is live: It will always eventually bring the abstract state to a good (e.g., unlocked) state.
The TaDA Live total specification of the CLH operation is:
(7)
The interference precondition is with the pseudo-quantifier now incorporating the environment liveness condition. As well as stating that the environment can keep changing the lock, the interference precondition also states that if the lock is in a bad state (), then the environment must always eventually change it to a good state (). The implementation needs to ensure termination under the assumption that the lock always eventually returns to the unlocked state. Note that the environment is allowed to change back to 1 arbitrarily many times, provided it always eventually sets it back to 0. To see why this is enough to ensure termination, consider Figure 2(a), where we chart the evolution of the abstract state induced by a live environment in the interference phase of . Progress towards termination of is guaranteed by the progress measure charted in Figure 2(b): Every time the environment unlocks, the value of decreases from 1 to 0; when the environment locks, although increases to 1, the number of threads in front of us in the queue decreases. One crucial aspect of our specification design is that we do not want to expose the progress argument to the client unless part of the argument needs to be made by the client. With CLH, the part of the argument appealing to the queue of threads is completely internal to the implementation of the operation, while the argument for the environment’s liveness must be provided by the client (the implementation has no power over this). We prove this formally in Section 5.
Fig. 2.
Fig. 2. Live environment (a); measure of progress for CLH lock where is the number of threads ahead in the queue (b); live environment with bounded impedance (c); measure of progress for spin lock (d).
Now let us consider the spin lock implementation. The spin operation cannot promise to terminate just by relying on a live environment. The problem is that when the environment locks the lock, there is no measure of progress that decreases: We are genuinely delayed by this action. We call this effect impedance. We conceptualise impedance as a greater leaking of the progress argument to the client. In the spin lock example, the whole of the progress argument needs to be provided by the client: The client needs to ensure that the environment will always eventually unlock the lock and that it will only impede the operation a bounded number of times. To represent this extra bounded impedance requirement (depicted in Figure 2(c)), we extend the abstract state of the lock with an ordinal , an impedance budget that strictly decreases when the lock state is set to 1. We arrive at the following TaDA Live specification for spin :
(8)
The lock is now represented by the predicate assertion with ordinal , which can also be changed by the environment during the interference phase. As well as expressing the dependency on a live environment on , this triple states that every operation consumes the budget by a non-trivial amount, thus providing a logical measure of progress from good to bad states. The initial value of the budget and the function from ordinals to ordinals are determined by the client, which must demonstrate that the budget is enough to make all its calls.
The TaDA Live total specification of for the CLH is the same as the TaDA partial specification. By contrast, the TaDA Live specification of for the spin lock needs to incorporate the ordinals: The impedance budget is preserved by unlock. This encodes the fact that does not impede the other operations, but also that by unlocking we cannot increase the budget. By combining these assumptions about the budget (it decreases when locking, stays constant when unlocking), it is possible to conclude that the implementation of the spin lock terminates using the progress measure in Figure 2(d). Crucially, for spin lock, the whole of the progress argument is provided by (and thus visible to) the client.
The impedance budget technique was first introduced to concurrent separation logics for non-blocking operations in Total TaDA [8]. Here, we smoothly integrate ordinals into TaDA Live, which fully supports blocking.

2.1 Abstraction and Proof Reuse

The TaDA Live program logic works with hybrid triples of the form:
which generalises both Hoare triples and abstract atomic triples. This triple comprises: a pseudo-quantifier with its environment liveness condition; atomic pre-/post-conditions and ; and Hoare pre-/post-conditions, and . The Hoare pre-/post-conditions describe stable resources that are owned locally by and can be updated non-atomically. Hoare triples correspond to the case where and . Abstract atomic triples correspond to the case when and are empty. We have omitted some details from the hybrid triples, such as layers and levels, since they are not important for the ideas of this section; the full details are given in Section 3.8.
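Purely as a mnemonic (this is not the article's concrete syntax, which also records layers and levels), such a hybrid triple can be pictured as:

    \[
    \forall x \in X\ [\text{env.\ liveness}].\ \ \bigl\{\, P_H \;\bigm|\; \langle P_A(x) \rangle \,\bigr\}\ \ \mathbb{C}\ \ \bigl\{\, Q_H(x) \;\bigm|\; \langle Q_A(x) \rangle \,\bigr\}
    \]

with P_H, Q_H the Hoare pre-/post-conditions, ⟨P_A⟩, ⟨Q_A⟩ the atomic pre-/post-conditions, and the pseudo-quantifier carrying the environment liveness condition.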
The integration of the liveness annotations in triples achieves the goal of keeping the specification abstract and atomic. To obtain the goal of reuse of proofs, there are two missing ingredients: a mechanism to make use of the assumption in a proof of an implementation of the specification; and a way to reuse the specification in an arbitrary client context.
Imagine proving the CLH implementation correct with respect to specification (7). The while loop needs to discharge that “finally, the current thread is at the head of the queue, and the lock is unlocked.” This can only be proven with the help of the liveness assumption coming from the lock specification. To this end, in addition to liveness assumptions given by obligation assertions, TaDA Live extends judgements to allow contexts with liveness assumptions, used to discharge the condition in the while rule. The full details are given in Section 3.8.
Now consider proving a client of a lock using the specifications of the lock operations for the calls to these operations. This requires the Liveness Check rule:
The rule’s crucial effect is to remove the liveness annotation , which can only be done in a situation where the corresponding liveness assumption is satisfied. Just like the WhileB rule, we frame an assertion which is information that holds for the duration of the call. Typically, asserts the existence of some shared region and that the environment holds some obligations depending on the state of the region. We also need to provide a set of target states capturing when (second premise). The crucial check of the rule is the first premise, which examines the traces where holds everywhere, and asks us to prove that in those traces we see satisfied infinitely often (and thus infinitely often). If that is true, then we can conclude that the command terminates in the current context without the extra assumption in the pseudo-quantification. The resulting triple can then be manipulated using standard TaDA reasoning.
Take the typical use of (CLH) locks in a client . To share the lock resource , the client proof would specify some region where is the abstract state of the lock. A typical client would include the abstract state of other shared resources, too, but for simplicity, we focus here on the lock. The client needs to specify in its protocol that the lock will always eventually be unlocked by the threads sharing it. We therefore introduce an exclusive obligation (the key of the lock), which is obtained when locking the lock and fulfilled when unlocking it:
The protocol is mirrored in the region’s interpretation .
With the application of standard TaDA reasoning, it is possible to derive
which amounts to saying that if atomically locks the lock region, then it also atomically updates the client region containing the lock. Notice that the annotation is propagated as is. In other words, the update on the lock is put in the context of the current client. In such a context, we can set the frame to be : according to the protocol of the current client, the environment holds an obligation when . Because of the liveness invariant encoded by , it is true that the environment will always eventually unlock the lock, allowing us to discharge the side condition of LiveC’:
Indeed, if , then the precondition gives us , which means that the environment owns and will therefore eventually fulfil it, which can only be done by setting . The environment is allowed to then lock again, but that is fine: As we discussed, a CLH lock can promise termination under this milder condition.
Thanks to the smooth integration of liveness annotations in the specifications and liveness invariants expressed as obligations, TaDA Live proofs can properly abstract and encapsulate behaviour. Consider a module implementing a counter that can be safely used concurrently, thanks to the internal use of locks to protect access to the shared cell holding the value of the counter. For example, the increment operation can be implemented as
While the use of locks involves blocking behaviour, the blocking is handled completely internally and a client of the counter cannot observe it. The TaDA Live specification of the increment operation thus does not leak this implementation detail:
A client of the counter does not need to worry about the internal blocking, since the specification does not entail any liveness proof obligation. The proof of discharges the liveness assumption of the specification of by using obligations analogous to above, specified in an internal protocol that is not exposed to the client proof. In Section 4.9, we discuss the encapsulation properties of TaDA Live’s specifications in more detail.
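To make the counter example concrete, here is a plausible sketch of the lock-protected increment described above (the operation name incr and the choice of storing the counter value at x + 1 are illustrative assumptions, not the article's code):

    def incr(x) {
        lock(x);            // internal blocking, invisible to the client
        v := [x + 1];       // read the current counter value
        [x + 1] := v + 1;   // write the incremented value
        unlock(x)           // fulfil the module-internal "key" obligation
    }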
Our approach contrasts significantly with previous work [3, 22] where blocking is represented in specifications by the acquisition of tokens acting as obligations. In that work, the specification style fixes an expected protocol to be followed by the client. For example, the axiom for a built-in lock acquisition operation returns a built-in token representing the need for calling a lock release primitive.
In contrast, our lock specification does not impose on the client any particular way in which its environment liveness assumption should be enforced. It is the job of the client to devise a protocol that ensures the environment liveness assumptions of the lock specifications will be provable. For locks, this is indeed often achieved by making sure every thread that locks a lock eventually unlocks it. Such a protocol is encoded by the liveness invariants of the client’s region (e.g., in the example above) and the k-obligation pattern. The specification of the lock, however, does not transfer obligations to the client, leaving open the possibility for clients to use completely different protocols. The following example client illustrates the added flexibility of our approach:
The code assumes a lock has been allocated at , and initially stores 0. In the specification style where the expected (liveness) protocol is built-in, the call in the left-hand thread would return a built-in token that can only be consumed by calling . This, in turn, requires an extension of the logic—as done, e.g., in Reference [17]—providing some mechanism for the sound delegation of tokens from one thread to the other. In TaDA Live, there is no need for such an extension. The protocol of this client does not need to associate obligations with the lock; one can simply define an obligation (owned initially by the left-hand thread) that is fulfilled when is set to 1 and use it to prove the appropriate environment liveness conditions for the proof.
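One plausible shape for such a client, reconstructed from the description above (the cell name d and the exact thread bodies are assumptions, not the original listing), has the left-hand thread acquire the lock and signal through a shared cell, while the right-hand thread waits for the signal and performs the matching unlock:

    // left thread: acquire the lock, then signal
    lock(x);
    [d] := 1
  ∥
    // right thread: wait for the signal, then release the lock acquired by the left thread
    v := [d];
    while (v = 0) { v := [d] };
    unlock(x)

The responsibility to unlock is effectively handed from one thread to the other, which a built-in token discipline can only express via an explicit delegation mechanism.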
In this informal overview, we used temporal logic formulas to represent the key liveness conditions in the WhileB and LiveC’ rules. The formal versions of these TaDA Live rules, however, implement those checks with what we call the environment liveness condition, which reduces these liveness properties to safety checks via a dedicated set of rules (explained in Section 4). Remarkably, the liveness checks of both rules can be phrased in terms of the environment liveness condition, which therefore provides a uniform proof principle for blocking termination.

2.2 A Guide for the Reader

The rest of the article proceeds by introducing the assertion language and the semantic model of TaDA Live in full detail (Section 3), then presenting the proof rules through the proof of an example (Section 4), then walking through the proofs of our case studies and commenting on limitations (Section 5), and finally discussing related work (Section 6). A reader interested in the proof rules can skim through Sections 3.3 to 3.5 and 3.8 and the beginning of Section 3.9 to familiarise themselves with the basic definitions, and then move to Section 4 to understand how the rules themselves work, and the typical proof patterns.

3 The TaDA Live Semantic Model

We introduce the semantic model that justifies TaDA Live, defining:
the operational semantics of commands and their fair traces;
the assertion language, regions, guards, obligations, and protocols;
the semantics of assertions and viewshifts;
the specification format; and
the trace semantics of specifications.
In Section 2, we introduced hybrid triples that generalise Hoare and atomic triples. For our formal semantics, we separate triples into two components: the command and the specification comprising the pseudo-quantifier, the precondition, and the postcondition. We introduce the semantic judgement, with , which captures the semantic properties of a command that satisfies a specification: i.e., safety and termination of its fair traces. This required a complete reformulation of the model of TaDA. First, we give a trace semantics to specifications independently of commands. This enables us to define the semantic judgement to hold when : that is, when the concrete traces of a command are allowed by the specification traces. This approach is unusual for separation logics based on Hoare-style triples and brings the semantics nearer to approaches based on refinement. Second, the trace model is an “open-world” semantics where traces include both individual local steps made by the command and individual arbitrary environment steps. Other models typically model the environment interference indirectly, representing a sequence of environment steps as a single big jump. Our “open-world” approach is crucial to capture the assumptions on the liveness of the environment stipulated by the specifications. Third, the trace semantics of the specification is given in a style that is closely related to alternating automata [40]. The specification is seen as an automaton that traverses a concrete trace and only accepts those traces that satisfy the specification. This enables us to cleanly separate the (alternating) safety constraints from the (linear time) liveness constraints, imposed by a specification.

3.1 Notation

We write for the set of partial functions from to , and for the set of finite partial functions. Given , we write if is undefined on , and . We write for the finite function that maps each of the to and is undefined on any other input. We write for the partial function that coincides with except on where it returns , and write analogously. The disjoint union between partial functions is defined if their domains are disjoint. In contexts where the expected type is a function, we write for the empty function.

3.2 Fair Trace Semantics of Commands

We present a standard first-order imperative language, called While, with shared-memory concurrency and fine-grained non-blocking primitives, and we define the fair concrete trace semantics of its commands. Our While language is parametrised by the following sets: the Booleans, ; the values, ; the program variables, ; and the function names, . The set contains a special element, , the name of a local variable that holds a function’s return value.
Definition 3.1 (Commands).
The set of commands, , is defined by the grammar in Figure 3, where , is a list of pairwise distinct variables, and . The notation denotes .
Fig. 3.
Fig. 3. Syntax of commands and basic semantic domains.
We place some restrictions on these commands to simplify exposition. We write for the free program variables of a command. The set is the set of free variables that are potentially modified by a command, i.e., any free of appearing in instructions of the form ; in particular, . In a command , we apply the mild syntactic restriction that . Each individual thread is still able to modify variables that are created locally and to modify shared heap cells, but is not allowed to modify the free variables.8 In a function definition , we use the natural restriction . Also for simplicity, we assume each function name is given at most one definition. The function returns the function names occurring in that are not bound by a . Although function definitions may be recursive, we will disallow recursion in our logical rules to simplify the development. In the programs we consider, all potentially divergent behaviour stems from . It is straightforward to reformulate the While rule into a Let rule that supports terminating recursion.
Commands manipulate heaps (where and is the empty heap) and local variable stores, . A command can contain free function names, so we use a function implementation context, , to map function names to pairs comprising a finite list of distinct variables (the formal arguments) and a command (the body of the function).
A command induces transitions over program configurations that keep track of the current variable store and global heap, and the program states (see Figure 3) that represent the set of the active threads and their execution state. The program configuration represents a faulty configuration, e.g., the one reached after dereferencing an unallocated address. For the details of program states, we refer to Appendix D; what is relevant is that ✓ is the program state of a terminated thread, and we can define a function that computes the set of thread identifiers () of the active threads of a program configuration (details in Appendix).
To model fair traces of commands, we use a small-step operational semantics, parametrised by a function implementation context and defined by a relation . In a transition , the scheduling annotation, , keeps track of who executed the step:
that is, either a local active thread or the environment. Environment steps can have arbitrary effects on the heap and can generate faults at any time:
The full transition semantics is given in Figure 36 and Figure 37 of the Appendix.
We call program traces the infinite sequences of the form where, for all , and . We use to range over infinite suffixes of program traces and for the set of all program traces. We define the set of -program traces
Definition 3.2 (Fairness).
A -program trace is fair if:
(9)
(10)
That is: A trace is fair if, at any point in time, every thread that can take a step (and the environment) will eventually be scheduled.
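One way to phrase conditions (9) and (10), writing ς_i for the i-th program configuration and π_i for the scheduling annotation of the i-th step (notation assumed here for illustration), is:

    \[
    \forall i \in \mathbb{N}.\ \forall t \in \mathrm{threads}(\varsigma_i).\ \exists j \geq i.\ \pi_j = t
    \qquad\qquad
    \forall i \in \mathbb{N}.\ \exists j \geq i.\ \pi_j = \mathrm{env}
    \]

Since every primitive of While is enabled in every state, requiring each currently active thread to be scheduled again suffices; there is no need to distinguish weak from strong fairness here.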
The open-world program semantics defines the behaviour of a command when run concurrently with an arbitrary environment. It corresponds to the fair program traces of a command, with the information about program states and thread identifiers removed.
Definition 3.3 (Open World Semantics).
We call traces the infinite sequences where, for all , and . We use for ranging over infinite suffixes of traces and for the set of all traces. For a trace , we define , and . The function is defined by where
The open-world program semantics function, is defined by
The notation is syntactic sugar for .
The goal of TaDA Live is to prove termination of the local command.
Definition 3.4 (Local Termination).
A trace is locally terminating, written , if it contains finitely many occurrences of .
It might seem odd that our program semantics only contains infinite traces, since our goal is proving termination. Traces that locally terminate simply have an infinite tail of environment steps. To simulate a closed system, one can select the traces where the environment steps are all identity steps.
Remark 1 (On Primitive Blocking).
It is important to remember that the primitives of our programming language are non-blocking, in the sense that they can always take a step if scheduled: For all , for all with , and every , there is a such that . Hence, a trace is locally terminating only if all the threads have terminated.
For languages that have blocking primitives (e.g., built-in locks/channels), traces may be locally terminating even though not all threads have terminated, because a non-terminated thread may not have a local successor (i.e., it is not enabled) at any point in the future (e.g., if a built-in lock remains locked forever, then an acquire operation would not have local successors). With blocking primitives, fairness also comes in two variants: strong and weak. Strong fairness requires that if an operation is infinitely often enabled, then it is infinitely often executed. Strong and weak fairness coincide for languages like ours where every primitive is enabled at all times.
Notice that our lack of blocking primitives does not make our setting less general: Blocking primitives can be implemented on top of non-blocking ones, both with weak and strong fairness assumptions for termination, as illustrated by our spin and ticket lock examples. In other words, blocking primitives can be given TaDA Live specifications and be treated uniformly by the logic. The addition of built-in blocking primitives to the language does not pose new challenges.
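To illustrate the point, here is a minimal sketch (in Go, not the paper's language; all names are ours) of a blocking lock operation built from the non-blocking compare-and-swap primitive, in the spirit of the spin lock example: every primitive step is always enabled, and blocking is only an abstract, emergent property of the busy-wait.

package spinlock

import "sync/atomic"

// SpinLock: state 0 means unlocked, 1 means locked.
type SpinLock struct{ state int32 }

// Lock busy-waits: each CAS attempt is a single, always-enabled primitive
// step, so nothing blocks at the level of primitives. The call returns only
// once the environment eventually resets state to 0 (a liveness assumption).
func (l *SpinLock) Lock() {
	for !atomic.CompareAndSwapInt32(&l.state, 0, 1) {
	}
}

// Unlock releases the lock with a single non-blocking store.
func (l *SpinLock) Unlock() {
	atomic.StoreInt32(&l.state, 0)
}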

3.3 TaDA Live Assertions and Worlds

We formally introduce the TaDA Live assertion language, and its semantics in terms of its models, called worlds. The TaDA Live assertions are built from the standard classical connectives and quantifiers of separation logic,9 TaDA region and guard assertions, and new TaDA Live obligation and layer assertions. To formalise the assertions, we assume a number of basic domains:
a set of logical variables, denoted , disjoint from ;
an enumerable set of region types, ;
an enumerable set of region identifiers, ;
the set of levels, , to stratify regions to avoid the problem of re-entrancy10 (explained in Remark 2);
a set of abstract states, , including sets and lists of values;
a set of guards, , which will offer the support for the guard algebras defined later;
a well-founded partial order of layers, which will be associated to special guards called obligations; and
a set of ordinals, .
For layers, we use the abbreviations and . The set of abstract values is .
As is standard, when used in assertions, we extend numeric and Boolean expressions to use logical variables and abstract values, too. A logical variable store, , assigns values to logical variables. Given a logical and a program variable store , the evaluation of expressions and of Boolean expressions are standard.
Assertions and worlds are built using partial commutative monoids.
Definition 3.5 (PCM).
A (multi-unit) partial commutative monoid(PCM) is a tuple comprising a set , a binary partial composition operation and a set of unit elements , such that the following axioms are satisfied (where either both sides are defined and equal, or both sides are undefined):
For , we write if , and if . A PCM is cancellative when, for any , if , then .
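For reference, a standard rendering of the multi-unit PCM axioms, in notation assumed here rather than copied from the definition (with the Kleene equality just described), is:

x \bullet y \simeq y \bullet x
\qquad
(x \bullet y) \bullet z \simeq x \bullet (y \bullet z)
\qquad
\forall x.\ \exists u \in \mathcal{E}.\ u \bullet x = x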
The partial heaps form a PCM , as standard in separation logics. We also use guard algebras and obligation algebras, which are PCMs for describing auxiliary ghost state, specified by the user of the logic.
Definition 3.6 (Guard Algebras).
A guard algebra is a PCM with . TaDA Live is parametrised by a function mapping a region type to a guard algebra . The subscript is omitted from and when it is clear from the context.
As discussed, the obligations represent ghost state for describing liveness invariants. They form an obligation algebra that is a little more complicated to define due to the association of obligations with layers.
Definition 3.7 (Obligation Algebras).
TaDA Live is parametrised by a set of atoms and a layered obligation structure: that is, a pair where and lay such that . We will implicitly coerce atoms into obligations . An obligation algebra is a guard algebra where , , is the union of disjoint sets and .
TaDA Live is parametrised by a function mapping a region type to an obligation algebra . The subscript is omitted from and when it is clear from the context.
In Section 2, we have seen examples of obligation algebras. For instance, the example used two atoms and , giving rise to the obligation algebra with elements , , and . As mentioned, we make no distinction between an atom and the corresponding obligation, using the symbol of the former for both. For our examples, it is enough to assign layers to atoms, e.g., lay , and to extend the layers to obligations by taking the minimum layer of the composed atoms, for example lay . Note that, by construction, each obligation is incompatible with itself: .
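As a concrete sketch, writing k and d for the two atoms (names assumed, chosen to match the client of Section 4), the induced algebra and layer assignment can be spelled out as:

|\mathbb{O}| = \{\,\emptyset,\ k,\ d,\ k \bullet d\,\},
\qquad
k \bullet k \text{ and } d \bullet d \text{ undefined},
\qquad
\mathrm{lay}(k \bullet d) = \min(\mathrm{lay}(k), \mathrm{lay}(d))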
Definition 3.8 (TaDA Live Assertions).
The set of TaDA Live assertions, , is defined by the grammar in Figure 4. The only binder is . The function returns the free variables of an assertion and its definition is standard. We also define and . We write to indicate that and, for , write for .
Fig. 4.
Fig. 4. Syntax of assertions. Logical expressions, , and logical Boolean expressions, , are standard.
We summarise the intuitive meaning of our assertions before giving their formal semantics.
TaDA region assertion asserts the existence of a shared region with type , identity , level , and abstract state . Region assertions represent shared resources and, hence, are duplicable. We have .
TaDA atomicity tracking assertion gives permission to perform a single atomic change of the state of region . Once the change is performed, the assertion becomes recording the abstract states just before and after the change (the linearisation point). The assertion asserts that the environment has the permission to do the atomic update. We have , and .
TaDA guard assertion asserts that the guard is held locally. Guard composition is reflected by separation: .
TaDA Live local obligation assertion asserts that obligation is held locally. We have . Separating conjunction is interpreted classically precisely so we do not lose local obligation information: That is, . It is often useful to use the same guard algebra for guards and obligations. We write .
TaDA Live environment obligation assertion asserts that is held by the environment: . Unlike for local obligations, it is possible to lose this information, , because we do not need to keep track of the full obligations held by the environment, just a lower bound. The composition of environment and local obligation assertions is subtle, inspired by the subjective separation of Reference [29]. The existence of the local obligation can be recorded in a frame: . We also have the derived law , giving knowledge to each thread of the obligations delegated to the other.
TaDA Live empty obligation assertion (respectively, ) asserts that no obligation is locally held for regions with identifiers in (respectively, regions of level ).
TaDA Live layer assertion asserts that the layer of the obligations held locally for region with identifier is greater or equal than . We often use notation such as to denote .
We introduce the worlds of TaDA Live, which are instrumented heaps providing the models of the assertions of TaDA Live. A world is a local model in the sense that it reflects the state as seen from the perspective of a single thread. It is built from a local heap and a set of shared regions with associated guards and obligations. Worlds are parametrised by a set of region identifiers that, intuitively, are the regions that the current operation is supposed to update abstractly exactly once. We say the regions in are tracked for proving atomicity, using special ghost state given by the atomicity tracking algebra that supports the semantics of the atomicity tracking assertions.
Definition 3.9 (Atomicity Tracking Algebra).
The atomicity tracking algebra is a PCM defined by , where the composition is , and (undefined otherwise), and the set of unit elements is . The expression evaluation function is extended to map expressions in the atomicity tracking assertions to the corresponding elements of : , , .
Definition 3.10 (Worlds).
Let . A world, , is a tuple where
is the local heap, i.e., the cells owned locally;
describes the shared regions;
describes the local guards;
describes the local atomicity tracking components;
describes the local obligations;
describes the environment obligations, known to be held locally by the environment;
satisfying the following well-formedness constraints:
,
if , then , , ,
.
A shared region with identifier , given by , has type and abstract state . For a world , we write and and so on, for the corresponding components of . We also define , , and , if .
We define a PCM on worlds (called the world algebra). We define how worlds compose by first defining composition on each component of a world. Heap composition is disjoint union. Region maps only compose if they are equal. Given , the compositions and are:
and undefined otherwise. The composition on is defined analogously to on .
The local and environment obligation maps compose in a subtle way inspired by the subjective separation of Reference [29]. To express this interaction, we define a composition on pairs of local/environment obligation maps. Given , we define
Note that, for obligation algebras, the minimum taken by the definition is always unique if it exists. Indeed, in general one can set . For example, assuming , , , , , and are distinct atoms, we have
provided the composition is defined. Furthermore, this definition supports the implication , since .
Definition 3.11 (World Algebras).
The PCM of world algebras, , is defined by the set of worlds ,
the subjective world composition, , given by:
if , , , , and , undefined otherwise; and
Notice that the units are worlds with arbitrary shared regions, atomicity components from , and arbitrary environment obligations.
Subjective composition of worlds is lifted to composition of sets of worlds , defined as .
We want the region and environment obligation assertions to enjoy the elimination rule, e.g., . Assertions therefore denote sets of worlds that are upward-closed with respect to adding regions and adding environment obligations. Formally, we define the world ordering as the smallest reflexive and transitive relation such that:
The upward-closed sets of worlds are the semantic domain of our assertions.
Definition 3.12 (Satisfaction Relation).
Let be the union of a program and logic variable store. For a world and an assertion , the assertion satisfaction relation, , is defined in Figure 5.
Fig. 5.
Fig. 5. Definition of assertion satisfaction.
We write if, for , we have , and write for any assertion . It is easy to check that for every and .

3.4 Protocols: Interference and World Rely

A world describes the state of the current thread: the local state owned by the thread (the heap, guards, local obligations, and atomicity tracking components), the shared state (the regions), and the environment obligations describing obligations owned locally by the environment. We define the world rely relation, which describes how the world may change as a result of the “well-behaved” interference of the environment characterised by the region interference relations, the atomicity tracking components, and the environment obligations. To define the world rely, we need to introduce two other components of TaDA Live: the region protocols, expressed by the region interference function, and atomicity contexts.
The type of each region is associated with a region interference function that establishes which updates to a shared region are allowed by the owner of which guards.
Definition 3.13 (Region Interference).
TaDA Live is parametrised by the region interference function, , which takes a region type and returns a function . Every function is required to satisfy three properties:
reflexivity: , for all ;
monotonicity in the guards: ;
closure under obligation frames: for all , if and and , then .
We write for . For any , we write .
The final concept we need before introducing the world rely relation is the atomicity context, . In TaDA Live proofs, we keep in the context of the judgement information about which updates we are currently proving are abstractly atomic. The rule driving this bookkeeping is the MkAtom rule. Although we will properly explain the rule in Section 4.7, we sketch the main idea as a motivation for the atomicity context now. The relevant “skeleton” of the rule is as follows:
The judgements include the context information such as the layer , the level and the atomicity context , and the pseudo-quantifier includes a layer . We formally introduce these details in Section 3.8. Here, we focus on motivating the use of the atomicity context . This rule describes how an update to the state of a region can be declared atomic even if it was realised through a series of steps. It does this by converting a Hoare triple to an atomic triple, provided the Hoare triple bears evidence (through the atomicity tracking assertions of the premise) that, although many steps might have been taken, the abstract state was changed by the command exactly once. The atomic triple may constrain the environment interference with a non-trivial pseudo-quantifier. The proof of the premise in general needs to make use of these assumptions about the environment, but the conversion to a Hoare triple means we cannot use pseudo-quantification to represent them. These assumptions are instead made available to the proof of the Hoare triple using the atomicity context, which records the information from the pseudo-quantifier and the relation that stores the update that we are proving happens atomically.
Definition 3.14 (Atomicity Context).
An atomicity context is a finite partial function from to tuples of the form where , , and are closed under obligation frames (as in Definition 3.13).
Assuming , we write , , , which we write , and . For every , we require . The set declares the regions for which we are tracking atomicity: For , the environment will only change the abstract state within and will obey the liveness condition given by that the environment will always eventually return to a good state in ; and the local thread will only change the abstract state at most once according to the relation . We write for , and similarly for , , and .
Definition 3.15 (World Rely).
The world rely relation, , is the smallest reflexive and transitive relation satisfying the rules in Figure 6.
Fig. 6.
Fig. 6. World rely rules.
Rule wr describes the case where the environment can update the abstract state of a region according to the interference relation . Notice that, for this rule, when , the environment can only change the abstract state to something in . When is undefined or a pair of abstract states, then the environment does not have this restriction and can do any update consistent with . Also, notice how the environment obligations map is affected by the transition. Rule wr describes the case where the atomic update given by has been delegated to the environment () in which case the current thread can observe the abstract state change corresponding to the update.
So far, we have introduced assertions and worlds as their models. These structures express information mostly over ghost state, that is, state that is purely logical and has no representation in concrete executions. For example, the notion that there is some shared region is purely fictional, as in the concrete machine there is no special way to mark a portion of the heap as shared. We introduced interference protocols and the world rely as a way to specify the expected well-behaved transformations shared resources may be subjected to. Since well-behaved interference from the environment can change the state of shared regions, a single world (describing a single state for each region) cannot capture the logical state we may be in when interleaved with environment actions. Views are the sets of worlds that can explain the logical state we may be in after being suspended for an arbitrary number of environment steps. Views represent information about the logical state, which cannot be invalidated by a well-behaved environment.
Definition 3.16 (Views, Stability).
An upward-closed set of worlds, , is an A-view if it is closed under : that is, . An assertion is A-stable, written , if and only if, for all , is an A-view.
We write for the set of all A-views and for the set of all A-stable assertions.
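In symbols (with the world rely written R_A; notation assumed), an upward-closed set of worlds p is an A-view when no well-behaved environment step can take a world of p outside of p:

\forall w \in p.\ \forall w'.\ w \mathrel{\mathcal{R}_{\mathcal{A}}} w' \implies w' \in p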
Definition 3.17 (View Algebra).
The PCM of view algebras, , is formed from the set , and the composition .
Notice that the composition of views always gives rise to a view: In the case where there are no compatible pairs of worlds in the views, the result is the empty view (the denotation of ).
On checking stability. TaDA Live proofs require checking stability of assertions in some crucial steps. The notion of stability of Definition 3.16 is given in terms of the semantics of assertions, but it is possible, in principle, to provide a set of lemmas to prove stability of common cases without reasoning at the level of the model. For example, any traditional separation logic assertion (such as , , pure formulas) is always stable; guard and local obligation assertions are also automatically stable; stability is preserved by , , , and existential quantification. The crucial sources of instability are region assertions, environment obligation assertions, and . Stability of the first two can be established by inspecting the protocol of regions. A rule that would be expressive enough to prove most stability checks for our examples is:
It is similarly easy to extract from Figure 6 rules involving the atomicity context information:

3.5 Linking Levels of Abstraction: Interpretations and Reification

As we mentioned, worlds and views represent ghost information about state. Ultimately, however, we want to use this information to express properties of concrete execution traces. To do so, we need to formalise the link between worlds with their logical instrumentation and concrete states comprising variable stores and heaps. The first component that contributes to this link is a region interpretation, which specifies the implementation-dependent content of a shared region: For example, for a shared spin lock, the interpretation of the abstract shared region is the view given by , a single cell storing at .
Definition 3.18 (Region Interpretation).
TaDA Live is parametrised by a region interpretation function for each , such that, for every , , , . We also require the interpretation to be -safe, a technical condition explained in Section 4.3 that is usually immediate to check (see Lemma 4.2). A region interpretation’s companion is the syntactic region interpretation where , , , and . We write for . We require that ; in practice, we will define region interpretations by writing syntactic interpretations and using the previous equation as a definition for the corresponding region interpretation functions.
It is important to understand that interpretations are not merely an indirect way of writing assertions. In our spin lock example, the crucial difference between the two assertions and is that the first is subject to interference, while the latter expresses ownership of the cell at . The requirement that the interpretation of some region with ID must imply forbids an interpretation from owning local obligations of other regions. This is necessary for soundness: If we removed the restriction, then we could fool ourselves into thinking that we fulfilled an obligation by creating another region with the obligation in its interpretation.
Remark 2 (On “Opening” Regions and Levels).
As in TaDA, the region interpretation is used to “open” a region: That is, import the region interpretation as local state to do a single atomic update. The idea is to obtain instantaneously the ownership of the content of the region for the atomic update, and to re-establish the region interpretation for the updated abstract state, before immediately relinquishing ownership by “closing” up the region. As in TaDA, this opening and closing mechanism depends on the level of the region, which is a device to avoid inconsistencies. With a specification at level , the rules enable a region to be opened at level to obtain a resulting specification at level . This means that, although a region can be shared (), it cannot be opened twice, which would result in with a potential contradictory duplication of non-duplicable resources.
The second component that expresses the link between the instrumented worlds and concrete states is the reification function. Reification has two main purposes. First, at level , all the regions with level lower than are closed, which means that the resources in their interpretation do not exist as far as the world is concerned. The concrete heap cells accounted for inside these interpretations will, however, correspond to cells in the concrete heap. To bridge this gap, the reification opens all closed regions importing the resources in their interpretation as local resources, obtaining a “collapsed” world. Second, all the “ghost” components of collapsed worlds (such as regions, guards, or obligations) do not have any representation in memory, so reification erases them.
Definition 3.19 (Reification).
Let and let . The region collapse function, , is defined by:
The function is called the world reification of at level . For any , the function is called reification of at level .
To understand if a world can represent local resources consistent with some global heap , we need to identify if there is a world representing the resources of the environment such that . That would mean that it is possible to factor as , where are the cells fully owned by the local thread, are the ones fully owned by the environment, and the cells in are the ones that are shared and come from opening the interpretations of closed regions in the world collapse. When collapsing, we are assuming, conceptually, that we are collapsing a world that represents every resource in the system. Correspondingly, the definitions that use reification—crucially, Definitions 3.20 and 3.22—always complete the local resources with some “global” frame before applying reification.
In addition to opening shared regions, the collapse function also checks that no environment obligations are assumed. To understand this, consider a world representing local resources, and a world completing it to a world representing the global resources. The global world cannot assert the existence of obligations in the environment: all those have been already accounted for in . The definition of collapse explicitly enforces this constraint by the condition on the environment obligation map. We explain why this condition is important in Section 3.7.

3.6 Frame Preservation

Having established the link between worlds/views and concrete state, we can move to establishing a link between concrete steps in a trace and their logical justification in terms of logical state. The fundamental driver of this link is the notion of frame-preserving update, inspired by the frame-preserving update from Reference [24], which represents the essence of the Rely/Guarantee reasoning in TaDA Live. The frame-preserving update looks at a specific concrete update from some to and states under which conditions it can be described as an update from logical state to logical state . The and are sets of worlds describing local resources, whereas the and are global concrete heaps. We therefore need to complete with some frame and use reification to relate this logical state to : That is, . There will usually be more than one such . The frame-preserving update requires that any such that is a view should remain a valid frame after the update: That is, .
Definition 3.20 (Frame-preserving Update).
Given , and , we define to hold if and only if
TaDA Live implements the Rely/Guarantee proof principle by requiring every update to be frame-preserving. Views are resources that are preserved by protocol-compliant environment interference. The idea of a Rely, a set of allowed environment updates, is represented by assuming environment steps are frame-preserving updates on resources that are compatible with our current view. By frame preservation, any such update would preserve our view. Conversely, the idea of a Guarantee, an over-approximation of the effects of local steps under the assumption of Rely, is encoded by requiring every local step to be a frame-preserving update, and thus unable to disrupt any view held by the environment.
To see how this works more concretely, let us consider an example. We use the notation to mean , that is, it holds when the update from to can be used to justify some concrete update.
Example 3.21.
Assume we have a region type with abstract states , a single guard (with ) and interference protocol consisting of transitions and . We want to show that (for ) holds, but and do not. Consider any view that is a frame of . The cannot hold because is not compatible with itself. As a consequence, since is a view, it needs to be closed under world rely, which means that it is closed under the interference, which can transform into and into . For to be compatible with , it needs to contain some world associating to ; to be a view, needs to contain some other world associating to , which makes it compatible with . Therefore, holds.
Now, the view above is not required to contain any world associating to . Such an is a counterexample to holding.
Finally, consider : We can construct a frame in which all worlds associate to and own the guard . Such a set of worlds can be a view, because owning disables the transition from to . However, would be compatible with but not with , which means does not hold.
This definition of frame-preserving update drastically simplifies the semantics of TaDA specifications. For TaDA Live, however, we need to introduce the stronger notion of atomic frame-preserving update. To see the motivation behind the stronger condition, consider the region interference relation and . The update from to via is very different from a direct update from to . The intermediate step to fulfils the obligation , which may be crucial information for the progress argument. We therefore want to enforce that, if we are justifying a step as going from to , then all the allowed transitions between region states need to match a single transition in the interference protocol.
Definition 3.22 (Atomic Frame-preserving Update).
Given , and , we define to hold if and only if
where the atomic world rely relation, , is defined to be the smallest reflexive relation closed under the rules of Figure 6, with the restriction that rules wr and wr can be applied at most once per region identifier. It is formally defined in Appendix E.1.
Intuitively, this says that if the environment has some resource compatible with , then it should expect that after a step, the resource might be transformed into . When is a view, one gets back Definition 3.20, as views are precisely the resources that cannot be invalidated by any number of updates of the environment. We will use atomic frame-preserving updates to check the safety of logical traces with respect to some specification in Definition 3.28.

3.7 Viewshifts and “Classical” Resources

Before moving to specifications, we define viewshift, a semantic generalisation of implication, which is a prime example of application of frame-preserving update, used in our Cons rule. Viewshifts correspond to “purely logical” updates in that they update the ghost resources without affecting the concrete memory.
Definition 3.23 (Viewshift).
Given , the judgement , holds if . For two assertions , the assertion viewshifts to , written , if and only if, , .
Viewshifts are typically employed to “allocate” a new region by sharing some local resource (a form of weakening). For example, assume . We have that viewshifts to : The underlying reification does not change, and any frame of with non-empty reification must only have regions reifying to cells disjoint from ; moreover, such frame will only have finitely many regions allocated, so it is always possible to draw a fresh from the infinite set to satisfy the existential quantification over .
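As a sketch in LaTeX-style notation (the region type t, level λ, and the exact shape of the region assertion are assumptions, not the paper's syntax), the allocation step just described reads:

x \mapsto v \;\rightsquigarrow\; \exists r.\ \mathbf{t}^{\lambda}_{r}(x, v)
\qquad\text{assuming}\qquad
I(\mathbf{t}^{\lambda}_{r}(x, v)) \triangleq x \mapsto v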
Viewshifts also ensure that obligation information is not updated inconsistently. For example, in the “region allocation” step above, we cannot viewshift to , which would mean we are pretending there is an obligation in the environment without any evidence of that being true. To show that the viewshift does not hold, we can choose and show that is false. To see this, pick as the global frame; is in the reification of but the reification of is empty: The frame has empty local obligation map, so every world considered by the region collapse of has . The idiomatic, and correct, pattern of creation of environment obligations would viewshift to, say, for some relevant obligation compatible with , and then with implication obtain : In this case, the environment obligation has been created from the evidence of the existence of a corresponding local obligation.
It is important to note that, in a logic with the ability to share assertions, as regions in TaDA or invariants in Iris allow, having classical resources does not have the expected effect. By definition, a classical resource cannot be “forgotten,” i.e., . By using viewshift, however, it is possible to create a region with interpretation defined so it contains and immediately discard it (regions are not classical resources), obtaining . This, for example, makes TaDA Live incapable of proving absence of memory leaks even if its heap assertions are classical. We, however, manage to avoid this issue for local obligation assertions because of their specific semantics. First, can only ever be part of the interpretation for the region , as imposed by our restrictions on region interpretations. Second, the very notion of fulfilling the obligation is defined as transferring its ownership to the interpretation. Moreover, the region protocol constrains the loss of an obligation to happen only in correspondence with some region state change, so the only way to get rid of a local obligation is to induce the desired state change in the region and transfer the obligation to the interpretation. We need obligations to be classical resources for this to be the only way of losing them. We made heaps and guards behave classically for the sake of uniformity, but this is not essential.
The issue of having genuinely classical resources in a logic with regions/invariants has been tackled in Reference [1], with the main use cases being proving absence of memory leaks. The techniques presented there could provide the basis for an alternative way of handling TaDA Live-style obligations.

3.8 Specification Format

With all these definitions in place, we can now proceed to define TaDA Live specifications and their trace semantics. Most of the time, TaDA Live proofs manipulate triples of two forms:
(11a)
(11b)
called atomic triples and Hoare triples, respectively. It is, however, possible for a command to manipulate some resources non-atomically, and some other resources atomically, at the same time. In general, specifications use hybrid triples:
a minor generalisation11 of the hybrid triple discussed in Section 2.1. Intuitively, the Hoare precondition is a resource that is owned by the command and, as such, cannot be invalidated by actions of the environment. The command is allowed to manipulate this owned resource non-atomically, provided it satisfies the Hoare postcondition upon termination. The atomic precondition represents the resource that can be shared between the command and the environment. The environment can update it, but only with the effect of going from for some to for some . The command is allowed to update it exactly once from to perform its linearisation point, transforming it into a resource satisfying the atomic postcondition . The atomic postcondition only needs to be true just after the linearisation point, as the environment is allowed to update it immediately afterwards. The pseudo-quantified variable has two important uses: It represents the “surface” of allowed interference by the environment; it is bound in the postcondition to the value of the parameter of the atomic precondition just before the linearisation point.
The atomic and Hoare triples in Equations (11a) and (11b) are then special cases of the hybrid triple:12
(12a)
(12b)
respectively, where , , and (for technical reasons the atomic pre-/post-conditions in the general triples cannot contain program variables). We omit the pseudo-quantifier from an atomic triple (as above) when the pseudo-quantified variable does not occur in the triple, and thus could be quantified as . We also use the abbreviated form when the liveness assumption is trivial, i.e., .
Definition 3.24 (Specification).
Specifications, , have the form:
where
, and ;
;
and ;
for all and ;
for all and , and .
.
.
In addition to the atomicity context , the context of a specification consists also of a layer , and a level . These components record information about the proof context of the judgement. The layer indicates that we are in a context where we are forbidden from assuming as live obligations with layers (or incomparable to ). The level indicates that the regions with level are open (and cannot be re-opened).

3.9 Trace Semantics of Specifications

Finally, we can define the semantics of a specification. The idea of the semantics is to collect all the traces that are deemed acceptable for a specification , so we can later say that a command satisfies if its traces are all accepted by the semantics of . The general principle in accepting a trace is the following: The local steps are expected to correctly implement the functionality declared by only if the environment satisfies the assumptions implied by the (safety and liveness) protocols and itself. If a trace has gone wrong as a consequence of the environment making moves outside of the assumptions, then that trace is accepted, as the problem is not the responsibility of the local command itself. If a trace has gone wrong as a consequence of local steps, then the trace is rejected.
The semantics of a specification therefore traverses a trace to decide whether to accept or reject it by determining who is to blame for failures. We decouple the traversal needed for checking the safety constraints and the one checking the liveness ones. In terms of safety, a specification like (✠) (in Definition 3.24) expects that:
the precondition holds: The starting resource satisfies , for some ;
the interference precondition holds: Every step of the environment, before the local linearisation point takes place, goes from a resource satisfying , for some , to a resource satisfying , for some .
If any of the above are violated, then the blame is on the environment. In return, the local steps are expected to:
respect atomicity: transform the resources of exactly once to resources satisfying , for any and some ;
respect the pre-/post-conditions: transform (in possibly many steps) the resources in to resources satisfying at the end of the execution.
In this sense, the resources in should be understood as shared: The environment can use them to change the value of , and the local steps can use them atomically to perform the linearisation point. Note that is only guaranteed to hold immediately after the linearisation point.
Key idea of the liveness semantics. In terms of termination, the specification (✠) guarantees local termination only if the environment is live, i.e., it satisfies the layered liveness invariants represented by the pseudo-quantifiers (of the specification and in ) and the obligations. The idea is again to identify when non-termination is caused by a bad environment or by bad local steps. Consider the case of liveness invariants encoded by obligations. Imagine we annotate each position of a trace indicating which obligations are held at that point by the environment and which are held locally. Now suppose the environment always eventually fulfils every obligation (i.e., for each obligation there are infinitely many positions where is not held by the environment). This environment is certainly live, so it cannot be blamed for non-termination. The layer structure, however, allows the environment to fulfil obligations of layer by relying on eventual fulfilment of obligations at layer . Therefore, if there is an obligation that is locally held forever, the environment is still considered live if it never fulfils obligations at layer . In this scenario, the local steps are blamed for non-termination: By holding forever, the local steps are not allowed to rely on the environment being live at higher layers than lay .
This scheme leads to the following semantic interpretation of layers. The local steps can blame the environment for non-termination by waiting for the fulfilment of some environment obligation indefinitely; in turn, the environment can blame the local steps for the inability to fulfil by claiming to be waiting for the fulfilment of some local obligation with lay ; the local step can justify the indefinite postponement of the fulfilment of by shifting blame on the environment again, appealing to an environment obligation with even lower layer, and so on. This blame-shifting cannot be unbounded: Every time the blame is shifted, the layers considered are strictly lower and, by well-foundedness of layers, this cannot happen ad libitum. Ultimately, the blame is unambiguously placed on the environment or the local steps and the trace is accepted or rejected accordingly.
This intuition about obligations extends to liveness assumptions attached to pseudo-quantifications in the triple and in the atomicity context. All these assumptions need to be layered to avoid unsound circularities, which is why the pseudo-quantifier carries a layer . The specifications mention another layer, , which represents a (strict) upper bound on the layers that we may consider live when proving some command satisfies the specification. An environment is still considered live by the specification if it keeps an obligation of layer forever unfulfilled.
We now define the formal semantics of specifications, as the set of concrete traces that satisfy the specification. To check if a concrete trace satisfies a specification, the semantics first collects all the possible “logical” justifications of the trace in a set . To justify a trace means to instrument each step with sets of worlds that show how the trace respects the (safety) logical constraints of the specification. The set is then further analysed to check that every instrumented trace where the environment satisfies the liveness assumptions is locally terminating.
We begin by defining the trace safety judgement, of the form , the purpose of which is to check the safety constraints implied by . The judgement formalises the idea of a specification as a trace acceptor, that is, an automaton reading a trace step-by-step and either accepting or rejecting it. If we ignore for a moment, then the trace safety judgement represents a snapshot of the state of this imaginary automaton at a point when some prefix of the trace has already been successfully processed, and is the suffix that remains to be processed. The automaton traverses the trace producing a guess for an instrumentation, i.e., logical resources corresponding to the concrete memory contents, that explains why the trace is acceptable. The instrumentation needs to describe, for instance, when the linearisation point is thought to take place, and which portions of the state are considered shared and which owned. Let be the current concrete state, i.e., . The triple in the judgement encodes the automaton’s current state, representing the current guess for the instrumentation of . The resources currently considered as locally owned are represented by the view ; the can be either an abstract value, in which case the automaton thinks that we are still in the interference phase, or it can be a pair , which means we are past the linearisation point, which updated the abstract state from to ; the is a set of worlds parametric on and corresponds to the shared atomic resources if we are before the linearisation point, or it is the empty resource if we are past it. The judgement assumes that for some frame , i.e., the current concrete state is consistent with the current instrumentation guess. The initial state of the automaton will be chosen so this holds at the beginning of the trace, and each transition of the automaton will by construction preserve this correspondence.
As it walks down a trace, the automaton updates , , and , trying to construct a consistent instrumentation for the whole trace. Such a sequence of automaton states constitutes its run over the trace. Specification traces augment each state of a trace with the instrumentation from a run of the automaton.
Definition 3.25 (Specification Traces).
Define , the set of specification states to be and the set of specification configurations to be . The set of specification traces, , is the set of infinite sequences of the form where and . Given a set of specification traces , we write for the set .
The trace safety judgement accumulates, as it traverses a trace, all the successful instrumentations of the trace in , which we can later check against liveness properties. Let us define the judgement formally, and then explain it in detail.
Definition 3.26 (Trace Safety).
Let with components named as (✠), , , and such that
The trace safety judgement is the relation defined coinductively in Figure 7. We write if the trace contains no local steps.
Fig. 7.
Fig. 7. Safety specification semantics.
The judgement assumes the initial configuration of the trace satisfies . Rule Stutter checks that any local step other than the linearisation point updates the local Hoare view (to some ) in a frame-preserving manner; this implies that, before the linearisation point, the abstract state needs to be preserved by such step. Rule LinPt checks that the linearisation point is frame-preserving and consistent with the atomic postcondition . Both rules Stutter and LinPt check that the Hoare postcondition is satisfied if we are considering the last local step of the trace (i.e., if holds). Rule Env checks whether the current environment step, assumed to happen before the linearisation point, can be seen as a transition changing the abstract state from to in a way that does not disrupt any frame (including ). If that is the case, then the rest of the trace is checked for safety. Rule Env’ performs the same check but after the linearisation point. In both cases, if the environment step cannot be seen as frame-preserving, then the trace is accepted, since the environment did not respect the assumptions. Similarly, Rule Env accepts the trace after a fault caused by the environment.
As we briefly mentioned, Definition 3.26 is inspired by alternating automata [40]. The “alternation” aspect is necessary because of the angelic/demonic duality between local and environment steps: When processing an environment step, we need to be prepared to handle every possible interpretation of the update that took place; for local steps, we are allowed to pick any interpretation of the update. Note that these ambiguities arise purely from the fact that we are instrumenting the trace with “ghost” logical state: At each step there is no ambiguity in a trace about how the concrete state has been updated. This dual interpretation gives rise to the two kinds of transitions in an alternating automaton. An automata-based presentation of the trace safety judgement would use existentially branching transitions for local steps and universally branching transitions for environment steps. We further mimic alternating automata in the way we factor safety and liveness constraints. Alternating automata impose safety constraints by constructing sets of runs that linearise the choices for the existential transitions and the branching due to universal transitions. In our setting, these sets of runs correspond to . The liveness constraints can then be checked by, for example, requiring each run in the set to visit final states infinitely often, the usual Büchi-style acceptance condition. Here, we also examine the instrumented traces of individually and impose a liveness acceptance condition; the condition in our case is more complex, as it has to take into account layers, pseudo-quantifiers, and obligations. One key simplification introduced by this approach is that we can cleanly separate the branching (safety) aspect—the quantifier alternation due to the duality between environment and local steps—from the linear-time liveness aspect.
Building on trace safety, we can now define the semantics of a specification as the set of traces that are safe and that additionally satisfy the liveness constraints implied by the obligations and the liveness assumptions of . Conceptually, we want to require local termination if the environment satisfies the layered liveness invariants represented by pseudo-quantifiers and obligations. To harmonise the pseudo-quantification and obligation-related liveness assumptions of a specification, , we collect all of them in a set of so-called pseudo-obligations:
where , and are taken from the specification.
We extend the layer function to lay by setting lay if and lay , or , or . Furthermore, define
Now, we want to understand, for each position of a specification trace, which pseudo-obligations we are holding locally and which are held by the environment. This information is contained in the single worlds, so as a first step, we extract, from a specification trace, the set of traces of worlds that it represents.
Definition 3.27 (World Traces).
Given an atomicity context, , we call world traces, , ranged over by , the infinite sequences of the form , where, for all , , and . We define the function:
which we extend to specification traces by . A world trace is -respecting if for all :
Given specification trace , the set is the set of world traces of , defined by
We lift to apply to sets of specification traces in the obvious way.
We can now define two predicates indicating when a pseudo-obligation is considered to be held by the environment () or locally () in a position of a world trace:
Equipped with these definitions, we can state the liveness constraints associated with a specification. The idea is that one can assign the “blame” for local non-termination either to the environment or to the local behaviour. If we deem the environment responsible for non-termination, then the specification will classify the trace as acceptable, otherwise it will reject it. The idea behind this “blame” assignment is to examine the world traces justifying the safety of a trace and consider, for each position, which obligations are held by the environment and which are held locally. To understand the intuition, consider the case of liveness invariants encoded by obligations. Suppose the environment always eventually fulfils every obligation, i.e., for each obligation there are infinitely many positions where is not held by the environment. This environment is certainly live, i.e., it respects the liveness assumptions and the local code is responsible for any non-terminating behaviour. But what if the environment itself is blocking on some locally held obligation, and as a consequence is not able to fulfil some ? Whether the environment or the local code is to blame depends on the layers. The environment is to blame if, from some point in the trace, it never fulfils some but the local steps always eventually fulfil every obligation of layer strictly lower than lay . Conversely, an environment that keeps unfulfilled because of some forever-unfulfilled obligation held locally with lay cannot be blamed for local non-termination.
This intuition about obligations extends to liveness assumptions attached to pseudo-quantifications in the triple and in the atomicity context. The predicate given in Definition 3.28 formalises the above blame-assigning mechanism. A world trace that satisfies is one where the environment cannot be blamed for local non-termination. The specification semantics then is the set of safe traces where, if is satisfied, then the trace is locally terminating.
Definition 3.28 (Specification Semantics).
Fix a specification with components named as in (✠). The predicate checks whether the environment is satisfying the liveness assumptions of the specification:
Let , and . We define the trace semantics of specification as the set:
where and are the level and atomicity context from the specification .
The more precise intuition behind the specification semantics is as follows: Once it has been established that there is a way to instrument the trace to justify why the local steps satisfy the safety constraints of , we consider the set of valid instrumentations . First, we extract the set of world traces represented by the traces of . Each such world trace should either be locally terminating, in which case the trace is accepted, or, if it is non-terminating, the non-termination should be due to the environment not satisfying the liveness assumptions of . The predicate holds for a specification trace if the environment always eventually fulfils any pseudo-obligation with layer and if no obligation of layer is constantly held by the local thread. Blame for non-termination can be unambiguously assigned, thanks to well-foundedness of layers: If there is a forever-unfulfilled local obligation , then we can try to blame the environment by identifying a lower-layer obligation that is forever-unfulfilled by the environment; the environment can shift the blame back to the local steps if one can find a lower-layer local obligation that is forever-unfulfilled. Well-foundedness implies this blame-shifting game must be bounded in length and the ultimate culprit can always be identified. This effectively encodes the acyclicity of the layered termination argument.

3.10 The Semantic Judgement

We are now ready to define the semantic version of our judgements, , indexed by a function specification context, , which, for each function, provides the arguments of the function and the specification of the function body.
Definition 3.29 (Function Specification Context).
A function specification context, , is a partial function .
Definition 3.30 (Semantic Triple).
Given and , a function implementation context is a correct implementation of , written , if and only if . The semantic triple , stating that command satisfies specification under any correct implementation of the functions specified in , is defined by:
Note that when has no free function names, the judgement is equivalent to .
Since the semantics of our triples is a complex conditional termination statement, it is useful to show when it corresponds to unconditional termination. Intuitively, to state facts about the behaviour of a command in an “empty” environment, we should be using a Hoare triple (no resource needs to be shared, no interference experienced) and there should be no assumption of obligations being owned by the environment. We characterise the preconditions that ensure this as the ones that always admit as a global frame.
Definition 3.31 (Grounded View).
Fix an arbitrary level . We say is -grounded if
We say a stable assertion is -grounded if, for all , the view is -grounded.
Examples of grounded assertions are standard separation logic assertions like or .
Unconditional termination applies to programs running in isolation. Note that, technically, we cannot consider traces without environment steps as the fairness constraint requires infinitely many of those. We therefore model the isolated executions of a command as the executions where the environment steps do not modify the current state. It is easy to check that, for each finite or infinite sequence of local steps of , there is a corresponding fair trace of with only identity environment steps and vice versa.
Definition 3.32 (Closed-world Semantics).
Given a command , its closed-world semantics is the subset of the open-world semantics of the traces where every environment step is an identity step, i.e., of the form . Additionally, for an assertion , we define , which is the closed-world traces of that start from a state satisfying the precondition .
Theorem 3.33 (Adequacy).
For every -grounded assertion , if , then all traces in are locally terminating.
Proof.
Take a trace and let . From the semantic triple, we know that . We have and, since is -grounded, . By the definition of the specification trace semantics, we therefore know that for some . Note that a frame-preserving update on a grounded view keeps it grounded, and that identity environment steps can always be justified as a frame-preserving update that does not update the resources. In particular, from these facts, we can deduce that there is some such that at each point in time the global frame is . From this, we can extract a world-trace such that which implies . By the definition of the specification’s semantics this implies local termination of our concrete trace .
As a corollary, we have that if holds, then every isolated execution of from the empty heap and arbitrary store terminates.

4 TaDA Live Rules

We now introduce the rules of TaDA Live, summarised in Figure 9, using a simple but tricky running example to motivate and explain them.
Example 4.1 (Distinguishing Client).
Consider the following client of a lock module:
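A plausible rendering of this client in Go (not the paper's language; the lock interface, variable names, and the exact placement of the flag read are assumptions, chosen to match the behaviour described next):

package client

import (
	"sync"
	"sync/atomic"
)

// Lock abstracts the lock module; it could be instantiated with a spin lock
// or a CLH lock implementation.
type Lock interface {
	Lock()
	Unlock()
}

func distinguishingClient(l Lock) {
	var done int32
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { // left thread: set the flag while holding the lock
		defer wg.Done()
		l.Lock()
		atomic.StoreInt32(&done, 1)
		l.Unlock()
	}()
	go func() { // right thread: spin until the flag is set,
		defer wg.Done() // contending for the lock on each iteration
		for atomic.LoadInt32(&done) == 0 {
			l.Lock()
			l.Unlock()
		}
	}()
	wg.Wait()
}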
The code is interesting in that it can distinguish whether the lock implementation is a spin lock or a CLH lock. Under weak fairness, when is a spin lock, this client program does not always terminate. It is possible for the lock invocation of the left thread to be scheduled infinitely often but always in a state in which the lock is locked. As a result, will never be set to , making the while loop spin forever. The spin lock has been starved by the other thread. In contrast, when is a CLH lock, this client program is guaranteed to terminate: A fair scheduler will eventually allow the left thread to enqueue itself in the internal queue of the lock; from then on, the thread on the right can acquire the lock at most once; after unlocking, the next call of the right thread would enqueue it after the left thread, which is now the only unblocked thread. The CLH lock is starvation free.
It is worth noting that none of the proof principles of References [5, 8, 13, 14, 21, 32] are powerful enough to handle this example, due to the blocking behaviour it displays. Even if the locks are replaced with primitive locks, the example cannot be handled by any of the proof systems of References [3, 22, 26, 28], due to the mix of busy-waiting blocking and locks. Since LiLi does not have a rule for parallel composition, this client cannot be proven within the LiLi logic.
We show that the distinguishing client terminates with the CLH lock by proving the Hoare triple , where and are the left and right threads of the example, respectively. Since our triples are total, this triple immediately guarantees termination of the program. Our overall argument is as follows: The CLH specification guarantees termination of a call to if the lock is always eventually unlocked by the environment. This is intuitively true for both threads: They always unlock the lock after having acquired it. The call to will therefore terminate in both threads. The only other potentially non-terminating operation is the while loop in the right thread. The loop is implementing a busy-wait pattern on and needs the help of the left thread to terminate. We will be able to prove that, since is going to be eventually set to true (and never reset to false), the loop will terminate.

4.1 The Basics: Regions

Let us formalise the argument in TaDA Live, introducing the proof rules as they are needed. Recall the specifications of CLH lock:
where we make explicit the previously omitted layers (we justify the choice of layers in the proof of the CLH lock). These specifications will be available in the proof as “axioms” stored in a function specification context parametrising every triple of the client proof; we omit the parametrisation to aid readability. The predicate is given a definition in the proof of the lock module and, in the spirit of CAP, the client proof should not rely on the definition of the predicate but on its abstract properties. Here, we rely on the fact that is false, expressing that a lock is an exclusively owned resource—see Section 4.8 for the other properties of exposed to the client.
The two threads of the distinguishing client both access the lock and the heap cell . Consider the precondition . Both resources in the precondition are non-duplicable, so if we give them to , then the other thread would not be able to also have them. As we anticipated, shared state in TaDA is handled using regions. We therefore introduce a new region type (for distinguishing client) that encapsulates the resources in the precondition: is the shared resource encapsulating a lock at with state and a cell at storing the Boolean . Although in this case the abstract state is not hiding any detail, since both and are visible, in general the abstraction of the contents is an essential mechanism for reasoning about abstract atomicity. In the proof of the CLH lock, for example, to be able to see the operations as abstractly atomic, it is essential to hide the queue from the abstract state.
Assuming the lock region encapsulated by the predicate has level , the lock specifications will have level in the context, indicating they consider the lock region closed. To allow to encapsulate the lock region and use the lock specifications to derive updates to its own state, we let it have level . The top-level triples for the distinguishing client have level as a consequence. We will elide all details about levels, as they can be mechanically inferred from the applications of the LiftA and UpdReg rules.
We now design the protocol of the region, with the intent of encoding the following safety invariants:
(L1)
the addresses of the lock and the flag never change;
(L2)
only the thread that acquired the lock can unlock it;
(L3)
only the left thread will ever modify , and at most once from to ;
and the following liveness invariants:
(L4)
the lock will always eventually be unlocked;
(L5)
the value at will always eventually be .
Note that, together, invariants (L3) and (L5) imply that eventually the value at will always be true. To encode invariants (L2) and (L3), we introduce a guard algebra for generated from two guards (for key) and (for done), with and to represent exclusivity of the permissions they give on the lock and flag, respectively. We can reuse the same guard algebra for the obligation algebra associated with : The atom obligations and will represent liveness invariants (L4) and (L5), respectively. The protocol formalises all the invariants:
(13)
(14)
(15)
The fact that no transition can change and encodes (L1); this is such a common pattern that we adopt the convention to declare which are the fixed components of the abstract state and omit them from the protocol transitions completely. The choice for the guards of Equations (14) and (15) reflects (L2) and (L3), respectively; we will give to the left-hand thread, and will be obtained by locking the lock. The obligation is obtained by locking and fulfilled by unlocking; the obligation is fulfilled by setting to ; these facts encode (L4) and (L5).
We assign layers to the obligations to reflect the intuitive dependency: The lock needs to be assumed live in the process of fulfilling the obligation on the flag. We therefore set .
To complete the definition of shared region , we link its abstract state to the actual heap content that it encapsulates using the region interpretation:
This assertion describes a portion of the heap being shared (the lock at and the cell at ) and the linking of the ghost state (the guards and obligations) with the abstract state. The assertion is an abbreviation for , which indicates local ownership of the guard and obligation . The interpretation of a region establishes the invariant that, when , the guard and obligation will be “owned” by the region (and by no thread as a consequence). Similar links are established between the value of and .
Now that we have set the scene, we can proceed with the proof, outlined in Figure 8. The first step is to transform the precondition into an assertion about the region. We can do that by using the consequence rule:
which allows the use of viewshift to logically manipulate the assertions. Since triples are only well-defined if the Hoare pre-/post-conditions are stable, the rule asks us to check stability of the assertions of the conclusion, as this does not follow from stability of the assertions of the triple in the premise. An analogous rule, called Cons, holds for hybrid triples—with no stability checks on the atomic pre-/post-conditions—so viewshifting is available at any point in a derivation.
Fig. 8.
Fig. 8. Proof sketch of the distinguishing client. Here, .
In our example, we want to create the guards and obligations needed to match the interpretation of and create the region, replacing its interpretation. Here, we might be tempted to match the interpretation with and , to viewshift to . While this viewshift holds, there is an issue: In TaDA Live, all the assertions in Hoare triples (or in Hoare position in hybrid triples) need to be stable for the triple to have well-defined semantics. The proof system enforces the stability of these assertions by inserting stability checks in crucial rules. This means that if we viewshift now to a non-stable assertion, then we would fail at some point in the derivation to satisfy a stability check. While is stable, as we own these resources, the assertion is not stable: A region assertion is subject to the transitions of the protocol. Since we have the guard (from ), the environment cannot own it, hence cannot fire the transitions guarded by ; this makes stable. The transitions changing the state of the lock, however, can affect the region. This leads us to the stabilised assertion,14 where we also add when , a stable fact.

4.2 The Parallel Rule

We now want to proceed with an application of the parallel rule:
The abbreviation means , that is, all the obligations owned by have layer . means and . The intuition behind these constraints is as follows: The layer in the context of the triples indicates a strict upper bound on the layers that can be assumed live in the proofs of the triples. If thread 2 only needs layers lower than , then any obligations left unfulfilled by thread 1 at the end of its execution cannot conflict with the assumptions made by the proof of thread 2. It is not, however, sound to leave an obligation of layer unfulfilled by thread 1: If thread 1 terminates first, leaving unfulfilled in its postcondition, then thread 2 may be assuming live in a situation where it will never be fulfilled.
In our example, we need to apply consequence again to massage the viewshifted precondition into an assertion of the form . The region assertion is duplicable, as is , but we want to give the non-duplicable resource to the thread on the left, as it is the one that will fulfil the obligation. This has a side-effect though: Since the guard is given to the left thread, the value of from the right thread’s perspective is not stably . Moreover, we want to know that the left thread has the obligation . So, we use to give to the thread on the right, and we stabilise the assertion to . All in all, we obtain:
which allows us to apply consequence and the standard Elim rule to obtain a precondition in the form required by the Par rule. For both threads, we are aiming at postcondition which has no obligation, so it satisfies the layer conditions trivially.
To see why the layer conditions are important for soundness, imagine we forgot to unlock in the left thread, obtaining a non-terminating program. We would obtain in the postcondition of , but the check would fail as . Choosing for the context layer of the triple for would not work: In its proof, we need to assume live, and lay .

4.3 Handling a Call to

Let us focus on the proof of the left-hand thread first. The difficult step is the execution of the first instruction, since this is the only potentially non-terminating instruction of the thread. If we let , then Step 1 can be derived as follows:
Let us unpack the derivation. As a first step, we would like to frame the irrelevant resources, in this case . In TaDA Live, this step is more subtle and interesting than usual, because of the layer-related side-conditions of rule FrameH (a special case of rule Frame):
With this rule, one can only frame obligations if they are of layer greater than or equal to the context layer. Here, we can use consequence (omitted) to obtain a stable frame of the pre- and postconditions. We have but ; since the layer in the context of the goal is , before we can apply FrameH, we need to artificially lower the layer using LayWH:
Notice that lowering the layer in the context is always sound (even for hybrid triples): If we can prove the triple assuming live only layers , then the proof is valid in contexts where layers up to can be assumed live.
The layer constraint of Frame is crucial for soundness. Suppose we remove the constraint. Then, we would be able to frame a locally held obligation with layer , i.e., one of the layers that might be assumed live by the proof. This would allow the proof to assume live environment obligations that have layer , the eventual fulfilment of which might depend on the eventual fulfilment of . But, since is in the frame, it is constantly held and not fulfilled for the whole duration of the execution of the command we are proving. The frame condition forces us to record the “minimum” layer of the frame in the context, ruling out unsound circular reasoning.
After framing, we use the rule of consequence to massage the assertions into the form required for the later application of LiveC.
The rest of the derivation does not involve liveness reasoning and follows a standard TaDA proof pattern. We use AElim and AtomW to turn the Hoare triple into an atomic triple:
Rule AElim says that if one can prove the command is resilient to interference on and does not affect the resource on until its atomic update, then we can relax the specification to state that the command allows changes to and might also affect during the interference phase. Rule AtomW says that if you prove a command is atomic, then you can relax the specification not to insist on atomicity; this can be done provided the atomic pre- and postcondition are stable, as required for the Hoare triple to be well-defined.
The combination of AElim and AtomW simply states that if we can prove a command performs an update atomically, and the pre- and postconditions are stable, then we can prove that the command also performs the update non-atomically.
Now let us consider the derivation for Step 2, which lifts the specification of CLH lock to the context of the client:
Rule SubPqA simply gives a way to manipulate the pseudo-quantified variable and its domain:
These are manipulations that would normally be carried out using consequence, but need to be done specially, since the pseudo-quantifier is a component of triples and not of assertions. In our example, we simply use it to remove the unused variable , choosing to be the first projection.
The interesting step of the derivation of Step 2 is the application of LiftA:
Let us unpack it. The purpose of the rule is to take an atomic specification that applies to some resource and lift it to the effect the atomic update has on some region that contains that resource in its interpretation. In our example, it says: You have proven that the command locks the lock; the lock is part of the interpretation of and the update to the lock amounts to going from the interpretation with to the interpretation with . The rule needs to make sure that the region update is among the ones permitted by the associated protocol. It does so by checking:
(1)
that there is a transition in the protocol matching the update;
(2)
that such transition is guarded by a guard that is owned;
(3)
that the local obligations are updated as the protocol mandates.
To check the first condition, the rule uses a relation between abstract states of the region; by the fourth premise, can only include updates that the owner of is allowed to perform. The second condition is enforced by requiring the precondition to own . The third condition is ensured by going from owning to owning , which, according to the fourth premise, is the expected update of obligations. In our example, we have and and the update matches transition (13). The third premise uses the abbreviation “,” which stands for . This implies that and cannot own obligations of , and so and capture the whole of the updated obligations. Note that because of the well-formedness restrictions on triples, in the conclusion of the rule the obligations are transferred to the Hoare pre-/post-conditions: There they belong to a closed region. The first premise says: If the region we are updating is tracked by the atomicity context, then this needs to be a trivial update, or else it would count as a linearisation point (which is instead handled using rule UpdReg). In our example, as we are not proving atomicity of the client, we are allowed any protocol-compliant update.
Finally, the second premise requires to preserve its meaning at level . The formal definition of -safety is given in Appendix B.2.1; all the -safety conditions in our proofs can be immediately discharged by applications of the following lemma:
Lemma 4.2.
The properties below hold, for arbitrary :
(1)
, and are -safe.
(2)
and are both -safe.
(3)
If , then is -safe.
(4)
If are both -safe, then so are , , and .
(5)
If is -safe for all , then is -safe.

4.4 The LiveC Rule

In a TaDA safety proof, the derivations of Step 1 and Step 2 could be plugged together: The safety specification of the operation does not contain the component, and the premise of Step 1 matches exactly the conclusion of Step 2 (modulo framing , which would anyway not be used in a safety proof). In TaDA Live, the discrepancy between the two steps expresses the need for a termination argument for this call. What needs to be proven is the fact that, in the current context of the protocol, during the interference phase of this call to , the environment will always eventually unlock the lock. The LiveC rule allows us to remove the liveness condition of the specification in a context where the corresponding liveness invariant can be proven to hold:
The first premise is called the environment liveness condition, and it roughly corresponds to checking (with acting as a certificate of the property holding, explained later). Here, we pick:
(16)
(17)
and we can conclude because when does not hold, i.e., , then we know, from , that holds; if we can show is live, then the protocol says that if the environment holds , then it will eventually fulfil it; under the protocol the transition fulfilling it is setting , which brings us to . The environment can always set again after that, but the same argument then applies.
To show is live, we have to look at the layers. Here, we have and . Recall that holds if . We can therefore set : holds, since . Since we do not own any obligation locally ( has been framed, recording this fact in the context layer ), we can consider live when proving the environment liveness condition.
The environment liveness condition is the central component of both LiveC and While; we explain it in depth now, and then resume our proof of the distinguishing client.

4.5 The Environment Liveness Condition

The essence of the termination argument is captured in LiveC and While by the conditions of the form . They establish “always eventually holds” facts. The condition is parametrised by , an assertion that holds at any point in the traces we are considering, an assertion , characterising the so-called target states, and an assertion parametric on some ordinal , which represents the environment progress measure. Intuitively, the condition states that, from any state satisfying , for some , we can find an environment transition that must eventually happen and that would take us either to , or to some state satisfying with . Additionally, any transition from to that may happen does not strictly increase the progress measure, unless it ends in a target state. The transitions that must happen are characterised by being those that either:
(1)
fulfil some obligation known to be in the environment and with layer lower than the ones we may hold locally, or
(2)
fulfil some environment liveness assumption stored in with layer lower than the ones we may hold locally.
This entails that, under an environment that always eventually fulfils the obligations we are assuming live, holds, as desired.
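Schematically, in LTL-style notation that is ours rather than the paper's, the guarantee established by the environment liveness condition can be read as:

\[
\square\, I \;\wedge\; \square\Diamond\,(\text{assumed-live obligations/assumptions are fulfilled}) \;\Longrightarrow\; \square\Diamond\, T,
\]

where \(I\) is the invariant assertion, \(T\) the target-state assertion, and the ordinal-indexed progress measure is the well-founded certificate used to justify the implication.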
In the While rule, an environment liveness condition is combined with the condition
which requires us to prove that any protocol-compliant step from a state satisfying for some will take us to a state satisfying for some . In other words, in traces satisfying the progress measure never increases. This, in conjunction with , entails , as needed for soundness of rule While.
Take the environment liveness condition required by the application of LiveC in the proof of the distinguishing client. Given and , we have to prove:
That is, during the interference phase, we know that at any point in time the lock will be in some state ; we want to prove that the environment will always eventually set to 0. Here, this is particularly easy to show: states that when the obligation is held by the environment; since lay (and does not hold obligations), we can assume the obligation will be eventually fulfilled; the only transition that can fulfil it is the one that sets , so in exactly one such step we reach . This justifies the trivial definition of : We do not need to keep track of progress towards , as we reach it in exactly one of the steps that must eventually happen.
The environment liveness condition can be proven using the rules in Figure 10. The only rule that applies directly is EnvLive, which checks that in a state satisfying one can always measure progress (second premise), and then asks us to discharge an auxiliary judgement of the form , which is best explained with the help of the illustration in Figure 11. The condition works under the hypothesis that the assertion holds at any point of the traces under consideration, so in the picture we are considering infinite sequences of steps within the outer rectangle. The target states describe some subset of , which we want to show is visited infinitely often15 by any infinite trace that complies with the liveness rely as specified by the region protocols and pseudo-quantifiers. Rule ECase allows the splitting of the invariant into a disjunction of, say, , and as in the picture. We need to prove there is going to be eventual progress towards reaching from each of these cases. If we start from , then we have already reached the target, and this case can be discharged by rule LiveT. The other cases are covered by rule LiveO, which justifies progress by appealing to an environment-owned atomic obligation that is live (premises two and three); and rule LiveA, which justifies progress by appealing to a liveness assumption stored in the atomicity context. The EQuant rule is a generalisation of rule ECase.
To see how progress is justified, consider the trace of Figure 11 starting from . Assume the progress measure at is (i.e., holds in ). Each case can be discharged with either rule LiveO or rule LiveA. Imagine is discharged using LiveO: The rule requires us to find an obligation that, in every state of , is necessarily owned by the environment () for some region . Then the condition checks that the progress measure will improve when the environment fulfils ; formally:
Definition 4.3
Given assertions , and , the condition holds if and only if, for arbitrary , letting
the following holds:
Intuitively, the condition considers an arbitrary transition from the current case to , obeying the atomic rely (thus allowed by the safety constraints of the protocols). It then compares the progress measure and , taken before and after the transition, checking that:
(1)
the measure strictly improved (); or
(2)
the measure stalled () but we remained within case , and thus the pending obligation/liveness assumption is still pending; or
(3)
we reached (allowing the measure to vary arbitrarily).
Examine the trace from in Figure 11. While the trace stays within , the environment obligation stays unfulfilled (steps are labelled with the obligation they fulfil, if any), and requires the measure to decrease, or in the worst case stay constant. Since is live, the environment will eventually fulfil it, thus taking us outside of . If such a transition takes us to another case, in the illustration, then requires the measure to strictly decrease to some . This process cannot repeat ad libitum: The progress measure is an ordinal and hence well-founded. The effect is that, eventually, the only option is to reach the target. Note that transitions that end in the target are allowed by to increase the progress measure: In the picture, the transition reaching increases the measure from to . This allows the “reset” of the measure so the trace can go outside of and the whole process of reaching again can be repeated an unbounded number of times.
The idea behind Rule LiveA is analogous to the above description, but progress is justified by appealing to an environment liveness assumption stored in . The layer of the assumption needs to be lower than any layer we may be holding. Since the environment liveness assumptions only hold in the interference phase of an update, the rule needs evidence that the linearisation point on has not occurred yet, which is provided by .
In the proof of the distinguishing client, the environment liveness condition for the application of rule LiveC between Step 1 and Step 2 is proved by:
where and are defined in Equations (16) and (17), and . Since trivially implies , we can apply EnvLive, setting . Then, we apply ECase to split on the value of : where and . If , then we can apply LiveT, as we are already in ; if , entails , so we can apply LiveO with and . The atomic obligation is live: , and holds no obligations. To check that the condition is satisfied, we need to consider the transitions allowed by the protocol :
The transition to keeps the measure constant but keeps us in ;
the transition to brings us directly into .
Although in this case the progress measure is trivial and the proof of the environment liveness condition simple, the generality provided by non-trivial progress measures is needed for more interesting examples. For instance, our proofs of spin lock (Section 5.1) and CLH lock (Section 5.2) do make use of this added generality.
We chose to state the condition as a semantic check; while this achieves full generality, in typical proofs the environment liveness condition only involves a single region, and can be checked by examining the region’s protocol.

4.6 The While Rule

By using the rules we described so far, one can justify most of the proof outline of the distinguishing client in Figure 8. For instance, the proof of for the left thread can be reused as is to prove the call in the body of the loop of the right-hand thread.
The main missing step is the application of the While rule:
Let us review the main differences from the simplified WhileB rule presented in Section 2. First, the two triples in the premises of WhileB (corresponding to the blocked and unblocked case) are compressed here into a single triple: This is convenient in proofs, as the proofs of the two triples only differ in the treatment of the variant. When , , obtaining the first triple of WhileB; when , we obtain the other triple. Second, the target states are parametrised over the variant : Each value of the variant may represent a different “phase” of the local progress of the while loop; in each of these phases the loop may be blocked waiting for a different set of target states to be reached. Third, as anticipated, the condition is expressed as the conjunction of the first and third premises.
There are two additional side-conditions. Since and assert facts about arbitrary intermediate states of an iteration, they cannot refer to any local variable that may be modified by the body of the loop, hence the fourth premise.
The most important addition is the layer condition of the second premise. The idea is that we should be forbidden from constantly owning obligations of layers that we might assume live in the proof of the environment liveness condition. By requiring , we make sure that the loop invariant only owns obligations of layer higher than , and the in the context of the environment liveness condition indicates that only layers lower than that may be assumed live. The layer in the context of the triple in the conclusion is an upper bound for any layer that may be assumed live in the proof of the loop.
Consider the application of While in the proof of the distinguishing client. The while loop of the right-hand thread is busy-waiting until is set to true. The target states are therefore . In this example, the target states do not depend on the variant , which itself is quite trivial: when the loop is finally unblocked, it needs at most one iteration to terminate. The local variant can simply be , i.e., when is there needs to be one unblocked iteration to terminate, and when is finally the loop will take no more iterations. The loop invariant is
on which we can frame the stable assertion
Since the loop invariant owns no obligations, we can set , and we need to prove the environment liveness condition ; here, as for the application of LiveC, with the fulfilment of the environment obligation , we immediately reach the target, so can be trivial (). The derivation is as follows:
where . We split into two cases using ECase. In the first case holds, so LiveT applies. In the second, and, since lay , is a live obligation. The condition is satisfied: The allowed transitions either keep constant or set it to , taking us directly to .
The stability of holds trivially, as is constantly 0. The condition is trivial in this case because it checks that transitions to and from do not reset the progress measure; here, once is set to true, it will never be set to again, so once is reached there is no way to leave it.
Non-termination of the distinguishing client with a spin lock. If the lock at is implemented as a spin lock, then the distinguishing client may not terminate. Indeed, there is no TaDA Live proof for the distinguishing client if one assumes the spin lock specifications: In the precondition, we need to specify an impedance budget for the lock ; whatever ordinal we may choose for , there is no way to consume some budget at every potential iteration of the loop of and never exhaust the budget, as the number of iterations is effectively unbounded.

4.7 Other Rules

The rules in Figure 9 are the most important TaDA Live-specific rules. We have omitted standard rules, such as the axioms for primitive atomic commands, the rules handling sequencing, function calls (recall that for simplicity, we restrict to non-recursive function definitions), and structural manipulations. They are reproduced in full in the Appendix.
Fig. 9.
Fig. 9. Key TaDA Live rules. Abbreviations: means ; can be discharged using Lemma 4.2; means ; means .
Let us conclude with an explanation of the two TaDA Live-specific rules of Figure 9 that are not illustrated by the proof of the distinguishing client: rules UpdReg and MkAtom. While the goal of LiftA is to lift an atomic update on a resource inside the interpretation of a region to the corresponding update on the region itself, UpdReg obtains the same effect but on a region that is supposed to be updated once atomically (i.e., ). While LiftA applies to regions with , the update allowed in that case needs to be an identity step from the point of view of the abstract state of the region. A genuine update to the region needs to be recorded as the unique linearisation point on that region; this is precisely the purpose of UpdReg. Most of the premises of UpdReg have the same function as in LiftA: checking that the update of the abstract state and the obligations comply with the protocol. The difference is that here the update expected to take place at the linearisation point is recorded in (i.e., the components of recording the expected update to abstract state and obligations of ). To be able to claim the linearisation point took place exactly once, the precondition of the triple requires the resource, which represents the unique permission to perform the linearisation point. The postcondition allows for two cases: Either the update was successful, in which case the atomicity tracking component is recording the update ; or the update was not performed () and the resource is still available for future updates.
Rule MkAtom is another crucial rule for proving abstract atomicity: It states that a Hoare triple can be promoted to an atomic triple if it contains a “certificate,” in the form of being updated to , that the region in question was updated atomically exactly once, with the expected update. The expected update and the additional interference assumptions given by the pseudo-quantifiers need to be stored in the atomicity context so the triple in the premise can make use of the interference precondition assumptions and certify the right update took place. Any expected update must be protocol-compliant (). Notice how the atomicity context records the liveness assumption expressed by the pseudo-quantifier, so it is available for termination proofs in the proof of the triple in the premise; in particular, it can be used by applications of LiveA. The proofs of the spin lock and CLH lock in Section 5 illustrate applications of MkAtom and UpdReg.

4.8 Abstract Predicates

In the spirit of CAP, abstract resources provided by a library should be presented to clients by only exposing their abstract properties, and not their definition.
In our example, the predicate is defined internally to the proof of the lock module, say, using internal regions (of some maximum level ) expressing the internal protocols of the module.
The proof of our distinguishing client only relies on the following abstract properties:
(L1)
is false, expressing that a lock is an exclusively owned resource;
(L2)
is stable for all ;
(L3)
is -safe for all ;
(L4)
is obligation-free, i.e.,
(which also entails for all and ).
For instance, the interpretation is well-formed, thanks to properties (L2) and (L4). The proof also involves side conditions on layers, stability, and ’-safety, which can be discharged by appealing to (L2), (L3), and (L4).
More generally, a module would typically expose to clients viewshifts representing separation properties of the abstract predicates (e.g., duplicability), stability properties, -safety, obligation freedom, and relevant facts.

4.9 What Is Leaked by TaDA Live Specifications?

TaDA Live’s triples are rather expressive: They support strong specifications of updates via logical atomicity, and conditional termination properties via liveness assumptions. It is natural to ask whether our triples force the leak of any unnecessary detail about the implementation. In particular, there are three components of the proof system that have a “global” flavour: the level and layer in the context of the judgement and the layer decorating the liveness assumption of the pseudo-quantification. Although necessary for soundness, the management of levels is tedious but relatively straightforward. Iris introduced namespaces for invariants to ease the management of so-called masks, which serve essentially the same function as levels in TaDA. A similar construction could be used to ease management of layers. Here, we keep it simple and require proofs of clients to use layers high enough to be able to reuse the libraries’ specifications.
The layers decorating a triple, however, are a more delicate matter. The main complication arises from the choice of parametrising TaDA Live with a global layer structure. If a specification insists on the use of a specific subset of layers, then that could seem like an unnecessary leak of implementation details. For example, there could be two valid implementations of a module that use wildly different internal layer structures to justify their internal blocking behaviour. Should the abstract specification of the module insist on a specific layer structure for the internal layers, that would rule out valid implementations for no good reason. In TaDA Live, modularity of the layers can be achieved by exploiting a crucial property of derivations: Their validity is invariant under a strict-ordering-preserving remapping of layers. This allows a style of specification that generalises the one we have seen in our examples until now, where the layer structure relevant for the proof of an implementation is parametrised over a client-provided remapping of layers. To avoid cluttering the proofs, we do not explicitly parametrise the proofs in Section 5. In Section 5.5, where the construction becomes relevant and is used in a non-trivial way, we explain how to convert a proof so it is parametric on the layer remapping.
In terms of behaviour, TaDA Live’s specifications are able to hide internal blocking, as shown by the blocking counter example of Section 2.1 (formalised in Appendix 5.3). There is, in fact, one progress property leaked by the specification layers that is currently not exploited by TaDA Live. In the special case when the layer in the context is the globally smallest layer , the proof of the triple cannot rely on any liveness assumption at all. This can be used to differentiate, for example, a wait-free counter implemented as a hardware-atomic fetch-and-add (which admits a proof with in the context) and a blocking counter (which only admits proofs with layer ). This is a useful distinction: Wait-freedom is an important progress property, asserting termination without assumptions on liveness of other threads and without fairness assumptions on the scheduler [19]. Currently, however, TaDA Live’s semantics does not support deriving wait-freedom as a consequence of as the context layer: The current triple semantics only implies termination of the fair traces. Extending TaDA Live’s semantics to encompass wait-freedom is left as future work.

4.10 Soundness

We have proven soundness of TaDA Live rules against the semantic judgement of Definition 3.30.
Theorem 4.4 (Soundness).
If , then .
The detailed proofs of the liveness-related rules are produced in Appendix E. The soundness of most rules is an adaptation of the soundness arguments of the corresponding TaDA rules. The rules that drive the liveness argument are rule Par, rule LiveC, and rule While.
The soundness of the parallel rule follows from the layered liveness invariants semantics explained in Section 3.9. The argument is roughly as follows: There are two possible ways the parallel composition may fail to terminate: Either one thread terminates and the other does not, or they both do not terminate. In the first case, when the terminating thread, say, , terminated, we are in a state where thread 1 does not own any obligation of layers that may be assumed live by (this is from the conditions on the layers of the postcondition of ). By the triple about in the premises, is only allowed not to terminate if the environment is constantly owning an obligation of layer lower than . Since cannot do that, we obtain that said must be owned by the overall environment of the parallel composition. In such case the triple of the conclusion allows the program to diverge.
In the second case, both threads are not terminating. Each of the threads, say, 1, is allowed to keep an obligation constantly unfulfilled, as long as it can blame thread 2 by showing an obligation of strictly lower layer that is kept constantly unfulfilled by 2. Since layers are well-founded, there must be some thread that is not justified in leaving some of its obligations unfulfilled. This cannot be the case, since we were able to prove the two triples in the premises.
The soundness of rule While considers the worst-case scenario for progress: an infinite sequence of iterations, all of which do not start from a target state in , and therefore do not decrease the variant . In that case, we know that the assertion holds at every point of the trace: it has been framed so the local steps and the environment steps must preserve it, and it is stable (as checked by EnvLive). We are thus within the hypotheses of the environment liveness condition, which proves, together with the premise asking the progress measure to never increase, that eventually the target states will be reached. They may be reached in the middle of an iteration rather than at the beginning, as would be required to invoke the triple that decreases the variant ; in the worst case, however, this can happen only boundedly many times (the progress measure is well-founded and must always decrease). Therefore, we eventually reach and do not leave it until the next iteration starts from a state satisfying , which matches the premise that ensures the variant decreases. This can only happen boundedly many times, as the variant is well-founded.
Rule LiveC’s soundness argument is a variation of the one for While.
As a simple corollary of soundness and Theorem 3.33, if we can prove , then , run in isolation, terminates from the empty heap. For our distinguishing client (Example 4.1), for instance, we can wrap up the proof by initialising the state and proving
which implies termination of the program.

5 Evaluation

In the previous section, we introduced the TaDA Live proof system, explaining the rules on the distinguishing client, which showcases in a simple setting the proof mechanics of the logic.
In this section, we consider more challenging case studies to demonstrate how TaDA Live achieves proof scalability and reuse in practice.
We start by proving correctness of the spin and CLH lock implementations against the specifications we discussed in Section 2. The proof of spin lock highlights the use of the liveness assumption of a pseudo-quantifier in a proof and the handling of impedance through the impedance budget. The proof of CLH has a number of interesting features. The CLH code exhibits both internal blocking, i.e., blocking that is resolved internally and does not leak to the client, and external blocking, i.e., blocking that has to be resolved by the client and thus leaks in the liveness assumption of the pseudo-quantifier. As a consequence, the termination argument requires using a combination of obligations (for internal blocking) and the liveness assumption of the pseudo-quantifier (for external blocking). Moreover, the obligations (and their layers) are not simple tokens like the ones for the simple examples of Sections 2 and 4, but form an infinite set. This reflects the unboundedness of the internal queue of threads.
The two lock examples demonstrate TaDA Live’s ability to abstract from implementation details and only leak to the client the parts of the termination argument that depend on the choices of the clients. In the same vein, we will follow this with a counter module using a spin lock to protect access to a cell holding the value of the counter. Interestingly, since the blocking due to the use of a lock is internal, the specification of the counter will not be blocking. The impedance suffered by the internal spin lock does, however, leak to the interface for the counter: The counter will have its own impedance budget that will be internally spent to call operations on the lock.
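As a rough sketch of the shape of such a module (the interface and field names are ours, and the choice of returning the previous value from the increment is an assumption), the counter wraps a heap cell and an internal lock, using the abstract lock interface sketched earlier:

```c
/* A lock-based counter: a heap cell protected by an internal spin lock,
   so all blocking is internal and hidden from the client.  Assumes the
   lock_t interface (makelock/lock/unlock) declared in the earlier sketch. */
typedef struct {
    lock_t *lk;   /* internal spin lock                 */
    int     val;  /* the cell holding the counter value */
} counter_t;

int incr(counter_t *c) {
    lock(c->lk);              /* internal blocking, not visible in the spec */
    int old = c->val;
    c->val = old + 1;
    unlock(c->lk);
    return old;               /* assumed interface: return the old value */
}

int read(counter_t *c) {
    lock(c->lk);
    int v = c->val;
    unlock(c->lk);
    return v;
}
```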
To exhibit TaDA Live’s ability to reason about liveness locally, we will verify a double blocking counter, showing that for simple common programming patterns, the layer system leads to natural and modular client proofs.
Finally, we comment on a proof of a lock-coupling set, produced in full in Appendix C. The example considers a data structure implemented as a linked list with CLH locks guarding the individual cells. The example is challenging because of the presence of a dynamic number of locks. At first sight, it might seem impossible to represent this using the static association of layers to obligations in TaDA Live.
Obligations, however, as demonstrated in this case study, are a very general form of ghost state and can easily represent dynamic properties of state.
Other case studies. Ticket lock and MCS lock [18] are alternative implementations of starvation-free locks; they can be given the same specification as the CLH lock, and their liveness argument can be carried out in the same way as the one we present for CLH.16 A paradigmatic example of a fine-grained data structure is the Treiber stack [18], which, in its standard form, is non-blocking and has been proven in Total TaDA already. It is easy to adapt the code to have a operation that blocks on an empty stack. Such an operation would be blocking and would suffer impedance. Its specification and proof closely mirror those of the spin lock. Challenging variants of the lock-coupling set are the “optimistic” and “lazy” sets. The proof of the optimistic set uses a combination of the proof of the lock-coupling set and the impedance budget technique (optimistic set operations impede each other).
These case studies cover all the proof patterns needed to prove all the examples of the LiLi papers [30, 31]. Notably, proofs in LiLi involving modules that use locks require in-lining some non-atomic implementation of the lock operations in the client, resulting in non-modular proofs and unnecessarily intertwined termination arguments.

5.1 Spin Lock

Code. The spin lock module implements a lock by storing a single bit in a heap cell; locking is implemented by trying to CAS the heap cell from 0 to 1 until the CAS succeeds; unlocking simply sets the cell back to 0. In Figure 12, we give all the operations of a spin lock module.
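As a concrete reference, a minimal C11 rendering of this implementation might look as follows; the names are ours and the paper's actual code is the one in Figure 12, but the structure (CAS loop for locking, plain store for unlocking, constructor allocating an unlocked cell) is the one just described.

```c
#include <stdatomic.h>
#include <stdlib.h>

typedef struct { atomic_int cell; } spinlock_t;

spinlock_t *makelock(void) {
    spinlock_t *x = malloc(sizeof *x);
    atomic_init(&x->cell, 0);        /* the cell starts at 0: unlocked */
    return x;
}

void lock(spinlock_t *x) {
    int expected = 0;
    /* Busy-wait: retry the CAS from 0 to 1 until it succeeds. */
    while (!atomic_compare_exchange_strong(&x->cell, &expected, 1))
        expected = 0;                /* CAS overwrote expected on failure */
}

void unlock(spinlock_t *x) {
    atomic_store(&x->cell, 0);       /* set the cell back to 0 */
}
```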
Specifications. We will prove the module satisfies the following specifications:
where abstractly represents the lock resource at abstract location (omitted for readability in Section 2) and concrete address , with abstract state and impedance budget (an ordinal). The purpose of the impedance budget, as described in Section 2, is to prevent the environment from taking possession of the lock an unbounded number of times. Without this bound, the operation in the implementation of could be indefinitely preempted by the environment locking the lock, preventing it from ever taking its possession and terminating, even if the environment always unlocks the lock when it is locked. This is enforced by requiring the lock operation to strictly decrease the impedance budget using , a function that can be freely instantiated by the client upon usage of the specification, which indicates precisely how much the budget will decrease after this call (which is client-dependent information). The specification of then allows the client to pick an arbitrary ordinal as the initial budget.
Shared Region. The abstract shared lock resource will be represented by a region where , , . Here, is a fixed parameter of the region.
Convention 1.
An exclusive guard, , is very commonly used to express an exclusive permission on some shared resource; the guard cannot be composed with itself: i.e., . Local ownership of is exclusive in that no other thread can assert ownership of at the same time. A ubiquitous use of this guard is in representing the resource offered by a module.
Take for example the current spin lock module. Since this is a concurrent module, it internally uses shared resources. We therefore have a region encapsulating the shared internal resources of the module. From the perspective of the client, however, at the moment of creation of a lock by, say, a operation, the lock is exclusively owned by the client. This, for example, is reflected in the fact that, until the client shares the lock or invokes operations on it, it remains unlocked. To represent this fact, one typically defines an exclusive guard guarding each transition of the region interference: e.g., . Then the operation can be given the specification above, which gives to the client the stable assertion , wrapped in the predicate . (Note how is not stable on its own.) To re-share the lock, the client will create its own region encoding the invariants governing the interaction over the lock (and the other resources of the client), the interpretation of which will contain the guard .
Note that assertions have very different meanings if occurring in the atomic precondition of a triple, as opposed to the Hoare precondition: The resources in the atomic precondition are not owned by the local thread, but only acquired instantaneously at the linearisation point. For example, in the triple
the exclusivity of is only granted instantaneously to the thread acting on it atomically, i.e., either the environment during the interference phase as allowed by the pseudo-quantifier or the local thread at the linearisation point.
Since this pattern is ubiquitous, we reserve the guard constructor for this use and will omit the axiom when specifying guard algebras.
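As a hedged illustration (the guard name E and the bracketed ownership notation stand in for the paper's own symbols), the reserved axiom has the shape

\[
\mathrm{E} \bullet \mathrm{E} \ \text{undefined}, \qquad \text{equivalently} \qquad [\mathrm{E}]_r * [\mathrm{E}]_r \Rightarrow \mathsf{false},
\]

so that at most one thread (or the region itself) can own the exclusive guard at any time.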
Guards and Obligations. For the region, we only have the exclusive guard , and no obligation constructors, as the implementation has no internal blocking. All the blocking behaviour is represented by the liveness assumption in the pseudo-quantifier of the specification of . Note that without the exclusive guard, the specification of would not hold as the lock would not be stably unlocked.
Region protocol. The interference protocol for is very simple:
It states that whoever owns can freely acquire or release the lock, provided that at each acquisition, some budget is spent (), preventing the lock from being locked an unbounded number of times.
Region interpretation. The implementation uses a single cell stored in the heap, and we have no non-trivial guards/obligations; the interpretation is thus straightforward:
Note how is pure ghost state in that it is not linked to any information in the concrete memory.
Predicates. The lock resource is abstractly represented by the predicate
Verification of . Figure 13 is the proof of the operation. The only step that involves reasoning about liveness is the application of the While rule. To apply this rule, we must first define the loop invariant, , the target states, , the persistent loop invariant, , , and the environmental progress measure, .
The loop invariant is
which contains:
the safety information to prove the uniqueness of the linearisation point, namely, that if the failed, i.e., , then we have not touched the resource yet and we still have permission to perform the linearisation point (); whereas if the succeeded, i.e., , then we did perform the linearisation point with the expected effect.
the definition of the local variant as an upper bound on the impedance budget .
Indeed, whenever some budget is spent, the loop approaches termination, as, eventually, the exhaustion of the budget prevents further interference, allowing the operation to succeed and the loop to terminate. Therefore, decreasing the upper bound on the impedance budget corresponds to progress for the operation. Without additional information, however, we cannot show the local variant must eventually strictly decrease: indeed, in the case , we cannot exit the loop and the environment is not forced to spend budget. Therefore, the termination argument will need the assumption that the environment always eventually unlocks the lock to allow the termination of the loop or further decrease of the variant due to the environment locking the lock. This guarantee is given by the atomicity context with .
The target states, , must clearly include unlocked states, where , but, as it must eventually be stable, this is insufficient, since once the lock is unlocked, the environment can lock it again. However, when the lock is unlocked, if the environment takes possession of it, then the environment must also simultaneously decrease the impedance budget, i.e., .
The argument that is always eventually true relies on the assumption from the atomicity context that the environment will always eventually unlock the lock. However, this assumption only holds before the linearisation point. In particular, as the loop variant must contain , since the loop body may perform the linearisation point, the persistent loop invariant cannot, and therefore must also contain a disjunct where the linearisation point has occurred and holds a witness .
We therefore declare the target states as the ones where either the linearisation point has been performed, the lock is unlocked, or some budget was spent:
The persistent loop invariant here is simply , which is a valid stable frame of the loop.
To apply While, we also need to specify , which in this case is simply , which satisfies the layer constraints of the rule; and the environment progress measure :
(Here, we use the variable for the environment progress measure to avoid clashes with the impedance budget .) This environmental progress measure is decreased by both the environment locking and unlocking the lock:
Unlocking the lock decreases from 1 to 0, so as , the environmental progress measure decreases.
Locking the lock decreases the impedance budget from to , while also increasing from 0 to 1. Since implies , , the environmental progress measure decreases.
Given these parameters, the proof first establishes the loop invariant holds at the beginning for some , by applying Cons:
Note that we will often implicitly apply the Cons rule in proofs, only detailing the application when emphasis is desired. Next, Elim on gets rid of the existential quantification, so we are ready to apply While.
To complete the application of the rule, we need to show
(18)
(19)
(20)
Condition (19) is easily seen to hold: as we showed above, all possible environmental interference on the region decreases the environmental progress metric.
Condition (20) is also easily seen to hold, as the only program variable predicated over in , and is , which is not modified by the body of the loop.
Finally, condition (18) is proven as follows. We observe that:
We can then derive the environment liveness condition:
Formally, the application of EnvLive requires us to prove , which is trivial. An application of the ECase rule then splits between the cases where and hold. Intuitively, represents the case where we performed the linearisation point or the lock is unlocked, while represents the case where we still have not performed the linearisation point and the lock is locked. If holds, then holds, so no progress of the environment is required; therefore, this case can be discharged via an application of rule LiveT. In the case where holds, we can apply rule LiveA to invoke the liveness assumption stored in : If the lock is unlocked, then the metric strictly decreases.
To show that the liveness assumption encoded in the atomicity context for the region is active, the LiveA rule requires that in the current case:
The abstractly atomic update being tracked on has yet to occur:
No obligation of layer less than or equal to is continuously held locally:
If these hold, then the predicate shows that discharging the liveness invariant will strictly decrease . To show this holds, taking arbitrary and letting
we need to show
This holds, as, given an arbitrary , any step taken from by the atomic world rely relation either leaves the state of the region unchanged, preserving the state , or releases the lock, decreasing the metric. Therefore, for any , holds, which implies the goal.
To conclude the argument, we briefly comment on the proof of the body of the loop. The full proof of the body can be found in Figure 14. The applications of rules UpdReg and Frame lift the concrete atomic to a (potential) update to the region. An application of Cons allows us to introduce as an upper bound to the impedance budget, initially after the linearisation point.
Then, we apply rule AElim to remove the pseudo-quantification on and . At this point, the abstract state of the region in the postcondition is weakened to any state that might be reached before or after the linearisation point. However, we keep a record of what happened exactly at the linearisation point because of the assertions. The later application of MkAtom will be able to fetch the atomic update witness and declare the appropriate atomic update in the overall specification. Note that the overall Hoare postcondition after the application of AtomW is stable.
Finally, Figure 15 shows the proof outlines for the and operations. The only notable step of the proof of is the last application of Cons to viewshift the postcondition from to , which is possible because the interpretation of the region matches with this resource, so the reifications of the two assertions coincide.
The proof of is a straightforward lifting of the atomic reset of the cell at to the region . Neither proof involves a liveness argument.

5.2 CLH Lock

Code. A CLH lock is an implementation of a fair lock module that guarantees fairness by queuing the threads that are waiting to take its possession. Its implementation is shown in Figure 17.
The diagram in Figure 16 describes the state of the queued threads, , waiting to take possession of the lock, as well as the module’s head and tail pointers into the queue.
As described in Section 2, this queue is represented by associating each of the threads queuing on the lock with a heap cell in memory. Each thread executing the operation to take possession of the lock then holds in its local state the address of its cell and that of its predecessor's cell. These are held in the program variables and , respectively, in the implementation of . The local instances of these program variables for each queued thread and the cells they point to can be seen in Figure 16.
The thread associated with the cell at the head of the queue is said to hold the lock, and the value stored in its cell determines the state of the lock, . When a thread first takes possession of the lock, the lock will be locked. Therefore, the initial value in these cells, when the associated threads join the queue, is 1. This can be seen in the implementation of the operation, which allocates and sets its associated cell to value 1 on line 3 before enqueuing itself. Once the thread holding the lock wishes to release it, it can do so by setting the value of its cell to 0, unlocking the lock and signalling to the next thread in the queue that it can now take possession of the lock. This can be seen in the implementation of the operation, which fetches the address of the cell associated with the lock’s owner from the queue’s head pointer and then sets its value to 0.
In Figure 16, the thread is at the head of the queue, waiting for the lock to be released. If the lock is released by its owner, then gains the exclusive permission to take possession of the lock by setting the value of the module’s head pointer to the address of its associated cell. detects the lock has been released by repeatedly reading the value of its predecessor’s cell in the loop on line 6 and then sets the head pointer to the address of its cell, , on line 7.
Once the lock is released, only the thread at the head of the queue (if any) has the permission to take possession of the lock next. Due to this, if the owners of the lock always eventually release it, then the threads waiting on the lock take possession of it in the order they are enqueued.
To enqueue itself, the operation performs a operation on the tail pointer, placing the cell it has allocated with value 1 at the tail of the queue and writing the address of its predecessor to the program variable. The order in which the operations are enqueued is then the order in which they executed line 4. Any weakly fair scheduler will eventually give each thread executing the operation the opportunity to execute this line, allowing it to enqueue itself.
As long as the client then guarantees that every thread holding the lock eventually releases it, the thread will eventually take possession of the lock once it reaches the front of the queue and the operation will terminate, guaranteeing fairness.
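The following is a minimal C11 sketch of the algorithm just described (the names are ours; the paper's code is the one in Figure 17, and the line references in the comments point to the implementation lines discussed above):

```c
#include <stdatomic.h>
#include <stdlib.h>

typedef struct {
    _Atomic(atomic_int *) head;   /* cell of the current owner        */
    _Atomic(atomic_int *) tail;   /* cell of the last enqueued thread */
} clh_t;

clh_t *makelock(void) {
    clh_t *x = malloc(sizeof *x);
    atomic_int *init = malloc(sizeof *init);
    atomic_init(init, 0);                     /* dummy cell: lock starts unlocked */
    atomic_init(&x->head, init);
    atomic_init(&x->tail, init);
    return x;
}

void lock(clh_t *x) {
    atomic_int *cell = malloc(sizeof *cell);
    atomic_init(cell, 1);                               /* cf. line 3: new cell, value 1    */
    atomic_int *pred = atomic_exchange(&x->tail, cell); /* cf. line 4: enqueue on the tail  */
    while (atomic_load(pred) != 0) { }                  /* cf. line 6: spin on predecessor  */
    atomic_store(&x->head, cell);                       /* cf. line 7: take possession      */
    free(pred);                                         /* predecessor's cell is now unused */
}

void unlock(clh_t *x) {
    atomic_store(atomic_load(&x->head), 0);             /* reset the owner's cell to 0 */
}
```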
To be able to provide the same guarantee (that every thread requesting the lock will eventually be able to take possession of it, as long as the lock is always eventually released), the spin lock requires that its client only call the operation concurrently a finite number of times. This is exposed in the spin lock specification via ordinals bounding the impedance on the lock.
An interesting aspect of this example is that it features a combination of internal and external blocking: The client needs to always eventually unlock the lock—external blocking, requiring the client to provide a guarantee—and the operation needs to eventually take possession of the lock once the previous thread signals its release—internal blocking, guaranteed by the implementation. This second guarantee will be enforced using obligations not exposed in the specification. The proof will therefore involve an environment liveness condition discharged using both LiveO and LiveA.
Specifications. We will prove the following fair lock module specifications:
where abstractly represents the lock resource at abstract location (omitted for readability in Section 2) and concrete address , with abstract state .
To abstract the representation of a thread’s position in the queue, we will associate, through ghost state, to each thread requesting the lock, a ticket number that corresponds to the order of arrival of the implementation at line 4. Every time a thread joins the queue, it gets assigned the next available ticket.
This example shows a common proof pattern of TaDA Live: There is an inner region that exposes all the information needed for the termination argument (here, the value of the next ticket to be handed out, , so individual threads can reason about the threads queuing on the lock) and an outer one that hides enough information to make the operation abstractly atomic. This pattern nicely separates the concerns in the proof: proving atomicity is done via the outer region, termination via the inner one. Because of this, the abstract location of the lock will consist of the pair of inner and outer region identifiers. This is not a concern for modularity, however: The type of can be made opaque to the client, which just threads it through the proof unmodified.
Shared Regions. The abstract shared lock resource will be represented by a region where , , , . Here, , the region identifier of the inner region and , the address of the lock, are the fixed parameters of the region. The abstract state of the region includes , which represents the lock’s state, , which is the ticket number of the thread holding the lock, and is the address of the cell associated with the owner.
Once a operation has enqueued itself, the difference between the ticket of the lock’s owner and the operation’s ticket corresponds to the thread’s current position in the queue.
The internal region also exposes the next ticket to be handed to the next thread queuing on the lock, .
Notation. Lists will frequently be used in the ghost state for the proof of the CLH lock. We introduce notation to manipulate lists to simplify the exposition of the reasoning. Given and lists of elements of , we write , , and for prepend, append, and concatenation, respectively; is the length of , and states that the ith element (from 0) in is and ; and are the first and the last element of , respectively, and represents the list without the first element when is non-empty.
Guard algebra. Take arbitrary. For this proof, two guards will be necessary. The first is , which encodes the current thread’s ticket, , once it has joined the queue, as well as , pointers to the thread’s predecessor’s cell in the queue and its own, respectively. The second guard we require is , which is used to track the overall queue by recording the cells associated with enqueued threads, , and the ticket number of the current owner, .
To use this as intended, a few axioms on the guard algebra will be required. First, an axiom to create new tickets, adding a new cell to the queue and associating a new, unique ticket number to the thread:
This will be used to create the relevant guard resources when a lock operation enqueues itself on line 4. Similarly, an axiom to remove a thread’s predecessor from the queue once it can take possession of the lock:
This will be used to update the relevant guard resources with the relevant when a lock operation takes possession of the lock on line 7, placing its associated cell, , at the head of the queue. Finally, an axiom to guarantee that a ticket guard, , is well-formed with respect to the queue in a guard :
Obligation algebra. Take arbitrary. As mentioned above, to verify the totality of the CLH lock operation, once a thread is enqueued, if its predecessor relinquishes possession of the lock, then it must eventually take possession itself. Otherwise, although the lock will be permanently unlocked, no other thread waiting for the lock can take possession of it, as none of them is at the head of the queue.
To encode this liveness invariant that must be fulfilled, we associate an atom obligation with the ownership of the ticket . The CLH lock’s transition system will then require that this obligation be discharged by taking possession of the lock once it is unlocked by the thread with ticket .
The layer associated with is then , so these obligations are resolved in the order the associated threads are enqueued. Finally, as with the guard algebra, we have an obligation , which will remain in the shared region’s state and track the owner’s ticket, , and the next ticket to be handed out, , associated with the obligation via the obvious axioms.
Region protocols. The interference protocol for the region is as follows:
The first transition allows a thread to place itself in the queue waiting to obtain the CLH lock, updating the next ticket to be handed out from to . While doing so, the thread acquires an obligation, , requiring it to eventually take possession of the lock once it is at the head of the queue. The second allows the thread at the head of the queue to take possession of the lock by changing the state, , incrementing the owner ticket, , to its own (tracked by the thread’s obligation) and changing the owner pointer of the lock to that of its own associated cell. This discharges the obligation , as the thread then leaves the queue to take possession of the lock. Finally, the third transition allows the lock to be unlocked.
The interference protocol for the region is then:
This hides the enqueuing step of the operation, allowing the operation to appear atomic.
Region interpretation. As explained above, the CLH lock associates a cell with each thread queuing on it, as well as its owner. The list of each of these cells in the order in which the associated threads are queued, with the owner’s cell as the head, will be denoted . is then the list of cells queueing on the lock. While threads are queuing, the associated cells must have value 1; this is represented using the predicate :
The inner shared region, , holds the cells associated with each queued thread; this is represented by the resource in the region interpretation.
The shared region also holds a pointer to the tail of the queue, , as well as a pointer to its owner’s cell, whose value is the state of the lock, , as described above. This is represented by the resource:
The shared region’s ghost state then comprises:
the guard keeping track of the list of cells, and the current owner of the lock, .
the obligation keeping track of the next ticket to hand out, , and the current owner’s ticket, .
Finally, the invariant is used to guarantee that each thread that holds a ticket is associated with a cell in the queue, and , which associates the head of with the address of the owner’s cell. All of this ties together to give the following region interpretation:
The outer shared region then holds full permission to update the inner region, , and asserts that each thread queuing on the lock, with tickets to , holds an obligation to take possession of the lock once its predecessor releases it, , where is the identifier of the inner region:
Predicates. The lock resource is then abstractly represented by the predicate:
which abstracts away the CLH lock’s implementation details: the ticket and cell address associated with the lock’s current owner.
Proof of . Figure 18 gives an outline of the proof of the clh operation implementation; the definition of the loop invariant will be given later. The steps involving liveness are the operation, which enqueues the thread, hence obtaining the obligation to take possession of the lock once the previous thread relinquishes possession of it; the while loop, which waits for the previous thread to release the lock, whose liveness depends on the previous threads in the queue taking possession and then releasing the lock in turn; and the write operation at line 7, which takes possession of the lock. We begin with the details of the operation’s proof, shown in Figure 19.
There, Step 4 is composed of the rules: FrameH, AtomW, AElim, LiftA, AElim, LiftA, AElim. The application of the FrameH rule frames off the view , the AtomW rule transfers all the remaining resources to the atomic precondition and postcondition, the AElim rule pseudo-quantifies , , and , LiftA then opens up the region , the applications of AElim and LiftA then pseudo-quantify and open the region , and the final application of the AElim rule pseudo-quantifies .
After using LayWH to decrease the level of the assertion to and Frame to frame off everything except the region interpretation’s tail pointer, the operation atomically updates it. After everything is framed back on, the consequence rule is then applied to the postcondition to re-establish the invariant. The axioms
are used to update the queue , by enqueuing —the local thread’s cell—at its tail, and updating the next ticket to . While doing so, the thread acquires the guard and the obligation , which represent the thread’s position in the queue and its obligation to take possession of the lock once its predecessor relinquishes it, respectively.
As environmental obligations can always be duplicated, the thread also obtains locally. These environmental assertions will be necessary for the application of the While rule. To finish reestablishing the invariant, as the thread is retaining locally, it can leave in the region invariant. Finally, using the axiom
as we hold locally, the assertion holds stably.
Next, consider the proof of the loop. The loop invariant is:
which asserts that:
, the local thread is queueing for the lock with ticket and with the address of the predecessor’s cell and the current thread’s cell in and , respectively.
, the current owner must come before the local thread with ticket . This is stable due to the guard.
, if , the last read of the value of the predecessor cell, is 0, then the owner, the predecessor of the current thread, has unlocked the lock, as only the owner can set its cell to 0. Therefore, , and, consequently, the lock is unlocked, . The owner’s cell, , will also take the value of that of the predecessor.
, which asserts that the thread has observed that its predecessor has taken possession of and then unlocked the lock (by reading the cell at address into ); will have value 1 otherwise.
A thread with ticket can take possession of a CLH lock once its predecessor has taken possession of and relinquished the lock. Once the lock reaches this state, and hold stably, as all transitions from this state would set , however, we know that, .
The intent of this loop is to wait till this occurs, allowing the thread to safely take possession of the lock once the loop terminates. Hence, the goal state is:
Once the lock reaches this state, a subsequent iteration of this loop will terminate with , breaking the loop. To reach the goal state, threads that come before the current thread must both take possession and then unlock the lock. The first is guaranteed due to obligations for and the second due to the pseudo-quantifier, guaranteeing that the lock must always eventually be released. The progress measure
is decreased by both of these actions and, as implies , the progress measure, , is a natural number, and therefore well-founded.
Using the difference between , the local thread’s ticket, and the owner’s ticket, , to bound the number of threads that can take possession of the lock before the local thread removes the need for the impedance bound, , required in the proof of the spin lock module, which must leak into the associated specification (as it imposes a restriction on any client).
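Purely as an illustration of such a measure (the exact metric used in the proof is the one given above; the formula below is an assumption of ours), one well-founded quantity compatible with this argument is
\[
\alpha \;=\; 2\,(t_{\mathrm{loc}} - o) + l, \qquad l \in \{0,1\},\; o \le t_{\mathrm{loc}},
\]
where \(t_{\mathrm{loc}}\) is the local thread’s ticket, \(o\) the owner’s ticket, and \(l\) the lock state: an environment thread acquiring the lock (incrementing \(o\) and setting \(l := 1\)) decreases \(\alpha\) by 1, releasing the lock (setting \(l := 0\)) decreases \(\alpha\) by 1, and enqueuing leaves \(\alpha\) unchanged.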
To support this argument, the persistent loop invariant, , must contain the resource to make use of the liveness assumptions of the pseudo-quantifier, guaranteeing that the lock is always eventually unlocked, and the relevant environmental liveness assertions guaranteeing the threads queued before the current thread will take possession of it once their predecessor relinquishes it:
The While rule is applied as in Figure 20. The rule Elim is applied to quantify and over the antecedent. To complete the application of the rule, we need to show
(21)
(22)
Condition (24) holds trivially: as seen above, all the possible operations on the module decrease the environmental metric.
To prove Condition (23), take
First split on :
In the case , the rule LiveT applies directly. To show holds, split on the state of the lock, .
In the case , for each , the ticket of the current owner of the lock, the environment is guaranteed to eventually take possession of the lock due to the environmental obligation assertion . To consider each case for , we first apply the rule EQuant and then the LiveO rule:
(23)
With the exception of , all of these conditions hold trivially. This last condition holds as, given , all possible transitions either preserve or decrease the metric.
In the case , progress is guaranteed due to the assumptions in the atomicity context, , that eventually, the lock must be released, so the LiveA rule is applied:
(24)
Once again, with the exception of , all of these conditions hold trivially. This last condition holds as, given , all possible transitions either preserve or decrease the metric.
To conclude the proof of , the argument for the body of the loop’s proof is purely a safety argument; the full proof is in Figure 21.
The key step uses the axiom
Since we hold the guard , we can infer . Then, after the value of the cell at has been read, if the value, , is 0, then, since only the thread holding the lock can change the value of its associated cell to 0, . As a consequence, if holds initially, then it holds after the body of the loop is executed, and therefore so does the loop variant in the postcondition, . As initially, we know from the loop condition, , therefore .
Finally, in Figure 22, we consider the details of the linearisation point, when the operation takes possession of the lock. First, the Elim rule is applied to quantify the ticket of the current owner, (the predecessor of the current thread), over the antecedent. Then the AtomW and UpdReg rules are applied to atomically update the region state by acting on its interpretation. The rules AElim, LiftA, and AElim are then applied to pseudo-quantify and , the two variables that are existentially quantified within the region invariants, and to open the region . Finally, the Cons rule is applied to re-establish the invariant in the postcondition by adjusting the ghost state. Specifically, the guard and the obligation are reabsorbed into and , respectively, to update the list of threads waiting on the lock and increment the owner. This is done using the axioms:
The inner part of the proof then decreases the layer and frames off unnecessary resources to apply the update. Note that this step of the proof discharges the obligation . This concludes the verification of the operation.
The CLH lock proof is able to internally encode the impedance bound enforced by thread queueing using ghost state: the local ticket numbers of each thread queueing for the lock and the owner’s ticket number that is visible in the abstract state of the region , but hidden from the client.
Proof of. Let and . The proof of the operation is as follows:

5.3 Blocking Counter

We sketch the proof of a blocking counter module: A single cell storing a natural number that can be incremented, guarded by a non-fair lock for concurrent access. The example illustrates how the TaDA Live specifications and proofs neatly support hiding blocking when it is unobservable by the client, while still leaking the requirement of bounded impedance from the lock. This requires any client to only call operations making use of the lock (in this case, the operation) a bounded number of times.
Code. The implementation of the module’s operations is:
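A minimal C sketch of the intended shape of the module is shown below; the names (counter, incr, read, make_counter) and the spin-lock encoding are our own assumptions and are not the module’s listing.

```c
#include <stdatomic.h>
#include <stdlib.h>

/* Illustrative blocking counter guarded by a non-fair spin lock.
 * All names are assumptions; this is not the module's verified listing. */

typedef struct {
    atomic_int lock;    /* 0 = unlocked, 1 = locked */
    int        value;   /* the counter cell         */
} counter;

counter *make_counter(void) {
    counter *c = malloc(sizeof *c);
    atomic_store(&c->lock, 0);
    c->value = 0;
    return c;
}

static void acquire(counter *c) {
    int expected = 0;
    while (!atomic_compare_exchange_weak(&c->lock, &expected, 1))
        expected = 0;                 /* busy-wait: the source of impedance */
}

static void release(counter *c) { atomic_store(&c->lock, 0); }

void incr(counter *c) {               /* atomically increments the counter  */
    acquire(c);
    c->value = c->value + 1;
    release(c);
}

int read(counter *c) {                /* reads the counter under the lock   */
    acquire(c);
    int v = c->value;
    release(c);
    return v;
}
```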
Specifications. The abstract predicate represents a blocking counter at address with value and impedance bound .
Shared Regions. This proof will use two region types: and where , , , , and is the abstract location of the lock guarding the counter resource. Here, , , and are the fixed parameters of the regions, representing, respectively, the region identifier of the inner region, the address of the blocking counter and the abstract location, and address of the associated lock.
As in the CLH lock example, we will use two nested regions. The region type will be used as an inner region revealing sufficient information to prove the desired liveness properties, in particular, exposing the state of the lock, . The region type will be used to prove linearisability of our operations; to this end, it only exposes the value of the blocking counter and the lock’s impedance bound .
Guards and Obligations. We associate the exclusive guard with both and . In addition, this proof will require the guards , and , where , for the latter region. These guards will be used to record the update to the value of the counter that will occur at the moment the module’s lock is locked in the proof of . Since other threads cannot observe the value of the counter without first holding the lock, performing this abstract update on the state of the outer region, , and then updating the concrete state of the counter before releasing the lock results in a linearisable implementation.
To allow this, once the lock is locked, the concrete value of the counter, , and the updated value of the counter, , are stored in the guard within the region . The thread holding the lock then holds the guard , which keeps a local record of the concrete and updated counter values; the values are required to match with those stored in within the region by the axiom:
When the lock is unlocked, the guard is stored within the region . When a thread takes possession of the lock, it can be split into the guards and using the axiom:
Finally, if a thread holds the guard , then it holds the lock, which can be inferred from the axiom:
This three-guard pattern is commonly used in TaDA to encode mutual exclusion on some resource while a thread has possession of a shared lock.
We also associate a single atom obligation with the region type . This obligation encodes ownership of the blocking counter’s lock, as well as the obligation to unlock it. We set lay .
Region Protocols. The guard-labelled transition system of the region is:
and the guard-labelled transition system of the region is:
Region Interpretations. The interpretation of the locked counter region links the state of the lock and counter to the abstract state of the region and the ownership of .
The region is a wrapper around the region that hides the state of the lock and allows the counter value of the region to be disconnected from that of the outer region when the lock is locked.
Predicates. The counter resource is abstractly represented by the predicate
Verification of . The proof of can be found in Figure 24. The only step requiring liveness reasoning is the call , which is handled very similarly to the same call in the left thread of the distinguishing client, where the environment liveness condition of the LiveC rule application is discharged using the fact that when holds, then , which, in this case, is obtained from the interpretation of the outer region, . The details of the proof of the operation can be found in Figure 23.
Verification of the and operations. The proof of proceeds, using standard steps on Hoare triples, by establishing the postcondition which can be viewshifted to .
The proof of is almost identical to the proof in Figure 24. The reader might wonder if the lock acquisition in the code is strictly necessary. Indeed, it is not, given the current set of operations available to the client. To prove the version where does not acquire the lock, however, we would need to change the region’s protocol to encode the fact that, while holding the lock, a single write to the counter is possible. Since one would conceivably want to extend the module with other operations that write to the counter multiple times while holding the lock, we formalised the more general protocol. In the presence of such additional operations, would need to acquire the lock to be correct.

5.4 Double Blocking Counter

We now develop the proof of a double blocking counter module, that is, a module encapsulating two integers each protected by a fair lock. The module offers linearisable operations to increment/read each counter in isolation and an operation to atomically increment both. The implementation of needs to deal with the ubiquitous pattern of locking multiple locks in a nested fashion, which is one of the most common sources of deadlocks in coarse-grained concurrent programs. The example illustrates how the specification format and layer system of TaDA Live allow for modular proofs of deadlock-freedom. In particular, verifying the example in LiLi would require:
(1)
replacing the calls to the lock operations with some non-atomic abstract code
(2)
building a termination argument that talks about the queues of the two fair locks; in particular, the variant argument would need to consider both queues at the same time and argue about all the possible ways the threads in the environment may enter and exit both queues.
We avoid these complications by:
(1)
reusing the (fair) lock specifications that are truly atomic and properly hide the queues,
(2)
arguing about termination by means of two obligations whose layers are ordered to reflect the order of acquisition of the locks. These obligations only represent the liveness invariant that each lock is always eventually released; the layers represent the dependency between the two locks. The proof requires no detail about why, thanks to the internal queues, this is sufficient to ensure global progress: That part of the argument has already been made in proving the lock specifications!
Code. The implementation of the module’s operations is in Figure 25 using the following abbreviations for readability:
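For reference, the following self-contained C sketch conveys the shape of the module; the ticket lock below merely stands in for an arbitrary fair lock (such as the CLH lock of Section 5.2), and all names are our own assumptions rather than the code of Figure 25.

```c
#include <stdatomic.h>

/* Illustrative double blocking counter. The ticket lock stands in for any
 * fair lock module (e.g., CLH); names are assumptions, not Figure 25's code. */

typedef struct { atomic_uint next, owner; } fair_lock;

static void fair_acquire(fair_lock *l) {
    unsigned t = atomic_fetch_add(&l->next, 1);   /* take a ticket             */
    while (atomic_load(&l->owner) != t)           /* wait until it is our turn */
        ;
}
static void fair_release(fair_lock *l) {
    atomic_fetch_add(&l->owner, 1);               /* hand the lock over        */
}

typedef struct { fair_lock l1, l2; int c1, c2; } dcounter;

void incr_left(dcounter *d) {                     /* increments one counter    */
    fair_acquire(&d->l1);
    d->c1 += 1;
    fair_release(&d->l1);
}

/* Atomically increments both counters. The nested acquisition order l1 then
 * l2 mirrors the layer ordering in the proof: releasing l1 depends on
 * acquiring l2, so l2's layer must sit strictly below l1's. */
void incr_both(dcounter *d) {
    fair_acquire(&d->l1);
    fair_acquire(&d->l2);
    d->c1 += 1;
    d->c2 += 1;
    fair_release(&d->l2);
    fair_release(&d->l1);
}
```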
Specifications. The fair lock module specifications assumed in this example are
where and are layers parametrised on the region identifier of the shared lock. It is a common TaDA Live pattern to parametrise the layers of specifications so they can be instantiated differently for each instance of the module. In Section 5.5, we explain this parametrisation in general and how to parametrise the implementation proof accordingly.
The abstract predicate represents a double counter at address with abstract location and values and , respectively. We wish to show that the implementations of the module’s operations satisfy the following specifications:
It is important to note here that we are making explicit the parametrisation of the layers in the region identifiers , because we will need to associate different layers with the two instances of the lock. As we will see later, we will have two region identifiers and , one per lock, with associated layers . The lock specifications themselves only require and but we will additionally impose, for this client proof, . This represents the fact that, in this client, the release of lock 1 will depend on the acquisition of lock 2.
Shared Regions. Like for the single counter example, we need two nested regions, one to prove the atomicity of the operation () and an inner one to prove termination (). They differ in that only records the abstract states of the counters, while includes the abstract states of the locks. Formally: and where , , and , and is a tuple with and . Here, , and are the fixed parameters of the two regions, respectively. The double blocking counter resource is abstractly represented by the predicate .
Guards and Obligations. We introduce the guard constructors , , and , for , for bookkeeping of the value of the counters. We need this ghost state, because in there is an intermediate state where one counter has been updated but the other has not; we cannot update the abstract state in two steps, because we are proving atomicity of the operation, so we need to update both counter values in the abstract state in one go. We record the intermediate concrete state in these guards so the information is there locally without affecting the shared abstract state prematurely. The guard composition satisfies the axioms
Here, tracks the reference value (left in the region interpretation) for the ith counter’s abstract () and concrete () value and is a local “witness” for the same information about the ith counter, which can only be obtained when locking the ith lock (otherwise, it would not be stable information). This is enforced by the interpretation given later.
We associate two atom obligations and with the region type , encoding ownership of the double counter’s locks, respectively, as well as the obligation to unlock them.
As anticipated, we choose the layers of the lock specifications in a way that represents the dependency between the two locks. We have a (double-counter-local) top () and a bottom () layer, and intermediate layers for the locks:17
Region Protocols. The interference protocol of the region trivially allows for any change to the counter values:
The interference protocol of the region encodes the constraint that we can update a counter only by holding the corresponding lock:
Region Interpretations. The interpretation of formalises the fact that the outer region simply hides the state of the locks for the atomicity argument, while the actual internal protocol of the module is encoded in the interpretation of the inner region :
Proof of . The proof outline of is reproduced in Figure 26. Most of the proof is routine; the derivation for the acquisition of the first lock follows closely the pattern we already explained in Sections 4 and 5.3. We show the proof of the acquisition of the second lock in more detail to illustrate the interplay between the layers. At that point, we are continuously holding the obligation of the first lock, with layer greater than , so we apply LayWH to lower the layer to , enabling the application of Frame to frame . The obligation has layer lower than , so we are allowed to invoke it to discharge the environment liveness condition of the LiveC application in a way that is analogous to the derivations of the distinguishing client and Section 5.3.
A comparison with LiLi. As we have seen in Section 2 (Innovation 3), the call of a CLH lock in LiLi involves two distinct atomic actions: requesting the lock and acquiring it. Requesting a lock is a non-blocking action, as it just enqueues the current thread in the (concrete) queue for , but the acquisition is represented with a (primitive) blocking operation that waits until the current thread is at the head of the lock’s queue, and the lock is unlocked. When proving the call to in , the LiLi proof would require arguing about termination of acquisition by appealing to progress of the threads in the environment.
To do so, in the LiLi methodology, one has to identify the threads in the environment that will be able to make progress and show how this progress brings us closer to acquiring lock . Consider the case when there are threads ahead of us in the queue for . Assume thread is the head of the queue for . It can make progress in three ways:
if is unlocked, then it can acquire it;
if is locked, then it can unlock it;
if is locked, then it can request .
How do these actions represent progress for us? The first case makes progress by moving to the second or third case. The second case removes from the queue of , bringing us closer to the front of the queue. The third case complicates matters: In this case, is enqueued in the queue of with a non-deterministic number of threads ahead of it. The thread is now blocked, and to track progress, we need to consider the head of the queue for , which can only make progress by acquiring the lock when unlocked, or releasing the lock when locked. What progress has been made towards us acquiring ? The measure of progress needs to consider the contents of the queues for both locks: The measure before requests needs to be (ordered lexicographically) so we can lower the measure to once has joined the queue of . When finally reaches the head of the queue of , the measure of progress becomes , and the only option for is to release . Now thread is back to the three options as above. This is a problem, because nothing would prevent from requesting again. This could repeat ad libitum, leaving us to starve on . To rule this out, the argument needs to place a bound on the number of times can be acquired while holding ; in our example, this bound can be 1. By mixing this bound into the measure , the action of releasing brings real progress by taking from 1 to 0. When that happens, the only option for is to release the lock. This brings down , the number of threads ahead of us; at the same time, we want to reset to and to 1 to allow the new head of the queue of to request .
This substantiates our claim that LiLi’s rely/guarantee reasoning lacks scalability; the key reason for this is that the progress argument is forced to walk through all the possible ways the environment could be implementing progress. This, in turn, requires exposing the internal state of both locks (their queues) for use in the client’s proof. In other words, the abstraction of the environment is not abstract enough. By comparison, TaDA Live’s atomic specifications allow the termination of the calls in the double blocking counter to be reasoned about individually, without direct reference to the termination of the other, nor to internal state, using layers to prevent circular reasoning. The appeal to obligation being live to justify why the call to terminates abstracts away how the environment may be keeping it live. The layers capture the essential information: The only thing that is important is that, to keep live, the environment does not assume live.

5.5 Lock-coupling Set

To conclude this series of examples, we present a challenging fine-grained lock-based implementation of a linearisable finite set. A lock-coupling set implements a set by maintaining an ordered linked list of the elements with fair locks (here, CLH locks) guarding each individual element. The module exposes an and operation to add and remove elements from the abstract set it represents. To make modifications to the nodes of the linked list, the operations traverse the list using a lock-coupling pattern. In this pattern, all threads start the traversal at the head of the list. To be at position a thread must acquire the lock at that position. To move to position the thread would first acquire the lock at and then release the lock at position . This way, the threads cannot overtake each other, and owning a lock allows the owner to safely perform modifications at that position. We sketch here the main points of interest of our proof; the full details can be found in Appendix C.
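The traversal pattern can be sketched in C as follows, reusing the illustrative ticket lock of the double-counter sketch as a stand-in for a fair lock; the names and the sentinel-based list layout are assumptions on our part and not the code verified in Appendix C.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

/* Illustrative hand-over-hand (lock-coupling) insertion into a sorted list.
 * The ticket lock below is the same stand-in for a fair lock as in the
 * double-counter sketch; nothing here is the code verified in Appendix C. */

typedef struct { atomic_uint next, owner; } fair_lock;
static void fair_acquire(fair_lock *l) {
    unsigned t = atomic_fetch_add(&l->next, 1);
    while (atomic_load(&l->owner) != t) ;
}
static void fair_release(fair_lock *l) { atomic_fetch_add(&l->owner, 1); }

typedef struct node {
    int          val;
    struct node *next;
    fair_lock    lock;
} node;

/* Assumes a head sentinel smaller than every value and a tail sentinel
 * larger than every value, so pred and curr always exist. */
bool set_add(node *head, int v) {
    node *pred = head;
    fair_acquire(&pred->lock);
    node *curr = pred->next;
    fair_acquire(&curr->lock);
    while (curr->val < v) {                /* lock-coupling: take the next   */
        fair_release(&pred->lock);         /* lock before giving up the one  */
        pred = curr;                       /* behind us, so threads cannot   */
        curr = curr->next;                 /* overtake each other            */
        fair_acquire(&curr->lock);
    }
    bool added = false;
    if (curr->val != v) {
        node *n = malloc(sizeof *n);       /* insert between pred and curr   */
        n->val  = v;
        n->next = curr;
        atomic_store(&n->lock.next, 0);
        atomic_store(&n->lock.owner, 0);
        pred->next = n;
        added = true;
    }
    fair_release(&curr->lock);
    fair_release(&pred->lock);
    return added;
}
```

The liveness subtlety is visible in the loop: the release of the lock behind us waits on the acquisition of the lock ahead of us, so the layers associated with the locks must strictly decrease towards the tail; this is exactly what the layer-map machinery below arranges.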
This example is challenging, because it makes use of a dynamically changing list of locks with non-trivial liveness dependencies between them. In particular, the termination of the acquisition of each lock depends on the usage of the locks further down the list. Although these dependencies are acyclic, they change over time as the list grows or shrinks. At first sight, it is unclear how the seemingly static layer structure of TaDA Live and the fixed layers decorating the specifications of lock operations can cope with this complexity without breaking modularity.
The TaDA Live proof of this example relies on solving two key challenges:
How can we modularly coordinate the choice of layers needed for the proof of a module and the ones needed for the proofs of its clients?
How can we dynamically reassign layers to resources?
We solve the first challenge by introducing a style of specification that allows the client to “remap” the layers of the implementation into a larger layer structure and the implementation to prove correctness with respect to a “local” layer structure that is opaque to the client. The key observation is that a TaDA Live derivation’s validity is preserved by transformations of the layer structure that preserve the strict order between layers. This leads to the following proof style. Given two partial orders and , a function is strictly monotone if . A layer map is a strictly monotone function between the two partial orders. Using this notion, we generalise the client-facing CLH lock specifications as follows:
From the perspective of the implementation, a proof of correctness would start by defining the partial order of the “internal” layers. In the case of CLH, as we have seen in Section 5.2, we would let with and . Then, to be able to prove the triples with the layers remapped by the arbitrary layer map , we would reproduce the derivation presented in Section 5.2 but with applied to every occurrence of an internal layer. For example, the region type would also be parametrised by the layer map, , so its associated obligations and their layers can depend on , e.g., lay . Since the map preserves the strict order of , the proof goes through exactly as in the un-parametrised case.
From the perspective of the client, to use these specifications one would first obtain the arbitrary from the existential quantification. Then the client would be able to choose a layer map from to . Here, could be the global layer structure, in the case of a closed proof, or could itself be the internal layer structure of a module using the lock module internally. Note that the client needs to define parametrically on , since it has no control over the inner structure of . For example, in the case of a client with a static list of locks, one would use as the lexicographically ordered set of pairs from where the first component corresponds to the position of the lock from the end of the list. Then, for the lock at position , the client would instantiate the specifications choosing .
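As a concrete reading of this instantiation, in notation of our own choosing, for a static list of locks the client could take
\[
\mathcal{L}_2 \;=\; \mathbb{N} \times \mathcal{L}_1 \ \text{(ordered lexicographically)}, \qquad
\varphi_i(\lambda) \;=\; (i,\lambda) \ \text{for the lock at position } i \text{ from the end of the list.}
\]
Each \(\varphi_i\) is strictly monotone, and every layer assigned to a lock at a smaller position lies strictly below every layer assigned to a lock at a larger position, so the liveness dependencies between the locks remain acyclic.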
The second challenge is also solved by a slight generalisation of the lock specifications, following a proof pattern that, if adopted, always increases the generality of module specifications: adding some fractional permissions to control the update of ghost parameters of the resource. The idea is that the layer map is ghost state, and as such, we should be able to update it using a viewshift. To do this without invalidating the other threads’ information about the region we are updating, we add standard fractional permissions to the lock specifications. We introduce the abstract predicate representing ownership of the fraction of permissions for a lock at abstract location . To split permissions, the predicate satisfies, for , . The generalised lock specifications would then be:
When creating a new lock, one gets a local resource representing an unlocked lock and full permissions. Typically, permissions are then distributed to the threads by splitting the full permission into smaller fractions. A non-trivial fraction of permission is now needed to perform the operation. We can then provide the viewshift which allows us to change the layer map without invalidating the knowledge about it in any other thread: If we own , then no other thread can race on the lock. Adapting the proof of CLH to support permissions and the viewshift above follows standard (safety) proof patterns that we explain in Appendix C.
Let us briefly explain how we can use this viewshift in the lock-coupling set example. Conceptually, we want to organise the layers of the lock-coupling set module as for a static list of locks: They go in decreasing order from the head of the list to the tail. A thread holding a lock at position will be able to eventually acquire the lock at position because the release of such lock is associated with an obligation of strictly lower layer than the one associated with the lock at . Each operation of the module inserts at most one element to the set per traversal of the list. We therefore arrange the proof invariants so each thread traversing the list will shift up the layer of the lock at the thread’s current position by one. This way, when the thread finally finds the position where the new element has to be inserted, there is already a gap of 1 between the layers associated with the positions being altered by the thread. The layer sitting at the gap will be the one we associate with the lock of the new element. The layer-map-altering viewshift we explained above is used at each step of the traversal to shift up the layer of the current lock. This is possible without breaking the information owned by other threads, because when the current thread holds the lock at position and the lock at finally becomes available, the current thread is the only thread with access to the reference (and the associated resources) of the lock at . Formally, this means that when we obtain the lock at , we are able to obtain full permissions for it until we unlock the lock at . With the full permissions, we can apply the viewshift and effectively shift up the layers associated with the lock at .
The only exception to this scheme is the lock at the head of the queue: This is the only lock that does not need a remapping of layers, as its associated layer can be , which is always bigger than any layer ever associated with the locks at the other positions.
It is worth noting that the LiLi proof of the same example does not use the specifications of the fair locks modularly, but instead inlines the code of the lock operations, allowing for a non-modular handling of the internal state.
Interestingly, the same lock-coupling set specifications can be implemented by using spin locks instead of CLH locks for each element except the one at the head. In fact, the locks in the tail of the list do not experience any impedance. At first sight, it seems impossible to represent this fact using our specifications for the spin lock: The operation needs to consume non-trivial budget, but there is no bound on the number of calls to it. The TaDA Live way of expressing the absence of impedance in this example uses a viewshift similar to the one we introduced above, which allows us to reset the budget (and the layer map) when we own full permissions. The proof in LiLi of this variant of the lock-coupling set again inlines the lock code, with the effect of being able to redefine which internal steps are susceptible to impedance and which are not, breaking modularity.

5.6 Limitations

Non-local linearisation points. As with other total program logics, TaDA Live does not support helping/speculation. Such patterns are challenging for the identification of the linearisation point, which is entirely a safety property. Extensions to TaDA that could support such patterns are discussed in Reference [6]. Such extensions are orthogonal to the termination argument. We therefore choose, in line with the related literature, to explore termination in a simpler logic.
Non-structural thread creation. TaDA Live currently supports only structural parallel composition. We believe that support for non-structural fork/join would not require substantial new ideas. For comparison, LiLi supports neither parallel composition nor fork/join.
Scheduling non-determinism. A more interesting limitation comes from our approach to specifying impedance. For non-blocking programs, the ordinal-based approach is complete. It is not complete for blocking programs. Consider where is the distinguishing client with a spin lock. Scheduler fairness guarantees the right-hand thread of will be eventually executed. The specification of spin lock, however, states that every call to needs to consume budget, forcing the client to provide an upper bound for the total number of calls to initialise the budget. Unfortunately, will call an arbitrary unbounded number of times, determined only by the choices of the scheduler. It is, thus, not possible to provide the initial budget, and TaDA Live cannot prove that the program terminates. The impedance on the lock is only relevant when the client is unblocked (i.e., is true) but the specifications do not allow for the distinction. To accommodate this behaviour, we could introduce to represent a prophecy of the number of steps it will take to fulfil live obligation . This would solve the problem for , because (where is fulfilled by setting to true) would be the required budget. How to introduce this extension soundly is future work. To the best of our knowledge, none of the approaches in the literature can handle this example.
Loop body specifications. Consider a loop invariant asserting the possession of obligation . We cannot distinguish, by only looking at the specification of the loop body, the case where is continuously held throughout the execution of the body, from the case where is fulfilled and then reacquired before the end of an iteration. The current While rule conservatively rules out the use of assumptions with layer higher than or equal to lay ; doing otherwise would be unsound in the case when is held continuously. A solution would be to introduce an assertion , certifying that an obligation is fulfilled at some point in a block of code. It would allow the While rule to only forbid layers that may depend on obligations one holds in the loop invariant and for which it was not possible to prove .
More Expressive Layers. Advanced examples like the lock-coupling set of Section 5.5 need powerful parametric specifications to work around the fact that the lay function is statically specified. We are not aware of any example that cannot be proved using static layers and critically requires more expressive layers. Even for current proofs, however, being able to constrain layers through assertions and allowing them to change as a result of interference would allow for more concise and intuitive proofs. The lay function could in principle be encoded as “regular” ghost state and the crucial relative order between layers be enforced through invariants. It is, however, not clear how to ensure soundness if interference on layers is allowed. We leave this exploration as future work.

6 Related Work

Primitive Blocking. There has been work on termination and deadlock-freedom of concurrent programs with primitive blocking constructs. Starting from the seminal work of Reference [27], the idea of tracking dependencies between blocking actions and ensuring their acyclicity has been used to prove deadlock-freedom of shared-memory concurrent programs using primitive locks and (synchronous) channels [3, 28]. Similar techniques have been used in Reference [16] to prove global deadlock-freedom (a safety property requiring that at least some thread can take a step) and Reference [22] to prove termination. This entire line of work assumes the invocation of lock/channel primitives as the only source of blocking. As a consequence, this methodology provides no insight into the issue of understanding abstract blocking patterns arising from busy-waiting and shared memory interference. Moreover, the specifications for blocking built-ins (hard-coded in the logic as ad hoc axioms) impose a usage protocol on the client, instead of just capturing the abstract effect of the operation: For instance, a call to always entails an obligation to unlock the lock, regardless of how the client intends to use the lock. This has had the side effect of requiring ad hoc extensions of the reasoning principles to increase the expressivity of this hard-coded protocol to allow, for example, for delegation of obligations [17]. Our solution uniformly handles programs that mix blocking primitives and ad hoc synchronisation patterns and does not impose any specific protocol on the client.
The notion of “obligations” found in References [3, 16, 22, 28] is only superficially related to our obligations. First, obligations found in the literature represent primitive blocking events (like the acquisition of a lock). They are also typically equipped with a structure to represent causal dependencies between these events to detect deadlocks. Our layered obligations are associated with arbitrary abstract state changes, removing the need for ad hoc treatment of primitives and supporting abstraction and abstract atomicity. Moreover, our layers do not represent causal dependencies between events, but rather dependencies between liveness assumptions in a termination argument. This is reflected in our specifications, e.g., a lock operation does not return an obligation in its post-condition. Whether there is a need for an obligation linked to that lock is entirely dependent on how the client will decide to use the lock. Nevertheless, the specification precisely captures the termination guarantees of lock operations. Finally, obligations in the literature have a purely safety semantics, from which one can only derive safety properties such as non-blockingness or deadlock-freedom. With our obligations, we show how to express proper liveness invariants, how to blend them with the layers, and how to use them for proving termination.
Temporal Logics. There is substantial literature on using temporal logics to prove liveness and termination of concurrent programs, e.g., Reference [37]. By working directly at the level of traces with liveness properties stated as temporal logic formulas, this approach is very general. It does, however, provide less guidance on how to prove programs and does not tackle the problem of abstract interfaces and proof reuse. Our adoption of concurrent separation logic as the basis of our reasoning achieves superior compositionality of the reasoning including proof reuse.
History-based methods. The CertiKOS project [15, 25] developed mechanised techniques for the specification and verification of fine-grained low-level code with explicit support for abstract atomicity and progress verification. The approach is based on histories: The abstract state is a log of the abstract events of a trace; and the specification of an atomic operation inserts exactly one event in the log. Local reasoning is achieved by rely/guarantee through complex automata product constructions. The framework is very expressive, with the downside that specifications are more complex and difficult to read, and verification requires manipulation of abstract traces/interleavings. Our work is similar in aim and scope, but our strategy is different. We try to specify/verify programs using the minimal machinery possible, keeping the specifications as close to the developer’s intuition as we can. As a result, our specifications are more readable (compare our fair-lock specification with the corresponding 30-line specification from Figure 7 in Reference [25]), and our reasoning is simpler (the layered obligation system leads to a more intuitive proof compared to the proof of MCS locks in Reference [25]).18
Contextual refinement. Another approach to specifying and proving progress of concurrent systems is to prove refinement between the implementation and simpler, abstract code acting as a specification [30, 31, 39]. By making sure the refinement preserves progress properties, one can represent the salient termination properties of the implementation by the termination properties of the specification code. The Iris implementation of this idea [39] uses a non-contextual refinement, which means that the refinement is proven between the closed-world behaviour of implementation and specification code, and does not necessarily carry over to arbitrary contexts. This severely hinders proof reuse. The only refinement-based work that is able to modularly verify blocking code is the LiLi logic, discussed below.
There has been work on extending linearisability, characterised as a contextual refinement, to support reasoning about progress properties, e.g., [14]. This work only supports non-blocking operations. Liang et al. [33] study the exact relationship between common progress properties of fine-grained operations and contextual refinement. The study of the contextual refinement induced by our triple semantics is future work.
LiLi. The work closest to ours is LiLi [30, 31]. LiLi was the first concurrent separation logic to prove progress specifications for linearisable concurrent objects with internal blocking [30], and it was then extended to handle external blocking [31]. Although we share most of our goals with LiLi, our approach differs in two important ways.
First, LiLi’s goal is to prove a progress-preserving contextual refinement between the implementation of a module and its specification. Termination properties of implementation code are not represented directly, but in terms of the termination properties of the specification code. Although proofs of clients of the module have to be done outside of the LiLi logic (there is no rule for parallel composition, nor for calling a module’s operation), such proofs would need to reprove the relevant termination properties of the specification code so the properties themselves become available in the proof. Moreover, as we outlined in Section 2 for the CLH lock, the specification code for blocking operations may be non-atomic even in the case of linearisable operations. Instead, we aim at specifications that directly represent termination properties as a logical statement that can be readily used in a client proof and in the proof of the implementation. Our specification format obtains a crucial advantage: It achieves abstraction and can represent atomicity for blocking operations, enabling more scalable and reusable reasoning.
Second, LiLi’s rely/guarantee incorporates a form of liveness invariants through so-called definite actions. Definite actions require the identification of a logical global “queue” of threads where the thread at the front is always able to execute its action and that action implies global progress. This queue is maintained as shared auxiliary state manipulated through ghost code. It is due to this global view that definite actions can side-step the issue of circular reasoning. Our layered subjective obligations push the idea much further, obtaining sound liveness invariants that can be represented thread-locally and without the need for ghost code, improving proof scalability. The design choice of making both rely/guarantee and specification represent blocking via liveness assumptions is the key to making the blocking specifications directly usable in the proof system.

7 Conclusions and Future Work

We have introduced TaDA Live, a concurrent separation logic for reasoning compositionally about the termination of fine-grained blocking concurrent programs and proved a substantial soundness result. Our key contribution is our approach to abstract atomic blocking as the reliance of termination on the liveness properties of the environment. By wholly embracing this point of view, we have designed a rely/guarantee principle that incorporates liveness invariants using layered subjective obligations, a new form of local ghost state, and have extended TaDA’s abstract atomic specifications to provide total specifications for blocking programs using environment liveness assumptions. Through several case studies, we have illustrated how our formalisation of abstract blocking allows for the right level of abstraction in specification and strong thread-locality of the proofs. The result is a verification system with scalable and reusable proofs.
The work presented in this article opens a number of immediate directions for future work on concurrent separation logics. A first direction is to extend TaDA Live to prove general liveness properties beyond termination. A possible way to achieve this is to wrap a refinement calculus around TaDA Live’s atomic specifications, as was done in the safety case in TaDA Refine [35]. Specifications would be able to sequentially compose atomic triples and take fixpoints, thus being able to specify linear-time temporal properties of infinite traces. A second direction is to study general fork/join concurrency and provide a generalisation of the liveness rely/guarantee necessary to accommodate patterns typical of distributed/reactive systems, where long-lived maintenance threads interact with an environment to realise an operation’s effect. A third direction is to transfer ideas from TaDA Live to the Iris framework [23] to provide a Coq-mechanised environment for reasoning about the termination of concurrent programs. More widely, we hope that our emphasis on environment liveness invariants for proving termination will transfer to other forms of reasoning about blocking concurrent programs.
APPENDICES
We present here omitted definitions and details of proofs. An extended version of this article is also available at https://arxiv.org/abs/1901.05750 [12].

Footnotes

1
We simplify the rule for this introductory section, informally using the standard LTL notation for always P (i.e., holds at every point of a trace) and for eventually P (i.e., holds at some point of a trace). The full rule is given in Section 4.6.
2
The assertion stands for .
3
This liveness form of deadlock is also known as “livelock,” since every thread is always taking steps, although no global progress is made by any of those steps. This is not to be confused with the safety property of “global” deadlock, as found in languages with blocking primitives.
4
Indeed, LiLi’s goal is limited to proving that a module’s implementation refines its specification. The code of the module cannot fork threads, but any multi-threaded client of the module is guaranteed not to be able to distinguish the implementation from the specification.
5
We omit the region identifier to simplify the discussion.
6
In Reference [31], this is the result of applying the appropriate wrapper to the lock specification: .
7
We typically omit the pseudo-quantifier in the case where the set has just one element, e.g., .
8
To lift this restriction, one can use the “variables as resources” technique [2]. Our restriction simplifies the handling of the local state without sacrificing expressivity: Any local variable in the scope common to both threads that needs to be modified can be instead implemented by using a shared memory cell.
9
TaDA interprets the separating conjunction intuitionistically. With TaDA Live, we interpret it classically so as not to lose information about the obligations.
10
In Iris, levels roughly correspond to masks.
11
The difference is the , which is used in the uncommon case when the linearisation point is non-deterministic and the Hoare postcondition depends on this non-deterministic choice.
12
We use the standard notation to mean .
13
Here, ranges over subsequences of traces.
14
Recall that stands for .
15
Notice that “ is visited infinitely often” is equivalent to .
16
The proof of ticket lock requires some minor ghost code to side-step the lack of support for helping.
17
The proof works with , too, but the ordered version better emphasises the dependency between the locks.
18
The proof is a variation of the one for CLH.
19
To lift this restriction, one could use standard techniques, such as “variables as resources” [2]. Our restriction minimises the noise generated by handling local state in the formalisation of the model and the assertions. Note that expressivity is not really limited by our restriction: Any local variable in the scope common to both threads that needs to be modified can instead be implemented by using a shared memory cell.

A Some Proofs Conventions

A.1 Specification Abbreviations

Here is a summary of all the abbreviations we use in writing specifications. The full hybrid specification format is
The quantification is a normal existential quantification, but its scope extends over both the Hoare and the atomic post-conditions. We omit it when does not occur in the triple.
An omitted pseudo-quantifier is to be understood as the trivial pseudo-quantifier , for an unused .
The triples
are abbreviated with
respectively, where , , and (for technical reasons, the atomic pre-/post-conditions in the general triples cannot contain program variables). In other words, the program variables mentioned in the atomic pre-/post-conditions refer to the value stored in them at the beginning of the execution of the command. Most commonly, the program variables used this way are actually not modified by the command.

A.2 Guard and Obligation Algebras

Defining a guard algebra can be tedious. In program proofs, we will define guard algebras by generating them from some guard constructors and some axioms defining the guard operation.
Consider two common guard patterns in TaDA Live: the use of an exclusive guard and the , , pattern used to represent possession of a lock in ghost state.
An exclusive guard, , is very commonly used to express some exclusive permission on some shared resource, which cannot be composed with itself: i.e., . Local ownership of is exclusive in that no other thread can at the same time assert ownership of . A ubiquitous use of this guard is in representing the resource offered by a module.
The , , pattern is commonly used to represent ownership of a lock guarding a resource. The thread records its ownership of a lock by holding the ghost state , which cannot be composed with the guard , which records that the lock is unlocked. The region holds the associated guard , which can be recombined with the guard once the thread releases the lock to form the guard : .
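Written out with hypothetical constructor names of our own choosing (they are not the paper’s symbols), these two patterns amount to axioms of the following shape:
\[
\mathbf{E} \bullet \mathbf{E} \ \text{undefined}, \qquad
\mathbf{K} \bullet \mathbf{U} \ \text{undefined}, \qquad
\mathbf{K} \bullet \mathbf{L} = \mathbf{U},
\]
where \(\mathbf{U}\) records that the lock is unlocked, \(\mathbf{K}\) is the thread-local token witnessing ownership of the lock, and \(\mathbf{L}\) is its counterpart kept in the region: locking splits \(\mathbf{U}\) into \(\mathbf{K} \bullet \mathbf{L}\), and unlocking recombines them.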
We explain the construction of a guard or obligation algebra from these axioms by introducing some unsurprising auxiliary definitions.
Given a set , the set is the set of multisets over ; is the empty multiset (i.e., the function mapping every element to 0) and is multiset union (i.e., the pointwise lifting of ). The expression denotes the multiset containing the elements . Given a set , the free commutative monoid over is the monoid . Given a commutative monoid and a congruence relation , the quotient is a commutative monoid. Given a commutative monoid and a set with , the PCM over induced by is where
and for , if , otherwise undefined.
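One standard reading of this construction, spelled out in notation of our own choosing, is:
\[
\begin{aligned}
&\text{free commutative monoid over } X: && (\mathrm{Mult}(X),\, \uplus,\, \emptyset),\\
&\text{quotient by a congruence } {\equiv}: && (M/{\equiv},\, \bullet,\, [1]_{\equiv}),\\
&\text{PCM induced by } F \subseteq M \text{ (with } 1 \notin F\text{)}: &&
 a \bullet_F b \;=\;
 \begin{cases}
   a \bullet b & \text{if } a \bullet b \text{ lies outside } F,\\
   \text{undefined} & \text{otherwise.}
 \end{cases}
\end{aligned}
\]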
For each guard algebra to be defined, we will introduce a number of symbols , called guard constructors, each with some guard domain for some . They induce the set of guard terms. By specifying some guard constructors, a congruence and a set , one obtains the guard algebra .
The guard constructors are specified by listing their domains, writing to mean , as, in certain cases, we may want to further restrict the domain of the guard constructors to simplify the reasoning.
The congruence is specified as the smallest congruence satisfying given axioms of the form
which we write using the syntax
The set is specified as the smallest set satisfying given axioms of the form
which we write using the syntax
Example A.1.
The guard algebra used in Example 4.1 is expressed by using two guard constructors with empty domain, and , and axioms: . Note that with no congruence axioms, the induced congruence relation is equality. These induce the guard algebra with elements .

A.3 Levels

Region levels are used to remove the possibility of unsound duplication of resources by opening regions. The presentation of the program proofs omits the level annotations to ease readability. The levels can be unambiguously derived from the sequence of application of rules UpdReg and LiftA.
To see the problem, consider a generic region ; we have : This is the essence of what it means for a region to be a shared resource. When we open a region, however, we obtain ownership of the contents of its interpretation ; the interpretation can contain resources that are not shared, for example, heap assertions, in which case, we have . Without constraining levels, one could start with , produce the equivalent , open the first region assertion with UpdReg or LiftA, then open the second region assertion and end up with . Levels are a means to avoid unsound derivations that use the above chain of implications. A level in the context of a judgement records that all the regions of level or higher might already have been opened and should not be opened again. The rules that do open regions (rules UpdReg and LiftA) can only open a region of level if the level in the context is , and they record the operation by setting the context level to , so the region cannot be opened again.
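Schematically, writing $\mathbf{r}(x)$ for a region assertion whose interpretation owns a heap cell $\ell \mapsto v$ (hypothetical notation), the derivation that levels rule out is:

    $\mathbf{r}(x) \Leftrightarrow \mathbf{r}(x) * \mathbf{r}(x)$   (region assertions are duplicable);
    opening the first copy yields $\ell \mapsto v * \mathbf{r}(x)$, and opening the second yields $\ell \mapsto v * \ell \mapsto v$, which is unsatisfiable.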

A.4 Region Type Specifications

—Abstract state domain. It can be tedious (and detrimental to readability) to always explicitly write the domains of quantified variables in the assertions of program proofs, especially when they can be easily inferred from context. Consider the case of regions. Some of the rules, for example, MkAtom, need the precise domain of the abstract state (), because it needs to match the pseudo-quantifier’s domain (). To improve readability, we adopt the following strategy: Suppose the region type has abstract state in the domain . We can define the interpretation function so it constrains the domain of the abstract state accordingly: . Then, we trivially have that . We thus can omit the domains from existential quantification and implicitly apply rule Cons whenever the domain information is needed in the proof.
To further ease the specification of region types, when defining a new region type, we will introduce the domain of the corresponding abstract state and omit the obvious constraint from the interpretation definition.
—Fixed parameters. It is very common to have a product domain as abstract state of regions, as one needs to assemble in an abstract state many bits of information that characterise the region’s state. Typically, the abstract state domain can be seen as the product of two domains , the domain of the fixed parameters and the domain of the non-fixed parameters . (Both and can be themselves products of simpler domains.) The fixed parameters are set at the point of creation of the region and can never be updated; they typically define the “interface” of the region. For example, the address of a lock module instance would be the fixed parameter of a hypothetical region , with the non-fixed parameter representing the state of the lock. When introducing a new region type, we will specify which parameters are fixed, and they will be omitted from the region interference specification, as they are left untouched by every transition. For example, for the region above, we may write and to denote and .
—Interference protocols and atomicity contexts. Definition 3.13 requires to be monotone in the guards, reflexive and closed under obligation frames. Since writing the whole function can be tedious and redundant, we will only write a number of expressions of the form
(25)
which will set , and implicitly complete the function by closing under the properties above.
Similarly, atomicity contexts associate, with some region identifiers, records that have (unguarded) transition relations as their last component . We therefore borrow the syntax from Equation (25) and write to specify as the minimal relation that includes these relations and is closed under obligation frames.

A.5 Proof Patterns

There are some recurring patterns in TaDA Live proofs, which we summarise here to help the reader navigate the examples.
—The exclusive guard. Take for example a concurrent counter module. Abstractly, we have a (fixed) location for the module instance and an abstract state representing the current value of the counter. Since this is a concurrent counter, it uses internally shared resources. We therefore have a region encapsulating the shared internal resources of the counter. From the perspective of the client, however, at the moment of creation of the counter with, say, an operation , the counter is exclusively owned by the client. This, for example, is reflected in the fact that, until the client shares the counter or invokes operations on it, the value of the counter will be stably 0. To represent this fact, one typically defines an exclusive guard guarding each transition of the region interference: e.g., . Then the operation can be given the specification
which gives to the client the stable assertion . (Note how is not stable.) To re-share the counter, the client will create its own region encoding the invariants governing the interaction over the counter (and the other resources of the client), the interpretation of which will contain .
Note that the assertion has a very different meaning if occurring in the atomic precondition of a triple, as opposed to the Hoare precondition: The resources in the atomic precondition are not owned by the local thread, but only acquired instantaneously at the linearisation point. For example, in the triple
the exclusivity of is only granted instantaneously to the thread acting on it atomically, i.e., either the environment during the interference phase as allowed by the pseudo-quantifier or the local thread at the linearisation point.
Since this pattern is ubiquitous, we reserve the guard constructor for this use and will omit the axiom when specifying guard algebras.

A.6 Modules

TaDA Live is a logic that emphasises proof modularity. One aspect of this is that, when a program is naturally structured as a collection of modules, one wants the correctness proof to decompose into independent proofs of each module, each exporting specifications for the externally accessible operations, together with a proof of the client of these modules that depends only on these abstract module specifications.
In our model, a module is nothing but a conceptually related set of operations that are defined in a let statement: . Here, is what we call the “client” of a module offering operations . The operational semantics deals with let statements by populating a function associating each function name with its formal parameters and its implementation .
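As a schematic illustration (the syntax is only indicative, with hypothetical operation names and bodies), a client of a lock module could be written as:

    let lock(x) = C_lock in
    let unlock(x) = C_unlock in
    C

Under this reading, the function implementation context maps lock to (x, C_lock) and unlock to (x, C_unlock), and the specifications of lock and unlock are the only interface the proof of C relies on.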
Similarly, the proof of correctness of , will need to fetch the abstract specifications of the functions (which appear as free names in ) from some mapping from function names to their specifications. The fact that the implementation of each operation satisfies its specification is checked in the proof derivation for the let statement (rule Let) but then the proof of the client and of the module are done separately.
For this reason, we present proofs of just a module against its abstract specifications, which can be used as if they were axioms in the proof of any client using them. To talk about modules independently of their clients, we introduce the notation , which can be understood as populating an entry of for . We will then prove some specification for that will populate an entry of for .
In the proof of some client, we will recall the module specifications that are assumed in and use rule Call to handle the calls to the operations of the module. We will omit from the proof outlines and the applications of rule Call for readability.

A.7 Proof Outlines

In program proof outlines, we adopt a number of notational conventions. First, unless it involves a viewshift or we want to highlight it, we will apply rule Cons without mentioning it. Similarly, we omit the obvious applications of rules Var, Call, and SubPq and the axioms (i.e., the rules associated with primitive commands).
Next, in an outline such as
the specification of the inner step inherits the context and the pseudo-quantifier of the specification of the outer step, as in the derivation on the right.

B The TaDA Live Proof System

In this section, we present the full proof system of TaDA Live.
For brevity, we use the metavariable to range over expressions of the form and is used in rules when the pseudo-quantification is simply copied verbatim from premise to conclusion.
In the rules, we use the following abbreviation:
The -safety condition is defined in Appendix B.2.1 and can be typically proven by using Lemma 4.2.

B.1 Liveness Rules

For reference, we reproduce the liveness-related rules:

B.1.1 The Environment Liveness Rules.

The Environment liveness rules use the condition (Definition 4.3) recalled here for convenience:
Definition B.1 (). Given assertions , and , the condition holds if and only if, for arbitrary , letting
the following holds:
We reproduce below for completeness the rules to prove the environment liveness condition.

B.2 Atomicity Rules

We first give the formal definition of -safety and prove its properties, and then give the general forms of rules LiftA, MkAtom, and UpdReg, which are the ones dealing with proving atomicity.

B.2.1 The -safety Condition.

The rules of TaDA Live dealing with opening and closing regions (rules UpdReg and LiftA) require the -safety side condition for the postcondition. While the definition of -safety is technical, its intuition is simple: Those are the assertions that preserve their meaning when interpreted at level or at level . The only possible contradictions arising by increasing the level come from assertions about the state and environment obligations of regions that are open at but not at .
Definition B.2 (Havoc).
Let . The set is the set of region IDs of that are closed at level but not at level . We define the function on worlds:
We extend it to a function on sets of worlds in the obvious way: .
Definition B.3 (-safety). A set is -safe if . An assertion is -safe, written , if, for all , is -safe.
Since proving -safety in general involves meddling with the semantics of assertions, we provide the following lemma that can be used to immediately prove all the -safety side conditions involved in our program proofs:
Lemma B.4.
The properties below hold, for arbitrary :
(1)
, and are -safe.
(2)
and are both -safe.
(3)
If , then is -safe.
(4)
If are both -safe, then so are , , and .
(5)
If is -safe for all , then is -safe.

B.2.2 Generalised Atomicity Rules.

The following rules are the general forms of rules LiftA, MkAtom, and UpdReg of Figure 9.

B.3 General Forms

The following rules are the general forms of some of the rules in Figure 9:

B.4 Logical Manipulation Rules

The rules below allow for basic logical manipulation.

B.5 Axioms

B.6 Standard Hoare Rules

B.7 On Stability Checks

A triple is well-defined, according to Definition 3.24, if the Hoare pre- and post-conditions are both stable assertions. The rules all assume the triples in the premises are well-defined and ensure that the triple in the conclusion is well-defined as well. The only exceptions are rules MkAtomG, SubPq, and Elim, where the Hoare pre-/post-conditions should be checked for stability to ensure the conclusion is a well-defined triple. We omitted these stability checks from these rules to improve readability.
In practice, however, this way of handling stability has a drawback: If one starts with a goal that has unstable pre-/post-conditions, then one would only see the mistake much further up in the proof, typically at applications of AtomW or Frame (which requires stability of the frames) just before applications of the axioms. Therefore, in practice, to make the proof fail early in case of mistakes, one would want to additionally check stability at the top-level goal and applications of Par.

C Case Study: Lock-coupling Set

We develop the proof of a lock-coupling set module, which represents a set of integer numbers using an ordered linked list. The module interface presents three operations, , , and for adding and removing elements from the abstract set representing the module’s state, as well as checking membership of an integer in this set.
Each cell of the linked list contains either a value from this set or (representing dummy beginning and end nodes, respectively), a pointer to a lock and a pointer to the next cell of the linked list (null for the final cell, with value ). The values of the cells in the linked list are sorted in strictly increasing order.
The value and lock associated with a cell in the linked list are immutable; however, the module’s protocol allows a thread holding the lock associated with a cell to change the value of the pointer to the next cell, allowing cells to be added to and removed from the linked list.
The internal operation performs a traversal of the linked list using hand-over-hand locking to, given some value , find and lock the two adjacent cells with values and such that . All the operations would use to obtain ownership of the nodes that they need to modify.
To perform this hand-over-hand locking, the operation must hold the lock associated with a cell while locking the lock associated with the next, therefore the layers of the locks associated with each cell of the linked list must strictly decrease as the list is traversed.
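The following sketch conveys the shape of such a traversal; it uses the record syntax introduced below, the variable names are illustrative, and the exact code is the one in Figure 28:

    locate(v) {
      p := head;
      lock(p.lock);
      c := p.next;
      lock(c.lock);
      while (c.value < v) {
        unlock(p.lock);
        p := c;
        c := p.next;
        lock(c.lock)
      }
      // here p.value < v <= c.value, and both p and c are still locked
    }

Note how the lock on p is released only after the lock on c has been acquired, which is what forces the strictly decreasing layer assignment discussed next.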
As we explained in Section 5.5, the example is challenging for the handling of layers. Intuitively, we want to associate layers with each lock in the list, in strictly decreasing order. This represents the dependencies between the locks introduced by the order of the traversal: The release of lock at position from the head depends on the liveness of the lock at position . This introduces two challenges: We need to associate different layers to each instance of a lock while the lock specifications mention fixed layers; and we need to dynamically reassign layers to locks as the list grows. As we already anticipated, we can solve both challenges by a suitable generalisation of the lock specifications. Let us first introduce this generalisation formally, and then use it for the proof of the lock-coupling set.

C.1 Interlude: A Generalisation of Fair Lock Specifications

We generalise the fair lock specifications we used for the CLH lock in three ways:
(1)
we parametrise the specifications with client-definable layer maps;
(2)
we provide a viewshift to the client with which it is possible to reassign layers;
(3)
we add the operation, since the lock-coupling set’s operation disposes of the removed cells; we omit its implementation and proof, as it is standard.
First let us recall the definition of a layer map. Given two partial orders and , a function is strictly monotone if . A layer map is a strictly monotone function between the two partial orders.
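In symbols, writing the two layer orders as $(\mathcal{L}_1, <_1)$ and $(\mathcal{L}_2, <_2)$ (our notation), $f \colon \mathcal{L}_1 \to \mathcal{L}_2$ is strictly monotone if $\alpha <_1 \beta$ implies $f(\alpha) <_2 f(\beta)$ for all $\alpha, \beta \in \mathcal{L}_1$.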
Let be the level of the region used in the proof of the CLH lock. We generalise the client-facing CLH lock specifications as follows:
In particular, the abstract predicate represents a lock resource with abstract identifier (i.e., a pair of region identifiers; the client will treat this type opaquely), concrete address , and abstract state .
Moreover, the specifications would export to the client the following viewshifts, for every , and every :
(26)
(27)
Note that the naming choice here suggests CLH as the implementation to keep the discussion grounded, but the specification would be the same for any other fair lock implementation.
We now sketch the modifications needed to adapt the proof of CLH presented in Section 5.2 to prove the generalised specification.
First, we pick, just as in Section 5.2, . We then need to parametrise the two regions with a layer map , for an arbitrary . We include it in the regions’ abstract state: and . The abstract predicate for the lock can then be defined as:
We similarly parametrise every obligation with a layer map, obtaining obligations and with layers lay lay .
The protocol of the regions is extended by having each transition preserve the layer map. Before extending the protocol with a transition that can update the layer map, we motivate the need for fractional permissions by showing what goes wrong without them. Suppose we just provide a transition, guarded by , to update the current layer map to an arbitrary new one, and define . With this protocol it would be impossible to prove the layer-map-altering viewshift (27). The reason lies in the definition of the interpretation of :
In the case where , which represents the situation where there are threads enqueued waiting to acquire the lock, the interpretation ensures that the environment will contain obligations for each issued ticket . When we try to prove the viewshift, we need to obtain with the new layer map, which can be obtained only by creating the corresponding resources out of thin air. These would be created in the local state, leaving us with , which cannot be viewshifted to the desired , since there is no way to get rid of the local obligations. Conceptually, this encodes the following fact: If we were to remap the layers of the lock when other threads are queued, then the obligations held by those threads would become unfulfillable, and we would inherit copies of them with the new mapping, which we also would not be able to fulfil on behalf of the other threads.
To resolve this impasse, we need to allow the layer map to be updated only when there is no thread queued to acquire the lock. This way, we would have and so no environment obligations lying around. We cannot, however, expose the queue in the abstract state of the lock without losing the atomicity of the lock specifications. By introducing fractional permissions giving the right to enqueue on the lock, we can encode the emptiness of the queue by asserting that we are the only one holding that right.
To achieve this technically, we start by encoding fractional permissions as a guard algebra. We introduce guards with the axioms and . We then define the abstract predicate
For technical reasons explained later, we introduce guards with exactly the same axioms as the guards. To encode the fact that full permissions imply empty queue, we adapt the interpretation of as follows:
From we can deduce that inside the region interpretation, and hence .
Finally, we add to the protocol of the possibility of updating the layer map when owning full permissions:
The reason for including the is as follows: When a thread enqueues on the lock, it gives up a non-trivial fraction of the permission it owns to be able to make . When it dequeues, it should get back that fraction; the guards are obtained as “leftovers” when putting in the region’s interpretation. Those are proof that the region interpretation has at least in it when we want to get it back.
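For reference, a fractional-permission guard algebra of the kind used here can be sketched as follows (our rendering; the actual algebra also includes the second, “leftover” family of guards with the same axioms):

    guards $\pi$ with $\pi \in (0, 1]$;   $\pi_1 \bullet \pi_2 \triangleq \pi_1 + \pi_2$ if $\pi_1 + \pi_2 \le 1$, undefined otherwise.

Owning the full permission $1$ is then incompatible with any other thread owning a fraction, which is what lets the region interpretation conclude that the queue is empty.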
Adapting the proof of Section 5.2 to use these generalised definitions is a routine application of standard TaDA patterns. The satisfiability of the layer constraints is preserved by strict monotonicity of layer maps.

C.2 Correctness of the Lock-coupling Set

—Code. The implementation of the module’s operations is in Figure 27 with the implementation of the constructor in Figure 28. We write for the deallocation of the three contiguous cells from address . The auxiliary operation (also in Figure 28) is meant to only be used internally. The code uses a “record” syntax for readability, desugared as follows:
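One plausible desugaring, assuming each node occupies three contiguous heap cells laid out in the order value, lock, next (the layout actually used is the one fixed by the code in Figure 27):

    x.value ≜ [x],   x.lock ≜ [x + 1],   x.next ≜ [x + 2].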
—Specifications. The abstract predicate represents a lock-coupling set at address abstractly representing the set .
—Region Types. This proof will utilise two region types: and where , , , , . Here, , , , and are fixed parameters of both regions. The lock-coupling set resource is abstractly represented by the predicate
—Guards. We introduce a number of guards that are used to represent ownership of information regarding nodes of the linked list. To ease readability, we will adopt a record notation for tuples (i.e., tuples with named positions). In particular, we will make a record with the following information for each node: an address (), a lock address (), a lock abstract identifier (), a value (), and a layer (). A guard records the list of values represented by the linked list. An unlocked node is represented by the guard where is a record of the value, lock address/id, layer associated with the node of the list at address . So, in particular, the cell at would store the tuple and we will have the resource associated with its lock (we will explain the layer map when introducing the region interpretations). A locked node is represented by two guards and , following the usual pattern for locks. These guards additionally store the address of the next node, which is stable if we hold the lock at . Moreover, assuming is the node following , if we hold the lock at , then we know that all the information in is stable (i.e., everything but the address of the node following ). To represent this, we make use of a guard .
The following axioms reflect the operations we desire to perform on the nodes. For locking/unlocking a non-terminal node, when :
For locking/unlocking the last node:
For inserting a node between and :
For deleting a node :
Then, the following axioms keep the guard’s information for the nodes consistent:
—Layers and Obligations. We use the layer structure (where ), ordered by the lexicographic ordering and with and . Roughly, take a non-initial node that is at position from the end of the list; we will associate with it the layer , which is guaranteed to be strictly greater than any layer associated with the nodes following in the list. Intuitively, no matter what has been chosen for the proof of the lock implementation, there are enough layers between and for the proof of the lock of not to conflict with the lock of the node ahead.
We construct obligations out of the atoms (representing the “key” of the lock associated with the layer ) and (representing a “free” spot at layer ) for . We set lay and lay . We also define an obligation acting as a “reservoir” of atoms:
We can always split a pair of and atoms from the reservoir: .
—Interference protocol. The guard-labelled transition system of the region is:
and the guard-labelled transition system of the region is:
They represent, in order: the acquisition of the first lock, obtaining both the key for that lock and a “free” layer spot; the acquisition of a next lock, obtaining its key; the release of the previous lock, swapping layer of the next with the free one; the insertion of a node that gets assigned the free layer between the two adjacent locks held (used by ); the deletion of a node that also drops the non-needed free layer spot (used by ); the drop of a non-needed free layer spot (used by ); the release of a lock.
—Region interpretation. The lock-coupling set internally represents the elements of the set with a lock-coupling linked list. To represent these in ghost state, we will use a list of quadruples of each cell’s value, the state of the associated lock, as well as its layer and region identifier. We introduce the predicate , which verifies that the values in the list are in strictly increasing order, while the layers of the associated locks are in strictly decreasing order:
We also introduce a function that allows us to extract the associated set of values from such a list, , and a function that similarly allows us to extract a list of just the values, retaining their order, :
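Spelled out in our notation, for a list $ns = [n_1, \ldots, n_k]$ of such quadruples, these definitions take roughly the following form:

    $\mathrm{sorted}(ns) \triangleq \forall i < k.\ n_i.\mathrm{val} < n_{i+1}.\mathrm{val} \wedge n_i.\mathrm{lay} > n_{i+1}.\mathrm{lay}$;
    $\mathrm{set}(ns) \triangleq \{\, n_i.\mathrm{val} \mid 1 \le i \le k \,\}$,   $\mathrm{vals}(ns) \triangleq [n_1.\mathrm{val}, \ldots, n_k.\mathrm{val}]$.

The names sorted, set, and vals are ours; the proof only relies on the stated properties.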
The interpretation of the outer region is a straightforward wrapper around the inner one:
As usual, the outer region has two purposes: hiding internal state so the operations can be seen as abstractly atomic and keeping track of the obligations held by threads.
The interpretation of the inner region encapsulates the concrete heap cells and the lock-related guards and obligations:
where the resources associated with each node are described by the following auxiliary predicates:
The layer map maps the layers of to the ones of as follows:
—Proof of . We use the following specification for the internal operation :
where represents the ownership of two adjacent list nodes representing value and with (where is the value we wanted to locate in the list):
The proof of is shown in Figures 29, 30 and 31. In the outlines, we expand the record notation to tuples, e.g., . We detail here the application of LiveC. The associated environment liveness condition is proved by:
(28)
—Proof of . The proof of the operation builds on the specification of . We show its outline in Figure 32 with a more detailed derivation showing how the first unlock operation is handled in Figure 33.
—Proof of , and . We omit the proofs of the , , and operations, as they do not add much to the presentation. can be proved in the standard way by keeping track of the nodes created locally and with a final viewshift to create the two nested regions representing an empty set. The hard part of the proofs of and is the call to , which has already been presented in detail. The rest is handled analogously to .

D Programming Language Definition

We will make regular use of partial functions. We write for the set of partial functions from to and for the set of finite partial functions. Given , we write if is undefined on , and . We will use the notation for the finite function that maps each of the to and is undefined on any other input. Given elements and , and functions and , we define the functions and by:
We write for the partial function that is undefined on but otherwise behaves like . The union of two partial functions is a well-defined partial function as long as they agree where their domains overlap.
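For reference, these operations have their standard readings (our notation): $f[x \mapsto v]$ maps $x$ to $v$ and behaves like $f$ elsewhere, while $f \setminus x$ is undefined on $x$ and behaves like $f$ elsewhere; the union $f \cup g$ is defined pointwise on $\mathrm{dom}(f) \cup \mathrm{dom}(g)$.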
We use the set of Booleans, , a set of values, , a set of program variables, , and a set of function names, . The set contains a special element, , that holds a function’s return value. Heap addresses are represented by natural numbers, . The natural numbers in represent both numeric values and heap addresses.
Definition D.1 (Numeric and Boolean Expressions).
Let be an arbitrary set of variables and an arbitrary set of values. The set of numerical expressions, , and the set of Boolean expressions, , are defined by the grammars:
The numeric and Boolean program expressions are defined by the sets and , respectively. In Section 3.3, we also work with logical expressions built from both program and logical variables and values; this is why the expression definitions are given over arbitrary sets of variables and values.
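A typical instance of such grammars, given here only for orientation (the precise productions are those of the original definition):

    E ::= v | x | E + E | E - E | E * E
    B ::= true | false | E = E | E < E | not B | B and B | B or B

with v ranging over values and x over variables.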
The functions and provide the sets of free variables for the numeric and Boolean expressions, respectively. They are defined inductively on the structure of expressions by:
Definition D.2 (Commands).
The set of commands, , is defined by the grammar in Figure 34 where , , , is a list of pairwise distinct variables, and .
We use [] to denote the value of the heap cell with address given by . In Figure 35, we define operators and , which identify the variables that a command can access and the variables that are potentially modified by a command, respectively. In a command , we apply a strong syntactic restriction that . Each individual thread is still able to modify variables that are created locally and to modify shared heap cells, but is not allowed to modify the free variables.19 In a function definition , we use the natural restriction . Also for simplicity, we assume each function name is given a definition at most once. The function returns the function names occurring in that are not bound by a .
Definition D.3 (Variable Store).
A program variable store, , is a finite partial function from program variables to values. The right-biased union of variable stores, , is defined by:
Definition D.4 (Expression Evaluation).
Let be an arbitrary function from an arbitrary set of variables to values. The numeric expression evaluation function, , and the Boolean expression evaluation function, , are defined by:
The program expressions are evaluated using program store . In Section 3.3, we also work with logical expressions that are evaluated over both program and logical variables and values. The right-biased union of stores is used to describe how, when nesting scopes, a variable occurrence is bound by the innermost binder surrounding it. The notation denotes .
Definition D.5 (Heap).
A heap, , is a finite partial function from addresses to values. The set of heaps, , forms a PCM with defined only if .
Definition D.6 (Function Implementation Context).
A function implementation context, , is a finite partial function from function names to pairs comprising a finite list of distinct variables and a command.
We write where variable list represents the function arguments and represents the function body. We use the notation and to refer to the arguments and function body of , respectively.
To describe the behaviour of local variable binding and function calls, we define program states that extend commands with variable stores. For example, the program state indicates that the command is evaluated in the current store updated with the variables in .
Definition D.7 (Program States).
The set of program states, is defined by the grammar:
The ✓ indicates a terminated program. It is a technical device, so every , including , takes at least one step.
In the operational semantics, we need to keep track of which thread is originating each step to be able to define later concepts of fairness of the scheduling. We do this tracking using thread identifiers, which are strings of letters (for the left thread) and (for the right thread). will be used to denote the thread identifier that is an empty sequence. Intuitively, such a string identifies a single thread as the path in the syntax tree of parallel compositions at which the thread is found.
Definition D.8 (Command Semantics).
A scheduler annotation is an element of the set
A program configuration is an element of the set . Let . The operational semantics of the commands is given by the labelled relation, , defined in Figure 36 and Figure 37. We write for . We also define .
To simplify the development, in our programming language, the initial state’s store assigns arbitrary values to the free variables of a program. With this assumption, every reference to a local variable will be in the domain of the current store. This ensures that in every application of the rules in Figure 36 and Figure 37 to construct a trace, the evaluations of (Boolean) expressions are well-defined.
Definition D.9 (Threads).
Given a program state , the set is the set of threads of that can take a step. The function is defined as follows:
Definition D.10 (Program Traces and Fairness).
We call program traces the infinite sequences of the form where, for all , , . We use for ranging over infinite suffixes of program traces and for the set of all program traces. For a program trace , we define , and . We define the set of -program traces
A program trace is (weakly) fair if and only if:
(29)
(30)
That is: A trace is fair if, at any point in time, every thread that can take a step (and the environment) will eventually be scheduled.
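In the spirit of conditions (29) and (30), weak fairness can be rendered as follows (our notation, with “step $j$” referring to the label of the $j$-th transition of the trace):

    $\forall i.\ \forall \tau \in \mathrm{threads}(c_i).\ \exists j \ge i.\ (\text{the step at } j \text{ is a } \tau\text{-step}) \lor \tau \notin \mathrm{threads}(c_j)$;
    $\forall i.\ \exists j \ge i.\ \text{the step at } j \text{ is an environment step}$.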
The open-world program semantics defines the behaviour of a command when run concurrently with an arbitrary environment. This semantics interleaves steps from two “players”: the local thread given by the relation; and its environment given by the relation, respectively.
Definition D.11 (Open World Semantics).
We call traces the infinite sequences where, for all , , . We use for ranging over infinite suffixes of traces and for the set of all traces. For a trace , we define , and . The function is defined by where
The open-world program semantics function, is the function such that
The notation is syntactic sugar for .
Definition D.12.
A trace is locally terminating, written , if it contains finitely many occurrences of .
Remark 3 (Design of Semantics).
We made some design choices in crafting this semantics, with the motivation of making manipulation easier in the proofs. The first choice is to model environmental steps explicitly. These steps drive the argument about progress in the presence of blocking, where the local thread is not able to make progress in isolation but is relying on the environment actively performing some state changes that would lead to local progress.
The second choice we highlight is that the semantics of a program only contains infinite traces. This might seem odd when the goal is proving termination. Traces that locally terminate simply have an infinite tail of environment steps. To simulate a closed system, one can select for the traces where the environment steps preserve the heaps. More importantly, we strip the information about threads and program state, which means that information about when the local thread terminated (in the form of ✓ or ) has been erased. However, by construction, traces obtained from fair program traces can only contain finitely many local steps if the program terminated, justifying our definition of local termination.
Example D.13.
The traces in can be characterised as follows: They all start from some configuration with . A (possibly zero) finite number of environment steps follow; these steps preserve the store, but arbitrarily alter the heap, or they lead to a fault, terminating the trace with an infinite tail of steps. If no fault happened, then a local step is taken from some configuration for an arbitrary . If , then the local step leads to a fault, leading again to a tail. Otherwise, it leads to the configuration . After that, there is an infinite number of environment steps, which again preserve the store but arbitrarily mutate the heap or lead to an infinite fault tail.

E Soundness

In this section, we provide the details of the soundness of the following rules: LiveC, Par, While, Frame, LiveO, and LiveA. These are the only proof rules in TaDA Live that bring in non-trivial liveness information. All other proof rules follow in the same way as for TaDA, with the liveness constraints on the traces being identical between the antecedent and consequent of such rules or being trivial in the case of command axioms. We will focus particularly on the liveness argument for these rules.
We start by giving some technical definitions omitted from the main text and then move to the soundness argument.

E.1 Atomic World Rely

Recall that the atomic world rely relation, , coincides with the smallest reflexive relation closed under the rules of the world rely (Figure 6), with the restriction that rules wr and wr can be applied at most once per region identifier.
Definition E.1 (Atomic World Rely Relation).
The atomic world rely relation, , is defined as , where , taking , is defined to be the smallest reflexive relation closed under:

E.2 Environment Liveness Judgement Semantics

We give semantics to the judgements defined in Figure 10.
[Captions of Figures 10–37, referenced throughout the appendices:]
Fig. 10. Environment liveness condition rules.
Fig. 11. Illustration of rule EnvLive and the condition.
Fig. 12. Code of spin lock operations.
Fig. 13. Spin lock: proof of .
Fig. 14. Spin lock: Proof of loop body.
Fig. 15. Spin lock: proof of and . Here, Step 3 is LiftA, Frame, SubPq.
Fig. 16. Illustration of the memory layout of CLH lock.
Fig. 17. Code of CLH lock operation.
Fig. 18. Outline of CLH lock proof.
Fig. 19. Proof outline of the FAS call of CLH lock.
Fig. 20. Application of While in the CLH lock proof.
Fig. 21. Proof outline of the CLH ’s loop body.
Fig. 22. Proof outline for the linearisation point of CLH . Step 5 is Cons, Elim, AtomW, UpdReg, AElim, LiftA, AElim, Cons.
Fig. 23. Details of the proof of the call of . Step 6 is LiftA, Frame.
Fig. 24. Blocking counter: proof of .
Fig. 25. Code of the double blocking counter operations.
Fig. 26. Double blocking counter: proof of . Step 7 is LiftA, Frame.
Fig. 27. Implementation of the lock-coupling set operations.
Fig. 28. Implementation of and the internal operation.
Fig. 29. Proof outline of .
Fig. 30. Details of while loop in .
Fig. 31. Details of in while loop of . Step 8 is Elim, AtomW, AElim, LiftA, QL, Frame. Step 9 is LiftA, QL, Cons, Frame.
Fig. 32. Proof outline of operation.
Fig. 33. Details of in . Step 10 is LiftA, Cons, Frame.
Fig. 34. Syntax of commands.
Fig. 35. The sets of free and modified program variables.
Fig. 36. The small-step operational semantics.
Fig. 37. The small-step operational semantics, failure cases.
Definition E.2 (Auxiliary Environment Liveness Judgement Semantics).
Let such that
.
.
and let
Then, the auxiliary semantic environmental liveness judgement holds when, for arbitrary , there exist such that and for all , either or there exists some and
such that
hold, where:
and
Definition E.3 (Environment Liveness Judgement Semantics).
The semantic environmental liveness judgement:
where , holds when
Theorem E.4.
For arbitrary , an atomicity context, such that
(31)
(32)
if , then .
Proof.
Assuming and taking arbitrary, the proof proceeds by induction on the structure of derivation trees of the auxiliary environmental liveness condition. We start off with the base cases: LiveO, LiveA, and LiveT.
—case LiveO. In this case, for some ,
(33)
(34)
(35)
(36)
(37)
hold. From this, we need to show .
Let ; clearly, the union of the elements of this set equals , as required. Assuming and setting , which is in given (33), it suffices to show
(38)
(39)
hold to complete the proof.
We start off by showing that (37) holds. Taking and arbitrary, given (34), it is clear that lay holds and, given (35) and (36), holds. From these two conclusions, we can infer that holds. Then, from (36), it is clear that holds, and therefore, .
Finally, (38) follows immediately from (32) and the definition of .
—case LiveA. In this case, for some ,
(40)
(41)
(42)
(43)
(44)
(45)
hold. From this, we need to show .
Let ; clearly, the union of the elements of this set equals , as required. Assuming and setting , which is in given (40) and (42), it suffices to show
(46)
(47)
to complete the proof.
We start off by showing that (45) holds. Taking and arbitrary, given (41), it is clear that lay holds and, given (43) and (44), holds. From these two conclusions, we can infer that holds. Then, from (44), it is clear that holds, and therefore, .
Finally, (46) follows immediately from (39) and the definition of .
—case LiveT. In this case, holds. From this, we need to show . Let ; clearly, the union of the elements of this set equals , as required. From , clearly , and therefore holds, as required.
Finally, we complete this theorem’s proof with a proof of the soundness of the one inductive case, EQuant. Note that ECase can be derived directly from EQuant.
—case EQuant. In this case, for some and
(48)
hold. Letting
From (47), for any there exists such that with the appropriate conditions holding for each .
Setting , given the definition of , clearly and for each , there exists some such that and therefore, the appropriate properties hold due to (47), as required.
By induction on the structure of derivation trees of the auxiliary environmental liveness condition, holds, as required.
Theorem E.5.
If , then .
Proof.
This theorem follows trivially from Theorem E.4.

E.3 Soundness of Frame

For the rest of the section, we let
such that
Lemma E.6.
For arbitrary , and atomicity context, , and :
Proof.
Assume , which is equivalent to:
Substituting , this is equivalent to:
As , holds, and therefore, as required:
Lemma E.7.
For arbitrary , and atomicity context, , and :
Proof.
To start off, assume and . Clearly, this second assumption entails , which is equivalent to:
Choosing the initial and applying the first assumption yields , as required.
Definition E.8.
For arbitrary and , we define the predicate , identifying traces that only modify the program variables in :
Definition E.9.
For :
Lemma E.10.
Given and arbitrary such that , then:
Proof.
Easy coinduction on the small-step operational semantics of commands.
Definition E.11.
We define an auxiliary operation that takes a Hoare frame and an atomic frame and applies the frames at each position of a specification trace if the heaps at each position are compatible with said frames (and returns the empty set otherwise).
This can be lifted to sets of specification traces, :
Lemma E.12.
For arbitrary , , and , then
holds, where
Proof.
Taking , , and arbitrary such that:
(49)
(50)
The proof proceeds by coinduction on the structure of . We consider the rules that can apply from the trace safety judgement: Stutter, LinPt, Env, Env’, and Env.
—Case Stutter. In this case, and . From (49), for some , the following hold:
(51)
(52)
(53)
Given that , using Lemma E.6, (50) implies
(54)
By Lemma E.7, (48) and (50) imply:
(55)
Given that , holds, and therefore:
(56)
From this, given (55), (54), and (51) and using the inductive assumption, we derive:
(57)
Finally, assuming , given (52), we know . From this and (55), we infer that and therefore,
(58)
From (53), (56), and (57) by Stutter, holds, as required.
—Case LinPt. In this case, and . From (49), the following hold for some :
(59)
(60)
(61)
Given that , using Lemma E.6, (58) implies
(62)
By Lemma E.7, (48) and (58) imply:
(63)
Given that , holds, and therefore:
(64)
From this, given (64), (63), and (60) and using the inductive assumption, we derive:
(65)
Finally, assuming , given (60), we know . From this and (63), we infer that and therefore,
(66)
From (61), (64), and (65) by LinPt, holds, as required.
—case Env. In this case, and
From (49), we have that . Taking arbitrary and, assuming given some , for the goal specification:
It then suffices to show that holds. This follows from Lemma E.6 by using and , yielding:
as required.
—Case Env’. This case follows similarly to the Env case.
—Case Env. This case is trivially true.
Lemma E.13.
Letting
and assuming , then, given :
Proof.
Taking arbitrary such that . This implies that:
As , there must be some such that . Taking arbitrary, to show , take arbitrary such that
Given and the definition of , the following holds:
Now, from :
From this, as required:
Theorem E.14 (Soundness of Frame).
Assuming
(67)
and given arbitrary such that
(68)
and arbitrary such that
then
Proof.
To start off, as , clearly and , and therefore, .
Taking arbitrary such that (67) holds, arbitrary such that holds and arbitrary such that holds, then . From Lemma E.10 and (67), we can also infer that and therefore, it is clear that . From this, we know that it is sufficient to show that , to show that holds, and therefore, , as required.
Therefore, taking arbitrary, it is sufficient to show .
Let
To show , for some arbitrary , assume , from which it follows that . Then, as , for some :
(69)
(70)
From Lemma E.12 and (68), . To reach the goal now, it suffices to show that for some arbitrary :
This holds trivially from Lemma E.13, (66) and (69).
Note that Theorem E.14 has the side condition rather than as in Frame. This is because this theorem applies to TaDA Live specifications without the syntactic sugar that permits program variables to be directly referenced in the atomic pre-condition and post-condition of a TaDA Live hybrid triple. The side condition present in the Frame rule permits it to be applied directly to sugared hybrid specifications, as it guarantees that the necessary side condition for the corresponding desugared specification holds.

E.4 Soundness of LiveC

Let
where .
Definition E.15.
For atomicity context and layer from the context of and sets and ’ as well as the layer from the pseudo-quantifier of , let
Then predicate checks whether the environment is satisfying the liveness assumptions of the specification:
Lemma E.16.
Given such that
(71)
(72)
hold. Take and let
Taking arbitrary and such that
(73)
(74)
(75)
for arbitrary such that
(76)
there exists , such that, if
(77)
then
Proof.
Taking and arbitrary such that (72), (73), and (74) hold and , satisfying (75). From this, we know that there exists and , such that, for all :
(78)
(79)
hold. If (76) does not hold, then our proof is complete, otherwise, from (70), there exists some such that .
We now show by transfinite induction on , that
Base case (): Take and assume . Since , for some , we have . We now assume, towards a contradiction, that and therefore . We now demonstrate, by induction on , that under this assumption
(80)
Assume that for , holds. From (77), holds and from (70), setting , , therefore, either or hold. In the latter case, from (71), , a contradiction. Therefore, holds, proving (79).
From (70), there exists some and such that:
(81)
Taking and arbitrary, since lay , lay and therefore lay and lay hold. As lay holds, given (76), for some , holds. Given (78), we know and for some , and therefore, . Given (79), holds, and, therefore, given (80), we know that:
(82)
(83)
Given (81), , as otherwise, lay and therefore, as lay , we reach a contradiction. Then, from and , we know , as , and therefore, . From this, it follows that
(84)
holds.
Letting , first, consider the case . As an obligation’s composition with itself within the obligation algebra of the region type of the shared region must be undefined, one of
holds, as if both the environment and local worlds hold , their composition would be undefined.
If holds, then and hold. From (79), we know that , which, given (80) and , implies that . Given (82) and the invariant on atomic resources, we also know that , and therefore, since we know that , given the definition of , we reach a contradiction, as required.
Otherwise, if holds, from (75)
holds, and therefore, given (83), there exists some minimal , such that holds. Since is minimal, also holds. From this, letting
we know that and . Then, given (82) and the invariant on atomic resources, we know and , and therefore, by the definition of , and .
If , then by construction of , holds, and therefore, from the definition of , a contradiction.
Otherwise, if , then by construction of , holds, and therefore, from the definition of , . Given (77), , and therefore, from the definition of , . Given that we know that , then clearly , from which it follows that . From this and (79) it is clear that . However, given that , from the definition of , it must be the case that , a contradiction.
Finally, the case reaches a contradiction similarly. To finish the base case, it now suffices to consider the case . Given (75) and (83), we know that for some , holds. Letting , given (80), and hold. Given and , holds, a contradiction.
By contradiction, the base case holds, as required.
Inductive case: Take and assume . Since holds for some , we have . Now assume, towards a contradiction, that and therefore . We now demonstrate that, under this assumption, the following holds:
(85)
To show this, it is sufficient to show that implies . We proceed to prove by induction on . The base case holds by our assumptions. Now for the inductive case, assume that for , holds. From (77), holds and from (70), setting , , therefore, either , or hold. In the case where holds, from (71), , a contradiction and in the case where holds, we reach a contradiction with , which implies . Therefore, holds, as required, completing the proof by induction.
The inductive case then follows from (84). The goal follows similarly to the base case for the first disjunct and by inductive assumption in the second.
Lemma E.17.
Given such that
(86)
(87)
hold. Take and let
Taking arbitrary and such that
(88)
(89)
(90)
and arbitrary such that
(91)
holds, then, there exists , such that:
Proof.
This lemma follows straightforwardly from Lemma E.16.
Theorem E.18.
Taking and such that and
(92)
(93)
Then, for any and , if
(94)
(95)
then
Proof.
Taking and arbitrary such that , (91) and (92) hold. Then, to start off, given (91), holds, and therefore, . From this, we can infer, .
Then, taking arbitrary such that (67) holds, arbitrary such that holds and arbitrary such that holds, then . From Lemma E.10 and (93), we can also infer that and therefore, it is clear that . From this, we know that it is sufficient to show that , to show that holds, and therefore, , as required.
Therefore, taking arbitrary, it is sufficient to show . Let
To show , assume for some , . Then, given , for some :
Given Lemma E.12 and that the definition of the trace safety judgement does not depend on the good states of a specification, , clearly, holds. To complete the proof, it suffices to show that
This follows straightforwardly from Lemma E.17.

E.5 Soundness of While

Definition E.19 (Concrete Trace Sequence Operator).
A similarly defined overloading of this operator exists for specification traces, and the obvious lifting to sets .
Lemma E.20.
For arbitrary , , either and , or and there exists and , such that .
Proof.
Straightforward by induction on .
Lemma E.21.
Given an arbitrary specification
for an arbitrary trace , let
If for some and , and , then:
(96)
Proof.
Straightforward by induction on the specification semantics rules.
For the rest of the section, let
Lemma E.22.
Take and arbitrary and take such that . Let
As , by Lemma E.20, there exists and , such that . If, for arbitrary and , then there exists , such that:
then:
and one of the following hold:
Proof.
This lemma is proven by coinduction on the structure of . First, assume:
(97)
(98)
(99)
As, clearly, , using (96), (97), (98), Lemma E.12 and Lemma E.21, by coinduction, we can derive:
Now, split on . If , then the goal holds; otherwise, split again on . If , then , so from (97), ; from this, we infer that . Given that the definition of only references the pseudo-quantifier, context layer, and atomicity context of the parametrising specification, this clearly implies our goal, , as required. Otherwise, . For the loop not to terminate, the while loop must iterate at least one more time and, as is a fair trace, holds. We can then use Lemma E.20 and our coinductive assumption to obtain and that one of the following holds:
If the first holds, then , so the goal is proven; if the second holds, then from :
Theorem E.23.
Given
(100)
(101)
(102)
(103)
(104)
(105)
then:
Proof.
Taking arbitrary such that and arbitrary. We need to show . Let
To reach the goal, assume . By Lemma E.22, in the case that , and our assumptions, there exists :
and one of the following holds:
In the first case, , therefore, , as required. In the second, clearly also holds. Finally, we consider the third case. Take arbitrary and assume . Now, for a contradiction, assume . In this case, due to (100), with an argument similar to that in the soundness proof of LiveC, at every point, every eventually reaches a state satisfying . This state must eventually become stable, since the metric stably decreases by assumption (101), and it holds until the next iteration, at which point the loop variant decreases due to (99) with . By repeating this argument with the continuation, by well-foundedness of ordinals, the while loop must eventually terminate if holds, leading to a contradiction. Therefore, in all cases, holds, as required, concluding the proof.

E.6 Soundness of Par

Definition E.24 (Bowtie Operator).
The bowtie operator, , interleaves the subjective traces of two commands executed in parallel into a trace of the combined command from their combined perspective:
All other cases are undefined.
Definition E.25 (Specification Bowtie Operator).
The specification bowtie operator, , interleaves the subjective specification traces of two commands executed in parallel into a specification trace of the combined command from their combined perspective:
All other cases are undefined.
Lemma E.26.
For any :
Proof.
Straightforward by induction on .
Lemma E.27.
For any trace , we have .
Proof.
Straightforward by induction on the length of the trace.
For the rest of the section, we name the specifications involved in the Par rule as follows:
Lemma E.28.
For arbitrary , , , and, , then:
Proof.
This lemma is proven by coinduction on the structure of .
The trace either starts with a local, or an environment step. We split on the two cases: [Case :] Take , , , and, arbitrary, and assume:
(106)
(107)
(108)
(109)
(110)
(111)
Given (105) and the definition of :
Now to prove the goal, consider the case and . In this case, take , so Env’ must hold for the goal as well as (106) and (107). Note that this choice of immediately satisfies the third conjunct of the goal. To show Env’ holds for the goal, given some , assume:
By substitution, this implies both:
Given (106) and (107), these imply:
Assumption (108) and yield:
(112)
Now, by using the inductive assumption, as (109) and (110) clearly imply the same assertions for and , respectively, for some :
(113)
(114)
From this first consequence:
holds, where . This is the first conjunct of the goal.
Finally, taking arbitrary, there exists such that . From the second consequence of our inductive assumption, it follows that there exist and such that . Then, from the definitions of and , and hold, and holds, as required.
Other cases for , follow similarly.
Case :
Here, the variable store does not change as , due to Lemma E.27 and the syntactic restriction on parallel commands, requiring both threads to not modify the value of any variable. To prove the goal, take , , , and, arbitrary, and assume:
(115)
(116)
(117)
(118)
(119)
(120)
Given (114) and the definition of , either:
or:
and in both cases:
Consider the first case; the second will follow symmetrically. Assume that the Stutter rule holds for ; then, for some :
where . Given (117) and , holds. Given , , also holds. Using this and Env or Env’:
where . Now using the inductive assumption, as, once again, (118) and (119) clearly imply the same assertions for and , respectively, for some :
(121)
(122)
(123)
The second and third consequents imply the equivalent conjuncts of the goal with the same method as in the case and directly, respectively. As we have shown, holds, using the Stutter rule, to show that holds, where , it suffices to show:
Assuming holds, then and hold. From this it follows that , so, due to (122), .
Finally, due to and , and hold, respectively, yielding , as required.
The LinPt rule follows similarly.
Theorem E.29.
Given
(124)
(125)
(126)
(127)
then:
Proof.
Taking arbitrary such that , from (123) and (124), and hold. Given an arbitrary , we need to show . Let
To reach the goal, assume . Then, and hold. From E.26 and the definition of , there exists and such that . As and , and hold. Now, as and , then for some :
As all commands must take at least one step, and hold, therefore:
hold. Now, using Lemma E.28, there exists such that:
and for any , there exist and , such that . It now suffices to show that . Take arbitrary and and such that and . From above:
(128)
(129)
holds. To reach the goal, split on and .
Case :
In this case, clearly holds, therefore holds trivially, as required.
Case :
From , by (128), holds:
As , by Lemma E.21, there exists some , an index after which the trace only performs steps, in particular, for any , , where and . Therefore, , where , such that . Given , it is clear that:
and similarly as :
As , . Finally, from this holds, implying , as required.
Case :
Similarly to the previous case.
Case :
Given (127) and (128), we can infer and . Assume for a contradiction. From , for some :
From this and , there is some such that:
From , for some :
Given that , for to hold, it must be the case that lay . This argument can be repeated ad-infinitum, which, by the well-foundedness of layers, leads to a contradiction, and therefore holds. This implies .
From these cases, we deduce that .
From this, we can infer and consequently, , as required.

E.7 Soundness of LiftAG

Recall the triples of the premise and conclusion of rule LiftAG:
and let us name the semantic counterparts of their atomic pre-/post-conditions as follows:
Definition E.30 (Lift).
This can be lifted to sets of specification traces, :
As a technical tool of our proofs, we use the function , which removes the information about states of regions that are open at level from .
Definition E.31 (Obliv).
Let ; we then define the function on worlds:
We extend it to a function on sets of worlds in the obvious way: .
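As a minimal sketch of this pointwise lifting, writing $\mathsf{obliv}_\lambda$ as an assumed name for the function of Definition E.31:

% \mathsf{obliv}_\lambda is an assumed name for the function of Definition E.31 (Obliv);
% the extension to sets is the usual pointwise image.
\[
  \mathsf{obliv}_\lambda(W) \;=\; \{\, \mathsf{obliv}_\lambda(w) \mid w \in W \,\}
  \qquad \text{for any set of worlds } W .
\]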
Lemma E.32.
For arbitrary :
Lemma E.33.
For arbitrary such that
(130)
then
Proof.
Given arbitrary , take arbitrary. Then, there exists and such that . Given that :
(131)
(132)
(133)
We also know that . Now, letting and , from the definition of and , we also know:
(134)
(135)
for some and .
Given that , there exists such that and . From the definition of , we know that , where and are such that and
(136)
(137)
(138)
Let . From the definition of , and from (135) and (136), we know that .
Then, since all the region interpretations at level are -safe, there exists , such that . As , . By (129), this implies that . As , this is equivalent to .
From this, we can infer that there exist and such that:
Then there exist some and such that, from the definition of reification and the fact that , we have:
where and therefore:
as required.
Lemma E.34.
For arbitrary such that
(139)
then
Proof.
The proof follows similarly to that of Lemma E.33.
Lemma E.35.
For arbitrary such that
(140)
then
Proof.
The proof follows similarly to that of Lemma E.33.
Lemma E.36.
For arbitrary such that
(141)
then there exists such that:
Lemma E.37.
For arbitrary such that
(142)
then there exists such that
Lemma E.38.
For arbitrary :
Theorem E.39 (Soundness of Rule LiftAG).
Assuming
for such that
then, given arbitrary such that
(143)
then
Proof.
To reach the goal, it suffices to show that . Taking arbitrary, let
Assume that, for arbitrary , holds. Then, clearly , and therefore, from (142), for some :
(144)
(145)
From (143), by coinduction over the structure of the trace safety judgement, using our assumptions and Lemmas E.32 to E.37, the following holds:
Finally, taking arbitrary such that holds, from Lemma E.38, . By (144), the following holds, as required:
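The coinduction in the preceding step relies only on the standard greatest-fixed-point principle; a minimal sketch, assuming the trace safety judgement is presented as the greatest fixed point $\nu F$ of a monotone operator $F$ on a powerset lattice (the concrete operator for trace safety is not reproduced here):

% F is an assumed monotone operator on a powerset lattice; by Knaster-Tarski,
% \nu F is the union of all post-fixed points, so any post-fixed point is below it.
\[
  X \subseteq F(X) \;\Longrightarrow\; X \subseteq \nu F ,
  \qquad
  \nu F \;=\; \bigcup \{\, Y \mid Y \subseteq F(Y) \,\} .
\]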

Acknowledgments

We would like to thank Hongjin Liang, Xinyu Feng, Martin Bodin, Shale Xiong, and Petar Maksimovic for helpful discussions and comments. We also thank the anonymous reviewers for their thorough critical reading of the article and insightful feedback.

