Primitives for Contract-based Synchronization
Xxxxxxx Xxxxxxxxxx
Dipartimento di Matematica e Informatica, Università degli Studi di Cagliari
Xxxxxxx Xxxxxx
Dipartimento di Ingegneria e Scienza dell'Informazione, Università degli Studi di Trento
We investigate how contracts can be used to regulate the interaction between processes. To do that, we study a variant of the concurrent constraints calculus presented in [2], featuring primitives for multi-party synchronization via contracts. We proceed in two directions. First, we exploit our primitives to model some contract-based interactions. Then, we discuss how several models for concurrency can be expressed through our primitives. In particular, we encode the π-calculus and graph rewriting.
1 Introduction
A contract is a binding agreement stipulated between two or more parties, which dictates their rights and their duties, and the penalties each party has to pay in case the contract is not honoured.
In the current practice of information technology, contracts are not that different from those legal agreements traditionally enforced in courts of law. Both software and services commit themselves to respect some (typically weak, if not “without any expressed or implied warranty”) service level agreement. In case this is not honoured, the only thing the user can do is to take legal steps against the software vendor or service provider. Since legal disputes may require a lot of time, as well as relevant expenses, such kinds of contracts serve more as an instrument to discourage users, rather than making it easier for users to demand their rights.
Recent research has then addressed the problem of devising new kinds of contracts, to be exploited for specifying and automatically regulating the interaction among users and service providers. See e.g. [6, 8, 11, 13, 20], to cite a few. A contract subordinates the behaviour promised by a client (e.g. “I will pay for a service X”) to the behaviour promised by a service (e.g. “I will provide you with a service Y”), and vice versa. The crucial problems are then how to formalise the concept of contract, how to understand when a set of contracts gives rise to an agreement among the stipulating parties, and how to actually enforce this agreement in an open, and possibly unreliable, environment.
In the Concurrent Constraint Programming (CCP) paradigm [23, 24], concurrent processes communicate through a global constraint store. A process can add a constraint c to the store through the tell c primitive. Dually, the primitive ask c makes a process block until the constraint c is entailed by the store. Very roughly, such primitives may be used to model two basic operations on contracts: a tell c is for publishing the contract c, and an ask c′ is for waiting until one has to fulfill some duty c′.
While this may suggest CCP as a good candidate for modelling contract-based interactions, some important features seem to be missing. Consider e.g. a set of parties, each offering her own contract. When some of the contracts at hand give rise to an agreement, all the involved parties accept the contract, and start interacting to accomplish it. A third party (possibly, an “electronic” court of law) may later on join these parties, so as to provide the stipulated remedies in case an infringement of the contract is found. To model this typical contract-based dynamics, we need the ability to make all the parties
involved in a contract synchronise when an agreement is found, establishing a session. Also, we need to allow an external party to join a running session, according to some condition on the status of the contract.
In this paper we study a variant of [2], an extension of CCP which allows for modelling such kinds of scenarios. Our calculus features two primitives, called fuse and join: the first fuses all the processes agreeing on a given contract, while the second joins a process with those already participating in a contract. Technically, the prefix fusex c probes the constraint store to find whether it entails c; when this happens, the variable x is bound to a fresh session identifier, shared among the parties involved in the contract. Such parties are chosen according to a local minimal fusion policy. The prefix joinx c is similar,
yet it looks for an already existing session identifier, rather than creating a fresh one. While our calculus
is undogmatic about the underlying constraint system, in the contract-based scenarios presented here we commit ourselves to using PCL formulae [2] as constraints.
Contributions. Our contribution consists of the following points. In Sect. 2 we study a calculus for contracting processes. Compared to the calculus in [2], the one in this paper differs in the treatment of the main primitives fuse and join, which have a simplified semantics. Moreover, we also provide here a reduction semantics, and compare it to the labelled one. In Sect. 3 we show that our calculus is suitable for modelling complex interactions of contracting parties. In Sect. 4 we substantiate a statement made in [2], by showing how to actually encode into our calculus some common concurrency idioms, among which the π-calculus [19] and graph rewriting [21]. In Sect. 5 we discuss further differences between the two calculi, and compare them with other frameworks.
2 A contract calculus
We now define our calculus of contracting processes. The calculus is similar to that in [2], yet it diverges in the treatment of the crucial primitives fuse and join. We will detail the differences between the two versions in Sect. 5. Our calculus features both names, ranged over by n, m, . . ., and variables, ranged over by x, y, . . . Constraints are terms over variables and names, and include a special element ⊥; the set
of constraints D is ranged over by c, d. Our calculus is parametric with respect to an arbitrary constraint system (D, ⊢) (Def. 1).
Definition 1 (Constraint system [24]) A constraint system is a pair (D, ⊢), where D is a countable set, and ⊢⊆ P(D) × D is a relation satisfying: (i) C ⊢ c whenever c ∈ C; (ii) C ⊢ c whenever for all c′ ∈ C′ we have C ⊢ c′, and C′ ⊢ c; (iii) for any c, C ⊢ c whenever C ⊢ ⊥.
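To fix intuitions, here is a small Python sketch (ours, not part of the paper) of what a constraint system boils down to operationally: an entailment check over finite sets of constraints. The toy system below is deliberately trivial, but it satisfies the three conditions of Def. 1.

```python
from typing import FrozenSet, Hashable

# A deliberately trivial constraint system in the sense of Def. 1 (illustrative
# only): constraints are opaque hashable tokens plus a distinguished bottom
# element, and a finite set C entails c iff c occurs in C or C entails bottom.
BOTTOM = "<bottom>"

def entails(C: FrozenSet[Hashable], c: Hashable) -> bool:
    """The entailment relation |- of the toy constraint system."""
    return c in C or BOTTOM in C

# The conditions of Def. 1 hold for this toy system:
#  (i)   c in C                                          implies entails(C, c)
#  (ii)  entails(C, c') for all c' in C', entails(C', c) implies entails(C, c)
#  (iii) entails(C, BOTTOM)                              implies entails(C, c) for any c
assert entails(frozenset({"a", "b"}), "a")
assert entails(frozenset({BOTTOM}), "anything")
assert not entails(frozenset({"a"}), "b")
```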
Syntax. Names in our calculus behave similarly to the names in the π-calculus: that is, distinct names represent distinct concrete objects. Instead, variables behave as the names in the fusion calculus: that is, distinct variables can be bound to the same concrete object, so they can be fused. A fusion σ is a substitution that maps a set of variables to a single name. We write σ = {n/→x} for the fusion that replaces each variable in →x with the name n. We use metavariables a, b, . . . to range over both names and variables.
Definition 2 (Processes) The set of prefixes and processes are defined as follows:
π ::= τ  |  tell c  |  check c  |  ask c  |  joinx c  |  fusex c        (prefixes)
P ::= c  |  ∑i∈I πi.Pi  |  P|P  |  (a)P  |  X(→a)        (processes)
Prefixes π include τ (the silent operation as in CCS), as well as tell , check and ask as in Concurrent Constraints [24]. The prefix tell c augments the context with the constraint c. The prefix check c checks
if c is consistent with the context. The prefix ask c causes a process to stop until the constraint c is entailed by the context. The prefixes fusex c and joinx c drive the fusion of the variable x, in two different flavours. The prefix joinx c instantiates x to any known name, provided that after the instantiation the constraint c is entailed. The prefix fusex c fuses x with any set of known variables, provided that, when all the fused variables are instantiated to a fresh name, the constraint c is entailed. To avoid unnecessary fusion, the set of variables is required to be minimal (see Def. 7). To grasp the intuition behind the two kinds of fusions, think of names as session identifiers. Then, a fusex c initiates a new session, while a joinx c joins an already initiated session.
Processes P include the constraint c, the summation ∑i∈I πi.Pi of guarded processes over indexing set I, the parallel composition P|Q, the scope delimitation (a)P, and the instantiated constant X (→a), where →a is a tuple of names/variables. When a constraint c is at the top-level of a process, we say it is active. We
use a set of defining equations {Xi(→x) .= Pi}i with the provision that each occurrence of Xj in Pk is guarded, i.e. it is behind some prefix. We shall often use C = {c1, c2, . . .} as a process, standing for c1 | c2 | · · ·. We write 0 for the empty sum. Singleton sums are simply written π.P. We use + to merge sums as follows: ∑i∈I πi.Pi + ∑i∈J πi.Pi = ∑i∈I∪J πi.Pi if I ∩ J = ∅. We stipulate that + binds more tightly than |.
Free variables and names of processes are defined as usual: they are free whenever they occur in
a process not under a delimitation. Alpha conversion and substitutions are defined accordingly. As a special case, we let (fusex c){n/x} = (joinx c){n/x} = ask c{n/x}. That is, when a variable x is instantiated to a name, the prefixes fusex c and joinx c can no longer require the fusion of x, so they behave as a plain ask c. Henceforth, we will consider processes up-to alpha-conversion.
We provide our calculus with both a reduction semantics and a labelled transition semantics. As usual for CCP, the former explains how a process evolves within the whole context (so, it is not compositional), while the latter also explains how a process interacts with the environment.
Reduction semantics. The structural equivalence relation ≡ is the smallest equivalence between processes satisfying the following axioms:
P|0 ≡ P P|Q ≡ Q|P P|(Q|R) ≡ (P|Q)|R P + 0 ≡ P P + Q ≡ Q + P P + (Q + R) ≡ (P + Q) + R
(a)(P|Q) ≡ P|(a)Q if a ∉ free(P)        (a)(b)P ≡ (b)(a)P        X(→a) ≡ P{→a/→x} if X(→x) .= P
Definition 3 (Reduction) Reduction → is the smallest relation satisfying the rules in Fig. 1.
We now comment on the rules for reduction. Rule TAU simply fires the τ prefix. Rule TELL augments the context (R) with a constraint c. Similarly to [24], we do not check for the consistency of c with the other constraints in R. If desired, a side condition similar to that of rule CHECK (discussed below) can be added, at the cost of reduced compositionality. As another option, one might restrict the constraint c in tell c.P to a class of coherent constraints, as done e.g. in [2]. Rule ASK checks whether the context has enough active constraints C so as to entail c. Rule CHECK checks the context for consistency with c. Since this requires inspecting every active constraint in the context, a side condition precisely separates the context between C and R, so that all the active constraints are in C, which in this case acts as a global constraint store.
Rule FUSE replaces a set of variables x→y with a bound name n, hence fusing all the variables together. One variable in the set, x, is the one mentioned in the fusex c prefix, while the others, →y, are taken from the context. The replacement of variables is done by the substitution σ in the rule premises. The actual set of variables →y to fuse is chosen according to the minimal fusion policy, formally defined below.
Definition 4 (Minimal Fusion) A fusion σ = {n/→z} is minimal for C, c, written C ⊢σ_min c, iff:
Cσ ⊢ cσ   ∧   ∄ →w ⊊ →z : C{n/→w} ⊢ c{n/→w}
[TAU]     (→a)(τ.P + Q | R) → (→a)(P | R)

[TELL]    (→a)(tell c.P + Q | R) → (→a)(c | P | R)

[ASK]     C ⊢ c   ⟹   (→a)(C | ask c.P + Q | R) → (→a)(C | P | R)

[CHECK]   C, c ⊬ ⊥   and   R free from active constraints   ⟹   (→a)(C | check c.P + Q | R) → (→a)(C | P | R)

[FUSE]    σ = {n/x→y}   n fresh in P, Q, R, C, c, →a   C ⊢σ_min c   ⟹   (x→y→a)(C | fusex c.P + Q | R) → (n→a)((C | P | R)σ)

[JOIN]    C{n/x} ⊢ c{n/x}   ⟹   (xn→a)(C | joinx c.P + Q | R) → (n→a)((C | P | R){n/x})

[STRUCT]  P ≡ P′   P′ → Q′   Q′ ≡ Q   ⟹   P → Q

Figure 1: The reduction relation
A minimal fusion σ must cause the entailment of c by the context C. Furthermore, fusing a proper subset of the variables must not cause the entailment. The rationale for minimality is that we want to fuse only those variables which are actually involved in the entailment of c – not an arbitrary superset. Pragmatically, we will often use fusex c as a construct to establish sessions: the participants are then chosen among those actually involved in the satisfaction of the constraint c, and each participant “receives” the fresh name n through the application of σ. In this case, n would act as a sort of session identifier.
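As an illustration of the search this policy entails, the following Python sketch (our own, under the assumption that an entailment oracle for the constraint system is available) computes one minimal fusion in the sense of Def. 4 by brute force; the term representation and all function names are ours.

```python
from itertools import combinations
from typing import Callable, FrozenSet, Iterable, Optional, Tuple

# Constraints are rendered as a predicate applied to arguments, e.g. ("p", ("x", "n")).
Constraint = Tuple[str, Tuple[str, ...]]
Entails = Callable[[FrozenSet[Constraint], Constraint], bool]

def substitute(c: Constraint, fused: FrozenSet[str], n: str) -> Constraint:
    """Apply the fusion {n/fused}: replace every fused variable with the name n."""
    pred, args = c
    return (pred, tuple(n if a in fused else a for a in args))

def is_minimal(C: FrozenSet[Constraint], c: Constraint, fused: FrozenSet[str],
               n: str, entails: Entails) -> bool:
    """Def. 4: the fusion {n/fused} makes C entail c, and no proper subset
    of the fused variables already does."""
    def holds(vs) -> bool:
        vs = frozenset(vs)
        return entails(frozenset(substitute(d, vs, n) for d in C),
                       substitute(c, vs, n))
    if not holds(fused):
        return False
    return not any(holds(w) for size in range(len(fused))
                            for w in combinations(sorted(fused), size))

def minimal_fusion(C: Iterable[Constraint], c: Constraint, x: str,
                   candidates: Iterable[str], n: str,
                   entails: Entails) -> Optional[FrozenSet[str]]:
    """Return one set of variables containing x whose fusion to the fresh name n
    is minimal in the sense of Def. 4, or None if rule FUSE is not enabled."""
    C = frozenset(C)
    others = [v for v in candidates if v != x]
    for size in range(len(others) + 1):               # try smaller fusions first
        for extra in combinations(others, size):
            fused = frozenset({x, *extra})
            if is_minimal(C, c, fused, n, entails):
                return fused
    return None
```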
Note that the context R in rule FUSE may contain active constraints. So, the fusion σ is actually required to be minimal with respect to a subset C of the active constraints of the whole system. Techni- cally, this will allow us to provide a compositional semantics for fusex c. Also, this models the fact that processes have a “local” view of the context, as we will discuss later in this section.
Rule JOIN replaces a variable x with a name n taken from the context. Note that, unlike FUSE, n is not fresh here. To enable a joinx c prefix, the substitution must cause c to be entailed by the context C. Intuitively, this prefix allows to “search” in the context for some x satisfying a constraint c. This can also be used to join a session which was previously established by a FUSE.
Note that rules FUSE and JOIN provide a non-deterministic semantics for the prefixes fuse and join, since several distinct fusions σ could be used to derive a transition. Each σ involves only names and variables occurring in the process at hand, plus a fresh name n in the case of fuse. If we consider n up to renaming, we have a finite number of choices for σ. Together with guarded recursion, this makes the transition system finitely branching.
Rule STRUCT simply allows us to consider processes up-to structural equivalence.
Transition semantics. We now present an alternative semantics, specified through a labelled transition relation −α→. Unlike the reduction semantics, the labelled relation −α→ is compositional: all the prefixes can be fired by considering only the relevant portion of the system at hand. The only exception is the check c prefix, which is inherently non-compositional. We deal with check c by layering a top-level reduction relation, written ↣ below, over the relation −α→. While defining the transition semantics, we borrow most of the intuitions from the semantics in [2]. The crucial difference between the two is how they finalize the actions generated by a fuse (rule CLOSEFUSE). Roughly, in [2] we need a quite complex treatment, since there we have to accommodate the “principals” mentioned in the constraints. Since here we do not consider principals, we can give a smoother treatment. We discuss such issues in detail in Sect. 5.
We start by introducing in Def. 5 the actions of our semantics, that is the set of admissible labels of the LTS. The transition relation is then presented in Def. 6.
Definition 5 (Actions) Actions α are as follows, where C denotes a set of constraints.
α ::= τ  |  C  |  C ⊢ c  |  C ⊢F_x c  |  C ⊢J_x c  |  C ⊬ ⊥  |  (a)α
The action τ represents an internal move. The action C advertises a set of active constraints. The action C ⊢ c is a tentative action, generated by a process attempting to fire an ask c prefix. This action carries the collection C of the active constraints discovered so far. Similarly for C ⊢F_x c and fusex c, for C ⊢J_x c and joinx c, as well as for C ⊬ ⊥ and check c. In the last case, C includes c. The delimitation in (a)α is for scope extrusion, as in the labelled semantics of the π-calculus [22]. We write (→a)α to denote a set of distinct delimitations, neglecting their order, e.g. (ab) = (ba). We simply write (→a→b) for (→a ∪ →b).
Definition 6 (Transition relation) The transition relations −α→ are the smallest relations between processes satisfying the rules in Fig. 2. The last two rules in Fig. 2 define the top-level reduction relation ↣.
Many rules in Fig. 2 are rather standard, so we only comment on the most peculiar ones. Note in passing that ≡ is not used in this semantics. The rules for prefixes simply generate the corresponding tentative actions. Rule CONSTR advertises an active constraint, which is then used to augment the tentative actions through the PAR* rules. Rule OPEN lifts a restriction to the label, allowing for scope extrusion. The CLOSE* rules put the restriction back at the process level, and also convert tentative actions into τ.
The overall idea is the following: a tentative action label carries all the proof obligations needed to fire the corresponding prefix. The PAR* rules allow for exploring the context, and augment the label with the observed constraints. The CLOSE* rules check that enough constraints have been collected so that the proof obligations can be discharged, and transform the label into a τ .
The TOP* rules act on the top-level, only, and define the semantics of check c.
The side condition of rule CLOSEFUSE involves a variant of the minimal fusion relation we used previously. As for the reduction semantics, we require σ to be minimal, so as not to fuse more variables than necessary. Recall however that in the reduction semantics minimality was required with respect to a part of the active constraints at hand. In our labelled semantics, the PAR* rules collect each active constraint found in the syntax tree of the process. If we simply used C ⊢σ_min c in CLOSEFUSE, we would handle the following example differently. Let C = q(y) | ((q(z) ∨ s) → p(y)), let P = (x)(y)(z)(fusex p(x).P | C | s), and let Q = (x)(y)(z)(fusex p(x).P | C) | s. In P we must collect s before applying CLOSEFUSE, and so σ1 = {n/xy} would be the only minimal fusion. Instead, in Q we can also apply CLOSEFUSE before discovering s, yielding the minimal fusion σ2 = {n/xyz}. This would be inconsistent with ≡ (and with our reduction semantics as well). To remedy this, we instead require in CLOSEFUSE the following relation, stating that σ must be minimal with respect to a part of the observed constraints, only.
Definition 7 (Local Minimal Fusion) A fusion σ = {n/→z} is local minimal for C, c, written C ⊢σ_loc c, iff:
∃C′ ⊆ C : C′ ⊢σ_min c
While we did not use structural equivalence in the definition of the labelled transition semantics, it turns out to be a bisimulation.
Theorem 1 The relation ≡ is a bisimulation, i.e.: P ≡ Q −α→ Q′ =⇒ ∃P′. P −α→ P′ ≡ Q′.
We also have the expected correspondence between the reduction and labelled semantics.
Theorem 2 P → P′ ⇐⇒ ∃Q, Q′. P ≡ Q ↣ Q′ ≡ P′.
The right implication is proved by rule induction. To prove the left implication, an induction argument on the derivation of Q ↣ Q′ suffices, exploiting the fact that all the constraints of Q are accumulated in the label.
[TAU]        τ.P −τ→ P
[ASK]        ask c.P −(∅ ⊢ c)→ P
[TELL]       tell c.P −τ→ c | P
[CHECK]      check c.P −({c} ⊬ ⊥)→ P
[FUSE]       fusex c.P −(∅ ⊢F_x c)→ P
[JOIN]       joinx c.P −(∅ ⊢J_x c)→ P
[CONSTR]     u −{u}→ u
[IDLESUM]    ∑i πi.Pi −∅→ ∑i πi.Pi
[PARCONSTR]  P −(→a)C→ P′   Q −(→b)C′→ Q′   ⟹   P|Q −(→a→b)(C ∪ C′)→ P′|Q′
[PARASK]     P −(→a)C→ P′   Q −(→b)(C′ ⊢ c)→ Q′   ⟹   P|Q −(→a→b)(C ∪ C′ ⊢ c)→ P′|Q′
[PARFUSE]    P −(→a)C→ P′   Q −(→b)(C′ ⊢F_x c)→ Q′   ⟹   P|Q −(→a→b)(C ∪ C′ ⊢F_x c)→ P′|Q′
[PARJOIN]    P −(→a)C→ P′   Q −(→b)(C′ ⊢J_x c)→ Q′   ⟹   P|Q −(→a→b)(C ∪ C′ ⊢J_x c)→ P′|Q′
[PARCHECK]   P −(→a)C→ P′   Q −(→b)(C′ ⊬ ⊥)→ Q′   ⟹   P|Q −(→a→b)(C ∪ C′ ⊬ ⊥)→ P′|Q′
[PARTAU]     P −τ→ P′   ⟹   P|Q −τ→ P′|Q
[SUM]        πj.Pj −α→ P′   ⟹   ∑i πi.Pi −α→ P′
[DEF]        P{→a/→x} −α→ P′   ⟹   X(→a) −α→ P′   if X(→x) .= P
[DEL]        P −α→ P′   ⟹   (a)P −α→ (a)P′   if a ∉ α
[OPEN]       P −α→ P′   ⟹   (a)P −(a)α→ P′
[CLOSEASK]   P −(→a)(C ⊢ c)→ P′   ⟹   P −τ→ (→a)P′   if C ⊢ c
[CLOSEFUSE]  P −(x→y→a)(C ⊢F_x c)→ P′   σ = {n/x→y}   n fresh   C ⊢σ_loc c   ⟹   P −τ→ (n→a)(P′σ)
[CLOSEJOIN]  P −(xn→a)(C ⊢J_x c)→ P′   ⟹   P −τ→ (n→a)(P′σ)   if σ = {n/x} and Cσ ⊢ cσ
[TOPTAU]     P −τ→ P′   ⟹   P ↣ P′
[TOPCHECK]   P −(→a)(C ⊬ ⊥)→ P′   ⟹   P ↣ (→a)P′   if C ⊬ ⊥

Figure 2: The labelled transition relation. Symmetric rules for +, | are omitted. The rules PAR* have the following no-capture side condition: →a is fresh in →b, C′, c, x, Q′, while →b is fresh in C, P′.
3 Examples
We illustrate our calculus by modelling scenarios where the interaction among parties is driven by contracts. In all the examples below, we use as constraints a smooth extension of the propositional contract logic PCL [2]. A comprehensive presentation of PCL is beyond the scope of this paper, so we give here just a broad overview, and we refer the reader to [2, 1] for all the technical details and further examples. PCL is an extension of intuitionistic propositional logic IPC [25], featuring a contractual implication connective ։. Differently from IPC, a “contract” b ։ a implies a not only when b is true, like IPC implication, but also in the case that a “compatible” contract, e.g. a ։ b, holds. So, PCL allows for a sort of “circular” assume-guarantee reasoning, summarized by the theorem ⊢ (b ։ a) ∧ (a ։ b) → a ∧ b.
The proof system of PCL extends that of IPC with the following axioms:
⊤ ։ ⊤ (p ։ p) → p (p′ → p) → (p ։ q) → (q → q′) → (p′ ։ q′)
A main result about PCL is its decidability, proved via cut elimination. Therefore, we can use the (decidable) provability relation of PCL as the entailment relation ⊢ of the constraint structure.
Example 1 (Greedy handshaking) Suppose there are three kids who want to play together. Alice has a toy airplane, Bob has a bike, while Carl has a toy car. Each of the kids is willing to share his toy, but only provided that the other two kids promise they will lend their toys to him. So, before sharing their toys, the three kids stipulate a “gentlemen’s agreement”, modelled by the following PCL contracts:
cAlice(x) = (b(x) ∧ c(x)) ։ a(x) cBob(y) = (a(y) ∧ c(y)) ։ b(y) cCarl(z) = (a(z) ∧ b(z)) ։ c(z)
Alice’s contract cAlice(x) says that Alice promises to share her airplane in a session x, written a(x), provided that both Bob and Carl will share their toys in the same session. Bob’s and Carl’s contracts are dual. The proof system of PCL allows us to deduce that the three kids will indeed share their toys in any session n, i.e. cAlice(n) ∧ cBob(n) ∧ cCarl(n) → a(n) ∧ b(n) ∧ c(n) is a theorem of PCL. We model the actual behaviour of the three kids through the following processes:
Alice = (x) tell cAlice(x). fusex a(x). lendA
Bob = (y) tell cBob(y). fusey b(y). lendB
Carl = (z) tell cCarl(z). fusez c(z). lendC
A possible trace of the LTS semantics is the following:
Alice | Bob | Carl
−τ→ (x)(cAlice(x) | fusex a(x). lendA) | Bob | Carl
−τ→ (x)(cAlice(x) | fusex a(x). lendA) | (y)(cBob(y) | fusey b(y). lendB) | Carl
−τ→ (x)(cAlice(x) | fusex a(x). lendA) | (y)(cBob(y) | fusey b(y). lendB) | (z)(cCarl(z) | fusez c(z). lendC)
−τ→ (n)(cAlice(n) | lendA{n/x} | cBob(n) | ask b(n). lendB{n/y} | cCarl(n) | ask c(n). lendC{n/z})
−τ→ (n)(cAlice(n) | lendA{n/x} | cBob(n) | lendB{n/y} | cCarl(n) | ask c(n). lendC{n/z})
−τ→ (n)(cAlice(n) | lendA{n/x} | cBob(n) | lendB{n/y} | cCarl(n) | lendC{n/z})
In step one, we use Tell, ParTau, Del to fire the prefix tell cAlice(x). Similarly, in steps two and three, we fire the prefixes tell cBob(y) and tell cCarl(z). Step four is the crucial one. There, the prefix fusex a(x) is fired through rule Fuse. Through rules Constr, ParFuse, we discover the active constraint cAlice(x). We use rule Open to obtain the action (x)({cAlice(x)} ⊢F_x a(x)) for the Alice part. For the Bob part, we use rule Constr to discover cBob(y), which we then merge with the empty set of constraints obtained through rule IdleSum; similarly for Carl. We then apply Open twice and obtain (y){cBob(y)} and (z){cCarl(z)}. At the top level, we apply ParFuse to deduce (xyz)({cAlice(x), cBob(y), cCarl(z)} ⊢F_x a(x)). Finally, we apply CloseFuse, which fuses x, y and z by instantiating them to the fresh name n. It is easy to check that
{cAlice(x), cBob(y), cCarl(z)} ⊢σ_loc a(x) where σ = {n/xyz}. Note that all three kids have to cooperate in order to proceed. Indeed, fusing e.g. only x and y would not suffice to discharge the premise c(·) of the contracts cAlice and cBob, hence preventing any fuse prefix from being fired.
Example 2 (Insured Sale) A seller S will ship an order as long as she is either paid upfront, or she receives an insurance from the insurance company I, which she trusts. We model the seller contract as the PCL formula s(x) = order(x) ∧ (pay(x) ∨ insurance(x)) ։ ship(x) where x represents the session where the order is placed. The seller S is a recursive process, allowing multiple orders to be shipped.
S .= (x) tell s(x). fusex ship(x). (S | doShip(x))
The insurer contract i(x) = premium(x) ։ insurance(x) plainly states that a premium must be paid upfront. The associated insurer process I is modelled as follows:
I .= (x) tell i(x). fusex insurance(x). ( I | τ.check ¬pay(x). (refundS(x) | debtCollect(x)) )
When the insurance is paid for, the insurer will wait for some time, modelled by the τ prefix. After that, he will check whether the buyer has not paid the shipped goods. In that case, the insurer will immediately indemnify the seller, and contact a debt collector to recover the money from the buyer. Note that S and I do not explicitly mention any specific buyer. As the interaction among the parties is loosely specified, many scenarios are possible. For instance, consider the following buyers B0, B1, B2, B3:
b0(x) = ship(x) ։ order(x) ∧ pay(x)            B0 = (x) tell b0(x). receive(x)
b1(x) = ship(x) ։ order(x) ∧ premium(x)        B1 = (x) tell b1(x). (receive(x) | τ.tell pay(x))
b2(x) = order(x) ∧ pay(x)                       B2 = B0{b2/b0}
b3(x) = order(x) ∧ premium(x)                   B3 = B0{b3/b0}
The buyer B0 pays upfront. The buyer B1 will pay later, by providing the needed insurance. The “incautious” buyer B2 will pay upfront, without asking any shipping guarantees. The buyer B3 is insured, and will not pay. The insurer will then refund the seller, and start a debt collecting procedure. This is an example where a violated promise can be detected so to trigger a suitable recovery action. The minimality requirement guarantees that the insurer will be involved only when actually needed.
Example 3 (Automated Judge) Consider an online market, where buyers and sellers trade items. The contract of a buyer is to pay for an item, provided that the seller promises to send it; dually, the contract of a seller is to send an item, provided that the buyer pays. A buyer first issues her contract, then waits until discovering she has to pay, and eventually proceeds with the process CheckOut. At this point,
the buyer may either abort the transaction (process NoPay), or actually pay for the item, by issuing the constraint paid(x). After the item has been paid, the buyer may wait for the item to be sent (ask sent(x)), or possibly open a dispute with the seller (tell dispute(x)). Note that, as in the real world, one can always open a dispute, even when the other party is perfectly right.
Buyer = (x) tell (send(x) ։ pay(x)). fusex pay(x). CheckOut
CheckOut = τ.NoPay + τ.tell paid(x). (τ.tell dispute(x) + ask sent(x))
The behaviour of the seller is dual: issue the contract, wait until she has to send, and then proceed with Ship. There, either choose not to send, or send the item and then wait for the payment or open a dispute.
Seller = (y) tell (pay(y) ։ send(y)). fusey send(y). Ship
Ship = τ.NoSend + τ.tell sent(y). (τ.tell dispute(y) + ask paid(y))
To automatically resolve disputes, the process Judge may enter a session initiated between a buyer and a seller, provided that a dispute has been opened, and either the obligations pay or send have been inferred. This is done through the joinz primitive, which binds the variable z to the name of the session established between buyer and seller. If the obligation pay(z) is found but the item has not been paid (i.e. check¬paid(z) passes), then the buyer is convicted (by jailBuyer(z), not further detailed). Similarly, if the obligation send(z) has not been supported by a corresponding sent(z), the seller is convicted.
Judge = (z) (  joinz (pay(z) ∧ dispute(z)). check ¬paid(z). jailBuyer(z)
            |  joinz (send(z) ∧ dispute(z)). check ¬sent(z). jailSeller(z)  )
A possible trace of the LTS semantics is the following:
Buyer | Seller | Judge
−τ→∗ (n)( send(n) ։ pay(n) | paid(n) | tell dispute(n) | pay(n) ։ send(n) | NoSend | Judge )
−τ→∗ (n)( send(n) ։ pay(n) | paid(n) | dispute(n) | pay(n) ։ send(n) | NoSend | check ¬sent(n). jailSeller(n) | ··· )
−τ→∗ (n)( jailSeller(n) | ··· )
A more complex version of this example is in [2], also dealing with the identities of the principals making the respective promises. The simplified variant presented here does not require the more general rule for fuse found in [2].
Example 4 (All-you-can-eat) Consider a restaurant offering an all-you-can-eat buffet. Customers are allowed to have a single trip down the buffet line, where they can pick anything they want. After the meal is over, they are no longer allowed to return to the buffet. In other words, multiple dishes can be consumed, but only in a single step. We model this scenario as follows:
Buffet = (x) (pasta(x) | chicken(x) | cheese(x) | fruit(x) | cake(x))
Bob = (x) fusex (pasta(x) ∧ chicken(x)). SatiatedB        Carl = (x) fusex pasta(x). fusex chicken(x). SatiatedC
The Buffet can interact with either Bob or Carl, and make them satiated. Bob eats both pasta and chicken in a single meal, while Carl eats the same dishes but in two different meals, thus violating the Buffet policy, i.e.: Buffet | Carl →∗ SatiatedC | P. Indeed, the Buffet should forbid Carl to eat the chicken, i.e. to fire the second fusex. To enforce the Buffet policy, we first define the auxiliary operator ⊕. Let (pi)i∈I be PCL formulae, let r be a fresh predicate symbol, o a fresh name, and z, (zi)i∈I fresh variables. Then:
⊕i∈I pi  =  (o)(z)(zi)i∈I ( r(o, z)  |  ||i∈I ( r(o, zi) → pi ) )
To see how this works, consider the process ⊕i∈I pi|Q where Q fires a fusex which demands a subset of the constraints (pi)i∈J with J ⊆ I. To deduce pi we are forced to fuse zi with z (and x); otherwise we can not satisfy the premise r(o, zi). Therefore all the (zi)i∈J are fused, while minimality of fusion ensures that the (zi)i∈I\J are not. After fusion we then reach:
(o)(m)(zi)i∈I\J ( ||i∈I\J ( r(o, zi) → pi )  |  ||i∈J ( r(o, m) | r(o, m) → pi )  |  Q′ )
where m is a fresh name resulting from the fusion. Note that the (pi)i∈I\J can no longer be deduced through fusion, since the variable z was “consumed” by the first fusion. The rough result is that ⊕i pi allows a subset of the (pi)i∈I to be demanded through fusion, after which the rest is no longer available.
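For experimenting with larger instances, the definition of ⊕ can be spelled out syntactically; the small Python generator below is our own sketch, and its textual syntax for delimitation, parallel composition and implication is not the calculus' concrete syntax.

```python
# A purely syntactic rendering of the ⊕ construction above; the textual syntax
# for delimitation, parallel composition and implication is ours, not the
# calculus' concrete syntax.
def oplus(formulas, r="r"):
    """Build the process term ⊕_i p_i = (o)(z)(z_i)_i ( r(o,z) | ||_i (r(o,z_i) -> p_i) )."""
    zs = [f"z{i}" for i in range(len(formulas))]
    delims = "".join(f"({v})" for v in ["o", "z", *zs])
    body = " | ".join([f"{r}(o,z)"] + [f"({r}(o,{zi}) -> {p})" for zi, p in zip(zs, formulas)])
    return f"{delims}({body})"

print(oplus(["pasta(x)", "chicken(x)", "cheese(x)"]))
# (o)(z)(z0)(z1)(z2)(r(o,z) | (r(o,z0) -> pasta(x)) | (r(o,z1) -> chicken(x)) | (r(o,z2) -> cheese(x)))
```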
We can now exploit the ⊕ operator to redefine the Buffet as follows:
Buffet′ = (x)(pasta(x) ⊕ chicken(x) ⊕ cheese(x) ⊕ fruit(x) ⊕ cake(x))
The new specification actually enforces the Buffet policy, i.e.: Buffet′ | Carl ↛∗ SatiatedC | P. Note that the operator ⊕ will be exploited in Sect. 4, when we encode graph rewriting in our calculus.
4 Expressive power
We now discuss the expressive power of our synchronization primitives, by showing how to encode some common concurrency idioms into our calculus.
Semaphores. Semaphores admit a simple encoding in our calculus. Below, n is the name associated with the semaphore, while x is a fresh variable. P(n) and V (n) denote the standard semaphore operations, and process Q is their continuation.
P(n).Q = (x) fusex p(n, x). Q        V(n).Q = (x) tell p(n, x). Q
Each fusex p(n, x) instantiates a variable x such that p(n, x) holds. Of course, the same x cannot be instantiated twice, so it is effectively consumed. New variables are furnished by V (n).
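The following toy Python class is an analogy we add here (it is not the calculus semantics): V adds a token standing for a constraint p(n, x) on a fresh variable x, while P consumes one such token, mirroring the way fusex instantiates, and thereby uses up, exactly one variable.

```python
from itertools import count

# A toy, single-threaded picture of the semaphore encoding above: an analogy
# added for illustration, not the calculus semantics. V(n) corresponds to
# telling p(n, x) for a fresh variable x; P(n) corresponds to fuse_x p(n, x),
# which instantiates one such x and thereby consumes it.
class StoreSemaphore:
    def __init__(self, name: str):
        self.name = name              # the name n associated with the semaphore
        self._fresh = count()
        self._tokens = set()          # fresh variables x with p(n, x) still fusable

    def V(self):
        self._tokens.add(f"x{next(self._fresh)}")   # tell p(n, x), x fresh

    def P(self) -> bool:
        if not self._tokens:          # no x can be fused: the prefix stays blocked
            return False
        self._tokens.pop()            # fuse_x p(n, x): x gets instantiated, hence consumed
        return True

s = StoreSemaphore("n")
s.V(); s.V()
assert s.P() and s.P() and not s.P()
```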
Memory cells. We model below a memory cell.
New(n, v).Q = (x) tell (c(n, x) ∧ d(x, v)). Q
Get(n, y).Q = (w) fusew c(n, w). joiny d(w, y). New(n, y). Q        Set(n, v).Q = (w) fusew c(n, w). New(n, v). Q
The process New(n, v) initializes the cell having name n and initial value v (a name). The process Get(n, y) recovers v by fusing it with y: the procedure is destructive, hence the cell is re-created. The process Set(n, v) destroys the current cell and creates a new one.
Linda. Our calculus can model a tuple space, and implement the insertion and retrieval of tuples as in Linda [14]. For illustration, we only consider p-tagged pairs here.
Out(w, y).Q = (x) tell (p(x) ∧ p1(x, w) ∧ p2(x, y)). Q        In(w, y).Q = (x) fusex (p1(x, w) ∧ p2(x, y)). Q
In(?w, y).Q = (x) fusex p2(x, y). joinw p1(x, w). Q        In(w, ?y).Q = (x) fusex p1(x, w). joiny p2(x, y). Q        In(?w, ?y).Q = (x) fusex p(x). joinw p1(x, w). joiny p2(x, y). Q
The operation Out inserts a new pair in the tuple space. A fresh variable x is related to the pair components through suitable predicates. The operation In retrieves a pair by pattern matching. The pattern In(w, y) mandates an exact match, so we require that both components are as specified. Note that fuse will instantiate the variable x, effectively consuming the tuple. The pattern In(?w, y) requires matching only the y component. We do exactly that in the fuse prefix. Then, we use join to recover the first component of the pair, and bind it to w. The pattern In(w, ?y) is symmetric. The pattern In(?w, ?y) matches any pair, so we specify a weak requirement for the fusion. Then we recover the pair components.
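Again purely as an analogy (our own sketch, not the calculus semantics), the toy tuple space below mimics the constraint-store reading of these operations: Out records the pair components under a fresh handle, and each In variant consumes one matching handle, binding the '?'-marked components much as the join prefixes do.

```python
from itertools import count
from typing import Optional, Tuple

# A toy tuple space mirroring the constraint-store reading of the encoding above
# (an analogy added for illustration, not the calculus semantics). Out records
# the pair components under a fresh handle x; each In variant consumes a handle
# whose components match the pattern, binding the '?'-marked ones as join does.
class TupleSpace:
    def __init__(self):
        self._fresh = count()
        self._pairs = {}                              # handle x  ->  (w, y)

    def out(self, w, y):                              # Out(w, y)
        self._pairs[f"x{next(self._fresh)}"] = (w, y)

    def _take(self, match) -> Optional[Tuple]:
        for x, (w, y) in list(self._pairs.items()):
            if match(w, y):
                del self._pairs[x]                    # the fuse consumes the handle x
                return (w, y)
        return None                                   # no match: the prefix stays blocked

    def in_exact(self, w, y):   return self._take(lambda a, b: (a, b) == (w, y))  # In(w, y)
    def in_any_first(self, y):  return self._take(lambda a, b: b == y)            # In(?w, y)
    def in_any_second(self, w): return self._take(lambda a, b: a == w)            # In(w, ?y)
    def in_any(self):           return self._take(lambda a, b: True)              # In(?w, ?y)

ts = TupleSpace()
ts.out("a", 1); ts.out("b", 2)
assert ts.in_any_first(2) == ("b", 2) and ts.in_exact("a", 1) == ("a", 1)
```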
Synchronous π-calculus. We encode the synchronous π-calculus [19] into our calculus as follows:
[P|Q] = [P] | [Q]        [(νn)P] = (n)[P]        [X(→a)] = X(→a)        [X(→y) .= P] = X(→y) .= [P]
[ā⟨b⟩.P] = (x)( msg(x, b) | fusex in(a, x). [P] )        (x fresh)
[a(z).Q] = (y)( in(a, y) | (z) joinz msg(y, z). [Q] )        (y fresh)
Our encoding preserves parallel composition, and maps name restriction to name delimitation, as one might desire. The output cannot proceed with P until x is fused with some y. Dually, the input cannot proceed until y is instantiated to a name, that is until y is fused with some x – otherwise, there is no way to satisfy msg(y, z).
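The encoding is mechanical enough to be transcribed as a source-to-source translation; the Python sketch below is our own rendering, with AST constructor names of our own devising, and it additionally maps the inert π process to an inert process of our calculus, a case the equations above do not mention.

```python
from dataclasses import dataclass
from itertools import count
from typing import Tuple

# --- terms of our calculus (only what the encoding needs) ---
@dataclass
class Nil: pass                                          # inert process (empty sum)
@dataclass
class Par:    left: object; right: object                # P | Q
@dataclass
class Delim:  var: str; body: object                     # (a)P
@dataclass
class Constr: pred: str; args: Tuple[str, ...]           # active constraint, e.g. msg(x, b)
@dataclass
class Fuse:   var: str; constr: Constr; cont: object     # fuse_x c. P
@dataclass
class Join:   var: str; constr: Constr; cont: object     # join_z c. P

# --- synchronous pi-calculus terms ---
@dataclass
class PiNil: pass
@dataclass
class PiPar: left: object; right: object                 # P | Q
@dataclass
class PiNu:  name: str; body: object                     # (nu n) P
@dataclass
class PiOut: chan: str; msg: str; cont: object           # a<b>.P
@dataclass
class PiIn:  chan: str; var: str; cont: object           # a(z).Q

_fresh = count()
def fresh(prefix: str) -> str:
    return f"{prefix}{next(_fresh)}"

def encode(p):
    """The translation [.] of the equations above (the inert process maps to Nil)."""
    if isinstance(p, PiNil):
        return Nil()
    if isinstance(p, PiPar):
        return Par(encode(p.left), encode(p.right))
    if isinstance(p, PiNu):
        return Delim(p.name, encode(p.body))
    if isinstance(p, PiOut):    # (x)( msg(x, b) | fuse_x in(a, x). [P] )
        x = fresh("x")
        return Delim(x, Par(Constr("msg", (x, p.msg)),
                            Fuse(x, Constr("in", (p.chan, x)), encode(p.cont))))
    if isinstance(p, PiIn):     # (y)( in(a, y) | (z) join_z msg(y, z). [Q] )
        y = fresh("y")
        return Delim(y, Par(Constr("in", (p.chan, y)),
                            Delim(p.var, Join(p.var, Constr("msg", (y, p.var)),
                                              encode(p.cont)))))
    raise ValueError(f"unsupported term: {p!r}")

# e.g. the redex discussed just below:
term = PiNu("m", PiPar(PiOut("n", "m", PiNil()), PiIn("n", "z", PiNil())))
print(encode(term))
```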
The encoding above satisfies the requirements of [15]. It is compositional, mapping each π construct into a context of our calculus. Further, the encoding is name invariant and preserves termination, divergence and success. Finally, it is operationally corresponding since, writing →π for reduction in π,
P →π∗ P′ =⇒ [P] ↣∗ ∼ [P′]        [P] ↣∗ Q =⇒ ∃P′. Q ↣∗ ∼ [P′] ∧ P →π∗ P′
where ∼ is ↣-bisimilarity. For instance, note that:
[(νm)(n̄⟨m⟩.P | n(z).Q)] ↣∗ (m)(o)( msg(o, m) | [P] | in(n, o) | [Q]{m/z} ) ∼ (m)[P | Q{m/z}]
since the name o is fresh and the constraints msg(o, m), in(n, o) do not affect the behaviour of P, Q. To see this, consider the inputs and outputs occurring in P, Q. In the encoding of an output, the fusex prefix will instantiate x to a fresh name, hence never to o. In the encoding of an input, the joinz prefix can fire only after y has been fused with some x, hence instantiated to a fresh name; the presence of msg(o, m) has no impact on this firing.
Note that our encoding does not handle non-deterministic choice. This is however manageable through the very same operator ⊕ of Ex. 4. We will also exploit ⊕ below, to encode graph rewriting.
4.1 Graph rewriting
In the encoding of the π-calculus we have modelled a simple interaction pattern; namely, Milner-style synchronization. Our calculus is also able to model more sophisticated synchronization mechanisms, such as those employed in graph rewriting techniques [21]. Before dealing with the general case, we introduce our encoding through a simple example.
Example 5 Consider the following “ring-to-star” graph rewriting rule, inspired from an example in [17]:
[Figure: the ring A1–A2–A3–A4, where adjacent processes share a name (the bullets), is rewritten into the star B1, B2, B3, B4, all sharing a single fresh name.]
Whenever the processes A1 . . . A4 are in a configuration matching the left side of the rule (where the bullets represent shared names) a transition is enabled, leading to the right side. The processes change to B1 . . . B4, and a fresh name is shared among all of them, while the old names are forgotten. Modelling this kind of synchronization in, e.g., the π-calculus would be cumbersome, since a discovery protocol must be devised to allow processes to realize the transition is enabled. Note that no process is directly connected to the others, so this protocol is non-trivial.
Our calculus allows for an elegant, symmetric translation of the rule above, which is interpreted as an agreement among the processes A1 . . . A4. Intuitively, each process Ai promises to change into Bi, and to adjust the names, provided that all the others perform the analogous action. Since each Ai shares two names with the other processes, we write it as Ai(n, m). The advertised contract is specified below as a PCL formula, where we denote addition and subtraction modulo four as ⊞ and ⊟, respectively:
ai(n, m, x) = fi⊞1(x, m) ∧ si⊟1(x, n) ։ fi(x, n) ∧ si(x, m) (1)
An intuitive interpretation of f, s is as follows: fi(x, n) states that n is the first name of some process Ai(n, −) which is about to apply the rule. Similarly for si(x, m) and the second name. The parameter x is a session ID, uniquely identifying the current transition. The contract ai(n, m, x) states that Ai agrees to fire the rule provided both its neighbours do as well. The actual Ai process is as follows.
Ai(n, m) .= (x) tell ai(n, m, x). fusex ( fi(x, n) ∧ si(x, m) ). Bi(x)
Our PCL logic enables the wanted transition: P = ||i Ai(ni, ni⊞1) →∗ (m) ||i Bi(m).
Note that the above works even when nodes ni are shared among multiple parallel copies of the same
processes. For instance, P|P will fire the rule twice, possibly mixing Ai components between the two P’s.
General Case. We now deal with the general case of a graph rewriting system.
Definition 8 An hypergraph G is a pair (VG, EG) where VG is a set of vertices and EG is a set of hyper- edges. Each hyperedge e ∈ EG has an associated tag tag(e) and an ordered tuple of vertices (e1, . . . , ek) where x x ∈ VG. The tag tag(e) uniquely determines the arity k.
Definition 9 A graph rewriting system is a set of graph rewriting rules {Gi ⇒ Hi}i where Gi, Hi are the source and target hypergraphs, respectively. No rule is allowed to discard vertices, i.e. VGi ⊆ VHi . Without loss of generality, we require that the sets of hyperedges EGi are pairwise disjoint.
In Def. 10 below, we recall how to apply a rewriting rule G ⇒ H to a given graph J. The first step is to identify an embedding σ of G inside J. The embedding σ roughly maps H \ G to a “fresh extension” of J (i.e. to the part of the graph that is created by the rewriting). Finally, we replace σ(G) with σ(H).
Definition 10 Let {Gi ⇒ Hi}i be a graph rewriting system, and let J be a hypergraph. An embedding σ
of Gi in J is a function such that: (1) σ(v) ∈ VJ for each v ∈ VGi , and σ(v) /∈ VJ for each v ∈ VHi \VGi ;
(2) σ(e) ∈ EJ for each e ∈ EGi , and σ(e) /∈ EJ for each e ∈ EHi \ EGi ; (3) σ(v) = σ(v′) =⇒ v = v′ for each v, v′ ∈ VHi \VGi ; (4) σ(e) = σ(e′) =⇒ e = e′ for each e, e′ ∈ EGi ∪ EHi ; (5) tag(e) = tag(σ(e)) for each e ∈ EGi ∪ EHi ; (6) σ(e)h = σ(eh) for each e ∈ EGi ∪ EHi and 1 ≤ h ≤ k.
The rewriting relation J → K holds iff, for some embedding σ, we have VK = (VJ \ σ(VGi )) ∪ σ(VHi ) and EK = (EJ \ σ(EGi )) ∪ σ(EHi ). The assumption VGi ⊆ VHi of Def. 9 ensures VJ ⊆ VK, so no dangling hyperedges are created by rewriting.
We now proceed to encode graph rewriting in our calculus. To simplify our encoding, we make a mild assumption: we require each Gi to be a connected hypergraph. Encoding a generic hypergraph J is then performed in a compositional way: we assign a unique name n to each vertex in VJ, and then build a parallel composition of processes Atag(e)(→n), one for each hyperedge e in EJ, where →n = (n1, . . . , nk) identifies the adjacent vertices. Since the behaviour of a hyperedge e depends only on its tag, we index A with t = tag(e). Note that t might be the tag of several hyperedges in each source hypergraph Gi. We stress this point: the tag t may occur in distinct source graphs Gi, and each of these may have multiple hyperedges tagged with t. The process At must then be able to play the role of any of these hyperedges. The willingness to play the role of such a hyperedge e relative to a single node n is modelled by a formula pe,h(x, n) meaning “I agree to play the role of e in session x, and my h-th node is n”. The session variable x is exploited to “group” all the constraints related to the same rewriting. We use the formula pe,h(x, n) in the definition of At. The process At(→n) promises pe,1(x, n1), . . . , pe,k(x, nk) (roughly, “I agree to be rewritten as e”), provided that all the other hyperedges sharing a node nh agree to be rewritten according to their roles ē. Formally, the contract related to e ∈ EGi is the following:
ae(x, →n)  =  ⋀_{1 ≤ h ≤ k, ē ∈ EGi, ēh̄ = eh}  pē,h̄(x, nh)   ։   ⋀_{1 ≤ h ≤ k}  pe,h(x, nh)        (2)
Note that in the previous example we indeed followed this schema of contracts. There, the hypergraph J has four hyperedges e1, e2, e3, e4, each with a unique tag. The formulae fi and si in (1) are rendered as pei,1 and pei,2 in (2). Also the operators ⊞1 and ⊟1, used in (1) to choose neighbours, are generalized in (2) through the condition ēh̄ = eh.
Back to the general case, the process At will advertise the contract ae for each e having tag t, and then
will try to fuse variable x. Note that, since the neighbours are advertising the analogous contract, we can not derive any pe,h(x, nh) unless all the hyperedges in the connected component agree to be rewritten. Since Gi is connected by hypothesis, this means that we indeed require the whole graph to agree.
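To make the schema concrete, the short Python sketch below (our own rendering, with a textual output format that is not the paper's) generates the contract of equation (2) for every hyperedge of a source graph. The optional skip_self flag drops e's own atoms from the premise; with it set, the output for the ring of Example 5 matches the contracts in (1), up to the renaming of fi, si into pei,1, pei,2.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class Edge:
    name: str                   # identifier of the hyperedge e
    tag: str                    # tag(e)
    vertices: Tuple[str, ...]   # (e_1, ..., e_k)

def contract(e: Edge, G: List[Edge], x: str = "x", skip_self: bool = False) -> str:
    """Render a_e(x, n1..nk) following (2): the premise collects p[ebar,hbar](x, n_h)
    for every hyperedge ebar of G touching the h-th vertex of e at its hbar-th
    position; the conclusion promises p[e,h](x, n_h) for every position h of e."""
    premise = [f"p[{eb.name},{hb + 1}]({x},{v})"
               for h, v in enumerate(e.vertices)
               for eb in G if not (skip_self and eb is e)
               for hb, vb in enumerate(eb.vertices) if vb == v]
    conclusion = [f"p[{e.name},{h + 1}]({x},{v})" for h, v in enumerate(e.vertices)]
    return " & ".join(premise) + "  ->>  " + " & ".join(conclusion)

# The ring of Example 5: four edges A1..A4, adjacent ones sharing a vertex.
ring = [Edge(f"e{i}", f"A{i}", (f"n{i}", f"n{i % 4 + 1}")) for i in range(1, 5)]
print(contract(ring[0], ring, skip_self=True))
# p[e4,2](x,n1) & p[e2,1](x,n2)  ->>  p[e1,1](x,n1) & p[e1,2](x,n2)
```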
However, advertising the contracts ae using a simple parallel composition can lead to unwanted results when non-determinism is involved. Consider two unary hyperedges, which share a node n, and can be rewritten using two distinct rules: G ⇒ H with e1, e2 ∈ EG, and G¯ ⇒ H¯ with e¯1, e¯2 ∈ EG¯. Let
tag(e1) = tag(ē1) = t1 and tag(e2) = tag(ē2) = t2. Each process thus advertises two contracts, e.g.:
At1 = (x) (ae1(x, n) | ae¯1(x, n) | Fusiont1) At2 = (x) (ae2(x, n) | ae¯2(x, n) | Fusiont2)
Consider now At1 | At2. After the fusion of x, it is crucial that both hyperedges agree on the rewriting rule that is being applied – that is either they play the roles of e1, e2 or those of e¯1, e¯2. However, only one Fusion process above will perform the fusion, say e.g. the first one (the name m below is fresh):
(m)(ae1(m, n)|ae¯1(m, n)|Rewritee1|ae2(m, n)|ae¯2(m, n)|Fusiont2{m/x})
Note that the process Fusiont2{m/x} can still proceed with the other rewriting, since the substitution above cannot disable a prefix which was enabled before. So, we can end up with Rewritee¯2, leading to an inconsistent rewriting. Indeed, At1 was rewritten using G ⇒ H, while At2 according to G¯ ⇒ H¯.
To avoid this, we resort to the construction ⊕i pi discussed in Ex. 4. We can then define At as follows.
At(→n) .= (x)(  ⊕tag(e)=t ae(x, →n)   |   ∑tag(e)=t fusex ( ⋀1≤h≤k pe,h(x, nh) ). Be(x, →n)  )
In each At, the contracts ae are exposed under the ⊕. The consequences of these contracts are then demanded by a sum of fusex . We defer the definition of Be.
Consider now the behaviour of the encoding of a whole hypergraph: At(→n) | · · · | At′(→n′). If the hypergraph J contains an occurrence of G, where G ⇒ H is a rewriting rule, each of the processes P1, . . . , Pl involved in the occurrence may fire a fusex prefix. Note that this prefix demands exactly one contract ae from each process inside the occurrence of G. This is because, by construction, each ae under the same ⊕ involves distinct pe,h. This implies that, whenever a fusion is performed, the contracts which are not directly involved in the handshaking, but are present in the occurrence of G triggering the rewriting, are then effectively disabled. In other words, after a fusion the sums in the other involved processes have exactly one enabled branch, and so they are now committed to apply the rewriting coherently.
After the fusion, Be(x, →n) is reached, where x has been instantiated with a fresh session name m which is common to all the participants in the rewriting. It is then easy to exploit this name m to reconfigure the graph. Each involved vertex (say, with name n) can be exposed to all the participants through e.g. tell verth(m, n), and retrieved through the corresponding joiny verth(m, y). Since m is fresh, there is no risk of interference between parallel rewritings. New vertices (those in VH \ VG) can be spawned and broadcast in a similar fashion. Once all the names are shared, the target hypergraph H is formed by spawning its hyperedges EH through a simple parallel composition of At(→n) processes – each one with the relevant names. Note that the processes At, where t ranges over all the tags, are mutually recursive.
Correctness. Whenever we have a rewriting J → K, it is simple to check that the contracts used in the encoding yield a handshaking, causing the corresponding transitions in our process calculus. The reader might wonder whether the opposite also holds, hence establishing an operational correspondence. It turns out that our encoding actually allows more rewritings to take place than Def. 10 does. Using the Ai from Ex. 5, we have that the following loop of length 8 can perform a transition.
P = A1(n1, n2)|A2(n2, n3)|A3(n3, n4)|A4(n4, n5)|A1(n5, n6)|A2(n6, n7)|A3(n7, n8)|A4(n8, n1)
Indeed, every edge here has exactly the same “local view” of the graph as the corresponding edge in the G of the rewriting rule. So, a handshaking takes place. Roughly, if a graph J0 triggers a rewriting in the encoding, then each “bisimilar” graph J1 will trigger the same rewriting.
A possible solution to capture graph rewriting in an exact way would be to mention all the vertices in each contract. That is, edge A1 would use pA1 (n1, n2, x, y), while edge A2 would use pA2 (w, n2, n3, z), and so on, using fresh variables for each locally-unknown node. Then, we would need the fuse prefix to match these variables as well, hence precisely establishing the embedding σ of Def. 10. The semantics of fuse introduced in [2] allows for such treatment.
5 Discussion
We have investigated primitives for contract-based interaction. Such primitives extend those of Concurrent Constraints by allowing a multi-party mechanism for the fusion of variables which is well suited to modelling contract agreements. We have shown that our calculus is expressive enough to model a variety of typical contracting scenarios. To do that, we have also exploited our propositional contract logic PCL [2] to deduce the duties arising from a set of contracts. Finally, we have encoded into our calculus some common idioms for concurrency, among which the π-calculus and graph rewriting.
Compared to the calculus in [2], the current one features a different rule for managing the fusion of variables. In [2], the prefix fusex c picks from the context a set of variables →y (like ours) and a set of names →m (unlike ours). Then, the (minimal) fusion σ causes the variables in x→y to be replaced with names in n→m, where n is fresh, and σ(x) = n. The motivation behind this complex fusion mechanism is that, to establish a session in [2], we need to instantiate the variables →y which represent the identities of the principals involved in the handshaking. Similarly, join→x c is allowed to instantiate a set of variables
→x. Instead, in this paper, to present our expressivity results we have chosen a simplified version of the
calculus, where fusex c considers a single name, and joinx c a single variable. At the time of writing, we do not know whether the simplified calculus presented here is as expressive as the calculus of [2]. The contract-related examples shown in this paper did not require the more sophisticated rules for fuse , nor did the encodings of most of the concurrency idioms. As a main exception, we were unable to perfectly encode graph rewriting in our simplified calculus; the difficulty there was that of distinguishing between bisimilar graphs. We conjecture that the more general fusion of [2] is needed to make the encoding perfect; proving this would show our simplified calculus strictly less expressive than the one in [2].
In our model of contracts we have abstracted from most of the implementation issues. For instance, in insecure environments populated by attackers, the operation of exchanging contracts requires particular care. Clearly, integrity of contracts is a main concern, so we expect that suitable mechanisms have to be applied to ensure that contracts are not tampered with. Further, establishing an agreement between participants in a distributed system with unreliable communications appears similar to establishing common knowledge among the stipulating parties [16], so an implementation has to cope with the related issues. For instance, the fusex prefix requires a fresh name to be delivered among all the contracting parties, so the implementation must ensure everyone agrees on that name. Also, it is important that participants can be coerced to respect their contracts after the stipulation: to this aim, the implementation should at least ensure the non-repudiation of contracts [26].
Negotiation and service-level agreement are dealt with in cc-pi [8, 9], a calculus combining features from concurrent constraints and name passing; [7] adds rules for handling transactions. As in the π-calculus, synchronization is channel-based: it only happens between two processes sharing a name. Synchronization fuses two names, similarly to the fusion calculus and ours. A main difference between cc-pi and our calculus is that in cc-pi only two parties may simultaneously reach an agreement, while our fuse allows for simultaneous multi-party agreements. Also, in our calculus the parties involved in an agreement do not have to share a pre-agreed name. This is useful for modelling scenarios where a
contract can be accepted by any party meeting the required terms (see e.g. Ex. 3).
In [12] contracts are CCS-like processes. A client contract is compliant with a service contract if any possible interaction between the client and the service will always succeed, i.e. all the expected synchronizations will take place. This is rather different from what we expect from a calculus for contracts. For instance, consider a simple buyer-seller scenario. In our vision, it is important to provide the buyer with the guarantee that, after the payment has been made, then either the paid goods are made available,
or a refund is issued. Also, we want to assure the seller that a buyer will not repudiate a completed transaction. We can model this by the following contracts in PCL: Buyer = (ship ∨ refund) ։ pay, and Seller = pay ։ (ship∨ refund). Such contracts lead to an agreement. The contracts of [12] would have a rather different form, e.g. Buyer = pay.(ship+refund) and Seller = pay.(ship⊕refund), where + and ⊕ stand respectively for external and internal choice. This models the client outputting a payment, and then
either receiving the item or a refund (at service discretion). Dually, the service will first input a payment, and then opt for shipping the item or issuing a refund. This is quite distant from our notion of contracts. Our contracts could be seen as a declarative, underspecified description, of which behavioural contracts are an implementation. Behavioural contracts seem more rigid than ours, as they precisely fix the order in which the actions must be performed. Even though in some cases this may be desirable, many real-world contracts allow for a more liberal way of constraining the involved parties (e.g., “I will pay before the deadline”). While the crucial notion in [12] is compatibility (which results in a yes/no output), we focus on inferring the obligations that arise from a set of contracts. This provides a fine-grained quantification of the reached agreement, e.g. we may identify who is responsible for a contract violation.
Our calculus could be exploited to enhance the compensation mechanism of long-running transactions [4, 5, 10]. There, a transaction is partitioned into a sequence of smaller ones, each one associated with a compensation, to be run upon failures of the standard execution. While in long-running transactions clients have little control on the compensations (specified by the designer), in our approach clients can use contracts to select those services offering the desired compensation.
An interesting line for future work is that of comparing the expressiveness of our calculus against other general synchronization models. Our synchronization mechanism, based on the local minimal fusion policy and PCL contracts, seems to share some similarities with synchronization algebras with mobility (SAMs) [18]. Indeed, in many cases, it seems possible to achieve the synchronization defined by a SAM through some handshaking in our model. We expect that a number of SAMs can be encoded through suitable PCL contracts, without changing the entailment relation. Dually, we expect that the interactions deriving from a set of contracts could often be specified through a SAM.
Another general model for synchronization is the BIP model [3]. Here, complex coordination schemes can be defined through an algebra of connectors. While some of these schemes could be modelled by contracts, encoding BIP priorities into our framework seems to be hard. Actually, the only apparent link between priorities and our calculus is the minimality requirement on fusions. However, our mechanism appears to be less general. For instance, BIP allows maximal progress as its priority relation, which contrasts with the minimality of our fusions.
Acknowledgments. Work partially supported by MIUR Project SOFT, and Autonomous Region of Sardinia Project TESLA.
References
[2] Xxxxxxx Xxxxxxxxxx and Xxxxxxx Xxxxxx. A calculus of contracting processes. In Proc. LICS, 2010.
[3] Xxxxx Xxxxxxx and Xxxxxx Xxxxxxx. The algebra of connectors—structuring interaction in BIP. IEEE Transactions on Computers, 57(10), 2008.
[4] Xxxxx Xxxxxx, Xxxxxx Xxxxxx, and Xxxxxxxxx Xxxxxxxxx. A calculus for long running transactions. In Proc. FMOODS, 2003.
[5] Xxxxxxx Xxxxx, Hernán X. Xxxxxxxxx, and Xxx Xxxxxxxxx. Theoretical foundations for compensations in flow composition languages. In Proc. POPL, 2005.
[7] Xxxxx Xxxxxx Xxxxxxx and Xxxxx´n X. Xxxxxxxxx. Transactional service level agreement. In Proc. TGC, 2007.
[8] Xxxxx Xxxxxx Xxxxxxx and Xxx Xxxxxxxxx. CC-Pi: A constraint-based language for specifying service level agreements. In Proc. ESOP, 2007.
[9] Xxxxx Xxxxxx Xxxxxxx and Xxx Xxxxxxxxx. Open bisimulation for the Concurrent Constraint Pi-Calculus. In
Proc. ESOP, 2008.
[10] Xxxxxxx X. Xxxxxx, X. X. X. Xxxxx, and Xxxxx Xxxxxxxx. A trace semantics for long-running transactions. In
Communicating Sequential Processes: The First 25 Years, 2004.
[11] Xxxxxxx Xxxxxxxxx and Xxxxxx Xxxxxx. A basic contract language for web services. In Proc. ESOP, 2006.
[12] Xxxxxxxx Xxxxxxxx, Xxxx Xxxxxxx, and Xxxx Xxxxxxxx. A theory of contracts for web services. ACM Trans. on Prog. Lang. and Systems, 31(5), 2009.
[13] Xxxxxxxx Xxxxxxxx and Xxxx Xxxxxxxx. Contracts for mobile processes. In Proc. CONCUR, 2009.
[14] Xxxxx Xxxxxxxxx. Generative communication in Linda. ACM Trans. on Prog. Lang. and Systems, 7(1), 1985.
[15] Xxxxxxx Xxxxx. Towards a unified approach to encodability and separation results for process calculi. In Proc. CONCUR, 2008.
[16] Xxxxxx X. Xxxxxxx and Xxxxx Xxxxx. Knowledge and common knowledge in a distributed environment. J. ACM, 37(3), 1990.
[17] Xxxx Xxxxxx and Xxx Xxxxxxxxx. Mapping fusion and synchronized hyperedge replacement into logic programming. Theory and Practice of Logic Programming, 7(1-2), 2007.
[18] Xxxx Xxxxxx and Xxxxxx Xxxxxx. Synchronized hyperedge replacement for heterogeneous systems. In COORDINATION, 2005.
[19] Xxxxx Xxxxxx, Xxxxxxx Xxxxxx, and Xxxxx Xxxxxx. A Calculus of Mobile Processes, I and II. Information and Computation, 100(1), 1992.
[20] Xxxx Xxxxxxxx. Contract-based discovery and adaptation of web services. In Proc. SFM, 2009.
[21] Xxxxxxxx Xxxxxxxxx, editor. Handbook of graph grammars and computing by graph transformation: volume 1: foundations. World Scientific Publishing Co., Inc., 1997.
[22] Xxxxxx Xxxxxxxxx and Xxxxx Xxxxxx. The π-calculus: A Theory of Mobile Processes. Cambridge University Press, 2001.
[23] Xxxxx Xxxxxxxx. Concurrent Constraint Programming. MIT Press, 1993.
[24] Xxxxx Xxxxxxxx, Xxxxxxx Xxxxxxxxxx, and Xxxxxx Xxxxxx. Semantic foundations of concurrent constraint programming. In Proc. POPL, 1991.
[25] Xxxx Xxxxxxxxx and Xxxx xxx Xxxxx. Constructivism in Mathematics, vol. 1. North-Holland, 1988.
[26] Xxxxxxxx Xxxx. Non-repudiation in Electronic Commerce. Artech House, 2001.