# Linear Logic

*First published Wed 6 Sep, 2006*

Linear logic is a refinement of classical and intuitionistic logic.
Instead of emphasizing *truth*, as in classical logic, or
*proof*, as in intuitionistic logic, linear logic emphasizes the
role of formulas as *resources*. To achieve this focus, linear
logic does not allow the usual structural rules of contraction and
weakening to apply to all formulas but only those formulas marked with
certain modals. Linear logic contains a fully involutive negation while
maintaining a strong constructive interpretation. Linear logic also
provides new insights into the nature of proofs in both classical and
intuitionistic logic. Given its focus on resources, linear logic has
found many applications in Computer Science.

- 1. Legend
- 2. Introduction
- 3. Proof Systems
- 4. Semantics
- 5. Complexity
- 6. Computer science impact
- 7. Variations
- Bibliography
- Other Internet Resources
- Related Entries

## 1. Legend

This entry uses special symbols that have become standard in texts on linear logic. The following table lists them together with their usual names.

| Symbol | Name |
|---|---|
| & | additive and (with) |
| ⊕ | additive or (oplus) |
| ⊗ | multiplicative and (tensor) |
| ⅋ | multiplicative or (par) |
| ∧ | classical conjunction |
| ⊤ | top |
| ∨ | classical disjunction |
| ⊥ | bottom |
| ⊸ | linear implication |

## 2. Introduction

### 2.1 A bit of history

Linear logic was introduced by Jean-Yves Girard in his seminal work Girard 1987. While the origin of this new logic lies in a semantic analysis of the models of System F (or polymorphic lambda calculus), one can see the whole system of linear logic as a bold attempt to reconcile the beauty and symmetry of the systems for classical logic with the quest for constructive proofs that had led to intuitionistic logic.

Indeed, one could present a first fragment of linear logic, known as
*multiplicative additive linear logic* (MALL), as the outcome
of two simple observations:

- In the sequent calculus presentation of classical logic, the rules for the connectives "*and*" and "*or*", as well as the Cut rule and the rule for implication, may be presented equivalently in an additive form (the contexts of the premises are the same) or a multiplicative form (the contexts of the premises are different). These two presentations are equivalent, in classical logic, because of the availability of a few so-called "structural" rules, namely, contraction and weakening.
- The non-constructive proofs that one can perform in classical logic actually use, in the sequent calculus presentation, one or the other of these structural rules.
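To make the first observation concrete, here are the two forms of the right-introduction rule for classical conjunction, written in standard sequent notation:

```latex
% multiplicative form: the premise contexts are different and get joined
\[
  \frac{\Gamma \vdash A, \Delta \qquad \Gamma' \vdash B, \Delta'}
       {\Gamma, \Gamma' \vdash A \wedge B, \Delta, \Delta'}
  \qquad
  % additive form: both premises share the same context
  \frac{\Gamma \vdash A, \Delta \qquad \Gamma \vdash B, \Delta}
       {\Gamma \vdash A \wedge B, \Delta}
\]
```

With contraction and weakening available, each form is derivable from the other; without them, the two rules define the distinct connectives ⊗ and &.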

So, if we want to eliminate the non-constructive proofs without
destroying the symmetry of the sequent calculus, as is done in
intuitionistic logic, we can try to eliminate the contraction and
weakening rules instead. In doing so, we are left with two different
versions of each connective: one additive version for *and* and
*or*, and one multiplicative version of *and* and
*or*, which are no longer equivalent. These new connectives
are called & (additive *and*), ⊕ (additive
*or*), ⊗ (multiplicative *and*) and ⅋
(multiplicative *or*).

This duplication of the connectives actually leads to a much clearer
understanding of the conflict between classical and intuitionistic
logic: the law of excluded middle, for example (*A* *or*
not-*A*), is considered self-evident in the classical world and
absurd in the intuitionistic one. But in linear logic, this
law has two readings, the additive (*A* ⊕
¬*A*) and the multiplicative (*A* ⅋
¬*A*): the additive one is not provable, and corresponds to
the intuitionistic reading of disjunction; the multiplicative one is
trivially provable and simply corresponds to the tautology (*A*
*implies* *A*) that is perfectly acceptable in
intuitionistic logic too.

Also, the disjunction property, essential in constructivism, is easily established for the additive disjunction.

We then find inside this richer language a way to represent both the needs of intuitionism and the elegance of classical logic: negation is involutive, sequents are symmetric, connectives are inter-definable, models are simple, and yet for the right encoding of intuitionistic logic into linear logic we can establish a soundness and completeness theorem. Contrast these properties with those of intuitionistic logic, where negation is not involutive, sequents are not symmetric, connectives are not all inter-definable, and Kripke models are considerably more involved than Boolean algebras.

Notice though that once one has eliminated the contraction and
weakening rule, formulas no longer behave as immutable truth values:
indeed, when we have a proof of *A* ⇒ *B* and a
proof of *A* in linear logic, by composing them we actually
consume them to get a proof of *B*, so that *A* ⇒
*B* and *A* are no longer available after the
composition. Linear logic formulas behave now more like
*resources* that can only be used once.
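The resource reading can be illustrated with a small sketch in Python (the encoding of formulas as strings is invented for this example): a linear context is a multiset of formulas, and applying an implication consumes both the implication and its argument.

```python
from collections import Counter

def linear_apply(context: Counter, premise: str, conclusion: str) -> Counter:
    """Use one copy of `premise` and one copy of the resource
    `premise -o conclusion` to produce one `conclusion`.
    Both consumed resources disappear from the returned context."""
    implication = f"{premise} -o {conclusion}"
    if context[premise] == 0 or context[implication] == 0:
        raise ValueError("resource not available")
    # Counter subtraction removes the two consumed resources
    result = context - Counter({premise: 1, implication: 1})
    result[conclusion] += 1
    return result

ctx = Counter({"A": 1, "A -o B": 1})
ctx = linear_apply(ctx, "A", "B")
# the context now holds one B; A and A -o B have been used up
```

After the application, neither *A* nor *A* ⊸ *B* remains available, which is exactly the behavior described above.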

To recover the full expressive power of intuitionistic and classical
logic, it is then necessary to add to the MALL fragment two dual
“modalities” that make it possible to arbitrarily duplicate (the
**!** modality) or discard (the
**?** modality) resources: for the formulas
**!***B* and **?***B*, and
*only* for these, we are allowed to use contraction and
weakening. This leads to the full propositional linear logic, and to a
very nice connection with computer science.

### 2.2 Linear logic and computer science

Logic, or at least proof-theory, is focused on formal proof systems:
intuitionistic predicate calculus, classical predicate calculus,
arithmetic, higher order calculi, and a wealth of similar consistent
and structured sets of process-building rules. Intuitionistic and
constructive logic began when people saw the possibility of reading
‘*A* ⇒ *B*’ as ‘if you give me an
*A*, I will give you a *B*’, which is a
significant departure from the classical reading ‘*B* is
true whenever *A* is’.

Computer science, on the other hand, focuses on computational
mechanisms: function application, exception handling, method
invocation in object oriented languages, variable assignment and
similar sets of process-building rules. The difference is that the mechanisms of
these processes have to be made explicit: a function of type *A*
→ *B* gives a formal account of how to transform an
*A* into a *B*.

At a given moment these two sciences met. People realized that the set
of implication-only *intuitionistic* deductions was a core
functional language called simply-typed lambda-calculus: the
programming language was a logic, the logic a programming language.
This memorable meeting was called the ‘Curry-Howard
isomorphism’
(Howard 1980).

Linear logic begins with a further twist in the reading of
‘*A* ⇒ *B*’: read now ‘give me
*as many* *A* as I might need and I will give you
*one* *B*’. The notion of *copy* which is
so central to the idea of computation is now wired into the logic.

This new viewpoint opens up new possibilities, including:

- *new formulas* expressing refined operational properties like ‘give me *A* once and I will give you *B*’. Applications here range from knowledge representation in AI, to refined logic programming where the ability of linear logic to represent states is put to use, to analysis of classical logic and computational interpretations thereof (namely exception mechanisms in programming languages) by means of embeddings in linear logic, to refined temporal logics and linearity analysis.
- *new rules* expressing constraints on the use of copies, resulting in a fragment of linear logic for polytime computations, to mention only the most spectacular application.
- *new ways* of representing proofs

## 3. Proof Systems

The core propositional connectives of linear logic are divided into
additive and multiplicative connectives. The classical conjunction
and its identity, ∧ and ⊤, split into the additive &
(with) and ⊤ (top) and the multiplicative ⊗ (tensor) and 1
(one). Similarly, the classical disjunction and its identity, ∨
and ⊥, split into the additive ⊕ (oplus) and 0 (zero) and
the multiplicative ⅋ (par) and ⊥ (bottom). Negation is
generally treated in one of two ways in presentations of linear
logic. Negation can be viewed as a primitive propositional connective
with no restrictions on its occurrences within formulas. Since
deMorgan dualities exist between negation and all propositional
connectives, exponentials, and quantifiers, it is also possible to
treat negation as a special symbol that only occurs applied to atomic
formulas. Implications are also commonly introduced into linear logic
via definitions: the linear implication *B* ⊸ *C*
can be defined as ¬*B* ⅋ *C*, while the
intuitionistic implication *B* ⇒ *C* can be defined
as **!***B* ⊸ *C*. The operators
**!** and **?** are variously called modals
or exponentials. The term “exponential” is particularly
appropriate since, following the usual relationship between
exponentiation, addition, and multiplication, linear logic supports
the equivalences **!**(*B* & *C*)
≡ (**!***B* ⊗
**!***C*) and **?**(*B*
⊕ *C*) ≡ (**?***B* ⅋
**?***C*), as well as the 0-ary versions of these
equivalences, namely, (**!**⊤ ≡ 1) and
(**?**0 ≡ ⊥). Here, we use the binary
equivalence *B* ≡ *C* to mean that the formula
(*B* ⊸ *C*) & (*C* ⊸ *B*)
is derivable in linear logic.
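The analogy behind the name “exponential” can be displayed as follows (a LaTeX rendering using the notation of the `cmll` package, where \oc and \wn stand for **!** and **?**): the exponentials turn additive structure inside into multiplicative structure outside, just as $e^{b+c} = e^b \cdot e^c$.

```latex
% requires \usepackage{cmll} for \oc (!), \wn (?), \with (&), \parr
\begin{align*}
  \oc(B \with C)  &\equiv\; \oc B \otimes \oc C  &  \oc\top &\equiv 1\\
  \wn(B \oplus C) &\equiv\; \wn B \parr \wn C    &  \wn 0   &\equiv \bot
\end{align*}
```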

### 3.1 Sequent calculus

A two-sided sequent calculus for linear logic is presented in the figure below. Notice here that negation is treated as if it were any other logic connective: that is, its occurrences in formulas are not restricted and there are introduction rules on the left and right for negation.

Here, the left and right sides of sequents are multisets of formulas:
thus, the order of formulas in these contexts does not matter but
their multiplicity does matter. Notice that the rules of weakening and
contraction are available only for formulas marked with the
exponential **!** on the left or **?** on
the right of the sequent. The usual proviso for the ∀-right and
∃-left introduction rules is assumed: in particular, the
variable *y* must not be free in the formulas of the sequent
conclusion of those inference rules. Quantification here is assumed to
be first-order: higher-order versions of linear logic can be written
along standard lines.

The cut rule can be eliminated and completeness is still maintained.
Dually, the *init* rule can also be eliminated, except
for occurrences of *init* involving atomic formulas.

### 3.2 Focused proof search

An important normal form theorem for the structure of cut-free
proofs was provided in
(Andreoli 1992).
He classified a non-atomic formula as *asynchronous* if its
top-level logical connective is ⊤, &, ⊥, ⅋,
**?**, or ∀, or *synchronous* if its
top-level logical connective is 0, ⊕, 1, ⊗,
**!**, or ∃.
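This classification is purely syntactic and easy to compute; the sketch below (in Python, with invented constructor names) classifies a formula by inspecting its top-level connective, as a focused prover would.

```python
# top-level connectives, grouped as in Andreoli's classification
ASYNCHRONOUS = {"top", "with", "bottom", "par", "whynot", "forall"}
SYNCHRONOUS  = {"zero", "oplus", "one", "tensor", "ofcourse", "exists"}

def polarity(formula):
    """Classify a non-atomic formula by its top-level connective.
    Formulas are tuples like ("par", B, C); atoms are plain strings."""
    if isinstance(formula, str):
        raise ValueError("atoms are classified by an assigned bias, not here")
    head = formula[0]
    if head in ASYNCHRONOUS:
        return "asynchronous"
    if head in SYNCHRONOUS:
        return "synchronous"
    raise ValueError(f"unknown connective: {head}")
```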

When viewing proof search as a computational model, we can see
formulas in a sequent as being “agents” that may act
independently or in concert with others in their environment: the action
of the agent is determined by reading the introduction rule for it
bottom-up. If an asynchronous formula occurs on the right of a sequent,
it can evolve without affecting provability and without interacting
with its context. For example, the agent (*B* ⅋
*C*) becomes (by applying the ⅋-right introduction rule)
the two agents *B* and *C* (now working in parallel).
Similarly, the agent (*B* & *C*) yields (by applying
the &-right introduction rule) two identical worlds
(sequents), except that *B* is in one of these worlds and
*C* is in the other.

On the other hand, if we view a synchronous formula as an agent whose evolution is determined by the corresponding right-introduction rule, then it is possible for a provable sequent to evolve to a non-provable sequent (for example, by applying the ⊕ right-introduction rule). Also, the instances of such inference rules depend on details of the context of the formula. For example, the success of the 1-right introduction rule requires that the surrounding context be empty, and the success of the ⊗-right introduction rule can depend on how the agent's surrounding context is divided into two different sequents. Thus, the evolution of such agents depends on “synchronizing” with other parts of the context.

Now consider a one-sided sequent calculus presentation of linear
logic where the only introduction rules are right-introduction rules.
Given the above classification of connectives, it is possible to show
that proof search can be structured into the following phases without
loss of completeness. The *asynchronous phase* occurs if there
is an asynchronous formula present in the sequent. In this phase,
right-introduction rules are applied in any order until there are no
further asynchronous formulas. In the *synchronous phase* some
synchronous formula is selected and becomes the “focus” of
this phase: that is, right-introduction rules are applied to it and to
any synchronous subformula that it might generate.

The following figure illustrates the focused proof system for linear
logic. Notice that the two phases are represented by different arrows:
the up-arrow denotes the asynchronous phase and the down-arrow denotes
the synchronous phase. Atoms are given polarity and in the figure below,
*A* stands for positive atoms and the negation of *A* stands
for negative atoms. In this proof system, Ψ denotes a set of
formulas and Δ and *L* denote multisets of
formulas. Since *L* appears only in rules involving the up-arrow,
it is also possible for *L* to denote a list: that is, it is
possible to impose an arbitrary order on the formulas to the right of
the up-arrow since the introduction of asynchronous formulas can be
done in any order. Proofs built by these inference rules are called
*focused* proofs. The result in
Andreoli 1992
is that focused proofs are complete for linear logic.

### 3.3 Proof nets

Proofs presented using sequent calculus contain a lot of details that
sometimes are uninteresting: consider for example how many
uninterestingly different ways there are to form a proof of ⊢
Γ, (*A*_{1} ⅋ *A*_{2}),
…, (*A*_{n−1} ⅋
*A*_{n}) from a derivation of ⊢ Γ,
*A*_{1}, *A*_{2}, …,
*A*_{n}. This unpleasant fact derives from the
sequential nature of proofs in sequent calculus: if we want to apply a
set *S* of *n* rules to different parts of a sequent, we
cannot apply them in one step, even if they do not interfere with each
other! We must *sequentialize* them,
i.e., choose a linear order on *S* and apply the rules in
*n* steps, according to this order. This sequential nature of
the sequent calculus is a major obstacle when doing proofs of
cut-elimination, for example, where one needs to take into account
also all the non-interesting derivations.

A natural question arises: “is there a representation of
proofs that abstracts from such uninteresting details?”. A
similar question is answered positively in the case of intuitionistic
sequent calculus by means of what is known as *natural
deduction*, that has, via the Curry-Howard correspondence, a strong
connection with the computational device known as lambda calculus.

For linear logic, this succinct representation of proofs is given by
*proof nets*, graph-like structures that enjoy particularly good
properties for the MLL fragment of the logic. The first step towards
this representation is to convert all the sequent calculus system,
using the involutivity of negation, into a one-sided system, where
sequents are of the form ⊢ Γ. As a consequence, the number
of rules is halved, as we keep only the left column of the system shown
above, but we keep the same expressive power, as provability stays the
same.

To each one-sided sequent calculus proof in MLL, one can
inductively associate a proof net with the same *conclusions*
as follows:

- To a proof reduced to the axiom rule, we associate an *axiom link*.
- For a proof obtained by applying the cut rule to two proofs P1 and P2, we first inductively build the proof nets associated to P1 and P2, and then we combine them using a *cut link*.
- For a proof obtained by applying the tensor rule to two proofs P1 and P2, we first inductively build the proof nets associated to P1 and P2, and then we combine them using a *tensor link*.
- For a proof obtained by applying the par rule to a proof P1, we first inductively build the proof net associated to P1, and then we add a *par link*.

All this can be properly formalized using hypergraphs (formulas are nodes and “links” are oriented hyperedges with hypotheses and conclusions), and we can formally define as a proof net a hypergraph inductively built out of a sequent calculus derivation of MLL. Notice that there are quite a lot of hypergraphs that are not proof nets.

Now if you look at the proof net built from the derivations of ⊢
Γ, (*A*_{1} ⅋ *A*_{2}),
…, (*A*_{n−1} ⅋
*A*_{n}) obtained from ⊢ Γ,
*A*_{1}, *A*_{2}, …,
*A*_{n}, you will see that all trace of the
order of application of the rules has disappeared. In a sense, a
proof net is an equivalence class of sequent calculus derivations
with respect to the order of application of rules whose applications
commute.

Suppose that somebody now comes to you with a huge hypergraph built with axiom, cut, par and tensor links, claiming that it is actually a concise representation of the proof of some long-standing mathematical open problem. How can you verify that it is actually a representation of a proof, and not just a random structure?

Performing this *correctness* check is a challenge that amounts
to rebuilding a sequential construction history for your structure,
corresponding to a derivation in sequent calculus, and it seems at first
a very complex problem: the first correctness criterion for MLL proof
nets, called the “long trip criterion”, present in
Girard's original paper, requires exponential time, as does the ACC
(acyclic connected) criterion of Danos and Regnier found later on.
Nevertheless, there exists a much more efficient criterion, known as
contractibility, due to Danos and Regnier, that has more recently been
reformulated as the following elegant graph parsing criterion by
Guerrini, Martini and Masini: a hypergraph is a proof net if and only
if it reduces to the singleton node “net” via the
following graph reduction rules.

Performing this check naively can take quadratic time (each application of a rule may require a full traversal of the graph to find the redex, and we need to perform as many steps as there are hyperlinks in the graph), but a result due to Guerrini shows how to perform this check in almost linear time. Many other correctness criteria are known, some of which are also linear, like the one by Murawski and Ong derived from a criterion for intuitionistic MLL.
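To convey the flavor of such checks, here is a small Python sketch of the (exponential-time) Danos–Regnier switching criterion itself, rather than of the linear-time algorithms: a structure built from axiom, cut, tensor and par links is a proof net exactly when every switching — a choice of one premise edge per par link — yields an acyclic connected graph. The link encoding below is invented for the example.

```python
from itertools import product

def is_proof_net(nodes, links):
    """Danos-Regnier criterion for MLL proof structures: every switching
    (one premise edge chosen per par link) must give a tree, i.e. an
    acyclic connected graph on the given formula nodes."""
    fixed, pars = [], []
    for link in links:
        if link[0] in ("ax", "cut"):        # ax/cut: one edge between two doors
            fixed.append((link[1], link[2]))
        elif link[0] == "tensor":           # both premise edges always present
            _, a, b, c = link
            fixed += [(a, c), (b, c)]
        elif link[0] == "par":              # a switching keeps only one of them
            _, a, b, c = link
            pars.append([(a, c), (b, c)])
    for choice in product(*pars):
        edges = fixed + list(choice)
        # an acyclic connected graph on n nodes has exactly n-1 edges,
        # and n-1 acyclic edges are automatically connected
        if len(edges) != len(nodes) - 1:
            return False
        parent = {n: n for n in nodes}      # union-find to detect cycles
        def find(x):
            while parent[x] != x:
                x = parent[x]
            return x
        for a, b in edges:
            ra, rb = find(a), find(b)
            if ra == rb:                    # adding this edge closes a cycle
                return False
            parent[ra] = rb
    return True
```

For instance, the net for ⊢ *A* ⅋ *A*⊥ (an axiom link under a par link) passes the check, while the same links with the par replaced by a tensor fail it.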

On proof nets, one can perform cut elimination in a particularly clean way: due to their parallel nature, cuts can be eliminated locally via two simplification rules:

- Axiom move:
- Multiplicative move:

These are actually computation rules over proof nets, and the correctness criteria make it easy to verify that any such rule preserves correctness; as a consequence, the reduct of a proof net still comes from a sequent calculus proof of the same sequent.

Hence, cut elimination in MLL proof nets can be performed in linear time, and gives a simple, elegant cut-elimination result for all of MLL.

The proof nets approach can be extended to larger subsets of linear logic, but then it is less clear how to obtain the same elegant results as for MLL: the original system proposed in Girard 1987 works for MELL, for example, by associating to the four exponential rules the following hypergraph constructions:

- Contraction
- Weakening
- Dereliction
- Promotion, that introduces the notion of “box”, a
sequentialisation mark around a piece of a proof net materialised in
the pictures of the graphs by the rectangle drawn around the proof net
of conclusions
*A*and**?**Γ.

While these constructions and the associated graph reductions bear a striking similarity to the lambda calculus with explicit substitutions, as first remarked by Di Cosmo & Kesner (1997), they stay too close to the corresponding sequent calculus rules: the parallelization effect that is so elegant for MLL does not properly carry over here, and the graph reduction rules involve boxes and are not local.

To recover a satisfactory system, many proposals have been made,
starting from the one by Danos & Regnier
(1993)
but we want to mention here the very elegant
approach by Guerrini, Martini and Masini
(Guerrini *et al*. 2003),
that neatly shows the connection between two level proof systems for
modal logic, proper proof nets for MELL, and optimal reduction in the
lambda calculus.

Extending proof nets to include a treatment of the additive connectives has various technical presentations, none of which appears canonical and satisfactory. Their treatment in proof-net-like proof systems is currently a topic of active research.

## 4. Semantics

Approaching the semantics of linear logic is usually done along two different paths. First, there are various semantic structures available to characterize “truth” in a model. Such approaches can be used to establish soundness and completeness for (some fragments of) linear logic. A more novel semantic approach to linear logic involves giving semantics to proof objects directly. We briefly describe these two approaches and provide some links to the literature.

### 4.1 Semantics of provability

One approach to attempting a sound and complete semantics for
fragments of linear logic associates to a formula the set of all
contexts that can be used to prove that formula. Of course, such a
collection may need to be more abstract and to be given various
closure properties. The *phase semantics* of Girard
(1987)
provides one such semantics: some uses of such semantics have been
made in computer science to provide counterexamples and as a tool that
can help establish that a given concurrent system cannot evolve into
another with certain properties
(Fages *et al*. 2001).
Similarly,
Kripke-style semantics have been provided in
Allwein & Dunn 1993
and
Hodas & Miller 1994.
Quantales, a certain kind of partially ordered algebraic structure,
have also been used to provide early semantic models for parts of
linear logic
(Yetter 1990).

### 4.2 Semantics of proofs

In the formulas-as-types analogy which is so popular and fruitful in
theoretical computer science, a logical system is put in
correspondence with a typed computational device (like typed lambda
calculus), by associating to each proof of a formula a program having
as type that very formula. For example, a proof of the tautology
*A* ⊃ *A* corresponds to the program
*fun* (*x*:*A*).*x* : *A*→*A*, the identity function on objects of type *A*. This
is why, in constructive logical systems, and in linear logic, so much
importance is attached to proofs, to the point that building and
studying models of proofs gets so much more attention than building
and studying models of provability: we are not satisfied to know that
a formula is provable, we really want to know the *computational
content* of its proof.

Many models of linear logic proofs have been proposed; we consider that, to date, the simplest and most intuitive construction is the one based on the so-called “relational semantics”, where formulas are interpreted as multisets, sequents as tuples of multisets, and proofs as relations over the interpretation of sequents. If one wants to give a purely set-theoretic semantics, without resorting to multisets, it is possible to do so by means of coherence spaces, sets equipped with a special coherence relation, as originally shown in Girard 1987. There are also interesting category-theoretical models of linear logic, such as *-autonomous categories (Barr 1991) and hypercoherences (Ehrhard 1993).

Another approach to semantics of proofs is given by Girard's
*Geometry of Interaction*, which yields a fully
algebraic characterization of proofs: to each proof net, one can
associate a partial permutation matrix σ corresponding to the
cut links, and a proper matrix *M* corresponding to the pure
net whose entries contain expressions built out of the so-called
dynamic algebra, that describe the possible paths inside the pure
net. It is then possible to fully describe the proof net via the
*execution formula*

EX(σ,M) = (1−σ^{2}) ( ∑_{i}M(σM)^{i} ) (1−σ^{2})

which, in the MLL case, is an invariant of the normalization process. Some nice connection to results coming from data-flow theory has been shown in some early work of Abramsky & Jagadeesan (1994).

A special mention is deserved by the extremely rich research area that
has developed around the so-called *game semantics*, whose strong
connection to linear logic was pointed out quite early by A. Blass
(1992).
In this approach, formulas are interpreted
as games, logical connectives as game constructors, and proofs as
strategies that describe how a player reacts to the opponent's moves. By
imposing different restrictions on the rules of the game, one can
actually provide a quite precise semantics (technically, a fully
abstract model) for various features of actual programming languages,
whence the great interest in this area in recent years. See, for
example,
Abramsky & Jagadeesan 1994,
Abramsky & Melliès 1999,
and
Hyland & Ong 2000.

## 5. Complexity

Extensive research has been dedicated to the study of complexity and decidability issues for several fragments of propositional linear logic. It is known that:

- MLL provability is NP-complete (Kanovich 1992)
- MALL provability is PSPACE-complete (Lincoln *et al*. 1992)
- LL provability is undecidable (Lincoln 1995)
- the question of decidability is still open for MELL: given that Petri net reachability can be encoded into MELL (Gunter & Gehlot 1989), deciding MELL must be at least EXPSPACE-hard.

Surprising as it may be for those who forget that the novelty of linear logic lies in the way formulas are managed without structural rules, these results stay the same even if we focus on the fragments of these logics where only the constants, and no propositional variables, are allowed (Kanovich 1994, Lincoln & Winkler 1994). Indeed, it is possible to encode arbitrary formulas into constant-only formulas while preserving provability.

A good overview of complexity results surrounding linear logic can be found in Lincoln 1995.

## 6. Computer science impact

When intuitionistic logic was first proposed early in the last century, it was presented as a challenge to the way traditional mathematicians were supposed to do business. Uses of the excluded middle and of proof-by-contradiction were considered suspect and problematic, particularly in the presence of infinity. As the concerns of intuitionistic logic were developed into constructive mathematics, new constructive approaches arose to topics such as topology, algebra, and analysis. Given that linear logic encompasses the dynamics of proof (algorithm) and resources, its primary impact has been not in traditional mathematics but in computer science. Before surveying the nature of that impact, we outline the various ways that logic more generally is used in computer science.

Logic plays different roles in the specification of computations. We can identify the following broadly different approaches and note which roles have felt influences from linear logic.

In the *computation-as-model* approach, computations are
encoded as mathematical structures and consist of such items as nodes,
transitions, and states. Logic is used in an external sense to make
statements *about* those structures. That is, computations are
used as models for logical expressions. Intensional operators, such as
the modals of temporal and dynamic logics or the triples of Hoare
logic, are often employed to express propositions about the change in
state. This use of logic to represent and reason about computation is
probably the oldest and most broadly successful use of logic for
representing computation. This role for logic has felt little influence
from linear logic.

In the *computation-as-deduction* approach, pieces of logic's
syntax (such as formulas, terms, types, and proofs) are used directly
as elements of the specified computation. In this more rarefied
setting, there are two rather different approaches to how computation
is modeled, called the *proof normalization* approach and the
*proof search* approach.

We outline below the significant impacts that linear logic has had on both of these different settings.

### 6.1 Proof normalization

The *proof normalization* approach views the state of a
computation as a proof term and the process of computing as
normalization (known variously as β-reduction or cut-elimination).
Functional programming can be explained using proof-normalization as
its theoretical basis
(Martin-Löf 1982)
and has been used to justify the design of new functional
programming languages, e.g.,
Abramsky 1993.
Linear logic provides this approach to computational specification with
new types, new declarative means for statically understanding how
resources may be used in a computation, and an appealing
means for formalizing the duality between a function and the
environment that supplies it with arguments.

Another area where linear logic has been a powerful theoretical
instrument is that of *optimal reduction*. The problem of
building efficient (optimal) interpreters for the Lambda Calculus, which
stayed open for quite a long time after its original definition by J.-J.
Lévy, was solved for the first time in
Lamping 1990,
via a sophisticated sharing graph implementation involving a quite
impressive amount of rules. Using ideas and intuition from linear
logic, many authors reconstructed Lamping's solution, simplifying it
and leading to a rich theory connected to that of the Geometry of
Interaction. A good reference for those interested is the book
“The optimal implementation of functional programming
languages” by Asperti and Guerrini.

The refinement of intuitionistic logic provided by linear logic, together with the dualities of linear logic, provided a setting in which one could view a function and its environment as similar entities that interact dually. For example, a function with the type α ⊸ β can be modeled as a process that consumes a value of type α from its environment and transforms it into a value of type β. In linear logic, this implication is equivalent to its contrapositive form: the type ¬β ⊸ ¬α can lead to interpreting the same function as a process that transforms a demand for a value of type β into a demand for a value of type α (notice that this does not happen with functions of intuitionistic type since, for example, an input argument may be vacuous) [Curien 2003]. The recent successes of using game semantics to model functional computation are similarly related to this dual treatment of function and environment (Abramsky and Jagadeesan 1994, Hyland & Ong 2000).

Finally, we mention that in the area encoding computation as proof normalization, linear logic has been used to provide a type-based description of polytime recursive functions. For example, M. Hofmann (2003) introduced a λ-calculus with modal and linear types that extended the function algebra of Bellantoni & Cook 1992 to higher types. Types based on linear logic have also been used within functional programs: see Guzman & Hudak 1990 and Wadler 1991.

### 6.2 Proof search

The *proof search* approach views the state of a computation
as a sequent (a structured collection of formulas) and the process of
computing as the process of searching for a proof of a sequent: the
changes that take place in sequents capture the dynamics of
computation. Logic programming can be explained using proof search as
its theoretical basis, and linear logic provides this approach to
computational specification with new combinators for building logic
programs, new means to capture rich dynamics, and new
declarative means to specify concurrent computations. (See
Miller 2004
for an overview of linear logic programming languages.)

The completeness of the focused proof system can be used to provide a
declarative explanation of part of the operational semantics of logic
programming within linear logic. Consider, for example, the subset
*L* of formulas of linear logic that are built from only the
connectives ⊤, &, ⊸, ⇒, and ∀. (Notice
that if one adds ⊥ to this list, it is possible to encode all
connectives of linear logic.) It is possible to see that cut-free
proof search in *L* can be divided into two phases. Given a
sequent Ψ; Δ → *G*, where Ψ is a set of
formulas (assumed to have an implicit **!** applied to
each of its formulas), Δ is a multiset of formulas, and *G* is a
single formula (all from *L*), proof search proceeds as
follows.

- If *G* is non-atomic, then only a right-introduction rule can be used to conclude this sequent. Given this set of connectives, the right rules are invertible, and this *goal-reduction* phase corresponds exactly to the asynchronous phase of focused proofs.
- If *G* is atomic, then proof search proceeds by *deciding* on a single formula from a left-hand context.
- Once a formula is chosen for the *focus*, all left-introduction rules are applied to only that formula and to any positive subformulas that arise. This *backchaining* phase corresponds to the synchronous phase of focused proof construction.

Formally, these various phases can be described using a small number
of inference rules. Notice that a new sequent arrow is introduced:
this arrow is labeled with the formula that is the result of a
left-introduction rule. Notice that the rule for left-introduction of
⊸ requires splitting the bounded context into the two parts
Δ_{1} and Δ_{2} (when reading
the rule bottom up). There are, of course, 2^{n} such
splittings if that context has *n* ≥ 0 distinct
formulas.
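The two phases described above can be sketched as a small, naive interpreter. The following Python sketch (our own illustration; it covers only the propositional connectives ⊤, & and ⊸, omits ⇒ and ∀, and makes no claim to efficiency) implements goal reduction and backchaining over a multiset context, including the exhaustive splitting of the context at the ⊸-left rule:

```python
# Formulas: ('atom', name), ('top',) for ⊤,
# ('with', A, B) for A & B, and ('limp', A, B) for A ⊸ B.

def prove(delta, goal):
    """Goal-reduction phase: invertible right-introduction rules."""
    if goal[0] == 'top':
        return True                    # ⊤ absorbs any remaining resources
    if goal[0] == 'with':
        # A & B: prove both conjuncts, each with its own copy of the context
        return prove(delta, goal[1]) and prove(delta, goal[2])
    if goal[0] == 'limp':
        # A ⊸ B: move A into the linear context and prove B
        return prove(delta + [goal[1]], goal[2])
    # Atomic goal: decide on a formula from the context and focus on it.
    return any(backchain(delta[:i] + delta[i + 1:], delta[i], goal)
               for i in range(len(delta)))

def backchain(delta, focus, goal):
    """Backchaining phase: left rules applied to the focused formula only."""
    if focus[0] == 'atom':
        return focus == goal and not delta   # every resource must be used
    if focus[0] == 'with':
        # &-left: choose one of the two components to continue the focus
        return (backchain(delta, focus[1], goal) or
                backchain(delta, focus[2], goal))
    if focus[0] == 'limp':
        # ⊸-left: split the context into Δ1 (proves the argument) and
        # Δ2 (continues with the result): 2^n possible splittings
        for mask in range(2 ** len(delta)):
            d1 = [f for i, f in enumerate(delta) if mask & (1 << i)]
            d2 = [f for i, f in enumerate(delta) if not mask & (1 << i)]
            if backchain(d2, focus[2], goal) and prove(d1, focus[1]):
                return True
    return False

a, b = ('atom', 'a'), ('atom', 'b')
print(prove([('limp', a, b), a], b))   # True: a and a ⊸ b yield b
print(prove([('limp', a, b)], b))      # False: no a to consume
print(prove([a, a], a))                # False: one a is left unused
```

The last example shows the resource discipline at work: unlike in intuitionistic logic, an unused hypothesis causes the proof attempt to fail, since weakening is not available.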

Seen from this perspective, linear logic provides, in particular, another specification language for computational systems, a role that is also occupied by, say, Petri nets, process calculi, the λ-calculus, etc. Given that linear logic is, in fact, a logic with a proof theory and various kinds of semantics, broad avenues for reasoning about computation can follow from the meta-theory of linear logic itself.

Given that sequent calculi for linear logic use multisets of formulas,
proof search can directly encode *multiset rewriting*. Since
many computations can naturally be seen as multiset rewriting, it has
been possible to make numerous connections between linear logic and
Petri nets
(Gunter & Gehlot 1989),
process calculi
(Andreoli & Pareschi 1991,
Kobayashi *et al*. 1999,
Miller 1996),
and security protocols
(Cervesato *et al*. 1999,
Miller 2003).
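As a small illustration of this reading, a Petri-net transition can be encoded as a linear implication whose firing consumes one multiset of tokens and produces another. The following Python sketch (an illustration of ours, not drawn from the cited literature) uses `collections.Counter` as the multiset:

```python
from collections import Counter

# A Petri-net transition read as a linear implication: firing
#   coin ⊗ machine ⊸ coffee ⊗ machine
# consumes the multiset on the left and produces the one on the right.
def fire(marking, pre, post):
    pre, post = Counter(pre), Counter(post)
    if any(marking[p] < n for p, n in pre.items()):
        return None          # preconditions unavailable: cannot fire
    return marking - pre + post

m = Counter({'coin': 2, 'machine': 1})
m = fire(m, ['coin', 'machine'], ['coffee', 'machine'])
print(sorted(m.items()))   # [('coffee', 1), ('coin', 1), ('machine', 1)]
```

A sequence of firings is then exactly a multiset-rewriting computation, and provability of the corresponding linear-logic sequent captures reachability between markings (Gunter & Gehlot 1989).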

## 7. Variations

### 7.1 Different treatments of modality

The exponentials **!** and **?** in linear
logic are less carved in marble than the other
connectives. Indeed, if one uses traditional sequent calculus
presentations, the exponentials are not “canonical”: if
you introduce another copy of exponentials, say
**!**′ and **?**′, with the same
rules as the original ones, there is no way to prove that
**!** is equivalent to **!**′, and
**?** to **?**′, while for the other
connectives this is easily established.

In this respect, the **!** and **?**
resemble the box and diamond connectives found in modal logic, and it
is then possible and interesting to study variations for the logical
rules of these connectives. For example, *elementary linear
logic* (ELL) is obtained by replacing the **!** and
**?** introduction rules by a single rule introducing
**!** and **?** at the same time. As a
consequence, ELL can encode all and only the functions over integers
that normalize in time bounded by an elementary function.

Other variations are *light linear logic*
(Girard 1998),
*soft linear logic*
(Lafont 2004),
and many others (see for example
Baillot & Terui 2004).

### 7.2 Non-commutative linear logic

While linear logic rejects the universal application of the two
structural rules of weakening and contraction, it allows the
unrestricted use of the *exchange* structural rule. A sequent
calculus that does not universally employ the exchange rule has
sequents whose left and right contexts are lists: the order of formulas
within a context becomes an expressive element of the logic. In this
case, the multiplicative disjunction and conjunction can become
non-commutative.

One of the first logics that rejects all three structural rules of
the sequent calculus was given in
Lambek 1958.
While this logic contains *two* implications, it does not
contain a negation nor exponentials. Various recent papers have
proposed extending linear logic to include non-commutative features
and, at present, no proposal seems canonical. For a sampling of
non-commutative linear logics, see
Yetter 1990,
Abrusci 1991,
Retoré 1997,
Abrusci & Ruet 1999, and
Guglielmi & Strassburger 2001.

## Bibliography

- Abramsky, S., 1993, "Computational interpretations of linear logic", *Theoretical Computer Science*, 111: 3-57.
- Abramsky, S., and Jagadeesan, R., 1994, "New Foundations for the Geometry of Interaction", *Information and Computation*, 111(1): 53-119.
- Abramsky, S., and Melliès, P.-A., 1999, "Concurrent Games and Full Completeness", *14th Annual Symposium on Logic in Computer Science*, Trento, Italy, IEEE Computer Society Press, pp. 431-442.
- Abrusci, V. M., 1991, "Phase semantics and sequent calculus for pure non-commutative classical linear propositional logic", *Journal of Symbolic Logic*, 56(4): 1403-1451.
- Abrusci, V. M., and Ruet, P., 1999, "Non-Commutative Logic I: The Multiplicative Fragment", *Annals of Pure and Applied Logic*, 101(1): 29-64.
- Allwein, G., and Dunn, J. M., 1993, "Kripke Models for Linear Logic", *Journal of Symbolic Logic*, 58(2): 514-545.
- Andreoli, J.-M., and Pareschi, R., 1991, "Linear objects: Logical processes with built-in inheritance", *New Generation Computing*, 9(3-4): 445-473.
- Andreoli, J.-M., 1992, "Logic programming with focusing proofs in linear logic", *Journal of Logic and Computation*, 2(3): 297-347.
- Baillot, P., and Terui, K., 2004, "Light Types for Polynomial Time Computation in Lambda-Calculus", *LICS*, Turku, Finland, IEEE Computer Society Press, pp. 266-275.
- Barr, M., 1991, "*-Autonomous categories and linear logic", *Mathematical Structures in Computer Science*, 1(2): 159-178.
- Bellantoni, S., and Cook, S., 1992, "A New Recursion-Theoretic Characterization of the Polytime Functions", *Computational Complexity*, 2: 97-110.
- Blass, A., 1992, "A game semantics for linear logic", *Annals of Pure and Applied Logic*, 56: 183-220.
- Cervesato, I., Durgin, N., Lincoln, P., Mitchell, J., and Scedrov, A., 1999, "A meta-notation for protocol analysis", in R. Gorrieri (ed.), *Proceedings of the 12th IEEE Computer Security Foundations Workshop — CSFW 1999*, Los Alamitos, CA: IEEE Computer Society Press, pp. 55-69.
- Curien, P.-L., 2003, "Symmetry and interactivity in programming", *Bulletin of Symbolic Logic*, 9(2): 169-180.
- Danos, V., and Regnier, L., 1993, "Proof-nets and Hilbert space", in J.-Y. Girard, Y. Lafont, and L. Regnier (eds.), *Advances in Linear Logic*, Cambridge: Cambridge University Press, 1995, pp. 307-328.
- Di Cosmo, R., and Kesner, D., 1997, "Strong normalization of explicit substitutions via cut elimination in proof nets" (extended abstract), in *Proceedings of the Twelfth Annual IEEE Symposium on Logic in Computer Science*, Los Alamitos, CA: IEEE Computer Society Press, pp. 35-46.
- Ehrhard, T., 1993, "Hypercoherences: A Strongly Stable Model of Linear Logic", *Mathematical Structures in Computer Science*, 3(4): 365-385.
- Fages, F., Ruet, P., and Soliman, S., 2001, "Linear Concurrent Constraint Programming: Operational and Phase Semantics", *Information and Computation*, 165(1): 14-41.
- Girard, J.-Y., 1987, "Linear logic", *Theoretical Computer Science*, 50: 1-102.
- –––, 1998, "Light Linear Logic", *Information and Computation*, 143(2): 175-204.
- Guerrini, S., Martini, S., and Masini, A., 2003, "Coherence for sharing proof-nets", *Theoretical Computer Science*, 294(3): 379-409.
- Guglielmi, A., and Strassburger, L., 2001, "Non-commutativity and MELL in the Calculus of Structures", *Computer Science Logic* (Lecture Notes in Computer Science, vol. 2142), Springer Verlag, pp. 54-68.
- Gunter, C. A., and Gehlot, V., 1989, "Nets as Tensor Theories", in G. De Michelis (ed.), *Proceedings of the Tenth International Conference on Application and Theory of Petri Nets*, Bonn, pp. 174-191.
- Guzman, J. C., and Hudak, P., 1990, "Single-threaded polymorphic lambda calculus", in *Proceedings of the Fifth IEEE Symposium on Logic in Computer Science*, Philadelphia, IEEE Computer Society Press.
- Hodas, J., and Miller, D., 1994, "Logic programming in a fragment of intuitionistic linear logic", *Information and Computation*, 110(2): 327-365.
- Hofmann, M., 2003, "Linear types and non-size increasing polynomial time computation", *Information and Computation*, 183(1): 57-85.
- Howard, W. A., 1980, "The formulae-as-type notion of construction, 1969", in J. P. Seldin and R. Hindley (eds.), *To H. B. Curry: Essays in Combinatory Logic, Lambda Calculus, and Formalism*, New York: Academic Press, pp. 479-490.
- Hyland, J. M. E., and Ong, C.-H. L., 2000, "On Full Abstraction for PCF: I. Models, observables and the full abstraction problem, II. Dialogue games and innocent strategies, III. A fully abstract and universal game model", *Information and Computation*, 163: 285-408.
- Kanovich, M. I., 1992, "Horn Programming in Linear Logic is NP-Complete", in *Proceedings of the Seventh Annual Symposium on Logic in Computer Science*, Santa Cruz, IEEE Computer Society Press, pp. 200-210.
- –––, 1994, "Simulating Linear Logic with 1-Linear Logic", Technical Report 94-08, Laboratoire de Mathématiques Discrètes, University of Marseille.
- Kobayashi, N., Shimizu, T., and Yonezawa, A., 1999, "Distributed concurrent linear logic programming", *Theoretical Computer Science*, 227(1-2): 185-220.
- Lafont, Y., 2004, "Soft linear logic and polynomial time", *Theoretical Computer Science*, 318(1-2): 163-180.
- Lambek, J., 1958, "The mathematics of sentence structure", *American Mathematical Monthly*, 65: 154-169.
- Lamping, J., 1990, "An algorithm for optimal lambda-calculus reductions", *17th Annual Symposium on Principles of Programming Languages*, San Francisco, ACM Press, pp. 16-30.
- Lincoln, P., 1995, "Deciding provability of linear logic formulas", in J.-Y. Girard, Y. Lafont, and L. Regnier (eds.), *Proceedings of the Workshop on Advances in Linear Logic*, Cambridge: Cambridge University Press, pp. 197-210.
- Lincoln, P., Mitchell, J., Scedrov, A., and Shankar, N., 1992, "Decision problems for propositional linear logic", *Annals of Pure and Applied Logic*, 56: 239-311.
- Lincoln, P., and Winkler, T., 1994, "Constant-Only Multiplicative Linear Logic is NP-Complete", *Theoretical Computer Science*, 135: 155-169.
- Martin-Löf, P., 1982, "Constructive Mathematics and Computer Programming", *Sixth International Congress for Logic, Methodology, and Philosophy of Science*, Amsterdam, North-Holland, pp. 153-175.
- Miller, D., 1996, "Forum: A multiple-conclusion specification language", *Theoretical Computer Science*, 165(1): 201-232.
- –––, 2003, "Encryption as an abstract data-type: An extended abstract", in I. Cervesato (ed.), *16th Workshop on Foundations of Computer Security*, Asilomar, IEEE Computing Society, pp. 3-14.
- –––, 2004, "Overview of linear logic programming", in T. Ehrhard, J.-Y. Girard, P. Ruet, and P. Scott (eds.), *Linear Logic in Computer Science* (London Mathematical Society Lecture Note 316), Cambridge: Cambridge University Press, pp. 119-150.
- Retoré, C., 1997, "Pomset logic: a non-commutative extension of classical linear logic", *Typed Lambda Calculi and Applications* (Lecture Notes in Computer Science, vol. 1210), Springer Verlag, pp. 300-318.
- Wadler, P., 1991, "Is there a use for linear logic?", *Proceedings of the ACM/SIGPLAN Workshop on Partial Evaluation and Semantics-Based Program Manipulation*, New Haven, ACM Press, pp. 255-273.
- Yetter, D. N., 1990, "Quantales and (noncommutative) linear logic", *Journal of Symbolic Logic*, 55(1): 41-64.

## Other Internet Resources

- The linear logic bibliography (up to 1998).
- Vincent Danos and Roberto Di Cosmo. The Linear Logic Primer. Course Notes. http://www.dicosmo.org/CourseNotes/LinLog/ (1992).