Alessio Guglielmi's Research and Teaching / Deep Inference / Current Research Topics and Open Problems
On this page I list all the open and currently explored research subjects I am aware of in the area of deep inference and closely related matters. The solutions to most of these problems are instrumental in reaching the common goal of a comprehensive bureaucracy-free proof theory based on geometric methods.
In very few words, what we do is look for the best syntax and the best semantics for proofs (notice that semantics of proofs is a very different thing from semantics of formulae!). It is necessary to improve on the current state of the art in structural proof theory in order to solve important open problems like that of the identity of proofs. By employing deep inference, which yields the necessary flexibility for the task, we are making very fast progress.
When I can assess the difficulty of a topic with reasonable confidence, I rate it according to the following scale:
= simple project for an expert researcher, MSc thesis;
= complex project, suitable for a PhD thesis;
= major or very risky research project.
Please keep in mind that there are three conceptual levels to distinguish, and an associated terminology I'm trying to stick to:
I'm trying to avoid the word 'calculus' as much as possible, because it can ambiguously denote a formalism and a proof system, which are very different things for us.
A logic is considered independent of its proof systems. Very roughly, a logic can simply be considered the set of formulae which are true according to some criterion.
I also distinguish between derivation and proof: a derivation is any object in a proof system, normally establishing a conclusion from some premisses; a proof is a derivation without premisses, so it is the derivation of a tautology.
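For instance, anticipating a little of the CoS notation used below ([R, T] is disjunction, (R, T) is conjunction, t and f are the units true and false, ¬a is the negation of the atom a, written with an overline in the papers, and every rule applies inside an arbitrary context S{ }; the rules are those of system SKS, defined in Kai Brünnler's thesis cited below), the atomic identity rule ai↓, with premiss S{t} and conclusion S[a, ¬a], immediately gives a proof of the tautology a ∨ ¬a:

      t
ai↓ ---------
    [a, ¬a]

By contrast, the atomic weakening rule aw↓, with premiss S{f} and conclusion S{a}, gives a derivation, but not a proof, from the premiss a (which is [a, f] modulo the usual unit equations) to the conclusion [a, b]:

      a
aw↓ --------
    [a, b]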
I am always very happy to hear from students and colleagues and to discuss these topics. Consider also using our friendly and responsive mailing list Frogs. Please let me know of problems and conjectures you think should be listed here, and also let me know if anything on this page is obsolete or incorrect.
The following pages contain interesting open problems related to those on this page, and different takes on the same problems one finds below:
The calculus of structures (CoS) is now mature for logics with involutive negation. This means that for classical logic, for linear logic in all its classical fragments and for modal logics, presentations in CoS exist and their main proof theoretic properties are established, together with the novel properties provided by deep inference. The major, general, but probably not very difficult open problem in the CoS area is:
/ Calculus of structures without involutive negation
The task is to study the properties of deep inference when one restricts the application of certain inference rules to certain places in formulae, probably those of positive or negative occurrence in the scope of negations. More generally, one could study the case when negation is not involutive and so cannot be pushed to the atoms.
There is no reason why deep inference should not be as effective as it is in the case of involutive negation. The natural starting point should be modifying the CoS presentation of classical logic in order to get intuitionistic logic, and then proving cut elimination. There are actually at least two proof systems for intuitionistic logic in CoS, but they haven't been studied much. This might be suitable for an MSc thesis.
One can then move on to many other intuitionistic logics, try to study the problem in general and come up with a comprehensive understanding of logics with no involutive negation. This broader exercise should make for a fine PhD thesis.
The most knowledgeable person on this topic is Kai Brünnler, and the very first thing to read is his PhD thesis Deep Inference and Symmetry in Classical Proofs.
Apart from using explicit negation, one could use polarities. I suspect this could be my favourite approach, because it's closer to my relation webs (see my paper A System of Interaction and Structure), but I am biased and anyway I'm not really sure. A few years ago Philippe de Groote designed a CoS system for intuitionistic logic adopting polarities; it might be worthwhile to have a look at that, too. (24.11.2004)
A general class of possibly interesting problems is:
/ Designing term calculi for systems in the calculus of structures
It is a natural question to ask: does a given system in CoS admit a term calculus, i.e., a language of terms corresponding to proofs? The difficulty and importance of this task depend, of course, on the logic it is done for. The first person to contact regarding this topic is Stéphane Lengrand, who has designed a term calculus for CoS and for formalism A. (24.11.2004)
I will now proceed to examine more circumscribed areas inside CoS.
The best expert on the proof theory of classical logic in CoS is Kai Brünnler, and the best starting point is his thesis Deep Inference and Symmetry in Classical Proofs.
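To fix notation for what follows, here is a sketch, from memory, of some rules of SKS; each rule is written premiss / conclusion, and remember that rules apply at any depth inside a context S{ }:

ai↓ :  S{t} / S[a, ¬a]                          (atomic identity)
ai↑ :  S(a, ¬a) / S{f}                          (atomic cut)
aw↓ :  S{f} / S{a}                              (atomic weakening)
ac↓ :  S[a, a] / S{a}                           (atomic contraction)
s :    S([R, T], U) / S[(R, U), T]              (switch)
m :    S[(R, U), (T, V)] / S([R, T], [U, V])    (medial)

together with the dual rules aw↑ and ac↑ and an equational theory for associativity, commutativity and units. System KS is, roughly, the 'down', analytic, cut-free fragment. The point to notice is that identity, cut, weakening and contraction are all atomic and local, which, as far as I know, is not achievable in the sequent calculus.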
/ New normal forms
In the sequent calculus, natural deduction and proof nets, there is only one important notion of normalisation, namely cut elimination. In deep inference, there are instead many, given the greater flexibility of inference rules. Some notions of normal form seem particularly interesting; for example, one that I like a lot is described in my note Normalisation Without Cut Elimination. Instead of eliminating all cuts, only some are eliminated, and the rest provide for a nice case analysis, which leads both to a normal form and to a meaningful mathematical interpretation.
There are several directions along which such a topic can be developed, mainly in connection with (categorical) semantics of proofs and formalism A. I am not working on this topic, but I might start doing so and I'd be happy to hear from interested people, also in view of a MSc or PhD thesis. (24.11.2004)
Low complexity splitting
Splitting theorems provide a uniform and very efficient way of proving cut elimination for many logics. It is very easy to prove splitting for propositional classical logic, but I have a conjecture about tightening the bounds for the normalised proofs which has resisted several attempts at a proof. You can find a discussion of the subject in two threads of the Frogs list: 1 and 2. Apart from myself, knowledgeable persons on the subject are Stéphane Lengrand, Elaine Pimentel and Lutz Straßburger. If you have any ideas, they would make for a fine paper and possible applications. (24.11.2004)
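To give the flavour of these statements, the shape of a typical splitting theorem (roughly, the conjunction case; I am glossing over details here, so check the threads above for precise statements) is:

If [(R, T), U] is provable, then there are structures U₁ and U₂ such that there is a derivation from [U₁, U₂] to U, and both [R, U₁] and [T, U₂] are provable.

The complexity question is then how large the two subproofs and the derivation need to be with respect to the original proof of [(R, T), U]; my conjecture is about tightening exactly these bounds.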
Complexity of propositional proofs
Deep inference allows for much shorter proofs than shallow inference, in many cases providing an exponential speed-up (see my note Polynomial Size Deep-Inference Proofs Instead of Exponential Size Shallow-Inference Proofs and the slides of this talk). Paola Bruscoli and I are writing a paper establishing CoS as the best existing analytic formalism from the point of view of proof complexity, since it can polynomially simulate every other formalism and allows for possibilities which are precluded to others, while still retaining all desirable properties, like analyticity. There are many obvious questions to ask. One which should be suitable for a PhD thesis is the following: find a class of tautologies that requires exponentially growing proofs in KS (there should be one if NP ≠ coNP); an obvious candidate is the pigeonhole class of tautologies. (13.3.2007)
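For definiteness, one standard rendering of the pigeonhole principle as a family of propositional tautologies (n + 1 pigeons do not fit injectively into n holes; the indexing conventions here are mine) is:

PHP_n  =  ⋀_{0≤i≤n} ⋁_{1≤j≤n} p(i,j)  →  ⋁_{1≤j≤n} ⋁_{0≤i<i'≤n} (p(i,j) ∧ p(i',j)) ,

where p(i,j) reads 'pigeon i sits in hole j'. The question is then whether every KS proof of PHP_n must grow exponentially in n, or whether deep inference can do better.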
? Syntactic proof of Craig's interpolation normal forms
There is a normal form of propositional classical proofs which generalises both Craig's interpolation and cut elimination. Kai Brünnler proves this by semantic means in his thesis. It would be nice to have a syntactic proof and, even better, a syntactic proof which allows one to put interesting bounds on the size of the interpolant. It is not easy to assess the difficulty of this task; it may be risky for an MSc thesis, but it also might be rewarding to put a limited amount of effort into it (one can be lucky). It is likely that the necessary techniques would open new possibilities elsewhere. Maybe Alessandra Carbone's work on logical flow graphs can help (I am very fond of her book A Graphic Apology for Symmetry and Implicitness, written with Stephen Semmes). The expert on this subject is Kai. (24.11.2004)
Intermediate logics
It should be possible, and probably rather easy, to present in CoS some intermediate logics like Corsi's logic and Gödel's logic. This should be done by requiring the usual properties of locality and analyticity. Notice that no analytic system for Corsi's logic exists yet. If IL is intuitionistic logic, then Corsi's logic is
CL = IL + ((A → B) ∨ (B → A))
and Gödel's logic is
GL = CL + (∀x.(A(x) ∨ B) → (∀x.A(x) ∨ B)), where x is not free in B.
An excellent starting point should be the recent paper by Alwen Tiu, A Local System for Intuitionistic Logic. An interesting related topic is finding syntactic proofs of the interpolation theorem for these logics. (28.2.2005)
Computability logic is a logic motivated by an intuitive semantics of computation, recently conceived by Giorgi Japaridze. It's rather clear already that a good proof theory for it, i.e., proof systems with good properties, is going to require some form of deep inference.
I agree with Giorgi that a logic should mostly be conceived out of semantic needs, and I agree also that this is not really the case with linear logic (he argues about this here). Since computability logic looks very challenging for the traditional syntactic formalisms, it's interesting to see whether the more liberal attitude of deep inference can make for a successful marriage.
Giorgi made some interesting progress with his cirquent calculus, in which he presents a proof system adopting a new notion of sharing.
Making a good proof system for computability logic
The problem is to design proof systems and cut elimination procedures for the various fragments of computability logic. Some preliminary observations have been made on the Frogs mailing list in the following threads: 1, 2 and 3. Considerable progress is represented by the cirquent calculus, which should be able to adopt deep inference (or the other way around). I am definitely interested in this topic and I would like to be kept current on the developments; the expert on this matter is Giorgi Japaridze. (3.11.2005)
There is a broad, ongoing research effort about understanding languages for concurrency, especially process algebras, like CCS and the π-calculus, in a logical setting. For example, one would like to establish a close correspondence between, say, computations in CCS and derivations in some cut-free proof system, like NEL.
Proof theory can be considered a way to explore and exploit the internal symmetries of a language. By doing so for computational languages, one hopes to put to use (say, in the verification of protocols) the plasticity of proof manipulation, which is especially powerful in deep inference.
Establishing a strong connection between proofs and (concurrent) computation means also unifying their semantics. I believe that the future of semantics of proofs, as well as that of semantics of concurrent computations, lies in the use of algebraic topology methods, which will be applied starting from deductive nets. So, this thread of research fits in our general scheme of finding the best semantics of proofs.
I call this thread 'language design' because ultimately we will create new languages: the odds that the old ones perfectly fit proof theory are rather slim! But don't worry, good proof theory does not create monsters; actually, there are chances of smoothing out some of the rough edges of languages designed without a clear understanding of symmetry.
Equivalence of BV and BVL
System BV (introduced by myself in A System of Interaction and Structure) was motivated in part by the desire to capture sequential composition in CCS. In her paper A Purely Logical Account of Sequentiality in Proof Search, Paola Bruscoli succeeds in this task and puts forward an interesting conjecture. In order to reach her goal, she introduces a very clean variation of BV, which she calls system BVL, and which behaves lazily regarding the use of identity rules. This is done to better match CCS computations; however, system BVL is pretty natural and good looking, also from the point of view of proof theory.
BVL is equivalent to BV on a restriction of structures, which is enough for capturing CCS. But is it equivalent on all structures? It looks like it is. So, the problem is: prove or disprove Paola's conjecture. It's very likely that the solution to this problem, whether positive or negative, will be applicable to many other cases, for example to the problem Capturing the π-calculus below. (5.12.2004)
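To get a feel for BV, here is the standard example of a provable structure, two sequential agents communicating twice; ⟨R; T⟩ is seq, [R, T] is par, ∘ is the unit, and the rules ai↓ and q↓ are those of A System of Interaction and Structure (this is my reconstruction, modulo the unit equations, so check the paper for the official rule shapes):

          ∘
ai↓ -----------
      [b, ¬b]
ai↓ --------------------
    ⟨[a, ¬a]; [b, ¬b]⟩
q↓  --------------------
    [⟨a; b⟩, ⟨¬a; ¬b⟩]

Structures of this kind are also at the heart of Alwen Tiu's proof, cited below, that no restriction of BV to bounded-depth inference can be complete.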
/ Capturing the π-calculus
From our perspective, there are some challenges in the π-calculus, and one of the biggest is probably dealing with the scoping mechanisms the language provides. An explanation of the problem, together with possible directions of research, is in a Frogs list thread; the first message is by Alwen Tiu, who's currently working on this. Another knowledgeable person on the subject is Paola Bruscoli. (5.12.2004)
An interesting possibility exists of unifying, at the level of languages, the two fields of planning and concurrency. Planning, or reasoning about actions, is an area of artificial intelligence concerned with scheduling actions to solve specified tasks. Clearly, this is an inherently concurrent activity. Many languages for planning are very close to logic, for example those in the neighbourhood of logic programming. So, an investigation aiming at providing a common linguistic setting can be both interesting and fruitful.
A good reason for doing so is that, again, semantics of concurrent computations could be applied in the case of planning. Apart from its intrinsic interest, I also see a political interest in this research. Artificial intelligence seems to be resisting all formal methods which require more technical expertise than that of high school students. It looks to me as if AI is trying to make progress by connecting philosophical intuition directly to engineering; this (of course!!) cannot work: a solid scientific layer in the middle is needed. So, we'd like, modestly, to contribute to this scientific layer.
Planning and concurrency
Deep inference made possible several advances (by Paola Bruscoli, Lutz Straßburger and myself) in designing proof systems (BV, BVL, NEL) which extend linear logic by a sequential operator. It is now possible to make the connections between planning and concurrency, because we obtained the union of linguistic features necessary for both fields. (Thanks to Alwen Tiu's result in A System of Interaction and Structure II: The Need for Deep Inference, we know that sequentiality is only achievable with deep inference.) Ozan Kahramanogullari is investigating languages at the intersection of planning and concurrency; see his paper Towards Planning as Concurrency. (5.12.2004)
It's great to have the freedom and expressiveness of CoS, but for proof search more freedom means a larger search space, which we need to restrict if we aim at implementing our proof systems. Splitting theorems are the key to reducing nondeterminism, because they allow for a trade-off between the length of proofs and the width of the search space, and this ratio can be varied at will. We are now in the same situation as the sequent calculus was before Dale Miller introduced the notion of uniform provability, which is a way of matching provability and the kind of goal-drivenness necessary in implementations.
/ Taming nondeterminism
What would be a good notion of goal-driven proof in the more general setting of CoS? This question can certainly be answered positively and in many ways, allowing for several MSc and PhD thesis possibilities. The notions to be developed might or might not depend on particular logics, and by modulating this aspect one can make for a more focused or a broader project. I think it is fundamental to know Dale Miller's work in depth in order to be successful at this task. Recently, Ozan Kahramanogullari has made some decisive advances in this direction, in his paper Reducing Nondeterminism in the Calculus of Structures. I find this topic most interesting and I'm very happy to collaborate on it. (26.8.2006)
There is still no real name for what I call formalism A. This is the first in a two-step movement for removing all the bureaucracy induced by permutation of rules in CoS and in all other formalisms. The movement should also lead us to deductive nets (see below). Formalism A doesn't yield the final victory over bureaucracy, but it is a very convenient and straightforward formalism, arguably of independent interest. It should allow us to represent and conveniently study a very vast array of logics.
Designing and studying formalism A
A tentative, natural definition for formalism A is in my note Formalism A. The task is to design it in such a way that all properties of CoS are retained, but a certain form of bureaucracy due to permutability of rules is completely eliminated. Kai Brünnler, Stéphane Lengrand, Michel Parigot and I will do the initial development of formalism A, also along the lines shown in this Frogs list thread. Suggestions are in Kai and Stéphane's Getting Formalisms A and B by Proof-Terms and Typing Systems and in On Two Forms of Bureaucracy in Derivations. (6.10.2007)
One of the major goals of our research is obtaining deductive nets. These objects should be derivations without bureaucracy, and such that the mathematical argument the derivation stands for is immediately readable from the corresponding deductive net.
Right now, all formalisms for proof systems suffer from bureaucracy: for example, one can always come up with two proofs of the same sentence that are morally the same but differ in inessential details, such as the permutation of two inference rules upon two independent subformulae.
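A minimal concrete example, in the SKS-style notation used earlier (recall that aw↓ turns S{f} into S{a}): the two derivations below differ only in the order of two independent instances of atomic weakening, and are arguably 'the same' derivation of [a, b] from [f, f]:

     [f, f]                  [f, f]
aw↓ --------            aw↓ --------
     [a, f]                  [f, b]
aw↓ --------            aw↓ --------
     [a, b]                  [a, b]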
In proof nets (which come in several variations over a notion that Jean-Yves Girard introduced), there is no bureaucracy. However, proof nets are not deductive, because it's very difficult, by simply looking at a proof net, to know what the argument of the proof is, and even whether the net stands for a valid proof (technically, in the sense of proof complexity, proof nets are not a proof system). The correctness of the net must be checked by a correctness criterion, which in general is a rather complex algorithm, or perhaps a simple but computationally expensive one.
So, the chase is open for a formalism that is both bureaucracy-free and deductive. Deductive nets should be both bureaucracy-free derivations of some proof system and graphical objects, such that they can be manipulated and studied by geometric methods.
The first natural candidate for a logic for which to accomplish this is, of course, classical logic. Its deductive nets would be the natural playground for solving the long-standing problem of identity of proofs, which asks for non-trivial criteria for comparing proofs. It is embarrassing for proof theory that this absolutely basic question still hasn't been answered!
Deep inference has two important assets: it allows for inference rules of very fine granularity and of extreme regularity. Both aspects are key to our ability to manipulate graphs, which is what deductive nets ultimately will be.
My current working hypothesis is the following. What I call formalism B is the formalism in which to design bureaucracy-free proof systems and in which to study the properties that deductive nets should possess. A closely related but different problem is finding deductive nets themselves; they should be suitable geometric representations of derivations in formalism B.
There are currently several slightly different proposed definitions for formalism B; however, I know the most important property they must possess: inference rules apply to derivations (as opposed to formulae/structures). What I know about B is in the note Formalism B. There is a discussion about bureaucracy and deductive nets in this Frogs list thread.
Designing and studying formalism B
A tentative understanding of formalism B is in my note Formalism B. The task is to design it in such a way that all properties of CoS and formalism A are retained, but a form of bureaucracy due to permutability of rules is completely eliminated. This will be a further, and definitive, improvement over formalism A and CoS in the handling of bureaucracy. Kai Brünnler, Stéphane Lengrand, Michel Parigot and I will do the initial development of formalism B, after gaining experience with formalism A. (27.12.2004)
Discovering deductive nets
This highly risky and speculative task is about defining the concept of deductive net along the lines explained above and in this Frogs list thread. I believe that its successful achievement depends on completing the design of formalism B and on drawing from experience with formalism A and with proof nets of the kind François Lamarche and Lutz Straßburger are studying. I also believe that the experience we will gain from subatomic proof theory will be essential. Finally, this problem should be tackled together with the next one, on proof identity. (27.12.2004)
The most important goal of our research is giving satisfactory, non-trivial answers to the problem of identity of proofs or, more generally, identity of derivations. This objective is important per se, and also because such broad, open objectives as this one typically spawn several lines of research just for making the tools necessary to solve the problem. This is what is happening to us, and I think it will continue to happen for a long time.
Defining non-trivial notions of identity of derivations
Notions of proof identity should be associated to canonical, bureaucracy-free forms of derivations. No currently available formalism allows for this, with the exception of certain kinds of proof nets, which, however, are not deductive. I expect many such notions to become available as soon as formalism B is defined (for some of them formalism A will probably suffice). Then the fun will be to design deductive nets such that the canonical forms associated to identical proofs will be reachable and understandable by geometric means. (27.12.2004)
In the three problems mentioned above I was careful not to ask for the formalisms to be defined for a particular logic. This is because I believe that the problem of addressing bureaucracy and identity of derivations is essentially formalism-dependent.
It seems to me that many people, especially coming from Girard's school, believe that these are problems (at least in part) dependent on the logic. I believe that this is just another illusion of the sequent calculus. These are problems of the formalism!
Anyway, if a logic is to be preferred for at least giving partial answers to the problems above, then this must clearly be classical logic (and not linear logic, say).
I recently became aware of a very interesting, independent characterisation of bureaucracy based on n-dimensional categories. Here is a description:
/ Normalisation as 4-dimensional rewriting
In local calculi such as SKS or SLLS, derivations can be equipped with two more categorical compositions. These come in addition to the 'vertical' one giving a derivation A → C from derivations A → B and B → C. When only this composition is considered, derivations are seen as paths in a graph, or 1-dimensional arrows in a 1-category: this is the CoS case. If one more composition is taken into account, this generates identifications between derivations, now seen as 2-dimensional cells in a 2-category: this is linked to formalism A. If all three compositions are considered, a new class of identifications is generated, yielding a 3-dimensional interpretation of derivations: this is linked to formalism B. Finally, transformations of derivations form a 4-dimensional rewriting calculus, which comes with the toolbox of higher-dimensional rewriting. This investigation is carried out by Yves Guiraud and Lutz Straßburger; check Yves's paper The Three Dimensions of Proofs. (25.3.2005)
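For instance, the identifications generated by a second, 'horizontal' composition should be instances of the familiar interchange law of 2-categories: writing ∘ for vertical and ⋆ for horizontal composition of derivations (my notation),

(ψ ∘ φ) ⋆ (ω ∘ χ)  =  (ψ ⋆ ω) ∘ (φ ⋆ χ) ,

so that it is immaterial whether one first stacks the four pieces vertically and then juxtaposes the results, or the other way around; this is precisely the kind of rule permutation that counts as bureaucracy.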
There is currently much activity going on in this area, especially about classical logic. Two questions are intimately related and motivate this research: what is the essence of a proof, and how can we eliminate bureaucracy from a proof?
Suppose a mathematician produces a proof of a theorem: every known formal system admits several different representations of this proof, which differ in inessential details. Typically, the differences are in the order in which the inference rules are applied. After Girard, people call this phenomenon bureaucracy. In another section of this page I argue for deductive nets as a formalism which will get rid of bureaucracy. Finding deductive nets is no easy task, and there are two possible directions of attack: getting there from syntax and from semantics. Deep inference helps in finding our way on both fronts.
Semantics of proofs should not be confused with semantics of formulae. We all know that formulae can be given truth values by way of Tarskian semantics, for classical logic, or frame semantics, for modal and intuitionistic logics, or other algebraic structures for other logics. The point of semantics of proofs is to give semantics to proofs themselves. This is normally done by building a category whose objects are formulae and whose arrows are derivations. It turns out that a very good way of capturing the essence of proofs is to employ categories where the arrows are proof nets, i.e., derivations stripped of all bureaucracy. Proof nets also provide very concrete, geometric objects, amenable to manipulation by distributed algorithms. They provide insight into many aspects of theoretical computer science; for example, they help in finding optimal computational models for functional programming languages.
Much of the inspiration in this area comes from considering proofs as functional programs, in the so-called Curry-Howard correspondence. The meaning of a proof is normally an invariant of the cut elimination procedure, which is a way of computing the program associated to the proof. For too long, people believed that classical logic could not be given any meaningful semantics in this sense. Our point of view is that this was a consequence of the sequent calculus's misleading characteristics.
In fact, when proofs are looked at under the microscope of deep inference, many assumptions turn out to be false, because deep inference rules have a much finer granularity than shallow inference ones; many roadblocks can thus be removed, and one can see new, simpler proof nets, which in turn open new perspectives for semantics.
Experts in this area are François Lamarche and Lutz Straßburger. Richard McKinley recently started working on it by employing deep inference. Lutz keeps a web page about this subject, where more open problems are listed.
Currently, the area is in a state of rapid movement. Probably, concentrating on classical logic is what makes most sense, although proof nets for linear logic are certainly still interesting. I think the two most important 'open problems' are at the same time technical and sociopolitical, meaning that we should expect several proposals in addition to the existing ones, and at some point in the future one will prevail over the others, also based on intangible properties like beauty and luck.
Finding the best proof nets for classical propositional logic
There are now several proposals for proof nets for classical propositional logic. Which one is the best? What are the criteria we can use to decide? Is there anything better according to these criteria? (19.1.2005)
Finding the best semantics of proofs for classical propositional logic
There are now several proposals for semantics of proofs for classical propositional logic. Which one is the best? What are the criteria we can use to decide? Is there anything better according to these criteria? (19.1.2005)
The display calculus of Nuel Belnap (Journal of Philosophical Logic, 11:375-417, 1982) is another, very general proof theoretical formalism, which employs a mechanism for making inferences anywhere deep inside formulae.
It enjoys the important property that any system in this formalism meeting eight simple conditions is guaranteed to enjoy cut admissibility. Jeremy Dawson and Rajeev Goré have formalised this general cut elimination proof in Isabelle, and have extended it to a proof of strong normalisation for all systems in the display calculus formalism (see Formalised Cut Admissibility for Display Logic). All of this suggests the following project:
General cut elimination for CoS inspired by the display calculus
Answer the following natural questions:
If you are interested in any of these questions, then feel free to contact Rajeev Goré for further details or suggestions. (24.11.2004)
I know a bit of display calculus, but am no expert. My personal take on this project is that it will be successful, i.e., it will be possible to use many of the ideas in display calculus for a very general cut elimination proof. By using CoS it will also be possible to simplify the technical treatment.
However, since I believe that the CoS version of deep inference is stronger than the one in the display calculus, I think it is impossible to find a general cut elimination proof for every CoS system (see the project below). The success of the display calculus stems from its strict control of the form of the meta-level, which in CoS is instead completely liberal. In fact, I believe the following project can be done:
Find a logic which can be expressed in CoS and not in the display calculus
In the display calculus, the form of the meta-level is given, and does not depend on the object-level connectives. This is not the case in CoS, where the object-level and the meta-level coincide. It should then be possible to design a CoS system which requires a stronger meta-level than the display calculus provides. Such an argument can probably be made along the lines followed by Alwen Tiu in his remarkable A System of Interaction and Structure II: The Need for Deep Inference. Please get in touch with me for further comments on this issue. (24.11.2004)
Deep inference is more general than hypersequents, and goes farther in the same direction, which is coping with the limitations of the sequent calculus meta-level. I am not an expert on hypersequents, but Kai Brünnler, Rajeev Goré, Robert Hein, Charles Stewart and Phiniki Stouppa are.
There is a fantastic possibility of studying most proof theory for most logics once and for all, by examining a single proof system entirely generated by a single rule. The trick is considering atoms as logical operations over units. You can find the basic observations in my (very rough and terse) notes Subatomic Logic and Some News on Subatomic Logic. The successful development of this theory depends crucially on our ability to prove splitting theorems. This is weird stuff, stay away from it!
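At the risk of oversimplifying (this is my informal reading of the notes, so treat the details as assumptions), the single rule has a medial-like shape, where α and β range over all connectives, atoms included:

      S((A α B) β (C α D))
    ------------------------
      S((A β C) α (B β D))

Instantiating α to ∧ and β to ∨ yields medial, and instantiating one of them to an atom, read as a logical operation over the units, yields rules of the identity and cut kind.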
Proof theory by one rule
The task is to develop subatomic proof theory by showing that classical and linear logic are generated by a single rule in a more general logic which conservatively extends them. This is a crazy project, unsuitable for any kind of thesis; I'm working on it whenever I can, and so far there is no material available for public consumption. One day it will work and it will be beautiful. (24.11.2004)
This is something absolutely fantastic, which I basically don't understand at all: it looks like system BV is able to describe quantum causal evolution. This physical phenomenon causes several problems for other logics that people have considered for the task, like linear logic. The basic idea is to represent quantum states as propositions and interactions as axioms. Thus, if a and b are states of two systems coming together and their interaction produces c, d and e, we would write
a, b ⊢ c, d, e.
The question is: what sort of logic correctly captures entanglement and causality? It seems that BV is the only logic that does it properly. The secret of its success is apparently its ability to deal with a self-dual connective, intermediate between (linear) conjunction and disjunction.
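'Intermediate' can be made precise proof-theoretically: in BV, for all structures R and T, there are derivations from the copar (R, T) to the seq ⟨R; T⟩ and from ⟨R; T⟩ to the par [R, T], obtained (modulo the unit equations) by single instances of the rules q↑ and q↓, respectively:

     (R, T)
q↑ ---------
     ⟨R; T⟩
q↓ ---------
     [R, T]

So seq really does sit between conjunction and disjunction, and it is self-dual: negating ⟨R; T⟩ yields ⟨¬R; ¬T⟩, with the order of the arguments preserved.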
/ ? Describing causal quantum evolution
The task is to describe causal quantum evolution in CoS, by using system BV or a suitable variant thereof, and to formally develop the theory. The expert on this is Prakash Panangaden, whose note Using BV to Describe Causal Quantum Evolution should be a useful starting point; Lutz Straßburger has worked on an alternative solution, still using BV. Lutz, Prakash and I are working on a paper, together with Richard Blute. (25.3.2005)
20.11.2007 Alessio Guglielmi