Alessio Guglielmi's Research and Teaching

Electric lamps were not invented by improving candles

Perhaps you were looking for the Deep Inference pages?

Contents

  1. Overview of Research
    1. Normalisation and Proof Complexity
    2. Identity of Proofs and Deductive Nets
    3. Summary
  2. Politics
  3. Recent Papers
  4. Notes
  5. Recent Talks
  6. Research Students
  7. Recent Grants
  8. Mailing Lists
  9. Teaching

Address

Department of Computer Science
University of Bath
Bath BA2 7AY
United Kingdom
fax: +44 (1225) 383 493

A.GuglielmiATBath.Ac.UK

1 Overview of Research

I do proof theory from a theoretical computer science perspective. I am mainly interested in deep inference, a new methodology in proof theory that my collaborators and I have introduced.
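
To give a first idea of what 'deep' means, here is a minimal sketch in LaTeX notation (an illustration of the standard calculus-of-structures scheme, not a formal definition): inference rules apply anywhere inside a formula, not only at its main connective.

  % A deep-inference rule \rho rewrites a subformula A into B
  % inside an arbitrary formula context S{ }:
  \[
    \frac{S\{A\}}{S\{B\}} \, \rho
  \]

By contrast, sequent-calculus rules can only decompose a formula at its root.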

Theoretical computer science mainly helps proof theory by bringing to it quantitative and semantic ideas, and proof theory mainly helps computer science by supporting the design of programming languages and the verification of software correctness.

The heart of what I do in proof theory is normalisation. This is still a mysterious combinatorial phenomenon, intimately connected to fundamental questions about how much information can be compressed. Proof-theoretic research takes place in the purest setting where these problems can be defined, and, as such, it is a fundamental study that has had, and will continue to have, far-reaching applications in computer science.

Two problems motivated the development of this research: understanding normalisation and the identity of proofs.

1.1 Normalisation and Proof Complexity

One research objective is to understand normalisation in propositional logics (first order and, at a later time, second order). Deep inference reveals finer and finer structure in inference rules. With the methods developed for deep inference, we can see that cut elimination is only a special instance of a much broader normalisation phenomenon. Our techniques unveil new properties of derivations that cannot be observed in the sequent calculus. One of the most interesting discoveries is that deep-inference systems exhibit an unprecedented regularity in the shape of the inference rules of very different logics. In what I call 'subatomic proof theory', it seems possible to describe most of the known logics by just one simple inference rule.
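
As an illustration, consider switch and medial, the two characteristic rules of system SKS, the standard deep-inference system for classical propositional logic (a sketch in LaTeX notation; roughly speaking, subatomic proof theory obtains both as instances of a single medial-like rule scheme):

  % Switch (s) and medial (m) of system SKS; both apply inside
  % an arbitrary context S{ }, and both share the 'rearrangement'
  % shape that subatomic proof theory unifies.
  \[
    \frac{S\{A \wedge (B \vee C)\}}{S\{(A \wedge B) \vee C\}} \, \mathsf{s}
    \qquad
    \frac{S\{(A \wedge B) \vee (C \wedge D)\}}{S\{(A \vee C) \wedge (B \vee D)\}} \, \mathsf{m}
  \]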

I'm also interested in proof complexity: since deep inference generalises the sequent calculus, proofs in deep inference can only be shorter than in the sequent calculus. Indeed, on some classes of tautologies, such as Statman's, an analytic deep-inference system achieves an exponential speed-up over all analytic sequent-calculus systems. We have only recently started to study the proof complexity of deep inference. Deep inference is a powerful methodology in this respect, on a par with Frege-Hilbert systems (but with better properties). We wonder whether deep inference could give good insights into the many open questions of proof complexity.

We are obtaining a broad theory of normalisation and, more generally, of the design of deductive systems, based on simpler and more basic principles than traditional proof theory offers.

1.2 Identity of Proofs and Deductive Nets

Another research objective is to come up with useful, interesting notions of identity of proofs. It is embarrassing for proof theory that the natural question 'when are two proofs to be considered identical?' lacks good answers. (This question is tightly connected to the equally poorly understood problem of deciding when two algorithms are the same.) One reason for our embarrassment is that the available notion of normalisation, viz. cut elimination, is inadequate to decide the question; I believe the problem is that this is the only (reasonable) notion of normalisation available in the sequent calculus and natural deduction. Thanks to its finer granularity, deep inference offers several natural new notions of normalisation, thereby improving our ability to study new concepts of identity.

The calculus of structures is now a fully developed deep-inference formalism, but I don't believe that it is the natural setting in which to study the proof-identity problem: there is still too much bureaucracy in it. I believe that deductive nets will be the right setting in which to work. Proof nets do eliminate much of the bureaucracy of calculi like the sequent calculus or the calculus of structures. But which proof nets? Linear-logic proof nets are certainly inadequate, because of the poor way they deal with 'sharing', and because they are not deductive, in the sense that they do not form a proof system. We need a much more refined notion, suitable also for classical logic.

We have observed that deep inference yields some interesting notions that point towards deductive nets. In fact, deep inference allows us to design deductive systems whose rules are all local, i.e., their computational complexity is constant (this is provably impossible in traditional proof theory). This property lends itself naturally to a geometric understanding of proofs. In other words, we can now see a geometric structure lurking behind the syntax, and we can control this structure thanks to our local inference rules. We are already able to use part of this geometric structure to control normalisation, by means of what we call atomic flows. I think that in a few years we will be able to exploit the full potential of the geometry behind proofs in order to obtain a satisfactory, topological answer to the problem of the identity of proofs.
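
To make locality concrete, here is a standard example from system SKS, sketched in LaTeX notation: contraction can be restricted to its atomic form ac-down, which duplicates a single atom a and can therefore be checked in constant time, whereas the generic contraction c-down of traditional formalisms duplicates an arbitrarily large formula A.

  % Atomic contraction (local: duplicates one atom) versus
  % generic contraction (non-local: duplicates a whole formula).
  \[
    \frac{S\{a \vee a\}}{S\{a\}} \, \mathsf{ac}{\downarrow}
    \qquad\text{versus}\qquad
    \frac{S\{A \vee A\}}{S\{A\}} \, \mathsf{c}{\downarrow}
  \]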

1.3 Summary

Deep inference reveals regularities that traditional formalisms cannot detect, and it offers new proof-theoretic properties of special interest for normalisation. My colleagues and I are exploiting these regularities and properties in a unified theory of several logics. This will yield deductive nets, for which the problem of identity of proofs will be solved in a satisfactory way.

I have three main reasons to work in this area: 1) it is beautiful: this research involves working with elegant mathematical objects; 2) it is interesting: it addresses important, challenging and difficult problems in proof theory; 3) it is useful for designing computer languages: in fact, my research started from questions raised by computer languages for concurrency.

I believe that mathematics is mostly about finding the right definitions, those that affect the way we see things.

The following web page contains more details about my current research: Deep Inference (FAQ, papers, notes, etc.).

2 Politics

This is something I feel strongly about:

Knowledge cannot be measured. As soon as measuring knowledge is attempted, researchers who are not self-confident enough to resist try to improve their score (number of papers, h-index, grant money, etc.), and the result is uninspired, incremental research of little value. Unfortunately, most politicians and all managers come from a business culture in which everything, including knowledge, is thought to be measurable.

Academic freedom is declining. As a result of its success in generating technology and economic power, science is now treated as a commodity by governments, which predominantly finance research with demonstrable short-term impact on the economy. As a consequence, research is now dominated by a class of technology-oriented researchers whose main drive is not curiosity but career progress and the commercial exploitation of research. These people are subservient to governments and neither understand nor support research with revolutionary potential. Academics have historically been a strong force for change, but they are now sadly becoming another branch of the establishment. What is even sadder is that most younger researchers do not even realise this.

Researchers should resist the business culture because, when applied to science, it is morally wrong and it stifles progress. This paper by Smaldino and McElreath (published by the Royal Society) shows how a culture of poor research methods is created by current incentives, and illustrates all of the above points: The Natural Selection of Bad Science.

I have some more specific opinions on refereeing and conferences here.

3 Recent Papers

4 Notes

5 Recent Talks

6 Research Students

7 Recent Grants

8 Mailing Lists

I help run a mailing list devoted to proof theory:

I also manage a mailing list specifically devoted to structural proof theory and deep inference:

Groucho is here.

9 Teaching

In my teaching, I insist on the practical motivations behind science. One of my deepest convictions is that being grounded in practice is an essential ingredient of interesting research as well. I tend to build my courses around 'important questions': I ask students to think about natural questions and problems, and to explore different paths and possible lines of attack, before showing them the solution. I privilege concrete knowledge over abstract knowledge: I prefer to insist on a few core concepts and on the ability to solve (perhaps conceptual) exercises rather than to show one more theorem. I also try to convey the idea that abstraction might become necessary in order to be effective at solving practical problems. This worked for me in the conception of deep inference, where the desire to manage certain concrete problems of concurrent languages led me to reformulate the very foundations of proof theory. In summary, I prefer to transmit basic and general methods whenever possible. I am sure that a solid education in the fundamentals is also useful for the majority of students, who do not intend to pursue a research career: higher education should provide mental flexibility, not just notions.

In the past, I taught, among others: Formal Logic and Semantics; Logic and Its Applications; Computability and Decidability; Proof Theory and Deep Inference; Complexity Theory; Logic; Deep Inference and the Calculus of Structures; Sequent Calculus and Abstract Logic Programming; Deduction Systems; Networking and Advanced Programming Principles.

15.5.2023 Alessio Guglielmi