Sequent calculus fun with Logitext

(This post is overdue; I’ve been more than usually busy at work on a new project, and have not had the energy to do anything interesting when I get home. Well, I saw this a while ago and today out of the blue remembered something interesting in relation to it. So I’ll start by posting about Logitext since it’s a pretty nicely made tool, and because the sequent calculus always deserves a bit of publicity.)

On Reddit some time ago, I saw Logitext. It’s a web-based sequent calculus explorer, something like a short textbook on the subject with one special feature: the examples in it change in response to your input.

Like (I suppose) many computer scientists, I used to wonder: proving stuff is so hard and seems to require either miraculous leaps of intuition, or hopelessly intractable brute force searches over the set of proof derivations. How then am I supposed to write a program to do it?

The sequent calculus provides one answer: represent the proof as a tree, and define a few rules for growing branches until the tree is complete — in other words, until all the leaves of that tree are axioms. Start with the proposition to be proved as the unadorned root of a tree. Then, at each step, there are a finite number of rules that can be applied to grow the tree. (Wikipedia and the Logitext tutorial explain it better.)

A program that proves even simple theorems like p -> (q -> p) can be a hard thing to implement naively. Having parsed the proposition into a term, where do you start proving it? In the sequent calculus, you parse it into the sequent |- p -> (q -> p), and then have a finite number of options for what to do next. One of them is the right introduction rule for ->: move the p into the antecedent, creating a branch p |- q -> p. Applying it again grows the branch to p, q |- p. Finally, a structural rule (weakening) lets us remove the q and grow a leaf: the axiom p |- p.

This is represented as a tree (a trivial one with no branching, in this case):

\cfrac{\cfrac{\cfrac{\cfrac{}{p \vdash p} \quad (I)}  {p, q \vdash p} \quad (WL)}  {p \vdash q \to p} \quad (\to R)}  {\vdash p \to (q \to p)} \quad (\to R)
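
To give a flavour of how mechanical the search is, here is a rough sketch of backward proof search in Prolog (a sketch of my own, not code from Logitext or from the assignment mentioned below). It covers only a single-conclusion implicational fragment, with imp(A, B) as an ad hoc encoding of A -> B, and it rolls the axiom and left weakening into a single membership check:

% prove(Gamma, A): the sequent Gamma |- A is derivable using just
% the axiom (with weakening folded in) and the right rule for ->.
% Formulas are Prolog atoms, or imp(A, B) for A -> B.

% (I) + (WL): Gamma |- A whenever A already appears in Gamma.
prove(Gamma, A) :-
    member(A, Gamma).

% (->R): to prove Gamma |- A -> B, grow the branch Gamma, A |- B.
prove(Gamma, imp(A, B)) :-
    prove([A | Gamma], B).

The query ?- prove([], imp(p, imp(q, p))). succeeds by exactly the steps above: two applications of (->R), then the membership check standing in for (WL) and (I).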

One of the great properties of the LK system in the sequent calculus is that it has the subformula property: every formula needed to complete a (cut-free) proof is a subformula of the initial proposition. So a program only needs to apply rules using those subformulas, and (with a little care over the structural rules) can be sure that it will find a proof if one exists.
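
As a small illustration of that (again my own sketch, reusing the imp/2 encoding from the fragment above), the candidate formulas can be enumerated up front:

% subformulas(F, Subs): Subs lists the subformulas of F (with repeats).
% Backward search in cut-free propositional LK only ever needs to
% introduce formulas drawn from this finite set.
subformulas(imp(A, B), [imp(A, B) | Subs]) :-
    !,
    subformulas(A, SubsA),
    subformulas(B, SubsB),
    append(SubsA, SubsB, Subs).
subformulas(A, [A]).

For example, ?- subformulas(imp(p, imp(q, p)), S). gives S = [imp(p, imp(q, p)), p, imp(q, p), q, p], and every sequent in the proof tree above is built from formulas in that list.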

I’ve forgotten a lot since I last studied it in… 2006, but some memories are coming back. We had an assignment to implement a theorem prover in Prolog, using the LJ and LK systems. It was remarkably easy, except for one feature: converting a proof, easily found with the sequent calculus, into a natural deduction proof. I have to confess I’ve not figured that part out yet…

Footnote. I played with Logitext earlier this year. But today — due to, perhaps, a cosmic ray — it suddenly occurred to me why it looked eerily familiar. After all, my own Lambda Calculus Explorer was the same kind of thing: terms were presented as part of a web page, and clicking on parts of them created new terms. Tragically (or perhaps as a serendipitous reprieve for blog readers) I no longer have the PHP and JavaScript code for it. But I did salvage these slides out of Gmail:

Few people outside my supervisor and circle of reviewers can have known of the Lambda Calculus Explorer. I thought it was moderately original at the time, but I can only wonder how many other web-based formal system explorers are out there.


2 Responses to Sequent calculus fun with Logitext

  1. Neil Leslie says:

    Extracting an NJ proof from an LJ proof is pretty straightforward: NJ intro rules correspond clearly to LJ intro-right rules, NJ elimination rules to LJ intro-left rules.

    LK is a bit different though. Consider the following LK derivation:

    A => A
    ---------
    => A, -A
    ---------
    => A v -A

    It is not clear how to get a natural deduction proof out of this: you (probably) need to make use of a rule that deals with some instances of connectives to overcome the use of the multiple conclusions. But it is not nearly so clean as the LJ/NJ relationship.

    You can easily construct a sequent calculus for classical propositional logic with (at most) one formula on the right. You will lose the subformula property, but you’ll get something just as good (probably sub-formulas of the formulas to the left of the sequent arrow, and sub-formulas of the double-negation of the formula to the right of the sequent arrow). And you will be able to read the NK proof off of the sequent derivation.

    • ejrh says:

      As you can imagine, I’ve forgotten at least some of what I did in one paper in 2006, not to mention lost all my notes and all my work. I don’t recall whether NJ proofs were only required for LJ proofs, though it seems likely. I know I was having one of those tricky programming problems where behaviour X — something about how (un)discharged assumptions are generated as the LJ tree is grown — doesn’t occur as often as it should, and a hacky code change will fix that but make behaviour X happen when it shouldn’t. Of course, as a functional programmer you may be mystified by my use of the word “behaviour”, and even by my occasional descriptions of a program using temporal concepts like “occur”, “happen”, “generated”, “often”, “to grow” or, in fact, almost any verb, since most verbs involve some kind of mutation.

      All that notwithstanding, I think there was a cunning plan to use the antecedent as a place to store the (un)discharged assumptions on the way up the tree.

      Anyway, thanks for the response. I’ll need to re-research a bit more before I can play with it though. You should take a look at Logitext in the meantime.

      I cheerfully remind you that you still have my half-remembered ramblings on Epigram to look forward to.
