I sent the following letter to HW to clarify my response to a question he asked me in Vienna about lengths of proofs. I’m posting it here in case others are interested.
I want to expand on my answer to your question about how much longer proofs are in Coq than on paper, and whether there is an exponential blow-up. I said that there appears to be only a constant multiplicative factor increase in size. This size refers to the size of the input file provided to Coq. This input file is written in a high-level language that is interpreted to produce a λ-expression, the proof term. This interpretation step is subject to an exponential blow-up. For example, the user could issue the command “the current goal is a tautology”, and this could create a large λ-expression representing the proof of the tautology. In fact, the blow-up could be more than exponential, because the high-level commands can run arbitrary ML programs that generate arbitrarily large λ-expressions.
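To make the gap between the input file and the underlying λ-expression concrete, here is a small sketch. The lemma and its name are made up for illustration, but tauto and Print are standard Coq commands: a single tactic invocation in the script elaborates into a proof term that can be inspected afterwards, and for larger tautologies that term grows much faster than the script does.

```coq
(* A one-line tactic call in the input file... *)
Lemma taut_example : forall P Q : Prop, P /\ Q -> Q /\ P.
Proof.
  tauto.  (* a single command in the high-level input language *)
Qed.

(* ...but the λ-expression it elaborates into can be much larger.
   [Print] displays the proof term that [tauto] constructed. *)
Print taut_example.
```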
But in practice these problems are handled differently, because Coq (and most dependent type theories) supports so-called proofs by reflection. Coq’s term language includes functions, so many decision procedures can be implemented directly as Coq λ-expressions. Then, because the decision procedure lives inside the system, it can be proved sound within Coq itself. Thus the λ-expression (proof) for a problem solved by the decision procedure consists of the statement of the problem + the implementation of the decision procedure + the proof of soundness of the decision procedure. The last two pieces are of constant size, so the λ-expression (proof) is on the order of the size of the problem statement. During type checking (proof verification) the decision procedure is evaluated; the proof term is well typed only if the procedure returns true, and by the soundness theorem the statement then holds.
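Here is a minimal sketch of what such a reflective proof looks like, using evenness of a number as a toy decision problem. The function evenb and the lemma names are my own illustrative choices, not standard library names; the point is only to show the three pieces named above and where the evaluation happens.

```coq
Require Import Arith Lia.

(* The decision procedure: an ordinary Coq function returning a boolean. *)
Fixpoint evenb (n : nat) : bool :=
  match n with
  | 0 => true
  | 1 => false
  | S (S n') => evenb n'
  end.

(* Soundness of the decision procedure, proved once and for all.
   This is one of the two constant-size pieces of every reflective
   proof that uses [evenb]. *)
Lemma evenb_sound_aux :
  forall n, (evenb n = true -> exists k, n = 2 * k)
         /\ (evenb (S n) = true -> exists k, S n = 2 * k).
Proof.
  induction n as [|n [IH1 IH2]].
  - split.
    + intros _. exists 0. reflexivity.
    + intro H. simpl in H. discriminate H.
  - split.
    + exact IH2.
    + intro H. destruct (IH1 H) as [k Hk]. exists (S k). lia.
Qed.

Lemma evenb_sound : forall n, evenb n = true -> exists k, n = 2 * k.
Proof. intro n. exact (proj1 (evenb_sound_aux n)). Qed.

(* The reflective proof: statement + decision procedure + soundness.
   All the real work happens when the type checker evaluates
   [evenb 1000] to [true], which is what makes [reflexivity] succeed;
   the proof term itself stays small. *)
Lemma even_1000 : exists k, 1000 = 2 * k.
Proof. apply evenb_sound. reflexivity. Qed.
```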
Because type checking (proof verification) can execute arbitrary Coq functions, the time to type check is not linear in the size of the λ-expression. This is where the exponential blow-up you are probably expecting is hidden. The blow-up is potentially far larger than exponential, because Coq can express very fast-growing functions, though not functions that grow as fast as arbitrary recursive functions can, since Coq cannot evaluate every recursive function (assuming the consistency of Coq). The tautology procedure could also be implemented as a decision procedure in this way, but presumably isn’t, because the tautologies that arise in practice end up having smaller proofs when they are shown directly.
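To make the point about checking time concrete, here is a small sketch; the function tower and the lemma name are illustrative, not anything standard. A tiny change to the statement leaves the λ-expression essentially the same size while making its verification infeasible, because the kernel must evaluate a fast-growing function to accept the proof.

```coq
Require Import Arith.

(* A fast-growing function: a tower of exponentials. *)
Fixpoint tower (n : nat) : nat :=
  match n with
  | 0 => 1
  | S n' => 2 ^ tower n'
  end.

(* A tiny proof term whose verification requires the kernel to
   evaluate [tower 3] (which is 16). *)
Lemma tower_3 : tower 3 = 16.
Proof. reflexivity. Qed.

(* Changing 3 to 5 barely changes the size of the term, but checking
   it would require computing 2 ^ 65536 as a unary natural number,
   which will never finish in practice. *)
```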
This blow-up doesn’t seem to be a problem in practice, because the problems solved by decision procedures in paper proofs are so simple that they are solved quickly even by these super-exponential decision procedures. Presumably this is because if the steps were too big, humans wouldn’t be able to follow them. Of course, the decision procedures can take a while on the lemmas found in non-paper proofs, such as the four colour theorem. But the ability of proof assistants to check these proofs in a reasonable amount of time (a few hours, or a few days) is one of the big advantages of proof assistants over paper proofs. So I hope this was clarifying.