There is much written about the duality between strict (call-by-value) evaluation of the lambda calculus and normal-order (call-by-name) evaluation, or its semantic equivalent, lazy (call-by-need) evaluation. In the simply typed lambda calculus, all evaluation eventually terminates, so both evaluation strategies produce the same values. However, when general recursion is added to the simply typed lambda calculus (via a fixpoint operator, for example), evaluation of some expressions no longer terminates. More expressions terminate under normal-order evaluation than under strict evaluation. In fact, if evaluation terminates under any order, then it terminates under normal-order evaluation.
I would like to discuss the possibility of a third, even laxer evaluation strategy for the typed lambda calculus that allows even more expressions to terminate. I did just say that normal-order evaluation is, in some sense, the best possible evaluation order, so to beat it we will have to add more redexes, namely the commuting conversions.
The typed lambda calculus enjoys certain commuting conversions for case expressions that allow every elimination term to pass through the case expression.
For example, the commuting conversion for the π₁ elimination term and the case expression says that
π₁(case e₀ of σ₁ x. e₁ | σ₂ y. e₂)
converts to
case e₀ of σ₁ x. π₁(e₁) | σ₂ y. π₁(e₂)
These commuting conversions are required so that the subformula property holds.
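To make this concrete, here is the same conversion rendered in Haskell (my own rendering: Either stands in for the sum type, fst for π₁, and the branch bodies are passed in as functions of their bound variables); in Haskell's semantics, lhs and rhs denote the same function.

-- The commuting conversion for π₁ and case, rendered in Haskell.
-- `Either` stands in for the sum type and `fst` for π₁; `e1` and `e2`
-- are the branch bodies, abstracted over their bound variables.
lhs :: Either a b -> (a -> (c, d)) -> (b -> (c, d)) -> c
lhs e0 e1 e2 =
  fst (case e0 of
         Left x  -> e1 x
         Right y -> e2 y)

rhs :: Either a b -> (a -> (c, d)) -> (b -> (c, d)) -> c
rhs e0 e1 e2 =
  case e0 of
    Left x  -> fst (e1 x)
    Right y -> fst (e2 y)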
My understanding is that a corollary of this says that
f(case e₀ of σ₁ x. e₁ | σ₂ y. e₂)
and
case e₀ of σ₁ x. f(e₁) | σ₂ y. f(e₂)
are denotationally equivalent whenever f is a strict function. I would like to develop a version of the lambda calculus in which these two expressions denote the same value for any f. Call this the unrestricted commuting conversion property.
A lambda calculus with this property would necessarily be parallel and thus will require a parallel evaluation strategy.
For example, the natural definition of or becomes the parallel-or operation.
or x y := if x then True else y
This definition has the usual short-circuit property that or True ⊥ is True, where ⊥ is defined by
⊥ := fix id
If we use the unrestricted commuting conversion property, then we also have that or ⊥ True is True:
or ⊥ True
= {definition of or}
  if ⊥ then True else True
= {β-expansion}
  if ⊥ then const True 〈〉 else const True 〈〉
= {commuting}
  const True (if ⊥ then 〈〉 else 〈〉)
= {β-reduction}
  True
Hence or is parallel-or.
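Ordinary Haskell cannot define parallel-or as a pure function (its || is only left-strict), but the intended behaviour can be approximated with concurrency. The sketch below, with my own name por, races the evaluation of both arguments; it relies on GHC being able to preempt a diverging thread, so take it as an illustration of the specification rather than a model of the evaluation strategy I am after.

import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, newMVar, modifyMVar_, putMVar, takeMVar)
import Control.Exception (evaluate)
import Control.Monad (when, void)
import System.IO.Unsafe (unsafePerformIO)

-- A concurrency-based sketch of parallel-or: evaluate both arguments in
-- separate threads, answer True as soon as either one is True, and answer
-- False only after both have come back False.  If one argument diverges
-- and the other is False, the whole call diverges, as parallel-or should.
por :: Bool -> Bool -> Bool
por x y = unsafePerformIO $ do
  result     <- newEmptyMVar
  falsesSeen <- newMVar (0 :: Int)
  let worker b = void . forkIO $ do
        v <- evaluate b
        if v
          then putMVar result True          -- a True answer wins immediately
          else modifyMVar_ falsesSeen $ \n -> do
                 when (n + 1 == 2) (putMVar result False)
                 return (n + 1)
  worker x
  worker y
  takeMVar result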
Other parallel functions, such as the majority function, also follow from their natural definitions.
maj x y z := if x then (or y z) else (and y z)
In this case maj ⊥ True True is True.
maj ⊥ True True
= {definition of maj}
  if ⊥ then (or True True) else (and True True)
= {evaluation of (or True True) and (and True True)}
  if ⊥ then True else True
= {commuting}
  True
It is easy to verify that maj True ⊥ True and maj True True ⊥ are also both True.
My big question is whether we can devise some nice operational semantics for the lambda calculus that has the unrestricted commuting conversion property I desire. Below I document my first attempt at such an operational semantics, but, spoiler alert, it does not work.
The usual rule for computing the weak head normal form of the case expression case e₀ of σ₁ x. e₁ | σ₂ y. e₂ says to first convert e₀ to weak head normal form. If it is σ₁ e₀′, then return the weak head normal form of e₁[x ↦ e₀′]. If it is σ₂ e₀′, then return the weak head normal form of e₂[y ↦ e₀′].
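For reference, here is a small Haskell sketch of this usual rule over a made-up term type; the representation, the restriction to binary sums, and the naive substitution are all my own simplifications.

-- A tiny term language with binary sums, just enough to state the usual
-- weak-head-normal-form rule for case.  The representation is my own.
data Term
  = Var String
  | Lam String Term
  | App Term Term
  | Con Int Term                           -- Con i e plays the role of σᵢ e
  | Case Term (String, Term) (String, Term)
  deriving Show

-- The usual rule: reduce the scrutinee to WHNF first, then enter the
-- branch selected by its constructor.
whnf :: Term -> Term
whnf (App f a) =
  case whnf f of
    Lam x body -> whnf (subst x a body)
    f'         -> App f' a
whnf (Case e0 (x, e1) (y, e2)) =
  case whnf e0 of
    Con 1 e0' -> whnf (subst x e0' e1)
    Con 2 e0' -> whnf (subst y e0' e2)
    e0'       -> Case e0' (x, e1) (y, e2)  -- stuck on a neutral scrutinee
whnf t = t

-- Naive substitution; it ignores variable capture, which is fine for the
-- closed examples in this post.
subst :: String -> Term -> Term -> Term
subst x s t = case t of
  Var y | y == x    -> s
        | otherwise -> Var y
  Lam y b | y == x    -> Lam y b
          | otherwise -> Lam y (subst x s b)
  App f a -> App (subst x s f) (subst x s a)
  Con i e -> Con i (subst x s e)
  Case e0 (y, e1) (z, e2) ->
    Case (subst x s e0)
         (y, if y == x then e1 else subst x s e1)
         (z, if z == x then e2 else subst x s e2)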
In addition to this rule, I want to add another rule for computing the weak head normal form of a case expression. This alternative rule says that we compute the weak head normal forms of e₁ and e₂. If we get C₁ e₁′ and C₂ e₂′ respectively, for introduction terms (a.k.a. constructors) C₁ and C₂, and if additionally C₁ = C₂, then return the following as a weak head normal form: C₁ (case e₀ of σ₁ x. e₁′ | σ₂ y. e₂′).
If C₁ ≠ C₂, or if we get stuck on a neutral term (e.g. a variable), then this rule does not apply. This new rule is in addition to the usual rule for case. Any implementation must run these two rules in parallel, because either rule (or both) may fail to terminate when recursively computing weak head normal forms of subterms.
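Continuing the sketch above (same Term, subst, and whnf), the alternative rule might be written as follows; the racing of the two rules is only described in a comment, since it would need the same sort of concurrency plumbing as por.

-- The alternative rule, reusing Term, subst and whnf from the earlier
-- sketch: reduce both branches to WHNF; if they expose the same
-- constructor, pull that constructor out of the case.  Nothing means
-- the rule does not apply.  A faithful implementation would race this
-- against the usual rule, since either recursive whnf call may diverge.
whnfAlt :: Term -> Maybe Term
whnfAlt (Case e0 (x, e1) (y, e2)) =
  case (whnf e1, whnf e2) of
    (Con i e1', Con j e2')
      | i == j -> Just (Con i (Case e0 (x, e1') (y, e2')))
    _          -> Nothing   -- different constructors, or a branch got stuck
whnfAlt _ = Nothing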
I suppose that if one has unlifted products, then when computing the weak head normal form of a case expression having a product type or a function type, one can immediately return
〈case e₀ of σ₁ x. π₁ e₁ | σ₂ y. π₁ e₂, case e₀ of σ₁ x. π₂ e₁ | σ₂ y. π₂ e₂〉
or
λz. case e₀ of σ₁ x. e₁ z | σ₂ y. e₂ z
This amended computation of weak head normal form seems to work for the or and maj functions above, making them non-strict in every argument, but there is another example where even this method of computing weak head normal form is not sufficient.
Consider the functions that implement associativity for the sum type.
assocL : A + (B + C) -> (A + B) + C
assocL z := case z of σ₁ a. σ₁ (σ₁ a)
                    | σ₂ bc. (case bc of σ₁ b. σ₁ (σ₂ b) | σ₂ c. σ₂ c)

assocR : (A + B) + C -> A + (B + C)
assocR z := case z of σ₁ ab. (case ab of σ₁ a. σ₁ a | σ₂ b. σ₂ (σ₁ b))
                    | σ₂ c. σ₂ (σ₂ c)
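For reference, here are the same two functions written directly in Haskell, with Either playing the role of the sum type; note that Haskell's pattern matching is sequential, so these definitions will not exhibit the parallel behaviour discussed next.

-- assocL and assocR written with Either as the sum type.  Haskell's
-- pattern matching forces the scrutinee, so these are the sequential,
-- non-parallel versions of the functions.
assocL :: Either a (Either b c) -> Either (Either a b) c
assocL (Left a)          = Left (Left a)
assocL (Right (Left b))  = Left (Right b)
assocL (Right (Right c)) = Right c

assocR :: Either (Either a b) c -> Either a (Either b c)
assocR (Left (Left a))  = Left a
assocR (Left (Right b)) = Right (Left b)
assocR (Right c)        = Right (Right c)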
Now let us use unrestricted commuting conversions to evaluate assocR (assocL (σ₂ ⊥)).
assocR (assocL (σ₂ ⊥))
= { definition of assocL and case evaluation }
  assocR (case ⊥ of σ₁ b. σ₁ (σ₂ b) | σ₂ c. σ₂ c)
= { commuting conversion }
  case ⊥ of σ₁ b. assocR (σ₁ (σ₂ b)) | σ₂ c. assocR (σ₂ c)
= { definition of assocR and case evaluations }
  case ⊥ of σ₁ b. σ₂ (σ₁ b) | σ₂ c. σ₂ (σ₂ c)
= { commuting conversion }
  σ₂ (case ⊥ of σ₁ b. σ₁ b | σ₂ c. σ₂ c)
= { η-contraction for case }
  σ₂ ⊥
Even if η-contraction is not a reduction rule used for computation, it is still the case that t and case t of σ₁ b. σ₁ b | σ₂ c. σ₂ c should always be denotationally equivalent.
Anyhow, we see that, by using commuting conversions, a weak head normal form of assocR (assocL (σ₂ ⊥)) should expose the σ₂ constructor. However, even my amended computation of weak head normal form will not produce any constructor here.
What I find particularly surprising is the domain semantics of assocL and assocR. assocL seems to map σ₂ ⊥ to ⊥, because no constructor can be exposed. assocR maps ⊥ to ⊥. Therefore, according to the denotational semantics, the composition should map σ₂ ⊥ to ⊥, but, as we saw, under parallel evaluation it does not. It would seem that the naive denotational semantics does not capture the semantics of parallel evaluation.
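Indeed, with the sequential Haskell renderings of assocL and assocR given earlier, the composition really is bottom on σ₂ ⊥:

-- Using the Haskell renderings of assocL and assocR above: the
-- composition is bottom on Right undefined, exactly as the naive domain
-- semantics predicts, because assocL must force the inner undefined
-- before it can produce any constructor.
example :: Either () (Either () ())
example = assocR (assocL (Right undefined))  -- diverges / throws when forced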
The term case ⊥ of σ₁ b. σ₁ (σ₂ b) | σ₂ c. σ₂ c seems to be more defined than ⊥, even though no constructor is available in the head position.
Although my attempt at a nice operational semantics failed, I am still confident that some nice computation method exists.
At the very least, I believe a rewriting system will work which has all the usual rewriting rules plus a few extra redexes: one saying that an elimination term applied to a case expression commutes into all of the branches, another saying that when all branches of a case expression contain the same introduction term, that introduction term is commuted to the outside of the case expression, and maybe also the rules I listed above for unlifted products; a sketch of the first two follows below. I conjecture that this rewriting system is confluent and that terms related by unrestricted commuting conversions are convertible in it (probably requiring η-conversions as well).
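In terms of the Term sketch from earlier, the first two redexes might look something like the following one-step rewrites (application is the only elimination form that sketch has, so the projections would need analogous cases; this is only my guess at the shape of the rules).

-- One-step versions of the two proposed redexes, on the Term type from
-- the earlier sketch.  Nothing means the redex is not present.

-- An elimination term (here: application) applied to a case expression
-- commutes into both branches (assuming the bound variables are fresh
-- with respect to the argument a).
commuteElim :: Term -> Maybe Term
commuteElim (App (Case e0 (x, e1) (y, e2)) a) =
  Just (Case e0 (x, App e1 a) (y, App e2 a))
commuteElim _ = Nothing

-- When both branches are built with the same introduction term, that
-- introduction term commutes to the outside of the case expression.
commuteIntro :: Term -> Maybe Term
commuteIntro (Case e0 (x, Con i e1) (y, Con j e2))
  | i == j = Just (Con i (Case e0 (x, e1) (y, e2)))
commuteIntro _ = Nothing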
Without proofs of my conjectures I am a little concerned that this all does not actually work out. There may be some bad interaction with fixpoints that I am not seeing. If this does all work out then shouldn’t I have heard about it by now?