ICFP 2014 Post-Mortem


I participated in the ICFP programming contest this year. The 2014 task was to write an AI for a simplified Pac-Man game called Lambda-Man. You could write the AI in any language you wanted, as long as it compiles to a specific SECD machine architecture invented for the contest. At the end of the lightning round, it was announced that the final task included writing an AI for the ghosts as well. Again, the ghost AI could be written in any language, as long as it compiles to a separate 8-bit architecture invented for the contest.

I spent the first several hours implementing my own simulator of the arcade. Eventually I realized that I would have to start working on the AI if I was going to have an entry for the 24-hour lightning division. It was at that point I realized that the provided on-line simulator was perfectly adequate for my needs, and I never completed my own simulator.

I have some previous experience writing assembler DSLs in Haskell to handle linking. After the 2006 ICFP contest, our team wrote a fake UM-DOS shell so that we could submit our solution in UM format. This led me to write an article in The Monad Reader about how to write an assembler using recursive do. After that, I encountered a really elegant and simple formulation of an assembler monad on some paste site. Unfortunately, I do not recall the author, but here is how the implementation looks.

newtype Label = Label { unLabel :: Int }
data ASMMonad w a = ASMMonad { runASM :: Label -> ([w],a) }

instance Monad (ASMMonad w) where
  return a = ASMMonad $ \_ -> ([], a)
  x >>= y = ASMMonad (\(Label i) -> let (o0, a) = runASM x (Label i)
                                        (o1, b) = runASM (y a) (Label (i+length o0))
                                     in (o0 ++ o1, b))

instance MonadFix (ASMMonad w) where
  mfix f = ASMMonad (\i -> let (o0, a) = runASM (f a) i in (o0, a))

execASM :: ASMMonad w a -> [w]
execASM m = fst $ runASM m (Label 0)

Next one adds two primitive operations. The tell function is similar to the version for the writer monad. The label function returns the current index of the output stream.

tell :: [w] -> ASMMonad w ()
tell l = ASMMonad $ \_ -> (l,())

label :: ASMMonad w Label
label = ASMMonad $ \i -> ([],i)

Lastly, one makes an ASMMonad value for each assembly instruction.

data ASM = LDC Int32  -- load constant
         | LD Int Int -- load variable
         | LDF Label  -- load function
         | ADD
         {- … -}
         deriving Show

ldc x = tell [LDC x]
ld x y = tell [LD x y]
ldf x = tell [LDF x]
add = tell [ADD]
{- … -}
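
Putting the pieces above together gives a small self-contained module. This is a sketch for experimentation, not the contest entry itself: I have added the Functor and Applicative instances that modern GHC requires, kept only a few instructions, and written a toy demo program of my own in which LDF forward-references a label that is only bound two instructions later, resolved by recursive do.

```haskell
{-# LANGUAGE RecursiveDo #-}
import Control.Monad.Fix (MonadFix (..))
import Data.Int (Int32)

newtype Label = Label { unLabel :: Int } deriving (Eq, Show)

newtype ASMMonad w a = ASMMonad { runASM :: Label -> ([w], a) }

-- Functor and Applicative instances, required by modern GHC.
instance Functor (ASMMonad w) where
  fmap f m = ASMMonad $ \i -> let (o, a) = runASM m i in (o, f a)

instance Applicative (ASMMonad w) where
  pure a = ASMMonad $ \_ -> ([], a)
  mf <*> mx = ASMMonad $ \(Label i) ->
    let (o0, f) = runASM mf (Label i)
        (o1, x) = runASM mx (Label (i + length o0))
    in (o0 ++ o1, f x)

instance Monad (ASMMonad w) where
  x >>= y = ASMMonad $ \(Label i) ->
    let (o0, a) = runASM x (Label i)
        (o1, b) = runASM (y a) (Label (i + length o0))
    in (o0 ++ o1, b)

instance MonadFix (ASMMonad w) where
  mfix f = ASMMonad $ \i -> let (o0, a) = runASM (f a) i in (o0, a)

execASM :: ASMMonad w a -> [w]
execASM m = fst $ runASM m (Label 0)

tell :: [w] -> ASMMonad w ()
tell l = ASMMonad $ \_ -> (l, ())

label :: ASMMonad w Label
label = ASMMonad $ \i -> ([], i)

data ASM = LDC Int32 | LDF Label | ADD deriving (Eq, Show)

-- A toy program with a forward reference: LDF mentions a label
-- that is only bound two instructions later.
demo :: ASMMonad ASM ()
demo = mdo
  tell [LDF fn]
  tell [ADD]
  fn <- label
  tell [LDC 42]

main :: IO ()
main = print (execASM demo)
-- prints [LDF (Label {unLabel = 2}),ADD,LDC 42]
```

The key point is that the output list is produced lazily: bind only needs the length of the first statement's output to compute label positions, so the label values can be tied recursively with mfix.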

At the risk of jumping ahead too far, my compiler can produce linked assembly code very simply. The clause below compiles a lambda abstraction to linked SECD assembly using recursive do.

compileH env (Abs vars body) = mdo
  jmp end
  begin <- label
  compileH (update env vars) body
  end <- label
  ldf begin

Thanks to recursive do, the first line, jmp end, refers to the end label, which is not bound until the second-to-last line.

With a DSL assembler written in Haskell, I turned to creating another DSL in Haskell that compiles to this assembly language. The SECD machine is designed for Lisp compilers, so I created a little Lisp language.

data Binding a = a := Lisp a

data Lisp a = Var a
            | Const Int32
            | Cons (Lisp a) (Lisp a)
            | Abs [a] (Lisp a)
            | Rec [Binding a] (Lisp a)
            {- … -}

The Abs constructor builds an n-ary lambda function. The Rec constructor plays the role of letrec to build mutually recursive references. With some abuse of the Num class and OverloadedStrings, this Lisp DSL is barely tolerable to program with directly in Haskell.
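
To illustrate that abuse, here is a hedged sketch of instances that would let bare string and integer literals stand for variables and constants. The exact instances in my entry may differ; this is just one plausible way to get the surface syntax used below.

```haskell
{-# LANGUAGE FlexibleInstances #-}
import Data.Int (Int32)
import Data.String (IsString (..))

data Binding a = a := Lisp a deriving (Eq, Show)

data Lisp a = Var a
            | Const Int32
            | Cons (Lisp a) (Lisp a)
            | Abs [a] (Lisp a)
            deriving (Eq, Show)

-- With OverloadedStrings enabled at the use site, a bare "x" becomes Var "x".
instance IsString (Lisp String) where
  fromString = Var

-- Abusing Num so that a bare 0 becomes Const 0; the other methods are
-- left undefined since only integer literals are needed.
instance Num (Lisp a) where
  fromInteger = Const . fromInteger
  (+)    = error "Num (Lisp a): (+) unused"
  (*)    = error "Num (Lisp a): (*) unused"
  abs    = error "Num (Lisp a): abs unused"
  signum = error "Num (Lisp a): signum unused"
  negate = error "Num (Lisp a): negate unused"

main :: IO ()
main = print (Cons (Var "cmp") 0 :: Lisp String)
-- prints Cons (Var "cmp") (Const 0)
```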

  Rec [ {- … -}
      ,"heapNew" := ["cmp"]! (Cons "cmp" 0) -- heap layout 0 = leaf | (Cons (Cons /heap is full/ /value/) (Cons /left tree/ /right tree/))
                                            -- "cmp" @@ ["x","y"] returns true when "x" < "y"
      ,"heapIsFull" := ["h"]! If (Atom "h") 1 (caar "h")
      ,"heapInsert" := ["cmpHeap", "v"]! Rec ["cmp" := (car "cmpHeap")
                                             ,"insert" := ["heap", "v"]! -- returns (Cons /new heap is full/ /new heap/)
                                                If (Atom "heap") (Cons (Cons 1 "v") (Cons 0 0))
                                                (Rec ["root" := cdar "heap"
                                                     ,"left" := cadr "heap"
                                                     ,"right" := cddr "heap"
                                                     ] $
                                                 Rec ["swap" := "cmp" @@ ["v", "root"]] $
                                                 Rec ["newRoot" := If "swap" "v" "root"
                                                     ,"newV" := If "swap" "root" "v"
                                                     ] $
                                                 If (caar "heap" `ou` Not ("heapIsFull" @@ ["left"]))
                                                    (Rec ["rec" := "insert" @@ ["left", "newV"]] $
                                                     Cons (Cons 0 "newRoot") (Cons "rec" "right"))
                                                    (Rec ["rec" := "insert" @@ ["right", "newV"]] $
                                                     Cons (Cons ("heapIsFull" @@ ["rec"]) "newRoot") (Cons "left" "rec")))
                                             (Cons "cmp" ("insert" @@ [cdr "cmpHeap","v"]))
  {- … -}

The @@ operator is infix application for the Lisp language, and the ! operator is infix lambda abstraction for the Lisp language.
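
For readability, the heap above can be transcribed into plain Haskell: as far as I can tell it is a complete binary tree in which each node records whether its subtree is full. The names here are mine, and this is a sketch of the same idea rather than a literal translation of the Lisp code.

```haskell
-- A min-heap as a binary tree; each node records whether its subtree
-- is a full (perfect) binary tree, mirroring the /heap is full/ flag.
data Heap a = Leaf | Node Bool a (Heap a) (Heap a) deriving Show

isFull :: Heap a -> Bool
isFull Leaf           = True
isFull (Node f _ _ _) = f

-- Mirrors the "insert" helper: keep the smaller of v and the root at
-- this node, and push the larger one into the left subtree while the
-- heap is full or the left subtree is not, otherwise into the right.
insert :: Ord a => a -> Heap a -> Heap a
insert v Leaf = Node True v Leaf Leaf
insert v (Node full root l r)
  | full || not (isFull l) = Node False newRoot (insert newV l) r
  | otherwise              = let r' = insert newV r
                             in Node (isFull r') newRoot l r'
  where
    (newRoot, newV) = if v < root then (v, root) else (root, v)

peekMin :: Heap a -> Maybe a
peekMin Leaf           = Nothing
peekMin (Node _ v _ _) = Just v
```

The full flag keeps insertion balanced: a full tree grows a new leftmost leaf, a tree with a non-full left subtree fills it in, and only then does insertion spill into the right subtree.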

This Lisp language compiles to the SECD assembly, and the assembly is printed out. The compiler is very simple. It does not even implement tail call optimization. There is a bit of an annoying problem with the compiler: the assembly code is structured in exactly the same way as the original Lisp. In particular, lambda abstractions are compiled directly in place, and since lambda expressions are typically not executed at the location where they are declared, I have to jump over the compiled code. You can see this happening in the snippet of my compiler above. I would have preferred to write

compileH env (Abs vars body) = do
  fun <- proc (compileH (update env vars) body)
  ldf fun

where proc is some function that takes an ASMMonad value, sticks the assembly code “at the end”, and returns a label holding the location where the assembly code got stashed. However, I could not figure out a clever and elegant way of modifying the assembly monad to support this new primitive. This is something for you to ponder.

My Lambda-Man AI, written in my Lisp variant, is fairly simple and similar to other entries. Lambda-Man searches the maze for the nearest edible object. It searches down each path until it hits a junction and inserts the location of the junction into a binary heap keyed by distance. It also inserts the junction into a binary tree of encountered junctions; if the junction is already in the binary tree, it does not insert the junction into the heap, because it has already been considered. The closest junction is popped off the heap, and the search resumes from there.
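
This search is essentially uniform-cost search. As a rough Haskell rendering of the idea (the real entry is written in the Lisp DSL and works junction-by-junction; here a Set of (distance, position) pairs stands in for the binary heap, and a second Set for the tree of encountered positions):

```haskell
import qualified Data.Set as Set

type Pos = (Int, Int)

-- Uniform-cost search for the nearest edible object.  The frontier Set,
-- ordered by distance first, plays the role of the binary heap; the seen
-- Set plays the role of the binary tree of already-considered locations.
nearestFood :: Set.Set Pos -> Set.Set Pos -> Pos -> Maybe Pos
nearestFood walls food start = go (Set.singleton (0 :: Int, start)) Set.empty
  where
    go frontier seen = case Set.minView frontier of
      Nothing -> Nothing
      Just ((d, p), rest)
        | p `Set.member` seen -> go rest seen   -- already considered
        | p `Set.member` food -> Just p         -- nearest edible object
        | otherwise           -> go frontier' (Set.insert p seen)
        where
          frontier' = foldr Set.insert rest
            [ (d + 1, q) | q <- neighbours p
            , q `Set.notMember` walls ]
    neighbours (x, y) = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

main :: IO ()
main = print (nearestFood (Set.fromList [(1, 0)])
                          (Set.fromList [(3, 0), (0, 2)])
                          (0, 0))
-- the food at (0,2) is two steps away; the wall makes (3,0) five steps away
```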

There is at least one bit of surprising behaviour: if there is more than one path from one junction to another, Lambda-Man sometimes ends up taking the longer path. This behaviour did not seem bothersome enough to warrant fixing.

This programming task has renewed my appreciation for typed languages. The Lisp language I developed is untyped, and I made several type errors programming in it. Although it is true that I did detect (all?) my errors at run time, they were still frustrating to debug. In a typed language, when an invariant enforced by the type system is violated, you get a compile-time error that, more or less, points to the code location where the invariant is violated. In an untyped language, when an invariant is violated, you get a run-time error that, more or less, points to some place in the code where the broken invariant has caused a problem. While this is often enough to determine which invariant was violated, I had little idea where the code breaking the invariant was located.

With some effort I probably could have used GADTs to bring Haskell’s type checker to the Lisp DSL, but I was not confident enough I could pull that off in time.

I also needed to write some ghost AIs. The 8-bit machine that the ghosts run on is so constrained (256 bytes of data memory, 256 code locations, and 8 registers) that it seemed to make sense to write the code in raw assembly.

The first thing I tried was to make the ghosts move randomly. This meant I needed to write my own pseudo-random number generator. Wikipedia led me to a paper on how to write long-period xorshift random number generators. The examples in that paper are all for 32-bit or 64-bit machines, but I had an 8-bit architecture. I wrote a little Haskell program to find analogous random number generators for 8-bit machines. It found 6 possibilities for a random number generator whose 32-bit state is composed of four 8-bit words and satisfies the xorshift constraints described in the paper. Here is the assembly code for getting a 2-bit pseudo-random value.

mov a,[0]
div a,2
xor [0],a
mov a,[0]
mul a,2
xor a,[0]
mov [0],[1]
mov [1],[2]
mov [2],[3]
mul [3],8
xor [3],[2]
xor [3],a
; get 2 bits
mov a,[3]
div a,64

The random seed is held in memory locations [0] through [3]. After advancing to the next state, this code takes 2 pseudo-random bits from memory location [3] and puts them into register a.
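
The assembly's state update can be cross-checked with a Haskell model: div by 2 and mul by 8 on the 8-bit machine are right and left shifts on Word8. This is my own sketch of what the instructions compute; the seed in the demo is arbitrary.

```haskell
import Data.Bits (shiftL, shiftR, xor)
import Data.Word (Word8)

-- The four seed bytes held in memory locations [0]..[3].
type State = (Word8, Word8, Word8, Word8)

-- One step of the generator, mirroring the assembly above.
step :: State -> State
step (s0, s1, s2, s3) = (s1, s2, s3, s3')
  where
    t   = s0 `xor` (s0 `shiftR` 1)          -- mov a,[0]; div a,2; xor [0],a
    a   = t  `xor` (t  `shiftL` 1)          -- mov a,[0]; mul a,2; xor a,[0]
    s3' = (s3 `shiftL` 3) `xor` s3 `xor` a  -- mul [3],8; xor [3],[2]; xor [3],a

-- The final "div a,64": the top two bits of [3] are the output.
twoBits :: State -> Word8
twoBits (_, _, _, s3) = s3 `shiftR` 6

main :: IO ()
main = print (twoBits (step (step (step (1, 0, 0, 0)))))
-- prints 3
```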

I did not check the quality of this random number generator beyond constructing it so that it has a period of 2³² − 1. I expect the bit stream to appear to be quite random.

My Lambda-Man performed reasonably well against my random ghosts, so I put some effort into making my random ghosts a little smarter. I wrote a ghost AI that tries to get above Lambda-Man and attack him from above. Then I made each of the other ghosts attack Lambda-Man from one of the other three directions in the same manner. The idea is to try to trap Lambda-Man between two ghosts.

These smarter ghosts were quite a bit more successful against my simple Lambda-Man AI. At this point I was out of contest time, so that was it for my 2014 ICFP contest submission.

Thanks to the organizers for a terrific contest problem. I am looking forward to seeing the final rankings.



Russell O’Connor: contact me