Yesterday I had a computable real x, and I was computing x^(2^10). I did this by repeatedly squaring my value.
This morning I thought that method was dumb. x^1024 is a perfectly good polynomial. I should just plug my value into that polynomial and get my result in one fell swoop. To my surprise, the polynomial computation was way slower. How could this be?
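On the face of it, the two routes look like they should do comparable work. Here is a minimal Python sketch with an exact rational standing in for one approximation that the computable real produces (the computable-real machinery itself, and how it propagates precision requests, is not shown): both routes yield the same exact value, and both leave you holding a rational with tens of thousands of bits.

```python
from fractions import Fraction

# A stand-in: one exact rational approximation of the computable real x.
x = Fraction(314159, 100000)

# Route 1: repeated squaring -- ten squarings give x^(2^10) = x^1024.
y = x
for _ in range(10):
    y = y * y

# Route 2: treat x^1024 as a polynomial and evaluate it in one step.
z = x ** 1024

assert y == z                         # same exact value either way
print(y.denominator.bit_length())     # ~17,000 bits: the rationals explode
```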
I was actually doing a bit more than repeatedly squaring in my iteration method. I also interleaved calls to a function I call compress. compress will take my computable real and attempt to reduce the size (in bits) of the rational approximations the computable real produces. Somehow this was making my computation faster. But I still felt uneasy about it.
But then a lunch discussion of my problem inspired me to think of an analogy (a bit of a stretch here) in cryptography. In cryptography it is common to compute x^y modulo n. One does not compute this value directly and then take the modulus; one repeatedly squares x, taking the modulus after each step. So my polynomial computation was like trying to compute x^y modulo n directly, and my iterated method is analogous to computing x^y modulo n wisely.
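The wise way here is the standard square-and-multiply method, reducing modulo n after every step so the intermediate numbers never get large. A quick Python sketch (not from the post):

```python
def modexp(x, y, n):
    """Compute x^y mod n by repeated squaring, reducing modulo n after
    every multiplication so intermediates never exceed n^2."""
    result = 1
    base = x % n
    while y > 0:
        if y & 1:                       # multiply in this bit of the exponent
            result = (result * base) % n
        base = (base * base) % n        # square, then reduce immediately
        y >>= 1
    return result

# The direct route builds a gigantic intermediate integer before reducing,
# just as the direct polynomial evaluation built up huge rational approximations.
assert modexp(7, 1024, 1000003) == pow(7, 1024) % 1000003
```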