This exercise deals with differentiating between recursive and iterative processes by tracing the evaluation of two functions.
Expansion of (+ 4 5) using first definition
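Assuming the first definition from the book, (define (+ a b) (if (= a 0) b (inc (+ (dec a) b)))), a reconstruction of the substitution-model trace looks like this:

```scheme
(+ 4 5)
(inc (+ 3 5))
(inc (inc (+ 2 5)))
(inc (inc (inc (+ 1 5))))
(inc (inc (inc (inc (+ 0 5)))))
(inc (inc (inc (inc 5))))
(inc (inc (inc 6)))
(inc (inc 7))
(inc 8)
9
```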
As can be seen, there is a chain of deferred inc operations as the function is evaluated. Thus, this process is recursive.
Another way to determine that this process is recursive is by observing that the recursive call is nested within another procedure call. The deferred outer call needs to be remembered by the interpreter until the inner recursive call returns its value.
Expansion of (+ 4 5) using second definition
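With the second definition, (define (+ a b) (if (= a 0) b (+ (dec a) (inc b)))), the trace (again a reconstruction) is:

```scheme
(+ 4 5)
(+ 3 6)
(+ 2 7)
(+ 1 8)
(+ 0 9)
9
```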
In this case, the function evaluates exactly to another instance of itself, which in turn evaluates to another instance, and so on until the terminal case is reached. There are no deferred operations to be tracked. Thus, this process is iterative.
Exercise 1.10
This exercise mainly teaches us to interpret procedures as mathematical functions. For that purpose, we are presented with Ackermann's function:-
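For reference, the definition as given in the book:

```scheme
(define (A x y)
  (cond ((= y 0) 0)
        ((= x 0) (* 2 y))
        ((= y 1) 2)
        (else (A (- x 1)
                 (A x (- y 1))))))
```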
Evaluating the given expressions, we get
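These values follow directly from the definition above:

```scheme
(A 1 10)  ; → 1024
(A 2 4)   ; → 65536
(A 3 3)   ; → 65536
```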
Mathematical definition of (A 0 n)
Let us evaluate (A 0 n) for various arguments
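A few sample evaluations, regenerated here from the definition:

```scheme
(A 0 1)  ; → 2
(A 0 2)  ; → 4
(A 0 3)  ; → 6
(A 0 4)  ; → 8
```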
As can be seen, (f n) evaluates to $2n$. As we run through the function definition, when (= x 0) evaluates to #t, the function evaluates to (* 2 y), which is the value we see.
Mathematical definition of (A 1 n)
Let us evaluate (A 1 n) for various arguments
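Again, sample evaluations regenerated from the definition:

```scheme
(A 1 1)  ; → 2
(A 1 2)  ; → 4
(A 1 3)  ; → 8
(A 1 4)  ; → 16
```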
As can be seen, with the exception of when n is 0, (g n) evaluates to $2^n$. To see why this is the case, we run through the evaluation of Ackermann's function.
We can see a series developing: $2 \times (A\ 1\ (n-1)) = 2 \times 2 \times (A\ 1\ (n-2)) = \dots$ in infix notation. The evaluation expands till the terminal case (= n 1), which evaluates to 2. Thus, we multiply 2 with itself n times to get $2^n$.
Mathematical definition of (A 2 n)
Evaluating (A 2 n) for various arguments, we get
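Sample evaluations, regenerated from the definition:

```scheme
(A 2 1)  ; → 2
(A 2 2)  ; → 4
(A 2 3)  ; → 16
(A 2 4)  ; → 65536
```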
It's not immediately clear what is going on, but the expansion of the evaluation gives us a clue,
Combining with the definition of (A 1 n), we get the relation $h(n) = 2^{h(n-1)}$. Expanding down to the terminal case of (= n 1), we see that 2 is raised to the power of itself n times. This can also be written using Knuth's up-arrow notation as $2 \uparrow\uparrow n$.
Exercise 1.11
This exercise introduces to us the method of solving the same problem using recursive and iterative processes. We have a function similar to the Fibonacci series: $f(n) = n$ if $n < 3$, and $f(n) = f(n-1) + 2f(n-2) + 3f(n-3)$ if $n \ge 3$.
Recursive process
The recursive process can be written simply using the definition.
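A direct transcription of the definition into Scheme:

```scheme
(define (f n)
  (if (< n 3)
      n
      (+ (f (- n 1))
         (* 2 (f (- n 2)))
         (* 3 (f (- n 3))))))
```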
Iterative process
The iterative process can be written by using transformations applied on a series of integers, similar to the method used for computing the Fibonacci series iteratively. Let us initialize $a = f(2) = 2$, $b = f(1) = 1$ and $c = f(0) = 0$. We can apply the simultaneous transformations $a \leftarrow a + 2b + 3c$, $b \leftarrow a$, $c \leftarrow b$.
Once the transformations are applied $n$ times, $c$ evaluates to $f(n)$.
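A sketch of the iterative version; the state variables a, b and c hold three consecutive values of f:

```scheme
(define (f n)
  ;; a = f(k+2), b = f(k+1), c = f(k); count counts down from n to 0,
  ;; at which point c holds f(n).
  (define (iter a b c count)
    (if (= count 0)
        c
        (iter (+ a (* 2 b) (* 3 c)) a b (- count 1))))
  (iter 2 1 0 n))
```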
The iterative process is far more efficient since it takes time on the order of $\Theta(n)$ to compute the result, as opposed to the recursive process, which takes exponential time (each call can spawn up to three further calls).
Exercise 1.12
This exercise makes us write a procedure which evaluates the elements of Pascal's triangle using a recursive process. We use the coordinate system below.
We can define the pascal function at any row and column as such:-
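A minimal sketch, assuming rows and columns are both numbered from 0, so that the edges of the triangle lie at col = 0 and col = row:

```scheme
(define (pascal row col)
  (if (or (= col 0) (= col row))
      1
      (+ (pascal (- row 1) (- col 1))  ; element above and to the left
         (pascal (- row 1) col))))     ; element directly above
```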
Exercise 1.13
This exercise wants us to prove that $\mathrm{Fib}(n)$ is the closest integer to $\varphi^n/\sqrt{5}$, where $\varphi = (1+\sqrt{5})/2$. Let us derive this using the hints given.
Firstly, we are told to prove that

$$\mathrm{Fib}(n) = \frac{\varphi^n - \psi^n}{\sqrt{5}}$$

where $\psi = (1-\sqrt{5})/2$. We are also given that $\varphi^2 = \varphi + 1$ and $\psi^2 = \psi + 1$.
Let us use mathematical induction to prove the relation.
Base case
We evaluate the left-hand side $\mathrm{Fib}(n)$ and the right-hand side $\frac{\varphi^n - \psi^n}{\sqrt{5}}$ for the first few values of n:

n = 0: $\mathrm{Fib}(0) = 0$, and $\frac{\varphi^0 - \psi^0}{\sqrt{5}} = \frac{0}{\sqrt{5}} = 0$.

n = 1: $\mathrm{Fib}(1) = 1$, and $\frac{\varphi - \psi}{\sqrt{5}} = \frac{\sqrt{5}}{\sqrt{5}} = 1$.

n = 2: $\mathrm{Fib}(2) = 1$, and $\frac{\varphi^2 - \psi^2}{\sqrt{5}} = \frac{(\varphi + 1) - (\psi + 1)}{\sqrt{5}} = \frac{\varphi - \psi}{\sqrt{5}} = 1$.
Inductive step
Let us assume that the following relations hold true: $\mathrm{Fib}(n) = \frac{\varphi^n - \psi^n}{\sqrt{5}}$ and $\mathrm{Fib}(n-1) = \frac{\varphi^{n-1} - \psi^{n-1}}{\sqrt{5}}$.
We need to prove that the assumptions lead to $\mathrm{Fib}(n+1) = \frac{\varphi^{n+1} - \psi^{n+1}}{\sqrt{5}}$.
From the definition, $\mathrm{Fib}(n+1) = \mathrm{Fib}(n) + \mathrm{Fib}(n-1) = \frac{\varphi^{n-1}(\varphi + 1) - \psi^{n-1}(\psi + 1)}{\sqrt{5}} = \frac{\varphi^{n-1}\varphi^2 - \psi^{n-1}\psi^2}{\sqrt{5}} = \frac{\varphi^{n+1} - \psi^{n+1}}{\sqrt{5}}$.
Hence, we have proven the relation.
Original problem
Going back to the original problem, we can reformulate it to say that we need to prove $\left|\mathrm{Fib}(n) - \frac{\varphi^n}{\sqrt{5}}\right| \le \frac{1}{2}$.
From the inductive proof above, we can see that $\mathrm{Fib}(n) - \frac{\varphi^n}{\sqrt{5}} = -\frac{\psi^n}{\sqrt{5}}$.
Thus, we need to prove $\left|\frac{\psi^n}{\sqrt{5}}\right| \le \frac{1}{2}$, or $|\psi|^n \le \frac{\sqrt{5}}{2}$.
We have $\psi = \frac{1 - \sqrt{5}}{2} \approx -0.618$ and $\frac{\sqrt{5}}{2} \approx 1.118$.
Since $|\psi| < 1$, we have $|\psi|^n < 1$ for any positive integer $n$.
We can safely say that $\left|\frac{\psi^n}{\sqrt{5}}\right| < \frac{1}{\sqrt{5}} < \frac{1}{2}$.
Thus, $\mathrm{Fib}(n)$ is the closest integer to $\frac{\varphi^n}{\sqrt{5}}$.
Exercise 1.14
This exercise tasks us with understanding the growth of a recursive process. We are told to draw the tree illustrating the process generated by the given procedure when evaluating (count-change 11).
The graph obtained is as follows:-
Note:- The Python code used to generate this tree is found here
Complexity of procedure
For the purpose of simplicity, let us denote n as the total amount and m as the number of types of coins (denominations).
Space Complexity: The space complexity of the procedure is fairly straightforward. The maximum space taken by the solution occurs when the greatest depth is reached. The greatest depth can be reached by first taking the branch where kinds-of-coins is reduced to 1, followed by reducing the amount by the smallest denomination of 1. This results in a space complexity of $\Theta(m + n)$. Since m is usually much smaller than n, we get an asymptotic space complexity of $\Theta(n)$.
Time Complexity: The time complexity can be determined by building up from simpler cases. Thanks to Bill the Lizard’s post here.
We start with a simple case where we have no coins, i.e. we call (cc amount 0). In this case, we get the following graph:-
A simple case where we have only one branch, yielding time complexity $\Theta(1)$.
Next, we move to one type of coin invoking (cc amount 1). To determine its time complexity, we look at the process generated for that case:-
Using the same notation as in the space complexity analysis, we can see that in this case (cc n 1) is called n times with successively smaller amounts, each call in turn invoking the (cc amount 0) case given above. This yields roughly $2n$ steps and a time complexity of $\Theta(n)$.
We next look at two types of coins invoking (cc amount 2). We once again look at the process generated for this case.
We can see that the main path calling (cc n 2) gets called $n/5$ times because of the second denomination of 5 cents. Each node in this path calls its own (cc n 1) sub-process. Summing them up, we obtain a total time of $T = n + (n - 5) + (n - 10) + \dots$. The number of terms in this summation is $n/5$, yielding a time complexity of $\Theta(n^2)$.
As such, we can see a pattern building up: each additional denomination multiplies the cost by another factor of n. Since we have five types of coins ultimately, we end up with a final time complexity for (cc amount 5) of $\Theta(n^5)$.
Exercise 1.15
As with the previous case, we try to analyse the complexity of the following function:-
To get the number of times the procedure p is applied in evaluating (sine 12.15), we look at its expansion:-
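For reference, the procedure from the book:

```scheme
(define (cube x) (* x x x))

(define (p x) (- (* 3 x) (* 4 (cube x))))

(define (sine angle)
  (if (not (> (abs angle) 0.1))
      angle
      (p (sine (/ angle 3.0)))))
```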
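A reconstruction of the expansion; the angle is divided by 3 until it falls below 0.1:

```scheme
(sine 12.15)
(p (sine 4.05))
(p (p (sine 1.35)))
(p (p (p (sine 0.45))))
(p (p (p (p (sine 0.15)))))
(p (p (p (p (p (sine 0.05))))))
(p (p (p (p (p 0.05)))))
```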
From the above, we can see that the procedure p is applied 5 times.
Order of growth
The time and space complexity should have the same order in this case because we do not see any branches in the evaluation tree.
As we can see, the number of steps taken by the evaluation of (sine a) increases by one when a is tripled. This hints at a logarithmic growth of $\Theta(\log a)$ for both time and space complexity.
Exercise 1.16
This exercise tasks us with rewriting the given fast-exponentiation function as a procedure utilizing an iterative process. It also introduces the concept of a loop invariant. In this case, we use an accumulator variable a which starts at 1 and evaluates to $b^n$ at the end of the iteration. Similar to the rule provided for odd and even powers, we make a new rule such that $a \cdot b^n$ does not vary between successive iterations.
We write the procedure in Scheme.
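One way to write it (a sketch; the internal iter and its argument order are my own choices):

```scheme
(define (fast-expt b n)
  ;; Invariant: a * b^n stays constant across iterations, so when
  ;; n reaches 0, a holds the value of the original b^n.
  (define (iter a b n)
    (cond ((= n 0) a)
          ((even? n) (iter a (* b b) (/ n 2)))  ; b^n = (b^2)^(n/2)
          (else (iter (* a b) b (- n 1)))))     ; b^n = b * b^(n-1)
  (iter 1 b n))
```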
Exercise 1.17
This exercise tasks us with writing fast integer multiplication using a method similar to fast exponentiation presented in this section.
Writing new helper functions halve and double, we have
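A sketch of the recursive process (the name fast-mult is my own):

```scheme
(define (double x) (* x 2))
(define (halve x) (/ x 2))

(define (fast-mult a b)
  (cond ((= b 0) 0)
        ((even? b) (double (fast-mult a (halve b))))  ; a*b = 2*(a*(b/2))
        (else (+ a (fast-mult a (- b 1))))))          ; a*b = a + a*(b-1)
```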
Exercise 1.18
In this exercise, we extend the solutions from the previous two exercises to create an iterative process for computing the multiplication of two numbers. In this case, we use an accumulator variable p to hold the intermediate product. We get an invariant of $a \cdot b + p$.
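A sketch of the iterative version, reusing double and halve from the previous exercise:

```scheme
(define (fast-mult-iter a b)
  ;; Invariant: a*b + p stays constant, so when b reaches 0,
  ;; p holds the product of the original arguments.
  (define (iter a b p)
    (cond ((= b 0) p)
          ((even? b) (iter (double a) (halve b) p))
          (else (iter a (- b 1) (+ p a)))))
  (iter a b 0))
```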
Exercise 1.19
In this interesting exercise, we are told to extend the solutions developed in the previous exercises to the computation of the Fibonacci series. To help with that, we develop a generic transformation $T_{pq}$:

$$a \leftarrow bq + aq + ap, \qquad b \leftarrow bp + aq$$

wherein $T_{01}$ is the special case denoting one step of transformation in generating the Fibonacci series. To accelerate this process, let us apply $T_{pq}$ twice to obtain a compact transformation $T_{p'q'}$ that does two steps in one.
Ultimately, we obtain $p' = p^2 + q^2$ and $q' = 2pq + q^2$.
Substituting these into the provided function template, we get the following code, which computes $\mathrm{Fib}(n)$ in $\Theta(\log n)$ time.
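The book's template with the two blanks filled in by p' and q':

```scheme
(define (fib n)
  (fib-iter 1 0 0 1 n))

(define (fib-iter a b p q count)
  (cond ((= count 0) b)
        ((even? count)
         (fib-iter a
                   b
                   (+ (* p p) (* q q))    ; p' = p^2 + q^2
                   (+ (* 2 p q) (* q q))  ; q' = 2pq + q^2
                   (/ count 2)))
        (else (fib-iter (+ (* b q) (* a q) (* a p))
                        (+ (* b p) (* a q))
                        p
                        q
                        (- count 1)))))
```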
Exercise 1.20
In this exercise, we revisit the applicative and normal-order evaluation strategies presented in Section 1.1 to trace the given function and determine how many times remainder is called.
Normal-order evaluation
In normal-order evaluation, we fully expand the expression first before evaluating.
As can be seen, the overly long evaluation chain evaluates remainder 18 times: 14 times while evaluating the if predicates (1 + 2 + 4 + 7 across the four reductions) and 4 more times in the final evaluation of a.
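An abbreviated reconstruction of the expansion (the full trace is too long to reproduce; the counts are what matter):

```scheme
(gcd 206 40)
;; predicate (= 40 0) → #f: 0 remainder calls
(gcd 40 (remainder 206 40))
;; predicate forces (remainder 206 40): 1 call, b → 6
(gcd (remainder 206 40)
     (remainder 40 (remainder 206 40)))
;; predicate forces 2 calls, b → 4
(gcd (remainder 40 (remainder 206 40))
     (remainder (remainder 206 40)
                (remainder 40 (remainder 206 40))))
;; predicate forces 4 calls, b → 2
;; the next predicate forces 7 calls, b → 0, so we return a,
;; which forces 4 more calls → 2
;; Total: 1 + 2 + 4 + 7 + 4 = 18 remainder calls
```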
Applicative-order evaluation
In applicative-order evaluation, we evaluate available sub-expressions before expanding further.
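This trace is short enough to reproduce in full:

```scheme
(gcd 206 40)
(gcd 40 (remainder 206 40))   ; 1st call → 6
(gcd 40 6)
(gcd 6 (remainder 40 6))      ; 2nd call → 4
(gcd 6 4)
(gcd 4 (remainder 6 4))       ; 3rd call → 2
(gcd 4 2)
(gcd 2 (remainder 4 2))       ; 4th call → 0
(gcd 2 0)
2
```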
As can be seen, remainder is only called 4 times in the applicative order.
Exercise 1.21
This exercise tasks us with finding the smallest divisors of 199, 1999 and 19999 using the smallest-divisor function given in the book.
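For completeness, the book's smallest-divisor and the three evaluations:

```scheme
(define (square x) (* x x))
(define (divides? a b) (= (remainder b a) 0))

(define (find-divisor n test-divisor)
  (cond ((> (square test-divisor) n) n)
        ((divides? test-divisor n) test-divisor)
        (else (find-divisor n (+ test-divisor 1)))))

(define (smallest-divisor n) (find-divisor n 2))

(smallest-divisor 199)    ; → 199, so 199 is prime
(smallest-divisor 1999)   ; → 1999, so 1999 is prime
(smallest-divisor 19999)  ; → 7, since 19999 = 7 × 2857
```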
Exercise 1.22
In this exercise, we try to measure the time taken for the prime? function to execute and verify that it indeed follows the expected $\Theta(\sqrt{n})$ time complexity. For that purpose, we use the built-in runtime function, which gives the time elapsed since the start of execution. The given timing code is built on top of the code used in the previous exercise as follows:-
We now write a function that finds the primes in a range and prints out the time taken to test each number.
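The timing harness, as given in the exercise (it assumes the prime? predicate from the book):

```scheme
(define (timed-prime-test n)
  (newline)
  (display n)
  (start-prime-test n (runtime)))

(define (start-prime-test n start-time)
  (if (prime? n)
      (report-prime (- (runtime) start-time))))

(define (report-prime elapsed-time)
  (display " *** ")
  (display elapsed-time))
```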
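One possible version (a sketch; the procedure name and the handling of even starting points are my own choices):

```scheme
;; Runs timed-prime-test on every odd number in [start, end].
(define (search-for-primes start end)
  (cond ((> start end) 'done)
        ((even? start) (search-for-primes (+ start 1) end))
        (else (timed-prime-test start)
              (search-for-primes (+ start 2) end))))
```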
The process of finding the prime numbers near the given range of 1000 to 1000000 was too quick to measure using the runtime function on the 2.2 GHz Intel Core 2 Duo system running Ubuntu 12.04. Thus, larger number ranges of 1e10 to 1e13 were used for this purpose.
Removing the lines for non-prime numbers and truncating after 3 prime numbers are generated, the following results were obtained:-
We take the median value of the three results for each query and compare the ratios.
The ratio of the time taken when the input is multiplied by 10 is approximately equal to $\sqrt{10}$. Thus, we can say that the algorithm has a time complexity of $\Theta(\sqrt{n})$.
Exercise 1.23
In this exercise, we modify the code used in the previous exercise so that the time taken by the smallest-divisor function is halved by testing only odd divisors.
For that purpose, we replace the find-divisor function with the following:-
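The replacement, following the hint in the exercise (square and divides? are unchanged from before):

```scheme
(define (next test-divisor)
  (if (= test-divisor 2)
      3
      (+ test-divisor 2)))

(define (find-divisor n test-divisor)
  (cond ((> (square test-divisor) n) n)
        ((divides? test-divisor n) test-divisor)
        (else (find-divisor n (next test-divisor)))))
```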
Rerunning the same code as before, we get the following results:-
Once again, we take the median time value and compute the ratio
As we can see, instead of the expected ratio of 2, we get a ratio of 1.73. This is probably because of the extra overhead incurred by replacing a simple (+ test-divisor 1) with the more complex next, which involves evaluating an if expression. This extra overhead prevents us from reaching a ratio of 2.
Exercise 1.24
In this exercise, we are tasked with determining the time complexity of the fast-prime? function, similar to the previous exercises. For that purpose, the following timing code is used:-
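A sketch of the modification, assuming the timed-prime-test harness from Exercise 1.22 and the book's fast-prime?; the hard-coded trial count matches the 10000 trials mentioned below:

```scheme
(define (start-prime-test n start-time)
  (if (fast-prime? n 10000)
      (report-prime (- (runtime) start-time))))
```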
For our exercise, 10000 trials were used for testing. The following times were observed.
As can be seen, when the number of digits is doubled, i.e. the number is squared, the time taken for the procedure to complete approximately doubles. Thus, the time complexity of the fast-prime? function is $\Theta(\log n)$.
Exercise 1.25
In this exercise, we are tasked with trying to understand why we should not compute exponentials directly for this particular problem.
When we compute (fast-expt base exp) directly, we have to deal with very large numbers when the argument exp is large. The numbers may grow far larger than what can be stored in a regular machine integer, and in these cases the computation may take a long while to complete.
Thus, we use the modular arithmetic identity $xy \bmod m = ((x \bmod m)(y \bmod m)) \bmod m$ to compute the final value without ever dealing with the large intermediate values.
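For contrast, this is what the book's expmod already does; the remainder is taken at every step, so no intermediate value ever exceeds $m^2$:

```scheme
(define (expmod base exp m)
  (cond ((= exp 0) 1)
        ((even? exp)
         (remainder (square (expmod base (/ exp 2) m)) m))
        (else
         (remainder (* base (expmod base (- exp 1) m)) m))))
```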
This idea is also mentioned in footnote 46 of the book.
Exercise 1.26
We are asked to explain why the following expmod function by Louis Reasoner is slow:-
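Louis's version, as given in the exercise:

```scheme
(define (expmod base exp m)
  (cond ((= exp 0) 1)
        ((even? exp)
         (remainder (* (expmod base (/ exp 2) m)
                       (expmod base (/ exp 2) m))
                    m))
        (else
         (remainder (* base (expmod base (- exp 1) m))
                    m))))
```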
As can be seen, the only modification is replacing (square (expmod base (/ exp 2) m)) with (* (expmod base (/ exp 2) m) (expmod base (/ exp 2) m)). This means that in the modified version, (expmod base (/ exp 2) m) is evaluated twice, since it appears twice.
Performing a simple time complexity analysis, we can see that in the original case, the time taken for computing $b^n \bmod m$ is simply the time taken for computing $b^{n/2} \bmod m$ plus some constant, leading to a time complexity of $\Theta(\log n)$. Whereas for the new code, the time taken to compute $b^n \bmod m$ is twice that of computing $b^{n/2} \bmod m$. This leads to a branching execution similar to the original Fibonacci example. The new process takes $\Theta(n)$ time.
Exercise 1.27
This exercise tasks us with verifying that Carmichael numbers fool the Fermat test. For that purpose, we write a function that takes an integer n and tests whether $a^n \equiv a \pmod{n}$ for all $a < n$.
Trying it on the given Carmichael numbers (561, 1105, 1729, 2465, 2821 and 6601), every one of them passes the test for all bases.
Thus, we can see that Carmichael numbers fool the Fermat test.
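A sketch (the name fermat-full? is my own; expmod is the book's):

```scheme
;; Tests a^n ≡ a (mod n) for every a in [1, n-1].
(define (fermat-full? n)
  (define (try-it a)
    (= (expmod a n n) a))
  (define (iter a)
    (cond ((= a n) #t)
          ((try-it a) (iter (+ a 1)))
          (else #f)))
  (iter 1))
```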
Exercise 1.28
In this exercise, we implement a modified form of the Fermat test, named the Miller-Rabin test, that cannot be fooled by Carmichael numbers. Basically, we check that $a^{n-1} \equiv 1 \pmod{n}$, albeit with checks for non-trivial square roots of 1 modulo n along the way. For that purpose, we modify the existing Fermat test.
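A sketch of the modification; the helper name is my own, but the trick of signalling with 0 is the standard approach for this exercise:

```scheme
;; expmod computes base^exp mod m, but returns 0 as soon as it
;; discovers a nontrivial square root of 1 modulo m, which makes
;; the final equality check below fail.
(define (expmod base exp m)
  (define (square-check x)
    (if (and (not (= x 1))
             (not (= x (- m 1)))
             (= (remainder (* x x) m) 1))
        0
        (remainder (* x x) m)))
  (cond ((= exp 0) 1)
        ((even? exp) (square-check (expmod base (/ exp 2) m)))
        (else (remainder (* base (expmod base (- exp 1) m)) m))))

;; Checks a^(n-1) ≡ 1 (mod n) for a random a in [1, n-1].
(define (miller-rabin-test n)
  (define (try-it a)
    (= (expmod a (- n 1) n) 1))
  (try-it (+ 1 (random (- n 1)))))
```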
Running this code on some random prime and non-prime numbers, we get the following:-
To verify that this test cannot be fooled, we try the Carmichael numbers and get the following:-
Thus, the Miller-Rabin test correctly handles Carmichael numbers too.