In this exercise, we are simply asked to determine the responses printed by the interpreter when the given expressions are evaluated. This is to understand the quotation mechanism.
When we evaluate (list 'a 'b 'c), the symbols a, b and c are quoted, so we get the list (a b c).
When we evaluate (list (list 'george)), george is quoted. Accounting for the nested list, we expect ((george)).
When we evaluate (cdr '((x1 x2) (y1 y2))), we get the list with the first element removed. We expect ((y1 y2)).
When we evaluate (cadr '((x1 x2) (y1 y2))), we get the car of the previous result, which is simply (y1 y2).
When we evaluate (pair? (car '(a short list))), we check if a is a pair which should result in #f.
When we evaluate (memq 'red '((red shoes) (blue socks))), we should get #f since no member of the top-level list is directly equal to red.
When we evaluate (memq 'red '(red shoes blue socks)), we should get the whole list back because the first element is the one we are looking for. We expect (red shoes blue socks).
Now, let us actually run it in the REPL:-
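```scheme
;; A transcript along these lines (exact prompt/formatting depends on the interpreter):
(list 'a 'b 'c)                           ; => (a b c)
(list (list 'george))                     ; => ((george))
(cdr '((x1 x2) (y1 y2)))                  ; => ((y1 y2))
(cadr '((x1 x2) (y1 y2)))                 ; => (y1 y2)
(pair? (car '(a short list)))             ; => #f
(memq 'red '((red shoes) (blue socks)))   ; => #f
(memq 'red '(red shoes blue socks))       ; => (red shoes blue socks)
```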
Exercise 2.54
In this exercise, we are tasked with writing a procedure called equal? which can be used to recursively determine the equality of two lists of symbols. We are given eq? which evaluates to true when the two arguments are the same symbols.
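A sketch of such a procedure:-

```scheme
;; Two items are equal? if they are the same symbol (eq?), or if both
;; are pairs whose cars are equal? and whose cdrs are equal?.
(define (equal? a b)
  (cond ((and (pair? a) (pair? b))
         (and (equal? (car a) (car b))
              (equal? (cdr a) (cdr b))))
        ((or (pair? a) (pair? b)) #f)
        (else (eq? a b))))

(equal? '(this is a list) '(this is a list))    ; => #t
(equal? '(this is a list) '(this (is a) list))  ; => #f
```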
Exercise 2.55
When Eva Lu Ator typed (car ''abracadabra), the REPL printed back quote. This exercise asks us to explain why.
When we type the original quoted expression in the REPL, we get the following:-
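```scheme
''abracadabra
; => (quote abracadabra)   ; some REPLs print this as 'abracadabra

(car ''abracadabra)
; => quote
```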
Thus, by using two quotes, we also quote the inner quote. The inner quote therefore does not get evaluated and shows up as data. When we take the car of this list, we simply get the symbol quote back.
Exercise 2.56
In this exercise, we are tasked with extending the deriv program given in the book to handle exponentiations of the form:-
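d(u^n)/dx = n * u^(n-1) * (du/dx)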
We can do that by first implementing the data structure for exponentiation.
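A sketch, using the symbol ** for exponentiation and the book's =number? helper, with the obvious simplifications for exponents of 0 and 1 built in:-

```scheme
(define (exponentiation? x)
  (and (pair? x) (eq? (car x) '**)))

(define (base e) (cadr e))
(define (exponent e) (caddr e))

(define (make-exponentiation b n)
  (cond ((=number? n 0) 1)                        ; b^0 = 1
        ((=number? n 1) b)                        ; b^1 = b
        ((and (number? b) (number? n)) (expt b n))
        (else (list '** b n))))
```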
Let us now include this code in the deriv expression.
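```scheme
;; New clause inside deriv's cond, after the product? case:
;;   d(u^n)/dx = n * u^(n-1) * du/dx
((exponentiation? exp)
 (make-product
  (make-product (exponent exp)
                (make-exponentiation (base exp)
                                     (make-sum (exponent exp) -1)))
  (deriv (base exp) var)))
```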
Let us test this code:-
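```scheme
;; Results assume the simplifications built into the constructors above.
(deriv '(** x 3) 'x)        ; => (* 3 (** x 2))
(deriv '(+ (** x 2) x) 'x)  ; => (+ (* 2 x) 1)
```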
Exercise 2.57
In this exercise, we are tasked with extending the differentiation program to handle sums and products of an arbitrary number of terms.
We can do this by simply modifying the selector functions to return another sum/product expression if there are more than two terms.
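Only the augend and multiplicand selectors need to change; a sketch:-

```scheme
(define (augend s)
  (if (null? (cdddr s))
      (caddr s)               ; exactly two terms: (+ a b)
      (cons '+ (cddr s))))    ; more terms: the rest is itself a sum

(define (multiplicand p)
  (if (null? (cdddr p))
      (caddr p)
      (cons '* (cddr p))))
```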
Let us test this:-
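```scheme
(deriv '(+ x y z) 'x)         ; => 1
(deriv '(* x y (+ x 3)) 'x)   ; => (+ (* x y) (* y (+ x 3)))
```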
Exercise 2.58
In this exercise, we are required to modify the deriv program to work on a more natural form of expressing equations.
Differentiation of infix expressions
In the first part of this exercise, we modify the differentiation process to work on expressions in infix form. So as not to fizzle our brains, we are told first to approach this in a form where each operator only has two operands. Thus, expressions are naturally parenthesized in terms of precedence and list decompositions give the required sub-expressions.
Further, this task can be accomplished by only modifying the lower-level representation of expressions. The higher-level deriv function need not be touched. First, let us convert the representations of sum and product.
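```scheme
;; A sketch for two-operand infix expressions such as (x + (3 * y));
;; the operator now sits in the middle position.
(define (sum? x) (and (pair? x) (eq? (cadr x) '+)))
(define (addend s) (car s))
(define (augend s) (caddr s))

(define (make-sum a1 a2)
  (cond ((=number? a1 0) a2)
        ((=number? a2 0) a1)
        ((and (number? a1) (number? a2)) (+ a1 a2))
        (else (list a1 '+ a2))))

(define (product? x) (and (pair? x) (eq? (cadr x) '*)))
(define (multiplier p) (car p))
(define (multiplicand p) (caddr p))

(define (make-product m1 m2)
  (cond ((or (=number? m1 0) (=number? m2 0)) 0)
        ((=number? m1 1) m2)
        ((=number? m2 1) m1)
        ((and (number? m1) (number? m2)) (* m1 m2))
        (else (list m1 '* m2))))
```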
Once that is done, let us test the code with some test cases:-
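```scheme
(deriv '(x + (3 * (x + (y + 2)))) 'x)   ; => 4
(deriv '((x * y) * (x + 3)) 'x)         ; => ((x * y) + (y * (x + 3)))
```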
Differentiation of standard algebraic notation
In this part, we go all the way and allow expressions in standard algebraic notation in the deriv program. This program should also take operator precedence into consideration when computing.
As before, the logic of the deriv procedure need not be altered. We are also told that we only have multiplication and addition operations, which simplifies the problem substantially.
First, let us look at parsing sum expressions. When given an expression containing a '+ symbol at the top level (disregarding the sub-expressions), we split the expression into two parts around that symbol. For example, (x + 3 * (x + y + 2)) splits into x and 3 * (x + y + 2). We can safely do this because of the lower precedence of addition compared to multiplication. With this in mind, let us write the selectors and predicates for sum.
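In the sketch below, before-op, after-op and singleton are helper names introduced here: they split a list around the first top-level occurrence of an operator symbol and unwrap one-element slices back into bare sub-expressions.

```scheme
(define (singleton exp)
  (if (null? (cdr exp)) (car exp) exp))

(define (before-op op exp)
  (if (eq? (car exp) op)
      '()
      (cons (car exp) (before-op op (cdr exp)))))

(define (after-op op exp)
  (if (eq? (car exp) op)
      (cdr exp)
      (after-op op (cdr exp))))

(define (sum? x) (and (pair? x) (memq '+ x)))
(define (addend s) (singleton (before-op '+ s)))
(define (augend s) (singleton (after-op '+ s)))

(define (make-sum a1 a2)
  (cond ((=number? a1 0) a2)
        ((=number? a2 0) a1)
        ((and (number? a1) (number? a2)) (+ a1 a2))
        (else (list a1 '+ a2))))
```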
Let us now redefine the definition for multiplication. A given expression can only be considered a product expression when two conditions are fulfilled:-
The expression contains a '* symbol (disregarding the sub-expressions).
The expression does not contain a '+ symbol (disregarding the sub-expressions).
For example, (x * 3 * (x + 2)) can be considered a product expression, whereas (x * 3 + x) cannot, because it is a sum expression of the form (x * 3) + x. With these ideas in mind, let us write the code for products:-
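```scheme
;; The multiplier and multiplicand reuse the same splitting helpers.
(define (product? x)
  (and (pair? x)
       (memq '* x)
       (not (memq '+ x))))     ; a top-level '+ makes it a sum instead

(define (multiplier p) (singleton (before-op '* p)))
(define (multiplicand p) (singleton (after-op '* p)))

(define (make-product m1 m2)
  (cond ((or (=number? m1 0) (=number? m2 0)) 0)
        ((=number? m1 1) m2)
        ((=number? m2 1) m1)
        ((and (number? m1) (number? m2)) (* m1 m2))
        (else (list m1 '* m2))))
```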
Let us test the final program out:-
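```scheme
(deriv '(x + 3 * (x + y + 2)) 'x)   ; => 4
(deriv '(x * 3 * (x + 2)) 'x)       ; => ((x * 3) + (3 * (x + 2)))
```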
Exercise 2.59
In this exercise, we are asked to implement union-set, a procedure used to find the union of two sets represented as unordered lists.
We can simply take each element of the first set and use adjoin-set to add it to the second set.
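In code:-

```scheme
;; Adjoin each element of set1 onto set2; adjoin-set already skips
;; elements that are present, so the result contains no duplicates.
(define (union-set set1 set2)
  (if (null? set1)
      set2
      (adjoin-set (car set1) (union-set (cdr set1) set2))))

(union-set '(1 2 3) '(2 3 4))   ; => (1 2 3 4)
```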
Exercise 2.60
In this exercise, we are asked to create a set representation that allows us to have duplicate elements in a set. It can be done using the following code:-
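```scheme
;; element-of-set? and intersection-set are unchanged from the book;
;; adjoin-set and union-set no longer need a membership check.
(define (element-of-set? x set)
  (cond ((null? set) #f)
        ((equal? x (car set)) #t)
        (else (element-of-set? x (cdr set)))))

(define (adjoin-set x set) (cons x set))

(define (union-set set1 set2) (append set1 set2))

(define (intersection-set set1 set2)
  (cond ((or (null? set1) (null? set2)) '())
        ((element-of-set? (car set1) set2)
         (cons (car set1) (intersection-set (cdr set1) set2)))
        (else (intersection-set (cdr set1) set2))))
```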
Let us test this,
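```scheme
(adjoin-set 2 '(2 3 4))               ; => (2 2 3 4)
(union-set '(1 2 3) '(2 3 4))         ; => (1 2 3 2 3 4)
(intersection-set '(1 2 3) '(2 3 4))  ; => (2 3)
```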
As can be seen, we get the expected results.
This kind of representation of sets is used for different use cases compared to the previous representation without any duplicates. One example is if we want to keep track of how many times each element occurs instead of just which elements are present.
Compared to the previous representation, the efficiency is as follows:-

| Operation | No Duplicates | With Duplicates |
| --- | --- | --- |
| element-of-set? | Θ(n) | Θ(n) |
| adjoin-set | Θ(n) | Θ(1) |
| union-set | Θ(n²) | Θ(n) |
| intersection-set | Θ(n²) | Θ(n²) |
We can see that having duplicates simplifies adjoin and union operations. This is because we do not perform element-of-set? anymore for these operations. However, if there are a lot of duplicate elements, the time complexity would go up for the union-set and intersection-set procedures because n would tend to be a high value for the lists with duplicate elements.
Exercise 2.61
In this exercise, we look at creating an implementation of adjoin-set for sets represented using ordered lists.
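A sketch:-

```scheme
(define (adjoin-set x set)
  (cond ((null? set) (list x))
        ((= x (car set)) set)             ; already present
        ((< x (car set)) (cons x set))    ; found its place; stop scanning
        (else (cons (car set) (adjoin-set x (cdr set))))))

(adjoin-set 3 '(1 2 4 5))   ; => (1 2 3 4 5)
```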
This function is similar to the element-of-set? function given in the book.
Similar to the element-of-set? function from before, this function will take half as many steps on average compared to the unordered version.
Exercise 2.62
In this exercise, we need to implement a Θ(n)-time union-set for ordered lists.
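A sketch:-

```scheme
(define (union-set set1 set2)
  (cond ((null? set1) set2)
        ((null? set2) set1)
        (else
         (let ((x1 (car set1)) (x2 (car set2)))
           (cond ((= x1 x2) (cons x1 (union-set (cdr set1) (cdr set2))))
                 ((< x1 x2) (cons x1 (union-set (cdr set1) set2)))
                 (else      (cons x2 (union-set set1 (cdr set2)))))))))

(union-set '(1 3 5) '(2 3 6))   ; => (1 2 3 5 6)
```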
This is a single-sweep algorithm similar to the merging step in merge sort. It has Θ(n) time complexity.
Exercise 2.63
In this exercise, we are given two recursive procedures for converting trees to lists. We are asked to compare both of them. Here is the given code:-
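```scheme
;; Reproduced from the book; entry, left-branch and right-branch are
;; the tree selectors defined there.
(define (tree->list-1 tree)
  (if (null? tree)
      '()
      (append (tree->list-1 (left-branch tree))
              (cons (entry tree)
                    (tree->list-1 (right-branch tree))))))

(define (tree->list-2 tree)
  (define (copy-to-list tree result-list)
    (if (null? tree)
        result-list
        (copy-to-list (left-branch tree)
                      (cons (entry tree)
                            (copy-to-list (right-branch tree)
                                          result-list)))))
  (copy-to-list tree '()))
```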
Let us test them with some example trees:-
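```scheme
;; Two of the trees from figure 2.16, written directly in list form,
;; where a tree is (entry left right).
(define tree-a '(7 (3 (1 () ()) (5 () ())) (9 () (11 () ()))))
(define tree-b '(3 (1 () ()) (7 (5 () ()) (9 () (11 () ())))))

(tree->list-1 tree-a)   ; => (1 3 5 7 9 11)
(tree->list-2 tree-a)   ; => (1 3 5 7 9 11)
(tree->list-1 tree-b)   ; => (1 3 5 7 9 11)
(tree->list-2 tree-b)   ; => (1 3 5 7 9 11)
```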
We get the same results for both cases. Now let us look deeper into each version.
Given a tree node, tree->list-1 recursively accumulates the elements of the left branch and appends them to the list created by consing the entry onto the accumulation of the right branch. We have already seen that the append operation takes Θ(n) time. For a balanced tree containing n elements, T(n) = 2T(n/2) + Θ(n/2). Using the Master theorem, we can see that the time complexity is Θ(n log n).
As for tree->list-2, the process keeps a running list of all the values accumulated so far and returns it once all nodes have been visited. At a tree node, it first copies the elements of the right branch onto the given list using cons, adds its own entry, and then hands the result to the left branch. By using cons, it follows the more efficient method of collecting the larger elements at the deeper end of the list first. For each node, we only call cons, which takes Θ(1) time. In total, for n nodes, this procedure takes Θ(n) time.
Exercise 2.64
In this exercise, we are given the code that generates a balanced binary tree given a list of elements in ascending order.
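Here it is, reproduced from the book:-

```scheme
(define (list->tree elements)
  (car (partial-tree elements (length elements))))

(define (partial-tree elts n)
  (if (= n 0)
      (cons '() elts)
      (let ((left-size (quotient (- n 1) 2)))
        (let ((left-result (partial-tree elts left-size)))
          (let ((left-tree (car left-result))
                (non-left-elts (cdr left-result))
                (right-size (- n (+ left-size 1))))
            (let ((this-entry (car non-left-elts))
                  (right-result (partial-tree (cdr non-left-elts)
                                              right-size)))
              (let ((right-tree (car right-result))
                    (remaining-elts (cdr right-result)))
                (cons (make-tree this-entry left-tree right-tree)
                      remaining-elts))))))))
```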
The diagram of the tree that was produced is given below:-
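For (list->tree '(1 3 5 7 9 11)) the result is (5 (1 () (3 () ())) (9 (7 () ()) (11 () ()))), i.e.:

```
        5
      /   \
     1     9
      \   / \
       3 7   11
```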
list->tree works by delegating its job to partial-tree, which takes two arguments: a list elts and a number n. It returns a pair whose first element is the balanced tree constructed from the first n members of elts and whose second element is a list of the remaining members. The initial call requests a tree built from all the elements in the list.
Now, for each call to partial-tree, we first compute the number of elements needed in the left branch; it is simply the first ⌊(n-1)/2⌋ elements of the list. Thus, by calling (partial-tree elts left-size), we get a balanced tree for the left branch along with the elements not used to construct it. Next, we take the first of the remaining elements to be the node entry, then construct the right branch by calling (partial-tree (cdr non-left-elts) right-size). Having obtained all three, we construct a tree using (make-tree this-entry left-tree right-tree), cons it onto the elements remaining after making the right branch, and return this pair.
At every element, we call cons, car, cdr and make-tree a fixed number of times, and each of these operations takes Θ(1) time. Thus, the total time complexity for n elements is Θ(n).
Exercise 2.65
This exercise tasks us with implementing Θ(n) versions of union-set and intersection-set for sets implemented as balanced binary trees. This can be done very simply. First, we convert each balanced binary tree to a list in ascending order. Next, we perform the union or intersection on the ordered lists. Finally, we convert the ordered list back to a balanced tree. Each of these steps can be completed in Θ(n) time using the procedures we have developed earlier. The first step can be done using tree->list-2 and the last one using list->tree.
The code is as follows:-
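```scheme
;; union-set-list is the ordered-list union from Exercise 2.62 and
;; intersection-set-list is the book's ordered-list intersection
;; (names assumed here so they don't clash with the tree versions).
(define (union-set tree1 tree2)
  (list->tree (union-set-list (tree->list-2 tree1)
                              (tree->list-2 tree2))))

(define (intersection-set tree1 tree2)
  (list->tree (intersection-set-list (tree->list-2 tree1)
                                     (tree->list-2 tree2))))
```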
Exercise 2.66
In this exercise, we are told to implement a lookup procedure that essentially finds a record from a set of records structured as a binary tree ordered by the numerical value of the keys.
The following code can be used for that purpose:-
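```scheme
;; A sketch: the records form a binary tree ordered by key, and `key`
;; (a selector assumed here) extracts the numerical key from a record.
(define (lookup given-key set-of-records)
  (if (null? set-of-records)
      #f
      (let ((record (entry set-of-records)))
        (cond ((= given-key (key record)) record)
              ((< given-key (key record))
               (lookup given-key (left-branch set-of-records)))
              (else
               (lookup given-key (right-branch set-of-records)))))))
```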
This procedure can retrieve a record in Θ(log n) time for a balanced tree.
Exercise 2.67
This exercise puts the given Huffman decoding procedure through its paces. Thus, it's a simple evaluation in the REPL.
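The sample tree and message given in the book, and the decoding:-

```scheme
(define sample-tree
  (make-code-tree (make-leaf 'a 4)
                  (make-code-tree
                   (make-leaf 'b 2)
                   (make-code-tree (make-leaf 'd 1)
                                   (make-leaf 'c 1)))))

(define sample-message '(0 1 1 0 0 1 0 1 0 1 1 1 0))

(decode sample-message sample-tree)   ; => (a d a b b c a)
```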
Thus the given binary code is decoded to (a d a b b c a). Looking at the sample Huffman tree defined above, this makes sense.
Exercise 2.68
This exercise deals with the encode procedure, which takes a message and a Huffman tree and evaluates to the list of bits that gives the encoded message. We are given the initial scaffolding:-
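```scheme
(define (encode message tree)
  (if (null? message)
      '()
      (append (encode-symbol (car message) tree)
              (encode (cdr message) tree))))
```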
We are told to write encode-symbol, a procedure that takes a symbol and the tree and returns the list of bits representing that symbol.
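One possible encode-symbol, using memq to check which branch's symbol list contains the symbol (a sketch):-

```scheme
(define (encode-symbol sym tree)
  (cond ((leaf? tree) '())
        ((memq sym (symbols (left-branch tree)))
         (cons 0 (encode-symbol sym (left-branch tree))))
        ((memq sym (symbols (right-branch tree)))
         (cons 1 (encode-symbol sym (right-branch tree))))
        (else (error "symbol not in tree:" sym))))

(encode '(a d a b b c a) sample-tree)
; => (0 1 1 0 0 1 0 1 0 1 1 1 0)
```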
As can be seen, the encode function works perfectly in re-encoding the decoded message.
Exercise 2.69
In this exercise, we are tasked with generating a Huffman encoding tree, given a list of symbol-frequency pairs. We are given the following starting code:-
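```scheme
(define (generate-huffman-tree pairs)
  (successive-merge (make-leaf-set pairs)))
```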
We initially create a set of leaves in ascending order of frequency using make-leaf-set. Then, we call successive-merge which we need to write.
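One way to write it: repeatedly merge the two lightest elements of the ordered set with make-code-tree and put the result back with adjoin-set, until a single tree remains (a sketch):-

```scheme
(define (successive-merge leaf-set)
  (if (null? (cdr leaf-set))
      (car leaf-set)          ; only one tree left: we are done
      (successive-merge
       (adjoin-set (make-code-tree (car leaf-set) (cadr leaf-set))
                   (cddr leaf-set)))))
```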
Exercise 2.70
In this exercise, we exercise our new-found compression skills by encoding the lyrics of a song using Huffman encoding. We are given the frequencies of the most commonly used symbols in 1950s rock songs.
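With some made-up names for the data (rock-pairs, rock-tree, rock-song), the frequencies and lyrics given in the exercise can be encoded like this:-

```scheme
(define rock-pairs
  '((a 2) (boom 1) (get 2) (job 2) (na 16) (sha 3) (wah 1) (yip 9)))

(define rock-tree (generate-huffman-tree rock-pairs))

(define rock-song
  '(get a job sha na na na na na na na na
    get a job sha na na na na na na na na
    wah yip yip yip yip yip yip yip yip yip
    sha boom))

(length rock-song)                      ; => 36 words
(length (encode rock-song rock-tree))   ; => 84 bits
```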
We have our final encoded song, which is 84 bits long. If encoded with a fixed-length code, we would require 3 bits per word to cover all 8 possible words, or 108 bits for the 36-word song.
As can be seen, with Huffman encoding, we have compressed the song by 24 bits (22% compression).
Exercise 2.71
In this exercise, we are told to examine the Huffman tree for an alphabet of n symbols with relative frequencies 1, 2, 4, …, 2^(n-1), sketching the trees for n = 5 and n = 10.
The following are the Huffman trees that were generated:-
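They can be regenerated with generate-huffman-tree; for example, for n = 5 (the symbol names a1 … a5 are made up here):-

```scheme
(generate-huffman-tree '((a1 1) (a2 2) (a3 4) (a4 8) (a5 16)))
;; The resulting tree is completely unbalanced: a5 hangs directly off
;; the root, and each further level holds the next most frequent symbol.
```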
We can see that in both cases it takes 1 bit to encode the most frequent symbol and n-1 bits to encode the least frequent symbol. Also, both Huffman trees have the same shape: the deepest configuration possible. This is because for any symbol with frequency x, the sum of the frequencies of all the symbols with lower frequencies is x-1, which is smaller. This is due to the frequencies being powers of two.
Exercise 2.72
In this exercise we are asked to evaluate the order of growth in the number of steps required to evaluate a symbol. Since this is a difficult question to answer for a general Huffman tree, we are asked to give the order of growth for special cases where the Huffman trees look like the ones created in the previous exercise.
Here is the code we use for finding a symbol in a tree:-
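```scheme
;; Essentially the encode-symbol from Exercise 2.68, with the membership
;; test written out explicitly as contains-symbol? (a memq-like scan).
(define (contains-symbol? sym symbol-list)
  (cond ((null? symbol-list) #f)
        ((eq? sym (car symbol-list)) #t)
        (else (contains-symbol? sym (cdr symbol-list)))))

(define (encode-symbol sym tree)
  (cond ((leaf? tree) '())
        ((contains-symbol? sym (symbols (left-branch tree)))
         (cons 0 (encode-symbol sym (left-branch tree))))
        ((contains-symbol? sym (symbols (right-branch tree)))
         (cons 1 (encode-symbol sym (right-branch tree))))
        (else (error "symbol not in tree:" sym))))
```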
In this code, contains-symbol? is similar in complexity to memq. For a list of n elements, it takes Θ(n) time. Moving down to the next branch is a trivial operation that takes constant time.
Now, when we try to encode the most frequent symbol in the Huffman tree, we simply run contains-symbol? on two lists totalling n symbols in length. The right branch is immediately identified and, it being a leaf, the encoding is returned immediately. Thus, encoding the most frequent symbol has Θ(n) growth.
Finally, considering the symbol with the least frequency, we first run contains-symbol? at the root on a list with n entries. Unsuccessful, we go down the left branch and run the process again on a list with n-1 entries. This goes on until the bottom, where we scan a list of 2 entries. The total work is n + (n-1) + ⋯ + 2 = Θ(n²). Thus, encoding the symbol of lowest frequency has a growth of Θ(n²).