Structure and Interpretation of Computer Programs

Authors: Harold Abelson and Gerald Jay Sussman with Julie Sussman

1.2  Procedures and the Processes They Generate

We have now considered the elements of programming: We have used
primitive arithmetic operations, we have combined these operations, and
we have abstracted these composite operations by defining them as compound
procedures. But that is not enough to enable us to say that we know
how to program. Our situation is analogous to that of someone who has
learned the rules for how the pieces move in chess but knows nothing
of typical openings, tactics, or strategy. Like the novice chess
player, we don't yet know the common patterns of usage in the domain.
We lack the knowledge of which moves are worth making (which
procedures are worth defining). We lack the experience to predict the
consequences of making a move (executing a procedure).

The ability to visualize the consequences of the actions under
consideration is crucial to becoming an expert programmer, just as it
is in any synthetic, creative activity. In becoming an expert
photographer, for example, one must learn how to look at a scene and
know how dark each region will appear on a print for each possible
choice of exposure and development conditions. Only then can one
reason backward, planning framing, lighting, exposure, and development
to obtain the desired effects. So it is with programming, where we
are planning the course of action to be taken by a process and where
we control the process by means of a program. To become experts, we
must learn to visualize the processes generated by various types of
procedures. Only after we have developed such a skill can we learn
to reliably construct programs that exhibit the desired behavior.

A procedure is a pattern for the local evolution of a computational
process. It specifies how each stage of the process is built upon the
previous stage. We would like to be able to make statements about the
overall, or global, behavior of a process whose local evolution has
been specified by a procedure. This is very difficult to do in
general, but we can at least try to describe some typical patterns of
process evolution.

In this section we will examine some common “shapes” for processes
generated by simple procedures. We will also investigate the
rates at which these processes consume the important computational
resources of time and space. The procedures we will consider
are very simple. Their role is like that played by test patterns in
photography: as oversimplified prototypical patterns, rather than
practical examples in their own right.

1.2.1  Linear Recursion and Iteration
Figure 1.3:  A linear recursive process for computing 6!.

We begin by considering the factorial function, defined by

n! = n · (n - 1) · (n - 2) ⋯ 3 · 2 · 1

There are many ways to compute factorials. One way is to make use of
the observation that n! is equal to n times (n - 1)! for any positive
integer n:

n! = n · [(n - 1) · (n - 2) ⋯ 3 · 2 · 1] = n · (n - 1)!

Thus, we can compute n! by computing (n - 1)! and multiplying the
result by n. If we add the stipulation that 1! is equal to 1, this
observation translates directly into a procedure:

(define (factorial n)
  (if (= n 1)
      1
      (* n (factorial (- n 1)))))

We can use the substitution model of section 1.1.5 to watch this
procedure in action computing 6!, as shown in figure 1.3.
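
Figure 1.3 is not reproduced in this copy, but the trace it depicts
can be sketched directly from the substitution model; each line below
follows from the one above it by the rules of section 1.1.5:

(factorial 6)
(* 6 (factorial 5))
(* 6 (* 5 (factorial 4)))
(* 6 (* 5 (* 4 (factorial 3))))
(* 6 (* 5 (* 4 (* 3 (factorial 2)))))
(* 6 (* 5 (* 4 (* 3 (* 2 (factorial 1))))))
(* 6 (* 5 (* 4 (* 3 (* 2 1)))))
(* 6 (* 5 (* 4 (* 3 2))))
(* 6 (* 5 (* 4 6)))
(* 6 (* 5 24))
(* 6 120)
720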

Now let's take a different perspective on computing factorials. We
could describe a rule for computing n! by specifying that we first
multiply 1 by 2, then multiply the result by 3, then by 4, and so on
until we reach n. More formally, we maintain a running product,
together with a counter that counts from 1 up to n. We can describe
the computation by saying that the counter and the product
simultaneously change from one step to the next according to the rule

product ⟵ counter · product

counter ⟵ counter + 1

and stipulating that n! is the value of the product when the counter
exceeds n.

Figure 1.4:  A linear iterative process for computing 6!.

Once again, we can recast our description as a procedure for computing
factorials:

(define (factorial n)
  (fact-iter 1 1 n))
(define (fact-iter product counter max-count)
  (if (> counter max-count)
      product
      (fact-iter (* counter product)
                 (+ counter 1)
                 max-count)))

As before, we can use the substitution model to visualize the process
of computing 6!, as shown in figure 1.4.
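
Figure 1.4 is likewise not reproduced here; the trace it depicts can
be sketched in the same way:

(factorial 6)
(fact-iter 1 1 6)
(fact-iter 1 2 6)
(fact-iter 2 3 6)
(fact-iter 6 4 6)
(fact-iter 24 5 6)
(fact-iter 120 6 6)
(fact-iter 720 7 6)
720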

Compare the two processes. From one point of view, they seem hardly
different at all. Both compute the same mathematical function on the
same domain, and each requires a number of steps proportional to n to
compute n!. Indeed, both processes even carry out the same sequence
of multiplications, obtaining the same sequence of partial products.
On the other hand, when we consider the “shapes” of the two
processes, we find that they evolve quite differently.

Consider the first process. The substitution model reveals a shape of
expansion followed by contraction, indicated by the arrow in
figure 1.3. The expansion occurs as the process builds up a chain of
deferred operations (in this case, a chain of multiplications). The
contraction occurs as the operations are actually performed. This
type of process, characterized by a chain of deferred operations, is
called a recursive process. Carrying out this process requires that
the interpreter keep track of the operations to be performed later
on. In the computation of n!, the length of the chain of deferred
multiplications, and hence the amount of information needed to keep
track of it, grows linearly with n (is proportional to n), just like
the number of steps. Such a process is called a linear recursive
process.

By contrast, the second process does not grow and shrink. At each
step, all we need to keep track of, for any n, are the current values
of the variables product, counter, and max-count. We call this an
iterative process. In general, an iterative process is one whose
state can be summarized by a fixed number of state variables,
together with a fixed rule that describes how the state variables
should be updated as the process moves from state to state and an
(optional) end test that specifies conditions under which the process
should terminate. In computing n!, the number of steps required grows
linearly with n. Such a process is called a linear iterative process.

The contrast between the two processes can be seen in another way. In
the iterative case, the program variables provide a complete
description of the state of the process at any point. If we stopped
the computation between steps, all we would need to do to resume the
computation is to supply the interpreter with the values of the three
program variables. Not so with the recursive process. In this case
there is some additional “hidden” information, maintained by the
interpreter and not contained in the program variables, which
indicates “where the process is” in negotiating the chain of
deferred operations. The longer the chain, the more information must
be maintained.

In contrasting iteration and recursion, we must be careful not to
confuse the notion of a recursive process with the notion of a
recursive procedure. When we describe a procedure as recursive, we
are referring to the syntactic fact that the procedure definition
refers (either directly or indirectly) to the procedure itself. But
when we describe a process as following a pattern that is, say,
linearly recursive, we are speaking about how the process evolves,
not about the syntax of how a procedure is written. It may seem
disturbing that we refer to a recursive procedure such as fact-iter
as generating an iterative process. However, the process really is
iterative: Its state is captured completely by its three state
variables, and an interpreter need keep track of only three variables
in order to execute the process.

One reason that the distinction between process and procedure may be
confusing is that most implementations of common languages (including
Ada, Pascal, and C) are designed in such a way that the
interpretation of any recursive procedure consumes an amount of
memory that grows with the number of procedure calls, even when the
process described is, in principle, iterative. As a consequence,
these languages can describe iterative processes only by resorting to
special-purpose “looping constructs” such as do, repeat, until, for,
and while. The implementation of Scheme we shall consider in
chapter 5 does not share this defect. It will execute an iterative
process in constant space, even if the iterative process is described
by a recursive procedure. An implementation with this property is
called tail-recursive. With a tail-recursive implementation,
iteration can be expressed using the ordinary procedure call
mechanism, so that special iteration constructs are useful only as
syntactic sugar.
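
To make the point concrete, here is a small illustrative sketch (not
from the text): a “loop” written with nothing but an ordinary
procedure call. Because the recursive call is in tail position (it is
the entire result of the procedure, with no deferred operation
wrapped around it), a tail-recursive implementation can run it in
constant space for any n:

(define (count-down n)
  (if (= n 0)
      'done
      (count-down (- n 1))))   ; tail call: nothing left to do afterward

(count-down 1000000)           ; evaluates to done, in constant space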

Exercise 1.9.  Each of the following two procedures defines a method
for adding two positive integers in terms of the procedures inc,
which increments its argument by 1, and dec, which decrements its
argument by 1.

(define (+ a b)
  (if (= a 0)
      b
      (inc (+ (dec a) b))))
(define (+ a b)
  (if (= a 0)
      b
      (+ (dec a) (inc b))))

Using the substitution model, illustrate the process generated by
each procedure in evaluating (+ 4 5). Are these processes iterative
or recursive?

Exercise 1.10.  The following procedure computes a mathematical
function called Ackermann's function.

(define (A x y)
  (cond ((= y 0) 0)
        ((= x 0) (* 2 y))
        ((= y 1) 2)
        (else (A (- x 1)
                 (A x (- y 1))))))

What are the values of the following expressions?

(A 1 10)
(A 2 4)
(A 3 3)

Consider the following procedures, where A is the procedure defined
above:

(define (f n) (A 0 n))
(define (g n) (A 1 n))
(define (h n) (A 2 n))
(define (k n) (* 5 n n))

Give concise mathematical definitions for the functions computed by
the procedures f, g, and h for positive integer values of n. For
example, (k n) computes 5n².

1.2.2  Tree Recursion

Another common pattern of computation is called tree recursion. As
an example, consider computing the sequence of Fibonacci numbers, in
which each number is the sum of the preceding two:

0, 1, 1, 2, 3, 5, 8, 13, 21, …

In general, the Fibonacci numbers can be defined by the rule

Fib(n) = 0                            if n = 0
Fib(n) = 1                            if n = 1
Fib(n) = Fib(n - 1) + Fib(n - 2)      otherwise

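A direct translation of this rule into a procedure gives a sketch of
the straightforward tree-recursive version:

(define (fib n)
  (cond ((= n 0) 0)
        ((= n 1) 1)
        (else (+ (fib (- n 1))      ; each call spawns two more calls,
                 (fib (- n 2))))))  ; so the process evolves like a tree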