
Author: Raph Koster


Theory of Fun for Game Design

Filling in blanks and not seeing your nose
: Some fun experiments to demonstrate blind spots and the brain filling in known data can be found at http://faculty.washington.edu/chudler/chvision.html. Many popular optical illusions are based on the fact that the brain makes assumptions about what we are seeing.

The brain …
: Steven Johnson’s book Mind Wide Open (Scribner, 2004) is a wonderful excursion into the mysteries of the human mind.

Cognitive theory
: The field of cognition breaks down into several different areas. Cognitive psychology, the mainstream tradition of the field, is mostly abstract and doesn’t reference biology very much, whereas cognitive neuroscience attempts to relate information flow to how the brain works. This latter field is relatively new, and it is what most of the commentary in this book references.

Chunking
: According to G. A. Miller’s influential 1956 paper “The Magical Number Seven, Plus or Minus Two,” our short-term memory (which you can think of as our “scratch pad” for doing mental work) can only handle around seven units of information. If you overload your short-term memory, you’ll forget some of the items. Each unit of information can be fairly complex, as long as we are capable of reducing it to a “chunk,” or a collected unit of information with a single easy-to-remember label. This has important implications for a number of fields, including linguistics, interface design, and of course, games—it helps explain why adding more numbers to keep track of in a game will very quickly make the game too hard. Only short-term memory has this limitation; the brain itself is capable of far more.

Automatic chunked patterns
: Cognitive science uses numerous terms for many of these related concepts, including chunks, routines, categories, and mental models. In this book I used “chunk” because it’s already used in different ways by different disciplines, plus it makes sense on a layman’s level. Technically, most of the big “chunked patterns” to which I refer are called schemata.

Chunks not behaving as we expect them to
: When people learn information, the brain tags it as “correct” by default, without weighing the source’s credibility; it takes conscious work to determine otherwise. People also tend to automatically group similar things together in the absence of complete data—thus, a person who didn’t know much about either might consider a pumpkin and a basketball to be the same type of object. This can lead to unpleasant surprises when you try to make a pie. There is a field called “source monitoring” within the study of memory that examines these issues.

The golden section
: Also called the golden mean, golden ratio, and divine proportion. This is too large a topic to discuss in an endnote; whole books have been written about it (such as Mario Livio’s The Golden Ratio: The Story of Phi, the World’s Most Astonishing Number). The golden ratio is the irrational number, approximately 1.618, called phi or ϕ. Ever since the ancient Greeks, art composed using this ratio has been deemed more beautiful. Some degree of this perception seems to be hardwired into our brain, perhaps because the ratio manifests in a wide range of natural phenomena, including the spiral pattern of seeds and petals around a flower stem, the shape of curling seashells, and certain proportions of the human body.

Even static has patterns
: A concept from algorithmic information theory. An algorithm is an elegant way to describe complex information. The algorithm “22 divided by 7” is a lot shorter than writing out 3.1428571. When we look at 3.1428571, it looks like chaos (it might look like π, but it’s only an approximation). And yet the algorithm 22/7 expresses this very big, dense piece of information in a concise manner. What looks like highly disordered information may actually be highly ordered information—we just might not know the algorithm that describes it. Three people arrived at algorithmic information theory nearly simultaneously and independently: Andrei Kolmogorov, Raymond Solomonoff, and Gregory Chaitin.
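The description-versus-data idea can be sketched in a few lines of Python; the specific formatting and length comparison below are my illustration, not from the text:

```python
from fractions import Fraction

# A short "algorithm" (22/7) versus the digit string it generates.
# Data that looks random may have a very short description.
approx = Fraction(22, 7)
digits = f"{float(approx):.7f}"   # "3.1428571"

description = "22/7"
print(description, "->", digits)
print("description length:", len(description))
print("expanded length:", len(digits))
```

The gap grows without bound as we ask for more digits, while the description "22/7" stays four characters long.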

Three chords and the truth
: One of the most basic chord progressions in all of music is the progression from tonic to subdominant to dominant and back again, often written as I-IV-V. In most folk music, blues, and classic rock, this pattern repeats over and over again, albeit in different keys. Music theory states that certain chords lead naturally into others because of leading tones within the chord—the V chord “wants to” go to the I chord because the V chord includes a note that is one half-step below the tonic note. Stopping on the V makes the music sound unresolved. This is also an expression of information theory, in that skilled musicians can intuitively guess what sorts of harmonic structures will follow on a given chord based on their experience.
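The pull of the leading tone can be checked with a small sketch; the pitch-class numbering (C = 0, twelve-tone equal temperament) is a standard convention rather than anything from the text:

```python
# In C major, the V chord (G major) contains B, the leading tone,
# which sits one half-step below the tonic C. That half-step gap
# is the "wants to resolve" pull described above.
PITCH = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

tonic = "C"
g_major = ["G", "B", "D"]  # the V chord in the key of C

leading_tone = "B"
gap = (PITCH[tonic] - PITCH[leading_tone]) % 12
print(gap)  # 1 half-step below the tonic
```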

Flat fifth
: A major or minor chord will make use of a perfect fifth, which is two notes that are exactly seven half-steps apart (seven adjacent keys, black and white, on the piano). The flat fifth, or tritone, is six half-steps and is extremely dissonant, unlike the perfect fifth and perfect fourth. In much classical music, the tritone was avoided and was dubbed “the devil’s interval.” It is, however, extremely common in jazz.
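These interval sizes can be computed directly, assuming the standard twelve-semitone pitch-class numbering (my convention here, not the book's):

```python
# Interval sizes in half-steps (semitones), C = 0.
NOTE = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
        "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def semitones(low, high):
    # Half-steps from `low` up to `high`, wrapping within one octave.
    return (NOTE[high] - NOTE[low]) % 12

perfect_fifth = semitones("C", "G")   # C up to G: 7 half-steps
tritone = semitones("C", "F#")        # C up to F#: 6 half-steps
print(perfect_fifth, tritone)
```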

Alternating bass
: A rhythm whereby the bass alternates steadily between the tonic note of a chord and the perfect fifth above it.

Grok and Robert Heinlein
: The definition offered in Heinlein’s Stranger in a Strange Land is “Grok means to understand so thoroughly that the observer becomes a part of the observed—to merge, blend, intermarry, lose identity in group experience. It means almost everything that we mean by religion, philosophy, and science—and it means as little to us (because we are from Earth) as color means to a blind man.” In Martian, however, the word means “to drink.”

Brain functioning on three levels
: A good book describing this theory is Hare Brain, Tortoise Mind by cognitive scientist Guy Claxton, published by Ecco in 2000. He describes how many problems are best solved by the unconscious mind rather than the conscious or “d-mode” brain.

Approximations of reality
: The best example of this that I can come up with is “weight.” Physics tells us there is no such thing—mass is the correct concept. But in everyday life weight is “good enough.” Another example: hot water is composed of highly excited molecules. But even hot water has molecules that are barely moving (and are therefore “cold”). When we speak of the temperature of water, we don’t consider the trillions of water molecules with varying levels of excitation—we instead consider the average of all of them and call it “temperature,” a convenient fiction that makes sense for us because we’re so big and molecules are so small. Ludwig Boltzmann explained the difference between “temperature” and “individual molecule excitation” as the difference between a macrostate and a microstate. The schemata the brain works with are macrostates—they are algorithmic representations of reality.
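A rough sketch of the macrostate/microstate distinction in Python; the exponential energy distribution and sample size are illustrative assumptions, not physics from the text:

```python
import random

# A toy "macrostate": temperature as the average energy of many
# molecules, each with its own (micro) energy. Units are arbitrary.
random.seed(0)
energies = [random.expovariate(1.0) for _ in range(100_000)]

temperature = sum(energies) / len(energies)   # the macrostate
coldest = min(energies)                       # one microstate detail

print(f"average (temperature-like) value: {temperature:.3f}")
print(f"slowest molecule's energy: {coldest:.6f}")
```

Even though the average sits near 1.0, the ensemble contains molecules with energy close to zero, which is the "even hot water has cold molecules" point above.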

Sticking your finger in fire
: The typical elapsed time for a reflex reaction such as this is around 250 milliseconds. Doing it consciously requires around 500 milliseconds.

The football player and instinctive reactions
: In the book Sources of Power: How People Make Decisions, Gary Klein describes how most complex decisions are made based on the first impulse that comes to mind, not conscious thought. Eerily, the first impulse is usually right. When it is wrong, however, the results can be disastrous. The joke about the football player is funny because it rings true—we recognize something about how the brain works in it.

Knowing what to do on the mandolin
: This is also an expression of information theory. In 1948 Claude Shannon developed the basics of information theory, proposing the notion that you could regard an information stream as a chain of probability events. Assume a limited set of symbols (like, say, the alphabet). When you get one given symbol in a sequence (like, say, the letter Q), you can reduce the possible symbols that might come next (like, say, to just the letter U) because you know enough about the symbolic system within which Q and U exist. You’re not likely to pick K, but you might think of E for Q.E.D. or A for Qatar. Music happens to be a highly ordered and fairly limited formal system, and so as you develop a “musical vocabulary,” you are also developing a sense of the shape of the entire problem domain, even though a few letters in the alphabet (such as tremolo on the mandolin) might be new to you.
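Shannon's next-symbol idea can be sketched by counting bigrams; the tiny corpus below is invented purely for illustration:

```python
from collections import Counter, defaultdict

# Estimate which symbol follows a given one by counting adjacent
# pairs (bigrams) in a sample text. After seeing "q", the model
# concentrates nearly all its probability on "u".
corpus = "queen quiz quay quod quack"

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

print(follows["q"].most_common())  # [('u', 5)]
```

A real model would use a much larger corpus and longer contexts, but the principle is the same: experience with the symbol system shrinks the space of what can plausibly come next.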

Practice
: Alan Turing, better known as the father of modern computing, also posed the famous “halting problem.” We know that you can get a computer to tackle incredibly difficult problems. However, Turing proved that there is no general method for predicting whether an arbitrary program will ever finish its work, short of actually running it. Problems we haven’t computed yet are unknown territory; only experience tells us the scope of a problem. In short, we only really learn things by experiencing them.

Mental practice
: This is called “mental imagery” and it is widely used in sports training. One study by Anne Isaac in 1992 showed that mental imagery helps an athlete improve in a skill. Other studies have found that autonomic nervous system responses are triggered by detailed mental imagery. It’s important to note that actual practice is still better than just imagining yourself doing something—the mental images have to be highly detailed and specific to provide a benefit. One of the most famous examples of mental imagery in this century is shown in the film The Pianist, where Wladyslaw Szpilman, played by Adrien Brody, “plays” piano while hovering his fingers above the keys so as to avoid detection by the Nazis.

Chapter 3:

Our perception of reality is basically abstractions
: An important paper called “What the Frog’s Eye Tells the Frog’s Brain,” by Lettvin, Maturana, McCulloch, and Pitts, described the fact that what the brain “sees” as output from the eyes is not even vaguely close to the literal visual image. A significant amount of processing turns the literal input of light and shadow into something that the brain copes with. In a very real sense, we do not see the world—we see what our brain tells us we see. Solipsism is five blocks down and to the left.

The map is not the territory
: This is a condensation of a statement by the father of general semantics, Alfred Korzybski: “A map is not the territory it represents, but, if correct, it has a similar structure to the territory, which accounts for its usefulness.” This echoes Kant’s differentiation between Das Ding an sich (the thing as it is in itself) and Das Ding für uns (the thing as we know it).

Run permutations on a book
: This statement is a bit too forceful. There exist works of literature that are intended in this manner. Examples include the entire genre of hypertext fiction (Victory Garden by Stuart Moulthrop is a good starting point). There are also books such as Julio Cortázar’s Rayuela (translated as Hopscotch) that are intended to be read in multiple different orders. And of course, the genre of games known as “interactive fiction,” or text adventures, can be seen as a computer-assisted form of this type of book.

Deeply nested clauses
: This is typically seen as an expression of G. A. Miller’s number cited above: 7±2. In assessing a deeply nested sentence, it’s important to realize that each word is itself already being “chunked” from a collection of letters.

The limitations of rules
: This is a game-specific way of explaining Gödel’s incompleteness theorem. Kurt Gödel, in his 1931 paper “On Formally Undecidable Propositions of Principia Mathematica and Related Systems,” proved that any consistent formal system powerful enough to express arithmetic contains true propositions that cannot be proved within the system. No formal system can know itself fully. The “magic circle” is basically an attempt to protect the integrity of a model, in the same way that Hilbert’s program attempted to fully define mathematics.

Endorphins
: “Endorphin” is abbreviated from “endogenous morphine.” I’m not kidding when I say we’re on drugs when we’re having fun! Endorphins are the body’s own opioids. The “chill down the spine” effect is often explained as the release of endorphins into the spinal fluid. Pleasure is not the only thing that gives us this effect, of course—adrenaline rushes caused by fear provide a similar sensation.

Break out in a smile
: There’s good evidence that the smile causes us to be happy and not the other way around. For more reading on emotions, I recommend the work of Paul Ekman.

Learning is the drug
: “Fun is the emotional response to learning.” – Chris Crawford, March 2004.

Sensory overload
: The input capacity of the conscious mind is only around 16 bits per second. Sensory overload can be thought of as the difference between the amount of information and the amount of meaning. You can have a large stack of information—such as a book typed by monkeys—that is very low in meaning. When the amount of information is too high and we fail to extract meaning from it, we say we’re in overload.
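One rough way to see "information without meaning" is per-character Shannon entropy, which rates keyboard-mashing higher than repetitive English; both sample strings below are invented for illustration:

```python
import math
from collections import Counter

def entropy_per_char(text):
    # Shannon entropy in bits per character, from symbol frequencies.
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

english = "the cat sat on the mat and the cat sat again"
noise = "qzjxkvqwpzmrtbylnqgdshfuepcioavqzjxkvwm"

print(f"english-like: {entropy_per_char(english):.2f} bits/char")
print(f"monkey-typed: {entropy_per_char(noise):.2f} bits/char")
```

The noise carries more raw information per character, yet less meaning; entropy measures only the former, which is exactly the gap this note describes.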
