Mental Architectures
Start from the basic assumption that cognition is a form of information-processing
A mental architecture is a model of how the mind is organized and how it
works to process information
The Physical Symbol System: The first serious attempt to build a mental
architecture
The Origins of Artificial Intelligence
1950s-1960s: Herbert Simon and Allen Newell developed fundamental
ideas concerning the nature of intelligence and the equivalence of mind
and machine
This was the era when it was recognized that computers manipulated symbols
generally, not just numbers
Simon and Newell saw in the computer the essence of intelligence
1975 Turing Award: Given by the Association for Computing Machinery to Allen
Newell and Herbert Simon
Logic Theory Machine (1956) - proved theorems
General Problem Solver (1957) - generalized to broad class of problems
Newell and Simon used their Turing lecture to talk about the basic principles
for studying intelligent information-processing
Basic components of a representation:
Description of given situation
Operators for changing the situation
A goal situation
Tests to determine whether the goal has been reached
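These four components can be made concrete in code. A minimal sketch (the names `Problem`, `initial`, `operators`, and `goal` are illustrative assumptions, not Newell and Simon's notation):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Problem:
    initial: str                            # description of the given situation
    operators: List[Callable[[str], str]]   # operators for changing the situation
    goal: str                               # the goal situation

    def goal_test(self, state: str) -> bool:
        # test to determine whether the goal has been reached
        return state == self.goal

# Toy usage: one operator (uppercasing) turns the initial situation into the goal.
p = Problem(initial='a', operators=[str.upper], goal='A')
```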
A popular problem: cryptarithmetic
  DONALD
+ GERALD
--------
  ROBERT   (given D = 5)
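The puzzle can be solved mechanically by searching over assignments of digits to letters. A brute-force sketch in Python (my own illustration of the search idea, not Newell and Simon's method, which traced human-like step-by-step inference):

```python
from itertools import permutations

def solve_donald_gerald():
    """Find digits for DONALD + GERALD = ROBERT, with D fixed at 5."""
    letters = 'ONALGERBT'  # the nine letters other than D
    for perm in permutations([d for d in range(10) if d != 5]):
        val = dict(zip(letters, perm), D=5)
        if val['G'] == 0 or val['R'] == 0:   # no leading zeros
            continue
        word = lambda w: int(''.join(str(val[c]) for c in w))
        if word('DONALD') + word('GERALD') == word('ROBERT'):
            return val
    return None
```

The classic solution is 526485 + 197485 = 723970.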
Means-End Analysis
1. What is the difference between the start state and the goal state?
2. Find an operator that will reduce that difference
3. New goal: Apply the operator
Leads to a recursive strategy. Using many levels of recursion places a
heavy load on cognitive resources
An Example of Means End Analysis
Finding a move in a game of chess:
Goal: Capture opponent's piece
Difference: Piece is protected by Bishop
Reduce Difference: Threaten Bishop
Goal: Threaten Bishop with Knight
Difference: Knight is protecting Queen
Reduce Difference: Protect Queen with something else
Etc.
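The three steps above can be sketched as a recursive function. A toy version over numeric states (the state type and operator set are my assumptions; the real General Problem Solver worked over symbol structures):

```python
def means_end(state, goal, operators, depth=0, max_depth=20):
    """Return a list of operators transforming state into goal, or None."""
    if state == goal:
        return []
    if depth == max_depth:        # many levels of recursion are costly: bound them
        return None
    # 1. the difference between the current state and the goal state
    difference = abs(state - goal)
    # 2. prefer operators that most reduce that difference
    for op in sorted(operators, key=lambda op: abs(op(state) - goal)):
        if abs(op(state) - goal) >= difference:
            continue              # this operator does not reduce the difference
        # 3. new (recursive) goal: get from the transformed state to the goal
        plan = means_end(op(state), goal, operators, depth + 1, max_depth)
        if plan is not None:
            return [op] + plan
    return None

double = lambda s: s * 2
plus3 = lambda s: s + 3
plan = means_end(1, 8, [double, plus3])   # plus3 then double: 1 -> 4 -> 8
```

Because this sketch insists that every step reduce the difference, it can miss solutions that require a temporary detour; one reason full means-end analysis needs richer subgoal machinery.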
Physical Symbol Systems
A physical symbol system has the necessary and sufficient means for intelligent
action
A method for representing a problem, the operators used for its solution,
and the strategy for employing those operators
Note - this is a domain-general procedure, with all that that implies
Chapter 6: Section 6.1
What is a physical symbol system? Explain the four key ideas.
1. Symbols are physical patterns
2. Symbols can be combined to form complex structures
3. The system contains processes for manipulating complex symbol structures
4. The processes can themselves be symbolically represented (very important)
According to Newell and Simon, which ability lies at the heart of intelligence?
The essence of intelligent thinking is the ability to solve problems
Intelligence is the ability to work out, when confronted with a range
of options, which of those options best matches certain requirements and
constraints
What is a search space? How do computer scientists often represent them?
A branching tree of achievable situations, defined by the potential application
of operators to the initial situation
Think of a game like chess
20 possible first moves; for each one, 20 possible responses; then 19 to 31
possible second moves
Explain the notion of a heuristic search technique.
Problem spaces are generally too large to be searched exhaustively by
brute-force algorithms
chess: maybe 10^120 possible states
Search must be selective - close off branches of the tree
E.g., chess: ignore branches that start with a piece lost without
compensation
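Selective search of this kind can be sketched as best-first search: always expand the branch the heuristic scores as most promising, and close off branches already seen. (The state space and heuristic below are toy examples of my own; chess itself would need far more machinery.)

```python
import heapq

def best_first(start, goal, successors, heuristic):
    """Expand the lowest-heuristic state first; return a path to goal or None."""
    frontier = [(heuristic(start), start, [start])]
    seen = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt in successors(state):
            if nxt not in seen:       # close off branches already explored
                seen.add(nxt)
                heapq.heappush(frontier, (heuristic(nxt), nxt, path + [nxt]))
    return None

# Toy usage: reach 10 from 0 by steps of +1 or +2; the heuristic (distance
# to the goal) makes the search take the larger step every time.
path = best_first(0, 10, lambda s: [s + 1, s + 2], lambda s: abs(10 - s))
```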
Heuristic Search Hypothesis
Problems are solved by generating and modifying symbol structures until
a solution structure is reached
General Problem Solver starts with symbolic descriptions of the start
state and the goal state
It then tries to find a sequence of transformations to change the start
state into the goal state
What is a universal Turing machine? Why does it help to illustrate the
fourth requirement of a physical symbol system?
A universal Turing machine is one that can mimic any other Turing machine
I.e., a general purpose computer
It works by encoding the program itself as a set of symbols, so that programs
can themselves be manipulated as data
This, according to the PSSH, is intelligence
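The program-as-symbols idea can be illustrated with a tiny simulator: the "machine" below is just a dictionary of symbol-manipulation rules, so the same simulator runs any machine it is handed (a toy sketch of my own, not a full universal machine):

```python
def run_tm(program, tape, state='q0', blank='_', max_steps=1000):
    """Run a Turing machine given as {(state, symbol): (write, move, next)}."""
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == 'halt':
            break
        symbol = cells.get(head, blank)
        write, move, state = program[(state, symbol)]
        cells[head] = write
        head += 1 if move == 'R' else -1
    return ''.join(cells[i] for i in sorted(cells)).strip(blank)

# Example machine, itself just a symbol structure: flip every bit, then halt.
flipper = {
    ('q0', '0'): ('1', 'R', 'q0'),
    ('q0', '1'): ('0', 'R', 'q0'),
    ('q0', '_'): ('_', 'R', 'halt'),
}
result = run_tm(flipper, '1011')   # '0100'
```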
Heuristic Search and Algorithms
The PSSH is a reductive description of intelligence
Reducing a complex process to multiple simple processes
It is only illuminating if physical symbol systems are not themselves
intelligent
This means that the physical symbol systems must function algorithmically
(The algorithms themselves are not intelligent - they generate intelligent
behavior)
Is the physical symbol system hypothesis correct?
The only way to find out is by experience
There are many failures (e.g., playing chess), but these may or may not
be failures in principle; perhaps we do not yet have the right set
of algorithms
Does problem solving lie at the heart of intelligence, as Newell and Simon
suggest?
Intelligence may be too broad and vague a term
But some sort of problem solving does lie at the heart of intelligence.
Perhaps problem solving is modular.
It may require reference to the form of the implementation in order to
be achieved. (It may not be multiply realizable)
Chapter 6: Section 6.2
What is a propositional attitude? How does a consideration of propositional
attitudes lead to the Intentional Realism hypothesis?
Propositional attitudes are beliefs, desires, goals, knowledge, etc.
Our thinking occurs in terms of these attitudes
This is Stanovich's intentional level of analysis
Intentional Thinking
Humans can think recursively about intentionality
"I know that you believe you understand what you think I said, but
I'm not sure you realize that what you heard is not what I meant."
(Robert McCloskey)
Much of our thinking concerns the mental processes of others: theories
about their propositional attitudes
Lisa Zunshine: This is the foundation of literature
Six levels of intentional thinking:
Shakespeare INTENDS that his audience BELIEVES that Iago WISHES that
Othello SUPPOSES that Desdemona HOPES that Cassio LOVES Desdemona.
Note that the falsity of some propositions is key to appreciating the
story
Explain the difference between formal and semantic properties of information
processing systems. Why does this distinction lead to the puzzle of causation
by content?
The brain operates in terms of its physical or formal properties
(rules concerning form)
Content of attitudes causes behavior by virtue of its semantic properties
(meaning)
How can content at the semantic level cause behavior at the formal level?
How does Fodor's language of thought hypothesis solve the puzzle of causation
by content?
Computers manipulate symbols in a way that is (a) sensitive to their
formal properties, while (b) respecting their semantic properties
Fodor: Brains must do the same thing that computers do
This requires LOT, a formal language that is more like a language of logic
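A toy illustration of (a) and (b) (my own example, not Fodor's): a rule that looks only at the shape of symbol strings, yet reliably preserves their meaning.

```python
def modus_ponens(premises):
    """Derive q from 'p' and 'if p then q' by pattern-matching alone."""
    derived = set()
    for s in premises:
        # purely formal test: the string's shape, not what it means
        if s.startswith('if ') and ' then ' in s:
            antecedent, consequent = s[3:].split(' then ', 1)
            if antecedent in premises:
                derived.add(consequent)
    return derived

derived = modus_ponens({'it rains', 'if it rains then the street is wet'})
# derived == {'the street is wet'}: a semantically sound conclusion,
# reached by manipulating symbols in virtue of their formal properties
```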
Is intentional realism the correct approach to thinking about propositional
attitudes? What are some other options?
How does this account for non-conscious causes of behavior?
Do propositional attitudes have to be conscious?
Is causation by content a puzzling phenomenon? What do you think of Fodor's
proposed solution to it?
Does it matter to anyone other than a philosopher?
If philosophers are puzzled, should we be too?
Chapter 6: Section 6.3
Describe the Chinese room thought experiment.
Assume:
The Chinese Room is input-output identical to a real Chinese speaker
The internal processing in the Chinese room is purely syntactic
(based on the shapes of the symbols)
The person in the Chinese room has no understanding of Chinese
What claim does the Chinese room argument challenge?
Searle argues that what occurs in someone who does understand Chinese
cannot be what takes place in the Chinese room
PSSH is committed to strong AI (the idea that appropriately programmed
computers might be minds)
Searle uses the CRA to argue that strong AI is in principle impossible
Describe the Turing test.
A discrimination test: Is the behavior of a machine discriminably different
from the behavior of a person?
Note that the Chinese Room does pass the Turing test (by definition)
The Turing test is not identical to the PSSH
Is it an adequate test of strong AI?
What might be a better test?
Explain the systems reply to the Chinese room argument.
Based in part on the Turing test
The understanding does not reside in the person; it resides
in the room itself
The difficulty of making this distinction may be part of the intuitive
force of the CRA
Responses to the Responses
Searle: So put the room inside the person
But does the argument now have any force?
It presumes the homunculus fallacy - that there is "something"
("someone") inside the mind that (who) does the understanding.
But it is the mind as a whole that understands things. To assume otherwise
implies an unwarranted dualism.
Is Searle's Chinese room thought experiment a convincing argument?
Probably no-one has been convinced (either way) who was not already convinced
Daniel Dennett: The Chinese room is an intuition pump - a device designed
to elicit intuitive but incorrect answers to complex questions.
The speed of operation fallacy - one reason why intuitively the CR does
not seem to "understand" is that the process is so slow and
deliberate.
In general, what do you think of the use of thought experiments?
Note that the Chinese room debate is really another version of the "causation
by content" puzzle.