MIND VERSUS COMPUTER: WERE DREYFUS AND WINOGRAD RIGHT?

Book announcement:

The book `Mind Versus Computer - Were Dreyfus and Winograd Right?', recently published by IOS Press, is dedicated to the basic question of whether the mind is just a very complex computer. The strong versus weak AI debate is combined with new AI approaches and ideas. Twenty papers, gathered and refereed through the Internet, are presented in three parts:
- Overview and General Issues
- New Approaches
- Computability and Form vs. Meaning

ISBN: 90 5199 357 9 (IOS Press)
ISBN: 4 274 90181 C3000 (Ohmsha)
ISSN: 0922-6389
IOS Press, Van Diemenstraat 94, 1013 CN Amsterdam, The Netherlands

Mind Versus Computer aims to reevaluate the soundness of current AI research, especially the heavily disputed strong-AI paradigm, and to pursue new directions towards achieving true intelligence. The subtitle `Were Dreyfus and Winograd Right?' refers to the debate between the first pioneers, who rejected the strong-AI approach, and the predominantly formalistic scientific community. Two decades later, we examine the basic positions in these arguments, analyse the current status of the field, and try to predict future AI orientations.

The book is based on a special issue of the Informatica journal that started with two papers: `Making a Mind vs. Modelling the Brain: AI Back at a Branchpoint' by H.L. Dreyfus and S.E. Dreyfus, and `Thinking machines: Can there be? Are we?' by T. Winograd. Both papers are unique and worth reading again and again. Indeed, they present the motto of the special issue and the book - were not H.L. Dreyfus, S.E. Dreyfus and T. Winograd right about this issue years ago? Were the attacks on them by the strong-AI community and large parts of the formal-sciences community unjustified? We believe the answer is yes.

This brainstorming publication presents analyses of core ideas that will possibly shape future AI. We have tried to include critical papers representing different positions on these issues. Submissions were invited through the Internet in all subareas and on all aspects of AI research and its new directions, especially:
- the current state, positions, and true advances achieved in the last 5-10 years in various subfields of AI (as opposed to parametric improvements),
- the trends, perspectives and foundations of artificial and natural intelligence, and
- strong AI vs. weak AI and the reality of most current `typical' publications in AI.

Due to the broad scope of the book, readers should find at least a couple of papers relevant to them.

Papers were refereed through the Internet, and all authors were asked to accommodate the referees' comments. The book is based on the papers published in the special issue of Informatica; three papers were removed and two were added (Geller, Perus), and all authors were invited to revise their papers. The accepted papers are grouped into the following three categories:

A. OVERVIEW AND GENERAL ISSUES

`Strong AI: An Adolescent Disorder' by D. Michie advocates an integrative approach - let us forget about differences and together keep doing interesting things. Machine learning is seen as one of the central points of AI research.

`AI Progress, Massive Parallelism and Humility' by J. Geller agrees with the between-extremes viewpoint and proposes massive parallelism as the most appropriate approach for AI studies. Old AI, with its mind=computer ideas, was notoriously overoptimistic and counterproductive.

`Self and Self-Organisation in Complex AI Systems' by B. Goertzel rejects formal logic as a means of achieving human-level intelligence and advocates designing complex self-aware systems. Consciousness and self are the key to true intelligence.

`Is Weak AI Stronger than Strong AI?' by M. Gams presents an overview of the antagonistic approaches and proposes an AI version of the Heisenberg principle delimiting strong from weak AI. Both AI directions will undoubtedly progress, but weak AI is proposed as the more promising in the long term.

`Naive Psychology and Alien Intelligence' by S. Watt argues for a naive commonsense psychology, by analogy to naive physics. People understand physics and psychology even in childhood, without any formal logic or equations. Therein lie the roots of intelligence.

`Cramming Mind into Computer: Knowledge and Learning for Intelligent Systems' by K.J. Cherkauer analyses knowledge acquisition and learning as the key issues necessary for designing intelligent computers.

`The Quest for Meaning' by L. Marinoff attacks strong AI through analyses of the meaning of a poem. Can computers understand poems? Has Turing slain the Jabberwock?

The papers in this section are a mixture of interdisciplinary approaches, from computer science to cognitive science. The average paper takes a critical stand against strong AI. However, the level of criticism and acclaim for intelligent digital computers varies.

B. NEW APPROACHES

`Computation and Embodied Agency' by P.E. Agre analyses computational theories of agents' interactions with their environments. An overview of several approaches is presented, promoting the growing dialogue between AI and fields as disparate as phenomenology and physics.

`Why Philosophy? Knowledge Representation and its Relation to Modeling Cognition' by M.F. Peschl investigates the role of representation in both cognitive modeling and the development of human-computer interfaces.

`Intelligent Objects: An Integration of Knowledge, Inference and Objects' by X. Wu, S. Ramakrishnan and H. Schmidt introduces intelligent knowledge objects as a step further from programming objects.

`Emotion-Based Learning: Building Sentient Survivable Systems' by S. Walczak advocates implementing features such as affects in order to design intelligent systems. The program FEEL demonstrates the benefits of emotion-based learning.

`The Theoretical Foundations for Engineering a Conscious Quantum Computer' by R.L. Amoroso closely connects physics and AI in order to design a conscious computer.

`Mind: Neural Computing Plus Quantum Consciousness' by M. Perus promotes neural quantum computing as a means of achieving intelligence in future computers.

`Computation without Representation: Nonsymbolic-Analog Processing' by R.S. Stufflebeam addresses the connectionist approach. What has happened to the neural-network wave of optimism?

C. COMPUTABILITY AND FORM VS. MEANING

`Is Consciousness a Computational Property?' by G. Caplain proposes a detailed argument to show that the mind cannot be computationally modelled. The argument settles on a `weak dualism'.

`Computation and the Science of Mind' by P. Schweizer claims that computational procedures are not constitutive of the mind, and thus cannot play a fundamental role in AI. The computational approach may provide a scientifically fruitful level of analysis, but it cannot explain the mind.

`Mind versus Goedel' by D. Bojadziev presents an overview of the uses of Goedel's theorems, claiming that they apply equally to humans and computers. Unlike Penrose, Bojadziev sees no theoretical differences in computational capabilities of minds and computers.

`Computation and Understanding' by M. Radovan examines various strengths and shortcomings of computers and minds. Although computers in many ways exceed the natural mind, brains still have quite a few aces left. The computational approach faces limitations similar to those of humans; however, computers still have to overcome several hard barriers to approach the human level.

`What Internal Languages Can't Do' by P. Hipwell analyses the limitations of internal representation languages in contrast with the brain's representations. It is claimed that there are no differences between the power of natural languages and artificial languages.

`The Chinese Room Argument: Consciousness and Understanding' by S. Gozzano proposes yet another reason why Searle's Chinese room presents a hypothetical situation only. Searle's room is not accepted as a true counterexample to strong AI.

PREFACE by Terry Winograd

Since long before modern digital computers were invented, people have been fascinated with the questions of mind and mechanism. Are we nothing more than elaborate machines? Could we some day aspire to build machines that are equally (and perhaps identically) elaborate? Might we in fact be able to build machines that exceed our own capacities for thought, understanding, and knowledge?

These questions are not ones that lend themselves to well-defined experimental tests or to easy philosophical analysis. The depth and difficulty of the subject is attested to by the papers in this volume and the many that have preceded them in the literature. The arguments will be advanced, but surely not settled, by the discussion in these chapters. We can expect the debates to continue for decades, centuries, and perhaps millennia.

In fact, even before we can seek answers, we need to question just what the questions mean. The open-ended question `Can we create artificial intelligence' is sensible only if we have some way of identifying whether an artifact is `intelligent', and that question is in turn just as problematic as our initial ones.

The debate about AI has often become confused when opponents with different fundamental assumptions about intelligence proceeded to argue whether a given machine or behavior exhibited it. Terms such as `strong AI' and `weak AI' are used in different senses by different participants in the discourse, further confounding attempts to get at the heart of the matter.

In the end, I believe that the question `Can machines be intelligent' is incoherent. It derives an apparent coherence from a false sense that there is an objectively definable meaning for `intelligent': a meaning that is not utterance-dependent and interpreter-dependent. The same is true for all of the question's variants and corollaries, such as `Can machines have intentions?' `Are minds machines?' and so on.

It is important to distinguish these absolutist questions from operational questions about what computers can do. A question such as `Can a computer beat the world champion at chess?' is perfectly coherent and subject to test (In fact, such a test is beginning just as these words are being written). The coherence of this question does not depend on any properties of chess, although those properties may well be critical to the empirical answer. Questions such as `Can a computer compose a symphony that people find moving and beautiful?' or `Can a computer come up with a cure for cancer?' are equally understandable and testable. But `Can a computer understand' or `Can a computer have intentions' are not, since they depend on an assumption that a predicate such as `X understands' is objectively meaningful.

Many people have misinterpreted Searle's term `strong AI', distorting it in order to apply it to the argument that raged in the AI community during the 70s and 80s. Some AI researchers maintained faithfulness to a long-term goal of producing full human-like intelligence, while others argued that more pragmatic, `weak' approaches (under labels such as `knowledge engineering' and `expert systems') were the appropriate focus for research. Searle's argument was not about the question of whether AI should strive for human-like behavior. In his discussion he was willing to simply cede the question of whether full human-like behavior could be achieved. He argued that even if it were, the result would not be `strong AI'. The machine might act in every way as though it were a fully intelligent human (given a robotic body, a developmental history, and other such accoutrements), but it would never have real intentionality.

This is the incoherence that I referred to earlier: the assumption that `has real intentionality' is a meaningful predicate. Consider a somewhat flippant analogy. I ask my teenage daughter which world-wide-web sites are `cool'. She (and her friends and others) can all make judgments about particular instances. I now ask `What if that site were really put up by the CIA as an imitation of the web site you think it is - would it still be cool?' The answer is not defined. Coolness isn't the kind of concept that has a sharp delineation, or which accounts for the dimension I have raised. Nevertheless, the term is used successfully for communicating within a community of people who have a (relatively) shared understanding of it.

We would like to believe that something as respectable as `intentionality' doesn't have the open-ended interpreter-dependent quality of `cool'. With so many intelligent philosophers spilling much ink about it over the years, it must have a real definition, like `triangle' in mathematics, or at least to the degree of `molecule' in physics. Unfortunately this is an illusion. Every person has a strong subjective sense of what it means to have intentions, to be conscious, or to understand Chinese. The clarity of that subjective impression fools us into thinking that there is a corresponding clarity of the concept. In the end, the question as to whether Searle's Chinese room really understands Chinese is no more objectively answerable than the question as to whether his home page is a cool web site.

This does not imply, of course, that there are no meaningful questions to be argued. We can identify operational qualities that we care about, which fall within the community of usage of terms such as `intention', `understand', and `intelligent'. We can ask whether a specific technological approach is capable of achieving those qualities. These were the kinds of questions addressed in the writings by the Dreyfus brothers, myself, Flores, and others, which initiated the journal issue that preceded this book. The arguments are not about how we would label a machine's behaviors if it COULD achieve human-like capacities, but about HOW such capacities could potentially be achieved. The thrust of our argument is that the traditional symbolic/rational approach to AI can not achieve capacities such as normal human language understanding. This is not a quibble about whether the computer's behavior deserves to be called `understanding', but a claim that computers will not be able to duplicate the full range of human language capabilities without some fundamentally new approach to the way in which they are built and programmed.

This kind of question is ultimately open to empirical test. Perhaps some unknown genius in a secluded garage has already followed the traditional AI approach and produced a machine which we would all agree meets the full range of operational tests of human intelligence. Then the Dreyfus/Winograd claim would be falsified. By this measure of course, it can never be validated, since no sequence of failures precludes the possibility that the next attempt will succeed. But arguments have great practical value, even when we are inherently unable to come up with objective proof.

Imagine that some visionary were to propose that world peace could be achieved by having enough people around the world sit down in crystal pyramids, hold hands, and chant. It would be impossible to prove him wrong a priori, and he might even be able to cite evidence, such as the peace-bringing effect that this activity had on his commune and those of his friends. But it would certainly be worth arguing about the plausibility of extending the approach to world-wide peace, before committing resources to carry out the experiment. Similarly, the philosophical arguments about the basis of AI can shed significant light on the plausibility of different approaches, and can therefore be of great consequence in the development of computer technologies.

My own interests in the debate lie along pragmatic lines. As active participants in an intellectual field, we are always faced with the question of `What is worth doing?' Many of my colleagues in computer science and engineering are skeptical about the value of philosophical debate in answering such questions. They do not see such discourse producing the kinds of hard-edged answers that would give them definite direction and specific guidance. I believe that they are wrong, both in rejecting the value of such discussions, and in expecting answers that meet the criteria of mathematical and scientific argumentation.

The kind of debate represented by this volume is indeed relevant and practical. The wisdom of philosophically grounded knowledge complements the power of technologically grounded analysis. We may not be able to give precise answers to the questions we ask, but in asking them and pursuing them with serious rigor, we continually open up new possibilities for thought and for action.

Editors: Matjaz Gams, Marcin Paprzycki, Xindong Wu
Scientific contact person: Matjaz Gams, matjaz.gams@ijs.si
Publisher: IOS PRESS

List of reviewers:
-- Witold Abramowicz
-- Kenneth Aizawa
-- Alan Aliu
-- John Anderson
-- Istvan Berkeley
-- Balaji Bharadwaj
-- Leslie Burkholder
-- Frada Burstein
-- Wojciech Chybowski
-- Andrzej Ciepielewski
-- Sait Dogru
-- Marek Druzdzel
-- James Geller
-- Stavros Kokkotos
-- Kevin Korb
-- Aare Laakso
-- Witold Marciszewski
-- Tomasz Maruszewski
-- Timothy Menzies
-- Madhav Moganti
-- John Mueller
-- Hari Narayanan
-- James Pomykalski
-- David Robertson
-- Piotr Teczynski
-- Olivier de Vel
-- Zygmunt Vetulani
-- John Weckert
-- Stefan Wrobel
