Is Biological Intelligence Different From Machine Intelligence?
#1
Quote:Mod Note Start

Thread split from here - http://nirmukta.net/Thread-Stephen-Wolfr...Everything

Some comments from the old thread that are relevant here:

sojourner Wrote:In my worldview, biological intelligence is completely different from machine intelligence, and it is a mistake to confuse one with the other.

Kanad Kanhere Wrote:Sojourner, I don't understand your claim "Biological intelligence is completely different from machine intelligence and it is a mistake to confuse one with the other". Can you elaborate more on that?

Mod note End

(09-Aug-2011, 05:30 AM)sojourner Wrote: In my worldview, biological intelligence is completely different from machine intelligence, and it is a mistake to confuse one with the other.

That remains a deep question because human-like intelligence (passing a broad-based Turing Test) is yet to be convincingly demonstrated in non-biological substrates. As for the assertion that machines with subjective states cannot be built, several arguments exist both for and against it.

#2
I was planning on posting a more detailed reply which I may still do. In the meantime, here is a quick reply. (I am home, sick.)

My interest is solely biological intelligence. My point is very simply this: even though we have built impressive examples of machine intelligence (for instance IBM's Jeopardy-playing computer), these throw very little light on how humans think, solve problems, etc. We can take terminology from machine intelligence to describe/"explain" biological intelligence, but nothing will be gained from such an effort.
#3
First and foremost, this discussion is not very relevant to the original thread topic, so the moderators might want to move it to a new thread. I don't know how to do that, and I don't think I have the discretion to do it myself.

Hi Sojourner,
I still don't understand why you think the intelligences are so different.
I work in the semiconductor industry and deal with processor design on a daily basis. As per my understanding, if we are able to do the following:
1. Create a processor with sufficiently high computing power (we are probably midway on this)
2. Have huge memories (we do have this)
3. And implement some sophisticated learning algorithms (we do not have these)
we should be able to create an artificial brain.

Hi ArvindIyer,
I went through the "against" part of the link and wasn't much convinced. The Chinese Room argument doesn't really hold up in several instances, as correctly pointed out by the responses.

Regards
Kanad
#4
Wow! There isn't a better forum for me to share what I stumbled upon just a few minutes back. A new article by Scott Aaronson of MIT talks about how computational complexity obviates the field of philosophy. Very dense stuff. Start with the Technology Review blog post about the paper, and then read the paper at your leisure. This is as fresh out of the oven as it gets.

http://www.technologyreview.com/blog/arx...8/?ref=rss

http://arxiv.org/abs/1108.1791

I think this paper will be extremely interesting to read!


From the blog,

Quote:One way to measure the difference between a human and computer is with a Turing test. The idea is that if we cannot tell the difference between the responses given by a computer and a human, then there is no measurable difference.

But imagine a computer that records all conversations it hears between humans. Over time, this computer will build up a considerable database that it can use to make conversation. If it is asked a question, it looks up the question in its database and reproduces the answer given by a real human.

In this way a computer with a big enough look up table can always have a conversation that is essentially indistinguishable from one that humans would have.

"So if there is a fundamental obstacle to computers passing the Turing Test, then it is not to be found in computability theory," says Aaronson.

Instead, a more fruitful way forward is to think about the computational complexity of the problem. He points out that while the database (or look up table) approach "works," it requires computational resources that grow exponentially with the length of the conversation.

Aaronson points out that this leads to a powerful new way to think about the problem of AI. He says that Penrose could say that even though the look up table approach is possible in principle, it is effectively impractical because of the huge computational resources it requires.

By this argument, the difference between humans and machines is essentially one of computational complexity.

That's an interesting new line of thought and just one of many that Aaronson explores in detail in this essay.

My emphasis is on:

Quote:By this argument, the difference between humans and machines is essentially one of computational complexity.
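The exponential blow-up described in the blog can be sketched in a few lines of Python (my own illustration; the alphabet size and conversation lengths are arbitrary assumptions, not numbers from the paper):

```python
# Rough sketch of Aaronson's lookup-table argument: with an alphabet of
# k symbols, the number of distinct conversation histories of length n
# (each of which would need a canned reply stored in the table) is k**n,
# which grows exponentially with n.
def table_entries(alphabet_size: int, conversation_length: int) -> int:
    return alphabet_size ** conversation_length

for n in (10, 20, 40):
    print(n, table_entries(26, n))
```

Even for short conversations over a 26-letter alphabet, the table quickly exceeds the number of atoms in the observable universe, which is the sense in which the lookup-table machine "works" in principle but is impossible in practice.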
#5
(10-Aug-2011, 09:25 PM)karatalaamalaka Wrote: A new article by Scott Aaronson of MIT talks about how computational complexity obviates the field of philosophy.

Complexity-like arguments have long been used to counter biocentric exceptionalism.
Even Fritjof Capra (with whom we may have a lot to disagree on mysticism parallels) in his book 'The Hidden Connections' stresses the point that 'living'-'nonliving' (and in a sense animal-machine) differences are correctly viewed only as differences in complexity. He uses a homely example to illustrate his point:

Quote:For example, when you kick a stone, it will react to the kick according to a linear chain of cause and effect. Its behavior can be calculated by applying the basic laws of Newtonian mechanics. When you kick a dog, the situation is quite different. The dog will respond with structural changes according to its own nature and (nonlinear) pattern of organization. The resulting behavior is generally unpredictable.

Given this 'spectrum of complexity' view of animal-machine differences, arguments for biocentric exceptionalism do seem unconvincing (as Kanad says for other reasons). It will be interesting to see what arguments the 'exceptionalists' can offer that do not rely on complexity to establish the difference between human and machine.

Having said all that, I am wary of any claim of some scientific discovery 'obviating philosophy'. In his book 'The Emerging Mind', Vilayanur Ramachandran makes the grand claim (though seemingly mindful of the irony) that 'Neuroscience is the new Philosophy'. However, far from obviating philosophy itself, what the chapter with that title does is offer naturalistic explanations for many behaviors once considered the sole preserve of philosophy. Sam Harris, more controversially, claims to have wrested Ethics from Philosophy into Science, but if anything, he only establishes how Science can inform our ethical choices; he does not show that Science determines them. Try as he might, he cannot reduce Ethics to an objective study, as no empirical fact is objectively binding upon human ethical decisions (as Russell Blackford has often pointed out).

Philosophy will remain relevant for many reasons and some immediate ones which come to mind are:
(i) The study of Ethics (which needs a machinery to examine and evaluate intersubjective premises besides objective claims), Politics (the art of just social organization, in which arguments like eugenics can be rejected only on intersubjective and not objective grounds) and Aesthetics (which, despite the insights of neuro-aesthetics is one human endeavor in which imagination unbridled by objective realism can be exercised)
(ii) Philosophy has developed a body of tools to examine the epistemic rigour of other disciplines and hence disciplines like the Philosophy of Science will always remain relevant, however much Science progresses. For instance, Popper and Feyerabend still remain relevant to discussions on the scientific method.

An elegant defence of Philosophy was mounted by Will Durant here:

Quote:It (Philosophy) is the front trench in the siege of truth. Science is the captured territory, and behind it are those secure regions in which knowledge and art build our imperfect and marvelous world. Philosophy seems to stand still, perplexed, but only because she leaves the fruits of victory to her daughters the sciences, and herself passes on, divinely discontent, to the uncertain and unexplored.


[+] 3 users Like arvindiyer's post
#6
(10-Aug-2011, 10:08 PM)arvindiyer Wrote: Complexity-like arguments have long been used to counter biocentric exceptionalism.

Isn't 'computational complexity' more specific than the general concept of "complexity"?

#7
(10-Aug-2011, 10:34 PM)karatalaamalaka Wrote: Isn't 'computational complexity' more specific than the general concept of "complexity"?

I guess even in contexts which are not explicitly 'computational', the idea of complexity as 'descriptor length' (Kolmogorov Complexity) is widely applicable. Need to read more on the applicability of different complexity measures across disciplines though...
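As a toy illustration of 'descriptor length' (my own sketch, not from the thread): Kolmogorov complexity itself is uncomputable, but the length of a compressed encoding gives a computable upper bound on it, and that bound already separates regular strings from irregular ones:

```python
import random
import zlib

def description_length(s: bytes) -> int:
    """Length of a zlib-compressed encoding of s: a crude, computable
    upper bound on its Kolmogorov complexity ('descriptor length')."""
    return len(zlib.compress(s, 9))

regular = b"ab" * 500        # has a short description: "'ab' repeated 500 times"
random.seed(0)
irregular = bytes(random.randrange(256) for _ in range(1000))  # no short description

print(description_length(regular))    # small: the regularity is exploited
print(description_length(irregular))  # near (or above) 1000: incompressible
```

Both inputs are 1000 bytes long, but the compressed sizes differ by an order of magnitude, which is the intuition behind using descriptor length as a complexity measure across domains.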

#8
Quote:I still don't understand why you think the intelligences are so different.
I work in the semiconductor industry and deal with processor design on a daily basis. As per my understanding, if we are able to do the following:
1. Create a processor with sufficiently high computing power (we are probably midway on this)
2. Have huge memories (we do have this)
3. And implement some sophisticated learning algorithms (we do not have these)
we should be able to create an artificial brain.

First, a little bit of background: I make a living as a computer programmer, but my hobby is behavior analysis, which is B. F. Skinner's approach to psychology. I probably should describe the behavior-analytic worldview, but I don't have the energy for it at the moment.

Applied behavior analysis has been very successful in working with autistic people. I don't know of a single instance where something from cognitive science has been helpful in working with autistic people. My point is that work done under cognitive science will be of zero benefit for working with humans.
--------------------------------
Quote: the difference between humans and machines is essentially one of computational complexity.

I think that it is a total mistake to talk of computations when talking about human behavior and animal behavior. This is arguing by analogy at its worst.
--------------------------------
Several algorithms have been developed for IBM's Watson machine to play Jeopardy. Are any of these remotely helpful to a human playing Jeopardy? Absolutely not. Are any of these helpful to humans in any other way? I don't think so.
--------------------------------
Click here to read something that may be relevant here. I couldn't find an online link to Skinner's 1977 article "Why I am not a cognitive psychologist" referenced here. This link shows only the first page.

#9
(11-Aug-2011, 03:34 AM)sojourner Wrote: I think that it is a total mistake to talk of computations when talking about human behavior and animal behavior. This is arguing by analogy at its worst.

(11-Aug-2011, 03:34 AM)sojourner Wrote: My point is that work done under cognitive science will be of zero benefit for working with humans.

I : Use and abuse of the 'computational metaphor':

Unease with overstretching metaphors is well articulated by Berkeley philosopher John Searle thus:

Quote:Because we don't understand the brain very well we're constantly tempted to use the latest technology as a model for trying to understand it. In my childhood we were always assured that the brain was a telephone switchboard. (What else could it be?) And I was amused to see that Sherrington, the great British neuroscientist, thought that the brain worked like a telegraph system. Freud often compared the brain to hydraulic and electromagnetic systems. Leibniz compared it to a mill, and now, obviously, the metaphor is the digital computer.

The famed computer scientist Edsger Dijkstra was even more unsparing in his criticism of the 'computational approach' to understanding the psyche, which according to him benefits neither psychology nor computer science. Quoting from here:

Quote:The desire to understand Man in terms of Machine has —as is only to be expected— its inverse counterpart, viz. the desire to understand the Machine in terms of Man: in computing science the terminology is shockingly anthropomorphic. What with Babbage was still called "a store" is now "a memory", what used to be called "an instruction code" is now called "a programming language". I picked up the sentence "When this guy wants to talk to that guy..." while the speaker referred to distant components of a computer network. I contend that this preponderance of anthropomorphic terminology is the symptom of a widespread confusion, a confusion without which, for instance, so-called "conversational programming" would never have enjoyed the glamour that, at one time, it did enjoy.

But let us not forget that Dijkstra also said, "Computer Science is no more about computers than astronomy is about telescopes". Likewise, there are neuroscientists today who say "Cognitive science is no more about neurons than computer science is about computers". While this may sound too cute by half, it is really not so, because cognitive processes are studied at multiple levels, namely (i) task, (ii) algorithm and (iii) implementation. This has been formalized in the Marr hierarchy, which is a staple of most introductory cognitive science programs. At the first two Marr levels, the relevance of computational approaches is obvious.

Sweeping dismissals (or approvals) of the use of a computational metaphor are unwarranted, pending a clarification of the Marr level at which the metaphor is being employed. At the 'implementation' level, to say that 'a neuron is like a transistor' is obviously a flawed metaphor, since a neuron is not exactly analog (it spikes in an all-or-none way) and not exactly digital (it typically signals via a 'rate code').

However, at the 'task' or 'strategy' level, computational approaches have enjoyed several success stories. Human decision-making under risk has been well studied in the computational framework of minimizing Bayesian expected loss, and thanks to the seminal work of Tversky and Kahneman with this approach, a number of cognitive biases (arising from inherent 'distortions of probability') have been recognized. That is one area where a computational approach has proven its usefulness in understanding human behavior.
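To make 'minimizing Bayesian expected loss' concrete, here is a minimal sketch (the states, actions and loss numbers are invented purely for illustration):

```python
# Decision-making under risk as minimizing Bayesian expected loss:
# the agent holds a posterior belief over states of the world and
# picks the action whose loss, averaged over that belief, is smallest.
posterior = {"rain": 0.3, "dry": 0.7}            # belief over states

loss = {                                          # loss[action][state]
    "take_umbrella": {"rain": 1.0, "dry": 2.0},
    "leave_it":      {"rain": 10.0, "dry": 0.0},
}

def expected_loss(action: str) -> float:
    return sum(posterior[s] * loss[action][s] for s in posterior)

best_action = min(loss, key=expected_loss)
print(best_action)  # expected losses: take_umbrella = 1.7, leave_it = 3.0
```

Tversky and Kahneman's findings, seen in this framework, are that humans systematically deviate from this minimization, for instance by overweighting small probabilities, which is what makes the biases identifiable in the first place.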

II: Yesterday's behavioral psychologist is today's cognitive neuroscientist:

Psychology departments have in the past few decades been rechristening their programs, and many students who would earlier have been called psychology grad students are now called 'clinical neuroscience' or 'cognitive neuroscience' students. The sort of experiments that led to the applications of Skinner's work to autism therapy would on a campus today more likely be performed in a 'cognitive sciences' department. Also, the work of unmistakable 'cognitive scientists' like Antonio Damasio does have undeniable clinical significance, if not via immediate therapies then at least in terms of assessment of behavioral deficits resulting from brain injury. It therefore does not seem reasonable to give 'behaviorism' full credit for Skinner's work while dismissing 'cognitive science' as a wholly academic pursuit, when it deserves credit for the work of the likes of Damasio.

III: Placing Skinner in the context of the History of Science:

'Skinnerism', I think, has a place in the history of Psychology similar to the place Taylorism has in the history of Management. The similarities are many: both men emphasized a rigorous reductionistic approach in disciplines that were not 'exact sciences', both men were accused of (and even vilified for) suggesting that people be viewed as automatons who may be programmed to serve organizational ends, and both men have a well-earned place in history. However, just as Scientific Management is now seen as foundational to Management but not as its entirety, Behaviorism is seen as integral to Psychology but not as its entirety, and not entirely in the form intended by its original popularizer.
[+] 1 user Likes arvindiyer's post
#10
(11-Aug-2011, 07:49 AM)arvindiyer Wrote: III: Placing Skinner in the context of the History of Science:
Behaviorism is seen as integral to Psychology but not as its entirety and not entirely in the form intended by its original popularizer.

For now, let me say only the following [since it is past my bedtime :-)]

Behaviorism is more like Darwinism. Darwinism, after its acceptance by at least some, underwent a dormant period called the 'eclipse of Darwinism'. A similar thing is happening in psychology.

Let me also recommend the Skinner 1977 paper I referenced above.

I will follow this up in the next few days.

Skinner proposed a natural science of behavior, a branch of biology. It includes what makes humans human, namely verbal behavior, including private verbal behavior, or thinking.

#11
Quote:Sweeping dismissals of the use of a computational metaphor are unwarranted

Is the dismissal of a proposal of a god as a bank of supercomputers unwarranted?

Isn't the proposal of any metaphor suspect in serious science (as opposed to directly studying the subject matter, observing its entities, and directly describing observed relationships and processes)? Aren't metaphors nothing but using words and sentences acquired in one domain to describe events and processes in another domain? Aren't these the stuff of American TV preachers? ["I was flying into Denver airport the other day. They used radar to guide our flight in. You have a radar too. It is called your conscience."]

Skinner proposed and developed a carefully specified science. [The reason a lot of people reject it is that they cannot stay within his specified bounds, or they completely misunderstand it.] Roughly speaking, it is at the level of chemistry. It studies molecular responding as the dependent variable. The independent variables are mostly in the environment. It uses animal research, where variables can be better controlled. It takes the laws/processes observed in animal research and extends them to human behavior, which is the main topic of interest. One of its greatest successes is its application to human verbal behavior. Its successes in autism are applications of the basic processes discovered. The processes are also used in behavioral pharmacology and education, to mention a couple of fields.

Things happen at the neural level. Behavior analysis does not concern itself with events and processes at this level. This is left for others trained to do so. [English majors working as cognitive neuroscientists are not trained to do so.] Just as there are atomic-level explanations for observations in chemistry, there will be neural-level explanations for the observations in behavior analysis.

In his masterpiece of a book Verbal Behavior, Skinner gives an example of getting someone to utter the response "pencil" by modifying external variables. [Since the dependent variable is one that occurs at a particular instant in time -- say 4.34 PM on Aug. 14, 2011 -- it is called a molecular response. It is also a direct statement of a response that actually occurs, as opposed to something metaphorical. Nadal tossing a tennis ball at 1.23 PM on Aug. 7, 2011 is a good dependent variable. Nadal finding another gear during the second set in a match with Ivan Dodig on Aug. 8, 2011 is NOT a good dependent variable.]

Using the basic processes specified in Verbal Behavior, Skinner suggests several ways of getting someone to utter "pencil". None of the techniques are magical. They are indeed commonplace but we usually don't give them any credit.

Can cognitive science do likewise using its processes?

The autism applications are exciting because, again, they are applications of basic processes discovered.

A few days before he died Skinner gave a talk in which he called cognitive science the creation science of psychology. This talk has been published in American Psychologist. His reasons for rejecting cognitive psychology are also stated in the 1977 paper in Behaviorism that I have referred to in previous posts. It is best to hear from the master directly. [I don't have access to that paper at the moment. I have read it in the past.]

Skinner has no problems with people studying nervous systems. It is the "conceptual nervous systems" that he is opposed to.

Even when behaviorism was the dominant position in psychology, there were several in psychology who couldn't stomach it. This is not because it was wrong but because it was difficult. Some people go into psychology hoping to find solutions for their personal problems. This is like heart patients going into cardiology. Suffering the ailment related to the field and having empathy for it are hardly sufficient to put up with the demands and rigors of the field.

I was fortunate to meet Skinner once at a conference in NYC in 1979 or so. The only question I could think of asking him was

"Can your theories explain dreams?"

He replied,

"No, but nor can the other side."

How much behaviorism/behavior analysis has achieved is rather disappointing. This is mainly because we have not had anyone of the caliber of Skinner. However, "the other side" has not achieved anything where behavior analysis has failed.

I will close by pointing to some of behavior analysis's accomplishments:

Education: Teaching machines, programmed instruction, personalized system of instruction (and its variants)

Child Care: Baby tender, toilet training techniques

Autism: the techniques used are almost 100% behavior analytic

If someone understands Skinner's proposal in Verbal Behavior, it will knock his socks off.



#12
The following quotes are from this article.

****
The psychology of the late 20th Century took two forms: one was radical behaviorism, distinctly the minority position. The majority position was the “rest of psychology.” The “rest of psychology” was and is mediational, what B. F. Skinner would call “theoretical,” and all of it can be viewed as subdivisions of cognitive psychology, broadly defined. That is, the rest of psychology relies on explanations expressed in terms of underlying mechanisms. While there are as many of these “cognitive psychologies” as there are authors
****

****
Traditional psychology carries the burden of basic assumptions that agree with folk psychology and, therefore, lend popular appeal to its theories (cf., Baum, 1994). Needless to say, these assumptions also feature primitive ways of casting some important questions. For example, the assumption that “we” are minds “inside” bodies agrees with millennia of popular opinion, but it is neither a necessary nor a wise psychology
****

The assumption that "we" are software inside bodies appears to be an extension of this millennia long popular opinion.



