|
Post by Hannes Vilhjalmsson on Jan 19, 2009 19:49:26 GMT -5
Since many of the questions you submitted for the last reading concerned the definition of "artificial intelligence" and whether machines could think, it is only appropriate that we next discuss one of the original papers describing the idea of thinking machines, written by Alan Turing in 1950. The paper is "Computing Machinery and Intelligence" and was published in Mind: A Quarterly Review of Psychology and Philosophy in October 1950. Since this is a relatively long paper (I had promised short ones!), you are also welcome to use the paper summary on Wikipedia, but I encourage you to check the original for further explanation when something is unclear. Post your discussion questions here by midnight on Friday night (the 23rd of January). Cheers, - hannes högni
|
|
|
Post by Helgi Páll Helgason on Jan 20, 2009 10:43:30 GMT -5
From a scientific point of view, it can be argued that the "Argument from continuity in the nervous system" or variations of it are the only arguments relevant today. As the article is quite old, have new arguments against the possibility of machine intelligence surfaced since?
And following up on the nervous system argument, aren't we nearing a point where we can answer with some certainty whether a neuron (with all its currently known properties) can be simulated on a computer? And if this were proven, couldn't that be seen as evidence that machine intelligence is possible?
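On the simulation question: heavily simplified neuron models have been computable for a long time. A minimal sketch, using the standard leaky integrate-and-fire model (a deliberate simplification that captures only the neuron's basic integrate-and-spike behaviour, not all of its known properties):

```python
# Leaky integrate-and-fire neuron: the membrane potential v leaks
# toward its resting value while integrating an input current; when
# v crosses a threshold the neuron "spikes" and v is reset.
def simulate(current, steps=1000, dt=0.1, tau=10.0,
             v_rest=0.0, v_reset=0.0, threshold=1.0):
    v, spikes = v_rest, 0
    for _ in range(steps):
        dv = (-(v - v_rest) + current) / tau  # leak plus input drive
        v += dv * dt
        if v >= threshold:                    # spike, then reset
            spikes += 1
            v = v_reset
    return spikes

print(simulate(current=1.5))  # suprathreshold input: repeated spikes
print(simulate(current=0.5))  # subthreshold input: no spikes at all
```

Whether simulating ever-more-detailed models amounts to simulating *the neuron* is, of course, exactly the philosophical question at issue.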
|
|
|
Post by Birna Íris on Jan 22, 2009 5:39:06 GMT -5
The idea of consciousness is quite fascinating. What is it that makes us (humans) conscious, and is it possible to implement those things in a computer? Turing suggests that, to answer the question "can machines think", the mystery of consciousness does not have to be solved. This is probably true, but the closer we get to analysing human consciousness, the closer we get to implementing (human) thinking in machines, because human thinking takes place in a conscious mind. Connected to this is the "feeling" part, which I find very interesting as well. Can machines have feelings? Well, feelings originate in the brain and are connected to our experience and memory, which implies that machines will be able to have feelings in the future. Or does it? (I think the feeling of love will be especially challenging.)
Ada Lovelace claimed that machines can "never do anything really new". Is this perhaps connected to the study of creative computers? Can creativity be implemented in a computer?
|
|
|
Post by Christian Zehetmayer on Jan 22, 2009 7:05:07 GMT -5
1) Is it the computer or machine itself that is able "to think", or is it the program that is executed on the computer? The computer consists of a store, an executive unit and a control unit, like a body, but without a program the parts don't know what to do, and so the machine isn't able to think.
2) Is there a computer which passes the Turing test? Are there other intelligence tests for computers? Is there a computer program which is close to passing such a test?
|
|
|
Post by Snorri Jónsson on Jan 22, 2009 18:31:47 GMT -5
1. What is thinking...? (I don’t want the answer, just the discussion about it)
2. Is there anyone with the opinion that computers will achieve "thinking"?
|
|
|
Post by arnij07 on Jan 22, 2009 19:17:11 GMT -5
Did this end up being true: "I believe that in about fifty years time it will be possible to programme computers with a storage capacity of about 10^9 to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning."? Is pretending to think really the same as thinking? I think Data, the AI in Star Trek: The Next Generation, would not have passed this test in at least the first five seasons.
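For scale, Turing's 10^9 figure refers to binary digits of storage. A quick back-of-the-envelope conversion (assuming, as the paper's discussion of storage suggests, that he meant bits) shows how modest that capacity looks by modern standards:

```python
# Turing's 1950 prediction: machines with a storage capacity of
# about 10^9 binary digits by roughly the year 2000.
bits = 10**9
megabytes = bits / 8 / 1024 / 1024  # bits -> bytes -> MiB
print(f"{megabytes:.0f} MiB")       # about 119 MiB
```

By 2000, ordinary desktop machines comfortably exceeded this, so the storage half of the prediction held; whether the imitation-game half did is exactly what the post asks.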
|
|
|
Post by halldorrh05 on Jan 23, 2009 6:10:04 GMT -5
Sometimes such a machine is described as having free will (though I would not use this phrase myself).
How can a random number generator be likened to free will?
Is being "self-aware" in the sense humans are really comparable to what a self-debugging machine would be?
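On the random-element question: Turing's point was that a random element lets a learning machine search for solutions without a fixed systematic plan, which is what invites the (loose) comparison to free will. A toy illustration of random search (my own example, not from the paper):

```python
import random

# Toy illustration of a "random element": instead of trying
# candidates systematically, the machine picks them at random
# until one satisfies the goal condition.
def random_search(goal, candidates, rng=random.Random(0)):
    while True:
        guess = rng.choice(candidates)
        if goal(guess):
            return guess

# Find a number in [50, 200) that is divisible by 17.
result = random_search(lambda n: n % 17 == 0, range(50, 200))
print(result)
```

The machine's particular choice is unpredictable to an observer, yet it is entirely mechanical, which is presumably why Turing declined to call it free will.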
|
|
|
Post by olafurgi on Jan 23, 2009 10:20:36 GMT -5
1. Given that religion is still integrated into the control/power structures of many countries, is there evidence that religion has affected AI advances negatively? (Regarding "The Theological Objection".)
2. What is your opinion on the "Chinese Room" argument, which says that being able to construct an answer in a language by following a specific rule set is not the same as understanding the language itself?
|
|
|
Post by Björn Vignir Magnússon on Jan 23, 2009 10:24:29 GMT -5
I was wondering: one of the objections mentioned in the paper is that computers can't have original thoughts, but what really is the definition of an original thought? Aren't original thoughts always conceived out of combinations of simple things anyway?
Also, I was wondering: are the objections in the paper considered general knowledge, or are they debatable?
|
|
|
Post by arnij07 on Jan 23, 2009 10:37:21 GMT -5
|
|
|
Post by Stefán Einarsson on Jan 23, 2009 11:46:33 GMT -5
1) Machines cannot make mistakes: I guess in a sense this is correct, since the machine is our puppet; only we can make mistakes, in the programming.
2) Limits to what questions a computer system based on logic can answer: is or was this limit based on a lack of available memory?
|
|
|
Post by steinarhugi on Jan 23, 2009 12:20:35 GMT -5
1. What are the best attempts to achieve this goal so far?
2. If the interrogator is rude, the player should become insulted. How do we give a computer feelings? Humans often express their feelings without saying directly how they feel, yet their feelings can still be observed by "reading between the lines". How would we model that expressive behaviour for a computer?
|
|
|
Post by Hjalti Magnússon on Jan 23, 2009 13:19:38 GMT -5
1) My first question has to do with the "child machine" Turing talks about. An essential part of the curriculum would be mathematics, based, perhaps, on axioms of some sort. However, mathematics, as we (humans) have defined it, is inaccurate in some sense. How, for example, would you explain Russell's paradox to a computer, or the term "infinity", or just the term "natural number"?
2) How could one program a computer to "learn"? Initially the program would have to be taught how to learn, but wouldn't it be limited in that sense? Children are not taught how to learn; no one teaches a child how to learn to drink or eat. And even setting that aside, wouldn't the program also be limited by the way it's taught, just like humans are?
|
|
|
Post by Richard Ottó O'Brien on Jan 23, 2009 13:42:23 GMT -5
1. What are these limits on what questions/problems one can ask a computer to answer/solve? Are they regarding feelings and/or what we refer to as human emotions, etc.?
2. They mention that any behaviour of a discrete-state machine can be imitated by a digital computer, given enough time. Does that mean the computer would be learning from the machine, or is this only about the amount of processing time required for all possible actions and such?
|
|
|
Post by Hólmar Sigmundsson on Jan 23, 2009 13:50:46 GMT -5
1) Wouldn't the "child machine" be able both to program and to reprogram itself, whenever it learns something new or learns from rewards and punishments from its teachers? How does one teach the machine to teach itself?
2) Why does storage capacity play such a large role in how well machines can imitate humans?
|
|