Post by Hannes Vilhjalmsson on Jan 18, 2011 19:34:18 GMT -5
Continuing our discussion about the definition of "artificial intelligence" and whether "machines can think", it's appropriate to check out one of the original papers describing the idea of thinking machines, written by Alan Turing in 1950. The paper is "Computing Machinery and Intelligence" and was published in Mind: A Quarterly Review of Psychology and Philosophy in October 1950. Since this is a relatively long paper, you are also welcome to use the paper summary on Wikipedia, but I encourage you to check the original for further explanation when something is unclear. Post your discussion questions here by midnight on Sunday night (the 23rd of January). Cheers, - hannes högni
Post by gunnar on Jan 20, 2011 5:56:39 GMT -5
1) Many people are unable to write a sonnet, so why should a computer be able to? Does the computer have to be more advanced or better than the average human to be considered intelligent? And how can we know whether a computer feels or not unless it tells us? And how would we determine whether the feelings are real or made up? A person can feel and not show it, so why shouldn't a computer be able to?
2) If we were to make a child machine with the knowledge available today, would it be able to learn by itself, or would the learning just be the programmer adding code to it?
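To make the distinction concrete, here is a minimal sketch (in Python, with made-up toy data) of what "learning by itself" could mean in the weak sense: the program's code never changes at runtime; only its stored weights change as the machine is corrected on examples.

# Minimal perceptron sketch; all data and names are illustrative.
# The code below is fixed once written. "Learning" happens only by
# updating the stored weights when the machine makes a mistake.

def train(examples, epochs=10, lr=0.1):
    """examples: list of ((x1, x2), label) with label in {-1, +1}."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else -1
            if pred != label:  # correction: nudge weights toward the label
                w[0] += lr * label * x1
                w[1] += lr * label * x2
                b += lr * label
    return w, b

# Toy usage: the machine picks up a simple rule purely from examples.
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
print(train(data))

Whether updates like this count as the machine learning by itself, or just as the programmer's update rule doing its job, is exactly the question.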
Post by baldurb09 on Jan 21, 2011 11:29:42 GMT -5
1) "Many people are unable to write a sonnet, so why should a computer be able to?" Why shouldn't it? "Does the computer have to be more advanced/better than the average human to be considered intelligent?" Better than the average human with respect to what criterion? And why the "average" human? Wouldn't humans a few standard deviations from the mean of your criterion qualify as "intelligent"? "And how can we know if a computer feels or not unless it tells us? And how would we determine if the feelings are real or made up?" Can you decipher whether a human has emotions? Can you decipher whether a human has consciousness? Check out en.wikipedia.org/wiki/Artificial_consciousness
Post by Ásgeir Jónasson on Jan 21, 2011 12:37:44 GMT -5
My first question is difficult to put into one phrase, so it will have to be in a few parts:
"The short answer is that we are not asking whether all digital computers would do well in the game nor whether the computers at present available would do well, but whether there are imaginable computers which would do well"
When did Alan Turing think passing what is now called the Turing Test would be possible, and if he thought it would have been done by now, why have we not lived up to his expectations? Is it because hardware didn't improve as fast as he expected, or because the algorithms didn't get good enough fast enough? In either case, was Turing being unreasonable? On a related note, would the emergence of, for example, quantum computers make us less critical of our algorithms because of the performance increase?
Second question:
"Most actual digital computers have only a finite store. There is no theoretical difficulty in the idea of a computer with an unlimited store."
I know we are far from understanding how humans store memories, but do you think humans have finite memory? If so, do you think it varies between individuals, and that it plays a big part in our ability to learn?
Post by krafki on Jan 21, 2011 12:45:04 GMT -5
1) In the previous article the author mentioned that "the early AI pioneers were largely engaged in a revolt against behaviorism", and Turing's test (Turing himself being an early AI pioneer) bears a clear similarity to behaviorism. Were many of the "AI people" perhaps a little hasty when it came to ditching behaviorism?
2) Will the field of AI eventually lean fully towards cognitivism?
Post by helgil08 on Jan 21, 2011 14:37:55 GMT -5
My two cents:
1) Definition of artificial intelligence: if a program learns, reasons, and acts on that basis, as opposed to following a script, it can be called artificial intelligence.
2) Can machines think: yes, because they can reason based on learned knowledge.
Post by lorenzo on Jan 22, 2011 12:39:39 GMT -5
1. In my opinion, the concept of "thinking" as it relates to machines is not yet well defined. For me, a machine can "think" in the sense of making a decision on the basis of the knowledge it has, so as things stand I don't believe a machine can think like a human 100%.
2. Are the objections Turing receives still valid, or are they starting to make no sense nowadays? For example, the Theological one...
Post by baldurb09 on Jan 22, 2011 13:44:07 GMT -5
Did Turing's conclusion, that asking whether machines can think is meaningless, generally get accepted in his time?
Why should we need to apply a punishment-and-rewards system to teach machines? Children are mainly punished for insubordination and rewarded as an incentive, but machines need neither.
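For what it's worth, Turing's punishment-and-rewards proposal maps closely onto what is now called reinforcement learning. Here is a minimal sketch (illustrative Python; the teacher and the reward values are made up) of behavior shaped purely by reward and punishment, with no rule anywhere stating which action is "correct":

import random

# Reward/punishment learner (illustrative sketch; names are made up).
actions = ["greet", "insult", "ignore"]
value = {a: 0.0 for a in actions}  # learned estimate of each action's worth

def teacher_feedback(action):
    # Hypothetical teacher: rewards politeness, punishes everything else.
    return 1.0 if action == "greet" else -1.0

for step in range(200):
    if random.random() < 0.1:                 # occasionally explore
        a = random.choice(actions)
    else:                                     # otherwise exploit the best guess
        a = max(actions, key=lambda x: value[x])
    reward = teacher_feedback(a)
    value[a] += 0.1 * (reward - value[a])     # nudge estimate toward reward

print(value)  # "greet" ends up with the highest learned value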
Post by Helgi Siemsen Sigurðarson on Jan 22, 2011 17:40:37 GMT -5
1) On "Lady Lovelace's" Objection where she says that computers aren't capable of "original thought". Well what is "original thought" is it taking things apart an putting them together in an "original" way or did she mean some thing else ? (because taking things apart and putting the parts together again in a different ways can be done with a search algorithm)
2) Is there any one trying to stop AI development because of the 'Heads in the Sand' Objection ? (I have heard this objection but I was wandering is there any one doing any thing about it)
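On question 1, here is a tiny sketch (Python; the word parts and the "novelty" score are made up) of taking things apart and putting them together in an "original" way as plain search, nothing more than enumeration plus a scoring rule:

from itertools import permutations

parts = ["cold", "moon", "silent", "river", "burning"]

def score(phrase):
    # Hypothetical novelty score: prefer unusual adjacent word pairs.
    unusual = {("cold", "burning"), ("silent", "river"), ("moon", "cold")}
    return sum(1 for pair in zip(phrase, phrase[1:]) if pair in unusual)

# Search all 3-word recombinations and keep the "most original" one.
best = max(permutations(parts, 3), key=score)
print(" ".join(best))

If output like this counts as "original thought", the objection is in trouble; if it doesn't, the burden is on saying what the scoring rule is missing.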
Post by finnur on Jan 23, 2011 8:03:08 GMT -5
"The claim that "machines cannot make mistakes"" he talks a bit about that. humans make mistakes because humans are imperfect. shouldn't the perfect being be the one who never makes mistakes?
"Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain-that is, not only write it but know that it had written it. No mechanism could feel pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants." Again, humans are imperfect, shouldn't the goal be to make an AI that only makes rational decisions but not ones of Emotion, isn't emotion, religion and "human traits" the reasons for war, so why should the goal to make an AI like a human?
Post by carmine on Jan 23, 2011 8:53:56 GMT -5
My questions/comments about the article:
- The author focuses his attention especially on the hardware and computational power of computers, on how the machine should answer and how it should imitate the man... I think that one of the most important problems for a machine is to understand the question during the test, considering all the sub-problems of NLP. Why is this completely neglected in the text? I think it is actually more complicated to understand every kind of question than to find its answer.
- I'm really, really interested in the "various disabilities" proposed in section 5; "make mistakes" in particular is a fascinating problem. Is the difference between "errors of conclusion" and "errors of functioning" relevant? Does this distinction appear in humans? When we make a mistake, which kind of error are we producing?
Post by hordurh10 on Jan 23, 2011 8:56:05 GMT -5
Which of the objections to Turing's views put forward in the article is the most convincing one?
How does the objection "The Argument from Extrasensory Perception" strike you?
-- Hörður
Post by sindrib on Jan 23, 2011 9:27:06 GMT -5
In the paper the author assumes that the best strategy for the imitation game is to provide answers that are as "human-like" as possible. With this assumption, in my opinion, the game has little to do with intelligence and more to do with finding the best (e.g. statistical) model for answering the interrogator, i.e. maximizing the probability of winning the game. That is, the author reduces the game to pure game theory, which in this case seems to have a solution.
In part 6, he mentions the importance of an increase in computer storage capacity as a prerequisite for playing the imitation game well. If that is the case, it supports the reduction I mentioned above.
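A sketch of the reduction I have in mind (Python; the probability model is a stand-in, and estimating it well is of course the entire difficulty): from a set of candidate answers, pick the one that maximizes the estimated probability of being judged human. The arithmetic example is Turing's own, where the machine's best move is to pause and answer slightly wrong:

# Q: "Add 34957 to 70764." The correct sum is 105721; Turing's machine
# pauses and then answers 105621.

def p_judged_human(answer):
    # Stand-in statistical model; a real one would be learned from data.
    p = 0.5
    if answer.startswith("(pause)"):
        p += 0.2      # humans take time over arithmetic
    if "105721" in answer:
        p -= 0.1      # an instant, perfectly exact answer looks mechanical
    return p

candidates = ["105721", "(pause) 105621"]
print(max(candidates, key=p_judged_human))  # picks the slow, wrong answer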
Post by thorsteinnth on Jan 23, 2011 9:58:27 GMT -5
1. About the threat of being "overtaken": in a powerful system that can learn by itself, could constraints be programmed into that system, "Do not harm humans", like we see in the movies so often? Could a system learn how to override such constraints? (See the sketch after question 2.)
2. The Turing test: if the interrogator were interacting with a complete moron and mistakenly thought the human was a machine, could that show that machines do have some really, really low-level AI?
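On question 1: the obvious picture is a hard-coded constraint layer that every proposed action must pass. A minimal sketch (Python; all action names are made up) is below, and the worry in the question is exactly that such a guard is only as safe as the channels it actually checks:

# Illustrative constraint layer (all action names are hypothetical).
FORBIDDEN = {"harm_human", "disable_guard"}

def guard(action):
    return action not in FORBIDDEN

def act(proposed):
    return f"executing {proposed}" if guard(proposed) else "blocked"

print(act("fetch_coffee"))  # executing fetch_coffee
print(act("harm_human"))    # blocked

# If the learning system can modify its own machinery, nothing in this
# layer prevents it from reaching the same effect through an action the
# guard has no name for.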
Post by gunnar on Jan 23, 2011 10:02:27 GMT -5
1) "Many people are unable to write a sonnet, so why should a computer be able to?" "Why shouldn't it?" Can you please elaborate on why you think it should? It's more a question of whether it really has to be able to in order to be classified as intelligent. And my question was more to be thought of as a whole: how advanced must A.I. be before people start accepting it as intelligence?