|
Post by Hrafn J. Geirsson on Jan 23, 2011 10:43:44 GMT -5
Since most of the arguments against Turing's proposed game seem to stem from the argument from consciousness, I propose the following questions.
1) Is there a connection between thinking and feeling on the one hand, and consciousness on the other?
2) Is there a scientific reason to believe that human beings have "free will" and are not simply following predetermined rules along an unfolding (theoretically predictable) path?
|
|
|
Post by sigurdurjokull on Jan 23, 2011 11:03:45 GMT -5
I think that the question of whether machines can think is kind of silly once you realize the mechanistic nature of the human mind. And I think the question is really a product of human beings viewing themselves as somehow above or beyond nature. Evolution has already created machines that can think.
And it's hard to define artificial intelligence because we don't really know what intelligence is, how we measure it, or how we measure humanity. So I think it would be more productive to give up on defining it, which by my best guess is what has happened.
But Turing said that the nervous system is not a discrete state machine. How do neurons perform computation if not through discrete information processing? Is a computer processor any more discrete than a neuron? I don't really understand what he means by that.
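Turing's point may be clearer next to his own example of a discrete state machine from the paper: a wheel that clicks through three positions once per tick, with a lever that can hold it still. A minimal sketch of that machine (the state names q1/q2/q3 and the lever input are my own labels):

```python
# Sketch of Turing's wheel machine as a lookup table.
TRANSITIONS = {
    # (state, lever_pressed) -> next state
    ("q1", False): "q2", ("q2", False): "q3", ("q3", False): "q1",
    ("q1", True): "q1",  ("q2", True): "q2",  ("q3", True): "q3",
}

def step(state, lever_pressed):
    """One tick: the wheel advances unless the lever holds it still."""
    return TRANSITIONS[(state, lever_pressed)]

state = "q1"
for lever in (False, False, True, False):
    state = step(state, lever)
print(state)  # q1 -> q2 -> q3 -> q3 (held) -> q1, so this prints q1
```

The machine's entire future behaviour follows from the table plus the current state; that is the "discrete state" property. The open question in the post is whether neurons, whose signals vary continuously, can be summarized by such a table, and Turing's answer is roughly that a discrete machine can imitate the continuous one closely enough for the imitation game.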
|
|
|
Post by jonfs09 on Jan 23, 2011 11:04:04 GMT -5
Is the Turing test still used today?
If yes, how good are computers today at fooling people into thinking they are human?
If no, are we using any similar tests to find out whether a computer is intelligent?
|
|
|
Post by Jökull Jóhannsson on Jan 23, 2011 12:34:59 GMT -5
1. Do you believe that the Turing test is reliable even though the result might differ depending on which humans take part in the test?
2. About Ada Lovelace's statement that machines cannot be unpredictable: do you think that sometime in the future there could be machines that program themselves to do new things based on their environment?
|
|
|
Post by Eiríkur Fannar Torfason on Jan 23, 2011 12:48:19 GMT -5
Question 1: Have any serious attempts been made to write software that could take part in the Imitation Game? If so, what were the results?
Question 2: Has anyone ever tried to simulate the evolution of simple agents by constructing a world governed by certain immutable laws, giving the agents the capacity to procreate, and letting random changes to their programming stand in for mutation?
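Experiments along these lines do exist; Tierra and Avida are well-known artificial-life platforms worth looking up. A toy sketch of the idea, with the fitness "law" and all numbers invented purely for illustration:

```python
import random

random.seed(1)   # reproducible toy run

OPTIMUM = 42.0   # an immutable "law of the world", chosen arbitrarily

def fitness(genome):
    # the world's fixed law: genomes nearer the optimum survive better
    return -abs(genome - OPTIMUM)

# agents whose entire "program" is a single number
population = [random.uniform(0.0, 100.0) for _ in range(50)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:len(population) // 2]   # selection: fitter half
    # procreation: each parent leaves two offspring with random mutation
    population = [p + random.gauss(0.0, 1.0) for p in parents for _ in range(2)]

mean = sum(population) / len(population)
print(abs(mean - OPTIMUM) < 5.0)  # the population drifts toward the optimum
```

Even this trivial version shows the structure of the question: the "law" (fitness) is fixed, the agents' programs change only by random mutation, and yet the population as a whole adapts. Real platforms replace the single number with actual self-replicating programs.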
|
|
|
Post by Þorgeir Karlsson on Jan 23, 2011 14:12:48 GMT -5
If I understand Lovelace's objection correctly, would this mean we wouldn't be able to create a computer that can discover things? We can make computers perform complex calculations at astounding speeds, but could a computer ever discover a new scientific theory like the theory of relativity?
How long is the Turing test supposed to last? If you interview a man/computer, wouldn't the length of time matter, i.e. the longer the interview, the harder the test becomes for the machine to pass?
|
|
|
Post by una on Jan 23, 2011 17:46:51 GMT -5
1. Have any computers passed the Turing test?
2. Last week the topic of the term "Artificial Intelligence" being a misnomer came up, but I got the impression from the article that Turing is talking about the imitation of intelligence rather than actual thinking. The Turing test does not check whether the machine can think, but rather whether it can behave as if it were thinking. Therefore I'm wondering whether the term is in fact pretty accurate for the general idea of AI?
|
|
|
Post by Magnús Skúlason on Jan 23, 2011 17:59:19 GMT -5
Are we only trying to answer the philosophical question of whether machines are intelligent, or is there any real-world practical use for making a machine play the "imitation game"?
Aren't genetic algorithms capable of this? Even though they follow predefined logic, the outcome can be new and unpredictable.
|
|
|
Post by kristofer kristofersson on Jan 23, 2011 18:08:32 GMT -5
I have two questions about the Turing test.
First, they talk about another computer taking the part of C. Do they mean that a computer acts as the judge, deciding whether A or B is an AI?
Secondly, I was wondering: when building such a machine, would you let the AI give wrong answers every now and again? If you have two individuals and ask 400 questions on a wide variety of subjects, and one answers all 400 correctly, it is most likely an AI, since people usually don't have a huge knowledge span (there are exceptions, though).
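Turing himself suggests exactly this in the paper: the machine "would deliberately introduce mistakes" into its arithmetic so as not to betray itself by inhuman accuracy. A sketch of the idea (the function name and error model are my own):

```python
import random

def humanlike_add(a, b, error_rate=0.1, rng=random):
    """Answer an addition question, occasionally slipping like a person.

    With probability error_rate the answer is off by a small, plausible
    amount (an off-by-one or a misplaced carry), as Turing suggested a
    machine should do to avoid giving itself away.
    """
    answer = a + b
    if rng.random() < error_rate:
        answer += rng.choice([-10, -1, 1, 10])
    return answer

# With error_rate=0 the answer is always exact:
print(humanlike_add(34957, 70764, error_rate=0.0))  # prints 105721
```

The numbers are from Turing's own sample dialogue in the paper, where the machine pauses about 30 seconds and answers 105621, deliberately wrong, to the question "Add 34957 to 70764."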
|
|
|
Post by kristjan on Jan 23, 2011 18:16:04 GMT -5
Some machines seem to be able to learn by themselves. Check this link: www.youtube.com/watch?v=UE4T20jF63w There are interesting videos of this robot trying to learn about its environment. What way do you think is best for a computer to learn without the knowledge being put in by a programmer?
|
|
|
Post by gudrunht on Jan 23, 2011 18:19:39 GMT -5
Interesting link, a "Turing test in action" video: news.bbc.co.uk/2/hi/science/nature/7666836.stm

#1 I note that Turing has to deal with theological objections, some arguing that machines cannot "think" since they have no soul. It bothers me that scientists in modern times should have to waste their time on such (silly) matters when it comes to serious discussion about their (AI) field. I am curious as to how the class feels about this subject: do these kinds of questions have any merit today?

#2 The AI community seems to be focused on making machines/agents that "think" like humans (as Turing states repeatedly). Would it not be more suitable to redefine these terms for a machine/agent in such a way that it would evolve its own entity (based on those terms) instead of trying to imitate humans? Of course, we would also have to define what is meant by entity...
|
|
|
Post by grimurtomasson on Jan 23, 2011 18:25:30 GMT -5
Which of the nine common objections are still relevant?
Is the Turing test itself relevant today? And if so, to which sub-categories of AI?
Grímur
|
|
|
Post by Elín Carstens on Jan 23, 2011 18:39:56 GMT -5
"What would Professor Jefferson say if the sonnet-writing machine was able to answer like this in the viva voce?"
This got me thinking... Let's say that we have a robot that looks, moves, talks and smells like a human, so much so that we would not be able to tell it apart from a real human being. Then let's assume that the software (for the robot) had originally been designed to pass the Turing test and that it had failed.
1) Would we believe the robot to be an "intelligent"/"normal" human after interacting with it? Would we be more inclined to believe it human because of its appearance than if we had interacted with it through text?
2) How big a role does appearance play in our perception of intelligence? Since we consider ourselves to be the only animals capable of possessing high-level intelligence, is it natural that we assume no other being can possess that kind of intelligence, whether it's mechanical or not? Can we, as such, refuse the notion that real intelligent beings can exist in forms other than human? Do we refuse it?
3) Can the imitation of intelligence be called intelligence? Why/Why not?
|
|
|
Post by Niccolo on Jan 23, 2011 20:32:57 GMT -5
Can I make a suggestion for a new approach toward the definition of AI/I? Let's take two points:
(i) human beings can think;
(ii) mono-cellular life cannot think.
We can get samples of every intermediate level, and we believe humans derive from simpler life forms like mono-cellular organisms. Where in the middle do we get intelligence? Establish a level, observe life at that level and at the levels below it; the difference gives the definition of I. An AI is then a human-constructed model that fulfils the defined requirements.

"#1 I note that Turing has to deal with theological objections, some arguing that machines cannot "think" since they have no soul. It bothers me that scientists in modern times should have to waste their time on such (silly) matters when it comes to serious discussion about their (AI) field. I am curious as to how the class feels about this subject: do these kinds of questions have any merit today?"

Unfortunately, the point is not what the class, or even all of humanity, thinks. These concerns remain valid until you can provide a formal proof of whether the soul or God does or does not exist. Until that time we can only have an opinion, a bit like saying that blondes are better than brunettes, and not much more. I think philosophy expresses this pretty well with the Greek word "doxa" (have a look at en.wikipedia.org/wiki/Doxa, or better, look up episteme and doxa). I believe Turing wanted to be very precise in his theory, and with his ability to see beyond his time he could not accept simply ignoring this kind of criticism.
|
|
|
Post by sigurdurjokull on Jan 30, 2011 9:14:33 GMT -5
3) Can the imitation of intelligence be called intelligence? Why/why not?

Interesting question. If I create a computer program that imitates all the functionality of a human mind, is that really intelligence? Is there really someone thinking "in there"? But as Turing pointed out, you could just as easily question whether other people actually have intelligence. Really, the only proof of the computer's intelligence is its ability to exhibit intelligent behavior, such as engaging in a conversation with me. I have no further proof of its experience; I can't tell if it actually experiences anything. But the same goes for other humans: I have no proof of their experience either, and it would be silly to deny the existence of others' experience. In the same way, I think we cannot deny that a computer program is intelligent if it exhibits the same functionality as a human mind. Of course, you could create a sloppy imitation that fails to capture all of the functions, ending up with something that seems human but is in reality missing some part of being human, whatever that might be. But being human is not really well defined: our brains are similar at a fundamental level, yet our thoughts differ, some people show different activations of different brain areas, and some people are even missing brain areas entirely. When does a human stop being human? When we remove more than 50% of the brain? If a being is 51% cyborg parts and 49% human brain, is it still human? I would say yes, because I believe that functionality is the only thing that matters (ignoring, of course, that I don't completely know how the functionality of the human mind works). If I create a circuit board that exhibits the same functionality as a brain region, I should be able to replace that brain region with the circuit board and still remain the same person, just as the brain constantly replaces the cells that constitute it while its functionality remains.

Then more questions: if I somehow isolate a "brain area", sever all its connections to the rest of the brain, and replace it with the chip, would it matter whether the entire inner functionality stays the same, or only that the input/output values stay the same with respect to the rest of the brain? Undoubtedly, if the circuit board that has replaced my brain area gives the appropriate output for the appropriate input, then the effect it has on the rest of the brain is the same as the brain area would have had, but the inner functionality is different. Whether that would affect experience would probably depend on whether that brain area was a "conscious" brain area or whether it only talks to conscious brain areas. But consciousness is not defined, and is again just a vague way to talk about our experience. Anyway, I think this opens a lot of interesting questions. In 20, 30, or 1000 years I might be able to run a simulation of what is equivalent to the human mind, and to them I would be a god. I could create experience, and make them suffer, for instance. I cannot predict the growth of computing power and such, but I think that we will be able to create intelligent machines that will themselves start to program and think and discover. I think that in a rapid stage of coevolution, man and machine will evolve to incredible new stages of intelligence and existence. The new technologies and sciences might open the interesting possibility of running enormous simulations that might themselves be called "universes", and we might even be living in such a universe. Although somehow I feel it is more likely that we are the product of a non-intelligent evolutionary process: the opposite of the godlike universe ours would evolve into, which would probably harness every piece of matter and energy for computation. I went a bit too far, I think...

The point of humans and machines being equivalent once they exhibit the same functionality opens a lot of moral and ethical questions, especially if you could create simulations containing "real" people that can think and have experience. Of course, simulating a universe is so far out of reach that it's silly, but simulating a mini-world where a human can exist and interact is not far-fetched. It could even be used for entertainment: why watch television when you can watch real people interact in a simulation? Why raise a child when you can just apply certain settings in a simulation to create the child you want? These questions may be irrelevant once we reach that point, but I at least am glad this technology does not exist today; considering how we treat other human beings, these cyberbeings would be in for suffering.
|
|