|
Post by Hannes Vilhjalmsson on Jan 27, 2009 16:27:46 GMT -5
The third paper we'll discuss addresses the question of what artificial intelligence is in a very practical manner. It describes the field of AI with several examples, some of which you have seen in the class so far (e.g. search) and some of which you will see later in the class (e.g. learning). One interesting thing about this paper is that it is written by a philosopher for those who are studying the brain, i.e. not necessarily for computer scientists. A nice thing about that is that it gives a very concise picture of a field within computer science with relatively "fresh eyes", and it is also able to use concepts such as mind and consciousness knowledgeably. The paper is the chapter called "Artificial Intelligence" from the book "Matter and Consciousness" by Paul M. Churchland (revised edition, 1993). There have been advances in AI methods since then, which we will cover in the class, but the fundamentals are the same. You can retrieve the paper here: programming_intelligence.pdf Since the paper got assigned so late in the day on Tuesday (around 21:30), I'll give you one more day to discuss it, so post your discussions by the end of Saturday.
|
|
|
Post by Helgi Páll Helgason on Jan 28, 2009 5:42:49 GMT -5
How strong is the relationship between self-consciousness and imagination? Are the two fundamentally distinct abilities? Does a computer that can run code in simulation mode, i.e. execute it without affecting the world or itself, for the sole purpose of observing the results, have an imagination, assuming it has some ability to decide for itself what simulations to run and how to apply the results? (A small sketch of this simulation idea follows below.) The core differences between the brain and a CPU are widely discussed in relation to AI. Isn't it obvious that we would need a new computing architecture/model, capable of running "on top" of CPUs, to create artificial general intelligence (AGI)? Or to paraphrase: does anyone expect to program AGI in C++ using no special constructs or architecture? My point is that many of the differences between the brain and the CPU can be abstracted away with architecture.
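To make the "simulation mode" idea concrete, here is a minimal sketch in Python of an agent that tries each action on a copy of its world and only then acts on the real one, i.e. imagination as consequence-free trial execution. All names here (GridWorld, ImaginativeAgent) are hypothetical, invented just for illustration:

```python
import copy

class GridWorld:
    """A toy world: an agent at a position, trying to reach a goal."""
    def __init__(self, pos=0, goal=5):
        self.pos, self.goal = pos, goal

    def apply(self, action):
        self.pos += action                 # action is +1 or -1; mutates this world

    def score(self):
        return -abs(self.goal - self.pos)  # closer to the goal is better

class ImaginativeAgent:
    """Tries each action on a *copy* of the world before committing."""
    def choose(self, world, actions):
        best_action, best_score = None, float("-inf")
        for a in actions:
            imagined = copy.deepcopy(world)  # "simulation mode": a private copy
            imagined.apply(a)                # no effect on the real world
            if imagined.score() > best_score:
                best_action, best_score = a, imagined.score()
        return best_action

world = GridWorld()
agent = ImaginativeAgent()
action = agent.choose(world, actions=[+1, -1])
world.apply(action)   # only now does the real world change
print(world.pos)      # 1: both moves were "imagined", and +1 was picked
```

Whether running such a loop over an internal copy counts as imagination is of course exactly the question being asked.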
|
|
|
Post by halldorrh05 on Jan 29, 2009 12:25:28 GMT -5
So to be able to simulate the brain we need systems that have many cores, executing instructions at the same time? So the trend now towards "more cores per die" is welcome in AI research?
Do researchers have any idea how much information the brain can store and work with at the same time?
|
|
|
Post by Hjalti Kolbeinsson on Jan 29, 2009 13:28:00 GMT -5
The author mentions that if a machine simulated the inner workings of a human completely, then it would be racism to say that this machine was not a person. But then the question is: do we have a soul? If so, then the soul is not part of the body, so the machine could not simulate the soul, or could it? I also think that if a machine could look and act exactly like a human, so that people wouldn't be able to tell the difference, then many people would be scared to death of it, or even of just the idea of such a machine.
In the learning section of the paper the author mentions learning new concepts. He also mentions some machines that seem to be able to do that to some extent. Has this been achieved so far (for example, does the Leonardo robot do this), and what kind of machines is the author talking about?
|
|
|
Post by Snorri Jónsson on Jan 30, 2009 8:48:51 GMT -5
Did Deep Blue or other chess computers use learning, or is it all preprogrammed?
Has there been any research trying to use a great number of small processors to simulate brain function?
|
|
|
Post by hordur08 on Jan 30, 2009 9:48:31 GMT -5
1) Computers are good at doing certain things, even much better and faster than a human. Do you think it's likely that computers will ever be as good as or better than humans at those tasks that are very difficult for computers, like scene apprehension and sensorimotor coordination?
2) Will computers ever be able to process simultaneously as much as a human brain does?
|
|
|
Post by Christian Zehetmayer on Jan 30, 2009 12:42:46 GMT -5
1) When a computer works a lot, the CPU gets hot and it isn't fast any more. To regulate its heat balance, the fan is turned on. That's the same phenomenon as sweating in humans. So maybe the computer has something like self-consciousness: it has needs and reacts to them. 2) Are there the very same problems with hearing as with vision? There are voice control and dictation systems on the market which work pretty well. Maybe something interesting: there is an AI robot dog from Sony called AIBO. Production has now stopped, but you could order it as a puppy and train it: support.sony-europe.com/aibo/index.asp?language=en Or see a fascinating video of a robot developed by Boston Dynamics for the Defense Advanced Research Projects Agency: www.youtube.com/watch?v=W1czBcnX1Ww
|
|
|
Post by Richard Ottó O'Brien on Jan 30, 2009 13:16:21 GMT -5
1) This book talks about the progression from serial processing (one CPU) to parallel processing in general-purpose AI, which I understand to be basically multiple systems or CPUs. I have recently read an article about GPGPU (general-purpose computing on graphics processing units), which basically utilizes the multiple cores of a graphics processor. Have you used, or are you using, this technology? And is the AI business the second largest contributor to it after the gaming industry? 2) Consciousness in an AI system is a weird and vague term, in my opinion, as it can mean that a system could be conscious of itself in our world and maybe start developing a "personality" of some sort. Or am I misinterpreting the word consciousness?
|
|
|
Post by Jon Gisli Egilsson on Jan 30, 2009 16:24:26 GMT -5
1) About serial and parallel processing: the computer as a whole has bits and pieces that do various things. There is memory, there is a graphics card, there is the CPU, and so on. The graphics card is very good at matrix calculations and the CPU at other kinds of calculations. Isn't this in a sense what's going on in our brain? We have a part of our brain just for recognizing faces; if that part goes out of order we can't tell the difference between people's faces, although we can still tell if a person has a big nose or a small nose. So I guess the question is: what is the ultimate machine for AI work? Which kinds of calculations or functions are best for AI?
2) There wasn't much new in this paper, although it was a good one. So here is one question not so related to the article. I know people are developing exoskeletons for humans to wear. These have been tested and, when worn by a human, give the wearer tenfold the strength they would otherwise have. Since we can't (yet, possibly never) mimic the brain well enough, why don't AI people look more into systems that work really well WITH the human brain, like for instance these super-strength exoskeletons?
|
|
|
Post by Bjarni Gunnarsson on Jan 30, 2009 18:30:46 GMT -5
1) Is it known/estimated how big the "hardcoded" part of the brain is vs. the "learned" part in an adult human? 2) Did anyone see what was in the "Cooperative Computation of Stereo Disparity" picture? I stared at it for too long, I think, and couldn't make anything out of it.
|
|
|
Post by Birna Íris on Jan 31, 2009 7:02:43 GMT -5
1) Humans sometimes perceive the world in very different ways, often based on their experience. Seeing and hearing can vary drastically between two human beings, sometimes resulting in misunderstanding and confusion. Should this be included in implementations of computer vision and language technology? If the purpose is to mimic humans, it probably should. 2) Good and bad, right and wrong: human beings learn to distinguish between these things when raised in a social environment, and there are a lot of differences between good/bad and right/wrong in different social environments. Do you think a computer could learn this? Or would it be the programmer's decision to give it reward and punishment whenever she felt it would be right? 3) In the ELIZA example the computer shows sympathy with the patient by using language in the right way (a rough sketch of this kind of pattern matching follows below), but will we be able to implement sympathy in a computer system, the feeling of sympathy? I read a short story by John McCarthy about a robot making decisions to save a child's life. I recommend it: www-formal.stanford.edu/jmc/robotandbaby.pdf
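For reference, the kind of pattern matching ELIZA relied on can be shown in a few lines of Python. This is only a sketch in the spirit of Weizenbaum's program, not his actual rule set; the rules here are invented for illustration:

```python
import re

# A few ELIZA-style rules: a pattern to match, and a template that
# reflects the patient's own words back as a sympathetic question.
RULES = [
    (r"I am (.*)",   "Why do you say you are {0}?"),
    (r"I feel (.*)", "How long have you felt {0}?"),
    (r"My (\w+)",    "Tell me more about your {0}."),
]

def eliza_reply(utterance):
    for pattern, template in RULES:
        match = re.match(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when no rule matches

print(eliza_reply("I feel lonely"))       # How long have you felt lonely?
print(eliza_reply("My mother hates me"))  # Tell me more about your mother.
```

The "sympathy" is entirely in the surface form of the reply, which is exactly why the question of implementing the feeling itself is a separate one.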
|
|
|
Post by Stefán Einarsson on Jan 31, 2009 10:36:42 GMT -5
I read an article related to the "Purposive Behavior and Problem Solving" bit. The article was about a checkers program that cannot lose, using a "weakly solved" approach: once there are only 10 pieces left on the board, an endgame database is activated, and from there the program cannot lose. Even with only 10 pieces left there are 39 trillion possibilities. I wonder if such a thing is possible with chess, at least once only a few pieces of certain types are left, so that the computer cannot lose. Anyway, the article is here: www.livescience.com/technology/070719_ap_checkers_comp.html
Also, I am wondering about AI learning. It is written in the article that "The first and simplest is just a matter of saving, in memory, solutions already achieved. When the same problem is confronted again, the solution can be instantly recalled from memory and used directly" (a tiny sketch of this kind of learning follows below). I can't think of how that method of learning differs from human learning. Is there any difference? I was also wondering about our last discussion about putting "robotic babies" into foster care and letting them learn, over a period of time, how to "learn". I don't think that is the right way to teach someone. You should teach a person or robot at an optimal speed, such that he/she/it will still "understand" what is going on. So I see teaching a computer as something more like in The Matrix: you plug in a program, and voilà, you just learned how to make AI.
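Here is a tiny Python sketch of the rote-learning-as-memoization idea the quote describes: solve a problem once, save the answer, and recall it instantly the next time. The problem being solved is a made-up stand-in:

```python
# Rote learning as memoization: solve once, save, recall next time.
solved = {}

def solve(problem):
    if problem in solved:               # "instantly recalled from memory"
        return solved[problem]
    answer = expensive_search(problem)  # stands in for a long, slow search
    solved[problem] = answer            # save the solution achieved
    return answer

def expensive_search(problem):
    # A made-up stand-in for real problem solving: sum the digits.
    return sum(int(d) for d in str(problem))

print(solve(12345))  # computed the slow way (15)
print(solve(12345))  # second call: looked up, not recomputed (15)
```

Whether this lookup-after-the-fact is meaningfully different from human practice effects is exactly the open question raised above.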
|
|
|
Post by olafurgi on Jan 31, 2009 14:03:58 GMT -5
How advanced is the machine vision that is out there today? (How easily do today's systems process the environment they are monitoring (seeing)?)
Would it be a simpler approach in AI to start by simulating the brain of a simpler animal than the human? (Is the concept of simulating the human mind creating frustration because of the complexity of the human brain?)
|
|
|
Post by Jón Trausti on Jan 31, 2009 14:23:25 GMT -5
1) I was also very curious about chess. If we had enough memory and CPU, could we find a way, or perhaps an algorithm, to always win at chess? Surely we'd have to look forward recursively through many trillions of moves? But from that we could store the best move for each state, and therefore, in the end, we would end up saving the best move for every possible state. So my QUESTION is: wouldn't it be possible today? We'd just calculate one state for a few days, or as long as it may take, then save the result and do the same for the next possible state, et cetera...? (A rough sketch of this store-the-best-move idea, on a much smaller game, follows after this post.)
2) I was curious about how people feel/learn and how that could be accomplished in AI. My idea was this: every object in this world is basically the root of a tree, and every child of the root has a relation to that object. For example, a child hears a sound and sees some action of the father upon hearing the sound, so the child links the sound to that action. When the child hears that sound next time, directed towards itself, it may try to do the action, see the reaction, and link the reaction to the sound. Just an idea that came to my mind. Has this been thought of before?
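The full chess version of point 1 is out of reach (there are on the order of 10^44 legal chess positions by common estimates, far more than we could ever store), but the compute-and-save-the-best-move idea can be demonstrated on a toy game. A minimal Python sketch, using a simple subtraction game in place of chess:

```python
# "Save the best move for every state", on a toy game instead of chess:
# players alternately take 1-3 stones; whoever takes the last stone wins.
best_move = {}  # stones left -> a winning move, or None if the position is lost

def solve(stones):
    """Return True if the player to move can force a win from here."""
    if stones in best_move:
        return best_move[stones] is not None   # recall the stored answer
    for take in (1, 2, 3):
        if take <= stones and not solve(stones - take):
            best_move[stones] = take   # this move leaves the opponent losing
            return True
    best_move[stones] = None           # every legal move loses
    return False

solve(21)             # tabulates all states from 0 up to 21 stones
print(best_move[21])  # 1: taking one stone leaves the opponent at 20, a lost position
```

Real endgame databases are actually built backwards from the final positions (retrograde analysis) rather than forwards like this, but the stored-result-per-state idea is the same.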
|
|
|
Post by Hólmar Sigmundsson on Jan 31, 2009 14:35:08 GMT -5
1) So the optimal number of cores in an AI system is as many as there are brain cells? Maybe combined in different clusters doing different work, much like the brain?
2) Would it make a difference if we made CPUs that specialize in certain tasks, instead of making programs for those tasks?
|
|