|
Post by Hannes Vilhjalmsson on Jan 13, 2009 21:13:54 GMT -5
Dear students,

The first paper we will discuss in this course is "Toward a Critical Technical Practice: Lessons Learned in Trying to Reform AI" by Philip E. Agre at UCLA. You can find the paper here: polaris.gseis.ucla.edu/pagre/critical.html

NOTE: You only need to read sections 2, 3 and 4 (feel free to read more, but let's focus on these sections). This relates to our introduction to AI as a field and its history.

After you have read these sections, come back to this forum and post 2 questions related to their contents. Your post has to arrive by 23:59 on Sunday night (after this week we'll start doing this by Friday night) to count towards your paper discussion grade. The questions can point out concepts that you have difficulty understanding, but preferably they should be questions that provoke discussion of the material. No grade will be given for the questions themselves; you automatically get points for being on time. In the discussion section next week, selected questions from those submitted will be discussed by the group as a whole, and you are expected to participate.
|
|
|
Post by My questions on Jan 14, 2009 5:38:36 GMT -5
1) Does an "AI dictionary" exist anywhere, i.e., a set of definitions of commonly used AI concepts and terms that has wide community agreement behind it?
2) Is there a good example of the state of the art in intelligent machines today? Preferably not domain-specific, but exhibiting artificial general intelligence to some degree.
|
|
|
Post by Helgi Páll Helgason on Jan 14, 2009 5:40:16 GMT -5
Forgot to log in, here are the questions again :-)
They are general in nature, but the same can be said for the article.
1) Does an "AI dictionary" exist anywhere, i.e., a set of definitions of commonly used AI concepts and terms that has wide community agreement behind it?
2) Is there a good example of the state of the art in intelligent machines today? Preferably not domain-specific, but exhibiting artificial general intelligence to some degree.
|
|
|
Post by Hjalti Kolbeinsson on Jan 14, 2009 7:04:35 GMT -5
1) Why do AI people say they don't want to use vague words (beginning of part 4), when they themselves use extremely vague words (end of part 4)?
2) What is phenomenology (end of part 2)? (Perhaps not connected to AI in any way, but the author read a lot of it when researching AI for the military.)
|
|
|
Post by Jon Gisli Egilsson on Jan 14, 2009 17:28:03 GMT -5
1) The Second World War was maybe the kickstart for AI development, but how much AI research goes on for the military today? Is the gaming industry perhaps a larger developer of AI now? Is there anything else that has more impact on the evolution of AI?
2) The last paragraph in part 3 looked interesting. The bad part is that I didn't understand it too well. How does AI programming compare to industrial computer programming (is that what you call general programming?), in plain language?
|
|
|
Post by hordur08 on Jan 15, 2009 7:55:16 GMT -5
1) I have one consideration regarding warfare and scientific progress. I have often heard the claim that during wartime science progresses much faster than during peacetime. But as this article says, AI research made more progress after the war. Maybe more money is put into research in wartime, but the focus of research might be limited to the weapons industry, and therefore scientific progress in the sense of improving human life might largely be stopped. What is your opinion on that?
2) Another consideration connects to last Tuesday's lecture as well as this article (though maybe not as clearly). How much can be gained by looking into how people think and act, in terms of making machines that can act in a natural environment? Take computer vision, for example: humans see their environment and draw conclusions about distance from several depth cues, like the size of objects, how different objects appear to each eye (cues from stereo vision), etc. In computers it is much more plausible to use other kinds of distance measurement, like sending out a signal that reflects back to the machine and measuring the time it takes and the strength of the signal. I started wondering about this because of the example of how humans learned to fly: they learned to build aircraft by studying aerodynamics instead of just mimicking birds. So my question is: will mimicking human behaviour, or studying the environment more directly, contribute more to our understanding of making machines that can react to the environment?
|
|
|
Post by arnists on Jan 15, 2009 10:11:49 GMT -5
1) Are the philosophical underpinnings of AI research sound? There are any number of philosophical attacks on AI that would make the enterprise entirely empty. (I kept thinking about Wittgenstein's beetle while reading the article.)
2) Do we have a test that doesn't invoke the Stewart rule? The problem with evaluating intelligence is that any definition of the solution makes the problem solvable by weak AI (see Searle), while not defining the form of a solution invokes the Stewart rule.
|
|
|
Post by Richard Ottó O'Brien on Jan 15, 2009 13:05:00 GMT -5
1. When would you, as an AI person (Hannes, I assume), define a part of a program or a system as A.I.? A system that just performs a set of actions when confronted with a problem, or would it have to come up with / "think of" a solution, and/or maybe learn from others, like humans and animals do?
2. Is it considered bad if an A.I. system makes a mistake, simply because we humans and other intelligent lifeforms make them and these systems sort of try to mimic us? For example, when I plan to go shopping I may have planned to make kjötsúpa (Icelandic meat soup) beforehand, so I make a shopping list, keeping in mind what I have at home and what I need to buy to complete the task of cooking the soup. Now suppose I forgot to put one ingredient on the list; I cannot make the soup without going shopping again. Would it be acceptable for an A.I. system to make such a mistake?
|
|
|
Post by Christian Zehetmayer on Jan 15, 2009 16:54:42 GMT -5
1) When is AI going to become reality? How much effort will be needed to realize it? And how much computing power will be needed?
2) On defining AI: is AI the emergence of reasoning, planning, learning, choosing, strategizing, …?
|
|
|
Post by Hólmar Sigmundsson on Jan 15, 2009 17:38:31 GMT -5
|
|
|
Post by Hjalti Magnússon on Jan 16, 2009 10:13:44 GMT -5
1) Philosophy and social studies have contributed a lot to AI, but in the article it is said that AI people regard these fields as imprecise. My question is: has AI contributed anything to those fields in return, in terms of, say, more precise definitions of terminology or anything like that?
2) In the last paragraph of section 3 it is mentioned that modularity (which has been emphasized as very important in every programming course I've taken) is not necessarily the best way to go when it comes to AI. Why is this the case, and does it apply to all fields of AI?
|
|
|
Post by Jón Trausti on Jan 16, 2009 11:32:26 GMT -5
It was rather hard for me to focus on what was being discussed in this reading material.
1) Is there a good formal definition of AI? Could you call a device with a sensor an artificial intelligence? Such as a simple device that would only detect a person walking past it.
2) If the military was so interested in AI, did they ever study the consciousness of the human body? For example, if a child had no eyes, feeling or ears, would it be brain dead, as in, would it not have thoughts throughout its lifetime? If so, could we not say that consciousness is seeded by those (minimum) 3 inputs?
I couldn't come up with any good questions directly related to the reading material; I hope these are close enough.
|
|
|
Post by Birna Íris on Jan 17, 2009 11:43:10 GMT -5
1) Like many of you, my thoughts on this article concerned the part that war and the military play in AI research. I think it is an interesting (and a bit sad) fact that war and the military often drive progress in science. Why doesn't such a drive come from some other source? Of course, research and science are driven by other things than war and the military, but those seem to weigh heavily. Maybe it is just a question of money: in the U.S. there has of course been a very strong emphasis on the military for many decades. So maybe the field of AI would not have grown so much on some other source of money... (??)
2) In chapter 3 the author states that AI people treat human beings as physically realized, meaning, I assume, that they are aware of their own bodies and minds. Isn't this a simplistic statement? Doesn't AI deal with human behavior that does not fall under this definition of physically realized entities? I think without a doubt that we (meaning us humans) will be able to implement emotions. Emotions are controlled in the brain (sometimes unconsciously), and the brain can (and will) be implemented.
|
|
|
Post by Stefán Einarsson on Jan 18, 2009 7:15:13 GMT -5
1) It seems that in this article the terms around "AI" are vague in definition, kind of a religion more than a science. So basically my thought is: if a theory is so vague that it cannot be disproved, it is kind of like everybody is tip-toeing around each other hoping that no one disproves their work. I would say that the vagueness does more harm than good for AI development. What is your opinion on that subject?
2) It says in chapter 4 that AI is a computer system that exhibits intelligence. I was kind of wondering what intelligence is. Is it always making the right choice? Or is it making an educated guess from prior encounters, and if it is a guess, does that mean AI systems have to store all the input they get ("sense") and base future encounters on that data?
|
|
|
Post by Stefán Einarsson on Jan 18, 2009 7:17:48 GMT -5
Sorry forgot to log in and posted accidentally as a guest. Here are my questions again...
1) It seems that in this article the terms around "AI" are vague in definition, kind of a religion more than a science. So basically my thought is: if a theory is so vague that it cannot be disproved, it is kind of like everybody is tip-toeing around each other hoping that no one disproves their work. I would say that the vagueness does more harm than good for AI development. What is your opinion on that subject?
2) It says in chapter 4 that AI is a computer system that exhibits intelligence. I was kind of wondering what intelligence is. Is it always making the right choice? Or is it making an educated guess from prior encounters, and if it is a guess, does that mean AI systems have to store all the input they get ("sense") and base future encounters on that data?
|
|