|
Post by halldorrh05 on Jan 18, 2009 9:12:23 GMT -5
1) AI researchers can build computer models of reasoning in particular domains because their discourse is, in one sense, precise. But they can only make such a wide range of domains commensurable with one another because their discourse is, in another sense, vague.
How can you be vague and precise at the same time?
2) In chapter 3, when the hacker culture is referenced, is it fair to assume that AI researchers largely see themselves as belonging to that culture?
|
|
|
Post by Snorri Jónsson on Jan 18, 2009 9:27:59 GMT -5
It was a hard read and took me quite some time to get through.
1. My understanding before starting this course was that the main contributors to AI research were computer specialists/engineers. Are computer specialists/engineers really the main contributing field, or is there another field that contributes more to AI research?
2. Which industries are leading in using AI technology in commercial products?
|
|
|
Post by Björn Vignir Magnússon on Jan 18, 2009 10:07:53 GMT -5
I would say this text is really thought-provoking, and after reading it, it was hard to narrow my questions down to just two. Nevertheless, I would like to propose these questions, which I think are relevant to the text.
1) When a rational agent performs "right thinking," is it aware of this itself, or does it do so only because it has been programmed to act in a certain way?
2) Does a rational agent have to have a finite set of possible answers in order to choose what is best in every situation?
|
|
|
Post by Haukur Jónasson on Jan 18, 2009 10:36:57 GMT -5
The paper mostly made me think of the future development of AI...
1) The following passage caught my interest: "These domains appealed to early AI researchers in part because computer vision and robotics were very poorly developed, and they permitted research on cognition to begin without waiting on those other, possibly much more difficult research problems to be solved." It raised the question: which related fields' incompleteness is currently most hampering the continuing development of AI?
2) The 'robot uprising' is a common theme in science fiction, but is it an actual possibility that it will one day occur? Can human-created AI theoretically rise against its makers?
|
|
|
Post by Alfreð Már Alfreðsson on Jan 18, 2009 13:10:16 GMT -5
I had some trouble reading this article. There were a lot of words I didn't understand.
1) It says at the beginning of part 2 that after World War II, psychologists and others started working in the field of AI. Are psychologists still a big part of AI research and design today?
2) I didn't quite understand the AI definition of the word "plan" in part 4.
|
|
|
Post by stefan on Jan 18, 2009 15:25:17 GMT -5
I found it quite difficult to read and understand this article.
1) "The principle of modularity, for example, might be treated as an axiom or an instrumental expedient in industrial programming." What does that mean?
2) I'm also interested in where AI stands today compared to nature.
|
|
|
Post by steinarhugi on Jan 18, 2009 16:19:23 GMT -5
1. Is there a concrete definition of what AI software is? When is a program using AI to make decisions, and when is it simply making educated guesses? When does a simple data cache become machine learning?
2. During my reading I got the feeling that the author is somehow insecure about AI and repeatedly defends it. Why is that? What is the most common argument against "computer thinking"?
|
|
|
Post by olafurgi on Jan 18, 2009 16:30:16 GMT -5
1. With rapid advances in computing power, is the AI field now concentrating more on modularity than before, rather than cutting down on modularity to make the AI more efficient?
2. What are your views on the (as the author describes it) vagueness of concepts such as planning and knowledge? If these concepts were more precisely defined, would that simplify anything?
|
|
|
Post by arnij07 on Jan 18, 2009 17:15:05 GMT -5
I don't have anything on this text specifically, but here are a few thoughts that came up:
1. I heard somewhere that a human will never be able to fully understand the brain, simply because it's a brain trying to understand itself; in a way, that's saying you need a bigger brain to understand a smaller brain. Can that be applied to the AI field? Will humans never be able to make human-like AI? Do we need AI to understand the human brain?
2. Is it fair to compare computers to humans? We have 20+ years of runtime, but a computer has a lot less.
3. More of a joke than anything else: let's say intelligent design is true, which I don't believe. Then is the uncanny valley hypothesis a warning from God not to dabble with AI? (see: en.wikipedia.org/wiki/Uncanny_Valley)
|
|
|
Post by Sigurður Júníusson on Jan 18, 2009 18:14:31 GMT -5
1. Since the military has had such an effect on AI, I was wondering whether any ethical guidelines have been developed for people working in AI?
2. It seems to me that it's very difficult for AI agents to distinguish relevant information from irrelevant. As their environment gets bigger and more things are added for them to perceive, isn't there a greater possibility that AI agents will start to make irrational decisions?
|
|
|
Post by Bjarni Gunnarsson on Jan 18, 2009 18:27:43 GMT -5
1) Isn't the goal of mimicking human behaviour a bit unrealistic in the near future? It took nature millions of years to get us here, and we are hoping to recreate that in something like 60 years.
2) Are military labs sharing their research on AI today, or is it believed they have gone further than we can imagine?
|
|