Post by halldorrh05 on Jan 18, 2009 9:12:23 GMT -5
1) AI researchers can build computer models of reasoning in particular domains because their discourse is, in one sense, precise. But they can only make such a wide range of domains commensurable with one another because their discourse is, in another sense, vague.
How can you be vague and precise at the same time?
2) In chapter 3, where the hacker culture is referenced, is it fair to assume that AI researchers largely see themselves as belonging to that culture?
Post by Snorri Jónsson on Jan 18, 2009 9:27:59 GMT -5
It was a hard read and took me quite some time to get through.
1. My understanding of AI before starting this course was that the main contributors to AI research were computer specialists/engineers. Are computer specialists/engineers really the main contributing field, or is there another field that contributes more to AI research?
2. Which industries are leading in using AI technology in commercial products?
Post by Björn Vignir Magnússon on Jan 18, 2009 10:07:53 GMT -5
I would say this text is really thought-provoking, and after reading it, it was hard to narrow my questions down to just two. Nevertheless, I would like to propose these questions, which I think are relevant to the text.
1) When a rational agent performs "right thinking", is it aware of this itself, or does it do so only because it has been programmed to act in a certain way?
2) Does a rational agent have to have a finite set of possible answers in order to choose what is best in every situation? (A small sketch of what I mean follows below.)
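Not from the text, but a minimal sketch of what "choosing what is best" usually means mechanically in the textbook sense of a rational agent: evaluate each available action with a performance measure and pick the one with the highest score. The action names and the utility table below are invented for illustration; the point is that this kind of argmax only works straightforwardly when the set of options is finite (or can at least be enumerated).

```python
# Minimal sketch (not from the text): a "rational agent" in the textbook
# sense picks, from a FINITE set of candidate actions, the one that
# maximizes its performance measure for the current percept.
# The actions and the utility table are hypothetical, for illustration only.

def utility(action, percept):
    # Hypothetical performance measure: how well does this action
    # fit what the agent currently perceives?
    scores = {
        ("dirty", "suck"): 10,
        ("dirty", "move"): 2,
        ("clean", "move"): 5,
        ("clean", "suck"): 0,
    }
    return scores.get((percept, action), 0)

def choose_action(percept, actions):
    # The argmax over a finite action set -- this is the whole
    # "choose what is best" step. With an infinite or unenumerable
    # action space, this loop is no longer possible as written.
    return max(actions, key=lambda a: utility(a, percept))

if __name__ == "__main__":
    actions = ["suck", "move"]              # finite set of possibilities
    print(choose_action("dirty", actions))  # -> "suck"
    print(choose_action("clean", actions))  # -> "move"
```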
Post by Haukur Jónasson on Jan 18, 2009 10:36:57 GMT -5
The paper mostly made me think of the future development of AI...
1) The following passage caught my interest:
These domains appealed to early AI researchers in part because computer vision and robotics were very poorly developed, and they permitted research on cognition to begin without waiting on those other, possibly much more difficult research problems to be solved.
and raised the question: the incompleteness of which related field currently hampers the continuing development of AI the most?
2) The 'robot uprising' is a common theme in science fiction, but is it an actual possibility that it will one day occur? Can human-created AI theoretically rise against its makers?
"Artificial intelligence is no match for natural stupidity."
I don't have anything to say about this text specifically, but here are a few thoughts that came up:
1. I heard somewhere that a human will never be able to fully understand the brain, simply because it's a brain trying to understand itself. In a way, this says that you need a bigger brain to understand a smaller brain.
Can that be applied to the AI field? Will humans never be able to make human-like AI? Do we need AI to understand the human brain?
2. Is it fair to compare computers to humans? We have 20+ years of runtime, but a computer has a lot less.
3. More of a joke than anything else: let's say intelligent design is true, which I don't believe. Is the uncanny valley hypothesis then a warning from God not to dabble with AI? (see: en.wikipedia.org/wiki/Uncanny_Valley)
Post by Sigurður Júníusson on Jan 18, 2009 18:14:31 GMT -5
1. Since the military has had such an effect on AI, I was wondering whether any ethical guidelines have been developed for people working in AI?
2. It seems to me that it‘s very difficult for AI agents to distinguish relevant information from irrelevant. As their environment gets bigger and more things are added for them to perceive, isn‘t it more likely that the AI agents will start to make irrational decisions?
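Not an answer, but a toy sketch of why this gets harder as the environment grows: if an agent weighs every percept when scoring an action, irrelevant percepts add noise to each decision, and with enough of them the noise can swamp the few percepts that actually matter. The percept names and scoring rule below are invented for illustration.

```python
# Toy sketch (invented for illustration, not from the text): an agent that
# scores actions against EVERY percept it receives. As the environment grows,
# irrelevant percepts contribute noise, and the choice can drift away from
# what the single relevant percept alone would recommend.

import random

def contribution(action, percept):
    # Hypothetical: only "obstacle" actually matters for choosing "turn".
    if percept == "obstacle":
        return 5.0 if action == "turn" else -5.0
    return random.uniform(-1, 1)  # irrelevant percepts contribute noise

def score(action, percepts, relevance):
    # Sum each percept's contribution, weighted by how relevant the agent
    # believes it is. With no filtering (all weights 1), the many irrelevant
    # percepts dominate the total.
    return sum(relevance.get(p, 1.0) * contribution(action, p) for p in percepts)

random.seed(0)
percepts = ["obstacle"] + [f"noise_{i}" for i in range(200)]
actions = ["turn", "go_straight"]

# No relevance filtering: 200 noisy percepts can swamp the one that matters.
unfiltered = max(actions, key=lambda a: score(a, percepts, relevance={}))

# With a relevance filter that ignores everything but "obstacle":
relevance = {p: 0.0 for p in percepts}
relevance["obstacle"] = 1.0
filtered = max(actions, key=lambda a: score(a, percepts, relevance))

print(unfiltered, filtered)  # the filtered agent reliably picks "turn"
```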