|
Post by Hannes Vilhjalmsson on Jan 10, 2011 11:24:12 GMT -5
Dear students, The first paper we will discuss in this course is "Toward a Critical Technical Practice: Lessons Learned in Trying to Reform AI" by Philip E. Agre at UCLA. You can find the paper here: polaris.gseis.ucla.edu/pagre/critical.html
NOTE: You only need to read sections 2, 3 and 4 (feel free to read more, but let's focus on these sections). This relates to our introduction to AI as a field and its history.
After you have read these sections, come back to this forum and post 2 questions related to their contents. Your post has to arrive by 23:59 on Sunday night to count towards your paper discussion grade. The questions can point out concepts that you have difficulty understanding, but preferably they should provoke discussion of the material. The questions themselves will not be graded; you automatically get points for posting on time. In the discussion section next week, we will choose some of you to pose your questions for the rest of us to discuss. You are all expected to participate in the discussion.
|
|
|
Post by Jökull Jóhannsson on Jan 12, 2011 8:05:49 GMT -5
I have no idea how the questions are supposed to be, so I just wrote the first things that came to mind after reading the article.
1. "The construction of an AI model begins with these most basic interpretations, and it proceeds systematically outward from them." This statement got me thinking about comparing a newly constructed AI to a newborn baby. Do you think that if an AI were constructed to be similar to a baby, and fed information over a long period of time in the same form in which humans receive their information through daily life, it could be compared to a human?
2. The military had a big part in funding and starting the field of AI, and the paper states that they had a strong interest in it. What do you think about the use of AI in modern warfare, and about the idea that the field of AI could have been started for the purpose of finding better ways of killing people?
|
|
|
Post by gunnar on Jan 13, 2011 8:32:46 GMT -5
1) I'm guessing that since the military had such a big part in funding and starting AI, military applications were the biggest area of AI research. What is the biggest area today? And what do you think it will be 10 years from now?
2) The methods we have today in AI seem to be working really well, but how much research is being done on new approaches, languages, and computers that might help us make AI more advanced? Maybe the reason we aren't able to make an AI like the human mind is that the current methods and hardware aren't capable of it. Wouldn't we need programs that can rewrite themselves in order to make them capable of learning everything?
|
|
|
Post by Guðni Þór Guðnason on Jan 13, 2011 16:13:24 GMT -5
1) The army invested in AI research after WWII; what were some of those projects' goals?
2) "This dual character of AI terminology -- the vernacular and formal faces that each technical term presents -- has enormous consequences for the borderlands between AI and its application domains." What are the application domains of the AI field?
|
|
|
Post by Ásgeir Jónasson on Jan 14, 2011 8:34:59 GMT -5
1.
Do you think that military funding of AI is still a cause of dissent today, and that people fear the notion of an army that (probably) has no emotions and no morals (which of course is a matter of implementation)? Or do you think that people would actually welcome the existence of soldiers that do not have personal gain as part of their performance measure? A related question: do you think that the use of AI armies will be prohibited in warfare (as the use of bullets which expand or flatten upon impact is), or could it go the other way around, with the use of human soldiers being prohibited in the future?
2.
"AI people generally consider that their goals of mechanized intelligence are achievable for the simple reason that human beings are physically realized entities, no matter how complex or variable or sociable they might be"
Have most modern AI researchers abandoned dualism, or is it still considered possible that we may never be able to emulate the workings of a human (or other being's) brain simply because not all of its aspects are physical in nature? If they have abandoned it, is that because holding it would require them to be spiritual or religious (which I assume most of them, like most scientists according to polls, are not), or because it halts progress? Furthermore, in the unlikely case that dualism were proven correct, how would that affect AI research?
|
|
|
Post by Carmine on Jan 14, 2011 12:25:39 GMT -5
These are the main questions that I had:
1) Reading the paper, Artificial Intelligence seems to be a field that is continuously denigrated, disapproved of, or simply not well regarded. It seems that AI scientists have to find justifications or defenses for their work because they are criticized, and that they in turn dismiss people who don't like their projects or intentions. Is this continuous conflict real, or is it just an impression of the author of the paper?
2) Planning is a very broad topic. Is it possible to create a generic algorithm that is able to solve any kind of planning problem? Or how far is it possible to generalize across all planning problems (taking into account that maybe all of them are solved in the same way by our minds)?
3) In each of our decisions, what is the proportion of conscious versus non-conscious thought that affects it? And can we think about implementing non-conscious (or non-rational) mental processes as well?
|
|
|
Post by lorenzo on Jan 14, 2011 14:03:47 GMT -5
AI research started during the Second World War and gave a great boost to military technology, but now the research covers a much larger area, so I was wondering: how do the results of AI affect our lives?
Do you think that, with the creation of new technology useful for AI research, the field can become more precise and less vague in how it defines the solution to a problem than it was at the beginning?
|
|
|
Post by grimurtomasson on Jan 15, 2011 4:34:08 GMT -5
1. What has changed, if anything, in the AI field culture since the publication of this paper? Is working software still the main form of acceptable criticism?
2. Has the gap between the informal definitions of preexisting words (concepts), which often have multiple meanings, and their formalization (implementation) been bridged to any extent?
This is what I find most interesting, along with whether interdisciplinary cooperation, or at least the use of concepts and models from disciplines other than AI, is more acceptable today.
|
|
|
Post by Hrafn J. Geirsson on Jan 15, 2011 10:12:23 GMT -5
1) In the past, a large part of AI funding came from the military, yet the military did not seem too concerned about whether or not the researchers' goals were military applications. Where does AI get its funding today? How much, and in what ways, does the source of funding affect the emphasis of the field?
2) "Dualism is good and behaviorism is bad" seems to be the attitude in AI. Why do AI researchers favor one philosophy over the other, when their measure of general success is defined as the functionality of their computer systems?
|
|
|
Post by Þorgeir Karlsson on Jan 15, 2011 11:10:31 GMT -5
Throughout the paper the author refers to AI researchers as a whole, calling them "AI people" and making blanket statements that all AI scientists believe this or are against that. However, we have learned in class that there are four distinct areas of AI, and I'd like to know when AI first started to divide into the four areas it comprises today.
"Critics of AI have often treated these well-funded early AI labs as servants of the military and its goals." Did these AI labs manage to achieve any of the military's goals? What aspects of modern military technology were a result of its funding of AI research?
|
|
|
Post by Helgi Siemsen Sigurðarson on Jan 15, 2011 13:42:17 GMT -5
After reading through the paper I got the impression that there was not very much collaboration in, or standardization of, the field of AI.
1) Would the field of AI be better off if there were more collaboration between "AI people", or a standard of some sort?
2) (If the answer to 1 is yes) What standards or forms of collaboration would benefit the field?
3) (If the answer to 1 is no) Why not? Are there too many ways to look at the problems?
|
|
|
Post by finnur on Jan 16, 2011 8:04:40 GMT -5
1) The funding after WWII mostly came from the military, but they gave the researchers free rein over what they wanted to do. What did the military have to gain from that?
2) "The premise of AI, in rough terms, is the construction of computer systems that exhibit intelligence." Is it really enough to exhibit intelligence? That could mean the system is just acting intelligently; can that be considered AI?
|
|
|
Post by kristjan on Jan 16, 2011 8:28:10 GMT -5
It is mentioned that the early AI pioneers revolted against behaviorism, but can it not still be useful to the field of AI to study behavior?
|
|
|
Post by kristoferk09 on Jan 16, 2011 11:14:00 GMT -5
OK, after reading it there are two things that come to mind:
1. The phrase "closed world" is mentioned and partly explained, but not entirely. What does that phrase actually mean?
2. It's true this has not yet been discussed in the book, but as it is my top question regarding AI I will ask it anyway: what is the fundamental difference between an AI that is preprogrammed with lots of data ("Deep Blue"-type systems) and one that starts off with almost no knowledge at all, like a child, and needs to learn and improve on its own?
|
|
|
Post by krafki on Jan 16, 2011 11:21:33 GMT -5
1) "Having detected an element of behavioral regularity in the life of some organism, for example, one can immediately begin enumerating the unitary elements of behavior and identifying those as the "primitive actions" that the putative planner has assembled to produce its plan."
Is it possible to achieve some kind of universal standard of formalization, perhaps through these primitive actions, and give a solid basis for the construction of an AI model?
2) Do the different disciplines of AI even want to find common ground in the formalization of, for example, "planning" and "knowledge"?
|
|