|
Post by thorsteinnth on Jan 16, 2011 12:46:12 GMT -5
1. I saw "behaviorism" mentioned in one of the chapters, which got me thinking: if the goal is to imitate something, when does a "simulation" stop being a "simulation" and become artificial intelligence?
2. I also saw "learning", "reasoning" and "planning" mentioned. What do these words entail? Does everyone have to agree on what lies behind them all before something that meets these criteria can be called artificial intelligence?
|
|
|
Post by Elín Carstens on Jan 16, 2011 13:49:36 GMT -5
1) "Miller, Galanter, and Pribram's concept of a Plan also exemplifies another prominent feature of AI discourse: the tendency to conflate representations with the things that they represent."
Is this still a prominent feature today? If so, is that considered to be a good or a bad thing?
2) The vagueness of AI vocabulary: should something be done to change that, and why? What effects, if any, would such changes have on the field?
|
|
|
Post by jonfs09 on Jan 16, 2011 14:16:12 GMT -5
This sentence really got me thinking: can computers think? If we managed to build a computer that could think and had its very own mind and thoughts, wouldn't we be close to creating a new kind of life form?
|
|
|
Post by Petur O Adalgeirss on Jan 16, 2011 14:16:38 GMT -5
1. It seems to me that if the author's views of the culture of the AI field are grounded in reality, the only way for a researcher's contribution to be noticed, accepted and used is for said researcher to demonstrate his idea by way of a working program that in some way solves a significant problem better than previous programs. But imagine one has an idea for a potentially fruitful theory, yet so much groundwork has to be laid before that theory can begin demonstrating its potential that one man would not live long enough to complete it, or even to reach a point where he could explain the theory's potential to others, convince them to carry on the work, and ensure they don't miss the point and develop the theory in a less fruitful direction than originally intended. Now imagine that such a theory would be the only way to carry the field of AI past a certain limitation. Doesn't the author's view of the AI culture, if correct, mean that the field would be bound never to arrive at such a liberating theory? Aren't researchers really on too tight a leash if they're not allowed to stray from this driving force of incremental progress? By which I don't mean they don't have a choice, but rather that if they choose to stray, their careers are likely to suffer.
2. In a similar vein, if one always has to demonstrate a working program to justify one's research, and that program has to be an improvement on what has previously been built in order to influence the progress of the field, can we not draw an analogy to evolution and say that the AI field itself (or its individual subfields) can be thought of as an evolutionary entity? It evolves by incremental steps, and only those steps survive that give immediate benefits. What if the field is already past a point where its path diverged from a better path, and just by sheer bad luck no one at the time came up with the idea that might have steered the field in that better direction? Coming up with that idea now would mean backtracking first and going seemingly "downhill" before starting up the other path and eventually overtaking those still struggling on the old one. What if we're already gorillas, and we can't become human except, in a sense, by devolving back to our common ancestors first? Don't the aforementioned restrictions on the evolution of the AI field make it very difficult for the field to exhibit the foresight required to take not just the steps that give immediate gain, but also the steps that would give more benefit in the long run? (See the sketch after question 3.)
3. Is this really the case with the AI field?
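To make the worry in question 2 concrete, here is a minimal sketch (the landscape, numbers, and step sizes are my own invention, not anything from the article): a search that only ever accepts immediately beneficial steps stays stuck on a lower peak, while one allowed to go "downhill" through an unrewarding valley can reach a higher one.

```python
import random

# A toy one-dimensional "fitness landscape" with two peaks: a local
# optimum near x = 2 (height 4) and a higher one near x = 8 (height 9),
# separated by a flat valley where every step looks like wasted effort.
def fitness(x):
    return max(0.0, 4 - (x - 2) ** 2) + max(0.0, 9 - (x - 8) ** 2)

def search(steps, allow_downhill):
    x = 2.0                      # start on the lower peak
    best = x
    for _ in range(steps):
        candidate = x + random.uniform(-0.5, 0.5)
        # The greedy searcher keeps only immediate gains; the explorer
        # also accepts seemingly "devolving" steps.
        if fitness(candidate) >= fitness(x) or allow_downhill:
            x = candidate
        if fitness(x) > fitness(best):
            best = x
    return fitness(best)

random.seed(0)
print(search(20000, allow_downhill=False))  # stays on the lower peak: 4.0
print(search(20000, allow_downhill=True))   # can cross the valley: close to 9.0
```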
|
|
|
Post by Eiríkur Fannar Torfason on Jan 16, 2011 16:29:33 GMT -5
The first questions that emerged when I read the article were related to the meaning of many of the words and terms used by the author. It's been a while since I've encountered so many unfamiliar terms. What does "intentional vocabulary" mean? I guess it's a bit ironic to find my own vocabulary lacking while reading an article that mentions the word "vocabulary" no less than 11 times.

I guess that my next questions relate to the very definition of AI. It seems rather vague. The explanation "building systems whose behaviour would be considered intelligent if exhibited by a human being" is subjective and raises the question: what is considered intelligent? What if this intelligent behaviour is immutable, like that exhibited by a computer game that always plays the same predictable moves (even though doing so repeatedly results in defeat)? Wouldn't that be considered artificial stupidity rather than artificial intelligence? If we follow that train of thought, does AI imply that the system must be capable of learning from its mistakes? (A small sketch of that distinction follows at the end of this post.)

Much of the article is devoted to the preciseness and use of terms such as "planning" and "knowledge" in the field of AI. This reminds me of reports that have been written to define difficult terms in another field, systems biology. See the following two reports, for example:
THE CONCEPT OF EMERGENCE IN SYSTEMS BIOLOGY: www.stats.ox.ac.uk/__data/assets/pdf_file/0018/3906/Concept_of_Emergence.pdf
THE CONCEPT OF REDUCTION IN SYSTEMS BIOLOGY: www.stats.ox.ac.uk/__data/assets/pdf_file/0019/5365/TheConceptofReduction.pdf
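To make that last distinction concrete, a minimal sketch (the game and its moves are invented for illustration, not taken from the article): the gap between a fixed, predictably losing policy and one that learns from its mistakes can be as small as remembering which moves lost before.

```python
import random

MOVES = ["a", "b", "c"]            # hypothetical game: only "c" ever wins
def outcome(move):
    return "win" if move == "c" else "loss"

def fixed_player():
    # The "artificially stupid" player: a fixed policy that repeats
    # the same predictable move no matter how often it loses.
    return "a"

losing_moves = set()
def learning_player():
    # A minimally learning player: it remembers which moves lost
    # and stops trying them.
    options = [m for m in MOVES if m not in losing_moves] or MOVES
    return random.choice(options)

random.seed(0)
for game in range(5):
    fixed, learned = fixed_player(), learning_player()
    if outcome(learned) == "loss":
        losing_moves.add(learned)
    print(game, fixed, outcome(fixed), "|", learned, outcome(learned))
# The fixed player loses every game; within a couple of games the
# learning player is playing only "c".
```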
|
|
|
Post by Jón Þór Kristinsson on Jan 16, 2011 17:26:45 GMT -5
Why are philosophy and the social sciences perceived by AI people as failed projects for being imprecise, woolly and vague, when these same flaws seem to affect AI vocabulary as well?
|
|
|
Post by baldurb09 on Jan 16, 2011 18:03:17 GMT -5
Is a greater distinction generally drawn between everyday and formal use of concepts in AI than in other scientific disciplines?
The author placed great emphasis on the origins of AI as a by-product of the war years, and explained how the technological developments of the Second World War shaped its formation. Would AI inevitably have developed in the following decades even without the war?
Bonus question: in the fifth chapter the author says he knew little other than mathematics and computers. What other disciplines or subjects would be a good foundation for studying AI? Might disciplines such as philosophy even be a better foundation than mathematics and computer science?
|
|
|
Post by una on Jan 16, 2011 18:06:59 GMT -5
2. So the military had a big part in funding and starting the field of AI, and the paper states that they had a big interest in it. What do you think about the use of AI in modern warfare, and about the idea that the field of AI could have been started for the purpose of finding better ways of killing people?

I think this question is one without any real answer, as the answers will be largely based on a person's view of war and its purpose. On the one hand, it can be argued that AI has the potential to create easier and better ways of killing people, but at the same time it can be seen as creating a better way to "protect" people. Since many consider war a means to protect their country, not just to attack another, one could argue that AI has the potential to keep "your own" safe. Taking it even further, if soldiers were eventually replaced, partially or completely, by machines, then technically that could be saving the lives of people who otherwise would have been in much more danger...
|
|
|
Post by una on Jan 16, 2011 18:15:40 GMT -5
1) Reading the paper, artificial intelligence seems to be a field continuously denigrated, disapproved of, or simply not well regarded. It seems that AI scientists have to find justifications or defenses for their work because they are criticized, and that they in turn disapprove of people who don't like their projects or intentions. Is this continuous conflict real, or is it just an idea of the author of the paper?

I actually got more of the feeling that the author takes the disapproval with a grain of salt, rather than feeling too attacked or defensive about it. It seems as though, while it may be a nuisance to have a field of study so often questioned in its ambiguity, overall the "AI people" are self-assured enough about their intentions not to feel the need to get too hot-headed over the questioning of critics... In fact, maybe the constant skepticism works as more of a motivator than anything else, as usually the more people are against you, the bigger the accomplishment when you prove them wrong.
|
|
|
Post by arnists on Jan 16, 2011 18:40:32 GMT -5
1. Has the application of lambda calculus not provided the "Critical Technical Skill", and do we just need more practice?
2. Who cares where the money came from?
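On question 1, a small aside to make "the application of lambda calculus" concrete (my own toy example, assuming nothing beyond standard Python, and not drawn from the article): in the lambda calculus even booleans and numbers can be defined purely as functions, via Church encodings, which is the kind of formal precision the question is gesturing at.

```python
# Church encodings: data defined purely as lambda-calculus terms.
TRUE  = lambda t: lambda f: t          # selects its first argument
FALSE = lambda t: lambda f: f          # selects its second argument
AND   = lambda p: lambda q: p(q)(p)    # AND p q = p q p

ZERO = lambda f: lambda x: x           # apply f zero times
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
PLUS = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):                         # decode a Church numeral for display
    return n(lambda k: k + 1)(0)

TWO = SUCC(SUCC(ZERO))
THREE = SUCC(TWO)
print(to_int(PLUS(TWO)(THREE)))        # 5
print(AND(TRUE)(FALSE) is FALSE)       # True
```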
|
|
|
Post by Niccolo on Jan 16, 2011 18:46:35 GMT -5
I would like to know what the military has actually gotten out of AI research and what technologies they are using. Also, has the military been interested in simulating actual human thinking, or have they just been interested in an emulation of it, produced by a great amount of processing?
The paper talks about the difficulty of getting precise and widely accepted formalizations and definitions in AI. Indeed, the word 'vague' is repeated several times. I also notice that, in these sections, the author mainly refers to AI as a simulation of a single thinking mind. All this made me think about the difference between psychology, i.e. a field that tries to explain our mental processes, our motivations, etc. as thinking beings, and neuropsychology, i.e. a science that looks at the brain more as a composition of collaborating subsystems than as a single entity. I know the definitions I'm giving are not precise, but you would agree that a therapist takes a broadly rational approach to the problems of her patients, which differs from the approach of a doctor who explains behaviors as the result of chemical interactions. My question is: should AI perhaps take an approach closer to neuropsychology in order to get more precise definitions and models?
|
|
|
Post by gudrunht on Jan 16, 2011 18:58:26 GMT -5
1) What is intelligence? The author talks about computer systems that exhibit intelligence. I'd like to hear people's views on how they define intelligence with regard to humans and to computers.
2) The vagueness of AI vocabulary is something on the author's mind. How does this stand today? Is it a real problem in the field of AI?
|
|