|
Post by Eiríkur Fannar Torfason on Jan 30, 2011 17:33:42 GMT -5
Q1: Is it possible that specialized circuitry, instead of general purpose CPUs, is required to catapult processing of visual and audio sensory data (much like the advent of GPUs catapulted 3D graphics)?
Q2: Much has been made of self-consciousness and AI. But what about free will? Isn't that a more fundamental issue from a philosophical and ethical standpoint?
|
|
|
Post by niccolo on Jan 30, 2011 18:14:41 GMT -5
1) I agree that intelligence appears in different forms and levels. Think about animals. I also agree that the brain is modelled to specifically use the rest of the body, which is why other animals do not have control structures for hands like ours. Still, shouldn't there be an underlying, shared "intelligence program"? This ties back to my belief that intelligence is the software and the brain is the hardware. Think, for example, of games running on the Wii and on a PC: they must use very different hardware, yet they run on the same program paradigm.
2) Don't you think that "emotion" is a topic missing from all the articles we have been reading?
3) I think that an answer to the chronic problems described at the end of the article should be sought in the deeply parallel computation that animal brains already perform (as the article states in its last section). I think we need a stronger theoretical understanding of how complex distributed systems work.
|
|
|
Post by Elín Carstens on Jan 30, 2011 18:16:23 GMT -5
When the author talks about learning he says this: "This more profound type of learning is much more difficult to simulate or recreate than are the simpler types discussed above, for it will require of us some way of representing knowledge and information at a level beneath the level of linguistically expressible concepts, a level whose elements can in some way be combined or articulated to form any of a vast range of alternative possible concepts."
1) Is the author wondering about how to program a priori knowledge? Is that possible?
2) How do we represent knowledge and information without linguistically expressible concepts?
3) Has anyone done either of these things?
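On question 2, one line of work represents concepts as vectors of sub-conceptual features (distributed representations), where the individual dimensions are not themselves words. A minimal toy sketch of that idea; the feature dimensions and all numbers are invented purely for illustration:

```python
import math

# Toy "sub-conceptual" representation: each concept is a vector of
# features that are not themselves linguistic concepts. The feature
# dimensions and all numbers here are invented for illustration.
# Dimensions: [furriness, size, animacy, domesticity]
FEATURES = {
    "cat":   [0.9, 0.2, 1.0, 0.9],
    "tiger": [0.8, 0.8, 1.0, 0.0],
    "rock":  [0.0, 0.3, 0.0, 0.0],
}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    # Cosine similarity: how close two concepts lie in feature space.
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

cat_tiger = cosine(FEATURES["cat"], FEATURES["tiger"])
cat_rock = cosine(FEATURES["cat"], FEATURES["rock"])
print(cat_tiger > cat_rock)  # cat is nearer to tiger than to rock
```

The point is that new concepts can correspond to new points or combinations in this space without any word existing for them; modern word embeddings work in roughly this spirit, at a much larger scale.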
|
|
|
Post by Jón Þór Kristinsson on Jan 30, 2011 18:16:34 GMT -5
There was a part of this article about games that got me wondering: are there genres of games that an AI cannot play at this moment, and are there games that have been proven to be unplayable by an AI?
|
|
|
Post by gunnar on Jan 30, 2011 18:39:44 GMT -5
Do we even have the computing power available today for an AI that works as fast as our brain? Wouldn't we need a computer that can do many calculations at the same time, in parallel? And if/when we do get that, will it be better to use brute force rather than heuristics to solve problems?
"If machines do come to simulate all of our internal cognitive activities, to the last computational detail, to deny them the status of genuine person would be nothing but a new form of racism." Will the maker be the owner, or will the machine own itself? Why would people want to build this if, as soon as you're done building it, it's not yours anymore?
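On the brute force vs. heuristics question, a small illustration of why heuristics still matter even with plenty of computing power: plain minimax searches every node of a game tree, while alpha-beta pruning (a simple cut-off heuristic) reaches the same answer while visiting fewer nodes. The game tree below is a made-up toy, not any real game:

```python
import math

# A tiny synthetic game tree: each node is either a list of children
# or a leaf score. Purely illustrative, not a real game.
TREE = [
    [3, 5, [2, 9]],
    [[0, 1], 7, 4],
    [8, [6, 2], 1],
]

def minimax(node, maximizing, counter):
    """Exhaustive (brute-force) minimax: visits every node."""
    counter[0] += 1
    if not isinstance(node, list):
        return node
    scores = [minimax(c, not maximizing, counter) for c in node]
    return max(scores) if maximizing else min(scores)

def alphabeta(node, maximizing, alpha, beta, counter):
    """Minimax with alpha-beta pruning: skips branches that
    provably cannot affect the final result."""
    counter[0] += 1
    if not isinstance(node, list):
        return node
    if maximizing:
        value = -math.inf
        for c in node:
            value = max(value, alphabeta(c, False, alpha, beta, counter))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # prune: the minimizer will never allow this branch
        return value
    else:
        value = math.inf
        for c in node:
            value = min(value, alphabeta(c, True, alpha, beta, counter))
            beta = min(beta, value)
            if alpha >= beta:
                break  # prune
        return value

brute, pruned = [0], [0]
best_bf = minimax(TREE, True, brute)
best_ab = alphabeta(TREE, True, -math.inf, math.inf, pruned)
print(best_bf, best_ab)    # same best value either way
print(brute[0], pruned[0]) # pruning visits fewer nodes
```

On deeper trees the gap grows enormously, which is why even chess engines with huge hardware budgets rely on pruning heuristics rather than raw brute force.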
|
|
|
Post by baldurb09 on Jan 30, 2011 18:42:46 GMT -5
1. The author mentions how intelligence is a combination of multiple different skills and strategies. How easy is it currently to both combine and reconcile them with one another?
2. A continuation of the former question: are there any recent AI libraries or modules that make AI development easier now than it used to be?
To those who asked: to the best of my knowledge, we currently cannot tell whether any entity other than ourselves is conscious. The Turing test is one of the things that comes closest.
|
|
|
Post by gudrunht on Jan 30, 2011 18:48:56 GMT -5
The author talks about the limitations of image processing in AI. How far has this field come by now? Companies like Google use facial recognition in their software; does anyone know what technology they're using?
I am interested in the chess problem presented. The solutions discussed aim to win the game by outsmarting the opponent based on rationality. I wonder how a defensive strategy would do that aims to disrupt every possible strategy the human player comes up with in his first x moves of the game. This strategy would be based on behavioural research data on how human chess players react to certain losses of pieces, for example, or to any other action that could cause distress and increase the probability of illogical or risk-taking moves by the player. In short, the goal would be to taunt the human opponent into making mistakes driven by "human emotions" (using statistics based on data), and thus increase the likelihood of winning the game.
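The idea above can be sketched as choosing moves by expected value under a statistical model of human error, rather than by objective score alone. Everything here is hypothetical: the candidate moves, blunder probabilities, and gains are invented numbers standing in for the behavioural data the post imagines:

```python
# Toy sketch: instead of picking the objectively best move, pick the
# move that maximizes expected value once a (hypothetical) statistical
# model of human error is factored in. All numbers are made up.

# For each candidate move: (objective_score, blunder_probability, blunder_gain)
# blunder_probability: how often players in the imagined database respond
# badly to this kind of move; blunder_gain: extra value to us if they do.
CANDIDATES = {
    "quiet_positional_move": (0.30, 0.05, 0.2),
    "sharp_piece_sacrifice": (0.10, 0.40, 0.9),
    "safe_exchange":         (0.25, 0.10, 0.3),
}

def expected_value(score, p_blunder, gain):
    # Baseline score plus the chance the human errs times what
    # that error is worth to us.
    return score + p_blunder * gain

def pick_move(candidates):
    return max(candidates, key=lambda m: expected_value(*candidates[m]))

print(pick_move(CANDIDATES))  # the objectively weaker sacrifice wins on expected value
```

The design choice is that a move which is objectively inferior can still maximize winning chances once the opponent's modelled fallibility is priced in, which is exactly the "taunting" strategy described above.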
|
|
|
Post by jonfs09 on Jan 30, 2011 19:07:17 GMT -5
Regarding machines being able to "learn": it is said that they take in information and store it in memory so it can be retrieved quickly again. Isn't that just... memory? Not learning? If computers really did develop consciousness, how could we know that they were actually conscious and not just a sophisticated simulator? 1. It's just the same as us humans: we put what we learn into our memory. 2. Do you think that if a computer had consciousness, it would itself know that it had consciousness? If it looked at a normal computer, would it feel like us looking at a person with no brain?
|
|
|
Post by grimurtomasson on Jan 31, 2011 4:26:58 GMT -5
Is there a general trend in AI research today toward focusing on an isolated strand of what constitutes intelligence, or on a combination of strands?
Are artificial visual systems any different from most other fields of AI in being hobbled by the lack of a high-powered general intelligence?
Grímur
|
|
|
Post by Eiríkur Fannar Torfason on Jan 31, 2011 10:16:55 GMT -5
|
|