Post by Eiríkur Fannar Torfason on Jan 30, 2011 17:33:42 GMT -5
Q1: Is it possible that specialized circuitry, instead of general purpose CPUs, is required to catapult processing of visual and audio sensory data (much like the advent of GPUs catapulted 3D graphics)?
Q2: Much has been made of self-consciousness and AI. But what about free will? Isn't that a more fundamental issue from a philosophical and ethical standpoint?
1) I agree that intelligence appears in different forms and levels; think about animals. I also agree that the brain is modelled specifically to use the rest of the body, which is why other animals lack the structures that control hands the way ours do. Still, shouldn't there be an underlying, shared "intelligence program"? This goes back to my belief that intelligence is the software and the brain is the hardware. Think, for example, of games running on the Wii and the PC: they use very different hardware, yet they follow the same programming paradigm.
2) Don't you think that "emotion" is a topic missing from all the articles we have been reading?
3) I think that an answer to the chronic problems described at the end of the article has to be sought in the deeply parallel computation that animal brains already perform (as the article states in its last section). I think we need a stronger theoretical understanding of how complex distributed systems work.
Post by Elín Carstens on Jan 30, 2011 18:16:23 GMT -5
When the author talks about learning he says this: "This more profound type of learning is much more difficult to simulate or recreate than are the simpler types discussed above, for it will require of us some way of representing knowledge and information at a level beneath the level of linguistically expressible concepts, a level whose elements can in some way be combined or articulated to form any of a vast range of alternative possible concepts."
1) Is the author wondering about how to program a priori knowledge? Is that possible?
2) How do we represent knowledge and information without linguistically expressible concepts?
Post by Jón Þór Kristinsson on Jan 30, 2011 18:16:34 GMT -5
The part of this about games got me wondering: are there genres of games that an AI cannot play at this moment, and are there games that have been proven to be unplayable by an AI?
Do we even have the computing power available today for an AI that works as fast as our brain? Wouldn't we need a computer that can do many calculations at the same time, in parallel? And if/when we do get that, will it be better to use brute force rather than heuristics to solve problems?
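To see why this brute-force-versus-heuristics question matters, here is a back-of-the-envelope sketch using chess: with roughly 35 legal moves per position (a commonly cited estimate, not an exact figure), a full-width search grows exponentially with depth, which is why programs prune with heuristics such as evaluation functions and alpha-beta search rather than enumerating everything.

```python
# Rough illustration of why brute force fails for chess.
# BRANCHING_FACTOR is an approximate, commonly cited average
# number of legal moves per chess position.
BRANCHING_FACTOR = 35

def positions_at_depth(depth):
    """Positions a full-width (brute-force) search must examine."""
    return BRANCHING_FACTOR ** depth

for d in (2, 4, 6, 10):
    print(f"depth {d:2d}: {positions_at_depth(d):,} positions")

# At depth 10 (five moves per side) this is already about 2.8e15
# positions -- far beyond exhaustive search, even in parallel.
```

Even massive parallelism only shaves a constant factor off that exponential curve, which is the usual argument for heuristics over raw brute force.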
"If machines do come to simulate all of our internal cognitive activities, to the last computational detail, to deny them the status of genuine person would be nothing but a new form of racism." Will the maker be the owner, or will the machine own itself? Why would people want to build this if, as soon as you're done building it, it's not yours anymore?
The author talks about the limitations of image processing in AI. How far has this field come by the present day? There are companies like Google that use facial recognition in their software; does anyone know what technology they're using?
I am interested in the chess problem presented. The solutions discussed aim to win the game by outsmarting the opponent through pure rationality. I wonder how a defensive strategy would fare that aims to disrupt every possible strategy the human player comes up with in his first x moves of the game. Such a strategy would draw on behavioural research data about how human chess players react to certain losses of pieces, for example, or to whatever action could cause distress and increase the probability of illogical or risk-taking moves by the player. In short, the goal would be to taunt the human opponent into making mistakes driven by "human emotions" (using statistics based on data) and thus increase the likelihood of winning the game.
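The idea above could be sketched as a move-selection rule that blends a move's objective evaluation with the statistically estimated chance that it provokes a blunder. Everything here is invented for illustration: the move names, scores, and probabilities are hypothetical placeholders for the behavioural data the post imagines collecting.

```python
# Hypothetical sketch of the "taunting" strategy: pick the move
# that maximizes objective score plus the expected payoff of
# provoking a human mistake. All numbers below are made up.

def expected_value(move):
    """Objective score plus (blunder probability * gain if it happens)."""
    return move["score"] + move["p_blunder"] * move["blunder_gain"]

candidate_moves = [
    {"name": "quiet_defence", "score": 0.10, "p_blunder": 0.05, "blunder_gain": 1.0},
    {"name": "sac_exchange",  "score": -0.20, "p_blunder": 0.40, "blunder_gain": 1.5},
    {"name": "solid_trade",   "score": 0.05, "p_blunder": 0.10, "blunder_gain": 1.0},
]

best = max(candidate_moves, key=expected_value)
print(best["name"])  # the objectively weaker sacrifice wins on expectation
```

Note how the exchange sacrifice, objectively the worst move, comes out on top once the estimated human reaction is priced in; that inversion is exactly what the proposed strategy relies on.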