Post by Hannes Vilhjalmsson on Mar 18, 2009 19:13:58 GMT -5
This week we'll be reading about Kristinn Thórisson's fascinating PhD work, which shaped the field of "Embodied Conversational Agents". His Gandalf system has remained state-of-the-art for quite a while (he completed it in 1996). The reading is a comprehensive paper on this project, called "A Mind Model for Multimodal Communicative Creatures and Humanoids". If you don't get through all of it (though it's all quite fascinating), make sure you cover the introduction, the layout of the architecture, and how Gandalf operates. Post a couple of questions here by midnight on Sunday and make sure you provoke a fun discussion with Kristinn on Tuesday.
1) You mention in the summary that it will be interesting to see if the system can be pushed any further, for example by using neural network methods in the perceptual modules. Have you had any success since then in using neural networks?
2) Have you run into problems with the uncanny valley while trying to make softbots or robots behave like humans?
Post by Jon Gisli Egilsson on Mar 22, 2009 13:31:37 GMT -5
1) Has your technology been, or will it be, integrated into humanoid robots, and not just used for characters on a display?
2) Isn't making those characters 'experts' on certain things very time consuming? E.g. for Gandalf, the expert on the solar system, did you have to enter all the information he knows by hand? Are there techniques to speed this up, that is, so the character can learn these things himself?
Post by Birna Íris on Mar 23, 2009 12:44:23 GMT -5
People behave/make decisions/perceive in different ways, although most of us follow some communicational model accepted in the society we live in. The same person might even behave differently from one day to the next. Any thoughts on how to model such a variety of behavior models? How much more complicated would that be?
How detailed is Ymir's Dialogue Knowledge Base? And how close is it to being a perfect knowledge base of human interaction/dialogue?