|
Post by Hannes Vilhjalmsson on Mar 18, 2009 19:13:58 GMT -5
This week we'll be reading about Kristinn Thórisson's fascinating PhD work, which shaped the field of "Embodied Conversational Agents". His Gandalf system has remained state-of-the-art for quite a while (he completed it in 1996). The reading is a comprehensive paper on this project called "A Mind Model for Multimodal Communicative Creatures and Humanoids". If you don't get through all of it (it's all quite fascinating though), make sure you cover the intro, the layout of the architecture and how Gandalf operates. Post a couple of questions here by midnight on Sunday and make sure you provoke a fun discussion with Kristinn on Tuesday.
|
|
|
Post by hordur08 on Mar 22, 2009 9:02:43 GMT -5
1) You mention in the summary that it will be interesting to see if the system can be pushed any further, for example by using neural network methods in the perceptual modules. Have you had any success since then in using neural networks?
2) Have you had any problems with the uncanny valley while trying to make softbots or robots behave like humans?
|
|
|
Post by Jon Gisli Egilsson on Mar 22, 2009 13:31:37 GMT -5
1) Has your technology been, or will it be, integrated into humanoid robots, and not just used for characters on a display?
2) Isn't making those characters 'experts' on certain things very time consuming? E.g. Gandalf the expert on the solar system - did you have to read in all the info he knows? Are there techniques to speed this up, that is, so the character can learn these things himself?
|
|
|
Post by olafurgi on Mar 22, 2009 18:00:49 GMT -5
You mentioned that the Speech Recognition was a bit slow; has this been improved (using better algorithms and hardware)?
Is the concept of using Blackboards for communication between modules better or worse considering speed (knowing that it simplifies the structure)?
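For context, here's a minimal sketch of the blackboard idea in Python - modules never call each other directly, they only post to and subscribe on a shared board. The names and API here are purely illustrative, not Ymir's actual implementation:

```python
# Minimal blackboard sketch: modules communicate only through a shared
# message board instead of calling each other directly.
# All names here are illustrative, not Ymir's actual API.
from collections import defaultdict

class Blackboard:
    def __init__(self):
        # message type -> list of subscriber callbacks
        self.subscribers = defaultdict(list)

    def subscribe(self, msg_type, callback):
        self.subscribers[msg_type].append(callback)

    def post(self, msg_type, content):
        # Every module listening for this type is notified;
        # the poster never needs to know who consumes the message.
        for callback in self.subscribers[msg_type]:
            callback(content)

board = Blackboard()
log = []
# Two hypothetical perception consumers listening for the same event:
board.subscribe("gaze-detected", lambda c: log.append(("turn-taking", c)))
board.subscribe("gaze-detected", lambda c: log.append(("attention", c)))
board.post("gaze-detected", {"target": "agent", "t": 0.12})
# Both modules receive the same message without direct coupling.
```

The simplification is that adding a module means only registering new subscriptions, but every message pays the cost of board dispatch rather than a direct call - which is what the speed question above is getting at.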
|
|
|
Post by Christian Zehetmayer on Mar 22, 2009 18:42:41 GMT -5
What has happened since 1999? Did you change or add anything important?
Which agents use Ymir nowadays, and in what fields?
|
|
|
Post by halldorrh05 on Mar 22, 2009 18:58:39 GMT -5
How did you data mine the psychological literature and create the modules from that?
Why use blackboards?
|
|
|
Post by Helgi Páll Helgason on Mar 23, 2009 4:54:46 GMT -5
Why do you think Gandalf has remained state-of-the-art in realtime multimodal communication for so long?
How do you explain the lack of focus on time constraints (such as realtime) in the field of AI in general?
|
|
|
Post by Birna Íris on Mar 23, 2009 12:44:23 GMT -5
People behave/make decisions/perceive in different ways, although most of us follow some communicational model accepted in the society we live in. The same person might even behave differently from day to day. Any thoughts on how to model such varied behavior? How much more complicated would that be?
How detailed is Ymir's Dialogue Knowledge Base? And how close is it to being a perfect knowledge base of human interaction/dialogue?
|
|