|
Post by Hannes Vilhjalmsson on Feb 24, 2011 6:25:16 GMT -5
Our guest on Monday will be Claudio Pedica, who finished his MSc degree at Reykjavik University in 2009, and has since been a research staff member on the "Humanoid Agents in Social Game Environments" project at CADIA and now at IIIM, where he is involved in transferring his research into industry and the public domain. His master's thesis, and subsequently his focus at CADIA, revolves around the automation of social group dynamics according to a theory of human territoriality. The aim is to create more life-like character behavior in game environments.

This reading is a research paper that summarizes Claudio's MSc thesis:

Spontaneous Avatar Behavior for Human Territoriality, Pedica, C. and Vilhjálmsson, H. (2009), in Zs. Ruttkay et al. (Eds.): Proceedings of the 9th International Conference on Intelligent Virtual Agents, September 14-16, Amsterdam, The Netherlands. Lecture Notes in Artificial Intelligence, 5773: 344-357, Springer-Verlag, Berlin Heidelberg.

Ask Claudio all about this on Monday (but post your questions here first) :-)
|
|
|
Post by carmine on Feb 26, 2011 11:07:43 GMT -5
- Do you think that one day in the future we could implement and represent all possible features of human psychology?
- About the motivations: is it possible to create incoherent behavior (arising from opposing motivations)? For example, imagine that an agent "needs" to greet someone and at the same time to turn away (the opposite orientation). Could this create "strange" behavior?
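To make the conflict concrete, here is a toy sketch (my own, not from the paper) of how two opposing orientation motivations, blended as weighted unit vectors, can cancel each other out and leave the agent with no coherent facing direction:

```python
# Toy sketch, not the actual CADIA Populus code: orientation motivations
# are blended as weighted unit vectors. Equal and opposite motivations
# cancel, and the resulting facing is undefined ("strange" behavior).
import math

def blend_orientations(motivations):
    """motivations: list of (angle_in_radians, weight) pairs."""
    x = sum(w * math.cos(a) for a, w in motivations)
    y = sum(w * math.sin(a) for a, w in motivations)
    if math.hypot(x, y) < 1e-6:
        return None  # conflict: no coherent facing direction
    return math.atan2(y, x)

# Greeting pulls the agent toward 0 rad; another drive pushes toward pi.
print(blend_orientations([(0.0, 1.0), (math.pi, 1.0)]))  # deadlock: None
print(blend_orientations([(0.0, 1.0), (math.pi, 0.5)]))  # ~0.0, greeting wins
```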
|
|
|
Post by lorenzo on Feb 26, 2011 14:28:49 GMT -5
- Have you performed any kind of test on your framework to find out how believable it is to people?
- When the reactive layer runs to select a behaviour, do you take into account the current state of the agent, I mean whether it is upset, happy, etc., since that can change how the behaviour is performed, or only the norms that govern the conversation and create the motivations?
|
|
|
Post by kristjan on Feb 27, 2011 8:31:53 GMT -5
The paper mentions vision, but has there been any implementation for the avatars to react to sound?
How does your avatar react if another avatar continuously breaks the social norms, for example an avatar that keeps entering the intimate zone of a complete stranger?
|
|
|
Post by finnur on Feb 27, 2011 8:32:20 GMT -5
Would it be possible to create an avatar that could successfully pretend to be a player?
Has it ever been done? Just put an avatar with AI inside a game as a player character and see if anyone figures it out?
|
|
|
Post by Helgi Siemsen Sigurðarson on Feb 27, 2011 10:41:41 GMT -5
Are there any limitations on how many avatars can participate in a conversation at a given time (because the "ring" becomes too large)?
Could this be used to monitor groups of people rather than simulating them?
|
|
|
Post by Ásgeir Jónasson on Feb 27, 2011 12:20:28 GMT -5
1.
During our discussion on common sense we explored the notion that different civilizations have different common sense. Does this also apply to social behavior? Do people behave differently in social situations depending on where they are from, and are the differences great enough that they should be taken into account when constructing these agents?
2.
One of the reasons MMOs give players a lot of control over their avatar's appearance is that the player needs to feel connected to his or her avatar to enjoy playing the game and really feel immersed in it. Shouldn't the player also have a lot of control over how the avatar behaves in social situations? For example, whether it is passive in conversations or always tries to get all the attention? Can this be implemented easily?
|
|
|
Post by krafki on Feb 27, 2011 13:50:43 GMT -5
1) Could we implement spontaneous facial expressions through, for example, a webcam and face recognition, and map these expressions directly onto your avatar in real time? That would help others see what your avatar is going through at the moment.
2) In the future, when creating a character in a game, will we have a social section in which we define how the character should react in different social situations? For example, defining what a comfortable distance in social interaction is.
|
|
|
Post by Elín Carstens on Feb 27, 2011 14:12:35 GMT -5
1) Are there any types of behavior, generally speaking, that one would not be able to model the way you do in CADIA Populus?
2) In chapter 4 you describe the avatar's reactivity as “the simulation of a low level mental process much closer to perception than higher levels of reasoning.”
Since the paper was written has any higher level reasoning been added? Should it be added for any particular behaviors? Why?
|
|
|
Post by helgil08 on Feb 27, 2011 14:55:50 GMT -5
What kind of mathematics is used for this kind of thing? Are there any particular formulas worth noting?
|
|
|
Post by thorsteinnth on Feb 27, 2011 16:16:44 GMT -5
In chapter 5 it says "...an avatar can show a certain degree of context awareness when engaged in a social interaction."
Does this also cover reactions to stimuli from minor events? For example, a door slamming, a sudden light, or the like?
|
|
|
Post by Hrafn J. Geirsson on Feb 27, 2011 16:38:06 GMT -5
1) In video games, we want the player to feel as strong a connection to his avatar as possible. It will be broken very easily if the avatar starts doing something the player didn't want it to do, or starts noticing things the player doesn't care about. Will this technology mostly be geared toward generating believable NPCs, or is it supposed to be applied to the player as well? If so, how do we keep the agent's mindset connected to the player's, or perhaps it should be the other way around?
2) Why focus on conversations? Shouldn't we be thinking about group dynamics in general? Won't the player be even more freaked out when agents can behave normally in one type of situation but fail in others?
|
|
|
Post by grimurtomasson on Feb 27, 2011 16:38:09 GMT -5
Could this social behavioral model be used to enable human MMO players to create believable multi-modal communication using only text/voice input?
Is there any learning involved or is the model fixed for each avatar?
|
|
|
Post by Eiríkur Fannar Torfason on Feb 27, 2011 16:42:26 GMT -5
1. Have you tried giving the agents/avatars both social and personal motivations, so that instead of focusing solely on participating in a discussion the agent would also do things like eat, rest, go to the bathroom and so on? If so, does the inclusion of personal motivations bring to light any new emergent properties?
2. The CCP team is acknowledged in the article. Is CCP working on incorporating this type of social simulation into one of their games?
3. I believe I saw Hannes demonstrate this project last week where he simulated a party where some of the avatars had an 'interesting' status which was illustrated by a halo above their heads. Have you tried to simulate a VIP party where each and every avatar has an 'interesting' status?
|
|
|
Post by gunnar on Feb 27, 2011 16:59:47 GMT -5
What are the biggest improvements that have been made to this since this paper was written?
Do the agents only react to the "vision" of each other, or can they also react to sound? (For example, if an agent really wants to talk to someone who is far away, could he call out and thereby increase the force that pulls them together?)
Can they be more drawn to some agents and less to others? Like in the case above with the calling: they could then maybe ignore, or show less interest in, agents that happen to be standing in the path they have to take to get to each other.
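As a toy illustration of what I mean (the names and numbers here are made up, not from the paper), a "call" could simply add to the gain of a social attraction force between two agents:

```python
# Hypothetical sketch of a social attraction force: a unit vector from
# agent a toward agent b, scaled by a gain. A call_boost term models the
# extra pull created when one agent calls out to the other.
import math

def attraction_force(pos_a, pos_b, base_gain=1.0, call_boost=0.0):
    dx, dy = pos_b[0] - pos_a[0], pos_b[1] - pos_a[1]
    dist = math.hypot(dx, dy)
    if dist < 1e-9:
        return (0.0, 0.0)  # already co-located, no pull
    gain = base_gain + call_boost
    return (gain * dx / dist, gain * dy / dist)

quiet = attraction_force((0, 0), (10, 0))                    # (1.0, 0.0)
calling = attraction_force((0, 0), (10, 0), call_boost=2.0)  # (3.0, 0.0)
```

Per-agent interest levels could be modeled the same way, by varying `base_gain` per pair of agents.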
|
|