Post by una on Feb 27, 2011 18:44:27 GMT -5
Has there been any recent development in making characters behave more realistically?
Do you think it would help create more of a connection between an avatar and a person if the person could speak with others through a microphone instead of typing in a chat?
Don't you think that eventually making avatars too realistic could become almost dangerous in terms of people's ability to separate the virtual world from the real world? Isn't it possible that a person could get so attached to their virtual life that they begin to concentrate on it more than on their real life?
Post by Jón Þór Kristinsson on Feb 27, 2011 18:45:00 GMT -5
Can the avatars simulate a social situation in which some of the avatars are controlled by people using some kind of motion-capture equipment?
Post by baldurb09 on Feb 27, 2011 18:58:11 GMT -5
How should people in an F-formation react to an outsider joining their conversation, and what if he doesn't follow the social norms?
What about personality, which has a large effect on how people react to interactions that do not follow social norms?
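One way to even pose that question in a simulation is to decide when an outsider counts as "joining" the F-formation at all. A minimal geometric sketch, assuming the o-space center is approximated by the participants' centroid and using purely illustrative distance and facing thresholds (none of this is taken from the paper):

```python
import math

def ospace_center(positions):
    """Approximate the o-space center of an F-formation as the
    centroid of the participants' positions (a simplification)."""
    xs, ys = zip(*positions)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def is_joining(newcomer_pos, newcomer_heading, positions,
               radius=1.2, facing_tolerance=math.radians(45)):
    """Heuristic: the outsider counts as joining if they stand within
    `radius` metres of the o-space center and face toward it.
    Both thresholds are illustrative assumptions."""
    cx, cy = ospace_center(positions)
    dx, dy = cx - newcomer_pos[0], cy - newcomer_pos[1]
    dist = math.hypot(dx, dy)
    angle_to_center = math.atan2(dy, dx)
    # Smallest signed difference between the two angles, wrapped to [-pi, pi].
    facing_error = abs((angle_to_center - newcomer_heading + math.pi)
                       % (2 * math.pi) - math.pi)
    return dist <= radius and facing_error <= facing_tolerance
```

Once such a trigger fires, the agents in the formation could react (e.g. step back to open the circle), and a personality parameter could scale how tolerant each agent is of a newcomer who violates the norms.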
Post by jonfs09 on Feb 27, 2011 19:38:37 GMT -5
Don't you think that eventually making avatars too realistic could become almost dangerous in terms of people's ability to separate the virtual world from the real world? Isn't it possible that a person could get so attached to their virtual life that they begin to concentrate more on that than their real life? Well, some people are actually much more attached to the virtual world than to their real life. But in the future, do you think people will actually just stop living their real lives and live in the virtual world instead, having their work life, love life, etc. all in the virtual world?
Post by sigurdurjokull on Feb 27, 2011 20:30:12 GMT -5
It says in the paper that it makes sense to automate some forms of social behaviour that seem to be autonomous in humans anyway. But it seems to me that it depends on focus or attention whether something is autonomous or under conscious control. So I thought it might be interesting to program some sort of attention mechanism to decide whether an agent or a player follows its "instinct" or takes some sort of conscious action: something that tries to judge which information is relevant to conscious control and which is not.
Also, if we have agents that can simulate social behaviour and human thinking to the best of our capabilities, might it be reasonable to have humans and agents play massively multiplayer online games together, as a kind of simulated natural selection, so that the agents learn from experience? A cap would be set on the "instincts", or lower-level functions, of the agents, since otherwise they would just maximize gameplay and beat the humans; instead, the learning would be directed toward high-level functions like thinking and planning. Or maybe the game could be for an agent to deceive the other players into believing it is human. Might that be a reasonable way to make agents smarter?
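The attention idea in the first paragraph could be sketched very roughly as a salience gate: cheap reactive behaviour handles everything by default, and only stimuli salient enough to cross an attention threshold get promoted to slower deliberate control. All names and the threshold value here are hypothetical, just to make the idea concrete:

```python
# Illustrative attention gate: route each stimulus either to fast
# reactive "instinct" or to slower conscious deliberation, based on
# a salience score. The threshold is an arbitrary assumption.
ATTENTION_THRESHOLD = 0.7

def react(stimulus):
    """Fast, automatic response (e.g. gaze shift, posture change)."""
    return f"reactive response to {stimulus['name']}"

def deliberate(stimulus):
    """Slower, conscious planning for salient events."""
    return f"deliberated plan for {stimulus['name']}"

def handle(stimulus):
    """Judge whether a stimulus is relevant to conscious control."""
    if stimulus["salience"] >= ATTENTION_THRESHOLD:
        return deliberate(stimulus)
    return react(stimulus)
```

In a real agent, the salience score would itself have to be computed from context, which is probably the hard part of the proposal.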
Post by niccolo on Feb 27, 2011 22:20:04 GMT -5
Hi, thanks again for the interesting reading. I have two kinds of questions. The first is about the impact of this work on the outside world. 1) How is the process going of transferring this knowledge from the academic environment to companies and, more generally, to services that everybody can use (e.g. games, or virtual environments such as Second Life)? 2) As you write in the second paragraph of section 6, this work can be very important for validating theories about social behavior that, by their nature, cannot be proven theoretically the way mathematical theorems can. Are you aware of any interest from the sociology/psychology/anthropology communities in your results? Have they acknowledged this importance?
I also have some comments on the specific topic of the paper. 3) It seems to me that you mainly consider visual interaction. Have you considered, perhaps elsewhere, interaction through other senses? Let me give an example. If during your presentation in class I am looking at and listening to you, and somebody behind me starts to chat loudly, then he starts to interact with me (without any need to be in my visual range) and will probably cause me to move. In general, I can notice someone, or claim a space, through senses other than sight. Do you agree that attention to these aspects can be important for correctly simulating real situations and for implementing realistic AI behaviors? 4) I liked the comparison in the Introduction: just as I don't think about the movements of my feet when I walk, I don't want to think about my eye movements when I talk. Still, my behavior carries some information that is not fully redundant. I wonder if we could build agents that produce certain behaviors because of their mental state (e.g. glad, worried) rather than because of the words in their speech.