|
Post by Hannes Vilhjalmsson on Feb 16, 2009 13:38:27 GMT -5
Today we were looking at the dilemma we face when we want our avatars (i.e. "our graphical representations in a virtual environment") to have a very rich and believable interaction with the environment, but also don't want to increase our control overhead (e.g. by adding more buttons etc.). The presented solution was to add some "smarts" into the avatar itself, so that it can produce some of the increased interactivity by itself - on your behalf.
We mentioned avatars that are "smart" at dying a spectacular death in shooter games, typically through "knowledge" of physics (e.g. reacting to external physical forces). We also mentioned avatars "smart" at picking up objects by knowing how they are approached and held. Most of the lecture was then on "smarts" when it comes to exhibiting natural nonverbal behavior around and during conversations.
What other uses of avatar "smarts" can you think of? Think broadly. Think of all the different applications of virtual environments. Where would it be a good idea to reduce control overhead by using better/smarter avatars? What would the avatar have to "know" to accomplish this?
Have you used avatars that exhibited any particularly interesting automation traits? If you have, tell us about them.
|
|
thors
New Member
CS Dweeb
Posts: 23
|
Post by thors on Feb 18, 2009 21:20:05 GMT -5
Maybe not to reduce control overhead, but I can think of one thing that's missing from most FPS games and all VR/MMORPGs I've seen, and that's some indication that you don't exist in a vacuum. The line where Morpheus asks Neo "Do you think that's air you're breathing?" springs to mind. What the environment needs to contain is "environmental data". It already contains light (as much as possible w/o ray-tracing) and physics (gravity, collisions etc.) but no air or water (in-game water is just a wiggly texture w/o much function at all). What I'd like to see is avatar reaction to environmental factors. If there's wind, the AV's clothes and hair move. If you enter water (a river, or just if it's raining) you get wet and behave as wet; if you experience strong light, you cover your eyes; and if you get cold, you shiver and gradually turn blue.

As for control options, if the AV knows how to pick up an object, that really just adds to the realism of the environment, but doesn't really enhance the input function of yore, when you clicked on an object (or typed "GET KEYS") and "poof" there it was. However, added realism in movement can be considered value-added for games. Consider a game where you have to traverse a narrow ledge. Added realism means the character would itself add body movement to keep its balance. That would make it more realistic for the player, and as such, the player may connect better with the character when it loses its balance and falls off the ledge. The added movements also make it harder for the player, since he can't just keep a straight line like in older games where no natural factors were part of the game.

On GameTV the other day, they introduced a game (which I'm quite excited to see, even though I can't recall its name) where realism factors like water are used. How they solve the physics of water would be most intriguing to know.
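To make that concrete: this kind of avatar "smarts" is basically a mapping from environmental readings to involuntary reactions. A toy sketch in Python (every threshold, unit and reaction name here is invented for illustration, not taken from any real engine):

```python
# Toy sketch: map environmental readings to automatic avatar reactions.
# All thresholds, units and reaction names are made up for illustration.

def avatar_reactions(env):
    """Given a dict of environment readings, return the list of
    reactions the avatar layers onto whatever the player is doing."""
    reactions = []
    if env.get("wind", 0.0) > 0.3:            # a noticeable breeze or stronger
        reactions.append("hair_and_clothes_flutter")
    if env.get("wetness", 0.0) > 0.5:         # rain, rivers, etc.
        reactions.append("move_as_wet")
    if env.get("light", 0.0) > 0.8:           # glaring light
        reactions.append("shield_eyes")
    if env.get("temperature", 20.0) < 5.0:    # degrees Celsius
        reactions.append("shiver")
    return reactions

print(avatar_reactions({"wind": 0.6, "temperature": -2.0}))
# ['hair_and_clothes_flutter', 'shiver']
```

The point is that none of these reactions costs the player a single button press; the environment drives them.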
|
|
|
Post by David H. Brandt on Feb 19, 2009 4:33:35 GMT -5
Reykjavikious answer is of course that avatar smarts of all forms are natural in gaming environments, be it to extend (pseudo-)AI-driven avatar behavior or to enhance player avatar behavior. Many players of MMOs even go to the extent of adding automation via external add-ons (usually illicitly modifying their client), f.ex. to do dull tasks such as "grinding" levels, cash or skill training, and traveling long routes within the virtual world. Thinking very broadly and out-of-the-box on the other hand, and with the SPARK project you introduced in class in mind, the following crazy ideas come to mind:

Telepresence
Consider a realistic humanoid robot (android) capable of movement, some basic 3D analysis of its surroundings, and facial expressions. Let its eyes be video cameras and its ears be microphones, and let it securely transmit its sensory input data over the communications network to a remote individual, who may be using a standard computer monitor and mouse/keyboard to control the robot in the manner of a computer game. Obviously, VR goggles and other higher-tech I/O devices are optional improvements, but not strictly required. The robot is however modeling itself in a 3D world, and doing its best to participate socially in its environment with whatever cues it has to operate on, both from the user and the environment. It is thus mentally a "smart avatar", in spite of the fact that its actions are duplicated in the real world. The user of the telepresence system can have a separate window open on his screen showing the robot's behavior and facial expressions. A telepresence bot such as this can have many uses, such as:
- Military usage. A remotely operated soldier-bot can travel the streets safely without fear of snipers or car bombs, yet perform almost all the same tasks as a human. The bot may be engineered to appear socially acceptable, a task that a heavily armed white American is not going to pull off in Iraq. If the bot is damaged, he can easily be replaced. His sensory input can be analyzed after the fact to determine what really happened.
- Orthopedic usage. A telepresence bot can allow the disabled to travel (in which case the android probably has a "driver/guide" and a 3G network card). The bot can use text-to-speech if required to further reduce the user requirements.
- Medical usage. A telepresence bot can be used by a doctor to perform most medical services remotely. Thus you could f.ex. attend an appointment in Reykjavík with a specialist residing in LA. You could also go to your local health care center and wait for the "next available doctor", who might f.ex. turn out to be in the health care center of Akureyri; thus the efficiency and quality of service in the health care system could be vastly increased. All communications could be stored in your medical history.
- Corporate and diplomatic usage. Telepresence bots could allow effective remote meetings to take place instantaneously across the globe (speaker-phones and video-conference equipment rarely perform as advertised). A hot-designer-bot might actually deliver better meeting results than a flabby, smelly, badly-clothed software engineer.
- Hazardous environment usage. A telepresence bot can operate in environments in which humans cannot work.
The telepresence bot would have to have basic 3D analysis of the environment, emphasizing obstacle detection, movement capabilities, maintaining balance, and detection of humans and their coarse-grained social behavior. It could benefit from VR goggles, user eye-fixation tracking, speech-to-text capabilities, the ability to diagnose emotional undertones in speech, and so forth.

"Smart" communications
Assuming that SPARK works properly, it must be able to express emotions it detects in text. Thus it should be possible to flame-scan messages prior to delivery, effectively performing an emotion-sensitive "print preview". This could work with email, forums, blogging and instant messaging. With client-side plug-ins, the user could f.ex. be presented with a real-time "mirror" while typing, thus giving immediate feedback. F.ex. if you type YOU FARGIN BASTARD, you immediately see your avatar screaming angrily, and might wisely reconsider your message prior to submission. Such a "print preview" could use avatars capable of showing emotion, and/or simple document formatting methods such as font attributes (style, color, size, linking, ...), stationery, emoticons and even embedded images (f.ex. using the "cartoon strip" metaphor to associate avatar emotion with text paragraphs). If negative emotions are detected, the user may further be prompted to confirm the outbound message, possibly even after a "cool-down period". For certain types of communications, such as help desk communications and customer relations, the message may even be rerouted or forwarded to appropriate authorities if negative emotions are detected. To perform "smart" communications, the system would require the prior communication history between the participants, and possibly a "personality profile" of the communicants. It would also be beneficial to have a domain-knowledge dictionary as well as strong knowledge of anti-social communicational behavior.
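The flame-scan step could be prototyped very crudely with keyword matching; a real SPARK-style system would of course use proper emotion detection, so treat the word list and the all-caps heuristic below as invented placeholders:

```python
# Toy flame-scan: flag a message for "print preview" confirmation if it
# looks angry. A real system would use real emotion detection; this
# keyword list and the all-caps "shouting" heuristic are placeholders.

ANGRY_WORDS = {"bastard", "idiot", "hate", "stupid"}

def needs_cooldown(message):
    """Return True if the message should trigger a cool-down prompt."""
    words = message.lower().split()
    angry_hits = sum(1 for w in words if w.strip(".,!?") in ANGRY_WORDS)
    shouting = message.isupper() and len(message) > 3
    return angry_hits > 0 or shouting

print(needs_cooldown("YOU FARGIN BASTARD"))   # True: angry word, all caps
print(needs_cooldown("Thanks for the help!")) # False
```

A flagged message would then be rendered through the angry avatar "mirror" and held back until the user confirms.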
Text-to-video

While there exist text-to-speech modules which can f.ex. be used to deliver textual content to your phone, converting text to video is a different thing entirely. Of course this would mostly be a gimmick, as video devices can obviously display text already... Gimmicks do however sell toys, such as 3G phones. Thus f.ex. a telco might convert your inbound text messages into video streams, using smart avatars to act out the message in the background, text-to-speech to vocalize it, and playing the textual message in the foreground. To perform this, the conversion program would require a personality profile of the sender of the message, or use a generic profile. This could f.ex. be linked to a Facebook app to acquire such information.

Clippy 2.0 - Clippy's Revenge
Obviously, the infamous Clippy agent in Microsoft Office was a form of pseudo-smart-avatar. Clippy primarily succeeded in annoying the hell out of 95% of the user base, but assuming that he had actually been useful and less intrusive, he might have survived. A "smart avatar" with proper AI and domain knowledge could however possibly revive the horrid idea that was Clippy.

As for having actually used avatars that exhibit particular automation traits... quite frankly, no. As far as I can recall, all avatars I have used have been of the "play fixed animation loop" variety, with some rag-doll physics when they're getting slaughtered.
|
|
|
Post by Birna Íris on Feb 22, 2009 5:29:22 GMT -5
You guys write such long and intensive posts here that it is quite hard to keep up with you. I don't think I'll even try. What comes to my mind is variety. Human beings behave in various ways. Although most humans use their bodies similarly (obey the same social "laws") in non-verbal communication, we can clearly see that body movement, walking gait, gaze and gesture differ a lot between two individuals. I would consider it a smart environment that allows the avatars to differ. It looks very strange in a virtual, social environment when all the avatars move around in the exact same manner. I totally agree with Thors regarding the environmental data and with David regarding the showing of feelings. Other things could be spontaneous reactions to the environment with detailed sensor systems. For example, if a rock comes flying towards the avatar, he bends down; or if some other avatar in the environment starts yelling, the avatar stops and looks surprised.
|
|
|
Post by ellioman on Feb 22, 2009 10:04:55 GMT -5
Birna Íris wrote: "You guys write such long and intensive posts here that it is quite hard to keep up with you. I don't think I'll even try."

Haha, so true. Don't be discouraged though, guys, I love reading your posts ;D I find it quite interesting to think about how avatars would respond to things like extremely cold/hot weather, water, sand and so forth. And things like storms, environments that make breathing difficult (going into a burning building), burns (maybe when the avatar gets burned by fire) and so forth... Do you remember the film 'Minority Report' with Tom Cruise? Do you remember when he was walking around and all the ads were speaking to him as he walked by them? That is an example of avatar smarts used in a creepy way.
|
|
sikm
New Member
M.Sc. student in Computer Science at Reykjavik University
Posts: 6
|
Post by sikm on Feb 22, 2009 11:31:10 GMT -5
I really like Birna's idea of having different social awareness smarts for avatars. Since an avatar is a graphical representation of a user, each avatar should have slightly different body movement, posture, social awareness, eye gaze etc. The virtual environment would become much more lifelike and represent the real environment better.
This could be done by using profiling but, more interestingly, the avatars could also learn how to act by monitoring the user that is controlling them. All avatars could have some default smarts to begin with, plus maybe some profiling, but then, as the user controls the avatar in various conditions, the avatar learns how to act autonomously in the virtual environment.
For example, like in real life, people act similarly to each other when they are around strangers (polite, with guarded body movement and posture), but as we interact and talk to other people we get to know them, and we behave differently based on past experience, relationship, respect and how we feel towards them. Avatars could monitor the conversations, gestures, body movements and mood changes that the user produces, also monitoring other avatars around them, and adjust their social smarts based on that data.
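One way to picture this "learn by monitoring" idea is a per-acquaintance profile that starts at a polite stranger default and drifts toward the interaction style the system observes. The field names and the blending rule below are invented for illustration:

```python
# Toy sketch: an avatar adjusting its social behavior per acquaintance.
# It starts from a polite "stranger" default and drifts toward the
# observed style. Field names and the update rule are invented.

DEFAULT_PROFILE = {"formality": 0.9, "gesture_amplitude": 0.3}

class SocialMemory:
    def __init__(self):
        self.profiles = {}  # other avatar's id -> learned profile

    def profile_for(self, other):
        # Strangers get a copy of the default profile.
        return self.profiles.get(other, dict(DEFAULT_PROFILE))

    def observe(self, other, observed, rate=0.2):
        """Blend the stored profile a little toward an observed style."""
        profile = self.profile_for(other)
        for key, value in observed.items():
            profile[key] = (1 - rate) * profile[key] + rate * value
        self.profiles[other] = profile

mem = SocialMemory()
mem.observe("bob", {"formality": 0.1, "gesture_amplitude": 0.8})
print(mem.profile_for("bob"))  # formality drifts down, gestures get bigger
```

Over many observed conversations the avatar's behavior toward "bob" would diverge from its behavior toward strangers, which is roughly the effect described above.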
|
|
|
Post by gudleifur05 on Feb 22, 2009 14:48:39 GMT -5
I also like the idea of non-verbal behavior in characters. This has in fact been looked into in the toolkit BEAT: the Behavior Expression Animation Toolkit (Cassell, Vilhjálmsson & Bickmore). You can give a character much life by giving him the ability to move based on the context of what he is saying. A more futuristic view of things would be to imagine that you could train an avatar to know what gestures should be used in the right context. This could for example be used in educational applications, although I'm not fond of talking paperclips.
|
|
peter
New Member
Posts: 10
|
Post by peter on Feb 22, 2009 18:06:18 GMT -5
I think that "natural" behaviour is also worth looking into outside of conversations. Third-person role-playing games especially carry the promise that the user feels they are playing an "authentic" character, yet the modification of the character's attributes (as they can be found in the classic AD&D-based games) often only exhibits an effect in combat powers and conversation options. What if rather low values in charisma and strength caused the avatar to behave timorously in shady bars, and what if a character with a "chaotic evil" moral alignment and low intelligence tended to recklessly push and shove their way through crowds? Other "natural" subconscious behaviour could include the occasional glance at specific events in the surroundings, cursory looks to the left and right before crossing a street, etc.
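A minimal sketch of how such attribute-driven ambient behavior could be wired up (the attribute names follow AD&D convention, but the thresholds and behavior names are invented):

```python
# Toy sketch: derive ambient "subconscious" behaviors from classic
# RPG-style attributes. Thresholds and behavior names are invented.

def ambient_behaviors(stats, alignment):
    """Return the ambient behaviors a character exhibits outside combat."""
    behaviors = []
    if stats.get("charisma", 10) < 8 and stats.get("strength", 10) < 8:
        behaviors.append("act_timid_in_shady_bars")
    if alignment == "chaotic evil" and stats.get("intelligence", 10) < 8:
        behaviors.append("shove_through_crowds")
    behaviors.append("glance_at_nearby_events")  # everyone does this
    return behaviors

print(ambient_behaviors({"charisma": 6, "strength": 5}, "neutral good"))
# ['act_timid_in_shady_bars', 'glance_at_nearby_events']
```

The attributes the player already assigns would then shape how the avatar carries itself, not just its combat numbers.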
|
|
|
Post by kristjanbb02 on Feb 22, 2009 18:20:44 GMT -5
thors wrote: "Maybe not to reduce control overhead, but I can think of one thing that's missing from most FPS games, all VR/MMORPGs I've seen, and that's some indication that you don't exist in a vacuum. ..."

The lack of 'air' physics has been a problem in games for a long time, and it becomes evident once more physics are added. The same applies to water and other fluid physics, but I don't think those are relevant to avatar 'smartness'; rather, they are a limitation of current physics engines (3D fluid physics are HEAVY and are just omitted or poorly faked). After all, you don't choose whether your hair moves in the wind or not; the environment does. There is nothing to delegate (from the user) to the avatar 'smart system'. On the other hand, once the environment starts slapping you with 120mph wind while you hang on to a moving train, then you can delegate the task of resisting the wind, resisting the urge to look away, etc. to the avatar smart system.

thors wrote: "However, added realism in movement can be considered value-added for games. Consider a game where you have to traverse a narrow ledge. Added realism means the character would itself add body-movement to keep its balance. ..."

I have to disagree to a certain degree with that. While the avatar 'showing' you why you cannot move fast over a narrow path is good, any added 'negative' feedback that you need to fight against quickly becomes very annoying and adds nothing of value to a game. It may be interesting and fun to begin with, but in the long run it's just annoying. The point of a smart avatar is in big part to off-load the tedious, boring, insignificant and subtle tasks from the user.
Be helpful, don't get in the way of the user. One use I can see for this is a skill system, where having a skill in, say, balance would off-load the balancing task to the smart avatar. Having the skill would change the smart avatar from unhelpful, even restricting, to helpful.
The possibilities where you can apply a smart avatar system are almost endless. I've been more interested in where you can not (so easily). The most common example is fast-paced games: as they become more realistic, the unrealistic action switching of the players becomes harder to cover up, and that's where you really need a smarter system. For example, doing an instant 180 turn, or near-instant weapon switching (where weapons sometimes appear out of nowhere). You end up with two options.

A) Allow the visualized avatar and the actual physical avatar to desynchronize for a short period. This adds all sorts of subtle visual problems. For example, if a player suddenly does a 180-degree movement change to avoid an incoming projectile, the smoothed visual avatar appears to go right into the projectile and then turn and run away while the projectile goes right through. Another example: if the person is scrolling through weapons, the smart avatar waits for a weapon to be selected for a second before doing the switch animation, but the user decides to fire as soon as he finds the correct weapon, leading to problems matching the attack with a weapon the avatar has not yet been shown to have drawn. One solution is to fast-forward through the animation and accept slightly desynchronized fire sequences.

B) The other alternative may seem obvious: why not just hinder the user from making such absurd moves in the first place, slowing him down while the avatar does his thing? The problem with this is that it totally changes the gameplay and balance of most games, adds new synchronization problems, and often totally frustrates the users. If you start applying this everywhere you quickly end up with a game that isn't fast paced at all and gives the impression of being restrictive and limited.
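Option A is essentially smoothing with a snap threshold: the visual avatar chases the gameplay state, and when the gap gets absurd it gives up and pops. A toy one-dimensional version for the avatar's facing angle (the turn rate and snap threshold are invented numbers):

```python
# Toy sketch of option A: the visual avatar smoothly chases the "real"
# (gameplay) facing angle, and snaps when the gap gets absurd, e.g. an
# instant 180. The per-frame turn rate and snap threshold are invented.

def step_visual_heading(visual, actual, max_turn=30.0, snap_gap=150.0):
    """Advance the displayed heading (degrees) one frame toward the
    actual heading, returning the new displayed heading."""
    gap = (actual - visual + 180.0) % 360.0 - 180.0  # shortest signed gap
    if abs(gap) > snap_gap:
        return actual   # give up smoothing, accept the visual pop
    if abs(gap) <= max_turn:
        return actual   # close enough to finish the turn this frame
    return visual + max_turn * (1 if gap > 0 else -1)

# A sudden 120-degree turn: the visual avatar takes several frames to
# catch up, during which the two states are desynchronized.
h = 0.0
for _ in range(4):
    h = step_visual_heading(h, 120.0)
print(h)  # 120.0
```

During those in-between frames the visual avatar is exactly the kind of "lie" described above: it can appear to run into a projectile its gameplay twin already dodged.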
I think there is hardly a person that has played recent somewhat fast-paced games who hasn't seen those synchronization problems or been frustrated by laggy response, whether they have given it any thought or not.
No matter what type of application it is, I think smart avatars serve as off-load machines, helpers and 'beautifiers'. What they should not be is unhelpful or restrictive. The user should always have the final say, and the smart avatar should be ready to adapt instantly. The problem is that as the virtual world becomes more complex, and more ways to interact with it become available, the avatar will become more error prone and will need better error correction. Almost another smart avatar just tasked with covering up the first one's mistakes.
|
|
|
Post by kristjanbb02 on Feb 22, 2009 18:40:35 GMT -5
peter wrote: "I think that "natural" behaviour is also worth looking into outside of conversations. ... What if rather low values in charisma and strength caused the avatar to behave timorously in shady bars and what if a character with an "evil chaotic" morale alignment and low intelligence tended to recklessly push and shove their way through crowds? ..."

This reminded me of something I was actually going to say. Once every character of the same 'type' starts doing the same behavior, the sense of a unique avatar is quickly lost. Once you introduce social behavior you need to be able to really customize it (or have the system detect it?) to a point where you become unique and not just another tree in the wind. Just looking at character attributes (charisma (appropriate response or not), int/wisdom (absent-minded or not), etc.), class type, alignment, etc. isn't enough to get the typical quirks and behavior patterns seen in personality-'rich' characters of typical fictional work and role playing. Auto-detecting can be done with some conversation parsing and behavior analyzing, but it is somewhat limited and really needs a user interface to configure, just like you need to create your basic character's looks because those cannot be inferred from your actions. Though I really liked the ""evil chaotic" moral alignment and low intelligence tended to recklessly push and shove their way through crowds". I just think characters need new headroom for growing if you open this path, to really get the big benefit.
|
|
|
Post by Stefán Freyr on Feb 22, 2009 19:13:19 GMT -5
Well, I guess I'm going to have to be the voice of pessimism here to some extent. First of all, I'm not sure I would want my avatar to be too smart when it comes to social interaction with other users, meaning that I don't want it to decide on its own to become offended or happy and express these emotions to other players without my expressed permission. What if I don't feel the same way? What if my avatar's "smarts" gave a thumbs up to a fellow Iraqi player? Or the "OK" sign to a Brazilian teammate? ( www.cracked.com/article_16335_7-innocent-gestures-that-can-get-you-killed-overseas.html - note that I do not vouch for the validity of any of these). Of course, you could argue that for the above example, a truly smart avatar would have this knowledge as well and steer clear of gestures that could be misinterpreted. OK, fair enough... but what about when I just don't agree with my avatar? What if it's jumping with joy over something that I (for whatever reason) actually feel angry about, and desperately want to express that feeling to other players? Do you really want your avatar's face lighting up like a Christmas tree when you get a royal flush in a virtual online poker game?

So again, I'm going to drag this conversation down to the "implementation level", sorry about that. As Hannes already said, giving the user a GUI with "emotion options" to choose from is not really an option. It's too disruptive and plain annoying. So unless the avatar is somehow able to get my expressed approval for the reaction, I'm not sure I'd want it to display it at all. So I guess what I'm saying is that what the avatar needs to "know" is whether or not its human player actually agrees with the expression the avatar thinks it should "publish" to the world. I guess this could be done to some extent. For example, analysis of the player's facial expressions might provide vital clues as to what emotions he is feeling.
Measuring heart rate, blood pressure and skin moisture could indicate whether the player is excited, and even brain waves can be measured to get his state of mind. As I'm a big fan of ubiquitous computing, I don't think these things will become too big until the hardware that has to be used becomes ubiquitous. (Sorry again for dragging the conversation down to the implementation level.)

Finally, to touch on Birna's point about variety. I agree that in almost all ways it would be "better", but I'm just wondering whether doing the exact opposite might help us with some of the social issues that we have with intercultural communication. If players' emotions could be accurately measured (as I discussed above) and we could make sure that all the avatars in our world display these emotions in the same way, would it not be easier to learn this one set of expressions that is common to all the avatars? Just a thought.
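The "does the player actually agree" gate earlier in the post could be sketched as a filter on proposed expressions: the avatar proposes an emotion, the sensors veto it if the player doesn't appear to feel it. The sensor channel, emotion labels and agreement rule below are all invented placeholders:

```python
# Toy sketch: only let the avatar "publish" an expression if it roughly
# agrees with what the sensors read off the player. The sensor channel,
# emotion labels and the agreement rule are invented placeholders.

def approve_expression(proposed_emotion, sensors):
    """Return True if the avatar may display the proposed emotion."""
    excited = sensors.get("heart_rate", 70) > 100
    if proposed_emotion in ("joy", "anger") and not excited:
        return False  # avatar wants drama, player reads as calm: suppress
    return True

print(approve_expression("joy", {"heart_rate": 72}))   # False
print(approve_expression("joy", {"heart_rate": 120}))  # True
```

In the poker example, a calm heart rate would suppress the avatar's royal-flush grin without the player ever touching a GUI.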
|
|
|
Post by eirikurn on Feb 22, 2009 21:19:31 GMT -5
I like Peter's and Kristján's ideas involving personalization of the avatar's social characteristics. It could either be manual selection of personality profiles or "stats", or automatic analysis of the spoken dialogue to derive some sort of physical personality.
This would work best in role playing environments where people really want their avatar to be complete.
So if I've chosen a barbaric character type, the character would animate and emote conversations in a rougher, more barbaric way, while an intellectual character would have smarter, smoother movements while speaking/acting.
|
|
|
Post by flassari on Feb 23, 2009 4:32:07 GMT -5
My first experience of wanting my avatar to be smart was when I was playing the game Trespasser en.wikipedia.org/wiki/Trespasser_(computer_game) about ten years ago. This was the first time I played a first-person shooter with no HUD, where you could look down and see the avatar's chest and body, and where you would have to control the hands of the avatar to pick up weapons and actually adjust your hand and wrist to fire them in the desired direction (people can download the demo at www.download.com/Trespasser-demo/3000-7563_4-10024401.html to try it out themselves). This was all added overhead to using a weapon in a game that should be played fast-paced (and the dinosaurs are really hard to kill). Although a completely different and interesting way to play an FPS, a smart avatar would have suited it well.

Also, as Thors said with the balance factor, it is indeed a good idea, and a good example of a well-thought-out implementation of that is in the recent game Assassin's Creed. The avatar doesn't necessarily lose balance on its own, but rather signals that the player is losing it. That game further utilizes smart functionality by, as we've spoken of before in class discussion, pushing gently and touching the people around the avatar while moving slowly in a crowd, a great feature IMHO that really added to my connection to the avatar.

Since I already started comparing everything to video games, I also want to talk about the "vacuum" and environmental reactions of avatars. One example of really great environmental reactions is in Crysis, where the avatar's eyes have to adjust to the sun when coming out of dark rooms and other areas, and, as Thors talked about, the wetness really shows for a while when getting out of water as the drops slide down the glass of the helmet the avatar is wearing.
And in reference to the extreme cold that ellioman mentioned, the avatar's suit and the screen itself actually freeze a little in extremely cold environments. Crysis also really adds to the feeling of fluid air around you with its great falling-snow simulation, where you actually get a sense of a climate around you. I thought I'd take a gamer's view on the topic and leave it at that; this topic could go on and on in the other fields of virtual environments.
|
|
|
Post by Hrafn Þorri on Feb 24, 2009 6:42:34 GMT -5
ellioman wrote: "Do you remember the film 'Minority Report' with Tom Cruise? Do you remember when he was walking around and all the ads were speaking to him as he walked by them? That is an example of avatar smarts used in a creepy way."

That's started happening, actually. "When you watch these ads, the ads check you out" (Associated Press): MILWAUKEE (AP) — Watch an advertisement on a video screen in a mall, health club or grocery store and there's a slim — but growing — chance the ad is watching you too.
Small cameras can now be embedded in the screen or hidden around it, tracking who looks at the screen and for how long. The makers of the tracking systems say the software can determine the viewer's gender, approximate age range and, in some cases, ethnicity — and can change the ads accordingly.
That could mean razor ads for men, cosmetics ads for women and video-game ads for teens.
|
|
|
Post by ellioman on Feb 24, 2009 20:24:09 GMT -5
Hrafn Þorri wrote: "That's started happening, actually. When you watch these ads, the ads check you out ... That could mean razor ads for men, cosmetics ads for women and video-game ads for teens."

Ohhh man..... I'm leaving this course and going to live somewhere up in the mountains...
|
|