Post by Hannes Vilhjalmsson on Mar 16, 2011 21:41:56 GMT -5
This week we'll be reading about the system I was working on at the USC Information Sciences Institute for 3 years before joining Reykjavik University. It provides a good overview of how AI can be applied to the area of education and training and gives you some of the background for my work here at CADIA.
Post by grimurtomasson on Mar 20, 2011 7:01:51 GMT -5
Were there any particular problems in creating a functioning system from this set of disparate technologies?
What was the accuracy of the speech recognition?
One of the things that is obviously missing from this system is the incorporation of the learner's actual body language (gestures etc.). Has this been implemented in similar systems, or in the continuation of this one (Alelo)?
Have there been studies on the effectiveness of game-based teaching approaches compared to more classical approaches (lectures, assignments, etc.)?
Post by Ásgeir Jónasson on Mar 20, 2011 11:08:28 GMT -5
1. Does the architecture of the TLTS allow for relatively easy plugging in of new languages and cultures? What would have to be changed and what could stay the same, and what other languages, if any, have been tried other than Levantine Arabic since 2004?
2. I believe the way the system teaches language is a great approach, and I think schools should be using methods closer to this. Has this system, or similar ones, been used for language training in academic institutions since this paper was written?
- How extensible is it to other languages? Do you need to build a new environment, or can you obtain a new realistic application just by changing some features (especially gestures and culture)?
- Usually when people are learning a new language, they start with some communicative sentences (like "Hello, how are you? What's your name?"); then they learn colors, numbers, things about the house, jobs and so on. How difficult was it to combine the progression of the learning with the progression of the story? I can imagine the first conversation between the user and the other avatar: of course "Nice to meet you" and "I'm ..." are common phrases to learn immediately, but what then? (I don't think the story contains dialogues about colors, and real dialogues seem really hard for beginners.)
How did the natural language parser work, and how well did it work?
"if a learner says something that departs from the reference dialogs but which is still appropriate in context, the non-player characters should respond appropriately" -- could this also work if the player said something extreme like "I have a gun, do as I say", or does it only work if he is "almost" right?
Post by Elín Carstens on Mar 20, 2011 17:13:27 GMT -5
"- character behavior should be consistent with the profiles of the characters and narrative structure."
1) How did you accomplish this? 2) On what were the profiles based, and what are the main components of a character profile in the system? 3) How were the character profiles and the narrative structure connected?
Post by Hrafn J. Geirsson on Mar 20, 2011 17:19:51 GMT -5
1) I imagine military language training must be quite specific. How much work do you think it would take to expand it into a general beginner's language training program?
2) How sophisticated was the conversation you could have with the NPCs? Did situations occur where a trainee would say something that was unexpected, yet technically correct? How would the system respond in such situations?
3) Did you run into any unexpected issues while designing or implementing the system? What would you say was the most surprising thing you ran into during the creation of the TLTS?
Post by kristofer kristofersson on Mar 20, 2011 17:32:42 GMT -5
1. Please explain the following: a multimodal interface pedagogical agent.
2. Is modern speech generation software good enough to handle this project, or did they have to create completely new software for that too? Most computer speech generation that I have seen sounds very bad, clearly sounding like a computer. (You do not want to learn to speak a language from something that has bad pronunciation, no accent, and is clearly a computer.)
Post by Eiríkur Fannar Torfason on Mar 20, 2011 18:05:12 GMT -5
1. How dependent is TLTS on the availability of NLP resources for the language being taught?
2. I'd expect that producing a system like this is a heck of a lot more expensive than producing video or audio language training material. Did DARPA do some sort of ROI analysis to evaluate whether a CALL system like TLTS is a good investment compared to more traditional language teaching methods?