Towards an Automated Model to Evaluate Collaboration through Non-Verbal Interaction in Collaborative Virtual Environments
Virtual environments represent a helpful resource for learning and training. In their multiuser modality, Collaborative Virtual Environments (CVEs) allow geographically distant people to experience collaborative learning and team training; in this context, automatic monitoring of collaboration can provide valuable and timely information about individual and group performance, either to human instructors or to intelligent tutoring systems. CVEs enable people to share a virtual space where they interact through graphical representations, generating nonverbal behavior such as gaze direction or deictic gestures, a potential means to understand collaboration. This paper presents an automated model and its inference mechanisms to evaluate collaboration in CVEs based on the nonverbal activity of the participants. The model is a multi-layer analysis comprising data filtering, fuzzy classification, and rule-based inference, producing a high-level assessment of group collaboration.
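The multi-layer pipeline named in the abstract (filtering, fuzzy classification, rule-based inference) could be sketched as follows. This is an illustrative assumption, not the authors' actual model: the membership functions, the two nonverbal cues (gaze at partner, deictic gestures), and the rule set are all hypothetical placeholders.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b (a common fuzzy-set shape)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(value):
    """Fuzzy classification layer: map a normalized activity value in [0, 1]
    to membership degrees in three linguistic levels."""
    return {
        "low": tri(value, -0.5, 0.0, 0.5),
        "medium": tri(value, 0.0, 0.5, 1.0),
        "high": tri(value, 0.5, 1.0, 1.5),
    }

def infer_collaboration(gaze_at_partner, deictic_gestures):
    """Rule-based inference layer (Mamdani-style: AND = min, OR = max).
    Inputs are assumed to be pre-filtered, normalized nonverbal measures."""
    g = fuzzify(gaze_at_partner)
    d = fuzzify(deictic_gestures)
    # Hypothetical rules: both cues high -> high collaboration;
    # mixed or medium cues -> medium; both low -> low.
    scores = {
        "high": min(g["high"], d["high"]),
        "medium": max(min(g["medium"], d["medium"]),
                      min(g["high"], d["low"]),
                      min(g["low"], d["high"])),
        "low": min(g["low"], d["low"]),
    }
    return max(scores, key=scores.get)

# A participant who often looks at partners and gestures frequently.
print(infer_collaboration(0.9, 0.8))  # -> high
```

The sketch illustrates only the shape of the inference chain; a real system would first filter raw avatar logs (head orientation, arm movement) into the normalized measures assumed here.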