Description
This thesis is an initial test of the hypothesis that superficial measures suffice for measuring collaboration among pairs of students solving complex math problems, where the degree of collaboration is categorized at a high level. Data were collected in the form of logs from students' tablets and the vocal interaction between pairs of students. Thousands of different features were defined, and then extracted computationally from the audio and log data. Human coders used richer data (several video streams) and a thorough understanding of the tasks to code episodes as collaborative, cooperative, or asymmetric contribution. Machine learning was used to induce a detector, based on random forests, that outputs one of these three codes for an episode given only a characterization of the episode in terms of superficial features. An overall accuracy of 92.00% (kappa = 0.82) was obtained when comparing the detector's codes to the humans' codes. However, due to irregularities in running the study (e.g., the tablet software kept crashing), these results should be viewed as preliminary.
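The following is a minimal sketch, not the thesis code, of the kind of pipeline the abstract describes: a random-forest classifier trained on per-episode feature vectors and scored against human codes using accuracy and Cohen's kappa. The feature matrix, labels, and split parameters below are placeholder assumptions for illustration only.

    # Sketch only: random-forest "detector" over per-episode superficial features,
    # evaluated against human codes with accuracy and Cohen's kappa.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score, cohen_kappa_score

    LABELS = ["collaborative", "cooperative", "asymmetric"]

    # Placeholder data: one row of audio/log features per episode (X) and the
    # human coders' label for the same episode (y). Real features would come
    # from the tablet logs and speech recordings.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 50))
    y = rng.choice(LABELS, size=300)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=0
    )

    detector = RandomForestClassifier(n_estimators=500, random_state=0)
    detector.fit(X_train, y_train)

    pred = detector.predict(X_test)
    print(f"accuracy = {accuracy_score(y_test, pred):.2%}")
    print(f"kappa    = {cohen_kappa_score(y_test, pred):.2f}")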

    Details

    Title
    • Using the tablet gestures and speech of pairs of students to classify their collaboration
    Contributors
    Date Created
    • 2014
    Resource Type
    • Text
    Note
    • Partial requirement for: M.S., Arizona State University, 2014 (thesis)
    • Includes bibliographical references (p. 63-68) (bibliography)
    • Field of study: Computer science

    Citation and reuse

    Statement of Responsibility

    by Sree Aurovindh Viswanathan
