Abstract - Roger Azevedo
Fostering self-regulated learning with advanced learning technologies by using multimodal multichannel process data: Opportunities and challenges
Learning involves the real-time deployment of cognitive, affective, metacognitive, motivational, and social processes. Traditional methods of measuring self-regulatory processes (e.g., self-reports) severely limit our understanding of the temporal nature and role of these processes during learning, problem solving, reasoning, etc. Interdisciplinary researchers have recently used advanced learning technologies (e.g., intelligent tutoring systems, serious games, simulations, immersive virtual environments) to enhance learning by inducing, fostering, and supporting self-regulatory processes. Despite the emergence of this interdisciplinary research, much work is still needed given the variety of theoretical models and assumptions underlying human learning, methodological approaches (e.g., log-files, eye-tracking, physiological sensors), data types (e.g., verbal, non-verbal, physiological), and analytical methods (e.g., statistics, data mining, machine learning). In this presentation, I will focus on several major challenges currently facing researchers, educators, and industry partners, including: (1) theoretical and methodological challenges related to real-time detection, tracking, and modeling of self-regulatory processes; (2) recent work on using multimodal multichannel data to detect, track, and model self-regulatory processes while learning with advanced learning technologies; and (3) outlining an interdisciplinary research agenda and opportunities that have the potential to significantly enhance advanced learning technologies' ability to provide real-time, intelligent support of learning, problem solving, and reasoning across domains.