Predictive Student Modeling in Educational Games with Multi-Task Learning

Michael Geden,1 Andrew Emerson,1 Jonathan Rowe,1 Roger Azevedo,2 James Lester1
1North Carolina State University, 2University of Central Florida
{mageden,ajemerso,jprowe,lester}@ncsu.edu, roger.azevedo@ucf.edu

Abstract

Modeling student knowledge is critical in adaptive learning environments. Predictive student modeling enables formative assessment of student knowledge and skills, and it drives personalized support to create learning experiences that are both effective and engaging. Traditional approaches to predictive student modeling utilize features extracted from students' interaction trace data to predict student test performance, aggregating student test performance as a single output label. We reformulate predictive student modeling as a multi-task learning problem, modeling questions from student test data as distinct "tasks." We demonstrate the effectiveness of this approach by utilizing student data from a series of laboratory-based and classroom-based studies conducted with a game-based learning environment for microbiology education, CRYSTAL ISLAND. Using sequential representations of student gameplay, results show that multi-task stacked LSTMs with residual connections significantly outperform baseline models that do not use the multi-task formulation. Additionally, the accuracy of predictive student models improves as the number of tasks increases. These findings have significant implications for the design and development of predictive student models in adaptive learning environments.

Introduction

Recent years have seen growing interest in modeling student knowledge in adaptive learning environments (Piech et al., 2015; Mao, Lin, & Chi, 2018; Gardner, Brooks, & Baker, 2019). Predictive student modeling is the task of predicting students' future performance on a problem or test based upon their past interactions with a learning environment. Predictive modeling is important for tailoring student experiences in a range of adaptive learning environments, such as intelligent tutoring systems (Gardner, Brooks, & Baker, 2019) and educational games (Shute et al., 2016; Min et al., 2019). By modeling student knowledge, adaptive learning environments can personalize delivery of problem scenarios, hints, scaffolding, and feedback to create student learning experiences that are more effective than one-size-fits-all approaches (VanLehn, 2011). However, predictive student modeling is a challenging machine learning task because student data is often noisy, heterogeneous, and expensive to collect (Bosch et al., 2016).

Predictive student models typically represent student knowledge as an aggregate across a student's performance on a set of questions. For example, a typical output label in predictive student modeling is the overall accuracy of student responses on a post-test administered after the student has finished interacting with an adaptive learning environment. This approach makes the stringent assumptions that each post-test question has an equivalent mapping from features in the input space and that each question is equally representative of the underlying latent construct being measured (e.g., science content knowledge). A natural extension is to relax these assumptions by employing multi-task learning (MTL), wherein each test question is an outcome variable in the same predictive model.
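To make the reformulation concrete, the sketch below contrasts the two label constructions for one student on a 17-item post-test (the response values are invented for illustration):

```python
import numpy as np

# Hypothetical per-item correctness for one student (1 = correct, 0 = incorrect).
item_responses = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0])

# Traditional single-task label: one aggregate post-test score per student.
single_task_label = item_responses.sum()   # 11 correct out of 17

# Multi-task labels: each item is its own binary outcome ("task").
multi_task_labels = item_responses         # shape (17,): one target per question
```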
MTL has been shown to yield improved model accuracy across a range of domains by sharing feature representations across different tasks, which provides a natural form of model regularization (Zhang & Yang 2017; Argyriou, Evgeniou, & Pontil 2007). MTL has particular promise for predictive student modeling, where there are typically multiple test questions designed to assess the same knowledge and where there is often limited data available on student interactions with the particular adaptive learning environment.

In this paper, we present a novel predictive student modeling framework using MTL. We utilize MTL to model student outcomes at the item level within a game-based learning environment for middle school science education, CRYSTAL ISLAND.

Empirical results demonstrate the efficacy of the approach, with markedly improved results over what is typical for predictive student modeling. Additionally, we explore how different mechanisms of self-attention can influence model performance by selecting relevant sections of student gameplay interactions.

Related Work

Student Modeling

A widely used approach for modeling student knowledge in adaptive learning technologies is Bayesian knowledge tracing (BKT) (Mao et al. 2018). BKT models student knowledge as a binary latent variable in a hidden Markov model. The model is updated based upon student interactions with an adaptive learning environment, which provide evidence of student knowledge and skills over time. Although BKT is an effective approach to student modeling in adaptive learning environments, it is not always well suited for student modeling in educational games, particularly in cases in which a game-based learning environment lacks repeated content exercises that provide recurring evidence of student skills.

An alternative to Bayesian knowledge tracing is stealth assessment, which utilizes methods from evidence-centered design to devise Bayesian networks that link student actions with content knowledge based upon network structures that are manually authored by domain experts (Shute et al. 2016). Stealth assessment is an effective approach for predictive student modeling in educational games, but the models are labor-intensive to construct. A related framework is deep stealth assessment, which utilizes long short-term memory (LSTM) networks to predict student test performance following interaction with an educational game and has shown promising results at modeling student knowledge without requiring domain experts (Min et al. 2019).

Item response theory (IRT) models the probability that a student will correctly answer a given exercise by incorporating the characteristics of both the test-taker and the question (Embretson & Reise 2013). IRT does not assume all questions have the same difficulty, and it can model an individual's probability of success as a function of both their ability and the difficulty of the question (e.g., in the one-parameter logistic model, $P(y_{ij} = 1) = \sigma(\theta_i - b_j)$ for student ability $\theta_i$ and item difficulty $b_j$). Extensions of this work include time-varying models (Lan, Studer, & Baraniuk 2014) and the integration of ideas from IRT into traditional BKT models (Khajah et al. 2014). More recent work has investigated recurrent neural networks to capture more complex representations of student knowledge and to estimate the probability that a student will answer the next question correctly (Piech et al. 2015). Other recent applications include the use of LSTM-based architectures with an attention mechanism to predict student performance for the personalization and sequencing of exercises (Su et al. 2018). Our work utilizes similar sequential architectures, but we incorporate methods from multi-task learning to significantly improve model performance.

Multi-Task Learning

Recent years have seen a growing interest in multi-task learning in applications such as computer vision (Fang, Zhang, Zhang, & Bai 2017; Kendall, Gal, & Cipolla 2018), climate modeling (Goncalves, Von Zuben, & Banerjee 2016), healthcare (Jin, Yang, Xiao, Zhang, Wei, & Wang 2017), and dialogue analysis (Tong, Fu, Shang, Zhao, & Yan 2018). Multi-task learning has been shown to improve model fitting by sharing information across multiple outcome variables, providing shared components of the model with additional training data and enhanced regularization (Zhang & Yang 2017).
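The standard way to operationalize this sharing is hard parameter sharing: a trunk network whose weights serve every task, feeding one small head per task. A minimal PyTorch sketch (layer sizes and class name are illustrative, not taken from any cited system):

```python
import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    """Shared trunk with one small head per task (hard parameter sharing)."""
    def __init__(self, n_features: int, n_tasks: int, hidden: int = 64):
        super().__init__()
        # Trunk weights receive gradients from every task's loss, so each
        # task effectively augments the training signal for the others.
        self.trunk = nn.Sequential(nn.Linear(n_features, hidden), nn.Tanh())
        # One scalar logit per task (e.g., one per test question).
        self.heads = nn.ModuleList(nn.Linear(hidden, 1) for _ in range(n_tasks))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.trunk(x)
        return torch.cat([head(h) for head in self.heads], dim=1)  # (batch, n_tasks)
```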
Multi-task learning dramatically reduces the number of parameters that need to be estimated, as well as the compute time required, compared to fitting each outcome variable separately. This is operationalized by sharing weights across multiple tasks based upon the assumption that the tasks have an inherent relationship (Shui et al. 2019). A challenge of multi-task learning is its sensitivity to the choice of loss weights for each of the tasks. Hyperparameter tuning of the loss weights is effective with a small number of tasks but does not scale well as the number of tasks increases. An alternative approach is to estimate the loss weights as part of the model building process (Kendall et al. 2018).

Multi-task learning is particularly well suited to adaptive learning environments, in which data collection is often labor intensive compared to other machine learning applications. Frameworks that make efficient use of training data and incorporate regularization effectively can therefore be beneficial in building predictive models from datasets with a limited sample size (Sawyer et al. 2018). Multi-task learning also allows for the separate modeling of individual questions, which, as IRT has demonstrated, can have largely different characteristics even when the questions manifest from the same underlying latent variable. A previous study found favorable results using MTL to predict student test scores with a standard feedforward neural network (Bakker and Heskes 2003), but it did not involve sequences of student actions as are often encountered in adaptive learning environments. Additionally, only one weighting of each task's loss function was explored, even though different loss weightings can have a large effect on model accuracy (Kendall, Gal, & Cipolla 2018).

Dataset

We investigated the multi-task learning framework for predictive student modeling in an educational game for microbiology education, CRYSTAL ISLAND (Rowe et al. 2011).

In CRYSTAL ISLAND, students take the role of a medical field agent investigating an infectious outbreak on a remote island (Figure 1). Students talk with non-player characters, explore different locations, read virtual books and microbiology posters, test hypotheses about the outbreak in a virtual laboratory, and record their findings in a virtual diagnosis worksheet. As students navigate through the game, their actions and locations are stored in trace log files that are subsequently used for modeling.

In this work, we used data from two different samples of students across different contexts (laboratory and classroom) to increase the heterogeneity of the sample and the generalizability of the resulting model (Sawyer et al. 2018). Students from both samples answered the same pre- and post-test surveys, but there were some differences in the experimental setup and game. Combining the data from the university-based laboratory study (n = 62) with the data from the classroom-based study (n = 119), the total sample size of the dataset is 181 students.

Prior to playing the game, students completed a pre-game survey containing demographic questions, questionnaires about student interest and achievement goals, and a 17-item microbiology content knowledge pre-test composed of multiple-choice questions. Each question had four options with one correct answer. The questions centered on microbiology content such as pathogens, viruses, carcinogens, and bacteria. Students then played CRYSTAL ISLAND until they either solved the in-game mystery or ran out of time. After playing the game, students completed a post-game survey, which contained a separate set of 17 microbiology content knowledge questions. The post-test microbiology content items were summed to create a single post-test score.

Figure 1: CRYSTAL ISLAND Game Environment.

Feature Representation

The input features for all models consisted of items from two components of the pre-game survey (33 features), an indicator variable representing the dataset to which the student belonged (3 features), and the student's gameplay actions within CRYSTAL ISLAND (130 features), which yielded a total of 166 features. From the pre-game survey, we used 16 items from a survey on emotions, interest, and value (Likert scales) and the 17-item microbiology content pre-test (correct/incorrect answers). Similar to previous work that used gameplay log features in a learning environment, we used a one-hot encoding of student actions with several components (Min et al. 2017), illustrated in the code sketch after this list:

- Action type: The system records each time the student moves to a new location within the virtual environment, engages in conversation with a non-player character (NPC), reads a virtual book or article, completes an in-game milestone (e.g., identifying the outbreak's transmission source), tests a hypothesis, or records findings in the diagnosis worksheet. The data include 8 distinct player action types.
- Action arguments: Action arguments are specific to the type of action the student is taking. For example, they include the name of the book the student is reading, the NPC with whom the student is conversing, and the object the student is testing in the virtual laboratory. The data contain 97 distinct types of player action arguments.
- Location: Within the game world, the system logs the location of each action. The data track 24 non-overlapping, discrete regions of the virtual game world.
- Game time elapsed: The system logs the time of each student action within the game, which is transformed into elapsed seconds since the start of gameplay.
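As an illustration, the following sketch encodes a single logged action into the 130-dimensional gameplay feature vector described above (field names and vocabularies are hypothetical; the paper does not publish its logging schema):

```python
import numpy as np

# Hypothetical vocabularies matching the reported feature counts:
# 8 action types + 97 action arguments + 24 locations + 1 elapsed-time value = 130.
ACTION_TYPES = ["move", "talk", "read", "milestone",
                "test", "worksheet", "pickup", "drop"]
N_ARGUMENTS = 97   # e.g., book titles, NPC names, testable objects
N_LOCATIONS = 24   # discrete regions of the island

def encode_action(action_type: str, argument_id: int, location_id: int,
                  elapsed_seconds: float) -> np.ndarray:
    """One-hot encode a single student action from the trace logs."""
    vec = np.zeros(len(ACTION_TYPES) + N_ARGUMENTS + N_LOCATIONS + 1)
    vec[ACTION_TYPES.index(action_type)] = 1.0
    vec[len(ACTION_TYPES) + argument_id] = 1.0
    vec[len(ACTION_TYPES) + N_ARGUMENTS + location_id] = 1.0
    vec[-1] = elapsed_seconds  # game time since the start of play
    return vec

# A student's gameplay is then a sequence of such vectors, one per action.
```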
Predictive Student Modeling with Multi-Task Stacked LSTMs

Student assessments are composed of multiple questions measuring the same construct (e.g., science content knowledge, personality) in order to provide accurate and reliable results. The traditional paradigm for modeling student assessments is to represent the outcome as an aggregate of the student's performance across all questions. This approach constrains the model to utilize the same feature encoding $f(x_i, \theta_1) = h_i$ and the same mapping from the feature encoding $g(h_i, \theta_2) = y_i$ across questions.

In this work, student knowledge modeling is reconceptualized within a multi-task learning framework, retaining a shared feature representation for efficient estimation while providing increased flexibility for different question characteristics through unique mappings from the encoding space. The long sequences of student actions generated by the game-based learning environment are modeled using a stacked LSTM with a residual connection. We explore how attention can potentially help the model focus on relevant sections of gameplay (Luong, Pham, and Manning 2015). Finally, the pre-test data containing student characteristics is concatenated with the encoded gameplay features, fed into a dense layer, and then output as a prediction via the output layer, as sketched below.
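The following PyTorch sketch shows the overall multi-task architecture under these assumptions (exact layer sizes and the residual wiring are illustrative; the paper specifies two stacked LSTMs with a residual connection but not every implementation detail):

```python
import torch
import torch.nn as nn

class MultiTaskStackedLSTM(nn.Module):
    def __init__(self, n_gameplay_feats=130, n_pretest_feats=36,
                 lstm_units=64, dense_units=64, n_tasks=17):
        super().__init__()
        self.lstm1 = nn.LSTM(n_gameplay_feats, lstm_units, batch_first=True)
        self.lstm2 = nn.LSTM(lstm_units, lstm_units, batch_first=True)
        self.dense = nn.Sequential(
            nn.Linear(lstm_units + n_pretest_feats, dense_units), nn.Tanh())
        # One sigmoid head per post-test item ("task").
        self.heads = nn.ModuleList(nn.Linear(dense_units, 1) for _ in range(n_tasks))

    def forward(self, gameplay_seq, pretest):
        h1, _ = self.lstm1(gameplay_seq)   # (batch, T, units)
        h2, _ = self.lstm2(h1)
        h = h1 + h2                        # residual connection across the stack
        summary = h[:, -1, :]              # last output (self-attention is the alternative)
        z = self.dense(torch.cat([summary, pretest], dim=1))
        logits = torch.cat([head(z) for head in self.heads], dim=1)
        return torch.sigmoid(logits)       # per-item probability of a correct answer
```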

Single-Task Learning

Consider a dataset composed of a $d$-dimensional input space associated with a set of $K$ correct/incorrect responses to questions across $N$ i.i.d. students. The performance of each student is represented as the sum of the questions they answered correctly, $y_i$. Using mean-squared error as the loss function for $y_i$, this single-task representation has the following formulation:

$$L(\Theta) = \frac{1}{N} \sum_{i=1}^{N} \bigl( y_i - f(x_i, \theta) \bigr)^2 = \frac{1}{N} \sum_{i=1}^{N} \Bigl( \sum_{k=1}^{K} y_{i,k} - f(x_i, \theta) \Bigr)^2$$

This formulation assumes that the loss for each of the $K$ tasks is weighted equally. Additionally, each task is given an identical, shared representation, $f(x, \theta)$.

Multi-Task Learning

A multi-task learning framework relaxes the assumption that all tasks are weighted equally by having both a shared representation, $f(x, \theta)$, and a task-specific representation, $g(\cdot, \theta_k)$, for each task $k \in \{1, \dots, K\}$. The overall multi-task learning loss function is often composed as a weighted sum of the individual task loss functions:

$$L(\Theta) = \sum_{k=1}^{K} w_k \, L_k\bigl( y_k, g(f(x, \theta), \theta_k) \bigr)$$

The weight of each individual task, $w_k$, must be determined before training the MTL model and thus is not learned. A challenge stemming from this fact is that the overall loss can be sensitive to the selection of each $w_k$, which can become prohibitively expensive to tune as $K$ grows large.

Uncertainty weighted. Kendall et al. (2018) proposed an alternative method for selecting $w_k$ by estimating it as a parameter within the model. The form of the adjusted loss function is derived from the log-likelihood of the multivariate normal distribution based on an assumption of independence across tasks. In order to prevent the model from selecting $w_k \to 0$ for all $k$, an additional regularization term is added. Equal weighting across tasks is a special case of this formulation when $w_k = 1$ for all $k$:

$$L(\Theta) = \sum_{k=1}^{K} \Bigl[ w_k \, L_k\bigl( y_k, g(f(x, \theta), \theta_k) \bigr) - \log w_k \Bigr]$$
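A minimal PyTorch sketch of this uncertainty-based weighting (a direct transcription of the displayed loss; the log-parameterization keeping each $w_k$ positive is our choice, not specified in the paper):

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Kendall et al. (2018)-style loss: sum_k [ w_k * L_k - log(w_k) ]."""
    def __init__(self, n_tasks: int):
        super().__init__()
        # log w_k initialized to 0, i.e., w_k = 1 (the unweighted special case).
        self.log_w = nn.Parameter(torch.zeros(n_tasks))

    def forward(self, per_task_losses: torch.Tensor) -> torch.Tensor:
        # per_task_losses: shape (n_tasks,), e.g., per-item binary cross-entropy.
        w = torch.exp(self.log_w)                  # keeps each w_k > 0
        return (w * per_task_losses - self.log_w).sum()
```

The $-\log w_k$ term penalizes collapsing any $w_k$ toward zero; without it, the model could trivially minimize the loss by ignoring every task.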
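A sketch of both attention variants, transcribing the equations above into PyTorch (the tensor layout and random initialization are implementation choices):

```python
import torch
import torch.nn as nn

class MultiplicativeSelfAttention(nn.Module):
    """Luong-style self-attention over recurrent outputs h: (batch, T, m)."""
    def __init__(self, m: int, simplified: bool = False):
        super().__init__()
        # Simplified form: W is a vector (elementwise scaling) rather than m x m.
        self.W = nn.Parameter(torch.randn(m) if simplified else torch.randn(m, m))
        self.b = nn.Parameter(torch.zeros(m))
        self.v = nn.Parameter(torch.randn(m))
        self.simplified = simplified

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        proj = h * self.W if self.simplified else h @ self.W.T  # (batch, T, m)
        l = torch.tanh(proj + self.b) @ self.v    # scores l_i: (batch, T)
        a = torch.softmax(l, dim=1)               # weights over the sequence
        return (a.unsqueeze(-1) * h).sum(dim=1)   # weighted average: (batch, m)
```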

Implemented Predictive Student Model

To investigate MTL for predictive student modeling, we compared three model architectures: a single-task representation, an unweighted multi-task representation, and an uncertainty-weighted multi-task representation. Each of the architectures was fit using three attention configurations: no attention, a simplified form of attention, and traditional matrix self-attention.

Single-task baseline. The single-task model utilized post-test score as the outcome variable with an identity activation function (Figure 2).

Figure 2: Single-Task Model Architecture.

Unweighted multi-task learning. The unweighted multi-task learning (MTL) model predicted the student's accuracy on each of the post-test questions, for a total of 17 tasks (Figure 3). Each question was modeled as a binary classification problem (i.e., correct/incorrect) with a sigmoid activation function. Binary cross-entropy was used as the loss function for each task. The relative weighting for each task's loss was selected prior to model training as a hyperparameter. Each task was weighted equally in the overall model's loss function.

Figure 3: Multi-Task Model Architecture.

Uncertainty weighted multi-task learning. The uncertainty-weighted MTL model predicted the student's accuracy on each post-test question using a similar setup to the unweighted multi-task learning model (Figure 3). However, each task's relative loss weights were not preselected and were instead estimated as part of the model using the method outlined in Kendall et al. (2018).

Experiments

The single-task baseline models were formulated as a regression problem and trained to predict student post-test score. In contrast, the MTL models were trained as a joint binary classification problem across each of the 17 post-test items. The MTL predictions for each of the 17 items were summed to create a single post-test score in order to make comparisons with the single-task baseline models. For the baseline models, we developed a set of predictive models utilizing a static representation calculated as the sum of each feature in the gameplay data, in addition to a single-task neural network with an otherwise equivalent architecture to the MTL models. All models were trained and evaluated using 10-fold cross-validation over the same set of students to remove noise from sampling differences. In conducting the cross-validation, we ensured that no student's data occurred in both the training and test sets. Hyperparameter tuning was conducted for each of the models within the 10-fold cross-validation. Continuous data were standardized within each of the folds.

Static Models

A set of baseline models was selected using a static representation to assess whether the added complexity of deep learning methods was beneficial over non-neural machine learning methods. The static baseline regression models for the single-task problem were the following: mean value, Lasso, linear-kernel support vector regression (SVR), random forest (RF), gradient boosting (GB), and multi-layer perceptron (MLP). In addition, we used a multi-task majority classifier baseline for each of the post-test items. Prior work on predictive student modeling in educational games has often utilized feature representations that consist of summary statistics describing students' gameplay behaviors (e.g., the number of books read, the number of laboratory tests run), which do not capture sequential patterns in student behavior over time (Sawyer et al. 2018). Student gameplay data was aggregated by summing the one-hot encoded variables of each student action across their total gameplay and dividing by their overall gameplay duration, resulting in their relative action rate.

Sequential Models

All sequential models were composed of two stacked long short-term memory (LSTM) layers with residual connections, a layer concatenating the LSTM gameplay features and pre-test features, and a single densely connected layer (see Figures 2 and 3). The activation functions for the dense layer, single-task output, and multi-task output were the hyperbolic tangent function, the identity function, and the sigmoid function, respectively. All models used early stopping on mean squared error with a patience of 15 and a maximum of 500 epochs. Every model was hyperparameter tuned using a grid search over the number of LSTM units (32, 64, 128), the number of dense units (32, 64, 128), and the dropout rate (.33, .66). The best model was selected using the validation data, and results are reported on the 10-fold test data. A sketch of this tuning loop follows.
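A condensed sketch of the evaluation procedure under the stated settings (the grid values come from the paper; the fold handling and training helpers are hypothetical placeholders, as the experiment code is not published):

```python
from itertools import product

import numpy as np
from sklearn.model_selection import KFold

# Grid reported in the paper; everything else here is illustrative machinery.
GRID = list(product([32, 64, 128],    # LSTM units
                    [32, 64, 128],    # dense units
                    [0.33, 0.66]))    # dropout rate

def evaluate(students, build_model, train_model, score_model):
    kfold = KFold(n_splits=10, shuffle=True, random_state=0)
    test_scores = []
    for train_idx, test_idx in kfold.split(students):  # folds split by student
        best_cfg, best_val = None, -np.inf
        for lstm_units, dense_units, dropout in GRID:
            model = build_model(lstm_units, dense_units, dropout)
            # train_model is assumed to early-stop on validation MSE
            # (patience 15, max 500 epochs) and return a validation score.
            val = train_model(model, students[train_idx])
            if val > best_val:
                best_cfg, best_val = (lstm_units, dense_units, dropout), val
        model = build_model(*best_cfg)
        train_model(model, students[train_idx])
        test_scores.append(score_model(model, students[test_idx]))
    return float(np.mean(test_scores))
```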
Results

The Lasso and random forest models tied for the best performance among the static baseline models (Table 1). The single-task models outperformed the static models by a moderate margin. The no-attention unweighted MTL model and the full self-attention weighted MTL model tied for the best performance among the sequential models, with a large improvement over the single-task sequential baseline. Neither simple nor full attention had a notable effect on model performance, with the exception of the weighted MTL model, where attention provided a small improvement to model fit. All models terminated by early stopping prior to the maximum number of epochs.

The relationship between the number of tasks and the performance of the sequential models was assessed by evaluating each tuned model on 15 random samples across an increasing number of outcome variables. The average performance is displayed in Figure 4. The MTL models consistently outperformed the single-task representation, with the performance of both increasing with the number of tasks. The unweighted MTL models performed as well as or better than the uncertainty-weighted MTL models. This result was contrary to expectations and led to an additional analysis exploring the properties of the uncertainty-estimated loss weights.

Uncertainty Weighted Loss Weights Simulation

An additional investigation was conducted on the flexibility of the estimated loss weights using Kendall et al.'s (2018) uncertainty estimation. To better understand the similarity between the weighted and unweighted MTL model results, we examined the range of optimal loss weights for an individual task at varying levels of accuracy. We optimized the loss weight with respect to the uncertainty-estimated binary cross-entropy for a single classification subcomponent of the overall multi-task framework (Figure 5). Results showed that the uncertainty estimation method provides limited flexibility for reweighting across the most common ranges of accuracy. The accuracy of the weighted MTL models ranged between 55-76% for each classification task, with loss weights between .77 and 1.07. This result is expected, as within this accuracy range there is a limited range of loss weights.
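The limited range follows directly from the form of the uncertainty-weighted loss given earlier. Holding a task's loss $L_k$ fixed, the optimal weight solves a one-dimensional minimization:

$$\frac{\partial}{\partial w_k} \bigl( w_k L_k - \log w_k \bigr) = L_k - \frac{1}{w_k} = 0 \quad\Longrightarrow\quad w_k^{*} = \frac{1}{L_k}$$

Tasks with similar base rates incur similar binary cross-entropy losses, so their optimal weights $1/L_k$ fall within a narrow band, consistent with the observed .77-1.07 range.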

Table 1: Performance Comparison of Post-Test Sum across Static Baseline Models (models compared: Mean, GB, Lasso, Lin. SVR, RF, MLP, Majority Class).

Table 2: Performance Comparison of Post-Test Sum across Neural Sequential Models (models compared: Single-Task, Unweighted MTL, Weighted MTL).

Figure 4: Sequential Model Performance by Number of Tasks.

Discussion

Evaluation results demonstrated that the multi-task learning (MTL) formulation of predictive student modeling yielded a 24% improvement in R² over the single-task neural network model using a sequential representation and a 38% improvement over models employing a static representation. Results showed that models leveraging the sequential nature of student interaction data outperformed those that used only a static representation.

Figure 5: Optimal Uncertainty Estimated Loss Weight for a Single Task.

Within the MTL framework, we observed an increase in model performance as the number of tasks increased across all models, with MTL models consistently outperforming the single-task model. Previous work on predictive student modeling in adaptive learning environments has typically reported R² values ranging from 0.09 to 0.41, depending on the dataset and the chosen models (Mao, Lin, & Chi 2018; Bakker & Heskes 2003; Zhang et al. 2017). These results are in line with the model accuracies observed for the static baseline models utilized in this work. By leveraging a multi-task stacked LSTM framework, we observe sizable improvements in predictive accuracy.

In addition to the MTL framework, we used a self-attention mechanism as a further weighting scheme for modeling students' sequential gameplay data. We did not see substantial improvements from this self-attention mechanism. A potential explanation is that each of the 17 tasks in the predictive modeling problem is influenced by different parts of the input sequence. Students are likely to gain knowledge throughout their interaction with the CRYSTAL ISLAND game-based learning environment. Therefore, predictions about the collection of tasks, each corresponding to a single item from the content knowledge post-test, may rely on the entire gameplay sequence. We constructed the attention mechanism as part of the shared-weight portion of the model architecture, as an alternative to using attention for each unique task; this was due to insufficient data and the computational expense that task-specific attention would require. Because of this, attention may be forced toward equal weighting across the game sequence because the tasks as a whole demand it.

It is notable that we did not see a benefit of uncertainty weighting estimation over unweighted MTL models. Simulations on the uncertainty-weighted loss weights shed light on this finding by demonstrating that the range of optimal loss weights is constrained when each of the tasks has a similar base rate, which is true in our dataset. These results suggest that when tasks in a multi-task framework possess similar base rates, the simpler method of equal task weighting is as effective as more complex uncertainty-weighted methods.

Overall, results show that multi-task stacked LSTMs are an effective framework for predictive student modeling in educational games, and therefore they show significant promise for enabling run-time support functionalities to enhance student learning in adaptive learning environments. Specifically, they enable personalized support, such as feedback and hints, that proactively intervenes when a learner is trending toward a negative outcome. This support can also be targeted toward specific concepts and skills addressed by individual test items captured in the multi-task model. MTL is broadly applicable to predictive student modeling tasks, so long as they feature assessments with multiple questions, as is common in educational settings. Furthermore, MTL is likely to be most effective as the communality of test items decreases. Finally, predictive student models can also serve as a type of formative assessment, providing an "early warning system" for teachers that enables re-allocation of attention toward those students who need the most help.

Conclusion and Future Work

Predictive student modeling is critical for driving personalized feedback and support in adaptive learning environments. However, devising accurate models of student knowledge is challenging because student data for a particular learning environment may be sparse, and it is often inherently noisy. In this paper, we have introduced a multi-task stacked LSTM-based predictive student modeling framework for modeling student knowledge in educational games.
Multi-task learning creates shared and task-specific representations of student learning data that improve model regularization and allow for increased flexibility in modeling different tasks.

In future work, it will be important to explore how different loss functions can be used to combine the loss across multiple correlated binary variables without requiring the assumption of independence across tasks. It will also be important to investigate the performance of multi-task stacked LSTMs for predictive student modeling in different genres of learning environments to study their generalizability. Additionally, further research is needed on developing interpretable predictions for multi-task predictive student models to allow teachers to incorporate model feedback into classroom settings. Finally, it will be important to investigate the incorporation of the multi-task stacked LSTM-based predictive modeling framework into adaptive learning environments to explore how it can most effectively drive adaptive support and create the most effective learning experiences for students.

Acknowledgements

This material is based upon work supported by the National Science Foundation (NSF) under Grant DRL-1661202 and the Social Sciences and Humanities Research Council of Canada (SSHRC 895-2011-1006). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF.
