By
Selena J. Layden, Ph.D.
Annemarie L. Horn, Ph.D.
Kera E. Hayden, B.S.
Old Dominion University
This issue of NASET’s Autism Spectrum Disorder series comes from the Spring 2022 issue of NASET’s peer-reviewed journal, JAASEP.
Abstract
Video self-monitoring, a form of self-coaching, is a method of professional development for teachers. Reinforcement is an evidence-based practice for students with autism spectrum disorder (ASD), and it is applicable in a variety of educational settings. Using a multiple baseline across participants design, we evaluated the effects of video self-monitoring on teacher implementation of reinforcement. Four certified special education teachers participated in this research, all of whom taught students with ASD. In our investigation, special education teacher participants used video self-monitoring in each of their four self-contained classrooms, and implementation fidelity of reinforcement was measured. Results were mixed, showing video self-monitoring increased teacher fidelity of reinforcement to some extent, yet high fidelity was not achieved by all four participants. Nonetheless, this study extends current literature on video self-monitoring as used by teachers. Based on our findings, we offer implications for research and practice.
Keywords: video self-monitoring, self-monitoring, autism spectrum disorder, evidence-based practices, teacher performance
Using Video Self-Monitoring to Improve Teacher Implementation of Evidence-Based Practices for Students with Autism Spectrum Disorder
Over the past two decades, research for individuals with autism spectrum disorder (ASD) has had a strong emphasis on the identification and application of evidence-based practices (EBP; National Autism Center, 2009; National Autism Center, 2015; National Research Council [NRC], 2001; Wong et al., 2014). Research suggests selecting and implementing individual EBPs with children with ASD yields positive results in addressing their needs (Boyd et al., 2014). Yet, the identification of EBPs for those with ASD is not enough. If students with ASD are to benefit from EBPs, teachers must implement them with fidelity (Simonsen et al., 2013). Fidelity, specifically implementation fidelity, is the ability to implement a practice consistently and accurately while including the crucial features of that practice identified in the research (Hager, 2018). In practice, EBPs must be implemented with fidelity to achieve the efficacy observed in research settings (Cook & Odom, 2013; NRC, 2001; Oliver et al., 2015). Without appropriate attention to fidelity of implementation, students may not receive the features of an EBP that are critical in order for it to be effective. To ensure educators meet the needs of students with ASD, it is crucial they have the firsthand knowledge and skills required to implement EBPs with fidelity in the classroom (Marder & deBettencourt, 2015).
Stansberry-Brusnahan and Collet-Klingenberg (2010) indicated teachers may not be receiving sufficient training in EBPs for students with ASD to successfully replicate the practices in their classrooms. Correspondingly, many educators have reported they do not feel adequately prepared or trained to implement EBPs when teaching students with ASD (Hendricks, 2011). The responsibilities and expectations for special education teachers are immense and expanding. Special educators are required to have the knowledge and skills to deliver a variety of content while working with an increasingly diverse group of students (Smith et al., 2010), and students with ASD display unique learning profiles (Hendricks, 2011; Swanson, 2012). In addition to these challenges, it is not uncommon for teachers to adapt practices, use only portions of a practice, or even abandon the practice altogether (Oliver et al., 2015). Making such changes to an EBP can affect fidelity of implementation which, in turn, can affect student outcomes.
Reinforcement is one example of an EBP that has been found to be effective for students with ASD (Wong et al., 2014). Reinforcement is defined by its outcomes. In other words, when contingent presentation or removal of a stimulus, such as an item, event, or activity, increases or maintains the likelihood of a behavior occurring again in the future, reinforcement has occurred (Cooper et al., 2020; Wong et al., 2014). When that stimulus is presented or given to a student, this is positive reinforcement. Reinforcement can be part of or paired with many other EBPs, such as differential reinforcement, extinction, functional communication training, modeling, prompting, self-management, and task analysis (Wong et al., 2014). Cooper et al. (2020) stated reinforcement is “the most important principle of behavior and key element of most behavior change programs” (p. 36). As an EBP, reinforcement also has a large body of research to support its use with students with ASD, including those from 0-22 years of age; the review by Wong et al. (2014) drew on over 40 studies in determining reinforcement to be an EBP for those with ASD. Thus, reinforcement is not only an effective strategy, but it is versatile in that it can be used across ages and settings.
Historically, in order to learn and improve implementation of strategies, in-service teachers have relied on their school districts to provide professional development (PD; Saccomano, 2013), and many school districts offer PD training opportunities to their teachers on a variety of topics. However, didactic training, or a “train and hope” methodology, while popular and probably time-efficient for PD in schools, is rarely effective in changing behavior, especially in terms of generalizing skills beyond the training setting to classroom application (Oliver et al., 2015; Rispoli et al., 2017; Stokes & Baer, 1977). Morin et al. (2019) aptly stated, “It is critical to provide professional development that not only increases teacher knowledge but also supports the transition from knowledge to instructional practice with high fidelity of implementation” (p. 4). School leaders must identify effective and efficient means to improve the efficacy of teachers working with students with ASD in implementing EBPs in the classroom (Simonsen et al., 2013).
While more effective means of PD have been found (e.g., performance feedback, coaching), these are often challenging for school districts to implement with any regularity or duration because of their associated costs (Simonsen et al., 2013). Given the limited resources of money, time, and specialized expertise available in school districts, there is a critical need for simple and easy-to-implement strategies for teachers in order to increase the application of EBPs in the classroom (Mouzakitis et al., 2015; Simonsen et al., 2013).
Self-Coaching
From the adult learning perspective, self-coaching is a self-directed model of learning based on such theories as andragogy, experiential learning, and reflective practice (Ives, 2008). Self-coaching involves learners in the process of their own assessment and allows them to foster a deeper awareness of their performance and current level of knowledge, while simultaneously making them aware of skills that may need improvement (Harrison, 2010). Based on their meta-analysis, Dunst et al. (2010) found self-assessment and reflection to be the two most effective strategies for improving adult learning. Sharpe et al. (1996) stated teachers “need to primarily rely upon accurate self-evaluation skills to improve their use of effective instructional practices” (p. 297). Further, Mouzakitis et al. (2015) suggested teachers may be the most effective change agent when it comes to their own performance. Considering the limitations of funding and time commonly seen in the public schools, self-coaching may be a flexible, cost-effective strategy to help teachers improve their implementation techniques and, ultimately, increase student outcomes. While multiple methods and strategies could be used in self-coaching, self-monitoring is an EBP that can aid in changing teacher behavior (Simonsen et al., 2013).
Self-Monitoring
Self-monitoring includes two components: self-observation and self-recording of the behavior (Allinder et al., 2000; Bishop et al., 2015). After completing these two tasks, data analysis can allow the teacher to make decisions about their own behavior to improve their performance (Rispoli et al., 2017). Self-monitoring has been shown to be effective for teachers in improving their implementation of instructional strategies, such as increasing verbal praise and behavior-specific praise, as well as improving embedded learning trials (Bishop et al., 2015; Cook et al., 2017; Rispoli et al., 2017). Self-monitoring may provide many additional benefits to teachers, including improvement in performance, increased procedural integrity, and the ability to self-evaluate, as well as reductions in inaccurate perceptions of performance and teacher resistance (Plavnick et al., 2010; Sharpe et al., 1996). Self-monitoring has been applied and shown to be effective with teacher behaviors in a few instances, such as implementing curriculum-based measurement, improving praise statements, improving the implementation of behavior intervention plans, increasing the embedment of learning trials, and increasing the number of opportunities for students to respond, though many of these studies package self-monitoring with other interventions, such as performance feedback (Allinder et al., 2000; Bishop et al., 2015; Kalis et al., 2007; Mouzakitis et al., 2015; Oliver et al., 2015). Additionally, self-monitoring has been socially validated as a strategy for both in-service and pre-service teachers to use in the classroom (Hager, 2018; Kalis et al., 2007; Saccomano, 2013; Tripp & Rich, 2012).
Self-monitoring can take various forms, including checklists, audio review, and video self-monitoring. Each of these forms has been studied to some extent. However, with checklists it can be challenging to ensure data from the teachers are accurate unless direct observation occurs during the self-monitoring process, and observations from others can impact behavior regardless of intervention. Delays in completing the self-monitoring checklist can also affect accuracy because teachers have to remember their behavior. Audio self-monitoring assists teachers in remembering their behavior to complete the self-monitoring, but audio recordings may not capture all of the relevant content. Video self-monitoring is an evidence-based strategy that can allow teachers to reflect on their own practice and improve the fidelity of their implementation (Hager, 2018; Morin et al., 2019; Lylo & Lee, 2013). It is useful because of its flexibility and because it addresses some of the disadvantages of checklists and audio review.
Video self-monitoring can align with educators’ needs, provide for professional development in authentic settings, and allow teachers to review their video multiple times in order to analyze their behavior (Morin et al., 2019). Importantly, the use of video to reflect can result in improved performance (Tripp & Rich, 2012). Watching videos of one’s own performance allows for noticing details that may not have been apparent in real time or when reflecting from memory (Tripp & Rich, 2012), such as with self-monitoring checklists. Video self-monitoring has been shown to be effective with teachers. Specifically, Kalis et al. (2007) found a very strong effect size (0.9230) when video self-monitoring was used for the purpose of increasing behavior-specific praise provided by teachers. Video self-monitoring enables teachers to fully attend to the instruction without having to take data in real time, and provides the opportunity to view the video multiple times (Hager, 2018). Consequently, teachers can see and reflect on the target teaching behavior and shape future implementation of that behavior. Video self-monitoring, as a form of self-coaching, is a potentially low-cost, yet highly effective and efficient method of professional development resulting in minimal classroom disruptions for teachers to gain information about and improve their implementation of EBPs (Hager, 2018; Kalis et al., 2007; Oliver et al., 2015; Sharpe et al., 1996; Simonsen et al., 2013; Tripp & Rich, 2012).
Despite the positive reports of self-monitoring, and specifically video self-monitoring, the results have so far been limited by three factors. The first limitation is a paucity of research focused specifically on improving teacher implementation of identified EBPs using self-monitoring procedures, particularly research that has focused on improving teacher-implemented EBPs to support students with ASD. The second limitation is that much of the existing research has paired self-monitoring procedures with other interventions, such as performance feedback; thus, the effects of self-monitoring are not measured in isolation. The third limitation is that verbal praise has frequently been the measured dependent variable when implementing teacher self-monitoring. Therefore, it is worth exploring other skills using this strategy, including other types of reinforcement, such as the delivery of tangible reinforcers. Beyond these limitations, given the improvements in availability and ease of use of technology, some of the previous barriers to implementation in the classroom have been removed, meaning this strategy may be even more appropriate to use in classrooms than ever before.
Our investigation addressed the following research questions:
- What is the functional relationship between video self-monitoring and teacher performance of the implementation of a task analysis for delivery of tangible reinforcement (an identified EBP for students with ASD)?
- How do teacher self-ratings on a provided task analysis compare to those of an outside observer when using video self-monitoring?
- Do teachers perceive self-monitoring as a socially valid, viable option for improving their own practice?
Method
Setting
The study occurred in four self-contained, special education classrooms in a mid-sized, suburban school district that contained ten schools with approximately 8,000 students. Each participant was employed and taught in a different school within this school district. In the four participating classrooms, only students receiving special education services attended the class. All four classrooms included at least 50% (range 50% to 100%) of students who had an educational label of autism. The institutional review board (IRB) at the researchers’ university approved this study, consent for participation was obtained from all participants prior to initiating the study, and pseudonyms were assigned to maintain anonymity.
Participants
In order to recruit participants for this study, an e-mail was sent to two special education directors known by the first author explaining the study, its purpose, and the requirements for participating. The special education directors sent out the information to their special education teachers, and teachers were asked to e-mail the first author if interested in participating in the study. Eight teachers responded initially. The first author met with the teachers to explain the expectations of the study, at which point three declined to participate due to time constraints and a fourth declined, stating they did not want to appear on video. The remaining four participants were all female teachers who taught in a self-contained, special education classroom. All four teachers were fully licensed in special education by the state department of education. Please see Table 1 for participant characteristics.
Table 1
Participant Characteristics
Teacher | Education | Experience | Grade Level Taught
Ms. Allen | Master’s in Special Education | 9 years | 7th grade
Ms. Baxter | Bachelor’s in Special Education | 5 years | Pre-School
Ms. Collins | Master’s in Special Education | 16 years | 9-12th grade
Ms. Davidson | Bachelor’s in Special Education | 3 years | K-4th grade
Research Design
A multiple baseline research design across participants was used to evaluate the effects of video self-monitoring on special education teachers’ implementation of reinforcement (Ledford & Gast, 2018). A multiple baseline design allows for demonstration of a functional relation as it provides for experimental evaluation by controlling for extraneous variables through the sequential introduction of the independent variable to different participants at different times (Ledford & Gast, 2018). The conditions of the design were (a) baseline, during which data were collected on the dependent variable prior to any intervention, and (b) intervention, during which participants implemented the intervention of video self-monitoring. Maintenance was an intended third condition for all participants, but due to unforeseen statewide school closings for the last three months of the school year, maintenance data were obtained for only one participant.
Independent Variable
The independent variable was video self-monitoring with the use of a task analysis for providing tangible reinforcement (See Table 2 for task analysis steps). Implementation of the independent variable consisted of the teacher reviewing the provided task analysis prior to implementing tangible reinforcement during instruction, videotaping their use of reinforcement during instruction, reviewing their own video, and scoring themselves on the provided task analysis. Scoring consisted of the teacher indicating yes, they did the step as described, or no, they did not do the step as described.
Video Procedures
Teachers in this study utilized individual school district-issued iPad® devices to record sessions. Three of the teachers used tripods, and one teacher, Ms. Davidson, had another staff member hold the iPad® to record her sessions. First, using the camera app on the iPad®, teachers video recorded themselves implementing reinforcement in the classroom. Second, the teacher uploaded the video to a restricted-access file in a secure, cloud-based software program, Box, which allowed only the teacher and researchers to view the videos. Teachers were required to upload their videos, to be viewed by the researchers, after each day they completed a recording. Third, within one day of video recording, teachers viewed the uploaded recording and simultaneously evaluated their performance using the task analysis form provided by the researchers. Task analysis forms were either printed and completed using paper and pencil or completed electronically in Microsoft Word. Paper-and-pencil data sheets were scanned by the participants, and all data forms were uploaded to Box as well. Observers viewed the videos on their laptop computers.
Dependent Variable
The dependent variable was the percentage of steps of a task analysis for tangible reinforcement completed correctly, as recorded by the research team. Reinforcement was chosen because of its versatility across ages and settings as well as its importance and effectiveness (Wong et al., 2014). As seen in Table 2, the task analysis included 10 steps. The task analysis was created by the researchers and tested for validity using two methods: executing the task ourselves and obtaining expert input (Cooper et al., 2020). Expert input was obtained by having three professionals who were educators as well as behavior analysts review the task analysis and provide feedback. Feedback was incorporated, and the task analysis was sent back to each reviewer, at which point all three agreed it was an appropriate and complete task analysis. Each teacher also recorded their own data. While these data were not used for making decisions for the study, they were collected and analyzed.
Table 2
Task Analysis for Reinforcement
Step | Description
1 | Gain student’s attention
2 | State target behavior to student in manner he/she understands
3 | Have at least 3 potential reinforcers available
4 | Ask student to choose what he/she would like to earn
5 | Set chosen reinforcer in view of student but out of reach
6 | Set non-chosen reinforcer options out of sight of student
7 | Ensure student is attending
8 | Provide SD for target behavior
9 | Watch student for target behavior performance
10 | If student performs target behavior, immediately provide agreed upon reinforcer to student
Data Collection
Following the same evaluative protocol as teachers, two independent observers coded performance data across all conditions using the provided task analysis (See Table 2). Sessions were coded by the primary data collector within 24 hours of a session being uploaded. Sessions lasted between 3 and 12 minutes, terminating when the first occurrence of positive reinforcement in the form of a tangible reinforcer was delivered to the student. Participants completed approximately 3-5 sessions per week. While participants collected data on the task analysis to self-evaluate their performance, decisions for the study were based on the researchers’ data collection (See Figure 1). Each teacher calculated their own performance by marking “yes” or “no” for each step. Teachers then added the number of steps they marked as “yes,” divided by 10 (the total number of steps), and multiplied by 100 to obtain a percentage for their performance. Teachers were provided with verbal instruction on how to calculate their own percentage, and each completed task analysis data sheet provided visual reminders for the steps in this process. Prior to implementation of the intervention, participants were encouraged to ask for clarification on anything in the task analysis they did not understand.
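To make the scoring arithmetic above concrete, the following is a minimal sketch in Python; it is illustrative only, and the function name and example data are hypothetical rather than part of the study materials.

```python
# Minimal sketch (hypothetical, not from the study): computing a fidelity
# percentage from a completed 10-step task analysis in which each step is
# marked "yes" (True) or "no" (False).
TOTAL_STEPS = 10

def fidelity_percentage(step_marks):
    """Return the percent of task-analysis steps marked 'yes': (yes count / 10) * 100."""
    assert len(step_marks) == TOTAL_STEPS, "expected one mark per step"
    return sum(step_marks) / TOTAL_STEPS * 100

# Example session: steps 3, 7, and 9 marked "no" -> 7 of 10 steps = 70.0%
session_marks = [True, True, False, True, True, True, False, True, False, True]
print(fidelity_percentage(session_marks))  # 70.0
```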
Baseline
During baseline procedures, participants were asked to video record themselves completing their typical classroom instruction, during which they would deliver positive reinforcement, an EBP that has been shown to be effective with students with ASD (Wong et al., 2014). They delivered tangible reinforcers based on what they knew about reinforcement at that point. Participants were given no additional directions and did not have access to the task analysis at this point. Participants reported they were familiar with positive reinforcement prior to initiating baseline. Because participants were teaching at different grade levels, instructional activities were not prescribed but rather were representative of a typical instruction period for that teacher. Recorded baseline sessions were uploaded by the teachers to Box (See Figure 1) after each day recording occurred. The baseline condition continued until dependent measures were stable or presented a decelerating trend (Horner et al., 2005).
Intervention
During intervention, a task analysis was provided to participants for the provision of positive tangible reinforcement (See Table 2). Introduction of the independent variable of video self-monitoring using the reinforcement task analysis was staggered across participants. At the onset of the intervention condition, participants followed a specific protocol. First, they were instructed to review the task analysis immediately prior to each session. Next, teachers were instructed to video record a session of their typical instruction with a student with ASD in which they could contingently provide a tangible reinforcer after the student engaged in a teacher-determined behavior. Because the intervention for the teacher was done during typical instruction, the student behavior was not defined by the researchers. Finally, once the session was complete, participants viewed their video by the end of the day and simultaneously evaluated their performance using the same task analysis they reviewed prior to implementation. Participants submitted the video of the recorded session and their completed task analysis to the research team through Box after each video session. Fidelity of implementation was addressed by the first author asking each participant if they followed each step of the protocol for implementation after each session. All participants reported they did implement the intervention as directed.
The recorded sessions were used by the research team for data collection purposes only. No training on reinforcement occurred for participants, and no additional interventions or feedback from the researchers were included in this study. Because this study aimed at determining whether video self-monitoring alone could improve teacher practice, a preset criterion was not an ideal measure. Instead, stability over five consecutive sessions, as measured by independent observer data, determined criterion for each participant. The primary data coder for the research team scored videos within 24 hours of being uploaded in order to facilitate decisions about the intervention by the research team (e.g., when to move from baseline to intervention or when the criterion of five stable data points was met).
As mentioned above, a follow-up probe was planned for each participant at six weeks post-intervention. However, due to unexpected school closings throughout the state due to COVID-19 for the final three months of school, we were only able to obtain maintenance data for one teacher.
All four participants were asked to complete a social validity questionnaire at the end of the study. Questionnaires consisted of five questions. Two questions used a 5-point Likert scale, with 1 being the lowest and 5 being the highest. The two questions were: (a) how well did they like using video self-monitoring? and (b) how effective did they find video self-monitoring in improving their practice? Three additional narrative questions were asked. Participants were asked if they found doing the video self-monitoring worth their time. Each participant was also asked if they would like to learn another instructional practice using this same method. Finally, each participant was asked if they would recommend others use video self-monitoring to learn how to implement a new instructional practice.
Interobserver Agreement
Sessions across all conditions were recorded using the iPad® video recording app and uploaded to a secure platform, Box, which could only be accessed by the teachers and the researchers. The primary observer, the third author, was a graduate student in a speech-language pathology program. The secondary observer, the first author, was university faculty in the special education department. Both observers completed their data collection separately, using the same task analysis used by participants to code the presence or absence of each step of the task analysis. Interobserver agreement (IOA) was calculated across teacher performance data for a cumulative total of 54% of sessions, ensuring reliability across all conditions (Horner et al., 2005). Sessions were chosen to ensure equitable distribution across conditions but were randomly picked within the conditions. IOA was calculated across 53.4% of baseline sessions (range 50-61.5%) and 55.6% of intervention sessions (range 50-54.5%). IOA was calculated by dividing the total number of agreements by the sum of agreements and disagreements, then multiplying by 100 (Cooper et al., 2020; Ledford & Gast, 2018). In all, IOA across performance data was 98.9% (range: 97.8-100%). Individual participant reliability data were as follows. IOA for Participant 1, Ms. Allen, was 100%; for Participant 2, Ms. Baxter, it was 98.6% (range: 90-100%); for Participant 3, Ms. Collins, IOA was 100%; and for Participant 4, Ms. Davidson, it was 97.8% (range: 90-100%).
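As an illustration of the agreement formula described above, the following minimal Python sketch (with hypothetical observer data, not the study’s actual records) computes point-by-point IOA for a single session coded by two observers.

```python
# Minimal sketch (hypothetical data): point-by-point interobserver agreement,
# computed as agreements / (agreements + disagreements) * 100.
def interobserver_agreement(primary_marks, secondary_marks):
    """Return the percent agreement between two observers' step-by-step codings."""
    assert len(primary_marks) == len(secondary_marks), "codings must align step by step"
    agreements = sum(a == b for a, b in zip(primary_marks, secondary_marks))
    return agreements / len(primary_marks) * 100

primary = [True, True, False, True, True, True, False, True, False, True]
secondary = [True, True, False, True, True, True, False, True, True, True]
print(interobserver_agreement(primary, secondary))  # 90.0 (9 of 10 steps agree)
```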
Results
A visual analysis of the data was performed to determine evidence of a functional relationship between the independent variable (video self-monitoring) and the dependent variable (fidelity of reinforcement), as measured by the task analysis. Results showed all four participants increased their level of performance, but the amount of growth varied across participants. Two participants’ mean baseline levels were below 10%, and both of these participants increased their levels of performance to a stable 40%. In contrast, the other two participants’ mean baseline levels were 23% and 30%, and their performance increased to mean levels of 74% and 90%, respectively. Figure 1 shows the percentage of steps of the task analysis completed correctly across baseline and intervention conditions. As depicted in Figure 1, a functional relation was demonstrated, as all four participants showed increased performance in completing the steps of the task analysis as well as stability of performance as a result of the introduction of the independent variable. However, the increased levels of performance reached high levels of fidelity for only two of the four participants. The x-axis represents the instructional sessions and the y-axis represents the percentage of steps on the reinforcement task analysis completed correctly by each participant. Self-reported data during intervention for all four participants were high, with means ranging between 66% and 100%. As the study sought to determine whether an increase in performance levels would occur as a result of the video self-monitoring intervention, a functional relation was found through the visual analysis, given that all four participants increased their levels of performance with the implementation of the independent variable. A detailed description of each participant’s data, including their own self-reported data, follows.
Figure 1: Results of Implementing Video Self-Monitoring Across Participants
Participant One: Ms. Allen
Ms. Allen’s mean performance level across the baseline condition was 6% with low variability (range: 0-10%). The immediacy of effect was 37% and the mean performance level across intervention sessions remained stable with no variability at 40%. Further, there were no overlapping data points. As depicted in Figure 1, Ms. Allen’s self-reported scoring during the intervention condition differed from data collected by observers. The self-reported median performance level across the intervention condition was 90% (range: 80-100%); thus, there was a discrepancy of 53% between performance as coded by the researchers and what was self-reported by the participant. Ms. Allen did complete the six-week follow-up and scored 40% during this probe.
Participant Two: Ms. Baxter
Ms. Baxter’s mean performance level across the baseline condition was 23% (range: 10-40%). The immediacy of effect between baseline and intervention conditions was 50%. The mean performance level across treatment conditions was 74% with low variability (range: 70-90%), and there were no overlapping data points. Ms. Baxter met criterion with five stable data points of 70%. Interestingly, Ms. Baxter’s self-reported performance was 100% with no variability across the entire intervention phase (See Figure 1), revealing a discrepancy of 26% between self-report and coded performance data.
Participant Three: Ms. Collins
The mean performance level across the baseline condition for Ms. Collins was 9% (range: 0-30%). The immediacy of effect was 33% and the mean performance level across intervention sessions remained stable at 40% with no variability or overlapping data. In contrast, Ms. Collins’ self-reported data revealed a mean performance level of 66% (range: 50-70%), which is a discrepancy of 26% compared to coded performance data.
Participant Four: Ms. Davidson
Ms. Davidson’s mean performance level across baseline sessions was 30%, and baseline data revealed high variability (range: 10-80%). Nonetheless, the immediacy of effect between baseline and treatment conditions did demonstrate a change in level of 57% and low variability was observed (range: 80-90%) after the introduction of the independent variable. The mean performance level during the treatment phase was 88% with low variability (range: 80-90%) and Ms. Davidson met criterion of five stable data points of 90%. The percentage of non-overlapping data (PND) points was 83.33%. The mean performance level of self-reported scores for Ms. Davidson was 97% (range: 90-100%), revealing a difference of 9% between self-reported data and coded performance.
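For readers less familiar with the percentage of non-overlapping data (PND) metric reported above, the following minimal Python sketch shows how PND can be computed for a behavior targeted to increase; the values are hypothetical and are not Ms. Davidson’s actual session data.

```python
# Minimal sketch (hypothetical values): percentage of non-overlapping data (PND),
# i.e., the share of intervention points exceeding the highest baseline point.
def pnd(baseline, intervention):
    """Return the percent of intervention data points above the highest baseline point."""
    ceiling = max(baseline)
    non_overlapping = sum(point > ceiling for point in intervention)
    return non_overlapping / len(intervention) * 100

baseline_scores = [10, 20, 40, 30, 70, 30]      # variable baseline; maximum = 70
intervention_scores = [80, 90, 90, 70, 90, 90]  # 5 of 6 points exceed 70
print(round(pnd(baseline_scores, intervention_scores), 2))  # 83.33
```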
Social Validity
All four participants completed the social validity questionnaire at the end of the study. The first question, how well participants liked using video self-monitoring, yielded a median score of three across participants (individual scores were 3, 3, 4, and 3, respectively). Question two, how effective participants found video self-monitoring in improving their practice, yielded a median score of four (individual scores were 4, 4, 4, and 3, respectively). For the narrative questions, all four participants indicated yes, they found doing video self-monitoring worth their time. Ms. Allen expanded, saying she “found it helpful to look at [her] techniques and skills and to evaluate how [she] was doing things.” Ms. Baxter did share she thought it could have “been improved if [she] got a chance to monitor things [she] thought [she] really needed help with.” All four participants also reported they would like to learn another instructional practice using this same method. Finally, each participant indicated yes, they would recommend others use video self-monitoring to learn how to implement a new instructional practice.
Discussion
The purpose of this study was to measure the effects of video self-monitoring on the level of implementation of reinforcement, an EBP shown effective with students with ASD (Wong et al., 2014). Results from this study showed all four participants increased their implementation of reinforcement to some extent when provided with a task analysis and directed to video record, watch, and score their own performance. While a functional relation was observed with all four participants, fidelity of implementation did not reach 100% for any of them. This study adds to the limited number of published studies on teacher use of video self-monitoring and extends the focus on EBPs for teachers who support students with ASD. Additionally, this study isolated the independent variable of video self-monitoring, which procedurally incorporated a task analysis, without additional practices, extending the literature base specific to measuring the effects of video self-monitoring.
Two participants, Ms. Allen and Ms. Collins, implemented reinforcement with the lowest levels of fidelity during baseline conditions. Correspondingly, although they increased their levels of performance to 40% during treatment conditions, they immediately stabilized at that level. While a performance level of 40% does not reach the levels of fidelity we would like to see, this may indicate further interventions are needed to improve their skills at implementing the steps of reinforcement defined by the task analysis. Additionally, the biggest discrepancy between self-reports and independent observer data was evidenced in these two participants. In contrast, Ms. Baxter and Ms. Davidson, both of whom showed higher levels of performance during baseline, also showed the most growth during the intervention condition. Further, although self-reported performance did not correspond with observer data, they were less discrepant in comparison to Ms. Allen and Ms. Collins.
There was notable room for improvement in the implementation of reinforcement for all participants; yet, as a result of the video self-monitoring intervention, all four participants demonstrated some improvement and maintained low variability during the treatment condition. It is plausible the two participants who displayed the higher percentages during intervention, Ms. Baxter and Ms. Davidson, already had more of the behaviors of the task analysis in their behavioral repertoire, meaning they already knew how to perform the behavior but may not have been doing so, or may not have been doing so consistently. In the social validity questionnaire, Ms. Baxter did share she thought it could have “been improved if [she] got a chance to monitor things [she] thought [she] really needed help with.” This is an interesting point that may relate to the discrepancy between their perceived implementation of the practice and the scoring from the researchers.
Alternatively, the two participants who only reached 40% during intervention, Ms. Allen and Ms. Collins, may not have had as many of the behaviors associated with the task analysis in their repertoire, meaning they did not know how to perform the skills and may require additional training on the steps they did not complete. For these participants, video self-monitoring was not enough to improve fidelity to high levels, and further intervention would be needed. Yet this finding is potentially important because it could assist administrators in determining where to target training and coaching opportunities. Another factor that may have contributed to the lower levels of implementation could be a difference between elementary teachers (Ms. Baxter and Ms. Davidson) and secondary teachers (Ms. Allen and Ms. Collins). However, this is a small sample, and further research would be needed to determine which factors, if any, were impacting the results.
Not surprisingly, and in line with previous studies (Rispoli et al., 2017), participants scored their own performance higher than the researchers scored it. It is possible, then, that a teacher may not have known how to perform a step correctly despite believing they could do so. The findings of this study do not explain why there is a discrepancy between the researcher scores and the teachers’ scores, but future studies may look at increasing the reliability of participants’ observations relative to those of an outside observer. However, despite the discrepancy between the researchers’ and participants’ scoring, increases in performance were still observed for all four participants. Follow-up studies may consider comparing a video self-monitoring condition with video self-monitoring combined with a training model to improve teacher accuracy.
All participants reported liking using video self-monitoring and felt it was effective in improving their practice of reinforcement. All four participants also reported they would recommend it to other teachers as a way to learn new instructional practices, supporting the conclusion that video self-monitoring is a socially acceptable intervention for teachers to learn or improve upon an instructional practice.
Limitations
The first limitation of this study was the use of the task analysis as the scoring mechanism. During baseline, participants were not given the task analysis so as not to influence their baseline behavior. During intervention, participants were asked to video record and score their own performance and so were given the task analysis as part of the procedures. Because of this, it is impossible to separate the effect of having the task analysis from the effect of the video self-monitoring itself. A second limitation relates to the fact that teachers were aware prior to baseline that the skill being targeted was reinforcement. It is unknown whether this affected the participants’ baseline performance. A third limitation is that this study was conducted with teachers who teach in self-contained settings, which may limit the generalizability to other teachers. Additionally, because of their willingness to volunteer to be in the study, the teachers included in this study may be more motivated to improve their skills in implementing EBPs than teachers who would not agree to participate in such a study. The lack of student data is another limitation; the study did not control for the students with whom teachers implemented reinforcement. A final limitation was the lack of follow-up data. While the original plan for this study was to include a six-week post-intervention follow-up probe, statewide school closings prevented obtaining three of the four follow-up probes. Because of the inability to obtain this information, conclusions about lasting effects cannot be made.
Implications for Practice and Future Research
Three factors of this study are important to highlight. First, this study measured the effectiveness of video self-monitoring without it being part of a treatment package and without the use of additional interventions, such as performance feedback. Second, this study explored an EBP that goes beyond providing verbal praise. Finally, video self-monitoring was implemented relatively easily, given the technology was already available to the teachers and the teachers did not require any training on how to use it. One consideration about video self-monitoring is that if teachers already have many of the required skills for a particular practice prior to intervention but are inconsistent in their implementation, video self-monitoring may be very effective. However, future research should focus on implementing video self-monitoring with other EBPs beyond reinforcement. Additionally, future research should look to other populations of teachers, such as special education teachers who teach in general education settings or general education teachers. Looking at systematically pairing video self-monitoring with other interventions would be beneficial to determine how to continue to improve the fidelity of implementation of EBPs while still focusing on interventions that are low-cost in terms of money, time, and expertise.
Conclusion
Kalis et al. (2007) reported “self-monitoring is a nonintrusive intervention, easy to implement, allows for immediate feedback, and can be effective in changing behavior” (p. 26). The benefit of not having to rely on outside expertise reduces the time it takes to begin implementation as well as potentially improving fidelity, or at least a teacher’s willingness to continue implementing the intervention (Kalis et al., 2007). The purpose of this study was to isolate video self-monitoring as the independent variable, and while high levels of fidelity were not seen across participants, change in behavior did occur; thus, future research is warranted to determine what factors may impact how effective video self-monitoring is without other, paired interventions.
References
Allinder, R. M., Bolling, R. M., Oats, R. G., & Gagnon, W. A. (2000). Effects of teacher self-monitoring on implementation of curriculum-based measurement and mathematics computation achievement of students with disabilities. Remedial and Special Education, 21(4), 219-226. https://doi.org/10.1177/074193250002100403
Bishop, C. D., Snyder, P. A., & Crow, R. E. (2015). Impact of video self-monitoring with graduated training on implementation of embedded instructional learning trials. Topics in Early Childhood Special Education, 35(3), 170-182. https://doi.org/10.1177/0271121415594797
Boyd, B. A., Hume, K., McBee, M. T., Alessandri, M., Gutierrez, A., Johnson, L., Sperry, L., & Odom, S. L. (2014). Comparative efficacy of LEAP, TEACCH and non-model specific special education programs for preschoolers with autism spectrum disorders. Journal of Autism and Developmental Disorders, 44(2), 366-380. https://doi.org/10.1007/s10803-013-1877-9
Cook, C. R., Grady, E. A., Long, A. C., Renshaw, T., Codding, R. S., Fiat, A., & Larson, M. (2017). Evaluating the impact of increasing general education teachers’ ratio of positive-to-negative interactions on students’ classroom behavior. Journal of Positive Behavior Interventions, 19(2), 67-77. https://doi.org/10.1177/1098300716679137
Cook, B. G. & Odom, S. L. (2013). Evidence-based practices and implementation science in special education. Exceptional Children, 79(2), 135-144. https://doi.org/10.1177/001440291307900201
Cooper, J. O., Heron, T. E., & Heward, W. L. (2020). Applied behavior analysis (3rd ed.). Pearson.
Dunst, C. J., Trivette, C. M., & Hamby, D. W. (2010). Meta-analysis of the effectiveness of four adult learning methods and strategies. International Journal of Continuing Education and Lifelong Learning, 3(1), 91-112.
Hager, K. D. (2018). Teachers’ use of video self-monitoring to improve delivery of effective teaching practices. TEACHING Exceptional Children, 50(5), 283-290. https://doi.org/10.1177/0040059918765749
Harrison, C. (2010). Peer and self-assessment. In P. Peterson, E. Baker, & B. McGraw (Eds.), Encyclopedia of Education (pp. 169–173). Elsevier.
Hendricks, D. R. (2011). Special education teachers serving students with autism: A descriptive study of the characteristics and self-reported knowledge and practices employed. Journal of Vocational Rehabilitation, 35(1), 37-50. https://doi.org/10.3233/JVR-2011-0552
Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single-subject research to identify evidence-based practice in special education. Exceptional Children, 71(2), 165-179. https://doi.org/10.1177/001440290507100203
Ives, Y. (2008). What is ‘coaching’? An exploration of conflicting paradigms. International Journal of Evidence Based Coaching and Mentoring, 6(2), 100. http://www.business.brookes.ac.uk/research/areas/coaching&mentoring/
Kalis, T. M., Vannest, K. J., & Parker, R. (2007). Praise counts: Using self-monitoring to increase effective teaching practices. Preventing School Failure, 51(3), 20-27. https://doi.org/10.3200/PSFL.51.3.20-27
Ledford, J. R. & Gast, D. L. (2018). Single case research methodology: Applications in special education and behavioral sciences (3rd ed.). Routledge.
Lylo, B. J. & Lee, D. L. (2013). Effects of delayed audio-based self-monitoring on teacher completion of learning trials. Journal of Behavioral Education, 22(2), 120-138. https://doi.org/10.1007/s10864-012-9166-9
Marder, T. & deBettencourt, L. U. (2015). Teaching students with ASD using evidence-based practices: Why is training critical now? Teacher Education and Special Education, 38(1), 5-12. https://doi.org/10.1177/0888406414565838
Morin, K. L., Ganz, J. B., Vannest, K. J., Haas, A. N., Nagro, S. A., Peltier, C. J., Fuller, M. C., & Ura, S. K. (2019). A systematic review of single-case research on video analysis as professional development for special educators. The Journal of Special Education, 53(1), 3-14. https://doi.org/10.1177/0022466918798361
Mouzakitis, A., Codding, R. S., & Tryon, G. (2015). The effects of self-monitoring and performance feedback on the treatment integrity of behavior intervention plan implementation and generalization. Journal of Positive Behavior Interventions, 17(4), 223-234. https://doi.org/10.1177/1098300715573629
National Autism Center. (2009). National standards report. http://www.nationalautismcenter.org/reports/
National Autism Center. (2015). Findings and conclusions: National standards project, phase 2. https://www.nationalautismcenter.org/national-standards-project/phase-2/
National Research Council. (2001). Educating children with autism. Committee on Educational Interventions for Children with Autism, C. Lord & J. P. McGee (Eds.). Division of Behavioral and Social Sciences and Education. National Academy Press.
Oliver, R. M., Wehby, J. H., & Nelson, J. R. (2015). Helping teachers maintain classroom management practices using a self-monitoring checklist. Teaching and Teacher Education, 51, 113-120. https://doi.org/10.1016/j.tate.2015.06.007
Plavnick, J. B., Ferreri, S. J., & Maupin, A. N. (2010). The effects of self-monitoring on the procedural integrity of a behavioral intervention for young children with developmental disabilities. Journal of Applied Behavior Analysis, 43(2), 315-320. https://doi.org/10.1901/jaba.2010.43-315
Rispoli, M., Zaini, S., Mason, R., Brodhead, M., Burke, M. D., & Gregori, E. (2017). A systematic review of teacher self-monitoring on implementation of behavioral practices. Teaching and Teacher Education, 63, 58-72. https://doi.org/10.1016/j.tate.2016.12.007
Saccomano, D. (2013). Refining pre-service teachers’ teaching capacity with adolescent students through the use of reflective practice. The California Reader, 47(2), 34-43.
Sharpe, T., Spies, R., Newman, D., & Spickelmier-Vallin, D. (1996). Assessing and improving the accuracy of in-service teachers’ perceptions of daily practice. Journal of Teaching in Physical Education, 15(3), 297-318. https://doi.org/10.1123/jtpe.15.3.297
Simonsen, B., MacSuga, A. S., Fallon, L. M., & Sugai, G. (2013). The effects of self-monitoring on teachers’ use of specific praise. Journal of Positive Behavior Interventions, 15(1), 5-15. https://doi.org/10.1177/1098300712440453
Smith, D. D., Robb, S. M., West, J., & Tyler, N. C. (2010). The changing education landscape: How special education leadership preparation can make a difference for teachers and their students with disabilities. Teacher Education and Special Education, 33(1), 25-43. https://doi.org/10.1177/0888406409358425
Stansberry-Brusnahan, L. L. & Collet-Klingenberg, L. L. (2010). Evidence-based practices for young children with autism spectrum disorders: Guidelines and recommendations from the National Resource Council and National Professional Development Center on Autism Spectrum Disorders. International Journal of Early Childhood Special Education, 2(1), 45-56. https://doi.org/10.20489/intjecse.107957
Stokes, T. F. & Baer, D. M. (1977). An implicit technology of generalization. Journal of Applied Behavior Analysis, 10(2), 349-367. https://doi.org/10.1901/jaba.1977.10-349
Swanson, T. C. (2012). Preparing teachers for students with autism spectrum disorders. Southeast Educational Network. http://www.seenmagazine.us/Articles/Article-Detail/ArticleId/2572/smid/403/ArticleCategory/56/Preparing-teachers-for-students-with-Autism-Spectrum-Disorders
Tripp, T. & Rich, P. (2012). Using video to analyze one’s own teaching. British Journal of Educational Technology, 43(4), 678-704. https://doi.org/10.1111/j.1467-8535.2011.01234.x
Wong, C., Odom, S. L., Hume, K., Cox, A. W., Fettig, A., Kucharczyk, S., Brock, M. E., Plavnick, J. B., Fleury, V. P., & Schultz, T. R. (2014). Evidence-based practices for children, youth, and young adults with autism spectrum disorder. Carolina Institute for Developmental Disabilities. http://cidd.unc.edu/Registry/Research/Docs/31.pdf
About the Authors
Selena J. Layden, Ph.D., is an Assistant Professor in the Department of Communication Disorders and Special Education at Old Dominion University in Norfolk, VA. Her research interests include autism spectrum disorder, teacher performance, improving the implementation of evidence-based practices in schools, and supporting behavior analysts in schools.
Annemarie L. Horn, Ph.D., is an Assistant Professor of Special Education at Old Dominion University in Norfolk, VA, where she is currently serving as the Program Coordinator for the MSED Adapted Curriculum Program. Her research interests include professional development with an emphasis on eCoaching and improving postsecondary outcomes for youth with disabilities.
Kera E. Hayden, B.S., is a graduate student in Speech-Language Pathology at Old Dominion University in Norfolk, VA. Her research interests are autism spectrum disorder and literacy intervention.