Direct link to the Fidelity video: http://unc-fpg-cdi.adobeconnect.com/drivers-ed-fidelity/
(Note: This video requires Flash.)
Purpose: This information offers the facilitator extra content to supplement the video. It is important to distinguish among fidelity, programmatic (effort), and outcome data. This content will help staff gain a deeper understanding of fidelity data so they can strengthen implementation and make attributions about student outcomes.
Fidelity is the degree to which a program, practice, or innovation is implemented as intended by the user or group of users. For instance, teachers completing training on an instructional practice is an example of effort data. Data gathered while observing a teacher implementing a specific instructional practice is an example of fidelity data: it indicates the degree to which the teacher is implementing the practice as designed. Observation is one of many ways to gather fidelity data; others include review of artifacts, consensus ratings, and interviews. Fidelity data direct attention to improvements in training, coaching, or organizational supports that strengthen implementation.
Collecting data on fidelity of implementation makes it possible to attribute why an evidence-based practice is or is not yielding promised results. Low fidelity indicates that the practice is not being implemented as designed, and learner outcomes are unlikely to improve. When fidelity is high, learners have the greatest likelihood of improved outcomes.
Knowledge Objectives for the video: (slide 2)
· Understand the difference between personnel evaluation and fidelity assessments
· Explain how fidelity data help identify strengths and areas for development for individuals and the system
Practice Objectives for the video:
· Recognize when, why and how to engage stakeholders
· Inventory your system for existing fidelity data
· Begin using fidelity assessment data as a system diagnostic tool
Fidelity data are:
· Multiple measures that reflect use of the content as intended, in the right context, and with the quality necessary to achieve outcomes
· Such data are GENERATED BY the people charged with using the innovation as intended
· And the data are collected in and from the settings where the innovation is used
Fidelity Data:
· Measure the critical components of a practice
· Indicate supporting context
· Indicate competent delivery
· Are collected at regular intervals
· Are time sensitive
Precautions about Fidelity Data:
· Are not an assessment of learner outcomes
· Often trigger emotional responses
· Are not to be used in performance evaluation
· Are not an indication of effort or activity
Other ideas about fidelity data collection to maximize benefits to students: (slide 7)
1. Frequent: More frequent fidelity assessments mean more opportunities for improvement. Instruction, innovations and implementation supports, and school, district, and state supports for the program or practice all benefit from frequent feedback. The mantra for fidelity assessments in education is, “Every teacher, every month.”
2. Relevant: Fidelity data are most informative when each item on the assessment is relevant to important supports for student learning. That is, the fidelity assessment items are tied directly to the core components and how they are defined or operationalized (such as in a practice profile; see the Practice Profile Planning Tool and Practice Profile Examples).
3. Actionable: Fidelity data are most useful when each item on the assessment can be included in a coaching service delivery plan and can be improved in the education setting. After each assessment, the teacher and coach develop goals for improving instruction. In addition, Implementation Teams work with leadership to ensure that teachers have access to the intensity of coaching supports they need to be successful.
Big Idea: Fidelity data help us understand our outcomes (slides 8-11)
Though the focus of a fidelity assessment is on instruction, accountability for effective use of practices lies with leadership and integration among all of the Implementation Drivers.
· If the components of the practice are not evident in classrooms, the Leadership Team is accountable for providing more effective supports for teachers.
· If student outcomes are improving and teachers are using the practices with fidelity, the system still needs to collect fidelity data to ensure that effective use of the practices is sustained through changes in leadership and staff.
Questions from Slide 10:
· Let’s see if getting better at what we do helps us improve our outcomes
· Are we measuring the right core component?
· Are the fidelity measures aligned with the core components?
· Do the fidelity measures align with theory of change?
· Did we choose the wrong approach? Practices?
· Did we choose an EBP that matched our need?
Questions from Slide 11:
· Are we doing what we said we would do with quality?
· Did we choose an innovation that does not effectively address the needs we identified?
Big Idea: Fidelity data help us improve and sustain outcomes (slides 12-17)
A quality fidelity assessment requires that the measure is highly correlated with student outcome data (a brief sketch of this kind of check follows the list below). Fidelity assessments measure the degree to which the system, or parts of a system (such as a grade level or content area), are able to use an innovation or instructional practice as intended. In other words, did we do what we said we would do? Fidelity data can be used to:
· Improve individual practitioner or educator use of a specific program, practice, or intervention as intended. This means that we ask ourselves, “What do we need to do as a school, agency or organization to improve the learning conditions and supports for this particular person?”
· Improve system supports available to all educators such as training, coaching and data use.
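As noted above, a useful sanity check on a fidelity measure is whether its scores track student outcome data. Below is a minimal sketch, not part of the SISEP/NIRN materials: the per-classroom scores are hypothetical, and a simple Pearson correlation stands in for the more rigorous validation a real fidelity measure would need.

```python
from statistics import correlation  # requires Python 3.10+

# Hypothetical per-classroom averages: a fidelity score (0-100) from a
# classroom observation tool, and a student outcome score from a common
# assessment. Both sets of numbers are made up for illustration.
fidelity_scores = [62, 71, 85, 90, 55, 78]
outcome_scores = [48, 55, 67, 72, 41, 60]

# Pearson correlation between fidelity of use and student outcomes.
r = correlation(fidelity_scores, outcome_scores)
print(f"Pearson r between fidelity and outcomes: {r:.2f}")

# A strong positive r supports treating the fidelity measure as a leading
# indicator of outcomes; a weak r suggests revisiting the measure or the
# core components it is meant to capture.
```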
For leaders in education, fidelity is not just of academic importance. The use of a fidelity measure helps leaders and others distinguish implementation problems from problems with the evidence-based practice itself, and it helps guide problem solving to improve outcomes.
It is time to ask questions about the effectiveness of the Implementation Drivers. Review and reflect on these types of questions from slide 17:
· What do the teachers, coaches and those conducting the fidelity assessments have to say about the data? What’s their view of the challenges related to student engagement practices?
· Do we need to improve our interview processes for teachers who work at this school to assess their student engagement skills?
· Can we improve initial and booster training regarding student engagement?
· What job aids and materials can help in the classroom?
· How can coaching be improved?
· Is there a way for us all to use data more frequently?
· Is the data system functional for providing feedback about the teachers’ skills related to engaging students?
Answers to these questions lead to systemic changes and improvements in the effectiveness of the Competency Drivers and Organization Drivers for teachers, staff, and practitioners. The goal is a stronger system for all – those currently using the innovation and those who will use it in the future.
Big Idea: Sustaining outcomes by sustaining the fidelity assessment process (slides 18-19)
Organizations need to build a fidelity assessment system. Sustaining implementation takes attention: there will always be times when practice drifts from what is intended, and when we detect that drift, we can adjust. In addition, new staff will need feedback and support.
Making It Happen (slides 20-25)
Fidelity data help educators improve the system by examining data at multiple levels (individual, group, and system). Fidelity data analysis is designed as a transparent process in which each stakeholder is actively involved in reviewing and summarizing the data. (A brief sketch of a multi-level summary follows the list below.)
· Create readiness by preparing people and building commitment through meaningful opportunities for input and engagement for all stakeholders. (Who are your stakeholders?)
· Design a transparent fidelity system by creating a fidelity assessment protocol, sharing fidelity data at the individual level, and sharing a summary of the data across individuals (anonymously, of course). Then continue to demonstrate that fidelity data are used to tailor and improve system supports for all teachers.
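Here is a minimal sketch of the multi-level view mentioned above. The school, grade, and teacher labels and all scores are invented for illustration, and a simple mean stands in for whatever summary statistic a team actually uses.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical fidelity assessment records: (school, grade, teacher, score).
records = [
    ("Adams", "3", "T01", 82), ("Adams", "3", "T02", 64),
    ("Adams", "4", "T03", 91), ("Baker", "3", "T04", 58),
    ("Baker", "4", "T05", 75), ("Baker", "4", "T06", 70),
]

by_teacher = defaultdict(list)       # individual level
by_school_grade = defaultdict(list)  # group level
all_scores = []                      # system level

for school, grade, teacher, score in records:
    by_teacher[teacher].append(score)
    by_school_grade[(school, grade)].append(score)
    all_scores.append(score)

# Individual level: each teacher's mean fidelity score.
for teacher, scores in sorted(by_teacher.items()):
    print(f"Teacher {teacher}: mean fidelity {mean(scores):.0f}")

# Group level: each school/grade combination.
for (school, grade), scores in sorted(by_school_grade.items()):
    print(f"{school} grade {grade}: mean fidelity {mean(scores):.0f}")

# System level: the whole organization.
print(f"System-wide mean fidelity: {mean(all_scores):.0f}")
```

Summaries like these help an Implementation Team see whether a low score is an individual coaching issue, a group-level training issue, or a system-wide support issue.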
Activities (slide 27)
There are two activities included with this video. The one called Getting Better: Developing a Fidelity Assessment (advanced) has been modified and is included as the Handout: Brainstorming Activity in the Activity Directions 3 document.
For more information about how to use this activity to support the selection of Leadership Teams, see the Action Overview: Leadership Teams Expertise.
The Active Implementation (AI) Hub, AI Modules, and AI Lessons are developed by the State Implementation & Scaling-up of Evidence-based Practices Center (SISEP) and the National Implementation Research Network (NIRN).
THE ACTIVE IMPLEMENTATION HUB | implementation.fpg.unc.edu