Blog by Doug Cline
International Society of Fire Service Instructors, Vice President
As fire service instructors, we have a duty to provide the highest quality of service and instruction. We need to be our students’ inspiration, pushing them to strive for excellence.
But there’s a question we need to answer: Are we, ourselves, dedicated and committed enough?
Instructors need to stop and look in the mirror. The future of the fire service rests on our shoulders. That’s why it’s imperative that organizations, leaders and instructors take a hard look at how training is being delivered.
There are numerous ways to do this. Reaction questionnaires can be given to students. Subject-matter experts or senior trainers can audit training sessions. Test scores can be analyzed. Other instructors can perform peer assessments. These are just some of the methods.
The optimum time to evaluate an instructor is while they are actually delivering a training session, so direct observation is recommended. However, observation is only effective if it is driven by standards that are objective, comprehensive, reliable and accurate.
Follow these steps to evaluate the delivery of training:
Step 1: Identify and define the objectives of the evaluation and determine how this process will work. Determine why the evaluation is being conducted. One reason may be to provide feedback on an instructor or a specific delivery issue. It also may be to evaluate the overall competence of an instructor.
Step 2: Consider how the information will be summarized and to whom it will be reported. Evaluation data can serve many purposes and can be interpreted in different ways. It's important to clearly decide why, when and from whom data will be collected. It's also important to check that the information being collected relates back to the original objectives that prompted the evaluation.
Step 3: Identify and define the specific competencies and performances to be measured. First, determine which competencies will serve as the basis of the evaluation. Typically, a detailed evaluation involves no more than three competencies, while a more general evaluation may cover many. Second, the objectives of the evaluation must be clearly specified so both the evaluator and the instructor understand what is being measured.
Step 4: Determine the sources of data. You can obtain evaluation data from a number of different sources. Common sources include assessments by designated evaluators and co-instructors, as well as peer and self-evaluations. It's important to remember that evaluators will have varying levels of skill, which may influence the data.
Step 5: Write the questions. For quality control, each question must be linked to a specific desired outcome of the evaluation. As the questions are written, we can control how specific or general each item is. These controls are essential to keep the evaluation instrument practical, manageable, reliable and valid.
Step 6: Design the format and layout of the instrument. Evaluation instruments must state clearly and concisely what is being measured. The evaluation must contain unambiguous directions for use and present the questions or items in a logical order. Instruments must be user friendly, meaning they are easy to read and use and provide enough space for documentation.
Step 7: Pilot-test the instrument and obtain feedback. Before using a document for program evaluation, pilot-test it. This allows others to provide feedback on the instrument's adequacy and usefulness, and helps evaluators determine how well the design and layout meet your objectives. It also confirms the instrument is designed to provide what it's intended to. Since instrument development is time consuming and costly, it's imperative to evaluate the tool to ensure it will provide the best information possible.
Step 8: Create the final instrument and implement the evaluation. The final instrument must provide the data needed to ensure training achieves its objectives or job performance requirements. Instruments may be used to assess a variety of aspects of training, including the instructor's performance and the usefulness of instructional methods, course materials and content.
Effective fire service organizations must recognize their responsibilities to assist in the professional development of their instructors. Fire service instructors must also realize they have areas that need development.
As the leaders of the fire service, instructors need to have the guts to do more. We should be setting a precedent for the future. We start by looking at the man in the mirror.
About the Author
Douglas Cline is a student of the fire service serving as training commander with the City of High Point (N.C.) Fire Department and assistant chief of administration with the Ruffin Volunteer Fire Department. Cline is a North Carolina Level II Fire Instructor, a National Fire Academy Instructor and an EMT-Paramedic instructor/coordinator for the North Carolina Office of Emergency Medical Services. Cline is a member of the North Carolina Society of Fire Service Instructors and the International Society of Fire Service Instructors, where he serves on the Board of Directors as First Vice President.