Quality is a word, like professionalism, that trainers tend to use with some abandon. The problem is that quality has many definitions. So before I discuss how to assure and control quality, let’s establish a definition of that somewhat elusive word.
The bottom line on training quality is the degree to which trained personnel perform successfully on the job after training. Quality is the sum total of a number of contributing factors and can be seriously reduced by any one of them.
The major factors that affect quality are:
1. Accuracy in identifying the tasks that must be done on the job and in describing conditions and setting performance standards for those tasks
2. Accuracy in determining how existing performance capabilities of the student population differ from those needed
3. Accuracy in determining what knowledge, skills, tools, aids and attitudes must come together for acceptable performance to occur
4. Accuracy in determining the nature of the learning events best suited to producing needed knowledge, skills and attitudes
5. Effectiveness of any instructors, counselors, coaches, writers, producers or administrators involved in delivery of the learning system
6. Timeliness and frequency of opportunity to perform learned tasks after training
7. Timeliness, frequency and accuracy of feedback on the job
8. Accuracy in determining how available time, manpower, facilities and financial resources can best be applied to optimize quality
Rarely is it appropriate to pursue the maximum possible quality. Optimum quality is the point of maximum value, or return on investment. In seeking to improve quality, a point is reached at which cost increases faster than quality. On the other hand, in cost-cutting exercises, a point is reached at which quality declines faster than cost.
Quality should be maintained in the range between these points because that is where we get the best value. It is not the purpose of this article to explore value management but to explore quality, so enough said about financial matters.
Of the eight factors listed, all except items 6 and 7 are the direct responsibility of the training function, and item 8 is a training management responsibility. That leaves five factors.
Quality assurance is what we do to minimize risk of error or failure in each factor before the training starts.
Quality control is what we do to measure each factor, independently of all the others, and to correct for variances between actual quality and planned quality, the level at which management calculates optimum quality will occur.
Different methods of applying quality assurance and control are appropriate for different situations. Key differences are the nature of the job to be learned, the number of learners, the qualifications of the trainer(s) and the sophistication of the organization’s management in controlling operations.
The range of methods is also great. In a small organization, where the training function is represented by one person who is an acknowledged master of the job to be learned, and where decisions are made without gathering and analyzing a lot of facts, it is usually proper for the trainer to base these decisions on his or her own experience and knowledge and to accept subjective opinions of the resulting quality.
In this case, the costs of preventing or correcting low quality are usually greater than the costs of low quality, should it occur.
As organizations grow, however, it becomes less likely that a person responsible for preparing training is an acceptable source of answers about job content, learner characteristics and performance components (knowledge, skill, attitudes, tools, aids, and so on) and more likely that he/she is a good source of answers about the design of the learning process and is capable of delivering the learning system.
It is also more likely that management controls operations on the basis of numbers rather than opinions and trust relationships.
In this situation, the larger number of learners makes systematic methods of quality assurance less costly than low quality.
That means the job, the learners and the task components should be researched before defining and preparing the training. Research can range from interviewing selected master performers and job candidates to extensively testing and observing both and statistically analyzing the results. It is also proper in this case to ensure that trainers apply sound educational technology to the design and delivery of the learning system, rather than guesswork, intuition and popular mythology.
Regardless of the size and sophistication of the organization, it is necessary to find a valid way to determine how well people perform on the job after training. In other words, it is necessary to control quality. Though it is often done, it is risky to settle for a few randomly collected opinions about quality, no matter how credible the holders of those opinions.
Too many factors, other than training quality, influence those opinions, which, unfortunately, are rarely objective.
If opinions are to be collected from the graduates and from their supervisors, they should be collected by means of a questionnaire or survey designed to return numbers that can be averaged.
This method eliminates much of the subjectivity and makes the results more valid. It also allows rare cases to be discarded and individual differences among learners to have only a “fair share” impact on results. The results should give you an approximate reading on the quality of what you have done. If that final reading varies above or below what you intended, you need additional data to suggest what to change.
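As a rough illustration, the averaging and the “fair share” treatment of rare cases might be sketched as follows. The five-point scale, the sample responses and the trimming fraction are my assumptions for the sketch, not part of the method described here:

```python
# Sketch: summarizing post-training survey responses on an assumed 1-5 scale.
# Out-of-range answers are discarded, and a trimmed mean keeps a handful of
# extreme opinions from dominating the reading.

def summarize_responses(scores, low=1, high=5):
    """Return the mean of in-range scores, or None if no valid scores remain."""
    valid = [s for s in scores if low <= s <= high]
    if not valid:
        return None
    return sum(valid) / len(valid)

def trimmed_mean(scores, trim_fraction=0.1):
    """Drop the top and bottom slice of responses before averaging, so
    individual differences get only a 'fair share' impact on the result."""
    ordered = sorted(scores)
    k = int(len(ordered) * trim_fraction)
    kept = ordered[k:len(ordered) - k] if k else ordered
    return sum(kept) / len(kept)

responses = [4, 5, 3, 4, 4, 1, 5, 4, 3, 4]
print(summarize_responses(responses))  # plain mean of valid responses
print(trimmed_mean(responses))         # extremes trimmed before averaging
```

Either number gives you a reading you can compare against the level of quality you planned for.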
You should measure the first five quality factors I listed at the beginning of this article separately because they are performed separately and will be adjusted, or controlled, separately.
1. Your post-training performance survey questionnaire should ask for demands of the job that the trainee couldn’t perform after training and tasks that were learned but not needed. The data will show errors in item 1.
2. Test the learners, before training, for capabilities you assumed they had, as well as those you assumed they didn’t have. The first is prerequisite testing; the second is pretraining testing. The results will show errors in item 2.
3. During the training, use quizzes, projects, problems and attitude surveys to compare knowledge, skills, attitudes, tools and aids to the performance achieved. For example, you’ve decided that Joe must be able to work differential equations to select the correct settings on a machine; if he fails the equations quiz but passes the performance test, you should question whether the equations are really necessary.
4. Examine the results of these quizzes, surveys and so on for the frequency of success on each item of learning. Low frequency, below 70%, suggests redesign of the learning event. Poor learner attitude toward the task or the learning event (from the attitude surveys) also suggests the need for a new approach.
5. In addition to using the degree to which learners achieve planned learning as an indicator of quality of delivery, we should collect feedback from the learners about delivery technique and style. Don’t sit in on a classroom instructor’s presentation in order to evaluate it. Instead, use results: the way learners respond.
Sure, there is some art to superior classroom instruction but not enough to justify evaluating the instructor’s performance. Concentrate, rather, on the reactions and comments of those on the other side of the podium.
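The frequency check in item 4 above amounts to a simple tally per learning item. A minimal sketch follows; the 70% cutoff comes from the text, while the item names and the pass/fail data are invented for illustration:

```python
# Sketch: flag learning items whose success frequency falls below the
# redesign threshold. Each item maps to one pass/fail outcome per learner.

REDESIGN_THRESHOLD = 0.70  # the "below 70%" rule of thumb

def items_needing_redesign(results, threshold=REDESIGN_THRESHOLD):
    """Return (item, pass_rate) pairs whose pass rate is below threshold."""
    flagged = []
    for item, outcomes in results.items():
        pass_rate = sum(outcomes) / len(outcomes)  # True counts as 1
        if pass_rate < threshold:
            flagged.append((item, pass_rate))
    return flagged

quiz_results = {
    "read schematic": [True, True, True, False, True],    # 4 of 5 passed
    "set tolerances": [True, False, False, True, False],  # 2 of 5 passed
}
print(items_needing_redesign(quiz_results))
```

Anything the function flags is a candidate for a redesigned learning event, subject to the attitude-survey evidence mentioned above.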
Exact, step-by-step procedures for each task of quality assurance and each task of quality control could fill a book, or several. And they have. If you want to read further about specific techniques of quality assurance and control, often referred to in the literature as evaluation or testing and measurement, read books and articles on the research and design of training.