The assessment center was conceived for one reason and one reason only: to find people who can succeed in certain critical, high-impact jobs.
To be perfectly correct, the primary function of an assessment center is the measurement and evaluation of human characteristics and potential: figuring out who, among a group of “likelies,” is most likely to succeed in a specific type of job. The acid test of the assessment center, then, is predicting winners for key jobs whose performance parameters and entry-level requirements are less than obvious, straightforward or easily agreed upon.
The first, and most infamous, American assessment center was Station S, the World War II O.S.S. cloak-and-dagger candidate training and selection center, developed by Harvard psychologist Henry Murray. (Go to the head of the class if you remember the Jimmy Cagney movie of the same title.)
The best-known assessment project, and the one generally credited with kicking off the current assessment center binge, is the massive AT&T/Bell System management assessment system.
Dr. Douglas Bray, AT&T’s director of basic human resources research, launched the communications giant into the assessment business in 1956, when he began his ambitious Management Progress Study, an eight-year study of the growth and development of 400 managers. One of the unique contributions of the study was the use of assessment-center methods to obtain information at the beginning and, again, at the eight-year point in the managers’ careers.
Bray and colleagues, in turn, established the first working AT&T assessment center at Michigan Bell Telephone Co. in response to dissatisfaction with the quality of individuals being promoted to foreman-level jobs. In the ensuing 24 years, an estimated 200,000 candidates have been scrutinized, measured and evaluated at 70 different centers scattered throughout the Bell system.
In the past 10 years, the list of organizations using assessment centers has grown from 100 to nearly 2,000. The roster of users includes IBM, American Airlines, General Electric, Ford Motor Co., General Motors, Sears, General Telephone, Merrill Lynch, J. C. Penney and Standard Oil of Ohio. In addition, numerous state and federal bureaus and agencies have hopped on the assessment-method bandwagon.
The Dept. of Health, Education & Welfare (HEW), the Dept. of Housing & Urban Development (HUD), the FBI, the Civil Service Commission and the Social Security Administration head that list. Though assessment centers theoretically could be established for any occupational endeavor, the high cost—up to $1,500 per candidate assessed—has largely confined their use to the assessment of potential in managerial, supervisory and sales jobs.
Regardless of the application, assessment centers are designed primarily to predict performance success more accurately than such fallible measures as one-to-one interviews, work history, educational background, paper-and-pencil tests and performance appraisals.
Dr. William C. Byham, president of Development Dimensions International, Pittsburgh, PA, is less circumspect than most about the need for and value of assessment centers when he candidly and flatly asserts, “Otherwise highly competent managers do a terrible job of hiring and promoting people.... The assessment-center method allows managers to accurately assess a candidate’s expertise on a practical level through the use of behavioral simulations.”
Though somewhat complex in execution, the assessment-center method is fairly straightforward conceptually. A group of 6 to 12 upward-movement aspirants are sequestered in a room with three or more assessors, who have been trained to observe and evaluate the behavior of the “would-be’s.” The assessors usually are next-level-up managers from the organization. Care is taken to ensure that none of the candidates works for, or could conceivably work for, any of the assessors.
The aspirants are observed performing in-basket exercises, solving group discussion problems, playing business games and working through other simulation exercises, and are rated on their performance. The actual outcomes of the exercises tend to be of secondary importance in assessment; problem-attack skills, communication, interpersonal skills and the like tend to be the assessors’ focus.
After the simulation and observation period, which lasts from half a day to five days, depending on the job class candidates are being screened for, the assessors meet and thrash out a strengths-and-weaknesses report on each assessee.
Though these final reports are often shared with the candidates and their immediate supervisors, results tend to be kept under lock and key and are utilized only as part of the decision-making process. This assessment symphony is usually orchestrated and led by a permanent assessment-center staff.
Do they really work?
Though assessment centers are highly touted by their vendors, users and developers, some critics, usually members of the paper-and-pencil test development crowd, question whether assessment centers really do employ truly standardized and objectively valid evaluation exercises.
Others suggest that assessment centers simply confirm and ritualize existing organizational biases and myths.
Proponents of assessment centers, reports Psychology Today’s Berkley Rice, “claim that the methodology provides a precise, objective, job-related test based on actual behavior rather than tarnished standards such as educational background, prior work record, seniority or intelligence. Hence, the method has gained a reputation as one of the only legally defensible approaches to management selection.”
Indeed, the Equal Employment Opportunity Commission (EEOC) itself uses the method. Despite this face validity and a number of favorable research studies, doubt exists.
One Fortune 500 organization quickly suspended operation of its sales assessment center when a follow-up study by internal psychologists found that the success/failure ratio of selected sales candidates was almost identical to the success/failure ratio of de-selected candidates.
One conclusion of this extremely confidential report suggests that “its primary value [the assessment center’s] seems, at present, to lie in de-selecting candidates who do not look or think similarly to their assessors.” The report questioned both the legality and wisdom of continuing what it referred to as “an inadvertent inbreeding system.”
A number of academics have attacked the quality of the research done by advocates.
Ohio State University psychologists Richard Klimoski and William Strickland reviewed 90 such studies for the journal Personnel Psychology and concluded, “We should not be overly impressed with the evidence of assessment-center validity.”
In another research review, in the Academy of Management Journal, industrial psychologist Ann Howard suggests that “the research on them, though positive, is sparse, comes from too few sources, covers too many variations in components, lacks replication and is usually plagued by methodological problems.”
Most of the serious criticism—that is, misgivings expressed by researchers who understand the assessment-center method and who have nothing of their own to sell—centers on two topics: the self-fulfilling prophecy, or halo effect, and content validity.
As to the first count, researchers point out that when highly rated assessees are promoted at least in part because of their assessment-center ratings, it is impossible to validate the assessment-center predictions. Studies that have concealed assessment-center ratings from decision makers have produced mixed results: though many show impressive findings in favor of the assessment-center method, others suggest that assessment centers fare no better than simple managerial predictions of employee success and failure.
The second critique, lack of content validity, is a more serious matter. Federal Executive Agency guidelines for selection procedures suggest that only clearly job-related activities are legally acceptable.
That means, essentially, that the law only likes and accepts selection techniques and tools that look like they relate to the job and can be proved to relate to the job. At first blush, the assessment center looks like a natural for meeting the content validity standard.
But hold on a minute. When we look closely at what assessors assess, we see a lot of personal trait and personality words: “Leadership,” “stress tolerance” and “independence” are hardly behavior descriptions that you or I would agree upon very easily.
To further complicate the content-validity problem, we have the matter of the individual assessment-center exercises. If, for example, the identical in-basket exercise is used to assess Kansas City police candidates and Pittsburgh high school principal candidates, it would probably take a real Melvin Belli to convince a judge that both uses are based on independent job-analysis studies and accurately reflect specific job content.
The data on the plus side is rather impressive, even if much of it comes from industrial and organizational psychologists who have a “vested” interest in the validity and reliability of the assessment-center method. Some of the most widely accepted and interesting evidence in favor of assessment centers is reviewed by James A. Earles and William R. Winn of the Air Force Human Resources Laboratory, Lackland Air Force Base, TX.
Some examples of the most convincing “proof statements” are the following:
• Bray and AT&T colleague Donald L. Grant found that, of the 422 employees assessed in the mid-1950s, 42% of those classified as having middle-management potential had achieved that level four to eight years later, and only 4% remained at the lowest level. Of those predicted not to rise, 42% had not moved, and only 7% had achieved middle-management positions.
• Byham reviewed a number of assessment center and non-assessment center selection approaches and found assessment-center results to be 10% to 30% better overall.
• A. I. Kraut and G. J. Scott assessed 1,086 IBM nonmanagement employees and classified each as either having potential beyond first-level management or having no potential for successful assignment beyond first-level management. Of those assessed as having higher potential, 30% achieved second-level positions, while only 10% of those rated first-level were promoted beyond first level. In addition, 20% of those promoted against the prediction were eventually demoted; only 9% of those promoted in accordance with the predictions were demoted.
• H. B. Wollowick and W. J. McNamara found that the overall management-potential rating of lower- and middle-level managers successfully predicted performance in 37% of the cases over a three-year period. When ratings from the individual exercises, tests and characteristic assessments were statistically combined, the “hit rate” increased to 62%.
• A study by Helen LaVan, Cameron Carley and Dennis Nirtaut of DePaul University, Chicago, focused on the reactions of people assessed through the assessment-center technique. The researchers found that participants, regardless of their level or the outcome of the assessment, tended to be positive about the experience.
They also found that participant reaction did not depend on how the information was to be used—that is, for training and development, fast-tracking or as part of a promotion decision.
• A Nebraska district court upheld the use of assessment centers in a suit against the city of Omaha (Boray, Stokes and Lant v. City of Omaha) when it was demonstrated that professional assessors who had no connection with the case independently came to the same conclusions about the job candidates in question as had the original nonprofessional assessors, who had had only two days’ training in assessment techniques.
Source: TRAINING Magazine