Performance Reviews - Uakron.edu


Annual HR Forum, October 6, 2016
Performance Reviews: Do They Have a Future?

The Problems with Performance Reviews
Steve Ash, PhD
Chair, Department of Management
ash@uakron.edu

Long Journey
• Performance Management (PM)
• Management by Objective (MBO)
• Job Analysis
• Critical Incidents Technique
• Behaviorally Anchored Rating Scales (BARS)
• Graphic Rating Scales
• Self Appraisals
• Stacked Rankings
• Rating Scales
• Essay Methods
• Paired Comparisons
• 360-Degree Feedback
• Assessment Centers

Yuck! What do managers hate most?
1. Losing their job
2. Seeing their people lose their jobs
3. Performance reviews
• In one study, 58% of HR executives considered reviews an ineffective use of supervisors’ time
• Are employee attitudes any different?

Historical Perspective
• Ratings began in WWI and WWII
• By the 1960s, 90% of organizations were using performance ratings
– Unions were based on seniority
– Non-union organizations used evaluation scores for merit (NOT for performance or development)

Evolution of Performance Reviews 1 (Cappelli, HBR ’16)
WWI – The U.S. military created a merit-rating system to flag and dismiss poor performers.
WWII – The Army devised forced ranking to identify enlisted soldiers with the potential to become officers.
1940s – About 60% of U.S. companies were using appraisals to document workers’ performance and allocate rewards.
1950s – Social psychologist Douglas McGregor argued for engaging employees in assessments and goal setting.
1960s – Led by General Electric, companies began splitting appraisals into separate discussions about accountability and growth, to give development its due.
1970s – Inflation rates shot up, and organizations felt pressure to award merit pay more objectively, so accountability again became the priority in the appraisal process.

Evolution of Performance Reviews 2
1980s – Jack Welch championed forced ranking at GE to reward top performers, accommodate those in the middle, and get rid of those at the bottom.
1990s – McKinsey’s War for Talent study pointed to a shortage of capable executives and reinforced the emphasis on assessing and rewarding performance.
2000 – Organizations got flatter, which dramatically increased the number of direct reports each manager had, making it harder to invest time in developing them.
2011 – Kelly Services was the first big professional services firm to drop appraisals, and other major firms followed suit, emphasizing frequent, informal feedback.
2012 – Adobe ended annual performance reviews, in keeping with the famous “Agile Manifesto” and the notion that annual targets were irrelevant to the way its business operated.
2016 – Deloitte, PwC, and others that tried going numberless are reinstating performance ratings but using more than one number and keeping the new emphasis on developmental feedback.

Dropouts
• Adobe
• Juniper Systems
• Dell
• Microsoft
• IBM
• Deloitte
• Accenture
• PwC
• Gap
• Lear
• OppenheimerFunds
• General Electric
• Morgan Stanley
• Goldman Sachs
• Lilly
• Google

Why the Movement?
• Problems
– My “dirty dozen” problems with traditional performance reviews

Problem #1: Frequency
– Once a year. Really? Primacy-recency effects.
– Really good organizations may conduct mid-year or quarterly reviews
• Is that enough?

Problem #2: Time/Cost Invested
– Deloitte: 1.8 million hours across the firm
– Flatter organizations: in the 1960s about 6 direct reports, today 20
– McGregor (Theory Y) assumed employees wanted to perform well; doing it right would take managers several days per subordinate each year
– CEB: managers report 210 hours for appraisals (5 weeks!) per year

Problem #3: Rewards
– Is the motivation the huge raises? In the 1970s, merit raises of 20% were not uncommon
– Today, what is the financial difference between a score of 4 vs. 5 in annual salary?

Problem #4: Management Styles Have Changed
– Movement from “Command and Control” to “Innovative Teamwork”
– Annual performance reviews are so “last century”
– Do your best people need you to tell them how to do their jobs?

Problem #5: Millennial Generation
– The Instagram and Facebook generation
– How rapidly do they expect feedback?
– As these folks evaluate employment options, what do they seek?

Problem #6: Measurement Scales & Bias
– Have all the appraisal questions you have been presented with made sense?
– One study found that 98% of federal government employees received “satisfactory” ratings, while only 2% got either of the other two outcomes: “unsatisfactory” or “outstanding”
– There are many human biases that influence ratings, even when scales are scientifically created!

Problem #7: Developmental vs. Evaluative
– How much development is actually associated with a poor “score”?
– How excited are people, really, for negative feedback?
– What is your experience with employee reactions to negative feedback?
– One study found only 25% of employees said their managers discussed strengths at all

Problem #8: Focused on the Past, Not the Future
– Does talking about poor performance a year ago really help future performance?
– How much can you really change people?
– Should the focus be on hiring the right people and trusting them to do a good job, or on improving “bad” performers with great evaluation and development tools?

Problem #9: Pace of Change and Planning
– How comfortable are you with guessing your assignments one year from now?
– How quickly do organizational priorities change? Will the pace of change continue to increase?
– If there are many changes, can we use standard evaluation tools and scales?
– What do your best performers do?
• Agile Manifesto (2001): coders prefer responding to change over following a specific plan

Problem #10: Legally Required
– Decades of advice to standardize processes and be objective; HR has tried to comply
– But poor documentation may be worse than none
– Perhaps rating scores tell more about biases than performance (consider times when women and minorities consistently get lower scores)
– Is HR the punching bag for Legal Eagles and Bean Counters? Should HR drive some processes?

Problem #11: Objectivity Is Very Subjective
– Does your manager know more about your work than you do?
– Unless you are counting “widgets” made, sales achieved, or time to completion, it is very hard to objectively measure performance
– Knowledge work, in particular, requires subjective judgment
– Objectivity is largely an illusion
– Some scores are measured to two decimal places!

Problem #12: Teamwork Is Destroyed
– Traditional systems often pit employees against each other, when collaboration is needed to be competitive in the current marketplace

W. Edwards Deming
[The annual performance review] “nourishes short-term performance, annihilates long-term planning, builds fear, demolishes teamwork, nourishes rivalry and politics. It leaves people bitter, crushed, bruised, battered, desolate, despondent, dejected, feeling inferior, some even depressed, unfit for weeks after receipt of rating, unable to comprehend why they are inferior. It is unfair, as it ascribes to people in a group differences that may be caused totally by the system they work in.”

Don’t Throw the Baby Out!
• But pendulums swing both ways
• Managers often go from one fad to another just because others do it
• Deloitte has already decided to bring back some numeric ratings because they have had too many problems
• The theoretical basis for performance feedback is very sound – how can we improve implementation?

Thank You!