The Dark Side of Talent Analytics

Tim Payne, Partner, Global Centre for HR Transformation, KPMG in the UK

Talent analytics is a very seductive topic. Most HR professionals are longing for a way to put some numbers and science to the art of people management—a longing driven by insecurities after years of unflattering comparisons with our associates in Finance and Marketing, and a craving for respect from our business customers. Analytics holds out the promise of an instant credibility fix, a sign of the transition from adolescence to adulthood.

I have to admit to being a little swept up in this myself. An early career spent buried in SPSS-X validating recruitment processes and competency frameworks no longer looks like a waste; multiple regression is no longer a net negative on the CV.

In addition to this, some of the newer cloud-based HR and talent systems are even arriving with prepackaged analytics—in-built algorithms to help you predict and manage your workforce, retain talent, and save money!

And I am still excited and convinced that analytics holds the key—or one of the keys—to improving how we manage our people (see our recent paper on analytics). However, I was given pause a few weeks ago when a software vendor demonstrated their latest analytics package. At the click of a mouse (or the swipe of a screen), the system plots individual workers on a matrix of their predicted future performance rating and their predicted likelihood of leaving. Think of what you could do with that data. That is the exact moment I started to feel a little nervous, for a number of good reasons.

First, there was no mention of how the in-built algorithms are validated. Not a sign of an R-squared or a confidence interval, let alone any mention of adverse impact, the sample size involved, or whether the data are normally distributed. In other words, how do we know the predictions are accurate and fair? This doesn’t mean the vendor hadn’t considered these points, just that they were not raised or promoted. And if the vendor doesn’t raise them, will the customer always do so?
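To make that concrete, here is a minimal sketch in Python of the two checks a buyer could ask to see: a hold-out accuracy figure for an attrition model and a simple four-fifths-rule adverse impact ratio. Everything in it is an assumption for illustration—the data are synthetic and the feature names (tenure, engagement score, a protected-group flag) are hypothetical, not anything from the vendor’s product.

# A minimal sketch, not any vendor's actual method: fit a simple attrition model
# on synthetic data, then ask the two questions the demo skipped --
# how accurate is it, and does it flag one group far more often than another?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2_000

# Hypothetical features: tenure (years), last engagement score, and a
# protected-group flag used only for the fairness check, never as a predictor.
tenure = rng.gamma(shape=2.0, scale=3.0, size=n)
engagement = rng.normal(loc=3.5, scale=0.8, size=n)
group = rng.integers(0, 2, size=n)  # 0 / 1, e.g. two demographic groups

# Synthetic "truth": shorter tenure and lower engagement raise leaving risk.
logit = 1.0 - 0.2 * tenure - 0.8 * (engagement - 3.5)
left = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([tenure, engagement])
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, left, group, test_size=0.3, random_state=0
)

model = LogisticRegression().fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]

# Check 1, accuracy: how well does the model rank leavers above stayers?
print(f"Hold-out AUC: {roc_auc_score(y_te, scores):.3f}")

# Check 2, fairness (four-fifths rule): compare the rate at which each group
# is flagged as "high risk" (here, predicted probability above 0.5).
flagged = scores > 0.5
rate_0 = flagged[g_te == 0].mean()
rate_1 = flagged[g_te == 1].mean()
impact_ratio = min(rate_0, rate_1) / max(rate_0, rate_1)
print(f"Flag rates: group 0 = {rate_0:.2%}, group 1 = {rate_1:.2%}")
print(f"Adverse impact ratio: {impact_ratio:.2f} (below 0.80 is a warning sign)")

Neither number is hard to produce; the point is simply that a customer can ask for both before trusting the predictions.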

Second, I started to think about how this information might be used. In an ideal world, someone would take this data, together with other contextual information, and work out a plan. Presumably, if an individual is predicted to be a high performer and a high risk for leaving, the course of action is clear—speak to their manager, speak to them, and try to find a way to change the predicted outcome.

But what if it’s the other way around—predicted low performance and a low risk of leaving? Do you stop investing in that person and remove them from the promotion list? Do you start looking for ways to make them more likely to leave? One hopes not; one hopes instead that the line manager uses the prediction to work out why performance is expected to be low and what might be done to turn it around. Questions should be asked: Are they in the wrong job, or are there issues at home? Perhaps the problem actually lies with the manager, their colleagues, or their client allocation….

This line of thought made me suddenly rethink my allegiance to analytics, particularly when you factor in that I work more with European than U.S. clients, where the employment law environment is tough and complex.

And yet, I do believe that, all things being equal, those prepackaged algorithms will be more accurate than our human intuition. I believe analytics works. In my first year as an undergraduate, I took a course in the history and philosophy of science, and that’s where I first came across Paul Meehl. We studied his book ‘Clinical versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence’, published in 1954.

You can still get it; I recommend it. One study quoted in the book compared diagnoses of a particular illness made by medical doctors (MDs), nurses, members of the public (using a checklist), and a simple equation. The MDs did worst, followed by the nurses, then the members of the public; the best diagnoses came from the equation. In other words, we’ve known for a long time that algorithms are better predictors than humans. So why didn’t doctors get replaced in the 1950s? The answer lies in the complex moral, social, and cultural context of the time. Frankly, in the 1950s, we wanted to see a man in a white coat with authority telling us our fate.

When it comes to making decisions about people in the workplace based on algorithms, perhaps the moral, social, and cultural context today is more supportive, more ready. But I don’t think HR professionals should take this for granted. We should think hard about how we use these powerful new tools before we click that mouse or swipe that screen and, in so doing, create unforeseen ethical or legal dilemmas.

Hear more about issues related to Human Resources by visiting KPMG’s HR Centre of Excellence and by listening to the KPMG SSO Institute podcasts Eradicating the Stigma: HR’s Future and Rethinking Human Resources in a Changing World.


