People Science: The Return on Investment (of Analytics)
Article 6 in a series of 6
The concept of people analytics maturity is often used by HR to assess the state of people analytics. Basic reporting is considered low maturity; predictive or prescriptive analytics is considered high maturity. From this, HR and people teams conclude that predictive or prescriptive analytics is the goal and must be used for everything, often without knowing what it is. HR tech and solution providers have jumped on the bandwagon too, adding futuristic-sounding machine learning, artificial intelligence (AI), and cognitive computing to product descriptions. Few stop to ponder: is the knowledge gained from each advancement in analytics greater than the investment in achieving that advancement? The answer: yes, but only if the knowledge will change the people decision.

People Science's sole purpose is extracting actionable knowledge from people data. When more knowledge adds little value to creating actionable insights, it is unnecessary. If the knowledge from descriptive reporting leads to the same decision as a complex regression, the latter adds no value. If a manager guide on the signs of flight risk is as effective as a complex predictive flight-risk model, the predictive model adds no value. How do you know where actionable knowledge turns into just knowledge? How can you get 80% of people data insights with only 20% of the effort, and how can you do this economically? If your organization has yet to invest in people science, you may need to start by evaluating the economic investment in analytical skills and technology and determining the optimal level of investment.
Size the People Opportunity
Do you need a dedicated people scientist on your team? A lack of analytical skills in HR is often one of the biggest hurdles to starting the people science journey. HR is often considered a cost center by the business, and securing investment buy-in is a challenge. To determine the optimal level of investment in people science, start by quantifying the size of the opportunity. How much are people issues (attraction, performance, and retention) costing your organization? Calculating the cost of attrition, both voluntary and involuntary, is the easiest way to quantify the observable performance costs. The cost of attrition varies considerably by position and company; an aggregated US study estimated the cost of attrition at 21.4% of an employee's annual salary. The average annual salary in the UK is almost £28,000, 21% of which is about £5,900. In a company with 1,000 employees and an attrition rate of 15%, the cost of attrition could be over £880,000 per year! For this calculation, exclude any involuntary attrition related to redundancies or layoffs. Quantifying the cost of mediocre performance, due to skill or will issues, is more challenging. Research by the Hay Group estimated that improving employee productivity in the UK service sector would add £340 billion a year in output. A study by Glassdoor compared the stock performance of companies with happy employees to the overall market and showed a strong relationship between happy employees and stock performance. Given that only 13% of employees worldwide are estimated to be engaged, it is highly likely your organization has an opportunity to improve productivity via engagement. Once you have estimated the size of the opportunity for people science, you can determine how much investment is needed.
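The arithmetic above can be captured in a minimal sketch, using the figures from this article (21% of salary per leaver, £28,000 average salary, 1,000 employees, 15% attrition). All inputs are illustrative assumptions, and the function name is invented for this example.

```python
def annual_attrition_cost(headcount, attrition_rate, avg_salary,
                          cost_per_leaver_pct=0.21):
    """Rough yearly cost of attrition, in the salary's currency."""
    leavers = headcount * attrition_rate          # expected leavers per year
    return leavers * avg_salary * cost_per_leaver_pct

# Illustrative inputs drawn from the article's UK example.
cost = annual_attrition_cost(headcount=1000, attrition_rate=0.15,
                             avg_salary=28_000)
print(f"Estimated annual attrition cost: £{cost:,.0f}")
```

Swapping in your own headcount, attrition rate, and salary data gives a first-order estimate to anchor the investment conversation.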
Optimize Your Analytics
- Prioritize the Pain
People issues and challenges should be prioritized by the size of their impact on the business. The issues with the largest impact need to be addressed first. Big issues are those impacting performance today or posing large risks to future performance, business strategy, or financial goals. For example, a shortage of critical talent with direct revenue impact today is a big issue that needs to be prioritized. Risks to future performance could include a large employee segment reaching retirement age without backfill and knowledge transfer plans. In addition, any people issues with potential legal or regulatory risks are big issues. Additional issues should be prioritized based on the people strategy, with a focus on enabling short- and long-term business goals. If capability gaps exist between your people's skills today and what the business needs for the future, these should be prioritized as well.
- Get Lean
In product development, a minimum viable product (MVP) is one that collects the maximum amount of validated learning about customers with the least effort. In People Science, the minimum viable analysis (MVA) is the analysis method and model that maximizes actionable knowledge with the least effort and cost. The primary determinant of actionable knowledge is the ability to better inform, make, or change a decision. Consider, for example, an analysis to determine the characteristics that predict performance, used to build a pre-hire assessment. New hire decisions will be impacted by the data from the pre-hire assessment, with the degree of impact determined by how and when the assessment is used. If the original analysis misidentified performance predictors, poor hiring decisions could result. When determining the level of investment in People Science, any project that may impact hiring, performance, or promotion decisions warrants sufficient investment to obtain the highest levels of statistical rigor.
However, many people problems can be solved with simpler analyses or models. For example, analysis of engagement data may show a high correlation between work-life balance and engagement. If work-life balance was already a concern in the organization based on exit interviews, and several initiatives to improve work-life balance are already planned, a more complex analysis of engagement drivers designed to test causality or confounding factors will not change the organization's decision to improve work-life balance. In this example, the correlation analysis is sufficient to confirm the decision to improve work-life balance; there is no need for, or financial return from, further analysis. In general, whenever dependent variables are highly correlated with overlapping root causes, and solution sets address multiple root causes, simpler analyses will likely maximize actionable knowledge.
Statistical models include a series of test statistics, e.g. confidence intervals, p-values, and R², that evaluate how well the model is performing and test the underlying model assumptions. Calculating and interpreting test statistics is arduous, and they are often neglected in business research and data science. However, they are very important concepts, and understanding them provides more flexibility in how people data are leveraged. Most statistical methods are designed to test hypotheses about a large population based on the analysis of a small sample of data from that population, e.g. opinion polls. For example, using a t-test to compare the averages of two groups includes calculating a confidence interval and significance. These values quantify the degree of certainty that insights about the sample are true for the population.
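A minimal sketch of that two-group comparison, using only the Python standard library: the team scores are invented, the t statistic is Welch's (unequal variances), and the 95% interval uses the normal-approximation critical value 1.96 rather than an exact t-distribution quantile, which would be slightly wider for samples this small.

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical engagement scores for two teams (invented data).
team_a = [3.1, 3.4, 2.9, 3.8, 3.5, 3.2]
team_b = [2.4, 2.8, 2.6, 3.0, 2.5, 2.7]

na, nb = len(team_a), len(team_b)
se = sqrt(stdev(team_a) ** 2 / na + stdev(team_b) ** 2 / nb)  # standard error
diff = mean(team_a) - mean(team_b)                            # mean difference
t = diff / se                                                 # Welch's t statistic

# Approximate 95% confidence interval for the difference in means.
ci = (diff - 1.96 * se, diff + 1.96 * se)
print(f"t = {t:.2f}, 95% CI for difference = ({ci[0]:.2f}, {ci[1]:.2f})")
```

Because the interval excludes zero, the sample difference is unlikely to be a fluke of sampling; that is exactly the "degree of certainty" the test statistics quantify.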
In general, achieving a higher level of certainty means investing more in the analysis, i.e. more complex analysis and/or more data. Understanding the concepts of sample size and population size, and how they are used in statistical models, is essential in People Science. For many people issues, the relevant population is the same as the sample, i.e. the entire company. In these instances, the analysis of the sample is no longer an estimate of the population; they are one and the same. For example, if a gender pay gap of 22% is calculated, and all pay data was included, then that number is a fact, not an estimate. The value depends on how the metric was defined, but it is still a fact for the given definition. Test statistics are irrelevant whenever the population and sample are the same. With a strong data foundation, the people scientist can leverage the entire population for many analytical problems and move from estimates to facts. However, if the external labor market is relevant to the hypotheses being tested, the population is no longer constrained to the company. Thus, any analysis of attraction or retention will be subject to sample size considerations.
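The estimate-versus-fact distinction can be made concrete with a small sketch. The salaries below are invented, and the gap definition is assumed here to be "difference in mean pay as a share of mean male pay" (definitions vary, and as noted above the number is only a fact for the definition chosen).

```python
from statistics import mean

# Hypothetical payroll covering EVERY employee, so the result is a
# population fact, not a sample estimate: no p-value or CI applies.
salaries = {
    "male":   [30_000, 42_000, 55_000, 38_000, 61_000],
    "female": [28_000, 35_000, 48_000, 33_000, 52_000],
}

gap = 1 - mean(salaries["female"]) / mean(salaries["male"])
print(f"Gender pay gap: {gap:.1%}")
```

Contrast this with an attraction analysis, where candidates in the external labor market are a population you can only ever sample.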
Data science applied to people data quickly becomes overwhelming without rationalized investments. The combinations of hypotheses and statistical models grow exponentially. Prioritization and simplification are necessary to combat choice and complexity. People Science is focused on actionable knowledge; everything else is just noise.
A Very Technical Note: Frequentist vs. Bayesian
Confidence intervals, p-values, and R² are all frequentist statistics, the predominant statistical methods used by social scientists. An alternative school of statistics, Bayesian, has similar concepts, e.g. credible intervals. Bayesian statistics is uniquely well suited to big data analysis, and social scientists are exploring methods such as calibrated Bayes for big data research. A future post will explore how People Science can leverage both frequentist and Bayesian statistics.
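As a taste of the Bayesian counterpart, here is a toy sketch of a credible interval for the share of engaged employees, assuming a hypothetical pulse survey of 100 responses with 13 "engaged" and a uniform Beta(1, 1) prior. The Beta posterior is sampled by Monte Carlo and the 95% interval read off the sample percentiles; all numbers are invented for illustration.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical survey: 13 of 100 respondents report being engaged.
engaged, n = 13, 100

# With a Beta(1, 1) prior, the posterior is Beta(1 + engaged, 1 + n - engaged).
posterior = sorted(random.betavariate(1 + engaged, 1 + n - engaged)
                   for _ in range(100_000))

# 95% credible interval from the 2.5th and 97.5th percentiles of the draws.
lo, hi = posterior[2_500], posterior[97_500]
print(f"95% credible interval for engaged share: ({lo:.3f}, {hi:.3f})")
```

Unlike a confidence interval, this interval has the direct reading "there is a 95% probability the true engaged share lies in this range, given the data and prior", which is often the statement decision-makers actually want.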