Technological Changes and Human Resources Set

coordinated by

Patrick Gilbert

Volume 2

Quantifying Human Resources

Uses and Analyses

Clotilde Coron

Acknowledgments

I would like to warmly thank all the people working at IAE Paris, the administrative staff and the teacher-researchers, for the stimulating working atmosphere and exchanges. In particular, I would like to thank Patrick Gilbert for his trust, support and wise advice.

My gratitude also goes to Pascal Braun for his attentive review and enriching remarks.

Finally, I would like to thank the team at ISTE, without whom this book would not have been possible.

Introduction

This book arises from an initial observation: quantification has gradually invaded all modern Western societies, and organizations and companies are not exempt from this trend. As a result, the human resources (HR) function is increasingly using quantification tools. However, quantification raises specific questions when it concerns human beings. Consequently, HR quantification gives rise to a variety of approaches, in particular: an approach that values the use of quantification as a guarantee of objectivity, of scientific rigor and, ultimately, of the improvement of the HR function; and a more critical approach that highlights the social foundations of the practice of quantification and thus challenges the myth of totally neutral or objective quantification. These two main approaches make it possible to clarify the aim of this book, which seeks to take advantage of their respective contributions to maintain a broad vision of the challenges of HR quantification.

I.1. The omnipresence of quantification in Western societies

In The Measure of Reality, Crosby (1998) describes the turning point in medieval and Renaissance Europe that led to the supremacy of quantitative over qualitative thinking. Crosby gives several examples illustrating how widespread this phenomenon was across fields: the invention and diffusion of the mechanical clock, double-entry accounting and perspective painting. Even music could not escape this movement of “metrologization” (Vatin 2013): it became “measured” and rhythmic, obeying quantified rules. Crosby goes so far as to link the rise of quantification to the supremacy that Europeans enjoyed in the following centuries.

Crosby reminds us that the transition to measurement and quantitative methods was part of a profound change in mentality, and that the deeply rooted habits of a society now dominated by quantification make us partly blind to the implications of this upheaval. He gives several reasons for it. First, he evokes the development of trade and the State, which manifested itself in two emblematic places, the market square and the university, and then the renewal of science. Above all, however, he underlines the importance attached to visualization in the Middle Ages. According to him, the transition from oral to written transmission, whether in literature, music or account books, and the appearance of geometry and perspective in painting, accompanied and catalyzed the transition to quantification, which became necessary for these different activities: measuring tempo and pitch to write music, double-entry accounting to keep account books and the calculation of perspective are all ways of introducing quantification into areas that had not previously relied on it.

Supiot (2015, p. 104, author’s translation) also notes the growing importance of numbers, particularly in the Western world: “It is in the Western world that expectations of them have constantly expanded: initially objects of contemplation, they became a means of knowledge and then of forecasting, before being endowed with a strictly legal force with the contemporary practice of governance by numbers.” Supiot thus insists on the normative use of quantification, particularly in law and in international treaties and conventions. More precisely, he identifies four normative functions conferred on quantification: accounting (account books that link numbers and the law being one illustration), administering (knowing the resources of a population in order to act on them), judging (the judge having to weigh each testimony to determine the probability that the accused is guilty) and legislating (using statistics to inform laws in the field of public health; an 18th-century example is preventive inoculation against smallpox, which reduced the disease overall but could be fatal for some of those inoculated).

I.2. The specific challenges of human resources quantification: quantifying the human being

Ultimately, these authors agree on the central role of quantification in our history and in our societies today. More recently, the rise in the amount of available data has further increased the importance of this role, and has raised new questions, leading to new uses and even new sciences: the use of algorithms in different fields (Cardon 2015; O’Neil 2016), the rise of social physics that uses data on human behavior to model it (Pentland 2014), the study of social networks, etc.

Organizations are no exception to this rule: quantification is a central practice within them. Many areas of the company are affected: finance, audit, marketing, HR, etc. This book focuses on the HR function, which groups together all the activities that enable an organization to have the human resources (staff, skills, etc.) necessary for it to operate properly (Cadin et al. 2012). It thus brings together recruitment, training, mobility, career management, dialog with trade unions, promotion, staff appraisal, etc. In other words, it is a function that manages the “human”, insofar as the majority of its missions relate to human beings (candidates during recruitment, employees, trade unionists, managers, etc.). HR quantification actually covers a variety of practices and situations, which we will elaborate on throughout the book:

– quantification of individuals: measurement of individual performance, individual skills, etc. This practice, whose stakes are specified in Chapters 1 and 2, comes into play in decisions regarding recruitment, salary raises and promotion, for example;
– quantification of work: job classification, workload quantification, etc. This measure does not concern human beings directly, but rather the work they must do. Chapters 1 and 2 examine this practice at length;
– quantification of the activity of the HR function: evaluation of the performance of the HR function, of the effects of HR policies on the organization, etc. This practice, discussed in detail in Chapter 4, becomes all the more important as the HR function is required to prove its legitimacy.

These uses may seem disparate, but it seemed important to us to deal with them jointly, as they overlap on a number of issues. Their usefulness for the HR function and their appropriation by various agents, for instance, constitute cross-cutting challenges. In addition, in all three types of practices, quantification refers to human beings and/or their activities. Yet the possibility of quantifying humans and human activities has given rise to numerous methodological and ethical debates in the literature. Two main positions can be identified. The first, which underpins the psychotechnical approach, seeks to broaden the scope of what is measurable in human beings: skills, behaviors, motivations, etc. The second, stemming from different theoretical frameworks, criticizes the postulates of the psychotechnical approach and considers on the contrary that the human being is never reducible to what can be measured.

The psychotechnical approach was developed at the beginning of the 20th Century. It is based on the idea that people’s skills, behaviors and motivations can be measured objectively. As a result, the majority of psychotechnicians’ research focuses on measuring instruments. They highlight four qualities necessary for a good measuring instrument: standardization, the ability to rank, reliability and validity (Huteau and Lautrey 2006). Standardization refers to the fact that all subjects must take exactly the same test (hence the importance of formalizing the conditions for taking it); similarly, the scoring of the test must leave as little margin as possible to the scorer. The stated objective of this formalization is to make the assessment as objective as possible, preventing the results from being influenced by the test conditions or the assessor’s subjectivity. Next, the test must make it possible to differentiate individuals, in other words to rank them, usually on a scale (e.g. a rating scale). This characteristic implies having items whose difficulty is known in advance and varies: easy items, passed by the vast majority of individuals, differentiate as poorly as difficult items, passed by very few. Psychotechnicians therefore recommend mixing items of varying difficulty in the same test in order to achieve a more differentiated ranking of individuals. Reliability refers to the fact that test results must be stable over time: individual results are influenced by random factors, such as the test-taker’s form on the day, and the objective is to minimize this randomness. Finally, validity refers to the fact that the test must contribute to an accurate diagnosis or prognosis, one that is close to reality. This is called the “predictive value” of the test. This predictive value can be assessed by comparing the results obtained on a test with the actual situation that follows: for example, comparing a ranking of applicants for a position based on a test with the scores later obtained by the successful candidates in individual assessments, so as to infer the match between the test used for recruitment and the candidates’ skills in real situations. Two typical examples of this approach are the measurement of the intelligence quotient (IQ) and the measurement of the g factor (Box I.1).
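
To make the notion of predictive value concrete, here is a minimal sketch, with invented data, of how the predictive validity of a recruitment test might be estimated as a correlation between test scores and later appraisal scores; the figures and variable names are hypothetical and only illustrate the principle.

```python
# Hypothetical sketch: estimating the predictive validity of a recruitment
# test by correlating test scores with later job-appraisal scores for the
# candidates who were hired. Figures are invented; a real study would use a
# far larger sample and correct for range restriction (only hired candidates
# can be observed on the job).

from statistics import correlation  # Python >= 3.10

test_scores = [62, 71, 55, 80, 67, 74, 59, 88]               # at recruitment
appraisal_scores = [3.1, 3.8, 2.9, 4.2, 3.0, 3.9, 3.2, 4.5]  # one year later

validity = correlation(test_scores, appraisal_scores)
print(f"Estimated predictive validity (Pearson r): {validity:.2f}")
```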

The psychotechnical approach is thus explicitly part of a program aimed at measuring the human being and demonstrating the advantages of such measurement. Psychotechnical work emphasizes that measurement allows greater objectivity and better decision-making, resting on three assumptions (McCourt 1999). First, a good evaluation is universal and impersonal. Second, it must follow a specific procedure (the psychotechnical procedure). Third, organizational performance is the sum of individual performances.

The second stance opposes the first by demonstrating its limits. Several arguments are put forward to this effect. The first challenges the notion of objectivity by highlighting the many evaluation biases that affect the psychotechnical approach (Gould 1997). These evaluation biases constitute a form of indirect discrimination: an apparently neutral test in fact disadvantages certain populations (women and ethnic minorities, for example). For instance, intelligence tests conducted in the United States at the beginning of the 20th Century produced higher average scores for white people than for black people (Huteau and Lautrey 2006). These differences were interpreted as hereditary and contributed to racist theories and discourse, whereas in fact they illustrated the importance of environmental factors (such as school attendance) for test success, and thus showed that the test did not measure intelligence independently of any social context, but rather intelligence largely acquired in a social context (Marchal 2015). Moreover, this type of test, like craniometry, is based on the idea that human intelligence can be reduced to a single measurement that allows individuals to be ranked on a one-dimensional scale, which is an unproven assumption (Gould 1997).
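
As a toy illustration of how such indirect discrimination can be screened for, the sketch below compares pass rates between two candidate groups using the “four-fifths” heuristic common in US adverse-impact analysis; the data, group labels and threshold are assumptions for illustration, not drawn from the studies cited above.

```python
# Hypothetical sketch: screening a test for indirect discrimination by
# comparing pass rates across candidate groups (invented data). The US
# "four-fifths rule" heuristic flags possible adverse impact when a group's
# pass rate falls below 80% of the most favored group's pass rate.

pass_rates = {
    "group_A": 45 / 60,  # 45 of 60 candidates pass
    "group_B": 20 / 40,  # 20 of 40 candidates pass
}

best_rate = max(pass_rates.values())
for group, rate in pass_rates.items():
    ratio = rate / best_rate
    verdict = "possible adverse impact" if ratio < 0.8 else "no flag"
    print(f"{group}: pass rate {rate:.0%}, ratio {ratio:.2f} -> {verdict}")
```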

The second argument criticizes the decontextualization of psychotechnical measures, even though many individual behaviors and motivations are closely linked to their context (e.g. the work context). This argument can be found in several theoretical currents. Sociologists, ergonomists and some occupational psychologists argue that the measurement of intelligence is all the more impossible to decontextualize since intelligence is also distributed beyond the limits of the individual: it depends strongly on the people around the individual and the tools they use (Marchal 2015). Moreover, as Marchal (2015) points out, work activities are “situated”, i.e. it is difficult to extract an activity from the context (professional, relational) in which it is embedded. This criticism is all the more valid for tests that aim to measure a form of generic intelligence or performance supposed to guarantee superior performance in specific areas. The g factor theory (Box I.1) is an instructive example of this decontextualized generalization, since it claims to measure a generic ability that would guarantee better performance in specific work activities. In practice, the same person, and therefore the same measured g factor, may prove highly effective or, on the contrary, rather ineffective depending on the work context in which he or she is placed.

The third argument questions the ethical legitimacy of measuring the individual and highlights the possible excesses of this approach. Thus, the racist or sexist abuses to which craniometry or intelligence tests have given rise are pointed out to illustrate the dangers of measuring intelligence (Gould 1997). In the more specific field of evaluation, many studies have highlighted the harms of the quantified, standardized evaluation of individuals. Vidaillet (2013) denounces three of them in particular. The first harm of quantified evaluation is that it contributes to changing people’s behavior, and not always in the desired direction. A well-known example of such a perverse effect is that of teachers who, being scored on the basis of their students’ results on a multiple-choice test, are encouraged either to concentrate all their teaching on the skills needed to succeed on the test, to the detriment of other, often fundamental skills, or to cheat to help their students when taking it (Levitt and Dubner 2005). The second harm is that quantified evaluation may damage the working environment by accentuating individual differences in treatment, thus increasing competition and envy. The third harm is that it substitutes an extrinsic motivation (“I do my job well because I want a positive evaluation”) for an intrinsic one (“I do my job well because I like it and I am interested in it”). Yet extrinsic motivation may reduce the interest of the work for the person and therefore their intrinsic motivation: the two motivations are substitutes rather than complements.

Finally, the fourth argument emphasizes that, unlike objects and things, human beings can react and interact with the quantification applied to them. Hacking (2001, 2005) studies classification processes, and more particularly human classifications, i.e. those that concern human beings: obesity, autism, poverty, etc. He refers to “interactive classification”, in the sense that a human being can be affected and even transformed by being classified in a category, which can sometimes lead to a change of category. Thus, a person who enters the “obese” category after gaining weight may, because of this mere classification, want to lose weight and may therefore leave the category. This is what Hacking (2001, p. 9) calls the “loop effect of human specifications”. He recommends studying together the four elements underlying human classification processes (Hacking 2005): the classification and its criteria, the classified people and behaviors, the institutions that create or use the classifications, and the knowledge about the classes and the classified people (science, popular belief, etc.). The possibility of quantifying human beings in a neutral way therefore comes up against these interaction effects.

In the end, the confrontation between these two stances clearly shows the questions raised by the use of quantification when it comes to humans, notably in HR: is it possible to measure everything when it comes to human beings? At what price? What are the implications, risks and benefits of quantification? Can we do without it?

I.3. HR quantification: effective solution or myth? Two lines of research

In response to these questions on the specificities of human quantification, two theoretical currents can be identified on the use of HR quantification.

One, generally normative, tends to consider quantification as an effective way to improve HR decision-making, whether in recruitment or other areas. This approach thus supports evidence-based management (EBM), in other words management based on evidence that most often consists of figures and measurements. In the EBM approach, quantification therefore serves as proof and can cover a multiplicity of objects: quantifying to better evaluate individuals (in line with the psychotechnical approach), to know them better, or to better understand global HR phenomena (absenteeism, gender equality), all in order to make better decisions. The EBM approach thus considers that quantification improves decision-making, processes and policies, including in HR. Lawler et al. (2010) believe that the use of figures and the EBM approach have become central to making the HR function a strategic function of the company. For example, they identify three types of metrics of interest in an EBM approach: the efficiency of the HR function, its effectiveness, and the impact of HR policies and practices on variables such as organizational performance. More generally, according to the work resulting from this approach, quantification makes it possible to meet several HR challenges. The first challenge is to make the right human resources management decisions: recruitment, promotion and salary increases, for example. The psychotechnical approach already mentioned seems to provide an answer to this first challenge: by measuring individuals’ skills, motivations and abilities objectively, it seems to guarantee greater objectivity and rigor in HR decision-making.
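
By way of illustration, the sketch below computes one invented example of each of the three metric types just mentioned: an efficiency metric (cost per hire), an effectiveness metric (share of vacancies filled on time) and an impact-style metric (a turnover gap between units). These particular indicators and figures are our assumptions, not taken from Lawler et al.

```python
# Hypothetical sketch of the three metric types mentioned above, with
# invented figures: efficiency (cost per hire), effectiveness (share of
# vacancies filled within the target delay) and impact (turnover gap between
# units that did and did not adopt a given HR practice).

recruiting_cost, hires = 180_000, 45          # annual cost and hires
filled_on_time, vacancies = 38, 45            # vacancies filled within target
turnover_with, turnover_without = 0.08, 0.13  # annual turnover rates

print(f"Efficiency: cost per hire = {recruiting_cost / hires:,.0f}")
print(f"Effectiveness: filled on time = {filled_on_time / vacancies:.0%}")
print(f"Impact: turnover gap = {turnover_with - turnover_without:+.0%}")
```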

The second challenge is to define the right HR policies. Rasmussen and Ulrich (2015) thus give an example where an offshore drilling company uses quantification to define a policy linking management quality, operational performance and customer satisfaction (Box I.2). This example therefore illustrates how quantification can help identify problems and links between different factors in order to define more appropriate and effective HR policies.

Finally, the third challenge is to prove the contribution of the HR function to the company’s performance. As Lawler et al. (2010) point out, the HR function suffers from the lack of an analytical model to measure the link between HR practices and policies and organizational performance, unlike the finance and marketing functions, for example. To fill this gap, they suggest collecting data on the implementation of HR practices and policies aimed at improving employee performance, well-being or commitment, but also on organizational performance trends (such as increasing production speed or the more frequent development of innovations).
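
A minimal sketch of the kind of analysis this suggestion calls for might look as follows, relating the implementation level of an HR practice across business units to a performance indicator through a simple least-squares fit; the variables and data are hypothetical, and a real study would require far more than this.

```python
# Hypothetical sketch: relating the implementation level of an HR practice
# (training hours per employee, by business unit) to an organizational
# performance indicator. Data are invented; a real study would need control
# variables, longitudinal data and a credible identification strategy before
# claiming any causal link.

from statistics import linear_regression  # Python >= 3.10

training_hours = [5, 12, 8, 20, 15, 3, 18, 10]             # per business unit
performance_index = [98, 107, 101, 115, 110, 95, 112, 104]

slope, intercept = linear_regression(training_hours, performance_index)
print(f"Each extra training hour is associated with {slope:+.2f} points "
      f"of the performance index (intercept {intercept:.1f}).")
```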

This trend therefore values quantification as a tool for improving the HR function in several ways: more objective decision-making, the definition of more appropriate and effective HR policies, and proof of the link between HR practices and organizational performance, which can encourage the company to allocate more financial resources to HR departments.

The other, more critical trend is part of a sociological approach and takes a more analytical look at the challenges of quantification. Desrosières’ work (1993, 2008a, 2008b) founded the sociology of quantification, which focuses on quantification practices and shows how they are socially constructed (Diaz-Bone 2016). This analytical framework is based, among other things, on the concept of conventions, which are interpretative frameworks produced and used by actors to assess situations and decide how to act (Diaz-Bone and Thévenot 2010). The economics of conventions focuses on coordination that allows institutions and values to emerge, and shows how this coordination is based on conventions, which make it possible to share a framework for interpreting and valuing objects, acts and persons, and thus acting in situations of uncertainty (Eymard-Duvernay 1989). The originality of Desrosières’ work lies in mobilizing this concept of convention to analyze quantification operations, which amounts to studying “quantification conventions” (Desrosières 2008a), namely a set of representations of quantification that will make it possible to coordinate behaviors and representations (Chiapello and Gilbert 2013).

Desrosières thus seeks to deconstruct the assumptions behind the myths surrounding quantification (for example, the myth of statistics as a transparent and neutral reflection of the world, guaranteeing objectivity, rigor and impartiality), in particular by emphasizing the extent to which quantification rests on social constructions rather than on physical or natural quantities. He suggests that statistical indicators should be considered as social conventions rather than as measures in the sense of the natural sciences (e.g. air temperature) (Desrosières 2008a). Gould (1997), without claiming to belong to the sociology of quantification, also provides very illuminating illustrations of how quantification can be influenced by social prejudices, making objectivity impossible. Desrosières (2008a) likewise highlights the extent to which statistics, far from being merely a transparent reflection of the world, create a new way of thinking about it, representing it, measuring it and, ultimately, acting on it. His work also covers the history of statistics and the dissemination of new methods in the field. Desrosières (1993) thus highlights the link between the State and statistics: the latter, historically confined to population counting, was gradually enriched by new methods and theories (probability with the law of large numbers, then econometrics with regression methods, to cite only two examples), which partially loosened its ties with the State and brought it closer to other sciences, such as biology, physics and sociology. In another book, Desrosières (2008b) highlights the developments in modern statistics after the Second World War (the reorganization and unification of official statistics, the will to act on indicators such as the unemployment rate, etc.). These founding works have since been taken up by many authors.

Chiapello and Walter (2016), for example, are interested in the dissemination of the calculation conventions used in finance. They show that, contrary to a rationalist ideology according to which the algorithms mobilized in finance are adopted because they are the most effective and rigorous, this dissemination is sometimes entangled in power games between different functions or professions in the world of finance. Similarly, Juven (2016) shows that the activity-based pricing policy introduced in French hospitals does not always respond solely to the rational logic of improving hospital performance, but results from choices and trial and error that can only be understood by looking at the sociological foundations of the decisions taken (Box I.3). Finally, Espeland and Stevens (1998) focus on the social and sociological processes underlying “commensuration” operations, which make it possible to compare different entities (individuals and positions, for example) according to a common metric.
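
As a toy illustration of commensuration, the following sketch converts two heterogeneous applicant attributes (invented data) into z-scores so that they can be combined on a single scale; the equal weighting is an arbitrary convention, which is precisely the kind of choice this literature draws attention to.

```python
# Hypothetical sketch of "commensuration": expressing two heterogeneous
# applicant attributes (invented data) on a common metric via z-scores, then
# combining them. The equal weighting is a convention, not a natural fact.

from statistics import mean, stdev

experience_years = [2, 5, 10, 3, 7]
interview_grade = [14, 11, 9, 16, 12]  # out of 20

def z_scores(values):
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

composite = [
    0.5 * e + 0.5 * g
    for e, g in zip(z_scores(experience_years), z_scores(interview_grade))
]
for idx, score in enumerate(composite, start=1):
    print(f"candidate {idx}: composite score {score:+.2f}")
```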

In sum, this second trend takes a more critical approach to quantification. While the first trend rests on the idea that quantification can supposedly provide objectivity, transparency, neutrality and rationalization, the second questions this vision and these assumptions, and thus, more generally, the contributions of quantification to management.

I.4. The positioning of this work

Our book seeks to provide a nuanced and didactic perspective on the use of HR quantification. It therefore draws on these two currents in order to reflect both the advantages and the limitations of quantification. More precisely, we examine the use that companies can make of HR quantification, the evolutions that the rise of quantification may represent for HR, and the appropriation of these new devices by the various agents involved. In parallel, this book attends to the different theoretical and disciplinary currents that allow us to better understand the challenges of HR quantification.

To do this, this book mobilizes several types of sources and examples. Some of the information used comes from academic work. Another part is based on empirical surveys carried out within companies. These empirical materials are of several kinds: interviews with HR professionals, employees and trade union representatives; participant observation during a professional experience as a Big Data HR project manager; company documents on the use of HR quantification; and quantitative analyses conducted on personnel data.

Thus, this book aims to provide both theoretical and empirical knowledge on HR quantification. To conclude this introduction, a few semantic clarifications are in order. The concepts of quantification, statistics and measurement are used frequently throughout this book. Quantification corresponds to a very broad set: all the tools and uses producing figures (or quantified data), as well as the figures thus produced. It therefore encompasses the concepts of statistics and measurement. The term “statistics” is employed when referring to the scientific and epistemological dimension of quantification, as Desrosières does, for example. Finally, the term “measurement” is used when discussing the specific activity of quantifying a phenomenon, an object or a reality.

I.5. Structure of the book

The book is divided into five chapters of equal importance.

Chapter 1 seeks to delineate the subject by providing definitions and examples of the three major uses of HR quantification: the statisticalization of individuals and work, reporting and analysis, and Big Data and algorithms. The following chapters take up elements of this introductory chapter, each analyzing them from a different angle; they can therefore be read independently of each other, in whatever order the reader wishes.

Chapter 2 deals with the issue of decision-making. Indeed, as we have seen, the “EBM” approach sees the benefits of quantification as coming mainly from improving decision-making. Therefore, Chapter 2 examines the paradigms and beliefs that drive this link between quantification and decision-making.

Chapter 3 focuses on the appropriation of the different uses of quantification by the multiple actors involved in HR – managers, employees and trade unions, in particular.

Chapter 4 starts from the potential changes introduced by the increasing use of HR quantification and questions the consequences of these changes for the HR function.

Finally, Chapter 5 deals with the ethical issues of quantification, particularly with regard to the protection of personal data and questions of discrimination.