[Ed note: This is the fifth of six articles in a virtual panel on Who should benefit from organizational research?]
We are all indebted to Jerry Davis for his provocative piece. The issues he raises are frequent objects of discussion but have seldom generated such an insightful challenge to scholarship generally, and to the field of organization studies in particular. Yet I must confess some misgivings about this line of analysis. Let me explain why.
Davis’s argument raises two interrelated points. The first concerns the reward structure that governs academic publication. This reward structure, he contends, has unduly fostered the production of novel (“interesting,” or counter-intuitive) findings, even when these run the risk of being misleading, distorted, or untrue. This is especially the case given the use of metrics that judge academic merit by “impact” (a faulty metric if ever there were one). When hiring, promotion, and mobility all depend on an individual’s metrics, the production of knowledge is governed by a system that tends implicitly to pervert the development of knowledge, to lead away from cumulative knowledge, and even to fuel the generation of illusions. His chosen metaphor, the Winchester Mystery House, stands as an object lesson: a concerted, generations-long effort that serves little or no use, save as a program of job creation whose only product is the very embodiment of irrationality itself.
Davis’s second claim is that we now live in an era when the large corporation has undergone a major shift. Put simply, corporations no longer offer organization studies the same primary constituency as in the past, since the management of human beings has now been given over to computer algorithms, which govern fluctuations in corporate workforces without need for managerial intervention. Under such a regime, the study of management must reconsider the audience for whom it speaks.
One can only admire Davis for his effort to provoke a much-needed discussion. What, indeed, is the purpose of management research, and who are to be its primary beneficiaries? The problem I see is that in posing these questions, Davis has embraced answers that may provide little improvement over the status quo.
For one thing, if indeed organizations are increasingly ruled by algorithmic regimes, this only heightens the need for organizational research, rooted in the interests of the public and the workforce writ large. The erosion of managers who oversee actual employees, in other words, should disrupt the profession of management studies only if we define the field’s mission as that of serving managerial goals. This, I contend, should never have been the organizing principle of the field, even when human managers ruled the roost.
To my mind, we should never have accepted organization studies as a sociology for management and the firms they direct; far better to hope for a sociology of management, in whatever guise its regimes seem to take. That, as I see it, is more or less what has transpired in European business schools, warts and all. Perhaps we all ought to be more European in our conception of our proper role, and seek to generate critical yet rigorous knowledge that advances public debate. (I am heartened by Davis’s emphasis on a similar point toward the end of his analysis).
What then of Davis’s argument concerning the perverse incentives that govern publication in our field? Are we not complicit in the construction of an architecture much like that of the Winchester Mystery House? Perhaps. But I wonder whether the growth of our field is as chaotic (or indeed, as autonomous) as Davis insists. Are there not hidden interests (wealthy donors, powerful employers and alumni, and other elite actors) who implicitly shape the coordinates that define acceptable management research? Haven’t universities themselves, the platform for organization research, adopted corporate logics with increasing frequency? To answer yes to these questions is to suggest that the architecture of knowledge concerning organizations is not nearly so arbitrary or Escher-like as his metaphor suggests. Slanted, yes, but not chaotically so.
One last point. Jerry Davis is surely right that “novelty” is a poor basis for the certification of knowledge, and that “impact” has fostered citation-mongering throughout institutions of higher education, both here and in Europe. Yet one can easily err in the opposite direction, casting a dubious eye toward efforts to challenge or subvert the conventional wisdom. At its best, the effort to banish self-serving novelty may work to heighten the production of “normal science,” with sensible but trivial extensions of existing assumptions. At its worst, it may work to stifle critical inquiry, impeding the formulation of deeper, paradigm-shifting innovations. In the current climate governing so much of higher education, I for one fear that (switching metaphors) the antidote may be more dangerous than the disease.