Can you elaborate on that? Why the qualitative / quantitative distinction is fundamentally important for social science
In a recent blog post, Howard Aldrich argued that social scientists should drop the distinction between quantitative and qualitative research. I want to push back here and argue that there are important differences between the two methods which must be recognized to ensure high quality research. To be sure, the starting point of the discussion should be recognition of the underlying unity of research methodology, about which Charles Ragin has written eloquently. Quantitative and qualitative methods are both tools for advancing theory and knowledge. But these methods advance theory in distinct, complementary ways. To realize the full potential of research methodology requires recognizing these differences.
It is important to remember the long and distinguished tradition of the interpretivist philosophy of science, which undergirds qualitative methods, against the positivist basis of quantitative methods. Interpretivism – beginning with Max Weber’s 1904 treatise on objectivity in social science – holds that to understand human society means going beyond – but not rejecting – positivist methods of statistical hypothesis testing in order to generate data capable of providing meaning. How do humans understand their social world and particular social contexts? This tradition has been elaborated and refined by methodological luminaries from Clifford Geertz to Howard Becker and Michael Burawoy.
To be clear: I have zero interest in paradigm battles between positivism and interpretivism. Both are necessary. Positivism refers to a broad set of positions holding that all knowledge must come from empirical or “positive” data based on sense experience (against metaphysical speculation). The various versions of positivism also maintain that scientists can understand the world as an external object of analysis; for social scientists this means maintaining a certain distance from the people and relations under study, primarily through the use of surveys.
By contrast, interpretivists hold that the social world is distinct from the natural world. The goal of interpretive social science, according to Geertz, is to produce “thick description,” which goes beyond surface appearances to penetrate the “webs of significance” within which humans construct and understand the social world. Continuing with an example from Geertz, this means not mistaking a wink (a coded behavior aimed at another person) for a twitch of the eye. To ensure such mistakes are not made requires prolonged engagement with respondents. In contrast to the positivist attempt to maintain as much distance as possible, Burawoy has argued that prolonged engagement with research subjects helps unpack situational experiences to reveal social process. Disturbing the setting helps reveal social order.
Now, it is not the case that there is a one-to-one correspondence between positivism-statistics and interpretivism-qualitative methods. Many (most?) contemporary researchers – both quantitative and qualitative – have appreciated the positivist-interpretivist debate and moved forward by appropriating insights from both, rather than slavishly holding to one or the other. It should also be noted that surveys which gather objective data (e.g. labor market data from the US Bureau of Labor Statistics or industry data from the Bureau of Economic Analysis) and statistics gathered from sources such as archives do not have the same interpretive problems as surveys which gather subjective data.
With that said, it is very difficult to extract meaning in the qualitative sense from statistical survey data. As Becker argued, it is difficult for statistics to produce thick description precisely because they operate at a distance from their objects of analysis, using remote indicators. This is by positivist design: in order to produce objective data, researchers should try not to disturb the social world they study. Surveys are thus based on standardized, primarily closed-ended questions, designed so that every respondent should be able to understand them in the absence of any contact with the researcher. But what if there are differences in how diverse respondents understand the question? What if the question does not make sense to some respondents? What if the set of predefined answers does not include the answers most appropriate for a given respondent? What if these questions mean little to the respondent?
These are serious problems! By no means do they imply that surveys are useless – they are of fundamental importance for social science – but such questions do show that surveys are necessarily incomplete, and that survey data are indeed distinct from other types of data.
Aldrich suggests that “ethnographic fieldwork, archival data collection, long unstructured interviews, simple observational studies” etc. are a “heterogeneous set” that have in common only the fact that they are “not using the latest high-powered statistical techniques to analyze data that’s been arranged in the form of counts of something or other. … Beyond that, however, commonalities are few.” With the exception of archival techniques, I think Howard is fundamentally incorrect on this.
Qualitative scholars have long made a compelling argument that ethnography, in-depth interviews, unstructured interviews and non-participant observation have in common prolonged engagement in the field. In Becker’s analysis, this means that such methods are able to produce data having a number of critical characteristics that survey data do not have: accuracy (the ability to produce close, detailed observations), precision (the ability to produce new information on issues not anticipated in the original formation of the research question), and breadth (the ability to produce knowledge on a wide range of matters bearing on the research question).
If Weber, Geertz, Becker and Burawoy are correct, then it is eminently sensible – indeed, necessary – to label these methods qualitative, because they produce a different type of data from surveys. Aldrich effectively admits this when he argues that qualitative data can generate counts, but then adds the proviso that “the meaning of what has been observed derives not from ‘counting’ something but rather from understanding how to interpret what was observed,” which “depends upon a researcher’s understanding of the social context for what was observed.”
Data based on survey research are quantitative, and they are fundamentally important but incomplete. Data based on methods of prolonged engagement with respondents are qualitative, also important but incomplete. They are united in their goal of advancing knowledge and theory, but in order to use them correctly, it is important to understand their differences.
I think that the interpretive vs positivist lens leaves out the core sociological insight: context matters. Social networks, communities, organizations – these meso-level places structure interaction and meaning. Interpretive case studies that are not situated in context produce thin interpretation. Positivist statistical analyses that compare contexts in terms of their distributions (e.g. high vs low poverty) or correlations (e.g. high vs low gender pay gaps or racialized police violence) can be quite thick in their interpretive content. It is from meaning and correlations across social contexts that we develop social science.
On the other hand, Matt Vidal is perhaps a bit too generous to positivism, confusing method with epistemology. The old positivism was quantitative and often survey-based, and thin as a result, but its biggest sin was the notion that generalization to large aggregates, or even across societies or history, was an achievable and desirable scientific goal. What makes social life social is its creation by human agents in contexts. Since human agents share a basic biology, it is typically contextual variation which reveals the social.
Thanks, Don. I fully endorse your comment that social context matters in a way that is not captured by the positivist/interpretivist frame. I don’t mean to lean too heavily on that distinction. My point was simply to suggest that different types of data have different uses.
At the level of epistemology, I think that positivism is untenable. I buy the critiques of both early Roy Bhaskar (*The Possibility of Naturalism*) and Michael Burawoy (“The Extended Case Method”). I don’t think that critical realism is particularly helpful for conducting social research, but I do buy its critique of positivism: in brief, that the natural and social worlds are ontologically similar (both have ontological depth) but epistemologically different (researchers are part of the social but not the natural world). Of course, the folks at orgtheory.net have told us how “lame” that all is. While I disagree with the implicit anti-philosophical nature of that position, I agree with the upshot, which is that the epistemological debate can go on forever with little real bearing on social research.
In practical terms, surveys and quantitative methods are indispensable. They don’t need to be justified or guided by positivism. I have no problem with generalization from a random survey to a population. I sure learned a lot from the statistical generalizations in *Gender and Racial Inequality at Work*!
I guess most of us here are by and large on the same page. The argument is not to play one off against the other, quantitative research vs qualitative research or positivism vs interpretivism. That was neither Howard Aldrich’s nor your (Matt’s) intention. I am, however, with Howard Aldrich when he highlights that data are quantitative or qualitative, not people. Hence my problem with your response to his piece when you talk about “qualitative scholars.” Yet I suspect that you did not mean to describe scholars as “qualitative” or “quantitative” either, and the wording is simply due to the negligence that happens to all of us when writing blog posts. There are good reasons for field researchers to draw on statistics, or to cooperate and write papers or books with people who primarily use quantitative data and statistics. Matthew Desmond and colleagues’ article “Evicting Children” in Social Forces may be cited as a case in point, although there are numerous others.
When we give up describing scholars as either qualitative or quantitative, then the same holds true for many publications. Papers are often not based only on quantitative or qualitative data; many use both. So the research underlying these publications cannot be described as qualitative or quantitative either.
I have had a quick look at Google’s n-gram viewer for the term “qualitative research,” which was curiously interesting.
According to this tool, “qualitative research” has been rising in usage since the 1980s but was used very little before then. Maybe someone with more expertise in such matters can trace back and explain who introduced the term, and with it the distinction from other kinds of research.
PS: Maybe it has to do with the growth of the textbook market?
Thanks, Dirk. Yes, I agree with you and was just being sloppy when writing “qualitative scholars.” It was precisely my point to suggest that there are different types of data — qualitative and quantitative.
Interesting point about textbooks. The Sage Handbook of Qualitative Research was first published in 1994, which would fit your hypothesis.