Based on the attachment (you may use one additional source), answer the following question: What are the benefits and the traps, respectively, of “setting standards to control quality” in qualitative research? Do you think such attempts are really needed?
informed judgment when it comes to balancing the rigor of the research against its potential contribution to policy. This is a matter of judgment, both for researchers and for policy makers.
Systematic reviewing is also very expensive and inefficient in terms of time and material resources, given the little it often delivers in terms of actual “findings.” This is a problem it shares with the conduct of individual RCTs, of course. The results of systematic reviews can take many months to appear, and policy makers are as likely to ask for very rapid reviews of research to be conducted over a few days or weeks, and possibly assembled via an expert seminar, as to commission longer-term systematic reviews (Boaz, Solesbury, & Sullivan, 2004, 2007). However, the more general issue for this chapter is the impact of the “scientific evidence movement” on qualitative research, and the above checklist produced by Attree and Milton (2006) well illustrates the contortions that some qualitative researchers are prepared to go through to maintain the visibility of their work in the context of this movement.
Impact on Qualitative Research: Setting Standards to Control Quality?
Another major response to the evidence movement has been for organizations and associations to start trying to “set standards” in qualitative research, and indeed in educational research more generally, to reassure policy makers about the quality of qualitative research and to reassert the contribution that qualitative research can (and should) make to government-funded programs. However, the field of qualitative research, or qualitative inquiry, is very broad, involving large numbers of researchers working in different countries, working in and across many different disciplines (anthropology, psychology, sociology, etc.), different applied research and policy settings (education, social work, health studies, etc.), and different national environments with their different policy processes and socioeconomic contexts of action. It is not at all self-evident that reaching agreement across such boundaries is desirable, even if it were possible. Different disciplines and contexts of action produce different readings and interpretations of apparently common literatures and similar issues. It is the juxtaposition of these readings, the comparing and contrasting within and across boundaries, that allows us to learn about them and reflect on our own situated understandings of our own contexts. Multiplicity of approach and interpretation, as well as multivocalism of reading and response, is the basis of quality in the qualitative research community and, it might be argued, in the advancement of science more generally. The key issue is to discuss and explore quality across boundaries and thereby continually to develop it, not to fix it as, at best, a good recipe and, at worst, a narrow training manual.
Nevertheless, various attempts at “setting standards” are now being made, often, it seems, with the justification of “doing it to ourselves, before others do it to us” (Cheek, 2007; see also the discussion by Moss et al., 2009). In England, independent academics based at the National Centre for Social Research (a not-for-profit consultancy organization) were commissioned by the Strategy Unit of the U.K. government Cabinet Office to produce a report on “Quality in Qualitative Evaluation: A Framework for Assessing Research Evidence” (Cabinet Office, 2003a). The rationale seems to have been that U.K. government departments were commissioning policy evaluations in the context of the move toward evidence-informed policy and practice and that guidelines for judging the quality of qualitative approaches and methods were therefore considered necessary. The report was produced under a different government and before the latest renewed focus on experimental design in the United Kingdom (Goldacre, 2013, discussed above). Nevertheless, it provides an interesting insight into what constitutes officially sanctioned qualitative research.
The framework is a guide for the commissioners of research when drawing up tender documents and reading reports, but it is also meant to influence the conduct and management of research and the training of social researchers (Cabinet Office, 2003a, p. 6). However, the summary “Quality Framework” begs many questions, while the full report reads like an introductory text on qualitative research methods. Paradigms are described and issues rehearsed, but all are resolved in a bloodless, technical, and strangely old-fashioned counsel of perfection. The reality of doing qualitative research, and indeed of conducting evaluation, with all the contingencies, political pressures, and decisions that have to be made, is completely absent. The implication is that one would have to comply with everything in the framework in order for one’s work to be regarded as high quality. The issues that are highlighted are indeed important for social researchers to take into account in the design, conduct, and reporting of research studies. However, simply listed as issues to be addressed, they comprise a banal and inoperable set of standards that beg all the important questions of conducting and writing up qualitative fieldwork. Everything cannot be done; choices have to be made: How are they to be made, and how are they to be justified?
To be more positive for a moment and note the arguments that might be put forward in favor of setting standards, it could be argued that if qualitative social and educational research is going to be commissioned, then a set of
standards that can act as a bulwark against commissioning inadequate or underfunded studies in the first place ought to be welcomed. It might also be argued that this document at least demonstrates that qualitative research was being taken seriously enough within government at that time to warrant a guidebook being produced for civil servants. The framework might be said to confer legitimacy on civil servants who still want to commission qualitative work in the face of the policy move to RCTs, on qualitative social researchers bidding for such work, and indeed on social researchers more generally, who may have to deal with local research ethics committees (RECs; institutional review boards in the United States), which are predisposed toward a more quantitative natural science model of investigation. But should we really welcome such “legitimacy”? The dangers on the other side of the argument, as to whether social scientists need or should accede to criteria of quality endorsed by the state, are legion. In this respect, it is not at all clear that, in principle, state endorsement of qualitative research is any more desirable than state endorsement of RCTs.
Similar guidelines and checklists have appeared in the United States. Ragin, Nagel, and White (2004) report on a “Workshop on Scientific Foundations of Qualitative Research,” conducted under the auspices of the National Science Foundation and with the intention of placing “qualitative and quantitative research on a more equal footing … in funding agencies and graduate training programs” (p. 9). The report argues for the importance of qualitative research and thus advocates funding qualitative research per se, but equally, by articulating its “scientific foundations,” it argues for the commissioning not just of qualitative research but of a particular form of qualitative research. Moreover, when it comes to the basic logic of qualitative work, Ragin et al. (2004) do not get much further than arguing for a supplementary role for qualitative methods:
Causal mechanisms are rarely visible in conventional quantitative research … they must be inferred. Qualitative methods can be helpful in assessing the credibility of these inferred mechanisms. (p. 15)
Ragin et al. (2004) also conclude with another counsel of perfection:
These guidelines amount to a specification of the ideal qualitative research proposal. A strong proposal should include as many of these elements as feasible. (p. 17)
But again, that’s the point: What is feasible (and relevant to the particular investigation) is what is important, not what is ideal. How are such crucial choices to be made? Once again, “guidelines” and “recommendations” end up as no guide at all; rather, they are a hostage to fortune, whereby virtually any qualitative proposal or report could be found wanting.
A potentially much more significant example of this tendency is the American Educational Research Association (AERA) “Standards for Reporting on Empirical Social Science Research in AERA Publications” (AERA, 2006). The Standards comprise eight closely typed double-column pages and include “eight general areas” (p. 33) of advice, each of which is subdivided into a total of 40 subsections, some of which are subdivided still further. Yet only one makes any mention of the fact that research findings should be interesting or novel or significant, and that is the briefest of references under “Problem Formulation,” which we are told should answer the question of “why the results of the investigation would be of interest to the research community” (p. 34). Intriguingly, whether the results might be of interest to the policy community is not mentioned as a criterion of quality.
As is typical of the genre, the standards include an opening disclaimer that
the acceptability of a research report does not rest on evidence of literal satisfaction of every standard…. In a given case there may be a sound professional reason why a particular standard is inapplicable. (p. 33)
But once again, this merely restates the problem rather than resolves it. The standards may be of help in the context of producing a book-length thesis or dissertation, but no 5,000-word journal article could meet them all. Equally, however, even supposing that they could all be met, the article might still not be worth reading. It would be “warranted” and “transparent,” which are the two essential standards highlighted in the preamble (p. 33), but it could still be boring and unimportant.
It is also interesting to note that words such as warrant and transparency raise issues of trust. They imply a concern for the very existence of a substantial data set as well as how it might be used to underpin conclusions drawn. Yet the issue of trust is only mentioned explicitly once, in the section of the standards dealing with “qualitative methods”: “It is the researcher’s responsibility to show the reader that the report can be trusted” (AERA, 2006, p. 38). No such injunction appears in the parallel section on “quantitative methods” (p. 37); in fact, the only four uses of the actual word warrant in the
whole document all occur in the section on “qualitative methods” (p. 38). The implication seems to be that quantitative methods really are trusted—the issue doesn’t have to be raised—whereas qualitative methods are not. Standards of probity are only of concern when qualitative approaches are involved.
A further response to current debate has been the development or, perhaps more accurately, the rediscovery and redevelopment of mixed-methods research. Mixing methods in social research and program evaluation has a long history. The argument has been that no single method could afford a complete purchase on the topic under study (Bryman, 1988; Denzin, 1970). Evaluations have routinely employed a range of methods to investigate the site-based specifics of program interpretation and adoption, alongside more general surveys of implementation and outcomes across sites (Greene, Caracelli, & Graham, 1989). However, over the past 10 years or so, the “field” of “mixed-methods research” (MMR) has increasingly been exerting itself as something separate, novel, and significant, such that proponents such as Tashakkori and Teddlie (2003) claim, “Mixed methods research has evolved to the point where it is a separate methodological orientation with its own worldview, vocabulary, and techniques” (p. x). Johnson, Onwuegbuzie, and Turner (2007) argue that “mixed methods research … is becoming increasingly … recognised as the third major research approach or research paradigm” (p. 112).
More recently, as such views have been challenged, interrogated, and augmented, the arguments have been modified. The claim to a distinct third paradigm is left open, not least because other MMR advocates have criticized the whole notion of paradigms somehow driving and determining research methods and have argued instead for a more grounded and pragmatic approach to understanding what researchers actually do and how different approaches are actually combined in action (Christ, 2009; Greene, 2008; Harrits, 2011; Morgan, 2007; Tashakkori & Teddlie, 2010). In addition to debates about mixed methods per se, it is also the case that mixed-methods research has been alighted upon as a way to engage and modify the debate about RCTs and embed qualitative research in larger-scale mixed-methods studies. Thus, for example, Mason (2006) argues for qualitative methods to “drive” mixed-methods research; Hesse-Biber (2010a, 2010b) and Mertens (2007; Mertens, Bledsoe, Sullivan, & Wilson, 2010) argue for the use of qualitative methods to advance social justice issues in large-scale investigations and to enhance the “credibility” of RCTs (Hesse-Biber, 2012).