Can evaluators be the bridge in the research-practice gap?

Researchers and practitioners agree that there is a gap between research (or theory) and practice. While the reasons for this gap are plentiful, they boil down to researchers and practitioners comprising two communities (Caplan, 1979) that have different languages, values, reward systems, and priorities. The two communities try to bridge the gap through a variety of methods, including producer-push models (e.g., knowledge transfer, knowledge translation, dissemination, applied research, interdisciplinary scholarship), user-pull models (e.g., evidence-based practice, practitioner inquiry, action research), and exchange models (e.g., research-practice partnerships and collaboratives, knowledge brokers, intermediaries). However, these methods typically focus on researchers or practitioners and do not consider other scholars who could fill this role.

Evaluators are in a prime position to bridge the gap between researchers and practitioners. I have been working with Dr. Tiffany Berry at Claremont Graduate University, whose unique position as both an evaluator and a researcher led her to think of evaluators as a potential bridge between researchers and practitioners. Evaluation has been considered a transdiscipline in that it is an essential tool in all other academic disciplines (Scriven, 2008). Evaluators use social science (and other) research methodology and often have a specific area of content expertise, enabling them to bridge the gap to researchers. Furthermore, producing a useful evaluation often requires a close relationship with practitioners so that the evaluation communicates in their language, speaks to their values and priorities, and meets their needs, enabling evaluators to also bridge the gap to practitioners. Evaluators can use these similarities with both groups to span the research-practice gap as knowledge brokers or intermediaries (see figure).

However, while evaluators may bridge the gap to researchers and to practitioners individually, they may not be working to bridge the gap between researchers and practitioners. In a field that still debates the paradigm wars (e.g., the "gold standard" evaluation design, qualitative versus quantitative data), the role of evaluators (e.g., as advocates for programs), core competencies for evaluators, and the professionalization of the evaluation field, it is unclear to what extent evaluators see their role as encompassing the research-practice gap and, if so, to what extent they are actually working to bridge this gap and how they are doing so.

Stay tuned as I continue blogging about the review paper for my dissertation (i.e., the first chapter of my dissertation). I would sincerely appreciate any and all comments and criticism you may have. They will only strengthen my research and hopefully aid in my ultimate goal of informing the field of evaluation and improving evaluation practice.
