Quantitative Evaluation (and a little shameless self-promotion) / Évaluation quantitative (et un peu d’autopromotion éhontée)

By David Phipps, RIR-York

Amanda Cooper (@ACooperKMb) recently released her evaluation of 44 Canadian Research Brokering Organizations. She presents a quantitative method for evaluating the effort of a system of knowledge mobilization.

Amanda Cooper (@ACooperKMb) a récemment dévoilé son évaluation de 44 organisations canadiennes de courtage de recherche. Elle présente une méthode d’évaluation quantitative visant à mesurer les efforts d’un système de mobilisation des connaissances.

Knowledge mobilization struggles with evaluation. Evaluating an individual instance of knowledge mobilization is feasible with the right baseline and pre/post-intervention metrics. But rolling that up to evaluate a system of knowledge mobilization (like any one of the knowledge mobilization units in the ResearchImpact-RéseauImpactRecherche network) has so far proven challenging.

So thank you, Amanda Cooper (Assistant Professor in the Faculty of Education at Queen’s University, Kingston, Ontario). Amanda recently posted a report titled “Knowledge mobilization in education: A cross-case analysis of 44 research brokering organizations across Canada”. Amanda developed a quantitative methodology to evaluate the efforts of Canadian research brokering organizations (RBOs). The methodology is grounded in the evidence on research utilization. We know that people-centred methods encourage greater research use than do those based solely on making packaged knowledge accessible to decision makers. In the words of Sandra Nutley and her colleagues in Using Evidence, “[p]ersonal contact is crucial … studies suggest that it is face-to-face interactions that are most likely to encourage policy and practice uses of research” (page 74). In Amanda’s methodology, points are assigned depending on how the RBO employs products (12 points), events (20 points) and networks (20 points) as well as overall features (20 points). You can see that more points are assigned to people-centred methods (events and networks) than to purely product-based methods. How points are assigned is detailed in Appendix B of her report.
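As a rough sketch (not Cooper’s actual instrument — only the category maxima come from the report; the subscores and the normalization to a percentage are my own assumptions), the matrix-style scoring could work like this:

```python
# Category maxima as quoted from the report: products (12), events (20),
# networks (20), overall features (20). Everything else here is hypothetical.
CATEGORY_MAX = {
    "products": 12,
    "events": 20,
    "networks": 20,
    "overall_features": 20,
}

def score_rbo(subscores):
    """Sum an RBO's category subscores, capping each at its category maximum,
    and express the total as a percentage of the points available."""
    available = sum(CATEGORY_MAX.values())
    total = sum(
        min(subscores.get(category, 0), cap)
        for category, cap in CATEGORY_MAX.items()
    )
    return round(100 * total / available, 1)

# A hypothetical RBO rated 10/12 on products, 18/20 on events,
# 16/20 on networks and 15/20 on overall features.
example = {"products": 10, "events": 18, "networks": 16, "overall_features": 15}
print(score_rbo(example))  # 81.9
```

Note that the quoted category maxima sum to 72 rather than 100, so expressing the total as a percentage is one plausible reading; Appendix B of the report details the actual point assignments.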

Amanda used the RBOs’ websites as the data source and scored each of the 44 RBOs on a scale out of 100. Amanda cites ResearchImpact as one of the RBOs, but the data she used was pulled from York’s Knowledge Mobilization Unit. The Harris Centre, another RIR member, is also included separately as one of the 44 RBOs.

Key Point #1: This quantitative methodology is reliable and reproducible; Cooper reports satisfactory inter-rater reliability testing of the tool and average intra-class correlation coefficients.
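For readers unfamiliar with the statistic behind that claim, here is a minimal sketch of a one-way intra-class correlation coefficient, ICC(1,1), the standard form of the measure named above. The rater scores are invented for illustration and are not data from Cooper’s study:

```python
import numpy as np

def icc_1(ratings):
    """One-way random-effects intra-class correlation, ICC(1,1).

    ratings: an (n_targets, k_raters) array of scores, one row per
    rated target (e.g. an RBO), one column per rater.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand_mean = ratings.mean()
    target_means = ratings.mean(axis=1)
    # Between-target mean square: how much targets differ from each other.
    ms_between = k * np.sum((target_means - grand_mean) ** 2) / (n - 1)
    # Within-target mean square: how much raters disagree on the same target.
    ms_within = np.sum((ratings - target_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Two hypothetical raters scoring five hypothetical RBOs out of 100.
scores = [[81, 79], [78, 80], [76, 75], [69, 70], [65, 66]]
print(round(icc_1(scores), 2))  # 0.97
```

Values near 1 indicate that raters applying the tool independently produce very similar scores, which is what “reliable and reproducible” means in this context.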

Key Point #2: This method evaluates a system of knowledge mobilization not the efficacy of an individual knowledge mobilization intervention.

Key Point #3: This method measures the efforts of Canadian RBOs. It does not measure the impact of the RBOs’ efforts. That more effective RBO efforts will result in greater impact is a testable hypothesis, though it seems reasonable that this would be the case.

Key Point #4 (shameless self-promotion alert): RIR-York achieved the highest score in this study.

Each with a score of 81%, RIR-York tied with the Fraser Institute and the Canadian Education Association (CEA) as the top-performing RBOs. The Fraser Institute achieved this score with a budget of $12.8M; CEA achieved it with a budget of $2M. York’s budget for knowledge mobilization is approximately $250,000. RIR-York accomplished the same effort on a fraction of the budget. The data from the top nine ranked RBOs is presented below.
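The “fraction of the budget” claim can be made concrete with a line of arithmetic. “Cost per matrix point” below is my own illustrative ratio, not a metric from Cooper’s report; the dollar figures are the operating expenditures quoted in this post:

```python
# Operating expenditures of the three top-ranked RBOs, all of which
# scored 81 on the KMb matrix. "Cost per point" is an illustrative
# ratio, not a metric from Cooper's report.
budgets = {
    "RIR-York": 250_000,
    "Fraser Institute": 12_808_690,
    "CEA": 2_044_892,
}
score = 81

for org, budget in budgets.items():
    print(f"{org}: ${budget / score:,.0f} per matrix point")
```

On this back-of-the-envelope measure, RIR-York spends roughly $3,000 per matrix point versus roughly $158,000 for the Fraser Institute, about a fifty-fold difference.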

| Rank | Organization | Type* | Size (FTE) | Operating Expenditures | Score on KMb Matrix (%) |
|------|--------------|-------|------------|------------------------|-------------------------|
| 1 | 1.2.1 RI | NfP, university research centre | Small (3) | $250,000 | 81 |
| 1 | 1.2.4 Fraser | NfP, think tank | Large (60) | $12,808,690 | 81 |
| 1 | 1.4.2 CEA | Memb, network | Small (9) | $2,044,892 | 81 |
| 2 | 1.2.4 AIMS | NfP, think tank | Small (5) | $872,234 | 78 |
| 3 | 1.2.0 CCL | NfP, general | Large (77) | $20,583,490 | 76 |
| 3 | 1.2.3 The Centre | NfP, issue-based | Large (25) | $5,685,000 | 76 |
| 4 | 1.2.0 TLP | NfP, general | Large (74) | $5,293,039 | 75 |
| 4 | 1.2.1 HC | NfP, university research centre | Med (11) | | 75 |
| 5 | 1.2.0 CCBR | NfP, general | Med (12) | | 74 |
| 6 | 1.1.2 E-BEST | Gov, district level | Small (6.5) | | 72 |
| 7 | 1.2.1 CEECD | NfP, university research centre | Small (9) | | 69 |
| 7 | 1.2.2 P4E | NfP, advocacy | Small (9) | $537,806 | 69 |
| 7 | 1.2.3 LEARN | NfP, issue-based | Large (33) | $3,000,000 | 69 |
| 8 | 1.2.1 HELP | NfP, university research centre | Large (50) | $7,200,200 | 67 |
| 9 | 1.1.3 CSC | Gov, standards | Large (20) | $3,849,254 | 65 |

We need more research like this into the processes of knowledge mobilization, engaged scholarship and community-based research. Much of what we know comes from individual studies of individual instances of knowledge mobilization. As these activities become more embedded in institutions and systems, we will increasingly need research on those systems and on how they create infrastructure to support the individual instances. You can read more about other methods for evaluating the impact of research, such as Payback and Productive Interactions, in a 2011 special issue (Volume 20, Number 3) of the journal Research Evaluation.

Thank you to Amanda for your important contributions to this emerging field.
