
Develop & Collect

To evaluate your intervention, you will need to gather evidence about it. Data can be classified according to its source and its type. Evidence can come from a range of sources, and using a mix of different sources can strengthen your case.

Qualitative v Quantitative infographic


Qualitative research

Qualitative methods collect non-numerical data, grouped into categories (themes). They can help you find out the 'what', 'when', 'where', 'how' and 'why' behind the numbers, and can provide a greater understanding of findings from quantitative research methods. They generate rich, in-depth data from a relatively small number of people.

Quantitative research

Quantitative methods collect numerical data that can be used for statistical analysis. Quantitative research instruments (e.g. surveys) ask the same questions in a specific order so that information is collected in a uniform manner. Although quantitative research generally involves larger sample sizes than qualitative research, it still tends to include a relatively small number of people. It is therefore important to ensure that the sample is large enough, and representative of the target group, to produce meaningful results.
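As an illustration of the kind of sample size check involved (a minimal sketch, not part of this guidance; the 95 per cent confidence level, 5 per cent margin of error and population size are assumptions you would set yourself), Cochran's standard formula can be used to estimate roughly how many survey responses you need:

    import math

    def sample_size_for_proportion(margin_of_error=0.05, z=1.96, p=0.5):
        # Cochran's formula for estimating a proportion:
        # z = 1.96 for 95% confidence; p = 0.5 is the most conservative assumption.
        return math.ceil((z ** 2) * p * (1 - p) / (margin_of_error ** 2))

    def adjust_for_small_population(n, population_size):
        # Finite population correction for small target groups.
        return math.ceil(n / (1 + (n - 1) / population_size))

    n = sample_size_for_proportion()               # about 385 responses
    print(n, adjust_for_small_population(n, 600))  # fewer are needed if the target group is only 600 people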

Data Collection Methods

Methods decision tree

Ideally, you should collect both qualitative and quantitative data in your evaluation. However, if resources do not allow for this, you will need to focus more on one kind of data.
 
To decide which kind of data to collect, there are a couple of factors to consider:

  • Think about the kind of evaluation you are carrying out: are you trying to improve your intervention, or prove that it is effective?

  • Time: qualitative methods are more time-consuming for the respondent, and the data they produce is also more time-consuming for you to analyse.

The methods decision tree will help lead you to some of the key methods that you might want to use.
 

What do you want to know infographic


The decision tree above does not outline all the possible methods for collecting data. A summary of key methods can be found on the BetterEvaluation website.
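As a rough illustration of how the two factors above might narrow down your choice (a sketch only; the mapping from purpose and available time to suggested methods is an assumption made for illustration, not the content of the decision tree itself):

    def suggest_methods(purpose, time_available):
        # purpose: "improve" (formative) or "prove" (summative)
        # time_available: "limited" or "plenty"
        # Qualitative methods give depth for improving an intervention;
        # quantitative methods give numbers for proving effectiveness.
        if purpose == "improve":
            if time_available == "limited":
                return ["short open-ended survey questions", "informal feedback"]
            return ["interviews", "focus groups", "observation"]
        if time_available == "limited":
            return ["short structured survey", "existing/administrative data"]
        return ["pre-post questionnaire", "survey with follow-up interviews"]

    print(suggest_methods("improve", "plenty"))  # ['interviews', 'focus groups', 'observation']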

Key Rules

  • Keep surveys, interviews etc. concise

    • Think about the questions you're asking: are they relevant to the information you need for the outcome?

    • What will you learn by including that question in your evaluation? If you do not have a purpose for the data a question will generate, do not ask it.

  • Give people the chance to add any other comments they would like to make

  • Think about a mix of methods:

    • People access information in different ways. Think about the different ways you can collect information about your intervention

  • Think about sample sizes: whatever way you collect data, remember it needs to be analysed and this takes time.

Further reading

Interviews and Focus Groups

 
  • Conducting interviews for focus groups
  • Preparation for interview or focus group
  • Designing and conducting focus group interviews
  • Summary of observations
  • Observation: a guide for use in evaluation


Questionnaires and Surveys

  
  • Writing questionnaires


Evaluation Design

Evaluation design means the type of evaluation that you choose to do, and the timing of data collection, rather than data collection methods such as surveys or interviews.
 
Process data looks at the delivery mechanisms of the intervention: did everything happen as it should have done? Process evaluation focuses on the input and output stages; it does not measure outcomes. For example: did all the resources delivered to third-party agents get handed out as you had envisioned?
 
Outcome data looks at the post-intervention stage: the effects of your outputs. Outcomes can be measured immediately after an intervention has been delivered (short-term outcomes, such as changes in knowledge and understanding) or may require long-term follow-up (such as behaviour change, measured for example by mobile phone use while driving or by the number of traffic violations).
 
Ideally, you should collect both process and outcome data in any evaluation you conduct.
 
Common ways to measure outcomes
Outcome evaluation designs are the different ways in which you can collect data to measure whether the intervention has been effective in improving knowledge, attitudes and/or behaviours. These designs are used when you want to put a number on how effective an intervention was.
 
Outcome evaluation designs fall into three main categories:

  • Experimental (most robust)

  • Quasi-experimental

  • Non-experimental (least robust)

Design type: Experimental
Compares an intervention group with a non-intervention group, using control groups that are randomly assigned.
Example: Randomised controlled trial (RCT); a pre-post design with a randomised control group is one example of an RCT.
Strengths:
  • Can infer causality with the highest degree of confidence
Challenges:
  • Most complex and expensive to set up
  • Ethical issues
  • Sometimes challenging to generalise to the “real world”

Design type: Quasi-experimental
Compares an intervention group with a non-intervention group, using comparison groups that are not randomly assigned.
Example: Pre-post design with a non-randomised comparison group.
Strengths:
  • Can be used when you are unable to randomise a control group but still wish to compare across groups and/or across time
  • Easier to carry out than an RCT
Challenges:
  • Group selection is critical; you need to ensure that the comparison group does not receive the intervention
  • Moderate confidence in inferring causality

Design type: Non-experimental
Considers only the group that has received the intervention; does not use comparison or control groups.
Examples: Case control (post-intervention only), which retrospectively compares data between intervention and non-intervention groups; pre-post with no control, where data from one group are compared before and after the intervention.
Strengths:
  • Simple design, used when baseline data and/or comparison groups are not available, and for descriptive studies
  • Less resource intensive than other designs
Challenges:
  • Minimal ability to infer causality, i.e. you cannot be sure that the intervention caused the observed effect


The category you choose will depend on a number of factors. The evaluator should conduct the most robust (or strongest) type of evaluation they can within the resources that are available.
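To illustrate how data from a pre-post design might be summarised, here is a minimal sketch (the scores are invented purely for illustration): with no control group you can only report the change in the intervention group, while a quasi-experimental comparison group lets you subtract out the change that would have happened anyway.

    def mean(scores):
        return sum(scores) / len(scores)

    # Invented knowledge-test scores, before and after an intervention.
    intervention_pre, intervention_post = [52, 48, 61, 55], [70, 66, 74, 69]
    comparison_pre, comparison_post = [50, 53, 58, 49], [54, 55, 60, 52]

    # Pre-post with no control: change in the intervention group only.
    simple_change = mean(intervention_post) - mean(intervention_pre)

    # Pre-post with a non-randomised comparison group: subtract the comparison
    # group's change to estimate the effect of the intervention itself
    # (a difference-in-differences style calculation).
    adjusted_effect = simple_change - (mean(comparison_post) - mean(comparison_pre))

    print(simple_change, adjusted_effect)  # 15.75 and 13.0 in this example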

Further reading

Types of Evaluation design


Evaluation framework/Outcome management plan

How you will collect and organise your data is important. An evaluation framework or outcome management plan can help you set out how, from whom and when you will collect data for the different indicators in your intervention.
 
For each outcome, the evaluation framework or outcome management plan should include:

  • Specific measurable indicators

  • A clear definition of what will be measured for each indicator

  • A target for each indicator

  • Data sources

  • Methods – how you plan to collect the information

 
Like your logic model, your evaluation plan template is a living document. It is a tool for planning, but should be regularly revised in response to changes in your goals, activities or organisation’s capacity, or to information gained from the data you are collecting. Below is an example of how your framework could look.

Desired Outcomes: the outcomes from your logic model can be repeated here.

Indicator (Step 1): identify one indicator for the outcome.

Who/What (Step 2): identify the group/participants or what will be measured.

Methods (Step 3): the basic method and plans for data collection.

Target (Step 4): objective targets you can measure, likely to include phrases such as percentage changed, numbers reached, amount of change, etc.
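If it helps to hold the plan in a structured form alongside a table, the same fields can be captured in a simple record like the sketch below (the outcome, indicator, method and target shown are hypothetical examples, not taken from this guidance):

    from dataclasses import dataclass

    @dataclass
    class IndicatorPlan:
        # One row of an evaluation framework / outcome management plan.
        outcome: str      # outcome repeated from the logic model
        indicator: str    # specific, measurable indicator
        who_or_what: str  # group, participants or thing being measured
        method: str       # how the data will be collected
        target: str       # objective target (percentage changed, numbers reached, etc.)

    plan = [
        IndicatorPlan(
            outcome="Participants understand the risks of using a mobile phone while driving",
            indicator="Change in knowledge-test score",
            who_or_what="Course participants",
            method="Pre- and post-course questionnaire",
            target="80% of participants improve their score",
        ),
    ]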


What next?


Define

Go back to the previous step of the evaluation process.

Understand

Find out about the next step in the evaluation process.

