BEESPOKE Evaluation toolbox

Last modified: 24 November 2022

We see monitoring and evaluation in Beespoke as the systematic collection, analysis and reporting of data on activities, processes, outputs and outcomes. The reporting involves statements, judgements and conclusions on what has been done, how it has been experienced, and on potential development paths and improvements.

On monitoring and evaluation in Beespoke

Ideally monitoring and evaluation positively affect current and planned activities within Beespoke, but also future decision-making on implementation strategies.

WP6 aims to support the Beespoke project and its partners in implementing tools for monitoring and evaluation. It is a collaborative effort. Together we can track what is actually done and how well we achieve it, demonstrate how our approaches have contributed to targeted gains, and better understand barriers and needs in order to improve future activities.

A multi-methodological approach

Collecting data in order to monitor and evaluate social processes often requires that different tools and methods are used. No single method can fully capture the complexity. Furthermore, we must apply a flexible framework, adapted to the specific needs and preconditions of each unique context.

In our toolbox we have the potential to use many different methods, from surveys and document studies to participant observations, interviews and focus groups. We will be pragmatic in the sense that we suggest methods that are both desirable and feasible. Thus, we will not suggest a "one size fits all" solution. What is important is that we monitor and evaluate continuously, collect data from many activities and actors, and maintain a dialogue within Beespoke on how the data should be interpreted. This will be an ongoing process, although with rather low intensity.

 

Focus on specific practices with universal questions

Within Beespoke we have many different activities to monitor and evaluate: from monitoring of pollinators, use of checklists, farmer workshops and training, and implementation of new farm management on demo farms, to external communication and policy workshops. All involve social interaction and/or learning. The evaluation methods need to suit the data needs (the desirable), but also be practical and fit within the existing resources available (the feasible). Where possible, the methods should be included as a normal part of project activities, not just an add-on.

At the same time, we need to be able to draw some general conclusions on success factors for implementing measures that support pollination in the agricultural landscape. In our WP this challenge will be met by asking similar questions about different activities in order to find common answers.

Some general questions we will raise (examples):

  • What is being done to implement new measures or influence change?
  • With whom and where do these activities take place?
  • What practices are changing and in which fields of work?
  • What impact do the new measures have on performance?
  • What benefits are being achieved by the activities?

 

Evaluation methods

After an activity has been carried out, how do you know whether or not it was successful? Did it succeed in promoting the pollinator-friendly measures that were its message? Have people's attitudes, knowledge or behaviours changed as a result of your communication efforts? Have there been on-the-ground impacts? Once your activities have been planned and set in motion, you need to decide whether they are successful and make changes if they are not. You will need to ask the following questions:

  • Is the activity effective?
  • What are its impacts?
  • Can the activity be improved?
  • Is the activity cost effective?
  • Should the activity be continued or modified?

 

Evaluation is the key to answering these questions and to providing feedback for improving activities. Ideally, evaluation should be conducted from the beginning to the end. Time spent planning the evaluation at the same time as planning the activities is time well spent. Evaluation is a critical component of any successful activity. To start with, information collected during planning can often serve as baseline data for comparison with the evaluation results later on.
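As a purely illustrative sketch, the baseline comparison described above can be as simple as putting before-and-after scores side by side. The question labels and scores below are invented examples, assuming a 1-5 survey scale:

```python
# Hypothetical sketch: comparing baseline data collected during planning
# with post-activity evaluation results. Question labels and scores are
# invented for illustration; a 1-5 rating scale is assumed.

baseline = {"awareness_of_pollinator_measures": 2.1,
            "willingness_to_adopt": 2.8}
follow_up = {"awareness_of_pollinator_measures": 3.6,
             "willingness_to_adopt": 3.1}

for question, before in baseline.items():
    after = follow_up[question]
    change = after - before
    print(f"{question}: {before:.1f} -> {after:.1f} (change {change:+.1f})")
```

Even this simple before/after view makes it visible which objectives the activity moved and which it did not.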

Key reasons for conducting an evaluation:

  • Measure achievement of activity objectives
  • Assess secondary outcomes and unanticipated impacts
  • Identify strengths and weaknesses in the activity
  • Analyse the activity from a cost-benefit perspective
  • Improve activity effectiveness
  • Collect evidence to promote future activities
  • Share experience and lessons learned with similar activities

 

Meetings, workshops and webinars

Reflection and feedback

Reflection and feedback are essential for learning and for creating new ideas that build on what has been experienced and learned. People and groups only learn when they reflect on their behaviour and experiences. Therefore, talking about what has been worked on, i.e. reflecting on and linking the findings in plenary after a working session, is a key success factor for meaningful meetings and conferences and, more generally, for projects.

 

Building on the experiences made within a project group and learning from the cooperation and project work through feedback and reflection makes room for new ideas and possible improvements.

 

Feedback and collective learning are essential elements of collaboration and participation. Therefore, it is important to use different methods to provide transparent feedback for all participants.


To increase reflective thinking

Reflective thinking means looking back at something, for example an experience or event that happened in the project. This experience is then analysed and thought about in depth, and from different possible perspectives, in order to explain it more thoroughly. Afterwards, one considers what the experience or event meant to oneself and what can be learned from it. Reflective thinking is therefore an approach of self-observation that documents what has been going on in one's mind. In turn, this helps to resolve conflicts and find a mutual understanding, as the perspectives of other involved individuals are considered in the process as well. To do this successfully, the more information that is available and made transparent, the better.

Education, courses, train-the-trainer etc.

Field visits, farm walks, farmer fairs

Evaluating the impact of your event

Evaluating your event helps to improve the organisation of future events. It is important to link the evaluation to the objective of the event: if the objective was networking, it is useful to focus on whether participants were able to expand their network; if the objective was innovation adoption, organisers need to monitor the participants' inclination to adopt the demonstrated innovation.

Feedback can be gathered on the set-up (programme, locations, facilities, topic, ...) and organisation of the demo, but also on what participants have learned, and what they believe to be applicable for their farm:

Shorter term: “What do visitors take home?”

  • Know-why (motivation, raised awareness): participants are aware that there are specific problems or challenges and/or that new options are available and may be needed in the future
  • Know-what (the demo topic): participants are informed on specific novelties (new practices, materials, varieties, machinery, etc.)
  • Know-how: participants can connect the new information to their own practice and are able to assess possibilities to implement it on their own farm

Longer term: “What do visitors do with what they have taken home?”

This impact rarely comes from a demo alone, and is less straightforward to evaluate, because of the time lag. It takes time for participants to make actual changes in their farming practice, since it might require financial investments, new skills and knowledge, and a readjustment in the farmer's usual routine and mind-set. The actual decision for change is also not influenced solely by the demonstration event, but includes a wide array of other information sources, such as publications in the (agricultural) press, follow-up demo events, workshops, newsletters, contacts with advisors, other farmers, etc.

You can do the evaluation in different ways, such as:

  • Informal talks with participants during the demo event
  • Facilitated participant feedback during the demo event, using forms or based on discussions
  • Evaluation forms handed out at the event or sent to the participants afterwards. The risk of exit polls (forms completed on the spot) is that they are often completed too quickly when people are in a hurry to talk to other people or to go home.
  • Follow-up emails or telephone calls. If you want to assess impact, you can wait a couple of weeks, or even months, before a follow-up telephone call.
  • Evaluation forms for the demo organisers, to be completed during the demo event

Acting on the evaluation is important for improving future on-farm demonstrations. Once the evaluation data have been gathered, they need to be shared, and improvements for future activities need to be implemented.
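As a purely illustrative sketch of turning gathered forms into numbers that can be shared with the team, the snippet below tallies a few hypothetical evaluation-form responses. The questions, 1-5 rating scale and answers are all invented:

```python
# Hypothetical sketch: summarising a handful of demo-event evaluation
# forms. Question names, the 1-5 rating scale and the answers are
# invented examples, not real BEESPOKE data.
from collections import Counter

responses = [
    {"programme": 4, "location": 5, "learned_something_new": "yes"},
    {"programme": 3, "location": 4, "learned_something_new": "yes"},
    {"programme": 5, "location": 3, "learned_something_new": "no"},
]

avg_programme = sum(r["programme"] for r in responses) / len(responses)
learned = Counter(r["learned_something_new"] for r in responses)

print(f"Average programme rating: {avg_programme:.1f} out of 5")
print(f"Learned something new: {learned['yes']} yes, {learned['no']} no")
```

A short summary like this is easy to circulate among organisers when deciding what to change for the next event.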

TIPS

Use only a few, relevant questions. People are less likely to complete a questionnaire with many questions.

You are more likely to succeed in collecting feedback on the day than afterwards by email.

Source: FarmDemo, https://farmdemo.eu/hub/storage/doc/735_Design_guide_for_on-farm_demonstrations.pdf

Farmer study groups and group facilitation

On-line advisory services and on-line field trips

Social media

Field trials and monitoring in the field

Project management and organisation