How safe and well is your prevention activity?

Gina Yannitell Reinhardt PhD, Andrea MacAlister and Kieron Moir report on the findings of their FIRE Magazine/Gore Research Excellence Award winning paper, ‘How safe and well is your prevention activity? A Study in Evaluation within a Fire and Rescue Service’.1

1 For fully described results and methods, and to reference this work, please consult: Gina Yannitell Reinhardt & Kakia Chatsiou (2019) Using community education interventions to build resilience and avert crises: how accidental dwelling fires decreased in Essex County, UK, Local Government Studies, 45:3, 394-412, DOI: 10.1080/03003930.2019.1573729.

In late 2016, Andrea MacAlister and Kieron Moir of Essex County Fire and Rescue Service (ECFRS) attended a seminar on public service impact evaluation hosted by Dr Gina Yannitell Reinhardt of the University of Essex Catalyst Project. Three years later, the trio received the FIRE Magazine/Gore Research Excellence Award at the Fire Related Research and Developments Conference (RE19). Find out here how their evaluation of accidental dwelling fires and a fire prevention programme changed the shape of prevention planning for ECFRS.

The Need for Evaluation in Fire and Rescue

Reinhardt’s seminar presented several reasons why fire and rescue services should be interested in evaluating their impact, despite some common concerns. First, evaluations help determine ‘what works’ in service delivery, thereby informing decisions about programme refinements and resource allocation. The most common objection is that evaluations are ‘just another form of programme monitoring’ that leads to programme discontinuation. But evaluators are not seeking to eliminate programmes that ‘do not work’. Rather, they hope to determine which elements of programme delivery work well, and which might need adjusting. One of Reinhardt’s evaluations, for example, investigated the repetition of home safety visits across Essex parishes and found that while one or two visits to a parish might change behaviour significantly, a third or fourth visit seemed to have little additional effect on the wider population. Evaluations like these allow managers to optimise operational strategies and allocate resources efficiently.

Second, evaluations increase transparency. Another objection to evaluations is that managers will not want to share potentially negative results. But rather than creating a record of negative results to be hidden, evaluations conducted and maintained throughout a programme’s life cycle document how and why decisions to improve services are made and implemented. These records create institutional memory for programme development and execution, passing lessons to future managers and employees and helping them avoid repeating mistakes. Evaluations can also explain programme and management decisions to the general public. Perhaps most importantly, evaluations help decision makers reflect on the programmes they manage and the decisions they have made, enabling them to track processes and outcomes, including evidence of improvements. With this increased transparency, evaluations enhance accountability to future members of the organisation, to stakeholders and service users, to the general public, and to oneself.

Another common objection to impact evaluation is that collecting and analysing impact information adds administrative burden for staff and diverts resources from service delivery. Evaluations do take time and effort, but the benefits to staff morale and to frontline interaction with service users offset that cost by increasing programme impact both within and outside the organisation. Principally, this effect arises because evaluations show staff the impact of their work and the reasons decision makers change operations. Staff then see how their own work fits into decision making and planning within their division and across the organisation. Such information increases confidence in one’s work and position in the organisation, and manifests as improved morale. Frontline workers then convey this enhanced wellbeing to users when they deliver services and interact with the public.

During that seminar in 2016, Reinhardt argued compellingly in favour of evaluations as a tool to increase an organisation’s capacity to conduct critical self-assessment, and to plan for the future. MacAlister, aware of her own organisation’s remit to improve and embed evaluation within operations, found herself agreeing with much of Reinhardt’s message. The two met later to discuss the evaluation needs and opportunities within MacAlister’s own division of ECFRS, which at the time was Volunteering, Partnerships and Home Safety.

Accidental Dwelling Fires in Essex County

MacAlister identified two key evaluative needs within her division. The first was an update to a 2011 report on accidental dwelling fires (ADFs) in Essex County; the second, discussed below, was an evaluation of the Parish Safety Volunteers pilot. Reinhardt’s ARISE Team, an evaluation group she assembled to Advance Resilience and Innovation for a Sustainable Environment, was happy to work with ECFRS in pursuit of these evaluation goals.

ECFRS’s data and insights team provided ARISE with incident data on 7,640 ADFs that occurred across Essex from 2009 to 2017. Reinhardt’s team combined this data with Experian Mosaic socio-demographic data and population/household data from the UK Office for National Statistics and the UK Department for Communities and Local Government. They performed proportional hazards analyses (a survival analysis technique) to determine which socio-demographic groups were most at risk of ADFs.
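For readers who want to see the mechanics, here is a minimal sketch of a proportional hazards analysis using Python’s lifelines library. Everything in it is an assumption standing in for the real inputs: the single_household indicator, the effect size, and the observation window are simulated, not drawn from the ECFRS, Mosaic, or ONS data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Simulated stand-in for the incident data: column names, effect size,
# and observation window below are illustrative assumptions only.
rng = np.random.default_rng(0)
n = 2000

# Hypothetical life-stage indicator (1 = single-person household).
single_household = rng.integers(0, 2, n)

# Simulate months until a household's first ADF, with a raised hazard for
# single-person households, then censor at the end of a nine-year window.
hazard = 0.005 * np.exp(0.7 * single_household)
event_time = rng.exponential(1.0 / hazard)
censor_time = 108.0                              # nine years, in months
months_observed = np.minimum(event_time, censor_time)
had_adf = (event_time <= censor_time).astype(int)

df = pd.DataFrame({
    "months_observed": months_observed,
    "had_adf": had_adf,
    "single_household": single_household,
})

# Fit the Cox proportional hazards model; exp(coef) is the hazard ratio,
# so values above 1 flag groups at elevated ADF risk.
cph = CoxPHFitter()
cph.fit(df, duration_col="months_observed", event_col="had_adf")
print(cph.summary[["coef", "exp(coef)", "p"]])   # exp(coef) ≈ e^0.7 ≈ 2.0
```

Ranking groups by their estimated hazard ratios is what allows an analysis of this kind to single out the life-stage groups described in the finding below.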

“ARISE analysed ADF risk by many factors, including family life stage. They learned that the groups most likely to experience ADFs are: young singles/home shares; older singles; and elder singles. It appears that for people of all ages, a second set of eyes in the household can serve as valuable reinforcement of good safety practices.”

Reinhardt’s ARISE team also predicted the benefits of targeting these and other at-risk groups with a community education intervention. The baseline assumption is that an intervention delivered randomly to a portion of the community would reduce ADFs by a corresponding portion: one delivered randomly to ten per cent of the community would reduce ADFs by ten per cent, one delivered randomly to 30 per cent would reduce ADFs by 30 per cent, and so on. Against that baseline, ARISE estimated the additional ADF reduction one could expect from targeting the intervention toward the groups most at risk. Results indicate that an intervention delivered to the most at-risk ten per cent of the community would reduce ADFs by 20 per cent, and one delivered to the most at-risk 30 per cent would reduce ADFs by 50 per cent, substantially more than a randomly targeted intervention.
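The targeting logic can be illustrated with a short gains-curve calculation. The risk scores and fire outcomes below are simulated, and the function name share_averted is ours, so the printed figures approximate rather than reproduce ARISE’s 20 and 50 per cent results; the point is that targeting by predicted risk beats the random baseline.

```python
import numpy as np

def share_averted(share_targeted, risk_scores, fires):
    """Share of fires averted if an intervention reaching the top
    `share_targeted` fraction of households (riskiest first) fully
    prevents their fires; random targeting of the same fraction
    would avert roughly `share_targeted` of fires instead."""
    order = np.argsort(risk_scores)[::-1]        # riskiest first
    k = int(len(risk_scores) * share_targeted)
    return fires[order[:k]].sum() / fires.sum()

# Illustrative numbers only: simulated so that higher predicted risk
# really does correspond to a higher chance of fire.
rng = np.random.default_rng(1)
risk = rng.random(10_000)
fires = rng.binomial(1, 0.02 + 0.08 * risk)

for share in (0.10, 0.30):
    averted = share_averted(share, risk, fires)
    print(f"target top {share:.0%} -> ~{averted:.0%} of fires averted")
```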

This type of evaluation forecasts potential programme impact rather than measuring delivered programme impact. A benefit of predictive evaluation is that it informs programme design before implementation. It also builds institutional memory by recording the assumptions underlying why and how implementation decisions are made.

Parish Safety Volunteers: A Community Education Intervention

Reinhardt’s ARISE team then evaluated the Parish Safety Volunteers (PSV) pilot, a joint project between the ECFRS and Essex Police Community Safety Departments. The scheme was designed to offset budget restrictions and organisational change by enlisting volunteers to deliver high-quality emergency prevention support. Trained volunteers visited at-risk local homes, reviewed and advised households on fire and burglary safety, and could refer residents to other services during the same visit. They conducted 240 one-hour home safety visits across 72 Essex parishes over 12 months, beginning in March 2016.

ARISE then used difference-in-differences regression analysis to examine the difference in ADF occurrence between parishes that received visits and those that did not, before and after the PSV pilot. Reinhardt’s team found that PSV-visited parishes experienced 0.81 fewer fires per month after they were visited than they had before, which translates to approximately four fewer ADFs per five-month period for visited parishes. Parishes that did not receive PSV visits experienced no change in ADFs over the same period.
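Here is a minimal sketch of the difference-in-differences setup, using Python’s statsmodels on a simulated parish-month panel. The parish count, baseline fire rate, and the built-in 0.81 effect are assumptions used to show how the estimate is read off the interaction term; they are not the ECFRS data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated monthly panel of parish ADF counts.
rng = np.random.default_rng(2)
parishes, months = 150, 24
df = pd.DataFrame({
    "parish": np.repeat(np.arange(parishes), months),
    "month": np.tile(np.arange(months), parishes),
})
df["visited"] = (df["parish"] < 72).astype(int)   # 72 parishes received visits
df["post"] = (df["month"] >= 12).astype(int)      # pilot starts at month 12
effect = -0.81 * df["visited"] * df["post"]       # built-in treatment effect
df["adf"] = rng.poisson(np.clip(2.0 + effect, 0.01, None))

# The coefficient on visited:post is the difference-in-differences estimate:
# the before/after change in visited parishes minus the change in the rest.
model = smf.ols("adf ~ visited + post + visited:post", data=df).fit()
print(model.params["visited:post"])               # recovers roughly -0.81
```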

Due to the timing of data collection and analysis, Reinhardt was unable to use this evaluation to determine whether the reduction in ADFs was directly caused by the PSV visits. Still, this type of impact evaluation is worthwhile and informative: it uncovers basic trends, measures outcomes, and helps determine whether those outcomes change over time. Acting on insights and recommendations from Reinhardt’s evaluations, ECFRS introduced impact evaluations immediately after, and again three months after, each safety visit, tracking message recall and behavioural change following an intervention. Insights gained from these evaluations were then used to shape departmental training, provide feedback in appraisals, build a sense of shared achievement and impact, and ultimately shape the ECFRS Safe and Well offer.

Principles of Evaluation

Reinhardt’s ARISE Initiative continues to work with ECFRS to promote and embed strong evaluation principles in operations and public service delivery. Based on their evaluation and management experience, and informed by workshops and the ongoing development of evaluation in ECFRS Home Safety, Reinhardt, MacAlister, and Moir have dubbed these principles ‘the 5 Cs’.

  1. Core: Evaluation should be embedded in the core of prevention services from the outset. Evaluating programmes only as they draw to an end risks generating misleading results, confusing stakeholders, and ultimately being unable to evidence impact. Incorporating evaluation design as an element of programme design ensures that final project outcomes can be compared to baseline levels, and that continual reflection and refinement of service delivery can occur.
  2. Consistency: Evaluation cannot happen only once a year. Evaluators, decision makers, managers, and frontline service personnel should share and discuss evaluation results regularly. Stakeholders and personnel should be comfortable reading reports, helped to understand what results mean, and told when to expect evaluations to be disseminated and/or reviewed.
  3. Clarity: Evaluation results must always be presented as clearly and simply as possible. Choose a format to present data that is accessible to all, irrespective of background or education. Allocate time to go over evaluations together to dispel any confusion or misinterpretation and issue corrections when found.
  4. Care: Evaluation results should not make people feel anxious. Share evaluation results with the people whose work is being evaluated before wider release. Do not negatively identify people in public settings.
  5. Constructiveness: Use evaluation to help craft and support a positive story of development and improvement. Allocate time to reflect on evaluations and use evaluation results to refine strategies. If an evaluation does not illuminate positive or successful programme aspects, be open about this. Seek collaborative ways of improving the areas that need help.

Conclusion

Research and evaluation helped shape and steer the prevention work stream by enabling the team to evidence the impact of its activity, and has empowered ECFRS to focus on priority communities and partnerships. Crucially, continued review, reflection, and refinement have emboldened ECFRS to embed an evaluation culture in its workforce. The work of Reinhardt, MacAlister, and Moir is an excellent example of how data, research, and evaluation can contribute to a continual improvement process within prevention.