Monitoring and evaluation wiki
Explanation of common and best practice in the monitoring and evaluation of the voluntary and community sector (VCS) delivery of public services.
What is monitoring and evaluation?
These processes are distinct from regulation. They are the reporting mechanisms through which the voluntary and community sector (VCS) demonstrates the achievement of its aims, whilst funders ensure that accountability, contract compliance and value for money are maintained.
Monitoring is the collection of data and evidence to ensure that the payment terms of a grant or contract are being fulfilled by the voluntary and community sector (VCS) organisation that a public body (or other funder) is funding. The National Audit Office argues that 'monitoring, both internally for providers and externally for funders, is an element of good management practice. Done well, monitoring gives all those with an interest in a financial relationship – funder, taxpayer, provider, user – information about what is being achieved with the funding.'
Evaluation refers to retrospective analysis of a policy, programme or project at its completion, conclusion or revision. Evaluation examines what the policy, programme or project has achieved against what was expected, and is designed to ensure that the lessons learned are fed back into the decision-making process. Good evaluation will necessarily involve examining how money was spent and what was achieved as a result.
Why monitoring matters
There is a danger, in monitoring grants and contracts, of simply collecting what is easy to count rather than what is useful to know. All too often providers are required to collect large amounts of data about outputs, simply because outputs are easy to count and collect. Too much emphasis on measuring outputs can mean that all we report on is how busy the service is rather than how effective it is. It is worthwhile to suggest to the commissioner a limited range of measures that can be used to report on output performance and also to identify outcomes. Outcome measurement is harder: it takes time, requires some kind of judgement and often cannot simply be reduced to a number. However, outcomes do show that the organisation is making a difference.
- More than a numbers game - Collecting and analysing performance measures should be a thoughtful and valuable process. It should be more than an administrative chore necessary to satisfy the commissioner’s need for accountability. Good monitoring and evaluation can also be used to:
- Encourage learning - All data needs to be interpreted and discussed. Good interpretation of performance measurement can be an opportunity to learn from what is happening, share experience and develop a constructive relationship with the commissioner.
- Future planning - Performance measurement can be a way into future service planning. Analysing what has and has not worked and what was different from what was originally planned can be a valuable way to identify trends to shape future commissioning strategy.
- Highlight added value - It is worthwhile recording what else you did ‘above and beyond’ what was required in the contract or grant agreement. Often such ‘extras’, such as training volunteers, linking up with other projects and helping commissioners to develop policy, are not formally recorded, but they represent significant added value. It is also worth studying the proposed agreement carefully to check that the levels of control, monitoring and reporting are fair and reasonable. It is useful to ask a few “what if…?” questions to see how the agreement would handle situations such as a failure to deliver or a dispute between the parties. The agreement should set out mutual responsibilities and expectations, and a process for resolving disagreements and disputes as and when they occur.
VCS experiences of monitoring and evaluation
Appropriate and feasible monitoring and evaluation requirements can be beneficial to VCOs where they help staff to focus on delivering an effective service and provide managers with an incentive and extra intelligence for improving the way VCOs function.
Conversely, inappropriate monitoring and evaluation requirements waste resources and reduce the effectiveness and efficiency of service delivery. This ultimately leads to poor value for funders.
- A number of VCOs welcome a greater focus of monitoring and evaluation on outputs and outcomes rather than process.
- The costs of meeting monitoring and evaluation requirements are rarely included in funding for service delivery.
- Feedback on monitoring reports is rarely provided by funders.
- VCOs are much more willing to comply with tight monitoring and evaluation requirements when they form part of a good, reliable funding relationship.
- Most monitoring focuses on quantitative outputs and outcomes. This is welcomed by VCOs where it is appropriate for the type of service being delivered. However, for some services other monitoring and evaluation methods will be more appropriate for measuring soft outcomes.
- Monitoring is not always proportionate to the amount of funding provided (although there may be risks, such as reputational risk or innovation risk, that justify this).
"We have just under £1 million and the monitoring will take me a couple of days. We have £18,000 a year from another funder and monitoring will take me two or three weeks. They want every single invoice. We have to write reams of what we did and who we did it with."
- Some funders are becoming more prescriptive over how targets are to be achieved.
"I'm increasingly being questioned on how I achieved those targets, 'why are you going to do it that way, we would have done it in such and such a way, we'd rather you did it that way'."
- Many VCOs feel that heavy monitoring by the public sector is due to a lack of trust in the VCS.
- VCOs can face excessive time pressure when monitoring deadlines fall too soon after the start of a project or when several funders set deadlines that fall within a close time frame.
- Several VCOs feel that monitoring processes change too often.
- Funding streams that have been devolved to local areas now have their own monitoring requirements for each locality. This increases the burden on VCOs who deliver the same service across more than one local area.
- Pressure on funders to achieve government targets and meet monitoring requirements shapes the way in which monitoring and evaluation processes are designed. This can lead to unrealistic or inappropriate targets being set.
"The targets they're setting is something like 80% into employment, well you'd be lucky if you got 20%."
- Funders' inflexibility towards amending targets that had been set before the start of a project can leave VCOs with unrealistic monitoring requirements.
"You set your targets 18 months in advance and, even though you're only second guessing how many people are going to come through the door, you've got to stick to those on a quarterly basis, otherwise if you don't get your stats in, they threaten you with withholding payments."
Gathered by NAVCA
The NAVCA report 'For Good Measure' discusses instances of poor monitoring in detail, including:
- 15% of funding for a project in Ryedale was wasted on photocopying an audit trail that was later audited;
- In Sussex, a VCS service provider was audited eight times in six weeks by local statutory partners.
Impact on the sector
Fulfilling appropriate and feasible monitoring requirements can be a useful process for VCOs. However, excessive and unsuitable monitoring requirements waste resources and reduce the effectiveness and efficiency of service delivery.
- Good monitoring systems can be beneficial to VCOs where they help staff to focus on delivering high quality services. They also provide extra intelligence and incentives for performance improvement not just relating to service delivery but within the organisation as a whole.
"I think [monitoring] is really helpful, especially to managers who probably always wanted to try and get some of these things in place, but now have an external reason for doing it as well."
- Many VCOs welcome the greater focus on outputs and outcomes as it provides clarity on what is required from them. But this type of monitoring is not appropriate for some services, which places an unnecessary burden on VCOs as they struggle to fulfil unsuitable monitoring arrangements.
- The innovative and distinct role of the VCS in service delivery is diminished when funders seek to use monitoring and evaluation processes to influence practice.
- Failure to include the costs of fulfilling monitoring requirements in funding uses up VCOs' resources and can lead to poor value for money for funders as resources are diverted away from service delivery.
- Complex and disproportionate monitoring and evaluation regimes stretch VCOs' resources and have a greater impact on smaller VCOs who receive smaller amounts of funding, and have less capacity to deal with monitoring requirements. They also discourage other VCOs from delivering publicly funded services.
- The lack of feedback from funders on submitted monitoring leaves organisations not knowing what they have achieved or where they need to improve performance. This can reduce the incentive for VCOs to complete monitoring and diminish confidence that monitoring and evaluation is worthwhile.
Good Practice for VCS
Taken from 'Intelligent Monitoring: an element of financial relationships with third sector organisations' (National Audit Office, 2009) www.nao.org.uk/intelligentmonitoring
- Understand why reporting is important: Reporting is essential to ensure that public funds are properly spent and have an impact, and reporting can help your organisation prove its worth;
- Identify useful information: If you understand what information is useful to your organisation, you can have a constructive discussion with your funders to agree realistic monitoring and reporting requirements. This discussion should include questioning the funder’s requirements if you are not clear how it will use your information;
- Meet deadlines: Provide specified reporting information to the funder within agreed timescales;
- Co-ordinate: Make sure the person who is bidding for funding co-ordinates with the person who will project-manage the work, where relevant.
- Suggest using existing systems: Discuss with funders whether you could use a standard report based on your own reporting systems, especially if you can identify several funders who are likely to need similar information.
Good practice for funders
What to Monitor
This guidance recommends funders first establish 'what to monitor' at two stages:
- when determining which organisation should win the contract;
- during the period of service delivery.
The National Audit Office ('Intelligent Monitoring: an element of financial relationships with third sector organisations') recommends funders consider their monitoring effectiveness and proportionality using the following eight questions:
- Can the information be provided less frequently? Funders often require providers to supply monitoring information in time with payments. For example, if a funder agrees to pay a provider once a quarter, it will require the provider to submit the agreed monitoring information with the quarterly claims for payment. Every time an item of information is collected and supplied to a funder, there is an associated cost. Funders and providers should, therefore, agree to the supply of each item of information only as frequently as it is needed. This could mean that, for example, some information is supplied quarterly while other information is collected and supplied annually.
- Can the information be provided in time with the provider’s own reporting systems? A funder may ask for certain information on a certain timescale. The provider may explain that it already produces this information but on a different timescale. It would cost more to produce the information to the funder’s preferred timescale. In this case, the funder should weigh up the costs and benefits of collecting the information on the two timescales. All things being equal, the funder should accept the information on the timescale on which the provider already produces it.
- Can the information be reported only by exception? Often, a funder requires the provider to supply every agreed item of information in each monitoring report. Collecting and supplying all this information has a cost. Another approach is for the funder and provider to agree that the provider will supply the information only if there has been a change (of a pre-agreed size or type) since the last report (or from a baseline). This can be particularly useful in monitoring issues such as risk. So long as there has been no change in the status of the risk, there is no need to supply other information on it.
- Is there an alternative item of information, perhaps more cost-effective, that could be used instead? When planning monitoring information, funders should be aware that providers usually collect information to support their own management and governance. They may also be collecting other information for other funders. Adding extra monitoring requirements adds cost. It is therefore a good idea to use, where possible, information that the provider already collects.
- Can information that the provider already collects for another funder be used instead? Funders should be aware that providers often have financial agreements with more than one funder. A provider of your programme may, thus, already collect and supply information to another funder. When planning monitoring, it is a good idea to use, where possible, information that the provider already collects for another funder. Adding extra monitoring requirements adds cost.
- Can this information be collected on a sample basis? If a funder has financial arrangements with a number of different providers, it may be possible to collect certain information from some, not all, providers. This use of a ‘sample’ will relieve the burden, and therefore cost, for those providers that are not part of the sample. Sampling may also reduce the cost of monitoring to the funder.
- Can this information be collected other than from the provider – such as a survey? There may be some information that the funder needs about the programme that is better not collected as part of monitoring information from the provider. It could be collected through a survey – separately funded – instead.
- How can you assure the reliability of this information? You may be supplied with false monitoring information. Usually, this will be down to error. But, occasionally, it could be deliberate. There are ways you can safeguard against this. In particular, make clear to the provider from the start that its data and records will be open to scrutiny by you and your team, as well as by auditors and inspectors. Be clear about what information you require and the quality and robustness of it. A good working relationship with the provider will help and may include face-to-face meetings and personal visits.
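The 'reporting by exception' approach described above can be sketched in code. This is a minimal illustration only: the field names, figures and the 10% threshold are all hypothetical, standing in for whatever the funder and provider pre-agree.

```python
# Hedged sketch of exception-based reporting: an item appears in the
# report only when it has changed by more than a pre-agreed threshold
# since the last report. All field names and figures are hypothetical.

def exception_report(previous, current, threshold=0.1):
    """Return only items whose value moved by more than `threshold`
    (as a fraction of the previous value) since the last report.
    Items with no previous value are always reported."""
    changed = {}
    for item, new_value in current.items():
        old_value = previous.get(item)
        if old_value is None or abs(new_value - old_value) > threshold * abs(old_value):
            changed[item] = {"previous": old_value, "current": new_value}
    return changed

last_quarter = {"service_users": 200, "sessions_delivered": 48, "risk_score": 2}
this_quarter = {"service_users": 205, "sessions_delivered": 70, "risk_score": 2}

# Only 'sessions_delivered' moved by more than 10%, so only it is reported.
print(exception_report(last_quarter, this_quarter))
```

So long as nothing has changed beyond the agreed tolerance, the quarterly report is empty, which is precisely the cost saving the NAO approach aims at.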
Begin the Discussion about Monitoring Early
'Intelligent Monitoring: an element of financial relationships with third sector organisations' continues:
'Be clear about your monitoring requirements when you invite applications or tenders and be prepared to discuss them at that stage. Too often, the discussion about monitoring starts during the tender or application process, or even after the award. This does not allow time for proper planning. It makes it hard for the provider to cost the monitoring requirement and build that cost into its proposal for funding. All this tends to lead to disproportionate and badly-managed monitoring.'
Justify Your Need for Information
'Intelligent Monitoring: an element of financial relationships with third sector organisations' continues:
'It is not sufficient to impose a requirement. Funder and provider should agree the requirement. Funders should expect providers to ask them to justify requests for information. This contributes to good decision-making by funders.'
Tell the Provider What You Intend to Use the Information For
'Providers are more likely to engage with monitoring requirements if they can see how they contribute to higher goals. Sending information into a ‘black hole’ is demotivating. If a provider knows what information is needed for, it may be able to suggest a better piece of information or a better source.' ('Intelligent Monitoring: an element of financial relationships with third sector organisations')
As a general principle, funding bodies should seek to minimise the monitoring and inspection burden on the recipients of funds to a level proportionate to the level of funding and which maintains proper control of public monies. Monitoring should be proportionate to the sums involved and the perceived risk. This may mean that more attention would be paid to larger payments (although less, proportionately, to their size, and dependent on other risk factors), and that small payments may receive a lighter touch. Under a grant regime, funding bodies should seek only information that is necessary for the purpose of verifying that grant conditions have been met.
Use Existing Monitoring Practices
Where possible, funding bodies should rely on monitoring information which a third sector organisation would, as a matter of good practice, report in any case to its own Board of Trustees (or other governing body), rather than requiring the transfer of information to a new format. Where funders find that the information provided to trustees is inadequate, they should, as a condition of funding, require that standards are improved. This is in itself good practice in managing the risks of poor governance or financial control that might jeopardise the proper delivery of a service.
Equally, data should never be collected just because it is easy to collect, but only because it is useful.
It is important, however, that no funded body is led to believe that it will never be monitored or inspected. One possible solution could be a risk based programme of assessment, monitoring and checking accompanied by an ex-post random check especially for smaller sums. This would involve monitoring a selection of organisations after the funding is in place. The random sample provides both a control over those recipients that might be tempted to believe they will never be checked because they receive less, and allows the funding body to extrapolate the sample results to ensure that the lighter touch has not led to widespread abuse. For funding schemes which involve many small payments, where minimal checks can be justified, funding bodies may also wish to consider methods for looking at patterns in applications (e.g. use of same or similar addresses or contact names) to identify systematic abuse.
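The pattern check mentioned above (looking for the same or similar addresses across applications) can be sketched as follows. The normalisation rule and the sample applications are hypothetical; a real check would likely use fuzzier matching.

```python
# Hedged sketch of flagging funding applications that share the same
# (lightly normalised) address, which may indicate systematic abuse.
# The application records below are invented for illustration.

def normalise(address):
    """Lower-case and strip punctuation/whitespace so that trivially
    different spellings of the same address compare equal."""
    return "".join(ch for ch in address.lower() if ch.isalnum())

def flag_shared_addresses(applications):
    """Return groups of application IDs that share a normalised address."""
    groups = {}
    for app_id, address in applications:
        groups.setdefault(normalise(address), []).append(app_id)
    return [ids for ids in groups.values() if len(ids) > 1]

applications = [
    ("A101", "12 High Street, York"),
    ("A102", "3 Mill Lane, Leeds"),
    ("A103", "12 HIGH STREET YORK"),   # same address, different formatting
]
print(flag_shared_addresses(applications))  # groups A101 and A103 together
```

Any flagged group would then be a candidate for the closer ex-post checking described above, not automatic proof of abuse.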
Shared Monitoring with Other Funders
A lack of commonality amongst funders’ monitoring requirements means that multi-funded bodies have to record their information in a variety of different ways. This can be a significant administrative burden as entirely separate accounting systems may have to be established in order to record and supply the information required by different funding bodies.
It is important that funding bodies co-ordinate monitoring and inspection arrangements to try wherever possible to reduce disruption to recipients who receive monies from several sources. This can be achieved through joint inspection activity or by sharing of information on recipients. Where recipients receive funding from more than one funding body (or from different parts of the same funding body), the funding bodies should consider whether there is scope to appoint a lead monitoring officer to carry out this role on behalf of all funding scheme managers. This would lessen the burden of inspection on both recipients and funding bodies alike.
- Funders should provide VCOs with clear direction on what information is required and how to capture it.
- Funders should combine monitoring and evaluation processes with reliable, longer term funding and a stable funding environment.
- Funders should work with VCOs when deciding on monitoring and evaluation requirements and how they should be carried out. This is particularly useful when these decisions are made as part of the initial funding arrangements so that requirements are clear from the outset.
"The monitoring document was something that you drew up together based on what the baseline was in your borough, and it was an enjoyable way to work. You felt that you owned all these outputs and they weren't being given to you."
- Funders should be willing to attend events and accept alternative evidence of outcomes such as videos, DVDs, and CD-ROMs.
- Funders should leave realistic timescales between the start of funding and reporting deadlines.
- Government departments should create a common monitoring process for funding streams following their devolution from central government into smaller locally distributed streams.
Auditing of Public Bodies
Auditors inspecting public bodies (rather than public bodies acting as auditors) will expect assurance that grants to the VCS can be evidenced as having been used for their agreed purpose and paid on the basis of need.
Third sector organisations are independent organisations, frequently charitable, and their financial statements require audit or independent examination in accordance with the requirements of the Charities Act 1993 (for charities which are not companies) or the Companies Act 1985. They are not normally subject to public sector audit regimes. The auditing of the financial accounts of a charity will include consideration of whether they comply with relevant accounting standards - including the Statement of Recommended Practice on Accounting and Reporting by Charities (the Charities SORP - latest version 2005).
The MOPSU example
Measuring Outcomes for Public Service Users (MOPSU) was an outcomes research partnership project led by the Office for National Statistics (ONS) and including NCVO amongst others. The project trialled the development of outcome measures in children's and adults' social services, to test appropriate monitoring procedures and principles.
The outcomes measurement tools used by MOPSU are all available to use free of charge. See here for full details and the final reports.
1. Use outcomes in tiers
When establishing outcomes, the MOPSU researchers divided outcomes into 'basic domains' and 'higher order domains'. These are the components of good outcomes and, in adult care, can be divided as follows:

Basic Domains
- Personal cleanliness and comfort
- Accommodation cleanliness and comfort
- Food and nutrition

Higher Order Domains
- Control over daily life
- Social participation and involvement
2. Outcomes should reflect user perspectives
"Outcomes should reflect, as far as possible, the perspective of the service user" (p25) of what is valued, and what is achieved. It is unhelpful (and unpersonalised) for the state or the provider to determine outcomes:
"we are more concerned with whether people feel they have control over their daily life than whether they have, in some objective, measurable way, control over their daily life, though this is important". MOPSU report, p25
Examples of self-completion questions to gather this user perspective:
"Which of the following statements best describes how safe you feel? Not feeling safe could be due to fear of abuse, falling or other accidental harm, and fear of being attacked or robbed. Please cross one box only:
- I feel as safe as I want
- Sometimes I do not feel as safe as I want
- I never feel as safe as I want" MOPSU report, p26
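A self-completion question like this yields simple categorical data. As a minimal sketch of how responses might be tallied, assume the three option texts above; the responses themselves are invented for illustration.

```python
# Hedged sketch of tallying self-completion responses to the safety
# question quoted above. Only the option wording comes from the MOPSU
# report; the response data here is invented.
from collections import Counter

OPTIONS = [
    "I feel as safe as I want",
    "Sometimes I do not feel as safe as I want",
    "I never feel as safe as I want",
]

# Hypothetical responses from five service users.
responses = [OPTIONS[0], OPTIONS[0], OPTIONS[1], OPTIONS[2], OPTIONS[1]]

tally = Counter(responses)
for option in OPTIONS:
    count = tally[option]
    print(f"{option}: {count} ({100 * count / len(responses):.0f}%)")
```

Reporting the distribution across all three options, rather than a single average, preserves the user-perspective emphasis the report argues for.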
Because many of the care home residents being assessed by MOPSU had cognitive impairments, communicative difficulties, or both, measurement methods other than self-evaluation had to be used. Fieldworkers spent two days at each home, using a variety of alternative techniques:
- observations - both structured and more general
- interviews with staff and residents, when they were able to respond
- staff questionnaires and administrative data
"Necessarily such judgements, particularly when based just on observations and the views of staff, will be closer to measures of functioning". MOPSU report, p27
3. Attributing outcomes to the service

The other element to measure is what outcomes would have been in the absence of the service, in order to understand what impact a particular service has had on outcomes. But making this attribution is difficult.
"A simple 'before and after' method could be misleading as the change could still not be attributed to the service as other factors, such as disease progression, could have an impact on the measure". MOPSU report, p27
Ideally, 'a randomised controlled trial would be set up to compare matched groups of people who were or were not in receipt of services'. But this approach is expensive and may present ethical issues. Instead the MOPSU researchers asked interviewees what they felt outcomes would be in the absence of the reviewed service, assuming no other support would replace that service activity. Full details on how their answers were tabulated into results are given on p28 of the MOPSU report.
4. Capacity for Benefit
Service outcomes depend on a number of factors independent of the service, not merely its quality. To try to focus solely on the service (for the benefit of commissioning and regulation), MOPSU tried to establish the average 'capacity for benefit' of each service: the value that it can deliver, and the value it actually delivers in outcomes.
"The capacity for benefit is the potential benefit that could be delivered if all the objectives of a service were achieved for all those receiving the service".