We recently talked to Laura Hughston, Monitoring, Evaluation and Learning Advisor at CARE UK, who supports program quality at CARE UK and has a particular interest in M&E led by primary constituents, the target groups of a project. CARE is a major international non-governmental organization delivering emergency relief and long-term international development projects. Laura shares CARE International's efforts to put people at the center of its work by ensuring that primary constituents' views, priorities and needs are reflected in the organization's work through adequate feedback and accountability mechanisms (FAM).
GS: Why is feedback important for CARE International?
LH: Two main reasons explain our focus on feedback systems. First, we recognized feedback (that is, a positive or negative statement, a concern or a suggestion about a project's activities and the behavior of staff and volunteers) as an essential element of accountability towards the people affected by our projects. Second, we know feedback will help us learn and improve. We are committed and passionate about harnessing opportunities for positive transformation, and feedback is central to achieving that.
When we started developing guidelines, our first step was to change the language: from the previously used FCRM (feedback, complaints, and response mechanisms) to Feedback and Accountability Mechanisms. This wasn't just a cosmetic change: we intentionally wanted to focus more on the accountability aspect and move away from 'complaints'. The word 'complaint' raises alarms with some people and creates resistance. It's hard for people to see why they should be soliciting criticism, especially if they feel they don't have the power to change things. We felt it was important to shift the center of gravity from 'having to respond to complaints' to proactively seeking feedback that will help us improve and build trust and ownership.
The standards and guidance on feedback and accountability mechanisms (FAM) were jointly developed by the CARE federation during 2019. It took us over a year to develop them because we incorporated inputs from various teams across the CARE federation (such as gender and inclusion, safeguarding, and governance) as well as from all areas of our work (such as quick emergency response projects, development-type projects, advocacy and coalition work, and working with partners). When developing the standards, we tried to incorporate perspectives from different scenarios: from a short-term emergency project distributing materials to a long-term engagement project focused on strengthening rights and civil society or building local government capacity. It therefore took time to come together around shared language and approaches. This is why we chose to develop universal standards applicable across the CARE federation. We outline what is expected and what good practice looks like, but leave translating these principles and standards into practice to each team, since context, resourcing and realities vary significantly across our portfolio.
Our starting point was imagining a scenario where there was a natural or humanitarian disaster, no previous CARE office or program, and limited time for setting up such mechanisms. In such a scenario, CARE staff have so many competing demands and priorities that setting up feedback systems might be deprioritized. To prevent this from happening, we created many tools embedded in CARE's Guidance which can be downloaded and used immediately, and customized later if needed. For instance, CARE's Guidance includes templates with draft questions to include in questionnaires, or a template for a procedure. If nothing else is in place, project staff do not need to create anything from scratch. They can simply download those tools and get started, and later modify the templates to suit their needs. We wanted a minimum set of tools that would be 'good enough' to hit the ground running and ensure we follow good practice.
GS: When and how is feedback solicited? Which channels are used? Whose feedback is solicited?
LH: The decision about when and how feedback is solicited rests with each individual project or country program. The diversity of our portfolio is such that it wouldn't make sense to set those requirements centrally for everybody. In some areas, for example, we support third-party monitoring where we do not implement projects directly, and there are times when a program is part of a partnership with other CSOs in which CARE is not the lead organization and another CSO collects feedback.
Nevertheless, one of our standards refers to wide access to providing feedback and implies that everyone should have access to different feedback channels. We categorize channels as active or static, and as individual or collective. Active channels involve project teams proactively going out and asking people for feedback, for instance by surveying a sample from some communities or conducting a group discussion; static channels are those where the initiative for providing feedback lies with individuals, such as a feedback box or a helpline. So, for example, when materials are distributed in our projects, we conduct micro-surveys to verify whether people received what they expected to receive and how satisfied they are. In those cases, we also take the opportunity to verify whether they are aware of and have access to the feedback mechanisms. While individual channels can guarantee confidentiality, we also encourage collective channels, particularly for marginalized groups, as they may prefer to come together on a specific issue.
GS: What are the key challenges when seeking feedback from vulnerable groups? Can you share examples of how they have been addressed or overcome?
LH: One important challenge is reaching out for feedback to groups who tend to be overlooked, such as people with mental health issues or people with disabilities. Engaging marginalized groups requires building trust, and time pressures and limited resources can make it harder to invest the time and effort needed to build it. Specific measures should be put in place to engage vulnerable groups who are difficult to reach, because their feedback is crucial to ensuring our programs are safe and inclusive.
Another important issue is the confidence of individuals in coming forward, as well as a lack of awareness of the performance and behavior standards against which feedback can be given. The latter tends to happen in humanitarian projects where some communities assume that whatever is provided must be received with gratitude, regardless of its quality. Here, raising awareness of our code of conduct and building trust are essential, yet this takes time.
Closing the feedback loop is also crucial, not only for building trust but also to promote ownership. I learned this in a previous role. In one of our projects in Cambodia, when we received feedback that could not be responded to immediately, we came up with a little ceremony: we would tie a ribbon to a tree in a central location to symbolize our commitment to come back with an answer, and only once the answer had been given would we untie the ribbon together with the community. So, when people passed by the tree and saw the ribbon, they knew that a binding promise had been made and an answer would be given. This was transformational in terms of creating trust with community members. Project participants learned that they had a right to be heard and to have a response, even if we couldn't do what they asked.
All communities have the right to be heard and the right to an answer – even if the answer is that nothing could be done about it.
GS: What are the main risks when setting up feedback systems?
LH: One issue worth mentioning is that feedback tends to follow a U-shape: we should expect it to get worse after we start closing the feedback loop. The initial feedback tends to be polite and positive, but as the loop starts being closed, participants' feedback becomes more honest and more negative, because communities start to realize that it is not a simple formality and that the organization will actually do something about it. When an organization starts receiving criticism, it is important for its staff to maintain their motivation and not take it personally. Our staff often work tirelessly in very challenging circumstances, so it can be hard for them not to take criticism to heart. We have to reframe feedback for our own staff as an opportunity to learn together and build on our achievements, not as something to fear.
From an M&E perspective, there is always a concern about sampling in our active channels and about not overlooking particular groups, since there is a risk of only collecting information from those who are easy to reach within a project. If a meeting is organized, those people with some stake in the project will show up (usually people in favor of it who will enjoy its benefits), while others will not. While this might be the simplest and cheapest way to consult, there is nothing to learn from talking with those who already agree with you. To prevent this from happening, we have to go out more proactively to look for the people who might not be aware of the project or even disagree with it, or those who lack the confidence to come forward by themselves.
In addition, there is always a tension between quantitative and qualitative data in M&E work. While the analysis of data from micro-surveys tends to be easier, making sense of qualitative data, where people tell a story, is more difficult and far more time consuming. It is therefore important to involve project participants in making sense of the information collected and putting it into context. This is partly why we also insist on not focusing exclusively on percentages when we analyze feedback data, such as the percentage of those satisfied with a particular activity: marginalized groups will, by definition, be a minority. Focusing excessively on percentages carries the risk of elite capture, whereby we end up listening only to the voices of the most confident or articulate, who typically belong to the most powerful groups.
GS: How can feedback systems be scaled up across different regions in a country?
LH: Increasingly we are looking at scaling up at least some parts of our FAMs. Digital systems certainly lend themselves to this objective, but ultimately not all systems can be scaled up, nor is it always desirable. Inclusion is an important factor for us, and the use of digital tools may enable greater inclusion, particularly of people with impairments, an area I feel is often neglected. Digital tools can also help with anonymity, they are often preferred by young people, and they can help us lower costs and reach across different languages more easily. In essence, there are many benefits to using digital tools beyond the ability to scale up, but digital tools require online access and literacy, and those cannot always be taken for granted.
Setting up a nationwide toll-free helpline, or using Facebook or Twitter, can make sense and be very cost-effective; but it is important to also have face-to-face channels that are nested in the local context. We know from experience and research that sensitive feedback (for example safeguarding or fraud allegations) is normally given face to face, so if we want to make sure that our programs are safe, we must always provide these types of channels as well. Additionally, the local context, realities and cultural preferences are extremely important in making sense of and responding to feedback, and it can be difficult, or even inappropriate, to bring this to scale.
Of course, we try to encourage and find ways to make FAMs as sustainable and efficient as possible, and digital tools are certainly a way to do this, but the extent to which digital tools should be taken to scale also depends on a number of factors. For example, one of the challenges of a national toll-free number is having staff able to speak all the different languages of all our project participants. To build trust, it is important that feedback can be given in the language one is confident speaking. In many contexts, it is also important to have both female and male helpline operators, as some people, particularly the most vulnerable, will not be confident speaking to someone of the opposite sex. In the case of a country with a refugee population, for example, finding both men and women able to speak all the relevant languages in a central location may not be possible.
There are also considerations in relation to literacy and the accessibility of these different channels for people with impairments. We must also consider how feedback can be understood and responded to from a nationwide perspective. Often the feedback we receive will be incomplete, and we need to understand precisely which location, activity, partner, or service it refers to. Sometimes it might not relate to our projects at all but to another organization, or it might be a request for a type of support we are unable to offer. In our standards, we commit to not placing the burden of finding out who is responsible for a particular action or activity on our program participants. We will instead maintain an awareness of our operating context and signpost them to the relevant agencies. This can be very challenging to do at a national level because it requires a great deal of awareness of the different services and active projects in different locations.
There are often also concerns about confidentiality and data security when using digital channels, particularly in locations that have witnessed a great deal of conflict. All these considerations must be taken into account when deciding whether it is appropriate to scale up or remain local, but in an ideal scenario we should try to pursue both.
GS: Could you share any recommendations about indicators for evaluating feedback systems?
LH: At CARE, we now need to assess the extent to which CARE projects align with our standards for feedback and accountability mechanisms (FAM), so an app-based assessment was recently developed which we are about to launch. Once a project's FAM has been assessed, this tool generates a report, automatically emailed to the staff member who completed the assessment, with links to additional targeted resources for areas which may need to be strengthened. We felt it was important to emphasize the continuous learning aspect of developing effective FAMs, and this is what we tried to focus on. We are fully aware that not all our FAMs will be perfect, so we want the assessment to be a starting point, not a destination. We want the focus to be on learning and improving, not on compliance or judgement. Moreover, this tool will be accessible across the whole CARE federation, so colleagues can access information on feedback systems (but not the actual feedback, which is kept confidential!) from projects in the same country or other countries, which, in turn, we hope can contribute to peer learning.
Concerning the set of indicators for assessing feedback systems, there are a few options. If a survey is conducted, one possible question would be: if you were to provide feedback during the project cycle, how confident are you that you would get a response? Another indicator can look at the extent to which the people who participated in a project were actually involved in choosing the feedback channels, making sense of the feedback collected, or finding a solution to a problem. These are all indicators of the level of trust in our FAMs and can teach us a lot about our processes. Our standards say that all this should happen, but there are also realities on the ground to consider: a rapid-onset emergency, for example, might not allow for full and inclusive consultation. This is why we encourage teams to treat the setting up of FAMs as a process that should be revisited regularly. If it is not possible to include all perspectives from the start, we can use our knowledge and expertise to set up a FAM and then use these metrics to assess how well we are doing and where we need to make improvements.
Another possible indicator is the percentage of non-anonymous feedback received, as this may provide insight into the trust built between the parties. While there will always be a share of anonymous feedback, it is important that project participants feel confident in waiving anonymity for non-sensitive issues, since it is difficult to provide a concrete response to anonymous feedback. Almost the opposite of this consideration, another indicator of how well our FAMs are run is the extent to which our systems are effective in eliciting sensitive feedback. For our projects to be safe, it is important that we are able to swiftly identify when things go wrong, but raising these issues takes a great deal of courage and trust. Closing the feedback loop publicly is one of the strategies we use to show communities that we do take feedback seriously and will address it. When people see that we listen, act and respond systematically even on minor issues, they start to see that it is worthwhile to raise issues with us, and they gain the confidence to raise more serious concerns. This is why we feel that closing the feedback loop publicly is important despite the challenges of doing so, particularly in relation to sensitive or confidential feedback.
Last but not least, it is important to document the changes made to our activities after feedback has been collected and responded to. This must be shared with our staff and with teams writing funding proposals, so that a learning process is promoted and the same mistakes are not repeated in the future.