Last February, we interviewed Laura Hughston (LH), Monitoring, Evaluation and Learning Advisor at CARE UK, to learn about CARE’s experience with feedback and accountability mechanisms (FAM). A few days ago, we caught up again with Laura to check on progress made regarding CARE’s FAM assessment process, and in particular the new FAM app.
GS: What are the latest developments regarding CARE’s efforts around the app for assessing feedback and accountability mechanisms?
LH: The purpose of our app is to assess the extent to which the feedback mechanisms set up for a specific project, or country-wide, whether for a humanitarian response or a development programme, are aligned with CARE’s feedback and accountability mechanism (FAM) standards.
Our standards establish that the target groups and communities with whom we work have the right to be consulted on how a project plans to collect feedback, and the right to know how that feedback will be used. Our standards also require that we ensure participants feel safe and have access to a range of channels, both collective and individual, including channels that guarantee confidentiality. Yet it is up to each project to decide how to set up a specific FAM, depending on factors such as the illiteracy rate, whether the local population uses one or multiple languages, and the length of the project (whether it is a three-month emergency response or a long-term development project). All these factors should determine the type of channels and processes we put in place to run our FAMs, but all our projects and programmes should have FAMs that reflect our commitments.
One of our standards is that we will always prioritise adaptations to our activities that increase safety and inclusion. We have a range of resources to support our staff, not only in collecting feedback but also in analysing it and using the insights to improve our practice. In sum, standards must be flexible enough to take the local context into account, but they remain universal and applicable to all our projects.
So this app is aimed at neither collecting nor processing feedback. In many places we do use technology to collect feedback, but we always ensure our choices are appropriate to the context in which we operate. The objective of this app is to help us learn and improve our practice, project by project as well as collectively, while also advancing progress towards our commitments as a federation.
GS: What can we learn about the FAM app’s design and functions?
LH: We wanted a tool that would allow us to assess compliance with our standards, a checklist if you like, but with the ability to include suggestions on how to make improvements. In the past we would have used spreadsheets, but it is difficult to consolidate data across the entire federation with spreadsheets, and there is the risk of multiple versions of one document creating confusion. Therefore, given that the CARE federation had moved to Microsoft Office 365, we decided to build on Microsoft Power Apps, a tool available to all of us within the CARE family. In other words, only CARE staff are able to access this app.
LH: When our staff log in to the app, MS Power Apps recognizes the user; this makes it secure, and we know the system cannot be accessed by anybody outside of CARE. The app has three sections. One contains a repository of materials and additional resources from CARE and other organizations, divided by topic, such as FAM and inclusiveness, safeguarding, practical tools and examples of feedback channels, etc., many of them available in up to four languages. Another section includes the standards, in case you need to refresh them; these standards are based on international best practice and align with our commitments. Then there is a section to review or assess your own FAM, which asks basic questions about a project, such as the sector, location, budget and FAM responsible officer. The assessment also asks about efforts made to consult with target communities, how inclusive those consultations were, the channels that have been put in place, and the type of feedback and adaptations we have made in response to feedback. Later, there are questions about our use of the feedback data and the extent to which feedback information is used in decision-making. We also ask about efforts made to make sense of feedback data: it is important to triangulate the data and not rely only on percentages or satisfaction rates. We must work to understand the different experiences of different members of the community, to ensure that our programmes are as inclusive as they can be and that any adaptation we make to our activities results in an improvement for all.
Most of the questions are answered through drop-down or tick-box options, so the assessment can be completed quickly rather than becoming time-intensive. We want our colleagues to spend less time compiling the assessment and more time learning and reflecting on how we can improve. The whole app is designed not to be a burden or feel like a compliance exercise, but rather to be a tool to reflect, learn, and find support and inspiration.
Once all the data is entered, the app automatically produces an assessment of how well a project meets our FAM standards. The guidance for the entire federation was put together by a small group of M&E professionals, sometimes working on a voluntary basis, so we are not able to review every single FAM around the world and provide feedback on each. Hopefully, this assessment app will help our project staff identify the basic FAM-related gaps, leaving the routine issues to the app and allowing the small group of M&E professionals to focus on tackling more difficult or unusual problems. One-on-one targeted support is still available: project staff can reach out to the central M&E team for additional help, but we feel this is the most strategic way to make use of our limited resources.
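The checklist-style assessment described here can be pictured as a simple comparison of a project's answers against a list of standards. The sketch below is purely illustrative: CARE's real app is built in Microsoft Power Apps, and the standard names used are invented for this example.

```python
# Illustrative sketch only: CARE's actual app runs in Microsoft Power Apps.
# The standard names below are hypothetical, not CARE's real FAM standards.

STANDARDS = [
    "communities_consulted",
    "multiple_channels",
    "confidential_channel",
    "feedback_used_in_decisions",
]

def assess(answers):
    """Compare tick-box answers against the standards.

    answers: dict mapping standard name -> bool (True = standard met).
    Returns a (score, gaps) pair: the fraction of standards met and the
    list of standards the project has not yet satisfied.
    """
    gaps = [s for s in STANDARDS if not answers.get(s, False)]
    score = (len(STANDARDS) - len(gaps)) / len(STANDARDS)
    return score, gaps
```

A project that meets three of the four hypothetical standards would score 0.75, with the unmet standard reported as its gap.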
Once the user clicks the submit button, an email is generated and sent to their account. The automated response highlights any gaps identified through the assessment, together with the corresponding FAM standard, and provides links to materials selected precisely to address the weaknesses identified. If, for example, the assessment revealed that consultations were insufficient, resources about consultations are sent. If the identified gaps concern the use of FAM data to adapt our programmes, then resources on how to analyse and utilize FAM data are sent. The assessment also points out the most pressing issues to be addressed, together with additional elements to consider for improvement, enabling our project staff to prioritise the actions they will take. The assessment is very visual, with icons showing which aspects of a FAM are strong and which are weak, and comparing them to the expected standards.
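The gap-to-resource matching behind this automated follow-up can be sketched as a lookup from each identified gap to the relevant materials. Again, this is a hypothetical illustration: the gap names and resource titles are invented, and CARE's real implementation lives in Power Apps rather than Python.

```python
# Hypothetical sketch of the automated follow-up logic: map each gap
# identified by the assessment to targeted resources. Gap names and
# resource titles are invented for illustration.

RESOURCES = {
    "consultation": ["Guide: consulting communities on feedback channels"],
    "data_use": ["Toolkit: analysing and acting on feedback data"],
    "safety_inclusion": ["Checklist: safe and inclusive feedback channels"],
}

def build_followup(gaps):
    """Return one line per matched resource for the email body."""
    lines = []
    for gap in gaps:
        for resource in RESOURCES.get(gap, []):
            lines.append(f"{gap}: {resource}")
    return lines
```

For instance, an assessment that flags only a consultation gap would produce a follow-up containing just the consultation guide.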
Last but not least, the data from the app is automatically fed into a Power BI visualization, which is refreshed every six hours, capturing data globally from projects across CARE’s federation. Again, this is done to minimize the effort required to produce analysis and to free up capacity to reflect and learn.
GS: What’s your vision concerning this app for the next 3 years?
LH: First, the app must run long enough to gather sufficient data from country offices and projects to allow us to distil key lessons and identify gaps. We want our programmes to guide us to the app’s next steps: what would make their work easier? What solutions do they wish for? To understand this better, we need time both to familiarize ourselves with the process and the standards, and to reflect on the extent to which this is helping us improve our practice.
As a federation, CARE annually collects impact data across all our projects in alignment with our strategy. From this year onwards, projects and programmes must report their FAM assessment scores, so we can get a sense of FAM performance across the board. This will be our baseline and will help us identify where we need to go next.
While we believe that our standards are attainable, we would like to know whether, and what kind of, difficulties our staff face in setting up and running effective FAMs. As a federation we have clear commitments, for example to the Core Humanitarian Standard (CHS), but we need to understand more precisely where and how we need to improve and what it would take to reach the quality we aspire to. Only with data clearly pinpointing what is needed, and where, will we be able to engage CARE’s leadership and ask for targeted future reviews of our standards. But it will take a little while before we can identify clear trends and patterns in our global data. We are, however, happy to have started this journey.
One clear and immediate ambition is to see the repository of materials and examples from different countries expand over the coming months and years. We would like to showcase more of CARE’s own experiences to provide help and inspiration to our colleagues. A feature of the app allows users to search by country or sector (for instance, a humanitarian shelter project in Argentina). We hope this will encourage colleagues to look for projects similar to their own, or perhaps in countries that speak the same language, in order to share approaches and tools. We believe that context, culture and language are critical to ensuring our approaches are appropriate, so enabling people to source materials from projects they feel are closely aligned with theirs will always be our preferred solution. This is why we hope the app will foster dialogue and sharing among our accountability experts across the federation.
Anyone wishing to know more about FAMs at CARE can contact me: email@example.com