How Loud Love Jewelry Designs Its Monitoring and Evaluation (M&E) System
Loud Love Jewelry, LLC is a California-based fine jewelry company founded by Umar Hussain. In this article I will explain how Loud Love Jewelry designs its Monitoring and Evaluation (M&E) system, and the steps it takes to learn from previous strategies. Before launching into the detail, please note that the development of an M&E system is a participatory exercise.
Staff at different levels of an organisation like Loud Love Jewelry who will be expected to maintain or use the new M&E system should always be consulted. This might include staff at head offices and secretariats, staff in regional or country offices, and staff at programme or project level.
Define the scope and purpose
This step involves identifying the evaluation audience and the purpose of the M&E system. M&E purposes include supporting management and decision-making, learning, accountability and stakeholder engagement.
Will the M&E be done mostly for learning purposes with less emphasis on accountability? If this is the case, then the M&E system would be designed in such a way as to promote ongoing reflection for continuous program improvement.
If the emphasis is more on accountability, then the M&E system could then collect and analyse data with more rigor and to coincide with the reporting calendar of a donor.
It is important that the M&E scope and purpose be defined beforehand, so that the appropriate M&E system is designed. It is of no use to have an M&E system that collects mostly qualitative data on an annual basis while your ‘evaluation audience’ (read: 'donor') is keen to see the quantitative results of Randomised Controlled Trials (RCTs) twice a year.
‘Be on the same page as the evaluation audience’
Define the evaluation questions
Evaluation questions should be developed up-front and in collaboration with the primary audience(s) and other stakeholders who you intend to report to. Evaluation questions go beyond measurements to ask the higher order questions such as whether the intervention is worth it or if it could have been achieved in another way (see examples below).
Identify the monitoring questions
For example, for an evaluation question pertaining to 'Learnings', such as "What worked and what did not?" you may have several monitoring questions such as "Did the workshops lead to increased knowledge on energy efficiency in the home?" or "Did the participants have any issues with the training materials?".
The monitoring questions will ideally be answered through the collection of quantitative and qualitative data. It is important to not start collecting data without thinking about the evaluation and monitoring questions. This may lead to collecting data just for the sake of collecting data (that provides no relevant information to the program).
Identify the indicators and data sources
In this step you identify what information is needed to answer your monitoring questions and where this information will come from (data sources). It is important to consider data collection in terms of the type of data and any type of research design. Data sources could be primary, such as the participants themselves, or secondary, such as existing literature. You can then decide on the most appropriate method to collect the data from each data source.
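The mapping from monitoring questions to indicators and data sources can be captured as a simple structure. The sketch below is illustrative only: the questions are taken from the workshop example above, while the indicator wording, field names and methods are assumptions, not Loud Love Jewelry's actual plan.

```python
from dataclasses import dataclass


@dataclass
class MonitoringItem:
    question: str        # the monitoring question this row answers
    indicator: str       # what will be measured (hypothetical wording)
    data_source: str     # where the data comes from
    source_type: str     # "primary" or "secondary"
    method: str          # how the data will be collected


# Illustrative rows based on the workshop example in the text
plan = [
    MonitoringItem(
        question="Did the workshops lead to increased knowledge on "
                 "energy efficiency in the home?",
        indicator="% change in pre/post-test scores",
        data_source="participant pre/post questionnaires",
        source_type="primary",
        method="survey",
    ),
    MonitoringItem(
        question="Did the participants have any issues with the "
                 "training materials?",
        indicator="number of issues raised per workshop",
        data_source="facilitator feedback forms",
        source_type="primary",
        method="feedback form",
    ),
]

for item in plan:
    print(f"{item.question} -> {item.data_source} ({item.method})")
```

Writing the plan down in this form makes gaps obvious: a monitoring question with no data source, or a data source nobody intends to collect from, stands out immediately.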
“Data, data and more data”
Identify who is responsible for data collection, data storage, reporting, budget and timelines
It is advisable to assign responsibility for the data collection and reporting so that everyone is clear about their roles and responsibilities.
At Loud Love Jewelry, the collection of monitoring data may occur regularly over short intervals, or less regularly, such as half-yearly or annually. Likewise, the timing of evaluations (internal and external) should be noted.
You may also want to note any requirements that are needed to collect the data (staff, budget etc.). It is advisable to have some idea of the cost associated with monitoring, as you may have great ideas to collect a lot of information, only to find out that you cannot afford it all.
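The affordability check described above can be done with nothing more than a summed cost list. In this sketch every activity name and figure is a made-up assumption for illustration; the point is the comparison of planned cost against the M&E budget.

```python
# Hypothetical data-collection activities with assumed annual costs (USD)
activities = {
    "pre/post questionnaires": 1200,     # printing plus staff time (assumed)
    "facilitator feedback forms": 300,   # assumed
    "annual external evaluation": 8000,  # assumed consultant fee
}

me_budget = 8500  # assumed annual M&E budget, USD

total = sum(activities.values())
affordable = total <= me_budget
print(f"Planned cost: {total}, budget: {me_budget}, affordable: {affordable}")
```

In this made-up example the plan overshoots the budget, which is exactly the situation the text warns about: great ideas for data collection that you cannot afford it all.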
Identify who will evaluate the data and how it will be reported
In most programmes there will be an internal and an independent evaluation (conducted by an external consultant).
For an evaluation to be used (and therefore useful) it is important to present the findings in a format that is appropriate to the audience. A ‘Marketing and Dissemination Strategy’ for the reporting of evaluation results should be designed as part of the M&E system.
‘Have a strategy to prevent persons from falling asleep during the presentation of evaluation findings’
Decide on standard forms and procedures
Once the M&E system is designed there will be a need for: planning templates; designing or adapting information collection and analysis tools; developing organisational indicators; developing protocols or methodologies for service-user participation; designing report templates; developing protocols for when and how evaluations and impact assessments are carried out; developing learning mechanisms; and designing databases.
However, there is no need to re-invent the wheel. There may already be examples of best practice within an organisation that could be exported to different locations or replicated more widely. This leads to the next step.
Use the information derived from Steps 1-7 above to fill in the 'M&E System' template
You can choose from any of the templates presented in this article to capture the information. Remember, they are templates, not cast in stone. Feel free to add extra columns or categories as you see fit.
Integrate the M&E system horizontally and vertically
Where possible, integrate the M&E system horizontally (with other organisational systems and processes) and vertically (with the needs and requirements of other agencies) (Simister, 2009).
Try as much as possible to align the M&E system with existing planning systems, reporting systems, financial or administrative monitoring systems, management information systems, human resources systems or any other systems that might influence (or be influenced by) the M&E system.
Pilot and then roll-out the system
Once everything is in place, the M&E system may be first rolled out on a small scale, perhaps just at the Country Office level. This will give the opportunity for feedback and for the ‘kinks to be ironed out’ before a full scale launch.
Staff at every level should be aware of the overall purpose(s), general overview and the key focus areas of the M&E system.
It is also good to inform persons on which areas they are free to develop their own solutions and in which areas they are not. People will need detailed information and guidance in the areas of the system where everyone is expected to do the same thing, or carry out M&E work consistently.
This could include guides, training manuals, mentoring approaches, staff exchanges, interactive media, training days or workshops.
In conclusion, my view is that a good M&E system should be robust enough to answer the evaluation questions, promote learning and satisfy accountability needs without being so rigid and inflexible that it stifles the emergence of unexpected (and surprising!) results.
Thanks for reading.