The 'Story of Mel' Explained

The Story of Mel is a story about a ‘Real Programmer’ that appeared on Usenet in the early 1980s. As described on its Wikipedia page:

The Story of Mel is an archetypical piece of computer programming folklore. Its subject, Mel Kaye, is the canonical Real Programmer.

This is a very popular piece of programming lore, and it is often reposted. Many people have read the story without fully understanding it, which is a shame, because it really gives you a taste of what it took to be an elite programmer in the early days of computing (late 1950s / early 1960s).

What follows is the complete story, with the technical bits explained as needed. If I make a mistake or you think something should be improved, please comment below.

The Story of Mel

Machine code is binary (1s and 0s). Hexadecimal is base 16 (0-9, A-F), so one hexadecimal character represents exactly four binary digits. Writing ‘A’ rather than 1010 is easier for humans to read. So Mel was a machine-language programmer who wrote his binary in hexadecimal notation. He didn’t use any fancy assembly language (Add, Sub, etc.) to make coding easier; he wrote machine code directly.
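The correspondence between hex digits and groups of four bits is easy to verify in any modern language. A quick Python check (illustrative only, nothing like what Mel actually typed):

```python
# One hex digit <-> four binary digits.
word = 0b1010                    # the binary value from the text
assert f"{word:X}" == "A"        # one hex character stands in for 1010

# A 16-bit value is four hex digits, or sixteen bits:
assert f"{0xDEAD:016b}" == "1101111010101101"
```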

Core Memory was a form of random-access memory that used magnetic fields to store information. It was hand-manufactured by skilled factory workers, which made it expensive. Its cost eventually fell from about $1 per bit to 1 cent per bit, but in Mel’s time it was quite expensive. Therefore the engineers at Royal McBee, targeting the lower end of the computer market (an LGP-30 would cost about $400,000 in 2015 dollars, which was quite cheap for a computer in the 1950s!), chose Drum Memory, which is simpler to manufacture and therefore cheaper. Drum Memory is much slower than Core Memory, which made the new RPC-4000 still seem less than ideal to a programmer despite improvements in other areas of the machine.

As will be explained later, Mel used a variety of tricks exploiting quirks of the processor and drum memory to squeeze every bit of power from the machine. These tricks were so specific to the LGP-30’s processor that no compiler could generate machine code as fast and optimized as Mel’s. Therefore Mel didn’t like compilers: he was a Real Programmer, and he wanted the most power possible.

Regarding a “program rewriting its own code”: by modifying the region of memory that contained the program’s instructions, Mel could (very, very carefully) have the program modify its own instructions while it was running, effectively changing its own source code. Modern high-level languages like Python and JavaScript allow creating new code dynamically, but generally not mutating code that already exists. Compilers will not generate self-modifying code; it is too dangerous and rarely faster. However, a Real Programmer like Mel found it an attractive technique.
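Self-modifying code is easiest to see in a toy interpreter. The sketch below is my own illustration (an invented mini-VM, not the RPC-4000): one instruction overwrites another before it ever executes.

```python
# A toy interpreter whose program can overwrite its own instructions at
# run time. 'POKE' rewrites the instruction stored at a given index.
def run(program):
    acc, pc = 0, 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "ADD":
            acc += arg
        elif op == "POKE":               # self-modification
            program[arg[0]] = arg[1]
        elif op == "HALT":
            break
        pc += 1
    return acc

# Instruction 2 starts life as ADD 100, but instruction 1 rewrites it
# to ADD 1 before it runs, so the result is 5 + 1, not 5 + 100.
prog = [("ADD", 5), ("POKE", (2, ("ADD", 1))), ("ADD", 100), ("HALT", None)]
assert run(prog) == 6
```

Reading the program listing alone would suggest the answer is 105; only by running it do you see what actually happens, which is exactly why Mel's listings were so hard to follow.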

That something so complex and excellent was written directly in machine code is mind-blowing, especially to the author, who was fluent in high-level languages (FORTRAN).

Typically, porting a program to another OS or processor architecture means targeting the new OS / architecture and recompiling the source code. However, when you program directly in machine code (like Mel), you must completely rewrite the program if the new processor supports a different instruction set, which is expensive and time-consuming. This is one reason high-level programming languages were created: portability, so that the same source code can run on different operating systems and processors.

Machine language is a series of 1s and 0s. In a hypothetical 16-bit instruction scheme, the first 4 bits could be the operation code (allowing a maximum of 16 operations), the next 4 bits the storage register, the next 4 bits an operand register, and the final 4 bits another operand register. Therefore, if 0001 is the opcode for ‘Add’, and we want to store the result of adding registers 1 and 2 into register 3, a potential binary instruction could be 0001 0011 0001 0010 (each 4-bit field in decimal: 1, 3, 1, 2).
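The hypothetical encoding above can be sketched with a few bit shifts (the field layout is the invented one from the text, not a real instruction set):

```python
# Pack the toy 16-bit instruction: opcode | dest | src1 | src2, 4 bits each.
def encode(opcode, dest, src1, src2):
    return (opcode << 12) | (dest << 8) | (src1 << 4) | src2

ADD = 0b0001
word = encode(ADD, 3, 1, 2)                  # add r1 + r2 into r3
assert f"{word:016b}" == "0001001100010010"  # 0001 0011 0001 0010
```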

In the RPC-4000 computer, the address of the next instruction was determined by an operand inside the current instruction. Normally, a program is loaded into memory sequentially. The first instruction is at address 0, the next at 1, the next at 2, and so on, and the program counter (which keeps track of the current instruction) increments each time an instruction is executed (unless a ‘jump’ causes the counter to change to something else).

However, because of the nature of drum memory, it isn’t performant to have the program counter increment sequentially between commands. Understanding this requires a brief digression into drum memory. It is critical that you understand this in order to understand the rest of The Story of Mel.

Drum Memory looks like it sounds: a rotating cylinder covered in a magnetic material that can be switched between two different states (1 and 0) using electricity. State is modified using read-write ‘heads’ located along the drum. The drum rotates at a fixed rate while the heads stay put, so a head must wait for the drum to rotate into a specific position in order to read or write a particular address.

For instance, imagine a hypothetical drum with 4 ‘tracks’. Each track has its own read-write ‘head’. Each track contains 256 bits, for 1024 bits total. If you want to read the bit at address 900, the drum will send a response once offset 132 of the 4th track (the first three tracks hold addresses 0-767, so 900 − 768 = 132) rotates beneath the 4th head.

A side effect of this is drastically inconsistent memory response times. In modern semiconductor-based memory, access speed is typically the same no matter which RAM bank the data is in. In Drum Memory, the response time is extremely quick if the requested address is just after the current location of the head, or extremely slow if the head just missed it.
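Using the hypothetical 4-track, 256-bit-per-track drum above (invented geometry, not real LGP-30 figures), a short sketch shows both the address-to-track mapping and why access times vary so wildly:

```python
# Toy drum: 4 tracks of 256 bits each (invented geometry).
TRACK_BITS = 256

def locate(address):
    """Which track an address lives on, and its offset within that track."""
    return address // TRACK_BITS, address % TRACK_BITS

def wait_bits(head_offset, target_offset):
    """Bits that must pass under the head before the target arrives."""
    return (target_offset - head_offset) % TRACK_BITS

track, offset = locate(900)
assert (track, offset) == (3, 132)  # 4th track, offset 132
assert wait_bits(130, 132) == 2     # just ahead of the head: fast
assert wait_bits(133, 132) == 255   # just missed it: almost a full turn
```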

Returning to the drum’s read-write heads, it becomes clear that simply writing each instruction to the drum sequentially would be inefficient: by the time one instruction finished executing, the head would no longer be over the very next one. The processor’s execution rate and the drum’s rotation are not perfectly synchronized (which is desirable, because forcing them to operate in lockstep would bottleneck either the processor or the drum).

Therefore, an ‘optimizing assembler’ was written that would take assembly code and arrange the instructions in memory locations that lined up with the drum rotation.

This allowed the programmer to write their code in sequential order, without worrying about the next-instruction operand.

The optimizing assembler would then take that same code and rearrange the instructions into an order optimized for the drum, automatically rewriting each next-instruction address operand.

Obviously, having to write your instructions completely out of order, with a GOTO on every line jumping to the next one, would be unbelievably difficult. So an optimizing assembler did this automatically, taking the burden off the programmer and producing reasonably efficient machine code: each next instruction would land reasonably close to the current position of the track head, minimizing rotational wait.
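A toy version of such an assembler can be sketched as a greedy placement pass (slot counts and timings are invented, and collisions between placements are ignored for simplicity):

```python
# Greedy "optimizing assembler" sketch: place each instruction at the drum
# slot the head will reach just as the previous instruction finishes, so
# there is zero rotational wait. All numbers are invented for illustration.
SLOTS = 64  # addressable positions on one drum track

def layout(exec_times, start=0):
    """Map drum slot -> instruction index, assuming no slot collisions."""
    placement, slot = {}, start
    for i, t in enumerate(exec_times):
        placement[slot] = i
        slot = (slot + t) % SLOTS  # where the head will be after instr i runs
    return placement

# Three instructions taking 5, 7, and 3 time units (~ drum slots) each:
assert layout([5, 7, 3]) == {0: 0, 5: 1, 12: 2}
```

A real assembler of this kind also had to resolve collisions (two instructions wanting the same slot) by spilling to the next free slot, which is why its output was only "reasonably" optimal.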

Mel never used the optimizing assembler, because Mel is a Real Programmer. He found a trick that was beyond the capabilities of the optimizing assembler that gave him more computation power, and because the optimizing assembler could not give him the same power, he hated using it.

The “constants” Mel used require some explanation.

As described before, a 16 bit processor would have instructions that looked like this:

1001 0101 1010 1111

If a hypothetical instruction set used the first 4 bits for the opcode, then you have 16 potential opcodes (2^4 = 16). Let’s say that the final 4 bits of the instruction were the address of the next instruction (the GOTO statements in the assembly code). Obviously, this is a simplified example, because a 4-bit address field would allow only 16 instructions per program. If you had a command that looked like this:

And R1, R2, GOTO 5

The machine code could look like this:

0001 0001 0010 0101

Which evaluates to 4389 in decimal!
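The dual reading of one word, as an instruction and as a plain number, is easy to demonstrate with the toy encoding above:

```python
# The same 16 bits, read two ways.
word = 0b0001000100100101          # "And r1, r2, GOTO 5" in the toy encoding
assert word == 4389                # ...but also simply the constant 4389

# Decoded as an instruction again:
opcode, next_addr = word >> 12, word & 0b1111
assert (opcode, next_addr) == (0b0001, 5)
```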

Therefore, by ordering his instructions extremely carefully and keeping track of where each one lived, Mel could treat his instructions simultaneously as numerical constants.

If he needed to multiply a value, he could examine the location of each track head during the current instruction, see if any instruction had a ‘constant’ value that could be used, or even wait briefly for the drum to rotate a tad further, to give him a range of potential constants.

Mel saw his entire program as a living, intricate machine. The program’s source code was malleable – as the program executed, the instructions of the program could rewrite themselves. So looking at Mel’s source code didn’t show you the whole thing. When executing the program, the instructions themselves would change, so instruction 927 could start as an Add, and then switch to a Multiply.

Rather than writing his programs by architecting a high-level design and then building out each piece, as is typically done today, Mel would first implement the loops he needed, and assign them the most efficient locations on the drum for maximum speed. So the code that ran most often (loops) would be guaranteed to run as fast as possible.

Time delay loops were needed so that, when using a Flexowriter for output, the computer didn’t send commands faster than the device could handle. Time delay loops are wasteful on a computer without an operating system. If you write a program that calls sleep() in Python or Java, the operating system can swap your program off the processor and run other programs until the sleep expires.

However, in the LGP-30 there was no operating system, just a single program executing. So while your program slept, the computer was doing nothing useful.

So rather than write wasteful time delay loops, Mel would pick the worst possible location for the next instruction, which would delay the maximum amount of time, without needing to write a do-nothing loop.
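The idea can be sketched in a few lines (slot count invented): the slowest possible next-instruction address is the one the head has just passed, costing nearly a full revolution.

```python
# Delay without a delay loop: pick the next-instruction slot the head has
# *just* missed. Slot count is invented for illustration.
SLOTS = 64

def delay_slot(head_pos):
    """The address that maximizes rotational wait from head_pos."""
    return (head_pos - 1) % SLOTS

def wait(head_pos, target):
    return (target - head_pos) % SLOTS

assert wait(10, delay_slot(10)) == SLOTS - 1  # nearly a full revolution
```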

Every loop that terminates has a condition that eventually exits it, and even a deliberately infinite loop can be broken out of from outside. However, the loop Mel wrote had no test or condition that allowed exiting at all. Yet, somehow, it still exited cleanly…

An index register is used to efficiently iterate through an array of data, by incrementing it for each element and then adding the current register value to the base address of the array.

Mel’s trick increased speed without using the index register. Rather than incrementing the index register, he would increment the address embedded in the instruction itself (modifying the program’s own code), store the instruction back, and execute it. The overhead of modifying the instruction and storing it back took just long enough for the drum to rotate into position for the next instruction, without spending an extra clock cycle manipulating the index register.
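A sketch of the trick, with an invented encoding in which the operand address occupies the low bits of the instruction word:

```python
# Self-modifying iteration: instead of "base + index register", bump the
# address field embedded in the instruction word itself. Encoding invented.
memory = {100 + i: v for i, v in enumerate([3, 1, 4, 1, 5])}  # the array

LOAD = 0b0010
instr = (LOAD << 12) | 100   # "load from address 100"; address in low 12 bits

total = 0
for _ in range(5):
    addr = instr & 0xFFF     # decode the operand straight out of the word
    total += memory[addr]
    instr += 1               # self-modify: increment the embedded address
assert total == 14           # 3 + 1 + 4 + 1 + 5
```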

In order to use the index register, there was an ‘index register bit’ which was located between the instruction address and opcode of an instruction. Mel would turn it on, which is perplexing, because Mel never used the index register…

Operation codes (opcodes) tell the processor which operation to execute for given operands. The opcode Mel used was 1 less than the opcode for JUMP, which ‘jumps’ to another instruction rather than executing the next one sequentially.

When the loop reached the final element of the array, stored at the very top of memory, incrementing the instruction’s address field one more time caused it to overflow. Because the index register bit was already set, the carry rippled through it into the opcode, incrementing the operation into a JUMP (remember, the opcode was 1 less than JUMP). And since the address bits had wrapped around to zero, the JUMP sent the program to location 0, the very bottom of memory, which is indeed where Mel had placed the next instruction.

To summarize: Mel enabled the index register bit (despite never using the index register) so that, when the self-modified instruction’s address field overflowed at the very top of memory, the carry would propagate through the index bit and increment the opcode, turning the instruction into a JUMP command and exiting the infinite loop.
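The exit trick can be simulated end to end with an invented 16-bit layout that mirrors the description above (4-bit opcode, then the index bit, then an 11-bit address, with JUMP one opcode above the loop's):

```python
# Invented 16-bit layout: [ 4-bit opcode | 1 index bit | 11-bit address ].
LOAD, JUMP = 0b0110, 0b0111     # JUMP is one greater than the loop's opcode
INDEX_BIT = 1 << 11

def decode(word):
    return word >> 12, (word >> 11) & 1, word & 0x7FF

# Loop instruction: index bit set, address field already all ones
# (the very top of memory).
instr = (LOAD << 12) | INDEX_BIT | 0x7FF
instr = (instr + 1) & 0xFFFF    # the self-modifying "increment the address"

opcode, index_bit, address = decode(instr)
assert opcode == JUMP           # carry rippled through the index bit...
assert index_bit == 0
assert address == 0             # ...and the jump target wrapped to location 0
```

Had the index bit been clear, the carry out of the address field would have stopped there, merely setting the bit, and the opcode would never have become a JUMP. That is why Mel turned it on.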

This is a totally insane optimization, which is why it took the author weeks to understand it. Clearly, Mel knew the inner workings of drum memory and the processor so well that he amassed a collection of tricks to squeeze maximum performance from the computer. He would use all of them in his programs, which would always beat the compiler and optimizing assembler. Mel was a Real Programmer.

For those of you who made it this far, congratulations. I hope you have learned something about the fine art of computer programming, and what it took to be an elite programmer of ancient computers.

Hacker News and other Internet communities routinely argue over what languages constitute ‘real programming’. PHP and JavaScript seem to be the current whipping boys, while C/C++ are what the True Programmers use.

The bottom line is, the argument is irrelevant. None of us are Real Programmers. We are abstracted away so far from what is really happening in the metal, that arguing about languages is a pointless endeavor.

I suggest being content that modern technology doesn’t require us to hack up machine code in order to keep our software running :-)

Reshaping Monitoring, Evaluation, and Learning for Locally Led Adaptation

chapter 1 research areas the story of mel

Executive Summary

  • Locally led adaptation (LLA) is an emerging priority among international climate and development donors and community-based organizations alike. Supporters of locally led adaptation can leverage the monitoring, evaluation, and learning (MEL) process to promote local agency and effective, equitable adaptation.
  • Monitoring, evaluation, and learning (MEL) is an important process for managing the complexity, uncertainty, and context-specificity of any adaptation intervention, including those that are locally led.
  • Different MEL approaches and methods are required that balance power, promote mutual accountability, value local knowledge and priorities, and create value for local actors.
  • This paper recommends a systemic shift toward MEL that is locally led, context-aware, and itself adaptive. It provides operational steps throughout the MEL cycle that funders, intermediary organizations, and other institutions seeking to support locally led adaptation can take.

List of Abbreviations

  • APC adaptation planning committee
  • BRACED Building Resilience and Adaptation to Climate Extremes and Disasters
  • CARIAA Collaborative Adaptation Research Initiative in Africa and Asia
  • CREATE Climate Resilience Evaluation for Adaptation through Empowerment
  • CSA climate-smart agriculture
  • CSV climate-smart villages
  • DCF Devolved Climate Finance (mechanism)
  • EAG environmental advisory group
  • FPIC free, prior, and informed consent
  • GIS geographic information system
  • ICT information and communications technologies
  • IIED International Institute for Environment and Development
  • KYC Know Your City (program)
  • LDC least developed country
  • LLA locally led adaptation
  • MEL monitoring, evaluation, and learning
  • MSC Most Significant Change (technique)
  • NGO nongovernmental organization
  • PEA political economy analysis
  • PRIME Pastoralist Areas Resilience Improvement through Market Expansion
  • SDI Slum/Shack Dwellers International
  • TAMD Tracking Adaptation and Measuring Development
  • UNFCCC UN Framework Convention on Climate Change

In its 2019 flagship report, the Global Commission on Adaptation called for an increase in decentralization of adaptation finance to the local level. In 2020, partners of the Global Commission on Adaptation developed principles for locally led adaptation. These principles serve as foundational guidance for supporting and enabling locally led adaptation, and call for actions such as devolution of decision-making, patient and predictable financing, flexible programming, and redressing of social inequalities. This paper is intended to support implementation of these principles by exploring the role of MEL in supporting or discouraging locally led adaptation and to discuss MEL practices and approaches that align with these principles of locally led adaptation.

A growing body of research and practice is grappling with the opportunities and challenges of MEL for adaptation. Emerging evidence and practice centers on the MEL’s role in building social capital and equalizing the power dynamics that underpin the use and governance of knowledge generated through MEL. Grounded in the principles of locally led adaptation, this paper builds on these fields of research and practice to discuss how MEL can support local agency and reflect local priorities and expertise in the interest of more effective and equitable locally led adaptation interventions.

About This Working Paper

This paper presents a review of opportunities and challenges in MEL for locally led adaptation. It is based on a review of MEL approaches, practices, and literature, as well as consultations with a range of actors involved in locally led adaptation MEL.

The paper proposes key considerations relevant throughout the MEL cycle that align with the principles for locally led adaptation:

  • Recognition of and response to structural inequalities
  • Promotion of local agency in decision-making
  • Understanding climatic and contextual uncertainty and complexity
  • Prioritizing learning processes
  • Generating value for local stakeholders

To operationalize these considerations, the paper discusses approaches, methods, and tools specific to each phase of the MEL cycle.

Conclusions and Recommendations

MEL of locally led adaptation requires a long-term systemic shift from traditional MEL to MEL that is locally led, balancing asymmetries in power and accountability. A range of MEL practices and tools are available in the short term to funders, intermediary organizations, and MEL practitioners to support locally led adaptation. However, limitations and practical constraints come with recommended changes to MEL practice, including changes in resources, time, and standard processes. Practical steps throughout the MEL cycle help navigate trade-offs and strengthen locally led adaptation interventions.

To support locally led adaptation through MEL, funders, intermediary organizations, and MEL practitioners should do the following:

  • Understand and respond to structural inequalities, including how power dynamics affect the MEL process and whose objectives it serves, and whether different worldviews and definitions of resilience are equally valued. This is critical to ensure that MEL reflects the realities and priorities at the local level and that results are accurate and unbiased, and to encourage the systemic change required for locally led adaptation.
  • Embrace design of MEL systems that give equal or greater priority to downward accountability and learning compared with upward accountability. Distinct processes for accountability and learning can address tension between these objectives.
  • Ask how MEL processes and outputs create value for local actors.
  • Take a local demand–driven approach to building capacity for self-directed MEL. If MEL intends to support locally led adaptation, local actors themselves should determine what capacity, external expertise, and access to information they need.
  • Enlist appropriate approaches and methods to navigate and better understand complexity and uncertainty.
  • Create locally appropriate and context-specific indicator frameworks and adaptation metrics. Adaptive capacity is particularly useful as a starting point in defining a set of indicators that apply across scales and contexts.
  • Adopt MEL technologies and process innovations as appropriate to increase local ownership.
  • Develop MEL systems to support adaptive management, experimentation, and learning from failure.
  • Collaborate with knowledge brokers who can translate terminology and concepts between external and local actors to enable ownership and contribution of local partners.
  • Ensure that learning is applied, documented, and shared horizontally at the local level and vertically to national and international levels as appropriate, while prioritizing the knowledge needs and gaps of local actors as a primary audience group.

As locally led adaptation progresses, so should theory, practice, and learning about MEL for locally led adaptation. This paper provides a high-level and preliminary analysis of MEL for locally led adaptation, and encourages continued learning from local practice and local expertise in adaptation MEL.

Introduction

As a process involving power and decision-making, monitoring, evaluation, and learning (MEL) can either perpetuate structural inequity or actively work against it (Emerson 2020). Conventional MEL risks further entrenching the inequities that make certain populations disproportionately vulnerable to climate change. However, there is a paucity of documented experience that addresses the entire cycle of adaptation MEL in a way that intentionally redresses power imbalances between donors and local partners, addresses biases in the knowledge and values that are privileged, and promotes downward accountability to local actors.

This paper discusses the role of MEL in encouraging equitable distribution of resources and agency through locally led adaptation (LLA). It provides an assessment of approaches and practices throughout the MEL cycle that help balance power and accountability, and discusses the advantages and limitations of these practices.

The MEL approaches and practices outlined in this paper are targeted primarily to donor and intermediary institutions and individuals involved in designing or delivering MEL in the context of locally led adaptation or where local ownership is a development priority. Since these groups currently have the power to direct resources to MEL, and influence MEL design and outcomes, they have a critical role in leveraging MEL to support LLA. Local actors, however, are facing the very risks LLA interventions seek to address, and are therefore most directly affected by MEL practices and approaches. As funders and implementers of climate adaptation interventions invest in and prioritize locally led adaptation, assessment of the implications of MEL for LLA can inform approaches and practices that build social capital and encourage equitable distribution of power.

The conclusions and recommendations of this paper are intended to provide practical suggestions for donor and intermediary institutions and MEL practitioners to better support locally led adaptation through MEL, and fundamentally rethink the role of MEL to support their climate resilience and social justice objectives. The advancements in MEL discussed in this paper represent an evolution of good practice in the interest of more effective and equitable adaptation outcomes.

The paper draws attention to the critical role of MEL in supporting locally led adaptation and offers good practices and lessons learned from relevant literature and practice. Section 2 presents key considerations for supporting locally led adaptation that are relevant throughout the MEL cycle, using a set of principles for LLA that were developed for the Global Commission on Adaptation as a conceptual framework. Section 3 walks through each phase of the MEL cycle and assesses specific approaches, methods, and tools that align with the principles of locally led adaptation. This assessment is based on the relevant bodies of research and practice on adaptation MEL, community-based adaptation, participatory MEL, and decolonization of MEL, as well as a set of case studies and consultations with stakeholders of MEL for locally led adaptation. Section 4 synthesizes the key findings from this assessment and presents conclusions and recommendations for MEL to better support and enable locally led adaptation.

1. Conventional MEL Does Not Support LLA Goals

Locally led adaptation recognizes that people closest to the effects of climate change, especially those facing structural marginalization, require the financing and decision-making power to ensure that adaptation investments reflect their priorities. It entails investing in solutions whereby local actors have the agency to make and influence these investment decisions and wield the innovation, flexibility, knowledge, and sense of urgency required to confront the crises of climate change and socioeconomic injustice (Conde et al. 2005).

Locally led adaptation is an emerging priority of some governments, bilateral and multilateral donors, and other institutions. Examples include programs such as the Least Developed Country (LDC) Initiative for Effective Adaptation and Resilience, County Climate Change Funds in Kenya, Enhancing Direct Access programs within funds like the Adaptation Fund and the Green Climate Fund, and localization movements in the humanitarian and development sectors (LIFE-AR 2019; Crick et al. 2019; GCF 2019; Adaptation Fund 2020a). Institutions from grassroots networks to international nongovernmental organizations (NGOs) and funders are calling for the systemic changes required to channel more finance for adaptation to the local level and increase the agency and decision-making authority of local-level actors. In support of these efforts, principles for LLA have been established, and research and guidance are emerging to create enabling environments for LLA.

As adaptation and climate-resilient development investments aim to support local leadership, the process of designing, monitoring, and learning from these efforts must also support local leadership. Locally led adaptation is an approach that challenges some of the underlying assumptions and structures prevalent in international development, emphasizing that, to be effective, adaptation investments must be driven by the local actors most directly affected by climate change. This paper defines MEL and LLA terminology as described in Box 1.

Box 1 | MEL and LLA Definitions

Definitions

Monitoring – Continuous assessment to provide stakeholders with timely, detailed information on the progress of an intervention. Monitoring seeks to support near real-time learning as part of a wider approach to flexible and adaptive management.

Evaluation – The process of understanding the results of an activity, policy, program, or institution. Useful and robust evaluation should inform both accountability and learning depending on the emphasis of the evaluation questions.

Learning – Understanding, by an intervention’s stakeholders, of what works, in what contexts, for whom, and why. Learning should be iterative and ongoing, support direct and rapid course correction, and enhance the capacities, particularly the adaptive capacity, of all stakeholders.

Local – May refer to the household, business, community, municipal, district, or province level as applicable to the context and requirements of a given adaptation intervention.

Local actors – Stakeholders of an adaptation intervention or their accountable representatives at the appropriate subnational level; refers to individuals or groups from the whole of society, including the subnational government, local enterprises, civil society, and community-based organizations, as well as households and individuals (Soanes et al. 2020).

Locally led adaptation – Characterized by local people and their communities having individual and collective agency over their adaptation priorities and how adaptation takes place (Soanes et al. 2020).

Agency – Individual or collective power to make decisions and take actions regarding one’s own current and future situations and experiences (Cole 2020).

Several shortcomings of common MEL approaches remain that must be addressed to support LLA. MEL that prioritizes upward accountability can reinforce asymmetrical power relations between donors and local actors (Ramasobana et al. 2020). Such requirements of upward accountability often create structures that do not support the learning and adaptive management essential to locally led adaptation (Spearman and McGray 2011). Restricted interpretations of MEL lead to problematic practices, such as a tendency to prioritize products and outputs over the process, and to underestimate the relevance of local knowledge and experience. This risks a process that disempowers or devalues local actors and is not critical of who is affected by the MEL process and how (Pauw et al. 2020; Christiansen et al. 2018; Holzapfel 2016). If MEL systems are not designed to create value and ownership for local partners, they risk being extractive and disrespectful of local actors’ time and knowledge (McCreless 2015; CARE 2014). These examples of MEL practices and approaches not only fail to support local actors but also undermine the goals of donors and intermediaries to support local priorities.

MEL is a social undertaking. It is applied to social challenges, and it can reinforce or confront societal values and social dynamics such as power and legitimacy. If not approached with the intention to support LLA, MEL practices may conflict with the objectives of locally led adaptation. Traditional MEL is not neutral in its approach to knowledge creation or dissemination. The evidence and knowledge generated in contexts of adaptation are valuable, but communities affected by adaptation interventions do not always experience this value. The theories of change and indicators guiding MEL do not often reflect local input, let alone local ownership. MEL tends to rely on external experts rather than supporting local capacity to implement MEL. These outputs and inputs of the MEL process also have socioeconomic, political, and environmental implications and are therefore political resources with financial consequences (Chilisa et al. 2016; Frehiwot 2019; Kawakami et al. 2008).

While many conventional MEL approaches and practices may not align with the principles of LLA, the generalized MEL cycle presents important opportunities to support LLA. MEL involves power, decision-making, trust, and communication among participants. These features serve as entry points to support locally led adaptation, including through collaborative, locally led decision-making; integration of social and gender equity considerations; building of trust, social capital, and local ownership; and generation of evidence and learning about LLA. MEL that leverages these opportunities is more likely to support effective, equitable LLA, including by grounding the design of adaptation in local realities, generating more complete evidence, and mitigating climatic and programmatic risk through iterative learning and adaptive management (Faulkner et al. 2015).

2. Key Considerations for Supporting LLA throughout the MEL Cycle

Eight foundational principles outline the basic requirements for finance for adaptation that is accessible to and owned by appropriate local actors, and provide a framework for assessing how MEL practices support locally led adaptation. These principles were developed for the Global Commission on Adaptation by the International Institute for Environment and Development (IIED) in partnership with World Resources Institute (Soanes et al. 2021). They build on a decade of foundational work carried out by IIED, with Slum Dwellers International, Huairou Commission, the International Center for Climate Change and Development, and many others regarding financing for adaptation and resource access in communities vulnerable to climate change. The principles are grounded in the recommendations of the Global Commission on Adaptation’s Adapt Now report to increase the volume of devolved and decentralized funding available to local actors to identify, prioritize, design, implement, monitor, and learn from climate adaptation solutions (Global Commission on Adaptation 2019).

These principles serve as a basis for understanding the potential of MEL to support locally led adaptation. Table 1 summarizes the eight principles and is followed by a discussion of overarching considerations for MEL to support these principles of LLA.

Table 1 | Summary of the Eight Principles of Locally Led Adaptation from the Global Commission on Adaptation

Source: Soanes et al. 2021.

Putting these principles of LLA into practice through MEL entails several key considerations, drawn from relevant MEL research and practice. The remainder of this section discusses tenets for supporting LLA that are relevant throughout the stages of MEL: design and planning, monitoring, evaluation, and learning.

Recognizing and addressing how structural inequalities affect MEL reduces bias. Structural inequalities, such as those related to gender, race or ethnicity, and socioeconomic status, can manifest in different ways in MEL. Unequal value may be attributed to different worldviews and perspectives, biasing the outputs of the MEL process toward the worldviews and perspectives of participants with the most power (Segone 2012; Joyce 2020). For example, if a planning workshop does not provide a safe and inviting environment for participants with firsthand adaptation experience to speak up, then the outcomes of the workshop, such as a theory of change or indicator framework, will not reflect a complete understanding of challenges and possible solutions. Structural inequalities may be present between local actors and external actors (such as donors and MEL practitioners), among actors at the local level, between local government and civil society, or between women and men (Morchain et al. 2019). As such, it is important for LLA interventions to deliberately encourage equity.

Steps can be taken throughout the MEL cycle to mitigate the effect of structural inequalities on MEL, including conducting gender and social equity assessments, ensuring balanced and representative decision-making structures, and including indicators to understand the equity of decision-making processes. In applying Oxfam’s Vulnerability and Risk Assessment methodology in Malawi, Botswana, and Namibia, Morchain et al. (2019) acknowledged how their position as academics would bias the outcomes of the assessment. They broadened the framing of the assessment to include development priorities, in addition to climate priorities, and used translators and breakout groups to mitigate the effect of preexisting cultural norms and power dynamics in group discussions (Morchain et al. 2019).

The importance of local agency applies to decisions made throughout the MEL cycle. For an intervention to be genuinely locally led, local actors must have agency over their priorities and how adaptation takes place. Extended to MEL, local actors would have agency over the priorities of the MEL system and how MEL takes place in support of the intervention. Ensuring local agency in MEL starts with local ownership to help shape the overall purpose of the MEL system and the methods and tools for data collection and analysis, and with actively centering learning processes and products on the needs and demands of local actors (Silva Villanueva 2011; Faulkner et al. 2015).

Increasing access to information by tailoring knowledge products and tools to local audiences, and enhancing capacity to engage in MEL, supports local agency. Local universities and civil society organizations have a role to play as climate knowledge brokers, transforming data and information into knowledge for practical adaptation interventions, often working with local actors to coproduce knowledge on adaptation experience. Adaptation at Scale in Semi-arid Regions is a university network consortium that translates the experiences and input of communities on the front lines of climate change, giving them the opportunity to influence adaptation policy responses. Similarly, the University of Arizona’s Cooperative Extension system provides insights on context-specific local adaptation, researching adaptation processes and disseminating knowledge about local adaptation interventions to help both local government and local actors, particularly farmers, make informed decisions about adapting to changing climatic conditions (Brugger and Crimmins 2015). In an example of local government tailoring adaptation knowledge to its constituents, the Planning Institute of Jamaica shared lessons learned about resilient agriculture by tailoring knowledge products and communications to different local audiences to support decision-making, including local farmers and government utility managers (Adaptation Fund 2020b).

MEL is a process for navigating the complexity, uncertainty, and context-specificity of LLA. Given the unpredictable nature of climate change and the complexity of social, economic, and ecological factors that affect adaptation, it can be hard to know what the outcomes of LLA solutions will be. MEL can provide a process for innovation, experimentation, and adaptive learning to help manage this complexity and uncertainty. Developing a participatory theory of change helps generate a shared and iterative understanding of local context and the many different factors that may affect the outcomes of the intervention. In monitoring, it becomes important to develop indicator frameworks that integrate social, environmental, and economic factors (such as those based on socioecological systems), and reflect the connections and feedbacks linking human and natural systems (Olsson and Galaz 2012). At the learning stage there should be a focus on learning from unintended consequences and failures as well as successes by drawing on and combining several sources of knowledge and experience, including expert, scientific, and local or indigenous knowledge (Hulme 2015).

Creating value for local actors avoids extractive MEL. Even participatory MEL processes can be extractive of local actors’ time, knowledge, resources, and expertise if they do not explicitly create value for them (Wilmsen 2008). Mechanisms for downward accountability can be built into MEL processes through local actors’ active participation and by enabling them to assess donor performance (Ebrahim 2003; Van Zyl and Claeyé 2019). Indicator frameworks can support mutual accountability by including indicators that reflect local priorities and definitions of resilience, as well as indicators of agency and social inclusion in the adaptation process (Estrella and Gaventa 1998; Fisher 2014). Cocreation of methods can help ensure that evaluations are useful to local actors, donors, and evaluators alike. Tailoring evaluations to locally determined priorities can have the added benefit of incentivizing participation and interest in the evaluation and the adaptation intervention going forward (Dunkley and Franklin 2017; Fitzpatrick 2012). Learning questions and processes are another opportunity to prioritize the learning goals of local actors over those of external actors (Faulkner et al. 2015).

Effective learning processes support effective locally led adaptation outcomes. Learning is essential to embracing and understanding the complexity of adaptation, and is the process linking data and evidence to the resilience outcomes and other changes that adaptation interventions seek to create (Silva Villanueva 2011; STAP 2017). An adaptation strategy in its own right, learning is especially relevant for locally led adaptation because of its role in building adaptive capacity at the local level (Baird et al. 2014). Local actors learn from both successful and failed adaptation interventions through self-determined experimentation with different coping strategies, or through purposeful social learning processes, to prepare adequate responses and coping strategies to climate risks and intervention outcomes (Tschakert and Dietrich 2010). A fit-for-purpose learning process will reflect different learning needs, goals, and ways of learning among stakeholder groups, and legitimize the diverse types of evidence and knowledge that support them (Alessa et al. 2016).

The process of learning, versus knowledge products such as reports or briefs, is therefore a key consideration for LLA and is most effectively embedded throughout the MEL cycle. Supporting adaptive capacity through learning does require resources and time from all local and external partners involved and should be adequately planned and budgeted for (Tschakert and Dietrich 2010).

Applying these key considerations often entails practical constraints and challenges. The benefits of MEL processes that bring in a wider range of participants and introduce new elements may require additional time, resources, and capacity. Oxfam International recommends allocating 10–13 percent of a total budget to MEL (Carmona et al. 2018). Local and external partners alike may feel the burden of additional time, resources, and capacity. A challenge for those responsible for designing the MEL system is to balance the priority of local agency with the risk of overburdening local partners. Surveys or baseline studies that take more time than necessary, for example, can exhaust partners and make them less willing to engage in the MEL process (Flatters 2017). Emphasizing local agency to determine roles and responsibilities in MEL and adhering to the “do no harm” principle are strategies to help mitigate this risk.

3. MEL Approaches, Methods, and Tools to Support LLA

Drawing on examples and case studies, this section reviews approaches, methods, and tools specific to the four stages of the MEL cycle: design and planning, monitoring, evaluation, and learning. It includes discussion of lessons learned and benefits and challenges to implementing these practices in support of LLA.

3.1. Design and Planning

The methods and tools presented below support design and planning of a MEL system and have implications for the planning and implementation of adaptation interventions and the extent to which they are locally led.

Hiring local experts in senior MEL roles is an opportunity to support LLA. MEL systems that are designed and led by external experts can discount local knowledge and expertise. Such a practice perpetuates power imbalance and is less likely to result in comprehensive MEL systems (Estrella and Gaventa 1998; Van Zyl and Claeyé 2019). Locally available skills and expertise can be appraised in the planning phase of the MEL cycle. Additional resources may be required to engage local or national knowledge brokers and, where MEL experts are not available locally, to strengthen the capacity of local actors to lead MEL processes.

Theories of change can be used as a tool to ensure that adaptation interventions reflect local priorities and realities. Developing a theory of change is a process rather than a static product and can support LLA design by drawing attention to the assumptions and evolving understanding linking intervention activities and outcomes (Valters 2015). Local input to the theory of change process is most consequential if integrated from intervention inception. This may require a structural change to adaptation programming, however, as theories of change are often defined and designed before local partners are engaged. Routinely revisiting a theory of change throughout an intervention to test progress against locally determined outcomes and to integrate local knowledge and learning provides a deeper understanding of the link between climate change and the results of the intervention (NEF 2017).

Tools to build a detailed understanding of local stakeholder and contextual dynamics include political economy analysis, stakeholder mapping, and community-based climate vulnerability assessment. These three common tools can support assessment of relevant stakeholders, the nature of their vulnerabilities, and sociopolitical, economic, and environmental factors at play in a particular context. Political economy analysis (PEA) is concerned with how power and resources are distributed and contested. A useful PEA informs where locally driven opportunities for change may emerge and where constraints may need to be addressed, including why institutions matter (Teskey 2020). Stakeholder analysis and mapping is a way to identify an intervention’s key stakeholders, assess their interests, needs, and incentives, and clarify how these may affect and inform an intervention’s design and delivery (Rai et al. 2015). Early engagement with diverse stakeholders creates space for them to influence the intervention design and delivery process (Leventon et al. 2016). Community-based climate vulnerability assessment integrates local knowledge and engages communities in the formulation of LLA plans. It provides a clear starting point for the definition of appropriate intervention and wider climatic indicators (ADB 2018).

Creating a climatic baseline and climatic monitoring system is key to understanding the outcomes of LLA. Once a detailed understanding of local stakeholder dynamics has been established, a corresponding and locally specific understanding of the climatic conditions and dynamics affecting those local stakeholders also needs to be developed where feasible. This is often described as a “declining climatic baseline.” Defining and measuring LLA results in the context of these climate dynamics helps actors plan, understand, and learn from an LLA intervention. This may require external expertise and can be constructed in collaboration with local stakeholders. Establishing a climatic baseline and then monitoring the climatic context in which an LLA intervention is delivered, in parallel to monitoring the results of the intervention itself, informs learning and adjustments required to continue the intervention in the face of climate changes. This is not a simple undertaking due to the temporal and spatial unpredictability of climatic shocks and stresses. For example, flood events are hard to predict and have locally specific impacts. Historical weather and climate records can provide useful climatic baseline data. Geographic information systems (GISs) and other technologies are increasingly being used to overlay and illustrate climatic baseline data and subsequent climatic changes.
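The logic of comparing monitored conditions against a historical baseline can be illustrated with a minimal sketch. The rainfall figures and site below are hypothetical, and this is only one simple way such a comparison might be computed; real climatic baselines draw on much richer records and methods.

```python
# Illustrative sketch of a climatic baseline comparison.
# All values are hypothetical, for demonstration only.
from statistics import mean, stdev

def climatic_baseline(historical):
    """Summarize a historical series as a baseline mean and spread."""
    return mean(historical), stdev(historical)

def anomaly(observation, baseline_mean, baseline_sd):
    """Express an observation in standard deviations from the baseline."""
    return (observation - baseline_mean) / baseline_sd

# Hypothetical annual rainfall totals (mm) for a local monitoring site.
historical_rainfall = [820, 790, 845, 800, 770, 810, 795, 830]
recent_rainfall = 700  # an unusually dry recent year

base_mean, base_sd = climatic_baseline(historical_rainfall)
z = anomaly(recent_rainfall, base_mean, base_sd)
print(f"baseline mean: {base_mean:.0f} mm, anomaly: {z:+.1f} sd")
```

A strongly negative anomaly like this would flag the recent year as well outside the historical baseline, prompting the kind of learning and course correction the monitoring system is meant to support.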

3.2. Monitoring

Monitoring can support LLA by providing information for agile course correction, mutual accountability, and learning about the process, context, and expected outcomes (Warner 2017). Some monitoring methods and technologies described below can reduce the burden on participants and ensure that information collected is available and useful to populations who have limited access to climate information. While these examples of technological solutions can support more equitable access to information and influence in the monitoring process, it is important to recognize that not all populations have equal access to technology, and that technology alone cannot address challenges such as local agency in adaptation that are rooted in an imbalance of power.

The concept of adaptive capacity is a useful starting point for defining appropriate indicators and adaptation metrics. Adaptation is often assessed through concepts such as risk, vulnerability, and resilience, or through proxies that are expected to lead to adaptation, such as adaptive capacity (Leiter and Pringle 2018). The UK-funded Building Resilience and Adaptation to Climate Extremes and Disasters (BRACED) program created a set of locally defined and context-specific subindicators based on adaptive capacity, including indicators related to new knowledge and skills, new attitudes and behaviors, and shifting institutional relationships, which ultimately supported new, locally driven policies and practices (BRACED 2015).

Monitoring frameworks offer potential to enable adaptive management and prioritize learning. Adaptive management enables experimentation and innovation, and sometimes expects that interventions fail. Monitoring systems that support adaptive management serve multiple functions: they generate evidence on an intervention’s management and quality of delivery, allow for comparison between different types of interventions, focus on near-real-time learning, monitor unintended consequences and failures as well as successes, and ensure shared understanding among MEL participants of how evidence will be applied to support positive change. The Global Learning for Adaptive Management initiative is supporting the wider uptake of adaptive management principles and practices, including in climate change adaptation. This includes a focus on learning about effective MEL for adaptive management. The key is to design LLA monitoring frameworks not just for data collection but also for intentional and collective reflection on the data to support ongoing and real-time learning, course correction, and decision-making (IDS 2020).

Creating open data governance structures can support locally led decision-making. Climate science, earth observation data, and lessons learned about LLA are more likely to be used by local decision-makers if they are accessible through appropriate and demand-driven platforms (Acclimatise 2018). Solutions to data accessibility and application challenges involve open climate data, collaboration, and investments in education and long-term alliances for the codesign and coproduction of knowledge. The principles of open access, privacy, and appropriately packaged and accessible data apply across scales from national and international GIS initiatives down to climate data generated at the local level.

Sharing information about LLA also supports local ownership, decision-making, and social capital. The Mercy Corps Pastoralist Areas Resilience Improvement through Market Expansion (PRIME) project in Ethiopia uses a technology-driven approach to inclusive decision-making and bottom-up data collection. Its online platform, Kiprojects, allows local NGO and program staff to suggest and submit new activities for rapid approval. Staff can also make real-time learning about project progress easily accessible for other team members across Ethiopia (Desai et al. 2018).

Free, prior, and informed consent (FPIC) is another data governance structure that specifically supports the protection of local and indigenous knowledge shared in the context of MEL. FPIC is a principle that informs the right to consultation and is enshrined in the UN Declaration on the Rights of Indigenous Peoples and the International Labour Organization Indigenous and Tribal Peoples Convention 169 (FAO 2016).

Locally led data collection, data analysis, and learning can be enabled and enhanced through appropriate information and communications technologies (ICT). Although technology cannot solve all problems related to accessibility and inclusion, science and technology innovations in data collection, analysis, and visualization can support MEL of LLA by making it more feasible to engage populations who are traditionally hard to reach or not given opportunities to express their views, as well as to directly engage local actors in large numbers. Remote sensing, use of social media for data analysis and engagement, electronic financial transactions, and mobile applications and devices are some examples that can support MEL. Locally driven demands for accountability and social activism by civil society encourage uptake of locally accessible ICT tools and platforms, such as through mobile phones. A prominent example is the Slum/Shack Dwellers International initiative Know Your City, described in Box 2.

Box 2 | Case Study: Slum Dwellers International’s Know Your City Campaign

Participatory, pro-poor, people-centered urban governance

Know Your City (KYC) unites organized slum dwellers and local governments in partnerships anchored by community-led slum profiling, enumeration, and mapping. The KYC initiative demonstrates citizen-generated data collection not simply to support external monitoring processes but also, and more important, as a form of social and political capital that defies traditional power dynamics for slum dwellers in cities. Slum/Shack Dwellers International (SDI) used this extensive data collection to bring perspectives and requirements from the ground up to decision-makers, like local government officials and city planners, in order to inform decisions about resources and adequate responses.

Local ownership throughout the MEL cycle

Young people in informal settlements owned their role in MEL activities (digital data capture and peer-to-peer exchanges) and collaborated in the generation of learning and knowledge-sharing, including media and films centered on living in slums and informal settlements. The KYC campaign created data-sharing and information systems that were transparent and built trust between campaign administrators and local communities. People who live in informal settlements were trained to create a community profile through learning-by-doing approaches to monitoring, gaining data collection and management skills.

Tech-enabled, citizen-generated data

In-person and widespread monitoring ensured that local perspectives and voices were anchored in the campaign outcomes. The Ugandan SDI Alliance and partners built the capacity of enumerators living in slums to use open-source software and handheld GPS devices to identify community assets and risks. Enumerators were also taught how to record and analyze data and produce reports. After local participants validated mapped data, initial digital maps were produced. Local participants then used this information to negotiate with local authorities on potential community development initiatives. The locally led data collection methods applied in the KYC case demonstrate the value not only of the data collected but also of the process to broaden understanding and accountability within the community (horizontal), and on multiple scales among the community, local government, state-level actors, intermediaries, and the private sector (vertical).

Sources: Bolnick et al. (2018); Antonio et al. (2012).

3.3. Evaluation

There is increasing recognition that conventional evaluations (often characterized as project-focused, ex-post, and designed and delivered by external international evaluation teams) are not fit for purpose or supportive of LLA (IPCC 2014). LLA is inherently subjective and rooted in local context. There are clear benefits for both local participants and funders in tailoring evaluations to ensure that resources are aligned with local needs and desired outcomes. This often requires flexible evaluations and cocreation of evaluation processes. Imposing externally constructed outcomes for adaptation interventions risks misaligning resources with what is needed and valued at a local level, and relying on evaluation methods that will not effectively measure LLA outcomes.

The approaches outlined below draw primarily from influences of participatory action research, developmental evaluation, and realist evaluation. They describe methods that are tailored-for-purpose, context-specific, and use evidence generated from evaluations for the benefit of local stakeholders (Pawson and Tilley 1997; Patton 2008).

Several conventional evaluation methods are closely aligned with the principles of local agency and prioritize learning.

  • Tracking Adaptation and Measuring Development (TAMD) is a standard methodology employed where participatory evaluation processes are the priority, including participatory data collection and indicator development that ensures local agency in the evaluation process (Brooks et al. 2013).
  • Most Significant Change (MSC) is a flexible participatory monitoring or evaluation technique in which evaluation practitioners and local stakeholders collect narratives of change following an intervention to understand and highlight the most significant changes resulting from an intervention. MSC narratives and the value attributed to them are validated by other participant stakeholders in the community (Davies and Dart 2005).
  • Appreciative inquiry is a change management process used as an evaluation tool focused on the strengths of an intervention, learning what is working well and investing in the factors that sustain successful outcomes. The inquiry itself is a collaborative effort, capturing the positive features of an intervention and encouraging continuous positive change (Acosta and Douthwaite 2005).
  • Climate Resilience Evaluation for Adaptation through Empowerment (CREATE) is an evaluation method developed by the International Union for Conservation of Nature for policymakers, field practitioners, or local actors. It is based on community-based vulnerability assessments but has been adapted as a tool for community self-evaluation. CREATE is flexible, designed for learning and experimentation, and aims to assess vulnerabilities and adaptive capacities, with the added value of supporting local actors in identifying pilot activities and strategic interventions as well as short-, medium-, and long-term adaptation activities (Shott and Mather 2012).

The climate-smart villages approach described in Box 3 provides a case of participatory, locally driven evaluation tailored to hyperlocal (village) contexts and scaled up for global learning through an accessible database.

Box 3 | Case Study: Evaluating Adaptive Interventions across Climate-Smart Villages

Collaborative evaluation approach

The climate-smart villages (CSV) approach to agricultural research for development aims to help agricultural communities across Asia, Africa, and Latin America adapt to climate impacts. A CSV project has evaluated technological and institutional responses to coping with climate variability and impacts in agriculture through the use of climate-smart agriculture (CSA). Following the rigorous, collaborative evaluation activity, lessons learned were shared through social learning platforms.

The CSV approach was designed to bridge the gap between climate projections and practical agriculture techniques that local actors could use to adapt to climate shocks and stressors. It involved farmer-led assessments and iterative learning and feedback, applied at different scales from local plots to farms, households, and communities.

Collaborative generation of evidence and data collection

CSA techniques are evaluated using a variety of methods, including surveys, farmer group evaluations, and ICT-based feedback tools such as crowdsourcing, to coproduce evidence. The CGIAR research program Climate Change, Agriculture, and Food Security and its partners designed a multiscale evaluation that was accessible to local actors through a digital platform, reflecting local experiences. In Tanzania, one evaluation tool employed was the 5Q method, which was cost-effective, adaptable, and straightforward, reflecting the farmer’s experience and progress at the local level in real time. 5Q uses “feedback rounds” of short, simple surveys based on five tailored questions, delivered through automated voice surveys and by local technicians speaking directly with farmers.

Lessons learned are widely shared and accessible

Local stakeholders provide input into the design of CSV techniques based on their knowledge of risk management and then assess other applied CSV techniques. Local actors learn about the benefits and barriers of CSV through farmer-to-farmer exchanges, ICT-based tools, farming fairs, women’s organizations, and sharing videos of successful technologies and practices, often supported by local government engagement. The CSV approach is valuable for stakeholders who benefit from the real-time feedback and access to the lessons and evaluations from other CSVs through an online platform.

Sources: Aggarwal et al. (2018); Jarvis et al. (2015).

Subjective evaluations and self-assessments integrate local stakeholders’ values, perspectives, and perceptions of risk. Béné et al. (2019) found that subjective understanding of resilience and a sense of self-efficacy have a strong positive impact on households’ ability to effectively recover from shocks. These results suggest that local actors who have a clearer understanding of their resilience are better able to cope with the effects of climate change (Béné et al. 2019).

Use of subjective measures of climate resilience is an emerging approach that offers a simple, efficient, and locally driven alternative to predefined resilience indicators or indices. Subjective measures are used in contexts where capturing real experiences from the bottom up is a priority, when local stakeholder experiences are assessed and compared over time, and where additional methods of measuring resilience complement subjective definitions (Jones and Tanner 2017). The Subjectively Evaluated Resilience Score was used for household resilience surveys in Uganda to understand the resilience capacity of local actors and establish a baseline for an ongoing impact evaluation (Jones 2019). Similarly, Reckien et al. (2013) developed vulnerability maps based on interviews with five socioeconomic groups in South New Delhi. Local actors’ subjective experiences of physical climate impacts and abstract climate concepts were used to determine potential causes of vulnerability and future adaptation options. The vulnerability maps indicated that lower-income groups were more affected by climate-related impacts and would benefit most from infrastructure investments. Subjective measures of resilience and vulnerability create space for a context-specific and localized assessment of an adaptation intervention according to the needs and logic of the community whose resilience is in question.

Culturally responsive evaluations can amplify local voice and support equitable outcomes. Hood et al. (2015) provide comprehensive guidelines for conducting culturally responsive evaluation. The guidelines recognize the centrality of culturally defined values and beliefs to any evaluation. In culturally responsive evaluation, outcomes are valid if evidence generated is supported by the cultural and local context, as opposed to validity defined by conventional definitions of research rigor. Culturally responsive evaluation specifically aims to address bringing equity to evaluations through its focus on historically marginalized groups (Hood et al. 2015).

Johnston-Goodstar (2012) suggests that evaluation advisory groups (EAGs) are ideally suited to bringing evaluators and indigenous communities together to discuss the evaluation process. Using language and tools acceptable to indigenous or context-specific worldviews, and centering indigenous voices, EAGs present an opportunity to examine differences in values, norms, and assumptions, and reflect on power relations that might influence the outcome of an evaluation or the evaluation process. For example, the Swinomish Indian Tribal Community in the U.S. state of Washington informed the design of evaluation metrics for a community planning intervention following a process similar to EAGs. After consultations with tribal elders had established a mutual understanding of the evaluation process, “climate change health” metrics were integrated into a conventional evaluation and planning process. Cocreating an evaluative process was also a valuable decision-support and learning tool for the Swinomish community and in other evaluations based on indigenous knowledge (Donatuto et al. 2020; Kerr 2012).

Although not uniquely tailored to locally led action and participation, the UN Children’s Fund guide to evaluation for equitable development draws on decolonial and feminist approaches to evaluation, indigenous knowledge, and human rights practice and could be adapted to the LLA context (Segone 2012). Another resource is the Equitable Evaluation Project, a peer-to-peer platform that encourages evaluators to reflect on cultural considerations and racial bias in evaluation processes, tools, and methods, and offers approaches that can create positive social change and equitable outcomes (Equitable Evaluation 2017).

3.4. Learning

Common methods of integrating learning into adaptation interventions include regular learning reviews, use of online knowledge-sharing platforms, facilitated learning events or dialogues, and critical reflection through formal evaluation (Harvey et al. 2017). Below we assess a selection of less common approaches, methods, and tools for learning that are especially relevant to the LLA context.

Distinct processes for accountability and learning can address tension between these objectives. Reporting on target outputs for upward accountability purposes may discourage a culture of learning from failure and flexibility to adapt, and can consume intervention resources like team member time (Spearman and McGray 2011). If integrated from the start of an intervention, a two-track model with dedicated workstreams and resources for separate accountability and learning functions can encourage learning by removing the disincentive to report failure (Fisher and Anderson 2018; Smith 2020). The BRACED program has a dedicated “Fund Manager” who accounts for the effectiveness of the development assistance funding that supports the program, as well as a dedicated “Knowledge Manager” who is responsible for evidence-generation and learning about resilience (BRACED and Bond 2019). Donors can also allocate specific resources for learning or innovation funds to ensure adequate resources and commitment to learning objectives (Harvey et al. 2017). For example, the Collaborative Adaptation Research Initiative in Africa and Asia (CARIAA) had a designated “Opportunities and Synergies Fund,” which supported learning and research-into-use for adaptation interventions.

Social learning supports collaborative decision-making for adaptation contexts (Cundill and Rodela 2012). Social learning can support LLA as a process that entails active engagement and agency, recognizing that information, consultation, and participation are insufficient for effective adaptation (Collins and Ison 2009; Cundill et al. 2014).

Social learning is still somewhat ambiguously defined, posing challenges for implementation (Medema et al. 2014). CARIAA addressed this challenge by focusing on creating an enabling environment for social learning, rather than developing a prescriptive learning approach. The enabling conditions the initiative emphasized align with others identified in this paper, including flexibility in resource allocation and scope and separating MEL’s learning and accountability functions (Ensor and Harvey 2015). Initial findings from community adaptation interventions indicate that social learning can bring economic benefits and community cohesion if they are based on sustained engagement with relevant local stakeholders, capacity development for learning, reflection and cocreation of new knowledge, and challenging institutional barriers (Van Epp and Garside 2019).

Prioritizing self-directed learning and creating a learning culture can promote sustained local ownership of LLA (Smith 2020). If not intentionally locally driven, learning can be top-down and not necessarily meet the learning needs of local actors (Valters 2015). One of many examples of self-directed learning is the practice by farmers in Western Kenya of integrated soil fertility management. Multiple years of group learning led to scaling-out of farming practices and expansion of scope to include additional income-generating activities, suggesting that the advantages of locally owned learning may be worth trade-offs such as additional investment of time, resources, and capacity building (Ramisch et al. 2006).

In South Sulawesi Province, Indonesia, the Climate Adaptation through Sustainable Urban Development research project tested various qualitative and quantitative participatory evaluation methods to understand how well its water resilience project was creating a learning culture. The “Factors of Success” method, a collaborative group exercise that identifies and maps ideas of project success over time, and the “Obstacles and Enablers” method, in which participants reflect on potential “obstacles” to project success as well as potential “enablers” to overcoming these obstacles, were two qualitative methods that supported critical reflection and anticipatory learning. They could be integrated into a MEL system to promote a learning culture and participatory learning about LLA (Larson et al. 2016).

The case study in Box 4 uses an example from the Devolved Climate Finance mechanism piloted in Mali, Tanzania, Senegal, and Kenya to illustrate how bottom-up learning can support locally led adaptation investments and decision-making.

Box 4 | Case Study: Local-Level Climate Fund Decentralization

The Devolved Climate Finance (DCF) intervention places local actors at the heart of the adaptation management and financing process and seeks to improve the responsiveness of national decentralization policies to local impacts of climate change. The program supports community adaptation initiatives and local governments through prioritized investment in public goods that have a high socioeconomic impact. These public-good investments are identified and prioritized by local representatives against a devolved climate finance budget managed by the local government.

Adaptation planning committees complement learning-oriented MEL systems

The DCF MEL system is based on the TAMD framework and centered on adaptive and flexible management across local and national levels. The DCF mechanism is intended to strengthen existing monitoring, reporting, and verification processes in devolved government financing and planning processes. TAMD assesses the quality and scope of climate risk management investments and practices, then evaluates the local adaptation outcomes and impacts. In Tanzania in 2014, adaptation planning committees (APCs) were established to build trust between donors and local stakeholders. APCs at the ward or communal level had the autonomy to establish budget priorities, while regional APCs would improve recommendations without vetoing local budget priorities. This mechanism improved the rigor of the resilience planning and ensured that local actors’ needs were addressed while reassuring donors through a collaborative investment oversight process.

Bottom-up learning to inform decision-making

Lessons learned from testing adaptation techniques at the village level are collated through wide community consultations designed to represent diverse social groups and people often marginalized in decision-making, particularly women and young people. APCs and existing village assemblies are used to inform adaptation intervention strategies and spending. The combination of participatory planning tools used in the DCF planning process with local government authorities enabled local actors to articulate their livelihood strategies and led to subjective explanations of community resilience and well-being, as well as local resource use. The lessons learned from these participatory processes generated knowledge and evidence that local government authorities used to inform decisions about local investments and spending and enhance the awareness of climate change impacts in their local area.

Source: DCF Alliance (2019).

Learning through games can facilitate the communication of complex systems and support learning and dialogue. This is a method employed in support of climate resilience and disaster risk reduction by the International Federation of the Red Cross and Red Crescent Societies Climate Centre. The Climate Centre’s games use storytelling, active learning, emotional engagement, and problem-solving for learning about and managing climate risk and natural disasters (Solinska-Nowak et al. 2018). This unique method requires unique resources, including game facilitators, designers, and willing participants. Although there is insufficient evidence available to link learning through games to adaptation outcomes in the long term, anecdotal evidence suggests strong learning outcomes, especially when participants are involved in the game’s design (Bachofen et al. 2012).

Peer-to-peer learning supports local agency and social learning for LLA. One example of peer-to-peer learning for adaptation was an exchange the Adaptation Fund hosted in 2019 focused on resilient water and agriculture. Representatives of various national implementing entities of the Adaptation Fund learned directly from one of their peers in Chile about agricultural resilience to drought, soil erosion, wildfire, and unpredictable precipitation. Following the exchange, they applied lessons learned to enhance agricultural resilience in their respective contexts, three of them within two months of the exchange (Adaptation Fund 2019).

Knowledge exchange platforms are a vehicle for long-term, cross-scale learning and the building of an evidence base around effective and equitable LLA. Documentation, including through collaboration with knowledge partners such as local universities and dedicated platforms for sharing learning, is a simple but important method to support learning for LLA (Harvey and Fisher 2013). Global platforms appropriate for vertical knowledge exchange about LLA include the annual Community-Based Adaptation conference, the annual UN Framework Convention on Climate Change (UNFCCC) Conference of the Parties, and the annual Gobeshona conference in Bangladesh.

Alternative methods of disseminating best practice for local adaptation horizontally among local actors can include the use of digital media. For example, the North East Network—a knowledge broker for adaptation interventions in the Indian states of Assam, Nagaland, and Meghalaya—uses participatory video to disseminate local knowledge on sustainable natural resource management practices (CDKN 2018).

4. Conclusions and Recommendations for MEL That Supports LLA

There is increasing recognition that adaptation happens at a local level, and that local actors therefore should have agency over the adaptation interventions affecting them. National governments, multilateral development banks, bilateral donors, and international NGOs are investing in locally led adaptation to achieve resilience objectives that prioritize the communities at the front lines of climate impacts.

We recommend 10 ways in which institutions and individual practitioners, especially funders and intermediary organizations, can begin to align MEL with the principles of locally led adaptation. The recommendations outline opportunities to promote local agency in decision-making about the MEL process and to ensure that local actors have influence in critical decisions throughout the MEL cycle on the purpose of the MEL system, the theory of change behind the intervention, learning goals and processes, metrics and indicators to assess progress, how data is collected and used, what external support is needed, and evaluation approaches and objectives. The following recommendations are listed in the general order of the MEL cycle, although many apply throughout MEL.

  • All actors in the MEL process should understand and respond to structural inequalities, including how power dynamics affect the MEL process and whose objectives it serves, and whether different worldviews and definitions of resilience are equally valued. This is critical to ensure that MEL reflects local realities and priorities and that results are not inaccurate or biased. Basing theories of change and indicators on subjective definitions of resilience and local and experiential knowledge provides a way for MEL to reflect local priorities and perspectives. By prioritizing goals of local actors and legitimizing local knowledge and experience, MEL systems should aim to redress power imbalances and be part of a funder’s strategy to ensure that investments reach communities and reduce social and gender inequalities.
  • Funders, intermediaries, and practitioners should embrace the design of MEL systems that give equal or greater priority to downward accountability and learning compared with upward accountability. Distinct processes for accountability and learning can address tension between these objectives. Funders, intermediary organizations, and MEL practitioners should allow local partners to determine learning goals and collaborate to decide which approaches will best support these goals. Emerging approaches and tools that can support learning for LLA include social learning, learning through games, peer-to-peer learning, and virtual learning.
  • Funders and intermediaries should ensure that MEL creates value for local actors. MEL processes and the data and learning they generate should prioritize the knowledge and learning needs of local actors equally with funders’ MEL goals, which frequently relate to measuring progress and performance.
  • Funders and intermediaries should take a local demand–driven approach to building capacity for self-directed MEL. For MEL to support locally led adaptation, local actors themselves should determine what capacity, external expertise, and access to information they need to lead MEL that supports their goals in the long term. Engage knowledge brokers in this process as needed.
  • MEL practitioners should adopt appropriate methods to navigate and better understand complexity and uncertainty with regard to climate dynamics and locally led adaptation contexts and settings. This entails creating a shared and comprehensive understanding of local stakeholder and contextual dynamics on the one hand, and a locally specific understanding of the climatic conditions affecting local stakeholders on the other. Establishing a climatic baseline and monitoring system and employing social assessment tools support understanding of relevant stakeholder and contextual dynamics as well as the outcomes of an LLA intervention.
  • MEL practitioners should create locally appropriate and context-specific indicator frameworks and adaptation metrics. Adaptive capacity is particularly useful as an adaptation metric and as a starting point in defining a set of context-specific LLA indicators. Indicator frameworks should also look to better integrate social, economic, and environmental dimensions of LLA, recognizing the spatial and temporal interconnectedness of these systems in adaptation. In order to support LLA, metrics and indicators must reflect what local actors view as important to measure and as reflecting their definitions of vulnerability, adaptive capacity, and strengthened resilience.
  • MEL practitioners should leverage MEL technologies and process innovations as appropriate to increase local ownership, voice, participation, and representation. Data collection and data analysis technologies across mobile applications, remote monitoring systems, climate and digital advisory services, and other technologies can be used both to increase access to climate data and information to inform decision-making, and to facilitate locally driven data collection and governance platforms.
  • With support from funders, MEL practitioners should develop systems to support adaptive management, experimentation, and learning from failure. This will entail promoting a culture that accepts learning from failure, building in time and resources to support iterative learning from the outset of an intervention, and designing monitoring frameworks that enable adaptive management for course correction. Encouraging near-real-time learning and openness about what works, what doesn’t, and why can foster a nimble intervention that is more likely to achieve its adaptation goals and help mitigate risks associated with financing and programming requirements for locally led adaptation.
  • Funders and intermediaries should engage knowledge brokers as appropriate to enable ownership and contributions by local partners. Differences in cultural norms and terminology should not interfere with a funder’s ability to engage local actors, or with local actors’ ability to engage in the MEL process. Knowledge brokers may have a role to play in facilitating local agency throughout the MEL cycle—for example, in the design and theory of change development, defining indicators, analyzing monitoring data, and sharing evaluation findings and lessons learned.
  • All actors involved in MEL should work together to ensure that learning is applied, documented, and shared. To promote more effective, equitable LLA interventions and build up an evidence base around LLA in the long term, learning should be shared and applied both horizontally at the local level and vertically through global knowledge exchange platforms, such as the annual UNFCCC Conference of the Parties, the annual conference on Community-Based Adaptation, the UN General Assembly, and the annual Gobeshona conference. Evidence and learning generated through the MEL process should, however, first meet the knowledge needs and gaps of local actors as a primary audience group.

The changes needed to enable widespread locally led adaptation will take time, and the same applies to shifting MEL for LLA. MEL practice that supports LLA also often entails significant changes or trade-offs compared with current adaptation MEL practice. However, there are meaningful, concrete steps that can be taken toward more supportive MEL while managing these constraints.

The authors of this paper believe that both the systemic shift and the practical steps can be taken in parallel—combining some relatively simple and immediate practical changes to existing MEL systems with a longer-term journey to systemically shift how MEL supports LLA interventions. Many of the practical steps suggested can be made in isolation or retrofitted to existing MEL systems. The systemic shift in approach requires that those funding, designing, and delivering MEL systems reconsider the value and use of MEL, placing the core emphasis on local ownership, downward accountability, and locally driven learning from the outset.

In keeping with the principle of coordinated action and investment for locally led adaptation, horizontal and vertical learning and knowledge-sharing will support this journey. Leveraging global platforms to share learning and exchange knowledge about LLA interventions can support more effective LLA in the long term.

MEL that supports locally led adaptation will also support improved LLA outcomes, enabling more context-sensitive, agile, equitable, and sustainable efforts to build climate resilience at the local level.

Acknowledgments

We are pleased to acknowledge our institutional strategic partners that provide core funding to WRI: the Netherlands Ministry of Foreign Affairs, Royal Danish Ministry of Foreign Affairs, and Swedish International Development Cooperation Agency.

The authors would like to give special thanks to our colleagues at the Least Developed Countries University Consortium on Climate Change (LUCCC), Luis Artur, David Mfitumukiza, and Sidat Yaffa, as well as our colleagues at Women’s Climate Centers International (WCCI), Rosemary Atieno, Godliver Kobugabe, Tracy Mann, Comfort Mukasa, and Rose Wamalwa for the expertise, insights, and examples they provided to inform this paper and review drafts. We would like to thank Christina Chan, Ayesha Dinshaw, Saleemul Huq, Nancy MacPherson, Arghya Sinha Roy, Saul Pereya, Cristina Rumbaitis del Rio, and Barry Smith for taking the time to review and strengthen the paper. We are grateful to Mahamat Assouyouti, Cristina Dengel, Susannah Fischer, Vincent Gainey, Kazi Eliza Islam, Timo Leiter, Heather McGray, and Nina Ullery for providing inputs and sharing their experience in the development of the paper. Thanks to Isabella Suarez for providing research support. Thank you to partners from the International Centre for Climate Change and Development, LUCCC, Slum/Shack Dwellers International, Huairou Commission, and WCCI, who convened at the Gobeshona 6 conference and provided inspiration and direction for the paper.

About the Authors

Tamara Coger is a Senior Associate at WRI. Contact: [email protected]

Sarah Corry is a consultant at Sophoi. Contact: [email protected]

Robbie Gregorowski is a Director at Sophoi. Contact: [email protected]

World Resources Institute is a global research organization that turns big ideas into action at the nexus of environment, economic opportunity, and human well-being.

Our Challenge

Natural resources are at the foundation of economic opportunity and human wellbeing. But today, we are depleting Earth’s resources at rates that are not sustainable, endangering economies and people’s lives. People depend on clean water, fertile land, healthy forests, and a stable climate. Livable cities and clean energy are essential for a sustainable planet. We must address these urgent, global challenges this decade.

We envision an equitable and prosperous planet driven by the wise management of natural resources. We aspire to create a world where the actions of government, business, and communities combine to eliminate poverty and sustain the natural environment for all people.

Our Approach

We start with data. We conduct independent research and draw on the latest technology to develop new insights and recommendations. Our rigorous analysis identifies risks, unveils opportunities, and informs smart strategies. We focus our efforts on influential and emerging economies where the future of sustainability will be determined.

We use our research to influence government policies, business strategies, and civil society action. We test projects with communities, companies, and government agencies to build a strong evidence base. Then, we work with partners to deliver change on the ground that alleviates poverty and strengthens society. We hold ourselves accountable to ensure our outcomes will be bold and enduring.

We don’t think small. Once tested, we work with partners to adopt and expand our efforts regionally and globally. We engage with decision-makers to carry out our ideas and elevate our impact. We measure success through government and business actions that improve people’s lives and sustain a healthy environment.

Mel Kaye – CV

chapter 1 research areas the story of mel

Mel Kaye (Melvin Kornitzky) was a software engineer, employed at Librascope and Royal McBee in New York and Los Angeles between 1956 and 1960. Among other programs, he developed a Blackjack game for two models of first-generation digital computers, the LGP-30 and the RPC-4000. His Blackjack game was highly regarded by fellow engineers and computer science students. Mel also conceived the ingenious hack described in The Story of Mel by Ed Nather, which was first posted to Usenet in 1983. Ed Nather’s story won Mel worldwide fame in the hacking community and is regarded as a seminal epic of hacking folklore.

Mel was born on January 14th, 1931, in Brooklyn, NY, as Melvin Kornitzky, son of Esther and Herman Kaye, and younger sibling of Shirley (born April 27th, 1926). During his early childhood, the family relocated to the West Coast and later settled in Lemon Grove, Los Angeles.

Mel passed away in 2018 and was buried in Pierce Brothers Valley Oaks-Griffin Memorial Park, Los Angeles.

The University Years

Around 1948, Mel began his undergraduate studies at UCLA, still under the name Melvin Kornitzky. That name appears in the 1951 university yearbook, but by the 1952 issue he was already listed as Melvin Kaye. We don’t know which faculty he attended or whether he majored in any discipline.

During his academic studies, Mel was a member of the Jewish fraternity Tau Delta Phi, which ended its activity in the 1970s.

After graduating in 1952, during the Stevenson-Eisenhower presidential race, Mel and his father Herman Kaye appeared in the Los Angeles voter registration rolls.

Programming in the Early Years of the Computer Era

In the summer of 1956, Mel joined the commercial department of Librascope, a technology unit of the General Precision company, which had won several government contracts to build military devices. General Precision’s main business was military technology for aerial and naval warfare. In the mid-1950s, it began developing a first-generation digital computer branded the LGP-30, incorporating a dedicated development unit into its Royal McBee subsidiary. Royal McBee was tasked with marketing and selling the LGP-30.

Mel worked as an application engineer in Librascope’s commercial development group, housed in the modern and luxurious building #3 on the company campus in Glendale, California. Among his other responsibilities, he helped teach customers how to program the LGP-30. About a month after he was hired, Mel and many of his colleagues were transferred to Royal McBee, where he wrote the Blackjack program that became the company’s flagship software.

When Royal McBee was preparing to launch its new computer, the RPC-4000, Mel ported the game to the new platform. In addition, he wrote parts of the machine’s assembler and helped Ed Nather write its Fortran compiler. In the early 1960s, following differences of opinion with management, Mel left Royal McBee. The only relics of his work are The Story of Mel, a few photocopies of handwritten computer code, and the preface he wrote for the ported Blackjack program.

It seems that Mel’s recruitment to Librascope was hastened, possibly due to increased demand from the company’s customers. In July 1956, the company’s internal magazine, the Librazette, published a piece about the company’s instruction and support program for LGP-30 users. Mel, 25 years old at the time, was mentioned among the engineers who led the program. Only a month later did the Librazette publish its greeting to new hires, in which Mel was listed as a member of the Engineering and Commerce department.

Mel’s work at Librascope and Royal McBee was diverse and apparently lasted about four years, until 1960. In July 1961, the Librazette greeted its veteran employees, nicknamed Libravetes – those who had completed five years or more with the company. Mel’s name was absent from this list.

Code Remnants

A few handwritten code fragments, probably in Mel’s own handwriting, survived from his work at Librascope. Below are some code sheets from Bill Breiner’s archive, along with a preface Mel wrote for the Blackjack game.

![Evaluation of a 4th-degree polynomial in fixed point, June 16th, 1959. Source: LGP-30 Subroutine Manual, Oct. 1960](https://mels-loop-media.s3.eu-north-1.amazonaws.com/mel-kaye-code-evaluation-of-4th-degree-polynomial-fixed-point-june-16th-1959-source-_lgp-30-subroutine-manual-oct-60_wgoukq.jpg)
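
The code sheet above performs the kind of task that filled an LGP-30 programmer’s day: evaluating a 4th-degree polynomial in fixed-point arithmetic, since the machine had no floating-point hardware. We don’t know the exact structure of Mel’s routine; the following is a minimal modern sketch of the same idea, using Horner’s rule and a hypothetical 16.16 fixed-point format (both the format and the sample polynomial are illustrative assumptions, not taken from the sheet):

```python
# Hypothetical sketch, not Mel's actual routine: 4th-degree polynomial
# evaluation via Horner's rule in 16.16 fixed-point arithmetic.

FRAC_BITS = 16
ONE = 1 << FRAC_BITS  # 1.0 in 16.16 fixed point

def to_fixed(x: float) -> int:
    """Convert a float to 16.16 fixed point."""
    return int(round(x * ONE))

def fixed_mul(a: int, b: int) -> int:
    """Multiply two 16.16 fixed-point numbers, rescaling the result."""
    return (a * b) >> FRAC_BITS

def poly4_fixed(coeffs, x: int) -> int:
    """Evaluate c4*x^4 + c3*x^3 + c2*x^2 + c1*x + c0 with Horner's rule.

    coeffs is (c4, c3, c2, c1, c0) in fixed point; x is in fixed point.
    Horner's rule needs only 4 multiplies and 4 adds for degree 4.
    """
    acc = coeffs[0]
    for c in coeffs[1:]:
        acc = fixed_mul(acc, x) + c
    return acc

# Example: p(x) = x^4 - 2x^2 + 1 at x = 1.5, i.e. (1.5^2 - 1)^2 = 1.5625
coeffs = tuple(to_fixed(c) for c in (1.0, 0.0, -2.0, 0.0, 1.0))
result = poly4_fixed(coeffs, to_fixed(1.5))
print(result / ONE)  # → 1.5625
```

On the real machine the same bookkeeping was done by hand: the programmer tracked the binary point across every multiply and shift, which is part of why subroutine-manual listings like the one above existed at all.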

Preface to Mel's Blackjack game. Full text here

Origins: Coast to Coast Migration

Little is known about Mel’s parents. Most of the relevant details in various databases pertain to the family of Mel’s father, Kornitzka.

Mel’s father, Herman Kaye, was born on May 2nd, 1905, in Warsaw, Poland, as Nechemje Kornitzka. At the tender age of two, he immigrated to the United States with his mother, Hela Kornitzka (Mel’s grandmother). They boarded the cargo ship Estonia, operated by the East Asiatic Company, to sail from Russia to the US. Upon arrival, they changed their family name to Kornitzky and settled in Brooklyn, New York. Later, they changed their name again, this time to the American-sounding Kaye.

Mel’s mother, Esther Fietelewitch, was born in 1907 to a Russian-Jewish family. Her father, Abraham, worked in construction, while her mother was a homemaker; the household language was Yiddish. The Fietelewitches immigrated to the United States in the 1920s and settled on Watkins Street in Brooklyn, New York. Esther grew up there with her two brothers and her sister: Benny, Sam, and Celia.

Herman Kaye and Esther Fietelewitch were married on June 27, 1925. Herman was in the TV manufacturing business and later started a successful mobile home company. Esther was a homemaker.

Herman Kaye passed away on December 16th, 1979, in Anaheim, California. Esther Kaye passed away on January 15th, 1994, in Orange County, California. The two are buried side by side in the Beth Olam cemetery in Los Angeles.

An American Family

On May 24th, 1953, Mel married Rita Bernstein, who was born in England on May 27th, 1933. Just two years before the marriage, on January 30th, 1951, she had boarded the SS Britannic in Liverpool, bound for New York, on her way to start a new life in the US. Shortly after the marriage, the couple applied for Rita’s American citizenship. She passed away on July 30th, 2011, and was buried in the Valley Oaks Memorial Park. Seven years later, Mel would find his final resting place next to her.

Along with Rita’s naturalization application, the growing Kaye family applied for citizenship for Mel’s father, Herman.

Final Words

Mel never sought (or even acknowledged) his fame in hacker folklore. In fact, only one person ever managed to track him down and get a few words out of him: Anthony Cuozzo, a computer programmer and hacker-folklore buff who grew to admire the character portrayed in The Story of Mel. Following a clue he uncovered during his search for Mel, Cuozzo wrote an email to [email protected] (possibly a contraction of Rita and Mel), hoping it would reach the legendary engineer. Our research in the Mel’s Loop project supports Cuozzo’s hunch.

Thus wrote Anthony Cuozzo in April 2012:

Within an hour, Mel replied:

Upon receiving the reply, Cuozzo tried to continue the correspondence, but Mel fell silent. There are probably more chapters in this story. The research continues.

Recommended Reading

Find more information about The Story of Mel in these articles:

  • The Story of Mel (annotated version in Mel's Loop homepage)
  • Preface to the Story of Mel: A Software Legend That Really Happened
  • Mel's Hack – The Missing Bits
  • The Story of Mel – Wikipedia
