
Project Evaluation Process: Definition, Methods & Steps

ProjectManager

Managing a project with copious moving parts can be challenging to say the least, but project evaluation is designed to make the process that much easier. Every project starts with careful planning—this sets the stage for the execution phase of the project, while estimates, plans and schedules guide the project team as they complete tasks and deliverables.

But even with a project evaluation process in place, managing a project successfully is not as simple as it sounds. Project managers need to keep track of costs, tasks and time during the entire project life cycle to make sure everything goes as planned. To do so, they rely on the project evaluation process and on project management software that helps them plan work, manage their team and evaluate project performance.

What Is Project Evaluation?

Project evaluation is the process of measuring the success of a project, program or portfolio. This is done by gathering data about the project and using an evaluation method that allows evaluators to find performance improvement opportunities. Project evaluation is also critical to keep stakeholders updated on the project status and any changes that might be required to the budget or schedule.

Every aspect of the project such as costs, scope, risks or return on investment (ROI) is measured to determine if it’s proceeding as planned. If there are road bumps, this data can inform how projects can improve. Basically, you’re asking the project a series of questions designed to discover what is working, what can be improved and whether the project is useful. Tools such as project dashboards and trackers help in the evaluation process by making key data readily available.



The project evaluation process has been around as long as projects themselves. But when it comes to the science of project management, project evaluation can be broken down into three main types or methods: pre-project evaluation, ongoing evaluation and post-project evaluation. Let’s look at the project evaluation process, what it entails and how you can improve your technique.

Project Evaluation Criteria

The specific details of the project evaluation criteria vary from one project or organization to another. In general terms, a project evaluation process goes over the project constraints, including time, cost, scope, resources, risk and quality. In addition, organizations may add their own business goals, strategic objectives and other project metrics.

Project Evaluation Methods

There are three points in a project where evaluation is most needed. While you can evaluate your project at any time, these are points where you should have the process officially scheduled.

1. Pre-Project Evaluation

In a sense, you’re pre-evaluating your project when you write your project charter to pitch to the stakeholders. You cannot effectively plan, staff and control a new project if you haven’t first evaluated it. Pre-project evaluation is the only sure way to determine the effectiveness of a project before executing it.

2. Ongoing Project Evaluation

To make sure your project is proceeding as planned and hitting all of the scheduling and budget milestones you’ve set, it’s crucial that you constantly monitor and report on your work in real-time. Only by using project metrics can you measure the success of your project and whether or not you’re meeting the project’s goals and objectives. It’s strongly recommended that you use project management dashboards and tracking tools for ongoing evaluation.

Related: Free Project Dashboard Template for Excel

3. Post-Project Evaluation

Think of this as a postmortem. Post-project evaluation is when you go through the project’s paperwork, interview the project team and principals and analyze all relevant data so you can understand what worked and what went wrong. Only by developing this clear picture can you resolve issues in upcoming projects.

Free Project Review Template for Word

The project review template for Word is the perfect way to evaluate your project, whether it’s an ongoing project evaluation or post-project. It takes a holistic approach to project evaluation and covers such areas as goals, risks, staffing, resources and more. Download yours today.


Project Evaluation Steps

Regardless of when you choose to run a project evaluation, the process always has four phases: planning, implementation, completion and dissemination of reports.

1. Planning

The ultimate goal of this step is to create a project evaluation plan, a document that explains all the details of your organization’s project evaluation process. When planning for a project evaluation, it’s important to identify the stakeholders and their short- and long-term goals. You must make sure that your goals and objectives for the project are clear, and it’s critical to have settled on the criteria that will tell you whether these goals and objectives are being met.

So, you’ll want to write a series of questions to pose to the stakeholders. These queries should include subjects such as the project framework, best practices and metrics that determine success.

By including the stakeholders in your project evaluation plan, you’ll receive direction during the course of the project while simultaneously developing a relationship with them. They will get progress reports from you throughout the project life cycle, and by building this initial relationship, you’ll likely earn their confidence that you can manage the project to their satisfaction.

project plan template for word

2. Implementation

While the project is running, you must monitor all aspects to make sure you’re meeting the schedule and budget. One of the things you should monitor during the project is the percentage completed. This is something you should do when creating status reports and meeting with your team. To make sure you’re on track, hold the team accountable for delivering timely tasks and maintain baseline dates to know when tasks are due.

Don’t forget to keep an eye on quality. It doesn’t matter if you deliver the project within the allotted time frame if the product is poor. Maintain quality reviews, and don’t delegate that responsibility. Instead, take it on yourself.

Maintaining a close relationship with the project budget is just as important as tracking the schedule and quality. Keep an eye on costs. They will fluctuate throughout the project, so don’t panic. However, be transparent if you notice a need growing for more funds. Let your steering committee know as soon as possible, so there are no surprises.
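If your schedule and cost data live in a spreadsheet or a tool export, even a small script can surface the tasks that are slipping during this phase. Here is a minimal Python sketch; the task fields (planned cost, actual cost, percent complete, baseline due date) and the numbers are illustrative assumptions, not a specific tool’s export format.

```python
from datetime import date

# Illustrative task records; the field names are assumptions, not a real tool's schema.
tasks = [
    {"name": "Design",  "planned_cost": 5000,  "actual_cost": 5400, "pct_complete": 100, "baseline_due": date(2024, 3, 1)},
    {"name": "Build",   "planned_cost": 12000, "actual_cost": 9000, "pct_complete": 60,  "baseline_due": date(2024, 4, 15)},
    {"name": "Testing", "planned_cost": 4000,  "actual_cost": 500,  "pct_complete": 10,  "baseline_due": date(2024, 5, 1)},
]

today = date(2024, 4, 20)

for task in tasks:
    over_budget = task["actual_cost"] > task["planned_cost"]
    behind = task["pct_complete"] < 100 and task["baseline_due"] < today
    if over_budget or behind:
        flags = ", ".join(f for f, hit in [("over budget", over_budget), ("past baseline date", behind)] if hit)
        print(f"Flag for steering committee: {task['name']} ({flags})")
```

Running a check like this alongside your status reports makes the "no surprises" conversation with the steering committee much easier to have early.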

3. Completion

When you’re done with your project, you still have work to do. You’ll want to take the data you gathered in the evaluation and learn from it so you can fix problems that you discovered in the process. Figure out the short- and long-term impacts of what you learned in the evaluation.

4. Reporting and Disseminating

Once the evaluation is complete, you need to record the results. To do so, you’ll create a project evaluation report, a document that provides lessons for the future. Deliver your report to your stakeholders to keep them updated on the project’s progress.

How are you going to disseminate the report? There might be a protocol for this already established in your organization. Perhaps the stakeholders prefer a meeting to get the results face-to-face. Or maybe they prefer PDFs with easy-to-read charts and graphs. Make sure that you know your audience and tailor your report to them.

Benefits of Project Evaluation

Project evaluation is always advisable and it can bring a wide array of benefits to your organization. As noted above, there are many aspects that can be measured through the project evaluation process. It’s up to you and your stakeholders to decide the most critical factors to consider. Here are some of the main benefits of implementing a project evaluation process.

  • Better Project Management: Project evaluation helps you easily find areas of improvement when it comes to managing your costs, tasks, resources and time.
  • Improved Team Performance: Project evaluation allows you to keep track of your team’s performance and increases accountability.
  • Better Project Planning: Project evaluation lets you compare your project baseline against actual project performance for better planning and estimating.
  • Better Stakeholder Management: Having a good relationship with stakeholders is key to success as a project manager, and a project evaluation report is an important tool for keeping them updated.

How ProjectManager Improves the Project Evaluation Process

To take your project evaluation to the next level, you’ll want ProjectManager, an online work management tool with live dashboards that deliver real-time data so you can monitor what’s happening now as opposed to what happened yesterday.

With ProjectManager’s real-time dashboard, project performance is measured in real time to keep you updated. The numbers are displayed in colorful graphs and charts, which you can filter to show just the data you want or drill down into for a deeper picture. These graphs and charts can also be shared with a keystroke. You can track workload and tasks because your team updates task status in real time, wherever they are and whenever they complete their work.

ProjectManager’s dashboard view, which shows six key metrics on a project

Project evaluation with ProjectManager’s real-time dashboard makes it simple to go through the evaluation process during the evolution of the project. It also provides valuable data afterward. The project evaluation process can even be fun, given the right tools. Feel free to use our automated reporting tools to quickly build traditional project reports, allowing you to improve both the accuracy and efficiency of your evaluation process.

ProjectManager's status report filter

ProjectManager is cloud-based project management software with a suite of powerful tools for every phase of your project, including live dashboards and reporting tools. Our software collects project data in real time and is constantly being fed information by your team as they progress through their tasks. See how monitoring, evaluation and reporting can be streamlined by taking a free 30-day trial today!



What is Project Evaluation? The Complete Guide with Templates


Project evaluation is an important part of determining the success or failure of a project. Properly evaluating a project helps you understand what worked well and what could be improved for future projects. This blog post will provide an overview of key components of project evaluation and how to conduct effective evaluations.

What is Project Evaluation?

Project evaluation is a key part of assessing the success, progress and areas for improvement of a project. It involves determining how well a project is meeting its goals and objectives, and it helps you decide whether a project is worth continuing, needs adjustments, or should be discontinued.

A good evaluation plan is developed at the start of a project. It outlines the criteria that will be used to judge the project’s performance and success. Evaluation criteria can include things like:

  • Meeting timelines and budgets - Were milestones and deadlines met? Was the project completed within budget?
  • Delivering expected outputs and outcomes - Were the intended products, results and benefits achieved?
  • Satisfying stakeholder needs - Were customers, users and other stakeholders satisfied with the project results?
  • Achieving quality standards - Were quality metrics and standards defined and met?
  • Demonstrating effectiveness - Did the project accomplish its intended purpose?

Project evaluation provides valuable insights that can be applied to the current project and future projects. It helps organizations learn from their projects and continuously improve their processes and outcomes.

Project Evaluation Templates

These templates will help you evaluate your project by providing a clear structure to assess how it was planned, how it was carried out, and what it achieved. Whether you’re managing the project, part of the team, or a stakeholder, these templates assist in gathering information systematically for a thorough evaluation.

Project Evaluation Template 1


Project Evaluation Template 2

Project Evaluation Methods

Project evaluation involves using various methods to assess the performance and impact of a project. The choice of methods depends on the nature of the project, its objectives, and the available resources. Here are some common project evaluation methods:

Pre-project evaluation

Pre-project evaluations are done before a project begins. This involves evaluating the project plan, scope, objectives, resources, and budget. This helps determine if the project is feasible and identifies any potential issues or risks upfront. It establishes a baseline for later evaluations.

Ongoing evaluation

Ongoing evaluations happen during the project lifecycle. Regular status reports track progress against the project plan, budget, and deadlines. Any deviations or issues are identified and corrective actions can be taken promptly. This allows projects to stay on track and make adjustments as needed.

Post-project evaluation

Post-project evaluations occur after a project is complete. This final assessment determines if the project objectives were achieved and customer requirements were met. Key metrics like timeliness, budget, and quality are examined. Lessons learned are documented to improve processes for future projects. Stakeholder feedback is gathered through surveys, interviews, or focus groups.

Project Evaluation Steps

When evaluating a project, there are several key steps you should follow. These steps will help you determine if the project was successful and identify areas for improvement in future initiatives.

Step 1: Set clear goals

The first step is establishing clear goals and objectives for the project before it begins. Make sure these objectives are SMART: specific, measurable, achievable, relevant and time-bound. Having clear goals from the outset provides a benchmark for measuring success later on.
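One way to make "measurable" and "time-bound" concrete is to record each goal with an explicit metric, target and deadline. The following is a minimal Python sketch of that idea; the class, field names and sample goal are hypothetical illustrations, not a prescribed format.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical structure for a SMART objective: the explicit metric, target and
# deadline are what make it measurable and time-bound.
@dataclass
class SmartGoal:
    description: str   # specific
    metric: str        # measurable
    baseline: float
    target: float
    deadline: date     # time-bound

goal = SmartGoal(
    description="Reduce average support-ticket resolution time",
    metric="Average resolution time (hours)",
    baseline=36.0,
    target=24.0,
    deadline=date(2025, 12, 31),
)

def on_target(current_value: float, g: SmartGoal) -> bool:
    """True once the measured value reaches the target (lower is better for this goal)."""
    return current_value <= g.target

print(on_target(22.5, goal))  # True: 22.5 hours beats the 24-hour target
```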

Step 2: Monitor progress

Once the project is underway, the next step is monitoring progress. Check in regularly with your team to see if you’re on track to meet your objectives and deadlines. Identify and address any issues as early as possible before they become major roadblocks. Monitoring progress also allows you to course correct if needed.

Step 3: Collect data

After the project is complete, collect all relevant data and metrics. This includes quantitative data like budget information, timelines and deliverables, as well as customer feedback and qualitative data from surveys or interviews. Analyzing this data will show you how well the project performed against your original objectives.

Step 4: Analyze and interpret

Identify what worked well and what didn’t during the project. Highlight best practices to replicate and lessons learned to improve future initiatives. Get feedback from all stakeholders involved, including project team members, customers and management.

Step 5: Develop an action plan

Develop an action plan to apply what you’ve learned for the next project. Update processes, procedures and resource allocations based on your evaluation. Communicate changes across your organization and train employees on any new best practices. Implementing these changes will help you avoid similar issues the next time around.

Benefits of Project Evaluation

Project evaluation is a valuable tool for organizations, helping them learn, adapt, and improve their project outcomes over time. Here are some benefits of project evaluation.

  • Helps in making informed decisions by providing a clear understanding of the project’s strengths, weaknesses, and areas for improvement.
  • Holds the project team accountable for meeting goals and using resources effectively, fostering a sense of responsibility.
  • Facilitates organizational learning by capturing valuable insights and lessons from both successful and challenging aspects of the project.
  • Allows for the efficient allocation of resources by identifying areas where adjustments or reallocations may be needed.
  • Provides evidence of the project’s value by assessing its impact, cost-effectiveness, and alignment with organizational objectives.
  • Involves stakeholders in the evaluation process, fostering collaboration, and ensuring that diverse perspectives are considered.

Project Evaluation Best Practices

Follow these best practices to do a more effective and meaningful project evaluation, leading to better project outcomes and organizational learning.

  • Clear objectives: Clearly define the goals and questions you want the evaluation to answer.
  • Involve stakeholders: Include the perspectives of key stakeholders to ensure a comprehensive evaluation.
  • Use appropriate methods: Choose evaluation methods that suit your objectives and available resources.
  • Timely data collection: Collect data at relevant points in the project timeline to ensure accuracy and relevance.
  • Thorough analysis: Analyze the collected data thoroughly to draw meaningful conclusions and insights.
  • Actionable recommendations: Provide practical recommendations that can lead to tangible improvements in future projects.
  • Learn and adapt: Use evaluation findings to learn from both successes and challenges, adapting practices for continuous improvement.
  • Document lessons: Document lessons learned from the evaluation process for organizational knowledge and future reference.

How to Use Creately to Evaluate Your Projects

Use Creately’s visual collaboration platform to evaluate your project and improve communication, streamline collaboration, and provide a visual representation of project data effectively.

Task tracking and assignment

Use the built-in project management tools to create, assign, and track tasks right on the canvas. Assign responsibilities, set due dates, and monitor progress with Agile Kanban boards, Gantt charts, timelines and more. Create task cards containing detailed information, descriptions, due dates, and assigned responsibilities.

Notes and attachments

Record additional details and attach documents, files and screenshots related to your tasks and projects using the integrated per-item notes panel and custom data fields, or embed files and attachments right on the workspace to centralize project information. Work through the project evaluation together with teammates using full multiplayer text and visual collaboration.

Real-time collaboration

Get any number of participants on the same workspace and track their additions to the progress report in real time. Collaborate with others on the project seamlessly with true multi-user collaboration features, including synced previews, comments and discussion threads. Use Creately’s Microsoft Teams integration to brainstorm, plan and run projects during meetings.

Pre-made templates

Get a head start with ready-to-use progress evaluation templates and other project documentation templates available right inside the app. Explore thousands more templates and examples for various scenarios in the community.

In summary, project evaluation is like a compass for projects, helping teams understand what worked well and what can be improved. It’s a tool that guides organizations to make better decisions and succeed in future projects. By learning from the past and continuously improving, project evaluation becomes a key factor in the ongoing journey of project management, ensuring teams stay on the path of excellence and growth.

More project management related guides

  • 8 Essential Metrics to Measure Project Success
  • How to Manage Your Project Portfolio Like a Pro
  • What is Project Baseline in Project Management?
  • How to Create a Winning Project Charter: Your Blueprint for Success
  • Your Comprehensive Guide to Creating Effective Workback Schedules
  • What is a Work Breakdown Structure? and How To Create a WBS?
  • The Practical Guide to Creating a Team Charter
  • Your Guide to Multi-Project Management
  • How AI Is Transforming Project Management
  • A Practical Guide to Resource Scheduling in Project Management

Join the thousands of organizations that use Creately to brainstorm, plan, analyze, and execute their projects successfully.


Amanda Athuraliya is the communication specialist/content writer at Creately, an online diagramming and collaboration tool. She is an avid reader, a budding writer and a passionate researcher who loves to write about all kinds of topics.


From good to great: everything you need to know about effective project evaluation.

December 30, 2023

For project managers, each project is like nurturing a baby—it needs constant attention to grow strong and reach its full potential. That’s why monitoring your project’s real-time progress and performance is the secret to consistent success. 

Project evaluation is your best ally in assessing how effectively your project aligns with its objectives and delivers value to stakeholders. Uncovering these evaluation insights will empower you to make smart decisions that significantly improve your business outcomes. 

Eager to discover the secrets of successful project evaluation? You’re in for a treat! 🍬

In this article, we’ll guide you through the five crucial steps to master your project evaluation process . Plus, we’ll delve into the perks and pitfalls of project evaluation and explore its primary types. Buckle up, and let’s begin!

What is Project Evaluation?



Assessing a project’s success involves project evaluation—a meticulous process of gathering detailed project data and using project evaluation methods to uncover areas for performance improvement.

Project evaluation isn’t just a routine check—it keeps stakeholders informed about project status, opportunities for enhancement, and potential budget or schedule adjustments. ✅

Every part of the project, from expenses and scope to risks and ROI, undergoes analysis to ensure alignment with the initial plan. Any hurdles or deviations encountered along the way become valuable insights that guide future improvements.

Tools like project dashboards and trackers are crucial in facilitating the evaluation process. They streamline access to crucial project data, making it readily available for informed decision-making and strategic adjustments.

What Are the Main Types of Project Evaluation?

In any project’s lifecycle, there are three pivotal moments demanding evaluation. While project evaluation can happen at any time, these particular points deserve official scheduling for a more structured approach.

Pre-project evaluation

Before starting a project, assessing its feasibility for successful completion is essential. This evaluation typically aligns with the development stage the project is currently in, and it’s a cornerstone for its effective execution. In this type of evaluation, you must establish a shared understanding of objectives and goals among all stakeholders before giving the project the thumbs up.

Ongoing project evaluation

Using metrics throughout the project’s lifecycle is important for confirming that completed tasks align with benchmarks. This includes staying within budget, meeting task completion rates, and ensuring overall work quality. Keeping the team focused on the initial objectives helps them stay on course as the project evolves.

Post-project evaluation

After project completion, analyzing impacts and outcomes is your number one priority. Outcomes provide a yardstick for measuring the project’s effectiveness in meeting predefined objectives and goals so you can see what worked and what didn’t. Evaluating impacts helps you effectively address and resolve issues in future projects.

What Are the Benefits of Performing a Project Evaluation?

The advantages of conducting a project evaluation span from internal team growth to external triumphs. Here’s a rundown of the main benefits:

  • Tracking the project’s progress: It helps track team performance across projects, providing a record of improvements or setbacks over time
  • Identifying improvement areas: By recognizing trends and patterns, evaluations pinpoint areas for improvement within the project or team processes
  • Measuring impact: Project evaluation quantifies the impact of your project, providing concrete metrics and feedback to measure the success of your endeavors
  • Engaging stakeholders: If you involve stakeholders in the evaluation process, you’ll reassure them of project quality, fostering trust and collaboration
  • Encouraging accountability: Project evaluation promotes accountability and reflection among team members, motivating them to work hard for continuous improvement
  • Informing future planning: Insights you gather from evaluations influence future project plans, allowing for adjustments based on past project performance and lessons learned 👨‍🏫

How to Conduct a Project Evaluation in 6 Steps

Unlocking the path to a successful project evaluation isn’t just about following a checklist—it’s about leveraging the right project management tools to streamline the journey!

We’re here to provide you with the six essential steps to take during a project evaluation process and equip you with top-notch tools that’ll help you elevate your evaluation game. Let’s explore! 🧐

Step 1: Identify project goals and objectives

Crafting solid goals and objectives during your project’s development is like drawing a map for your team—it sets the course and direction.

Goals also play a crucial role in shaping the evaluation process tailored to your objectives. For instance, if your goal is to enhance customer satisfaction, your evaluation might focus on customer feedback, experience metrics, and service quality.

Luckily, the super important step of setting project goals is a piece of cake with an all-in-one project management solution like ClickUp. This powerful tool streamlines your project endeavors and kickstarts your project journey by helping you define clear goals and objectives—all in one place! 🌟

ClickUp Goals

With ClickUp Goals, nailing your targets becomes effortless. Set precise timelines and measurable goals, and let automatic progress tracking do the heavy lifting. Dive in by adding key details—name your goal, set the due date, assign a team member—and you’re ready to roll!

ClickUp equips you to:

  • Establish numerical targets for precise tracking
  • Mark Milestones as done or pending to track progress
  • Keep an eye on financial goals for better budget management
  • List individual tasks as targets to tackle complex objectives

Highlight pivotal moments by tagging them as Milestones and transform large goals into manageable chunks for your team to conquer effortlessly.

The cherry on top? You can group related goals into Folders to track progress across multiple objectives at a glance, leading to simpler decision-making. 🍒

Step 2: Define the scope of the evaluation

Ready to dive into the evaluation process? First, let’s clarify why you’re doing it, what you’re aiming for, and what exactly you’re measuring. Remember to define the evaluation’s scope, including objectives, timeframe, key stakeholders, evaluation metrics, and the methods or tools you plan to use for data collection and analysis.

This clarity in purpose and scope is your secret weapon—it sets the stage for a well-organized and effective evaluation, making your project planning and execution as easy as pie. 🥧

ClickUp has the perfect solution for documenting your scope of work without breaking a sweat. With the ClickUp Scope of Work Template, you get a ready-made framework to plug in all the essentials—covering everything from project background and goals to timelines and budget details.

ClickUp Scope of Work Template

Customize its handy tables to document the ins and outs of your evaluation process. Imagine your evaluation goal is to boost customer satisfaction. Here’s a sneak peek at how you’d document the scope, with a structured sketch after the list:

  • Objectives: To enhance customer satisfaction by 20% within the next six months
  • Timeframe: Evaluation will be conducted quarterly over the next year
  • Stakeholders: Customer service team, marketing department, and selected customers for feedback
  • Criteria: Metrics include Net Promoter Score (NPS), customer feedback surveys, and resolution time for customer inquiries
  • Methods: Use surveys, feedback forms, focus groups, and analysis of complaint resolutions to gather data and insights on customer satisfaction
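If you also want the scope in a machine-readable form—for example, to reuse it across reports or to check that nothing is missing—a small data structure works well. This is a minimal Python sketch; the class and field names are hypothetical and simply mirror the list above.

```python
from dataclasses import dataclass

# Hypothetical structure mirroring the scope items listed above; field names are assumptions.
@dataclass
class EvaluationScope:
    objective: str
    timeframe: str
    stakeholders: list[str]
    criteria: list[str]   # the metrics success will be judged by
    methods: list[str]    # how the data will be collected

scope = EvaluationScope(
    objective="Enhance customer satisfaction by 20% within the next six months",
    timeframe="Quarterly over the next year",
    stakeholders=["Customer service team", "Marketing department", "Selected customers"],
    criteria=["Net Promoter Score (NPS)", "Customer feedback surveys", "Resolution time"],
    methods=["Surveys", "Feedback forms", "Focus groups", "Complaint-resolution analysis"],
)

print(scope.objective)
```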

In ClickUp Docs, flexibility is the name of the game. You can add or remove sections and dive into real-time collaboration by inviting your team to modify the document through edits and comments. 💬

Each section comes preloaded with sample content, so personalizing your template will be a breeze whether you’re a seasoned pro or a newcomer to using Docs.

Step 3: Develop a data collection plan

Now, it’s time to roll up your sleeves and gather the data that answers your evaluation queries. Get creative—there are plenty of ways to collect information:

  • Create and distribute surveys 
  • Schedule interviews  
  • Organize focus group observations
  • Dig into documents and reports

Variety is key here, so use quantitative and qualitative data to capture every angle of your project. 

For invaluable insights on areas for improvement, we recommend heading straight to the source—your loyal customers! 🛒

With the ClickUp Feedback Form Template, you get a customizable form that centralizes all your feedback. It’s ready to capture feedback on everything from product features to customer support and pricing.

The template has a tailor-made feedback Form you can easily distribute to your customers. Once the forms are filled in, turn to the Service Rating List view—your personal feedback command center showcasing scores, reasons behind the ratings, and invaluable improvement suggestions.

Plus, you can delve into provider ratings in a dedicated list and explore the Overall Recommendations board to identify areas that need enhancement at a glance.

ClickUp Feedback Form Template

Step 4: Analyze data

Once the data’s in your hands, it’s analysis time! Pick the right tools from your kit—descriptive statistics, thematic analysis, or a SWOT analysis—to unlock insights and make sense of what you’ve gathered.
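For the quantitative side of that analysis, even basic descriptive statistics and a Net Promoter Score calculation surface the headline numbers quickly. Here is a minimal Python sketch; the survey scores are made-up illustrations.

```python
import statistics

# Illustrative 0-10 "How likely are you to recommend us?" survey responses
scores = [9, 10, 7, 8, 6, 10, 9, 4, 8, 10, 7, 9]

promoters = sum(s >= 9 for s in scores)   # responses of 9 or 10
detractors = sum(s <= 6 for s in scores)  # responses of 0 through 6
nps = round(100 * (promoters - detractors) / len(scores))

print(f"n={len(scores)}  mean={statistics.mean(scores):.1f}  "
      f"median={statistics.median(scores)}  NPS={nps}")
```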

Tap into ClickUp Whiteboards to orchestrate a dynamic SWOT analysis, perfect for companies with remote or hybrid teams.

ClickUp Whiteboards

Simply create color-coded squares (or any shape you fancy) representing Strengths, Weaknesses, Opportunities, and Threats. Then, organize your data effortlessly by creating sticky notes and dragging them to the right square, and behold! Your shareable SWOT analysis Whiteboard is ready to roll! 🎲

ClickUp’s digital Whiteboards are like physical whiteboards but better! You can use them to:

  • Conduct collaborative brainstorming sessions
  • Leverage Mind Maps to break down big ideas into bite-sized portions
  • Create dedicated sections for OKRs, KPIs, and internal data as quick references
  • Share ideas with your team through sticky notes, comments, documents, and media files
  • Solve problems creatively with color-coded shapes, charts, and graphs 📊

ClickUp Dashboards are ideal for visualizing data and making data-driven decisions. Dive into a treasure trove of over 50 Cards, crafting your ideal Dashboard that mirrors your vision. Want to see your progress in a pie chart, line graph, or bar graph? Take your pick and make it yours!

This panoramic view is excellent for monitoring goals, extracting crucial insights, and effortlessly tweaking your strategies. Rely on Burnup and Burndown charts to track performance against set goals and forecast the road. 🛣️

Whether sharing the Dashboard within your workspace or projecting it full screen in the office, it’s the perfect catalyst for team discussions on key project evaluation points.

ClickUp Dashboards

Step 5: Report your findings

Once you’ve delved into the data, it’s time to bring those insights to light! Crafting a report is your next move—a clear, concise summary showcasing your evaluation’s key findings, conclusions, and recommendations. 📝

Reporting is all about delivering the right information to the right people, so customize your project evaluation report to suit your audience’s needs. Whether it’s your project team, sponsors, clients, or beneficiaries, tailor your report to meet their expectations and address their interests directly. 

Eliminate the need to start your report from square one using the ClickUp Data Analysis Report Template. This powerful tool provides separate subpages for:

  • Overview: Dive into the analysis backstory, covering objectives, scope, methodology, and data collection methods
  • Findings: Present your study’s results and use graphs and charts to illustrate the findings
  • Recommendations and conclusions: Outline your conclusions and provide actionable steps post-evaluation

The template is fully customizable, so you can tailor it to suit your business needs and audience preferences. Tweak tables or create new ones, adding rows and columns for flawless data presentation. ✨

ClickUp Data Analysis Report Template

Step 6: Discuss the next project evaluation steps

Sharing evaluation findings isn’t just a formality—it’s a catalyst for stronger connections and brighter ideas. It sparks discussions, invites innovative suggestions for team enhancements, and nurtures stronger bonds with your stakeholders. Plus, it’s a roadmap for future projects, guiding the way to improvements based on the project’s outcomes and impact.

With ClickUp, you can say goodbye to toggling between project management dashboards and messaging platforms. Dive into the Chat view—your gateway to real-time conversations and task-specific discussions, all in one convenient thread. It’s the ultimate connection hub, keeping everyone in the loop and engaged. 🕹️

ClickUp Chat view

ClickUp Docs ramps up collaboration with team edits, comment tagging, and action item assignments—all in one place. Plus, you can effortlessly turn text into actionable tasks, ensuring organization and efficiency at every turn.

ClickUp Docs

On top of this, ClickUp’s integrations include numerous messaging tools like Slack and Microsoft Teams, so you can communicate easily, whether directly in ClickUp or through your favorite messaging platforms! 💌

Common Project Evaluation Mistakes to Avoid

Identifying potential hurdles in your project evaluation journey is your first stride toward navigating this path more successfully. Relying on ClickUp’s project management tools and pre-built templates for project evaluation can act as your compass, steering you clear of these missteps. 🧭

Here’s a glimpse into some prevalent project evaluation blunders you should avoid:

  • Undefined goals and objectives: If you fail to establish clear, specific, and measurable goals, you can hinder the evaluation process because you won’t know where to place your focus
  • Misaligned focus: Evaluating irrelevant aspects or neglecting elements crucial for project success can lead to incomplete assessments
  • Neglecting data collection and analysis: Inadequate data gathering that lacks crucial information, coupled with superficial analysis, can result in incomplete insights and failure to evaluate the most critical project points
  • Misuse of data: If you use incorrect or irrelevant data or misinterpret the collected information, you’ll likely come to false conclusions, defeating the whole purpose of a project evaluation
  • Reactivity over responsiveness: Reacting emotionally instead of responding methodically to project challenges can cloud judgment and lead to ineffective evaluation
  • Lack of documentation: Failing to document the evaluation process thoroughly can cause inconsistency and lead to missed learning opportunities
  • Limited stakeholder involvement: Not engaging stakeholders for diverse perspectives and insights can limit the evaluation’s depth and relevance

Simplify Project Evaluation with ClickUp

To ensure your evaluation hits the bullseye, rely on our six-step project evaluation guide that guarantees a thorough dive into data collection, effective analysis, and collaborative problem-solving. Once you share all the findings with your stakeholders, we guarantee you’ll be cooking up the best solutions in no time.

Sign up for ClickUp for free today to keep your project evaluation centralized. This powerful tool isn’t just your ally in project evaluation—it’s your ultimate sidekick throughout the whole project lifecycle! 💖

Tap into its collaboration tools, save time with over 1,000 templates, and buckle up for turbocharged productivity with ClickUp AI, achieving success faster than ever! ⚡


How To Evaluate and Measure the Success of a Project

Master key project evaluation metrics for effective decision-making in project management.

Liz Lockhart, PMP and Agile Leader

Attention all business leaders, project managers, and PMO enthusiasts! If you're passionate about making your projects successful, implementing the right strategies and leveraging technology can make all the difference. Project evaluation is the process you need to comprehend and measure that success. 

Keep in mind, though, evaluating a project's success is more complex than it may appear. There are numerous factors to consider, which can differ from one project to another.

In this article, we'll walk you through the fundamentals of an effective project evaluation process and share insights on measuring success for any project. With this information, you'll be well-prepared to assess if a project has met its intended goals, allowing you to make informed decisions and set benchmarks for future endeavors. 

Let's get started on the path to successful project evaluation!

What is project evaluation? 

Project evaluation is all about objectively examining the success or effectiveness of a project once it's completed. 

Remember that each project has unique goals and objectives, so each evaluation will differ. The assessment typically measures how well the project has met its objectives and goals. Throughout the evaluation process, you'll need to consider various factors, such as:

  • Quality of deliverables
  • Customer satisfaction

These factors help determine whether a project can be considered successful or not. It's crucial to remember that evaluation should happen continuously during the project, not just at the end. This approach allows teams to make informed decisions and adjust their course if necessary.

A practical evaluation process not only pinpoints areas for improvement but also celebrates the project's successes. By analyzing project performance and harnessing the insights gained through project evaluation, organizations and project leaders can fine-tune their strategies to boost project outcomes and make the most of their investment of time, money, and resources in the project or initiative.

What are the steps for measuring the success of a project?

Measuring the success of a project largely depends on its desired outcomes. Since different projects have varying goals, their criteria for success will also differ.

For instance, a team launching a new product might measure success based on customer engagement, sales figures, and reviews, while a team organizing an event may assess success through ticket sales and attendee feedback. Even projects with similar objectives can have different measurements of success. So, there's no one-size-fits-all approach to evaluating project results; each assessment should be customized to the specific goals in mind.

In general, the process of measuring the success of any project includes the following steps:

1. Define the purpose and goals of the project

Before measuring its success, you need a clear understanding of its objectives, scope, and timeline. Collaborate with your team and stakeholders to establish these elements, ensuring everyone is aligned. 

A well-defined project scope helps you set realistic expectations, allocate resources efficiently, and monitor progress effectively.

2. Assess the current status of the project

Regularly examine the project's progress in relation to its goals, timeline, and budget. This step enables you to identify potential issues early and make necessary adjustments. Maintaining open communication with your team and stakeholders during this phase is crucial for staying on track and addressing any concerns.

3. Analyze the results achieved by the project so far

Continuously evaluate your project's performance by looking at the results you've achieved against your goals. Organize retrospectives with your team to discuss what has worked well, what could be improved, and any lessons learned. 

Use this feedback to inform your decision-making process and fine-tune your approach moving forward.

4. Identify any risks associated with the project

Proactively identify and document any potential issues affecting your project's success. 

Develop a risk management plan that includes strategies for mitigating or transferring these risks. Regularly review and update this plan as the project progresses, and communicate any changes to your team and stakeholders. 

Effective risk management helps minimize surprises and allows you to adapt to unforeseen challenges.

5. Establish KPIs (key performance indicators) to measure success

KPIs are quantifiable metrics that help you assess whether your project is on track to achieve its goals. Work with your project team, stakeholders, and sponsor to identify KPIs that accurately reflect the project's success. Ensure these metrics align with the project's purpose and goals and are meaningful to your organization. 

Examples of KPIs include the number of leads generated, customer satisfaction scores, or cost savings.

6. Monitor these KPIs over time to gauge performance

Once you've established your project-specific KPIs, track them throughout the project's duration. Regular monitoring helps you stay informed about your project's performance, identify trends, and make data-driven decisions. 

If your KPIs show that your project is deviating from its goals, revisit the previous steps to assess the current status, analyze the results, and manage risks. Repeat this process as needed until the project is complete.
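In practice, this kind of check can be automated against whatever targets you agreed on with your stakeholders. Here is a minimal Python sketch of a KPI status check; the KPI names, targets and actuals are illustrative assumptions, not benchmarks.

```python
# Illustrative KPI targets vs. actuals; names and numbers are assumptions.
kpis = {
    "Leads generated":          {"target": 500, "actual": 430, "higher_is_better": True},
    "Customer satisfaction":    {"target": 4.5, "actual": 4.6, "higher_is_better": True},
    "Cost per deliverable ($)": {"target": 800, "actual": 950, "higher_is_better": False},
}

for name, k in kpis.items():
    on_track = (k["actual"] >= k["target"]) if k["higher_is_better"] else (k["actual"] <= k["target"])
    status = "on track" if on_track else "off track - revisit status, results, and risks"
    print(f"{name}: {k['actual']} vs target {k['target']} -> {status}")
```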

Project planning software like Float gives you a bird’s eye view of team tasks, capacity, and hours worked, and you can generate valuable reports to help with future planning.

In addition to these steps, strive for transparency in your project reporting and results by making them easily accessible to your team and stakeholders. Use project dashboards, automated reporting, and self-serve project update information to keep everyone informed and engaged. 

This approach saves time and fosters a culture of openness and collaboration, which is essential for achieving project success.

15 project management metrics that matter 

To effectively measure success and progress, it's essential to focus on the metrics that matter. These metrics vary depending on the organization, team, or project, but some common ones include project completion rate, budget utilization, and stakeholder satisfaction.

We have divided these metrics into waterfall projects (predictive) and agile projects (adaptive). While some metrics may apply to both types of projects, this categorization ensures a more tailored approach to evaluation. Remember that these metrics assume a project has a solid plan or a known backlog to work against, as measuring progress relies on comparing actual outcomes to planned outcomes.

Waterfall project management metrics (predictive)

Waterfall projects typically have a defined scope, schedule, and cost at the outset. If changes are required during project execution, the project manager returns to the planning phase to determine a new plan and expectations across scope, schedule, and cost (commonly called the iron triangle). 

Here are eight waterfall metrics, followed by a short sketch of how they fit together:

1. Schedule variance (SV): Schedule variance is the difference between the work planned and the work completed at a given time. It helps project managers understand whether the project is on track, ahead, or behind schedule. A positive SV indicates that the project is ahead of schedule, while a negative SV suggests that the project is behind schedule. Monitoring this metric throughout the project allows teams to identify potential bottlenecks and make necessary adjustments to meet deadlines.

2. Actual cost (AC): Actual cost represents the total amount of money spent on a project up to a specific point in time. It includes all expenses related to the project, such as personnel costs, material costs, and equipment costs. Keeping track of the actual cost is crucial for managing the project budget and ensuring it stays within the allocated funds. Comparing actual cost to the planned budget can provide insights into the project's financial performance and areas where cost-saving measures may be needed.

3. Cost variance (CV): Cost variance is the difference between a project's expected and actual costs. A positive CV indicates that the project is under budget, while a negative CV suggests that the project is over budget. Monitoring cost variance helps project managers identify areas where the project may be overspending and implement corrective actions to prevent further cost overruns.

4. Planned value (PV): Planned value is the estimated value of the work that should have been completed by a specific point in time. It is a valuable metric for comparing the project's progress against the original plan. PV is also used in calculating other vital metrics, such as schedule variance (SV) and the schedule performance index (SPI).

5. Earned value (EV): Earned value is a measure of the progress made on a project, represented by the portion of the total budget earned by completing work on the project up to this point. EV can be calculated by multiplying the percentage complete by the total budget. Monitoring earned value helps project managers assess whether the project is progressing as planned and whether any corrective actions are needed to get the project back on track.

6. Schedule performance index (SPI): The schedule performance index measures how efficiently a project team completes work relative to the amount of work planned. SPI is calculated by dividing the earned value (EV) by the planned value (PV). An SPI of 1.0 indicates that the project is on schedule, while an SPI of less than 1.0 means that the project is behind schedule. This metric helps identify scheduling issues and make adjustments to improve efficiency and meet deadlines.

7. Cost performance index (CPI): The cost performance index measures how efficient a project team is in completing work relative to the amount of money budgeted. CPI is calculated by dividing the earned value (EV) by the actual cost (AC). A CPI of 1.0 indicates that the project is on budget, while a CPI of less than 1.0 shows that the project is over budget. Monitoring CPI can help project managers identify areas where costs can be reduced and improve overall project financial performance.

8. Estimate at completion (EAC): Estimate at completion is an updated forecast of the total project cost, revised as the project progresses. EAC can be calculated using several methods, including bottom-up estimating, top-down estimating, analogous estimating, and parametric estimating. Regularly updating the EAC helps project managers stay informed about the project's financial performance and make informed decisions about resource allocation and cost control.
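Taken together, these earned-value figures derive from three inputs: planned value (PV), earned value (EV), and actual cost (AC). Here is a minimal Python sketch of the calculations described above; the budget at completion (BAC) and the sample numbers are illustrative assumptions, and the EAC line uses just one of the several estimating methods mentioned.

```python
# Earned value management (EVM) calculations from the metrics described above.
# BAC and the sample inputs are illustrative assumptions.
BAC = 100_000          # budget at completion (total planned budget)
pct_complete = 0.40    # 40% of the planned work is actually done
PV = 50_000            # value of the work scheduled to be done by now
AC = 45_000            # money actually spent so far

EV = pct_complete * BAC   # earned value = percent complete x total budget
SV = EV - PV              # schedule variance (negative = behind schedule)
CV = EV - AC              # cost variance (negative = over budget)
SPI = EV / PV             # schedule performance index (<1 = behind schedule)
CPI = EV / AC             # cost performance index (<1 = over budget)
EAC = BAC / CPI           # estimate at completion (one common method; assumes current cost efficiency continues)

print(f"EV={EV:.0f}  SV={SV:.0f}  CV={CV:.0f}  SPI={SPI:.2f}  CPI={CPI:.2f}  EAC={EAC:.0f}")
```

With these sample numbers, the project is behind schedule (SPI 0.80) and over budget (CPI 0.89), so the forecast total cost rises to about 112,500.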

Agile project management metrics (adaptive)

Agile projects differ from waterfall projects as they often start without a clear final destination, allowing for changes along the way.

It's generally not appropriate to use waterfall metrics to evaluate agile projects. Each project is unique and should be assessed based on its purpose, objectives, and methodology.

Here are seven standard agile metrics:

  • Story points: Story points are used to estimate the workload required to complete a task, taking into account the time, effort, and risk involved. Different teams may use different scales for measuring story points, so comparing story points between teams is not advisable, as it may lead to misleading conclusions.
  • Velocity: This metric represents the amount of work a team can complete within a specific period, measured in story points. Velocity helps gauge a team's progress, predicting the amount of work that can be completed in future sprints and estimating the number of sprints needed to finish the known product backlog (see the short sketch after this list). Since story points are not standardized, comparing teams or projects based on story points or velocity is not appropriate.
  • Burndown charts: Burndown charts are graphical representations used to track the progress of an agile development cycle. These charts show the amount of known and estimated work remaining over time, counting down toward completion. They can help identify trends and predict when a project will likely be finished based on the team's velocity.
  • Cumulative flow diagrams: These graphs, related to burndown charts, track the progress of an agile development cycle by showing the amount of work in each stage of the workflow over time, counting up. Cumulative flow diagrams (CFDs) can help identify trends and bottlenecks and predict when a project will likely be completed based on the team's velocity.
  • Lead time: Lead time is the duration between the identification of a task and its completion. It is commonly used in agile project management to assess a team's progress and predict how much work can be completed in future sprints. Lead time is a standard Kanban metric, as Kanban focuses on promptly completing tasks and finishing ongoing work before starting new tasks.
  • Cycle time: Cycle time is the time it takes to complete a task once work on it has begun, excluding any waiting time before the work is started. Cycle time is frequently used in agile project management to evaluate a team's progress and predict how much work can be completed in future iterations.
  • Defect density: As a crucial measure of quality and long-term success, defect density is the number of defects per unit of code or delivered output. It is often employed in software development to assess code quality and pinpoint areas needing improvement. If a team delivers output with a high defect density, the quality of the project's deliverables and outcomes may be significantly compromised.
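
As a rough illustration of how velocity supports forecasting, here is a short Python sketch with made-up sprint data; the story point figures and backlog size are hypothetical, and a real forecast should account for the spread in velocity rather than a single average.

```python
from math import ceil
from statistics import mean

# Hypothetical story points completed by one team in its last five sprints.
completed_points = [21, 18, 24, 20, 22]

# Velocity: average story points the team completes per sprint.
velocity = mean(completed_points)

# Forecast: how many sprints the remaining (known) backlog might take.
remaining_backlog_points = 130
sprints_needed = ceil(remaining_backlog_points / velocity)

print(f"Average velocity: {velocity:.1f} story points per sprint")
print(f"Estimated sprints to clear the known backlog: {sprints_needed}")
```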


Not all metrics are created equal

It's essential to recognize that not every metric suits every project. Project metrics shouldn't be seen as a one-size-fits-all approach.

With so many metrics available, it's easy to feel overwhelmed, but the key is to focus on the specific metrics that significantly impact your project's outcome. By measuring the right aspects, project managers can make informed, strategic decisions that drive success.

Your choice of project metrics will depend on various factors, such as the type of project, its purpose, and the desired outcomes. Be cautious: using the wrong metrics to measure your project's progress can lead to unintended consequences. After all, you get what you measure, and if you measure incorrectly, you might not achieve the results you're aiming for!

Tips on communicating metrics and learnings

Clear communication is crucial to ensure that insightful metrics and learnings have a meaningful impact on your team. To keep your team members engaged and your communications effective, consider the following tips:

  • Use straightforward, informative language : Opt for concise, easily understood language to ensure everyone has a clear grasp of the data and its implications.
  • Avoid abbreviations : Use full terms to avoid confusion, particularly for new team members.
  • Tell a story : Present metrics and learnings within a narrative context, helping team members better understand the project's journey.
  • Use humor and wit : Lighten the mood with humor to make your points more memorable and relatable while ensuring your message is taken seriously.
  • Be transparent : Foster trust by being open and honest about project progress, encouraging collaboration, and being the first to inform stakeholders if something goes wrong.

By incorporating these friendly and informative communication techniques, you can effectively engage your team members and maintain a united front throughout your project.

Cracking the code on project evaluation success

Project evaluation is a vital component of the project management process. To make informed, timely decisions, project managers need a thorough understanding of the metrics that align with the project's purpose and desired outcomes.

Effective teams utilize multiple metrics to assess the success or failure of a project. Establishing key metrics and delving into their implications allows teams to base their decisions on accurate, relevant information. Remember, one size doesn't fit all. Tailor success metrics to the specific goals of your project. 

By implementing a robust evaluation process and leveraging insights, project leaders can adapt strategies, enhance project outcomes, maximize the value of investments, and make data-driven decisions for upcoming projects.


A Framework for Program Evaluation: A Gateway to Tools
This section is adapted from the article "Recommended Framework for Program Evaluation in Public Health Practice," by Bobby Milstein, Scott Wetterhall, and the CDC Evaluation Working Group.

Around the world, there exist many programs and interventions developed to improve conditions in local communities. Communities come together to reduce the level of violence that exists, to work for safe, affordable housing for everyone, or to help more students do well in school, to give just a few examples.

But how do we know whether these programs are working? If they are not effective, and even if they are, how can we improve them to make them better for local communities? And finally, how can an organization make intelligent choices about which promising programs are likely to work best in their community?

In recent years, there has been a growing trend toward the better use of evaluation to understand and improve practice. The systematic use of evaluation has solved many problems and helped countless community-based organizations do what they do better.

Despite an increased understanding of the need for - and the use of - evaluation, however, a basic agreed-upon framework for program evaluation has been lacking. In 1997, scientists at the United States Centers for Disease Control and Prevention (CDC) recognized the need to develop such a framework. As a result of this, the CDC assembled an Evaluation Working Group comprised of experts in the fields of public health and evaluation. Members were asked to develop a framework that summarizes and organizes the basic elements of program evaluation. This Community Tool Box section describes the framework resulting from the Working Group's efforts.

Before we begin, however, we'd like to offer some definitions of terms that we will use throughout this section.

By evaluation , we mean the systematic investigation of the merit, worth, or significance of an object or effort. Evaluation practice has changed dramatically during the past three decades - new methods and approaches have been developed and it is now used for increasingly diverse projects and audiences.

Throughout this section, the term program is used to describe the object or effort that is being evaluated. It may apply to any action with the goal of improving outcomes for whole communities, for more specific sectors (e.g., schools, work places), or for sub-groups (e.g., youth, people experiencing violence or HIV/AIDS). This definition is meant to be very broad.

Examples of different types of programs include:

  • Direct service interventions (e.g., a program that offers free breakfast to improve nutrition for grade school children)
  • Community mobilization efforts (e.g., organizing a boycott of California grapes to improve the economic well-being of farm workers)
  • Research initiatives (e.g., an effort to find out whether inequities in health outcomes based on race can be reduced)
  • Surveillance systems (e.g., whether early detection of school readiness improves educational outcomes)
  • Advocacy work (e.g., a campaign to influence the state legislature to pass legislation regarding tobacco control)
  • Social marketing campaigns (e.g., a campaign in the Third World encouraging mothers to breast-feed their babies to reduce infant mortality)
  • Infrastructure building projects (e.g., a program to build the capacity of state agencies to support community development initiatives)
  • Training programs (e.g., a job training program to reduce unemployment in urban neighborhoods)
  • Administrative systems (e.g., an incentive program to improve efficiency of health services)

Program evaluation - the type of evaluation discussed in this section - is an essential organizational practice for all types of community health and development work. It is a way to evaluate the specific projects and activities community groups may take part in, rather than to evaluate an entire organization or comprehensive community initiative.

Stakeholders refer to those who care about the program or effort. These may include those presumed to benefit (e.g., children and their parents or guardians), those with particular influence (e.g., elected or appointed officials), and those who might support the effort (i.e., potential allies) or oppose it (i.e., potential opponents). Key questions in thinking about stakeholders are: Who cares? What do they care about?

This section presents a framework that promotes a common understanding of program evaluation. The overall goal is to make it easier for everyone involved in community health and development work to evaluate their efforts.

Why evaluate community health and development programs?

The type of evaluation we talk about in this section can be closely tied to everyday program operations. Our emphasis is on practical, ongoing evaluation that involves program staff, community members, and other stakeholders, not just evaluation experts. This type of evaluation offers many advantages for community health and development professionals.

For example, it complements program management by:

  • Helping to clarify program plans
  • Improving communication among partners
  • Gathering the feedback needed to improve and be accountable for program effectiveness

It's important to remember, too, that evaluation is not a new activity for those of us working to improve our communities. In fact, we assess the merit of our work all the time when we ask questions, consult partners, make assessments based on feedback, and then use those judgments to improve our work. When the stakes are low, this type of informal evaluation might be enough. However, when the stakes are raised - when a good deal of time or money is involved, or when many people may be affected - then it may make sense for your organization to use evaluation procedures that are more formal, visible, and justifiable.

How do you evaluate a specific program?

Before your organization starts a program evaluation, your group should be very clear about the answers to the following questions:

  • What will be evaluated?
  • What criteria will be used to judge program performance?
  • What standards of performance on the criteria must be reached for the program to be considered successful?
  • What evidence will indicate performance on the criteria relative to the standards?
  • What conclusions about program performance are justified based on the available evidence?

To clarify the meaning of each, let's look at some of the answers for Drive Smart, a hypothetical program begun to stop drunk driving.

What will be evaluated?

  • Drive Smart, a program focused on reducing drunk driving through public education and intervention.

What criteria will be used to judge program performance?

  • The number of community residents who are familiar with the program and its goals
  • The number of people who use "Safe Rides" volunteer taxis to get home
  • The percentage of people who report drinking and driving
  • The reported number of single car nighttime crashes (a common way to try to determine whether the number of people who drive drunk is changing)

What standards of performance must be reached for the program to be considered successful?

  • 80% of community residents will know about the program and its goals after the first year of the program
  • The number of people who use the "Safe Rides" taxis will increase by 20% in the first year
  • The percentage of people who report drinking and driving will decrease by 20% in the first year
  • The reported number of single car nighttime crashes will decrease by 10% in the program's first two years

What evidence will indicate performance on the criteria relative to the standards?

  • A random telephone survey will demonstrate community residents' knowledge of the program and changes in reported behavior
  • Logs from "Safe Rides" will tell how many people use their services
  • Information on single car nighttime crashes will be gathered from police records

What conclusions about program performance are justified based on the available evidence?

  • Are the changes we have seen in the level of drunk driving due to our efforts, or to something else?
  • If there is no (or insufficient) change in behavior or outcomes: should Drive Smart change what it is doing, or have we just not waited long enough to see results?
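
To show how criteria, standards, and evidence fit together, here is a small, purely hypothetical Python sketch. The observed figures are invented, and the thresholds simply restate the first-year standards listed above.

```python
# Hypothetical check of first-year Drive Smart evidence against its standards.
# All observed values below are invented for illustration.

# Each criterion maps to (comparison, target), taken from the standards above.
standards = {
    "residents_aware_pct":       (">=", 80),   # 80% of residents know the program
    "safe_rides_use_change_pct": (">=", 20),   # Safe Rides use up 20%
    "reported_dui_change_pct":   ("<=", -20),  # self-reported drinking and driving down 20%
}

# Evidence gathered from the telephone survey and "Safe Rides" logs (hypothetical).
observed = {
    "residents_aware_pct": 74,
    "safe_rides_use_change_pct": 31,
    "reported_dui_change_pct": -9,
}

for criterion, (op, target) in standards.items():
    value = observed[criterion]
    met = value >= target if op == ">=" else value <= target
    status = "standard met" if met else "standard not met"
    print(f"{criterion}: observed {value}, target {op} {target} -> {status}")
```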

The following framework provides an organized approach to answer these questions.

A framework for program evaluation

Program evaluation offers a way to understand and improve community health and development practice using methods that are useful, feasible, proper, and accurate. The framework described below is a practical, non-prescriptive tool that summarizes in a logical order the important elements of program evaluation.

The framework contains two related dimensions:

  • Steps in evaluation practice, and
  • Standards for "good" evaluation.

The six connected steps of the framework are actions that should be a part of any evaluation. Although in practice the steps may be encountered out of order, it will usually make sense to follow them in the recommended sequence. That's because earlier steps provide the foundation for subsequent progress. Thus, decisions about how to carry out a given step should not be finalized until prior steps have been thoroughly addressed.

However, these steps are meant to be adaptable, not rigid. Sensitivity to each program's unique context (for example, the program's history and organizational climate) is essential for sound evaluation. They are intended to serve as starting points around which community organizations can tailor an evaluation to best meet their needs.

  • Engage stakeholders
  • Describe the program
  • Focus the evaluation design
  • Gather credible evidence
  • Justify conclusions
  • Ensure use and share lessons learned

Understanding and adhering to these basic steps will improve most evaluation efforts.

The second part of the framework is a basic set of standards to assess the quality of evaluation activities. There are 30 specific standards, organized into the following four groups:

  • Utility
  • Feasibility
  • Propriety
  • Accuracy

These standards help answer the question, "Will this evaluation be a 'good' evaluation?" They are recommended as the initial criteria by which to judge the quality of the program evaluation efforts.

Engage Stakeholders

Stakeholders are people or organizations that have something to gain or lose from what will be learned from an evaluation, and also in what will be done with that knowledge. Evaluation cannot be done in isolation. Almost everything done in community health and development work involves partnerships - alliances among different organizations, board members, those affected by the problem, and others. Therefore, any serious effort to evaluate a program must consider the different values held by the partners. Stakeholders must be part of the evaluation to ensure that their unique perspectives are understood. When stakeholders are not appropriately involved, evaluation findings are likely to be ignored, criticized, or resisted.

However, if they are part of the process, people are likely to feel a good deal of ownership for the evaluation process and results. They will probably want to develop it, defend it, and make sure that the evaluation really works.

That's why this evaluation cycle begins by engaging stakeholders. Once involved, these people will help to carry out each of the steps that follow.

Three principal groups of stakeholders are important to involve:

  • People or organizations involved in program operations may include community members, sponsors, collaborators, coalition partners, funding officials, administrators, managers, and staff.
  • People or organizations served or affected by the program may include clients, family members, neighborhood organizations, academic institutions, elected and appointed officials, advocacy groups, and community residents. Individuals who are openly skeptical of or antagonistic toward the program may also be important to involve. Opening an evaluation to opposing perspectives and enlisting the help of potential program opponents can strengthen the evaluation's credibility.

Likewise, individuals or groups who could be adversely or inadvertently affected by changes arising from the evaluation have a right to be engaged. For example, it is important to include those who would be affected if program services were expanded, altered, limited, or ended as a result of the evaluation.

  • Primary intended users of the evaluation are the specific individuals who are in a position to decide and/or do something with the results. They shouldn't be confused with primary intended users of the program, although some of them should be involved in this group. In fact, primary intended users should be a subset of all of the stakeholders who have been identified. A successful evaluation will designate primary intended users, such as program staff and funders, early in its development and maintain frequent interaction with them to be sure that the evaluation specifically addresses their values and needs.

The amount and type of stakeholder involvement will be different for each program evaluation. For instance, stakeholders can be directly involved in designing and conducting the evaluation, or they can simply be kept informed about its progress through periodic meetings, reports, and other means of communication.

It may be helpful, when working with a group such as this, to develop an explicit process to share power and resolve conflicts. This may help avoid overemphasizing the values held by any one stakeholder.

Describe the Program

A program description is a summary of the intervention being evaluated. It should explain what the program is trying to accomplish and how it tries to bring about those changes. The description will also illustrate the program's core components and elements, its ability to make changes, its stage of development, and how the program fits into the larger organizational and community environment.

How a program is described sets the frame of reference for all future decisions about its evaluation. For example, if a program is described as, "attempting to strengthen enforcement of existing laws that discourage underage drinking," the evaluation might be very different than if it is described as, "a program to reduce drunk driving by teens." Also, the description allows members of the group to compare the program to other similar efforts, and it makes it easier to figure out what parts of the program brought about what effects.

Moreover, different stakeholders may have different ideas about what the program is supposed to achieve and why. For example, a program to reduce teen pregnancy may have some members who believe this means only increasing access to contraceptives, and other members who believe it means only focusing on abstinence.

Evaluations done without agreement on the program definition aren't likely to be very useful. In many cases, the process of working with stakeholders to develop a clear and logical program description will bring benefits long before data are available to measure program effectiveness.

There are several specific aspects that should be included when describing a program.

Statement of need

A statement of need describes the problem, goal, or opportunity that the program addresses; it also begins to imply what the program will do in response. Important features to note regarding a program's need are: the nature of the problem or goal, who is affected, how big it is, and whether (and how) it is changing.

Expectations

Expectations are the program's intended results. They describe what the program has to accomplish to be considered successful. For most programs, the accomplishments exist on a continuum (first, we want to accomplish X... then, we want to do Y...). Therefore, they should be organized by time ranging from specific (and immediate) to broad (and longer-term) consequences. For example, a program's vision, mission, goals, and objectives all represent varying levels of specificity about a program's expectations.

Activities

Activities are everything the program does to bring about changes. Describing program components and elements permits specific strategies and actions to be listed in logical sequence. This also shows how different program activities, such as education and enforcement, relate to one another. Describing program activities also provides an opportunity to distinguish activities that are the direct responsibility of the program from those that are conducted by related programs or partner organizations. Things outside of the program that may affect its success, such as harsher laws punishing businesses that sell alcohol to minors, can also be noted.

Resources

Resources include the time, talent, equipment, information, money, and other assets available to conduct program activities. Reviewing the resources a program has tells a lot about the amount and intensity of its services. It may also point out situations where there is a mismatch between what the group wants to do and the resources available to carry out these activities. Understanding program costs is also necessary for assessing the cost-benefit ratio as part of the evaluation.

Stage of development

A program's stage of development reflects its maturity. All community health and development programs mature and change over time. People who conduct evaluations, as well as those who use their findings, need to consider the dynamic nature of programs. For example, a new program that just received its first grant may differ in many respects from one that has been running for over a decade.

At least three phases of development are commonly recognized: planning , implementation , and effects or outcomes . In the planning stage, program activities are untested and the goal of evaluation is to refine plans as much as possible. In the implementation phase, program activities are being field tested and modified; the goal of evaluation is to see what happens in the "real world" and to improve operations. In the effects stage, enough time has passed for the program's effects to emerge; the goal of evaluation is to identify and understand the program's results, including those that were unintentional.

Context

A description of the program's context considers the important features of the environment in which the program operates. This includes understanding the area's history, geography, politics, and social and economic conditions, and also what other organizations have done. A realistic and responsive evaluation is sensitive to a broad range of potential influences on the program. An understanding of the context lets users interpret findings accurately and assess their generalizability. For example, a program to improve housing in an inner-city neighborhood might have been a tremendous success, but would likely not work in a small town on the other side of the country without significant adaptation.

Logic model

A logic model synthesizes the main program elements into a picture of how the program is supposed to work. It makes explicit the sequence of events that are presumed to bring about change. Often this logic is displayed in a flow-chart, map, or table to portray the sequence of steps leading to program results.

Creating a logic model allows stakeholders to improve and focus program direction. It reveals assumptions about conditions for program effectiveness and provides a frame of reference for one or more evaluations of the program. A detailed logic model can also be a basis for estimating the program's effect on endpoints that are not directly measured. For example, it may be possible to estimate the rate of reduction in disease from a known number of persons experiencing the intervention if there is prior knowledge about its effectiveness.

The breadth and depth of a program description will vary for each program evaluation. And so, many different activities may be part of developing that description. For instance, multiple sources of information could be pulled together to construct a well-rounded description. The accuracy of an existing program description could be confirmed through discussion with stakeholders. Descriptions of what's going on could be checked against direct observation of activities in the field. A narrow program description could be fleshed out by addressing contextual factors (such as staff turnover, inadequate resources, political pressures, or strong community participation) that may affect program performance.

Focus the Evaluation Design

By focusing the evaluation design, we mean doing advance planning about where the evaluation is headed, and what steps it will take to get there. It isn't possible or useful for an evaluation to try to answer all questions for all stakeholders; there must be a focus. A well-focused plan is a safeguard against using time and resources inefficiently.

Depending on what you want to learn, some types of evaluation will be better suited than others. However, once data collection begins, it may be difficult or impossible to change what you are doing, even if it becomes obvious that other methods would work better. A thorough plan anticipates intended uses and creates an evaluation strategy with the greatest chance to be useful, feasible, proper, and accurate.

Among the issues to consider when focusing an evaluation are:

Purpose refers to the general intent of the evaluation. A clear purpose serves as the basis for the design, methods, and use of the evaluation. Taking time to articulate an overall purpose will stop your organization from making uninformed decisions about how the evaluation should be conducted and used.

There are at least four general purposes for which a community group might conduct an evaluation:

  • To gain insight. This happens, for example, when deciding whether to use a new approach (e.g., would a neighborhood watch program work for our community?). Knowledge from such an evaluation will provide information about its practicality. For a developing program, information from evaluations of similar programs can provide the insight needed to clarify how its activities should be designed.
  • To improve how things get done. This is appropriate in the implementation stage when an established program tries to describe what it has done. This information can be used to describe program processes, to improve how the program operates, and to fine-tune the overall strategy. Evaluations done for this purpose include efforts to improve the quality, effectiveness, or efficiency of program activities.
  • To determine what the effects of the program are. Evaluations done for this purpose examine the relationship between program activities and observed consequences. For example, are more students finishing high school as a result of the program? Programs most appropriate for this type of evaluation are mature programs that are able to state clearly what happened and who it happened to. Such evaluations should provide evidence about what the program's contribution was to reaching longer-term goals such as a decrease in child abuse or crime in the area. This type of evaluation helps establish the accountability, and thus, the credibility, of a program to funders and to the community.
  • To affect those who take part in the program. The process of taking part in an evaluation can itself influence participants and the organization. Such evaluations may:
  • Empower program participants (for example, being part of an evaluation can increase community members' sense of control over the program);
  • Supplement the program (for example, using a follow-up questionnaire can reinforce the main messages of the program);
  • Promote staff development (for example, by teaching staff how to collect, analyze, and interpret evidence); or
  • Contribute to organizational growth (for example, the evaluation may clarify how the program relates to the organization's mission).

Users are the specific individuals who will receive evaluation findings. They will directly experience the consequences of inevitable trade-offs in the evaluation process. For example, a trade-off might be having a relatively modest evaluation to fit the budget with the outcome that the evaluation results will be less certain than they would be for a full-scale evaluation. Because they will be affected by these tradeoffs, intended users have a right to participate in choosing a focus for the evaluation. An evaluation designed without adequate user involvement in selecting the focus can become a misguided and irrelevant exercise. By contrast, when users are encouraged to clarify intended uses, priority questions, and preferred methods, the evaluation is more likely to focus on things that will inform (and influence) future actions.

Uses describe what will be done with what is learned from the evaluation. There is a wide range of potential uses for program evaluation. Generally speaking, the uses fall in the same four categories as the purposes listed above: to gain insight, improve how things get done, determine what the effects of the program are, and affect participants. The following list gives examples of uses in each category.

Some specific examples of evaluation uses

To gain insight:

  • Assess needs and wants of community members
  • Identify barriers to use of the program
  • Learn how to best describe and measure program activities

To improve how things get done:

  • Refine plans for introducing a new practice
  • Determine the extent to which plans were implemented
  • Improve educational materials
  • Enhance cultural competence
  • Verify that participants' rights are protected
  • Set priorities for staff training
  • Make mid-course adjustments
  • Clarify communication
  • Determine if client satisfaction can be improved
  • Compare costs to benefits
  • Find out which participants benefit most from the program
  • Mobilize community support for the program

To determine what the effects of the program are:

  • Assess skills development by program participants
  • Compare changes in behavior over time
  • Decide where to allocate new resources
  • Document the level of success in accomplishing objectives
  • Demonstrate that accountability requirements are fulfilled
  • Use information from multiple evaluations to predict the likely effects of similar programs

To affect participants:

  • Reinforce messages of the program
  • Stimulate dialogue and raise awareness about community issues
  • Broaden consensus among partners about program goals
  • Teach evaluation skills to staff and other stakeholders
  • Gather success stories
  • Support organizational change and improvement

The evaluation needs to answer specific questions. Drafting questions encourages stakeholders to reveal what they believe the evaluation should answer. That is, which questions are most important to stakeholders? The process of developing evaluation questions further refines the focus of the evaluation.

The methods available for an evaluation are drawn from behavioral science and social research and development. Three types of methods are commonly recognized. They are experimental, quasi-experimental, and observational or case study designs. Experimental designs use random assignment to compare the effect of an intervention between otherwise equivalent groups (for example, comparing a randomly assigned group of students who took part in an after-school reading program with those who didn't). Quasi-experimental methods make comparisons between groups that aren't equal (e.g. program participants vs. those on a waiting list) or use of comparisons within a group over time, such as in an interrupted time series in which the intervention may be introduced sequentially across different individuals, groups, or contexts. Observational or case study methods use comparisons within a group to describe and explain what happens (e.g., comparative case studies with multiple communities).
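
As a toy illustration of the experimental option, the following Python sketch randomly assigns hypothetical participants to an intervention or comparison group and compares average outcomes; the names, scores, and size of the effect are all simulated for illustration only.

```python
import random

random.seed(1)  # fixed seed so the simulated example is repeatable

# Hypothetical participants in an after-school reading program evaluation.
participants = [f"student_{i}" for i in range(40)]

# Random assignment: the defining feature of an experimental design.
random.shuffle(participants)
intervention, comparison = participants[:20], participants[20:]

# Simulated reading scores; the +5 "program effect" is invented for illustration.
scores = {
    p: random.gauss(70, 10) + (5 if p in intervention else 0)
    for p in participants
}

def average(group):
    return sum(scores[p] for p in group) / len(group)

print(f"Intervention group average: {average(intervention):.1f}")
print(f"Comparison group average:   {average(comparison):.1f}")
print(f"Estimated program effect:   {average(intervention) - average(comparison):.1f}")
```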

No design is necessarily better than another. Evaluation methods should be selected because they provide the appropriate information to answer stakeholders' questions, not because they are familiar, easy, or popular. The choice of methods has implications for what will count as evidence, how that evidence will be gathered, and what kind of claims can be made. Because each method option has its own biases and limitations, evaluations that mix methods are generally more robust.

Over the course of an evaluation, methods may need to be revised or modified. Circumstances that make a particular approach useful can change. For example, the intended use of the evaluation could shift from discovering how to improve the program to helping decide about whether the program should continue or not. Thus, methods may need to be adapted or redesigned to keep the evaluation on track.

Agreements summarize the evaluation procedures and clarify everyone's roles and responsibilities. An agreement describes how the evaluation activities will be implemented. Elements of an agreement include statements about the intended purpose, users, uses, and methods, as well as a summary of the deliverables, those responsible, a timeline, and budget.

The formality of the agreement depends upon the relationships that exist between those involved. For example, it may take the form of a legal contract, a detailed protocol, or a simple memorandum of understanding. Regardless of its formality, creating an explicit agreement provides an opportunity to verify the mutual understanding needed for a successful evaluation. It also provides a basis for modifying procedures if that turns out to be necessary.

As you can see, focusing the evaluation design may involve many activities. For instance, both supporters and skeptics of the program could be consulted to ensure that the proposed evaluation questions are politically viable. A menu of potential evaluation uses appropriate for the program's stage of development could be circulated among stakeholders to determine which is most compelling. Interviews could be held with specific intended users to better understand their information needs and timeline for action. Resource requirements could be reduced when users are willing to employ more timely but less precise evaluation methods.

Gather Credible Evidence

Credible evidence is the raw material of a good evaluation. The information learned should be seen by stakeholders as believable, trustworthy, and relevant to answer their questions. This requires thinking broadly about what counts as "evidence." Such decisions are always situational; they depend on the question being posed and the motives for asking it. For some questions, a stakeholder's standard for credibility could demand having the results of a randomized experiment. For another question, a set of well-done, systematic observations such as interactions between an outreach worker and community residents, will have high credibility. The difference depends on what kind of information the stakeholders want and the situation in which it is gathered.

Context matters! In some situations, it may be necessary to consult evaluation specialists. This may be especially true if concern for data quality is especially high. In other circumstances, local people may offer the deepest insights. Regardless of their expertise, however, those involved in an evaluation should strive to collect information that will convey a credible, well-rounded picture of the program and its efforts.

Having credible evidence strengthens the evaluation results as well as the recommendations that follow from them. Although all types of data have limitations, it is possible to improve an evaluation's overall credibility. One way to do this is by using multiple procedures for gathering, analyzing, and interpreting data. Encouraging participation by stakeholders can also enhance perceived credibility. When stakeholders help define questions and gather data, they will be more likely to accept the evaluation's conclusions and to act on its recommendations.

The following features of evidence gathering typically affect how credible it is seen as being:

Indicators translate general concepts about the program and its expected effects into specific, measurable parts.

Examples of indicators include:

  • The program's capacity to deliver services
  • The participation rate
  • The level of client satisfaction
  • The amount of intervention exposure (how many people were exposed to the program, and for how long they were exposed)
  • Changes in participant behavior
  • Changes in community conditions or norms
  • Changes in the environment (e.g., new programs, policies, or practices)
  • Longer-term changes in population health status (e.g., estimated teen pregnancy rate in the county)

Indicators should address the criteria that will be used to judge the program. That is, they reflect the aspects of the program that are most meaningful to monitor. Several indicators are usually needed to track the implementation and effects of a complex program or intervention.

One way to develop multiple indicators is to create a "balanced scorecard," which contains indicators that are carefully selected to complement one another. According to this strategy, program processes and effects are viewed from multiple perspectives using small groups of related indicators. For instance, a balanced scorecard for a single program might include indicators of how the program is being delivered; what participants think of the program; what effects are observed; what goals were attained; and what changes are occurring in the environment around the program.

Another approach to using multiple indicators is based on a program logic model, such as we discussed earlier in the section. A logic model can be used as a template to define a full spectrum of indicators along the pathway that leads from program activities to expected effects. For each step in the model, qualitative and/or quantitative indicators could be developed.
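
As a simple illustration of using a logic model as a template for indicators, here is a hypothetical Python sketch for a drunk-driving prevention effort like Drive Smart; the steps and indicators are invented examples rather than a prescribed set.

```python
# Hypothetical logic model for a drunk-driving prevention program,
# with one or two example indicators attached to each step in the pathway.
logic_model = [
    {"step": "Inputs", "description": "Staff, volunteers, and grant funding",
     "indicators": ["budget spent to date", "volunteer hours contributed"]},
    {"step": "Activities", "description": "Public education and 'Safe Rides' taxis",
     "indicators": ["education events held", "rides provided"]},
    {"step": "Outputs", "description": "Residents reached by the program",
     "indicators": ["% of residents aware of the program"]},
    {"step": "Outcomes", "description": "Less drinking and driving",
     "indicators": ["% who report drinking and driving", "single car nighttime crashes"]},
]

# Walk the pathway from inputs to expected effects, listing indicators for each step.
for stage in logic_model:
    print(f"{stage['step']}: {stage['description']}")
    for indicator in stage["indicators"]:
        print(f"  indicator: {indicator}")
```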

Indicators can be broad-based and don't need to focus only on a program's long-term goals. They can also address intermediary factors that influence program effectiveness, including such intangible factors as service quality, community capacity, or inter-organizational relations. Indicators for these and similar concepts can be created by systematically identifying and then tracking markers of what is said or done when the concept is expressed.

In the course of an evaluation, indicators may need to be modified or new ones adopted. Also, measuring program performance by tracking indicators is only one part of evaluation, and shouldn't be confused as a basis for decision making in itself. There are definite perils to using performance indicators as a substitute for completing the evaluation process and reaching fully justified conclusions. For example, an indicator, such as a rising rate of unemployment, may be falsely assumed to reflect a failing program when it may actually be due to changing environmental conditions that are beyond the program's control.

Sources of evidence in an evaluation may be people, documents, or observations. More than one source may be used to gather evidence for each indicator. In fact, selecting multiple sources provides an opportunity to include different perspectives about the program and enhances the evaluation's credibility. For instance, an inside perspective may be reflected by internal documents and comments from staff or program managers; whereas clients and those who do not support the program may provide different, but equally relevant perspectives. Mixing these and other perspectives provides a more comprehensive view of the program or intervention.

The criteria used to select sources should be clearly stated so that users and other stakeholders can interpret the evidence accurately and assess if it may be biased. In addition, some sources provide information in narrative form (for example, a person's experience when taking part in the program) and others are numerical (for example, how many people were involved in the program). The integration of qualitative and quantitative information can yield evidence that is more complete and more useful, thus meeting the needs and expectations of a wider range of stakeholders.

Quality refers to the appropriateness and integrity of information gathered in an evaluation. High-quality data are reliable and informative, and they are easier to collect when the indicators have been well defined. Other factors that affect quality may include instrument design, data collection procedures, training of those involved in data collection, source selection, coding, data management, and routine error checking. Obtaining quality data will entail tradeoffs (e.g. breadth vs. depth); stakeholders should decide together what is most important to them. Because all data have limitations, the intent of a practical evaluation is to strive for a level of quality that meets the stakeholders' threshold for credibility.

Quantity refers to the amount of evidence gathered in an evaluation. It is necessary to estimate in advance the amount of information that will be required and to establish criteria to decide when to stop collecting data - to know when enough is enough. Quantity affects the level of confidence or precision users can have - how sure we are that what we've learned is true. It also partly determines whether the evaluation will be able to detect effects. All evidence collected should have a clear, anticipated use.

By logistics , we mean the methods, timing, and physical infrastructure for gathering and handling evidence. People and organizations also have cultural preferences that dictate acceptable ways of asking questions and collecting information, including who would be perceived as an appropriate person to ask the questions. For example, some participants may be unwilling to discuss their behavior with a stranger, whereas others are more at ease with someone they don't know. Therefore, the techniques for gathering evidence in an evaluation must be in keeping with the cultural norms of the community. Data collection procedures should also ensure that confidentiality is protected.

Justify Conclusions

The process of justifying conclusions recognizes that evidence in an evaluation does not necessarily speak for itself. Evidence must be carefully considered from a number of different stakeholders' perspectives to reach conclusions that are well-substantiated and justified. Conclusions become justified when they are linked to the evidence gathered and judged against agreed-upon values set by the stakeholders. Stakeholders must agree that conclusions are justified in order to use the evaluation results with confidence.

The principal elements involved in justifying conclusions based on evidence are:

Standards reflect the values held by stakeholders about the program. They provide the basis to make program judgments. The use of explicit standards for judgment is fundamental to sound evaluation. In practice, when stakeholders articulate and negotiate their values, these become the standards to judge whether a given program's performance will, for instance, be considered "successful," "adequate," or "unsuccessful."

Analysis and synthesis

Analysis and synthesis are methods to discover and summarize an evaluation's findings. They are designed to detect patterns in evidence, either by isolating important findings (analysis) or by combining different sources of information to reach a larger understanding (synthesis). Mixed method evaluations require the separate analysis of each evidence element, as well as a synthesis of all sources to examine patterns that emerge. Deciphering facts from a given body of evidence involves deciding how to organize, classify, compare, and display information. These decisions are guided by the questions being asked, the types of data available, and especially by input from stakeholders and primary intended users.

Interpretation

Interpretation is the effort to figure out what the findings mean. Uncovering facts about a program's performance isn't enough to draw conclusions. The facts must be interpreted to understand their practical significance. For example, saying, "15% of the people in our area witnessed a violent act last year," may be interpreted differently depending on the situation. If 50% of community members had witnessed a violent act when they were surveyed five years ago, the group can conclude that, while still a problem, things are getting better in the community. However, if five years ago only 7% of those surveyed said the same thing, community organizations may see this as a sign that they might want to change what they are doing. In short, interpretations draw on information and perspectives that stakeholders bring to the evaluation. They can be strengthened through active participation or interaction with the data and preliminary explanations of what happened.

Judgments are statements about the merit, worth, or significance of the program. They are formed by comparing the findings and their interpretations against one or more selected standards. Because multiple standards can be applied to a given program, stakeholders may reach different or even conflicting judgments. For instance, a program that increases its outreach by 10% from the previous year may be judged positively by program managers, based on standards of improved performance over time. Community members, however, may feel that despite improvements, a minimum threshold of access to services has still not been reached. Their judgment, based on standards of social equity, would therefore be negative. Conflicting claims about a program's quality, value, or importance often indicate that stakeholders are using different standards or values in making judgments. This type of disagreement can be a catalyst to clarify values and to negotiate the appropriate basis (or bases) on which the program should be judged.

Recommendations

Recommendations are actions to consider as a result of the evaluation. Forming recommendations requires information beyond just what is necessary to form judgments. For example, knowing that a program is able to increase the services available to battered women doesn't necessarily translate into a recommendation to continue the effort, particularly when there are competing priorities or other effective alternatives. Thus, recommendations about what to do with a given intervention go beyond judgments about a specific program's effectiveness.

If recommendations aren't supported by enough evidence, or if they aren't in keeping with stakeholders' values, they can really undermine an evaluation's credibility. By contrast, an evaluation can be strengthened by recommendations that anticipate and react to what users will want to know.

Three things might increase the chances that recommendations will be relevant and well-received:

  • Sharing draft recommendations
  • Soliciting reactions from multiple stakeholders
  • Presenting options instead of directive advice

Justifying conclusions in an evaluation is a process that involves different possible steps. For instance, conclusions could be strengthened by searching for alternative explanations from the ones you have chosen, and then showing why they are unsupported by the evidence. When there are different but equally well supported conclusions, each could be presented with a summary of their strengths and weaknesses. Techniques to analyze, synthesize, and interpret findings might be agreed upon before data collection begins.

Ensure Use and Share Lessons Learned

It is naive to assume that lessons learned in an evaluation will necessarily be used in decision making and subsequent action. Deliberate effort on the part of evaluators is needed to ensure that the evaluation findings will be used appropriately. Preparing for their use involves strategic thinking and continued vigilance in looking for opportunities to communicate and influence. Both of these should begin in the earliest stages of the process and continue throughout the evaluation.

The key elements in ensuring that the recommendations from an evaluation are used are:

Design refers to how the evaluation's questions, methods, and overall processes are constructed. As discussed in the third step of this framework (focusing the evaluation design), the evaluation should be organized from the start to achieve specific agreed-upon uses. Having a clear purpose that is focused on the use of what is learned helps those who will carry out the evaluation to know who will do what with the findings. Furthermore, the process of creating a clear design will highlight ways that stakeholders, through their many contributions, can improve the evaluation and facilitate the use of the results.

Preparation

Preparation refers to the steps taken to get ready for the future uses of the evaluation findings. The ability to translate new knowledge into appropriate action is a skill that can be strengthened through practice. In fact, building this skill can itself be a useful benefit of the evaluation. It is possible to prepare stakeholders for future use of the results by discussing how potential findings might affect decision making.

For example, primary intended users and other stakeholders could be given a set of hypothetical results and asked what decisions or actions they would make on the basis of this new knowledge. If they indicate that the evidence presented is incomplete or irrelevant and that no action would be taken, then this is an early warning sign that the planned evaluation should be modified. Preparing for use also gives stakeholders more time to explore both positive and negative implications of potential results and to identify different options for program improvement.

Feedback is the communication that occurs among everyone involved in the evaluation. Giving and receiving feedback creates an atmosphere of trust among stakeholders; it keeps an evaluation on track by keeping everyone informed about how the evaluation is proceeding. Primary intended users and other stakeholders have a right to comment on evaluation decisions. From a standpoint of ensuring use, stakeholder feedback is a necessary part of every step in the evaluation. Obtaining valuable feedback can be encouraged by holding discussions during each step of the evaluation and routinely sharing interim findings, provisional interpretations, and draft reports.

Follow-up refers to the support that many users need during the evaluation and after they receive evaluation findings. Because of the amount of effort required, reaching justified conclusions in an evaluation can seem like an end in itself. It is not. Active follow-up may be necessary to remind users of the intended uses of what has been learned. Follow-up may also be required to stop lessons learned from becoming lost or ignored in the process of making complex or political decisions. To guard against such oversight, it may be helpful to have someone involved in the evaluation serve as an advocate for the evaluation's findings during the decision-making phase.

Facilitating the use of evaluation findings also carries with it the responsibility to prevent misuse. Evaluation results are always bounded by the context in which the evaluation was conducted. Some stakeholders, however, may be tempted to take results out of context or to use them for different purposes than what they were developed for. For instance, over-generalizing the results from a single case study to make decisions that affect all sites in a national program is an example of misuse of a case study evaluation.

Similarly, program opponents may misuse results by overemphasizing negative findings without giving proper credit for what has worked. Active follow-up can help to prevent these and other forms of misuse by ensuring that evidence is only applied to the questions that were the central focus of the evaluation.

Dissemination

Dissemination is the process of communicating the procedures or the lessons learned from an evaluation to relevant audiences in a timely, unbiased, and consistent fashion. Like other elements of the evaluation, the reporting strategy should be discussed in advance with intended users and other stakeholders. Planning effective communications also requires considering the timing, style, tone, message source, vehicle, and format of information products. Regardless of how communications are constructed, the goal for dissemination is to achieve full disclosure and impartial reporting.

Along with the uses for evaluation findings, there are also uses that flow from the very process of evaluating. These "process uses" should be encouraged. The people who take part in an evaluation can experience profound changes in beliefs and behavior. For instance, an evaluation challenges staff members to act differently in what they are doing, and to question assumptions that connect program activities with intended effects.

Evaluation also prompts staff to clarify their understanding of the goals of the program. This greater clarity, in turn, helps staff members to better function as a team focused on a common end. In short, immersion in the logic, reasoning, and values of evaluation can have very positive effects, such as basing decisions on systematic judgments instead of on unfounded assumptions.

Additional process uses for evaluation include:

  • By defining indicators, what really matters to stakeholders becomes clear
  • It helps make outcomes matter by changing the reinforcements connected with achieving positive results. For example, a funder might offer "bonus grants" or "outcome dividends" to a program that has shown a significant amount of community change and improvement.

Standards for "good" evaluation

There are standards to assess whether all of the parts of an evaluation are well-designed and working to their greatest potential. The Joint Committee on Educational Evaluation developed "The Program Evaluation Standards" for this purpose. These standards, designed to assess evaluations of educational programs, are also relevant for programs and interventions related to community health and development.

The program evaluation standards make it practical to conduct sound and fair evaluations. They offer well-supported principles to follow when faced with having to make tradeoffs or compromises. Attending to the standards can guard against an imbalanced evaluation, such as one that is accurate and feasible, but isn't very useful or sensitive to the context. Another example of an imbalanced evaluation is one that would be genuinely useful, but is impossible to carry out.

The following standards can be applied while developing an evaluation design and throughout the course of its implementation. Remember, the standards are written as guiding principles, not as rigid rules to be followed in all situations.

The 30 more specific standards are grouped into four categories: utility, feasibility, propriety, and accuracy.

Utility Standards

The utility standards ensure that an evaluation serves the information needs of its intended users. The seven utility standards are:

  • Stakeholder Identification : People who are involved in (or will be affected by) the evaluation should be identified, so that their needs can be addressed.
  • Evaluator Credibility : The people conducting the evaluation should be both trustworthy and competent, so that the evaluation will be generally accepted as credible or believable.
  • Information Scope and Selection : Information collected should address pertinent questions about the program, and it should be responsive to the needs and interests of clients and other specified stakeholders.
  • Values Identification: The perspectives, procedures, and rationale used to interpret the findings should be carefully described, so that the bases for judgments about merit and value are clear.
  • Report Clarity: Evaluation reports should clearly describe the program being evaluated, including its context, and the purposes, procedures, and findings of the evaluation. This will help ensure that essential information is provided and easily understood.
  • Report Timeliness and Dissemination: Significant midcourse findings and evaluation reports should be shared with intended users so that they can be used in a timely fashion.
  • Evaluation Impact: Evaluations should be planned, conducted, and reported in ways that encourage follow-through by stakeholders, so that the evaluation will be used.

Feasibility Standards

The feasibility standards ensure that the evaluation makes sense - that the planned steps are both viable and pragmatic.

The feasibility standards are:

  • Practical Procedures: The evaluation procedures should be practical, to keep disruption of everyday activities to a minimum while needed information is obtained.
  • Political Viability : The evaluation should be planned and conducted with anticipation of the different positions or interests of various groups. This should help in obtaining their cooperation so that possible attempts by these groups to curtail evaluation operations or to misuse the results can be avoided or counteracted.
  • Cost Effectiveness: The evaluation should be efficient and produce enough valuable information that the resources used can be justified.

Propriety Standards

The propriety standards ensure that the evaluation is an ethical one, conducted with regard for the rights and interests of those involved. The eight propriety standards follow.

  • Service Orientation : Evaluations should be designed to help organizations effectively serve the needs of all of the targeted participants.
  • Formal Agreements : The responsibilities in an evaluation (what is to be done, how, by whom, when) should be agreed to in writing, so that those involved are obligated to follow all conditions of the agreement, or to formally renegotiate it.
  • Rights of Human Subjects : Evaluation should be designed and conducted to respect and protect the rights and welfare of human subjects, that is, all participants in the study.
  • Human Interactions : Evaluators should respect basic human dignity and worth when working with other people in an evaluation, so that participants don't feel threatened or harmed.
  • Complete and Fair Assessment : The evaluation should be complete and fair in its examination, recording both strengths and weaknesses of the program being evaluated. This allows strengths to be built upon and problem areas addressed.
  • Disclosure of Findings : The people working on the evaluation should ensure that all of the evaluation findings, along with the limitations of the evaluation, are accessible to everyone affected by the evaluation, and any others with expressed legal rights to receive the results.
  • Conflict of Interest: Conflict of interest should be dealt with openly and honestly, so that it does not compromise the evaluation processes and results.
  • Fiscal Responsibility : The evaluator's use of resources should reflect sound accountability procedures and otherwise be prudent and ethically responsible, so that expenditures are accounted for and appropriate.

Accuracy Standards

The accuracy standards ensure that the evaluation findings are considered correct.

There are 12 accuracy standards:

  • Program Documentation: The program should be described and documented clearly and accurately, so that what is being evaluated is clearly identified.
  • Context Analysis: The context in which the program exists should be thoroughly examined so that likely influences on the program can be identified.
  • Described Purposes and Procedures: The purposes and procedures of the evaluation should be monitored and described in enough detail that they can be identified and assessed.
  • Defensible Information Sources: The sources of information used in a program evaluation should be described in enough detail that the adequacy of the information can be assessed.
  • Valid Information: The information gathering procedures should be chosen or developed and then implemented in such a way that they will assure that the interpretation arrived at is valid.
  • Reliable Information : The information gathering procedures should be chosen or developed and then implemented so that they will assure that the information obtained is sufficiently reliable.
  • Systematic Information: The information from an evaluation should be systematically reviewed and any errors found should be corrected.
  • Analysis of Quantitative Information: Quantitative information - data from observations or surveys - in an evaluation should be appropriately and systematically analyzed so that evaluation questions are effectively answered.
  • Analysis of Qualitative Information: Qualitative information - descriptive information from interviews and other sources - in an evaluation should be appropriately and systematically analyzed so that evaluation questions are effectively answered.
  • Justified Conclusions: The conclusions reached in an evaluation should be explicitly justified, so that stakeholders can understand their worth.
  • Impartial Reporting: Reporting procedures should guard against the distortion caused by personal feelings and biases of people involved in the evaluation, so that evaluation reports fairly reflect the evaluation findings.
  • Metaevaluation: The evaluation itself should be evaluated against these and other pertinent standards, so that it is appropriately guided and, on completion, stakeholders can closely examine its strengths and weaknesses.

Applying the framework: Conducting optimal evaluations

There is ever-increasing agreement on the worth of evaluation; in fact, evaluation is often required by funders and other constituents. So, community health and development professionals can no longer question whether or not to evaluate their programs. Instead, the appropriate questions are:

  • What is the best way to evaluate?
  • What are we learning from the evaluation?
  • How will we use what we learn to become more effective?

The framework for program evaluation helps answer these questions by guiding users to select evaluation strategies that are useful, feasible, proper, and accurate.

To use this framework requires quite a bit of skill in program evaluation. In most cases there are multiple stakeholders to consider, the political context may be divisive, steps don't always follow a logical order, and limited resources may make it difficult to take a preferred course of action. An evaluator's challenge is to devise an optimal strategy, given the conditions she is working under. An optimal strategy is one that accomplishes each step in the framework in a way that takes into account the program context and is able to meet or exceed the relevant standards.

This framework also makes it possible to respond to common concerns about program evaluation. For instance, many evaluations are not undertaken because they are seen as being too expensive. The cost of an evaluation, however, is relative; it depends upon the question being asked and the level of certainty desired for the answer. A simple, low-cost evaluation can deliver information valuable for understanding and improvement.

Rather than discounting evaluations as a time-consuming sideline, the framework encourages evaluations that are timed strategically to provide necessary feedback. This makes it possible to link evaluation closely with everyday practice.

Another concern centers on the perceived technical demands of designing and conducting an evaluation. However, the practical approach endorsed by this framework focuses on questions that can improve the program.

Finally, the prospect of evaluation troubles many staff members because they perceive evaluation methods as punishing ("They just want to show what we're doing wrong."), exclusionary ("Why aren't we part of it? We're the ones who know what's going on."), and adversarial ("It's us against them."). The framework instead encourages an evaluation approach that is designed to be helpful and engages all interested stakeholders in a process that welcomes their participation.

Evaluation is a powerful strategy for distinguishing programs and interventions that make a difference from those that don't. It is a driving force for developing and adapting sound strategies, improving existing programs, and demonstrating the results of investments in time and other resources. It also helps determine if what is being done is worth the cost.

This recommended framework for program evaluation is both a synthesis of existing best practices and a set of standards for further improvement. It supports a practical approach to evaluation based on steps and standards that can be applied in almost any setting. Because the framework is purposefully general, it provides a stable guide to design and conduct a wide range of evaluation efforts in a variety of specific program areas. The framework can be used as a template to create useful evaluation plans that contribute to understanding and improvement. For additional information on the requirements of good evaluation, and some straightforward steps that make a good evaluation of an intervention more feasible, read The Magenta Book - Guidance for Evaluation.

Online Resources

Are You Ready to Evaluate your Coalition? poses 15 questions to help your group decide whether your coalition is ready to evaluate itself and its work.

The  American Evaluation Association Guiding Principles for Evaluators  helps guide evaluators in their professional practice.

CDC Evaluation Resources  provides a list of resources for evaluation, as well as links to professional associations and journals.

Chapter 11: Community Interventions in the "Introduction to Community Psychology" explains professionally-led versus grassroots interventions, what it means for a community intervention to be effective, why a community needs to be ready for an intervention, and the steps to implementing community interventions.

The  Comprehensive Cancer Control Branch Program Evaluation Toolkit  is designed to help grantees plan and implement evaluations of their NCCCP-funded programs. It provides general guidance on evaluation principles and techniques, as well as practical templates and tools.

Developing an Effective Evaluation Plan  is a workbook provided by the CDC. In addition to information on designing an evaluation plan, this book also provides worksheets as a step-by-step guide.

EvaluACTION , from the CDC, is designed for people interested in learning about program evaluation and how to apply it to their work. Evaluation is a process, one dependent on what you’re currently doing and on the direction in which you’d like to go. In addition to providing helpful information, the site also features an interactive Evaluation Plan & Logic Model Builder, so you can create customized tools for your organization to use.

Evaluating Your Community-Based Program  is a handbook designed by the American Academy of Pediatrics covering a variety of topics related to evaluation.

GAO Designing Evaluations  is a handbook provided by the U.S. Government Accountability Office with copious information regarding program evaluations.

The CDC's  Introduction to Program Evaluation for Public Health Programs: A Self-Study Guide  is a "how-to" guide for planning and implementing evaluation activities. The manual, based on CDC’s Framework for Program Evaluation in Public Health, is intended to assist with planning, designing, implementing and using comprehensive evaluations in a practical way.

McCormick Foundation Evaluation Guide  is a guide to planning an organization’s evaluation, with several chapters dedicated to gathering information and using it to improve the organization.

A Participatory Model for Evaluating Social Programs from the James Irvine Foundation.

Practical Evaluation for Public Managers  is a guide to evaluation written by the U.S. Department of Health and Human Services.

Penn State Program Evaluation  offers information on collecting different forms of data and how to measure different community markers.

Program Evaluation  information page from Implementation Matters.

The Program Manager's Guide to Evaluation  is a handbook provided by the Administration for Children and Families with detailed answers to nine big questions regarding program evaluation.

Program Planning and Evaluation  is a website created by the University of Arizona. It provides links to information on several topics including methods, funding, types of evaluation, and reporting impacts.

User-Friendly Handbook for Program Evaluation  is a guide to evaluations provided by the National Science Foundation.  This guide includes practical information on quantitative and qualitative methodologies in evaluations.

W.K. Kellogg Foundation Evaluation Handbook  provides a framework for thinking about evaluation as a relevant and useful program tool. It was originally written for program directors with direct responsibility for the ongoing evaluation of the W.K. Kellogg Foundation.

Print Resources

This Community Tool Box section is an edited version of:

CDC Evaluation Working Group. (1999). (Draft). Recommended framework for program evaluation in public health practice . Atlanta, GA: Author.

The article cites the following references:

Adler, M., & Ziglio, E. (1996). Gazing into the oracle: the Delphi method and its application to social policy and community health and development. London: Jessica Kingsley Publishers.

Barrett, F.   Program Evaluation: A Step-by-Step Guide.  Sunnycrest Press, 2013. This practical manual includes helpful tips to develop evaluations, tables illustrating evaluation approaches, evaluation planning and reporting templates, and resources if you want more information.

Basch, C., Sliepcevich, E., Gold, R., Duncan, D., & Kolbe, L. (1985). Avoiding type III errors in health education program evaluation: a case study. Health Education Quarterly; 12(4): 315-31.

Bickman L, & Rog, D. (1998). Handbook of applied social research methods. Thousand Oaks, CA: Sage Publications.

Boruch, R.  (1998).  Randomized controlled experiments for evaluation and planning. In Handbook of applied social research methods, edited by Bickman L., & Rog. D. Thousand Oaks, CA: Sage Publications: 161-92.

Centers for Disease Control and Prevention DoHAP. Evaluating CDC HIV prevention programs: guidance and data system . Atlanta, GA: Centers for Disease Control and Prevention, Division of HIV/AIDS Prevention, 1999.

Centers for Disease Control and Prevention. Guidelines for evaluating surveillance systems. Morbidity and Mortality Weekly Report 1988;37(S-5):1-18.

Centers for Disease Control and Prevention. Handbook for evaluating HIV education . Atlanta, GA: Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion, Division of Adolescent and School Health, 1995.

Cook, T., & Campbell, D. (1979). Quasi-experimentation . Chicago, IL: Rand McNally.

Cook, T.,& Reichardt, C. (1979).  Qualitative and quantitative methods in evaluation research . Beverly Hills, CA: Sage Publications.

Cousins, J.,& Whitmore, E. (1998).   Framing participatory evaluation. In Understanding and practicing participatory evaluation , vol. 80, edited by E Whitmore. San Francisco, CA: Jossey-Bass: 5-24.

Chen, H. (1990).  Theory driven evaluations . Newbury Park, CA: Sage Publications.

de Vries, H., Weijts, W., Dijkstra, M., & Kok, G. (1992).  The utilization of qualitative and quantitative data for health education program planning, implementation, and evaluation: a spiral approach . Health Education Quarterly.1992; 19(1):101-15.

Dyal, W. (1995).  Ten organizational practices of community health and development: a historical perspective . American Journal of Preventive Medicine;11(6):6-8.

Eddy, D. (1998). Performance measurement: problems and solutions. Health Affairs; 17(4): 7-25.

Harvard Family Research Project. (1998). Performance measurement. The Evaluation Exchange, vol. 4, pp. 1-15.

Eoyang, G., & Berkas, T. (1996). Evaluation in a complex adaptive system.

Taylor-Powell, E., Steele, S., & Douglah, M. (1996). Planning a program evaluation. Madison, Wisconsin: University of Wisconsin Cooperative Extension.

Fawcett, S.B., Paine-Andrews, A., Fancisco, V.T., Schultz, J.A., Richter, K.P, Berkley-Patton, J., Fisher, J., Lewis, R.K., Lopez, C.M., Russos, S., Williams, E.L., Harris, K.J., & Evensen, P. (2001). Evaluating community initiatives for health and development. In I. Rootman, D. McQueen, et al. (Eds.),  Evaluating health promotion approaches . (pp. 241-277). Copenhagen, Denmark: World Health Organization - Europe.

Fawcett, S., Sterling, T., Paine-Andrews, A., Harris, K., Francisco, V., et al. (1996). Evaluating community efforts to prevent cardiovascular diseases. Atlanta, GA: Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion.

Fetterman, D., Kaftarian, S., & Wandersman, A. (1996). Empowerment evaluation: knowledge and tools for self-assessment and accountability. Thousand Oaks, CA: Sage Publications.

Frechtling, J.,& Sharp, L. (1997).  User-friendly handbook for mixed method evaluations . Washington, DC: National Science Foundation.

Goodman, R., Speers, M., McLeroy, K., Fawcett, S., Kegler M., et al. (1998).  Identifying and defining the dimensions of community capacity to provide a basis for measurement . Health Education and Behavior;25(3):258-78.

Greene, J.  (1994). Qualitative program evaluation: practice and promise . In Handbook of Qualitative Research, edited by NK Denzin and YS Lincoln. Thousand Oaks, CA: Sage Publications.

Haddix, A., Teutsch, S., Shaffer, P., & Dunet, D. (1996). Prevention effectiveness: a guide to decision analysis and economic evaluation. New York, NY: Oxford University Press.

Hennessy, M.  Evaluation. In Statistics in Community health and development , edited by Stroup. D.,& Teutsch. S. New York, NY: Oxford University Press, 1998: 193-219

Henry, G. (1998). Graphing data. In Handbook of applied social research methods , edited by Bickman. L., & Rog.  D.. Thousand Oaks, CA: Sage Publications: 527-56.

Henry, G. (1998).  Practical sampling. In Handbook of applied social research methods , edited by  Bickman. L., & Rog. D.. Thousand Oaks, CA: Sage Publications: 101-26.

Institute of Medicine. Improving health in the community: a role for performance monitoring . Washington, DC: National Academy Press, 1997.

Joint Committee on Educational Evaluation, James R. Sanders (Chair). The program evaluation standards: how to assess evaluations of educational programs . Thousand Oaks, CA: Sage Publications, 1994.

Kaplan,  R., & Norton, D.  The balanced scorecard: measures that drive performance . Harvard Business Review 1992;Jan-Feb71-9.

Kar, S. (1989). Health promotion indicators and actions . New York, NY: Springer Publications.

Knauft, E. (1993).   What independent sector learned from an evaluation of its own hard-to -measure programs . In A vision of evaluation, edited by ST Gray. Washington, DC: Independent Sector.

Koplan, J. (1999)  CDC sets millennium priorities . US Medicine 4-7.

Lipsey, M. (1998). Design sensitivity: statistical power for applied experimental research. In Handbook of applied social research methods, edited by Bickman, L., & Rog, D. Thousand Oaks, CA: Sage Publications: 39-68.

Lipsey, M. (1993). Theory as method: small theories of treatments . New Directions for Program Evaluation;(57):5-38.

Lipsey, M. (1997).  What can you build with thousands of bricks? Musings on the cumulation of knowledge in program evaluation . New Directions for Evaluation; (76): 7-23.

Love, A.  (1991).  Internal evaluation: building organizations from within . Newbury Park, CA: Sage Publications.

Miles, M., & Huberman, A. (1994).  Qualitative data analysis: a sourcebook of methods . Thousand Oaks, CA: Sage Publications, Inc.

National Quality Program. (1999).  National Quality Program , vol. 1999. National Institute of Standards and Technology.

National Quality Program. (1999). Baldrige index outperforms S&P 500 for fifth year.

National Quality Program. (1998). Health care criteria for performance excellence. National Quality Program.

Newcomer, K. Using statistics appropriately. In Handbook of Practical Program Evaluation, edited by Wholey, J., Hatry, H., & Newcomer, K. San Francisco, CA: Jossey-Bass, 1994: 389-416.

Patton, M. (1990).  Qualitative evaluation and research methods . Newbury Park, CA: Sage Publications.

Patton, M (1997).  Toward distinguishing empowerment evaluation and placing it in a larger context . Evaluation Practice;18(2):147-63.

Patton, M. (1997).  Utilization-focused evaluation . Thousand Oaks, CA: Sage Publications.

Perrin, B. Effective use and misuse of performance measurement . American Journal of Evaluation 1998;19(3):367-79.

Perrin, E, Koshel J. (1997).  Assessment of performance measures for community health and development, substance abuse, and mental health . Washington, DC: National Academy Press.

Phillips, J. (1997).  Handbook of training evaluation and measurement methods . Houston, TX: Gulf Publishing Company.

Porteous, N., Sheldrick, B., & Stewart, P. (1997). Program evaluation tool kit: a blueprint for community health and development management. Ottawa, Canada: Community health and development Research, Education, and Development Program, Ottawa-Carleton Health Department.

Posavac, E., & Carey R. (1980).  Program evaluation: methods and case studies . Prentice-Hall, Englewood Cliffs, NJ.

Preskill, H. & Torres R. (1998).  Evaluative inquiry for learning in organizations . Thousand Oaks, CA: Sage Publications.

Public Health Functions Project. (1996). The public health workforce: an agenda for the 21st century . Washington, DC: U.S. Department of Health and Human Services, Community health and development Service.

Public Health Training Network. (1998).  Practical evaluation of public health programs . CDC, Atlanta, GA.

Reichardt, C., & Mark M. (1998).  Quasi-experimentation . In Handbook of applied social research methods, edited by L Bickman and DJ Rog. Thousand Oaks, CA: Sage Publications, 193-228.

Rossi, P., & Freeman H.  (1993).  Evaluation: a systematic approach . Newbury Park, CA: Sage Publications.

Rush, B., & Ogborne, A. (1995). Program logic models: expanding their role and structure for program planning and evaluation. Canadian Journal of Program Evaluation; 6: 95-106.

Sanders, J. (1993).  Uses of evaluation as a means toward organizational effectiveness. In A vision of evaluation , edited by ST Gray. Washington, DC: Independent Sector.

Schorr, L. (1997).   Common purpose: strengthening families and neighborhoods to rebuild America . New York, NY: Anchor Books, Doubleday.

Scriven, M. (1998). A minimalist theory of evaluation: the least theory that practice requires. American Journal of Evaluation.

Shadish, W., Cook, T., Leviton, L. (1991).  Foundations of program evaluation . Newbury Park, CA: Sage Publications.

Shadish, W. (1998).   Evaluation theory is who we are. American Journal of Evaluation:19(1):1-19.

Shulha, L., & Cousins, J. (1997).  Evaluation use: theory, research, and practice since 1986 . Evaluation Practice.18(3):195-208

Sieber, J. (1998).   Planning ethically responsible research . In Handbook of applied social research methods, edited by L Bickman and DJ Rog. Thousand Oaks, CA: Sage Publications: 127-56.

Steckler, A., McLeroy, K., Goodman, R., Bird, S., & McCormick, L. (1992). Toward integrating qualitative and quantitative methods: an introduction. Health Education Quarterly; 19(1): 1-8.

Taylor-Powell, E., Rossing, B., Geran, J. (1998). Evaluating collaboratives: reaching the potential. Madison, Wisconsin: University of Wisconsin Cooperative Extension.

Teutsch, S. (1992). A framework for assessing the effectiveness of disease and injury prevention. Morbidity and Mortality Weekly Report: Recommendations and Reports Series; 41(RR-3): 1-13.

Torres, R., Preskill, H., Piontek, M., (1996).   Evaluation strategies for communicating and reporting: enhancing learning in organizations . Thousand Oaks, CA: Sage Publications.

Trochim, W. (1999). Research methods knowledge base.

United Way of America. Measuring program outcomes: a practical approach . Alexandria, VA: United Way of America, 1996.

U.S. General Accounting Office. Case study evaluations . GAO/PEMD-91-10.1.9. Washington, DC: U.S. General Accounting Office, 1990.

U.S. General Accounting Office. Designing evaluations . GAO/PEMD-10.1.4. Washington, DC: U.S. General Accounting Office, 1991.

U.S. General Accounting Office. Managing for results: measuring program results that are under limited federal control . GAO/GGD-99-16. Washington, DC: 1998.

U.S. General Accounting Office. Prospective evaluation methods: the prospective evaluation synthesis. GAO/PEMD-10.1.10. Washington, DC: U.S. General Accounting Office, 1990.

U.S. General Accounting Office. The evaluation synthesis . Washington, DC: U.S. General Accounting Office, 1992.

U.S. General Accounting Office. Using statistical sampling . Washington, DC: U.S. General Accounting Office, 1992.

Wandersman, A., Morrissey, E., Davino, K., Seybolt, D., Crusto, C., et al. Comprehensive quality programming and accountability: eight essential strategies for implementing successful prevention programs . Journal of Primary Prevention 1998;19(1):3-30.

Weiss, C. (1995). Nothing as practical as a good theory: exploring theory-based evaluation for comprehensive community initiatives for families and children. In New Approaches to Evaluating Community Initiatives, edited by Connell, J., Kubisch, A., Schorr, L., & Weiss, C. New York, NY: Aspen Institute.

Weiss, C. (1998).  Have we learned anything new about the use of evaluation? American Journal of Evaluation;19(1):21-33.

Weiss, C. (1997).  How can theory-based evaluation make greater headway? Evaluation Review 1997;21(4):501-24.

W.K. Kellogg Foundation. (1998). The W.K. Kellogg Foundation Evaluation Handbook. Battle Creek, MI: W.K. Kellogg Foundation.

Wong-Reiger, D.,& David, L. (1995).  Using program logic models to plan and evaluate education and prevention programs. In Evaluation Methods Sourcebook II, edited by Love. A.J. Ottawa, Ontario: Canadian Evaluation Society.

Wholey, J., Hatry, H., & Newcomer, K. Handbook of Practical Program Evaluation. Jossey-Bass, 2010. This book serves as a comprehensive guide to the evaluation process and its practical applications for sponsors, program managers, and evaluators.

Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The Program Evaluation Standards: A Guide for Evaluators and Evaluation Users (3rd ed.). Sage Publications.

Yin, R. (1988).  Case study research: design and methods . Newbury Park, CA: Sage Publications.


Chapter 2 | Methodological Principles of Evaluation Design

Evaluation of International Development Interventions

Evaluation approaches and methods do not exist in a vacuum. Stakeholders who commission or use evaluations and those who manage or conduct evaluations all have their own ideas and preferences about which approaches and methods to use. An individual’s disciplinary background, experience, and institutional role influence such preferences; other factors include internalized ideas about rigor and applicability of methods. This guide informs evaluation stakeholders about a range of approaches and methods used in evaluative analysis, provides a quick overview of the key features of each, and indicates which approaches and methods tend to work best in given situations.

Before we present the specific approaches and methods in chapter 3, let us consider some of the key methodological principles of evaluation design that provide the foundations for the selection, adaptation, and use of evaluation approaches and methods in an IEO evaluation setting. To be clear, we focus only on methodological issues here and do not discuss other key aspects of design, such as particular stakeholders’ intended use of the evaluation. The principles discussed in this chapter pertain also to evaluation in general, but they are especially pertinent for designing independent evaluations in an international development context. We consider the following methodological principles to be important for developing high-quality evaluations:

  • Giving due consideration to methodological aspects of evaluation quality in design: focus, consistency, reliability, and validity
  • Matching evaluation design to the evaluation questions
  • Using effective tools for evaluation design
  • Balancing scope and depth in multilevel, multisite evaluands
  • Mixing methods for analytical depth and breadth
  • Dealing with institutional opportunities and constraints of budget, data, and time
  • Building on theory

Let us briefly review each of these in turn.

Giving Due Consideration to Methodological Aspects of Evaluation Quality in Design

Evaluation quality is complex. It may be interpreted in different ways and refer to one or more aspects of quality in terms of process, use of methods, team composition, findings, and so on. Here we will talk about quality of inference: the quality of the findings of an evaluation as underpinned by clear reasoning and reliable evidence. We can differentiate among four broad, interrelated sets of determinants:

  • The budget, data, and time available for an evaluation (see the Dealing with Institutional Opportunities and Constraints of Budget, Data, and Time section);
  • The institutional processes and incentives for producing quality work;
  • The expertise available within the evaluation team in terms of different types of knowledge and experience relevant to the evaluation: institutional, subject matter, contextual (for example, country), methodological, project management, communication; and
  • Overarching principles of quality of inference in evaluation research based on our experience and the methodological literature in the social and behavioral sciences.

Here we briefly discuss the final bullet point. From a methodological perspective, quality can be broken down into four aspects: focus, consistency, reliability, and validity.

Focus concerns the scope of the evaluation. Given the nature of the evaluand and the type of questions, how narrowly or widely does one cast the net? Does one look at both relevance and effectiveness issues? How far down the causal chain does the evaluation try to capture the causal contribution of an intervention? Essentially, the narrower the focus of an evaluation, the greater the concentration of financial and human resources on a particular aspect and consequently the greater the likelihood of high-quality inference.

Consistency here refers to the extent to which the different analytical steps of an evaluation are logically connected. The quality of inference is enhanced if there are logical connections among the initial problem statement, rationale and purpose of the evaluation, questions and scope, use of methods, data collection and analysis, and conclusions of an evaluation.

Reliability concerns the transparency and replicability of the evaluation process. The more systematic the evaluation process and the higher the levels of clarity and transparency of design and implementation, the higher the confidence of others in the quality of inference.

Finally, validity is a property of findings. There are many classifications of validity. A widely used typology is the one developed by Cook and Campbell (1979) and slightly refined by Hedges (2017):

  • Internal validity: To what extent is there a causal relationship between, for example, outputs and outcomes?
  • External validity: To what extent can we generalize findings to other contexts, people, or time periods?
  • Construct validity: To what extent is the element that we have measured a good representation of the phenomenon we are interested in?
  • Data analysis validity: To what extent are methods applied correctly and the data used in the analysis adequate for drawing conclusions?

Matching Evaluation Design to the Evaluation Questions

Although it may seem obvious that evaluation design should be matched to the evaluation questions, in practice much evaluation design is still too often methods driven. Evaluation professionals have implicit and explicit preferences and biases toward the approaches and methods they favor. The rise in randomized experiments for causal analysis is largely the result of a methods-driven movement. Although this guide is not the place to discuss whether methods-driven evaluation is justified, there are strong arguments against it. One such argument is that in IEOs (and in many similar institutional settings), one does not have the luxury of being too methods driven. In fact, the evaluation questions, types of evaluands, or types of outcomes that decision makers or other evaluation stakeholders are interested in are diverse and do not lend themselves to one singular approach or method for evaluation. Even for a subset of causal questions, given the nature of the evaluands and outcomes of interest (for example, the effect of technical assistance on institutional reform versus the effect of microgrants on health-seeking behavior of poor women), the availability and cost of data, and many other factors, there is never one single approach or method that is always better than others. For particular types of questions there are usually several methodological options with different requirements and characteristics that are better suited than others. Multiple classifications of questions can be helpful to evaluators in thinking more systematically about this link, such as causal versus noncausal questions, descriptive versus analytical questions, normative versus nonnormative questions, intervention-focused versus systems-based questions, and so on. Throughout this guide, each guidance note presents what we take to be the most relevant questions that the approach or method addresses.

Using Effective Tools for Evaluation Design

Over the years, the international evaluation community in general and institutionalized evaluation functions (such as IEOs) in particular have developed and used a number of tools to improve the quality and efficiency of evaluation design. Let us briefly discuss four prevalent tools.

First, a common tool in IEOs (and similar evaluation functions) is some type of multicriteria approach to justify the strategic selectivity of topics or interventions for evaluation. This could include demand-driven criteria such as potential stakeholder use or supply-driven criteria such as the financial volume or size of a program or portfolio of interventions. Strategic selectivity often goes hand in hand with evaluability assessment (Wholey 1979), which covers such aspects as stakeholder interest and potential use, data availability, and clarity of the evaluand (for example, whether a clear theory of change underlies the evaluand).
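As a rough illustration of how such a multicriteria screen might work in practice, the sketch below scores hypothetical candidate topics against weighted demand-side, supply-side, and evaluability criteria. The criteria names, weights, and ratings are invented for the example; an IEO would need to define its own.

```python
# Minimal sketch of a multicriteria selectivity screen for candidate evaluation
# topics. Criteria, weights, and scores are hypothetical placeholders.

CRITERIA_WEIGHTS = {
    "stakeholder_demand": 0.35,   # demand-driven: potential use of findings
    "portfolio_size": 0.25,       # supply-driven: financial volume of the portfolio
    "data_availability": 0.20,    # evaluability: can evidence actually be gathered?
    "clarity_of_evaluand": 0.20,  # evaluability: is there a usable theory of change?
}

def priority_score(scores: dict[str, float]) -> float:
    """Weighted average of 0-5 ratings for one candidate topic."""
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

candidates = {
    "Country program X": {"stakeholder_demand": 4, "portfolio_size": 5,
                          "data_availability": 2, "clarity_of_evaluand": 3},
    "Thematic area Y":   {"stakeholder_demand": 3, "portfolio_size": 2,
                          "data_availability": 4, "clarity_of_evaluand": 4},
}

# Rank candidate topics from highest to lowest priority score.
for topic, scores in sorted(candidates.items(),
                            key=lambda kv: priority_score(kv[1]), reverse=True):
    print(f"{topic}: {priority_score(scores):.2f}")
```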

A second important tool is the use of approach papers or inception reports. These are stand-alone documents that describe key considerations and decisions regarding the rationale, scope, and methodology of an evaluation. When evaluations are contracted out, the terms of reference for external consultants often contain similar elements. Terms of reference are, however, never a substitute for approach papers or inception reports.

As part of approach papers and inception reports, a third tool is the use of a design matrix. For each of the main evaluation questions, this matrix specifies the sources of evidence and the use of methods. Design matrixes may also be structured to reflect the multilevel nature (for example, global, selected countries, selected interventions) of the evaluation.
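A design matrix can be as simple as a structured table. The sketch below records two hypothetical rows programmatically, mapping each evaluation question to a level of analysis, sources of evidence, and methods; the questions, sources, and methods shown are placeholders rather than prescriptions.

```python
# Hypothetical design matrix rows: each evaluation question is mapped to the
# level of analysis, sources of evidence, and methods that will address it.

design_matrix = [
    {
        "question": "To what extent did the program improve service delivery?",
        "level": "selected countries",
        "evidence_sources": ["monitoring data", "facility records", "key informants"],
        "methods": ["desk review", "semistructured interviews", "contribution analysis"],
    },
    {
        "question": "How relevant was the portfolio to national priorities?",
        "level": "portfolio",
        "evidence_sources": ["strategy documents", "project appraisal documents"],
        "methods": ["portfolio review", "document analysis"],
    },
]

# A quick completeness check: every question should draw on at least two
# sources of evidence so that findings can be triangulated.
for row in design_matrix:
    assert len(row["evidence_sources"]) >= 2, f"Add sources for: {row['question']}"
```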

A fourth tool is the use of external peer reviewers or a reference group. Including external methodological and substantive experts in the evaluation design process can effectively reduce bias and enhance quality.

Balancing Scope and Depth in Multilevel, Multisite Evaluands

Although project-level evaluation continues to be important, at the same time and for multiple reasons international organizations and national governments are increasingly commissioning and conducting evaluations at higher programmatic levels of intervention. Examples of the latter are sector-level evaluations, country program evaluations, and regional or global thematic evaluations. These evaluations tend to have the following characteristics:

  • They often cover multiple levels of intervention, multiple sites (communities, provinces, countries), and multiple stakeholder groups at different levels and sites.
  • They are usually more summative and are useful for accountability purposes, but they may also contain important lessons for oversight bodies, management, operations, or other stakeholders.
  • They are characterized by elaborate evaluation designs.

A number of key considerations for evaluation design are specific to higher-level programmatic evaluations. The multilevel nature of the intervention (portfolio) requires a multilevel design with multiple methods applied at different levels of analysis (such as country or intervention type). For example, a national program to support the health sector in a given country may have interventions relating to policy dialogue, policy advisory support, and technical capacity development at the level of the line ministry while supporting particular health system and health service delivery activities across the country.

Multilevel methods choice goes hand in hand with multilevel sampling and selection issues. A global evaluation of an international organization’s support to private sector development may involve data collection and analysis at the global level (for example, global institutional mapping), the level of the organization’s portfolio (for example, desk review), the level of selected countries (for example, interviews with representatives of selected government departments or agencies and industry leaders), and the level of selected interventions (for example, theory-based causal analysis of advisory services in the energy sector). For efficiency, designs are often “nested”; for example, the evaluation covers selected interventions in selected countries. Evaluation designs may encompass different case study levels, with within-case analysis in a specific country (or regarding a specific intervention) and cross-case (comparative) analysis across countries (or interventions).

A key constraint in this type of evaluation is that one cannot cover everything. Even for one evaluation question, decisions on selectivity and scope are needed. Consequently, strategic questions should address the desired breadth and depth of analysis. In general, the need for depth of analysis (determined by, for example, the time, resources, and triangulation among methods needed to understand and assess one particular phenomenon) must be balanced by the need to generate generalizable claims (through informed sampling and selection). In addition to informed sampling and selection, generalizability of findings is influenced by the degree of convergence of findings from one or more cases with available existing evidence or of findings across cases. In addition, there is a clear need for breadth of analysis in an evaluation (looking at multiple questions, phenomena, and underlying factors) to adequately cover the scope of the evaluation. All these considerations require careful reflection in what can be a quite complicated evaluation design process.
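To make the idea of a nested design concrete, the following sketch shows one hypothetical way to select cases at two levels: a purposive selection of countries followed by a small random sample of interventions within each selected country. The portfolio data and selection rules are invented for illustration, not a recommended sampling strategy.

```python
import random

# Illustrative nested selection for a multilevel, multisite evaluation:
# purposively select countries, then randomly sample interventions within
# each selected country. The portfolio below is hypothetical.

portfolio = {
    "Country A": ["proj-01", "proj-02", "proj-03", "proj-04"],
    "Country B": ["proj-05", "proj-06"],
    "Country C": ["proj-07", "proj-08", "proj-09"],
}

# Level 1: purposive selection of countries (here, the two largest portfolios).
selected_countries = sorted(portfolio, key=lambda c: len(portfolio[c]), reverse=True)[:2]

# Level 2: random sample of interventions within each selected country,
# capped so the depth of within-case analysis stays manageable.
rng = random.Random(42)  # fixed seed keeps the selection transparent and replicable
nested_sample = {
    country: rng.sample(portfolio[country], k=min(2, len(portfolio[country])))
    for country in selected_countries
}
print(nested_sample)
```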

Mixing Methods for Analytical Depth and Breadth

Multilevel, multisite evaluations are by definition multimethod evaluations. But the idea of informed evaluation design, or the strategic mixing of methods, applies to essentially all evaluations. According to Bamberger (2012, 1), “Mixed methods evaluations seek to integrate social science disciplines with predominantly quantitative and predominantly qualitative approaches to theory, data collection, data analysis and interpretation. The purpose is to strengthen the reliability of data, validity of the findings and recommendations, and to broaden and deepen our understanding of the processes through which program outcomes and impacts are achieved, and how these are affected by the context within which the program is implemented.” The evaluator should always strive to identify and use the best-suited methods for the specific purposes and context of the evaluation and consider how other methods may compensate for any limitations of the selected methods. Although it is difficult to truly integrate different methods within a single evaluation design, the benefits of mixed methods designs are worth pursuing in most situations. The benefits are not just methodological; through mixed designs and methods, evaluations are better able to answer a broader range of questions and more aspects of each question.

There is an extensive and growing literature on mixed methods in evaluation. One of the seminal articles on the subject (by Greene, Caracelli, and Graham) provides a clear framework for using mixed methods in evaluation that is as relevant as ever. Greene, Caracelli, and Graham (1989) identify the following five principles and purposes of mixing methods:

  • Triangulation: Using different methods to compare findings. Convergence of findings from multiple methods strengthens the validity of findings. For example, a survey on investment behavior administered to a random sample of owners of small enterprises could confirm the findings obtained from semistructured interviews for a purposive sample of representatives of investment companies supporting the enterprises.
  • Initiation: Using different methods to critically question a particular position or line of thought. For example, an evaluator could test two rival theories (with different underlying methods) on the causal relationships between promoting alternative livelihoods in buffer zones of protected areas and protecting biodiversity.
  • Complementarity: Using one method to build on the findings from another method. For example, in-depth interviews with selected households and their individual members could deepen the findings from a quasi-experimental analysis on the relationship between advocacy campaigns and health-seeking behavior.
  • Development: Using one method to inform the development of another. For example, focus groups could be used to develop a contextualized understanding of women’s empowerment, and that understanding could then inform the development of a survey questionnaire.
  • Expansion: Using multiple methods to look at complementary areas. For example, social network analysis could be used to understand an organization’s position in the financial landscape of all major organizations supporting a country’s education sector, while semistructured interviews with officials from the education ministry and related agencies could be used to assess the relevance of the organization’s support to the sector.

Dealing with Institutional Opportunities and Constraints of Budget, Data, and Time

Evaluation is applied social science research conducted in the context of specific institutional requirements and opportunities, and a range of practical constraints. Addressing these all-too-common constraints, including budget, data, time, and political constraints, involves balancing rigor and depth of analysis with feasibility. In this sense, evaluation clearly distinguishes itself from academic research in several ways:

  • It is strongly linked to an organization’s accountability and learning processes, and there is some explicit or implicit demand-orientation in evaluation.
  • It is highly normative, and evidence is used to underpin normative conclusions about the merit and worth of an evaluand.
  • It puts the policy intervention (for example, the program, strategy, project, corporate process, thematic area of work) at the center of the analysis.
  • It is subject to institutional constraints of budget, time, and data. Even in more complicated evaluations of larger programmatic evaluands, evaluation (especially by IEOs) is essentially about “finding out fast” without compromising too much the quality of the analysis.
  • It is shaped in part by the availability of data already in the organizational system. Such data may include corporate data (financial, human resources, procurement, and so on), existing reporting (financial appraisal, monitoring, [self-] evaluation), and other data and background research conducted by the organization or its partners.

Building on Theory

Interventions are theories, and evaluation is the test (Pawson and Tilley 2001). This well-known reference indicates an influential school of thought and practice in evaluation, often called theory-driven or theory-based evaluation. Policy interventions (programs and projects) rely on underlying theories regarding how they are intended to work and contribute to processes of change. These theories (usually called program theories, theories of change, or intervention theories) are often made explicit in documents but sometimes exist only in the minds of stakeholders (for example, decision makers, evaluation commissioners, implementing staff, beneficiaries). Program theories (whether explicit or tacit) guide the design and implementation of policy interventions and also constitute an important basis for evaluation.

The important role of program theory (or variants thereof) is well established in evaluation. By describing the inner workings of how programs operate (or at least are intended to operate), the use of program theory is a fundamental step in evaluation planning and design. Regardless of the evaluation question or purpose, a central step will always be to develop a thorough understanding of the intervention that is evaluated. To this end, the development of program theories should always be grounded in stakeholder knowledge and informed to the extent possible by social scientific theories from psychology, sociology, economics, and other disciplines. Building program theories on the basis of stakeholder knowledge and social scientific theory supports more relevant and practice-grounded program theories, improves the conceptual clarity and precision of the theories, and ultimately increases the credibility of the evaluation.

Depending on the level of complexity of the evaluand (for example, a complex global portfolio on urban infrastructure support versus a specific road construction project) a program theory can serve as an overall sense-making framework; a framework for evaluation design by linking particular causal steps and assumptions to methods and data; or a framework for systematic causal analysis (for example, using qualitative comparative analysis or process tracing; see chapter 3 ). Program theories can be nested; more detailed theories of selected (sets of) interventions can be developed and used for guiding data collection, analysis, and the interpretation of findings, while the broader theory can be used to connect the different strands of intervention activities and to make sense of the broader evaluand (see also appendix B ).
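One lightweight way to make a program theory usable for evaluation design is to record each causal step together with its assumptions and the methods intended to test it. The sketch below does this for a hypothetical road construction project; the steps, assumptions, and methods are illustrative only, not a template drawn from any particular evaluation.

```python
from dataclasses import dataclass, field

# Minimal sketch of a program theory as a data structure, linking each causal
# step and its assumptions to the methods and data intended to test it.
# The intervention, steps, and methods are hypothetical.

@dataclass
class CausalStep:
    from_node: str
    to_node: str
    assumptions: list[str]
    evidence_methods: list[str] = field(default_factory=list)

road_project_theory = [
    CausalStep("road construction", "reduced travel time",
               assumptions=["roads are maintained"],
               evidence_methods=["monitoring data", "site visits"]),
    CausalStep("reduced travel time", "increased market access",
               assumptions=["transport services respond to better roads"],
               evidence_methods=["trader interviews", "price data analysis"]),
    CausalStep("increased market access", "higher household income",
               assumptions=["farmers have surplus to sell"],
               evidence_methods=["household survey", "process tracing"]),
]

# Flag causal steps in the theory that no method currently covers.
untested = [s for s in road_project_theory if not s.evidence_methods]
print(f"{len(untested)} causal step(s) not yet covered by any method")
```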

Bamberger, M. 2012. Introduction to Mixed Methods in Impact Evaluation . Impact Evaluation Notes 3 (August), InterAction and the Rockefeller Foundation. https://www.interaction.org/wp-content/uploads/2019/03/Mixed-Methods-in-Impact-Evaluation-English.pdf.

Bamberger, M., J. Rugh, and L. Mabry. 2006. RealWorld Evaluation: Working under Budget, Time, Data, and Political Constraints . Thousand Oaks, CA: SAGE.

Cook, T. D., and D. T. Campbell. 1979. Quasi-Experimentation: Design and Analysis Issues for Field Settings. Boston: Houghton Mifflin.

Greene, J., V. Caracelli, and W. Graham. 1989. “Toward a Conceptual Framework for Mixed-Method Evaluation Designs.” Educational Evaluation and Policy Analysis 11 (3): 209–21.

Hedges, L. V. 2017. “Design of Empirical Research.” In Research Methods and Methodologies in Education , 2nd ed., edited by R. Coe, M. Waring, L. V. Hedges, and J. Arthur, 25–33. Thousand Oaks, CA: SAGE.

Morra Imas, L., and R. Rist. 2009. The Road to Results . Washington, DC: World Bank.

Pawson, R., and N. Tilley. 2001. “Realistic Evaluation Bloodlines.” American Journal of Evaluation 22 (3): 317–24.

Wholey, Joseph. 1979. Evaluation—Promise and Performance. Washington, DC: Urban Institute.

  • For simplification purposes we define method as a particular technique involving a set of principles to collect or analyze data, or both. The term approach can be situated at a more aggregate level, that is, at the level of methodology, and usually involves a combination of methods within a unified framework. Methodology provides the structure and principles for developing and supporting a particular knowledge claim.
  • Development evaluation is not to be confused with developmental evaluation. The latter is a specific evaluation approach developed by Michael Patton.
  • Especially in independent evaluations conducted by independent evaluation units or departments in national or international nongovernmental, governmental, and multilateral organizations. Although a broader range of evaluation approaches may be relevant to the practice of development evaluation, we consider the current selection to be at the core of evaluative practice in independent evaluation.
  • Evaluation functions of organizations that are (to a large extent) structurally, organizationally, and behaviorally independent from management. Structural independence, which is the most distinguishing feature of independent evaluation offices, includes such aspects as independent budgets, independent human resource management, and no reporting line to management but rather to some type of oversight body (for example, an executive board).
  • The latter are not fully excluded from this guide but are not widely covered.
  • Evaluation is defined as applied policy-oriented research and builds on the principles, theories, and methods of the social and behavioral sciences.
  • Both reliability and validity are covered by a broad literature. Many of the ideas about these two principles are contested, and perspectives differ according to different schools of thought (with different underlying ontological and epistemological foundations).
  • A comprehensive discussion of the evaluation process, including tools, processes, and standards for designing, managing, quality assuring, disseminating, and using evaluations is effectively outside of the scope of this guide (see instead, for example, Bamberger, Rugh, and Mabry 2006; Morra Imas and Rist 2009).


Project Evaluation: Definition, Types and How to Do it


Table of Contents

  • What is project evaluation?
  • What are the principles of project evaluation?
  • Types of project evaluation
  • What are the benefits of performing a project evaluation?
  • How to complete a project evaluation

Implementing project evaluation is crucial for project managers who want to evaluate goals, objectives, and outcomes as well as gauge the efficacy of their initiatives. Different project evaluation methods provide insightful information and draw attention to areas that could need improvement. You may get a number of organisational benefits by including project assessment procedures in your workflow. This article covers the idea of project assessment and offers helpful tips on how to do it, so you can increase performance and ensure the success of your projects. In project management, project evaluation is an important way to check efficiency.

What is project evaluation?

Project evaluation is an approach for assessing the success and effects of projects, programmes, or policies. To evaluate the project's process and results, the evaluator must gather relevant data. Implementing project assessment enables organisations to make internal adjustments, spot trends within the project's target audience, organise the next efforts and convince external stakeholders of the project's worth.

What are the principles of project evaluation?

To guarantee the validity and efficacy of project assessments, it is crucial to abide by a number of fundamental guidelines that contribute to the organization's overall performance. These guiding principles offer a structure for carrying out assessments from beginning to end. The following are the main tenets of project evaluation:

  • Aim to increase performance: Each evaluation provides insights that can help your team continuously strengthen its processes.
  • Promote organisational learning: Setting up a feedback loop through frequent assessments fosters a culture of continuous learning and development.
  • Encourage stakeholder engagement: Sharing the findings of project assessments with stakeholders promotes their active involvement and increases transparency and trust.
  • Concentrate on results: Regular reviews keep your initiatives on track for measurable and attainable outcomes.
  • Develop connections with stakeholders: Including stakeholders in the assessment process fosters teamwork and increases confidence in your team's skills.
  • Use trustworthy procedures: Use evaluation techniques that can be verified and relied upon.
  • Conduct assessments ethically: Consider ethical issues carefully when choosing and executing evaluation methodologies, respecting the sensitivity of the project and your workers' well-being.
  • Embrace continuing evaluation: Fostering a culture of continuous assessment empowers teams and promotes ongoing improvement in project results.

Types of project evaluation

There are numerous types of project evaluation that can be used to check the effectiveness of your projects.

Pre-project evaluation

A feasibility analysis is a step in the development of a project that must be completed before any work starts. By ensuring that all stakeholders are aware of the project's objectives, this evaluation helps the project to be carried out successfully. By highlighting challenges such as resource availability, budgetary constraints and technology requirements, early feasibility assessments support early decision-making and efficient resource allocation. This review process can be incorporated into project planning to increase overall efficiency and the likelihood of successful outcomes.

Ongoing evaluation

For a project to be successful, metrics that verify completed work are essential. These include monitoring the budget, assessing the proportion of tasks completed, and rating the overall quality of the work. Using these indicators, you can accurately assess project progress and confirm conformity with the original objectives and goals. Focusing attention on the original project vision keeps the team on track and working towards the intended results. Routinely evaluating and monitoring these indicators makes it possible to make informed decisions, address any deviations, and take preventative action to ensure project success.

Post-project evaluation

A thorough examination of a project's results and effects must be carried out when it is finished. This evaluation involves assessing how successfully the project met its original aims and objectives. Assessing the results provides information on whether the intended outcomes were achieved and whether the project's deliverables were effectively met. Evaluating the impacts also reveals the real changes made for the intended audience or beneficiaries, including both intentional and unintended consequences for particular people, communities or organisations. By performing a thorough analysis, project managers can identify areas of success, pinpoint areas that need development and draw useful conclusions for subsequent endeavours. This analysis also helps stakeholders understand the project's value and efficacy, promoting accountability, openness and confidence.

Self-evaluation

People have the chance to perform self-evaluations at any time over the course of a project. These evaluations involve examining how their work contributes to the wider aims and goals. By recognising their strengths and shortcomings, quantifying their successes and understanding the extent of their impact, individuals can improve their ability to collaborate effectively within the team.

External evaluation

Engaging outside organisations to evaluate your work is an alternative strategy. As they have no past ties to or involvement in the project, these organisations bring objectivity to the appraisal process. This objectivity raises the credibility of the evaluation and its results. External reviews particularly benefit projects with multiple stakeholders or complicated components that call for a thorough analysis.

After the project evaluation, the next step is to present a proper project report. Get an overview of project reports and how to write one through our blog.

What are the benefits of performing a project evaluation?

Performing a project assessment has a number of advantages that help the organisation grow and succeed, both internally and externally. The main benefits of project evaluation are:

  • Tracks team performance: Keeping a record of prior assessments lets you monitor the growth and development of your team across a number of projects. This enables you to recognise your team's strengths and areas for development and make decisions that will improve team performance.
  • Identifies trends and patterns: Project evaluations help identify trends and patterns that appear during the assessment process, highlighting areas that can be improved. Understanding these patterns offers helpful insights into how the team can improve performance, fix problems and put improvement plans into practice.
  • Measures impact: Project assessments give your team the chance to gauge the practical effects of their efforts. You can objectively evaluate the outcomes and achievements using real measurements and feedback, providing important evidence of the project's success.
  • Engages stakeholders: Involving significant stakeholders in the project assessment process encourages openness and cooperation. It fosters a sense of ownership and shared responsibility, assures stakeholders of the quality of the finished and assessed projects, and boosts their trust in the organisation.
  • Promotes reflection and accountability: Project evaluations offer a place for team reflection by enabling members to critically evaluate their own performance and contributions. As team members hold themselves and one another accountable for their actions, results and continual growth, this practice encourages accountability.
  • Sharpens the planning process: Project assessment insights offer a wealth of information and lessons learnt. This information can be leveraged to improve and sharpen the planning process for subsequent initiatives, ensuring that future endeavours are founded on experience and an understanding of how the team performs and what methods lead to success.

Also read: Performance reporting in project management

How to complete a project evaluation

Project evaluation requires an organised methodology that is planned and executed carefully. The following stages will help you perform an evaluation for your project:

1. Create an evaluation plan

Setting goals and objectives before you start your project is crucial because they give your team structure and direction. As well as guiding your project, these aims and objectives will influence the kind of assessment you decide to carry out. When creating your evaluation plan, consider using tools and techniques that complement the selected assessment methodology. For instance, if your goal is to increase staff output, analysing task completion metrics may be a useful way to track improvements in productivity rates.
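
As a minimal illustration of that idea (the task names, fields and dates are hypothetical and not taken from any particular tool), a handful of task records can be summarised into a completion rate and a list of overdue items:

```python
from datetime import date

# Hypothetical task records collected for the evaluation plan
tasks = [
    {"name": "Draft survey questions", "done": True,  "due": date(2024, 3, 1)},
    {"name": "Schedule interviews",    "done": True,  "due": date(2024, 3, 8)},
    {"name": "Collect baseline data",  "done": False, "due": date(2024, 3, 15)},
]

completed = sum(1 for t in tasks if t["done"])
completion_rate = completed / len(tasks)
overdue = [t["name"] for t in tasks if not t["done"] and t["due"] < date.today()]

print(f"Task completion rate: {completion_rate:.0%}")   # e.g. "67%"
print(f"Overdue tasks: {overdue or 'none'}")
```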

2. Identify the source of evaluation and get organized

Once your evaluation plan is complete, it's crucial to pinpoint reliable information sources. Select the people you want to interview if interviews are a component of your strategy. Collect all the resources required for each evaluation technique, such as interview questions and a strategy for classifying and archiving answers. Consider assigning work to others and developing an extensive preparation schedule to guarantee the successful implementation of your evaluation strategy.

3. Implement the project evaluation

Depending on the precise assessment type and the techniques or instruments you have chosen, your evaluation plan will be implemented differently. The following are the main aspects to pay attention to when conducting various sorts of evaluations:

  • Pre-project evaluation: Focus on setting specific objectives and goals at this phase, as well as carrying out an in-depth analysis of the project's viability.
  • Ongoing evaluation: If you are evaluating a project that is in progress, pay particular attention to important elements such as the project timeline, budget adherence and the quality of the work being produced.
  • Post-project evaluation: After the project is over, carry out a thorough post-project evaluation to determine its overall success based on the results and impacts achieved. This assessment analyses the project's observable effects and how well it achieved the desired objectives.

4. Analyze the data

After collecting the necessary data for your review, it is essential to carry out a thorough analysis to spot trends and weaknesses and to see how closely the project adheres to its aims and objectives. Use a tracking system to organise and store the data according to its specific properties, then analyse the data against the aims and objectives established by your team to derive useful conclusions.
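
For example, a very small sketch of turning collected data into summaries that can be set against the team's objectives might look like this (the scores, themes and target are made up, not data from any real evaluation):

```python
from collections import Counter
from statistics import mean

# Hypothetical evaluation data gathered during the project
survey_scores = [4, 5, 3, 4, 4, 2, 5]                      # 1-5 satisfaction ratings
interview_themes = ["communication", "training",
                    "communication", "tools", "training"]   # coded interview responses
target_satisfaction = 4.0                                    # taken from the evaluation plan

avg = mean(survey_scores)
print(f"Average satisfaction: {avg:.1f} / 5 "
      f"({'meets' if avg >= target_satisfaction else 'below'} the target of {target_satisfaction})")
print("Most frequent interview themes:", Counter(interview_themes).most_common(2))
```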

5. Develop a report for your team

When the analysis of the gathered data is finished, it is important to put together a detailed report that gives a succinct overview of the assessment findings. Adapt the report's structure to meet the specific requirements of your team and stakeholders. This report is highly useful: it can be used to pinpoint areas that need improvement, highlight both planned and unintentional project outcomes, and assess how well your team is doing in terms of achieving goals and objectives.

6. Discuss next steps

Sharing the finished project evaluation report with the team and stakeholders promotes effective communication, inspires creative team improvement ideas, cultivates stronger stakeholder relationships, and offers direction for future project improvement based on the evaluation results and impact.

Project evaluation and project tracking run side by side. Learn how to do project tracking through our blog.

Consider enrolling in a PMP certification course from Staragile if you have an interest in project assessment and want to improve your abilities in this field. You will get extensive knowledge and skills in project assessment with this training programme, enabling you to evaluate project outcomes, gauge success, and make adjustments with ease. You will get the proper credentials and the abilities required to succeed in project management and evaluation with a PMP certification from Staragile.


Evaluation in project management: what you need to consider


Evaluation has a pivotal role in project management. Good evaluation maximises learning from projects and facilitates the effective communication of project benefits and successes. It serves as the compass, guiding projects toward successful completion and it should be woven into the very fabric of your project. In our experience, launching a project without evaluation embedded into its design is like embarking on an expedition without a map. However, not all project managers are aware of how evaluation relates to their project, nor the skills and practice needed to ensure a high-quality evaluation.

The UK Government Magenta Book defines evaluation as: 'a systematic assessment of the design, implementation and outcomes of an intervention. It involves understanding how an intervention is being, or has been, implemented and what effects it has, for whom and why. It identifies what can be improved and estimates its overall impacts and cost-effectiveness.'

Evaluation ensures you are on the right track from day one. It helps ensure your project meets its goals and objectives, and importantly, to demonstrate this to stakeholders. Early evaluation allows you to make informed decisions and adapt your project as needed; you can change your route if you identify early that you're heading in the wrong direction. Knowing what your stakeholders want and monitoring their satisfaction is also vital. When you integrate evaluation into your project from the start, you ensure that expectations are set, needs are met, and you build trust and confidence in your team's abilities.

How is evaluation linked to project management?

Evaluation should form part of any results-driven project management approach. Evaluation doesn't just happen at the end; it’s an ongoing process throughout the project's lifecycle. Ideally, evaluation starts at the planning stage, when the project work plan is being developed. This is where you set the parameters for your evaluation: what you will measure, how and when.

As you launch and deliver your project (implementation and monitoring), you will be collecting data and assessing progress. Are you hitting your milestones? Are you staying on budget? Are you continuing to meet your goals and objectives? Evaluation helps you answer these questions, giving you real-time insights to adjust your project as needed.
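
A simple sketch of such a real-time check, using entirely hypothetical milestones and budget figures, could look like this:

```python
from datetime import date

# Hypothetical milestones and spend-to-date figures
milestones = [
    {"name": "Pilot launched",  "planned": date(2024, 4, 1), "actual": date(2024, 4, 3)},
    {"name": "Mid-term review", "planned": date(2024, 6, 1), "actual": None},  # not yet reached
]
budget_planned_to_date = 50_000
budget_spent_to_date = 54_500

for m in milestones:
    if m["actual"] is None:
        print(f"{m['name']}: pending (planned for {m['planned']})")
    else:
        slip = (m["actual"] - m["planned"]).days
        print(f"{m['name']}: delivered {slip:+d} days against plan")

variance = budget_spent_to_date - budget_planned_to_date
print(f"Budget variance to date: {variance:+,} ({variance / budget_planned_to_date:+.1%})")
```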

Finally, when your project is complete (benefits realisation, closure and reporting stage), an evaluation closes the project down. You should assess whether you achieved what you set out to do and report your findings and lessons learnt. This is not just for internal purposes; it's also about transparency and accountability to your stakeholders.

What key steps can you take for evaluation?

Evaluation scoping: this is an important stage to demonstrate co-design and co-production by bringing your stakeholders together to clearly outline what you want to achieve through your project and establish what success looks like to all stakeholders. At this early stage, you will also set stakeholders' expectations and get their buy-in. A logic model and a story of change can help everyone work through this collectively and communicate what success looks like, how you will get there and the data that will tell you if you are on the right track.
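
As one illustration (the structure and example entries below are hypothetical, not a prescribed format), a logic model can be captured as structured data so that what will be measured, and how, is recorded at the scoping stage:

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    inputs: list[str]
    activities: list[str]
    outputs: list[str]
    outcomes: list[str]
    indicators: dict[str, str] = field(default_factory=dict)  # outcome -> how it will be measured

model = LogicModel(
    inputs=["2 trainers", "20k budget"],
    activities=["Deliver 10 reporting workshops"],
    outputs=["150 staff trained"],
    outcomes=["Improved reporting quality"],
    indicators={"Improved reporting quality": "Quarterly audit score, baseline vs endline"},
)
print(model.indicators["Improved reporting quality"])
```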

Evaluation planning: determine what evaluation questions you will ask and which evaluation approaches will be most suitable to answer those questions. You should plan what data you need to collect, how you'll collect it and when you'll do it. It can be helpful to identify an appropriate evaluation approach or model, depending on your project and what you are aiming to achieve and for whom.

Data collection and analysis: throughout your project, you should gather data that pertains to your objectives and key performance indicators (KPIs) as agreed during the evaluation scoping stage — in evaluation terms these are your process metrics. After this has been completed, evaluate the information you've collected to see if you’re on track, identify when you achieve your project outcomes and capture learning. Remember that data can be both quantitative and qualitative — and qualitative data can be a rich source of insights on what your project is achieving and how.

What to do with your evaluation data

Make decisions: based on your analysis, you can decide whether you need to adjust your project plan or continue as planned — interim evaluation findings can be incorporated into project decision gates.

Report and share: you should communicate your findings with stakeholders and your team. Remember, transparency is key!

Learn and improve: you should use your evaluation findings to improve your future projects. It's a continuous learning process.

Things to consider and skills needed

While it’s essential to consider evaluation as part of any project, it's not a one-size-fits-all process. Below are some things to consider to ensure you have the right skill set to design and deliver effective evaluations for your projects.

Tailored approach and flexibility: your evaluation methods should be tailored to your project's unique characteristics. What works for an education and training project may not be suitable for a communications and marketing campaign. Be prepared to adapt your evaluation methods and objectives as the project evolves.

Logic modelling: learn to use logic models in your project and evaluation design, using this method to refine your project outcomes and design effective evaluations while keeping stakeholders in the loop.

Data collection and analysis: a good evaluation demands a multi-methods approach. Skills in both qualitative and quantitative data collection, analysis and interpretation are highly valuable.

Critical thinking: evaluation is about assessing and making judgements. You will need critical thinking skills to analyse and interpret data and findings effectively.

Report writing: learn to hone the skills of effective evaluation report writing. Effective communication skills are crucial for sharing your findings and insights with stakeholders and team members. Having a good structure to your report, identifying and sharing key messages, and knowing what not to include all help with effective communication of your findings, achievements and learning.

You may also be interested in:

  • What is project planning?
  • Information sheet: Life cycles
  • APM Corporate Partners discuss the future of project data analytics

Dr Shehla Khalid, Sian Kitchen and Ejemen Asuelimen

Dr Shehla Khalid is a Senior Research and Evaluation Manager

Sian Kitchen is Senior Programme Lead for Evaluation and Insights

Ejemen Asuelimen is a Senior Project Manager at NHS England. They currently lead the evaluation of NHS staff retention and health and wellbeing programmes.


Evaluation Methodologies and M&E Methods

This article provides an overview and comparison of the different types of evaluation methodologies used to assess the performance, effectiveness, quality, or impact of services, programs, and policies. There are several methodologies, both qualitative and quantitative, including surveys, interviews, observations, case studies, focus groups, and more. In this article, we will discuss the most commonly used qualitative and quantitative evaluation methodologies in the M&E field.

Table of Contents

  • Introduction to Evaluation Methodologies: Definition and Importance
  • Types of Evaluation Methodologies: Overview and Comparison
  • Program Evaluation methodologies
  • Qualitative Methodologies in Monitoring and Evaluation (M&E)
  • Quantitative Methodologies in Monitoring and Evaluation (M&E)
  • What are the M&E Methods?
  • Difference Between Evaluation Methodologies and M&E Methods
  • Choosing the Right Evaluation Methodology: Factors and Criteria
  • Our Conclusion on Evaluation Methodologies

1. Introduction to Evaluation Methodologies: Definition and Importance

Evaluation methodologies are the methods and techniques used to measure the performance, effectiveness, quality, or impact of various interventions, services, programs, and policies. Evaluation is essential for decision-making, improvement, and innovation, as it helps stakeholders identify strengths, weaknesses, opportunities, and threats and make informed decisions to improve the effectiveness and efficiency of their operations.

Evaluation methodologies can be used in various fields and industries, such as healthcare, education, business, social services, and public policy. The choice of evaluation methodology depends on the specific goals of the evaluation, the type and level of data required, and the resources available for conducting the evaluation.

The importance of evaluation methodologies lies in their ability to provide evidence-based insights into the performance and impact of the subject being evaluated. This information can be used to guide decision-making, policy development, program improvement, and innovation. By using evaluation methodologies, stakeholders can assess the effectiveness of their operations and make data-driven decisions to improve their outcomes.

Overall, understanding evaluation methodologies is crucial for individuals and organizations seeking to enhance their performance, effectiveness, and impact. By selecting the appropriate evaluation methodology and conducting a thorough evaluation, stakeholders can gain valuable insights and make informed decisions to improve their operations and achieve their goals.

2. Types of Evaluation Methodologies: Overview and Comparison

Evaluation methodologies can be categorized into two main types based on the type of data they collect: qualitative and quantitative. Qualitative methodologies collect non-numerical data, such as words, images, or observations, while quantitative methodologies collect numerical data that can be analyzed statistically. Here is an overview and comparison of the main differences between qualitative and quantitative evaluation methodologies:

Qualitative Evaluation Methodologies:

  • Collect non-numerical data, such as words, images, or observations.
  • Focus on exploring complex phenomena, such as attitudes, perceptions, and behaviors, and understanding the meaning and context behind them.
  • Use techniques such as interviews, observations, case studies, and focus groups to collect data.
  • Emphasize the subjective nature of the data and the importance of the researcher’s interpretation and analysis.
  • Provide rich and detailed insights into people’s experiences and perspectives.
  • Limitations include potential bias from the researcher, limited generalizability of findings, and challenges in analyzing and synthesizing the data.

Quantitative Evaluation Methodologies:

  • Collect numerical data that can be analyzed statistically.
  • Focus on measuring specific variables and relationships between them, such as the effectiveness of an intervention or the correlation between two factors.
  • Use techniques such as surveys and experimental designs to collect data.
  • Emphasize the objectivity of the data and the importance of minimizing bias and variability.
  • Provide precise and measurable data that can be compared and analyzed statistically.
  • Limitations include potential oversimplification of complex phenomena, limited contextual information, and challenges in collecting and analyzing data.

Choosing between qualitative and quantitative evaluation methodologies depends on the specific goals of the evaluation, the type and level of data required, and the resources available for conducting the evaluation. Some evaluations may use a mixed-methods approach that combines both qualitative and quantitative data collection and analysis techniques to provide a more comprehensive understanding of the subject being evaluated.

3. Program evaluation methodologies

Program evaluation methodologies encompass a diverse set of approaches and techniques used to assess the effectiveness, efficiency, and impact of programs and interventions. These methodologies provide systematic frameworks for collecting, analyzing, and interpreting data to determine the extent to which program objectives are being met and to identify areas for improvement. Common program evaluation methodologies include quantitative methods such as experimental designs, quasi-experimental designs, and surveys, as well as qualitative approaches like interviews, focus groups, and case studies.

Each methodology offers unique advantages and limitations depending on the nature of the program being evaluated, the available resources, and the research questions at hand. By employing rigorous program evaluation methodologies, organizations can make informed decisions, enhance program effectiveness, and maximize the use of resources to achieve desired outcomes.


4. Qualitative Methodologies in Monitoring and Evaluation (M&E)

Qualitative methodologies are increasingly being used in monitoring and evaluation (M&E) to provide a more comprehensive understanding of the impact and effectiveness of programs and interventions. Qualitative methodologies can help to explore the underlying reasons and contexts that contribute to program outcomes and identify areas for improvement. Here are some common qualitative methodologies used in M&E:

Interviews

Interviews involve one-on-one or group discussions with stakeholders to collect data on their experiences, perspectives, and perceptions. Interviews can provide rich and detailed data on the effectiveness of a program, the factors that contribute to its success or failure, and the ways in which it can be improved.

Observations

Observations involve the systematic and objective recording of behaviors and interactions of stakeholders in a natural setting. Observations can help to identify patterns of behavior, the effectiveness of program interventions, and the ways in which they can be improved.

Document review

Document review involves the analysis of program documents, such as reports, policies, and procedures, to understand the program context, design, and implementation. Document review can help to identify gaps in program design or implementation and suggest ways in which they can be improved.

Participatory Rural Appraisal (PRA)

PRA is a participatory approach that involves working with communities to identify and analyze their own problems and challenges. It involves using participatory techniques such as mapping, focus group discussions, and transect walks to collect data on community perspectives, experiences, and priorities. PRA can help ensure that the evaluation is community-driven and culturally appropriate, and can provide valuable insights into the social and cultural factors that influence program outcomes.

Key Informant Interviews

Key informant interviews are in-depth, open-ended interviews with individuals who have expert knowledge or experience related to the program or issue being evaluated. Key informants can include program staff, community leaders, or other stakeholders. These interviews can provide valuable insights into program implementation and effectiveness, and can help identify areas for improvement.

Ethnography

Ethnography is a qualitative method that involves observing and immersing oneself in a community or culture to understand their perspectives, values, and behaviors. Ethnographic methods can include participant observation, interviews, and document analysis, among others. Ethnography can provide a more holistic understanding of program outcomes and impacts, as well as the broader social context in which the program operates.

Focus Group Discussions

Focus group discussions involve bringing together a small group of individuals to discuss a specific topic or issue related to the program. Focus group discussions can be used to gather qualitative data on program implementation, participant experiences, and program outcomes. They can also provide insights into the diversity of perspectives within a community or stakeholder group.

Photovoice

Photovoice is a qualitative method that involves using photography as a tool for community empowerment and self-expression. Participants are given cameras and asked to take photos that represent their experiences or perspectives on a program or issue. These photos can then be used to facilitate group discussions and generate qualitative data on program outcomes and impacts.

Case Studies

Case studies involve gathering detailed qualitative data through interviews, document analysis, and observation, and can provide a more in-depth understanding of a specific program component. They can be used to explore the experiences and perspectives of program participants or stakeholders and can provide insights into program outcomes and impacts.

Qualitative methodologies in M&E are useful for identifying complex and context-dependent factors that contribute to program outcomes, and for exploring stakeholder perspectives and experiences. Qualitative methodologies can provide valuable insights into the ways in which programs can be improved and can complement quantitative methodologies in providing a comprehensive understanding of program impact and effectiveness.

5. Quantitative Methodologies in Monitoring and Evaluation (M&E)

Quantitative methodologies are commonly used in monitoring and evaluation (M&E) to measure program outcomes and impact in a systematic and objective manner. Quantitative methodologies involve collecting numerical data that can be analyzed statistically to provide insights into program effectiveness, efficiency, and impact. Here are some common quantitative methodologies used in M&E:

Surveys

Surveys involve collecting data from a large number of individuals using standardized questionnaires or surveys. Surveys can provide quantitative data on people’s attitudes, opinions, behaviors, and experiences, and can help to measure program outcomes and impact.

Baseline and Endline Surveys

Baseline and endline surveys are quantitative surveys conducted at the beginning and end of a program to measure changes in knowledge, attitudes, behaviors, or other outcomes. These surveys can provide a snapshot of program impact and allow for comparisons between pre- and post-program data.
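
A minimal sketch of comparing such pre- and post-programme data might use a paired t-test, one of several reasonable choices. The scores below are made up, the respondents are assumed to be the same at both points, and SciPy is assumed to be available:

```python
from statistics import mean
from scipy import stats  # assumes SciPy is installed

# Hypothetical knowledge-test scores for the same eight respondents
baseline = [52, 48, 61, 55, 47, 58, 50, 53]   # before the programme
endline  = [63, 59, 70, 66, 58, 72, 61, 65]   # after the programme

t_stat, p_value = stats.ttest_rel(endline, baseline)  # paired test: same people measured twice
print(f"Baseline mean: {mean(baseline):.1f}, endline mean: {mean(endline):.1f}")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```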

Randomized Controlled Trials (RCTs)

RCTs are a rigorous quantitative evaluation method that involves randomly assigning participants to a treatment group (receiving the program) and a control group (not receiving the program), and comparing outcomes between the two groups. RCTs are often used to assess the impact of a program.

Cost-Benefit Analysis

Cost-benefit analysis is a quantitative method used to assess the economic efficiency of a program or intervention. It involves comparing the costs of the program with the benefits or outcomes generated, and can help determine whether a program is cost-effective or not.
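
As a rough illustration of the arithmetic involved (the figures and the three-year horizon below are invented), costs can be compared with discounted benefits to give a net benefit and a benefit-cost ratio:

```python
# Hypothetical programme figures
total_cost = 120_000
annual_benefits = [40_000, 60_000, 60_000]   # estimated benefits over three years
discount_rate = 0.05

present_value = sum(
    benefit / (1 + discount_rate) ** (year + 1)
    for year, benefit in enumerate(annual_benefits)
)
net_benefit = present_value - total_cost
bcr = present_value / total_cost

print(f"Present value of benefits: {present_value:,.0f}")
print(f"Net benefit: {net_benefit:,.0f}")
print(f"Benefit-cost ratio: {bcr:.2f} ({'benefits exceed costs' if bcr > 1 else 'costs exceed benefits'})")
```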

Performance Indicators

Performance indicators are quantitative measures used to track progress toward program goals and objectives. These indicators can be used to assess program effectiveness, efficiency, and impact, and can provide regular feedback on program performance.
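
For instance, a small sketch (with hypothetical indicators, targets and a 90% threshold chosen purely for illustration) of checking achievement against targets and flagging indicators that need attention:

```python
# Hypothetical performance indicators with targets and actuals to date
indicators = [
    {"name": "Households reached",         "target": 1_000, "actual": 940},
    {"name": "Training sessions held",     "target": 24,    "actual": 18},
    {"name": "Follow-up visits completed", "target": 200,   "actual": 205},
]

for ind in indicators:
    achievement = ind["actual"] / ind["target"]
    status = "on track" if achievement >= 0.9 else "needs attention"
    print(f"{ind['name']}: {achievement:.0%} of target -> {status}")
```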

Statistical Analysis

Statistical analysis involves using quantitative data and statistical methods to analyze data gathered from various evaluation methods, such as surveys or observations. Statistical analysis can provide a more rigorous assessment of program outcomes and impacts and help identify patterns or relationships between variables.

Experimental designs

Experimental designs involve manipulating one or more variables and measuring the effects of the manipulation on the outcome of interest. Experimental designs are useful for establishing cause-and-effect relationships between variables, and can help to measure the effectiveness of program interventions.

Quantitative methodologies in M&E are useful for providing objective and measurable data on program outcomes and impact, and for identifying patterns and trends in program performance. Quantitative methodologies can provide valuable insights into the effectiveness, efficiency, and impact of programs, and can complement qualitative methodologies in providing a comprehensive understanding of program performance.

6. What are the M&E Methods?

Monitoring and Evaluation (M&E) methods encompass the tools, techniques, and processes used to assess the performance of projects, programs, or policies.

These methods are essential in determining whether the objectives are being met, understanding the impact of interventions, and guiding decision-making for future improvements. M&E methods fall into two broad categories: qualitative and quantitative, often used in combination for a comprehensive evaluation.

7. Choosing the Right Evaluation Methodology: Factors and Criteria

Choosing the right evaluation methodology is essential for conducting an effective and meaningful evaluation. Here are some factors and criteria to consider when selecting an appropriate evaluation methodology:

  • Evaluation goals and objectives: The evaluation goals and objectives should guide the selection of an appropriate methodology. For example, if the goal is to explore stakeholders’ perspectives and experiences, qualitative methodologies such as interviews or focus groups may be more appropriate. If the goal is to measure program outcomes and impact, quantitative methodologies such as surveys or experimental designs may be more appropriate.
  • Type of data required: The type of data required for the evaluation should also guide the selection of the methodology. Qualitative methodologies collect non-numerical data, such as words, images, or observations, while quantitative methodologies collect numerical data that can be analyzed statistically. The type of data required will depend on the evaluation goals and objectives.
  • Resources available: The resources available, such as time, budget, and expertise, can also influence the selection of an appropriate methodology. Some methodologies may require more resources, such as specialized expertise or equipment, while others may be more cost-effective and easier to implement.
  • Accessibility of the subject being evaluated: The accessibility of the subject being evaluated, such as the availability of stakeholders or data, can also influence the selection of an appropriate methodology. For example, if stakeholders are geographically dispersed, remote data collection methods such as online surveys or video conferencing may be more appropriate.
  • Ethical considerations: Ethical considerations, such as ensuring the privacy and confidentiality of stakeholders, should also be taken into account when selecting an appropriate methodology. Some methodologies, such as interviews or focus groups, may require more attention to ethical considerations than others.

Overall, choosing the right evaluation methodology depends on a variety of factors and criteria, including the evaluation goals and objectives, the type of data required, the resources available, the accessibility of the subject being evaluated, and ethical considerations. Selecting an appropriate methodology can ensure that the evaluation is effective, meaningful, and provides valuable insights into program performance and impact.

8. Our Conclusion on Evaluation Methodologies

It’s worth noting that many evaluation methodologies use a combination of quantitative and qualitative methods to provide a more comprehensive understanding of program outcomes and impacts. Both qualitative and quantitative methodologies are essential in providing insights into program performance and effectiveness.

Qualitative methodologies focus on gathering data on the experiences, perspectives, and attitudes of individuals or communities involved in a program, providing a deeper understanding of the social and cultural factors that influence program outcomes. In contrast, quantitative methodologies focus on collecting numerical data on program performance and impact, providing more rigorous evidence of program effectiveness and efficiency.

Each methodology has its strengths and limitations, and a combination of both qualitative and quantitative approaches is often the most effective in providing a comprehensive understanding of program outcomes and impact. When designing an M&E plan, it is crucial to consider the program’s objectives, context, and stakeholders to select the most appropriate methodologies.

Overall, effective M&E practices require a systematic and continuous approach to data collection, analysis, and reporting. With the right combination of qualitative and quantitative methodologies, M&E can provide valuable insights into program performance, progress, and impact, enabling informed decision-making and resource allocation, ultimately leading to more successful and impactful programs.



How to Do a Project Evaluation (With Tools)

You can evaluate a project to determine if it achieved its objectives, impacts, and overall goals. Here are the steps to do it.

Project managers evaluate their projects to see if the projects meet the company and team's goals and objectives. Evaluating projects after completion can help you better understand the impact and identify areas that need improvement.

Project evaluation is vital to any project since it can provide insights and lessons for future projects. Once you complete the project evaluation process, sharing your findings with stakeholders and your team members is essential. While there are many methods to evaluate a project, here are the basic steps that you need to take, regardless of the way you choose.

1. Develop an Evaluation Plan

As you create your project, you should consider the objectives and goals you want to achieve and share them with your team, providing them with a clear path forward. The goals and objectives you determine can help you choose the project evaluation method you want to use.

For example, if the project goal is to increase team productivity, you may want to review data regarding task completion as a tool to evaluate productivity rates. You might be interested in learning how to set project milestones for increased productivity.

2. Select Source of Evaluation & Prepare for Implementation

Start by choosing how you want to collect the data for the evaluation. You can use interviews, focus groups, surveys, case studies, or observation. Choose an evaluation tool that suits the people you're looking to get information from, which means identifying the people you want to include.

Whether you plan on interviewing or surveying people, you must prepare the questions ahead. If you use a focus group, you must send invitations, select a date, and list questions.

After you choose your source of evaluation and are ready for implementation, you should share a detailed schedule and delegate duties, so your team is prepared for the next step. If you're uncertain about establishing who does what, you may be interested in learning the best tips for defining team roles and responsibilities.

3. Implement Project Evaluation

While the project is in progress, monitoring all the elements is critical to ensure it is within budget and running on schedule. It is helpful to create status reports you share with the team, so everyone is clear on the project status.

The implementation process differs based on the evaluation tools and methods you choose. You should focus on:

  • Pre-project evaluation : This is where you develop project goals and objectives that you will use to determine the project's viability.
  • Ongoing evaluation : Monitor details like the budget, quality of work, and schedule.
  • Post-project evaluation : Measure the project's success based on outcomes and impact.

4. Review the Data

Once you gather the data for evaluation, it's time to analyze it for weaknesses, strengths, and trends. It's also an opportunity to verify if the project came close to meeting the objectives and goals set out at the start. You can use the team's objectives and goals to translate the data received for the next step.

5. Create a Report for Your Team

After you complete your data analysis, it's necessary to summarize the evaluation results. You should choose a format that meets the needs of the reader, which are your stakeholders and team members.

After completing every project, providing a report on your project evaluation is a valuable habit. It can bring attention to areas that need improvement, feature intentional and unintentional impacts, and determine whether or not the team met its goals and objectives. Before writing your report, you might be interested in learning the best types of project management reports you should know.

6. Discuss Next Steps

The final step in the project evaluation process is discussing the next steps based on the findings. It's essential to initiate a discussion about the results of the evaluation.

A discussion can inspire innovative ideas to improve the team, strengthen communication, and prompt suggestions on improving future projects. If you want your report to stand out to stakeholders and get your team's attention, you may want to see how you can incorporate the best tips to make your project reports stand out.

Tools You Can Use for Project Evaluation

The following are tools that you can use for your project evaluation. You may find that some are more suitable for your project than others.

1. Surveys

Surveys are an evaluation tool that allows you to determine how a group of people feel before a project starts and then survey them again afterward. This evaluation process can measure various things, including self-esteem, preferences, achievements, and attitudes.

You should survey members of your target audience. You get to see if people's feelings shift positively after the completion of the project; if that was the project goal, then you know you achieved it. You can survey in numerous ways, including by phone, paper, or electronically.

2. Observation

Observation allows you to assess or monitor a situation or process while documenting what the observer sees and hears. Seeing behaviors and actions in a natural context can provide insight and understanding about the object you are evaluating. When using observation, it's critical to use a consistent and systemic approach as you gather data.

3. Case Studies

Case studies can provide more depth than other evaluation tools. When you do a case study, you focus on a particular group within a community, village, person, or a subset of a broader group. You can use case studies to illustrate trends or show stark differences.

A case study analysis requires pulling critical themes and results to help predict future trends, highlight hidden issues, or provide an understanding of an essential issue with greater clarity.

4. Interviews

Interviews can be a qualitative or quantitative evaluation tool, depending on how you use them. The process involves a conversation between an interviewer and the person answering the questions.

You can use interviews to collect narrative information and data to comprehend better a respondent's unique worldview, perspectives, and opinions. There are different types of interview techniques and approaches, including:

  • Structured interviews: These are quantitative investigations, often survey-based research with standardized questions in a questionnaire format. The responses usually take the form of a multiple-choice list and are not open-ended.
  • Semi-structured interviews: As the name implies, this is a mixed framework of general themes and pre-established questions that can be adapted to the context of the interview session. The interviewer is free to omit questions and vary their order, and the questions are a mix of open- and close-ended.
  • Unstructured interviews: This format is informal or conversational, and all the questions are open-ended.

5. Focus Groups

Focus groups are group interviews you design to explore people's attitudes about a particular subject. They are an excellent way to discover the most common issues for the group or community when information is limited.

To do a focus group, you must ensure you have a capable facilitator and that you've planned it well. Focus groups can deliver detailed information on issues that concern a community or a specific demographic.

Are You Ready for Your Next Project Evaluation?

Evaluations are a vital part of any project, and they help you confirm if you've met your project goals and objectives and can help you establish best practices for future projects. If you don't review what's working and what isn't after each project, you leave yourself open to repeating costly mistakes.

If you're looking for a way to streamline your future projects, you may consider using project management software if you don't already. You may want to read some information on how to get started if trying new software feels intimidating.


Designing an Evaluation Methodology for Your Project

By Eva Wieners


Monitoring and evaluation of project implementation are of key importance if your organization is working with donations from a third party. It creates transparency and trust and also helps your own organization to carry out good projects and to be able to learn from experiences from the past. But how exactly do you evaluate your projects?

Without proper planning and design of an evaluation strategy and methodology, it is going to be very difficult to be able to present good results. Even though normally the evaluation is the last step in the project cycle, you should design your strategy in the very first step to be able to collect the appropriate data throughout the project or program.

In the following paragraphs, we will describe in detail what you need to keep in mind while designing your evaluation methodology and how you can actually use it to your advantage when fundraising for your NGO.

What is evaluation?

To be able to design an evaluation methodology, you must clearly understand what the term evaluation means and how you can use it to your advantage.

Evaluation basically describes the analysis of the project’s success after the project cycle is finished. Based on the data collected in a baseline study, you describe and analyze the achievements that have been reached through your project activities. At the same time, you also name and look in detail at possible problems and mistakes that occurred during that time so you can learn from these experiences in the future. Basically, you compare the planned results with the actual results and analyze possible disparities.

Figure 1: The role of evaluation in the project cycle

As you can observe in figure 1, evaluation is an important step in the project cycle and makes sure lessons are incorporated into the planning of future projects.

Why is it important?


Even if your organization is small and you might feel at first that an evaluation is not necessary, it actually has many benefits. Besides the effect on the donor described above, you also get to collect a lot of data that can be used in the future for applications, information brochures, or similar purposes. If you can clearly name the effect that your past projects had on the communities you are working in, it will be much easier to write new applications based on that and to establish new relationships with other donors.

Of course, your evaluation will look different if you carry out a million-dollar project across several countries rather than a small program in one village. That is why, in the first step of project planning, you should take the time to design an appropriate and practicable evaluation methodology for your project.

What does “designing an evaluation methodology” mean?

As stated above, you will have very different expectations for your evaluation methodology if you have a project across several countries with a huge budget than if you have a small project with very limited resources. Of course, also your donors will have very different expectations.

Big organizations often outsource the evaluation to specialized organizations that have their own framework. As every project is different, there is no real blueprint for how to evaluate. While you don't have to reinvent the wheel every time you start a new project, you should make sure that your evaluation methodology is adjusted and appropriate for the purpose that should be achieved.

Designing your evaluation methodology basically means assigning certain resources to it, determining the expected outcomes, and accommodating it in the project planning. You also determine the methods to be used to achieve results and the timeframe. We will describe the details of this process in the following paragraphs.

Once you have designed your evaluation methodology, you can also share it with your donor. This way, you let your donor know clearly what they can expect from the evaluation in the end and what will not be included. It is a very good way to manage expectations and make sure from the start that you are on the same page. By sharing your methodology at an early stage, you give your donor the chance to make remarks and request the inclusion of certain measures if needed, and you avoid misunderstandings at the end. Sharing a well-designed evaluation methodology with your donor is one more step towards transparency and good practice.


Designing an evaluation methodology – Important steps

There are several important steps to take into consideration while designing an evaluation methodology appropriate for your project or program. If possible, these steps should be taken together by the people responsible for the evaluation and those responsible for the project, to result in a well-informed and realistic strategy.

It is of key importance that the evaluation methodology is designed during the planning stage of the project, so that sufficient resources can be assigned to it and the necessary data can be collected throughout the project cycle. With a strategy in place and assigned roles, the evaluation and the connected data collection can take place on an ongoing basis and will not be an overwhelming task at the end.

Figure 2: Necessary steps for the design of the evaluation methodology

As can be seen in Figure 2, the steps for defining an evaluation methodology are the following: Defining the purpose, defining the scope, describing the intervention logic, formulating evaluation questions, defining methods and data, and assigning necessary resources to the evaluation. In the following paragraphs, we will describe these tasks in detail.

The purpose

To be able to design an appropriate evaluation methodology, you must be very clear about its purpose. Why is the evaluation carried out and who is it for? Does your donor require you to do the evaluation? Do you want to evaluate your projects on an internal basis to see the potential for improvement? Is it both?

The purpose of the evaluation mostly sets the bar for its scope and form. Many times, the donor already has specific expectations that need to be met and specific regulations that need to be fulfilled. Sometimes even legal requirements come into play. The clearer you are about the purpose of your evaluation, the easier it is to define its form and the appropriate way to go about it.

The scope

The second thing you should take into consideration while designing your evaluation methodology is the scope of the evaluation. Deciding on the scope means deciding which interventions will be evaluated, which geographical area will be covered and which timeframe will be covered.

If you are working on a very small project, these questions are normally easy to answer. If your project just comprises a few interventions, a defined geographical area, and a limited timeframe, your evaluation should cover the entire project. If you already know though that the evaluation of certain aspects will be particularly challenging, it might be a good idea to exclude them to adjust the donor’s expectations towards the final evaluation. This might apply if your project just aims to kick start a process that will only show its impact in the long term (after your evaluation would take place) or if you already know that several measures out of your control will probably make it difficult to evaluate your own project activities. Be clear about it though, so that the donors know what to expect or have the opportunity to object if they do not agree with your approach.

In bigger projects with a range of measures and geographical focal areas, it might be a good idea to focus on some. If the project is embedded in a bigger program, it might make sense to focus on areas that have not been evaluated lately or that have reported problems and challenges in the past. Again though, make sure that you follow your legal obligations and that your donor agrees with your approach.

The intervention logic

In this step, you should be able to describe the interventions planned, their potential impact, and their interactions during your project phase. You should also take into consideration external actions that might have an influence on the implementation of your project, whether positive or negative.

Writing down or making a diagram of the intervention logic makes sure that you clearly understand how the project is supposed to work and what was expected in the beginning. The intervention logic is dynamic and might change during the course of the project, but these changes can be documented and give a good insight into areas where plans and expectations needed to be adjusted to reality.

Evaluation Questions 

Once you have spelled out in detail how your project is planned to work and what the expected impact is, you are in a position to formulate good evaluation questions. Evaluation questions are those questions that are supposed to be answered by your evaluation. They give you the opportunity to specify what you actually want to analyze in your evaluation.

If you word your questions carefully, you can make sure a critical analysis can take place. Be careful not to end up with simple Yes/No questions, which seem easy to answer, but will give almost no insight in the end and thus will have very little additional value for the donor or your organization.

At the same time, you should try to find questions that you will actually be able to answer. While project applications are full of promised impact, in reality it is quite difficult to measure impact. To be able to assess the impact of your project, you would also need a lot of data from outside your project to be sure that no external events influenced its outcome. Even with a huge dataset, it is almost impossible to be 100% sure of the impact your interventions had, because things like policy changes, shifts in general opinion, or other events might play a role that you are not even aware of.

Sometimes that means breaking one issue down into several questions. These questions can be quantitative (answered by hard data, numbers, etc.) or qualitative (opinions, perceptions, etc.). As shown in Figure 3 below, you have to find the middle ground between being too broad and too narrow.

The question on the left (impact on education) is too broad; it would not be possible to answer it within a project evaluation. An impact evaluation on education would have to take into consideration many other factors such as general shifts in attitudes, all other initiatives in the sector, policy changes, etc. Even if all this data were available, it would be very difficult to quantify the impact of your project compared with other interventions. If you tried to answer a question of this scope, donors would know that your evaluation must be flawed. Be very careful with the use of the word “impact” in your evaluation!


Figure 3: Comparison between different evaluation questions. (own representation)

The question on the right, in comparison, is too narrow (number of schools). It could be answered with a simple number and would not give any further information about the quality of education or the actual use of these schools. It leaves no room for critical analysis and thus would not be a good evaluation question.

The questions in the middle show one way to combine quantitative and qualitative questions that lead to a more critical assessment of the project activities while still giving a good picture of what the project has actually achieved.

Of course, the depth of these questions will vary according to the scope of your evaluation.
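
If your team keeps its evaluation plan in a structured, machine-readable form, the balance described above can be made explicit. The sketch below is a minimal, hypothetical Python example (the field names and sample questions are invented for illustration, not taken from this article); it records each evaluation question with its type, indicators and data sources, and flags questions that read like simple yes/no questions.

```python
# Hypothetical sketch: a structured list of evaluation questions.
# Field names and example content are illustrative, not taken from the article.
from dataclasses import dataclass, field

@dataclass
class EvaluationQuestion:
    text: str                   # the question the evaluation should answer
    kind: str                   # "quantitative" or "qualitative"
    indicators: list = field(default_factory=list)    # data that will answer it
    data_sources: list = field(default_factory=list)  # where that data comes from

questions = [
    EvaluationQuestion(
        text="How many additional classrooms were built and equipped?",
        kind="quantitative",
        indicators=["number of classrooms", "equipment delivered"],
        data_sources=["construction reports"],
    ),
    EvaluationQuestion(
        text="How do teachers and parents perceive the usefulness of the new classrooms?",
        kind="qualitative",
        indicators=["perceived usefulness", "reported barriers"],
        data_sources=["interviews", "focus groups"],
    ),
]

# Quick sanity check: flag questions that read like simple yes/no questions.
for q in questions:
    if q.text.strip().lower().startswith(("is ", "are ", "was ", "were ", "did ", "does ")):
        print(f"Possible yes/no question, consider rewording: {q.text}")
```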

Methods and data

Once you have defined the appropriate evaluation questions, the next step is to think about the necessary data and the methods to analyze that data. There are plenty of tools and instruments available for conducting an evaluation, but to decide which ones are appropriate you have to take into consideration the availability of data, the quality of your data, and the resources available for the evaluation. Some tools require very detailed input data, so if that data is not available, you cannot use them. Some instruments are very time-intensive, so if you did not allocate sufficient time and manpower for the evaluation, those instruments are also not a good fit.
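
One way to make this trade-off concrete is to list each candidate method with the data and effort it requires and filter out those the project cannot support. The following Python sketch is a hypothetical illustration under assumed method names, fields and thresholds, not a tool mentioned in the article.

```python
# Hypothetical sketch: filter candidate evaluation methods by data availability and effort.
candidate_methods = [
    {"name": "household survey", "needs_data": ["baseline survey"], "person_days": 30},
    {"name": "key informant interviews", "needs_data": [], "person_days": 10},
    {"name": "cost-effectiveness analysis", "needs_data": ["detailed cost records"], "person_days": 15},
]

available_data = {"baseline survey"}   # data the project actually has
effort_budget = 20                     # person-days set aside for the evaluation

feasible = [
    m for m in candidate_methods
    if set(m["needs_data"]) <= available_data and m["person_days"] <= effort_budget
]
for m in feasible:
    print(f"Feasible method: {m['name']} ({m['person_days']} person-days)")
```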

Allocating resources to your evaluation strategy

It is also important not to forget to allocate resources to your evaluation methodology so that it can actually be carried out. Often, the evaluation does not get enough attention when resources are assigned and people do not have enough designated time to carry it out. Particularly in smaller projects, the project manager sometimes has to do the evaluation “on the side” of his or her normal tasks. This poses several risks: not enough time is set aside for an important task, and the project manager might be biased.

Setting aside manpower and resources for the evaluation from the first project phase demonstrates responsible behavior on the organization’s part and helps ensure that the evaluation will be carried out professionally.

Designing an evaluation methodology

Once you have carried out the above-mentioned steps (ideally in a team), you have gathered enough information to design your evaluation methodology. You have decided which methods you will need and which data you will have to collect, and ideally you have already allotted the corresponding responsibilities to the assigned staff so that everybody knows what his or her role is in the process.
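
For teams that like to keep the resulting methodology in one place, it can be summarized as a simple structure that lists scope, questions, methods, data and responsibilities. The sketch below is a hypothetical Python example; all names, roles and figures are invented for illustration.

```python
# Hypothetical sketch: an evaluation methodology summarized as a single structure
# that can be shared with donors. All names, roles and figures are invented.
evaluation_plan = {
    "scope": {
        "interventions": ["teacher training", "classroom construction"],
        "area": "District A",
        "timeframe": "2023-2025",
    },
    "questions": ["Q1: output counts", "Q2: perceived quality of training"],
    "methods": {
        "Q1": {"method": "document review", "data": "progress reports", "responsible": "M&E officer"},
        "Q2": {"method": "focus groups", "data": "transcripts", "responsible": "external facilitator"},
    },
    "resources": {"person_days": 25, "budget_eur": 4000},
}

# Print who is responsible for answering each evaluation question.
for q, spec in evaluation_plan["methods"].items():
    print(f"{q}: {spec['method']} -> responsible: {spec['responsible']}")
```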

If you put this information together in a document, it is also a good opportunity to share it with your donors or potential donors. A well-thought-through evaluation methodology shows that you and your organization are very familiar with the working area of your project, have put a lot of thought into the design, and are able and willing to critically analyze your project interventions. It creates transparency and thus gives donors more reason to trust you and your organization. It also establishes common ground regarding expectations for the final evaluation report and gives all stakeholders the opportunity to add input if needed and desired.

Of course, designing the methodology is only the first step. Throughout the project, you have to make sure that it is actually implemented as planned and that no big problems arise. You can adjust your strategy if need be, but you should always be able to plausibly explain the reasons for the necessary adjustments to your donors and stakeholders.

About the author


Eva is based in Germany and has worked for nearly a decade with NGOs on the grassroots level in Nepal in the field of capacity development and promotion of sustainable agricultural practices. Before that, she worked in South America and Europe with different organizations. She holds a Ph.D. in geography and her field of research was sustainability and inclusion in development projects.




Research Project Evaluation—Learnings from the PATHWAYS Project Experience

Aleksander Galas

1 Epidemiology and Preventive Medicine, Jagiellonian University Medical College, 31-034 Krakow, Poland; [email protected] (A.G.); [email protected] (A.P.)

Aleksandra Pilat

Matilde Leonardi

2 Fondazione IRCCS, Neurological Institute Carlo Besta, 20-133 Milano, Italy; [email protected]

Beata Tobiasz-Adamczyk

Background: Every research project faces challenges regarding how to achieve its goals in a timely and effective manner. The purpose of this paper is to present a project evaluation methodology gathered during the implementation of the Participation to Healthy Workplaces and Inclusive Strategies in the Work Sector (the EU PATHWAYS Project). The PATHWAYS project involved multiple countries and multi-cultural aspects of re/integrating chronically ill patients into labor markets in different countries. This paper describes key project evaluation issues including: (1) purposes, (2) advisability, (3) tools, (4) implementation, and (5) possible benefits, and presents the advantages of continuous monitoring. Methods: A project evaluation tool was developed to assess structure and resources; process, management and communication; and achievements and outcomes. The project used a mixed evaluation approach and included Strengths (S), Weaknesses (W), Opportunities (O) and Threats (T) (SWOT) analysis. Results: A methodology for the evaluation of longitudinal EU projects is described. The evaluation process made it possible to highlight strengths and weaknesses and showed good coordination and communication between project partners, as well as some key issues such as: the need for a shared glossary covering areas investigated by the project, problematic issues related to the involvement of stakeholders from outside the project, and issues with timing. Numerical SWOT analysis showed improvement in project performance over time. The proportion of project partners participating in the evaluation varied from 100% to 83.3%. Conclusions: There is a need for the implementation of a structured evaluation process in multidisciplinary projects involving different stakeholders in diverse socio-environmental and political conditions. Based on the PATHWAYS experience, a clear monitoring methodology is suggested as essential in every multidisciplinary research project.

1. Introduction

Over the last few decades, a strong discussion on the role of the evaluation process in research has developed, especially in interdisciplinary or multidimensional research [ 1 , 2 , 3 , 4 , 5 ]. Despite existing concepts and definitions, the importance of the role of evaluation is often underestimated. These dismissive attitudes towards the evaluation process, along with a lack of real knowledge in this area, demonstrate why we need research evaluation and how research evaluation can improve the quality of research. Having firm definitions of ‘evaluation’ can link the purpose of research, general questions associated with methodological issues, expected results, and the implementation of results to specific strategies or practices.

Attention paid to projects’ evaluation shows two concurrent lines of thought in this area. The first is strongly associated with total quality management practices and operational performance; the second focuses on the evaluation processes needed for public health research and interventions [ 6 , 7 ].

The design and implementation of process’ evaluations in fields different from public health have been described as multidimensional. According to Baranowski and Stables, process evaluation consists of eleven components: recruitment (potential participants for corresponding parts of the program); maintenance (keeping participants involved in the program and data collection); context (an aspect of environment of intervention); resources (the materials necessary to attain project goals); implementation (the extent to which the program is implemented as designed); reach (the extent to which contacts are received by the targeted group); barriers (problems encountered in reaching participants); exposure (the extent to which participants view or read material); initial use (the extent to which a participant conducts activities specified in the materials); continued use (the extent to which a participant continues to do any of the activities); contamination (the extent to which participants receive interventions from outside the program and the extent to which the control group receives the treatment) [ 8 ].

There are two main factors shaping the evaluation process. These are: (1) what is evaluated (whether the evaluation process revolves around project itself or the outcomes which are external to the project), and (2) who is an evaluator (whether an evaluator is internal or external to the project team and program). Although there are several existing gaps in current knowledge about the evaluation process of external outcomes, the use of a formal evaluation process of a research project itself is very rare.

To define a clear evaluation and monitoring methodology, we performed different steps. The purpose of this article is to present experiences from the project evaluation process implemented in the Participation to Healthy Workplaces and Inclusive Strategies in the Work Sector (the EU PATHWAYS) project. The manuscript describes key project evaluation issues, namely: (1) purposes, (2) advisability, (3) tools, (4) implementation, and (5) possible benefits. The PATHWAYS project can be understood as a specific case study—presented through a multidimensional approach—and based on the experience associated with general evaluation, we can develop patterns of good practice which can be used in other projects.

1.1. Theoretical Framework

The first step was a clear definition of what an evaluation strategy or methodology is. The term evaluation is defined by the Cambridge Dictionary as the process of judging something’s quality, importance, or value, or a report that includes this information [ 9 ], and in a similar way by the Oxford Dictionary as the making of a judgment about the amount, number, or value of something [ 10 ]; in practice, it is frequently understood as an assessment associated with the end of an activity rather than with the process. Stufflebeam, in his monograph, defines evaluation as a study designed and conducted to assist some audience to assess an object’s merit and worth. Considering this definition, there are four categories of evaluation approaches: (1) pseudo-evaluation; (2) questions and/or methods-oriented evaluation; (3) improvement/accountability evaluation; (4) social agenda/advocacy evaluation [ 11 ].

In brief, considering Stufflebeam’s classification, pseudo-evaluations promote invalid or incomplete findings. This happens when findings are selectively released or falsified. There are two pseudo-evaluation types proposed by Stufflebeam: (1) public relations-inspired studies (studies which do not seek truth but gather information to solicit positive impressions of program), and (2) politically controlled studies (studies which seek the truth but inappropriately control the release of findings to right-to-know audiences).

The questions and/or methods-oriented approach uses rather narrow questions, which are oriented towards the operational objectives of the project. Question-oriented evaluations use specific questions driven by accountability requirements or an expert’s opinion of what is important, while method-oriented evaluations favor the technical qualities of the program/process. The general concept behind both is that it is better to ask a few pointed questions well to get information on program merit and worth [ 11 ]. In this group, one may find the following evaluation types: (a) objectives-based studies: typically focus on whether the program objectives have been achieved, from an internal perspective (by project executors); (b) accountability, particularly payment-by-results studies: stress the importance of obtaining an external, impartial perspective; (c) objective testing programs: use standardized, multiple-choice, norm-referenced tests; (d) outcome evaluation as value-added assessment: a recurrent evaluation linked with hierarchical gain score analysis; (e) performance testing: incorporates the assessment of performance (by written or spoken answers, or psychomotor presentations) and skills; (f) experimental studies: program evaluators perform a controlled experiment and contrast the outcomes observed; (g) management information systems: provide the information managers need to conduct their programs; (h) benefit-cost analysis approach: mainly sets of quantitative procedures to assess the full cost of a program and its returns; (i) clarification hearing: a trial-like evaluation in which role-playing evaluators competitively implement both a damning prosecution of a program (arguing that it failed) and a defense of the program (arguing that it succeeded); a judge then hears arguments within the framework of a jury trial and controls the proceedings according to advance agreements on rules of evidence and trial procedures; (j) case study evaluation: a focused, in-depth description, analysis, and synthesis of a particular program; (k) criticism and connoisseurship: experts in a given area perform an in-depth analysis and evaluation that could not be done in any other way; (l) program theory-based evaluation: builds on a validated theory of how programs of a certain type within similar settings operate to produce outcomes (e.g., the Health Belief Model; Predisposing, Reinforcing and Enabling Constructs in Educational Diagnosis and Evaluation together with Policy, Regulatory, and Organizational Constructs in Educational and Environmental Development—the so-called PRECEDE-PROCEED model proposed by L. W. Green; or the Stages of Change Theory by Prochaska); (m) mixed-method studies: include different qualitative and quantitative methods.

The third group of methods considered in evaluation theory is improvement/accountability-oriented evaluation approaches. Among these are the following: (a) decision/accountability-oriented studies: emphasize that evaluation should be used proactively to help improve a program and retroactively to assess its merit and worth; (b) consumer-oriented studies: the evaluator is a surrogate consumer who draws direct conclusions about the evaluated program; (c) accreditation/certification approach: an accreditation study to verify whether certification requirements have been/are fulfilled.

Finally, the social agenda/advocacy evaluation approach focuses on assessing the difference that the program is or was intended to make. The evaluation process in this type of approach works in a loop, starting with an independent evaluator who provides counsel and advice for understanding, judging and improving programs, so that evaluations serve the client’s needs. In this group, there are: (a) client-centered studies (or responsive evaluation): evaluators work with, and for the support of, diverse client groups; (b) constructivist evaluation: evaluators are authorized and expected to maneuver the evaluation to emancipate and empower involved and affected disenfranchised people; (c) deliberative democratic evaluation: evaluators work within an explicit democratic framework and uphold democratic principles in reaching defensible conclusions; (d) utilization-focused evaluation: explicitly geared to ensure that program evaluations make an impact.

1.2. Implementation of the Evaluation Process in the EU PATHWAYS Project

The idea of including the evaluation process as an integrated goal of the PATHWAYS project was determined by several factors relating to the main goal of the project, defined as a targeted intervention into existing attitudes towards occupational mobility and the reintegration into the labor market of people of working age suffering from specific chronic conditions in 12 European countries. Participating countries had different cultural and social backgrounds and different pervasive attitudes towards people suffering from chronic conditions.

The components of evaluation processes previously discussed proved helpful when planning the PATHWAYS evaluation, especially in relation to different aspects of environmental contexts. The PATHWAYS project focused on chronic conditions including: mental health issues, neurological diseases, metabolic disorders, musculoskeletal disorders, respiratory diseases, cardiovascular diseases, and persons with cancer. Within this group, the project found a hierarchy of patients and social and medical statuses defined by the nature of their health conditions.

According to the project’s monitoring and evaluation plan, the evaluation process addressed specific challenges defined by the project’s broad and specific goals and monitored the progress of implementing key components by assessing the effectiveness of consecutive steps and identifying conditions supporting contextual effectiveness. Another significant aim of the evaluation component of the PATHWAYS project was to recognize the value and effectiveness of using a purposely developed methodology consisting of a wide set of quantitative and qualitative methods. The triangulation of methods was very useful and provided the opportunity to develop a multidimensional approach to the project [ 12 ].

From the theoretical framework, special attention was paid to the explanation of medical, cultural, social and institutional barriers influencing the chance of employment of chronically ill persons in relation to the characteristics of the participating countries.

Levels of satisfaction with project participation, as well as with expected or achieved results and coping with challenges on local–community levels and macro-social levels, were another source of evaluation.

In the PATHWAYS project, the evaluation was implemented for an unusual purpose. This quasi-experimental design was developed to assess different aspects of the multidimensional project that used a variety of methods (systematic review of the literature, content analysis of existing documents, acts, data and reports, surveys at different country levels, in-depth interviews) in the different phases of the three years. The evaluation monitored each stage of the project and focused on process implementation, with the goal of improving every step of the project. The evaluation process made it possible to perform critical assessments and an in-depth analysis of the benefits and shortcomings of each specific phase of the project.

The purpose of the evaluation was to monitor the main steps of the project, including the expectations associated with the multidimensional methodological approach used by PATHWAYS partners, as well as to improve communication between partners from different professional and methodological backgrounds involved in all phases of the project, so as to avoid errors in understanding the specific steps as well as the main goals.

2. Materials and Methods

The paper describes the methodology and results gathered during the implementation of Work Package 3, Evaluation, of the Participation to Healthy Workplaces and Inclusive Strategies in the Work Sector (PATHWAYS) project. The work package was intended to maintain internal control over the course of the project to achieve timely fulfillment of tasks, milestones, and objectives by all project partners.

2.1. Participants

The project consortium involved 12 partners from 10 different European countries. These included academics (representing cross-disciplinary research, including socio-environmental determinants of health, and clinicians), institutions actively working for the integration of people with chronic and mental health problems and disability, educational bodies (working in the area of disability and focusing on inclusive education), national health institutes (for the rehabilitation of patients with functional and workplace impairments), an institution for inter-professional rehabilitation at a country level (coordinating medical, social, educational, pre-vocational and vocational rehabilitation), and a company providing patient-centered services (in neurorehabilitation). All the partners represented vast knowledge and high-level expertise in the area of interest, and all endorsed the World Health Organization’s (WHO) International Classification of Functioning, Disability and Health (ICF) and the biopsychosocial model of health and functioning. The consortium was created based on the following criteria:

  • vision, mission, and activities in the area of project purposes,
  • high level of experience in the area (supported by publications) and in doing research (being involved in international projects, collaboration with the coordinator and/or other partners in the past),
  • being able to get broad geographical, cultural and socio-political representation from EU countries,
  • representation of different stakeholder types in the area.

2.2. Project Evaluation Tool

The tool development process involved the following steps:

  • (1) Review definitions of ‘evaluation’ and adopt the one that best fits the reality of the public health research area;
  • (2) Review evaluation approaches and decide on the content which should be applicable in public health research;
  • (3) Create items to be used in the evaluation tool;
  • (4) Decide on implementation timing.

According to the PATHWAYS project protocol, an evaluation tool for the internal project evaluation was required to collect information about: (1) structure and resources; (2) process, management and communication; (3) achievements and/or outcomes; and (4) SWOT analysis. A mixed-methods approach was chosen. The specific purposes and approaches of the evaluation process are presented in Table 1.

Table 1. Evaluation purposes and approaches adopted in the PATHWAYS project.

* Open ended questions are not counted here.

The tool was prepared in several steps. In the section assessing structure and resources, there were questions about the number of partners, professional competences, assigned roles, human, financial and time resources, defined activities and tasks, and the communication plan. The second section, process, management and communication, collected information about the coordination process, consensus level, quality of communication among coordinators, work package leaders, and partners, whether the project was carried out according to plan, involvement of target groups, usefulness of developed materials, and any difficulties in project realization. Finally, the achievements and outcomes section gathered information about project-specific activities such as public awareness raising, stakeholder participation and involvement, whether planned outcomes (e.g., milestones) were achieved, dissemination activities, and opinions on whether project outcomes met the needs of the target groups. Additionally, it was decided to implement SWOT analysis as a part of the evaluation process. SWOT analysis derives its name from the evaluation of Strengths (S), Weaknesses (W), Opportunities (O), and Threats (T) faced by a company, industry or, in this case, a project consortium. SWOT analysis comes from the business world and was developed in the 1960s at Harvard Business School as a tool for improving management strategies among companies, institutions, or organizations [ 13 , 14 ]. However, in recent years, SWOT analysis has been adapted in the context of research to improve programs or projects.

For a better understanding of SWOT analysis, it is important to highlight that Strengths and Weaknesses are internal features, which are considered controllable. Strengths refer to aspects inside the project such as the capabilities and competences of partners, whereas weaknesses refer to aspects which need improvement, such as resources. Conversely, Opportunities and Threats are considered outside factors and uncontrollable [ 15 ]. Opportunities are maximized to fit the organization’s values and resources, and threats are the factors that the organization is not well equipped to deal with [ 9 ].

The PATHWAYS project members participated in SWOT analyses every three months. They answered four open questions about the strengths, weaknesses, opportunities, and threats identified in the evaluated period (the last three months). They were then asked to rate those dimensions on a 10-point scale. The sample included results from nine evaluated periods from partners from ten different countries.

The tool for the internal evaluation of the PATHWAYS project is presented in Appendix A .

2.3. Tool Implementation and Data Collection

The PATHWAYS ongoing evaluation took place at three-month intervals. It consisted of online surveys, and every partner assigned a representative who was expected to have good knowledge of the project’s progress. Structure and resources were assessed only twice, at the beginning (3rd month) and at the end (36th month) of the project. The process, management and communication questions, as well as the SWOT analysis questions, were asked every three months. The achievements and outcomes questions started after the first year of implementation (i.e., from the 15th month), and some of the items in this section (results achieved, whether project outcomes met the needs of the target groups, and regular publications) were only administered at the end of the project (36th month).
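
Read as a schedule, the timing described above can be expressed programmatically. The following Python sketch only restates the wave plan from this section (assuming evaluation waves at 3-month intervals over the 36-month project); it is an illustrative reconstruction, not part of the original tool.

```python
def modules_for(month: int) -> list:
    """Which questionnaire modules are fielded at a given evaluation wave (month),
    following the timing described in the text."""
    modules = ["process, management and communication", "SWOT"]    # asked every wave
    if month in (3, 36):
        modules.append("structure and resources")                  # beginning and end only
    if month >= 15:
        modules.append("achievements and outcomes")                # from the 15th month onward
    if month == 36:
        modules.append("end-of-project outcome items")             # final wave only
    return modules

for month in (3, 15, 36):
    print(f"Month {month}: {', '.join(modules_for(month))}")
```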

2.4. Evaluation Team

The evaluation team was created from professionals with different backgrounds and extensive experience in research methodology, sociology, social research methods and public health.

The project started in 2015 and was carried out for 36 months. There were 12 partners in the PATHWAYS project, representing Austria, Belgium, the Czech Republic, Germany, Greece, Italy, Norway, Poland, Slovenia and Spain, plus a European organization. The online questionnaire was sent to all partners one week after the specified period ended, and project partners had at least two weeks to complete the survey. Eleven rounds of the survey were performed.

The participation in the consecutive evaluation surveys was 11 partners (91.7%), 12 (100%), 12 (100%), 11 (91.7%), 10 (83.3%), 11 (91.7%), 11 (91.7%), 10 (83.3%), and 11 (91.7%) until the project end. Overall, it rarely covered the whole group, which may have resulted from the lack of coercive mechanisms at the project level to compel answers to the evaluation questions.
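
The percentages above follow directly from the number of responding partners out of the 12 consortium members, for example:

```python
# Participation per wave: responding partners out of the 12 consortium partners (as reported above).
responses = [11, 12, 12, 11, 10, 11, 11, 10, 11]
rates = [round(100 * r / 12, 1) for r in responses]
print(rates)  # [91.7, 100.0, 100.0, 91.7, 83.3, 91.7, 91.7, 83.3, 91.7]
```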

3. Results

3.1. Evaluation Results Considering Structure and Resources (3rd Month Only)

A total of 11 out of 12 project partners participated in the first evaluation survey. The structure and resources of the project were not assessed by the project coordinator, so the results represent the opinions of the other 10 participating partners. The majority of respondents rated the project consortium as having at least adequate professional competencies. In total, eight to nine project partners found the human, financial and time resources ‘just right’ and the communication plan ‘clear’. More concerns were observed regarding the clarity of tasks, what was expected from each partner, and how specific project activities should be or were assigned.

3.2. Evaluation Results Considering Process, Management and Communication

Project coordination and communication processes (with the coordinator, between WP leaders, and between individual partners/researchers) were assessed as ‘good’ or ‘very good’ throughout the whole period. There were some issues, however, when it came to the realization of specific goals, deliverables, or milestones of the project.

Given the broad scope of the project and participating partner countries, we created a glossary to unify the common terms used in the project. It was a challenge, as during the project implementation there were several discussions and inconsistencies in the concepts provided ( Figure 1 ).

Figure 1. Partners’ opinions about the consensus around terms (shared glossary) in the project consortium across evaluation waves (W1—after the 3-month realization period, and at 3-month intervals thereafter).

Other issues which appeared during project implementation were the recruitment of, involvement with, and cooperation with stakeholders. There was a range of groups to be contacted and investigated during the project, including individual patients suffering from chronic conditions, patients’ advocacy groups, national governmental organizations, policy makers, employers, and international organizations. It was found that during the project the interest and involvement of the aforementioned groups was quite low and difficult to achieve, which led to some delays in project implementation (Figure 2). This was the main cause of lower percentages of what was expected to be done in the designated periods of project realization time. The issue was monitored and addressed by intensifying activities in this area (Figure 3).

Figure 2. Partners’ reports on whether the project had been carried out according to the plan (a) and the experience of any problems in the process of project realization (b) (W1—after the 3-month realization period, and at 3-month intervals thereafter).

Figure 3. Partners’ reports on an approximate estimation (in percent) of project plan implementation (what has been done according to the plan) (a) and the involvement of target groups (b) (W1—after the 3-month realization period, and at 3-month intervals thereafter).

3.3. Evaluation Results Considering Achievements and Outcomes

The evaluation process was prepared to monitor project milestones and deliverables. One of the PATHWAYS project goals was to raise public awareness surrounding the reintegration of chronically ill people into the labor market. This was assessed subjectively by the cooperating partners, and only half (six) felt they achieved complete success on that measure. The evaluation process monitored planned outcomes according to: (1) determination of strategies for awareness-raising activities, (2) assessment of employment-related needs, and (3) development of guidelines (which were planned by the project). The majority of partners completely fulfilled this task. Furthermore, the dissemination process was also carried out according to plan.

3.4. Evaluation Results from SWOT

3.4.1. Strengths

Amongst the key issues identified across all nine evaluated periods ( Figure 4 ), the “strong consortium” was highlighted as the most important strength of the PATHWAYS project. The most common arguments for this assessment were the coordinator’s experience in international projects, involvement of interdisciplinary experts who could guarantee a holistic approach to the subject, and a highly motivated team. This was followed by the uniqueness of the topic. Project implementers pointed to the relevance of the analyzed issues, which are consistent with social needs. They also highlighted that this topic concerned an unexplored area in employment policy. The interdisciplinary and international approach was also emphasized. According to the project implementers, the international approach allowed mapping of vocational and prevocational processes among patients with chronic conditions and disability throughout Europe. The interdisciplinary approach, on the other hand, enabled researchers to create a holistic framework that stimulates innovation by thinking across boundaries of particular disciplines—especially as the PATHWAYS project brings together health scientists from diverse fields (physicians, psychologists, medical sociologists, etc.) from ten European countries. This interdisciplinary approach is also supported by the methodology, which is based on a mixed-method approach (qualitative and quantitative data). The involvement of an advocacy group was another strength identified by the project implementers. It was stressed that the involvement of different types of stakeholders increased validity and social triangulation. It was also assumed that it would allow for the integration of relevant stakeholders. The last strength, the usefulness of results, was identified only in the last two evaluation waves, when the first results had been measured.

Figure 4. SWOT analysis—a summary of the main issues reported by PATHWAYS project partners.

3.4.2. Weaknesses

The survey respondents agreed that the main weaknesses of the project were time and human resources. The subject of the PATHWAYS project turned out to be very broad, and therefore the implementers pointed to insufficient human resources and inadequate time for the implementation of individual tasks, as well as of the project overall. This was related to the broad categories of chronic diseases chosen for analysis in the project. On the one hand, the implementers complained about the insufficient number of chronic diseases taken into account in the project. On the other hand, they admitted that it was not possible to cover all chronic diseases in detail. The scope of the project was reported as another weakness. In the successive waves of evaluation, the implementers more often pointed out that it was hard to cover all relevant topics.

Nevertheless, some of the major weaknesses reported during the project evaluation were methodological problems. Respondents pointed to problems with the implementation of tasks on a regular basis. For example, survey respondents highlighted the need for more open questions in the survey, that the questionnaire was too long or too complicated, that the tools were not adjusted for relevance in the national context, etc. Another issue was that the working language was English, but all tools and survey questionnaires needed to be translated into different languages, and this issue was not always considered by the Commission in terms of timing and resources. This lesson could prove useful for further projects, as well as for future collaborations.

Difficulties in involving stakeholders were reported, especially during tasks that required their active commitment, such as participation in in-depth interviews or online questionnaires. Interestingly, the international approach was considered both a strength and a weakness of the project. The implementers highlighted the complexity of making comparisons between health care and/or social care systems in different countries. The budget was also identified as a weakness by the project implementers. More funds for the partners could have helped PATHWAYS enhance dissemination and stakeholder participation.

3.4.3. Opportunities

A list of seven issues within the opportunities category reflects the positive outlook of survey respondents from the beginning of the project to its final stage. Social utility was ranked as the top opportunity. The implementers emphasized that the project could fill a gap between the existing solutions and the real needs of people with chronic diseases and mental disorders. The implementers also highlighted the role of future recommendations, which would consist of proposed solutions for professionals, employees, employers, and politicians. These advantages are strongly associated with increasing awareness of employment situations of people with chronic diseases in Europe and the relevance of the problem. Alignment with policies, strategies, and stakeholders’ interests were also identified as opportunities. The topic is actively discussed on the European and national level, and labor market and employment issues are increasingly emphasized in the public discourse. What is more relevant is that the European Commission considers the issue crucial, and the results of the project are in line with its requests for the future. The implementers also observed increasing interest from the stakeholders, which is very important for the future of the project. Without doubt, the social network of project implementers provides a huge opportunity for the sustainability of results and the implementation of recommendations.

3.4.4. Threats

Insufficient response from stakeholders was the top perceived threat selected by survey respondents. The implementers indicated that insufficient involvement of stakeholders resulted in low response rates in the research phase, which posed a huge threat for the project. The interdisciplinary nature of the PATHWAYS project was highlighted as a potential threat due to differences in technical terminology and different systems of regulating the employment of persons with reduced work capacity in each country, as well as many differences in the legislation process. Insufficient funding and lack of existing data were identified as the last two threats.

One novel aspect of the evaluation process in the PATHWAYS project was a numerical SWOT analysis. Participants were asked to score strengths, weaknesses, opportunities, and threats from 0 (no strengths, weaknesses, etc.) to 10 (a lot of strengths, weaknesses, etc.). This approach enabled us to get a subjective score of how partners perceived the PATHWAYS project itself and its performance, as well as how that perception changed over time. The data showed an increase in both strengths and opportunities and a decrease in weaknesses and threats over the course of project implementation (Figure 5).

Figure 5. Numerical SWOT, combined, over the 36-month period of project realization (W1—after the 3-month realization period, and at 3-month intervals thereafter).
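
The numerical SWOT described above lends itself to a simple per-wave summary. The Python sketch below illustrates the kind of averaging of partners’ 0–10 scores that underlies a chart like Figure 5; the records are invented example data, not the PATHWAYS results.

```python
# Hypothetical sketch: averaging partners' 0-10 SWOT scores per evaluation wave.
# The records below are invented example data, not the PATHWAYS results.
from collections import defaultdict

records = [
    # (wave, partner, dimension, score)
    (1, "P01", "S", 6), (1, "P01", "W", 5), (1, "P02", "S", 7), (1, "P02", "W", 4),
    (2, "P01", "S", 7), (2, "P01", "W", 4), (2, "P02", "S", 8), (2, "P02", "W", 3),
]

sums = defaultdict(lambda: [0, 0])  # (wave, dimension) -> [total, count]
for wave, _partner, dim, score in records:
    sums[(wave, dim)][0] += score
    sums[(wave, dim)][1] += 1

for (wave, dim), (total, count) in sorted(sums.items()):
    print(f"Wave {wave}, {dim}: mean score {total / count:.1f}")
```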

4. Discussion

The need for project evaluation was born in industry, which faced challenges regarding how to achieve market goals in a more efficient way. Nowadays, every process, including research project implementation, faces questions regarding its effectiveness and efficiency.

The challenge of research project evaluation is that the majority of research projects are described as unique, although we believe many projects face issues and challenges similar to those observed in the PATHWAYS project.

The main objectives of the PATHWAYS Project were (a) to identify integration and re-integration strategies that are available in Europe and beyond for individuals with chronic diseases and mental disorders experiencing work-related problems (such as unemployment, absenteeism, reduced productivity, stigmatization), (b) to determine their effectiveness, (c) to assess the specific employment-related needs of those people, and (d) to develop guidelines supporting the implementation of effective strategies of professional integration and reintegration. The broad area of investigation, partial knowledge in the field, diversity of determinants across European Union countries, and involvement with stakeholders representing different groups caused several challenges in the project, including:

  • problem : uncovered, challenging, demanding (how to encourage stakeholders to participate, share experiences),
  • diversity : different European regions; different determinants: political, social, cultural; different public health and welfare systems; differences in law regulations; different employment policies and issues in the system,
  • multidimensionality of research: some quantitative, qualitative studies including focus groups, opinions from professionals, small surveys in target groups (workers with chronic conditions).

The challenges to the project consequently led to several key issues, which should be taken into account during project realization:

  • partners : with their own expertise and interests; different expectations; different views on what is more important to focus on and highlight;
  • issues associated with unification : between different countries with different systems (law, work-related and welfare definitions, disability classification, others);
  • coordination : the multidimensionality of the project may have caused some partners’ research activities to move in the wrong direction (collecting data or knowledge not needed for the project purposes), and a lack of project vision in (some) partners might postpone activities through misunderstanding;
  • exchange of information : multidimensionality, the fact that different tasks were accomplished by different centers and obstacles to data collection required good communication methods and smooth exchange of information.

Identified Issues and Implemented Solutions

There were several issues identified through the semi-internal evaluation process performed during the project. Those which might be more relevant for project realization are mentioned in Table 2.

Table 2. Issues identified by the evaluation process and solutions implemented.

The PATHWAYS project included diverse partners representing different areas of expertise and activity (considering broad aspects of chronic disease, decline in functioning and disability, and their role in the labor market) in different countries and social security systems, which posed a challenge when developing a common language to achieve effective communication and a better understanding of facts and circumstances in different countries. The implementation of continuous project process monitoring, and proper adjustment, enabled the team to overcome these challenges.

The evaluation tool has several benefits. First, it covers all key areas of the research project, including structure and available resources, the run of the process, the quality and timing of management and communication, and project achievements and outcomes. Continuous evaluation of all of these areas provides in-depth knowledge about project performance. Second, the implementation of the SWOT tool provided opportunities for all project partners to share good and bad experiences, and the use of a numerical version of SWOT gave a good picture of the interrelations between strengths and weaknesses and between opportunities and threats in the project and showed changes in their intensity over time. Additionally, numerical SWOT can verify whether perception of a project improves over time (as was observed in the PATHWAYS project), showing an increase in strengths and opportunities and a decrease in weaknesses and threats. Third, the intervals at which partners were ‘screened’ by the evaluation questionnaire seem appropriate, as the process was not very demanding but was frequent enough to diagnose some issues in the project process in time.

The experiences with the evaluation also revealed some limitations. There were no coercive mechanisms for participation in the evaluation questionnaires, which may have caused a less than 100% response rate in some screening surveys. In practice, this was not a problem in the PATHWAYS project. Theoretically, however, it might lead to unrevealed problems, as partners experiencing trouble might not report it. Another point is asking the project coordinator about the quality of the consortium, which has little value (the consortium is created by the coordinator in the best achievable way, and it is hard to expect other comments, especially at the beginning of the project). Regarding the tool itself, the question "Could you give us an approximate estimation (in percent) of the project plan realization (what has been done according to the plan)?" was expected to collect information on what had been done out of what should have been done during each evaluation period, meaning that 100% was what should be done within a 3-month period in our project. This question, however, was slightly confusing at the beginning, as it was interpreted as the percentage of all tasks and activities planned for the whole duration of the project. Additionally, this question only works provided that precise, clear plans on the type and timing of tasks were allocated to the project partners. Lastly, there were some questions with very low variability in answers across evaluation surveys (mainly about coordination and communication). Our opinion is that if the project runs smoothly, one may consider such questions useless, but in more complicated projects these questions may reveal potential causes of trouble.

5. Conclusions

The PATHWAYS project experience shows a need for the implementation of structured evaluation processes in multidisciplinary projects involving different stakeholders in diverse socio-environmental and political conditions. Based on the PATHWAYS experience, a clear monitoring methodology is suggested as essential in every project, and we suggest the following steps when doing multidisciplinary research:

  • Define area/s of interest (decision maker level/s; providers; beneficiaries: direct, indirect),
  • Identify 2–3 possible partners for each area (chain sampling makes this easier and gives more knowledge about candidates; check for publications),
  • Prepare a research plan (propose, ask for supportive information, clarify, negotiate),
  • Create cross-partner groups of experts,
  • Prepare a communication strategy (communication channels, responsible individuals, timing),
  • Prepare a glossary covering all the important issues covered by the research project,
  • Monitor the project process and timing, identify concerns, troubles, causes of delays,
  • Prepare for the next steps in advance, inform project partners about the upcoming activities,
  • Summarize, show good practices, successful strategies (during project realization, to achieve better project performance).

Acknowledgments

The current study was part of the PATHWAYS project, which has received funding from the European Union’s Health Program (2014–2020), Grant Agreement no. 663474.

Appendix A. The evaluation questionnaire developed for the PATHWAYS Project.

SWOT analysis:

What are strengths and weaknesses of the project? (list, please)

What are threats and opportunities? (list, please)

Visual SWOT:

Please, rate the project on the following continua:

How would you rate:

(no strengths) 0 1 2 3 4 5 6 7 8 9 10 (a lot of strengths, very strong)

(no weaknesses) 0 1 2 3 4 5 6 7 8 9 10 (a lot of weaknesses, very weak)

(no risks) 0 1 2 3 4 5 6 7 8 9 10 (several risks, inability to accomplish the task(s))

(no opportunities) 0 1 2 3 4 5 6 7 8 9 10 (project has a lot of opportunities)

Author Contributions

A.G., A.P., B.T.-A. and M.L. conceived and designed the concept; A.G., A.P. and B.T.-A. finalized the evaluation questionnaire and participated in data collection; A.G. analyzed the data; all authors contributed to writing the manuscript. All authors agreed on the content of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Creating a Corporate Social Responsibility Program with Real Impact

  • Emilio Marti,
  • David Risi,
  • Eva Schlindwein,
  • Andromachi Athanasopoulou


Lessons from multinational companies that adapted their CSR practices based on local feedback and knowledge.

Exploring the critical role of experimentation in Corporate Social Responsibility (CSR), research on four multinational companies reveals a stark difference in CSR effectiveness. Successful companies integrate an experimental approach, constantly adapting their CSR practices based on local feedback and knowledge. This strategy fosters genuine community engagement and responsive initiatives, as seen in a mining company’s impactful HIV/AIDS program. Conversely, companies that rely on standardized, inflexible CSR methods often fail to achieve their goals, demonstrated by a failed partnership due to local corruption in another mining company. The study recommends encouraging broad employee participation in CSR and fostering a culture that values CSR’s long-term business benefits. It also suggests that sustainable investors and ESG rating agencies should focus on assessing companies’ experimental approaches to CSR, going beyond current practices to examine the involvement of diverse employees in both developing and adapting CSR initiatives. Overall, embracing a dynamic, data-driven approach to CSR is essential for meaningful social and environmental impact.

By now, almost all large companies are engaged in corporate social responsibility (CSR): they have CSR policies, employ CSR staff, engage in activities that aim to have a positive impact on the environment and society, and write CSR reports. However, the evolution of CSR has brought forth new challenges. In stark contrast to two decades ago, when the primary concern was the sheer neglect of CSR, the current issue lies in the ineffective execution of these practices. Why do some companies implement CSR in ways that create a positive impact on the environment and society, while others fail to do so? Our research reveals that experimentation is critical for impactful CSR, which has implications for both companies that implement CSR and companies that externally monitor these CSR activities, such as sustainable investors and ESG rating agencies.

  • Emilio Marti is an associate professor at the Rotterdam School of Management, Erasmus University. His research focuses on corporate sustainability with a specific focus on sustainable investing.
  • David Risi is a professor at the Bern University of Applied Sciences and a habilitated lecturer at the University of St. Gallen. His research focuses on how companies organize CSR and sustainability.
  • Eva Schlindwein is a professor at the Bern University of Applied Sciences and a postdoctoral fellow at the University of Oxford. Her research focuses on how organizations navigate tensions between business and society.
  • Andromachi Athanasopoulou is an associate professor at Queen Mary University of London and an associate fellow at the University of Oxford. Her research focuses on how individuals manage their leadership careers and make ethically charged decisions.


ORIGINAL RESEARCH article

This article is part of the research topic Infrastructure Project Management.

Influential Factors for Risk Assessment and Allocation on Complex Design-Build Infrastructure Projects: the Texas Experience (provisionally accepted)

  • 1 Rutgers, The State University of New Jersey, United States
  • 2 The University of Texas at Austin, United States


The design-build (DB) delivery method is used to deliver increasingly complex transportation infrastructure projects associated with higher uncertainty. As such, allocating risks in the contract between the owner and design-builder becomes challenging and often leads to higher initial bids, increased contingency, or claims. Learnings from implementation worldwide have underlined the need for improving risk allocation in DB contracts. Most existing studies address risk allocation mechanisms to manage contingency at the contract level. Other studies have recognized the need for owners to adapt their processes to better allocate risks in DB contracts. This study explored the influential factors for risk assessment and allocation for complex DB infrastructure projects, addressing the opportunity to improve transportation owners' risk allocation processes before the design-builder is selected and the DB contract is awarded. The objectives of this work were achieved by utilizing empirical data collected through 20 interviews with Texas Department of Transportation and private sector experts. The interview data were analyzed using inductive and axial coding. Inductive coding allowed themes to emerge without a pre-existing framework, identifying six influential factors and six pertinent risks on complex DB projects. These factors include: (i) Quality of DB teams, (ii) Level of up-front investigation, (iii) Limitations on the timing of letting, (iv) Design optimization opportunities, (v) Project-specific requirements, and (vi) Relationships with third parties. Through axial coding, the interaction and frequency between the factors and risks were also examined. The coded interactions demonstrated how the identified factors influence allocation for six pertinent risks including right-of-way acquisition, stakeholder approval, site conditions, permits and third-party agreements, railroad interaction, and utility adjustments and coordination. Findings indicate that the evaluation of these interactions can shift the risk allocation from baseline norms established by an agency to correspond to project-specific needs. In contributing to infrastructure project management research, this is the first study to examine the factors that influence risk allocation in complex DB projects and examine interactions with pertinent risks, setting the foundation for optimizing allocation based on project-specific needs. In practice, the findings presented in this study can guide owners in adapting their allocation practices,
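
The axial-coding step described in the abstract, examining how often each influential factor appears together with each pertinent risk across coded interview segments, can be illustrated with a simple co-occurrence count. The Python sketch below is a hypothetical illustration (the example segments are invented), not the authors' actual coding procedure or data.

```python
# Hypothetical sketch: counting factor-risk co-occurrences across coded interview segments.
# Factor and risk labels follow the abstract; the segments themselves are invented examples.
from collections import Counter

coded_segments = [
    {"factors": {"Level of up-front investigation"}, "risks": {"site conditions"}},
    {"factors": {"Relationships with third parties"}, "risks": {"utility adjustments and coordination"}},
    {"factors": {"Level of up-front investigation", "Project-specific requirements"}, "risks": {"site conditions"}},
]

interactions = Counter(
    (factor, risk)
    for seg in coded_segments
    for factor in seg["factors"]
    for risk in seg["risks"]
)

for (factor, risk), count in interactions.most_common():
    print(f"{factor} <-> {risk}: {count}")
```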

Keywords: design-build, risk, risk allocation, transportation infrastructure, complex projects, transportation owners

Received: 30 Oct 2023; Accepted: 08 Apr 2024.

Copyright: © 2024 Demetracopoulou, O'Brien and Khwaja. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Mx. William J. O'Brien, The University of Texas at Austin, Austin, United States


Blog UK National Screening Committee

https://nationalscreening.blog.gov.uk/2024/04/08/partnership-board-updated-on-progress-with-sma-screening-evidence-review/

Partnership board updated on progress with SMA screening evidence review


The UK National Screening Committee (UK NSC) and its partners were updated on work to review the evidence for newborn screening for spinal muscular atrophy (SMA) at the second meeting of the SMA in-service evaluation (ISE) partnership board.

The board and its 3 sub-groups are responsible for planning the ISE of newborn blood spot (NBS) screening for SMA in real world NHS services in the UK.

The meeting heard that the UK NSC had met with the National Institute for Health and Care Excellence (NICE) to ensure that NICE’s upcoming review of SMA treatment guidance is aligned with the ISE, and vice versa.

Board members, who include screening experts from the 4 UK governments and NHS, organisations with a shared interest in newborn screening for SMA, clinicians, academics, genomic experts and patient and public voice members, were reminded of the importance of declaring any actual or potential conflicts of interest. Actions to mitigate conflicts of interest, if appropriate, will be agreed in line with the UK NSC’s declarations of interest guidance.

Chairs of the 3 sub-groups then gave an overview of activity.

Clinical pathway sub-group

This group is tasked with defining the screening pathway, up to treatment. A very important part of this pathway is information for parents, the public and healthcare staff. The clinical pathway sub-group will lead on developing the information about the ISE that will be provided to these audiences, and parents, people with SMA and healthcare staff will be actively involved in developing the materials.

Sub-group chair Dr David Elliman said the group would learn from the materials already developed by various SMA support groups and from the evaluation of newborn screening for severe combined immunodeficiency (SCID) in England. This will include the need to inform parents about the nature of the ISE at the point of offering screening.

David fed back on discussions over which SMN2 copy numbers would prompt a referral and reporting of SMA to parents. This discussion followed presentations from colleagues in the Netherlands and Sweden on the referral and reporting pathways in those countries. This issue will be discussed further at the next meeting and within the laboratory sub-group.

Data and methodology sub-group

The data and methodology sub-group has contributed suggestions to a research commissioning brief on SMA newborn screening which will be publicised in due course by the National Institute for Health and Care Research’s (NIHR’s) Health Technology Assessment (HTA) programme.

The research brief will address the feasibility, clinical effectiveness and cost effectiveness of screening. The sub-group’s suggestions included that the HTA study should consider the potential use of existing data and stored blood spot samples as sources of information.

The data and methodology sub-group will also be responsible for advising on any research gaps not covered by the HTA brief.

Laboratory sub-group

The laboratory sub-group will be led by NHS England. The first meeting of the group is scheduled for early April.

Laboratory sub-group chair Professor Jim Bonham explained that the 6 screening labs which currently offer SCID testing could, without additional equipment, undertake evaluation for SMA screening if required. These labs cover approximately 60% of births in England, or around half of all babies born in the UK.

Generation study update

Dr Ellen Thomas, Deputy Chief Medical Officer at Genomics England (GEL), gave an update on progress with the Generation study, which will sequence the genomes of 100,000 newborn babies.

The GEL team is in regular contact with Prof Bonham on SMA reporting and the collection of longitudinal data to ensure that processes align.

Cost effectiveness modelling project

Work has started on developing a new, comprehensive and flexible cost-effectiveness modelling study of SMA screening for the UK screening context.

This work is being undertaken by the Sheffield Centre for Health and Related Research (ScHARR) at the University of Sheffield and the first stakeholder workshop will be held on 21 May.

The ScHARR team is keeping in close contact with NICE colleagues regarding the managed access agreements (MAAs) for SMA treatments that will impact on the modelling work.
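The blog does not describe how the ScHARR model is structured, but results from this kind of modelling are commonly summarised as an incremental cost-effectiveness ratio (ICER): the extra cost per quality-adjusted life year (QALY) gained by screening compared with not screening. The sketch below is a generic illustration of that calculation only, using entirely hypothetical figures; it is not based on the ScHARR model or any SMA data.

```python
# Illustrative incremental cost-effectiveness ratio (ICER) calculation.
# All figures are hypothetical placeholders, not outputs of the ScHARR model.

def icer(cost_new: float, cost_old: float, qaly_new: float, qaly_old: float) -> float:
    """Incremental cost per QALY gained by the new strategy over the old one."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical per-cohort totals for "screen newborns" versus "no screening".
cost_with_screening = 12_500_000.0     # programme plus treatment costs
cost_without_screening = 9_000_000.0
qalys_with_screening = 1_450.0         # total QALYs accrued by the cohort
qalys_without_screening = 1_300.0

result = icer(cost_with_screening, cost_without_screening,
              qalys_with_screening, qalys_without_screening)
print(f"ICER: £{result:,.0f} per QALY gained")   # about £23,333 with these inputs
```

A full model would, of course, simulate the screening pathway, treatment uptake and long-term outcomes before arriving at such a summary figure.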

Updates from around the UK

The partnership board also heard updates from government and NHS screening experts from across the UK. Members agreed on the importance of communications emphasising that the UK NSC’s evidence review applies to all 4 nations.

NHS colleagues stressed the need to address the challenges of delivering change at scale across the system, along with capacity and workforce issues across the whole pathway.

The lead for the Oxford/Thames Valley-based SMA newborn screening study gave an update on the progress of their study, which now also covers Southampton. Experience and lessons learnt from this study will feed into the ISE.

arXiv: Computer Science > Computation and Language

Title: How to Evaluate Entity Resolution Systems: An Entity-Centric Framework with Application to Inventor Name Disambiguation

Abstract: Entity resolution (record linkage, microclustering) systems are notoriously difficult to evaluate. Looking for a needle in a haystack, traditional evaluation methods use sophisticated, application-specific sampling schemes to find matching pairs of records among an immense number of non-matches. We propose an alternative that facilitates the creation of representative, reusable benchmark data sets without necessitating complex sampling schemes. These benchmark data sets can then be used for model training and a variety of evaluation tasks. Specifically, we propose an entity-centric data labeling methodology that integrates with a unified framework for monitoring summary statistics, estimating key performance metrics such as cluster and pairwise precision and recall, and analyzing root causes for errors. We validate the framework in an application to inventor name disambiguation and through simulation studies. Software: this https URL
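The abstract refers to estimating pairwise precision and recall from benchmark data. As a minimal illustration of those two metrics only, not of the paper's estimation framework or software, the snippet below compares a hypothetical predicted clustering of inventor records against a hypothetical benchmark clustering.

```python
from itertools import combinations

def linked_pairs(clusters):
    """Set of unordered record pairs that share a cluster."""
    return {frozenset(pair) for cluster in clusters for pair in combinations(cluster, 2)}

# Hypothetical benchmark (true) and predicted disambiguations of inventor records.
true_clusters = [{"r1", "r2", "r3"}, {"r4"}, {"r5", "r6"}]
pred_clusters = [{"r1", "r2"}, {"r3", "r4"}, {"r5", "r6"}]

true_pairs = linked_pairs(true_clusters)
pred_pairs = linked_pairs(pred_clusters)
true_positives = len(true_pairs & pred_pairs)

pairwise_precision = true_positives / len(pred_pairs)  # correct links among predicted links
pairwise_recall = true_positives / len(true_pairs)     # true links that were recovered

print(f"pairwise precision = {pairwise_precision:.2f}, pairwise recall = {pairwise_recall:.2f}")
```

Because every within-cluster record pair counts as a link, large clusters dominate pairwise metrics; the paper's entity-centric approach is aimed at estimating such metrics from representative benchmark entities without the complex pair-sampling schemes used by traditional evaluations.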
