Evaluating Research – Process, Examples and Methods

Evaluating Research

Definition:

Evaluating Research refers to the process of assessing the quality, credibility, and relevance of a research study or project. This involves examining the methods, data, and results of the research in order to determine its validity, reliability, and usefulness. Evaluating research can be done by both experts and non-experts in the field, and involves critical thinking, analysis, and interpretation of the research findings.

Research Evaluation Process

The process of evaluating research typically involves the following steps:

Identify the Research Question

The first step in evaluating research is to identify the research question or problem that the study is addressing. This will help you to determine whether the study is relevant to your needs.

Assess the Study Design

The study design refers to the methodology used to conduct the research. You should assess whether the study design is appropriate for the research question and whether it is likely to produce reliable and valid results.

Evaluate the Sample

The sample refers to the group of participants or subjects who are included in the study. You should evaluate whether the sample size is adequate and whether the participants are representative of the population under study.
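
For a rough, hedged illustration of "adequate," the classic formula for the sample size needed to estimate a proportion within a chosen margin of error can be computed directly. The sketch below uses only the Python standard library; the 95% confidence level and ±5% margin are conventional defaults, not requirements of any particular study.

    # Minimal sketch of the standard sample-size formula for a proportion:
    # n = z^2 * p * (1 - p) / e^2, which is largest at p = 0.5.
    import math
    from statistics import NormalDist

    def sample_size_for_proportion(p=0.5, margin=0.05, confidence=0.95):
        """Smallest n that estimates a proportion near p to within +/- margin."""
        z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # about 1.96 at 95%
        return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

    print(sample_size_for_proportion())  # 385 respondents for +/-5% at 95%

Formulas like this are a starting point only; the adequate size ultimately depends on the analysis planned and the effect sizes of interest.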

Review the Data Collection Methods

You should review the data collection methods used in the study to ensure that they are valid and reliable. This includes assessing both the measures and the procedures used to collect the data.
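
One concrete reliability check for a multi-item survey measure is an internal-consistency statistic such as Cronbach's alpha, sketched below with simulated responses. This is an illustrative calculation under assumed data, not a substitute for full psychometric validation.

    # Hedged sketch: Cronbach's alpha for a respondents-by-items score matrix.
    # alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    import numpy as np

    def cronbach_alpha(items):
        k = items.shape[1]                            # number of items
        item_vars = items.var(axis=0, ddof=1).sum()   # summed item variances
        total_var = items.sum(axis=1).var(ddof=1)     # variance of the sum score
        return k / (k - 1) * (1 - item_vars / total_var)

    rng = np.random.default_rng(7)
    trait = rng.normal(0, 1, (200, 1))                # simulated latent trait
    items = trait + rng.normal(0, 0.8, (200, 5))      # five noisy items
    print(f"alpha = {cronbach_alpha(items):.2f}")     # high here by construction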

Examine the Statistical Analysis

Statistical analysis refers to the methods used to analyze the data. You should examine whether the statistical analysis is appropriate for the research question and whether it is likely to produce valid and reliable results.

Assess the Conclusions

You should evaluate whether the data support the conclusions drawn from the study and whether they are relevant to the research question.

Consider the Limitations

Finally, you should consider the limitations of the study, including any potential biases or confounding factors that may have influenced the results.

Evaluating Research Methods

Common methods for evaluating research are as follows:

  • Peer review: Peer review is a process where experts in the field review a study before it is published. This helps ensure that the study is accurate, valid, and relevant to the field.
  • Critical appraisal: Critical appraisal involves systematically evaluating a study based on specific criteria. This helps assess the quality of the study and the reliability of the findings.
  • Replication: Replication involves repeating a study to test the validity and reliability of the findings. This can help identify any errors or biases in the original study.
  • Meta-analysis: Meta-analysis is a statistical method that combines the results of multiple studies to provide a more comprehensive understanding of a particular topic. This can help identify patterns or inconsistencies across studies (a minimal pooling sketch follows this list).
  • Consultation with experts: Consulting with experts in the field can provide valuable insights into the quality and relevance of a study. Experts can also help identify potential limitations or biases in the study.
  • Review of funding sources: Examining the funding sources of a study can help identify any potential conflicts of interest or biases that may have influenced the study design or interpretation of results.
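
To make the meta-analysis entry above concrete, here is a minimal sketch of inverse-variance (fixed-effect) pooling. The effect sizes and standard errors are invented for illustration; real meta-analyses also test for heterogeneity and often use random-effects models.

    # Hedged sketch: fixed-effect (inverse-variance) pooling of study results.
    # The (effect size, standard error) pairs below are invented examples.
    import math

    studies = [(0.30, 0.10), (0.18, 0.15), (0.42, 0.20)]

    weights = [1 / se ** 2 for _, se in studies]
    pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    print(f"pooled effect = {pooled:.3f}, 95% CI half-width = {1.96 * se_pooled:.3f}")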

Example of Evaluating Research

A sample research evaluation for students:

Title of the Study: The Effects of Social Media Use on Mental Health among College Students

Sample Size: 500 college students

Sampling Technique: Convenience sampling

  • Sample Size: The sample size of 500 college students is a moderate sample size, which could be considered representative of the college student population. However, it would be more representative if the sample size were larger or if a random sampling technique were used.
  • Sampling Technique: Convenience sampling is a non-probability sampling technique, which means that the sample may not be representative of the population. This technique may introduce bias into the study, since the participants are self-selected and may not be representative of the entire college student population. Therefore, the results of this study may not be generalizable to other populations.
  • Participant Characteristics: The study does not provide any information about the demographic characteristics of the participants, such as age, gender, race, or socioeconomic status. This information is important because social media use and mental health may vary among different demographic groups.
  • Data Collection Method: The study used a self-administered survey to collect data. Self-administered surveys may be subject to response bias and may not accurately reflect participants’ actual behaviors and experiences.
  • Data Analysis: The study used descriptive statistics and regression analysis to analyze the data. Descriptive statistics provide a summary of the data, while regression analysis is used to examine the relationship between two or more variables. However, the study did not provide information about the statistical significance of the results or the effect sizes.

Overall, while the study provides some insights into the relationship between social media use and mental health among college students, the use of a convenience sampling technique and the lack of information about participant characteristics limit the generalizability of the findings. In addition, the use of self-administered surveys may introduce bias into the study, and the lack of information about the statistical significance of the results limits the interpretation of the findings.
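
As an illustration of what fuller reporting would look like, here is a hypothetical re-analysis with simulated data (not the actual study's data), assuming the statsmodels library: it reports coefficient p-values and R-squared, the two items the critique above says were missing.

    # Hypothetical sketch: an OLS regression reported WITH significance and
    # an effect size. Data are simulated, not from the study being evaluated.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    hours = rng.uniform(0, 8, 500)                     # daily social media use
    score = 70 - 2.0 * hours + rng.normal(0, 10, 500)  # simulated well-being score

    model = sm.OLS(score, sm.add_constant(hours)).fit()
    print(model.params)    # intercept and slope estimates
    print(model.pvalues)   # statistical significance of each coefficient
    print(model.rsquared)  # effect size: share of variance explained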

Note: The example above is only a sample for students. Do not copy it directly into your assignment; do your own research for academic purposes.

Applications of Evaluating Research

Here are some of the applications of evaluating research:

  • Identifying reliable sources: By evaluating research, researchers, students, and other professionals can identify the most reliable sources of information to use in their work. They can determine the quality of research studies, including the methodology, sample size, data analysis, and conclusions.
  • Validating findings: Evaluating research can help to validate findings from previous studies. By examining the methodology and results of a study, researchers can determine if the findings are reliable and if they can be used to inform future research.
  • Identifying knowledge gaps: Evaluating research can also help to identify gaps in current knowledge. By examining the existing literature on a topic, researchers can determine areas where more research is needed, and they can design studies to address these gaps.
  • Improving research quality: Evaluating research can help to improve the quality of future research. By examining the strengths and weaknesses of previous studies, researchers can design better studies and avoid common pitfalls.
  • Informing policy and decision-making: Evaluating research is crucial in informing policy and decision-making in many fields. By examining the evidence base for a particular issue, policymakers can make informed decisions that are supported by the best available evidence.
  • Enhancing education: Evaluating research is essential in enhancing education. Educators can use research findings to improve teaching methods, curriculum development, and student outcomes.

Purpose of Evaluating Research

Here are some of the key purposes of evaluating research:

  • Determine the reliability and validity of research findings: By evaluating research, researchers can determine the quality of the study design, data collection, and analysis. They can determine whether the findings are reliable, valid, and generalizable to other populations.
  • Identify the strengths and weaknesses of research studies: Evaluating research helps to identify the strengths and weaknesses of research studies, including potential biases, confounding factors, and limitations. This information can help researchers to design better studies in the future.
  • Inform evidence-based decision-making: Evaluating research is crucial in informing evidence-based decision-making in many fields, including healthcare, education, and public policy. Policymakers, educators, and clinicians rely on research evidence to make informed decisions.
  • Identify research gaps: By evaluating research, researchers can identify gaps in the existing literature and design studies to address these gaps. This process can help to advance knowledge and improve the quality of research in a particular field.
  • Ensure research ethics and integrity: Evaluating research helps to ensure that research studies are conducted ethically and with integrity. Researchers must adhere to ethical guidelines to protect the welfare and rights of study participants and to maintain the trust of the public.

Characteristics of Evaluating Research

The characteristics to examine when evaluating research are as follows:

  • Research question/hypothesis: A good research question or hypothesis should be clear, concise, and well-defined. It should address a significant problem or issue in the field and be grounded in relevant theory or prior research.
  • Study design: The research design should be appropriate for answering the research question and be clearly described in the study. The study design should also minimize bias and confounding variables.
  • Sampling: The sample should be representative of the population of interest, and the sampling method should be appropriate for the research question and study design.
  • Data collection: The data collection methods should be reliable and valid, and the data should be accurately recorded and analyzed.
  • Results: The results should be presented clearly and accurately, and the statistical analysis should be appropriate for the research question and study design.
  • Interpretation of results: The interpretation of the results should be based on the data and not influenced by personal biases or preconceptions.
  • Generalizability: The study findings should be generalizable to the population of interest and relevant to other settings or contexts.
  • Contribution to the field: The study should make a significant contribution to the field and advance our understanding of the research question or issue.

Advantages of Evaluating Research

Evaluating research has several advantages, including:

  • Ensuring accuracy and validity: By evaluating research, we can ensure that the research is accurate, valid, and reliable. This ensures that the findings are trustworthy and can be used to inform decision-making.
  • Identifying gaps in knowledge: Evaluating research can help identify gaps in knowledge and areas where further research is needed. This can guide future research and help build a stronger evidence base.
  • Promoting critical thinking: Evaluating research requires critical thinking skills, which can be applied in other areas of life. By evaluating research, individuals can develop their critical thinking skills and become more discerning consumers of information.
  • Improving the quality of research: Evaluating research can help improve the quality of research by identifying areas where improvements can be made. This can lead to more rigorous research methods and better-quality research.
  • Informing decision-making: By evaluating research, we can make informed decisions based on the evidence. This is particularly important in fields such as medicine and public health, where decisions can have significant consequences.
  • Advancing the field: Evaluating research can help advance the field by identifying new research questions and areas of inquiry. This can lead to the development of new theories and the refinement of existing ones.

Limitations of Evaluating Research

Limitations of Evaluating Research are as follows:

  • Time-consuming: Evaluating research can be time-consuming, particularly if the study is complex or requires specialized knowledge. This can be a barrier for individuals who are not experts in the field or who have limited time.
  • Subjectivity: Evaluating research can be subjective, as different individuals may have different interpretations of the same study. This can lead to inconsistencies in the evaluation process and make it difficult to compare studies.
  • Limited generalizability: The findings of a study may not be generalizable to other populations or contexts. This limits the usefulness of the study and may make it difficult to apply the findings to other settings.
  • Publication bias: Research that does not find significant results may be less likely to be published, which can create a bias in the published literature. This can limit the amount of information available for evaluation.
  • Lack of transparency: Some studies may not provide enough detail about their methods or results, making it difficult to evaluate their quality or validity.
  • Funding bias: Research funded by particular organizations or industries may be biased towards the interests of the funder. This can influence the study design, methods, and interpretation of results.


15 Steps to Good Research

  • Define and articulate a research question (formulate a research hypothesis). How to Write a Thesis Statement (Indiana University)
  • Identify possible sources of information in many types and formats. Georgetown University Library's Research & Course Guides
  • Judge the scope of the project.
  • Reevaluate the research question based on the nature and extent of information available and the parameters of the research project.
  • Select the most appropriate investigative methods (surveys, interviews, experiments) and research tools (periodical indexes, databases, websites).
  • Plan the research project. Writing Anxiety (UNC-Chapel Hill) Strategies for Academic Writing (SUNY Empire State College)
  • Retrieve information using a variety of methods (draw on a repertoire of skills).
  • Refine the search strategy as necessary.
  • Write and organize useful notes and keep track of sources. Taking Notes from Research Reading (University of Toronto) Use a citation manager: Zotero or Refworks
  • Evaluate sources using appropriate criteria. Evaluating Internet Sources
  • Synthesize, analyze and integrate information sources and prior knowledge. Georgetown University Writing Center
  • Revise hypothesis as necessary.
  • Use information effectively for a specific purpose.
  • Understand such issues as plagiarism, ownership of information (implications of copyright to some extent), and costs of information. Georgetown University Honor Council Copyright Basics (Purdue University) How to Recognize Plagiarism: Tutorials and Tests from Indiana University
  • Cite properly and give credit for sources of ideas. MLA Bibliographic Form (7th edition, 2009) MLA Bibliographic Form (8th edition, 2016) Turabian Bibliographic Form: Footnote/Endnote Turabian Bibliographic Form: Parenthetical Reference Use a citation manager: Zotero or Refworks

Adapted from the Association of Colleges and Research Libraries "Objectives for Information Literacy Instruction" , which are more complete and include outcomes. See also the broader "Information Literacy Competency Standards for Higher Education."


Evaluation Research: Definition, Methods and Examples


Content Index

  • What is evaluation research?
  • Why do evaluation research?
  • Quantitative methods
  • Qualitative methods
  • Process evaluation research question examples
  • Outcome evaluation research question examples

What is evaluation research?

Evaluation research, also known as program evaluation, refers to a research purpose rather than a specific method. Evaluation research is the systematic assessment of the worth or merit of the time, money, effort, and resources spent in order to achieve a goal.

Evaluation research is closely related to, but slightly different from, more conventional social research. It uses many of the same methods used in traditional social research, but because it takes place within an organizational context, it requires team skills, interpersonal skills, management skills, and political savvy that conventional social research rarely demands. Evaluation research also requires one to keep the interests of the stakeholders in mind.

Evaluation research is a type of applied research, so it is intended to have some real-world effect. Many methods, such as surveys and experiments, can be used to do evaluation research. The process, from data collection through analysis and reporting, is rigorous and systematic, and involves collecting data about organizations, processes, projects, services, and/or resources. Evaluation research enhances knowledge and decision-making, and leads to practical applications.


Why do evaluation research?

The common goal of most evaluations is to extract meaningful information from the audience and provide valuable insights to evaluators such as sponsors, donors, client groups, administrators, staff, and other relevant constituencies. Most often, feedback is perceived as valuable if it helps in decision-making. However, evaluation research does not always create an impact that can be applied elsewhere; sometimes it fails to influence short-term decisions. It is equally true that an evaluation which initially seems to have no influence can have a delayed impact when the situation is more favorable. In spite of this, there is general agreement that the major goal of evaluation research should be to improve decision-making through the systematic utilization of measurable feedback.

Below are some of the benefits of evaluation research:

  • Gain insights about a project or program and its operations

Evaluation research lets you understand what works and what doesn’t: where you were, where you are, and where you are headed. You can find areas for improvement and identify strengths, which helps you figure out what you need to focus on and whether there are any threats to your business. You can also find out if there are currently hidden sectors in the market that are yet untapped.

  • Improve practice

It is essential to gauge your past performance and understand what went wrong in order to deliver better services to your customers. Unless communication is two-way, there is no way to improve on what you have to offer. Evaluation research gives your employees and customers an opportunity to express how they feel and whether there is anything they would like to change. It also lets you modify or adapt a practice in ways that increase its chances of success.

  • Assess the effects

After evaluating the efforts, you can see how well you are meeting objectives and targets. Evaluations let you measure if the intended benefits are really reaching the targeted audience and if yes, then how effectively.

  • Build capacity

Evaluations help you analyze the demand pattern and predict whether you will need more funds, upgraded skills, or more efficient operations. They let you find the gaps in the production-to-delivery chain and possible ways to fill them.

Methods of evaluation research

All market research methods involve collecting and analyzing data, making decisions about the validity of the information, and deriving relevant inferences from it. Evaluation research comprises planning, conducting, and analyzing the results, which includes the use of data collection techniques and the application of statistical methods.

Some popular evaluation methods are input measurement, output or performance measurement, impact or outcomes assessment, quality assessment, process evaluation, benchmarking, standards, cost analysis, organizational effectiveness, program evaluation methods, and LIS-centered methods. There are also a few types of evaluations that do not always result in a meaningful assessment, such as descriptive studies, formative evaluations, and implementation analysis. Evaluation research is concerned above all with the information-processing and feedback functions of evaluation.

These methods can be broadly classified as quantitative and qualitative methods.

Quantitative research methods are used to measure anything tangible, and their outcome is an answer to questions such as:

  • Who was involved?
  • What were the outcomes?
  • What was the price?

The best way to collect quantitative data is through surveys, questionnaires, and polls. You can also create pre-tests and post-tests, review existing documents and databases, or gather clinical data.

Surveys are used to gather the opinions, feedback, or ideas of your employees or customers, and consist of various question types. They can be conducted face-to-face, by telephone, by mail, or online. Online surveys do not require human intervention and are far more efficient and practical. You can view survey results on a dashboard and dig deeper using filter criteria based on factors such as age, gender, and location. You can also add survey logic such as branching, quotas, chained surveys, and looping to the survey questions, reducing the time needed to both create and respond to a survey, and generate reports with statistical summaries that can be readily absorbed in meetings.

Quantitative data measure the depth and breadth of an initiative, for instance, the number of people who participated in the non-profit event, the number of people who enrolled for a new course at the university. Quantitative data collected before and after a program can show its results and impact.
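
As a hedged sketch of that before-and-after logic, the example below compares simulated pre- and post-program scores for the same 40 participants using a paired t-test (via scipy). Real evaluations must also account for attrition and for changes that would have happened anyway.

    # Minimal sketch: paired t-test on simulated pre/post program scores.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    pre = rng.normal(60, 8, 40)              # scores before the program
    post = pre + rng.normal(5, 6, 40)        # scores after (gain built in)

    t, p = stats.ttest_rel(post, pre)
    print(f"mean gain = {(post - pre).mean():.1f} points, t = {t:.2f}, p = {p:.4f}")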

The accuracy of quantitative data used for evaluation research depends on how well the sample represents the population, the ease of analysis, and the consistency of the measures. Quantitative methods can fail if the questions are not framed correctly or are not distributed to the right audience. Quantitative data also provide little understanding of context and may not be apt for complex issues.

Learn more: Quantitative Market Research: The Complete Guide

Qualitative research methods are used where quantitative methods cannot solve the research problem, i.e., they are used to measure intangible values. They answer questions such as:

  • What is the value added?
  • How satisfied are you with our service?
  • How likely are you to recommend us to your friends?
  • What will improve your experience?


Qualitative data are collected through observation, interviews, case studies, and focus groups. The steps for creating a qualitative study involve examining, comparing and contrasting, and understanding patterns. Analysts draw conclusions after identifying themes, clustering similar data, and finally reducing them to points that make sense.
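
Parts of that clustering step can be mechanized, as in the toy sketch below, which tallies keyword-defined themes across interview snippets. The codebook and responses are invented, and real qualitative coding is far more interpretive than simple keyword matching.

    # Toy sketch: counting keyword-defined themes across interview responses.
    # The codebook and responses are hypothetical illustrations only.
    from collections import Counter

    codebook = {
        "anxiety": ("anxious", "worried", "stress"),
        "connection": ("friends", "community", "support"),
    }
    responses = [
        "I feel anxious when I scroll too long",
        "It helps me keep up with friends and community events",
        "Stress goes up, but so does support from friends",
    ]

    counts = Counter()
    for text in responses:
        for theme, keywords in codebook.items():
            if any(k in text.lower() for k in keywords):
                counts[theme] += 1
    print(counts)  # Counter({'anxiety': 2, 'connection': 2})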

Observations may help explain behaviors as well as the social context that is generally not discovered by quantitative methods. Observations of behavior and body language can be done by watching a participant, recording audio or video. Structured interviews can be conducted with people alone or in a group under controlled conditions, or they may be asked open-ended qualitative research questions . Qualitative research methods are also used to understand a person’s perceptions and motivations.


The strength of this method is that group discussion can provide ideas and stimulate memories, with topics cascading as the discussion occurs. The accuracy of qualitative data depends on how well contextual data explain complex issues and complement quantitative data. Qualitative data help answer “why” and “how” after “what” has been answered. Their limitations for evaluation research are that they are subjective, time-consuming, costly, and difficult to analyze and interpret.

Learn more: Qualitative Market Research: The Complete Guide

Survey software can be used for both evaluation research methods. You can use the sample questions in the examples below and send a survey in minutes using research software. Using a tool for research simplifies the whole process, from creating a survey and importing contacts to distributing the survey and generating reports that aid in research.

Examples of evaluation research

Evaluation research questions lay the foundation of a successful evaluation. They define the topics that will be evaluated. Keeping evaluation questions ready not only saves time and money, but also makes it easier to decide what data to collect, how to analyze it, and how to report it.

Evaluation research questions must be developed and agreed on in the planning stage; however, ready-made research templates can also be used.

Process evaluation research question examples:

  • How often do you use our product in a day?
  • Were approvals taken from all stakeholders?
  • Can you report the issue from the system?
  • Can you submit the feedback from the system?
  • Was each task done as per the standard operating procedure?
  • What were the barriers to the implementation of each task?
  • Were any improvement areas discovered?

Outcome evaluation research question examples:

  • How satisfied are you with our product?
  • Did the program produce intended outcomes?
  • What were the unintended outcomes?
  • Has the program increased the knowledge of participants?
  • Were the participants of the program employable before the course started?
  • Do participants of the program have the skills to find a job after the course ended?
  • Is the knowledge of participants better compared to those who did not participate in the program?



The Federal Evaluation Toolkit

Evaluation 101

What is evaluation? How can it help me do my job better? Evaluation 101 provides resources to help you answer those questions and more. You will learn about program evaluation and why it is needed, along with some helpful frameworks that place evaluation in the broader evidence context. Other resources provide helpful overviews of specific types of evaluation you may encounter or be considering, including implementation, outcome, and impact evaluations, and rapid cycle approaches.

What is Evaluation?

Heard the term "evaluation," but are still not quite sure what that means? These resources help you answer the question, "what is evaluation?," and learn more about how evaluation fits into a broader evidence-building framework.

What is Program Evaluation? A Beginner’s Guide

Program evaluation uses systematic data collection to help us understand whether programs, policies, or organizations are effective. This guide explains how program evaluation can contribute to improving program services. It provides a high-level, easy-to-read overview of program evaluation from start (planning and evaluation design) to finish (dissemination), and includes links to additional resources.

Types of Evaluation

What's the difference between an impact evaluation and an implementation evaluation? What does each type of evaluation tell us? Use these resources to learn more about the different types of evaluation, what they are, how they are used, and what types of evaluation questions they answer.

Common Framework for Research and Evaluation

The Administration for Children & Families Common Framework for Research and Evaluation (OPRE Report #2016-14). Office of Planning, Research, and Evaluation, U.S. Department of Health and Human Services. https://www.acf.hhs.gov/sites/default/files/documents/opre/acf_common_framework_for_research_and_evaluation_v02_a.pdf

Building evidence is not one-size-fits all, and different questions require different methods and approaches. The Administration for Children & Families Common Framework for Research and Evaluation describes, in detail, six different types of research and evaluation approaches – foundational descriptive studies, exploratory descriptive studies, design and development studies, efficacy studies, effectiveness studies, and scale-up studies – and can help you understand which type of evaluation might be most useful for you and your information needs.

Formative Evaluation Toolkit

Formative evaluation toolkit: A step-by-step guide and resources for evaluating program implementation and early outcomes. Washington, DC: Children’s Bureau, Administration for Children and Families, U.S. Department of Health and Human Services.

Formative evaluation can help determine whether an intervention or program is being implemented as intended and producing the expected outputs and short-term outcomes. This toolkit outlines the steps involved in conducting a formative evaluation and includes multiple planning tools, references, and a glossary. Check out the overview to learn more about how this resource can help you.

Introduction to Randomized Evaluations

Randomized evaluations, also known as randomized controlled trials (RCTs), are one of the most rigorous evaluation methods used to conduct impact evaluations to determine the extent to which your program, policy, or initiative caused the outcomes you see. They use random assignment of people/organizations/communities affected by the program or policy to rule out other factors that might have caused the changes your program or policy was designed to achieve. This in-depth resource introduces randomized evaluations in a non-technical way, provides examples of RCTs in practice, describes when RCTs might be the right approach, and offers a thorough FAQ about RCTs.
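
Here is a minimal sketch of that core logic using simulated data: randomly assign half the sample to treatment, then estimate the impact as a difference in group means. A +3-point "true effect" is built into the simulation so the estimator has something to recover; no real program data are involved.

    # Hedged sketch: random assignment plus a difference-in-means estimate.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 1000
    treated = rng.permutation(np.array([True] * (n // 2) + [False] * (n // 2)))

    outcome = rng.normal(50, 10, n) + 3.0 * treated    # true effect = +3 points
    impact = outcome[treated].mean() - outcome[~treated].mean()
    se = np.sqrt(outcome[treated].var(ddof=1) / (n // 2)
                 + outcome[~treated].var(ddof=1) / (n // 2))
    print(f"estimated impact = {impact:.2f} (SE {se:.2f})")

Because assignment is random, the two groups differ only by chance and by the program itself, which is what lets the difference in means be read as a causal effect.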

Rapid Cycle Evaluation at a Glance

Rapid Cycle Evaluation at a Glance (OPRE #2020-152). Office of Planning, Research, and Evaluation, U.S. Department of Health and Human Services. https://www.acf.hhs.gov/opre/report/rapid-cycle-evaluation-glance

Rapid Cycle Evaluation (RCE) can be used to efficiently assess implementation and inform program improvement. This brief provides an introduction to RCE, describing what it is, how it compares to other methods, when and how to use it, and includes more in-depth resources. Use this brief to help you figure out whether RCE makes sense for your program.


My Environmental Education Evaluation Resource Assistant

Evaluation: What is it and why do it?

  • Planning and Implementing an EE Evaluation
  • Step 1: Before You Get Started
  • Step 2: Program Logic
  • Step 3: Goals of Evaluation
  • Step 4: Evaluation Design
  • Step 5: Collecting Data
  • Step 6: Analyzing Data
  • Step 7: Reporting Results
  • Step 8: Improve Program
  • Related Topics
  • Sample EE Evaluations
  • Links & Resources

Evaluation. What associations does this word bring to mind? Do you see evaluation as an invaluable tool to improve your program? Or do you find it intimidating because you don't know much about it? Regardless of your perspective on evaluation, MEERA is here to help! The purpose of this introductory section is to provide you with some useful background information on evaluation.

Table of Contents

  • What is evaluation?
  • Should I evaluate my program?
  • What type of evaluation should I conduct and when?
  • What makes a good evaluation?
  • How do I make evaluation an integral part of my program?
  • How can I learn more?

Evaluation is a process that critically examines a program. It involves collecting and analyzing information about a program’s activities, characteristics, and outcomes. Its purpose is to make judgments about a program, to improve its effectiveness, and/or to inform programming decisions (Patton, 1987).

Experts stress that evaluation can:

Improve program design and implementation.

It is important to periodically assess and adapt your activities to ensure they are as effective as they can be. Evaluation can help you identify areas for improvement and ultimately help you realize your goals more efficiently. Additionally, when you share your results about what was more and less effective, you help advance environmental education.

Demonstrate program impact.

Evaluation enables you to demonstrate your program’s success or progress. The information you collect allows you to better communicate your program's impact to others, which is critical for public relations, staff morale, and attracting and retaining support from current and potential funders.

Video: Why conduct evaluations? (approx. 2 minutes) – Gus Medina, Project Manager, Environmental Education and Training Partnership

There are some situations where evaluation may not be a good idea.

Evaluations fall into one of two broad categories: formative and summative. Formative evaluations are conducted during program development and implementation and are useful if you want direction on how to best achieve your goals or improve your program. Summative evaluations should be completed once your programs are well established and will tell you to what extent the program is achieving its goals.

Within the categories of formative and summative, there are different types of evaluation.

Which of these evaluations is most appropriate depends on the stage of your program:

Formative

1. Needs Assessment: Determines who needs the program, how great the need is, and what can be done to best meet the need. An EE needs assessment can help determine what audiences are not currently served by programs and provide insight into what characteristics new programs should have to meet these audiences’ needs. For more information, a practical training module can lead you through a series of interactive pages about needs assessment.

2. Process or Implementation Evaluation: Examines the process of implementing the program and determines whether the program is operating as planned. Can be done continuously or as a one-time assessment. Results are used to improve the program. A process evaluation of an EE program may focus on the number and type of participants reached and/or determine how satisfied these individuals are with the program.

Summative

1. Outcome Evaluation: Investigates to what extent the program is achieving its outcomes. These outcomes are the short-term and medium-term changes in program participants that result directly from the program. For example, EE outcome evaluations may examine improvements in participants’ knowledge, skills, attitudes, intentions, or behaviors.

2. Impact Evaluation: Determines any broader, longer-term changes that have occurred as a result of the program. These impacts are the net effects, typically on the entire school, community, organization, society, or environment. EE impact evaluations may focus on the educational, environmental quality, or human health impacts of EE programs.

Make evaluation part of your program; don’t tack it on at the end!


Adapted from:

Norland, E. (2004, September). From education theory... to conservation practice. Presented at the Annual Meeting of the International Association for Fish & Wildlife Agencies, Atlantic City, New Jersey.

Pancer, S. M., & Westhues, A. (1989). A developmental stage approach to program planning and evaluation. Evaluation Review, 13(1), 56-77.

Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach. Thousand Oaks, CA: Sage Publications.

For additional information on the differences between outcomes and impacts, including lists of potential EE outcomes and impacts, see MEERA's Outcomes and Impacts page.

A well-planned and carefully executed evaluation will reap more benefits for all stakeholders than an evaluation that is thrown together hastily and retrospectively. Though you may feel that you lack the time, resources, and expertise to carry out an evaluation, learning about evaluation early-on and planning carefully will help you navigate the process.

MEERA provides suggestions for all phases of an evaluation. But before you start, it will help to review the following characteristics of a good evaluation (list adapted from resource formerly available through the University of Sussex, Teaching and Learning Development Unit Evaluation Guidelines and John W. Evans' Short Course on Evaluation Basics):

Good evaluation is tailored to your program and builds on existing evaluation knowledge and resources.

Your evaluation should be crafted to address the specific goals and objectives of your EE program. However, it is likely that other environmental educators have created and field-tested similar evaluation designs and instruments. Rather than starting from scratch, looking at what others have done can help you conduct a better evaluation. See MEERA’s searchable database of EE evaluations to get started.

Good evaluation is inclusive.

It ensures that diverse viewpoints are taken into account and that results are as complete and unbiased as possible. Input should be sought from all of those involved and affected by the evaluation such as students, parents, teachers, program staff, or community members. One way to ensure your evaluation is inclusive is by following the practice of participatory evaluation.

Good evaluation is honest.

Evaluation results are likely to suggest that your program has strengths as well as limitations. Your evaluation should not be a simple declaration of program success or failure. Evidence that your EE program is not achieving all of its ambitious objectives can be hard to swallow, but it can also help you learn where to best put your limited resources.

Good evaluation is replicable and its methods are as rigorous as circumstances allow.

A good evaluation is one that is likely to be replicable, meaning that someone else should be able to conduct the same evaluation and get the same results. The higher the quality of your evaluation design, its data collection methods and its data analysis, the more accurate its conclusions and the more confident others will be in its findings.

Consider doing a “best practices” review of your program before proceeding with your evaluation.

Making evaluation an integral part of your program means evaluation is a part of everything you do. You design your program with evaluation in mind, collect data on an on-going basis, and use these data to continuously improve your program.

Developing and implementing such an evaluation system has many benefits including helping you to:

  • better understand your target audiences' needs and how to meet these needs
  • design objectives that are more achievable and measurable
  • monitor progress toward objectives more effectively and efficiently
  • learn more from evaluation
  • increase your program's productivity and effectiveness

To build and support an evaluation system:

Couple evaluation with strategic planning.

As you set goals, objectives, and a desired vision of the future for your program, identify ways to measure these goals and objectives and how you might collect, analyze, and use this information. This process will help ensure that your objectives are measurable and that you are collecting information that you will use. Strategic planning is also a good time to create a list of questions you would like your evaluation to answer.

Revisit and update your evaluation plan and logic model

(See Step 2) to make sure you are on track. Update these documents on a regular basis, adding new strategies, changing unsuccessful strategies, revising relationships in the model, and adding unforeseen impacts of an activity (EMI, 2004).

Build an evaluation culture

by rewarding participation in evaluation, offering evaluation capacity building opportunities, providing funding for evaluation, communicating a convincing and unified purpose for evaluation, and celebrating evaluation successes.

The following resource provides more depth on integrating evaluation into program planning:

Best Practices Guide to Program Evaluation for Aquatic Educators (PDF). Recreational Boating and Fishing Foundation. (2006).

Chapter 2 of this guide, “Create a climate for evaluation,” gives advice on how to fully institutionalize evaluation into your organization. It describes features of an organizational culture, and explains how to build teamwork, administrative support and leadership for evaluation. It discusses the importance of developing organizational capacity for evaluation, linking evaluation to organizational planning and performance reviews, and unexpected benefits of evaluation to organizational culture.

If you want to learn more about how to institutionalize evaluation, check out the following resources on adaptive management. Adaptive management is an approach to conservation management that is based on learning from systematic, on-going monitoring and evaluation, and involves adapting and improving programs based on the findings from monitoring and evaluation.

  • Adaptive Management: A Tool for Conservation Practitioners. Salafsky, N., R. Margoluis, and K. Redford. (2001). Biodiversity Support Program. This guide provides an overview of adaptive management, defines the approach, describes the conditions under which adaptive management makes the most sense, and outlines the steps involved.
  • Measures of Conservation Success: Designing, Managing, and Monitoring Conservation and Development Projects. Margoluis, R., and N. Salafsky. (1998). Island Press. Available for purchase at Amazon.com. This book provides a detailed guide to project management and evaluation. The chapters and case studies describe the process step by step, from project conception to conclusion. The chapters on creating and implementing a monitoring plan, and on using the information obtained to modify the project, are particularly useful.
  • Does Your Project Make a Difference? A Guide to Evaluating Environmental Education Projects and Programs. Department of Environment and Conservation, Sydney, Australia. (2004). Section 1 provides a useful introduction to evaluation in EE. It defines evaluation and explains why it is important and challenging, with quotes about the evaluation experiences of several environmental educators.
  • Designing Evaluation for Education Projects (PDF). NOAA Office of Education and Sustainable Development. (2004). In Section 3, “Why is evaluation important to project design and implementation?”, nine benefits of evaluation are listed, including, for example, the value of using evaluation results for public relations and outreach.
  • Evaluating EE in Schools: A Practical Guide for Teachers (PDF). Bennett, D.B. (1984). UNESCO-UNEP. The introduction of this guide explains four main benefits of evaluation in EE: 1) building greater support for your program, 2) improving your program, 3) advancing student learning, and 4) promoting better environmental outcomes.
  • Guidelines for Evaluating Non-Profit Communications Efforts (PDF). Communications Consortium Media Center. (2004). A section titled “Overarching Evaluation Principles” describes twelve principles of evaluation, such as the importance of being realistic about the potential impact of a project and being aware of how values shape evaluation. Another noteworthy section, “Acknowledging the Challenges of Evaluation,” outlines nine substantial challenges, including the difficulty of assessing complicated changes at multiple levels of society (school, community, state, etc.). This resource focuses on evaluating public communications efforts, though most of the content is relevant to EE.

EMI (Ecosystem Management Initiative). (2004). Measuring Progress: An Evaluation Guide for Ecosystem and Community-Based Projects. School of Natural Resources and Environment, University of Michigan. Downloaded September 20, 2006 from: www.snre.umich.edu/ecomgt/evaluation/templates.htm

Patton, M.Q. (1987). Qualitative Research Evaluation Methods. Thousand Oaks, CA: Sage Publications.

Thomson, G. & Hoffman, J. (2003). Measuring the success of EE programs. Canadian Parks and Wilderness Society.

Centre for Education Statistics and Evaluation

5 essentials for effective evaluation

This report was originally published 26 May 2016.


All education programs are well-intentioned and many of them are highly effective. However, there are usually more ways than one to achieve good educational outcomes for students. When faced with this scenario, how do educators and education policymakers decide which alternative is likely to provide the most ‘bang for buck’?

There’s also an uncomfortable truth that educators and policymakers need to grapple with: some programs are not effective and some may even be harmful. What is the best way to identify these programs so that they can be remediated or stopped altogether?

Program evaluation is a tool to inform these decisions. More formally, program evaluation is a systematic and objective process to make judgements about the merit or worth of our actions, usually in relation to their effectiveness, efficiency and appropriateness (NSW Government 2016).

Evaluation and self-assessment are at the heart of strong education systems, and evaluative thinking is a core competency of effective educational leadership. Teachers, school leaders and people in policy roles should all apply the principles of evaluation to their daily work.

Research shows that:

  • Effective teachers use data and other evidence to constantly assess how well students are progressing in response to their lessons (Timperley & Parr, 2009).
  • Effective principals constantly plan, coordinate and evaluate teaching and the use of the curriculum with systematic use of assessment data (Robinson, Lloyd & Rowe, 2008).
  • Effective education systems engage all school staff and students in school self-evaluations so that program and policy settings can be adjusted to maximise educational outcomes (OECD, 2013).

This Learning Curve sets out five conditions for effective evaluation in education. These are not the only considerations and they are not unique to education. However, if these parameters are missing, evaluation will not be possible or it will be ineffective.

The five prerequisites for effective evaluation in education are:

  • Start with a clear and measurable statement of objectives
  • Develop a theory about how program activities will lead to improved outcomes (i.e. a program logic) and structure the evaluation questions around that logic
  • Let the evaluation questions determine the evaluation method
  • For questions about program impact, either a baseline or a comparison group will be required, preferably both (a worked sketch follows this list)
  • Be open-minded about the findings and have a clear plan for how to use the results.
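
To illustrate the baseline-plus-comparison-group point with hypothetical numbers: a difference-in-differences calculation uses the baseline to net out pre-existing gaps and the comparison group to net out background trends. All four means below are invented.

    # Hedged sketch: difference-in-differences with made-up group means.
    program_pre, program_post = 55.0, 63.0        # program group, before/after
    comparison_pre, comparison_post = 54.0, 57.0  # comparison group, before/after

    program_change = program_post - program_pre           # 8.0 points
    comparison_change = comparison_post - comparison_pre  # 3.0 points (trend)
    impact = program_change - comparison_change           # 5.0 points attributable
    print(f"difference-in-differences estimate: {impact:.1f} points")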


Evaluating Sources: General Guidelines (Purdue OWL)

Once you have an idea of the types of sources you need for your research, you can spend time evaluating individual sources. If a bibliographic citation seems promising, it’s a good idea to spend a bit more time with the source before you determine its credibility. Below are some questions to ask and things to consider as you read through a source. 

Find Out What You Can about the Author

One of the first steps in evaluating a source is to locate more information about the author. Sometimes simply typing an author’s name into a search engine will give you an initial springboard for information. Finding the author’s educational background and areas of expertise will help determine whether the author has experience in what they’re writing about. You should also examine whether the author has other publications and if they are with well-known publishers or organizations.

Read the Introduction / Preface

Begin by reading the Introduction or the Preface—What does the author want to accomplish? Browse through the Table of Contents and the Index. This will give you an overview of the source. Is your topic covered in enough depth to be helpful? If you don't find your topic discussed, try searching for some synonyms in the Index.

If your source does not contain any of these elements, consider reading the first few paragraphs of the source and determining whether it includes enough information on your topic for it to be relevant.

Determine the Intended Audience

Consider the tone, style, vocabulary, level of information, and assumptions the author makes about the reader. Are they appropriate for your needs? Remember that scholarly sources often have a very particular audience in mind, and popular sources are written for a more general audience. However, some scholarly sources may be too dense for your particular research needs, so you may need to turn to sources with a more general audience in mind.

Determine whether the Information is Fact, Opinion, or Propaganda

Information can usually be divided into three categories: fact, opinion, and propaganda. Facts are objective, while opinions and propaganda are subjective. A fact is something that is known to be true. An opinion gives the thoughts of a particular individual or group. Propaganda is the (usually biased) spreading of information for a specific person, group, event, or cause. Propaganda often relies on slogans or emotionally-charged images to influence an audience. It can also involve the selective reporting of true information in order to deceive an audience.

  • Fact: The Purdue OWL was launched in 1994.
  • Opinion: The Purdue OWL is the best website for writing help.
  • Propaganda: Some students have gone on to lives of crime after using sites that compete with the Purdue OWL. The Purdue OWL is clearly the only safe choice for student writers.

The last example above uses facts in a bad-faith way to take advantage of the audience's fear. Even if the individual claim is true, the way it is presented helps the author tell a much larger lie. In this case, the lie is that there is a link between the websites students visit for writing help and their later susceptibility to criminal lifestyles. Of course, there is no such link. Thus, when examining sources for possible propaganda, be aware that sometimes groups may deploy pieces of true information in deceptive ways.

Note also that the difference between an opinion and propaganda is that propaganda usually has a specific agenda attached—that is, the information in the propaganda is being spread for a certain reason or to accomplish a certain goal. If the source appears to represent an opinion, does the author offer legitimate reasons for adopting that stance? If the opinion feels one-sided, does the author acknowledge opposing viewpoints? An opinion-based source is not necessarily unreliable, but it’s important to know whether the author recognizes that their opinion is not the only opinion.

Identify the Language Used

Is the language objective or emotional? Objective language sticks to the facts, but emotional language relies on garnering an emotional response from the reader. Objective language is more commonly found in fact-based sources, while emotional language is more likely to be found in opinion-based sources and propaganda.

Evaluate the Evidence Listed

If you’re just starting your research, you might look for sources that include more general information. However, the deeper you get into your topic, the more comprehensive your research will need to be.

If you’re reading an opinion-based source, ask yourself whether there’s enough evidence to back up the opinions. If you’re reading a fact-based source, be sure that it doesn’t oversimplify the topic.

The more familiar you become with your topic, the easier it will be for you to evaluate the evidence in your sources.

Cross-Check the Information

When you verify the information in one source with information you find in another source, this is called cross-referencing or cross-checking. If the author lists specific dates or facts, can you find that same information somewhere else? Having information listed in more than one place increases its credibility.

Check the Timeliness of the Source

How timely is the source? Is the source twenty years out of date? Some information becomes dated when new research is available, but other older sources of information can still be useful and reliable fifty or a hundred years later. For example, if you are researching a scientific topic, you will want to be sure you have the most up-to-date information. However, if you are examining an historical event, you may want to find primary documents from the time of the event, thus requiring older sources.

Examine the List of References

Check for a list of references or other citations that look as if they will lead you to related material that would be good sources. If a source has a list of references, it often means that the source is well-researched and thorough.

As you continue to encounter more sources, evaluating them for credibility will become easier.


Critical Evaluation of Sources: Why Evaluate?


While most of us realize that we can’t trust all the information we see or read, we don’t always spend a lot of time considering how we actually make decisions about what to trust. Whether we’re watching the news, reading a friend’s blog, researching a health condition, or using information in some other way, we generally draw on our own values and life experiences to make relatively quick judgments about the validity of the information we are exposed to or seek out. Sometimes we don’t even consider the fact that we made a judgment about what to trust in the first place. We’re simply on autopilot.

Although the amount of deep thinking we need to put into evaluating the validity of an information source can vary depending on the significance of the situation, we ultimately make better decisions and construct more convincing arguments when we have a strong understanding of the quality of the information we’re using (or not using). This is especially true in an academic context, where our ability to create knowledge and meaning depends on our ability to analyze and interpret information with precision.   

To evaluate information, then, is to analyze information from a critical perspective. The evaluative process requires us to step back and carefully consider the sources we use and how we use them, to not rush to judgment but to think through the content of the articles we’re reading or the online search results we’re browsing. We also need to consider the relationships among different sources and how they work together to form “conversations” around certain topics or issues. A “conversation” in this sense refers to the diverse perspectives and arguments surrounding a particular research question (or set of questions).

The questions in this guide can help you think through the evaluation of information sources. Keep in mind that evaluation is not simply about determining whether a source is “reliable” or “not reliable.” It’s rarely that easy or straightforward. Instead, it’s more useful to consider the degree to which a source is reliable for a given purpose.  The primary goal of evaluation is to understand the significance and value of a source in relation to other sources and your own thinking on a topic.

Note that some evaluative questions will be more important than others depending on your needs as a researcher. Figuring out which questions are important to ask in a given situation is part of the research process. Also note that your evaluation of a source may evolve over time. For instance, a source that seems very useful early on may prove less useful as your project develops. Likewise, a source that seems insignificant at the beginning of a project may turn out to be your most significant source later in the research process.

From:   http://louisville.libguides.com/evaluation

This evaluation process is really no different from the process people use every day as they acquire information from a neighbor, a friend, a newspaper, a television broadcast, or a bulletin board flyer.

All of this happens so automatically that you don't even realize you're doing it. While you should evaluate all of your information sources (books, periodical articles, etc.) before using them in your research, it is most vital that you evaluate the information you find on the Internet. Every book and article published (even those available in Internet-based databases) goes through some sort of evaluation process, but Web pages go through no such pre-publication evaluation.

The number of resources available via the Internet is immense. Companies, organizations, educational institutions, communities and individual people all serve as information providers for the Internet community. Savvy members of the Internet community are aware that there are few, if any, quality controls for the information that is made available. Accurate and reliable data may share the computer screen with data that is inaccurate, unreliable, or even purposely false. In addition, the differences between the two types of data may be imperceptible, especially for someone who is not an expert in the topic area. Because the Internet is not the responsibility of any one organization or institution, it seems unlikely that any universal quality control will be established in the near future. In view of this, members of the Internet community must prepare themselves to be critically skilled consumers of the information they find.

Hoaxes, Fallacies, Propaganda - OH MY!

Types of hoaxes with examples -   http://virtualchase.justia.com/hoaxes-and-other-bad-information

Listing of Types of Fallacies  -  http://www.nizkor.org/features/fallacies/

Propaganda -    http://guides.library.jhu.edu/content.php?pid=198142&sid=1657614

Critical Thinkers

Characteristics of Critical Thinkers:   http://www.mhhe.com/socscience/philosophy/reichenbach/m1_chap02studyguide.html

Defining Critical Thinking

A Source:   http://www.criticalthinking.org/pages/defining-critical-thinking/766

Which states:  

The Problem: Everyone thinks; it is our nature to do so. But much of our thinking, left to itself, is biased, distorted, partial, uninformed, or downright prejudiced. Yet the quality of our life and that of what we produce, make, or build depends precisely on the quality of our thought. Shoddy thinking is costly, both in money and in quality of life. Excellence in thought, however, must be systematically cultivated.

A Definition: Critical thinking is that mode of thinking - about any subject, content, or problem - in which the thinker improves the quality of his or her thinking by skillfully taking charge of the structures inherent in thinking and imposing intellectual standards upon them.

The Result: A well-cultivated critical thinker:

  • raises vital questions and problems, formulating them clearly and precisely;
  • gathers and assesses relevant information, using abstract ideas to interpret it effectively;
  • comes to well-reasoned conclusions and solutions, testing them against relevant criteria and standards;
  • thinks openmindedly within alternative systems of thought, recognizing and assessing, as need be, their assumptions, implications, and practical consequences; and
  • communicates effectively with others in figuring out solutions to complex problems.

Critical thinking is, in short, self-directed, self-disciplined, self-monitored, and self-corrective thinking. It presupposes assent to rigorous standards of excellence and mindful command of their use. It entails effective communication and problem solving abilities and a commitment to overcome our native egocentrism and sociocentrism.

Critical Thinking

From Critical Thinking: An Introduction to the Basic Skills, by William Hughes (Broadview Press Ltd., Lewiston, NY, 1992; ISBN 1-921149-73-2):

The primary focus of critical thinking skills is on determining whether arguments are sound, i.e., whether they have true premises and logical strength. But determining the soundness of arguments is not a simple matter, for three reasons. First, before we can assess an argument we must determine its precise meaning. Second, determining the truth or falsity of statements is often a difficult task. Third, assessing arguments is complex because there are several different types of inferences, and each type requires a different kind of assessment. These three types of skills—interpretive skills, verification skills, and reasoning skills—constitute what are usually referred to as critical thinking skills. … Mastering critical thinking skills is also a matter of intellectual self-respect. We all have the capacity to learn how to distinguish good arguments from bad ones and to work out for ourselves what we ought and ought not to believe, and it diminishes us as persons if we let others do our thinking for us. If we are not prepared to think for ourselves, and to make the effort to learn how to do this well, we will always remain slaves to the ideas and values of others and to our own ignorance. (p. 11)

From Argumentation and Debate: Critical Thinking for Reasoned Decision Making, by Austin J. Freeley and David L. Steinberg, 10th edition (Wadsworth/Thomson Learning, Belmont, CA, 2000; ISBN 0-534-46115-2):

Critical thinking: the ability to analyze, criticize, and advocate ideas; to reason inductively and deductively; and to reach factual or judgmental conclusions based on sound inferences drawn from unambiguous statements of knowledge or belief. (p. 458)

Author: Theresa Rienzo, Reference Librarian, James Edward Tobin Library, Molloy College, 1000 Hempstead Ave., Rockville Centre, NY 11571

Copied from:   http://molloy.libguides.com/criticalthinking

A critical thinking model with elements:   http://www.criticalthinking.org/ctmodel/logic-model1.htm

From:   http://www.edpsycinteractive.org/topics/cogsys/critthnk.html

From: https://asbury.libguides.com/criteval

University of North Texas Libraries: Media Literacy


Definitions & Standards

" Information Literacy in a Nutshell " created by David L. Rice Library on YouTube . Accessed 2016.

Information literacy is a set of abilities requiring individuals to "recognize when information is needed and have the ability to locate, evaluate, and use effectively the needed information." 1 Information literacy also is increasingly important in the contemporary environment of rapid technological change and proliferating information resources. Because of the escalating complexity of this environment, individuals are faced with diverse, abundant information choices--in their academic studies, in the workplace, and in their personal lives. Information is available through libraries, community resources, special interest organizations, media, and the Internet--and increasingly, information comes to individuals in unfiltered formats, raising questions about its authenticity, validity, and reliability. In addition, information is available through multiple media, including graphical, aural, and textual, and these pose new challenges for individuals in evaluating and understanding it. The uncertain quality and expanding quantity of information pose large challenges for society. The sheer abundance of information will not in itself create a more informed citizenry without a complementary cluster of abilities necessary to use information effectively.

Source: Information Literacy Competency Standards for Higher Education, American Library Association, 2006 (accessed June 4, 2013). Library instruction sessions, LibGuides, and the Research 101 course are based on these standards. Information literacy skills are essential in today's world. Student development of information literacy is a process that spans the entire college experience.

Information literacy is knowing when and why you need information, where to find it, and how to evaluate, use and communicate it in an ethical manner. (CILIP: http://www.cilip.org.uk/cilip/advocacy-campaigns-awards/advocacy-campaigns/information-literacy/information-literacy)

Information literacy is the ability to recognize the extent and nature of an information need, then to locate, evaluate, and effectively use the needed information. (Plattsburgh State Information and Computer Literacy Task Force, 2001: http://www.plattsburgh.edu/library/instruction/informationliteracydefinition.php)

Information literacy forms the basis for lifelong learning. It is common to all disciplines, to all learning environments, and to all levels of education. It enables learners to master content and extend their investigations, become more self-directed, and assume greater control over their own learning. An information literate individual is able to:

  • Determine the extent of information needed
  • Access the needed information effectively and efficiently
  • Evaluate information and its sources critically
  • Incorporate selected information into one’s knowledge base
  • Use information effectively to accomplish a specific purpose

  • Understand the economic, legal, and social issues surrounding the use of information, and access and use information ethically and legally

From Unitec Institute of Technology in New Zealand (http://libguides.unitec.ac.nz/infolitstaff):

  • Information literacy is … the set of skills enabling students to recognize when they need information, how to competently locate it from appropriate sources, and evaluate its use and potential. Being able to critically evaluate and effectively use information does not just create successful students; it makes them independent lifelong learners, helping them succeed in the workplace and beyond.

Through IL instruction, students learn to:​

  • Recognize the need for information and determine the nature and extent of the information needed.
  • Find needed information effectively and efficiently.
  • Critically evaluate information and the information seeking process.
  • Manage information collected or generated.
  • Apply prior and new information to construct new concepts or create new understandings.
  • Use information with understanding and acknowledge cultural, ethical, economic, legal, and social issues surrounding the use of information.  

Definitions & Standards continued

Information Literacy Defined: http://www.ala.org/acrl/standards/informationliteracycompetency (the full ALA definition is quoted above).

At Unitec, all information literacy classes are based on the Australian and New Zealand Institute for Information Literacy (ANZIIL) standards; the skills students learn are listed above. (http://unitec.v1.libguides.com/content.php?pid=294846)

Five Laws of Media and Information Literacy


UNESCO Launches Five Laws of Media and Information Literacy 


Copyright © University of North Texas. Some rights reserved. Except where otherwise indicated, the content of this library guide is made available under a Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license . Suggested citation for citing this guide when adapting it:

This work is a derivative of "Media Literacy" , created by [author name if apparent] and © University of North Texas, used under CC BY-NC 4.0 International .

From: https://guides.library.unt.edu/medialiteracy


Understanding Evaluation Methodologies: M&E Methods and Techniques for Assessing Performance and Impact

This article provides an overview and comparison of the different types of evaluation methodologies used to assess the performance, effectiveness, quality, or impact of services, programs, and policies. There are several methodologies, both qualitative and quantitative, including surveys, interviews, observations, case studies, focus groups, and more. In this article, we discuss the most commonly used qualitative and quantitative evaluation methodologies in the M&E field.

Table of Contents

  • Introduction to Evaluation Methodologies: Definition and Importance
  • Types of Evaluation Methodologies: Overview and Comparison
  • Program Evaluation Methodologies
  • Qualitative Methodologies in Monitoring and Evaluation (M&E)
  • Quantitative Methodologies in Monitoring and Evaluation (M&E)
  • What are the M&E Methods?
  • Choosing the Right Evaluation Methodology: Factors and Criteria
  • Our Conclusion on Evaluation Methodologies

1. Introduction to Evaluation Methodologies: Definition and Importance

Evaluation methodologies are the methods and techniques used to measure the performance, effectiveness, quality, or impact of various interventions, services, programs, and policies. Evaluation is essential for decision-making, improvement, and innovation, as it helps stakeholders identify strengths, weaknesses, opportunities, and threats and make informed decisions to improve the effectiveness and efficiency of their operations.

Evaluation methodologies can be used in various fields and industries, such as healthcare, education, business, social services, and public policy. The choice of evaluation methodology depends on the specific goals of the evaluation, the type and level of data required, and the resources available for conducting the evaluation.

The importance of evaluation methodologies lies in their ability to provide evidence-based insights into the performance and impact of the subject being evaluated. This information can be used to guide decision-making, policy development, program improvement, and innovation. By using evaluation methodologies, stakeholders can assess the effectiveness of their operations and make data-driven decisions to improve their outcomes.

Overall, understanding evaluation methodologies is crucial for individuals and organizations seeking to enhance their performance, effectiveness, and impact. By selecting the appropriate evaluation methodology and conducting a thorough evaluation, stakeholders can gain valuable insights and make informed decisions to improve their operations and achieve their goals.

2. Types of Evaluation Methodologies: Overview and Comparison

Evaluation methodologies can be categorized into two main types based on the type of data they collect: qualitative and quantitative. Qualitative methodologies collect non-numerical data, such as words, images, or observations, while quantitative methodologies collect numerical data that can be analyzed statistically. Here is an overview and comparison of the main differences between qualitative and quantitative evaluation methodologies:

Qualitative Evaluation Methodologies:

  • Collect non-numerical data, such as words, images, or observations.
  • Focus on exploring complex phenomena, such as attitudes, perceptions, and behaviors, and understanding the meaning and context behind them.
  • Use techniques such as interviews, observations, case studies, and focus groups to collect data.
  • Emphasize the subjective nature of the data and the importance of the researcher’s interpretation and analysis.
  • Provide rich and detailed insights into people’s experiences and perspectives.
  • Limitations include potential bias from the researcher, limited generalizability of findings, and challenges in analyzing and synthesizing the data.

Quantitative Evaluation Methodologies:

  • Collect numerical data that can be analyzed statistically.
  • Focus on measuring specific variables and relationships between them, such as the effectiveness of an intervention or the correlation between two factors.
  • Use techniques such as surveys and experimental designs to collect data.
  • Emphasize the objectivity of the data and the importance of minimizing bias and variability.
  • Provide precise and measurable data that can be compared and analyzed statistically.
  • Limitations include potential oversimplification of complex phenomena, limited contextual information, and challenges in collecting and analyzing data.

Choosing between qualitative and quantitative evaluation methodologies depends on the specific goals of the evaluation, the type and level of data required, and the resources available for conducting the evaluation. Some evaluations may use a mixed-methods approach that combines both qualitative and quantitative data collection and analysis techniques to provide a more comprehensive understanding of the subject being evaluated.

3. Program Evaluation Methodologies

Program evaluation methodologies encompass a diverse set of approaches and techniques used to assess the effectiveness, efficiency, and impact of programs and interventions. These methodologies provide systematic frameworks for collecting, analyzing, and interpreting data to determine the extent to which program objectives are being met and to identify areas for improvement. Common program evaluation methodologies include quantitative methods such as experimental designs, quasi-experimental designs, and surveys, as well as qualitative approaches like interviews, focus groups, and case studies.

Each methodology offers unique advantages and limitations depending on the nature of the program being evaluated, the available resources, and the research questions at hand. By employing rigorous program evaluation methodologies, organizations can make informed decisions, enhance program effectiveness, and maximize the use of resources to achieve desired outcomes.


4. Qualitative Methodologies in Monitoring and Evaluation (M&E)

Qualitative methodologies are increasingly being used in monitoring and evaluation (M&E) to provide a more comprehensive understanding of the impact and effectiveness of programs and interventions. Qualitative methodologies can help to explore the underlying reasons and contexts that contribute to program outcomes and identify areas for improvement. Here are some common qualitative methodologies used in M&E:

Interviews

Interviews involve one-on-one or group discussions with stakeholders to collect data on their experiences, perspectives, and perceptions. Interviews can provide rich and detailed data on the effectiveness of a program, the factors that contribute to its success or failure, and the ways in which it can be improved.

Observations

Observations involve the systematic and objective recording of behaviors and interactions of stakeholders in a natural setting. Observations can help to identify patterns of behavior, the effectiveness of program interventions, and the ways in which they can be improved.

Document review

Document review involves the analysis of program documents, such as reports, policies, and procedures, to understand the program context, design, and implementation. Document review can help to identify gaps in program design or implementation and suggest ways in which they can be improved.

Participatory Rural Appraisal (PRA)

PRA is a participatory approach that involves working with communities to identify and analyze their own problems and challenges. It involves using participatory techniques such as mapping, focus group discussions, and transect walks to collect data on community perspectives, experiences, and priorities. PRA can help ensure that the evaluation is community-driven and culturally appropriate, and can provide valuable insights into the social and cultural factors that influence program outcomes.

Key Informant Interviews

Key informant interviews are in-depth, open-ended interviews with individuals who have expert knowledge or experience related to the program or issue being evaluated. Key informants can include program staff, community leaders, or other stakeholders. These interviews can provide valuable insights into program implementation and effectiveness, and can help identify areas for improvement.

Ethnography

Ethnography is a qualitative method that involves observing and immersing oneself in a community or culture to understand their perspectives, values, and behaviors. Ethnographic methods can include participant observation, interviews, and document analysis, among others. Ethnography can provide a more holistic understanding of program outcomes and impacts, as well as the broader social context in which the program operates.

Focus Group Discussions

Focus group discussions involve bringing together a small group of individuals to discuss a specific topic or issue related to the program. Focus group discussions can be used to gather qualitative data on program implementation, participant experiences, and program outcomes. They can also provide insights into the diversity of perspectives within a community or stakeholder group.

Photovoice

Photovoice is a qualitative method that involves using photography as a tool for community empowerment and self-expression. Participants are given cameras and asked to take photos that represent their experiences or perspectives on a program or issue. These photos can then be used to facilitate group discussions and generate qualitative data on program outcomes and impacts.

Case Studies

Case studies involve gathering detailed qualitative data through interviews, document analysis, and observation, and can provide a more in-depth understanding of a specific program component. They can be used to explore the experiences and perspectives of program participants or stakeholders and can provide insights into program outcomes and impacts.

Qualitative methodologies in M&E are useful for identifying complex and context-dependent factors that contribute to program outcomes, and for exploring stakeholder perspectives and experiences. Qualitative methodologies can provide valuable insights into the ways in which programs can be improved, and can complement quantitative methodologies in providing a comprehensive understanding of program impact and effectiveness.

5. Quantitative Methodologies in Monitoring and Evaluation (M&E)

Quantitative methodologies are commonly used in monitoring and evaluation (M&E) to measure program outcomes and impact in a systematic and objective manner. Quantitative methodologies involve collecting numerical data that can be analyzed statistically to provide insights into program effectiveness, efficiency, and impact. Here are some common quantitative methodologies used in M&E:

Surveys

Surveys involve collecting data from a large number of individuals using standardized questionnaires or surveys. Surveys can provide quantitative data on people’s attitudes, opinions, behaviors, and experiences, and can help to measure program outcomes and impact.
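
As a concrete illustration, here is a minimal Python sketch of how standardized survey responses might be summarized; the Likert-scale data and the "agree" threshold are purely hypothetical.

```python
# Minimal sketch: summarizing standardized survey responses.
# The Likert-scale answers (1 = strongly disagree ... 5 = strongly agree)
# are hypothetical.
from collections import Counter

responses = [4, 5, 3, 4, 2, 5, 4, 4, 3, 5, 1, 4, 5, 3, 4]

counts = Counter(responses)                   # frequency of each answer
mean_score = sum(responses) / len(responses)  # overall mean rating
agree_share = sum(1 for r in responses if r >= 4) / len(responses)

print("Distribution:", dict(sorted(counts.items())))
print(f"Mean score: {mean_score:.2f}")
print(f"Share agreeing (4 or 5): {agree_share:.0%}")
```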

Baseline and Endline Surveys

Baseline and endline surveys are quantitative surveys conducted at the beginning and end of a program to measure changes in knowledge, attitudes, behaviors, or other outcomes. These surveys can provide a snapshot of program impact and allow for comparisons between pre- and post-program data.
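
To make the pre/post logic concrete, here is a minimal Python sketch comparing baseline and endline scores for the same respondents; the scores are hypothetical, and the paired t-test shown is only one common way to test for change.

```python
# Minimal sketch: comparing baseline and endline survey scores for the
# same respondents with a paired t-test. All scores are hypothetical.
from scipy import stats

# Knowledge scores (0-100) for ten respondents, before and after the program
baseline = [52, 61, 48, 55, 70, 44, 58, 63, 50, 57]
endline = [60, 68, 55, 54, 78, 52, 66, 70, 58, 61]

changes = [post - pre for pre, post in zip(baseline, endline)]
mean_change = sum(changes) / len(changes)

# Paired t-test: did scores change significantly between the two rounds?
t_stat, p_value = stats.ttest_rel(endline, baseline)

print(f"Mean change: {mean_change:+.1f} points")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```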

Randomized Controlled Trials (RCTs)

RCTs are a rigorous quantitative evaluation method that involves randomly assigning participants to a treatment group (receiving the program) and a control group (not receiving the program), and comparing outcomes between the two groups. RCTs are often used to assess the impact of a program.
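
A minimal sketch of the core RCT comparison might look like the following; the outcome scores are hypothetical, and Welch's t-test is used here simply as a common way to compare the two group means.

```python
# Minimal sketch of an RCT analysis: compare mean outcomes between a
# randomly assigned treatment group and a control group. Data are hypothetical.
from scipy import stats

treatment = [72, 68, 75, 80, 66, 74, 71, 77]  # outcomes, received the program
control = [65, 70, 62, 68, 64, 66, 69, 63]    # outcomes, did not receive it

effect = sum(treatment) / len(treatment) - sum(control) / len(control)

# Welch's t-test: does not assume equal variances across the two groups
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

print(f"Estimated effect: {effect:+.1f} points")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```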

Cost-Benefit Analysis

Cost-benefit analysis is a quantitative method used to assess the economic efficiency of a program or intervention. It involves comparing the costs of the program with the benefits or outcomes generated, and can help determine whether a program is cost-effective or not.
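
The arithmetic behind a simple cost-benefit comparison can be sketched as follows; the cash flows and the 5% discount rate are hypothetical, and a real analysis would involve far more careful valuation of costs and benefits.

```python
# Minimal sketch of a cost-benefit analysis: discount yearly costs and
# benefits to present value, then compare. All figures are hypothetical.

def present_value(flows, rate):
    """Discount a list of yearly amounts (year 0 first) to present value."""
    return sum(amount / (1 + rate) ** year for year, amount in enumerate(flows))

costs = [100_000, 20_000, 20_000, 20_000]  # year 0 setup, then running costs
benefits = [0, 60_000, 70_000, 80_000]     # benefits start in year 1
discount_rate = 0.05

pv_costs = present_value(costs, discount_rate)
pv_benefits = present_value(benefits, discount_rate)

print(f"PV of costs:    {pv_costs:,.0f}")
print(f"PV of benefits: {pv_benefits:,.0f}")
print(f"Net benefit:    {pv_benefits - pv_costs:,.0f}")
# A benefit-cost ratio above 1 suggests the program is cost-effective.
print(f"Benefit-cost ratio: {pv_benefits / pv_costs:.2f}")
```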

Performance Indicators

Performance indicators are quantitative measures used to track progress toward program goals and objectives. These indicators can be used to assess program effectiveness, efficiency, and impact, and can provide regular feedback on program performance.
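
Tracking indicators against targets is easy to automate; the sketch below uses hypothetical indicators and an arbitrary 90% "on track" threshold.

```python
# Minimal sketch of performance-indicator tracking: compare actual values
# against targets and report the percent achieved. Indicators are hypothetical.

indicators = {
    "households reached": {"target": 5_000, "actual": 4_200},
    "staff trained": {"target": 120, "actual": 133},
    "clinics upgraded": {"target": 15, "actual": 9},
}

for name, v in indicators.items():
    pct = 100 * v["actual"] / v["target"]
    status = "on track" if pct >= 90 else "needs attention"  # arbitrary cutoff
    print(f"{name}: {v['actual']}/{v['target']} ({pct:.0f}%) - {status}")
```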

Statistical Analysis

Statistical analysis involves using quantitative data and statistical methods to analyze data gathered from various evaluation methods, such as surveys or observations. Statistical analysis can provide a more rigorous assessment of program outcomes and impacts and help identify patterns or relationships between variables.
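
For example, a suspected relationship between two program variables might be checked as follows; the variables and data are hypothetical, and correlation alone does not establish causation.

```python
# Minimal sketch: testing for a relationship between two program variables
# with a Pearson correlation. Data are hypothetical.
from scipy import stats

training_hours = [2, 5, 1, 8, 4, 6, 3, 7]       # hours of training received
test_scores = [55, 64, 50, 77, 60, 70, 58, 74]  # post-training test scores

r, p_value = stats.pearsonr(training_hours, test_scores)
print(f"r = {r:.2f}, p = {p_value:.4f}")
```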

Experimental designs

Experimental designs involve manipulating one or more variables and measuring the effects of the manipulation on the outcome of interest. Experimental designs are useful for establishing cause-and-effect relationships between variables, and can help to measure the effectiveness of program interventions.

Quantitative methodologies in M&E are useful for providing objective and measurable data on program outcomes and impact, and for identifying patterns and trends in program performance. Quantitative methodologies can provide valuable insights into the effectiveness, efficiency, and impact of programs, and can complement qualitative methodologies in providing a comprehensive understanding of program performance.

6. What are the M&E Methods?

Monitoring and Evaluation (M&E) methods encompass the tools, techniques, and processes used to assess the performance of projects, programs, or policies.

These methods are essential in determining whether the objectives are being met, understanding the impact of interventions, and guiding decision-making for future improvements. M&E methods fall into two broad categories: qualitative and quantitative, often used in combination for a comprehensive evaluation.

7. Choosing the Right Evaluation Methodology: Factors and Criteria

Choosing the right evaluation methodology is essential for conducting an effective and meaningful evaluation. Here are some factors and criteria to consider when selecting an appropriate evaluation methodology:

  • Evaluation goals and objectives: The evaluation goals and objectives should guide the selection of an appropriate methodology. For example, if the goal is to explore stakeholders’ perspectives and experiences, qualitative methodologies such as interviews or focus groups may be more appropriate. If the goal is to measure program outcomes and impact, quantitative methodologies such as surveys or experimental designs may be more appropriate.
  • Type of data required: The type of data required for the evaluation should also guide the selection of the methodology. Qualitative methodologies collect non-numerical data, such as words, images, or observations, while quantitative methodologies collect numerical data that can be analyzed statistically. The type of data required will depend on the evaluation goals and objectives.
  • Resources available: The resources available, such as time, budget, and expertise, can also influence the selection of an appropriate methodology. Some methodologies may require more resources, such as specialized expertise or equipment, while others may be more cost-effective and easier to implement.
  • Accessibility of the subject being evaluated: The accessibility of the subject being evaluated, such as the availability of stakeholders or data, can also influence the selection of an appropriate methodology. For example, if stakeholders are geographically dispersed, remote data collection methods such as online surveys or video conferencing may be more appropriate.
  • Ethical considerations: Ethical considerations, such as ensuring the privacy and confidentiality of stakeholders, should also be taken into account when selecting an appropriate methodology. Some methodologies, such as interviews or focus groups, may require more attention to ethical considerations than others.

Overall, choosing the right evaluation methodology depends on a variety of factors and criteria, including the evaluation goals and objectives, the type of data required, the resources available, the accessibility of the subject being evaluated, and ethical considerations. Selecting an appropriate methodology can ensure that the evaluation is effective, meaningful, and provides valuable insights into program performance and impact.

8. Our Conclusion on Evaluation Methodologies

It’s worth noting that many evaluation methodologies use a combination of quantitative and qualitative methods to provide a more comprehensive understanding of program outcomes and impacts. Both qualitative and quantitative methodologies are essential in providing insights into program performance and effectiveness.

Qualitative methodologies focus on gathering data on the experiences, perspectives, and attitudes of individuals or communities involved in a program, providing a deeper understanding of the social and cultural factors that influence program outcomes. In contrast, quantitative methodologies focus on collecting numerical data on program performance and impact, providing more rigorous evidence of program effectiveness and efficiency.

Each methodology has its strengths and limitations, and a combination of both qualitative and quantitative approaches is often the most effective in providing a comprehensive understanding of program outcomes and impact. When designing an M&E plan, it is crucial to consider the program’s objectives, context, and stakeholders to select the most appropriate methodologies.

Overall, effective M&E practices require a systematic and continuous approach to data collection, analysis, and reporting. With the right combination of qualitative and quantitative methodologies, M&E can provide valuable insights into program performance, progress, and impact, enabling informed decision-making and resource allocation, ultimately leading to more successful and impactful programs.




The Research Process: Step By Step


Most college-level assignments expect you to take a critical view of all your sources, not just those you may have found online.   

It is always important to consider whether the authors of what you are reading are properly qualified and present convincing arguments. Because your time for careful reading is limited, try to skim through your sources first to decide whether they are truly helpful. Once you have chosen your best sources, read the most relevant ones first, leaving the more tangential material aside to use as background information.

Learning to identify scholarly (often known as "peer-reviewed") and non-scholarly sources of information is an important skill to cultivate. Many databases provide help with making this distinction.

Additionally, Ulrich's Directory of Publications is a database that can be searched to check the publication type (scholarly, refereed, magazine, etc.).

REMEMBER: If you are using the internet for research, it is especially important to evaluate the accuracy and authority of the information you find there. Search engines, like Google, find web sites of all levels of quality. Keep these things in mind when deciding if a web page is reliable and appropriate for your research:

  • authority/credibility
  • accuracy/verifiability
  • bias/objectivity
  • currency/timeliness
  • scope/depth
  • intended audience/purpose

Always check with your instructor to find out if you can use free (non-Library) web sites for your assignments. And if looking for journal articles, library databases are the most efficient tool for searching.

  • Evaluating Information Resources: In the process of conducting research, the most prominent resources one finds are books, periodical articles, and web resources. It is up to you to evaluate the appropriateness of all resources for your research purposes.


Questions to ask

The information available on websites is not always accurate or reliable because anyone can publish almost anything they wish online. In order to use websites for academic and research purposes, they must be approached critically. Below are questions grouped by category that will help when critiquing the credibility of an online resource.

Authority

  • Is the name of the author/creator on the page?
  • Are his/her credentials listed (occupation, years of experience, position or education)?
  • Is the author qualified to write on the given topic? Why?
  • Is there contact information, such as an email address, somewhere on the page?

Purpose and Intended Audience

Knowing the motive behind the page's creation can help you judge its content.

  • Scholarly audience or experts?
  • General public or novices?
  • Inform or Teach?
  • Explain or Enlighten?
  • Sell a Product? 

Objectivity

  • Is the information covered fact, opinion, or propaganda?
  • Is the author's point-of-view objective and impartial?
  • Is the language free of emotion-rousing words and bias?
  • Does the author's affiliation with an institution or organization appear to bias the information?
  • Does the content of the page have the official approval of the institution, organization, or company? 
Accuracy

  • Are the sources for factual information clearly listed so that the information can be verified?
  • Is it clear who has the ultimate responsibility for the accuracy of the content of the material?
  • Can you verify any of the information in independent sources or from your own knowledge?
  • Has the information been reviewed or refereed?
  • Is the information free of grammatical, spelling, or typographical errors?

Reliability and Credibility

  • Why should anyone believe information from this site?
  • Does the information appear to be valid and well-researched, or is it unsupported by evidence?
  • Are quotes and other strong assertions backed by sources that you could check through other means?
  • What institution (company, government, university, etc.) supports this information?
  • If it is an institution, have you heard of it before? Can you find more information about it?
  • Is there a non-Web equivalent of this material that would provide a way of verifying its legitimacy?
Currency

  • If timeliness of the information is important, is it kept up-to-date?
  • Is there an indication of when the site was last updated?
Links

  • Are links related to the topic and useful to the purpose of the site?
  • Are links still current, or have they become dead ends?
  • What kinds of sources are linked?
  • Are the links evaluated or annotated in any way?

InfoLit for U: The Information Check Point

From: https://library.fiu.edu/gettingstarted


Library & Information Science Education Network

What is Information Literacy?

Md. Ashikuzzaman

Introduction: Information literacy is an essential skill set that empowers individuals to navigate the vast and ever-expanding realm of information with confidence and discernment. In an age where information is readily accessible through various platforms and sources, the ability to locate, evaluate, and effectively use information has become paramount. Information literacy goes beyond mere information retrieval; it encompasses the critical thinking, analytical, and technological skills needed to assess information’s credibility, relevance, and reliability effectively. By honing these abilities, individuals can make informed decisions, participate actively in society, and become lifelong learners in a world increasingly driven by information and knowledge.

1.1 What is Information Literacy?

Information literacy is a critical skill set in the digital age, encompassing the ability to access, evaluate, analyze, and ethically use information from diverse sources. It goes beyond basic information retrieval and involves a multifaceted understanding of information, including its context, credibility, and relevance. Information literacy is not confined to traditional libraries but extends to various digital platforms, where an overwhelming volume of information is available. A literate individual is adept at navigating this information landscape, distinguishing between reliable and unreliable sources, and critically assessing the quality and credibility of information. The information literate person is not just a consumer but an active participant in creating and disseminating knowledge, understanding the ethical considerations of information use, and contributing meaningfully to the broader intellectual discourse. Information literacy is a dynamic skill that evolves with technological advancements, requiring individuals to adapt to new information formats, platforms, and communication channels. Educational institutions and libraries are pivotal in fostering information literacy, equipping learners with the skills to thrive in an information-rich society.

1.2 Definitions of Information Literacy

Information literacy is the ability to locate, evaluate, interpret, and effectively use information from various sources and in diverse formats. It encompasses the skills, knowledge, and attitudes required to navigate the complex information landscape of today’s world. It involves the capacity to find relevant information using appropriate search strategies and tools, critically evaluate the quality and reliability of information sources, interpret and analyze information to derive meaningful insights, and ethically and responsibly use and communicate information. Information literacy empowers individuals to make informed decisions, engage in lifelong learning, and actively participate in society by effectively harnessing the information available.

Paul Zurkowski first defined information literacy in 1974: ‘people trained in the application of information sources to their work can be called literates.’ The UNESCO-sponsored Meeting of Experts on Information Literacy in Prague defined it as follows:

“Information literacy” encompasses knowledge of one’s information concerns and needs and the ability to identify, locate, evaluate, organize, and effectively create, use, and communicate information to address issues or problems at hand; it is a prerequisite for participating effectively in the information society, and is part of the basic human right of lifelong learning (US National Commission on Library and Information Science, 2003).

Sheila Webber, who was instrumental in developing the definition used by CILIP (the Chartered Institute of Library and Information Professionals, UK), had also developed an earlier definition:

According to Webber, “information literacy” is the adoption of appropriate information behavior to obtain, through whatever channel or medium, information well suited to one’s information needs, together with a critical awareness of the importance of wise and ethical use of information in society (Webber & Johnston, 2008). Information literacy is knowing when and why you need information, where to find it, and how to evaluate, use, and communicate it ethically.

In conclusion, information literacy is a beacon of empowerment in the modern era, guiding individuals through the intricate information landscape with skill, discernment, and adaptability. Beyond the ability to locate and access information, it embodies a comprehensive set of skills that enables critical thinking, ethical use of data, and effective problem-solving. In a world inundated with information, cultivating information literacy is paramount. It equips individuals with the tools to navigate diverse sources, discern credibility, and engage with information in a way that transcends academic settings, influencing professional, personal, and civic dimensions of life. As the digital society continues to evolve, information literacy emerges as a cornerstone for informed decision-making, lifelong learning, and active participation in a globalized community. Libraries, educational institutions, and information professionals play pivotal roles in fostering information literacy, ensuring that individuals are not merely consumers of information but empowered navigators of the dynamic and interconnected information landscape. In embracing information literacy, individuals are poised to navigate the complexities of the information age with confidence, contributing to a society where knowledge is not just accessed but critically and responsibly utilized for the greater good.

1.3 Objectives of Information Literacy

Information literacy objectives encapsulate a comprehensive set of goals aimed at equipping individuals with the essential skills and competencies necessary to navigate the information-rich landscape of the digital age. Information literacy empowers individuals to access, evaluate, and utilize information effectively, fostering critical thinking, independent learning, and informed decision-making. The multifaceted nature of these objectives includes the ability to discern credible sources, understand diverse media formats, and navigate complex information systems. Furthermore, information literacy aims to cultivate ethical information use, encouraging individuals to respect intellectual property, evaluate sources’ reliability, and responsibly contribute to the information ecosystem. These objectives extend beyond academic settings, influencing professional, personal, and civic realms, positioning information literacy as a fundamental life skill in today’s interconnected society. Educational institutions, libraries, and information professionals play pivotal roles in advancing these objectives, ensuring that individuals are proficient consumers of information and active contributors to a knowledge-based global community. As technology continues to evolve, the objectives of information literacy adapt to meet the challenges of an ever-changing information landscape, emphasizing the importance of ongoing learning and adaptability in the face of an increasingly complex and interconnected world.

The objectives of information literacy can be summarized as follows:

  • Facilitating Information Access: Information literacy aims to equip individuals with the skills to navigate the diverse landscape of information sources. This involves understanding traditional libraries and proficiency in utilizing digital archives, online databases, and search engines. The objective is to ensure that individuals can effectively locate, retrieve, and access information relevant to their needs. This skill becomes particularly vital in an age where information is dispersed across various platforms and formats.
  • Developing Critical Evaluation Skills: Critical evaluation is a cornerstone objective of information literacy. It goes beyond merely finding information to instill in individuals the ability to assess its credibility, relevance, and reliability. This involves questioning the source’s authority, understanding potential biases, and evaluating the currency of the information. Cultivating these critical thinking skills enables individuals to sift through the vast amount of information available, making informed decisions and avoiding the pitfalls of misinformation.
  • Promoting Effective Information Use: Beyond access and evaluation, information literacy empowers individuals to apply information effectively. This involves synthesizing information from various sources, integrating it into existing knowledge, and applying it to solve problems or make decisions. The objective is to go beyond passive consumption and enable individuals to actively use information in meaningful ways in both professional and personal contexts.
  • Encouraging Ethical Information Use: Ethical considerations are critical to information literacy objectives. Individuals are guided to understand and adhere to ethical practices, including proper citation, respecting intellectual property rights, and avoiding plagiarism. This objective emphasizes the importance of responsible information use, contributing to an ethical and sustainable information ecosystem.
  • Cultivating Lifelong Learning Habits: The objective of lifelong learning is intrinsic to information literacy. In a rapidly changing information landscape, individuals are encouraged not only to acquire knowledge but also to adapt to new technologies, stay curious, and actively seek opportunities for ongoing skill development. This objective recognizes that the learning journey extends beyond formal education into a continuous and self-directed process.
  • Empowering Civic and Social Engagement: Information literacy objectives extend to societal and civic realms by encouraging individuals to engage with information in ways that positively contribute to their communities. This involves understanding diverse perspectives, participating in informed civic discourse, and using information for social betterment. The objective is to empower individuals to be active and responsible contributors to societal progress through the effective use of information.

The objectives of information literacy paint a comprehensive picture of a skill set that is crucial for navigating the complexities of the information age. By achieving these objectives, individuals become adept consumers of information and active contributors to a global knowledge society. Educational institutions, libraries, and information professionals play pivotal roles in advancing these objectives, ensuring that information literacy remains a cornerstone skill in an interconnected and information-abundant world. In embracing these objectives, individuals can navigate the knowledge seas with confidence, discernment, and a commitment to ethical and responsible information use.

1.4 Special Aspects of Information Literacy

Information literacy encompasses various special aspects that enhance individuals’ ability to effectively navigate and engage with information in today’s digital age. These special aspects include tool literacy, resource literacy, social-structural literacy, research literacy, publishing literacy, emerging technology literacy, and critical literacy. Tool literacy focuses on mastering the tools and technologies used to access and organize information. Resource literacy involves understanding and utilizing different types of information resources. Social-structural literacy acknowledges the influence of societal factors on information. Research literacy emphasizes the skills required to conduct effective research. Publishing literacy involves understanding the processes of creating and disseminating information. Emerging technology literacy addresses the adaptation to new technologies. Critical literacy develops the ability to analyze and evaluate information critically. These special aspects of information literacy empower individuals to navigate the vast information landscape, critically evaluate sources, and make informed decisions in an increasingly complex and interconnected world.

These aspects include:

  • Tool literacy: This aspect focuses on mastering various tools and technologies used to access, retrieve, organize, and present information. It encompasses proficiency in using search engines, databases, citation management software, productivity tools, and other technological resources.
  • Resource literacy: Resource literacy emphasizes understanding and utilizing different information resources, such as books, journals, databases, websites, multimedia materials, and archives. It involves knowing each resource’s characteristics and strengths and selecting the most appropriate ones for specific information needs.
  • Social-structural literacy: This aspect recognizes that social and structural factors, such as power dynamics, biases, and cultural contexts, shape information. It entails understanding how societal structures influence the creation, dissemination, and access to information, as well as recognizing and critically analyzing the impact of these factors on information sources and content.
  • Research literacy: Research literacy encompasses the skills and knowledge required to conduct systematic and effective research. It includes formulating research questions, designing research strategies, evaluating sources, collecting and analyzing data, and communicating research findings in a scholarly manner.
  • Publishing literacy: Publishing literacy focuses on understanding the processes and practices involved in creating and disseminating information. It involves knowledge of scholarly publishing norms, copyright regulations, open-access initiatives, and ethical considerations related to publishing and authorship.
  • Emerging technology literacy: Given the rapid technological advancements, this aspect highlights the ability to adapt to and utilize emerging technologies for information discovery, analysis, and communication. It involves staying updated on emerging tools, platforms, and trends and critically evaluating their relevance and reliability.
  • Critical literacy: Critical literacy emphasizes developing critical thinking skills to question, evaluate, and challenge information. It involves examining information sources’ assumptions, biases, and perspectives, recognizing propaganda, misinformation, and disinformation, and engaging in critical analysis and reflection.

By addressing these special aspects of information literacy, individuals are better equipped to navigate the complexities of the information landscape, critically engage with information sources, and make informed decisions in an increasingly digital and information-rich society.

1.5 Abilities of the Information Literate

An information-literate individual possesses the skills, knowledge, and attitudes necessary to effectively navigate, evaluate, and utilize information in various contexts. They have developed the abilities to identify information needs, access information from diverse sources, critically evaluate the credibility and relevance of information, analyze and synthesize information to derive meaningful insights, and ethically and responsibly use and communicate information. Information-literate individuals can utilize various tools and technologies to search, retrieve, organize, and present information. They are critical thinkers who can discern reliable sources from misinformation and disinformation, and they actively evaluate and question the information they encounter. Information-literate individuals are lifelong learners, continuously seeking new knowledge and adapting to evolving information landscapes. They are empowered to make informed decisions, solve problems, and participate in a knowledge-driven society.

Information literacy equips individuals with various abilities to navigate, evaluate, and utilize information effectively. The abilities of information-literate individuals include:

  • Identifying information needs: Information-literate individuals can clearly recognize when they need information and articulate their requirements. They can define the scope and purpose of their information needs, which guides their search and evaluation processes.
  • Accessing information: Information-literate individuals possess the skills to locate and access information from various sources. They are proficient in using search engines, library catalogs, databases, and other resources to retrieve relevant information efficiently.
  • Evaluating information: Information-literate individuals can critically evaluate information sources for their credibility, accuracy, relevance, and bias. They can assess the authority of the authors or publishers, examine the evidence presented, and determine the overall quality and reliability of the information.
  • Analyzing and synthesizing information: Information-literate individuals can analyze and synthesize information from various sources to derive meaningful insights. They can identify patterns, connections, and relationships between different pieces of information and integrate them into a coherent understanding of the topic or issue at hand.
  • Applying information effectively: Information-literate individuals are skilled at applying their acquired information to fulfill specific tasks or objectives. They can use the information to solve problems, make informed decisions, develop arguments, and support their ideas or claims.
  • Ethical information use: Information-literate individuals understand and adhere to ethical considerations when using information. They respect copyright laws, intellectual property rights, and fair use principles. They give proper attribution to sources, avoid plagiarism, and use information ethically and responsibly.
  • Communication and information sharing: Information-literate individuals can effectively communicate and share information with others. They can present information clearly and coherently, cite sources accurately, and engage in collaborative discussions and knowledge sharing.
  • Lifelong learning: Information-literate individuals embrace a lifelong learning mindset. They possess the skills and motivation to seek new knowledge continuously, adapt to changing information environments, and engage in ongoing learning and self-improvement.

By developing these abilities, information-literate individuals are equipped to navigate the complexities of the information landscape, make informed decisions, solve problems, and actively participate in a knowledge-driven society. These abilities empower individuals to be critical thinkers, lifelong learners, and responsible users and creators of information.

1.6 Need for Information Literacy

The need for information literacy arises from the digital age’s ever-expanding and rapidly changing information landscape. In today’s society, where information is readily accessible through various sources and platforms, information literacy has become essential for individuals to navigate, evaluate, and effectively use the wealth of information available. Here are several key reasons highlighting the need for information literacy:

  • Coping with information overload: The digital era has brought about an overwhelming amount of information, making it challenging to filter through the noise and find reliable and relevant information. Information literacy equips individuals with the skills to navigate this information overload, allowing them to efficiently locate, evaluate, and utilize information that meets their needs.
  • Critical evaluation of information: With the proliferation of misinformation, fake news, and biased content, there is a growing need for individuals to evaluate the credibility and reliability of information sources critically. Information literacy empowers individuals to discern fact from fiction, evaluate sources for accuracy and bias, and make informed judgments about the quality of information.
  • Making informed decisions: In a society where individuals are constantly faced with numerous choices and decisions, information literacy plays a crucial role in enabling individuals to make informed and evidence-based decisions. By accessing reliable information, critically evaluating it, and synthesizing relevant insights, information-literate individuals can make well-informed choices in various aspects of their lives, including education, career, health, and personal matters.
  • Participating in democratic processes: In democratic societies, informed citizen participation is vital for effective decision-making and social progress. Information literacy empowers individuals to access and critically analyze political, social, and economic information. It enables them to engage in informed discussions, contribute to public debates, and participate actively in democratic processes such as voting and advocacy.
  • Lifelong learning: In a knowledge-based economy and rapidly evolving world, continuous learning and upskilling are paramount. Information literacy fosters a lifelong learning mindset by providing individuals with the skills and strategies to seek, evaluate, and apply information effectively. It equips individuals to adapt to changing technologies, acquire new knowledge, and engage in continuous personal and professional development.
  • Ethical information use: Information literacy promotes the ethical and responsible use of information. It emphasizes respecting intellectual property rights, citing sources accurately, and avoiding plagiarism. By instilling ethical information practices, information literacy helps individuals maintain integrity, uphold academic and professional standards, and contribute to a culture of ethical information use.
  • Digital citizenship: Information literacy is crucial for developing responsible digital citizenship. In an interconnected world where online interactions and digital platforms are prevalent, information literacy helps individuals navigate digital spaces safely, understand privacy settings, recognize online threats, and engage in respectful and ethical online behavior.
  • Professional and career success: In the professional realm, information literacy is highly valued by employers. The ability to effectively locate, evaluate, and utilize information is vital for job performance, research, problem-solving, and decision-making. Information literacy enhances individuals’ employability and improves their professional growth and success.
  • Social inclusion: Information literacy plays a role in promoting social inclusion and reducing the digital divide. By equipping individuals with the skills to access and utilize information, regardless of their background or socio-economic status, information literacy helps bridge the gap between those with access to information resources and those without. It promotes equal opportunities for education, employment, and civic engagement.
  • Empowerment and self-advocacy: Information literacy empowers individuals to proactively seek information, advocate for their rights, and effectively voice their opinions. It encourages individuals to question and challenge existing knowledge, engage in critical thinking, and contribute to creating and disseminating new knowledge.

The need for information literacy arises from the abundance of information, the prevalence of misinformation, the necessity for informed decision-making, the importance of democratic participation, the demand for lifelong learning, and the ethical considerations associated with information use. By developing information literacy skills, individuals are better equipped to navigate the information landscape, critically evaluate sources, make informed decisions, and actively engage in society.

1.7 Medium of Information Literacy

The medium of information literacy encompasses various literacies essential for effectively navigating and utilizing information in different formats and contexts. Here are some key mediums of information literacy:

  • Computer Literacy: Computer literacy encompasses the skills and knowledge required to use computers and related technologies effectively. It involves understanding basic computer operations, such as turning on/off the computer, using input devices like keyboards and mice, and managing files and folders. Computer literacy also includes proficiency using software applications, such as word processors, spreadsheets, presentation software, and web browsers. It is essential to access and utilize digital information, perform tasks efficiently, and engage with technology effectively.
  • Network Literacy: Network literacy focuses on understanding and utilizing computer networks, particularly the Internet. It involves skills such as using web browsers to access information online, conducting effective online searches using search engines, understanding website structures and URLs, and utilizing online communication tools like email and instant messaging. Network literacy also encompasses navigating online platforms, engaging in online communities, and understanding concepts like hyperlinks, web navigation, and online privacy. It enables individuals to access and leverage digital information resources, communicate and collaborate online, and stay informed in a connected world.
  • Digital Literacy: Digital literacy encompasses many skills and competencies to navigate and utilize digital information effectively. It includes understanding digital tools, software, and applications and being able to use them for various purposes. Digital literacy also involves evaluating digital content for credibility, reliability, and accuracy and understanding privacy and security considerations related to digital interactions. It encompasses skills such as digital communication (email, social media), online collaboration (file sharing, virtual meetings), information management (organizing and storing digital files), and responsible digital citizenship (ethics, online safety). Digital literacy empowers individuals to engage confidently with digital technology and leverage digital resources for learning, productivity, and communication.
  • Visual Literacy: Visual literacy refers to the ability to interpret, analyze, and create visual representations of information. It involves understanding visual elements such as images, charts, graphs, diagrams, maps, and infographics. Visual literacy enables individuals to comprehend and communicate information effectively through visual means. It includes skills such as understanding visual symbolism, analyzing visual messages for bias or manipulation, and creating visual presentations or data visualizations. Visual literacy is valuable in fields that heavily rely on visual communication, such as design, data analysis, and media production.
  • Media Literacy: Media literacy focuses on understanding and critically evaluating media messages and forms of communication. It involves analyzing and interpreting various media formats, such as print, television, radio, film, and digital media. Media literacy includes skills such as understanding media bias and manipulation, identifying persuasive techniques, evaluating the credibility and reliability of sources, and recognizing different media genres and formats. Media literacy empowers individuals to navigate media landscapes, differentiate between fact and opinion, critically analyze media messages, and make informed decisions about the information they consume. It also involves understanding the media’s influence on society, including issues of representation, stereotypes, and the media’s role in shaping public opinion.

These mediums of information literacy encompass a range of skills and competencies necessary for individuals to navigate, evaluate, and utilize information effectively in the digital age. Developing proficiency in these literacies enables individuals to adapt to new technologies, critically analyze information across different mediums, and make informed decisions in an increasingly digital and media-rich society.

1.8 The Impact and Role of Information Literacy in Higher Education

The impact and role of information literacy in higher education are significant as it directly influences students’ ability to succeed academically, conduct research effectively, and become lifelong learners. Here are several key aspects highlighting the impact and role of information literacy in higher education:

  • Academic Success: Information literacy skills are crucial for students to excel in their academic pursuits. These skills empower them to locate, evaluate, and effectively use relevant and credible information to support their coursework, assignments, and research projects. By developing information literacy competencies, students can navigate library resources, databases, and online platforms, enabling them to access a wide range of scholarly materials. Information literacy also fosters critical thinking and analytical skills, helping students synthesize information, construct well-reasoned arguments, and produce high-quality academic work.
  • Research Competence: Information literacy plays a vital role in research processes within higher education. It equips students with the skills to identify research gaps, formulate research questions, design research methodologies, and locate appropriate sources of information. Through information literacy, students learn how to effectively evaluate sources’ relevance, accuracy, and credibility, ensuring that their research is based on reliable and trustworthy information. Information literacy also helps students avoid plagiarism by understanding ethical citation practices and properly attributing ideas and sources in their scholarly work.
  • Critical Thinking and Analysis: Information literacy fosters critical thinking skills, enabling students to evaluate information critically and analyze its implications. Students learn to discern biases, identify logical fallacies, and assess the validity and reliability of sources. By critically engaging with information, students develop a more nuanced understanding of complex issues and become better equipped to form informed opinions and make well-reasoned arguments.
  • Lifelong Learning: Information literacy cultivates a lifelong learning mindset among students, emphasizing the importance of continuous learning beyond their formal education. By developing skills in accessing, evaluating, and utilizing information effectively, students can adapt to new technologies, keep up with advancements in their fields, and engage in self-directed learning. Information literacy empowers students to become independent learners who can seek out and engage with information resources throughout their lives.
  • Digital Citizenship and Ethical Use: Information literacy addresses the ethical and responsible use of information in higher education. It promotes digital citizenship, helping students understand their rights and responsibilities in the digital realm, including privacy, copyright, and intellectual property issues. Students learn to evaluate information sources critically, distinguish between reliable and unreliable information, and engage in responsible digital communication and sharing practices.
  • Professional Preparation: Information literacy skills are highly valued by employers in various fields. Graduates with strong information literacy competencies are better equipped to navigate the information-intensive workplace, conduct research, stay updated on industry trends, and make informed decisions. Information literacy gives students a competitive edge, enhancing their employability and career prospects.
  • Engagement with Knowledge and Society: Information literacy enables students to engage with knowledge in a broader societal context. It encourages students to examine diverse perspectives critically, recognize the influence of information on societal issues, and participate in informed discussions and debates. Information literacy empowers students to contribute actively to knowledge creation, dissemination, and application, fostering a culture of intellectual curiosity and engagement.

Information literacy is crucial in higher education by facilitating academic success, research competence, critical thinking, lifelong learning, digital citizenship, professional preparation, and engagement with knowledge and society. By developing information literacy skills, students are better equipped to navigate the complex information landscape, think critically, and become active participants in their academic journeys and beyond.

1.9 Impact of Information Literacy on Lifelong Learning

The impact of information literacy on lifelong learning is profound, as it empowers individuals to navigate the vast sea of information, adapt to new technologies, and continue their learning journey beyond formal education. Information literacy equips individuals with the skills to locate, evaluate, and effectively utilize information from various sources and formats. This enables them to stay updated on current topics, explore new areas of interest, and deepen their knowledge in their chosen fields. By developing information literacy competencies, individuals become self-directed learners, capable of critically analyzing information, synthesizing knowledge, and making informed decisions. Information literacy also cultivates a mindset of curiosity, inquiry, and intellectual growth, encouraging individuals to continually seek out new information, engage with diverse perspectives, and challenge their existing knowledge. In an ever-changing world where information is constantly evolving, information literacy is a vital tool for individuals to become lifelong learners who can adapt, grow, and thrive in their personal and professional lives.

1.9.1 Why Information Literacy?

Information literacy is crucial in today’s information-rich society due to several important reasons. Here are some key advantages and implications of information literacy:

a) Saving time through information skills: Information literacy equips individuals with the skills to efficiently locate, evaluate, and use information. By knowing how to search for and evaluate relevant and reliable sources effectively, individuals can save time in their information-seeking endeavors and avoid getting overwhelmed by the vast amount of available information.

b) Effective deployment of information service staff: Information literacy enables individuals to become self-sufficient in their information needs, reducing their reliance on information service staff. This allows information professionals to focus on more complex tasks, provide specialized assistance, and contribute their expertise where it is most needed.

c) Best use of information resources: Information literacy empowers individuals to effectively navigate and utilize information resources, both physical and digital. By understanding how to access and evaluate a wide range of resources, individuals can make the best use of available information, ensuring that their decisions and actions are well-informed and based on reliable sources.

d) Adding value to the profession as a whole: Information literacy enhances the value of information professionals and the broader profession. Information professionals with strong information literacy skills are better equipped to meet the evolving needs of their users, provide valuable guidance, and contribute to knowledge creation and dissemination in their respective fields.

e) Effective use of stock: Information literacy enables individuals to effectively utilize existing information stocks, such as library collections, databases, and archives. By knowing how to navigate and evaluate these resources, individuals can access relevant information and make connections between different sources, enhancing their understanding and enabling them to generate new insights.

f) Abundant information choices: Information literacy allows individuals to take advantage of the abundance of information choices available today. It equips them with the skills to critically evaluate and select the most relevant and reliable information from various sources, ensuring they are exposed to diverse perspectives and can make well-informed decisions.

g) Caution on unfiltered information: Information literacy raises awareness of the importance of critically evaluating information sources and questioning their validity, reliability, and authenticity. In an era of easily accessible information where misinformation and fake news are prevalent, information literacy helps individuals develop a critical mindset and discern reliable and trustworthy sources from unreliable ones.

Information literacy is essential because it enables individuals to save time, use information resources effectively, contribute value to their profession, utilize existing information stocks, access abundant information choices, and exercise caution in evaluating information. By developing information literacy skills, individuals can confidently navigate the complex information landscape, make informed decisions, and actively engage in lifelong learning.

1.10 Impact of Information Literacy in Libraries and Information Centers

The impact of information literacy in libraries and information centers is profound, shaping these institutions’ core functions and missions in the digital age. Information literacy is the linchpin that bridges the gap between the vast reservoir of information and the patrons seeking knowledge. In an era marked by information abundance, the ability to navigate, critically evaluate, and effectively use information is essential. Libraries and information centers, traditionally repositories of knowledge, are transformed into dynamic hubs of learning and exploration by integrating information literacy. The impact is evident in how these institutions curate resources, design user-centric services, and empower individuals to become adept information navigators. Information literacy not only enhances patrons’ research and learning capabilities but also reinforces the role of libraries as facilitators of lifelong learning and intellectual engagement.

The impact of information literacy in libraries and information centers is significant and transformative. Information literacy enhances the effectiveness of these institutions in several key ways:

  • Empowering Users: Information literacy empowers library and information center users by equipping them with the skills to effectively access, evaluate, and utilize information resources. Users become self-sufficient in their information needs, reducing their dependence on library staff and enabling them to make informed decisions. This empowerment leads to a more engaged and active user community.
  • Maximizing Resource Utilization: Information literacy ensures that library and information center resources are utilized to their full potential. Users with information literacy skills can navigate the available resources efficiently, locate relevant materials, and critically evaluate their suitability for their information needs. This maximizes the value and impact of the resources within the institution.
  • Enhancing Research and Scholarship: Information literacy supports research and scholarship within libraries and information centers. Users with strong information literacy skills can conduct comprehensive literature reviews, critically evaluate research findings, and effectively communicate their research through appropriate citation practices. This leads to the production of high-quality research and promotes scholarly engagement within the institution.
  • Collaborative Learning and Teaching: Information literacy fosters collaborative learning and teaching within libraries and information centers. Librarians and information professionals can actively engage with users to provide instruction on information literacy skills, helping them develop the necessary competencies to navigate and utilize information effectively. This collaboration promotes a lifelong learning culture and supports library users’ academic success.
  • Promoting Digital Literacy: Information literacy in libraries and information centers extends to digital literacy, equipping users with the skills to navigate and critically evaluate digital information resources. This is particularly important in the digital age, where vast information is available online. Digital literacy ensures that users can navigate online databases, search engines, and other digital platforms confidently and responsibly.
  • Fostering Critical Thinking: Information literacy encourages critical thinking among users within libraries and information centers. Users are trained to critically evaluate information sources, assess their reliability and relevance, and consider multiple perspectives. This critical thinking ability enhances their analytical skills, enabling them to make informed judgments and decisions based on credible and trustworthy information.
  • Bridging the Digital Divide: Information literacy programs in libraries and information centers help bridge the digital divide by providing access to digital technologies and teaching digital skills to underserved communities. This ensures that individuals from all backgrounds have equal opportunities to access and utilize information resources, empowering them in their educational, professional, and personal pursuits.

Information literacy profoundly impacts libraries and information centers by empowering users, maximizing resource utilization, supporting research and scholarship, promoting collaborative learning, enhancing digital literacy, fostering critical thinking, and bridging the digital divide. By integrating information literacy initiatives, libraries and information centers can effectively fulfill their mission of providing access to information and supporting lifelong learning in their communities.

Original Reference Article:

  • Vellaichamy, A. (2013). Information literacy skills in the use of electronic resources among the faculty members of Mother Teresa Women’s University and its affiliated colleges: An analytical study. Retrieved from http://hdl.handle.net/10603/229132


How to use and assess qualitative research methods

Loraine Busetto

1 Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany

Wolfgang Wick

2 Clinical Cooperation Unit Neuro-Oncology, German Cancer Research Center, Heidelberg, Germany

Christoph Gumbinger

Associated data

Not applicable.

Abstract

This paper aims to provide an overview of the use and assessment of qualitative research methods in the health sciences. Qualitative research can be defined as the study of the nature of phenomena and is especially appropriate for answering questions of why something is (not) observed, assessing complex multi-component interventions, and focussing on intervention improvement. The most common methods of data collection are document study, (non-) participant observations, semi-structured interviews and focus groups. For data analysis, field-notes and audio-recordings are transcribed into protocols and transcripts, and coded using qualitative data management software. Criteria such as checklists, reflexivity, sampling strategies, piloting, co-coding, member-checking and stakeholder involvement can be used to enhance and assess the quality of the research conducted. Using qualitative in addition to quantitative designs will equip us with better tools to address a greater range of research problems, and to fill in blind spots in current neurological research and practice.

Introduction

The aim of this paper is to provide an overview of qualitative research methods, including hands-on information on how they can be used, reported and assessed. This article is intended for beginning qualitative researchers in the health sciences as well as experienced quantitative researchers who wish to broaden their understanding of qualitative research.

What is qualitative research?

Qualitative research is defined as “the study of the nature of phenomena”, including “their quality, different manifestations, the context in which they appear or the perspectives from which they can be perceived”, but excluding “their range, frequency and place in an objectively determined chain of cause and effect” [ 1 ]. This formal definition can be complemented with a more pragmatic rule of thumb: qualitative research generally includes data in the form of words rather than numbers [ 2 ].

Why conduct qualitative research?

Because some research questions cannot be answered using (only) quantitative methods. For example, one Australian study addressed the issue of why patients from Aboriginal communities often present late or not at all to specialist services offered by tertiary care hospitals. Using qualitative interviews with patients and staff, it found one of the most significant access barriers to be transportation problems, including some towns and communities simply not having a bus service to the hospital [ 3 ]. A quantitative study could have measured the number of patients over time or even looked at possible explanatory factors – but only those previously known or suspected to be of relevance. To discover reasons for observed patterns, especially the invisible or surprising ones, qualitative designs are needed.

While qualitative research is common in other fields, it is still relatively underrepresented in health services research. The latter field is more traditionally rooted in the evidence-based-medicine paradigm, as seen in "research that involves testing the effectiveness of various strategies to achieve changes in clinical practice, preferably applying randomised controlled trial study designs (...)" [ 4 ]. This focus on quantitative research and specifically randomised controlled trials (RCTs) is visible in the idea of a hierarchy of research evidence, which assumes that some research designs are objectively better than others and that choosing a "lesser" design is only acceptable when the better ones are not practically or ethically feasible [ 5 , 6 ]. Others, however, argue that an objective hierarchy does not exist and that, instead, the research design and methods should be chosen to fit the specific research question at hand – "questions before methods" [ 2 , 7 – 9 ]. This means that even when an RCT is possible, some research problems require a different design that is better suited to addressing them. Arguing in JAMA, Berwick uses the example of rapid response teams in hospitals, which he describes as "a complex, multicomponent intervention – essentially a process of social change" susceptible to a range of different context factors including leadership or organisation history. According to him, "[in] such complex terrain, the RCT is an impoverished way to learn. Critics who use it as a truth standard in this context are incorrect" [ 8 ]. Instead of limiting oneself to RCTs, Berwick recommends embracing a wider range of methods, including qualitative ones, which for "these specific applications, (...) are not compromises in learning how to improve; they are superior" [ 8 ].

Research problems that can be approached particularly well using qualitative methods include assessing complex multi-component interventions or systems (of change), addressing questions beyond “what works”, towards “what works for whom when, how and why”, and focussing on intervention improvement rather than accreditation [ 7 , 9 – 12 ]. Using qualitative methods can also help shed light on the “softer” side of medical treatment. For example, while quantitative trials can measure the costs and benefits of neuro-oncological treatment in terms of survival rates or adverse effects, qualitative research can help provide a better understanding of patient or caregiver stress, visibility of illness or out-of-pocket expenses.

How to conduct qualitative research?

Given that qualitative research is characterised by flexibility, openness and responsivity to context, the steps of data collection and analysis are not as separate and consecutive as they tend to be in quantitative research [ 13 , 14 ]. As Fossey puts it: “sampling, data collection, analysis and interpretation are related to each other in a cyclical (iterative) manner, rather than following one after another in a stepwise approach” [ 15 ]. The researcher can make educated decisions with regard to the choice of method, how they are implemented, and to which and how many units they are applied [ 13 ]. As shown in Fig. 1, this can involve several back-and-forth steps between data collection and analysis where new insights and experiences can lead to adaptation and expansion of the original plan. Some insights may also necessitate a revision of the research question and/or the research design as a whole. The process ends when saturation is achieved, i.e. when no relevant new information can be found (see also below: sampling and saturation). For reasons of transparency, it is essential for all decisions as well as the underlying reasoning to be well-documented.

[Figure 1: Iterative research process]

While it is not always explicitly addressed, qualitative methods reflect a different underlying research paradigm than quantitative research (e.g. constructivism or interpretivism as opposed to positivism). The choice of methods can be based on the respective underlying substantive theory or theoretical framework used by the researcher [ 2 ].

Data collection

The methods of qualitative data collection most commonly used in health research are document study, observations, semi-structured interviews and focus groups [ 1 , 14 , 16 , 17 ].

Document study

Document study (also called document analysis) refers to the review by the researcher of written materials [ 14 ]. These can include personal and non-personal documents such as archives, annual reports, guidelines, policy documents, diaries or letters.

Observations

Observations are particularly useful to gain insights into a certain setting and actual behaviour – as opposed to reported behaviour or opinions [ 13 ]. Qualitative observations can be either participant or non-participant in nature. In participant observations, the observer is part of the observed setting, for example a nurse working in an intensive care unit [ 18 ]. In non-participant observations, the observer is “on the outside looking in”, i.e. present in but not part of the situation, trying not to influence the setting by their presence. Observations can be planned (e.g. for 3 h during the day or night shift) or ad hoc (e.g. as soon as a stroke patient arrives at the emergency room). During the observation, the observer takes notes on everything or certain pre-determined parts of what is happening around them, for example focusing on physician-patient interactions or communication between different professional groups. Written notes can be taken during or after the observations, depending on feasibility (which is usually lower during participant observations) and acceptability (e.g. when the observer is perceived to be judging the observed). Afterwards, these field notes are transcribed into observation protocols. If more than one observer was involved, field notes are taken independently, but notes can be consolidated into one protocol after discussions. Advantages of conducting observations include minimising the distance between the researcher and the researched, the potential discovery of topics that the researcher did not realise were relevant and gaining deeper insights into the real-world dimensions of the research problem at hand [ 18 ].

Semi-structured interviews

Hijmans & Kuyper describe qualitative interviews as “an exchange with an informal character, a conversation with a goal” [ 19 ]. Interviews are used to gain insights into a person’s subjective experiences, opinions and motivations – as opposed to facts or behaviours [ 13 ]. Interviews can be distinguished by the degree to which they are structured (i.e. a questionnaire), open (e.g. free conversation or autobiographical interviews) or semi-structured [ 2 , 13 ]. Semi-structured interviews are characterized by open-ended questions and the use of an interview guide (or topic guide/list) in which the broad areas of interest, sometimes including sub-questions, are defined [ 19 ]. The pre-defined topics in the interview guide can be derived from the literature, previous research or a preliminary method of data collection, e.g. document study or observations. The topic list is usually adapted and improved at the start of the data collection process as the interviewer learns more about the field [ 20 ]. Across interviews the focus on the different (blocks of) questions may differ and some questions may be skipped altogether (e.g. if the interviewee is not able or willing to answer the questions or for concerns about the total length of the interview) [ 20 ]. Qualitative interviews are usually not conducted in written format as it impedes the interactive component of the method [ 20 ]. In comparison to written surveys, qualitative interviews have the advantage of being interactive and allowing for unexpected topics to emerge and to be taken up by the researcher. This can also help overcome a provider or researcher-centred bias often found in written surveys, which by nature, can only measure what is already known or expected to be of relevance to the researcher. Interviews can be audio- or video-taped, but sometimes it is only feasible or acceptable for the interviewer to take written notes [ 14 , 16 , 20 ].
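To make the structure of an interview guide more concrete, the sketch below models a hypothetical semi-structured topic guide as an ordered list of topics with open-ended sub-questions, loosely anticipating the acute stroke care example used later in this paper. All topic and question wordings are invented for illustration and are not taken from any published guide.

```python
# A hypothetical semi-structured interview guide, modelled as ordered
# topics, each with open-ended sub-questions. Contents are invented
# for illustration only.
interview_guide = [
    {"topic": "Current care process",
     "questions": ["Can you walk me through what happens when a stroke patient arrives?",
                   "Who is involved at each step?"]},
    {"topic": "Perceived causes of delay",
     "questions": ["Where does the process slow down, and why?"]},
    {"topic": "Role of written procedures",
     "questions": ["How do the written SOPs relate to what you actually do?"]},
]

# In a real interview the order and emphasis may shift and blocks may be
# skipped; printing the guide here simply stands in for using it flexibly.
for block in interview_guide:
    print(block["topic"])
    for question in block["questions"]:
        print("  -", question)
```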

Focus groups

Focus groups are group interviews to explore participants’ expertise and experiences, including explorations of how and why people behave in certain ways [ 1 ]. Focus groups usually consist of 6–8 people and are led by an experienced moderator following a topic guide or “script” [ 21 ]. They can involve an observer who takes note of the non-verbal aspects of the situation, possibly using an observation guide [ 21 ]. Depending on researchers’ and participants’ preferences, the discussions can be audio- or video-taped and transcribed afterwards [ 21 ]. Focus groups are useful for bringing together homogeneous (to a lesser extent heterogeneous) groups of participants with relevant expertise and experience on a given topic on which they can share detailed information [ 21 ]. Focus groups are a relatively easy, fast and inexpensive method to gain access to information on interactions in a given group, i.e. “the sharing and comparing” among participants [ 21 ]. Disadvantages include less control over the process and less opportunity for each individual to participate. Moreover, focus group moderators need experience, as do those tasked with the analysis of the resulting data. Focus groups can be less appropriate for discussing sensitive topics that participants might be reluctant to disclose in a group setting [ 13 ]. Moreover, attention must be paid to the emergence of “groupthink” as well as possible power dynamics within the group, e.g. when patients are awed or intimidated by health professionals.

Choosing the “right” method

As explained above, the school of thought underlying qualitative research assumes no objective hierarchy of evidence and methods. This means that each choice of single or combined methods has to be based on the research question that needs to be answered and a critical assessment with regard to whether or to what extent the chosen method can accomplish this – i.e. the “fit” between question and method [ 14 ]. It is necessary for these decisions to be documented when they are being made, and to be critically discussed when reporting methods and results.

Let us assume that our research aim is to examine the (clinical) processes around acute endovascular treatment (EVT), from the patient’s arrival at the emergency room to recanalization, with the aim to identify possible causes for delay and/or other causes for sub-optimal treatment outcome.

As a first step, we could conduct a document study of the relevant standard operating procedures (SOPs) for this phase of care – are they up-to-date and in line with current guidelines? Do they contain any mistakes, irregularities or uncertainties that could cause delays or other problems? Regardless of the answers to these questions, the results have to be interpreted based on what they are: a written outline of what care processes in this hospital should look like.

If we want to know what they actually look like in practice, we can conduct observations of the processes described in the SOPs. These results can (and should) be analysed in themselves, but also in comparison to the results of the document analysis, especially as regards relevant discrepancies. Do the SOPs outline specific tests for which no equipment can be observed or tasks to be performed by specialized nurses who are not present during the observation? It might also be possible that the written SOP is outdated, but the actual care provided is in line with current best practice.

In order to find out why these discrepancies exist, it can be useful to conduct interviews. Are the physicians simply not aware of the SOPs (because their existence is limited to the hospital’s intranet) or do they actively disagree with them or does the infrastructure make it impossible to provide the care as described? Another rationale for adding interviews is that some situations (or all of their possible variations for different patient groups or the day, night or weekend shift) cannot practically or ethically be observed. In this case, it is possible to ask those involved to report on their actions – being aware that this is not the same as the actual observation. A senior physician’s or hospital manager’s description of certain situations might differ from a nurse’s or junior physician’s one, maybe because they intentionally misrepresent facts or maybe because different aspects of the process are visible or important to them. In some cases, it can also be relevant to consider to whom the interviewee is disclosing this information – someone they trust, someone they are otherwise not connected to, or someone they suspect or are aware of being in a potentially “dangerous” power relationship to them.

Lastly, a focus group could be conducted with representatives of the relevant professional groups to explore how and why exactly they provide care around EVT. The discussion might reveal discrepancies (between SOPs and actual care or between different physicians) and motivations to the researchers as well as to the focus group members that they might not have been aware of themselves. For the focus group to deliver relevant information, attention has to be paid to its composition and conduct, for example, to make sure that all participants feel safe to disclose sensitive or potentially problematic information or that the discussion is not dominated by (senior) physicians only. The resulting combination of data collection methods is shown in Fig. 2.

[Figure 2: Possible combination of data collection methods. Icon attributions: “Book” by Serhii Smirnov, “Interview” by Adrien Coquet, FR, “Magnifying Glass” by anggun, ID, “Business communication” by Vectors Market; all from the Noun Project]

The combination of multiple data sources as described for this example can be referred to as “triangulation”, in which multiple measurements are carried out from different angles to achieve a more comprehensive understanding of the phenomenon under study [ 22 , 23 ].

Data analysis

To analyse the data collected through observations, interviews and focus groups, these need to be transcribed into protocols and transcripts (see Fig. 3). Interviews and focus groups can be transcribed verbatim, with or without annotations for behaviour (e.g. laughing, crying, pausing) and with or without phonetic transcription of dialects and filler words, depending on what is expected or known to be relevant for the analysis. In the next step, the protocols and transcripts are coded, that is, marked (or tagged, labelled) with one or more short descriptors of the content of a sentence or paragraph [ 2 , 15 , 23 ]. Jansen describes coding as “connecting the raw data with “theoretical” terms” [ 20 ]. In a more practical sense, coding makes raw data sortable. This makes it possible to extract and examine all segments describing, say, a tele-neurology consultation from multiple data sources (e.g. SOPs, emergency room observations, staff and patient interviews). In a process of synthesis and abstraction, the codes are then grouped, summarised and/or categorised [ 15 , 20 ]. The end product of the coding or analysis process is a descriptive theory of the behavioural pattern under investigation [ 20 ]. The coding process is performed using qualitative data management software, the most common ones being NVivo, MAXQDA and ATLAS.ti. It should be noted that these are data management tools which support the analysis performed by the researcher(s) [ 14 ].
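As a plain illustration of what coding buys the analyst, the following sketch tags a few invented transcript and protocol fragments with short descriptors and then pulls together every segment carrying a given code. The sources, segments and code names are all hypothetical, and the snippet mirrors only the sorting logic described above; it is not a substitute for dedicated QDA software.

```python
from collections import defaultdict

# Invented raw material: (source document, text segment) pairs.
segments = [
    ("interview_01", "The consultation ran over the tele-neurology link."),
    ("sop_emergency", "Tele-neurology consultations must be logged promptly."),
    ("observation_er", "The nurse waited for the tele-neurology cart to become free."),
    ("interview_02", "Transport to the hospital is the biggest problem for us."),
]

# Coding: each segment (referenced by index) is tagged with descriptors.
codings = [
    ("tele_neurology", 0),
    ("tele_neurology", 1),
    ("tele_neurology", 2),
    ("delay", 2),
    ("access_barrier", 3),
]

# Coding makes the raw data sortable: all segments on one topic can be
# extracted together, across data sources.
by_code = defaultdict(list)
for code, idx in codings:
    by_code[code].append(segments[idx])

for source, text in by_code["tele_neurology"]:
    print(f"[{source}] {text}")
```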

[Figure 3: From data collection to data analysis. Icon attributions: see Figure 2, plus “Speech to text” by Trevor Dsouza, “Field Notes” by Mike O’Brien, US, “Voice Record” by ProSymbols, US, “Inspection” by Made, AU, and “Cloud” by Graphic Tigers; all from the Noun Project]

How to report qualitative research?

Protocols of qualitative research can be published separately and in advance of the study results. However, the aim is not the same as in RCT protocols, i.e. to pre-define and set in stone the research questions and primary or secondary endpoints. Rather, it is a way to describe the research methods in detail, which might not be possible in the results paper given journals’ word limits. Qualitative research papers are usually longer than their quantitative counterparts to allow for deep understanding and so-called “thick description”. In the methods section, the focus is on transparency of the methods used, including why, how and by whom they were implemented in the specific study setting, so as to enable a discussion of whether and how this may have influenced data collection, analysis and interpretation. The results section usually starts with a paragraph outlining the main findings, followed by more detailed descriptions of, for example, the commonalities, discrepancies or exceptions per category [ 20 ]. Here it is important to support main findings by relevant quotations, which may add information, context, emphasis or real-life examples [ 20 , 23 ]. It is subject to debate in the field whether it is relevant to state the exact number or percentage of respondents supporting a certain statement (e.g. “Five interviewees expressed negative feelings towards XYZ”) [ 21 ].

How to combine qualitative with quantitative research?

Qualitative methods can be combined with other methods in multi- or mixed methods designs, which “[employ] two or more different methods […] within the same study or research program rather than confining the research to one single method” [ 24 ]. Reasons for combining methods can be diverse, including triangulation for corroboration of findings, complementarity for illustration and clarification of results, expansion to extend the breadth and range of the study, explanation of (unexpected) results generated with one method with the help of another, or offsetting the weakness of one method with the strength of another [ 1 , 17 , 24 – 26 ]. The resulting designs can be classified according to when, why and how the different quantitative and/or qualitative data strands are combined. The three most common types of mixed method designs are the convergent parallel design, the explanatory sequential design and the exploratory sequential design. The designs with examples are shown in Fig. 4.

[Figure 4: Three common mixed methods designs]

In the convergent parallel design, a qualitative study is conducted in parallel to and independently of a quantitative study, and the results of both studies are compared and combined at the stage of interpretation of results. Using the above example of EVT provision, this could entail setting up a quantitative EVT registry to measure process times and patient outcomes in parallel to conducting the qualitative research outlined above, and then comparing results. Amongst other things, this would make it possible to assess whether interview respondents’ subjective impressions of patients receiving good care match modified Rankin Scores at follow-up, or whether observed delays in care provision are exceptions or the rule when compared to door-to-needle times as documented in the registry.

In the explanatory sequential design, a quantitative study is carried out first, followed by a qualitative study to help explain the results from the quantitative study. This would be an appropriate design if the registry alone had revealed relevant delays in door-to-needle times and the qualitative study were used to understand where and why these occurred, and how they could be improved.

In the exploratory sequential design, the qualitative study is carried out first and its results help inform and build the quantitative study in the next step [ 26 ]. If the qualitative study around EVT provision had shown a high level of dissatisfaction among the staff members involved, a quantitative questionnaire investigating staff satisfaction could be set up in the next step, informed by the qualitative findings on the topics on which dissatisfaction had been expressed. Amongst other things, the questionnaire design would make it possible to widen the reach of the research to more respondents from different (types of) hospitals, regions, countries or settings, and to conduct sub-group analyses for different professional groups.

How to assess qualitative research?

A variety of assessment criteria and lists have been developed for qualitative research, ranging in their focus and comprehensiveness [ 14 , 17 , 27 ]. However, none of these has been elevated to the “gold standard” in the field. In the following, we therefore focus on a set of commonly used assessment criteria that, from a practical standpoint, a researcher can look for when assessing a qualitative research report or paper.

Checklists

Assessors should check the authors’ use of and adherence to the relevant reporting checklists (e.g. Standards for Reporting Qualitative Research (SRQR)) to make sure all items that are relevant for this type of research are addressed [ 23 , 28 ]. Discussions of quantitative measures in addition to or instead of these qualitative measures can be a sign of lower quality of the research (paper). Providing and adhering to a checklist for qualitative research contributes to an important quality criterion for qualitative research, namely transparency [ 15 , 17 , 23 ].

Reflexivity

While methodological transparency and complete reporting are relevant for all types of research, some additional criteria must be taken into account for qualitative research. This includes what is called reflexivity, i.e. sensitivity to the relationship between the researcher and the researched, including how contact was established and maintained, or the background and experience of the researcher(s) involved in data collection and analysis. Depending on the research question and population to be researched this can be limited to professional experience, but it may also include gender, age or ethnicity [ 17 , 27 ]. These details are relevant because in qualitative research, as opposed to quantitative research, the researcher as a person cannot be isolated from the research process [ 23 ]. It may influence the conversation when an interviewed patient speaks to an interviewer who is a physician, or when an interviewee is asked to discuss a gynaecological procedure with a male interviewer, and therefore the reader must be made aware of these details [ 19 ].

Sampling and saturation

The aim of qualitative sampling is for all variants of the objects of observation that are deemed relevant for the study to be present in the sample, "to see the issue and its meanings from as many angles as possible" [1, 16, 19, 20, 27], and to ensure "information-richness" [15]. An iterative sampling approach is advised, in which data collection (e.g. five interviews) is followed by data analysis, followed by more data collection to find variants that are lacking in the current sample. This process continues until no new (relevant) information can be found and further sampling becomes redundant, a point called saturation [1, 15]. In other words: qualitative data collection finds its end point not a priori, but when the research team determines that saturation has been reached [29, 30].
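To make the stopping logic concrete, the loop below sketches this iterative collect-analyse cycle in Python. It is an illustration only: the code pool, batch size and automatic code extraction are hypothetical stand-ins, and in a real study the judgement that no new relevant information is emerging is an interpretive decision by the research team, not a set comparison.

```python
import random

# Hypothetical pool of codes that interviews could surface (illustration only).
CODE_POOL = ["delays", "staffing", "communication", "training", "equipment"]

def collect_batch(batch_size):
    """Simulate a batch of interviews; each 'transcript' yields a few codes."""
    return [random.sample(CODE_POOL, k=2) for _ in range(batch_size)]

def extract_codes(transcripts):
    """Flatten the codes identified in a batch of transcripts."""
    return {code for transcript in transcripts for code in transcript}

def sample_until_saturated(batch_size=5, max_batches=20):
    codebook = set()
    for batch in range(1, max_batches + 1):
        new_codes = extract_codes(collect_batch(batch_size)) - codebook
        if not new_codes:  # no new relevant information: saturation reached
            print(f"Saturation after {batch} batches: {sorted(codebook)}")
            break
        codebook |= new_codes  # new variants found: extend codebook, continue
    return codebook

sample_until_saturated()
```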

This is also the reason why most qualitative studies use deliberate instead of random sampling strategies. This is generally referred to as "purposive sampling", in which researchers pre-define which types of participants or cases they need to include so as to cover all variations that are expected to be of relevance, based on the literature, previous experience or theory (i.e. theoretical sampling) [14, 20]. Other types of purposive sampling include (but are not limited to) maximum variation sampling, critical case sampling or extreme or deviant case sampling [2]. In the above EVT example, a purposive sample could include all relevant professional groups and/or all relevant stakeholders (patients, relatives) and/or all relevant times of observation (day, night and weekend shift).

Assessors of qualitative research should check whether the considerations underlying the sampling strategy were sound and whether or how researchers tried to adapt and improve their strategies in stepwise or cyclical approaches between data collection and analysis to achieve saturation [14].

Good qualitative research is iterative in nature, i.e. it goes back and forth between data collection and analysis, revising and improving the approach where necessary. One example of this is pilot interviews, in which different aspects of the interview (especially the interview guide, but also, for example, the site of the interview or whether the interview can be audio-recorded) are tested with a small number of respondents, evaluated and revised [19]. In doing so, the interviewer learns which wording or types of questions work best, or what the best length of an interview is with patients who have trouble concentrating for an extended time. Of course, the same reasoning applies to observations or focus groups, which can also be piloted.

Ideally, coding should be performed by at least two researchers, especially at the beginning of the coding process when a common approach must be defined, including the establishment of a useful coding list (or tree) and a shared understanding of the meaning of individual codes [23]. Either an initial subset of the transcripts or all of them can be coded independently by the coders and then compared and consolidated after regular discussions in the research team. This ensures that codes are applied consistently to the research data.

Member checking

Member checking, also called respondent validation, refers to the practice of checking back with study respondents to see if the research is in line with their views [14, 27]. This can happen after data collection or analysis or when first results are available [23]. For example, interviewees can be provided with (summaries of) their transcripts and asked whether they believe this to be a complete representation of their views or whether they would like to clarify or elaborate on their responses [17]. Respondents' feedback on these issues then becomes part of the data collection and analysis [27].

Stakeholder involvement

In those niches where qualitative approaches have been able to evolve and grow, a new trend has seen the inclusion of patients and their representatives not only as study participants (i.e. "members", see above) but as consultants to and active participants in the broader research process [31–33]. The underlying assumption is that patients and other stakeholders hold unique perspectives and experiences that add value beyond their own single story, making the research more relevant and beneficial to researchers, study participants and (future) patients alike [34, 35]. Using the example of patients on or nearing dialysis, a recent scoping review found that 80% of clinical research did not address the top 10 research priorities identified by patients and caregivers [32, 36]. In this sense, the involvement of the relevant stakeholders, especially patients and relatives, is increasingly being seen as a quality indicator in and of itself.

How not to assess qualitative research

The above overview does not include certain items that are routine in assessments of quantitative research. What follows is a non-exhaustive, non-representative, experience-based list of the quantitative criteria often applied to the assessment of qualitative research, as well as an explanation of the limited usefulness of these endeavours.

Protocol adherence

Given the openness and flexibility of qualitative research, it should not be assessed by how well it adheres to pre-determined and fixed strategies – in other words: its rigidity. Instead, the assessor should look for signs of adaptation and refinement based on lessons learned from earlier steps in the research process.

Sample size

For the reasons explained above, qualitative research does not require specific sample sizes, nor does it require that the sample size be determined a priori [1, 14, 27, 37–39]. Sample size can only be a useful quality indicator when related to the research purpose, the chosen methodology and the composition of the sample, i.e. who was included and why.

Randomisation

While some authors argue that randomisation can be used in qualitative research, this is not commonly the case, as neither its feasibility nor its necessity or usefulness has been convincingly established for qualitative research [13, 27]. Relevant disadvantages include the negative impact of an overly large sample size as well as the possibility (or probability) of selecting "quiet, uncooperative or inarticulate individuals" [17]. Qualitative studies do not use control groups, either.

Interrater reliability, variability and other “objectivity checks”

The concept of "interrater reliability" is sometimes used in qualitative research to assess the extent to which the coding overlaps between two co-coders. However, it is not clear what this measure tells us about the quality of the analysis [23]. Such scores can therefore be included in qualitative research reports, preferably with some additional information on what the score means for the analysis, but they are not a requirement. Relatedly, it is not relevant for the quality or "objectivity" of qualitative research to separate those who recruited the study participants from those who collected and analysed the data. Experience even shows that it might be better to have the same person or team perform all of these tasks [20]. First, when researchers introduce themselves during recruitment, this can enhance trust when the interview takes place days or weeks later with the same researcher. Second, when the audio-recording is transcribed for analysis, the researcher who conducted the interviews will usually remember the interviewee and the specific interview situation during data analysis. This can provide additional context for the interpretation of data, e.g. on whether something might have been meant as a joke [18].
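If such a score is reported, a common choice is Cohen's kappa, which corrects raw percent agreement for the agreement expected by chance. The sketch below computes both for two co-coders who each assigned one code per text segment; the codes and assignments are made up purely for illustration.

```python
from collections import Counter

# Hypothetical coding of six text segments by two co-coders (one code each).
coder_a = ["delays", "staffing", "delays", "communication", "delays", "staffing"]
coder_b = ["delays", "staffing", "delays", "delays", "delays", "communication"]

n = len(coder_a)
# Observed agreement p_o: share of segments where both coders agree.
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Chance agreement p_e, from each coder's marginal code frequencies.
freq_a, freq_b = Counter(coder_a), Counter(coder_b)
expected = sum(freq_a[c] * freq_b[c] for c in freq_a.keys() | freq_b.keys()) / n**2

# Cohen's kappa = (p_o - p_e) / (1 - p_e)
kappa = (observed - expected) / (1 - expected)
print(f"Percent agreement: {observed:.2f}, Cohen's kappa: {kappa:.2f}")
```

As the surrounding text notes, even a high kappa says little about whether the codes themselves are meaningful; the number only describes consistency between coders.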

Not being quantitative research

The mere fact that a study is qualitative rather than quantitative should not be used as an assessment criterion irrespective of the research problem at hand. Similarly, qualitative research should not be required to be combined with quantitative research per se, unless mixed methods research is judged to be inherently better than single-method research. In that case, the same criterion should be applied to quantitative studies without a qualitative component.

The main take-away points of this paper are summarised in Table 1. We aimed to show that, if conducted well, qualitative research can answer specific research questions that cannot be adequately answered using (only) quantitative designs. Seeing qualitative and quantitative methods as equal will help us become more aware and critical of the "fit" between the research problem and our chosen methods: I can conduct an RCT to determine the reasons for transportation delays of acute stroke patients – but should I? It also provides us with a greater range of tools to tackle a greater range of research problems more appropriately and successfully, filling in the blind spots on one half of the methodological spectrum to better address the whole complexity of neurological research and practice.

Take-away points

Research problems suited to qualitative approaches:

• Assessing complex multi-component interventions or systems (of change)

• What works for whom when, how and why?

• Focussing on intervention improvement

Data collection methods:

• Document study

• Observations (participant or non-participant)

• Interviews (especially semi-structured)

• Focus groups

Data analysis:

• Transcription of audio-recordings and field notes into transcripts and protocols

• Coding of protocols

• Using qualitative data management software

Mixed methods (combinations of quantitative and/or qualitative methods), e.g.:

• Convergent parallel design: quali and quanti in parallel

• Explanatory sequential design: quanti followed by quali

• Exploratory design: quali followed by quanti

How to assess qualitative research:

• Checklists

• Reflexivity

• Sampling strategies

• Piloting

• Co-coding

• Member checking

• Stakeholder involvement

How not to assess qualitative research:

• Protocol adherence

• Sample size

• Randomisation

• Interrater reliability, variability and other "objectivity checks"

• Not being quantitative research

Abbreviations

EVT: Endovascular treatment
RCT: Randomised controlled trial
SOP: Standard operating procedure
SRQR: Standards for Reporting Qualitative Research

Authors' contributions

LB drafted the manuscript; WW and CG revised the manuscript; all authors approved the final version.

Funding

No external funding.

Competing interests

The authors declare no competing interests.


Learning how to find, evaluate, and use resources to explore a topic in depth

Series Editor: Michael Theall, Youngstown State University. Authors: Gail MacKay, Indiana University Kokomo; Barbara Millis, University of Nevada-Reno; Rebecca Brent, Education Designs, Inc.

At institutions of higher education across the U.S., information literacy (IL) is being integrated into general education curricula as a specific learning objective. The Association of College and Research Libraries (ACRL) (1) defines information literate students as those who “recognize when information is needed and have the ability to locate, evaluate, and use effectively the needed information.” As the world moves toward a knowledge-based economy, information literacy becomes a crucial component of preparing students for the lifelong learning that current and future job markets demand.

IDEA Research Report #1 (2) states that, “…It is important to recognize that much of the subject matter content which students learn today will be outdated in 5-10 years after they graduate.” Thus, an emphasis on lifelong learning seems imperative. Canja (3), for example, suggests that “… Lifelong learning has become an economic necessity for national and global productivity. With the decline in birth rates in major developed countries, persons—still active, still healthy—must continue in the workforce, trained and retrained” (p. 27). Ironically, IDEA Research Report #1 also finds that the objectives identified as emphasizing lifelong learning (Learning to find and use resources, and Gave tests/projects that covered most important points) were identified as “Important” or “Essential” in only about 30% of the classes using IDEA. The ACRL (1) notes, “Information literacy forms the basis for lifelong learning. It is common to all disciplines, to all learning environments, and to all levels of education. It enables learners to master content and extend their investigations, become more self-directed, and assume greater control over their own learning.” However, information literacy does not concern itself only with technical resources. Successful students and workers must also be able to affiliate with others and to seek and find expertise among the human resources that are available (4).

Seeking out information resources and then using them to address a question or a problem are engaging activities, and there are several attached benefits. First is recognition of the value of the resources. Next is the application of the new information and the construction of new knowledge. Intrinsic motivation results from the realization that learning is taking place; ultimately, these practical and motivational effects promote continued use of the resources and lifelong learning, and facilitate deep learning.

Here are the key components that characterize a deep, rather than a surface, approach to learning. Rhem (5) summarizes them as follows:

Motivational context: We learn best what we feel a need to know. Intrinsic motivation remains inextricably bound to some level of choice and control. Courses that remove these take away the sense of ownership and kill one of the strongest elements in lasting learning.

Learner activity: Deep learning and “doing” travel together. Doing in itself isn’t enough. Faculty must connect activity to the abstract conceptions that make sense of it, but passive mental postures lead to superficial learning.

Interaction with others: As Noel Entwistle put it in a recent email message, “The teacher is not the only source of instruction or inspiration.”

A well-structured knowledge base: This does not just mean presenting new material in an organized way. It also means engaging and reshaping, when necessary, the concepts students bring with them. Deep approaches and learning for understanding are integrative processes. The more fully new concepts can be connected with students' prior experience and existing knowledge, the more likely it is that they will be impatient with inert facts and eager to achieve their own synthesis (p. 4).

If instructors are to motivate students to acquire the skills of information literacy that will help them to remain lifelong learners, then they need to design research projects and assignments that get students into the knowledge base and engage them in critical thinking activities through active learning and interaction with one another. Through such sequenced assignments, students can learn how to answer relevant questions and to solve challenging problems.

Keep in mind that an important component of finding and using resources to explore topics is evaluating the quality of those resources. In an information-rich world, students must be able to determine if a resource is reliable and valid enough to use in their work. These information literacy skills (and even quantitative literacy skills–see the Teaching Note, “Learning appropriate methods for collecting, analyzing, and interpreting numerical information”) must be taught. See the Teaching Note, “Encouraged students to use multiple resources (e.g. Internet, library holdings, outside experts) to improve understanding,” for more ideas.

Teaching This Objective

The most relevant IDEA instructional method is "encouraged students to use multiple resources to improve understanding." This Learning Note complements Baron's with some general guidelines that focus on developing good research projects or assignments to assist with "learning how to find and use resources for answering questions or solving problems." It aims to help instructors provide students with effective and feasible assignments. With today's information overload, students need guidance in locating and using appropriate resources for answering questions and solving problems, and they must hone these skills throughout their lives. Academic librarians can serve as an instructor's best ally.

Other IDEA instructional methods that are important to Objective #9 include items #2 Finding ways to help students answer their own questions, #8 Stimulating intellectual effort, #15 Inspiring students to set and achieve goals, #18 Asking students to help each other understand ideas or concepts, and #19 Assigning work that requires original or creative thinking. These relationships are logical because the nature of investigative activity requires intellectual effort, focused exploration, and creativity, and the connections between problem solving and gathering information and evidence have been well-documented (6). These methods support many of the specific hints described below.

Motivation as a starting point. Locating information for its own sake provides practice, but it fails to engage motivated students in productive work linked to an understood outcome. Feldman suggests that student achievement remains strongly correlated to the perceived outcomes of instruction (7). The relevance of assigned work is also critical to students' active engagement (8) and a major predictor of student ratings of their teachers (9). Thus, skill development becomes much more productive when there is a clearly understood link between the assigned work and specific learning goals or tangible products. The real-world analog is obvious: people do not search for information unless they have a reason to do so. Because in many teaching-learning situations teachers expect students to explore issues and topics that may not intrinsically interest them, demonstrating relevance and utility becomes a critical first step in getting students engaged (see "Related course material to real-life situations" and "Introduced stimulating ideas about the subject"). Allowing students some choice of topic or project can motivate them to take a deeper approach to learning (10).

Sequence the research project or assignment. If instructors want students to learn to find and use resources to tackle stimulating questions and challenging problems through research, they need to design sequenced activities that motivate students and get them into the knowledge base. This can often be accomplished through the individual work that students do either as discrete homework assignments or as smaller parts of an extended research project. What becomes of these assignments or project components is critical for deep learning. Instructors should design in-class exercises where learners are actively engaged with the material they prepared individually and with each other (11).

A. Planning

  • Arrange for library instruction. Even students who have achieved some level of proficiency with library research will benefit from the reinforcement and enhancement of their skills. Require attendance. Attend yourself, asking questions as a learner.
  • Bring the class to the library or ask a librarian to come to your classroom when students are ready to begin their project, not in advance. Students learn best when there is an immediate and applicable need.
  • Send a copy of the assignment to the instruction librarian at your campus. Ask for input before finalizing the assignment. Librarians, for example, are highly skeptical of the academic value of commonly assigned “Library Scavenger Hunts.”
  • Include homegrown resource guides, sometimes called "pathfinders," in your initial quest for student library sources. Often campus instruction and reference librarians develop these guides for various fields or disciplines. If your field is not included, ask the library to develop a resource guide for your area. These subject guides provide students with suggestions for "where to start" their research. The guides include both print and electronic sources, such as subject encyclopedias and specialized periodical indexes like the Applied Science and Technology Index, PsycINFO, or Sociological Abstracts, as well as reference works or standards in the field such as The Physician's Desk Reference (PDR), the CRC Handbook of Chemistry and Physics, or the Statistical Abstract of the United States.
  • Consider alternatives to the conventional research paper. Excellent assignment ideas reside on the Web, often at other campus library sites (12, 13, 14, 15, 16).

B. Designing

  • Provide your expectations for the assignment in writing to your students. Let them know what the assignment involves and what you expect them to learn from the experience. “I don’t know what s/he wants” is a student lament transcending the ages. Make their day; tell them. To help students fully understand these expectations in practice, consider providing strong and weak samples of typical segments of the project or assignment to discuss and critique in class.
  • Specify how the assignment fits with the goals or objectives of the course to show relevance. Be as explicit as possible. Share this information, also, with instruction librarians to help them determine appropriate sources.
  • Provide students with the grading criteria in writing for the project or assignment.
  • Offer a variety of flexible topics, encouraging students to choose ones that interest them.
  • Review the student-selected topics to see that they are appropriate and achievable. Avoid very current or local topics if students need scholarly sources, as peer-reviewed journals take time to reach publication.
  • Place materials on “Reserve,” if necessary, to avoid having 30 students compete for six books.
  • Discuss the role of attribution and documentation in a community of scholars. Include a policy on plagiarism in the syllabus. Emphasize the ethical use of information and the avoidance of plagiarism. Aside from ethics, there are also copyright laws, both national and international, to consider. Specifically discuss appropriate and inappropriate use of online material, a gray area for many students.
  • Announce which style manual you expect students to use. Be very specific about documentation for online sources. Many style manuals are difficult to interpret.

Provide opportunities to engage in deep learning. As noted in the background section, the key components characterizing deep learning are motivational context, learner activity, interaction with others, and a well-structured knowledge base (5). As an example, faculty members can ask students, as part of a larger research project, to prepare paired annotations based on the double-entry journal recommended by writing-across-the-curriculum and classroom assessment experts (17). The teacher or the students identify a pool of articles on the question or problem at hand. Each student, working individually out of class, prepares a reflective commentary on one of the articles or chapters. They do so using a double-column format (a Microsoft Word table works beautifully), citing the key points of the research article on the left-hand side and reactions, questions, commentary, and connections with other readings on the right, aligning each key point with the reflective commentary. The entries in these columns will not be the same length. When students come to class, the teacher randomly pairs them with another student who has read and analyzed the same research article. The two partners then read one another's reflective commentaries, comparing both the key points they have identified and their specific responses to them. They discuss their reasons for the choices they made. Then, working together, they prepare a composite annotation summarizing the article (see IDEA Paper No. 38).

This activity should be repeated several times during the semester, pairing different students. It enables students to reflect on their own thinking skills (metacognition) and to compare their thinking with that of other students. The more paired annotations they complete, the more skilled students become at identifying key points in an article and "using resources for answering questions or solving problems." This structure thus enables teachers to sequence learning in meaningful ways. It builds critical thinking and writing skills by having students analyze and then compare their responses to the same piece of writing. It has the additional virtue of being relevant to virtually any discipline. Over the course of the semester, students build a repertoire of annotated research articles they can bring to bear on the given question or problem.

A note about technology. A thorough discussion of the ways in which new technologies can support and supplement students' efforts to find and use resources is beyond the scope of this Note. However, we should mention at minimum that the bounty awaiting students who explore web-based resources comes with a price: the equally large amount of inaccurate, incomplete, and sometimes distorted information that can be found in any web search. The critical issue for teachers is to construct assignments that require specific information that is known to exist and is accessible with minimum interference from useless, irrelevant, or biased data. Your resource librarian can be a tremendous asset in saving you hours of work (e.g., training students on effective and efficient search strategies and helping everyone avoid wasting time and effort on valueless information). All disciplines and courses deal with electronic information, and we cannot ignore its potential value. What is important to remember in constructing assignments is that the work must have a meaningful relationship to a clearly stated outcome. There has to be a tangible "payoff" in terms of students being able to connect the work to an understood and desired result.

Assessing This Objective

  • Develop a rubric (or a form) to assess the announced grading criteria. For example, assign a certain number of points for each component of a project or assignment (see 2 below). What percentage of the total will the final paper and bibliography be? Note what happens if any of the required items are a day late, two days late, etc. What percentage will mechanics (spelling, punctuation, grammar) contribute to the final grade? A minimal sketch of such a point-based rubric appears after this list.
  • Sequence parts of the project or assignment by establishing intermittent deadlines along the way. This practice not only helps prevent procrastination, but also helps to deter plagiarism. For example, have due dates for the overall topic and the thesis statement, due dates for a preliminary bibliography of “X” number of sources, an outline, a first draft, oral presentation, written or in-class peer reviews, etc.
  • Require critical thinking. If students are using Web sites, for example, ask for the background or credentials of the author; ask for the date of last revision if currency is important; ask if students found any bias on the site; and ask why they selected this site from among all the others.
  • Make use of peer reviewing throughout the research project or assignment to provide an additional source of feedback and add to the active learning and student interactions essential for deep learning. Have students exchange drafts and apply the rubric or checklist that will be used to assess the assignment. The opportunity for critical review of another draft and seeing comments from a peer will help them more fully understand the expectations, leading to better final products.
  • Review respected resources such as the Tutorial for Developing and Evaluating Library Assignments at the University of Maryland University College, Adelphi, MD (18) and the Scoring Criteria for Development/Resource-Based Learning Project at Delta College, University Center, MI (19).
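To illustrate the arithmetic behind such a rubric, here is a small Python sketch. The component names, point values, and late penalty are hypothetical; the point is simply that announced criteria can be encoded once and applied consistently to every student.

```python
# Hypothetical point-based rubric for a sequenced research assignment.
RUBRIC = {
    "topic_and_thesis": 10,
    "preliminary_bibliography": 15,
    "outline": 10,
    "first_draft": 20,
    "final_paper": 35,
    "mechanics": 10,  # spelling, punctuation, grammar
}

LATE_PENALTY_PER_DAY = 0.05  # assumed: 5% of a component's points per day late

def score(earned, days_late=None):
    """earned: {component: points}, days_late: {component: days}."""
    days_late = days_late or {}
    total = 0.0
    for component, max_points in RUBRIC.items():
        pts = min(earned.get(component, 0), max_points)
        pts *= max(0.0, 1 - LATE_PENALTY_PER_DAY * days_late.get(component, 0))
        total += pts
    return total  # out of 100, since the rubric weights sum to 100

print(score(
    {"topic_and_thesis": 9, "preliminary_bibliography": 13, "outline": 8,
     "first_draft": 17, "final_paper": 30, "mechanics": 9},
    days_late={"first_draft": 2},
))
```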
References

1. Association of College and Research Libraries (ACRL). (2006). Information literacy competency standards for higher education. American Library Association. Retrieved September 27, 2006 from http://www.ala.org/ala/acrl/acrlstandards/standardsguidelines.htm
2. Hoyt, D. P., & Perera, S. (2000). Teaching approach, instructional objectives, and learning: IDEA research report #1. Manhattan, KS: IDEA Center, Kansas State University.
3. Canja, E. T. (2002). Lifelong learning: Challenges and opportunities. CAEL Forum and News, 26-29.
4. Klemp, G. O. (1977). Three factors of success. In D. W. Vermilye (Ed.), Relating work and education: Current issues in higher education 1977. San Francisco: Jossey-Bass.
5. Rhem, J. (1995). Close-up: Going deep. The National Teaching and Learning Forum, 5(1), 4.
6. See the Problem Based Learning website at http://www.samford.edu/pbl/ for many resources and references. Retrieved September 27, 2006.
7. Feldman, K. A. (1989). The association between student ratings of specific instructional dimensions and student achievement: Refining and extending the synthesis of data from multisection validity studies. Research in Higher Education, 30, 583-645.
8. Theall, M. (1999). What have we learned? A synthesis and some guidelines for effective motivation in higher education. In M. Theall (Ed.), "Motivation from within: Encouraging faculty and students to excel." New Directions for Teaching and Learning, 78. San Francisco: Jossey-Bass.
9. Franklin, J. L., & Theall, M. (1995). The relationship of disciplinary differences and the value of class preparation time to student ratings of instruction. In N. Hativa & M. Marincovich (Eds.), "Disciplinary differences in teaching and learning: Implications for practice." New Directions for Teaching and Learning, 64. San Francisco: Jossey-Bass.
10. Felder, R. M., & Brent, R. (2005). Understanding student differences. Journal of Engineering Education, 94(1), 57-72. Retrieved September 27, 2006 from http://www.ncsu.edu/effective_teaching/Papers/Understanding_Differences.pdf
11. Millis, B. J. (2006). Helping faculty learn to teach better and "smarter" through sequenced activities. In S. Chadwick-Blossy & D. R. Robertson (Eds.), To Improve the Academy, Vol. 24 (pp. 216-230). Bolton, MA: POD Network and Anker Publications.
12. Designing assignments. University of Washington Libraries.
13. Effective assignments using library and Internet resources. (2004). Teaching Library, University of California, Berkeley. Retrieved September 27, 2006 from http://www.lib.berkeley.edu/TeachingLib/assignments.html
14. Fister, B., & Fuhr, S. (2001). Suggestions for assignments. Enhancing developmental research skills in the undergraduate curriculum. Folke Bernadotte Memorial Library, Gustavus Adolphus College. Retrieved September 27, 2006 from http://www.gustavus.edu/oncampus/academics/library/IMLS/assignmentsuggestions.html
15. Recommendations for creating effective library assignments. (2005). Mitchell Memorial Library, Library Instructional Services, Mississippi State University. Retrieved September 27, 2006 from http://library.msstate.edu/content/templates/?a=323&z=74
16. Creative assignments using information competency and writing. (2006). Ohio University, Athens, OH. Retrieved September 27, 2006 from http://www.library.ohiou.edu/inst/creative.html
17. Millis, B., & Cottell, P. (1998). Cooperative learning for higher education faculty. Greenwood Press: American Council on Education, Oryx Press.
18. Kelley, K., & McDonald, R. (2005). Section 4: Designing assignments that contain writing and research. In Information literacy and writing assessment project: Tutorial for developing and evaluating assignments. Information and Library Services, University of Maryland University College.
19. Examples of good assessments. (2006). Delta College Library, Delta College. Retrieved September 27, 2006 from http://www.delta.edu/library/assessments.html

Related IDEA Papers:

  • IDEA Paper No. 38: Enhancing Learning – and More! – Through Cooperative Learning, Millis
  • IDEA Paper No. 41: Student Goal Orientation, Motivation, and Learning, Svinicki


How to Give an Effective Performance Evaluation

Learn effective techniques for conducting performance evaluations, from 360-degree feedback to goal-setting. Overcome biases and improve your team’s growth.


Performance evaluations can be tough to give. But they provide a powerful opportunity to boost productivity, improve teamwork, and guide employees toward personal and professional growth.

You’ve landed on the right article if you want to turn this often-dreaded event into a meaningful exercise. Stick around as we unpack proven strategies, pitfalls to avoid, and methods that will help you maximize the benefits of your next performance evaluation.

What is a Performance Evaluation?

A performance evaluation is a comprehensive, structured process, often led by a manager or human resources, to assess an employee’s work, behavior, and outcomes based on specific criteria.

Typically conducted annually, this is the “big picture” look at an employee’s contributions. However, some progressive companies are moving toward semi-annual or quarterly evaluations to keep the feedback loop tight.

It's essentially a longer conversation, often involving multiple people, that assesses how an employee is doing in quantifiable terms. A typical performance evaluation uses this format:

  • Review of goals
  • Review of challenges
  • Plans for future goals


How to Prepare for a Performance Evaluation as a Manager

You only have so much time in each performance evaluation, so you can't cover everything. Let's talk about how you can handpick the meeting agenda items that offer a balanced and meaningful picture of an employee's contributions.

Start with measurable metrics

Start with the tasks that can be quantified. Whether it’s the number of projects completed on time, customer satisfaction scores, or leads converted, these hard numbers offer an objective basis for your assessment.

You should still dip into the “performance review” space of talking about how things feel and how the employee contributes non-quantifiably (how they help with team morale or show leadership potential, for example). However, the numbers will help clarify if the employee’s output meets their baseline performance standards.

Action Step: Clarify beforehand all of the measurable metrics related to the employee’s role that you want to discuss. Get clear on what the threshold of “success” for each metric is.
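One lightweight way to do this is to write the metrics and their success thresholds down in a structured form before the meeting. The sketch below does this in Python; the metric names, numbers, and thresholds are entirely made up for illustration.

```python
# Hypothetical metrics for one employee, each with an agreed success threshold.
metrics = {
    "projects_completed_on_time": {"actual": 11, "threshold": 10},
    "customer_satisfaction": {"actual": 4.2, "threshold": 4.5},
    "leads_converted": {"actual": 37, "threshold": 30},
}

# Flag each metric as meeting or falling below its baseline before the meeting.
for name, m in metrics.items():
    status = "meets" if m["actual"] >= m["threshold"] else "below"
    print(f"{name}: {m['actual']} vs target {m['threshold']} -> {status} baseline")
```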

Evaluate skills, not just numbers

Don’t lose sight of the skills and attributes that make an employee great. Pro-activity, resourcefulness, and communication are also vital to team success, so find ways to evaluate these softer skills.

Action Step: One way to gauge these harder-to-measure skills is to have your employees list several qualities they aspire to display at work. Some examples might be: 

  • Dependability
  • Taking initiative
  • Team morale booster

Then, at your performance evaluation, have them self-assess how they've done on these qualities. You can also give feedback on how you feel they've exhibited these qualities, and you can even elicit peer feedback before the meeting.

Make sure to update their aspirational qualities every quarter or year so that they still feel inspired by their list.

Sync with company goals

Another place to focus your evaluation is where their role meets company goals. 

If this quarter focuses on customer retention, then customer engagement and satisfaction tasks should carry more weight in your evaluation.

Action Step: Consider how the current OKRs align with this employee’s role and duties. How much of their role is geared toward the OKRs, and how are they doing in those areas?

Remember to address both periodic tasks and milestone accomplishments

Regular, recurring tasks show how reliable and consistent an employee is. 

But don’t ignore those big milestones—the successful product launches or completed campaigns. These demonstrate an employee’s aptitude for managing larger responsibilities.

Action Step: Address how the employee is doing with their day-to-day and bigger projects.

How to Prepare for a Performance Evaluation as an Employee

Going into a performance review as an employee can feel scary. But if you prepare beforehand, you can make the conversation smoother and come off as more prepared.

Get your metrics ready

Before stepping into your performance evaluation, arm yourself with data. 

These metrics are not just numbers; they’re the story of your hard work, dedication, and progress. They provide a clear, objective backdrop to your performance, making your achievements quantifiable and your contributions tangible. 

Action Step: Gather all relevant data and statistics that reflect your performance. This could include sales numbers, project completion rates, customer satisfaction scores, or other relevant metrics. Organize them concisely in a chart or a brief report to make them easy to discuss during your evaluation.

Think of what skills you're bringing to the table

Reflect on the unique blend of skills you bring to your role. Whether it’s your knack for problem-solving, exceptional communication skills, or ability to lead a team under pressure, acknowledging these skills helps you articulate your unique value in the performance evaluation.

Action Step: List your key skills and how they have positively impacted your work. Think of specific instances where your skills made a difference – a problem you solved, a project you led successfully, or a challenging situation you navigated easily. Be ready to share these examples during your evaluation.

Sit with the company goals

When you can align your efforts with the company goals, you come off as a team player, and it shows that you’re not just working in the company, but you’re working for the company’s vision. 

Action Step: Review the company’s goals and objectives for the quarter. Identify how your work directly contributes to these goals. Prepare to discuss specific examples of how your performance aligns with the company’s direction, and be ready to suggest ways you can further support these objectives in the future.

List out accomplishments (big and little)

Every accomplishment counts, whether it's a game-changing project or a daily task. Think of this as your professional highlight reel. This comprehensive view often provides a more accurate picture of your value to the team.

Action Step: Create a comprehensive list of accomplishments over the evaluation period. Include both major projects and smaller tasks or improvements. Be specific about how each accomplishment contributed to the team or company, and be prepared to discuss them during your evaluation.

Tips to Conduct an Excellent Performance Evaluation

Your employee’s relationship with you has a profound impact on both their job satisfaction and even their life satisfaction, as seen by the data below: 

[Figure: three bar graphs showing drivers of life satisfaction, job satisfaction, and satisfaction in interpersonal relationships at work; the takeaway is that an employee's relationship with their boss has a profound impact on both job and life satisfaction.]

How you show up as a manager impacts your employees' well-being, so it's a responsibility worth taking seriously! With this in mind, you can look at the performance evaluation as an opportunity to accurately assess how your team is doing and encourage them to do their best work.

Here are a few tips to keep in mind to make sure your evaluations are clarifying, valuable, and encouraging.

Consider your employee’s career ambitions

If you work as a manager or in HR, part of your role is to make sure all the gears are turning smoothly. But you also have a leadership opportunity to empower your team into the best versions of themselves.

If you open a dialogue about what they want for their careers and how you can support them, you’ll inspire their work because they’ll see how today’s grind sets them up for tomorrow’s glory. Plus, you’re creating a climate where ambition is recognized and nurtured.

This is also a good time to discuss possible bonuses or any relevant promotion paths and what it will take to get to the next level.

Action Step:

During your evaluation, make sure to ask questions like:

  • What’s your dream role here in the next couple of years?
  • What skills do you want to level up?
  • If you had a magic wand you could wave at your career, what would it look like?

You’re not just clocking in as a manager; you’re showing up as a mentor.

Help them set goals

By bringing goal-setting into the picture, you map a successful road ahead for your employees. This can empower your team members and keep them on track.

Plus, when measurable goals are in place, you are setting up an honest, straightforward way to gauge how things are going later.

When helping set goals, consider making them:

  • Specific,
  • Measurable,
  • Achievable,
  • Relevant, and
  • Time-bound (SMART).

Allocate a segment of your review meeting to collaborate on SMART goals for the upcoming quarter or year. Pose questions like:

  • Which projects get you excited?
  • What skills are you looking to master?

Once you agree, scribble them down and lock in some check-in dates. These periodic reviews keep the energy high and make room for course corrections, helping you switch from just evaluating to genuinely empowering your team.


Be transparent about evaluation criteria

Share how the evaluation will be conducted and the criteria used from the get-go. It can be beneficial to share this information before the meeting so employees can brace themselves.

Transparency minimizes anxiety and leaves less room for surprises, making the evaluation process more straightforward for everyone involved.

Make the conversation a two-way street

A performance evaluation shouldn’t just be a manager monologuing at an employee; it should be a back-and-forth. 

This can offer you additional insights and allow employees to voice their opinions, concerns, or aspirations.

Action Step: Before the evaluation meeting, let the employee know that you’d like to hear their thoughts on their performance and any goals or obstacles they see. Create a designated time in the evaluation agenda for this open dialogue.

Zoom in on core duties

It’s always good to circle back to the employee’s job description. These are the bread-and-butter tasks, the ones they were hired to do. 

Action Step: Before the performance evaluation meeting, print out a list of all the employee’s official core duties and be prepared to go through each point with them.

Highlight the positive

A group of researchers (https://trace.tennessee.edu/utk_gradthes/5046/) compared groups of athletes with unconditionally positive coaches to a group of athletes with coaches who were critical. The study found that athletes with positive coaches reported more confidence and enjoyment in their sport. On the other hand, critical attitudes from coaches led to lower confidence and even burnout in athletes.

The same is true of work. The more encouraging and empowering you are to your employees, the more they'll feel energized, confident, and excited about work. 37% of employees said receiving more personal recognition would inspire them to do better work more often (https://www.greatplacetowork.com/resources/blog/creating-a-culture-of-recognition).

When giving a performance evaluation, it’s easy to fall into the trap of only focusing on what needs improvement. But, acknowledging what an employee is doing right is crucial for morale and motivation.

Action Step: Start your evaluation with a “wins” section. Celebrate the specific things the employee has done exceptionally well before diving into areas for improvement.

Seek extra evaluators if needed

If the employee’s role involves specialized skills, consider including a subject-matter expert as an evaluator. This person can offer nuanced insights that a generalist manager might overlook.

Also, consider involving other team members, departments, or clients with a stake in the employee’s performance when relevant. Their input can be invaluable, and it ensures that multiple perspectives validate your expectations. 

Keep a record

Documenting the process and the performance evaluation results is essential for future evaluations and tracking progress over time.

This way, you can look at progress across quarters and years.

Action Step: After the evaluation, summarize the key points discussed, the goals set, and the action items identified. Share this document with the employee and file it appropriately for future reference.

Try 360-Degree Feedback

In the 360-degree feedback approach, evaluations come from all directions, not just top-down, from the manager. 

Team members, subordinates, and sometimes even clients weigh in. 

And, perhaps most importantly, the employee assesses themself. Providing clear questions helps them take an honest look at their output and keeps the self-evaluation grounded rather than inflated.

If you take a 360 approach, you can set up meetings with the relevant people a few weeks before the evaluation. Or you can send out anonymous feedback forms to help others quantify the impact of the employee.

The one drawback to remember, especially in a competitive workplace, is that peer ratings may skew negative in a performance evaluation (as opposed to a development meeting).

Action Step: Set up an anonymous performance evaluation form to send to your employees’ colleagues, subordinates, and clients to gather numerical ratings and qualitative reflections on their performance.
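Once the forms come back, the aggregation itself is simple. The sketch below averages anonymous 1-10 ratings per quality, overall and by rater group; the qualities, groups, and scores are hypothetical.

```python
from statistics import mean

# Hypothetical anonymous 360-degree ratings on a 1-10 scale, by rater group.
ratings = {
    "communication": {"peers": [7, 8, 6], "reports": [9, 8], "clients": [8]},
    "dependability": {"peers": [9, 9, 8], "reports": [7, 8], "clients": [9]},
}

for quality, groups in ratings.items():
    # Overall mean across all raters, plus a per-group breakdown.
    overall = mean(score for group in groups.values() for score in group)
    per_group = {g: round(mean(s), 1) for g, s in groups.items()}
    print(f"{quality}: overall {overall:.1f}/10, by group {per_group}")
```

Keeping the per-group breakdown matters: a gap between, say, peer and client ratings is often more informative than the overall average.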

The Most Common Challenges of Performance Evaluations and How to Overcome Them

Navigating the intricacies of performance evaluations can take time and effort. However, once you understand the most common challenges, you can quickly overcome them, leading to the best possible evaluation meetings.

Navigating your biases

We’re all human, which means we all have biases. When we aren’t aware of our biases, they can cloud the best of judgments. 

To overcome your biases, the first thing you can do is become familiar with them.

Here are some common biases to watch out for:

  • Confirmation bias (https://www.britannica.com/science/confirmation-bias) happens when a manager only notices the traits or actions confirming their opinions about an employee. So, if you go into the evaluation already thinking an employee is lazy, you'll tend to seek out and interpret information that reinforces your view.
  • Recency bias (https://www.oxfordreference.com/display/10.1093/oi/authority.20110803100407676) focuses more on the employee's most recent actions rather than considering their performance over an extended period. This can lead to evaluations that fail to reflect the overall scope of an employee's contributions or improvements.
  • The halo effect (https://www.britannica.com/science/halo-effect) is when a manager is so impressed by one aspect of an employee that they let it disproportionately influence the overall evaluation. If you have a charismatic employee whose jokes you love, you might unwittingly view them as more organized, efficient, and effective than they are.
  • On the other hand, the horn effect is when a manager fixates on a single negative trait or action and lets it cloud their overall judgment of an employee. For example, an employee once cut in front of you in the bathroom line, which left a sour taste in your mouth. This feeling could easily cloud your view of their strengths and contributions if you're not careful.
  • Similarity or affinity bias (https://www.masterclass.com/articles/affinity-bias) happens when a manager gives a higher evaluation to an employee because they see aspects of themselves in the person. This could lead you to overlook areas where the employee needs to improve because you share the same alma mater, and both used to paint your face purple and gold every Saturday.
To counteract these biases:

  • Before your evaluation meeting, study the list above and reflect on how each bias might be at play.
  • Bring in feedback from others or other evaluators to balance out your view.
  • Stick to quantifiable metrics as much as possible to minimize subjective judgments.

Avoiding the temptation to inflate ratings

It’s tempting to slip into the “everyone gets an A!” approach. While it feels good to be everyone’s favorite boss in the short term, it doesn’t help identify areas for real improvement, making it a disservice to everyone in the long run.

To resist the urge to give everyone high marks, create a standardized scoring system that objectively assesses each key performance indicator. 

Solution: While feelings can be part of your review, make sure there is a quantifiable scoring system. For soft skills, you could consider questions like, “On a scale of 1-10, how dependable is this employee?”

Addressing underperforming employees

Managing underperforming employees is one of the tougher parts of your job. It can be hard to say something that might bruise someone’s ego or shake their confidence. And when that happens, you have to be prepared for some resistance. 

But it's important for you to be as honest as possible and to avoid sugarcoating. And there are ways to state your feedback that are more likely to uplift rather than deflate.

Solutions : 

  • Frame your feedback in an empowering way. This study (https://www.apa.org/pubs/journals/releases/xge-a0033906.pdf) suggests that the most effective constructive feedback will be something like: "I see what you're capable of and want to help you get there" or "I'm giving you this feedback because I have high standards for you that I know you can reach."
  • Create a follow-up plan. Don’t just point out the issues—chart a course for improvement with deadlines and regular check-ins.
  • Consider role alignment. Sometimes, underperformance stems from a misalignment of skills and tasks. It may be worth exploring if the employee is better suited for a different role.

Different Evaluation Frequencies

For many companies, an employee performance evaluation is annual. But there are other approaches to ensure employees aren’t shooting in the dark for most of the year.

Consider more regular employee evaluations. 

Employees in companies with a continuous performance feedback process tend to outperform their competition at a 24% higher rate (source: https://www.betterworks.com/magazine/performance-management-survey/).

On top of that, HR teams at such companies are nearly 50% more satisfied with their performance management process, and they are 24% more likely to recommend their evaluation process to others than those at annual-review companies (same source).

Here are a few options to consider.

Continual, on-the-fly feedback

This approach sidesteps the formal structure of scheduled employee performance reviews, opting for ongoing, informal feedback. This is often seen in smaller operations or startups with more dynamic environments.

The plus side of this approach is that it allows for real-time adjustments and improvements. Plus, it removes the anxiety and formality associated with traditional performance reviews.

On the flip side, the absence of formal reviews could lead to a lack of clarity around performance expectations. Important feedback could be missed or forgotten without scheduled checkpoints.

Employee pulse evaluations

Think of pulse reviews as a Goldilocks solution, somewhere between the formality of annual reviews and the informality of continual feedback. These are quicker, less intensive reviews that occur more frequently, on a monthly or quarterly schedule.

This frequency allows for timely course corrections and is more manageable than the comprehensive annual review. 

But beware, because monthly reviews could become routine and lose impact if they aren’t executed thoughtfully.

How Is a Performance Evaluation Different from a Performance Review or a Performance Improvement Plan?

The world of performance management can sometimes feel like a bowl of alphabet soup. Different terms often get used interchangeably, but they have unique flavors.

A performance review, often considered a subset of a performance evaluation, is typically more about dialogue and less formal than a full-blown evaluation. An appraisal is more of a "let's talk about how it's been going," and an assessment is "let's look at your output to see if it's matching expectations."

Performance reviews often coincide with formal evaluations but can also happen more frequently. 

A performance improvement plan (PIP) is a formal document that outlines specific areas where employees need to improve their performance. They are usually only used when an employee is on the cusp of termination. PIPs set out clear objectives and timelines for improvement.


Frequently Asked Questions About Performance Evaluations

What is a performance review, and why is it important?

A performance review assesses an employee's work contributions, skills, and areas for growth. It's vital for aligning individual performance with organizational goals, acknowledging good work, and setting the stage for future development.

How often should performance evaluations be conducted?

The frequency of performance evaluations can vary, but a good rule of thumb is at least annually, though some companies opt for more frequent, even quarterly, check-ins. Regular evaluations keep everyone on the same page and allow timely course corrections.

What are the key elements of a successful performance evaluation?

The key elements of a successful performance evaluation include clear criteria, objective measurements of the quality of work and deliverables, and open, constructive dialogue. Think of it as a three-legged stool, each element supporting a balanced and practical evaluation.

How do performance evaluations support employee development?

Performance evaluations serve as a roadmap for employee development, pinpointing strengths to be leveraged and areas needing improvement. They are pivotal in planning targeted training and career advancement opportunities.

What challenges can arise during performance evaluations?

Challenges in the performance evaluation process can include biases, rating inflation, and managing underperformance. Navigating these effectively is essential for a fair and beneficial evaluation.

How can managers keep evaluations fair and objective?

Managers and supervisors should rely on measurable metrics and gather diverse feedback to ensure fairness and objectivity in evaluations. It's like adding layers of paint to a portrait; the more perspectives, the more nuanced and accurate the final image.

What are best practices for giving constructive feedback?

Best practices for providing constructive feedback during evaluations include being specific, offering actionable recommendations, and balancing positive with constructive points. Think of it as a sandwich: praise on the outside, constructive comments in the middle, all aimed at fueling growth. This can motivate employees to improve on their weaknesses.

Takeaways on Conducting a Performance Evaluation

Conducting performance evaluations can feel daunting. Where to start, and what to cover? Just remember to bring these topics into the conversation, and you’ll be in good shape:

  • Core duties. What is on their job description?
  • Measurable metrics. How are they doing on quantifiable metrics?
  • Skills, not just numbers. Address their soft skills as well, even if you also rate them numerically.
  • Company goals. Put focus on the parts of their performance that relate to OKRs.
  • Periodic tasks and milestones. Remember to address both their daily responsibilities as well as their big projects.

Best of luck with these performance evaluations! 

Also, if you’d like to boost your overall leadership skills, you might enjoy this article.


Project Evaluation: A Complete Guide


Evaluating a project is key to understanding its performance and ways to improve. By checking how a project measures up against its goals, organizations can spot areas that need a boost, use resources more effectively, and make smart choices for future projects.

A pre-project evaluation is a critical step before initiating a project. This evaluation assesses the project plan, feasibility, and potential risks to ensure that all stakeholders have a shared understanding of objectives and goals, which guides effective project execution.

In this guide, we’ll explain the core parts of project evaluation, share some handy steps, and show how apps like Nifty can make the process more efficient.

What Is Project Evaluation?

Project evaluation is a systematic assessment of a project to determine its merit. It involves collecting and analyzing data to measure the project’s performance against its goals and objectives.

A proper project evaluation aims to determine three things:

  • Whether the project achieved its objectives
  • How efficiently it used resources
  • What impact it had

Collecting and analyzing project data throughout different stages of project management is crucial for evaluating project effectiveness and monitoring progress.

Project evaluation is more than just a final report card. It is an ongoing process that provides insights for improvement. By understanding what worked and what did not, organizations can plan future projects, distribute resources for maximum impact, and show accountability.

Project Evaluation Criteria

Project evaluation criteria are the benchmarks used to measure project success. These standards provide a framework for judging performance and overall impact.

The main evaluation criteria include:

  • Relevance:  Does the project address a real need or problem?
  • Efficiency:  Did the project make optimal use of resources (for example, time, budget, personnel)?
  • Effectiveness:  Did the project achieve its intended outcomes and outputs?
  • Impact:  What was the overall effect of the project?
  • Sustainability:  Can the benefits of the project be maintained over time?
  • Cost-effectiveness:  Did the project achieve its goals at a reasonable cost?
  • Timeliness:  Was the project completed within the planned timeframe?

These criteria can be adapted to fit the specific context of a project. For example, a technology project might prioritize efficiency and effectiveness, while a social development project might focus on impact.
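
As a simple illustration, the criteria above can be turned into a weighted scorecard. This is a minimal Python sketch; the weights and scores are invented for the example, not recommended values.

```python
# Score a project against weighted evaluation criteria.
criteria = {
    # criterion: (weight, score out of 10) -- illustrative numbers only
    "relevance":          (0.20, 8),
    "efficiency":         (0.15, 6),
    "effectiveness":      (0.20, 7),
    "impact":             (0.20, 9),
    "sustainability":     (0.10, 5),
    "cost_effectiveness": (0.10, 7),
    "timeliness":         (0.05, 8),
}

# Weights should sum to 1 so the result stays on the 0-10 scale
assert abs(sum(w for w, _ in criteria.values()) - 1.0) < 1e-9

overall = sum(weight * score for weight, score in criteria.values())
print(f"Overall weighted score: {overall:.1f} / 10")  # 7.3 / 10
```

A technology project might shift weight toward efficiency and effectiveness, while a social development project might weight impact more heavily, as noted above.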

Project Evaluation Methods

The process of project evaluation involves examining the project across various parameters to understand its effectiveness, efficiency, and overall value. The choice of evaluation method depends on the project’s goals, the resources available, and the specific insights required. Ongoing project evaluation is crucial for monitoring and assessing project performance throughout its lifecycle, ensuring that project goals, budget constraints, and scheduling milestones are met.

Here are some of the main evaluation methods used by companies:

Input Evaluation

Input evaluation focuses on the resources that were invested in the project. This includes the budget, personnel, and materials such as equipment and supplies.

Strengths:

  • Provides a clear picture of the resources allocated and utilized.
  • Helps in assessing whether the project was properly resourced.

Weaknesses:

  • Does not measure the effectiveness or outcomes of the resources used.
  • May not indicate if the resources were used efficiently.

When to use:

  • When assessing whether the project had sufficient funding or personnel.
  • To evaluate if the allocated resources were aligned with the project’s goals.

Process Evaluation

Process evaluation examines how the project was implemented. Parameters include management practices, teamwork, coordination among teams, and feedback from the project team.

Strengths:

  • Provides insights into the efficiency of execution.
  • Identifies strengths and weaknesses in management and operational processes.

Weaknesses:

  • May require detailed data collection and analysis.
  • Focuses on implementation rather than results.

When to use:

  • To understand the effectiveness of project management strategies.
  • To improve project processes or team dynamics.

Outcome Evaluation

Outcome evaluation measures the immediate results of the project, including tangible outputs such as products and services, and is a critical aspect of assessing the project’s performance.

Strengths:

  • Provides clear evidence of what was achieved in terms of output.
  • Helps to determine whether the project met its initial objectives.

Weaknesses:

  • May not capture the long-term effects or broader impact of the project.
  • Outcomes might be influenced by external factors unrelated to the project.

When to use:

  • When evaluating the effectiveness of a new service or product.
  • To determine if the project has achieved its short-term goals.

Impact and Project Performance Evaluation

Impact evaluation assesses the long-term effects of the project on the target audience. It includes understanding the broader changes that resulted from the project’s activities.

Strengths:

  • Gives insight into the lasting effects and benefits of the project.
  • Helps to understand the project’s contribution to long-term goals and objectives.

Weaknesses:

  • Requires long-term data collection and analysis.
  • Can be complex due to the need to account for various external factors.

When to use:

  • For projects meant to create social or economic change.
  • When determining the overall effectiveness and sustainability of the project.

Cost-Benefit Analysis

Cost-benefit analysis compares project costs to benefits to determine its overall value. It involves quantifying both the costs incurred and the benefits gained.

Strengths:

  • Provides a clear financial picture of the project’s value.
  • Helps in making decisions about project continuation or scaling.

Weaknesses:

  • May not capture non-monetary benefits or costs.
  • Requires accurate data on costs and benefits.

When to use:

  • When evaluating whether a project is economically viable.
  • To make decisions about resource allocation and investment.
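
In code, the core of a cost-benefit analysis reduces to two numbers: net benefit and the benefit-cost ratio. This minimal Python sketch uses invented figures purely for illustration.

```python
def cost_benefit(costs: list[float], benefits: list[float]) -> tuple[float, float]:
    """Return (net benefit, benefit-cost ratio) for quantified line items."""
    total_cost, total_benefit = sum(costs), sum(benefits)
    return total_benefit - total_cost, total_benefit / total_cost

# Hypothetical figures: development + training costs vs. savings + new revenue
net, ratio = cost_benefit(costs=[120_000, 30_000], benefits=[90_000, 110_000])
print(f"Net benefit: ${net:,.0f}, benefit-cost ratio: {ratio:.2f}")
# Net benefit: $50,000, benefit-cost ratio: 1.33
```

A ratio above 1.0 suggests the quantified benefits outweigh the quantified costs, though, as noted above, non-monetary effects fall outside this calculation.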

Logical Framework Analysis 

Logical Framework Analysis is a structured approach to planning, managing, and evaluating projects. It involves defining the project’s objectives, outputs, and activities and measuring them against the proper criteria.

Strengths:

  • Provides a systematic approach to project planning and evaluation.
  • Helps to align activities with objectives and measure progress effectively.

Weaknesses:

  • Can be time-consuming to develop and maintain.
  • May require a detailed understanding of the project and managerial assumptions.

When to use:

  • When there is a need for a comprehensive approach to project management.
  • To ensure that all project activities are aligned with strategic goals and objectives.
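
A logical framework is essentially structured data: objectives linked to outputs, indicators, and activities. Here is a minimal Python sketch; the field names and example entries are illustrative assumptions, not a standard logframe schema.

```python
# A toy logframe: every output ties back to the objective and carries
# measurable indicators and the activities that produce it.
logframe = {
    "objective": "Reduce average customer onboarding time by 30%",
    "outputs": [
        {
            "description": "Self-service onboarding portal launched",
            "indicators": ["portal uptime >= 99%", "80% of signups self-served"],
            "activities": [
                "Design onboarding flow",
                "Build and test portal",
                "Train support staff",
            ],
        },
    ],
    "assumptions": ["Customers are willing to self-serve"],
}

# Walking the structure makes misaligned activities easy to spot
for output in logframe["outputs"]:
    print(f"{output['description']} -> {logframe['objective']}")
    for activity in output["activities"]:
        print(f"  activity: {activity}")
```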

Combining Evaluation Methods

Often, mixing different evaluation methods can give a fuller picture of a project’s performance. For instance, combining process evaluations with outcome evaluations helps to see not just how the project was carried out but also what it achieved.

Similarly, combining impact evaluations with cost-benefit analysis can show the project’s long-term value from multiple angles.

By using a mix of approaches, project managers and stakeholders can get a clearer view of a project’s effectiveness. They can spotlight areas that need improvement and make better decisions for future projects.

How to Conduct a Project Evaluation

The project evaluation process involves six main steps. Whether you’re defining project goals, planning data collection, or reporting results, Nifty has several features that can streamline and enhance evaluation efforts. Post-project evaluation, in particular, is crucial for examining whether objectives were met and for gathering lessons learned to inform future projects.

1. Identify Project Goals and Objectives

The first move in project evaluation is to define the project objectives and desired outcomes. These goals and objectives should be SMART: specific, measurable, achievable, relevant, and time-bound.

For instance, if a marketing team is launching a new product, a goal might be to increase product awareness by 20% within the first quarter.

Many apps have goal-setting templates that can help you establish and track these objectives effectively. This lays the foundation for a successful evaluation.
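
To make the SMART structure concrete, here is a minimal Python sketch of a goal record with a progress check. The class, fields, and figures are illustrative assumptions, not part of any particular tool.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SmartGoal:
    description: str   # Specific: what exactly should happen
    metric: str        # Measurable: how progress is counted
    baseline: float
    target: float      # Achievable: a level agreed with the team
    relevant_to: str   # Relevant: the objective it supports
    deadline: date     # Time-bound

    def progress(self, current: float) -> float:
        """Fraction of the way from baseline to target (can exceed 1.0)."""
        return (current - self.baseline) / (self.target - self.baseline)

goal = SmartGoal(
    description="Increase product awareness in the first quarter",
    metric="aided brand awareness (%)",
    baseline=30.0,
    target=36.0,  # a 20% relative increase over the baseline
    relevant_to="New product launch",
    deadline=date(2025, 3, 31),
)
print(f"{goal.progress(33.0):.0%} of the way to target")  # 50%
```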

A goal-setting template from Nifty

2. Define the Scope of the Evaluation

After project goals are established, it is time to pin down the specific aspects of the project to be evaluated. This step involves defining the evaluation boundaries, such as the phases, target audience, and KPIs.

Nifty’s collaborative features make it a great project scope management tool, helping you outline the boundaries and key deliverables with stakeholders.

3. Develop a Data Collection Plan

A structured data collection plan is necessary for gathering information to assess project performance. With a centralized communication system, managers can streamline decision-making across teams and help people share ideas, gather feedback, and turn discussions into action.

This step includes documenting data sources, creating survey templates, and storing collected data for analysis.
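
A data collection plan can be as simple as a table of questions, sources, and owners. The following Python sketch shows one possible shape; every key and value is an invented example.

```python
# Each entry links an evaluation question to how, where, and by whom
# the supporting data will be gathered.
data_collection_plan = [
    {
        "question": "Did product awareness increase by 20%?",
        "source": "quarterly brand survey",
        "method": "online questionnaire",
        "frequency": "monthly",
        "owner": "marketing analyst",
    },
    {
        "question": "Was the campaign delivered on budget?",
        "source": "finance reports",
        "method": "document review",
        "frequency": "monthly",
        "owner": "project manager",
    },
]

for item in data_collection_plan:
    print(f"{item['question']} <- {item['source']} ({item['frequency']})")
```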

An example of a centralized dashboard including a chat function from Nifty

4. Analyze Data

Next, it is time to analyze the data for insights and trends. That means organizing and interpreting the data using statistical methods. This step assesses project performance against the established criteria.

The right online reporting tools empower organizations to collect, organize, analyze, and transform complex data into easy-to-understand graphs and charts.
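
Even basic descriptive statistics go a long way here. This minimal Python sketch compares hypothetical monthly survey figures against the target from the earlier SMART goal example; the numbers are invented.

```python
from statistics import mean, stdev

monthly_awareness = [30.5, 31.8, 33.9]  # % aided awareness, one reading per month
target = 36.0

print(f"mean so far: {mean(monthly_awareness):.1f}%")            # 32.1%
print(f"month-to-month spread: {stdev(monthly_awareness):.2f}")  # 1.72
print(f"gap to target: {target - monthly_awareness[-1]:.1f} points")  # 2.1 points
```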

5. Report Your Findings

The evaluation results must be reported to stakeholders to inform decisions and generate buy-in. An evaluation report needs to summarize findings, conclusions, and recommendations. Additionally, it is important to include updates on the project’s progress to keep stakeholders informed and engaged.

With Nifty’s Miro integration, you can create a Custom View of a project so the master plan can be easily referenced and followed.

A Nifty+Miro custom view

6. Discuss the Next Project Evaluation Steps

Based on the findings, an action plan for improvement can be developed. It should involve identifying areas for improvement, setting new goals, and fine-tuning strategies.

Nifty’s task and milestone dependencies are a feature many project managers use to create a roadmap for the future. It enables you to work out the optimal task order for the fastest route through the project.

Nifty's task dependencies window

Project Evaluation Templates for Word 

As we have seen above, the steps for proper project evaluation can be complex, with many intersecting elements. A great way to simplify matters is to use a predefined project evaluation template.

A project evaluation template is a tool to streamline the process by providing a structured format. The templates include sections for defining goals, tracking progress, analyzing data, and summarizing findings. 

These templates can easily be created as Word documents. There are various types of templates available, depending on the nature of the project and the type of evaluation.

By using such project evaluation templates, teams can maintain consistency, save time, and ensure a thorough assessment of projects.


The Benefits of Project Evaluation

Project evaluation offers a wealth of advantages for organizations. By systematically assessing project performance, organizations can:

  • Improve Decision-Making: Evaluation provides data-driven insights for future projects and resource allocation. By understanding what worked and why, organizations can make informed choices about project priorities and strategies.
  • Enhance Accountability: Proper evaluation shows a commitment to transparency and learning. By measuring project outcomes against stated goals, organizations can be held accountable for their performance and build trust with stakeholders.
  • Identify Best Practices: Successful projects can be replicated and scaled. By identifying the factors that contributed to project success, organizations can spread best practices across teams.
  • Increase Project Efficiency: Evaluation helps identify areas where resources can be optimized. Streamlining processes and eliminating inefficiencies can improve project outcomes in terms of productivity and costs.
  • Optimize Resource Allocation: Evaluation provides a framework for resource allocation. By demonstrating the impact of prior projects, organizations can generate funding and support for initiatives that deliver the greatest value.
  • Demonstrate Impact to Stakeholders: Evaluation helps in communicating the value of projects to stakeholders. By highlighting project outcomes, organizations can build support for future initiatives and strengthen partner relationships.

At the end of the day, project evaluation is a powerful tool for organizational learning and improvement. By means of thorough evaluation, organizations can enhance their ability to achieve goals and create impact.

Common Project Evaluation Mistakes to Avoid

Project evaluation is a valuable tool, but several common pitfalls can compromise the results.

Some frequent mistakes include:

  • Lack of Clear Objectives:  Without well-defined goals and criteria, it is hard to measure project success accurately.
  • Insufficient Data Collection:  Inadequate data can lead to biased or inconclusive findings.
  • Biased Analysis:  Personal opinions or preconceived notions can influence data interpretation, leading to inaccurate results.
  • Failure to Communicate Findings:  Effective communication of evaluation results is crucial for driving change and improvement.
  • Neglecting Follow-up Actions:  Evaluation is not just about generating reports. It is also about using the findings to improve future projects.
  • Relying Solely on Quantitative Data:  While quantitative data is essential, qualitative data can provide valuable insights into project processes and outcomes.
  • Ignoring Stakeholder Perspectives:  Involving stakeholders in the evaluation process ensures that their needs and expectations are considered.
  • Lack of Resources: Insufficient time, budget, or personnel can hinder the process, so treat evaluation as a helpful and essential activity and resource it accordingly.

By avoiding the above mistakes, organizations can conduct effective and impactful project evaluations.

Project Evaluation Best Practices

Following best practices in project evaluation helps ensure that the process is thorough, insightful, and actionable. Such practices can ensure that your evaluation provides a clear understanding of the project’s performance and areas for improvement.

The section on common mistakes above highlights four key aspects to focus on:

Clear Objectives: Defining specific, measurable, achievable, relevant, and time-bound (SMART) objectives is fundamental. They provide a foundation for assessing whether the project has met its goals and help in measuring success.

Engaging Stakeholders: Actively involving stakeholders throughout the evaluation process ensures that their perspectives and insights are considered. It helps in understanding different viewpoints and increases the acceptance of the findings.

Communicating Findings: Effectively sharing the results of the evaluation is essential for making the information actionable. It involves presenting findings in a clear, concise manner to enhance understanding and decision-making.

Ensuring Action and Follow-Up: An evaluation is only as valuable as the actions taken. Ensuring that there is a plan for addressing the recommendations and following up on the progress helps in translating insights into tangible improvements.

Beyond these, here are some more best practices:

Develop a Comprehensive Evaluation Plan: Create a detailed evaluation plan that outlines the methods, tools, and metrics to be used. This plan should include the following elements (a short code sketch follows the list):

  • Evaluation Questions: Specific questions that the evaluation aims to answer.
  • Data Collection Methods: Techniques for gathering information, such as surveys, interviews, or document reviews.
  • Timeline: Key milestones and deadlines for the evaluation process.
  • Responsibilities: Roles and responsibilities of team members involved in the evaluation.
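
Captured as plain data, such a plan can be versioned and reviewed alongside other project artifacts. This minimal Python sketch is one possible shape; every field and entry is an invented example.

```python
# An evaluation plan as a simple, reviewable data structure.
evaluation_plan = {
    "questions": [
        "Did the project meet its SMART objectives?",
        "Were resources used efficiently?",
    ],
    "data_collection_methods": ["stakeholder survey", "budget review", "team interviews"],
    "timeline": {
        "data collected by": "2025-06-15",
        "analysis complete": "2025-06-30",
        "report delivered": "2025-07-07",
    },
    "responsibilities": {
        "evaluation lead": "defines questions and methods",
        "analyst": "collects and analyzes data",
        "project manager": "coordinates stakeholder access",
    },
}

for milestone, due in evaluation_plan["timeline"].items():
    print(f"{milestone}: {due}")
```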

Use a Mix of Evaluation Methods: Combine different evaluation methods (input, process, outcome, impact, cost-benefit, and logical framework analysis) to gain a 360-degree view of the project. This mixed-methods approach helps to capture various dimensions of project performance and provides a nuanced analysis.

Ensure Data Quality: Focus on collecting accurate, reliable, and relevant data. Implement rigorous data collection and validation procedures to minimize errors and biases. Use appropriate tools and techniques to analyze the data.

Analyze and Interpret Findings Carefully: Analyze the data thoroughly and interpret the results in the context of the evaluation objectives. Identify patterns, trends, and insights that can inform decision-making. Be mindful of the limitations of the data and the evaluation methods used.

Ensure Ethical Standards: Respect the privacy and confidentiality of participants, obtain necessary permissions, and ensure that the evaluation does not cause harm.

By following these best practices, you can enhance the effectiveness of project evaluations, generate valuable insights, and drive continuous improvement in project execution.

Enhancing Project Evaluation with Nifty

Project evaluation is an essential step in organizational learning and improvement. By following a structured process and utilizing tools like Nifty, organizations can maximize the value of their projects and achieve sustainable results.

Nifty offers a range of features to support your project evaluation and project implementation process:

  • Goal setting:  Define clear and measurable objectives.
  • Project management:  Track project progress and milestones.
  • Document management:  Store and organize evaluation-related documents.
  • Collaboration:  Facilitate teamwork and knowledge sharing.
  • Reporting:  Create professional and informative evaluation reports.

To find out more about how your team can benefit from project evaluation, get in touch today. 

What is the difference between project monitoring and project evaluation?

Project monitoring and project evaluation serve separate purposes. Monitoring means the ongoing tracking of project activities, progress, and resource utilization against the project plan. It provides real-time information to identify potential issues and make necessary adjustments.

On the other hand, project evaluation is an in-depth assessment conducted at specific intervals or at the project’s conclusion. The focus is on determining the project’s overall effectiveness, impact, and efficiency. Monitoring drives corrective actions, while evaluation informs strategic decisions and future project planning.

How often should a project be evaluated?

The frequency of project evaluation depends on various factors. These include complexity, duration, and organizational goals.

One final evaluation might be sufficient for short-term, low-risk projects. However, for complex or high-impact projects, quarterly or bi-annual evaluations can provide more valuable insights. It is essential to strike a balance between the need for information and the resources required for evaluation. A well-planned evaluation schedule can ensure that project performance is consistently monitored and assessed.

Who should be involved in project evaluation?

A successful project evaluation involves a group of stakeholders. Core team members possess in-depth project knowledge and can provide firsthand insights. Management can offer a broader organizational perspective. Clients can share their experiences and satisfaction levels. External evaluators bring objectivity and expertise. By including different perspectives, organizations can gather comprehensive data and ensure a balanced evaluation.

How can I ensure that project evaluation leads to actionable improvements?

To maximize the impact of project evaluation, it is crucial to focus on actionable recommendations. This involves clearly communicating evaluation findings to relevant stakeholders, identifying specific areas for improvement, and developing concrete action plans.

Engaging stakeholders in the evaluation process can foster ownership and commitment to implementing changes. Establishing a system for tracking and monitoring the implementation of recommendations can help ensure that evaluation efforts lead to tangible results.


Embracing Gen AI at Work

by H. James Wilson and Paul R. Daugherty

The skills you need to succeed in the era of large language models

Today artificial intelligence can be harnessed by nearly anyone, using commands in everyday language instead of code. Soon it will transform more than 40% of all work activity, according to the authors’ research. In this new era of collaboration between humans and machines, the ability to leverage AI effectively will be critical to your professional success.

This article describes the three kinds of “fusion skills” you need to get the best results from gen AI. Intelligent interrogation involves instructing large language models to perform in ways that generate better outcomes—by, say, breaking processes down into steps or visualizing multiple potential paths to a solution. Judgment integration is about incorporating expert and ethical human discernment to make AI’s output more trustworthy, reliable, and accurate. It entails augmenting a model’s training sources with authoritative knowledge bases when necessary, keeping biases out of prompts, ensuring the privacy of any data used by the models, and scrutinizing suspect output. With reciprocal apprenticing, you tailor gen AI to your company’s specific business context by incorporating rich organizational data and know-how into the commands you give it. As you become better at doing that, you yourself learn how to train the AI to tackle more-sophisticated challenges.

The AI revolution is already here. Learning these three skills will prepare you to thrive in it.

Generative artificial intelligence is expected to radically transform all kinds of jobs over the next few years. No longer the exclusive purview of technologists, AI can now be put to work by nearly anyone, using commands in everyday language instead of code. According to our research, most business functions and more than 40% of all U.S. work activity can be augmented, automated, or reinvented with gen AI. The changes are expected to have the largest impact on the legal, banking, insurance, and capital-market sectors—followed by retail, travel, health, and energy.

  • H. James Wilson is the global managing director of technology research and thought leadership at Accenture Research. He is the coauthor, with Paul R. Daugherty, of Human + Machine: Reimagining Work in the Age of AI, New and Expanded Edition (HBR Press, 2024).
  • Paul R. Daugherty is Accenture’s chief technology and innovation officer. He is the coauthor, with H. James Wilson, of Human + Machine: Reimagining Work in the Age of AI, New and Expanded Edition (HBR Press, 2024).


This researcher wants to replace your brain, little by little

The US government just hired a researcher who thinks we can beat aging with fresh cloned bodies and brain updates.

by Antonio Regalado

A US agency pursuing moonshot health breakthroughs has hired a researcher advocating an extremely radical plan for defeating death.

His idea? Replace your body parts. All of them. Even your brain. 

Jean Hébert, a new hire with the US Advanced Research Projects Agency for Health (ARPA-H), is expected to lead a major new initiative around “functional brain tissue replacement,” the idea of adding youthful tissue to people’s brains.

President Joe Biden created ARPA-H in 2022, as an agency within the Department of Health and Human Services, to pursue what he called  “bold, urgent innovation” with transformative potential. 

The brain renewal concept could have applications such as treating stroke victims, who lose areas of brain function. But Hébert, a biologist at the Albert Einstein College of Medicine, has most often proposed total brain replacement, along with replacing other parts of our anatomy, as the only plausible means of avoiding death from old age.

As he described in his 2020 book, Replacing Aging, Hébert thinks that to live indefinitely people must find a way to substitute all their body parts with young ones, much like a high-mileage car is kept going with new struts and spark plugs.

The idea has a halo of plausibility since there are already liver transplants and titanium hips, artificial corneas and substitute heart valves. The trickiest part is your brain. That ages, too, shrinking dramatically in old age. But you don’t want to swap it out for another—because it is you.

And that’s where Hébert's research comes in. He’s been exploring ways to “progressively” replace a brain by adding bits of youthful tissue made in a lab. The process would have to be done slowly enough, in steps, that your brain could adapt, relocating memories and your self-identity.  

During a visit this spring to his lab at Albert Einstein, Hébert showed MIT Technology Review how he has been carrying out initial experiments with mice, removing small sections of their brains and injecting slurries of embryonic cells. It’s a step toward proving whether such youthful tissue can survive and take over important functions.

To be sure, the strategy is not widely accepted, even among researchers in the aging field. “On the surface it sounds completely insane, but I was surprised how good a case he could make for it,” says Matthew Scholz, CEO of aging research company Oisín Biotechnologies, who met with Hébert this year. 

Scholz is still skeptical though. “A new brain is not going to be a popular item,” he says. “The surgical element of it is going to be very severe, no matter how you slice it.”

Now, though, Hébert's ideas appear to have gotten a huge endorsement from the US government. Hébert told MIT Technology Review that he had proposed a $110 million project to ARPA-H to prove his ideas in monkeys and other animals, and that the government “didn’t blink” at the figure. 

ARPA-H confirmed this week that it had hired Hébert as a program manager. 

The agency, modeled on DARPA, the Department of Defense organization that developed stealth fighters, gives managers unprecedented leeway in awarding contracts to develop novel technologies. Among its first programs are efforts to develop at-home cancer tests and cure blindness with eye transplants.

It may be several months before details of the new project are announced, and it’s possible that ARPA-H will establish more conventional goals like treating stroke victims and Alzheimer’s patients, whose brains are damaged, rather than the more radical idea of extreme life extension. 

“If it can work, forget aging; it would be useful for all kinds of neurodegenerative disease,” says Justin Rebo, a longevity scientist and entrepreneur.

But defeating death is Hébert's stated aim. “I was a weird kid and when I found out that we all fall apart and die, I was like, ‘Why is everybody okay with this?’ And that has pretty much guided everything I do,” he says. “I just prefer life over this slow degradation into nonexistence that biology has planned for all of us.”

Hébert, now 58, also recalls when he began thinking that the human form might not be set in stone. It was upon seeing the 1973 movie Westworld, in which the gun-slinging villain, played by Yul Brynner, turns out to be an android. “That really stuck with me,” Hébert said.

Lately, Hébert has become something of a star figure among immortalists, a fringe community devoted to never dying. That’s because he’s an established scientist who is willing to propose extreme steps to avoid death. “A lot of people want radical life extension without a radical approach. People want to take a pill, and that’s not going to happen,” says Kai Micah Mills, who runs a company, Cryopets, developing ways to deep-freeze cats and dogs for future reanimation.

The reason pharmaceuticals won’t ever stop aging, Hébert says, is that time affects all of our organs and cells and even degrades substances such as elastin, one of the molecular glues that holds our bodies together. So even if, say, gene therapy could rejuvenate the DNA inside cells, a concept some companies are exploring, Hébert believes we’re still doomed as the scaffolding around them comes undone.

One organization promoting Hébert's ideas is the Longevity Biotech Fellowship (LBF), a self-described group of “hardcore” life extension enthusiasts, which this year published a technical roadmap for defeating aging altogether. In it, they used data from Hébert's ARPA-H proposal to argue in favor of extending life with gradual brain replacement for elderly subjects, as well as transplant of their heads onto the bodies of “non-sentient” human clones, raised to lack a functioning brain of their own, a procedure they referred to as “body transplant.”

Such a startling feat would involve several technologies that don’t yet exist, including a means to attach a transplanted head to a spinal cord. Even so, the group rates “replacement” as the most likely way to conquer death, claiming it would take only 10 years and $3.6 billion to demonstrate.

“It doesn’t require you to understand aging,” says Mark Hamalainen, co-founder of the research and education group. “That is why Jean’s work is interesting.”

Hébert's connections to such far-out concepts (he serves as a mentor in LBF’s training sessions) could make him an edgy choice for ARPA-H, a young agency whose budget is $1.5 billion a year.

For instance, Hébert recently said on a podcast with Hamalainen that human fetuses might be used as a potential source of life-extending parts for elderly people. That would be ethical to do, Hébert said during the program, if the fetus is young enough that there “are no neurons, no sentience, and no person.” And according to a meeting agenda viewed by MIT Technology Review, Hébert was also a featured speaker at an online pitch session held last year on full “body replacement,” which included biohackers and an expert in primate cloning.

Hébert declined to describe the session, which he said was not recorded “out of respect for those who preferred discretion.” But he’s in favor of growing non-sentient human bodies. “I am in conversation with all these groups because, you know, not only is my brain slowly deteriorating, but so is the rest of my body,” says Hébert. “I'm going to need other body parts as well.”

The focus of Hébert's own scientific work is the neocortex, the outer part of the brain that looks like a pile of extra-thick noodles and which houses most of our senses, reasoning, and memory. The neocortex is “arguably the most important part of who we are as individuals,” says Hébert, as well as “maybe the most complex structure in the world.”

There are two reasons he believes the neocortex could be replaced, albeit only slowly. The first is evidence from rare cases of benign brain tumors, like a man described in the medical literature who developed a growth the size of an orange. Yet because it grew very slowly, the man’s brain was able to adjust, shifting memories elsewhere, and his behavior and speech never seemed to change—even when the tumor was removed. 

That’s proof, Hébert thinks, that replacing the neocortex little by little could be achieved “without losing the information encoded in it” such as a person’s self-identity.

The second source of hope, he says, is experiments showing that fetal-stage cells can survive, and even function, when transplanted into the brains of adults. For instance, medical tests underway are showing that young neurons can integrate into the brains of people who have epilepsy  and stop their seizures.  

“It was these two things together—the plastic nature of brains and the ability to add new tissue—that, to me, were like, ‘Ah, now there has got to be a way,’” says Hébert.

One challenge ahead is how to manufacture the replacement brain bits, or what Hébert has called “facsimiles” of neocortical tissue. During a visit to his lab at Albert Einstein, Hébert described plans to manually assemble chunks of youthful brain tissue using stem cells. These parts, he says, would not be fully developed, but instead be similar to what’s found in a still-developing fetal brain. That way, upon transplant, they’d be able to finish maturing, integrate into your brain, and be “ready to absorb and learn your information.”

To design the youthful bits of neocortex, Hébert has been studying brains of aborted human fetuses 5 to 8 weeks of age. He’s been measuring what cells are present, and in what numbers and locations, to try to guide the manufacture of similar structures in the lab.

“What we're engineering is a fetal-like neocortical tissue that has all the cell types and structure needed to develop into normal tissue on its own,” says Hébert. 

Part of the work has been carried out by a startup company, BE Therapeutics (it stands for Brain Engineering), located in a suite on Einstein’s campus and funded by Apollo Health Ventures and VitaDAO, with contributions from a New York State development fund. The company had only two employees when MIT Technology Review visited this spring, and its future is uncertain, says Hébert, now that he’s joining ARPA-H and closing his lab at Einstein.

Because it’s often challenging to manufacture even a single cell type from stem cells, making a facsimile of the neocortex involving a dozen cell types isn’t an easy project. In fact, it’s just one of several scientific problems standing between you and a younger brain, some of which might never have practical solutions. “There is a saying in engineering. You are allowed one miracle, but if you need more than one, find another plan,” says Scholz.

Maybe the crucial unknown is whether young bits of neocortex will ever correctly function inside an elderly person’s brain, for example by establishing connections or storing and sending electrochemical information. Despite evidence that the brain can incorporate individual transplanted cells, that’s never been robustly proven for larger bits of tissue, says Rusty Gage, a biologist at the Salk Institute in La Jolla, Calif., who is considered a pioneer of neural transplants. He says researchers for years have tried to transplant larger parts of fetal animal brains into adult animals, but with inconclusive results. “If it worked, we’d all be doing more of it,” he says.

The problem, says Gage, isn’t whether the tissue can survive, but whether it can participate in the workings of an existing brain. “I am not dissing his hypothesis. But that’s all it is,” says Gage. “Yes, fetal or embryonic tissue can mature in the adult brain. But whether it replaces the function of the dysfunctional area is an experiment he needs to do, if he wants to convince the world he has actually replaced an aged section with a new section.”



Combating Pathogens Using Carbon-Fiber Ionizers (CFIs) for Air Purification: A Narrative Review


1. Introduction

2. Material and Methods

3. Database Search Results

4. Discussion

4.1. Defining the Testing Protocols and Parameter Reporting for a Consistent Research Approach

4.2. Air Ionization and Microorganisms—A Familiar Interplay

4.3. The Other Side of the Coin: How to Avoid Harmful Byproducts of Air Ionization

4.4. Carbon-Fiber Ionizers: The Pros and Cons

4.5. Possible Challenges in the Development of Real-World CFI-Based Air Purification Systems

4.6. Effectiveness of Air Ionization in Comparison with Other Pathogen Mitigation Measures

4.7. Limitations

  • Different particle sizes investigated;
  • Varying degrees of ozone emissions (or simply a lack of O 3 measurement in a particular study);
  • Range of microorganisms studied (some studies report on microbes that are not infectious to humans; many others, although they analyze the effect of ions on human pathogens, do so only on bacteria or viruses; the spectrum of microorganisms analyzed per study is restricted to only a few species; ultimately, some microorganisms may be inherently more resistant to ions than others);
  • Vegetative bacterial forms (in this instance, we are still uncertain about how ions impact bacterial spores);
  • Different methods to measure microorganism viability (which can negatively impact interstudy comparisons; viability was also not measured in every study);
  • CFI-related parameters (voltage, material used, and flow velocity);
  • Definition of ion source (an example is a study that does not define whether a CFI unit is utilized);
  • Type of most efficient ions (the optimal type of ions is also disputed, as in several instances a frank contradiction between study results was observed (cf. Table 1)).

5. Conclusions


  • Engel-Cox, J.A.; Van Houten, B.; Phelps, J.; Rose, S.W. Conceptual model of comprehensive research metrics for improved human health and environment. Environ. Health Perspect. 2008 , 116 , 583–592. [ Google Scholar ] [ CrossRef ]
  • Klepeis, N.E.; Nelson, W.C.; Ott, W.R.; Robinson, J.P.; Tsang, A.M.; Switzer, P.; Behar, J.V.; Hern, S.C.; Engelmann, W.H. The National Human Activity Pattern Survey (NHAPS): A resource for assessing exposure to environmental pollutants. J. Expo. Anal. Environ. Epidemiol. 2001 , 11 , 231–252. [ Google Scholar ] [ CrossRef ]
  • La Rosa, G.; Fratini, M.; Della Libera, S.; Iaconelli, M.; Muscillo, M. Viral infections acquired indoors through airborne, droplet or contact transmission. Ann. Ist. Super. Sanita 2013 , 49 , 124–132. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Zhao, B.; Liu, Y.; Chen, C. Air purifiers: A supplementary measure to remove airborne SARS-CoV-2. Build. Environ. 2020 , 177 , 106918. [ Google Scholar ] [ CrossRef ]
  • Berry, G.; Parsons, A.; Morgan, M.; Rickert, J.; Cho, H. A review of methods to reduce the probability of the airborne spread of COVID-19 in ventilation systems and enclosed spaces. Environ. Res. 2022 , 203 , 111765. [ Google Scholar ] [ CrossRef ] [ PubMed ] [ PubMed Central ]
  • Block, S.S. Disinfection, Sterilization, and Preservation ; Lippincott Williams & Wilkins: Philadelphia, PA, USA, 2001. [ Google Scholar ]
  • Zhao, H.; Kong, X.; Yao, W.; Fei, X.; Zhao, J.; Zhao, S.; Feng, T. Control technology of pathogenic biological aerosol: Review and prospect. Build. Environ. 2023 , 243 , 110679. [ Google Scholar ] [ CrossRef ]
  • Park, D.H.; Hwang, J.; Shin, D.; Kim, Y.; Lee, G.; Park, I.; Kim, S.B.; Hong, K.; Han, B. Developing an Optimal Antiviral Method for the Air-filtration System of Subway Stations. Aerosol. Air Qual. Res. 2023 , 23 , 230088. [ Google Scholar ] [ CrossRef ]
  • Grinshpun, S.A.; Adhikari, A.; Honda, T.; Kim, K.Y.; Toivola, M.; Rao, K.S.R.; Reponen, T. Control of aerosol contaminants in indoor air: Combining the particle concentration reduction with microbial inactivation. Environ. Sci. Technol. 2007 , 41 , 606–612. [ Google Scholar ] [ CrossRef ]
  • Jiang, S.-Y.; Ma, A.; Ramachandran, S. Negative Air Ions and Their Effects on Human Health and Air Quality Improvement. Int. J. Mol. Sci. 2018 , 19 , 2966. [ Google Scholar ] [ CrossRef ]
  • Cappa, C. Air Pollutant Emissions and Possible Health Effects Associated with Electronic Air Cleaners. 2023. Available online: https://ww2.arb.ca.gov/sites/default/files/2023-09/CARB%2022RD003%20White%20Paper%20Sept%2020%202023.pdf (accessed on 9 July 2024).
  • Asadgol, Z.; Nadali, A.; Arfaeinia, H.; Gholi, M.K.; Fateh, R.; Fahiminia, M. Evaluation of Negative Air Ions in Bioaerosol Removal: Indoor Concentration of Airborne Bacterial and Fungal in Residential Building in Qom City, Iran. Int. J. Earth Energy Environ. Sci. 2018 , 12 , 300–311. [ Google Scholar ] [ CrossRef ]
  • Grinshpun, S.A.; Mainelis, G.; Trunov, M.; Adhikari, A.; Reponen, T.; Willeke, K. Evaluation of ionic air purifiers for reducing aerosol exposure in confined indoor spaces. Indoor Air 2005 , 15 , 235–245. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Kim, H.J.; Han, B.; Kim, Y.-J.; Oda, T.; Won, H. Submicrometer particle removal indoors by a novel electrostatic precipitator with high clean air delivery rate, low ozone emissions, and carbon fiber ionizer. Indoor Air 2013 , 23 , 369–378. [ Google Scholar ] [ CrossRef ]
  • Wang, C.; Lu, S.; Zhang, Z. Inactivation of airborne bacteria using different UV sources: Performance modeling, energy utilization, and endotoxin degradation. Sci. Total Environ. 2019 , 655 , 787–795. [ Google Scholar ] [ CrossRef ] [ PubMed ] [ PubMed Central ]
  • Tyagi, A.K.; Nirala, B.K.; Malik, A.; Singh, K. The effect of negative air ion exposure on Escherichia coli and Pseudomonas fluorescens . J. Environ. Sci. Health A Tox Hazard Subst. Environ. Eng. 2008 , 43 , 694–699. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Mata, T.M.; Martins, A.A.; Calheiros, C.S.C.; Villanueva, F.; Alonso-Cuevilla, N.P.; Gabriel, M.F.; Silva, G.V. Indoor Air Quality: A Review of Cleaning Technologies. Environments 2022 , 9 , 118. [ Google Scholar ] [ CrossRef ]
  • Guo, C.; Gao, Z.; Shen, J. Emission rates of indoor ozone emission devices: A literature review. Build. Environ. 2019 , 158 , 302–318. [ Google Scholar ] [ CrossRef ]
  • Shi, S.; Zhu, S.; Lee, E.S.; Zhao, B.; Zhu, Y. Performance of wearable ionization air cleaners: Ozone emission and particle removal. Aerosol Sci. Technol. 2016 , 50 , 211–221. [ Google Scholar ] [ CrossRef ]
  • WHO Global Air Quality Guidelines. In Particulate Matter (PM2.5 And Pmio), Ozone, Nitrogen Dioxide, Sulfur Dioxide and Carbon Monoxide ; World Health Organization: Geneva, Switzerland, 2021.
  • WHO Regional Office for Europe. Review of Evidence on Health Aspects of Air Pollution—REVIHAAP Project. Available online: https://www.ncbi.nlm.nih.gov/books/NBK361805/ (accessed on 19 June 2024).
  • United States Environmental Protection Agency. Available online: https://www.epa.gov/ozone-pollution-and-your-patients-health/course-outline-and-key-points-ozone (accessed on 3 July 2024).
  • Yehia, A.; Abdel-Salam, M.; Mizuno, A. On assessment of ozone generation in dc coronas. J. Phys. D Appl. Phys. 2000 , 33 , 831. [ Google Scholar ] [ CrossRef ]
  • Sung, J.H.; Kim, M.; Kim, Y.J.; Han, B.; Hong, K.J. Ultrafine particle cleaning performance of an ion spray electrostatic air cleaner emitting zero ozone with diffusion charging by carbon fiber. Build. Environ. 2019 , 165 , 106422. [ Google Scholar ] [ CrossRef ]
  • Selvaprakash, K.; Chen, Y.C. Using an insulating fiber as the sampling probe and ionization substrate for ambient ionization–mass spectrometric analysis of volatile, semi-volatile, and polar analytes. Anal. Bioanal. Chem. 2022 , 414 , 4633–4643. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Hudson, J.B.; Sharma, M.; Petric, M. Inactivation of Norovirus by ozone gas in conditions relevant to healthcare. J. Hosp. Infect. 2007 , 66 , 40–45. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Bo, Z.; Yu, K.; Lu, G.; Cui, S.; Mao, S.; Chen, J. Vertically oriented graphene sheets grown on metallic wires for greener corona discharges: Lower power consumption and minimized ozone emission. Energy Environ. Sci. 2011 , 4 , 2525–2528. [ Google Scholar ] [ CrossRef ]
  • Hobbs, P.C.D.; Gross, V.P.; Murray, K.D. Suppression of particle generation in a modified clean room corona air ionizer. J. Aerosol Sci. 1990 , 21 , 463–465. [ Google Scholar ] [ CrossRef ]
  • Saxena, A.; Khare, D.; Agrawal, S.; Singh, A.; Dubey, A.K. Recent advances in materials science: A reinforced approach toward challenges against COVID-19. Emerg. Mater. 2021 , 4 , 57–73. [ Google Scholar ] [ CrossRef ] [ PubMed ] [ PubMed Central ]
  • Kim, M.; Lim, G.T.; Kim, Y.-J.; Han, B.; Woo, C.; Kim, H. A novel electrostatic precipitator-type small air purifier with a carbon fiber ionizer and an activated carbon fiber filter. J. Aerosol Sci. 2017 , 117 , 63–73. [ Google Scholar ] [ CrossRef ]
  • World Health Organization. Air Quality Guidelines for Particulate Matter, Ozone, Nitrogen Dioxide and Sulfur Dioxide: Global Update ; WHO Regional Office for Europe: Copenhagen, Denmark, 2006. [ Google Scholar ]
  • National Ambient Air Quality Standards for Ozone, EPA-HQ-OAR. United States Environmental Protection Agency; p. 2015, [Online], 2016-0202. Available online: https://www.govinfo.gov/content/pkg/FR-2018-12-06/pdf/2018-25424.pdf (accessed on 22 June 2024).
  • Young, T.M.; Sobek, E.; Farahi, F. Quantifying the Natural Variation of ‘Data Signatures’ from Aerosols Using Statistical Control Bands. Mathematics 2022 , 10 , 2103. [ Google Scholar ] [ CrossRef ]
  • First, M.W. HEPA Filters. J. Am. Biol. Saf. Assoc. 1998 , 3 , 33–42. [ Google Scholar ] [ CrossRef ]
  • Ehsan, S.M.; Krystal, J.G.P.; Jodi, S.; Richard, A.M. Performance analysis of portable HEPA filters and temporary plastic anterooms on the spread of surrogate coronavirus. Build. Environ. 2020 , 183 , 107186. [ Google Scholar ] [ CrossRef ]
  • Pawar, S.D.; Khare, A.B.; Keng, S.S.; Kode, S.S.; Tare, D.S.; Singh, D.K.; More, R.L.; Mullick, J. Selection and application of biological safety cabinets in diagnostic and research laboratories with special emphasis on COVID-19. Rev. Sci. Instrum. 2021 , 92 , 081401. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Thomas, D.; Contal, P.; Renaudin, V.; Penicot, P.; Leclerc, D.; Vendel, J. Modelling pressure drop in HEPA filters during dynamic filtration. J. Aerosol Sci. 1999 , 30 , 235–246. [ Google Scholar ] [ CrossRef ]
  • Himanshu, M.; Simon, R.P.; Thomas, P.; James, T.W.; Allan, M.B. Survival of Microorganisms on HEPA Filters. Appl. Biosafety 2011 , 16 , 163–166. [ Google Scholar ] [ CrossRef ]
  • Christiane, S.H.; Ulrich, W.A.; Leonie, S.; Ulf, D.; Oliver, W.; Dongliang, Y.; Xin, Z.; Kathrin, S.; Mirko, T.; Mira, A.; et al. Susceptibility of SARS-CoV-2 to UV irradiation. Am. J. Infect. Cont. 2020 , 48 , 1273–1275. [ Google Scholar ] [ CrossRef ]
  • Fabio, P.S.; Caetano, P.S.; Fernanda, V.C.; Martha, S.R. A Systematic scoping review of ultraviolet C (UVC) light systems for SARS-CoV-2 inactivation. J. Photochem. Photobiol. 2021 , 8 , 100068. [ Google Scholar ] [ CrossRef ]
  • Matsie, M.; Ashwin, S.D.; Paul, A.J.; Stephen, N.R.; Tobias, H.R.; Marcello, A.P.; Wilhelm, L.; Tim, A.S.; Sonya, P.M.; Martie, W.; et al. Institutional Tuberculosis Transmission, Controlled Trial of Upper Room Ultraviolet Air Disinfection: A Basis for New Dosing Guidelines. Am. J. Respir. Crit. Care Med. 2015 , 192 , 477–484. [ Google Scholar ] [ CrossRef ]
  • Sung, M.; Kato, S. Estimating the germicidal effect of upper-room UVGI system on exhaled air of patients based on ventilation efficiency. Build. Environ. 2011 , 46 , 2326–2332. [ Google Scholar ] [ CrossRef ]
  • Edward, N.; Richard, V.; David, H.S. Upper-Room Ultraviolet Germicidal Irradiation (UVGI) for Air Disinfection: A Symposium in Print. Photochem. Photobiol. 2013 , 89 , 764–769. [ Google Scholar ] [ CrossRef ]
  • Jangra, R.; Ahlawat, K.; Dixit, A.; Prakash, R. Efficient deactivation of aerosolized pathogens using a dielectric barrier discharge based cold-plasma detergent in environment device for good indoor air quality. Sci. Rep. 2023 , 13 , 10295. [ Google Scholar ] [ CrossRef ]
  • Chu, C.H.; Chen, S.R.; Wu, C.H.; Cheng, Y.C.; Cho, Y.M.; Chang, Y.K. The effects of negative air ions on cognitive function: An event-related potential (ERP) study. Int. J. Biometeorol. 2019 , 63 , 1309–1317. [ Google Scholar ] [ CrossRef ]

| Microorganism | Outcome of Ionization * | Ion Type and Performance | Ozone Levels | References |
| --- | --- | --- | --- | --- |
| Surrogates (particle sizes equivalent to pathogen dimensions) | Particle deposition | − ions better than bipolar ions | <5 ppb | [ ] |
| Staphylococcus epidermidis, Escherichia coli | Microbial destruction (membrane disruption) | + ions better than − ions | <10 ppb | [ ] |
| Staphylococcus epidermidis, Escherichia coli | Microbial destruction (membrane disruption) | + ions | 21.8–26.0 ppb | [ ] |
| Staphylococcus epidermidis, Escherichia coli (see also viruses) | Microorganism inactivation | − ions | 3.0–3.5 ppb (emission rate 0.026 mg/h) | [ ] |
| Staphylococcus epidermidis | Antibacterial effect; cell contraction | Bipolar ions eliminate bacteria; unipolar (+) activity not antibacterial | <25 ppb | [ ] |
| Staphylococcus epidermidis, Serratia marcescens | Microbial inactivation | − ions | 68 ppb | [ ] |
| Staphylococcus aureus, Escherichia coli, Bacillus subtilis, Enterococcus faecalis | Potent bactericidal effect | − and + air ions | ~35 ppb | [ ] |
| Staphylococcus aureus | Microbial deposition; microbial destruction | − ions better than + ions | Not measured | [ ] |
| Staphylococcus aureus, Escherichia coli | Decreased viability. On Petri dishes (10 CFU/mL): S. aureus up to 86% after 3 h of exposure and 95% after 8–12 h; E. coli up to 51% after 3 h of exposure and 70% after 8–12 h. On filters soaked with 10 CFU/mL: S. aureus up to 78% on PP filters and 82% on PET filters after 3 h; E. coli up to 52% on PP and PET filters after 3 h | − and + air ions | N/A | [ ] |
| Staphylococcus aureus, Escherichia coli | Agglutination of microbial products, removal from air, microbicidal effects; more noticeable effect on gram-positive bacteria; exceptional antibacterial activity via oxidative damage | − and + air ions | Only mentioned in passing | [ ] |
| Staphylococcus aureus, Escherichia coli | Bactericidal; more noticeable effect on gram-negative bacteria | − air ionization with oxidation effect | N/A | [ ] |
| Escherichia coli | Enhanced pathogen removal efficiency | + ions | N/A | [ ] |
| Escherichia coli (see also viruses) | Complete inactivation, with more than a 5-log reduction (99.999%) in 90 min | Bipolar | <24 ppb | [ ] |
| Escherichia coli (see also fungi) | Pre-charging enhances collection efficiency | − ions | N/A | [ ] |
| Pseudomonas fluorescens, Bacillus atropheus (see also fungi) | Particle deposition (microorganism viability not measured) | − ions | 39 ppb | [ ] |
| Pseudomonas fluorescens, Bacillus anthracis (see also fungi) | Easier and more efficient collection (from 70% without charge to 80–90%) | + ions | N/A | [ ] |
| Pseudomonas fluorescens | Synergistic bactericidal action (decreased microbial load); cells morphologically deformed | Combined − air ions and C. citratus essential oil vapor | N/A | [ ] |
| Pseudomonas fluorescens, Erwinia carotovora, Escherichia coli | Bactericidal effect; P. fluorescens most vulnerable | − air ions | Yes (noted synergy between ozone and NAIs) | [ ] |
| Escherichia coli | Antibacterial | Released + ions from copper/silver (metals proved to be synergistic) | N/A | [ ] |
| Escherichia coli | Disinfection | − ions generated by a cold plasma tube | No ozone, but oxygen species and oxygen-containing radicals, UV-C, and short-term heating of microorganisms | [ , ] |
| Escherichia coli | Inactivation and decontamination of E. coli via oxidation | − and + ions and free radicals, all falling into the category of the fourth state of matter (cold plasma) | Harnessed ozone as part of the study to maximize disinfection | [ ] |
| Serratia marcescens | Significant bactericidal effects | − and + air ions (NAIs showing slightly stronger effects) | N/A | [ ] |
| Pseudomonas veronii | Destroyed cells in a starved, and thus highly impenetrable, state via predicted ionic pore formation in the cell wall (due to ion accumulation on the surface) | Both − and + ions of an electric corona | N/A | [ ] |
| Bacillus subtilis | Antimicrobial effects; reduced number of bioaerosols | − air ions created by an ionizer, tested with concurrent ozone | Yes | [ ] |
| Campylobacter jejuni, E. coli, Salmonella enteritidis, Listeria monocytogenes, Staphylococcus aureus, Bacillus stearothermophilus | Significantly decreased microbial load (levels in biofilm) | Supercharged − air ions | N/A | [ ] |
| Mycobacterium parafortuitum | Cell inactivation and biocidal qualities via electroporous mechanisms | − air ions | Yes, but not the principal cause of destruction | [ ] |
| Mycobacterium tuberculosis | Prevented most airborne TB transmission | − air ions | N/A | [ ] |
| Legionella | Ions locate the negatively charged cell walls of pathogens and destroy them | + ions in water systems | N/A | [ , , , ] |
| Salmonella enteritidis | Statistically significant decrease in infection via airborne transmission; organisms attracted to ground surfaces; direct organism killing | − air ions | N/A | [ , , ] |
| Staphylococcus albus | Bactericidal effects | − air ions in synergy with the superoxide radical anion | Inadvertently: superoxide radicals may be chain carriers for ozonation (O3 decomposition), and ozone and superoxide have been noted to combine in corona discharge | [ ] |
| Clostridioides difficile; drug-resistant strains of Staphylococcus aureus, Pseudomonas aeruginosa, and Klebsiella pneumoniae (see also fungi and viruses) | Reduction: bacteria 94.4–99.9%; virus 94% | Bipolar | 22–66 ppb | [ ] |
| Bacterial/viral agents | Same polarity of ions on respiratory protective masks (N95 and surgical) leads to electrostatic protection | − ions | N/A | [ ] |
| Bacteria (review) | Bactericidal effects | − ions | N/A | [ ] |
| Penicillium notatum | Lowered penicillin production (mostly by − ions); reduced germination of spores (mostly by + ions); lowered CO2 production | − and + air ions | N/A | [ ] |
| Penicillium chrysogenum (see also bacteria) | Particle deposition (microorganism viability not measured) | − ions | 39 ppb | [ ] |
| Penicillium brevicompactum (see also bacteria) | Easier and more efficient collection (from 70% without charge to 80–90%) | + ions | N/A | [ ] |
| Aspergillus fumigatus, Candida albicans (see also bacteria and viruses) | Reduction: fungi 32.4–87.3% | Bipolar | 22–66 ppb | [ ] |
| Candida albicans (see also bacteria) | Pre-charging enhances collection efficiency | − ions | N/A | [ ] |
| Candida albicans (10 strains) | Inhibited growth | − air ions | Yes; the study additionally hypothesizes a potential microbicidal role for ozone | [ ] |
| H5N1 avian influenza virus | Neutralizes up to 26% of airborne pathogens | − ions | <50 ppb | [ , ] |
| RNA/DNA viruses | Free ionic Ag+ inactivated ssRNA MS2 and ssDNA PhiX 174 (specifically in neutral and alkaline environments) | + charged copper/silver ions in water, synergistic effect | N/A | [ ] |
| SARS-CoV-2 | Reduced aerosolized pathogens | − ions created by a plant-based ionizer | No ozone detected | [ ] |
| SARS-CoV-2 | Pathogens agglutinate and ‘fall’ down | − ions | Varies between generations/low concentration | [ , ] |
| SARS-CoV-2 and influenza A virus | Inactivation, fixed on surfaces: >99.98% after 1 h of exposure. Disinfection, aerosolized, after 10 min of exposure: at a 30 cm height, 89.96% for SARS-CoV-2 and 91.27% for influenza A virus; at a 50 cm height, 87.77% for SARS-CoV-2 and 89.50% for influenza A virus | − ions | <50 ppb | [ ] |
| Human coronavirus 229E | Reduction: virus 94% | Bipolar | 22–66 ppb | [ ] |
| Newcastle disease virus | Facilitated pathogenic aerosol decay; wire-gauze completely prevented transmission | − ions | N/A | [ , ] |
| MS2 phage (see also bacteria and fungi) | Complete inactivation, with more than a 5-log reduction (99.999%) in 30 min | Bipolar | <24 ppb | [ ] |
| MS2 bacteriophage, H1N1 influenza virus | Particle deposition | − ions | ~10 ppb (varying) | [ ] |
| MS2 bacteriophage | Microbial inactivation | − ions | 1.6 ppb | [ ] |
| MS2 bacteriophage | Reduction with unipolar ions: 46.1%, 78.8%, and 83.7% after 15, 30, and 45 min of exposure, respectively; up to 97.4% with bipolar ions | Bipolar better than unipolar | Unipolar ions: 2–10 ppb; bipolar ions: ~30 ppb | [ ] |
| Viruses (P22 and Φ6 bacteriophages) (see also bacteria) | Microorganism inactivation | − ions | 3.0–3.5 ppb (emission rate 0.026 mg/h) | [ ] |
| Φ6 bacteriophage (SARS-CoV-2 surrogate) | Particle removal; antiviral performance | − ions | Not measured | [ ] |
| Canine calicivirus (CaCV), rhesus rotavirus (RRV), influenza A virus (H3N2) | Reduced infectivity of aerosolized CaCV and RRV (>97%); an active ionizer prevented 100% of guinea pigs from infection by H3N2 | − ions | <2 ppb | [ ] |
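
Several entries in the table report inactivation both as a log10 reduction and as a percentage (e.g., a 5-log reduction, i.e., 99.999%). The following snippet is a small illustration of that conversion, not anything taken from the source article:

```python
# Convert a log10 reduction to the percent of organisms inactivated.
# e.g., 5-log -> 99.999%; 3-log -> 99.9%.
def log_reduction_to_percent(logs: float) -> float:
    return (1.0 - 10.0 ** (-logs)) * 100.0

for logs in (1, 2, 3, 5):
    print(f"{logs}-log reduction = {log_reduction_to_percent(logs):.4f}% inactivated")
```
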
| Air Contaminant Removal Technology | Benefits | Limitations | Representative Examples |
| --- | --- | --- | --- |
| HEPA (High-Efficiency Particulate Air) filters [ , , , , ] | | | |
| UVGI (Ultraviolet Germicidal Irradiation) [ , , , , ] | | | |
| Ionizers also producing ozone [ , , , , ] | | | |
| Ionizers with negligible ozone production [ , , , , ] | | | |

Share and Cite

Radalj, A.; Nikšić, A.; Trajković, J.; Knezević, T.; Janković, M.; De Luka, S.; Djoković, S.; Mijatović, S.; Ilić, A.; Arandjelović, I.; et al. Combating Pathogens Using Carbon-Fiber Ionizers (CFIs) for Air Purification: A Narrative Review. Appl. Sci. 2024 , 14 , 7311. https://doi.org/10.3390/app14167311


Article Metrics

Article access statistics, further information, mdpi initiatives, follow mdpi.

MDPI

Subscribe to receive issue release notifications and newsletters from MDPI journals

Unsupported browser detected

Your browser appears to be unsupported. Because of this, portions of the site may not function as intended.

Please install a current version of Chrome , Firefox , Edge , or Safari for a better experience.

The researcher versus the mosquitoes

Long before she devoted her life to studying mosquitoes, Corine Ngufor knew their scourge. Growing up in Cameroon, “I was condemned to suffer from malaria,” she says. “My siblings and I would just keep having malaria and malaria and malaria. Just about everyone did.”

A medical entomologist with a Ph.D. from the London School of Hygiene & Tropical Medicine and a passion for public health, Ngufor leads a Benin-based lab that investigates a variety of products aimed at controlling mosquitoes that carry malaria across the African continent.

She didn’t set out to spend her days thinking about insects—she had actually envisioned a career in the physical or mathematical sciences. But after getting a chance to collect mosquitoes at field sites during her undergraduate years, she was hooked. 

It’s hardly surprising, then, that Ngufor would go on to play a big role in developing one of the most effective innovations in malaria prevention to emerge in the past decade.  

When people think about innovation, they often focus on future breakthroughs: the next big scientific discovery or technology that’s going to revolutionize everything. But Ngufor’s landmark innovation is one that’s been available for several years and has already saved a stunning number of lives. Today, I’m going to tell you more about why the product she helped create is so remarkable.  

Insecticide resistance on the rise 

Before we get into Ngufor’s research, a quick bit of background: 

Between 1980 and 2000, malaria was on the rise. By the early 2000s, it was killing 800,000 to 900,000 people a year, most of them children. Two key innovations began to really turn things around: bed nets treated with insecticides called pyrethroids, and indoor spraying with long-lasting insecticides. Starting around 2005, malaria cases and deaths began to steadily and substantially decline. Some say bed nets alone have been responsible for 68% of the reduction, saving an estimated 7.6 million lives over two decades.

Unfortunately, mosquitoes are wildly adaptable. Over time, they began to develop resistance to pyrethroids. In 2020, malaria cases and deaths began rising again.

To stay ahead of these adaptable creatures, humans had to keep innovating. One potential solution was dual-insecticide nets: nets treated with a pyrethroid and a second insecticide. But which one? For a variety of reasons, finding the right insecticide was extremely difficult. One expert has noted that researchers had already eliminated millions of potential options.

That’s where Ngufor and her lab came in. They began testing an insecticide called chlorfenapyr, which, among other things, acts in a very different way from pyrethroids. Instead of targeting the mosquitoes’ nervous system, it blocks their ability to produce energy. Deprived of energy, they can’t fly, and when they can’t fly, they die.

“I thought we might have to give up.” 

The scientific path forward wasn’t easy. Initially, the lab data wasn’t promising. Chlorfenapyr just wasn’t killing very many mosquitoes in lab tests. The researchers tried using a different polymer to bind the insecticide to the nets. Working with the manufacturer, they tested nets made from different types or combinations of materials. Still, the chlorfenapyr wasn’t killing enough mosquitoes.  

“At some point, I thought we might have to give up,” Ngufor says. “But we also knew that the options out there were few, so it was important to make sure we tried everything.” 

Finally, they began to consider how lab conditions differ from the real world. For example, lab tests are generally conducted during the day, but mosquitoes are most metabolically active at night and when looking for people to bite; lab tests are time-limited (based on the fast-acting nature of pyrethroids), but the effects of chlorfenapyr require a little more time.

The team needed to test under conditions more closely resembling the real world. So they turned to experimental huts. The huts were standardized, with specially designed entry points that allowed mosquitoes to get in but not get out. Adult volunteers would sleep in a hut under a net, and the next morning technicians would collect the mosquitoes—tallying how many got in, how many were dead, and how many had fed on blood.  

“That way, you can have an estimate of the mortality rate and get a sense of how effective the intervention is,” Ngufor says. 
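
As a minimal sketch of that estimate (the counts below are hypothetical; the article reports no raw numbers), the morning hut tallies reduce to simple ratios over the mosquitoes collected:

```python
from dataclasses import dataclass

@dataclass
class HutNight:
    entered: int    # mosquitoes collected from the hut in the morning
    dead: int       # found dead (the mortality endpoint)
    blood_fed: int  # found engorged (the blood-feeding endpoint)

def mortality_rate(n: HutNight) -> float:
    return n.dead / n.entered

def blood_feeding_rate(n: HutNight) -> float:
    return n.blood_fed / n.entered

# Hypothetical night: 120 mosquitoes entered, 84 died, 18 blood-fed.
night = HutNight(entered=120, dead=84, blood_fed=18)
print(f"mortality: {mortality_rate(night):.0%}, blood-fed: {blood_feeding_rate(night):.0%}")
```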

The team counted a lot of dead mosquitoes from the huts. The mood at the lab improved dramatically. And statistical analysis suggested that the dual-treated nets would work even better in real life than in the lab tests. But Ngufor wasn’t quite ready to share the news. 

“We wanted to contain our excitement at first,” she says. They double-checked their analysis. When they were confident of the results, they shared their findings broadly.

It had been six years since Ngufor began testing chlorfenapyr in the lab. But there was more work to be done. 

Finally, signs of success 

The nets, now manufactured under the name Interceptor® G2 (IG2), had to be tested in randomized controlled trials. In the first key study, conducted in Tanzania, the pyrethroid-chlorfenapyr nets performed significantly better than pyrethroid-only nets, cutting malarial infections among children by almost half. Anytime you can make gains against a disease like malaria that affects so many children, it’s a win. But this reduction was incredible.
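
To put a formula behind “almost half”: protective efficacy in trials of this kind is commonly summarized as one minus the ratio of infection rates in the intervention and control arms. The rates below are invented purely for illustration; the trial’s actual figures are not given in this text:

```python
def protective_efficacy(rate_intervention: float, rate_control: float) -> float:
    """1 - relative risk: the fraction of infections prevented."""
    return 1.0 - rate_intervention / rate_control

# Hypothetical: 25 infections per 100 children under IG2 nets vs. 46
# per 100 under pyrethroid-only nets -> ~46% reduction, "almost half".
print(f"{protective_efficacy(0.25, 0.46):.0%}")
```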

To ensure that the nets could be used in the regions that needed them most, the World Health Organization had to weigh in. That process required even more study. Subsequent controlled trials, including one in Benin that Ngufor helped design, showed results similar to the trial in Tanzania.

A last hurdle remained: IG2 nets cost more than pyrethroid-only ones. Researchers looked at the cost of treating the malaria cases that would have arisen without the IG2 nets and found that the new nets would actually save money in the long run. 
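
A rough sketch of that cost logic, with entirely hypothetical numbers (the article cites none): the pricier net pays for itself once the treatment costs it averts exceed its price premium.

```python
def new_net_saves_money(extra_cost_per_net: float,
                        cases_averted_per_net: float,
                        treatment_cost_per_case: float) -> bool:
    """True if averted treatment costs exceed the net's price premium."""
    return cases_averted_per_net * treatment_cost_per_case > extra_cost_per_net

# Hypothetical: a $1.50 premium per net, 0.8 additional cases averted
# over the net's lifetime, $5 to diagnose and treat each case.
print(new_net_saves_money(1.50, 0.8, 5.0))  # True: $4.00 averted > $1.50
```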

In 2023, WHO made a strong recommendation in favor of the pyrethroid-chlorfenapyr nets over pyrethroid-only nets, based on the new nets’ life-saving potential as well as their cost-effectiveness. This was the first time WHO had issued such a recommendation for a new insecticide formulation.

It was a moment for celebration in Ngufor’s lab, and also a moment for reflection. “It shows that, as scientists, we really have to try everything before we give up on an intervention,” she says. “It was a really big lesson in my career.” 

Continued innovation 

Now that the IG2 nets are being used in the real world, outside of clinical trial settings, their incredible impact is clear. That’s a big deal. One large pilot project in 17 countries, funded by Unitaid and the Global Fund to Fight AIDS, Tuberculosis and Malaria, found that the nets prevented 13 million malaria cases between 2019 and 2022, saving an estimated 25,000 lives.

Ngufor says, “I want to see how I can contribute to further driving down that curve.” She’s among many researchers who believe that malaria eradication is achievable. They also know that one innovation alone, no matter how good, won’t get us there. 

The Bill & Melinda Gates Foundation has funded malaria work since the early 2000s, and we’re continuing to support it in various ways, including by:

  • Providing volume guarantees to manufacturers, to help reduce the price of new nets
  • Supporting research into new tools, including malaria vaccines and improved treatments
  • Funding a project in which male mosquitoes are bred to carry a gene that doesn’t allow their female offspring to survive into adulthood
  • Supporting development of attractive targeted sugar baits, which kill mosquitoes after they take a drink of sugary liquid

New tools are important, but real-time, high-quality data is also essential to stop malaria. We support organizations that are developing more sophisticated monitoring systems  to better track and target mosquito vectors.  

Of course, mosquitoes are crafty, and they continue to adapt. At some point, they’ll adapt to chlorfenapyr. Constant vigilance is critical. As Ngufor says, “You always have to think ahead of the mosquito.” 

Empathy is on the Rise in Young People

It doesn’t often feel as if we’re living in empathetic times. Yet, as the headline says, research suggests empathy is on the rise among young people.

That increase in empathy can be undermined by our cynicism toward each other, according to Jamil Zaki, a professor of psychology at Stanford University who is also director of the Stanford Social Neuroscience Lab.

People often believe that “their craving for a more empathetic community is theirs alone when other people all around them also want the same thing,” said Zaki, author of “The War for Kindness: Building Empathy in a Fractured World.” This mistaken belief weakens conversations by creating biased views before you even start talking.

People sometimes have an inaccurate sense of what other people think.

That’s why “gaining a more accurate perspective on who is surrounding us right now can make us more hopeful about how we can build a better future together,” Zaki said.

Read the whole story (subscription may be required): CNN

Does Psychology Need More Effective Suspicion Probes?

Suspicion probes are meant to inform researchers about how participants’ beliefs may have influenced the outcome of a study, but it remains unclear what these unverified probes are really measuring or how they are currently being used.

Science in Service: Shaping Federal Support of Scientific Research  

Social psychologist Elizabeth Necka shares her experiences as a program officer at the National Institute on Aging.

A Very Human Answer to One of AI’s Deepest Dilemmas

Imagine that we designed a fully intelligent, autonomous robot that acted on the world to accomplish its goals. How could we make sure that it would want the same things we do? Alison Gopnik explores. Read or listen!
