• Search Menu
  • Advance articles
  • Author Guidelines
  • Submission Site
  • Open Access
  • Why Publish?
  • About Research Evaluation
  • Editorial Board
  • Advertising and Corporate Services
  • Journals Career Network
  • Self-Archiving Policy
  • Dispatch Dates
  • Journals on Oxford Academic
  • Books on Oxford Academic

Issue Cover

Article Contents

Introduction, what is already known about this topic, what this study adds, acknowledgements.

  • < Previous

Development and validation of a questionnaire to measure research impact

  • Article contents
  • Figures & tables
  • Supplementary Data

Maite Solans-Domènech, Joan MV Pons, Paula Adam, Josep Grau, Marta Aymerich, Development and validation of a questionnaire to measure research impact, Research Evaluation , Volume 28, Issue 3, July 2019, Pages 253–262, https://doi.org/10.1093/reseval/rvz007

  • Permissions Icon Permissions

Although questionnaires are widely used in research impact assessment, their metric properties are not well known. Our aim is to test the internal consistency and content validity of an instrument designed to measure the perceived impacts of a wide range of research projects. To do so, we designed a questionnaire to be completed by principal investigators in a variety of disciplines (arts and humanities, social sciences, health sciences, and information and communication technologies). The impacts perceived and their associated characteristics were also assessed. This easy-to-use questionnaire demonstrated good internal consistency and acceptable content validity. However, its metric properties were more powerful in areas such as knowledge production, capacity building and informing policy and practice, in which the researchers had a degree of control and influence. In general, the research projects represented an stimulus for the production of knowledge and the development of research skills. Behavioural aspects such as engagement with potential users or mission-oriented projects (targeted to practical applications) were associated with higher social benefits. Considering the difficulties in assessing a wide array of research topics, and potential differences in the understanding of the concept of ‘research impact’, an analysis of the context can help to focus on research needs. Analyzing the metric properties of questionnaires can open up new possibilities for validating instruments used to measure research impact. Further to the methodological utility of the current exercise, we see a practical applicability to specific contexts where multiple discipline research impact is requires.

Over the past three decades, increasing attention has been paid to the social role and impact of research carried out at universities. National research evaluation systems, such as the UK’s Research Excellence Framework (REF) ( Higher Education Funding Council of England et al. 2015 ) and the Excellence in Research for Australia ( Australian Research Council 2016 ) are examples of assessment tools that address these concerns. These systems identify and define how research funding is allocated based on a number of dimensions of the research process, including impact of research. ( Berlemann and Haucap 2015 ).

Being explicit about the objective of the impact assessment is emphasized in the International School on Research Impact Assessment (ISRIA) statement ( Adam et al. 2018 ) a 10-point guideline for an effective research impact assessment that includes four purposes: advocacy, analysis, allocation, and accountability. The last one emphasizes transparency, efficiency, value to the public and a return for the investment. With mounting concern about the relevance of research outcomes, funding organizations are increasingly expecting researchers to demonstrate that investments result in tangible improvements for society ( Hanney et al. 2004 ). This accountability is intended to ensure resources have been appropriately utilized and is strongly linked to the drive for value‐for‐money within health services and research ( Panel on the return on investments in health research 2009 ). As policy-makers and society expect science to meet societal needs, scientists have to prioritize social impact, or risk losing public support ( Poppy 2015 ).

To meet these expectations, the Universitat Oberta de Catalunya (UOC) has embraced a number of pioneering initiatives in its current Strategic Plan, which includes the promotion of Open Knowledge, a specific measure related to the social impact of research, ( Universitat Oberta de Catalunya 2017 ) and the development of an institution wide action plan to incorporate it in research evaluation. The UOC is currently investigating how to implement the principals of the DORA Declaration in institutional evaluation processes, taking into account ‘a broad range of impact measures including qualitative indicators of research impact, such as influence on policy and practice’ ( ‘San Francisco Declaration on Research Assessment (DORA)’ n.d. ). The UOC is also taking the lead in meeting the Sustainable Development Goals (SDG) of the UN 2030 Agenda,( Jørgensen and Claeys-Kulik 2018 ) having been selected by the International Association of Universities as one of the 16 university cluster leaders around the world to lead the SDGs ( ‘IAU HESD Cluster | HESD - Higher Education for Sustainable Development portal’ n.d. ).

The term ‘research impact’ has many definitions. On a basic level, the ‘academic impact’ is understood as benefits for further research, while ‘wider and societal impact’ includes the outcomes that reach beyond academia. In our study we will include both categories and refer to ‘research impact’ as any type of output or outcome of research activities that can be considered a ‘positive return or payback’ for a wide range of beneficiaries, including people, organizations, communities, regions, or other entities. The pathways linking science, practice, and outcomes are multifaceted and complex ( Molas-Gallart et al. 2016 ). Indeed, the path from new knowledge to its practical application is neither linear nor simple; the stages may vary considerably in terms of duration, and many impacts of research may not be easily measurable or attributable to a concrete result of research ( Figure 1 ). This outputs and outcomes generated by research characteristics (inputs and processes) are context dependant ( Pawson 2013 ). Therefore, a focus on process is fundamental to understanding the generation of impact.

Effects of research impact.

Effects of research impact.

Surveys are among the most widely used tools in research impact evaluation. Quantitative approaches as surveys are suggested for accountability purposes, as the most appropriate way that calls for transparency ( Guthrie et al. 2013 ). They provide a broad overview of the status of a body of research and supply comparable, easy-to-analyze data referring to a range of researchers and/or grants. Standardization of the approach enhances this comparability and minimizes researcher bias and subjectivity, particularly in the case of web or postal surveys. Careful wording and question construction increases the reliability of resulting data ( Guthrie et al. 2013 ). However, while ex-ante assessments instruments for research proposals have undergone significant study, ( Fogelholm et al. 2012 ; Van den Broucke et al. 2012 ) the metric properties of research evaluation instruments have received little attention ( Aymerich et al. 2012 ). ‘Internal consistency’ is generally considered evidence of internal structure, ( Clark and Watson 1995 ) while the measurement of ‘content validity’ attempts to demonstrate that the elements of an assessment instrument are relevant to and representative of the targeted construct for a particular assessment purpose ( Nunnally and Bernstein 1994 ).

As the demand for monitoring research impact increases across the world, so does the need for research impact measures that demonstrate validity. Therefore, the aim of this study is to develop and test the internal consistency and the content validity of an instrument designed for accountability purposes to measure the perceived impacts of a wide range of competitively funded research projects, according to the perspectives of the principal investigators (PIs). The study will also focus on the perceived impacts and their characteristics.

A cross-sectional survey was used to assess the research undertaken at UOC. This research originates from four knowledge areas: arts and humanities, social sciences, health sciences, and information and communication technologies (ICT). Research topics include ‘identity, culture, art and society’; ‘technology and social action’; ‘globalization, legal pluralism and human rights’; ‘taxation, labour relations and social benefits’; ‘internet, digital technologies and media’; ‘management, systems and services in information and communications’; and ‘eHealth’. UOC’s Ethics Committee approved this study.

Study population

The study population included all PIs with at least one competitively funded project (either public or private) at local, regional, national, or international level completed by 2017 (n = 159).

The questionnaire

An on-line questionnaire was designed for completion by project PIs in order to retrospectively determine the impacts directly attributed to the projects. The questions were prepared based on the team’s prior experience and questionnaires published in scientific literature. ( Wooding et al. 2010 ; Hanney et al. 2013 ) The questionnaire was structured around the multidimensional categorization of impacts in the Payback Framework. ( Hanney et al. 2017 )

The Payback Framework has been extensively tested and used to analyze the impact of research in various disciplines. It has three elements: first, a logic model which identifies the multiple elements that form part of the research process and contribute to achieving impact; second, two ‘interfaces’, one referring to the project specification and selection, the other referring to the dissemination of research results; and third, a consideration of five impact categories: knowledge production (represented by scientific publications or dissemination to non-scientific audiences); research capacity building (research training, new collaborations, the securing of additional funding or improvement of infrastructures); informing policy and product development (research used to inform policymaking in a wide range of circumstances); social benefits (application of the research within the discipline and topic sector); and broader economic benefits (commercial exploitation or employment) ( Hanney et al. 2013 ).

Our instrument included four sections. The first section recorded information on the PIs, including their sex, age, and the number of years they had been involved in research. The second focused on the nature of the project itself (or a body of work based on continuation/research progression projects). PIs involved in more than one project (or a set of projects within the same body of work) were instructed to select one, in order to reduce the time needed to complete the survey and thereby to increase response rate. This section included the discipline, the main topic of research, the original research drivers, interaction with potential users of the research during the research processes, and funding bodies. The third section addressed the PIs’ perceptions of the impact of the research project, and was structured around the five impact categories of the aforementioned Payback Framework. The last section included general questions, one of which sought to capture other relevant impacts that might not fall within one of the previous five categories. The final question requested an evaluation (as a percentage) of the contribution/attribution of the research to the five impact categories. Respondents were required to decide the level of the contribution/attribution of the impacts according to three answer categories: limited, contribution from 1 to 30%; moderate, contribution from 40 to 60%; and significant, contribution from 70 to 100%).

Questionnaire items included questions with dichotomous answers (yes/no) and additional open box questions for a brief descriptions of the impacts perceived.

Prior to testing, we reviewed the abstracts of 72 REF2014 impact case studies (two per knowledge area). REF2014 ( Higher Education Funding Council of England et al. 2015 ) is the first country-wide exercise to assess the impact of university research beyond academia and has a publicly available database of over 6,000 impact case studies, grouped in 34 subject-based units of assessment. Case studies were randomly selected and the impacts found in each mapped onto the most appropriate items and dimensions of the questionnaire. This review helped to reformulate and add questions, especially in the sections on informing policy and practice and social benefits .

Data collection

The questionnaire was sent to experts in various disciplines with a request for feedback on the relevance of each item to the questionnaire’s aim (impact assessment), which they rated on a 4-point scale (0 = ‘not relevant’, 1 = ‘slightly relevant’, 2 = ‘quite relevant’, 3 = ‘very relevant’) according to the definition of research impact included in our study (defined above). The experts were also asked to evaluate whether the items covered the important aspects or whether certain components were missing. They could also add comments on any item.

The PIs were contacted by email. They were informed of the objectives of the study and assured that the data would be treated confidentially. They received two reminders, also by email.

A quality control exercise was performed prior to data analysis. The data were processed and the correct classification of the various impacts checked by comparing the yes/no responses with the information provided in the additional open box questions. No alterations were required after these comparisons. Questionnaire results provided a measure of the number of research projects contributing to a particular type of impact; therefore, to estimate each level of impact we calculated the frequency of its occurrence in relation to the number of projects. A Chi-squared test was used to test for group differences.

Internal consistency was assessed by focusing on the inter-item correlations within the questionnaire, indicating how well the items fitted together theoretically. This was performed using Cronbach’s alpha ( α ). An alpha between 0.70 and 0.95 was considered acceptable ( Peterson 1994 ).

An expert opinion index was used to estimate content validity at the item level. This index was calculated by dividing the number of experts providing a score of 2 or 3 by the total number of answers. Due to the diverse array of disciplines and topics under examination, values were calculated for all experts and for the experts of each discipline. These were considered acceptable if the level of endorsement was >0.5.

All data were introduced into the statistical programme SPSS18, and the level of significance set at 0.05 for all tests.

Sixty-eight PIs answered the questionnaire, a response rate of 42.8%. Respondents took an average of 26 minutes to complete the questionnaire. Table 1 shows the sample characteristics. Significant differences were found between the respondents and non-respondents for knowledge area (p = 0.014) and age group (p = 0.047). Arts and humanities investigators and PIs older than 50 years were more frequent among non-respondents. The proportion of women did not differ significantly between respondents and non-respondents (p = 0.083).

Sample characteristics

Answers could include more than one response. PI: principal investigator.

Impact and its characteristics

An impact on knowledge production was observed in 97.1% of the projects, and an impact on capacity building in 95.6%. Lower figures were recorded for informing policy and practice (64.7%), and lower still for economic benefits (33.8%), and for social benefits (32.4%), although results were based on a formal evaluation in only 11, 8% of the cases included in social benefits . Estimations of the contribution of projects to the different impact levels were considered significant (between 70% and 100%) to knowledge production , moderate (between 40% and 60%), to capacity building , and limited (1–30%) to informing policy and practice , social benefits and economic benefits . No additional impacts were reported.

Figure 2 shows the different impact categories and the distribution of impact subcategories. The size of the bars indicates the percentage of projects in which this specific impact occurred, according to the PIs.

Achieved impact bars, according to level (n = 68).

Achieved impact bars, according to level (n = 68).

Statistically significant differences were found according to the original impetus for the project: for projects intended to fill certain gaps in knowledge, the greatest impact was observed in knowledge production (p = 0.01) and capacity building (p = 0.03), while for projects targeting to a practical application, the greatest impact was observed in informing policy and practice (p = 0.05) and in social benefits (p = 0.01). In general, projects that interacted with end users had more impact in the levels of knowledge production (p = 0.01), capacity building (p = 0.03), and social benefits (p = 0.05). Projects that had begun over four years before the survey was completed was correlated with knowledge production (p = 0, 04), and PIs over 40 years of age and those with over 3 years research experience were correlated with more frequent impacts on knowledge production and capacity building (p ≤ 0.01). No differences were found regarding the gender of PI’s. The size of the differences can be found in the Supplementary Table S1 .

Internal consistency and content validity

The Cronbach’s alpha score, which measures the internal consistency of the questions, was satisfactory ( α  = 0.89). Table 2 shows its value in each domain (impact level). Internal consistency was satisfactory in all domains with the exception of economic benefits . However, the removal of any of the questions would have resulted in an equal or lower Cronbach's alpha.

Internal consistency for each domain (impact level)

Thirteen of the 17 experts contacted completed the content validity form and assessed whether the content of the questionnaire was appropriate and relevant to the purpose of the study. Seven were from social sciences and humanities, four from health sciences and two from ICT; 39% were women. All had longstanding experience as either researchers or research managers. The experts scored the 45 items according to their relevance and 76% of the ratings (n = 34) had an index of 0.5 or greater. The results for each item are shown in Table 3 . In accordance with the expert review, an item relating to ‘new academic networks’ was added.

Content validity of items according to experts (n = 13)

Items rated greater than or equal to 0.5; ICT: information and communication technologies.

Ninety-one percent of the items in knowledge production were rated acceptable (expert opinion index ≥ 0.5), as were 89% of the items in capacity building , 83% of the items in informing policy and practice , and 63% of the items in social benefits . In contrast, only 43% of the items (three out of seven) in the economic benefits domain achieved an acceptable rating. Some items were of higher relevance in specific fields: for example, items relating to health and social determinants were considered acceptable by health experts; training for final undergraduate’s projects was considered acceptable by ICT experts; influencing education systems and curricular assessments, was considered acceptable by social sciences and humanities, and ICT experts; and commercialization items were considered acceptable by health and ICT experts ( Table 3 ).

In this study, we tested the metric properties of a questionnaire designed to record the impact of university research originating from various disciplines. Tests of this kind, although rare in research impact assessment, are common in other study areas such as patient-reported outcome measures, education and psychology. The questionnaire displayed good internal consistency and acceptable content validity in our context. Internal consistency for all items on the instrument was excellent demonstrating that they all measured the same construct. However, since ‘impact’ is a multidimensional concept and, by definition, Cronbach’s alpha ‘indicates the correlation among items that measure one single construct’, ( Osburn 2000 ) the internal consistency of each of the five domains required evaluation; this was found to be excellent in all cases except economic benefits . Low internal consistency in this domain may be related to the fact it contained relatively few items, and/or the fact that most of the researchers who answered the questionnaire worked in the social sciences and humanities, and therefore impacts relating to transfer, commercialization and innovation were less likely to occur. An alternative possibility is that the items are, in fact, measuring more than one construct.

There is a consensus in the literature that content validity is largely a matter of judgment, ( Mastaglia et al. 2003 ) as content validity is not a property of the instrument, but of the instrument’s interpretation. We therefore incorporated two distinct phases in our study. In the first phase of development, conceptualization was enhanced through the analysis and mapping of the impacts of the randomly selected REF case; in the second the relevance of the scale’s content was evaluated through expert assessment. The expert assessment revealed that some items did not achieve acceptable content validity, especially in the domains of social benefits and economic benefits . However, it should be taken into account that while many of the items in the questionnaire were generic and thus relevant for all fields, a number were primarily specific to one field, and therefore, more relevant for experts in a particular field. Content validity was stronger in the domains ‘closest’ to the investigators. This may be due to the most frequently recognized impacts being both in areas where researchers have a degree of control and influence, ( Kalucy et al. 2009 ) and those which have been ‘traditionally’ used to measure research. In other words, their understanding of the concept of impact in the knowledge production , capacity building and informing policy and practice domains: that is, those at the intermediate level (secondary outputs) display greater homogeneity ( Kalucy et al. 2009 ).

Use of an online questionnaire in this research impact study provided data on a wide range of benefits deriving from UOC’s funded projects at a particular moment and its results address a message of accountability. Questionnaires can provide insights into respondents’ viewpoints and can systematically enhance accountability. Although assuming that PIs will provide truthful responses about the impact of their research is clearly a potential limitation, Hanney et al (2013) demonstrate that researchers do not routinely exaggerate the impacts of their research, at least in studies like this one, where there is no clear link between the replies given and future funding. International guidelines on research impact assessment studies recommend the use of a combination of methods to achieve comprehensive, robust results. ( Adam et al. 2018 ) However, the primary focus of this study was the quality and value of the survey instrument itself, therefore the issue of triangulating the findings with other methods was not explored. The questionnaire could be applied in future studies to select projects that require a more in-depth and closer analysis, such as how an understanding of scientific processes works in this context. Previous attempts have been made to assess the impact of university’s research in our context, but these have been restricted to the level of outputs (i.e. publications and patents), ( Associació Catalana d’Universitats Públiques (ACUP) 2017 ) or inputs’ level (i.e. contributions to Catalan GDP) ( Suriñach et al. 2017 ).

Evaluated as a whole, the research projects covered in this study was effective in the production of knowledge and the development of research skills in individuals and teams. This funded research has helped to generate new knowledge for other researchers and, to a lesser extent, for non-academic audiences. It has consolidated the position of UOC researchers (both experienced and novice) within national and international scientific communities, enabling them to develop and enhance ability to conduct quality research ( Trostle 1992 ).

Assessing the possible wider benefits of the research process (in terms of informing policy and practice , social benefits and economic benefits for society) proved more problematic. The relatively short period that had elapsed since the projects finished might have limited the assessment of impact. There was a striking disparity, in our results, between the return on research measured in terms of scientific impact ( knowledge production and capacity building ), notably high and uniform, and the limited and uneven contribution to wider benefits. This disparity is not a local phenomenon, but a recurrent finding in contemporary biomedical research worldwide. The Retrosight study, ( Wooding et al. 2014 ), which analyzed cardiovascular and stroke research in the United Kingdom found no correlation between knowledge production and the broader social impact of research. Behavioural aspects such as researcher engagement with potential users of the research or mission-oriented projects (targeted to practical applications) were associated with higher social benefits. This might be interpreted as strategic thinking on the part of researchers, in the sense that they consider the potential ‘mechanisms’ that might enhance the impact of their work. These results do not appear to be exceptional, since the final impact of research is influenced by the extent to which the knowledge obtained is made available to those in a position to use it.

Although the response rate was lower than expected, 43% is within the normal range for on-line surveys. ( Shih and Xitao 2008 ) In addition, arts and humanities researchers were underrepresented among PIs, but not between experts for considering content validity. One possible reason for this is that investigators are not fully aware of the influence of their research; another is the belief that research impact assessment studies are unable to provide valuable data about how arts and humanities research generates value. ( Molas-Gallart 2015 ) Arts and humanities is a discipline where in some cases the final objective of the research is not a practical application, but rather to change behaviours or people perspectives, which are therefore, more difficult to measure. According to Ochsner et al. (2012) there is a missing link between indicators and humanities scholars’ notions of quality. However, questionnaires have been used to successfully measure the impact of arts and humanities research, including in an approach adapted from the Payback Framework ( Levitt et al. 2010 ), and research impact analyses such as REF2014 ( Higher Education Funding Council of England et al. 2015 ) and the special issue of Arts and Humanities in Higher Education on the public value of arts and humanities research ( Benneworth 2015 ) have demonstrated that research in these disciplines may have many implications for society. Research results provide guidance and expertise and can be easily transferred to public debates, policies and institutional learning.

Weiss describes the rationale and conceptualization of assessment activities relating to the social impact of research as an open challenge ( Weiss 2007 ). As well as the well-known practice of attributing impact to a sole research project and the time-lag between the start of a research project and the attainment of a specific impact, in this study we also had the challenge to assess the impact of research from a diverse variety of topics and disciplines. Research impact studies are prevalent in disciplines such as health sciences ( Hanney et al. 2017 ) and agricultural research ( Weißhuhn et al. 2018 ) but less common in the social sciences and humanities, despite the REF2014 results revealing a wide array of impacts associated with various disciplines. ( Higher Education Funding Council of England et al. 2015 ) Our challenge was to analyze projects from highly diverse disciplines—social sciences, humanities, health sciences, and ICTs—and assess their varied impacts on society. We have attempted to develop a flexible and adaptable approach to assessing research impacts by utilizing a diverse amalgamation of indicators, including impact subcategories. However, due to ‘cultural’ differences between disciplines, we cannot guarantee that PIs from different knowledge areas have a homogeneous understanding of ‘research impact’: indeed the diversity of respondents when assessing the relevance of questionnaire items suggests otherwise. For this reason, a context analysis in which research is carried out and assessed, as described in the literature ( Adam et al. 2018 ) may help to decide which questionnaire items or domains should be included or removed in future studies.

To conclude, this study demonstrates that the easy-to-use questionnaire developed here is capable of measuring a wide range of research impact benefits and provides good internal consistency. Analyzing the metric properties of instruments used to measure research impact and establishing their validity will significantly contribute to research impact assessment and stimulate and extend reflection on the definition of research impact. Therefore, this questionnaire can be a powerful instrument to measure research impact when considered in context. The power of this instrument will be significantly improved when combined with other methodologies.

Surveys are widely used in research impact evaluation. They provide a broad overview of the state of a body of research, and supply comparable, easily analyzable data referring to a range of researchers and/or grants. The standardization of the approach enhances this comparability.

To our knowledge, the metric properties of impact assessment questionnaires have not been studied to date. The analysis of these properties can determine the internal consistency and content validity of these instruments and the extent to which they measure what they are intended to measure.

We thank the UOC principal investigators for providing us with their responses.

This project did not receive any specific grants from funding agencies in the public, commercial, or not-for-profit sectors.

Transparency

The lead authors (the manuscript’s guarantors) affirm that the manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned have been explained.

Adam P. et al.  ( 2018 ) ‘ ISRIA Statement: Ten-Point Guidelines for an Effective Process of Research Impact Assessment ’, Health Research Policy and Systems , 16 / 1 , DOI: 10.1186/s12961-018-0281-5.

Google Scholar

Associació Catalana d’Universitats Públiques (ACUP) ( 2017 ) Research and Innovation Indicators of Catalan Public Universities . Report 2016.

Australian Research Council ( 2016 ) State of Australian University Research 2015-2016: Volume 1 ERA National Report . Canberra : Commonwealth of Australia .

Google Preview

Aymerich M. et al.  ( 2012 ) ‘ Measuring the Payback of Research Activities: A Feasible Ex-Post Evaluation Methodology in Epidemiology and Public Health ’, Social Science and Medicine , 75 / 3 : 505 – 10 .

Benneworth P. ( 2015 ) ‘ Putting Impact into Context: The Janus Face of the Public Value of Arts and Humanities Research ’, Arts and Humanities in Higher Education , 14 / 1 : 3 – 8 .

Berlemann M. , Haucap J. ( 2015 ) ‘ Which Factors Drive the Decision to Opt out of Individual Research Rankings? An Empirical Study of Academic Resistance to Change ’, Research Policy , 44 / 5 : 1108 – 15 .

Clark L. A. , Watson D. ( 1995 ) ‘ Constructing Validity: Basic Issues in Objective Scale Development ’, Psychological Assessment , 7 / 3 : 309 – 19 .

Fogelholm M. et al.  ( 2012 ) ‘ Panel Discussion Does Not Improve Reliability of Peer Review for Medical Research Grant Proposals ’, Journal of Clinical Epidemiology , 65 / 1 : 47 – 52 .

Guthrie S. et al.  ( 2013 ) Measuring Research: A Guide to Research Evaluation Frameworks and Tools . RAND Corporation .

Hanney S. et al.  ( 2004 ) ‘ Proposed Methods for Reviewing the Outcomes of Health Research: The Impact of Funding by the UK’s “Arthritis Research Campaign” ’, Health Research Policy and Systems , 2 / 1 : 4.

Hanney S. et al.  ( 2013 ) ‘ Conducting Retrospective Impact Analysis to Inform a Medical Research Charity’s Funding Strategies: The Case of Asthma UK ’, Allergy, Asthma, and Clinical Immunology: Official Journal of the Canadian Society of Allergy and Clinical Immunology , 9 / 1 : 17 .

Hanney S. et al.  ( 2017 ) ‘ The Impact on Healthcare, Policy and Practice from 36 Multi-Project Research Programmes: Findings from Two Reviews ’, Health Res Policy Syst , 15 / 1 : 26 .

Higher Education Funding Council of England, et al.  ( 2015 ) The Nature, Scale and Beneficiaries of Research Impact: An Initial Analysis of Research Excellence Framework (REF) 2014 Impact Case Studies . London : HEFCE.

‘IAU HESD Cluster | HESD - Higher Education for Sustainable Development portal’ (n.d.) < http://iau-hesd.net/en/contenu/4648-iau-hesd-cluster.html > accessed 17 Dec 2018.

Jørgensen T. E. , Claeys-Kulik A.-L. ( 2018 ) Universities’ Strategies and Approaches Towards Diversity, Equity and Inclusion. Examples from Across Europe . Brussels : European University Association.

Kalucy E. C. et al.  ( 2009 ) ‘ The Feasibility of Determining the Impact of Primary Health Care Research Projects Using the Payback Framework ’, Health Research Policy and Systems , 7 : 11 .

Levitt R. , Celia C. , Diepeveen S. ( 2010 ) Assessing the Impact of Arts and Humanities Research at the University of Cambridge . Technical Report. RAND Corporation , 104 .

Mastaglia B. , Toye C. , Kristjanson L. J. ( 2003 ) ‘ Ensuring Content Validity in Instrument Development: Challenges and Innovative Approaches ’, Contemporary Nurse , 14 / 3 : 281 – 91 .

Molas-Gallart J. ( 2015 ) ‘ Research Evaluation and the Assessment of Public Value ’, Arts and Humanities in Higher Education , 14 / 1 : 111 – 26 .

Molas-Gallart J. et al.  ( 2016 ) ‘ Towards an Alternative Framework for the Evaluation of Translational Research Initiatives ’, Research Evaluation , 25 / 3 : 235 – 43 .

Nunnally J. C. , Bernstein I. H. ( 1994 ) Psychometric Theory . New York: McGraw-Hill .

Ochsner M. , Hug S. E. , Daniel H.-D. ( 2012 ) ‘ Indicators for Research Quality in the Humanities: Opportunities and Limitations ’, Bibliometrie - Praxis und Forschung , 1 /4: 1-17. DOI: 10.5283/bpf.157.

Osburn H. G. ( 2000 ) ‘ Coefficient Alpha and Related Internal Consistency Reliability Coefficients ’, Psychological Methods , 5 / 3 : 343 – 55 .

Panel on the return on investments in health research. ( 2009 ) Making an Impact: A Preferred Framework and Indicators to Measure Returns on Investment in Health Research . Ottawa, ON (Canada ): Canadian Academy of Health Science (CAHS ), Ed.

Pawson R. ( 2013 ) The Science of Evaluation: A Realist Manifesto . London: Sage Publications Ltd. http://dx.doi.org/10.4135/9781473913820

Peterson R. A. ( 1994 ) ‘ A Meta-Analysis of Cronbach’s Coefficient Alpha ’, Journal of Consumer Research , 21 / 2 : 381 .

Poppy G. ( 2015 ) ‘ Science Must Prepare for Impact ’, Nature , 526 / 7571 : 7.

‘San Francisco Declaration on Research Assessment (DORA)’. (n.d.) < https://sfdora.org/> accessed 17 Dec 2018.

Shih T. H. , Xitao F. ( 2008 ) ‘ Comparing Response Rates from Web and Mail Surveys: A Meta-Analysis ’, Field Methods , 20 / 3 : 249 – 71 .

Suriñach J. et al.  ( 2017 ) Socio-Economic Impacts of Catalan Public Universities and Research , Development and Innovation in Catalonia . Barcelona: Catalan Association of Public Universities (ACUP).

Trostle J. ( 1992 ) ‘ Research Capacity Building in International Health: Definitions, Evaluations and Strategies for Success ’, Social Science & Medicine , 35 / 11 : 1321 – 4 .

Universitat Oberta de Catalunya ( 2017 ) Strategic Plan Stage II 2017-2020 . Barcelona: UOC.

Van den Broucke S. , Dargent G. , Pletschette M. ( 2012 ) ‘ Development and Assessment of Criteria to Select Projects for Funding in the EU Health Programme ’, The European Journal of Public Health , 22 / 4 : 598 – 601 .

Weiss A. P. ( 2007 ) ‘ Reviews and Overviews Measuring the Impact of Medical Research: Moving from Outputs to Outcomes ’, Psychiatry: Interpersonal and Biological Processes , 164 / February : 206 – 14 .

Weißhuhn P. , Helming K. , Ferretti J. ( 2018 ) ‘ Research Impact Assessment in Agriculture—A Review of Approaches and Impact Areas ’, Research Evaluation , 27 / 1 : 36 – 42 .

Wooding S. et al.  ( 2010 ) Mapping the Impact: Exploring the Payback of Arthritis Research . Santa Monica, CA: RAND Corporation .

Wooding S. et al.  ( 2014 ) ‘ Understanding Factors Associated with the Translation of Cardiovascular Research: A Multinational Case Study Approach ’, Implementation Science , 9 / 1 : 47 .

Supplementary data

Email alerts, citing articles via.

  • Recommend to your Library

Affiliations

  • Online ISSN 1471-5449
  • Print ISSN 0958-2029
  • Copyright © 2024 Oxford University Press
  • About Oxford Academic
  • Publish journals with us
  • University press partners
  • What we publish
  • New features  
  • Open access
  • Institutional account management
  • Rights and permissions
  • Get help with access
  • Accessibility
  • Advertising
  • Media enquiries
  • Oxford University Press
  • Oxford Languages
  • University of Oxford

Oxford University Press is a department of the University of Oxford. It furthers the University's objective of excellence in research, scholarship, and education by publishing worldwide

  • Copyright © 2024 Oxford University Press
  • Cookie settings
  • Cookie policy
  • Privacy policy
  • Legal notice

This Feature Is Available To Subscribers Only

Sign In or Create an Account

This PDF is available to Subscribers Only

For full access to this pdf, sign in to an existing account, or purchase an annual subscription.

Questionnaire Method In Research

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Learn about our Editorial Process

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

On This Page:

A questionnaire is a research instrument consisting of a series of questions for the purpose of gathering information from respondents. Questionnaires can be thought of as a kind of written interview . They can be carried out face to face, by telephone, computer, or post.

Questionnaires provide a relatively cheap, quick, and efficient way of obtaining large amounts of information from a large sample of people.

Questionnaire

Data can be collected relatively quickly because the researcher would not need to be present when completing the questionnaires. This is useful for large populations when interviews would be impractical.

However, a problem with questionnaires is that respondents may lie due to social desirability. Most people want to present a positive image of themselves, and may lie or bend the truth to look good, e.g., pupils exaggerate revision duration.

Questionnaires can effectively measure relatively large subjects’ behavior, attitudes, preferences, opinions, and intentions more cheaply and quickly than other methods.

Often, a questionnaire uses both open and closed questions to collect data. This is beneficial as it means both quantitative and qualitative data can be obtained.

Closed Questions

A closed-ended question requires a specific, limited response, often “yes” or “no” or a choice that fit into pre-decided categories.

Data that can be placed into a category is called nominal data. The category can be restricted to as few as two options, i.e., dichotomous (e.g., “yes” or “no,” “male” or “female”), or include quite complex lists of alternatives from which the respondent can choose (e.g., polytomous).

Closed questions can also provide ordinal data (which can be ranked). This often involves using a continuous rating scale to measure the strength of attitudes or emotions.

For example, strongly agree / agree / neutral / disagree / strongly disagree / unable to answer.

Closed questions have been used to research type A personality (e.g., Friedman & Rosenman, 1974) and also to assess life events that may cause stress (Holmes & Rahe, 1967) and attachment (Fraley, Waller, & Brennan, 2000).

  • They can be economical. This means they can provide large amounts of research data for relatively low costs. Therefore, a large sample size can be obtained, which should represent the population from which a researcher can then generalize.
  • The respondent provides information that can be easily converted into quantitative data (e.g., count the number of “yes” or “no” answers), allowing statistical analysis of the responses.
  • The questions are standardized. All respondents are asked exactly the same questions in the same order. This means a questionnaire can be replicated easily to check for reliability . Therefore, a second researcher can use the questionnaire to confirm consistent results.

Limitations

  • They lack detail. Because the responses are fixed, there is less scope for respondents to supply answers that reflect their true feelings on a topic.

Open Questions

Open questions allow for expansive, varied answers without preset options or limitations.

Open questions allow people to express what they think in their own words. Open-ended questions enable the respondent to answer in as much detail as they like in their own words. For example: “can you tell me how happy you feel right now?”

Open questions will work better if you want to gather more in-depth answers from your respondents. These give no pre-set answer options and instead, allow the respondents to put down exactly what they like in their own words.

Open questions are often used for complex questions that cannot be answered in a few simple categories but require more detail and discussion.

Lawrence Kohlberg presented his participants with moral dilemmas. One of the most famous concerns a character called Heinz, who is faced with the choice between watching his wife die of cancer or stealing the only drug that could help her.

Participants were asked whether Heinz should steal the drug or not and, more importantly, for their reasons why upholding or breaking the law is right.

  • Rich qualitative data is obtained as open questions allow respondents to elaborate on their answers. This means the research can determine why a person holds a certain attitude .
  • Time-consuming to collect the data. It takes longer for the respondent to complete open questions. This is a problem as a smaller sample size may be obtained.
  • Time-consuming to analyze the data. It takes longer for the researcher to analyze qualitative data as they have to read the answers and try to put them into categories by coding, which is often subjective and difficult. However, Smith (1992) has devoted an entire book to the issues of thematic content analysis that includes 14 different scoring systems for open-ended questions.
  • Not suitable for less educated respondents as open questions require superior writing skills and a better ability to express one’s feelings verbally.

Questionnaire Design

With some questionnaires suffering from a response rate as low as 5%, a questionnaire must be well designed.

There are several important factors in questionnaire design.

Pilot Study

Question order.

Questions should progress logically from the least sensitive to the most sensitive, from the factual and behavioral to the cognitive, and from the more general to the more specific.

The researcher should ensure that previous questions do not influence the answer to a question.

Question order effects

  • Question order effects occur when responses to an earlier question affect responses to a later question in a survey. They can arise at different stages of the survey response process – interpretation, information retrieval, judgment/estimation, and reporting.
  • Types of question order effects include: unconditional (subsequent answers affected by prior question topic), conditional (subsequent answers depend on the response to the prior question), and associational (correlation between two questions changes based on order).
  • Question order effects have been found across different survey topics like social and political attitudes, health and safety studies, vignette research, etc. Effects may be moderated by respondent factors like age, education level, knowledge and attitudes about the topic.
  • To minimize question order effects, recommendations include avoiding judgmental dependencies, separating potentially reactive questions, randomizing questions, following good survey design principles, considering respondent characteristics, and intentionally examining question context and order.

Terminology

  • There should be a minimum of technical jargon. Questions should be simple, to the point, and easy to understand. The language of a questionnaire should be appropriate to the vocabulary of the group of people being studied.
  • Use statements that are interpreted in the same way by members of different subpopulations of the population of interest.
  • For example, the researcher must change the language of questions to match the social background of the respondent’s age / educational level / social class/ethnicity, etc.

Presentation

Ethical issues.

  • The researcher must ensure that the information provided by the respondent is kept confidential, e.g., name, address, etc.
  • This means questionnaires are good for researching sensitive topics as respondents will be more honest when they cannot be identified.
  • Keeping the questionnaire confidential should also reduce the likelihood of psychological harm, such as embarrassment.
  • Participants must provide informed consent before completing the questionnaire and must be aware that they have the right to withdraw their information at any time during the survey/ study.

Problems with Postal Questionnaires

At first sight, the postal questionnaire seems to offer the opportunity to get around the problem of interview bias by reducing the personal involvement of the researcher. Its other practical advantages are that it is cheaper than face-to-face interviews and can quickly contact many respondents scattered over a wide area.

However, these advantages must be weighed against the practical problems of conducting research by post. A lack of involvement by the researcher means there is little control over the information-gathering process.

The data might not be valid (i.e., truthful) as we can never be sure that the questionnaire was completed by the person to whom it was addressed.

That, of course, assumes there is a reply in the first place, and one of the most intractable problems of mailed questionnaires is a low response rate. This diminishes the reliability of the data

Also, postal questionnaires may not represent the population they are studying. This may be because:

  • Some questionnaires may be lost in the post, reducing the sample size.
  • The questionnaire may be completed by someone not a member of the research population.
  • Those with strong views on the questionnaire’s subject are more likely to complete it than those without interest.

Benefits of a Pilot Study

A pilot study is a practice / small-scale study conducted before the main study.

It allows the researcher to try out the study with a few participants so that adjustments can be made before the main study, saving time and money.

It is important to conduct a questionnaire pilot study for the following reasons:

  • Check that respondents understand the terminology used in the questionnaire.
  • Check that emotive questions are not used, as they make people defensive and could invalidate their answers.
  • Check that leading questions have not been used as they could bias the respondent’s answer.
  • Ensure the questionnaire can be completed in an appropriate time frame (i.e., it’s not too long).

Frequently Asked Questions 

How do psychological researchers analyze the data collected from questionnaires.

Psychological researchers analyze questionnaire data by looking for patterns and trends in people’s responses. They use numbers and charts to summarize the information.

They calculate things like averages and percentages to see what most people think or feel. They also compare different groups to see if there are any differences between them.

By doing these analyses, researchers can understand how people think, feel, and behave. This helps them make conclusions and learn more about how our minds work.

Are questionnaires effective in gathering accurate data?

Yes, questionnaires can be effective in gathering accurate data. When designed well, with clear and understandable questions, they allow individuals to express their thoughts, opinions, and experiences.

However, the accuracy of the data depends on factors such as the honesty and accuracy of respondents’ answers, their understanding of the questions, and their willingness to provide accurate information. Researchers strive to create reliable and valid questionnaires to minimize biases and errors.

It’s important to remember that while questionnaires can provide valuable insights, they are just one tool among many used in psychological research.

Can questionnaires be used with diverse populations and cultural contexts?

Yes, questionnaires can be used with diverse populations and cultural contexts. Researchers take special care to ensure that questionnaires are culturally sensitive and appropriate for different groups.

This means adapting the language, examples, and concepts to match the cultural context. By doing so, questionnaires can capture the unique perspectives and experiences of individuals from various backgrounds.

This helps researchers gain a more comprehensive understanding of human behavior and ensures that everyone’s voice is heard and represented in psychological research.

Are questionnaires the only method used in psychological research?

No, questionnaires are not the only method used in psychological research. Psychologists use a variety of research methods, including interviews, observations , experiments , and psychological tests.

Each method has its strengths and limitations, and researchers choose the most appropriate method based on their research question and goals.

Questionnaires are valuable for gathering self-report data, but other methods allow researchers to directly observe behavior, study interactions, or manipulate variables to test hypotheses.

By using multiple methods, psychologists can gain a more comprehensive understanding of human behavior and mental processes.

What is a semantic differential scale?

The semantic differential scale is a questionnaire format used to gather data on individuals’ attitudes or perceptions. It’s commonly incorporated into larger surveys or questionnaires to assess subjective qualities or feelings about a specific topic, product, or concept by quantifying them on a scale between two bipolar adjectives.

It presents respondents with a pair of opposite adjectives (e.g., “happy” vs. “sad”) and asks them to mark their position on a scale between them, capturing the intensity of their feelings about a particular subject.

It quantifies subjective qualities, turning them into data that can be statistically analyzed.

Ayidiya, S. A., & McClendon, M. J. (1990). Response effects in mail surveys. Public Opinion Quarterly, 54 (2), 229–247. https://doi.org/10.1086/269200

Fraley, R. C., Waller, N. G., & Brennan, K. A. (2000). An item-response theory analysis of self-report measures of adult attachment. Journal of Personality and Social Psychology, 78, 350-365.

Friedman, M., & Rosenman, R. H. (1974). Type A behavior and your heart . New York: Knopf.

Gold, R. S., & Barclay, A. (2006). Order of question presentation and correlation between judgments of comparative and own risk. Psychological Reports, 99 (3), 794–798. https://doi.org/10.2466/PR0.99.3.794-798

Holmes, T. H., & Rahe, R. H. (1967). The social readjustment rating scale. Journal of psychosomatic research, 11(2) , 213-218.

Schwarz, N., & Hippler, H.-J. (1995). Subsequent questions may influence answers to preceding questions in mail surveys. Public Opinion Quarterly, 59 (1), 93–97. https://doi.org/10.1086/269460

Smith, C. P. (Ed.). (1992). Motivation and personality: Handbook of thematic content analysis . Cambridge University Press.

Further Information

  • Questionnaire design and scale development
  • Questionnaire Appraisal Form

Print Friendly, PDF & Email

  • Skip to main content
  • Keyboard shortcuts for audio player

Shots - Health News

Your Health

  • Treatments & Tests
  • Health Inc.
  • Public Health

Helping women get better sleep by calming the relentless 'to-do lists' in their heads

Yuki Noguchi

Yuki Noguchi

questionnaire research article

Katie Krimitsos is among the majority of American women who have trouble getting healthy sleep, according to a new Gallup survey. Krimitsos launched a podcast called Sleep Meditation for Women to offer some help. Natalie Champa Jennings/Natalie Jennings, courtesy of Katie Krimitsos hide caption

Katie Krimitsos is among the majority of American women who have trouble getting healthy sleep, according to a new Gallup survey. Krimitsos launched a podcast called Sleep Meditation for Women to offer some help.

When Katie Krimitsos lies awake watching sleepless hours tick by, it's almost always because her mind is wrestling with a mental checklist of things she has to do. In high school, that was made up of homework, tests or a big upcoming sports game.

"I would be wide awake, just my brain completely spinning in chaos until two in the morning," says Krimitsos.

There were periods in adulthood, too, when sleep wouldn't come easily, like when she started a podcasting company in Tampa, or nursed her first daughter eight years ago. "I was already very used to the grainy eyes," she says.

Now 43, Krimitsos says in recent years she found that mounting worries brought those sleepless spells more often. Her mind would spin through "a million, gazillion" details of running a company and a family: paying the electric bill, making dinner and dentist appointments, monitoring the pets' food supply or her parents' health checkups. This checklist never, ever shrank, despite her best efforts, and perpetually chased away her sleep.

"So we feel like there are these enormous boulders that we are carrying on our shoulders that we walk into the bedroom with," she says. "And that's what we're laying down with."

By "we," Krimitsos means herself and the many other women she talks to or works with who complain of fatigue.

Women are one of the most sleep-troubled demographics, according to a recent Gallup survey that found sleep patterns of Americans deteriorating rapidly over the past decade.

"When you look in particular at adult women under the age of 50, that's the group where we're seeing the most steep movement in terms of their rate of sleeping less or feeling less satisfied with their sleep and also their rate of stress," says Gallup senior researcher Sarah Fioroni.

Overall, Americans' sleep is at an all time low, in terms of both quantity and quality.

A majority – 57% – now say they could use more sleep, which is a big jump from a decade ago. It's an acceleration of an ongoing trend, according to the survey. In 1942, 59% of Americans said that they slept 8 hours or more; today, that applies to only 26% of Americans. One in five people, also an all-time high, now sleep fewer than 5 hours a day.

Popular myths about sleep, debunked

Popular myths about sleep, debunked

"If you have poor sleep, then it's all things bad," says Gina Marie Mathew, a post-doctoral sleep researcher at Stony Brook Medicine in New York. The Gallup survey did not cite reasons for the rapid decline, but Mathew says her research shows that smartphones keep us — and especially teenagers — up later.

She says sleep, as well as diet and exercise, is considered one of the three pillars of health. Yet American culture devalues rest.

"In terms of structural and policy change, we need to recognize that a lot of these systems that are in place are not conducive to women in particular getting enough sleep or getting the sleep that they need," she says, arguing things like paid family leave and flexible work hours might help women sleep more, and better.

No one person can change a culture that discourages sleep. But when faced with her own sleeplessness, Tampa mom Katie Krimitsos started a podcast called Sleep Meditation for Women , a soothing series of episodes in which she acknowledges and tries to calm the stresses typical of many women.

Many Grouchy, Error-Prone Workers Just Need More Sleep

Shots - Health News

Many grouchy, error-prone workers just need more sleep.

That podcast alone averages about a million unique listeners a month, and is one of 20 podcasts produced by Krimitsos's firm, Women's Meditation Network.

"Seven of those 20 podcasts are dedicated to sleep in some way, and they make up for 50% of my listenership," Krimitsos notes. "So yeah, it's the biggest pain point."

Krimitsos says she thinks women bear the burdens of a pace of life that keeps accelerating. "Our interpretation of how fast life should be and what we should 'accomplish' or have or do has exponentially increased," she says.

She only started sleeping better, she says, when she deliberately cut back on activities and commitments, both for herself and her two kids. "I feel more satisfied at the end of the day. I feel more fulfilled and I feel more willing to allow things that are not complete to let go."

Watch CBS News

Video games help and harm U.S. teens — leading to both friendships and bullying, Pew survey says

By Cara Tabachnick

May 9, 2024 / 8:00 AM EDT / CBS News

Video games are where U.S. teens form friendships — but also where a majority say they experience bullying and name-calling, a Pew survey released Thursday found.

More than 1,400 teens from ages 13-17 participated in the survey last fall, answering questions on various aspects of  their relationship with video games . Some of the results are to be expected. For instance, a large majority of teens in the U.S. — more than 85% — play video games.

But other topics weren't so clear cut and painted a more complex picture of how teens viewed their experience. Survey participants reported video games were how they had fun and made friends, despite also reporting bullying, harassment and name-calling. But even with those issues, they still wanted to continue playing, saying gaming also helped them with their problem-solving skills and even their mental health. 

Competition Begins In National Video Game Event

Most teens said they play video games for fun or "entertainment reasons," with around three-quarters saying they play to spend time with others. They said they don't see the games as harmful to themselves or their lifestyle, even though 40% said it hurt their sleep. 

Some 58% of respondents said they felt they played the right amount of video games. 

There were also stark differences in how different genders said they respond to and engage with video games.

Teen boys play video games far more often than girls — and almost two-thirds play them daily — with the activity making up a large portion of their social lives. More than half of the teen boys said video games helped them make friends, compared to 35% of girls surveyed. 

Black and Latino teens said they made friends at a higher rate than White teens, and the numbers jumped even higher for those who considered themselves gamers. 

Even with all the friendships made, about half of teen boys said they've been called offensive names while playing, with about a third of girls reporting the same. Eight in 10 said that bullying is an issue in video games and about one-third of the teens surveyed said it's a major problem.

Cara Tabachnick is a news editor and journalist at CBSNews.com. Cara began her career on the crime beat at Newsday. She has written for Marie Claire, The Washington Post and The Wall Street Journal. She reports on justice and human rights issues. Contact her at [email protected]

More from CBS News

Video shows bus in Russia plunge off bridge in deadly crash

More than 40 still feared trapped under rubble after building collapse

Apple apologizes for iPad Pro commercial after backlash

As a Social Security cut looms, should seniors buy long-term care insurance now?

  • Election 2024
  • Entertainment
  • Newsletters
  • Photography
  • Personal Finance
  • AP Investigations
  • AP Buyline Personal Finance
  • AP Buyline Shopping
  • Press Releases
  • Israel-Hamas War
  • Russia-Ukraine War
  • Global elections
  • Asia Pacific
  • Latin America
  • Middle East
  • Election Results
  • Delegate Tracker
  • AP & Elections
  • Auto Racing
  • 2024 Paris Olympic Games
  • Movie reviews
  • Book reviews
  • Personal finance
  • Financial Markets
  • Business Highlights
  • Financial wellness
  • Artificial Intelligence
  • Social Media

News organizations have trust issues as they gear up to cover another election, a poll finds

About half of Americans say they are extremely or very concerned that news organizations will report inaccuracies or misinformation during the election. Some 42% express worry that news outlets will use generative artificial intelligence to create stories, according to a poll. (AP Video: Serkan Gurbuz)

FILE - Journalists line the press stand before Republican presidential candidate former President Donald Trump speaks at a caucus night party in Des Moines, Iowa, Jan. 15, 2024. Attitudes toward the media and political news ahead of the election were explored in a poll from the American Press Institute and The Associated Press-NORC Center for Public Affairs Research. (AP Photo/Andrew Harnik, File)

FILE - Journalists line the press stand before Republican presidential candidate former President Donald Trump speaks at a caucus night party in Des Moines, Iowa, Jan. 15, 2024. Attitudes toward the media and political news ahead of the election were explored in a poll from the American Press Institute and The Associated Press-NORC Center for Public Affairs Research. (AP Photo/Andrew Harnik, File)

  • Copy Link copied

Dave Bauder stands for a portrait at the New York headquarters of The Associated Press on Tuesday, Aug. 23, 2022. (AP Photo/Patrick Sison)

NEW YORK (AP) — Even as many Americans say they learn about the 2024 election campaign from national news outlets, a disquieting poll reveals some serious trust issues.

About half of Americans, 53%, say they are extremely or very concerned that news organizations will report inaccuracies or misinformation during the election. Some 42% express worry that news outlets will use generative artificial intelligence to create stories, according to a poll from the American Press Institute and The Associated Press-NORC Center for Public Affairs Research .

The poll found 47% of Americans also expressing serious concern that news outlets would report information that has not been confirmed or verified, and 44% worry that accurate information will be presented in a way that favors one side or another.

Half of Americans say they get election news always or frequently from national news outlets, a percentage that is higher among older respondents, the poll found.

“The level of engagement is good,” said Michael Bolden, CEO of the American Press Institute. “The thing that’s most concerning is that they’re not sure they can actually trust the information.”

Years of suspicion about journalists, much of it sown by politicians, is partly responsible, he said. People are also less familiar with how journalism works. The poll found about half of respondents say they have at least a moderate amount of confidence in the information they receive from either national or local news outlets when it comes to the 2024 elections, though only about 1 in 10 say they have a great deal of confidence.

AP AUDIO: News organizations have trust issues as they gear up to cover another election, a poll finds.

AP correspondent Donna Warder reports on a new poll that suggests major trust issues when it comes to the public’s ability to believe the media’s coverage of the 2024 election campaign.

What to know about the 2024 Election

  • Democracy: American democracy has overcome big stress tests since 2020. More challenges lie ahead in 2024.
  • AP’s Role: The Associated Press is the most trusted source of information on election night, with a history of accuracy dating to 1848. Learn more.
  • Read the latest: Follow AP’s complete coverage of this year’s election.

“There may have been a time when people knew a journalist because one lived on their block,” Bolden said. “The way the industry has been decimated, that’s much less likely.”

Simply putting out the news often isn’t good enough anymore, he said. There’s a growing disconnect between news organizations and communities that the outlets need to address, by helping to let people know what journalists do and how people reporting news are their friends and neighbors, he said.

Outlets should lean into a convenor role, bringing people together for newsworthy events, he said.

About half of U.S. adults say they follow the news about presidential elections closely, with older adults being more engaged. About two-thirds of Americans age 60 or older say they keep a close eye on presidential election news, compared wth roughly one-third of those under age 30.

The same trend is seen with local and state election news. While the poll found that 46% of Americans age 60 or older say they follow news about local and state elections closely, only 16% of people age 18 to 29 said the same thing.

“As they transition to becoming older people, will they begin to care?” Bolden asked. “If they don’t begin to care, what will that mean for local and state communities?”

Young people, those under age 30, are about as likely to get election news from social media or friends or family as they are to get it from national or local news outlets, the poll found. Black and Latino adults are somewhat more likely to express “a great deal” of confidence in the reliability of social media as a source of election news than white Americans are.

That’s both a warning sign, since there is a lot more misinformation to be found on social media, and an opportunity for traditional outlets to make more of their work available this way, Bolden said.

About 6 in 10 Democrats say they get election news from national outlets at least frequently. That’s more than the 48% of Republicans or 34% of independents, according to the poll. Republicans are more likely than Democrats and independents to express concern about inaccurate information or misinformation in news coverage during the upcoming elections. About 6 in 10 Republicans are concerned about this, compared with about half of Democrats.

Besides inaccuracies, many also expressed serious concern about election news that focuses too much on division or controversies or concentrates on who may win or lose — the horserace aspect of political coverage — rather than issues or the character of candidates.

Most Americans say that for them to make informed decisions about the 2024 state and local elections, they want national and local news outlets to highlight candidates’ values or their different positions on key social issues. In each case, about three-quarters of U.S. adults say they would like “a lot” or “some” coverage of these topics.

The poll of 2,468 adults was conducted March 21-25, 2024, using a sample drawn from NORC’s probability-based AmeriSpeak Panel, which is designed to be representative of the U.S. population. The margin of error is plus or minus 2.9 percentage points.

David Bauder writes about media for The Associated Press. Follow him at http://twitter.com/dbauder .

DAVID BAUDER

Numbers, Facts and Trends Shaping Your World

Read our research on:

Full Topic List

Regions & Countries

  • Publications
  • Our Methods
  • Short Reads
  • Tools & Resources

Read Our Research On:

Voters’ views of Trump and Biden differ sharply by religion

The U.S. electorate continues to be sharply divided along religious lines.

The latest Pew Research Center survey finds that most registered voters who are White Christians would vote for Republican Donald Trump over Democrat Joe Biden if the 2024 presidential election were held today. More than half of White Christians think Trump was a “great” or “good” president and don’t think he broke the law in an effort to change the outcome of the 2020 election.

In stark contrast, most registered voters who are Black Protestants or religious “nones” – those who self-identify as atheists, agnostics or “nothing in particular” – would vote for Biden over Trump. Large numbers in these groups also say Trump was a “terrible” president and that he broke the law trying to overturn the 2020 election results.

Pew Research Center conducted this analysis to highlight religious differences in U.S. voters’ views about the 2024 presidential election. For this analysis, we surveyed 8,709 adults – including 7,166 registered voters – from April 8 to 14, 2024. Everyone who took part in this survey is a member of the Center’s American Trends Panel (ATP), an online survey panel that is recruited through national, random sampling of residential addresses. This way nearly all U.S. adults have a chance of selection. The survey is weighted to be representative of the U.S. adult population by gender, race, ethnicity, partisan affiliation, education and other categories.  Read more about the ATP’s methodology .

Here are the  questions used for this report , along with responses, and the  survey methodology . Here are details about sample sizes and margins of error for groups analyzed in this report.

Religion and the 2024 presidential election

A diverging bar chart showing that most White Christian voters would vote for Trump if the election were held today; most religious 'nones' and Black Protestants would back Biden.

While most White Christian voters say they would vote for Trump over Biden if the election were held today, there are some differences by religious tradition. Trump draws support from:  

  • 81% of White evangelical Protestant voters
  • 61% of White Catholics
  • 57% of White Protestants who are not evangelical

By contrast, 77% of Black Protestant voters say they would vote for Biden over Trump. Most religious “nones” also say this, including:

  • 87% of atheist voters
  • 82% of agnostics
  • 57% of those whose religion is “nothing in particular” 

These presidential preferences reflect the partisan leanings of U.S. religious groups . White Christians have been trending in a Republican direction for quite some time, while Black Protestants and religious “nones” have long been strongly Democratic.

The Center’s new survey includes responses from Jews, Muslims, Buddhists, Hindus and people from many other religious backgrounds, as well as adherents of smaller Christian groups like Hispanic Protestants and members of the Church of Jesus Christ of Latter-day Saints (widely known as Mormons). However, the survey does not include enough respondents from these smaller religious categories to be able to report on them separately.

Church attendance and voting preferences in 2024

A diverging bar chart showing that, among Christian voters, regular churchgoers back Trump at slightly higher rate than nonattenders.

Among Christians, support for Trump is somewhat higher among regular church attenders than non-churchgoers. Overall, 62% of Christian voters who say they go to church at least once or twice a month support Trump over Biden. Among Christians who go to church less often, 55% would vote for Trump if the election were today.

Among White evangelical Protestant voters, 84% of regular churchgoers say they would vote for Trump, compared with 77% of White evangelicals who don’t go to church regularly.

White nonevangelical Protestants are the only Christian group in which support for Trump is significantly stronger among nonattenders than among regular churchgoers.

Voters’ views of Biden and Trump as presidents

About three-quarters of White evangelical Protestant voters say Trump was a “great” (37%) or “good” (37%) president. Roughly half of White Catholics and White nonevangelical Protestants share this view.

When it comes to Biden, atheists and Black Protestants rate the current president’s performance most favorably. Roughly half of voters in each of these groups say Biden is a great or good president.

Overall, Trump gets higher marks on these questions than Biden. This is because Trump supporters are more inclined to say he was a great or good president than Biden supporters are to say the same about him.

A horizontal stacked bar chart showing that 74% of White evangelical voters say Trump was a 'great' or 'good' president.

Views of whether Trump broke the law in effort to change 2020 election outcome

A horizontal stacked bar chart showing that most atheist, agnostic, Black Protestant voters say Trump broke the law in effort to change outcome of 2020 election; just 16% of White evangelicals agree.

People in the religious groups that are most supportive of Biden tend to think Trump broke the law in an effort to change the outcome of the 2020 election. Most atheists (83%) say this, as do 70% of Black Protestants and 63% of agnostics.

By contrast, just 16% of White evangelical Protestants say Trump broke the law trying to change the 2020 election outcome. Another 15% of White evangelicals say they think Trump did something wrong but did not break the law, while the largest share by far (47%) say Trump did nothing wrong.

Note: Here are the  questions used for this report , along with responses, and the  survey methodology .

  • Donald Trump
  • Election 2024
  • Religion & Politics

Gregory A. Smith's photo

Gregory A. Smith is an associate director of research at Pew Research Center .

In Tight Presidential Race, Voters Are Broadly Critical of Both Biden and Trump

Changing partisan coalitions in a politically divided nation, about 1 in 4 americans have unfavorable views of both biden and trump, 2024 presidential primary season was one of the shortest in the modern political era, americans more upbeat on the economy; biden’s job rating remains very low, most popular.

1615 L St. NW, Suite 800 Washington, DC 20036 USA (+1) 202-419-4300 | Main (+1) 202-857-8562 | Fax (+1) 202-419-4372 |  Media Inquiries

Research Topics

  • Age & Generations
  • Coronavirus (COVID-19)
  • Economy & Work
  • Family & Relationships
  • Gender & LGBTQ
  • Immigration & Migration
  • International Affairs
  • Internet & Technology
  • Methodological Research
  • News Habits & Media
  • Non-U.S. Governments
  • Other Topics
  • Politics & Policy
  • Race & Ethnicity
  • Email Newsletters

ABOUT PEW RESEARCH CENTER  Pew Research Center is a nonpartisan fact tank that informs the public about the issues, attitudes and trends shaping the world. It conducts public opinion polling, demographic research, media content analysis and other empirical social science research. Pew Research Center does not take policy positions. It is a subsidiary of  The Pew Charitable Trusts .

Copyright 2024 Pew Research Center

Terms & Conditions

Privacy Policy

Cookie Settings

Reprints, Permissions & Use Policy

questionnaire research article

Microsoft and LinkedIn release the 2024 Work Trend Index on the state of AI at work

May 8, 2024 | Microsoft Source

  • Share on Facebook (opens new window)
  • Share on LinkedIn (opens new window)
  • Share on Twitter (opens new window)

questionnaire research article

New data shows most employees are experimenting with AI and growing their skills — now, the job of every leader is to channel this experimentation into business impact

REDMOND, Wash. — May 8, 2024 — On Wednesday, Microsoft Corp. and LinkedIn released the 2024 Work Trend Index, a joint report on the state of AI at work titled, “ AI at work is here. Now comes the hard part .” The research — based on a survey of 31,000 people across 31 countries, labor and hiring trends on LinkedIn, trillions of Microsoft 365 productivity signals, and research with Fortune 500 customers — shows how, just one year in, AI is influencing the way people work, lead and hire around the world. Microsoft also announced new capabilities in Copilot for Microsoft 365, and LinkedIn made free more than 50 learning courses for LinkedIn Premium subscribers designed to empower professionals at all levels to advance their AI aptitude. [1]

The data is in: 2024 is the year AI at work gets real. Use of generative AI at work has nearly doubled in the past six months. LinkedIn is seeing a significant increase in professionals adding AI skills to their profiles, and most leaders say they wouldn’t hire someone without AI skills. But with many leaders worried their company lacks an AI vision, and employees bringing their own AI tools to work, leaders have reached the hard part of any tech disruption: moving from experimentation to tangible business impact.

“AI is democratizing expertise across the workforce,” said Satya Nadella, chairman and CEO, Microsoft. “Our latest research highlights the opportunity for every organization to apply this technology to drive better decision-making, collaboration — and ultimately business outcomes.”

The report highlights three insights every leader and professional needs to know about AI’s impact on work and the labor market in the year ahead:

  • Employees want AI at work — and won’t wait for companies to catch up: Seventy-five percent of knowledge workers now use AI at work. Employees, many of them struggling to keep up with the pace and volume of work, say AI saves time, boosts creativity, and allows them to focus on their most important work. But although 79% of leaders agree AI adoption is critical to remain competitive, 59% worry about quantifying the productivity gains of AI and 60% say their company lacks a vision and plan to implement it. So, employees are taking things into their own hands. 78% of AI users are bringing their own tools to work — Bring Your Own AI (BYOAI) — missing out on the benefits that come from strategic AI use at scale and putting company data at risk. The opportunity for every leader is to channel this momentum into business impact at scale.
  • For employees, AI raises the bar and breaks the career ceiling : Although AI and job loss are top of mind for many, the data offers a more nuanced view — one with a hidden talent shortage, employees eyeing a career change, and massive opportunity for those willing to skill up on AI. A majority of leaders (55%) are concerned about having enough talent to fill roles this year with leaders in cybersecurity, engineering and creative design feeling the pinch most. And professionals are looking. Forty-six percent across the globe are considering quitting in the year ahead — an all-time high since the Great Reshuffle of 2021. A separate LinkedIn study found U.S. numbers to be even higher with 85% eyeing career moves. Although two-thirds of leaders (66%) wouldn’t hire someone without AI skills, only 39% of users have received AI training from their company and only 25% of companies expect to offer it this year. So, professionals are skilling up on their own. As of late last year, we’ve seen a 142x increase in LinkedIn members adding AI skills like Copilot and ChatGPT to their profiles and a 160% increase in nontechnical professionals using LinkedIn Learning courses to build their AI aptitude. In a world where AI mentions in LinkedIn job posts drive a 17% bump in application growth, it’s a two-way street: Organizations that empower employees with AI tools and training will attract the best talent, and professionals who skill up will have the edge.
  • The rise of the AI power user — and what they reveal about the future: Four types of AI users emerged in the research — from skeptics who rarely use AI to power users who use it extensively. Compared to skeptics, AI power users have reoriented their workdays in fundamental ways, reimagining business processes and saving over 30 minutes per day. Over 90% of power users say AI makes their overwhelming workload more manageable and their work more enjoyable, but they aren’t doing it on their own. These users are 61% more likely to have heard from their CEO on the importance of using generative AI at work, 53% more likely to receive encouragement from leadership to consider how AI can transform their function, and 35% more likely to receive tailored AI training for their specific role or function.

“AI is redefining work, and it’s clear we need new playbooks,” said Ryan Roslansky, CEO of LinkedIn. “It’s the leaders who build for agility instead of stability and invest in skill building internally that will give their organizations a competitive advantage and create more efficient, engaged and equitable teams.”

Microsoft is also announcing Copilot for Microsoft 365 innovations to help people get started with AI.

  • A new auto-complete feature is coming to the prompt box. Copilot will now help people who have the start of a prompt by offering to complete it, suggesting a more detailed prompt based on what is being typed, to deliver a stronger result.
  • When people know what they want, but don’t have the right words, the new rewrite feature in Copilot will turn a basic prompt into a rich one with the click of a button.
  • Catch Up is a new chat interface that surfaces personal insights based on recent activity and provides responsive recommendations. For example, Copilot will flag an upcoming meeting and provide relevant information to help participants prepare.
  • And new capabilities in Copilot Lab will enable people to create, publish and manage prompts tailored to them, and to their specific team, role and function.

These features will be available in the coming months.

LinkedIn is also providing AI tools to enable you to stay ahead in your career.

  • For upskilling. LinkedIn Learning offers more than 22,000 courses, including more than 600 AI courses, to build aptitude in generative AI , empower your teams to make GAI-powered business investments , or simply to keep your skills sharp. This includes over 50 new AI learning courses to empower professionals at all skill levels. New courses are free and available for everyone to use through July 8. Additionally, our new AI-Powered Coaching in LinkedIn Learning helps learners find the content they need to grow their skills faster, with greater personalization and guided conversational learning.
  • For career advancement. For LinkedIn Premium subscribers, AI-powered personalized takeaways on LinkedIn Feed on posts, articles or videos (from the article to the commentary) can also help you daily in your career with personalized, relevant insights and opportunities including ideas and actions you can take.
  • For job seeking. And if you’re looking to change your job, we’re also making it easier and faster to find your ideal job. With new AI-powered tools, you can now assess your fit for a role in seconds based on your experience and skills, get advice on how to stand out, and subscribers will also see nudges, for example suggestions for skills to build, professionals in your network to reach out to, and more . So far, more than 90% of subscribers who have access shared it’s been helpful in job search.

To learn more, visit the Official Microsoft Blog , the 2024 Work Trend Index Report , and head to LinkedIn to hear more from the company’s Chief Economist, Karin Kimbrough.

About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) creates platforms and tools powered by AI to deliver innovative solutions that meet the evolving needs of our customers. The technology company is committed to making AI available broadly and doing so responsibly, with a mission to empower every person and every organization on the planet to achieve more.

About LinkedIn

LinkedIn connects the world’s professionals to make them more productive and successful and transforms the way companies hire, learn, market and sell. Our vision is to create economic opportunity for every member of the global workforce through the ongoing development of the world’s first Economic Graph. LinkedIn has more than 1 billion members and has offices around the globe.

For more information, press only:

Microsoft Media Relations, WE Communications, (425) 638-7777,  [email protected]

LinkedIn Press Line, [email protected]

Note to editors:  For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at  http://news.microsoft.com . Web links, telephone numbers and titles were correct at time of publication but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at  https://news.microsoft.com/microsoft-public-relations-contacts .

[1] Courses will be available for free until July 8, 2024.

Related Posts

ServiceNow and Microsoft expand strategic alliance, combining generative AI capabilities to enhance choice and flexibility

Microsoft announces $3.3 billion investment in Wisconsin to spur artificial intelligence innovation and economic growth

Axel Springer and Microsoft expand partnership across advertising, AI, content and Azure services 

The Estée Lauder Companies and Microsoft increase collaboration to power prestige beauty with generative AI

Microsoft earnings press release available on Investor Relations website

  • Check us out on RSS

Share this page:

Facebook

IMAGES

  1. Questionnaire Format For Research

    questionnaire research article

  2. Questionnaire Format For Research

    questionnaire research article

  3. Questionnaire Format For Research

    questionnaire research article

  4. 🌱 Types of questionnaire in research. Questionnaire. 2022-10-22

    questionnaire research article

  5. www.newsmoor.com: Questionnaire Sample- Questionnaire Sample For

    questionnaire research article

  6. 🌱 Types of questionnaire in research. Questionnaire. 2022-10-22

    questionnaire research article

VIDEO

  1. What is Questionnaire?Types of Questionnaire in Research .#Research methodology notes

  2. Aligning Title, SOP, and Questionnaire

  3. Primary Data In Research

  4. Diploma in Applied Research and Statistics

  5. 50 must-know Questionnaires in medical/healthcare research. #questionnaire #research #healthcare

  6. Questionnaire| Research Methodology| Data Collection Tool |Sociology

COMMENTS

  1. Behind the Numbers: Questioning Questionnaires

    Based on our observations of participants' spontaneous thoughts and confusions as they filled in questionnaires on "leadership" and "teamwork", we draw attention to hidden problems in much organizational research. Many respondents found measures ambiguous, irrelevant, or misleading.

  2. (PDF) Questionnaires and Surveys

    Abstract. Survey methodologies, usually using questionnaires, are among the most popular in. the social sciences, but they are also among the most mis-used. The ir popularity in. small-scale ...

  3. Understanding and Evaluating Survey Research

    Survey research is defined as "the collection of information from a sample of individuals through their responses to questions" ( Check & Schutt, 2012, p. 160 ). This type of research allows for a variety of methods to recruit participants, collect data, and utilize various methods of instrumentation. Survey research can use quantitative ...

  4. Practical Guidelines to Develop and Evaluate a Questionnaire

    Thus, the questionnaire-based research was criticized by many in the past for being a soft science. The scale construction is also not a part of most of the graduate and postgraduate training. Given the previous discussion, the primary objective of this article is to sensitize researchers about the various intricacies and importance of each ...

  5. Designing and validating a research questionnaire

    However, the quality and accuracy of data collected using a questionnaire depend on how it is designed, used, and validated. In this two-part series, we discuss how to design (part 1) and how to use and validate (part 2) a research questionnaire. It is important to emphasize that questionnaires seek to gather information from other people and ...

  6. (PDF) Understanding and Evaluating Survey Research

    Survey research is defined as. "the collection of information from. a sample of individuals through their. responses to questions" (Check &. Schutt, 2012, p. 160). This type of r e -. search ...

  7. Survey Research

    Survey research means collecting information about a group of people by asking them questions and analyzing the results. To conduct an effective survey, follow these six steps: Determine who will participate in the survey. Decide the type of survey (mail, online, or in-person) Design the survey questions and layout.

  8. PDF Designing a Questionnaire for a Research Paper: A Comprehensive Guide

    writing questions and building the construct of the questionnaire. It also develops the demand to pre-test the questionnaire and finalizing the questionnaire to conduct the survey. Keywords: Questionnaire, Academic Survey, Questionnaire Design, Research Methodology I. INTRODUCTION A questionnaire, as heart of the survey is based on a set of

  9. (PDF) Questionnaire Designing for a Survey

    The research instrument that the proponents utilized for this study is the survey questionnaire. Questionnaires are considered the primary way to collect quantitative data in a standardized format ...

  10. Hands-on guide to questionnaire research: Administering, analysing, and

    This article outlines how to pilot your questionnaire, distribute and administer it; and get it returned, analysed, and written up for publication. It is intended to supplement published guidance on questionnaire research, three quarters of which focuses on content and design. 2

  11. Questionnaire Design

    Questionnaires vs. surveys. A survey is a research method where you collect and analyze data from a group of people. A questionnaire is a specific tool or instrument for collecting the data.. Designing a questionnaire means creating valid and reliable questions that address your research objectives, placing them in a useful order, and selecting an appropriate method for administration.

  12. Development and validation of a questionnaire to measure research

    Surveys are among the most widely used tools in research impact evaluation. Quantitative approaches as surveys are suggested for accountability purposes, as the most appropriate way that calls for transparency (Guthrie et al. 2013).They provide a broad overview of the status of a body of research and supply comparable, easy-to-analyze data referring to a range of researchers and/or grants.

  13. What Is a Questionnaire and How Is It Used in Research?

    A questionnaire is a research instrument consisting of a series of questions for the purpose of gathering information from respondents. Questionnaires can be thought of as a kind of written interview. They can be carried out face to face, by telephone, computer, or post. Questionnaires provide a relatively cheap, quick, and efficient way of ...

  14. How Do Women in the Public Sector Assess Their Retirement Security?

    The public sector has long been seen as a reliable provider of retirement security. But a recent survey suggests that some younger workers—especially women—have significant concerns about their future financial health.. Published by the MissionSquare Research Institute, the survey data points to a gender gap in expected retirement security between male and female state and local public ...

  15. Americans are getting less sleep. The biggest burden falls on ...

    A recent survey found that Americans' sleep patterns have been getting worse. Adult women under 50 are among the most sleep-deprived demographics.

  16. Video games help and harm U.S. teens

    Video games are where U.S. teens form friendships — but also where a majority say they experience bullying and name-calling, a Pew survey released Thursday found. More than 1,400 teens from ages ...

  17. Designing a Questionnaire for a Research Paper: A Comprehensive Guide

    A questionnaire is an important instrument in a research study to help the researcher collect relevant data regarding the research topic. It is significant to ensure that the design of the ...

  18. How to design a questionnaire

    Moreover, the quality of a survey is greatly dependent on the design of the questionnaire used. This editorial briefly outlines the process of development of a questionnaire in the context of the three survey-based studies published in this issue of the journal.[1,2,3] A questionnaire appears to be just a simple list of questions to the naive.

  19. News organizations have trust issues as they gear up to cover another

    NEW YORK (AP) — Even as many Americans say they learn about the 2024 election campaign from national news outlets, a disquieting poll reveals some serious trust issues.. About half of Americans, 53%, say they are extremely or very concerned that news organizations will report inaccuracies or misinformation during the election.

  20. Voters' views of Trump and Biden differ sharply by religion

    The U.S. electorate continues to be sharply divided along religious lines. The latest Pew Research Center survey finds that most registered voters who are White Christians would vote for Republican Donald Trump over Democrat Joe Biden if the 2024 presidential election were held today. More than half of White Christians think Trump was a "great" or "good" president and don't think he ...

  21. B2B Content Marketing Trends 2024 [Research]

    The online survey was emailed to a sample of marketers using lists from CMI and MarketingProfs. This article presents the findings from the 894 respondents, mostly from North America, who indicated their organization is primarily B2B and that they are either content marketers or work in marketing, communications, or other roles involving content.

  22. Hands-on guide to questionnaire research: Selecting, designing, and

    This is the first in a series of three articles on questionnaire research. References w1-w17, further illustrative examples, and checklists are on bmj.com. Susan Catt supplied additional references and feedback. We also thank Alicia O'Cathain, Jill Russell, Geoff Wong, Marcia Rigby, Sara Shaw, Fraser MacFarlane, and Will Callaghan for feedback ...

  23. Microsoft and LinkedIn release the 2024 Work Trend Index on the state

    The research — based on a survey of 31,000 people across 31 countries, labor and hiring trends on LinkedIn, trillions of Microsoft 365 productivity signals, and research with Fortune 500 customers — shows how, just one year in, AI is influencing the way people work, lead and hire around the world. ...

  24. The Design and Use of Questionnaires in Educational Research: A New

    The design and use of questionnaires are important aspects of. educational research (Newby, 2013, Cohen et al., 2017). By. following key considerations about the design and. operationalization of ...

  25. New Survey Shows 'Nest Egg Inflation.' See What This Means for Your

    According to a new survey from Northwestern Mutual, Americans now believe that they will need $1.46 million to retire comfortably, up from $1.27 million last year.

  26. Designing and Using Surveys in Nursing Research: A Contemporary

    The use of research questionnaires or surveys in nursing is a long standing tradition, dating back to the 1960s (Logan, 1966) and 1970s (Oberst, 1978), when the scientific discipline emerged.This type of tool enables nursing researchers to gather primary data from a specific population, whether it is patients, carers, nurses, or other stakeholders to address gaps in the existing evidence base ...