Lab Report Format: Step-by-Step Guide & Examples

Saul Mcleod, PhD


Olivia Guy-Evans, MSc


In psychology, a lab report outlines a study’s objectives, methods, results, discussion, and conclusions, ensuring clarity and adherence to APA (or relevant) formatting guidelines.

A typical lab report would include the following sections: title, abstract, introduction, method, results, and discussion.

The title page, abstract, references, and appendices are started on separate pages (subsections from the main body of the report are not). Use double-line spacing of text, font size 12, and include page numbers.

The report should have a thread of arguments linking the prediction in the introduction to the content of the discussion.

Title

This must indicate what the study is about. It must include the variables under investigation. It should not be written as a question.

Title pages should be formatted in APA style.

Abstract

The abstract provides a concise and comprehensive summary of a research report. Your style should be brief, but do not use note form. Look at examples in journal articles. It should aim to explain, very briefly (about 150 words), the following:

  • Start with a one/two sentence summary, providing the aim and rationale for the study.
  • Describe participants and setting: who, when, where, how many, and what groups?
  • Describe the method: what design, what experimental treatment, what questionnaires, surveys, or tests were used.
  • Describe the major findings, including a mention of the statistics used and the significance levels, or simply one sentence summing up the outcome.
  • The final sentence(s) outline the study’s “contribution to knowledge” within the literature. What does it all mean? Mention the implications of your findings if appropriate.

The abstract comes at the beginning of your report but is written at the end (as it summarises information from all the other sections of the report).

Introduction

The purpose of the introduction is to explain where your hypothesis comes from (i.e., it should provide a rationale for your research study).

Ideally, the introduction should have a funnel structure: Start broad and then become more specific. The aims should not appear out of thin air; the preceding review of psychological literature should lead logically into the aims and hypotheses.

The funnel structure of the introduction to a lab report

  • Start with general theory, briefly introducing the topic. Define the important key terms.
  • Explain the theoretical framework.
  • Summarise and synthesize previous studies – What was the purpose? Who were the participants? What did they do? What did they find? What do these results mean? How do the results relate to the theoretical framework?
  • Rationale: How does the current study address a gap in the literature? Perhaps it overcomes a limitation of previous research.
  • Aims and hypothesis. Write a paragraph explaining what you plan to investigate and make a clear and concise prediction regarding the results you expect to find.

There should be a logical progression of ideas that aids the flow of the report. This means the studies outlined should lead logically to your aims and hypotheses.

Do be concise and selective, and avoid the temptation to include anything in case it is relevant (i.e., don’t write a shopping list of studies).

Method

USE THE FOLLOWING SUBHEADINGS:

Participants

  • How many participants were recruited?
  • Say how you obtained your sample (e.g., opportunity sample).
  • Give relevant demographic details (e.g., gender, ethnicity, age range, mean age, and standard deviation).
Design

  • State the experimental design.
  • What were the independent and dependent variables? Make sure the independent variable is labeled, and name the different conditions/levels.
  • For example, if gender is the independent variable, then male and female are the levels/conditions/groups.
  • How were the IV and DV operationalized?
  • Identify any controls used, e.g., counterbalancing and control of extraneous variables.
Materials

  • List all the materials and measures (e.g., what was the title of the questionnaire? Was it adapted from a study?).
  • You do not need to include wholesale replication of materials – instead, include a ‘sensible’ (illustrative) level of detail. For example, give examples of questionnaire items.
  • Include the reliability (e.g., alpha values) for the measure(s).
Procedure

  • Describe the precise procedure you followed when conducting your research, i.e., exactly what you did.
  • Describe in sufficient detail to allow for replication of findings.
  • Be concise in your description and omit extraneous/trivial details, e.g., you don’t need to include details regarding instructions, debrief, record sheets, etc.
  • Assume the reader has no knowledge of what you did and ensure that he/she can replicate (i.e., copy) your study exactly by what you write in this section.
  • Write in the past tense.
  • Don’t justify or explain in the Method (e.g., why you chose a particular sampling method); just report what you did.
  • Only give enough detail for someone to replicate the experiment – be concise in your writing.
Results

  • The results section of a paper usually presents descriptive statistics followed by inferential statistics (a brief code sketch after this list illustrates both).
  • Report the means, standard deviations, and 95% confidence intervals (CIs) for each IV level. If you have four to 20 numbers to present, a well-presented table is best, APA style.
  • Name the statistical test being used.
  • Report appropriate statistics (e.g., t-scores, p-values).
  • Report the significance (are the results significant or not?) as well as the direction of the results (e.g., which group performed better?).
  • Reporting the effect size is optional (it is not always included in the SPSS output).
  • Avoid interpreting the results (save this for the discussion).
  • Make sure the results are presented clearly and concisely. A table can be used to display descriptive statistics if this makes the data easier to understand.
  • DO NOT include any raw data.
  • Follow APA style.
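To make the descriptive-then-inferential pattern above concrete, here is a minimal Python sketch (using NumPy and SciPy rather than SPSS) of how the means, standard deviations, an independent-samples t-test, and a 95% CI for the difference between means might be computed. The group names and scores below are hypothetical and purely illustrative.

```python
# Minimal sketch: descriptive statistics, an independent-samples t-test,
# and a 95% CI for the difference between means. Data are hypothetical.
import numpy as np
from scipy import stats

group_a = np.array([12, 15, 14, 10, 13, 16, 11, 14])  # e.g., words recalled, condition A
group_b = np.array([9, 11, 10, 8, 12, 10, 9, 11])     # e.g., words recalled, condition B

# Descriptive statistics for each level of the IV
for name, g in (("A", group_a), ("B", group_b)):
    print(f"Group {name}: M = {g.mean():.2f}, SD = {g.std(ddof=1):.2f}")

# Inferential statistics: independent-samples t-test
t_value, p_value = stats.ttest_ind(group_a, group_b)

# 95% CI for the mean difference (pooled-variance formula)
n1, n2 = len(group_a), len(group_b)
mean_diff = group_a.mean() - group_b.mean()
pooled_var = ((n1 - 1) * group_a.var(ddof=1) + (n2 - 1) * group_b.var(ddof=1)) / (n1 + n2 - 2)
se_diff = np.sqrt(pooled_var * (1 / n1 + 1 / n2))
df = n1 + n2 - 2
margin = stats.t.ppf(0.975, df) * se_diff
print(f"t({df}) = {t_value:.2f}, p = {p_value:.3f}, "
      f"95% CI [{mean_diff - margin:.2f}, {mean_diff + margin:.2f}]")
```

The resulting values would then be reported in the text or in an APA-style table, never pasted as raw output.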

Use APA Style

  • Numbers are reported to 2 decimal places, including a 0 before the decimal point for values that can exceed 1.00 (e.g., “0.51”). The exception to this rule: numbers that can never exceed 1.0 (e.g., p-values, r-values) are reported to 3 decimal places and do not include a 0 before the decimal point, e.g., “.001”.
  • Percentages and degrees of freedom: report as whole numbers.
  • Statistical symbols that are not Greek letters should be italicized (e.g., M, SD, t, F, p, d); Greek-letter symbols such as χ² are not italicized.
  • Include spaces on either side of the equals sign.
  • When reporting 95% confidence intervals (CIs), upper and lower limits are given inside square brackets, e.g., “95% CI [73.37, 102.23]”. (A short code sketch after this list illustrates these number-formatting rules.)
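Purely as an illustration of the rounding rules above, the two cases can be captured in a couple of small Python helpers. The function names are hypothetical; this is a sketch, not an official APA tool.

```python
# Hypothetical helpers illustrating the APA rounding rules listed above.

def apa_number(x: float) -> str:
    """Statistics that can exceed 1 (M, SD, t, F, ...): 2 d.p., leading zero kept."""
    return f"{x:.2f}"

def apa_small(x: float) -> str:
    """Values that can never exceed 1 (p, r): 3 d.p., no leading zero."""
    if x < .001:
        return "< .001"            # usual convention for very small p-values
    return f"{x:.3f}".lstrip("0")  # e.g., 0.031 -> ".031"

print(f"M = {apa_number(0.51)}, SD = {apa_number(1.237)}")  # M = 0.51, SD = 1.24
print(f"r = {apa_small(0.4567)}, p = {apa_small(0.0312)}")  # r = .457, p = .031
```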
Discussion

  • Outline your findings in plain English (avoid statistical jargon) and relate your results to your hypothesis, e.g., is it supported or rejected?
  • Compare your results to background materials from the introduction section. Are your results similar or different? Discuss why/why not.
  • How confident can we be in the results? Acknowledge limitations, but only if they can explain the result obtained. If the study has found a reliable effect, be very careful about suggesting limitations, as you would be casting doubt on your own results. Unless you can think of a confounding variable that could explain the results instead of the IV, it is advisable to leave this section out.
  • Suggest constructive ways to improve your study if appropriate.
  • What are the implications of your findings? Say what your findings mean for how people behave in the real world.
  • Suggest an idea for further research triggered by your study, something in the same area but not simply an improved version of yours. Perhaps you could base this on a limitation of your study.
  • Concluding paragraph – Finish with a statement of your findings and the key points of the discussion (e.g., interpretation and implications) in no more than 3 or 4 sentences.

Reference Page

The reference section lists all the sources cited in the essay (alphabetically). It is not a bibliography (a list of the books you used).

In simple terms, every time you refer to a psychologist’s name (and date), you need to reference the original source of information.

If you have been using textbooks this is easy as the references are usually at the back of the book and you can just copy them down. If you have been using websites then you may have a problem as they might not provide a reference section for you to copy.

References need to be set out in APA style:

Books

Author, A. A. (year). Title of work. Location: Publisher.

Journal Articles

Author, A. A., Author, B. B., & Author, C. C. (year). Article title. Journal Title, volume number(issue number), page numbers.

A simple way to write your reference section is to use Google Scholar. Just type the name and date of the psychologist in the search box and click on the “cite” link.


Next, copy and paste the APA reference into the reference section of your essay.


Once again, remember that references need to be in alphabetical order according to surname.

Psychology Lab Report Example

Quantitative paper template.

Quantitative professional paper template: Adapted from “Fake News, Fast and Slow: Deliberation Reduces Belief in False (but Not True) News Headlines,” by B. Bago, D. G. Rand, and G. Pennycook, 2020,  Journal of Experimental Psychology: General ,  149 (8), pp. 1608–1613 ( https://doi.org/10.1037/xge0000729 ). Copyright 2020 by the American Psychological Association.

Qualitative paper template

Qualitative professional paper template: Adapted from “‘My Smartphone Is an Extension of Myself’: A Holistic Qualitative Exploration of the Impact of Using a Smartphone,” by L. J. Harkin and D. Kuss, 2020,  Psychology of Popular Media ,  10 (1), pp. 28–38 ( https://doi.org/10.1037/ppm0000278 ). Copyright 2020 by the American Psychological Association.



Psychological Report Writing


Writing up Psychological Investigations

Through using this website, you have learned about, referred to, and evaluated research studies. These research studies are generally presented to the scientific community as a journal article. Most journal articles follow a standard format. This is similar to the way you may have written up experiments in other sciences.

In a research report, there are usually six sub-sections:

(1)  Abstract:  This is always written last because it is a very brief summary:

  • Include a one-sentence summary, giving the topic to be studied. This may include the hypothesis and some brief theoretical background research, for example, the names of the researchers whose work you have replicated.
  • Describe the participants, the number used, and how they were selected.
  • Describe the method and design used and any questionnaires etc. you employed.
  • State your major findings, which should include a mention of the statistics used (the observed and critical values), whether or not your results were found to be significant, and the level of significance.
  • Briefly summarise what your study shows, the conclusion of your findings and any implications it may have. State whether the experimental or null hypothesis has been accepted/rejected.
  • This should be around 150 words.

(2) Introduction:

This tells everyone why the study is being carried out and the commentary should form a ‘funnel’ of information. First, there is broad coverage of all the background research with appropriate evaluative comments: “Asch (1951) found…but Crutchfield (1955) showed…” Once the general research has been covered, the focus becomes much narrower finishing with the main researcher/research area you are hoping to support/refute. This then leads to the aims and hypothesis/hypotheses (i.e. experimental and null hypotheses) being stated.

(3) Method:

Method – this section is split into sub-sections:

(1) Design:

  • What is the experimental method that has been used?
  • Experimental design type: independent groups, repeated measures, or matched pairs? Justify your choice.
  • What are the IV and DV? These should be operationalised.
  • Any potential EVs?
  • How will these EVs be overcome?
  • Ethical issues? What strategies will be used to overcome these ethical issues?

(2) Participants:

  • Who is the target population? Age/socio-economic status, gender, etc.
  • What sampling technique has been used? Why?
  • Details of the participants that have been used? Do they have particular characteristics?
  • How have participants been allocated to conditions?

(3) Materials:

  • Description of all equipment used and how to use it (essential for replication)
  • Stimulus materials for participants should be in the appendix

(4) Procedure:

  • This is a step-by-step guide to how the study was carried out: when, where, and how.
  • Instructions to participants must be standardised to allow replication
  • Lengthy sets of instructions and instructions to participants should be in the appendix

(4) Results:

This section contains:

  • A summary of the data. All raw data and calculations are put in the appendix.
  • This generally starts with a section of descriptive statistics: measures of central tendency and dispersion.
  • Summary tables, which should be clearly labelled and referred to in the text, e.g., “Table One shows that…” Graphical representations of the data must also be clear and properly labelled and referred to in the text, e.g., “It can be seen from Figure 1 that…”
  • Once the summary statistics have been explained, there should be an analysis of the results of any inferential tests, including observed values, how these relate to the critical table value, significance level and whether the test was one- or two-tailed.
  • This section finishes with the rejection or acceptance of the null hypothesis (a short sketch of the observed-versus-critical-value comparison appears after this list).
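The observed-versus-critical-value comparison described above can be sketched in a few lines of Python. The chi-square test, statistic, degrees of freedom, and significance level below are hypothetical and only illustrate the logic.

```python
# Hypothetical example: comparing an observed chi-square value with the
# critical table value at p = .05 before rejecting or retaining the null hypothesis.
from scipy import stats

observed_value = 6.42  # test statistic calculated from the data (hypothetical)
df = 2                 # degrees of freedom for the test
alpha = 0.05           # chosen significance level

critical_value = stats.chi2.ppf(1 - alpha, df)  # critical table value
print(f"Observed = {observed_value:.2f}, critical = {critical_value:.2f}")

if observed_value >= critical_value:
    print("Observed value exceeds the critical value: reject the null hypothesis (p < .05).")
else:
    print("Observed value does not reach the critical value: retain the null hypothesis.")
```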

(5) Discussion:

This sounds like a repeat of the results section, but here you need to state what you’ve found in terms of psychology rather than in statistical terms; in particular, relate your findings to your hypotheses. Mention the strength of your findings, for example, were they significant and at what level? If your hypothesis was one-tailed and your results have gone in the opposite direction, this needs to be indicated. If you have any additional findings to report, other than those relating to the hypotheses, then they too can be included.

All studies have flaws, so anything that went wrong or the limitations of the study are discussed together with suggestions for how it could be improved if it were to be repeated. Suggestions for alternative studies and future research are also explored. The discussion ends with a paragraph summing up what was found and assessing the implications of the study and any conclusions that can be drawn from it.

(6) Referencing (Harvard Referencing):

References should contain details of all the research covered in a psychological report. It is not sufficient to simply list the books used.

What you should do:

Look through your report and include a reference for every researcher mentioned. A reference should include: the name of the researcher, the date the research was published, the title of the book/journal, where the book was published (or what journal the article was published in), the edition number of the book/volume of the journal, and the page numbers used.

Example: Paivio, A., & Madigan, S. A. (1970). Noun imagery and frequency in paired-associate and free recall learning. Canadian Journal of Psychology, 24, 353–361.

Other rules: Make sure that the references are placed in alphabetical order.

Exam Tip: In the exam, the types of questions you could expect relating to report writing include defining what information you would find in each section of the report. In addition, on the old specification, questions linked to report writing have included writing up a method section or results section and designing a piece of research.

In addition, in the exam, you may be asked to write a consent form, debriefing sheet, or a set of standardised instructions.

Writing a Consent Form for a Psychological Report

Remember the mnemonic TAPCHIPS. Your consent form should include the following:

(1) Title of the Project:

(2) Aim of the study?

(3) Procedure – What will I be asked to do if I take part?

You should give a brief description of what the participants will have to do if they decide to consent to take part in the study (e.g., complete a 15-minute memory test).

(4) Will your data be kept Confidential?

Explain how you will make sure that all personal details will be kept confidential.

(5) Do I Have to take part?

Explain to the participant that they don’t have to take part in the study, explain about their right to withdraw.

(6) Information – Where can I obtain further information if I need it?

Provide the participant with the contact details of the key researchers carrying out the study.

(7) Participant responses to the following questions:

Have you received enough information about the study? YES/NO

Do you consent for your data to be used in this study and retained for use in other studies? YES/NO

Do you understand that you do not need to take part in the study and that you can withdraw your participation at any time without reason or detriment? YES/NO

(8) Signatures from the participant and the researcher will need to be acquired at the bottom of the consent form.

Writing a set of Standardised Instructions for a Psychological Investigation

When writing a set of standardised instructions, it is essential that you include:

1. Enough information to allow for replication of the study

2. You must write the instructions so that they can simply be read out by the researcher to the participants.

3. You should welcome the participants to the study.

4. Thank the participants for giving their consent to take part.

5. Explain to the participants what will happen in the study, what they will be expected to do (step by step), how long the task/specific parts of the task will take to complete.

6. Remind participants that they have the right to withdraw throughout the study.

7. Ask the participants at the end if they have any questions.

8. Check that the participants are still happy to proceed with the study.

Writing a Debriefing Form for a Psychological Report

This is the form that you should complete with your participants at the end of the study to ensure that they are happy with the way the study has been conducted, to explain to them the true nature of the study, to confirm consent and to give them the researcher’s contact details in case they want to ask any further questions.

  • Thank  the participants for taking part in the study.
  • Outline the true aims  of the research (what were the participants expected to do? What happened in each of the different conditions?)
  • Explain what you were  looking to find.
  • Explain  how the data will be used  now and in the future.
  • Remind  the participants that they have the  right to withdraw  now and after the study.
  • Thank  participants once  again  for taking part.
  • Remind the participant of the researcher(s)’ contact details.

Designing Research

One of the questions that you may get asked in the exam is to design a piece of research. The best way to go about this is to include similar information to what you would when writing up the  method section of a psychological report.

Things to Consider…

  • What experimental or non-experimental method will you use? (Lab, field, or natural experiment? Questionnaire (open/closed questions?), interviews (structured, unstructured, semi-structured?), observation?)
  • Why? (Does this method allow a great deal of control? Is it in a natural setting and would it show behaviour reflective of real life? Would it allow participants to remain anonymous and therefore more likely to tell the truth/act in a realistic way? Does the method avoid demand characteristics?)
  • Experimental design type (independent groups, repeated measures, or matched pairs? Justify your choice.)
  • What is the IV, DV? These should be operationalised  ( how are you going to measure these variables?)
  • Any potential EVs?  ( Participant variables, experimenter effects, demand characteristics, situational variables?)
  • How will these EVs be overcome? (Are you going to put some control mechanisms in place? Are you going to use standardised instructions? Double or single blind? Will the experimental design that you are using help to overcome EVs?)
  • Ethical issues?  ( What are the potential ethical issues and what strategies are you going to use to overcome these ethical issues?)
  • Who is the target population?  Age/socio-economic status, gender, etc.
  • How have participants been allocated to conditions? (Have you used random allocation? Why have you adopted this technique?)
  • This is a step-by-step guide to how the study will be carried out – from beginning to end, how are you going to carry out the study?



Writing a Research Report in American Psychological Association (APA) Style

Learning Objectives

  • Identify the major sections of an APA-style research report and the basic contents of each section.
  • Plan and write an effective APA-style research report.

In this section, we look at how to write an APA-style empirical research report , an article that presents the results of one or more new studies. Recall that the standard sections of an empirical research report provide a kind of outline. Here we consider each of these sections in detail, including what information it contains, how that information is formatted and organized, and tips for writing each section. At the end of this section is a sample APA-style research report that illustrates many of these principles.

Sections of a Research Report

Title Page and Abstract

An APA-style research report begins with a title page . The title is centered in the upper half of the page, with each important word capitalized. The title should clearly and concisely (in about 12 words or fewer) communicate the primary variables and research questions. This sometimes requires a main title followed by a subtitle that elaborates on the main title, in which case the main title and subtitle are separated by a colon. Here are some titles from recent issues of professional journals published by the American Psychological Association.

  • Sex Differences in Coping Styles and Implications for Depressed Mood
  • Effects of Aging and Divided Attention on Memory for Items and Their Contexts
  • Computer-Assisted Cognitive Behavioral Therapy for Child Anxiety: Results of a Randomized Clinical Trial
  • Virtual Driving and Risk Taking: Do Racing Games Increase Risk-Taking Cognitions, Affect, and Behavior?

Below the title are the authors’ names and, on the next line, their institutional affiliation—the university or other institution where the authors worked when they conducted the research. As we have already seen, the authors are listed in an order that reflects their contribution to the research. When multiple authors have made equal contributions to the research, they often list their names alphabetically or in a randomly determined order.

It’s Soooo Cute! How Informal Should an Article Title Be?

In some areas of psychology, the titles of many empirical research reports are informal in a way that is perhaps best described as “cute.” They usually take the form of a play on words or a well-known expression that relates to the topic under study. Here are some examples from recent issues of the journal Psychological Science.

  • “Smells Like Clean Spirit: Nonconscious Effects of Scent on Cognition and Behavior”
  • “Time Crawls: The Temporal Resolution of Infants’ Visual Attention”
  • “Scent of a Woman: Men’s Testosterone Responses to Olfactory Ovulation Cues”
  • “Apocalypse Soon?: Dire Messages Reduce Belief in Global Warming by Contradicting Just-World Beliefs”
  • “Serial vs. Parallel Processing: Sometimes They Look Like Tweedledum and Tweedledee but They Can (and Should) Be Distinguished”
  • “How Do I Love Thee? Let Me Count the Words: The Social Effects of Expressive Writing”

Individual researchers differ quite a bit in their preference for such titles. Some use them regularly, while others never use them. What might be some of the pros and cons of using cute article titles?

For articles that are being submitted for publication, the title page also includes an author note that lists the authors’ full institutional affiliations, any acknowledgments the authors wish to make to agencies that funded the research or to colleagues who commented on it, and contact information for the authors. For student papers that are not being submitted for publication—including theses—author notes are generally not necessary.

The abstract is a summary of the study. It is the second page of the manuscript and is headed with the word  Abstract . The first line is not indented. The abstract presents the research question, a summary of the method, the basic results, and the most important conclusions. Because the abstract is usually limited to about 200 words, it can be a challenge to write a good one.

Introduction

The introduction begins on the third page of the manuscript. The heading at the top of this page is the full title of the manuscript, with each important word capitalized as on the title page. The introduction includes three distinct subsections, although these are typically not identified by separate headings. The opening introduces the research question and explains why it is interesting, the literature review discusses relevant previous research, and the closing restates the research question and comments on the method used to answer it.

The Opening

The opening , which is usually a paragraph or two in length, introduces the research question and explains why it is interesting. To capture the reader’s attention, researcher Daryl Bem recommends starting with general observations about the topic under study, expressed in ordinary language (not technical jargon)—observations that are about people and their behavior (not about researchers or their research; Bem, 2003 [1] ). Concrete examples are often very useful here. According to Bem, this would be a poor way to begin a research report:

Festinger’s theory of cognitive dissonance received a great deal of attention during the latter part of the 20th century (p. 191)

The following would be much better:

The individual who holds two beliefs that are inconsistent with one another may feel uncomfortable. For example, the person who knows that they enjoy smoking but believes it to be unhealthy may experience discomfort arising from the inconsistency or disharmony between these two thoughts or cognitions. This feeling of discomfort was called cognitive dissonance by social psychologist Leon Festinger (1957), who suggested that individuals will be motivated to remove this dissonance in whatever way they can (p. 191).

After capturing the reader’s attention, the opening should go on to introduce the research question and explain why it is interesting. Will the answer fill a gap in the literature? Will it provide a test of an important theory? Does it have practical implications? Giving readers a clear sense of what the research is about and why they should care about it will motivate them to continue reading the literature review—and will help them make sense of it.

Breaking the Rules

Researcher Larry Jacoby reported several studies showing that a word that people see or hear repeatedly can seem more familiar even when they do not recall the repetitions—and that this tendency is especially pronounced among older adults. He opened his article with the following humorous anecdote:

A friend whose mother is suffering symptoms of Alzheimer’s disease (AD) tells the story of taking her mother to visit a nursing home, preliminary to her mother’s moving there. During an orientation meeting at the nursing home, the rules and regulations were explained, one of which regarded the dining room. The dining room was described as similar to a fine restaurant except that tipping was not required. The absence of tipping was a central theme in the orientation lecture, mentioned frequently to emphasize the quality of care along with the advantages of having paid in advance. At the end of the meeting, the friend’s mother was asked whether she had any questions. She replied that she only had one question: “Should I tip?” (Jacoby, 1999, p. 3)

Although both humor and personal anecdotes are generally discouraged in APA-style writing, this example is a highly effective way to start because it both engages the reader and provides an excellent real-world example of the topic under study.

The Literature Review

Immediately after the opening comes the  literature review , which describes relevant previous research on the topic and can be anywhere from several paragraphs to several pages in length. However, the literature review is not simply a list of past studies. Instead, it constitutes a kind of argument for why the research question is worth addressing. By the end of the literature review, readers should be convinced that the research question makes sense and that the present study is a logical next step in the ongoing research process.

Like any effective argument, the literature review must have some kind of structure. For example, it might begin by describing a phenomenon in a general way along with several studies that demonstrate it, then describing two or more competing theories of the phenomenon, and finally presenting a hypothesis to test one or more of the theories. Or it might describe one phenomenon, then describe another phenomenon that seems inconsistent with the first one, then propose a theory that resolves the inconsistency, and finally present a hypothesis to test that theory. In applied research, it might describe a phenomenon or theory, then describe how that phenomenon or theory applies to some important real-world situation, and finally suggest a way to test whether it does, in fact, apply to that situation.

Looking at the literature review in this way emphasizes a few things. First, it is extremely important to start with an outline of the main points that you want to make, organized in the order that you want to make them. The basic structure of your argument, then, should be apparent from the outline itself. Second, it is important to emphasize the structure of your argument in your writing. One way to do this is to begin the literature review by summarizing your argument even before you begin to make it. “In this article, I will describe two apparently contradictory phenomena, present a new theory that has the potential to resolve the apparent contradiction, and finally present a novel hypothesis to test the theory.” Another way is to open each paragraph with a sentence that summarizes the main point of the paragraph and links it to the preceding points. These opening sentences provide the “transitions” that many beginning researchers have difficulty with. Instead of beginning a paragraph by launching into a description of a previous study, such as “Williams (2004) found that…,” it is better to start by indicating something about why you are describing this particular study. Here are some simple examples:

Another example of this phenomenon comes from the work of Williams (2004).

Williams (2004) offers one explanation of this phenomenon.

An alternative perspective has been provided by Williams (2004).

We used a method based on the one used by Williams (2004).

Finally, remember that your goal is to construct an argument for why your research question is interesting and worth addressing—not necessarily why your favorite answer to it is correct. In other words, your literature review must be balanced. If you want to emphasize the generality of a phenomenon, then of course you should discuss various studies that have demonstrated it. However, if there are other studies that have failed to demonstrate it, you should discuss them too. Or if you are proposing a new theory, then of course you should discuss findings that are consistent with that theory. However, if there are other findings that are inconsistent with it, again, you should discuss them too. It is acceptable to argue that the  balance  of the research supports the existence of a phenomenon or is consistent with a theory (and that is usually the best that researchers in psychology can hope for), but it is not acceptable to  ignore contradictory evidence. Besides, a large part of what makes a research question interesting is uncertainty about its answer.

The Closing

The closing of the introduction—typically the final paragraph or two—usually includes two important elements. The first is a clear statement of the main research question and hypothesis. This statement tends to be more formal and precise than in the opening and is often expressed in terms of operational definitions of the key variables. The second is a brief overview of the method and some comment on its appropriateness. Here, for example, is how Darley and Latané (1968) [2] concluded the introduction to their classic article on the bystander effect:

These considerations lead to the hypothesis that the more bystanders to an emergency, the less likely, or the more slowly, any one bystander will intervene to provide aid. To test this proposition it would be necessary to create a situation in which a realistic “emergency” could plausibly occur. Each subject should also be blocked from communicating with others to prevent his getting information about their behavior during the emergency. Finally, the experimental situation should allow for the assessment of the speed and frequency of the subjects’ reaction to the emergency. The experiment reported below attempted to fulfill these conditions. (p. 378)

Thus the introduction leads smoothly into the next major section of the article—the method section.

Method

The method section is where you describe how you conducted your study. An important principle for writing a method section is that it should be clear and detailed enough that other researchers could replicate the study by following your “recipe.” This means that it must describe all the important elements of the study—basic demographic characteristics of the participants, how they were recruited, whether they were randomly assigned to conditions, how the variables were manipulated or measured, how counterbalancing was accomplished, and so on. At the same time, it should avoid irrelevant details such as the fact that the study was conducted in Classroom 37B of the Industrial Technology Building or that the questionnaire was double-sided and completed using pencils.

The method section begins immediately after the introduction ends with the heading “Method” (not “Methods”) centered on the page. Immediately after this is the subheading “Participants,” left justified and in italics. The participants subsection indicates how many participants there were, the number of women and men, some indication of their age, other demographics that may be relevant to the study, and how they were recruited, including any incentives given for participation.

Figure 11.1 Three Ways of Organizing an APA-Style Method

After the participants section, the structure can vary a bit. Figure 11.1 shows three common approaches. In the first, the participants section is followed by a design and procedure subsection, which describes the rest of the method. This works well for methods that are relatively simple and can be described adequately in a few paragraphs. In the second approach, the participants section is followed by separate design and procedure subsections. This works well when both the design and the procedure are relatively complicated and each requires multiple paragraphs.

What is the difference between design and procedure? The design of a study is its overall structure. What were the independent and dependent variables? Was the independent variable manipulated, and if so, was it manipulated between or within subjects? How were the variables operationally defined? The procedure is how the study was carried out. It often works well to describe the procedure in terms of what the participants did rather than what the researchers did. For example, the participants gave their informed consent, read a set of instructions, completed a block of four practice trials, completed a block of 20 test trials, completed two questionnaires, and were debriefed and excused.

In the third basic way to organize a method section, the participants subsection is followed by a materials subsection before the design and procedure subsections. This works well when there are complicated materials to describe. This might mean multiple questionnaires, written vignettes that participants read and respond to, perceptual stimuli, and so on. The heading of this subsection can be modified to reflect its content. Instead of “Materials,” it can be “Questionnaires,” “Stimuli,” and so on. The materials subsection is also a good place to refer to the reliability and/or validity of the measures. This is where you would present test-retest correlations, Cronbach’s α, or other statistics to show that the measures are consistent across time and across items and that they accurately measure what they are intended to measure.
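As one concrete illustration, Cronbach's α can be computed directly from an item-by-participant score matrix using the standard formula α = (k / (k - 1)) × (1 - sum of item variances / variance of total scores). The sketch below uses made-up questionnaire ratings and plain NumPy; it is a hypothetical example, not part of the APA guidelines themselves.

```python
# Minimal sketch: Cronbach's alpha for a hypothetical 4-item questionnaire.
import numpy as np

# Rows = participants, columns = questionnaire items (made-up ratings)
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 3, 3, 2],
    [4, 4, 5, 5],
])

k = scores.shape[1]                              # number of items
item_variances = scores.var(axis=0, ddof=1)      # variance of each item
total_variance = scores.sum(axis=1).var(ddof=1)  # variance of participants' total scores

alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")
```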

Results

The results section is where you present the main results of the study, including the results of the statistical analyses. Although it does not include the raw data—individual participants’ responses or scores—researchers should save their raw data and make them available to other researchers who request them. Many journals encourage the open sharing of raw data online, and some now require open data and materials before publication.

Although there are no standard subsections, it is still important for the results section to be logically organized. Typically it begins with certain preliminary issues. One is whether any participants or responses were excluded from the analyses and why. The rationale for excluding data should be described clearly so that other researchers can decide whether it is appropriate. A second preliminary issue is how multiple responses were combined to produce the primary variables in the analyses. For example, if participants rated the attractiveness of 20 stimulus people, you might have to explain that you began by computing the mean attractiveness rating for each participant. Or if they recalled as many items as they could from a study list of 20 words, did you count the number correctly recalled, compute the percentage correctly recalled, or perhaps compute the number correct minus the number incorrect? A final preliminary issue is whether the manipulation was successful. This is where you would report the results of any manipulation checks.
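As a small, hypothetical illustration of the "combining responses" step described above, the pandas sketch below collapses trial-level attractiveness ratings into one mean rating per participant before any further analysis; the column names and values are invented.

```python
# Hypothetical example: collapsing trial-level ratings into one mean score
# per participant, which then serves as the primary variable in the analyses.
import pandas as pd

trials = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "stimulus":    ["a", "b", "c", "a", "b", "c", "a", "b", "c"],
    "rating":      [6, 7, 5, 3, 4, 4, 7, 6, 6],
})

# One mean attractiveness rating per participant
per_participant = trials.groupby("participant")["rating"].mean()
print(per_participant)
```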

The results section should then tackle the primary research questions, one at a time. Again, there should be a clear organization. One approach would be to answer the most general questions and then proceed to answer more specific ones. Another would be to answer the main question first and then to answer secondary ones. Regardless, Bem (2003) [3] suggests the following basic structure for discussing each new result:

  • Remind the reader of the research question.
  • Give the answer to the research question in words.
  • Present the relevant statistics.
  • Qualify the answer if necessary.
  • Summarize the result.

Notice that only Step 3 necessarily involves numbers. The rest of the steps involve presenting the research question and the answer to it in words. In fact, the basic results should be clear even to a reader who skips over the numbers.

Discussion

The discussion is the last major section of the research report. Discussions usually consist of some combination of the following elements:

  • Summary of the research
  • Theoretical implications
  • Practical implications
  • Limitations
  • Suggestions for future research

The discussion typically begins with a summary of the study that provides a clear answer to the research question. In a short report with a single study, this might require no more than a sentence. In a longer report with multiple studies, it might require a paragraph or even two. The summary is often followed by a discussion of the theoretical implications of the research. Do the results provide support for any existing theories? If not, how  can  they be explained? Although you do not have to provide a definitive explanation or detailed theory for your results, you at least need to outline one or more possible explanations. In applied research—and often in basic research—there is also some discussion of the practical implications of the research. How can the results be used, and by whom, to accomplish some real-world goal?

The theoretical and practical implications are often followed by a discussion of the study’s limitations. Perhaps there are problems with its internal or external validity. Perhaps the manipulation was not very effective or the measures not very reliable. Perhaps there is some evidence that participants did not fully understand their task or that they were suspicious of the intent of the researchers. Now is the time to discuss these issues and how they might have affected the results. But do not overdo it. All studies have limitations, and most readers will understand that a different sample or different measures might have produced different results. Unless there is good reason to think they  would have, however, there is no reason to mention these routine issues. Instead, pick two or three limitations that seem like they could have influenced the results, explain how they could have influenced the results, and suggest ways to deal with them.

Most discussions end with some suggestions for future research. If the study did not satisfactorily answer the original research question, what will it take to do so? What  new  research questions has the study raised? This part of the discussion, however, is not just a list of new questions. It is a discussion of two or three of the most important unresolved issues. This means identifying and clarifying each question, suggesting some alternative answers, and even suggesting ways they could be studied.

Finally, some researchers are quite good at ending their articles with a sweeping or thought-provoking conclusion. Darley and Latané (1968) [4] , for example, ended their article on the bystander effect by discussing the idea that whether people help others may depend more on the situation than on their personalities. Their final sentence is, “If people understand the situational forces that can make them hesitate to intervene, they may better overcome them” (p. 383). However, this kind of ending can be difficult to pull off. It can sound overreaching or just banal and end up detracting from the overall impact of the article. It is often better simply to end by returning to the problem or issue introduced in your opening paragraph and clearly stating how your research has addressed that issue or problem.

References

The references section begins on a new page with the heading “References” centered at the top of the page. All references cited in the text are then listed in the format presented earlier. They are listed alphabetically by the last name of the first author. If two sources have the same first author, they are listed alphabetically by the last name of the second author. If all the authors are the same, then they are listed chronologically by the year of publication. Everything in the reference list is double-spaced both within and between references.

Appendices, Tables, and Figures

Appendices, tables, and figures come after the references. An appendix is appropriate for supplemental material that would interrupt the flow of the research report if it were presented within any of the major sections. An appendix could be used to present lists of stimulus words, questionnaire items, detailed descriptions of special equipment or unusual statistical analyses, or references to the studies that are included in a meta-analysis. Each appendix begins on a new page. If there is only one, the heading is “Appendix,” centered at the top of the page. If there is more than one, the headings are “Appendix A,” “Appendix B,” and so on, and they appear in the order they were first mentioned in the text of the report.

After any appendices come tables and then figures. Tables and figures are both used to present results. Figures can also be used to display graphs, illustrate theories (e.g., in the form of a flowchart), display stimuli, outline procedures, and present many other kinds of information. Each table and figure appears on its own page. Tables are numbered in the order that they are first mentioned in the text (“Table 1,” “Table 2,” and so on). Figures are numbered the same way (“Figure 1,” “Figure 2,” and so on). A brief explanatory title, with the important words capitalized, appears above each table. Each figure is given a brief explanatory caption, where (aside from proper nouns or names) only the first word of each sentence is capitalized. More details on preparing APA-style tables and figures are presented later in the book.

Sample APA-Style Research Report

Figures 11.2, 11.3, 11.4, and 11.5 show some sample pages from an APA-style empirical research report originally written by undergraduate student Tomoe Suyama at California State University, Fresno. The main purpose of these figures is to illustrate the basic organization and formatting of an APA-style empirical research report, although many high-level and low-level style conventions can be seen here too.

Figure 11.2 Title Page and Abstract. This student paper does not include the author note on the title page. The abstract appears on its own page.

  • Bem, D. J. (2003). Writing the empirical journal article. In J. M. Darley, M. P. Zanna, & H. R. Roediger III (Eds.), The complete academic: A practical guide for the beginning social scientist (2nd ed.). Washington, DC: American Psychological Association.
  • Darley, J. M., & Latané, B. (1968). Bystander intervention in emergencies: Diffusion of responsibility. Journal of Personality and Social Psychology, 4, 377–383.
  • Bem, D. J. (2003). Writing the empirical journal article. In J. M. Darley, M. P. Zanna, & H. R. Roediger III (Eds.), The complete academic: A practical guide for the beginning social scientist (2nd ed.). Washington, DC: American Psychological Association.
  • Darley, J. M., & Latané, B. (1968). Bystander intervention in emergencies: Diffusion of responsibility. Journal of Personality and Social Psychology, 4, 377–383.
Learning Objectives

  • Define non-experimental research, distinguish it clearly from experimental research, and give several examples.
  • Explain when a researcher might choose to conduct non-experimental research as opposed to experimental research.

What Is Non-Experimental Research?

Non-experimental research  is research that lacks the manipulation of an independent variable. Rather than manipulating an independent variable, researchers conducting non-experimental research simply measure variables as they naturally occur (in the lab or real world).

Most researchers in psychology consider the distinction between experimental and non-experimental research to be an extremely important one. This is because although experimental research can provide strong evidence that changes in an independent variable cause differences in a dependent variable, non-experimental research generally cannot. As we will see, however, this inability to make causal conclusions does not mean that non-experimental research is less important than experimental research. It is simply used in cases where experimental research is not able to be carried out.

When to Use Non-Experimental Research

As we saw in the last chapter , experimental research is appropriate when the researcher has a specific research question or hypothesis about a causal relationship between two variables—and it is possible, feasible, and ethical to manipulate the independent variable. It stands to reason, therefore, that non-experimental research is appropriate—even necessary—when these conditions are not met. There are many times in which non-experimental research is preferred, including when:

  • the research question or hypothesis relates to a single variable rather than a statistical relationship between two variables (e.g., how accurate are people’s first impressions?).
  • the research question pertains to a non-causal statistical relationship between variables (e.g., is there a correlation between verbal intelligence and mathematical intelligence?).
  • the research question is about a causal relationship, but the independent variable cannot be manipulated or participants cannot be randomly assigned to conditions or orders of conditions for practical or ethical reasons (e.g., does damage to a person’s hippocampus impair the formation of long-term memory traces?).
  • the research question is broad and exploratory, or is about what it is like to have a particular experience (e.g., what is it like to be a working mother diagnosed with depression?).

Again, the choice between the experimental and non-experimental approaches is generally dictated by the nature of the research question. Recall the three goals of science are to describe, to predict, and to explain. If the goal is to explain and the research question pertains to causal relationships, then the experimental approach is typically preferred. If the goal is to describe or to predict, a non-experimental approach is appropriate. But the two approaches can also be used to address the same research question in complementary ways. For example, in Milgram's original (non-experimental) obedience study, he was primarily interested in one variable—the extent to which participants obeyed the researcher when he told them to shock the confederate—and he observed all participants performing the same task under the same conditions. However,  Milgram subsequently conducted experiments to explore the factors that affect obedience. He manipulated several independent variables, such as the distance between the experimenter and the participant, the participant and the confederate, and the location of the study (Milgram, 1974) [1] .

Types of Non-Experimental Research

Non-experimental research falls into two broad categories: correlational research and observational research. 

The most common type of non-experimental research conducted in psychology is correlational research. Correlational research is considered non-experimental because it focuses on the statistical relationship between two variables but does not include the manipulation of an independent variable. More specifically, in correlational research , the researcher measures two variables with little or no attempt to control extraneous variables and then assesses the relationship between them. As an example, a researcher interested in the relationship between self-esteem and school achievement could collect data on students' self-esteem and their GPAs to see if the two variables are statistically related.
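A minimal sketch of that self-esteem/GPA example, assuming hypothetical scores, would simply measure both variables and then assess their statistical relationship, for instance with a Pearson correlation in SciPy:

```python
# Hypothetical data: self-esteem questionnaire scores and GPAs for eight students.
from scipy import stats

self_esteem = [32, 25, 40, 28, 35, 30, 38, 27]
gpa = [3.1, 2.6, 3.8, 2.9, 3.4, 3.0, 3.6, 2.7]

r, p = stats.pearsonr(self_esteem, gpa)  # strength and direction of the relationship
print(f"r = {r:.2f}, p = {p:.3f}")
```

Whatever the size of r, a design like this cannot by itself show that self-esteem causes school achievement (or the reverse), which is exactly the limitation discussed in this section.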

Observational research  is non-experimental because it focuses on making observations of behavior in a natural or laboratory setting without manipulating anything. Milgram’s original obedience study was non-experimental in this way. He was primarily interested in the extent to which participants obeyed the researcher when he told them to shock the confederate and he observed all participants performing the same task under the same conditions. The study by Loftus and Pickrell described at the beginning of this chapter is also a good example of observational research. The variable was whether participants “remembered” having experienced mildly traumatic childhood events (e.g., getting lost in a shopping mall) that they had not actually experienced but that the researchers asked them about repeatedly. In this particular study, nearly a third of the participants “remembered” at least one event. (As with Milgram’s original study, this study inspired several later experiments on the factors that affect false memories).

Cross-Sectional, Longitudinal, and Cross-Sequential Studies

When psychologists wish to study change over time (for example, when developmental psychologists wish to study aging) they usually take one of three non-experimental approaches: cross-sectional, longitudinal, or cross-sequential. Cross-sectional studies involve comparing two or more pre-existing groups of people (e.g., children at different stages of development). What makes this approach non-experimental is that there is no manipulation of an independent variable and no random assignment of participants to groups. Using this design, developmental psychologists compare groups of people of different ages (e.g., young adults spanning from 18-25 years of age versus older adults spanning 60-75 years of age) on various dependent variables (e.g., memory, depression, life satisfaction). Of course, the primary limitation of using this design to study the effects of aging is that differences between the groups other than age may account for differences in the dependent variable. For instance, differences between the groups may reflect the generation that people come from (a cohort effect ) rather than a direct effect of age. For this reason, longitudinal studies , in which one group of people is followed over time as they age, offer a superior means of studying the effects of aging. However, longitudinal studies are by definition more time consuming and so require a much greater investment on the part of the researcher and the participants. A third approach, known as cross-sequential studies , combines elements of both cross-sectional and longitudinal studies. Rather than measuring differences between people in different age groups or following the same people over a long period of time, researchers adopting this approach choose a smaller period of time during which they follow people in different age groups. For example, they might measure changes over a ten year period among participants who at the start of the study fall into the following age groups: 20 years old, 30 years old, 40 years old, 50 years old, and 60 years old. This design is advantageous because the researcher reaps the immediate benefits of being able to compare the age groups after the first assessment. Further, by following the different age groups over time they can subsequently determine whether the original differences they found across the age groups are due to true age effects or cohort effects.

The types of research we have discussed so far are all quantitative, referring to the fact that the data consist of numbers that are analyzed using statistical techniques. But as you will learn in this chapter, many observational research studies are more qualitative in nature. In qualitative research, the data are usually nonnumerical and therefore cannot be analyzed using statistical techniques. Rosenhan’s observational study of the experience of people in psychiatric wards was primarily qualitative. The data were the notes taken by the “pseudopatients”—the people pretending to have heard voices—along with their hospital records. Rosenhan’s analysis consisted mainly of a written description of the experiences of the pseudopatients, supported by several concrete examples. To illustrate the hospital staff’s tendency to “depersonalize” their patients, he noted, “Upon being admitted, I and other pseudopatients took the initial physical examinations in a semi-public room, where staff members went about their own business as if we were not there” (Rosenhan, 1973, p. 256) [2]. Qualitative data have their own set of analysis tools, and the choice among them depends on the research question. For example, thematic analysis focuses on the themes that emerge in the data, whereas conversation analysis focuses on how things are said in an interview or focus group.

Internal Validity Revisited

Recall that internal validity is the extent to which the design of a study supports the conclusion that changes in the independent variable caused any observed differences in the dependent variable.  Figure 6.1 shows how experimental, quasi-experimental, and non-experimental (correlational) research vary in terms of internal validity. Experimental research tends to be highest in internal validity because the use of manipulation (of the independent variable) and control (of extraneous variables) help to rule out alternative explanations for the observed relationships. If the average score on the dependent variable in an experiment differs across conditions, it is quite likely that the independent variable is responsible for that difference. Non-experimental (correlational) research is lowest in internal validity because these designs fail to use manipulation or control. Quasi-experimental research (which will be described in more detail in a subsequent chapter) falls in the middle because it contains some, but not all, of the features of a true experiment. For instance, it may fail to use random assignment to assign participants to groups or fail to use counterbalancing to control for potential order effects. Imagine, for example, that a researcher finds two similar schools, starts an anti-bullying program in one, and then finds fewer bullying incidents in that “treatment school” than in the “control school.” While a comparison is being made with a control condition, the inability to randomly assign children to schools could still mean that students in the treatment school differed from students in the control school in some other way that could explain the difference in bullying (e.g., there may be a selection effect).

Figure 6.1 Internal Validity of Correlational, Quasi-Experimental, and Experimental Studies. Experiments are generally high in internal validity, quasi-experiments lower, and correlational studies lower still.

Notice also in  Figure 6.1 that there is some overlap in the internal validity of experiments, quasi-experiments, and correlational (non-experimental) studies. For example, a poorly designed experiment that includes many confounding variables can be lower in internal validity than a well-designed quasi-experiment with no obvious confounding variables. Internal validity is also only one of several validities that one might consider, as noted in Chapter 5.

Learning Objectives

  • Describe several strategies for recruiting participants for an experiment.
  • Explain why it is important to standardize the procedure of an experiment and several ways to do this.
  • Explain what pilot testing is and why it is important.

The information presented so far in this chapter is enough to design a basic experiment. When it comes time to conduct that experiment, however, several additional practical issues arise. In this section, we consider some of these issues and how to deal with them. Much of this information applies to non-experimental studies as well as experimental ones.

Recruiting Participants

Of course, at the start of any research project, you should be thinking about how you will obtain your participants. Unless you have access to people with schizophrenia or incarcerated juvenile offenders, for example, then there is no point designing a study that focuses on these populations. But even if you plan to use a convenience sample, you will have to recruit participants for your study.

There are several approaches to recruiting participants. One is to use participants from a formal  subject pool —an established group of people who have agreed to be contacted about participating in research studies. For example, at many colleges and universities, there is a subject pool consisting of students enrolled in introductory psychology courses who must participate in a certain number of studies to meet a course requirement. Researchers post descriptions of their studies and students sign up to participate, usually via an online system. Participants who are not in subject pools can also be recruited by posting or publishing advertisements or making personal appeals to groups that represent the population of interest. For example, a researcher interested in studying older adults could arrange to speak at a meeting of the residents at a retirement community to explain the study and ask for volunteers.


The Volunteer Subject

Even if the participants in a study receive compensation in the form of course credit, a small amount of money, or a chance at being treated for a psychological problem, they are still essentially volunteers. This is worth considering because people who volunteer to participate in psychological research have been shown to differ in predictable ways from those who do not volunteer. Specifically, there is good evidence that on average, volunteers have the following characteristics compared with non-volunteers (Rosenthal & Rosnow, 1976) [3] :

  • They are more interested in the topic of the research.
  • They are more educated.
  • They have a greater need for approval.
  • They have higher IQs.
  • They are more sociable.
  • They are higher in social class.

These differences can be an issue of external validity if there is reason to believe that participants with these characteristics are likely to behave differently from the general population. For example, in testing different methods of persuading people, a rational argument might work better on volunteers than on the general population because of volunteers’ generally higher educational level and IQ.

In many field experiments, the task is not recruiting participants but selecting them. For example, researchers Nicolas Guéguen and Marie-Agnès de Gail conducted a field experiment on the effect of being smiled at on helping, in which the participants were shoppers at a supermarket. A confederate walking down a stairway gazed directly at a shopper walking up the stairway and either smiled or did not smile. Shortly afterward, the shopper encountered another confederate, who dropped some computer diskettes on the ground. The dependent variable was whether or not the shopper stopped to help pick up the diskettes (Guéguen & de Gail, 2003) [4]. There are two aspects of this study that are worth addressing here. First, notice that these participants were not “recruited,” which means that the IRB would have taken care to ensure that dispensing with informed consent in this case was acceptable (e.g., the situation would not have been expected to cause any harm and the study was conducted in the context of people’s ordinary activities). Second, even though informed consent was not necessary, the researchers still had to select participants from among all the shoppers taking the stairs that day. It is extremely important that this kind of selection be done according to a well-defined set of rules that are established before the data collection begins and can be explained clearly afterward. In this case, with each trip down the stairs, the confederate was instructed to gaze at the first person he encountered who appeared to be between the ages of 20 and 50. Only if the person gazed back did they become a participant in the study. The point of having a well-defined selection rule is to avoid bias in the selection of participants. For example, if the confederate had been free to choose which shoppers he would gaze at, he might have chosen friendly-looking shoppers when he was set to smile and unfriendly-looking ones when he was not set to smile. As we will see shortly, such biases can be entirely unintentional.

Standardizing the Procedure

It is surprisingly easy to introduce extraneous variables during the procedure. For example, the same experimenter might give clear instructions to one participant but vague instructions to another. Or one experimenter might greet participants warmly while another barely makes eye contact with them. To the extent that such variables affect participants’ behavior, they add noise to the data and make the effect of the independent variable more difficult to detect. If they vary systematically across conditions, they become confounding variables and provide alternative explanations for the results. For example, if participants in a treatment group are tested by a warm and friendly experimenter and participants in a control group are tested by a cold and unfriendly one, then what appears to be an effect of the treatment might actually be an effect of experimenter demeanor. When there are multiple experimenters, the possibility of introducing extraneous variables is even greater, but using multiple experimenters is often necessary for practical reasons.

Experimenter’s Sex as an Extraneous Variable

It is well known that whether research participants are male or female can affect the results of a study. But what about whether the  experimenter  is male or female? There is plenty of evidence that this matters too. Male and female experimenters have slightly different ways of interacting with their participants, and of course, participants also respond differently to male and female experimenters (Rosenthal, 1976) [5] .

For example, in a recent study on pain perception, participants immersed their hands in icy water for as long as they could (Ibolya, Brake, & Voss, 2004) [6] . Male participants tolerated the pain longer when the experimenter was a woman, and female participants tolerated it longer when the experimenter was a man.

Researcher Robert Rosenthal has spent much of his career showing that this kind of unintended variation in the procedure does, in fact, affect participants’ behavior. Furthermore, one important source of such variation is the experimenter’s expectations about how participants “should” behave in the experiment. This outcome is referred to as an  experimenter expectancy effect  (Rosenthal, 1976) [7] . For example, if an experimenter expects participants in a treatment group to perform better on a task than participants in a control group, then they might unintentionally give the treatment group participants clearer instructions or more encouragement or allow them more time to complete the task. In a striking example, Rosenthal and Kermit Fode had several students in a laboratory course in psychology train rats to run through a maze. Although the rats were genetically similar, some of the students were told that they were working with “maze-bright” rats that had been bred to be good learners, and other students were told that they were working with “maze-dull” rats that had been bred to be poor learners. Sure enough, over five days of training, the “maze-bright” rats made more correct responses, made the correct response more quickly, and improved more steadily than the “maze-dull” rats (Rosenthal & Fode, 1963) [8] . Clearly, it had to have been the students’ expectations about how the rats would perform that made the difference. But how? Some clues come from data gathered at the end of the study, which showed that students who expected their rats to learn quickly felt more positively about their animals and reported behaving toward them in a more friendly manner (e.g., handling them more).

The way to minimize unintended variation in the procedure is to standardize it as much as possible so that it is carried out in the same way for all participants regardless of the condition they are in. Here are several ways to do this:

  • Create a written protocol that specifies everything that the experimenters are to do and say from the time they greet participants to the time they dismiss them.
  • Create standard instructions that participants read themselves or that are read to them word for word by the experimenter.
  • Automate the rest of the procedure as much as possible by using software packages for this purpose or even simple computer slide shows.
  • Anticipate participants’ questions and either raise and answer them in the instructions or develop standard answers for them.
  • Train multiple experimenters on the protocol together and have them practice on each other.
  • Be sure that each experimenter tests participants in all conditions.

Another good practice is to arrange for the experimenters to be “blind” to the research question or to the condition in which each participant is tested. The idea is to minimize experimenter expectancy effects by minimizing the experimenters’ expectations. For example, in a drug study in which each participant receives the drug or a placebo, it is often the case that neither the participants nor the experimenter who interacts with the participants knows which condition they have been assigned to complete. Because both the participants and the experimenters are blind to the condition, this technique is referred to as a  double-blind study . (A single-blind study is one in which only the participant is blind to the condition.) Of course, there are many times this blinding is not possible. For example, if you are both the investigator and the only experimenter, it is not possible for you to remain blind to the research question. Also, in many studies, the experimenter  must  know the condition because they must carry out the procedure in a different way in the different conditions.
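As an illustration, here is a minimal sketch (in Python, with hypothetical condition labels and participant counts) of one way to support both standardization and blinding: generate the random condition sequence in advance and label the conditions with neutral codes whose meaning is stored separately from the people who run the sessions.

```python
# Generate a pre-specified, randomized condition sequence before testing begins.
# Conditions carry neutral codes ("A"/"B"); the key linking codes to conditions
# is held by someone who does not interact with participants.
import random

random.seed(20240517)                     # fixed seed so the sequence can be reproduced

n_participants = 40
conditions = ["A", "B"] * (n_participants // 2)   # equal numbers in each condition
random.shuffle(conditions)                        # random assignment

condition_key = {"A": "treatment", "B": "placebo"}   # hypothetical labels, stored separately

for participant_id, code in enumerate(conditions, start=1):
    print(f"Participant {participant_id:02d}: condition {code}")
```

Because the experimenter sees only the codes, their expectations about how participants "should" behave in each condition have less opportunity to influence the procedure.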


Record Keeping

It is essential to keep good records when you conduct an experiment. As discussed earlier, it is typical for experimenters to generate a written sequence of conditions before the study begins and then to test each new participant in the next condition in the sequence. As you test them, it is a good idea to add to this list basic demographic information; the date, time, and place of testing; and the name of the experimenter who did the testing. It is also a good idea to have a place for the experimenter to write down comments about unusual occurrences (e.g., a confused or uncooperative participant) or questions that come up. This kind of information can be useful later if you decide to analyze sex differences or effects of different experimenters, or if a question arises about a particular participant or testing session.

Since participants’ identities should be kept as confidential (or anonymous) as possible, their names and other identifying information should not be included with their data. To keep track of individual participants without identifying them, it is therefore useful to assign an identification number to each participant as you test them. Simply numbering them consecutively beginning with 1 is usually sufficient. This number can then also be written on any response sheets or questionnaires that participants generate, making it easier to keep them together.
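For studies run or logged on a computer, a minimal record-keeping sketch might look like the following; the file name, field names, and example entry are hypothetical, and the log deliberately stores an ID number rather than a name.

```python
# Append one testing session per row to a running CSV log: consecutive participant
# IDs, basic session information, and a free-text comments field, with no names
# or other identifying information.
import csv
import os
from datetime import datetime

LOG_FILE = "session_log.csv"   # hypothetical file name
FIELDS = ["participant_id", "condition", "date_time", "location",
          "experimenter", "comments"]

def log_session(participant_id, condition, location, experimenter, comments=""):
    """Append one session to the log, writing a header row if the file is new."""
    write_header = not os.path.exists(LOG_FILE)
    with open(LOG_FILE, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow({
            "participant_id": participant_id,   # consecutive number, not a name
            "condition": condition,
            "date_time": datetime.now().isoformat(timespec="minutes"),
            "location": location,
            "experimenter": experimenter,
            "comments": comments,               # unusual occurrences, questions
        })

# Example: the third participant, tested in condition "B".
log_session(3, "B", "Lab 204", "Experimenter 1", "Asked to have the instructions repeated.")
```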

Manipulation Check

In many experiments, the independent variable is a construct that can only be manipulated indirectly. For example, a researcher might try to manipulate participants’ stress levels indirectly by telling some of them that they have five minutes to prepare a short speech that they will then have to give to an audience of other participants. In such situations, researchers often include a manipulation check  in their procedure. A manipulation check is a separate measure of the construct the researcher is trying to manipulate. The purpose of a manipulation check is to confirm that the independent variable was, in fact, successfully manipulated. For example, researchers trying to manipulate participants’ stress levels might give them a paper-and-pencil stress questionnaire or take their blood pressure—perhaps right after the manipulation or at the end of the procedure—to verify that they successfully manipulated this variable.

Manipulation checks are particularly important when the results of an experiment turn out null. In cases where the results show no significant effect of the manipulation of the independent variable on the dependent variable, a manipulation check can help the experimenter determine whether the null result is due to a real absence of an effect of the independent variable on the dependent variable or if it is due to a problem with the manipulation of the independent variable. Imagine, for example, that you exposed participants to happy or sad movie music—intending to put them in happy or sad moods—but you found that this had no effect on the number of happy or sad childhood events they recalled. This could be because being in a happy or sad mood has no effect on memories for childhood events. But it could also be that the music was ineffective at putting participants in happy or sad moods. A manipulation check—in this case, a measure of participants’ moods—would help resolve this uncertainty. If it showed that you had successfully manipulated participants’ moods, then it would appear that there is indeed no effect of mood on memory for childhood events. But if it showed that you did not successfully manipulate participants’ moods, then it would appear that you need a more effective manipulation to answer your research question.
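As a sketch of how such a check might be analyzed, the following compares hypothetical mood ratings between the two music conditions with an independent-samples t test; the ratings and the 1-to-9 scale are fabricated for illustration.

```python
# Manipulation check analysis: did the happy vs. sad music actually produce
# different self-reported moods? (1 = very sad, 9 = very happy; fabricated data.)
from scipy.stats import ttest_ind

mood_happy_music = [7, 8, 6, 7, 9, 6, 8, 7]
mood_sad_music = [3, 4, 2, 5, 3, 4, 2, 3]

t, p = ttest_ind(mood_happy_music, mood_sad_music)
print(f"t = {t:.2f}, p = {p:.4f}")
# A reliable difference in the expected direction suggests the mood manipulation
# worked; no difference suggests a null result may reflect a failed manipulation.
```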

Manipulation checks are usually done at the end of the procedure to be sure that the effect of the manipulation lasted throughout the entire procedure and to avoid calling unnecessary attention to the manipulation (to avoid a demand characteristic). However, researchers are wise to include a manipulation check in a pilot test of their experiment so that they avoid spending a lot of time and resources on an experiment that is doomed to fail and instead spend that time and energy finding a better manipulation of the independent variable.

Pilot Testing

It is always a good idea to conduct a  pilot test  of your experiment. A pilot test is a small-scale study conducted to make sure that a new procedure works as planned. In a pilot test, you can recruit participants formally (e.g., from an established participant pool) or you can recruit them informally from among family, friends, classmates, and so on. The number of participants can be small, but it should be enough to give you confidence that your procedure works as planned. There are several important questions that you can answer by conducting a pilot test:

  • Do participants understand the instructions?
  • What kind of misunderstandings do participants have, what kind of mistakes do they make, and what kind of questions do they ask?
  • Do participants become bored or frustrated?
  • Is an indirect manipulation effective? (You will need to include a manipulation check.)
  • Can participants guess the research question or hypothesis (are there demand characteristics)?
  • How long does the procedure take?
  • Are computer programs or other automated procedures working properly?
  • Are data being recorded correctly?

Of course, to answer some of these questions you will need to observe participants carefully during the procedure and talk with them about it afterward. Participants are often hesitant to criticize a study in front of the researcher, so be sure they understand that their participation is part of a pilot test and you are genuinely interested in feedback that will help you improve the procedure. If the procedure works as planned, then you can proceed with the actual study. If there are problems to be solved, you can solve them, pilot test the new procedure, and continue with this process until you are ready to proceed.

Research Methods in Psychology Copyright © 2020 by Rajiv S. Jhangiani, I-Chant A. Chiang, Carrie Cuttler, Dana C. Leighton & Molly A. Metz is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.



Psychological Science

Prospective submitters of manuscripts are encouraged to read Editor-in-Chief Simine Vazire’s editorial , as well as the editorial by Tom Hardwicke, Senior Editor for Statistics, Transparency, & Rigor, and Simine Vazire.

Psychological Science, the flagship journal of the Association for Psychological Science, is the leading peer-reviewed journal publishing empirical research spanning the entire spectrum of the science of psychology. The journal publishes high-quality research articles of general interest on important topics across the field. Replication studies are welcome and evaluated on the same criteria as novel studies. Articles are published in OnlineFirst before they are assigned to an issue. This journal is a member of the Committee on Publication Ethics (COPE).



Writing Research Papers

  • Research Paper Structure

Whether you are writing a B.S. Degree Research Paper or completing a research report for a Psychology course, it is highly likely that you will need to organize your research paper in accordance with American Psychological Association (APA) guidelines.  Here we discuss the structure of research papers according to APA style.

Major Sections of a Research Paper in APA Style

A complete research paper in APA style that is reporting on experimental research will typically contain Title page, Abstract, Introduction, Methods, Results, Discussion, and References sections. 1  Many will also contain Figures and Tables, and some will have an Appendix or Appendices.  These sections are detailed as follows (for a more in-depth guide, please refer to “How to Write a Research Paper in APA Style,” a comprehensive guide developed by Prof. Emma Geller). 2

Title Page

What is this paper called and who wrote it? – the first page of the paper; this includes the name of the paper, a “running head”, authors, and institutional affiliation of the authors.  The institutional affiliation is usually listed in an Author Note that is placed towards the bottom of the title page.  In some cases, the Author Note also contains an acknowledgment of any funding support and of any individuals that assisted with the research project.

Abstract

One-paragraph summary of the entire study – typically no more than 250 words in length (and in many cases considerably shorter than that), the Abstract provides an overview of the study.

Introduction

What is the topic and why is it worth studying? – the first major section of text in the paper, the Introduction commonly describes the topic under investigation, summarizes or discusses relevant prior research (for related details, please see the Writing Literature Reviews section of this website), identifies unresolved issues that the current research will address, and provides an overview of the research that is to be described in greater detail in the sections to follow.

Methods

What did you do? – a section which details how the research was performed.  It typically features a description of the participants/subjects that were involved, the study design, the materials that were used, and the study procedure.  If there were multiple experiments, then each experiment may require a separate Methods section.  A rule of thumb is that the Methods section should be sufficiently detailed for another researcher to duplicate your research.

Results

What did you find? – a section which describes the data that were collected and the results of any statistical tests that were performed.  It may also be prefaced by a description of the analysis procedure that was used. If there were multiple experiments, then each experiment may require a separate Results section.

Discussion

What is the significance of your results? – the final major section of text in the paper.  The Discussion commonly features a summary of the results that were obtained in the study, describes how those results address the topic under investigation and/or the issues that the research was designed to address, and may expand upon the implications of those findings.  Limitations and directions for future research are also commonly addressed.

References

List of articles and any books cited – an alphabetized list of the sources that are cited in the paper (by last name of the first author of each source).  Each reference should follow specific APA guidelines regarding author names, dates, article titles, journal titles, journal volume numbers, page numbers, book publishers, publisher locations, websites, and so on (for more information, please see the Citing References in APA Style page of this website).

Tables and Figures

Graphs and data (optional in some cases) – depending on the type of research being performed, there may be Tables and/or Figures (however, in some cases, there may be neither).  In APA style, each Table and each Figure is placed on a separate page and all Tables and Figures are included after the References.   Tables are included first, followed by Figures.   However, for some journals and undergraduate research papers (such as the B.S. Research Paper or Honors Thesis), Tables and Figures may be embedded in the text (depending on the instructor’s or editor’s policies; for more details, see "Deviations from APA Style" below).

Appendices

Supplementary information (optional) – in some cases, additional information that is not critical to understanding the research paper, such as a list of experiment stimuli, details of a secondary analysis, or programming code, is provided.  This is often placed in an Appendix.

Variations of Research Papers in APA Style

Although the major sections described above are common to most research papers written in APA style, there are variations on that pattern.  These variations include: 

  • Literature reviews – when a paper is reviewing prior published research and not presenting new empirical research itself (such as in a review article, and particularly a qualitative review), then the authors may forgo any Methods and Results sections. Instead, there is a different structure such as an Introduction section followed by sections for each of the different aspects of the body of research being reviewed, and then perhaps a Discussion section. 
  • Multi-experiment papers – when there are multiple experiments, it is common to follow the Introduction with an Experiment 1 section, itself containing Methods, Results, and Discussion subsections. Then there is an Experiment 2 section with a similar structure, an Experiment 3 section with a similar structure, and so on until all experiments are covered.  Towards the end of the paper there is a General Discussion section followed by References.  Additionally, in multi-experiment papers, it is common for the Results and Discussion subsections for individual experiments to be combined into single “Results and Discussion” sections.

Deviations from APA Style

In some cases, official APA style might not be followed (however, be sure to check with your editor, instructor, or other sources before deviating from standards of the Publication Manual of the American Psychological Association).  Such deviations may include:

  • Placement of Tables and Figures  – in some cases, to make reading through the paper easier, Tables and/or Figures are embedded in the text (for example, having a bar graph placed in the relevant Results section). The embedding of Tables and/or Figures in the text is one of the most common deviations from APA style (and is commonly allowed in B.S. Degree Research Papers and Honors Theses; however you should check with your instructor, supervisor, or editor first). 
  • Incomplete research – sometimes a B.S. Degree Research Paper in this department is written about research that is currently being planned or is in progress. In those circumstances, sometimes only an Introduction and Methods section, followed by References, is included (that is, in cases where the research itself has not formally begun).  In other cases, preliminary results are presented and noted as such in the Results section (such as in cases where the study is underway but not complete), and the Discussion section includes caveats about the in-progress nature of the research.  Again, you should check with your instructor, supervisor, or editor first.
  • Class assignments – in some classes in this department, an assignment must be written in APA style but is not exactly a traditional research paper (for instance, a student asked to write about an article that they read, and to write that report in APA style). In that case, the structure of the paper might approximate the typical sections of a research paper in APA style, but not entirely.  You should check with your instructor for further guidelines.

Workshops and Downloadable Resources

  • For in-person discussion of the process of writing research papers, please consider attending this department’s “Writing Research Papers” workshop (for dates and times, please check the undergraduate workshops calendar).

Downloadable Resources

  • How to Write APA Style Research Papers (a comprehensive guide) [ PDF ]
  • Tips for Writing APA Style Research Papers (a brief summary) [ PDF ]
  • Example APA Style Research Paper (for B.S. Degree – empirical research) [ PDF ]
  • Example APA Style Research Paper (for B.S. Degree – literature review) [ PDF ]

Further Resources

How-To Videos     

  • Writing Research Paper Videos

APA Journal Article Reporting Guidelines

  • Appelbaum, M., Cooper, H., Kline, R. B., Mayo-Wilson, E., Nezu, A. M., & Rao, S. M. (2018). Journal article reporting standards for quantitative research in psychology: The APA Publications and Communications Board task force report . American Psychologist , 73 (1), 3.
  • Levitt, H. M., Bamberg, M., Creswell, J. W., Frost, D. M., Josselson, R., & Suárez-Orozco, C. (2018). Journal article reporting standards for qualitative primary, qualitative meta-analytic, and mixed methods research in psychology: The APA Publications and Communications Board task force report . American Psychologist , 73 (1), 26.  

External Resources

  • Formatting APA Style Papers in Microsoft Word
  • How to Write an APA Style Research Paper from Hamilton College
  • WikiHow Guide to Writing APA Research Papers
  • Sample APA Formatted Paper with Comments
  • Sample APA Formatted Paper
  • Tips for Writing a Paper in APA Style

1 VandenBos, G. R. (Ed). (2010). Publication manual of the American Psychological Association (6th ed.) (pp. 41-60).  Washington, DC: American Psychological Association.

2 Geller, E. (2018). How to write an APA-style research report [Instructional materials]. Prepared by S. C. Pan for UCSD Psychology.





Purdue Online Writing Lab (Purdue OWL), College of Liberal Arts

Writing in Psychology Overview


Copyright ©1995-2018 by The Writing Lab & The OWL at Purdue and Purdue University. All rights reserved. This material may not be published, reproduced, broadcast, rewritten, or redistributed without permission. Use of this site constitutes acceptance of our terms and conditions of fair use.

Psychology is based on the study of human behavior. As a social science, experimental psychology uses empirical inquiry to help understand human behavior. According to Thrass and Sanford (2000), psychology writing has three elements: describing, explaining, and understanding concepts from a standpoint of empirical investigation.

Discipline-specific writing, such as writing done in psychology, can be similar to other types of writing you have done in the use of the writing process, writing techniques, and in locating and integrating sources. However, the field of psychology also has its own rules and expectations for writing; not everything that you have learned about writing in the past works for the field of psychology.

Writing in psychology includes the following principles:

  • Using plain language : Psychology writing is formal scientific writing that is plain and straightforward. Literary devices such as metaphors, alliteration, or anecdotes are not appropriate for writing in psychology.
  • Conciseness and clarity of language : The field of psychology stresses clear, concise prose. You should be able to make connections between empirical evidence, theories, and conclusions. See our OWL handout on conciseness for more information.
  • Evidence-based reasoning: Psychology bases its arguments on empirical evidence. Personal examples, narratives, or opinions are not appropriate for psychology.
  • Use of APA format: Psychologists use the American Psychological Association (APA) format for publications. While most student writing follows this format, some instructors may provide you with specific formatting requirements that differ from APA format .

Types of writing

Most major writing assignments in psychology courses consist of one of the following two types.

Experimental reports: Experimental reports detail the results of experimental research projects and are most often written in experimental psychology (lab) courses. Experimental reports are write-ups of your results after you have conducted research with participants. This handout provides a description of how to write an experimental report .

Critical analyses or reviews of research : Often called "term papers," a critical analysis of research narrowly examines and draws conclusions from existing literature on a topic of interest. These are frequently written in upper-division survey courses. Our research paper handouts provide a detailed overview of how to write these types of research papers.


11.2 Writing a Research Report in American Psychological Association (APA) Style

Learning Objectives

  • Identify the major sections of an APA-style research report and the basic contents of each section.
  • Plan and write an effective APA-style research report.

In this section, we look at how to write an APA-style empirical research report , an article that presents the results of one or more new studies. Recall that the standard sections of an empirical research report provide a kind of outline. Here we consider each of these sections in detail, including what information it contains, how that information is formatted and organized, and tips for writing each section. At the end of this section is a sample APA-style research report that illustrates many of these principles.

Sections of a Research Report

Title Page and Abstract

An APA-style research report begins with a title page . The title is centered in the upper half of the page, with each important word capitalized. The title should clearly and concisely (in about 12 words or fewer) communicate the primary variables and research questions. This sometimes requires a main title followed by a subtitle that elaborates on the main title, in which case the main title and subtitle are separated by a colon. Here are some titles from recent issues of professional journals published by the American Psychological Association.

  • Sex Differences in Coping Styles and Implications for Depressed Mood
  • Effects of Aging and Divided Attention on Memory for Items and Their Contexts
  • Computer-Assisted Cognitive Behavioral Therapy for Child Anxiety: Results of a Randomized Clinical Trial
  • Virtual Driving and Risk Taking: Do Racing Games Increase Risk-Taking Cognitions, Affect, and Behavior?

Below the title are the authors’ names and, on the next line, their institutional affiliation—the university or other institution where the authors worked when they conducted the research. As we have already seen, the authors are listed in an order that reflects their contribution to the research. When multiple authors have made equal contributions to the research, they often list their names alphabetically or in a randomly determined order.

It’s Soooo Cute!

How Informal Should an Article Title Be?

In some areas of psychology, the titles of many empirical research reports are informal in a way that is perhaps best described as “cute.” They usually take the form of a play on words or a well-known expression that relates to the topic under study. Here are some examples from recent issues of the Journal of Personality and Social Psychology .

  • “Let’s Get Serious: Communicating Commitment in Romantic Relationships”
  • “Through the Looking Glass Clearly: Accuracy and Assumed Similarity in Well-Adjusted Individuals’ First Impressions”
  • “Don’t Hide Your Happiness! Positive Emotion Dissociation, Social Connectedness, and Psychological Functioning”
  • “Forbidden Fruit: Inattention to Attractive Alternatives Provokes Implicit Relationship Reactance”

Individual researchers differ quite a bit in their preference for such titles. Some use them regularly, while others never use them. What might be some of the pros and cons of using cute article titles?

For articles that are being submitted for publication, the title page also includes an author note that lists the authors’ full institutional affiliations, any acknowledgments the authors wish to make to agencies that funded the research or to colleagues who commented on it, and contact information for the authors. For student papers that are not being submitted for publication—including theses—author notes are generally not necessary.

The abstract is a summary of the study. It is the second page of the manuscript and is headed with the word Abstract . The first line is not indented. The abstract presents the research question, a summary of the method, the basic results, and the most important conclusions. Because the abstract is usually limited to about 200 words, it can be a challenge to write a good one.

Introduction

The introduction begins on the third page of the manuscript. The heading at the top of this page is the full title of the manuscript, with each important word capitalized as on the title page. The introduction includes three distinct subsections, although these are typically not identified by separate headings. The opening introduces the research question and explains why it is interesting, the literature review discusses relevant previous research, and the closing restates the research question and comments on the method used to answer it.

The Opening

The opening , which is usually a paragraph or two in length, introduces the research question and explains why it is interesting. To capture the reader’s attention, researcher Daryl Bem recommends starting with general observations about the topic under study, expressed in ordinary language (not technical jargon)—observations that are about people and their behavior (not about researchers or their research; Bem, 2003). Concrete examples are often very useful here. According to Bem, this would be a poor way to begin a research report:

Festinger’s theory of cognitive dissonance received a great deal of attention during the latter part of the 20th century (p. 191)

The following would be much better:

The individual who holds two beliefs that are inconsistent with one another may feel uncomfortable. For example, the person who knows that he or she enjoys smoking but believes it to be unhealthy may experience discomfort arising from the inconsistency or disharmony between these two thoughts or cognitions. This feeling of discomfort was called cognitive dissonance by social psychologist Leon Festinger (1957), who suggested that individuals will be motivated to remove this dissonance in whatever way they can (p. 191).

After capturing the reader’s attention, the opening should go on to introduce the research question and explain why it is interesting. Will the answer fill a gap in the literature? Will it provide a test of an important theory? Does it have practical implications? Giving readers a clear sense of what the research is about and why they should care about it will motivate them to continue reading the literature review—and will help them make sense of it.

Breaking the Rules

Researcher Larry Jacoby reported several studies showing that a word that people see or hear repeatedly can seem more familiar even when they do not recall the repetitions—and that this tendency is especially pronounced among older adults. He opened his article with the following humorous anecdote (Jacoby, 1999).

A friend whose mother is suffering symptoms of Alzheimer’s disease (AD) tells the story of taking her mother to visit a nursing home, preliminary to her mother’s moving there. During an orientation meeting at the nursing home, the rules and regulations were explained, one of which regarded the dining room. The dining room was described as similar to a fine restaurant except that tipping was not required. The absence of tipping was a central theme in the orientation lecture, mentioned frequently to emphasize the quality of care along with the advantages of having paid in advance. At the end of the meeting, the friend’s mother was asked whether she had any questions. She replied that she only had one question: “Should I tip?” (p. 3).

Although both humor and personal anecdotes are generally discouraged in APA-style writing, this example is a highly effective way to start because it both engages the reader and provides an excellent real-world example of the topic under study.

The Literature Review

Immediately after the opening comes the literature review , which describes relevant previous research on the topic and can be anywhere from several paragraphs to several pages in length. However, the literature review is not simply a list of past studies. Instead, it constitutes a kind of argument for why the research question is worth addressing. By the end of the literature review, readers should be convinced that the research question makes sense and that the present study is a logical next step in the ongoing research process.

Like any effective argument, the literature review must have some kind of structure. For example, it might begin by describing a phenomenon in a general way along with several studies that demonstrate it, then describing two or more competing theories of the phenomenon, and finally presenting a hypothesis to test one or more of the theories. Or it might describe one phenomenon, then describe another phenomenon that seems inconsistent with the first one, then propose a theory that resolves the inconsistency, and finally present a hypothesis to test that theory. In applied research, it might describe a phenomenon or theory, then describe how that phenomenon or theory applies to some important real-world situation, and finally suggest a way to test whether it does, in fact, apply to that situation.

Looking at the literature review in this way emphasizes a few things. First, it is extremely important to start with an outline of the main points that you want to make, organized in the order that you want to make them. The basic structure of your argument, then, should be apparent from the outline itself. Second, it is important to emphasize the structure of your argument in your writing. One way to do this is to begin the literature review by summarizing your argument even before you begin to make it. “In this article, I will describe two apparently contradictory phenomena, present a new theory that has the potential to resolve the apparent contradiction, and finally present a novel hypothesis to test the theory.” Another way is to open each paragraph with a sentence that summarizes the main point of the paragraph and links it to the preceding points. These opening sentences provide the “transitions” that many beginning researchers have difficulty with. Instead of beginning a paragraph by launching into a description of a previous study, such as “Williams (2004) found that…,” it is better to start by indicating something about why you are describing this particular study. Here are some simple examples:

Another example of this phenomenon comes from the work of Williams (2004).
Williams (2004) offers one explanation of this phenomenon.
An alternative perspective has been provided by Williams (2004).
We used a method based on the one used by Williams (2004).

Finally, remember that your goal is to construct an argument for why your research question is interesting and worth addressing—not necessarily why your favorite answer to it is correct. In other words, your literature review must be balanced. If you want to emphasize the generality of a phenomenon, then of course you should discuss various studies that have demonstrated it. However, if there are other studies that have failed to demonstrate it, you should discuss them too. Or if you are proposing a new theory, then of course you should discuss findings that are consistent with that theory. However, if there are other findings that are inconsistent with it, again, you should discuss them too. It is acceptable to argue that the balance of the research supports the existence of a phenomenon or is consistent with a theory (and that is usually the best that researchers in psychology can hope for), but it is not acceptable to ignore contradictory evidence. Besides, a large part of what makes a research question interesting is uncertainty about its answer.

The Closing

The closing of the introduction—typically the final paragraph or two—usually includes two important elements. The first is a clear statement of the main research question or hypothesis. This statement tends to be more formal and precise than in the opening and is often expressed in terms of operational definitions of the key variables. The second is a brief overview of the method and some comment on its appropriateness. Here, for example, is how Darley and Latané (1968) concluded the introduction to their classic article on the bystander effect:

These considerations lead to the hypothesis that the more bystanders to an emergency, the less likely, or the more slowly, any one bystander will intervene to provide aid. To test this proposition it would be necessary to create a situation in which a realistic “emergency” could plausibly occur. Each subject should also be blocked from communicating with others to prevent his getting information about their behavior during the emergency. Finally, the experimental situation should allow for the assessment of the speed and frequency of the subjects’ reaction to the emergency. The experiment reported below attempted to fulfill these conditions (p. 378).

Thus the introduction leads smoothly into the next major section of the article—the method section.

The method section is where you describe how you conducted your study. An important principle for writing a method section is that it should be clear and detailed enough that other researchers could replicate the study by following your “recipe.” This means that it must describe all the important elements of the study—basic demographic characteristics of the participants, how they were recruited, whether they were randomly assigned, how the variables were manipulated or measured, how counterbalancing was accomplished, and so on. At the same time, it should avoid irrelevant details such as the fact that the study was conducted in Classroom 37B of the Industrial Technology Building or that the questionnaire was double-sided and completed using pencils.

The method section begins immediately after the introduction ends with the heading “Method” (not “Methods”) centered on the page. Immediately after this is the subheading “Participants,” left justified and in italics. The participants subsection indicates how many participants there were, the number of women and men, some indication of their age, other demographics that may be relevant to the study, and how they were recruited, including any incentives given for participation.

Figure 11.1 Three Ways of Organizing an APA-Style Method

After the participants section, the structure can vary a bit. Figure 11.1 “Three Ways of Organizing an APA-Style Method” shows three common approaches. In the first, the participants section is followed by a design and procedure subsection, which describes the rest of the method. This works well for methods that are relatively simple and can be described adequately in a few paragraphs. In the second approach, the participants section is followed by separate design and procedure subsections. This works well when both the design and the procedure are relatively complicated and each requires multiple paragraphs.

What is the difference between design and procedure? The design of a study is its overall structure. What were the independent and dependent variables? Was the independent variable manipulated, and if so, was it manipulated between or within subjects? How were the variables operationally defined? The procedure is how the study was carried out. It often works well to describe the procedure in terms of what the participants did rather than what the researchers did. For example, the participants gave their informed consent, read a set of instructions, completed a block of four practice trials, completed a block of 20 test trials, completed two questionnaires, and were debriefed and excused.

In the third basic way to organize a method section, the participants subsection is followed by a materials subsection before the design and procedure subsections. This works well when there are complicated materials to describe. This might mean multiple questionnaires, written vignettes that participants read and respond to, perceptual stimuli, and so on. The heading of this subsection can be modified to reflect its content. Instead of “Materials,” it can be “Questionnaires,” “Stimuli,” and so on.

The results section is where you present the main results of the study, including the results of the statistical analyses. Although it does not include the raw data—individual participants’ responses or scores—researchers should save their raw data and make them available to other researchers who request them. Some journals now make the raw data available online.

Although there are no standard subsections, it is still important for the results section to be logically organized. Typically it begins with certain preliminary issues. One is whether any participants or responses were excluded from the analyses and why. The rationale for excluding data should be described clearly so that other researchers can decide whether it is appropriate. A second preliminary issue is how multiple responses were combined to produce the primary variables in the analyses. For example, if participants rated the attractiveness of 20 stimulus people, you might have to explain that you began by computing the mean attractiveness rating for each participant. Or if they recalled as many items as they could from a study list of 20 words, did you count the number correctly recalled, compute the percentage correctly recalled, or perhaps compute the number correct minus the number incorrect? A third preliminary issue is the reliability of the measures. This is where you would present test-retest correlations, Cronbach’s α, or other statistics to show that the measures are consistent across time and across items. A final preliminary issue is whether the manipulation was successful. This is where you would report the results of any manipulation checks.
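
To make the second and third preliminary issues concrete, here is a minimal Python sketch; the data layout (a participants × items array) and the numbers are hypothetical, and the 20-item rating task is reduced to four items for brevity.

```python
# Minimal sketch with hypothetical data: building a composite score per
# participant and estimating internal consistency (Cronbach's alpha).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a participants x items array of scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Five participants rating four stimuli on a 1-7 scale (illustrative values only).
ratings = np.array([
    [5, 6, 5, 6],
    [3, 3, 4, 3],
    [6, 7, 6, 6],
    [4, 4, 5, 4],
    [2, 3, 2, 3],
], dtype=float)

composite = ratings.mean(axis=1)        # primary variable: mean rating per participant
print("Composite scores:", composite)
print("Cronbach's alpha:", round(cronbach_alpha(ratings), 2))
```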

The results section should then tackle the primary research questions, one at a time. Again, there should be a clear organization. One approach would be to answer the most general questions and then proceed to answer more specific ones. Another would be to answer the main question first and then to answer secondary ones. Regardless, Bem (2003) suggests the following basic structure for discussing each new result:

  • Remind the reader of the research question.
  • Give the answer to the research question in words.
  • Present the relevant statistics.
  • Qualify the answer if necessary.
  • Summarize the result.

Notice that only Step 3 necessarily involves numbers. The rest of the steps involve presenting the research question and the answer to it in words. In fact, the basic results should be clear even to a reader who skips over the numbers.
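
As an illustration of this words-first, numbers-second style, the following Python sketch runs a between-groups t test on purely illustrative numbers and embeds the statistics in a sentence that answers the question in words (the variable names and data are hypothetical).

```python
# Minimal sketch with illustrative data: answer the question in words, then
# support it with the relevant statistics.
import numpy as np
from scipy import stats

group_a = np.array([5.1, 4.8, 6.0, 5.5, 4.9, 5.7])   # hypothetical condition A scores
group_b = np.array([4.2, 4.5, 3.9, 4.8, 4.1, 4.4])   # hypothetical condition B scores

t, p = stats.ttest_ind(group_a, group_b)              # independent-samples t test
df = len(group_a) + len(group_b) - 2                  # degrees of freedom (equal variances)

print(f"Participants in condition A scored higher than those in condition B, "
      f"t({df}) = {t:.2f}, p = {p:.3f}.")
```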

The discussion is the last major section of the research report. Discussions usually consist of some combination of the following elements:

  • Summary of the research
  • Theoretical implications
  • Practical implications
  • Limitations
  • Suggestions for future research

The discussion typically begins with a summary of the study that provides a clear answer to the research question. In a short report with a single study, this might require no more than a sentence. In a longer report with multiple studies, it might require a paragraph or even two. The summary is often followed by a discussion of the theoretical implications of the research. Do the results provide support for any existing theories? If not, how can they be explained? Although you do not have to provide a definitive explanation or detailed theory for your results, you at least need to outline one or more possible explanations. In applied research—and often in basic research—there is also some discussion of the practical implications of the research. How can the results be used, and by whom, to accomplish some real-world goal?

The theoretical and practical implications are often followed by a discussion of the study’s limitations. Perhaps there are problems with its internal or external validity. Perhaps the manipulation was not very effective or the measures not very reliable. Perhaps there is some evidence that participants did not fully understand their task or that they were suspicious of the intent of the researchers. Now is the time to discuss these issues and how they might have affected the results. But do not overdo it. All studies have limitations, and most readers will understand that a different sample or different measures might have produced different results. Unless there is good reason to think they would have, however, there is no reason to mention these routine issues. Instead, pick two or three limitations that seem like they could have influenced the results, explain how they could have influenced the results, and suggest ways to deal with them.

Most discussions end with some suggestions for future research. If the study did not satisfactorily answer the original research question, what will it take to do so? What new research questions has the study raised? This part of the discussion, however, is not just a list of new questions. It is a discussion of two or three of the most important unresolved issues. This means identifying and clarifying each question, suggesting some alternative answers, and even suggesting ways they could be studied.

Finally, some researchers are quite good at ending their articles with a sweeping or thought-provoking conclusion. Darley and Latané (1968), for example, ended their article on the bystander effect by discussing the idea that whether people help others may depend more on the situation than on their personalities. Their final sentence is, “If people understand the situational forces that can make them hesitate to intervene, they may better overcome them” (p. 383). However, this kind of ending can be difficult to pull off. It can sound overreaching or just banal and end up detracting from the overall impact of the article. It is often better simply to end when you have made your final point (although you should avoid ending on a limitation).

The references section begins on a new page with the heading “References” centered at the top of the page. All references cited in the text are then listed in the format presented earlier. They are listed alphabetically by the last name of the first author. If two sources have the same first author, they are listed alphabetically by the last name of the second author. If all the authors are the same, then they are listed chronologically by the year of publication. Everything in the reference list is double-spaced both within and between references.
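
For authors who keep their reference metadata in a spreadsheet or script, the ordering rule just described is simply a multi-level sort; the short Python sketch below (with hypothetical entries) illustrates it.

```python
# Minimal sketch with hypothetical entries: order references by first author,
# then second author, then year of publication.
refs = [
    {"authors": ["Smith", "Jones"], "year": 2004},
    {"authors": ["Brown"], "year": 1999},
    {"authors": ["Smith", "Adams"], "year": 2001},
    {"authors": ["Smith", "Adams"], "year": 1998},
]

ordered = sorted(
    refs,
    key=lambda r: (
        r["authors"][0],                                   # first author's last name
        r["authors"][1] if len(r["authors"]) > 1 else "",  # then second author's
        r["year"],                                         # then year, oldest first
    ),
)

for r in ordered:
    print(", ".join(r["authors"]), r["year"])
```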

Appendixes, Tables, and Figures

Appendixes, tables, and figures come after the references. An appendix is appropriate for supplemental material that would interrupt the flow of the research report if it were presented within any of the major sections. An appendix could be used to present lists of stimulus words, questionnaire items, detailed descriptions of special equipment or unusual statistical analyses, or references to the studies that are included in a meta-analysis. Each appendix begins on a new page. If there is only one, the heading is “Appendix,” centered at the top of the page. If there is more than one, the headings are “Appendix A,” “Appendix B,” and so on, and they appear in the order they were first mentioned in the text of the report.

After any appendixes come tables and then figures. Tables and figures are both used to present results. Figures can also be used to illustrate theories (e.g., in the form of a flowchart), display stimuli, outline procedures, and present many other kinds of information. Each table and figure appears on its own page. Tables are numbered in the order that they are first mentioned in the text (“Table 1,” “Table 2,” and so on). Figures are numbered the same way (“Figure 1,” “Figure 2,” and so on). A brief explanatory title, with the important words capitalized, appears above each table. Each figure is given a brief explanatory caption, where (aside from proper nouns or names) only the first word of each sentence is capitalized. More details on preparing APA-style tables and figures are presented later in the book.

Sample APA-Style Research Report

Figure 11.2 “Title Page and Abstract”, Figure 11.3 “Introduction and Method”, Figure 11.4 “Results and Discussion”, and Figure 11.5 “References and Figure” show some sample pages from an APA-style empirical research report originally written by undergraduate student Tomoe Suyama at California State University, Fresno. The main purpose of these figures is to illustrate the basic organization and formatting of an APA-style empirical research report, although many high-level and low-level style conventions can be seen here too.

Figure 11.2 Title Page and Abstract

This student paper does not include the author note on the title page. The abstract appears on its own page.

Figure 11.3 Introduction and Method

Note that the introduction is headed with the full title, and the method section begins immediately after the introduction ends.

Figure 11.4 Results and Discussion

The discussion begins immediately after the results section ends.

Figure 11.5 References and Figure

If there were appendixes or tables, they would come before the figure.

Key Takeaways

  • An APA-style empirical research report consists of several standard sections. The main ones are the abstract, introduction, method, results, discussion, and references.
  • The introduction consists of an opening that presents the research question, a literature review that describes previous research on the topic, and a closing that restates the research question and comments on the method. The literature review constitutes an argument for why the current study is worth doing.
  • The method section describes the method in enough detail that another researcher could replicate the study. At a minimum, it consists of a participants subsection and a design and procedure subsection.
  • The results section describes the results in an organized fashion. Each primary result is presented in terms of statistical results but also explained in words.
  • The discussion typically summarizes the study, discusses theoretical and practical implications and limitations of the study, and offers suggestions for further research.
  • Practice: Look through an issue of a general interest professional journal (e.g., Psychological Science). Read the opening of the first five articles and rate the effectiveness of each one from 1 (very ineffective) to 5 (very effective). Write a sentence or two explaining each rating.
  • Practice: Find a recent article in a professional journal and identify where the opening, literature review, and closing of the introduction begin and end.
  • Practice: Find a recent article in a professional journal and highlight in a different color each of the following elements in the discussion: summary, theoretical implications, practical implications, limitations, and suggestions for future research.

Bem, D. J. (2003). Writing the empirical journal article. In J. M. Darley, M. P. Zanna, & H. R. Roediger III (Eds.), The compleat academic: A practical guide for the beginning social scientist (2nd ed.). Washington, DC: American Psychological Association.

Darley, J. M., & Latané, B. (1968). Bystander intervention in emergencies: Diffusion of responsibility. Journal of Personality and Social Psychology, 4 , 377–383.

Research Methods in Psychology Copyright © 2016 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Reporting Standards for Research in Psychology

In anticipation of the impending revision of the Publication Manual of the American Psychological Association , APA’s Publications and Communications Board formed the Working Group on Journal Article Reporting Standards (JARS) and charged it to provide the board with background and recommendations on information that should be included in manuscripts submitted to APA journals that report (a) new data collections and (b) meta-analyses. The JARS Group reviewed efforts in related fields to develop standards and sought input from other knowledgeable groups. The resulting recommendations contain (a) standards for all journal articles, (b) more specific standards for reports of studies with experimental manipulations or evaluations of interventions using research designs involving random or nonrandom assignment, and (c) standards for articles reporting meta-analyses. The JARS Group anticipated that standards for reporting other research designs (e.g., observational studies, longitudinal studies) would emerge over time. This report also (a) examines societal developments that have encouraged researchers to provide more details when reporting their studies, (b) notes important differences between requirements, standards, and recommendations for reporting, and (c) examines benefits and obstacles to the development and implementation of reporting standards.

The American Psychological Association (APA) Working Group on Journal Article Reporting Standards (the JARS Group) arose out of a request for information from the APA Publications and Communications Board. The Publications and Communications Board had previously allowed any APA journal editor to require that a submission labeled by an author as describing a randomized clinical trial conform to the CONSORT (Consolidated Standards of Reporting Trials) reporting guidelines ( Altman et al., 2001 ; Moher, Schulz, & Altman, 2001 ). In this context, and recognizing that APA was about to initiate a revision of its Publication Manual ( American Psychological Association, 2001 ), the Publications and Communications Board formed the JARS Group to provide itself with input on how the newly developed reporting standards related to the material currently in its Publication Manual and to propose some related recommendations for the new edition.

The JARS Group was formed of five current and previous editors of APA journals. It divided its work into six stages:

  • establishing the need for more well-defined reporting standards,
  • gathering the standards developed by other related groups and professional organizations relating to both new data collections and meta-analyses,
  • drafting a set of standards for APA journals,
  • sharing the drafted standards with cognizant others,
  • refining the standards yet again, and
  • addressing additional and unresolved issues.

This article is the report of the JARS Group’s findings and recommendations. It was approved by the Publications and Communications Board in the summer of 2007 and again in the spring of 2008 and was transmitted to the task force charged with revising the Publication Manual for consideration as it did its work. The content of the report roughly follows the stages of the group’s work. Those wishing to move directly to the reporting standards can go to the sections titled Information for Inclusion in Manuscripts That Report New Data Collections and Information for Inclusion in Manuscripts That Report Meta-Analyses.

Why Are More Well-Defined Reporting Standards Needed?

The JARS Group members began their work by sharing with each other documents they knew of that related to reporting standards. The group found that the past decade had witnessed two developments in the social, behavioral, and medical sciences that encouraged researchers to provide more details when they reported their investigations. The first impetus for more detail came from the worlds of policy and practice. In these realms, the call for use of “evidence-based” decision making had placed a new emphasis on the importance of understanding how research was conducted and what it found. For example, in 2006, the APA Presidential Task Force on Evidence-Based Practice defined the term evidence-based practice to mean “the integration of the best available research with clinical expertise” (p. 273; italics added). The report went on to say that “evidence-based practice requires that psychologists recognize the strengths and limitations of evidence obtained from different types of research” (p. 275).

In medicine, the movement toward evidence-based practice is now so pervasive (see Sackett, Rosenberg, Muir Grey, Hayes & Richardson, 1996 ) that there exists an international consortium of researchers (the Cochrane Collaboration; http://www.cochrane.org/index.htm ) producing thousands of papers examining the cumulative evidence on everything from public health initiatives to surgical procedures. Another example of accountability in medicine, and the importance of relating medical practice to solid medical science, comes from the member journals of the International Committee of Medical Journal Editors (2007) , who adopted a policy requiring registration of all clinical trials in a public trials registry as a condition of consideration for publication.

In education, the No Child Left Behind Act of 2001 (2002) required that the policies and practices adopted by schools and school districts be “scientifically based,” a term that appears over 100 times in the legislation. In public policy, a consortium similar to that in medicine now exists (the Campbell Collaboration; http://www.campbellcollaboration.org ), as do organizations meant to promote government policymaking based on rigorous evidence of program effectiveness (e.g., the Coalition for Evidence-Based Policy; http://www.excelgov.org/index.php?keyword=a432fbc34d71c7 ). Each of these efforts operates with a definition of what constitutes sound scientific evidence. The developers of previous reporting standards argued that new transparency in reporting is needed so that judgments can be made by users of evidence about the appropriate inferences and applications derivable from research findings.

The second impetus for more detail in research reporting has come from within the social and behavioral science disciplines. As evidence about specific hypotheses and theories accumulates, greater reliance is being placed on syntheses of research, especially meta-analyses ( Cooper, 2009 ; Cooper, Hedges, & Valentine, 2009 ), to tell us what we know about the workings of the mind and the laws of behavior. Different findings relating to a specific question examined with various research designs are now mined by second users of the data for clues to the mediation of basic psychological, behavioral, and social processes. These clues emerge by clustering studies based on distinctions in their methods and then comparing their results. This synthesis-based evidence is then used to guide the next generation of problems and hypotheses studied in new data collections. Without complete reporting of methods and results, the utility of studies for purposes of research synthesis and meta-analysis is diminished.

The JARS Group viewed both of these stimulants to action as positive developments for the psychological sciences. The first provides an unprecedented opportunity for psychological research to play an important role in public and health policy. The second promises a sounder evidence base for explanations of psychological phenomena and a next generation of research that is more focused on resolving critical issues.

The Current State of the Art

Next, the JARS Group collected efforts of other social and health organizations that had recently developed reporting standards. Three recent efforts quickly came to the group’s attention. Two efforts had been undertaken in the medical and health sciences to improve the quality of reporting of primary studies and to make reports more useful for the next users of the data. The first effort is called CONSORT (Consolidated Standards of Reporting Trials; Altman et al., 2001 ; Moher et al., 2001 ). The CONSORT standards were developed by an ad hoc group primarily composed of biostatisticians and medical researchers. CONSORT relates to the reporting of studies that carried out random assignment of participants to conditions. It comprises a checklist of study characteristics that should be included in research reports and a flow diagram that provides readers with a description of the number of participants as they progress through the study—and by implication the number who drop out—from the time they are deemed eligible for inclusion until the end of the investigation. These guidelines are now required by the top-tier medical journals and many other biomedical journals. Some APA journals also use the CONSORT guidelines.

The second effort is called TREND (Transparent Reporting of Evaluations with Nonexperimental Designs; Des Jarlais, Lyles, Crepaz, & the TREND Group, 2004 ). TREND was developed under the initiative of the Centers for Disease Control, which brought together a group of editors of journals related to public health, including several journals in psychology. TREND contains a 22-item checklist, similar to CONSORT, but with a specific focus on reporting standards for studies that use quasi-experimental designs, that is, group comparisons in which the groups were established using procedures other than random assignment to place participants in conditions.

In the social sciences, the American Educational Research Association (2006) recently published “Standards for Reporting on Empirical Social Science Research in AERA Publications.” These standards encompass a broad range of research designs, including both quantitative and qualitative approaches, and are divided into eight general areas, including problem formulation; design and logic of the study; sources of evidence; measurement and classification; analysis and interpretation; generalization; ethics in reporting; and title, abstract, and headings. They contain about two dozen general prescriptions for the reporting of studies as well as separate prescriptions for quantitative and qualitative studies.

Relation to the APA Publication Manual

The JARS Group also examined previous editions of the APA Publication Manual and discovered that for the last half century it has played an important role in the establishment of reporting standards. The first edition of the APA Publication Manual , published in 1952 as a supplement to Psychological Bulletin ( American Psychological Association, Council of Editors, 1952 ), was 61 pages long, printed on 6-in. by 9-in. paper, and cost $1. The principal divisions of manuscripts were titled Problem, Method, Results, Discussion, and Summary (now the Abstract). According to the first Publication Manual, the section titled Problem was to include the questions asked and the reasons for asking them. When experiments were theory-driven, the theoretical propositions that generated the hypotheses were to be given, along with the logic of the derivation and a summary of the relevant arguments. The method was to be “described in enough detail to permit the reader to repeat the experiment unless portions of it have been described in other reports which can be cited” (p. 9). This section was to describe the design and the logic of relating the empirical data to theoretical propositions, the subjects, sampling and control devices, techniques of measurement, and any apparatus used. Interestingly, the 1952 Manual also stated, “Sometimes space limitations dictate that the method be described synoptically in a journal, and a more detailed description be given in auxiliary publication” (p. 25). The Results section was to include enough data to justify the conclusions, with special attention given to tests of statistical significance and the logic of inference and generalization. The Discussion section was to point out limitations of the conclusions, relate them to other findings and widely accepted points of view, and give implications for theory or practice. Negative or unexpected results were not to be accompanied by extended discussions; the editors wrote, “Long ‘alibis,’ unsupported by evidence or sound theory, add nothing to the usefulness of the report” (p. 9). Also, authors were encouraged to use good grammar and to avoid jargon, as “some writing in psychology gives the impression that long words and obscure expressions are regarded as evidence of scientific status” (pp. 25–26).

Through the following editions, the recommendations became more detailed and specific. Of special note was the Report of the Task Force on Statistical Inference ( Wilkinson & the Task Force on Statistical Inference, 1999 ), which presented guidelines for statistical reporting in APA journals that informed the content of the 4th edition of the Publication Manual . Although the 5th edition of the Manual does not contain a clearly delineated set of reporting standards, this does not mean the Manual is devoid of standards. Instead, recommendations, standards, and requirements for reporting are embedded in various sections of the text. Most notably, statements regarding the method and results that should be included in a research report (as well as how this information should be reported) appear in the Manual ’s description of the parts of a manuscript (pp. 10–29). For example, when discussing who participated in a study, the Manual states, “When humans participated as the subjects of the study, report the procedures for selecting and assigning them and the agreements and payments made” (p. 18). With regard to the Results section, the Manual states, “Mention all relevant results, including those that run counter to the hypothesis” (p. 20), and it provides descriptions of “sufficient statistics” (p. 23) that need to be reported.

Thus, although reporting standards and requirements are not highlighted in the most recent edition of the Manual, they appear nonetheless. In that context, then, the proposals offered by the JARS Group can be viewed not as breaking new ground for psychological research but rather as a systematization, clarification, and—to a lesser extent than might at first appear—an expansion of standards that already exist. The intended contribution of the current effort, then, becomes as much one of increased emphasis as increased content.

Drafting, Vetting, and Refinement of the JARS

Next, the JARS Group canvassed the APA Council of Editors to ascertain the degree to which the CONSORT and TREND standards were already in use by APA journals and to make us aware of other reporting standards. Also, the JARS Group requested from the APA Publications Office data it had on the use of auxiliary websites by authors of APA journal articles. With this information in hand, the JARS Group compared the CONSORT, TREND, and AERA standards to one another and developed a combined list of nonredundant elements contained in any or all of the three sets of standards. The JARS Group then examined the combined list, rewrote some items for clarity and ease of comprehension by an audience of psychologists and other social and behavioral scientists, and added a few suggestions of its own.

This combined list was then shared with the APA Council of Editors, the APA Publication Manual Revision Task Force, and the Publications and Communications Board. These groups were requested to react to it. After receiving these reactions and anonymous reactions from reviewers chosen by the American Psychologist, the JARS Group revised its report and arrived at the list of recommendations contained in Tables 1, 2, and 3 and Figure 1. The report was then approved again by the Publications and Communications Board.

Figure 1. Flow of participants through each stage of an experiment or quasi-experiment (adapted from the CONSORT flowchart).

Note. This flowchart is an adaptation of the flowchart offered by the CONSORT Group ( Altman et al., 2001 ; Moher, Schulz, & Altman, 2001 ). Journals publishing the original CONSORT flowchart have waived copyright protection.

Table 1. Journal Article Reporting Standards (JARS): Information Recommended for Inclusion in Manuscripts That Report New Data Collections Regardless of Research Design

Table 2, Module A: Reporting Standards for Studies With an Experimental Manipulation or Intervention (in Addition to Material Presented in Table 1)

Table 3. Reporting Standards for Studies Using Random and Nonrandom Assignment of Participants to Experimental Groups

Information for Inclusion in Manuscripts That Report New Data Collections

The entries in Tables 1 through 3 and Figure 1 divide the reporting standards into three parts. First, Table 1 presents information recommended for inclusion in all reports submitted for publication in APA journals. Note that these recommendations contain only a brief entry regarding the type of research design. Along with these general standards, then, the JARS Group also recommended that specific standards be developed for different types of research designs. Thus, Table 2 provides standards for research designs involving experimental manipulations or evaluations of interventions (Module A). Next, Table 3 provides standards for reporting either (a) a study involving random assignment of participants to experimental or intervention conditions (Module A1) or (b) quasi-experiments, in which different groups of participants receive different experimental manipulations or interventions but the groups are formed (and perhaps equated) using a procedure other than random assignment (Module A2). Using this modular approach, the JARS Group was able to incorporate the general recommendations from the current APA Publication Manual and both the CONSORT and TREND standards into a single set of standards. This approach also makes it possible for other research designs (e.g., observational studies, longitudinal designs) to be added to the standards by adding new modules.

The standards are categorized into the sections of a research report used by APA journals. To illustrate how the tables would be used, note that the Method section in Table 1 is divided into subsections regarding participant characteristics, sampling procedures, sample size, measures and covariates, and an overall categorization of the research design. Then, if the design being described involved an experimental manipulation or intervention, Table 2 presents additional information about the research design that should be reported, including a description of the manipulation or intervention itself and the units of delivery and analysis. Next, Table 3 presents two separate sets of reporting standards to be used depending on whether the participants in the study were assigned to conditions using a random or nonrandom procedure. Figure 1 , an adaptation of the chart recommended in the CONSORT guidelines, presents a chart that should be used to present the flow of participants through the stages of either an experiment or a quasi-experiment. It details the amount and cause of participant attrition at each stage of the research.

In the future, new modules and flowcharts regarding other research designs could be added to the standards to be used in conjunction with Table 1 . For example, tables could be constructed to replace Table 2 for the reporting of observational studies (e.g., studies with no manipulations as part of the data collection), longitudinal studies, structural equation models, regression discontinuity designs, single-case designs, or real-time data capture designs ( Stone & Shiffman, 2002 ), to name just a few.

Additional standards could be adopted for any of the parts of a report. For example, the Evidence-Based Behavioral Medicine Committee (Davidson et al., 2003) examined each of the 22 items on the CONSORT checklist and described, for each, special considerations for reporting research on behavioral medicine interventions. Also, this group proposed an additional 5 items, not included in the CONSORT list, that they felt should be included in reports on behavioral medicine interventions: (a) training of treatment providers, (b) supervision of treatment providers, (c) patient and provider treatment allegiance, (d) manner of testing and success of treatment delivery by the provider, and (e) treatment adherence. The JARS Group encourages other authoritative groups of interested researchers, practitioners, and journal editorial teams to use Table 1 as a similar starting point in their efforts, adding and deleting items and modules to fit the information needs dictated by research designs that are prominent in specific subdisciplines and topic areas. These revisions could then be incorporated into future iterations of the JARS.

Information for Inclusion in Manuscripts That Report Meta-Analyses

The same pressures that have led to proposals for reporting standards for manuscripts that report new data collections have led to similar efforts to establish standards for the reporting of other types of research. Particular attention has been focused on the reporting of meta-analyses.

With regard to reporting standards for meta-analysis, the JARS Group began by contacting the members of the Society for Research Synthesis Methodology and asking them to share with the group what they felt were the critical aspects of meta-analysis conceptualization, methodology, and results that need to be reported so that readers (and manuscript reviewers) can make informed, critical judgments about the appropriateness of the methods used for the inferences drawn. This query led to the identification of four other efforts to establish reporting standards for meta-analysis. These included the QUOROM Statement (Quality of Reporting of Meta-analysis; Moher et al., 1999 ) and its revision, PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses; Moher, Liberati, Tetzlaff, Altman, & the PRISMA Group, 2008 ), MOOSE (Meta-analysis of Observational Studies in Epidemiology; Stroup et al., 2000 ), and the Potsdam Consultation on Meta-Analysis ( Cook, Sackett, & Spitzer, 1995 ).

Next the JARS Group compared the content of each of the four sets of standards with the others and developed a combined list of nonredundant elements contained in any or all of them. The JARS Group then examined the combined list, rewrote some items for clarity and ease of comprehension by an audience of psychologists, and added a few suggestions of its own. Then the resulting recommendations were shared with a subgroup of members of the Society for Research Synthesis Methodology who had experience writing and reviewing research syntheses in the discipline of psychology. After these suggestions were incorporated into the list, it was shared with members of the Publications and Communications Board, who were requested to react to it. After receiving these reactions, the JARS Group arrived at the list of recommendations contained in Table 4 , titled Meta-Analysis Reporting Standards (MARS). These were then approved by the Publications and Communications Board.

Table 4. Meta-Analysis Reporting Standards (MARS): Information Recommended for Inclusion in Manuscripts Reporting Meta-Analyses

Other Issues Related to Reporting Standards

A definition of “reporting standards”.

The JARS Group recognized that there are three related terms that need definition when one speaks about journal article reporting standards: recommendations, standards, and requirements. According to Merriam-Webster’s Online Dictionary (n.d.) , to recommend is “to present as worthy of acceptance or trial … to endorse as fit, worthy, or competent.” In contrast, a standard is more specific and should carry more influence: “something set up and established by authority as a rule for the measure of quantity, weight, extent, value, or quality.” And finally, a requirement goes further still by dictating a course of action—“something wanted or needed”—and to require is “to claim or ask for by right and authority … to call for as suitable or appropriate … to demand as necessary or essential.”

With these definitions in mind, the JARS Group felt it was providing recommendations regarding what information should be reported in the write-up of a psychological investigation and that these recommendations could also be viewed as standards or at least as a beginning effort at developing standards. The JARS Group felt this characterization was appropriate because the information it was proposing for inclusion in reports was based on an integration of efforts by authoritative groups of researchers and editors. However, the proposed standards are not offered as requirements. The methods used in the subdisciplines of psychology are so varied that the critical information needed to assess the quality of research and to integrate it successfully with other related studies varies considerably from method to method in the context of the topic under consideration. By not calling them “requirements,” the JARS Group felt the standards would be given the weight of authority while retaining for authors and editors the flexibility to use the standards in the most efficacious fashion (see below).

The Tension Between Complete Reporting and Space Limitations

There is an innate tension between transparency in reporting and the space limitations imposed by the print medium. As descriptions of research expand, so does the space needed to report them. However, recent improvements in the capacity of and access to electronic storage of information suggest that this trade-off could someday disappear. For example, the journals of the APA, among others, now make available to authors auxiliary websites that can be used to store supplemental materials associated with the articles that appear in print. Similarly, it is possible for electronic journals to contain short reports of research with hot links to websites containing supplementary files.

The JARS Group recommends an increased use and standardization of supplemental websites by APA journals and authors. Some of the information contained in the reporting standards might not appear in the published article itself but rather in a supplemental website. For example, if the instructions in an investigation are lengthy but critical to understanding what was done, they may be presented verbatim in a supplemental website. Supplemental materials might include the flowchart of participants through the study. They might also include oversized tables of results (especially those associated with meta-analyses involving many studies), audio or video clips, computer programs, and even primary or supplementary data sets. Of course, all such supplemental materials should be subject to peer review and should be submitted with the initial manuscript. Editors and reviewers can assist authors in determining what material is supplemental and what needs to be presented in the article proper.

Other Benefits of Reporting Standards

The general principle that guided the establishment of the JARS for psychological research was the promotion of sufficient and transparent descriptions of how a study was conducted and what the researcher(s) found. Complete reporting allows clearer determination of the strengths and weaknesses of a study. This permits the users of the evidence to judge more accurately the appropriate inferences and applications derivable from research findings.

Related to quality assessments, it could be argued as well that the existence of reporting standards will have a salutary effect on the way research is conducted. For example, by setting a standard that rates of loss of participants should be reported (see Figure 1 ), researchers may begin considering more concretely what acceptable levels of attrition are and may come to employ more effective procedures meant to maximize the number of participants who complete a study. Or standards that specify reporting a confidence interval along with an effect size might motivate researchers to plan their studies so as to ensure that the confidence intervals surrounding point estimates will be appropriately narrow.
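
As an illustration of the kind of planning such a standard encourages, here is a minimal Python sketch that reports a standardized mean difference with a 95% confidence interval; the summary statistics are illustrative, and the interval uses one common large-sample approximation to the sampling variance of Cohen's d rather than any prescribed APA method.

```python
# Minimal sketch with illustrative numbers: an effect size (Cohen's d) and a
# 95% confidence interval based on a large-sample variance approximation.
import math

def cohens_d_ci(mean1, mean2, sd_pooled, n1, n2, z=1.96):
    d = (mean1 - mean2) / sd_pooled
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))   # approximate variance of d
    half_width = z * math.sqrt(var_d)
    return d, d - half_width, d + half_width

d, lower, upper = cohens_d_ci(mean1=5.4, mean2=4.6, sd_pooled=1.2, n1=40, n2=40)
print(f"d = {d:.2f}, 95% CI [{lower:.2f}, {upper:.2f}]")
```

Rerunning the same calculation with larger group sizes shows directly how planned sample size narrows the interval around the point estimate.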

Also, as noted above, reporting standards can improve secondary use of data by making studies more useful for meta-analysis. More broadly, if standards are similar across disciplines, a consistency in reporting could promote interdisciplinary dialogue by making it clearer to researchers how their efforts relate to one another.

And finally, reporting standards can make it easier for other researchers to design and conduct replications and related studies by providing more complete descriptions of what has been done before. Without complete reporting of the critical aspects of design and results, the value of the next generation of research may be compromised.

Possible Disadvantages of Standards

It is important to point out that reporting standards also can lead to excessive standardization with negative implications. For example, standardized reporting could fill articles with details of methods and results that are inconsequential to interpretation. The critical facts about a study can get lost in an excess of minutiae. Further, a forced consistency can lead to ignoring important uniqueness. Reporting standards that appear comprehensive might lead researchers to believe that “If it’s not asked for or does not conform to criteria specified in the standards, it’s not necessary to report.” In rare instances, then, the setting of reporting standards might lead to the omission of information critical to understanding what was done in a study and what was found.

Also, as noted above, different methods are required for studying different psychological phenomena. What needs to be reported in order to evaluate the correspondence between methods and inferences is highly dependent on the research question and empirical approach. Inferences about the effectiveness of psychotherapy, for example, require attention to aspects of research design and analysis that are different from those important for inferences in the neuroscience of text processing. This context dependency pertains not only to topic-specific considerations but also to research designs. Thus, an experimental study of the determinants of well-being analyzed via analysis of variance engenders different reporting needs than a study on the same topic that employs a passive longitudinal design and structural equation modeling. Indeed, the variations in substantive topics and research designs are factorial in this regard. So experiments in psychotherapy and neuroscience could share some reporting standards, even though studies employing structural equation models investigating well-being would have little in common with experiments in neuroscience.

Obstacles to Developing Standards

One obstacle to developing reporting standards encountered by the JARS Group was that differing taxonomies of research approaches exist and different terms are used within different subdisciplines to describe the same operational research variations. As simple examples, researchers in health psychology typically refer to studies that use experimental manipulations of treatments conducted in naturalistic settings as randomized clinical trials, whereas similar designs are referred to as randomized field trials in educational psychology. Some research areas refer to the use of random assignment of participants, whereas others use the term random allocation. Another example involves the terms multilevel model, hierarchical linear model, and mixed effects model, all of which are used to identify a similar approach to data analysis. There have been, from time to time, calls for standardized terminology to describe commonly but inconsistently used scientific terms, such as Kraemer et al.’s (1997) distinctions among words commonly used to denote risk. To address this problem, the JARS Group attempted to use the simplest descriptions possible and to avoid jargon and recommended that the new Publication Manual include some explanatory text.

A second obstacle was that certain research topics and methods will reveal different levels of consensus regarding what is and is not important to report. Generally, the newer and more complex the technique, the less agreement there will be about reporting standards. For example, although there are many benefits to reporting effect sizes, there are certain situations (e.g., multilevel designs) where no clear consensus exists on how best to conceptualize and/or calculate effect size measures. In a related vein, reporting a confidence interval with an effect size is sound advice, but calculating confidence intervals for effect sizes is often difficult given the current state of software. For this reason, the JARS Group avoided developing reporting standards for research designs about which a professional consensus had not yet emerged. As consensus emerges, the JARS can be expanded by adding modules.

Finally, the rapid pace of developments in methodology dictates that any standards would have to be updated frequently in order to retain currency. For example, the state of the art for reporting various analytic techniques is in a constant state of flux. Although some general principles (e.g., reporting the estimation procedure used in a structural equation model) can incorporate new developments easily, other developments can involve fundamentally new types of data for which standards must, by necessity, evolve rapidly. Nascent and emerging areas, such as functional neuroimaging and molecular genetics, may require developers of standards to be on constant vigil to ensure that new research areas are appropriately covered.

Questions for the Future

It has been mentioned several times that the setting of standards for reporting of research in psychology involves both general considerations and considerations specific to separate subdisciplines. And, as the brief history of standards in the APA Publication Manual suggests, standards evolve over time. The JARS Group expects refinements to the contents of its tables. Further, in the spirit of evidence-based decision making that is one impetus for the renewed emphasis on reporting standards, we encourage the empirical examination of the effects that standards have on reporting practices. Not unlike the issues many psychologists study, the proposal and adoption of reporting standards is itself an intervention. It can be studied for its effects on the contents of research reports and, most important, its impact on the uses of psychological research by decision makers in various spheres of public and health policy and by scholars seeking to understand the human mind and behavior.

The Working Group on Journal Article Reporting Standards was composed of Mark Appelbaum, Harris Cooper (Chair), Scott Maxwell, Arthur Stone, and Kenneth J. Sher. The working group wishes to thank members of the American Psychological Association’s (APA’s) Publications and Communications Board, the APA Council of Editors, and the Society for Research Synthesis Methodology for comments on this report and the standards contained herein.

  • Altman DG, Schulz KF, Moher D, Egger M, Davidoff F, Elbourne D, Gotzsche PC, Lang T. The revised CONSORT statement for reporting randomized trials: Explanation and elaboration. Annals of Internal Medicine. 2001:663–694. Retrieved April 20, 2007, from http://www.consort-statement.org/
  • American Educational Research Association. Standards for reporting on empirical social science research in AERA publications. Educational Researcher. 2006;35(6):33–40.
  • American Psychological Association. Publication manual of the American Psychological Association. 5th ed. Washington, DC: Author; 2001.
  • American Psychological Association, Council of Editors. Publication manual of the American Psychological Association. Psychological Bulletin. 1952;49(Suppl., Pt. 2).
  • APA Presidential Task Force on Evidence-Based Practice. Evidence-based practice in psychology. American Psychologist. 2006;61:271–283.
  • Cook DJ, Sackett DL, Spitzer WO. Methodologic guidelines for systematic reviews of randomized control trials in health care from the Potsdam Consultation on Meta-Analysis. Journal of Clinical Epidemiology. 1995;48:167–171.
  • Cooper H. Research synthesis and meta-analysis: A step-by-step approach. 4th ed. Thousand Oaks, CA: Sage; 2009.
  • Cooper H, Hedges LV, Valentine JC, editors. The handbook of research synthesis and meta-analysis. 2nd ed. New York: Russell Sage Foundation; 2009.
  • Davidson KW, Goldstein M, Kaplan RM, Kaufmann PG, Knatterud GL, Orleans TC, et al. Evidence-based behavioral medicine: What is it and how do we achieve it? Annals of Behavioral Medicine. 2003;26:161–171.
  • Des Jarlais DC, Lyles C, Crepaz N, the TREND Group. Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: The TREND statement. American Journal of Public Health. 2004:361–366. Retrieved April 20, 2007, from http://www.trend-statement.org/asp/documents/statements/AJPH_Mar2004_Trendstatement.pdf
  • International Committee of Medical Journal Editors. Uniform requirements for manuscripts submitted to biomedical journals: Writing and editing for biomedical publication. 2007. Retrieved April 9, 2008, from http://www.icmje.org/#clin_trials
  • Kraemer HC, Kazdin AE, Offord DR, Kessler RC, Jensen PS, Kupfer DJ. Coming to terms with the terms of risk. Archives of General Psychiatry. 1997;54:337–343.
  • Merriam-Webster’s online dictionary. (n.d.). Retrieved April 20, 2007, from http://www.m-w.com/dictionary/
  • Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, Stroup D, for the QUOROM Group. Improving the quality of reporting of meta-analysis of randomized controlled trials: The QUOROM statement. Lancet. 1999;354:1896–1900.
  • Moher D, Schulz KF, Altman DG. The CONSORT statement: Revised recommendations for improving the quality of reports of parallel-group randomized trials. Annals of Internal Medicine. 2001:657–662. Retrieved April 20, 2007, from http://www.consort-statement.org
  • Moher D, Liberati A, Tetzlaff J, Altman DG, the PRISMA Group. Preferred reporting items for systematic reviews and meta-analysis: The PRISMA statement. 2008. Manuscript submitted for publication.
  • No Child Left Behind Act of 2001, Pub. L. 107–110, 115 Stat. 1425 (2002, January 8).
  • Sackett DL, Rosenberg WMC, Muir Grey JA, Hayes RB, Richardson WS. Evidence based medicine: What it is and what it isn’t. British Medical Journal. 1996;312:71–72.
  • Stone AA, Shiffman S. Capturing momentary, self-report data: A proposal for reporting guidelines. Annals of Behavioral Medicine. 2002;24:236–243.
  • Stroup DF, Berlin JA, Morton SC, Olkin I, Williamson GD, Rennie D, et al. Meta-analysis of observational studies in epidemiology. Journal of the American Medical Association. 2000;283:2008–2012.
  • Wilkinson L, the Task Force on Statistical Inference. Statistical methods in psychology journals: Guidelines and explanations. American Psychologist. 1999;54:594–604.

Research Report (Psychology)
Content

Each section in a research report has direct links to the other sections and all sections are logically related. As such, it is possible to predict what needs to be included in any section even if only a few sections are available to read. Some assignments provide students with the method and results sections, and then ask students to write the other sections of the lab report. That is, students are asked to deduce the research question and hypothesis or hypotheses from the method and results sections. One way of beginning this task is to think about creating a research story. The PDF resource below looks more closely at this deductive process.

Structure

Research reports have a set structure of Title, Introduction, Literature Review (sometimes this is part of the Introduction), Methods, Results, Discussion, Conclusion and References. Some research reports may also require a title page, abstract, and/or appendices, so be sure to check the exact requirements for your specific assessment task. The structure of a research report is made clear by headings and sub-headings, which need to be formatted according to APA Style.

Style

Research reports need to be written in a formal and clear style. Research reports may present information in paragraphs, and also in bullet points and numbered lists. Some information in the Results section might be best presented as a table or figure, and these must also be presented professionally. They need to be labelled with an identifier (e.g., Figure 1 or Table 1) and a title/caption. The information in the table or figure needs to be discussed within the report; that is, you need to explain what it means in words and refer to the graphic being discussed (e.g., “As shown in Figure 1, there was an increase in…”).

  • Read the unit outline from cover to cover
  • Check your class space for resources about writing a research report for specific assignments
  • Attend all unit sessions, whether on campus or online; educators cover the assignment requirements and often devote whole sessions to writing research reports
  • Attend the unit session scheduled for the experiment
  • Attend the Peer Assisted Study Sessions available for the units PSYC1022 and PSYC1032
  • Get clear about what you are reporting on: the research question and the hypothesis, as these define the focus of the report
  • Put your deductive thinking cap on
  • Check the APA website for information on structure, formatting, and writing style according to APA 7th edition

Useful links

  • research reports in psychology


Chapter 12: Descriptive Statistics

Expressing Your Results

Learning Objectives

  • Write out simple descriptive statistics in American Psychological Association (APA) style.
  • Interpret and create simple APA-style graphs—including bar graphs, line graphs, and scatterplots.
  • Interpret and create simple APA-style tables—including tables of group or condition means and correlation matrixes.

Once you have conducted your descriptive statistical analyses, you will need to present them to others. In this section, we focus on presenting descriptive statistical results in writing, in graphs, and in tables—following American Psychological Association (APA) guidelines for written research reports. These principles can be adapted easily to other presentation formats such as posters and slide show presentations.

Presenting Descriptive Statistics in Writing

When you have a small number of results to report, it is often most efficient to write them out. There are a few important APA style guidelines here. First, statistical results are always presented in the form of numerals rather than words and are usually rounded to two decimal places (e.g., “2.00” rather than “two” or “2”). They can be presented either in the narrative description of the results or parenthetically—much like reference citations. Here are some examples:

The mean age of the participants was 22.43 years with a standard deviation of 2.34.

Among the low self-esteem participants, those in a negative mood expressed stronger intentions to have unprotected sex ( M  = 4.05,  SD  = 2.32) than those in a positive mood ( M  = 2.15,  SD  = 2.27).

The treatment group had a mean of 23.40 ( SD  = 9.33), while the control group had a mean of 20.87 ( SD  = 8.45).

The test-retest correlation was .96.

There was a moderate negative correlation between the alphabetical position of respondents’ last names and their response time ( r  = −.27).

Notice that when presented in the narrative, the terms  mean  and  standard deviation  are written out, but when presented parenthetically, the symbols  M and  SD  are used instead. Notice also that it is especially important to use parallel construction to express similar or comparable results in similar ways. The third example is  much  better than the following nonparallel alternative:

The treatment group had a mean of 23.40 ( SD  = 9.33), while 20.87 was the mean of the control group, which had a standard deviation of 8.45.
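If you generate these sentences from analysis output, a small helper keeps the two-decimal parenthetical format consistent. This is only an illustrative sketch in Python; the function name and the values passed to it are hypothetical:

```python
def apa_parenthetical(mean, sd):
    """Format a mean and standard deviation in APA parenthetical style."""
    return f"(M = {mean:.2f}, SD = {sd:.2f})"

# e.g., the treatment-group values from the example above
print(apa_parenthetical(23.40, 9.33))  # -> (M = 23.40, SD = 9.33)
```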

Presenting Descriptive Statistics in Graphs

When you have a large number of results to report, you can often do it more clearly and efficiently with a graph. When you prepare graphs for an APA-style research report, there are some general guidelines that you should keep in mind. First, the graph should always add important information rather than repeat information that already appears in the text or in a table. (If a graph presents information more clearly or efficiently, then you should keep the graph and eliminate the text or table.) Second, graphs should be as simple as possible. For example, the  Publication Manual  discourages the use of colour unless it is absolutely necessary (although colour can still be an effective element in posters, slide show presentations, or textbooks). Third, graphs should be interpretable on their own. A reader should be able to understand the basic result based only on the graph and its caption and should not have to refer to the text for an explanation.

There are also several more technical guidelines for graphs that include the following:

  • The graph should be slightly wider than it is tall.
  • The independent variable should be plotted on the  x- axis and the dependent variable on the  y- axis.
  • Values should increase from left to right on the  x- axis and from bottom to top on the  y- axis.
  • Axis labels should be clear and concise and include the units of measurement if they do not appear in the caption.
  • Axis labels should be parallel to the axis.
  • Legends should appear within the boundaries of the graph.
  • Text should be in the same simple font throughout and differ by no more than four points.
  • Captions should briefly describe the figure, explain any abbreviations, and include the units of measurement if they do not appear in the axis labels.
  • Captions in an APA manuscript should be typed on a separate page that appears at the end of the manuscript. See  Chapter 11 for more information.

As we have seen throughout this book,  bar graphs  are generally used to present and compare the mean scores for two or more groups or conditions. The bar graph in Figure 12.11 is an APA-style version of Figure 12.4. Notice that it conforms to all the guidelines listed. A new element in Figure 12.11 is the smaller vertical bars that extend both upward and downward from the top of each main bar. These are error bars , and they represent the variability in each group or condition. Although they sometimes extend one standard deviation in each direction, they are more likely to extend one standard error in each direction (as in Figure 12.11). The  standard error  is the standard deviation of the group divided by the square root of the sample size of the group. The standard error is used because, in general, a difference between group means that is greater than two standard errors is statistically significant. Thus one can “see” whether a difference is statistically significant based on a bar graph with error bars.

Sample APA-style bar graph. Long description available.
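To make the computation behind such a figure concrete, here is a minimal sketch in Python using NumPy and matplotlib. The condition labels echo Figure 12.11, but the raw severity ratings are invented for illustration; the error bars are standard errors (the standard deviation divided by the square root of n), as described above.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical severity ratings for two conditions (not the study's real data)
education = np.array([5.1, 4.6, 5.4, 4.9, 4.3, 5.0])
exposure = np.array([3.6, 3.1, 3.9, 3.3, 3.8, 3.5])

means = [education.mean(), exposure.mean()]
# Standard error = SD / sqrt(n); this is what each error bar represents
sems = [g.std(ddof=1) / np.sqrt(len(g)) for g in (education, exposure)]

fig, ax = plt.subplots(figsize=(6, 4))  # slightly wider than tall
ax.bar(["Education", "Exposure"], means, yerr=sems, capsize=4,
       color="0.7", edgecolor="black")
ax.set_xlabel("Condition")
ax.set_ylabel("Clinician Rating of Severity")
plt.show()
```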

Line Graphs

Line graphs  are used to present correlations between quantitative variables when the independent variable has, or is organized into, a relatively small number of distinct levels. Each point in a line graph represents the mean score on the dependent variable for participants at one level of the independent variable. Figure 12.12 is an APA-style version of the results of Carlson and Conard. Notice that it includes error bars representing the standard error and conforms to all the stated guidelines.

Sample APA-style line graph. Long description available.

In most cases, the information in a line graph could just as easily be presented in a bar graph. In Figure 12.12, for example, one could replace each point with a bar that reaches up to the same level and leave the error bars right where they are. This emphasizes the fundamental similarity of the two types of statistical relationship. Both are differences in the average score on one variable across levels of another. The convention followed by most researchers, however, is to use a bar graph when the variable plotted on the  x- axis is categorical and a line graph when it is quantitative.
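A line graph with standard-error bars can be drawn the same way. The sketch below uses hypothetical quartile means and standard errors in the spirit of Figure 12.12; matplotlib's errorbar function handles both the connecting line and the error bars.

```python
import matplotlib.pyplot as plt

# Hypothetical mean response times (z scores) and standard errors per quartile
quartiles = [1, 2, 3, 4]
mean_z = [-0.15, -0.05, 0.04, 0.16]
sems = [0.07, 0.06, 0.08, 0.07]

fig, ax = plt.subplots(figsize=(6, 4))
ax.errorbar(quartiles, mean_z, yerr=sems, marker="o", color="black", capsize=3)
ax.set_xticks(quartiles)
ax.set_xlabel("Last Name Quartile")
ax.set_ylabel("Response Time (z Score)")
plt.show()
```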

Scatterplots

Scatterplots  are used to present relationships between quantitative variables when the variable on the  x- axis (typically the independent variable) has a large number of levels. Each point in a scatterplot represents an individual rather than the mean for a group of individuals, and there are no lines connecting the points. The graph in Figure 12.13 is an APA-style version of Figure 12.7, which illustrates a few additional points. First, when the variables on the x- axis and  y -axis are conceptually similar and measured on the same scale—as here, where they are measures of the same variable on two different occasions—this can be emphasized by making the axes the same length. Second, when two or more individuals fall at exactly the same point on the graph, one way this can be indicated is by offsetting the points slightly along the  x- axis. Other ways are by displaying the number of individuals in parentheses next to the point or by making the point larger or darker in proportion to the number of individuals. Finally, the straight line that best fits the points in the scatterplot, which is called the regression line, can also be included.

Sample APA-style scatterplot. Long description available.
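The following sketch simulates two same-scale measurements, computes Pearson's r, and adds the least-squares regression line. All values are simulated for illustration, so the correlation will not match the .96 reported for Figure 12.13.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
time1 = rng.uniform(12, 29, size=25)                   # hypothetical Time 1 scores
time2 = 0.6 * time1 + 4 + rng.normal(0, 1.2, size=25)  # correlated Time 2 scores

r = np.corrcoef(time1, time2)[0, 1]                    # Pearson's r
slope, intercept = np.polyfit(time1, time2, 1)         # least-squares regression line

fig, ax = plt.subplots(figsize=(5, 5))                 # equal axis lengths for same-scale variables
ax.scatter(time1, time2, color="black")
xs = np.array([time1.min(), time1.max()])
ax.plot(xs, slope * xs + intercept, linestyle="--", color="grey")
ax.set_xlabel("Time 1")
ax.set_ylabel("Time 2")
print(f"Pearson's r = {r:.2f}")
plt.show()
```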

Expressing Descriptive Statistics in Tables

Like graphs, tables can be used to present large amounts of information clearly and efficiently. The same general principles apply to tables as apply to graphs. They should add important information to the presentation of your results, be as simple as possible, and be interpretable on their own. Again, we focus here on tables for an APA-style manuscript.

The most common use of tables is to present several means and standard deviations—usually for complex research designs with multiple independent and dependent variables. Figure 12.14, for example, shows the results of a hypothetical study similar to the one by MacDonald and Martineau (2002) [1] discussed in  Chapter 5 . (The means in Figure 12.14 are the means reported by MacDonald and Martineau, but the standard deviations are not.) Recall that these researchers categorized participants as having low or high self-esteem, put them into a negative or positive mood, and measured their intentions to have unprotected sex. Although not mentioned in  Chapter 5 , they also measured participants’ attitudes toward unprotected sex. Notice that the table includes horizontal lines spanning the entire table at the top and bottom, and just beneath the column headings. Furthermore, every column has a heading—including the leftmost column—and there are additional headings that span two or more columns that help to organize the information and present it more efficiently. Finally, notice that APA-style tables are numbered consecutively starting at 1 (Table 1, Table 2, and so on) and given a brief but clear and descriptive title.

Sample APA-style table presenting means and standard deviations. Long description available.
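Tables like this are usually generated from long-format data with one row per participant. A minimal pandas sketch (using simulated ratings, not the MacDonald and Martineau values) groups by mood and self-esteem and reports the mean and standard deviation to two decimals:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Hypothetical long-format data: one row per participant
df = pd.DataFrame({
    "mood": np.repeat(["Negative", "Positive"], 20),
    "self_esteem": np.tile(np.repeat(["High", "Low"], 10), 2),
    "intentions": rng.normal(2.5, 2.0, size=40),
})

# Mean and standard deviation for each mood x self-esteem cell
table = (df.groupby(["mood", "self_esteem"])["intentions"]
           .agg(M="mean", SD="std")
           .round(2))
print(table)
```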

Another common use of tables is to present correlations—usually measured by Pearson’s  r —among several variables. This kind of table is called a  correlation matrix . Figure 12.15 is a correlation matrix based on a study by David McCabe and colleagues (McCabe, Roediger, McDaniel, Balota, & Hambrick, 2010) [2] . They were interested in the relationships between working memory and several other variables. We can see from the table that the correlation between working memory and executive function, for example, was an extremely strong .96, that the correlation between working memory and vocabulary was a medium .27, and that all the measures except vocabulary tend to decline with age. Notice here that only half the table is filled in because the other half would have identical values. For example, the Pearson’s  r  value in the upper right corner (working memory and age) would be the same as the one in the lower left corner (age and working memory). The correlation of a variable with itself is always 1.00, so these values are replaced by dashes to make the table easier to read.

Sample APA-style table (correlation matrix). Long description available.
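A correlation matrix can be produced the same way: pandas computes Pearson correlations by default, and masking the diagonal and upper triangle reproduces the half-filled layout described above. The variable names and data here are simulated for illustration only.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Hypothetical scores: one column per measure, one row per participant
df = pd.DataFrame(rng.normal(size=(100, 4)),
                  columns=["working_memory", "executive_function", "vocabulary", "age"])

corr = df.corr().round(2)                                      # Pearson's r by default
lower = corr.mask(np.triu(np.ones(corr.shape, dtype=bool)))    # keep the lower triangle only
print(lower)
```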

As with graphs, precise statistical results that appear in a table do not need to be repeated in the text. Instead, the writer can note major trends and alert the reader to details (e.g., specific correlations) that are of particular interest.

Key Takeaways

  • In an APA-style article, simple results are most efficiently presented in the text, while more complex results are most efficiently presented in graphs or tables.
  • APA style includes several rules for presenting numerical results in the text. These include using words only for numbers less than 10 that do not represent precise statistical results, rounding results to two decimal places, and using words (e.g., “mean”) in the text but symbols (e.g., M) in parentheses.
  • APA style includes several rules for presenting results in graphs and tables. Graphs and tables should add information rather than repeating information, be as simple as possible, and be interpretable on their own with a descriptive caption (for graphs) or a descriptive title (for tables).

Long Descriptions

“Convincing” long description: A four-panel comic strip. In the first panel, a man says to a woman, “I think we should give it another shot.” The woman says, “We should break up, and I can prove it.”

In the second panel, there is a line graph with a downward trend titled “Our Relationship.”

In the third panel, the man, bent over and looking at the graph in the woman’s hands, says, “Huh.”

In the fourth panel, the man says, “Maybe you’re right.” The woman says, “I knew data would convince you.” The man replies, “No, I just think I can do better than someone who doesn’t label her axes.”

Figure 12.11 long description: A sample APA-style bar graph, with a horizontal axis labelled “Condition” and a vertical axis labelled “Clinician Rating of Severity.” The caption of the graph says, “Figure X. Mean clinician’s rating of phobia severity for participants receiving the education treatment and the exposure treatment. Error bars represent standard errors.” At the top of each data bar is an error bar, which looks like a capital I: a vertical line with short horizontal lines attached to its top and bottom. The bottom half of each error bar hangs over the top of the data bar, while each top half sticks out the top of the data bar.

Figure 12.12 long description: A sample APA-style line graph with a horizontal axis labelled “Last Name Quartile” and a vertical axis labelled “Response Times (z Scores).” The caption of the graph says, “Figure X. Mean response time by the alphabetical position of respondents’ names in the alphabet. Response times are expressed as z scores. Error bars represent standard errors.” Each data point has an error bar sticking out of its top and bottom.

Figure 12.13 long description: Sample APA-style scatterplot with a horizontal axis labelled “Time 1” and a vertical axis labelled “Time 2.” Each axis has values from 10 to 30. The caption of the scatterplot says, “Figure X. Relationship between scores on the Rosenberg self-esteem scale taken by 25 research methods students on two occasions one week apart. Pearson’s r = .96.” Most of the data points are clustered around the dashed regression line that extends from approximately (12, 11) to (29, 22).

Figure 12.14 long description: Sample APA-style table presenting means and standard deviations. The table is titled “Table X” and is captioned, “Means and Standard Deviations of Intentions to Have Unprotected Sex and Attitudes Toward Unprotected Sex as a Function of Both Mood and Self-Esteem.” The data are organized by mood (negative vs. positive) and self-esteem (high vs. low) for the two measures, intentions and attitudes toward unprotected sex.

Negative mood:

  • Intentions, high self-esteem: M = 2.46, SD = 1.97
  • Intentions, low self-esteem: M = 4.05, SD = 2.32
  • Attitudes, high self-esteem: M = 1.65, SD = 2.23
  • Attitudes, low self-esteem: M = 1.95, SD = 2.01

Positive mood:

  • Intentions, high self-esteem: M = 2.45, SD = 2.00
  • Intentions, low self-esteem: M = 2.15, SD = 2.27
  • Attitudes, high self-esteem: M = 1.82, SD = 2.32
  • Attitudes, low self-esteem: M = 1.23, SD = 1.75

Figure 12.15 long description: Sample APA-style correlation matrix, titled “Table X: Correlations Between Five Cognitive Variables and Age.” The five cognitive variables are:

  • Working memory
  • Executive function
  • Processing speed
  • Vocabulary
  • Episodic memory

Media Attributions

  • Convincing by XKCD  CC BY-NC (Attribution NonCommercial)

Notes

  • MacDonald, T. K., & Martineau, A. M. (2002). Self-esteem, mood, and intentions to use condoms: When does low self-esteem lead to risky health behaviours? Journal of Experimental Social Psychology, 38, 299–306.
  • McCabe, D. P., Roediger, H. L., McDaniel, M. A., Balota, D. A., & Hambrick, D. Z. (2010). The relationship between working memory capacity and executive functioning. Neuropsychology, 24(2), 222–243. doi:10.1037/a0017619
  • Buss, D. M., & Schmitt, D. P. (1993). Sexual strategies theory: A contextual evolutionary analysis of human mating. Psychological Review, 100, 204–232.

Glossary

  • Bar graph: A figure in which the heights of the bars represent the group means.
  • Error bars: Small bars at the top of each main bar in a bar graph that represent the variability in each group or condition.
  • Standard error: The standard deviation of the group divided by the square root of the sample size of the group.
  • Line graph: A graph used to present correlations between quantitative variables when the independent variable has, or is organized into, a relatively small number of distinct levels.
  • Scatterplot: A graph which shows correlations between quantitative variables; each point represents one person’s score on both variables.
  • Correlation matrix: A table showing the correlation between every possible pair of variables in the study.

Research Methods in Psychology - 2nd Canadian Edition Copyright © 2015 by Paul C. Price, Rajiv Jhangiani, & I-Chant A. Chiang is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


  • Open access
  • Published: 13 May 2024

The big five factors as differential predictors of self-regulation, achievement emotions, coping and health behavior in undergraduate students

  • Jesús de la Fuente 1 , 2 ,
  • Paul Sander 3 ,
  • Angélica Garzón Umerenkova 4 ,
  • Begoña Urien 1 ,
  • Mónica Pachón-Basallo 1 &
  • Elkin O Luis 1  

BMC Psychology volume 12, Article number: 267 (2024)


Abstract

Background

The aim of this research was to analyze whether the personality factors included in the Big Five model differentially predict university students’ self-regulation, affective states, and health.

Methods

A total of 637 students completed validated self-report questionnaires. Using an ex post facto design, we conducted linear regression and structural prediction analyses.

Results

The findings showed that the model’s factors were differential predictors of both self-regulation and affective states. Self-regulation and affective states, in turn, jointly predict emotional performance during learning and even student health. These results allow us to understand, through a holistic predictive model, the differential predictive relationships of all the factors: conscientiousness and extraversion were regulating predictors of positive emotionality and health; openness to experience was nonregulating; and agreeableness and neuroticism were dysregulating, hence precursors of negative emotionality and poorer student health.

Conclusions

These results are important because they allow us to infer implications for guidance and psychological health at university.


Introduction

The personality characteristics of students have proven to be essential explanatory and predictive factors of learning behavior and performance at universities [ 1 , 2 , 3 , 4 ]. However, our knowledge of such factors leaves open further questions, such as: which personality factors tend toward the regulation of learning behavior, and which do not? Can personality factors be arranged on a continuum to understand student differences in the emotions they experience when learning? Consequently, the aim of this study was to analyze whether students’ personality traits differentially predict the regulation of behavior and emotionality. These variables combine into different motivational-affective profiles of students, reflected in the type of achievement emotions they experience during study, as well as in their coping strategies, motivational state, and ultimately health.

Five-factor model

Previous research has shown the value and consistency of the five-factor model for analyzing students’ personality traits. Pervin, Cervone, and John [ 5 ] defined five factors as follows: (1) Conscientiousness includes a sense of duty, persistence, and behavior that is self-disciplined and goal-directed. The descriptors organized, responsible, and efficient are typically used to describe conscientious persons. (2) Extraversion is characterized by the quantity and intensity of interpersonal relationships, as well as sensation seeking. The descriptors sociable, assertive, and energetic are typically used to describe extraverted persons. (3) Openness to experience incorporates autonomous thinking and willingness to examine unfamiliar ideas and try new things. The descriptors inquisitive, philosophical, and innovative are typically used to describe persons open to experience. (4) Agreeableness is quantified along a continuum from social antagonism to compassion in one’s quality of interpersonal interactions. The descriptors kind, considerate, and generous are often used to describe persons characterized by agreeableness. (5) Finally, neuroticism tends to indicate negative emotions. Persons showing neuroticism are often described as moody, nervous, or touchy.

This construct has consistently predicted individual differences between university students. Prior research has documented its essential role in explaining differences in achievement [ 6 , 7 ], motivational states [ 8 ], students’ learning approaches [ 9 ], and self-regulated learning [ 10 ].

Five-factor model, self-regulation, achievement emotions and health

The relationship between the Big Five factors and self-regulation has been analyzed historically with much interest [ 11 , 12 , 13 , 14 , 15 ]. The dimensions of the five-factor model describe fundamental ways in which people differ from one another [ 16 , 17 ]. Of the five factors, conscientiousness may be the best reflection of self-regulation capacity. More recent research has shown consistent evidence of the relationship between these two constructs, especially conscientiousness, which has a positive relationship, and neuroticism, which has a negative relationship with self-regulation [ 18 , 19 ]. The Big Five factors are also related to coping strategies [ 20 ].

The evidence on the role of the five-factor model in self-regulation, achievement emotions, and health has been fairly consistent. On the one hand, self-regulation has a confirmed role as a meta-cognitive variable that is present in students’ mental health problems [ 21 ]. Similarly, personality factors and types of perfectionism have been associated with mental health in university students [ 22 ]. In a complementary fashion, one longitudinal study has shown that personality factors have a persistent effect on self-regulation and health. Sirois and Hirsch [ 23 ] confirmed that the Big Five traits affect balance and health behaviors.

Self-regulation, achievement emotions and health

Self-regulation has recently been considered a significant behavioral meta-ability that regulates other skills in the university environment. It has consistently appeared to be a predictor of achievement emotions [ 24 ], coping strategies [ 25 ], and health behavior [ 26 ]. In the context of university learning, the level of self-regulation is a determining factor in learning approaches, motivation and achievement [ 27 ]. Similarly, the self- vs. externally regulated behavior theory [ 27 , 28 ] assumes that the continuum of self-regulation can be divided into three types: (1) self-regulation behavior, which is the meta-behavior or meta-skill of planning and executing control over one’s behavior; (2) nonregulation behavior (deregulation), where consistent self-regulating behavior is absent; and (3) dysregulation behavior, where regulatory behavior is maladaptive or contrary to what is expected. Some example behaviors are presented below, and these have already been documented (see Table  1 ). Recently, Beaulieu and collaborators [ 29 ] proposed a self-dysregulation latent profile describing subjects with lower scores on the extraversion, agreeableness and conscientiousness subscales and higher scores on negative emotional facets.

Table  1 here.

Consequently, the question that we pose - as yet unresolved - is whether the different personality factors predict a determined type of regulation on the continuum of regulatory behavior, nonregulatory (deregulatory) behavior and dysregulatory behavior, based on evidence.

Aims and hypotheses

Based on the existing evidence, the aim of this study was to establish a structural predictive model that would order personality factors along a continuum as predictors of university students’ regulatory behavior. The following hypotheses were proposed for this purpose: (1) personality factors differentially predict students’ regulatory, nonregulatory and dysregulatory behavior during academic learning; they also differentially determine students’ type of emotional states (positive vs. negative affect); (2) the preceding factors differentially predict achievement emotions (positive vs. negative) during learning, coping strategies (problem-focused vs. emotion-focused) and motivational state (engagement vs. burnout); and (3) all these factors ultimately predict student health, either positively or negatively, depending on their regulatory or dysregulatory nature.

Participants

Data were gathered from 2019 to 2022, encompassing a total of 626 undergraduate students enrolled in Psychology, Primary Education, and Educational Psychology programs across two Spanish universities. Within this cohort, 85.5% were female, and 14.5% were male, with ages ranging from 19 to 24 years and a mean age of 21.33 years. The student distribution was equal between the two universities, with 324 attending one and 318 attending the other. The study employed an incidental, nonrandomized design. The guidance departments at both universities extended invitations for teacher participation, and teachers, in turn, invited their students to partake voluntarily, ensuring anonymity. Questionnaires were completed online for each academic subject, corresponding to the specific teaching-learning process.

Instruments

Five personality factors.

The Big Five Questionnaire [ 30 ], based on the version by Barbaranelli et al. [ 31 ], assessed scores for five personality factors. Confirmatory factor analysis (CFA) of the 67 scale items resulted in a five-factor structure aligned with the Big Five Model. The outcomes demonstrated satisfactory psychometric properties and acceptable fit indices. The second-order confirmatory model exhibited a good fit (chi-square = 38.273; degrees of freedom (20–15) = 5; p  > 0.10; chi/df = 7.64; RMR = 0.0425; NFI = 0.939; RFI = 0.917; IFI = 0.947; TLI = 0.937; CFI = 0.946; RMSEA = 0.065; Hoelter index = 2453 ( p  < 0.05) and 617 ( p  < 0.01)). Internal consistency of the total scale was also strong (alpha = 0.956; Part 1 = 0.932 and Part 2 = 0.832; Spearman-Brown = 0.962 and Guttman = 0.932).

Self-Regulation : The Short Self-Regulation Questionnaire (SSRQ) [ 32 ] gauged self-regulation. The Spanish adaptation, previously validated in Spanish samples [ 33 ], encompassed four factors measured by a total of 17 items. Confirmatory factor analysis confirmed a consistent factor structure (chi-square = 845.593; df = 113; chi/df = 7.483; RMSM = 0.0299; CFI = 0.959, GFI = 0.94, AGFI = 0.96, RMSEA = 0.059). Validity and reliability values (Cronbach’s alpha) were deemed acceptable (total (α = 0.86; Omega = 0.843); goal-setting planning (α = 0.79; Omega = 0.784); perseverance (α = 0.78; Omega = 0.779); decision-making (α = 0.72; Omega = 0.718); and learning from mistakes (α = 0.72; Omega = 0.722)), comparable to those of the English version. Example statements include: “I usually keep track of my progress toward my goals,” “In regard to deciding about a change, I feel overwhelmed by the choice,” and “I learn from my mistakes.”
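The internal-consistency values reported for these scales (Cronbach's alpha) can be reproduced from item-level responses with the standard formula alpha = k/(k - 1) * (1 - sum of item variances / variance of the total score). A minimal Python sketch with a hypothetical item matrix, shown here only as an illustration of the computation:

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array with one row per respondent and one column per scale item."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)        # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of the total score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses to a 17-item questionnaire from 20 respondents (1-5 scale)
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(20, 17))
print(round(cronbach_alpha(responses), 3))
```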

Positive-negative affect

The Positive and Negative Affect Scale (PANAS-N) [ 34 ], validated with university students, assessed positive and negative affect. The PANAS comprises two factors and 20 items, demonstrating a consistent confirmatory factor structure (chi-square = 1111.147; df = 169; chi/df = 6.518; RMSM = 0.0346; CFI = 0.955, GFI = 0.963, AGFI = 0.96, RMSEA = 0.058). Validity and reliability values (Cronbach’s alpha) were acceptable (total (α = 0.891; Omega = 0.857); positive affect (α = 0.8199; Omega = 0.784); and negative affect (α = 0.795; Omega = 0.776), comparable to those of the English version. Sample items include “I am a lively person, I usually get excited; I have bad moods (I get upset or irritated).”

Learning Achievement Emotion : The variable was measured using the Spanish version [ 35 ] of the Achievement Emotions Questionnaire (AEQ-Learning) [ 36 ], encompassing nine emotions (enjoyment, hope, pride, relief, anger, anxiety, hopelessness, shame, and boredom). Emotions were classified based on valence (positive or negative) and activation (activating or deactivating), resulting in four quadrants. Another classification considered the source or trigger: the ongoing activity, prospective outcome, or retrospective outcome. Psychometric properties were adequate, and the confirmatory model displayed a good fit (chi-square = 529.890; degrees of freedom = 79; chi/df = 6.70; SRMR = 0.053; p  > 0.08; NFI = 0.964; RFI = 0.957; IFI = 0.973; TLI = 0.978, CFI = 0.971; RMSEA = 0.080; HOELTER = 165 ( p  < 0.05) and 178 ( p  < 0.01)). Good internal consistency was found for the total scale (Alpha = 0.939; Part 1 = 0.880, Part 2 = 0.864; Spearman-Brown = 0.913 and 0.884; Guttman = 0.903). Example items include Item 90: “I am angry when I have to study”; Item 113: “My sense of confidence motivates me”; and Item 144: “I am proud of myself”.

Engagement-Burnout : Engagement was assessed using a validated Spanish version of the Utrecht Work Engagement Scale for Students [ 37 ], demonstrating satisfactory psychometric properties for Spanish students. The model displayed good fit indices, with a second-order structure comprising three factors: vigor, dedication, and absorption. Scale unidimensionality and metric invariance were verified in the samples assessed (chi-square = 592.526, p  > 0.09; df = 84, chi/df = 7.05; SRMR = 0.034; TLI = 0.976, IFI = 0.954, and CFI = 0.923; RMSEA = 0.083; HOELTER = 153, p  < 0.05; 170 p  < 0.01). Cronbach’s alpha for this sample was 0.900 (14 items); the two parts of the scale produced values of 0.856 (7 items) and 0.786 (7 items).

Burnout : The Maslach Burnout Inventory (MBI) [ 38 ], in its validated Spanish version, was employed to assess burnout. This version exhibited adequate psychometric properties for Spanish students. Good fit indices were obtained, with a second-order structure comprising three factors: exhaustion or depletion, cynicism, and lack of effectiveness. Scale unidimensionality and metric invariance were confirmed in the samples assessed (chi-square = 567.885, p  > 0.010, df = 87, chi/df = 6.52; SRMR = 0.054; CFI = 0.956, IFI = 0.951, TLI = 0.951; RMSEA = 0.071; HOELTER = 224, p  < 0.05; 246 p  < 0.01). Cronbach’s alpha for this sample was 0.874 (15 items); the two parts of the scale were 0.853 (8 items) and 0.793 (7 items).

Strategies for coping with academic stress : The Coping Strategies Scale (Escala Estrategias de Coping - EEC) [ 39 ] was utilized in its original version. Constructed based on the Lazarus and Folkman questionnaire [ 40 ] using theoretical-rational criteria, the original 90-item instrument resulted in a 64-item first-order structure. The second-order structure comprised 10 factors and two significant dimensions. A satisfactory fit was observed in the second-order structure (chi-square = 478.750; degrees of freedom = 73, p  > 0.09; chi/df = 6.55; RMSR = 0.052; NFI = 0.901; RFI = 0.945; IFI = 0.903, TLI = 0.951, CFI = 0.903). Reliability was confirmed with Cronbach’s alpha values of 0.93 (complete scale), 0.93 (first half), and 0.90 (second half); Spearman-Brown coefficient of 0.84; and Guttman coefficient of 0.80. Two dimensions and 11 factors were identified: (1) Dimension: emotion-focused coping—F1. Fantasy distraction; F6. Help for action; F8. Preparing for the worst; F9. Venting and emotional isolation; F11. Resigned acceptance. (2) Dimension: problem-focused coping—F2. Help seeking and family counsel; F10. Self-instructions; F10. Positive reappraisal and firmness; F12. Communicating feelings and social support; F13. Seeking alternative reinforcement.

Student Health Behavior : The Physical and Psychosocial Health Inventory [ 41 ] measured this variable, summarizing the World Health Organization (WHO) definition of health: “Health is a state of complete physical, mental, and social well-being and not merely the absence of disease or infirmity.” The inventory focused on the impact of studies, with questions such as “I feel anxious about my studies.” Students responded on a Likert scale from 1 (strongly disagree) to 5 (strongly agree). In the Spanish sample, the model displayed good fit indices (CFI = 0.95, GFI = 0.96, NFI = 0.94; RMSEA = 0.064), with a Cronbach’s alpha of 0.82.

All participants provided informed consent before engaging in the study. The completion of scales was voluntary and conducted through an online platform. Over two academic years, students reported on five distinct teaching-learning processes, each corresponding to a different university subject they were enrolled in during this period. Students took their time to answer the questionnaires gradually throughout the academic year. The assessment for Presage variables took place in September-October of 2018 and 2019, Process variables were assessed in the subsequent February-March, and Product variables were evaluated in May-June. The procedural steps were ethically approved by the Ethics Committee under reference 2018.170, within the broader context of an R&D Project spanning 2018 to 2021.

Data analysis

The ex post facto design [ 42 ] of this cross-sectional study involved bivariate association analyses, multiple regression, and structural predictions (SEMs). Preliminary analyses were executed to ensure the appropriateness of the parameters used in the analyses, including tests for normality (Kolmogorov-Smirnov), skewness, and kurtosis (±0.05).
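These preliminary checks can be reproduced with standard tools. The following is a minimal sketch using SciPy on hypothetical scores (not the study data): the Kolmogorov-Smirnov test is run against a normal distribution with the sample's own mean and standard deviation, and skewness and kurtosis are computed directly.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
scores = rng.normal(loc=3.5, scale=0.8, size=626)   # hypothetical scale scores

# Kolmogorov-Smirnov test against a normal distribution with the sample's parameters
ks_stat, ks_p = stats.kstest(scores, "norm", args=(scores.mean(), scores.std(ddof=1)))
print(f"KS = {ks_stat:.3f}, p = {ks_p:.3f}")
print(f"skewness = {stats.skew(scores):.3f}")
print(f"excess kurtosis = {stats.kurtosis(scores):.3f}")
```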

Multiple regression

Hypothesis 1 was evaluated using multiple regression analysis through SPSS (v. 26).
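The analysis itself was run in SPSS; purely for illustration, an equivalent multiple regression of self-regulation on the five factors could be sketched in Python with statsmodels. The column names and data below are hypothetical and will not reproduce the reported coefficients or explained variance (r² = 0.499).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
factors = ["conscientiousness", "extraversion", "openness", "agreeableness", "neuroticism"]
df = pd.DataFrame(rng.normal(size=(637, 5)), columns=factors)  # hypothetical factor scores
df["self_regulation"] = rng.normal(size=637)                   # hypothetical outcome

X = sm.add_constant(df[factors])                 # add intercept term
fit = sm.OLS(df["self_regulation"], X).fit()
print(fit.summary())                             # coefficients, p values, and R-squared
```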

Confirmatory factor analysis

To test Hypotheses 2 and 3, a structural equation model (SEM) was employed in this sample. Model fit was assessed by examining the chi-square to degrees of freedom ratio, along with RMSEA (root mean square error of approximation), NFI (normed fit index), CFI (comparative fit index), GFI (goodness-of-fit index), and AGFI (adjusted goodness-of-fit index) [ 43 ]. Ideally, all these values should surpass 0.90. The adequacy of the sample size was confirmed using the Hoelter index [ 44 ]. These analyses were conducted using AMOS (v.22).
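The structural models were estimated in AMOS; as a rough open-source analogue, a path model can be specified with the semopy package using lavaan-style syntax. The sketch below is a toy two-equation model on simulated data, not the authors' full model, and assumes semopy's Model and calc_stats interface.

```python
import numpy as np
import pandas as pd
from semopy import Model, calc_stats

rng = np.random.default_rng(0)
data = pd.DataFrame(rng.normal(size=(637, 5)),
                    columns=["C", "E", "N", "SR", "HEALTH"])   # hypothetical scores

# Toy path model: conscientiousness, extraversion and neuroticism predict
# self-regulation, which in turn predicts health behavior
desc = """
SR ~ C + E + N
HEALTH ~ SR
"""
model = Model(desc)
model.fit(data)
print(calc_stats(model).T)   # chi-square, df, CFI, GFI, AGFI, NFI, RMSEA and related indices
```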

Prediction results

The predictive relationships exhibited a continuum along two extremes. On the one hand, conscientiousness, extraversion and openness were significant, graded, and positive predictors of self-regulation. On the other hand, agreeableness and neuroticism were negative, graded predictors of self-regulation. A considerable percentage of explained variance was observed ( r 2  = 0.499). The most meaningful finding, however, is that this differential predictive grading is maintained for the rest of the variables analyzed: positive affect ( r 2  = 0.571) and negative affect ( r 2  = 0.524), achievement emotions during study, engagement-burnout, problem- and emotion-focused coping strategies, and student health. See Table  2 .

Structural prediction results

Structural prediction model.

Three models were tested. Model 1 proposes the exclusive prediction of personality factors on the rest of the factors, not including self-regulation. Model 2 evaluated the predictive potential of self-regulation on the factors of the Big Five model. Model 3 tested the ability of the Big Five personality traits to predict self-regulation and the other factors. The latter model presented adequate statistical values. These models are shown in Table  3 .

Models of the linear structural results of the variables

Direct effects.

The statistical effects showed a direct, significant, positive predictive effect of the personality factors C (Conscientiousness) and E (Extraversion) on self-regulation. The result for factor O (openness to experience) was not significant. Factors A (agreeableness) and N (neuroticism) were negatively related, especially the latter. In a complementary fashion, factors C and E showed significant, positive predictions of positive affect, while O and A had less strength. Factor N most strongly predicted negative affect.

Moreover, self-regulation positively predicted positive achievement emotions during study and negatively predicted negative achievement emotions. Positive affect predicted positive emotions during study, engagement, and problem-focused coping strategies; negative affect predicted negative emotions during study, burnout, and emotion-focused strategies. Positive emotions during study negatively predicted negative emotions and burnout. Engagement positively predicted problem-focused coping and negatively predicted burnout. Finally, problem-focused coping also predicted emotion-focused coping, and emotion-focused coping negatively predicted health and well-being.

Indirect effects

The Big Five factors exhibited consistent directionality in their indirect effects. Factors C and E positively predicted positive emotions, engagement, problem-focused coping, and health and negatively predicted negative emotions and burnout. Factor O had low prediction values in both directions. Factors A and N, in contrast, were positive predictors of negative emotions during study, burnout, and emotion-focused coping, and negative predictors of health. In short, C and E had positive predictive effects on self-regulation, positive affect, positive emotions during study, engagement, problem-focused strategies and health, whereas A and N predicted negative affect, negative emotions during study, burnout, emotion-focused strategies and poorer health. See Table  4 ; Fig.  1 .

SEM of prediction in the variables. Note: C = Conscientiousness; E = Extraversion; O = Openness to experience; A = Agreeableness; N = Neuroticism; SR = Self-Regulation; Pos.A = Positive Affect; Neg.A = Negative Affect; Pe.S = Positive emotions during study; Ne.S = Negative emotions during study; ENG = Engagement; BURN = Burnout; EFCS = Emotion-focused coping strategies; PFCS = Problem-focused coping strategies; HEALTH = Health behavior.

Discussion

Based on the Self- vs. External-Regulation theory [ 27 , 28 ], the aim of this study was to show, differentially, the regulatory, nonregulatory or dysregulatory power of the Big Five personality factors with respect to study behaviors, associated emotionality during study, motivational states, and ultimately, student health behavior.

Regarding Hypothesis 1 , the results showed a differential, graded prediction of the Big Five personality factors affecting both self-regulation and affective states. The results from the regression and structural equation analyses showed a clear, graded pattern from the positive predictive relationship of C to the negative predictive relationship of N. On the one hand, they showed the regulatory effect (direct and indirect) of factors C and E, the nonregulatory effect of O, and the dysregulatory effect of factors A and especially N. This evidence offers a differential categorization of the five factors in an integrated manner. On the other hand, their effects on affective tone (direct and indirect) take the same positive direction in C and E, are intermediate in the case of O, and are negative in A and N. There is plentiful prior evidence that has shown this relationship, though only in part, not in the integrated manner of the model presented here [ 29 , 45 , 46 , 47 ].

Regarding Hypothesis 2 , the evidence shows that self-regulation directly and indirectly predicts affective states and achievement emotions during study. The direction can be positive or negative: C and E are linked with positive emotionality, whereas A and N are linked with negative affect. This finding agrees with prior research [ 29 , 48 , 49 , 50 , 51 ].

Regarding Hypothesis 3 , the results have shown clear bidirectionality. Subsequent to the prior influence of personality factors and self-regulation, achievement emotions bring about the resulting motivational states of engagement-burnout and the use of different coping strategies (problem-focused vs. emotion-focused). Positive achievement emotions during study predicted a motivational state of engagement and problem-focused coping strategies and were positive predictors of health; however, negative emotions predicted burnout and emotion-focused coping strategies and were negative predictors of health. These results are in line with prior evidence [ 49 , 52 , 53 ]. Finally, we unequivocally showed a double, sequenced path of emotional variables and affective motivations in a process that ultimately and differentially predicts student health [ 54 , 55 ].

In conclusion, these results allow us to understand the predictive relationships involving these multiple variables in a holistic predictive model, while previous research has addressed this topic only in part [ 56 ]. We believe that these results lend empirical support to the sequence proposed by the SR vs. ER model [ 27 ]: the factors of conscientiousness and extraversion appear to be regulators of positive emotionality, engagement and health; openness to experience is considered to be nonregulating; and agreeableness and neuroticism are dysregulators of the learning process and precursors of negative emotionality and poorer student health [ 57 ]. New levels of detail—in a graded heuristic—have been added to our understanding of the relationships among the five-factor model, self-regulation, achievement emotions and health [ 23 ].

Limitations and research prospects

A primary limitation of this study was that the analysis focused exclusively on the student. The role of the teaching context, therefore, was not considered. Previous research has reported the role of the teaching process, in interaction with student characteristics, in predicting positive or negative emotionality in students [ 49 , 58 ]. However, such results do not undercut the value of the results presented here. Future research should further analyze potential personality types derived from the present categorization according to heuristic values.

Practical implications

The relationships presented may be considered a mental map that orders the constituent factors of the Five-Factor Model on a continuum, from the most adaptive (regulatory), through the nonregulatory (deregulatory), to the most maladaptive (dysregulatory). This information is very important for carrying out preventive intervention programs for students and for designing programs for those who could benefit from training in self-regulation and positivity. Such interventions could improve how students experience the difficulties inherent in university studies [ 47 , 59 ], another indicator of the need for active Psychology and Counseling Centers at universities.


Data availability

No datasets were generated or analysed during the current study.

Abood MH, Alharbi BH, Mhaidat F, Gazo AM. The relationship between personality traits, academic self-efficacy and academic adaptation among University students in Jordan. Int J High Educ. 2020;9(3):120–8. https://doi.org/10.5430/ijhe.v9n3p120 .


Farsides T, Woodfield R. Individual differences and undergraduate academic success: the roles of personality, intelligence, and application. Pers Indiv Differ. 2003;34(7):1225–43. https://doi.org/10.1016/S0191-8869(02)00111-3 .

Furnham A, Chamorro-Premuzic T, McDougall F. Personality, cognitive ability, and beliefs about intelligence as predictors of academic performance. Learn Individual Differences. 2003;14(1):47–64. https://doi.org/10.1016/j.lindif.2003.08.002 .

Papageorgiou KA, Likhanov M, Costantini G, Tsigeman E, Zaleshin M, Budakova A, Kovas Y. Personality, behavioral strengths and difficulties and performance of adolescents with high achievements in science, literature, art and sports. Pers Indiv Differ. 2020;160:109917. https://doi.org/10.1016/j.paid.2020.109917 .

Pervin LA, Cervone D, John OP. Personality: theory and research. 9 ed. Wiley international ed: Wiley; 2005.


Morales-Vives F, Camps E, Dueñas JM. (2020). Predicting academic achievement in adolescents: The role of maturity, intelligence and personality. Psicothema , 32.1 , 84–91. https://doi.org/10.7334/psicothema2019.262 .

Komarraju M, Karau SJ, Schmeck RR. Role of the big five personality traits in predicting college students’ academic motivation and achievement. Learn Individual Differences. 2009;19(1):47–52. https://doi.org/10.1016/j.lindif.2008.07.001 .

Sorić I, Penezić Z, Burić I. The big five personality traits, goal orientations, and academic achievement. Learn Individual Differences. 2017;54:126–34. https://doi.org/10.1016/j.lindif.2017.01.024 .

Chamorro-Premuzic T, Furnham A. Mainly openness: the relationship between the big five personality traits and learning approaches. Learn Individual Differences. 2009;19(4):524–9. https://doi.org/10.1016/j.lindif.2009.06.004 .

Bruso J, Stefaniak J, Bol L. An examination of personality traits as a predictor of the use of self-regulated learning strategies and considerations for online instruction. Education Tech Research Dev. 2020;68(5):2659–83. https://doi.org/10.1007/s11423-020-09797-y .

Cervone D, Shadel WG, Smith RE, Fiori M. Self-Regulation: Reminders and suggestions from Personality Science. Appl Psychol. 2006;55(3):333–85. https://doi.org/10.1111/j.1464-0597.2006.00261.x .

Gramzow RH, Sedikides C, Panter AT, Sathy V, Harris J, Insko CA. Patterns of self-regulation and the big five. Eur J Pers. 2004;18(5):367–85. https://doi.org/10.1002/per.513 .

Hoyle RH. Personality and self-regulation. In: Hoyle RH, editor. Handbook of personality and self-regulation. Wiley-Blackwell; 2010. pp. 1–18. https://doi.org/10.1002/9781444318111.ch1 .

Hoyle RH, Davisson EK. Selfregulation and personality. In: John OP, Robins RW, editors. Handbook of personality: theory and research. The Guilford; 2021. pp. 608–24.

Jensen-Campbell LA, Knack JM, Waldrip AM, Campbell SD. Do big five personality traits associated with self-control influence the regulation of anger and aggression? J Res Pers. 2007;41(2):403–24. https://doi.org/10.1016/j.jrp.2006.05.001 .

Goldberg LR. An alternative description of personality: the big-five factor structure. J Personal Soc Psychol. 1990;59(6):1216–29. https://doi.org/10.1037/0022-3514.59.6.1216 .

McCrae RR, Costa PT. Validation of the five-factor model of personality across instruments and observers. J Personal Soc Psychol. 1987;52(1):81–90. https://doi.org/10.1037/0022-3514.52.1.81 .

Jackson DO, Park S. Self-regulation and personality among L2 writers: integrating trait, state, and learner perspectives. J Second Lang Writ. 2020;49:100731. https://doi.org/10.1016/j.jslw.2020.100731 .

Grover R, Aggarwal A, Mittal A. Effect of students’ emotions on their positive psychology: a study of Higher Education Institutions. Open Psychol J. 2020;13(1):272–81. https://doi.org/10.2174/1874350102013010272 .

Kira IA, Shuwiekh HA, Ahmed SAE, Ebada EE, Tantawy SF, Waheep NN, Ashby JS. (2022). Coping with COVID-19 Prolonged and Cumulative Stressors: the Case Example of Egypt. International Journal of Mental Health and Addiction, 1–22 . https://doi.org/10.1007/s11469-021-00712-x .

Vega D, Torrubia R, Marco-Pallarés J, Soto A, Rodriguez-Fornells A. Metacognition of daily self-regulation processes and personality traits in borderline personality disorder. J Affect Disord. 2020;267:243–50. https://doi.org/10.1016/j.jad.2020.02.033 .


Lewis EG, Cardwell JM. The big five personality traits, perfectionism and their association with mental health among UK students on professional degree programmes. BMC Psychol. 2020;8(1):1–10. https://doi.org/10.1186/s40359-020-00423-3 .

Sirois FM, Hirsch JK. Big five traits, affect balance and health behaviors: a self-regulation resource perspective. Pers Indiv Differ. 2015;87:59–64. https://doi.org/10.1016/j.paid.2015.07.031 .

Allaire FS. Findings from a pilot study examining the positive and negative achievement emotions Associated with undergraduates’ first-Year experience. J Coll Student Retention: Res Theory Pract. 2022;23(4):850–72. https://doi.org/10.1177/1521025119881397 .

Sinring A, Aryani F, Umar NF. Examining the effect of self-regulation and psychological capital on the students’ academic coping strategies during the covid-19 pandemic. Int J Instruction. 2022;15(2):487–500. https://www.e-iji.net/dosyalar/iji_2022_2_27.pdf .

Pachón-Basallo M, de la Fuente J, Gonzáles-Torres MC. Regulation/non-regulation/dys-regulation of health behavior, psychological reactance, and health of university undergraduate students. Int J Environ Res Public Health. 2021;18(7):3793.


De La Fuente Arias J. (2017). Theory of Self- vs. externally-regulated LearningTM: fundamentals, evidence, and Applicability. Front Psychol, 8.

de la Fuente J, Pachón-Basallo M, Martínez-Vicente JM, Peralta-Sánchez FJ, Garzón-Umerenkova A, Sander P. Self-vs. external-regulation behavior ScaleTM in different psychological contexts: a validation study. Front Psychol. 2022;13:922633.

Beaulieu DA, Proctor CJ, Gaudet DJ, Canales D, Best LA. What is the mindful personality? Implications for physical and psychological health. Acta Psychol. 2022;224:103514. https://doi.org/10.1016/j.actpsy.2022.103514 .

Carrasco MA, Holgado FP, del Barrio MV. Dimensionalidad Del Cuestionario De Los cinco grandes (BFQ-N) en población infantil Española. Psicothema. 2005;17:275–80. http://www.redalyc.org/articulo.oa?id=72717216 .

Barbaranelli C, Caprara GV, Rabasca A, Pastorelli C. A questionnaire for measuring the big five in late childhood. Pers Indiv Differ. 2003;34(4):645–64. https://doi.org/10.1016/S0191-8869(02)00051-X .

Brown JM, Miller WR, y Lawendowski LA. (1999). The Self-Regulation Questionnaire. En L. Vandecreek y T. L. Jackson, editors. Innovations in clinical practice: A source book. Vol. 17. (pp. 281–293). Sarasota. FL: Professional Resources Press.

Pichardo MC, Cano F, Garzón A, de la Fuente J, Peralta FJ, Amate J. Self-regulation questionnaire (SRQ) in Spanish adolescents: factor structure and Rasch Analysis. Front Psychol. 2018;9(1370). https://doi.org/10.3389/fpsyg.2018.01370 .

Sandín B, Chorot P, Lostao L, Joiner TE, Santed MA y, Valiente RM. (1999). Escalas PANAS de afecto positivo y negativo: validación factorial y convergencia transcultural (PANAS Positive and Negative Affect Scales: Factorial Validation and Cross-Cultural Convergence). Psicothema, 11 , 37–51. https://www.psicothema.com/pdf/229.pdf .

De la Fuente J. Self- vs. externally-regulated learning TheoryTM. Almería: University of Almería; 2015.

Pekrun R, Goetz T, Perry RP. (2005). Achievement Emotions Questionnaire (AEQ). User’s manual. Department of Psychology, University of Munich. https://es.scribd.com/doc/217451779/2005-AEQ-Manual .

Schaufeli WB, Martínez IS, Marqués A, Salanova S, Bakker AB. Burnout and engagement in university students. A cross-national study. J Cross-Cult Psychol. 2002;33(5):464–81. https://www.isonderhouden.nl/doc/pdf/arnoldbakker/articles/articles_arnold_bakker_78.pdf .

Maslach C, Jackson SE, Leiter MP. Maslach Burnout Inventory: Manual. 3rd ed. Palo Alto: Consulting Psychologists; 1996.

Sandín B, Chorot P. Cuestionario De afrontamiento del estrés (CAE): Desarrollo Y validación preliminar. Revista De Psicopatología Y Psicología Clínica. 2003;8(1). https://doi.org/10.5944/rppc.8.num.1.2003.3941 .

Lazarus RS, Folkman S. Stress, Appraisal, and coping. New York, NY: Springer; 1984.

Garzón-Umerenkova A, de la Fuente J, Amate J, Paoloni PV, Fadda S, Pérez JF. A Linear empirical model of Self-Regulation on Flourishing, Health, Procrastination, and achievement, among University students. Front Psychol. 2018;9:536. https://doi.org/10.3389/fpsyg.2018.00536 . PMID: 29706922; PMCID: PMC5909179.

Ato M, Ato, López J, Benavente A. Un Sistema De clasificación De Los diseños de investigación en psicología (a classification system for research designs in psychology). Anales De Psicología. 2013;29(3):1038–59. https://doi.org/10.6018/analesps.29.3.178511 .

Schermelleh-Engel K, Moosbrugger H, Müller H. Evaluating the fit of structural equation models: tests of significance and descriptive goodness-of-fit measures. Methods Psychol Res Online. 2003;8(2):23–74. https://www.stats.ox.ac.uk/~snijders/mpr_Schermelleh.pdf .

Tabachnick BG, Fidell LS. Using multivariate statistics. 4th ed. Allyn and Bacon; 2001.

Bajcar B, Babiak J. Neuroticism and cyberchondria: the mediating role of intolerance of uncertainty and defensive pessimism. Pers Indiv Differ. 2020;162:110006. https://doi.org/10.1016/j.paid.2020.110006 .

McCrae RR, Lckenhoff CE. Self-regulation and the five-factor model of personality traits. In: Hoyle RH, editor. Handbook of personality and self-regulation. Wiley-Blackwell; 2010. pp. 145–68. https://doi.org/10.1002/9781444318111.ch7 .

Staller N, Großmann N, Eckes A, Wilde M, Müller FH, Randler C. Academic Self-Regulation, Chronotype and personality in University Students during the remote learning phase due to COVID-19. Front Educ. 2021;6:681840. https://doi.org/10.3389/feduc.2021.681840 .

Ahmed W, van der Werf G, Kuyper H, Minnaert A. Emotions, self-regulated learning, and achievement in mathematics: a growth curve analysis. J Educ Psychol. 2013;105(1):150–61. https://doi.org/10.1037/a0030160 .

de la Fuente J, Amate J, González-Torres MC, Artuch R, García-Torrecillas JM, Fadda S. Effects of levels of Self-Regulation and Regulatory teaching on strategies for coping with academic stress in undergraduate students. Front Psychol. 2020;11:22.

Harley JM, Carter CC, Papaionnou N, Bouchet F, Landis RS, Azevedo R, Karabachian L. Examining the predictive relationship between personality and emotion traits and Learners’ Agent-Direct emotions. In: Conati C, Heffernan N, Mitrovic A, Verdejo MF, editors. Artificial Intelligence in Education. Volume 9112. Springer International Publishing; 2015. pp. 145–54. https://doi.org/10.1007/978-3-319-19773-9_15 .

Villavicencio FT, Bernardo ABI. Positive academic emotions moderate the relationship between self-regulation and academic achievement: positive emotions, self-regulation, and achievement . Br J Educ Psychol. 2013;83(2):329–40. https://doi.org/10.1111/j.2044-8279.2012.02064.x .

Madigan DJ, Curran T. (2020). Does Burnout Affect Academic Achievement? A Meta-Analysis of over 100,000 Students. Educ Psychol Rev (2020). https://doi.org/10.1007/s10648-020-09533-1 .

Wang Y, Xiao H, Zhang X, Wang L. The role of active coping in the Relationship between Learning Burnout and Sleep Quality among College students in China. Front Psychol. 2020;11:647. https://doi.org/10.3389/fpsyg.2020.00647 .

Bedewy D, Gabriel A. Examining perceptions of academic stress and it sources among university students: the perception of academic stress scale. Health Psychol Open. 2015;1–9. https://doi.org/10.1177/2055102915596714 .

Grant L, Kinman G. Enhancing well-being in Social Work students: Building Resilience for the Next Generation. Social Work Educ. 2012;31(5):605–21. https://doi.org/10.1080/02615479.2011.590931 .

Merhi R, Sánchez-Elvira A, Palací FJ. The role of psychological strengths, coping strategies and well-being in the Prediction of Academic Engagement and Burnout in First-Year University students. Revista De Acción Psicológica. 2018;11(2):1–15. https://doi.org/10.5944/ap.15.2.21831 .

Silverman M, Wilson S, Ramsay I, Hunt R, Thomas K, Krueger R, Iacono W. Trait neuroticism and emotion neurocircuitry: functional magnetic resonance imaging evidence for a failure in emotion regulation. Dev Psychopathol. 2019;31(3):1085–99. https://doi.org/10.1017/S0954579419000610 .

Martínez I, Salanova M. (2003). Niveles de Burnout y Engagement en estudiantes universitarios. Relación con el desempeño y desarrollo profesional (Levels of Burnout and Engagement in university students. Relationship with professional development and performance). Revista de Educación , 330, 361–384. https://www.educacionyfp.gob.es/dam/jcr:185fe08c-621e-4376-b0e7-22d89b85e786/re3301911213-pdf.pdf .

Harward DW. Well-being and higher education: a strategy for change and the realization of education’s greater purposes. Bringing Theory to Practice; 2016.

Download references

Acknowledgements

Not applicable.

Funding

This research was funded by the R&D Project PID2022-136466NB-I00 and the R&D Project PGC2018-094672-B-I00, University of Navarra (Ministry of Science and Education, Spain), the R&D Project UAL18-SEJ-DO31-A-FEDER (University of Almería, Spain), and the European Social Fund.

Author information

Authors and affiliations.

University of Navarra, University Campus, Pamplona, 31009, Spain

Jesús de la Fuente, Begoña Urien, Mónica Pachón-Basallo & Elkin O Luis

Faculty of Psychology, University of Almería, Almería, 04001, Spain

Jesús de la Fuente

Teesside University, Middlesbrough, TS1 3BX, UK

Paul Sander

Fundación Universitaria Konrad Lorenz, Bogotá, Colombia

Angélica Garzón Umerenkova


Contributions

Conceptualization, J.d.l.F and ELG; formal analysis and methodology, J.d.l.F and ELG.; project administration, J.d.l.F.; writing—original draft, J.d.l.F, PS, AG, BU, MP, and ELG; writing—review & editing, J.d.l.F, PS, AG, BU, MP and ELG. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Elkin O Luis .

Ethics declarations

Ethics approval and consent to participate.

All procedures in the research process were conducted in accordance with the current guidelines and regulations in 2023. The procedure was approved by the Ethics Committee of the University of Navarra (ref. 2018.170) within the broader context of an R&D Project (2018–2021). Additionally, it is confirmed that informed consent was obtained from all study participants.

Consent for publication

Competing interests.

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Fuente, J.d.l., Sander, P., Garzón Umerenkova, A. et al. The big five factors as differential predictors of self-regulation, achievement emotions, coping and health behavior in undergraduate students. BMC Psychol 12 , 267 (2024). https://doi.org/10.1186/s40359-024-01768-9


Received : 08 December 2023

Accepted : 06 May 2024

Published : 13 May 2024

DOI : https://doi.org/10.1186/s40359-024-01768-9


Keywords

  • The Big Five factors
  • Self-regulation
  • Achievement emotions and health behavior


Frank T. McAndrew Ph.D.

How to Get Started on Your First Psychology Experiment

Acquiring even a little expertise in advance makes science research easier.

Updated May 16, 2024 | Reviewed by Ray Parker

  • Students often struggle at the beginning of research projects—knowing how to begin.
  • Research projects can sometimes be inspired by everyday life or personal concerns.
  • Becoming something of an "expert" on a topic in advance makes designing a study go more smoothly.


One of the most rewarding and frustrating parts of my long career as a psychology professor at a small liberal arts college has been guiding students through the senior capstone research experience required near the end of their college years. Each psychology major must conduct an independent experiment in which they collect data to test a hypothesis, analyze the data, write a research paper, and present their results at a college poster session or at a professional conference.

The rewarding part of the process is clear: The students' pride at seeing their poster on display and maybe even getting their name on an article in a professional journal allows us professors to get a glimpse of students being happy and excited—for a change. I also derive great satisfaction from watching a student discover that he or she has an aptitude for research and perhaps start shifting their career plans accordingly.

The frustrating part comes at the beginning of the research process, when students are attempting to find a topic to work on. There is a lot of floundering around as students get stuck doing something that seems to make sense: they begin by trying to “think up a study.”

The problem is that even if the student's research interest is driven by some very personal topic that is deeply relevant to their own life, they simply do not yet know enough to know where to begin. They do not know what has already been done by others, nor do they know how researchers typically attack that topic.

Students also tend to think in terms of mission statements (I want to cure eating disorders) rather than in terms of research questions (Why are people of some ages or genders more susceptible to eating disorders than others?).

Needless to say, attempting to solve a serious, long-standing societal problem in a few weeks while conducting one’s first psychology experiment can be a showstopper.

Even a Little Bit of Expertise Can Go a Long Way

My usual approach to helping students get past this floundering stage is to tell them to try to avoid thinking up a study altogether. Instead, I tell them to conceive of their mission as becoming an “expert” on some topic that they find interesting. They begin by reading journal articles, writing summaries of these articles, and talking to me about them. As the student learns more about the topic, our conversations become more sophisticated and interesting. Researchable questions begin to emerge, and soon, the student is ready to start writing a literature review that will sharpen the focus of their research question.

In short, even a little bit of expertise on a subject makes it infinitely easier to craft an experiment on that topic because the research done by others provides a framework into which the student can fit his or her own work.

This was a lesson I learned early in my career when I was working on my own undergraduate capstone experience. Faced with the necessity of coming up with a research topic and lacking any urgent personal issues that I was trying to resolve, I fell back on what little psychological expertise I had already accumulated.

In a previous psychology course, I had written a literature review on why some information fails to move from short-term memory into long-term memory. The journal articles that I had read for this paper relied primarily on laboratory studies with mice, and the debate that was going on between researchers who had produced different results in their labs revolved around subtle differences in the way that mice were released into the experimental apparatus in the studies.

Because I already had done some homework on this, I had a ready-made research question available: What if the experimental task was set up so that the researcher had no influence on how the mouse entered the apparatus at all? I was able to design a simple animal memory experiment that fit very nicely into the psychological literature that was already out there, and this prevented a lot of angst.

Please note that my undergraduate research project was guided by the “expertise” that I had already acquired rather than by a burning desire to solve some sort of personal or social problem. I guarantee that I had not been walking around as an undergraduate student worrying about why mice forget things, but I was nonetheless able to complete a fun and interesting study.


My first experiment may not have changed the world, but it successfully launched my research career, and I fondly remember it as I work with my students 50 years later.


Frank McAndrew, Ph.D., is the Cornelia H. Dudley Professor of Psychology at Knox College.


Stanford University


Asking for help is hard, but others want to help more than we often give them credit for, says Stanford social psychologist Xuan Zhao .


Xuan Zhao (Image credit: Anne Ryan)

We shy away from asking for help because we don’t want to bother other people, assuming that our request will feel like an inconvenience to them. But oftentimes, the opposite is true: People want to make a difference in people’s lives and they feel good – happy even – when they are able to help others, said Zhao.

Here, Zhao discusses the research about how asking for help can lead to meaningful experiences and strengthen relationships with others – friends as well as strangers.

Zhao is a research scientist at Stanford SPARQ , a research center in the Psychology Department that brings researchers and practitioners together to fight bias, reduce disparities, and drive culture change. Zhao’s research focuses on helping people create better social interactions in person and online where they feel seen, heard, connected, and appreciated. Her research, recently published in Psychological Science ,  suggests that people regularly underestimate others’ willingness to help.

This fall, Zhao will be co-teaching a two-session workshop, Science-Based Practices for a Flourishing Life, through Stanford’s well-being program for employees, BeWell.

Why is asking for help hard? For someone who finds it difficult to ask for help, what would you like them to know?

There are several common reasons why people struggle to ask for help. Some people may fear that asking for help would make them appear incompetent, weak, or inferior – recent research from Stanford doctoral student Kayla Good finds that children as young as seven can hold this belief. Some people are concerned about being rejected, which can be embarrassing and painful. Others may be concerned about burdening and inconveniencing others – a topic I recently explored.  These concerns may feel more relevant in some contexts than others, but they are all very relatable and very human.

The good news is those concerns are oftentimes exaggerated and mistaken.

What do people misunderstand about asking for help?

When people are in need of help, they are often caught up in their own concerns and worries and do not fully recognize the prosocial motivations of those around them who are ready to help. This can introduce a persistent difference between how help-seekers and potential helpers consider the same helping event. To test this idea, we conducted several experiments where people either directly interacted with each other to seek and offer help, or imagined or recalled such experiences in everyday life. We consistently observed that help-seekers underestimated how willing strangers – and even friends – would be to help them and how positive helpers would feel afterward, and overestimated how inconvenienced helpers would feel.

These patterns are consistent with work by Stanford psychologist Dale Miller showing that when thinking about what motivates other people, we tend to apply a more pessimistic, self-interested view about human nature. After all, Western societies tend to value independence, so asking others to go out of their way to do something for us may seem wrong or selfish and may impose a somewhat negative experience on the helper.

The truth is, most of us are deeply prosocial and want to make a positive difference in others’ lives. Work by Stanford psychologist Jamil Zaki has shown that empathizing with and helping others in need seems to be an intuitive response, and dozens of studies, including my own, have found that people often feel happier after performing acts of kindness. These findings extend earlier research by Stanford Professor Frank Flynn and colleagues suggesting that people tend to overestimate how likely their direct requests for help are to be rejected. Finally, other research has shown that seeking advice can even boost how competent the help-seeker appears to the advice-giver.

Why is asking for help particularly important? 

We love stories about spontaneous help, and that may explain why random acts of kindness go viral on social media. But in reality , the majority of help occurs only after a request has been made. It’s often not because people don’t want to help and must be pressed to do so. Quite the opposite, people want to help, but they can’t help if they don’t know someone is suffering or struggling, or what the other person needs and how to help effectively, or whether it is their place to help – perhaps they want to respect others’ privacy or agency. A direct request can remove those uncertainties, such that asking for help enables kindness and unlocks opportunities for positive social connections. It can also create emotional closeness when you realize someone trusts you enough to share their vulnerabilities, and by working together toward a shared goal.

It feels like some requests for help may be harder to make than others. What does research say about different types of help, and how can we use those insights to figure out how we should ask for help?

Many factors can influence how difficult it may feel to ask for help. Our recent research has primarily focused on everyday scenarios where the other person is clearly able to help, and all you need is to show up and ask. In some other cases, the kind of help you need may require more specific skills or resources. As long as you make your request Specific, Meaningful, Action-oriented, Realistic, and Time-bound (also known as the SMART criteria ), people will likely be happy to help and feel good after helping.

Of course, not all requests have to be specific. When we face mental health challenges, we may have difficulty articulating what kind of help we need. It is okay to reach out to mental health resources and take the time to figure things out together. They are there to help, and they are happy to help.

You mentioned how cultural norms can get in the way of people asking for help. What is one thing we can all do to rethink the role society plays in our lives?

Work on independent and interdependent cultures by Hazel Markus , faculty director of Stanford SPARQ , can shed much light on this issue. Following her insights, I think we can all benefit from having a little bit more interdependency in our micro- and macro-environments. For instance, instead of promoting “self-care” and implying that it is people’s own responsibility to sort through their own struggles, perhaps our culture could emphasize the value of caring for each other and create more safe spaces to allow open discussions about our challenges and imperfections.

What inspired your research?

I have always been fascinated by social interaction – how we understand and misunderstand each other’s minds, and how social psychology can help people create more positive and meaningful connections. That’s why I have studied topics such as giving compliments , discussing disagreement , sharing personal failures, creating inclusive conversations on social media , and translating social and positive psychology research as daily practices for the public . This project is also motivated by that general passion.

But a more immediate trigger for this project was reading scholarly work suggesting that people underestimate their likelihood of getting help because they don’t recognize how uncomfortable and awkward it would be for someone to say “no” to their request. I agree that people underestimate their chance of getting help upon a direct ask, but based on my personal experience, I saw a different reason – when people ask me for help, I often feel genuinely motivated to help them, more than social pressure or a wish to avoid saying no. This project voices my different interpretation of why people agree to help. And given that I’ve seen people struggle for too long because they waited until it was too late to ask for help, I hope my findings can offer them a bit more comfort the next time they could really use a helping hand and are debating whether to ask.

11.2 Writing a Research Report in American Psychological Association (APA) Style

Learning objectives.

  • Identify the major sections of an APA-style research report and the basic contents of each section.
  • Plan and write an effective APA-style research report.

In this section, we look at how to write an APA-style empirical research report , an article that presents the results of one or more new studies. Recall that the standard sections of an empirical research report provide a kind of outline. Here we consider each of these sections in detail, including what information it contains, how that information is formatted and organized, and tips for writing each section. At the end of this section is a sample APA-style research report that illustrates many of these principles.

Sections of a Research Report

Title page and abstract.

An APA-style research report begins with a  title page . The title is centered in the upper half of the page, with each important word capitalized. The title should clearly and concisely (in about 12 words or fewer) communicate the primary variables and research questions. This sometimes requires a main title followed by a subtitle that elaborates on the main title, in which case the main title and subtitle are separated by a colon. Here are some titles from recent issues of professional journals published by the American Psychological Association.

  • Sex Differences in Coping Styles and Implications for Depressed Mood
  • Effects of Aging and Divided Attention on Memory for Items and Their Contexts
  • Computer-Assisted Cognitive Behavioral Therapy for Child Anxiety: Results of a Randomized Clinical Trial
  • Virtual Driving and Risk Taking: Do Racing Games Increase Risk-Taking Cognitions, Affect, and Behavior?

Below the title are the authors’ names and, on the next line, their institutional affiliation—the university or other institution where the authors worked when they conducted the research. As we have already seen, the authors are listed in an order that reflects their contribution to the research. When multiple authors have made equal contributions to the research, they often list their names alphabetically or in a randomly determined order.

It’s  Soooo  Cute!  How Informal Should an Article Title Be?

In some areas of psychology, the titles of many empirical research reports are informal in a way that is perhaps best described as “cute.” They usually take the form of a play on words or a well-known expression that relates to the topic under study. Here are some examples from recent issues of the journal Psychological Science.

  • “Smells Like Clean Spirit: Nonconscious Effects of Scent on Cognition and Behavior”
  • “Time Crawls: The Temporal Resolution of Infants’ Visual Attention”
  • “Scent of a Woman: Men’s Testosterone Responses to Olfactory Ovulation Cues”
  • “Apocalypse Soon?: Dire Messages Reduce Belief in Global Warming by Contradicting Just-World Beliefs”
  • “Serial vs. Parallel Processing: Sometimes They Look Like Tweedledum and Tweedledee but They Can (and Should) Be Distinguished”
  • “How Do I Love Thee? Let Me Count the Words: The Social Effects of Expressive Writing”

Individual researchers differ quite a bit in their preference for such titles. Some use them regularly, while others never use them. What might be some of the pros and cons of using cute article titles?

For articles that are being submitted for publication, the title page also includes an author note that lists the authors’ full institutional affiliations, any acknowledgments the authors wish to make to agencies that funded the research or to colleagues who commented on it, and contact information for the authors. For student papers that are not being submitted for publication—including theses—author notes are generally not necessary.

The  abstract  is a summary of the study. It is the second page of the manuscript and is headed with the word  Abstract . The first line is not indented. The abstract presents the research question, a summary of the method, the basic results, and the most important conclusions. Because the abstract is usually limited to about 200 words, it can be a challenge to write a good one.

Introduction

The  introduction  begins on the third page of the manuscript. The heading at the top of this page is the full title of the manuscript, with each important word capitalized as on the title page. The introduction includes three distinct subsections, although these are typically not identified by separate headings. The opening introduces the research question and explains why it is interesting, the literature review discusses relevant previous research, and the closing restates the research question and comments on the method used to answer it.

The Opening

The  opening , which is usually a paragraph or two in length, introduces the research question and explains why it is interesting. To capture the reader’s attention, researcher Daryl Bem recommends starting with general observations about the topic under study, expressed in ordinary language (not technical jargon)—observations that are about people and their behavior (not about researchers or their research; Bem, 2003 [1] ). Concrete examples are often very useful here. According to Bem, this would be a poor way to begin a research report:

Festinger’s theory of cognitive dissonance received a great deal of attention during the latter part of the 20th century (p. 191)

The following would be much better:

The individual who holds two beliefs that are inconsistent with one another may feel uncomfortable. For example, the person who knows that he or she enjoys smoking but believes it to be unhealthy may experience discomfort arising from the inconsistency or disharmony between these two thoughts or cognitions. This feeling of discomfort was called cognitive dissonance by social psychologist Leon Festinger (1957), who suggested that individuals will be motivated to remove this dissonance in whatever way they can (p. 191).

After capturing the reader’s attention, the opening should go on to introduce the research question and explain why it is interesting. Will the answer fill a gap in the literature? Will it provide a test of an important theory? Does it have practical implications? Giving readers a clear sense of what the research is about and why they should care about it will motivate them to continue reading the literature review—and will help them make sense of it.

Breaking the Rules

Researcher Larry Jacoby reported several studies showing that a word that people see or hear repeatedly can seem more familiar even when they do not recall the repetitions—and that this tendency is especially pronounced among older adults. He opened his article with the following humorous anecdote:

A friend whose mother is suffering symptoms of Alzheimer’s disease (AD) tells the story of taking her mother to visit a nursing home, preliminary to her mother’s moving there. During an orientation meeting at the nursing home, the rules and regulations were explained, one of which regarded the dining room. The dining room was described as similar to a fine restaurant except that tipping was not required. The absence of tipping was a central theme in the orientation lecture, mentioned frequently to emphasize the quality of care along with the advantages of having paid in advance. At the end of the meeting, the friend’s mother was asked whether she had any questions. She replied that she only had one question: “Should I tip?” (Jacoby, 1999, p. 3)

Although both humor and personal anecdotes are generally discouraged in APA-style writing, this example is a highly effective way to start because it both engages the reader and provides an excellent real-world example of the topic under study.

The Literature Review

Immediately after the opening comes the  literature review , which describes relevant previous research on the topic and can be anywhere from several paragraphs to several pages in length. However, the literature review is not simply a list of past studies. Instead, it constitutes a kind of argument for why the research question is worth addressing. By the end of the literature review, readers should be convinced that the research question makes sense and that the present study is a logical next step in the ongoing research process.

Like any effective argument, the literature review must have some kind of structure. For example, it might begin by describing a phenomenon in a general way along with several studies that demonstrate it, then describing two or more competing theories of the phenomenon, and finally presenting a hypothesis to test one or more of the theories. Or it might describe one phenomenon, then describe another phenomenon that seems inconsistent with the first one, then propose a theory that resolves the inconsistency, and finally present a hypothesis to test that theory. In applied research, it might describe a phenomenon or theory, then describe how that phenomenon or theory applies to some important real-world situation, and finally suggest a way to test whether it does, in fact, apply to that situation.

Looking at the literature review in this way emphasizes a few things. First, it is extremely important to start with an outline of the main points that you want to make, organized in the order that you want to make them. The basic structure of your argument, then, should be apparent from the outline itself. Second, it is important to emphasize the structure of your argument in your writing. One way to do this is to begin the literature review by summarizing your argument even before you begin to make it. “In this article, I will describe two apparently contradictory phenomena, present a new theory that has the potential to resolve the apparent contradiction, and finally present a novel hypothesis to test the theory.” Another way is to open each paragraph with a sentence that summarizes the main point of the paragraph and links it to the preceding points. These opening sentences provide the “transitions” that many beginning researchers have difficulty with. Instead of beginning a paragraph by launching into a description of a previous study, such as “Williams (2004) found that…,” it is better to start by indicating something about why you are describing this particular study. Here are some simple examples:

Another example of this phenomenon comes from the work of Williams (2004).

Williams (2004) offers one explanation of this phenomenon.

An alternative perspective has been provided by Williams (2004).

We used a method based on the one used by Williams (2004).

Finally, remember that your goal is to construct an argument for why your research question is interesting and worth addressing—not necessarily why your favorite answer to it is correct. In other words, your literature review must be balanced. If you want to emphasize the generality of a phenomenon, then of course you should discuss various studies that have demonstrated it. However, if there are other studies that have failed to demonstrate it, you should discuss them too. Or if you are proposing a new theory, then of course you should discuss findings that are consistent with that theory. However, if there are other findings that are inconsistent with it, again, you should discuss them too. It is acceptable to argue that the  balance  of the research supports the existence of a phenomenon or is consistent with a theory (and that is usually the best that researchers in psychology can hope for), but it is not acceptable to  ignore contradictory evidence. Besides, a large part of what makes a research question interesting is uncertainty about its answer.

The Closing

The  closing  of the introduction—typically the final paragraph or two—usually includes two important elements. The first is a clear statement of the main research question and hypothesis. This statement tends to be more formal and precise than in the opening and is often expressed in terms of operational definitions of the key variables. The second is a brief overview of the method and some comment on its appropriateness. Here, for example, is how Darley and Latané (1968) [2] concluded the introduction to their classic article on the bystander effect:

These considerations lead to the hypothesis that the more bystanders to an emergency, the less likely, or the more slowly, any one bystander will intervene to provide aid. To test this proposition it would be necessary to create a situation in which a realistic “emergency” could plausibly occur. Each subject should also be blocked from communicating with others to prevent his getting information about their behavior during the emergency. Finally, the experimental situation should allow for the assessment of the speed and frequency of the subjects’ reaction to the emergency. The experiment reported below attempted to fulfill these conditions. (p. 378)

Thus the introduction leads smoothly into the next major section of the article—the method section.

The  method section  is where you describe how you conducted your study. An important principle for writing a method section is that it should be clear and detailed enough that other researchers could replicate the study by following your “recipe.” This means that it must describe all the important elements of the study—basic demographic characteristics of the participants, how they were recruited, whether they were randomly assigned to conditions, how the variables were manipulated or measured, how counterbalancing was accomplished, and so on. At the same time, it should avoid irrelevant details such as the fact that the study was conducted in Classroom 37B of the Industrial Technology Building or that the questionnaire was double-sided and completed using pencils.

The method section begins immediately after the introduction ends with the heading “Method” (not “Methods”) centered on the page. Immediately after this is the subheading “Participants,” left justified and in italics. The participants subsection indicates how many participants there were, the number of women and men, some indication of their age, other demographics that may be relevant to the study, and how they were recruited, including any incentives given for participation.

Figure 11.1 Three Ways of Organizing an APA-Style Method

After the participants section, the structure can vary a bit. Figure 11.1 shows three common approaches. In the first, the participants section is followed by a design and procedure subsection, which describes the rest of the method. This works well for methods that are relatively simple and can be described adequately in a few paragraphs. In the second approach, the participants section is followed by separate design and procedure subsections. This works well when both the design and the procedure are relatively complicated and each requires multiple paragraphs.

What is the difference between design and procedure? The design of a study is its overall structure. What were the independent and dependent variables? Was the independent variable manipulated, and if so, was it manipulated between or within subjects? How were the variables operationally defined? The procedure is how the study was carried out. It often works well to describe the procedure in terms of what the participants did rather than what the researchers did. For example, the participants gave their informed consent, read a set of instructions, completed a block of four practice trials, completed a block of 20 test trials, completed two questionnaires, and were debriefed and excused.

In the third basic way to organize a method section, the participants subsection is followed by a materials subsection before the design and procedure subsections. This works well when there are complicated materials to describe. This might mean multiple questionnaires, written vignettes that participants read and respond to, perceptual stimuli, and so on. The heading of this subsection can be modified to reflect its content. Instead of “Materials,” it can be “Questionnaires,” “Stimuli,” and so on. The materials subsection is also a good place to refer to the reliability and/or validity of the measures. This is where you would present test-retest correlations, Cronbach’s α, or other statistics to show that the measures are consistent across time and across items and that they accurately measure what they are intended to measure.
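
If you do report a statistic such as Cronbach’s α in the materials subsection, it can be computed directly from the raw item scores. The following is a minimal illustrative sketch in Python with NumPy; the questionnaire data are invented (rows are participants, columns are items), and it is meant only to show the calculation, not to prescribe any particular software.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a participants-by-items array of scores."""
    scores = np.asarray(item_scores, dtype=float)
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item across participants
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of participants' total scores
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Invented data: 5 participants answering a 4-item questionnaire on a 1-5 scale
responses = np.array([
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
    [4, 4, 5, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```

By convention, values above roughly .70 to .80 are taken to indicate acceptable internal consistency, although the appropriate threshold depends on the purpose of the measure.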

The  results section  is where you present the main results of the study, including the results of the statistical analyses. Although it does not include the raw data—individual participants’ responses or scores—researchers should save their raw data and make them available to other researchers who request them. Several journals now encourage the open sharing of raw data online.

Although there are no standard subsections, it is still important for the results section to be logically organized. Typically it begins with certain preliminary issues. One is whether any participants or responses were excluded from the analyses and why. The rationale for excluding data should be described clearly so that other researchers can decide whether it is appropriate. A second preliminary issue is how multiple responses were combined to produce the primary variables in the analyses. For example, if participants rated the attractiveness of 20 stimulus people, you might have to explain that you began by computing the mean attractiveness rating for each participant. Or if they recalled as many items as they could from a study list of 20 words, did you count the number correctly recalled, compute the percentage correctly recalled, or perhaps compute the number correct minus the number incorrect? A final preliminary issue is whether the manipulation was successful. This is where you would report the results of any manipulation checks.
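
As a concrete illustration of this kind of preliminary aggregation, the sketch below uses Python with pandas on invented data (the column names and values are hypothetical): it computes each participant’s mean attractiveness rating and the percentage of words correctly recalled from a 20-word list.

```python
import pandas as pd

# Invented long-format ratings: one row per participant per stimulus person
ratings = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2],
    "stimulus": ["A", "B", "C", "A", "B", "C"],
    "attractiveness": [6, 4, 5, 3, 2, 4],
})
# Primary variable 1: each participant's mean attractiveness rating
mean_rating = ratings.groupby("participant")["attractiveness"].mean()

# Invented recall data: words correctly recalled from a 20-word study list
recall = pd.DataFrame({"participant": [1, 2], "n_correct": [14, 9]})
# Primary variable 2: percentage of words correctly recalled
recall["pct_correct"] = recall["n_correct"] / 20 * 100

print(mean_rating)
print(recall)
```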

The results section should then tackle the primary research questions, one at a time. Again, there should be a clear organization. One approach would be to answer the most general questions and then proceed to answer more specific ones. Another would be to answer the main question first and then to answer secondary ones. Regardless, Bem (2003) [3] suggests the following basic structure for discussing each new result:

  • Remind the reader of the research question.
  • Give the answer to the research question in words.
  • Present the relevant statistics.
  • Qualify the answer if necessary.
  • Summarize the result.

Notice that only Step 3 necessarily involves numbers. The rest of the steps involve presenting the research question and the answer to it in words. In fact, the basic results should be clear even to a reader who skips over the numbers.
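
To make Step 3 concrete, here is a minimal sketch in Python using SciPy’s independent-samples t test on invented scores, with the numbers folded into a plain-language sentence. The data and variable names are hypothetical, and the wording is just one possible phrasing, not a required APA template.

```python
from scipy import stats

# Invented scores for two conditions of a between-subjects design
control = [12, 15, 11, 14, 13, 16, 12, 15]
treatment = [17, 19, 16, 18, 20, 17, 19, 18]

result = stats.ttest_ind(treatment, control)  # independent-samples t test (equal variances assumed)
df = len(treatment) + len(control) - 2        # degrees of freedom for this test

mean_t = sum(treatment) / len(treatment)
mean_c = sum(control) / len(control)
print(
    f"Participants in the treatment condition scored higher (M = {mean_t:.2f}) "
    f"than those in the control condition (M = {mean_c:.2f}), "
    f"t({df}) = {result.statistic:.2f}, p = {result.pvalue:.3f}."
)
```

The exact rounding and wording will vary, but the point is that the statistics support a sentence that already answers the research question in words.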

The  discussion  is the last major section of the research report. Discussions usually consist of some combination of the following elements:

  • Summary of the research
  • Theoretical implications
  • Practical implications
  • Limitations
  • Suggestions for future research

The discussion typically begins with a summary of the study that provides a clear answer to the research question. In a short report with a single study, this might require no more than a sentence. In a longer report with multiple studies, it might require a paragraph or even two. The summary is often followed by a discussion of the theoretical implications of the research. Do the results provide support for any existing theories? If not, how  can  they be explained? Although you do not have to provide a definitive explanation or detailed theory for your results, you at least need to outline one or more possible explanations. In applied research—and often in basic research—there is also some discussion of the practical implications of the research. How can the results be used, and by whom, to accomplish some real-world goal?

The theoretical and practical implications are often followed by a discussion of the study’s limitations. Perhaps there are problems with its internal or external validity. Perhaps the manipulation was not very effective or the measures not very reliable. Perhaps there is some evidence that participants did not fully understand their task or that they were suspicious of the intent of the researchers. Now is the time to discuss these issues and how they might have affected the results. But do not overdo it. All studies have limitations, and most readers will understand that a different sample or different measures might have produced different results. Unless there is good reason to think they  would have, however, there is no reason to mention these routine issues. Instead, pick two or three limitations that seem like they could have influenced the results, explain how they could have influenced the results, and suggest ways to deal with them.

Most discussions end with some suggestions for future research. If the study did not satisfactorily answer the original research question, what will it take to do so? What  new  research questions has the study raised? This part of the discussion, however, is not just a list of new questions. It is a discussion of two or three of the most important unresolved issues. This means identifying and clarifying each question, suggesting some alternative answers, and even suggesting ways they could be studied.

Finally, some researchers are quite good at ending their articles with a sweeping or thought-provoking conclusion. Darley and Latané (1968) [4] , for example, ended their article on the bystander effect by discussing the idea that whether people help others may depend more on the situation than on their personalities. Their final sentence is, “If people understand the situational forces that can make them hesitate to intervene, they may better overcome them” (p. 383). However, this kind of ending can be difficult to pull off. It can sound overreaching or just banal and end up detracting from the overall impact of the article. It is often better simply to end by returning to the problem or issue introduced in your opening paragraph and clearly stating how your research has addressed that issue or problem.

The references section begins on a new page with the heading “References” centered at the top of the page. All references cited in the text are then listed in the format presented earlier. They are listed alphabetically by the last name of the first author. If two sources have the same first author, they are listed alphabetically by the last name of the second author. If all the authors are the same, then they are listed chronologically by the year of publication. Everything in the reference list is double-spaced both within and between references.
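
Because these ordering rules are mechanical, they can be expressed as a simple sort key. The sketch below, in Python, sorts a few invented entries by first-author surname, then by subsequent authors’ surnames, then by year; real reference managers handle many more edge cases, so treat this only as an illustration of the rule.

```python
# Invented reference entries: author surnames in order, plus publication year
references = [
    {"authors": ["Smith", "Jones"], "year": 2010, "title": "Second study"},
    {"authors": ["Adams"], "year": 2018, "title": "Solo paper"},
    {"authors": ["Smith", "Brown"], "year": 2012, "title": "Follow-up"},
    {"authors": ["Smith", "Brown"], "year": 2005, "title": "First study"},
]

def apa_order(entry):
    # Sort alphabetically by first author, then by subsequent authors, then by year
    return (tuple(entry["authors"]), entry["year"])

for entry in sorted(references, key=apa_order):
    print(", ".join(entry["authors"]), f"({entry['year']})", entry["title"])
```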

Appendices, Tables, and Figures

Appendices, tables, and figures come after the references. An  appendix  is appropriate for supplemental material that would interrupt the flow of the research report if it were presented within any of the major sections. An appendix could be used to present lists of stimulus words, questionnaire items, detailed descriptions of special equipment or unusual statistical analyses, or references to the studies that are included in a meta-analysis. Each appendix begins on a new page. If there is only one, the heading is “Appendix,” centered at the top of the page. If there is more than one, the headings are “Appendix A,” “Appendix B,” and so on, and they appear in the order they were first mentioned in the text of the report.

After any appendices come tables and then figures. Tables and figures are both used to present results. Figures can also be used to display graphs, illustrate theories (e.g., in the form of a flowchart), display stimuli, outline procedures, and present many other kinds of information. Each table and figure appears on its own page. Tables are numbered in the order that they are first mentioned in the text (“Table 1,” “Table 2,” and so on). Figures are numbered the same way (“Figure 1,” “Figure 2,” and so on). A brief explanatory title, with the important words capitalized, appears above each table. Each figure is given a brief explanatory caption, where (aside from proper nouns or names) only the first word of each sentence is capitalized. More details on preparing APA-style tables and figures are presented later in the book.

Sample APA-Style Research Report

Figures 11.2, 11.3, 11.4, and 11.5 show some sample pages from an APA-style empirical research report originally written by undergraduate student Tomoe Suyama at California State University, Fresno. The main purpose of these figures is to illustrate the basic organization and formatting of an APA-style empirical research report, although many high-level and low-level style conventions can be seen here too.

Figure 11.2 Title Page and Abstract. This student paper does not include the author note on the title page. The abstract appears on its own page.

Figure 11.3 Introduction and Method. Note that the introduction is headed with the full title, and the method section begins immediately after the introduction ends.

Figure 11.4 Results and Discussion. The discussion begins immediately after the results section ends.

Figure 11.5 References and Figure. If there were appendices or tables, they would come before the figure.

Key Takeaways

  • An APA-style empirical research report consists of several standard sections. The main ones are the abstract, introduction, method, results, discussion, and references.
  • The introduction consists of an opening that presents the research question, a literature review that describes previous research on the topic, and a closing that restates the research question and comments on the method. The literature review constitutes an argument for why the current study is worth doing.
  • The method section describes the method in enough detail that another researcher could replicate the study. At a minimum, it consists of a participants subsection and a design and procedure subsection.
  • The results section describes the results in an organized fashion. Each primary result is presented in terms of statistical results but also explained in words.
  • The discussion typically summarizes the study, discusses theoretical and practical implications and limitations of the study, and offers suggestions for further research.

Exercises

  • Practice: Look through an issue of a general interest professional journal (e.g., Psychological Science). Read the opening of the first five articles and rate the effectiveness of each one from 1 (very ineffective) to 5 (very effective). Write a sentence or two explaining each rating.
  • Practice: Find a recent article in a professional journal and identify where the opening, literature review, and closing of the introduction begin and end.
  • Practice: Find a recent article in a professional journal and highlight in a different color each of the following elements in the discussion: summary, theoretical implications, practical implications, limitations, and suggestions for future research.

Notes

1. Bem, D. J. (2003). Writing the empirical journal article. In J. M. Darley, M. P. Zanna, & H. R. Roediger III (Eds.), The complete academic: A practical guide for the beginning social scientist (2nd ed.). Washington, DC: American Psychological Association.

2. Darley, J. M., & Latané, B. (1968). Bystander intervention in emergencies: Diffusion of responsibility. Journal of Personality and Social Psychology, 4, 377–383.


West Texas A&M University psychology professor wins major fellowship for research

AMARILLO, Texas (KAMR/KCIT)—West Texas A&M University announced that professor Dr. Maxine De Butte is the university’s first Twanna M. Powell Fellow and will receive $50,000 to further her research into the effects of psychiatric medications on developing brains.

According to WT officials, De Butte, a professor of psychology and associate department head in the Department of Psychology, Sociology and Social Work in the Terry B. Rogers College of Education and Social Sciences, was announced as a Powell Fellow during the May 11 commencement ceremonies.

“It is a great honor to be named the first Powell Fellow, and I can’t tell you how grateful I am,” De Butte said. “This award is a personal achievement, and it has inspired me to continue pushing boundaries and exploring new research avenues in clinical neuroscience.”

WT officials stated that as a Powell Fellow, De Butte will receive $50,000 plus additional University resources to use at her discretion to further her research. She will use animal models for the studies.

“We’ll also look into the differences between male and female rodent brains and how antidepressants and antipsychotics may or may not affect molecular pathways,” De Butte said.

Officials also stated the Powell Fellow Program was established via a gift from philanthropists Twanna and Don Powell. Powell, a longtime Amarillo resident and WT alumnus, is the former chair of the Federal Deposit Insurance Corp.; he later led the federal government’s recovery efforts following hurricanes Katrina and Rita and served on The Texas A&M University System Board of Regents from 1995 to 2001. Officials noted Powell currently serves on the Amarillo Independent School District board.

WT officials stated the Powell Fellow program is open to professors across WT’s six Colleges; a selection committee created by the deans of each College nominates a faculty member from within its ranks. The nominees then go before a University-wide committee that includes representatives of the Powell family.

Officials noted that preference is given to faculty members with established records of accomplishments in research and the classroom.

“The Twanna M. Powell Fellows Program supports, recognizes and advances research excellence at West Texas A&M University,” said Dr. Neil Terry, provost and executive vice president for academic affairs. “Dr. DeButte is a distinguished scholar, engaging students while advancing the profession and practice with significant research contributions.”

De Butte joined WT in 2009, teaches biological psychology courses, and is a member of the Society for Neuroscience. Her previous research includes work on the role of hormones such as estrogen in brain injury. Officials noted De Butte earned degrees at Trent University and Carleton University in Ontario, Canada.

“I come from small, regional universities and wanted to pursue a job at a similar institution,” De Butte said. “I like the one-on-one interaction with students.”


  13. Research Paper Structure

    A complete research paper in APA style that is reporting on experimental research will typically contain a Title page, Abstract, Introduction, Methods, Results, Discussion, and References sections. 1 Many will also contain Figures and Tables and some will have an Appendix or Appendices. These sections are detailed as follows (for a more in ...

  14. Psychology

    Read the latest Research articles in Psychology from Scientific Reports. ... This Cognitive Ageing Collection brings together cutting-edge research using a variety of methods and from diverse ...

  15. Library Research in Psychology: Finding it Easily

    These topics include a wide range of issues, from ability tests for employees to research on drugs and the brain, school violence, the impact of AIDS on family members and the ways in which children learn. A variety of resources about psychology are available on the Internet or at any library, including books, journals, newspapers, pamphlets ...

  16. Writing in Psychology Overview

    Experimental reports: Experimental reports detail the results of experimental research projects and are most often written in experimental psychology (lab) courses. Experimental reports are write-ups of your results after you have conducted research with participants. This handout provides a description of how to write an experimental report .

  17. 11.2 Writing a Research Report in American Psychological Association

    In some areas of psychology, the titles of many empirical research reports are informal in a way that is perhaps best described as "cute." They usually take the form of a play on words or a well-known expression that relates to the topic under study. Here are some examples from recent issues of the Journal of Personality and Social Psychology.

  18. Reporting Standards for Research in Psychology

    This article is the report of the JARS Group's findings and recommendations. It was approved by the Publications and Communications Board in the summer of 2007 and again in the spring of 2008 and was transmitted to the task force charged with revising the Publication Manual for consideration as it did its work. The content of the report roughly follows the stages of the group's work.

  19. Free APA Journal Articles

    Recently published articles from subdisciplines of psychology covered by more than 90 APA Journals™ publications. For additional free resources (such as article summaries, podcasts, and more), please visit the Highlights in Psychological Research page. Browse and read free articles from APA Journals across the field of psychology, selected by ...

  20. Research Report (Psychology)

    Psychology research reports give an account of an experiment about human behaviour. The account not only includes the information about the process of the experiment, but it also communicates the relevance, validity and reliability of the research in a well-developed line of argument. A research report demonstrates how the current study relates ...

  21. Expressing Your Results

    In this section, we focus on presenting descriptive statistical results in writing, in graphs, and in tables—following American Psychological Association (APA) guidelines for written research reports. These principles can be adapted easily to other presentation formats such as posters and slide show presentations.

  22. The big five factors as differential predictors of self-regulation

    The aim of this research was to analyze whether the personality factors included in the Big Five model differentially predict the self-regulation and affective states of university students and health. A total of 637 students completed validated self-report questionnaires. Using an ex post facto design, we conducted linear regression and structural prediction analyses.

  23. How to Get Started on Your First Psychology Experiment

    Each psychology major must conduct an independent experiment in which they collect data to test a hypothesis, analyze the data, write a research paper, and present their results at a college ...

  24. 94% of psychologists are concerned about the impact of climate change

    Bolstering the psychology profession . More broadly, APS's research showed that instances of mental ill-health are continuing to increase. "Our research found that among 15-24-year-olds, psychological distress has increased from 18.4% in 2011 to 42.3% in 2021," says Dr Davis-McCabe.

  25. APA resources to help teachers engage students in research

    APA Monitor on Psychology research In Brief. Summaries of the latest peer-reviewed studies within psychology and related fields. ... Psi Beta Research Journal Brief Reports; IMPULSE (Neuroscience) About the author. Sue Orsillo is the senior director of psychology education and training at APA. Her current areas of focus include supporting ...

  26. Subscribe to Stanford Report

    Zhao is a research scientist at Stanford SPARQ, a research center in the Psychology Department that brings researchers and practitioners together to fight bias, reduce disparities, and drive ...

  27. Psychology study participants recruited online may provide ...

    When COVID-19 hit, many behavioral scientists had a way to keep their research running: Move it online. The pandemic boosted an already growing trend of studies conducted via online platforms, among the most popular of which is Amazon's Mechanical Turk (MTurk). The service charges "requesters" a commission to crowdsource tasks—such as completing a survey or solving a puzzle—to remote ...

  28. 11.2 Writing a Research Report in American Psychological Association

    Identify the major sections of an APA-style research report and the basic contents of each section. ... In some areas of psychology, the titles of many empirical research reports are informal in a way that is perhaps best described as "cute." They usually take the form of a play on words or a well-known expression that relates to the topic ...

  29. World Online Ranking of Best Psychology Scientists

    On May 9, 2024, Research.com published the 3rd edition of its annual ranking of the finest scientists in psychology. This study, which includes a list of top scholars, is intended to give the academic community additional visibility and exposure to the influential contributions made by those at the forefront of psychology research.

  30. West Texas A&M University psychology professor wins major ...

    A West Texas A&M University professor, Dr. Maxine De Butte, is the university's first Twanna M. Powell Fellow and will receive $50,000 to further her research into the effects of psychiatric ...